NASA Astrophysics Data System (ADS)
Hu, Jinyan; Li, Li; Yang, Yunfeng
2017-06-01
A hierarchical, successive-approximation registration method for non-rigid medical images based on thin-plate splines is proposed. The method has two major novelties. First, hierarchical registration based on the wavelet transform is used: the approximation image of the wavelet transform is selected as the registration target. Second, successive approximation is used to accomplish the non-rigid registration, i.e., local regions of the image pair are registered roughly with thin-plate splines, and the current rough registration result then becomes the object to be registered in the following registration step. Experiments show that the proposed method is effective for registering non-rigid medical images.
Projection methods for the numerical solution of Markov chain models
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
Projection methods for computing stationary probability distributions for Markov chain models are presented. A general projection method seeks an approximation to the original problem from a subspace of small dimension. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. A particularly successful class of methods based on this principle is that of Krylov subspace methods, which utilize subspaces of the form span{v, Av, ..., A^(m-1)v}. These methods are effective in solving linear systems and eigenvalue problems (Lanczos, Arnoldi, ...) as well as nonlinear equations. They can be combined with more traditional iterative methods such as successive overrelaxation, symmetric successive overrelaxation, or with incomplete factorization methods to enhance convergence.
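The projection idea in this abstract can be sketched in a few lines: build an orthonormal basis of span{v, Av, ..., A^(m-1)v} with the Arnoldi process, project onto it, and read the stationary distribution off the projected eigenproblem. The chain, dimensions, and NumPy implementation below are illustrative, not from the paper.

```python
import numpy as np

def arnoldi(A, v, m):
    """Build an orthonormal basis V of span{v, Av, ..., A^(m-1)v}
    and the projected (Hessenberg) matrix H = V^T A V."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# Illustrative 4-state Markov chain (rows sum to 1)
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.6, 0.1],
              [0.3, 0.1, 0.1, 0.5]])
# Stationary pi satisfies pi P = pi, i.e. P^T pi = pi
V, H = arnoldi(P.T, np.ones(4), m=4)
vals, vecs = np.linalg.eig(H)
k = np.argmin(np.abs(vals - 1.0))           # Ritz value closest to 1
pi = np.real(V @ vecs[:, k])
pi /= pi.sum()                              # normalize as a distribution
```

With m equal to the chain size the projection is exact; in realistic use m << N and the Ritz vector is only an approximation.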
Stone, J.J. Jr.; Bettis, E.S.; Mann, E.R.
1957-10-01
The electronic digital computer is designed to solve systems involving a plurality of simultaneous linear equations. The computer can solve a system which converges rather rapidly under von Seidel's method of approximation, and it performs the summations required for solving for the unknown terms by a method of successive approximations.
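The successive-approximation scheme described here — sweeping through the equations and substituting the newest values — is essentially the (Gauss-)Seidel iteration. A minimal sketch with an illustrative diagonally dominant system (Python/NumPy assumed, not from the patent):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Solve Ax = b by successive approximations (Seidel's method).

    Each sweep replaces x[i] using the most recent values of the
    other unknowns; convergence is guaranteed for strictly
    diagonally dominant A."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Illustrative diagonally dominant system
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = gauss_seidel(A, b)
```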
A Gaussian-based rank approximation for subspace clustering
NASA Astrophysics Data System (ADS)
Xu, Fei; Peng, Chong; Hu, Yunhong; He, Guoping
2018-04-01
Low-rank representation (LRR) has been shown successful in seeking low-rank structures of data relationships in a union of subspaces. Generally, LRR and LRR-based variants need to solve nuclear norm-based minimization problems. Beyond the success of such methods, it has been widely noted that the nuclear norm may not be a good rank approximation because it simply adds all singular values of a matrix together, and thus large singular values may dominate the weight. This results in a far from satisfactory rank approximation and may degrade the performance of low-rank models based on the nuclear norm. In this paper, we propose a novel nonconvex rank approximation based on the Gaussian distribution function, which has desirable properties that make it a better rank approximation than the nuclear norm. A low-rank model is then proposed based on the new rank approximation, with application to motion segmentation. Experimental results have shown significant improvements and verified the effectiveness of our method.
NONLINEAR MULTIGRID SOLVER EXPLOITING AMGe COARSE SPACES WITH APPROXIMATION PROPERTIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Max La Cour; Villa, Umberto E.; Engsig-Karup, Allan P.
The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse spaces that were developed recently at Lawrence Livermore National Laboratory. These give the ability to derive stable and accurate coarse nonlinear discretization problems. Previous attempts (including ones with the original AMGe method, [5, 11]) were less successful due to the lack of such good approximation properties of the coarse spaces. With coarse spaces that have approximation properties, our FAS approach on unstructured meshes should be as powerful and successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate, providing a solver with the potential for mesh-independent convergence on general unstructured meshes.
Dual methods and approximation concepts in structural synthesis
NASA Technical Reports Server (NTRS)
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that the joining together of approximation concepts and dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method presented is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.
A hybrid Pade-Galerkin technique for differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1993-01-01
A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters (delta(sub j)). These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
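Step two of the hybrid technique — turning a truncated power series into a rational Padé approximant — can be sketched as a small linear solve for the denominator followed by a convolution for the numerator. The series used below (for exp) is illustrative, not one of the paper's model problems:

```python
import numpy as np

def pade(c, L, M):
    """Coefficients (a, b) of the [L/M] Pade approximant P(x)/Q(x)
    built from series coefficients c[0..L+M], with Q normalized so
    that b[0] = 1."""
    # Denominator: solve sum_{j=1..M} c[L+k-j] b_j = -c[L+k], k = 1..M
    C = np.array([[c[L + k - j] if L + k - j >= 0 else 0.0
                   for j in range(1, M + 1)] for k in range(1, M + 1)])
    rhs = -np.array([c[L + k] for k in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(C, rhs)))
    # Numerator by convolution: a_i = sum_{j=0..min(i,M)} c[i-j] b_j
    a = np.array([sum(c[i - j] * b[j] for j in range(0, min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

# Truncated series for exp(x): 1 + x + x^2/2
# gives the classic [1/1] approximant (1 + x/2) / (1 - x/2)
a, b = pade([1.0, 1.0, 0.5], L=1, M=1)
```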
Solution of Cubic Equations by Iteration Methods on a Pocket Calculator
ERIC Educational Resources Information Center
Bamdad, Farzad
2004-01-01
A method is developed to show students how they can write iteration programs on an inexpensive programmable pocket calculator, without requiring a PC or a graphing calculator. Two iteration methods are used: the successive-approximations and bisection methods.
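Both iteration methods fit in a few lines; the cubic x^3 + x - 1 = 0 and its fixed-point rearrangement below are illustrative examples, not necessarily those used in the article:

```python
def fixed_point(g, x0, n=100):
    """Successive approximations: iterate x <- g(x)."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

def bisection(f, lo, hi, n=60):
    """Bisection: repeatedly halve a bracket with f(lo)*f(hi) < 0."""
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Cubic x^3 + x - 1 = 0, rearranged as x = 1/(x^2 + 1) so that the
# successive-approximation map contracts near the root
root_fp = fixed_point(lambda x: 1.0 / (x * x + 1.0), 0.5)
root_bi = bisection(lambda x: x**3 + x - 1.0, 0.0, 1.0)
```

The rearrangement matters: iterating x <- 1 - x^3 instead would diverge, since the map's slope exceeds 1 at the root.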
Some Surprising Errors in Numerical Differentiation
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2012-01-01
Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
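The successive approximations in question come from shrinking the step h in the Newton difference quotient; Taylor's theorem predicts a leading error of roughly (h/2)|f''(x)|, so halving h should roughly halve the error. A sketch of that pattern for the sine function (the point and step sizes are illustrative):

```python
import math

def forward_diff(f, x, h):
    """Newton difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Error of the difference quotient for f = sin at x = 1.
# Halving h should roughly halve the error, so successive
# error ratios should cluster near 2.
x = 1.0
errors = [abs(forward_diff(math.sin, x, h) - math.cos(x))
          for h in (1e-2, 5e-3, 2.5e-3)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```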
A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.
Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F
2015-10-15
A novel manufacturing process for fabricating microneedle (MN) arrays has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results showed negligible difference between the two methods, each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant. In both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. Copyright © 2015 Elsevier B.V. All rights reserved.
A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays
Lutton, Rebecca E.M.; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A.David; Donnelly, Ryan F.
2015-01-01
A novel manufacturing process for fabricating microneedle (MN) arrays has been designed and evaluated. The prototype is able to successfully produce 14 × 14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results showed negligible difference between the two methods, each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted in a skin simulant. In both cases the insertion depth was approximately 60% of the needle length and the height reduction after insertion was approximately 3%. PMID:26302858
Regularization of the double period method for experimental data processing
NASA Astrophysics Data System (ADS)
Belov, A. A.; Kalitkin, N. N.
2017-11-01
In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, in which success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, the spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a way convenient for gasdynamic codes. These approximations are superior to previously known formulas in the covered temperature range and accuracy.
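A minimal sketch of the regularization ingredient: a Tikhonov stabilizer penalizing the discrete second derivative to suppress spurious oscillations. The signal, noise level, and regularization weight below are illustrative assumptions, and the double period construction itself is omitted:

```python
import numpy as np

def tikhonov_smooth(y, lam):
    """Smooth noisy samples y by minimizing
    ||u - y||^2 + lam * ||D2 u||^2, where D2 is the second
    difference operator (a discrete second-derivative stabilizer)."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second differences
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# Illustrative "experimental curve": smooth signal plus noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
truth = np.sin(t)
noisy = truth + 0.1 * rng.standard_normal(len(t))
smooth = tikhonov_smooth(noisy, lam=50.0)
```

Larger lam damps oscillations harder at the cost of flattening genuine curvature; choosing it well is exactly the intuition-dependent step the abstract mentions.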
Aben, Ilse; Tanzi, Cristina P; Hartmann, Wouter; Stam, Daphne M; Stammes, Piet
2003-06-20
A method is presented for in-flight validation of space-based polarization measurements based on approximation of the direction of polarization of scattered sunlight by the Rayleigh single-scattering value. This approximation is verified by simulations of radiative transfer calculations for various atmospheric conditions. The simulations show locations along an orbit where the scattering geometries are such that the intensities of the parallel and orthogonal polarization components of the light are equal, regardless of the observed atmosphere and surface. The method can be applied to any space-based instrument that measures the polarization of reflected solar light. We successfully applied the method to validate the Global Ozone Monitoring Experiment (GOME) polarization measurements. The error in the GOME's three broadband polarization measurements appears to be approximately 1%.
Gariepy, Aileen M.; Creinin, Mitchell D.; Schwarz, Eleanor B.; Smith, Kenneth J.
2011-01-01
OBJECTIVE To estimate the probability of successful sterilization after hysteroscopic or laparoscopic sterilization procedure. METHODS An evidence-based clinical decision analysis using a Markov model was performed to estimate the probability of a successful sterilization procedure using laparoscopic sterilization, hysteroscopic sterilization in the operating room, and hysteroscopic sterilization in the office. Procedure and follow-up testing probabilities for the model were estimated from published sources. RESULTS In the base case analysis, the proportion of women having a successful sterilization procedure on first attempt is 99% for laparoscopic, 88% for hysteroscopic in the operating room and 87% for hysteroscopic in the office. The probability of having a successful sterilization procedure within one year is 99% with laparoscopic, 95% for hysteroscopic in the operating room, and 94% for hysteroscopic in the office. These estimates for hysteroscopic success include approximately 6% of women who attempt hysteroscopically but are ultimately sterilized laparoscopically. Approximately 5% of women who have a failed hysteroscopic attempt decline further sterilization attempts. CONCLUSIONS Women choosing laparoscopic sterilization are more likely than those choosing hysteroscopic sterilization to have a successful sterilization procedure within one year. However, the risk of failed sterilization and subsequent pregnancy must be considered when choosing a method of sterilization. PMID:21775842
NASA Astrophysics Data System (ADS)
Muhiddin, F. A.; Sulaiman, J.
2017-09-01
The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method, using the fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point Crank-Nicolson approximation equation for solving the diffusion equation. From this approximation equation, it can be shown that the corresponding system of five-point approximation equations can be generated and then solved iteratively. In order to assess the performance of the proposed iterative method with the fourth-order CN scheme, another point iterative method, Gauss-Seidel (GS), is also presented as a reference method. Finally, from the numerical results obtained with the fourth-order CN discretization scheme, it can be pointed out that the SOR iterative method is superior in terms of number of iterations, execution time, and maximum absolute error.
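A sketch of the SOR point iteration on an illustrative tridiagonal system of the kind a Crank-Nicolson discretization produces (the matrix, relaxation factor, and tolerance are assumptions, not the paper's five-point scheme); setting omega = 1 recovers the Gauss-Seidel reference method:

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=10000):
    """Successive Over-Relaxation: a Gauss-Seidel sweep whose
    update is extrapolated by the relaxation factor omega."""
    n = len(b)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, it
    return x, max_iter

# Illustrative tridiagonal system resembling a Crank-Nicolson
# discretization of the diffusion equation
n = 50
A = (np.diag(np.full(n, 2.5))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)
x_sor, it_sor = sor(A, b, omega=1.3)
x_gs, it_gs = sor(A, b, omega=1.0)   # omega = 1 is Gauss-Seidel
```

For this system the over-relaxed sweep needs markedly fewer iterations than the plain Gauss-Seidel sweep, which is the comparison the paper reports.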
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Gu, Q; Ding, Y S; Zhang, T L
2010-05-01
We use approximate entropy and hydrophobicity patterns to predict G-protein-coupled receptors. An AdaBoost classifier is adopted as the prediction engine. A low-homology dataset is used to validate the proposed method. Compared with previously reported results, the success rate is encouraging. The source code is written in Matlab.
Cengizci, Süleyman; Atay, Mehmet Tarık; Eryılmaz, Aytekin
2016-01-01
This paper is concerned with two-point boundary value problems for singularly perturbed nonlinear ordinary differential equations. The case when the solution has only one boundary layer is examined. An efficient method, the so-called Successive Complementary Expansion Method (SCEM), is used to obtain uniformly valid approximations to such solutions. Four test problems are considered to check the efficiency and accuracy of the proposed method. The numerical results are found to be in good agreement with exact and existing solutions in the literature. The results confirm that SCEM has an advantage over other existing methods in terms of easy applicability and effectiveness.
Gai, Litao; Bilige, Sudao; Jie, Yingmo
2016-01-01
In this paper, we successfully obtained the exact solutions and approximate analytic solutions of the (2 + 1)-dimensional KP equation based on Lie symmetry, the extended tanh method, and the homotopy perturbation method. In the first part, we obtained the symmetries of the (2 + 1)-dimensional KP equation based on the Wu differential characteristic set algorithm and reduced the equation. In the second part, we constructed abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed by hyperbolic functions, trigonometric functions, and rational functions, respectively. It should be noted that when the parameters take special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain approximate analytic solutions based on four kinds of initial conditions.
Ranking Support Vector Machine with Kernel Approximation
Dou, Yong
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
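Of the two kernel approximations the paper explores, random Fourier features are the easier to sketch: sample random frequencies, map inputs through cosines, and inner products of the features approximate the RBF kernel. The dimensions, bandwidth, and NumPy implementation below are illustrative:

```python
import numpy as np

def rff_map(X, n_features, sigma, rng):
    """Random Fourier feature map z(x) such that
    z(x) . z(y) ~= exp(-||x - y||^2 / (2 sigma^2))   (RBF kernel)."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) / sigma   # random frequencies
    bias = rng.uniform(0.0, 2.0 * np.pi, n_features)   # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + bias)

rng = np.random.default_rng(0)
x = np.array([[0.3, -0.7]])
y = np.array([[0.1, 0.4]])
sigma = 1.0
# Map both points with the SAME random frequencies
Z = rff_map(np.vstack([x, y]), n_features=5000, sigma=sigma, rng=rng)
k_approx = float(Z[0] @ Z[1])
k_exact = float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))
```

A linear RankSVM trained on these features then mimics the nonlinear kernel model while never forming the kernel matrix, which is the source of the speedup claimed in the abstract.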
Overview of Krylov subspace methods with applications to control problems
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
An overview of projection methods based on Krylov subspaces is given with emphasis on their application to solving matrix equations that arise in control problems. The main idea of Krylov subspace methods is to generate a basis of the Krylov subspace and to seek an approximate solution to the original problem from this subspace. Thus, the original matrix problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. It is shown how they can be used to solve partial pole placement problems, Sylvester's equation, and Lyapunov's equation.
Krylov subspace methods - Theory, algorithms, and applications
NASA Technical Reports Server (NTRS)
Saad, Youcef
1990-01-01
Projection methods based on Krylov subspaces for solving various types of scientific problems are reviewed. The main idea of this class of methods, when applied to a linear system Ax = b, is to generate in some manner an approximate solution to the original problem from the so-called Krylov subspace. Thus, the original problem of size N is approximated by one of dimension m, typically much smaller than N. Krylov subspace methods have been very successful in solving linear systems and eigenvalue problems and are now becoming popular for solving nonlinear equations. The main ideas in Krylov subspace methods are shown, and their use in solving linear systems, eigenvalue problems, parabolic partial differential equations, Lyapunov matrix equations, and nonlinear systems of equations is discussed.
FFT multislice method--the silver anniversary.
Ishizuka, Kazuo
2004-02-01
The first paper on the FFT multislice method was published in 1977, a quarter of a century ago. The formula was extended in 1982 to include a large tilt of the incident beam relative to the specimen surface. Since then, with advances in computing power, the FFT multislice method has been successfully applied to coherent CBED and HAADF-STEM simulations. However, because the multislice formula is built on some physical approximations and approximations in numerical procedure, there seem to be contradictory conclusions in the literature on the multislice method. In this report, the physical implications of the multislice method are reviewed based on the formula for tilted illumination. Then, some results on coherent CBED and HAADF-STEM simulations are presented.
NASA Astrophysics Data System (ADS)
Tavousi, Alireza; Mansouri-Birjandi, Mohammad Ali; Saffari, Mehdi
2016-09-01
Implementing photonic sampling and quantizing analog-to-digital converters (ADCs) enables us to extract a binary word from optical signals without the need for extra assisting electronic parts. This greatly increases the sampling and quantization speed while decreasing the consumed power. To this end, based on the concept of the successive approximation method, a 4-bit all-optical ADC that operates using the intensity-dependent Kerr-like nonlinearity in a two-dimensional photonic crystal (2DPhC) platform is proposed. Silicon (Si) nanocrystal is chosen because of its suitable nonlinear material characteristics. An optical limiter is used for clamping and quantization of each of the successive levels that represent the ADC bits. In the proposal, an energy-efficient optical ADC circuit is implemented by controlling system parameters such as the ring-to-waveguide coupling coefficients, the ring's nonlinear refractive index, and the ring's length. The performance of the ADC structure is verified by simulation using the finite difference time domain (FDTD) method.
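Stripped of the photonic hardware, the successive approximation principle is a binary search: resolve one bit per step, most significant first, by comparing the input against a trial level. A plain software sketch with an assumed 4-bit resolution and unit reference voltage:

```python
def sar_quantize(v, v_ref=1.0, bits=4):
    """Successive approximation register (SAR) quantizer: resolve
    one bit per step by comparing the input against a trial level,
    most significant bit first."""
    code = 0
    for k in reversed(range(bits)):
        trial = code | (1 << k)            # tentatively set bit k
        if v >= trial * v_ref / (1 << bits):
            code = trial                   # keep the bit if v clears it
    return code

# Four illustrative analog levels mapped to 4-bit codes
codes = [sar_quantize(v) for v in (0.03, 0.3, 0.6, 0.97)]
```

An n-bit conversion thus needs only n compare-and-settle steps, which is why the successive approximation architecture is attractive when each comparison (optical or electronic) is expensive.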
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Bin; Pettitt, Bernard M.
Electrostatic free energies of solvation for 15 neutral amino acid side chain analogs are computed. We compare three methods of varying computational complexity and accuracy for three force fields: free energy simulations, Poisson-Boltzmann (PB), and the linear response approximation (LRA), using the AMBER, CHARMM, and OPLSAA force fields. We find that deviations from simulation start at low charges for solutes. The approximate PB and LRA methods produce an overestimation of electrostatic solvation free energies for most of the molecules studied here. These deviations are remarkably systematic. The variations among force fields are almost as large as the variations found among methods. Our study confirms that the success of the approximate methods for electrostatic solvation free energies comes from their ability to evaluate free energy differences accurately.
NASA Astrophysics Data System (ADS)
Holota, Petr; Nesvadba, Otakar
2017-04-01
The aim of this paper is to discuss the solution of the linearized gravimetric boundary value problem by means of the method of successive approximations. We start with the relation between the geometry of the solution domain and the structure of Laplace's operator. As in other branches of engineering and mathematical physics, a transformation of coordinates is used that offers a trade-off between the complexity of the boundary and the complexity of the coefficients of the partial differential equation governing the solution. Laplace's operator has a relatively simple structure in terms of ellipsoidal coordinates, which are frequently used in geodesy. However, the physical surface of the Earth substantially differs from an oblate ellipsoid of revolution, even an optimally fitted one. Therefore, an alternative is discussed: a system of general curvilinear coordinates such that the physical surface of the Earth is embedded in the family of coordinate surfaces. Clearly, the structure of Laplace's operator is more complicated in this case. It was deduced by means of tensor calculus, and in a sense it represents the topography of the physical surface of the Earth. Nevertheless, the construction of the respective Green's function is simpler if the solution domain is transformed. This enables the use of the classical Green's function method together with the method of successive approximations for the solution of the linear gravimetric boundary value problem expressed in terms of the new coordinates. The structure of the iteration steps is analyzed and, where useful, modified by means of integration by parts. A comparison with other methods is discussed.
METHOD OF HOT ROLLING URANIUM METAL
Kaufmann, A.R.
1959-03-10
A method is given for quickly and efficiently hot rolling uranium metal in the upper part of the alpha phase temperature region to obtain sound bars and sheets possessing a good surface finish. The uranium metal billet is heated to a temperature in the range of 1000 deg F to 1220 deg F by immersion in a molten lead bath. The heated billet is then passed through the rolls. The temperature is restored to the desired range between successive passes through the rolls, and the rolls are turned down approximately 0.050 inch between successive passes.
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
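The first- and second-order moment approximations can be sketched for a scalar toy function with known sensitivity derivatives (the function, its derivatives, and the Monte Carlo check below are illustrative, not the CFD code's outputs):

```python
import numpy as np

def moment_estimates(f, d1, d2, mu, sigma):
    """Approximate statistical moments of f(x) for x ~ N(mu, sigma^2)
    using sensitivity derivatives:
      mean1 = f(mu)                      (first-order mean)
      mean2 = f(mu) + 0.5 f''(mu) s^2   (second-order mean)
      var1  = (f'(mu) s)^2              (first-order variance)"""
    mean1 = f(mu)
    mean2 = f(mu) + 0.5 * d2(mu) * sigma ** 2
    var1 = (d1(mu) * sigma) ** 2
    return mean1, mean2, var1

# Illustrative output function with known derivatives
f = lambda x: x ** 2
mu, sigma = 1.0, 0.1
mean1, mean2, var1 = moment_estimates(f, lambda x: 2.0 * x,
                                      lambda x: 2.0, mu, sigma)

# Monte Carlo simulation, the comparison the paper uses for validation
rng = np.random.default_rng(0)
samples = f(mu + sigma * rng.standard_normal(200_000))
```

For this quadratic output the second-order mean estimate matches the exact value mu^2 + sigma^2, while the first-order estimate misses the sigma^2 term — the kind of gap the moment-versus-Monte-Carlo comparison exposes.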
The method of fundamental solutions for computing acoustic interior transmission eigenvalues
NASA Astrophysics Data System (ADS)
Kleefeld, Andreas; Pieronek, Lukas
2018-03-01
We analyze the method of fundamental solutions (MFS) in two different versions with focus on the computation of approximate acoustic interior transmission eigenvalues in 2D for homogeneous media. Our approach is mesh- and integration free, but suffers in general from the ill-conditioning effects of the discretized eigenoperator, which we could then successfully balance using an approved stabilization scheme. Our numerical examples cover many of the common scattering objects and prove to be very competitive in accuracy with the standard methods for PDE-related eigenvalue problems. We finally give an approximation analysis for our framework and provide error estimates, which bound interior transmission eigenvalue deviations in terms of some generalized MFS output.
Sign Language Translator Application Using OpenCV
NASA Astrophysics Data System (ADS)
Triyono, L.; Pratisto, E. H.; Bawono, S. A. T.; Purnomo, F. A.; Yudhanto, Y.; Raharjo, B.
2018-03-01
This research focuses on the development of a sign language translator application using OpenCV on Android; the application is based on colour differences. The authors also utilize a support vector machine (machine learning) to predict the label. Results showed that the fingertip-coordinate search method can be used to recognize hand gestures in which the fingers are open, while gestures made with a clenched hand are recognized using the Hu Moments value search method. The fingertip method is more resilient in gesture recognition, with a higher success rate of 95% at distances of 35 cm and 55 cm, light intensities of approximately 90 lux and 100 lux, and a plain green background, compared with a 40% success rate for the Hu Moments method under the same parameters. Against an outdoor background, the application still cannot be used reliably: only 6 attempts succeeded and the rest failed.
Spacecraft attitude control using neuro-fuzzy approximation of the optimal controllers
NASA Astrophysics Data System (ADS)
Kim, Sung-Woo; Park, Sang-Young; Park, Chandeok
2016-01-01
In this study, a neuro-fuzzy controller (NFC) was developed for spacecraft attitude control to mitigate large computational load of the state-dependent Riccati equation (SDRE) controller. The NFC was developed by training a neuro-fuzzy network to approximate the SDRE controller. The stability of the NFC was numerically verified using a Lyapunov-based method, and the performance of the controller was analyzed in terms of approximation ability, steady-state error, cost, and execution time. The simulations and test results indicate that the developed NFC efficiently approximates the SDRE controller, with asymptotic stability in a bounded region of angular velocity encompassing the operational range of rapid-attitude maneuvers. In addition, it was shown that an approximated optimal feedback controller can be designed successfully through neuro-fuzzy approximation of the optimal open-loop controller.
NASA Technical Reports Server (NTRS)
Walden, H.
1974-01-01
Methods for obtaining approximate solutions for the fundamental eigenvalue of the Laplace-Beltrami operator (also referred to as the membrane eigenvalue problem for the vibration equation) on the unit spherical surface are developed. Two specific types of spherical surface domains are considered: (1) the interior of a spherical triangle, i.e., the region bounded by arcs of three great circles, and (2) the exterior of a great circle arc extending for less than pi radians on the sphere (a spherical surface with a slit). In both cases, zero boundary conditions are imposed. In order to solve the resulting second-order elliptic partial differential equations in two independent variables, a finite difference approximation is derived. The symmetric (generally five-point) finite difference equations that develop are written in matrix form and then solved by the iterative method of point successive overrelaxation. Upon convergence of this iterative method, the fundamental eigenvalue is approximated by iteration utilizing the power method as applied to the finite Rayleigh quotient.
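The pipeline in this abstract (five-point finite differences, point successive overrelaxation, then eigenvalue extraction by the power method applied to the inverse operator) can be sketched on a simpler domain. The sketch below is a minimal illustration, assuming the unit square with zero boundary conditions instead of a spherical triangle; the grid size, relaxation factor, and iteration counts are illustrative choices, not the paper's.

```python
import math

n = 10                      # interior grid points per direction
h = 1.0 / (n + 1)           # mesh width
omega = 1.5                 # SOR relaxation factor (illustrative)

def sor_solve(b, sweeps=200):
    """Solve (-Laplacian_h) u = b with zero boundary values by point SOR."""
    u = [[0.0] * n for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                nb = 0.0
                if i > 0:
                    nb += u[i - 1][j]
                if i < n - 1:
                    nb += u[i + 1][j]
                if j > 0:
                    nb += u[i][j - 1]
                if j < n - 1:
                    nb += u[i][j + 1]
                gauss_seidel = (h * h * b[i][j] + nb) / 4.0
                u[i][j] += omega * (gauss_seidel - u[i][j])
    return u

# Inverse power iteration: the fundamental eigenvalue of -Laplacian_h is the
# reciprocal of the dominant eigenvalue of its inverse, so we repeatedly
# solve A u_new = u_old / ||u_old|| with the SOR solver above.
u = [[1.0] * n for _ in range(n)]
for _ in range(12):
    norm = math.sqrt(sum(x * x for row in u for x in row))
    u = sor_solve([[x / norm for x in row] for row in u])

# After convergence u ~ (1/lambda) * (unit eigenvector), so lambda ~ 1/||u||.
lam = 1.0 / math.sqrt(sum(x * x for row in u for x in row))
# On the unit square the exact fundamental eigenvalue is 2*pi**2 ~ 19.74.
```

The discrete eigenvalue (about 19.6 for this grid) approaches 2π² as the mesh is refined, mirroring the convergence behavior the abstract describes.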
Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.
Bishara, Anthony J; Li, Jiexiang; Nash, Thomas
2018-02-01
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
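The baseline being adjusted here is the classical Fisher z' interval. A minimal sketch of that default interval (the one the abstract says can be inaccurate under non-normality, not the paper's new approximate-distribution method) is:

```python
import math

def fisher_ci(r, n, z_crit=1.959964):
    """Default Fisher z' confidence interval for a Pearson correlation.

    This is the classical normal-theory interval; the paper's contribution
    is an adjustment of it for non-normal data, not shown here."""
    z = math.atanh(r)                  # Fisher's z' transform
    se = 1.0 / math.sqrt(n - 3)        # asymptotic standard error
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

lo, hi = fisher_ci(0.5, 100)           # e.g. r = .50 observed from n = 100 pairs
```

For r = .50 and n = 100 this gives roughly (.34, .63), an interval that is asymmetric about r because of the back-transform.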
Practical solution of plastic deformation problems in elastic-plastic range
NASA Technical Reports Server (NTRS)
Mendelson, A; Manson, S
1957-01-01
A practical method for solving plastic deformation problems in the elastic-plastic range is presented. The method is one of successive approximations and is illustrated by four examples which include a flat plate with temperature distribution across the width, a thin shell with axial temperature distribution, a solid cylinder with radial temperature distribution, and a rotating disk with radial temperature distribution.
Computerized optimization of multiple isocentres in stereotactic convergent beam irradiation
NASA Astrophysics Data System (ADS)
Treuer, U.; Treuer, H.; Hoevels, M.; Müller, R. P.; Sturm, V.
1998-01-01
A method for the fully computerized determination and optimization of positions of target points and collimator sizes in convergent beam irradiation is presented. In conventional interactive trial and error methods, which are very time consuming, the treatment parameters are chosen according to the operator's experience and improved successively. This time is reduced significantly by the use of a computerized procedure. After the definition of target volume and organs at risk in the CT or MR scans, an initial configuration is created automatically. In the next step the target point positions and collimator diameters are optimized by the program. The aim of the optimization is to find a configuration for which a prescribed dose at the target surface is approximated as closely as possible. At the same time dose peaks inside the target volume are minimized and organs at risk and tissue surrounding the target are spared. To enhance the speed of the optimization a fast method for approximate dose calculation in convergent beam irradiation is used. A possible application of the method for calculating the leaf positions when irradiating with a micromultileaf collimator is briefly discussed. The success of the procedure has been demonstrated for several clinical cases with up to six target points.
NASA Astrophysics Data System (ADS)
McCurdy, C. William; Lucchese, Robert L.; Greenman, Loren
2017-04-01
The complex Kohn variational method, which represents the continuum wave function in each channel using a combination of Gaussians and Bessel or Coulomb functions, has been successful in numerous applications to electron-polyatomic molecule scattering and molecular photoionization. The hybrid basis representation limits it to relatively low energies (<50 eV), requires an approximation to exchange matrix elements involving continuum functions, and hampers its coupling to modern electronic structure codes for the description of correlated target states. We describe a successful implementation of the method using completely adaptive overset grids to describe continuum functions, in which spherical subgrids are placed on every atomic center to complement a spherical master grid that describes the behavior at large distances. An accurate method for applying the free-particle Green's function on the grid eliminates the need to operate explicitly with the kinetic energy, enabling a rapidly convergent Arnoldi algorithm for solving linear equations on the grid, and no approximations to exchange operators are made. Results for electron scattering from several polyatomic molecules will be presented. Army Research Office, MURI, W911NF-14-1-0383 and U. S. DOE DE-SC0012198 (at Texas A&M).
Hesford, Andrew J.; Chew, Weng C.
2010-01-01
The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438
Improved Discrete Approximation of Laplacian of Gaussian
NASA Technical Reports Server (NTRS)
Shuler, Robert L., Jr.
2004-01-01
An improved method of computing a discrete approximation of the Laplacian of a Gaussian convolution of an image has been devised. The primary advantage of the method is that without substantially degrading the accuracy of the end result, it reduces the amount of information that must be processed and thus reduces the amount of circuitry needed to perform the Laplacian-of-Gaussian (LOG) operation. Some background information is necessary to place the method in context. The method is intended for application to the LOG part of a process of real-time digital filtering of digitized video data that represent brightnesses in pixels in a square array. The particular filtering process of interest is one that converts pixel brightnesses to binary form, thereby reducing the amount of computation that must be performed in subsequent correlation processing (e.g., correlations between images in a stereoscopic pair for determining distances or correlations between successive frames of the same image for detecting motions). The Laplacian is often included in the filtering process because it emphasizes edges and textures, while the Gaussian is often included because it smooths out noise that might not be consistent between left and right images or between successive frames of the same image.
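The two building blocks described above — a sampled LoG kernel and the one-bit sign output fed to the correlator — can be sketched directly. The kernel size and σ below are illustrative assumptions, not the values used in the hardware design:

```python
import math

def log_kernel(size=9, sigma=1.4):
    """Sampled (unnormalized) Laplacian-of-Gaussian kernel.

    LoG(r) is proportional to (r^2 - 2*sigma^2) * exp(-r^2 / (2*sigma^2)):
    negative at the center, crossing zero at r = sigma*sqrt(2)."""
    c = size // 2
    s2 = sigma * sigma
    kernel = []
    for y in range(size):
        row = []
        for x in range(size):
            r2 = (x - c) ** 2 + (y - c) ** 2
            row.append((r2 - 2 * s2) * math.exp(-r2 / (2 * s2)))
        kernel.append(row)
    return kernel

def binarize(response):
    """One-bit output: the sign of the LoG response at a pixel."""
    return 1 if response >= 0.0 else 0

k = log_kernel()
```

Convolving an image with `k` and then applying `binarize` per pixel yields the binary edge/texture map whose reduced information content motivates the improved method.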
NASA Astrophysics Data System (ADS)
Morris, Titus; Bogner, Scott
2016-09-01
The In-Medium Similarity Renormalization Group (IM-SRG) has been applied successfully to the ground state of closed shell finite nuclei. Recent work has extended its ability to target excited states of these closed shell systems via equation-of-motion methods, and also complete spectra of the whole SD shell via effective shell model interactions. A recent alternative method for solving the IM-SRG equations, based on the Magnus expansion, not only provides a computationally feasible route to producing observables, but also allows for approximate handling of induced three-body forces. Promising results for several systems, including finite nuclei, will be presented and discussed.
A finite element analysis of viscoelastically damped sandwich plates
NASA Astrophysics Data System (ADS)
Ma, B.-A.; He, J.-F.
1992-01-01
A finite element analysis associated with an asymptotic solution method for the harmonic flexural vibration of viscoelastically damped unsymmetrical sandwich plates is given. The element formulation is based on generalization of the discrete Kirchhoff theory (DKT) element formulation. The results obtained with the first order approximation of the asymptotic solution presented here are the same as those obtained by means of the modal strain energy (MSE) method. By taking more terms of the asymptotic solution, with successive calculations and use of the Padé approximants method, accuracy can be improved. The finite element computation has been verified by comparison with an analytical exact solution for rectangular plates with simply supported edges. Results for the same plates with clamped edges are also presented.
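Padé acceleration of a truncated series, as used above to improve on the first-order asymptotic solution, can be illustrated with the lowest-order [1/1] approximant. The exponential series below is a stand-in example under that assumption, not the sandwich-plate expansion itself:

```python
import math

def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant (a0 + a1*x)/(1 + b1*x) of c0 + c1*x + c2*x^2.

    Matching coefficients through x^2 gives b1 = -c2/c1, a0 = c0,
    a1 = c1 + c0*b1."""
    b1 = -c2 / c1
    return lambda x: (c0 + (c1 + c0 * b1) * x) / (1 + b1 * x)

p = pade_1_1(1.0, 1.0, 0.5)            # Taylor series of exp(x) through x^2
pade_err = abs(p(0.5) - math.exp(0.5))
taylor_err = abs((1 + 0.5 + 0.5 * 0.25) - math.exp(0.5))
```

From the same three coefficients, the rational form is more accurate than the truncated polynomial at x = 0.5, which is the mechanism by which taking more asymptotic terms plus Padé approximants improves accuracy.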
Design of microstrip patch antennas using knowledge insertion through retraining
NASA Astrophysics Data System (ADS)
Divakar, T. V. S.; Sudhakar, A.
2018-04-01
The traditional way of analyzing or designing with a neural network is to collect experimental data and train the network, which then acts as a global approximate function and is used to calculate parameters for unknown configurations. The main drawback of this method is that one rarely has enough experimental data, the cost of prototypes being a major factor [1-4]. Therefore, in this method the authors collected training data from available approximate formulas over the full design range and trained the network with it. After successful training, the network is retrained with available measured results. This simple procedure inserts experimental knowledge into the network [5]. The method is tested for rectangular and circular microstrip antennas.
Poisson Approximation-Based Score Test for Detecting Association of Rare Variants.
Fang, Hongyan; Zhang, Hong; Yang, Yaning
2016-07-01
Genome-wide association study (GWAS) has achieved great success in identifying genetic variants, but the nature of GWAS has determined its inherent limitations. Under the common disease rare variants (CDRV) hypothesis, the traditional association analysis methods commonly used in GWAS for common variants do not have enough power for detecting rare variants with a limited sample size. As a solution to this problem, pooling rare variants by their functions provides an efficient way for identifying susceptible genes. Rare variants typically have low frequencies of minor alleles, and the distribution of the total number of minor alleles of the rare variants can be approximated by a Poisson distribution. Based on this fact, we propose a new test method, the Poisson Approximation-based Score Test (PAST), for association analysis of rare variants. Two testing methods, namely, ePAST and mPAST, are proposed based on different strategies of pooling rare variants. Simulation results and application to the CRESCENDO cohort data show that our methods are more powerful than the existing methods. © 2016 John Wiley & Sons Ltd/University College London.
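The core idea — treating pooled minor-allele counts as Poisson and scoring the rate difference between groups — can be sketched with a generic two-sample Poisson score statistic. This is a hypothetical stand-in: the paper's ePAST and mPAST statistics differ in their pooling strategies, and the counts below are invented:

```python
import math

def poisson_score_test(x1, n1, x2, n2):
    """Score test for equal Poisson rates in two groups.

    x1, x2: pooled minor-allele counts; n1, n2: group sizes.
    Under H0 both groups share the pooled rate lam."""
    lam = (x1 + x2) / (n1 + n2)                    # pooled rate under H0
    z = (x1 / n1 - x2 / n2) / math.sqrt(lam * (1 / n1 + 1 / n2))
    p = math.erfc(abs(z) / math.sqrt(2))           # two-sided normal p-value
    return z, p

z, p = poisson_score_test(30, 500, 15, 500)        # cases vs. controls (toy data)
```

With twice as many pooled rare alleles among 500 cases as among 500 controls, the statistic is significant at the 5% level.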
Harnessing graphical structure in Markov chain Monte Carlo learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolorz, P.E.; Chew, P.C.
1996-12-31
The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many data-mining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focuses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
Gariepy, Aileen M; Creinin, Mitchell D; Schwarz, Eleanor B; Smith, Kenneth J
2011-08-01
To estimate the probability of successful sterilization after a hysteroscopic or laparoscopic sterilization procedure. An evidence-based clinical decision analysis using a Markov model was performed to estimate the probability of a successful sterilization procedure using laparoscopic sterilization, hysteroscopic sterilization in the operating room, and hysteroscopic sterilization in the office. Procedure and follow-up testing probabilities for the model were estimated from published sources. In the base case analysis, the proportion of women having a successful sterilization procedure on the first attempt is 99% for laparoscopic sterilization, 88% for hysteroscopic sterilization in the operating room, and 87% for hysteroscopic sterilization in the office. The probability of having a successful sterilization procedure within 1 year is 99% with laparoscopic sterilization, 95% for hysteroscopic sterilization in the operating room, and 94% for hysteroscopic sterilization in the office. These estimates for hysteroscopic success include approximately 6% of women who attempt hysteroscopically but are ultimately sterilized laparoscopically. Approximately 5% of women who have a failed hysteroscopic attempt decline further sterilization attempts. Women choosing laparoscopic sterilization are more likely than those choosing hysteroscopic sterilization to have a successful sterilization procedure within 1 year. However, the risk of failed sterilization and subsequent pregnancy must be considered when choosing a method of sterilization.
Laleian, Artin; Valocchi, Albert J.; Werth, Charles J.
2015-11-24
Two-dimensional (2D) pore-scale models have successfully simulated microfluidic experiments of aqueous-phase flow with mixing-controlled reactions in devices with small aperture. A standard 2D model is not generally appropriate when the presence of mineral precipitate or biomass creates complex and irregular three-dimensional (3D) pore geometries. We modify the 2D lattice Boltzmann method (LBM) to incorporate viscous drag from the top and bottom microfluidic device (micromodel) surfaces, typically excluded in a 2D model. Viscous drag from these surfaces can be approximated by uniformly scaling a steady-state 2D velocity field at low Reynolds number. We demonstrate increased accuracy by approximating the viscous drag with an analytically-derived body force which assumes a local parabolic velocity profile across the micromodel depth. Accuracy of the generated 2D velocity field and simulation permeability had not previously been evaluated in geometries with variable aperture. We obtain permeabilities within approximately 10% error and accurate streamlines from the proposed 2D method relative to results obtained from 3D simulations. Additionally, the proposed method requires a CPU run time approximately 40 times less than a standard 3D method, representing a significant computational benefit for permeability calculations.
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
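The first-order moment-matching step described above can be sketched for a scalar stand-in of the CFD response. The function, its sensitivity derivative, and the input statistics below are illustrative assumptions; the Monte Carlo check mirrors the paper's validation step:

```python
import math
import random

def first_order_moments(f, dfdx, mu, sigma):
    """First-order statistical moment matching.

    For a normally distributed input x ~ N(mu, sigma^2), the output mean is
    approximated by f(mu) and the output std by |f'(mu)| * sigma."""
    return f(mu), abs(dfdx(mu)) * sigma

# Toy output function standing in for the CFD response (an assumption).
f = lambda x: x * x
dfdx = lambda x: 2.0 * x
mu_f, sigma_f = first_order_moments(f, dfdx, 3.0, 0.1)

# Monte Carlo check of the approximate moments.
rng = random.Random(0)
samples = [f(rng.gauss(3.0, 0.1)) for _ in range(100000)]
mc_mean = sum(samples) / len(samples)
mc_std = math.sqrt(sum((s - mc_mean) ** 2 for s in samples) / len(samples))
```

For this smooth response and small input variance the first-order moments agree closely with Monte Carlo, matching the paper's finding that the approximation is valid for robustness about input mean values.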
Surgery for disc-associated wobbler syndrome in the dog--an examination of the controversy.
Jeffery, N D; McKee, W M
2001-12-01
Controversy surrounds treatment of disc-associated 'wobbler' syndrome in the dog, centring on the choice of method of surgical decompression used. In this review, details of previously published case series are summarised and critically examined in an attempt to compare success rates and complications of different types of surgery. Unequivocally accurate comparisons were difficult because of differences in methods of case recording between series. Short-term success rates were high (approximately 80 per cent), but there was a high rate of recurrence (around 20 per cent) after any surgical treatment, suggesting the possibility that the syndrome should be considered a multifocal disease of the caudal cervical region. Statistical analysis revealed no significant differences in success rates between the various reported decompressive surgical techniques.
NASA Astrophysics Data System (ADS)
Pribram-Jones, Aurora
Warm dense matter (WDM) is a high energy phase between solids and plasmas, with characteristics of both. It is present in the centers of giant planets, within the earth's core, and on the path to ignition of inertial confinement fusion. The high temperatures and pressures of warm dense matter lead to complications in its simulation, as both classical and quantum effects must be included. One of the most successful simulation methods is density functional theory-molecular dynamics (DFT-MD). Despite great success in a diverse array of applications, DFT-MD remains computationally expensive and it neglects the explicit temperature dependence of electron-electron interactions known to exist within exact DFT. Finite-temperature density functional theory (FT DFT) is an extension of the wildly successful ground-state DFT formalism via thermal ensembles, broadening its quantum mechanical treatment of electrons to include systems at non-zero temperatures. Exact mathematical conditions have been used to predict the behavior of approximations in limiting conditions and to connect FT DFT to the ground-state theory. An introduction to FT DFT is given within the context of ensemble DFT and the larger field of DFT is discussed for context. Ensemble DFT is used to describe ensembles of ground-state and excited systems. Exact conditions in ensemble DFT and the performance of approximations depend on ensemble weights. Using an inversion method, exact Kohn-Sham ensemble potentials are found and compared to approximations. The symmetry eigenstate Hartree-exchange approximation is in good agreement with exact calculations because of its inclusion of an ensemble derivative discontinuity. Since ensemble weights in FT DFT are temperature-dependent Fermi weights, this insight may help develop approximations well-suited to both ground-state and FT DFT. 
A novel, highly efficient approach to free energy calculations, finite-temperature potential functional theory, is derived, which has the potential to transform the simulation of warm dense matter. As a semiclassical method, it connects the normally disparate regimes of cold condensed matter physics and hot plasma physics. This orbital-free approach captures the smooth classical density envelope and quantum density oscillations that are both crucial to accurate modeling of materials where temperature and pressure effects are influential.
NASA Technical Reports Server (NTRS)
Huff, Vearl N; Gordon, Sanford; Morrell, Virginia E
1951-01-01
A rapidly convergent successive approximation process is described that simultaneously determines both composition and temperature resulting from a chemical reaction. This method is suitable for use with any set of reactants over the complete range of mixture ratios as long as the products of reaction are ideal gases. An approximate treatment of limited amounts of liquids and solids is also included. This method is particularly suited to problems having a large number of products of reaction and to problems that require determination of such properties as specific heat or velocity of sound of a dissociating mixture. The method presented is applicable to a wide variety of problems that include (1) combustion at constant pressure or volume; and (2) isentropic expansion to an assigned pressure, temperature, or Mach number. Tables of thermodynamic functions needed with this method are included for 42 substances for convenience in numerical computations.
DOT National Transportation Integrated Search
2003-06-01
It is estimated that approximately 8,500 abandoned underground mines are present in Ohio and mine-related subsidence has been a problem dating back to the 1920s. Many investigative methods have been utilized with varying degrees of success in an...
A NURBS-enhanced finite volume solver for steady Euler equations
NASA Astrophysics Data System (ADS)
Meng, Xucheng; Hu, Guanghui
2018-04-01
In Hu and Yi (2016) [20], a non-oscillatory k-exact reconstruction method was proposed for high-order finite volume methods for steady Euler equations, which successfully demonstrated high-order behavior in the simulations. However, the numerical accuracy of the approximate solutions degrades noticeably for problems with curved boundaries. In this paper, the issue is resolved by introducing the Non-Uniform Rational B-splines (NURBS) method: given a discrete description of the computational domain, an approximate NURBS curve is reconstructed to provide quality quadrature information along the curved boundary. The advantages of using NURBS include (i) the numerical accuracy of the approximate solutions and the convergence rate of the numerical methods are improved simultaneously, and (ii) the NURBS curve generation is independent of the other modules of the numerical framework, which makes its application very flexible. It is also shown in the paper that introducing more elements along the normal direction for the reconstruction patch of the boundary element yields significant improvement in the convergence to steady state. The numerical examples confirm these features very well.
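The property that makes NURBS attractive for curved boundaries is that rational B-splines represent conics exactly. A minimal sketch (a single rational quadratic Bezier segment, the simplest NURBS arc, rather than the paper's reconstruction algorithm) shows a quarter circle reproduced to machine precision:

```python
import math

def nurbs_quarter_circle(t):
    """Rational quadratic Bezier arc for the unit quarter circle, t in [0, 1].

    Control points (1,0), (1,1), (0,1) with weights (1, sqrt(2)/2, 1)
    represent the circular arc exactly."""
    w = math.sqrt(2.0) / 2.0
    b0, b1, b2 = (1 - t) ** 2, 2 * (1 - t) * t, t ** 2
    den = b0 + w * b1 + b2
    x = (b0 * 1.0 + w * b1 * 1.0) / den
    y = (w * b1 * 1.0 + b2 * 1.0) / den
    return x, y
```

Every evaluated point lies on the unit circle exactly (up to floating-point rounding), so quadrature nodes placed along such a curve do not suffer the boundary-approximation error that degrades piecewise-linear boundary descriptions.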
Stibinger, Jakub
2017-05-01
The research focused on approximating clogging in the leachate collection system of the municipal solid waste landfill in Osecna (Liberec region, Northern Bohemia, Czech Republic) through analysis of numerical experiment results. To approximate clogging of the leachate collection system after fifteen years of landfill operation (1995-2009), a modified De Zeeuw-Hellinga transient drainage theory was successfully tested. The procedure applies reduction factors to express clogging of the leachate collection system at the Osecna landfill. The results proved that the modified De Zeeuw-Hellinga method with reduction factors can serve as a good tool for approximating clogging in a leachate collection system at the Osecna landfill. Copyright © 2016 Elsevier Ltd. All rights reserved.
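The De Zeeuw-Hellinga transient recursion updates drain discharge from the previous time step and the current recharge via a reaction factor. A minimal sketch, assuming that the paper's reduction factors can be represented as a multiplier on the reaction factor (the parameter values below are illustrative, not the Osecna calibration):

```python
import math

def dzh_discharge(q0, recharge, alpha, reduction=1.0, dt=1.0, steps=50):
    """De Zeeuw-Hellinga transient recursion for drain discharge.

    q_new = q_old * exp(-a*dt) + recharge * (1 - exp(-a*dt)), with the
    effective reaction factor a = alpha * reduction; reduction < 1 mimics a
    clogged collection system (an assumption about the paper's scheme)."""
    decay = math.exp(-alpha * reduction * dt)
    q, series = q0, []
    for _ in range(steps):
        q = q * decay + recharge * (1.0 - decay)
        series.append(q)
    return series

fresh = dzh_discharge(0.0, 5.0, alpha=0.3)
clogged = dzh_discharge(0.0, 5.0, alpha=0.3, reduction=0.2)
```

Under steady recharge the discharge of the unclogged system approaches the recharge rate, while the reduced reaction factor makes the clogged system respond markedly more slowly — the signature the reduction factors are fitted to.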
Approximating quantum many-body wave functions using artificial neural networks
NASA Astrophysics Data System (ADS)
Cai, Zi; Liu, Jinguo
2018-01-01
In this paper, we demonstrate the expressibility of artificial neural networks (ANNs) in quantum many-body physics by showing that a feed-forward neural network with a small number of hidden layers can be trained to approximate with high precision the ground states of some notable quantum many-body systems. We consider the one-dimensional free bosons and fermions, spinless fermions on a square lattice away from half-filling, as well as frustrated quantum magnetism with a rapidly oscillating ground-state characteristic function. In the latter case, an ANN with a standard architecture fails, while one with a slightly modified architecture successfully learns the frustration-induced complex sign rule in the ground state and approximates the ground states with high precision. As an example of practical use of our method, we also apply the variational method to explore the ground state of an antiferromagnetic J1-J2 Heisenberg model.
Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan
2012-01-01
Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
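The correction-function idea above — add a short polynomial residual around each discontinuity of the naive waveform — is commonly implemented as a "polyBLEP". The sketch below uses the simple two-sample quadratic polyBLEP as a stand-in for the paper's integrated B-spline correction, which is higher order:

```python
def polyblep(t, dt):
    """Two-sample polynomial correction approximating the bandlimited step.

    t is the normalized phase in [0, 1); dt is the phase increment per sample."""
    if t < dt:                       # just after the discontinuity
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:                 # just before the discontinuity
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def sawtooth(f0, fs, n):
    """Naive and polyBLEP-corrected sawtooth waveforms of n samples."""
    dt = f0 / fs
    phase = 0.0
    naive, corrected = [], []
    for _ in range(n):
        s = 2.0 * phase - 1.0
        naive.append(s)
        corrected.append(s - polyblep(phase, dt))   # subtract the residual
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return naive, corrected

naive, corrected = sawtooth(1000.0, 44100.0, 2048)
max_step = lambda w: max(abs(a - b) for a, b in zip(w, w[1:]))
```

The correction spreads each full-scale jump over two samples, which is exactly the mechanism that suppresses the high-frequency images responsible for audible aliasing.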
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dikin, I.
1994-12-31
We survey results on the convergence of the primal affine scaling method at solutions of a completely degenerate linear programming problem. Moreover, we study the case when the next approximation lies on the boundary of the affine scaling ellipsoid. Convergence of the successive approximations to an interior point u of the solution set of the dual problem is proved. The coordinates of the vector u are determined only by the input data of the problem; they do not depend on the choice of starting point.
Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul
2015-01-01
In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) through substitution is converted into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and to obtain the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems.
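The parameter-estimation step — minimizing a global error fitness with a GA — can be sketched with a minimal real-coded genetic algorithm. The sphere function below stands in for the Exp-function residual, and all GA settings (population size, tournament size, mutation rate) are illustrative assumptions:

```python
import random

def ga_minimize(fitness, dim, bounds, pop_size=40, gens=80, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism."""
    rnd = random.Random(seed)
    lo, hi = bounds
    pop = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        new = [best[:]]                                  # elitism
        while len(new) < pop_size:
            a = min(rnd.sample(pop, 3), key=fitness)     # tournament pick
            b = min(rnd.sample(pop, 3), key=fitness)
            w = rnd.random()                             # blend crossover
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if rnd.random() < 0.2:                       # mutate one gene
                i = rnd.randrange(dim)
                child[i] += rnd.gauss(0.0, 0.1 * (hi - lo))
            new.append(child)
        pop = new
        best = min(pop, key=fitness)
    return best, fitness(best)

# Stand-in objective for the global-error fitness function.
sol, err = ga_minimize(lambda v: sum(x * x for x in v), dim=3, bounds=(-5, 5))
```

With elitism the best fitness is monotonically non-increasing, and on this smooth objective the GA typically converges close to the global minimum at the origin.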
Calculations of reliability predictions for the Apollo spacecraft
NASA Technical Reports Server (NTRS)
Amstadter, B. L.
1966-01-01
A new method of reliability prediction for complex systems is defined. The calculation of both upper and lower bounds is involved, and a procedure for combining the two to yield an approximately true prediction value is presented. Both mission success and crew safety predictions can be calculated, and success probabilities can be obtained for individual mission phases or subsystems. Primary consideration is given to evaluating cases involving zero or one failure per subsystem, and the results of these evaluations are then used for analyzing multiple-failure cases. Extensive development is provided for the overall mission success and crew safety equations for both the upper and lower bounds.
Approximation Set of the Interval Set in Pawlak's Space
Wang, Jin; Wang, Guoyin
2014-01-01
The interval set is a special set that describes the uncertainty of an uncertain concept or set Z with two crisp boundaries, named the upper-bound set and the lower-bound set. In this paper, the concept of similarity degree between two interval sets is defined first, and then the similarity degrees between an interval set and its two approximations (i.e., the upper approximation set R̄(Z) and the lower approximation set R_(Z)) are presented. The disadvantages of using the upper approximation set R̄(Z) or the lower approximation set R_(Z) as the approximation set of the uncertain set (uncertain concept) Z are analyzed, and a new method for finding a better approximation set of the interval set Z is proposed. The conclusion that the approximation set R_0.5(Z) is an optimal approximation set of the interval set Z is drawn and proved. The change rules of R_0.5(Z) under different binary relations are analyzed in detail. Finally, a kind of crisp approximation set of the interval set Z is constructed. We hope this research work will promote the development of both the interval set model and granular computing theory. PMID:25177721
Metaheuristic optimisation methods for approximate solving of singular boundary value problems
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Yadav, Neha; Gao, Kaizhou; Su, Rong
2017-07-01
This paper presents a novel approximation technique based on metaheuristics and a weighted residual function (WRF) for tackling singular boundary value problems (BVPs) arising in engineering and science. With the aid of certain fundamental concepts of mathematics, Fourier series expansion, and metaheuristic optimisation algorithms, singular BVPs can be recast as an optimisation problem with the boundary conditions as constraints. The target is to minimise the WRF (i.e. the error function) constructed in the approximation of the BVP. The scheme uses the generational distance metric for quality evaluation of the approximate solutions against exact solutions (i.e. as an error evaluator metric). Four test problems, including two linear and two non-linear singular BVPs, are considered in this paper to check the efficiency and accuracy of the proposed algorithm. The optimisation task is performed using three different optimisers: particle swarm optimisation, the water cycle algorithm, and the harmony search algorithm. The optimisation results obtained show that the suggested technique can be successfully applied for the approximate solution of singular BVPs.
NASA Astrophysics Data System (ADS)
Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.
2016-11-01
This paper presents a new framework of identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that purely relies on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification problem of successive unobservable cyber attacks as a matrix decomposition problem of a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide its theoretical guarantee in the data identification. Numerical experiments on actual PMU data from the Central New York power system and synthetic data are conducted to verify the effectiveness of the proposed method.
Error analysis and correction of discrete solutions from finite element codes
NASA Technical Reports Server (NTRS)
Thurston, G. A.; Stein, P. A.; Knight, N. F., Jr.; Reissner, J. E.
1984-01-01
Many structures are an assembly of individual shell components. Therefore, results for stresses and deflections from finite element solutions for each shell component should agree with the equations of shell theory. This paper examines the problem of applying shell theory to the error analysis and the correction of finite element results. The general approach to error analysis and correction is discussed first. Relaxation methods are suggested as one approach to correcting finite element results for all or parts of shell structures. Next, the problem of error analysis of plate structures is examined in more detail. The method of successive approximations is adapted to take discrete finite element solutions and to generate continuous approximate solutions for postbuckled plates. Preliminary numerical results are included.
Analytical ground state for the Jaynes-Cummings model with ultrastrong coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Yuanwei; Institute of Theoretical Physics, Shanxi University, Taiyuan 030006; Chen Gang
2011-06-15
We present a generalized variational method to analytically obtain the ground-state properties of the Jaynes-Cummings model with ultrastrong coupling. An explicit expression for the ground-state energy, which agrees well with numerical simulation over a wide range of experimental parameters, is given. In particular, the introduced method can successfully solve the Jaynes-Cummings model with positive detuning (the atomic resonant level larger than the photon frequency), which cannot be treated by the adiabatic approximation or the generalized rotating-wave approximation. Finally, we also demonstrate analytically how to control the mean photon number by means of the current experimental parameters, including the photon frequency, the coupling strength, and especially the atomic resonant level.
Jakobsson, Hugo; Farmaki, Katerina; Sakinis, Augustinas; Ehn, Olof; Johannsson, Gudmundur; Ragnarsson, Oskar
2018-01-01
PURPOSE Primary aldosteronism (PA) is a common cause of secondary hypertension. Adrenal venous sampling (AVS) is the gold standard for assessing the laterality of PA, which is of paramount importance in deciding adequate treatment. AVS is a technically complicated procedure, with success rates ranging between 30% and 96%. The aim of this study was to investigate the success rate of AVS over time when performed by a single interventionalist. METHODS This was a retrospective study based on consecutive AVS procedures performed by a single operator between September 2005 and June 2016. Data on serum concentrations of aldosterone and cortisol from the right and left adrenal veins, the inferior vena cava, and a peripheral vein were collected and the selectivity index (SI) calculated. Successful AVS was defined as SI >5. RESULTS In total, 282 AVS procedures were performed on 269 patients, 168 men (62%) and 101 women (38%), with a mean age of 55±11 years (range, 26–78 years). Of the 282 AVS procedures, 259 were successful, giving an overall success rate of 92%. The most common reason for failure was inability to localize the right adrenal vein (n=16; 76%). The success rates were 63%, 82%, and 94% during the first, second, and third years, respectively. During the last 8 years the success rate was 95%, and on average 27 procedures were performed annually. CONCLUSION A satisfactory AVS success rate was achieved after approximately 36 procedures and was maintained by performing approximately 27 procedures annually. AVS should be limited to a few operators who perform a sufficiently large number of procedures to achieve, and maintain, a satisfactory success rate. PMID:29467114
Galler, Patrick; Limbeck, Andreas; Boulyga, Sergei F; Stingeder, Gerhard; Hirata, Takafumi; Prohaska, Thomas
2007-07-01
This work introduces a newly developed on-line flow injection (FI) Sr/Rb separation method as an alternative to the common manual Sr/matrix batch separation procedure, since total analysis time is often limited by sample preparation despite the fast data acquisition possible with inductively coupled plasma mass spectrometers (ICPMS). Separation columns containing approximately 100 µL of Sr-specific resin were used for on-line FI Sr/matrix separation with subsequent determination of 87Sr/86Sr isotope ratios by multiple-collector ICPMS. The memory effects exhibited by the Sr-specific resin, a major restriction on the repetitive use of this costly material, could successfully be overcome. The method was fully validated by means of certified reference materials. A set of two biological and six geological Sr- and Rb-bearing samples was successfully characterized for its 87Sr/86Sr isotope ratios with precisions of 0.01-0.04% 2 RSD (n = 5-10). Based on our measurements we suggest 87Sr/86Sr isotope ratios of 0.71315 ± 0.00016 (2 SD) and 0.70931 ± 0.00006 (2 SD) for the NIST SRM 1400 bone ash and the NIST SRM 1486 bone meal, respectively. Measured 87Sr/86Sr isotope ratios for five basalt samples are in excellent agreement with published data, with deviations from the published values ranging from 0 to 0.03%. A mica sample with a Rb/Sr ratio of approximately 1 was successfully characterized by the proposed method, with an 87Sr/86Sr isotope signature of 0.71824 ± 0.00029 (2 SD). Synthetic samples with Rb/Sr ratios of up to 10/1 could successfully be measured without significant interferences on mass 87, which would otherwise bias the accuracy and uncertainty of the obtained data.
Three-dimensional inversion of multisource array electromagnetic data
NASA Astrophysics Data System (ADS)
Tartaras, Efthimios
Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. 
I have also applied the method to helicopter-borne EM data collected by INCO Exploration over the Voisey's Bay area in Labrador, Canada. The results of the 3-D inversion successfully delineate the shallow massive sulfides and show that the method can produce reasonable results even in areas of complex geology and large resistivity contrasts.
Automated prediction of protein function and detection of functional sites from structure.
Pazos, Florencio; Sternberg, Michael J E
2004-10-12
Current structural genomics projects are yielding structures for proteins whose functions are unknown. Accordingly, there is a pressing requirement for computational methods of function prediction. Here we present PHUNCTIONER, an automatic method for structure-based function prediction using automatically extracted functional sites (residues associated with functions). The method relates proteins with the same function through structural alignments and extracts 3D profiles of conserved residues. Functional features to train the method are extracted from the Gene Ontology (GO) database. The method extracts these features from the entire GO hierarchy and hence is applicable across the whole range of function specificity. 3D profiles associated with 121 GO annotations were extracted. We tested the power of the method both for the prediction of function and for the extraction of functional sites. The success of function prediction by our method was compared with that of the standard homology-based method. In the zone of low sequence similarity (approximately 15%), our method assigns the correct GO annotation in 90% of the protein structures considered, approximately 20% higher than inheritance of function from the closest homologue.
A Day of Great Illumination: B. F. Skinner's Discovery of Shaping
ERIC Educational Resources Information Center
Peterson, Gail B.
2004-01-01
Despite the seminal studies of response differentiation by the method of successive approximation detailed in chapter 8 of "The Behavior of Organisms" (1938), B. F. Skinner never actually shaped an operant response by hand until a memorable incident of startling serendipity on the top floor of a flour mill in Minneapolis in 1943. That occasion…
NASA Astrophysics Data System (ADS)
Morris, Titus; Bogner, Scott
2015-10-01
The In-Medium Similarity Renormalization Group (IM-SRG) has been applied successfully not only to several closed-shell finite nuclei, but also, more recently, to produce effective shell model interactions that are competitive with phenomenological interactions in the sd shell. A recent alternative method for solving the IM-SRG equations, called the Magnus expansion, not only provides a computationally feasible route to producing observables, but also allows for approximate handling of induced three-body forces. Promising results for several systems, including finite nuclei, will be presented and discussed.
Sparse dynamics for partial differential equations
Schaeffer, Hayden; Caflisch, Russel; Hauck, Cory D.; Osher, Stanley
2013-01-01
We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations, which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high-frequency source terms. PMID:23533273
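The thresholding step at the heart of the scheme is the soft-shrinkage operator applied to every basis coefficient after each time step. A minimal sketch follows; the decaying "dynamics" are a made-up placeholder, not one of the paper's PDEs:

```python
def soft_threshold(c, lam):
    """Shrink one coefficient toward zero: S_lam(c) = sign(c) * max(|c| - lam, 0)."""
    if c > lam:
        return c - lam
    if c < -lam:
        return c + lam
    return 0.0

# Illustrative use mirroring the paper's scheme: after every time step of the
# dynamics (here a trivial stand-in decay), threshold the basis coefficients
# so that only the essential ones stay active.
coeffs = [1.0, 0.4, 0.05, -0.8, 0.02]
lam, decay = 0.1, 0.9
for _ in range(3):                               # three "time steps"
    coeffs = [decay * c for c in coeffs]         # placeholder dynamics
    coeffs = [soft_threshold(c, lam) for c in coeffs]
```

After a few steps the small coefficients have been driven exactly to zero while the large ones survive, which is precisely the compression effect the abstract describes.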
NASA Technical Reports Server (NTRS)
Butera, M. K.
1979-01-01
The success of remotely mapping the wetland vegetation of the southwestern coast of Florida is examined. A computerized technique to process aircraft and LANDSAT multispectral scanner data into vegetation classification maps was used. The cost effectiveness of this mapping technique was evaluated in terms of user requirements, accuracy, and cost. Results indicate that mangrove communities are classified most cost-effectively by the LANDSAT technique, with an accuracy of approximately 87 percent and a cost of approximately 3 cents per hectare, compared to $46.50 per hectare for conventional ground survey methods.
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
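The bound-constrained subproblem solver the abstract refers to can be illustrated with a minimal compass (pattern) search, including the derivative-free stopping rule based on the pattern size. The objective and bounds below are toy assumptions, not the augmented Lagrangian itself:

```python
def pattern_search(f, x, lo, hi, step=0.5, tol=1e-6):
    """Bound-constrained compass search: poll +/- step along each coordinate,
    move on improvement, otherwise halve the step; stop when the pattern size
    (the step) falls below tol -- a stopping criterion that needs no
    derivative information."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] = min(hi[i], max(lo[i], y[i] + d))  # project onto bounds
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2
    return x, fx

# Toy subproblem: minimize a quadratic over the box [0, 2] x [0, 2]; the
# unconstrained minimum (1, 3) lies outside, so the second bound is active.
xmin, fmin = pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] - 3) ** 2,
                            [0.0, 0.0], [0.0, 0.0], [2.0, 2.0])
```

The search converges to (1, 2) on the boundary of the box, illustrating how the pattern size serves as an implicit measure of how inexact the minimization is.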
Hanaoka, Nozomu; Matsutani, Minenosuke; Satoh, Masaaki; Ogawa, Motohiko; Shirai, Mutsunori; Ando, Shuji
2017-01-24
We developed a novel loop-mediated isothermal amplification (LAMP) method to detect Rickettsia spp., including Rickettsia prowazekii and R. typhi. Species-specific LAMP primers were developed for orthologous genes conserved among Rickettsia spp. The selected modified primers could detect all the Rickettsia spp. tested. The LAMP method successfully detected 100 DNA copies of Rickettsia spp. within approximately 60 min at 63°C. Therefore, this method may be an excellent tool for the early diagnosis of rickettsiosis in a laboratory or in the field.
NASA Astrophysics Data System (ADS)
Eriçok, Ozan Burak; Ertürk, Hakan
2018-07-01
Optical characterization of nanoparticle aggregates is a complex inverse problem that can be solved by deterministic or statistical methods. Previous studies showed that there exists a lower size limit of reliable characterization that depends on the wavelength of the light source used. In this study, these characterization limits are determined for light source wavelengths ranging from the ultraviolet to the near infrared (266-1064 nm), relying on numerical light scattering experiments. Two different measurement ensembles are considered: a collection of well-separated aggregates made up of same-sized particles, and one with a particle size distribution. Filippov's cluster-cluster algorithm is used to generate the aggregates, and the light scattering behavior is calculated by the discrete dipole approximation. A likelihood-free Approximate Bayesian Computation, relying on the Adaptive Population Monte Carlo method, is used for characterization. It is found that over the 266-1064 nm wavelength range, the successful characterization limit varies from 21 to 62 nm effective radius for monodisperse and polydisperse soot aggregates.
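The likelihood-free machinery named in the abstract can be illustrated in its simplest form, rejection ABC (the paper itself uses the more elaborate Adaptive Population Monte Carlo variant): draw a candidate parameter from the prior, simulate data, and keep the candidate only if a summary statistic of the simulation is within a tolerance of the observed summary. The Gaussian "simulator", flat prior, and tolerance below are illustrative stand-ins, not the light-scattering model:

```python
import random

random.seed(1)

def simulate(theta, n=50):
    """Stand-in stochastic forward model: noisy observations around theta."""
    return [random.gauss(theta, 1.0) for _ in range(n)]

observed = simulate(2.0)                  # "measurement" with true theta = 2
obs_mean = sum(observed) / len(observed)  # summary statistic of the data

accepted = []
while len(accepted) < 200:
    theta = random.uniform(-5, 5)         # draw from a flat prior
    sim = simulate(theta)
    # Accept if the simulated summary lands within eps of the observed one.
    if abs(sum(sim) / len(sim) - obs_mean) < 0.2:
        accepted.append(theta)

posterior_mean = sum(accepted) / len(accepted)
```

The accepted candidates approximate the posterior over theta without ever evaluating a likelihood, which is exactly what makes the approach attractive when the forward model (here, discrete dipole scattering) is only available as a simulator.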
NASA Technical Reports Server (NTRS)
Paknys, J. R.
1982-01-01
The reflector antenna may be thought of as an aperture antenna. The classical solution for the radiation pattern of such an antenna is found by the aperture integration (AI) method. Success with this method depends on how accurately the aperture currents are known beforehand. In the past, geometrical optics (GO) has been employed to find the aperture currents. This approximation is suitable for calculating the main beam and possibly the first few sidelobes. A better approximation is to use aperture currents calculated from the geometrical theory of diffraction (GTD). Integration of the GTD currents over an extended aperture yields more accurate results for the radiation pattern. This approach is useful when conventional AI and GTD solutions have no common region of validity, a problem that arises in reflector antennas. Two-dimensional models of parabolic reflectors are studied; however, the techniques discussed can be applied to any aperture antenna.
Development of programmable artificial neural networks
NASA Technical Reports Server (NTRS)
Meade, Andrew J.
1993-01-01
Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks (ANNs) are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time-consuming and unpredictable process. A general method is being developed to mate the adaptability of the ANN with the speed and precision of the digital computer. This method was successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need for examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.
NASA Astrophysics Data System (ADS)
Buchholz, Max; Grossmann, Frank; Ceotto, Michele
2018-03-01
We present and test an approximate method for the semiclassical calculation of vibrational spectra. The approach is based on the mixed time-averaging semiclassical initial value representation method, which is simplified to a form that contains a filter to remove contributions from approximately harmonic environmental degrees of freedom. This filter comes at no additional numerical cost, and it has no negative effect on the accuracy of peaks from the anharmonic system of interest. The method is successfully tested for a model Hamiltonian and then applied to the study of the frequency shift of iodine in a krypton matrix. Using a hierarchic model with up to 108 normal modes included in the calculation, we show how the dynamical interaction between iodine and krypton yields results for the lowest excited iodine peaks that reproduce experimental findings to a high degree of accuracy.
An inductive method for automatic generation of referring physician prefetch rules for PACS.
Okura, Yasuhiko; Matsumura, Yasushi; Harauchi, Hajime; Sukenobu, Yoshiharu; Kou, Hiroko; Kohyama, Syunsuke; Yasuda, Norihiro; Yamamoto, Yuichiro; Inamura, Kiyonari
2002-12-01
To prefetch images in a hospital-wide picture archiving and communication system (PACS), a rule must be devised to permit accurate selection of the examinations in which a patient's images are stored. We developed an inductive method to compose prefetch rules from practical data obtained in a hospital, using a decision tree algorithm. Our method was evaluated on data acquired at Osaka University Hospital over one month. The data collected consisted of 58,617 consultation reservations, 643,797 examination histories of patients, and 323,993 records of image requests in the PACS. Four parameters indicating whether the images of the patient were requested or not for each consultation reservation were derived from the database. As a result, the sensitivity for selecting consultations in which images were requested was approximately 0.8, and the specificity for accurately excluding consultations where images were not requested was approximately 0.7.
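The two evaluation figures reported (sensitivity ≈ 0.8, specificity ≈ 0.7) follow from the usual confusion-matrix definitions. A small sketch with hypothetical evaluation records, not the hospital's actual data:

```python
def sensitivity_specificity(pairs):
    """pairs: (rule_predicted_prefetch, images_actually_requested) booleans.
    Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for p, a in pairs if p and a)
    fn = sum(1 for p, a in pairs if not p and a)
    tn = sum(1 for p, a in pairs if not p and not a)
    fp = sum(1 for p, a in pairs if p and not a)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical evaluation records chosen to land on the abstract's ballpark
# figures: 10 consultations where images were requested, 10 where they were not.
records = ([(True, True)] * 8 + [(False, True)] * 2 +
           [(False, False)] * 7 + [(True, False)] * 3)
sens, spec = sensitivity_specificity(records)
```

Here `sens` is 0.8 and `spec` is 0.7, mirroring how the study's decision-tree rules would be scored against a month of consultation outcomes.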
NASA Astrophysics Data System (ADS)
Urano, Shoichi; Mori, Hiroyuki
This paper proposes a new technique for determining state values in power systems. Recently, it has become practical to carry out state estimation with data from PMUs (Phasor Measurement Units). The authors have developed a method for determining state values with an artificial neural network (ANN), considering topological observability in power systems. ANNs have the advantage of approximating nonlinear functions with high precision. The method evaluates pseudo-measurement state values for data that are lost in the power system. The method is successfully applied to the IEEE 14-bus system.
NASA Astrophysics Data System (ADS)
Doha, E. H.; Bhrawy, A. H.; Abdelkawy, M. A.; Van Gorder, Robert A.
2014-03-01
A Jacobi-Gauss-Lobatto collocation (J-GL-C) method, used in combination with the implicit Runge-Kutta method of fourth order, is proposed as a numerical algorithm for the approximation of solutions to nonlinear Schrödinger equations (NLSE) with initial-boundary data in 1+1 dimensions. Our procedure is implemented in two successive steps. In the first one, the J-GL-C is employed for approximating the functional dependence on the spatial variable, using (N-1) nodes of the Jacobi-Gauss-Lobatto interpolation which depends upon two general Jacobi parameters. The resulting equations together with the two-point boundary conditions induce a system of 2(N-1) first-order ordinary differential equations (ODEs) in time. In the second step, the implicit Runge-Kutta method of fourth order is applied to solve this temporal system. The proposed J-GL-C method, used in combination with the implicit Runge-Kutta method of fourth order, is employed to obtain highly accurate numerical approximations to four types of NLSE, including the attractive and repulsive NLSE and a Gross-Pitaevskii equation with space-periodic potential. The numerical results obtained by this algorithm have been compared with various exact solutions in order to demonstrate the accuracy and efficiency of the proposed method. Indeed, for relatively few nodes used, the absolute error in our numerical solutions is sufficiently small.
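The temporal stage of the algorithm is the classical fourth-order Runge-Kutta method applied to the collocation-generated ODE system. A minimal sketch of one such step, checked here on the scalar test problem y' = -y rather than the 2(N-1)-dimensional NLSE system:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    the temporal integrator used in the abstract's second stage."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Scalar test problem y' = -y, y(0) = 1, whose exact solution is exp(-t).
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
```

After 100 steps `y` matches exp(-1) to roughly ten digits, the fourth-order accuracy that, combined with spectral accuracy in space, keeps the overall absolute error small for relatively few nodes.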
Bussières, Philippe
2014-05-12
Because it is difficult to obtain transverse views of plant phloem sieve plate pores (which are short tubes) in order to estimate their number and diameters, a method based on longitudinal views is proposed. This method uses recent techniques for estimating the number and sizes of approximately circular objects from images given by slices perpendicular to the objects. Moreover, because such longitudinal views are obtained from slices that lie rather close to the plate centres, whereas pore size may vary with the pore's distance from the plate edge, a sieve plate reconstruction model was developed and incorporated in the method to account for this bias. The method was successfully tested with published longitudinal views of soybean phloem and an exceptional entire transverse view from the same tissue. The method was also validated with simulated slices through two sieve plates from Cucurbita and Phaseolus. This method will likely be useful for estimating and modelling the hydraulic conductivity and architecture of the plant phloem, and it could have applications for other materials with approximately cylindrical structures.
Salis, Howard; Kaznessis, Yiannis N
2005-12-01
Stochastic chemical kinetics more accurately describes the dynamics of "small" chemical systems, such as biological cells. Many real systems contain dynamical stiffness, which causes the exact stochastic simulation algorithm or other kinetic Monte Carlo methods to spend the majority of their time executing frequently occurring reaction events. Previous methods have successfully applied a type of probabilistic steady-state approximation by deriving an evolution equation, such as the chemical master equation, for the relaxed fast dynamics and using the solution of that equation to determine the slow dynamics. However, because the solution of the chemical master equation is limited to small, carefully selected, or linear reaction networks, an alternate equation-free method would be highly useful. We present a probabilistic steady-state approximation that separates the time scales of an arbitrary reaction network, detects the convergence of a marginal distribution to a quasi-steady-state, directly samples the underlying distribution, and uses those samples to accurately predict the state of the system, including the effects of the slow dynamics, at future times. The numerical method produces an accurate solution of both the fast and slow reaction dynamics while, for stiff systems, reducing the computational time by orders of magnitude. The developed theory makes no approximations on the shape or form of the underlying steady-state distribution and only assumes that it is ergodic. We demonstrate the accuracy and efficiency of the method using multiple interesting examples, including a highly nonlinear protein-protein interaction network. The developed theory may be applied to any type of kinetic Monte Carlo simulation to more efficiently simulate dynamically stiff systems, including existing exact, approximate, or hybrid stochastic simulation techniques.
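The baseline the proposed approximation accelerates is the exact stochastic simulation algorithm (Gillespie's direct method), whose cost is dominated by frequently firing reactions. A minimal sketch for a birth-death process with illustrative rates; the paper's steady-state approximation would replace the many fast events this loop spends its time on:

```python
import random

random.seed(2)

# Exact SSA (Gillespie direct method) for a birth-death process:
#   0 -> X at rate k_b,   X -> 0 at rate k_d * x.
def ssa(k_b, k_d, x0, t_end):
    t, x = 0.0, x0
    while t < t_end:
        a_birth, a_death = k_b, k_d * x
        a_total = a_birth + a_death
        t += random.expovariate(a_total)        # time to the next reaction
        if random.random() * a_total < a_birth:  # pick which reaction fires
            x += 1
        else:
            x -= 1
    return x

# The stationary mean of this process is k_b / k_d = 50; sample it by
# running many independent trajectories well past the relaxation time.
samples = [ssa(50.0, 1.0, 0, 20.0) for _ in range(200)]
mean_x = sum(samples) / len(samples)
```

Every reaction event costs one loop iteration here, which is why stiff networks with fast reversible reactions motivate the equation-free steady-state sampling the abstract describes.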
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, Mariana; Manolopoulos, David E.; Ceriotti, Michele
Two of the most successful methods presently available for simulating the quantum dynamics of condensed phase systems are centroid molecular dynamics (CMD) and ring polymer molecular dynamics (RPMD). Despite their conceptual differences, practical implementations of these methods differ in just two respects: the choice of the Parrinello-Rahman mass matrix, and whether or not a thermostat is applied to the internal modes of the ring polymer during the dynamics. Here, we explore a method which is halfway between the two approximations: we keep the path integral bead masses equal to the physical particle masses but attach a Langevin thermostat to the internal modes of the ring polymer during the dynamics. We justify this by showing analytically that the inclusion of an internal mode thermostat does not affect any of the established features of RPMD: thermostatted RPMD is equally valid with respect to everything that has actually been proven about the method as RPMD itself. In particular, because of the choice of bead masses, the resulting method is still optimal in the short-time limit, and the transition state approximation to its reaction rate theory remains closely related to the semiclassical instanton approximation in the deep quantum tunneling regime. In effect, there is a continuous family of methods with these properties, parameterised by the strength of the Langevin friction. Here, we explore numerically how the approximation to quantum dynamics depends on this friction, with a particular emphasis on vibrational spectroscopy. We find that a broad range of frictions approaching optimal damping give similar results, and that these results are immune to both the resonance problem of RPMD and the curvature problem of CMD.
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit to the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric, the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations. 
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
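The mechanics of a parametric likelihood approximation inside a standard Metropolis-Hastings sampler can be sketched with a deliberately simple stochastic simulator (a Bernoulli-trial model standing in for FORMIND, whose outputs are far richer): the summary statistic's distribution across replicate simulations is approximated by a normal, and that density serves as the likelihood. All model details here are illustrative assumptions, not the paper's setup:

```python
import math
import random

rng = random.Random(0)

def simulate(p, n_trials=50):
    """Toy stochastic simulator: number of successes in n_trials."""
    return sum(1 for _ in range(n_trials) if rng.random() < p)

def synthetic_loglik(p, observed, reps=100):
    """Fit a normal to the simulated summary statistic and evaluate the
    observed summary under it (the simulation-based likelihood)."""
    sims = [simulate(p) for _ in range(reps)]
    m = sum(sims) / reps
    var = sum((s - m) ** 2 for s in sims) / (reps - 1) + 1e-6
    return -0.5 * ((observed - m) ** 2 / var + math.log(2 * math.pi * var))

observed = 15                     # e.g. generated at true p = 0.3
p, ll = 0.5, synthetic_loglik(0.5, observed)
chain = []
for _ in range(500):              # Metropolis-Hastings over p
    q = p + rng.gauss(0.0, 0.05)
    if 0.0 < q < 1.0:
        llq = synthetic_loglik(q, observed)
        if math.log(rng.random()) < llq - ll:
            p, ll = q, llq
    chain.append(p)
posterior_mean = sum(chain[100:]) / len(chain[100:])
```

The chain concentrates near the parameter value that generated the observation, even though no closed-form likelihood was ever written down.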
ERIC Educational Resources Information Center
Heiser, Willem J.; And Others
1997-01-01
The least squares loss function of cluster differences scaling, originally defined only on residuals of pairs allocated to different clusters, is extended with a loss component for pairs allocated to the same cluster. Findings show that this makes the method equivalent to multidimensional scaling with cluster constraints on the coordinates. (SLD)
A walk through the approximations of ab initio multiple spawning
NASA Astrophysics Data System (ADS)
Mignolet, Benoit; Curchod, Basile F. E.
2018-04-01
Full multiple spawning offers an in principle exact framework for excited-state dynamics, where nuclear wavefunctions in different electronic states are represented by a set of coupled trajectory basis functions that follow classical trajectories. The couplings between trajectory basis functions can be approximated to treat molecular systems, leading to the ab initio multiple spawning method which has been successfully employed to study the photochemistry and photophysics of several molecules. However, a detailed investigation of its approximations and their consequences is currently missing in the literature. In this work, we simulate the explicit photoexcitation and subsequent excited-state dynamics of a simple system, LiH, and we analyze (i) the effect of the ab initio multiple spawning approximations on different observables and (ii) the convergence of the ab initio multiple spawning results towards numerically exact quantum dynamics upon a progressive relaxation of these approximations. We show that, despite the crude character of the approximations underlying ab initio multiple spawning for this low-dimensional system, the qualitative excited-state dynamics is adequately captured, and affordable corrections can further be applied to ameliorate the coupling between trajectory basis functions.
Adaptive control using neural networks and approximate models.
Narendra, K S; Mukhopadhyay, S
1997-01-01
The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.
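The control-input linearity the paper exploits can be illustrated with a toy plant that is already of the approximate-model form. In the paper, f and g below would be neural-network approximations identified from data; here they are assumed known, so the one-step controller inversion is exact. All numbers are illustrative:

```python
import math

# Toy plant of the approximate-model form: y(k+1) = f(y(k)) + g(y(k)) * u(k)
f = lambda y: 0.8 * math.sin(y)
g = lambda y: 1.2

def control(y, r):
    """Because the model is linear in the control input u, the control law
    is an algebraic inversion rather than a nonlinear root-find."""
    return (r - f(y)) / g(y)

r = 1.0                  # reference to track
y = 0.0
history = []
for k in range(10):
    u = control(y, r)
    y = f(y) + g(y) * u  # plant update
    history.append(y)
tracking_error = abs(history[-1] - r)
```

Were the model nonlinear in u(k), each control step would instead require an iterative solve, which is exactly the computational burden the approximate models avoid.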
On a method for generating inequalities for the zeros of certain functions
NASA Astrophysics Data System (ADS)
Gatteschi, Luigi; Giordano, Carla
2007-10-01
In this paper we describe a general procedure which yields inequalities satisfied by the zeros of a given function. The method requires the knowledge of a two-term approximation of the function with a bound for the error term. The method was successfully applied many years ago [L. Gatteschi, On the zeros of certain functions with application to Bessel functions, Nederl. Akad. Wetensch. Proc. Ser. 55(3) (1952), Indag. Math. 14 (1952) 224-229], and again more recently [L. Gatteschi and C. Giordano, Error bounds for McMahon's asymptotic approximations of the zeros of the Bessel functions, Integral Transforms and Special Functions, 10 (2000) 41-56], to the zeros of the Bessel functions of the first kind. Here, we present the results of the application of the method to get inequalities satisfied by the zeros of the derivative of the function . This function plays an important role in the asymptotic study of the stationary points of the solutions of certain differential equations.
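The specific function treated is elided in this record, but the two-term-approximation-plus-error-bound idea can be illustrated on the case the cited papers treat: McMahon's expansion for Bessel zeros. Below, the two-term McMahon approximation to the first zero of J0 is checked against a zero located by bisection on the power series; the 0.01 tolerance stands in for a rigorous error bound:

```python
import math

def j0(x, terms=30):
    """Bessel J0 via its power series (ample accuracy for x <= 3)."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -(x * x / 4.0) / ((k + 1) ** 2)
    return s

# Two-term McMahon approximation for the s-th zero of J0:
#   j_{0,s} ~ beta + 1/(8*beta),  beta = (s - 1/4) * pi
s = 1
beta = (s - 0.25) * math.pi
mcmahon = beta + 1.0 / (8.0 * beta)

# Locate the true first zero by bisection: j0(2) > 0 > j0(3)
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if j0(mid) > 0.0:
        lo = mid
    else:
        hi = mid
true_zero = 0.5 * (lo + hi)   # ~2.404826
gap = mcmahon - true_zero     # sign and size give the inequality
```

The sign of the gap is the kind of inequality the procedure produces: here the two-term approximation lies above the true zero, and bounding the dropped remainder bounds the gap.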
Radiative Transfer Model for Operational Retrieval of Cloud Parameters from DSCOVR-EPIC Measurements
NASA Astrophysics Data System (ADS)
Yang, Y.; Molina Garcia, V.; Doicu, A.; Loyola, D. G.
2016-12-01
The Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR) measures the radiance in the backscattering region. To make sure that all details in the backward glory are covered, a large number of streams is required by a standard radiative transfer model based on the discrete ordinates method. Even the use of the delta-M scaling and the TMS correction does not substantially reduce the number of streams. The aim of this work is to analyze the capability of a fast radiative transfer model to retrieve cloud parameters operationally from EPIC measurements. The radiative transfer model combines the discrete ordinates method with the matrix exponential for the computation of radiances and the matrix operator method for the calculation of the reflection and transmission matrices. Standard acceleration techniques, such as the use of normalized right and left eigenvectors, the telescoping technique, the Padé approximation, and the successive-order-of-scattering approximation, are implemented. In addition, the model may compute the reflection matrix of the cloud by means of the asymptotic theory, and may use the equivalent Lambertian cloud model. The various approximations are analyzed from the point of view of efficiency and accuracy.
Sky and Elemental Planetary Mapping Via Gamma Ray Emissions
NASA Technical Reports Server (NTRS)
Roland, John M.
2011-01-01
Low-energy gamma ray emissions (approximately 30 keV to approximately 30 MeV) are significant to astrophysics because many interesting objects emit their primary energy in this regime. As such, there has been increasing demand for a complete map of the gamma ray sky, but many experiments to do so have encountered obstacles. Using an innovative method of applying the Radon Transform to data from BATSE (the Burst And Transient Source Experiment) on NASA's CGRO (Compton Gamma-Ray Observatory) mission, we have circumvented many of these issues and successfully localized many known sources to 0.5-1 deg accuracy. Our method, which is based on a simple 2-dimensional planar back-projection approximation of the inverse Radon transform (familiar from medical CAT-scan technology), can thus be used to image the entire sky and locate new gamma ray sources, specifically in energy bands between 200 keV and 2 MeV which have not been well surveyed to date. Samples of these results will be presented. This same technique can also be applied to elemental planetary surface mapping via gamma ray spectroscopy. Due to our method's simplicity and power, it could potentially improve a current map's resolution by a significant factor.
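A minimal sketch of planar back-projection: each 1-D projection of a point source is smeared back across the image along its viewing direction, and the smears intersect at the source. The Gaussian detector response, grid, and source location below are illustrative assumptions, not BATSE specifics:

```python
import math

src = (5.0, -3.0)                         # hypothetical point source
angles = [k * math.pi / 12 for k in range(12)]
sigma = 1.0                               # assumed detector blur

def projection(theta, s):
    """Idealized 1-D detector reading at coordinate s for view angle theta."""
    s_src = src[0] * math.cos(theta) + src[1] * math.sin(theta)
    return math.exp(-0.5 * ((s - s_src) / sigma) ** 2)

# Back-project: accumulate every view's reading at each pixel's projected
# coordinate; the maximum of the accumulated image marks the source.
best_pixel, best_val = None, -1.0
for ix in range(-10, 11):
    for iy in range(-10, 11):
        val = sum(projection(th, ix * math.cos(th) + iy * math.sin(th))
                  for th in angles)
        if val > best_val:
            best_pixel, best_val = (ix, iy), val
```

Only at the true source do all twelve smears add coherently, which is why the simple back-projection already yields sub-degree-scale localization in the abstract's application.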
Fast multilevel radiative transfer
NASA Astrophysics Data System (ADS)
Paletou, Frédéric; Léger, Ludovick
2007-01-01
The vast majority of recent advances in the field of numerical radiative transfer relies on approximate operator methods, better known in astrophysics as Accelerated Lambda Iteration (ALI). A class of iterative schemes superior in rate of convergence, the Gauss-Seidel and successive overrelaxation methods, was therefore quite naturally introduced into the field of radiative transfer by Trujillo Bueno & Fabiani Bendicho (1995); it was thoroughly described for the non-LTE two-level atom case. We describe hereafter in detail how such methods can be generalized when dealing with non-LTE unpolarised radiation transfer with multilevel atomic models, in monodimensional geometry.
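The convergence ordering invoked above (Jacobi slower than Gauss-Seidel, which is slower than optimally relaxed SOR) can be seen on any model elliptic system; the 1-D Poisson matrix below is a generic stand-in, not the radiative transfer operator itself:

```python
import math

n, tol, max_it = 20, 1e-8, 10000
b = [1.0] * n                 # right-hand side of tridiag(-1, 2, -1) x = b

def residual(x):
    r = 0.0
    for i in range(n):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        r = max(r, abs(b[i] - (2.0 * x[i] - left - right)))
    return r

def sweep_count(omega, jacobi=False):
    """Iterations to reach tol; omega=1 with jacobi=False is Gauss-Seidel."""
    x = [0.0] * n
    for it in range(1, max_it + 1):
        old = x[:] if jacobi else x       # Jacobi reads only old values
        for i in range(n):
            left = old[i - 1] if i > 0 else 0.0
            right = old[i + 1] if i < n - 1 else 0.0
            gs = 0.5 * (b[i] + left + right)
            x[i] = (1.0 - omega) * x[i] + omega * gs
        if residual(x) < tol:
            return it
    return max_it

rho = math.cos(math.pi / (n + 1))                   # Jacobi spectral radius
omega_opt = 2.0 / (1.0 + math.sqrt(1.0 - rho * rho))
it_jacobi = sweep_count(1.0, jacobi=True)
it_gs = sweep_count(1.0)
it_sor = sweep_count(omega_opt)
```

Gauss-Seidel roughly halves the Jacobi iteration count, and optimally relaxed SOR cuts it by an order of magnitude, mirroring the gains reported for radiative transfer sweeps.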
NASA Technical Reports Server (NTRS)
Kanemasu, E. T.; Asrar, Ghassem; Myneni, Ranga; Martin, Robert, Jr.; Burnett, R. Bruce
1987-01-01
Research activities for the following study areas are summarized: single scattering of parallel direct and axially symmetric diffuse solar radiation in vegetative canopies; the use of successive orders of scattering approximations (SOSA) for treating multiple scattering in a plant canopy; reflectance of a soybean canopy using the SOSA method; and C-band scatterometer measurements of the Konza tallgrass prairie.
Blind One-Bit Compressive Sampling
2013-01-17
14] Q. Li, C. A. Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse...methods for nonconvex optimization on the unit sphere and has provable convergence guarantees. Binary iterative hard thresholding (BIHT) algorithms were... Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0
The General Necessary Condition for the Validity of Dirac's Transition Perturbation Theory
NASA Technical Reports Server (NTRS)
Quang, Nguyen Vinh
1996-01-01
For the first time, the general necessary condition for the validity of Dirac's method is explicitly established from the natural requirements of the successive approximation. It is proved that the concept of 'the transition probability per unit time' is not valid. The 'super-platinum rules' for calculating the transition probability are derived for the case of an arbitrarily strong time-independent perturbation.
Technical Note: Approximate Bayesian parameterization of a complex tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2013-08-01
Inverse parameter estimation of process-based models is a long-standing problem in ecology and evolution. A key problem of inverse parameter estimation is to define a metric that quantifies how well model predictions fit the data. Such a metric can be expressed by general cost or objective functions, but statistical inversion approaches are based on a particular metric, the probability of observing the data given the model, known as the likelihood. Deriving likelihoods for dynamic models requires making assumptions about the probability for observations to deviate from mean model predictions. For technical reasons, these assumptions are usually derived without explicit consideration of the processes in the simulation. Only in recent years have new methods become available that allow generating likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual field data generated by the forest model. We analyze the results of the parameter estimation, examine the sensitivity towards the choice and aggregation of model outputs and observed data (summary statistics), and show results from using this method to fit the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations.
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
NASA Astrophysics Data System (ADS)
Li, Xinji; Hui, Mei; Zhao, Zhu; Liu, Ming; Dong, Liquan; Kong, Lingqin; Zhao, Yuejin
2018-05-01
A differential computation method is presented to improve the precision of calibration for the coaxial reverse Hartmann test (RHT). In the calibration, the accuracy of the distance measurement greatly influences the surface shape test, as demonstrated in the mathematical analyses. However, high-precision absolute distance measurement is difficult in the calibration. Thus, a differential computation method that only requires the relative distance was developed. In the proposed method, a liquid crystal display screen successively displayed two regular dot matrix patterns with different dot spacing. In a special case, images on the detector exhibited similar centroid distributions during the reflector translation. Thus, the critical value of the relative displacement distance and the centroid distributions of the dots on the detector were utilized to establish the relationship between the rays at certain angles and the detector coordinates. Experiments revealed the approximately linear behavior of the centroid variation with the relative displacement distance. With the differential computation method, we increased the precision of the traditional calibration to 10⁻⁵ rad root mean square. The precision of the RHT was increased by approximately 100 nm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, Jennifer N.; Hwang, Wonjun; Horn, John
We report that the rupture of an intracranial aneurysm, which can result in severe mental disabilities or death, affects approximately 30,000 people in the United States annually. The traditional surgical method of treating these arterial malformations involves a full craniotomy procedure, wherein a clip is placed around the aneurysm neck. In recent decades, research and device development have focused on new endovascular treatment methods to occlude the aneurysm void space. These methods, some of which are currently in clinical use, utilize metal, polymeric, or hybrid devices delivered via catheter to the aneurysm site. In this review, we present several such devices, including those that have been approved for clinical use, and some that are currently in development. We present several design requirements for a successful aneurysm filling device and discuss the success or failure of current and past technologies. Lastly, we also present novel polymeric based aneurysm filling methods that are currently being tested in animal models that could result in superior healing.
Cambron, Jerrilyn A; Dexheimer, Jennifer M; Chang, Mabel; Cramer, Gregory D
2010-01-01
The purpose of this article is to describe the methods for recruitment in a clinical trial on chiropractic care for lumbar spinal stenosis. This randomized, placebo-controlled pilot study investigated the efficacy of different amounts of total treatment dosage over 6 weeks in 60 volunteer subjects with lumbar spinal stenosis. Subjects were recruited for this study through several media venues, focusing on successful and cost-effective strategies. Included in our efforts were radio advertising, newspaper advertising, direct mail, and various other low-cost initiatives. Of the 1211 telephone screens, 60 responders (5.0%) were randomized into the study. The most successful recruitment method was radio advertising, generating more than 64% of the calls (776 subjects). Newspaper and magazine advertising generated approximately 9% of all calls (108 subjects), and direct mail generated less than 7% (79 subjects). The total direct cost for recruitment was $40,740, or $679 per randomized patient. The costs per randomization were highest for direct mail ($995 per randomization) and lowest for newspaper/magazine advertising ($558 per randomization). Success of recruitment methods may vary based on target population and location. Planning of recruitment efforts is essential to the success of any clinical trial. Copyright 2010 National University of Health Sciences. Published by Mosby, Inc. All rights reserved.
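The per-patient figures quoted above follow from simple division; a quick check of the reported arithmetic, with the numbers taken directly from the abstract:

```python
total_cost = 40740                  # total direct recruitment cost ($)
randomized = 60                     # patients randomized
screens = 1211                      # telephone screens
radio_calls, news_calls, mail_calls = 776, 108, 79

cost_per_randomized = total_cost / randomized        # $679 per patient
randomization_rate = 100.0 * randomized / screens    # ~5.0% of screens
radio_share = 100.0 * radio_calls / screens          # >64% of all calls
```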
PPM mixtures of formaldehyde in gas cylinders: Stability and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, K.C.; Miller, S.B.; Patterson, L.M.
1999-07-01
Scott Specialty Gases has been successful in producing stable calibration gases of formaldehyde at low concentration. Critical to this success has been the development of a treatment process for high pressure aluminum cylinders. Formaldehyde cylinders having concentrations of 20 ppm and 4 ppm were found to show only a small decline in concentration over a period of approximately 12 months. Since no NIST traceable formaldehyde standards (or Standard Reference Materials) are available, all Scott's formaldehyde cylinders were originally certified by the traditional impinger method. This method involves an extremely tedious purification procedure for 2,4-dinitrophenylhydrazine (2,4-DNPH). A modified version of the impinger method has been developed that does not require extensive reagent purification for formaldehyde analysis. Extremely low formaldehyde blanks have been obtained with the modified method. The HPLC conditions in the original method were used for chromatographic separations. The modified method results in a lower analytical uncertainty for the formaldehyde standard mixtures. Consequently, it is possible to discern small differences between analytical results that are important for stability studies.
Herschlag, Gregory J; Mitran, Sorin; Lin, Guang
2015-06-21
We develop a hierarchy of approximations to the master equation for systems that exhibit translational invariance and finite-range spatial correlation. Each approximation within the hierarchy is a set of ordinary differential equations that considers spatial correlations of varying lattice distance; the assumption is that the full system will have finite spatial correlations and thus the behavior of the models within the hierarchy will approach that of the full system. We provide evidence of this convergence in the context of one- and two-dimensional numerical examples. Lower levels within the hierarchy that consider shorter spatial correlations are shown to be up to three orders of magnitude faster than traditional kinetic Monte Carlo methods (KMC) for one-dimensional systems, while predicting similar system dynamics and steady states as KMC methods. We then test the hierarchy on a two-dimensional model for the oxidation of CO on RuO2(110), showing that low-order truncations of the hierarchy efficiently capture the essential system dynamics. By considering sequences of models in the hierarchy that account for longer spatial correlations, successive model predictions may be used to establish empirical error estimates. The hierarchy may be thought of as a class of generalized phenomenological kinetic models, since each element of the hierarchy approximates the master equation and the lowest level in the hierarchy is identical to a simple existing phenomenological kinetic model.
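At the bottom of such a hierarchy, where sites are treated as uncorrelated, the lattice model and the phenomenological rate equation agree. A minimal KMC sketch of non-interacting adsorption/desorption (rates and lattice size are illustrative assumptions) reproduces the mean-field steady-state coverage ka/(ka+kd), since without lateral interactions the mean-field closure is exact:

```python
import random

rng = random.Random(42)
ka, kd = 1.0, 1.0                 # adsorption / desorption rates (illustrative)
nsites = 200
occupied = [False] * nsites
n_occ = 0
t = 0.0
cov_time_integral, t_measured = 0.0, 0.0

for _ in range(20000):            # rejection-free KMC over total propensities
    a_ads = ka * (nsites - n_occ)
    a_des = kd * n_occ
    a0 = a_ads + a_des
    dt = rng.expovariate(a0)
    if t > 5.0:                   # discard the transient, then time-average
        cov_time_integral += (n_occ / nsites) * dt
        t_measured += dt
    t += dt
    if rng.random() * a0 < a_ads:           # adsorb at a random empty site
        i = rng.randrange(nsites)
        while occupied[i]:
            i = rng.randrange(nsites)
        occupied[i], n_occ = True, n_occ + 1
    else:                                   # desorb from a random filled site
        i = rng.randrange(nsites)
        while not occupied[i]:
            i = rng.randrange(nsites)
        occupied[i], n_occ = False, n_occ - 1

kmc_coverage = cov_time_integral / t_measured
mean_field = ka / (ka + kd)       # lowest level of the hierarchy
```

Higher levels of the hierarchy become necessary exactly when interactions make site occupancies correlated and this agreement breaks down.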
Partial Wave Dispersion Relations: Application to Electron-Atom Scattering
NASA Technical Reports Server (NTRS)
Temkin, A.; Drachman, Richard J.
1999-01-01
In this Letter we propose the use of partial wave dispersion relations (DR's) as the way of solving the long-standing problem of correctly incorporating exchange in a valid DR for electron-atom scattering. In particular a method is given for effectively calculating the contribution of the discontinuity and/or poles of the partial wave amplitude which occur in the negative E plane. The method is successfully tested in three cases: (i) the analytically solvable exponential potential, (ii) the Hartree potential, and (iii) the S-wave exchange approximation for electron-hydrogen scattering.
Numerical solution of differential equations by artificial neural networks
NASA Technical Reports Server (NTRS)
Meade, Andrew J., Jr.
1995-01-01
Conventionally programmed digital computers can process numbers with great speed and precision, but do not easily recognize patterns or imprecise or contradictory data. Instead of being programmed in the conventional sense, artificial neural networks (ANN's) are capable of self-learning through exposure to repeated examples. However, the training of an ANN can be a time consuming and unpredictable process. A general method is being developed by the author to mate the adaptability of the ANN with the speed and precision of the digital computer. This method has been successful in building feedforward networks that can approximate functions and their partial derivatives from examples in a single iteration. The general method also allows the formation of feedforward networks that can approximate the solution to nonlinear ordinary and partial differential equations to desired accuracy without the need of examples. It is believed that continued research will produce artificial neural networks that can be used with confidence in practical scientific computing and engineering applications.
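The trial-solution construction behind such ODE solvers can be sketched with a basis expansion that is linear in its free coefficients, so the "training" collapses to a single least-squares solve (echoing the single-iteration claim). Here the trial form y(x) = 1 + x·Σ c_k x^k enforces y(0) = 1 by construction for y' = -y; the polynomial basis is an illustrative stand-in for the paper's feedforward network:

```python
import math

K = 4                                    # number of free coefficients
xs = [i / 19.0 for i in range(20)]       # collocation points on [0, 1]

# Residual of y' + y = 0 with y(x) = 1 + sum_k c_k x^(k+1) is linear in c:
#   R(x) = 1 + sum_k c_k * ((k+1) x^k + x^(k+1))
M = [[(k + 1) * x ** k + x ** (k + 1) for k in range(K)] for x in xs]
d = [-1.0] * len(xs)

# Normal equations (M^T M) c = M^T d, solved by Gaussian elimination
A = [[sum(M[i][p] * M[i][q] for i in range(len(xs))) for q in range(K)]
     for p in range(K)]
rhs = [sum(M[i][p] * d[i] for i in range(len(xs))) for p in range(K)]
for col in range(K):                     # forward elimination with pivoting
    piv = max(range(col, K), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(col + 1, K):
        fac = A[r][col] / A[col][col]
        for c in range(col, K):
            A[r][c] -= fac * A[col][c]
        rhs[r] -= fac * rhs[col]
coef = [0.0] * K
for r in range(K - 1, -1, -1):           # back substitution
    coef[r] = (rhs[r] - sum(A[r][c] * coef[c] for c in range(r + 1, K))) / A[r][r]

def y(x):
    return 1.0 + sum(coef[k] * x ** (k + 1) for k in range(K))

err_at_1 = abs(y(1.0) - math.exp(-1.0))  # compare with the exact solution e^-x
```

No example solutions were supplied anywhere: the residual of the differential equation itself, evaluated at collocation points, served as the training signal.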
Parallel Computing of Upwelling in a Rotating Stratified Flow
NASA Astrophysics Data System (ADS)
Cui, A.; Street, R. L.
1997-11-01
A code for three-dimensional, unsteady, incompressible, turbulent flow has been implemented on the IBM SP2, using message passing. The effects of rotation and variable density are included. A finite volume method is used to discretize the Navier-Stokes equations in general curvilinear coordinates on a non-staggered grid. All the spatial derivatives are approximated using second-order central differences, with the exception of the convection terms, which are handled with special upwind-difference schemes. The semi-implicit, second-order accurate, time-advancement scheme employs the Adams-Bashforth method for the explicit terms and Crank-Nicolson for the implicit terms. A multigrid method, with the four-color ZEBRA as smoother, is used to solve the Poisson equation for pressure, while the momentum equations are solved with an approximate factorization technique. The code was successfully validated for a variety of test cases. Simulations of a laboratory model of coastal upwelling in a rotating annulus are in progress and will be presented.
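The time-advancement recipe above (Adams-Bashforth for explicit terms, Crank-Nicolson for implicit ones) can be checked for second-order accuracy on a scalar model problem u' = -u, splitting the right-hand side into an "explicit" part -a·u and an "implicit" part -d·u; the split and the exact-solution startup step are illustrative assumptions:

```python
import math

def advance(dt, t_end=1.0, a=0.5, d=0.5):
    """AB2 for the explicit part, Crank-Nicolson for the implicit part,
    applied to u' = -(a + d) * u with a + d = 1 and u(0) = 1."""
    n = round(t_end / dt)
    u_prev = 1.0
    u = math.exp(-dt)            # startup step taken from the exact solution
    for _ in range(n - 1):
        u_new = (u
                 + dt * (-a) * (1.5 * u - 0.5 * u_prev)   # Adams-Bashforth 2
                 + 0.5 * dt * (-d) * u)                   # CN, explicit half
        u_new /= (1.0 + 0.5 * dt * d)                     # CN, implicit half
        u_prev, u = u, u_new
    return u

err_coarse = abs(advance(0.01) - math.exp(-1.0))
err_fine = abs(advance(0.005) - math.exp(-1.0))
order = math.log(err_coarse / err_fine, 2.0)   # ~2 for a second-order scheme
```

Halving the step size cuts the error by roughly a factor of four, the signature of the combined scheme's second-order accuracy.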
Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization
NASA Astrophysics Data System (ADS)
Zhang, Tao; Tang, Zhenmin; Liu, Qing
2017-05-01
Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in the standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves low rank property via the new formulation with weighted Schatten-p norm and Lq norm (WSPQ). Specifically, the nuclear norm is generalized to be the Schatten-p norm and different weights are assigned to the singular values, and thus it can approximate the rank function more accurately. In addition, Lq norm is further incorporated into WSPQ to model different noises and improve the robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.
Neural network for solving convex quadratic bilevel programming problems.
He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie
2014-03-01
In this paper, using the idea of successive approximation, we propose a neural network to solve convex quadratic bilevel programming problems (CQBPPs), modeled by a nonautonomous differential inclusion. Different from the existing neural network for CQBPPs, the model has the fewest state variables and a simple structure. Based on the theory of nonsmooth analysis, differential inclusions, and Lyapunov-like methods, the sequence of limit equilibrium points of the proposed neural network converges approximately to an optimal solution of the CQBPP under certain conditions. Finally, simulation results on two numerical examples and the portfolio selection problem show the effectiveness and performance of the proposed neural network. Copyright © 2013 Elsevier Ltd. All rights reserved.
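The successive-approximation idea at the heart of such constructions, in its simplest discrete form: iterate x ← g(x) for a contraction g and the sequence converges to the unique fixed point. Here g = cos is an illustrative stand-in for the network's state update, not the paper's dynamics:

```python
import math

def successive_approximation(g, x0, iterations=100):
    """Iterate x <- g(x); for a contraction this converges to the unique
    fixed point (Banach fixed-point theorem)."""
    x = x0
    for _ in range(iterations):
        x = g(x)
    return x

fixed_point = successive_approximation(math.cos, 1.0)  # ~0.7390851 (Dottie number)
residual = abs(math.cos(fixed_point) - fixed_point)
```

Convergence is geometric at rate |g'| evaluated near the fixed point, which is what makes a small number of successive approximations sufficient in practice.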
Time-dependent preparation of gelatin-stabilized silver nanoparticles by pulsed Nd:YAG laser
NASA Astrophysics Data System (ADS)
Darroudi, Majid; Ahmad, M. B.; Zamiri, Reza; Abdullah, A. H.; Ibrahim, N. A.; Sadrolhosseini, A. R.
2011-03-01
Colloidal silver nanoparticles (Ag-NPs) were successfully prepared using a nanosecond pulsed Nd:YAG laser, λ = 1064 nm, with a laser fluence of approximately 360 mJ/pulse, in an aqueous gelatin solution. In this work, gelatin was used as a stabilizer, and the size and optical absorption properties of the samples were studied as a function of the laser ablation time. The results from UV-vis spectroscopy demonstrated that the mean diameter of the Ag-NPs decreases as the laser ablation time increases. The Ag-NPs have mean diameters ranging from approximately 10 nm to 16 nm. Compared with other preparation methods, this approach is clean, rapid, and simple to use.
A robust return-map algorithm for general multisurface plasticity
Adhikary, Deepak P.; Jayasundara, Chandana T.; Podgorney, Robert K.; ...
2016-06-16
Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without using the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm results in complete success. Since it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. Furthermore, the main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting.
3D ultrasound computer tomography: update from a clinical study
NASA Astrophysics Data System (ADS)
Hopp, T.; Zapf, M.; Kretzek, E.; Henrich, J.; Tukalo, A.; Gemmeke, H.; Kaiser, C.; Knaudt, J.; Ruiter, N. V.
2016-04-01
Ultrasound Computer Tomography (USCT) is a promising new imaging method for breast cancer diagnosis. We developed a 3D USCT system and tested it in a pilot study with encouraging results: 3D USCT was able to depict two carcinomas, which were present in contrast enhanced MRI volumes serving as ground truth. To overcome severe differences in the breast shape, an image registration was applied. We analyzed the correlation between average sound speed in the breast and the breast density estimated from segmented MRIs and found a positive correlation with R=0.70. Based on the results of the pilot study we now carry out a successive clinical study with 200 patients. For this we integrated our reconstruction methods and image post-processing into a comprehensive workflow. It includes a dedicated DICOM viewer for interactive assessment of fused USCT images. A new preview mode now allows intuitive and faster patient positioning. We updated the USCT system to decrease the data acquisition time by approximately a factor of two and to increase the penetration depth of the breast into the USCT aperture by 1 cm. Furthermore, the compute-intensive reflectivity reconstruction was considerably accelerated, now allowing a sub-millimeter volume reconstruction in approximately 16 minutes. These updates made it possible to successfully image the first patients in our ongoing clinical study.
Simple skin-stretching device in assisted tension-free wound closure
Cheng, Li-Fu; Lee, Jiunn-Tat; Hsu, Honda; Wu, Meng-Si
2017-01-01
Background: Numerous conventional wound reconstruction methods such as wound undermining with direct suture, skin graft, and flap surgery can be used to treat large wounds. The adequate undermining of the skin flaps of a wound is a commonly used technique for achieving the closure of large tension wounds; however, the use of tension to approximate and suture the skin flaps can cause ischemic marginal necrosis. The purpose of this study is to use elastic rubber bands to relieve the tension of direct wound closure for simultaneously minimizing the risks of wound dehiscence and wound edge ischemia that lead to necrosis.
Materials and Methods: This retrospective study was conducted to evaluate our clinical experiences with 22 large wounds, which involved performing primary closures under a considerable amount of tension by using elastic rubber bands in a skin-stretching technique following a wide undermining procedure. Assessment of the results entailed complete wound healing and related complications.
Results: All 22 wounds in our study showed fair to good results except for one. The mean success rate was approximately 95.45%.
Conclusion: The simple skin-stretching design enabled tension-free skin closure, which pulled the bilateral undermining skin flaps as bilateral fasciocutaneous advancement flaps. The skin-stretching technique was generally successful. PMID:28195891
Samejima, Keijiro; Otani, Masahiro; Murakami, Yasuko; Oka, Takami; Kasai, Misao; Tsumoto, Hiroki; Kohda, Kohfuku
2007-10-01
A sensitive method for the determination of polyamines in mammalian cells was described using electrospray ionization and a time-of-flight mass spectrometer. This method was 50-fold more sensitive than the previous method using ionspray ionization and a quadrupole mass spectrometer. The method employed partial purification and derivatization of the polyamines, but allowed the measurement of multiple samples containing picomole amounts of polyamines. The time required for data acquisition of one sample was approximately 2 min. The method was successfully applied to the determination of reduced spermidine and spermine contents in cultured cells under inhibition of aminopropyltransferases. In addition, a new, more suitable internal standard was proposed for tracer experiments using (15)N-labeled polyamines.
Mixed models and reduction method for dynamic analysis of anisotropic shells
NASA Technical Reports Server (NTRS)
Noor, A. K.; Peters, J. M.
1985-01-01
A time-domain computational procedure is presented for predicting the dynamic response of laminated anisotropic shells. The two key elements of the procedure are: (1) use of mixed finite element models having independent interpolation (shape) functions for stress resultants and generalized displacements for the spatial discretization of the shell, with the stress resultants allowed to be discontinuous at interelement boundaries; and (2) use of a dynamic reduction method, with the global approximation vectors consisting of the static solution and an orthogonal set of Lanczos vectors. The dynamic reduction is accomplished by means of successive application of the finite element method and the classical Rayleigh-Ritz technique. The finite element method is first used to generate the global approximation vectors. Then the Rayleigh-Ritz technique is used to generate a reduced system of ordinary differential equations in the amplitudes of these modes. The temporal integration of the reduced differential equations is performed by using an explicit half-station central difference scheme (Leap-frog method). The effectiveness of the proposed procedure is demonstrated by means of a numerical example and its advantages over reduction methods used with the displacement formulation are discussed.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper, we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
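Approximating a kernel matrix in a subspace via sampling is closely related to the Nyström method. The sketch below illustrates that generic idea on an RBF kernel; it is an assumed illustration of subspace sampling, not the authors' exact AKCL formulation:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, m, gamma=0.5, seed=0):
    """Rank-m Nystrom approximation K ~= C W^+ C^T from m sampled landmarks."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf(X, X[idx], gamma)   # n x m cross-kernel against the landmarks
    W = C[idx]                  # m x m kernel among the landmarks
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 1))
K = rbf(X, X)
K_approx = nystrom(X, m=25)
rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

Only the n x m cross-kernel and the m x m landmark kernel are ever formed, avoiding the full n x n matrix that makes exact KCL unscalable.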
Approximate Bayesian computation for spatial SEIR(S) epidemic models.
Brown, Grant D; Porter, Aaron T; Oleson, Jacob J; Hinman, Jessica A
2018-02-01
Approximate Bayesian Computation (ABC) provides an attractive approach to estimation in complex Bayesian inferential problems for which evaluation of the kernel of the posterior distribution is impossible or computationally expensive. These highly parallelizable techniques have been successfully applied to many fields, particularly in cases where more traditional approaches such as Markov chain Monte Carlo (MCMC) are impractical. In this work, we demonstrate the application of approximate Bayesian inference to spatially heterogeneous Susceptible-Exposed-Infectious-Removed (SEIR) stochastic epidemic models. These models have a tractable posterior distribution; however, MCMC techniques nevertheless become computationally infeasible for moderately sized problems. We discuss the practical implementation of these techniques via the open source ABSEIR package for R. The performance of ABC relative to traditional MCMC methods in a small problem is explored under simulation, as well as in the spatially heterogeneous context of the 2014 epidemic of Chikungunya in the Americas. Copyright © 2017 Elsevier Ltd. All rights reserved.
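The core ABC idea — keep prior draws whose simulated data fall close to the observations — can be sketched with a basic rejection sampler. This is a toy binomial example, not the SEIR models or the ABSEIR package interface:

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance, eps, n_draws, seed=0):
    """Basic ABC rejection: accept a prior draw theta when the data it
    simulates lies within eps of the observed data."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) <= eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer a binomial success probability from 42/100 successes.
post = abc_rejection(
    observed=42,
    simulate=lambda p, rng: rng.binomial(100, p),
    prior_sample=lambda rng: rng.uniform(0, 1),
    distance=lambda sim, obs: abs(sim - obs),
    eps=2,
    n_draws=20000,
)
```

The accepted draws approximate the posterior; because each draw is independent, the loop parallelizes trivially, which is the property the abstract highlights.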
Reinforcement learning solution for HJB equation arising in constrained optimal control problem.
Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong
2015-11-01
The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Density Perturbation Method to Study the Eigenstructure of Two-Phase Flow Equation Systems
NASA Astrophysics Data System (ADS)
Cortes, J.; Debussche, A.; Toumi, I.
1998-12-01
Many interesting and challenging physical mechanisms are concerned with the mathematical notion of eigenstructure. In two-fluid models, complex phasic interactions yield a complex eigenstructure which may raise numerous problems in numerical simulations. In this paper, we develop a perturbation method to examine the eigenvalues and eigenvectors of two-fluid models. This original method, based on the stiffness of the density ratio, provides a convenient tool to study the relevance of pressure momentum interactions and allows us to obtain precise approximations of the whole flow eigendecomposition at minor computational cost. The Roe scheme is successfully implemented and some numerical tests are presented.
Implicit solvers for unstructured meshes
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Mavriplis, Dimitri J.
1991-01-01
Implicit methods for unstructured mesh computations are developed and tested. The approximate system which arises from the Newton-linearization of the nonlinear evolution operator is solved by using the preconditioned generalized minimum residual technique. Three different preconditioners are investigated: incomplete LU factorization (ILU), block diagonal factorization, and symmetric successive over-relaxation (SSOR). The preconditioners have been optimized to have good vectorization properties. The various methods are compared over a wide range of problems. Ordering of the unknowns, which affects the convergence of these sparse matrix iterative methods, is also investigated. Results are presented for inviscid and turbulent viscous calculations on single and multielement airfoil configurations using globally and adaptively generated meshes.
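Of the preconditioners listed, SSOR is the simplest to write down. A minimal NumPy sketch of SSOR used as a standalone iterative solver (not the paper's vectorized, preconditioned-GMRES setup; the test matrix is an illustrative 1-D Laplacian):

```python
import numpy as np

def ssor_solve(A, b, omega=1.5, tol=1e-10, max_iter=2000):
    """Symmetric successive over-relaxation: a forward Gauss-Seidel-style
    sweep followed by a backward sweep, each relaxed by omega in (0, 2)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        for i in range(n):                      # forward sweep
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        for i in range(n - 1, -1, -1):          # backward sweep
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x

# SPD test problem: 1-D discrete Laplacian with unit right-hand side.
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = ssor_solve(A, b)
```

In practice (and in the paper) the symmetric forward/backward splitting is applied once per Krylov iteration as a preconditioner rather than iterated to convergence.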
A Relaxation Method for Nonlocal and Non-Hermitian Operators
NASA Astrophysics Data System (ADS)
Lagaris, I. E.; Papageorgiou, D. G.; Braun, M.; Sofianos, S. A.
1996-06-01
We present a grid method to solve the time dependent Schrödinger equation (TDSE). It uses the Crank-Nicolson scheme to propagate the wavefunction forward in time and finite differences to approximate the derivative operators. The resulting sparse linear system is solved by the symmetric successive overrelaxation iterative technique. The method handles local and nonlocal interactions and Hamiltonians that correspond either to Hermitian or to non-Hermitian matrices with real eigenvalues. We test the method by solving the TDSE in the imaginary time domain, thus converting the time propagation to asymptotic relaxation. Benchmark problems are solved in both one and two dimensions, with local, nonlocal, Hermitian, and non-Hermitian Hamiltonians.
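The imaginary-time relaxation idea can be sketched for a simple local, Hermitian case. The example below assumes a 1-D harmonic oscillator in natural units (hbar = m = omega = 1, exact ground energy 0.5) and dense linear algebra with a precomputed propagator, where the paper instead solves the sparse Crank-Nicolson system by SSOR at each step:

```python
import numpy as np

n, L, dtau = 200, 10.0, 0.01
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + x^2/2 with central finite differences.
D2 = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / dx**2
H = -0.5 * D2 + np.diag(0.5 * x**2)

# One Crank-Nicolson step in imaginary time:
#   (I + dtau/2 H) psi_new = (I - dtau/2 H) psi
M = np.linalg.solve(np.eye(n) + 0.5 * dtau * H,
                    np.eye(n) - 0.5 * dtau * H)

psi = np.exp(-((x - 1.0) ** 2))             # arbitrary initial guess
for _ in range(2000):
    psi = M @ psi
    psi /= np.sqrt(np.sum(psi**2) * dx)     # renormalize after decay

energy = float(psi @ H @ psi * dx)          # Rayleigh quotient, ~0.5
```

Excited components decay as exp(-ΔE·τ), so after τ = 20 only the ground state survives and the Rayleigh quotient approaches the finite-difference ground-state energy.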
Advanced numerical methods for three dimensional two-phase flow calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toumi, I.; Caruge, D.
1997-07-01
This paper is devoted to new numerical methods developed for both one and three dimensional two-phase flow calculations. These methods are finite volume numerical methods and are based on the use of Approximate Riemann Solver concepts to define convective fluxes versus mean cell quantities. The first part of the paper presents the numerical method for a one dimensional hyperbolic two-fluid model including differential terms such as added mass and interface pressure. This numerical solution scheme makes use of the Riemann problem solution to define backward and forward differencing to approximate spatial derivatives. The construction of this approximate Riemann solver uses an extension of Roe's method that has been successfully used to solve gas dynamic equations. As far as the two-fluid model is hyperbolic, this numerical method seems very efficient for the numerical solution of two-phase flow problems. The scheme was applied both to shock tube problems and to standard tests for two-fluid computer codes. The second part describes the numerical method in the three dimensional case. The authors also discuss some improvements performed to obtain a fully implicit solution method that provides fast running steady state calculations. Such a scheme is now implemented in a thermal-hydraulic computer code devoted to 3-D steady-state and transient computations. Some results obtained for Pressurised Water Reactors concerning upper plenum calculations and a steady state flow in the core with rod bow effect evaluation are presented. In practice these new numerical methods have proved to be stable on non-staggered grids and capable of generating accurate non-oscillating solutions for two-phase flow calculations.
NASA Astrophysics Data System (ADS)
Lin, S. T.; Liou, T. S.
2017-12-01
Numerical simulation of groundwater flow in anisotropic aquifers usually suffers from a lack of accuracy in calculating the groundwater flux across grid blocks. Conventional two-point flux approximation (TPFA) can only obtain the flux normal to the grid interface but completely neglects the one parallel to it. Furthermore, the hydraulic gradient in a grid block estimated from TPFA can only poorly represent the hydraulic condition near the intersection of grid blocks. These disadvantages are further exacerbated when the principal axes of hydraulic conductivity, the global coordinate system, and the grid boundary are not parallel to one another. In order to refine the estimation of the in-grid hydraulic gradient, several multiple-point flux approximation (MPFA) methods have been developed for two-dimensional groundwater flow simulations. For example, the MPFA-O method uses the hydraulic head at the junction node as an auxiliary variable which is then eliminated using the head and flux continuity conditions. In this study, a three-dimensional MPFA method is developed for numerical simulation of groundwater flow in three-dimensional and strongly anisotropic aquifers. This new MPFA method first discretizes the simulation domain into hexahedrons. Each hexahedron is further decomposed into a certain number of tetrahedrons. The 2D MPFA-O method is then extended to these tetrahedrons, using the unknown head at the intersection of hexahedrons as an auxiliary variable along with the head and flux continuity conditions to solve for the head at the center of each hexahedron. Numerical simulations using this new MPFA method have been successfully compared with those obtained from a modified version of TOUGH2.
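For contrast with the MPFA construction, the conventional TPFA flux reduces to a harmonic-mean transmissibility between two adjacent cells. A minimal sketch with illustrative variable names (k: conductivity, d: distance from cell center to the shared face, h: head):

```python
def tpfa_transmissibility(k1, k2, d1, d2, area=1.0):
    """Two-point transmissibility: harmonic average of the half-cell
    conductances K/d on each side of the shared face."""
    t1, t2 = k1 / d1, k2 / d2
    return area * t1 * t2 / (t1 + t2)

def tpfa_flux(h1, h2, k1, k2, d1, d2, area=1.0):
    """Darcy flux across the face, positive from cell 1 toward cell 2.
    Note: only the component normal to the face is represented."""
    return tpfa_transmissibility(k1, k2, d1, d2, area) * (h1 - h2)

# Homogeneous case: K = 2, cell centers 0.5 on each side of the face,
# unit head drop across a unit-area face.
q = tpfa_flux(h1=1.0, h2=0.0, k1=2.0, k2=2.0, d1=0.5, d2=0.5)
```

The limitation the abstract describes is visible here: the flux depends on only two cell heads and one scalar conductance per face, so any tangential component induced by a full anisotropy tensor is lost; MPFA methods add more head unknowns per face to recover it.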
NASA Astrophysics Data System (ADS)
Kulyanitsa, A. L.; Rukhovich, A. D.; Rukhovich, D. D.; Koroleva, P. V.; Rukhovich, D. I.; Simakova, M. S.
2017-04-01
The concept of the soil line can be used to describe the temporal distribution of the spectral characteristics of the bare soil surface. In this case, the soil line can be referred to as the multi-temporal soil line, or simply the temporal soil line (TSL). In order to create the TSL for 8000 regular lattice points for the territory of three regions of Tula oblast, we used 34 Landsat images obtained in the period from 1985 to 2014 after a certain transformation. As Landsat images are matrices of the values of spectral brightness, this transformation is a normalization of the matrices. There are several methods of normalization that move, rotate, and scale the spectral plane. In our study, we applied the method of piecewise linear approximation to the spectral neighborhood of the soil line in order to assess the quality of the normalization mathematically. This approach allowed us to rank the normalization methods according to their quality as follows: classic normalization > successive application of the turn and shift > successive application of the atmospheric correction and shift > atmospheric correction > shift > turn > raw data. The normalized data allowed us to create maps of the distribution of the a and b coefficients of the TSL. The map of the b coefficient is characterized by a high correlation with the ground-truth data obtained from 1899 soil pits described during the soil surveys performed by the local institute for land management (GIPROZEM).
A Fast Hyperspectral Vector Radiative Transfer Model in UV to IR spectral bands
NASA Astrophysics Data System (ADS)
Ding, J.; Yang, P.; Sun, B.; Kattawar, G. W.; Platnick, S. E.; Meyer, K.; Wang, C.
2016-12-01
We develop a fast hyperspectral vector radiative transfer model with a spectral range from UV to IR at 5 nm resolution. This model can simulate top-of-the-atmosphere (TOA) diffuse radiance and polarized reflectance by considering gas absorption, Rayleigh scattering, and aerosol and cloud scattering. The absorption component considers several major atmospheric absorbers, such as water vapor, CO2, O3, and O2, including both line and continuum absorption. A regression-based method is used to parameterize the layer effective optical thickness for each gas, which substantially increases the computational efficiency for absorption while maintaining high accuracy. This method is over 500 times faster than the existing line-by-line method. The scattering component uses the successive order of scattering (SOS) method. For Rayleigh scattering, convergence is fast due to the small optical thickness of atmospheric gases. For cloud and aerosol layers, a small-angle approximation method is used in the SOS calculations. The scattering process is divided into two parts, a forward part and a diffuse part. The scattering within the small-angle range in the forward direction is approximated as forward scattering. A cloud or aerosol layer is divided into thin layers. As the ray propagates through each thin layer, a portion diverges as diffuse radiation, while the remainder continues propagating in the forward direction. The computed diffuse radiance is the sum of all of the diffuse parts. The small-angle approximation makes the SOS calculation converge rapidly even in a thick cloud layer.
Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models
NASA Astrophysics Data System (ADS)
Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo
2014-04-01
We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.
1987-11-01
UNCLASSIFIED report documentation page (AD-A190 826). N. Medhin, M. Sambandham, and C. K. Zoltani, "Numerical Solution to a System of Random Volterra Integral Equations I: Successive Approximation Method," University of Alabama, Birmingham, AL (submitted).
Methods to Promote Germination of Dormant Setaria viridis Seeds
Sebastian, Jose; Wong, Mandy Ka; Tang, Evan; Dinneny, José R.
2014-01-01
Setaria viridis has recently emerged as a promising genetic model system to study diverse aspects of monocot biology. While the post-germination life cycle of S. viridis is approximately 8 weeks long, the prolonged dormancy of freshly harvested seeds can more than double the total time required between successive generations. Here we describe methods that promote seed germination in S. viridis. Our results demonstrate that treating S. viridis seeds with liquid smoke or a GA3 and KNO3 solution improves germination rates to 90% or higher even in seeds that are 6 days post-harvest with similar results obtained whether seeds are planted in soil or on gel-based media. Importantly, we show that these treatments have no significant effect on the growth of the adult plant. We have tested these treatments on diverse S. viridis accessions and show variation in their response. The methods described here will help advance research using this model grass species by increasing the pace at which successive generations of plants can be analyzed. PMID:24748008
Temporal resolution improvement using PICCS in MDCT cardiac imaging.
Chen, Guang-Hong; Tang, Jie; Hsieh, Jiang
2009-06-01
The current paradigm for temporal resolution improvement is to add more source-detector units and/or increase the gantry rotation speed. The purpose of this article is to present an innovative alternative method to potentially improve temporal resolution by approximately a factor of 2 for all MDCT scanners without requiring hardware modification. The central enabling technology is a most recently developed image reconstruction method: Prior image constrained compressed sensing (PICCS). Using the method, cardiac CT images can be accurately reconstructed using the projection data acquired in an angular range of about 120 degrees, which is roughly 50% of the standard short-scan angular range (approximately 240 degrees for an MDCT scanner). As a result, the temporal resolution of MDCT cardiac imaging can be universally improved by approximately a factor of 2. In order to validate the proposed method, two in vivo animal experiments were conducted using a state-of-the-art 64-slice CT scanner (GE Healthcare, Waukesha, WI) at different gantry rotation times and different heart rates. One animal was scanned at heart rate of 83 beats per minute (bpm) using 400 ms gantry rotation time and the second animal was scanned at 94 bpm using 350 ms gantry rotation time, respectively. Cardiac coronary CT imaging can be successfully performed at high heart rates using a single-source MDCT scanner and projection data from a single heart beat with gantry rotation times of 400 and 350 ms. Using the proposed PICCS method, the temporal resolution of cardiac CT imaging can be effectively improved by approximately a factor of 2 without modifying any scanner hardware. This potentially provides a new method for single-source MDCT scanners to achieve reliable coronary CT imaging for patients at higher heart rates than the current heart rate limit of 70 bpm without using the well-known multisegment FBP reconstruction algorithm. 
This method also enables dual-source MDCT scanners to achieve higher temporal resolution without further hardware modifications.
Success rates of the first inferior alveolar nerve block administered by dental practitioners.
Kriangcherdsak, Yutthasak; Raucharernporn, Somchart; Chaiyasamut, Teeranut; Wongsirichat, Natthamet
2016-06-01
Inferior alveolar nerve block (IANB) of the mandible is commonly used in the oral cavity as an anesthetic technique for dental procedures. This study evaluated the success rate of the first IANB administered by dental practitioners. Volunteer dental practitioners at Mahidol University who had never performed an IANB carried out 106 IANB procedures. The practitioners were divided into 12 groups with their advisors by randomized controlled trial. We recorded the success rate via pain visual analog scale (VAS) scores. A large percentage of the dental practitioners (85.26%) used the standard method to locate the anatomical landmarks, injecting the local anesthetic at the correct position, with the barrel of the syringe parallel to the occlusal plane of the mandibular teeth. Further, 68.42% of the dental practitioners injected the local anesthetic on the right side by using the left index finger for retraction. The onset time was approximately 0-5 min for nearly half of the dental practitioners (47.37% for subjective onset and 43.16% for objective onset), while the duration of the IANB was approximately 240-300 min (36.84%) after the initiation of numbness. Moreover, the VAS pain scores were 2.5 ± 1.85 and 2.1 ± 1.8 while injecting and delivering the local anesthetic, respectively. The only recorded factor that affected the success of the local anesthetic was the administering practitioner. This reinforces the notion that local anesthesia administration is a technique-sensitive procedure.
Krueger, Robert F.; South, Susan C.; Gruenewald, Tara L.; Seeman, Teresa E.; Roberts, Brent W.
2012-01-01
Background. Outcomes in aging and health research, such as longevity, can be conceptualized as reflecting both genetic and environmental (nongenetic) effects. Parsing genetic and environmental influences can be challenging, particularly when taking a life span perspective, but an understanding of how genetic variants and environments relate to successful aging is critical to public health and intervention efforts.
Methods. We review the literature, and survey promising methods, to understand this interplay. We also propose the investigation of personality as a nexus connecting genetics, environments, and health outcomes.
Results. Personality traits may reflect psychological mechanisms by which underlying etiologic (genetic and environmental) effects predispose individuals to broad propensities to engage in (un)healthy patterns of behavior across the life span. In terms of methodology, traditional behavior genetic approaches have been used profitably to understand how genetic factors and environments relate to health and personality in somewhat separate literatures; we discuss how other behavior genetic approaches can help connect these literatures and provide new insights.
Conclusions. Co-twin control designs can be employed to help determine causality via a closer approximation of the idealized counterfactual design. Gene-by-environment interaction (G × E) designs can be employed to understand how individual difference characteristics, such as personality, might moderate genetic and environmental influences on successful aging outcomes. Application of such methods can clarify the interplay of genes, environments, personality, and successful aging. PMID:22454369
NASA Astrophysics Data System (ADS)
Brown, I. Foster
2008-06-01
Learning to question is essential for determining pathways of conservation and development in southwestern Amazonia during a time of rapid global environmental change. Teaching such an approach in graduate science programs in regional universities can be done using play-acting and simulation exercises. Multiple working hypotheses help students learn to question their own research results and expert witnesses. The method of successive approximations enables students to question the results of complex calculations, such as estimates of forest biomass. Balloons and rolls of toilet paper provide means of questioning two-dimensional representations of a three-dimensional Earth and the value of pi. Generation of systematic errors can illustrate the pitfalls of blind acceptance of data. While learning to question is essential, it is insufficient by itself; students must also learn how to be solutionologists in order to satisfy societal demands for solutions to environmental problems. A little irreverence can be an excellent didactic tool for helping students develop the skills necessary to lead conservation and development efforts in the region.
Ultrasonic Porosity Estimation of Low-Porosity Ceramic Samples
NASA Astrophysics Data System (ADS)
Eskelinen, J.; Hoffrén, H.; Kohout, T.; Hæggström, E.; Pesonen, L. J.
2007-03-01
We report on efforts to extend the applicability of an airborne ultrasonic pulse-reflection (UPR) method towards lower porosities. UPR is a method that has been used successfully to estimate porosity and tortuosity of high porosity foams. UPR measures acoustical reflectivity of a target surface at two or more incidence angles. We used ceramic samples to evaluate the feasibility of extending the UPR range into low porosities (<35%). The validity of UPR estimates depends on pore size distribution and probing frequency as predicted by the theoretical boundary conditions of the used equivalent fluid model under the high-frequency approximation.
Adiabatically tapered splice for selective excitation of the fundamental mode in a multimode fiber.
Jung, Yongmin; Jeong, Yoonchan; Brambilla, Gilberto; Richardson, David J
2009-08-01
We propose a simple and effective method to selectively excite the fundamental mode of a multimode fiber by adiabatically tapering a fusion splice to a single-mode fiber. We experimentally demonstrate the method with an adiabatically tapered splice (taper waist = 15 μm, uniform length = 40 mm) between single-mode and multimode fibers and show that it provides a successful mode conversion/connection and allows for almost perfect fundamental-mode excitation in the multimode fiber. Excellent beam quality (M² ≈ 1.08) was achieved with low loss and high environmental stability.
Estimation of correlation functions by stochastic approximation.
NASA Technical Reports Server (NTRS)
Habibi, A.; Wintz, P. A.
1972-01-01
Techniques are considered for estimating the autocorrelation function of a zero-mean stationary random process. The techniques are applicable to processes with nonzero mean, provided the mean is estimated first and subtracted. Two recursive techniques are proposed, both based on the method of stochastic approximation; each assumes a functional form for the correlation function that depends on a number of parameters which are recursively estimated from successive records. One technique uses a standard point estimator of the correlation function to provide estimates of the parameters that minimize the mean-square error between the point estimates and the parametric function. The other technique provides estimates of the parameters that maximize a likelihood function relating the parameters of the function to the random process. Examples are presented.
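The recursive idea in this abstract can be sketched for the common special case of an exponential correlation model. The sketch below is an assumption-laden illustration, not the authors' exact formulation: it assumes the model R(τ) = σ²e^(−ατ) with known variance, uses a Robbins-Monro gain of 1/n, and fits synthetic AR(1) records; all function names are illustrative.

```python
import numpy as np

def point_corr(record, lags):
    # standard point estimate of R(k) for a zero-mean record
    n = len(record)
    return np.array([np.dot(record[:n - k], record[k:]) / n for k in lags])

def estimate_alpha(records, lags, alpha0=1.0, sigma2=1.0):
    # Robbins-Monro: one gradient step per record with gain 1/n,
    # driving alpha toward the least-squares fit of
    # R(tau) = sigma2 * exp(-alpha * tau) to the point estimates
    alpha = alpha0
    taus = np.asarray(lags, dtype=float)
    for n, rec in enumerate(records, start=1):
        r_hat = point_corr(rec, lags)
        model = sigma2 * np.exp(-alpha * taus)
        # gradient of 0.5 * ||model - r_hat||^2 with respect to alpha
        grad = np.sum((model - r_hat) * model * (-taus))
        alpha -= (1.0 / n) * grad
        alpha = max(alpha, 1e-6)  # keep the decay rate positive
    return alpha

# synthetic AR(1) records whose true autocorrelation is exp(-0.5 * k)
rng = np.random.default_rng(0)
true_alpha = 0.5
phi = np.exp(-true_alpha)
records = []
for _ in range(200):
    e = rng.standard_normal(500) * np.sqrt(1.0 - phi**2)
    x = np.zeros(500)
    for t in range(1, 500):
        x[t] = phi * x[t - 1] + e[t]
    records.append(x)

alpha_hat = estimate_alpha(records, lags=[1, 2, 3, 4])
```

With decaying gains the estimate settles near the true decay rate as successive records arrive, which is the essential behavior the abstract describes.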
Endovascular embolization of varicoceles using n-butyl cyanoacrylate (NBCA) glue
Pietura, Radosław; Toborek, Michał; Dudek, Aneta; Boćkowska, Agata; Janicka, Joanna; Piekarski, Paweł
2013-01-01
Summary Background: Varicoceles are abnormally dilated veins within the pampiniform plexus. They are caused by reflux of blood in the internal spermatic vein. The incidence of varicoceles is approximately 10–15% in the adolescent male population. The etiology of varicoceles is probably multifactorial. The diagnosis is based on Doppler US. Treatment can be endovascular or surgical. The aim of the study was to describe and evaluate a novel method of endovascular embolization of varicoceles using n-butyl cyanoacrylate (NBCA) glue. Material/Methods: 17 patients underwent endovascular treatment of varicoceles using NBCA. A 2.8 Fr microcatheter and a 1:1 mixture of NBCA and lipiodol were used for embolization of the spermatic vein. Results: All 17 procedures were successful, with no complications. Discussion: Embolization of varicoceles using NBCA glue proved efficient and safe in all patients and should be considered a method of choice. Phlebography and the Valsalva maneuver are crucial for technical success and avoidance of complications. Conclusions: Endovascular treatment of varicoceles using NBCA glue is very effective and safe. PMID:23807881
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which incur expensive computational costs. Variable-fidelity approximation-based design optimization approaches can efficiently simulate and optimize over the design space using approximation models with different levels of fidelity, and they have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
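The two ingredients of a nested design can be illustrated with much simpler stand-ins than the paper's algorithms. In the sketch below, random-search maximin substitutes for successive local enumeration, and a greedy farthest-point rule substitutes for the harmony search; only the nesting property (high-fidelity points are a subset of low-fidelity points) is the point of the example.

```python
import numpy as np

def maximin_lhs(n, dim, trials=200, rng=None):
    # random-search maximin Latin hypercube: among random LHS candidates,
    # keep the one whose smallest pairwise distance is largest
    rng = rng or np.random.default_rng(0)
    best, best_d = None, -1.0
    for _ in range(trials):
        # one random LHS: an independent permutation of strata per dimension
        pts = (np.array([rng.permutation(n) for _ in range(dim)]).T + 0.5) / n
        d = np.min(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
                   + np.eye(n) * 1e9)  # mask self-distances
        if d > best_d:
            best, best_d = pts, d
    return best

def nested_subset(points, m, rng=None):
    # greedy maximin subset: every expensive (high-fidelity) run reuses
    # a cheap (low-fidelity) point, which is the nesting property
    rng = rng or np.random.default_rng(1)
    idx = [int(rng.integers(len(points)))]
    while len(idx) < m:
        rest = [i for i in range(len(points)) if i not in idx]
        far = max(rest, key=lambda i: min(
            np.linalg.norm(points[i] - points[j]) for j in idx))
        idx.append(far)  # add the point farthest from the current subset
    return points[np.array(idx)]

low = maximin_lhs(20, 2)      # low-fidelity design: 20 points in 2-D
high = nested_subset(low, 5)  # nested high-fidelity design: 5 of those points
```

Each column of `low` places exactly one point in each of the 20 strata (the Latin hypercube property), and `high` is a space-filling subset of it.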
Quasiparticle self-consistent GW method for the spectral properties of complex materials.
Bruneval, Fabien; Gatti, Matteo
2014-01-01
The GW approximation to the formally exact many-body perturbation theory has been applied successfully to materials for several decades. Since the practical calculations are extremely cumbersome, the GW self-energy is most commonly evaluated using a first-order perturbative approach: This is the so-called G0W0 scheme. However, the G0W0 approximation depends heavily on the mean-field theory that is employed as a basis for the perturbation theory. Recently, a procedure to reach a kind of self-consistency within the GW framework has been proposed. The quasiparticle self-consistent GW (QSGW) approximation retains some positive aspects of a self-consistent approach, but circumvents the intricacies of the complete GW theory, which is inconveniently based on a non-Hermitian and dynamical self-energy. This new scheme allows one to surmount most of the flaws of the usual G0W0 at a moderate calculation cost and at a reasonable implementation burden. In particular, the issues of small band gap semiconductors, of large band gap insulators, and of some transition metal oxides are then cured. The QSGW method broadens the range of materials for which the spectral properties can be predicted with confidence.
On the feasibility of a transient dynamic design analysis
NASA Astrophysics Data System (ADS)
Cunniff, Patrick F.; Pohland, Robert D.
1993-05-01
The Dynamic Design Analysis Method has been used for the past 30 years as part of the Navy's efforts to shock-harden heavy shipboard equipment. This method, which has been validated several times, employs normal mode theory and design shock values. This report examines the degree of success that may be achieved by using simple equipment-vehicle models that produce time history responses equivalent to the responses that would be achieved using the spectral design values employed by the Dynamic Design Analysis Method. These transient models are constructed by attaching the equipment's modal oscillators to a vehicle composed of rigid masses and elastic springs. Two methods have been developed for constructing these transient models. Each method generates the parameters of the vehicle so as to approximate the required damaging effects, such that the transient model is excited by an idealized impulse applied to the vehicle mass to which the equipment modal oscillators are attached. The first method, called the Direct Modeling Method, is limited to equipment with at most three degrees of freedom, and its vehicle consists of a single lumped mass and spring. The Optimization Modeling Method, which is based on the simplex method for optimization, has been used successfully with a variety of vehicle models and equipment sizes.
Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.
Talaei, Behzad; Jagannathan, Sarangapani; Singler, John
2018-04-01
This paper develops a near-optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is then proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for the near-optimal RBN weights is designed such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.
Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G; Minenkov, Yury; Cavallo, Luigi; Neese, Frank
2018-01-07
In this communication, an improved perturbative triples correction (T) algorithm for domain based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well-established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases and in particular for small gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).
Bhowmick, Amiya Ranjan; Bandyopadhyay, Subhadip; Rana, Sourav; Bhattacharya, Sabyasachi
2016-01-01
The stochastic versions of the logistic and extended logistic growth models have been applied successfully to explain many real-life population dynamics and share a central body of literature in stochastic modeling of ecological systems. To fully understand the randomness in the population dynamics of the underlying processes, it is important to have a clear idea about the quasi-equilibrium distribution and its moments. Bartlett et al. (1960) made a pioneering attempt at estimating the moments of the quasi-equilibrium distribution of the stochastic logistic model. Matis and Kiffe (1996) obtained a set of more accurate and elegant approximations for the mean, variance and skewness of the quasi-equilibrium distribution of the same model using the cumulant truncation method. The method was extended to the stochastic power law logistic family by the same and several other authors (Nasell, 2003; Singh and Hespanha, 2007). Cumulant truncation and some alternative methods, e.g. saddle point approximation and the derivative matching approach, can be applied if the powers involved in the extended logistic setup are integers, although plenty of evidence is available for non-integer powers in many practical situations (Sibly et al., 2005). In this paper, we develop a set of new approximations for the mean, variance and skewness of the quasi-equilibrium distribution under a more general family of growth curves, applicable to both integer and non-integer powers. The deterministic counterpart of this family of models captures both monotonic and non-monotonic behavior of the per capita growth rate, of which the theta-logistic is a special case. The approximations accurately estimate the first three moments of the quasi-equilibrium distribution. The proposed method is illustrated with simulated data and real data from the global population dynamics database.
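The quasi-equilibrium moments that the analytical approximations above target can also be estimated by brute force, which makes a useful reference point. The sketch below is not the paper's method: it is a plain Gillespie simulation of the stochastic logistic birth-death process, with assumed illustrative rates (birth b·n, death d·n + (b−d)n²/K, so the deterministic carrying capacity is K), using time-weighted occupancy after a burn-in as a stand-in for the quasi-equilibrium distribution.

```python
import numpy as np

def simulate_logistic_moments(K=100.0, b=1.0, d=0.3, n0=50,
                              t_end=500.0, burn=50.0, rng=None):
    # Gillespie simulation of the logistic birth-death chain; returns
    # mean, variance and skewness of the time-weighted occupancy
    rng = rng or np.random.default_rng(42)
    t, n = 0.0, n0
    occupancy = {}  # state -> total time spent there after burn-in
    while t < t_end and n > 0:
        lam = b * n                          # birth rate
        mu = d * n + (b - d) * n * n / K     # death rate
        total = lam + mu
        dt = rng.exponential(1.0 / total)    # waiting time to next event
        if t > burn:
            occupancy[n] = occupancy.get(n, 0.0) + min(dt, t_end - t)
        n += 1 if rng.random() < lam / total else -1
        t += dt
    states = np.array(list(occupancy), dtype=float)
    w = np.array([occupancy[s] for s in occupancy])
    w = w / w.sum()
    mean = np.sum(w * states)
    var = np.sum(w * (states - mean) ** 2)
    skew = np.sum(w * (states - mean) ** 3) / var ** 1.5
    return mean, var, skew

mean, var, skew = simulate_logistic_moments()
```

For these rates a diffusion (Ornstein-Uhlenbeck) argument predicts a quasi-equilibrium mean near K and variance near (λ+μ)/(2(b−d)) ≈ 143, so the simulated moments give a quick sanity check on any closed-form approximation.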
Development of a high-temperature oven for the 28 GHz electron cyclotron resonance ion source.
Ohnishi, J; Higurashi, Y; Kidera, M; Ozeki, K; Nakagawa, T
2014-02-01
We have been developing the 28 GHz ECR ion source in order to accelerate high-intensity uranium beams at the RIKEN RI-beam Factory. Although we have generated U(35+) beams by the sputtering method thus far, we have begun developing a high-temperature oven with the aim of increasing and stabilizing the beams. Because the oven method uses UO2, the crucible must be heated to a temperature higher than 2000 °C to supply an appropriate amount of UO2 vapor to the ECR plasma. Our high-temperature oven uses a tungsten crucible joule-heated with a DC current of approximately 450 A. Its inside dimensions are ϕ11 mm × 13.5 mm. Since the crucible is placed in a magnetic field of approximately 3 T, it is subject to a magnetic force of approximately 40 N. Therefore, we used ANSYS to carefully design the crucible, which was manufactured by machining a tungsten rod. We were able to raise the oven temperature to 1900 °C in the first off-line test. Subsequently, UO2 was loaded into the crucible, and the oven was installed in the 28 GHz ECR ion source and tested. As a result, a U(35+) beam current of 150 μA was successfully extracted at an RF power of approximately 3 kW.
An Investigation of Potential Uses of Animals in Coast Guard Operations
1981-06-01
wellhead assembly ("Christmas tree"), which is a series of valves, controls and connections designed to regulate the flow of fluids from the well ... personnel reconnaissance. Infrared emitters have been designed and tested for locating patrol and sentry dogs at night. Finally, in Project ... the principle of operant conditioning is "shaping," or the method of successive approximations, in order to design a completely new behavior.
Naval Research Logistics Quarterly. Volume 28, Number 4,
1981-12-01
Fan [3] and an observation by Meijerink and van der Vorst [18] guarantee that after pivoting on any diagonal element of a diagonally dominant M-matrix ... Science, 3, 255-269 (1957). [18] Meijerink, J. and H. van der Vorst, "An Iterative Solution Method for Linear Systems of which the Coefficient Matrix Is a ... van Hee, K., A. Hordijk and J. van der Wal, "Successive Approximations for Convergent Dynamic Programming," in Markov Decision Theory, H. Tijms and J
Ion transport by gating voltage to nanopores produced via metal-assisted chemical etching method
NASA Astrophysics Data System (ADS)
Van Toan, Nguyen; Inomata, Naoki; Toda, Masaya; Ono, Takahito
2018-05-01
In this work, we report a simple and low-cost way to create nanopores that can be employed for various applications in nanofluidics. Nano-sized Ag particles in the range from 1 to 20 nm are formed on a silicon substrate by a de-wetting method. Silicon nanopores with an average diameter of approximately 15 nm and a height of 200 μm are then successfully produced by the metal-assisted chemical etching method. In addition, electrically driven ion transport in the nanopores is demonstrated for nanofluidic applications. Ion transport through the nanopores is observed and can be controlled by applying a gating voltage to the nanopores.
Triangulation of multistation camera data to locate a curved line in space
NASA Technical Reports Server (NTRS)
Fricke, C. L.
1974-01-01
A method is described for finding the location of a curved line in space from local azimuth as a function of elevation data obtained at several observation sites. A least-squares criterion is used to insure the best fit to the data. The method is applicable to the triangulation of an object having no identifiable structural features, provided its width is very small compared with its length so as to approximate a line in space. The method was implemented with a digital computer program and was successfully applied to data obtained from photographs of a barium ion cloud which traced out the earth's magnetic field line at very high altitudes.
NASA Astrophysics Data System (ADS)
Jiang, Junfeng; An, Jianchang; Liu, Kun; Ma, Chunyu; Li, Zhichen; Liu, Tiegen
2017-09-01
We propose a fast positioning algorithm for the asymmetric dual Mach-Zehnder interferometric infrared fiber vibration sensor. Using an approximate derivation method and an envelope detection method, we successfully eliminate the asymmetry of the interference outputs and improve the processing speed. A positioning measurement experiment was carried out to verify the effectiveness of the proposed algorithm. At a sensing length of 85 km, the experimental results show that the mean positioning error is 18.9 m and the mean processing time is 116 ms. The processing speed is five times that achieved by the traditional time-frequency analysis-based positioning method.
NASA Astrophysics Data System (ADS)
Hashimoto, S.; Iwamoto, Y.; Sato, T.; Niita, K.; Boudard, A.; Cugnon, J.; David, J.-C.; Leray, S.; Mancusi, D.
2014-08-01
A new approach to describing neutron spectra of deuteron-induced reactions in the Monte Carlo simulation for particle transport has been developed by combining the Intra-Nuclear Cascade of Liège (INCL) and the Distorted Wave Born Approximation (DWBA) calculation. We incorporated this combined method into the Particle and Heavy Ion Transport code System (PHITS) and applied it to estimate (d,xn) spectra on natLi, 9Be, and natC targets at incident energies ranging from 10 to 40 MeV. Double differential cross sections obtained by INCL and DWBA successfully reproduced broad peaks and discrete peaks, respectively, at the same energies as those observed in experimental data. Furthermore, an excellent agreement was observed between experimental data and PHITS-derived results using the combined method in thick target neutron yields over a wide range of neutron emission angles in the reactions. We also applied the new method to estimate (d,xp) spectra in the reactions, and discussed the validity for the proton emission spectra.
Sample entropy analysis of cervical neoplasia gene-expression signatures
Botting, Shaleen K; Trzeciakowski, Jerome P; Benoit, Michelle F; Salama, Salama A; Diaz-Arrastia, Concepcion R
2009-01-01
Background We introduce Approximate Entropy as a mathematical method of analysis for microarray data. Approximate entropy is applied here as a method to classify the complex gene expression patterns resulting from a clinical sample set. Since entropy is a measure of disorder in a system, we believe that by choosing genes which display minimum entropy in normal controls and maximum entropy in the cancerous sample set, we will be able to distinguish those genes which display the greatest variability in the cancerous set. Here we describe a method of utilizing Approximate Sample Entropy (ApSE) analysis to identify genes of interest with the highest probability of producing an accurate, predictive classification model from our data set. Results In the development of a diagnostic gene-expression profile for cervical intraepithelial neoplasia (CIN) and squamous cell carcinoma of the cervix, we identified 208 genes which are unchanging in all normal tissue samples, yet exhibit a random pattern indicative of the genetic instability and heterogeneity of malignant cells. This may be measured in terms of the ApSE when compared to normal tissue. We have validated 10 of these genes on 10 normal and 20 cancer and CIN3 samples. We report that the predictive value of the sample entropy calculation for these 10 genes of interest is promising (75% sensitivity, 80% specificity for prediction of cervical cancer over CIN3). Conclusion The success of the Approximate Sample Entropy approach in discerning alterations in complexity from a biological system with such a relatively small sample set, and in extracting biologically relevant genes of interest, holds great promise. PMID:19232110
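The entropy measure underlying this approach can be sketched with the standard approximate entropy ApEn(m, r) of Pincus; the authors' ApSE variant may differ in detail, so treat this as a generic illustration of "low entropy = regular, high entropy = disordered" on a 1-D series.

```python
import numpy as np

def approx_entropy(x, m=2, r=None):
    # ApEn(m, r): compares the log-frequency of matching templates of
    # length m against length m+1; larger values mean more disorder.
    # r defaults to the common heuristic 0.2 * standard deviation.
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])  # embedded templates
        # Chebyshev distance between all template pairs
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=-1)
        c = (d <= r).mean(axis=1)  # self-matches included, as in ApEn
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 40 * np.pi, 400))  # highly ordered series
noisy = rng.standard_normal(400)                   # disordered series
apen_regular = approx_entropy(regular)
apen_noisy = approx_entropy(noisy)
```

A gene whose expression is flat across normal controls but noisy across tumors would score like `noisy` versus `regular` here, which is exactly the selection criterion the abstract describes.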
High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1990-01-01
In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. The filtering method developed here uses simple central differencing of arbitrarily high order accuracy, except where a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy while removing spurious oscillations. Numerical results indicate the success of the method. High order of accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems, and a significant speed-up, generally a factor of almost three over the full ENO method, was achieved.
Groundwater Exploration for Rural Communities in Ghana, West Africa
NASA Astrophysics Data System (ADS)
McKay, W. A.
2001-05-01
Exploration for potable water in developing countries continues to be a major activity, as there are more than one billion people without access to safe drinking water. Exploration for groundwater becomes more critical in regions where groundwater movement and occurrence is controlled by secondary features such as fractures and faults. Drilling success rates in such geological settings are generally very low, but can be improved by integrating geological, hydrogeological, and aerial photo interpretation with land-based geophysical technology in the selection of drilling sites. To help alleviate water supply problems in West Africa, the Conrad N. Hilton Foundation and other donors, since 1990, have funded the World Vision Ghana Rural Water Project (GRWP) to drill wells for potable water supplies in the Greater Afram Plains (GAP) of Ghana. During the first two years of the program, drilling success rates using traditional methods ranged from 35 to 80 percent, depending on the area. The average drilling success rate for the program was approximately 50 percent. In an effort to increase the efficiency of drilling operations, the Desert Research Institute evaluated and developed techniques for application to well-siting strategies in the GAP area of Ghana. A critical project element was developing technical capabilities of in-country staff to independently implement the new strategies. Simple cost-benefit relationships were then used to evaluate the economic advantages of developing water resources using advanced siting methods. The application of advanced methods in the GAP area reveals an increase of 10 to 15 percent in the success rate over traditional methods. Aerial photography has been found to be the most useful of the imagery products covering the GAP area. An effective approach to geophysical exploration for groundwater has been the combined use of EM and resistivity methods.
Economic analyses showed that the use of advanced methods is cost-effective when success rates with traditional methods are less than 70 to 90 percent. Finally, with the focus of GRWP activities shifting to Ghana's northern regions, new challenges in drilling success rates are being encountered. In certain districts, success rates as low as 35 percent are observed, raising questions about the efficacy of existing well-siting strategies in the current physical setting, and the validity of traditional cost-benefit analyses for assessing the economic aspects of water exploration in drought-stricken areas.
NASA Astrophysics Data System (ADS)
Trujillo Bueno, J.; Fabiani Bendicho, P.
1995-12-01
Iterative schemes based on Gauss-Seidel (G-S) and optimal successive over-relaxation (SOR) iteration are shown to provide a dramatic increase in the speed with which non-LTE radiation transfer (RT) problems can be solved. The convergence rates of these new RT methods are identical to those of upper triangular nonlocal approximate operator splitting techniques, but the computing time per iteration and the memory requirements are similar to those of a local operator splitting method. In addition to these properties, both methods are particularly suitable for multidimensional geometry, since they neither require the actual construction of nonlocal approximate operators nor the application of any matrix inversion procedure. Compared with the currently used Jacobi technique, which is based on the optimal local approximate operator (see Olson, Auer, & Buchler 1986), the G-S method presented here is faster by a factor 2. It gives excellent smoothing of the high-frequency error components, which makes it the iterative scheme of choice for multigrid radiative transfer. This G-S method can also be suitably combined with standard acceleration techniques to achieve even higher performance. Although the convergence rate of the optimal SOR scheme developed here for solving non-LTE RT problems is much higher than G-S, the computing time per iteration is also minimal, i.e., virtually identical to that of a local operator splitting method. While the conventional optimal local operator scheme provides the converged solution after a total CPU time (measured in arbitrary units) approximately equal to the number n of points per decade of optical depth, the time needed by this new method based on the optimal SOR iterations is only √n/2√2. This method is competitive with those that result from combining the above-mentioned Jacobi and G-S schemes with the best acceleration techniques. 
Contrary to what happens with the local operator splitting strategy currently in use, these novel methods remain effective even under extreme non-LTE conditions in very fine grids.
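The Gauss-Seidel and SOR iterations at the heart of this scheme are easiest to see on a plain linear system. The sketch below is a generic demonstration on a 1-D Laplacian test matrix, not the non-LTE transfer operator itself; the omega formula used for the SOR run is the classical optimum for this particular tridiagonal problem.

```python
import numpy as np

def sor(A, b, omega=1.0, tol=1e-10, max_iter=10_000):
    # Successive over-relaxation for A x = b. omega = 1 is Gauss-Seidel;
    # 1 < omega < 2 pushes each update further and, with a well-chosen
    # omega, converges far faster on diffusion-like matrices.
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        diff = 0.0
        for i in range(n):
            # sweep uses the newest values as soon as they exist (G-S),
            # then over-relaxes the correction by the factor omega
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            new = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new
        if diff < tol:
            return x, it + 1
    return x, max_iter

# 1-D Laplacian test problem: tridiagonal (-1, 2, -1)
n = 30
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_gs, it_gs = sor(A, b, omega=1.0)
omega_opt = 2.0 / (1.0 + np.sin(np.pi / (n + 1)))  # optimal for this matrix
x_sor, it_sor = sor(A, b, omega=omega_opt)
```

The iteration counts make the abstract's point concrete: plain Gauss-Seidel needs thousands of sweeps on this problem, while optimally over-relaxed SOR reaches the same tolerance in roughly the square root of that.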
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
Koski, Antti; Tossavainen, Timo; Juhola, Martti
2004-01-01
Electrocardiogram (ECG) signals are the most prominent biomedical signal type used in clinical medicine. Their compression is important and widely researched in the medical informatics community. In the previous literature, compression efficacy has been investigated only in terms of how much existing or newly developed methods reduced the storage required by compressed forms of the original ECG signals. Sometimes statistical signal evaluations based on, for example, root mean square error were studied. In previous research we developed a refined method for signal compression and tested it jointly with several known techniques on other biomedical signals. Our method of so-called successive approximation quantization, used with wavelets, was one of the most successful in those tests. In this paper, we studied to what extent these lossy compression methods altered the values of medical parameters (medical information) computed from the signals. Since the methods are lossy, some information is lost when a high enough compression ratio is reached. We found that ECG signals sampled at 400 Hz could be compressed to one fourth of their original storage space, while the values of their medical parameters changed by less than 5% due to compression, which indicates reliable results.
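The core idea of successive approximation quantization can be sketched independently of the authors' full codec: each coefficient is refined one bit per pass against a halving step, like a SAR converter, so truncating the bit stream simply yields a coarser reconstruction. The Haar transform and random-walk "ECG" below are toy stand-ins; a real codec would also entropy-code the bit planes.

```python
import numpy as np

def saq_bits(c, amp, passes):
    # per-coefficient successive approximation: one bit per pass,
    # with halving steps, giving worst-case error <= amp / 2**passes
    recon = np.zeros_like(c)
    step = amp / 2
    bitplanes = []
    for _ in range(passes):
        bits = c >= recon                 # one bit per coefficient
        recon = recon + np.where(bits, step, -step)
        bitplanes.append(bits)
        step /= 2
    return bitplanes, recon

def haar_level(x):
    # one level of an (unnormalized-orthogonal) Haar transform
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # details
    return a, d

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(512))  # random walk as a stand-in signal
a1, d1 = haar_level(signal)
a2, d2 = haar_level(a1)
coeffs = np.concatenate([a2, d2, d1])         # two-level Haar coefficients

amp = np.max(np.abs(coeffs))
planes, recon = saq_bits(coeffs, amp, passes=8)
```

Eight passes store 8 bits per coefficient instead of 64, a 4:1-plus reduction in the same spirit as the compression ratios quoted in the abstract, while the reconstruction error stays below amp/256.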
A study of an alignment-less lithography method as an educational resource
NASA Astrophysics Data System (ADS)
Kai, Kazuho; Shiota, Koki; Nagaoka, Shiro; Mahmood, Mohamad Rusop Bin Haji; Kawai, Akira
2016-07-01
A simplification of the lithography process was studied. The simplified photolithography method, named "alignment-less lithography," was proposed by omitting the photomask alignment step: photomasks and substrate are aligned mechanically using a simple jig on which countersinks are formed. Photomasks made of glass and photomasks made of transparent plastic (OHP) sheets were prepared for the process. As a result, repetitive accuracies of approximately 5 µm for the glass mask and 20 µm for the OHP mask were obtained. It was confirmed that the alignment-less lithography method was successful. The possibility of application to an educational program, such as a heuristic for solving problems, was suggested using the method with the OHP mask. The nMOS FET fabrication process was successfully demonstrated using this method, and its feasibility was confirmed. It is expected that a totally simplified device fabrication process can be achieved when combined with other simplifications, such as the simplified impurity diffusion processes using PSG and BSG thin films as diffusion sources prepared from Sol-Gel materials under a normal air environment.
NASA Astrophysics Data System (ADS)
Casalegno, Mosè; Bernardi, Andrea; Raos, Guido
2013-07-01
Numerical approaches can provide useful information about the microscopic processes underlying photocurrent generation in organic solar cells (OSCs). Among them, the Kinetic Monte Carlo (KMC) method is conceptually the simplest, but computationally the most intensive. A less demanding alternative is potentially represented by so-called Master Equation (ME) approaches, where the equations describing particle dynamics rely on the mean-field approximation and their solution is attained numerically, rather than stochastically. The description of charge separation dynamics, the treatment of electrostatic interactions and numerical stability are some of the key issues which have prevented the application of these methods to OSC modelling, despite their success in the study of charge transport in disordered systems. Here we describe a three-dimensional ME approach to photocurrent generation in OSCs which attempts to deal with these issues. The reliability of the proposed method is tested against reference KMC simulations on bilayer heterojunction solar cells. Comparison of the current-voltage curves shows that the model approximates the exact result well for most devices. The largest deviations in current densities are mainly due to the adoption of the mean-field approximation for electrostatic interactions. The presence of deep traps, in devices characterized by strong energy disorder, may also affect result quality. Comparison of the simulation times reveals that the ME algorithm runs, on average, one order of magnitude faster than KMC.
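The master-equation picture can be illustrated with a toy model. The sketch below is our own illustration, not the authors' code: it integrates dp/dt = W p by forward Euler for a hypothetical four-site hopping chain with detailed-balance rates, so the populations should relax to the Boltzmann distribution while total probability is conserved.

```python
import math

def hopping_rate(e_from, e_to, kT=1.0):
    # symmetric detailed-balance form: k_ij / k_ji = exp(-(E_j - E_i)/kT)
    return math.exp(-(e_to - e_from) / (2.0 * kT))

def build_rate_matrix(energies):
    # W[i][j] = rate j -> i for nearest-neighbour hops; columns sum to zero,
    # so dp/dt = W p conserves total probability
    n = len(energies)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        k_f = hopping_rate(energies[i], energies[i + 1])   # i -> i+1
        k_b = hopping_rate(energies[i + 1], energies[i])   # i+1 -> i
        W[i + 1][i] += k_f; W[i][i] -= k_f
        W[i][i + 1] += k_b; W[i + 1][i + 1] -= k_b
    return W

def evolve(energies, t_end=200.0, dt=0.01):
    # forward-Euler integration of dp/dt = W p, all population on site 0 at t=0
    W = build_rate_matrix(energies)
    n = len(energies)
    p = [1.0] + [0.0] * (n - 1)
    for _ in range(int(t_end / dt)):
        dp = [sum(W[i][j] * p[j] for j in range(n)) for i in range(n)]
        p = [p[i] + dt * dp[i] for i in range(n)]
    return p
```

Because the rates satisfy detailed balance, the stationary solution of the master equation is p_i proportional to exp(-E_i/kT), which a mean-field ME solver reaches deterministically rather than by stochastic sampling.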
Mehraeen, Shahab; Dierks, Travis; Jagannathan, S; Crow, Mariesa L
2013-12-01
In this paper, the nearly optimal solution for discrete-time (DT) affine nonlinear control systems in the presence of partially unknown internal system dynamics and disturbances is considered. The approach is based on successive approximate solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which appears in optimal control. A successive approximation approach for updating the control and disturbance inputs of DT nonlinear affine systems is proposed. Moreover, sufficient conditions for the convergence of the approximate HJI solution to the saddle point are derived, and an iterative approach to approximate the HJI equation using a neural network (NN) is presented. Then, the requirement of full knowledge of the internal dynamics of the nonlinear DT system is relaxed by using a second NN online approximator. The result is a closed-loop optimal NN controller via offline learning. A numerical example is provided illustrating the effectiveness of the approach.
Roots of polynomials by ratio of successive derivatives
NASA Technical Reports Server (NTRS)
Crouse, J. E.; Putt, C. W.
1972-01-01
An order of magnitude study of the ratios of successive polynomial derivatives yields information about the number of roots at an approached root point and the approximate location of a root point from a nearby point. The location approximation improves as a root is approached, so a powerful convergence procedure becomes available. These principles are developed into a computer program which finds the roots of polynomials with real number coefficients.
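The derivative-ratio idea can be sketched concretely. Near a root r of multiplicity m, p(x) ≈ c(x-r)^m, so the combination p·p''/p'² tends to (m-1)/m; this yields both an estimate of m and a multiplicity-aware Newton step x ← x - m·p/p'. The sketch below is our own illustration (restricted to real roots), not the report's program.

```python
def polyval(coeffs, x):
    """Evaluate a polynomial (highest-degree coefficient first) by Horner's rule."""
    v = 0.0
    for c in coeffs:
        v = v * x + c
    return v

def polyder(coeffs):
    """Coefficients of the derivative polynomial."""
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def find_root(coeffs, x, iters=30):
    """Modified Newton iteration; returns the root and the multiplicity
    estimated from the ratio of successive derivatives at the start point."""
    d1, d2 = polyder(coeffs), polyder(polyder(coeffs))
    m_first = None
    for _ in range(iters):
        p, p1, p2 = polyval(coeffs, x), polyval(d1, x), polyval(d2, x)
        if p1 == 0.0 or abs(p) < 1e-13:
            break
        q = p * p2 / (p1 * p1)        # tends to (m-1)/m near a root of multiplicity m
        m = max(1, round(1.0 / (1.0 - q))) if q < 1.0 else 1
        if m_first is None:
            m_first = m
        x -= m * p / p1               # multiplicity-aware Newton step
    return x, m_first
```

For p(x) = (x-2)³(x+1), starting at x = 2.5, the ratio test reports multiplicity 3 and the modified step restores the quadratic convergence that plain Newton loses at multiple roots.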
Performance of local optimization in single-plane fluoroscopic analysis for total knee arthroplasty.
Prins, A H; Kaptein, B L; Stoel, B C; Lahaye, D J P; Valstar, E R
2015-11-05
Fluoroscopy-derived joint kinematics plays an important role in the evaluation of knee prostheses. Fluoroscopic analysis requires estimation of the 3D prosthesis pose from its 2D silhouette in the fluoroscopic image, by optimizing a dissimilarity measure. Currently, extensive user interaction is needed, which makes analysis labor-intensive and operator-dependent. The aim of this study was to review five optimization methods for 3D pose estimation and to assess their performance in finding the correct solution. Two derivative-free optimizers (DHSAnn and IIPM) and three gradient-based optimizers (LevMar, DoNLP2 and IpOpt) were evaluated. For the latter three optimizers two different implementations were evaluated: one with a numerically approximated gradient and one with an analytically derived gradient for computational efficiency. On phantom data, all methods were able to find the 3D pose within 1 mm and 1° in more than 85% of cases. IpOpt had the highest success rate: 97%. On clinical data, the success rates were higher than 85% for the in-plane positions, but not for the rotations. IpOpt was the most computationally expensive method, and the application of analytically derived gradients accelerated the gradient-based methods by a factor of 3-4 without any difference in success rate. In conclusion, 85% of the frames can be analyzed automatically in clinical data and only 15% of the frames require manual supervision. The high success rate on phantom data (97% with IpOpt) indicates that even less supervision may become feasible.
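The numeric-versus-analytic gradient trade-off can be illustrated generically. The sketch below is a hedged stand-in: the cost function is an assumed smooth quadratic, not the fluoroscopic dissimilarity measure of the study. A central-difference gradient needs two cost evaluations per parameter, whereas an analytic gradient needs a single pass, which is the kind of saving behind the reported factor 3-4 speed-up.

```python
def dissimilarity(pose):
    # stand-in smooth cost with a known minimum at (1.0, 2.0, 0.5)
    x, y, z = pose
    return (x - 1.0) ** 2 + 10.0 * (y - 2.0) ** 2 + 4.0 * (z - 0.5) ** 2

def analytic_gradient(pose):
    # one pass: gradient written out by hand for the stand-in cost
    x, y, z = pose
    return [2.0 * (x - 1.0), 20.0 * (y - 2.0), 8.0 * (z - 0.5)]

def numeric_gradient(f, pose, h=1e-6):
    # central differences: 2 * len(pose) cost evaluations per gradient
    g = []
    for i in range(len(pose)):
        plus = list(pose); plus[i] += h
        minus = list(pose); minus[i] -= h
        g.append((f(plus) - f(minus)) / (2.0 * h))
    return g
```

Agreement between the two gradients is the usual correctness check before swapping the cheaper analytic version into a gradient-based optimizer such as LevMar or IpOpt.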
Numerical verification of composite rods theory on multi-story buildings analysis
NASA Astrophysics Data System (ADS)
El-Din Mansour, Alaa; Filatov, Vladimir; Gandzhuntsev, Michael; Ryasny, Nikita
2018-03-01
The article proposes a verification of the composite rods theory as applied to the structural analysis of skeletons of high-rise buildings. A test design model was formed in which the horizontal elements are represented by a multilayer cantilever beam working in transverse bending, with the slabs connected by moment-non-transferring connections, and the vertical elements are represented by multilayer columns. These connections produce a shearing action that can be approximated by a certain shear-force function, which significantly reduces the overall degree of static indeterminacy of the structural model. The operation of the multilayer rods is described by a system of differential equations, solved numerically by the method of successive approximations. The proposed methodology is intended for preliminary calculations aimed at determining the rigidity characteristics of the structure, and for a qualitative assessment of results obtained by other methods when performing verification calculations.
A miniature Marine Aerosol Reference Tank (miniMART) as a compact breaking wave analogue
NASA Astrophysics Data System (ADS)
Stokes, M. Dale; Deane, Grant; Collins, Douglas B.; Cappa, Christopher; Bertram, Timothy; Dommer, Abigail; Schill, Steven; Forestieri, Sara; Survilo, Mathew
2016-09-01
In order to understand the processes governing the production of marine aerosols, repeatable, controlled methods for their generation are required. A new system, the miniature Marine Aerosol Reference Tank (miniMART), has been designed after the success of the original MART system, to approximate a small oceanic spilling breaker by producing an evolving bubble plume and surface foam patch. The smaller tank utilizes an intermittently plunging jet of water, produced by a rotating water wheel, in an approximately 6 L reservoir to simulate bubble plume and foam formation and to generate aerosols. This system produces bubble plumes characteristic of small whitecaps without the large external pump inherent in the original MART design. Without the pump it is possible to easily culture delicate planktonic and microbial communities in the bulk water during experiments while continuously producing aerosols for study. However, due to the reduced volume and smaller plunging jet, the absolute numbers of particles generated are approximately an order of magnitude less than in the original MART design.
NASA Technical Reports Server (NTRS)
Mule, Peter; Hill, Michael D.; Sampler, Henry P.
2000-01-01
The Microwave Anisotropy Probe (MAP) Observatory, scheduled for a fall 2000 launch, is designed to measure temperature fluctuations (anisotropy) and produce a high sensitivity and high spatial resolution (better than 0.3 deg.) map of the cosmic microwave background (CMB) radiation over the entire sky between 22 and 90 GHz. MAP utilizes back-to-back composite Gregorian telescopes supported on a composite truss structure to focus the microwave signals into 10 differential microwave receivers. Proper position and shape of the telescope reflectors at the operating temperature of approximately 90 K is a critical element to ensuring mission success. We describe the methods and analysis used to validate the in-flight position and shape predictions for the reflectors based on photogrammetric (PG) metrology data taken under vacuum with the reflectors at approximately 90 K. Contour maps showing reflector distortion analytical extrapolations were generated. The resulting reflector distortion data are shown to be crucial to the analytical assessment of the MAP instrument's microwave system in-flight performance.
NASA Technical Reports Server (NTRS)
Young, David T.
1991-01-01
This final report covers three years and several phases of work in which instrumentation for the Planetary Instrument Definition and Development Program (PIDDP) was successfully developed. There were two main thrusts to this research: (1) to develop and test methods for electrostatically scanning detector fields-of-view, and (2) to improve the mass resolution of plasma mass spectrometers to M/delta M of approximately 25, their field-of-view (FOV) to 360 degrees, and their energy range to cover approximately 1 eV to 50 keV. Prototypes of two different approaches to electrostatic scanning were built and tested: the isochronous time-of-flight (TOF) device and the linear-electric-field 3D TOF device.
Method to analyze remotely sensed spectral data
Stork, Christopher L [Albuquerque, NM; Van Benthem, Mark H [Middletown, DE
2009-02-17
A fast and rigorous multivariate curve resolution (MCR) algorithm is applied to remotely sensed spectral data. The algorithm is applicable in the solar-reflective spectral region, comprising the visible to the shortwave infrared (approximately 0.4 to 2.5 µm) and the midwave infrared, and in the thermal emission spectral region, comprising the thermal infrared (approximately 8 to 15 µm). For example, employing minimal a priori knowledge, notably non-negativity constraints on the extracted endmember profiles and a constant abundance constraint for the atmospheric upwelling component, MCR can be used to successfully compensate thermal infrared hyperspectral images for atmospheric upwelling and, thereby, transmittance effects. Further, MCR can accurately estimate the relative spectral absorption coefficients and thermal contrast distribution of a gas plume component near the minimum detectable quantity.
A quantum theoretical study of polyimides
NASA Technical Reports Server (NTRS)
Burke, Luke A.
1987-01-01
One of the most important contributions of theoretical chemistry is the correct prediction of properties of materials before any costly experimental work begins. This is especially true in the field of electrically conducting polymers. Development of the Valence Effective Hamiltonian (VEH) technique for the calculation of the band structure of polymers was initiated. The necessary VEH potentials were developed for the sulfur and oxygen atoms within the particular molecular environments and the explanation explored for the success of this approximate method in predicting the optical properties of conducting polymers.
Accelerating Learning By Neural Networks
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad; Barhen, Jacob
1992-01-01
Electronic neural networks made to learn faster by use of terminal teacher forcing. Method of supervised learning involves addition of teacher forcing functions to excitations fed as inputs to output neurons. Initially, teacher forcing functions are strong enough to force outputs to desired values; subsequently, these functions decay with time. When learning successfully completed, terminal teacher forcing vanishes, and dynamics of neural network become equivalent to those of conventional neural network. Simulated neural network with terminal teacher forcing learned to produce close approximation of circular trajectory in 400 iterations.
Apollo: a sequence annotation editor
Lewis, SE; Searle, SMJ; Harris, N; Gibson, M; Iyer, V; Richter, J; Wiel, C; Bayraktaroglu, L; Birney, E; Crosby, MA; Kaminker, JS; Matthews, BB; Prochnik, SE; Smith, CD; Tupy, JL; Rubin, GM; Misra, S; Mungall, CJ; Clamp, ME
2002-01-01
The well-established inaccuracy of purely computational methods for annotating genome sequences necessitates an interactive tool to allow biological experts to refine these approximations by viewing and independently evaluating the data supporting each annotation. Apollo was developed to meet this need, enabling curators to inspect genome annotations closely and edit them. FlyBase biologists successfully used Apollo to annotate the Drosophila melanogaster genome and it is increasingly being used as a starting point for the development of customized annotation editing tools for other genome projects. PMID:12537571
An Artificial Neural Networks Method for Solving Partial Differential Equations
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2010-09-01
While there already exist many analytical and numerical techniques for solving PDEs, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining a standard numerical method, finite differences, with the Hopfield neural network. The method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, the energy function, the updating equations, and the algorithms are developed for the method. The HFD method has been used successfully to approximate the solution of classical PDEs, such as the wave, heat, Poisson, and diffusion equations, and of a system of PDEs. The software Matlab is used to obtain the results in both tabular and graphical form. The results are similar in accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of the Hopfield net method makes it easier to implement on fast parallel computers, while some numerical methods need extra effort for parallelization.
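As a point of reference for the finite-difference half of the HFD method (the Hopfield half is omitted here, and the grid sizes are illustrative assumptions, not the paper's), a minimal explicit scheme for the heat equation u_t = u_xx can be sketched as follows.

```python
import math

def heat_explicit(nx=51, t_end=0.1, r=0.4):
    # explicit FTCS scheme on [0, 1] with u(0)=u(1)=0 and u(x,0)=sin(pi x);
    # r = dt/dx^2 must not exceed 0.5 for stability
    dx = 1.0 / (nx - 1)
    dt = r * dx * dx
    u = [math.sin(math.pi * i * dx) for i in range(nx)]
    for _ in range(int(round(t_end / dt))):
        u = ([0.0]
             + [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
                for i in range(1, nx - 1)]
             + [0.0])
    return u
```

The exact solution decays as exp(-pi^2 t), so the computed midpoint value at t = 0.1 should approach exp(-pi^2 * 0.1), which is roughly 0.373; in the HFD formulation, the same discrete equations are instead encoded in a Hopfield energy function and minimized in parallel.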
The Mars Exploration Rover (MER) Transverse Impulse Rocket System (TIRS)
NASA Technical Reports Server (NTRS)
SanMartin, Alejandro Miguel; Bailey, Erik
2005-01-01
In a very short period of time the MER project successfully developed and tested a system, TIRS/DIMES, to improve the probability of success in the presence of large Martian winds. The successful development of TIRS/DIMES played a big role in the landing site selection process by enabling the landing of Spirit on Gusev crater, a site of very high scientific interest but with known high wind conditions. The performance of TIRS by Spirit at Gusev Crater was excellent. The velocity prediction error was small, and Big TIRS was fired, reducing the impact horizontal velocity from approximately 23 meters per second to approximately 11 meters per second, well within the airbag capabilities. The performance of TIRS by Opportunity at Meridiani was good. The velocity prediction error was rather large (approximately 6 meters per second, a less-than-2-sigma value), but TIRS did not fire, which was the correct action.
Examining Management Success Potential.
ERIC Educational Resources Information Center
Quatrano, Louis A.
The derivation of a model of management success potential in hospitals or health services administration is described. A questionnaire developed to assess management success potential in health administration students was voluntarily completed by approximately 700 incoming graduate students in 35 university health services administration programs…
NASA Technical Reports Server (NTRS)
Yew, Calinda; Stephens, Matt
2015-01-01
The JWST IEC conformal shields are mounted onto a composite frame structure that must undergo qualification testing to satisfy mission assurance requirements. The composite frame segments are bonded together at the joints using epoxy, EA 9394. The development of a test method to verify the integrity of the bonded structure in its operating environment introduces challenges in terms of requirements definition and the attainment of success criteria. Even though protoflight thermal requirements were not achieved, the first attempt at exposing the structure to cryogenic operating conditions in a thermal vacuum environment resulted in the failure of one bonded joint during mechanical pull tests performed at 1.25 times the flight loads. Failure analysis concluded that the failure mode was adhesive cracks that formed and propagated along stress-concentrated fillets as a result of poor bond squeeze-out control during fabrication. Bond repairs were made and the structure was successfully re-tested with an improved LN2 immersion test method to achieve protoflight thermal requirements.
2015-01-01
False negative docking outcomes for highly symmetric molecules are a barrier to the accurate evaluation of docking programs, scoring functions, and protocols. This work describes an implementation of a symmetry-corrected root-mean-square deviation (RMSD) method into the program DOCK based on the Hungarian algorithm for solving the minimum assignment problem, which dynamically assigns atom correspondence in molecules with symmetry. The algorithm adds only a trivial amount of computation time to the RMSD calculations and is shown to increase the reported overall docking success rate by approximately 5% when tested over 1043 receptor–ligand systems. For some families of protein systems the results are even more dramatic, with success rate increases up to 16.7%. Several additional applications of the method are also presented including as a pairwise similarity metric to compare molecules during de novo design, as a scoring function to rank-order virtual screening results, and for the analysis of trajectories from molecular dynamics simulation. The new method, including source code, is available to registered users of DOCK6 (http://dock.compbio.ucsf.edu). PMID:24410429
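The effect of symmetry correction can be reproduced with a toy example. The sketch below is not the DOCK implementation: for small molecules it brute-forces the minimum over element-preserving atom assignments, which gives the same answer the Hungarian algorithm computes in polynomial time. A linear CO2-like molecule docked in a flipped orientation has a large naive RMSD but a zero symmetry-corrected one.

```python
import itertools
import math

def rmsd(a, b):
    # plain RMSD between two coordinate lists with fixed atom ordering
    n = len(a)
    return math.sqrt(sum((p - q) ** 2
                         for pa, pb in zip(a, b)
                         for p, q in zip(pa, pb)) / n)

def symmetry_corrected_rmsd(elements, a, b):
    # minimum RMSD over all element-preserving atom assignments;
    # brute-force enumeration stands in for the Hungarian algorithm
    best = float("inf")
    for perm in itertools.permutations(range(len(b))):
        if any(elements[i] != elements[j] for i, j in enumerate(perm)):
            continue
        best = min(best, rmsd(a, [b[j] for j in perm]))
    return best
```

With the oxygen atoms swapped by the flip, dynamic assignment recognizes the pose as an exact match, which is precisely how false-negative docking outcomes for symmetric molecules are avoided.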
NASA Astrophysics Data System (ADS)
Şenol, Mehmet; Alquran, Marwan; Kasmaei, Hamed Daei
2018-06-01
In this paper, we present an analytic-approximate solution of the time-fractional Zakharov-Kuznetsov equation. This model describes the behavior of weakly nonlinear ion acoustic waves in a plasma bearing cold ions and hot isothermal electrons in the presence of a uniform magnetic field. Basic definitions of fractional derivatives are given in the Caputo sense. The perturbation-iteration algorithm (PIA) and the residual power series method (RPSM) are successfully applied to solve this equation. The convergence analysis is also presented for both methods. Numerical results are given and compared with the exact solutions. Comparison of the results reveals that both methods are competitive, powerful, reliable, simple to use, and ready to apply to a wide range of fractional partial differential equations.
Optimal control of a variable spin speed CMG system for space vehicles. [Control Moment Gyros
NASA Technical Reports Server (NTRS)
Liu, T. C.; Chubb, W. B.; Seltzer, S. M.; Thompson, Z.
1973-01-01
Many future NASA programs require highly accurate pointing stability. These pointing requirements are well beyond anything attempted to date. This paper suggests a control system which has the capability of meeting these requirements. An optimal control law for the suggested system is specified. However, since no direct method of solution is known for this complicated system, a computation technique using successive approximations is used to develop the required solution. The method of the calculus of variations is applied to estimate the changes in the index of performance as well as to handle the inequality constraints on the state variables and terminal conditions. Thus, an algorithm is obtained by the steepest descent method and/or the conjugate gradient method. Numerical examples are given to show the optimal controls.
Comparison of conventional therapies for dentin hypersensitivity versus medical hypnosis.
Eitner, Stephan; Bittner, Christian; Wichmann, Manfred; Nickenig, Hans-Joachim; Sokol, Biljana
2010-10-01
This study compared the efficacy of conventional treatments for dentin hypersensitivity (DHS) and hypnotherapy. During a 1-month period at an urban practice in a service area of approximately 22,000 inhabitants, all patients were examined. A total of 102 individuals were included in the evaluation. Values of 186 teeth were analyzed. The comparison of the different treatment methods (desensitizer, fluoridation, and hypnotherapy) did not show significant differences in success rates. However, a noticeable difference was observed in terms of onset and duration of effect. For both desensitizer and hypnotherapy treatments, onset of effect was very rapid. Compared to the other methods studied, hypnotherapy effects had the longest duration. In conclusion, hypnotherapy was as effective as other methods in the treatment of DHS.
Implicit solvers for unstructured meshes
NASA Technical Reports Server (NTRS)
Venkatakrishnan, V.; Mavriplis, Dimitri J.
1991-01-01
Implicit methods were developed and tested for unstructured mesh computations. The approximate system which arises from the Newton linearization of the nonlinear evolution operator is solved by using the preconditioned GMRES (Generalized Minimum Residual) technique. Three different preconditioners were studied, namely, the incomplete LU factorization (ILU), block diagonal factorization, and the symmetric successive over relaxation (SSOR). The preconditioners were optimized to have good vectorization properties. SSOR and ILU were also studied as iterative schemes. The various methods are compared over a wide range of problems. Ordering of the unknowns, which affects the convergence of these sparse matrix iterative methods, is also studied. Results are presented for inviscid and turbulent viscous calculations on single and multielement airfoil configurations using globally and adaptively generated meshes.
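As a reminder of how SSOR works as a stand-alone iterative scheme, here is a textbook sketch in dense form (our own illustration, not the paper's vectorized preconditioner): each iteration is a forward SOR sweep followed by a backward sweep, which makes the induced splitting symmetric.

```python
def ssor_solve(A, b, omega=1.2, iters=200):
    # symmetric successive over-relaxation for a small dense system A x = b;
    # assumes nonzero diagonal, converges for SPD A with 0 < omega < 2
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for order in (range(n), range(n - 1, -1, -1)):  # forward, then backward
            for i in order:
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                # relaxed Gauss-Seidel update: x_i <- (1-w) x_i + w (b_i - s)/a_ii
                x[i] += omega * ((b[i] - s) / A[i][i] - x[i])
    return x
```

Used as a preconditioner for GMRES, one applies a fixed small number of such sweeps (often a single symmetric sweep) to each residual rather than iterating to convergence.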
Extension of transonic flow computational concepts in the analysis of cavitated bearings
NASA Technical Reports Server (NTRS)
Vijayaraghavan, D.; Keith, T. G., Jr.; Brewe, D. E.
1990-01-01
An analogy between the mathematical modeling of transonic potential flow and the flow in a cavitating bearing is described. Based on the similarities, characteristics of the cavitated region and jump conditions across the film reformation and rupture fronts are developed using the method of weak solutions. The mathematical analogy is extended by utilizing a few computational concepts of transonic flow to numerically model the cavitating bearing. Methods of shock fitting and shock capturing are discussed. Various procedures used in transonic flow computations are adapted to bearing cavitation applications, for example, type differencing, grid transformation, an approximate factorization technique, and Newton's iteration method. These concepts have proved to be successful and have vastly improved the efficiency of numerical modeling of cavitated bearings.
Simple Skin-Stretching Device in Assisted Tension-Free Wound Closure.
Cheng, Li-Fu; Lee, Jiunn-Tat; Hsu, Honda; Wu, Meng-Si
2017-03-01
Numerous conventional wound reconstruction methods, such as wound undermining with direct suture, skin graft, and flap surgery, can be used to treat large wounds. The adequate undermining of the skin flaps of a wound is a commonly used technique for achieving the closure of large tension wounds; however, the use of tension to approximate and suture the skin flaps can cause ischemic marginal necrosis. The purpose of this study was to use elastic rubber bands to relieve the tension of direct wound closure while simultaneously minimizing the risks of wound dehiscence and wound-edge ischemia leading to necrosis. This retrospective study evaluated our clinical experience with 22 large wounds that underwent primary closure under a considerable amount of tension, using elastic rubber bands in a skin-stretching technique after a wide undermining procedure. Assessment of the results entailed complete wound healing and related complications. All but one of the 22 wounds in our study showed fair to good results, a success rate of approximately 95.45%. The simple skin-stretching design enabled tension-free skin closure, pulling the bilaterally undermined skin flaps together as bilateral fasciocutaneous advancement flaps. The skin-stretching technique was generally successful.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1992-01-01
Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.
Soft-tissue coverage of the neural elements after myelomeningocele repair.
Seidel, S B; Gardner, P M; Howard, P S
1996-09-01
We retrospectively reviewed all newborns with a diagnosis of myelomeningocele (MMC) admitted to our hospital between January 1990 and September 1994 to determine methods of soft-tissue coverage, complication rates, and results. Sixty-five patients underwent repair of thoracic, lumbar, or sacral MMCs. The average size of defect repaired measured 21.3 cm2 (range, 2-80 cm2). Methods of repair included direct approximation of soft tissues with or without undermining (N = 48), Limberg (rhomboid) flaps (N = 8), gluteus maximus or latissimus dorsi musculocutaneous flaps (N = 5), fasciocutaneous flaps (N = 3), and V-Y advancement (N = 1). A total of 18 complications were recorded (27.7%). There were 5 major complications (7.7%) and 13 minor ones (20.0%). Major complications were defined as midline wound dehiscence overlying the neural elements or wound infection leading to meningitis or ventriculitis. All 5 major and 9 minor complications arose in patients undergoing direct soft-tissue approximation. Additionally, all major complications were recorded in defects > 18 cm2. Based on this series, it appears that MMC defects < 18 cm2 can be closed by direct approximation of soft tissues without significant risk of major wound complication. Larger wounds may be successfully closed in this manner, but the risk of major complication is substantial.
Sternal approximation for bilateral anterolateral transsternal thoracotomy for lung transplantation.
McGiffin, David C; Alonso, Jorge E; Zorn, George L; Kirklin, James K; Young, K Randall; Wille, Keith M; Leon, Kevin; Hart, Katherine
2005-02-01
The traditional incision for bilateral sequential lung transplantation is the bilateral anterolateral transsternal thoracotomy with approximation of the sternal fragments with interrupted stainless steel wire loops; this technique may be associated with an unacceptable incidence of postoperative sternal disruption causing chronic pain and deformity. Approximation of the sternal ends was achieved with peristernal cables that passed behind the sternum two intercostal spaces above and below the sternal division, which were then passed through metal sleeves in front of the sternum, the cables tensioned, and the sleeves then crimped. Forty-seven patients underwent sternal closure with this method, and satisfactory bone union occurred in all patients. Six patients underwent removal of the peristernal cables: 1 for infection (with satisfactory bone union after the removal of the cables), 3 for cosmetic reasons, 1 during the performance of a median sternotomy for an aortic valve replacement, and 1 in a patient who requested removal before commencing participation in football. This technique of peristernal cable approximation of sternal ends has successfully eliminated the problem of sternal disruption associated with this incision and is a useful alternative for preventing this complication after bilateral lung transplantation.
Carman, Christián; Díez, José
2015-08-01
The goal of this paper, both historical and philosophical, is to launch a new case into the scientific realism debate: geocentric astronomy. Scientific realism about unobservables claims that the non-observational content of our successful/justified empirical theories is true, or approximately true. The argument that is currently considered the best in favor of scientific realism is the No Miracles Argument: the predictive success of a theory that makes (novel) observational predictions while making use of non-observational content would be inexplicable unless such non-observational content approximately corresponds to the world "out there". Laudan's pessimistic meta-induction challenged this argument, and realists reacted by moving to a "selective" version of realism: the approximately true part of the theory is not its full non-observational content but only the part of it that is responsible for the novel, successful observational predictions. Selective scientific realism has been tested against some of the theories in Laudan's list, but the first member of this list, geocentric astronomy, has been traditionally ignored. Our goal here is to defend that Ptolemy's Geocentrism deserves attention and poses a prima facie strong case against selective realism, since it made several successful, novel predictions based on theoretical hypotheses that do not seem to be retained, not even approximately, by posterior theories. Here, though, we confine our work just to the detailed reconstruction of what we take to be the main novel, successful Ptolemaic predictions, leaving the full analysis and assessment of their significance for the realist thesis to future works.
Descriptive study of the Socratic method: evidence for verbal shaping.
Calero-Elvira, Ana; Froján-Parga, María Xesús; Ruiz-Sancho, Elena María; Alpañés-Freitag, Manuel
2013-12-01
In this study we analyzed 65 fragments of session recordings in which a cognitive behavioral therapist employed the Socratic method with her patients. Specialized coding instruments were used to categorize the verbal behavior of the psychologist and the patients. First the fragments were classified as more or less successful depending on the overall degree of concordance between the patient's verbal behavior and the therapeutic objectives. Then the fragments were submitted to sequential analysis so as to discover regularities linking the patient's verbal behavior and the therapist's responses to it. Important differences between the more and the less successful fragments involved the therapist's approval or disapproval of verbalizations that approximated therapeutic goals. These approvals and disapprovals were associated with increases and decreases, respectively, in the patient's behavior. These results are consistent with the existence, in this particular case, of a process of shaping through which the therapist modifies the patient's verbal behavior in the overall direction of his or her chosen therapeutic objectives.
Zhai, Peng-Wang; Hu, Yongxiang; Trepte, Charles R; Lucker, Patricia L
2009-02-16
A vector radiative transfer model has been developed for coupled atmosphere and ocean systems based on the Successive Order of Scattering (SOS) method. The emphasis of this study is to make the model easy to use and computationally efficient. The model provides the full Stokes vector at arbitrary locations, which can be conveniently specified by users, and is capable of tracking and labeling the different sources of the photons that are measured, e.g. water-leaving radiance and reflected skylight. The model can also separate fluorescence from multiply scattered sunlight. The delta-fit technique has been adopted to reduce the computational time associated with strongly forward-peaked scattering phase matrices. The exponential-linear approximation has been used to reduce the number of discretized vertical layers while maintaining accuracy. This model is developed to serve the remote sensing community in harvesting physical parameters from multi-platform, multi-sensor measurements that target different components of the atmosphere-ocean system.
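The core idea of the successive-order-of-scattering approach can be illustrated on a deliberately simplified toy problem. The sketch below (an assumption for illustration only, not the vector model's actual equations) sums the diffuse intensity over scattering orders, where each additional scattering event contributes a factor of the single-scattering albedo `w`:

```python
# Hedged sketch of the successive-order-of-scattering (SOS) idea on a
# toy scalar problem: total diffuse intensity as a sum over scattering
# orders. Each higher order is damped by the single-scattering albedo w.
# This is a stand-in illustration, not the coupled ocean-atmosphere model.

def sos_total(w, first_order, max_orders=100, tol=1e-12):
    total, order_n = 0.0, first_order
    for _ in range(max_orders):
        total += order_n          # accumulate this scattering order
        if order_n < tol:         # higher orders are negligible: stop
            break
        order_n *= w              # next order carries one more albedo factor
    return total

# With constant damping the series is geometric: total -> first_order / (1 - w).
total = sos_total(0.5, 1.0)
```

The truncation criterion mirrors the practical convergence test in SOS codes: stop adding orders once their contribution falls below a tolerance.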
NASA Astrophysics Data System (ADS)
Sehati, N.; Tavassoly, M. K.
2017-08-01
Inspired by the scheme proposed by Zheng (Phys Rev A 69:064302, 2004), our aim is to teleport an unknown atomic qubit state using the cavity QED method without explicit Bell-state measurement, so that no additional atom is required. Two identical Λ-type three-level atoms interact separately and sequentially with a two-mode quantized cavity field, where each mode contains a single-photon field state. The interaction between atoms and field is well described by the Jaynes-Cummings model. It is then shown how, if the atomic detection yields a particular state of atom 1, an unknown state can be appropriately teleported from atom 1 to atom 2. This teleportation procedure leads to a high fidelity F in the range 69% ≲ F ≲ 100%, with success probability 0.14 ≲ P_g ≲ 0.56. Finally, we show that our scheme considerably improves on similar previous proposals.
Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve
Fong, Youyi; Yin, Shuxin; Huang, Ying
2016-01-01
In biomedical studies, it is often of interest to classify or predict a subject's disease status based on a variety of biomarker measurements. A commonly used classification criterion is the area under the receiver operating characteristic curve (AUC). Many methods have been proposed to optimize approximated empirical AUC criteria, but existing methods have two limitations. First, most are designed only to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often terminate in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function and finds the best combination by a difference-of-convex-functions algorithm. We show that, as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data are generated from a semiparametric generalized linear model, just as the smoothed AUC (SAUC) method does. Through simulation studies and real data examples, we demonstrate that RAUC outperforms SAUC in finding the best linear marker combinations and can successfully capture nonlinear patterns in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
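The ramp surrogate for the empirical AUC loss can be sketched in a few lines. The following is a minimal illustration, not the paper's RAUC estimator: the function names, the toy data, and the slope parameter `s` are assumptions, and the difference-of-convex optimization step is omitted — only the pairwise loss being optimized is shown:

```python
# Hedged sketch: empirical AUC of a linear marker combination, and a
# ramp surrogate for the (non-smooth) 1 - AUC pairwise loss. All names
# and the toy data below are illustrative assumptions.

def linear_score(beta, x):
    return sum(b * xi for b, xi in zip(beta, x))

def empirical_auc(beta, cases, controls):
    # Fraction of (case, control) pairs ranked correctly; ties count 1/2.
    pairs = [(linear_score(beta, c), linear_score(beta, d))
             for c in cases for d in controls]
    wins = sum(1.0 if sc > sd else 0.5 if sc == sd else 0.0
               for sc, sd in pairs)
    return wins / len(pairs)

def ramp(u, s=1.0):
    # Ramp function: 0 below -s, 1 above 0, linear in between.
    return min(1.0, max(0.0, 1.0 + u / s))

def ramp_auc_loss(beta, cases, controls, s=1.0):
    # Surrogate for 1 - AUC: each pair contributes ramp(control - case),
    # which is bounded (unlike hinge loss) and zero for well-separated pairs.
    pairs = [(linear_score(beta, c), linear_score(beta, d))
             for c in cases for d in controls]
    return sum(ramp(sd - sc, s) for sc, sd in pairs) / len(pairs)

cases = [(2.0, 1.0), (1.5, 0.5)]      # markers for diseased subjects
controls = [(0.5, 0.2), (0.1, 0.3)]   # markers for healthy subjects
beta = (1.0, 1.0)                     # candidate linear combination
```

Because the ramp is bounded, a single badly mis-ranked pair cannot dominate the objective, which is part of the motivation for this surrogate over unbounded ones.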
Twenty Years of Endodontic Success and Failure at West Virginia University.
1982-01-01
Table of contents excerpts: Analysis of Success and Failure - Apical Termination of Filling Material; Analysis of Success and Failure - Posttreatment Restoration. Abstract fragments (truncated in the source): ...system with healthy periapical and periodontal tissues. The dentist must reduce or eliminate toxic or irritating substances from within the root canals to... adequate for evaluating endodontic success and that success should be based on the radiographic presence of a periodontal membrane space of approximately...
Omori, Satoshi; Kitao, Akio
2013-06-01
We propose a fast clustering and reranking method, CyClus, for protein-protein docking decoys. This method enables comprehensive clustering of whole decoy sets generated by rigid-body docking, using a cylindrical approximation of the protein-protein interface and hierarchical clustering procedures. We demonstrate the clustering and reranking of 54,000 decoy structures generated by ZDOCK for each complex within a few minutes. After parameter tuning on the test set in ZDOCK benchmark 2.0 with the ZDOCK and ZRANK scoring functions, blind tests on the incremental data in ZDOCK benchmarks 3.0 and 4.0 were conducted. CyClus successfully generated smaller subsets of decoys containing near-native decoys. For example, the number of decoys required to create subsets containing near-native decoys with 80% probability was reduced to between 22% and 50% of the number required by the original ZDOCK ranking. Although specific ZDOCK and ZRANK results were demonstrated, the CyClus algorithm was designed to be more general and can be applied to a wide range of decoys and scoring functions by adjusting just two parameters, p and T. CyClus results were also compared to those from ClusPro. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Roberts, Brenden; Vidick, Thomas; Motrunich, Olexei I.
2017-12-01
The success of polynomial-time tensor network methods for computing ground states of certain quantum local Hamiltonians has recently been given a sound theoretical basis by Arad et al. [Commun. Math. Phys. 356, 65 (2017), 10.1007/s00220-017-2973-z]. The convergence proof, however, relies on "rigorous renormalization group" (RRG) techniques which differ fundamentally from existing algorithms. We introduce a practical adaptation of the RRG procedure which, while no longer theoretically guaranteed to converge, finds matrix product state ansatz approximations to the ground spaces and low-lying excited spectra of local Hamiltonians in realistic situations. In contrast to other schemes, RRG does not utilize variational methods on tensor networks. Rather, it operates on subsets of the system Hilbert space by constructing approximations to the global ground space in a treelike manner. We evaluate the algorithm numerically, finding similar performance to the density matrix renormalization group (DMRG) in the case of a gapped nondegenerate Hamiltonian. Even in challenging situations of criticality, large ground-state degeneracy, or long-range entanglement, RRG remains able to identify candidate states having large overlap with ground and low-energy eigenstates, outperforming DMRG in some cases.
Parameter Estimation for a Turbulent Buoyant Jet Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Christopher, Jason D.; Wimer, Nicholas T.; Hayden, Torrey R. S.; Lapointe, Caelan; Grooms, Ian; Rieker, Gregory B.; Hamlington, Peter E.
2016-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other "truth" data to be used for the prediction of unknown model parameters in numerical simulations of real-world engineering systems. In this presentation, we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a simulation with known boundary conditions and problem parameters. Using spatially-sparse temperature statistics from the 2D buoyant jet truth simulation, we show that the ABC method provides accurate predictions of the true jet inflow temperature. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for engineering fluid dynamics research.
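The rejection flavor of ABC described above can be sketched compactly. In the toy version below (an illustrative assumption, not the authors' buoyant-jet setup), the expensive flow simulation is replaced by a Gaussian "simulator" whose mean plays the role of the unknown inflow parameter; draws from a uniform prior are accepted only when the simulated summary statistic lands within a tolerance of the observed one:

```python
import random

# Hedged sketch of rejection ABC for one unknown parameter. The 2D
# turbulent-jet simulation is replaced by a toy "simulator"; all names
# and tolerances here are illustrative assumptions.

def simulator(theta, n, rng):
    # Stand-in for the expensive simulation: a sparse "temperature
    # statistic" computed as the mean of n noisy probe readings.
    return sum(rng.gauss(theta, 1.0) for _ in range(n)) / n

def abc_rejection(observed, prior_low, prior_high, eps, n_probes,
                  n_draws, seed=0):
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(prior_low, prior_high)   # draw from prior
        summary = simulator(theta, n_probes, rng)    # run the model
        if abs(summary - observed) < eps:            # distance check
            accepted.append(theta)                   # keep this draw
    return accepted  # samples approximating the posterior

# "Truth" data from a run with known parameter theta* = 5.0, mirroring
# the abstract's use of a known-parameter simulation as truth data.
rng = random.Random(42)
observed = simulator(5.0, 50, rng)
posterior = abc_rejection(observed, 0.0, 10.0, 0.3, 50, 2000)
estimate = sum(posterior) / len(posterior)
```

Shrinking the tolerance `eps` tightens the approximation to the true posterior at the cost of a lower acceptance rate, which is the central trade-off in rejection ABC.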
Range-Separated Brueckner Coupled Cluster Doubles Theory
NASA Astrophysics Data System (ADS)
Shepherd, James J.; Henderson, Thomas M.; Scuseria, Gustavo E.
2014-04-01
We introduce a range-separation approximation to coupled cluster doubles (CCD) theory that successfully overcomes limitations of regular CCD when applied to the uniform electron gas. We combine the short-range ladder channel with the long-range ring channel in the presence of a Brueckner-renormalized one-body interaction and obtain ground-state energies with an accuracy of 0.001 a.u./electron across a wide range of density regimes. Our scheme is particularly useful in the low-density and strongly correlated regimes, where regular CCD has serious drawbacks. Moreover, we cure the infamous overcorrelation of approaches based on ring diagrams (i.e., the particle-hole random phase approximation). Our energies are further shown to have appropriate basis set and thermodynamic limit convergence, and overall this scheme promises energetic properties for realistic periodic and extended systems which existing methods do not possess.
NASA Astrophysics Data System (ADS)
Smith, Leigh
2015-03-01
I will describe methods used at the University of Cincinnati to enhance student success in an algebra-based physics course. The first method is to use ALEKS, an adaptive online mathematics tutorial engine, before the term begins. Approximately three to four weeks before the beginning of the term, the professor in the course emails all of the students, informing them of the possibility of improving their math proficiency by using ALEKS. Using only a minimal reward on homework, we have achieved a 70% response rate, with students spending an average of 8 hours working on their math skills before classes start. The second method is to use a flipped classroom approach. The class of 135 meets in a tiered classroom twice per week for two hours. Over the previous weekend students spend approximately 2 hours reading the book, taking short multiple-choice conceptual quizzes, and viewing videos covering the material. In class, students use Learning Catalytics to work through homework problems in groups, guided by the instructor and one learning assistant. Using these interventions, we have reduced the student DWF rate (the fraction of students receiving a D or lower in the class) from a historical average of 35%-40% to less than 20%.
Modal kinematics for multisection continuum arms.
Godage, Isuru S; Medrano-Cerda, Gustavo A; Branson, David T; Guglielmino, Emanuele; Caldwell, Darwin G
2015-05-13
This paper presents a novel spatial kinematic model for multisection continuum arms based on mode shape functions (MSFs). Modal methods have been used in many disciplines, from finite element methods to structural analysis, to approximate complex and nonlinear parametric variations with simple mathematical functions. Given certain constraints and required accuracy, this helps to simplify complex phenomena with numerically efficient implementations leading to fast computations. A successful application of modal approximation techniques to develop a new modal kinematic model for general variable-length multisection continuum arms is discussed. The proposed method solves the limitations associated with previous models and introduces a new approach for readily deriving exact, singularity-free and unique MSFs that simplifies the approach and avoids mode switching. The model is able to simulate spatial bending as well as straight-arm motions (i.e., pure elongation/contraction), and introduces inverse position and orientation kinematics for multisection continuum arms. A kinematic decoupling feature, splitting position and orientation inverse kinematics, is introduced. This type of decoupling has not been presented for these types of robotic arms before. The model also carefully accounts for physical constraints in the joint space to provide enhanced insight into practical mechanics and to impose actuator mechanical limitations onto the kinematics, thus generating fully realizable results. The proposed method is easily applicable to a broad spectrum of continuum arm designs.
Langford, Seth T.; Wiggins, Cody S.; Santos, Roque; ...
2017-07-06
A method for Positron Emission Particle Tracking (PEPT) based on optical feature point identification techniques is demonstrated for use in low activity tracking experiments. A population of yeast cells of approximately 125,000 members is activated to roughly 55 Bq/cell by 18F uptake. An in vitro particle tracking experiment is performed with nearly 20 of these cells after decay to 32 Bq/cell. These cells are successfully identified and tracked simultaneously in this experiment. Our work extends the applicability of PEPT as a cell tracking method by allowing a number of cells to be tracked together, and demonstrating tracking for very low activity tracers.
Bindu, G; Semenov, S
2013-01-01
This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system having the transceivers modelled using thin wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction with the extremity imaging being done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time domain Maxwell's equations with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% noise inclusion and successful image reconstruction has been shown implying its robustness.
Application of Conjugate Gradient methods to tidal simulation
Barragy, E.; Carey, G.F.; Walters, R.A.
1993-01-01
A harmonic decomposition technique is applied to the shallow water equations to yield a complex, nonsymmetric, nonlinear, Helmholtz-type problem for the sea surface and an accompanying complex, nonlinear diagonal problem for the velocities. The equation for the sea surface is linearized using successive approximation and then discretized with linear, triangular finite elements. The study focuses on applying iterative methods to solve the resulting complex linear systems. The comparative evaluation includes both standard iterative methods for the real subsystems and complex versions of the well-known Bi-Conjugate Gradient and Bi-Conjugate Gradient Squared methods. Several incomplete LU-type preconditioners are discussed, and the effects of node ordering, rejection strategy, domain geometry and Coriolis parameter (affecting asymmetry) are investigated. Implementation details for the complex case are discussed. Performance studies are presented and comparisons made with a frontal solver. © 1993.
System analysis of plasma centrifuges and sputtering
NASA Technical Reports Server (NTRS)
Hong, S. H.
1978-01-01
System analyses of cylindrical plasma centrifuges are presented, for which the velocity field and electromagnetic fields are calculated. The effects of different electrode geometries, induced magnetic fields, the Hall effect, and secondary flows are discussed. It is shown that speeds of 10,000 m/s can be achieved in plasma centrifuges, and that an efficient separation of U-238 and U-235 in uranium plasmas is feasible. The external boundary-value problem for the deposition of sputtering products is reduced to a Fredholm integral equation, which is solved analytically by means of the method of successive approximations.
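The method of successive approximations for a Fredholm integral equation of the second kind can be illustrated numerically. The sketch below (an assumption for illustration; the kernel and right-hand side are toy choices, not the sputtering deposition problem) iterates phi_{k+1}(x) = f(x) + lam * integral of K(x,t) phi_k(t) dt, the Neumann series, with a trapezoidal quadrature:

```python
# Hedged sketch: successive approximations (Neumann series) for a
# Fredholm integral equation of the second kind,
#     phi(x) = f(x) + lam * int_0^1 K(x, t) phi(t) dt,
# discretized with the composite trapezoidal rule. Toy kernel and f.

def solve_fredholm(K, f, lam, n=101, iters=50):
    h = 1.0 / (n - 1)
    xs = [i * h for i in range(n)]
    phi = [f(x) for x in xs]                  # phi_0 = f
    for _ in range(iters):                    # successive approximations
        new = []
        for x in xs:
            # trapezoidal rule: endpoints weighted by 1/2
            integral = sum(
                (0.5 if j in (0, n - 1) else 1.0) * K(x, xs[j]) * phi[j]
                for j in range(n)) * h
            new.append(f(x) + lam * integral)
        phi = new
    return xs, phi

# Toy problem with a known solution: K(x,t) = x*t, f(x) = x, lam = 0.5
# gives phi(x) = x / (1 - lam/3) = 1.2 * x exactly.
xs, phi = solve_fredholm(lambda x, t: x * t, lambda x: x, 0.5)
```

The iteration converges whenever |lam| times the kernel norm is below 1, which is the same contraction condition that justifies the analytical Neumann-series solution mentioned in the abstract.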
Circular current loops, magnetic dipoles and spherical harmonic analysis.
Alldredge, L.R.
1980-01-01
Spherical harmonic analysis (SHA) is the most used method of describing the Earth's magnetic field, even though spherical harmonic coefficients (SHC) almost completely defy interpretation in terms of real sources. Some moderately successful efforts have been made to represent the field in terms of dipoles placed in the core in an effort to have the model come closer to representing real sources. Dipole sources are only a first approximation to the real sources which are thought to be a very complicated network of electrical currents in the core of the Earth. -Author
Investigation of Test Methods, Material Properties, and Processes for Solar Cell Encapsulants
NASA Technical Reports Server (NTRS)
Willis, P. B.; Baum, B.
1979-01-01
The reformulation of a commercial grade of ethylene/vinyl acetate copolymer for use as a pottant in solar cell module manufacture was investigated. Potentially successful formulations were prepared by compounding the raw polymer with antioxidants, ultraviolet absorbers and crosslinking agents to yield stabilized and curable compositions. The resulting elastomer was found to offer low cost (approximately $0.80/lb.), low temperature processability, high transparency (91% transmission), and low modulus. Cured specimens of the final formulation endured 4000 hours of fluorescent sunlamp radiation without change which indicates excellent stability.
Improved numerical solutions for chaotic-cancer-model
NASA Astrophysics Data System (ADS)
Yasir, Muhammad; Ahmad, Salman; Ahmed, Faizan; Aqeel, Muhammad; Akbar, Muhammad Zubair
2017-01-01
In the biological sciences, the dynamical system of the cancer model is well known for its sensitivity and chaotic behavior. The present work provides a detailed computational study of the cancer model, counterbalancing its sensitive dependence on initial conditions and parameter values. The chaotic cancer model is discretized into a system of nonlinear equations that are solved using the well-known Successive-Over-Relaxation (SOR) method with proven convergence. This technique makes it possible to solve large systems and provides a more accurate approximation, which is illustrated through tables, time-history maps and phase portraits with detailed analysis.
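The SOR iteration itself is compact. As a minimal sketch (the discretized cancer model is replaced here by a small diagonally dominant linear system, an assumption purely for illustration), each unknown is updated in turn using the latest values, with an over-relaxation factor omega accelerating convergence:

```python
# Hedged sketch of Successive-Over-Relaxation (SOR). The toy system
# below stands in for the discretized model; omega > 1 over-relaxes
# the Gauss-Seidel update.

def sor(A, b, omega=1.25, tol=1e-10, max_iter=10000):
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(n):
            # sum of off-diagonal terms using the freshest values of x
            sigma = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i][i]
            max_delta = max(max_delta, abs(x_new - x[i]))
            x[i] = x_new
        if max_delta < tol:       # converged: updates are negligible
            break
    return x

# Diagonally dominant tridiagonal test system with solution (1, 2, 3).
A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = sor(A, b)
```

For a nonlinear discretization, the same sweep is typically embedded in an outer linearization loop; omega = 1 recovers plain Gauss-Seidel.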
On beam models and their paraxial approximation
NASA Astrophysics Data System (ADS)
Waters, W. J.; King, B.
2018-01-01
We derive focused laser pulse solutions to the electromagnetic wave equation in vacuum. After reproducing beam and pulse expressions for the well-known paraxial Gaussian and axicon cases, we apply the method to analyse a laser beam with Lorentzian transverse momentum distribution. Whilst a paraxial approach has some success close to the focal axis and within a Rayleigh range of the focal spot, we find that it incorrectly predicts the transverse fall-off typical of a Lorentzian. Our vector-potential approach is particularly relevant to calculation of quantum electrodynamical processes in weak laser pulse backgrounds.
Estimating groundwater recharge
Healy, Richard W.; Scanlon, Bridget R.
2010-01-01
Understanding groundwater recharge is essential for successful management of water resources and modeling fluid and contaminant transport within the subsurface. This book provides a critical evaluation of the theory and assumptions that underlie methods for estimating rates of groundwater recharge. Detailed explanations of the methods are provided - allowing readers to apply many of the techniques themselves without needing to consult additional references. Numerous practical examples highlight benefits and limitations of each method. Approximately 900 references allow advanced practitioners to pursue additional information on any method. For the first time, theoretical and practical considerations for selecting and applying methods for estimating groundwater recharge are covered in a single volume with uniform presentation. Hydrogeologists, water-resource specialists, civil and agricultural engineers, earth and environmental scientists and agronomists will benefit from this informative and practical book. It can serve as the primary text for a graduate-level course on groundwater recharge or as an adjunct text for courses on groundwater hydrology or hydrogeology.
NASA Technical Reports Server (NTRS)
Rzasnicki, W.
1973-01-01
A method of solution is presented which, when applied to the elasto-plastic analysis of plates having a v-notch on one edge and subjected to pure bending, produces stress and strain fields in much greater detail than presently available. Application of the boundary integral equation method results in two coupled Fredholm-type integral equations, subject to prescribed boundary conditions. These equations are replaced by a system of simultaneous algebraic equations and solved by a successive approximation method employing Prandtl-Reuss incremental plasticity relations. The method is first applied to a number of elasto-static problems and the results compared with available solutions. Good agreement is obtained in all cases. The elasto-plastic analysis provides detailed stress and strain distributions for several cases of plates with various notch angles and notch depths. A strain-hardening material is assumed and both plane strain and plane stress conditions are considered.
MacDonald, G; Mackenzie, J A; Nolan, M; Insall, R H
2016-03-15
In this paper, we devise a moving mesh finite element method for the approximate solution of coupled bulk-surface reaction-diffusion equations on an evolving two dimensional domain. Fundamental to the success of the method is the robust generation of bulk and surface meshes. For this purpose, we use a novel moving mesh partial differential equation (MMPDE) approach. The developed method is applied to model problems with known analytical solutions; these experiments indicate second-order spatial and temporal accuracy. Coupled bulk-surface problems occur frequently in many areas; in particular, in the modelling of eukaryotic cell migration and chemotaxis. We apply the method to a model of the two-way interaction of a migrating cell in a chemotactic field, where the bulk region corresponds to the extracellular region and the surface to the cell membrane.
Accuracy of theory for calculating electron impact ionization of molecules
NASA Astrophysics Data System (ADS)
Chaluvadi, Hari Hara Kumar
The study of electron impact single ionization of atoms and molecules has provided valuable information about fundamental collisions. The most detailed information is obtained from triple differential cross sections (TDCS) in which the energy and momentum of all three final state particles are determined. These cross sections are much more difficult for theory since the detailed kinematics of the experiment become important. There are many theoretical approximations for ionization of molecules. One of the successful methods is the molecular 3-body distorted wave (M3DW) approximation. One of the strengths of the DW approximation is that it can be applied for any energy and any size molecule. One of the approximations that has been made to significantly reduce the required computer time is the OAMO (orientation averaged molecular orbital) approximation. In this dissertation, the accuracy of the M3DW-OAMO is tested for different molecules. Surprisingly, the M3DW-OAMO approximation yields reasonably good agreement with experiment for ionization of H2 and N2. On the other hand, the M3DW-OAMO results for ionization of CH4, NH3 and DNA derivative molecules did not agree very well with experiment. Consequently, we proposed the M3DW with a proper average (PA) calculation. In this dissertation, it is shown that the M3DW-PA calculations for CH4 and SF6 are in much better agreement with experimental data than the M3DW-OAMO results.
Computational alternatives to obtain time optimal jet engine control. M.S. Thesis
NASA Technical Reports Server (NTRS)
Basso, R. J.; Leake, R. J.
1976-01-01
Two computational methods are presented for determining an open-loop time-optimal control sequence for a simple single-spool turbojet engine described by a set of nonlinear differential equations. Both methods are modifications of widely accepted algorithms which can solve fixed-time unconstrained optimal control problems with a free right end. The constrained problems considered here have fixed right ends and free time. Dynamic programming is defined on a standard problem and yields a successive-approximation solution to the time-optimal problem of interest. A feedback control law is obtained and then used to determine the corresponding open-loop control sequence. The Fletcher-Reeves conjugate gradient method has been selected for adaptation to solve a nonlinear optimal control problem with state-variable and control constraints.
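The dynamic-programming route to a minimum-time feedback law can be sketched on a drastically simplified system. In the toy below (an assumption for illustration; the turbojet dynamics are replaced by a one-dimensional integrator x_{k+1} = x_k + u with u in {-1, 0, 1}), repeated value-iteration sweeps successively approximate the minimum time-to-target, and the arg-min control at each state is exactly the feedback law from which an open-loop sequence can be read off:

```python
# Hedged sketch: successive-approximation dynamic programming for a
# minimum-time problem on a toy discretized 1-D system. The engine
# model is replaced by x_{k+1} = clamp(x_k + u), u in {-1, 0, 1}.

def min_time_dp(n_states, target, sweeps=50):
    INF = float("inf")
    V = [INF] * n_states          # V[s] = min steps from s to target
    V[target] = 0.0
    policy = [0] * n_states       # feedback law: best u at each state
    for _ in range(sweeps):       # successive approximation sweeps
        changed = False
        for s in range(n_states):
            for u in (-1, 0, 1):
                nxt = min(n_states - 1, max(0, s + u))   # clamp to grid
                cost = (0.0 if s == target else 1.0) + V[nxt]
                if cost < V[s] - 1e-12:
                    V[s], policy[s] = cost, u
                    changed = True
        if not changed:           # value function has converged
            break
    return V, policy

V, policy = min_time_dp(11, target=7)
```

Rolling the policy forward from any initial state yields the corresponding open-loop control sequence, mirroring the abstract's feedback-law-first procedure.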
Design of compact freeform lens for application specific Light-Emitting Diode packaging.
Wang, Kai; Chen, Fei; Liu, Zongyuan; Luo, Xiaobing; Liu, Sheng
2010-01-18
Application specific LED packaging (ASLP) is an emerging technology for high performance LED lighting. We introduce a practical design method of compact freeform lenses for extended sources used in ASLP. A new ASLP for road lighting was successfully obtained by integrating a polycarbonate compact freeform lens of small form factor with traditional LED packaging. Optical performance of the ASLP was investigated by both numerical simulation based on the Monte Carlo ray tracing method and experiments. Results demonstrated that, compared with a traditional LED module integrated with secondary optics, the ASLP had the advantages of much smaller volume (approximately 1/8), higher system lumen efficiency (approximately 8.1%), lower cost and more convenience for customers to design and assemble, enabling possible much wider applications of LEDs for general road lighting. Tolerance analyses were also conducted. Installation errors of horizontal and vertical deviation had more effect on the shape and uniformity of the radiation pattern than rotational deviation. The tolerances of horizontal, vertical and rotational deviations of this lens were 0.11 mm, 0.14 mm and 2.4 degrees respectively, which were acceptable in engineering.
Stability of aspartame and neotame in pasteurized and in-bottle sterilized flavoured milk.
Kumari, Anuradha; Choudhary, Sonika; Arora, Sumit; Sharma, Vivek
2016-04-01
Analytical high performance liquid chromatography (HPLC) conditions were standardized along with the isolation procedure for separation of aspartame and neotame in flavoured milk (pasteurized and in-bottle sterilized flavoured milk). The recovery of the method was approximately 98% for both aspartame and neotame. The proposed HPLC method can be successfully used for the routine determination of aspartame and neotame in flavoured milk. Pasteurization (90 °C/20 min) resulted in approximately 40% loss of aspartame and only 8% of neotame was degraded. On storage (4-7°C/7 days) aspartame and neotame content decreased significantly (P<0.05) from 59.70% to 44.61% and 91.78% to 87.18%, respectively. Sterilization (121 °C/15 min) resulted in complete degradation of aspartame; however, 50.50% of neotame remained intact. During storage (30 °C/60 days) neotame content decreased significantly (P<0.05) from 50.36% to 8.67%. Results indicated that neotame exhibited better stability than aspartame in both pasteurized and in-bottle sterilized flavoured milk. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hwang, Seok Won; Lee, Ho-Jun; Lee, Hae June
2014-12-01
Fluid models have been widely and successfully used in high-pressure plasma simulations, where the drift-diffusion and local-field approximations are valid. However, fluid models cannot capture the non-local effects associated with the large electron energy-relaxation mean free path in low-pressure plasmas. To overcome this weakness, a hybrid model coupling the electron Monte Carlo collision (EMCC) method with the fluid model is introduced to obtain precise electron energy distribution functions using pseudo-particles. Steady-state results from a one-dimensional hybrid model, which uses the EMCC method for the collisional reactions but the drift-diffusion approximation for electron transport, are compared with those of a conventional particle-in-cell (PIC) model and a fluid model for low-pressure capacitively coupled plasmas. Over a wide range of pressures, the hybrid model agrees well with the PIC simulation at reduced computational cost, while the fluid model shows discrepancies in the plasma density and the electron temperature.
Uniaxial strain on graphene: Raman spectroscopy study and band-gap opening.
Ni, Zhen Hua; Yu, Ting; Lu, Yun Hao; Wang, Ying Ying; Feng, Yuan Ping; Shen, Ze Xiang
2008-11-25
Graphene was deposited on a transparent and flexible substrate, and tensile strain up to approximately 0.8% was loaded by stretching the substrate in one direction. Raman spectra of strained graphene show significant red shifts of 2D and G band (-27.8 and -14.2 cm(-1) per 1% strain, respectively) because of the elongation of the carbon-carbon bonds. This indicates that uniaxial strain has been successfully applied on graphene. We also proposed that, by applying uniaxial strain on graphene, tunable band gap at K point can be realized. First-principle calculations predicted a band-gap opening of approximately 300 meV for graphene under 1% uniaxial tensile strain. The strained graphene provides an alternative way to experimentally tune the band gap of graphene, which would be more efficient and more controllable than other methods that are used to open the band gap in graphene. Moreover, our results suggest that the flexible substrate is ready for such a strain process, and Raman spectroscopy can be used as an ultrasensitive method to determine the strain.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when the equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
A point-value enhanced finite volume method based on approximate delta functions
NASA Astrophysics Data System (ADS)
Xuan, Li-Jun; Majdalani, Joseph
2018-02-01
We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements reduces the number of degrees of freedom compared to other compact methods of the same order. To ensure conservation, cell-averaged values are updated using an approach identical to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both the accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
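The defining integral property of an ADF can be reproduced with a small linear solve: on a finite interval, a degree-n polynomial whose moments against x^k match those of the Dirac delta at x0 recovers f(x0) exactly for any polynomial f of degree at most n. A hedged sketch (the monomial basis and the interval [-1, 1] are choices made for this example, not the paper's formulation):

```python
import numpy as np

def adf_coeffs(x0, n):
    """Coefficients (ascending) of a degree-n polynomial ADF on [-1, 1] at x0.

    The ADF reproduces the sifting property of the Dirac delta,
    integral of adf(x)*f(x) over [-1, 1] = f(x0),
    exactly for any polynomial f of degree <= n.
    """
    k = np.arange(n + 1)
    # Moment matrix: integral of x^(k+j) over [-1, 1].
    M = np.array([[2.0 / (ki + j + 1) if (ki + j) % 2 == 0 else 0.0
                   for j in k] for ki in k])
    rhs = np.array([x0 ** i for i in range(n + 1)])  # moments of the true delta
    return np.linalg.solve(M, rhs)

def adf_integral(c, f_coeffs):
    """Integrate adf(x) * f(x) over [-1, 1] for polynomial f (ascending coeffs)."""
    total = 0.0
    for j, cj in enumerate(c):
        for k, fk in enumerate(f_coeffs):
            if (j + k) % 2 == 0:
                total += cj * fk * 2.0 / (j + k + 1)
    return total

# A degree-3 ADF at x0 = 0.3 recovers f(0.3) for the cubic f(x) = 1 + 2x - x^3.
c = adf_coeffs(0.3, 3)
val = adf_integral(c, [1.0, 2.0, 0.0, -1.0])   # -> f(0.3) = 1.573
```

By construction the integral against every monomial x^k (k <= n) equals x0^k, so the sifting property holds for any polynomial integrand of matching degree.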
Temporal resolution improvement using PICCS in MDCT cardiac imaging
Chen, Guang-Hong; Tang, Jie; Hsieh, Jiang
2009-01-01
The current paradigm for temporal resolution improvement is to add more source-detector units and/or increase the gantry rotation speed. The purpose of this article is to present an innovative alternative method to potentially improve temporal resolution by approximately a factor of 2 for all MDCT scanners without requiring hardware modification. The central enabling technology is a recently developed image reconstruction method: prior image constrained compressed sensing (PICCS). Using the method, cardiac CT images can be accurately reconstructed using the projection data acquired in an angular range of about 120°, which is roughly 50% of the standard short-scan angular range (∼240° for an MDCT scanner). As a result, the temporal resolution of MDCT cardiac imaging can be universally improved by approximately a factor of 2. In order to validate the proposed method, two in vivo animal experiments were conducted using a state-of-the-art 64-slice CT scanner (GE Healthcare, Waukesha, WI) at different gantry rotation times and different heart rates. One animal was scanned at a heart rate of 83 beats per minute (bpm) using a 400 ms gantry rotation time, and the second animal was scanned at 94 bpm using a 350 ms gantry rotation time. Cardiac coronary CT imaging was successfully performed at high heart rates using a single-source MDCT scanner and projection data from a single heart beat with gantry rotation times of 400 and 350 ms. Using the proposed PICCS method, the temporal resolution of cardiac CT imaging can be effectively improved by approximately a factor of 2 without modifying any scanner hardware. This potentially provides a new method for single-source MDCT scanners to achieve reliable coronary CT imaging for patients at higher heart rates than the current limit of 70 bpm without using the well-known multisegment FBP reconstruction algorithm.
This method also enables dual-source MDCT scanners to achieve higher temporal resolution without further hardware modifications. PMID:19610302
Design and Development of NEA Scout Solar Sail Deployer Mechanism
NASA Technical Reports Server (NTRS)
Sobey, Alexander R.; Lockett, Tiffany Russell
2016-01-01
The 6U (approximately 10 cm x 20 cm x 30 cm) CubeSat Near Earth Asteroid (NEA) Scout, projected for launch in September 2018 aboard the maiden voyage of the Space Launch System (SLS), will utilize a solar sail as its main method of propulsion throughout its approximately 3-year mission to a near Earth asteroid. Due to the extreme volume constraints levied on the mission, a highly compact solar sail deployment mechanism has been designed to meet the volume and mass constraints, as well as provide enough propulsive solar sail area and quality to achieve mission success. The design of such a compact system required the development of approximately half a dozen prototypes in order to identify unforeseen problems, advance solutions, and build confidence in the final design product. This paper focuses on the obstacles of developing a solar sail deployment mechanism for such an application and the lessons learned from a thorough development process. The lessons presented will have significant applications beyond the NEA Scout mission, such as the development of other deployable boom mechanisms and uses for gossamer-thin films in space.
PHYSICS OF OUR DAYS: Nonlinear long waves on water and solitons
NASA Astrophysics Data System (ADS)
Zeytounian, R. Kh
1995-12-01
The water wave problem has been pivotal in the history of nonlinear wave theory, and is one of the most interesting and successful applications of nonlinear hydrodynamics. Waves on the free surface of a body of water (a perfect liquid) have always been a fascinating subject, for they represent a familiar yet complex phenomenon, easy to observe but very difficult to describe! The archetypal model equations of Korteweg and de Vries and of Boussinesq, for example, were originally derived as approximations for water waves, and research into the problem has been sustained vigorously up to the present day. In the present paper, the derivation of the model equations is given in depth and rational use is made of asymptotic methods. Indeed, it is important to understand that in some cases the derivation of these approximate equations is intuitive and heuristic. In fact, it is not clear how to insert the model equation under consideration into a hierarchy of rational approximations, which in turn result from the exact formulation of the selected water wave problem.
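The Korteweg-de Vries equation mentioned above, in the common normalization u_t + 6 u u_x + u_xxx = 0, admits the single-soliton solution u = (c/2) sech²(√c (x − ct)/2). A quick numerical sanity check with central differences (the evaluation point, wave speed, and step size are arbitrary choices for the example):

```python
import numpy as np

def soliton(x, t, c):
    """One-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

def kdv_residual(x, t, c, h=1e-3):
    """Evaluate the KdV residual at (x, t) with central finite differences."""
    u = lambda xx, tt: soliton(xx, tt, c)
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    u_xxx = (u(x + 2*h, t) - 2*u(x + h, t)
             + 2*u(x - h, t) - u(x - 2*h, t)) / (2 * h**3)
    return u_t + 6 * u(x, t) * u_x + u_xxx

# The residual vanishes along the traveling wave, up to discretization error.
r = kdv_residual(x=0.7, t=0.2, c=1.5)
```

The residual is O(h²) rather than exactly zero because all three derivatives are approximated by second-order finite differences.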
Direct application of Padé approximant for solving nonlinear differential equations.
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario
2014-01-01
This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present some case studies showing the strength of the method in generating highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximative method, such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, among others, as a tool to obtain a power series solution to post-treat with the Padé approximant.
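The classical Padé construction the paper builds on can be sketched directly: given Taylor coefficients, the denominator is found from a small linear system and the numerator follows by convolution. A minimal sketch using exp(x) as a stand-in series (the paper's test problems are far harder):

```python
import numpy as np
from math import factorial

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].

    Returns (a, b): numerator and denominator coefficients (ascending, b[0] = 1).
    """
    c = np.asarray(c, dtype=float)
    # Denominator: sum_{j=1..n} b_j c_{k-j} = -c_k for k = m+1 .. m+n.
    C = np.array([[c[k - j] if k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(m + 1, m + n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(C, -c[m + 1:m + n + 1])))
    # Numerator by convolution of the series with the denominator.
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return a, b

# [2/2] Pade of exp(x) from its Taylor series; it is far more accurate at x = 1
# than the degree-4 Taylor polynomial built from the same five coefficients.
c = [1.0 / factorial(k) for k in range(5)]
a, b = pade(c, 2, 2)
approx = np.polyval(a[::-1], 1.0) / np.polyval(b[::-1], 1.0)   # -> 19/7 ~ 2.7143
```

For exp(x) the [2/2] approximant is (1 + x/2 + x²/12)/(1 − x/2 + x²/12), so at x = 1 the value is 19/7 ≈ 2.71429 against e ≈ 2.71828.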
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed-form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
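The contrast with the linear Taylor approximation can be illustrated on a textbook scaling (the response model and numbers here are illustrative assumptions, not the paper's beam cases): a cantilever tip deflection under fixed load scales as h⁻³ in the section height h, so the sensitivity equation dδ/dh = −3δ/h, read as a differential equation, integrates to an exact closed form, while the linear Taylor expansion from the same baseline sensitivity degrades quickly for large perturbations:

```python
# Illustrative response: tip deflection delta ~ h**-3 in the beam height h.
# The sensitivity equation d(delta)/dh = -3*delta/h, interpreted as an ODE,
# integrates in closed form -- the essence of the DEB idea.

h0, delta0 = 1.0, 2.0              # baseline design and response (assumed values)

def deb_approx(h):
    """DEB closed form from d(delta)/dh = -3*delta/h: exact for this model."""
    return delta0 * (h0 / h) ** 3

def taylor_approx(h):
    """Linear Taylor series using the same baseline sensitivity."""
    sens = -3.0 * delta0 / h0      # d(delta)/dh evaluated at h0
    return delta0 + sens * (h - h0)

h = 1.3                            # a 30% design perturbation
exact = delta0 * (h0 / h) ** 3     # true response of the assumed model
```

Here deb_approx reproduces the exact response (the ODE happens to be exactly integrable for this power-law model), while the Taylor line undershoots badly at a 30% perturbation, mirroring the comparison reported in the abstract.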
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez Ortega, Jorge
The Standard Model of Particle Physics (SM) is probably the most successful theory with regard to its predictions. However, the SM prediction for $CP$ violation is not enough to explain the overwhelming asymmetry between the matter and anti-matter abundances. Measuring a process where $CP$ violation differs from the SM prediction would be a clear signal for physics beyond the Standard Model. The SM prediction for the $CP$-violating phase $\phi_s$ in the $B^0_s$ meson is practically equal to zero for current experiments, so measuring a deviation from zero in $\phi_s$ could be an indication of physics beyond the SM. On the other hand, the approximation based on "heavy quark symmetry" allows approximate calculations of the fundamental quantities of hadrons containing a heavy quark ($c$, $b$, $t$). These calculations are expressed as expansions in inverse powers of the heavy quark mass in such a hadron. This formalism, called "Heavy Quark Effective Theory" (HQET), has been successful in predicting some properties of heavy hadrons. The HQET prediction for the lifetime ratio of the $B^0_d$ and $B^0_s$ is practically equal to one, so measuring the $B^0_s$ lifetime with good precision is also a way to test an approximation based on the SM. This thesis presents in detail the method used to measure $\phi_s$ and the lifetime ratio of the $B^0_d$ and $B^0_s$, among other quantities, with the DØ detector located at the Fermi National Accelerator Laboratory in the United States.
Phased-mission system analysis using Boolean algebraic methods
NASA Technical Reports Server (NTRS)
Somani, Arun K.; Trivedi, Kishor S.
1993-01-01
Most reliability analysis techniques and tools assume that a system is used for a mission consisting of a single phase. However, multiple phases are natural in many missions. The failure rates of components, system configuration, and success criteria may vary from phase to phase. In addition, the duration of a phase may be deterministic or random. Recently, several researchers have addressed the problem of reliability analysis of such systems using a variety of methods. A new technique for phased-mission system reliability analysis based on Boolean algebraic methods is described. Our technique is computationally efficient and is applicable to a large class of systems for which the failure criterion in each phase can be expressed as a fault tree (or an equivalent representation). Our technique avoids the state-space explosion that commonly plagues Markov chain-based analyses. A phase algebra was developed to account for the effects of variable configurations and success criteria from phase to phase. Our technique yields exact (as opposed to approximate) results. The use of the technique is demonstrated by means of an example, and numerical results are presented to show the effects of mission phases on system reliability.
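The phased-mission idea can be illustrated by brute-force enumeration on a toy system (the components, per-phase failure probabilities, and success criteria below are invented for the example; the paper's technique works symbolically on fault trees rather than by enumeration, which is what makes it scale):

```python
from itertools import product

# Exact phased-mission reliability by enumerating each component's failure phase.
# Toy system: two components in series in phase 1 (both required), in parallel
# in phase 2 (either suffices). Failures persist across phase boundaries, and a
# component failing during a phase is counted as down for that whole phase.

q = [[0.1, 0.2],    # component 0: failure probability in phase 1, phase 2
     [0.05, 0.1]]   # component 1
criteria = [lambda up: up[0] and up[1],   # phase 1 success criterion (series)
            lambda up: up[0] or up[1]]    # phase 2 success criterion (parallel)

def phase_prob(comp, fail_phase):
    """Probability that `comp` fails during `fail_phase` (None = survives all)."""
    p = 1.0
    for ph, qf in enumerate(q[comp]):
        if fail_phase == ph:
            return p * qf
        p *= 1.0 - qf
    return p  # survived every phase

def mission_reliability():
    rel = 0.0
    for fails in product([0, 1, None], repeat=len(q)):  # failure phase per component
        ok = all(crit(tuple(f is None or f > ph for f in fails))
                 for ph, crit in enumerate(criteria))
        if ok:
            prob = 1.0
            for comp, f in enumerate(fails):
                prob *= phase_prob(comp, f)
            rel += prob
    return rel

R = mission_reliability()   # -> 0.8379
```

Hand check: both components must survive phase 1 (0.9 × 0.95 = 0.855), after which the mission fails only if both fail in phase 2 (0.2 × 0.1), giving 0.855 × 0.98 = 0.8379.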
International surgical telementoring: our initial experience.
Lee, B R; Caddedu, J A; Janetschek, G; Schulam, P; Docimo, S G; Moore, R G; Partin, A W; Kavoussi, L R
1998-01-01
Telesurgical laparoscopic telementoring has successfully been implemented between the Johns Hopkins Bayview Medical Center and the Johns Hopkins Hospital in 27 prior operations. In this previously reported series, telerobotic mentoring was achieved between two institutions 3.5 miles apart. We report our experience in performing two international surgical telementoring operations to determine the clinical utility of international surgical telementoring during laparoscopic surgical procedures. A laparoscopic adrenalectomy was telementored between Innsbruck, Austria and Baltimore, MD (5,083 miles), and a laparoscopic varicocelectomy was telementored between Bangkok, Thailand and Baltimore, MD (10,880 miles), both over three ISDN lines (384 kbps) with an approximately 1 s delay. Both procedures were successfully accomplished with an uneventful postoperative course. International telementoring is a viable method of instructing less experienced laparoscopic surgeons through potentially complex laparoscopic procedures, as well as potentially improving patient access to specialty care.
NASA Astrophysics Data System (ADS)
Zhong, XiaoXu; Liao, ShiJun
2018-01-01
Analytic approximations of the Von Kármán plate equations in integral form for a circular plate under external uniform pressure of arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed, for either a given external uniform pressure Q or a given central deflection. Both are valid for uniform pressure of arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c1 and c2 in the frame of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c1 = -θ and c2 = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou about the interpolation iterative method, the HAM-based approaches are valid for uniform pressure of arbitrary magnitude at least in the special case c1 = -θ and c2 = -1. In addition, we prove that the HAM approach for the Von Kármán plate equations in differential form is just a special case of the HAM for the equations in integral form considered in this paper. All of this illustrates the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.
NASA Astrophysics Data System (ADS)
Chen, Gui-Qiang G.; Schrecker, Matthew R. I.
2018-04-01
We are concerned with globally defined entropy solutions to the Euler equations for compressible fluid flows in transonic nozzles with general cross-sectional areas. Such nozzles include the de Laval nozzles and other more general nozzles whose cross-sectional area functions are allowed at the nozzle ends to be either zero (closed ends) or infinity (unbounded ends). To achieve this, in this paper, we develop a vanishing viscosity method to construct globally defined approximate solutions and then establish essential uniform estimates in weighted L^p norms for the whole range of physical adiabatic exponents γ ∈ (1, ∞), so that the viscosity approximate solutions satisfy the general L^p compensated compactness framework. The viscosity method is designed to incorporate artificial viscosity terms with the natural Dirichlet boundary conditions to ensure the uniform estimates. These estimates then lead to both the convergence of the approximate solutions and the existence theory of globally defined finite-energy entropy solutions to the Euler equations for transonic flows that may have different end-states, in the class of nozzles with general cross-sectional areas, for all γ ∈ (1, ∞). The approach and techniques developed here apply to other problems with similar difficulties. In particular, we successfully apply them to construct globally defined spherically symmetric entropy solutions to the Euler equations for all γ ∈ (1, ∞).
Safe landing area determination for a Moon lander by reachability analysis
NASA Astrophysics Data System (ADS)
Arslantaş, Yunus Emre; Oehlschlägel, Thimo; Sagliano, Marco
2016-11-01
In recent decades, developments in space technology have paved the way to more challenging missions like asteroid mining, space tourism, and human expansion into the Solar System. These missions involve difficult tasks such as guidance schemes for re-entry, landing on celestial bodies, and implementation of large-angle maneuvers for spacecraft, and there is a need for a safety system to increase their robustness and chances of success. Reachability analysis meets this requirement by obtaining the set of all achievable states for a dynamical system starting from an initial condition with the given admissible control inputs of the system. This paper proposes an algorithm for the approximation of nonconvex reachable sets (RS) by using optimal control. To this end, a subset of the state space is discretized by equidistant points, and for each grid point a distance function is defined. This distance function acts as an objective function for a related optimal control problem (OCP). Each infinite-dimensional OCP is transcribed into a finite-dimensional Nonlinear Programming Problem (NLP) by using Pseudospectral Methods (PSM). Finally, the NLPs are solved using available tools, resulting in approximated reachable sets with information about the states of the dynamical system at these grid points. The algorithm is applied to a generic Moon landing mission. The proposed method computes approximated reachable sets and the attainable safe landing region with information about propellant consumption and time.
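A drastically simplified sketch of the grid-plus-distance-function idea (a double integrator with enumeration over piecewise-constant controls stands in for the PSM/NLP machinery; the dynamics, horizon, and grid below are all invented for the example):

```python
import itertools
import numpy as np

def propagate(u_seq, dt=0.25):
    """Double integrator x'' = u from rest at the origin, exact per constant-u step."""
    x = v = 0.0
    for u in u_seq:
        x += v * dt + 0.5 * u * dt**2
        v += u * dt
    return x, v

def min_distance(target, levels=(-1.0, 0.0, 1.0), steps=4):
    """Distance function for one grid point: coarse brute force over control
    sequences stands in for the per-grid-point optimal control problem."""
    return min(np.hypot(px - target[0], pv - target[1])
               for u_seq in itertools.product(levels, repeat=steps)
               for px, pv in [propagate(u_seq)])

# Grid over (position, velocity); points whose distance value is near zero are
# (approximately) reachable at t = 1 from the origin with |u| <= 1.
grid = [(x, v) for x in np.linspace(-0.6, 0.6, 5) for v in np.linspace(-1.2, 1.2, 5)]
reach = {p: min_distance(p) for p in grid}
```

The map from grid point to optimal distance is exactly the discretized distance function of the abstract; a real implementation would replace the brute-force inner loop with a transcribed NLP per grid point.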
NASA Technical Reports Server (NTRS)
Zhou, Zhimin (Inventor); Pain, Bedabrata (Inventor)
1999-01-01
An analog-to-digital converter for on-chip focal-plane image sensor applications. The analog-to-digital converter utilizes a single charge integrating amplifier in a charge balancing architecture to implement successive approximation analog-to-digital conversion. This design requires minimal chip area and has high speed and low power dissipation for operation in the 2-10 bit range. The invention is particularly well suited to CMOS on-chip applications requiring many analog-to-digital converters, such as column-parallel focal-plane architectures.
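The successive-approximation logic at the heart of such a converter is a binary search on the input voltage. A behavioral sketch in software (idealized: no charge injection, comparator offset, or noise; the voltages below are arbitrary):

```python
def sar_adc(v_in, v_ref, n_bits):
    """Successive-approximation ADC model: binary-search the input voltage.

    Each step tentatively sets one bit (one DAC charge increment) and keeps it
    only if the trial level does not exceed the input, mirroring the
    charge-balancing comparison performed each cycle in the converter.
    """
    code = 0
    for bit in reversed(range(n_bits)):
        trial = code | (1 << bit)
        if trial * v_ref / (1 << n_bits) <= v_in:  # comparator decision
            code = trial
    return code

# 8-bit conversion of 1.8 V against a 2.5 V reference.
code = sar_adc(1.8, 2.5, 8)   # -> 184  (floor of 1.8/2.5 * 256 = 184.32)
```

One comparison per bit gives the n-cycle conversion time characteristic of SAR converters, which is why the architecture suits the 2-10 bit, low-power regime described in the abstract.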
Li, Bo; Beveridge, Peter; O'Hare, William T; Islam, Meez
2014-12-01
Current methods of detection and identification of blood stains rely largely on visual examination followed by presumptive tests such as Kastle-Meyer, leuco-malachite green, or luminol. Although these tests are useful, they can produce false positives and can also have a negative impact on subsequent DNA tests. A novel application of visible-wavelength reflectance hyperspectral imaging has been used for the detection and positive identification of blood stains in a non-contact and non-destructive manner on a range of coloured substrates. The identification of blood staining was based on the unique visible absorption spectrum of haemoglobin between 400 and 500 nm. Images illustrating successful discrimination of blood stains from nine red substances are included. It has also been possible to distinguish between blood and approximately 40 other reddish stains. The technique was also successfully used to detect latent blood stains deposited on white filter paper at dilutions of up to 1 in 512 and on red tissue at dilutions of up to 1 in 32. Finally, in a blind trial, the method successfully detected and identified a total of 9 blood stains on a red T-shirt. Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kokornaczyk, Maria Olga; Dinelli, Giovanni; Betti, Lucietta
2013-01-01
The present paper reports on an observation that dendrite-like polycrystalline structures from evaporating droplets of wheat grain leakages exhibit bilateral symmetry. The exactness of this symmetry, measured by means of fluctuating asymmetry, varies depending on the cultivar and stress factor influence, and seems to correspond to the seed germination rate. In the bodies of plants, animals, and humans, the exactness of bilateral symmetry is known to reflect the environmental conditions of an organism's growth, its health, and its success in sexual selection. In polycrystalline structures, formed under the same conditions, the symmetry exactness depends on the properties of the crystallizing solution such as the composition and viscosity; however, it has never been associated with sample quality. We hypothesize here that, as in living nature, the exactness of approximate bilateral symmetry might be considered a quality indicator also in crystallographic methods applied to food quality analysis.
NASA Technical Reports Server (NTRS)
Webster, Cassandra M.; Folta, David C.
2017-01-01
In order to fly an occulter in formation with a telescope at the Sun-Earth L2 (SEL2) libration point, one must have a detailed understanding of the dynamics that govern the restricted three-body system. For initial purposes, a linear approximation is satisfactory, but operations will require a high-fidelity modeling tool along with strategic targeting methods in order to be successful. This paper focuses on the challenging dynamics of the transfer trajectories to achieve the relative positioning of two spacecraft to fly in formation at SEL2, in our case, the Wide-Field Infrared Survey Telescope (WFIRST) and a proposed Starshade. By modeling the formation transfers using a high-fidelity tool, an accurate ΔV approximation can be made to assist with the development of the subsystem design required for a WFIRST and Starshade formation flight mission.
Di Pietro, C; Di Pietro, V; Emmanuele, G; Ferro, A; Maugeri, T; Modica, E; Pigola, G; Pulvirenti, A; Purrello, M; Ragusa, M; Scalia, M; Shasha, D; Travali, S; Zimmitti, V
2003-01-01
In this paper we present a new Multiple Sequence Alignment (MSA) algorithm called AntiClusAl. The method makes use of the commonly used idea of aligning homologous sequences belonging to classes generated by some clustering algorithm, and then continuing the alignment process in a bottom-up way along a suitable tree structure, with the final result read at the root of the tree. Multiple sequence alignment in each cluster makes use of progressive alignment with the 1-median (center) of the cluster. The 1-median of a set S of sequences is the element of S which minimizes the average distance to any other sequence in S; its exact computation requires quadratic time. The basic idea of our proposed algorithm is to make use of a simple and natural algorithmic technique based on randomized tournaments, which has been successfully applied to large-size search problems in general metric spaces. In particular, a clustering algorithm called Antipole tree and an approximate linear-time 1-median computation are used. Our algorithm, compared with Clustal W, a widely used tool for MSA, shows better running times with fully comparable alignment quality. A successful biological application showing high amino acid conservation during the evolution of Xenopus laevis SOD2 is also cited.
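The randomized-tournament idea for the 1-median can be sketched in a few lines (Hamming distance on toy strings stands in for a real sequence distance; the group size, seed, and strings are arbitrary choices for the example):

```python
import random

def distance(a, b):
    """Toy metric: Hamming distance on equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def exact_1median(S):
    """O(n^2) 1-median: the element minimizing total distance to the rest."""
    return min(S, key=lambda s: sum(distance(s, t) for t in S))

def tournament_1median(S, group=3, seed=0):
    """Approximate 1-median by randomized tournaments in roughly linear time.

    Repeatedly partition the candidates into small groups, keep each group's
    local 1-median as its winner, and recurse until one candidate remains.
    """
    rng = random.Random(seed)
    cand = list(S)
    while len(cand) > 1:
        rng.shuffle(cand)
        winners = []
        for i in range(0, len(cand), group):
            g = cand[i:i + group]
            winners.append(min(g, key=lambda s: sum(distance(s, t) for t in g)))
        cand = winners
    return cand[0]

seqs = ["ACGTAC", "ACGTTC", "ACGAAC", "TCGTAC", "ACCTAC", "GCGTAC"]
m = tournament_1median(seqs)
```

Each round shrinks the candidate set by roughly the group size while computing only within-group distances, which is what brings the cost down from the quadratic exact computation.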
NASA Astrophysics Data System (ADS)
Holota, P.; Nesvadba, O.
2016-12-01
The mathematical apparatus currently applied for geopotential determination is undoubtedly quite developed. This concerns numerical methods as well as methods based on classical analysis, and classical as well as weak solution concepts. Nevertheless, the nature of the real surface of the Earth has its specific features and is still rather complex. The aim of this paper is to consider these limits and to seek a balance between the performance of an apparatus developed for a surface of the Earth smoothed (or simplified) up to a certain degree and an iteration procedure used to bridge the difference between the real and smoothed topography. The approach is applied to the solution of the linear gravimetric boundary value problem in geopotential determination. As in other branches of engineering and mathematical physics, a transformation of coordinates is used that offers the possibility of trading the complexity of the boundary against the complexity of the coefficients of the partial differential equation governing the solution. As examples, the use of modified spherical and also modified ellipsoidal coordinates for the transformation of the solution domain is discussed. However, the complexity of the boundary is then reflected in the structure of Laplace's operator. This effect is taken into account by means of successive approximations. The structure of the respective iteration steps is derived and analyzed. On the level of individual iteration steps, attention is paid to the representation of the solution in terms of function bases or in terms of Green's functions. The convergence of the procedure and the efficiency of its use for geopotential determination are discussed.
Mixed-RKDG Finite Element Methods for the 2-D Hydrodynamic Model for Semiconductor Device Simulation
Chen, Zhangxin; Cockburn, Bernardo; Jerome, Joseph W.; ...
1995-01-01
In this paper we introduce a new method for numerically solving the equations of the hydrodynamic model for semiconductor devices in two space dimensions. The method combines a standard mixed finite element method, used to obtain directly an approximation to the electric field, with the so-called Runge-Kutta Discontinuous Galerkin (RKDG) method, originally devised for numerically solving multi-dimensional hyperbolic systems of conservation laws, which is applied here to the convective part of the equations. Numerical simulations showing the performance of the new method are displayed, and the results compared with those obtained by using Essentially Nonoscillatory (ENO) finite difference schemes. From the perspective of device modeling, these methods are robust, since they are capable of encompassing broad parameter ranges, including those for which shock formation is possible. The simulations presented here are for Gallium Arsenide at room temperature, but we have tested them much more generally with considerable success.
Recruitment strategies for an acupuncture randomized clinical trial of reproductive age women
Pastore, Lisa M.; Dalal, Parchayi
2009-01-01
Summary Objectives To assess the most effective recruitment strategies for an acupuncture clinical trial of reproductive age women. Design The underlying study is an acupuncture randomized clinical trial for an ovulatory disorder that affects approximately 6.5% of reproductive age women (Polycystic Ovary Syndrome). Study participation involved 2 months of intervention and 3 months of follow-up with US$170 compensation. Success of each recruitment method used during the first 37 study months was analyzed. Setting Clinical trial in the Dept. of OB/GYN at the University of Virginia, US. The original geographic residency target was an 80 mile radius around a college town in Virginia (population 155,000), and was expanded to the state capital (population 850,000) in recruitment year 2. Main outcome measures Number of study inquiries (phone calls or emails) over time and by recruitment source. Results In the first 37 months of recruitment (Jan 2006 – Jan 2009), there were 800 study inquiries (582 by phone, 218 by email), of which 749 were screened via telephone questionnaire. The most successful recruitment methods were flyers (28% of inquiries and 26% of participants) and direct mailing to targeted zip codes (26% and 27%, respectively). The direct mailing cost US$110/inquiry, while the flyers cost less than US$300 in total. Study inquiries were least likely in May and November. Almost all prospective participants (94%) were acupuncture-naïve. Conclusions Posters/flyers and direct mailings proved to be the most successful recruitment methods for this CAM study. Active recruitment with multiple methods was needed for continual enrollment. PMID:19632551
Room-temperature wafer bonding of LiNbO3 and SiO2 using a modified surface activated bonding method
NASA Astrophysics Data System (ADS)
Takigawa, Ryo; Higurashi, Eiji; Asano, Tanemasa
2018-06-01
In this paper, we report room-temperature bonding of LiNbO3 (LN) and SiO2/Si for the realization of a LN on insulator (LNOI)/Si hybrid wafer. We investigate the applicability of a modified surface activated bonding (SAB) method for the direct bonding of LN and a thermally grown SiO2 layer. The modified SAB method using ion beam bombardment demonstrates the room-temperature wafer bonding of LN and SiO2. The bonded wafer was successfully cut into 0.5 × 0.5 mm2 dies without interfacial debonding owing to the applied stress during dicing. In addition, the surface energy of the bonded wafer was estimated to be approximately 1.8 J/m2 using the crack opening method. These results indicate that a strong bond strength can be achieved, which may be sufficient for device applications.
An efficient method for purifying high quality RNA from wheat pistils.
Manickavelu, A; Kambara, Kumiko; Mishina, Kohei; Koba, Takato
2007-02-15
Many methods are available for total RNA extraction from plants, except for floral organs like wheat pistils, which contain high levels of polysaccharides that bind and/or co-precipitate with RNA. In this protocol, a simple and effective method for extracting total RNA from small and feathery wheat pistils has been developed. Lithium chloride (LiCl) and phenol:chloroform:isoamylalcohol (PCI) were employed, and the samples were ground in a microcentrifuge tube using a plastic pestle. A jacket of liquid nitrogen and simplified procedures were applied to ensure thorough grinding of the pistils and to minimize sample loss. These measures substantially increased the recovery of total RNA (by approximately 50%) in the extraction process. Reliable differential display by cDNA-AFLP was successfully achieved with the total RNA after DNase treatment and reverse transcription. This method is also practicable for gene expression and gene regulation studies in the floral parts of other plants.
The use of Galerkin finite-element methods to solve mass-transport equations
Grove, David B.
1977-01-01
The partial differential equation that describes the transport and reaction of chemical solutes in porous media was solved using the Galerkin finite-element technique. These finite elements were superimposed over finite-difference cells used to solve the flow equation. Both convection and flow due to hydraulic dispersion were considered. Linear and Hermite cubic approximations (basis functions) provided satisfactory results; however, the linear functions were computationally more efficient for two-dimensional problems. Successive over-relaxation (SOR) and iteration techniques using Chebyshev polynomials were used to solve the sparse matrices generated using the linear and Hermite cubic functions, respectively. Comparisons of the finite-element methods to the finite-difference methods, and to analytical results, indicated that a high degree of accuracy may be obtained using the method outlined. The technique was applied to a field problem involving an aquifer contaminated with chloride, tritium, and strontium-90. (Woodard-USGS)
Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles
2016-01-01
Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, without a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may recommend a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource-intensive method is guaranteed to find the optimal ensemble but scales as O(2N). A recursive approximation to the optimal solution scales as O(N2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
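The exhaustive O(2^N) selection strategy described in this abstract can be illustrated with a toy sketch. This is not the authors' code: the docking scores, activity labels, and the min-over-ensemble scoring rule below are assumed for illustration only; the subset enumeration simply shows why the exact method scales exponentially in the number of conformations.

```python
# Illustrative sketch: brute-force selection of the receptor-conformation
# ensemble that maximizes ROC AUC. Lower docking score = better; a molecule's
# ensemble score is its best score over the chosen conformations (assumed rule).
from itertools import combinations

def auc(actives, decoys):
    """ROC AUC via pairwise rank comparison; lower score ranks higher."""
    wins = sum((a < d) + 0.5 * (a == d) for a in actives for d in decoys)
    return wins / (len(actives) * len(decoys))

def best_ensemble(scores, labels, max_size):
    """scores[m][c]: docking score of molecule m against conformation c."""
    n_conf = len(scores[0])
    best = (0.0, ())
    for k in range(1, max_size + 1):
        for subset in combinations(range(n_conf), k):   # O(2^N) enumeration
            ens = [min(row[c] for c in subset) for row in scores]
            act = [s for s, l in zip(ens, labels) if l]
            dec = [s for s, l in zip(ens, labels) if not l]
            best = max(best, (auc(act, dec), subset))
    return best

# toy data: 4 molecules (2 active, 2 inactive) docked into 3 conformations
scores = [[-9.1, -6.0, -7.2], [-5.0, -8.8, -6.1],
          [-5.5, -5.2, -6.0], [-6.2, -5.9, -5.8]]
labels = [True, True, False, False]
print(best_ensemble(scores, labels, max_size=2))
```

The recursive O(N^2) and linear O(N) variants mentioned in the abstract would replace the exhaustive enumeration with greedy or weighted growth of the subset.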
Cubic KTi2(PO4)3 as electrode materials for sodium-ion batteries.
Han, Jin; Xu, Maowen; Niu, Yubin; Jia, Min; Liu, Ting; Li, Chang Ming
2016-12-01
A novel cubic KTi2(PO4)3 is successfully synthesized via a facile hydrothermal method combined with a subsequent annealing treatment and is used as an electrode material for sodium-ion batteries for the first time. For comparison, carbon-coated KTi2(PO4)3 obtained by a conventional cane sugar-assisted method exhibits superior electrochemical performance in sodium-ion batteries. Besides a coulombic efficiency of nearly 100% after 100 cycles, a stable capacity of 112 mAh g(-1) is achieved at 0.5 C after 100 cycles and is maintained at 105 mAh g(-1) after 500 cycles, a capacity retention of approximately 90%. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Chatterjee, Subhasri; Das, Nandan K.; Kumar, Satish; Mohapatra, Sonali; Pradhan, Asima; Panigrahi, Prasanta K.; Ghosh, Nirmalya
2013-02-01
Multi-resolution analysis of the spatial refractive index inhomogeneities in the connective tissue regions of the human cervix reveals a clear signature of multifractality. We have therefore developed an inverse analysis strategy for the extraction and quantification of the multifractality of spatial refractive index fluctuations from the recorded light scattering signal. The method is based on Fourier domain pre-processing of light scattering data using the Born approximation, and its subsequent analysis through the Multifractal Detrended Fluctuation Analysis model. The method has been validated on several mono- and multi-fractal scattering objects whose self-similar properties are user-controlled and known a priori. Following successful validation, this approach has been explored for differentiating between different grades of precancerous human cervical tissues.
Bindu, G.; Semenov, S.
2013-01-01
This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using the thin wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction, with the extremity imaging being done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm was tested with 10% added noise, and successful image reconstruction demonstrates its robustness. PMID:24058889
Brown, Elizabeth Timbrook; Mock, Stephen; Dmochowski, Roger; Reynolds, W. Stuart; Milam, Douglas; Kaufman, Melissa R.
2016-01-01
Background: Urethroplasty is often successful for the treatment of male urethral stricture disease, but limited data exists on recurrence management. Our goal was to evaluate direct visual internal urethrotomy (DVIU) as a treatment option for isolated, recurrent strictures after urethroplasty. Methods: We retrospectively identified male patients who underwent urethroplasty from 1999 to 2013 and developed an isolated, recurrent stricture at the urethroplasty site treated with DVIU. Success was defined as lack of symptomatology and no subsequent intervention. Comparative analysis identified characteristics and stricture properties associated with success. Results: A total of 436 urethroplasties were performed in 401 patients at our institution between 1999 and 2013. Stricture recurrence was noted in 64 (16%) patients. Of these, 47 (73%) underwent a DVIU. A total of 37 patients met inclusion criteria and underwent 50 DVIU procedures at the urethroplasty site. A single DVIU was successful in 13 of 37 patients (35%). A second DVIU was successful in 4 of 6 patients (67%). Overall, 17 of 43 (40%) of the total DVIUs were successful after urethroplasty. Success did not differ by age, stricture length or location, surgical technique, radiation history, prior urethroplasty or DVIU, time to failure, or etiology. Conclusions: Post-urethroplasty DVIU for isolated, recurrent strictures may be offered as a minimally invasive treatment option. Approximately 40% of patients were spared further intervention. PMID:28203286
Asada, Toshio; Ando, Kanta; Bandyopadhyay, Pradipta; Koseki, Shiro
2016-09-08
A widely applicable free energy contribution analysis (FECA) method based on the quantum mechanical/molecular mechanical (QM/MM) approximation using response kernel approaches has been proposed to investigate the influences of environmental residues and/or atoms in the QM region on the free energy profile. This method can evaluate atomic contributions to the free energy along the reaction path including polarization effects on the QM region within a dramatically reduced computational time. The rate-limiting step in the deactivation of the β-lactam antibiotic cefalotin (CLS) by β-lactamase was studied using this method. The experimentally observed activation barrier was successfully reproduced by free energy perturbation calculations along the optimized reaction path that involved activation by the carboxylate moiety in CLS. It was found that the free energy profile in the QM region was slightly higher than the isolated energy and that two residues, Lys67 and Lys315, as well as water molecules deeply influenced the QM atoms associated with the bond alternation reaction in the acyl-enzyme intermediate. These facts suggested that the surrounding residues are favorable for the reactant complex and prevent the intermediate from being too stabilized to proceed to the following deacylation reaction. We have demonstrated that the free energy contribution analysis should be a useful method to investigate enzyme catalysis and to facilitate intelligent molecular design.
NASA Astrophysics Data System (ADS)
Mester, Dávid; Nagy, Péter R.; Kállay, Mihály
2018-03-01
A reduced-cost implementation of the second-order algebraic-diagrammatic construction [ADC(2)] method is presented. We introduce approximations by restricting virtual natural orbitals and natural auxiliary functions, which results, on average, in more than an order of magnitude speedup compared to conventional, density-fitting ADC(2) algorithms. The present scheme is the successor of our previous approach [D. Mester, P. R. Nagy, and M. Kállay, J. Chem. Phys. 146, 194102 (2017)], which has been successfully applied to obtain singlet excitation energies with the linear-response second-order coupled-cluster singles and doubles model. Here we report further methodological improvements and the extension of the method to compute singlet and triplet ADC(2) excitation energies and transition moments. The various approximations are carefully benchmarked, and conservative truncation thresholds are selected which guarantee errors much smaller than the intrinsic error of the ADC(2) method. Using the canonical values as reference, we find that the mean absolute error for both singlet and triplet ADC(2) excitation energies is 0.02 eV, while that for oscillator strengths is 0.001 a.u. The rigorous cutoff parameters together with the significantly reduced operation count and storage requirements allow us to obtain accurate ADC(2) excitation energies and transition properties using triple-ζ basis sets for systems of up to one hundred atoms.
Leap-dynamics: efficient sampling of conformational space of proteins and peptides in solution.
Kleinjung, J; Bayley, P; Fraternali, F
2000-03-31
A molecular simulation scheme, called Leap-dynamics, that provides efficient sampling of protein conformational space in solution is presented. The scheme is a combined approach using a fast sampling method, imposing conformational 'leaps' to force the system over energy barriers, and molecular dynamics (MD) for refinement. The presence of solvent is approximated by a potential of mean force depending on the solvent accessible surface area. The method has been successfully applied to N-acetyl-L-alanine-N-methylamide (alanine dipeptide), sampling experimentally observed conformations inaccessible to MD alone under the chosen conditions. The method correctly predicts the increased partial flexibility of the mutant Y35G compared to the native bovine pancreatic trypsin inhibitor. In particular, the improvement over MD consists of the detection of conformational flexibility that corresponds closely to slow motions identified by nuclear magnetic resonance techniques.
Method and apparatus for measuring web material wound on a reel
NASA Technical Reports Server (NTRS)
Muller, R. M. (Inventor)
1977-01-01
The method and apparatus for measuring the number of layers of a web material of known thickness wound on a storage or take-up reel are presented. The method and apparatus are based on the principle that, at a relatively large radius, the loci of layers of a thin web wound on the reel approximate a family of concentric circles having radii respectively successively increasing by a length equal to the web thickness. Tachometer pulses are generated in response to linear movement of the web and reset pulses are generated in response to rotation of the reel. A digital circuit, responsive to the tachometer and reset pulses, generates data indicative of the layer number of any layer of the web and of position of the web within the layer without requiring numerical interpolation.
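The concentric-circle approximation lends itself to a quick numerical sketch. The hub radius and web thickness below are made-up values, and the formulas are simply the geometry stated above (each layer's radius grows by one web thickness); this is not the patent's digital circuit.

```python
# Sketch of the concentric-circle model: layer number at a winding radius,
# and total wound length as the sum of the layers' circumferences.
import math

def layer_count(radius, hub_radius, thickness):
    """Layer number at a given radius, assuming each layer adds one web thickness."""
    return int((radius - hub_radius) / thickness)

def wound_length(n_layers, hub_radius, thickness):
    """Total web length: sum of circumferences of n concentric circles."""
    return sum(2 * math.pi * (hub_radius + k * thickness) for k in range(n_layers))

hub, t = 0.05, 0.0001          # assumed: 5 cm hub, 0.1 mm web thickness
n = layer_count(0.08, hub, t)  # layers between the hub and an 8 cm outer radius
print(n, round(wound_length(n, hub, t), 2))
```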
A Comparison of Interactional Aerodynamics Methods for a Helicopter in Low Speed Flight
NASA Technical Reports Server (NTRS)
Berry, John D.; Letnikov, Victor; Bavykina, Irena; Chaffin, Mark S.
1998-01-01
Recent advances in computing subsonic flow have been applied to helicopter configurations with various degrees of success. This paper is a comparison of two specific methods applied to a particularly challenging regime of helicopter flight, very low speeds, where the interaction of the rotor wake and the fuselage is most significant. Comparisons are made between different methods of predicting the interactional aerodynamics associated with a simple generic helicopter configuration. These comparisons are made using fuselage pressure data from a Mach-scaled powered model helicopter with a rotor diameter of approximately 3 meters. The data shown are for an advance ratio of 0.05 with a thrust coefficient of 0.0066. The results of this comparison show that in this type of complex flow both analytical techniques have regions where they are more accurate in matching the experimental data.
NASA Astrophysics Data System (ADS)
Nearing, G. S.
2014-12-01
Statistical models consistently out-perform conceptual models in the short term, however to account for a nonstationary future (or an unobserved past) scientists prefer to base predictions on unchanging and commutable properties of the universe - i.e., physics. The problem with physically-based hydrology models is, of course, that they aren't really based on physics - they are based on statistical approximations of physical interactions, and we almost uniformly lack an understanding of the entropy associated with these approximations. Thermodynamics is successful precisely because entropy statistics are computable for homogeneous (well-mixed) systems, and ergodic arguments explain the success of Newton's laws to describe systems that are fundamentally quantum in nature. Unfortunately, similar arguments do not hold for systems like watersheds that are heterogeneous at a wide range of scales. Ray Solomonoff formalized the situation in 1968 by showing that given infinite evidence, simultaneously minimizing model complexity and entropy in predictions always leads to the best possible model. The open question in hydrology is about what happens when we don't have infinite evidence - for example, when the future will not look like the past, or when one watershed does not behave like another. How do we isolate stationary and commutable components of watershed behavior? I propose that one possible answer to this dilemma lies in a formal combination of physics and statistics. In this talk I outline my recent analogue (Solomonoff's theorem was digital) of Solomonoff's idea that allows us to quantify the complexity/entropy tradeoff in a way that is intuitive to physical scientists. I show how to formally combine "physical" and statistical methods for model development in a way that allows us to derive the theoretically best possible model given any physics approximation(s) and available observations.
Finally, I apply an analogue of Solomonoff's theorem to evaluate the tradeoff between model complexity and prediction power.
Final Report of the Project "From the finite element method to the virtual element method"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manzini, Gianmarco; Gyrya, Vitaliy
The Finite Element Method (FEM) is a powerful numerical tool that is used in a large number of engineering applications. The FEM is constructed on triangular/tetrahedral and quadrilateral/hexahedral meshes. Extending the FEM to general polygonal/polyhedral meshes in a straightforward way turns out to be extremely difficult and leads to very complex and computationally expensive schemes. The reason for this failure is that the construction of the basis functions on elements with a very general shape is a non-trivial and complex task. In this project we developed a new family of numerical methods, dubbed the Virtual Element Method (VEM), for the numerical approximation of partial differential equations (PDEs) of elliptic type on polygonal and polyhedral unstructured meshes. We successfully formulated, implemented and tested these methods and studied both theoretically and numerically their stability, robustness and accuracy for diffusion problems, convection-reaction-diffusion problems, the Stokes equations and the biharmonic equations.
Vacuum infusion method for woven carbon/Kevlar reinforced hybrid composite
NASA Astrophysics Data System (ADS)
Hashim, N.; Majid, D. L.; Uda, N.; Zahari, R.; Yidris, N.
2017-12-01
The vacuum assisted resin transfer moulding (VaRTM), or Vacuum Infusion (VI), process is one of the fabrication methods used for composite materials. Compared to other methods, this process costs less than using prepregs because it does not require an autoclave for curing. Moreover, composites fabricated using the VI method exhibit superior mechanical properties to those made by the hand layup process. In this study, the VI method is used to fabricate woven carbon/Kevlar fibre cloth with an epoxy matrix. This paper reports the detailed procedure for fabricating the hybrid composite using the VI process and several precautions that need to be taken to avoid degrading the properties of the composite material. The results highlight that the successfully fabricated composite has a fibre weight fraction of approximately 60%. Since composites produced by the VI process have a higher fibre percentage, this process should be considered for applications where the fibres need to be the dominant load-bearing element, such as under tension loading.
NASA Astrophysics Data System (ADS)
Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong
2018-05-01
In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, which is based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without artificial processes. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models utilizing the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing predictive performance comparable to that of GA and SPA.
NASA Technical Reports Server (NTRS)
Harwood, P. (Principal Investigator); Finley, R.; Mcculloch, S.; Marphy, D.; Hupp, B.
1976-01-01
The author has identified the following significant results. Image interpretation mapping techniques were successfully applied to test site 5, an area with a semi-arid climate. The land cover/land use classification required further modification. A new program, HGROUP, added to the ADP classification schedule provides a convenient method for examining the spectral similarity between classes. This capability greatly simplifies the task of combining 25-30 unsupervised subclasses into about 15 major classes that approximately correspond to the land use/land cover classification scheme.
Differentiation of tumor from viable myocardium using cardiac tagging with MR imaging.
Bouton, S; Yang, A; McCrindle, B W; Kidd, L; McVeigh, E R; Zerhouni, E A
1991-01-01
We report the application of myocardial tagging by MR to define tissue planes and differentiate contractile from noncontractile tissue in a neonate with congenital cardiac rhabdomyoma. Using custom-written pulse programming software, six 2 mm thick radiofrequency (RF) slice-selective presaturation pulses (tags) were used to label the chest wall and myocardium in a star pattern in diastole, approximately 60 ms before the R-wave gating trigger. This method successfully delineated the myocardium from noncontractile tumor, providing information that influenced clinical management. This RF tagging technique allowed us to confirm the exact intramyocardial location of a congenital cardiac tumor.
Dual exposure interferometry. [gas dynamics and flow visualization]
NASA Technical Reports Server (NTRS)
Smeets, G.; George, A.
1982-01-01
The application of dual exposure differential interferometry to gas dynamics and flow visualization is discussed. A differential interferometer with Wollaston prisms can produce two complementary interference fringe systems, depending on the polarization of the incident light. If these two systems are superimposed on a film, with one exposure during a phenomenon, the other before or after, the phenomenon will appear on a uniform background. By regulating the interferometer to infinite fringe distance, a resolution limit of approximately lambda/500 can be obtained in the quantitative analysis of weak phase objects. This method was successfully applied to gas dynamic investigations.
Weighted cubic and biharmonic splines
NASA Astrophysics Data System (ADS)
Kvasov, Boris; Kim, Tae-Wan
2017-01-01
In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with the successive over-relaxation method, or by finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate the main features of this original approach.
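As a minimal illustration of the successive over-relaxation step mentioned above, the sketch below applies SOR to a small diagonally dominant tridiagonal system of the kind produced by finite-difference spline discretisations. The matrix, right-hand side, and relaxation factor are toy values, not taken from the paper.

```python
# Minimal successive over-relaxation (SOR) iteration for Ax = b.
# Converges for symmetric positive definite A when 0 < omega < 2.
def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        err = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (1 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
            err = max(err, abs(x_new - x[i]))
            x[i] = x_new            # Gauss-Seidel-style in-place update
        if err < tol:
            break
    return x

# tridiagonal system typical of finite-difference spline discretisations
A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
print(sor(A, b))   # exact solution is [1, 1, 1]
```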
NASA Astrophysics Data System (ADS)
Salabat, Alireza; Saydi, Hassan
2012-12-01
In this research, a new approach for predicting the ultimate sizes of bimetallic nanocomposites synthesized in water-in-oil microemulsion systems is proposed. In this method, an effective Hamaker constant is introduced by modifying the Tabor-Winterton approximation. This effective Hamaker constant is applied in the van der Waals attractive interaction energy, and the resulting effective van der Waals interaction energy is used as the attractive contribution to the total interaction energy. The modified interaction energy was successfully applied to predict the sizes of several bimetallic nanoparticles, at different mass fractions, synthesized in a microemulsion system of dioctyl sodium sulfosuccinate (AOT)/isooctane.
Identification of cost effective energy conservation measures
NASA Technical Reports Server (NTRS)
Bierenbaum, H. S.; Boggs, W. H.
1978-01-01
In addition to a successful program of readily implemented conservation actions for reducing building energy consumption at Kennedy Space Center, recent detailed analyses have identified further substantial savings for buildings representative of technical facilities designed when energy costs were low. The techniques employed for determination of these energy savings consisted of facility configuration analysis, power and lighting measurements, detailed computer simulations and simulation verifications. Use of these methods resulted in identification of projected energy savings as large as $330,000 a year (approximately a two-year break-even period) in a single building. Application of these techniques to other commercial buildings is discussed.
New developments of the Extended Quadrature Method of Moments to solve Population Balance Equations
NASA Astrophysics Data System (ADS)
Pigou, Maxime; Morchain, Jérôme; Fede, Pascal; Penet, Marie-Isabelle; Laronze, Geoffrey
2018-07-01
Population Balance Models have a wide range of applications in many industrial fields, as they account for heterogeneity among properties that is crucial to the modelling of some systems. They describe the evolution of a Number Density Function (NDF) using a Population Balance Equation (PBE). For instance, they are applied to gas-liquid columns or stirred reactors, aerosol technology, crystallisation processes, fine particles or biological systems. There is significant interest in fast, stable and accurate numerical methods for solving PBEs; one such class of methods does not solve directly for the NDF but for its moments. These methods of moments, and in particular quadrature-based methods of moments, have been successfully applied to a variety of systems. Point-wise values of the NDF are sometimes required but are not directly accessible from the moments. To address these issues, the Extended Quadrature Method of Moments (EQMOM) has been developed in the past few years; it approximates the NDF, from its moments, as a convex mixture of Kernel Density Functions (KDFs) of the same parametric family. In the present work EQMOM is further developed on two aspects. The main one is a significant improvement of the core iterative procedure of the method, with an estimated reduction of its computational cost ranging from 60% up to 95%. The second aspect is an extension of EQMOM to two new KDFs used for the approximation, the Weibull and the Laplace kernels. All MATLAB source codes used for this article are provided with it.
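The core EQMOM idea of relating the moments of an NDF to a convex mixture of kernel density functions can be sketched briefly. The Gaussian kernels, weights, and shared spread parameter below are illustrative assumptions, not the article's MATLAB implementation.

```python
# Sketch of the EQMOM representation: the moments of a convex mixture of
# kernels are weighted sums of the kernels' moments. Gaussian kernels here.
def gaussian_raw_moments(mu, sigma, order):
    """Raw moments m_0..m_order of N(mu, sigma^2) via the standard recursion
    m_k = mu*m_{k-1} + (k-1)*sigma^2*m_{k-2}."""
    m = [1.0, mu]
    for k in range(2, order + 1):
        m.append(mu * m[k - 1] + (k - 1) * sigma**2 * m[k - 2])
    return m[: order + 1]

def mixture_moments(weights, mus, sigma, order):
    """Moments of a convex mixture sharing one spread parameter, as in EQMOM."""
    kernels = [gaussian_raw_moments(mu, sigma, order) for mu in mus]
    return [sum(w * km[k] for w, km in zip(weights, kernels))
            for k in range(order + 1)]

# two-kernel mixture: weights sum to 1 (convexity), shared sigma
print(mixture_moments([0.3, 0.7], [1.0, 2.0], 0.5, 4))
```

EQMOM proper runs this relation in reverse, iteratively finding the weights, abscissas, and spread whose mixture reproduces a given moment set.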
Establishing use of crutches by a mentally retarded spina bifida child
Horner, R. Don
1971-01-01
A 5-yr-old mentally retarded spina bifida child was taught to walk with the aid of crutches. This behavior was developed through fading of physical prompting within a 10-step successive approximation sequence. Preliminary training to establish gait consisted of developing use of parallel bars through fading of physically modelled responses within a six-step successive approximation sequence. Use of parallel bars ceased during an extinction period and completely recovered upon being primed with one “free” reinforcement. Systematic use of natural reinforcers was employed as an aid in maintaining use of crutches. PMID:16795294
NASA Astrophysics Data System (ADS)
Xu, Yu-Lin
The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem, if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, and are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and are either linear in the adjustment parameters or linearized by developing them in Taylor series to first order, is inadequate for our orbit problem. D. C. Brown proposed an algorithm solving a more general least squares adjustment problem, in which the scalar residual function, however, is still constructed by first-order approximation. Not long ago, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied to our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges fast if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was therefore modified to yield a definitive solution in cases where the normal approach fails, by combining it with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution.
The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered. The definition of efficiency is revised.
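The blend of Newton-type steps and steepest descent described above can be sketched generically. The exponential model, synthetic data, and damping schedule below are toy assumptions in the Levenberg-Marquardt spirit; they are not the orbit condition equations or the adjustment algorithm of the paper.

```python
# Damped Gauss-Newton for nonlinear least squares: small damping gives
# Newton-like steps, large damping pushes the update toward steepest descent.
import math

def fit(xs, ys, a, b, lam=1e-3, iters=200):
    """Fit y = a*exp(b*x) by accepting cost-reducing steps and adapting damping."""
    def residuals(a, b):
        return [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
    cost = sum(r * r for r in residuals(a, b))
    for _ in range(iters):
        J = [(-math.exp(b * x), -a * x * math.exp(b * x)) for x in xs]
        r = residuals(a, b)
        # damped normal equations (J^T J + lam*I) delta = -J^T r (2x2, direct solve)
        g11 = sum(j1 * j1 for j1, _ in J) + lam
        g22 = sum(j2 * j2 for _, j2 in J) + lam
        g12 = sum(j1 * j2 for j1, j2 in J)
        h1 = -sum(j1 * ri for (j1, _), ri in zip(J, r))
        h2 = -sum(j2 * ri for (_, j2), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da, db = (g22 * h1 - g12 * h2) / det, (g11 * h2 - g12 * h1) / det
        new_cost = sum(ri * ri for ri in residuals(a + da, b + db))
        if new_cost < cost:   # accept the Newton-like step, relax the damping
            a, b, cost, lam = a + da, b + db, new_cost, lam / 3
        else:                 # reject: raise damping, biasing toward steepest descent
            lam *= 10
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]   # noiseless synthetic data
print(fit(xs, ys, a=1.0, b=0.0))
```

Starting from (1.0, 0.0), undamped Gauss-Newton overshoots badly on this problem; the accept/reject rule keeps the cost monotone and recovers (2.0, 0.7).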
Light Scattering by Fractal Dust Aggregates. II. Opacity and Asymmetry Parameter
NASA Astrophysics Data System (ADS)
Tazaki, Ryo; Tanaka, Hidekazu
2018-06-01
Optical properties of dust aggregates are important at various astrophysical environments. To find a reliable approximation method for optical properties of dust aggregates, we calculate the opacity and the asymmetry parameter of dust aggregates by using a rigorous numerical method, the T-Matrix Method, and then the results are compared to those obtained by approximate methods: the Rayleigh–Gans–Debye (RGD) theory, the effective medium theory (EMT), and the distribution of hollow spheres method (DHS). First of all, we confirm that the RGD theory breaks down when multiple scattering is important. In addition, we find that both EMT and DHS fail to reproduce the optical properties of dust aggregates with fractal dimensions of 2 when the incident wavelength is shorter than the aggregate radius. In order to solve these problems, we test the mean field theory (MFT), where multiple scattering can be taken into account. We show that the extinction opacity of dust aggregates can be well reproduced by MFT. However, it is also shown that MFT is not able to reproduce the scattering and absorption opacities when multiple scattering is important. We successfully resolve this weak point of MFT, by newly developing a modified mean field theory (MMF). Hence, we conclude that MMF can be a useful tool to investigate radiative transfer properties of various astrophysical environments. We also point out an enhancement of the absorption opacity of dust aggregates in the Rayleigh domain, which would be important to explain the large millimeter-wave opacity inferred from observations of protoplanetary disks.
A general probabilistic model for group independent component analysis and its estimation methods
Guo, Ying
2012-01-01
Independent component analysis (ICA) has become an important tool for analyzing data from functional magnetic resonance imaging (fMRI) studies. ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix and the uncertainty in between-subjects variability in fMRI data. We present a general probabilistic ICA (PICA) model that can accommodate varying group structures of multi-subject spatio-temporal processes. An advantage of the proposed model is that it can flexibly model various types of group structures in different underlying neural source signals and under different experimental conditions in fMRI studies. A maximum likelihood method is used for estimating this general group ICA model. We propose two EM algorithms to obtain the ML estimates. The first is an exact EM algorithm which provides an exact E-step and an explicit noniterative M-step. The second is a variational approximation EM algorithm which is computationally more efficient than the exact EM. In simulation studies, we first compare the performance of the proposed general group PICA model and the existing probabilistic group ICA approach. We then compare the two proposed EM algorithms and show the variational approximation EM achieves comparable accuracy to the exact EM with significantly less computation time. An fMRI data example is used to illustrate application of the proposed methods. PMID:21517789
Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models
NASA Astrophysics Data System (ADS)
Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas
2017-02-01
A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge onto the observed output vector and the approximate posterior distributions, which are conditional on the data-approximating region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model, so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level.
We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally identifiable, locally identifiable, and unidentifiable model classes, and then to model updating of a two degree-of-freedom nonlinear structure with Duffing nonlinearities in its interstory force-deflection relationship.
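The nested sequence of shrinking data-approximating regions at the heart of ABC-SubSim can be illustrated with a deliberately simplified sequential ABC sketch. This is plain rejection and resampling with an adaptive tolerance, not the paper's component-wise Metropolis sampler, and the Gaussian toy model, sample sizes, and jitter scale are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: observations from N(theta_true, 1).
theta_true = 2.0
y_obs = rng.normal(theta_true, 1.0, size=50)
s_obs = y_obs.mean()  # summary statistic

def distance(theta):
    """Distance between simulated and observed summary statistics."""
    y_sim = rng.normal(theta, 1.0, size=50)
    return abs(y_sim.mean() - s_obs)

# Level 0: draw from a flat prior on [-10, 10].
n = 2000
thetas = rng.uniform(-10, 10, size=n)
dists = np.array([distance(t) for t in thetas])

# Shrinking data-approximating regions: at each level keep the best
# 20% of samples and rejuvenate the population by jittering survivors.
for level in range(4):
    eps = np.quantile(dists, 0.2)          # adaptive tolerance
    keep = thetas[dists <= eps]
    thetas = rng.choice(keep, size=n) + rng.normal(0, 0.2, size=n)
    dists = np.array([distance(t) for t in thetas])

posterior_mean = thetas.mean()
print(posterior_mean)
```

Each level tightens the tolerance to the 20th percentile of the current distances, mimicking the adaptively nested regions; the real algorithm instead moves samples with an MCMC kernel so they remain distributed according to the intermediate posterior at each level.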
Atkinson, Quentin D; Gray, Russell D; Drummond, Alexei J
2008-02-01
The relative timing and size of regional human population growth following our expansion from Africa remain unknown. Human mitochondrial DNA (mtDNA) diversity carries a legacy of our population history. Given a set of sequences, we can use coalescent theory to estimate past population size through time and draw inferences about human population history. However, recent work has challenged the validity of using mtDNA diversity to infer species population sizes. Here we use Bayesian coalescent inference methods, together with a global data set of 357 human mtDNA coding-region sequences, to infer human population sizes through time across 8 major geographic regions. Our estimates of relative population sizes show remarkable concordance with the contemporary regional distribution of humans across Africa, Eurasia, and the Americas, indicating that mtDNA diversity is a good predictor of population size in humans. Plots of population size through time show slow growth in sub-Saharan Africa beginning 143-193 kya, followed by a rapid expansion into Eurasia after the emergence of the first non-African mtDNA lineages 50-70 kya. Outside Africa, the earliest and fastest growth is inferred in Southern Asia approximately 52 kya, followed by a succession of growth phases in Northern and Central Asia (approximately 49 kya), Australia (approximately 48 kya), Europe (approximately 42 kya), the Middle East and North Africa (approximately 40 kya), New Guinea (approximately 39 kya), the Americas (approximately 18 kya), and a second expansion in Europe (approximately 10-15 kya). Comparisons of relative regional population sizes through time suggest that between approximately 45 and 20 kya most of humanity lived in Southern Asia. These findings not only support the use of mtDNA data for estimating human population size but also provide a unique picture of human prehistory and demonstrate the importance of Southern Asia to our recent evolutionary past.
Success rates of a skeletal anchorage system in orthodontics: A retrospective analysis.
Lam, Raymond; Goonewardene, Mithran S; Allan, Brent P; Sugawara, Junji
2018-01-01
To evaluate the premise that skeletal anchorage with SAS miniplates is highly successful and predictable for a range of complex orthodontic movements. This retrospective cross-sectional analysis consisted of 421 bone plates placed by one clinician in 163 patients (95 female, 68 male, mean age 29.4 years ± 12.02). Simple descriptive statistics were performed for a wide range of malocclusions and desired movements to obtain success, complication, and failure rates. The success rate of skeletal anchorage system miniplates was 98.6%, and approximately 40% of cases experienced mild complications. The most common complication was soft tissue inflammation, which was amenable to focused oral hygiene and antiseptic rinses. Infection occurred in approximately 15% of patients, with a statistically significant correlation with poor oral hygiene. The most common movements were distalization and intrusion of teeth. More than a third of the cases involved complex movements in more than one plane of space. The success rate of skeletal anchorage system miniplates is high and predictable for a wide range of complex orthodontic movements.
A Binary Segmentation Approach for Boxing Ribosome Particles in Cryo EM Micrographs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adiga, Umesh P.S.; Malladi, Ravi; Baxter, William
Three-dimensional reconstruction of ribosome particles from electron micrographs requires selection of many single-particle images. Roughly 100,000 particles are required to achieve approximately 10 angstrom resolution. Manual selection of particles, by visual observation of the micrographs on a computer screen, is recognized as a bottleneck in automated single particle reconstruction. This paper describes an efficient approach for automated boxing of ribosome particles in micrographs. Use of a fast, anisotropic non-linear reaction-diffusion method to pre-process micrographs and rank-leveling to enhance the contrast between particles and the background, followed by binary and morphological segmentation, constitutes the core of this technique. Modifying the shape of the particles to facilitate segmentation of individual particles within clusters and boxing the isolated particles is successfully attempted. Tests on a limited number of micrographs have shown that over 80 percent success is achieved in automatic particle picking.
Career Placement of Doctor of Pharmacy Graduates at Eight U.S. Midwestern Schools
Sweet, Burgunda V.; Janke, Kristin K.; Kuba, Sarah E.; Plake, Kimberly S.; Stanke, Luke D; Yee, Gary C.
2015-01-01
Objective. To characterize postgraduation placement plans of 2013 doctor of pharmacy (PharmD) graduates. Methods. A cross-sectional survey of PharmD graduates from 8 midwestern colleges of pharmacy was designed to capture a comprehensive picture of graduating students’ experiences and outcomes of their job search. Results. At graduation, 81% of 2013 respondents had postgraduate plans, with approximately 40% accepting jobs and 40% accepting residencies or fellowships. Eighty-four percent of graduates reported being pleased with offers received, and 86% received placement in their preferred practice setting. Students perceived that securing residencies was more difficult than securing jobs. Students who participated in key activities had a nearly sevenfold increase in successful residency placement. Conclusion. While the demand for pharmacists decreased in recent years, responses indicated successful placement by the majority of 2013 graduates at the time of graduation. PMID:26430275
NASA Technical Reports Server (NTRS)
Luu, Y. K.; Kim, K.; Hsiao, B. S.; Chu, B.; Hadjiargyrou, M.; Hadjiargyou, M. (Principal Investigator)
2003-01-01
The present work utilizes electrospinning to fabricate synthetic polymer/DNA composite scaffolds for therapeutic application in gene delivery for tissue engineering. The scaffolds are non-woven, nano-fibered, membranous structures composed predominantly of poly(lactide-co-glycolide) (PLGA) random copolymer and a poly(D,L-lactide)-poly(ethylene glycol) (PLA-PEG) block copolymer. Release of plasmid DNA from the scaffolds was sustained over a 20-day study period, with maximum release occurring at approximately 2 h. Cumulative release profiles indicated amounts released were approximately 68-80% of the initially loaded DNA. Variations in the PLGA to PLA-PEG block copolymer ratio vastly affected the overall structural morphology, as well as both the rate and efficiency of DNA release. Results indicated that DNA released directly from these electrospun scaffolds was indeed intact, capable of cellular transfection, and successfully encoded the protein beta-galactosidase. When tested under tensile loads, the electrospun polymer/DNA composite scaffolds exhibited tensile moduli of approximately 35 MPa, with approximately 45% strain initially. These values approximate those of skin and cartilage. Taken together, this work represents the first successful demonstration of plasmid DNA incorporation into a polymer scaffold using electrospinning.
Synthesis of MCMC and Belief Propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo
Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method which is typically fast and empirically very successful, but in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows the BP error to be expressed as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pair-wise binary GMs, and it also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.
Seasonal to interannual morphodynamics along a high-energy dissipative littoral cell
Ruggiero, P.; Kaminsky, G.M.; Gelfenbaum, G.; Voigt, B.
2005-01-01
A beach morphology monitoring program was initiated during summer 1997 along the Columbia River littoral cell (CRLC) on the coasts of northwest Oregon and southwest Washington, USA. This field program documents the seasonal through interannual morphological variability of these high-energy dissipative beaches over a variety of spatial scales. Following the installation of a dense network of geodetic control monuments, a nested sampling scheme consisting of cross-shore topographic beach profiles, three-dimensional topographic beach surface maps, nearshore bathymetric surveys, and sediment size distribution analyses was initiated. Beach monitoring is being conducted with state-of-the-art real-time kinematic differential global positioning system survey methods that combine both high accuracy and speed of measurement. Sampling methods resolve variability in beach morphology at alongshore length scales of approximately 10 meters to approximately 100 kilometers and cross-shore length scales of approximately 1 meter to approximately 2 kilometers. During the winter of 1997/1998, coastal change in the US Pacific Northwest was greatly influenced by one of the strongest El Niño events on record. Steeper than typical southerly wave angles resulted in alongshore sediment transport gradients and shoreline reorientation on a regional scale. The La Niña of 1998/1999, dominated by cross-shore processes associated with the largest recorded wave year in the region, resulted in net beach erosion along much of the littoral cell. The monitoring program successfully documented the morphological response to these interannual forcing anomalies as well as the subsequent beach recovery associated with three consecutive moderate wave years.
These morphological observations within the CRLC can be generalized to explain overall system patterns; however, distinct differences in large-scale coastal behavior (e.g., foredune ridge morphology, sandbar morphometrics, and nearshore beach slopes) are not readily explained or understood.
NASA Astrophysics Data System (ADS)
Yao, J. G.; Lagrosas, N.; Ampil, L. J. Y.; Lorenzo, G. R. H.; Simpas, J.
2016-12-01
A hybrid piecewise rainfall value interpolation algorithm was formulated using the commonly known Inverse Distance Weighting (IDW) and the Gauss-Seidel variant Successive Over-Relaxation (SOR) to interpolate rainfall values over Metro Manila, Philippines. Because SOR requires boundary values for its algorithm to work, the IDW method is used to estimate rainfall values at the boundary. Iterations using SOR were then done within the defined boundaries to obtain the results corresponding to the lowest RMSE value. The hybrid method was applied to rainfall datasets obtained from a dense network of 30 stations in Metro Manila, which has been collecting meteorological data every 5 minutes since 2012. Using the Davis Vantage Pro 2 Plus weather monitoring system, each station sends data to a central server which can be accessed through the website metroweather.com.ph. The stations are spread over approximately 625 sq km of area such that each station covers approximately 25 sq km. The locations of the stations, determined by the Metro Manila Development Authority (MMDA), are in critical sections of Metro Manila such as watersheds and flood-prone areas. Three cases have been investigated in this study, one for each type of rainfall present in Metro Manila: monsoon-induced (8/20/13), typhoon (6/29/13), and thunderstorm (7/3/15 & 7/4/15). The area where the rainfall stations are located is divided such that large measured rainfall values are used as part of the boundaries for the SOR. Measured station values found inside the area where SOR is implemented are compared with interpolated values. Root mean square error (RMSE) and correlation trends between measured and interpolated results are quantified. RMSE values ranged from 0.25 to 2.46 mm for typhoons, 1.55 to 10.69 mm for monsoon-induced rain, and 0.01 to 6.27 mm for thunderstorms.
R2 values, on the other hand, are 0.91, 0.89 and 0.76 for typhoons, monsoon-induced rain and thunderstorms, respectively. This study has shown that the rainfall interpolation method works and can be used for improved prediction, analysis, and real-time flood map generation.
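The two-stage scheme described above, IDW to fix boundary values followed by relaxation sweeps for the interior, can be sketched as follows. The station coordinates, rainfall values, grid size, and the Laplace smoothness assumption for the interior are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hypothetical station data: (x, y, rainfall in mm) on a 20x20 grid domain.
stations = [(2, 3, 5.0), (17, 4, 12.0), (9, 15, 8.0), (4, 18, 3.0), (16, 16, 10.0)]
N = 20
grid = np.zeros((N, N))

def idw(x, y, p=2):
    """Inverse Distance Weighting estimate at grid point (x, y)."""
    num, den = 0.0, 0.0
    for sx, sy, val in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return val
        num += val / d2 ** (p / 2)
        den += 1.0 / d2 ** (p / 2)
    return num / den

# Step 1: fix boundary values with IDW.
for i in range(N):
    for j in range(N):
        if i in (0, N - 1) or j in (0, N - 1):
            grid[i, j] = idw(i, j)

# Step 2: SOR iteration (Gauss-Seidel with over-relaxation) on the interior,
# smoothing it as a Laplace solve with the IDW boundary held fixed.
omega = 1.8
for sweep in range(500):
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            gs = 0.25 * (grid[i-1, j] + grid[i+1, j] + grid[i, j-1] + grid[i, j+1])
            grid[i, j] += omega * (gs - grid[i, j])

print(grid[10, 10])
```

The converged interior obeys the discrete maximum principle, so every interpolated value lies between the smallest and largest boundary value; the study's version additionally pins large measured station values inside the domain as internal boundaries.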
NASA Astrophysics Data System (ADS)
Magdziarz, M.; Mista, P.; Weron, A.
2007-05-01
We introduce an approximation of risk processes by anomalous diffusion. In the paper we consider the case where the waiting times between successive occurrences of the claims belong to the domain of attraction of an alpha-stable distribution. The relationship between the obtained approximation and the celebrated fractional diffusion equation is emphasised. We also establish upper bounds for the ruin probability in the considered model and give some numerical examples.
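A risk process of this kind can be explored numerically with a Monte Carlo sketch. The premium rate, exponential claim sizes, Pareto tail index, horizon, and path count below are illustrative assumptions, not the paper's model; only the heavy-tailed waiting times (Pareto, in the domain of attraction of an alpha-stable law) echo the setting described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def ruin_probability(u, c=1.5, horizon=200.0, n_paths=10000, alpha=1.5):
    """Monte Carlo estimate of the finite-horizon ruin probability for a
    renewal risk process: initial capital u, premium rate c, heavy-tailed
    Pareto waiting times between claims, exponential claim sizes."""
    ruined = 0
    for _ in range(n_paths):
        t, capital = 0.0, u
        while t < horizon:
            w = rng.pareto(alpha) + 1.0   # Pareto waiting time, tail index alpha
            t += w
            if t >= horizon:
                break
            capital += c * w - rng.exponential(2.0)  # premium income minus claim
            if capital < 0:
                ruined += 1
                break
    return ruined / n_paths

psi0 = ruin_probability(0.0)
psi20 = ruin_probability(20.0)
print(psi0, psi20)
```

As expected, the estimated ruin probability decreases as the initial capital grows; the paper's contribution is an analytic anomalous-diffusion approximation and upper bounds that replace such brute-force simulation.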
Exact exchange potential evaluated from occupied Kohn-Sham and Hartree-Fock solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cinal, M.; Holas, A.
2011-06-15
The reported algorithm determines the exact exchange potential v_x in an iterative way using energy shifts (ESs) and orbital shifts (OSs) obtained with finite-difference formulas from the solutions (occupied orbitals and their energies) of the Hartree-Fock-like equation and the Kohn-Sham-like equation, the former used for the initial approximation to v_x and the latter for increments of ES and OS due to subsequent changes of v_x. Thus, the need for solution of the differential equations for OSs, used by Kümmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)], is bypassed. The iterated exchange potential, expressed in terms of ESs and OSs, is improved by modifying ESs at odd iteration steps and OSs at even steps. The modification formulas are related to the optimized-effective-potential equation (satisfied at convergence) written as the condition of vanishing density shift (DS). They are obtained, respectively, by enforcing its satisfaction through corrections to approximate OSs and by determining the optimal ESs that minimize the DS norm. The proposed method, successfully tested for several closed-(sub)shell atoms, from Be to Kr, within the density functional theory exchange-only approximation, proves highly efficient. The calculations using the pseudospectral method for representing orbitals give iterative sequences of approximate exchange potentials (starting with the Krieger-Li-Iafrate approximation) that rapidly approach the exact v_x so that, for Ne, Ar, and Zn, the corresponding DS norm becomes less than 10^-6 after 13, 13, and 9 iteration steps for a given electron density. In self-consistent density calculations, orbital energies of 10^-4 hartree accuracy are obtained for these atoms after, respectively, 9, 12, and 12 density iteration steps, each involving just two steps of v_x iteration, while the accuracy limit of 10^-6 to 10^-7 hartree is reached after 20 density iterations.
Raynor, Hollie A.; Osterholt, Kathrin M.; Hart, Chantelle N.; Jelalian, Elissa; Vivier, Patrick; Wing, Rena R.
2016-01-01
Objective. Evaluate enrollment numbers, randomization rates, costs, and cost-effectiveness of active versus passive recruitment methods for parent-child dyads into two pediatric obesity intervention trials. Methods. Recruitment methods were categorized into active (pediatrician referral and targeted mailings, with participants identified by researcher/health care provider) versus passive methods (newspaper, bus, internet, television, and earning statements; fairs/community centers/schools; and word of mouth; with participants self-identified). Numbers of enrolled and randomized families and costs/recruitment method were monitored throughout the 22-month recruitment period. Costs (in USD) per recruitment method included staff time, mileage, and targeted costs of each method. Results. A total of 940 families were referred or made contact, with 164 families randomized (child: 7.2±1.6 years, 2.27±0.61 standardized body mass index [zBMI], 86.6% obese, 61.7% female, 83.5% white; parent: 38.0±5.8 years, 32.9±8.4 BMI, 55.2% obese, 92.7% female, 89.6% white). Pediatrician referral, followed by targeted mailings, produced the largest number of enrolled and randomized families (both methods combined producing 87.2% of randomized families). Passive recruitment methods yielded better retention from enrollment to randomization (p < 0.05), but produced few families (21 in total). Approximately $91,000 was spent on recruitment, with cost per randomized family at $554.77. Pediatrician referral was the most cost-effective method, $145.95/randomized family, but yielded only 91 randomized families over 22 months of continuous recruitment. Conclusion. Pediatrician referral and targeted mailings, which are active recruitment methods, were the most successful strategies. However, recruitment demanded significant resources. Successful recruitment for pediatric trials should use several strategies. Clinical Trials Registration: NCT00259324, NCT00200265 PMID:19922036
Numerical simulation of wave-induced fluid flow seismic attenuation based on the Cole-Cole model.
Picotti, Stefano; Carcione, José M
2017-07-01
The acoustic behavior of porous media can be simulated more realistically using a stress-strain relation based on the Cole-Cole model. In particular, seismic velocity dispersion and attenuation in porous rocks are well described by mesoscopic-loss models. Using the Zener model to simulate wave propagation is a rough approximation, while the Cole-Cole model provides an optimal description of the physics. Here, a time-domain algorithm is proposed based on the Grünwald-Letnikov numerical approximation of the fractional derivative involved in the time-domain representation of the Cole-Cole model, while the spatial derivatives are computed with the Fourier pseudospectral method. The numerical solution is successfully tested against an analytical solution. The methodology is applied to a model of a saline aquifer, where carbon dioxide (CO2) is injected. To follow the migration of the gas and detect possible leakages, seismic monitoring surveys should be carried out periodically. To this aim, the sensitivity of the seismic method must be carefully assessed for the specific case. The simulated test considers a possible leakage in the overburden, above the caprock, where the sandstone is partially saturated with gas and brine. The numerical examples illustrate the implementation of the theory.
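The Grünwald-Letnikov approximation of a fractional derivative, which underlies the time-domain scheme described above, can be demonstrated on a function whose fractional derivative is known in closed form. This is a minimal one-dimensional sketch, not the paper's Cole-Cole wave solver:

```python
import math
import numpy as np

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Grünwald-Letnikov approximation of the fractional derivative of
    order alpha at time t (first-order accurate in the step h)."""
    n = int(t / h)
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):          # recursive binomial weights
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    fk = f(t - np.arange(n + 1) * h)   # samples f(t), f(t-h), ..., f(~0)
    return h ** (-alpha) * np.dot(w, fk)

# Check against the exact result D^alpha t = t^(1-alpha) / Gamma(2-alpha).
alpha, t = 0.5, 1.0
approx = gl_fractional_derivative(lambda s: s, t, alpha)
exact = t ** (1 - alpha) / math.gamma(2 - alpha)
print(approx, exact)
```

The same recursive-weight sum, applied at every time step of a finite-difference or pseudospectral update, is what makes the fractional Cole-Cole memory term computable in the time domain.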
NASA Astrophysics Data System (ADS)
Yildiz, Nihat; San, Sait Eren; Okutan, Mustafa; Kaya, Hüseyin
2010-04-01
Among other significant obstacles, inherent nonlinearity in experimental physical response data poses severe difficulty in empirical physical formula (EPF) construction. In this paper, we applied a novel method (namely the layered feedforward neural network (LFNN) approach) to produce explicit nonlinear EPFs for experimental nonlinear electro-optical responses of doped nematic liquid crystals (NLCs). Our motivation was that, as we showed in a previous theoretical work, an appropriate LFNN, due to its exceptional nonlinear function approximation capabilities, is highly relevant to EPF construction. Therefore, in this paper, we obtained accurate LFNN approximation functions as the desired EPFs for the above-mentioned highly nonlinear response data of NLCs. In other words, by using suitable LFNNs, we successfully fitted the experimentally measured response and predicted the new (yet-to-be measured) response data. The experimental data (response versus input) were diffraction and dielectric properties versus bias voltage, and they were all taken from our previous experimental work. We conclude that in general, LFNN can be applied to construct various types of EPFs for the corresponding various nonlinear physical perturbation (thermal, electronic, molecular, electric, optical, etc.) data of doped NLCs.
Wen, Peng; Zhu, Ding-He; Feng, Kun; Liu, Fang-Jun; Lou, Wen-Yong; Li, Ning; Zong, Min-Hua; Wu, Hong
2016-04-01
A novel antimicrobial packaging material was obtained by incorporating cinnamon essential oil/β-cyclodextrin inclusion complex (CEO/β-CD-IC) into poly(lactic acid) (PLA) nanofibers via the electrospinning technique. The CEO/β-CD-IC was prepared by the co-precipitation method, and SEM and FT-IR spectroscopy analyses indicated the successful formation of CEO/β-CD-IC, which improved the thermal stability of CEO. The CEO/β-CD-IC was then incorporated into PLA nanofibers by electrospinning, and the resulting PLA/CEO/β-CD nanofilm showed better antimicrobial activity compared to the PLA/CEO nanofilm. The minimum inhibitory concentration (MIC) of the PLA/CEO/β-CD nanofilm against Escherichia coli and Staphylococcus aureus was approximately 1 mg/ml (corresponding CEO concentration 11.35 μg/ml) and the minimum bactericidal concentration (MBC) was approximately 7 mg/ml (corresponding CEO concentration 79.45 μg/ml). Furthermore, compared with the casting method, the mild electrospinning process was more favorable for retaining greater CEO in the obtained film. The PLA/CEO/β-CD nanofilm can effectively prolong the shelf life of pork, suggesting it has potential application in active food packaging. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
This paper describes an accurate, economical method for generating approximations to the kernel of the integral equation relating unsteady pressure to normalwash in nonplanar flow. The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential approximations and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. Coefficients for 8, 12, 24, and 72 term approximations are tabulated in the report. Also, since the method is automated, it can be used to generate approximations to attain any desired trade-off between accuracy and computing cost.
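The core fitting step, a linear least-squares determination of coefficients for exponentials with geometrically spaced exponents, can be sketched as below. The target function, number of terms, base exponent, and spacing ratio are illustrative assumptions; the paper's actual kernel and its automated optimization of the exponent multiplier are not reproduced:

```python
import numpy as np

# Target: a smooth, algebraically decaying function resembling the
# algebraic part of such kernels, f(u) = 1 - u / sqrt(1 + u^2),
# approximated on [0, 20] by sum_n a_n * exp(-b_n u).
u = np.linspace(0.0, 20.0, 400)
f = 1.0 - u / np.sqrt(1.0 + u ** 2)

n_terms = 12
b = 0.05 * 1.6 ** np.arange(n_terms)      # geometric exponent sequence
A = np.exp(-np.outer(u, b))               # design matrix, column n = exp(-b_n u)
coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)

max_err = np.max(np.abs(A @ coeffs - f))
print(max_err)
```

Because the exponents are fixed in advance, only the linear coefficients are unknown, so a single least-squares solve suffices; sweeping the geometric base and ratio, as the automated procedure does, trades term count against accuracy.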
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
Grout, Ray; Kolla, Hemanth; Minion, Michael; ...
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
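The residual-based stopping rule described above can be illustrated with a much simpler iterative-correction scheme: Picard sweeps with trapezoid quadrature on a scalar ODE rather than full spectral-collocation SDC. The node count and tolerances are illustrative assumptions:

```python
import numpy as np

# Solve y' = -y, y(0) = 1 on [0, 0.5] by repeated correction sweeps on a
# set of nodes, monitoring the collocation residual: iterate until the
# residual is small relative to the first sweep's residual and changes
# slowly between successive sweeps.
f = lambda y: -y
nodes = np.linspace(0.0, 0.5, 6)
h = nodes[1] - nodes[0]
y = np.ones_like(nodes)                   # initial approximation y ≡ y(0)

def residual(y):
    """Max defect of y(t) = y(0) + int_0^t f(y) with trapezoid quadrature."""
    integ = np.concatenate(([0.0], np.cumsum(h * 0.5 * (f(y[:-1]) + f(y[1:])))))
    return np.max(np.abs(y - (1.0 + integ)))

r_prev, r_first = None, None
for sweep in range(1, 100):
    integ = np.concatenate(([0.0], np.cumsum(h * 0.5 * (f(y[:-1]) + f(y[1:])))))
    y = 1.0 + integ                       # correction sweep
    r = residual(y)
    if r_first is None:
        r_first = r
    if r_prev is not None and r < 1e-10 * r_first and abs(r - r_prev) < 1e-12:
        break                             # converged: residual small and stable
    r_prev = r

print(y[-1], np.exp(-0.5))
```

A transient fault that corrupts the solution values inflates the residual, so under this rule the iteration simply keeps sweeping until the defect is driven back down, which is the essence of the resilience strategy.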
Composite superconducting wires obtained by high-rate tinning in molten Bi-Pb-Sr-Ca-Cu-O system
NASA Technical Reports Server (NTRS)
Grosav, A. D.; Konopko, L. A.; Leporda, N. I.
1991-01-01
Long lengths of metal superconductor composites were prepared by passing a copper wire through the bismuth-based molten oxide system at a constant speed. The key to successful composite preparation is the high pulling speed involved, which permits minimization of the severe interaction between the unbuffered metal surface and the oxide melt. Depending on the temperature of the melt and the pulling speed, coatings with different thicknesses and microstructures appeared. The nonannealed thick coatings contained a Bi2(Sr,Ca)2Cu1O6 phase as a major component. After relatively short annealing times at 800 C, both resistivity and initial magnetization versus temperature measurements show superconducting transitions beginning in the 110 to 115 K region. The effects of annealing and composition on the obtained results are discussed. This method of manufacture led to the fabrication of wire with a copper core in a dense covering of uniform thickness, h approximately equal to 5 to 50 microns. Composite wires with h approximately equal to 10 microns (h/d approximately equal to 0.1) sustained bending on a 15 mm radius frame without cracking during flexing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iwasa, Takeshi, E-mail: tiwasa@mail.sci.hokudai.ac.jp; Takenaka, Masato; Taketsugu, Tetsuya
2016-03-28
A theoretical method to compute infrared absorption spectra when a molecule is interacting with an arbitrary nonuniform electric field, such as a near-field, is developed and numerically applied to simple model systems. The method is based on the multipolar Hamiltonian, where the light-matter interaction is described by a spatial integral of the inner product of the molecular polarization and the applied electric field. The computation scheme is developed under the harmonic approximation for the molecular vibrations and within the framework of modern electronic structure calculations such as density functional theory. Infrared reflection absorption and near-field infrared absorption are considered as model systems. The obtained IR spectra successfully reflect the spatial structure of the applied electric field and the corresponding vibrational modes, demonstrating the applicability of the present method to the analysis of modern nanovibrational spectroscopy using near-fields. The present method can use arbitrary electric fields and thus can bridge two fields, computational chemistry and electromagnetics.
NASA Astrophysics Data System (ADS)
Masuda, Toshiaki; Miyake, Tomoya; Kimura, Nozomi; Okamoto, Atsushi
2011-01-01
Microboudinage structures developed within glaucophane are found in the calcite matrix of blueschist-facies impure marbles from Syros, Greece. The presence of these structures enables the successful application of the microboudin method for palaeodifferential stress analysis, which was originally developed for rocks with a quartzose matrix. Application of the microboudin method reveals that differential stress increased during exhumation of the marble; the estimated maximum palaeodifferential stress values are approximately 9-15 MPa, an order of magnitude lower than the values estimated using the calcite-twin palaeopiezometer. This discrepancy reflects the fact that the two methods assess differential stress at different stages in the deformation history. Differential stresses in the Syros samples estimated using three existing equations for grain-size palaeopiezometry show a high degree of scatter, and no reliable results were obtained by a comparison between the results of the microboudin method and grain-size palaeopiezometry.
The AB Initio Mia Method: Theoretical Development and Practical Applications
NASA Astrophysics Data System (ADS)
Peeters, Anik
The bottleneck in conventional ab initio Hartree-Fock calculations is the storage of the electron repulsion integrals, because their number increases with the fourth power of the number of basis functions. This problem can be solved by combining the multiplicative integral approximation (MIA) with the direct SCF method. The MIA approach was successfully applied in the geometry optimisation of some biologically interesting compounds, such as the neuroleptic haloperidol and two TIBO derivatives, inactivators of HIV-1. In this thesis the power of the MIA method is shown by applying it to the calculation of the forces on the nuclei. In addition, the MIA method enabled the development of a new model for performing crystal field studies: the supermolecule model. The results for this model are in better agreement with experimental data than those for the point charge model. This is illustrated by the study of some small molecules in the solid state: 2,3-diketopiperazine, formamide oxime and two polymorphic forms of glycine, alpha-glycine and beta-glycine.
Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong
2015-12-02
For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of the baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to reconstruct the objective function rigorously. Second, the search strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space, ensuring that the correct ambiguity candidates lie within it and allowing the search to be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Some of the vector candidates are then eliminated by a derived approximate inequality, which accelerates the search. Experimental results show that, compared to the traditional method with only a baseline length constraint, the new method can use a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. The tests also verify that the method is not very sensitive to baseline vector error and performs robustly when the angular error is not large.
Assessment of six different collagenase-based methods to isolate feline pancreatic islets.
Zini, Eric; Franchini, Marco; Guscetti, Franco; Osto, Melania; Kaufmann, Karin; Ackermann, Mathias; Lutz, Thomas A; Reusch, Claudia E
2009-12-01
Isolation of pancreatic islets is necessary to study the molecular mechanisms underlying beta-cell demise in diabetic cats. Six collagenase-based isolation methods were compared in 10 cat pancreata, including a single and a double course of collagenase, followed or not by Ficoll centrifugation or accutase, and collagenase plus accutase. Morphometric analysis was performed to measure the relative areas of islet and exocrine tissue. Islet-specific mRNA transcripts were quantified in isolates by real-time PCR. The single and double courses of collagenase digestion were successful in each cat and provided similar islet-to-exocrine tissue ratios. Quantities of insulin mRNA did not differ between the two methods. However, on histological examination either method yielded only approximately 2% pure islets. The other methods provided disrupted islets or insufficient samples in 1-7 cats. Although pancreas digestion with a single or double course of collagenase was superior, further studies are needed to improve islet isolation in cats.
Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji
2016-01-01
This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers, and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins. Guest Editors: J.C. Gumbart and Sergei Noskov. PMID:26766517
Susong, D.; Marks, D.; Garen, D.
1999-01-01
Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in the mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for the topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.
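The elevational-gradient idea above can be pictured with a small sketch: fit a lapse rate to station data and distribute temperature over a DEM grid. The station values and grid are invented for illustration; in practice the residuals from such a trend would then be interpolated spatially (e.g. by detrended kriging):

```python
import numpy as np

# Station elevations (m) and air temperatures (deg C) -- invented values
stn_z = np.array([1200.0, 1650.0, 2100.0, 2500.0])
stn_t = np.array([4.1, 1.9, -1.2, -3.8])

# Fit the elevational gradient (lapse rate) by linear regression
slope, intercept = np.polyfit(stn_z, stn_t, 1)

# Distribute temperature over a DEM grid using the fitted gradient
dem = np.linspace(1000.0, 3000.0, 50).reshape(5, 10)   # toy elevation grid
t_surface = intercept + slope * dem
```

The same trend-plus-residual pattern applies to humidity or precipitation surfaces, with the gradient refit at each time step.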
Zhu, Dan; Gu, Zhi-Yong; Lin, Chia-Shiang; Nie, Fa-Chuan; Cui, Jian
2018-04-01
Abdominal pain and hiccups secondary to intra-abdominal adhesion are surgical complications that are often treated with painkillers and secondary surgeries, with an unsatisfactory therapeutic effect. This study presents a new treatment method that uses ultrasound-guided local infiltration of peritoneal and abdominal wall adhesions in patients with hiccups and abdominal pain. A 62-year-old patient presented to our hospital with a 30-year history of intractable hiccups and abdominal pain. Abdominal examination revealed a scar approximately 10 cm long on the abdominal umbilical plane; pressing the right scar area simultaneously induced abdominal pain and hiccups. Abdominal computed tomography clearly demonstrated that the bowel had no obvious expansion. Ultrasonographic examination found that peritoneal motility in the adhesion regions was significantly slower than in normal regions. The diagnosis of chronic postoperative pain syndrome was clear. The symptoms were significantly alleviated by successful treatment with ultrasound-guided local infiltration of the peritoneal and abdominal wall scar adhesions. After 3 stages of hospitalization and 1 year of follow-up, the patient's abdominal wall pain was relieved by approximately 80% and the hiccups by approximately 70%. This treatment is a useful option for managing abdominal adhesion and the accompanying pain or hiccups resulting from abdominal surgery. It could ease the psychological and economic burden on patients and improve their quality of life.
Vitrification-based cryopreservation of Drosophila embryos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreuders, P.D.; Mazur, P.
1994-12-31
Currently, over 30,000 strains of Drosophila melanogaster are maintained by geneticists through regular transfer of breeding stocks. A more cost-effective solution is to cryopreserve their embryos. Cooling and warming rates >10,000 °C/min are required to prevent chilling injury. To avoid the lethal intracellular ice normally produced at such high cooling rates, it is necessary to use ≥50% (w/w) concentrations of glass-inducing solutes to vitrify the embryos. Differential scanning calorimetry (DSC) is used to develop and evaluate ethylene glycol and polyvinylpyrrolidone based vitrification solutions. The resulting solution consists of 8.5 M ethylene glycol + 10% polyvinylpyrrolidone in D-20 Drosophila culture medium. A two-stage method is used for the introduction and concentration of these solutes within the embryo. The method reduces the exposure time to the solution and, consequently, reduces toxicity. Both DSC and freezing experiments suggest that, while twelve-hour embryos will vitrify at cooling rates >200 °C/min, they will devitrify and be killed with even moderately rapid warming rates of approximately 1,900 °C/min. Very rapid warming (approximately 100,000 °C/min) results in variable numbers of successfully cryopreserved embryos. This sensitivity to warming rate is typical of devitrification. The variability in survival is reduced using embryos of a precisely determined embryonic stage. Vitrification of the older, fifteen-hour embryos yields an optimized hatching rate of 68%, with 35-40% of the resulting larvae developing to normal adults. The success rate in embryos of this age may reflect a reduced sensitivity to limited devitrification or a more even distribution of the ethylene glycol within the embryo.
NASA Astrophysics Data System (ADS)
Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Zeng, Wenzhi; Zhang, Yonggen; Sun, Fangqiang; Shi, Liangsheng
2018-03-01
Hydraulic tomography (HT) is a recently developed technology for characterizing high-resolution, site-specific heterogeneity using hydraulic data (nd) from a series of cross-hole pumping tests. To properly account for the subsurface heterogeneity and to flexibly incorporate additional information, geostatistical inverse models, which permit a large number of spatially correlated unknowns (ny), are frequently used to interpret the collected data. However, the memory storage requirements for the covariance of the unknowns (ny × ny) in these models are prodigious for large-scale 3-D problems. Moreover, the sensitivity evaluation is often computationally intensive using the traditional difference method (ny forward runs). Although employing the adjoint method can reduce the cost to nd forward runs, the adjoint model requires intrusive coding effort. To resolve these issues, this paper presents a Reduced-Order Successive Linear Estimator (ROSLE) for analyzing HT data. This new estimator approximates the covariance of the unknowns using a Karhunen-Loeve Expansion (KLE) truncated at order nkl, and it calculates the directional sensitivities (in the directions of the nkl eigenvectors) to form the covariance and cross-covariance used in the Successive Linear Estimator (SLE). In addition, the covariance of the unknowns is updated every iteration by updating the eigenvalues and eigenfunctions. The computational advantages of the proposed algorithm are demonstrated through numerical experiments and a 3-D transient HT analysis of data from a highly heterogeneous field site.
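The covariance-reduction step can be pictured with a small numerical sketch: eigendecompose a covariance and keep only the leading KLE modes. The 1-D exponential covariance, grid size, and truncation order here are assumed stand-ins, not the paper's field setup:

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
corr_len = 0.2  # assumed correlation length of the heterogeneous field

# Exponential covariance of the (stand-in) spatially correlated unknowns
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Karhunen-Loeve expansion: eigendecompose, keep the nkl leading modes
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

nkl = 20  # truncation order (illustrative)
C_kl = (eigvecs[:, :nkl] * eigvals[:nkl]) @ eigvecs[:, :nkl].T

captured = eigvals[:nkl].sum() / eigvals.sum()           # variance retained
rel_err = np.linalg.norm(C - C_kl) / np.linalg.norm(C)   # Frobenius error
```

Storing the nkl modes instead of the full n × n covariance is what shrinks the memory footprint; in the estimator itself the modes (and hence the reduced covariance) would be updated each iteration.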
Raynor, Hollie A; Osterholt, Kathrin M; Hart, Chantelle N; Jelalian, Elissa; Vivier, Patrick; Wing, Rena R
2009-01-01
Evaluate enrollment numbers, randomization rates, costs, and cost-effectiveness of active versus passive recruitment methods for parent-child dyads in two pediatric obesity intervention trials. Recruitment methods were categorized as active (pediatrician referral and targeted mailings, with participants identified by the researcher/health care provider) versus passive (newspaper, bus, internet, television, and earning statements; fairs/community centers/schools; and word of mouth; with participants self-identified). Numbers of enrolled and randomized families and costs per recruitment method were monitored throughout the 22-month recruitment period. Costs (in USD) per recruitment method included staff time, mileage, and targeted costs of each method. A total of 940 families were referred or made contact, with 164 families randomized (child: 7.2+/-1.6 years, 2.27+/-0.61 standardized body mass index [zBMI], 86.6% obese, 61.7% female, 83.5% Caucasian; parent: 38.0+/-5.8 years, 32.9+/-8.4 BMI, 55.2% obese, 92.7% female, 89.6% Caucasian). Pediatrician referral, followed by targeted mailings, produced the largest number of enrolled and randomized families (the two methods combined producing 87.2% of randomized families). Passive recruitment methods yielded better retention from enrollment to randomization (p<0.05) but produced few families (21 in total). Approximately $91,000 was spent on recruitment, with a cost per randomized family of $554.77. Pediatrician referral was the most cost-effective method, at $145.95 per randomized family, but yielded only 91 randomized families over 22 months of continuous recruitment. Pediatrician referral and targeted mailings, both active recruitment methods, were the most successful strategies. However, recruitment demanded significant resources. Successful recruitment for pediatric trials should use several strategies. NCT00259324, NCT00200265.
Solutions of interval type-2 fuzzy polynomials using a new ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani
2015-10-01
A few years ago, a ranking method was introduced for fuzzy polynomial equations. The ranking method is intended to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then examined numerically for triangular and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.
Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data
NASA Astrophysics Data System (ADS)
Pathak, Jaideep; Lu, Zhixin; Hunt, Brian R.; Girvan, Michelle; Ott, Edward
2017-12-01
We use recent advances in the machine learning area known as "reservoir computing" to formulate a method for model-free estimation from data of the Lyapunov exponents of a chaotic process. The technique uses a limited time series of measurements as input to a high-dimensional dynamical system called a "reservoir." After the reservoir's response to the data is recorded, linear regression is used to learn a large set of parameters, called the "output weights." The learned output weights are then used to form a modified autonomous reservoir designed to be capable of producing an arbitrarily long time series whose ergodic properties approximate those of the input signal. When successful, we say that the autonomous reservoir reproduces the attractor's "climate." Since the reservoir equations and output weights are known, we can compute the derivatives needed to determine the Lyapunov exponents of the autonomous reservoir, which we then use as estimates of the Lyapunov exponents for the original input generating system. We illustrate the effectiveness of our technique with two examples, the Lorenz system and the Kuramoto-Sivashinsky (KS) equation. In the case of the KS equation, we note that the high dimensional nature of the system and the large number of Lyapunov exponents yield a challenging test of our method, which we find the method successfully passes.
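A minimal sketch of the reservoir-computing pipeline the abstract describes: drive a random recurrent network with a signal, record its response, and fit the output weights by linear (ridge) regression. The reservoir size, spectral radius, and input signal are illustrative stand-ins, and the autonomous-prediction and Lyapunov-exponent steps are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                   # reservoir size (illustrative)

# Random reservoir, rescaled to spectral radius 0.9
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)     # input coupling

u = np.sin(0.2 * np.arange(3000))         # stand-in for the measured time series
states = np.zeros((len(u), N))
r = np.zeros(N)
for t, ut in enumerate(u[:-1]):
    r = np.tanh(W @ r + W_in * ut)        # reservoir response to the data
    states[t + 1] = r                     # states[k] has seen u[0..k-1]

washout = 200                             # discard transient states
X, target = states[washout:], u[washout:] # predict u[k] from states[k]

# Linear (ridge) regression for the output weights
beta = 1e-8
W_out = np.linalg.solve(X.T @ X + beta * np.eye(N), X.T @ target)

nrmse = np.sqrt(np.mean((X @ W_out - target) ** 2)) / np.std(target)
```

Feeding the readout `X @ W_out` back in place of the input turns the trained system into the autonomous reservoir of the abstract, whose known equations can then be differentiated for Lyapunov-exponent estimates.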
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrel, J.E.; Kucera, C.L.; Johannsen, C.J.
1980-12-01
During this contract period, research continued on finding suitable methods and criteria for determining the success of revegetation in Midwestern prime agricultural lands strip-mined for coal. Particularly important to the experimental design was the concept of reference areas, nearby fields from which the performance standards for reclaimed areas were derived. Direct and remote sensing techniques for measuring plant ground cover, production, and species composition were tested. Work was carried out at 15 mine sites permitted under interim permanent surface mine regulations and at 4 adjoining reference sites. Studies at 9 pre-law sites were continued. All sites were either in Missouri or Illinois. Data gathered in the 1980 growing season showed that 13 unmanaged or young mineland pastures generally had lower average ground cover and production than 2 reference pastures. In contrast, yields at approximately 40% of 11 recently reclaimed mine sites planted with winter wheat, soybeans, or milo were statistically similar to 3 reference values. Digital computer image analysis of color infrared aerial photographs, when compared to ground-level measurements, was a fast, accurate, and inexpensive way to determine plant ground cover and areas. But the remote sensing approach was inferior to standard surface methods for detailing plant species abundance and composition.
Meshfree simulation of avalanches with the Finite Pointset Method (FPM)
NASA Astrophysics Data System (ADS)
Michel, Isabel; Kuhnert, Jörg; Kolymbas, Dimitrios
2017-04-01
Meshfree methods are the numerical method of choice in case of applications which are characterized by strong deformations in conjunction with free surfaces or phase boundaries. In the past the meshfree Finite Pointset Method (FPM) developed by Fraunhofer ITWM (Kaiserslautern, Germany) has been successfully applied to problems in computational fluid dynamics such as water crossing of cars, water turbines, and hydraulic valves. Most recently the simulation of granular flows, e.g. soil interaction with cars (rollover), has also been tackled. This advancement is the basis for the simulation of avalanches. Due to the generalized finite difference formulation in FPM, the implementation of different material models is quite simple. We will demonstrate 3D simulations of avalanches based on the Drucker-Prager yield criterion as well as the nonlinear barodesy model. The barodesy model (Division of Geotechnical and Tunnel Engineering, University of Innsbruck, Austria) describes the mechanical behavior of soil by an evolution equation for the stress tensor. The key feature of successful and realistic simulations of avalanches - apart from the numerical approximation of the occurring differential operators - is the choice of the boundary conditions (slip, no-slip, friction) between the different phases of the flow as well as the geometry. We will discuss their influences for simplified one- and two-phase flow examples. This research is funded by the German Research Foundation (DFG) and the FWF Austrian Science Fund.
New approximate orientation averaging of the water molecule interacting with the thermal neutron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Markovic, M.I.; Minic, D.M.; Rakic, A.D.
1992-02-01
This paper reports that, to describe thermal neutron collisions with water molecules exactly, orientation averaging is performed by an exact method (EOA_k) and four approximate methods (two well known and two less known). Expressions for the microscopic scattering kernel are developed. The two well-known approximate orientation averaging methods are Krieger-Nelkin (K-N) and Koppel-Young (K-Y). The results obtained by one of the two proposed approximate orientation averaging methods agree best with the corresponding results obtained by EOA_k. The largest discrepancies between the EOA_k results and the results of the approximate methods are obtained using the well-known K-N approximate orientation averaging method.
Detection of Road Surface States from Tire Noise Using Neural Network Analysis
NASA Astrophysics Data System (ADS)
Kongrattanaprasert, Wuttiwat; Nomura, Hideyuki; Kamakura, Tomoo; Ueda, Koji
This report proposes a new processing method for automatically detecting the state of road surfaces from the tire noise of passing vehicles. In addition to multiple indicators of the signal features in the frequency domain, we propose a few feature indicators in the time domain to successfully classify the road states into four categories: snowy, slushy, wet, and dry. The method is based on artificial neural networks. The proposed classification is carried out by multiple neural networks using learning vector quantization. The outcomes of the networks are then integrated by a voting decision-making scheme. Experimental results obtained from signals recorded over ten days in the snowy season demonstrated that an accuracy of approximately 90% can be attained for predicting road surface states using only tire noise data.
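The voting integration step can be sketched as follows; the tie-breaking order is an assumption made for illustration, not taken from the report:

```python
from collections import Counter

STATES = ["snowy", "slushy", "wet", "dry"]   # the four road-state categories

def vote(predictions):
    """Majority vote over the per-network predictions; ties are broken by
    the order of STATES (an illustrative choice, not the report's rule)."""
    counts = Counter(predictions)
    best = max(counts.values())
    for state in STATES:
        if counts.get(state, 0) == best:
            return state

# Five networks disagree; the majority decision wins
decision = vote(["wet", "dry", "wet", "slushy", "wet"])
```

Combining several independently trained networks this way tends to be more robust than any single network, since uncorrelated errors are voted down.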
A Consistent Set of Oxidation Number Rules for Intelligent Computer Tutoring
NASA Astrophysics Data System (ADS)
Holder, Dale A.; Johnson, Benny G.; Karol, Paul J.
2002-04-01
We have developed a method for assigning oxidation numbers that eliminates the inconsistencies and ambiguities found in most conventional textbook rules, yet remains simple enough for beginning students to use. It involves imposition of a two-level hierarchy on a set of rules similar to those already being taught. We recommend emphasizing that the oxidation number method is an approximate model and cannot always be successfully applied. This proper perspective will lead students to apply the rules more carefully in all problems. Whenever failure does occur, it will indicate the limitations of the oxidation number concept itself, rather than merely the failure of a poorly constructed set of rules. We have used these improved rules as the basis for an intelligent tutoring program on oxidation numbers.
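A toy sketch of the two ideas in the abstract: a prioritized rule list, plus the recognition that the rules may leave one element to be determined by charge balance. The rule set here is deliberately minimal and is not the authors' full hierarchy (e.g. hydrides and peroxides would need extra higher-priority rules):

```python
from fractions import Fraction

# Rule hierarchy, highest priority first; each entry fixes one element.
RULES = [("F", -1), ("H", +1), ("O", -2)]

def oxidation_numbers(formula, charge=0):
    """formula: element -> atom count.  Elements not covered by the rules
    (at most one) are assigned by charge balance."""
    fixed = dict(RULES)
    numbers = {el: fixed[el] for el in formula if el in fixed}
    unknown = [el for el in formula if el not in fixed]
    assert len(unknown) <= 1, "rules leave more than one element open"
    assigned = sum(numbers[el] * formula[el] for el in numbers)
    if unknown:
        el = unknown[0]
        numbers[el] = Fraction(charge - assigned, formula[el])
    return numbers

# Sulfate, SO4^2-: oxygen is -2 by rule, sulfur follows from charge balance
sulfate = oxidation_numbers({"S": 1, "O": 4}, charge=-2)
```

When the rules over- or under-determine a species, the assertion fires, which mirrors the pedagogical point that the model cannot always be applied and that failure signals the limits of the concept rather than a student error.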
Exploiting symmetries in the modeling and analysis of tires
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Andersen, C. M.; Tanner, John A.
1989-01-01
A computational procedure is presented for reducing the size of the analysis models of tires having unsymmetric material, geometry and/or loading. The two key elements of the procedure when applied to anisotropic tires are: (1) decomposition of the stiffness matrix into the sum of orthotropic and nonorthotropic parts; and (2) successive application of the finite-element method and the classical Rayleigh-Ritz technique. The finite-element method is first used to generate a few global approximation vectors (or modes). Then the amplitudes of these modes are computed using the Rayleigh-Ritz technique. The proposed technique has high potential for handling practical tire problems with anisotropic materials, unsymmetric imperfections and asymmetric loading. It is also particularly useful with three-dimensional finite-element models of tires.
NASA Astrophysics Data System (ADS)
Zhang, Lu; Cheng, Li; Bai, Suo; Su, Chen; Chen, Xiaobo; Qin, Yong
2015-01-01
Ultrafine organic nanowire arrays (ONWAs) with a controlled direction were successfully fabricated by a novel one-step Faraday cage assisted plasma etching method. The mechanism of formation of nanowire arrays is proposed; the obliquity and aspect ratio can be accurately controlled from approximately 0° to 90° via adjusting the angle of the sample and the etching time, respectively. In addition, the ONWAs were further utilized to improve the output of the triboelectric nanogenerator (TENG). Compared with the output of TENG composed of vertical ONWAs, the open-circuit voltage, short-circuit current and inductive charges were improved by 73%, 150% and 98%, respectively. This research provides a convenient and practical method to fabricate ONWAs with various obliquities on different materials, which can be used for energy harvesting.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared, and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and on taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest, and the number of design points at which the approximation is sought.
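One of the ideas above, eigenvalue reanalysis via a Rayleigh-quotient approximation, can be illustrated on a toy symmetric matrix (the paper treats general non-hermitian matrices; this sketch and all of its numbers are invented for illustration): evaluating the Rayleigh quotient of the modified matrix with the *unmodified* eigenvector gives an eigenvalue estimate accurate to second order in the modification, with no derivatives or re-factorization.

```python
import numpy as np

A = np.diag([1.0, 2.0, 4.0])
v = np.array([1.0, 0.0, 0.0])            # eigenvector of A for eigenvalue 1.0
dA = 0.01 * np.array([[1.0, 1.0, 0.0],
                      [1.0, 1.0, 1.0],
                      [0.0, 1.0, 1.0]])  # small symmetric design modification

approx = v @ (A + dA) @ v / (v @ v)      # Rayleigh-quotient reanalysis estimate
exact = np.linalg.eigvalsh(A + dA)[0]    # the eigenvalue it approximates
# The error is O(||dA||^2 / gap), far smaller than the O(||dA||) change itself.
```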
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koumetz, Serge D., E-mail: Serge.Koumetz@univ-rouen.fr; Martin, Patrick; Murray, Hugues
Experimental results on the diffusion of grown-in beryllium (Be) in indium gallium arsenide (In{sub 0.53}Ga{sub 0.47}As) and indium gallium arsenide phosphide (In{sub 0.73}Ga{sub 0.27}As{sub 0.58}P{sub 0.42}) gas source molecular beam epitaxy alloys lattice-matched to indium phosphide (InP) can be successfully explained in terms of a combined kick-out and dissociative diffusion mechanism, involving neutral Be interstitials (Be{sub i}{sup 0}), singly positively charged gallium (Ga) and indium (In) self-interstitials (I{sub III}{sup +}), and singly positively charged Ga and In vacancies (V{sub III}{sup +}). A new numerical method of solution to the system of diffusion equations, based on finite difference approximations and Bairstow's method, is proposed.
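A generic sketch of the finite-difference side of such a calculation: explicit 1D diffusion in conservative (flux) form with zero-flux boundaries. The actual model couples several species and charge states through the kick-out and dissociative mechanisms; this single-species toy only shows the discretization idea, and all numbers are illustrative.

```python
import numpy as np

def diffuse(c, D, dx, dt, steps):
    c = c.astype(float).copy()
    for _ in range(steps):
        flux = -D * np.diff(c) / dx   # interface fluxes F_{i+1/2}; ends are zero-flux
        div = np.zeros_like(c)
        div[:-1] += flux              # F_{i+1/2} leaves cell i
        div[1:] -= flux               # F_{i-1/2} enters cell i
        c -= (dt / dx) * div          # c_i -= dt * (F_{i+1/2} - F_{i-1/2}) / dx
    return c

c0 = np.zeros(50)
c0[25] = 1.0                          # implanted dose concentrated in one cell
c1 = diffuse(c0, D=1.0, dx=1.0, dt=0.25, steps=100)  # dt <= dx^2/(2D) for stability
# the profile spreads and flattens while the total dose c1.sum() is conserved
```

The flux form makes conservation exact by construction: every interface flux appears once with each sign, so the sum over cells never changes.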
Araya, A; Telada, S; Tochikubo, K; Taniguchi, S; Takahashi, R; Kawabe, K; Tatsumi, D; Yamazaki, T; Kawamura, S; Miyoki, S; Moriwaki, S; Musha, M; Nagano, S; Fujimoto, M K; Horikoshi, K; Mio, N; Naito, Y; Takamori, A; Yamamoto, K
1999-05-01
A new method has been demonstrated for absolute-length measurements of a long-baseline Fabry-Perot cavity by use of phase-modulated light. This method is based on determination of a free spectral range (FSR) of the cavity from the frequency difference between a carrier and phase-modulation sidebands, both of which resonate in the cavity. The sensitive response of the Fabry-Perot cavity near resonant frequencies ensures accurate determination of the FSR and thus of the absolute length of the cavity. This method was applied to a 300-m Fabry-Perot cavity of the TAMA gravitational wave detector that is being developed at the National Astronomical Observatory, Tokyo. With a modulation frequency of approximately 12 MHz, we successfully determined the absolute cavity length with a resolution of 1 micrometer (3 x 10(-9) in strain) and observed local ground strain variations of 6 x 10(-8).
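A back-of-the-envelope check of the abstract's numbers, assuming the standard relation FSR = c / (2L) for a two-mirror cavity of length L:

```python
# Free spectral range of a 300-m cavity and where a ~12 MHz modulation lands.
c = 299_792_458.0      # speed of light, m/s
L = 300.0              # TAMA arm cavity length, m
fsr = c / (2 * L)      # free spectral range, Hz (about 499.654 kHz)
n_fsr = 12e6 / fsr     # a ~12 MHz modulation sits near the 24th FSR,
                       # so carrier and sidebands can resonate simultaneously
```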
On simulation of no-slip condition in the method of discrete vortices
NASA Astrophysics Data System (ADS)
Shmagunov, O. A.
2017-10-01
When modeling flows of an incompressible fluid, it is sometimes convenient to use the method of discrete vortices (MDV), in which the continuous vorticity field is approximated by a set of discrete vortex elements moving in the velocity field. The vortex elements have a clear physical interpretation; they do not require the construction of grids and are automatically adaptive, since they concentrate in the regions of greatest interest, and they successfully describe the flows of an inviscid fluid. The possibility of using MDV to simulate flows of a viscous fluid was considered in previous papers using the examples of flows past bodies with sharp edges, with the no-penetration condition imposed at solid boundaries. However, the generation of vorticity on smooth boundaries requires the no-slip condition to be met when MDV is used, which substantially complicates the initially simple method. In this connection, an approach is considered that allows solving the problem by simple means.
A solution to the problem of elastic half-plane with a cohesive edge crack
NASA Astrophysics Data System (ADS)
Thanh, Le Thi; Belaya, L. A.; Lavit, I. M.
2018-03-01
This paper considers the problem of extension of an elastic half-plane slackened by a rectilinear edge crack. The opposite edges of the crack are attracted to each other. The intensity of attracting forces – the forces of cohesion – depends on displacements of the edges; this dependence is nonlinear in the general case. External load and cohesive forces are related to each other by the condition of finite stresses at the crack tip. The authors apply Picard’s method of successive approximation. In each iteration, Irwin’s method is used to solve the problem of a half-plane with a crack, the edges of which are subjected to irregularly distributed load. The solution of the resulting integral equation is found by Galerkin’s method. The paper includes examples of calculations and their results. Some of them are compared with the data of previous studies.
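Picard's method of successive approximations, stripped to its essentials, is a fixed-point iteration; here a toy scalar contraction stands in for the paper's coupling between cohesive forces and edge displacements (the example function and tolerances are invented for illustration):

```python
import math

def picard(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive approximations agree."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)                  # next approximation from the previous one
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

# g is a contraction (|g'| <= 0.5), so the iteration converges from any start.
root = picard(lambda x: 0.5 * math.cos(x), 0.0)
```

In the paper's setting, each iterate plays the role of the load distribution fed into the next half-plane crack solve; convergence of the iterates delivers the self-consistent cohesive solution.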
Computational prediction of muon stopping sites using ab initio random structure searching (AIRSS)
NASA Astrophysics Data System (ADS)
Liborio, Leandro; Sturniolo, Simone; Jochym, Dominik
2018-04-01
The stopping site of the muon in a muon-spin relaxation experiment is in general unknown. There are some techniques that can be used to guess the muon stopping site, but they often rely on approximations and are not generally applicable to all cases. In this work, we propose a purely theoretical method to predict muon stopping sites in crystalline materials from first principles. The method is based on a combination of ab initio calculations, random structure searching, and machine learning, and it has successfully predicted the MuT and MuBC stopping sites of muonium in Si, diamond, and Ge, as well as the muonium stopping site in LiF, without any recourse to experimental results. The method makes use of Soprano, a Python library developed to aid ab initio computational crystallography, that was publicly released and contains all the software tools necessary to reproduce our analysis.
The Parker-Sochacki Method of Solving Differential Equations: Applications and Limitations
NASA Astrophysics Data System (ADS)
Rudmin, Joseph W.
2006-11-01
The Parker-Sochacki method is a powerful but simple technique for solving systems of differential equations, giving either analytical or numerical results. It has been in use for about 10 years since its discovery by G. Edgar Parker and James Sochacki of the James Madison University Dept. of Mathematics and Statistics. It is presented here because it is still not widely known and can benefit the listeners. It is a method of rapidly generating the Maclaurin series to high order, non-iteratively. It has been successfully applied to more than a hundred systems of equations, including the classical many-body problem. Its advantages include its speed of calculation, its simplicity, and the fact that it uses only addition, subtraction, and multiplication. It is not just a polynomial approximation, because it yields the Maclaurin series, and therefore exhibits the advantages and disadvantages of that series. A few applications will be presented.
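A minimal sketch of the idea for the scalar ODE y' = y², y(0) = 1 (a standard textbook test case, not one of the talk's applications): because the right-hand side is polynomial, each Maclaurin coefficient follows from the previous ones by a Cauchy product, using only addition and multiplication and no iteration. The exact solution 1/(1 - t) has all coefficients equal to 1, which makes the recurrence easy to check.

```python
def parker_sochacki_y_squared(order):
    """Maclaurin coefficients of y for y' = y^2, y(0) = 1, up to given order."""
    a = [1.0]                                   # a_0 = y(0)
    for n in range(order):
        # Coefficient of t^n in y^2 is the Cauchy product sum a_k * a_{n-k};
        # matching powers in y' = y^2 gives (n+1) * a_{n+1} = (y^2)_n.
        conv = sum(a[k] * a[n - k] for k in range(n + 1))
        a.append(conv / (n + 1))
    return a

coeffs = parker_sochacki_y_squared(10)          # -> [1.0, 1.0, ..., 1.0]
```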
Cho, Yunju; Choi, Man-Ho; Kim, Byungjoo; Kim, Sunghwan
2016-04-29
An experimental setup for the speciation of compounds by hydrogen/deuterium exchange (HDX) with atmospheric pressure ionization while performing chromatographic separation is presented. The proposed experimental setup combines a high-performance supercritical fluid chromatography (SFC) system that can be readily used as an inlet for mass spectrometry (MS) with atmospheric pressure photoionization (APPI) or atmospheric pressure chemical ionization (APCI) HDX. This combination overcomes the limitation of an approach using conventional liquid chromatography (LC) by minimizing the amount of deuterated solvents used for separation. In the SFC separation, supercritical CO2 was used as the major component of the mobile phase, and methanol was used as a minor co-solvent. By using deuterated methanol (CH3OD), AP HDX was achieved during SFC separation. To prove the concept, thirty-one nitrogen- and/or oxygen-containing standard compounds were analyzed by SFC-AP HDX MS. The compounds were successfully speciated from the obtained SFC-MS spectra. The exchange ions were observed with as little as 1% of CH3OD in the mobile phase, and separation could be performed within approximately 20 min using approximately 0.24 mL of CH3OD. The results showed that SFC separation and APPI/APCI HDX could be successfully performed using the suggested method. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herbert, John M.
1997-01-01
Rayleigh-Schroedinger perturbation theory is an effective and popular tool for describing low-lying vibrational and rotational states of molecules. This method, in conjunction with ab initio techniques for computation of electronic potential energy surfaces, can be used to calculate first-principles molecular vibrational-rotational energies to successive orders of approximation. Because of mathematical complexities, however, such perturbation calculations are rarely extended beyond the second order of approximation, although recent work by Herbert has provided a formula for the nth-order energy correction. This report extends that work and furnishes the remaining theoretical details (including a general formula for the Rayleigh-Schroedinger expansion coefficients) necessary for calculation of energy corrections to arbitrary order. The commercial computer algebra software Mathematica is employed to perform the prohibitively tedious symbolic manipulations necessary for derivation of generalized energy formulae in terms of universal constants, molecular constants, and quantum numbers. As a pedagogical example, a Hamiltonian operator tailored specifically to diatomic molecules is derived, and the perturbation formulae obtained from this Hamiltonian are evaluated for a number of such molecules. This work provides a foundation for future analyses of polyatomic molecules, since it demonstrates that arbitrary-order perturbation theory can successfully be applied with the aid of commercially available computer algebra software.
Circulating microRNAs as biomarkers of early embryonic viability in cattle
USDA-ARS?s Scientific Manuscript database
Embryonic mortality (EM) is considered to be the primary factor limiting pregnancy success in cattle and occurs early (< day 28) or late (≥ day 28) during gestation. The incidence of early EM in cattle is approximately 25% while late EM is approximately 3.2 to 42.7%. In cattle, real time ultrasonog...
NASA Technical Reports Server (NTRS)
Desmarais, R. N.
1982-01-01
The method is capable of generating approximations of arbitrary accuracy. It is based on approximating the algebraic part of the nonelementary integrals in the kernel by exponential functions and then integrating termwise. The exponent spacing in the approximation is a geometric sequence. The coefficients and exponent multiplier of the exponential approximation are computed by least squares, so the method is completely automated. Exponential approximations generated in this manner are two orders of magnitude more accurate than the exponential approximation that is currently most often used for this purpose. The method can be used to generate approximations that attain any desired trade-off between accuracy and computing cost.
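A sketch of the fitting scheme just described, with an invented algebraically decaying target standing in for the kernel's algebraic part: the exponent rates form a geometric sequence, and the linear coefficients come from least squares, so the construction is fully automated. The specific target, interval, and rate sequence below are illustrative assumptions.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 400)
target = 1.0 / np.sqrt(1.0 + t)           # algebraic decay to be approximated

rates = 0.01 * (4.0 ** np.arange(8))      # geometric sequence of exponent rates
basis = np.exp(-np.outer(t, rates))       # columns exp(-b_k * t)
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)  # least-squares coefficients
fit = basis @ coef
max_err = np.max(np.abs(fit - target))    # small: a handful of exponentials suffices
```

Once the algebraic factor is a sum of exponentials, each term of the kernel integral can be integrated in closed form, which is the point of the construction.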
Application of geometric approximation to the CPMG experiment: Two- and three-site exchange.
Chao, Fa-An; Byrd, R Andrew
2017-04-01
The Carr-Purcell-Meiboom-Gill (CPMG) experiment is one of the most classical and well-known relaxation dispersion experiments in NMR spectroscopy, and it has been successfully applied to characterize biologically relevant conformational dynamics in many cases. Although the data analysis of the CPMG experiment for the 2-site exchange model can be facilitated by analytical solutions, the data analysis for a more complex exchange model generally requires computationally intensive numerical analysis. Recently, a powerful computational strategy, geometric approximation, has been proposed to provide approximate numerical solutions for the adiabatic relaxation dispersion experiments where analytical solutions are neither available nor feasible. Here, we demonstrate the general potential of geometric approximation by providing a data analysis solution of the CPMG experiment for both the traditional 2-site model and a linear 3-site exchange model. The approximate numerical solution deviates less than 0.5% from the numerical solution on average, and the new approach is computationally 60,000-fold more efficient than the numerical approach. Moreover, we find that accurate dynamic parameters can be determined in most cases, and, for a range of experimental conditions, the relaxation can be assumed to follow mono-exponential decay. The method is general and applicable to any CPMG RD experiment (e.g. N, C', Cα, Hα, etc.). The approach forms a foundation for building solution surfaces to analyze the CPMG experiment for different models of 3-site exchange. Thus, the geometric approximation is a general strategy to analyze relaxation dispersion data in any system (biological or chemical) if the appropriate library can be built in a physically meaningful domain. Published by Elsevier Inc.
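For orientation, the "numerically exact" side of such a comparison can be sketched directly: propagate the 2-site Bloch-McConnell equations through a CPMG echo train and extract an effective relaxation rate. Everything below (the parameter values, the definition of νCPMG from the echo delay) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def r2eff(nu_cpmg, pB=0.05, kex=1000.0, dw=2 * np.pi * 800.0, r2=10.0, T=0.04):
    """Effective R2 from 2-site Bloch-McConnell evolution through a CPMG train.

    Transverse magnetization of states A/B is a complex 2-vector; free
    precession uses the Bloch-McConnell matrix, and each 180-degree pulse
    is modeled as complex conjugation of the magnetization.
    """
    pA, kAB, kBA = 1 - pB, kex * pB, kex * (1 - pB)
    A = np.array([[-r2 - kAB,                  kBA],
                  [      kAB, -r2 - kBA + 1j * dw]])
    tau = 1.0 / (4.0 * nu_cpmg)                # delay in each tau-180-tau element
    w, V = np.linalg.eig(A)
    P = V @ np.diag(np.exp(w * tau)) @ np.linalg.inv(V)   # propagator over tau
    n = max(1, int(round(T / (2 * tau))))      # number of echo elements in time T
    M = np.array([pA, pB], dtype=complex)      # start at equilibrium populations
    for _ in range(n):
        M = P @ np.conj(P @ M)                 # tau, 180 pulse, tau
    return -np.log(abs(M.sum())) / (n * 2 * tau)

# Dispersion: R2eff is large at slow pulsing and refocuses at fast pulsing.
```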
Clinical Trials in Benign Prostatic Hyperplasia: A Moving Target of Success.
Thomas, Dominique; Chung, Caroline; Zhang, Yiye; Te, Alexis; Gratzke, Christian; Woo, Henry; Chughtai, Bilal
2018-05-24
Benign prostatic hyperplasia (BPH) affects over 50% of men above the age of 50 yr. With half of these men having bothersome lower urinary tract symptoms, this area represents a hotbed of novel treatments. Many BPH therapies have favorable short-term outcomes but lack durability or well-defined adverse events (AEs). Clinical trials are a gold standard for comparing treatments. We characterized all BPH clinical trials registered worldwide from inception to 2017. A total of 251 clinical trials were included. Of the studies, 30.1% used patient-reported outcomes such as the American Urological Association Symptom Score. Approximately 70% of clinical trials studied medical interventions, while the remaining trials investigated surgical approaches. Seventy-nine percent of trials were industry sponsored, while a minority were funded without commercial interest. Only 42% of trials had 12-mo follow-up, with the majority with <3 mo of follow-up. No trials evaluated prevention, diet, behavior, or alternative methods. Overall, only 23% of trials reported results. Management options for BPH need unified benchmarks of success, AEs, durability, and standard reporting for all clinical trials, regardless of outcomes. We found that the majority of clinical trials were of medical interventions, with very few trials evaluating prevention, diet, behavior, or alternative methods. Furthermore, only a few trials reported results in peer-reviewed journals. All clinical trials need to report results regardless of outcome, and standardized methods are needed in order to document the successes, adverse events, and durability of all clinical trials. Copyright © 2018 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Capacitor-Chain Successive-Approximation ADC
NASA Technical Reports Server (NTRS)
Cunningham, Thomas
2003-01-01
A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2(exp n-1) times as much capacitance, and hence, approximately 2(exp n-1) times as much area as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be 2(exp n) times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
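The successive-approximation logic shared by both architectures is a bit-by-bit binary search against the reference voltage; the capacitor chain only changes how the trial voltages are generated, not this search. A minimal behavioral model (ignoring capacitor non-idealities entirely):

```python
def sar_adc(vin, vref=1.0, nbits=8):
    """Digitize vin (0 <= vin < vref) by successive approximation."""
    code = 0
    for bit in range(nbits - 1, -1, -1):         # most significant bit first
        trial = code | (1 << bit)                # tentatively set this bit
        if trial * vref / (1 << nbits) <= vin:   # compare trial DAC level to input
            code = trial                         # keep the bit if not too high
    return code

# sar_adc(0.5) -> 128, i.e. 0.5 * 2^8: the result is within one LSB of the input.
```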
Jakobsson, Hugo; Farmaki, Katerina; Sakinis, Augustinas; Ehn, Olof; Johannsson, Gudmundur; Ragnarsson, Oskar
2018-01-01
Primary aldosteronism (PA) is a common cause of secondary hypertension. Adrenal venous sampling (AVS) is the gold standard for assessing laterality of PA, which is of paramount importance to decide adequate treatment. AVS is a technically complicated procedure with success rates ranging between 30% and 96%. The aim of this study was to investigate the success rate of AVS over time, performed by a single interventionalist. This was a retrospective study based on consecutive AVS procedures performed by a single operator between September 2005 and June 2016. Data on serum concentrations of aldosterone and cortisol from the right and left adrenal veins, inferior vena cava, and a peripheral vein were collected and the selectivity index (SI) calculated. Successful AVS was defined as SI > 5. In total, 282 AVS procedures were performed on 269 patients, 168 men (62%) and 101 women (38%), with a mean age of 55±11 years (range, 26-78 years). Out of 282 AVS procedures, 259 were successful, giving an overall success rate of 92%. The most common reason for failure was inability to localize the right adrenal vein (n=16; 76%). The success rates were 63%, 82%, and 94% during the first, second, and third years, respectively. During the last 8 years the success rate was 95%, and on average 27 procedures were performed annually. A satisfactory AVS success rate was achieved after approximately 36 procedures and was maintained by performing approximately 27 procedures annually. AVS should be limited to a few operators who perform a sufficiently large number of procedures to achieve, and maintain, a satisfactory success rate.
The TSP-approach to approximate solving the m-Cycles Cover Problem
NASA Astrophysics Data System (ADS)
Gimadi, Edward Kh.; Rykov, Ivan; Tsidulko, Oxana
2016-10-01
In the m-Cycles Cover problem it is required to find a collection of m vertex-disjoint cycles that covers all vertices of the graph such that the total weight of edges in the cover is minimum (or maximum). The problem is a generalization of the Traveling Salesman Problem. It is strongly NP-hard. We discuss a TSP-approach that gives polynomial-time approximate solutions for this problem: it transforms an approximation TSP algorithm into an approximation m-CCP algorithm. In this paper we present a number of successful transformations with proven performance guarantees for the obtained solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, W.R.; Carlson, K.E.
The Pittsburg & Midway Coal Mining Co.'s ("P&M") Midway Mine lies 50 miles south of Kansas City, Kansas, straddling the border of Kansas and Missouri. P&M actively mined the area until 1989, when the mine was closed and reclaimed. Approximately 3,750 acres of surface-mined land were topsoiled and revegetated to cool-season fescue/legume pasture. Various pasture management methods are being utilized to meet reclamation success standards and achieve final bond release. The effectiveness and costs of various cool-season fescue/legume pasture management methods are evaluated and contrasted. These methods include sharecropping, bush hogging, burning, and livestock grazing. The paper presents guidelines used to develop site-specific rotational livestock grazing programs with landowners or contractors and local, state, and federal agencies. Rotational grazing uses both cow/calf and feeder livestock operations. Key managerial elements used to control grazing activities, either by the landowner or a contractor, are reviewed. Methods used to determine stocking levels for successful rotational grazing on this type of pasture are presented. Rotational grazing of livestock has proven to be the most effective method for managing established cool-season fescue/legume pastures at this site. Initial stocking rates of 1 A.U.M. per 5 acres have been modified to a current stocking rate of 1 A.U.M. per 2.5 acres. Supporting physical and chemical data are presented and discussed.
Generalized trajectory surface hopping method based on the Zhu-Nakamura theory
NASA Astrophysics Data System (ADS)
Oloyede, Ponmile; Mil'nikov, Gennady; Nakamura, Hiroki
2006-04-01
We present a generalized formulation of the trajectory surface hopping method applicable to a general multidimensional system. The method is based on the Zhu-Nakamura theory of a nonadiabatic transition and therefore includes the treatment of classically forbidden hops. The method uses a generalized recipe for the conservation of angular momentum after forbidden hops and an approximation for determining a nonadiabatic transition direction which is crucial when the coupling vector is unavailable. This method also eliminates the need for a rigorous location of the seam surface, thereby ensuring its applicability to a wide class of chemical systems. In a test calculation, we implement the method for the DH2+ system, and it shows a remarkable agreement with the previous results of C. Zhu, H. Kamisaka, and H. Nakamura, [J. Chem. Phys. 116, 3234 (2002)]. We then apply it to a diatomic-in-molecule model system with a conical intersection, and the results compare well with exact quantum calculations. The successful application to the conical intersection system confirms the possibility of directly extending the present method to an arbitrary potential of general topology.
On the accuracy of the LSC-IVR approach for excitation energy transfer in molecular aggregates
NASA Astrophysics Data System (ADS)
Teh, Hung-Hsuan; Cheng, Yuan-Chung
2017-04-01
We investigate the applicability of the linearized semiclassical initial value representation (LSC-IVR) method to excitation energy transfer (EET) problems in molecular aggregates by simulating the EET dynamics of a dimer model over a wide range of parameter regimes and comparing the results to those obtained from a numerically exact method. It is found that the LSC-IVR approach yields accurate population relaxation rates and decoherence rates in a broad parameter regime. However, the classical approximation imposed by the LSC-IVR method does not satisfy the detailed balance condition, generally leading to incorrect equilibrium populations. Based on this observation, we propose a post-processing algorithm to solve the long-time equilibrium problem and demonstrate that this long-time correction removes the deviations from exact results in all of the regimes studied in this work. Finally, we apply the LSC-IVR method to simulate EET dynamics in the photosynthetic Fenna-Matthews-Olson complex, demonstrating that the LSC-IVR method with long-time correction provides an excellent description of coherent EET dynamics in this typical photosynthetic pigment-protein complex.
Characteristics of Successful Small and Micro Community Enterprises in Rural Thailand
ERIC Educational Resources Information Center
Ruengdet, Kamon; Wongsurawat, Winai
2010-01-01
This research aims to articulate the most salient factors that set apart successful small and micro community enterprises in the province of Phetchaburi, Thailand. The authors utilize both quantitative and qualitative research techniques. Approximately one hundred questionnaires were sent to leaders of the community enterprises. Simple statistical…
Learning topic models by belief propagation.
Zeng, Jia; Cheung, William K; Liu, Jiming
2013-05-01
Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model for probabilistic topic modeling, which attracts worldwide interest and touches on many important applications in text mining, computer vision, and computational biology. This paper represents collapsed LDA as a factor graph, which enables the classic loopy belief propagation (BP) algorithm for approximate inference and parameter estimation. Although the two commonly used approximate inference methods, variational Bayes (VB) and collapsed Gibbs sampling (GS), have gained great success in learning LDA, the proposed BP is competitive in both speed and accuracy, as validated by encouraging experimental results on four large-scale document datasets. Furthermore, the BP algorithm has the potential to become a generic scheme for learning variants of LDA-based topic models in the collapsed space. To this end, we show how to learn two typical variants of LDA-based topic models, the author-topic model (ATM) and the relational topic model (RTM), using BP based on the factor graph representations.
Parameter Estimation for a Pulsating Turbulent Buoyant Jet Using Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Christopher, Jason; Wimer, Nicholas; Lapointe, Caelan; Hayden, Torrey; Grooms, Ian; Rieker, Greg; Hamlington, Peter
2017-11-01
Approximate Bayesian Computation (ABC) is a powerful tool that allows sparse experimental or other ``truth'' data to be used for the prediction of unknown parameters, such as flow properties and boundary conditions, in numerical simulations of real-world engineering systems. Here we introduce the ABC approach and then use ABC to predict unknown inflow conditions in simulations of a two-dimensional (2D) turbulent, high-temperature buoyant jet. For this test case, truth data are obtained from a direct numerical simulation (DNS) with known boundary conditions and problem parameters, while the ABC procedure utilizes lower fidelity large eddy simulations. Using spatially-sparse statistics from the 2D buoyant jet DNS, we show that the ABC method provides accurate predictions of true jet inflow parameters. The success of the ABC approach in the present test suggests that ABC is a useful and versatile tool for predicting flow information, such as boundary conditions, that can be difficult to determine experimentally.
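The ABC idea itself fits in a few lines of rejection sampling. Here a toy Gaussian-mean problem stands in for the jet inflow parameters, and every number (prior range, tolerance, sample sizes) is an illustrative assumption: draw a parameter from the prior, run the forward model, and keep the draw only if its summary statistic lands close to the observed "truth" statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mu = 3.0
observed = rng.normal(true_mu, 1.0, size=50).mean()   # sparse "truth" statistic

accepted = []
for _ in range(20000):
    mu = rng.uniform(-10.0, 10.0)                     # draw from a wide prior
    sim = rng.normal(mu, 1.0, size=50).mean()         # forward-model summary
    if abs(sim - observed) < 0.2:                     # accept if close to truth
        accepted.append(mu)

posterior_mean = float(np.mean(accepted))             # concentrates near true_mu
```

The accepted draws approximate the posterior over the unknown parameter; shrinking the tolerance trades acceptance rate for posterior accuracy, which is the same trade faced when matching sparse DNS statistics with lower-fidelity simulations.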
Full-Scale Passive Earth Entry Vehicle Landing Tests: Methods and Measurements
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Kellas, Sotiris
2018-01-01
During the summer of 2016, a series of drop tests were conducted on two passive earth entry vehicle (EEV) test articles at the Utah Test and Training Range (UTTR). The tests were conducted to evaluate the structural integrity of a realistic EEV under anticipated landing loads. The test vehicles were lifted to an altitude of approximately 400 m via helicopter and released via a release hook into a predesignated 61 m landing zone. Onboard accelerometers measured vehicle free flight and impact loads. High-speed cameras on the ground tracked the free-falling vehicles, and the data were used to calculate critical impact parameters during the final seconds of flight. Additional sets of high-definition and ultra-high-definition cameras supplemented the high-speed data by capturing the release and free flight of the test articles. Three tests were successfully completed and showed that the passive vehicle design was able to withstand the impact loads from nominal and off-nominal impacts at landing velocities of approximately 29 m/s. Two out of three tests resulted in off-nominal impacts due to a combination of high winds at altitude and the method used to suspend the vehicle from the helicopter. Both the video and the acceleration data captured are examined and discussed. Finally, recommendations for improved release and instrumentation methods are presented.
Design of multi-body Lambert type orbits with specified departure and arrival positions
NASA Astrophysics Data System (ADS)
Ishii, Nobuaki; Kawaguchi, Jun'ichiro; Matsuo, Hiroki
1991-10-01
A new procedure for designing a multi-body Lambert type orbit comprising a multiple swingby process is developed, aiming at relieving a numerical difficulty inherent to the highly nonlinear swingby mechanism. The proposed algorithm, Recursive Multi-Step Linearization, first divides the whole orbit into several trajectory segments. Then, making maximum use of piecewise transition matrices, the segmented orbit is repeatedly upgraded until an approximate orbit initially based on a patched-conics method eventually converges. In an application to the four-body Earth-Moon system with the Sun's gravitation, one of the double lunar swingby orbits, including 12 lunar swingbys, is successfully designed without any velocity mismatch.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. Diane Schaub
2007-03-05
Since its inception, the University of Florida Industrial Assessment Center has successfully completed close to 400 energy assessments of small to medium manufacturing facilities in Florida, southern Georgia, and southern Alabama. Through these efforts, recommendations were made that would result in savings of about $5 million per year, with an implementation rate of 20-25%. Approximately 80 engineering students have worked for the UF-IAC, at least 10 of whom went on to work in energy-related fields after graduation. Additionally, through the popular course in Industrial Energy Management, many students have graduated from the University of Florida with a strong understanding and support of energy conservation methods.
Models for Models: An Introduction to Polymer Models Employing Simple Analogies
NASA Astrophysics Data System (ADS)
Tarazona, M. Pilar; Saiz, Enrique
1998-11-01
An introduction to the most common models used in the calculations of conformational properties of polymers, ranging from the freely jointed chain approximation to Monte Carlo or molecular dynamics methods, is presented. Mathematical formalism is avoided and simple analogies, such as human chains, gases, opinion polls, or marketing strategies, are used to explain the different models presented. A second goal of the paper is to teach students how models required for the interpretation of a system can be elaborated, starting with the simplest model and introducing successive improvements until the refinements become so sophisticated that it is much better to use an alternative approach.
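The freely jointed chain analogy translates directly into a short Monte Carlo: draw fixed-length bond vectors in independent random directions (the "line of blindfolded people holding hands" picture) and check the textbook prediction that the mean-square end-to-end distance is N l². The chain length and sample count below are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def end_to_end_sq(n_bonds, bond_len=1.0):
    """Squared end-to-end distance of one freely jointed chain realization."""
    steps = rng.normal(size=(n_bonds, 3))                        # random directions
    steps *= bond_len / np.linalg.norm(steps, axis=1, keepdims=True)  # fixed length
    return np.sum(steps.sum(axis=0) ** 2)                        # |R|^2

n_bonds = 100
r2 = np.mean([end_to_end_sq(n_bonds) for _ in range(3000)])
# r2 / n_bonds is a Monte Carlo estimate of <R^2> / (N l^2), close to 1
```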
Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization
NASA Technical Reports Server (NTRS)
Pinson, Robin; Lu, Ping
2015-01-01
This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.
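The successive solution idea, replacing a nonconvex problem by a sequence of convex subproblems linearized about the current iterate, can be illustrated on a toy scalar objective. This is a sketch of the general principle only; the paper's relaxed optimal control problems are far richer:

```python
import numpy as np

# Toy successive-convexification loop: minimize f(x) = x**2 + sin(x).
# The nonconvex term sin(x) is linearized about the current iterate, so each
# subproblem is a convex quadratic with a closed-form minimizer.
def successive_convexification(x0, max_iter=100, tol=1e-10):
    x = x0
    for _ in range(max_iter):
        # Convex subproblem: x**2 + sin(x_k) + cos(x_k) * (x - x_k);
        # its minimizer satisfies 2x + cos(x_k) = 0.
        x_new = -np.cos(x) / 2.0
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x_star = successive_convexification(1.0)
# At convergence the true stationarity condition f'(x) = 2x + cos(x) = 0 holds.
print(x_star, 2 * x_star + np.cos(x_star))
```

The fixed point of the convex subproblems is a stationary point of the original nonconvex objective, which is the property the successive solution process relies on.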
Hamilton-Jacobi formalism to warm inflationary scenario
NASA Astrophysics Data System (ADS)
Sayar, K.; Mohammadi, A.; Akhtari, L.; Saaidi, Kh.
2017-01-01
The Hamilton-Jacobi formalism, as a powerful method, is utilized to reconsider the warm inflationary scenario, where the scalar field, as the main component driving inflation, interacts with other fields. Separating the analysis into strong and weak dissipative regimes, the study is carried out for two popular functional forms of the dissipation coefficient Γ. Applying the slow-roll approximation, the required perturbation parameters are extracted and, by comparison with the latest Planck data, the free parameters are constrained. The possibility of producing an acceptable inflation is studied, and the results show that in all cases the model can successfully reproduce the amplitude of the scalar perturbation, the scalar spectral index, its running, and the tensor-to-scalar ratio.
Fault detection and diagnosis in an industrial fed-batch cell culture process.
Gunther, Jon C; Conner, Jeremy S; Seborg, Dale E
2007-01-01
A flexible process monitoring method was applied to industrial pilot plant cell culture data for the purpose of fault detection and diagnosis. Data from 23 batches, 20 normal operating conditions (NOC) and three abnormal, were available. A principal component analysis (PCA) model was constructed from 19 NOC batches, and the remaining NOC batch was used for model validation. Subsequently, the model was used to successfully detect (both offline and online) abnormal process conditions and to diagnose the root causes. This research demonstrates that data from a relatively small number of batches (approximately 20) can still be used to monitor for a wide range of process faults.
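A minimal sketch of PCA-based monitoring of this kind is given below, using synthetic stand-in data rather than the pilot-plant batches; the variable count, control limit, and fault magnitude are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for NOC data: correlated 5-variable measurements
# driven by 2 latent factors (the real cell culture data are not available).
latent = rng.normal(size=(190, 2))
mixing = rng.normal(size=(2, 5))
X_train = latent @ mixing + 0.05 * rng.normal(size=(190, 5))

# Fit PCA on mean-centered NOC data via SVD; keep 2 components.
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
P = Vt[:2].T                      # loadings (5 x 2)

def spe(x):
    """Squared prediction error (Q statistic): residual after projection."""
    r = (x - mu) - P @ (P.T @ (x - mu))
    return float(r @ r)

# Simple control limit: 1.5x the largest SPE seen in training (practitioners
# more commonly use a chi-squared approximation).
limit = 1.5 * max(spe(x) for x in X_train)

normal_sample = latent[0] @ mixing + 0.05 * rng.normal(size=5)
faulty_sample = normal_sample + np.array([0.0, 0.0, 3.0, 0.0, 0.0])  # sensor bias

print(spe(normal_sample) <= limit, spe(faulty_sample) > limit)
```

A sample consistent with the NOC correlation structure stays below the limit, while the biased sensor leaves the principal subspace and is flagged.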
Predictive sensor method and apparatus
NASA Technical Reports Server (NTRS)
Cambridge, Vivien J.; Koger, Thomas L.
1993-01-01
A microprocessor and electronics package employing predictive methodology was developed to accelerate the response time of slowly responding hydrogen sensors. The system developed improved sensor response time from approximately 90 seconds to 8.5 seconds. The microprocessor works in real-time providing accurate hydrogen concentration corrected for fluctuations in sensor output resulting from changes in atmospheric pressure and temperature. Following the successful development of the hydrogen sensor system, the system and predictive methodology was adapted to a commercial medical thermometer probe. Results of the experiment indicate that, with some customization of hardware and software, response time improvements are possible for medical thermometers as well as other slowly responding sensors.
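The predictive idea can be sketched by assuming the sensor is first order, tau·dy/dt = y_ss − y, so the steady-state value can be extrapolated from early samples. The time constant and concentration below are made-up illustrative values, not the flight hardware's:

```python
import numpy as np

tau = 90.0          # assumed sensor time constant, seconds
y_ss_true = 4.2     # hypothetical true hydrogen concentration (arbitrary units)

t = np.arange(0.0, 10.0, 0.5)              # only the first 10 s of the response
y = y_ss_true * (1.0 - np.exp(-t / tau))   # simulated slow sensor output

# Estimate the derivative with a finite difference and predict the steady
# state from the first-order model: y_ss = y + tau * dy/dt.
dydt = np.gradient(y, t)
y_pred = y + tau * dydt

print(y[-1], y_pred[-1])   # raw reading is still far from 4.2; prediction is close
```

After 10 s the raw sensor has covered only about 10% of its step, while the model-based prediction is already at the final value, which is the essence of the response-time improvement reported.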
Analysis of simple 2-D and 3-D metal structures subjected to fragment impact
NASA Technical Reports Server (NTRS)
Witmer, E. A.; Stagliano, T. R.; Spilker, R. L.; Rodal, J. J. A.
1977-01-01
Theoretical methods were developed for predicting the large-deflection elastic-plastic transient structural responses of metal containment or deflector (C/D) structures to cope with rotor burst fragment impact attack. For two-dimensional C/D structures, both finite element and finite difference analysis methods were employed to analyze structural response produced by either prescribed transient loads or fragment impact. For the latter category, two time-wise step-by-step analysis procedures were devised to predict the structural responses resulting from a succession of fragment impacts: the collision force method (CFM), which utilizes an approximate prediction of the force applied to the attacked structure during fragment impact, and the collision imparted velocity method (CIVM), in which the impact-induced velocity increment acquired by a region of the impacted structure near the impact point is computed. The merits and limitations of these approaches are discussed. For the analysis of 3-D responses of C/D structures, only the CIVM approach was investigated.
Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories
NASA Astrophysics Data System (ADS)
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T = 2.33 T_C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tanaka, Masahiko, E-mail: masahiko@spring8.or.jp; Katsuya, Yoshio, E-mail: katsuya@spring8.or.jp; Sakata, Osami, E-mail: SAKATA.Osami@nims.go.jp
2016-07-27
The focused-beam flat-sample method (FFM) is a new approach to synchrotron powder diffraction that combines beam-focusing optics, a flat powder sample, and area detectors. The method has advantages for X-ray diffraction experiments exploiting the anomalous scattering effect (anomalous diffraction) because of (1) absorption correction without approximation, (2) the high intensity of the focused incident beam and the high signal-to-noise ratio of the diffracted X-rays, and (3) rapid data collection with area detectors. We applied the FFM to anomalous diffraction experiments and collected synchrotron X-ray powder diffraction data of CoFe{sub 2}O{sub 4} (inverse spinel structure) using X-rays near the Fe K absorption edge, which can distinguish Co and Fe by the anomalous scattering effect. We conducted Rietveld analyses with the obtained powder diffraction data and successfully determined the distribution of Co and Fe ions in the CoFe{sub 2}O{sub 4} crystal structure.
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Elliott, J. P.; Spreiter, J. R.
1983-01-01
An investigation was conducted to continue the development of perturbation procedures and associated computational codes for rapidly determining approximations to nonlinear flow solutions, with the purpose of establishing a method for minimizing computational requirements associated with parametric design studies of transonic flows in turbomachines. The results reported here concern the extension of the previously developed successful method for single-parameter perturbations to simultaneous multiple-parameter perturbations, and the preliminary application of the multiple-parameter procedure in combination with an optimization method to a blade design/optimization problem. In order to provide as severe a test as possible of the method, attention is focused in particular on transonic flows which are highly supercritical. Flows past both isolated blades and compressor cascades, involving simultaneous changes in both flow and geometric parameters, are considered. Comparisons with the corresponding exact nonlinear solutions display remarkable accuracy and range of validity, in direct correspondence with previous results for single-parameter perturbations.
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing have shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, and gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.
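An adaptive step gradient descent of the kind described can be sketched generically: grow the step after an improving move, shrink it after a worsening one. The quadratic objective below is a stand-in for the correlation-performance feedback, not the actual OT-MACH metric:

```python
import numpy as np

def adaptive_gradient_descent(f, grad, x0, step=0.1, grow=1.2, shrink=0.5,
                              max_iter=500):
    """Gradient descent with multiplicative step-size adaptation."""
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(max_iter):
        x_new = x - step * grad(x)
        f_new = f(x_new)
        if f_new < fx:              # improvement: accept move, enlarge step
            x, fx = x_new, f_new
            step *= grow
        else:                       # worse: reject move, shrink step
            step *= shrink
    return x

# Stand-in objective over three parameters (alpha, beta, gamma):
f = lambda p: (p[0] - 1.0)**2 + 10.0 * (p[1] + 2.0)**2 + (p[2] - 0.5)**2
grad = lambda p: np.array([2 * (p[0] - 1.0), 20.0 * (p[1] + 2.0),
                           2 * (p[2] - 0.5)])

p_opt = adaptive_gradient_descent(f, grad, [0.0, 0.0, 0.0])
print(p_opt)   # approaches (1, -2, 0.5)
```

The accept/reject safeguard makes the scheme robust to a poorly chosen initial step, which is what lets an automated generator run unattended.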
NASA Technical Reports Server (NTRS)
Shollenberger, C. A.; Smyth, D. N.
1978-01-01
A nonlinear, nonplanar three dimensional jet flap analysis, applicable to the ground effect problem, is presented. Lifting surface methodology is developed for a wing with arbitrary planform operating in an inviscid and incompressible fluid. The classical, infinitely thin jet flap model is employed to simulate power induced effects. An iterative solution procedure is applied within the analysis to successively approximate the jet shape until a converged solution is obtained which closely satisfies jet and wing boundary conditions. Solution characteristics of the method are discussed and example results are presented for unpowered, basic powered and complex powered configurations. Comparisons between predictions of the present method and experimental measurements indicate that the interaction of the jet with the ground plane is important in the analysis of powered lift systems operating in ground proximity. Further development of the method is suggested in the areas of improved solution convergence, more realistic modeling of jet impingement and calculation efficiency enhancements.
New approach application of data transformation in mean centering of ratio spectra method
NASA Astrophysics Data System (ADS)
Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.
2015-05-01
Most mean centering of ratio spectra (MCR) methods are designed for data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time a normal distribution was simply assumed, and if a confidence interval included a negative value, it was cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained; taking the logarithm of each data point is one frequently used transformation. As a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean has been successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with those of a reported HPLC method.
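The log transformation and the resulting geometric mean can be shown in a few lines (the values are illustrative, not the spectral data of the paper):

```python
import numpy as np

# Averaging in log space and transforming back yields the geometric mean,
# which is far less sensitive to large values in skewed data than the
# arithmetic mean.
data = np.array([1.0, 10.0, 100.0])

geometric_mean = np.exp(np.mean(np.log(data)))
arithmetic_mean = np.mean(data)

print(geometric_mean, arithmetic_mean)   # 10.0 vs 37.0
```

For this strongly skewed triple the geometric mean (10.0) sits at the distribution's natural center, while the arithmetic mean (37.0) is pulled toward the outlier.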
TMDIM: an improved algorithm for the structure prediction of transmembrane domains of bitopic dimers
NASA Astrophysics Data System (ADS)
Cao, Han; Ng, Marcus C. K.; Jusoh, Siti Azma; Tai, Hio Kuan; Siu, Shirley W. I.
2017-09-01
α-Helical transmembrane proteins are the most important drug targets in rational drug development. However, solving the experimental structures of these proteins remains difficult, therefore computational methods to accurately and efficiently predict the structures are in great demand. We present an improved structure prediction method TMDIM based on Park et al. (Proteins 57:577-585, 2004) for predicting bitopic transmembrane protein dimers. Three major algorithmic improvements are introduction of the packing type classification, the multiple-condition decoy filtering, and the cluster-based candidate selection. In a test of predicting nine known bitopic dimers, approximately 78% of our predictions achieved a successful fit (RMSD <2.0 Å) and 78% of the cases are better predicted than the two other methods compared. Our method provides an alternative for modeling TM bitopic dimers of unknown structures for further computational studies. TMDIM is freely available on the web at https://cbbio.cis.umac.mo/TMDIM. Website is implemented in PHP, MySQL and Apache, with all major browsers supported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Mengyan; Xi, Xin; Gong, Cairong, E-mail: gcr@tju.edu.cn
2016-02-15
Highlights: • BiVO{sub 4} nanofibers were successfully fabricated by an electrospinning method. • PVP was used to adjust the viscosity and increase the spinnability of the electrospinning sol. • BiVO{sub 4} nanofibers were used for the degradation of MB. • Compared to submicron-sized BiVO{sub 4}, BiVO{sub 4} nanofibers show superior photocatalytic activity. - Abstract: As witnessed by X-ray powder diffraction (XRD), Raman, scanning electron microscopy (SEM) and transmission electron microscopy (TEM) studies, BiVO{sub 4} nanofibers and porous nanostructures were successfully fabricated by an electrospinning method using NH{sub 4}VO{sub 3} and Bi(NO{sub 3}){sub 3} as starting materials. Polyvinylpyrrolidinone (PVP) was used to tune the viscosity and spinnability of the electrospinning sol. The slow decomposition and combustion of the PVP matrix prevented rapid crystal growth of the BiVO{sub 4} nanostructures, leading to considerably small crystallite sizes (approximately 19.1–28.3 nm) with fewer surface defects after two hours of calcination at varying temperatures. This contributed greatly to the superior visible-light photocatalytic activity compared to the submicron-sized BiVO{sub 4} prepared in the absence of PVP.
Nonlinear functional approximation with networks using adaptive neurons
NASA Technical Reports Server (NTRS)
Tawel, Raoul
1992-01-01
A novel mathematical framework for the rapid learning of nonlinear mappings and topological transformations is presented. It is based on allowing the neuron's parameters to adapt as a function of learning. This fully recurrent adaptive neuron model (ANM) has been successfully applied to complex nonlinear function approximation problems such as the highly degenerate inverse kinematics problem in robotics.
Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett
2004-01-01
Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities that are being developed to generate accurate approximation models for the BLISS procedure will be described. The benefits of using flexible approximation models such as Kriging will be demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where a subsystem optimization cannot find a feasible design is investigated using the new flexible approximation models for the violated local constraints.
A comparison of transport algorithms for premixed, laminar steady state flames
NASA Technical Reports Server (NTRS)
Coffee, T. P.; Heimerl, J. M.
1980-01-01
The effects of different methods of approximating multispecies transport phenomena in models of premixed, laminar, steady state flames were studied. Five approximation methods that span a wide range of computational complexity were developed. Identical data for individual species properties were used for each method. Each approximation method is employed in the numerical solution of a set of five H2-O2-N2 flames. For each flame the computed species and temperature profiles, as well as the computed flame speeds, are found to be very nearly independent of the approximation method used. This does not indicate that transport phenomena are unimportant, but rather that the selection of the input values for the individual species transport properties is more important than the selection of the method used to approximate the multispecies transport. Based on these results, a sixth approximation method was developed that is computationally efficient and provides results extremely close to the most sophisticated and precise method used.
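One common multispecies approximation of intermediate complexity is the mixture-averaged diffusion coefficient. The sketch below uses the Curtiss-Hirschfelder form with made-up binary coefficients; the paper's five methods and fitted H2-O2-N2 property data are not reproduced here:

```python
import numpy as np

# Mixture-averaged diffusion coefficient (Curtiss-Hirschfelder form):
#   D_i,mix = (1 - X_i) / sum_{j != i} X_j / D_ij
# where X are mole fractions and D_ij binary diffusion coefficients.
def mixture_averaged_D(i, X, D_binary):
    others = [j for j in range(len(X)) if j != i]
    return (1.0 - X[i]) / sum(X[j] / D_binary[i][j] for j in others)

X = np.array([0.2, 0.1, 0.7])            # mole fractions of H2, O2, N2
D_binary = np.array([[0.0, 0.8, 0.7],    # D_ij in cm^2/s (illustrative values)
                     [0.8, 0.0, 0.2],
                     [0.7, 0.2, 0.0]])

D_H2_mix = mixture_averaged_D(0, X, D_binary)
print(D_H2_mix)
```

This collapses the full multicomponent matrix problem to one effective coefficient per species, which is why it is a popular middle ground between constant Lewis numbers and full multicomponent transport.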
NASA Astrophysics Data System (ADS)
Hamid, Nor Zila Abd; Adenan, Nur Hamiza; Noorani, Mohd Salmi Md
2017-08-01
Forecasting and analyzing the ozone (O3) concentration time series is important because the pollutant is harmful to health. This study is a pilot study for forecasting and analyzing the O3 time series in a Malaysian educational area, namely Shah Alam, using a chaotic approach. Through this approach, the observed hourly scalar time series is reconstructed into a multi-dimensional phase space, which is then used to forecast the future time series through the local linear approximation method. The main purpose is to forecast the high O3 concentrations. The original method performed poorly, but the improved method addressed this weakness, thereby enabling the high concentrations to be successfully forecast. The correlation coefficient between the observed and forecasted time series through the improved method is 0.9159, and both the mean absolute error and root mean squared error are low. Thus, the improved method is advantageous. The time series analysis by means of the phase space plot and the Cao method identified the presence of low-dimensional chaotic dynamics in the observed O3 time series. Results showed that at least seven factors affect the studied O3 time series, which is consistent with the listed factors from the diurnal variations investigation and the sensitivity analysis from past studies. In conclusion, the chaotic approach has successfully forecast and analyzed the O3 time series in the educational area of Shah Alam. These findings are expected to help stakeholders such as the Ministry of Education and the Department of Environment achieve better air pollution management.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...
2014-05-29
We present a multilevel Monte Carlo numerical method for simulating Coulomb collisions that is new to plasma physics and highly efficient. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau-Fokker-Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on whether the underlying discretization is Milstein or Euler-Maruyama, respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
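The level-coupling idea behind multilevel Monte Carlo can be sketched on a much simpler problem than the Landau-Fokker-Planck dynamics treated here: Euler-Maruyama paths of geometric Brownian motion, where coarse and fine levels share the same Brownian increments and a telescoping sum combines them. All parameters below are illustrative:

```python
import numpy as np

# Estimate E[S_T] for dS = r*S dt + sigma*S dW (exact value S0 * exp(r*T))
# with an MLMC telescoping sum over timestep refinements.
rng = np.random.default_rng(2)
S0, r, sigma, T = 1.0, 0.05, 0.2, 1.0

def level_estimator(level, n_samples, m=4):
    """Mean of P_fine - P_coarse, coupled through shared Brownian increments."""
    nf = m**level                       # fine-grid steps at this level
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, nf))
    Sf = np.full(n_samples, S0)
    for i in range(nf):                 # fine path
        Sf = Sf + r * Sf * dt + sigma * Sf * dW[:, i]
    if level == 0:
        return Sf.mean()
    Sc = np.full(n_samples, S0)
    dWc = dW.reshape(n_samples, nf // m, m).sum(axis=2)   # coarse increments
    for i in range(nf // m):            # coarse path, m-times-larger step
        Sc = Sc + r * Sc * (m * dt) + sigma * Sc * dWc[:, i]
    return (Sf - Sc).mean()

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
# Few samples are needed on fine levels because the corrections have
# small variance -- the source of the cost advantage.
estimate = sum(level_estimator(l, n) for l, n in enumerate([40000, 4000, 400]))
print(estimate, S0 * np.exp(r * T))
```

The correction terms shrink with refinement, so most samples are spent on the cheap coarse level; this is the mechanism that turns an O(ε⁻³) cost into roughly O(ε⁻²).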
[Molecular authentication of Jinyinhua formula granule by using allele-specific PCR].
Jiang, Chao; Tu, Li-Chan; Yuan, Yuan; Huang, Lu-Qi; Gao, Wei; Jin, Yan
2017-07-01
Traditional authentication methods can hardly verify the authenticity of the herbs in traditional Chinese medicine (TCM) formula granules, because the granules have lost all of their morphological characteristics. In this study, a new allele-specific PCR method was established for authenticating Jinyinhua formula granules (made from Lonicerae Japonicae Flos) based on an SNP site in the trnL-trnF fragment. Genomic DNA was successfully extracted from Lonicerae Japonicae Flos and its formula granules by using an improved spin-column method, and PCR was then performed with the designed primer. A specific band of approximately 110 bp was obtained only from authentic Lonicerae Japonicae Flos and its formula granules, while no band was found for the fake mixed products. In addition, the PCR product sequence was confirmed to derive from the Lonicerae Japonicae Flos trnL-trnF sequence by using BLAST. Therefore, DNA molecular authentication can make up for the limitations of character identification and microscopic identification and quickly verify the authenticity of herbs in TCM formula granules, with enormous potential for market supervision and quality control. Copyright© by the Chinese Pharmaceutical Association.
Ku, Yu-Fu; Huang, Long-Sun; Yen, Yi-Kuang
2018-02-28
Here, we provide a method and apparatus for real-time compensation of the thermal effect of single free-standing piezoresistive microcantilever-based biosensors. The sensor chip contained an on-chip fixed piezoresistor that served as a temperature sensor, and a multilayer microcantilever with an embedded piezoresistor served as a biomolecular sensor. This method employed the calibrated relationship between the resistance and the temperature of piezoresistors to eliminate the thermal effect on the sensor, including the temperature coefficient of resistance (TCR) and bimorph effect. From experimental results, the method was verified to reduce the signal of thermal effect from 25.6 μV/°C to 0.3 μV/°C, which was approximately two orders of magnitude less than that before the processing of the thermal elimination method. Furthermore, the proposed approach and system successfully demonstrated its effective real-time thermal self-elimination on biomolecular detection without any thermostat device to control the environmental temperature. This method realizes the miniaturization of an overall measurement system of the sensor, which can be used to develop portable medical devices and microarray analysis platforms.
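The calibrate-then-subtract scheme can be sketched as follows. All numbers are illustrative; only the 25.6 μV/°C scale of the thermal signal is borrowed from the abstract, and a linear resistance-temperature relation is assumed:

```python
import numpy as np

# One-time calibration data: thermal signal vs temperature (illustrative).
cal_T = np.array([20.0, 25.0, 30.0, 35.0, 40.0])         # deg C
cal_signal = 25.6e-6 * (cal_T - 25.0) + 3.0e-6            # thermal drift, V

# Linear calibration: signal_thermal(T) = a*T + b
a, b = np.polyfit(cal_T, cal_signal, 1)

def compensate(raw_signal, temperature):
    """Subtract the predicted thermal component, using the on-chip
    temperature reading, from the cantilever output in real time."""
    return raw_signal - (a * temperature + b)

# A hypothetical 10 uV biomolecular signal riding on thermal drift at 32 C:
T_now = 32.0
raw = 10.0e-6 + 25.6e-6 * (T_now - 25.0) + 3.0e-6
print(compensate(raw, T_now))   # ~10 uV, with the thermal drift removed
```

Because the correction uses the co-located temperature sensor rather than a thermostat, the measurement system stays small, in line with the portability goal stated above.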
Kadam, Shantanu; Vanka, Kumar
2013-02-15
Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. The methods which make use of binomial variables, in place of Poisson random numbers, have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), has been discussed. The new methods endeavor to solve the problem of negative numbers, by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations, in resolving the problem of negative populations. Copyright © 2012 Wiley Periodicals, Inc.
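For contrast with the accelerated methods discussed, the exact stochastic simulation algorithm (SSA) for a reversible isomerization takes only a few lines. By construction it can never drive a population negative, since each firing removes a molecule that is actually present; that is the property the Poisson- and binomial-leaping approximations strive to preserve:

```python
import numpy as np

def ssa(a0, b0, k_fwd, k_rev, t_end, rng):
    """Exact Gillespie SSA for the reversible isomerization A <-> B."""
    a, b, t = a0, b0, 0.0
    while t < t_end:
        rates = np.array([k_fwd * a, k_rev * b])
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)       # time to the next reaction
        if rng.random() < rates[0] / total:     # choose which reaction fires
            a, b = a - 1, b + 1
        else:
            a, b = a + 1, b - 1
    return a, b

rng = np.random.default_rng(3)
a, b = ssa(1000, 0, k_fwd=1.0, k_rev=1.0, t_end=5.0, rng=rng)
print(a, b)   # total count conserved; equilibrium fluctuates around a = b = 500
```

The cost is one event per reaction firing, which is exactly what makes the exact SSA impractical for large systems and motivates the leaping methods the abstract describes.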
Sontag, Angelina; Rosen, Raymond C; Litman, Heather J; Ni, Xiao; Araujo, Andre B
2013-02-01
Reliability of successful outcomes in men with erectile dysfunction (ED) on phosphodiesterase type 5 inhibitors is an important aspect of patient management. We examined reliability of successful outcomes in a large integrated dataset of randomized tadalafil trials. Success rates, time to success, subsequent success after first success, and probability of success were analyzed based on Sexual Encounter Profile questions 2 and 3. Data from 3,254 ED patients treated with tadalafil 10 mg (N = 510), 20 mg (N = 1,772), or placebo (N = 972) were pooled from 17 placebo-controlled studies. Tadalafil patients had significantly higher first-attempt success rates vs. placebo. This effect was consistent across most subgroups; however, patients with severe ED experienced a greater response to tadalafil than patients with mild-moderate ED. Approximately 80% of patients achieved successful penile insertion within two attempts with either tadalafil dose and successful intercourse within eight attempts for tadalafil 10 mg and four attempts for tadalafil 20 mg. However, approximately 70% of tadalafil-treated patients achieved successful intercourse even by the second attempt. Subsequent success rates were higher for patients with first-attempt success (81.5% for 10 mg and 86.1% for 20 mg vs. 66.2% for placebo, P < 0.001) vs. patients with later initial success (53.2% for 10 mg and 56.4% for 20 mg vs. 39.9% for placebo, P < 0.001). Among patients treated with tadalafil, intercourse success rates at early attempts were similar to rates at later attempts (i.e., attempts 5 and 10 vs. 25), although insertion success rates were significantly lower earlier in treatment. The findings affirm the reliability of successful outcomes with tadalafil treatment and that first-attempt success is a critical factor affecting subsequent outcomes. 
The results further show that even among men who did not succeed on first attempt, a substantial proportion will have successful outcomes if treatment is maintained. © 2012 International Society for Sexual Medicine.
A Spoonful of Success: Undergraduate Tutor-Tutee Interactions and Performance
ERIC Educational Resources Information Center
Marx, Jonathan; Wolf, Michelle G.; Howard, Kimberly
2016-01-01
We explore how the dynamics of the tutor-tutee relationship influence students' self-reliance and, ultimately, course performance. We examine 333 tutor and tutee pairs at a student success center at a public, comprehensive, university attended by approximately 5,000 undergraduates enrolled in more than 60 courses during spring 2015. The results…
Analytical approximate solutions for a general class of nonlinear delay differential equations.
Căruntu, Bogdan; Bota, Constantin
2014-01-01
We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
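For a pantograph-type equation the polynomial least squares idea reduces to ordinary linear least squares, because the residual is linear in the polynomial coefficients. The sketch below is an illustrative instance, not one of the paper's examples: y'(t) = -y(t) + 0.5 y(t/2), y(0) = 1, on [0, 1]:

```python
import numpy as np

deg = 8
t = np.linspace(0.0, 1.0, 50)
ks = np.arange(1, deg + 1)

# With y(t) = 1 + sum_k c_k t^k, the residual contribution of basis t^k is
#   d/dt t^k + t^k - 0.5 (t/2)^k = k t^(k-1) + t^k - 0.5 (0.5 t)^k,
# and the constant term y = 1 contributes 0 + 1 - 0.5 (moved to the RHS).
A = np.column_stack([k * t**(k - 1) + t**k - 0.5 * (0.5 * t)**k for k in ks])
rhs = -0.5 * np.ones_like(t)

# Minimize the summed squared residual on the grid: plain linear least squares.
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

def y(tt):
    return 1.0 + sum(ck * tt**k for k, ck in zip(ks, c))

# Exact residual of the fitted polynomial on the grid:
dy = sum(k * ck * t**(k - 1) for k, ck in zip(ks, c))
residual = dy + y(t) - 0.5 * y(t / 2)
print(np.max(np.abs(residual)))   # tiny: the DDE is satisfied to high accuracy
```

The initial condition is built into the basis (the constant term is fixed at 1), so no constrained solver is needed; for strongly nonlinear right-hand sides the same least-squares functional would instead be minimized with a nonlinear optimizer.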
Wang, Lixin; Caylor, Kelly K; Dragoni, Danilo
2009-02-01
The δ¹⁸O and δ²H of water vapor serve as powerful tracers of hydrological processes. The typical method for determining water vapor δ¹⁸O and δ²H involves cryogenic trapping and isotope ratio mass spectrometry. Even with recent technical advances, these methods cannot resolve vapor composition at high temporal resolutions. In recent years, a few groups have developed continuous laser absorption spectroscopy (LAS) approaches for measuring δ¹⁸O and δ²H which achieve accuracy levels similar to those of lab-based mass spectrometry methods. Unfortunately, most LAS systems need cryogenic cooling and constant calibration to a reference gas, and have substantial power requirements, making them unsuitable for long-term field deployment at remote field sites. A new method called Off-Axis Integrated Cavity Output Spectroscopy (OA-ICOS) has been developed which requires extremely low-energy consumption and neither reference gas nor cryogenic cooling. In this report, we develop a relatively simple pumping system coupled to a dew point generator to calibrate an ICOS-based instrument (Los Gatos Research Water Vapor Isotope Analyzer (WVIA) DLT-100) under various pressures using liquid water with known isotopic signatures. Results show that the WVIA can be successfully calibrated using this customized system for different pressure settings, which ensures that this instrument can be combined with other gas-sampling systems. The precision of this instrument and the associated calibration method can reach approximately 0.08‰ for δ¹⁸O and approximately 0.4‰ for δ²H. Compared with conventional mass spectrometry and other LAS-based methods, the OA-ICOS technique provides a promising alternative tool for continuous water vapor isotopic measurements in field deployments. Copyright 2009 John Wiley & Sons, Ltd.
Ribozyme-mediated signal augmentation on a mass-sensitive biosensor.
Knudsen, Scott M; Lee, Joonhyung; Ellington, Andrew D; Savran, Cagri A
2006-12-20
Mass-based detection methods such as the quartz crystal microbalance (QCM) offer an attractive alternative to label-based methods; however, the sensitivity is generally lower by comparison. In particular, low-molecular-weight analytes can be difficult to detect based on mass addition alone. In this communication, we present the use of effector-dependent ribozymes (aptazymes) as reagents for augmenting small-ligand detection on a mass-sensitive device. Two distinct aptazymes were chosen: an L1-ligase-based aptazyme (L1-Rev), which is activated by a small peptide (MW approximately 2.4 kDa) from the HIV-1 Rev protein, and a hammerhead cleavase-based aptazyme (HH-theo3) activated by theophylline (MW = 180 Da). Aptazyme activity was observed in real time, and low-molecular-weight analyte detection was successfully demonstrated with both aptazymes.
Study of modal coupling procedures for the shuttle: A matrix method for damping synthesis
NASA Technical Reports Server (NTRS)
Hasselman, T. K.
1972-01-01
The damping method was applied successfully to real structures as well as analytical models. It depends on the ability to determine an appropriate modal damping matrix for each substructure. In the past, modal damping matrices were assumed diagonal for lack of being able to determine the coupling terms which are significant in the general case of nonproportional damping. This problem was overcome by formulating the damped equations of motion as a linear perturbation of the undamped equations for light structural damping. Damped modes are defined as complex vectors derived from the complex frequency response vectors of each substructure and are obtained directly from sinusoidal vibration tests. The damped modes are used to compute first order approximations to the modal damping matrices. The perturbation approach avoids ever having to solve a complex eigenvalue problem.
Progress in reforming chemical engineering education.
Wankat, Phillip C
2013-01-01
Three successful historical reforms of chemical engineering education were the triumph of chemical engineering over industrial chemistry, the engineering science revolution, and Engineering Criteria 2000. Current attempts to change teaching methods have relied heavily on dissemination of the results of engineering-education research that show superior student learning with active learning methods. Although slow dissemination of education research results probably contributes to the slowness of reform, two other causes are likely much more significant. First, teaching is the primary interest of only approximately one-half of engineering faculty. Second, the vast majority of engineering faculty have no training in teaching, but trained professors are on average better teachers. Significant progress in reform will occur if organizations with leverage (the National Science Foundation, through CAREER grants, and the Engineering Accreditation Commission of ABET) use that leverage to require faculty to be trained in pedagogy.
First-principles studies of electronic, transport and bulk properties of pyrite FeS2
NASA Astrophysics Data System (ADS)
Banjara, Dipendra; Mbolle, Augustine; Malozovsky, Yuriy; Franklin, Lashounda; Bagayoko, Diola
We present results of ab initio, self-consistent density functional theory (DFT) calculations of electronic, transport, and bulk properties of pyrite FeS2. We employed a local density approximation (LDA) potential and the linear combination of atomic orbitals (LCAO) formalism, following the Bagayoko, Zhao, and Williams (BZW) method, as enhanced by Ekuma and Franklin (BZW-EF). The BZW-EF method requires successive, self-consistent calculations with increasing basis sets to reach the ground state of the system under study. We report the band structure, the band gap, total and partial densities of states, effective masses, and the bulk modulus. Work funded in part by the US Department of Energy (DOE), National Nuclear Security Administration (NNSA) (Award No. DE-NA0002630), the National Science Foundation (NSF) (Award No. 1503226), LaSPACE, and LONI-SUBR.
Calculation of the Full Scattering Amplitude without Partial Wave Decomposition II
NASA Technical Reports Server (NTRS)
Shertzer, J.; Temkin, A.
2003-01-01
As is well known, the full scattering amplitude can be expressed as an integral involving the complete scattering wave function. We have shown that the integral can be simplified and used in a practical way. Initial application to electron-hydrogen scattering without exchange was highly successful. The Schrodinger equation (SE) can be reduced to a 2D partial differential equation (PDE), which was solved using the finite element method. We have now included exchange by solving the resultant SE in the static exchange approximation. The resultant equation can be reduced to a pair of coupled PDEs, to which the finite element method can still be applied. The resultant scattering amplitudes, both singlet and triplet, can be calculated as a function of angle for various energies. The results are in excellent agreement with converged partial wave results.
26 CFR 1.985-3 - United States dollar approximate separate transactions method.
Code of Federal Regulations, 2010 CFR
2010-04-01
... transactions method. 1.985-3 Section 1.985-3 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE... dollar approximate separate transactions method. (a) Scope and effective date—(1) Scope. This section describes the United States dollar (dollar) approximate separate transactions method of accounting (DASTM...
Lionfish on the Loose in the Waters off St Vincent
Welsh, JS; Young, J; Gupta, R
2014-01-01
Objective: The purpose of this study was to determine if the exotic venomous species Pterois volitans (lionfish) had reached as far south as St Vincent in the Caribbean. This predatory marine fish has successfully invaded the waters of the Western Atlantic and the Caribbean. Such success as an exotic invasive species is rare for a predatory marine fish. It is possible that the fish are growing larger and spreading faster than anticipated, thanks to a lower burden of parasites and a paucity of natural predators in their new environment. But prior to this report, no sightings of this species this far south had been reported. Methods: The authors conducted a search, with the help of local divers and fishermen, in the waters of St Vincent. Results: Approximately one year after the initiation of the search, a juvenile specimen was positively confirmed and captured off the southern coast of St Vincent. Conclusions: The exotic predatory and venomous red lionfish, Pterois volitans, has successfully invaded marine waters as far south as the Windward Islands. Fishermen in these regions should be aware of this venomous species in the region and physicians must be aware of how to manage stings from such animals. PMID:25303255
Design and fabrication of robotic gripper for grasping in minimizing contact force
NASA Astrophysics Data System (ADS)
Heidari, Hamidreza; Pouria, Milad Jafary; Sharifi, Shahriar; Karami, Mahmoudreza
2018-03-01
This paper presents a new method to improve the kinematics of a robot gripper for grasping in unstructured environments, such as space operations. The gripper is inspired by the human hand, and its design is kept close to the structure of human fingers to provide successful grasping capabilities. The main goal is to improve the kinematic structure of the gripper to increase its capacity to grasp large objects, decrease the contact forces, and achieve successful grasps of various objects in unstructured environments. This research describes the development of a self-adaptive and reconfigurable robotic hand for space operations through mechanical compliance, which is versatile, robust, and easy to control. Our model contains two fingers, two-link and three-link, combined with a kinematic model of the thumb-index pair. Moreover, experimental tests are performed to examine the effectiveness of the fabricated hand in real, unstructured tasks. The results show that the successful grasp range is improved by about 30% and the contact forces are reduced by approximately 10% for a wide range of target object sizes. According to the obtained results, the proposed approach provides an adaptive kinematic model whose finger geometries give the gripper better grasping capability.
Mean-field approximation for spacing distribution functions in classical systems
NASA Astrophysics Data System (ADS)
González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.
2012-01-01
We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.
Realism on the rocks: Novel success and James Hutton's theory of the earth.
Rossetter, Thomas
2018-02-01
In this paper, I introduce a new historical case study into the scientific realism debate. During the late eighteenth century, the Scottish natural philosopher James Hutton made two important successful novel predictions. The first concerned granitic veins intruding from granite masses into strata. The second concerned what geologists now term "angular unconformities": older sections of strata overlain by younger sections, the two resting at different angles, the former typically more inclined than the latter. These predictions, I argue, are potentially problematic for selective scientific realism in that constituents of Hutton's theory that would not be considered even approximately true today played various roles in generating them. The aim here is not to provide a full philosophical analysis but to introduce the case into the debate by detailing the history and showing why, at least prima facie, it presents a problem for selective realism. First, I explicate Hutton's theory. I then give an account of Hutton's predictions and their confirmations. Next, I explain why these predictions are relevant to the realism debate. Finally, I consider which constituents of Hutton's theory are, according to current beliefs, true (or approximately true), which are not (even approximately) true, and which were responsible for these successes. Copyright © 2017 The Author. Published by Elsevier Ltd. All rights reserved.
Reintroducing fire into the Blacks Mountain Research Natural Area: effects on fire hazard
Carl N. Skinner
2005-01-01
Frequent, low-intensity, surface fires were an integral ecological process in the Blacks Mountain Experimental Forest (BMEF) prior to the 20th Century. With rare exception, fires have been successfully excluded from BMEF since the early 1900s. The Blacks Mountain Research Natural Area (BMRNA) covers approximately 521 acres of BMEF in 5 compartments of approximately 100...
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated-representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation, by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples, including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
Design and Development of NEA Scout Solar Sail Deployer Mechanism
NASA Technical Reports Server (NTRS)
Sobey, Alexander R.; Lockett, Tiffany Russell
2016-01-01
The 6U (approximately 10 cm x 20 cm x 30 cm) cubesat Near Earth Asteroid (NEA) Scout, projected for launch in September 2018 aboard the maiden voyage of the Space Launch System (SLS), will utilize a solar sail as its main method of propulsion throughout its approximately 3-year mission to a near-Earth asteroid. Due to the extreme volume constraints levied on the mission, a highly compact solar sail deployment mechanism has been designed to meet the volume and mass constraints, as well as to provide enough propulsive solar sail area and quality to achieve mission success. The design of such a compact system required the development of approximately half a dozen prototypes in order to identify unforeseen problems and advance solutions. Though finite element analysis was performed during this process in an attempt to quantify forces present within the mechanism during deployment, neither the boom nor the sail materials lend themselves to high-confidence results. This paper focuses on the obstacles of developing a solar sail deployment mechanism for such an application and the lessons learned from a thorough development process. The lessons presented here will have significant applications beyond the NEA Scout mission, such as the development of other deployable boom mechanisms and uses for gossamer-thin films in space.
Mocz, G.
1995-01-01
Fuzzy cluster analysis has been applied to the 20 amino acids by using 65 physicochemical properties as a basis for classification. The clustering products, the fuzzy sets (i.e., classical sets with associated membership functions), have provided a new measure of amino acid similarities for use in protein folding studies. This work demonstrates that fuzzy sets of simple molecular attributes, when assigned to amino acid residues in a protein's sequence, can predict the secondary structure of the sequence with reasonable accuracy. An approach is presented for discriminating standard folding states, using near-optimum information splitting in half-overlapping segments of the sequence of assigned membership functions. The method is applied to a nonredundant set of 252 proteins and yields approximately 73% matching for correctly predicted and correctly rejected residues, with an approximately 60% overall success rate for the correctly recognized ones in three folding states: alpha-helix, beta-strand, and coil. The most useful attributes for discriminating these states appear to be related to size, polarity, and thermodynamic factors. Van der Waals volume, apparent average thickness of surrounding molecular free volume, and a measure of dimensionless surface electron density can explain approximately 95% of prediction results. Hydrogen bonding and hydrophobicity indices do not yet enable clear clustering and prediction. PMID:7549882
Uniform Foam Crush Testing for Multi-Mission Earth Entry Vehicle Impact Attenuation
NASA Technical Reports Server (NTRS)
Patterson, Byron W.; Glaab, Louis J.
2012-01-01
Multi-Mission Earth Entry Vehicles (MMEEVs) are blunt-body vehicles designed with the purpose of transporting payloads from outer space to the surface of the Earth. To achieve high reliability and minimum weight, MMEEVs avoid the use of limited-reliability systems, such as parachutes and retro-rockets, instead using built-in impact attenuators to absorb the energy remaining at impact to meet landing loads requirements. The Multi-Mission Systems Analysis for Planetary Entry (M-SAPE) parametric design tool is used to facilitate the design of MMEEVs and develop the trade space. Testing was conducted to characterize the material properties of several candidate impact foam attenuators to enhance M-SAPE analysis. In the current effort, four different Rohacell foams are tested at three different uniform strain rates (approximately 0.17%/s, 100%/s, and 13,600%/s). The primary data analysis method uses a global data smoothing technique in the frequency domain to remove noise and system natural frequencies. The results indicate that the filter and smoothing technique are successful in identifying the foam crush event and removing aberrations. The effect of strain rate increases with increasing foam density. The 71-WF-HT foam may support Mars Sample Return requirements. Several recommendations to improve the drop tower test technique are identified.
Improving sexually transmitted infection results notification via mobile phone technology.
Reed, Jennifer L; Huppert, Jill S; Taylor, Regina G; Gillespie, Gordon L; Byczkowski, Terri L; Kahn, Jessica A; Alessandrini, Evaline A
2014-11-01
To improve adolescent notification of positive sexually transmitted infection (STI) tests using mobile phone technology and STI information cards. A randomized intervention among 14- to 21-year-olds in a pediatric emergency department (PED). A 2 × 3 factorial design with replication was used to evaluate the effectiveness of six combinations of two factors on the proportion of STI-positive adolescents notified within 7 days of testing. Independent factors included the method of notification (call, text message, or call + text message) and provision of an STI information card with or without a phone number to obtain results. Covariates for logistic regression included age, empiric STI treatment, days until first attempted notification, and documentation of a confidential phone number. Approximately half of the 383 females and 201 males enrolled were ≥18 years of age. Neither texting alone nor card type was significantly associated with patient notification rates, and there was no significant interaction between card and notification method. For females, successful notification was significantly greater for call + text message (odds ratio, 3.2; 95% confidence interval, 1.4-6.9), and documenting a confidential phone number was independently associated with successful notification (odds ratio, 3.6; 95% confidence interval, 1.7-7.5). We found no significant predictors of successful notification for males. Of patients with a documented confidential phone number who received a call + text message, 94% of females and 83% of males were successfully notified. Obtaining a confidential phone number and using call + text message improved STI notification rates among female but not male adolescents in a pediatric emergency department. Copyright © 2014 Society for Adolescent Health and Medicine. All rights reserved.
2012-01-01
Background: Conventional transabdominal ultrasound usually fails to visualize parts of the ureter or extrahepatic bile duct covered by bowel gas. In this study, we propose a new method for gaining acoustic access to the ureters and extrahepatic bile duct to help determine the nature of obstruction to these structures when conventional transabdominal ultrasound fails. Methods: The normal saline retention enema method, that is, using normal saline-filled colons to gain acoustic access to the bilateral ureters and extrahepatic bile duct and detecting the lesions with a transabdominal ultrasonic diagnostic apparatus, was applied to 777 patients with obstructive lesions, including 603 with hydroureter and 174 with dilated common bile duct, which were not visualized by conventional ultrasonography. The follow-up data of all the patients were collected to verify the results obtained by this method. Results: Of the 755 patients who successfully finished the examination after the normal saline retention enema (a success rate of about 98%), the nature of obstruction in 718 patients was determined (a visualization rate of approximately 95%), including 533 with ureteral calculus, 23 with ureteral stricture, 129 with extrahepatic bile duct calculus, and 33 with common bile duct tumor. Conclusions: Colons filled fully with normal saline can surely give acoustic access to the bilateral ureters and extrahepatic bile duct so as to determine the nature of obstruction of these structures when conventional transabdominal ultrasound fails. PMID:22871226
Iterative discrete ordinates solution of the equation for surface-reflected radiance
NASA Astrophysics Data System (ADS)
Radkevich, Alexander
2017-11-01
This paper presents a new method of numerical solution of the integral equation for the radiance reflected from an anisotropic surface. The equation relates the radiance at the surface level with the BRDF and solutions of the standard radiative transfer problems for a slab with no reflection on its surfaces. It is also shown that the kernel of the equation satisfies the condition for the existence of a unique solution and for the convergence of the successive approximations to that solution. The developed method features two basic steps: discretization on a 2D quadrature, and solving the resulting system of algebraic equations with the successive over-relaxation method based on the Gauss-Seidel iterative process. Presented numerical examples show good agreement between the surface-reflected radiance obtained with DISORT and the proposed method. An analysis of the contributions of the direct and diffuse (but not yet reflected) parts of the downward radiance to the total solution is performed. Together, they represent a very good initial guess for the iterative process. This fact ensures fast convergence. Numerical evidence is given that the fastest convergence occurs with a relaxation parameter of 1 (no relaxation). An integral equation for the BRDF is derived as an inversion of the original equation. The potential of this new equation for BRDF retrievals is analyzed. The approach is found not to be viable, as the BRDF equation appears to be an ill-posed problem and requires knowledge of the surface-reflected radiance on the entire domain of both Sun and viewing zenith angles.
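The successive over-relaxation step built on the Gauss-Seidel process can be sketched generically as follows. This is a dense-matrix illustration of the iteration itself, not the paper's 2D-quadrature radiative transfer code; the test matrix and tolerances are illustrative.

```python
import numpy as np

def sor_solve(A, b, omega=1.0, tol=1e-10, max_iter=1000):
    """Successive over-relaxation based on the Gauss-Seidel sweep.

    With omega = 1.0 this reduces to plain Gauss-Seidel, the setting
    the abstract reports as converging fastest for its problem.
    """
    n = len(b)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        x_prev = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x_prev[i+1:]
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_prev[i + 1:]
            gs = (b[i] - sigma) / A[i, i]
            x[i] = (1.0 - omega) * x_prev[i] + omega * gs
        if np.linalg.norm(x - x_prev, np.inf) < tol:
            return x, it
    return x, max_iter
```

For diagonally dominant or symmetric positive definite systems the sweep converges for 0 < omega < 2; omega = 1 is the no-relaxation case highlighted in the abstract.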
Mitchell, Sara N; Catteruccia, Flaminia
2017-12-01
Vectorial capacity is a mathematical approximation of the efficiency of vector-borne disease transmission, measured as the number of new infections disseminated per case per day by an insect vector. Multiple elements of mosquito biology govern their vectorial capacity, including survival, population densities, feeding preferences, and vector competence. Intriguingly, biological pathways essential to mosquito reproductive fitness directly or indirectly influence a number of these elements. Here, we explore this complex interaction, focusing on how the interplay between mating and blood feeding in female Anopheles not only shapes their reproductive success but also influences their ability to sustain Plasmodium parasite development. Central to malaria transmission, mosquito reproductive biology has recently become the focus of research strategies aimed at malaria control, and we discuss promising new methods based on the manipulation of key reproductive steps. In light of widespread resistance to all public health-approved insecticides, targeting mosquito reproduction may prove crucial to the success of malaria-eradication campaigns. Copyright © 2017 Cold Spring Harbor Laboratory Press; all rights reserved.
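For reference, the classical Ross-Macdonald expression usually meant by "vectorial capacity" is the following; the formula and symbols are the standard textbook convention, not quoted from this paper:

```latex
C \;=\; \frac{m\,a^{2}\,b\,p^{n}}{-\ln p}
```

Here m is the vector density per host, a the daily human-biting rate, b the vector competence (probability of transmission per bite), p the daily survival probability, and n the extrinsic incubation period in days. The biological elements listed in the abstract (survival, density, feeding preference, competence) map directly onto p, m, a, and b.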
Recruitment strategies for an osteoporosis clinical trial: analysis of effectiveness.
Heard, Allison; March, Rachel; Maguire, Patricia; Reilly, Penny; Helmore, Joy; Cameron, Sheryl; Frampton, Christopher; Nicholls, Gary; Gilchrist, Nigel
2012-09-01
To examine the effectiveness of a planned rapid recruitment strategy in an osteoporosis clinical trial. Multiple recruitment methods were explored, including media advertising, searching bone density scan and X-ray results in specialist and primary practice databases, community initiatives, and generation of research centre and study-specific pamphlets. Of 246 women screened, 41 consented to the study and only 14 were randomised. Thus, 232 (94%) volunteers were screen failures, ineligible, or declined to participate. With regard to the cost-effectiveness of all recruitment strategies, searching the research centre database was the most successful, with four women randomised at a cost of approximately NZ$302 per volunteer. Other strategies were less cost-effective. Obtaining a specific study cohort can be achieved by a comprehensive, targeted, rapid recruitment program. A research centre database search was the most successful and cost-effective recruitment modality in this small study. © 2012 Canterbury Geriatric Medical Research Trust. Australasian Journal on Ageing © 2012 ACOTA.
Human Age Recognition by Electrocardiogram Signal Based on Artificial Neural Network
NASA Astrophysics Data System (ADS)
Dasgupta, Hirak
2016-12-01
The objective of this work is to build a neural network function approximation model to detect human age from the electrocardiogram (ECG) signal. The input vectors of the neural network are the Katz fractal dimension of the ECG signal, the frequencies in the QRS complex, sex (male or female, represented by a numeric constant), and the average of the successive R-R peak distances of a particular ECG signal. The QRS complex has been detected by a short-time Fourier transform algorithm. The successive R peaks have been detected by first cutting the signal into periods using an auto-correlation method and then finding the absolute highest point in each period. The neural network used in this problem consists of two layers, with sigmoid neurons in the input layer and linear neurons in the output layer. The results show means of errors of -0.49, 1.03, and 0.79 years and standard deviations of errors of 1.81, 1.77, and 2.70 years during training, cross-validation, and testing with unknown data sets, respectively.
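One of the named input features, the Katz fractal dimension, has a compact standard definition that can be sketched directly; this follows Katz's original waveform formula, not the paper's own code.

```python
import numpy as np

def katz_fd(signal):
    """Katz fractal dimension of a 1-D sampled waveform (e.g., an ECG trace).

    KFD = log10(n) / (log10(n) + log10(d / L)), where L is the total
    curve length, d the maximum distance from the first sample, and
    n the number of steps along the curve.
    """
    y = np.asarray(signal, dtype=float)
    x = np.arange(len(y), dtype=float)
    # Total curve length: sum of distances between successive samples
    L = np.sum(np.hypot(np.diff(x), np.diff(y)))
    # Maximum distance from the first sample to any other sample
    d = np.max(np.hypot(x - x[0], y - y[0]))
    n = len(y) - 1
    return np.log10(n) / (np.log10(n) + np.log10(d / L))
```

A straight line gives a dimension of exactly 1, and increasingly jagged signals give values above 1, which is what makes the measure useful as a waveform-complexity feature.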
NASA Technical Reports Server (NTRS)
Jones, John H.; Hanson, B. Z.
2011-01-01
Petrologic investigation of the shergottites has been hampered by the fact that most of these meteorites are partial cumulates. Two lines of inquiry have been used to evaluate the compositions of parental liquids: (i) perform melting experiments at different pressures and temperatures until the compositions of cumulate crystal cores are reproduced [e.g., 1]; and (ii) use point-counting techniques to reconstruct the compositions of intercumulus liquids [e.g., 2]. The second of these methods is hampered by the approximate nature of the technique. In effect, element maps are used to construct mineral modes, and average mineral compositions are then converted into bulk compositions. This method works well when the mineral phases are homogeneous [3]. However, when minerals are zoned, with narrow rims contributing disproportionately to the mineral volume, this method becomes problematic. Decisions need to be made about the average composition of the various zones within crystals, and, further, the proportions of those zones also need to be defined. We have developed a new microprobe technique to see whether the point-count method of determining intercumulus liquid composition is realistic. In our technique, the approximating decisions of earlier methods are unnecessary because each pixel of our x-ray maps is turned into a complete eleven-element quantitative analysis. The success or failure of our technique can then be determined by experimentation. As discussed earlier, experiments on our point-count composition can be used to see whether experimental liquidus phases successfully reproduce natural mineral compositions. Regardless of our ultimate outcome in retrieving shergottite parent liquids, we believe our pixel-by-pixel analysis technique represents a giant step forward in documenting thin-section modes and compositions. For a third time, we have analyzed the groundmass composition of EET 79001,68 (Eg).
The first estimate of Eg was made by [4] and later modified by [5] to take phase diagram considerations into account. The Eg composition of [4] was too olivine-normative to be the true Eg composition, because the ,68 groundmass contains no forsteritic olivine. A later mapping by [2] basically reconfirmed the modifications of [5]. However, even the modified composition of [5] has olivine on the liquidus for 50 °C before low-Ca pyroxene appears [6].
NASA Astrophysics Data System (ADS)
Ma, Sangback
In this paper we compare various parallel preconditioners, such as Point-SSOR (Symmetric Successive OverRelaxation), ILU(0) (Incomplete LU) in the Wavefront ordering, ILU(0) in the Multi-color ordering, Multi-Color Block SOR (Successive OverRelaxation), SPAI (SParse Approximate Inverse), and pARMS (Parallel Algebraic Recursive Multilevel Solver), for solving large sparse linear systems arising from two-dimensional PDEs (Partial Differential Equations) on structured grids. Point-SSOR is well known, and ILU(0) is one of the most popular preconditioners, but it is inherently serial. ILU(0) in the Wavefront ordering maximizes the parallelism in the natural order, but the lengths of the wavefronts are often nonuniform. ILU(0) in the Multi-color ordering is a simple way of achieving a parallelism of order N, where N is the order of the matrix, but its convergence rate often deteriorates compared to that of the natural ordering. We have chosen the Multi-Color Block SOR preconditioner combined with a direct sparse matrix solver, since for the Laplacian matrix the SOR method is known to have a nondeteriorating rate of convergence when used with the Multi-Color ordering. By using the block version we expect to minimize interprocessor communications. SPAI computes the sparse approximate inverse directly by the least squares method. Finally, ARMS is a preconditioner that recursively exploits the concept of independent sets, and pARMS is the parallel version of ARMS. Experiments were conducted for the Finite Difference and Finite Element discretizations of five two-dimensional PDEs with large mesh sizes up to a million on an IBM p595 machine with distributed memory. Our matrices are real positive, i.e., the real parts of their eigenvalues are positive. We have used GMRES(m) as our outer iterative method, so that the convergence of GMRES(m) for our test matrices is mathematically guaranteed. Interprocessor communications were done using MPI (Message Passing Interface) primitives.
The results show that, in general, ILU(0) in the Multi-Color ordering and ILU(0) in the Wavefront ordering outperform the other methods, but for symmetric and nearly symmetric 5-point matrices Multi-Color Block SOR gives the best performance, except for a few cases with a small number of processors.
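The multi-color ordering idea can be illustrated with the classic red-black (2-color) Gauss-Seidel sweep on a 5-point Laplacian: all points of one color depend only on the other color, so each half-sweep is fully parallel. This is a small serial numpy sketch of the concept, not the paper's MPI implementation on the IBM p595; grid size and sweep count are illustrative.

```python
import numpy as np

def red_black_gs(u, f, h, sweeps=300):
    """Red-black Gauss-Seidel sweeps for the 5-point discretization of
    -Laplace(u) = f on the unit square with zero Dirichlet boundary.

    Within one color, every update reads only the other color's values,
    so all same-color points can be updated simultaneously -- the
    order-N parallelism of the multi-color ordering.
    """
    for _ in range(sweeps):
        for color in (0, 1):
            for i in range(1, u.shape[0] - 1):
                j0 = 1 + (i + color) % 2  # first interior point of this color in row i
                u[i, j0:-1:2] = 0.25 * (u[i - 1, j0:-1:2] + u[i + 1, j0:-1:2]
                                        + u[i, j0 - 1:-2:2] + u[i, j0 + 1::2]
                                        + h * h * f[i, j0:-1:2])
    return u
```

In a distributed-memory setting each half-sweep becomes one bulk update plus one halo exchange, which is why multi-color orderings map so naturally onto MPI.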
Mori, Takaharu; Miyashita, Naoyuki; Im, Wonpil; Feig, Michael; Sugita, Yuji
2016-07-01
This paper reviews various enhanced conformational sampling methods and explicit/implicit solvent/membrane models, as well as their recent applications to the exploration of the structure and dynamics of membranes and membrane proteins. Molecular dynamics simulations have become an essential tool to investigate biological problems, and their success relies on proper molecular models together with efficient conformational sampling methods. The implicit representation of solvent/membrane environments is a reasonable approximation to the explicit all-atom models, considering the balance between computational cost and simulation accuracy. Implicit models can be easily combined with replica-exchange molecular dynamics methods to explore a wider conformational space of a protein. Other molecular models and enhanced conformational sampling methods are also briefly discussed. As application examples, we introduce recent simulation studies of glycophorin A, phospholamban, amyloid precursor protein, and mixed lipid bilayers, and discuss the accuracy and efficiency of each simulation model and method. This article is part of a Special Issue entitled: Membrane Proteins edited by J.C. Gumbart and Sergei Noskov. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, needing fewer function evaluations while preserving good approximation quality.
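The variance-based online stopping rule can be sketched as follows. This is a simplified illustration, not the authors' implementation: the indicator sequence, window size, and threshold are all assumptions.

```python
from statistics import variance

# Online convergence detection sketch: track a performance indicator per
# generation and stop once its variance over a sliding window falls below
# a threshold.
def should_stop(indicator_history, window=10, threshold=1e-6):
    if len(indicator_history) < window:
        return False
    return variance(indicator_history[-window:]) < threshold

history = []
value = 1.0
for generation in range(1000):
    value *= 0.9                     # stand-in for a shrinking indicator gap
    history.append(value)
    if should_stop(history):
        break                        # the MOEA would be stopped here
```

In the real method the indicator would be, e.g., a hypervolume-based measure computed from the current population.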
Wrzyszcz, Aneta; Urbaniak, Joanna; Sapa, Agnieszka; Woźniak, Mieczysław
2017-01-01
To date, there has been no ideal method for blood platelet isolation which allows one to obtain a preparation devoid of contaminations, reflecting the activation status and morphological features of circulating platelets. To address these requirements, we have developed a method which combines the continuous density gradient centrifugation with washing from PGI2-supplemented platelet-rich plasma (PRP). We have assessed the degree of erythrocyte and leukocyte contamination, recovery of platelets, morphological features, activation status, and reactivity of isolated platelets. Using our protocol, we were able to get a preparation free from contaminations, representing well the platelet population prior to the isolation in terms of size and activity. Besides this, we have obtained approximately 2 times more platelets from the same volume of blood compared to the most widely used method. From 10 ml of whole citrated blood we were able to get on average 2.7 mg of platelet-derived protein. The method of platelet isolation presented in this paper can be successfully applied to tests requiring very pure platelets, reflecting the circulating platelet state, from a small volume of blood.
NASA Technical Reports Server (NTRS)
Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)
2002-01-01
The framework for constructing a high-order, conservative Spectral (Finite) Volume (SV) method is presented for two-dimensional scalar hyperbolic conservation laws on unstructured triangular grids. Each triangular grid cell forms a spectral volume (SV), and the SV is further subdivided into polygonal control volumes (CVs) to support high-order data reconstructions. Cell-averaged solutions from these CVs are used to reconstruct a high-order polynomial approximation in the SV. Each CV is then updated independently with a Godunov-type finite volume method and a high-order Runge-Kutta time integration scheme. A universal reconstruction is obtained by partitioning all SVs in a geometrically similar manner. The convergence of the SV method is shown to depend on how an SV is partitioned. A criterion based on the Lebesgue constant has been developed and used successfully to determine the quality of various partitions. Symmetric, stable, and convergent linear, quadratic, and cubic SVs have been obtained, and many different types of partitions have been evaluated. The SV method is tested for both linear and non-linear model problems with and without discontinuities.
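The Lebesgue-constant criterion can be illustrated in one dimension (a sketch only; the paper applies the same idea to 2D partitions of triangular spectral volumes). The constant bounds how much interpolation can amplify data errors, so smaller is better.

```python
import numpy as np

# Lebesgue constant of a node set: max over [-1, 1] of the sum of absolute
# values of the Lagrange basis polynomials.  Equispaced nodes give an
# exponentially growing constant; Chebyshev nodes only logarithmic growth.
def lebesgue_constant(nodes, n_fine=2000):
    x = np.linspace(-1.0, 1.0, n_fine)
    lebesgue_fn = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        li = np.ones_like(x)                  # Lagrange basis polynomial l_i
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        lebesgue_fn += np.abs(li)
    return lebesgue_fn.max()

n = 11
equi = np.linspace(-1.0, 1.0, n)
cheb = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))   # Chebyshev roots
lam_equi = lebesgue_constant(equi)
lam_cheb = lebesgue_constant(cheb)
```

For n = 11 the equispaced constant is roughly an order of magnitude larger than the Chebyshev one, which is why the quality of a partition matters for convergence.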
Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization
Zhao, Qiangfu; Liu, Yong
2015-01-01
A fitness landscape represents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient and accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established according to this paradigm. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050
Online PhD Program Delivery Models and Their Relationship to Student Success
ERIC Educational Resources Information Center
Jorissen, Shari L.
2012-01-01
Attrition rates are approximately 50% in traditional Ph.D. programs and 10-20% higher in online Ph.D. programs. Understanding the relationship between student factors, measures of student success (retention, graduation, year to degree), and student satisfaction is important to support and improve retention, graduation rates,…
Race to the Top. Rhode Island Report. Year 4: School Year 2013-2014. [State-Specific Summary Report
ERIC Educational Resources Information Center
US Department of Education, 2015
2015-01-01
This State-specific summary report serves as an assessment of Rhode Island's Year 3 Race to the Top implementation, highlighting successes and accomplishments, identifying challenges, and providing lessons learned from implementation from approximately September 2013 through September 2014. Building upon the successes of Years 1 through 3, in Year…
The Social Background of Students and Their Prospect for Success at School.
ERIC Educational Resources Information Center
Philippines National Commission for UNESCO.
This document is an English-language abstract (approximately 1,500 words) of a report prepared in answer to an IBE questionnaire. In the Philippines, the main problem is that widespread poverty is responsible for many undernourished, poorly sheltered and ill-clad students whose prospect of success at school is from the start seriously hampered by…
First Step to Success. What Works Clearinghouse Intervention Report
ERIC Educational Resources Information Center
What Works Clearinghouse, 2012
2012-01-01
"First Step to Success" is an early intervention program designed to help children who are at risk for developing aggressive or antisocial behavioral patterns. The program uses a trained behavior coach who works with each student and his or her class peers, teacher, and parents for approximately 50 to 60 hours over a three-month period.…
Race to the Top. Ohio Report. Year 2: School Year 2011-2012. [State-Specific Summary Report
ERIC Educational Resources Information Center
US Department of Education, 2013
2013-01-01
This State-specific summary report serves as an assessment of Ohio's Year 2 Race to the Top implementation, highlighting successes and accomplishments, identifying challenges, and providing lessons learned from implementation from approximately September 2011 through September 2012. During Year 2, Ohio built on its Year 1 successes. In its…
Siria, Doreen J; Batista, Elis P A; Opiyo, Mercy A; Melo, Elizangela F; Sumaye, Robert D; Ngowo, Halfan S; Eiras, Alvaro E; Okumu, Fredros O
2018-04-11
Controlled blood-feeding is essential for maintaining laboratory colonies of disease-transmitting mosquitoes and investigating pathogen transmission. We evaluated a low-cost artificial feeding (AF) method, as an alternative to direct human feeding (DHF), commonly used in mosquito laboratories. We applied thinly-stretched pieces of polytetrafluoroethylene (PTFE) membranes cut from locally available seal tape (i.e. plumbers tape, commonly used for sealing pipe threads in gasworks or waterworks). Approximately 4 ml of bovine blood was placed on the bottom surfaces of inverted Styrofoam cups and then the PTFE membranes were thinly stretched over the surfaces. The cups were filled with boiled water to keep the blood warm (~37 °C), and held over netting cages containing 3-4 day-old inseminated adults of female Aedes aegypti, Anopheles gambiae (s.s.) or Anopheles arabiensis. Blood-feeding success, fecundity and survival of mosquitoes maintained by this system were compared against DHF. Aedes aegypti achieved 100% feeding success on both AF and DHF, and also similar fecundity rates (13.1 ± 1.7 and 12.8 ± 1.0 eggs/mosquito respectively; P > 0.05). An. arabiensis had slightly lower feeding success on AF (85.83 ± 16.28%) than DHF (98.83 ± 2.29%) though these were not statistically different (P > 0.05), and also comparable fecundity between AF (8.82 ± 7.02) and DHF (8.02 ± 5.81). Similarly, for An. gambiae (s.s.), we observed a marginal difference in feeding success between AF (86.00 ± 10.86%) and DHF (98.92 ± 2.65%), but similar fecundity by either method. Compared to DHF, mosquitoes fed using AF survived a similar number of days [Hazard Ratios (HR) for Ae. aegypti = 0.99 (0.75-1.34), P > 0.05; An. arabiensis = 0.96 (0.75-1.22), P > 0.05; and An. gambiae (s.s.) = 1.03 (0.79-1.35), P > 0.05]. Mosquitoes fed via this simple AF method had similar feeding success, fecundity and longevity. 
The method could potentially be used for laboratory colonization of mosquitoes, where DHF is unfeasible. If improved (e.g. minimizing temperature fluctuations), the approach could possibly also support studies where vectors are artificially infected with blood-borne pathogens.
Mean-field approximation for spacing distribution functions in classical systems.
González, Diego Luis; Pimpinelli, Alberto; Einstein, T L
2012-01-01
We propose a mean-field method to calculate approximately the spacing distribution functions p^(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p^(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed. © 2012 American Physical Society
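For reference, the Wigner surmise mentioned above has, in its simplest nearest-neighbour form, the closed expression p(s) = (pi/2) s exp(-pi s^2 / 4). A quick numerical check (a sketch, not the paper's mean-field calculation) confirms that it is normalised and has unit mean spacing.

```python
import math

# Nearest-neighbour Wigner surmise and a midpoint-rule sanity check of its
# normalisation and mean spacing.
def wigner_surmise(s):
    return 0.5 * math.pi * s * math.exp(-math.pi * s * s / 4.0)

def integrate(fn, a, b, n=100_000):           # simple midpoint rule
    h = (b - a) / n
    return sum(fn(a + (k + 0.5) * h) for k in range(n)) * h

norm = integrate(wigner_surmise, 0.0, 10.0)                   # should be ~1
mean = integrate(lambda s: s * wigner_surmise(s), 0.0, 10.0)  # should be ~1
```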
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe
2013-01-01
This paper describes two methods of trajectory optimization to obtain an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method, and the second method is a direct trajectory optimization method using a Chebyshev polynomial approximation and cubic spline approximation. The approximate optimal trajectory will be compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution which results in a bang-singular-bang optimal control.
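The Chebyshev-approximation ingredient of the direct method can be sketched as follows: represent a smooth time history by a low-order Chebyshev series and evaluate it anywhere. The "altitude" profile below is a made-up smooth stand-in, not flight data.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Fit a degree-8 Chebyshev series to a smooth profile sampled in time.
t = np.linspace(0.0, 1.0, 50)
altitude = np.sin(2.0 * t) + 0.5 * t          # hypothetical climb profile
x = 2.0 * t - 1.0                             # map time onto [-1, 1]
coeffs = C.chebfit(x, altitude, deg=8)        # least-squares Chebyshev fit
max_err = np.max(np.abs(C.chebval(x, coeffs) - altitude))
```

For smooth functions the coefficients decay rapidly, which is what makes a low-order series an accurate surrogate for the state and control histories in a direct method.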
Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai
2015-02-01
Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm), with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of these methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.
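The l2 baseline that these methods improve on can be sketched with the truncated SVD: by the Eckart-Young theorem it is the best rank-k approximation in the Frobenius norm, and its error equals the norm of the discarded singular values. The data matrix below is synthetic.

```python
import numpy as np

# Best rank-k approximation in the Frobenius norm via the truncated SVD.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8)) @ rng.standard_normal((8, 15))  # rank <= 8
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 3
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]          # truncated reconstruction
err = np.linalg.norm(A - A_k, 'fro')
expected = np.sqrt(np.sum(s[k:] ** 2))        # Eckart-Young error formula
```

A single large outlier entry in A would perturb every singular vector of this l2 solution, which is exactly the sensitivity the paper's l1-norm factorizations are designed to avoid.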
NASA Astrophysics Data System (ADS)
Kováč, Michal
2015-03-01
Thin-walled centrically compressed members with non-symmetrical or mono-symmetrical cross-sections can buckle in a torsional-flexural buckling mode. Vlasov developed a system of governing differential equations of the stability of such member cases. Solving these coupled equations in an analytic way is only possible in simple cases. Therefore, Goľdenvejzer introduced an approximate method for the solution of this system to calculate the critical axial force of torsional-flexural buckling. Moreover, this can also be used in cases of members with various boundary conditions in bending and torsion. This approximate method for the calculation of critical force has been adopted into norms. Nowadays, we can also solve governing differential equations by numerical methods, such as the finite element method (FEM). Therefore, in this paper, the results of the approximate method and the FEM were compared to each other, while considering the FEM as a reference method. This comparison shows any discrepancies of the approximate method. Attention was also paid to when and why discrepancies occur. The approximate method can be used in practice by considering some simplifications, which ensure safe results.
Pernal, Katarzyna
2012-05-14
Time-dependent density functional theory (TD-DFT) in the adiabatic formulation exhibits known failures when applied to predicting excitation energies. One of them is the lack of the doubly excited configurations. On the other hand, the time-dependent theory based on a one-electron reduced density matrix functional (time-dependent density matrix functional theory, TD-DMFT) has proven accurate in determining single and double excitations of the H2 molecule if the exact functional is employed in the adiabatic approximation. We propose a new approach for computing excited state energies that relies on functionals of electron density and one-electron reduced density matrix, where the latter is applied in the long-range region of electron-electron interactions. A similar approach has been recently successfully employed in predicting ground state potential energy curves of diatomic molecules even in the dissociation limit, where static correlation effects are dominating. In the paper, a time-dependent functional theory based on the range-separation of the electronic interaction operator is rigorously formulated. To turn the approach into a practical scheme, the adiabatic approximation is proposed for the short- and long-range components of the coupling matrix present in the linear response equations. In the end, the problem of finding excitation energies is turned into an eigenproblem for a symmetric matrix. Assignment of obtained excitations is discussed and it is shown how to identify double excitations from the analysis of approximate transition density matrix elements. The proposed method used with the short-range local density approximation (srLDA) and the long-range Buijse-Baerends density matrix functional (lrBB) is applied to the H2 molecule (at equilibrium geometry and in the dissociation limit) and to the Be atom. The method accounts for double excitations in the investigated systems but, unfortunately, the accuracy of some of them is poor.
The quality of the other excitations is in general much better than that offered by TD-DFT-LDA or TD-DMFT-BB approximations if the range-separation parameter is properly chosen. The latter remains an open problem.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, Caroline; Lischeske, James J.; Sievers, David A.
2015-11-03
One viable treatment method for conversion of lignocellulosic biomass to biofuels begins with saccharification (thermochemical pretreatment and enzymatic hydrolysis), followed by fermentation or catalytic upgrading to fuels such as ethanol, butanol, or other hydrocarbons. The post-hydrolysis slurry is typically 4-8 percent insoluble solids, predominantly consisting of lignin. Suspended solids are known to inhibit fermentation as well as poison catalysts and obstruct flow in catalyst beds. Thus a solid-liquid separation following enzymatic hydrolysis would be highly favorable for process economics; however, the material is not easily separated by filtration or gravimetric methods. Use of a polyacrylamide flocculant to bind the suspended particles in a corn stover hydrolyzate slurry into larger flocs (1-2 mm diameter) has been found to be extremely helpful in improving separation. Recent and ongoing research on novel pretreatment methods yields hydrolyzate material with diverse characteristics. Therefore, we need a thorough understanding of rapid and successful flocculation design in order to quickly achieve process design goals. In this study potential indicators of flocculation performance were investigated in order to develop a rapid analysis method for flocculation procedure in the context of a novel hydrolyzate material. Flocculation conditions were optimized on flocculant type and loading, pH, and mixing time. Filtration flux of the hydrolyzate slurry was improved 170-fold using a cationic polyacrylamide flocculant with a dosing of approximately 22 mg flocculant/g insoluble solids at an approximate pH of 3. With cake washing, sugar recovery exceeded 90 percent with asymptotic yield at 15 L wash water/kg insoluble solids.
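The reported optimum translates into a simple batch calculation. This is a sketch: only the 22 mg/g flocculant loading and the 4-8 percent insoluble-solids range come from the text; the batch size is hypothetical.

```python
# Flocculant demand for a hypothetical hydrolyzate batch at the reported dose.
slurry_mass_kg = 100.0                 # hypothetical batch size
insoluble_solids_frac = 0.06           # mid-range of the reported 4-8 percent
dose_mg_per_g = 22.0                   # cationic polyacrylamide, mg per g solids

solids_g = slurry_mass_kg * 1000.0 * insoluble_solids_frac
flocculant_g = solids_g * dose_mg_per_g / 1000.0   # grams of flocculant needed
```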
Cell Culture Isolation of Piscine Nodavirus (Betanodavirus) in Fish-Rearing Seawater
Nishi, Shinnosuke; Yamashita, Hirofumi; Kawato, Yasuhiko
2016-01-01
Piscine nodavirus (betanodavirus) is the causative agent of viral nervous necrosis (VNN) in a variety of cultured fish species, particularly marine fish. In the present study, we developed a sensitive method for cell culture isolation of the virus from seawater and applied the method to a spontaneous fish-rearing environment. The virus in seawater was concentrated by an iron-based flocculation method and subjected to isolation with E-11 cells. A real-time reverse transcriptase PCR (RT-PCR) assay was used to quantify the virus in water. After spiking into seawater was performed, a betanodavirus strain (redspotted grouper nervous necrosis virus [RGNNV] genotype) was effectively recovered in the E-11 cells at a detection limit of approximately 10^5 copies (equivalent to 10^2 50% tissue culture infective doses [TCID50])/liter seawater. In an experimental infection of juvenile sevenband grouper (Epinephelus septemfasciatus) with the virus, the virus was isolated from the drainage of a fish-rearing tank when the virus level in water was at least approximately 10^5 copies/liter. The application of this method to sevenband grouper-rearing floating net pens, where VNN prevailed, resulted in the successful isolation of the virus from seawater. No differences were found in the partial sequences of the coat protein gene (RNA2) between the clinical virus isolates of dead fish and the cell-cultured virus isolates from seawater, and the viruses were identified as RGNNV. The infection experiment showed that the virus isolates from seawater were virulent to sevenband grouper. These results showed direct evidence of the horizontal transmission of betanodavirus via rearing water in marine aquaculture. PMID:26896128
Non-symbolic arithmetic in adults and young children.
Barth, Hilary; La Mont, Kristen; Lipton, Jennifer; Dehaene, Stanislas; Kanwisher, Nancy; Spelke, Elizabeth
2006-01-01
Five experiments investigated whether adults and preschool children can perform simple arithmetic calculations on non-symbolic numerosities. Previous research has demonstrated that human adults, human infants, and non-human animals can process numerical quantities through approximate representations of their magnitudes. Here we consider whether these non-symbolic numerical representations might serve as a building block of uniquely human, learned mathematics. Both adults and children with no training in arithmetic successfully performed approximate arithmetic on large sets of elements. Success at these tasks did not depend on non-numerical continuous quantities, modality-specific quantity information, the adoption of alternative non-arithmetic strategies, or learned symbolic arithmetic knowledge. Abstract numerical quantity representations therefore are computationally functional and may provide a foundation for formal mathematics.
Left-right correlation in coupled F-center defects.
Janesko, Benjamin G
2016-08-07
This work explores how left-right correlation, a textbook problem in electronic structure theory, manifests in a textbook example of electrons trapped in crystal defects. I show that adjacent F-center defects in lithium fluoride display symptoms of "strong" left-right correlation, symptoms similar to those seen in stretched H2. Simulations of UV/visible absorption spectra qualitatively fail to reproduce experiment unless left-right correlation is taken into account. This is of interest to both the electronic structure theory and crystal-defect communities. Theorists have a new well-behaved system to test their methods. Crystal-defect groups are cautioned that the approximations that successfully model single F-centers may fail for adjacent F-centers.
Safety evaluation methodology for advanced coal extraction systems
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.
1981-01-01
Qualitative and quantitative evaluation methods for coal extraction systems were developed. The analysis examines the soundness of the design, whether or not the major hazards have been eliminated or reduced, and how the reduction would be accomplished. The quantitative methodology establishes the approximate impact of hazards on injury levels. The results are weighted by peculiar geological elements, specialized safety training, peculiar mine environmental aspects, and reductions in labor force. The outcome is compared with injury level requirements based on similar, safer industries to get a measure of the new system's success in reducing injuries. This approach provides a more detailed and comprehensive analysis of hazards and their effects than existing safety analyses.
Pressure Distribution in a Porous Squeeze Film Bearing Lubricated with a Herschel-Bulkley Fluid
NASA Astrophysics Data System (ADS)
Walicka, A.; Jurczak, P.
2016-12-01
The influence of a wall porosity on the pressure distribution in a curvilinear squeeze film bearing lubricated with a lubricant being a viscoplastic fluid of a Herschel-Bulkley type is considered. After general considerations on the flow of the viscoplastic fluid (lubricant) in a bearing clearance and in a porous layer the modified Reynolds equation for the curvilinear squeeze film bearing with a Herschel-Bulkley lubricant is given. The solution of this equation is obtained by a method of successive approximation. As a result one obtains a formula expressing the pressure distribution. The example of squeeze films in a step bearing (modeled by two parallel disks) is discussed in detail.
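The successive-approximation idea is a plain fixed-point iteration: substitute the current iterate back into the equation until the result stops changing. A generic sketch (with a toy equation, not the Reynolds equation) looks like this:

```python
import math

# Generic successive-approximation (fixed-point) scheme of the kind used to
# solve the modified Reynolds equation for the pressure distribution.
def successive_approximation(g, x0, tol=1e-12, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = successive_approximation(math.cos, 1.0)   # solves x = cos(x)
```

Convergence requires the iteration map to be a contraction near the solution; in the bearing problem this role is played by the smallness of the viscoplastic correction terms.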
Imaging elemental distribution and ion transport in cultured cells with ion microscopy.
Chandra, S; Morrison, G H
1985-06-28
Both elemental distribution and ion transport in cultured cells have been imaged by ion microscopy. Morphological and chemical information was obtained with a spatial resolution of approximately 0.5 micron for sodium, potassium, calcium, and magnesium in freeze-fixed, cryofractured, and freeze-dried normal rat kidney cells and Chinese hamster ovary cells. Ion transport was successfully demonstrated by imaging Na+-K+ fluxes after the inhibition of Na+- and K+-dependent adenosine triphosphatase with ouabain. This method allows measurements of elemental (isotopic) distribution to be related to cell morphology, thereby providing the means for studying ion distribution and ion transport under different physiological, pathological, and toxicological conditions in cell culture systems.
Microscopic boson approach to nuclear collective motion
NASA Astrophysics Data System (ADS)
Kuchta, R.
1989-10-01
A quantum mechanical approach to the maximally decoupled nuclear collective motion is proposed. The essential idea is to transcribe the original shell-model hamiltonian in terms of boson operators, then to isolate the collective one-boson eigenstates of the mapped hamiltonian and to perform a canonical transformation which eliminates (up to the two-body terms) the coupling between the collective and noncollective bosons. Unphysical states arising due to the violation of the Pauli principle in the boson space are identified and removed within a suitable approximation. The method is applied to study the low-lying collective states of nuclei which are successfully described by the exactly solvable multi-level pairing hamiltonian (Sn, Ni, Pb).
Anharmonic effects in simple physical models: introducing undergraduates to nonlinearity
NASA Astrophysics Data System (ADS)
Christian, J. M.
2017-09-01
Given the pervasive character of nonlinearity throughout the physical universe, a case is made for introducing undergraduate students to its consequences and signatures earlier rather than later. The dynamics of two well-known systems—a spring and a pendulum—are reviewed when the standard textbook linearising assumptions are relaxed. Some qualitative effects of nonlinearity can be anticipated from symmetry (e.g., inspection of potential energy functions), and further physical insight gained by applying a simple successive-approximation method that might be taught in parallel with courses on classical mechanics, ordinary differential equations, and computational physics. We conclude with a survey of how these ideas have been deployed on programmes at a UK university.
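One standard outcome of such a successive-approximation treatment is the pendulum's first anharmonic correction: the period at amplitude theta0 is T ~ T0 * (1 + theta0**2 / 16). The sketch below checks this against the exact elliptic-integral period, computed via the arithmetic-geometric mean (AGM).

```python
import math

# Exact period ratio T/T0 = (2/pi) * K(sin(theta0/2)), with the complete
# elliptic integral K evaluated through the AGM.
def exact_period_ratio(theta0):
    k = math.sin(theta0 / 2.0)
    a, b = 1.0, math.sqrt(1.0 - k * k)
    for _ in range(30):                       # AGM converges quadratically
        a, b = (a + b) / 2.0, math.sqrt(a * b)
    K = math.pi / (2.0 * a)
    return 2.0 * K / math.pi

theta0 = 0.3                                  # amplitude in radians
approx_ratio = 1.0 + theta0 ** 2 / 16.0       # successive-approximation result
exact_ratio = exact_period_ratio(theta0)
```

At this modest amplitude the two ratios agree to better than one part in ten thousand, which is exactly the kind of comparison students can make numerically alongside the analytic derivation.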
Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1997-01-01
The following accomplishments were made during the present reporting period: (1) We expanded our new method, for identifying the presence of absorbing aerosols and simultaneously performing atmospheric correction, to the point where it could be added as a subroutine to the MODIS water-leaving radiance algorithm; (2) We successfully acquired micro pulse lidar (MPL) data at sea during a cruise in February; (3) We developed a water-leaving radiance algorithm module for an approximate correction of the MODIS instrument polarization sensitivity; and (4) We participated in one cruise to the Gulf of Maine, a well known region for mesoscale coccolithophore blooms. We measured coccolithophore abundance, production and optical properties.
Arzola, Cristian; Carvalho, Jose C A; Cubillos, Javier; Ye, Xiang Y; Perlas, Anahi
2013-08-01
Focused assessment of the gastric antrum by ultrasound is a feasible tool to evaluate the quality of the stomach content. We aimed to determine the amount of training an anesthesiologist would need to achieve competence in the bedside ultrasound technique for qualitative assessment of gastric content. Six anesthesiologists underwent a teaching intervention followed by a formative assessment; then learning curves were constructed. Participants received didactic teaching (reading material, picture library, and lecture) and an interactive hands-on workshop on live models directed by an expert sonographer. The participants were instructed on how to perform a systematic qualitative assessment to diagnose one of three distinct categories of gastric content (empty, clear fluid, solid) in healthy volunteers. Individual learning curves were constructed using the cumulative sum method, and competence was defined as a 90% success rate in a series of ultrasound examinations. A predictive model was further developed based on the entire cohort performance to determine the number of cases required to achieve a 95% success rate. Each anesthesiologist performed 30 ultrasound examinations (a total of 180 assessments), and three of the six participants achieved competence. The average number of cases required to achieve 90% and 95% success rates was estimated to be 24 and 33, respectively. With appropriate training and supervision, it is estimated that anesthesiologists will achieve a 95% success rate in bedside qualitative ultrasound assessment after performing approximately 33 examinations.
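The cumulative-sum construction can be sketched as follows. This is a simplified illustration: the scoring weights follow the standard cusum recipe for a 10% acceptable failure rate, and the decision boundary is illustrative, not the study's calibrated value.

```python
# Cusum learning curve: each failure adds (1 - p0) and each success
# subtracts p0, where p0 is the acceptable failure rate (0.10 for a 90%
# success target).  A curve that decays toward zero signals sustained
# competence.
def cusum_curve(outcomes, p0=0.10):
    scores, s = [], 0.0
    for success in outcomes:
        s += -p0 if success else (1.0 - p0)
        s = max(s, 0.0)                   # clamp at zero, as in cusum charts
        scores.append(s)
    return scores

# 30 examinations: three early failures, then consistent success.
outcomes = [False] * 3 + [True] * 27
curve = cusum_curve(outcomes)
competent = curve[-1] < 1.0               # illustrative boundary h = 1.0
```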
Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution
2015-06-08
Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. …sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero
Nonlinear Analysis of Bonded Composite Tubular Lap Joints
NASA Technical Reports Server (NTRS)
Oterkus, E.; Madenci, E.; Smeltzer, S. S., III; Ambur, D. R.
2005-01-01
The present study describes a semi-analytical solution method for predicting the geometrically nonlinear response of a bonded composite tubular single-lap joint subjected to general loading conditions. The transverse shear and normal stresses in the adhesive as well as membrane stress resultants and bending moments in the adherends are determined using this method. The method utilizes the principle of virtual work in conjunction with nonlinear thin-shell theory to model the adherends and a cylindrical shear lag model to represent the kinematics of the thin adhesive layer between the adherends. The kinematic boundary conditions are imposed by employing the Lagrange multiplier method. In the solution procedure, the displacement components for the tubular joint are approximated in terms of non-periodic and periodic B-Spline functions in the longitudinal and circumferential directions, respectively. The approach presented herein represents a rapid-solution alternative to the finite element method. The solution method was validated by comparison against a previously considered tubular single-lap joint. The steep variation of both peeling and shearing stresses near the adhesive edges was successfully captured. The applicability of the present method was also demonstrated by considering tubular bonded lap-joints subjected to pure bending and torsion.
Detection and avoidance of errors in computer software
NASA Technical Reports Server (NTRS)
Kinsler, Les
1989-01-01
The acceptance test errors of a computer software project were analyzed to determine whether the errors could have been detected or avoided in earlier phases of development. GROAGSS (Gamma Ray Observatory Attitude Ground Support System) was selected as the software project to be examined. The development of the software followed the standard Flight Dynamics Software Development methods. GROAGSS was developed between August 1985 and April 1989. The project comprises approximately 250,000 lines of code, of which approximately 43,000 lines are reused from previous projects. GROAGSS had a total of 1715 Change Report Forms (CRFs) submitted during the entire development and testing. These changes contained 936 errors. Of these 936 errors, 374 were found during acceptance testing. These acceptance test errors were first categorized by method of avoidance, including: more clearly written requirements; detailed review; code reading; structural unit testing; and functional system integration testing. The errors were later broken down in terms of effort to detect and correct, class of error, and probability that the prescribed detection method would be successful. These determinations were based on Software Engineering Laboratory (SEL) documents and interviews with the project programmers. A summary of the results of the categorizations is presented. The results suggest that the number of programming errors present at the beginning of acceptance testing can be significantly reduced. The existing development methodology is examined for ways of improvement, and a basis is provided for the definition of a new development/testing paradigm. Monitoring of the new scheme will objectively determine its effectiveness in avoiding and detecting errors.
Feinauer, Christoph; Procaccini, Andrea; Zecchina, Riccardo; Weigt, Martin; Pagnani, Andrea
2014-01-01
In the course of evolution, proteins show a remarkable conservation of their three-dimensional structure and their biological function, leading to strong evolutionary constraints on the sequence variability between homologous proteins. Our method aims at extracting such constraints from rapidly accumulating sequence data, and thereby at inferring protein structure and function from sequence information alone. Recently, global statistical inference methods (e.g. direct-coupling analysis, sparse inverse covariance estimation) have achieved a breakthrough towards this aim, and their predictions have been successfully implemented into tertiary and quaternary protein structure prediction methods. However, due to the discrete nature of the underlying variable (amino-acids), exact inference requires exponential time in the protein length, and efficient approximations are needed for practical applicability. Here we propose a very efficient multivariate Gaussian modeling approach as a variant of direct-coupling analysis: the discrete amino-acid variables are replaced by continuous Gaussian random variables. The resulting statistical inference problem is efficiently and exactly solvable. We show that the quality of inference is comparable or superior to that achieved by mean-field approximations to inference with discrete variables, as done by direct-coupling analysis. This is true for (i) the prediction of residue-residue contacts in proteins, and (ii) the identification of protein-protein interaction partners in bacterial signal transduction. An implementation of our multivariate Gaussian approach is available at the website http://areeweb.polito.it/ricerca/cmp/code. PMID:24663061
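The core of the multivariate Gaussian variant can be illustrated with a toy sketch: once the discrete variables are replaced by Gaussians, the direct couplings are read off the inverse of the empirical covariance matrix. The synthetic data, the small ridge term, and the scoring of pairs by |J_ij| below are illustrative assumptions, not the authors' pipeline (which operates on encoded multiple-sequence alignments).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "alignment": N samples of L continuous variables standing in for
# amino-acid encodings under the Gaussian relaxation of discrete states.
N, L = 2000, 6
x = rng.normal(size=(N, L))
x[:, 3] += 0.8 * x[:, 0]          # plant a direct coupling between 0 and 3

# Empirical covariance; its inverse yields the direct couplings J.
C = np.cov(x, rowvar=False)
# Small ridge term (assumption) since real alignments are undersampled.
J = np.linalg.inv(C + 0.01 * np.eye(L))

# Score each pair by |J_ij|; the planted pair should rank first.
scores = {(i, j): abs(J[i, j]) for i in range(L) for j in range(i + 1, L)}
best_pair = max(scores, key=scores.get)
print(best_pair)                  # -> (0, 3)
```

Because the Gaussian model is exactly solvable, this single matrix inversion replaces the exponential-time inference needed for discrete variables.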
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1984-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Teglas, Russell
1987-01-01
The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.
Monitoring of Crew Activity with FAMOS
NASA Astrophysics Data System (ADS)
Wolf, L.; Cajochen, C.; Bromundt, V.
2007-10-01
The success of long duration space missions, such as manned missions to Mars, depends on high and sustained levels of vigilance and performance of astronauts and operators working in the technology rich environment of a spacecraft. Experiment 'Monitoring of Crew Activity with FAMOS' was set up to obtain operational experience with complementary methods and technologies to assess the alertness/sleepiness status of selected AustroMars crewmembers on a daily basis. We applied a neurobehavioral test battery consisting of 1) Karolinska Sleepiness Scale KSS, 2) Karolinska Drowsiness Test KDT, 3) Psychomotor Vigilance Task PVT, combined with 4) left eye video recordings with an early prototype of the FAMOS Fatigue Monitoring System headset currently being developed by Sowoon Technologies (CH), and 5) Actiwatches that were worn continuously. A test battery required approximately 15 minutes and was repeated up to 4 times daily by 2 to 4 subjects. Here we present the data analysis of methods 1, 2, 3, and 5, while data analysis of method 4 is still in progress.
ANI-1, A data set of 20 million calculated off-equilibrium conformations for organic molecules
NASA Astrophysics Data System (ADS)
Smith, Justin S.; Isayev, Olexandr; Roitberg, Adrian E.
2017-12-01
One of the grand challenges in modern theoretical chemistry is designing and implementing approximations that expedite ab initio methods without loss of accuracy. Machine learning (ML) methods are emerging as a powerful approach to constructing various forms of transferable atomistic potentials. They have been successfully applied in a variety of applications in chemistry, biology, catalysis, and solid-state physics. However, these models are heavily dependent on the quality and quantity of data used in their fitting. Fitting highly flexible ML potentials, such as neural networks, comes at a cost: a vast amount of reference data is required to properly train these models. We address this need by providing access to a large computational DFT database, which consists of more than 20 million off-equilibrium conformations for 57,462 small organic molecules. We believe it will become a new standard benchmark for comparison of current and future methods in the ML potential community.
Application of high resolution synchrotron micro-CT radiation in dental implant osseointegration.
Neldam, Camilla Albeck; Lauridsen, Torsten; Rack, Alexander; Lefolii, Tore Tranberg; Jørgensen, Niklas Rye; Feidenhans'l, Robert; Pinholt, Else Marie
2015-06-01
The purpose of this study was to describe a refined method using high-resolution synchrotron radiation microtomography (SRmicro-CT) to evaluate osseointegration and peri-implant bone volume fraction after titanium dental implant insertion. SRmicro-CT is considered the gold standard for evaluating bone microarchitecture. Its high resolution, high contrast, and excellent signal-to-noise ratio all contribute to the highest spatial resolutions achievable today. Using SRmicro-CT at a voxel size of 5 μm in an experimental goat mandible model, the peri-implant bone volume fraction was found to quickly increase to 50% as the radial distance from the implant surface increased, and levelled out to approximately 80% at a distance of 400 μm. This method has been successful in depicting the bone and cavities in three dimensions, thereby enabling us to give a more precise answer to the fraction of the bone-to-implant contact compared to previous methods. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
Objective: This article presents a new computerized scheme that aims to accurately and robustly separate the left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeting especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between the left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104
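The dynamic programming step at the heart of such separation schemes can be illustrated with a generic minimal-cost seam search. The cost map, the three-neighbor transition, and the backtracking below are a textbook sketch, not the authors' guided variant with automatically selected start and end points.

```python
import numpy as np

def min_cost_path(cost):
    """Dynamic programming: cheapest top-to-bottom path, moving to the
    same or an adjacent column at each step (a generic separation seam)."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    for r in range(1, rows):
        for c in range(cols):
            lo, hi = max(c - 1, 0), min(c + 2, cols)
            acc[r, c] += acc[r - 1, lo:hi].min()
    # Backtrack from the cheapest cell in the last row.
    path = [int(np.argmin(acc[-1]))]
    for r in range(rows - 1, 0, -1):
        c = path[-1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        path.append(lo + int(np.argmin(acc[r - 1, lo:hi])))
    return path[::-1]

# Low cost (dark junction between the lungs) down the middle column.
cost = np.ones((5, 5))
cost[:, 2] = 0.1
print(min_cost_path(cost))   # -> [2, 2, 2, 2, 2]
```

A guided variant restricts the column window around an expected boundary, which is what cuts computation and keeps the seam out of normal lung tissue.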
Xu, Gongxian; Liu, Ying; Gao, Qunwang
2016-02-10
This paper deals with multi-objective optimization of the continuous bio-dissimilation process of glycerol to 1,3-propanediol. In order to maximize the production rate of 1,3-propanediol, maximize the conversion rate of glycerol to 1,3-propanediol, maximize the conversion rate of glycerol, and minimize the concentration of by-product ethanol, we first propose six new multi-objective optimization models that can simultaneously optimize any two of the four objectives above. Then these multi-objective optimization problems are solved by using the weighted-sum and normal-boundary intersection methods respectively. Both the Pareto filter algorithm and removal criteria are used to remove those non-Pareto optimal points obtained by the normal-boundary intersection method. The results show that the normal-boundary intersection method can successfully obtain the approximate Pareto optimal sets of all the proposed multi-objective optimization problems, while the weighted-sum approach cannot achieve the overall Pareto optimal solutions of some multi-objective problems. Copyright © 2015 Elsevier B.V. All rights reserved.
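The weighted-sum scalarization the authors compare against can be sketched on a toy convex bi-objective problem, where sweeping the weight does trace out the Pareto set; its known failure mode — missing points on non-convex fronts — is what motivates the normal-boundary intersection method. The objectives and grid below are illustrative assumptions, not the bioprocess model.

```python
import numpy as np

# Toy bi-objective problem on one decision variable x in [0, 1]:
# f1(x) = x^2, f2(x) = (x - 1)^2. Both convex, so weighted sums work.
xs = np.linspace(0.0, 1.0, 101)
f1, f2 = xs**2, (xs - 1.0)**2

pareto_x = []
for w in np.linspace(0.0, 1.0, 11):
    k = np.argmin(w * f1 + (1.0 - w) * f2)   # weighted-sum scalarization
    pareto_x.append(float(xs[k]))

# Each weight picks a different trade-off point along the Pareto front
# (here the minimizer of the weighted sum is x = 1 - w).
print(sorted(round(v, 2) for v in pareto_x))
```

For a non-convex front, several weights would collapse onto the same solutions and leave gaps, which is exactly the behavior the abstract reports for the weighted-sum approach.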
Testing higher-order Lagrangian perturbation theory against numerical simulation. 1: Pancake models
NASA Technical Reports Server (NTRS)
Buchert, T.; Melott, A. L.; Weiss, A. G.
1993-01-01
We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of quasi-linear scales. The Lagrangian theory of gravitational instability of an Einstein-de Sitter dust cosmogony, investigated and solved up to the third order, is compared with numerical simulations. In this paper we study the dynamics of pancake models as a first step. In previous work the accuracy of several analytical approximations for the modeling of large-scale structure in the mildly non-linear regime was analyzed in the same way, allowing for direct comparison of the accuracy of various approximations. In particular, the Zel'dovich approximation (hereafter ZA) as a subclass of the first-order Lagrangian perturbation solutions was found to provide an excellent approximation to the density field in the mildly non-linear regime (i.e. up to a linear r.m.s. density contrast of sigma is approximately 2). The performance of ZA in hierarchical clustering models can be greatly improved by truncating the initial power spectrum (smoothing the initial data). We here explore whether this approximation can be further improved with higher-order corrections in the displacement mapping from homogeneity. We study a single pancake model (truncated power-spectrum with power-index n = -1) using cross-correlation statistics employed in previous work. We found that for all statistical methods used the higher-order corrections improve the results obtained for the first-order solution up to the stage when sigma (linear theory) is approximately 1. While this improvement can be seen for all spatial scales, later stages retain this feature only above a certain scale which is increasing with time. However, third-order is not much improvement over second-order at any stage.
The total breakdown of the perturbation approach is observed at the stage where sigma (linear theory) is approximately 2, which corresponds to the onset of hierarchical clustering. This success is found at a considerably higher non-linearity than is usual for perturbation theory. Whether a truncation of the initial power-spectrum in hierarchical models retains this improvement will be analyzed in a forthcoming work.
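The Zel'dovich approximation underlying the first-order solutions can be sketched in one dimension: particles move along their initial displacement field, x(q, t) = q + b(t) ψ(q), and the density follows from the Jacobian of the mapping, diverging when a caustic (pancake) forms. The displacement field and growth-factor values below are toy assumptions, not the paper's truncated power-spectrum model.

```python
import numpy as np

# 1-D Zel'dovich approximation. With psi = -0.1 sin(q), a caustic forms
# at q = 0 once b * max|dpsi/dq| reaches 1, i.e. at growth factor b = 10.
q = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
psi = -0.1 * np.sin(q)

peaks = []
for b in (0.0, 5.0, 10.0):
    x = q + b * psi                          # displacement mapping
    dxdq = np.gradient(x, q)                 # Jacobian of the mapping
    peaks.append(float(abs(1.0 / dxdq).max()))  # rho/rho_bar = 1/|dx/dq|
    print(f"b={b:4.1f}  max overdensity ~ {peaks[-1]:.1f}")
```

The overdensity stays finite until shell crossing and then blows up at the caustic, the 1-D analogue of the pancake formation the paper studies.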
Regmi, Rajesh; Lovelock, D. Michael; Hunt, Margie; Zhang, Pengpeng; Pham, Hai; Xiong, Jianping; Yorke, Ellen D.; Goodman, Karyn A.; Rimner, Andreas; Mostafavi, Hassan; Mageras, Gig S.
2014-01-01
Purpose: Certain types of commonly used fiducial markers take on irregular shapes upon implantation in soft tissue. This poses a challenge for methods that assume a predefined shape of markers when automatically tracking such markers in kilovoltage (kV) radiographs. The authors have developed a method of automatically tracking regularly and irregularly shaped markers using kV projection images and assessed its potential for detecting intrafractional target motion during rotational treatment. Methods: Template-based matching used a normalized cross-correlation with simplex minimization. Templates were created from computed tomography (CT) images for phantom studies and from end-expiration breath-hold planning CT for patient studies. The kV images were processed using a Sobel filter to enhance marker visibility. To correct for changes in intermarker relative positions between simulation and treatment that can introduce errors in automatic matching, marker offsets in three dimensions were manually determined from an approximately orthogonal pair of kV images. Two studies in anthropomorphic phantom were carried out, one using a gold cylindrical marker representing regular shape, another using a Visicoil marker representing irregular shape. Automatic matching of templates to cone beam CT (CBCT) projection images was performed to known marker positions in phantom. In patient data, automatic matching was compared to manual matching as an approximate ground truth. Positional discrepancy between automatic and manual matching of less than 2 mm was assumed as the criterion for successful tracking. Tracking success rates were examined in kV projection images from 22 CBCT scans of four pancreas, six gastroesophageal junction, and one lung cancer patients. Each patient had at least one irregularly shaped radiopaque marker implanted in or near the tumor. 
In addition, automatic tracking was tested in intrafraction kV images of three lung cancer patients with irregularly shaped markers during 11 volumetric modulated arc treatments. Purpose-built software developed at our institution was used to create marker templates and track the markers embedded in kV images. Results: Phantom studies showed mean ± standard deviation measurement uncertainty of automatic registration to be 0.14 ± 0.07 mm and 0.17 ± 0.08 mm for Visicoil and gold cylindrical markers, respectively. The mean success rate of automatic tracking with CBCT projections (11 frames per second, fps) of pancreas, gastroesophageal junction, and lung cancer patients was 100%, 99.1% (range 98%–100%), and 100%, respectively. With intrafraction images (approx. 0.2 fps) of lung cancer patients, the success rate was 98.2% (range 97%–100%), and 94.3% (range 93%–97%) using templates from 1.25 mm and 2.5 mm slice spacing CT scans, respectively. Correction of intermarker relative position was found to improve the success rate in two out of eight patients analyzed. Conclusions: The proposed method can track arbitrary marker shapes in kV images using templates generated from a breath-hold CT acquired at simulation. The studies indicate its feasibility for tracking tumor motion during rotational treatment. Investigation of the causes of misregistration suggests that its rate of incidence can be reduced with higher frequency of image acquisition, templates made from smaller CT slice spacing, and correction of changes in intermarker relative positions when they occur. PMID:24989384
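The template-matching core, normalized cross-correlation, can be sketched as follows. This toy version uses an exhaustive search in place of the simplex minimization described above and omits the Sobel pre-filtering; the synthetic image and template are assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def match_template(image, template):
    """Exhaustive NCC search; returns the best (row, col) offset."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos

rng = np.random.default_rng(1)
image = rng.normal(size=(40, 40))
template = image[12:20, 25:33].copy()   # "marker" patch cut from the image
print(match_template(image, template))  # -> (12, 25)
```

Because NCC is invariant to local brightness and contrast shifts, the same template can keep matching an irregular marker across projection angles, which is what makes a simulation-CT template usable on treatment-day kV images.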
NASA Technical Reports Server (NTRS)
Fadel, G. M.
1991-01-01
The two-point exponential approximation method was introduced by Fadel et al. (Fadel, 1990) and tested on structural optimization problems with stress and displacement constraints. The results reported in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two-point exponential approximation method when highly non-linear constraints are used.
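In one dimension, the two-point exponential approximation picks an exponent p so that a Taylor series in the intermediate variable y = x^p matches derivative information at the two most recent design points. The exponent update and the stress-like monomial response below are illustrative assumptions (one common form of the idea, not necessarily the exact formulation of Fadel et al.); for a monomial the approximation becomes exact.

```python
import numpy as np

def tpe_exponent(x0, x1, dg0, dg1):
    """Exponent p chosen so the intermediate variable y = x**p matches
    the derivative at BOTH design points (hypothetical 1-D form)."""
    return 1.0 + np.log(dg0 / dg1) / np.log(x0 / x1)

def tpe_approx(x, x1, g1, dg1, p):
    """First-order Taylor series in y = x**p, expanded about x1."""
    return g1 + dg1 * (x**p - x1**p) * x1**(1.0 - p) / p

# Stress-like response g(x) = x**-3 and its derivative at two designs.
g = lambda x: x**-3.0
dg = lambda x: -3.0 * x**-4.0
x0, x1 = 1.0, 2.0

p = tpe_exponent(x0, x1, dg(x0), dg(x1))
print(p)                                          # -> -3.0 (exact here)
print(tpe_approx(3.0, x1, g(x1), dg(x1), p), g(3.0))
```

With p = 1 this reduces to an ordinary linear Taylor approximation, and with p = -1 to the reciprocal approximation common in stress-constrained sizing; the history-based exponent interpolates between such behaviors.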
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
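The idea of a linearly varying scaling factor can be sketched in one dimension: match β = f_refined/f_crude and its slope at a trusted point, then scale the crude model by the linearized β. The two model functions and the finite-difference slope below are illustrative assumptions, not the beam example of the abstract.

```python
import numpy as np

# Refined (expensive) and crude (cheap) models of the same response.
f_refined = lambda x: np.sin(x) + 0.1 * x**2
f_crude   = lambda x: x - x**3 / 6.0           # low-order surrogate

def gla(x, x0, h=1e-6):
    """Global-local approximation: scale the crude model by a LINEARLY
    varying factor beta(x) ~ beta(x0) + beta'(x0) * (x - x0), where
    beta = f_refined / f_crude is matched at the trusted point x0."""
    beta = lambda t: f_refined(t) / f_crude(t)
    b0 = beta(x0)
    db = (beta(x0 + h) - beta(x0 - h)) / (2.0 * h)   # finite-difference slope
    return (b0 + db * (x - x0)) * f_crude(x)

x0, x = 0.5, 0.8
const_scaled = f_refined(x0) / f_crude(x0) * f_crude(x)   # constant beta
print(abs(gla(x, x0) - f_refined(x)), abs(const_scaled - f_refined(x)))
```

At the match point the approximation reproduces the refined model exactly, and away from it the linear correction to β typically shrinks the error relative to the conventional constant scaling factor.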
Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations
NASA Astrophysics Data System (ADS)
Mamat, M.; Dauda, M. K.; Mohamed, M. A. bin; Waziri, M. Y.; Mohamad, F. S.; Abdullah, H.
2018-03-01
Problems arising in engineering, economics, modelling, industry, computing, and science mostly take the form of nonlinear equations, and numerical solution of such systems is widely applied in those areas of mathematics. Over the years there has been significant theoretical study to develop methods for solving such systems; despite these efforts, the methods developed unfortunately have deficiencies. As a contribution to solving systems of the form F(x) = 0, x ∈ R^n, a derivative-free method based on the classical Davidon-Fletcher-Powell (DFP) update is presented. This is achieved by simply approximating the inverse Hessian matrix Q_{k+1}^{-1} by the scalar matrix θ_k I. The modified method satisfies the descent condition and possesses local superlinear convergence properties. Interestingly, without computing any derivative, the proposed method never failed to converge throughout the numerical experiments. Performance is reported in terms of number of iterations and CPU time on 40 benchmark test problems solved from different initial starting points. With the aid of the squared-norm merit function and a derivative-free line search technique, the approach yields a method for solving symmetric systems of nonlinear equations that significantly reduces CPU time and the number of iterations compared to its counterparts. A comparison between the proposed method and the classical DFP update shows that the proposed method is the top performer, outperforming the existing method in almost all cases. In terms of number of iterations, out of the 40 problems, the proposed method solved 38 successfully (95%) while the classical DFP solved 2 (5%). In terms of CPU time, the proposed method solved 29 of the 40 problems (72.5%) whereas the classical DFP solved 11 (27.5%). The method is valid in terms of derivation, reliable in terms of number of iterations, and accurate in terms of CPU time; it is thus suitable and achieves its objective.
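The structure of the iteration, replacing the inverse Hessian with the scalar matrix θ_k I, can be sketched on a small symmetric linear test system. The Barzilai-Borwein-style secant rule for θ_k and the omission of the line search below are assumptions for the sketch; the paper's exact θ_k update and safeguards may differ.

```python
import numpy as np

def solve_spectral(F, x0, iters=200, tol=1e-8):
    """Derivative-free iteration x+ = x - theta * F(x): the inverse
    Hessian is replaced by theta * I, with theta refreshed from the
    latest step via a secant quotient (an assumption for this sketch)."""
    x, fx, theta = x0.astype(float), F(x0), 1.0
    for _ in range(iters):
        if np.linalg.norm(fx) < tol:
            break
        x_new = x - theta * fx
        f_new = F(x_new)
        s, y = x_new - x, f_new - fx
        if abs(s @ y) > 1e-16:
            theta = (s @ s) / (s @ y)      # scalar "inverse Hessian"
        x, fx = x_new, f_new
    return x

# Symmetric test system F(x) = A x - b = 0 with symmetric positive A.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = solve_spectral(lambda v: A @ v - b, np.zeros(2))
print(x, np.linalg.solve(A, b))
```

No Jacobian or Hessian is ever formed: each step costs one function evaluation plus vector arithmetic, which is the source of the CPU-time savings the abstract reports.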
ERIC Educational Resources Information Center
Lewis, Kenneth D.
This address was presented to a conference celebrating the fifteenth anniversary of The United Way's "Success By 6" early childhood development initiative. The address acknowledges that the American public education system has traditionally operated on the assumption that children enter kindergarten at approximately the same educational…
ERIC Educational Resources Information Center
Universal Service Administrative Company, 2008
2008-01-01
This report includes examples of how Universal Service Fund support is used by beneficiaries across the country. Included in this version are approximately 140 success stories of how the Universal Service Fund is helping to improve connectivity in the United States. This report is updated quarterly, as Universal Service Administrative Company…
Causes and consequences of herbivory on prairie lupine (Lupinus lepidus) in early primary succession
John G. Bishop; William F. Fagan; John G. Schade; Charles M. Crisafulli
2005-01-01
Primary succession, the formation and change of ecological communities in locations initially lacking organisms or other biological materials, has been an important research focus for at least a century (Cowles 1899; Griggs 1933; Eggler 1941; Crocker and Major 1955; Eggler 1959; Miles and Walton 1993; Walker and del Moral 2003). At approximately 60 km2...
ERIC Educational Resources Information Center
Luce, R. Duncan; Steingrimsson, Ragnar; Narens, Louis
2010-01-01
Most studies concerning psychological measurement scales of intensive attributes have concluded that these scales are of ratio type and that the psychophysical function is closely approximated by a power function. Experiments show, for such cases, that a commutativity property must hold under either successive increases or successive decreases…
ERIC Educational Resources Information Center
Universal Service Administrative Company, 2007
2007-01-01
This report shows how Universal Service Fund support for schools and libraries is used by school districts and libraries around the country. Highlighted are approximately 190 success stories of program participants that have come to rely on the USF to expand educational opportunities for students through better use of telecommunications technology…
ERIC Educational Resources Information Center
Brooks, Dianne
2009-01-01
Completion of postsecondary education frequently builds upon a student's successful academic and personal experience during high school. For students with hearing loss, healthy adjustment to hearing loss is a key lifelong developmental process. The vast majority (94%) of approximately 1.1 million K-12 students with hearing loss are educated in…
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
NASA Astrophysics Data System (ADS)
Foster, B.; Heath, G. P.; Llewellyn, T. J.; Gingrich, D. M.; Harnew, N.; Hallam-Baker, P. M.; Khatri, T.; McArthur, I. C.; Morawitz, P.; Nash, J.; Shield, P. D.; Topp-Jorgensen, S.; Wilson, F. F.; Allen, D. B.; Carter, R. C.; Jeffs, M. D.; Morrissey, M. C.; Quinton, S. P. H.; Lane, J. B.; Postranecky, M.
1993-05-01
The Central Tracking Detector of the ZEUS experiment employs a time difference technique to measure the z coordinate of each hit. The method provides fast, three-dimensional space point measurements which are used as input to all levels of the ZEUS trigger. Such a tracking trigger is essential in order to discriminate against events with vertices lying outside the nominal electron-proton interaction region. Since the beam crossing interval of the HERA collider is 96 ns, all data must be pipelined through the front-end readout electronics. Subsequent data acquisition employs a novel technique which utilizes a network of approximately 120 INMOS transputers to process the data in parallel. The z-by-timing method and its data acquisition have been employed successfully in recording and reconstructing tracks from electron-proton interactions in ZEUS.
Autogenic feedback training experiment: A preventative method for space motion sickness
NASA Technical Reports Server (NTRS)
Cowings, Patricia S.
1993-01-01
Space motion sickness is a disorder which produces symptoms similar to those of motion sickness on Earth. This syndrome has affected approximately 50 percent of all astronauts and cosmonauts exposed to microgravity in space, but it differs from what is commonly known as motion sickness in a number of critical ways. There is currently no ground-based method for predicting susceptibility to motion sickness in space. Antimotion sickness drugs have had limited success in preventing or counteracting symptoms in space, and frequently caused debilitating side effects. The objectives were: (1) to evaluate the effectiveness of Autogenic-Feedback Training as a countermeasure for space motion sickness; (2) to compare physiological data and in-flight symptom reports to ground-based motion sickness data; and (3) to predict susceptibility to space motion sickness based on pre-flight data of each treatment group crew member.
NASA Technical Reports Server (NTRS)
Mitchell, C. E.; Eckert, K.
1979-01-01
A program for predicting the linear stability of liquid propellant rocket engines is presented. The underlying model assumptions and analytical steps necessary for understanding the program and its input and output are also given. The rocket engine is modeled as a right circular cylinder with an injector having a concentrated combustion zone, a nozzle, finite mean flow, and either an acoustic admittance or the sensitive time lag theory. The resulting partial differential equations are combined into two governing integral equations by the use of the Green's function method. These equations are solved using a successive approximation technique for the small amplitude (linear) case. The computational method used as well as the various user options available are discussed. Finally, a flow diagram, sample input and output for a typical application, and a complete program listing for program MODULE are presented.
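The successive approximation step can be sketched generically: in the small-amplitude (linear) case, a discretized integral equation takes a fixed-point form x = f + Mx that is iterated until the update stalls. The toy operator below is an assumption standing in for the Green's function kernels of the stability program.

```python
import numpy as np

def successive_approximation(G, x0, tol=1e-10, max_iter=500):
    """Fixed-point iteration x_{n+1} = G(x_n); converges when G is a
    contraction, as in the linear small-amplitude case."""
    x = x0
    for n in range(max_iter):
        x_new = G(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    raise RuntimeError("no convergence")

# Toy discretized integral equation written as x = f + M x, with the
# spectral radius of M below 1 so the iteration contracts.
M = np.array([[0.2, 0.1], [0.0, 0.3]])
f = np.array([1.0, 2.0])
x, iters = successive_approximation(lambda v: f + M @ v, np.zeros(2))
print(x, iters)
```

Each sweep refines the previous estimate by one application of the integral operator, so the cost per iteration is a single kernel evaluation rather than a full matrix factorization.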
Alhama, José; Romero-Ruiz, Antonio; López-Barea, Juan
2006-02-24
In this paper, we describe a highly specific, sensitive and reliable method for total metallothionein (MT) quantification by RP-HPLC coupled to fluorescence detection, following reaction with monobromobimane of thiols from metal-depleted MT after heat-denaturation of extracts in the presence of sodium dodecyl sulphate (SDS). SDS-polyacrylamide gel electrophoresis (SDS-PAGE) confirmed the identity of the resolved peak (tR = 16.44) as MT: a highly fluorescent protein of approximately 8.3 kDa, in agreement with the high thiol content and low MT size. Other heat-resistant and Cys-containing proteins of 35 kDa were efficiently separated. The new method was successfully used to quantify MT content in the digestive gland of clams from southern Spanish coastal sites with different metal levels, and is proposed as a tool for using MTs as a biomarker in monitoring programmes.
Ultrasonic measurements of thin zinc layers on concrete
NASA Astrophysics Data System (ADS)
Jansen, Henri; Brooks, Bill; Nguyen, Vinh; Koretsky, Milo
2008-05-01
In order to protect bridges at the coast from corrosion, a thin layer (approximately 0.5 mm) of zinc is sprayed on the concrete of the bridge. When this zinc layer is electrically connected to the reinforcing steel (rebar) and placed at a positive potential with respect to the rebar, oxidation is favored at the zinc layer and reduced at the rebar. The resulting protection of the rebar fails when the zinc layer delaminates from the concrete or when the zinc oxidation product layer becomes too thick. We have used ultrasonic detection to investigate the properties of the zinc layer. This method has been applied very successfully in the semiconductor industry. We present the details of the method and the expected response. Unfortunately, we are not able to measure changes in the zinc layer, because either the frequency we use (10-20 MHz) is too low, or scattering in the concrete is a dominant effect.
Blueberry (Vaccinium corymbosum L.).
Song, Guo-Qing
2015-01-01
Vaccinium consists of approximately 450 species, of which highbush blueberry (Vaccinium corymbosum) is one of the three major Vaccinium fruit crops (i.e., blueberry, cranberry, and lingonberry) domesticated in the twentieth century. In blueberry, adventitious shoot regeneration using leaf explants has been the most desirable regeneration system to date; Agrobacterium tumefaciens-mediated transformation is the major gene delivery method, and effective selection has been reported using either the neomycin phosphotransferase II gene (nptII) or the bialaphos resistance (bar) gene as a selectable marker. The A. tumefaciens-mediated transformation protocol described in this chapter is based on combining the optimal conditions for efficient plant regeneration, reliable gene delivery, and effective selection. The protocol has led to successful regeneration of transgenic plants from leaf explants of four commercially important highbush blueberry cultivars for multiple purposes, providing a powerful approach to supplement conventional breeding methods for blueberry by introducing genes of interest.
Fukuda, Muneyuki; Tomimatsu, Satoshi; Nakamura, Kuniyasu; Koguchi, Masanari; Shichi, Hiroyasu; Umemura, Kaoru
2004-01-01
A new method to prepare micropillar specimens with a high aspect ratio that is suitable for three-dimensional scanning transmission electron microscopy (3D-STEM) was developed. The key features of the micropillar fabrication are: first, microsampling to extract a small piece including the structure of interest in an IC chip, and second, an ion beam with an incident direction of 60 degrees to the pillar's axis, which enables parallel sidewalls of the pillar to be produced with a high aspect ratio. A memory-cell structure (length: 6 microm; width: 300 x 500 nm) was fabricated in the micropillar and observed from various directions with a 3D-STEM. A planiform capacitor covered with granular surfaces and a solid crossing gate and metal lines was successfully observed three-dimensionally at a resolution of approximately 5 nm.
Challenges and perspectives of metaproteomic data analysis.
Heyer, Robert; Schallert, Kay; Zoun, Roman; Becher, Beatrice; Saake, Gunter; Benndorf, Dirk
2017-11-10
In nature, microorganisms live in complex microbial communities. Comprehensive taxonomic and functional knowledge about microbial communities supports medical and technical applications such as fecal diagnostics as well as the operation of biogas plants or wastewater treatment plants. Furthermore, microbial communities are crucial for the global carbon and nitrogen cycles in soil and in the ocean. Among the methods available for the investigation of microbial communities, metaproteomics can approximate the activity of microorganisms by investigating the protein content of a sample. Although metaproteomics is a very powerful method, issues within the bioinformatic evaluation impede its success. In particular, the construction of databases for protein identification, the grouping of redundant proteins, and taxonomic and functional annotation pose major challenges. Furthermore, the growing amount of data within a metaproteomics study requires dedicated algorithms and software. This review summarizes recent metaproteomics software and addresses the introduced issues in detail. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Sorokin, Anatoly; Selkov, Gene; Goryanin, Igor
2012-07-16
The volume of the experimentally measured time series data is rapidly growing, while storage solutions offering better data types than simple arrays of numbers or opaque blobs for keeping series data are sorely lacking. A number of indexing methods have been proposed to provide efficient access to time series data, but none has so far been integrated into a tried-and-proven database system. To explore the possibility of such integration, we have developed a data type for time series storage in PostgreSQL, an object-relational database system, and equipped it with an access method based on SAX (Symbolic Aggregate approXimation). This new data type has been successfully tested in a database supporting a large-scale plant gene expression experiment, and it was additionally tested on a very large set of simulated time series data. Copyright © 2011 Elsevier B.V. All rights reserved.
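The SAX transform at the heart of that access method is compact enough to sketch. The snippet below is a minimal, generic illustration (not the PostgreSQL data type itself); the breakpoints are the standard Gaussian quartiles for a hypothetical four-symbol alphabet:

```python
import numpy as np

def sax(series, n_segments, breakpoints=(-0.6745, 0.0, 0.6745), alphabet="abcd"):
    """Convert a numeric time series to a SAX word (toy sketch, 4-symbol alphabet).

    Steps: z-normalize, reduce with Piecewise Aggregate Approximation (PAA),
    then map each segment mean to a symbol via Gaussian breakpoints.
    """
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                      # z-normalize
    segments = np.array_split(x, n_segments)          # PAA segments
    paa = np.array([seg.mean() for seg in segments])  # segment means
    idx = np.searchsorted(breakpoints, paa)           # bin each mean
    return "".join(alphabet[i] for i in idx)

word = sax(np.sin(np.linspace(0, 2 * np.pi, 64)), n_segments=8)
print(word, len(word))
```

Series with similar shapes map to similar words, which is what makes the symbolic representation indexable with ordinary string techniques.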
LQR-Based Optimal Distributed Cooperative Design for Linear Discrete-Time Multiagent Systems.
Zhang, Huaguang; Feng, Tao; Liang, Hongjing; Luo, Yanhong
2017-03-01
In this paper, a novel linear quadratic regulator (LQR)-based optimal distributed cooperative design method is developed for synchronization control of general linear discrete-time multiagent systems on a fixed, directed graph. Sufficient conditions are derived for synchronization, which restrict the graph eigenvalues to a bounded circular region in the complex plane. The synchronizing speed issue is also considered, and it turns out that the synchronizing region shrinks as the synchronizing speed increases. To obtain a more desirable synchronizing capacity, the weighting matrices are selected by fully utilizing the guaranteed gain margin of the optimal regulators. Based on the developed LQR-based cooperative design framework, an approximate dynamic programming technique is successfully introduced to address the (partially or completely) model-free cooperative design problem for linear multiagent systems. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design methods.
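The single-agent LQR gain that anchors such a design can be computed by iterating the discrete-time Riccati difference equation to its fixed point. This is a generic sketch with a hypothetical double-integrator agent, not the paper's multiagent construction:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain by iterating the Riccati difference equation
    to a fixed point (sketch; names follow the usual LQR notation)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # K = (R+B'PB)^{-1} B'PA
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
    return K, P

# Hypothetical double-integrator agent dynamics
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K, P = dlqr(A, B, np.eye(2), np.array([[1.0]]))
print(np.max(np.abs(np.linalg.eigvals(A - B @ K))))  # closed-loop spectral radius
```

With the usual stabilizability/detectability assumptions the closed-loop matrix A - BK is Schur stable, i.e. its spectral radius is below one.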
Decomposition of conditional probability for high-order symbolic Markov chains.
Melnik, S S; Usatenko, O V
2017-07-01
The main goal of this paper is to develop an estimate for the conditional probability function of random stationary ergodic symbolic sequences with elements belonging to a finite alphabet. We elaborate on a decomposition procedure for the conditional probability function of sequences considered to be high-order Markov chains. We represent the conditional probability function as the sum of multilinear memory function monomials of different orders (from zero up to the chain order). This allows us to introduce a family of Markov chain models and to construct artificial sequences via a method of successive iterations, taking into account at each step increasingly high correlations among random elements. At weak correlations, the memory functions are uniquely expressed in terms of the high-order symbolic correlation functions. The proposed method fills the gap between two approaches, namely the likelihood estimation and the additive Markov chains. The obtained results may have applications for sequential approximation of artificial neural network training.
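The zeroth rung of such a construction can be made concrete: estimate first-order conditional probabilities from a symbolic sequence, then build an artificial sequence by successive draws. This toy first-order example only illustrates the idea, not the paper's full memory-function decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

def transition_probs(seq, n_symbols):
    """Empirical first-order conditional probabilities P(a_t | a_{t-1})."""
    counts = np.zeros((n_symbols, n_symbols))
    for prev, cur in zip(seq[:-1], seq[1:]):
        counts[prev, cur] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def generate(P, length, start=0):
    """Construct an artificial sequence by successive conditional draws."""
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(len(P), p=P[out[-1]]))
    return out

# Persistent binary chain: stays in its current state 80% of the time
true_P = np.array([[0.8, 0.2], [0.2, 0.8]])
seq = generate(true_P, 20000)
est_P = transition_probs(seq, 2)
print(np.round(est_P, 2))
```

In the paper's framework, higher-order memory-function terms would add corrections conditioned on progressively longer histories; the estimator above is only the first member of that family.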
Schuck, P; Millar, D B
1998-05-15
A new method is described that allows measurement of the molar mass of the solute within 15 to 30 min after the start of a conventional long-column sedimentation equilibrium experiment. A series of scans of the concentration distribution in close vicinity of the meniscus, taken in rapid succession after the start of the centrifuge run, is analyzed by direct fitting using the Lamm equation and the Svedberg equation. In the case of a single solute, this analysis of the initial depletion at the meniscus reveals its buoyant molar mass and sedimentation coefficient with an accuracy of approximately 10% and provides gross information about sample heterogeneity. This method can be used to study macromolecules that do not possess the prolonged stability needed in conventional sedimentation equilibrium experiments, and it can increase the efficiency of sedimentation equilibrium experiments on previously uncharacterized samples.
Optimization of β-cyclodextrin cross-linked polymer for monitoring of quercetin
NASA Astrophysics Data System (ADS)
Zhu, Xiashi; Ping, Wenhui
2014-11-01
A novel method for the separation/analysis of quercetin is described, based on the inclusion interactions of β-cyclodextrin cross-linked polymer (β-CDCP) with quercetin (Qu) and the adsorption behavior of Qu on β-CDCP. The inclusion interaction of β-CDCP with Qu was studied by FTIR, TGA and 13C NMR. Under the optimum conditions, the preconcentration factor of the proposed method was approximately 8.8, and the β-CDCP could be reused up to 30 times with good recovery. The linear range, limit of detection (LOD) and relative standard deviation (RSD) were found to be 0.10-12.0 μg mL-1, 4.6 ng mL-1 and 3.10% (n = 3, c = 2.0 μg mL-1), respectively. The technique was successfully applied to the determination of Qu in real samples.
A 30-MHz piezo-composite ultrasound array for medical imaging applications.
Ritter, Timothy A; Shrout, Thomas R; Tutwiler, Rick; Shung, K Kirk
2002-02-01
Ultrasound imaging at frequencies above 20 MHz is capable of achieving improved resolution in clinical applications requiring limited penetration depth. High-frequency arrays that allow real-time imaging are desired for these applications but are not yet available. In this work, a method for fabricating fine-scale 2-2 composites suitable for 30-MHz linear array transducers was successfully demonstrated. High thickness coupling, low mechanical loss, and moderate electrical loss were achieved. This piezo-composite was incorporated into a 30-MHz array that included acoustic matching, an elevation focusing lens, electrical matching, and an air-filled kerf between elements. Bandwidths near 60%, 15-dB insertion loss, and crosstalk less than -30 dB were measured. Images of both a phantom and an ex vivo human eye were acquired using a synthetic aperture reconstruction method, resulting in measured lateral and axial resolutions of approximately 100 microm.
Green's functions in equilibrium and nonequilibrium from real-time bold-line Monte Carlo
NASA Astrophysics Data System (ADS)
Cohen, Guy; Gull, Emanuel; Reichman, David R.; Millis, Andrew J.
2014-03-01
Green's functions for the Anderson impurity model are obtained within a numerically exact formalism. We investigate the limits of analytical continuation for equilibrium systems, and show that with real time methods even sharp high-energy features can be reliably resolved. Continuing to an Anderson impurity in a junction, we evaluate two-time correlation functions, spectral properties, and transport properties, showing how the correspondence between the spectral function and the differential conductance breaks down when nonequilibrium effects are taken into account. Finally, a long-standing dispute regarding this model has involved the voltage splitting of the Kondo peak, an effect which was predicted over a decade ago by approximate analytical methods but never successfully confirmed by numerics. We settle the issue by demonstrating in an unbiased manner that this splitting indeed occurs. Yad Hanadiv-Rothschild Foundation, TG-DMR120085, TG-DMR130036, NSF CHE-1213247, NSF DMR 1006282, DOE ER 46932.
NASA Astrophysics Data System (ADS)
Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng
2013-04-01
This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. 
Here we assess the performance of different preconditioners for estimating the inverse Hessian of a large-scale 4D-Var system. The impact of using the diagonal preconditioners proposed by Gilbert and Lemaréchal (1989) instead of the usual Oren-Spedicato scalar will be presented first. We will also introduce new hybrid methods that combine randomization estimates of the analysis error variance with L-BFGS diagonal updates to improve the inverse Hessian approximation. Results from these new algorithms will be evaluated against standard large-ensemble Monte-Carlo simulations. The methods explored here are applied to the problem of inferring global atmospheric CO2 fluxes using remote sensing observations, and are intended to be integrated with the future NASA Carbon Monitoring System.
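The L-BFGS inverse-Hessian approximation discussed above is applied through the standard two-loop recursion, with the preconditioner entering as the initial scaling of the residual vector. A generic sketch (the quadratic sanity check and all names are illustrative, not this 4D-Var system's code):

```python
import numpy as np

def lbfgs_apply(grad, s_list, y_list, gamma):
    """Two-loop recursion: apply the implicit L-BFGS inverse-Hessian
    approximation, built from the stored (s, y) pairs, to a vector.
    `gamma` is the initial diagonal scaling (the preconditioner), e.g.
    the scalar s'y / y'y."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    r = gamma * q                                  # H0 ~ gamma * I
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ r) / (y @ s)
        r += (a - b) * s
    return r

# Sanity check on a quadratic f(x) = 0.5 x'Hx: with n independent pairs
# the recursion reproduces H^{-1} g exactly.
H = np.diag([1.0, 4.0, 9.0])
s_list = [np.eye(3)[i] for i in range(3)]          # steps
y_list = [H @ s for s in s_list]                   # gradient differences
g = np.array([1.0, 1.0, 1.0])
gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
r = lbfgs_apply(g, s_list, y_list, gamma)
print(r, np.linalg.solve(H, g))
```

The quadratic check is a convenient unit test for any preconditioning variant: swapping in a diagonal update for `gamma` changes only the `r = gamma * q` line.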
An approximation method for configuration optimization of trusses
NASA Technical Reports Server (NTRS)
Hansen, Scott R.; Vanderplaats, Garret N.
1988-01-01
Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.
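The sequential-approximation loop the abstract describes — linearize the member forces about the current design, minimize weight subject to the linearized stress constraints, and repeat — can be sketched on a deliberately tiny, hypothetical two-bar example (all numbers invented for illustration; the paper's trusses, buckling constraints and load cases are far richer):

```python
import numpy as np
from scipy.optimize import minimize

# Toy indeterminate truss: two parallel bars (equal length L) share load P,
# so member forces depend on the areas: F_i = P * A_i / (A_1 + A_2).
L, P, sigma_allow = 1.0, 100.0, 250.0

def forces(A):
    return P * A / A.sum()

def force_jacobian(A):
    # dF_i/dA_j for the toy model, used in the first-order Taylor expansion
    s = A.sum()
    return P * (np.eye(2) / s - np.outer(A, np.ones(2)) / s**2)

A = np.array([1.0, 1.0])                     # starting areas
for _ in range(8):                           # successive approximate problems
    F0, J = forces(A), force_jacobian(A)
    A0 = A.copy()
    cons = [{"type": "ineq",
             "fun": lambda A_, i=i: sigma_allow * A_[i]
                    - (F0[i] + J[i] @ (A_ - A0))}   # linearized stress limit
            for i in range(2)]
    res = minimize(lambda A_: L * A_.sum(),  # weight (unit density)
                   A, bounds=[(1e-3, None)] * 2, constraints=cons)
    A = res.x
print(A, forces(A) / A)                      # final areas and member stresses
```

Each pass solves a cheap approximate problem; the exact analysis is only re-run to re-linearize, which is the efficiency argument the abstract makes.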
NASA Technical Reports Server (NTRS)
Wood, C. A.
1974-01-01
For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well-known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN IV, and comparisons in time and accuracy are given.
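The G.C.D. idea — divide p by gcd(p, p') so that multiple zeros become simple before a root finder is applied — can be sketched in a few lines (in Python rather than the original FORTRAN; the naive Euclidean algorithm below is numerically fragile and only adequate for small demos with exact coefficients):

```python
import numpy as np

def poly_gcd(a, b, tol=1e-8):
    """Euclidean algorithm on coefficient arrays (highest degree first).
    Numerically fragile in general; fine for this small exact demo."""
    while len(b) > 1 or abs(b[0]) > tol:
        _, r = np.polydiv(a, b)
        r = np.trim_zeros(r, "f")
        if r.size == 0 or np.max(np.abs(r)) < tol:
            break
        a, b = b, r
    return b / b[0]                          # make monic

# p(x) = (x - 1)^2 (x - 3): Newton's method degrades near the double root,
# but p / gcd(p, p') has only simple zeros.
p = np.array([1.0, -5.0, 7.0, -3.0])
g = poly_gcd(p, np.polyder(p))               # ~ (x - 1)
q, _ = np.polydiv(p, g)                      # square-free part ~ (x-1)(x-3)
print(np.roots(q))
```

Repeating the division, as in the repeated-G.C.D. method, additionally recovers each zero's multiplicity.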
NASA Astrophysics Data System (ADS)
Yoo, S. H.
2017-12-01
Monitoring seismologists have successfully used seismic coda for event discrimination and yield estimation for over a decade. In practice seismologists typically analyze long-duration, S-coda signals with high signal-to-noise ratios (SNR) at regional and teleseismic distances, since the single back-scattering model reasonably predicts decay of the late coda. However, seismic monitoring requirements are shifting towards smaller, locally recorded events that exhibit low SNR and short signal lengths. To be successful at characterizing events recorded at local distances, we must utilize the direct-phase arrivals, as well as the earlier part of the coda, which is dominated by multiple forward scattering. To remedy this problem, we have developed a new hybrid method known as full-waveform envelope template matching to improve predicted envelope fits over the entire waveform and account for direct-wave and early coda complexity. We accomplish this by including a multiple forward-scattering approximation in the envelope modeling of the early coda. The new hybrid envelope templates are designed to fit local and regional full waveforms and produce low-variance amplitude estimates, which will improve yield estimation and discrimination between earthquakes and explosions. To demonstrate the new technique, we applied our full-waveform envelope template-matching method to the six known North Korean (DPRK) underground nuclear tests and four aftershock events following the September 2017 test. We successfully discriminated the event types and estimated the yield for all six nuclear tests. We also applied the same technique to the 2015 Tianjin explosions in China, and another suspected low-yield explosion at the DPRK test site on May 12, 2010. Our results show that the new full-waveform envelope template-matching method significantly improves upon longstanding single-scattering coda prediction techniques. 
More importantly, the new method allows monitoring seismologists to extend coda-based techniques to lower magnitude thresholds and low-yield local explosions.
Test particle propagation in magnetostatic turbulence. 2: The local approximation method
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.
1976-01-01
An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.
Double power series method for approximating cosmological perturbations
NASA Astrophysics Data System (ADS)
Wren, Andrew J.; Malik, Karim A.
2017-04-01
We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.
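The WKB idea the FSN method extends can be stated schematically for a single oscillator equation (generic notation, not the paper's system-of-equations version):

```latex
% Model: a fast single oscillator  \epsilon^2 y'' + \omega^2(t)\, y = 0,
% with 0 < \epsilon \ll 1.  Leading-order WKB approximation:
y(t) \;\approx\; \frac{C_\pm}{\sqrt{\omega(t)}}\,
  \exp\!\left( \pm\frac{i}{\epsilon} \int^{t}\! \omega(\tau)\, d\tau \right)
```

The FSN double power series generalizes this ansatz to coupled first-order systems, with the small parameter playing the role of the inverse wave-number on subhorizon scales.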
NASA Astrophysics Data System (ADS)
Wu, Lan; Sun, Wei; Wang, Bo; Zhao, Haiyu; Li, Yaoli; Cai, Shaoqing; Xiang, Li; Zhu, Yingjie; Yao, Hui; Song, Jingyuan; Cheng, Yung-Chi; Chen, Shilin
2015-08-01
Traditional herbal medicines adulterated and contaminated with plant materials from the Aristolochiaceae family, which contain aristolochic acids (AAs), cause aristolochic acid nephropathy. Approximately 256 traditional Chinese patent medicines containing Aristolochiaceous materials are still being sold in Chinese markets today. In order to protect consumers from the health risks posed by AAs, the hidden assassins, efficient methods to differentiate Aristolochiaceous herbs from their putative substitutes need to be established. In this study, 158 Aristolochiaceous samples representing 46 species and four genera as well as 131 non-Aristolochiaceous samples representing 33 species, 20 genera and 12 families were analyzed using DNA barcodes based on the ITS2 and psbA-trnH sequences. Aristolochiaceous materials and their non-Aristolochiaceous substitutes were successfully identified using BLAST1, the nearest distance method and the neighbor-joining (NJ) tree. In addition, based on the sequence information of ITS2, we developed a Real-Time PCR assay which successfully identified herbal material from the Aristolochiaceae family. Using Ultra-High Performance Liquid Chromatography-High Resolution Mass Spectrometry (UHPLC-HR-MS), we demonstrated that most representatives of the Aristolochiaceae family contain toxic AAs. Therefore, integrating DNA barcodes, Real-Time PCR assays using TaqMan probes and UHPLC-HR-MS provides an efficient and reliable authentication system to protect consumers from the health risks posed by the hidden assassins (AAs).
Ozer, Abdullah; Tome, Jacob M.; Friedman, Robin C.; Gheba, Dan; Schroth, Gary P.; Lis, John T.
2016-01-01
Because RNA-protein interactions play a central role in a wide array of biological processes, methods that enable a quantitative assessment of these interactions in a high-throughput manner are in great demand. Recently, we developed the High Throughput Sequencing-RNA Affinity Profiling (HiTS-RAP) assay, which couples sequencing on an Illumina GAIIx with the quantitative assessment of one or several proteins’ interactions with millions of different RNAs in a single experiment. We have successfully used HiTS-RAP to analyze interactions of EGFP and NELF-E proteins with their corresponding canonical and mutant RNA aptamers. Here, we provide a detailed protocol for HiTS-RAP, which can be completed in about a month (8 days hands-on time), including the preparation and testing of recombinant proteins and DNA templates, clustering DNA templates on a flowcell, high-throughput sequencing and protein binding with the GAIIx, and finally data analysis. We also highlight aspects of HiTS-RAP that can be further improved, and points of comparison between HiTS-RAP and two other recently developed methods, RNA-MaP and RBNS. A successful HiTS-RAP experiment provides the sequence and binding curves for approximately 200 million RNAs in a single experiment. PMID:26182240
NASA Technical Reports Server (NTRS)
Garrett, L. B.; Smith, G. L.; Perkins, J. N.
1972-01-01
An implicit finite-difference scheme is developed for the fully coupled solution of the viscous, radiating stagnation-streamline equations, including strong blowing. Solutions are presented for both air injection and injection of carbon-phenolic ablation products into air at conditions near the peak radiative heating point in an earth entry trajectory from interplanetary return missions. A detailed radiative-transport code that accounts for the important radiative exchange processes for gaseous mixtures in local thermodynamic and chemical equilibrium is utilized in the study. With a minimum number of assumptions for the initially unknown parameters and profile distributions, convergent solutions to the full stagnation-line equations are rapidly obtained by a method of successive approximations. Damping of selected profiles is required to aid convergence of the solutions for massive blowing. It is shown that certain finite-difference approximations to the governing differential equations stabilize and improve the solutions. Detailed comparisons are made with the numerical results of previous investigations. Results of the present study indicate lower radiative heat fluxes at the wall for carbon-phenolic ablation than previously predicted.
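The role of damping in such successive-approximation schemes can be seen in a one-line model problem: when the plain fixed-point map is unstable at the solution, an under-relaxed update can still converge (a generic scalar illustration, not the stagnation-line solver itself):

```python
def damped_successive(g, x0, omega, tol=1e-12, max_iter=500):
    """Successive approximation x_{k+1} = (1 - omega)*x_k + omega*g(x_k);
    omega = 1 is the plain iteration, omega < 1 damps (under-relaxes) it."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = (1.0 - omega) * x + omega * g(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

g = lambda x: 3.9 * x * (1.0 - x)    # fixed point near 0.744 with |g'| ~ 1.9
x_plain, n_plain = damped_successive(g, 0.3, omega=1.0)   # never settles
x_damp, n_damp = damped_successive(g, 0.3, omega=0.5)     # converges
print(x_damp, n_damp, n_plain)
```

The damped map's slope at the fixed point is (1 - omega) + omega*g', so a suitable omega pulls an unstable iteration inside the unit contraction bound — the same mechanism by which damping selected profiles aids convergence for massive blowing.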
Algebraic grid generation using tensor product B-splines. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Saunders, B. V.
1985-01-01
Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This thesis develops an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping, and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid, is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.
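Schoenberg's variation diminishing approximation has an especially simple recipe: the spline coefficients are just the function's values at the knot averages (Greville abscissae). A small SciPy sketch with hypothetical knots, not the thesis's grids:

```python
import numpy as np
from scipy.interpolate import BSpline

# Schoenberg's variation-diminishing approximation: take the coefficients
# to be the function's values at the Greville abscissae (knot averages).
k = 3                                        # cubic
t = np.concatenate([[0.0] * (k + 1), np.linspace(0.2, 0.8, 4), [1.0] * (k + 1)])
greville = np.array([t[i + 1:i + k + 1].mean() for i in range(len(t) - k - 1)])
f = lambda x: np.sin(2 * np.pi * x)
spline = BSpline(t, f(greville), k)

x = np.linspace(0.0, 1.0, 5)
print(np.abs(spline(x) - f(x)))              # crude but shape-preserving
```

The approximation is crude pointwise but inherits the function's monotonicity and convexity, which is the shape-preserving property that matters for grid smoothness.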
Estimating False Discovery Proportion Under Arbitrary Covariance Dependence*
Fan, Jianqing; Han, Xu; Gu, Weijie
2012-01-01
Multiple hypothesis testing is a fundamental problem in high dimensional inference, with wide applications in many scientific fields. In genome-wide association studies, tens of thousands of tests are performed simultaneously to find if any SNPs are associated with some traits, and those tests are correlated. When test statistics are correlated, false discovery control becomes very challenging under arbitrary dependence. In the current paper, we propose a novel method based on principal factor approximation, which successfully subtracts the common dependence and significantly weakens the correlation structure, to deal with an arbitrary dependence structure. We derive an approximate expression for the false discovery proportion (FDP) in large-scale multiple testing when a common threshold is used, and provide a consistent estimate of the realized FDP. This result has important applications in controlling FDR and FDP. Our estimate of the realized FDP compares favorably with Efron's (2007) approach, as demonstrated in the simulated examples. Our approach is further illustrated by some real data applications. We also propose a dependence-adjusted procedure, which is more powerful than the fixed threshold procedure. PMID:24729644
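The factor-subtraction step can be illustrated in the simplest setting, an equicorrelated one-factor null model where the common factor is recoverable from the cross-sectional mean (a toy sketch; the paper's principal factor approximation handles general covariance and estimates the loadings):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, rho, t = 5000, 0.6, 2.0

# Equicorrelated null statistics: Z_i = sqrt(rho)*W + sqrt(1-rho)*e_i.
W = rng.standard_normal()
Z = np.sqrt(rho) * W + np.sqrt(1 - rho) * rng.standard_normal(n)

realized = np.mean(np.abs(Z) > t)          # rejection rate, distorted by W

# Subtract the estimated common factor and rescale: with equal loadings the
# cross-sectional mean recovers sqrt(rho)*W up to O(1/sqrt(n)) noise.
Z_adj = (Z - Z.mean()) / np.sqrt(1 - rho)
adjusted = np.mean(np.abs(Z_adj) > t)

nominal = math.erfc(t / math.sqrt(2))      # two-sided N(0,1) tail, ~0.0455
print(realized, adjusted, nominal)
```

Before adjustment the realized rejection rate among nulls swings with the single draw of W; after subtracting the common factor it concentrates near the nominal level, which is the mechanism behind the FDP estimate.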
Core-Collapse Supernovae Explored by Multi-D Boltzmann Hydrodynamic Simulations
NASA Astrophysics Data System (ADS)
Sumiyoshi, Kohsuke; Nagakura, Hiroki; Iwakami, Wakana; Furusawa, Shun; Matsufuru, Hideo; Imakura, Akira; Yamada, Shoichi
We report the latest results of numerical simulations of core-collapse supernovae obtained by solving multi-D neutrino-radiation hydrodynamics with Boltzmann equations. One of the longstanding issues in the explosion mechanism of supernovae has been the uncertainty in multi-D approximations of neutrino transfer, such as the diffusion approximation and the ray-by-ray method. The neutrino transfer is essential, together with 2D/3D hydrodynamical instabilities, to evaluate the neutrino heating behind the shock wave for successful explosions and to predict the neutrino burst signals. We tackled this difficult problem by utilizing our solver of the 6D Boltzmann equation for neutrinos in 3D space and 3D neutrino momentum space, coupled with multi-D hydrodynamics and extended with special and general relativity. We have performed a set of 2D core-collapse simulations of 11M⊙ and 15M⊙ stars on the K computer in Japan, following the long-term evolution over 400 ms after bounce to reveal the outcome of full Boltzmann hydrodynamic simulations with a sophisticated equation of state with multi-nuclear species and updated rates for electron captures on nuclei.
NASA Technical Reports Server (NTRS)
Patten, W. N.; Robertshaw, H. H.; Pierpont, D.; Wynn, R. H.
1989-01-01
A new, near-optimal feedback control technique is introduced that is shown to provide excellent vibration attenuation for those distributed parameter systems that are often encountered in the areas of aeroservoelasticity and large space systems. The technique relies on a novel solution methodology for the classical optimal control problem. Specifically, the quadratic regulator control problem for a flexible vibrating structure is first cast in a weak functional form that admits an approximate solution. The necessary (first-order) conditions are then solved via a time finite-element method. The procedure produces a low-dimensional, algebraic parameterization of the optimal control problem that provides a rigorous basis for a discrete controller with a first-order-hold-like output. Simulation has shown that the algorithm can successfully control a wide variety of plant forms, including multi-input/multi-output systems and systems exhibiting significant nonlinearities. In order to firmly establish the efficacy of the algorithm, a laboratory control experiment was implemented to provide planar (bending) vibration attenuation of a highly flexible beam (with a first clamped-free mode of approximately 0.5 Hz).
Supervised Learning Based on Temporal Coding in Spiking Neural Networks.
Mostafa, Hesham
2017-08-01
Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the hard nonlinearity of spike generation and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme, where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry over directly to the training of such spiking networks, as we show when training on the permutation-invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.
Teaching human parasitology in China
2012-01-01
China has approximately one-fifth of the world’s population. Despite the recent success in controlling major parasitic diseases, parasitic diseases remain a significant human health problem in China. Hence, the discipline of human parasitology is considered as a core subject for undergraduate and postgraduate students of the medical sciences. We consider the teaching of human parasitology to be fundamental to the training of medical students, to the continued research on parasitic diseases, and to the prevention and control of human parasitic diseases. Here, we have summarized the distribution of educational institutions in China, particularly those that teach parasitology. In addition, we have described some existing parasitology courses in detail as well as the teaching methods used for different types of medical students. Finally, we have discussed the current problems in and reforms to human parasitology education. Our study indicates that 304 regular higher education institutions in China offer medical or related education. More than 70 universities have an independent department of parasitology that offers approximately 10 different parasitology courses. In addition, six universities in China have established excellence-building courses in human parasitology. PMID:22520237
El-Desoky, Hanaa S; Ghoneim, Mohamed M; El-Sheikh, Ragaa; Zidan, Naglaa M
2010-03-15
The indirect electrochemical removal of pollutants from effluents has become an attractive method in recent years. Removal (decolorization and mineralization) of Levafix Blue CA and Levafix Red CA reactive azo-dyes from aqueous media by electro-generated Fenton's reagent (Fe(2+)/H(2)O(2)) using a reticulated vitreous carbon cathode and a platinum gauze anode was optimized. Progress of oxidation (decolorization and mineralization) of the investigated azo-dyes with time of electro-Fenton's reaction was monitored by UV-visible absorbance measurements, chemical oxygen demand (COD) removal, and HPLC analysis. The results indicated that the electro-Fenton's oxidation system is efficient for treatment of such types of reactive dyes. Oxidation of each of the investigated azo-dyes by electro-generated Fenton's reagent up to complete decolorization and approximately 90-95% mineralization was achieved. Moreover, the optimized electro-Fenton's oxidation was successfully applied for complete decolorization and approximately 85-90% mineralization of both azo-dyes in real industrial wastewater samples collected from a textile dyeing house at El-Mahalla El-Kobra, Egypt. © 2009 Elsevier B.V. All rights reserved.
Polarization effects in low-energy electron-CH4 elastic collisions in an exact exchange treatment
NASA Astrophysics Data System (ADS)
Jain, Ashok; Weatherford, C. A.; Thompson, D. G.; McNaughten, P.
1989-12-01
We have investigated the polarization effects in very-low-energy (below 1 eV) electron-CH4 collisions in an exact-exchange treatment. Two models of the parameter-free polarization potential are employed: the first, the VpolJT potential, introduced by Jain and Thompson [J. Phys. B 15, L631 (1982)], is based on an approximate polarized-orbital method; the second, the correlation-polarization potential VpolCP, first proposed by O'Connell and Lane [Phys. Rev. A 27, 1893 (1983)], is given as a simple analytic form in terms of the charge density of the target. In this very low-energy region, the polarization effects play a decisive role, particularly in creating structure in the differential cross section (DCS) and producing the Ramsauer-Townsend minimum in the total cross section. Our DCSs at 0.2, 0.4, and 0.6 eV are compared with recent measurements. We found that a local parameter-free approximation for the polarization potential is quite successful if it is determined by a polarized-orbital-type technique rather than by the correlation-polarization approach.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1982-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
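As a minimal illustration of replacing the infinite-dimensional state equation with a finite-dimensional difference equation, the sketch below discretizes a scalar linear delay equation with a piecewise-constant (Euler) rule. The paper's schemes are more sophisticated (semigroup-based averaging and spline approximations), and the coefficients here are arbitrary:

```python
import numpy as np

def delay_euler(a, b, r, x0, T, n_per_delay=100):
    """Discrete difference-equation approximation of the hereditary
    equation x'(t) = a*x(t) + b*x(t - r), with constant history
    x(t) = x0 for t <= 0.  One state value is kept per grid point,
    so the delayed term is just a lookback of n_per_delay steps."""
    h = r / n_per_delay
    steps = int(round(T / h))
    x = np.full(steps + 1, float(x0))
    for k in range(steps):
        delayed = x[k - n_per_delay] if k >= n_per_delay else x0
        x[k + 1] = x[k] + h * (a * x[k] + b * delayed)
    return x
```

Refining n_per_delay plays the role of the convergence parameter in the approximating sequence of problems.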
Factors associated with breastfeeding initiation time in a baby-friendly hospital in Istanbul.
İnal, Sevil; Aydin, Yasemin; Canbulat, Nejla
2016-11-01
To investigate perinatal factors that affect breastfeeding of newborns delivered at a baby-friendly public hospital in Turkey, including the time of the first physical examination by a pediatrician, the time newborns were first united with their mothers, and the first breastfeeding time after delivery. The research was conducted from May 2nd through June 30th, 2011, in a baby-friendly public hospital in Istanbul. The sample consisted of 194 mothers and their full-term newborns. The data were collected via an observation form developed by the researchers. Data were analyzed using means, standard deviations, minimum and maximum values, chi-square tests, and percentages. The results revealed that the first physical examinations of the newborns were performed approximately 53.02 ± 39 min (range, 1-180 min) after birth. The newborns were given to their mothers approximately 69.75 ± 41 min (range, 3-190 min) after birth. Consequently, the first initiated breastfeeding took place approximately 78.58 ± 44 min following birth, and active sucking was initiated after approximately 85.90 ± 54 min. A large percentage of the newborns (64.4%) were not examined by a specialist pediatrician within half an hour of birth, and 74.7% were not united with their mothers within the same period. Also, the newborns who initiated breastfeeding within the first half hour had significantly earlier success with active sucking and required significantly less assistance to achieve successful breastfeeding. The newborns in our study met their mothers late in the birth ward because examinations of the newborns were delayed. The newborns began initial sucking later, and this chain reaction negatively impacted the breastfeeding success of the newborns. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Grüning, M.; Gritsenko, O. V.; Baerends, E. J.
2002-04-01
An approximate Kohn-Sham (KS) exchange potential vxσCEDA is developed, based on the common energy denominator approximation (CEDA) for the static orbital Green's function, which preserves the essential structure of the density response function. vxσCEDA is an explicit functional of the occupied KS orbitals, which has the Slater vSσ and response vrespσCEDA potentials as its components. The latter exhibits the characteristic step structure with "diagonal" contributions from the orbital densities |ψiσ|2, as well as "off-diagonal" ones from the occupied-occupied orbital products ψiσψj(≠i)σ*. Comparison of the results of atomic and molecular ground-state CEDA calculations with those of the Krieger-Li-Iafrate (KLI), exact exchange (EXX), and Hartree-Fock (HF) methods shows that both the KLI and CEDA potentials can be considered very good analytical "closure approximations" to the exact KS exchange potential. The total CEDA and KLI energies nearly coincide with the EXX ones, and the corresponding orbital energies ɛiσ are rather close to each other for the light atoms and small molecules considered. The CEDA, KLI, and EXX ɛiσ values provide the qualitatively correct order of ionizations, and they give an estimate of vertical ionization potentials comparable to that of the HF Koopmans' theorem. However, the additional off-diagonal orbital structure of vxσCEDA appears to be essential for the calculated response properties of molecular chains. KLI already considerably improves the calculated (hyper)polarizabilities of the prototype hydrogen chains Hn over the local density approximation (LDA) and standard generalized gradient approximations (GGAs), while the CEDA results are definitely an improvement over the KLI ones. The reasons for this success are the specific orbital structures of the CEDA and KLI response potentials, which produce in an external field an ultranonlocal, field-counteracting exchange potential.
Yao, Xin; Zhou, Guisheng; Tang, Yuping; Li, Zhenhao; Su, Shulan; Qian, Dawei; Duan, Jin-Ao
2013-03-07
A sensitive and accurate ultra-performance liquid chromatography coupled with triple quadrupole mass spectrometry (UPLC-MS/MS) method was developed for the determination of quercetin-3-O-β-D-glucopyranoside-(4→1)-α-L-rhamnoside (QGR) in rat plasma using rutin as the internal standard. Chromatographic separation was achieved on an Acquity BEH C18 column (100 mm × 2.1 mm, 1.7 μm) with a gradient elution of acetonitrile and 0.10% formic acid (v/v) at a flow rate of 0.4 mL/min. QGR and rutin were detected using electrospray negative-ionization mass spectrometry in the multiple reaction monitoring (MRM) mode. The method demonstrated good linearity and did not show any endogenous interference with the QGR and rutin peaks. This method was successfully applied to a pharmacokinetic study of QGR in rats after intravenous (20 mg/kg) and oral (40 mg/kg) administration, and the results showed that the compound was poorly absorbed, with an absolute bioavailability of approximately 3.41%.
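The reported absolute bioavailability follows from the standard dose-normalized AUC ratio; a minimal sketch, with hypothetical AUC values chosen only to reproduce a figure near the abstract's 3.41%:

```python
def absolute_bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """Absolute oral bioavailability:
    F = (AUC_oral / Dose_oral) / (AUC_iv / Dose_iv).
    Standard pharmacokinetic definition; the AUC inputs used in any
    example are hypothetical, not data from the study."""
    return (auc_oral / dose_oral) / (auc_iv / dose_iv)
```

For instance, with doses of 40 and 20 mg/kg as in the study, hypothetical AUCs of 6.82 (oral) and 100 (IV) give F = 0.0341.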
Fractional spectral and pseudo-spectral methods in unbounded domains: Theory and applications
NASA Astrophysics Data System (ADS)
Khosravian-Arab, Hassan; Dehghan, Mehdi; Eslahchi, M. R.
2017-06-01
This paper is intended to provide exponentially accurate Galerkin, Petrov-Galerkin and pseudo-spectral methods for fractional differential equations on a semi-infinite interval. We start our discussion by introducing two new non-classical Lagrange basis functions, NLBFs-1 and NLBFs-2, which are based on two new families of associated Laguerre polynomials, GALFs-1 and GALFs-2, obtained recently by the authors in [28]. With respect to the NLBFs-1 and NLBFs-2, two new non-classical interpolants based on the associated-Laguerre-Gauss and Laguerre-Gauss-Radau points are introduced, and then fractional (pseudo-spectral) differentiation (and integration) matrices are derived. Convergence and stability of the new interpolants are proved in detail. Several numerical examples are considered to demonstrate the validity and applicability of the basis functions to approximate fractional derivatives (and integrals) of some functions. Moreover, the pseudo-spectral, Galerkin and Petrov-Galerkin methods are successfully applied to solve some physical ordinary differential equations of either fractional or integer order. Some useful comments from the numerical point of view on the Galerkin and Petrov-Galerkin methods are listed at the end.
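A standard sanity check for fractional differentiation matrices like those derived here is the closed-form Caputo derivative of a monomial; a minimal helper (a textbook identity, not the paper's method):

```python
from math import gamma

def caputo_monomial(k, alpha, t):
    """Caputo fractional derivative of t**k (valid when k is at least
    the integer ceiling of alpha):
        D^alpha t^k = Gamma(k + 1) / Gamma(k - alpha + 1) * t**(k - alpha)
    Often used to validate pseudo-spectral differentiation matrices."""
    return gamma(k + 1) / gamma(k - alpha + 1) * t ** (k - alpha)
```

For alpha = 1 the formula reduces to the ordinary derivative, which makes it a convenient consistency check.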
Parallel iterative solution for h and p approximations of the shallow water equations
Barragy, E.J.; Walters, R.A.
1998-01-01
A p finite element scheme and parallel iterative solver are introduced for a modified form of the shallow water equations. The governing equations are the three-dimensional shallow water equations. After a harmonic decomposition in time and rearrangement, the resulting equations are a complex Helmholtz problem for surface elevation and a complex momentum equation for the horizontal velocity. Both equations are nonlinear, and the resulting system is solved using Picard iteration combined with a preconditioned biconjugate gradient (PBCG) method for the linearized subproblems. A subdomain-based parallel preconditioner is developed which uses incomplete LU factorization with thresholding (ILUT) methods within subdomains, overlapping ILUT factorizations for subdomain boundaries, and under-relaxed iteration for the resulting block system. The method builds on techniques successfully applied to linear elements by introducing ordering and condensation techniques to handle uniform p refinement. The combined methods show good performance for a range of p (element order), h (element size), and N (number of processors). Performance and scalability results are presented for a field-scale problem where up to 512 processors are used. © 1998 Elsevier Science Ltd. All rights reserved.
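The outer nonlinear loop described above can be sketched generically: freeze the coefficients at the current iterate and solve the linearized subproblem. The inner solve below is a dense np.linalg.solve used as a stand-in for the paper's ILUT-preconditioned biconjugate gradient solver, and the toy system is illustrative only:

```python
import numpy as np

def picard_solve(A_of_x, b, x0, tol=1e-10, max_iter=100):
    """Picard iteration for a nonlinear system A(x) x = b: at each
    step the coefficient matrix is frozen at the current iterate and
    the resulting *linear* subproblem is solved.  A dense direct
    solve replaces the PBCG/ILUT inner solver of the paper."""
    x = x0
    for _ in range(max_iter):
        x_new = np.linalg.solve(A_of_x(x), b)
        if np.linalg.norm(x_new - x) < tol * (1 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x
```

As a toy example, A(x) = [[2 + x^2]] and b = [3] solves (2 + x^2) x = 3, whose root is x = 1; the fixed-point map is contractive there, so the iteration converges.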
Space Technology 5 Multi-point Measurements of Near-Earth Magnetic Fields: Initial Results
NASA Technical Reports Server (NTRS)
Slavin, James A.; Le, G.; Strangeway, R. L.; Wang, Y.; Boardsen, S.A.; Moldwin, M. B.; Spence, H. E.
2007-01-01
The Space Technology 5 (ST-5) mission successfully placed three micro-satellites in a 300 x 4500 km dawn-dusk orbit on 22 March 2006. Each spacecraft carried a boom-mounted vector fluxgate magnetometer that returned highly sensitive and accurate measurements of the geomagnetic field. These data allow, for the first time, the separation of temporal and spatial variations in field-aligned current (FAC) perturbations measured in low-Earth orbit on time scales of approximately 10 s to 10 min. The constellation measurements are used to directly determine field-aligned current sheet motion, thickness, and current density. In doing so, we demonstrate two multi-point methods for the inference of FAC current density that have not previously been possible in low-Earth orbit: 1) the "standard method," based upon spacecraft velocity but corrected for FAC current sheet motion, and 2) the "gradiometer method," which uses simultaneous magnetic field measurements at two points with known separation. Future studies will apply these methods to the entire ST-5 data set and expand to include geomagnetic field gradient analyses as well as field-aligned and ionospheric currents.
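For a planar current sheet, the gradiometer idea reduces to Ampère's law applied to the difference of simultaneous tangential-field measurements at the two spacecraft; a one-line sketch (the geometry and sign conventions of the full ST-5 analysis are omitted, and the example numbers are hypothetical):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def gradiometer_current_density(dB_tangential, separation):
    """Gradiometer estimate of sheet current density: for a planar
    field-aligned current sheet, Ampere's law gives
        J ~ delta(B_tangential) / (mu0 * delta(x)),
    where delta(B_tangential) is the difference of the tangential
    field between two spacecraft separated by delta(x) across the
    sheet.  Units: Tesla and meters in, A/m^2 out."""
    return dB_tangential / (MU0 * separation)
```

For example, a 100 nT tangential-field difference across a 100 km separation corresponds to roughly 0.8 μA/m².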
Ku, Yu-Fu; Huang, Long-Sun
2018-01-01
Here, we provide a method and apparatus for real-time compensation of the thermal effect in single free-standing piezoresistive microcantilever-based biosensors. The sensor chip contained an on-chip fixed piezoresistor that served as a temperature sensor, and a multilayer microcantilever with an embedded piezoresistor served as a biomolecular sensor. This method employed the calibrated relationship between the resistance and the temperature of the piezoresistors to eliminate the thermal effect on the sensor, including the temperature coefficient of resistance (TCR) and the bimorph effect. From the experimental results, the method was verified to reduce the thermal-effect signal from 25.6 μV/°C to 0.3 μV/°C, approximately two orders of magnitude less than before the thermal elimination method was applied. Furthermore, the proposed approach and system successfully demonstrated effective real-time elimination of thermal effects in biomolecular detection without any thermostat device to control the environmental temperature. This method enables the miniaturization of the sensor's overall measurement system, which can be used to develop portable medical devices and microarray analysis platforms. PMID:29495574
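The compensation scheme can be sketched as follows: the fixed on-chip piezoresistor sees only temperature, so its signal yields an estimate of the temperature change, which is scaled by the cantilever's calibrated thermal sensitivity and subtracted from the cantilever signal. Both sensitivity values in the example are hypothetical placeholders, not calibration data from the paper:

```python
def compensate_thermal(v_sensor, v_ref, thermal_slope_sensor, thermal_slope_ref):
    """Real-time thermal compensation sketch.  The reference (fixed)
    piezoresistor responds only to temperature, so its voltage divided
    by its calibrated slope (V per deg C) estimates the temperature
    change; that estimate times the cantilever's calibrated thermal
    slope is subtracted from the cantilever signal, leaving the
    biomolecular contribution."""
    delta_T = v_ref / thermal_slope_ref        # infer temperature change
    return v_sensor - thermal_slope_sensor * delta_T
```

With a hypothetical reference slope of 5 μV/°C and the paper's 25.6 μV/°C cantilever sensitivity, a 2 °C drift superimposed on a 10 μV biomolecular signal is removed almost exactly.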
Ultrasonic Method for Deployment Mechanism Bolt Element Preload Verification
NASA Technical Reports Server (NTRS)
Johnson, Eric C.; Kim, Yong M.; Morris, Fred A.; Mitchell, Joel; Pan, Robert B.
2014-01-01
Deployment mechanisms play a pivotal role in mission success. These mechanisms often incorporate bolt elements for which a preload within a specified range is essential for proper operation. A common practice is to torque these bolt elements to a specified value during installation. The resulting preload, however, can vary significantly with applied torque for a number of reasons. The goal of this effort was to investigate ultrasonic methods as an alternative for bolt preload verification in such deployment mechanisms. A family of non-explosive release mechanisms widely used by satellite manufacturers was chosen for the work. A willing contractor permitted measurements on a sampling of bolt elements for these release mechanisms that were installed by a technician following a standard practice. A variation of approximately 50% (±25%) in the resultant preloads was observed. An alternative ultrasonic method to set the preloads was then developed and calibration data were accumulated. The method was demonstrated on bolt elements installed in a fixture instrumented with a calibrated load cell and designed to mimic production practice. The ultrasonic method yielded results within ±3% of the load cell reading. The contractor has since adopted the alternative method for its future production.
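The physics behind such ultrasonic preload methods can be sketched to first order: the change in pulse-echo time-of-flight gives the bolt elongation, and Hooke's law converts elongation to axial load. A calibrated method also accounts for the acoustoelastic (stress-dependent wave speed) effect, which this sketch ignores; all example values are hypothetical:

```python
def preload_from_tof(delta_tof, velocity, eff_length, area, youngs_mod):
    """First-order ultrasonic preload estimate.
    delta_tof : change in pulse-echo time-of-flight after torquing (s)
    velocity  : longitudinal sound speed in the bolt material (m/s)
    The pulse travels down and back, so elongation = v * dt / 2;
    Hooke's law then gives load F = E * A * dL / L_eff (N).
    Ignores the acoustoelastic correction a calibrated method needs."""
    elongation = velocity * delta_tof / 2.0   # pulse-echo: path = 2L
    return youngs_mod * area * elongation / eff_length
```

For example, a 10 ns time-of-flight change in a steel bolt (v ≈ 5900 m/s, E = 200 GPa, A = 20 mm², effective length 50 mm) corresponds to roughly 2.4 kN of preload.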
NASA Astrophysics Data System (ADS)
Butler, Jason E.; Shaqfeh, Eric S. G.
2005-01-01
Using methods adapted from the simulation of suspension dynamics, we have developed a Brownian dynamics algorithm with multibody hydrodynamic interactions for simulating the dynamics of polymer molecules. The polymer molecule is modeled as a chain composed of a series of inextensible, rigid rods with constraints at each joint to ensure continuity of the chain. The linear and rotational velocities of each segment of the polymer chain are described by the slender-body theory of Batchelor [J. Fluid Mech. 44, 419 (1970)]. To include hydrodynamic interactions between the segments of the chain, the line distribution of forces on each segment is approximated by making a Legendre polynomial expansion of the disturbance velocity on the segment, where the first two terms of the expansion are retained in the calculation. Thus, the resulting linear force distribution is specified by a center of mass force, couple, and stresslet on each segment. This method for calculating the hydrodynamic interactions has been successfully used to simulate the dynamics of noncolloidal suspensions of rigid fibers [O. G. Harlen, R. R. Sundararajakumar, and D. L. Koch, J. Fluid Mech. 388, 355 (1999); J. E. Butler and E. S. G. Shaqfeh, J. Fluid Mech. 468, 204 (2002)]. The longest relaxation time and center of mass diffusivity are among the quantities calculated with the simulation technique. Comparisons are made for different levels of approximation of the hydrodynamic interactions, including multibody interactions, two-body interactions, and the "freely draining" case with no interactions. For the short polymer chains studied in this paper, the results indicate a difference in the apparent scaling of diffusivity with polymer length for the multibody versus two-body level of approximation for the hydrodynamic interactions.
Cho, Kang Su; Jung, Hae Do; Ham, Won Sik; Chung, Doo Yong; Kang, Yong Jin; Jang, Won Sik; Kwon, Jong Kyou; Choi, Young Deuk; Lee, Joo Yong
2015-01-01
Objectives To investigate whether skin-to-stone distance (SSD), which remains controversial in patients with ureter stones, can be a predictive factor for one-session success following extracorporeal shock wave lithotripsy (ESWL) in patients with upper ureter stones. Patients and Methods We retrospectively reviewed the medical records of 1,519 patients who underwent their first ESWL between January 2005 and December 2013. Among these patients, 492 had upper ureter stones that measured 4–20 mm and were eligible for our analyses. Maximal stone length, mean stone density (HU), and SSD were determined on pretreatment non-contrast computed tomography (NCCT). For subgroup analyses, patients were divided into four groups: group 1 consisted of patients with SSD < 25th percentile, group 2 of patients with SSD in the 25th to 50th percentile, group 3 of patients with SSD in the 50th to 75th percentile, and group 4 of patients with SSD ≥ 75th percentile. Results In analyses of group 2 patients versus the others, there were no statistical differences in mean age, stone length, or density. However, the one-session success rate in group 2 was higher than in the other groups (77.9% vs. 67.0%; P = 0.032). The multivariate logistic regression model revealed that shorter stone length, lower stone density, and the group 2 SSD were positive predictors of successful outcomes in ESWL. Using the Bayesian model-averaging approach, shorter stone length, lower stone density, and group 2 SSD were also positive predictors of successful outcomes following ESWL. Conclusions Our data indicate that a group 2 SSD of approximately 10 cm is a positive predictor of success following ESWL. PMID:26659086
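The quartile grouping used for the subgroup analyses can be reproduced directly from the SSD values; a sketch (group 1 below the 25th percentile through group 4 at or above the 75th, as in the abstract):

```python
import numpy as np

def ssd_quartile_group(ssd_values):
    """Assign each skin-to-stone distance to quartile groups 1-4 by
    the sample's 25th/50th/75th percentiles, mirroring the paper's
    grouping (group 1: SSD < 25th pct ... group 4: SSD >= 75th pct)."""
    q1, q2, q3 = np.percentile(ssd_values, [25, 50, 75])
    # searchsorted with side='right' puts boundary values in the
    # upper group, matching the strict "<" lower bounds above
    return np.searchsorted([q1, q2, q3], ssd_values, side="right") + 1
```

Each patient's group label can then be used as a categorical covariate in a logistic regression on treatment success.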
Wu, Gang; Guo, Jian-Ying; Wan, Fang-Hao; Xiao, Neng-Wen
2010-01-01
The beet armyworm, Spodoptera exigua (Hübner) (Lepidoptera: Noctuidae), is an important pest of numerous crops, and it causes economic damage in China. Use of secondary metabolic compounds in plants is an important method for controlling this insect as part of integrated pest management. In this study, the growth, development, and food utilization of three successive generations of S. exigua fed on three cotton gossypol cultivars were examined. Significantly longer larval life-spans were observed in S. exigua fed on the high-gossypol cultivar M9101 compared with those fed on two low-gossypol cultivars, ZMS13 and HZ401. In the ZMS13 group, the pupal weight of the first generation was significantly lower than that of the latter two generations. Significantly lower fecundity was observed in the second and third generations of S. exigua fed on M9101 compared with S. exigua fed on ZMS13 and HZ401. The efficiency of conversion was significantly higher in the first and third generations fed on HZ401 compared with those fed on ZMS13 and M9101. A significantly lower relative growth rate was observed in the three successive generations fed on M9101 compared with those fed on ZMS13 and HZ401. Cotton cultivars significantly affected the growth, development, and food utilization indices of S. exigua, except for frass and approximate digestibility. Development of S. exigua was significantly affected by relative consumption rate and efficiency of conversion of ingested food, but not by relative growth rate or approximate digestibility, suggesting that diet-utilization efficiency differed based on food quality and generation. Measuring the development and food utilization of S. exigua at the individual and population levels over more than one generation provided more meaningful predictions of long-term population dynamics.
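The food-utilization indices named in the abstract (approximate digestibility, efficiency of conversion of ingested food, relative growth rate) are the standard Waldbauer nutritional indices; a minimal sketch with hypothetical dry-weight inputs:

```python
def nutritional_indices(food_ingested, frass, weight_gain, mean_weight, days):
    """Standard Waldbauer nutritional indices (dry-weight basis):
    AD  = (ingested - frass) / ingested   (approximate digestibility)
    ECI = weight gain / ingested          (efficiency of conversion
                                           of ingested food)
    RGR = weight gain / (mean larval weight * feeding period)
                                          (relative growth rate)"""
    ad = (food_ingested - frass) / food_ingested
    eci = weight_gain / food_ingested
    rgr = weight_gain / (mean_weight * days)
    return ad, eci, rgr
```

Computing these per generation and per cultivar is what allows the kind of cross-generation comparison the study reports.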
Succession Planning in State Health Agencies in the United States: A Brief Report.
Harper, Elizabeth; Leider, Jonathon P; Coronado, Fatima; Beck, Angela J
2017-11-02
Approximately 25% of the public health workforce plans to retire by 2020. Succession planning is a core capability of the governmental public health enterprise; however, limited data are available regarding these efforts in state health agencies (SHAs). We analyzed 2016 Workforce Gaps Survey data regarding succession planning in SHAs using the US Office of Personnel Management's (OPM's) succession planning model, including 6 domains and 27 activities. Descriptive statistics were calculated for all 41 responding SHAs. On average, SHAs self-reported adequately addressing 11 of 27 succession planning activities, with 93% of SHAs adequately addressing 1 or more activities and 61% adequately addressing 1 or more activities in each domain. The majority of OPM-recommended succession planning activities are not being addressed, and limited succession planning occurs across SHAs. Greater activity in the OPM-identified succession planning domains may help SHAs contend with significant turnover and better preserve institutional knowledge.
Exposing the Secrets of HIV's Success | Center for Cancer Research
An estimated 40 million people were living with HIV and approximately 3 million people died of AIDS worldwide in 2005, making HIV the deadliest infectious agent of the modern era. HIV owes much of its pathogenic success to two factors: its rapid and imprecise replication, which can lead to drug resistance, and its ability to survive at low levels in the presence of antiviral
2012-01-01
Background In 2005, we reported on the success of Comprehensive School Health (CSH) in improving diets, activity levels, and body weights. The successful program was recognized as a "best practice" and has inspired the development of the Alberta Project Promoting active Living and healthy Eating (APPLE) Schools. The project includes 10 schools, most of which are located in socioeconomically disadvantaged areas. The present study examines the effectiveness of a CSH program adopted from a "best practice" example in another setting by evaluating temporal changes in diets, activity levels, and body weight. Methods In 2008 and 2010, we surveyed grade 5 students from approximately 150 randomly selected schools in the Canadian province of Alberta and students from 10 APPLE Schools. Students completed the Harvard Youth/Adolescent Food Frequency Questionnaire and questions on physical activity, and had their height and weight measured. Multilevel regression methods were used to analyze changes in diets, activity levels, and body weight between 2008 and 2010. Results In 2010 relative to 2008, students attending APPLE Schools were eating more fruits and vegetables, consuming fewer calories, were more physically active, and were less likely to be obese. These changes contrasted with changes observed among students elsewhere in the province. Conclusions These findings provide evidence of the effectiveness of CSH in improving health behaviors. They show that an example of "best practice" may lead to success in another setting. The study thereby provides evidence that investments in broader program implementation based on "best practice" are justified. PMID:22413778
Family planning uses traditional theater in Mali.
Schubert, J
1988-01-01
Mali's branch of the International Planned Parenthood Federation has found a vehicle that effectively conveys the idea of family planning through the use of contraception, a method that blends the country's cultural heritage and modern technology. Despite becoming the first sub-Saharan francophone country to promote family planning, Mali counted only 1% of its population using a modern method of contraception. So with the aid of The Johns Hopkins University/Population Communication Services (JHU/PCS), the Association Malienne pour la Protection et la Promotion de la Famille (AMPPF) developed several programs to promote contraception, but none was more successful than the Koteba Project, which used Mali's traditional theater form to communicate the message. While comical, the Koteba generally deals with social issues: it informs and entertains. This particular Koteba told the story of two government employees, one with two wives and many children, the other with one wife and few children. The first one sees nothing but family problems: fighting wives and delinquent children. The second one, who had used family planning, enjoys a peaceful home. Upon hearing of his friend's successes with family planning, the tormented government employee becomes convinced of the need for it and persuades his wives to accompany him to a family planning clinic. Developed at a cost of approximately US $3000 and televised nationwide, the Koteba proved effective. A survey of 500 people attending an AMPPF clinic revealed that one-fourth of them remembered the program. With the success of the Koteba, JHU/PCS and AMPPF are now exploring other traditional channels of communication.
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
Cabrera, Olga L; Munsterman, Leonard E; Cárdenas, Rocío; Gutiérrez, Reynaldo; Ferro, Cristina
2002-09-01
For epidemiological studies and control programs of leishmaniasis, taxonomic identification of the etiologic agent of the disease in the insect vector is of critical importance. The implementation of molecular techniques such as the polymerase chain reaction (PCR) has permitted great advances in the efficacy and sensitivity of parasite identification. Previously, these investigations involved labor-intensive dissections and required expert personnel. The present work evaluates the effects of storage methods of phlebotomine samples on the optimization of PCR identification of Leishmania. Females of Lutzomyia longipalpis, from the colony of the Instituto Nacional de Salud, were experimentally infected with Leishmania chagasi (= L. infantum) from the upper Magdalena Valley (Quipile, Cundinamarca, Colombia). The infected insects were preserved in three solutions: 100% ethanol, 70% ethanol, and TE; subsamples of each class were stored at -80°C, -20°C, and room temperature. To determine infection rates, samples were dissected and screened microscopically. Chelex 100 was used for extraction of total Leishmania DNA. For PCR amplification, the kinetoplast minicircle DNA primers OL1 and OL2 of Leishmania were used, and the products were visualized by electrophoresis in 1% agarose gels. For each of the three storage conditions, amplifications were successful, producing an approximately 120-base-pair product unique to Leishmania. The results demonstrated the advantage of PCR as a routine screening method for detecting infected flies in endemic foci of visceral leishmaniasis. Since storage method did not affect PCR amplification success, the most cost-effective method, 70% ethanol at room temperature, is the option recommended for storing entomological samples in vector incrimination studies.
The complex variable boundary element method: Applications in determining approximative boundaries
Hromadka, T.V.
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occur in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level-curve) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
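The underlying idea, approximating a Laplace solution by an analytic function whose real part matches the boundary data, can be sketched with a least-squares polynomial fit. This toy (unit-disk domain, polynomial basis, the name `harmonic_fit`) is an assumption-laden stand-in, not the CVBEM itself, whose approximator is built from boundary-integral basis functions:

```python
import numpy as np

def harmonic_fit(boundary_pts, boundary_vals, degree):
    """Least-squares fit of Re(sum c_n z^n) to Dirichlet data on the
    boundary; the real part of an analytic polynomial is harmonic, so
    the fitted function solves Laplace's equation exactly inside."""
    z = boundary_pts
    cols = [np.real(z ** n) for n in range(degree + 1)]
    cols += [np.imag(z ** n) for n in range(1, degree + 1)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, boundary_vals, rcond=None)
    def u(zpt):
        vals = [np.real(zpt ** n) for n in range(degree + 1)]
        vals += [np.imag(zpt ** n) for n in range(1, degree + 1)]
        return float(np.dot(coef, vals))
    return u

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
zb = np.exp(1j * theta)          # unit-circle boundary nodes
g = np.real(zb ** 2)             # Dirichlet data: x^2 - y^2 on the circle
u = harmonic_fit(zb, g, degree=4)
```

Because the data here lie exactly in the basis, the fit reproduces the interior solution x² − y² to machine precision; the quality of such a fit on the boundary controls the interior error, which parallels the closeness-of-fit criterion in the abstract.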
Ndefo, Uche Anadu; Norman, Rolicia; Henry, Andrea
2017-01-01
Background When initiated by a health plan, academic detailing can be used to change prescribing practices, which can lead to increased safety and savings. Objective To evaluate the impact of academic detailing on prescribing and prescription drug costs of cefixime to a health plan. Methods A prospective intervention study was carried out that evaluated the prescribing practices and prescription drug costs of cefixime. A total of 11 prescribers were detailed by 1 pharmacist between August 2014 and March 2015. Two of the 11 prescribers did not respond to the academic detailing and were not followed up. The physicians' prescribing habits and prescription costs were compared before and after detailing to evaluate the effectiveness of the intervention. Data were collected for approximately 5 months before and after the intervention. Each prescriber served as his or her own control. Results Overall, an approximate 36% reduction in the number of cefixime prescriptions written and an approximate 20% decrease in prescription costs were seen with academic detailing compared with the year before the intervention. In 9 of 11 (82%) prescribers, intervention with academic detailing was successful and resulted in fewer prescriptions for cefixime during the study period. Conclusion Academic detailing had a positive impact on prescribing, by decreasing the number of cefixime prescriptions and lowering the drug costs to the health plan. PMID:28626509
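The reported reductions are simple baseline-relative percentages. A minimal sketch follows; the counts and costs below are hypothetical, chosen only to reproduce the quoted ~36% and ~20% figures, since the study's raw numbers are not given in the abstract:

```python
def percent_reduction(before, after):
    """Decrease relative to the baseline, expressed as a percentage."""
    return 100.0 * (before - after) / before

# Hypothetical counts/costs chosen only to mirror the reported figures.
rx_reduction = percent_reduction(100, 64)           # ~36% fewer prescriptions
cost_reduction = percent_reduction(5000.0, 4000.0)  # ~20% lower costs
```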
Photovoltaic stand-alone modular systems, phase 2
NASA Technical Reports Server (NTRS)
Naff, G. J.; Marshall, N. A.
1983-01-01
The final hardware and system qualification phase of a two-part stand-alone photovoltaic (PV) system development is covered. The final design incorporated modular power blocks capable of expanding incrementally from 320 watts to twenty kilowatts peak. The basic power unit (PU) was nominally rated 1.28 kWp. The control units, power collection buses and main lugs, electrical protection subsystems, power switching, and load management circuits are housed in a common control enclosure. Photovoltaic modules are electrically connected in a horizontal daisy-chain method via Amp Solarlok plugs mating with compatible connectors installed on the back side of each photovoltaic module. A pair of channel rails accommodates the mounting of the modules into a frameless panel support structure. Foundations are of a unique planter (tub-like) configuration to allow for world-wide deployment without restriction as to types of soil. One battery string capable of supplying approximately 240 ampere-hours nominal of carryover power is specified for each basic power unit. Load prioritization and shedding circuits are included to protect critical loads and selectively shed and defer lower-priority or noncritical power demands. The baseline system, operating at approximately 2 1/2 PUs (3.2 kWp), was installed and deployed. Qualification was successfully completed in March 1983; since that time, the demonstration system has logged approximately 3000 hours of continuous operation under load without major incident.
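A quick arithmetic check of the sizing quoted in the abstract; the only assumption is reading "approximately 2 1/2 PUs" as exactly 2.5:

```python
# Sizing figures quoted in the abstract.
KWP_PER_PU = 1.28      # nominal rating of one basic power unit, kWp
pus_deployed = 2.5     # baseline system: approximately 2 1/2 PUs
peak_kw = KWP_PER_PU * pus_deployed   # matches the reported 3.2 kWp
```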
Klemenc-Ketis, Zalika; Bulc, Mateja; Kersnik, Janko
2011-01-01
Aim To assess patients’ attitudes toward changing an unhealthy lifestyle, their confidence in success, and the desired involvement of their family physicians in facilitating this change. Methods We conducted a cross-sectional study in 15 family physicians’ practices on a consecutive sample of 472 patients (44.9% men, mean age [± standard deviation] 49.3 ± 10.9 years) from October 2007 to May 2008. Patients were given a self-administered questionnaire on attitudes toward changing an unhealthy diet, increasing physical activity, and reducing body weight. It also included questions on confidence in success, planning lifestyle changes, and advice from family physicians. Results Nearly 20% of patients planned to change their eating habits, increase physical activity, and reach normal body weight. Approximately 30% of patients (more men than women) said that they wanted to receive advice on this issue from their family physicians. Younger patients and patients with higher education were more confident that they could improve their lifestyle. Patients who planned to change their lifestyle and were more confident of success wanted to receive advice from their family physicians. Conclusion Family physicians should regularly ask patients about their intention to change their lifestyle and offer them help in carrying out this intention. PMID:21495204
Eapen, Valsamma; Grove, Rachel; Aylward, Elizabeth; Joosten, Annette V; Miller, Scott I; Van Der Watt, Gerdamari; Fordyce, Kathryn; Dissanayake, Cheryl; Maya, Jacqueline; Tucker, Madonna; DeBlasio, Antonia
2017-01-01
AIM To evaluate the characteristics that are associated with successful transition to school outcomes in preschool aged children with autism. METHODS Twenty-one participants transitioning from an early intervention program were assessed at two time points: at the end of their preschool placement and approximately 5 months later, following their transition to school. Child characteristics were assessed using the Mullen Scales of Early Learning, Vineland Adaptive Behaviour Scales, Social Communication Questionnaire and the Repetitive Behaviour Scale. Transition outcomes were assessed using the Teacher Rating Scale of School Adjustment and the Social Skills Improvement System Rating Scales to provide an understanding of each child’s school adjustment. The relationship between child characteristics and school outcomes was evaluated. RESULTS Cognitive ability and adaptive behaviour were shown to be associated with successful transition to school outcomes, including participation in the classroom and being comfortable with the classroom teacher. These factors were also associated with social skills in the classroom, including assertiveness and engagement. CONCLUSION Supporting children on the spectrum in the domains of adaptive behaviour and cognitive ability, including language skills, is important for a successful transition to school. Providing the appropriate support within structured transition programs will assist children on the spectrum with this important transition, allowing them to maximise their learning and behavioural potential. PMID:29259892
Approximation and inference methods for stochastic biochemical kinetics—a tutorial review
NASA Astrophysics Data System (ADS)
Schnoerr, David; Sanguinetti, Guido; Grima, Ramon
2017-03-01
Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state-of-the-art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics.
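For the exactly solvable birth-death network, Gillespie's stochastic simulation algorithm (one of the exact simulation methods such reviews cover) can be sketched in a few lines; the rate constants below are arbitrary illustrative values:

```python
import random

def gillespie_birth_death(k_prod, k_deg, x0, t_end, rng):
    """Gillespie's stochastic simulation algorithm (SSA) for the
    birth-death network 0 -> X (rate k_prod), X -> 0 (rate k_deg * x),
    a simple exactly solvable case of the chemical master equation."""
    t, x = 0.0, x0
    while True:
        a1, a2 = k_prod, k_deg * x        # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)          # exponential waiting time
        if t > t_end:
            return x
        if rng.random() * a0 < a1:        # pick which reaction fires
            x += 1
        else:
            x -= 1

rng = random.Random(0)
samples = [gillespie_birth_death(10.0, 1.0, 0, 10.0, rng) for _ in range(1000)]
mean_x = sum(samples) / len(samples)      # stationary mean is k_prod/k_deg
```

The empirical mean should sit near the analytic stationary mean of 10 (the stationary law is Poisson), one of the few cases where the master equation admits a closed-form solution; the cost of generating many such trajectories is what motivates the approximation methods surveyed in the review.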
Approximate Dispersion Relations for Waves on Arbitrary Shear Flows
NASA Astrophysics Data System (ADS)
Ellingsen, S. Å.; Li, Y.
2017-12-01
An approximate dispersion relation is derived and presented for linear surface waves atop a shear current whose magnitude and direction can vary arbitrarily with depth. The approximation, derived to first order of deviation from potential flow, is shown to produce good approximations at all wavelengths for a wide range of naturally occurring shear flows as well as widely used model flows. The relation reduces in many cases to a 3-D generalization of the much-used approximation by Skop (1987), developed further by Kirby and Chen (1989), but is shown to be more robust, succeeding in situations where the Kirby and Chen model fails. The two approximations incur the same numerical cost and difficulty. While the Kirby and Chen approximation is excellent for a wide range of currents, the exact criteria for its applicability have not been known. We explain the apparently serendipitous success of the latter and derive proper conditions of applicability for both approximate dispersion relations. Our new model has a greater range of applicability. A second order approximation is also derived. It greatly improves accuracy, which is shown to be important in difficult cases. It has an advantage over the corresponding second-order expression proposed by Kirby and Chen in that its criterion of accuracy is explicitly known, which is not currently the case for the latter, to our knowledge. Our second-order term is also arguably significantly simpler to implement, and more physically transparent, than its sibling due to Kirby and Chen.
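A hedged 1-D deep-water sketch of the effective-current idea behind this family of approximations: the wave "feels" the current U(z) through a depth weight 2k·e^{2kz} that integrates to one. This is the spirit of the Skop / Kirby-Chen approximation only; the paper itself treats arbitrary 3-D shear, which this toy does not:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def effective_current(U, k, n=4000):
    """Depth-weighted average current for a deep-water wave of
    wavenumber k; the weight 2k*exp(2kz) integrates to one over
    z in (-inf, 0], so a uniform current is recovered exactly."""
    L = 10.0 / (2.0 * k)                 # below z = -L the weight is ~e^-10
    z = np.linspace(-L, 0.0, n)
    f = 2.0 * k * np.exp(2.0 * k * z) * U(z)
    dz = z[1] - z[0]
    return dz * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule

def omega_approx(U, k):
    """First-order approximate dispersion relation: deep-water frequency
    Doppler-shifted by the effective current (1-D sketch only)."""
    return np.sqrt(G * k) + k * effective_current(U, k)
```

A quick sanity check is a depth-uniform current, for which the effective current reduces to the exact Doppler shift.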
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.
1998-01-01
Nonlinear mathematical-programming-based design optimization can be an elegant method. However, the calculations required to generate the merit function, constraints, and their gradients, which are frequently required, can make the process computationally intensive. The computational burden can be greatly reduced by using approximating analyzers derived from an original analyzer utilizing neural networks and linear regression methods. The experience gained from using both of these approximation methods in the design optimization of a high-speed civil transport aircraft is the subject of this paper. The Langley Research Center's Flight Optimization System was selected for the aircraft analysis. This software was exercised to generate a set of training data with which a neural network and a regression method were trained, thereby producing the two approximating analyzers. The derived analyzers were coupled to the Lewis Research Center's CometBoards test bed to provide the optimization capability. With the combined software, both approximation methods were examined for use in aircraft design optimization, and both performed satisfactorily. The CPU time for solution of the problem, which had been measured in hours, was reduced to minutes with the neural network approximation and to seconds with the regression method. Instability encountered in the aircraft analysis software at certain design points was also eliminated. On the other hand, there were costs and difficulties associated with training the approximating analyzers. The CPU time required to generate the input-output pairs and to train the approximating analyzers was seven times that required for solution of the problem.
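The train-then-optimize workflow can be sketched with ordinary polynomial regression as the approximating analyzer; `expensive_analysis` is an invented analytic stand-in, not the Flight Optimization System, and the one-dimensional grid search stands in for the real optimizer:

```python
import numpy as np

def expensive_analysis(x):
    """Stand-in for a costly analysis run; analytic here so the
    sketch is runnable without the real aircraft-analysis code."""
    return (x - 2.0) ** 2 + np.sin(3.0 * x)

# 1. Exercise the analyzer to generate input-output training pairs.
xs = np.linspace(0.0, 4.0, 25)
ys = expensive_analysis(xs)

# 2. Train a cheap approximating analyzer (polynomial regression).
surrogate = np.poly1d(np.polyfit(xs, ys, deg=10))

# 3. Optimize against the surrogate instead of the original analyzer.
grid = np.linspace(0.0, 4.0, 2001)
x_star = grid[np.argmin(surrogate(grid))]
x_true = grid[np.argmin(expensive_analysis(grid))]
```

Every surrogate evaluation is a polynomial evaluation rather than a full analysis run, which is the source of the hours-to-seconds speedup reported above; the trade-off is the up-front cost of generating the training pairs.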
Yokoyama, Shozo; Takenaka, Naomi
2005-04-01
Red-green color vision is strongly suspected to enhance the survival of its possessors. Despite being red-green color blind, however, many species have successfully competed in nature, which brings into question the evolutionary advantage of achieving red-green color vision. Here, we propose a new method of identifying positive selection at individual amino acid sites with the premise that if positive Darwinian selection has driven the evolution of the protein under consideration, then it should be found mostly at the branches in the phylogenetic tree where its function had changed. The statistical and molecular methods have been applied to 29 visual pigments with the wavelengths of maximal absorption at approximately 510-540 nm (green- or middle wavelength-sensitive [MWS] pigments) and at approximately 560 nm (red- or long wavelength-sensitive [LWS] pigments), which are sampled from a diverse range of vertebrate species. The results show that the MWS pigments are positively selected through amino acid replacements S180A, Y277F, and T285A and that the LWS pigments have been subjected to strong evolutionary conservation. The fact that these positively selected M/LWS pigments are found not only in animals with red-green color vision but also in those with red-green color blindness strongly suggests that both red-green color vision and color blindness have undergone adaptive evolution independently in different species.
NASA Astrophysics Data System (ADS)
Ruiz-Baier, Ricardo; Lunati, Ivan
2016-10-01
We present a novel discretization scheme tailored to a class of multiphase models that regard the physical system as consisting of multiple interacting continua. In the framework of mixture theory, we consider a general mathematical model that entails solving a system of mass and momentum equations for both the mixture and one of the phases. The model results in a strongly coupled and nonlinear system of partial differential equations that are written in terms of phase and mixture (barycentric) velocities, phase pressure, and saturation. We construct an accurate, robust and reliable hybrid method that combines a mixed finite element discretization of the momentum equations with a primal discontinuous finite volume-element discretization of the mass (or transport) equations. The scheme is devised for unstructured meshes and relies on mixed Brezzi-Douglas-Marini approximations of phase and total velocities, on piecewise constant elements for the approximation of phase or total pressures, as well as on a primal formulation that employs discontinuous finite volume elements defined on a dual diamond mesh to approximate scalar fields of interest (such as volume fraction, total density, saturation, etc.). As the discretization scheme is derived for a general formulation of multicontinuum physical systems, it can be readily applied to a large class of simplified multiphase models; conversely, the approach can be seen as a generalization of the models commonly encountered in the literature, to be employed when the latter are not sufficiently accurate.
An extensive set of numerical test cases involving two- and three-dimensional porous media are presented to demonstrate the accuracy of the method (displaying an optimal convergence rate), the physics-preserving properties of the mixed-primal scheme, as well as the robustness of the method (which is successfully used to simulate diverse physical phenomena such as density fingering, Terzaghi's consolidation, deformation of a cantilever bracket, and Boycott effects). The applicability of the method is not limited to flow in porous media, but can also be employed to describe many other physical systems governed by a similar set of equations, including e.g. multi-component materials.
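The transport half of such a scheme can be caricatured by a first-order upwind finite-volume update in one dimension. This sketch is far simpler than the paper's mixed-primal discretization: it assumes a constant, positive total velocity and a fixed inflow boundary, but it exhibits the physics-preserving properties (boundedness of saturation, mass balance) highlighted above:

```python
import numpy as np

def upwind_advect(s, v, dx, dt, steps):
    """First-order upwind finite-volume update for the 1-D transport
    equation ds/dt + v*ds/dx = 0 with v > 0; cell 0 is an inflow
    boundary held fixed. The scheme is monotone (keeps s within its
    initial bounds) when the CFL number v*dt/dx <= 1, and the total
    mass changes only through the boundary fluxes."""
    s = s.copy()
    c = v * dt / dx                           # CFL number
    for _ in range(steps):
        s[1:] = s[1:] - c * (s[1:] - s[:-1])  # upwind flux differences
    return s
```

Advecting a saturation front with this update keeps the field in [0, 1] and grows the total mass by exactly the integrated inflow flux, the discrete analogue of the conservation properties the paper's numerical tests verify.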
Prostate implant reconstruction from C-arm images with motion-compensated tomosynthesis
Dehghan, Ehsan; Moradi, Mehdi; Wen, Xu; French, Danny; Lobo, Julio; Morris, W. James; Salcudean, Septimiu E.; Fichtinger, Gabor
2011-01-01
Purpose: Accurate localization of prostate implants from several C-arm images is necessary for ultrasound-fluoroscopy fusion and intraoperative dosimetry. The authors propose a computational motion compensation method for tomosynthesis-based reconstruction that enables 3D localization of prostate implants from C-arm images despite C-arm oscillation and sagging. Methods: Five C-arm images are captured by rotating the C-arm around its primary axis, while measuring its rotation angle using a protractor or the C-arm joint encoder. The C-arm images are processed to obtain binary seed-only images from which a volume of interest is reconstructed. The motion compensation algorithm, iteratively, compensates for 2D translational motion of the C-arm by maximizing the number of voxels that project on a seed projection in all of the images. This obviates the need for full C-arm pose tracking, traditionally implemented using radio-opaque fiducials or external trackers. The proposed reconstruction method is tested in simulations, in a phantom study and on ten patient data sets. Results: In a phantom implanted with 136 dummy seeds, the seed detection rate was 100% with a localization error of 0.86 ± 0.44 mm (Mean ± STD) compared to CT. For patient data sets, a detection rate of 99.5% was achieved in approximately 1 min per patient. The reconstruction results for patient data sets were compared against an available matching-based reconstruction method and showed a relative localization difference of 0.5 ± 0.4 mm. Conclusions: The motion compensation method can successfully compensate for large C-arm motion without using radio-opaque fiducials or external trackers. Considering the efficacy of the algorithm, its successful reconstruction rate and low computational burden, the algorithm is feasible for clinical use. PMID:21992346
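The core of the compensation step, searching for the 2-D translation that maximizes seed overlap, can be sketched with a brute-force search on synthetic binary images. The image size, seed count, and simulated drift below are invented, and the real method scores voxel projections across several views rather than aligning two 2-D images:

```python
import numpy as np

def best_shift(fixed, moving, max_shift=5):
    """Brute-force 2-D translation search: return the (dy, dx) shift of
    `moving` that maximizes the count of overlapping seed pixels."""
    best, best_score = (0, 0), -1
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = int(np.sum(fixed & shifted))
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

rng = np.random.default_rng(1)
img = np.zeros((64, 64), dtype=bool)
rows, cols = rng.integers(8, 56, size=30), rng.integers(8, 56, size=30)
img[rows, cols] = True                                # synthetic 'seeds'
moved = np.roll(np.roll(img, 3, axis=0), -2, axis=1)  # simulated C-arm drift
```

Recovering the drift from the overlap score alone is what lets the method dispense with external tracking hardware.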
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yagi, Mamiko; Ito, Mitsuki; Shirakashi, Jun-ichi, E-mail: shrakash@cc.tuat.ac.jp
We report a new method for fabrication of Ni nanogaps based on electromigration induced by a field emission current. This method is called “activation” and is demonstrated here using a current source with alternately reversing polarities. The activation procedure with alternating current bias, in which the current source polarity alternates between positive and negative bias conditions, is performed with planar Ni nanogaps defined on SiO2/Si substrates at room temperature. During negative biasing, a Fowler-Nordheim field emission current flows from the source (cathode) to the drain (anode) electrode. The Ni atoms at the tip of the drain electrode are thus activated and then migrate across the gap from the drain to the source electrode. In contrast, in the positive bias case, the field emission current moves the activated atoms from the source to the drain electrode. These two procedures are repeated until the tunnel resistance of the nanogaps is successively reduced from 100 TΩ to 48 kΩ. Scanning electron microscopy and atomic force microscopy studies showed that the gap separation narrowed from approximately 95 nm to less than 10 nm because of the Ni atoms that accumulated at the tips of both the source and drain electrodes. These results show that the alternately biased activation process, which is a newly proposed atom transfer technique, can successfully control the tunnel resistance of the Ni nanogaps and is a suitable method for formation of ultrasmall nanogap structures.
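A deliberately toy model of the repeated activation cycles: the exponential resistance-gap relation and the per-cycle transfer rate below are assumptions for illustration, not measured values; only the 100 TΩ → 48 kΩ endpoints come from the abstract:

```python
def activation_cycles(r_start, r_target, decades_per_cycle=0.08):
    """Count repeated bias cycles needed to bring the tunnel resistance
    from r_start down to r_target, assuming (hypothetically) that each
    cycle transfers enough atoms to shave a fixed fraction of a decade
    off the resistance (tunnel resistance being roughly exponential in
    the gap width)."""
    cycles, r = 0, r_start
    while r > r_target:
        r /= 10.0 ** decades_per_cycle
        cycles += 1
    return cycles

# The abstract's endpoints: 100 TOhm down to 48 kOhm (~9.3 decades).
n_cycles = activation_cycles(1e14, 4.8e4)
```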