NASA Astrophysics Data System (ADS)
Agarwal, P.; El-Sayed, A. A.
2018-06-01
In this paper, a new numerical technique for solving the fractional order diffusion equation is introduced. This technique basically depends on the Non-Standard finite difference method (NSFD) and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method combined with the NSFD method is used to convert the problem into a system of algebraic equations. These equations are solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some given numerical examples.
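A minimal sketch of the final stage only: solving the resulting algebraic system F(u) = 0 by Newton's iteration with a finite-difference Jacobian (illustrative toy system, not the collocation equations themselves):

```python
import numpy as np

def newton_solve(F, u0, tol=1e-10, max_iter=50, eps=1e-8):
    """Newton's method with a numerically approximated Jacobian."""
    u = u0.astype(float).copy()
    for _ in range(max_iter):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        n = u.size
        J = np.empty((n, n))
        for j in range(n):              # forward-difference Jacobian, column by column
            du = np.zeros(n); du[j] = eps
            J[:, j] = (F(u + du) - r) / eps
        u -= np.linalg.solve(J, r)      # Newton update
    return u

# toy system: u1^2 + u2 - 1 = 0, u1 - u2 = 0
F = lambda u: np.array([u[0]**2 + u[1] - 1.0, u[0] - u[1]])
print(newton_solve(F, np.array([0.5, 0.5])))
```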
A numerical projection technique for large-scale eigenvalue problems
NASA Astrophysics Data System (ADS)
Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang
2011-10-01
We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where first an effective approximate model of smaller complexity is constructed by projecting out high energy degrees of freedom, and the resulting model is then solved by some standard eigenvalue solver. Here we introduce a generalization of this idea, where both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not just to eigenvalue problems encountered in many-body systems but also to other areas of research that result in large-scale eigenvalue problems for matrices which have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
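A minimal sketch of the projection idea on a matrix with a dominant diagonal part (generic Rayleigh-Ritz projection; the paper's numerically constructed, convergent projection differs):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 30
# test matrix with a pronounced dominant diagonal part, as described above
A = np.diag(np.arange(1.0, n + 1)) + 0.01 * rng.standard_normal((n, n))
A = 0.5 * (A + A.T)

# project onto a subspace spanned by the k lowest "bare" degrees of freedom
V, _ = np.linalg.qr(np.eye(n, k) + 0.01 * rng.standard_normal((n, k)))
H = V.T @ A @ V                          # effective model of smaller complexity
ritz = np.linalg.eigvalsh(H)             # solve the small projected problem

exact = np.linalg.eigvalsh(A)[:k]
print("lowest eigenvalue: projected %.4f vs exact %.4f" % (ritz[0], exact[0]))
```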
An efficient numerical technique for calculating thermal spreading resistance
NASA Technical Reports Server (NTRS)
Gale, E. H., Jr.
1977-01-01
An efficient numerical technique for solving the equations resulting from finite difference analyses of fields governed by Poisson's equation is presented. The method is direct (noniterative) and the computer work required varies with the square of the order of the coefficient matrix. The computational work required varies with the cube of this order for standard inversion techniques, e.g., Gaussian elimination, Jordan, Doolittle, etc.
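The gain from exploiting matrix structure can be seen in an extreme case: for the tridiagonal systems arising from one-dimensional Poisson problems, a direct noniterative solve costs O(n) rather than the O(n^3) of general elimination. A minimal sketch (illustrative only, not Gale's method):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system (sub a, diag b, super c) in O(n) operations."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D Poisson: -u'' = 1 on (0,1), u(0)=u(1)=0, standard 3-point stencil
n = 100
h = 1.0 / (n + 1)
u = thomas(-np.ones(n), 2.0 * np.ones(n), -np.ones(n), np.full(n, h * h))
print(u[n // 2], "~", 0.125)   # exact mid-point value x(1-x)/2 = 1/8
```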
Numerical simulation of steady cavitating flow of viscous fluid in a Francis hydroturbine
NASA Astrophysics Data System (ADS)
Panov, L. V.; Chirkov, D. V.; Cherny, S. G.; Pylev, I. M.; Sotnikov, A. A.
2012-09-01
A numerical technique was developed for simulation of cavitating flows through the flow passage of a hydraulic turbine. The technique is based on the solution of the steady 3D Navier-Stokes equations with a liquid phase transfer equation. An approach for setting boundary conditions meeting the requirements of the cavitation testing standard was suggested. Four different models of evaporation and condensation were compared. Numerical simulations for turbines of different specific speeds were compared with experiment.
Aguayo-Ortiz, A; Mendoza, S; Olvera, D
2018-01-01
In this article we develop a Primitive Variable Recovery Scheme (PVRS) to solve any system of coupled differential conservative equations. This method directly obtains the primitive variables by applying the chain rule to the time term of the conservative equations. With this, a traditional finite volume method for the flux is applied in order to avoid violation of both the entropy and Rankine-Hugoniot jump conditions. The time evolution is then computed using a forward finite difference scheme. This numerical technique evades the recovery of the primitive vector by solving an algebraic system of equations, as is often done, and so it generalises standard techniques for solving these kinds of coupled systems. The article is presented bearing in mind special relativistic hydrodynamic numerical schemes, with an added pedagogical view in the appendix section in order to easily comprehend the PVRS. We present the convergence of the method for standard shock-tube problems of special relativistic hydrodynamics and a graphical visualisation of the errors using the fluctuations of the numerical values with respect to exact analytic solutions. The PVRS circumvents the sometimes arduous computation that arises from standard numerical techniques, which obtain the desired primitive vector solution through an algebraic polynomial of the charges.
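For comparison, a minimal conservative finite-volume update with a forward time difference, of the kind the PVRS builds on, sketched for a scalar stand-in (inviscid Burgers' equation with a local Lax-Friedrichs flux; not the paper's relativistic system):

```python
import numpy as np

# inviscid Burgers' equation u_t + (u^2/2)_x = 0 as a scalar stand-in
n = 400
x = np.linspace(0.0, 1.0, n)
dx, dt = x[1] - x[0], 0.5 / n
u = np.where(x < 0.5, 1.0, 0.0)          # shock-tube-like initial data

def flux(v):
    return 0.5 * v * v

for _ in range(200):
    ul, ur = u[:-1], u[1:]
    a = np.maximum(np.abs(ul), np.abs(ur))                 # local wave speed bound
    F = 0.5 * (flux(ul) + flux(ur)) - 0.5 * a * (ur - ul)  # Lax-Friedrichs flux
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])                  # forward difference in time
print("shock front near x =", x[np.argmin(np.abs(u - 0.5))])
```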
An Artificial Neural Networks Method for Solving Partial Differential Equations
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2010-09-01
While many analytical and numerical techniques for solving PDEs already exist, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining the standard numerical method of finite differences with the Hopfield neural network. The method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, the energy function, the updating equations, and the algorithms are developed for the method. The HFD method has been used successfully to approximate the solution of classical PDEs, such as the Wave, Heat, Poisson and Diffusion equations, and of a system of PDEs. MATLAB is used to obtain the results in both tabular and graphical form. The results are similar in accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of the Hopfield nets makes them easier to implement on fast parallel computers, while some numerical methods need extra effort for parallelization.
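A minimal sketch of the underlying idea, assuming a Hopfield-style network that relaxes by gradient descent on an energy whose minimum is the finite-difference solution (illustrative Python, not the paper's MATLAB implementation):

```python
import numpy as np

# finite-difference system for -u'' = f on (0,1), u(0)=u(1)=0
n = 50
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
b = np.pi**2 * np.sin(np.pi * x)      # f chosen so that u = sin(pi x)

# Hopfield-style relaxation: gradient descent on the quadratic "network energy"
# E(u) = 0.5 u^T A u - b^T u, whose minimum is the finite-difference solution
u = np.zeros(n)
lr = 0.45 * h * h                      # stable step size for this energy
for _ in range(20000):
    u -= lr * (A @ u - b)
print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```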
An efficient technique for the numerical solution of the bidomain equations.
Whiteley, Jonathan P
2008-08-01
Computing the numerical solution of the bidomain equations is widely accepted to be a significant computational challenge. In this study we extend a previously published semi-implicit numerical scheme with good stability properties that has been used to solve the bidomain equations (Whiteley, J.P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006). A new, efficient numerical scheme is developed which utilizes the observation that the only component of the ionic current that must be calculated on a fine spatial mesh and updated frequently is the fast sodium current. Other components of the ionic current may be calculated on a coarser mesh and updated less frequently, and then interpolated onto the finer mesh. Use of this technique to calculate the transmembrane potential and extracellular potential induces very little error in the solution. For the simulations presented in this study an increase in computational efficiency of over two orders of magnitude over standard numerical techniques is obtained.
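A minimal 1D sketch of the mesh/update splitting described above, with hypothetical current models standing in for the real membrane kinetics (not the paper's cell model):

```python
import numpy as np

x_fine = np.linspace(0.0, 1.0, 401)       # fine spatial mesh
x_coarse = x_fine[::10]                   # coarser mesh for slow currents

def I_Na_fast(V):                         # fast sodium current (hypothetical form)
    return 10.0 * V * (V - 1.0) * (V - 0.1)

def I_slow(V):                            # lumped slow currents (hypothetical form)
    return 0.5 * V

V = np.exp(-((x_fine - 0.5) / 0.1) ** 2)  # initial transmembrane potential
dt, n_steps, slow_every = 0.01, 100, 10
I_slow_fine = np.interp(x_fine, x_coarse, I_slow(V[::10]))

for step in range(n_steps):
    if step % slow_every == 0:            # slow currents: coarse mesh, rare updates,
        I_slow_fine = np.interp(x_fine, x_coarse, I_slow(V[::10]))  # interpolated
    V -= dt * (I_Na_fast(V) + I_slow_fine)   # fast current: fine mesh, every step
print(V.max())
```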
Numerical approximations for fractional diffusion equations via a Chebyshev spectral-tau method
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Ezz-Eldien, Samer S.
2013-10-01
In this paper, a class of fractional diffusion equations with variable coefficients is considered. An accurate and efficient spectral tau technique for solving the fractional diffusion equations numerically is proposed. This method is based upon Chebyshev tau approximation together with the Chebyshev operational matrix of Caputo fractional differentiation. Such an approach has the advantage of reducing the problem to the solution of a system of algebraic equations, which may then be solved by any standard numerical technique. We apply this general method to solve four specific examples. In each of the examples considered, the numerical results show that the proposed method is of high accuracy and is efficient for solving the time-dependent fractional diffusion equations.
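The reduction to algebraic equations can be sketched with a closely related Chebyshev collocation construction on an integer-order toy problem (the paper's tau method with the Caputo operational matrix differs in detail):

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes on [-1, 1]."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))           # negative row-sum trick for the diagonal
    return D, x

# solve u''(x) = exp(x), u(-1) = u(1) = 0, reduced to algebraic equations
D, x = cheb(16)
A = (D @ D)[1:-1, 1:-1]                   # interior rows/cols enforce the BCs
u = np.zeros(17)
u[1:-1] = np.linalg.solve(A, np.exp(x[1:-1]))
exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
print("max error:", np.max(np.abs(u - exact)))
```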
Fercher, A; Hitzenberger, C; Sticker, M; Zawadzki, R; Karamata, B; Lasser, T
2001-12-03
Dispersive samples introduce a wavelength dependent phase distortion to the probe beam. This leads to a noticeable loss of depth resolution in high resolution OCT using broadband light sources. The standard technique to avoid this consequence is to balance the dispersion of the sample by arranging a dispersive material in the reference arm. However, the impact of dispersion is depth dependent. A corresponding depth dependent dispersion balancing technique is difficult to implement. Here we present a numerical dispersion compensation technique for Partial Coherence Interferometry (PCI) and Optical Coherence Tomography (OCT) based on numerical correlation of the depth scan signal with a depth variant kernel. It can be used a posteriori and provides depth dependent dispersion compensation. Examples of dispersion compensated depth scan signals obtained from microscope cover glasses are presented.
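A minimal sketch of correlation with a depth-variant kernel, using a toy chirp law for the kernel (the paper's actual kernel follows the sample's measured dispersion):

```python
import numpy as np

z = np.linspace(0.0, 1.0, 2048)                   # depth axis of an A-scan
sig = np.cos(2 * np.pi * (80 * z + 30 * z ** 2))  # toy chirped (dispersed) signal
half = 64
t = np.arange(-half, half + 1) * (z[1] - z[0])    # local window coordinate

out = np.zeros_like(sig)
for i in range(half, len(z) - half):
    # depth-variant kernel: local frequency chosen to match depth z[i] (toy law)
    k = np.cos(2 * np.pi * (80 + 60 * z[i]) * t)
    out[i] = np.dot(sig[i - half:i + half + 1], k)   # correlation at this depth
print(out.max())
```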
On the numerical treatment of Coulomb forces in scattering problems
NASA Astrophysics Data System (ADS)
Randazzo, J. M.; Ancarani, L. U.; Colavecchia, F. D.; Gasaneo, G.; Frapiccini, A. L.
2012-11-01
We investigate the limiting procedures to obtain Coulomb interactions from short-range potentials. The application of standard techniques used for the two-body case (exponential and sharp cutoff) to the three-body break-up problem is illustrated numerically by considering the Temkin-Poet (TP) model of e-H processes.
Jha, Abhinav K; Caffo, Brian; Frey, Eric C
2016-01-01
The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626
Numeric data distribution: The vital role of data exchange in today's world
NASA Technical Reports Server (NTRS)
Chase, Malcolm W.
1994-01-01
The major aim of the NIST Standard Reference Data Program (SRD) is to provide critically evaluated numeric data to the scientific and technical community in a convenient and accessible form. A second aim of the program is to provide feedback into the experimental and theoretical programs to help raise the general standards of measurement. By communicating the experience gained in evaluating the world output of data in the physical sciences, NIST/SRD helps to advance the level of experimental techniques and improve the reliability of physical measurements.
Laboratory techniques and rhythmometry
NASA Technical Reports Server (NTRS)
Halberg, F.
1973-01-01
Some of the procedures used for the analysis of rhythms are illustrated, notably as these apply to current medical and biological practice. For a quantitative approach to medical and broader socio-ecologic goals, the chronobiologist gathers numerical objective reference standards for rhythmic biophysical, biochemical, and behavioral variables. These biological reference standards can be derived by specialized computer analyses of largely self-measured (until eventually automatically recorded) time series (autorhythmometry). Objective numerical values for individual and population parameters of reproductive cycles can be obtained concomitantly with characteristics of about-yearly (circannual), about-daily (circadian) and other rhythms.
Computer Program For Linear Algebra
NASA Technical Reports Server (NTRS)
Krogh, F. T.; Hanson, R. J.
1987-01-01
Collection of routines provided for basic vector operations. The Basic Linear Algebra Subprogram (BLAS) library is a collection of FORTRAN-callable routines employing standard techniques to perform basic operations of numerical linear algebra.
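Assuming a Python setting, the same FORTRAN routines are exposed through SciPy's BLAS wrappers; a minimal sketch of two of the basic operations:

```python
import numpy as np
from scipy.linalg.blas import daxpy, ddot

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
y = daxpy(x, y, a=2.0)        # AXPY: y <- 2*x + y
print(y, ddot(x, y))          # DOT: inner product of x and the updated y
```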
Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering
NASA Technical Reports Server (NTRS)
Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank
2013-01-01
This paper presents insights that conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.
Dynamic Environmental Qualification Techniques.
1981-12-01
environments peculiar to military operations and requirements. Numerous dynamic qualification test methods have been established. It was the purpose...requires the achievement of the highest practicable degree in the standardization of items, materials and engineering practices within the...standard is described as "A document that establishes engineering and technical requirements for processes, procedures, practices and methods that have
Multi-sensor Improved Sea-Surface Temperature (MISST) for IOOS - Navy Component
2013-09-30
application and data fusion techniques. 2. Parameterization of IR and MW retrieval differences, with consideration of diurnal warming and cool-skin effects...associated retrieval confidence, standard deviation (STD), and diurnal warming estimates to the application user community in the new GDS 2.0 GHRSST...including coral reefs, ocean modeling in the Gulf of Mexico, improved lake temperatures, numerical data assimilation by ocean models, numerical
Turovets, Sergei; Volkov, Vasily; Zherdetsky, Aleksej; Prakonina, Alena; Malony, Allen D
2014-01-01
The Electrical Impedance Tomography (EIT) and electroencephalography (EEG) forward problems in anisotropic inhomogeneous media like the human head belong to the class of three-dimensional boundary value problems for elliptic equations with mixed derivatives. We introduce and explore the performance of several new promising numerical techniques, which seem to be more suitable for solving these problems. The proposed numerical schemes combine the fictitious domain approach with the finite-difference method and an optimally preconditioned Conjugate Gradient- (CG-) type iterative method for treatment of the discrete model. The numerical scheme includes the standard operations of summation and multiplication of sparse matrices and vectors, as well as the FFT, making it easy to implement and well suited to effective parallel implementation. Some typical use cases for the EIT/EEG problems are considered, demonstrating the high efficiency of the proposed numerical technique.
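A minimal sketch of the preconditioned CG iteration at the core of such a solver (generic Jacobi preconditioner on a random SPD test matrix, not the paper's fictitious-domain operator):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for SPD A; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(4)
Q = rng.standard_normal((200, 200))
A = Q @ Q.T + 200 * np.eye(200)          # SPD test system
b = rng.standard_normal(200)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)           # Jacobi (diagonal) preconditioner
print(np.linalg.norm(A @ x - b))
```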
Peelle's pertinent puzzle using the Monte Carlo technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko; Talou, Patrick; Burr, Thomas
2009-01-01
We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take any form in order to assess the impact of the distribution, and obtain the least-squares solution directly from numerical simulations. We found that the standard least squares method gives the correct answer if a weighting function is properly provided. Results from numerical simulations show that the correct answer of PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct, if the common error is additive, and if the error is proportional to the measured values. The least squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non Gaussian) joint distribution.
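The puzzling standard answer can be reproduced deterministically with the commonly quoted PPP data, assumed here for illustration (two measurements, 1.5 and 1.0, with 10% independent and 20% common errors, all relative to the measured values):

```python
import numpy as np

y = np.array([1.5, 1.0])
# covariance: independent 10% errors plus a fully correlated 20% common error
V = np.diag((0.10 * y) ** 2) + np.outer(0.20 * y, 0.20 * y)

w = np.linalg.solve(V, np.ones(2))
mu = w @ y / w.sum()          # generalized least-squares estimate
print(round(mu, 2))           # -> 0.88, below both measurements: the "puzzle"
```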
Multi-technique comparison of troposphere zenith delays and gradients during CONT08
NASA Astrophysics Data System (ADS)
Teke, Kamil; Böhm, Johannes; Nilsson, Tobias; Schuh, Harald; Steigenberger, Peter; Dach, Rolf; Heinkelmann, Robert; Willis, Pascal; Haas, Rüdiger; García-Espada, Susana; Hobiger, Thomas; Ichikawa, Ryuichi; Shimizu, Shingo
2011-07-01
CONT08 was a 15-day campaign of continuous Very Long Baseline Interferometry (VLBI) sessions during the second half of August 2008 carried out by the International VLBI Service for Geodesy and Astrometry (IVS). In this study, VLBI estimates of troposphere zenith total delays (ZTD) and gradients during CONT08 were compared with those derived from observations with the Global Positioning System (GPS), Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), and water vapor radiometers (WVR) co-located with the VLBI radio telescopes. Similar geophysical models were used for the analysis of the space geodetic data, whereas the parameterization for the least-squares adjustment of the space geodetic techniques was optimized for each technique. In addition to space geodetic techniques and WVR, ZTD and gradients from numerical weather models (NWM) were used from the European Centre for Medium-Range Weather Forecasts (ECMWF) (all sites), the Japan Meteorological Agency (JMA) and Cloud Resolving Storm Simulator (CReSS) (Tsukuba), and the High Resolution Limited Area Model (HIRLAM) (European sites). Biases, standard deviations, and correlation coefficients were computed between the troposphere estimates of the various techniques for all eleven CONT08 co-located sites. ZTD from space geodetic techniques generally agree at the sub-centimetre level during CONT08, and, as expected, the best agreement is found for intra-technique comparisons: between the Vienna VLBI Software and the combined IVS solutions as well as between the Center for Orbit Determination (CODE) solution and an IGS PPP time series; both intra-technique comparisons show standard deviations of about 3-6 mm. The best inter space geodetic technique agreement of ZTD during CONT08 is found between the combined IVS and the IGS solutions with a mean standard deviation of about 6 mm over all sites, whereas the agreement with numerical weather models is between 6 and 20 mm. The standard deviations are generally larger at low latitude sites because of higher humidity, and the latter is also the reason why the standard deviations are larger at northern hemisphere stations during CONT08 in comparison to CONT02, which was observed in October 2002. The assessment of the troposphere gradients from the different techniques is not as clear because of different time intervals, different estimation properties, or different observables. However, the best inter-technique agreement is found between the IVS combined gradients and the GPS solutions with standard deviations between 0.2 and 0.7 mm.
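A minimal sketch of the comparison metrics (bias, standard deviation, correlation) on toy ZTD series; the values are illustrative, not CONT08 data:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0, 15 * 24, 1.0)                 # hourly epochs over 15 days
ztd_vlbi = 2.4 + 0.02 * np.sin(2 * np.pi * t / 24) + 0.003 * rng.standard_normal(t.size)
ztd_gps = ztd_vlbi + 0.002 + 0.004 * rng.standard_normal(t.size)   # biased twin

diff = ztd_gps - ztd_vlbi
print("bias: %.1f mm" % (1e3 * diff.mean()))
print("std:  %.1f mm" % (1e3 * diff.std(ddof=1)))
print("corr: %.3f" % np.corrcoef(ztd_gps, ztd_vlbi)[0, 1])
```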
Reduction of lithologic-log data to numbers for use in the digital computer
Morgan, C.O.; McNellis, J.M.
1971-01-01
The development of a standardized system for conveniently coding lithologic-log data for use in the digital computer has long been needed. The technique suggested involves a reduction of the original written alphanumeric log to a numeric log by use of computer programs. This numeric log can then be retrieved as a written log, interrogated for pertinent information, or analyzed statistically. ?? 1971 Plenum Publishing Corporation.
Regularization in Orbital Mechanics; Theory and Practice
NASA Astrophysics Data System (ADS)
Roa, Javier
2017-09-01
Regularized equations of motion can improve numerical integration for the propagation of orbits, and simplify the treatment of mission design problems. This monograph discusses standard techniques and recent research in the area. While each scheme is derived analytically, its accuracy is investigated numerically. Algebraic and topological aspects of the formulations are studied, as well as their application to practical scenarios such as spacecraft relative motion and new low-thrust trajectories.
NASA Astrophysics Data System (ADS)
He, Yang; Sun, Yajuan; Zhang, Ruili; Wang, Yulei; Liu, Jian; Qin, Hong
2016-09-01
We construct high order symmetric volume-preserving methods for the relativistic dynamics of a charged particle by the splitting technique with processing. By expanding the phase space to include the time t, we give a more general construction of volume-preserving methods that can be applied to systems with time-dependent electromagnetic fields. The newly derived methods provide numerical solutions with good accuracy and conservative properties over long simulation times. Furthermore, because of the use of an accuracy-enhancing processing technique, the explicit methods attain high-order accuracy and are more efficient than methods derived from standard compositions. The results are verified by numerical experiments. Linear stability analysis of the methods shows that the high order processed method allows a larger time step size in numerical integrations.
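A minimal sketch of a symmetric splitting composition on a toy Hamiltonian, where each sub-flow is exactly volume-preserving (the paper's relativistic, electromagnetic construction with processing is more elaborate):

```python
# second-order symmetric (Strang) splitting for H = p^2/2 + q^2/2:
# one step = half kick, full drift, half kick; each sub-flow preserves volume
def strang_step(q, p, dt):
    p -= 0.5 * dt * q      # half kick  (exact flow of H2 = q^2/2)
    q += dt * p            # full drift (exact flow of H1 = p^2/2)
    p -= 0.5 * dt * q      # half kick
    return q, p

q, p, dt = 1.0, 0.0, 0.1
for _ in range(1000):
    q, p = strang_step(q, p, dt)
# energy error stays bounded over long times; no secular drift
print("energy drift:", 0.5 * (q * q + p * p) - 0.5)
```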
Double row equivalent for rotator cuff repair: A biomechanical analysis of a new technique.
Robinson, Sean; Krigbaum, Henry; Kramer, Jon; Purviance, Connor; Parrish, Robin; Donahue, Joseph
2018-06-01
There are numerous configurations of double row fixation for rotator cuff tears; however, there is no consensus on the best method. In this study, we evaluated three different double-row configurations, including a new method. Our primary question is whether the new anchor and technique compare in biomechanical strength to standard double row techniques. Eighteen prepared fresh frozen bovine infraspinatus tendons were randomized to one of three groups: the New Double Row Equivalent, the Arthrex Speedbridge, and a transosseous equivalent using standard Stabilynx anchors. Biomechanical testing was performed on sawbones humeri, and ultimate load, strain, yield strength, contact area, contact pressure, and survival plots were evaluated. The new double row equivalent method demonstrated increased survival as well as greater ultimate strength, at 415 N, compared to the remaining test groups, as well as contact area and pressure equivalent to standard double row techniques. This new anchor system and technique demonstrated higher survival rates and loads to failure than standard double row techniques. These data provide a new method of rotator cuff fixation which should be further evaluated in the clinical setting. Basic science biomechanical study.
NASA Astrophysics Data System (ADS)
Aoki, Sinya
2013-07-01
We review the potential method in lattice QCD, which has recently been proposed to extract nucleon-nucleon interactions via numerical simulations. We focus on the methodology of this approach by emphasizing the strategy of the potential method, the theoretical foundation behind it, and special numerical techniques. We compare the potential method with the standard finite volume method in lattice QCD, in order to make pros and cons of the approach clear. We also present several numerical results for nucleon-nucleon potentials.
SELECTION AND CALIBRATION OF SUBSURFACE REACTIVE TRANSPORT MODELS USING A SURROGATE-MODEL APPROACH
While standard techniques for uncertainty analysis have been successfully applied to groundwater flow models, extension to reactive transport is frustrated by numerous difficulties, including excessive computational burden and parameter non-uniqueness. This research introduces a...
Efficient calculation of the polarizability: a simplified effective-energy technique
NASA Astrophysics Data System (ADS)
Berger, J. A.; Reining, L.; Sottile, F.
2012-09-01
In a recent publication [J.A. Berger, L. Reining, F. Sottile, Phys. Rev. B 82, 041103(R) (2010)] we introduced the effective-energy technique to calculate in an accurate and numerically efficient manner the GW self-energy as well as the polarizability, which is required to evaluate the screened Coulomb interaction W. In this work we show that the effective-energy technique can be used to further simplify the expression for the polarizability without a significant loss of accuracy. In contrast to standard sum-over-state methods where huge summations over empty states are required, our approach only requires summations over occupied states. The three simplest approximations we obtain for the polarizability are explicit functionals of an independent- or quasi-particle one-body reduced density matrix. We provide evidence of the numerical accuracy of this simplified effective-energy technique as well as an analysis of our method.
Graefe, F.; Marschke, J.; Dimpfl, T.; Tunn, R.
2012-01-01
Vaginal vault suspension during hysterectomy for prolapse is both a therapy for apical insufficiency and helps prevent recurrence. Numerous techniques exist, with different anatomical results and differing complications. The description of the different approaches together with a description of the vaginal vault suspension technique used at the Department for Urogynaecology at St. Hedwig Hospital could serve as a basis for reassessment and for recommendations by scientific associations regarding general standards. PMID:25278621
Modeling of tool path for the CNC sheet cutting machines
NASA Astrophysics Data System (ADS)
Petunin, Aleksandr A.
2015-11-01
In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of the cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as discrete optimization problems (generalized traveling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. For the solution of the GTSP we propose the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
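A minimal dynamic-programming sketch in the spirit of the discrete formulation, using Held-Karp on a plain TSP instance with toy coordinates (the megalopolis/GTSP model with additional constraints is richer):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)
pts = rng.random((10, 2))                # toy cutting points as "cities"
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

n = len(pts)
# C[(bits, k)] = (cost of the best path 0 -> ... -> k visiting set `bits`, parent)
C = {(1 << k, k): (D[0, k], 0) for k in range(1, n)}
for size in range(2, n):
    for subset in combinations(range(1, n), size):
        bits = sum(1 << k for k in subset)
        for k in subset:
            prev = bits & ~(1 << k)
            C[(bits, k)] = min(
                (C[(prev, m)][0] + D[m, k], m) for m in subset if m != k
            )
full = (1 << n) - 2                      # all cities except the depot 0
best = min((C[(full, k)][0] + D[k, 0], k) for k in range(1, n))
print("optimal tour length: %.3f" % best[0])
```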
Discrete distributed strain sensing of intelligent structures
NASA Technical Reports Server (NTRS)
Anderson, Mark S.; Crawley, Edward F.
1992-01-01
Techniques are developed for the design of discrete highly distributed sensor systems for use in intelligent structures. First the functional requirements for such a system are presented. Discrete spatially averaging strain sensors are then identified as satisfying the functional requirements. A variety of spatial weightings for spatially averaging sensors are examined, and their wave number characteristics are determined. Preferable spatial weightings are identified. Several numerical integration rules used to integrate such sensors in order to determine the global deflection of the structure are discussed. A numerical simulation is conducted using point and rectangular sensors mounted on a cantilevered beam under static loading. Gage factor and sensor position uncertainties are incorporated to assess the absolute error and standard deviation of the error in the estimated tip displacement found by numerically integrating the sensor outputs. An experiment is carried out using a statically loaded cantilevered beam with five point sensors. It is found that in most cases the actual experimental error is within one standard deviation of the absolute error as found in the numerical simulation.
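A minimal sketch of the integration step: surface strains from point sensors are converted to curvature and integrated twice (trapezoidal rule) to estimate the tip displacement; beam dimensions and load are assumed values:

```python
import numpy as np

L, c = 1.0, 0.005             # beam length [m], distance to neutral axis [m] (assumed)
x = np.linspace(0.0, L, 5)    # five sensor stations along the beam
P, E, I = 1.0, 70e9, 1e-9     # tip load [N] and section properties (assumed)

# surface strain of a tip-loaded cantilever: eps = M c / (E I), M = P (L - x)
eps = P * (L - x) * c / (E * I)
kappa = eps / c               # curvature recovered from strain

# integrate curvature -> slope -> deflection with the trapezoidal rule
theta = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
w = np.concatenate(([0.0], np.cumsum(0.5 * (theta[1:] + theta[:-1]) * np.diff(x))))
print("estimated tip deflection:", w[-1], " exact:", P * L**3 / (3 * E * I))
```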
The generation and use of numerical shape models for irregular Solar System objects
NASA Technical Reports Server (NTRS)
Simonelli, Damon P.; Thomas, Peter C.; Carcich, Brian T.; Veverka, Joseph
1993-01-01
We describe a procedure that allows the efficient generation of numerical shape models for irregular Solar System objects, where a numerical model is simply a table of evenly spaced body-centered latitudes and longitudes and their associated radii. This modeling technique uses a combination of data from limbs, terminators, and control points, and produces shape models that have some important advantages over analytical shape models. Accurate numerical shape models make it feasible to study irregular objects with a wide range of standard scientific analysis techniques. These applications include the determination of moments of inertia and surface gravity, the mapping of surface locations and structural orientations, photometric measurement and analysis, the reprojection and mosaicking of digital images, and the generation of albedo maps. The capabilities of our modeling procedure are illustrated through the development of an accurate numerical shape model for Phobos and the production of a global, high-resolution, high-pass-filtered digital image mosaic of this Martian moon. Other irregular objects that have been modeled, or are being modeled, include the asteroid Gaspra and the satellites Deimos, Amalthea, Epimetheus, Janus, Hyperion, and Proteus.
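A minimal sketch of one such standard analysis on a numerical shape model: computing the volume of a star-shaped body from a table of radii on an even lat/lon grid (toy radii loosely scaled to Phobos' mean radius):

```python
import numpy as np

# numerical shape model: radii on an evenly spaced lat/lon grid (toy bumps)
lat = np.deg2rad(np.linspace(-90.0, 90.0, 91))
lon = np.deg2rad(np.linspace(0.0, 360.0, 181))
LAT, LON = np.meshgrid(lat, lon, indexing="ij")
r = 11.1 * (1.0 + 0.05 * np.cos(3 * LON) * np.cos(LAT))   # km, illustrative

# volume of a star-shaped body: V = (1/3) \iint r^3 cos(lat) dlat dlon
dlat, dlon = lat[1] - lat[0], lon[1] - lon[0]
V = (r ** 3 * np.cos(LAT)).sum() * dlat * dlon / 3.0
print("volume ~ %.0f km^3" % V)
```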
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
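A minimal sketch of the idea with a bare-bones interval type (a real implementation such as INTLAB also controls rounding modes):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))

# f(x, y) = x*y - x with x = 2.0 +/- 0.1 and y = 3.0 +/- 0.2
x = Interval(1.9, 2.1)
y = Interval(2.8, 3.2)
f = x * y - x
print(f.lo, f.hi)   # guaranteed bounds on f, with no derivative bookkeeping
```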
Gene Identification Algorithms Using Exploratory Statistical Analysis of Periodicity
NASA Astrophysics Data System (ADS)
Mukherjee, Shashi Bajaj; Sen, Pradip Kumar
2010-10-01
Studying periodic patterns is a standard line of attack for recognizing DNA sequences in gene identification and similar problems, yet surprisingly little significant work has been done in this direction. This paper studies statistical properties of DNA sequences of a complete genome using a new technique. A DNA sequence is converted to a numeric sequence using various types of mappings, and the standard Fourier technique is applied to study the periodicity. Distinct statistical behaviour of the periodicity parameters is found in coding and non-coding sequences, which can be used to distinguish between these parts. Here DNA sequences of Drosophila melanogaster were analyzed with significant accuracy.
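A minimal sketch using binary indicator mappings and the FFT; the period-3 component is the signature commonly used to flag coding regions (toy sequence, not the Drosophila data):

```python
import numpy as np

seq = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG" * 8   # toy "coding-like" sequence
N = len(seq)

# one binary indicator (numeric) sequence per base; sum the four power spectra
power = np.zeros(N // 2 + 1)
for base in "ACGT":
    u = np.array([1.0 if s == base else 0.0 for s in seq])
    power += np.abs(np.fft.rfft(u - u.mean())) ** 2

freqs = np.fft.rfftfreq(N)                 # in cycles per base
k3 = np.argmin(np.abs(freqs - 1.0 / 3.0))  # the period-3 component
print("relative period-3 power:", power[k3] / power[1:].mean())
```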
Importance of inlet boundary conditions for numerical simulation of combustor flows
NASA Technical Reports Server (NTRS)
Sturgess, G. J.; Syed, S. A.; Mcmanus, K. R.
1983-01-01
Fluid dynamic computer codes for the mathematical simulation of problems in gas turbine engine combustion systems are required as design and diagnostic tools. To eventually achieve a performance standard with these codes of more than qualitative accuracy it is desirable to use benchmark experiments for validation studies. Typical of the fluid dynamic computer codes being developed for combustor simulations is the TEACH (Teaching Elliptic Axisymmetric Characteristics Heuristically) solution procedure. It is difficult to find suitable experiments which satisfy the present definition of benchmark quality. For the majority of the available experiments there is a lack of information concerning the boundary conditions. A standard TEACH-type numerical technique is applied to a number of test-case experiments. It is found that numerical simulations of gas turbine combustor-relevant flows can be sensitive to the plane at which the calculations start and the spatial distributions of inlet quantities for swirling flows.
NASA Astrophysics Data System (ADS)
Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik
2017-03-01
Video Photoplethysmography (VPPG) is a numerical technique that processes standard RGB video data of exposed human skin and extracts the heart-rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart-rate, respiratory rate, and even heart rate variability, with potential applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations are dependent on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart-rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired by using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial-feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches can provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size, and averaging techniques applied to regions-of-interest, as well as the number of video frames used for data processing.
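A minimal sketch of the spectral HR-extraction step on a synthetic green-channel trace (no face tracking; the frame rate and signal model are assumptions):

```python
import numpy as np

fps, n = 30.0, 900                        # 30 s of video at 30 fps (assumed)
t = np.arange(n) / fps
hr_true = 72.0 / 60.0                     # 72 bpm, expressed in Hz
# stand-in for the mean green-channel value of a skin region of interest
g = (0.02 * np.sin(2 * np.pi * hr_true * t)
     + 0.01 * np.random.default_rng(3).standard_normal(n))

g = g - g.mean()
spec = np.abs(np.fft.rfft(g * np.hanning(n))) ** 2
f = np.fft.rfftfreq(n, 1.0 / fps)
band = (f > 0.7) & (f < 4.0)              # plausible human HR band, 42-240 bpm
print("estimated HR: %.0f bpm" % (60 * f[band][np.argmax(spec[band])]))
```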
Proposal of a micromagnetic standard problem for ferromagnetic resonance simulations
NASA Astrophysics Data System (ADS)
Baker, Alexander; Beg, Marijan; Ashton, Gregory; Albert, Maximilian; Chernyshenko, Dmitri; Wang, Weiwei; Zhang, Shilei; Bisotti, Marc-Antonio; Franchin, Matteo; Hu, Chun Lian; Stamps, Robert; Hesjedal, Thorsten; Fangohr, Hans
2017-01-01
Nowadays, micromagnetic simulations are a common tool for studying a wide range of different magnetic phenomena, including the ferromagnetic resonance. A technique for evaluating reliability and validity of different micromagnetic simulation tools is the simulation of proposed standard problems. We propose a new standard problem by providing a detailed specification and analysis of a sufficiently simple problem. By analyzing the magnetization dynamics in a thin permalloy square sample, triggered by a well defined excitation, we obtain the ferromagnetic resonance spectrum and identify the resonance modes via Fourier transform. Simulations are performed using both finite difference and finite element numerical methods, with OOMMF and Nmag simulators, respectively. We report the effects of initial conditions and simulation parameters on the character of the observed resonance modes for this standard problem. We provide detailed instructions and code to assist in using the results for evaluation of new simulator tools, and to help with numerical calculation of ferromagnetic resonance spectra and modes in general.
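A minimal sketch of the post-processing step, applying a windowed FFT to a toy ringdown standing in for the simulated spatially averaged magnetization (the mode frequencies are hypothetical):

```python
import numpy as np

dt, n = 5e-12, 4096                      # 5 ps sampling (assumed), 4096 steps
t = np.arange(n) * dt
f1, f2 = 8.25e9, 11.25e9                 # hypothetical resonance mode frequencies
my = (np.exp(-t / 5e-9) * np.sin(2 * np.pi * f1 * t)
      + 0.4 * np.exp(-t / 5e-9) * np.sin(2 * np.pi * f2 * t))

spec = np.abs(np.fft.rfft(my * np.hanning(n))) ** 2
freq = np.fft.rfftfreq(n, dt)
print("dominant mode at %.2f GHz" % (freq[np.argmax(spec)] / 1e9))
```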
Numerical evaluation of the radiation from unbaffled, finite plates using the FFT
NASA Technical Reports Server (NTRS)
Williams, E. G.
1983-01-01
An iteration technique is described which numerically evaluates the acoustic pressure and velocity on and near unbaffled, finite, thin plates vibrating in air. The technique is based on Rayleigh's integral formula and its inverse. These formulas are written in their angular spectrum form so that the fast Fourier transform (FFT) algorithm may be used to evaluate them. As an example of the technique the pressure on the surface of a vibrating, unbaffled disk is computed and shown to be in excellent agreement with the exact solution using oblate spheroidal functions. Furthermore, the computed velocity field outside the disk shows the well-known singularity at the rim of the disk. The radiated fields from unbaffled flat sources of any geometry with prescribed surface velocity may be evaluated using this technique. The use of the FFT to perform the integrations in Rayleigh's formulas provides a great savings in computation time compared with standard integration algorithms, especially when an array processor can be used to implement the FFT.
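A minimal sketch of one angular-spectrum evaluation, propagating a piston-like velocity distribution to a parallel plane with FFTs (the textbook relation p = rho*c*(k/kz)*V*exp(i*kz*z); parameters are illustrative):

```python
import numpy as np

n, dx = 256, 1e-3
k = 2 * np.pi * 5000.0 / 343.0                    # wavenumber of 5 kHz sound in air
vz = np.zeros((n, n), complex)
vz[n//2-8:n//2+8, n//2-8:n//2+8] = 1.0            # piston-like normal velocity

kx = 2 * np.pi * np.fft.fftfreq(n, dx)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # evanescent part -> imaginary

rho_c = 1.21 * 343.0
z = 5e-3
# Rayleigh's first integral in angular-spectrum form, evaluated with FFTs
P = rho_c * k / kz * np.fft.fft2(vz) * np.exp(1j * kz * z)
p = np.fft.ifft2(P)
print("peak |pressure| at z = 5 mm:", np.abs(p).max())
```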
de Jong, Marjan; Lucas, Cees; Bredero, Hansje; van Adrichem, Leon; Tibboel, Dick; van Dijk, Monique
2012-08-01
This article is a report of a randomized controlled trial of the effects of 'M' technique massage with or without mandarin oil compared to standard postoperative care on infants' levels of pain and distress, heart rate and mean arterial pressure after major craniofacial surgery. There is a growing interest in non-pharmacological interventions such as aromatherapy massage in hospitalized children to relieve pain and distress, but well-performed studies are lacking. This randomized controlled trial allocated 60 children aged 3-36 months after craniofacial surgery from January 2008 to August 2009 to one of three conditions: 'M' technique massage with carrier oil, 'M' technique massage with mandarin oil, or standard postoperative care. Primary outcome measures were changes in COMFORT behaviour scores, Numeric Rating Scale pain and Numeric Rating Scale distress scores assessed from videotape by an observer blinded to the condition. In all three groups, the mean postintervention COMFORT behaviour scores were higher than the baseline scores, but differences were not statistically significant. Heart rate and mean arterial pressure showed a statistically significant change across the three assessment periods in all three groups. These changes were not related to the intervention. Results do not support a benefit of 'M' technique massage with or without mandarin oil in these young postoperative patients. Several reasons may account for this: massage given too soon after general anaesthesia, young patients' fear of strangers touching them, patients not used to massage. © 2011 Blackwell Publishing Ltd.
SUBOPT: A CAD program for suboptimal linear regulators
NASA Technical Reports Server (NTRS)
Fleming, P. J.
1985-01-01
An interactive software package which provides design solutions for both standard linear quadratic regulator (LQR) and suboptimal linear regulator problems is described. Intended for time-invariant continuous systems, the package is easily modified to include sampled-data systems. LQR designs are obtained by established techniques while the large class of suboptimal problems containing controller and/or performance index options is solved using a robust gradient minimization technique. Numerical examples demonstrate features of the package and recent developments are described.
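A minimal sketch of the standard LQR branch, assuming SciPy's Riccati solver in place of the package's own routines:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# double-integrator plant with illustrative LQR weights
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1.0, 0.1])
R = np.array([[0.5]])

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain u = -K x
print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```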
RE-NUMERATE: A Workshop to Restore Essential Numerical Skills and Thinking via Astronomy Education
NASA Astrophysics Data System (ADS)
McCarthy, D.; Follette, K.
2013-04-01
The quality of science teaching for all ages is degraded by our students' gross lack of skills in elementary arithmetic and their unwillingness to think, and to express themselves, numerically. Out of frustration educators, and science communicators, often choose to avoid these problems, thereby reinforcing the belief that math is only needed in “math class” and preventing students from maturing into capable, well informed citizens. In this sense we teach students a pseudo science, not its real nature, beauty, and value. This workshop encourages and equips educators to immerse students in numerical thinking throughout a science course. The workshop begins by identifying common deficiencies in skills and attitudes among non-science collegians (freshman-senior) enrolled in General Education astronomy courses. The bulk of the workshop engages participants in well-tested techniques (e.g., presentation methods, curriculum, activities, mentoring approaches, etc.) for improving students' arithmetic skills, increasing their confidence, and improving their abilities in numerical expression. These techniques are grounded in 25+ years of experience in college classrooms and pre-college informal education. They are suited for use in classrooms (K-12 and college), informal venues, and science communication in general and could be applied across the standard school curriculum.
NSWC Library of Mathematics Subroutines
1990-01-01
sufficiently many zero elements for it to be worthwhile to use special techniques that avoid storing and operating with the zeros. The scheme adopted by the... general purpose numerical mathematics subroutines began. The subroutines are written in ANSI standard Fortran. This manual describes the subroutines in... [table-of-contents fragment: PLCOPY, DPCOPY; Addition of Polynomials - PADD, DPADD; Subtraction of Polynomials - PSUBT, DPSUBT]
Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model
NASA Astrophysics Data System (ADS)
Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.
2017-10-01
We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve properties analogous to those of the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems, for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
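A minimal sketch of one Peaceman-Rachford ADI step for a plain 2D diffusion stand-in with reflecting boundaries (the osmosis model's drift terms are omitted):

```python
import numpy as np
from scipy.linalg import solve_banded

n, dt, h = 128, 0.25, 1.0
u = np.random.default_rng(0).random((n, n))   # field / image intensities
r = dt / (2 * h * h)

# tridiagonal bands of (I - (dt/2) * D2) with reflecting (Neumann) boundaries
ab = np.zeros((3, n))
ab[0, 1:] = -r                                # superdiagonal
ab[1, :] = 1 + 2 * r                          # main diagonal
ab[1, 0] = ab[1, -1] = 1 + r
ab[2, :-1] = -r                               # subdiagonal

def d2(v, axis):
    """Explicit second difference along an axis, reflecting boundaries."""
    w = np.swapaxes(v, 0, axis)
    out = np.empty_like(w)
    out[1:-1] = w[2:] - 2 * w[1:-1] + w[:-2]
    out[0] = w[1] - w[0]
    out[-1] = w[-2] - w[-1]
    return np.swapaxes(out, 0, axis) / (h * h)

# one ADI step for u_t = u_xx + u_yy: implicit in x, then implicit in y
u_half = solve_banded((1, 1), ab, u + 0.5 * dt * d2(u, axis=1))
u_new = solve_banded((1, 1), ab, (u_half + 0.5 * dt * d2(u_half, axis=0)).T).T
print(u.mean(), u_new.mean())                 # mean intensity is conserved
```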
An Operator-Integration-Factor Splitting (OIFS) method for Incompressible Flows in Moving Domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, Saumil S.; Fischer, Paul F.; Min, Misun
In this paper, we present a characteristic-based numerical procedure for simulating incompressible flows in domains with moving boundaries. Our approach utilizes an operator-integration-factor splitting technique to help produce an efficient and stable numerical scheme. Using the spectral element method and an arbitrary Lagrangian-Eulerian formulation, we investigate flows where the convective acceleration effects are non-negligible. Several examples, ranging from laminar to turbulent flows, are considered. Comparisons with a standard, semi-implicit time-stepping procedure illustrate the improved performance of the scheme.
Thermal sensing of cryogenic wind tunnel model surfaces Evaluation of silicon diodes
NASA Technical Reports Server (NTRS)
Daryabeigi, K.; Ash, R. L.; Dillon-Townes, L. A.
1986-01-01
Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.
NASA Astrophysics Data System (ADS)
Islam, Amina; Chevalier, Sylvie; Sassi, Mohamed
2018-04-01
With advances in imaging techniques and computational power, Digital Rock Physics (DRP) is becoming an increasingly popular tool to characterize reservoir samples and determine their internal structure and flow properties. In this work, we present the details of imaging, segmentation, and numerical simulation of single-phase flow through a standard homogeneous Silurian dolomite core plug sample as well as a heterogeneous sample from a carbonate reservoir. We develop a procedure that integrates experimental results into the segmentation step to calibrate the porosity. We also investigate two different numerical tools for the simulation, namely Avizo Fire Xlab Hydro, which solves the Stokes equations via the finite volume method, and Palabos, which solves the same equations using the Lattice Boltzmann Method. Representative Elementary Volume (REV) and isotropy studies are conducted on the two samples, and we show how DRP can be a useful tool to characterize rock properties that are time consuming and costly to obtain experimentally.
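Once a flow rate has been computed on the segmented pore space, permeability follows from Darcy's law; a minimal sketch with assumed numbers:

```python
# single-phase Darcy permeability from a simulated flow experiment (toy values)
Q = 1.0e-10              # computed volumetric flow rate [m^3/s] (assumed)
mu = 1.0e-3              # water viscosity [Pa s]
L, A = 2.0e-3, 1.0e-6    # sample length [m] and cross-section area [m^2] (assumed)
dP = 1.0e3               # applied pressure drop [Pa] (assumed)

k = Q * mu * L / (A * dP)               # Darcy's law rearranged for permeability
print("permeability: %.0f mD" % (k / 9.869e-16))   # 1 mD = 9.869e-16 m^2
```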
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-29
[...] Guidance for Industry on Standards for Securing the Drug Supply Chain--Standardized Numerical... industry entitled "Standards for Securing the Drug Supply Chain--Standardized Numerical Identification for... the Drug Supply Chain--Standardized Numerical Identification for Prescription Drug Packages." In the...
NASA Astrophysics Data System (ADS)
Wang, H. L.; Han, W.; Xu, M.
2011-12-01
Measurement of the water flow rate in microchannels has been one of the hottest topics in applications of microfluidics and in medical, biological, and chemical analyses. In this study, the scanning microscale particle image velocimetry (scanning micro-PIV) technique is used to measure water flow rates in a straight microchannel of 200 μm width and 60 μm depth under standard flow rates ranging from 2.481 μL/min to 8.269 μL/min. The main feature of this measurement technique is to obtain the three-dimensional velocity distribution on cross sections of the microchannel by measuring velocities of the different fluid layers along the out-of-plane direction, so the water flow rates can be evaluated from the discrete surface integral of the velocities on the cross section. At the same time, the three-dimensional velocity fields in the measured microchannel are simulated numerically using the FLUENT software in order to verify the velocity accuracy of the measurement results. The results show that the experimental values of the flow rates are well consistent with the standard flow rates input by the syringe pump, and the numerical and experimental results are fundamentally consistent. This study indicates that the micro-flow rate evaluated from three-dimensional velocity data by the scanning micro-PIV technique is a promising method for micro-flow rate research.
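A minimal sketch of the flow-rate evaluation, integrating a toy no-slip velocity profile over the stated 200 μm x 60 μm cross section with a 2D trapezoidal rule (the peak velocity is an assumed value):

```python
import numpy as np

W, H = 200e-6, 60e-6                      # channel width and depth [m]
y = np.linspace(0.0, W, 41)               # in-plane coordinate
z = np.linspace(0.0, H, 13)               # scanned depth planes
Y, Z = np.meshgrid(y, z, indexing="ij")

u_max = 0.02                              # peak velocity [m/s], illustrative
u = u_max * 4 * Y * (W - Y) / W**2 * 4 * Z * (H - Z) / H**2   # toy no-slip profile

# 2D trapezoidal rule over the cross section gives the volume flow rate
cell = 0.25 * (u[:-1, :-1] + u[1:, :-1] + u[:-1, 1:] + u[1:, 1:])
Q = np.sum(cell) * (y[1] - y[0]) * (z[1] - z[0])
print("flow rate: %.2f uL/min" % (Q * 1e9 * 60))   # within the range quoted above
```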
Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes, with incremental studies, was initiated. Here we further refine the analysis of, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage in the Runge-Kutta time stepping. We could think of a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a previously introduced wavelet technique as sensors. Here, the method is briefly described with selected numerical experiments.
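A minimal sketch of a wavelet-type sensor, flagging cells by one level of Haar detail coefficients (illustrative; the multiresolution sensor used in the study is more elaborate):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 512)
u = np.sin(2 * np.pi * x) + (x > 0.6)            # smooth field plus a shock

# one level of Haar detail coefficients as a discontinuity sensor:
# large |d| flags regions where nonlinear filtering/dissipation should act
d = (u[1::2] - u[0::2]) / np.sqrt(2.0)
flag = np.abs(d) > 10 * np.median(np.abs(d))
print("flagged cell pairs near the jump:", np.nonzero(flag)[0] * 2)
```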
WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method
NASA Astrophysics Data System (ADS)
Crevoisier, David; Voltz, Marc
2013-04-01
To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach, and improvements of simulation accuracy by data-assimilation techniques are now used in many application fields. The latest acquisition techniques provide a large amount of experimental data, which increases the efficiency of parameter estimation and inverse modelling approaches. In turn, simulations are often run on large temporal and spatial domains, which requires a large number of model runs. Thus, despite the regular increase in computing capacities, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes is still a challenge. Ross (2003, Agron J; 95:1352-1361) proposed a method, solving the 1D Richards' and convection-diffusion equations, that fulfils these requirements. The method is based on a non-iterative approach which reduces the risk of numerical divergence and allows the use of coarser spatial and temporal discretisations, while assuring satisfactory accuracy of the results. Crevoisier et al. (2009, Adv Wat Res; 32:936-947) proposed some technical improvements and validated this method on a wider range of agro-pedo-climatic situations. In this poster, we present the simulation code WATSFAR, which generalises the Ross method to other mathematical representations of the soil water retention curve (i.e. standard and modified van Genuchten models) and includes a dual-permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be the least favourable when using standard numerical methods: fine-textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, ... The results of WATSFAR have been compared with the standard finite element model Hydrus. The analysis of these comparisons highlights two main advantages of WATSFAR: i) robustness: even for fine-textured soils or high water and solute fluxes, where Hydrus simulations may fail to converge, no numerical problem appears; and ii) accuracy of simulations even for coarse spatial discretisations, which can only be matched by Hydrus with fine discretisations.
Regnier, D.; Litaize, O.; Serot, O.
2015-12-23
Numerous nuclear processes involve the deexcitation of a compound nucleus through the emission of several neutrons, gamma-rays and/or conversion electrons. The characteristics of such a deexcitation are commonly derived from a total statistical framework often called the “Hauser–Feshbach” method. In this work, we highlight a numerical limitation of this kind of method in the case of the deexcitation of a high spin initial state. To circumvent this issue, an improved technique called the Fluctuating Structure Properties (FSP) method is presented. Two FSP algorithms are derived and benchmarked on the calculation of the total radiative width for a thermal neutron capture on 238U. We compare the standard method with these FSP algorithms for the prediction of particle multiplicities in the deexcitation of a high spin level of 143Ba. The gamma multiplicity turns out to be very sensitive to the numerical method. The bias between the two techniques can reach 1.5 γ/cascade. Lastly, the uncertainty of these calculations coming from the lack of knowledge of nuclear structure is estimated via the FSP method.
Applications of numerical methods to simulate the movement of contaminants in groundwater.
Sun, N Z
1989-01-01
This paper reviews mathematical models and numerical methods that have been extensively used to simulate the movement of contaminants through the subsurface. The major emphasis is placed on numerical methods for advection-dominated transport problems and inverse problems. Several mathematical models that are commonly used in field problems are listed. A variety of numerical solutions for three-dimensional models are introduced, including the multiple cell balance method, which can be considered a variation of the finite element method. The multiple cell balance method is easy to understand and convenient for solving field problems. When advective transport dominates dispersive transport, two kinds of numerical difficulties, overshoot and numerical dispersion, arise in standard finite difference and finite element methods. To overcome these numerical difficulties, various numerical techniques have been developed, such as upstream weighting methods and moving point methods. A complete review of these methods is given, and we also mention the problems of parameter identification, reliability analysis, and optimal-experiment design that are absolutely necessary for constructing a practical model.
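The overshoot versus numerical-dispersion trade-off mentioned above can be seen in a few lines; the sketch below contrasts centered and upstream (upwind) weighting for a single explicit 1D advection step (illustrative only, with v > 0 and periodic boundaries assumed):

```python
import numpy as np

def advect_step(c, v, dx, dt, upwind=True):
    # One explicit step of 1D advection, v > 0, periodic boundaries.
    # Centered differencing overshoots near sharp fronts; upstream
    # weighting suppresses the oscillations at the cost of extra
    # numerical dispersion (front smearing).
    if upwind:
        dcdx = (c - np.roll(c, 1)) / dx                      # backward difference
    else:
        dcdx = (np.roll(c, -1) - np.roll(c, 1)) / (2 * dx)   # centered difference
    return c - v * dt * dcdx

# sharp concentration front advected at Courant number 0.5
c = np.where(np.arange(100) < 50, 1.0, 0.0)
c_upwind = advect_step(c, v=1.0, dx=1.0, dt=0.5)
c_centered = advect_step(c, v=1.0, dx=1.0, dt=0.5, upwind=False)
```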
Numerical Taxonomy of Some Bacteria Isolated from Antarctic and Tropical Seawaters
Pfister, Robert M.; Burkholder, Paul R.
1965-01-01
Pfister, Robert M. (Lamont Geological Observatory, Palisades, N.Y.), and Paul R. Burkholder. Numerical taxonomy of some bacteria isolated from Antarctic and tropical seawaters. J. Bacteriol. 90:863–872. 1965.—Microorganisms from Antarctic seas and from tropical waters near Puerto Rico were examined with a series of morphological, physiological, and biochemical tests. The results of these analyses were coded on punch cards, and similarity matrices were computed with a program for an IBM 1620 computer. When the matrix was reordered by use of the single-linkage technique, and the results were plotted with four symbols for different per cent similarity ranges, nine groups of microorganisms were revealed. The data suggest that organisms occurring in different areas of the open ocean may be profitably studied with standardized computer techniques.
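A modern equivalent of that punch-card workflow fits in a few lines; the sketch below computes a simple-matching similarity matrix from binary test results and clusters it by single linkage. The test data here are hypothetical, standing in for the coded morphological and biochemical tests:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# rows = strains, columns = coded binary test outcomes (hypothetical)
tests = np.random.default_rng(0).integers(0, 2, size=(12, 40))

# simple-matching similarity: fraction of tests with identical outcome
n = tests.shape[0]
S = np.array([[np.mean(tests[i] == tests[j]) for j in range(n)]
              for i in range(n)])

# single-linkage clustering on the corresponding distances 1 - S
condensed = (1.0 - S)[np.triu_indices(n, k=1)]
Z = linkage(condensed, method='single')
groups = fcluster(Z, t=0.3, criterion='distance')   # cut at 70% similarity
```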
A modified form of conjugate gradient method for unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa
2016-06-01
Conjugate gradient (CG) methods have been recognized as an interesting technique for solving optimization problems, due to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aus. J. Bas. Appl. Sci. 5(2011) 947-951). We then show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed method is efficient on the given standard test problems, compared to other existing CG methods.
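For orientation, a generic nonlinear CG iteration has the form below, shown with the classical Fletcher-Reeves coefficient; the paper proposes a different choice of the coefficient β_k (following Rivaie et al.), so this is a template rather than the new method itself:

```latex
x_{k+1} = x_k + \alpha_k d_k, \qquad
d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad
\beta_k^{\mathrm{FR}} = \frac{\lVert g_{k+1}\rVert^2}{\lVert g_k\rVert^2},
```

where g_k denotes the gradient at x_k and the step length α_k is obtained from the (exact) line search.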
Rahman, Zia Ur; Sethi, Pooja; Murtaza, Ghulam; Virk, Hafeez Ul Hassan; Rai, Aitzaz; Mahmod, Masliza; Schoondyke, Jeffrey; Albalbissi, Kais
2017-01-01
Cardiovascular disease is a leading cause of morbidity and mortality globally. Early diagnostic markers are gaining popularity for better patient care and disease outcomes. There is an increasing interest in noninvasive cardiac imaging biomarkers to diagnose subclinical cardiac disease. Feature tracking cardiac magnetic resonance imaging is a novel post-processing technique that is increasingly being employed to assess global and regional myocardial function. This technique has numerous applications in structural and functional diagnostics. It has been validated in multiple studies, although there is still a long way to go for it to become routine standard of care.
Explosion localization via infrasound.
Szuberla, Curt A L; Olson, John V; Arnoult, Kenneth M
2009-11-01
Two acoustic source localization techniques were applied to infrasonic data and their relative performance was assessed. The standard approach for low-frequency localization uses an ensemble of small arrays to separately estimate far-field source bearings, the solution following from the various back azimuths. This method was compared to one developed by the authors that treats the smaller subarrays as a single meta-array. In numerical simulation and a field experiment, the latter technique was found to provide improved localization precision everywhere in the vicinity of a 3-km-aperture meta-array, often by an order of magnitude.
On the primary variable switching technique for simulating unsaturated-saturated flows
NASA Astrophysics Data System (ADS)
Diersch, H.-J. G.; Perrochet, P.
Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. The two schemes behave differently with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches, the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), for which comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.
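The core switching rule can be sketched in a few lines; the snippet below assumes a van Genuchten retention curve and a saturation threshold, with the names vg_saturation, primary_variables and s_switch purely illustrative (the actual PCOSN/TBFN Newton machinery is considerably more involved):

```python
import numpy as np

def vg_saturation(h, alpha=1.0, n=2.0):
    # van Genuchten effective saturation for pressure head h (h < 0
    # unsaturated, h >= 0 saturated); parameter values are illustrative.
    m = 1.0 - 1.0 / n
    return np.where(h < 0.0, (1.0 + np.abs(alpha * h) ** n) ** (-m), 1.0)

def primary_variables(h, s_switch=0.99):
    # Variable switching rule: nodes below the saturation threshold use
    # saturation as the Newton unknown, (nearly) saturated nodes use
    # pressure head, avoiding the degeneracy of each pure formulation.
    s = vg_saturation(h)
    use_saturation = s < s_switch
    return np.where(use_saturation, s, h), use_saturation

h = np.array([-10.0, -0.5, 0.2])        # dry, moist, and ponded nodes
unknowns, mask = primary_variables(h)
```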
Insertion device calculations with mathematica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, R.; Lidia, S.
1995-02-01
The design of accelerator insertion devices such as wigglers and undulators has usually been aided by numerical modeling on digital computers, using code in high level languages like Fortran. In the present era, there are higher level programming environments like IDL®, MatLab®, and Mathematica® in which these calculations may be performed by writing much less code, and in which standard mathematical techniques are very easily used. The authors present a suite of standard insertion device modeling routines in Mathematica to illustrate the new techniques. These routines include a simple way to generate magnetic fields using blocks of CSEM materials, trajectory solutions from the Lorentz force equations for given magnetic fields, Bessel function calculations of radiation for wigglers and undulators, and general radiation calculations for undulators.
Numerical analysis of standard and modified osteosynthesis in long bone fractures treatment.
Sisljagić, Vladimir; Jovanović, Savo; Mrcela, Tomislav; Radić, Radivoje; Selthofer, Robert; Mrcela, Milanka
2010-03-01
The fundamental problem in osteoporotic fracture treatment is the significant decrease in bone mass and bone tissue density, resulting in decreased firmness and elasticity of osteoporotic bone. Application of standard implants and standard surgical techniques in osteoporotic bone fracture treatment makes it almost impossible to achieve stable osteosynthesis sufficient for early mobility, verticalization and load. Taking into account the form and size of the contact surface, as well as the distribution of forces between the osteosynthetic materials and the bone tissue, numerical analysis showed the advantages of modified osteosynthesis with bone cement filling in the screw bed. The applied numerical model consisted of three sub-models: a 3D model of solid elements, a 3D cross-section of the contact between the plate and the bone, and part of a 3D cross-section of the screw head and body. We conclude that modified osteosynthesis with bone cement resulted in weaker strain in the part of the plate above the fracture fissure, more even strain on the screws, plate and bone, more even strain distribution along all the screws' bodies, significantly greater strain in the part of the screw head opposite to the fracture fissure, firm connection of the screw head and neck and the plate hole with the whole plate, and more even bone strain around the screw.
The role of computerized symbolic manipulation in rotorcraft dynamics analysis
NASA Technical Reports Server (NTRS)
Crespo Da Silva, Marcelo R. M.; Hodges, Dewey H.
1986-01-01
The potential role of symbolic manipulation programs in development and solution of the governing equations for rotorcraft dynamics problems is discussed and illustrated. Nonlinear equations of motion for a helicopter rotor blade represented by a rotating beam are developed making use of the computerized symbolic manipulation program MACSYMA. The use of computerized symbolic manipulation allows the analyst to concentrate on more meaningful tasks, such as establishment of physical assumptions, without being sidetracked by the tedious and trivial details of the algebraic manipulations. Furthermore, the resulting equations can be produced, if necessary, in a format suitable for numerical solution. A perturbation-type solution for the resulting dynamical equations is shown to be possible with a combination of symbolic manipulation and standard numerical techniques. This should ultimately lead to a greater physical understanding of the behavior of the solution than is possible with purely numerical techniques. The perturbation analysis of the flapping motion of a rigid rotor blade in forward flight is presented, for illustrative purposes, via computerized symbolic manipulation with a method that bypasses Floquet theory.
NASA Astrophysics Data System (ADS)
Hozman, J.; Tichý, T.
2017-12-01
Stochastic volatility models capture the real-world features of options better than the classical Black-Scholes treatment. Here we focus on the pricing of European-style options under the Stein-Stein stochastic volatility model, where the option value depends on the time, on the price of the underlying asset, and on the volatility as a function of a mean-reverting Ornstein-Uhlenbeck process. A standard mathematical approach to this model leads to a non-stationary second-order degenerate partial differential equation in two spatial variables, completed by a system of boundary and terminal conditions. In order to improve the numerical valuation process for such a pricing equation, we propose a numerical technique based on the discontinuous Galerkin method and the Crank-Nicolson scheme. Finally, reference numerical experiments on real market data illustrate comprehensive empirical findings on options with stochastic volatility.
Generalization of von Neumann analysis for a model of two discrete half-spaces: The acoustic case
Haney, M.M.
2007-01-01
Evaluating the performance of finite-difference algorithms typically uses a technique known as von Neumann analysis. For a given algorithm, application of the technique yields both a dispersion relation valid for the discrete time-space grid and a mathematical condition for stability. In practice, a major shortcoming of conventional von Neumann analysis is that it can be applied only to an idealized numerical model - that of an infinite, homogeneous whole space. Experience has shown that numerical instabilities often arise in finite-difference simulations of wave propagation at interfaces with strong material contrasts. These interface instabilities occur even though the conventional von Neumann stability criterion may be satisfied at each point of the numerical model. To address this issue, I generalize von Neumann analysis to a model of two half-spaces. I perform the analysis for the case of acoustic wave propagation using a standard staggered-grid finite-difference numerical scheme. By deriving expressions for the discrete reflection and transmission coefficients, I study under what conditions these coefficients become unbounded. I find that instabilities encountered in numerical modeling near interfaces with strong material contrasts are linked to these cases, and I develop a modified stability criterion that takes the resulting instabilities into account. I test and verify the stability criterion by executing a finite-difference algorithm under conditions predicted to be stable and unstable.
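For readers unfamiliar with the conventional analysis: one substitutes a discrete plane wave into the scheme and bounds the amplification factor, which for the standard 1D staggered-grid acoustic scheme yields the familiar CFL bound. This is a textbook sketch, not the paper's two-half-space generalization:

```latex
u_j^n = A^n e^{\,i k j \Delta x}, \qquad
\text{stability} \;\Longleftrightarrow\; |A(k)| \le 1 \ \ \forall k,
\qquad\text{giving}\qquad
\frac{c\,\Delta t}{\Delta x} \le 1 .
```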
NASA Astrophysics Data System (ADS)
Jafari, Azadeh; Deville, Michel O.; Fiétier, Nicolas
2008-09-01
This study discusses the capability of constitutive laws for the matrix logarithm of the conformation tensor (LCT model) within the framework of the spectral element method. High Weissenberg number problems (HWNP) usually produce a lack of convergence of the numerical algorithms. Even though the question of whether the HWNP is a purely numerical problem or rather a breakdown of the constitutive law of the model has remained somewhat of a mystery, it has been recognized that the selection of an appropriate constitutive equation constitutes a very crucial step, although implementing a suitable numerical technique is still important for successful discrete modeling of non-Newtonian flows. The LCT formulation of the viscoelastic equations originally suggested by Fattal and Kupferman is applied to the two-dimensional (2D) FENE-CR model. Planar Poiseuille flow is considered as a benchmark problem to test this representation at high Weissenberg number. The numerical results are compared with the numerical solution of the standard constitutive equation.
Concept and numerical simulations of a reactive anti-fragment armour layer
NASA Astrophysics Data System (ADS)
Hušek, Martin; Kala, Jiří; Král, Petr; Hokeš, Filip
2017-07-01
The contribution describes the concept and numerical simulation of a ballistic protective layer which is able to actively resist projectiles or smaller colliding fragments flying at high speed. The principle of the layer was designed on the basis of the action/reaction system of reactive armour, which is used for the protection of armoured vehicles. As the designed ballistic layer consists of steel plates simultaneously combined with explosive material (primary and secondary explosives), the technique of coupling the Finite Element Method with Smoothed Particle Hydrodynamics was used for the simulations. Certain standard situations which the ballistic layer should resist were simulated. The contribution describes the principles for the successful execution of the numerical simulations, their results, and an evaluation of the functionality of the ballistic layer.
Christ, Andreas; Chavannes, Nicolas; Nikoloski, Neviana; Gerber, Hans-Ulrich; Poković, Katja; Kuster, Niels
2005-02-01
A new human head phantom has been proposed by CENELEC/IEEE, based on a large scale anthropometric survey. This phantom is compared to a homogeneous Generic Head Phantom and three high resolution anatomical head models with respect to specific absorption rate (SAR) assessment. The head phantoms are exposed to the radiation of a generic mobile phone (GMP) with different antenna types and a commercial mobile phone. The phones are placed in the standardized testing positions and operate at 900 and 1800 MHz. The average peak SAR is evaluated using both experimental (DASY3 near field scanner) and numerical (FDTD simulations) techniques. The numerical and experimental results compare well and confirm that the applied SAR assessment methods constitute a conservative approach.
A numerical analysis of the aortic blood flow pattern during pulsed cardiopulmonary bypass.
Gramigna, V; Caruso, M V; Rossi, M; Serraino, G F; Renzulli, A; Fragomeni, G
2015-01-01
In the modern era, stroke remains a main cause of morbidity after cardiac surgery despite continuing improvements in the cardiopulmonary bypass (CPB) techniques. The aim of the current work was to numerically investigate the blood flow in aorta and epiaortic vessels during standard and pulsed CPB, obtained with the intra-aortic balloon pump (IABP). A multi-scale model, realized coupling a 3D computational fluid dynamics study with a 0D model, was developed and validated with in vivo data. The presence of IABP improved the flow pattern directed towards the epiaortic vessels with a mean flow increase of 6.3% and reduced flow vorticity.
Robertson, Scott; Leonhardt, Ulf
2014-11-01
Hawking radiation has become experimentally testable thanks to the many analog systems which mimic the effects of the event horizon on wave propagation. These systems are typically dominated by dispersion and give rise to a numerically soluble and stable ordinary differential equation only if the rest-frame dispersion relation Ω^{2}(k) is a polynomial of relatively low degree. Here we present a new method for the calculation of wave scattering in a one-dimensional medium of arbitrary dispersion. It views the wave equation as an integral equation in Fourier space, which can be solved using standard and efficient numerical techniques.
The gravitational potential of axially symmetric bodies from a regularized green kernel
NASA Astrophysics Data System (ADS)
Trova, A.; Huré, J.-M.; Hersant, F.
2011-12-01
The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical calculus but also for generating approximations, in particular for geometrically thin discs and rings.
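For context, the classical singular-kernel form of the meridional-plane integral for a homogeneous body of density ρ reads as below (a sketch from standard potential theory, with K the complete elliptic integral of the first kind; this is the improper integral referred to above, not the paper's regularized kernel):

```latex
\Phi(r,z) = -4G\rho \iint_{\mathcal{S}}
  \frac{a\,K(k)}{\sqrt{(r+a)^2 + (z-z')^2}}\;da\,dz',
\qquad
k^2 = \frac{4\,r\,a}{(r+a)^2 + (z-z')^2},
```

where S is the meridional cross-section and (a, z') runs over source points; K(k) diverges logarithmically as k → 1, i.e. when the field point approaches a source point inside the body.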
Eigensensitivity analysis of rotating clamped uniform beams with the asymptotic numerical method
NASA Astrophysics Data System (ADS)
Bekhoucha, F.; Rechak, S.; Cadou, J. M.
2016-12-01
In this paper, free vibrations of rotating clamped Euler-Bernoulli beams with uniform cross-section are studied using a continuation method, namely the asymptotic numerical method. The governing equations of motion are derived using Lagrange's method. The kinetic and strain energy expressions are derived with the Rayleigh-Ritz method using a set of hybrid variables and based on a linear deflection assumption. The derived equations are transformed into two eigenvalue problems: the first is a linear gyroscopic eigenvalue problem and represents the coupled lagging and stretching motions through gyroscopic terms, while the second is a standard eigenvalue problem and corresponds to the flapping motion. These two eigenvalue problems are transformed into two functionals treated by the continuation method, the asymptotic numerical method. A new method is proposed for the solution of the linear gyroscopic system, based on an augmented system which transforms the original problem to a standard form with real symmetric matrices. By using some techniques to resolve these singular problems with the continuation method, evolution curves of the natural frequencies against dimensionless angular velocity are determined. At high angular velocity, some singular points, due to the linear elastic assumption, are computed. Numerical tests of convergence are conducted and the obtained results are compared to exact values. Results obtained by continuation are compared to those computed with the discrete eigenvalue problem.
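The reduction of the gyroscopic problem to standard form follows the usual companion linearization of the quadratic pencil (λ²M + λG + K)x = 0, sketched below; the paper's augmented system is a further symmetrized variant of this generic construction rather than exactly this form:

```latex
\begin{pmatrix} 0 & I \\ -K & -G \end{pmatrix}
\begin{pmatrix} x \\ \lambda x \end{pmatrix}
= \lambda
\begin{pmatrix} I & 0 \\ 0 & M \end{pmatrix}
\begin{pmatrix} x \\ \lambda x \end{pmatrix},
```

where M and K are the symmetric mass and stiffness matrices and G is the skew-symmetric gyroscopic matrix.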
Optical Imaging of Ionizing Radiation from Clinical Sources
Shaffer, Travis M.; Drain, Charles Michael
2016-01-01
Nuclear medicine uses ionizing radiation for both in vivo diagnosis and therapy. Ionizing radiation comes from a variety of sources, including x-rays, beam therapy, brachytherapy, and various injected radionuclides. Although PET and SPECT remain clinical mainstays, optical readouts of ionizing radiation offer numerous benefits and complement these standard techniques. Furthermore, for ionizing radiation sources that cannot be imaged using these standard techniques, optical imaging offers a unique imaging alternative. This article reviews optical imaging of both radionuclide- and beam-based ionizing radiation from high-energy photons and charged particles through mechanisms including radioluminescence, Cerenkov luminescence, and scintillation. Therapeutically, these visible photons have been combined with photodynamic therapeutic agents preclinically for increasing therapeutic response at depths difficult to reach with external light sources. Last, new microscopy methods that allow single-cell optical imaging of radionuclides are reviewed.
Towards Effective Clustering Techniques for the Analysis of Electric Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Emilie A.; Cotilla Sanchez, Jose E.; Halappanavar, Mahantesh
2013-11-30
Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and on spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids, we show that the solutions are related and that one could therefore leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques, we make a case for exploiting the structure inherent in the data, with implications for several domains including power systems.
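A minimal sketch of the spectral technique on a toy bus network is given below: embed the nodes with the smallest Laplacian eigenvectors and run k-means in that space. The adjacency weights and the helper spectral_clusters are illustrative; the paper's methods additionally exploit electrical structure and synchrophasor time series.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clusters(W, k):
    # Combinatorial graph Laplacian of the (symmetric) adjacency matrix.
    L = np.diag(W.sum(axis=1)) - W
    # Embed each node with the k eigenvectors of smallest eigenvalue,
    # then cluster the embedded points with k-means.
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    _, labels = kmeans2(vecs[:, :k], k, minit='++', seed=1)
    return labels

# toy 6-bus network: two triangles joined by a single tie line
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0
print(spectral_clusters(W, 2))           # e.g. [0 0 0 1 1 1]
```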
Evaluation of possible head injuries ensuing a cricket ball impact.
Mohotti, Damith; Fernando, P L N; Zaghloul, Amir
2018-05-01
The aim of this research is to study the behaviour of a human head during the impact of a cricket ball. While many recent incidents have been reported in relation to head injuries caused by the impact of cricket balls, there is no clear information available in the published literature about the possible threat levels and the protection offered by current protective equipment. This research investigates the effects of an impact of a cricket ball on a human head and the level of protection offered by the existing standard cricket helmet. An experimental program was carried out to measure the localised pressure caused by the impact of standard cricket balls. The balls were directed at a speed of 110 km/h onto a 3D-printed head model, with and without a standard cricket helmet. Numerical simulations were carried out using the advanced finite element package LS-DYNA to validate the experimental results. The experimental and numerical results showed approximately a 60% reduction in the pressure on the head model when the helmet was used. Both frontal and side impacts resulted in head acceleration values in the range of 225-250 g at a ball speed of 110 km/h. A 36% reduction was observed in the peak acceleration of the brain when wearing a helmet. Furthermore, numerical simulations showed a 67% reduction in the force on the skull and a 95% reduction in the skull internal energy when introducing the helmet. (1) Upon impact, high localised pressure could cause concussion for a player without a helmet. (2) When a helmet was used, the acceleration of the brain observed in the numerical results was at non-critical levels according to existing standards. (3) A significant increase in threat levels was observed for a player without a helmet, based on force, pressure, acceleration and energy criteria, which leads to the recommendation of compulsory use of the cricket helmet. (4) The numerical results showed good correlation with the experimental results; hence, the numerical technique used in this study can be recommended for future applications.
Uncertainty Analysis of Decomposing Polyurethane Foam
NASA Technical Reports Server (NTRS)
Hobbs, Michael L.; Romero, Vicente J.
2000-01-01
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
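The surrogate-plus-LHS step can be illustrated compactly: sample the 25 normalized inputs with a Latin hypercube and propagate them through a cheap response surface. The linear surrogate front_velocity_surrogate and its weights below are hypothetical placeholders, not the foam decomposition model:

```python
import numpy as np
from scipy.stats import qmc

def front_velocity_surrogate(x):
    # Hypothetical linear (LIN-style) surrogate for the decomposition
    # front velocity as a function of normalized input parameters.
    w = np.linspace(0.01, 0.25, x.shape[-1])   # illustrative sensitivities
    return 1.0 + x @ w

sampler = qmc.LatinHypercube(d=25, seed=0)     # 25 model input parameters
X = 2.0 * sampler.random(n=1000) - 1.0         # scale samples to [-1, 1]
v = front_velocity_surrogate(X)
print(v.mean(), v.std(ddof=1))                 # LHS estimate of mean / std
```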
Weighted least squares techniques for improved received signal strength based localization.
Tarrío, Paula; Bernardos, Ana M; Casar, José R
2011-01-01
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
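One common linearized formulation of a weighted circular (range-based) estimator is sketched below: difference the circle equations against a reference anchor and weight each row by its inverse range variance. The helper wls_circular is illustrative, and the paper's exact weighting scheme may differ:

```python
import numpy as np

def wls_circular(anchors, d, var):
    # Linearize (x - xi)^2 + (y - yi)^2 = di^2 by subtracting the last
    # anchor's equation, then solve the weighted normal equations.
    xn, yn = anchors[-1]
    A = 2.0 * (anchors[:-1] - anchors[-1])
    b = (d[-1] ** 2 - d[:-1] ** 2
         + np.sum(anchors[:-1] ** 2, axis=1) - xn ** 2 - yn ** 2)
    W = np.diag(1.0 / var[:-1])                # inverse-variance weights
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - true, axis=1)     # RSS-derived ranges (noiseless)
print(wls_circular(anchors, d, var=np.ones(4)))   # ~ [3.0, 4.0]
```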
Steady-State Cycle Deck Launcher Developed for Numerical Propulsion System Simulation
NASA Technical Reports Server (NTRS)
VanDrei, Donald E.
1997-01-01
One of the objectives of NASA's High Performance Computing and Communications Program's (HPCCP) Numerical Propulsion System Simulation (NPSS) is to reduce the time and cost of generating aerothermal numerical representations of engines, called customer decks. These customer decks, which are delivered to airframe companies by various U.S. engine companies, numerically characterize an engine's performance as defined by the particular U.S. airframe manufacturer. Until recently, all numerical models were provided with a Fortran-compatible interface in compliance with the Society of Automotive Engineers (SAE) document AS681F, and data communication was performed via a standard, labeled common structure in compliance with AS681F. Recently, the SAE committee began to develop a new standard: AS681G. AS681G addresses multiple language requirements for customer decks along with alternative data communication techniques. Along with the SAE committee, the NPSS Steady-State Cycle Deck project team developed a standard Application Program Interface (API) supported by a graphical user interface. This work will result in Aerospace Recommended Practice 4868 (ARP4868). The Steady-State Cycle Deck work was validated against the Energy Efficient Engine customer deck, which is publicly available. The Energy Efficient Engine wrapper was used not only to validate ARP4868 but also to demonstrate how to wrap an existing customer deck. The graphical user interface for the Steady-State Cycle Deck facilitates the use of the new standard and makes it easier to design and analyze a customer deck. This software was developed following I. Jacobson's Object-Oriented Design methodology and is implemented in C++. The AS681G standard will establish a common generic interface for U.S. engine companies and airframe manufacturers. This will lead to more accurate cycle models, quicker model generation, and faster validation leading to specifications. The standard will facilitate cooperative work between industry and NASA. The NPSS Steady-State Cycle Deck team released a batch version of the Steady-State Cycle Deck in March 1996. Version 1.1 was released in June 1996. During fiscal 1997, NPSS accepted enhancements and modifications to the Steady-State Cycle Deck launcher. Consistent with NPSS' commercialization plan, these modifications will be done by a third party that can provide long-term software support.
Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.
2014-12-01
The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.
Improved numerical methods for turbulent viscous recirculating flows
NASA Technical Reports Server (NTRS)
Vandoormaal, J. P.; Turan, A.; Raithby, G. D.
1986-01-01
The objective of the present study is to improve both the accuracy and computational efficiency of existing numerical techniques used to predict viscous recirculating flows in combustors. A review of the status of the study is presented along with some illustrative results. The effort to improve the numerical techniques consists of the following technical tasks: (1) selection of numerical techniques to be evaluated; (2) two dimensional evaluation of selected techniques; and (3) three dimensional evaluation of technique(s) recommended in Task 2.
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.
NASA Astrophysics Data System (ADS)
Zarifi, Keyvan; Gershman, Alex B.
2006-12-01
We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.
An investigation of dynamic-analysis methods for variable-geometry structures
NASA Technical Reports Server (NTRS)
Austin, F.
1980-01-01
Selected space structure configurations were reviewed in order to define dynamic analysis problems associated with variable geometry. The dynamics of a beam being constructed from a flexible base and the relocation of the completed beam by rotating the remote manipulator system about the shoulder joint were selected. Equations of motion were formulated in physical coordinates for both of these problems, and FORTRAN programs were developed to generate solutions by numerically integrating the equations. These solutions served as a standard of comparison to gauge the accuracy of approximate solution techniques that were developed and studied. Good control was achieved in both problems. Unstable control system coupling with the system flexibility did not occur. An approximate method was developed for each problem to enable the analyst to investigate variable geometry effects during a short time span using standard fixed geometry programs such as NASTRAN. The average angle and average length techniques are discussed.
NASA Astrophysics Data System (ADS)
Goldberg, Robert R.; Goldberg, Michael R.
1999-05-01
A previous paper by the authors presented an algorithm that successfully segmented organs grown in vitro from their surroundings. It was noticed that one difficulty in standard dyeing techniques for the analysis of contours in organs was that the antigen necessary to bind with the fluorescent dye was not uniform throughout the cell borders. To address these concerns, a new fluorescent technique was utilized. A transgenic mouse line was genetically engineered utilizing hoxb7/gfp (green fluorescent protein). Whereas the original technique (fixed and blocking) required numerous noise-removal filtering steps and sophisticated segmentation techniques, segmentation of the GFP kidney required only an adaptive binary threshold technique, which yielded excellent results without the need for specific noise reduction. This is important for tracking the growth of kidney development through time.
Minimal access surgery of pediatric inguinal hernias: a review.
Saranga Bharathi, Ramanathan; Arora, Manu; Baskaran, Vasudevan
2008-08-01
Inguinal hernia is a common problem among children, and herniotomy has been its standard of care. Laparoscopy, which gained a toehold initially in the management of pediatric inguinal hernia (PIH), has managed to steer world opinion against routine contralateral groin exploration by precise detection of contralateral patencies. Besides detection, its ability to repair simultaneously all forms of inguinal hernias (indirect, direct, combined, recurrent, and incarcerated) together with contralateral patencies has cemented its role as a viable alternative to conventional repair. Numerous minimally invasive techniques for addressing PIH have mushroomed in the past two decades. These techniques vary considerably in their approaches to the internal ring (intraperitoneal, extraperitoneal), use of ports (three, two, one), endoscopic instruments (two, one, or none), sutures (absorbable, nonabsorbable), and techniques of knotting (intracorporeal, extracorporeal). In addition to the surgeons' experience and the merits/limitations of individual techniques, it is the nature of the defect that should govern the choice of technique. The emerging techniques show a trend toward increasing use of extracorporeal knotting and diminishing use of working ports and endoscopic instruments. These favor wider adoption of minimal access surgery in addressing PIH by surgeons, irrespective of their laparoscopic skills and experience. Growing experience, wider adoption, decreasing complications, and increasing advantages favor emergence of minimal access surgery as the gold standard for the treatment of PIH in the future. This article comprehensively reviews the laparoscopic techniques of addressing PIH.
NASA Astrophysics Data System (ADS)
Kanaun, S.; Markov, A.
2017-06-01
An efficient numerical method for the solution of static problems of elasticity for an infinite homogeneous medium containing inhomogeneities (cracks and inclusions) is developed. A finite number of heterogeneous inclusions and planar parallel cracks of arbitrary shape is considered. The problem is reduced to a system of surface integral equations for the crack opening vectors and volume integral equations for the stress tensors inside the inclusions. For the numerical solution of these equations, a class of Gaussian approximating functions is used. The method based on these functions is mesh-free. For such functions, the elements of the matrix of the discretized system are combinations of explicit analytical functions and five standard 1D integrals that can be tabulated. Thus, numerical integration is excluded from the construction of the matrix of the discretized problem. For regular node grids, the matrix of the discretized system has Toeplitz properties, and the Fast Fourier Transform technique can be used to calculate matrix-vector products with such matrices.
Knudsen Cell Studies of Ti-Al Thermodynamics
NASA Technical Reports Server (NTRS)
Jacobson, Nathan S.; Copland, Evan H.; Mehrotra, Gopal M.; Auping, Judith; Gray, Hugh R. (Technical Monitor)
2002-01-01
In this paper we describe the Knudsen cell technique for the measurement of thermodynamic activities in alloys. Numerous experimental details must be adhered to in order to obtain useful experimental data. These include the introduction of an in-situ standard, precise temperature measurement, elimination of thermal gradients, and precise cell positioning. Our first design is discussed and some sample data on Ti-Al alloys are presented. The second modification and associated improvements are also discussed.
Molecular orientation in a dielectric liquid-vapor interphase
NASA Astrophysics Data System (ADS)
Chacón, E.; Mederos, L.; Navascués, G.; Tarazona, P.
1985-04-01
The density functional theory of Chacón et al. is used to study the molecular orientation in an interphase of a weak dipolar fluid. Explicit expressions are obtained using standard perturbation techniques. Molecular orientation, local susceptibility, and the Gibbsean surface susceptibility are evaluated for a Stockmayer model of dipolar fluid. The effect of the surface structure on the bulk ferroelectric transition is discussed in the light of the present theory and the numerical results.
Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows
NASA Astrophysics Data System (ADS)
Chen, Z.; Shu, C.; Tan, D.
2018-05-01
An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires lower virtual memories, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.
NASA Astrophysics Data System (ADS)
Dutta, P. S.; Bhat, H. L.; Kumar, Vikram
1995-09-01
Numerical analysis has been carried out to determine the deviation of the growth rate from the ampoule lowering rate and the shape of the isotherms during the growth of gallium antimonide using the vertical Bridgman technique in a single-zone furnace. Electrical analogues have been used to model the thermal behaviour of the growth system. The standard circuit analysis technique has been used to calculate the temperature distribution in the growing crystal under various growth conditions. The effects of furnace temperature gradient near the melt-solid interface, the ampoule lowering rate, the ampoule geometry, the thermal conductivity of the melt, the mode of heat extraction from the tip of the ampoule and the extent of lateral heat loss from the side walls of the ampoule on the shape of isotherms in the crystal have been evaluated. The theoretical results presented here agree well with our previously obtained experimental results.
Simulating the cold dark matter-neutrino dipole with TianNu
Inman, Derek; Yu, Hao-Ran; Zhu, Hong-Ming; ...
2017-04-20
Measurements of neutrino mass in cosmological observations rely on two-point statistics that are hindered by significant degeneracies with the optical depth and galaxy bias. The relative velocity effect between cold dark matter and neutrinos induces a large-scale dipole in the matter density field and may be able to provide orthogonal constraints to standard techniques. In this paper, we numerically investigate this dipole in the TianNu simulation, which contains cold dark matter and 50 meV neutrinos. We first compute the dipole using a new linear response technique where we treat the displacement caused by the relative velocity as a phase in Fourier space and then integrate the matter power spectrum over redshift. Then, we compute the dipole numerically in real space using the simulation density and velocity fields. We find excellent agreement between the linear response and N-body methods. Finally, utilizing the dipole as an observational tool requires two tracers of the matter distribution that are differently biased with respect to the neutrino density.
NASA Astrophysics Data System (ADS)
Zhang, Tie-Yan; Zhao, Yan; Xie, Xiang-Peng
2012-12-01
This paper is concerned with the problem of stability analysis of nonlinear Roesser-type two-dimensional (2D) systems. Firstly, the fuzzy modeling method for the usual one-dimensional (1D) systems is extended to the 2D case so that the underlying nonlinear 2D system can be represented by a 2D Takagi-Sugeno (TS) fuzzy model, which is convenient for implementing the stability analysis. Secondly, a new kind of fuzzy Lyapunov function, which depends on the fuzzy membership functions through homogeneous polynomials, is developed to obtain less conservative stability conditions for the TS Roesser-type 2D system. In the process of stability analysis, the obtained stability conditions approach exactness in the sense of convergence by applying some novel relaxation techniques. Moreover, the obtained result is formulated in the form of linear matrix inequalities, which can be easily solved via standard numerical software. Finally, a numerical example is given to demonstrate the effectiveness of the proposed approach.
NASA Astrophysics Data System (ADS)
Vagh, Hardik A.; Baghai-Wadji, Alireza
2008-12-01
Current technological challenges in materials science and the high-tech device industry require the solution of boundary value problems (BVPs) involving regions of various scales, e.g. multiple thin layers, fibre-reinforced composites, and nano/micro pores. In most cases, straightforward application of standard variational techniques to BVPs of practical relevance necessarily leads to unsatisfactorily ill-conditioned analytical and/or numerical results. To remedy the computational challenges associated with sub-sectional heterogeneities, various sophisticated homogenization techniques need to be employed. Homogenization refers to the systematic process of smoothing out the sub-structural heterogeneities, leading to the determination of effective constitutive coefficients. Ordinarily, homogenization involves sophisticated averaging and asymptotic order analysis to obtain solutions. In the majority of cases only zero-order terms are constructed, due to the complexity of the processes involved. In this paper we propose a constructive scheme for obtaining homogenized solutions involving higher order terms, thus guaranteeing higher accuracy and greater robustness of the numerical results.
Computing black hole partition functions from quasinormal modes
Arnold, Peter; Szepietowski, Phillip; Vaman, Diana
2016-07-07
We propose a method of computing one-loop determinants in black hole space-times (with emphasis on asymptotically anti-de Sitter black holes) that may be used for numerics when completely analytic results are unattainable. The method utilizes the expression for one-loop determinants in terms of quasinormal frequencies determined by Denef, Hartnoll and Sachdev in [1]. A numerical evaluation must face the fact that the sum over the quasinormal modes, indexed by momentum and overtone numbers, is divergent. A necessary ingredient is then a regularization scheme to handle the divergent contributions of individual fixed-momentum sectors to the partition function. To this end, we formulate an effective two-dimensional problem in which a natural refinement of standard heat kernel techniques can be used to account for contributions to the partition function at fixed momentum. We test our method in a concrete case by reproducing the scalar one-loop determinant in the BTZ black hole background. Furthermore, we discuss the application of such techniques to more complicated spacetimes.
Current Status of Interventional Radiology Treatment of Infrapopliteal Arterial Disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rand, T., E-mail: thomas.rand@wienkav.at; Uberoi, R.
2013-06-15
Treatment of infrapopliteal arteries has developed into a standard technique during the past two decades. With the introduction of innovative devices, a variety of techniques has been created and is still under investigation. Treatment options range from plain balloon angioplasty (POBA), through all sorts of stent applications, such as bare metal, balloon-expanding, self-expanding, coated and drug-eluting stents, and bio-absorbable stents, to the latest developments, such as drug-eluting balloons. Regarding the scientific background, several prospective, randomized studies with relevant numbers of patients have been (or will be) published that provide Level I evidence. In contrast to older studies, which were based mostly on numeric parameters, such as diameters or residual stenoses, more recent study concepts focus increasingly on clinical features, such as improvement in amputation rate or changes in clinical stage and quality of life standards. Although it is still not decided which of the individual techniques is the best, we can definitely conclude that whatever treatment of infrapopliteal arteries is used, it is of substantial benefit for the patient. Therefore, the goal of this review is to give an overview of the current developments and techniques for the treatment of infrapopliteal arteries, to present clinical and technical results, to weigh individual techniques, and to discuss recent developments.
The acoustics of ducted propellers
NASA Astrophysics Data System (ADS)
Ali, Sherif F.
The return of the propeller to long-haul commercial service may be rapidly approaching in the form of advanced "prop fans". It is believed that the advanced turboprop will considerably reduce operational cost. However, such aircraft will come into general use only if their noise levels meet the standards of community acceptability currently applied to existing aircraft. In this work a time-marching boundary-element technique is developed and used to study the acoustics of a ducted propeller. The numerical technique developed in this work eliminates the inherent instability suffered by conventional approaches. The methodology is validated against other numerical and analytical results. The results show excellent agreement with the analytical solution and show no indication of unstable behavior. For the ducted propeller problem, the propeller is modeled by rotating source-sink pairs, and the duct is modeled by a rigid annular body of elliptical cross-section. Using this model and the developed technique, the effect of different parameters on the acoustic field is predicted and analyzed. This includes the effect of duct length, propeller axial location, and source Mach number. The results of this study show that installing a short duct around the propeller can reduce the noise that reaches an observer on a sideline.
Solving the transient water age distribution problem in environmental flow systems
NASA Astrophysics Data System (ADS)
Cornaton, F. J.
2011-12-01
The temporal evolution of groundwater age and its frequency distributions can display important changes as flow regimes vary, due to natural changes in climate and hydrologic conditions and/or to human-induced pressures on the resource to satisfy water demand. Since groundwater age is nowadays frequently used to investigate reservoir properties and recharge conditions, special attention needs to be paid to the way this property is characterized, whether by isotopic methods, multiple-tracer techniques, or mathematical modelling. Steady-state age frequency distributions can be modelled using standard numerical techniques, since the general balance equation describing age transport under steady-state flow conditions is exactly equivalent to a standard advection-dispersion equation. The time-dependent problem, however, is described by an extended transport operator that incorporates an additional coordinate for water age. The consequence is that numerical solutions can hardly be achieved, especially for real 3-D applications over large time periods of interest. The absence of any robust method has thus left the quantitative hydrogeology community dodging the issue of transience. Novel algorithms for solving the age distribution problem under time-varying flow regimes are presented and, for some specific configurations, extended to the problem of generalized component exposure time. The solution strategy is based on combining the Laplace transform technique, applied to the age (or exposure time) coordinate, with standard time-marching schemes; a minimal sketch of the inversion step is given below. The method is well-suited to groundwater problems with possible density dependency of fluid flow (e.g. coupled flow and heat/salt concentration problems), but is also significant for the homogeneous (compressible) flow problem. The approach is validated using 1-D analytical solutions and exercised on demonstration problems relevant to topical issues in groundwater age, including the analysis of transfer times in the vadose zone, aquifer-aquitard interactions, and the induction of transient age distributions when a well pump is started.
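A minimal sketch of the Laplace-inversion ingredient, assuming a 1-D semi-infinite column with constant velocity v and dispersion D, for which the Laplace-space age distribution has the closed form g_hat(x,s) = exp(x(v - sqrt(v^2 + 4Ds))/(2D)); the Gaver-Stehfest algorithm used here is a generic inversion recipe, not necessarily the paper's own, and all parameter values are illustrative.

    import math
    import numpy as np

    def stehfest_coefficients(N=12):
        # Gaver-Stehfest weights V_k; N must be even.
        V = np.zeros(N)
        for k in range(1, N + 1):
            s = 0.0
            for j in range((k + 1) // 2, min(k, N // 2) + 1):
                s += (j ** (N // 2) * math.factorial(2 * j)
                      / (math.factorial(N // 2 - j) * math.factorial(j)
                         * math.factorial(j - 1) * math.factorial(k - j)
                         * math.factorial(2 * j - k)))
            V[k - 1] = (-1) ** (k + N // 2) * s
        return V

    def age_pdf(x, ages, v=1.0, D=0.1, N=12):
        # Numerically invert g_hat(x, s) = exp(x*(v - sqrt(v^2 + 4*D*s)) / (2*D)).
        V = stehfest_coefficients(N)
        g_hat = lambda s: np.exp(x * (v - np.sqrt(v * v + 4.0 * D * s)) / (2.0 * D))
        ln2 = np.log(2.0)
        return np.array([ln2 / a * np.sum(V * g_hat(ln2 * np.arange(1, N + 1) / a))
                         for a in ages])

    ages = np.linspace(0.1, 5.0, 50)
    numeric = age_pdf(x=1.0, ages=ages)
    # Check: for these assumptions the exact age density is inverse Gaussian.
    x, v, D = 1.0, 1.0, 0.1
    exact = x / np.sqrt(4 * np.pi * D * ages**3) * np.exp(-(x - v * ages)**2 / (4 * D * ages))
    print(np.max(np.abs(numeric - exact)))  # small: the inversion reproduces the pdf

For this smooth density the inversion can be verified directly against the exact inverse-Gaussian travel-time distribution, as done in the last lines.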
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
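As a hedged illustration of the kind of input-variable ranking involved (not the paper's specific sensitivity measure), the sketch below trains a small neural network on synthetic trauma-like data and scores each input, numeric or nominal once encoded, by the drop in discrimination when that input is permuted; all variable names and data are made up.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 2000
    X = np.column_stack([
        rng.normal(size=n),            # e.g. age (numeric)
        rng.integers(0, 2, size=n),    # e.g. injury mechanism (nominal, coded)
        rng.normal(size=n),            # e.g. blood pressure (numeric)
    ])
    logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * X[:, 2]
    y = (logit + rng.logistic(size=n) > 0).astype(int)   # synthetic survival outcome

    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
    base_auc = roc_auc_score(y, net.predict_proba(X)[:, 1])

    for i, name in enumerate(["age", "mechanism", "pressure"]):
        Xp = X.copy()
        Xp[:, i] = rng.permutation(Xp[:, i])         # destroy this variable's signal
        drop = base_auc - roc_auc_score(y, net.predict_proba(Xp)[:, 1])
        print(f"{name}: AUC drop {drop:.3f}")        # larger drop = more influential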
A new polarimetric active radar calibrator and calibration technique
NASA Astrophysics Data System (ADS)
Tang, Jianguo; Xu, Xiaojian
2015-10-01
The polarimetric active radar calibrator (PARC) is one of the most important high radar cross section (RCS) calibrators for polarimetry measurement. In this paper, a new double-antenna polarimetric active radar calibrator (DPARC) is proposed, which consists of two rotatable antennas with wideband electromagnetic polarization filters (EMPF) to achieve lower cross-polarization for transmission and reception. With two antennas that are rotatable around the radar line of sight (LOS), the DPARC provides a variety of standard polarimetric scattering matrices (PSM) through rotation combinations of the receiving and transmitting polarizations, which are useful for polarimetric calibration in different applications. In addition, a technique based on Fourier analysis is proposed for calibration processing. Numerical simulation results are presented to demonstrate the superior performance of the proposed DPARC and processing technique.
NASA Technical Reports Server (NTRS)
Hardin, Jay C.; Pope, D. Stuart
1989-01-01
An engineering estimate of the spectrum of atmospheric microburst noise radiation in the range 2-20 Hz is developed. This prediction is obtained via a marriage of standard aeroacoustic theory with a numerical computation of the relevant fluid dynamics. The 'computational aeroacoustics' technique applied here to the interpretation of atmospheric noise measurements is illustrative of a methodology that can now be employed in a wide class of problems.
NASA Technical Reports Server (NTRS)
Clark, William S.; Hall, Kenneth C.
1994-01-01
A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable-coefficient equations that describe the small-amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization that is a conservative linearization of the nonlinear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid, which eliminates extrapolation errors and hence increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the accuracy and efficiency of the method and the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock-capturing techniques. A comparison of the present unsteady flow predictions with other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one to two orders of magnitude less computational time than traditional time-marching techniques, making the present method a viable design tool for aeroelastic analyses.
43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?
Code of Federal Regulations, 2013 CFR
2013-10-01
... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...
43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?
Code of Federal Regulations, 2012 CFR
2012-10-01
... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...
43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?
Code of Federal Regulations, 2011 CFR
2011-10-01
... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...
43 CFR 3809.202 - Under what conditions will BLM defer to State regulation of operations?
Code of Federal Regulations, 2014 CFR
2014-10-01
... standards on a provision-by-provision basis to determine— (i) Whether non-numerical State standards are functionally equivalent to BLM counterparts; and (ii) Whether numerical State standards are the same as corresponding numerical BLM standards, except that State review and approval time frames do not have to be the...
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1991-01-01
A hybrid method for computing the feedback gains in the linear quadratic regulator problem is proposed. The method, which combines the use of a Chandrasekhar-type system with an iteration of the Newton-Kleinman form with variable-acceleration-parameter Smith schemes, is formulated to compute the feedback gains directly and efficiently, rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large-dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.
A numerical algorithm for optimal feedback gains in high dimensional LQR problems
NASA Technical Reports Server (NTRS)
Banks, H. T.; Ito, K.
1986-01-01
A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar-type system with an iteration of the Newton-Kleinman form with variable-acceleration-parameter Smith schemes, is formulated so as to compute the feedback gains directly and efficiently, rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large-dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of our ideas is presented.
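A minimal sketch of the Newton-Kleinman iteration at the heart of such hybrid methods, assuming a small illustrative system; each step solves a Lyapunov equation with a dense solver, where the papers above would substitute Smith-type schemes for large problems.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # stable, so K0 = 0 is admissible
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])

    K = np.zeros((1, 2))
    for _ in range(20):
        Ac = A - B @ K
        # Solve Ac' P + P Ac = -(Q + K' R K) for the current gain K
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K_new = np.linalg.solve(R, B.T @ P)    # next gain: K = R^{-1} B' P
        if np.linalg.norm(K_new - K) < 1e-12:
            K = K_new
            break
        K = K_new

    # Reference: the gain from the Riccati solution coincides with the iterate.
    P_are = solve_continuous_are(A, B, Q, R)
    print(np.allclose(K, np.linalg.solve(R, B.T @ P_are)))  # True

The iteration converges quadratically from any stabilizing initial gain, which is why it pairs naturally with cheap approximate Lyapunov solvers in large-scale settings.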
NASA Astrophysics Data System (ADS)
Hopkin, D. J.; El-Rimawi, J.; Lennon, T.; Silberschmidt, V. V.
2011-07-01
The advent of the structural Eurocodes has allowed civil engineers to be more creative in the design of structures exposed to fire. Rather than rely upon regulatory guidance and prescriptive methods, engineers are now able to use such codes to design buildings on the basis of credible design fires rather than the accepted, unrealistic standard-fire time-temperature curves. Through this process safer and more efficient structural designs are achievable. The key development enabling performance-based fire design is the emergence of validated numerical models capable of predicting the mechanical response of a whole building or of sub-assemblies at elevated temperature. In this way, efficiency savings have been achieved in the design of steel, concrete and composite structures. However, at present, due to a combination of limited fundamental research and restrictions in the UK National Annex to the timber Eurocode, the design of fire-exposed timber structures using numerical modelling techniques is not generally undertaken. The 'fire design' of timber structures is covered in Eurocode 5 part 1.2 (EN 1995-1-2). This code contains an advanced calculation annex (Annex B) intended to facilitate the implementation of numerical models in the design of fire-exposed timber structures, but the thermal properties it provides are valid only for standard-fire exposure conditions. In an attempt to overcome this barrier, the authors have proposed a 'modified conductivity model' (MCM) for determining the temperature of timber structural elements during the heating phase of non-standard fires, which is briefly outlined in this paper. In addition, in a further study, the MCM has been implemented in a coupled thermo-mechanical analysis of uniaxially loaded timber elements exposed to non-standard fires. The finite element package DIANA was adopted, with plane-strain elements assuming two-dimensional heat flow. The resulting predictions of failure time for given levels of load are discussed and compared with the simplified 'effective cross-section' method presented in EN 1995-1-2.
Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio
2018-03-01
To provide a multi-stage model for calculating uncertainty in radiochromic film dosimetry with Monte Carlo techniques, applied to both single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 film are exposed in two different Varian linacs and read with an EPSON V800 flatbed scanner. The Monte Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis, such as the standard deviations and bias, are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. In addition, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented. With the aid of this model and the use of Monte Carlo techniques, the uncertainty of dose estimates for single-channel and multichannel algorithms is estimated. The application of the model together with Monte Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry.
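A minimal sketch of the Monte Carlo propagation stage, assuming a hypothetical single-channel net-optical-density calibration curve and scanner noise level (neither taken from the paper): input noise is sampled, pushed through the calibration, and the resulting numerical dose distribution yields the standard deviation and bias.

    import numpy as np

    def dose_from_netOD(netOD, a=10.0, b=35.0, n=2.5):
        # Hypothetical single-channel calibration: D = a*netOD + b*netOD**n
        return a * netOD + b * netOD ** n

    rng = np.random.default_rng(1)
    pv_exposed, pv_blank, pv_noise = 28000.0, 42000.0, 150.0  # scanner counts (assumed)
    samples = 100_000

    pe = rng.normal(pv_exposed, pv_noise, samples)
    pb = rng.normal(pv_blank, pv_noise, samples)
    netOD = np.log10(pb / pe)
    dose = dose_from_netOD(netOD)

    true_dose = dose_from_netOD(np.log10(pv_blank / pv_exposed))
    print(f"std  = {dose.std():.3f} Gy")
    print(f"bias = {dose.mean() - true_dose:.3f} Gy")
    # The full numerical representation of the dose pdf is also available:
    hist, edges = np.histogram(dose, bins=200, density=True)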
A Eulerian-Lagrangian Model to Simulate Two-Phase/Particulate Flows
NASA Technical Reports Server (NTRS)
Apte, S. V.; Mahesh, K.; Lundgren, T.
2003-01-01
Figure 1 shows a snapshot of liquid fuel spray coming out of an injector nozzle in a realistic gas-turbine combustor. Here the spray atomization was simulated using a stochastic secondary breakup model (Apte et al. 2003a) with a point-particle approximation for the droplets. Very close to the injector, the spray density is large and the droplets cannot be treated as point particles. The volume displaced by the liquid in this region is significant and can alter the gas-phase flow and spray evolution. To address this issue, one can compute the dense spray regime by an Eulerian-Lagrangian technique using advanced interface-tracking/level-set methods (Sussman et al. 1994; Tryggvason et al. 2001; Herrmann 2003). This, however, is computationally intensive and may not be viable in realistic complex configurations. We therefore plan to develop a methodology based on an Eulerian-Lagrangian technique which will allow us to capture the essential features of primary atomization, using models for the interactions between the fluid and droplets, and which can be applied directly with the standard atomization models used in practice. The numerical scheme for unstructured grids developed by Mahesh et al. (2003) for incompressible flows is modified to take into account the droplet volume fraction. The numerical framework is directly applicable to realistic combustor geometries. Our main objectives in this work are: (1) to develop a numerical formulation based on Eulerian-Lagrangian techniques, with models for the interaction terms between fluid and particles, to capture the Kelvin-Helmholtz type instabilities observed during primary atomization; (2) to validate this technique for various two-phase and particulate flows; and (3) to assess its applicability to capturing the primary atomization of liquid jets in conjunction with secondary atomization models.
Povz, Meta; Sumer, Suzana
2003-01-01
Cobitis elongata Heckel et Kner inhabits the rivers Sava, Kolpa, Krka, Gracnica and Hudinja (the Danube river basin). The species is common in its distribution area. In the Red List of endangered Pisces and Cyclostomata in Slovenia, it is classified as endangered. Status and distribution data of the species from previous reports and recent research were summarized. A total of 31 specimens from the river Kolpa were morphologically studied. Sixteen morphometric and four meristic characteristics were analysed using standard numerical taxonomic techniques. A total of 99.8% of the variation in standard length was explained by preanal distance, dorsal and ventral fin lengths, and minimum body height.
A boundary element method for Stokes flows with interfaces
NASA Astrophysics Data System (ADS)
Alinovi, Edoardo; Bottaro, Alessandro
2018-03-01
The boundary element method is a widely used and powerful technique to numerically describe multiphase flows with interfaces, satisfying Stokes' approximation. However, low viscosity ratios between immiscible fluids in contact at an interface and large surface tensions may lead to consistency issues as far as mass conservation is concerned. A simple and effective approach is described to ensure mass conservation at all viscosity ratios and capillary numbers within a standard boundary element framework. Benchmark cases are initially considered demonstrating the efficacy of the proposed technique in satisfying mass conservation, comparing with approaches and other solutions present in the literature. The methodology developed is finally applied to the problem of slippage over superhydrophobic surfaces.
Parallel gene analysis with allele-specific padlock probes and tag microarrays
Banér, Johan; Isaksson, Anders; Waldenström, Erik; Jarvius, Jonas; Landegren, Ulf; Nilsson, Mats
2003-01-01
Parallel, highly specific analysis methods are required to take advantage of the extensive information about DNA sequence variation and of expressed sequences. We present a scalable laboratory technique suitable to analyze numerous target sequences in multiplexed assays. Sets of padlock probes were applied to analyze single nucleotide variation directly in total genomic DNA or cDNA for parallel genotyping or gene expression analysis. All reacted probes were then co-amplified and identified by hybridization to a standard tag oligonucleotide array. The technique was illustrated by analyzing normal and pathogenic variation within the Wilson disease-related ATP7B gene, both at the level of DNA and RNA, using allele-specific padlock probes. PMID:12930977
Numerical dissipation vs. subgrid-scale modelling for large eddy simulation
NASA Astrophysics Data System (ADS)
Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos
2017-05-01
This study presents an alternative way to perform large eddy simulation, based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in both its standard and dynamic versions. The main strength of being able to correctly calibrate numerical dissipation is the possibility of regularising the solution at the mesh scale. Thanks to this property, the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found to be unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.
On a framework for generating PoD curves assisted by numerical simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan
2015-03-31
The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
On a framework for generating PoD curves assisted by numerical simulations
NASA Astrophysics Data System (ADS)
Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar
2015-03-01
The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
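For orientation, the sketch below implements the classical "a-hat versus a" signal-response PoD model, a common statistical core of such frameworks and simpler than the Bayesian, simulation-assisted procedure described above; crack sizes, responses and the detection threshold are synthetic.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    a = np.exp(rng.uniform(np.log(0.5), np.log(10.0), 200))             # crack depth, mm
    ahat = np.exp(0.3 + 0.9 * np.log(a) + rng.normal(0, 0.35, a.size))  # signal response

    # Linear regression of ln(ahat) on ln(a): ln(ahat) = b0 + b1*ln(a) + eps
    X = np.column_stack([np.ones_like(a), np.log(a)])
    beta, res, *_ = np.linalg.lstsq(X, np.log(ahat), rcond=None)
    tau = np.sqrt(res[0] / (a.size - 2))      # residual standard deviation

    ahat_th = 1.0                             # detection threshold (assumed)
    pod = lambda sz: norm.cdf((beta[0] + beta[1] * np.log(sz) - np.log(ahat_th)) / tau)

    for sz in (0.5, 1.0, 2.0, 5.0):
        print(f"PoD({sz} mm) = {pod(sz):.3f}")

In a simulation-assisted workflow, the regression inputs would come from finite element runs and the scatter term would be informed by measured noise, which is where the Bayesian machinery above enters.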
Beyond single-stream with the Schrödinger method
NASA Astrophysics Data System (ADS)
Uhlemann, Cora; Kopp, Michael
2016-10-01
We investigate large-scale structure formation of collisionless dark matter in the phase-space description based on the Vlasov-Poisson equation. We present the Schrödinger method, originally proposed by Widrow and Kaiser (1993) as a numerical technique based on the Schrödinger-Poisson equation, as an analytical tool which is superior to the common standard pressureless fluid model. Whereas the dust model fails and develops singularities at shell crossing, the Schrödinger method encompasses multi-streaming and even virialization.
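A minimal numerical sketch of the Schrödinger method, assuming a 1-D periodic box and illustrative units: a split-step (kick-drift-kick) integrator for the Schrödinger-Poisson system, with the potential obtained spectrally from the density contrast.

    import numpy as np

    N, L, hbar, dt, steps = 256, 1.0, 1e-3, 1e-4, 500   # effective "hbar" is a model knob
    x = np.linspace(0.0, L, N, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

    # Small density perturbation around the mean; |psi|^2 is the matter density
    psi = np.sqrt(1.0 + 0.1 * np.cos(2 * np.pi * x / L)).astype(complex)

    def potential(psi):
        # Solve V'' = |psi|^2 - <|psi|^2> spectrally (periodic Poisson, 4*pi*G = 1)
        rho_hat = np.fft.fft(np.abs(psi) ** 2 - np.mean(np.abs(psi) ** 2))
        V_hat = np.zeros_like(rho_hat)
        V_hat[1:] = -rho_hat[1:] / k[1:] ** 2
        return np.real(np.fft.ifft(V_hat))

    for _ in range(steps):
        psi *= np.exp(-0.5j * dt * potential(psi) / hbar)                       # half kick
        psi = np.fft.ifft(np.exp(-0.5j * hbar * dt * k**2) * np.fft.fft(psi))   # drift
        psi *= np.exp(-0.5j * dt * potential(psi) / hbar)                       # half kick

    # Phase gradients encode the velocity field; multi-streaming appears as phase winding.
    print(np.abs(psi).max())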
Numerical modeling of the divided bar measurements
NASA Astrophysics Data System (ADS)
LEE, Y.; Keehm, Y.
2011-12-01
The divided-bar technique has been used to measure the thermal conductivity of rocks and fragments in heat flow studies. Though widely used, divided-bar measurements can have errors that have not yet been systematically quantified. We used an FEM and performed a series of numerical studies to evaluate various errors in divided-bar measurements and to suggest more reliable measurement techniques. A divided-bar measurement should be corrected for lateral heat loss on the sides of the rock sample and for the thermal resistance at the contacts between the rock sample and the bar. We first investigated, through numerical modeling, how the size of these corrections changes with the thickness and thermal conductivity of the rock sample. When we fixed the sample thickness at 10 mm and varied thermal conductivity, the error in the measured thermal conductivity ranged from 2.02% for 1.0 W/m/K to 7.95% for 4.0 W/m/K. When we fixed thermal conductivity at 1.38 W/m/K and varied the sample thickness, we found that the error ranged from 2.03% for the 30 mm-thick sample to 11.43% for the 5 mm-thick sample. After corrections, a variety of error analyses for divided-bar measurements were conducted numerically. The thermal conductivity of the two thin standard disks (2 mm in thickness) located at the top and bottom of the rock sample slightly affects the accuracy of thermal conductivity measurements. When the thermal conductivity of a sample is 3.0 W/m/K and that of the two standard disks is 0.2 W/m/K, the relative error in measured thermal conductivity is very small (~0.01%). However, the relative error can reach -2.29% for the same sample when the thermal conductivity of the two disks is 0.5 W/m/K. The accuracy of thermal conductivity measurements depends strongly on the thermal conductivity and thickness of the thermal compound that is applied to reduce thermal resistance at the contacts between the rock sample and the bar. When the thickness of the thermal compound (0.29 W/m/K) is 0.03 mm, we found that the relative error in measured thermal conductivity is 4.01%, while the relative error can be very significant (~12.2%) if the thickness increases to 0.1 mm. We then fixed the thickness (0.03 mm) and varied the thermal conductivity of the thermal compound. We found that the relative error with a 1.0 W/m/K compound is 1.28%, and with a 0.29 W/m/K compound it is 4.06%. When we repeated this test with a different thermal compound thickness (0.1 mm), the relative error with a 1.0 W/m/K compound is 3.93%, and with a 0.29 W/m/K compound it is 12.2%. In addition, the cell technique of Sass et al. (1971), which is widely used to measure the thermal conductivity of rock fragments, was evaluated using FEM modeling. A total of 483 isotropic and homogeneous spherical rock fragments in the sample holder were used to numerically test the accuracy of the cell technique. The result shows a relative error of -9.61% for rock fragments with a thermal conductivity of 2.5 W/m/K. In conclusion, we report quantified errors in the divided-bar and cell techniques for thermal conductivity measurements of rocks and fragments. We found that FEM modeling can accurately mimic these measurement techniques and can help us estimate measurement errors quantitatively.
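A hedged back-of-the-envelope companion to these FEM results: treating the sample and the two compound layers as thermal resistances in series shows how attributing the contact resistance to the sample biases the apparent conductivity low. This 1-D model ignores lateral losses, so it only approximates the quoted values.

    def apparent_conductivity(k_sample, L_sample, k_compound, t_compound):
        # Sample of thickness L_sample sandwiched between two compound layers;
        # the total resistance is wrongly attributed to the sample alone.
        R_total = L_sample / k_sample + 2.0 * t_compound / k_compound
        return L_sample / R_total

    for t_c in (0.03e-3, 0.1e-3):              # compound thickness, m
        for k_c in (0.29, 1.0):                # compound conductivity, W/m/K
            k_app = apparent_conductivity(3.0, 10e-3, k_c, t_c)
            err = (k_app - 3.0) / 3.0 * 100.0
            print(f"t = {t_c*1e3:.2f} mm, k = {k_c:.2f} W/m/K: error {err:+.2f}%")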
NASA Astrophysics Data System (ADS)
Giorli, Giacomo; Drazen, Jeffrey C.; Neuheimer, Anna B.; Copeland, Adrienne; Au, Whitlow W. L.
2018-01-01
Pelagic animals that form deep-sea scattering layers (DSLs) represent an important link in the food web between zooplankton and top predators. While estimating the composition, density and location of the DSL is important for understanding mesopelagic ecosystem dynamics and for predicting top predators' distribution, DSL composition and density are often estimated from trawls, which may be biased in terms of extrusion, avoidance, and gear-associated effects. Instead, the location and biomass of DSLs can be estimated with active acoustic techniques, though the estimates are often aggregate, without size- or taxon-specific information. For the first time in the open ocean, we used a DIDSON sonar to characterize the fauna in DSLs. Estimates of the numerical density and length of animals at different depths and locations along the Kona coast of the Island of Hawaii were determined. Data were collected below and inside the DSLs with the sonar mounted on a profiler. A total of 7068 animals were counted and sized. We estimated numerical densities ranging from 1 to 7 animals/m3, and individuals as long as 3 m were detected. These numerical densities were orders of magnitude higher than those estimated from trawls, and the average sizes of animals were much larger as well. A mixed model was used to characterize the numerical density and length of animals as functions of the deep-sea layer sampled, location, time of day, and day of the year. Numerical density and length of animals varied by month, with numerical density also a function of depth. The DIDSON proved to be a good tool for open-ocean/deep-sea estimation of the numerical density and size of marine animals, especially larger ones. Further work is needed to understand how this methodology relates to estimates of volume backscatter obtained with standard echosounding techniques and to density measures obtained with other sampling methodologies, and to precisely evaluate sampling biases.
Going Beyond QCD in Lattice Gauge Theory
NASA Astrophysics Data System (ADS)
Fleming, G. T.
2011-01-01
Strongly coupled gauge theories (SCGTs) have been studied theoretically for many decades using numerous techniques. The obvious motivation for these efforts stemmed from a desire to understand the source of the strong nuclear force: Quantum Chromodynamics (QCD). Guided by experimental results, theorists generally consider QCD to be a well-understood SCGT. Unfortunately, it is not clear how to extend the lessons learned from QCD to other SCGTs. Particularly urgent motivators for new studies of other SCGTs are the ongoing searches for physics beyond the standard model (BSM) at the Large Hadron Collider (LHC) and the Tevatron. Lattice gauge theory (LGT) is a technique for systematically improvable calculations in many SCGTs. It has become the standard for non-perturbative calculations in QCD, and it is widely believed that it may be useful for the study of other SCGTs in the realm of BSM physics. We will discuss the prospects and potential pitfalls of these LGT studies, focusing primarily on the flavor dependence of SU(3) gauge theory.
Study of Variable Frequency Induction Heating in Steel Making Process
NASA Astrophysics Data System (ADS)
Fukutani, Kazuhiko; Umetsu, Kenji; Itou, Takeo; Isobe, Takanori; Kitahara, Tadayuki; Shimada, Ryuichi
Induction heating technologies have been the standard technologies employed in steel making processes because they are clean, have a high energy density, and are highly controllable. However, there is a problem in using them: in general, the frequencies of the electric circuits have to be kept fixed to maintain good power factors, and this constraint makes the processes inflexible. In order to overcome this problem, we have developed a new heating technique: a variable-frequency power supply with magnetic energy recovery switching. This technique helps improve the quality of steel products as well as productivity. We have also performed numerical calculations and experiments to evaluate its effect on the temperature distributions of heated steel plates. The obtained results indicate that applying the technique in steel making processes would be advantageous.
Using SWAT to enhance watershed-based plans to meet numeric water quality standards
USDA-ARS?s Scientific Manuscript database
The number of states that have adopted numeric nutrient water-quality standards has increased to 23, up from ten in 1998. One state with both stream and reservoir phosphorus (P) numeric water-quality standards is Oklahoma. There were two primary objectives of this research: (1) determine if Oklaho...
Cosmographic analysis with Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando
2018-05-01
The limits of standard cosmography are here revised, addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To show this, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which the convergence of Chebyshev rational functions is better than that of standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.
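A toy illustration of the underlying point, assuming a stand-in function rather than the paper's cosmographic series: on a finite redshift interval a degree-3 Chebyshev fit stays accurate where the same-order Taylor expansion about z = 0 degrades.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    f = lambda z: 1.0 / (1.0 + z)          # toy "distance-like" function, not Hubble's rate
    z = np.linspace(0.0, 2.0, 400)

    # Degree-3 Taylor series of 1/(1+z) about z = 0: 1 - z + z^2 - z^3
    taylor = 1.0 - z + z**2 - z**3

    # Degree-3 Chebyshev fit over the whole interval [0, 2]
    coef = C.chebfit(z, f(z), deg=3)
    cheb = C.chebval(z, coef)

    print(f"max |Taylor error|    = {np.max(np.abs(taylor - f(z))):.3f}")   # grows near z = 2
    print(f"max |Chebyshev error| = {np.max(np.abs(cheb - f(z))):.5f}")     # stays small

The Taylor series has convergence radius 1 for this function, so it is useless beyond z = 1, while the Chebyshev fit distributes the error uniformly over the interval, which mirrors the high-redshift stability argued above.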
Isotropic three-dimensional T2 mapping of knee cartilage: Development and validation.
Colotti, Roberto; Omoumi, Patrick; Bonanno, Gabriele; Ledoux, Jean-Baptiste; van Heeswijk, Ruud B
2018-02-01
(1) To implement a higher-resolution isotropic 3D T2 mapping technique that uses sequential T2-prepared segmented gradient-recalled echo (Iso3DGRE) images for knee cartilage evaluation, and (2) to validate it both in vitro and in vivo in healthy volunteers and patients with knee osteoarthritis. The Iso3DGRE sequence with an isotropic 0.6 mm spatial resolution was developed on a clinical 3T MR scanner. Numerical simulations were performed to optimize the pulse sequence parameters. A phantom study was performed to validate the T2 estimation accuracy. The repeatability of the sequence was assessed in healthy volunteers (n = 7). T2 values were compared with those from a clinical standard 2D multislice multiecho (MSME) T2 mapping sequence in the knees of healthy volunteers (n = 13) and of patients with knee osteoarthritis (OA, n = 5). The numerical simulations resulted in 100 excitations per segment and an optimal radiofrequency (RF) excitation angle of 15°. The phantom study demonstrated a good correlation of the technique with the reference standard (slope 0.9 ± 0.05, intercept 0.2 ± 1.7 msec, R² ≥ 0.99). Repeated measurements of cartilage T2 values in healthy volunteers showed a coefficient of variation of 5.6%. Both the Iso3DGRE and MSME techniques found significantly higher cartilage T2 values (P < 0.03) in OA patients. Iso3DGRE precision was equal to that of MSME T2 mapping in healthy volunteers, and significantly higher in OA (P = 0.01). This study successfully demonstrated that high-resolution isotropic 3D T2 mapping for knee cartilage characterization is feasible, accurate, repeatable, and precise. The technique allows for multiplanar reformatting and thus T2 quantification in any plane of interest. J. Magn. Reson. Imaging 2018;47:362-371. © 2017 International Society for Magnetic Resonance in Medicine.
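The voxel-wise fit behind any such T2 map is a mono-exponential decay against the preparation (or echo) time; the sketch below shows it with synthetic values, not the study's protocol.

    import numpy as np
    from scipy.optimize import curve_fit

    def t2_decay(TE, S0, T2):
        # Mono-exponential signal model: S(TE) = S0 * exp(-TE / T2)
        return S0 * np.exp(-TE / T2)

    TE = np.array([0.0, 25.0, 45.0, 65.0])          # preparation times, ms (assumed)
    rng = np.random.default_rng(3)
    signal = t2_decay(TE, 1000.0, 40.0) + rng.normal(0, 5.0, TE.size)

    (fit_S0, fit_T2), _ = curve_fit(t2_decay, TE, signal, p0=(signal[0], 50.0))
    print(f"fitted T2 = {fit_T2:.1f} ms")            # cartilage T2 is a few tens of ms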
Quantitative analysis of time-resolved microwave conductivity data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reid, Obadiah G.; Moore, David T.; Li, Zhen
Flash-photolysis time-resolved microwave conductivity (fp-TRMC) is a versatile, highly sensitive technique for studying the complex photoconductivity of solution, solid, and gas-phase samples. The purpose of this paper is to provide a standard reference work for experimentalists interested in using microwave conductivity methods to study functional electronic materials, describing how to conduct and calibrate these experiments in order to obtain quantitative results. The main focus of the paper is on calculating the calibration factor, K, which is used to connect the measured change in microwave power absorption to the conductance of the sample. We describe the standard analytical formulae that have been used in the past, and compare them to numerical simulations. This comparison shows that the most widely used analytical analysis of fp-TRMC data systematically under-estimates the transient conductivity by ~60%. We suggest a more accurate semi-empirical way of calibrating these experiments. However, we emphasize that the full numerical calculation is necessary to quantify both transient and steady-state conductance for arbitrary sample properties and geometry.
Quantitative analysis of time-resolved microwave conductivity data
Reid, Obadiah G.; Moore, David T.; Li, Zhen; ...
2017-11-10
Flash-photolysis time-resolved microwave conductivity (fp-TRMC) is a versatile, highly sensitive technique for studying the complex photoconductivity of solution, solid, and gas-phase samples. The purpose of this paper is to provide a standard reference work for experimentalists interested in using microwave conductivity methods to study functional electronic materials, describing how to conduct and calibrate these experiments in order to obtain quantitative results. The main focus of the paper is on calculating the calibration factor, K, which is used to connect the measured change in microwave power absorption to the conductance of the sample. We describe the standard analytical formulae that have been used in the past, and compare them to numerical simulations. This comparison shows that the most widely used analytical analysis of fp-TRMC data systematically under-estimates the transient conductivity by ~60%. We suggest a more accurate semi-empirical way of calibrating these experiments. However, we emphasize that the full numerical calculation is necessary to quantify both transient and steady-state conductance for arbitrary sample properties and geometry.
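A minimal sketch of how the calibration factor is used once known, assuming the standard small-perturbation relation ΔP(t)/P = -K·ΔG(t) and a synthetic power transient; the value of K below is a placeholder, where in practice it would come from the analytical or numerical cavity model discussed above.

    import numpy as np

    K = 3.0e4                                    # calibration factor (assumed known)
    t = np.linspace(0.0, 1.0e-6, 1000)           # time, s
    dP_over_P = -6.0e-4 * np.exp(-t / 2.0e-7)    # synthetic fractional power transient

    dG = -dP_over_P / K                          # photoconductance transient, siemens
    print(f"peak Delta G = {dG.max():.2e} S")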
NASA Astrophysics Data System (ADS)
Artz, Jerry; Alchemy, John; Weilepp, Anne; Bongiovanni, Michael; Siddhartha, Kumar
2014-03-01
A medical, biophysics, and engineering collaboration has produced a standardized cloud-based application for creating automated whole-person impairment (WPI) ratings. The project assigns numerical values to injuries/illnesses in accordance with the American Medical Association Guides to the Evaluation of Permanent Impairment, Fifth Edition (AMA Press; 63 medical contributors and 89 medical reviewers). The AMA Guides serve as the industry standard for assigning impairment values in 32 US states and 190 other countries. Clinical medical data are collected using a menu-driven user interface and computationally combined into a single numeric value. A medical doctor performs a biometric analysis and enters the quantitative data into a mobile device. The data are analyzed using proprietary validation algorithms, and a WPI rating is created. The findings are embedded into a formalized medicolegal report in a matter of minutes. This particular presentation concentrates on the WPI rating of the spine: cervical, thoracic, and lumbar. Both common rating techniques are presented, i.e., Diagnosis Related Estimates (DRE) and Range of Motion (ROM).
Numerical investigation of airflow in an idealised human extra-thoracic airway: a comparison study
Chen, Jie; Gutmark, Ephraim
2013-01-01
The large eddy simulation (LES) technique is employed to numerically investigate the airflow through an idealised human extra-thoracic airway under different breathing conditions: 10 l/min, 30 l/min, and 120 l/min. The computational results are compared with single and cross hot-wire measurements, and with the time-averaged flow field computed by the standard k-ω and k-ω-SST Reynolds-averaged Navier-Stokes (RANS) models and the Lattice-Boltzmann method (LBM). The LES results are also compared with the root-mean-square (RMS) flow field computed by the Reynolds stress model (RSM) and the LBM. LES generally gives a better prediction of the time-averaged flow field than the RANS models and the LBM. LES also provides a better estimate of the RMS flow field than both the RSM and the LBM. PMID:23619907
A multilevel control system for the large space telescope. [numerical analysis/optimal control
NASA Technical Reports Server (NTRS)
Siljak, D. D.; Sundareshan, S. K.; Vukcevic, M. B.
1975-01-01
A multilevel scheme was proposed for the control of the Large Space Telescope (LST), modeled by a three-axis, sixth-order nonlinear equation. Local controllers were used at the subsystem level to stabilize the motions corresponding to the three axes. Global controllers were applied to reduce (and sometimes nullify) the interactions among the subsystems. A multilevel optimization method was developed whereby local quadratic optimizations were performed at the subsystem level, and global control was again used to reduce (or nullify) the effect of interactions. The multilevel stabilization and optimization methods are presented as general design tools and then used in the design of the LST control system. The methods are entirely computerized, so that they can accommodate higher-order LST models, with both conceptual and numerical advantages over standard straightforward design techniques.
Further studies on stability analysis of nonlinear Roesser-type two-dimensional systems
NASA Astrophysics Data System (ADS)
Dai, Xiao-Lin
2014-04-01
This paper is concerned with further relaxations of the stability analysis of nonlinear Roesser-type two-dimensional (2D) systems in the Takagi-Sugeno fuzzy form. To achieve this goal, a novel slack-matrix-variable technique, which is homogeneous polynomially parameter-dependent on the normalized fuzzy weighting functions with arbitrary degree, is developed, and the algebraic properties of the normalized fuzzy weighting functions are collected into a set of augmented matrices. Consequently, more information about the normalized fuzzy weighting functions is involved and the relaxation quality of the stability analysis is significantly improved. Moreover, the obtained result is formulated in the form of linear matrix inequalities, which can be easily solved via standard numerical software. Finally, a numerical example is provided to demonstrate the effectiveness of the proposed result.
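As a hedged illustration of the final step only, the sketch below solves a Lyapunov-type LMI with off-the-shelf software for a single discrete-time system; this is a one-dimensional analogue, not the paper's full 2D Roesser/fuzzy parameterization, and cvxpy with an SDP-capable solver is assumed.

    import numpy as np
    import cvxpy as cp

    A = np.array([[0.5, 0.2], [-0.1, 0.6]])   # illustrative system matrix
    n = A.shape[0]
    eps = 1e-6

    # Feasibility LMI: find P > 0 such that A' P A - P < 0
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> eps * np.eye(n),
                   A.T @ P @ A - P << -eps * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    print(prob.status)   # "optimal" certifies stability with the returned P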
Strongly interacting dynamics beyond the standard model on a space-time lattice.
Lucini, Biagio
2010-08-13
Strong theoretical arguments suggest that the Higgs sector of the standard model of electroweak interactions is an effective low-energy theory, with a more fundamental theory expected to emerge at an energy scale of the order of a teraelectronvolt. One possibility is that the more fundamental theory is strongly interacting and the Higgs sector is given by the low-energy dynamics of the underlying theory. I review recent works aimed at determining observable quantities by numerical simulations of strongly interacting theories proposed in the literature to explain the electroweak symmetry-breaking mechanism. These investigations are based on Monte Carlo simulations of the theory formulated on a space-time lattice. I focus on the so-called minimal walking technicolour scenario, an SU(2) gauge theory with two flavours of fermions in the adjoint representation. The emerging picture is that this theory has an infrared fixed point that dominates the large-distance physics. I shall discuss the first numerical determinations of quantities of phenomenological interest for this theory and analyse future directions of quantitative studies of strongly interacting theories beyond the standard model with lattice techniques. In particular, I report on a finite-size scaling determination of the chiral condensate anomalous dimension γ, for which 0.05 ≤ γ ≤ 0.25.
Numerical Determination of Critical Conditions for Thermal Ignition
NASA Technical Reports Server (NTRS)
Luo, W.; Wake, G. C.; Hawk, C. W.; Litchford, R. J.
2008-01-01
The determination of ignition or thermal explosion in an oxidizing porous body of material, as described by a dimensionless reaction-diffusion equation of the form ∂u/∂t = ∇²u + δe^(−1/u) over the bounded region Ω, is critically reexamined from a modern perspective using numerical methodologies. First, the classic stationary model is revisited to establish the proper reference frame for the steady-state solution space, and it is demonstrated how the resulting nonlinear two-point boundary value problem can be reexpressed as an initial value problem for a system of first-order differential equations, which may be readily solved using standard algorithms. Then, the numerical procedure is implemented and thoroughly validated against previous computational results based on sophisticated path-following techniques. Next, the transient nonstationary model is attacked, and the full nonlinear form of the reaction-diffusion equation, including a generalized convective boundary condition, is discretized and expressed as a system of linear algebraic equations. The numerical methodology is implemented as a computer algorithm, and validation computations are carried out as a prelude to a broad-ranging evaluation of the assembly problem and identification of the watershed critical initial temperature conditions for thermal ignition. This numerical methodology is then used as the basis for studying the relationship between the shape of the critical initial temperature distribution and the corresponding spatial moments of its energy content integral, and an attempt to forge a fundamental conjecture governing this relation. Finally, the effects of dynamic boundary conditions on the classic storage problem are investigated and the groundwork is laid for the development of an approximate solution methodology based on adaptation of the standard stationary model.
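A minimal sketch of the stationary step described above, assuming a symmetric 1-D slab and illustrative parameter values: the boundary value problem is recast as an initial value problem integrated from the centre and closed by shooting on the centre temperature.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    delta, u_a = 0.5, 0.25          # reaction parameter and ambient temperature (assumed)

    def rhs(x, y):
        # Steady slab: u'' + delta * exp(-1/u) = 0, written as a first-order system
        u, up = y
        return [up, -delta * np.exp(-1.0 / u)]

    def surface_mismatch(a):
        # Integrate from the centre with u(0) = a, u'(0) = 0 (symmetry)
        sol = solve_ivp(rhs, [0.0, 1.0], [a, 0.0], rtol=1e-10, atol=1e-12)
        return sol.y[0, -1] - u_a    # enforce the surface condition u(1) = u_a

    a_star = brentq(surface_mismatch, u_a, u_a + 0.5)   # bracket and shoot
    print(f"centre temperature u(0) = {a_star:.6f}")

Sweeping the reaction parameter and recording where the shooting root disappears traces out the critical (watershed) condition in the same spirit as the path-following results referenced above.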
Advantage of four-electrode over two-electrode defibrillators
NASA Astrophysics Data System (ADS)
Bragard, J.; Šimić, A.; Laroze, D.; Elorza, J.
2015-12-01
Defibrillation is the standard clinical treatment used to stop ventricular fibrillation. An electrical device delivers a controlled amount of electrical energy via a pair of electrodes in order to reestablish a normal heart rate. We propose a technique that is a combination of biphasic shocks applied with a four-electrode system rather than the standard two-electrode system. We use a numerical model of a one-dimensional ring of cardiac tissue in order to test and evaluate the benefit of this technique. We compare three different shock protocols, namely a monophasic and two types of biphasic shocks. The results obtained by using a four-electrode system are compared quantitatively with those obtained with the standard two-electrode system. We find that a huge reduction in defibrillation threshold is achieved with the four-electrode system. For the most efficient protocol (asymmetric biphasic), we obtain a reduction in excess of 80% in the energy required for a defibrillation success rate of 90%. The mechanisms of successful defibrillation are also analyzed. This reveals that the advantage of asymmetric biphasic shocks with four electrodes lies in the duration of the cathodal and anodal phase of the shock.
NASA Technical Reports Server (NTRS)
Nguyen, Quang-Viet; Kojima, Jun
2005-01-01
Researchers from NASA Glenn Research Center's Combustion Branch and the Ohio Aerospace Institute (OAI) have developed a transferable calibration standard for an optical technique called spontaneous Raman scattering (SRS) in high-pressure flames. SRS is perhaps the only technique that provides spatially and temporally resolved, simultaneous multiscalar measurements in turbulent flames. Such measurements are critical for the validation of numerical models of combustion. This study has been a combined experimental and theoretical effort to develop a spectral calibration database for multiscalar diagnostics using SRS in high-pressure flames. In the past, however, such measurements have used a one-of-a-kind experimental setup and a setup-dependent calibration procedure to empirically account for spectral interferences, or crosstalk, among the major species of interest. Such calibration procedures, being non-transferable, are prohibitively expensive to duplicate. A goal of this effort is to provide an SRS calibration database using transferable standards that can be implemented widely by other researchers for both atmospheric-pressure and high-pressure (less than 30 atm) SRS studies. A secondary goal is to provide quantitative multiscalar diagnostics in high-pressure environments to validate computational combustion codes.
The Inhibition of the Rayleigh-Taylor Instability by Rotation.
Baldwin, Kyle A; Scase, Matthew M; Hill, Richard J A
2015-07-01
It is well-established that the Coriolis force that acts on fluid in a rotating system can act to stabilise otherwise unstable flows. Chandrasekhar considered theoretically the effect of the Coriolis force on the Rayleigh-Taylor instability, which occurs at the interface between a dense fluid lying on top of a lighter fluid under gravity, concluding that rotation alone could not stabilise this system indefinitely. Recent numerical work suggests that rotation may, nevertheless, slow the growth of the instability. Experimental verification of these results using standard techniques is problematic, owing to the practical difficulty in establishing the initial conditions. Here, we present a new experimental technique for studying the Rayleigh-Taylor instability under rotation that side-steps the problems encountered with standard techniques by using a strong magnetic field to destabilize an otherwise stable system. We find that rotation about an axis normal to the interface acts to retard the growth rate of the instability and stabilise long wavelength modes; the scale of the observed structures decreases with increasing rotation rate, asymptoting to a minimum wavelength controlled by viscosity. We present a critical rotation rate, dependent on Atwood number and the aspect ratio of the system, for stabilising the most unstable mode.
The Inhibition of the Rayleigh-Taylor Instability by Rotation
Baldwin, Kyle A.; Scase, Matthew M.; Hill, Richard J. A.
2015-01-01
It is well-established that the Coriolis force that acts on fluid in a rotating system can act to stabilise otherwise unstable flows. Chandrasekhar considered theoretically the effect of the Coriolis force on the Rayleigh-Taylor instability, which occurs at the interface between a dense fluid lying on top of a lighter fluid under gravity, concluding that rotation alone could not stabilise this system indefinitely. Recent numerical work suggests that rotation may, nevertheless, slow the growth of the instability. Experimental verification of these results using standard techniques is problematic, owing to the practical difficulty in establishing the initial conditions. Here, we present a new experimental technique for studying the Rayleigh-Taylor instability under rotation that side-steps the problems encountered with standard techniques by using a strong magnetic field to destabilize an otherwise stable system. We find that rotation about an axis normal to the interface acts to retard the growth rate of the instability and stabilise long wavelength modes; the scale of the observed structures decreases with increasing rotation rate, asymptoting to a minimum wavelength controlled by viscosity. We present a critical rotation rate, dependent on Atwood number and the aspect ratio of the system, for stabilising the most unstable mode. PMID:26130005
Effective Thermal Conductivity of High Porosity Open Cell Nickel Foam
NASA Technical Reports Server (NTRS)
Sullins, Alan D.; Daryabeigi, Kamran
2001-01-01
The effective thermal conductivity of high-porosity open-cell nickel foam samples was measured over a wide range of temperatures and pressures using a standard steady-state technique. The samples, measuring 23.8 mm, 18.7 mm, and 13.6 mm in thickness, were constructed from layers of 1.7 mm-thick foam with a porosity of 0.968. Tests were conducted with the specimens subjected to temperature differences of 100 to 1000 K across the thickness and at environmental pressures of 10⁻⁴ to 750 mm Hg. All tests were conducted in a gaseous nitrogen environment. A one-dimensional finite-volume numerical model was developed to model combined radiation/conduction heat transfer in the foam. The radiation heat transfer was modeled using the two-flux approximation. Solid and gas conduction were modeled using standard techniques for high-porosity media. A parameter estimation technique was used in conjunction with the measured and predicted thermal conductivities at pressures of 10⁻⁴ and 750 mm Hg to determine the extinction coefficient, albedo of scattering, and weighting factors for modeling the conduction thermal conductivity. The measured and predicted conductivities over the intermediate pressure values differed by 13%.
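A hedged sketch of a simpler additive model in the same spirit: weighted solid and gas conduction plus a Rosseland-type radiative diffusion term standing in for the paper's two-flux treatment. All parameter values below are illustrative, not the estimated ones.

    import numpy as np

    SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4

    def k_effective(T, k_solid=90.0, k_gas=0.026, porosity=0.968,
                    f_solid=0.05, beta=2000.0):
        # f_solid: empirical solid-conduction weighting factor (assumed)
        # beta: extinction coefficient, 1/m (assumed)
        k_cond = f_solid * (1.0 - porosity) * k_solid + porosity * k_gas
        k_rad = 16.0 * SIGMA * T**3 / (3.0 * beta)   # Rosseland radiative term
        return k_cond + k_rad

    for T in (300.0, 600.0, 900.0):
        print(f"T = {T:4.0f} K: k_eff = {k_effective(T):.4f} W/m/K")

The strong T³ growth of the radiative term is what makes the parameter estimation against high-temperature data sensitive to the extinction coefficient.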
NNLO computational techniques: The cases H→γγ and H→gg
NASA Astrophysics Data System (ADS)
Actis, Stefano; Passarino, Giampiero; Sturm, Christian; Uccirati, Sandro
2009-04-01
A large set of techniques needed to compute decay rates at the two-loop level is derived and systematized. The main emphasis of the paper is on the two Standard Model decays H→γγ and H→gg. The techniques, however, have a much wider range of application: they give practical examples of general rules for two-loop renormalization; they introduce simple recipes for handling internal unstable particles in two-loop processes; and they illustrate simple procedures for the extraction of collinear logarithms from the amplitude. The latter is particularly relevant for showing cancellations, e.g. the cancellation of collinear divergences. Furthermore, the paper deals with the proper treatment of non-enhanced two-loop QCD and electroweak contributions to different physical (pseudo-)observables, showing how they can be transformed in a way that allows for a stable numerical integration. Numerical results for the two-loop percentage corrections to H→γγ,gg are presented and discussed. When applied to the process pp→gg+X→H+X, the results show that the electroweak scaling factor for the cross section is between -4% and +6% in the range 100 GeV
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported. The transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimating large-time macrodispersivities from cloud second-moment data, and for approximating the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are also reported.
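A minimal sketch of the moment-based estimation idea, assuming synthetic particle clouds rather than the Monte Carlo fields of the paper: the longitudinal macrodispersivity follows from the growth of the second spatial moment with mean displacement, alpha_L ≈ (1/2) d(sigma_x^2)/d(x_mean).

    import numpy as np

    rng = np.random.default_rng(4)
    times = np.linspace(10.0, 100.0, 10)
    v, aL_true = 1.0, 0.5                  # mean velocity and macrodispersivity (assumed)

    x_mean, x_var = [], []
    for t in times:
        # Synthetic cloud snapshot: advection plus dispersion-driven spreading
        x = rng.normal(v * t, np.sqrt(2.0 * aL_true * v * t), 5000)
        x_mean.append(x.mean())
        x_var.append(x.var(ddof=1))

    # Half the slope of sigma_x^2 against mean displacement estimates alpha_L
    slope, _ = np.polyfit(x_mean, x_var, 1)
    print(f"estimated alpha_L = {0.5 * slope:.3f}")   # close to 0.5

Repeating this over many conductivity realizations, as the paper does, is what allows standard errors to be attached to the macrodispersivity estimate.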
NASA Astrophysics Data System (ADS)
Uddin, H.; Kramer, R. M. J.; Pantano, C.
2014-04-01
An immersed boundary methodology to solve the compressible Navier-Stokes equations around complex geometries in Cartesian fluid dynamics solvers is described. The objective of the new approach is to enable smooth reconstruction of pressure and viscous stresses around the embedded objects without spurious numerical artifacts. A standard level set represents the boundary of the object and defines a fictitious domain into which the flow fields are smoothly extended. Boundary conditions on the surface are enforced by an approach inspired by analytic continuation. Each fluid field is extended independently, constrained only by the boundary condition associated with that field. Unlike most existing methods, no jump conditions or explicit derivation of them from the boundary conditions are required in this approach. Numerical stiffness that arises when the fluid-solid interface is close to grid points of the mesh is addressed by preconditioning. In addition, the embedded geometry technique is coupled with a stable high-order adaptive discretization that is enabled around the object boundary to enhance resolution. The stencils used to transition the order of accuracy of the discretization are derived using the summation-by-parts technique that ensures stability. Applications to shock reflections, shock-ramp interactions, and supersonic and low-Mach number flows over two- and three-dimensional geometries are presented.
Tensile-Creep Test Specimen Preparation Practices of Surface Support Liners
NASA Astrophysics Data System (ADS)
Guner, Dogukan; Ozturk, Hasan
2017-12-01
Ground support has always been considered a challenging issue in underground operations. Many forms of support systems and supporting techniques are available in the mining/tunnelling industry. In the last two decades, a new polymer-based material, Thin Spray-on Liner (TSL), has attained a place in the market as an alternative to current areal ground support systems. Although TSL provides numerous merits and has different application purposes, knowledge of the mechanical properties and performance of this material is still limited. In laboratory studies, since tensile rupture is the most commonly observed failure mechanism in field applications, researchers have generally studied the tensile testing of TSLs with modifications of American Society for Testing and Materials (ASTM) D-638 standards. For tensile creep testing, the specimen preparation process also follows the ASTM standards. Two different specimen dimension types (Type I, Type IV) that conform to the related standards are widely preferred in TSL tensile testing. Moreover, molding and die cutting are commonly used specimen preparation techniques. In the literature, there is great variability in test results due to differences in specimen preparation techniques and practices. In this study, a ductile TSL product was tested in order to investigate the effect of both specimen preparation techniques and specimen dimensions under a 7-day curing time. As a result, ultimate tensile strength, tensile yield strength, tensile modulus, and elongation at break values were obtained for 4 different test series. It is concluded that Type IV specimens have higher strength values than Type I specimens, and molded specimens yield lower values than those prepared with a die cutter. Moreover, specimens prepared by molding techniques have scattered test results. Type IV specimens prepared by the die cutter technique are suggested for tensile tests, and Type I specimens prepared by the die cutter technique should be preferred for tensile creep tests.
(Machine) learning to do more with less
NASA Astrophysics Data System (ADS)
Cohen, Timothy; Freytsis, Marat; Ostdiek, Bryan
2018-02-01
Determining the best method for training a machine learning algorithm is critical to maximizing its ability to classify data. In this paper, we compare the standard "fully supervised" approach (which relies on knowledge of event-by-event truth-level labels) with a recent proposal that instead utilizes class ratios as the only discriminating information provided during training. This so-called "weakly supervised" technique has access to less information than the fully supervised method and yet is still able to yield impressive discriminating power. In addition, weak supervision seems particularly well suited to particle physics since quantum mechanics is incompatible with the notion of mapping an individual event onto any single Feynman diagram. We examine the technique in detail — both analytically and numerically — with a focus on the robustness to issues of mischaracterizing the training samples. Weakly supervised networks turn out to be remarkably insensitive to a class of systematic mismodeling. Furthermore, we demonstrate that the event level outputs for weakly versus fully supervised networks are probing different kinematics, even though the numerical quality metrics are essentially identical. This implies that it should be possible to improve the overall classification ability by combining the output from the two types of networks. For concreteness, we apply this technology to a signature of beyond the Standard Model physics to demonstrate that all these impressive features continue to hold in a scenario of relevance to the LHC. Example code is provided on GitHub.
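The proportion-only training idea admits a small self-contained illustration. The following sketch assumes a toy logistic model trained on batches whose only label is the signal fraction (a "learning from label proportions" setup); the data generator and hyperparameters are invented for illustration and do not reproduce the paper's networks or loss.

```python
# Hedged sketch of weak supervision via class proportions: a logistic model
# is trained so that the mean predicted signal probability in each batch
# matches that batch's known signal fraction. Illustration only; the paper's
# architectures and quality metrics are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)

def make_batch(n, frac):
    """Two Gaussian classes; only the signal fraction `frac` is known."""
    n_sig = int(n * frac)
    x = np.concatenate([rng.normal(+1.0, 1.0, n_sig),
                        rng.normal(-1.0, 1.0, n - n_sig)])
    return x.reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lr = 0.0, 0.0, 0.5
fractions = [0.2, 0.8, 0.4, 0.6]       # batch-level class ratios (the only labels)
for epoch in range(500):
    for frac in fractions:
        x = make_batch(256, frac)
        p = sigmoid(w * x[:, 0] + b)
        err = p.mean() - frac          # batch-level proportion mismatch
        g = p * (1 - p) / len(p)       # gradient of 0.5*err^2 via chain rule
        w -= lr * err * np.sum(g * x[:, 0])
        b -= lr * err * np.sum(g)

print(f"learned decision boundary near x = {-b / w:.2f}")  # ~0 for these classes
```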
Numerically stable finite difference simulation for ultrasonic NDE in anisotropic composites
NASA Astrophysics Data System (ADS)
Leckey, Cara A. C.; Quintanilla, Francisco Hernando; Cole, Christina M.
2018-04-01
Simulation tools can enable optimized inspection of advanced materials and complex geometry structures. Recent work at NASA Langley is focused on the development of custom simulation tools for modeling ultrasonic wave behavior in composite materials. Prior work focused on a standard staggered-grid finite difference approach, implemented as a three-dimensional (3D) anisotropic Elastodynamic Finite Integration Technique (EFIT) code. However, observations showed that the anisotropic EFIT method displays numerically unstable behavior at the locations of stress-free boundaries for some cases of anisotropic materials. This paper gives examples of the numerical instabilities observed for EFIT and discusses the source of instability. As an alternative to EFIT, the 3D Lebedev Finite Difference (LFD) method has been implemented. The paper briefly describes the LFD approach and shows examples of stable behavior in the presence of stress-free boundaries for a monoclinic anisotropy case. The LFD results are also compared to experimental results and dispersion curves.
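A minimal sketch of the underlying scheme family may help. The code below shows a 1-D velocity-stress staggered-grid update, the basic leapfrog skeleton underlying EFIT/LFD-type codes; the actual codes are 3-D and anisotropic, and the material values and source pulse here are assumed for illustration.

```python
# Hedged sketch: a 1-D velocity-stress staggered-grid finite-difference
# update. Velocity lives on integer nodes and stress on half-integer nodes,
# updated in leapfrog fashion; this isotropic toy only illustrates the
# staggering, not the 3-D anisotropic EFIT/LFD formulations.
import numpy as np

nx, dx = 400, 1e-3                 # grid points, spacing (m)
rho, E = 2700.0, 70e9              # density (kg/m^3), stiffness (Pa)
c = np.sqrt(E / rho)               # wave speed (~5 km/s)
dt = 0.5 * dx / c                  # CFL-stable time step

v = np.zeros(nx)                   # particle velocity at integer nodes
s = np.zeros(nx - 1)               # stress at half-integer nodes

for n in range(800):
    # drive the left end with a smooth velocity pulse (hypothetical source)
    v[0] = np.exp(-0.5 * ((n * dt - 2e-6) / 5e-7) ** 2)
    s += dt * E * np.diff(v) / dx           # stress update (half time step)
    v[1:-1] += dt / rho * np.diff(s) / dx   # velocity update (ends held)

print("peak interior particle velocity:", np.abs(v[1:-1]).max())
```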
Gyrokinetic simulation of ITG modes in a three-mode coupling model
NASA Astrophysics Data System (ADS)
Jenkins, Thomas G.; Lee, W. W.
2004-11-01
A three-mode coupling model of ITG modes with adiabatic electrons is studied both analytically and numerically in 2-dimensional slab geometry using the gyrokinetic formalism. It can be shown analytically that the (quasilinear) saturation amplitude of the waves in the system should be enhanced by the inclusion of the parallel velocity nonlinearity in the governing gyrokinetic equation. The effect of this (frequently neglected) nonlinearity on the steady-state transport properties of the plasma is studied numerically using standard gyrokinetic particle simulation techniques. The balance [1] between various steady-state transport properties of the model (particle and heat flux, entropy production, and collisional dissipation) is examined. Effects resulting from the inclusion of nonadiabatic electrons in the model are also considered numerically, making use of the gyrokinetic split-weight scheme [2] in the simulations. [1] W. W. Lee and W. M. Tang, Phys. Fluids 31, 612 (1988). [2] I. Manuilskiy and W. W. Lee, Phys. Plasmas 7, 1381 (2000).
Steering Quantum Dynamics of a Two-Qubit System via Optimal Bang-Bang Control
NASA Astrophysics Data System (ADS)
Hu, Juju; Ke, Qiang; Ji, Yinghua
2018-02-01
The optimization of control time for quantum systems has attracted decades of attention in control science, since shorter control times improve efficiency and suppress decoherence caused by the environment. Based on an analysis of the advantages and disadvantages of existing Lyapunov control, we use a bang-bang optimal control technique to investigate fast state control in a closed two-qubit quantum system, and we give three optimized control field design methods. Numerical simulation experiments indicate the effectiveness of the methods. Compared to the standard Lyapunov control or standard bang-bang control method, the optimized control field design methods effectively shorten the state control time and avoid the high-frequency oscillation that occurs in bang-bang control.
Automated monitor and control for deep space network subsystems
NASA Technical Reports Server (NTRS)
Smyth, P.
1989-01-01
The problem of automating monitor and control loops for Deep Space Network (DSN) subsystems is considered and an overview of currently available automation techniques is given. The use of standard numerical models, knowledge-based systems, and neural networks is considered. It is argued that none of these techniques alone possess sufficient generality to deal with the demands imposed by the DSN environment. However, it is shown that schemes that integrate the better aspects of each approach and are referenced to a formal system model show considerable promise, although such an integrated technology is not yet available for implementation. Frequent reference is made to the receiver subsystem since this work was largely motivated by experience in developing an automated monitor and control loop for the advanced receiver.
Motion Estimation and Compensation Strategies in Dynamic Computerized Tomography
NASA Astrophysics Data System (ADS)
Hahn, Bernadette N.
2017-12-01
A main challenge in computerized tomography consists in imaging moving objects. Temporal changes during the measuring process lead to inconsistent data sets, and applying standard reconstruction techniques causes motion artefacts which can severely impede reliable diagnostics. Therefore, novel reconstruction techniques are required which compensate for the dynamic behavior. This article builds on recent results from a microlocal analysis of the dynamic setting, which enable us to formulate efficient analytic motion compensation algorithms for contour extraction. Since these methods require information about the dynamic behavior, we further introduce a motion estimation approach which determines parameters of affine and certain non-affine deformations directly from measured motion-corrupted Radon data. Our methods are illustrated with numerical examples for both types of motion.
Farooq, Zerwa; Behzadi, Ashkan Heshmatzadeh; Blumenfeld, Jon D; Zhao, Yize; Prince, Martin R
To compare MRI segmentation methods for measuring liver cyst volumes in autosomal dominant polycystic kidney disease (ADPKD). Liver cyst volumes in 42 ADPKD patients were measured using region growing, thresholding, and cyst diameter techniques. Manual segmentation was the reference standard. Root mean square deviation was 113, 155, and 500 for cyst diameter, thresholding, and region growing, respectively. Thresholding error for cyst volumes below 500 ml was 550% vs 17% for cyst volumes above 500 ml (p<0.001). For measuring the volume of a small number of cysts, cyst diameter and manual segmentation methods are recommended. For severe disease with numerous, large hepatic cysts, thresholding is an acceptable alternative. Copyright © 2017 Elsevier Inc. All rights reserved.
Wang, Hongmei; Feng, Qing; Li, Ning; Xu, Sheng
2016-12-01
Limited information is available regarding the metal-ceramic bond strength of dental Co-Cr alloys fabricated by casting (CAST), computer numerical control (CNC) milling, and selective laser melting (SLM). The purpose of this in vitro study was to evaluate the metal-ceramic bond characteristics of 3 dental Co-Cr alloys fabricated by casting, computer numerical control milling, and selective laser melting techniques using the 3-point bend test (International Organization for Standardization [ISO] standard 9693). Forty-five specimens (25×3×0.5 mm) made of dental Co-Cr alloys were prepared by CAST, CNC milling, and SLM techniques. The morphology of the oxidation surface of the metal specimens was evaluated by scanning electron microscopy (SEM). After porcelain application, the interfacial characterization was evaluated by SEM equipped with energy-dispersive spectrometry (EDS) analysis, and the metal-ceramic bond strength was assessed with the 3-point bend test. Failure type and elemental composition on the debonding interface were assessed by SEM/EDS. The bond strength was statistically analyzed by 1-way ANOVA and the Tukey honest significant difference test (α=.05). The oxidation surfaces of the CAST, CNC, and SLM groups were different: porous in the CAST group but compact and irregular in the CNC and SLM groups. The metal-ceramic interfaces of the SLM and CNC groups showed better integration than those of the CAST group. The bond strength was 37.7 ±6.5 MPa for the CAST, 43.3 ±9.2 MPa for the CNC, and 46.8 ±5.1 MPa for the SLM group. Statistically significant differences were found among the 3 groups tested (P=.028). The debonding surfaces of all specimens exhibited a cohesive failure mode. The oxidation surface morphologies and thicknesses of dental Co-Cr alloys depend on the fabrication technique used. The bond strengths of all 3 groups exceeded the minimum acceptable value of 25 MPa recommended by ISO 9693; hence, dental Co-Cr alloy fabricated with the SLM technique could be a promising alternative for metal-ceramic restorations. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
A technique to remove the tensile instability in weakly compressible SPH
NASA Astrophysics Data System (ADS)
Xu, Xiaoyang; Yu, Peng
2018-01-01
When smoothed particle hydrodynamics (SPH) is directly applied to the numerical simulation of transient viscoelastic free surface flows, a numerical problem called tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing free surface flow of an Oldroyd-B fluid are considered and approximated by an improved SPH scheme, which includes a correction of the kernel gradient and the introduction of a Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, three test cases are simulated: an impacting drop, the injection molding of a C-shaped cavity, and extrudate swell. The numerical results obtained are compared with those simulated by other numerical methods. A comparison among different numerical techniques (e.g., the artificial stress) for removing the tensile instability is further performed. All numerical results agree well with the available data.
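For readers unfamiliar with particle shifting, a bare-bones version is sketched below. It assumes a simple Fickian-type shifting step (after Lind-style formulations) on a perturbed 2-D lattice; the paper's optimized variant and its free-surface handling are not reproduced, and the coefficients are illustrative.

```python
# Hedged sketch: a basic Fickian particle-shifting step for SPH. Each
# particle moves down the gradient of particle "concentration" to even out
# the distribution; dr_i = -C*h*c0*dt * sum_j (m/rho) * gradW_ij.
import numpy as np

def cubic_spline_grad(rij, h):
    """Gradient of the 2-D cubic spline kernel w.r.t. the first particle."""
    r = np.linalg.norm(rij)
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    if q < 1.0:
        dw = sigma * (-3.0 * q + 2.25 * q**2) / h
    elif q < 2.0:
        dw = -sigma * 0.75 * (2.0 - q) ** 2 / h
    else:
        return np.zeros(2)
    return dw * rij / (r + 1e-12)

def shift_particles(x, h, dt, c0, m_over_rho=1.0, C=0.5):
    """One shifting pass over all particles (O(N^2) toy implementation)."""
    x_new = x.copy()
    for i in range(len(x)):
        grad_c = np.zeros(2)
        for j in range(len(x)):
            if i != j:
                grad_c += m_over_rho * cubic_spline_grad(x[i] - x[j], h)
        x_new[i] -= C * h * c0 * dt * grad_c
    return x_new

# a slightly perturbed lattice of particles relaxes toward uniformity
rng = np.random.default_rng(2)
g = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), -1).reshape(-1, 2)
x = g + rng.normal(0.0, 0.15, g.shape)
for _ in range(20):
    x = shift_particles(x, h=1.3, dt=0.05, c0=1.0)
print("mean nearest-neighbour spacing:",
      np.mean([np.sort(np.linalg.norm(x - p, axis=1))[1] for p in x]))
```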
Standards and Methodologies for Characterizing Radiobiological Impact of High-Z Nanoparticles
Subiel, Anna; Ashmore, Reece; Schettino, Giuseppe
2016-01-01
Research on the application of high-Z nanoparticles (NPs) in cancer treatment and diagnosis has recently been the subject of growing interest, with much promise being shown with regards to a potential transition into clinical practice. In spite of numerous publications related to the development and application of nanoparticles for use with ionizing radiation, the literature is lacking coherent and systematic experimental approaches to fully evaluate the radiobiological effectiveness of NPs, validate mechanistic models and allow direct comparison of the studies undertaken by various research groups. The lack of standards and established methodology is commonly recognised as a major obstacle for the transition of innovative research ideas into clinical practice. This review provides a comprehensive overview of radiobiological techniques and quantification methods used in in vitro studies on high-Z nanoparticles and aims to provide recommendations for future standardization for NP-mediated radiation research. PMID:27446499
Robust tuning of robot control systems
NASA Technical Reports Server (NTRS)
Minis, I.; Uebel, M.
1992-01-01
The computed torque control problem is examined for a robot arm with flexible, geared, joint drive systems which are typical in many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach to computed torque control combines a computed torque algorithm with a torque controller at each joint. Three such control schemes are proposed. The first scheme uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm and a novel torque control system based on model-following techniques. Standard tasks and performance indices are used to evaluate the performance of the controllers. Both numerical simulations and experiments are used in the evaluation. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
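The classic rigid-body computed torque law the paper builds on can be written compactly as tau = M(q)(q̈_d + Kv·ė + Kp·e) + C(q,q̇)q̇ + g(q). Below is a minimal sketch on a 1-DOF pendulum; the paper's schemes add joint torque loops for flexible, geared drives, which this toy model omits, and the gains and trajectory are assumptions.

```python
# Hedged sketch: the classic computed-torque control law on a 1-DOF pendulum.
# With an exact model, feedback linearization reduces the error dynamics to
# a stable linear system set by the gains Kp and Kv.
import numpy as np

m, l, g0, dt = 1.0, 1.0, 9.81, 1e-3
Kp, Kv = 100.0, 20.0

def M(q):    return m * l**2               # inertia (constant for a pendulum)
def grav(q): return m * g0 * l * np.sin(q) # gravity torque

q, qd = 0.0, 0.0                           # plant state
qdes    = lambda t: 0.5 * np.sin(t)        # desired trajectory
qdes_d  = lambda t: 0.5 * np.cos(t)
qdes_dd = lambda t: -0.5 * np.sin(t)

for k in range(5000):
    t = k * dt
    e, ed = qdes(t) - q, qdes_d(t) - qd
    tau = M(q) * (qdes_dd(t) + Kv * ed + Kp * e) + grav(q)  # control law
    qdd = (tau - grav(q)) / M(q)           # pendulum dynamics
    qd += qdd * dt
    q += qd * dt

print(f"tracking error at t = 5 s: {qdes(5.0) - q:.2e}")
```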
Reinersman, Phillip N; Carder, Kendall L
2004-05-01
A hybrid method is presented by which Monte Carlo (MC) techniques are combined with an iterative relaxation algorithm to solve the radiative transfer equation in arbitrary one-, two-, or three-dimensional optical environments. The optical environments are first divided into contiguous subregions, or elements. MC techniques are employed to determine the optical response function of each type of element. The elements are combined, and relaxation techniques are used to determine simultaneously the radiance field on the boundary and throughout the interior of the modeled environment. One-dimensional results compare well with a standard radiative transfer model. The light field beneath and adjacent to a long barge is modeled in two dimensions and displayed. Ramifications for underwater video imaging are discussed. The hybrid model is currently capable of providing estimates of the underwater light field needed to expedite inspection of ship hulls and port facilities.
Milosevic, Matija; McConville, Kristiina M Valter
2012-01-01
Operation of handheld power tools results in exposure to hand-arm vibrations, which over time lead to numerous health complications. The objective of this study was to evaluate protective equipment and working techniques for the reduction of vibration exposure. Vibration transmissions were recorded during different work techniques: with one- and two-handed grip, while wearing protective gloves (standard, air and anti-vibration gloves) and while holding a foam-covered tool handle. The effect was examined by analyzing the reduction of transmitted vibrations at the wrist. The vibration transmission was recorded with a portable device using a triaxial accelerometer. The results suggest large and significant reductions of vibration with appropriate safety equipment. Reductions of 85.6% were achieved when anti-vibration gloves were used. Our results indicated that transmitted vibrations were affected by several factors and could be measured and significantly reduced.
Ultrasound-guided piriformis muscle injection. A new approach.
Bevilacqua Alén, E; Diz Villar, A; Curt Nuño, F; Illodo Miramontes, G; Refojos Arencibia, F J; López González, J M
2016-12-01
Piriformis syndrome is an uncommon cause of buttock and leg pain. Treatment options include the injection of the piriformis muscle with local anesthetic and steroids. Various techniques for piriformis muscle injection have been described. Ultrasound allows direct visualization and real-time injection of the piriformis muscle. We describe 5 consecutive patients diagnosed with piriformis syndrome with no improvement after pharmacological treatment. Piriformis muscle injection with local anesthetics and steroids was performed using an ultrasound-guided approach based on a standard technique. All 5 patients improved their pain as measured by a numeric verbal scale. One patient had sciatic pain after injection that improved spontaneously in 10 days. We describe an ultrasound-guided piriformis muscle injection that has the advantages of being effective, simple, and safe. Copyright © 2016 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.
A k-Vector Approach to Sampling, Interpolation, and Approximation
NASA Astrophysics Data System (ADS)
Mortari, Daniele; Rogers, Jonathan
2013-12-01
The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
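The core of the k-vector technique is small enough to show directly. The sketch below assumes a generic sorted 1-D database; a straight line is fit through the sorted values and k[i] counts the elements below the line at index i, so a range query needs only O(1) arithmetic plus a tiny boundary trim.

```python
# Hedged sketch of the k-vector range-search idea. Construction sorts the
# data once; queries then avoid searching entirely, independent of database
# size, which is the property exploited in star-tracker catalogues.
import numpy as np

rng = np.random.default_rng(3)
y = np.sort(rng.uniform(0.0, 100.0, 100_000))
n = y.size
eps = 1e-9
m = (y[-1] - y[0] + 2 * eps) / (n - 1)        # line slope
q = y[0] - eps                                 # line intercept
k = np.searchsorted(y, m * np.arange(n) + q, side="right")  # k-vector (built once)

def kvector_range(ya, yb):
    """Indices of all sorted elements with ya <= y <= yb."""
    ia = int(np.clip(np.floor((ya - q) / m), 0, n - 1))
    ib = int(np.clip(np.ceil((yb - q) / m), 0, n - 1))
    lo, hi = k[ia], k[ib]
    block = y[lo:hi]                           # candidate slice, O(1) to locate
    keep = (block >= ya) & (block <= yb)       # trim the few boundary extras
    return lo + np.nonzero(keep)[0]

idx = kvector_range(25.0, 25.1)
print(len(idx), "elements found; exact check:",
      np.count_nonzero((y >= 25.0) & (y <= 25.1)))
```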
Le Floch, Jean-Michel; Fan, Y; Humbert, Georges; Shan, Qingxiao; Férachou, Denis; Bara-Maillet, Romain; Aubourg, Michel; Hartnett, John G; Madrangeas, Valerie; Cros, Dominique; Blondy, Jean-Marc; Krupka, Jerzy; Tobar, Michael E
2014-03-01
Dielectric resonators are key elements in many applications in micro to millimeter wave circuits, including ultra-narrow band filters and frequency-determining components for precision frequency synthesis. Distributed-layered and bulk low-loss crystalline and polycrystalline dielectric structures have become very important for building these devices. Proper design requires careful electromagnetic characterization of low-loss material properties. This includes exact simulation with precision numerical software and precise measurements of resonant modes. For example, we have developed the Whispering Gallery mode technique for microwave applications, which has now become the standard for characterizing low-loss structures. This paper presents some of the most common characterization techniques used in the micro to millimeter wave regime at room and cryogenic temperatures for designing high-Q dielectric loaded cavities.
NASA Astrophysics Data System (ADS)
Various papers on electromagnetic compatibility are presented. Some of the topics considered include: field-to-wire coupling 1 to 18 GHz, SHF/EHF field-to-wire coupling model, numerical method for the analysis of coupling to thin wire structures, spread-spectrum system with an adaptive array for combating interference, technique to select the optimum modulation indices for suppression of undesired signals for simultaneous range and data operations, development of a MHz RF leak detector technique for aircraft harness surveillance, and performance of standard aperture shielding techniques at microwave frequencies. Also discussed are: spectrum efficiency of spread-spectrum systems, control of power supply ripple produced sidebands in microwave transistor amplifiers, an intership SATCOM versus radar electromagnetic interference prediction model, considerations in the design of a broadband E-field sensing system, unique bonding methods for spacecraft, and review of EMC practice for launch vehicle systems.
Comparison of numerical and experimental results of the flow in the U9 Kaplan turbine model
NASA Astrophysics Data System (ADS)
Petit, O.; Mulu, B.; Nilsson, H.; Cervantes, M.
2010-08-01
The present work compares simulations made using the OpenFOAM CFD code with experimental measurements of the flow in the U9 Kaplan turbine model. Comparisons of the velocity profiles in the spiral casing and in the draft tube are presented. The U9 Kaplan turbine prototype located in Porjus and its model, located in Älvkarleby, Sweden, have curved inlet pipes that lead the flow to the spiral casing. Nowadays, this curved pipe and its effect on the flow in the turbine are not taken into account when numerical simulations are performed at the design stage. To study the impact of the inlet pipe curvature on the flow in the turbine, and to get a better overview of the flow of the whole system, measurements were made on the 1:3.1 model of the U9 turbine. Previously published measurements were taken at the inlet of the spiral casing and just before the guide vanes, using the laser Doppler anemometry (LDA) technique. In the draft tube, a number of velocity profiles were measured using the LDA technique. The present work extends the experimental investigation with a horizontal section at the inlet of the draft tube. The experimental results are used to specify the inlet boundary condition for the numerical simulations in the draft tube, and to validate the computational results in both the spiral casing and the draft tube. The numerical simulations were performed using the standard k-ε model and a block-structured hexahedral wall-function mesh.
The aggregated unfitted finite element method for elliptic problems
NASA Astrophysics Data System (ADS)
Badia, Santiago; Verdugo, Francesc; Martín, Alberto F.
2018-07-01
Unfitted finite element techniques are valuable tools in different applications where the generation of body-fitted meshes is difficult. However, these techniques are prone to severe ill-conditioning problems that obstruct the efficient use of iterative Krylov methods and, in consequence, hinder the practical usage of unfitted methods for realistic large-scale applications. In this work, we present a technique that addresses such conditioning problems by constructing enhanced finite element spaces based on a cell aggregation technique. The presented method, called the aggregated unfitted finite element method, is easy to implement, and can be used, in contrast to previous works, in Galerkin approximations of coercive problems with conforming Lagrangian finite element spaces. The mathematical analysis of the new method states that the condition number of the resulting linear system matrix scales as in standard finite elements for body-fitted meshes, without being affected by small cut cells, and that the method leads to the optimal finite element convergence order. These theoretical results are confirmed with 2D and 3D numerical experiments.
EXPERIMENTAL MODELLING OF AORTIC ANEURYSMS
Doyle, Barry J; Corbett, Timothy J; Cloonan, Aidan J; O’Donnell, Michael R; Walsh, Michael T; Vorp, David A; McGloughlin, Timothy M
2009-01-01
A range of silicone rubbers were created based on existing commercially available materials. These silicones were designed to be visually different from one another and to have distinct material properties, in particular ultimate tensile strengths and tear strengths. In total, eleven silicone rubbers were manufactured, with the materials designed to have a range of increasing tensile strengths from approximately 2-4 MPa, and increasing tear strengths from approximately 0.45-0.7 N/mm. The variations in silicones were detected using a standard colour analysis technique. Calibration curves were then created relating colour intensity to individual material properties. All eleven materials were characterised and a 1st-order Ogden strain energy function applied. Material coefficients were determined and examined for effectiveness. Six idealised abdominal aortic aneurysm models were also created using the two base materials of the study, with a further model created using a new mixing technique to create a rubber model with randomly assigned material properties. These models were then examined using videoextensometry and compared to numerical results. Colour analysis revealed a statistically significant linear relationship (p<0.0009) with both tensile strength and tear strength, allowing material strength to be determined using a non-destructive experimental technique. The effectiveness of this technique was assessed by comparing predicted material properties to experimentally measured values, with good agreement in the results. Videoextensometry and numerical modelling revealed minor percentage differences, with all results achieving significance (p<0.0009). This study has successfully designed and developed a range of silicone rubbers that have unique colour intensities and material strengths. Strengths can be readily determined using a non-destructive analysis technique with proven effectiveness. These silicones may further aid towards an improved understanding of the biomechanical behaviour of aneurysms using experimental techniques. PMID:19595622
NASA Technical Reports Server (NTRS)
Berger, B. S.; Duangudom, S.
1973-01-01
A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists in reducing a given initial value problem defined over some interval into a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.
Treating convection in sequential solvers
NASA Technical Reports Server (NTRS)
Shyy, Wei; Thakur, Siddharth
1992-01-01
The treatment of the convection terms in sequential solvers, a standard procedure found in virtually all pressure-based algorithms, is investigated for flow problems with sharp gradients and source terms. Both scalar model problems and the one-dimensional gas dynamics equations have been used to study the various issues involved. Different approaches, including the use of nonlinear filtering techniques and the adoption of TVD-type schemes, have been investigated. Special treatments of the source terms, such as pressure gradients and heat release, have also been devised, yielding insight and improved accuracy of the numerical procedure adopted.
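A generic TVD-type convection treatment of the kind investigated here can be shown in a few lines. The sketch below applies a minmod-limited second-order upwind update to linear advection of a square pulse; it illustrates the scheme family only, not the paper's specific solver.

```python
# Hedged sketch: minmod-limited (TVD-type) update for linear advection.
# The limiter keeps the solution free of new extrema, so a square pulse
# is advected without the oscillations a plain second-order scheme produces.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

nx, c, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / c
xc = np.arange(nx) * dx
u = np.where((xc > 0.3) & (xc < 0.5), 1.0, 0.0)   # square pulse, periodic domain

for _ in range(200):
    up = np.roll(u, -1)                 # u[i+1]
    um = np.roll(u, 1)                  # u[i-1]
    slope = minmod(u - um, up - u)      # limited slope in each cell
    # second-order reconstructed face value at i+1/2, upwind for c > 0
    uface = u + 0.5 * (1.0 - c * dt / dx) * slope
    flux = c * uface
    u = u - dt / dx * (flux - np.roll(flux, 1))

print("min/max after advection:", u.min(), u.max())  # stays within [0, 1] (TVD)
```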
Design of WLAN microstrip antenna for 5.17 - 5.835 GHz
NASA Astrophysics Data System (ADS)
Bugaj, Jarosław; Bugaj, Marek; Wnuk, Marian
2017-04-01
This paper presents the design of a miniaturized WLAN antenna realized in microstrip technology, operating at 5.17-5.835 GHz in the IEEE 802.11ac standard. This dual-layer antenna is designed on a Rogers Corporation RT/duroid 5870 substrate with a dielectric constant of 2.33 and a thickness of 3.175 mm. The antenna parameters such as return loss, VSWR, gain, and directivity are simulated and optimized using the commercial Computer Simulation Technology Microwave Studio (CST MWS). The paper presents the results of the numerical analysis.
Wall function treatment for bubbly boundary layers at low void fractions.
Soares, Daniel V; Bitencourt, Marcelo C; Loureiro, Juliana B R; Silva Freire, Atila P
2018-01-01
The present work investigates the role of different treatments of the lower boundary condition on the numerical prediction of bubbly flows. Two different wall function formulations are tested against experimental data obtained for bubbly boundary layers: (i) a new analytical solution derived through asymptotic techniques and (ii) the previous formulation of Troshko and Hassan (IJHMT, 44, 871-875, 2001a). A modified k-ε model is used to close the averaged Navier-Stokes equations, together with the hypothesis that turbulence can be modelled by a linear superposition of bubble- and shear-induced eddy viscosities. The work shows, in particular, how four corrections must be implemented in the standard single-phase k-ε model to account for the effects of bubbles. The numerical implementation of the near-wall functions is made through a finite element code.
Finite Volume Numerical Methods for Aeroheating Rate Calculations from Infrared Thermographic Data
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Berry, Scott A.; Horvath, Thomas J.; Nowak, Robert J.
2006-01-01
The use of multi-dimensional finite volume heat conduction techniques for calculating aeroheating rates from measured global surface temperatures on hypersonic wind tunnel models was investigated. Both direct and inverse finite volume techniques were investigated and compared with the standard one-dimensional semi-infinite technique. Global transient surface temperatures were measured using an infrared thermographic technique on a 0.333-scale model of the Hyper-X forebody in the NASA Langley Research Center 20-Inch Mach 6 Air tunnel. In these tests the effectiveness of vortices generated via gas injection for initiating hypersonic transition on the Hyper-X forebody was investigated. An array of streamwise-orientated heating striations was generated and visualized downstream of the gas injection sites. In regions without significant spatial temperature gradients, one-dimensional techniques provided accurate aeroheating rates. In regions with sharp temperature gradients caused by striation patterns multi-dimensional heat transfer techniques were necessary to obtain more accurate heating rates. The use of the one-dimensional technique resulted in differences of 20% in the calculated heating rates compared to 2-D analysis because it did not account for lateral heat conduction in the model.
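The one-dimensional semi-infinite technique used as the baseline here is commonly implemented in the Cook-Felderman discrete form, which converts a surface-temperature history into a heat-flux history. The sketch below assumes made-up material properties and an illustrative temperature trace.

```python
# Hedged sketch: the one-dimensional semi-infinite aeroheating technique in
# the common Cook-Felderman discrete form,
#   q(t_n) = 2*sqrt(rho*c*k/pi) * sum_i (T_i - T_{i-1}) /
#            (sqrt(t_n - t_i) + sqrt(t_n - t_{i-1})).
import numpy as np

rho, c, k = 1200.0, 800.0, 0.8        # density, specific heat, conductivity (assumed)
coef = 2.0 * np.sqrt(rho * c * k / np.pi)

t = np.linspace(1e-3, 2.0, 400)       # time samples (s), avoiding t = 0
T = 300.0 + 40.0 * np.sqrt(t)         # hypothetical surface-temperature rise (K)

q = np.zeros_like(t)
for n in range(1, len(t)):
    dT = np.diff(T[: n + 1])
    denom = np.sqrt(t[n] - t[:n]) + np.sqrt(t[n] - t[1 : n + 1])
    q[n] = coef * np.sum(dT / denom)

print(f"heating rate at t = {t[-1]:.2f} s: {q[-1]:.1f} W/m^2")
```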
Side-branch resonators modelling with Green's function methods
NASA Astrophysics Data System (ADS)
Perrey-Debain, E.; Maréchal, R.; Ville, J. M.
2014-09-01
This paper deals with strategies for computing efficiently the propagation of sound waves in ducts containing passive components. In many cases of practical interest, these components are acoustic cavities which are connected to the duct. Though standard Finite Element software could be used for the numerical prediction of sound transmission through such a system, the method is known to be extremely demanding, both in terms of data preparation and computation, especially in the mid-frequency range. To alleviate this, a numerical technique that exploits the benefit of the FEM and the BEM approach has been devised. First, a set of eigenmodes is computed in the cavity to produce a numerical impedance matrix connecting the pressure and the acoustic velocity on the duct wall interface. Then an integral representation for the acoustic pressure in the main duct is used. By choosing an appropriate Green's function for the duct, the integration procedure is limited to the duct-cavity interface only. This allows an accurate computation of the scattering matrix of such an acoustic system with a numerical complexity that grows very mildly with the frequency. Typical applications involving Helmholtz and Herschel-Quincke resonators are presented.
NASA Astrophysics Data System (ADS)
Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry
2016-03-01
In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. However, these techniques often complicate interventions by requiring additional steps to manually define and initialize the virtual models. Furthermore, overlaying virtual elements onto real-time image data can obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work in augmented virtuality techniques, and considerable improvement over standard-of-care ultrasound guidance.
An information theory approach to the density of the earth
NASA Technical Reports Server (NTRS)
Graber, M. A.
1977-01-01
Information theory can be used to develop a technique that takes experimentally determined numbers and produces a uniquely specified best density model satisfying those numbers. A model was generated using five numerical parameters: the mass of the earth, its moment of inertia, and three zero-node torsional normal modes (L = 2, 8, 26). In order to determine the stability of the solution, six additional densities were generated, in each of which the period of one of the three normal modes was increased or decreased by one standard deviation. The superposition of the seven models is shown. It indicates that current knowledge of the torsional modes is sufficient to specify the density in the upper mantle, but that the lower mantle and core will require smaller standard deviations before they can be accurately specified.
Discrete size optimization of steel trusses using a refined big bang-big crunch algorithm
NASA Astrophysics Data System (ADS)
Hasançebi, O.; Kazemzadeh Azad, S.
2014-01-01
This article presents a methodology for the design optimization of steel truss structures based on a refined big bang-big crunch (BB-BC) algorithm. It is shown that a standard formulation of the BB-BC algorithm occasionally falls short of producing acceptable solutions to problems from discrete size optimum design of steel trusses. A reformulation of the algorithm is proposed and implemented for design optimization of various discrete truss structures according to American Institute of Steel Construction Allowable Stress Design (AISC-ASD) specifications. Furthermore, the performance of the proposed BB-BC algorithm is compared to its standard version as well as other well-known metaheuristic techniques. The numerical results confirm the efficiency of the proposed algorithm in practical design optimization of truss structures.
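The standard BB-BC loop the article refines is easy to state: candidates are "crunched" to a fitness-weighted center of mass, then "re-banged" around it with a shrinking spread. Below is a minimal sketch on a continuous toy objective; the truss-specific handling (discrete section lists, AISC-ASD constraints) is not reproduced, and all parameters are illustrative.

```python
# Hedged sketch of the big bang-big crunch (BB-BC) metaheuristic on a toy
# quadratic objective. Crunch: inverse-fitness-weighted center of mass.
# Bang: Gaussian scatter around the center with spread shrinking as 1/k.
import numpy as np

rng = np.random.default_rng(4)

def fitness(x):                       # minimize a simple quadratic bowl
    return np.sum((x - 3.0) ** 2, axis=1)

dim, pop, iters = 5, 40, 200
lo, hi = -10.0, 10.0
x = rng.uniform(lo, hi, (pop, dim))

for k in range(1, iters + 1):
    f = fitness(x) + 1e-12
    center = np.sum(x / f[:, None], axis=0) / np.sum(1.0 / f)  # crunch phase
    spread = (hi - lo) / k                                     # shrinks over time
    x = center + rng.normal(0.0, 1.0, (pop, dim)) * spread     # bang phase
    x = np.clip(x, lo, hi)

best = x[np.argmin(fitness(x))]
print("best point:", np.round(best, 3))   # should approach [3, 3, 3, 3, 3]
```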
Optically sectioned in vivo imaging with speckle illumination HiLo microscopy
Lim, Daryl; Ford, Tim N.; Chu, Kengyeh K.; Mertz, Jerome
2011-01-01
We present a simple wide-field imaging technique, called HiLo microscopy, that is capable of producing optically sectioned images in real time, comparable in quality to confocal laser scanning microscopy. The technique is based on the fusion of two raw images, one acquired with speckle illumination and another with standard uniform illumination. The fusion can be numerically adjusted, using a single parameter, to produce optically sectioned images of varying thicknesses with the same raw data. Direct comparison between our HiLo microscope and a commercial confocal laser scanning microscope is made on the basis of sectioning strength and imaging performance. Specifically, we show that HiLo and confocal 3-D imaging of a GFP-labeled mouse brain hippocampus are comparable in quality. Moreover, HiLo microscopy is capable of faster, near video rate imaging over larger fields of view than attainable with standard confocal microscopes. The goal of this paper is to advertise the simplicity, robustness, and versatility of HiLo microscopy, which we highlight with in vivo imaging of common model organisms including planaria, C. elegans, and zebrafish. PMID:21280920
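The two-image fusion at the heart of HiLo can be sketched briefly. The code below is a much-simplified version, assuming the low-frequency sectioned content is estimated by weighting the uniform image with the local speckle contrast of the difference image; real HiLo processing involves careful band selection and calibration of the single parameter (here `eta`), and the toy images are synthetic.

```python
# Hedged, much-simplified sketch of the HiLo fusion step: sectioned low
# frequencies come from contrast-weighting the uniform image; in-focus high
# frequencies come from a simple high-pass of the uniform image.
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def local_std(img, size=7):
    mean = uniform_filter(img, size)
    sq = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(sq - mean**2, 0.0))

def hilo(uniform_img, speckle_img, sigma=4.0, eta=1.0):
    # local speckle contrast of the difference image flags in-focus regions
    diff = speckle_img - uniform_img
    c = local_std(diff) / (uniform_filter(uniform_img, 7) + 1e-9)
    lo = gaussian_filter(c * uniform_img, sigma)            # sectioned lows
    hi = uniform_img - gaussian_filter(uniform_img, sigma)  # in-focus highs
    return eta * lo + hi

# toy images: smooth background plus speckle modulation
rng = np.random.default_rng(5)
u = rng.random((128, 128)) + 5.0                     # uniform-illumination image
s = u * (1.0 + 0.3 * rng.standard_normal(u.shape))   # speckle-illumination image
print("fused image shape:", hilo(u, s).shape)
```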
Baryon Acoustic Oscillations reconstruction with pixels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obuljen, Andrej; Villaescusa-Navarro, Francisco; Castorina, Emanuele
2017-09-01
Gravitational non-linear evolution induces a shift in the position of the baryon acoustic oscillations (BAO) peak together with a damping and broadening of its shape that bias and degrade the accuracy with which the position of the peak can be determined. BAO reconstruction is a technique developed to undo part of the effect of non-linearities. We present and analyse a reconstruction method that consists of displacing pixels instead of galaxies and whose implementation is easier than the standard reconstruction method. We show that this method is equivalent to the standard reconstruction technique in the limit where the number of pixels becomes very large. This method is particularly useful in surveys where individual galaxies are not resolved, as in 21cm intensity mapping observations. We validate this method by reconstructing mock pixelated maps, which we build from the distribution of matter and halos in real- and redshift-space, from a large set of numerical simulations. We find that this method is able to decrease the uncertainty in the BAO peak position by 30-50% over the typical angular resolution scales of 21 cm intensity mapping experiments.
Improved analysis techniques for cylindrical and spherical double probes.
Beal, Brian; Johnson, Lee; Brown, Daniel; Blakely, Joseph; Bromaghim, Daron
2012-07-01
A versatile double Langmuir probe technique has been developed by incorporating analytical fits to Laframboise's numerical results for ion current collection by biased electrodes of various sizes relative to the local electron Debye length. Application of these fits to the double probe circuit has produced a set of coupled equations that express the potential of each electrode relative to the plasma potential as well as the resulting probe current as a function of applied probe voltage. These equations can be readily solved via standard numerical techniques in order to determine electron temperature and plasma density from probe current and voltage measurements. Because this method self-consistently accounts for the effects of sheath expansion, it can be readily applied to plasmas with a wide range of densities and low ion temperature (Ti/Te ≪ 1) without requiring probe dimensions to be asymptotically large or small with respect to the electron Debye length. The presented approach has been successfully applied to experimental measurements obtained in the plume of a low-power Hall thruster, which produced a quasineutral, flowing xenon plasma during operation at 200 W on xenon. The measured plasma densities and electron temperatures were in the range of 1 × 10^12 to 1 × 10^17 m^-3 and 0.5-5.0 eV, respectively. The estimated measurement uncertainty is +6%/-34% in density and ±30% in electron temperature.
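A reduced version of the fitting step is easy to illustrate. The sketch below assumes the idealized thin-sheath double-probe relation I(V) = I_sat·tanh(V/(2Te)) with Te in volts; the paper's method additionally folds in Laframboise's sheath-expansion results, which this simple fit ignores, and the data are synthetic.

```python
# Hedged sketch: extracting electron temperature from double-probe data with
# the idealized thin-sheath relation I(V) = I_sat * tanh(V / (2*Te)).
import numpy as np
from scipy.optimize import curve_fit

def double_probe(V, I_sat, Te):
    return I_sat * np.tanh(V / (2.0 * Te))

rng = np.random.default_rng(6)
V = np.linspace(-30.0, 30.0, 121)                  # applied probe voltage (V)
I_true = double_probe(V, 2.0e-3, 3.0)              # I_sat = 2 mA, Te = 3 eV
I_meas = I_true + rng.normal(0.0, 5e-5, V.size)    # add measurement noise

(I_sat, Te), _ = curve_fit(double_probe, V, I_meas, p0=[1e-3, 1.0])
print(f"fitted I_sat = {I_sat * 1e3:.2f} mA, Te = {Te:.2f} eV")
```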
Characterization of Hall effect thruster propellant distributors with flame visualization
NASA Astrophysics Data System (ADS)
Langendorf, S.; Walker, M. L. R.
2013-01-01
A novel method for the characterization and qualification of Hall effect thruster propellant distributors is presented. A quantitative measurement of the azimuthal number density uniformity, a metric which impacts propellant utilization, is obtained from photographs of a premixed flame anchored on the exit plane of the propellant distributor. The technique is demonstrated for three propellant distributors using a propane-air mixture at reservoir pressure of 40 psi (gauge) (377 kPa) exhausting to atmosphere, with volumetric flow rates ranging from 15-145 cfh (7.2-68 l/min) with equivalence ratios from 1.2 to 2.1. The visualization is compared with in-vacuum pressure measurements 1 mm downstream of the distributor exit plane (chamber pressure held below 2.7 × 10^-5 Torr-Xe at all flow rates). Both methods indicate a non-uniformity in line with the propellant inlet, supporting the validity of the technique of flow visualization with flame luminosity for propellant distributor characterization. The technique is applied to a propellant distributor with a manufacturing defect in a known location and is able to identify the defect and characterize its impact. The technique is also applied to a distributor with numerous small orifices at the exit plane and is able to resolve the resulting non-uniformity. Luminosity data are collected with a spatial resolution of 48.2-76.1 μm (pixel width). The azimuthal uniformity is characterized in the form of standard deviation of azimuthal luminosities, normalized by the mean azimuthal luminosity. The distributors investigated achieve standard deviations of 0.346 ± 0.0212, 0.108 ± 0.0178, and 0.708 ± 0.0230 mean-normalized luminosity units respectively, where a value of 0 corresponds to perfect uniformity and a value of 1 represents a standard deviation equivalent to the mean.
Hierro, Núria; Esteve-Zarzoso, Braulio; González, Ángel; Mas, Albert; Guillamón, Jose M.
2006-01-01
Real-time PCR, or quantitative PCR (QPCR), has been developed to rapidly detect and quantify the total number of yeasts in wine without culturing. Universal yeast primers were designed from the variable D1/D2 domains of the 26S rRNA gene. These primers showed good specificity with all the wine yeasts tested, and they did not amplify the most representative wine species of acetic acid bacteria and lactic acid bacteria. Numerous standard curves were constructed with different strains and species grown in yeast extract-peptone-dextrose medium or incubated in wine. The small standard errors with these replicas proved that the assay is reproducible and highly robust. This technique was validated with artificially contaminated and natural wine samples. We also performed a reverse transcription-QPCR (RT-QPCR) assay from rRNA for total viable yeast quantification. This technique had a low detection limit and was more accurate than QPCR because the dead cells were not quantified. As far as we know, this is the first time that RT-QPCR has been performed to quantify viable yeasts from rRNA. RT-QPCR is a rapid and accurate technique for enumerating yeasts during industrial wine fermentation and controlling the risk of wine spoilage. PMID:17088381
Four decades of implicit Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wollaber, Allan B.
2016-02-23
In 1971, Fleck and Cummings derived a system of equations to enable robust Monte Carlo simulations of time-dependent, thermal radiative transfer problems. Denoted the "Implicit Monte Carlo" (IMC) equations, their solution remains the de facto standard of high-fidelity radiative transfer simulations. Over the course of 44 years, their numerical properties have become better understood, and accuracy enhancements, novel acceleration methods, and variance reduction techniques have been suggested. In this review, we rederive the IMC equations—explicitly highlighting assumptions as they are made—and outfit the equations with a Monte Carlo interpretation. We put the IMC equations in context with other approximate forms of the radiative transfer equations and present a new demonstration of their equivalence to another well-used linearization solved with deterministic transport methods for frequency-independent problems. We discuss physical and numerical limitations of the IMC equations for asymptotically small time steps, stability characteristics and the potential of maximum principle violations for large time steps, and solution behaviors in an asymptotically thick diffusive limit. We provide a new stability analysis for opacities with general monomial dependence on temperature. Finally, we consider spatial accuracy limitations of the IMC equations and discuss acceleration and variance reduction techniques.
A review of numerical techniques approaching microstructures of crystalline rocks
NASA Astrophysics Data System (ADS)
Zhang, Yahui; Wong, Louis Ngai Yuen
2018-06-01
The macro-mechanical behavior of crystalline rocks, including strength, deformability, and failure pattern, is dominantly influenced by their grain-scale structures. Numerical techniques are commonly used to aid understanding of the complicated mechanisms from a microscopic perspective. Each numerical method has its respective strengths and limitations. This review paper elucidates how numerical techniques take geometrical aspects of the grain into consideration. Four categories of numerical methods are examined: particle-based methods, block-based methods, grain-based methods, and node-based methods. Focusing on grain-scale characteristics, specific relevant issues, including the increasing complexity of micro-structure, deformation and breakage of model elements, and the fracturing and fragmentation process, are described in more detail. The intrinsic capabilities and limitations of different numerical approaches in terms of accounting for the micro-mechanics of crystalline rocks and their phenomenological mechanical behavior are explicitly presented.
NASA Astrophysics Data System (ADS)
Hirt, Christian; Reußner, Elisabeth; Rexer, Moritz; Kuhn, Michael
2016-09-01
Over the past years, spectral techniques have become a standard to model Earth's global gravity field to 10 km scales, with the EGM2008 geopotential model being a prominent example. For some geophysical applications of EGM2008, particularly Bouguer gravity computation with spectral techniques, a topographic potential model of adequate resolution is required. However, current topographic potential models have not yet been successfully validated to degree 2160, and notable discrepancies between spectral modeling and Newtonian (numerical) integration well beyond the 10 mGal level have been reported. Here we accurately compute and validate gravity implied by a degree 2160 model of Earth's topographic masses. Our experiments are based on two key strategies, both of which require advanced computational resources. First, we construct a spectrally complete model of the gravity field which is generated by the degree 2160 Earth topography model. This involves expansion of the topographic potential to the 15th integer power of the topography and modeling of short-scale gravity signals to an ultrahigh degree of 21,600, translating into unprecedented fine scales of 1 km. Second, we apply Newtonian integration in the space domain with high spatial resolution to reduce discretization errors. Our numerical study demonstrates excellent agreement (8 μGal RMS) between gravity from both forward modeling techniques and provides insight into the convergence process associated with spectral modeling of gravity signals at very short scales (a few km). As a key conclusion, our work successfully validates the spectral domain forward modeling technique for degree 2160 topography and increases the confidence in new high-resolution global Bouguer gravity maps.
Zhang, Xiaoliang; Martin, Alastair; Jordan, Caroline; Lillaney, Prasheel; Losey, Aaron; Pang, Yong; Hu, Jeffrey; Wilson, Mark; Cooke, Daniel; Hetts, Steven W
2017-04-01
It is technically challenging to design compact yet sensitive miniature catheter radio frequency (RF) coils for endovascular interventional MR imaging. In this work, a new design method for catheter RF coils is proposed based on the coaxial transmission line resonator (TLR) technique. Due to its distributed circuit, the TLR catheter coil does not need any lumped capacitors to support its resonance, which simplifies the practical design and construction and provides a straightforward technique for designing miniature catheter-mounted imaging coils that are appropriate for interventional neurovascular procedures. The outer conductor of the TLR serves as an RF shield, which prevents electromagnetic energy loss and improves coil Q factors. It also minimizes interaction with surrounding tissues and signal losses along the catheter coil. To investigate the technique, a prototype catheter coil was built using the proposed coaxial TLR technique and evaluated with standard RF testing and measurement methods and MR imaging experiments. Numerical simulation was carried out to assess the RF electromagnetic field behavior of the proposed TLR catheter coil and the conventional lumped-element catheter coil. The proposed TLR catheter coil was successfully tuned to 64 MHz for proton imaging at 1.5 T. B1 fields were numerically calculated, showing improved magnetic field intensity of the TLR catheter coil over the conventional lumped-element catheter coil. MR images were acquired from a dedicated vascular phantom using the TLR catheter coil and also the system body coil. The TLR catheter coil is able to provide a significant signal-to-noise ratio (SNR) increase (a factor of 200 to 300) over its imaging volume relative to the body coil. Catheter imaging RF coil design using the proposed coaxial TLR technique is feasible and advantageous in endovascular interventional MR imaging applications.
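The basic sizing arithmetic for a transmission-line resonator is a one-liner. The sketch below computes the guided wavelength at the 1.5 T proton frequency; whether a given catheter TLR is a quarter- or half-wavelength design, and its actual coax dielectric, are assumptions here (PTFE, eps_r ≈ 2.1, is used for illustration).

```python
# Hedged sketch: electrical length of a coaxial transmission-line resonator
# at 64 MHz (1.5 T proton Larmor frequency), under an assumed PTFE dielectric.
c0 = 299_792_458.0          # speed of light, m/s
f = 64e6                    # Larmor frequency at 1.5 T, Hz
eps_r = 2.1                 # PTFE coax dielectric (assumed)

wavelength = c0 / (f * eps_r ** 0.5)
print(f"guided wavelength : {wavelength:.2f} m")
print(f"half-wave TLR     : {wavelength / 2:.2f} m")
print(f"quarter-wave TLR  : {wavelength / 4:.2f} m")
```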
NASA Astrophysics Data System (ADS)
Schum, Paul A.
If international report cards were issued today to all industrialized nations worldwide, the United States would receive a "C" at best in mathematics and science. This is not simply a temporary circumstance with a single cause and effect that can easily be addressed. The disappointing truth is that this downward trend in mathematics and science mastery by American students has been occurring steadily for at least the last eight years of international testing, and that there are numerous and varied bases for this reality. In response to this crisis, the National Science Teachers Association (NSTA), the American Association for the Advancement of Science (AAAS), and the National Research Council (NRC) have each proposed relatively consistent, but individual, sets of professional science teaching standards designed to improve science instruction in American schools. It is of extreme value to the scientific and educational community to know whether any or all of these standards lead to improved student performance. This study investigates the correlation between six specific teacher behaviors that are common to these national standards and asks which behaviors, if any, result in improved student performance, as demonstrated on the Science Reasoning sub-test of the ACT Assessment. These standards focus classroom science teachers on professional development leading toward student mastery of scientific interpretation, concept development, and constructive relationship building. Because individual teachers interpret roles, expectations, and guiding philosophies through different lenses, effective professional practice may reflect consistency in rationale and methodology yet will be best evidenced by an examination of specific teaching techniques. In this study, these teaching techniques are evidenced by self-reported teacher awareness of, and adherence to, these consensual standards. Assessment instruments vary widely, and the results of student performance often reflect the congruency of curricular methodology and explicit testing domains. Although the recent educational impetus for change is most notably governed numerically by test scores, the true goal of scientific literacy is the application of logic. Therefore, the ultimate thematic analysis in this study attempts to relate both educational theory and practice to positive change at the classroom level. The data gathered in this study are insufficient to establish a significant correlation between adherence to national science teaching standards and student performance on the ACT in Jefferson County, Kentucky, for either public or Catholic school students. However, with respect to mean student scores on the Science Reasoning sub-test of the ACT, there is statistically significant evidence for superior performance of Catholic school students compared with that of public school students in this region.
A Novel Polygonal Finite Element Method: Virtual Node Method
NASA Astrophysics Data System (ADS)
Tang, X. H.; Zheng, C.; Zhang, J. H.
2010-05-01
Polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a great number of integration points must be used to obtain sufficiently exact results, which increases computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be used naturally to obtain exact numerical integration; (2) the shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, VNM achieves better accuracy than triangular 3-node elements in the accuracy test.
Eliminating cubic terms in the pseudopotential lattice Boltzmann model for multiphase flow
NASA Astrophysics Data System (ADS)
Huang, Rongzong; Wu, Huiying; Adams, Nikolaus A.
2018-05-01
It is well recognized that there exist additional cubic terms of velocity in the lattice Boltzmann (LB) model based on the standard lattice. In this work, elimination of these cubic terms in the pseudopotential LB model for multiphase flow is investigated, where the force term and density gradient are considered. By retaining high-order (≥3) Hermite terms in the equilibrium distribution function and the discrete force term, as well as introducing correction terms in the LB equation, the additional cubic terms of velocity are entirely eliminated. With this technique, the computational simplicity of the pseudopotential LB model is well maintained. Numerical tests, including stationary and moving flat and circular interface problems, are carried out to show the effects of such cubic terms on the simulation of multiphase flow. It is found that the elimination of additional cubic terms is beneficial to reduce the numerical error, especially when the velocity is relatively large. Numerical results also suggest that these cubic terms mainly take effect in the interfacial region and that the density-gradient-related cubic terms are more important than the other cubic terms for multiphase flow.
Numerical study of coupled turbulent flow and solidification for steel slab casters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aboutalebi, M.R.; Hasan, M.; Guthrie, R.I.L.
1995-09-01
A two-dimensional numerical modeling study was undertaken to account for coupled turbulent flow and heat transfer with solidification in the mold and submold regions of a steel slab caster. Liquid steel is introduced into a water-cooled mold through a bifurcated submerged entry nozzle. Turbulence phenomena in the melt pool of the caster were accounted for using a modified version of the low-Reynolds-number κ-ε turbulence model of Launder and Sharma. The mushy-region solidification, in the presence of turbulence, was taken into account by modifying the standard enthalpy-porosity technique, which is presently popular for modeling solidification problems. Thermocapillary and buoyancy effects have been considered in this model to evaluate the influences of the liquid surface tension gradient at the meniscus surface, and of natural convection, on flow patterns in the liquid pool. Parametric studies were carried out to evaluate the effects of typical variables, such as inlet superheat and casting speed, on the fluid flow and heat transfer results. The numerical predictions were compared with available experimental data.
Influence of the conservative rotor loads on the near wake of a wind turbine
NASA Astrophysics Data System (ADS)
Herráez, I.; Micallef, D.; van Kuik, G. A. M.
2017-05-01
The presence of conservative forces on rotor blades is neglected in the blade element theory and all the numerical methods derived from it (e.g., the blade element momentum theory and the actuator line technique). This might seem a reasonable simplification of the real flow over rotor blades, since conservative loads, by definition, do not contribute to the power conversion. However, conservative loads originating from the chordwise bound vorticity might affect the tip vortex trajectory, as we discussed in a previous work. In that work we also hypothesized that this effect, in turn, could influence the wake induction and correspondingly the rotor performance. In the current work we extend a standard actuator line model to account for the conservative loads at the blade tip. This allows us to isolate the influence of conservative forces from other effects. The comparison of numerical results with and without conservative loads enables us to confirm their relevance for the near wake and the rotor performance qualitatively. However, an accurate quantitative assessment of the effect still remains out of reach due to the inherent uncertainty of the numerical model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, Alejandro; Ibanescu, Mihai; Joannopoulos, J. D.
2007-09-15
We describe a numerical method to compute Casimir forces in arbitrary geometries, for arbitrary dielectric and metallic materials, with arbitrary accuracy (given sufficient computational resources). Our approach, based on well-established integration of the mean stress tensor evaluated via the fluctuation-dissipation theorem, is designed to directly exploit fast methods developed for classical computational electromagnetism, since it only involves repeated evaluation of the Green's function for imaginary frequencies (equivalently, real frequencies in imaginary time). We develop the approach by systematically examining various formulations of Casimir forces from the previous decades and evaluating them according to their suitability for numerical computation. We illustrate our approach with a simple finite-difference frequency-domain implementation, test it for known geometries such as a cylinder and a plate, and apply it to new geometries. In particular, we show that a pistonlike geometry of two squares sliding between metal walls, in both two and three dimensions with both perfect and realistic metallic materials, exhibits a surprising nonmonotonic "lateral" force from the walls.
Numerical model updating technique for structures using firefly algorithm
NASA Astrophysics Data System (ADS)
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique used for updating existing experimental models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical model to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In the updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results. The updating variables can be material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close match between the experimental and numerical models.
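As a rough illustration of the optimization step, the sketch below implements the basic firefly update rule (attractiveness decaying with distance, plus a random walk) on a hypothetical model-updating misfit; the cantilever formula, parameter bounds, and all numeric values are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def firefly_minimize(objective, bounds, n_fireflies=25, n_iter=200,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter ones."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    x = lo + rng.random((n_fireflies, dim)) * (hi - lo)
    f = np.array([objective(xi) for xi in x])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:  # firefly j is "brighter" (lower misfit)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    f[i] = objective(x[i])
    best = np.argmin(f)
    return x[best], f[best]

# Hypothetical model-updating misfit: match a cantilever's measured tip deflection
# delta = P L^3 / (3 E I) by updating E and I (all values here are invented).
P, L, delta_meas = 1.0e3, 2.0, 0.012
objective = lambda p: (P * L**3 / (3.0 * p[0] * p[1]) - delta_meas) ** 2
bounds = np.array([[1e10, 3e11], [1e-6, 1e-4]])  # E (Pa), I (m^4)
print(firefly_minimize(objective, bounds))
```

Each dimmer firefly moves toward every brighter one, which gives the algorithm its characteristic mix of exploitation and random exploration.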
Optimal control design of turbo spin‐echo sequences with applications to parallel‐transmit systems
Hoogduin, Hans; Hajnal, Joseph V.; van den Berg, Cornelis A. T.; Luijten, Peter R.; Malik, Shaihan J.
2016-01-01
Purpose: The design of turbo spin-echo sequences is modeled as a dynamic optimization problem which includes the case of inhomogeneous transmit radiofrequency fields. This problem is efficiently solved by optimal control techniques, making it possible to design patient-specific sequences online. Theory and Methods: The extended phase graph formalism is employed to model the signal evolution. The design problem is cast as an optimal control problem and an efficient numerical procedure for its solution is given. The numerical and experimental tests address standard multiecho sequences and pTx configurations. Results: Standard, analytically derived flip angle trains are recovered by the numerical optimal control approach. New sequences are designed where constraints on radiofrequency total and peak power are included. In the case of parallel-transmit application, the method is able to calculate the optimal echo train for two-dimensional and three-dimensional turbo spin-echo sequences on the order of 10 s with a single central processing unit (CPU) implementation. The image contrast is maintained through the whole field of view despite inhomogeneities of the radiofrequency fields. Conclusion: The optimal control design sheds new light on the sequence design process and makes it possible to design sequences in an online, patient-specific fashion. Magn Reson Med 77:361–373, 2017. © 2016 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. PMID:26800383
NASA Astrophysics Data System (ADS)
Sokolov, A. K.
2017-09-01
This article presents a technique for assessing the maximum allowable (standard) discharge of waste waters containing several harmful substances into a water reservoir. The technique makes it possible to take into account the summation of their effects provided that the limiting harmful indices are the same. The expressions for the determination of the discharge limit of waste waters have been derived from the conditions of admissibility of the effect of several harmful substances on the waters of a reservoir. Mathematical conditions of admissibility of the effect of wastewaters on a reservoir are given for the characteristic combinations of limiting harmful indices and hazard classes of several substances. The conditions of admissibility are presented in the form of logical products of sums of relative concentrations, each of which must not exceed the value of 1. It is shown that the calculation of the process of wastewater dilution in a flowing water reservoir is possible only on the basis of a numerical method to assess the wastewater discharge limit. An example of the numerical calculation of the standard limit of industrial enterprise wastewater discharges that contain polysulfide oil, flocculant VPK-101, and fungicide captan is given to test this method. In addition to these three harmful substances, the water reservoir also contained a fourth substance, the Zellek-Super herbicide, above the waste discharge point. The summation of the harmful effect was taken into account for VPK-101, captan, and Zellek-Super. The reliability of the technique was tested by calculating the concentrations of the four substances at the control point of the flowing reservoir during the estimated maximum allowable wastewater discharge. It is shown that for the example under consideration the maximum allowable discharge limit differs by almost a factor of two when the unidirectional effect of the harmful substances is taken into account, which provides a higher level of environmental safety.
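A minimal sketch of the admissibility test described (sums of relative concentrations grouped by limiting harmful index, combined as a logical product) might look as follows; the substance groupings and all concentration values are invented placeholders, not the paper's data.

```python
# Substances sharing a limiting harmful index (LHI) are summed; each group's
# sum of relative concentrations C_i/MPC_i must not exceed 1, and the overall
# condition is the logical product (AND) over all groups.
groups = {
    "toxicological": [("polysulfide oil", 0.04, 0.10), ("VPK-101", 0.6, 2.0)],
    "fishery":       [("captan", 0.008, 0.02), ("Zellek-Super", 0.001, 0.005)],
}  # (substance, concentration C in mg/L, limit MPC in mg/L) -- illustrative only

def admissible(groups):
    for lhi, members in groups.items():
        s = sum(c / mpc for _, c, mpc in members)
        print(f"{lhi}: sum of C/MPC = {s:.3f}")
        if s > 1.0:
            return False
    return True  # every group's condition must hold simultaneously

print("discharge admissible:", admissible(groups))
```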
NASA Astrophysics Data System (ADS)
Vadivel, P.; Sakthivel, R.; Mathiyalagan, K.; Thangaraj, P.
2013-02-01
This paper addresses the passivity analysis problem for a class of fuzzy bidirectional associative memory (BAM) neural networks with Markovian jumping parameters and time-varying delays. A set of sufficient conditions for the passivity of the considered fuzzy BAM neural network model is derived in terms of linear matrix inequalities by using the delay-fractioning technique together with the Lyapunov function approach. In addition, uncertainties are inevitable in neural networks because of the existence of modeling errors and external disturbances. Therefore, this result is extended to robust passivity criteria for uncertain fuzzy BAM neural networks with time-varying delays and uncertainties. These criteria are expressed in the form of linear matrix inequalities (LMIs), which can be efficiently solved via standard numerical software. Two numerical examples are provided to demonstrate the effectiveness of the obtained results.
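For readers unfamiliar with the workflow, LMI criteria of this kind are routinely checked with semidefinite-programming software. The sketch below solves a much simpler Lyapunov-type LMI with CVXPY purely to show the mechanics; it is not the paper's passivity condition, and the matrix A is an invented stable example.

```python
import cvxpy as cp
import numpy as np

# Feasibility of A^T P + P A << 0 with P >> 0: the same "find a matrix
# satisfying matrix inequalities" pattern as the paper's LMI criteria.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve()
print(prob.status)
print("P =\n", P.value)
```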
Efficient Schmidt number scaling in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Krafnick, Ryan C.; García, Angel E.
2015-12-01
Dissipative particle dynamics is a widely used mesoscale technique for the simulation of hydrodynamics (as well as immersed particles) utilizing coarse-grained molecular dynamics. While the method is capable of describing any fluid, the typical choice of the friction coefficient γ and dissipative force cutoff rc yields an unacceptably low Schmidt number Sc for the simulation of liquid water at standard temperature and pressure. There are a variety of ways to raise Sc, such as increasing γ and rc, but the relative cost of modifying each parameter (and the concomitant impact on numerical accuracy) has heretofore remained undetermined. We perform a detailed search over the parameter space, identifying the optimal strategy for the efficient and accuracy-preserving scaling of Sc, using both numerical simulations and theoretical predictions. The composite results recommend a parameter choice that leads to a speed improvement of a factor of three versus previously utilized strategies.
Traceable Coulomb blockade thermometry
NASA Astrophysics Data System (ADS)
Hahtela, O.; Mykkänen, E.; Kemppinen, A.; Meschke, M.; Prunnila, M.; Gunnarsson, D.; Roschier, L.; Penttilä, J.; Pekola, J.
2017-02-01
We present a measurement and analysis scheme for determining traceable thermodynamic temperature at cryogenic temperatures using Coulomb blockade thermometry. The uncertainty of the electrical measurement is improved by utilizing two sampling digital voltmeters instead of the traditional lock-in technique. The remaining uncertainty is dominated by that of the numerical analysis of the measurement data. Two analysis methods are demonstrated: numerical fitting of the full conductance curve and measuring the height of the conductance dip. The complete uncertainty analysis shows that, using either analysis method, the relative combined standard uncertainty (k = 1) in determining the thermodynamic temperature in the temperature range from 20 mK to 200 mK is below 0.5%. In this temperature range, both analysis methods produced temperature estimates that deviated by 0.39% to 0.67% from the reference temperatures provided by a superconducting reference point device calibrated against the Provisional Low Temperature Scale of 2000.
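As context for the dip-based analysis, a minimal sketch of the textbook single-parameter CBT relation (the half-width of the conductance dip is proportional to temperature) is shown below; the junction count and measured width are invented, and the paper's full analysis handles higher-order corrections and the uncertainty budget that this one-liner ignores.

```python
from scipy.constants import k as k_B, e

# Textbook weak-blockade CBT relation for a uniform array of N tunnel
# junctions: the conductance dip's full width at half minimum satisfies
# V_half ~= 5.439 * N * k_B * T / e, so T follows from a width measurement.
def cbt_temperature(v_half, n_junctions):
    return e * v_half / (5.439 * n_junctions * k_B)

# Illustrative numbers: a 100-junction array with a 2.3 mV half-width.
print(f"T = {cbt_temperature(2.3e-3, 100) * 1e3:.1f} mK")
```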
Radiofrequency dosimetry in subjects implanted with metallic straight wires: a numerical study.
Mattei, E; Calcagnini, G; Censi, F; Triventi, M; Bartolini, P
2008-01-01
A numerical study to investigate the effects of exposure to electromagnetic fields (EMF) at 900 and 1800 MHz on biological tissues implanted with thin metallic structures has been carried out, using the finite-difference time-domain (FDTD) solution technique. The results of the model show that the presence of a metallic wire leads to a significant increase in the local specific energy absorption rate (SAR). The present standards and/or guidelines on safe exposure of humans to EMF do not cover persons with implanted devices, and thus the threshold levels defining safe exposure conditions might not apply in the presence of high SAR gradients, such as the ones generated by thin metallic implanted objects. However, exposure to EMF below the current safe levels, even in the presence of thin conductive structures, causes rather low temperature rises (about 1 °C).
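The FDTD technique named above advances E and H fields on a staggered grid in a leapfrog fashion. A minimal free-space 1D sketch of the update loop follows; the paper's 3D tissue-and-wire model is far richer, so this only shows the structure of the scheme.

```python
import numpy as np

# 1D FDTD (Yee) update in free space, normalized units, with a soft
# Gaussian-pulse source; purely illustrative of the leapfrog structure.
nx, nt = 400, 800
Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)
c = 0.5  # Courant number S = c*dt/dx (stable for S <= 1 in 1D)
for n in range(nt):
    Hy += c * np.diff(Ez)        # update H from the curl of E
    Ez[1:-1] += c * np.diff(Hy)  # update E from the curl of H
    Ez[50] += np.exp(-((n - 60) / 15.0) ** 2)  # inject the source pulse
print("peak |Ez| after propagation:", np.abs(Ez).max())
```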
A solution to the Navier-Stokes equations based upon the Newton Kantorovich method
NASA Technical Reports Server (NTRS)
Davis, J. E.; Gabrielsen, R. E.; Mehta, U. B.
1977-01-01
An implicit finite difference scheme based on the Newton-Kantorovich technique was developed for the numerical solution of the nonsteady, incompressible, two-dimensional Navier-Stokes equations in conservation-law form. The algorithm was second-order time-accurate, noniterative with regard to the nonlinear terms in the vorticity transport equation except at the earliest few time steps, and spatially factored. Numerical results were obtained with the technique for a circular cylinder at Reynolds number 15. Results indicate that the technique is in excellent agreement with other numerical techniques for all geometries and Reynolds numbers investigated, and indicate a potential for significant reduction in computation time over current iterative techniques.
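The Newton-Kantorovich idea of linearizing the nonlinear operator about the current iterate and solving the resulting linear problem can be shown on a toy boundary value problem; the sketch below applies it to u'' = exp(u), a stand-in chosen for brevity rather than the Navier-Stokes system treated in the paper.

```python
import numpy as np

# Newton-Kantorovich iteration for u'' = exp(u) on (0,1), u(0)=u(1)=0:
# at each step solve the linearized system J(u) du = -F(u), then update u.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)
for it in range(20):
    F = np.zeros(n)
    F[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2 - np.exp(u[1:-1])
    J = np.zeros((n, n))
    J[0, 0] = J[-1, -1] = 1.0  # Dirichlet boundary rows
    for i in range(1, n - 1):
        J[i, i - 1] = J[i, i + 1] = 1.0 / h**2
        J[i, i] = -2.0 / h**2 - np.exp(u[i])  # Frechet derivative of exp(u)
    du = np.linalg.solve(J, -F)
    u += du
    if np.linalg.norm(du, np.inf) < 1e-12:
        break
print(f"converged in {it + 1} Newton steps, min(u) = {u.min():.6f}")
```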
Cigada, Alfredo; Lurati, Massimiliano; Ripamonti, Francesco; Vanali, Marcello
2008-12-01
This paper introduces a measurement technique aimed at reducing or possibly eliminating the spatial aliasing problem in beamforming. Beamforming's main disadvantages are poor spatial resolution at low frequency and spatial aliasing at higher frequency, the latter leading to the identification of false sources. The idea is to move the microphone array during the measurement. In this paper, the proposed approach is theoretically and numerically investigated by means of simple sound propagation models, proving its efficiency in reducing spatial aliasing. A number of different array configurations are numerically investigated, together with the most important parameters governing this measurement technique. A set of numerical results concerning the case of a planar rotating array is shown, together with a first experimental validation of the method.
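For orientation, a frequency-domain delay-and-sum beamformer for a fixed linear array is sketched below with invented geometry and source angle; moving the array, as the paper proposes, effectively synthesizes additional microphone positions, which can be emulated here by enlarging mic_x. At higher frequencies, where the spacing exceeds half a wavelength, grating lobes (spatial aliasing) appear, which is exactly the problem the moving array addresses.

```python
import numpy as np

c, f = 343.0, 1000.0                       # speed of sound (m/s), frequency (Hz)
k = 2 * np.pi * f / c
mic_x = np.linspace(-0.5, 0.5, 8)          # microphone positions (m), invented
src_angle = np.deg2rad(20.0)               # true source direction, invented
p = np.exp(1j * k * mic_x * np.sin(src_angle))      # plane-wave pressures at mics
scan = np.deg2rad(np.linspace(-90, 90, 361))
steer = np.exp(1j * k * np.outer(np.sin(scan), mic_x))  # steering matrix
power = np.abs(steer.conj() @ p) ** 2 / len(mic_x) ** 2
print("estimated angle:", np.rad2deg(scan[np.argmax(power)]), "deg")
```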
Catfish science: Status and trends in the 21st century
Kwak, Thomas J.; Porath, Mark T.; Michaletz, Paul H.; Travnichek, Vincent H.
2011-01-01
Catfish science, the study of the fish order Siluriformes, is a diverse and expanding field in terms of advances and breadth of topics. We compiled literature from primary fisheries journals as an index of interest and advances in catfish science to examine temporal trends in the field. The number of catfish scientific publications varied over the past century with strong peaks during 1975–1979 and 2005–2010, which may be the result of interactive scientific and societal influences. Catfish biology was the predominant publication topic until the late 1990s, when ecology, techniques, and management publications became more prevalent. Articles on catfish ecology were most numerous in both the first and second international catfish symposia, but publications on techniques and conservation were more numerous in the second catfish symposium than the first. We summarize the state of knowledge, recent advances, and areas for future attention among topics in catfish science, including sampling and aging techniques, population dynamics, ecology, fisheries management, species diversity, nonnative catfish, and human dimensions, with an emphasis on the gains in this second symposium. Areas that we expect to be pursued in the future are development of new techniques and validation of existing methods; expansion of research to less-studied catfish species; broadening temporal, spatial, and organizational scales; interdisciplinary approaches; and research on societal views and constituent demands. Meeting these challenges will require scientists to span beyond their professional comfort zones to effectively reach higher standards. We look forward to the coming decade and the many advances in the conservation, ecology, and management of catfish that will be shared.
NASA Astrophysics Data System (ADS)
Maling, George C., Jr.
Recent advances in noise analysis and control theory and technology are discussed in reviews and reports. Topics addressed include noise generation; sound-wave propagation; noise control by external treatments; vibration and shock generation, transmission, isolation, and reduction; multiple sources and paths of environmental noise; noise perception and the physiological and psychological effects of noise; instrumentation, signal processing, and analysis techniques; and noise standards and legal aspects. Diagrams, drawings, graphs, photographs, and tables of numerical data are provided.
Graded bit patterned magnetic arrays fabricated via angled low-energy He ion irradiation.
Chang, L V; Nasruallah, A; Ruchhoeft, P; Khizroev, S; Litvinov, D
2012-07-11
A bit patterned magnetic array based on Co/Pd magnetic multilayers with a binary perpendicular magnetic anisotropy distribution was fabricated. The binary anisotropy distribution was attained through angled helium ion irradiation of a bit edge using hydrogen silsesquioxane (HSQ) resist as an ion stopping layer to protect the rest of the bit. The viability of this technique was explored numerically and evaluated through magnetic measurements of the prepared bit patterned magnetic array. The resulting graded bit patterned magnetic array showed a 35% reduction in coercivity and a 9% narrowing of the standard deviation of the switching field.
The exact fundamental solution for the Benes tracking problem
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam
2009-05-01
The universal continuous-discrete tracking problem requires the solution of a Fokker-Planck-Kolmogorov forward equation (FPKfe) for an arbitrary initial condition. Using results from quantum mechanics, the exact fundamental solution for the FPKfe is derived for the state model of arbitrary dimension with Benes drift; it requires only the computation of elementary transcendental functions and standard linear algebra techniques, and no ordinary or partial differential equations need to be solved. The measurement process may be an arbitrary, discrete-time nonlinear stochastic process, and the time step size can be arbitrary. Numerical examples are included, demonstrating its utility in practical implementation.
Power Saving Control for Battery-Powered Portable WLAN APs
NASA Astrophysics Data System (ADS)
Ogawa, Masakatsu; Hiraguri, Takefumi
This paper proposes a power saving control function for battery-powered portable wireless LAN (WLAN) access points (APs) to extend the battery life. The IEEE802.11 standard does not support power saving control for APs. To enable a sleep state for an AP, the AP forces the stations (STAs) to refrain from transmitting frames using the network allocation vector (NAV) while the AP is sleeping. Thus the sleep state for the AP can be employed without causing frame loss at the STAs. Numerical analysis and computer simulation reveal that the newly proposed control technique conserves power compared to the conventional control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Longhurst, G.R.
This paper presents a method for obtaining electron energy distribution functions from Langmuir probe data taken in cool, dense plasmas where thin-sheath criteria apply and where magnetic effects are not severe. Noise is filtered out by using regression on orthogonal polynomials. The method requires only a programmable calculator (TI-59 or equivalent) to implement and can be used for the most general, nonequilibrium electron energy distributions. Data from a mercury ion source analyzed using this method are presented and compared with results for the same data using standard numerical techniques.
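The analysis pipeline, polynomial regression followed by double differentiation (the Druyvesteyn relation gives the distribution function as proportional to the second derivative of electron current with respect to probe bias), can be sketched with modern tools; the synthetic I-V data below are illustrative, and the original work used orthogonal-polynomial regression on a programmable calculator rather than NumPy.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Synthetic noisy probe characteristic: Maxwellian-like electron current
# below the plasma potential, plus measurement noise.
V = np.linspace(-10.0, 0.0, 200)
rng = np.random.default_rng(1)
I_meas = np.exp(V / 2.0) + 0.01 * rng.standard_normal(V.size)

# Regression on an orthogonal (Chebyshev) basis smooths the noise; the
# fitted series is then differentiated twice analytically.
fit = Chebyshev.fit(V, I_meas, deg=12)
d2 = fit.deriv(2)
eedf = np.maximum(d2(V), 0.0)  # Druyvesteyn: f(E) ~ d^2 I / dV^2
print("second derivative at V = -4:", d2(-4.0))
```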
NASA Astrophysics Data System (ADS)
Raikovskiy, N. A.; Tretyakov, A. V.; Abramov, S. A.; Nazmeev, F. G.; Pavlichev, S. V.
2017-08-01
The paper presents a method for the numerical study of coolant flow in the water jacket of a self-lubricating sliding bearing, based on ANSYS CFX. The results of the numerical calculations show satisfactory convergence with the empirical data obtained on the test bed. The verification data confirm the applicability of this numerical technique for the analysis of coolant flow in a self-lubricating bearing containing a water jacket.
NASA Astrophysics Data System (ADS)
Scanu, Sergio; Peviani, Maximo; Carli, Filippo Maria; Paladini de Mendoza, Francesco; Piermattei, Viviana; Bonamano, Simone; Marcelli, Marco
2015-04-01
This work proposes a multidisciplinary approach in which wave power potential maps are used as a baseline for the application of environmental monitoring techniques identified through the use of a Database for Environmental Monitoring Techniques and Equipment (DEMTE), derived in the frame of the project "Marine Renewables Infrastructure Network for Emerging Energy Technologies" (Marinet - FP7). This approach aims to standardize the monitoring of the marine environment during the installation, operation, and decommissioning of Marine Energy Conversion Systems. The database has been compiled from the techniques and instrumentation available among the partners of the consortium, covering all marine environmental compartments potentially affected by any impacts. Furthermore, in order to plan marine energy conversion schemes, the wave potential was assessed at regional and local scales using a numerical modeling downscaling methodology. The regional scale led to the elaboration of the Italian Wave Power Atlas, while the local scale led to the definition of nearshore hot spots useful for planning device installation along the Latium coast. The present work focuses on the application of the environmental monitoring techniques identified in the DEMTE at the hot spots derived from the wave potential maps, with particular reference to the biological interactions of the devices and the management of the marine space. The results obtained are the basis for the development of standardized procedures aimed at effective application of marine environmental monitoring techniques during the installation, operation, and decommissioning of Marine Energy Conversion Systems. The present work makes a substantial contribution to overcoming non-technological barriers in concession procedures where protection of the marine environment is concerned.
Distributed geospatial model sharing based on open interoperability standards
Feng, Min; Liu, Shuguang; Euliss, Ned H.; Fang, Yin
2009-01-01
Numerous geospatial computational models have been developed based on sound principles and published in journals or presented at conferences. However, modelers have made few advances in the development of computable modules that facilitate sharing during model development or utilization. Constraints hampering the development of model-sharing technology include limitations on computing, storage, and connectivity; traditional stand-alone and closed network systems cannot fully support sharing and integrating geospatial models. To address this need, we have identified methods for sharing geospatial computational models using Service Oriented Architecture (SOA) techniques and open geospatial standards. The service-oriented model-sharing service is accessible using any tools or systems compliant with open geospatial standards, making it possible to utilize vast scientific resources available from around the world to solve highly sophisticated application problems. The methods also allow model services to be empowered by diverse computational devices and technologies, such as portable devices and GRID computing infrastructures. Based on the generic and abstract operations and data structures required by the Web Processing Service (WPS) standard, we developed an interactive interface for model sharing to help reduce interoperability problems. Geospatial computational models are shared as model services, where the computational processes provided by models can be accessed through tools and systems compliant with WPS. We developed a platform to help modelers publish individual models in a simplified and efficient way. Finally, we illustrate our technique using wetland hydrological models we developed for the prairie pothole region of North America.
International Standards for Properties and Performance of Advanced Ceramics - 30 years of Excellence
NASA Technical Reports Server (NTRS)
Jenkins, Michael G.; Salem, Jonathan A.; Helfinstine, John; Quinn, George D.; Gonczy, Stephen T.
2016-01-01
Mechanical and physical properties/performance of brittle bodies (e.g., advanced ceramics and glasses) can be difficult to measure correctly unless the proper techniques are used. For three decades, ASTM Committee C28 on Advanced Ceramics, has developed numerous full-consensus standards (e.g., test methods, practices, guides, terminology) to measure various properties and performance of a monolithic and composite ceramics and coatings that, in some cases, may be applicable to glasses. These standards give the "what, how, how not, why, why not, etc." for many mechanical, physical, thermal, properties and performance of advanced ceramics. Use of these standards provides accurate, reliable, repeatable and complete data. Involvement in ASTM Committee C28 has included users, producers, researchers, designers, academicians, etc. who write, continually update, and validate through round robin test programmes, more than 45 standards in the 30 years since the Committee's inception in 1986. Included in this poster is a pictogram of the ASTM Committee C28 standards and how to obtain them either as i) individual copies with full details or ii) a complete collection in one volume. A listing of other ASTM committees of interest is included. In addition, some examples of the tangible benefits of standards for advanced ceramics are employed to demonstrate their practical application.
Unbiased, scalable sampling of protein loop conformations from probabilistic priors.
Zhang, Yajia; Hauser, Kris
2013-01-01
Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
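The sketching step can be demonstrated on an ordinary least-squares problem: a random projection compresses the observations while approximately preserving the solution. The sketch below is a generic illustration of this mechanism with invented dimensions, not the RGA/PCGA implementation itself (which is written in Julia within MADS).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 20_000, 50, 500            # many observations, few parameters
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Gaussian sketching matrix: k << m rows, scaled to roughly preserve norms.
S = rng.standard_normal((k, m)) / np.sqrt(k)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
print("relative difference:",
      np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full))
```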
A hybrid perturbation-Galerkin technique for partial differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Anderson, Carl M.
1990-01-01
A two-step hybrid perturbation-Galerkin technique for improving the usefulness of perturbation solutions to partial differential equations which contain a parameter is presented and discussed. In the first step of the method, the leading terms in the asymptotic expansion(s) of the solution about one or more values of the perturbation parameter are obtained using standard perturbation methods. In the second step, the perturbation functions obtained in the first step are used as trial functions in a Bubnov-Galerkin approximation. This semi-analytical, semi-numerical hybrid technique appears to overcome some of the drawbacks of the perturbation and Galerkin methods when they are applied by themselves, while combining some of the good features of each. The technique is illustrated first by a simple example. It is then applied to the problem of determining the flow of a slightly compressible fluid past a circular cylinder and to the problem of determining the shape of a free surface due to a sink above the surface. Solutions obtained by the hybrid method are compared with other approximate solutions, and its possible application to certain problems associated with domain decomposition is discussed.
NASA Technical Reports Server (NTRS)
Waugh, Darryn W.; Plumb, R. Alan
1994-01-01
We present a trajectory technique, contour advection with surgery (CAS), for tracing the evolution of material contours in a specified (including observed) evolving flow. CAS uses the algorithms developed by Dritschel for contour dynamics/surgery to trace the evolution of specified contours. The contours are represented by a series of particles, which are advected by a specified, gridded wind distribution. The resolution of the contours is preserved by continually adjusting the number of particles, and fine-scale features are produced that are not present in the input data (and cannot easily be generated using standard trajectory techniques). The reliability of the CAS procedure, and its dependence on the spatial and temporal resolution of the wind field, are examined by comparisons with high-resolution numerical data (from contour dynamics calculations and from a general circulation model) and with routine stratospheric analyses. These comparisons show that the large-scale motions dominate the deformation field and that CAS can accurately reproduce small scales from low-resolution wind fields. The CAS technique therefore enables examination of atmospheric tracer transport at previously unattainable resolution.
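The core CAS loop, advecting contour nodes and re-noding over-stretched segments, can be sketched compactly; the analytic velocity field below is a stand-in for gridded winds, and surgery (contour reconnection) is omitted.

```python
import numpy as np

def vel(p, t):
    # Steady, divergence-free cellular flow as a stand-in for gridded winds.
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([-np.sin(np.pi * x) * np.cos(np.pi * y),
                             np.cos(np.pi * x) * np.sin(np.pi * y)])

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
pts = 0.5 + 0.15 * np.column_stack([np.cos(theta), np.sin(theta)])
dt, dmax = 0.01, 0.02
for step in range(300):
    t = step * dt
    k1 = vel(pts, t); k2 = vel(pts + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = vel(pts + 0.5 * dt * k2, t + 0.5 * dt); k4 = vel(pts + dt * k3, t + dt)
    pts = pts + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0  # RK4 advection
    seg = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)
    while seg.max() > dmax:  # insert midpoints on over-stretched segments
        i = int(np.argmax(seg))
        mid = 0.5 * (pts[i] + pts[(i + 1) % len(pts)])
        pts = np.insert(pts, i + 1, mid, axis=0)
        seg = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)
print("final number of contour nodes:", len(pts))
```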
A radial transmission line material measurement apparatus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warne, L.K.; Moyer, R.D.; Koontz, T.E.
1993-05-01
A radial transmission line material measurement sample apparatus (sample holder, offset short standards, measurement software, and instrumentation) is described which has been proposed, analyzed, designed, constructed, and tested. The purpose of the apparatus is to obtain accurate surface impedance measurements of lossy, possibly anisotropic, samples at low and intermediate frequencies (vhf and low uhf). The samples typically take the form of sections of the material coatings on conducting objects. Such measurements thus provide the key input data for predictive numerical scattering codes. Prediction of the sample surface impedance from the coaxial input impedance measurement is carried out by two techniques. The first is an analytical model for the coaxial-to-radial transmission line junction. The second is an empirical determination of the bilinear transformation model of the junction by the measurement of three full standards. The standards take the form of three offset shorts (and an additional lossy Salisbury load), which have also been constructed. The accuracy achievable with the device appears to be near one percent.
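The three-standard calibration mentioned above exploits the fact that the junction maps the sample reflection coefficient through a bilinear transform with three complex coefficients; measuring three known standards therefore yields a linear system for those coefficients. A sketch with invented measurement values follows.

```python
import numpy as np

# Bilinear (one-port error) model: Gm = (a*G + b) / (c*G + 1), where G is
# the sample reflection coefficient and Gm the measured one. Three standards
# with known G give three linear equations in the complex unknowns a, b, c.
G_std = np.array([-1.0, np.exp(1j * 2.6), np.exp(1j * 1.2)])  # assumed known
Gm_std = np.array([0.2 - 0.9j, -0.7 + 0.3j, 0.5 + 0.6j])      # invented data

M = np.column_stack([G_std, np.ones(3), -G_std * Gm_std])
a, b, c = np.linalg.solve(M, Gm_std)

def invert(Gm):
    # Recover the sample reflection coefficient from a raw measurement.
    return (Gm - b) / (a - c * Gm)

print(invert(Gm_std))  # reproduces the standards' G by construction
```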
NASA-JSC Protocol for the Characterization of Single Wall Carbon Nanotube Material Quality
NASA Technical Reports Server (NTRS)
Arepalli, Sivaram; Nikolaev, Pasha; Gorelik, Olga; Hadjiev, Victor; Holmes, William; Devivar, Rodrigo; Files, Bradley; Yowell, Leonard
2010-01-01
It is well known that raw as well as purified single-wall carbon nanotube (SWCNT) material always contains a certain amount of impurities of varying composition (mostly metal catalyst and non-tubular carbon). The particular purification method also creates defects and/or functional groups in the SWCNT material and therefore affects its dispersibility in solvents (important to subsequent application development). A number of analytical characterization tools have been used successfully in past years to assess various properties of nanotube materials, but the lack of standards makes it difficult to compare these measurements across the board. In this work we report the protocol developed at NASA-JSC which standardizes measurements using TEM, SEM, TGA, Raman and UV-Vis-NIR absorption techniques. Numerical measures are established for parameters such as metal content, homogeneity, thermal stability and dispersibility, to allow easy comparison of SWCNT materials. We also report on recent progress in the quantitative measurement of non-tubular carbon impurities and a possible purity standard for SWCNT materials.
Pang, Susan; Cowen, Simon
2017-12-13
We describe a novel generic method to derive the unknown endogenous concentration of an analyte within complex biological matrices (e.g. serum or plasma), based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. The technique is based on the method of standard additions for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
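One way to implement the described estimation numerically is to scan candidate endogenous concentrations and pick the one that makes signal versus log total concentration most linear; the sketch below does this on synthetic data with invented spike levels, and is an interpretation of the approach rather than the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
spikes = np.array([0.0, 5.0, 15.0, 45.0])   # added analyte (e.g. pg/mL), invented
c0_true = 12.0                               # "unknown" endogenous concentration
signal = 1.8 * np.log10(c0_true + spikes) + 0.2 + 0.01 * rng.standard_normal(4)

def linearity(c0):
    # R^2 of a straight-line fit of signal vs log10(candidate total conc.).
    xlog = np.log10(c0 + spikes)
    slope, icpt = np.polyfit(xlog, signal, 1)
    resid = signal - (slope * xlog + icpt)
    return 1.0 - resid.var() / signal.var()

cands = np.linspace(0.1, 100.0, 2000)
c0_hat = cands[np.argmax([linearity(c) for c in cands])]
print(f"estimated endogenous concentration: {c0_hat:.1f}")
```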
Solid immersion terahertz imaging with sub-wavelength resolution
NASA Astrophysics Data System (ADS)
Chernomyrdin, Nikita V.; Schadko, Aleksander O.; Lebedev, Sergey P.; Tolstoguzov, Viktor L.; Kurlov, Vladimir N.; Reshetov, Igor V.; Spektor, Igor E.; Skorobogatiy, Maksim; Yurchenko, Stanislav O.; Zaytsev, Kirill I.
2017-05-01
We have developed a method of solid immersion THz imaging: a non-contact technique employing a THz beam focused into the evanescent-field volume, allowing a strong reduction in the dimensions of the THz caustic. We have combined numerical simulations and experimental studies to demonstrate a sub-wavelength 0.35λ0 resolution of the solid immersion THz imaging system, compared to the 0.85λ0 resolution of a standard imaging system employing only an aspherical singlet. We discuss the prospects of using the developed technique in various branches of THz science and technology, namely, for THz measurements of solid-state materials featuring sub-wavelength variations of physical properties, for highly accurate mapping of healthy and pathological tissues in THz medical diagnosis, for detection of sub-wavelength defects in THz non-destructive sensing, and for enhancement of THz nonlinear effects.
Rangel-Magdaleno, Jose J; Romero-Troncoso, Rene J; Osornio-Rios, Roque A; Cabal-Yepez, Eduardo
2009-01-01
Jerk monitoring, defined as monitoring the first derivative of acceleration, has become a major issue in computer numerical control (CNC) machines. Several works highlight the necessity of measuring jerk in a reliable way for improving production processes. Nowadays, the computation of jerk is done by finite differences of the acceleration signal, computed at the Nyquist rate, which leads to a low signal-to-quantization-noise ratio (SQNR) during the estimation. The novelty of this work is the development of a smart sensor for jerk monitoring from a standard accelerometer, with improved SQNR. The proposal is based on oversampling techniques that give a better estimation of jerk than that produced by a Nyquist-rate differentiator. Simulations and experimental results are presented to show the overall performance of the methodology.
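The premise can be illustrated numerically: differencing a quantized Nyquist-rate signal amplifies quantization noise by the sample rate, while averaging an oversampled stream before differencing suppresses it. The sketch below uses invented rates and an invented quantization step, with block averaging as a crude stand-in for the sensor's decimation filtering.

```python
import numpy as np

fs, osr, q = 1000.0, 64, 1e-3                 # output rate (Hz), oversampling ratio, LSB
t = np.arange(0, 1.0, 1.0 / (fs * osr))
accel_q = q * np.round(np.sin(2 * np.pi * 3 * t) / q)  # quantized accelerometer stream

a_nyq = accel_q[::osr]                         # Nyquist-rate samples
jerk_nyq = np.diff(a_nyq) * fs                 # plain finite difference
a_avg = accel_q.reshape(-1, osr).mean(axis=1)  # average each block of osr samples
jerk_avg = np.diff(a_avg) * fs

true = lambda tm: 2 * np.pi * 3 * np.cos(2 * np.pi * 3 * tm)
e_nyq = jerk_nyq - true((np.arange(jerk_nyq.size) + 0.5) / fs)
e_avg = jerk_avg - true((np.arange(jerk_avg.size) + 1.0) / fs)  # block means sit at block centres
print("rms jerk error, Nyquist-rate:", np.sqrt(np.mean(e_nyq ** 2)))
print("rms jerk error, oversampled :", np.sqrt(np.mean(e_avg ** 2)))
```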
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. The method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
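For readers who want to see the penalty in action, the sketch below runs the plain least-squares elastic net (the Zou-Zhang starting point, via scikit-learn) on synthetic data with a sparse true coefficient vector; the paper's GMM extension, which handles endogeneity, has no such off-the-shelf implementation, so this is only the simplest special case.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]               # only 3 nonzero parameters
y = X @ beta + 0.5 * rng.standard_normal(n)

# Elastic net combines L1 (selection) and L2 (stability under collinearity).
fit = ElasticNet(alpha=0.1, l1_ratio=0.7).fit(X, y)
print("selected variables:", np.flatnonzero(fit.coef_ != 0))
```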
How to assess intestinal viability during surgery: A review of techniques
Urbanavičius, Linas; Pattyn, Piet; Van de Putte, Dirk; Venskutonis, Donatas
2011-01-01
Objective and quantitative intraoperative methods of bowel viability assessment are essential in gastrointestinal surgery. Exact determination of the borderline of the viable bowel with the help of an objective test could result in a decrease of postoperative ischemic complications. An accurate, reproducible and cost-effective method is desirable in every operating theater dealing with abdominal operations. Numerous techniques assessing various parameters of intestinal viability have been described in the literature. However, there is no consensus about their clinical use. To evaluate the available methods, a systematic search of the English literature was performed. The virtues and drawbacks of the techniques and the possibilities for clinical application are reviewed. Valuable parameters related to postoperative intestinal anastomotic or stoma complications are analyzed. Important issues in the measurement and interpretation of bowel viability are discussed. To date, only a few methods are applicable in surgical practice. Further studies are needed to determine the limiting values of intestinal tissue oxygenation and flow indicative of ischemic complications and to standardize the methods. PMID:21666808
Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.
2010-01-01
The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single-capillary BTEX model. A facilitated transporter model is used to describe mass transfer between the plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
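Of the schemes compared, MacCormack's predictor-corrector is the simplest to sketch; below it is applied to plain linear advection on a periodic domain, which isolates the convection-dominated character the paper highlights (the actual BTEX model adds nonlinear exchange terms).

```python
import numpy as np

# MacCormack scheme for u_t + a*u_x = 0: forward-difference predictor,
# backward-difference corrector, second-order accurate in space and time.
nx, a, cfl = 200, 1.0, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
x = np.arange(nx) * dx
u = np.exp(-200 * (x - 0.3) ** 2)      # initial concentration pulse
for _ in range(int(0.4 / dt)):
    up = u - a * dt / dx * (np.roll(u, -1) - u)              # predictor
    u = 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1))) # corrector
print("peak after advection:", u.max(), "at x =", x[np.argmax(u)])
```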
Improve Data Mining and Knowledge Discovery Through the Use of MatLab
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Martin, Dawn (Elliott); Beil, Robert
2011-01-01
Data mining is widely used to mine business, engineering, and scientific data. Data mining uses pattern-based queries, searches, or other analyses of one or more electronic databases/datasets in order to discover or locate a predictive pattern or anomaly indicative of system failure, criminal or terrorist activity, etc. There are various algorithms, techniques and methods used to mine data, including neural networks, genetic algorithms, decision trees, the nearest neighbor method, rule induction, association analysis, slice and dice, segmentation, and clustering. These algorithms, techniques and methods, used to detect patterns in a dataset, have been applied in the development of numerous open source and commercially available products and technologies for data mining. Data mining is best realized when latent information in a large quantity of stored data is discovered. No one technique solves all data mining problems; the challenge is to select algorithms or methods appropriate to strengthen data/text mining and trending within given datasets. In recent years, throughout industry, academia and government agencies, thousands of data systems have been designed and tailored to serve specific engineering and business needs. Many of these systems use databases with relational algebra and structured query language to categorize and retrieve data. In these systems, data analyses are limited and require prior explicit knowledge of metadata and database relations, lacking exploratory data mining and discovery of latent information. This presentation introduces MatLab (MATrix LABoratory), an engineering and scientific data analysis tool, as a means to perform data mining. MatLab was originally intended to perform purely numerical calculations (a glorified calculator). Now, in addition to having hundreds of mathematical functions, it is a programming language with hundreds of built-in standard functions and numerous available toolboxes. MatLab's ease of data processing and visualization and its enormous set of built-in functionalities and toolboxes make it suitable for numerical computations and simulations as well as for use as a data mining tool. Engineers and scientists can take advantage of the readily available functions/toolboxes to gain wider insight in their respective data mining experiments.
Towards efficient backward-in-time adjoint computations using data compression techniques
Cyr, E. C.; Shadid, J. N.; Wildey, T.
2014-12-16
In the context of a posteriori error estimation for nonlinear time-dependent partial differential equations, the state of the practice is to use adjoint approaches, which require the solution of a backward-in-time problem defined by a linearization of the forward problem. One of the major obstacles in the practical application of these approaches, we found, is the need to store, or recompute, the forward solution to define the adjoint problem and to evaluate the error representation. Our study considers the use of data compression techniques to approximate forward solutions employed in the backward-in-time integration. The development derives an error representation that accounts for the difference between the standard approach and the compressed approximation of the forward solution. This representation is algorithmically similar to the standard representation and only requires the computation of the quantity of interest for the forward solution and the data-compressed reconstructed solution (i.e., scalar quantities that can be evaluated as the forward problem is integrated). This approach is then compared with existing techniques, such as checkpointing and time-averaged adjoints. Lastly, we provide numerical results indicating the potential efficiency of our approach on a transient diffusion-reaction equation and on the Navier-Stokes equations. These results demonstrate memory compression ratios up to 450× while maintaining reasonable accuracy in the error estimates.
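The storage trade-off can be mimicked in a few lines: store forward snapshots in reduced precision plus lossless compression, and measure both the compression ratio and the pointwise reconstruction error. The sketch below uses a toy forward solution and float16 truncation as an invented stand-in for the paper's compression machinery.

```python
import numpy as np
import zlib

# Toy "forward solve": a traveling Gaussian pulse sampled at nt time steps.
nx, nt = 2000, 500
x = np.linspace(0, 1, nx)
snaps = [np.exp(-100 * (x - 0.2 - 0.001 * n) ** 2) for n in range(nt)]

raw = b"".join(s.astype(np.float64).tobytes() for s in snaps)
comp = [zlib.compress(s.astype(np.float16).tobytes()) for s in snaps]
ratio = len(raw) / sum(len(c) for c in comp)
err = max(np.abs(s - np.frombuffer(zlib.decompress(c), np.float16)).max()
          for s, c in zip(snaps, comp))
print(f"compression ratio ~{ratio:.0f}x, max pointwise error {err:.2e}")
```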
Prediction of properties of wheat dough using intelligent deep belief networks
NASA Astrophysics Data System (ADS)
Guha, Paramita; Bhatnagar, Taru; Pal, Ishan; Kamboj, Uma; Mishra, Sunita
2017-11-01
In this paper, the rheological and chemical properties of wheat dough are predicted using deep belief networks. Wheat grains are stored at controlled environmental conditions. The internal parameters of the grains, viz. protein, fat, carbohydrates, moisture, and ash, are determined using standard chemical analysis, and the viscosity of the dough is measured using a rheometer. Here, fat, carbohydrates, moisture, ash, and temperature are considered as inputs, whereas protein and viscosity are chosen as outputs. The prediction algorithm is developed using a deep neural network where each layer is trained greedily using restricted Boltzmann machine (RBM) networks. The overall network is finally fine-tuned using a standard neural network technique. In most of the literature, fine-tuning is done using the back-propagation technique. In this paper, a new algorithm is proposed in which each layer is tuned using an RBM and the final network is fine-tuned using a deep neural network (DNN). It has been observed that with the proposed algorithm, errors between the actual and predicted outputs are smaller than with the conventional algorithm. Hence, the given network can be considered beneficial, as it predicts the outputs more accurately. Numerical results along with discussions are presented.
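The greedy layer-wise pretraining described above can be sketched with off-the-shelf components. The snippet below, a minimal sketch on synthetic stand-in data, pretrains a stack of restricted Boltzmann machines and then fits a supervised head; scikit-learn's back-propagation MLP stands in for the paper's RBM-based fine-tuning step, so this is an approximation of the idea, not the authors' algorithm.

```python
# Sketch: greedy layer-wise RBM pretraining followed by supervised fine-tuning,
# in the spirit of a deep belief network. Synthetic stand-in data throughout.
import numpy as np
from sklearn.neural_network import BernoulliRBM, MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(1)
X = rng.random((500, 5))             # fat, carbohydrate, moisture, ash, temperature
y = X @ rng.random(5)                # stand-in target (e.g. viscosity)

X = MinMaxScaler().fit_transform(X)  # RBMs expect inputs in [0, 1]
layers, H = [], X
for n_hidden in (16, 8):             # greedy, layer-by-layer pretraining
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.05,
                       n_iter=30, random_state=0)
    H = rbm.fit_transform(H)
    layers.append(rbm)

# Fine-tune a supervised head on the learned features (stand-in for the
# paper's RBM-based fine-tuning stage).
head = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(H, y)
print("training R^2:", head.score(H, y))
```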
Wales, Andrew; Breslin, Mark; Davies, Robert
2006-09-10
Salmonella infection of laying flocks in the UK is predominantly a problem of the persistent contamination of layer houses and associated wildlife vectors by Salmonella Enteritidis. Methods for its control and elimination include effective cleaning and disinfection of layer houses between flocks, and it is important to be able to measure the success of such decontamination. A method for the environmental detection and semi-quantitative enumeration of salmonellae was used and compared with a standard qualitative method in 12 Salmonella-contaminated caged layer houses before and after cleaning and disinfection. The quantitative technique proved to have comparable sensitivity to the standard method, and additionally provided insights into the numerical Salmonella challenge that replacement flocks would encounter. Elimination of S. Enteritidis was not achieved in any of the premises examined; substantial reductions in the prevalence and numbers of salmonellae were demonstrated in some, whilst in others an increase in contamination was observed after cleaning and disinfection. Particular problems with feeders and wildlife vectors were highlighted. The use of a quantitative method assisted the identification of problem areas, such as those with a high initial bacterial load or those experiencing only a modest reduction in bacterial count following decontamination.
General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles
NASA Astrophysics Data System (ADS)
Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J.
2017-09-01
The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
NASA Technical Reports Server (NTRS)
Bozeman, Robert E.
1987-01-01
An analytic technique for accounting for the joint effects of Earth oblateness and atmospheric drag on close-Earth satellites is investigated. The technique is analytic in the sense that explicit solutions to the Lagrange planetary equations are given; consequently, no numerical integrations are required in the solution process. The atmospheric density in the technique described is represented by a rotating spherical exponential model with superposed effects of the oblate atmosphere and the diurnal variations. A computer program implementing the process is discussed and sample output is compared with output from program NSEP (Numerical Satellite Ephemeris Program). NSEP uses a numerical integration technique to account for atmospheric drag effects.
Approximate Bayesian evaluations of measurement uncertainty
NASA Astrophysics Data System (ADS)
Possolo, Antonio; Bodnar, Olha
2018-04-01
The Guide to the Expression of Uncertainty in Measurement (GUM) includes formulas that produce an estimate of a scalar output quantity that is a function of several input quantities, and an approximate evaluation of the associated standard uncertainty. This contribution presents approximate, Bayesian counterparts of those formulas for the case where the output quantity is a parameter of the joint probability distribution of the input quantities, also taking into account any information about the value of the output quantity available prior to measurement expressed in the form of a probability distribution on the set of possible values for the measurand. The approximate Bayesian estimates and uncertainty evaluations that we present have a long history and illustrious pedigree, and provide sufficiently accurate approximations in many applications, yet are very easy to implement in practice. Differently from exact Bayesian estimates, which involve either (analytical or numerical) integrations, or Markov Chain Monte Carlo sampling, the approximations that we describe involve only numerical optimization and simple algebra. Therefore, they make Bayesian methods widely accessible to metrologists. We illustrate the application of the proposed techniques in several instances of measurement: isotopic ratio of silver in a commercial silver nitrate; odds of cryptosporidiosis in AIDS patients; height of a manometer column; mass fraction of chromium in a reference material; and potential-difference in a Zener voltage standard.
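The optimization-only flavor of these approximations can be sketched as follows: a maximum a posteriori estimate is found by numerical optimization, and the standard uncertainty is read off the inverse Hessian (a Laplace-style approximation). The Gaussian model and all numbers below are illustrative assumptions, not an example taken from the paper.

```python
# Sketch: approximate Bayesian evaluation using only numerical optimization.
# Model: Gaussian observations with known sigma, Gaussian prior on the
# measurand mu; all numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

x = np.array([9.8, 10.1, 10.0, 9.9, 10.2])   # observed values
sigma = 0.15                                  # known measurement std. dev.
mu0, tau0 = 10.5, 0.5                         # prior mean and std. dev.

def neg_log_posterior(theta):
    mu = theta[0]
    loglik = -0.5 * np.sum((x - mu) ** 2) / sigma**2
    logprior = -0.5 * (mu - mu0) ** 2 / tau0**2
    return -(loglik + logprior)

res = minimize(neg_log_posterior, x0=[x.mean()], method="BFGS")
mu_hat = res.x[0]
u_mu = np.sqrt(res.hess_inv[0, 0])   # Laplace approximation: inverse Hessian
print(f"estimate {mu_hat:.4f}, standard uncertainty {u_mu:.4f}")
```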
Numerical modeling of crystal growth in Bridgman device
NASA Astrophysics Data System (ADS)
Vompe, Dmitry Aleksandrovich
1997-12-01
The standard model for the growth of a crystal from a pure substance or a dilute binary mixture contains transport equations for heat and phase-change conditions at the solidification front. A numerical method is constructed for simulations of crystal growth in a vertical Bridgman device. The method is based on a boundary-fitting technique in which melted and solidified regions are mapped onto a fixed rectangular logical domain. The Alternating Direction Implicit (ADI) scheme is used to treat the diffusive terms implicitly, while explicit methods are used for the remaining terms in the mapped temperature equations with variable coefficients. The nonlinear equation for the solid/liquid interface motion is solved by the modified Euler technique. Results obtained from the calculations have been used to study the influence of various boundary conditions imposed on the sidewalls and the top and bottom of the ampoule. Conditions are identified that lead to a steadily growing crystal, and results are compared with an asymptotic one-dimensional model. Criteria based on ampoule length and boundary conditions are derived and compared with a previously developed one-dimensional model. Various cases have been considered to determine conditions for maintaining a nearly flat interface. It was found that the interface amplitude can be decreased by a factor of 100 (even 1,000) by optimizing temperature boundary conditions.
Topological quantum error correction in the Kitaev honeycomb model
NASA Astrophysics Data System (ADS)
Lee, Yi-Chan; Brell, Courtney G.; Flammia, Steven T.
2017-08-01
The Kitaev honeycomb model is an approximate topological quantum error correcting code in the same phase as the toric code, but requiring only a 2-body Hamiltonian. As a frustrated spin model, it is well outside the commuting models of topological quantum codes that are typically studied, but its exact solubility makes it more amenable to analysis of effects arising in this noncommutative setting than a generic topologically ordered Hamiltonian. Here we study quantum error correction in the honeycomb model using both analytic and numerical techniques. We first prove explicit exponential bounds on the approximate degeneracy, local indistinguishability, and correctability of the code space. These bounds are tighter than can be achieved using known general properties of topological phases. Our proofs are specialized to the honeycomb model, but some of the methods may nonetheless be of broader interest. Following this, we numerically study noise caused by thermalization processes in the perturbative regime close to the toric code renormalization group fixed point. The appearance of non-topological excitations in this setting has no significant effect on the error correction properties of the honeycomb model in the regimes we study. Although the behavior of this model is found to be qualitatively similar to that of the standard toric code in most regimes, we find numerical evidence of an interesting effect in the low-temperature, finite-size regime where a preferred lattice direction emerges and anyon diffusion is geometrically constrained. We expect this effect to yield an improvement in the scaling of the lifetime with system size as compared to the standard toric code.
Fauzdar, Ashish; Chowdhry, Mohit; Makroo, R. N.; Mishra, Manoj; Srivastava, Priyanka; Tyagi, Richa; Bhadauria, Preeti; Kaul, Anita
2013-01-01
BACKGROUND AND OBJECTIVE: Women with high-risk pregnancies are offered prenatal diagnosis through amniocentesis for cytogenetic analysis of fetal cells. The aim of this study was to evaluate the effectiveness of the rapid fluorescence in situ hybridization (FISH) technique for detecting numerical aberrations of chromosomes 13, 21, 18, X and Y in high-risk pregnancies in an Indian scenario. MATERIALS AND METHODS: A total of 163 samples were received for FISH and/or a full karyotype for prenatal diagnosis from high-risk pregnancies. In 116 samples, conventional culture and G-banding karyotyping were applied in conjunction with the FISH test using the AneuVysion kit (Abbott Molecular, Inc.), following the standard recommended protocol, to compare the two techniques in our setup. RESULTS: Out of 116 patients, 96 were normal for the five major chromosome abnormalities and seven were found to be abnormal (04 trisomy 21, 02 monosomy X, and 01 trisomy 13), and all the FISH results correlated with conventional cytogenetics. Summarizing the results of the total 163 patients analyzed for the major chromosomal abnormalities by cytogenetics and/or FISH, 140 (86%) were normal, 9 (6%) abnormal, 4 (2.5%) suspicious mosaic, and 10 (6%) culture failures. The diagnostic detection rate with FISH in 116 patients was 97.5%. There were no false-positive or false-negative autosomal or sex chromosomal results within our established criteria for reporting FISH signals. CONCLUSION: Rapid FISH is a reliable and prompt method for detecting numerical chromosomal aberrations and has now been implemented as a routine diagnostic procedure for detection of fetal aneuploidy in India. PMID:23901191
High-resolution subgrid models: background, grid generation, and implementation
NASA Astrophysics Data System (ADS)
Sehili, Aissa; Lang, Günther; Lippert, Christoph
2014-04-01
The basic idea of subgrid models is the use of available high-resolution bathymetric data at the subgrid level in computations that are performed on relatively coarse grids allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented, including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows detailed boundary fitting at the subgrid level. The computational grid is made of flow-aligned quadrilaterals, including a few triangles where necessary. User-defined grid subdivision at the subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires particular treatment; based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with a comparatively highly resolved classical unstructured-grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on standard PC-like hardware. The subgrid technique is therefore a promising framework for performing accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
ERIC Educational Resources Information Center
Sozio, Gerry
2009-01-01
Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clough, Katy; Figueras, Pau; Finkel, Hal
In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block-structured Berger-Rigoutsos grid generation. The code supports non-trivial 'many-boxes-in-many-boxes' mesh hierarchies and massive parallelism through the Message Passing Interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.
Highly Resolved Intravital Striped-illumination Microscopy of Germinal Centers
Andresen, Volker; Sporbert, Anje
2014-01-01
Monitoring cellular communication by intravital deep-tissue multi-photon microscopy is the key to understanding the fate of immune cells within thick tissue samples and organs in health and disease. By controlling the scanning pattern in multi-photon microscopy and applying appropriate numerical algorithms, we developed a striped-illumination approach which enabled us to achieve 3-fold better axial resolution and improved signal-to-noise ratio, i.e. contrast, at more than 100 µm tissue depth within highly scattering tissue of lymphoid organs as compared to standard multi-photon microscopy. The acquisition speed as well as photobleaching and photodamage effects were similar to the standard photomultiplier-based technique, whereas the imaging depth was slightly lower due to the use of field detectors. Using the striped-illumination approach, we are able to observe the dynamics of immune-complex deposits on secondary follicular dendritic cells, at the level of a few protein molecules, in germinal centers. PMID:24748007
NASA Astrophysics Data System (ADS)
Chen, Wen; Wang, Fajie
Based on the implicit calculus equation modeling approach, this paper proposes a speculative concept of the potential and wave operators on negative dimensionality. Unlike standard partial differential equation (PDE) modeling, the implicit calculus modeling approach does not require the explicit expression of the PDE governing equation. Instead, the fundamental solution of the physical problem is used to implicitly define the differential operator and to implement the simulation in conjunction with the appropriate boundary conditions. In this study, we conjecture an extension of the fundamental solutions of the standard Laplace and Helmholtz equations to negative dimensionality. Then, by using the singular boundary method, a recent boundary discretization technique, we investigate potential and wave problems using the fundamental solution on negative dimensionality. Numerical experiments reveal that the physical behaviors on negative dimensionality may differ from those on positive dimensionality. This speculative study might open unexplored territory in research.
Solving fractional optimal control problems within a Chebyshev-Legendre operational technique
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Ezz-Eldien, S. S.; Doha, E. H.; Abdelkawy, M. A.; Baleanu, D.
2017-06-01
In this manuscript, we report a new operational technique for approximating the numerical solution of fractional optimal control (FOC) problems. The operational matrix of the Caputo fractional derivative of the orthonormal Chebyshev polynomial and the Legendre-Gauss quadrature formula are used, and then the Lagrange multiplier scheme is employed for reducing such problems into those consisting of systems of easily solvable algebraic equations. We compare the approximate solutions achieved using our approach with the exact solutions and with those presented in other techniques and we show the accuracy and applicability of the new numerical approach, through two numerical examples.
Intermediate-mass-ratio black-hole binaries: numerical relativity meets perturbation theory.
Lousto, Carlos O; Nakano, Hiroyuki; Zlochower, Yosef; Campanelli, Manuela
2010-05-28
We study black-hole binaries in the intermediate-mass-ratio regime 0.01≲q≲0.1 with a new technique that makes use of nonlinear numerical trajectories and efficient perturbative evolutions to compute waveforms at large radii for the leading and nonleading (ℓ, m) modes. As a proof-of-concept, we compute waveforms for q=1/10. We discuss applications of these techniques for LIGO and VIRGO data analysis and the possibility that our technique can be extended to produce accurate waveform templates from a modest number of fully nonlinear numerical simulations.
NASA Astrophysics Data System (ADS)
Yamaguchi, Hideshi; Soeda, Takeshi
2015-03-01
A practical framework for the electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues related to the reproducibility and quantitative capability of measurements using this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests due to variation in operator skill or test environment. Recently, through evaluation of EBIC equipment performance and numerical optimization of equipment items, consistent acquisition of high-contrast images has become possible, improving reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.
Numerical solution of potential flow about arbitrary 2-dimensional multiple bodies
NASA Technical Reports Server (NTRS)
Thompson, J. F.; Thames, F. C.
1982-01-01
A procedure for the finite-difference numerical solution of the lifting potential flow about any number of arbitrarily shaped bodies is given. The solution is based on a technique of automatic numerical generation of a curvilinear coordinate system having coordinate lines coincident with the contours of all bodies in the field, regardless of their shapes and number. The effects of all numerical parameters involved are analyzed and appropriate values are recommended. Comparisons with analytic solutions for single Karman-Trefftz airfoils and a circular cylinder pair show excellent agreement. The technique of application of the boundary-fitted coordinate systems to the numerical solution of partial differential equations is illustrated.
Current options in inguinal hernia repair in adult patients
Kulacoglu, H
2011-01-01
Inguinal hernia is a very common problem. Surgical repair is the current approach, whereas asymptomatic or minimally symptomatic hernias may be good candidates for watchful waiting. Prophylactic antibiotics can be used in centers with a high rate of wound infection. Local anesthesia is a suitable and economic option for open repairs, and should be popularized in the day-case setting. Numerous repair methods have been described to date. Mesh repairs are superior to "nonmesh" tissue-suture repairs. Lichtenstein repair and endoscopic/laparoscopic techniques have similar efficacy. Standard polypropylene mesh is still the choice, whereas the use of partially absorbable lightweight meshes seems to have some advantages. PMID:22435019
Geometrically derived difference formulae for the numerical integration of trajectory problems
NASA Technical Reports Server (NTRS)
Mcleod, R. J. Y.; Sanz-Serna, J. M.
1981-01-01
The term 'trajectory problem' is taken to include problems that can arise, for instance, in connection with contour plotting, or in the application of continuation methods, or during phase-plane analysis. Geometrical techniques are used to construct difference methods for these problems to produce in turn explicit and implicit circularly exact formulae. Based on these formulae, a predictor-corrector method is derived which, when compared with a closely related standard method, shows improved performance. It is found that this latter method produces spurious limit cycles, and this behavior is partly analyzed. Finally, a simple variable-step algorithm is constructed and tested.
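For contrast with the geometrically derived formulae, the "closely related standard method" pattern can be sketched as a plain predictor-corrector pair. The example below uses a generic Euler predictor with a trapezoidal corrector on a circular trajectory problem; it illustrates the standard pattern only, not the circularly exact formulae derived in the article.

```python
# Sketch: explicit-Euler predictor / trapezoidal corrector on a circular
# trajectory problem (x' = -y, y' = x). Generic illustration of the standard
# predictor-corrector pattern, not the article's circularly exact formulae.
import numpy as np

def f(u):
    x, y = u
    return np.array([-y, x])

def pec_step(u, h):
    u_pred = u + h * f(u)                       # predictor (Euler)
    return u + 0.5 * h * (f(u) + f(u_pred))     # corrector (trapezoidal)

u, h = np.array([1.0, 0.0]), 0.01
for _ in range(int(2 * np.pi / h)):             # one full revolution
    u = pec_step(u, h)
print("radius drift after one orbit:", abs(np.hypot(*u) - 1.0))
```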
Raimondo, Joseph V; Heinemann, Uwe; de Curtis, Marco; Goodkin, Howard P; Dulla, Chris G; Janigro, Damir; Ikeda, Akio; Lin, Chou-Ching K; Jiruska, Premysl; Galanopoulou, Aristea S; Bernard, Christophe
2017-11-01
In vitro preparations are a powerful tool to explore the mechanisms and processes underlying epileptogenesis and ictogenesis. In this review, we critically review the numerous in vitro methodologies utilized in epilepsy research. We provide support for the inclusion of detailed descriptions of techniques, including often ignored parameters with unpredictable yet significant effects on study reproducibility and outcomes. In addition, we explore how recent developments in brain slice preparation relate to their use as models of epileptic activity.
Metamaterial-based half Maxwell fish-eye lens for broadband directive emissions
NASA Astrophysics Data System (ADS)
Dhouibi, Abdallah; Nawaz Burokur, Shah; de Lustrac, André; Priou, Alain
2013-01-01
The broadband directive emission from a metamaterial surface is numerically and experimentally reported. The metasurface, composed of non-resonant complementary closed ring structures, is designed to obey the refractive index of a half Maxwell fish-eye lens. A planar microstrip Vivaldi antenna is used as transverse magnetic polarized wave launcher for the lens. A prototype of the lens associated with its feed structure has been fabricated using standard lithography techniques. To experimentally demonstrate the broadband focusing properties and directive emissions, both the far-field radiation patterns and the near-field distributions have been measured. Measurements agree quantitatively and qualitatively with theoretical simulations.
A new world survey expression for cosmic ray vertical intensity vs. depth in standard rock
NASA Technical Reports Server (NTRS)
Crouch, M.
1985-01-01
The cosmic ray data on vertical intensity versus depth below 10^5 g/cm^2 are fitted to a five-parameter empirical formula to give an analytical expression for the interpretation of muon fluxes in underground measurements. This expression updates earlier published results and complements the more precise curves obtained by numerical integration or Monte Carlo techniques in which the fit is made to an energy spectrum at the top of the atmosphere. The expression is valid in the transitional region where neutrino-induced muons begin to be important, as well as at great depths where this component becomes dominant.
Statistical crystallography of surface micelle spacing
NASA Technical Reports Server (NTRS)
Noever, David A.
1992-01-01
The aggregation of the recently reported surface micelles of block polyelectrolytes is analyzed using techniques of statistical crystallography. A polygonal lattice (Voronoi mosaic) connects center-to-center points, yielding statistical agreement with crystallographic predictions; Aboav-Weaire's law and Lewis's law are verified. This protocol supplements the standard analysis of surface micelles leading to aggregation number determination and, when compared to numerical simulations, allows further insight into the random partitioning of surface films. In particular, agreement with Lewis's law has been linked to the geometric packing requirements of filling two-dimensional space which compete with (or balance) physical forces such as interfacial tension, electrostatic repulsion, and van der Waals attraction.
Alvermann, A; Fehske, H
2009-04-17
We propose a general numerical approach to open quantum systems with a coupling to bath degrees of freedom. The technique combines the methodology of polynomial expansions of spectral functions with the sparse grid concept from interpolation theory. Thereby we construct a Hilbert space of moderate dimension to represent the bath degrees of freedom, which allows us to perform highly accurate and efficient calculations of static, spectral, and dynamic quantities using standard exact diagonalization algorithms. The strength of the approach is demonstrated for the phase transition, critical behavior, and dissipative spin dynamics in the spin-boson model.
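The polynomial-expansion ingredient of the approach above can be illustrated with a small kernel-polynomial example: stochastic Chebyshev moments of a random Hermitian matrix, Jackson damping, and a reconstructed density of states. The sparse-grid bath construction itself is not reproduced; the matrix size, moment count, and single random vector are illustrative choices.

```python
# Sketch: Chebyshev (kernel polynomial) expansion of a spectral function for a
# small random Hermitian matrix; illustrates the polynomial-expansion idea only.
import numpy as np

rng = np.random.default_rng(2)
n, n_mom = 200, 128
A = rng.standard_normal((n, n)); H = (A + A.T) / 2
H /= 1.1 * np.max(np.abs(np.linalg.eigvalsh(H)))   # rescale spectrum into (-1, 1)

# Stochastic Chebyshev moments mu_m ~ Tr T_m(H) / n from one random vector.
r = rng.choice([-1.0, 1.0], n)
v0, v1 = r, H @ r
mu = np.empty(n_mom); mu[0] = r @ v0 / n; mu[1] = r @ v1 / n
for m in range(2, n_mom):
    v0, v1 = v1, 2 * H @ v1 - v0                   # Chebyshev recurrence
    mu[m] = r @ v1 / n

# Jackson damping suppresses Gibbs oscillations.
m_arr = np.arange(n_mom); N = n_mom + 1
g = ((N - m_arr) * np.cos(np.pi * m_arr / N)
     + np.sin(np.pi * m_arr / N) / np.tan(np.pi / N)) / N

x = np.linspace(-0.99, 0.99, 400)
T = np.cos(np.outer(np.arange(n_mom), np.arccos(x)))  # T_m(x)
dos = mu[0] * g[0] + 2 * np.sum((g[1:] * mu[1:])[:, None] * T[1:], axis=0)
dos /= np.pi * np.sqrt(1 - x**2)
print("estimated DOS at E=0:", dos[len(x) // 2])
```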
A new experimental method to determine the sorption isotherm of a liquid in a porous medium.
Ouoba, Samuel; Cherblanc, Fabien; Cousin, Bruno; Bénet, Jean-Claude
2010-08-01
Sorption from the vapor phase is an important factor controlling the transport of volatile organic compounds (VOCs) in the vadose zone. Therefore, an accurate description of sorption behavior is essential to predict the ultimate fate of contaminants. Several measurement techniques are available in the case of water, however, when dealing with VOCs, the determination of sorption characteristics generally relies on gas chromatography. To avoid some drawbacks associated with this technology, we propose a new method to determine the sorption isotherm of any liquid compounds adsorbed in a soil. This method is based on standard and costless transducers (gas pressure, temperature) leading to a simple and transportable experimental device. A numerical estimation underlines the good accuracy and this technique is validated on two examples. Finally, this method is applied to determine the sorption isotherm of three liquid compounds (water, heptane, and trichloroethylene) in a clayey soil.
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Rosen, I. G.
1988-01-01
In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.
The Doghouse Plot: History, Construction Techniques, and Application
NASA Astrophysics Data System (ADS)
Wilson, John Robert
The Doghouse Plot visually represents an aircraft's performance during combined turn-climb maneuvers. The Doghouse Plot completely describes the turn-climb capability of an aircraft; a single plot demonstrates the relationship between climb performance, turn rate, turn radius, stall margin, and bank angle. Using NASA legacy codes, Empirical Drag Estimation Technique (EDET) and Numerical Propulsion System Simulation (NPSS), it is possible to reverse engineer sufficient basis data for commercial and military aircraft to construct Doghouse Plots. Engineers and operators can then use these to assess their aircraft's full performance envelope. The insight gained from these plots can broaden the understanding of an aircraft's performance and, in turn, broaden the operational scope of some aircraft that would otherwise be limited by the simplifications found in their Airplane Flight Manuals (AFM). More importantly, these plots can build on the current standards of obstacle avoidance and expose risks in operation.
Multiresolution representation and numerical algorithms: A brief review
NASA Technical Reports Server (NTRS)
Harten, Amiram
1994-01-01
In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
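A minimal example of the scale-coefficient thresholding idea, assuming a one-level Haar transform on a smooth 1D signal: small detail coefficients are dropped and the signal is reconstructed from what remains.

```python
# Sketch: one-level Haar transform with hard thresholding of small-scale
# coefficients, a minimal instance of the compression idea reviewed above.
import numpy as np

def haar_forward(u):
    avg = (u[0::2] + u[1::2]) / 2.0
    det = (u[0::2] - u[1::2]) / 2.0
    return avg, det

def haar_inverse(avg, det):
    u = np.empty(2 * avg.size)
    u[0::2], u[1::2] = avg + det, avg - det
    return u

x = np.linspace(0.0, 1.0, 1024)
u = np.sin(2 * np.pi * x) + 0.2 * np.sin(20 * np.pi * x)

avg, det = haar_forward(u)
eps = 1e-3
det_c = np.where(np.abs(det) > eps, det, 0.0)    # drop small coefficients
u_rec = haar_inverse(avg, det_c)

kept = np.count_nonzero(det_c)
print(f"kept {kept}/{det.size} detail coefficients,"
      f" max error {np.max(np.abs(u - u_rec)):.2e}")
```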
Numerical solution of a spatio-temporal gender-structured model for hantavirus infection in rodents.
Bürger, Raimund; Chowell, Gerardo; Gavilán, Elvis; Mulet, Pep; Villada, Luis M
2018-02-01
In this article we describe the transmission dynamics of hantavirus in rodents using a spatio-temporal susceptible-exposed-infective-recovered (SEIR) compartmental model that distinguishes between male and female subpopulations [L.J.S. Allen, R.K. McCormack and C.B. Jonsson, Bull. Math. Biol. 68 (2006), 511-524]. Both subpopulations are assumed to differ in their movement with respect to local variations in the densities of their own and the opposite gender group. Three alternative models for the movement of the male individuals are examined. In some cases the movement is not only directed by the gradient of a density (as in the standard diffusive case), but also by a non-local convolution of density values as proposed, in another context, in [R.M. Colombo and E. Rossi, Commun. Math. Sci., 13 (2015), 369-400]. An efficient numerical method for the resulting convection-diffusion-reaction system of partial differential equations is proposed. This method involves techniques of weighted essentially non-oscillatory (WENO) reconstructions in combination with implicit-explicit Runge-Kutta (IMEX-RK) methods for time stepping. The numerical results demonstrate significant differences in the spatio-temporal behavior predicted by the different models, which suggest future research directions.
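The implicit-explicit splitting used above can be sketched on a scalar 1D diffusion-reaction problem: diffusion is advanced implicitly, the reaction explicitly. This first-order IMEX-Euler sketch stands in for the paper's full WENO/IMEX-RK scheme for the gender-structured SEIR system; all parameters are illustrative.

```python
# Sketch: first-order IMEX step for u_t = D u_xx + r u (1 - u); implicit
# diffusion, explicit logistic reaction. Illustrative parameters throughout.
import numpy as np

nx, L, D, r, dt, nt = 200, 1.0, 1e-3, 5.0, 1e-3, 500
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)
u = np.exp(-200 * (x - 0.5) ** 2)          # localized initial density (stand-in)

# Implicit operator (I - dt*D*Lap) with homogeneous Neumann boundaries.
lap = (np.diag(-2 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
       + np.diag(np.ones(nx - 1), -1)) / dx**2
lap[0, 1] = lap[-1, -2] = 2 / dx**2        # ghost-point reflecting boundaries
A = np.eye(nx) - dt * D * lap

for _ in range(nt):
    rhs = u + dt * r * u * (1.0 - u)       # explicit reaction
    u = np.linalg.solve(A, rhs)            # implicit diffusion

print("total mass:", np.trapz(u, x))
```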
NASA Astrophysics Data System (ADS)
Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.
2018-03-01
In this paper, numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analyses of the numerically modelled results was developed with an aim to presenting the analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validation of the results from the explicit FE modelling of the bearing.
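The standard post-processing chain named above (power spectrum and spectrogram) can be sketched on a synthetic stand-in for the bearing signal: an impulse train at an assumed defect rate exciting a structural resonance, plus noise. The defect rate, resonance frequency, and noise level are illustrative assumptions, not values from the paper.

```python
# Sketch: power spectrum (Welch) and spectrogram of a synthetic stand-in for a
# bearing vibration signal with periodic outer-race defect impulses.
import numpy as np
from scipy import signal

fs, T = 20_000, 2.0                       # sample rate [Hz], duration [s]
t = np.arange(0, T, 1 / fs)
bpfo = 87.0                               # assumed outer-race defect rate [Hz]

# Impulse train at the defect rate convolved with a decaying 3 kHz ring-down.
impulses = (np.floor(t * bpfo) != np.floor((t - 1 / fs) * bpfo)).astype(float)
ring = np.exp(-t[:200] * 800) * np.sin(2 * np.pi * 3000 * t[:200])
x = np.convolve(impulses, ring, mode="same") + 0.05 * np.random.randn(t.size)

f_psd, Pxx = signal.welch(x, fs=fs, nperseg=4096)            # power spectrum
f_sp, t_sp, Sxx = signal.spectrogram(x, fs=fs, nperseg=1024) # time-frequency map
print("dominant PSD frequency [Hz]:", f_psd[np.argmax(Pxx)])
print("spectrogram shape:", Sxx.shape)
```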
NASA Technical Reports Server (NTRS)
Chaussee, Denny S.
1993-01-01
The steady 3D viscous flow past the ONERA M6 wing and a slender delta wing-body with trailing-edge control surfaces has been computed. A cell-centered finite-volume Navier-Stokes patched zonal method has been used for the numerical simulation. Both diagonalized and LU-SGS schemes have been implemented. Besides the standard nonplanar zonal interfacing techniques, a new virtual-zone capability has been employed. For code validation, the transonic flow past the ONERA M6 wing is calculated for angles of attack of 3.06 deg and 5.06 deg and compared with the available experiments. The wing-body computational results are compared with experimental data for both trailing-edge flaps deflected. The experimental flow conditions are a freestream Mach number of 0.4, a turbulent Reynolds number of 5.41 million based on a mean aerodynamic chord of 25.959 inches, an adiabatic wall, and angles of attack varying from 0 deg to 23.85 deg. The computational results are presented for the 23.85 deg angle-of-attack case. The effects of the base flow due to a model sting, the varying second- and fourth-order numerical dissipation, and the turbulence model are all considered.
Numerical study of water mitigation effects on blast wave
NASA Astrophysics Data System (ADS)
Cheng, M.; Hung, K. C.; Chong, O. Y.
2005-11-01
The mitigating effect of a water wall on the generation and propagation of blast waves from a nearby explosive has been investigated using a numerical approach. A multimaterial Eulerian finite element technique is used to study the influence of the design parameters, such as the water-to-explosive weight ratio, the water wall thickness, the air gap, and the cover area ratio of water, on the effectiveness of the water mitigation concept. In the computational model, the detonation gases are modelled with the standard Jones-Wilkins-Lee (JWL) equation of state. Water, on the other hand, is treated as a compressible fluid with the Mie-Gruneisen equation of state. The validity of the computational model is checked against a limited amount of available experimental data, and the influence of mesh size on the convergence of results is also discussed. From the results of the extensive numerical experiments, it is deduced that, firstly, the presence of an air gap reduces the effectiveness of the water mitigator. Secondly, the higher the water-to-explosive weight ratio, the more significant is the reduction in peak pressure of the explosion. Typically, water-to-explosive weight ratios in the range of 1-3 are found to be most practical.
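The JWL form mentioned above is standard; a small evaluator is sketched below. The parameter set is a commonly quoted one for TNT from the open literature; treat the numbers as illustrative assumptions and verify against a primary source before any real use.

```python
# Sketch: the standard JWL equation of state for detonation products,
# p(V, E), with V the relative volume and E the energy per unit reference
# volume. Parameters are a commonly quoted TNT set (illustrative; verify).
import numpy as np

def jwl_pressure(V, E, A=3.712e11, B=3.231e9, R1=4.15, R2=0.95, omega=0.30):
    """JWL pressure [Pa]; A and B in Pa, E in J per m^3 of reference volume."""
    return (A * (1.0 - omega / (R1 * V)) * np.exp(-R1 * V)
            + B * (1.0 - omega / (R2 * V)) * np.exp(-R2 * V)
            + omega * E / V)

V = np.linspace(0.6, 8.0, 5)
print(jwl_pressure(V, E=7.0e9))   # snapshot along an expansion (illustrative)
```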
Evaluation of digestion methods for analysis of trace metals in mammalian tissues and NIST 1577c.
Binder, Grace A; Metcalf, Rainer; Atlas, Zachary; Daniel, Kenyon G
2018-02-15
Digestion techniques for ICP analysis have been poorly studied for biological samples. This report describes an optimized method for analysis of trace metals that can be used across a variety of sample types. Digestion methods were tested and optimized with the analysis of trace metals in cancerous as compared to normal tissue as the end goal. Anthropological, forensic, oncological, and environmental research groups can employ this method reasonably cheaply and safely whilst still being able to compare between laboratories. We examined combined HNO3 and H2O2 digestion at 170 °C for human, porcine, and bovine samples, whether frozen, fresh, or lyophilized powder. Little discrepancy is found between microwave digestion and PFA Teflon pressure vessels. The elements of interest (Cu, Zn, Fe, and Ni) yielded consistently higher and more accurate values on standard reference material than samples heated to 75 °C or samples that utilized HNO3 alone. Use of H2SO4 does not improve homogeneity of the sample and lowers precision during ICP analysis. High-temperature digestions (>165 °C) using a combination of HNO3 and H2O2 as outlined are proposed as a standard technique for all mammalian tissues, specifically human tissues, and yield greater than 300% higher values than samples digested at 75 °C regardless of the acid or acid combinations used. The proposed standardized technique is designed to accurately quantify potential discrepancies in metal loads between cancerous and healthy tissues and applies to numerous tissue studies requiring quick, effective, and safe digestions.
Schoolcraft, William; Meseguer, Marcos
2017-10-01
Infertility affects over 70 million couples globally. Access to, and interest in, assisted reproductive technologies is growing worldwide, with more couples seeking medical intervention to conceive, in particular by IVF. Despite numerous advances in IVF techniques since its first success in 1978, almost half of the patients treated remain childless. The multifactorial nature of IVF treatment means that success is dependent on many variables. Therefore, it is important to examine how each variable can be optimized to achieve the best possible outcomes for patients. The current approach to IVF is fragmented, with various protocols in use. A systematic approach to establishing optimum best practices may improve IVF success and live birth rates. Our vision of the future is that technological advancements in the laboratory setting are standardized and universally adopted to enable a gold standard of care. Implementation of best practices for laboratory procedures will enable clinicians to generate high-quality gametes, and to produce and identify gametes and embryos of maximum viability and implantation potential, which should contribute to improving take-home healthy baby rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Brennan T
2015-01-01
Turbine discharges at low-head short converging intakes are difficult to measure accurately. The proximity of the measurement section to the intake entrance admits large uncertainties related to asymmetry of the velocity profile, swirl, and turbulence. Existing turbine performance codes [10, 24] do not address this special case, and published literature is largely silent on rigorous evaluation of uncertainties associated with this measurement context. The American Society of Mechanical Engineers (ASME) Committee investigated the use of acoustic transit time (ATT), acoustic scintillation (AS), and current meter (CM) methods in a short converging intake at the Kootenay Canal Generating Station in 2009. Based on their findings, a standardized uncertainty analysis (UA) framework for the velocity-area method (specifically for CM measurements) is presented in this paper, given that CM is still the most fundamental and common type of measurement system. Typical sources of systematic and random errors associated with CM measurements are investigated, and the major sources of uncertainty are considered for evaluation: turbulence and velocity fluctuations, the numerical velocity integration technique (bi-cubic spline), and the number and placement of current meters. Since velocity measurements in a short converging intake are associated with complex nonlinear and time-varying uncertainties (e.g., Reynolds stress in fluid dynamics), simply applying the law of propagation of uncertainty is known to overestimate the measurement variance, while the Monte Carlo method does not. Therefore, a pseudo-Monte Carlo simulation method (the random flow generation technique [8]), initially developed for establishing upstream or initial conditions in large-eddy simulation (LES) and direct numerical simulation (DNS), is used to statistically determine the uncertainties associated with turbulence and velocity fluctuations. This technique is then combined with a bi-cubic spline interpolation method which converts point velocities into a continuous velocity distribution over the measurement domain. Subsequently, the number and placement of current meters are simulated to investigate the accuracy of the estimated flow rates using the numerical velocity-area integration method outlined in ISO 3354 [12]. The authors consider the statistics on generated flow rates, processed with bi-cubic interpolation and sensor simulations, to be combined uncertainties that already account for the effects of all three uncertainty sources. A preliminary analysis based on current meter data obtained through an upgrade acceptance test of a single unit located in a mainstem plant is presented.
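The bi-cubic spline integration step of the velocity-area method can be sketched directly: point velocities at current-meter positions are fitted with a bi-cubic spline and integrated over the measurement section to give a discharge. The velocity field, section dimensions, and meter layout below are synthetic assumptions for illustration only.

```python
# Sketch: velocity-area method with bi-cubic spline integration over a
# rectangular measurement section. Synthetic velocity field and meter grid.
import numpy as np
from scipy.interpolate import RectBivariateSpline

W, Hh = 10.0, 6.0                       # section width and height [m] (assumed)
y = np.linspace(0.5, W - 0.5, 7)        # current-meter columns
z = np.linspace(0.5, Hh - 0.5, 5)       # current-meter rows

# Synthetic axial velocity [m/s]: mildly skewed power-law-like profile.
Y, Z = np.meshgrid(y, z, indexing="ij")
v = 2.0 * (Y / W) ** 0.15 * (Z / Hh) ** 0.10

spline = RectBivariateSpline(y, z, v, kx=3, ky=3)   # bi-cubic fit
Q = spline.integral(0.0, W, 0.0, Hh)                # discharge [m^3/s]
print(f"estimated discharge: {Q:.2f} m^3/s")
```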
NASA Astrophysics Data System (ADS)
Minami, Setsuo; Ogawa, Ryota
1980-09-01
Results of the working project formed in JOERA (JAPAN OPTICAL ENGINEERING RESEARCH ASSOCIATION) from 1976 to 1978 are reported. The question, "What is the most reasonable number of mesh divisions of the entrance pupil for computing the monochromatic OTF, and the most economical sampling of spectral wavelengths for calculating the white-light MTF?", is important at the actual design stage for optimizing the conflicting relationship between numerical accuracy and computing time. We have examined the spectral characteristics of the OTF using some typical lenses, such as a photographic telephoto lens and a wide-angle retrofocus lens, clarified the structure of the white-light MTF, and found some techniques for obtaining reasonable numerical results. As a result of trial experiments to obtain agreement between measurements and calculations, a standard filter, which should be added to the MTF lens tester and whose spectral transmittance should be installed in the calculation, is proposed.
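The mesh-division question can be made concrete with a small diffraction-limited example: the PSF is the squared modulus of the Fourier transform of the sampled pupil, and the MTF is the normalized magnitude of the PSF's Fourier transform. The sketch below is monochromatic and aberration-free, with illustrative grid sizes; it only demonstrates how the computed MTF settles as the pupil mesh is refined, not the project's full white-light procedure.

```python
# Sketch: diffraction-limited monochromatic MTF from a sampled pupil; vary the
# number of samples across the pupil to see convergence with mesh division.
import numpy as np

def mtf_from_pupil(n_mesh, pad=4):
    n = n_mesh * pad
    u = np.linspace(-pad / 2, pad / 2, n)
    X, Y = np.meshgrid(u, u)
    pupil = (X**2 + Y**2 <= 0.25).astype(float)      # unit-diameter aperture
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    otf = np.abs(np.fft.fft2(np.fft.ifftshift(psf))) # autocorrelation theorem
    return otf[0, :n // 2] / otf[0, 0]               # normalized 1D MTF cut

for n_mesh in (16, 32, 64):                          # pupil mesh divisions
    print(n_mesh, mtf_from_pupil(n_mesh)[:4].round(4))
```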
A study of the viscous and nonadiabatic flow in radial turbines
NASA Technical Reports Server (NTRS)
Khalil, I.; Tabakoff, W.
1981-01-01
A method for analyzing the viscous nonadiabatic flow within turbomachine rotors is presented. The field analysis is based upon the numerical integration of the incompressible Navier-Stokes equations together with the energy equation over the rotor's blade-to-blade stream channels. The numerical code used to solve the governing equations employs a nonorthogonal boundary-fitted coordinate system that suits the most complicated blade geometries. Effects of turbulence are modeled with two equations: one expressing the development of the turbulence kinetic energy and the other its dissipation rate. The method of analysis is applied to a radial inflow turbine. The solution obtained indicates the severity of the complex interaction mechanism that occurs between different flow regimes (i.e., boundary layers, recirculating eddies, separation zones, etc.). Comparisons with inviscid flow solutions strongly demonstrate the inadequacy of using the latter with standard boundary layer techniques to obtain viscous flow details within turbomachine rotors. Capabilities and limitations of the present method of analysis are discussed.
Kurylyk, Barret L.; McKenzie, Jeffrey M; MacQuarrie, Kerry T. B.; Voss, Clifford I.
2014-01-01
Numerous cold regions water flow and energy transport models have emerged in recent years. Dissimilarities often exist in their mathematical formulations and/or numerical solution techniques, but few analytical solutions exist for benchmarking flow and energy transport models that include pore water phase change. This paper presents a detailed derivation of the Lunardini solution, an approximate analytical solution for predicting soil thawing subject to conduction, advection, and phase change. Fifteen thawing scenarios are examined by considering differences in porosity, surface temperature, Darcy velocity, and initial temperature. The accuracy of the Lunardini solution is shown to be proportional to the Stefan number. The analytical solution results obtained for soil thawing scenarios with water flow and advection are compared to those obtained from the finite element model SUTRA. Three problems, two involving the Lunardini solution and one involving the classic Neumann solution, are recommended as standard benchmarks for future model development and testing.
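The classic Neumann solution referenced above can be evaluated in a few lines in its conduction-only, one-phase special case (no water flow): a transcendental equation fixes the similarity constant, and the thaw front advances as X(t) = 2λ√(αt). The soil properties below are illustrative stand-ins, not values from the paper.

```python
# Sketch: one-phase Stefan (Neumann) solution for a thawing front, the
# conduction-only benchmark case. Property values are illustrative.
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

k, rho_c = 1.8, 2.5e6        # thawed conductivity [W/m/K], heat capacity [J/m^3/K]
alpha = k / rho_c            # thermal diffusivity [m^2/s]
Lv = 1.0e8                   # volumetric latent heat [J/m^3] (stand-in)
Ts, Tf = 5.0, 0.0            # surface and freezing temperatures [degC]

Ste = rho_c * (Ts - Tf) / Lv     # Stefan number
lam = brentq(lambda L: L * np.exp(L**2) * erf(L) - Ste / np.sqrt(np.pi),
             1e-8, 5.0)          # transcendental equation for lambda

t = np.array([1, 10, 100]) * 86400.0           # 1, 10, 100 days
print("thaw depth [m]:", 2 * lam * np.sqrt(alpha * t))
```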
NASA Astrophysics Data System (ADS)
Yang, Jia Sheng
2018-06-01
In this paper, we investigate a H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platforms stabilization. The main objective of this study is to reduce the control consumption as well as protect the actuator when satisfying the requirement of the system performance. First, we introduce a dynamic model of offshore platform with low order main modes based on mode reduction method in numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model by optimization algorithm, we use a relaxation method with matrix operations to transform this non-convex optimization model to be a convex optimization model. Thus, it could be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.
A solution to neural field equations by a recurrent neural network method
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2012-09-01
Neural field equations (NFE) are used to model the activity of neurons in the brain; they are introduced starting from the single-neuron 'integrate-and-fire' model. The neural continuum is spatially discretized for numerical studies, and the governing equations are modeled as a system of ordinary differential equations. In this article the recurrent neural network approach is used to solve this system of ODEs. This consists of a technique developed by combining the standard numerical method of finite differences with the Hopfield neural network. The architecture of the net, the energy function, the updating equations, and the algorithms are developed for the NFE model. A Hopfield neural network is then designed to minimize the energy function modeling the NFE. Results obtained from the Hopfield-finite-differences net show excellent performance in terms of accuracy and speed. The parallel nature of the Hopfield approach may make it easier to implement on fast parallel computers, giving it a speed advantage over traditional methods.
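The spatial-discretization step described above can be sketched with a standard method-of-lines reference solution: the neural field u_t = -u + w * f(u) (spatial convolution) is discretized on a grid and the resulting ODE system is integrated with an off-the-shelf solver. The article replaces this integration stage with a Hopfield-network energy minimization; the kernel, firing-rate function, and parameters below are illustrative assumptions.

```python
# Sketch: method-of-lines discretization of a 1D neural field equation,
# u_t = -u + w * f(u), solved with a standard ODE integrator as a reference.
import numpy as np
from scipy.integrate import solve_ivp

n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
w = (np.exp(-x**2) - (0.5 / 3) * np.exp(-x**2 / 9)) * dx   # lateral-inhibition kernel

def f(u):                                   # sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-8.0 * (u - 0.3)))

def rhs(t, u):
    conv = np.convolve(f(u), w, mode="same")  # discretized convolution term
    return -u + conv

u0 = 0.6 * np.exp(-x**2)                    # localized initial activity
sol = solve_ivp(rhs, (0.0, 20.0), u0, rtol=1e-6)
print("peak activity at t=20:", sol.y[:, -1].max())
```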
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
NASA Astrophysics Data System (ADS)
Alfonso, Lester; Zamora, Jose; Cruz, Pedro
2015-04-01
The stochastic approach to coagulation considers the coalescence process in a system of a finite number of particles enclosed in a finite volume. Within this approach, the full description of the system can be obtained from the solution of the multivariate master equation, which models the evolution of the probability distribution of the state vector for the number of particles of a given mass. Unfortunately, due to its complexity, only limited results have been obtained for certain types of kernels and monodisperse initial conditions. In this work, a novel numerical algorithm for the solution of the multivariate master equation for stochastic coalescence that works for any type of kernel and initial condition is introduced. The performance of the method was checked by comparing the numerically calculated particle mass spectrum with analytical solutions obtained for the constant and sum kernels, with excellent correspondence between the analytical and numerical solutions. In order to increase the speedup of the algorithm, software parallelization techniques with the OpenMP standard were used, along with an implementation that takes advantage of new accelerator technologies. Simulation results show an important speedup of the parallelized algorithms. This study was funded by a grant from Consejo Nacional de Ciencia y Tecnologia de Mexico SEP-CONACYT CB-131879. The authors also thank LUFAC® Computacion SA de CV for CPU time and all the support provided.
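A cheap cross-check for master-equation mass spectra is a direct stochastic simulation of the same finite-volume coalescence process. The sketch below is a Gillespie-type Monte Carlo realization with a constant kernel and monodisperse initial condition; it is a complementary sampling technique, not the master-equation solver of the paper, and the kernel value and particle count are illustrative.

```python
# Sketch: Gillespie-type stochastic simulation of coalescence with a constant
# kernel in a finite volume (Marcus-Lushnikov picture); illustrative parameters.
import numpy as np

rng = np.random.default_rng(3)
K = 1.0                                    # constant coalescence kernel
masses = list(np.ones(500))                # monodisperse initial condition
t = 0.0

while len(masses) > 50:
    n = len(masses)
    rate = K * n * (n - 1) / 2.0           # total pair-coalescence rate
    t += rng.exponential(1.0 / rate)       # exponential waiting time
    i, j = rng.choice(n, size=2, replace=False)
    masses[i] += masses[j]                 # coalesce the sampled pair
    masses.pop(j)

print(f"stopped at t={t:.3e} with {len(masses)} particles,"
      f" largest mass {max(masses):.0f}")
```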
Ultrasensitive detection of atmospheric trace gases using frequency modulation spectroscopy
NASA Technical Reports Server (NTRS)
Cooper, David E.
1986-01-01
Frequency modulation (FM) spectroscopy is a new technique that promises to significantly extend the state of the art in point detection of atmospheric trace gases. FM spectroscopy is essentially a balanced-bridge optical heterodyne approach in which a small optical absorption or dispersion from an atomic or molecular species of interest generates an easily detected radio frequency (RF) signal. This signal can be monitored using standard RF signal processing techniques and is, in principle, limited only by the shot noise generated in the photodetector by the laser source employed. The use of very high modulation frequencies which exceed the spectral width of the probed absorption line distinguishes this technique from the well-known derivative spectroscopy, which makes use of low (kHz) modulation frequencies. FM spectroscopy was recently extended to the 10 micron infrared (IR) spectral region, where numerous polyatomic molecules exhibit characteristic vibrational-rotational bands. In conjunction with tunable semiconductor diode lasers, the quantum-noise-limited sensitivity of the technique should allow for the detection of absorptions as small as 10^-8 in the IR spectral region. This sensitivity would allow for the detection of H2O2 at concentrations as low as 1 pptv with an integration time of 10 seconds.
NASA Astrophysics Data System (ADS)
Mazzoleni, Paolo; Matta, Fabio; Zappa, Emanuele; Sutton, Michael A.; Cigada, Alfredo
2015-03-01
This paper discusses the effect of pre-processing image blurring on the uncertainty of two-dimensional digital image correlation (DIC) measurements for the specific case of numerically-designed speckle patterns having particles with well-defined and consistent shape, size and spacing. Such patterns are more suitable for large measurement surfaces on large-scale specimens than traditional spray-painted random patterns without well-defined particles. The methodology consists of numerical simulations where Gaussian digital filters with varying standard deviation are applied to a reference speckle pattern. To simplify the pattern application process for large areas and increase contrast to reduce measurement uncertainty, the speckle shape, mean size and on-center spacing were selected to be representative of numerically-designed patterns that can be applied on large surfaces through different techniques (e.g., spray-painting through stencils). Such 'designer patterns' are characterized by well-defined regions of non-zero frequency content and non-zero peaks, and are fundamentally different from typical spray-painted patterns whose frequency content exhibits near-zero peaks. The effect of blurring filters is examined for constant, linear, quadratic and cubic displacement fields. Maximum strains between ±250 and ±20,000 με are simulated, thus covering a relevant range for structural materials subjected to service and ultimate stresses. The robustness of the simulation procedure is verified experimentally using a physical speckle pattern subjected to constant displacements. The stability of the relation between standard deviation of the Gaussian filter and measurement uncertainty is assessed for linear displacement fields at varying image noise levels, subset size, and frequency content of the speckle pattern. It is shown that bias error as well as measurement uncertainty are minimized through Gaussian pre-filtering. This finding does not apply to typical spray-painted patterns without well-defined particles, for which image blurring is only beneficial in reducing bias errors.
Sutlive, Thomas G; Mabry, Lance M; Easterling, Emmanuel J; Durbin, Jose D; Hanson, Stephen L; Wainner, Robert S; Childs, John D
2009-07-01
To determine whether military health care beneficiaries with low back pain (LBP) who are likely to respond successfully to spinal manipulation experience a difference in short-term clinical outcomes based on the manipulation technique that is used. Sixty patients with LBP identified as likely responders to manipulation underwent a standardized clinical examination and were randomized to receive a lumbopelvic (LP) or lumbar neutral gap (NG) manipulation technique. Outcome measures were a numeric pain rating scale and the modified Oswestry Disability Questionnaire. Both the LP and NG groups experienced statistically significant reductions in pain and disability at 48 hours postmanipulation. The improvements seen in each group were small because of the short follow-up. There were no statistically significant or clinically meaningful differences in pain or disability between the two groups. The two manipulation techniques used in this study were equally effective at reducing pain and disability when compared at 48 hours posttreatment. Clinicians may employ either technique for the treatment of LBP and can expect similar outcomes in those who satisfy the clinical prediction rule (CPR). Further research is required to determine whether differences exist at longer-term follow-up periods, after multiple treatment sessions, or in different clinical populations.
NASA Astrophysics Data System (ADS)
Bordovsky, Michal; Catrysse, Peter; Dods, Steven; Freitas, Marcio; Klein, Jackson; Kotacka, Libor; Tzolov, Velko; Uzunov, Ivan M.; Zhang, Jiazong
2004-05-01
We present the state of the art for commercial design and simulation software in the 'front end' of photonic circuit design. One recent advance is to extend the flexibility of the software by using more than one numerical technique on the same optical circuit. There are a number of popular and proven techniques for analysis of photonic devices. Examples of these techniques include the Beam Propagation Method (BPM), the Coupled Mode Theory (CMT), and the Finite Difference Time Domain (FDTD) method. For larger photonic circuits, it may not be practical to analyze the whole circuit by any one of these methods alone, but often some smaller part of the circuit lends itself to at least one of these standard techniques. Later the whole problem can be analyzed on a unified platform. This kind of approach can enable analysis for cases that would otherwise be cumbersome, or even impossible. We demonstrate solutions for more complex structures ranging from the sub-component layout, through the entire device characterization, to the mask layout and its editing. We also present recent advances in the above well established techniques. This includes the analysis of nano-particles, metals, and non-linear materials by FDTD, photonic crystal design and analysis, and improved models for high concentration Er/Yb co-doped glass waveguide amplifiers.
Numerical simulation of coupled electrochemical and transport processes in battery systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liaw, B.Y.; Gu, W.B.; Wang, C.Y.
1997-12-31
Advanced numerical modeling to simulate dynamic battery performance characteristics for several types of advanced batteries is being conducted using computational fluid dynamics (CFD) techniques. The CFD techniques provide efficient algorithms to solve a large set of highly nonlinear partial differential equations that represent the complex battery behavior governed by coupled electrochemical reactions and transport processes. The authors have recently successfully applied such techniques to model advanced lead-acid, Ni-Cd and Ni-MH cells. In this paper, the authors briefly discuss how the governing equations were numerically implemented, show some preliminary modeling results, and compare them with other modeling or experimental data reported in the literature. The authors describe the advantages and implications of using the CFD techniques and their capabilities in future battery applications.
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
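A small sketch contrasting two of the listed techniques, (i) least-squares prediction intervals and (iii) a Gaussian process model, on a hypothetical single input-single output data generating mechanism (scikit-learn assumed; all values illustrative, not the paper's experiments):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Hypothetical data generating mechanism (single input, single output)
x = np.sort(rng.uniform(0, 1, 30))
y = np.sin(4 * x) + 0.1 * rng.standard_normal(30)

# (i) cubic least-squares fit with a naive +/- 2*sigma_hat prediction band
coef = np.polyfit(x, y, deg=3)
resid = y - np.polyval(coef, x)
sigma_hat = resid.std(ddof=4)                 # 4 fitted parameters

# (iii) Gaussian process model with a fitted noise level
gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(0.01))
gp.fit(x[:, None], y)

x_new = np.linspace(0, 1, 5)
mean, std = gp.predict(x_new[:, None], return_std=True)
for xi, m, s in zip(x_new, mean, std):
    ls = np.polyval(coef, xi)
    print(f"x={xi:.2f}  LSQ: {ls:+.3f} +/- {2*sigma_hat:.3f}   GP: {m:+.3f} +/- {2*s:.3f}")
```

Note how the least-squares band is constant in width while the Gaussian process band widens away from the data, one of the extrapolation-behavior differences the paper examines.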
NASA Astrophysics Data System (ADS)
Harms, F.; Dalimier, E.; Vermeulen, P.; Fragola, A.; Boccara, A. C.
2012-03-01
Optical Coherence Tomography (OCT) is an efficient technique for in-depth optical biopsy of biological tissues, relying on interferometric selection of ballistic photons. Full-Field Optical Coherence Tomography (FF-OCT) is an alternative approach to Fourier-domain OCT (spectral or swept-source), allowing parallel acquisition of en-face optical sections. Using medium numerical aperture objectives, it is possible to reach an isotropic resolution of about 1×1×1 μm. After stitching a grid of acquired images, FF-OCT gives access to the architecture of the tissue, for both macroscopic and microscopic structures, in a non-invasive process, which makes the technique particularly suitable for applications in pathology. Here we report a multimodal approach to FF-OCT, combining two Full-Field techniques for collecting a backscattered endogenous OCT image and an exogenous fluorescence image in parallel. Considering pathological diagnosis of cancer, visualization of cell nuclei is of paramount importance. OCT images, even at the highest resolution, usually fail to identify individual nuclei due to the nature of the optical contrast used. We have built a multimodal optical microscope based on the combination of FF-OCT and Structured Illumination Microscopy (SIM). We used 30× immersion objectives, with a numerical aperture of 1.05, allowing for sub-micron transverse resolution. Fluorescent staining of nuclei was obtained using specific fluorescent dyes such as acridine orange. We present multimodal images of healthy and pathological skin tissue at various scales. This instrumental development paves the way for improvements of standard pathology procedures, as a faster, non-sacrificial, operator-independent digital optical method compared to frozen sections.
Thermo-mechanical toner transfer for high-quality digital image correlation speckle patterns
NASA Astrophysics Data System (ADS)
Mazzoleni, Paolo; Zappa, Emanuele; Matta, Fabio; Sutton, Michael A.
2015-12-01
The accuracy and spatial resolution of full-field deformation measurements performed through digital image correlation are greatly affected by the frequency content of the speckle pattern, which can be effectively controlled using particles with well-defined and consistent shape, size and spacing. This paper introduces a novel toner-transfer technique to impress a well-defined and repeatable speckle pattern on plane and curved surfaces of metallic and cement composite specimens. The speckle pattern is numerically designed, printed on paper using a standard laser printer, and transferred onto the measurement surface via a thermo-mechanical process. The tuning procedure to compensate for the difference between designed and toner-transferred actual speckle size is presented. Based on this evidence, the applicability of the technique is discussed with respect to surface material, dimensions and geometry. Proof of concept of the proposed toner-transfer technique is then demonstrated for the case of a quenched and partitioned welded steel plate subjected to uniaxial tensile loading, and for an aluminum plate exposed to temperatures up to 70% of the melting point of aluminum and past the melting point of typical printer toner powder.
GPU surface extraction using the closest point embedding
NASA Astrophysics Data System (ADS)
Kim, Mark; Hansen, Charles
2015-01-01
Isosurface extraction is a fundamental technique used for both surface reconstruction and mesh generation. One method to extract well-formed isosurfaces is a particle system; unfortunately, particle systems can be slow. In this paper, we introduce an enhanced parallel particle system that uses the closest point embedding as the surface representation to speed up the particle system for isosurface extraction. The closest point embedding is used in the Closest Point Method (CPM), a technique that uses a standard three dimensional numerical PDE solver on two dimensional embedded surfaces. To fully take advantage of the closest point embedding, it is coupled with a Barnes-Hut tree code on the GPU. This new technique produces well-formed, conformal unstructured triangular and tetrahedral meshes from labeled multi-material volume datasets. Further, this new parallel implementation of the particle system is faster than any known methods for conformal multi-material mesh extraction. The resulting speed-ups gained in this implementation can reduce the time from labeled data to mesh from hours to minutes and benefits users, such as bioengineers, who employ triangular and tetrahedral meshes.
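A toy sketch of the closest point embedding idea behind the CPM: data on a circle is diffused by alternating a standard Cartesian Laplacian step with re-extension to the closest points. Nearest-neighbour interpolation is used here for brevity (real CPM implementations use higher-order interpolation), and this is in no way the paper's GPU code:

```python
import numpy as np

# Closest point embedding of the unit circle on a 2-D Cartesian grid.
h = 0.05
x = np.arange(-2, 2 + h, h)
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2)
r[r == 0] = 1e-12
CPx, CPy = X / r, Y / r              # closest point on the circle

u = np.cos(np.arctan2(CPy, CPx))     # surface data, extended off-surface

def cp_extend(u, px, py):
    # re-extension: sample u at each grid point's closest surface point
    # (nearest-neighbour interpolation for simplicity)
    i = np.clip(np.round((py + 2) / h).astype(int), 0, len(x) - 1)
    j = np.clip(np.round((px + 2) / h).astype(int), 0, len(x) - 1)
    return u[i, j]

dt = 0.1 * h**2
for _ in range(100):
    # standard 5-point Laplacian in the embedding space
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h**2
    u = u + dt * lap
    u = cp_extend(u, CPx, CPy)       # re-extension step of the CPM
print("max |u| after diffusion:", np.abs(u).max())
```

Because the extension keeps the field constant along surface normals, the flat-space Laplacian restricted to the surface acts as the Laplace-Beltrami operator, which is what lets a standard 3-D solver evolve a 2-D surface PDE.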
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2008-01-01
Stereo Imaging Velocimetry (SIV) is a NASA Glenn Research Center (GRC) developed fluid physics technique for measuring three-dimensional (3-D) velocities in any optically transparent fluid that can be seeded with tracer particles. SIV provides a means to measure 3-D fluid velocities quantitatively and qualitatively at many points. This technique provides full-field 3-D analysis of any optically clear fluid or gas experiment using standard off-the-shelf CCD cameras, yielding accurate and reproducible 3-D velocity profiles for experiments that require 3-D analysis. A flame ball is a steady flame in a premixed combustible atmosphere which, due to the transport properties (low Lewis-number) of the mixture, does not propagate but is instead supplied by diffusive transport of the reactants, forming a premixed flame. This flame geometry presents a unique environment for testing combustion theory. We present our analysis of flame ball phenomena utilizing SIV technology in order to accurately calculate the 3-D position of flame ball(s) during an experiment, which can be used for direct comparison with numerical simulations.
Recording the adult zebrafish cerebral field potential during pentylenetetrazole seizures
Pineda, Ricardo; Beattie, Christine E.; Hall, Charles W.
2017-01-01
Although the zebrafish is increasingly used as a model organism to study epilepsy, no standard electrophysiological technique for recording electrographic seizures in adult fish exists. The purpose of this paper is to introduce a readily implementable technique for recording pentylenetetrazole seizures in the adult zebrafish. We find that we can consistently record a high quality field potential over the zebrafish cerebrum using an amplification of 5000 V/V and bandpass filtering at corner frequencies of 1.6 and 16 Hz. The cerebral field potential recordings show consistent features in the baseline, pre-seizure, seizure and post-seizure time periods that can be easily recognized by visual inspection as is the case with human and rodent electroencephalogram. Furthermore, numerical analysis of the field potential at the time of seizure onset reveals an increase in the total power, bandwidth and peak frequency in the power spectrum, as is also the case with human and rodent electroencephalogram. The techniques presented herein stand to advance the utility of the adult zebrafish in the study of epilepsy by affording an equivalent to the electroencephalogram used in mammalian models and human patients. PMID:21689682
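A minimal sketch of the recording pipeline described above, assuming SciPy and a hypothetical 1 kHz sampling rate; only the gain (5000 V/V) and corner frequencies (1.6 and 16 Hz) come from the paper, and the trace here is synthetic noise rather than an electrode recording:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

fs = 1000.0                     # assumed sampling rate, Hz (not from the paper)
gain = 5000.0                   # amplification reported in the paper, V/V

# Bandpass matching the reported corner frequencies of 1.6 and 16 Hz
sos = butter(2, [1.6, 16.0], btype="bandpass", fs=fs, output="sos")

rng = np.random.default_rng(2)
raw = rng.standard_normal(60 * int(fs))      # stand-in for an electrode trace
fp = sosfiltfilt(sos, raw / gain)            # de-amplified, filtered field potential

# Seizure-onset metrics used in the paper: total power and peak frequency
f, pxx = welch(fp, fs=fs, nperseg=4096)
band = (f >= 1.0) & (f <= 30.0)
total_power = np.trapz(pxx[band], f[band])
peak_freq = f[band][np.argmax(pxx[band])]
print(f"total power={total_power:.3e}  peak frequency={peak_freq:.2f} Hz")
```

Computing these power-spectrum metrics in sliding windows would reproduce the kind of onset detection contrasted against visual inspection in the study.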
NASA Astrophysics Data System (ADS)
Safaei Pirooz, Amir A.; Flay, Richard G. J.
2018-03-01
We evaluate the accuracy of the speed-up provided in several wind-loading standards by comparison with wind-tunnel measurements and numerical predictions, which are carried out at a nominal scale of 1:500 and full-scale, respectively. Airflow over two- and three-dimensional bell-shaped hills is numerically modelled using the Reynolds-averaged Navier-Stokes method with a pressure-driven atmospheric boundary layer and three different turbulence models. Investigated in detail are the effects of grid size on the speed-up and flow separation, as well as the resulting uncertainties in the numerical simulations. Good agreement is obtained between the numerical prediction of speed-up, as well as the wake region size and location, with that according to large-eddy simulations and the wind-tunnel results. The numerical results demonstrate the ability to predict the airflow over a hill with good accuracy with considerably less computational time than for large-eddy simulation. Numerical simulations for a three-dimensional hill show that the speed-up and the wake region decrease significantly when compared with the flow over two-dimensional hills due to the secondary flow around three-dimensional hills. Different hill slopes and shapes are simulated numerically to investigate the effect of hill profile on the speed-up. In comparison with more peaked hill crests, flat-topped hills have a lower speed-up at the crest up to heights of about half the hill height, for which none of the standards gives entirely satisfactory values of speed-up. Overall, the latest versions of the National Building Code of Canada and the Australian and New Zealand Standard give the best predictions of wind speed over isolated hills.
Use of benefit-cost analysis in establishing Federal radiation protection standards: a review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, L.E.
1979-10-01
This paper complements other work which has evaluated the cost impacts of radiation standards on the nuclear industry. It focuses on the approaches to valuation of the health and safety benefits of radiation standards and the actual and appropriate processes of benefit-cost comparison. A brief historical review of the rationale(s) for the levels of radiation standards prior to 1970 is given. The Nuclear Regulatory Commission (NRC) established numerical design objectives for light water reactors (LWRs). The process of establishing these numerical design criteria below the radiation protection standards set in 10 CFR 20 is reviewed. EPA's 40 CFR 190 environmental standards for the uranium fuel cycle have lower values than NRC's radiation protection standards in 10 CFR 20. The task of allocating EPA's 40 CFR 190 standards to the various portions of the fuel cycle was left to the implementing agency, NRC. So whether or not EPA's standards for the uranium fuel cycle are more stringent for LWRs than NRC's numerical design objectives depends on how EPA's standards are implemented by NRC. In setting the numerical levels in Appendix I to 10 CFR 50 and 40 CFR 190, NRC and EPA, respectively, focused on the costs of compliance with various levels of radiation control. A major portion of the paper is devoted to a review and critique of the available methods for valuing health and safety benefits. All current approaches try to estimate a constant value of life and use this to value the expected number of lives saved. This paper argues that it is more appropriate to seek a value of a reduction in risks to health and life that varies with the extent of these risks. Additional research to do this is recommended. (DC)
NASA Astrophysics Data System (ADS)
Jorris, Timothy R.
2007-12-01
To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Due to time-critical targets and multiple scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization then dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of this proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics, control limitations, and heating, waypoint, and no-fly zone constraints.
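The "discretize then dualize" direct-collocation idea can be illustrated on a toy minimum-time problem: a double integrator rather than a reentry vehicle, with trapezoidal collocation and SciPy's SLSQP standing in for the pseudospectral/nonlinear-programming machinery of the dissertation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy minimum-time problem: double integrator x'' = u, |u| <= 1,
# driven from rest at x=0 to rest at x=1 in minimum time T.
N = 20                                   # collocation nodes

def unpack(z):
    return z[:N], z[N:2*N], z[2*N:3*N], z[-1]

def defects(z):
    x, v, u, T = unpack(z)
    dt = T / (N - 1)
    # trapezoidal collocation defects for x' = v and v' = u
    dx = x[1:] - x[:-1] - 0.5 * dt * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - 0.5 * dt * (u[1:] + u[:-1])
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]          # boundary conditions
    return np.concatenate([dx, dv, bc])

z0 = np.concatenate([np.linspace(0, 1, N), np.full(N, 0.5),
                     np.zeros(N), [3.0]])
bounds = [(None, None)] * (2 * N) + [(-1, 1)] * N + [(0.1, 10.0)]
res = minimize(lambda z: z[-1], z0, method="SLSQP", bounds=bounds,
               constraints={"type": "eq", "fun": defects},
               options={"maxiter": 300})
print("minimum time:", res.x[-1])
```

The bang-bang analytic optimum for this toy problem is T = 2, which gives a quick check on the converged value; waypoint and no-fly-zone restrictions would enter the real problem as additional equality and inequality path constraints.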
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
Noniterative, unconditionally stable numerical techniques for solving condensational and dissolutional growth equations are given. Growth solutions are compared to Gear-code solutions for three cases when growth is coupled to reversible equilibrium chemistry. In all cases, ...
Evaluating uses of data mining techniques in propensity score estimation: a simulation study.
Setoguchi, Soko; Schneeweiss, Sebastian; Brookhart, M Alan; Glynn, Robert J; Cook, E Francis
2008-06-01
In propensity score modeling, it is a standard practice to optimize the prediction of exposure status based on the covariate information. In a simulation study, we examined in what situations analyses based on various types of exposure propensity score (EPS) models using data mining techniques such as recursive partitioning (RP) and neural networks (NN) produce unbiased and/or efficient results. We simulated data for a hypothetical cohort study (n = 2000) with a binary exposure/outcome and 10 binary/continuous covariates with seven scenarios differing by non-linear and/or non-additive associations between exposure and covariates. EPS models used logistic regression (LR) (all possible main effects), RP1 (without pruning), RP2 (with pruning), and NN. We calculated c-statistics (C), standard errors (SE), and bias of exposure-effect estimates from outcome models for the PS-matched dataset. Data mining techniques yielded higher C than LR (mean: NN, 0.86; RP1, 0.79; RP2, 0.72; and LR, 0.76). SE tended to be greater in models with higher C. Overall bias was small for each strategy, although NN estimates tended to be the least biased. C was not correlated with the magnitude of bias (correlation coefficient [COR] = -0.3, p = 0.1) but was correlated with increased SE (COR = 0.7, p < 0.001). Effect estimates from EPS models by simple LR were generally robust. NN models generally provided the least numerically biased estimates. C was not associated with the magnitude of bias but was associated with increased SE.
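A minimal sketch of the baseline EPS workflow (logistic regression with all main effects, its c-statistic, and 1:1 propensity matching), using a hypothetical simulated cohort and scikit-learn; it is not the study's simulation code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n = 2000                                   # cohort size used in the simulations

# Hypothetical cohort: 10 covariates, exposure driven by three of them
X = rng.standard_normal((n, 10))
logit = 0.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2]
exposed = rng.random(n) < 1 / (1 + np.exp(-logit))

# EPS model: plain logistic regression with all main effects
eps = LogisticRegression(max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]
print("c-statistic:", round(roc_auc_score(exposed, eps), 3))

# 1:1 nearest-neighbour matching of exposed to unexposed on the EPS
nn = NearestNeighbors(n_neighbors=1).fit(eps[~exposed].reshape(-1, 1))
dist, _ = nn.kneighbors(eps[exposed].reshape(-1, 1))
print("matched pairs:", int(exposed.sum()),
      " mean |EPS difference|:", float(dist.mean()))
```

In the full study design, the outcome model would then be fit on the matched set and the exposure-effect estimate compared against the known simulated truth to quantify bias and SE for each EPS strategy.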
Olama, Mohammed M.; Ma, Xiao; Killough, Stephen M.; ...
2015-03-12
In recent years, there has been great interest in using hybrid spread-spectrum (HSS) techniques for commercial applications, particularly in the Smart Grid, in addition to their inherent uses in military communications. This is because HSS can accommodate high data rates with high link integrity, even in the presence of significant multipath effects and interfering signals. A highly useful form of this transmission technique for many types of command, control, and sensing applications is the specific code-related combination of standard direct sequence modulation with fast frequency hopping, denoted hybrid DS/FFH, wherein multiple frequency hops occur within a single data-bit time. In this paper, error-probability analyses are performed for a hybrid DS/FFH system over standard Gaussian and fading-type channels, progressively including the effects from wide- and partial-band jamming, multi-user interference, and varying degrees of Rayleigh and Rician fading. In addition, an optimization approach is formulated that minimizes the bit-error performance of a hybrid DS/FFH communication system and solves for the resulting system design parameters. The optimization objective function is non-convex and can be solved by applying the Karush-Kuhn-Tucker conditions. We also present our efforts toward exploring the design, implementation, and evaluation of a hybrid DS/FFH radio transceiver using a single FPGA. Numerical and experimental results are presented under widely varying design parameters to demonstrate the adaptability of the waveform for varied harsh smart grid RF signal environments.
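As a textbook-level illustration (far simpler than the paper's full hybrid DS/FFH analysis), the bit-error rate of DS-BPSK under wideband noise jamming can be approximated by folding the jammer into the noise density after despreading; the processing gain and jammer-to-signal ratio below are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def q_func(x):
    # Gaussian Q-function, Q(x) = P(Z > x)
    return norm.sf(x)

# Hypothetical link parameters (illustrative only):
G_p = 128                      # DS processing gain
js_ratio = 10.0                # jammer-to-signal power ratio, J/S

# After despreading, Eb/N_J ~= G_p / (J/S) for a wideband noise jammer
eb_nj = G_p / js_ratio

for eb_n0_db in range(0, 12, 2):
    eb_n0 = 10 ** (eb_n0_db / 10)
    eff = 1 / (1 / eb_n0 + 1 / eb_nj)      # combined Eb/(N0 + N_J)
    ber = q_func(np.sqrt(2 * eff))         # BPSK over the combined noise
    print(f"Eb/N0={eb_n0_db:2d} dB  BER={ber:.3e}")
```

The fast-frequency-hopping dimension, partial-band jamming, and fading channels analyzed in the paper add further averaging over hop frequencies and channel states on top of this baseline.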
Metrology of vibration measurements by laser techniques
NASA Astrophysics Data System (ADS)
von Martens, Hans-Jürgen
2008-06-01
Metrology as the art of careful measurement has been understood as uniform methodology for measurements in natural sciences, covering methods for the consistent assessment of experimental data and a corpus of rules regulating application in technology and in trade and industry. The knowledge, methods and tools available for precision measurements can be exploited for measurements at any level of uncertainty in any field of science and technology. A metrological approach to the preparation, execution and evaluation (including expression of uncertainty) of measurements of translational and rotational motion quantities using laser interferometer methods and techniques will be presented. The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and upgraded ISO standards are reviewed with respect to their suitability for ensuring traceable vibration measurements and calibrations in an extended frequency range of 0.4 Hz to higher than 100 kHz. Using adequate vibration exciters to generate sufficient displacement or velocity amplitudes, the upper frequency limits of the laser interferometer methods specified in ISO 16063-11 for frequencies <= 10 kHz can be expanded to 100 kHz and beyond. A comparison of different methods simultaneously used for vibration measurements at 100 kHz will be demonstrated. A statistical analysis of numerous experimental results proves the highest accuracy achievable currently in vibration measurements by specific laser methods, techniques and procedures (i.e. measurement uncertainty 0.05 % at frequencies <= 10 kHz, <= 1 % up to 100 kHz).
PHYSICAL PARAMETERS OF STANDARD AND BLOWOUT JETS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pucci, Stefano; Romoli, Marco; Poletto, Giannina
2013-10-10
The X-ray Telescope on board the Hinode mission revealed the occurrence, in polar coronal holes, of much more numerous jets than previously indicated by the Yohkoh/Soft X-ray Telescope. These plasma ejections can be of two types, depending on whether they fit the standard reconnection scenario for coronal jets or if they include a blowout-like eruption. In this work, we analyze two jets, one standard and one blowout, that have been observed by the Hinode and STEREO experiments. We aim to infer differences in the physical parameters that correspond to the different morphologies of the events. To this end, we adopt spectroscopic techniques and determine the profiles of the plasma temperature, density, and outflow speed versus time and position along the jets. The blowout jet has a higher outflow speed, a marginally higher temperature, and is rooted in a stronger magnetic field region than the standard event. Our data provide evidence for recursively occurring reconnection episodes within both the standard and the blowout jet, pointing either to bursty reconnection or to reconnection occurring at different locations over the jet lifetimes. We make a crude estimate of the energy budget of the two jets and show how energy is partitioned among different forms. Also, we show that the magnetic energy that feeds the blowout jet is a factor of 10 higher than the magnetic energy that fuels the standard event.
Galerkin v. discrete-optimal projection in nonlinear model reduction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir
Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
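A compact sketch of the Galerkin side of this comparison: a POD basis is built from snapshots of a stand-in linear full-order model, and the projection is performed at the time-continuous level before discretizing. A discrete-optimal (LSPG/GNAT-type) ROM would instead minimize the time-discrete residual. All sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in linear full-order model: du/dt = A u (not a CFD residual)
n = 200
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
u0 = rng.standard_normal(n)

# Collect snapshots with backward Euler, then build a POD basis via SVD
dt, steps = 0.01, 200
M = np.eye(n) - dt * A
snaps, u = [u0], u0.copy()
for _ in range(steps):
    u = np.linalg.solve(M, u)
    snaps.append(u.copy())
Phi, s, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
V = Phi[:, :10]                          # 10-mode reduced basis

# Galerkin ROM: project the *continuous* operator, then time-discretize
Ar = V.T @ A @ V
ur = V.T @ u0
Mr = np.eye(10) - dt * Ar
for _ in range(steps):
    ur = np.linalg.solve(Mr, ur)
print("relative ROM error:", np.linalg.norm(V @ ur - u) / np.linalg.norm(u))
```

For this linear, dissipative example the two approaches nearly coincide; the paper's findings concern the nonlinear, hyperbolic regime where the order of projection and discretization genuinely matters.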
Nimmermark, Magnus O; Wang, John J; Maynard, Charles; Cohen, Mauricio; Gilcrist, Ian; Heitner, John; Hudson, Michael; Palmeri, Sebastian; Wagner, Galen S; Pahlm, Olle
2011-01-01
The study purpose is to determine whether numeric and/or graphic ST measurements added to the display of the 12-lead electrocardiogram (ECG) would influence cardiologists' decision to provide myocardial reperfusion therapy. Twenty ECGs with borderline ST-segment deviation during elective percutaneous coronary intervention and 10 controls before balloon inflation were included. Only 5 of the 20 ECGs during coronary balloon occlusion met the 2007 American Heart Association guidelines for ST-elevation myocardial infarction (STEMI). Fifteen cardiologists read 4 sets of these ECGs as the basis for a "yes/no" reperfusion therapy decision. Sets 1 and 4 were the same 12-lead ECGs alone. Set 2 also included numeric ST-segment measurements, and set 3 included both numeric and graphically displayed ST measurements ("ST Maps"). The mean (range) positive reperfusion decisions were 10.6 (2-15), 11.4 (1-19), 9.7 (2-14), and 10.7 (1-15) for sets 1 to 4, respectively. The accuracies of the observers for the 5 STEMI ECGs were 67%, 69%, and 77% for the standard format, the ST numeric format, and the ST graphic format, respectively. The improved detection rate (77% vs 67%) with addition of both numeric and graphic displays did achieve statistical significance (P < .025). The corresponding specificities for the 10 control ECGs were 85%, 79%, and 89%, respectively. In conclusion, a wide variation of reperfusion decisions was observed among clinical cardiologists, and their decisions were not altered by adding ST deviation measurements in numeric and/or graphic displays. Acute coronary occlusion detection rate was low for ECGs meeting STEMI criteria, and this was improved by adding ST-segment measurements in numeric and graphic forms. These results merit further study of the clinical value of this technique for improved acute coronary occlusion treatment decision support.
A Sensitivity Analysis of Circular Error Probable Approximation Techniques
1992-03-01
SENSITIVITY ANALYSIS OF CIRCULAR ERROR PROBABLE APPROXIMATION TECHNIQUES THESIS Presented to the Faculty of the School of Engineering of the Air Force...programming skills. Major Paul Auclair patiently advised me in this endeavor, and Major Andy Howell added numerous insightful contributions. I thank my...techniques. The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6]. Some
NASA Astrophysics Data System (ADS)
Jougnot, D.; Roubinet, D.; Linde, N.; Irving, J.
2016-12-01
Quantifying fluid flow in fractured media is a critical challenge in a wide variety of research fields and applications. To this end, geophysics offers a variety of tools that can provide important information on subsurface physical properties in a noninvasive manner. Most geophysical techniques infer fluid flow by data or model differencing in time or space (i.e., they are not directly sensitive to flow occurring at the time of the measurements). An exception is the self-potential (SP) method. When water flows in the subsurface, an excess of charge in the pore water that counterbalances electric charges at the mineral-pore water interface gives rise to a streaming current and an associated streaming potential. The latter can be measured with the SP technique, meaning that the method is directly sensitive to fluid flow. Whereas numerous field experiments suggest that the SP method may allow for the detection of hydraulically active fractures, suitable tools for numerically modeling streaming potentials in fractured media do not exist. Here, we present a highly efficient two-dimensional discrete-dual-porosity approach for solving the fluid-flow and associated self-potential problems in fractured domains. Our approach is specifically designed for complex fracture networks that cannot be investigated using standard numerical methods due to computational limitations. We then simulate SP signals associated with pumping conditions for a number of examples to show that (i) accounting for matrix fluid flow is essential for accurate SP modeling and (ii) the sensitivity of SP to hydraulically active fractures is intimately linked with fracture-matrix fluid interactions. This implies that fractures associated with strong SP amplitudes are likely to be hydraulically conductive, attracting fluid flow from the surrounding matrix.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY STANDARDS Federally Promulgated Water Quality Standards § 131.38 Establishment of Numeric Criteria for priority toxic pollutants for the State... Concentration (CMC) equals the highest concentration of a pollutant to which aquatic life can be exposed for a...
Numerical modelling techniques of soft soil improvement via stone columns: A brief review
NASA Astrophysics Data System (ADS)
Zukri, Azhani; Nazir, Ramli
2018-04-01
There are a number of numerical studies on stone column systems in the literature. Most of the studies involved two-dimensional analysis of stone column behaviour, while only a few used three-dimensional analysis. The most popular software utilised in those studies was Plaxis 2D and 3D. Other software used for numerical analysis includes DIANA, EXAMINE, ZSoil, ABAQUS, ANSYS, NISA, GEOSTUDIO, CRISP, TOCHNOG, CESAR, GEOFEM (2D & 3D), FLAC, and FLAC 3D. This paper will review the methodological approaches to modelling stone columns numerically, in both two-dimensional and three-dimensional analyses. The numerical techniques and suitable constitutive models used in the studies will also be discussed. In addition, the validation methods conducted to verify the numerical analyses will be presented. This review also serves as a guide for junior engineers through the applicable procedures and considerations when constructing and running a two- or three-dimensional numerical analysis, while also citing numerous relevant references.
A numerical study of mixing in supersonic combustors with hypermixing injectors
NASA Technical Reports Server (NTRS)
Lee, J.
1993-01-01
A numerical study was conducted to evaluate the performance of wall mounted fuel-injectors designed for potential Supersonic Combustion Ramjet (SCRAM-jet) engine applications. The focus of this investigation was to numerically simulate existing combustor designs for the purpose of validating the numerical technique and the physical models developed. Three different injector designs of varying complexity were studied to fully understand the computational implications involved in accurate predictions. A dual transverse injection system and two streamwise injector designs were studied. The streamwise injectors were designed with swept ramps to enhance fuel-air mixing and combustion characteristics at supersonic speeds without the large flow blockage and drag contribution of the transverse injection system. For this study, the Mass-Averaged Navier-Stokes equations and the chemical species continuity equations were solved. The computations were performed using a finite-volume implicit numerical technique and multiple block structured grid system. The interfaces of the multiple block structured grid systems were numerically resolved using the flux-conservative technique. Detailed comparisons between the computations and existing experimental data are presented. These comparisons show that numerical predictions are in agreement with the experimental data. These comparisons also show that a number of turbulence model improvements are needed for accurate combustor flowfield predictions.
NASA Astrophysics Data System (ADS)
Yun, Ana; Shin, Jaemin; Li, Yibao; Lee, Seunggyu; Kim, Junseok
We numerically investigate periodic traveling wave solutions for a diffusive predator-prey system with landscape features. The landscape features are modeled through the homogeneous Dirichlet boundary condition which is imposed at the edge of the obstacle domain. To effectively treat the Dirichlet boundary condition, we employ a robust and accurate numerical technique by using a boundary control function. We also propose a robust algorithm for calculating the numerical periodicity of the traveling wave solution. In numerical experiments, we show that periodic traveling waves which move out and away from the obstacle are effectively generated. We explain the formation of the traveling waves by comparing the wavelengths. The spatial asynchrony has been shown in quantitative detail for various obstacles. Furthermore, we apply our numerical technique to the complicated real landscape features.
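A minimal finite-difference sketch of this kind of setup, assuming lambda-omega-type kinetics as a stand-in for the predator-prey system and enforcing the homogeneous Dirichlet condition on a rectangular obstacle by masking (the paper's boundary control function is more accurate than this crude approach, and all parameters are illustrative):

```python
import numpy as np

# Oscillatory reaction-diffusion field with a rectangular obstacle held at
# zero (homogeneous Dirichlet), explicit finite differences, periodic edges.
n, h, dt = 128, 1.0, 0.2
rng = np.random.default_rng(5)
u = 0.1 * rng.standard_normal((n, n))
v = 0.1 * rng.standard_normal((n, n))
obstacle = np.zeros((n, n), bool)
obstacle[50:78, 50:78] = True            # the landscape feature

def lap(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / h**2

for _ in range(2000):
    r2 = u**2 + v**2
    # illustrative lambda-omega kinetics (a normal form for oscillatory
    # predator-prey dynamics near a Hopf bifurcation)
    du = lap(u) + u * (1 - r2) - v * (1 - 0.5 * r2)
    dv = lap(v) + v * (1 - r2) + u * (1 - 0.5 * r2)
    u, v = u + dt * du, v + dt * dv
    u[obstacle] = 0.0                    # Dirichlet condition at the obstacle
    v[obstacle] = 0.0
print("field range:", float(u.min()), float(u.max()))
```

Snapshots of u over time show waves organizing around the obstacle and propagating outward, the qualitative behavior whose periodicity and spatial asynchrony the paper quantifies.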
NASA Astrophysics Data System (ADS)
García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin
2014-10-01
Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has a second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
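A small sketch of the sinc kernel family, S_n(q) ∝ [sinc(πq/2)]^n with compact support q ∈ [0, 2], showing how the single exponent n controls the kernel shape; the 3-D normalization is computed numerically here and the code is illustrative, not the authors':

```python
import numpy as np
from scipy.integrate import quad

def sinc_kernel(q, n):
    # unnormalized sinc kernel with compact support on q in [0, 2];
    # np.sinc(x) = sin(pi x) / (pi x), so np.sinc(q/2) vanishes at q = 2
    return np.where(q < 2, np.sinc(q / 2) ** n, 0.0)

def norm_3d(n):
    # 3-D normalization constant B_n: integral of W over all space = 1
    val, _ = quad(lambda q: sinc_kernel(q, n) * 4 * np.pi * q**2, 0, 2)
    return 1.0 / val

for n in (3, 5, 7):          # larger exponent -> more peaked kernel
    B = norm_3d(n)
    print(f"n={n}:  B_n={B:.4f}   W(0)={B * float(sinc_kernel(0.0, n)):.4f}")
```

Raising the exponent in dense regions and lowering it in dilute ones is the lever the paper uses to equalize interpolation quality across the fluid.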
Use of the routing procedure to study dye and gas transport in the West Fork Trinity River, Texas
Jobson, Harvey E.; Rathbun, R.E.
1984-01-01
Rhodamine-WT dye, ethylene, and propane were injected at three sites along a 21.6-kilometer reach of the West Fork Trinity River below Fort Worth, Texas. Complete dye concentration versus time curves and peak gas concentrations were measured at three cross sections below each injection. The peak dye concentrations were located and samples were collected at about three-hour intervals for as many as six additional cross sections. These data were analyzed to determine the longitudinal dispersion coefficients as well as the gas desorption coefficients using both standard techniques and a numerical routing procedure. The routing procedure, using a Lagrangian transport model to minimize numerical dispersion, provided better estimates of the dispersion coefficient than did the method of moments. At a steady flow of about 0.76 m³/s, the dispersion coefficient varied from about 0.7 m²/s in a reach contained within a single deep pool to about 2.0 m²/s in a reach containing riffles and small pools. The bulk desorption coefficients computed using the routing procedure and the standard peak method were essentially the same. The liquid film coefficient could also be obtained using the routing procedure. Both the bulk desorption coefficient and the liquid film coefficient were much smaller in the pooled reach than in the reaches containing riffles.
Kavanagh, T; Dube, A; Albert, A; Gunka, V
2016-08-01
Between 10% and 22% of the general population experiences needle phobia. Needle-phobic parturients are at increased risk of adverse outcomes. We assessed the efficacy of topical Ametop™ (tetracaine 4%) gel in reducing the pain associated with local anesthetic skin infiltration before neuraxial block in non-laboring women. This was a prospective, randomized, double-blind, placebo-controlled study. Ametop™ or placebo was applied to the skin of the lower back at least 20 min before neuraxial block using a standardized technique with 1% lidocaine skin infiltration. The primary outcome was numeric pain score (0-10) 30 s after lidocaine infiltration. Groups were compared using Welch's t-test. Thirty-six subjects in each group were analyzed. There was a statistically significant difference in the mean (standard deviation) pain score between the Ametop™ and the placebo groups: 2.36±1.80 and 3.51±2.22, respectively (P=0.019). There were no significant adverse events. The mean numeric pain score in the Ametop™ group was 33% lower compared to the placebo group. Topical Ametop™ gel applied at least 20 min before local anesthetic infiltration of the skin prior to neuraxial block in elective cesarean delivery may be a useful adjunct in needle-phobic women.
NASA Astrophysics Data System (ADS)
Pandey, Rishi Kumar; Mishra, Hradyesh Kumar
2017-11-01
In this paper, a semi-analytic numerical technique for the solution of the time-space fractional telegraph equation is applied. The technique is based on the coupling of the homotopy analysis method and the Sumudu transform. It shows clear advantages over mesh-based methods such as the finite difference method, and also over polynomial methods such as the perturbation and Adomian decomposition methods. It easily transforms the complex fractional-order derivatives into the simple time domain and interprets the results consistently.
Lagrangian analysis of multiscale particulate flows with the particle finite element method
NASA Astrophysics Data System (ADS)
Oñate, Eugenio; Celigueta, Miguel Angel; Latorre, Salvador; Casas, Guillermo; Rossi, Riccardo; Rojek, Jerzy
2014-05-01
We present a Lagrangian numerical technique for the analysis of flows incorporating physical particles of different sizes. The numerical approach is based on the particle finite element method (PFEM) which blends concepts from particle-based techniques and the FEM. The basis of the Lagrangian formulation for particulate flows and the procedure for modelling the motion of small and large particles that are submerged in the fluid are described in detail. The numerical technique for analysis of this type of multiscale particulate flows using a stabilized mixed velocity-pressure formulation and the PFEM is also presented. Examples of application of the PFEM to several particulate flows problems are given.
NASA Astrophysics Data System (ADS)
Snyder, Jeff; Hanstock, Chris C.; Wilman, Alan H.
2009-10-01
A general in vivo magnetic resonance spectroscopy editing technique is presented to detect weakly coupled spin systems through subtraction, while preserving singlets through addition, and is applied to the specific brain metabolite γ-aminobutyric acid (GABA) at 4.7 T. The new method uses double spin echo localization (PRESS) and is based on a constant echo time difference spectroscopy approach employing subtraction of two asymmetric echo timings, which is normally only applicable to strongly coupled spin systems. By utilizing flip angle reduction of one of the two refocusing pulses in the PRESS sequence, we demonstrate that this difference method may be extended to weakly coupled systems, thereby providing a very simple yet effective editing process. The difference method is first illustrated analytically using a simple two spin weakly coupled spin system. The technique was then demonstrated for the 3.01 ppm resonance of GABA, which is obscured by the strong singlet peak of creatine in vivo. Full numerical simulations, as well as phantom and in vivo experiments were performed. The difference method used two asymmetric PRESS timings with a constant total echo time of 131 ms and a reduced 120° final pulse, providing 25% GABA yield upon subtraction compared to two short echo standard PRESS experiments. Phantom and in vivo results from human brain demonstrate efficacy of this method in agreement with numerical simulations.
Experimental evaluation of the thermal properties of two tissue equivalent phantom materials.
Craciunescu, O I; Howle, L E; Clegg, S T
1999-01-01
Tissue equivalent radio frequency (RF) phantoms provide a means for measuring the power deposition of various hyperthermia therapy applicators. Temperature measurements made in phantoms are used to verify the accuracy of various numerical approaches for computing the power and/or temperature distributions. For the numerical simulations to be accurate, the electrical and thermal properties of the materials that form the phantom should be accurately characterized. This paper reports on the experimentally measured thermal properties of two commonly used phantom materials, i.e. a rigid material with the electrical properties of human fat, and a low concentration polymer gel with the electrical properties of human muscle. Particularities of the two samples required the design of alternative measuring techniques for the specific heat and thermal conductivity. For the specific heat, a calorimeter method is used. For the thermal conductivity, a method derived from the standard guarded comparative-longitudinal heat flow technique was used for both materials. For the 'muscle'-like material, the thermal conductivity, density and specific heat at constant pressure were measured as: k = 0.31 ± 0.001 W(mK)⁻¹, ρ = 1026 ± 7 kg m⁻³, and c_p = 4584 ± 107 J(kgK)⁻¹. For the 'fat'-like material, the literature reports on the density and specific heat such that only the thermal conductivity was measured as k = 0.55 W(mK)⁻¹.
Fourth order difference methods for hyperbolic IBVP's
NASA Technical Reports Server (NTRS)
Gustafsson, Bertil; Olsson, Pelle
1994-01-01
Fourth order difference approximations of initial-boundary value problems for hyperbolic partial differential equations are considered. We use the method of lines approach with both explicit and compact implicit difference operators in space. The explicit operator satisfies an energy estimate leading to strict stability. For the implicit operator we develop boundary conditions and give a complete proof of strong stability using the Laplace transform technique. We also present numerical experiments for the linear advection equation and Burgers' equation with discontinuities in the solution or in its derivative. The first equation is used for modeling contact discontinuities in fluid dynamics, the second one for modeling shocks and rarefaction waves. The time discretization is done with a third order Runge-Kutta TVD method. For solutions with discontinuities in the solution itself we add a filter based on second order viscosity. In the case of the non-linear Burgers' equation we use a flux splitting technique that results in an energy estimate for certain difference approximations, in which case an entropy condition is also fulfilled. In particular we shall demonstrate that the unsplit conservative form produces a non-physical shock instead of the physically correct rarefaction wave. In the numerical experiments we compare our fourth order methods with a standard second order one and with a third order TVD-method. The results show that the fourth order methods are the only ones that give good results for all the considered test problems.
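A compact sketch in the spirit of these experiments: Burgers' equation with a fourth-order central difference in space, third-order TVD Runge-Kutta in time, and a second-order viscosity term acting as the shock filter. The periodic grid and all coefficients are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

# Burgers' equation u_t + (u^2/2)_x = 0 on a periodic grid.
n, L = 400, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
h = L / n
u = np.sin(x) + 0.5                      # shock forms from this profile

def dx4(f):
    # fourth-order central first derivative on a periodic grid
    return (-np.roll(f, -2) + 8 * np.roll(f, -1)
            - 8 * np.roll(f, 1) + np.roll(f, 2)) / (12 * h)

def rhs(u):
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h**2
    return -dx4(0.5 * u**2) + h * u_xx   # flux form + O(h) second-order viscosity

dt = 0.4 * h / np.abs(u).max()
for _ in range(int(1.5 / dt)):           # run past shock formation (t ~ 1)
    # third-order TVD Runge-Kutta (Shu-Osher form)
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    u = u / 3 + 2 / 3 * (u2 + dt * rhs(u2))
print("solution bounds after shock:", float(u.min()), float(u.max()))
```

Without the viscosity term, the central scheme develops grid-scale oscillations at the shock; the O(h) filter smears the discontinuity over a few cells while leaving smooth regions essentially fourth-order accurate.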
Extraction and quantitative analysis of iodine in solid and solution matrixes.
Brown, Christopher F; Geiszler, Keith N; Vickerman, Tanya S
2005-11-01
129I is a contaminant of interest in the vadose zone and groundwater at numerous federal and privately owned facilities. Several techniques have been utilized to extract iodine from solid matrixes; however, all of them rely on two fundamental approaches: liquid extraction or chemical/heat-facilitated volatilization. While these methods are typically chosen for their ease of implementation, they do not totally dissolve the solid. We defined a method that produces complete solid dissolution and conducted laboratory tests to assess its efficacy to extract iodine from solid matrixes. Testing consisted of potassium nitrate/potassium hydroxide fusion of the sample, followed by sample dissolution in a mixture of sulfuric acid and sodium bisulfite. The fusion extraction method resulted in complete sample dissolution of all solid matrixes tested. Quantitative analysis of 127I and 129I via inductively coupled plasma mass spectrometry showed better than ±10% accuracy for certified reference standards, with the linear operating range extending more than 3 orders of magnitude (0.005-5 microg/L). Extraction and analysis of four replicates of standard reference material containing 5 microg/g 127I resulted in an average recovery of 98% with a relative deviation of 6%. This simple and cost-effective technique can be applied to solid samples of varying matrixes with little or no adaptation.
Non-invasive diagnosis of advanced fibrosis and cirrhosis
Sharma, Suraj; Khalili, Korosh; Nguyen, Geoffrey Christopher
2014-01-01
Liver cirrhosis is a common and growing public health problem globally. The diagnosis of cirrhosis portends an increased risk of morbidity and mortality. Liver biopsy is considered the gold standard for diagnosis of cirrhosis and staging of fibrosis. However, despite its universal use, liver biopsy is an invasive and inaccurate gold standard with numerous drawbacks. In order to overcome the limitations of liver biopsy, a number of non-invasive techniques have been investigated for the assessment of cirrhosis. This review will focus on currently available non-invasive markers of cirrhosis. The evidence behind the use of these markers will be highlighted, along with an assessment of diagnostic accuracy and performance characteristics of each test. Non-invasive markers of cirrhosis can be radiologic or serum-based. Radiologic techniques based on ultrasound, magnetic resonance imaging and elastography have been used to assess liver fibrosis. Serum-based biomarkers of cirrhosis have also been developed. These are broadly classified into indirect and direct markers. Indirect biomarkers reflect liver function, which may decline with the onset of cirrhosis. Direct biomarkers, reflect extracellular matrix turnover, and include molecules involved in hepatic fibrogenesis. On the whole, radiologic and serum markers of fibrosis correlate well with biopsy scores, especially when excluding cirrhosis or excluding fibrosis. This feature is certainly clinically useful, and avoids liver biopsy in many cases. PMID:25492996
Preconditioned MoM Solutions for Complex Planar Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fasenfest, B J; Jackson, D; Champagne, N
2004-01-23
The numerical analysis of large arrays is a complex problem. There are several techniques currently under development in this area. One such technique is the FAIM (Faster Adaptive Integral Method). This method uses a modification of the standard AIM approach which takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. These bases are then projected onto a regular grid of interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver. The method has been proven to greatly reduce solve time by speeding the matrix-vector product computation. The FAIM approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends FAIM by modifying it to allow for layered material Green's Functions and dielectrics. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the FAIM method is reported elsewhere; this contribution is limited to presenting new results.
Experimental progress in positronium laser physics
NASA Astrophysics Data System (ADS)
Cassidy, David B.
2018-03-01
The field of experimental positronium physics has advanced significantly in the last few decades, with new areas of research driven by the development of techniques for trapping and manipulating positrons using Surko-type buffer gas traps. Large numbers of positrons (typically ≥10⁶) accumulated in such a device may be ejected all at once, so as to generate an intense pulse. Standard bunching techniques can produce pulses with ns (mm) temporal (spatial) beam profiles. These pulses can be converted into a dilute Ps gas in vacuum with densities on the order of 10⁷ cm⁻³ which can be probed by standard ns pulsed laser systems. This allows for the efficient production of excited Ps states, including long-lived Rydberg states, which in turn facilitates numerous experimental programs, such as precision optical and microwave spectroscopy of Ps, the application of Stark deceleration methods to guide, decelerate and focus Rydberg Ps beams, and studies of the interactions of such beams with other atomic and molecular species. These methods are also applicable to antihydrogen production and spectroscopic studies of energy levels and resonances in positronium ions and molecules. A summary of recent progress in this area will be given, with the objective of providing an overview of the field as it currently exists, and a brief discussion of some future directions.
Light and Life in Baltimore—and Beyond
Edidin, Michael
2015-01-01
Baltimore has been the home of numerous biophysical studies using light to probe cells. One such study, quantitative measurement of lateral diffusion of rhodopsin, set the standard for experiments in which recovery after photobleaching is used to measure lateral diffusion. Development of this method from specialized microscopes to commercial scanning confocal microscopes has led to widespread use of the technique to measure lateral diffusion of membrane proteins and lipids, as well as diffusion and binding interactions in cell organelles and cytoplasm. Perturbation of equilibrium distributions by photobleaching has also been developed into a robust method to image molecular proximity in terms of fluorescence resonance energy transfer between donor and acceptor fluorophores. PMID:25650914
Seker, Gaye; Kulacoglu, Hakan; Öztuna, Derya; Topgül, Koray; Akyol, Cihangir; Çakmak, Atıl; Karateke, Faruk; Özdoğan, Mehmet; Ersoy, Eren; Gürer, Ahmet; Zerbaliyev, Elbrus; Seker, Duray; Yorgancı, Kaya; Pergel, Ahmet; Aydın, Ibrahim; Ensari, Cemal; Bilecik, Tuna; Kahraman, İzzettin; Reis, Erhan; Kalaycı, Murat; Canda, Aras Emre; Demirağ, Alp; Kesicioğlu, Tuğrul; Malazgirt, Zafer; Gündoğdu, Haldun; Terzi, Cem
2014-01-01
Abdominal wall hernias are a common problem in the general population. A Western estimate reveals that the lifetime risk of developing a hernia is about 2%. As a result, hernia repairs likely comprise the most frequent general surgery operations. More than 20 million hernias are estimated to be repaired every year around the world. Numerous repair techniques have been described to date; however, tension-free mesh repairs are widely used today because of their low hernia recurrence rates. Nevertheless, there are some ongoing debates regarding the ideal approach (open or laparoscopic), the ideal anesthesia (general, local, or regional), and the ideal mesh (standard polypropylene or newer meshes).
Using real options analysis to support strategic management decisions
NASA Astrophysics Data System (ADS)
Kabaivanov, Stanimir; Markovska, Veneta; Milev, Mariyan
2013-12-01
Decision making is a complex process that requires taking into consideration multiple heterogeneous sources of uncertainty. Standard valuation and financial analysis techniques often fail to properly account for all these sources of risk, as well as for all sources of additional flexibility. In this paper we explore applications of a modified binomial tree method for real options analysis (ROA) in an effort to improve the decision making process. Typical use cases of real options are analyzed, with an elaborate study of the applications and advantages that company management can derive from them. Numerical results based on extending the simple binomial tree approach to multiple sources of uncertainty are provided to demonstrate the improvement effects on management decisions.
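As a point of reference for the approach the authors modify, the sketch below implements a plain Cox-Ross-Rubinstein binomial tree valuing an option to defer an investment under a single source of uncertainty; the function name, parameters, and numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def real_option_binomial(V0, K, r, sigma, T, steps):
    """Value an option to defer an investment (a simple real option)
    with a standard Cox-Ross-Rubinstein binomial tree.
    V0: present value of project cash flows, K: investment cost,
    r: risk-free rate, sigma: volatility of the project value."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))        # up factor
    d = 1.0 / u                            # down factor
    p = (np.exp(r * dt) - d) / (u - d)     # risk-neutral probability
    disc = np.exp(-r * dt)
    j = np.arange(steps + 1)
    option = np.maximum(V0 * u**j * d**(steps - j) - K, 0.0)  # terminal payoff
    for n in range(steps - 1, -1, -1):     # backward induction
        j = np.arange(n + 1)
        values = V0 * u**j * d**(n - j)
        cont = disc * (p * option[1:n + 2] + (1 - p) * option[:n + 1])
        option = np.maximum(values - K, cont)   # may also invest early
    return option[0]

print(real_option_binomial(V0=100.0, K=110.0, r=0.05, sigma=0.3, T=2.0, steps=200))
```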
Microelectromechanical systems (MEMS): Launching Research Concepts into the Marketplace
NASA Astrophysics Data System (ADS)
Arney, Susanne
1999-04-01
More than a decade following the demonstration of the first spinning micromotors and microgears, the field of microelectromechanical systems (MEMS) has burgeoned on a worldwide basis. Integrated circuit design, fabrication, and packaging techniques have provided the foundation for the growth of an increasingly mature MEMS infrastructure which spans numerous topics of research as well as industrial application. The remarkable proliferation of MEMS concepts into such contrasting arenas of application as automotive sensors, biology, optical and wireless telecommunications, displays, printing, and physics experiments will be described. Challenges to commercialization of research prototypes will be discussed with emphasis on the development of design, fabrication, packaging, reliability and standards which fundamentally enable the application of MEMS to a highly diversified marketplace.
Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou
2015-01-01
Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they exhaustively search for the optimal thresholds that optimize the objective function. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values that maximize Otsu's objective function on eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves to be robust and effective, as shown by numerical results including Otsu's objective values and standard deviations.
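For orientation, the sketch below spells out the between-class-variance objective that such metaheuristics maximize, together with the exhaustive two-threshold search they are designed to replace; the histogram is random and all names are ours.

```python
import numpy as np

def otsu_objective(hist, thresholds):
    """Otsu's between-class variance for a gray-level histogram split
    at the given thresholds (class boundaries); higher is better."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    bounds = [0] + sorted(thresholds) + [len(hist)]
    mu_total = (p * levels).sum()
    var_between = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                       # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var_between += w * (mu - mu_total) ** 2
    return var_between

# exhaustive baseline for two thresholds -- the cost metaheuristics avoid
hist = np.random.randint(1, 100, 256).astype(float)
best = max(((t1, t2) for t1 in range(1, 255) for t2 in range(t1 + 1, 256)),
           key=lambda t: otsu_objective(hist, list(t)))
print(best, otsu_objective(hist, list(best)))
```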
NASA Technical Reports Server (NTRS)
Zimmerle, D.; Bernhard, R. J.
1985-01-01
An alternative method for performing singular boundary element integrals for applications in linear acoustics is discussed. The method separates the integral of the characteristic solution into a singular and a nonsingular part. The singular portion is integrated with a combination of analytic and numerical techniques, while the nonsingular portion is integrated with standard Gaussian quadrature. The method may be generalized to many types of subparametric elements. The integrals over elements containing the root node are considered, and the characteristic solution for linear acoustic problems is examined. The method may be generalized to most characteristic solutions.
The future of surgery in the treatment of breast cancer.
Wood, William C
2003-12-01
The role of surgery cannot be discussed independently, but in relationship to the other modalities of treatment. Sentinel lymph node mapping and biopsy has revolutionized the role of surgery in axillary staging. Techniques of sentinel node mapping, the timing relative to chemotherapy, possible contraindications, and the necessity of completion axillary dissection are all under active investigation. The next few years will see continued changes in this important technique. Techniques of localizing clinically occult tumors are numerous and under study. These are not yet at the level of Phase III comparative trials. Induction chemotherapy has long been standard care for women with locally advanced breast cancer. It has not become standard care for Stage I or II breast cancers that meet criteria for adjuvant therapy. The ability to significantly downsize 80% of breast cancers is reason enough to make it usual practice for women who are certain to receive chemotherapy, if only for the cosmetic advantage that would accrue. Much has been made of the use of thermal ablation of small breast cancers by small probes introduced by skin puncture. In initial trials the lesions were excised after being heated or frozen. Current studies are leaving the destroyed tissue in place and following for evidence of control or recurrence. The value of this approach in terms of cosmesis is unproven, and the timing of its introduction, when small tumors are being evaluated for margins and genetic markers, makes it difficult to imagine broad acceptance. Finally, the role of prophylactic surgery for women at increased risk remains a difficult equation, compounded of alternatives such as chemoprevention, availability and effectiveness of surveillance techniques, and the level of fear and anxiety of the patient.
A test of a vortex method for the computation of flap side edge noise
NASA Technical Reports Server (NTRS)
Martin, James E.
1995-01-01
Upon approach to landing, a major source of airframe noise is located at the side edges of the part-span trailing edge flaps. In the vicinity of these flaps, a complex arrangement of spanwise flow with primary and secondary tip vortices may form. Each of these vortices is observed to become fully three-dimensional. In the present study, a numerical model is developed to investigate the noise radiated from the side edge of a flap. The inherent three-dimensionality of this flow forces us to carefully consider a numerical scheme which will be both accurate in its prediction of the flow acoustics and also computationally efficient. Vortex methods have offered a fast and efficient means of simulating many two- and three-dimensional, vortex-dominated flows. In vortex methods, the time development of the flow is tracked by following exclusively the vorticity-containing regions. Through the Biot-Savart law, knowledge of the vorticity field enables one to obtain flow quantities at any desired location during the flow evolution. In the present study, a numerical procedure has been developed which incorporates the Lagrangian approach of vortex methods into a calculation of the noise radiated by a flow-surface interaction. In particular, the noise generated by a vortex in the presence of a flat half plane is considered. This problem serves as a basic model of flap edge flow. It also permits the direct comparison between our computed results and previous acoustic analyses performed for this problem. In our numerical simulations, the mean flow is represented by the complex potential W(z) = Aiz^(1/2), which is obtained through conformal mapping techniques. The magnitude of the mean flow is controlled by the parameter A. This mean flow has been used in the acoustic analysis by Hardin and is considered a reasonable model of the flow field in the vicinity of the edge and away from the leading and trailing edges of the flap. To represent the primary vortex which occurs near the flap, a point vortex is introduced just below the flat half plane. Using a technique from panel methods, boundary conditions on the flap surface are satisfied by the introduction of a row of stationary point vortices along the extent of the flap. At each time step in the calculation, the strength of these vortices is chosen to eliminate the normal velocity at intermediary collocation points. The time development of the overall flow field is then tracked using standard techniques from vortex methods. Vortex trajectories obtained through this computation are in good agreement with those predicted by the analytical solution given by Hardin, thus verifying the viability of this procedure for more complex flow arrangements. For the flow acoustics, the Ffowcs Williams-Hawkings equation is numerically integrated. This equation supplies the far-field acoustic pressure based upon pressures occurring along the flap surface. With our vortex method solution, surface pressures may be obtained with exceptional resolution. The Ffowcs Williams-Hawkings equation is integrated using a spatially fourth-order accurate Simpson's rule. Rational function interpolation is used to obtain the surface pressures at the appropriate retarded times. Comparisons between our numerical results for the acoustic pressure and those predicted by the Hardin analysis have been made. Preliminary results indicate the need for an improved integration technique.
In the future, the numerical procedure developed in this study will be applied to the case of a rectangular flap of finite thickness and ultimately modified for application to the fully three-dimensional problem.
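To make the Lagrangian vortex-tracking step concrete, here is a minimal free-space 2D point-vortex sketch advancing positions through the Biot-Savart law; it omits the paper's half-plane image system, collocation vortices, and acoustic integration, and the small smoothing core is our choice.

```python
import numpy as np

def biot_savart_2d(z, gamma, delta=1e-3):
    """Velocity (as a complex number u + i*v) induced at each vortex by
    all the others; z are complex positions, gamma the circulations."""
    dz = z[:, None] - z[None, :]                 # pairwise separations
    r2 = np.abs(dz) ** 2 + delta ** 2            # smoothed distance^2
    w_conj = -1j * gamma[None, :] * np.conj(dz) / (2 * np.pi * r2)
    np.fill_diagonal(w_conj, 0.0)                # no self-induction
    return np.conj(w_conj.sum(axis=1))

def step(z, gamma, dt):
    """One explicit midpoint (RK2) Lagrangian advection step."""
    k1 = biot_savart_2d(z, gamma)
    k2 = biot_savart_2d(z + 0.5 * dt * k1, gamma)
    return z + dt * k2

# co-rotating pair: the two vortices should orbit their midpoint
z = np.array([0.5 + 0j, -0.5 + 0j])
gamma = np.array([1.0, 1.0])
for _ in range(100):
    z = step(z, gamma, dt=0.05)
print(z)
```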
Setting Emissions Standards Based on Technology Performance
In setting national emissions standards, EPA sets emissions performance levels rather than mandating use of a particular technology. The law mandates that EPA use numerical performance standards whenever feasible in setting national emissions standards.
Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame
NASA Astrophysics Data System (ADS)
Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank
2017-10-01
This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, to draw general conclusions additional test data are required.
Constrained H1-regularization schemes for diffeomorphic image registration
Mang, Andreas; Biros, George
2017-01-01
We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361
NASA Astrophysics Data System (ADS)
Miyagawa, Chihiro; Kobayashi, Takumi; Taishi, Toshinori; Hoshikawa, Keigo
2014-09-01
Based on the growth of 3-inch diameter c-axis sapphire using the vertical Bridgman (VB) technique, numerical simulations were made and used to guide the growth of a 6-inch diameter sapphire. A 2D model of the VB hot-zone was constructed, the seeding interface shape of the 3-inch diameter sapphire as revealed by green laser scattering was estimated numerically, and the temperature distributions of two VB hot-zone models designed for 6-inch diameter sapphire growth were numerically simulated to achieve the optimal growth of large crystals. The hot-zone model with one heater was selected and prepared, and 6-inch diameter c-axis sapphire boules were actually grown, as predicted by the numerical results.
NASA Technical Reports Server (NTRS)
Lang, Steve; Tao, W.-K.; Simpson, J.; Ferrier, B.; Einaudi, Franco (Technical Monitor)
2001-01-01
Six different convective-stratiform separation techniques, including a new technique that utilizes the ratio of vertical and terminal velocities, are compared and evaluated using two-dimensional numerical simulations of a tropical [Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE)] and midlatitude continental [Preliminary Regional Experiment for STORM-Central (PRESTORM)] squall line. The simulations are made using two different numerical advection schemes: fourth-order and positive definite advection. Comparisons are made in terms of rainfall, cloud coverage, mass fluxes, apparent heating and moistening, mean hydrometeor profiles, CFADs (Contoured Frequency with Altitude Diagrams), microphysics, and latent heating retrieval. Overall, it was found that the different separation techniques produced results that qualitatively agreed. However, the quantitative differences were significant. Observational comparisons were unable to conclusively evaluate the performance of the techniques. Latent heating retrieval was shown to be sensitive to the choice of separation technique, mainly due to the stratiform region for methods that found very little stratiform rain. The midlatitude PRESTORM simulation was found to be nearly invariant with respect to advection type for most quantities, while for TOGA COARE fourth-order advection produced numerous shallow convective cores and positive definite advection produced fewer cells that were both broader and penetrated deeper above the freezing level.
NASA Astrophysics Data System (ADS)
Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.
2017-07-01
Computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over the wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of the computational speed.
NASA Technical Reports Server (NTRS)
Dieudonne, J. E.
1978-01-01
A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
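A generic illustration of this kind of linearization is sketched below: central-difference Jacobians of a nonlinear state equation x_dot = f(x, u) about a trim point, yielding the perturbation model dx_dot = A dx + B du. The pendulum plant is an invented stand-in, not the NASA terminal configured vehicle simulation.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx and B = df/du at (x0, u0)."""
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# toy nonlinear plant: a pendulum with a torque input
f = lambda x, u: np.array([x[1], -9.81 * np.sin(x[0]) + u[0]])
A, B = linearize(f, x0=np.array([0.0, 0.0]), u0=np.array([0.0]))
print(A)   # expect [[0, 1], [-9.81, 0]] about the hanging equilibrium
print(B)   # expect [[0], [1]]
```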
NASA Astrophysics Data System (ADS)
D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice
2018-05-01
In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
Shao, Yu; Wang, Shumin
2016-12-01
The numerical simulation of acoustic scattering from elastic objects near a water-sand interface is critical to underwater target identification. Frequency-domain methods are computationally expensive, especially for large-scale broadband problems. A numerical technique is proposed to enable the efficient use of finite-difference time-domain method for broadband simulations. By incorporating a total-field/scattered-field boundary, the simulation domain is restricted inside a tightly bounded region. The incident field is further synthesized by the Fourier transform for both subcritical and supercritical incidences. Finally, the scattered far field is computed using a half-space Green's function. Numerical examples are further provided to demonstrate the accuracy and efficiency of the proposed technique.
NASA Technical Reports Server (NTRS)
Baum, J. D.; Levine, J. N.
1980-01-01
The selection of a satisfactory numerical method for calculating the propagation of steep-fronted, shock-like waveforms in a solid rocket motor combustion chamber is discussed. A number of different numerical schemes were evaluated by comparing the results obtained for three problems: the shock tube problem, the linear wave equation, and nonlinear wave propagation in a closed tube. The most promising method, a combination of the Lax-Wendroff, hybrid, and artificial compression techniques, was incorporated into an existing nonlinear instability program. The capability of the modified program to treat steep-fronted wave instabilities in low smoke tactical motors was verified by solving a number of motor test cases with disturbance amplitudes as high as 80% of the mean pressure.
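For reference, the sketch below implements only the plain Lax-Wendroff building block of such a hybrid, applied to linear advection of a steep-fronted pulse on a periodic grid; the hybrid switching and artificial compression stages are omitted, and the grid and Courant number are illustrative.

```python
import numpy as np

def lax_wendroff(u, c, dx, dt, nsteps):
    """Second-order Lax-Wendroff update for u_t + c*u_x = 0 on a
    periodic grid; stable for Courant number |nu| <= 1."""
    nu = c * dt / dx
    for _ in range(nsteps):
        up, um = np.roll(u, -1), np.roll(u, 1)   # u_{j+1}, u_{j-1}
        u = u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2 * u + um)
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)   # steep-fronted pulse
u = lax_wendroff(u0, c=1.0, dx=x[1] - x[0], dt=0.004, nsteps=100)
print(u.max(), u.min())   # over/undershoots at the fronts motivate the hybrid
```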
Constrained evolution in numerical relativity
NASA Astrophysics Data System (ADS)
Anderson, Matthew William
The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.
Density-matrix-based algorithm for solving eigenvalue problems
NASA Astrophysics Data System (ADS)
Polizzi, Eric
2009-03-01
A fast and stable numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques and takes its inspiration from the contour integration and density-matrix representation in quantum mechanics. It will be shown that this algorithm—named FEAST—exhibits high efficiency, robustness, accuracy, and scalability on parallel architectures. Examples from electronic structure calculations of carbon nanotubes are presented, and numerical performances and capabilities are discussed.
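A rough sketch of the contour-integration idea is given below under simplifying assumptions (real symmetric A, dense factorizations, a circular contour, trapezoid quadrature, a few refinement passes, and a residual check to discard spurious Ritz pairs); it illustrates the spectral-projector principle rather than the actual FEAST implementation.

```python
import numpy as np

def contour_eigensolve(A, lam_min, lam_max, m0=10, nquad=8, npass=3, seed=0):
    """Approximate the spectral projector onto [lam_min, lam_max] by
    quadrature of the resolvent on a circle, apply it to a random
    block, and Rayleigh-Ritz on the filtered subspace."""
    n = A.shape[0]
    Y = np.random.default_rng(seed).standard_normal((n, m0))
    c, r = 0.5 * (lam_min + lam_max), 0.5 * (lam_max - lam_min)
    for _ in range(npass):                       # FEAST-style refinement
        Q = np.zeros((n, m0))
        for k in range(nquad):                   # trapezoid rule on the circle
            th = 2 * np.pi * (k + 0.5) / nquad
            z = c + r * np.exp(1j * th)
            Q += np.real(r * np.exp(1j * th)
                         * np.linalg.solve(z * np.eye(n) - A, Y)) / nquad
        Q, _ = np.linalg.qr(Q)                   # orthonormal filtered basis
        evals, V = np.linalg.eigh(Q.T @ A @ Q)   # Rayleigh-Ritz step
        Y = Q @ V
    res = np.linalg.norm(A @ Y - Y * evals, axis=0)
    keep = (evals > lam_min) & (evals < lam_max) & (res < 1e-3)
    return evals[keep], Y[:, keep]

A = np.diag(np.arange(1.0, 21.0))                # known spectrum 1..20
print(contour_eigensolve(A, 4.5, 8.5)[0])        # expect roughly 5, 6, 7, 8
```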
Material parameter measurements at high temperatures
NASA Technical Reports Server (NTRS)
Dominek, A.; Park, A.; Peters, L., Jr.
1988-01-01
Alternate fixtures and techniques for the measurement of the constitutive material parameters at elevated temperatures are presented. The technique utilizes scattered field data from material-coated cylinders between parallel plates or material-coated hemispheres over a finite-size groundplane. The data acquisition is centered around the HP 8510B Network Analyzer. The parameters are then found from a numerical search algorithm using the Newton-Raphson technique with the measured and calculated fields from these canonical scatterers. Numerical and experimental results are shown.
NASA Technical Reports Server (NTRS)
Reese, O. W.
1972-01-01
The numerical calculation of the steady-state flow of electrons in an axisymmetric, spherical, electrostatic collector is described for a range of boundary conditions. The trajectory equations of motion are solved alternately with Poisson's equation for the potential field until convergence is achieved. A direct (noniterative) numerical technique is used to obtain the solution to Poisson's equation. Space charge effects are included for initial current densities as large as 100 A/sq cm. Ways of dealing successfully with the difficulties associated with these high densities are discussed. A description of the mathematical model, a discussion of numerical techniques, results from two typical runs, and the FORTRAN computer program are included.
Locating CVBEM collocation points for steady state heat transfer problems
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst.
Reliability Analysis and Modeling of ZigBee Networks
NASA Astrophysics Data System (ADS)
Lin, Cheng-Min
The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important, because these services will be stopped if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree, and mesh. The paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes the reliability and mean time to failure (MTTF) of each layer. Channel resource usage, device role, network topology, and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In the star or tree networks, a series system and the reliability block diagram (RBD) technique can be used to solve their reliability problem. However, a division technique is applied here for the mesh network, because its complexity is higher than that of the others. A mesh network using division technology is classified into several non-reducible series systems and edge-parallel systems. Hence, the reliability of mesh networks is easily solved using series-parallel systems through our proposed scheme. The numerical results demonstrate that reliability increases for mesh networks when the number of edges in parallel systems increases, while reliability drops quickly when the number of edges and the number of nodes increase for all three networks. Greater resource usage is another factor that decreases reliability. Overall, lower network reliability results from network complexity, greater resource usage, and complex object relationships.
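The series/parallel reductions at the heart of the RBD technique amount to two one-liners, as in the sketch below; the layer and edge reliabilities are invented placeholders rather than values from the paper.

```python
import numpy as np

def series(R):
    """A series system works only if every block works."""
    return float(np.prod(R))

def parallel(R):
    """A parallel system fails only if every block fails."""
    return 1.0 - float(np.prod(1.0 - np.asarray(R)))

# star/tree-style stack: PHY, MAC, NET, APP layer blocks in series
print(series([0.99, 0.98, 0.97, 0.99]))

# mesh-style redundancy: three parallel edges feeding a series stack
print(series([parallel([0.9, 0.9, 0.9]), 0.98, 0.99]))
```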
Gradient augmented level set method for phase change simulations
NASA Astrophysics Data System (ADS)
Anumolu, Lakshman; Trujillo, Mario F.
2018-01-01
A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ(t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ(t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ(t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ(t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS than with the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall, the additional computational costs associated with GALS are almost the same as those of the standard level set technique.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gokaltun, Seckin; Munroe, Norman; Subramaniam, Shankar
2014-12-31
This study presents a new drag model, based on the cohesive inter-particle forces, implemented in the MFIX code. This new drag model combines an existing standard model in MFIX with a particle-based drag model based on a switching principle. Switches between the models in the computational domain occur where strong particle-to-particle cohesion potential is detected. Three versions of the new model were obtained by using one standard drag model in each version. Later, the performance of each version was compared against available experimental data for a fluidized bed, published in the literature and used extensively by other researchers for validation purposes. In our analysis of the results, we first observed that the standard models used in this research were incapable of producing closely matching results. Then, we showed for a simple case that a threshold needs to be set on the solid volume fraction. This modification was applied to avoid non-physical results for the clustering predictions when the governing equation of the solid granular temperature was solved. Later, we used our hybrid technique and observed the capability of our approach to improve the numerical results significantly; however, the improvement of the results depended on the threshold of the cohesive index, which was used in the switching procedure. Our results showed that small values of the threshold for the cohesive index could result in a significant reduction of the computational error for all versions of the proposed drag model. In addition, we redesigned an existing circulating fluidized bed (CFB) test facility in order to create validation cases for the clustering regime of Geldart A type particles.
Tucker, David L; Rockett, Mark; Hasan, Mehedi; Poplar, Sarah; Rule, Simon A
2015-06-01
Bone marrow aspiration and trephine (BMAT) biopsies remain important tests in haematology. However, the procedures can be moderately to severely painful despite standard methods of pain relief. The aim was to test the efficacy of transcutaneous electrical nerve stimulation (TENS) in alleviating the pain from BMAT in addition to standard analgesia, using a numerical pain rating scale (NRS). 70 patients requiring BMAT were randomised (1:1) in a double-blind, placebo-controlled trial. 35 patients received TENS impulses at a strong but comfortable amplitude (intervention group) and 35 patients received TENS impulses just above the sensory threshold (control group) (median pulse amplitude 20 and 7 mA, respectively). Patients and operators were blinded to group allocation. Pain assessments were made using a numerical pain scale completed after the procedure. No significant difference in NRS pain recalled after the procedure was detected (median pain score 5.7 (95% CI 4.8 to 6.6) in the control group vs 5.6 (95% CI 4.8 to 6.4) in the intervention group). However, 100% of patients who had previous experience of BMAT and >94% of participants overall felt they benefited from using TENS and would recommend it to others for this procedure. There were no side effects from the TENS device, and it was well tolerated. TENS is a safe, non-invasive adjunct to analgesia for reducing pain during bone marrow biopsy and provides a subjective benefit to most users; however, no objective difference in pain scores was detected when using TENS in this randomised controlled study. NCT02005354.
Zhou, Shiyue; Tello, Nadia; Harvey, Alex; Boyes, Barry; Orlando, Ron; Mechref, Yehia
2016-06-01
Glycans have numerous functions in various biological processes and participate in the progress of diseases. Reliable quantitative glycomic profiling techniques could contribute to the understanding of the biological functions of glycans and lead to the discovery of potential glycan biomarkers for diseases. Although LC-MS is a powerful analytical tool for quantitative glycomics, the variation of ionization efficiency and MS intensity bias influence quantitation reliability. Internal standards can be utilized for glycomic quantitation by MS-based methods to reduce variability. In this study, we used a stable isotope labeled IgG2b monoclonal antibody, iGlycoMab, as an internal standard to reduce the potential for errors and the variability due to sample digestion, derivatization, and fluctuation of nanoESI efficiency in the LC-MS analysis of permethylated N-glycans released from model glycoproteins, human blood serum, and a breast cancer cell line. We observed an unanticipated degradation of isotope labeled glycans, tracked the source of such degradation, and optimized a sample preparation protocol to minimize degradation of the internal standard glycans. All results indicated the effectiveness of using iGlycoMab to minimize errors originating from sample handling and instruments.
NASA Astrophysics Data System (ADS)
Hudoklin, D.; Šetina, J.; Drnovšek, J.
2012-09-01
The measurement of the water-vapor permeation rate (WVPR) through materials is very important in many industrial applications such as the development of new fabrics and construction materials, in the semiconductor industry, packaging, vacuum techniques, etc. The demand for this kind of measurement has grown considerably, and thus many different methods for measuring the WVPR have been developed and standardized within numerous national and international standards. However, comparison of existing methods shows a low level of mutual agreement. The objective of this paper is to demonstrate the necessary uncertainty evaluation for WVPR measurements, so as to provide a basis for development of a corresponding reference measurement standard. This paper presents a specially developed measurement setup, which employs a precision dew-point sensor for WVPR measurements on specimens of different shapes. The paper also presents a physical model, which tries to account for both dynamic and quasi-static methods, the common types of WVPR measurements referred to in standards and scientific publications. An uncertainty evaluation carried out according to the ISO/IEC guide to the expression of uncertainty in measurement (GUM) shows the relative expanded (k = 2) uncertainty to be 3.0% for a WVPR of 6.71 mg·h⁻¹ (corresponding to a permeance of 30.4 mg·m⁻²·day⁻¹·hPa⁻¹).
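For illustration, a minimal first-order GUM propagation for uncorrelated inputs is sketched below; the toy permeation model and all numbers are invented, not the paper's physical model or uncertainty budget.

```python
import numpy as np

def combined_uncertainty(f, x, u, eps=1e-6):
    """u_c^2 = sum_i (df/dx_i)^2 * u_i^2 for uncorrelated inputs,
    with sensitivities estimated by central differences."""
    x = np.asarray(x, float)
    c = np.zeros_like(x)
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps * max(1.0, abs(x[i]))
        c[i] = (f(x + dx) - f(x - dx)) / (2 * dx[i])
    return float(np.sqrt(np.sum((c * np.asarray(u)) ** 2)))

# toy model: rate ~ permeance * area * vapor-pressure difference
f = lambda p: p[0] * p[1] * p[2]
u_c = combined_uncertainty(f, x=[0.5, 0.01, 12.0], u=[0.005, 1e-4, 0.1])
print(u_c, 2 * u_c)   # expanded uncertainty with coverage factor k = 2
```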
MRI in patients with inflammatory bowel disease
Gee, Michael S.; Harisinghani, Mukesh G.
2011-01-01
Inflammatory bowel disease (IBD) affects approximately 1.4 million people in North America and, because of its typical early age of onset and episodic disease course, IBD patients often undergo numerous imaging studies over the course of their lifetimes. CT has become the standard imaging modality for assessment of IBD patients because of its widespread availability, rapid image acquisition, and ability to evaluate intraluminal and extraluminal disease. However, repetitive CT imaging has been associated with a significant ionizing radiation risk to patients, making MRI an appealing alternative IBD imaging modality. Pelvic MRI is currently the imaging gold standard for detecting perianal disease, while recent studies indicate that MRI bowel-directed techniques (enteroclysis, enterography, colonography) can accurately evaluate bowel inflammation in IBD. With recent technical innovations leading to faster and higher resolution body MRI, the role of MRI in IBD evaluation is likely to continue to expand. Future applications include surveillance imaging, detection of mural fibrosis, and early assessment of therapy response. PMID:21512607
Current Status of Laparoendoscopic Single-Site Surgery in Urologic Surgery
2012-01-01
Since the introduction of laparoscopic surgery, the promise of lower postoperative morbidity and improved cosmesis has been achieved. Laparoendoscopic single-site surgery (LESS) potentially takes this further. Following the first human urological LESS report in 2007, numerous case series have emerged, as well as comparative studies comparing LESS with standard laparoscopy. Comparative series between conventional laparoscopy and LESS for different procedures suggest non-inferiority of LESS, but the only objective benefit demonstrated so far is an improved cosmetic outcome. Challenging ergonomics, instrument clashing, lack of true triangulation, and in-line vision are the main concerns with LESS surgery. Various new instruments have been designed, but experienced laparoscopists and well-selected patients remain pivotal for a successful LESS procedure. Robotic-assisted LESS procedures have been performed. The available robotic platform remains bulky, but development of instrumentation and application of robotic technology are expected to define the actual role of these techniques in minimally invasive urologic surgery. PMID:22866213
Improvements to Wire Bundle Thermal Modeling for Ampacity Determination
NASA Technical Reports Server (NTRS)
Rickman, Steve L.; Iannello, Christopher J.; Shariff, Khadijah
2017-01-01
Determining current carrying capacity (ampacity) of wire bundles in aerospace vehicles is critical not only to safety but also to efficient design. Published standards provide guidance on determining wire bundle ampacity but offer little flexibility for configurations where wire bundles of mixed gauges and currents are employed with varying external insulation jacket surface properties. Thermal modeling has been employed to develop techniques to assist in ampacity determination for these complex configurations. Previous developments allowed analysis of wire bundle configurations but were constrained to configurations comprising fewer than 50 elements. Additionally, for vacuum analyses, configurations with very low emittance external jackets suffered from numerical instability in the solution. A new thermal modeler is presented that allows for larger configurations and is not constrained for low bundle infrared emissivity calculations. Formulation of key internal radiation and interface conductance parameters is discussed, including the effects of temperature and air pressure on wire-to-wire thermal conductance. Test cases comparing model-predicted ampacity and that calculated from standards documents are presented.
Simhon, David; Halpern, Marisa; Brosh, Tamar; Vasilyev, Tamar; Ravid, Avi; Tennenbaum, Tamar; Nevo, Zvi; Katzir, Abraham
2007-02-01
A feedback temperature-controlled laser soldering system (TCLS) was used for bonding skin incisions on the backs of pigs. The study aimed 1) to characterize the optimal soldering parameters, and 2) to compare the immediate and long-term wound healing outcomes with those of other wound closure modalities. A TCLS was used to bond the approximated wound margins of skin incisions on porcine backs. The reparative outcomes were evaluated macroscopically, microscopically, and immunohistochemically. The optimal soldering temperature was found to be 65 degrees C, and the operating time was significantly shorter than with suturing. The immediate tight sealing of the wound by the TCLS contributed to rapid, high-quality wound healing in comparison to Dermabond or Histoacryl cyanoacrylate glues or standard suturing. TCLS of incisions in porcine skin has numerous advantages, including a rapid procedure and high-quality reparative outcomes, over the common standard wound closure procedures. Further studies with a variety of skin lesions are needed before advocating this technique for clinical use.
Front dynamics and entanglement in the XXZ chain with a gradient
NASA Astrophysics Data System (ADS)
Eisler, Viktor; Bauernfeind, Daniel
2017-11-01
We consider the XXZ spin chain with a magnetic field gradient and study the profiles of the magnetization as well as the entanglement entropy. For a slowly varying field, it is shown that, by means of a local density approximation, the ground-state magnetization profile can be obtained with standard Bethe ansatz techniques. Furthermore, it is argued that the low-energy description of the theory is given by a Luttinger liquid with slowly varying parameters. This allows us to obtain a very good approximation of the entanglement profile using a recently introduced technique of conformal field theory in curved spacetime. Finally, the front dynamics is also studied after the gradient field has been switched off, following arguments of generalized hydrodynamics for integrable systems. While for the XX chain the hydrodynamic solution can be found analytically, the XXZ case appears to be more complicated and the magnetization profiles are recovered only around the edge of the front via an approximate numerical solution.
On the energy integral for first post-Newtonian approximation
NASA Astrophysics Data System (ADS)
O'Leary, Joseph; Hill, James M.; Bennett, James C.
2018-07-01
The post-Newtonian approximation for general relativity is widely adopted by the geodesy and astronomy communities. It has been successfully exploited for the inclusion of relativistic effects in practically all geodetic applications and techniques such as satellite/lunar laser ranging and very long baseline interferometry. The levels of accuracy now required in geodetic techniques demand that reference frames, planetary and satellite orbits, and signal propagation be treated within the post-Newtonian regime. For arbitrary scalar W and vector gravitational potentials W^j (j = 1, 2, 3), we present a novel derivation of the energy associated with a test particle in the post-Newtonian regime. The integral so obtained appears not to have been given previously in the literature and is deduced through algebraic manipulation on seeking a Jacobi-like integral associated with the standard post-Newtonian equations of motion. The new integral is independently verified through a variational formulation using the post-Newtonian metric components and is subsequently verified by numerical integration of the post-Newtonian equations of motion.
Distributed support vector machine in master-slave mode.
Chen, Qingguo; Cao, Feilong
2018-05-01
It is well known that the support vector machine (SVM) is an effective learning algorithm. The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in a master-slave mode (MS-DSVM), which integrates a distributed SVM and ADMM acting in a master-slave configuration where the master node and slave nodes are connected, meaning the results can be broadcast. The distributed SVM is regarded as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, the over-relaxation technique is utilised to accelerate the convergence rate of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM has linear convergence, meaning it possesses the fastest convergence rate among existing standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework.
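To show the master-slave pattern in code (on a smooth ridge surrogate, not the paper's hinge-loss SVM or its over-relaxation), here is a minimal consensus-ADMM sketch; all names and data are invented.

```python
import numpy as np

def consensus_admm_ridge(data, lam=0.1, rho=1.0, iters=100):
    """Slaves solve local least squares; the master averages and
    broadcasts the consensus variable z; u are scaled duals."""
    n = data[0][0].shape[1]
    x = [np.zeros(n) for _ in data]
    u = [np.zeros(n) for _ in data]
    z = np.zeros(n)
    for _ in range(iters):
        for i, (A, b) in enumerate(data):        # slave updates (parallel)
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                   A.T @ b + rho * (z - u[i]))
        xbar = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
        z = rho * len(data) * xbar / (2 * lam + rho * len(data))  # master
        for i in range(len(data)):               # dual ascent on the slaves
            u[i] += x[i] - z
    return z

rng = np.random.default_rng(1)
w_true = rng.standard_normal(5)
def node(m=40):                                  # one slave's local data
    A = rng.standard_normal((m, 5))
    return A, A @ w_true + 0.01 * rng.standard_normal(m)
print(consensus_admm_ridge([node() for _ in range(3)]), w_true)
```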
Design and fabrication of self-assembled thin films
NASA Astrophysics Data System (ADS)
Topasna, Daniela M.; Topasna, Gregory A.
2015-10-01
Students experience the entire process of designing, fabricating and testing thin films during their capstone course. The films are fabricated by the ionic-self assembled monolayer (ISAM) technique, which is suited to a short class and is relatively rapid, inexpensive and environmentally friendly. The materials used are polymers, nanoparticles, and small organic molecules that, in various combinations, can create films with nanometer thickness and with specific properties. These films have various potential applications such as pH optical sensors or antibacterial coatings. This type of project offers students an opportunity to go beyond the standard lecture and labs and to experience firsthand the design and fabrication processes. They learn new techniques and procedures, as well as familiarize themselves with new instruments and optical equipment. For example, students learn how to characterize the films by using UV-Vis-NIR spectrophotometry and in the process learn how the instruments operate. This work complements a previous exercise that we introduced in which students use MATHCAD to numerically model the transmission and reflection of light from thin films.
Holographic quantitative imaging of sample hidden by turbid medium or occluding objects
NASA Astrophysics Data System (ADS)
Bianco, V.; Miccio, L.; Merola, F.; Memmolo, P.; Gennari, O.; Paturzo, Melania; Netti, P. A.; Ferraro, P.
2015-03-01
Digital Holography (DH) numerical procedures have been developed to allow imaging through turbid media. A fluid is considered turbid when dispersed particles provoke strong light scattering, thus destroying the image formation of any standard optical system. Here we show that sharp amplitude imaging and phase-contrast mapping of objects hidden behind a turbid medium and/or occluding objects are possible in harsh noise conditions and with a large field of view by Multi-Look DH microscopy. In particular, it will be shown that both amplitude imaging and phase-contrast mapping of cells hidden behind a flow of red blood cells can be obtained. This allows, in a noninvasive way, the quantitative evaluation of living processes in Lab on Chip platforms where conventional microscopy techniques fail. The combination of this technique with endoscopic imaging can pave the way for holographic blood vessel inspection, e.g. to look for settled cholesterol plaques as well as blood clots for rapid diagnostics of blood diseases.
The calculating eye: Baily, Herschel, Babbage and the business of astronomy
NASA Astrophysics Data System (ADS)
Ashworth, William J.
1994-12-01
Astronomy does not often appear in the socio-political and economic history of nineteenth-century Britain. Whereas contemporary literature, poetry and the visual arts made significant reference to the heavens, the more earthbound arena of finance seems an improbable place to encounter astronomical themes. This paper shows that astronomical practice was an important factor in the emergence of what can be described as an accountant's view of the world. I begin by exploring the senses of the term 'calculation' in Regency England, and then seek to reveal how the dramatic growth of vigilance in science, the organization and control of labour, and the monitoring of society and the economy drew upon and informed this disciplined numerical technique. Observations in all these areas could only be trusted if correctly reduced through a single system of calculation assisted by a group of standardized tables and division of mental labour. Within this setting the stellar economy provided an object that was seemingly ordered and law-like and therefore predictable through a powerful combination of techniques.
Spot detection and image segmentation in DNA microarray data.
Qin, Li; Rueda, Luis; Ali, Adnan; Ngom, Alioune
2005-01-01
Following the invention of microarrays in 1994, the development and applications of this technology have grown exponentially. The numerous applications of microarray technology include clinical diagnosis and treatment, drug design and discovery, tumour detection, and environmental health research. One of the key issues in the experimental approaches utilising microarrays is to extract quantitative information from the spots, which represent genes in a given experiment. For this process, the initial stages are important and they influence future steps in the analysis. Identifying the spots and separating the background from the foreground is a fundamental problem in DNA microarray data analysis. In this review, we present an overview of state-of-the-art methods for microarray image segmentation. We discuss the foundations of the circle-shaped approach, adaptive shape segmentation, histogram-based methods and the recently introduced clustering-based techniques. We analytically show that clustering-based techniques are equivalent to the one-dimensional, standard k-means clustering algorithm that utilises the Euclidean distance.
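As a concrete counterpart to that equivalence, below is a minimal one-dimensional k-means (Euclidean distance) split of a spot's pixels into background and foreground; the two pixel populations are simulated and the quantile initialization is our choice.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Standard 1D k-means: assign each value to the nearest center,
    then recompute centers as class means, until convergence."""
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))  # spread init
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        new = np.array([values[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# toy spot: dim background pixels plus a brighter foreground population
pixels = np.concatenate([np.random.normal(100, 10, 300),
                         np.random.normal(600, 50, 80)])
labels, centers = kmeans_1d(pixels)
print(centers)   # roughly [100, 600]
```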
NASA Astrophysics Data System (ADS)
Degenfeld-Schonburg, Peter; Navarrete-Benlloch, Carlos; Hartmann, Michael J.
2015-05-01
Nonlinear quantum optical systems are of paramount relevance for modern quantum technologies, as well as for the study of dissipative phase transitions. Their nonlinear nature makes their theoretical study very challenging and hence they have always served as great motivation to develop new techniques for the analysis of open quantum systems. We apply the recently developed self-consistent projection operator theory to the degenerate optical parametric oscillator to exemplify its general applicability to quantum optical systems. We show that this theory provides an efficient method to calculate the full quantum state of each mode with a high degree of accuracy, even at the critical point. It is equally successful in describing both the stationary limit and the dynamics, including regions of the parameter space where the numerical integration of the full problem is significantly less efficient. We further develop a Gaussian approach consistent with our theory, which yields sensibly better results than the previous Gaussian methods developed for this system, most notably standard linearization techniques.
Development of a model of space station solar array
NASA Technical Reports Server (NTRS)
Bosela, Paul A.
1990-01-01
Space structures, such as the space station solar arrays, must be extremely lightweight, flexible structures. Accurate prediction of the natural frequencies and mode shapes is essential for determining the structural adequacy of components and designing a control system. The tension preload in the blanket of photovoltaic solar collectors and the free/free boundary conditions of a structure in space raise serious reservations about the use of standard finite element techniques of solution. In particular, a phenomenon known as grounding, or false stiffening, of the stiffness matrix occurs during rigid body rotation. The grounding phenomenon is examined in detail. Numerous stiffness matrices developed by others are examined for rigid body rotation capability, and found lacking. Various techniques are used for developing new stiffness matrices from the rigorous solutions of the differential equations, including the solution of the directed force problem. A new directed force stiffness matrix developed by the author provides all the rigid body capabilities for the beam in space.
Diffractive optics development using a modified stack-and-draw technique.
Pniewski, Jacek; Kasztelanic, Rafal; Nowosielski, Jedrzej M; Filipkowski, Adam; Piechal, Bernard; Waddie, Andrew J; Pysz, Dariusz; Kujawa, Ireneusz; Stepien, Ryszard; Taghizadeh, Mohammad R; Buczynski, Ryszard
2016-06-20
We present a novel method for the development of diffractive optical elements (DOEs). Unlike standard surface relief DOEs, the phase shift is introduced through a refractive index variation achieved by using different types of glass. For the fabrication of DOEs we use a modified stack-and-draw technique, originally developed for the fabrication of photonic crystal fibers, resulting in a completely flat element that is easy to integrate with other optical components. A proof-of-concept demonstration of the method is presented: a two-dimensional binary optical phase grating in the form of a square chessboard with a pixel size of 5 μm. Two types of glass are used: low refractive index silicate glass NC21 and high refractive index lead-silicate glass F2. The measured diffraction characteristics of the fabricated component are presented, and it is shown numerically and experimentally that such a DOE can be used as a fiber interconnector that couples light from a small-core fiber into the several cores of a multicore fiber.
Fabrication and optical characterization of silica optical fibers containing gold nanoparticles.
de Oliveira, Rafael E P; Sjödin, Niclas; Fokine, Michael; Margulis, Walter; de Matos, Christiano J S; Norin, Lars
2015-01-14
Gold nanoparticles have been used since antiquity for the production of red-colored glasses. More recently, it was determined that this color is caused by plasmon resonance, which additionally increases the material's nonlinear optical response, allowing for the improvement of numerous optical devices. Interest in silica fibers containing gold nanoparticles has increased recently, aiming at the integration of nonlinear devices with conventional optical fibers. However, fabrication is challenging due to the high temperatures required for silica processing, and fibers with gold nanoparticles were solely demonstrated using sol-gel techniques. We show a new fabrication technique based on standard preform/fiber fabrication methods, where nanoparticles are nucleated by heat in a furnace or by laser exposure with unprecedented control over particle size, concentration, and distribution. Plasmon absorption peaks exceeding 800 dB·m⁻¹ at 514-536 nm wavelengths were observed, indicating higher achievable nanoparticle concentrations than previously reported. The measured resonant nonlinear refractive index, (6.75 ± 0.55) × 10⁻¹⁵ m²·W⁻¹, represents an improvement of >50×.
Crystalline phases by an improved gradient expansion technique
NASA Astrophysics Data System (ADS)
Carignano, S.; Mannarelli, M.; Anzuini, F.; Benhar, O.
2018-02-01
We develop an innovative technique for studying inhomogeneous phases with a spontaneously broken symmetry. The method relies on the knowledge of the exact form of the free energy in the homogeneous phase and on a specific gradient expansion of the order parameter. We apply this method to quark matter at vanishing temperature and large chemical potential, which is expected to be relevant for astrophysical considerations. The method is remarkably reliable and fast as compared to performing the full numerical diagonalization of the quark Hamiltonian in momentum space and is designed to improve the standard Ginzburg-Landau expansion close to the phase transition points. For definiteness, we focus on inhomogeneous chiral symmetry breaking, accurately reproducing known results for one-dimensional and two-dimensional modulations and examining novel crystalline structures as well. Consistently with previous results, we find that the energetically favored modulation is the so-called one-dimensional real-kink crystal. We propose a qualitative description of the pairing mechanism to motivate this result.
Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.
2017-01-01
Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed considering both acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses the measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better when compared with static algorithms such as the least-squares method, the algebraic reconstruction technique, and standard Tikhonov regularization. An effective method is provided for temperature field reconstruction by acoustic tomography.
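As a minimal sketch of the static Tikhonov baseline the abstract compares against, the fragment below reconstructs a discretized field from path-integral measurements; the forward matrix, noise level, and regularization weight are invented for illustration and are not from the study.

```python
import numpy as np

# Toy acoustic-tomography setup (assumed): travel-time data y = A x + noise,
# where x is the discretized temperature-related field along grid cells.
rng = np.random.default_rng(0)
n_cells, n_paths = 25, 40                        # hypothetical grid and ray counts
A = rng.uniform(0.0, 1.0, (n_paths, n_cells))    # stand-in path-length matrix
x_true = np.sin(np.linspace(0, np.pi, n_cells))  # smooth "temperature" profile
y = A @ x_true + 0.01 * rng.standard_normal(n_paths)

# Standard (zeroth-order) Tikhonov solution: min ||A x - y||^2 + lam * ||x||^2
lam = 1e-2                                       # regularization weight (assumed)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_cells), A.T @ y)

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```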
NASA Technical Reports Server (NTRS)
Dermanis, A.
1977-01-01
The possibility of recovering earth rotation and network geometry (baseline) parameters is emphasized. The simulated numerical experiments are set up in an environment where station coordinates vary with respect to inertial space according to a simulated earth rotation model similar to the actual but unknown rotation of the earth. The basic technique of VLBI and its mathematical model are presented. The parametrization of earth rotation chosen is described and the resulting model is linearized. A simple analysis of the geometry of the observations leads to some useful hints on achieving maximum sensitivity of the observations with respect to the parameters considered. The basic philosophy for the simulation of data and their analysis through standard least squares adjustment techniques is presented. A number of characteristic network designs based on present and candidate station locations are chosen. The results of the simulations for each design are presented together with a summary of the conclusions.
NASA Astrophysics Data System (ADS)
Harzalla, S.; Belgacem, F. Bin Muhammad; Chabaat, M.
2014-12-01
In this paper, a nondestructive technique is used as a tool to control cracks and microcracks in materials. A simulation by a numerical approach such as the finite element method is employed to detect cracks and, eventually, to study their propagation using a crucial parameter, the stress intensity factor. This approach has been used in the aircraft industry to control cracks. Besides, it makes it possible to highlight the defects of parts while preserving the integrity of the controlled products. On the other hand, it is proven that the reliability of the control of defects gives convincing results for the improvement of the quality and the safety of the material. Eddy current testing (ECT) is a standard technique in industry for the detection of surface breaking flaws in magnetic materials such as steels. In this context, simulation tools can be used to improve the understanding of experimental signals, optimize the design of sensors, or evaluate the performance of ECT procedures. CEA-LIST has developed for many years semi-analytical models embedded into the simulation platform CIVA dedicated to non-destructive testing. The developments presented herein address the case of flaws located inside a planar and magnetic medium. Simulation results are obtained through the application of the Volume Integral Method (VIM). When considering the ECT of a single flaw, a system of two differential equations is derived from Maxwell's equations. The numerical resolution of the system is carried out using the classical Galerkin variant of the Method of Moments. Besides, a probe response is calculated by application of the Lorentz reciprocity theorem. Finally, the approach itself as well as comparisons between simulation results and measured data are presented.
Olea, E; Fondarella, A; Sánchez, C; Iriarte, I; Almeida, M V; Martínez de Salinas, A
2013-12-01
Evaluation of pain and degree of satisfaction in patients undergoing ultrasound-assisted peripheral regional block for the treatment of idiopathic palmar hyperhidrosis with botulinum toxin. A descriptive, observational study of patients with palmar hyperhidrosis treated with botulinum toxin A, who underwent ultrasound-guided peripheral regional block of the median and ulnar nerves with 3 ml of mepivacaine 1% in each one. The radial nerve block was injected in the anatomical snuffbox. After the block was established, the dermatologist performed a mapping and injected around 100 IU of botulinum toxin across the whole palm. The pain experienced during the injection of botulinum toxin was evaluated on a verbal numerical scale (from 0 to 10), along with the degree of satisfaction with the anesthetic technique and the post-anesthetic complications. A total of 40 patients were enrolled in the study, 11 men and 29 women, with no significant differences. The pain intensity assessed with the verbal numerical scale was 1.03 (standard deviation 1.37). No patient had a value greater than 5. The degree of patient satisfaction with the anesthetic technique was very good for 85% of the patients, and good for 7.5%. There were no complications related to the type of anesthesia. The ultrasound-assisted peripheral regional block could be a simple, effective and safe technique for patients undergoing palmar injection of botulinum toxin. Pain intensity was very low, and it provided a very good level of satisfaction in most patients. Copyright © 2013 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España. All rights reserved.
Song, Fang; Zheng, Chuantao; Yan, Wanhong; Ye, Weilin; Wang, Yiding; Tittel, Frank K
2017-12-11
To suppress sensor noise with unknown statistical properties, a novel self-adaptive direct laser absorption spectroscopy (SA-DLAS) technique was proposed by incorporating a recursive least-squares (RLS) self-adaptive denoising (SAD) algorithm and a 3291 nm interband cascade laser (ICL) for methane (CH4) detection. Background noise was suppressed by introducing an electrical-domain noise channel and an expectation-known-based RLS SAD algorithm. Numerical simulations and measurements were carried out to validate the function of the SA-DLAS technique by imposing low-frequency, high-frequency, white Gaussian and hybrid noise on the ICL scan signal. Sensor calibration, a stability test and dynamic response measurements were performed for the SA-DLAS sensor using standard or diluted CH4 samples. With only the intrinsic sensor noise considered, an Allan deviation of ~43.9 ppbv with a ~6 s averaging time was obtained, and it was further decreased to 6.3 ppbv with a ~240 s averaging time through the use of self-adaptive filtering (SAF). The reported SA-DLAS technique shows enhanced sensitivity compared to a DLAS sensor using a traditional sensing architecture and filtering method. Indoor and outdoor atmospheric CH4 measurements were conducted to validate the normal operation of the reported SA-DLAS technique.
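The abstract's denoising step rests on a reference noise channel and an RLS adaptive filter. The following is a minimal generic RLS noise-canceller sketch, not the paper's exact expectation-known SAD algorithm; filter order, forgetting factor, and the toy signals are assumptions.

```python
import numpy as np

def rls_cancel(primary, reference, order=4, lam=0.99, delta=1e2):
    """Generic RLS adaptive noise canceller (illustrative sketch only).

    primary   -- signal of interest corrupted by noise correlated with `reference`
    reference -- separately measured noise channel
    Returns the cleaned signal (the a-priori estimation error of the filter).
    """
    w = np.zeros(order)                  # filter weights
    P = delta * np.eye(order)            # inverse correlation matrix estimate
    u = np.zeros(order)                  # tapped delay line of the reference
    out = np.zeros_like(primary)
    for n in range(len(primary)):
        u = np.roll(u, 1); u[0] = reference[n]
        k = P @ u / (lam + u @ P @ u)    # RLS gain vector
        e = primary[n] - w @ u           # a-priori error = cleaned sample
        w = w + k * e                    # weight update
        P = (P - np.outer(k, u @ P)) / lam
        out[n] = e
    return out

# Toy usage: a slow absorption-like ramp plus filtered noise seen by both channels.
rng = np.random.default_rng(1)
noise = rng.standard_normal(2000)
signal = np.linspace(0.0, 1.0, 2000)
primary = signal + np.convolve(noise, [0.5, -0.3, 0.1], mode="same")
clean = rls_cancel(primary, noise)
print("residual noise power:", np.mean((clean - signal)[500:] ** 2))
```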
Computational method for analysis of polyethylene biodegradation
NASA Astrophysics Data System (ADS)
Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro
2003-12-01
In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into analysis of our model. We first establish its mathematical foundation in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to solve the inverse problem to determine the consumption rate and the β-oxidation rate numerically, and illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks cultivation of a fungus, Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors of the polyethylene biodegradation posed in modeling are indeed appropriate.
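A hedged sketch of the forward (linearized) model described above: each molecular-weight class loses mass to direct consumption and transfers mass to the next lower class by β-oxidation, giving a linear ODE system dw/dt = M w. All rate values and the initial GPC-like profile below are invented placeholders, not the paper's fitted rates.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 20
a = np.full(n, 0.01)          # direct-consumption rates per class (assumed)
b = np.full(n, 0.05)          # beta-oxidation rates per class (assumed)

M = np.diag(-(a + b))         # each class loses mass to both processes
for i in range(1, n):
    M[i - 1, i] = b[i]        # beta-oxidation shifts mass to the next lower class

# GPC-like initial weight distribution over molecular-weight classes (synthetic)
w0 = np.exp(-0.5 * ((np.arange(n) - 12) / 3.0) ** 2)

sol = solve_ivp(lambda t, w: M @ w, (0.0, 35.0), w0, t_eval=[0.0, 35.0])
print("total weight at day 0 and day 35:", w0.sum(), sol.y[:, -1].sum())
```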
A GENERAL MASS-CONSERVATIVE NUMERICAL SOLUTION FOR THE UNSATURATED FLOW EQUATION
Numerical approximations based on different forms of the governing partial differential equation can lead to significantly different results for unsaturated flow problems. Numerical solution based on the standard h-based form of Richards equation generally yields poor results, ch...
NASA Technical Reports Server (NTRS)
Seshadri, Banavara R.; Smith, Stephen W.
2007-01-01
Variation in constraint through the thickness of a specimen affects the cyclic crack-tip-opening displacement (ΔCTOD). ΔCTOD is a valuable measure of crack growth behavior, indicating closure development, constraint variations and load history effects. Fatigue loading with a continual load reduction was used to simulate the load history associated with fatigue crack growth threshold measurements. The constraint effect on the estimated ΔCTOD is studied by carrying out three-dimensional elastic-plastic finite element simulations. The analysis involves numerical simulation of different standard fatigue threshold test schemes to determine how each test scheme affects ΔCTOD. The American Society for Testing and Materials (ASTM) prescribes standard load reduction procedures for threshold testing using either the constant stress ratio (R) or constant maximum stress intensity (Kmax) methods. Different specimen types defined in the standard, namely the compact tension, C(T), and middle cracked tension, M(T), specimens, were used in this simulation. The threshold simulations were conducted with different initial Kmax values to study the effect on estimated ΔCTOD. During each simulation, ΔCTOD was estimated at every load increment during the load reduction procedure. Previous numerical simulation results indicate that the constant R load reduction method generates a plastic wake resulting in remote crack closure during unloading. Upon reloading, this remote contact location was observed to remain in contact well after the crack tip was fully open. The final region to open is located at the point at which the load reduction was initiated and at the free surface of the specimen. However, simulations carried out using the constant Kmax load reduction procedure did not indicate remote crack closure. Previous analysis results using various starting Kmax values and different load reduction rates have indicated that ΔCTOD is independent of specimen size. A study of the effect of specimen thickness and geometry on the measured ΔCTOD for various load reduction procedures, and its implication for the estimation of fatigue crack growth threshold values, is discussed.
Session on techniques and resources for storm-scale numerical weather prediction
NASA Technical Reports Server (NTRS)
Droegemeier, Kelvin
1993-01-01
The session on techniques and resources for storm-scale numerical weather prediction is reviewed. The recommendations of this group are broken down into three areas: modeling and prediction, data requirements in support of modeling and prediction, and data management. The current status, modeling and technological recommendations, data requirements in support of modeling and prediction, and data management are addressed.
Hernekamp, J F; Reinecke, A; Neubrech, F; Bickert, B; Kneser, U; Kremer, T
2016-04-01
Four-corner fusion is a standard procedure for advanced carpal collapse. Several operative techniques and numerous implants for osseous fixation have been described. Recently, a specially designed locking plate (Aptus©, Medartis, Basel, Switzerland) was introduced. The purpose of this study was to compare functional results of four-corner fusion after osseous fixation using K-wires (standard of care, SOC) versus locking plate fixation. 21 patients who underwent four-corner fusion in our institution between 2008 and 2013 were included in a retrospective analysis. In 11 patients, osseous fixation was performed using locking plates, whereas ten patients underwent bone fixation with conventional K-wires. Outcome parameters were functional outcome, osseous consolidation, patient satisfaction (DASH and Krimmer scores), pain, perioperative morbidity, and the time until patients returned to daily work. Patients were divided into two groups and paired t-tests were performed for statistical analysis. No implant-related complications were observed. Osseous consolidation was achieved in all cases. Differences between groups were not significant regarding active range of motion (AROM), pain and function. Overall patient satisfaction was acceptable in all cases; differences in the DASH questionnaire and the Krimmer questionnaire were not significant. One patient of the plate group required conversion to total wrist arthrodesis without implant-related complications. Both techniques for four-corner fusion have similar healing rates. Using the more expensive locking implant avoids a second operation for K-wire removal, but no statistical differences were detected in functional outcome or in patient satisfaction when compared to SOC.
Numerical Modeling of Inclusion Behavior in Liquid Metal Processing
NASA Astrophysics Data System (ADS)
Bellot, Jean-Pierre; Descotes, Vincent; Jardy, Alain
2013-09-01
Thermomechanical performance of metallic alloys is directly related to the metal cleanliness, which has always been a challenge for metallurgists. During liquid metal processing, particles can grow or decrease in size either by mass transfer with the liquid phase or by agglomeration/fragmentation mechanisms. As a function of the numerical density of inclusions and of the hydrodynamics of the reactor, different numerical modeling approaches are proposed: in the case of an isolated particle, the Lagrangian technique coupled with a dissolution model is applied, whereas in the opposite case of large inclusion phase concentration, the population balance equation must be solved. Three examples of numerical modeling studies carried out at Institut Jean Lamour are discussed. They illustrate the application of the Lagrangian technique (for an isolated exogenous inclusion in a titanium bath) and the Eulerian technique without or with the aggregation process: for the precipitation and growth of inclusions at the solidification front of a Maraging steel, and for endogenous inclusions in the molten steel bath of a gas-stirred ladle, respectively.
NASA Technical Reports Server (NTRS)
Thomas, P. D.
1979-01-01
The theoretical foundation and formulation of a numerical method for predicting the viscous flowfield in and about isolated three dimensional nozzles of geometrically complex configuration are presented. High Reynolds number turbulent flows are of primary interest for any combination of subsonic, transonic, and supersonic flow conditions inside or outside the nozzle. An alternating-direction implicit (ADI) numerical technique is employed to integrate the unsteady Navier-Stokes equations until an asymptotic steady-state solution is reached. Boundary conditions are computed with an implicit technique compatible with the ADI technique employed at interior points of the flow region. The equations are formulated and solved in a boundary-conforming curvilinear coordinate system. The curvilinear coordinate system and computational grid is generated numerically as the solution to an elliptic boundary value problem. A method is developed that automatically adjusts the elliptic system so that the interior grid spacing is controlled directly by the a priori selection of the grid spacing on the boundaries of the flow region.
Preserving Symplecticity in the Numerical Integration of Linear Beam Optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, Christopher K.
2017-07-01
Presented are mathematical tools and methods for the development of numerical integration techniques that preserve the symplectic condition inherent to mechanics. The intended audience is beam physicists with backgrounds in numerical modeling and simulation, with particular attention to beam optics applications. The paper focuses on Lie methods that are inherently symplectic regardless of the integration accuracy order. Section 2 provides the mathematical tools used in the sequel and necessary for the reader to extend the covered techniques. Section 3 places those tools in the context of charged-particle beam optics; in particular, linear beam optics is presented in terms of a Lie algebraic matrix representation. Section 4 presents numerical stepping techniques with particular emphasis on a third-order leapfrog method. Section 5 discusses the modeling of field imperfections with particular attention to the fringe fields of quadrupole focusing magnets. The direct computation of a third order transfer matrix for a fringe field is shown.
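As a minimal sketch of the leapfrog idea referenced above, the fragment below composes one kick-drift-kick step through a simple linear focusing element and checks the symplectic condition M^T J M = J numerically; the focusing strength and step size are arbitrary illustrative values.

```python
import numpy as np

# One leapfrog (kick-drift-kick) step for x' = p, p' = -k x is a linear map
# on (x, p); symplecticity of the 2x2 transfer matrix M means M^T J M = J.
k, ds = 2.0, 0.1                                    # focusing strength, step (assumed)

K = np.array([[1.0, 0.0], [-k * ds / 2.0, 1.0]])    # half kick
D = np.array([[1.0, ds], [0.0, 1.0]])               # full drift
M = K @ D @ K                                        # leapfrog transfer matrix

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
print("M^T J M - J =\n", M.T @ J @ M - J)            # ~0 for any step size ds
print("det M =", np.linalg.det(M))                   # exactly 1 up to roundoff
```

The point of the check: the symplectic condition holds for every ds, independent of how accurately the step approximates the true flow, which is the sense in which such integrators are "inherently symplectic regardless of the integration accuracy order."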
Diagnostics of dust content in spiral galaxies: Numerical simulations of radiative transfer
NASA Technical Reports Server (NTRS)
Byun, Y. I.; Freeman, K. C.; Kylafis, N. D.
1994-01-01
In order to find the best observable diagnostics for the amount of internal extinction within spiral galaxies, we have constructed realistic models for disk galaxies with immersed dust layers. The radiative transfer including both scattering and absorption has been computed for a range of model galaxies in various orientations. Standard galaxy surface photometry techniques were then applied to the numerical data to illustrate how different observables such as total magnitude, color and luminosity distribution behave under given conditions of dust distribution. This work reveals a set of superior diagnostics for the dust in the disk. These include not only the integrated parameters, but also the apparent disk structural parameters, the amplitude of the asymmetry between the near and far sides of the galaxy as divided by the apparent major axis and their dependence on the orientation of the galaxy with respect to the observer. Combining the above diagnostics with our impressions of real galaxies, we arrive at the qualitative conclusion that galaxy disks are generally optically thin. Quantitative conclusions will appear in subsequent work.
A particle finite element method for machining simulations
NASA Astrophysics Data System (ADS)
Sabel, Matthias; Sator, Christian; Müller, Ralf
2014-07-01
The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM algorithm is given, which is followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested by a simple example. Also the kinematics and a suitable finite element formulation are introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results of process parameters, such as the cutting force.
Bifurcation structure of a wind-driven shallow water model with layer-outcropping
NASA Astrophysics Data System (ADS)
Primeau, François W.; Newman, David
The steady state bifurcation structure of the double-gyre wind-driven ocean circulation is examined in a shallow water model where the upper layer is allowed to outcrop at the sea surface. In addition to the classical jet-up and jet-down multiple equilibria, we find a new regime in which one of the equilibrium solutions has a large outcropping region in the subpolar gyre. Time dependent simulations show that the outcropping solution equilibrates to a stable periodic orbit with a period of 8 months. Co-existing with the periodic solution is a stable steady state solution without outcropping. A numerical scheme that has the unique advantage of being differentiable while still allowing layers to outcrop at the sea surface is used for the analysis. In contrast, standard schemes for solving layered models with outcropping are non-differentiable and have an ill-defined Jacobian making them unsuitable for solution using Newton's method. As such, our new scheme expands the applicability of numerical bifurcation techniques to an important class of ocean models whose bifurcation structure had hitherto remained unexplored.
NASA Astrophysics Data System (ADS)
Ireland, Peter J.; Collins, Lance R.
2012-11-01
Turbulence-induced collision of inertial particles may contribute to the rapid onset of precipitation in warm cumulus clouds. The particle collision frequency is determined from two parameters: the radial distribution function g(r) and the mean inward radial relative velocity.
Thermographic Analysis of Stress Distribution in Welded Joints
NASA Astrophysics Data System (ADS)
Piršić, T.; Krstulović Opara, L.; Domazet, Ž.
2010-06-01
The fatigue life prediction of welded joints based on S-N curves in conjunction with nominal stresses is generally not reliable. The stress distribution in the welded area, affected by geometrical inhomogeneity, irregular welded surfaces and the weld toe radius, is quite complex, so the local (structural) stress concept is accepted in recent papers. The aim of this paper is to determine the stress distribution in plate-type aluminum welded joints, to analyze the reliability of TSA (Thermal Stress Analysis) in this kind of investigation, and to obtain numerical values for stress concentration factors for practical use. The stress distribution in aluminum butt and fillet welded joints is determined by using three different methods: strain gauge measurement, thermal stress analysis and FEM. The obtained results show good agreement: the TSA results mutually confirmed the FEM model and the stresses measured by strain gauges. According to the obtained results, it may be stated that TSA, as a relatively new measurement technique, may in the future become a standard tool for the experimental investigation of stress concentration and fatigue in welded joints, which can help to develop more accurate numerical tools for fatigue life prediction.
Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle, where good agreement with the experimental results is observed. Both the achieved speedups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability.
Fourier/Chebyshev methods for the incompressible Navier-Stokes equations in finite domains
NASA Technical Reports Server (NTRS)
Corral, Roque; Jimenez, Javier
1992-01-01
A fully spectral numerical scheme is presented for the incompressible Navier-Stokes equations in domains which are infinite or semi-infinite in one dimension. The domain is not mapped, and standard Fourier or Chebyshev expansions can be used. The handling of the infinite domain does not introduce any significant overhead. The scheme assumes that the vorticity in the flow is essentially concentrated in a finite region, which is represented numerically by standard spectral collocation methods. To accommodate the slow exponential decay of the velocities at infinity, extra expansion functions are introduced, which are handled analytically. A detailed error analysis is presented, and two applications to direct numerical simulation of turbulent flows are discussed in relation to the numerical performance of the scheme.
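For readers unfamiliar with the Chebyshev collocation machinery underlying such schemes, the following is a minimal sketch of the standard Chebyshev differentiation matrix on [-1, 1] (the classical recipe, not this paper's full solver), verified on a smooth test function.

```python
import numpy as np

def cheb(N):
    """Standard Chebyshev collocation differentiation matrix and nodes on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)           # Chebyshev-Lobatto points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))    # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # negative row sums on diagonal
    return D, x

# Spectral accuracy check: differentiate exp(x) at the collocation points.
D, x = cheb(16)
print("max error:", np.abs(D @ np.exp(x) - np.exp(x)).max())  # ~1e-13 for N = 16
```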
Galerkin v. least-squares Petrov–Galerkin projection in nonlinear model reduction
Carlberg, Kevin Thomas; Barone, Matthew F.; Antil, Harbir
2016-10-20
Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. This work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
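The distinction between the two projections can be made concrete on a toy linear problem. The sketch below performs one backward-Euler step of du/dt = A u: Galerkin forces the residual to be orthogonal to the basis, while LSPG minimizes the residual norm at the time-discrete level. The operator, basis, and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r, dt = 50, 5, 0.1
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable-ish full operator (toy)
V, _ = np.linalg.qr(rng.standard_normal((n, r)))     # orthonormal reduced basis (assumed)
u0 = V @ rng.standard_normal(r)                      # start in the range of the basis

# Full-order backward-Euler residual: r(u) = (I - dt*A) u - u0
B = np.eye(n) - dt * A

# Galerkin: enforce V^T r(V y) = 0  (residual orthogonal to the basis)
y_g = np.linalg.solve(V.T @ B @ V, V.T @ u0)

# LSPG: minimize ||r(V y)||_2 over y (least squares at the time-discrete level)
y_l, *_ = np.linalg.lstsq(B @ V, u0, rcond=None)

u_full = np.linalg.solve(B, u0)                      # full-order reference step
print("Galerkin error:", np.linalg.norm(V @ y_g - u_full))
print("LSPG error:    ", np.linalg.norm(V @ y_l - u_full))
```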
NASA Technical Reports Server (NTRS)
Tuccillo, J. J.
1984-01-01
Numerical Weather Prediction (NWP), for both operational and research purposes, requires not only fast computational speed but also large memory. A technique for solving the Primitive Equations for atmospheric motion on the CYBER 205, as implemented in the Mesoscale Atmospheric Simulation System, which is fully vectorized and requires substantially less memory than other techniques such as the Leapfrog or Adams-Bashforth schemes, is discussed. The technique presented uses the Euler-backward time marching scheme. Also discussed are several techniques for reducing the computational time of the model by replacing slow intrinsic routines with faster algorithms which use only hardware vector instructions.
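For reference, a minimal sketch of the Euler-backward (Matsuno) time-marching scheme named above, applied to a scalar test equation du/dt = f(u): a forward predictor followed by a corrector that evaluates the tendency at the predicted state. The test equation and step size are arbitrary choices for this illustration.

```python
import numpy as np

def euler_backward_step(u, f, dt):
    """One Euler-backward (Matsuno) step: predict forward, correct with the
    tendency evaluated at the predicted state."""
    u_pred = u + dt * f(u)        # forward-Euler predictor
    return u + dt * f(u_pred)     # corrector using the predicted tendency

# Toy check on du/dt = -u, exact solution exp(-t).
f = lambda u: -u
u, dt = 1.0, 0.01
for _ in range(100):              # integrate to t = 1
    u = euler_backward_step(u, f, dt)
print("numerical:", u, " exact:", np.exp(-1.0))
```

The scheme damps high-frequency modes, one reason variants of it have been attractive for atmospheric models, though whether that motivated its use here is not stated in the abstract.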
Twist Model Development and Results from the Active Aeroelastic Wing F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Lizotte, Andrew M.; Allen, Michael J.
2007-01-01
Understanding the wing twist of the active aeroelastic wing (AAW) F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption. This technique produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.
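The regression step described above reduces, in its simplest form, to an ordinary least-squares fit of measured twist against aircraft parameters. The sketch below uses entirely synthetic stand-ins for the flight data; the parameter names, units, and coefficients are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
qbar = rng.uniform(100.0, 800.0, n)       # dynamic pressure (assumed units)
aileron = rng.uniform(-10.0, 10.0, n)     # surface position, deg (assumed)
alpha = rng.uniform(0.0, 8.0, n)          # angle of attack, deg (assumed)

# Synthetic "measured" twist with a linear dependence plus sensor-like noise.
twist = 0.002 * qbar + 0.08 * aileron - 0.05 * alpha + 0.1 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), qbar, aileron, alpha])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, twist, rcond=None)
pred = X @ coef
print("fitted coefficients:", coef)
print("RMS residual:", np.sqrt(np.mean((pred - twist) ** 2)))
```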
Jet Substructure at the Large Hadron Collider : Experimental Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asquith, Lily; Campanelli, Mario; Delitzsch, Chris
Jet substructure has emerged to play a central role at the Large Hadron Collider (LHC), where it has provided numerous innovative new ways to search for new physics and to probe the Standard Model, particularly in extreme regions of phase space. In this article we focus on a review of the development and use of state-of-the-art jet substructure techniques by the ATLAS and CMS experiments. ALICE and LHCb have been probing fragmentation functions since the start of the LHC and have also recently started studying other jet substructure techniques. It is likely that in the near future all LHC collaborations will make significant use of jet substructure and grooming techniques. Much of the work in this field in recent years has been galvanized by the Boost Workshop Series, which continues to inspire fruitful collaborations between experimentalists and theorists. We hope that this review will prove a useful introduction and reference to experimental aspects of jet substructure at the LHC. A companion overview of recent progress in theory and machine learning approaches is given in arXiv:1709.04464; the complete review will be submitted to Reviews of Modern Physics.
Fast alternative Monte Carlo formalism for a class of problems in biophotonics
NASA Astrophysics Data System (ADS)
Miller, Steven D.
1997-12-01
A practical and effective alternative Monte Carlo formalism is presented that rapidly finds flux solutions to the radiative transport equation for a class of problems in biophotonics, namely wide-beam irradiance of finite, optically anisotropic, homogeneous or heterogeneous biomedia, which both strongly scatter and absorb light. Such biomedia include liver, tumors, blood, or highly blood-perfused tissues. As Fermat rays comprising a wide coherent (laser) beam enter the tissue, they evolve into a bundle of random optical paths or trajectories due to scattering. Overall, this can be physically interpreted as a bundle of Markov trajectories traced out by a 'gas' of Brownian-like point photons being successively scattered and absorbed. By considering the cumulative flow of a statistical bundle of trajectories through interior data planes, the effective equivalent information of the (generally unknown) analytical flux solutions of the transfer equation rapidly emerges. Unlike standard Monte Carlo techniques, which evaluate scalar fluence, this technique is faster, more efficient, and simpler to apply for this specific class of optical situations. Other analytical or numerical techniques can become unwieldy, lack viability, or simply be more difficult to apply. Illustrative flux calculations are presented for liver, blood, and tissue-tumor-tissue systems.
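A heavily simplified sketch of the trajectory-bundle picture, assuming a 1D slab, isotropic rescattering, and implicit absorption by weight reduction; the optical coefficients and plane positions are illustrative values, not tissue properties from the paper, and this is not the author's exact formalism.

```python
import numpy as np

rng = np.random.default_rng(4)
mu_s, mu_a = 10.0, 1.0                  # scattering/absorption coefficients, 1/mm (assumed)
mu_t, albedo = mu_s + mu_a, mu_s / (mu_s + mu_a)
depth = 5.0                             # slab thickness, mm (assumed)
planes = np.array([1.0, 2.0, 3.0, 4.0]) # interior data planes, mm (assumed)

crossings = np.zeros(len(planes))
n_photons = 5000
for _ in range(n_photons):
    z, mu_z, w = 0.0, 1.0, 1.0          # depth, direction cosine, photon weight
    while 0.0 <= z <= depth and w > 1e-4:
        step = -np.log(1.0 - rng.random()) / mu_t     # sampled free path length
        z_new = z + mu_z * step
        # tally weighted crossings of each interior plane (either direction)
        hit = (np.minimum(z, z_new) < planes) & (planes <= np.maximum(z, z_new))
        crossings[hit] += w
        z = z_new
        w *= albedo                                   # implicit absorption
        mu_z = 2.0 * rng.random() - 1.0               # isotropic rescattering
print("relative flux through planes:", crossings / n_photons)
```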
NASA Astrophysics Data System (ADS)
Colla, C.; Gabrielli, E.
2017-01-01
To evaluate the complex behaviour of masonry structures under mechanical loads, numerical models are developed and continuously implemented at diverse scales, whilst, from an experimental viewpoint, laboratory standard mechanical tests are usually carried out by instrumenting the specimens via traditional measuring devices. Extracted values collected in the few points where the tools were installed are assumed to represent the behaviour of the whole specimen, but this may be quite optimistic or approximate. Optical monitoring techniques may help in overcoming some of these limitations by providing full-field visualization of mechanical parameters. Photoelasticity and the more recent DIC, employed to monitor masonry columns during compression tests, are presented here, and a lab case study is compared, listing procedures, data acquisition, advantages and limitations. It is shown that the information recorded by traditional measuring tools must be considered limited to the specific instrumented points. Instead, DIC in particular among the optical techniques is providing both a very precise global and local picture of the masonry performance, opening new horizons towards a deeper knowledge of this complex construction material. The applicability of an innovative DIC procedure to cultural heritage constructions is also discussed.
Performance evaluation of a digital mammography unit using a contrast-detail phantom
NASA Astrophysics Data System (ADS)
Elizalde-Cabrera, J.; Brandan, M.-E.
2015-01-01
The relation between image quality and mean glandular dose (MGD) has been studied for a Senographe 2000D mammographic unit used for research in our laboratory. The magnitudes were evaluated for a clinically relevant range of acrylic thicknesses and radiological techniques. The CDMAM phantom was used to determine the contrast-detail curve. Also, an alternative method based on the analysis of signal-to-noise (SNR) and contrast-to-noise (CNR) ratios from the CDMAM image was proposed and applied. A simple numerical model was utilized to successfully interpret the results. Optimum radiological techniques were determined using the figures of merit FOM_SNR = SNR²/MGD and FOM_CNR = CNR²/MGD. Main results were: the evaluation of the detector response flattening process (it reduces by about one half the spatial non-homogeneities due to the X-ray field), MGD measurements (the values comply with standards), and verification of the automatic exposure control performance (it is sensitive to fluence attenuation, not to contrast). For 4-5 cm phantom thicknesses, the optimum radiological techniques were Rh/Rh 34 kV to optimize SNR, and Rh/Rh 28 kV to optimize CNR.
Twist Model Development and Results From the Active Aeroelastic Wing F/A-18 Aircraft
NASA Technical Reports Server (NTRS)
Lizotte, Andrew; Allen, Michael J.
2005-01-01
Understanding the wing twist of the active aeroelastic wing F/A-18 aircraft is a fundamental research objective for the program and offers numerous benefits. In order to clearly understand the wing flexibility characteristics, a model was created to predict real-time wing twist. A reliable twist model allows the prediction of twist for flight simulation, provides insight into aircraft performance uncertainties, and assists with computational fluid dynamic and aeroelastic issues. The left wing of the aircraft was heavily instrumented during the first phase of the active aeroelastic wing program allowing deflection data collection. Traditional data processing steps were taken to reduce flight data, and twist predictions were made using linear regression techniques. The model predictions determined a consistent linear relationship between the measured twist and aircraft parameters, such as surface positions and aircraft state variables. Error in the original model was reduced in some cases by using a dynamic pressure-based assumption and by using neural networks. These techniques produced excellent predictions for flight between the standard test points and accounted for nonlinearities in the data. This report discusses data processing techniques and twist prediction validation, and provides illustrative and quantitative results.
Application of borehole geophysics to water-resources investigations
Keys, W.S.; MacCary, L.M.
1971-01-01
This manual is intended to be a guide for hydrologists using borehole geophysics in ground-water studies. The emphasis is on the application and interpretation of geophysical well logs, and not on the operation of a logger. It describes in detail those logging techniques that have been utilized within the Water Resources Division of the U.S. Geological Survey, and those used in petroleum investigations that have potential application to hydrologic problems. Most of the logs described can be made by commercial logging service companies, and many can be made with small water-well loggers. The general principles of each technique and the rules of log interpretation are the same, regardless of differences in instrumentation. Geophysical well logs can be interpreted to determine the lithology, geometry, resistivity, formation factor, bulk density, porosity, permeability, moisture content, and specific yield of water-bearing rocks, and to define the source, movement, and chemical and physical characteristics of ground water. Numerous examples of logs are used to illustrate applications and interpretation in various ground-water environments. The interrelations between various types of logs are emphasized, and the following aspects are described for each of the important logging techniques: Principles and applications, instrumentation, calibration and standardization, radius of investigation, and extraneous effects.
The ultrastructural features of the premalignant oral lesions.
Olinici, Doiniţa; Cotrutz, Carmen Elena; Mihali, Ciprian Valentin; Grecu, Vasile Bogdan; Botez, Emanuela Ana; Stoica, Laura; Onofrei, Pavel; Condurache, Oana; Dimitriu, Daniela Cristina
2018-01-01
Premalignant oral lesions are among the most important risk factors for the development of oral squamocellular carcinoma. Recent population studies indicate a significant rise in the prevalence of leukoplakia, erythroplakia/erythroleukoplakia, actinic cheilitis, submucous fibrosis and erosive lichen planus. Since standard histopathological examination has numerous limitations regarding the accurate appreciation of potential malignant transformation, the present study aims to aid these evaluations using the transmission electron microscopy (TEM) technique, which emphasizes ultrastructural changes pertaining to this pathology. Oral mucosa fragments collected from 43 patients that were clinically and histopathologically diagnosed with leukoplakia, erosive actinic cheilitis and erosive lichen planus have been processed through the classic technique for the examination using TEM and were examined using a Philips CM100 transmission electron microscope. The electron microscopy study has confirmed the histopathological diagnosis of the tissue samples examined using photonic microscopy and has furthermore revealed a series of ultrastructural details that on the one hand indicate the tendency for malignant transformation, and on the other reveal characteristic features of tumor development. All the details furnished by TEM complete the overall picture of morphological changes, specific to these lesions, indicating the importance of using these techniques in establishing both a correct diagnosis and prognosis.
Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates
NASA Astrophysics Data System (ADS)
Moore, Christopher J.; Gair, Jonathan R.
2014-12-01
Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications, these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this Letter, a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalized over using a prior distribution constructed by applying Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
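A minimal sketch of the interpolation step described above: Gaussian process regression with a squared-exponential kernel predicts a waveform-difference value at a new parameter point from a small training set. The kernel, hyperparameters, and training values are synthetic stand-ins, not the Letter's waveform data.

```python
import numpy as np

def k(a, b, ell=0.3, sig=1.0):
    """Squared-exponential covariance between 1D parameter arrays a and b."""
    return sig**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

theta_train = np.linspace(0.0, 1.0, 8)      # training parameter values (assumed)
d_train = np.sin(3.0 * theta_train)         # stand-in waveform differences
theta_star = np.array([0.37])               # query parameter

K = k(theta_train, theta_train) + 1e-10 * np.eye(8)   # jitter for stability
K_s = k(theta_star, theta_train)
alpha = np.linalg.solve(K, d_train)

mean = K_s @ alpha                          # GP posterior mean at the query
var = k(theta_star, theta_star) - K_s @ np.linalg.solve(K, K_s.T)
print("predicted difference:", mean[0], "+/-", np.sqrt(var[0, 0]))
```

The posterior variance is what makes the marginalization possible: it quantifies how uncertain the interpolated waveform difference is away from the training templates.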
López-Guerra, Enrique A
2017-01-01
We explore the contact problem of a flat-end indenter penetrating intermittently a generalized viscoelastic surface, containing multiple characteristic times. This problem is especially relevant for nanoprobing of viscoelastic surfaces with the highly popular tapping-mode AFM imaging technique. By focusing on the material perspective and employing a rigorous rheological approach, we deliver analytical closed-form solutions that provide physical insight into the viscoelastic sources of repulsive forces, tip–sample dissipation and virial of the interaction. We also offer a systematic comparison to the well-established standard harmonic excitation, which is the case relevant for dynamic mechanical analysis (DMA) and for AFM techniques where tip–sample sinusoidal interaction is permanent. This comparison highlights the substantial complexity added by the intermittent-contact nature of the interaction, which precludes the derivation of straightforward equations as is the case for the well-known harmonic excitations. The derivations offered have been thoroughly validated through numerical simulations. Despite the complexities inherent to the intermittent-contact nature of the technique, the analytical findings highlight the potential feasibility of extracting meaningful viscoelastic properties with this imaging method.
Multi-scale imaging and elastic simulation of carbonates
NASA Astrophysics Data System (ADS)
Faisal, Titly Farhana; Awedalkarim, Ahmed; Jouini, Mohamed Soufiane; Jouiad, Mustapha; Chevalier, Sylvie; Sassi, Mohamed
2016-05-01
Digital Rock Physics (DRP) is an emerging technology that can be used to generate high quality, fast and cost effective special core analysis (SCAL) properties compared to conventional experimental and modeling techniques. The primary workflow of DRP consists of three elements: 1) image the rock sample using high resolution 3D scanning techniques (e.g. micro-CT, FIB/SEM), 2) process and digitize the images by segmenting the pore and matrix phases, 3) simulate the desired physical properties of the rocks, such as elastic moduli and velocities of wave propagation. A Finite Element Method based algorithm developed by Garboczi and Day [1], which discretizes the basic Hooke's law equation of linear elasticity and solves it numerically using a fast conjugate gradient solver, is used for mechanical and elastic property simulations. This elastic algorithm works directly on the digital images by treating each pixel as an element. The images are assumed to have a periodic constant-strain boundary condition. The bulk and shear moduli of the different phases are required inputs. For standard 1.5" diameter cores, however, the micro-CT scanning resolution (around 40 μm) does not reveal smaller micro- and nano-pores beyond the resolution. This results in an unresolved "microporous" phase, the moduli of which are uncertain. Knackstedt et al. [2] assigned effective elastic moduli to the microporous phase based on self-consistent theory (which gives good estimates of velocities for well cemented granular media). Jouini et al. [3] segmented the core plug CT scan image into three phases and assumed that the microporous phase is represented by a sub-extracted micro plug (which was itself scanned using micro-CT). Currently, elastic numerical simulations based on CT images alone largely overpredict the bulk, shear and Young's moduli when compared to laboratory acoustic tests of the same rocks. For greater accuracy of numerical simulation predictions, better estimates of the moduli inputs for this currently unresolved phase are important. In this work we take a multi-scale imaging approach by first extracting a smaller 0.5" core and scanning it at approximately 13 μm, then further extracting a 5 mm diameter core scanned at 5 μm. From this last scale, regions of interest (containing unresolved areas) are identified for scanning at higher resolutions using the Focused Ion Beam (FIB/SEM) scanning technique, reaching 50 nm resolution. A numerical simulation is run on such a small unresolved section to obtain a better estimate of the effective moduli, which is then used as input for simulations performed using CT images. Results are compared with experimental acoustic test moduli obtained also at two scales: 1.5" and 0.5" diameter cores.
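A common first bracket on the moduli of an unresolved microporous phase comes from the elementary Voigt and Reuss mixing bounds. The sketch below computes them for the bulk modulus; the calcite-like grain moduli, water-like pore fill, and microporosity are assumptions for illustration, not values from this study.

```python
import numpy as np

K_grain = 70.0     # GPa, assumed solid-grain bulk modulus (calcite-like)
K_fluid = 2.2      # GPa, assumed water-like pore fill
phi = 0.25         # assumed microporosity of the unresolved phase

f = np.array([1.0 - phi, phi])         # volume fractions: solid, pore fluid
K_ph = np.array([K_grain, K_fluid])

K_voigt = np.sum(f * K_ph)             # uniform-strain upper bound
K_reuss = 1.0 / np.sum(f / K_ph)       # uniform-stress lower bound
K_hill = 0.5 * (K_voigt + K_reuss)     # Voigt-Reuss-Hill working estimate

print(f"bulk modulus: Reuss {K_reuss:.1f} <= Hill {K_hill:.1f} <= Voigt {K_voigt:.1f} GPa")
```

The shear modulus needs separate care (a dry or fluid pore phase has zero shear stiffness, which drives the Reuss bound to zero), which is one reason imaged sub-resolution simulations, as pursued here, are preferable to mixing bounds alone.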
NASA Astrophysics Data System (ADS)
Moylan, Andrew; Scott, Susan M.; Searle, Anthony C.
2006-02-01
The software tool GRworkbench is an ongoing project in visual, numerical General Relativity at The Australian National University. Recently, GRworkbench has been significantly extended to facilitate numerical experimentation in analytically-defined space-times. The numerical differential geometric engine has been rewritten using functional programming techniques, enabling objects which are normally defined as functions in the formalism of differential geometry and General Relativity to be directly represented as function variables in the C++ code of GRworkbench. The new functional differential geometric engine allows for more accurate and efficient visualisation of objects in space-times and makes new, efficient computational techniques available. Motivated by the desire to investigate a recent scientific claim using GRworkbench, new tools for numerical experimentation have been implemented, allowing for the simulation of complex physical situations.
Role of slack variables in quasi-Newton methods for constrained optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tapia, R.A.
In constrained optimization the technique of converting an inequality constraint into an equality constraint by the addition of a squared slack variable is well known but rarely used. In choosing an active constraint philosophy over the slack variable approach, researchers quickly justify their choice with the standard criticisms: the slack variable approach increases the dimension of the problem, is numerically unstable, and gives rise to singular systems. It is shown that these criticisms of the slack variable approach need not apply and the two seemingly distinct approaches are actually very closely related. In fact, the squared slack variable formulation can be used to develop a superior and more comprehensive active constraint philosophy.
Hypothesis testing of scientific Monte Carlo calculations.
Wallerberger, Markus; Gull, Emanuel
2017-11-01
The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
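A minimal sketch of the idea advocated above: treat a Monte Carlo result as a statistical hypothesis and test it against a known reference with a two-sided z-test. The integrand, sample size, and significance threshold are choices made for this illustration, not the authors' test suite.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# MC estimate of the average of sin(x) over [0, pi]; exact value is 2/pi.
samples = np.sin(rng.uniform(0.0, np.pi, 100000))
reference = 2.0 / np.pi

mean = samples.mean()
stderr = samples.std(ddof=1) / np.sqrt(samples.size)
z = (mean - reference) / stderr
p = 2.0 * stats.norm.sf(abs(z))          # two-sided p-value
print(f"z = {z:.2f}, p = {p:.3f}",
      "-> consistent" if p > 0.01 else "-> flag: possible bug or bias")
```

Run as part of a test suite, such a check fails with a controlled false-positive rate when the estimator is correct, and reliably flags estimators whose mean is biased relative to the quoted statistical error.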
Nonplanar ion acoustic waves with kappa-distributed electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahu, Biswajit
2011-06-15
Using the standard reductive perturbation technique, nonlinear cylindrical and spherical Kadomtsev-Petviashvili equations are derived for the propagation of ion acoustic solitary waves in an unmagnetized collisionless plasma with kappa-distributed electrons and warm ions. The influence of kappa-distributed electrons and the effects caused by the transverse perturbation on cylindrical and spherical ion acoustic waves (IAWs) are investigated. It is observed that an increase in the kappa-distributed electrons (i.e., decreasing κ) decreases the amplitude of the solitary electrostatic potential structures. The numerical results are presented to understand the formation of ion acoustic solitary waves with kappa-distributed electrons in nonplanar geometry. The present investigation may have relevance in the study of propagation of IAWs in space and laboratory plasmas.
Nonlinear Dust Acoustic Waves in a Magnetized Dusty Plasma with Trapped and Superthermal Electrons
NASA Astrophysics Data System (ADS)
Ahmadi, Abrishami S.; Nouri, Kadijani M.
2014-06-01
In this work, the effects of superthermal and trapped electrons on the oblique propagation of nonlinear dust-acoustic waves in a magnetized dusty (complex) plasma are investigated. The dynamics of the electrons are simulated by the generalized Lorentzian (κ) distribution function (DF). The dust grains are cold and their dynamics are simulated by hydrodynamic equations. Using the standard reductive perturbation technique (RPT), a nonlinear modified Korteweg-de Vries (mKdV) equation is derived. Two types of solitary waves, fast and slow dust acoustic solitons, exist in this plasma. Calculations reveal that compressive solitary structures are likely to propagate in this plasma where dust grains are negatively (or positively) charged. The properties of dust acoustic solitons (DASs) are also investigated numerically.
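The solitary-wave solutions of KdV-type equations obtained by the reductive perturbation technique can be verified numerically. As a hedged, generic illustration (using the classic KdV equation u_t + 6uu_x + u_xxx = 0 as a stand-in for this paper's mKdV), the sketch below checks that the sech² soliton makes the traveling-wave residual vanish, using spectral derivatives.

```python
import numpy as np

# For a traveling wave u(x - c t), u_t = -c u_x, so the KdV residual
# -c u_x + 6 u u_x + u_xxx should vanish for u = (c/2) sech^2(sqrt(c)/2 * x).
N, L, c = 1024, 60.0, 1.5
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)        # angular wavenumbers

u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * x) ** 2    # soliton profile at t = 0
u_hat = np.fft.fft(u)
u_x = np.real(np.fft.ifft(1j * k * u_hat))
u_xxx = np.real(np.fft.ifft((1j * k) ** 3 * u_hat))

residual = -c * u_x + 6.0 * u * u_x + u_xxx
print("max |residual|:", np.abs(residual).max())    # small: soliton verified
```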
Light and life in Baltimore--and beyond.
Edidin, Michael
2015-02-03
Baltimore has been the home of numerous biophysical studies using light to probe cells. One such study, quantitative measurement of lateral diffusion of rhodopsin, set the standard for experiments in which recovery after photobleaching is used to measure lateral diffusion. Development of this method from specialized microscopes to commercial scanning confocal microscopes has led to widespread use of the technique to measure lateral diffusion of membrane proteins and lipids, and as well diffusion and binding interactions in cell organelles and cytoplasm. Perturbation of equilibrium distributions by photobleaching has also been developed into a robust method to image molecular proximity in terms of fluorescence resonance energy transfer between donor and acceptor fluorophores. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Design of 6 kW fiber-coupled system for semiconductor laser
NASA Astrophysics Data System (ADS)
Wu, Yulong; Dong, Zhiyong; Chen, Yongqi; Qi, Yunfei; Ding, Lushuang; Zhao, Pengfei; Zou, Yonggang; Xu, Li; Lin, Xuechun
2016-10-01
In this paper, we present the design of a 6 kW fiber-coupled laser diode system using ZEMAX; power scaling and fiber coupling techniques for high-power laser diode stacks are introduced in detail. Beams emitted from eight laser diode stacks, comprising four 960 W stacks with a center wavelength of 938 nm and four 960 W stacks with a center wavelength of 976 nm, are combined and coupled into a standard fiber with a core diameter of 800 μm and a numerical aperture of 0.22. Simulation results show that the final power delivered from the fiber reaches 6283.9 W, the fiber-coupling efficiency is 87%, and the brightness is 8.2 MW/(cm²·sr).
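The quoted brightness follows directly from the stated power, core diameter, and numerical aperture. A worked check, assuming the common small-angle estimate Omega ≈ pi·NA² for the emission solid angle:

```python
import numpy as np

# Brightness B = P / (A * Omega), with values taken from the abstract above.
P = 6283.9                      # W, simulated output power
d = 800e-4                      # cm, core diameter (800 um)
NA = 0.22

A = np.pi * (d / 2.0) ** 2      # emitting area, cm^2
Omega = np.pi * NA ** 2         # solid angle, sr (small-angle assumption)
B = P / (A * Omega)             # W / (cm^2 sr)
print(f"brightness = {B / 1e6:.1f} MW/(cm^2 sr)")   # ~8.2, matching the abstract
```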
Autonomous Infrastructure for Observatory Operations
NASA Astrophysics Data System (ADS)
Seaman, R.
This is an era of rapid change from ancient human-mediated modes of astronomical practice to a vision of ever larger time domain surveys, ever bigger "big data", to increasing numbers of robotic telescopes and astronomical automation on every mountaintop. Over the past decades, facets of a new autonomous astronomical toolkit have been prototyped and deployed in support of numerous space missions. Remote and queue observing modes have gained significant market share on the ground. Archives and data-mining are becoming ubiquitous; astroinformatic techniques and virtual observatory standards and protocols are areas of active development. Astronomers and engineers, planetary and solar scientists, and researchers from communities as diverse as particle physics and exobiology are collaborating on a vast range of "multi-messenger" science. What then is missing?
Tsiminis, Georgios; Chu, Fenghong; Warren-Smith, Stephen C.; Spooner, Nigel A.; Monro, Tanya M.
2013-01-01
A novel approach for identifying explosive species is reported, using Raman spectroscopy in suspended core optical fibers. Numerical simulations are presented that predict the strength of the observed signal as a function of fiber geometry, with the calculated trends verified experimentally and used to optimize the sensors. This technique is used to identify hydrogen peroxide in water solutions at volumes less than 60 nL and to quantify microgram amounts of material using the solvent's Raman signature as an internal calibration standard. The same system, without further modifications, is also used to detect 1,4-dinitrobenzene, a model molecule for nitrobenzene-based explosives such as 2,4,6-trinitrotoluene (TNT). PMID:24084111
Controlling the light shift of the CPT resonance by modulation technique
NASA Astrophysics Data System (ADS)
Tsygankov, E. A.; Petropavlovsky, S. V.; Vaskovskaya, M. I.; Zibrov, S. A.; Velichansky, V. L.; Yakovlev, V. P.
2017-12-01
Motivated by recent developments in atomic frequency standards employing the effect of coherent population trapping (CPT), we propose a theoretical framework for the frequency modulation spectroscopy of CPT resonances. Under realistic assumptions we provide simple yet non-trivial analytical formulae for the major spectroscopic signals, such as the CPT resonance line and the in-phase/quadrature responses. We discuss the influence of the light shift and, in particular, derive a simple expression for the displacement of the resonance as a function of the modulation index. The performance of the model is checked against numerical simulations, with good to excellent agreement. The obtained results can be used in more general models accounting for light absorption in an optically thick medium.
Exploration of Mars by Mariner 9 - Television sensors and image processing.
NASA Technical Reports Server (NTRS)
Cutts, J. A.
1973-01-01
Two cameras equipped with selenium-sulfur slow-scan vidicons were used in the orbital reconnaissance of Mars by the U.S. spacecraft Mariner 9, and the performance characteristics of these devices are presented. Digital image processing techniques were widely applied in the analysis of images of Mars and its satellites. Photometric and geometric distortion corrections, image detail enhancement, and transformation to standard map projections were routinely employed. More specialized applications included picture differencing, limb profiling, solar lighting corrections, noise removal, line plots, and computer mosaics. Information on enhancements, as well as important picture geometric information, was stored in a master library. Display of the library data in graphic or numerical form was accomplished by a data management computer program.
First-Principles Lattice Dynamics Method for Strongly Anharmonic Crystals
NASA Astrophysics Data System (ADS)
Tadano, Terumasa; Tsuneyuki, Shinji
2018-04-01
We review our recent development of a first-principles lattice dynamics method that can treat anharmonic effects nonperturbatively. The method is based on the self-consistent phonon theory, and temperature-dependent phonon frequencies can be calculated efficiently by incorporating recent numerical techniques to estimate anharmonic force constants. The validity of our approach is demonstrated through applications to cubic strontium titanate, where overall good agreement with experimental data is obtained for phonon frequencies and lattice thermal conductivity. We also show the feasibility of highly accurate calculations based on a hybrid exchange-correlation functional within the present framework. Our method provides a new way of studying lattice dynamics in severely anharmonic materials where the standard harmonic approximation and the perturbative approach break down.
Aerodynamic Characteristics of High Speed Trains under Cross Wind Conditions
NASA Astrophysics Data System (ADS)
Chen, W.; Wu, S. P.; Zhang, Y.
2011-09-01
Numerical simulations of the two train models in cross-wind were carried out in this paper. The three-dimensional compressible Reynolds-averaged Navier-Stokes equations (RANS), combined with the standard k-ɛ turbulence model, were solved on multi-block hybrid grids using a second-order upwind finite-volume technique. The impact of the fairing on the aerodynamic characteristics of the train models was analyzed. It is shown that the flow separates on the fairing and a strong vortex is generated; the pressure on the upper middle car decreases dramatically, which leads to a large lift force. The fairing changes the basic flow patterns around the trains. In addition, formulas for the aerodynamic force coefficients at yaw angles up to 24° were derived.
A Comparative Study of Random Patterns for Digital Image Correlation
NASA Astrophysics Data System (ADS)
Stoilov, G.; Kavardzhikov, V.; Pashkouleva, D.
2012-06-01
Digital Image Correlation (DIC) is a computer-based image analysis technique utilizing random patterns, which finds applications in the experimental mechanics of solids and structures. In this paper a comparative study of three simulated random patterns is presented. One of them is generated according to a new algorithm introduced by the authors. A criterion for quantitative evaluation of random patterns based on their autocorrelation functions is introduced. The patterns' deformations are simulated numerically and realized experimentally. The displacements are measured using the DIC method. Tensile tests are performed after printing the generated random patterns on the surfaces of standard iron sheet specimens. It is found that the newly designed random pattern retains relatively good quality up to 20% deformation.
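The quantitative criterion in the paper is built on the autocorrelation function of the pattern; its exact formula is not reproduced here, so the sketch below uses a common stand-in, the half-width of the central autocorrelation peak, computed for a toy binary speckle:

```python
# Sketch: a stand-in quality metric for DIC speckle patterns based on the
# autocorrelation function (the paper's exact criterion is not reproduced).
import numpy as np

def autocorrelation(pattern):
    """Normalized (circular) autocorrelation of a 2-D gray-level pattern."""
    p = pattern - pattern.mean()
    spec = np.fft.fft2(p)
    acf = np.fft.fftshift(np.fft.ifft2(spec * np.conj(spec)).real)
    return acf / acf.max()

def peak_halfwidth(acf):
    """Half-width (pixels) of the central peak along the middle row."""
    row = acf[acf.shape[0] // 2]
    above = np.flatnonzero(row >= 0.5)
    return (above[-1] - above[0]) / 2.0

rng = np.random.default_rng(0)
speckle = (rng.random((256, 256)) > 0.5).astype(float)  # toy binary pattern
print("half-width [px]:", peak_halfwidth(autocorrelation(speckle)))
```

A narrow, sharp central peak indicates a fine, information-rich pattern; a broad peak indicates coarse features that degrade correlation accuracy.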
Optical microfiber-loaded surface plasmonic TE-pass polarizer
NASA Astrophysics Data System (ADS)
Ma, Youqiao; Farrell, Gerald; Semenova, Yuliya; Li, Binghui; Yuan, Jinhui; Sang, Xinzhu; Yan, Binbin; Yu, Chongxiu; Guo, Tuan; Wu, Qiang
2016-04-01
We propose a novel optical microfiber-loaded plasmonic TE-pass polarizer consisting of an optical microfiber placed on top of a silver substrate and demonstrate its performance both numerically by using the finite element method (FEM) and experimentally. The simulation results show that the loss in the fundamental TE mode is relatively low while at the same time the fundamental TM mode suffers from a large metal dissipation loss induced by excitation of the microfiber-loaded surface plasmonic mode. The microfiber was fabricated using the standard microheater brushing-tapering technique. The measured extinction ratio over the range of the C-band wavelengths is greater than 20 dB for the polarizer with a microfiber diameter of 4 μm, which agrees well with the simulation results.
Non-Standard Forms of Human Residence - The Past and the Future
NASA Astrophysics Data System (ADS)
Wojtkun, Grzegorz
2017-10-01
Since the dawn of history, humans have undertaken numerous settlement actions in places seemingly unsuited to the purpose, for example in areas plagued by water or permafrost. Notable European examples are the households built in Friesland, on the Scandinavian Peninsula (Lapland), and in the western part of the Jutland Peninsula. The scale of this building development, the neighbourly relations created within it, and the materials, techniques, and technologies used seem particularly interesting in the context of the so-called negative evolution of the human environment and the disappearance of active-citizen attitudes. For these reasons, research undertaken to assess the usefulness of past construction and settlement solutions should be regarded as well founded.
Consumer perceptions of strain differences in Cannabis aroma
DiVerdi, Joseph A.
2018-01-01
The smell of marijuana (Cannabis sativa L.) is of interest to users, growers, plant breeders, law enforcement and, increasingly, to state-licensed retail businesses. The numerous varieties and strains of Cannabis produce strikingly different scents but to date there have been few, if any, attempts to quantify these olfactory profiles directly. Using standard sensory evaluation techniques with untrained consumers we have validated a preliminary olfactory lexicon for dried cannabis flower, and characterized the aroma profile of eleven strains sold in the legal recreational market in Colorado. We show that consumers perceive differences among strains, that the strains form distinct clusters based on odor similarity, and that strain aroma profiles are linked to perceptions of potency, price, and smoking interest. PMID:29401526
Ion bipolar junction transistors
Tybrandt, Klas; Larsson, Karin C.; Richter-Dahlfors, Agneta; Berggren, Magnus
2010-01-01
Dynamic control of chemical microenvironments is essential for continued development in numerous fields of life sciences. Such control could be achieved with active chemical circuits for delivery of ions and biomolecules. As the basis for such circuitry, we report a solid-state ion bipolar junction transistor (IBJT) based on conducting polymers and thin films of anion- and cation-selective membranes. The IBJT is the ionic analogue to the conventional semiconductor BJT and is manufactured using standard microfabrication techniques. Transistor characteristics along with a model describing the principle of operation, in which an anionic base current amplifies a cationic collector current, are presented. By employing the IBJT as a bioelectronic circuit element for delivery of the neurotransmitter acetylcholine, its efficacy in modulating neuronal cell signaling is demonstrated. PMID:20479274
Numerical human models for accident research and safety - potentials and limitations.
Praxl, Norbert; Adamec, Jiri; Muggenthaler, Holger; von Merten, Katja
2008-01-01
The method of numerical simulation is frequently used in the area of automotive safety. Recently, numerical models of the human body have been developed for the numerical simulation of occupants. Different approaches in modelling the human body have been used: the finite-element and the multibody technique. Numerical human models representing the two modelling approaches are introduced and the potentials and limitations of these models are discussed.
Standardized pivot shift test improves measurement accuracy.
Hoshino, Yuichi; Araujo, Paulo; Ahlden, Mattias; Moore, Charity G; Kuroda, Ryosuke; Zaffagnini, Stefano; Karlsson, Jon; Fu, Freddie H; Musahl, Volker
2012-04-01
The variability of the pivot shift test techniques greatly interferes with achieving a quantitative and generally comparable measurement. The purpose of this study was to compare the variation of the quantitative pivot shift measurements with different surgeons' preferred techniques to a standardized technique. The hypothesis was that standardizing the pivot shift test would improve consistency in the quantitative evaluation when compared with surgeon-specific techniques. A whole lower body cadaveric specimen was prepared to have a low-grade pivot shift on one side and a high-grade pivot shift on the other side. Twelve expert surgeons performed the pivot shift test using (1) their preferred technique and (2) a standardized technique. Electromagnetic tracking was utilized to measure anterior tibial translation and acceleration of the reduction during the pivot shift test. The variation of the measurement was compared between the surgeons' preferred technique and the standardized technique. The anterior tibial translation during the pivot shift test was similar between the surgeons' preferred technique (left 24.0 ± 4.3 mm; right 15.5 ± 3.8 mm) and the standardized technique (left 25.1 ± 3.2 mm; right 15.6 ± 4.0 mm; n.s.). However, the variation in acceleration was significantly smaller with the standardized technique (left 3.0 ± 1.3 mm/s²; right 2.5 ± 0.7 mm/s²) compared with the surgeons' preferred technique (left 4.3 ± 3.3 mm/s²; right 3.4 ± 2.3 mm/s²; both P < 0.01). Standardizing the pivot shift test maneuver provides a more consistent quantitative evaluation and may be helpful in designing future multicenter clinical outcome trials. Diagnostic study, Level I.
NASA Astrophysics Data System (ADS)
Xu, Zexuan; Hu, Bill
2016-04-01
Dual-permeability karst aquifers, consisting of porous media and conduit networks with significantly different hydrological characteristics, are widely distributed in the world. Discrete-continuum numerical models, such as MODFLOW-CFP and CFPv2, have been verified as appropriate approaches to simulate groundwater flow and solute transport in karst hydrogeology. On the other hand, seawater intrusion associated with contamination of fresh groundwater resources has been observed and investigated in a number of coastal aquifers, especially under conditions of sea level rise. Density-dependent numerical models, including SEAWAT, are able to quantitatively evaluate seawater/freshwater interaction processes. A numerical model of variable-density flow and solute transport - conduit flow process (VDFST-CFP) is developed to provide a better description of seawater intrusion and submarine groundwater discharge in a coastal karst aquifer with conduits. The coupled discrete-continuum VDFST-CFP model applies the Darcy-Weisbach equation to simulate non-laminar groundwater flow in the conduit system, which is conceptualized and discretized as pipes, while the Darcy equation is still used in the continuum porous media. Density-dependent groundwater flow and solute transport equations with appropriate density terms in both the conduit and porous media systems are derived and numerically solved using the standard finite difference method with an implicit iteration procedure. Synthetic horizontal and vertical benchmarks are created to validate the newly developed VDFST-CFP model by comparison with other numerical models, such as the variable-density SEAWAT, the coupled constant-density MODFLOW/MT3DMS, and the discrete-continuum CFPv2/UMT3D models. The VDFST-CFP model improves the simulation of density-dependent seawater/freshwater mixing processes and exchanges between conduit and matrix. Continuum numerical models greatly overestimate the flow rate under turbulent flow conditions, while discrete-continuum models provide more accurate results. Parameter sensitivity analysis indicates that the conduit diameter and friction factor, and the matrix hydraulic conductivity and porosity, are important parameters that significantly affect variable-density flow and solute transport simulation. The pros and cons of the model assumptions, conceptual simplifications, and numerical techniques in VDFST-CFP are discussed. In general, the development of the VDFST-CFP model is an innovation in numerical modeling methodology and could be applied to quantitatively evaluate seawater/freshwater interaction in coastal karst aquifers. Keywords: Discrete-continuum numerical model; Variable density flow and transport; Coastal karst aquifer; Non-laminar flow
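The two flow laws coupled by such discrete-continuum models are compact enough to state directly. A minimal sketch under assumed illustrative parameters (not code from VDFST-CFP):

```python
# Sketch: the two flow laws coupled in a discrete-continuum karst model.
# Matrix cells follow Darcy's law; conduit pipes follow Darcy-Weisbach.
# Parameter names and values are illustrative only.
import numpy as np

def darcy_flux(K, dh, dL):
    """Darcy specific discharge q = -K * dh/dL (laminar porous-media flow)."""
    return -K * dh / dL

def darcy_weisbach_velocity(f, d, dh, dL, g=9.81):
    """Mean pipe velocity from the Darcy-Weisbach head loss
    |dh| = f * (dL/d) * v**2 / (2g), solved for v (non-laminar conduit flow)."""
    return np.sqrt(2.0 * g * d * abs(dh) / (f * dL)) * np.sign(-dh)

print(darcy_flux(K=1e-5, dh=-0.5, dL=100.0))                  # matrix, m/s
print(darcy_weisbach_velocity(f=0.03, d=0.5, dh=-0.5, dL=100.0))  # conduit, m/s
```

The contrast between the linear (Darcy) and square-root (Darcy-Weisbach) head dependence is exactly why continuum models overestimate turbulent conduit flow rates.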
Application of up-sampling and resolution scaling to Fresnel reconstruction of digital holograms.
Williams, Logan A; Nehmetallah, Georges; Aylo, Rola; Banerjee, Partha P
2015-02-20
Fresnel transform implementation methods using numerical preprocessing techniques are investigated in this paper. First, it is shown that up-sampling dramatically reduces the minimum reconstruction distance requirements and allows maximal signal recovery by eliminating aliasing artifacts which typically occur at distances much less than the Rayleigh range of the object. Second, zero-padding is employed to arbitrarily scale numerical resolution for the purpose of resolution matching multiple holograms, where each hologram is recorded using dissimilar geometric or illumination parameters. Such preprocessing yields numerical resolution scaling at any distance. Both techniques are extensively illustrated using experimental results.
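The role of zero-padding follows from the sampling relation of the single-FFT Fresnel reconstruction. In the standard notation assumed below (λ wavelength, d reconstruction distance, N samples of pitch Δx), padding to N' > N rescales the reconstruction pitch and thus allows resolution matching between holograms:

```latex
\Delta\xi = \frac{\lambda d}{N\,\Delta x}
\qquad\longrightarrow\qquad
\Delta\xi' = \frac{\lambda d}{N'\,\Delta x} = \frac{N}{N'}\,\Delta\xi .
```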
Algorithms for computing the geopotential using a simple density layer
NASA Technical Reports Server (NTRS)
Morrison, F.
1976-01-01
Several algorithms have been developed for computing the potential and attraction of a simple density layer. These are numerical cubature, Taylor series, and a mixed analytic and numerical integration using a singularity-matching technique. A computer program has been written to combine these techniques for computing the disturbing acceleration on an artificial earth satellite. A total of 1640 equal-area, constant surface density blocks on an oblate spheroid are used. The singularity-matching algorithm is used in the subsatellite region, Taylor series in the surrounding zone, and numerical cubature on the rest of the earth.
Variational optical flow computation in real time.
Bruhn, Andrés; Weickert, Joachim; Feddern, Christian; Kohlberger, Timo; Schnörr, Christoph
2005-05-01
This paper investigates the usefulness of bidirectional multigrid methods for variational optical flow computations. Although these numerical schemes are among the fastest methods for solving equation systems, they are rarely applied in the field of computer vision. We demonstrate how to employ these numerical methods for the treatment of variational optical flow formulations and show that the efficiency of this approach even allows for real-time performance on standard PCs. As a representative of variational optic flow methods, we consider the recently introduced combined local-global (CLG) method. It can be considered a noise-robust generalization of the Horn and Schunck technique. We present a decoupled, as well as a coupled, version of the classical Gauss-Seidel solver, and we develop several multigrid implementations based on a discretization coarse grid approximation. In contrast with standard bidirectional multigrid algorithms, we take advantage of intergrid transfer operators that allow for nondyadic grid hierarchies. As a consequence, no restrictions concerning the image size or the number of traversed levels have to be imposed. In the experimental section, we juxtapose the developed multigrid schemes and demonstrate their superior performance when compared to unidirectional multigrid methods and nonhierarchical solvers. For the well-known 316 x 252 Yosemite sequence, we succeeded in computing the complete set of dense flow fields in three quarters of a second on a 3.06-GHz Pentium 4 PC. This corresponds to a frame rate of 18 flow fields per second, which outperforms the widely used Gauss-Seidel method by almost three orders of magnitude.
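For orientation, one relaxation sweep of the classical Horn-Schunck equations, which the CLG method generalizes, can be written as follows. This is a simplified stand-in, not the paper's coupled multigrid solver:

```python
# Sketch: one relaxation sweep of the classical Horn-Schunck equations
# (simplified stand-in for the CLG solvers developed in the paper).
import numpy as np
from scipy.ndimage import convolve

def hs_sweep(u, v, Ix, Iy, It, alpha):
    """Jacobi-style update; u, v: flow fields, Ix, Iy, It: image derivatives."""
    k = np.array([[0.0, 0.25, 0.0],
                  [0.25, 0.0, 0.25],
                  [0.0, 0.25, 0.0]])
    ubar = convolve(u, k, mode="nearest")   # neighbourhood averages
    vbar = convolve(v, k, mode="nearest")
    common = (Ix * ubar + Iy * vbar + It) / (alpha**2 + Ix**2 + Iy**2)
    return ubar - Ix * common, vbar - Iy * common
```

Iterating such sweeps to convergence on a fine grid is exactly the work a bidirectional multigrid hierarchy accelerates.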
Animal Allergens and Their Presence in the Environment
Zahradnik, Eva; Raulf, Monika
2014-01-01
Exposure to animal allergens is a major risk factor for sensitization and allergic diseases. Besides mites and cockroaches, the most important animal allergens are derived from mammals. Cat and dog allergies affect the general population, whereas allergies to rodents or cattle are an occupational problem. Exposure to animal allergens is not limited to direct contact with animals. Based on their aerodynamic properties, mammalian allergens easily become airborne, attach to clothing and hair, and can be spread from one environment to another. For example, the major cat allergen Fel d 1 was frequently found in homes without pets and in public buildings, including schools, day-care centers, and hospitals. Allergen concentrations in a particular environment showed high variability depending on numerous factors. Assessment of allergen exposure levels is a stepwise process that involves dust collection, allergen quantification, and data analysis. Whereas a number of different dust sampling strategies are used, ELISA assays have prevailed in recent years as the standard technique for quantification of allergen concentrations. This review focuses on allergens arising from domestic, farm, and laboratory animals and describes the ubiquity of mammalian allergens in the human environment. It includes an overview of exposure assessment studies carried out in different indoor settings (homes, schools, workplaces) using numerous sampling and analytical methods, and summarizes significant factors influencing exposure levels. However, methodological differences among studies have contributed to the variability of the findings and make comparisons between studies difficult. Therefore, a general standardization of methods is needed and recommended. PMID:24624129
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Anders T., E-mail: andehans@rm.dk; Lukacova, Slavka; Lassen-Ramshad, Yasmin
2015-01-01
When the standard conformal x-ray technique for craniospinal irradiation is used, it is a challenge to achieve satisfactory dose coverage of the target, including the area of the cribriform plate, while sparing organs at risk. We present a new intensity-modulated radiation therapy (IMRT), noncoplanar technique for irradiating the cranial part and compare it with 3 other techniques and previously published results. A total of 13 patients who had previously received craniospinal irradiation with the standard conformal x-ray technique were reviewed. New treatment plans were generated for each patient using the noncoplanar IMRT-based technique, a coplanar IMRT-based technique, and a coplanar volumetric-modulated arc therapy (VMAT) technique. Dosimetry data for all patients were compared with the corresponding data from the conventional treatment plans. The new noncoplanar IMRT technique substantially reduced the mean dose to organs at risk compared with the standard radiation technique. The 2 other coplanar techniques also reduced the mean dose to some of the critical organs; however, this reduction was not as substantial as that obtained by the noncoplanar technique. Furthermore, compared with the standard technique, the IMRT techniques reduced the total calculated radiation dose delivered to normal tissue, whereas the VMAT technique increased this dose. Additionally, the coverage of the target was significantly improved by the noncoplanar IMRT technique. Compared with the standard technique, the coplanar IMRT and the VMAT techniques did not improve the coverage of the target significantly. All the new planning techniques increased the number of monitor units (MU) used (the noncoplanar IMRT technique by 99%, the coplanar IMRT technique by 122%, and the VMAT technique by 26%), raising concern about leakage radiation. The noncoplanar IMRT technique covered the target better and decreased doses to organs at risk compared with the other techniques. All the new techniques increased the number of MU compared with the standard technique.
Recent advances in numerical PDEs
NASA Astrophysics Data System (ADS)
Zuev, Julia Michelle
In this thesis, we investigate four neighboring topics, all in the general area of numerical methods for solving Partial Differential Equations (PDEs).

Topic 1. Radial Basis Functions (RBF) are widely used for multi-dimensional interpolation of scattered data. This methodology offers smooth and accurate interpolants, which can be further refined, if necessary, by clustering nodes in select areas. We show, however, that local refinements with RBF (in a constant shape parameter $\varepsilon$ regime) may lead to the oscillatory errors associated with the Runge phenomenon (RP). RP is best known in the case of high-order polynomial interpolation, where its effects can be accurately predicted via the Lebesgue constant L (which is based solely on the node distribution). We study the RP and the applicability of the Lebesgue constant (as well as other error measures) in RBF interpolation. Mainly, we allow for a spatially variable shape parameter, and demonstrate how it can be used to suppress RP-like edge effects and to improve the overall stability and accuracy.

Topic 2. Although not as versatile as RBFs, cubic splines are useful for interpolating grid-based data. In 2-D, we consider a patch representation via Hermite basis functions, $s_{i,j}(u,v) = \sum_{m,n} h_{mn} H_m(u) H_n(v)$, as opposed to the standard bicubic representation. Stitching requirements for the rectangular non-equispaced grid yield a 2-D tridiagonal linear system AX = B, where X represents the unknown first derivatives. We discover that the standard methods for solving this NxM system do not take advantage of the spline-specific format of the matrix B. We develop an alternative approach using this specialization of the RHS, which allows us to pre-compute coefficients only once, instead of N times. A MATLAB implementation of our fast 2-D cubic spline algorithm is provided. We confirm analytically and numerically that for large N (N > 200), our method is at least 3 times faster than the standard algorithm and is just as accurate.

Topic 3. The well-known ADI-FDTD method for solving Maxwell's curl equations is second-order accurate in space/time, unconditionally stable, and computationally efficient. We research Richardson extrapolation-based techniques to improve time discretization accuracy for spatially oversampled ADI-FDTD. A careful analysis of temporal accuracy, computational efficiency, and the algorithm's overall stability is presented. Given the context of wave-type PDEs, we find that only a limited number of extrapolations to the ADI-FDTD method are beneficial, if its unconditional stability is to be preserved. We propose a practical approach for choosing the size of a time step that can be used to improve the efficiency of the ADI-FDTD algorithm, while maintaining its accuracy and stability.

Topic 4. Shock waves and their energy dissipation properties are critical to understanding the dynamics controlling MHD turbulence. Numerical advection algorithms used in MHD solvers (e.g., the ZEUS package) introduce undesirable numerical viscosity. To counteract its effects and to resolve shocks numerically, Richtmyer and von Neumann's artificial viscosity is commonly added to the model. We study shock power by analyzing the influence of both artificial and numerical viscosity on energy decay rates. We also analytically characterize the numerical diffusivity of various advection algorithms by quantifying their diffusion coefficients.
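The Richardson extrapolation idea used in Topic 3 is easy to demonstrate on a toy problem: combine two step sizes so that the leading error term cancels. A sketch (illustrative, not from the thesis):

```python
# Sketch: Richardson extrapolation on a toy derivative estimate, the same
# idea Topic 3 applies to the ADI-FDTD time discretization.
import numpy as np

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)        # error is O(h^2)

f, x, h = np.sin, 1.0, 0.1
a_h, a_h2 = central_diff(f, x, h), central_diff(f, x, h / 2)
a_rich = (4.0 * a_h2 - a_h) / 3.0                   # O(h^2) term cancels -> O(h^4)
print(abs(a_h - np.cos(x)), abs(a_rich - np.cos(x)))  # error drops sharply
```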
NASA Technical Reports Server (NTRS)
Canright, R. B., Jr.; Semler, T. T.
1972-01-01
Several approximations to the Doppler broadening functions ψ(x, θ) and χ(x, θ) are compared with respect to accuracy and speed of evaluation. A technique due to A. M. Turing (1943) is shown to be at least as accurate as direct numerical quadrature and somewhat faster than Gaussian quadrature. FORTRAN IV listings are included.
Application of artificial intelligence to impulsive orbital transfers
NASA Technical Reports Server (NTRS)
Burns, Rowland E.
1987-01-01
A generalized technique for the numerical solution of any given class of problems is presented. The technique requires the analytic (or numerical) solution of every applicable equation for all variables that appear in the problem. Conditional blocks are employed to rapidly expand the set of known variables from a minimum of input. The method is illustrated via the use of the Hohmann transfer problem from orbital mechanics.
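The Hohmann transfer used as the illustration has a closed-form solution, which is the kind of analytic relation the conditional blocks would encode. A sketch with the standard two-impulse formulas (the LEO/GEO radii below are illustrative):

```python
# Sketch: closed-form Hohmann transfer between coplanar circular orbits.
# MU is Earth's gravitational parameter; radii are illustrative values.
import math

MU = 3.986004418e14  # m^3/s^2

def hohmann_dv(r1, r2, mu=MU):
    """Delta-v of both impulses and the time of flight."""
    a_t = 0.5 * (r1 + r2)                          # transfer-ellipse semi-major axis
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1.0)
    dv2 = math.sqrt(mu / r2) * (1.0 - math.sqrt(2 * r1 / (r1 + r2)))
    t_flight = math.pi * math.sqrt(a_t**3 / mu)    # half the ellipse period
    return dv1, dv2, t_flight

dv1, dv2, tof = hohmann_dv(r1=6678e3, r2=42164e3)  # LEO -> GEO radii
print(f"dv1={dv1:.1f} m/s  dv2={dv2:.1f} m/s  tof={tof/3600:.2f} h")
```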
NASA Technical Reports Server (NTRS)
Kalben, P.
1972-01-01
The FORTRAN IV Program developed to analyze the flow field associated with scramjet exhaust systems is presented. The instructions for preparing input and interpreting output are described. The program analyzes steady three dimensional supersonic flow by the reference plane characteristic technique. The governing equations and numerical techniques employed are presented in Volume 1 of this report.
A versatile breast reduction technique: Conical plicated central U shaped (COPCUs) mammaplasty
Copcu, Eray
2009-01-01
Background: There have been numerous studies on reduction mammaplasty and its modifications in the literature. The multitude of modifications of reduction mammaplasty indicates that the ideal technique has yet to be found. There are four reasons for seeking the ideal technique. The first is to preserve the functional features of the breast: breastfeeding and arousal. The second and third are to achieve the true geometric and aesthetic shape of the breast with the least scarring, and to minimize the complications of prior surgical techniques without introducing additional ones. The last is to overcome the limitations of previously described techniques. To these ends, we developed a new versatile reduction mammaplasty technique, which we call conical plicated central U-shaped (COPCUs) mammaplasty.

Methods: We performed central plication to achieve a juvenile look in the superior pole of the breast and to prevent postoperative pseudoptosis, and used a central U-shaped flap to achieve maximum nipple-areola complex (NAC) safety and to preserve lactation and nipple sensation. The central U flap was 6 cm in width and the superior conical plication was performed with 2/0 PDS. Preoperative and postoperative standard measures of the breast, including superior pole fullness, were compared.

Results: Forty-six patients were operated on with the above-mentioned technique. All of the patients were satisfied with the functional and aesthetic results, and none had major complications. There were no changes in nipple innervation. Six patients who became pregnant after surgery did not experience any problems with lactation. None of the patients required scar revision.

Conclusion: Our technique is a versatile, safe, reliable technique that creates minimal scarring, avoids previously described disadvantages, provides maximum preservation of function, and can be employed in all breasts regardless of their size. PMID:19575809
Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T
2018-03-01
Gafchromic EBT3 film is widely used for patient specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and red channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated, to identify whether these methods produce better results than the commonly-used non-linear, netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: The Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when irradiated to the standard treatment doses (< 400 cGy), however none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
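The "common" pipeline referred to throughout is compact: compute the red-channel net optical density from transmission scans and fit a non-linear calibration curve. A sketch with placeholder pixel values and doses (not the paper's data):

```python
# Sketch: the common red-channel netOD calibration workflow the paper
# re-evaluates. Pixel values and doses are placeholders, not measured data.
import numpy as np
from scipy.optimize import curve_fit

def net_od(pv_exposed, pv_unexposed):
    """Net optical density from transmission-scan pixel values."""
    return np.log10(pv_unexposed / pv_exposed)

def dose_model(nod, a, b, n):
    """Widely used non-linear calibration D = a*netOD + b*netOD**n."""
    return a * nod + b * nod**n

pv = np.array([46000.0, 38000.0, 33000.0, 27000.0, 22000.0])  # exposed film
nod = net_od(pv, 46000.0)
dose = np.array([0.0, 100.0, 200.0, 400.0, 800.0])            # cGy
popt, _ = curve_fit(dose_model, nod, dose, p0=(1000.0, 4000.0, 2.5))
print("a, b, n =", popt, " D(netOD=0.25) =", dose_model(0.25, *popt), "cGy")
```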
Next generation initiation techniques
NASA Technical Reports Server (NTRS)
Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans
1993-01-01
Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. The third kind of next-generation technique involves strategies to initialize convective scale (non-hydrostatic) models.
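Since variational approaches are defined by the cost function they minimize, the generic 3D-Var form is worth stating. The notation below (background state x_b, observations y, observation operator H, background and observation error covariances B and R) is standard and assumed rather than quoted from the text:

```latex
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathrm{T}}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```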
NASA Astrophysics Data System (ADS)
Parvathi, S. P.; Ramanan, R. V.
2018-06-01
An iterative analytical trajectory design technique that includes perturbations in the departure phase of interplanetary orbiter missions is proposed. Perturbations such as the non-spherical gravity of Earth and third-body perturbations due to the Sun and Moon are included in the analytical design process. In the design process, the design is first obtained using the iterative patched conic technique without the perturbations and is then modified to include them. The modification is based on (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence, including the perturbations, and (ii) quantification of the deviations in the orbital elements at the periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named the biased iterative patched conic technique, does not depend upon numerical integration, and all computations are carried out using closed-form expressions. The improved design is very close to the numerical design. Design analysis using the proposed technique provides realistic insight into the mission aspects. The proposed design is also an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.
Model reduction for Space Station Freedom
NASA Technical Reports Server (NTRS)
Williams, Trevor
1992-01-01
Model reduction is an important practical problem in the control of flexible spacecraft, and a considerable amount of work has been carried out on this topic. Two of the best known methods developed are modal truncation and internal balancing. Modal truncation is simple to implement but can give poor results when the structure possesses clustered natural frequencies, as often occurs in practice. Balancing avoids this problem but has the disadvantages of high computational cost, possible numerical sensitivity problems, and no physical interpretation for the resulting balanced 'modes'. The purpose of this work is to examine the performance of the subsystem balancing technique developed by the investigator when tested on a realistic flexible space structure, in this case a model of the Permanently Manned Configuration (PMC) of Space Station Freedom. This method retains the desirable properties of standard balancing while overcoming the three difficulties listed above. It achieves this by first decomposing the structural model into subsystems of highly correlated modes. Each subsystem is approximately uncorrelated from all others, so balancing them separately and then combining yields comparable results to balancing the entire structure directly. The operation count reduction obtained by the new technique is considerable: a factor of roughly r² if the system decomposes into r equal subsystems. Numerical accuracy is also improved significantly, as the matrices being operated on are of reduced dimension, and the modes of the reduced-order model now have a clear physical interpretation; they are, to first order, linear combinations of repeated-frequency modes.
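Standard internal balancing, the baseline that subsystem balancing accelerates, proceeds by balancing the two Gramians. A sketch of the square-root method for a stable, minimal system (illustrative, not the investigator's code):

```python
# Sketch: square-root balanced truncation of a stable, minimal (A, B, C)
# system; the baseline that subsystem balancing accelerates.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, k):
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(Wc, lower=True)                  # needs a minimal realization
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                      # s: Hankel singular values
    T = Lc @ Vt.T / np.sqrt(s)                     # balancing transformation
    Tinv = (U / np.sqrt(s)).T @ Lo.T
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T     # balanced realization
    return Ab[:k, :k], Bb[:k], Cb[:, :k], s        # keep k dominant states
```

The O(n³) Lyapunov solves are what dominate the cost; doing them per subsystem of size n/r rather than once at size n is the source of the roughly r² speedup quoted above.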
Experimental validation of a new heterogeneous mechanical test design
NASA Astrophysics Data System (ADS)
Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.
2018-05-01
Standard material parameter identification strategies generally use an extensive number of classical tests for collecting the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to base this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, such as digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure in which an indicator capable of evaluating the heterogeneity and richness of the strain information was employed. However, no experimental validation has yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. To this end, the DIC technique and a Finite Element Model Updating (FEMU) inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is applied to the data obtained from the experimental tests, and the results are compared to a reference numerical solution.
Transient well flow in vertically heterogeneous aquifers
NASA Astrophysics Data System (ADS)
Hemker, C. J.
1999-11-01
A solution for the general problem of computing well flow in vertically heterogeneous aquifers is found by an integration of both analytical and numerical techniques. The radial component of flow is treated analytically; the drawdown is a continuous function of the distance to the well. The finite-difference technique is used for the vertical flow component only. The aquifer is discretized in the vertical dimension and the heterogeneous aquifer is considered to be a layered (stratified) formation with a finite number of homogeneous sublayers, where each sublayer may have different properties. The transient part of the differential equation is solved with Stehfest's algorithm, a numerical inversion technique of the Laplace transform. The well is of constant discharge and penetrates one or more of the sublayers. The effect of wellbore storage on early drawdown data is taken into account. In this way drawdowns are found for a finite number of sublayers as a continuous function of radial distance to the well and of time since the pumping started. The model is verified by comparing results with published analytical and numerical solutions for well flow in homogeneous and heterogeneous, confined and unconfined aquifers. Instantaneous and delayed drainage of water from above the water table are considered, combined with the effects of partially penetrating and finite-diameter wells. The model is applied to demonstrate that the transient effects of wellbore storage in unconfined aquifers are less pronounced than previous numerical experiments suggest. Other applications of the presented solution technique are given for partially penetrating wells in heterogeneous formations, including a demonstration of the effect of decreasing specific storage values with depth in an otherwise homogeneous aquifer. The presented solution can be a powerful tool for the analysis of drawdown from pumping tests, because hydraulic properties of layered heterogeneous aquifer systems with partially penetrating wells may be estimated without the need to construct transient numerical models. A computer program based on the hybrid analytical-numerical technique is available from the author.
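Stehfest's algorithm, the Laplace-inversion step of the hybrid scheme, approximates f(t) from samples of its transform on the real axis. A sketch validated on F(s) = 1/(s+1), whose inverse is e^(-t):

```python
# Sketch: Stehfest's numerical inverse Laplace transform,
# f(t) ~ (ln 2 / t) * sum_i V_i * F(i * ln2 / t), demonstrated on a
# known transform pair. Illustrative, not the author's program.
import math

def stehfest_weights(N):
    """Stehfest coefficients V_i for even N."""
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k)) / (
                math.factorial(N // 2 - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        V.append((-1) ** (i + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    ln2 = math.log(2.0)
    V = stehfest_weights(N)
    return ln2 / t * sum(V[i - 1] * F(i * ln2 / t) for i in range(1, N + 1))

# F(s) = 1/(s+1) inverts to exp(-t); compare at t = 1.
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0), math.exp(-1.0))
```

The method needs only real-valued transform evaluations, which is why it pairs naturally with the analytically derived drawdown solutions in Laplace space.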
Baker, Jay B; Maskell, Kevin F; Matlock, Aaron G; Walsh, Ryan M; Skinner, Carl G
2015-07-01
We compared intubating with a preloaded bougie (PB) against standard bougie technique in terms of success rates, time to successful intubation and provider preference on a cadaveric airway model. In this prospective, crossover study, healthcare providers intubated a cadaver using the PB technique and the standard bougie technique. Participants were randomly assigned to start with either technique. Following standardized training and practice, procedural success and time for each technique was recorded for each participant. Subsequently, participants were asked to rate their perceived ease of intubation on a visual analogue scale of 1 to 10 (1=difficult and 10=easy) and to select which technique they preferred. 47 participants with variable experience intubating were enrolled at an emergency medicine intern airway course. The success rate of all groups for both techniques was equal (95.7%). The range of times to completion for the standard bougie technique was 16.0-70.2 seconds, with a mean time of 29.7 seconds. The range of times to completion for the PB technique was 15.7-110.9 seconds, with a mean time of 29.4 seconds. There was a non-significant difference of 0.3 seconds (95% confidence interval -2.8 to 3.4 seconds) between the two techniques. Participants rated the relative ease of intubation as 7.3/10 for the standard technique and 7.6/10 for the preloaded technique (p=0.53, 95% confidence interval of the difference -0.97 to 0.50). Thirty of 47 participants subjectively preferred the PB technique (p=0.039). There was no significant difference in success or time to intubation between standard bougie and PB techniques. The majority of participants in this study preferred the PB technique. Until a clear and clinically significant difference is found between these techniques, emergency airway operators should feel confident in using the technique with which they are most comfortable.
Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Sethian, James A.
1997-01-01
Borrowing from techniques developed for conservation law equations, numerical schemes which discretize the Hamilton-Jacobi (H-J), level set, and Eikonal equations on triangulated domains are presented. The first scheme is a provably monotone discretization for certain forms of the H-J equations. Unfortunately, the basic scheme lacks proper Lipschitz continuity of the numerical Hamiltonian. By employing a virtual edge flipping technique, Lipschitz continuity of the numerical flux is restored on acute triangulations. Next, schemes are introduced and developed based on the weaker concept of positive coefficient approximations for homogeneous Hamiltonians. These schemes possess a discrete maximum principle on arbitrary triangulations and naturally exhibit proper Lipschitz continuity of the numerical Hamiltonian. Finally, a class of Petrov-Galerkin approximations are considered. These schemes are stabilized via a least-squares bilinear form. The Petrov-Galerkin schemes do not possess a discrete maximum principle but generalize to high order accuracy.
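For intuition, the positive-coefficient Godunov update is easiest to see on a Cartesian grid. The sketch below solves the Eikonal equation |∇u| = 1/f with alternating Gauss-Seidel sweeps; this is a deliberate simplification of the triangulated-domain schemes in the paper:

```python
# Sketch: Godunov upwind update for |grad u| = 1/f on a Cartesian grid,
# iterated with alternating sweep orderings (fast sweeping). A simplified
# stand-in for the triangulated schemes discussed in the paper.
import numpy as np

def godunov_update(u, f, h, j, i, ny, nx):
    a = min(u[j, max(i - 1, 0)], u[j, min(i + 1, nx - 1)])  # x-neighbour min
    b = min(u[max(j - 1, 0), i], u[min(j + 1, ny - 1), i])  # y-neighbour min
    if not np.isfinite(min(a, b)):
        return u[j, i]                       # no upwind information yet
    w = h / f[j, i]
    if abs(a - b) >= w:
        cand = min(a, b) + w                 # one-sided update
    else:
        cand = 0.5 * (a + b + np.sqrt(2.0 * w * w - (a - b) ** 2))
    return min(u[j, i], cand)

def fast_sweep(u, f, h):
    """One pass of the four alternating Gauss-Seidel sweep orderings."""
    ny, nx = u.shape
    for dj in (1, -1):
        for di in (1, -1):
            for j in range(ny)[::dj]:
                for i in range(nx)[::di]:
                    u[j, i] = godunov_update(u, f, h, j, i, ny, nx)
    return u

n, h = 101, 0.01
u = np.full((n, n), np.inf)
u[n // 2, n // 2] = 0.0                      # point source
u = fast_sweep(u, np.ones((n, n)), h)        # u approximates the distance field
```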
Van Hecke, Wim; Sijbers, Jan; De Backer, Steve; Poot, Dirk; Parizel, Paul M; Leemans, Alexander
2009-07-01
Although many studies are starting to use voxel-based analysis (VBA) methods to compare diffusion tensor images between healthy and diseased subjects, it has been demonstrated that VBA results depend heavily on parameter settings and implementation strategies, such as the applied coregistration technique, smoothing kernel width, statistical analysis, etc. In order to investigate the effect of different parameter settings and implementations on the accuracy and precision of the VBA results quantitatively, ground truth knowledge regarding the underlying microstructural alterations is required. To address the lack of such a gold standard, simulated diffusion tensor data sets are developed, which can model an array of anomalies in the diffusion properties of a predefined location. These data sets can be employed to evaluate the numerous parameters that characterize the pipeline of a VBA algorithm and to compare the accuracy, precision, and reproducibility of different post-processing approaches quantitatively. We are convinced that the use of these simulated data sets can improve the understanding of how different diffusion tensor image post-processing techniques affect the outcome of VBA. In turn, this may possibly lead to a more standardized and reliable evaluation of diffusion tensor data sets of large study groups with a wide range of white matter altering pathologies. The simulated DTI data sets will be made available online (http://www.dti.ua.ac.be).
NASA Astrophysics Data System (ADS)
Kleinböhl, Armin; Friedson, A. James; Schofield, John T.
2017-01-01
The remote sounding of infrared emission from planetary atmospheres using limb-viewing geometry is a powerful technique for deriving vertical profiles of structure and composition on a global scale. Compared with nadir viewing, limb geometry provides enhanced vertical resolution and greater sensitivity to atmospheric constituents. However, standard limb profile retrieval techniques assume spherical symmetry and are vulnerable to biases produced by horizontal gradients in atmospheric parameters. We present a scheme for the correction of horizontal gradients in profile retrievals from limb observations of the martian atmosphere. It characterizes horizontal gradients in temperature, pressure, and aerosol extinction along the line-of-sight of a limb view through neighboring measurements, and represents these gradients by means of two-dimensional radiative transfer in the forward model of the retrieval. The scheme is applied to limb emission measurements from the Mars Climate Sounder instrument on Mars Reconnaissance Orbiter. Retrieval simulations using data from numerical models indicate that biases of up to 10 K in the winter polar region, obtained with standard retrievals using spherical symmetry, are reduced to about 2 K in most locations by the retrieval with two-dimensional radiative transfer. Retrievals from Mars atmospheric measurements suggest that the two-dimensional radiative transfer greatly reduces biases in temperature and aerosol opacity caused by observational geometry, predominantly in the polar winter regions.
Fienen, Michael N.; Nolan, Bernard T.; Feinstein, Daniel T.
2016-01-01
For decision support, the insights and predictive power of numerical process models can be hampered by the insufficient expertise and computational resources required to evaluate system response to new stresses. An alternative is to emulate the process model with a statistical "metamodel." Built on a dataset of collocated numerical model inputs and outputs, a groundwater flow model was emulated using a Bayesian network, an artificial neural network, and a gradient-boosted regression tree. The response of interest was surface-water depletion, expressed as the source of water to wells. The results have application for managing the allocation of groundwater. Each technique was tuned using cross-validation and further evaluated using a held-out dataset. A numerical MODFLOW-USG model of the Lake Michigan Basin, USA, was used for the evaluation. The performance and interpretability of each technique were compared, pointing to the advantages of each.
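Of the three metamodel families, the gradient-boosted regression tree is the quickest to sketch. The data below are synthetic stand-ins for the collocated model inputs and outputs:

```python
# Sketch: emulating an expensive process model with a gradient-boosted
# regression tree. Inputs/outputs are synthetic, not the MODFLOW-USG data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((2000, 4))        # e.g. K, recharge, well distance, depth
y = X[:, 0] * np.exp(-3.0 * X[:, 2]) + 0.05 * rng.standard_normal(2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbrt = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                 learning_rate=0.05).fit(X_tr, y_tr)
print("held-out R^2:", gbrt.score(X_te, y_te))   # emulator skill
```

Once trained, the emulator answers what-if queries in microseconds, which is the point of the metamodel approach for decision support.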
Visualizing Time-Varying Phenomena In Numerical Simulations Of Unsteady Flows
NASA Technical Reports Server (NTRS)
Lane, David A.
1996-01-01
Streamlines, contour lines, vector plots, and volume slices (cutting planes) are commonly used for flow visualization. These techniques are sometimes referred to as instantaneous flow visualization techniques because calculations are based on a single instant of the flowfield in time. Although instantaneous flow visualization techniques are effective for depicting phenomena in steady flows, they sometimes do not adequately depict time-varying phenomena in unsteady flows. Streaklines and timelines are effective visualization techniques for depicting vortex shedding, vortex breakdown, and shock waves in unsteady flows. These techniques are examples of time-dependent flow visualization techniques, which are based on many instants of the flowfield in time. This paper describes the algorithms for computing streaklines and timelines. Using numerically simulated unsteady flows, streaklines and timelines are compared with streamlines, contour lines, and vector plots. It is shown that streaklines and timelines reveal vortex shedding and vortex breakdown more clearly than instantaneous flow visualization techniques.
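The streakline algorithm reduces to releasing a particle from the seed point at each time step and advecting all released particles through the time-dependent velocity field. A sketch with a toy unsteady field (illustrative, not the paper's flow solver):

```python
# Sketch: streakline computation by repeated particle release and advection
# through a time-dependent velocity field (toy field, midpoint integration).
import numpy as np

def velocity(p, t):
    """Toy unsteady 2-D field standing in for the simulated flow."""
    x, y = p
    return np.array([1.0, 0.3 * np.sin(2.0 * x - t)])

def streakline(seed, t_end, dt):
    particles, t = [], 0.0
    while t < t_end:
        particles.append(np.array(seed, float))   # release a new particle
        for i, p in enumerate(particles):         # advect every live particle
            k1 = velocity(p, t)
            particles[i] = p + dt * velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
        t += dt
    return np.array(particles)                    # ordered by release time

line = streakline(seed=(0.0, 0.0), t_end=5.0, dt=0.05)
```

A timeline is computed the same way, except the particles are all released at once along a line and advected together.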
di Stasio, Stefano; Konstandopoulos, Athanasios G; Kostoglou, Margaritis
2002-03-01
The agglomeration kinetics of growing soot generated in an atmospheric diffusion flame are studied in situ by a light scattering technique to infer cluster morphology and size (fractal dimension D_f and radius of gyration R_g). SEM analysis is used as a standard reference to obtain the primary particle size D_P at different residence times. The number N_P of primary particles per aggregate and the number concentration n_A of clusters are evaluated on the basis of the measured angular patterns of the scattered light intensity. The major finding is that the kinetics of the coagulation process that leads to the formation of chain-like aggregates of soot primary particles (size 10 to 40 nm) can be described with a constant coagulation kernel β_c,exp = 2.37×10⁻⁹ cm³/s (coagulation time τ_c ≈ 0.28 ms). This result is in good accord with the Smoluchowski coagulation equation in the free molecular regime and, conversely, in contrast with previous studies conducted by invasive (ex situ) techniques, which claimed evidence of coagulation rates in flames much larger than the kinetic theory predictions. Thereafter, a number of numerical simulations are performed for comparison with the experimental results on the primary particle growth rate and on the process of aggregate reshaping that is observed by light scattering at later residence times. The restructuring process is conjectured to occur, for reasons not yet well understood, as a direct consequence of atomic rearrangement in the solid-phase carbon due to the prolonged residence time within the flame. On one side, it is shown that the numerical simulations of the primary size history compare well with the primary sizes from the SEM experiment for a growth rate of the primary diameter of about 1 nm/s. On the other side, the evolution of aggregate morphology is found to be predictable by the numerical simulations when the onset of a first-order "thermal" restructuring mechanism is assumed to occur in the flame at about 20 ms residence time, leading to aggregates with an asymptotic fractal dimension D_f,∞ ≈ 2.5.
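For reference, the constant-kernel Smoluchowski description of the total cluster concentration has a closed form; in the notation assumed below, n_A(0) is the initial cluster concentration, and the quoted coagulation time corresponds to:

```latex
\frac{\mathrm{d}n_A}{\mathrm{d}t} = -\tfrac{1}{2}\,\beta_c\,n_A^{2}
\quad\Longrightarrow\quad
n_A(t) = \frac{n_A(0)}{1 + t/\tau_c},
\qquad \tau_c = \frac{2}{\beta_c\,n_A(0)} .
```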
NASA Astrophysics Data System (ADS)
Kawamori, E.; Igami, H.
2017-11-01
A diagnostic technique for detecting the wave numbers of electron density fluctuations at electron gyro-scales in the electron cyclotron frequency range is proposed, and the validity of the idea is checked by means of a particle-in-cell (PIC) numerical simulation. The technique is a modified version of the scattering technique invented by Novik et al. [Plasma Phys. Controlled Fusion 36, 357-381 (1994)] and Gusakov et al. [Plasma Phys. Controlled Fusion 41, 899-912 (1999)]. The novel method adopts forward scattering of injected extraordinary probe waves at the upper hybrid resonance layer instead of the backscattering adopted by the original method, enabling the measurement of the wave numbers of fine-scale density fluctuations in the electron cyclotron frequency band by means of phase measurement of the scattered waves. The verification numerical simulation with the PIC method shows that the technique has the potential to be applicable to the detection of electron gyro-scale fluctuations in laboratory plasmas if the upper hybrid resonance layer is accessible to the probe wave. The technique is a suitable means to detect electron Bernstein waves excited via linear mode conversion from electromagnetic waves in torus plasma experiments. Through the numerical simulations, some problems that remain to be resolved are revealed, including the influence of nonlinear processes such as the parametric decay instability of the probe wave in the scattering process.
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chiou, Jin-Chern
1990-01-01
Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAEs) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAEs and the constraint treatment techniques were transformed into arrowhead matrices, from which a Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speedup of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
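The arrowhead structure mentioned here is what makes the Schur complement attractive. In the standard block notation (assumed, not quoted from the thesis), eliminating the block-diagonal part leaves a smaller interface system, which is the system the preconditioned conjugate gradient solver targets:

```latex
\begin{pmatrix} A & B \\ B^{\mathrm{T}} & C \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} =
\begin{pmatrix} f_1 \\ f_2 \end{pmatrix},
\qquad
S = C - B^{\mathrm{T}} A^{-1} B,
\qquad
S\, x_2 = f_2 - B^{\mathrm{T}} A^{-1} f_1 .
```

Because A is block diagonal, its inverse applications decouple across subsystems, which is what exposes the parallelism exploited by the testbed.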
A VAS-numerical model impact study using the Gal-Chen variational approach
NASA Technical Reports Server (NTRS)
Aune, Robert M.; Tuccillo, James J.; Uccellini, Louis W.; Petersen, Ralph A.
1987-01-01
A numerical study based on the use of a variational assimilation technique of Gal-Chen (1983, 1986) was conducted to assess the impact of incorporating temperature data from the VISSR Atmospheric Sounder (VAS) into a regional-scale numerical model. A comparison with the results of a control forecast using only conventional data indicated that the assimilation technique successfully combines actual VAS temperature observations with the dynamically balanced model fields without destabilizing the model during the assimilation cycle. Moreover, increasing the temporal frequency of VAS temperature insertions during the assimilation cycle was shown to enhance the impact on the model forecast through successively longer forecast periods. The incorporation of a nudging technique, whereby the model temperature field is constrained toward the VAS 'updated' values during the assimilation cycle, further enhances the impact of the VAS temperature data.
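The nudging step mentioned above amounts to Newtonian relaxation of the model temperature toward the VAS analysis. A minimal sketch, assuming the VAS temperatures have already been interpolated to the model grid (the field names and the relaxation time scale `tau` are illustrative, not taken from the paper):

```python
import numpy as np

def nudge_step(T, physics_tendency, T_vas, mask, dt, tau=3600.0):
    """Advance the model temperature one step with Newtonian relaxation (nudging).

    T                : model temperature field
    physics_tendency : dT/dt from the model dynamics/physics (assumed given)
    T_vas            : VAS-derived temperature analysis on the model grid
    mask             : 1 where VAS data exist, 0 elsewhere
    tau              : relaxation time scale in seconds (tunable)
    """
    return T + dt * (physics_tendency + mask * (T_vas - T) / tau)
```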
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Chang, T. Y. P.; Wilt, T.; Iskovitz, I.
1989-01-01
The research work performed during the past year on finite element implementation and computational techniques pertaining to high temperature composites is outlined. In the present research, two main issues are addressed: efficient geometric modeling of composite structures and expedient numerical integration techniques dealing with constitutive rate equations. In the first issue, mixed finite elements for modeling laminated plates and shells were examined in terms of numerical accuracy, locking property and computational efficiency. Element applications include (currently available) linearly elastic analysis and future extension to material nonlinearity for damage predictions and large deformations. On the material level, various integration methods to integrate nonlinear constitutive rate equations for finite element implementation were studied. These include explicit, implicit and automatic subincrementing schemes. In all cases, examples are included to illustrate the numerical characteristics of various methods that were considered.
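A minimal sketch of the automatic subincrementing idea for constitutive rate equations: one explicit step is compared against two half steps, and the subincrement is halved until the local difference meets a tolerance. The step-doubling error control shown here is an assumed, generic variant, not necessarily the scheme studied in the report:

```python
import numpy as np

def integrate_rate_eq(rate, y0, t0, t1, tol=1e-8, dt0=None):
    """Automatic subincrementing of a constitutive rate equation dy/dt = rate(t, y)."""
    t, y = t0, np.asarray(y0, dtype=float)
    dt = dt0 if dt0 is not None else (t1 - t0)
    while t < t1:
        dt = min(dt, t1 - t)
        while True:
            full = y + dt * rate(t, y)                       # one explicit step
            half = y + 0.5 * dt * rate(t, y)                 # two half steps
            half = half + 0.5 * dt * rate(t + 0.5 * dt, half)
            if np.max(np.abs(full - half)) <= tol or dt < 1e-14:
                break
            dt *= 0.5                                        # subincrement further
        y, t = half, t + dt
        dt *= 2.0                                            # try growing again
    return y

# demo: stiff-ish relaxation law, integrated over one load step
print(integrate_rate_eq(lambda t, y: -50.0 * (y - np.sin(t)), [0.0], 0.0, 1.0))
```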
Operational numerical weather prediction on the CYBER 205 at the National Meteorological Center
NASA Technical Reports Server (NTRS)
Deaven, D.
1984-01-01
The Development Division of the National Meteorological Center (NMC), which has the responsibility of maintaining and developing the numerical weather forecasting systems of the center, is discussed. Because of the mission of NMC, data products must be produced reliably and on time twice daily, free of surprises for forecasters. Personnel of the Development Division are in a rather unusual situation: they must develop new advanced techniques for numerical analysis and prediction utilizing current state-of-the-art techniques, and implement them in an operational fashion without damaging the operations of the center. With the computational speeds and resources now available from the CYBER 205, Development Division personnel will be able to introduce advanced analysis and prediction techniques into the operational job suite without disrupting the daily schedule. The capabilities of the CYBER 205 are discussed.
Macionis, Valdas
2013-01-09
Diagrammatic recording of finger joint angles by using two criss-crossed paper strips can be a quick substitute to the standard goniometry. As a preliminary step toward clinical validation of the diagrammatic technique, the current study employed healthy subjects and non-professional raters to explore whether reliability estimates of the diagrammatic goniometry are comparable with those of the standard procedure. The study included two procedurally different parts, which were replicated by assigning 24 medical students to act interchangeably as 12 subjects and 12 raters. A larger component of the study was designed to compare goniometers side-by-side in measurement of finger joint angles varying from subject to subject. In the rest of the study, the instruments were compared by parallel evaluations of joint angles similar for all subjects in a situation of simulated change of joint range of motion over time. The subjects used special guides to position the joints of their left ring finger at varying angles of flexion and extension. The obtained diagrams of joint angles were converted to numerical values by computerized measurements. The statistical approaches included calculation of appropriate intraclass correlation coefficients, standard errors of measurements, proportions of measurement differences of 5 or less degrees, and significant differences between paired observations. Reliability estimates were similar for both goniometers. Intra-rater and inter-rater intraclass correlation coefficients ranged from 0.69 to 0.93. The corresponding standard errors of measurements ranged from 2.4 to 4.9 degrees. Repeated measurements of a considerable number of raters fell within clinically non-meaningful 5 degrees of each other in proportions comparable with a criterion value of 0.95. Data collected with both instruments could be similarly interpreted in a simulated situation of change of joint range of motion over time. The paper goniometer and the standard goniometer can be used interchangeably by non-professional raters for evaluation of normal finger joints. The obtained results warrant further research to assess clinical performance of the paper strip technique.
40 CFR 94.8 - Exhaust emission standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
...) Engines fueled with alcohol fuel shall comply with THCE+NOX standards that are numerically equivalent to... advance by the Administrator. (g) Standards for alternative fuels. The standards described in this section apply to compression-ignition engines, irrespective of fuel, with the following two exceptions for...
A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm
Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...
2016-02-17
We propose a spectral Particle-In-Cell (PIC) algorithm that is based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. This algorithm is benchmarked in several situations that are of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.
Tensor-product preconditioners for a space-time discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Diosady, Laslo T.; Murman, Scott M.
2014-10-01
A space-time discontinuous Galerkin spectral element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is presented. A diagonalized alternating direction implicit preconditioner is extended to a space-time formulation using entropy variables. The effectiveness of this technique is demonstrated for the direct numerical simulation of turbulent flow in a channel.
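The matrix-free Newton-Krylov idea used above can be sketched compactly: the Jacobian is never formed, and Jacobian-vector products for the Krylov solver are approximated by finite differences of the residual. A minimal sketch using SciPy, without the paper's space-time DG discretization or the alternating-direction preconditioner:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov(F, u, tol=1e-10, max_newton=20):
    """Matrix-free Newton-Krylov: J@v approximated by finite differences of F."""
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(u))
        Jv = lambda v: (F(u + eps * v) - r) / eps      # J @ v without forming J
        J = LinearOperator((u.size, u.size), matvec=Jv)
        du, info = gmres(J, -r)                        # Krylov solve for the update
        u = u + du
    return u

# tiny demo: solve u**3 + u - 1 = 0 componentwise (root ~0.6823)
print(newton_krylov(lambda u: u**3 + u - 1.0, np.zeros(4)))
```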
Numerical Investigation of Hot Gas Ingestion by STOVL Aircraft
NASA Technical Reports Server (NTRS)
Vanka, S. P.
1998-01-01
This report compiles the various research activities conducted under the auspices of the NASA Grant NAG3-1026, "Numerical Investigation of Hot Gas Ingestion by STOVL Aircraft", during the period of April 1989 to April 1994. The effort involved the development of multigrid-based algorithms and computer programs for the calculation of the flow and temperature fields generated by Short Take-off and Vertical Landing (STOVL) aircraft while hovering in ground proximity. Of particular importance has been the interaction of the exhaust jets with the head wind, which gives rise to the hot gas ingestion process. The objective of new STOVL designs is to reduce the temperature of the gases ingested into the engine. The present work describes a solution algorithm for the multi-dimensional elliptic partial-differential equations governing fluid flow and heat transfer in general curvilinear coordinates. The solution algorithm is based on the multigrid technique, which obtains rapid convergence of the iterative numerical procedure for the discrete equations. Initial efforts were concerned with the solution of the Cartesian form of the equations. This algorithm was applied to a simulated STOVL configuration in rectangular coordinates. In the next phase of the work, a computer code for general curvilinear coordinates was constructed. This was applied to model STOVL geometries on curvilinear grids. The code was also validated in model problems. In all these efforts, the standard k-epsilon model was used.
Moho Modeling Using FFT Technique
NASA Astrophysics Data System (ADS)
Chen, Wenjin; Tenzer, Robert
2017-04-01
To improve numerical efficiency, the Fast Fourier Transform (FFT) technique was incorporated into Parker-Oldenburg's method for a regional gravimetric Moho recovery, which assumes the Earth's planar approximation. In this study, we extend this definition for global applications while assuming a spherical approximation of the Earth. In particular, we utilize the FFT technique for a global Moho recovery, which is practically realized in two numerical steps. The gravimetric forward modeling is first applied, based on methods for a spherical harmonic analysis and synthesis of the global gravity and lithospheric structure models, to compute the refined gravity field, which comprises mainly the gravitational signature of the Moho geometry. The gravimetric inverse problem is then solved iteratively in order to determine the Moho depth. The application of the FFT technique to both numerical steps reduces the computation time to a fraction of that required without this fast algorithm. The developed numerical procedures are used to estimate the Moho depth globally, and the gravimetric result is validated using the global (CRUST1.0) and regional (ESC) seismic Moho models. The comparison reveals a relatively good agreement between the gravimetric and seismic models, with the RMS of differences (of 4-5 km) at the level of expected uncertainties of the input datasets, and without significant systematic bias.
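For orientation, the classical planar Parker-Oldenburg scheme that this extends can be sketched in a few lines of FFT code: Parker's series gives the gravity effect of an undulating interface, and Oldenburg's iteration inverts it. This is a minimal planar sketch (not the paper's spherical extension); the upward-continuation factor exp(k*z0) amplifies high wavenumbers, so real applications additionally low-pass filter the data:

```python
import math
import numpy as np

G = 6.674e-11   # gravitational constant, SI units

def _k(shape, dx, dy):
    ky, kx = np.meshgrid(2 * np.pi * np.fft.fftfreq(shape[0], dy),
                         2 * np.pi * np.fft.fftfreq(shape[1], dx), indexing="ij")
    return np.hypot(kx, ky)

def parker_forward(h, dx, dy, drho, z0, nterms=5):
    """Parker's planar series: gravity effect of an interface h(x, y) with
    density contrast drho undulating about mean depth z0."""
    k = _k(h.shape, dx, dy)
    S = sum(k ** (n - 1) / math.factorial(n) * np.fft.fft2(h ** n)
            for n in range(1, nterms + 1))
    return np.fft.ifft2(2 * np.pi * G * drho * np.exp(-k * z0) * S).real

def oldenburg_inverse(dg, dx, dy, drho, z0, niter=8, nterms=5):
    """Oldenburg's iteration: recover the interface h from the gravity signal dg."""
    k = _k(dg.shape, dx, dy)
    Dg = np.fft.fft2(dg) * np.exp(k * z0) / (2 * np.pi * G * drho)
    h = np.fft.ifft2(Dg).real                        # first-order estimate
    for _ in range(niter):
        corr = sum(k ** (n - 1) / math.factorial(n) * np.fft.fft2(h ** n)
                   for n in range(2, nterms + 1))
        h = np.fft.ifft2(Dg - corr).real
    return h

# roundtrip check on a smooth synthetic interface (64 x 64, 6.25 km spacing)
x = np.linspace(0.0, 400e3, 64, endpoint=False)
h = 1e3 * np.exp(-((x[None, :] - 2e5) ** 2 + (x[:, None] - 2e5) ** 2) / (2 * 6e4 ** 2))
dg = parker_forward(h, 6250.0, 6250.0, 400.0, 30e3)
print(abs(oldenburg_inverse(dg, 6250.0, 6250.0, 400.0, 30e3) - h).max())  # ~0
```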
Hot soup! Correlating the severity of liquid scald burns to fluid and biomedical properties.
Loller, Cameron; Buxton, Gavin A; Kerzmann, Tony L
2016-05-01
Burns caused by hot drinks and soups can be both debilitating and costly, especially to pediatric and geriatric patients. This research is aimed at better understanding the fluid properties that can influence the severity of skin burns. We use a standard model which combines heat transfer and biomedical equations to predict burn severity. In particular, experimental data from a physical model serve as the input to our numerical model to determine the severity of scald burns as a consequence of actual fluid flows. This technique enables us to numerically predict the heat transfer from the hot soup into the skin, without the need to numerically estimate the complex fluid mechanics and thermodynamics of the potentially highly viscous and heterogeneous soup. While the temperature of the soup is obviously the most important factor in determining the degree of burn, we also find that more viscous fluids result in more severe burns, as the slower-flowing thicker fluids remain in contact with the skin for longer. Furthermore, other factors can also increase the severity of a burn, such as a higher initial fluid temperature, a greater fluid thermal conductivity, or a higher thermal capacity of the fluid. Our combined experimental and numerical investigation finds that, for average skin properties and a very viscous fluid at 100°C, the fluid must be in contact with the skin for around 15-20 s to cause second degree burns, and more than 80 s to cause a third degree burn. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
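The "biomedical equation" in such burn models is usually the Henriques damage integral, which accumulates first-order Arrhenius damage from the temperature history at the basal layer. A minimal sketch with commonly quoted Henriques-Moritz constants (assumed illustrative values, not the paper's calibration):

```python
import numpy as np

# Commonly quoted Henriques-Moritz kinetics for the epidermal basal layer:
A_FREQ = 3.1e98     # frequency factor, 1/s (assumed)
EA_R   = 75000.0    # activation energy / gas constant, K (assumed)

def burn_damage(T_celsius, dt):
    """Henriques damage integral Omega = integral of A*exp(-Ea/(R*T)) dt over a
    basal-layer temperature history; Omega >= 1 is read as a second-degree burn."""
    T = np.asarray(T_celsius, dtype=float) + 273.15
    return float(np.sum(A_FREQ * np.exp(-EA_R / T)) * dt)

# e.g. damage accumulated while the basal layer sits at constant temperature
dt = 0.01
for temp in (55.0, 58.0, 60.0):
    history = np.full(int(30.0 / dt), temp)     # 30 s at constant temperature
    print(temp, burn_damage(history, dt))
```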
NASA Astrophysics Data System (ADS)
Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah
2018-04-01
This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation in this study was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). On the other hand, many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and the monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order schemes' results match the Buckley-Leverett (BL) analytical solution well, without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique, and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative and numerical examples study of flow transport through the proposed method, TBM, and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. The finite number of lines used in the TBM resulted in a visual banding artifact, unlike the proposed method and USRM. In all, the proposed permeability and porosity field generation, coupled with the numerical simulator developed, will aid in developing efficient mobility control schemes to improve on poor volumetric sweep efficiency in porous media.
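To make the flux-limiting idea concrete, the sketch below solves the 1D Buckley-Leverett problem with a MUSCL reconstruction limited by SUPERBEE, which keeps the displacement shock sharp without spurious oscillations. It is a minimal 1D sketch (the mobility ratio, grid, and CFL bound are illustrative), not the paper's full 2D stochastic simulator:

```python
import numpy as np

def superbee(r):
    """SUPERBEE flux limiter."""
    return np.maximum(0.0, np.maximum(np.minimum(1.0, 2.0 * r), np.minimum(2.0, r)))

def frac_flow(S, M=2.0):
    """Buckley-Leverett fractional flow for water/oil mobility ratio M."""
    return S**2 / (S**2 + M * (1.0 - S)**2)

def bl_solve(nx=200, t_end=0.3, cfl=0.4, vmax=2.2):
    """1D Buckley-Leverett with MUSCL reconstruction + SUPERBEE limiting.
    Characteristic speeds are positive for this flux, so the upwind flux is
    evaluated at the limited left state of each cell face."""
    dx = 1.0 / nx
    S = np.zeros(nx)
    S[0] = 1.0                                   # water injected at the left end
    dt = cfl * dx / vmax                         # vmax ~ bound on |df/dS|
    t = 0.0
    while t < t_end:
        dS = np.diff(S)
        r = np.zeros(nx - 1)                     # slope ratios; 0 => first order
        np.divide(dS[:-1], dS[1:], out=r[1:], where=np.abs(dS[1:]) > 1e-12)
        SL = S[:-1].copy()                       # limited left state at each face
        SL[1:] += 0.5 * superbee(r[1:]) * dS[1:]
        F = frac_flow(SL)                        # upwind flux (positive speeds)
        S[1:-1] -= dt / dx * (F[1:] - F[:-1])
        t += dt
    return S

print(bl_solve().round(2))   # sharp saturation front, no oscillations
```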
Numerical Simulations of the Digital Microfluidic Manipulation of Single Microparticles.
Lan, Chuanjin; Pal, Souvik; Li, Zhen; Ma, Yanbao
2015-09-08
Single-cell analysis techniques have been developed as a valuable bioanalytical tool for elucidating cellular heterogeneity at genomic, proteomic, and cellular levels. Cell manipulation is an indispensable process for single-cell analysis. Digital microfluidics (DMF) is an important platform for conducting cell manipulation and single-cell analysis in a high-throughput fashion. However, the manipulation of single cells in DMF has not been quantitatively studied so far. In this article, we investigate the interaction of a single microparticle with a liquid droplet on a flat substrate using numerical simulations. The droplet is driven by capillary force generated from the wettability gradient of the substrate. Considering the Brownian motion of microparticles, we utilize many-body dissipative particle dynamics (MDPD), an off-lattice mesoscopic simulation technique, in this numerical study. The manipulation processes (including pickup, transport, and drop-off) of a single microparticle with a liquid droplet are simulated. Parametric studies are conducted to investigate the effects on the manipulation processes from the droplet size, wettability gradient, wetting properties of the microparticle, and particle-substrate friction coefficients. The numerical results show that the pickup, transport, and drop-off processes can be precisely controlled by these parameters. On the basis of the numerical results, a trap-free delivery of a hydrophobic microparticle to a destination on the substrate is demonstrated in the numerical simulations. The numerical results not only provide a fundamental understanding of interactions among the microparticle, the droplet, and the substrate but also demonstrate a new technique for the trap-free immobilization of single hydrophobic microparticles in the DMF design. Finally, our numerical method also provides a powerful design and optimization tool for the manipulation of microparticles in DMF systems.
NASA Astrophysics Data System (ADS)
Abdolkader, Tarek M.; Shaker, Ahmed; Alahmadi, A. N. M.
2018-07-01
With the continuous miniaturization of electronic devices, quantum-mechanical effects such as tunneling become more significant in many device applications. In this paper, a numerical simulation tool is developed under a MATLAB environment to calculate the tunneling probability and current through an arbitrary potential barrier, comparing three different numerical techniques: the finite difference method, the transfer matrix method, and the transmission line method. For benchmarking, the tool is applied to several case studies, such as the rectangular single barrier, the rectangular double barrier, and a continuous bell-shaped potential barrier, each compared to analytical solutions and giving the dependence of the error on the number of mesh points. In addition, a thorough study of the J-V characteristics of MIM and MIIM diodes, used as rectifiers for rectenna solar cells, is presented, and simulations are compared to experimental results showing satisfactory agreement. On the undergraduate level, the tool provides a deeper insight for students to compare numerical techniques used to solve various tunneling problems and helps students to choose a suitable technique for a certain application.
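Of the three techniques compared, the transfer matrix method is the most compact to sketch: the barrier is discretized into piecewise-constant slabs, and 2x2 interface and propagation matrices are chained to give the transmission probability. The paper's tool is MATLAB-based; the sketch below is an assumed Python rendering of the same textbook construction:

```python
import numpy as np

HBAR, ME, EV = 1.054571817e-34, 9.1093837e-31, 1.602176634e-19

def transmission(E_eV, V_eV, widths_nm):
    """Tunneling probability through a piecewise-constant barrier by the
    transfer matrix method. V_eV/widths_nm describe the interior slabs;
    the leads on both sides are at V = 0."""
    E = E_eV * EV
    V = np.concatenate(([0.0], np.asarray(V_eV, dtype=float), [0.0])) * EV
    k = np.sqrt(2 * ME * (E - V).astype(complex)) / HBAR   # imaginary k => decay
    M = np.eye(2, dtype=complex)
    for j in range(len(V) - 1):
        if j > 0:                                  # propagate across slab j
            d = widths_nm[j - 1] * 1e-9
            M = np.diag([np.exp(1j * k[j] * d), np.exp(-1j * k[j] * d)]) @ M
        r = k[j] / k[j + 1]                        # interface j -> j+1 matching
        M = 0.5 * np.array([[1 + r, 1 - r], [1 - r, 1 + r]]) @ M
    return (k[0].real / k[-1].real) / abs(M[1, 1]) ** 2

# 0.3 nm wide, 1 eV rectangular barrier at E = 0.5 eV (analytic T ~ 0.37)
print(transmission(0.5, [1.0], [0.3]))
```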
Stability of numerical integration techniques for transient rotor dynamics
NASA Technical Reports Server (NTRS)
Kascak, A. F.
1977-01-01
A finite element model of a rotor bearing system was analyzed to determine the stability limits of the forward, backward, and centered Euler; Runge-Kutta; Milne; and Adams numerical integration techniques. The analysis concludes that the highest frequency mode determines the maximum time step for a stable solution. Thus, the number of mass elements should be minimized. Increasing the damping can sometimes cause numerical instability. For a uniform shaft, with 10 mass elements, operating at approximately the first critical speed, the maximum time step for the Runge-Kutta, Milne, and Adams methods is that which corresponds to approximately 1 degree of shaft movement. This is independent of rotor dimensions.
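For a single undamped mode, the maximum stable step of classical RK4 can be located directly from its amplification factor R(z) = 1 + z + z²/2 + z³/6 + z⁴/24 evaluated on the imaginary axis, which is consistent with the report's conclusion that the highest-frequency mode sets the step. A minimal sketch of this generic RK4 bound (not the report's rotor model):

```python
import numpy as np

def rk4_stable_dt(omega_max):
    """Largest stable RK4 step for an undamped mode y' = i*omega*y:
    require |R(i*omega*dt)| <= 1, which holds up to omega*dt = 2*sqrt(2)."""
    R = lambda z: 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24
    lo, hi = 0.0, 4.0                       # bisect for |R(iy)| = 1
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if abs(R(1j * mid)) <= 1.0:
            lo = mid
        else:
            hi = mid
    return lo / omega_max

print(rk4_stable_dt(1.0))   # ~2.828 = 2*sqrt(2)
```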
ULTRASONIC STUDIES OF THE FUNDAMENTAL MECHANISMS OF RECRYSTALLIZATION AND SINTERING OF METALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
TURNER, JOSEPH A.
2005-11-30
The purpose of this project was to develop a fundamental understanding of the interaction of an ultrasonic wave with complex media, with specific emphases on recrystallization and sintering of metals. A combined analytical, numerical, and experimental research program was implemented. Theoretical models of elastic wave propagation through these complex materials were developed using stochastic wave field techniques. The numerical simulations focused on finite element wave propagation solutions through complex media. The experimental efforts were focused on corroboration of the models developed and on the development of new experimental techniques. The analytical and numerical research allows the experimental results to be interpreted quantitatively.
Advances in numerical and applied mathematics
NASA Technical Reports Server (NTRS)
South, J. C., Jr. (Editor); Hussaini, M. Y. (Editor)
1986-01-01
This collection of papers covers some recent developments in numerical analysis and computational fluid dynamics. Some of these studies are of a fundamental nature. They address basic issues such as intermediate boundary conditions for approximate factorization schemes, existence and uniqueness of steady states for time dependent problems, and pitfalls of implicit time stepping. The other studies deal with modern numerical methods such as total variation diminishing schemes, higher order variants of vortex and particle methods, spectral multidomain techniques, and front tracking techniques. There is also a paper on adaptive grids. The fluid dynamics papers treat the classical problems of incompressible flows in helically coiled pipes, vortex breakdown, and transonic flows.
Numerical modeling of on-orbit propellant motion resulting from an impulsive acceleration
NASA Technical Reports Server (NTRS)
Aydelott, John C.; Mjolsness, Raymond C.; Torrey, Martin D.; Hochstein, John I.
1987-01-01
In-space docking and separation maneuvers of spacecraft that have large fluid mass fractions may cause undesirable spacecraft motion in response to the impulsive-acceleration-induced fluid motion. An example of this potential low gravity fluid management problem arose during the development of the shuttle/Centaur vehicle. Experimentally verified numerical modeling techniques were developed to establish the propellant dynamics, and subsequent vehicle motion, associated with the separation of the Centaur vehicle from the shuttle orbiter cargo bay. Although the shuttle/Centaur development activity was suspended, the numerical modeling techniques are available to predict on-orbit liquid motion resulting from impulsive accelerations for other missions and spacecraft.
Study on the tumor-induced angiogenesis using mathematical models.
Suzuki, Takashi; Minerva, Dhisa; Nishiyama, Koichi; Koshikawa, Naohiko; Chaplain, Mark Andrew Joseph
2018-01-01
We studied angiogenesis using mathematical models describing the dynamics of tip cells. We reviewed the basic ideas of angiogenesis models and their numerical simulation techniques used to produce realistic computer graphics images of sprouting angiogenesis. We examined the classical model of Anderson-Chaplain using fundamental concepts of mass transport and chemical reaction, with ECM degradation included. We then constructed two types of numerical schemes, model-faithful and model-driven ones, in which new techniques of numerical simulation are introduced, such as transient probability, particle velocity, and Boolean variables. © 2017 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.
Mehl, S.; Hill, M.C.
2001-01-01
Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.
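The differing peak concentrations reported above stem from numerical dispersion, which for first-order upwinding acts like an artificial dispersion coefficient D_num = (u*dx/2)(1 - u*dt/dx). A minimal sketch that makes the effect visible on a Gaussian pulse (grid and Courant number are illustrative):

```python
import numpy as np

def upwind_advect(c, u, dx, dt, nsteps):
    """First-order upwind advection (u > 0) on a periodic grid. Its truncation
    error acts like an artificial dispersion D_num = (u*dx/2)*(1 - u*dt/dx)."""
    for _ in range(nsteps):
        c = c - u * dt / dx * (c - np.roll(c, 1))
    return c

nx, u = 400, 1.0
dx = 1.0 / nx
dt = 0.4 * dx / u
x = np.linspace(0.0, 1.0, nx, endpoint=False)
c0 = np.exp(-((x - 0.2) / 0.02) ** 2)
c1 = upwind_advect(c0.copy(), u, dx, dt, nsteps=int(0.5 / (u * dt)))
print(f"peak before: {c0.max():.3f}, after: {c1.max():.3f}, "
      f"D_num = {u * dx / 2 * (1 - u * dt / dx):.2e}")
```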
Finite Element Modelling and Analysis of Conventional Pultrusion Processes
NASA Astrophysics Data System (ADS)
Akishin, P.; Barkanov, E.; Bondarchuk, A.
2015-11-01
Pultrusion is one of many composite manufacturing techniques and one of the most efficient methods for producing fiber-reinforced polymer composite parts with a constant cross-section. Numerical simulation is helpful for understanding the manufacturing process and developing scientific means for the pultrusion tooling design. A numerical technique based on the finite element method has been developed for the simulation of pultrusion processes. It uses the general-purpose finite element software ANSYS Mechanical. It is shown that the developed technique predicts the temperature and cure profiles, which are in good agreement with those published in the open literature.
CFD Techniques for Propulsion Applications
NASA Technical Reports Server (NTRS)
1992-01-01
The symposium was composed of the following sessions: turbomachinery computations and validations; flow in ducts, intakes, and nozzles; and reacting flows. Forty papers were presented, and they covered full 3-D code validation and numerical techniques; multidimensional reacting flow; and unsteady viscous flow for the entire spectrum of propulsion system components. The capabilities of the various numerical techniques were assessed and significant new developments were identified. The technical evaluation spells out where progress has been made and concludes that the present state of the art has almost reached the level necessary to tackle the comprehensive topic of computational fluid dynamics (CFD) validation for propulsion.
A comparison of solute-transport solution techniques based on inverse modelling results
Mehl, S.; Hill, M.C.
2000-01-01
Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
Local numerical modelling of ultrasonic guided waves in linear and nonlinear media
NASA Astrophysics Data System (ADS)
Packo, Pawel; Radecki, Rafal; Kijanka, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz; Leamy, Michael J.
2017-04-01
Nonlinear ultrasonic techniques provide improved damage sensitivity compared to linear approaches. The combination of attractive properties of guided waves, such as Lamb waves, with unique features of higher harmonic generation provides great potential for characterization of incipient damage, particularly in plate-like structures. Nonlinear ultrasonic structural health monitoring techniques use interrogation signals at frequencies other than the excitation frequency to detect changes in structural integrity. Signal processing techniques used in non-destructive evaluation are frequently supported by modeling and numerical simulations in order to facilitate problem solution. This paper discusses known and newly-developed local computational strategies for simulating elastic waves, and attempts characterization of their numerical properties in the context of linear and nonlinear media. A hybrid numerical approach combining advantages of the Local Interaction Simulation Approach (LISA) and Cellular Automata for Elastodynamics (CAFE) is proposed for unique treatment of arbitrary strain-stress relations. The iteration equations of the method are derived directly from physical principles employing stress and displacement continuity, leading to an accurate description of the propagation in arbitrarily complex media. Numerical analysis of guided wave propagation, based on the newly developed hybrid approach, is presented and discussed in the paper for linear and nonlinear media. Comparisons to Finite Elements (FE) are also discussed.
A new shock-capturing numerical scheme for ideal hydrodynamics
NASA Astrophysics Data System (ADS)
Fecková, Z.; Tomášik, B.
2015-05-01
We present a new algorithm for solving ideal relativistic hydrodynamics based on the Godunov method with an exact solution of the Riemann problem for an arbitrary equation of state. Standard numerical tests are executed, such as sound wave propagation and the shock tube problem. Low numerical viscosity and high precision are attained with proper discretization.
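The Godunov building block is easy to sketch in the non-relativistic setting: cell averages are updated with fluxes obtained from Riemann solutions at each interface. The sketch below solves the classical Sod shock tube for the non-relativistic Euler equations, with an HLL approximate flux standing in for the paper's exact relativistic solver:

```python
import numpy as np

GAMMA = 1.4

def cons(rho, v, p):
    """Primitive (rho, v, p) -> conserved (rho, rho*v, E)."""
    return np.array([rho, rho * v, p / (GAMMA - 1) + 0.5 * rho * v**2])

def prim(U):
    rho, m, E = U
    v = m / rho
    return rho, v, (GAMMA - 1) * (E - 0.5 * rho * v**2)

def flux(U):
    rho, v, p = prim(U)
    return np.array([rho * v, rho * v**2 + p, (U[2] + p) * v])

def hll(UL, UR):
    """HLL approximate Riemann flux at each interface."""
    rL, vL, pL = prim(UL); rR, vR, pR = prim(UR)
    cL, cR = np.sqrt(GAMMA * pL / rL), np.sqrt(GAMMA * pR / rR)
    sL = np.minimum(vL - cL, vR - cR)
    sR = np.maximum(vL + cL, vR + cR)
    FL, FR = flux(UL), flux(UR)
    F = (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
    return np.where(sL >= 0, FL, np.where(sR <= 0, FR, F))

def sod(nx=400, t_end=0.2, cfl=0.5):
    """First-order Godunov update for the Sod shock tube."""
    x = (np.arange(nx) + 0.5) / nx
    U = np.where(x < 0.5, cons(1.0, 0.0, 1.0)[:, None], cons(0.125, 0.0, 0.1)[:, None])
    dx, t = 1.0 / nx, 0.0
    while t < t_end:
        rho, v, p = prim(U)
        dt = cfl * dx / np.max(np.abs(v) + np.sqrt(GAMMA * p / rho))
        F = hll(U[:, :-1], U[:, 1:])
        U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])
        t += dt
    return x, prim(U)

x, (rho, v, p) = sod()
print(rho.min(), rho.max())   # density stays within [0.125, 1.0]
```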
Biostatistics Series Module 10: Brief Overview of Multivariate Methods.
Hazra, Avijit; Gogtay, Nithya
2017-01-01
Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, that make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count type of data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It tries to examine whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract from a larger number of metric variables, a smaller number of composite factors or components, which are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. The calculation-intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with wider availability and increasing sophistication of statistical software, and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.
Study of the mode of angular velocity damping for a spacecraft at non-standard situation
NASA Astrophysics Data System (ADS)
Davydov, A. A.; Sazonov, V. V.
2012-07-01
A non-standard situation on a spacecraft (an Earth satellite) is considered, in which there are no measurements of the spacecraft's angular velocity component relative to one of its body axes. Angular velocity measurements are used in controlling the spacecraft's attitude motion by means of flywheels. The arising problem is to study the operation of standard control algorithms in the absence of some necessary measurements. In this work, this problem is solved for the algorithm ensuring the damping of the spacecraft's angular velocity. Such damping is shown not to be possible for all initial conditions of motion. In the general case, one of two possible final modes is realized, each described by stable steady-state solutions of the equations of motion. In one of them, the spacecraft's angular velocity component relative to the axis for which the measurements are absent is nonzero. Estimates of the regions of attraction are obtained for these steady-state solutions by numerical calculations. A simple technique is suggested that allows one to exclude the initial conditions of the angular velocity damping mode from the attraction region of the undesirable solution. Several realizations of this mode that have taken place are reconstructed. This reconstruction was carried out by approximating telemetry values of the angular velocity components and the total angular momentum of the flywheels, obtained during the non-standard situation, with solutions of the equations of the spacecraft's rotational motion.
40 CFR 449.11 - New source performance standards (NSPS).
Code of Federal Regulations, 2012 CFR
2012-07-01
... percent of available ADF. (2) Numerical effluent limitation. The new source must achieve the performance... the numeric limitations for ammonia in Table III, prior to any dilution or commingling with any non...
40 CFR 449.11 - New source performance standards (NSPS).
Code of Federal Regulations, 2013 CFR
2013-07-01
... percent of available ADF. (2) Numerical effluent limitation. The new source must achieve the performance... the numeric limitations for ammonia in Table III, prior to any dilution or commingling with any non...
40 CFR 449.11 - New source performance standards (NSPS).
Code of Federal Regulations, 2014 CFR
2014-07-01
... percent of available ADF. (2) Numerical effluent limitation. The new source must achieve the performance... the numeric limitations for ammonia in Table III, prior to any dilution or commingling with any non...
Saletti, Dominique
2017-01-01
Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires a better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate different sources of errors that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The investigation of the uncertainty of the identified parameters, such as Young's modulus along with the stress–strain constitutive response, is addressed by introducing the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), proving that the used technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505
Standards for Pediatric Immunization Practices.
ERIC Educational Resources Information Center
Centers for Disease Control (DHHS/PHS), Atlanta, GA.
This booklet outlines 18 national standards for pediatric immunizations. The standards were developed by a 35-member working group drawn from 24 different public and private sector organizations and from numerous state and local health departments and approved by the U.S. Public Health Service. The first three standards state that: immunization…
Numerical integration of ordinary differential equations of various orders
NASA Technical Reports Server (NTRS)
Gear, C. W.
1969-01-01
Report describes techniques for the numerical integration of differential equations of various orders. Modified multistep predictor-corrector methods for general initial-value problems are discussed and new methods are introduced.
Editing of EIA coded, numerically controlled, machine tool tapes
NASA Technical Reports Server (NTRS)
Weiner, J. M.
1975-01-01
Editing of numerically controlled (N/C) machine tool tapes (8-level paper tape) using an interactive graphic display processor is described. A rapid technique required for correcting production errors in N/C tapes was developed using the interactive text editor on the IMLAC PDS-ID graphic display system and two special programs resident on disk. The correction technique and special programs for processing N/C tapes coded to EIA specifications are discussed.
NASA Technical Reports Server (NTRS)
Baumeister, Joseph F.
1990-01-01
Analysis of energy emitted from simple or complex cavity designs can lead to intricate solutions due to nonuniform radiosity and irradiation within a cavity. A numerical ray tracing technique was applied to simulate radiation propagating within and from various cavity designs. To obtain the energy balance relationships between isothermal and nonisothermal cavity surfaces and space, the computer code NEVADA was utilized for its statistical technique applied to numerical ray tracing. The analysis method was validated by comparing results with known theoretical and limiting solutions, and the electrical resistance network method. In general, for nonisothermal cavities the performance (apparent emissivity) is a function of cylinder length-to-diameter ratio, surface emissivity, and cylinder surface temperatures. The extent of nonisothermal conditions in a cylindrical cavity significantly affects the overall cavity performance. Results are presented over a wide range of parametric variables for use as a possible design reference.
New efficient optimizing techniques for Kalman filters and numerical weather prediction models
NASA Astrophysics Data System (ADS)
Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis
2016-06-01
The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems, in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability, may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work is due to the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
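The bias-elimination role of the Kalman filter here can be sketched with the simplest case: a random-walk model for the systematic forecast error, updated each time a new forecast-observation pair arrives. This is a minimal scalar sketch with illustrative noise variances (operational filters typically use higher-order observation operators):

```python
import numpy as np

def kf_bias_correct(forecasts, observations, q=0.01, r=1.0):
    """Scalar Kalman filter tracking the systematic forecast bias x_t:
    x_t = x_{t-1} + w, w ~ N(0, q);  y_t = (obs - fcst) = x_t + v, v ~ N(0, r).
    Returns the bias-corrected forecasts."""
    x, P = 0.0, 1.0
    corrected = []
    for f, y in zip(forecasts, observations):
        P += q                        # predict step: bias random walk
        K = P / (P + r)               # Kalman gain
        x += K * ((y - f) - x)        # update with the innovation
        P *= (1.0 - K)
        corrected.append(f + x)
    return np.array(corrected)

# demo: forecasts with a constant +1 error, noisy observations
fc = np.full(100, 10.0)
ob = fc + 1.0 + 0.5 * np.random.default_rng(0).standard_normal(100)
print(kf_bias_correct(fc, ob)[-5:])   # drifts toward the observed ~11.0
```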
A coupled Eulerian/Lagrangian method for the solution of three-dimensional vortical flows
NASA Technical Reports Server (NTRS)
Felici, Helene Marie
1992-01-01
A coupled Eulerian/Lagrangian method is presented for the reduction of numerical diffusion observed in solutions of three-dimensional rotational flows using standard Eulerian finite-volume time-marching procedures. A Lagrangian particle tracking method using particle markers is added to the Eulerian time-marching procedure and provides a correction of the Eulerian solution. In turn, the Eulerian solution is used to integrate the Lagrangian state vector along the particle trajectories. The Lagrangian correction technique does not require any a priori information on the structure or position of the vortical regions. While the Eulerian solution ensures the conservation of mass and sets the pressure field, the particle markers, used as 'accuracy boosters,' take advantage of the accurate convection description of the Lagrangian solution and enhance the vorticity and entropy capturing capabilities of standard Eulerian finite-volume methods. The combined solution procedure is tested in several applications. The convection of a Lamb vortex in a straight channel is used as an unsteady compressible flow preservation test case. The other test cases concern steady incompressible flow calculations and include the preservation of a turbulent inlet velocity profile, the swirling flow in a pipe, and the constant stagnation pressure flow and secondary flow calculations in bends. The last application deals with the external flow past a wing with emphasis on the trailing vortex solution. The improvement due to the addition of the Lagrangian correction technique is measured by comparison with analytical solutions when available or with Eulerian solutions on finer grids. The use of the combined Eulerian/Lagrangian scheme results in substantially lower grid resolution requirements than the standard Eulerian scheme for a given solution accuracy.
Theoretical and numerical studies of chaotic mixing
NASA Astrophysics Data System (ADS)
Kim, Ho Jun
Theoretical and numerical studies of chaotic mixing are performed to circumvent the difficulties of efficient mixing, which come from the lack of turbulence in microfluidic devices. In order to carry out efficient and accurate parametric studies and to identify a fully chaotic state, a spectral element algorithm for solution of the incompressible Navier-Stokes and species transport equations is developed. Using Taylor series expansions in time marching, the new algorithm employs an algebraic factorization scheme on multi-dimensional staggered spectral element grids, and extends classical conforming Galerkin formulations to nonconforming spectral elements. Lagrangian particle tracking methods are utilized to study particle dispersion in the mixing device using spectral element and fourth order Runge-Kutta discretizations in space and time, respectively. Comparative studies of five different techniques commonly employed to identify the chaotic strength and mixing efficiency in microfluidic systems are presented to demonstrate the competitive advantages and shortcomings of each method. These are the stirring index based on the box counting method, Poincare sections, finite time Lyapunov exponents, the probability density function of the stretching field, and mixing index inverse, based on the standard deviation of scalar species distribution. Series of numerical simulations are performed by varying the Peclet number (Pe) at fixed kinematic conditions. The mixing length (lm) is characterized as function of the Pe number, and lm ∝ ln(Pe) scaling is demonstrated for fully chaotic cases. Employing the aforementioned techniques, optimum kinematic conditions and the actuation frequency of the stirrer that result in the highest mixing/stirring efficiency are identified in a zeta potential patterned straight micro channel, where a continuous flow is generated by superposition of a steady pressure driven flow and time periodic electroosmotic flow induced by a stream-wise AC electric field. Finally, it is shown that the invariant manifold of hyperbolic periodic point determines the geometry of fast mixing zones in oscillatory flows in two-dimensional cavity.
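Of the five chaos/mixing measures compared in this study, the mixing index inverse is the simplest to state: it reduces the scalar concentration field to a normalized standard deviation. A minimal sketch (the coefficient-of-variation normalization is one common choice, assumed here rather than taken from the thesis):

```python
import numpy as np

def mixing_index_inverse(c):
    """Mixing index inverse as a coefficient of variation of the species field:
    0 for a perfectly mixed field, growing with segregation."""
    c = np.asarray(c, dtype=float)
    return c.std() / c.mean()

print(mixing_index_inverse([0, 1] * 50))    # fully segregated binary field -> 1.0
print(mixing_index_inverse([0.5] * 100))    # perfectly mixed -> 0.0
```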
An Object Model for a Rocket Engine Numerical Simulator
NASA Technical Reports Server (NTRS)
Mitra, D.; Bhalla, P. N.; Pratap, V.; Reddy, P.
1998-01-01
Rocket Engine Numerical Simulator (RENS) is a package of software which numerically simulates the behavior of a rocket engine. Different parameters of the components of an engine are the input to these programs. Depending on these given parameters, the programs output the behaviors of those components. These behavioral values are then used to guide the design of, or to diagnose, a model of a rocket engine "built" by a composition of these programs simulating different components of the engine system. In order to use this software package effectively, one needs a flexible model of a rocket engine into which the programs simulating different components can be plugged as modules. Our project is to develop an object-based model of such an engine system. We are following an iterative and incremental approach in developing the model, as is the standard practice in the area of object-oriented design and analysis of software. This process involves three stages: object modeling to represent the components and sub-components of a rocket engine, dynamic modeling to capture the temporal and behavioral aspects of the system, and functional modeling to represent the transformational aspects. This article reports on the first phase of our activity under a grant (RENS) from the NASA Lewis Research Center. We have utilized Rumbaugh's object modeling technique and the tool UML for this purpose. The classes of a rocket engine propulsion system are developed, and some of them are presented in this report. The next step, developing a dynamic model for RENS, is also touched upon here. In this paper we also discuss the advantages of using object-based modeling for developing this type of integrated simulator over other tools, such as an expert system shell or a procedural language, e.g., FORTRAN, with which attempts have been made in the past.
Cost considerations in selecting coronary artery revascularization therapy in the elderly.
Maziarz, David M; Koutlas, Theodore C
2004-01-01
This article presents some of the cost factors involved in selecting coronary artery revascularization therapy in an elderly patient. With the percentage of gross national product allocated to healthcare continuing to rise in the US, resource allocation has become an issue. Percutaneous coronary intervention continues to be a viable option for many patients, with lower initial costs. However, long-term angina-free results often require further interventions or eventual surgery. Once coronary artery revascularization therapy is selected, it is worthwhile to evaluate the cost considerations inherent to various techniques. Off-pump coronary artery bypass graft surgery has seen a resurgence, with improved technology and lower hospital costs than on-pump bypass surgery. Numerous factors contributing to cost in coronary surgery have been studied and several are documented here, including the potential benefits of early extubation and the use of standardized optimal care pathways. A wide range of hospital-level cost variation has been noted, and standardization issues remain. With the advent of advanced computer-assisted robotic techniques, a push toward totally endoscopic bypass surgery has begun, with the eventual hope of reducing hospital stays to a minimum while maximizing outcomes, thus reducing intensive care unit and stepdown care times, which contribute a great deal toward overall cost. At the present time, these techniques add a significant premium to hospital charges, outweighing any potential length-of-stay benefits from a cost standpoint. As our elderly population continues to grow, use of healthcare resource dollars will continue to be heavily scrutinized. Although the clinical outcome remains the ultimate benchmark, cost containment and optimization of resources will take on a larger role in the future. Copyright 2004 Adis Data Information BV
NASA Astrophysics Data System (ADS)
Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.
1992-10-01
The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on our knowledge of the source term characteristics and, in the case when the radioactivity is condensed on particles, the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates the radiological measurements with atmospheric dispersion modeling, resulting in more accurate estimates of the particle-size distribution and particle injection height when compared with measurements of high explosive dispersal of (239)Pu. The estimation model is based on a non-linear least squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimation of ADPIC model input parameters such as the ADPIC particle size mean aerodynamic diameter, the geometric standard deviation, and the largest size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainty. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5, very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation of the particle size, as well as a slightly weaker particle-to-cloud coupling, than previously reported.
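The regression loop described above, adjusting dispersion-model inputs until predicted concentrations match the measurements, maps directly onto a bounded nonlinear least-squares call. A minimal sketch in which a toy Gaussian "plume" stands in for a full ADPIC run (the surrogate model, log residuals, and bounds are all illustrative assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_source(theta0, measured, model, bounds):
    """Adjust model inputs theta until model(theta) matches the measured
    concentrations; log residuals de-emphasize order-of-magnitude spread."""
    resid = lambda theta: np.log(model(theta)) - np.log(measured)
    return least_squares(resid, theta0, bounds=bounds)

# toy surrogate: a 1D Gaussian "plume" with unknown amplitude and width
xs = np.linspace(-2.0, 2.0, 15)
surrogate = lambda th: th[0] * np.exp(-xs**2 / (2.0 * th[1]**2))
rng = np.random.default_rng(0)
data = surrogate([2.0, 0.7]) * np.exp(0.1 * rng.standard_normal(xs.size))
fit = fit_source([1.0, 1.0], data, surrogate, bounds=([0.1, 0.1], [10.0, 5.0]))
print(fit.x)   # ~[2.0, 0.7]
```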
A Numerical Simulation of Scattering from One-Dimensional Inhomogeneous Dielectric Random Surfaces
NASA Technical Reports Server (NTRS)
Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.
1996-01-01
In this paper, an efficient numerical solution for the scattering problem of inhomogeneous dielectric rough surfaces is presented. The inhomogeneous dielectric random surface represents a bare soil surface and is considered to be comprised of a large number of randomly positioned dielectric humps of different sizes, shapes, and dielectric constants above an impedance surface. Clods with nonuniform moisture content and rocks are modeled by inhomogeneous dielectric humps and the underlying smooth wet soil surface is modeled by an impedance surface. In this technique, an efficient numerical solution for the constituent dielectric humps over an impedance surface is obtained using Green's function derived by the exact image theory in conjunction with the method of moments. The scattered field from a sample of the rough surface is obtained by summing the scattered fields from all the individual humps of the surface coherently ignoring the effect of multiple scattering between the humps. The statistical behavior of the scattering coefficient σ⁰ is obtained from the calculation of scattered fields of many different realizations of the surface. Numerical results are presented for several different roughnesses and dielectric constants of the random surfaces. The numerical technique is verified by comparing the numerical solution with the solution based on the small perturbation method and the physical optics model for homogeneous rough surfaces. This technique can be used to study the behavior of scattering coefficient and phase difference statistics of rough soil surfaces for which no analytical solution exists.
NASA Astrophysics Data System (ADS)
Rahmouni, Lyes; Mitharwal, Rajendra; Andriulli, Francesco P.
2017-11-01
This work presents two new volume integral equations for the Electroencephalography (EEG) forward problem which, unlike the standard integral approaches in this domain, can handle heterogeneities and anisotropies of the head/brain conductivity profiles. The new formulations translate to the quasi-static regime several volume integral equation strategies that have been successfully applied to high-frequency electromagnetic scattering problems. This has been obtained by extending, to the volume case, the two classical surface integral formulations used in EEG imaging and by introducing an extra surface equation, in addition to the volume ones, to properly handle boundary conditions. Numerical results corroborate the theoretical treatment, showing the competitiveness of our new schemes over existing techniques and qualifying them as a valid alternative to differential equation based methods.
NASA Astrophysics Data System (ADS)
Chiu, Shao-Pin; Chung, Hui-Fang; Lin, Yong-Han; Kai, Ji-Jung; Chen, Fu-Rong; Lin, Juhn-Jong
2009-03-01
Single-crystalline indium tin oxide (ITO) nanowires (NWs) were grown by the standard thermal evaporation method. The as-grown NWs were typically 100-300 nm in diameter and a few µm long. Four-probe submicron Ti/Au electrodes on individual NWs were fabricated by the electron-beam lithography technique. The resistivities of several single NWs have been measured from 300 down to 1.5 K. The results indicate that the as-grown ITO NWs are metallic, but disordered. The overall temperature behavior of resistivity can be described by the Bloch-Grüneisen law plus a low-temperature correction due to the scattering of electrons off dynamic point defects. This observation suggests the existence of numerous dynamic point defects in as-grown ITO NWs.
Approaches for assessing and discovering protein interactions in cancer
Mohammed, Hisham; Carroll, Jason S.
2013-01-01
Significant insight into the function of proteins can be gained by discovering and characterising interacting proteins. There are numerous methods for the discovery of unknown associated protein networks, with purification of the bait (the protein of interest) followed by Mass Spectrometry (MS) as a common theme. In recent years, advances have permitted the purification of endogenous proteins and provided methods for scaling down the starting material. As such, approaches for rapid, unbiased identification of protein interactomes are becoming a standard tool in the researcher's toolbox, rather than a technique that is only available to specialists. This review highlights some of the recent technical advances in proteomic discovery approaches, the pros and cons of various methods, and some of the key findings in cancer-related systems. PMID:24072816
Şeker, Gaye; Kulacoglu, Hakan; Öztuna, Derya; Topgül, Koray; Akyol, Cihangir; Çakmak, Atıl; Karateke, Faruk; Özdoğan, Mehmet; Ersoy, Eren; Gürer, Ahmet; Zerbaliyev, Elbrus; Seker, Duray; Yorgancı, Kaya; Pergel, Ahmet; Aydın, İbrahim; Ensari, Cemal; Bilecik, Tuna; Kahraman, İzzettin; Reis, Erhan; Kalaycı, Murat; Canda, Aras Emre; Demirağ, Alp; Kesicioğlu, Tuğrul; Malazgirt, Zafer; Gündoğdu, Haldun; Terzi, Cem
2014-01-01
Abdominal wall hernias are a common problem in the general population. A Western estimate reveals that the lifetime risk of developing a hernia is about 2%.1–3 As a result, hernia repairs likely comprise the most frequent general surgery operations. More than 20 million hernias are estimated to be repaired every year around the world.4 Numerous repair techniques have been described to date; however, tension-free mesh repairs are widely used today because of their low hernia recurrence rates. Nevertheless, there are some ongoing debates regarding the ideal approach (open or laparoscopic),5,6 the ideal anesthesia (general, local, or regional),7,8 and the ideal mesh (standard polypropylene or newer meshes).9,10 PMID:25216417
A Leo Satellite Navigation Algorithm Based on GPS and Magnetometer Data
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Harman, Rick; Bar-Itzhack, Itzhack
2001-01-01
The Global Positioning System (GPS) has become a standard method for low cost onboard satellite orbit determination. The use of a GPS receiver as an attitude and rate sensor has also been developed in the recent past. Additionally, focus has been given to attitude and orbit estimation using the magnetometer, a low cost, reliable sensor. Combining measurements from both GPS and a magnetometer can provide a robust navigation system that takes advantage of the estimation qualities of both measurements. Ultimately, a low cost, accurate navigation system can result, potentially eliminating the need for more costly sensors, including gyroscopes. This work presents the development of a technique to eliminate numerical differentiation of the GPS phase measurements and also compares the use of one versus two GPS satellites.
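The paper's filter itself is not reproduced here, but a generic linear Kalman measurement update illustrates the fusion idea: two measurement sources with different observation models correct the same state estimate sequentially. All matrices and values below are toy assumptions:

```python
import numpy as np

# Generic linear Kalman measurement update, shown only to illustrate how two
# measurement sources (e.g. GPS-derived and magnetometer-derived) can be fused
# sequentially; the paper's actual filter and sensor models are not reproduced.
def kalman_update(x, P, z, H, R):
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # state correction
    P = (np.eye(len(x)) - K @ H) @ P  # covariance correction
    return x, P

x = np.zeros(4)        # toy state: position (2) and velocity (2)
P = np.eye(4) * 10.0   # initial uncertainty

# toy observation models: "GPS" observes position directly, the
# "magnetometer" observes one linear combination of the position states
H_gps = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]])
H_mag = np.array([[0.3, 0.7, 0.0, 0.0]])

x, P = kalman_update(x, P, np.array([1.2, -0.4]), H_gps, np.eye(2) * 0.5)
x, P = kalman_update(x, P, np.array([0.1]), H_mag, np.eye(1) * 0.2)
print(x)
```

The attraction of sequential updates is that each sensor improves the common state estimate in proportion to its own noise level, which is why combining GPS and magnetometer data can outperform either sensor alone.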
Virial coefficients and demixing in the Asakura-Oosawa model.
López de Haro, Mariano; Tejero, Carlos F; Santos, Andrés; Yuste, Santos B; Fiumara, Giacomo; Saija, Franz
2015-01-07
The problem of demixing in the Asakura-Oosawa colloid-polymer model is considered. The critical constants are computed using truncated virial expansions up to fifth order. While the exact analytical results for the second and third virial coefficients are known for any size ratio, analytical results for the fourth virial coefficient are provided here, and fifth virial coefficients are obtained numerically for particular size ratios using standard Monte Carlo techniques. We have computed the critical constants by successively considering the truncated virial series up to the second, third, fourth, and fifth virial coefficients. The results for the critical colloid and (reservoir) polymer packing fractions are compared with those that follow from available Monte Carlo simulations in the grand canonical ensemble. Limitations and perspectives of this approach are pointed out.
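Schematically (these are not the paper's specific Asakura-Oosawa expressions, in which the reservoir polymer fraction plays a temperature-like role), a virial series truncated at fifth order and the standard critical-point conditions read:

```latex
\[
\frac{\beta P}{\rho} \;=\; 1 + B_2\rho + B_3\rho^2 + B_4\rho^3 + B_5\rho^4,
\qquad
\left(\frac{\partial P}{\partial \rho}\right)_{T}
= \left(\frac{\partial^{2} P}{\partial \rho^{2}}\right)_{T} = 0
\quad \text{at the critical point.}
\]
```

Here β = 1/(k_B T) and the B_j are the virial coefficients; solving the two conditions simultaneously with the truncated series yields the critical constants at each truncation order, which is how the successive second- through fifth-order estimates mentioned above are obtained.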
NASA Astrophysics Data System (ADS)
Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo
2016-11-01
We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
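A minimal sketch of the string-averaging incremental subgradient iteration described above (the step-size rule, string partition, and toy objective are illustrative choices, not those of the paper):

```python
import numpy as np

def string_averaging_ism(subgrads, strings, x0, steps=100, alpha0=1.0):
    """Minimal sketch of string-averaging incremental subgradients.

    subgrads : list of functions; subgrads[i](x) returns a subgradient of f_i
    strings  : partition of the indices into sequences, e.g. [[0, 1], [2, 3]]
    """
    x = np.asarray(x0, dtype=float)
    for k in range(steps):
        alpha = alpha0 / (k + 1)  # diminishing step size (standard condition)
        endpoints = []
        for string in strings:        # each string could run in parallel
            y = x.copy()
            for i in string:          # incremental pass along the string
                y = y - alpha * subgrads[i](y)
            endpoints.append(y)
        x = np.mean(endpoints, axis=0)  # average the string end-points
    return x

# Toy example: minimize sum_i |x - a_i| for a_i in {-1, 0, 2, 5}
a = [-1.0, 0.0, 2.0, 5.0]
subgrads = [lambda x, ai=ai: np.sign(x - ai) for ai in a]
print(string_averaging_ism(subgrads, [[0, 1], [2, 3]], x0=[10.0]))
```

Splitting the sum into strings is what exposes the parallelism: each string processes only its own portion of the data before the cheap averaging step synchronizes the iterates.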
Integrated and differential accuracy in resummed cross sections
Bertolini, Daniele; Solon, Mikhail P.; Walsh, Jonathan R.
2017-03-30
Standard QCD resummation techniques provide precise predictions for the spectrum and the cumulant of a given observable. The integrated spectrum and the cumulant differ by higher-order terms which, however, can be numerically significant. In this paper we propose a method, which we call the σ-improved scheme, to resolve this issue. It consists of two steps: (i) include higher-order terms in the spectrum to improve the agreement with the cumulant central value, and (ii) employ profile scales that encode correlations between different points to give robust uncertainty estimates for the integrated spectrum. We provide a generic algorithm for determining such profile scales, and show the application to the thrust distribution in e⁺e⁻ collisions at NLL'+NLO and NNLL'+NNLO.
Numerical simulation and experimental investigation of internal and external flows
NASA Astrophysics Data System (ADS)
Wang, Tao; Yang, Guowei; Huang, Guojun; Zhou, Liandi
2006-06-01
In this paper, TASCflow3D is used to solve the inner and outer 3D viscous incompressible turbulent flow (Re = 5.6×10⁶) around an axisymmetric body with a duct. The governing equations are the RANS equations with the standard k-ε turbulence model. The discretization uses a finite volume method based on the finite element approach. In this method, the description of the geometry is very flexible while important conservation properties are retained. Multi-block and algebraic multi-grid techniques are used for convergence acceleration. Agreement between the experimental results and the calculations is good. This indicates that the approach can be used to simulate complex flows such as the interaction between rotor and stator, or propulsion systems involving tip clearance and cavitation.
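For reference, the standard k-ε closure determines an eddy viscosity from the turbulent kinetic energy k and its dissipation rate ε; in the usual textbook form, with the commonly quoted Launder-Spalding constants, it reads:

```latex
\[
\nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad
\frac{Dk}{Dt} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right] + P_k - \varepsilon, \qquad
\frac{D\varepsilon}{Dt} = \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right] + C_{\varepsilon 1}\frac{\varepsilon}{k}P_k - C_{\varepsilon 2}\frac{\varepsilon^2}{k},
\]
```

where P_k is the production of turbulent kinetic energy and the standard constants are C_μ = 0.09, σ_k = 1.0, σ_ε = 1.3, C_ε1 = 1.44, and C_ε2 = 1.92.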
Playing biology's name game: identifying protein names in scientific text.
Hanisch, Daniel; Fluck, Juliane; Mevissen, Heinz-Theodor; Zimmer, Ralf
2003-01-01
A growing body of work is devoted to the extraction of protein or gene interaction information from the scientific literature. Yet, the basis for most extraction algorithms, i.e. the specific and sensitive recognition of protein and gene names and their numerous synonyms, has not been adequately addressed. Here we describe the construction of a comprehensive general purpose name dictionary and an accompanying automatic curation procedure based on a simple token model of protein names. We designed an efficient search algorithm to analyze all abstracts in MEDLINE in a reasonable amount of time on standard computers. The parameters of our method are optimized using machine learning techniques. Used in conjunction, these ingredients lead to good search performance. A supplementary web page is available at http://cartan.gmd.de/ProMiner/.
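A minimal sketch of dictionary-based matching under a simple token model follows; the tokenizer, dictionary entries, and longest-match policy are illustrative assumptions, not the actual curation or search machinery of the system described above:

```python
import re

def tokenize(text):
    # lowercase alphanumeric tokens; a real system would keep more structure
    return re.findall(r"[a-z0-9]+", text.lower())

# synonym dictionary: token tuples -> canonical protein identifier
dictionary = {
    ("p53",): "TP53",
    ("tumor", "protein", "53"): "TP53",
    ("brca1",): "BRCA1",
}
max_len = max(len(key) for key in dictionary)

def find_names(text):
    tokens = tokenize(text)
    hits = []
    for start in range(len(tokens)):
        for length in range(max_len, 0, -1):  # prefer the longest match
            key = tuple(tokens[start:start + length])
            if key in dictionary:
                hits.append((dictionary[key], start))
                break
    return hits

print(find_names("Mutations of tumor protein 53 and BRCA1 were reported."))
```

Matching against token tuples rather than raw strings is what makes multi-word synonyms and simple spelling variants tractable at MEDLINE scale.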
Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.
2004-01-01
Numerical simulation of flow over complex geometry requires accurate and efficient computational methods. Different techniques are available to handle complex geometry. Unstructured-grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single-grid techniques are in common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. Grid generation can be tedious, and special numerical attention is needed to handle skewed cells for conservation purposes. Researchers have therefore long sought alternative methods to ease the effort involved in simulating flow over complex geometry.
7 CFR 51.1216 - Size requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Standards for Grades of Peaches Size § 51.1216 Size requirements. (a) The numerical count or a count-size... closed container shall be indicated on the container. (b) When the numerical count is not shown the...
Nagaosa, Ryuichi S
2014-04-30
This study proposes a new numerical formulation for the spread of a flammable gas leakage. The new numerical approach is applied to establish fundamental data for a hazard assessment of flammable gas spread in an enclosed residential space. The approach employs an extended version of a two-compartment concept and determines the leakage concentration of gas using a mass-balance-based formulation. The study also introduces a computational fluid dynamics (CFD) technique for calculating three-dimensional details of the gas spread by resolving all essential scales of fluid motion without a turbulence model. The present numerical technique yields solutions with fewer uncertainties from the model equations while maintaining high accuracy. The study examines the effect of gas density on the concentration profiles of the flammable gas spread, and also discusses the effect of the gas leakage rate on the concentration profiles.
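A minimal sketch of a two-compartment mass balance of this general kind (all volumes, flows, and the leak rate are invented values, and the well-mixed exchange model is a simplifying assumption, not the paper's extended formulation):

```python
import numpy as np

V = np.array([15.0, 30.0])   # compartment volumes [m^3] (made-up values)
q = 0.002                    # gas leak rate into compartment 0 [m^3/s]
Q = 0.01                     # exchange flow between the compartments [m^3/s]

def step(c, dt):
    # dC/dt from the leak source and inter-compartment exchange,
    # assuming both zones are well mixed
    dc0 = (q * (1.0 - c[0]) + Q * (c[1] - c[0])) / V[0]
    dc1 = (Q * (c[0] - c[1])) / V[1]
    return c + dt * np.array([dc0, dc1])

c = np.zeros(2)              # volume fraction of gas in each compartment
for _ in range(36000):       # 1 h of leakage with dt = 0.1 s (forward Euler)
    c = step(c, 0.1)
print(c)                     # compare against a flammability limit, e.g. 5% vol
```

A CFD calculation of the kind described above replaces these two well-mixed zones with a fully resolved three-dimensional concentration field, at far greater cost.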
A novel numerical framework for self-similarity in plasticity: Wedge indentation in single crystals
NASA Astrophysics Data System (ADS)
Juul, K. J.; Niordson, C. F.; Nielsen, K. L.; Kysar, J. W.
2018-03-01
A novel numerical framework for analyzing self-similar problems in plasticity is developed and demonstrated. Self-similar problems of this kind include processes such as stationary cracks, void growth, indentation, etc. The proposed technique offers a simple and efficient method for handling this class of complex problems by avoiding issues related to traditional Lagrangian procedures. Moreover, the proposed technique allows for focusing the mesh in the region of interest. In the present paper, the technique is exploited to analyze the well-known wedge indentation problem of an elastic-viscoplastic single crystal. However, the framework may be readily adapted to any constitutive law of interest. The main focus herein is the development of the self-similar framework, while the indentation study serves primarily as verification of the technique by comparison to existing numerical and analytical studies. In this study, the three most common metal crystal structures are investigated, namely the face-centered cubic (FCC), body-centered cubic (BCC), and hexagonal close-packed (HCP) crystal structures, and the stress and slip rate fields around the moving contact point singularity are presented.
Numerical simulation of liquid jet impact on a rigid wall
NASA Astrophysics Data System (ADS)
Aganin, A. A.; Guseva, T. S.
2016-11-01
Basic points of a numerical technique for computing high-speed liquid jet impact on a rigid wall are presented. In this technique, the flows of the liquid and the surrounding gas are governed by the equations of gas dynamics in terms of density, velocity, and pressure, which are integrated by the CIP-CUP method on dynamically adaptive grids without explicitly tracking the gas-liquid interface. The efficiency of the technique is demonstrated by computing the impact of a liquid cone and a liquid wedge on a wall in the regime where the shock wave touches the wall at its edge. Numerical solutions of these problems are compared with the analytical solution for the impact of a plane liquid flow on a wall. Applicability of the technique to high-speed liquid jet impact on a wall is illustrated by computing the impact of a cylindrical liquid jet with a hemispherical end on a wall covered by a layer of the same liquid.
NASA Astrophysics Data System (ADS)
Fulcrand, R.; Jugieu, D.; Escriba, C.; Bancaud, A.; Bourrier, D.; Boukabache, A.; Gué, A. M.
2009-10-01
A flexible microfluidic system embedding microelectromagnets has been designed, modeled, and fabricated using a photosensitive resin as the structural material. The fabrication process integrates micro-coils in a multilayer SU-8 microfluidic system by combining standard electroplating and dry-film lamination. This technique offers numerous advantages in terms of integration, biocompatibility, and chemical resistance. Various micro-coil designs, including spiral, square, and serpentine wires, have been simulated and experimentally tested. It has been established that thermal dissipation in the micro-coils depends strongly on the number of turns and the current density but remains compatible with biological applications. Real-time experiments show that these micro-actuators efficiently trap magnetic micro-beads without any external field source or permanent magnet, and highlight that the size of the microfluidic channels was adequately designed for optimal trapping. Moreover, magnetic beads are trapped in less than 2 s and released instantaneously into the micro-channel. The actuation relies solely on electric fields, which are easier to control than standard magneto-fluidic modules.
NASA Astrophysics Data System (ADS)
Alcinkaya, Burak; Sel, Kivanc
2018-01-01
The properties of phosphorus-doped hydrogenated amorphous silicon carbide (a-SiCx:H) thin films, deposited by the plasma-enhanced chemical vapor deposition technique with four different carbon contents (x), were analyzed and compared with those of intrinsic a-SiCx:H thin films. The carbon contents of the films were determined by X-ray photoelectron spectroscopy. The thicknesses and optical energies of the thin films, such as the Tauc, E04, and Urbach energies, were determined by UV-visible transmittance spectroscopy. The electrical properties of the films, such as conductivities and activation energies, were analyzed by temperature-dependent current-voltage measurements. Finally, the conduction mechanisms of the films were investigated by numerical analysis, in which standard transport in the extended states and nearest-neighbor hopping in the band-tail states were taken into consideration. It was determined that, with phosphorus doping, the dominant conduction mechanism was standard transport in the extended states for all carbon contents.
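Where activated transport in the extended states dominates, the conductivity follows sigma = sigma_0 exp(-E_a / k_B T), so the activation energy can be extracted from an Arrhenius fit. A minimal sketch with made-up data (not the measurements of this study):

```python
import numpy as np

k_B = 8.617e-5                                       # Boltzmann constant [eV/K]
T = np.array([250.0, 300.0, 350.0, 400.0])           # temperatures [K]
sigma = np.array([2.1e-7, 4.0e-6, 3.6e-5, 2.0e-4])   # conductivities [S/cm]

# linearize the activated law: ln(sigma) = ln(sigma_0) - (E_a / k_B) * (1 / T)
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
E_a = -slope * k_B
print(f"activation energy ~ {E_a:.2f} eV, sigma_0 ~ {np.exp(intercept):.2e} S/cm")
```

Deviations from a straight line in this plot are one standard signal that another mechanism, such as hopping in the band-tail states, is contributing at low temperature.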
Practical management of heterogeneous neuroimaging metadata by global neuroimaging data repositories
Neu, Scott C.; Crawford, Karen L.; Toga, Arthur W.
2012-01-01
Rapidly evolving neuroimaging techniques are producing unprecedented quantities of digital data at the same time that many research studies are evolving into global, multi-disciplinary collaborations between geographically distributed scientists. While networked computers have made it almost trivial to transmit data across long distances, collecting and analyzing these data requires extensive metadata if the data are to be maximally shared. Though it is typically straightforward to encode text and numerical values into files and send content between different locations, it is often difficult to attach context and implicit assumptions to the content. As the number of data contributors and the geographic separation between them grow to national and global scales, the heterogeneity of the collected metadata increases and conformance to a single standard becomes implausible. Neuroimaging data repositories must then not only accumulate data but also consolidate disparate metadata into an integrated view. In this article, using specific examples from our experiences, we demonstrate how standardization alone cannot achieve full integration of neuroimaging data from multiple heterogeneous sources and why a fundamental change in the architecture of neuroimaging data repositories is needed instead. PMID:22470336