Improved numerical methods for turbulent viscous recirculating flows
NASA Technical Reports Server (NTRS)
Vandoormaal, J. P.; Turan, A.; Raithby, G. D.
1986-01-01
The objective of the present study is to improve both the accuracy and computational efficiency of existing numerical techniques used to predict viscous recirculating flows in combustors. A review of the status of the study is presented along with some illustrative results. The effort to improve the numerical techniques consists of the following technical tasks: (1) selection of numerical techniques to be evaluated; (2) two dimensional evaluation of selected techniques; and (3) three dimensional evaluation of technique(s) recommended in Task 2.
A technique to remove the tensile instability in weakly compressible SPH
NASA Astrophysics Data System (ADS)
Xu, Xiaoyang; Yu, Peng
2018-01-01
When smoothed particle hydrodynamics (SPH) is applied directly to the numerical simulation of transient viscoelastic free-surface flows, a numerical problem called tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing free-surface flow of an Oldroyd-B fluid are considered and approximated by an improved SPH scheme, which includes a kernel gradient correction and the introduction of a Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, simulations of an impacting drop, the injection molding of a C-shaped cavity, and extrudate swell are conducted. The numerical results obtained are compared with those produced by other numerical methods. A comparison among different numerical techniques (e.g., the artificial stress method) for removing the tensile instability is also performed. All numerical results agree well with the available data.
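To make the idea concrete, below is a minimal sketch of a generic Fickian-type particle shifting correction of the kind this family of methods uses; it is an illustration under stated assumptions (cubic-spline kernel, shifting coefficient A, 2D particles), not the authors' exact optimized scheme.

```python
import numpy as np

def cubic_spline_grad_w(rij, h):
    """Gradient of the 2D cubic-spline kernel, returned as a vector."""
    r = np.linalg.norm(rij)
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)        # 2D normalization constant
    if q <= 1.0:
        dwdq = sigma * (-3.0 * q + 2.25 * q**2)
    elif q <= 2.0:
        dwdq = -0.75 * sigma * (2.0 - q)**2
    else:
        return np.zeros(2)
    return dwdq / h * rij / max(r, 1e-12)

def particle_shift(positions, masses, densities, h, dt, umax, A=0.5):
    """Fickian-type shifting: delta_r_i = -D * grad(C_i), where
    grad(C_i) = sum_j (m_j / rho_j) grad W_ij is the particle concentration gradient."""
    D = A * h * umax * dt                      # shifting "diffusion" coefficient
    shifted = positions.copy()
    for i in range(len(positions)):
        grad_c = np.zeros(2)
        for j in range(len(positions)):
            if i == j:
                continue
            rij = positions[i] - positions[j]
            if np.linalg.norm(rij) < 2.0 * h:  # kernel support radius
                grad_c += (masses[j] / densities[j]) * cubic_spline_grad_w(rij, h)
        shifted[i] = positions[i] - D * grad_c
    return shifted
```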
Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques
NASA Astrophysics Data System (ADS)
Elliott, Louie C.
This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
Spacecraft Charging Calculations: NASCAP-2K and SEE Spacecraft Charging Handbook
NASA Technical Reports Server (NTRS)
Davis, V. A.; Neergaard, L. F.; Mandell, M. J.; Katz, I.; Gardner, B. M.; Hilton, J. M.; Minor, J.
2002-01-01
For fifteen years, the NASA and Air Force Charging Analyzer Program for Geosynchronous Orbits (NASCAP/GEO) has been the workhorse of spacecraft charging calculations. Two new tools, the Space Environment and Effects (SEE) Spacecraft Charging Handbook (recently released) and Nascap-2K (under development), use improved numeric techniques and modern user interfaces to tackle the same problem. The SEE Spacecraft Charging Handbook provides first-order, lower-resolution solutions while Nascap-2K provides higher-resolution results appropriate for detailed analysis. This paper illustrates how the improvements in the numeric techniques affect the results.
NASA Astrophysics Data System (ADS)
Yamaguchi, Hideshi; Soeda, Takeshi
2015-03-01
A practical framework for an electron beam induced current (EBIC) technique has been established for conductive materials based on a numerical optimization approach. Although the conventional EBIC technique is useful for evaluating the distributions of dopants or crystal defects in semiconductor transistors, issues related to the reproducibility and quantitative capability of measurements using this technique persist. For instance, it is difficult to acquire high-quality EBIC images throughout continuous tests due to variation in operator skill or test environment. Recently, due to the evaluation of EBIC equipment performance and the numerical optimization of equipment items, the constant acquisition of high contrast images has become possible, improving the reproducibility as well as yield regardless of operator skill or test environment. The technique proposed herein is even more sensitive and quantitative than scanning probe microscopy, an imaging technique that can possibly damage the sample. The new technique is expected to benefit the electrical evaluation of fragile or soft materials along with LSI materials.
Multi-sensor Improved Sea-Surface Temperature (MISST) for IOOS - Navy Component
2013-09-30
application and data fusion techniques. 2. Parameterization of IR and MW retrieval differences, with consideration of diurnal warming and cool-skin effects...associated retrieval confidence, standard deviation (STD), and diurnal warming estimates to the application user community in the new GDS 2.0 GHRSST...including coral reefs, ocean modeling in the Gulf of Mexico, improved lake temperatures, numerical data assimilation by ocean models, numerical
Role of electromagnetic navigational bronchoscopy in pulmonary nodule management
Dahagam, Chanukya; Breen, David P.; Sarkar, Saiyad
2016-01-01
The incidence of pulmonary nodules and lung cancer is rising. Some of this increase in incidence is due to improved pick-up by newer imaging modalities. However, the goal is to diagnose these lesions, many of which are located in the periphery, by safe and relatively non-invasive methods. This has led to the emergence of numerous techniques such as electromagnetic navigational bronchoscopy (ENB). Current evidence supports a role for these techniques in the diagnostic pathway. However, numerous factors influence the diagnostic accuracy. Thus, despite significant advances, more research needs to be undertaken to further improve the currently available diagnostic technologies. PMID:27606080
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
NASA Astrophysics Data System (ADS)
White, Christopher Joseph
We describe the implementation of sophisticated numerical techniques for general-relativistic magnetohydrodynamics simulations in the Athena++ code framework. Improvements over many existing codes include the use of advanced Riemann solvers and of staggered-mesh constrained transport. Combined with considerations for computational performance and parallel scalability, these allow us to investigate black hole accretion flows with unprecedented accuracy. The capability of the code is demonstrated by exploring magnetically arrested disks.
The use of the modified Cholesky decomposition in divergence and classification calculations
NASA Technical Reports Server (NTRS)
Van Rooy, D. L.; Lynn, M. S.; Snyder, C. H.
1973-01-01
This report analyzes the use of the modified Cholesky decomposition technique as applied to the feature selection and classification algorithms used in the analysis of remote sensing data (e.g., as in LARSYS). This technique is approximately 30% faster in classification and a factor of 2-3 faster in divergence, as compared with LARSYS. Also numerical stability and accuracy are slightly improved. Other methods necessary to deal with numerical stability problems are briefly discussed.
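As an illustration of how the factorization enters such classifiers, the sketch below evaluates Gaussian per-class log-likelihoods with a Cholesky factor of each class covariance instead of an explicit inverse and determinant; the NumPy/SciPy implementation and function names are assumptions for illustration, not the LARSYS code.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def gaussian_log_likelihoods(x, class_means, class_covs):
    """Per-class Gaussian log-likelihoods via Cholesky factors (cov = L L^T);
    the common constant -d/2*log(2*pi) is omitted since it cancels in comparisons."""
    scores = []
    for mean, cov in zip(class_means, class_covs):
        L = cholesky(cov, lower=True)                  # lower-triangular factor
        z = solve_triangular(L, x - mean, lower=True)  # solves L z = (x - mean)
        log_det = 2.0 * np.sum(np.log(np.diag(L)))     # log det(cov)
        scores.append(-0.5 * (z @ z + log_det))
    return np.array(scores)

def classify(x, class_means, class_covs):
    """Assign x to the class with the largest log-likelihood."""
    return int(np.argmax(gaussian_log_likelihoods(x, class_means, class_covs)))
```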
Numerical modelling techniques of soft soil improvement via stone columns: A brief review
NASA Astrophysics Data System (ADS)
Zukri, Azhani; Nazir, Ramli
2018-04-01
There are a number of numerical studies on stone column systems in the literature. Most of the studies were concerned with two-dimensional analysis of stone column behaviour, while only a few used three-dimensional analysis. The most popular software utilised in those studies was Plaxis 2D and 3D. Other software used for numerical analysis includes DIANA, EXAMINE, ZSoil, ABAQUS, ANSYS, NISA, GEOSTUDIO, CRISP, TOCHNOG, CESAR, GEOFEM (2D & 3D), FLAC, and FLAC3D. This paper reviews the methodological approaches used to model stone columns numerically, in both two-dimensional and three-dimensional analyses. The numerical techniques and suitable constitutive models used in the studies are also discussed, along with the validation methods conducted to verify the numerical analyses. This review also serves as a guide for junior engineers to the applicable procedures and considerations when constructing and running a two- or three-dimensional numerical analysis, and cites numerous relevant references.
NASA Astrophysics Data System (ADS)
Hu, Junbao; Meng, Xin; Wei, Qi; Kong, Yan; Jiang, Zhilong; Xue, Liang; Liu, Fei; Liu, Cheng; Wang, Shouyu
2018-03-01
Wide-field microscopy is commonly used for sample observation in biological research and medical diagnosis. However, the tilting error induced by the oblique location of the image recorder or the sample, as well as the inclination of the optical path, often deteriorates the imaging quality. In order to eliminate tilting in microscopy, a numerical tilting compensation technique based on wavefront sensing using the transport of intensity equation method is proposed in this paper. Both the numerical simulations and practical experiments provided prove that the proposed technique not only accurately determines the tilting angle with a simple setup and procedure, but also compensates the tilting error to improve imaging quality, even in cases of large tilt. Considering its simple system and operation, as well as its image quality improvement capability, it is believed the proposed method can be applied for tilting compensation in optical microscopy.
NASA Astrophysics Data System (ADS)
Roul, Pradip; Warbhe, Ujwal
2017-08-01
The classical homotopy perturbation method proposed by J. H. He, Comput. Methods Appl. Mech. Eng. 178, 257 (1999) is useful for obtaining approximate solutions to a wide class of nonlinear problems in terms of series with easily calculable components. However, in some cases, this method has been found to produce slowly convergent series. To overcome this shortcoming, we present a new, reliable algorithm called the domain decomposition homotopy perturbation method (DDHPM) to solve a class of singular two-point boundary value problems with Neumann and Robin-type boundary conditions arising in various physical models. Five numerical examples are presented to demonstrate the accuracy and applicability of our method, including thermal explosion, oxygen diffusion in a spherical cell, and heat conduction through a solid with heat generation. A comparison is made between the proposed technique and other existing semi-numerical or numerical techniques. Numerical results reveal that only two or three iterations lead to highly accurate solutions, and this new technique offers a powerful improvement for solving nonlinear singular boundary value problems (SBVPs).
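For reference, the classical homotopy construction underlying He's method, on which the DDHPM builds, can be written in generic form as follows (the notation is illustrative, not the paper's):

```latex
% For an operator equation L(u) + N(u) - f(r) = 0 with linear part L and
% nonlinear part N, introduce an embedding parameter p \in [0,1]:
H(v,p) = (1-p)\,\bigl[L(v) - L(u_0)\bigr] + p\,\bigl[L(v) + N(v) - f(r)\bigr] = 0,
\qquad
v = v_0 + p\,v_1 + p^2 v_2 + \cdots,
\qquad
u = \lim_{p\to 1} v = \sum_{k\ge 0} v_k .
```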
Preliminary numerical analysis of improved gas chromatograph model
NASA Technical Reports Server (NTRS)
Woodrow, P. T.
1973-01-01
A mathematical model for the gas chromatograph was developed which incorporates the heretofore neglected transport mechanisms of intraparticle diffusion and rates of adsorption. Because a closed-form analytical solution to the model does not appear realizable, techniques for the numerical solution of the model equations are being investigated. Criteria were developed for using a finite terminal boundary condition in place of an infinite boundary condition used in analytical solution techniques. The class of weighted residual methods known as orthogonal collocation is presently being investigated and appears promising.
RE-NUMERATE: A Workshop to Restore Essential Numerical Skills and Thinking via Astronomy Education
NASA Astrophysics Data System (ADS)
McCarthy, D.; Follette, K.
2013-04-01
The quality of science teaching for all ages is degraded by our students' gross lack of skills in elementary arithmetic and their unwillingness to think, and to express themselves, numerically. Out of frustration, educators and science communicators often choose to avoid these problems, thereby reinforcing the belief that math is only needed in “math class” and preventing students from maturing into capable, well-informed citizens. In this sense we teach students a pseudo-science, not the real nature, beauty, and value of science. This workshop encourages and equips educators to immerse students in numerical thinking throughout a science course. The workshop begins by identifying common deficiencies in skills and attitudes among non-science collegians (freshman-senior) enrolled in General Education astronomy courses. The bulk of the workshop engages participants in well-tested techniques (e.g., presentation methods, curriculum, activities, mentoring approaches, etc.) for improving students' arithmetic skills, increasing their confidence, and improving their abilities in numerical expression. These techniques are grounded in 25+ years of experience in college classrooms and pre-college informal education. They are suited for use in classrooms (K-12 and college), informal venues, and science communication in general, and could be applied across the standard school curriculum.
Improved numerical methods for turbulent viscous recirculating flows
NASA Technical Reports Server (NTRS)
Turan, A.
1985-01-01
The hybrid-upwind finite difference schemes employed in generally available combustor codes possess excessive numerical diffusion errors which preclude accurate quantitative calculations. The present study has as its primary objective the identification and assessment of an improved solution algorithm, as well as discretization schemes, applicable to the analysis of turbulent viscous recirculating flows. The assessment is carried out primarily in two-dimensional/axisymmetric geometries with a view to identifying an appropriate technique to be incorporated in a three-dimensional code.
Novel measurement techniques (development and analysis of silicon solar cells near 20% efficiency)
NASA Technical Reports Server (NTRS)
Wolf, M.; Newhouse, M.
1986-01-01
Work in identifying, developing, and analyzing techniques for measuring bulk recombination rates, and surface recombination velocities and rates, in all regions of high-efficiency silicon solar cells is presented. The accuracy of the previously developed DC measurement system was improved by adding blocked interference filters. The system was further automated by writing software that completely samples the unknown solar cell regions with data for numerous recombination velocity and lifetime pairs. The results can be displayed in three dimensions, and the best fit can be found numerically using the simplex minimization algorithm. Also described is a theoretical methodology to analyze and compare existing dynamic measurement techniques.
SNR Improvement of QEPAS System by Preamplifier Circuit Optimization and Frequency Locked Technique
NASA Astrophysics Data System (ADS)
Zhang, Qinduan; Chang, Jun; Wang, Zongliang; Wang, Fupeng; Jiang, Fengting; Wang, Mengyao
2018-06-01
Preamplifier circuit noise is of great importance in a quartz-enhanced photoacoustic spectroscopy (QEPAS) system. In this paper, several noise sources are evaluated and discussed in detail. Based on the noise characteristics, corresponding noise reduction methods are proposed. In addition, a frequency locked technique is introduced to further reduce the QEPAS system noise and improve the signal, achieving better performance than the conventional frequency scan method. As a result, the signal-to-noise ratio (SNR) could be increased 14-fold by utilizing the frequency locked technique and numerical averaging in the QEPAS system for water vapor detection.
A numerical study of mixing in supersonic combustors with hypermixing injectors
NASA Technical Reports Server (NTRS)
Lee, J.
1993-01-01
A numerical study was conducted to evaluate the performance of wall mounted fuel-injectors designed for potential Supersonic Combustion Ramjet (SCRAM-jet) engine applications. The focus of this investigation was to numerically simulate existing combustor designs for the purpose of validating the numerical technique and the physical models developed. Three different injector designs of varying complexity were studied to fully understand the computational implications involved in accurate predictions. A dual transverse injection system and two streamwise injector designs were studied. The streamwise injectors were designed with swept ramps to enhance fuel-air mixing and combustion characteristics at supersonic speeds without the large flow blockage and drag contribution of the transverse injection system. For this study, the Mass-Averaged Navier-Stokes equations and the chemical species continuity equations were solved. The computations were performed using a finite-volume implicit numerical technique and multiple block structured grid system. The interfaces of the multiple block structured grid systems were numerically resolved using the flux-conservative technique. Detailed comparisons between the computations and existing experimental data are presented. These comparisons show that numerical predictions are in agreement with the experimental data. These comparisons also show that a number of turbulence model improvements are needed for accurate combustor flowfield predictions.
Improving the Numerical Stability of Fast Matrix Multiplication
Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...
2016-10-04
Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
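As a concrete reminder of what a "fast" algorithm looks like, one level of Strassen's recursion (seven block multiplications instead of eight) is sketched below; recursion to deeper levels and the diagonal-scaling strategies discussed in the paper are deliberately omitted.

```python
import numpy as np

def strassen_one_level(A, B):
    """One level of Strassen's algorithm for square matrices of even dimension."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]

    M1 = (A11 + A22) @ (B11 + B22)   # seven block products ...
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)

    C = np.empty_like(A)             # ... recombined into the four blocks of C
    C[:n, :n] = M1 + M4 - M5 + M7
    C[:n, n:] = M3 + M5
    C[n:, :n] = M2 + M4
    C[n:, n:] = M1 - M2 + M3 + M6
    return C
```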
Progress technology in microencapsulation methods for cell therapy.
Rabanel, Jean-Michel; Banquy, Xavier; Zouaoui, Hamza; Mokhtar, Mohamed; Hildgen, Patrice
2009-01-01
Cell encapsulation in microcapsules allows the in situ delivery of secreted proteins to treat different pathological conditions. Spherical microcapsules offer an optimal surface-to-volume ratio for protein and nutrient diffusion, and thus, cell viability. This technology permits cell survival along with protein secretion activity upon appropriate host stimuli without the deleterious effects of immunosuppressant drugs. Microcapsules can be classified into 3 categories: matrix-core/shell microcapsules, liquid-core/shell microcapsules, and cells-core/shell microcapsules (or conformal coating). Many preparation techniques using natural or synthetic polymers as well as inorganic compounds have been reported. Matrix-core/shell microcapsules, in which cells are hydrogel-embedded, exemplified by alginate capsules, are by far the most studied. Numerous refinements of the technique have been proposed over the years, such as better material characterization and purification, improvements in microbead generation methods, and new microbead coating techniques. Other approaches, based on liquid-core capsules, showed improved protein production and increased cell survival. Aside from those more traditional techniques, new techniques are emerging in response to the shortcomings of existing methods. More recently, direct cell aggregate coating has been proposed to minimize membrane thickness and implant size. Microcapsule performance is largely dictated by the physicochemical properties of the materials and the preparation techniques employed. Despite numerous promising pre-clinical results, at the present time each of the proposed methods needs further improvement before reaching the clinical phase. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.
Structural reanalysis via a mixed method [using Taylor series for accuracy improvement]
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1975-01-01
A study is made of the approximate structural reanalysis technique based on the use of Taylor series expansion of response variables in terms of design variables in conjunction with the mixed method. In addition, comparisons are made with two reanalysis techniques based on the displacement method. These techniques are the Taylor series expansion and the modified reduced basis. It is shown that the use of the reciprocals of the sizing variables as design variables (which is the natural choice in the mixed method) can result in a substantial improvement in the accuracy of the reanalysis technique. Numerical results are presented for a space truss structure.
Simplified nonplanar wafer bonding for heterogeneous device integration
NASA Astrophysics Data System (ADS)
Geske, Jon; Bowers, John E.; Riley, Anton
2004-07-01
We demonstrate a simplified nonplanar wafer bonding technique for heterogeneous device integration. The improved technique can be used to laterally integrate dissimilar semiconductor device structures on a lattice-mismatched substrate. Using the technique, two different InP-based vertical-cavity surface-emitting laser active regions have been integrated onto GaAs without compromising the quality of the photoluminescence. Experimental and numerical simulation results are presented.
Regnier, D.; Litaize, O.; Serot, O.
2015-12-23
Numerous nuclear processes involve the deexcitation of a compound nucleus through the emission of several neutrons, gamma rays and/or conversion electrons. The characteristics of such a deexcitation are commonly derived from a total statistical framework often called the “Hauser-Feshbach” method. In this work, we highlight a numerical limitation of this kind of method in the case of the deexcitation of a high-spin initial state. To circumvent this issue, an improved technique called the Fluctuating Structure Properties (FSP) method is presented. Two FSP algorithms are derived and benchmarked on the calculation of the total radiative width for thermal neutron capture on 238U. We compare the standard method with these FSP algorithms for the prediction of particle multiplicities in the deexcitation of a high-spin level of 143Ba. The gamma multiplicity turns out to be very sensitive to the numerical method; the bias between the two techniques can reach 1.5 γ per cascade. Lastly, the uncertainty of these calculations arising from the lack of knowledge of nuclear structure is estimated via the FSP method.
Pulskamp, Jeffrey S; Bedair, Sarah S; Polcawich, Ronald G; Smith, Gabriel L; Martin, Joel; Power, Brian; Bhave, Sunil A
2012-05-01
This paper reports theoretical analysis and experimental results on a numerical electrode shaping design technique that permits the excitation of arbitrary modes in arbitrary geometries for piezoelectric resonators, for those modes permitted to exist by the nonzero piezoelectric coefficients and electrode configuration. The technique directly determines optimal electrode shapes by assessing the local suitability of excitation and detection electrode placement on two-port resonators, without the need for iterative numerical techniques. The technique is demonstrated in 61 different electrode designs in lead zirconate titanate (PZT) thin film on silicon RF micro-electro-mechanical system (MEMS) plate, beam, ring, and disc resonators for out-of-plane flexural and various contour modes up to 200 MHz. The average squared effective electromechanical coupling factor for the designs was 0.54%, approximately equivalent to the theoretical maximum value of 0.53% for a fully electroded length-extensional mode beam resonator comprised of the same composite. The average improvement in S21 for the electrode-shaped designs was 14.6 dB, with a maximum improvement of 44.3 dB. Through this piezoelectric electrode-shaping technique, 95% of the designs showed a reduction in insertion loss.
NASA Astrophysics Data System (ADS)
Jorris, Timothy R.
2007-12-01
To support the Air Force's Global Reach concept, a Common Aero Vehicle is being designed to support the Global Strike mission. "Waypoints" are specified for reconnaissance or multiple payload deployments and "no-fly zones" are specified for geopolitical restrictions or threat avoidance. Due to time-critical targets and multiple-scenario analysis, an autonomous solution is preferred over a time-intensive, manually iterative one. Thus, a real-time or near real-time autonomous trajectory optimization technique is presented to minimize the flight time, satisfy terminal and intermediate constraints, and remain within the specified vehicle heating and control limitations. This research uses the Hypersonic Cruise Vehicle (HCV) as a simplified two-dimensional platform to compare multiple solution techniques. The solution techniques include a unique geometric approach developed herein, a derived analytical dynamic optimization technique, and a rapidly emerging collocation numerical approach. This up-and-coming numerical technique is a direct solution method involving discretization followed by dualization, with pseudospectral methods and nonlinear programming used to converge to the optimal solution. This numerical approach is applied to the Common Aero Vehicle (CAV) as the test platform for the full three-dimensional reentry trajectory optimization problem. The culmination of this research is the verification of the optimality of this proposed numerical technique, as shown for both the two-dimensional and three-dimensional models. Additionally, user implementation strategies are presented to improve accuracy and enhance solution convergence. Thus, the contributions of this research are the geometric approach, the user implementation strategies, and the determination and verification of a numerical solution technique for the optimal reentry trajectory problem that minimizes time to target while satisfying vehicle dynamics and control limitations, as well as heating, waypoint, and no-fly zone constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allgor, R.J.; Feehery, W.F.; Tolsma, J.E.
The batch process development problem serves as a good candidate to guide the development of process modeling environments. It demonstrates that very robust numerical techniques are required within an environment that can collect, organize, and maintain the data and models required to address the batch process development problem. This paper focuses on improving the robustness and efficiency of the numerical algorithms required in such a modeling environment through the development of hybrid numerical and symbolic strategies.
Triangular covariance factorizations for Kalman filtering. Ph.D. Thesis - Calif. Univ.
NASA Technical Reports Server (NTRS)
Thornton, C. L.
1976-01-01
An improved computational form of the discrete Kalman filter is derived using an upper triangular factorization of the error covariance matrix. The covariance P is factored such that P = UDU^T, where U is unit upper triangular and D is diagonal. Recursions are developed for propagating the U-D covariance factors together with the corresponding state estimate. The resulting algorithm, referred to as the U-D filter, combines the superior numerical precision of square root filtering techniques with an efficiency comparable to that of Kalman's original formula. Moreover, this method is easily implemented and involves no more computer storage than the Kalman algorithm. These characteristics make the U-D method an attractive real-time filtering technique. A new covariance error analysis technique is obtained from an extension of the U-D filter equations. This evaluation method is flexible and efficient and may provide significantly improved numerical results. Cost comparisons show that, for a large class of problems, the U-D evaluation algorithm is noticeably less expensive than conventional error analysis methods.
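A minimal sketch of the UDU^T decomposition itself (the factors the U-D filter propagates) is given below; it illustrates the factorization only, not Thornton's full time- and measurement-update recursions, and the function name is an assumption.

```python
import numpy as np

def udu_factor(P):
    """Factor a symmetric positive-definite P as P = U diag(d) U^T,
    with U unit upper triangular; works backwards from the last column."""
    n = P.shape[0]
    P = P.astype(float).copy()
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # deflate: remove column j's contribution from the leading block
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, d

# consistency check: the factors reproduce P
P = np.array([[4.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])
U, d = udu_factor(P)
assert np.allclose(U @ np.diag(d) @ U.T, P)
```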
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
A Strassen-Newton algorithm for high-speed parallelizable matrix inversion
NASA Technical Reports Server (NTRS)
Bailey, David H.; Ferguson, Helaman R. P.
1988-01-01
Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
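One of the matrix Newton iterations referred to above is the Newton-Schulz scheme, sketched below; because each step uses only matrix multiplications, a Strassen-type multiply can be substituted for the `@` products. The starting guess is a conventional safe choice for nonsingular A, not necessarily the one used in the paper.

```python
import numpy as np

def newton_inverse(A, tol=1e-12, max_iter=100):
    """Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k), converging to A^{-1}."""
    n = A.shape[0]
    # classical starting guess: X0 = A^T / (||A||_1 ||A||_inf) guarantees convergence
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(max_iter):
        R = I - A @ X                    # residual; drives the quadratic convergence
        if np.linalg.norm(R, np.inf) < tol:
            break
        X = X + X @ R                    # algebraically equal to X (2I - A X)
    return X
```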
Preparing Colorful Astronomical Images II
NASA Astrophysics Data System (ADS)
Levay, Z. G.; Frattare, L. M.
2002-12-01
We present additional techniques for using mainstream graphics software (Adobe Photoshop and Illustrator) to produce composite color images and illustrations from astronomical data. These techniques have been used on numerous images from the Hubble Space Telescope to produce photographic, print and web-based products for news, education and public presentation as well as illustrations for technical publication. We expand on a previous paper to present more detail and additional techniques, taking advantage of new or improved features available in the latest software versions. While Photoshop is not intended for quantitative analysis of full dynamic range data (as are IRAF or IDL, for example), we have had much success applying Photoshop's numerous, versatile tools to work with scaled images, masks, text and graphics in multiple semi-transparent layers and channels.
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
Wideband piezoelectric energy harvester for low-frequency application with plucking mechanism
NASA Astrophysics Data System (ADS)
Hiraki, Yasuhiro; Masuda, Arata; Ikeda, Naoto; Katsumura, Hidenori; Kagata, Hiroshi; Okumura, Hidenori
2015-04-01
Wireless sensor networks need energy harvesting from the vibrational environment for their power supply. The conventional resonance-type vibration energy harvesters, however, are not always effective for low-frequency applications. The purpose of this paper is to propose a high-efficiency energy harvester for low-frequency applications utilizing plucking and SSHI techniques, and to investigate the effects of applying those techniques in terms of energy harvesting efficiency. First, we derived an approximate formulation of the energy harvesting efficiency of the plucking device by theoretical analysis. Next, it was confirmed that the improved efficiency agreed with numerical and experimental results. Also, a parallel SSHI, a switching circuit technique to improve the performance of the harvester, was introduced and examined by numerical simulations and experiments. Contrary to the simulated results, in which the efficiency was improved from 13.1% to 22.6% by introducing the SSHI circuit, the efficiency obtained in the experiment was only 7.43%. This was likely due to the internal resistance of the inductors and photo-MOS relays in the switching circuit; a simulation including this factor revealed its large negative influence. This result suggested that reducing the switching resistance is significantly important for the implementation of SSHI.
NASA Astrophysics Data System (ADS)
Parvathi, S. P.; Ramanan, R. V.
2018-06-01
An iterative analytical trajectory design technique that includes perturbations in the departure phase of interplanetary orbiter missions is proposed. Perturbations such as the non-spherical gravity of the Earth and third-body perturbations due to the Sun and Moon are included in the analytical design process. In the design process, the design is first obtained using the iterative patched conic technique without the perturbations and then modified to include them. The modification is based on (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence, including the perturbations, and (ii) quantification of the deviations in the orbital elements at the periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using a linear approximation technique. The new analytical design technique, named the biased iterative patched conic technique, does not depend upon numerical integration, and all computations are carried out using closed-form expressions. The improved design is very close to the numerical design. Design analysis using the proposed technique provides a realistic insight into the mission aspects. In addition, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.
Moho Modeling Using FFT Technique
NASA Astrophysics Data System (ADS)
Chen, Wenjin; Tenzer, Robert
2017-04-01
To improve numerical efficiency, the Fast Fourier Transform (FFT) technique was employed in Parker-Oldenburg's method for regional gravimetric Moho recovery, which assumes a planar approximation of the Earth. In this study, we extend this method to global applications while assuming a spherical approximation of the Earth. In particular, we utilize the FFT technique for a global Moho recovery, which is practically realized in two numerical steps. Gravimetric forward modeling is first applied, based on methods for a spherical harmonic analysis and synthesis of the global gravity and lithospheric structure models, to compute the refined gravity field, which comprises mainly the gravitational signature of the Moho geometry. The gravimetric inverse problem is then solved iteratively in order to determine the Moho depth. The application of the FFT technique to both numerical steps reduces the computation time to a fraction of that required without applying this fast algorithm. The developed numerical procedures are used to estimate the Moho depth globally, and the gravimetric result is validated using the global (CRUST1.0) and regional (ESC) seismic Moho models. The comparison reveals a relatively good agreement between the gravimetric and seismic models, with the RMS of differences (of 4-5 km) at the level of expected uncertainties of the input datasets and without significant systematic bias.
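For reference, the planar Parker-Oldenburg relations that this spherical FFT scheme generalizes can be written, up to sign conventions, as:

```latex
% Parker's forward expression for the gravity effect of an interface h(x)
% at mean depth z_0 with density contrast \Delta\rho (F = 2D Fourier transform):
\mathcal{F}[\Delta g] = -2\pi G\,\Delta\rho\, e^{-|k| z_0}
  \sum_{n=1}^{\infty} \frac{|k|^{\,n-1}}{n!}\,\mathcal{F}\!\left[h^{n}\right],
% and Oldenburg's rearrangement, iterated on h to invert gravity for the interface:
\mathcal{F}[h] = -\frac{\mathcal{F}[\Delta g]\, e^{|k| z_0}}{2\pi G\,\Delta\rho}
  - \sum_{n=2}^{\infty} \frac{|k|^{\,n-1}}{n!}\,\mathcal{F}\!\left[h^{n}\right].
```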
Numerical modeling and model updating for smart laminated structures with viscoelastic damping
NASA Astrophysics Data System (ADS)
Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan
2018-07-01
This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.
Use of MODIS Cloud Top Pressure to Improve Assimilation Yields of AIRS Radiances in GSI
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Srikishen, Jayanthi
2014-01-01
Improvements to global and regional numerical weather prediction have been demonstrated through assimilation of data from NASA's Atmospheric Infrared Sounder (AIRS). Current operational data assimilation systems use AIRS radiances, but impact on regional forecasts has been much smaller than for global forecasts. Previously, it has been shown that cloud top designation associated with quality control procedures within the Gridpoint Statistical Interpolation (GSI) system used operationally by a number of Joint Center for Satellite Data Assimilation (JCSDA) partners may not provide the best representation of cloud top pressure (CTP). Because this designated CTP determines which channels are cloud-free and, thus, available for assimilation, ensuring the most accurate representation of this value is imperative to obtaining the greatest impact from satellite radiances. This paper examines the assimilation of hyperspectral sounder data used in operational numerical weather prediction by comparing analysis increments and numerical forecasts generated using operational techniques with a research technique that swaps CTP from the Moderate-resolution Imaging Spectroradiometer (MODIS) for the value of CTP calculated from the radiances within GSI.
Local numerical modelling of ultrasonic guided waves in linear and nonlinear media
NASA Astrophysics Data System (ADS)
Packo, Pawel; Radecki, Rafal; Kijanka, Piotr; Staszewski, Wieslaw J.; Uhl, Tadeusz; Leamy, Michael J.
2017-04-01
Nonlinear ultrasonic techniques provide improved damage sensitivity compared to linear approaches. The combination of attractive properties of guided waves, such as Lamb waves, with unique features of higher harmonic generation provides great potential for characterization of incipient damage, particularly in plate-like structures. Nonlinear ultrasonic structural health monitoring techniques use interrogation signals at frequencies other than the excitation frequency to detect changes in structural integrity. Signal processing techniques used in non-destructive evaluation are frequently supported by modeling and numerical simulations in order to facilitate problem solution. This paper discusses known and newly-developed local computational strategies for simulating elastic waves, and attempts characterization of their numerical properties in the context of linear and nonlinear media. A hybrid numerical approach combining advantages of the Local Interaction Simulation Approach (LISA) and Cellular Automata for Elastodynamics (CAFE) is proposed for unique treatment of arbitrary strain-stress relations. The iteration equations of the method are derived directly from physical principles employing stress and displacement continuity, leading to an accurate description of the propagation in arbitrarily complex media. Numerical analysis of guided wave propagation, based on the newly developed hybrid approach, is presented and discussed in the paper for linear and nonlinear media. Comparisons to Finite Elements (FE) are also discussed.
Comparison of Factorization-Based Filtering for Landing Navigation
NASA Technical Reports Server (NTRS)
McCabe, James S.; Brown, Aaron J.; DeMars, Kyle J.; Carson, John M., III
2017-01-01
This paper develops and analyzes methods for fusing inertial navigation data with external data, such as data obtained from an altimeter and a star camera. The particular filtering techniques are based upon factorized forms of the Kalman filter, specifically the UDU and Cholesky factorizations. The factorized Kalman filters are utilized to ensure numerical stability of the navigation solution. Simulations are carried out to compare the performance of the different approaches along a lunar descent trajectory using inertial and external data sources. It is found that the factorized forms improve upon conventional filtering techniques in terms of ensuring numerical stability for the investigated landing navigation scenario.
Low-cost regeneration techniques for mixed-species management – 20 years later
Thomas A. Waldrop; Helen H. Mohr
2012-01-01
Four variations of the fell-and-burn technique, a low-cost regeneration system developed for pine-hardwood mixtures in the Southern Appalachian Mountains, were tested in the Piedmont of South Carolina. All variations successfully improved the commercial value of low-quality hardwood stands by introducing a pine component. After 20 years, pines were almost as numerous...
Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages.
Zhu, R; Zacharias, L; Wooding, K M; Peng, W; Mechref, Y
2017-01-01
Protein glycosylation is one of the most important posttranslational modifications. Numerous biological functions are related to protein glycosylation. However, analytical challenges remain in the glycoprotein analysis. To overcome the challenges associated with glycoprotein analysis, many analytical techniques were developed in recent years. Enrichment methods were used to improve the sensitivity of detection, while HPLC and mass spectrometry methods were developed to facilitate the separation of glycopeptides/proteins and enhance detection, respectively. Fragmentation techniques applied in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins, while automated software tools started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis were discussed. Multiple analytical techniques are compared, and advantages and disadvantages of each technique are highlighted. © 2017 Elsevier Inc. All rights reserved.
Regularization in Orbital Mechanics; Theory and Practice
NASA Astrophysics Data System (ADS)
Roa, Javier
2017-09-01
Regularized equations of motion can improve numerical integration for the propagation of orbits, and simplify the treatment of mission design problems. This monograph discusses standard techniques and recent research in the area. While each scheme is derived analytically, its accuracy is investigated numerically. Algebraic and topological aspects of the formulations are studied, as well as their application to practical scenarios such as spacecraft relative motion and new low-thrust trajectories.
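A classical example of the regularizations treated in this area is the Sundman time transformation (stated here in generic notation, not necessarily the monograph's):

```latex
% Replace physical time t by a fictitious time s through
\frac{dt}{ds} = r ,
% which concentrates integration steps near perigee; combined with a
% Levi-Civita (2D) or Kustaanheimo-Stiefel (3D) change of coordinates, the
% two-body problem becomes a linear harmonic oscillator in s, removing the
% collision singularity.
```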
Advantages of multigrid methods for certifying the accuracy of PDE modeling
NASA Technical Reports Server (NTRS)
Forester, C. K.
1981-01-01
Numerical techniques for assessing and certifying the accuracy of the modeling of partial differential equations (PDE) to the user's specifications are analyzed. Examples of the certification process with conventional techniques are summarized for the three-dimensional steady-state full potential and the two-dimensional steady Navier-Stokes equations using fixed-grid (FG) methods. The advantages of the Full Approximation Storage (FAS) scheme of the multigrid (MG) technique of A. Brandt, compared with the conventional certification process of modeling PDE, are illustrated in one dimension with the transformed potential equation. Inferences are drawn as to how MG will improve the certification process for the numerical modeling of two- and three-dimensional PDE systems. Elements of the error assessment process that are common to FG and MG are analyzed.
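For readers unfamiliar with FAS, its two-grid form for a (possibly nonlinear) discrete problem A_h(u_h) = f_h can be summarized as follows; the notation is generic rather than the report's:

```latex
% Coarse-grid (FAS) equation, with restriction I_h^H and prolongation I_H^h:
A_H(u_H) = A_H\!\left(I_h^H u_h\right) + I_h^H\!\left(f_h - A_h(u_h)\right),
% followed by the coarse-grid correction
u_h \leftarrow u_h + I_H^h\!\left(u_H - I_h^H u_h\right).
% The relative truncation error \tau_h^H = A_H(I_h^H u_h) - I_h^H A_h(u_h)
% embedded in this equation is what FAS exposes for accuracy assessment.
```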
A two-dimensional numerical simulation of a supersonic, chemically reacting mixing layer
NASA Technical Reports Server (NTRS)
Drummond, J. Philip
1988-01-01
Research has been undertaken to achieve an improved understanding of physical phenomena present when a supersonic flow undergoes chemical reaction. A detailed understanding of supersonic reacting flows is necessary to successfully develop advanced propulsion systems now planned for use late in this century and beyond. In order to explore such flows, a study was begun to create appropriate physical models for describing supersonic combustion, and to develop accurate and efficient numerical techniques for solving the governing equations that result from these models. From this work, two computer programs were written to study reacting flows. Both programs were constructed to consider the multicomponent diffusion and convection of important chemical species, the finite rate reaction of these species, and the resulting interaction of the fluid mechanics and the chemistry. The first program employed a finite difference scheme for integrating the governing equations, whereas the second used a hybrid Chebyshev pseudospectral technique for improved accuracy.
Research on regional numerical weather prediction
NASA Technical Reports Server (NTRS)
Kreitzberg, C. W.
1976-01-01
Extension of the predictive power of dynamic weather forecasting to scales below the conventional synoptic or cyclonic scales in the near future is assessed. Lower costs per computation, more powerful computers, and a 100 km mesh over the North American area (with coarser mesh extending beyond it) are noted at present. Doubling the resolution even locally (to 50 km mesh) would entail a 16-fold increase in costs (including vertical resolution and halving the time interval), and constraints on domain size and length of forecast. Boundary conditions would be provided by the surrounding 100 km mesh, and time-varying lateral boundary conditions can be considered to handle moving phenomena. More physical processes to treat, more efficient numerical techniques, and faster computers (improved software and hardware) backing up satellite and radar data could produce further improvements in forecasting in the 1980s. Boundary layer modeling, initialization techniques, and quantitative precipitation forecasting are singled out among key tasks.
The Effects of Think-Aloud in a Collaborative Environment to Improve Comprehension of L2 Texts
ERIC Educational Resources Information Center
Seng, Goh Hock
2007-01-01
Numerous studies have shown that thinking aloud while reading can be an effective instructional technique in helping students improve their reading comprehension. However, most of the studies that examined the effects of think-aloud involve subjects reading individually and carried out in isolation away from the classroom context. Recently,…
A general numerical model for wave rotor analysis
NASA Technical Reports Server (NTRS)
Paxson, Daniel W.
1992-01-01
Wave rotors represent one of the promising technologies for achieving very high core temperatures and pressures in future gas turbine engines. Their operation depends upon unsteady gas dynamics and as such, their analysis is quite difficult. This report describes a numerical model which has been developed to perform such an analysis. Following a brief introduction, a summary of the wave rotor concept is given. The governing equations are then presented, along with a summary of the assumptions used to obtain them. Next, the numerical integration technique is described. This is an explicit finite volume technique based on the method of Roe. The discussion then focuses on the implementation of appropriate boundary conditions. Following this, some results are presented which first compare the numerical approximation to the governing differential equations and then compare the overall model to an actual wave rotor experiment. Finally, some concluding remarks are presented concerning the limitations of the simplifying assumptions and areas where the model may be improved.
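The wave-rotor model integrates the full unsteady gas-dynamic system; as a compact stand-in for "an explicit finite volume technique based on the method of Roe", here is a sketch for the scalar Burgers equation (grid, CFL number and initial data are illustrative, and a production code would add an entropy fix near sonic points):

```python
import numpy as np

def roe_flux(ul, ur):
    """Roe (upwind) numerical flux for Burgers' equation f(u) = u^2/2."""
    f = lambda u: 0.5 * u * u
    a = 0.5 * (ul + ur)          # Roe-averaged wave speed; exact for Burgers
    return 0.5 * (f(ul) + f(ur)) - 0.5 * np.abs(a) * (ur - ul)

# Periodic domain with a sinusoidal initial condition that steepens into a shock.
n, cfl, t_end = 200, 0.4, 0.8
x = (np.arange(n) + 0.5) / n
u = 0.5 + np.sin(2.0 * np.pi * x)
dx = 1.0 / n

t = 0.0
while t < t_end:
    dt = min(cfl * dx / np.max(np.abs(u) + 1e-12), t_end - t)
    ul, ur = u, np.roll(u, -1)        # left/right states at interface i+1/2
    flux = roe_flux(ul, ur)           # flux[i] lives at interface i+1/2
    u = u - dt / dx * (flux - np.roll(flux, 1))
    t += dt

print("final min/max of u:", u.min(), u.max())
```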
PSH Transient Simulation Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muljadi, Eduard
PSH Transient Simulation Modeling presentation from the WPTO FY14 - FY16 Peer Review. Transient effects are an important consideration when designing a PSH system, yet numerical techniques for hydraulic transient analysis still need improvements for adjustable-speed (AS) reversible pump-turbine applications.
An unconditionally stable method for numerically solving solar sail spacecraft equations of motion
NASA Astrophysics Data System (ADS)
Karwas, Alex
Solar sails use the endless supply of the Sun's radiation to propel spacecraft through space. The sails use the momentum transfer from the impinging solar radiation to provide thrust to the spacecraft while expending zero fuel. Recently, the first solar sail spacecraft, or sailcraft, named IKAROS completed a successful mission to Venus and proved the concept of solar sail propulsion. Sailcraft experimental data are difficult to gather because of the large expense of space travel; therefore, a reliable and accurate computational method is needed to make the process more efficient. Presented in this document is a new approach to simulating solar sail spacecraft trajectories. The new method provides unconditionally stable numerical solutions for trajectory propagation and includes an improved physical description over other methods. The unconditional stability of the new method means that a unique numerical solution is always determined. The improved physical description of the trajectory provides a numerical solution and time derivatives that are continuous throughout the entire trajectory. The error of the continuous numerical solution is also known for the entire trajectory. Optimal control for maximizing thrust is also provided within the framework of the new method. Verification of the new approach is presented through a mathematical description and through numerical simulations. The mathematical description provides details of the sailcraft equations of motion, the numerical method used to solve the equations, and the formulation for implementing the equations of motion into the numerical solver. Previous work in the field is summarized to show that the new approach can act as a replacement for previous trajectory propagation methods. A code was developed to perform the simulations, and it is also described in this document. Results of the simulations are compared to the flight data from the IKAROS mission. Comparison of the two sets of data shows that the new approach is capable of accurately simulating sailcraft motion. Sailcraft and spacecraft simulations are compared to flight data and to other numerical solution techniques. The new formulation shows an increase in accuracy over a widely used trajectory propagation technique. Simulations for two-dimensional, three-dimensional, and variable-attitude trajectories are presented to show the multiple capabilities of the new technique. An element of optimal control is also part of the new technique. An additional equation is added to the sailcraft equations of motion that maximizes thrust in a specific direction. A technical description and results of an example optimization problem are presented. The spacecraft attitude dynamics equations take the simulation a step further by providing control torques using the angular rate and acceleration outputs of the numerical formulation.
Improved importance sampling technique for efficient simulation of digital communication systems
NASA Technical Reports Server (NTRS)
Lu, Dingqing; Yao, Kung
1988-01-01
A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evolutions of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evolutions are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communications systems.
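The CIS/IIS derivations are not reproduced in the abstract; the sketch below only illustrates the underlying idea of translating the sampling density so that rare errors become frequent, for a toy problem of estimating a small Gaussian tail probability (threshold, sample size and the unit-variance shift are illustrative assumptions, not the paper's optimized parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate P(N > gamma) for N ~ Normal(0, 1); for gamma = 4 the true value is
# about 3.17e-5, so plain Monte Carlo needs very many samples.
gamma, n = 4.0, 200_000

# Plain Monte Carlo.
x_mc = rng.standard_normal(n)
p_mc = np.mean(x_mc > gamma)

# Importance sampling with the density translated to the threshold
# (a common choice; the paper's IIS optimizes this translation parameter).
x_is = rng.standard_normal(n) + gamma
weights = np.exp(-gamma * x_is + 0.5 * gamma**2)   # p(x)/q(x) for a unit-variance shift
ind = (x_is > gamma).astype(float)
p_is = np.mean(ind * weights)
std_err = np.sqrt(np.var(ind * weights) / n)

print(f"Monte Carlo estimate     : {p_mc:.3e}")
print(f"Importance sampling est. : {p_is:.3e}  (std err ~ {std_err:.1e})")
```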
NASA Astrophysics Data System (ADS)
Pan, Min-Chun; Liao, Shiu-Wei; Chiu, Chun-Chin
2007-02-01
The waveform-reconstruction schemes of order tracking (OT), such as the Gabor and the Vold-Kalman filtering (VKF) techniques, can extract specific order and/or spectral components in addition to characterizing the processed signal in the rpm-frequency domain. The study first improves the Gabor OT (GOT) technique to handle the order-crossing problem, and then objectively compares the features of the improved GOT scheme and the angular-displacement VKF_OT technique. It is numerically observed that the improved method performs less accurately than the VKF_OT scheme at the crossing occurrences, but without end effects in the reconstructed waveform. As OT is not an exact science, it may well be that the decrease in computation time can justify the reduced accuracy. The characterization and discrimination of riding noise with crossing orders emitted by an electrical scooter are conducted as an example of the application.
Explicit evaluation of discontinuities in 2-D unsteady flows solved by the method of characteristics
NASA Astrophysics Data System (ADS)
Osnaghi, C.
When shock waves appear in the numerical solution of flows, a choice is necessary between shock-capturing techniques, possible when the equations are written in conservative form, and shock-fitting techniques. If the second is preferred, e.g. in order to obtain a better definition and a more physical description of the shock evolution in time, the method of characteristics is advantageous in the vicinity of the shock, and it seems natural to use this method everywhere. This choice requires improving the efficiency of the numerical scheme in order to produce competitive codes while preserving accuracy and flexibility, which are intrinsic features of the method: this is the goal of the present work.
Enhanced Resolution for Aquarius Salinity Retrieval near Land-Water Boundaries
NASA Technical Reports Server (NTRS)
Utku, Cuneyt; Le Vine, David M.
2014-01-01
A numerical reconstruction of the brightness temperature is examined as a potential way to improve the retrieval of salinity from Aquarius measurements closer to land-water boundaries. A test case using simulated ocean-land scenes suggests promise for the technique.
A numerical analysis of the aortic blood flow pattern during pulsed cardiopulmonary bypass.
Gramigna, V; Caruso, M V; Rossi, M; Serraino, G F; Renzulli, A; Fragomeni, G
2015-01-01
In the modern era, stroke remains a main cause of morbidity after cardiac surgery despite continuing improvements in cardiopulmonary bypass (CPB) techniques. The aim of the current work was to numerically investigate the blood flow in the aorta and epiaortic vessels during standard and pulsed CPB, obtained with the intra-aortic balloon pump (IABP). A multi-scale model, realized by coupling a 3D computational fluid dynamics study with a 0D model, was developed and validated with in vivo data. The presence of the IABP improved the flow pattern directed towards the epiaortic vessels, with a mean flow increase of 6.3%, and reduced flow vorticity.
NASA Technical Reports Server (NTRS)
Lundberg, J. B.; Feulner, M. R.; Abusali, P. A. M.; Ho, C. S.
1991-01-01
The method of modified back differences, a technique that significantly reduces the numerical integration errors associated with crossing shadow boundaries using a fixed-mesh multistep integrator without a significant increase in computer run time, is presented. While Hubbard's integral approach can produce significant improvements to the trajectory solution, the interpolation method provides the best overall results. It is demonstrated that iterating on the point mass term correction is also important for achieving the best overall results. It is also shown that the method of modified back differences can be implemented with only a small increase in execution time.
Evaluation and nonsurgical management of rotator cuff calcific tendinopathy.
Greis, Ari C; Derrington, Stephen M; McAuliffe, Matthew
2015-04-01
Rotator cuff calcific tendinopathy is a common finding that accounts for about 7% of patients with shoulder pain. There are numerous theories on the pathogenesis of rotator cuff calcific tendinopathy. The diagnosis is confirmed with radiography, MRI or ultrasound. There are numerous conservative treatment options available and most patients can be managed successfully without surgical intervention. Nonsteroidal anti-inflammatory drugs and multiple modalities are often used to manage pain and inflammation; physical therapy can help improve scapular mechanics and decrease dynamic impingement; ultrasound-guided needle aspiration and lavage techniques can provide long-term improvement in pain and function in these patients. Copyright © 2015 Elsevier Inc. All rights reserved.
Three-axis digital holographic microscopy for high speed volumetric imaging.
Saglimbeni, F; Bianchi, S; Lepore, A; Di Leonardo, R
2014-06-02
Digital Holographic Microscopy allows to numerically retrieve three dimensional information encoded in a single 2D snapshot of the coherent superposition of a reference and a scattered beam. Since no mechanical scans are involved, holographic techniques have a superior performance in terms of achievable frame rates. Unfortunately, numerical reconstructions of scattered field by back-propagation leads to a poor axial resolution. Here we show that overlapping the three numerical reconstructions obtained by tilted red, green and blue beams results in a great improvement over the axial resolution and sectioning capabilities of holographic microscopy. A strong reduction in the coherent background noise is also observed when combining the volumetric reconstructions of the light fields at the three different wavelengths. We discuss the performance of our technique with two test objects: an array of four glass beads that are stacked along the optical axis and a freely diffusing rod shaped E.coli bacterium.
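The paper's three-wavelength reconstruction is not reproduced here; the sketch below shows only the single-wavelength angular-spectrum back-propagation step that such reconstructions build on (the grid spacing, wavelength and distance are illustrative, and the all-ones "hologram" is a placeholder for a measured complex field):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field by a distance z (negative z back-propagates)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    k = 2.0 * np.pi / wavelength
    kz_sq = k**2 - (2.0 * np.pi * FX)**2 - (2.0 * np.pi * FY)**2
    # Keep only propagating components; evanescent waves are discarded.
    prop = np.where(kz_sq > 0.0, np.exp(1j * np.sqrt(np.abs(kz_sq)) * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * prop)

# Illustrative use: refocus a recorded hologram-plane field back by 50 micrometres.
hologram = np.ones((256, 256), dtype=complex)      # placeholder for a measured field
refocused = angular_spectrum_propagate(hologram, wavelength=0.532e-6, dx=0.2e-6, z=-50e-6)
print(np.abs(refocused).max())
```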
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pomeroy, J. W., E-mail: James.Pomeroy@Bristol.ac.uk; Kuball, M.
2015-10-14
Solid immersion lenses (SILs) are shown to greatly enhance optical spatial resolution when measuring AlGaN/GaN High Electron Mobility Transistors (HEMTs), taking advantage of the high refractive index of the SiC substrates commonly used for these devices. Solid immersion lenses can be applied to techniques such as electroluminescence emission microscopy and Raman thermography, aiding the development of device physics models. Focused ion beam milling is used to fabricate solid immersion lenses in SiC substrates with a numerical aperture of 1.3. A lateral spatial resolution of 300 nm is demonstrated at an emission wavelength of 700 nm, and an axial spatial resolution of 1.7 ± 0.3 μm at a laser wavelength of 532 nm is demonstrated; this is an improvement of 2.5× and 5×, respectively, when compared with a conventional 0.5 numerical aperture objective lens without a SIL. These results highlight the benefit of applying the solid immersion lens technique to the optical characterization of GaN HEMTs. Further improvements may be gained through aberration compensation and increasing the SIL numerical aperture.
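As a quick consistency check, assuming the standard Rayleigh criterion (which the abstract does not state explicitly):

```latex
r \approx \frac{0.61\,\lambda}{\mathrm{NA}}:\qquad
\frac{0.61 \times 700\ \mathrm{nm}}{1.3} \approx 330\ \mathrm{nm},\qquad
\frac{0.61 \times 700\ \mathrm{nm}}{0.5} \approx 850\ \mathrm{nm},\qquad
\frac{850\ \mathrm{nm}}{330\ \mathrm{nm}} \approx 2.6
```

which is consistent with the reported ~300 nm lateral resolution and ~2.5× improvement over the 0.5 NA objective.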
Suture Products and Techniques: What to Use, Where, and Why.
Regula, Christie G; Yag-Howard, Cyndi
2015-10-01
There are an increasing number of wound closure materials and suturing techniques described in the dermatologic and surgery literature. A dermatologic surgeon's familiarity with these materials and techniques is important to supplement his or her already established practices and improve surgical outcomes. To perform a thorough literature review of wound closure materials (sutures, tissue adhesives, surgical tape, and staples) and suturing techniques and to outline how and when to use them. A literature review was conducted using PubMed and other online search engines. Keywords searched included suture, tissue adhesive, tissue glue, surgical tape, staples, dermatologic suturing, and suturing techniques. Numerous articles outline the utility of various sutures, surgical adhesives, surgical tape, and staples in dermatologic surgery. In addition, there are various articles describing classic and novel suturing techniques along with their specific uses in cutaneous surgery. Numerous factors must be considered when choosing a wound closure material and suturing technique. These include wound tension, desire for wound edge eversion/inversion, desired hemostasis, repair type, patient's ability to care for the wound and return for suture removal, skin integrity, and wound location. Careful consideration of these factors and proper execution of suturing techniques can lead to excellent cosmetic results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ho-Young; Kang, In Man, E-mail: imkang@ee.knu.ac.kr; Shon, Chae-Hwa
2015-05-07
A variable inductor with magnetorheological (MR) fluid has been successfully applied to power electronics applications; however, its thermal characteristics have not been investigated. To evaluate the performance of the variable inductor with respect to temperature, we measured the characteristics of temperature rise and developed a numerical analysis technique. The characteristics of temperature rise were determined experimentally and verified numerically by adopting a multiphysics analysis technique. In order to accurately estimate the temperature distribution in a variable inductor with an MR fluid-gap, the thermal solver should import the heat source from the electromagnetic solver to solve the eddy current problem. To improve accuracy, the B–H curves of the MR fluid under operating temperature were obtained using the magnetic property measurement system. In addition, the Steinmetz equation was applied to evaluate the core loss in a ferrite core. The predicted temperature rise for a variable inductor showed good agreement with the experimental data, and the developed numerical technique can be employed to design a variable inductor with a high-frequency pulsed voltage source.
Lepper, Paul A; D'Spain, Gerald L
2007-08-01
The performance of traditional techniques of passive localization in ocean acoustics such as time-of-arrival (phase differences) and amplitude ratios measured by multiple receivers may be degraded when the receivers are placed on an underwater vehicle due to effects of scattering. However, knowledge of the interference pattern caused by scattering provides a potential enhancement to traditional source localization techniques. Results based on a study using data from a multi-element receiving array mounted on the inner shroud of an autonomous underwater vehicle show that scattering causes the localization ambiguities (side lobes) to decrease in overall level and to move closer to the true source location, thereby improving localization performance, for signals in the frequency band 2-8 kHz. These measurements are compared with numerical modeling results from a two-dimensional time domain finite difference scheme for scattering from two fluid-loaded cylindrical shells. Measured and numerically modeled results are presented for multiple source aspect angles and frequencies. Matched field processing techniques quantify the source localization capabilities for both measurements and numerical modeling output.
Applications of numerical optimization methods to helicopter design problems: A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
A survey of applications of mathematical programming methods used to improve the design of helicopters and their components is presented. Applications of multivariable search techniques in the finite dimensional space are considered. Five categories of helicopter design problems are considered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
Applications of numerical optimization methods to helicopter design problems - A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1985-01-01
A survey of applications of mathematical programming methods used to improve the design of helicopters and their components is presented. Applications of multivariable search techniques in the finite dimensional space are considered. Five categories of helicopter design problems are considered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
Applications of numerical optimization methods to helicopter design problems - A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
A survey of applications of mathematical programming methods used to improve the design of helicopters and their components is presented. Applications of multivariable search techniques in the finite dimensional space are considered. Five categories of helicopter design problems are considered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
High-spatial-resolution passive microwave sounding systems
NASA Technical Reports Server (NTRS)
Staelin, D. H.; Rosenkranz, P. W.
1994-01-01
The principal contributions of this combined theoretical and experimental effort were to advance and demonstrate new and more accurate techniques for sounding atmospheric temperature, humidity, and precipitation profiles at millimeter wavelengths, and to improve the scientific basis for such soundings. Some of these techniques are being incorporated in both research and operational systems. Specific results include: (1) development of the MIT Microwave Temperature Sounder (MTS), a 118-GHz eight-channel imaging spectrometer plus a switched-frequency spectrometer near 53 GHz, for use on the NASA ER-2 high-altitude aircraft, (2) conduct of ER-2 MTS missions in multiple seasons and locations in combination with other instruments, mapping with unprecedented approximately 2-km lateral resolution atmospheric temperature and precipitation profiles, atmospheric transmittances (at both zenith and nadir), frontal systems, and hurricanes, (3) ground based 118-GHz 3-D spectral images of wavelike structure within clouds passing overhead, (4) development and analysis of approaches to ground- and space-based 5-mm wavelength sounding of the upper stratosphere and mesosphere, which supported the planning of improvements to operational weather satellites, (5) development of improved multidimensional and adaptive retrieval methods for atmospheric temperature and humidity profiles, (6) development of combined nonlinear and statistical retrieval techniques for 183-GHz humidity profile retrievals, (7) development of nonlinear statistical retrieval techniques for precipitation cell-top altitudes, and (8) numerical analyses of the impact of remote sensing data on the accuracy of numerical weather predictions; a 68-km gridded model was used to study the spectral properties of error growth.
Dictionary-based image reconstruction for superresolution in integrated circuit imaging.
Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim
2015-06-01
Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.
NASA Astrophysics Data System (ADS)
Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah
2018-04-01
This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using Dykstra-Parson coefficient (V DP) and autocorrelation lengths to generate 2D stochastic permeability values which were also used to generate porosity fields through a linear interpolation technique based on Carman-Kozeny equation. The proposed method of permeability field generation in this study was compared to turning bands method (TBM) and uniform sampling randomization method (USRM). On the other hand, many studies have also reported that, upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs high-resolution schemes of SUPERBEE flux limiter, weighted essentially non-oscillatory scheme (WENO), and monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order schemes results match well with Buckley Leverett (BL) analytical solution without any non-oscillatory solutions. The governing fluid flow equations were solved numerically using simultaneous solution (SS) technique, sequential solution (SEQ) technique and iterative implicit pressure and explicit saturation (IMPES) technique which produce acceptable numerical stability and convergence rate. A comparative and numerical examples study of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport were analyzed and quantified. A finite number of lines used in the TBM resulted into visual artifact banding phenomenon unlike the proposed method and USRM. In all, the proposed permeability and porosity fields generation coupled with the numerical simulator developed will aid in developing efficient mobility control schemes to improve on poor volumetric sweep efficiency in porous media.
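The abstract does not give the field-generation details; the sketch below shows one common way to realize a log-normal permeability field with a prescribed Dykstra-Parsons coefficient and correlation length (Gaussian-filtered white noise), with a crude porosity mapping standing in for the paper's Carman-Kozeny-based interpolation. The smoothing model, mean permeability and porosity end-members are assumptions, not the authors' procedure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

def permeability_field(nx, ny, v_dp, corr_cells, k_mean=100.0):
    """2-D log-normal permeability field [mD] with Dykstra-Parsons coefficient v_dp."""
    sigma_lnk = -np.log(1.0 - v_dp)        # V_DP = 1 - exp(-sigma) for a log-normal field
    z = gaussian_filter(rng.standard_normal((ny, nx)), corr_cells)
    z = (z - z.mean()) / z.std()            # re-normalize after smoothing
    return k_mean * np.exp(sigma_lnk * z - 0.5 * sigma_lnk**2)   # mean-preserving

k = permeability_field(nx=100, ny=50, v_dp=0.7, corr_cells=5)

# Crude porosity mapping: linear interpolation between assumed end-member porosities,
# in the spirit of a Carman-Kozeny-type k-phi relation.
phi = np.interp(k, (k.min(), k.max()), (0.10, 0.30))

print("V_DP target 0.70, realized:", 1.0 - np.exp(-np.log(k).std()),
      "| porosity range:", phi.min(), phi.max())
```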
NASA Astrophysics Data System (ADS)
Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei
2016-03-01
In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.
UDU(T) covariance factorization for Kalman filtering
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1980-01-01
There has been strong motivation to produce numerically stable formulations of the Kalman filter algorithms because it has long been known that the original discrete-time Kalman formulas are numerically unreliable. Numerical instability can be avoided by propagating certain factors of the estimate error covariance matrix rather than the covariance matrix itself. This paper documents filter algorithms that correspond to the covariance factorization P = UDU(T), where U is a unit upper triangular matrix and D is diagonal. Emphasis is on computational efficiency and numerical stability, since these properties are of key importance in real-time filter applications. The history of square-root and U-D covariance filters is reviewed. Simple examples are given to illustrate the numerical inadequacy of the Kalman covariance filter algorithms; these examples show how factorization techniques can give improved computational reliability.
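The filter time- and measurement-update recursions are too long for a sketch, but the factorization itself is compact. Below is a minimal routine (not Bierman's in-place production code) that computes P = U D U(T) for a symmetric positive-definite covariance and checks it numerically:

```python
import numpy as np

def udu_factorize(P):
    """Return unit upper-triangular U and diagonal d with P = U @ diag(d) @ U.T."""
    P = P.copy().astype(float)
    n = P.shape[0]
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, -1, -1):
        d[j] = P[j, j]
        U[:j, j] = P[:j, j] / d[j]
        # Remove the rank-one contribution of column j from the leading block.
        P[:j, :j] -= d[j] * np.outer(U[:j, j], U[:j, j])
    return U, d

# Quick check on a random symmetric positive-definite covariance matrix.
rng = np.random.default_rng(42)
A = rng.standard_normal((5, 5))
P = A @ A.T + 5.0 * np.eye(5)
U, d = udu_factorize(P)
print("factorization error:", np.max(np.abs(U @ np.diag(d) @ U.T - P)))
```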
Numerical Simulation of Non-Thermal Food Preservation
NASA Astrophysics Data System (ADS)
Rauh, C.; Krauss, J.; Ertunc, Ö.; Delgado, a.
2010-09-01
Food preservation is an important process step in food technology regarding product safety and product quality. Novel preservation techniques are currently being developed that aim at improved sensory and nutritional value with safety comparable to conventional thermal preservation techniques. These novel non-thermal food preservation techniques are based, for example, on high pressures up to one GPa or on pulsed electric fields. Literature studies show the high potential of high pressures (HP) and of pulsed electric fields (PEF) due to their high retention of valuable food components such as vitamins and flavour and their selective inactivation of spoiling enzymes and microorganisms. For the design of preservation processes based on the non-thermal techniques it is crucial to predict the effect of high pressure and pulsed electric fields on the food components and on the spoiling enzymes and microorganisms, locally and time-dependently, in the treated product. Homogeneous process conditions (especially of temperature fields in HP and PEF processing and of electric fields in PEF) are aimed at to avoid the need for over-processing and the connected quality loss, and to minimize safety risks due to under-processing. The present contribution presents numerical simulations of thermofluiddynamical phenomena inside high pressure autoclaves and pulsed electric field treatment chambers. In PEF processing, the electric fields are additionally considered. Implementing kinetics of the occurring (bio-)chemical reactions in the numerical simulations of the temperature, flow and electric fields enables the evaluation of the process homogeneity and efficiency connected to different process parameters of the preservation techniques. Suggestions for achieving safe and high-quality products are drawn from the numerical results.
Improving "lab-on-a-chip" techniques using biomedical nanotechnology: a review.
Gorjikhah, Fatemeh; Davaran, Soodabeh; Salehi, Roya; Bakhtiari, Mohsen; Hasanzadeh, Arash; Panahi, Yunes; Emamverdy, Masumeh; Akbarzadeh, Abolfazl
2016-11-01
Nanotechnology and its applications in biomedical sciences, principally in molecular nanodiagnostics, are known as nanomolecular diagnostics, which provides new options for clinical nanodiagnostic techniques. Molecular nanodiagnostics play a critical role in the development of personalized medicine, which features point-of-care performance of diagnostic procedures. This makes it possible to check patients at point-of-care facilities or in remote or resource-poor locations, thereby reducing checking time from days to minutes. In this review, applications of nanotechnology suited to biomedicine are discussed in two main classes: biomedical applications for use inside the body (such as drugs, diagnostic techniques, prostheses, and implants) and outside the body (such as "lab-on-a-chip" techniques). A lab-on-a-chip (LOC) is a tool that incorporates numerous laboratory tasks onto a small device, usually only millimeters or centimeters in size. Finally, the applications of biomedical nanotechnology in improving "lab-on-a-chip" techniques are discussed.
Tu, Jia-Ying; Hsiao, Wei-De; Chen, Chih-Ying
2014-01-01
Testing techniques of dynamically substructured systems dissects an entire engineering system into parts. Components can be tested via numerical simulation or physical experiments and run synchronously. Additional actuator systems, which interface numerical and physical parts, are required within the physical substructure. A high-quality controller, which is designed to cancel unwanted dynamics introduced by the actuators, is important in order to synchronize the numerical and physical outputs and ensure successful tests. An adaptive forward prediction (AFP) algorithm based on delay compensation concepts has been proposed to deal with substructuring control issues. Although the settling performance and numerical conditions of the AFP controller are improved using new direct-compensation and singular value decomposition methods, the experimental results show that a linear dynamics-based controller still outperforms the AFP controller. Based on experimental observations, the least-squares fitting technique, effectiveness of the AFP compensation and differences between delay and ordinary differential equations are discussed herein, in order to reflect the fundamental issues of actuator modelling in relevant literature and, more specifically, to show that the actuator and numerical substructure are heterogeneous dynamic components and should not be collectively modelled as a homogeneous delay differential equation. PMID:25104902
Control of Flow Structure in Square Cross-Sectioned U Bend using Numerical Modeling
NASA Astrophysics Data System (ADS)
Yavuz, Mehmet Metin; Guden, Yigitcan
2014-11-01
Due to the curvature in U-bends, the flow development involves complex flow structures including Dean vortices and high levels of turbulence that are quite critical in considering noise problems and structural failure of the ducts. Computational fluid dynamic (CFD) models are developed using ANSYS Fluent to analyze and to control the flow structure in a square cross-sectioned U-bend with a radius of curvature Rc/D = 0.65. The predictions of velocity profiles on different angular positions of the U-bend are compared against the experimental results available in the literature and the previous numerical studies. The performances of different turbulence models are evaluated to propose the best numerical approach that has high accuracy with reduced computation time. The numerical results of the present study indicate improvements with respect to the previous numerical predictions and very good agreement with the available experimental results. In addition, a flow control technique is utilized to regulate the flow inside the bend. The elimination of Dean vortices along with significant reduction in turbulence levels in different cross flow planes are successfully achieved when the flow control technique is applied. The project is supported by Meteksan Defense Industries, Inc.
Communication Avoiding and Overlapping for Numerical Linear Algebra
2012-05-08
To scale linear algebra problems to future exascale systems, communication cost must be avoided or overlapped, since the cost of communication will continue to grow relative to the cost of computation. Communication-avoiding 2.5D algorithms improve scalability by reducing communication. With exascale computing as the long-term goal, the community needs to develop techniques for avoiding and overlapping communication.
Statistical Mechanics and Dynamics of the Outer Solar System.I. The Jupiter/Saturn Zone
NASA Technical Reports Server (NTRS)
Grazier, K. R.; Newman, W. I.; Kaula, W. M.; Hyman, J. M.
1996-01-01
We report on numerical simulations designed to understand how the solar system evolved through a winnowing of planetesimals accreted from the early solar nebula. This sorting process is driven by energy and angular momentum and continues to the present day. We reconsider the existence and importance of stable niches in the Jupiter/Saturn Zone using greatly improved numerical techniques based on high-order optimized multi-step integration schemes coupled to roundoff error minimizing methods.
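The paper's specific roundoff-minimizing methods are not described in the abstract; one standard device in long orbital integrations is compensated (Kahan) summation of the small per-step increments, sketched here in single precision to make the effect visible:

```python
import numpy as np

def kahan_sum(values, dtype=np.float32):
    """Compensated (Kahan) summation carried out entirely in the given precision."""
    total = dtype(0.0)
    c = dtype(0.0)
    for v in values:
        y = dtype(v) - c
        t = total + y
        c = (t - total) - y        # rounding error lost in the last addition
        total = t
    return total

# One million identical small steps; single precision makes the drift visible.
steps = np.full(10**6, 0.1, dtype=np.float32)

naive = np.float32(0.0)
for v in steps:
    naive += v

print("naive float32 sum      :", float(naive))            # drifts away from 100000
print("compensated float32 sum:", float(kahan_sum(steps)))
print("reference (float64)    :", float(np.sum(steps, dtype=np.float64)))
```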
NASA Astrophysics Data System (ADS)
Yu, Long; Druckenbrod, Markus; Greve, Martin; Wang, Ke-qi; Abdel-Maksoud, Moustafa
2015-10-01
A fully automated optimization process is provided for the design of ducted propellers under open water conditions, including 3D geometry modeling, meshing, optimization algorithms and CFD analysis techniques. The developed process allows the direct integration of a RANSE solver in the design stage. A practical ducted propeller design case study is carried out for validation. Numerical simulations and open water tests were performed and proved that the optimum ducted propeller improves hydrodynamic performance as predicted.
Evaluation of constraint stabilization procedures for multibody dynamical systems
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.
1987-01-01
Comparative numerical studies of four constraint treatment techniques for the simulation of general multibody dynamic systems are presented, and results are presented for the example of a classical crank mechanism and for a simplified version of the seven-link manipulator deployment problem. The staggered stabilization technique (Park, 1986) is found to yield improved accuracy and robustness over Baumgarte's (1972) technique, the singular decomposition technique (Walton and Steeves, 1969), and the penalty technique (Lotstedt, 1979). Furthermore, the staggered stabilization technique offers software modularity, and the only data each solution module needs to exchange with the other is a set of vectors plus a common module to generate the gradient matrix of the constraints, B.
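The four techniques compared in the paper are not reproduced here; as a point of reference, below is a minimal sketch of the baseline Baumgarte approach for a planar pendulum written as a constrained system, with the usual 2*alpha*Cdot + beta^2*C feedback terms (gains, integrator settings and initial conditions are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, g, L = 1.0, 9.81, 1.0
alpha, beta = 5.0, 5.0             # Baumgarte feedback gains (illustrative)

def rhs(t, s):
    x, y, vx, vy = s
    C = x * x + y * y - L * L                  # holonomic constraint C = 0
    Cdot = 2.0 * (x * vx + y * vy)
    # Constraint force lambda * grad(C), with lambda chosen so that
    # Cddot + 2*alpha*Cdot + beta**2 * C = 0 (Baumgarte stabilization).
    lam = m * (2.0 * g * y - 2.0 * (vx * vx + vy * vy)
               - 2.0 * alpha * Cdot - beta**2 * C) / (4.0 * (x * x + y * y))
    ax = 2.0 * lam * x / m
    ay = 2.0 * lam * y / m - g
    return [vx, vy, ax, ay]

s0 = [L, 0.0, 0.0, 0.0]                        # start horizontal, at rest
sol = solve_ivp(rhs, (0.0, 10.0), s0, max_step=1e-2, rtol=1e-8)
drift = np.abs(sol.y[0]**2 + sol.y[1]**2 - L**2)
print("max |C| over 10 s:", drift.max())
```

The printed constraint drift stays small because the feedback terms pull the trajectory back onto the constraint manifold; choosing alpha and beta is the well-known practical drawback of this baseline.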
Magnetic Field Applications in Semiconductor Crystal Growth and Metallurgy
NASA Technical Reports Server (NTRS)
Mazuruk, Konstantin; Ramachandran, Narayanan; Grugel, Richard; Curreri, Peter A. (Technical Monitor)
2002-01-01
The Traveling Magnetic Field (TMF) technique, recently proposed to control meridional flow in electrically conducting melts, is reviewed. In particular, the natural convection damping capability of this technique has been numerically demonstrated, with the implication of significantly improving crystal quality. Advantages of the traveling magnetic field, in comparison to the more mature rotating magnetic field method, are discussed. Finally, results of experiments with mixing metallic alloys in long ampoules using TMF are presented.
Lunar surface chemistry: A new imaging technique
Andre, C.G.; Bielefeld, M.J.; Eliason, E.; Soderblom, L.A.; Adler, I.; Philpotts, J.A.
1977-01-01
Detailed chemical maps of the lunar surface have been constructed by applying a new weighted-filter imaging technique to Apollo 15 and Apollo 16 x-ray fluorescence data. The data quality improvement is amply demonstrated by (i) modes in the frequency distribution, representing highland and mare soil suites, which are not evident before data filtering and (ii) numerous examples of chemical variations which are correlated with small-scale (about 15 kilometer) lunar topographic features.
Lunar surface chemistry - A new imaging technique
NASA Technical Reports Server (NTRS)
Andre, C. G.; Adler, I.; Bielefeld, M. J.; Eliason, E.; Soderblom, L. A.; Philpotts, J. A.
1977-01-01
Detailed chemical maps of the lunar surface have been constructed by applying a new weighted-filter imaging technique to Apollo 15 and Apollo 16 X-ray fluorescence data. The data quality improvement is amply demonstrated by (1) modes in the frequency distribution, representing highland and mare soil suites, which are not evident before data filtering, and (2) numerous examples of chemical variations which are correlated with small-scale (about 15 kilometer) lunar topographic features.
NASA Technical Reports Server (NTRS)
Luke, K. L.; Cheng, L.-J.
1986-01-01
Heavily doped emitter and junction regions of silicon solar cells are investigated by means of the electron-beam-induced-current (EBIC) technique. Although the experimental EBIC data are collected under three-dimensional conditions, it is analytically demonstrated with two numerical examples that the solutions obtained with one-dimensional numerical modeling are adequate. EBIC data for bare and oxide-covered emitter surfaces are compared with theory. The improvement in collection efficiency when an emitter surface is covered with a 100-Å SiO2 film varies with beam energy; for a cell with a junction depth of 0.35 microns, the improvement is about 54 percent at 2 keV.
A Pressure Plate-Based Method for the Automatic Assessment of Foot Strike Patterns During Running.
Santuz, Alessandro; Ekizos, Antonis; Arampatzis, Adamantios
2016-05-01
The foot strike pattern (FSP, description of how the foot touches the ground at impact) is recognized to be a predictor of both performance and injury risk. The objective of the current investigation was to validate an original foot strike pattern assessment technique based on the numerical analysis of foot pressure distribution. We analyzed the strike patterns during running of 145 healthy men and women (85 male, 60 female). The participants ran on a treadmill with integrated pressure plate at three different speeds: preferred (shod and barefoot 2.8 ± 0.4 m/s), faster (shod 3.5 ± 0.6 m/s) and slower (shod 2.3 ± 0.3 m/s). A custom-designed algorithm allowed the automatic footprint recognition and FSP evaluation. Incomplete footprints were simultaneously identified and corrected from the software itself. The widely used technique of analyzing high-speed video recordings was checked for its reliability and has been used to validate the numerical technique. The automatic numerical approach showed a good conformity with the reference video-based technique (ICC = 0.93, p < 0.01). The great improvement in data throughput and the increased completeness of results allow the use of this software as a powerful feedback tool in a simple experimental setup.
Refined numerical solution of the transonic flow past a wedge
NASA Technical Reports Server (NTRS)
Liang, S.-M.; Fung, K.-Y.
1985-01-01
A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Chou, Shih-Hung; Jedlovec, Gary
2012-01-01
Improvements to global and regional numerical weather prediction (NWP) have been demonstrated through assimilation of data from NASA's Atmospheric Infrared Sounder (AIRS). Current operational data assimilation systems use AIRS radiances, but the impact on regional forecasts has been much smaller than for global forecasts. Retrieved profiles from AIRS contain much of the information that is contained in the radiances and may be able to reveal reasons for this reduced impact. Assimilating AIRS retrieved profiles in an identical analysis configuration to the radiances, tracking the quantity and quality of the assimilated data in each technique, and examining analysis increments and forecast impact from each data type can yield clues as to the reasons for the reduced impact. By doing this with regional scale models, individual synoptic features (and the impact of AIRS on these features) can be more easily tracked. This project examines the assimilation of hyperspectral sounder data used in operational numerical weather prediction by comparing operational techniques used for AIRS radiances and research techniques used for AIRS retrieved profiles. Parallel versions of a configuration of the Weather Research and Forecasting (WRF) model with Gridpoint Statistical Interpolation (GSI) that mimics the analysis methodology, domain, and observational datasets of the regional North American Mesoscale (NAM) model run at the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center (EMC) are run to examine the impact of each type of AIRS data set. The first configuration will assimilate the AIRS radiance data along with other conventional and satellite data using techniques implemented within the operational system; the second configuration will assimilate AIRS retrieved profiles instead of AIRS radiances in the same manner. Preliminary results of this study will be presented and will focus on the analysis impact of the radiances and profiles for selected cases.
NASA Astrophysics Data System (ADS)
Mucha, Waldemar; Kuś, Wacław
2018-01-01
The paper presents a practical implementation of hybrid simulation using the Real Time Finite Element Method (RTFEM). Hybrid simulation is a technique for investigating dynamic material and structural properties of mechanical systems by performing numerical analysis and experiment at the same time. It applies to mechanical systems with elements too difficult or impossible to model numerically. These elements are tested experimentally, while the rest of the system is simulated numerically. Data between the experiment and numerical simulation are exchanged in real time. The authors use the Finite Element Method to perform the numerical simulation. The following paper presents the general algorithm for hybrid simulation using RTFEM and possible improvements of the algorithm for computation time reduction developed by the authors. The paper focuses on the practical implementation of the presented methods, which involves testing of a mountain bicycle frame, where the shock absorber is tested experimentally while the rest of the frame is simulated numerically.
Ultrasound-guided piriformis muscle injection. A new approach.
Bevilacqua Alén, E; Diz Villar, A; Curt Nuño, F; Illodo Miramontes, G; Refojos Arencibia, F J; López González, J M
2016-12-01
Piriformis syndrome is an uncommon cause of buttock and leg pain. Some treatment options include the injection of the piriformis muscle with local anesthetic and steroids. Various techniques for piriformis muscle injection have been described. Ultrasound allows direct visualization and real-time injection of the piriformis muscle. We describe 5 consecutive patients diagnosed with piriformis syndrome with no improvement after pharmacological treatment. Piriformis muscle injection with local anesthetics and steroids was performed using an ultrasound technique based on a standard technique. All 5 patients have improved their pain as measured by a numeric verbal scale. One patient had sciatica after the injection that improved spontaneously in 10 days. We describe an ultrasound-guided piriformis muscle injection that has the advantages of being effective, simple, and safe. Copyright © 2016 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España, S.L.U. All rights reserved.
High range free space optic transmission using new dual diffuser modulation technique
NASA Astrophysics Data System (ADS)
Rahman, A. K.; Julai, N.; Jusoh, M.; Rashidi, C. B. M.; Aljunid, S. A.; Anuar, M. S.; Talib, M. F.; Zamhari, Nurdiani; Sahari, S. k.; Tamrin, K. F.; Jong, Rudiyanto P.; Zaidel, D. N. A.; Mohtadzar, N. A. A.; Sharip, M. R. M.; Samat, Y. S.
2017-11-01
Free space optical communication (FSOC) is vulnerable to atmospheric fluctuations. This paper analyzes a new technique, dual diffuser modulation (DDM), to mitigate the atmospheric turbulence effect. Under atmospheric turbulence, the laser beam is subject to (a) beam wander, (b) beam spreading and (c) scintillation. Scintillation deteriorates FSOC the most: it distorts the wavefront, causing signal fluctuation, and can ultimately drive the receiver into saturation or loss of signal. The DDM approach enhances the detection of bit '1' and bit '0' and improves the received power to combat the turbulence effect. The performance analysis focuses on signal-to-noise ratio (SNR) and bit error rate (BER); the numerical results show that the DDM technique improves the achievable range by an estimated 40% under weak turbulence and 80% under strong turbulence.
An efficient HZETRN (a galactic cosmic ray transport code)
NASA Technical Reports Server (NTRS)
Shinn, Judy L.; Wilson, John W.
1992-01-01
An accurate and efficient engineering code for analyzing the shielding requirements against the high-energy galactic heavy ions is needed. The HZETRN is a deterministic code developed at Langley Research Center that is constantly under improvement both in physics and numerical computation and is targeted for such use. One problem area connected with the space-marching technique used in this code is the propagation of the local truncation error. By improving the numerical algorithms for interpolation, integration, and grid distribution formula, the efficiency of the code is increased by a factor of eight as the number of energy grid points is reduced. The numerical accuracy of better than 2 percent for a shield thickness of 150 g/cm^2 is found when a 45 point energy grid is used. The propagating step size, which is related to the perturbation theory, is also reevaluated.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
Sampling the Airway: Improving the Predictive and Toxicological Value of Bronchoalveolar Lavage
Bronchoalveolar lavage (BAL) is a relatively simple technique to obtain biological material in the form of BAL fluid (BALF) from airways of humans and laboratory animals. Numerous predictive biomarkers of pulmonary injury and diseases can be detected in BALF which aid in diagnosi...
Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis of, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components can best be connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage in the Runge-Kutta time stepping. We could think of a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a previously introduced wavelet technique as sensors. Here, the method is briefly described with selected numerical experiments.
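The wavelet sensor itself is not specified in the abstract; as a stand-in, the sketch below uses a simpler Jameson-type normalized second-difference sensor on a positive (pressure-like) field to illustrate how cells needing the nonlinear filter or extra dissipation can be flagged (field, grid and threshold are illustrative):

```python
import numpy as np

def second_difference_sensor(u, eps=1e-12):
    """Normalized second-difference sensor: near 0 in smooth regions, O(1) at jumps."""
    num = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])
    den = np.abs(u[2:]) + 2.0 * np.abs(u[1:-1]) + np.abs(u[:-2]) + eps
    s = np.zeros_like(u)
    s[1:-1] = num / den
    return s

x = np.linspace(0.0, 1.0, 401)
u = 2.0 + np.sin(2.0 * np.pi * x) + np.where(x > 0.6, 1.0, 0.0)  # smooth field plus a jump
sensor = second_difference_sensor(u)
flag = sensor > 0.05        # illustrative threshold for activating the nonlinear filter
print("flagged cells:", np.count_nonzero(flag), "near x =", x[flag])
```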
Improvement in QEPAS system utilizing a second harmonic based wavelength calibration technique
NASA Astrophysics Data System (ADS)
Zhang, Qinduan; Chang, Jun; Wang, Fupeng; Wang, Zongliang; Xie, Yulei; Gong, Weihua
2018-05-01
A simple laser wavelength calibration technique, based on the second harmonic signal, is demonstrated in this paper to improve the performance of a quartz enhanced photoacoustic spectroscopy (QEPAS) gas sensing system, e.g. improving the signal to noise ratio (SNR), detection limit and long-term stability. A constant current corresponding to the gas absorption line, combined with an f/2 frequency sinusoidal signal, is used to drive the laser (constant driving mode), and a software-based real-time wavelength calibration technique is developed to eliminate the wavelength drift due to ambient fluctuations. Compared to conventional wavelength modulation spectroscopy (WMS), this method allows a lower filtering bandwidth and an averaging algorithm to be applied to the QEPAS system, improving the SNR and detection limit. In addition, the real-time wavelength calibration technique guarantees that the laser output is modulated steadily at the gas absorption line. Water vapor is chosen as the target gas to evaluate the performance compared to the constant driving mode and a conventional WMS system. The water vapor sensor was designed to be insensitive to incoherent external acoustic noise by means of the numerical averaging technique. As a result, the SNR increases 12.87 times in the wavelength-calibration-based system compared to the conventional WMS system. The new system achieved a better linear response (R^2 = 0.9995) in the concentration range from 300 to 2000 ppmv, and achieved a minimum detection limit (MDL) of 630 ppbv.
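The QEPAS hardware and lock-in electronics are not reproduced here; the sketch below only illustrates the second-harmonic behaviour the calibration relies on: the demodulated 2f signal of a symmetric absorption line is extremal at line centre, so tracking that extremum re-centres the laser. The Lorentzian line shape, modulation depth and sampling are illustrative assumptions:

```python
import numpy as np

def second_harmonic(center_detuning, mod_depth=1.0, gamma=1.0, n=4000):
    """Demodulated 2f amplitude for a Lorentzian line under sinusoidal wavelength modulation."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)   # one modulation period
    nu = center_detuning + mod_depth * np.cos(t)            # instantaneous detuning
    absorb = 1.0 / (1.0 + (nu / gamma) ** 2)                # Lorentzian absorption
    return 2.0 * np.mean(absorb * np.cos(2.0 * t))          # lock-in at 2f (cosine coefficient)

# Sweep the laser centre across the line and locate the 2f extremum: its offset from
# zero detuning is what the software calibration corrects for.
detunings = np.linspace(-4.0, 4.0, 161)
s2f = np.array([second_harmonic(d) for d in detunings])
print(f"2f extremum found at detuning = {detunings[np.argmax(np.abs(s2f))]:.2f}")
```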
Two alternative ways for solving the coordination problem in multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1991-01-01
Two techniques are presented for formulating the coupling between levels in multilevel optimization by linear decomposition. They are proposed as improvements over the original formulation, now several years old, which relied on explicit equality constraints that application experience showed to occasionally cause numerical difficulties. The two new techniques represent the coupling without using explicit equality constraints, thus avoiding the above difficulties and also reducing the computational cost of the procedure. The old and new formulations are presented in detail and illustrated by an example of a structural optimization. A generic version of the improved algorithm is also developed for applications to multidisciplinary systems not limited to structures.
NASA Astrophysics Data System (ADS)
Kim, Younghyun; Sung, Yunsu; Yang, Jung-Tack; Choi, Woo-Young
2018-02-01
The characteristics of high-power broad-area laser diodes with an improved heat-sinking structure are numerically analyzed by a technology computer-aided design based self-consistent electro-thermal-optical simulation. The high-power laser diodes consist of a separate confinement heterostructure of a compressively strained InGaAsP quantum well and GaInP optical cavity layers, a 100-μm-wide rib, and a 2000-μm-long cavity. In order to overcome the performance deteriorations of high-power laser diodes caused by self-heating, such as thermal rollover and thermal blooming, we propose a high-power broad-area laser diode with an improved heat-sinking structure, in which an additional effective heat-sinking path toward the substrate side is created by removing the bulk substrate. This can be achieved by removing the 400-μm-thick GaAs substrate with an AlAs sacrificial layer, utilizing well-known epitaxial liftoff techniques. In this study, we present the performance improvement of the high-power laser diode with the heat-sinking structure obtained by suppressing thermal effects. It is found that the lateral far-field angle as well as the quantum well temperature is expected to be improved by the proposed heat-sinking structure, which is required for high beam quality and optical output power, respectively.
1993-03-10
template which runs a Romberg algorithm in the background to numerically integrate the BVN [12:257]. Appendix A also lists the results from two other ... for computing these values: a Taylor series expansion, the Romberg algorithm, and the CBN technique. Appendix A lists CEPpop values for eleven ... determining factor in this selection process. Of the 175 populations examined in the experiment, the MathCAD version of the Romberg algorithm failed
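For reference, a minimal Romberg routine of the kind the excerpt refers to, applied here to a one-dimensional normal density as a stand-in (the thesis integrates the bivariate normal, which needs an additional reduction step not shown):

```python
import numpy as np

def romberg(f, a, b, levels=8):
    """Romberg integration: trapezoid estimates refined by Richardson extrapolation."""
    R = np.zeros((levels, levels))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h *= 0.5
        # Trapezoid rule with 2**i panels, reusing all previous evaluations.
        new_x = a + h * np.arange(1, 2**i, 2)
        R[i, 0] = 0.5 * R[i - 1, 0] + h * np.sum(f(new_x))
        for j in range(1, i + 1):
            R[i, j] = R[i, j - 1] + (R[i, j - 1] - R[i - 1, j - 1]) / (4**j - 1)
    return R[levels - 1, levels - 1]

phi = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
print(romberg(phi, -1.0, 1.0))      # ~0.682689, the one-sigma normal probability
```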
Analytical cytology applied to detection of induced cytogenetic abnormalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gray, J.W.; Lucas, J.; Straume, T.
1987-08-06
Radiation-induced biological damage results in the formation of a broad spectrum of cytogenetic changes such as translocations, dicentrics, ring chromosomes, and acentric fragments. A battery of analytical cytologic techniques is now emerging that promises to significantly improve the precision and ease with which these radiation-induced cytogenetic changes can be quantified. This report summarizes techniques to facilitate analysis of the frequency of occurrence of structural and numerical aberrations in control and irradiated human cells. 14 refs., 2 figs.
Optimal startup control of a jacketed tubular reactor.
NASA Technical Reports Server (NTRS)
Hahn, D. R.; Fan, L. T.; Hwang, C. L.
1971-01-01
The optimal startup policy of a jacketed tubular reactor, in which a first-order, reversible, exothermic reaction takes place, is presented. A distributed maximum principle is presented for determining weak necessary conditions for optimality of a diffusional distributed parameter system. A numerical technique is developed for practical implementation of the distributed maximum principle. This involves the sequential solution of the state and adjoint equations, in conjunction with a functional gradient technique for iteratively improving the control function.
Strategies for Improving Rehearsal Technique: Using Research Findings to Promote Better Rehearsals
ERIC Educational Resources Information Center
Silvey, Brian A.
2014-01-01
Music education researchers and conducting pedagogues have identified numerous behaviors that contribute to increased verbal and nonverbal teaching effectiveness of conductors on the podium. This article is a review of literature concerning several conductor behaviors that may (a) increase the effectiveness of rehearsals, (b) enhance the…
Species of Perkinsus are responsible for high mortalities of bivalve molluscs world-wide. Techniques to accurately estimate parasites in tissues are required to improve understanding of perkinsosis. This study quantifies the number and tissue distribution of Perkinsus marinus in ...
The Future of Higher Education in Nigeria: Global Challenges and Opportunities
ERIC Educational Resources Information Center
Oni, Adesoji A.; Alade, Ibiwumi A.
2008-01-01
Among the numerous components of development of higher education are; growth in quantity, quality, relevance and diversity of curriculum [programme and courses]; widening of access and broadening of equity, innovation in teaching methods and techniques; improvement in the quantity and quality of research activities; more and better community…
Development of known-fate survival monitoring techniques for juvenile wild pigs (Sus scrofa)
David A. Keiter; John C. Kilgo; Mark A. Vukovich; Fred L. Cunningham; James C. Beasley
2017-01-01
Context. Wild pigs are an invasive species linked to numerous negative impacts on natural and anthropogenic ecosystems in many regions of the world. Robust estimates of juvenile wild pig survival are needed to improve population dynamics models to facilitate management of this economically and ecologically...
NASA Technical Reports Server (NTRS)
Garai, Anirban; Diosady, Laslo T.; Murman, Scott M.; Madavan, Nateri K.
2016-01-01
The perfectly matched layer (PML) technique is developed in the context of a high-order spectral-element Discontinuous-Galerkin (DG) method. The technique is applied to a range of test cases and is shown to be superior compared to other approaches, such as those based on using characteristic boundary conditions and sponge layers, for treating the inflow and outflow boundaries of computational domains. In general, the PML technique improves the quality of the numerical results for simulations of practical flow configurations, but it also exhibits some instabilities for large perturbations. A preliminary analysis that attempts to understand the source of these instabilities is discussed.
Numerical modeling of pollutant transport using a Lagrangian marker particle technique
NASA Technical Reports Server (NTRS)
Spaulding, M.
1976-01-01
A derivation and code were developed for the three-dimensional mass transport equation, using a particle-in-cell solution technique, to solve coastal zone waste discharge problems where particles are a major component of the waste. Improvements in the particle movement techniques are suggested and typical examples illustrated. Preliminary model comparisons with analytic solutions for an instantaneous point release in a uniform flow show good results in resolving the waste motion. The findings to date indicate that this computational model will provide a useful technique to study the motion of sediment, dredged spoils, and other particulate waste commonly deposited in coastal waters.
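As a rough illustration of the marker-particle idea (not the report's three-dimensional code), the sketch below advects random-walk particles for an instantaneous point release in a uniform flow and compares the resulting histogram with the analytic one-dimensional Gaussian solution; all parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

u, D, T, dt = 0.5, 1.0, 10.0, 0.05       # uniform velocity, diffusivity, total time, time step
n_part = 200_000
x = np.zeros(n_part)                     # instantaneous point release at x = 0

for _ in range(int(T / dt)):
    # advect with the mean flow, diffuse with an equivalent random walk
    x += u * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_part)

# Compare the particle histogram with the analytic 1-D solution
edges = np.linspace(-10, 20, 121)
hist, _ = np.histogram(x, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
analytic = np.exp(-(centers - u * T)**2 / (4 * D * T)) / np.sqrt(4 * np.pi * D * T)
print("max abs error:", np.max(np.abs(hist - analytic)))
```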
NASA Technical Reports Server (NTRS)
Siclari, Michael J.
1988-01-01
A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model, and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Other new features include improved internal grid generation using analytic conformal mappings and an internal geometry package, supported by a simple geometric Harris wave-drag input originally developed for panel methods.
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
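The authors' specific sensitivity measure is not reproduced here, but a generic permutation-style sensitivity check on a trained neural network conveys the idea: scramble one input at a time and record the drop in predictive performance. The sketch below uses a synthetic dataset and scikit-learn's MLPClassifier as stand-ins for the trauma-audit data and model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a trauma-audit dataset (the TRISS variables are not reproduced here)
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
base = roc_auc_score(y_te, net.predict_proba(X_te)[:, 1])

rng = np.random.default_rng(0)
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the link between input j and the outcome
    drop = base - roc_auc_score(y_te, net.predict_proba(X_perm)[:, 1])
    print(f"input {j}: AUC drop when scrambled = {drop:.3f}")
```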
Damage Evaluation Based on a Wave Energy Flow Map Using Multiple PZT Sensors
Liu, Yaolu; Hu, Ning; Xu, Hong; Yuan, Weifeng; Yan, Cheng; Li, Yuan; Goda, Riu; Alamusi; Qiu, Jinhao; Ning, Huiming; Wu, Liangke
2014-01-01
A new wave energy flow (WEF) map concept was proposed in this work. Based on it, an improved technique incorporating the laser scanning method and Betti's reciprocal theorem was developed to evaluate the shape and size of damage as well as to realize visualization of wave propagation. In this technique, a simple signal processing algorithm was proposed to construct the WEF map when waves propagate through an inspection region, and multiple lead zirconate titanate (PZT) sensors were employed to improve inspection reliability. Various damages in aluminum and carbon fiber reinforced plastic laminated plates were experimentally and numerically evaluated to validate this technique. The results show that it can effectively evaluate the shape and size of damage from wave field variations around the damage in the WEF map. PMID:24463430
An automatic step adjustment method for average power analysis technique used in fiber amplifiers
NASA Astrophysics Data System (ADS)
Liu, Xue-Ming
2006-04-01
An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two unique merits, namely higher-order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold for the same error level. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.
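The fiber-amplifier model equations are not reproduced here, but the flavor of automatic step adjustment can be sketched with a generic step-doubling error control wrapped around a fourth-order Runge-Kutta march; the toy propagation equation, tolerance and step-growth rules below are illustrative assumptions, not the paper's ASA formulas.

```python
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t0, t_end, y0, h0=1e-2, tol=1e-8):
    """March along the fiber, doubling/halving the step from a local error estimate."""
    t, y, h = t0, np.asarray(y0, float), h0
    while t < t_end:
        h = min(h, t_end - t)
        y_big = rk4_step(f, t, y, h)
        y_half = rk4_step(f, t + 0.5 * h, rk4_step(f, t, y, 0.5 * h), 0.5 * h)
        err = np.max(np.abs(y_big - y_half)) / 15.0      # Richardson error estimate for RK4
        if err < tol:
            t, y = t + h, y_half
            if err < tol / 32:
                h *= 2.0                                  # grow the step when comfortably accurate
        else:
            h *= 0.5                                      # shrink and retry
    return y

# Toy propagation equation dP/dz = g(z) * P (a stand-in for the amplifier model equations)
g = lambda z: 0.5 * np.exp(-z)
sol = integrate_adaptive(lambda z, P: g(z) * P, 0.0, 5.0, [1.0])
print(sol, np.exp(0.5 * (1 - np.exp(-5.0))))              # numerical vs analytic
```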
A successive overrelaxation iterative technique for an adaptive equalizer
NASA Technical Reports Server (NTRS)
Kosovych, O. S.
1973-01-01
An adaptive strategy for the equalization of pulse-amplitude-modulated signals in the presence of intersymbol interference and additive noise is reported. The successive overrelaxation iterative technique is used as the algorithm for the iterative adjustment of the equalizer coefficients during a training period for the minimization of the mean square error. With 2-cyclic and nonnegative Jacobi matrices, substantial improvement is demonstrated in the rate of convergence over the commonly used gradient techniques. The Jacobi theorems are also extended to nonpositive Jacobi matrices. Numerical examples strongly indicate that the improvements obtained for the special cases are possible for general channel characteristics. The technique is analytically demonstrated to decrease the mean square error at each iteration over a large range of parameter values for light or moderate intersymbol interference, and over small intervals for general channels. Analytically, convergence of the relaxation algorithm is proven in a noisy environment, and the coefficient variance is demonstrated to be bounded.
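A minimal sketch of the successive overrelaxation iteration itself, applied to a small symmetric positive definite Toeplitz system standing in for the equalizer's normal equations; the channel autocorrelation values and relaxation factor below are illustrative, not taken from the report.

```python
import numpy as np
from scipy.linalg import toeplitz

def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive overrelaxation for A x = b (A symmetric positive definite)."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Toy channel autocorrelation matrix (Toeplitz, SPD) and target cross-correlation vector
col = np.concatenate([np.array([1.0, 0.5, 0.2, 0.05, 0.0]), np.zeros(6)])
R = toeplitz(col)
p = np.zeros(11); p[5] = 1.0                    # aim for a delayed impulse response
c = sor(R, p, omega=1.6)
print(np.allclose(R @ c, p, atol=1e-6))
```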
Somatic Embryogenesis: Still a Relevant Technique in Citrus Improvement.
Omar, Ahmad A; Dutt, Manjul; Gmitter, Frederick G; Grosser, Jude W
2016-01-01
The genus Citrus contains numerous fresh and processed fruit cultivars that are economically important worldwide. New cultivars are needed to battle industry threatening diseases and to create new marketing opportunities. Citrus improvement by conventional methods alone has many limitations that can be overcome by applications of emerging biotechnologies, generally requiring cell to plant regeneration. Many citrus genotypes are amenable to somatic embryogenesis, which became a key regeneration pathway in many experimental approaches to cultivar improvement. This chapter provides a brief history of plant somatic embryogenesis with focus on citrus, followed by a discussion of proven applications in biotechnology-facilitated citrus improvement techniques, such as somatic hybridization, somatic cybridization, genetic transformation, and the exploitation of somaclonal variation. Finally, two important new protocols that feature plant regeneration via somatic embryogenesis are provided: protoplast transformation and Agrobacterium-mediated transformation of embryogenic cell suspension cultures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, F.; Ruiz, C.; Becker, A.
We study the suppression of reflections in the numerical simulation of the time-dependent Schroedinger equation for strong-field problems on a grid using exterior complex scaling (ECS) as an absorbing boundary condition. It is shown that the ECS method can be applied in both the length and the velocity gauge as long as appropriate approximations are applied in the ECS transformation of the electron-field coupling. It is found that the ECS method improves the suppression of reflection as compared to the conventional masking function technique in typical simulations of atoms exposed to an intense laser pulse. Finally, we demonstrate the advantage of the ECS technique to avoid unphysical artifacts in the evaluation of high harmonic spectra.
Numerical Study of Low Emission Gas Turbine Combustor Concepts
NASA Technical Reports Server (NTRS)
Yang, Song-Lin
2002-01-01
To further reduce pollutant emissions, such as CO, NO(x), UHCs, etc., in the next few decades, innovative concepts of gas turbine combustors must be developed. Several concepts, such as the LPP (Lean-Premixed-Prevaporized), RQL (Rich-Burn Quick-Quench Lean-Burn), and LDI (Lean-Direct-Injection), have been under study for many years. To fully realize the potential of these concepts, several improvements, such as inlet geometry, air swirler, aerothermochemistry control, fuel preparation, fuel injection and injector design, etc., must be made, which can be studied through the experimental method and/or the numerical technique. The purpose of this proposal is to use the CFD technique to study, and hence, to guide the design process for low emission gas turbine combustors. A total of 13 technical papers have been (or will be) published.
Sparsity based terahertz reflective off-axis digital holography
NASA Astrophysics Data System (ADS)
Wan, Min; Muniraj, Inbarasan; Malallah, Ra'ed; Zhao, Liang; Ryle, James P.; Rong, Lu; Healy, John J.; Wang, Dayong; Sheridan, John T.
2017-05-01
Terahertz radiation lies between the microwave and infrared regions in the electromagnetic spectrum. Emitted frequencies range from 0.1 to 10 THz with corresponding wavelengths ranging from 30 μm to 3 mm. In this paper, a continuous-wave Terahertz off-axis digital holographic system is described. A Gaussian fitting method and image normalisation techniques were employed on the recorded hologram to improve the image resolution. A synthesised contrast enhanced hologram is then digitally constructed. Numerical reconstruction is achieved using the angular spectrum method of the filtered off-axis hologram. A sparsity based compression technique is introduced before numerical data reconstruction in order to reduce the dataset required for hologram reconstruction. Results show that a small sparse dataset is sufficient to reconstruct the hologram with good image quality.
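The angular spectrum reconstruction step mentioned above can be sketched in a few lines of Python; the wavelength, pixel pitch, reconstruction distance and the random "hologram" below are placeholders, not the experimental values of this paper.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    # Transfer function of free space; evanescent components are suppressed
    kz_sq = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
    H = np.exp(1j * z * np.sqrt(np.maximum(kz_sq, 0.0))) * (kz_sq > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative numbers only: ~2.5 THz (lambda ~ 119 um), 100 um pixels, 50 mm reconstruction distance
wavelength, dx, z = 119e-6, 100e-6, 50e-3
hologram = np.random.rand(256, 256)          # stand-in for a filtered off-axis hologram
image = np.abs(angular_spectrum(hologram, wavelength, dx, z))
print(image.shape)
```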
Numerical simulation of supersonic and hypersonic inlet flow fields
NASA Technical Reports Server (NTRS)
Mcrae, D. Scott; Kontinos, Dean A.
1995-01-01
This report summarizes the research performed by North Carolina State University and NASA Ames Research Center under Cooperative Agreement NCA2-719, 'Numerical Simulation of Supersonic and Hypersonic Inlet Flow Fields". Four distinct rotated upwind schemes were developed and investigated to determine accuracy and practicality. The scheme found to have the best combination of attributes, including reduction to grid alignment with no rotation, was the cell centered non-orthogonal (CCNO) scheme. In 2D, the CCNO scheme improved rotation when flux interpolation was extended to second order. In 3D, improvements were less dramatic in all cases, with second order flux interpolation showing the least improvement over grid aligned upwinding. The reduction in improvement is attributed to uncertainty in determining optimum rotation angle and difficulty in performing accurate and efficient interpolation of the angle in 3D. The CCNO rotational technique will prove very useful for increasing accuracy when second order interpolation is not appropriate and will materially improve inlet flow solutions.
Dynamic one-dimensional modeling of secondary settling tanks and design impacts of sizing decisions.
Li, Ben; Stenstrom, Michael K
2014-03-01
As one of the most significant components in the activated sludge process (ASP), secondary settling tanks (SSTs) can be investigated with mathematical models to optimize design and operation. This paper takes a new look at the one-dimensional (1-D) SST model by analyzing and considering the impacts of numerical problems, especially the process robustness. An improved SST model with Yee-Roe-Davis technique as the PDE solver is proposed and compared with the widely used Takács model to show its improvement in numerical solution quality. The improved and Takács models are coupled with a bioreactor model to reevaluate ASP design basis and several popular control strategies for economic plausibility, contaminant removal efficiency and system robustness. The time-to-failure due to rising sludge blanket during overloading, as a key robustness indicator, is analyzed to demonstrate the differences caused by numerical issues in SST models. The calculated results indicate that the Takács model significantly underestimates time to failure, thus leading to a conservative design. Copyright © 2013 Elsevier Ltd. All rights reserved.
Numeric data distribution: The vital role of data exchange in today's world
NASA Technical Reports Server (NTRS)
Chase, Malcolm W.
1994-01-01
The major aim of the NIST standard Reference Data Program (SRD) is to provide critically evaluated numeric data to the scientific and technical community in a convenient and accessible form. A second aim of the program is to provide feedback into the experimental and theoretical programs to help raise the general standards of measurement. By communicating the experience gained in evaluating the world output of data in the physical sciences, NIST/SRD helps to advance the level of experimental techniques and improve the reliability of physical measurements.
Alternating Direction Implicit (ADI) schemes for a PDE-based image osmosis model
NASA Astrophysics Data System (ADS)
Calatroni, L.; Estatico, C.; Garibaldi, N.; Parisotto, S.
2017-10-01
We consider Alternating Direction Implicit (ADI) splitting schemes to compute efficiently the numerical solution of the PDE osmosis model considered by Weickert et al. in [10] for several imaging applications. The discretised scheme is shown to preserve analogous properties to the continuous model. The dimensional splitting strategy translates numerically into the solution of simple tridiagonal systems for which standard matrix factorisation techniques can be used to improve upon the performance of classical implicit methods, even for large time steps. Applications to the shadow removal problem are presented.
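As a minimal illustration of why the splitting reduces to tridiagonal line solves, the sketch below performs a generic dimensional-splitting implicit diffusion step (not the osmosis model of [10]), handing each one-dimensional sweep to a banded solver.

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_1d_sweep(u, r):
    """One implicit (backward Euler) diffusion sweep along one direction:
    (I - r*D2) u_new = u, solved as a tridiagonal system per grid line."""
    n = u.shape[0]
    ab = np.zeros((3, n))                 # banded storage for solve_banded
    ab[0, 1:] = -r                        # super-diagonal
    ab[1, :] = 1.0 + 2.0 * r              # main diagonal
    ab[2, :-1] = -r                       # sub-diagonal
    return solve_banded((1, 1), ab, u)

# One splitting step on a 2-D field: x-sweep on every row, then y-sweep on every column
rng = np.random.default_rng(1)
U = rng.random((64, 64))
r = 0.5                                   # dt*mu/dx**2, illustrative value
U = np.apply_along_axis(implicit_1d_sweep, 1, U, r)
U = np.apply_along_axis(implicit_1d_sweep, 0, U, r)
print(U.shape, U.min() >= 0)
```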
An Operator-Integration-Factor Splitting (OIFS) method for Incompressible Flows in Moving Domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, Saumil S.; Fischer, Paul F.; Min, Misun
In this paper, we present a characteristic-based numerical procedure for simulating incompressible flows in domains with moving boundaries. Our approach utilizes an operator-integration-factor splitting technique to help produce an efficient and stable numerical scheme. Using the spectral element method and an arbitrary Lagrangian-Eulerian formulation, we investigate flows where the convective acceleration effects are non-negligible. Several examples, ranging from laminar to turbulent flows, are considered. Comparisons with a standard, semi-implicit time-stepping procedure illustrate the improved performance of the scheme.
Energy and technology review: Engineering modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabayan, H.S.; Goudreau, G.L.; Ziolkowski, R.W.
1986-10-01
This report presents information concerning: Modeling Canonical Problems in Electromagnetic Coupling Through Apertures; Finite-Element Codes for Computing Electrostatic Fields; Finite-Element Modeling of Electromagnetic Phenomena; Modeling Microwave-Pulse Compression in a Resonant Cavity; Lagrangian Finite-Element Analysis of Penetration Mechanics; Crashworthiness Engineering; Computer Modeling of Metal-Forming Processes; Thermal-Mechanical Modeling of Tungsten Arc Welding; Modeling Air Breakdown Induced by Electromagnetic Fields; Iterative Techniques for Solving Boltzmann's Equations for p-Type Semiconductors; Semiconductor Modeling; and Improved Numerical-Solution Techniques in Large-Scale Stress Analysis.
NASA Astrophysics Data System (ADS)
Kaloop, Mosbeh R.; Yigit, Cemal O.; Hu, Jong W.
2018-03-01
Recently, the high-rate global navigation satellite system precise point positioning (GNSS-PPP) technique has been used to detect the dynamic behavior of structures. This study aimed to increase the accuracy of extracting the oscillation properties of structural movements based on the high-rate (10 Hz) GNSS-PPP monitoring technique. A model based on the combination of wavelet packet transform (WPT) de-noising and neural network (NN) prediction was proposed to improve the detection of the dynamic behavior of structures with the GNSS-PPP method. A complicated numerical simulation involving highly noisy data and 13 experimental cases with different loads were utilized to confirm the efficiency of the proposed model design and of the monitoring technique in detecting the dynamic behavior of structures. The results revealed that, when combined with the proposed model, the GNSS-PPP method can be used to accurately detect the dynamic behavior of engineering structures as an alternative to the relative GNSS method.
NASA Astrophysics Data System (ADS)
Trivedi, Nitin; Kumar, Manoj; Haldar, Subhasis; Deswal, S. S.; Gupta, Mridula; Gupta, R. S.
2017-09-01
A charge plasma technique based dopingless (DL) accumulation mode (AM) junctionless (JL) cylindrical surrounding gate (CSG) MOSFET has been proposed and extensively investigated. The proposed device has no physical junction at the source-to-channel and channel-to-drain interfaces. The complete silicon pillar is considered undoped. The high free-electron density, or induced N+ region, is designed by keeping the work function of the source/drain metal contacts lower than the work function of undoped silicon. Thus, fabrication complexity is drastically reduced by removing the requirement for high-temperature doping techniques. The electrical/analog characteristics of the proposed device have been extensively investigated using numerical simulation and are compared with a conventional junctionless cylindrical surrounding gate (JL-CSG) MOSFET with identical dimensions. The ATLAS-3D device simulator is used for the numerical simulations. The results show that the proposed device is more immune to short-channel effects than the conventional JL-CSG MOSFET and is suitable for faster switching applications due to its higher ION/IOFF ratio.
NASA Technical Reports Server (NTRS)
Fetterman, Timothy L.; Noor, Ahmed K.
1987-01-01
Computational procedures are presented for evaluating the sensitivity derivatives of the vibration frequencies and eigenmodes of framed structures. Both a displacement and a mixed formulation are used. The two key elements of the computational procedure are: (a) Use of dynamic reduction techniques to substantially reduce the number of degrees of freedom; and (b) Application of iterative techniques to improve the accuracy of the derivatives of the eigenmodes. The two reduction techniques considered are the static condensation and a generalized dynamic reduction technique. Error norms are introduced to assess the accuracy of the eigenvalue and eigenvector derivatives obtained by the reduction techniques. The effectiveness of the methods presented is demonstrated by three numerical examples.
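The dynamic-reduction and iterative-refinement machinery of this report is not reproduced here, but the underlying eigenvalue-sensitivity relation dλ/dp = φᵀ(∂K/∂p - λ ∂M/∂p)φ for mass-normalized modes can be checked on a toy two-degree-of-freedom system; the spring-mass values below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def stiffness(k):
    # 2-DOF spring-mass chain with springs k[0], k[1]
    return np.array([[k[0] + k[1], -k[1]],
                     [-k[1],        k[1]]])

M = np.diag([1.0, 2.0])
k = np.array([100.0, 50.0])

lam, Phi = eigh(stiffness(k), M)          # eigh mass-normalizes the eigenvectors
phi, lam0 = Phi[:, 0], lam[0]

# Analytic sensitivity of the first eigenvalue to k[1]: phi^T (dK/dp) phi (dM/dp = 0 here)
dK = np.array([[1.0, -1.0], [-1.0, 1.0]])
dlam_analytic = phi @ dK @ phi

# Finite-difference check
eps = 1e-6
lam_p = eigh(stiffness(k + np.array([0.0, eps])), M, eigvals_only=True)[0]
print(dlam_analytic, (lam_p - lam0) / eps)
```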
Performance Optimization of Marine Science and Numerical Modeling on HPC Cluster
Yang, Dongdong; Yang, Hailong; Wang, Luming; Zhou, Yucong; Zhang, Zhiyuan; Wang, Rui; Liu, Yi
2017-01-01
Marine science and numerical modeling (MASNUM) is widely used in forecasting ocean wave movement, through simulating the variation tendency of the ocean wave. Although efforts have been devoted to improve the performance of MASNUM from various aspects by existing work, there is still large space unexplored for further performance improvement. In this paper, we aim at improving the performance of propagation solver and data access during the simulation, in addition to the efficiency of output I/O and load balance. Our optimizations include several effective techniques such as the algorithm redesign, load distribution optimization, parallel I/O and data access optimization. The experimental results demonstrate that our approach achieves higher performance compared to the state-of-the-art work, about 3.5x speedup without degrading the prediction accuracy. In addition, the parameter sensitivity analysis shows our optimizations are effective under various topography resolutions and output frequencies. PMID:28045972
Numerical modeling of cold room's hinged door opening and closing processes
NASA Astrophysics Data System (ADS)
Carneiro, R.; Gaspar, P. D.; Silva, P. D.; Domingues, L. C.
2016-06-01
The need to rationalize energy consumption in the agrifood industry has hastened the development of methodologies to improve the thermal and energy performance of cold rooms. This paper presents a three-dimensional (3D) transient Computational Fluid Dynamics (CFD) model of a cold room to evaluate the air infiltration rate through hinged doors. A species transport model is used for modelling the tracer gas concentration decay technique. Numerical predictions indicate that the air temperature difference between spaces affects the air infiltration. For this case study, the infiltration rate increases by 0.016 m3 s-1 per K of air temperature difference. Knowledge of the evolution of air infiltration during door opening/closing times allows some conclusions to be drawn about its influence on the air conditions inside the cold room, and suggests best practices and simple technical improvements that can minimize air infiltration and consequently improve thermal performance and the rationalization of energy consumption.
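For a well-mixed space, the tracer gas concentration decay technique mentioned above reduces to fitting an exponential decay C(t) = C0 exp(-Qt/V); a hedged sketch with made-up numbers (not the paper's CFD output):

```python
import numpy as np

# Synthetic tracer-gas decay record (illustrative values, not the paper's CFD data)
V = 40.0                        # cold-room volume, m^3
Q_true = 0.05                   # true infiltration rate, m^3/s
t = np.arange(0.0, 300.0, 5.0)  # s
rng = np.random.default_rng(2)
C = 1000.0 * np.exp(-Q_true / V * t) * (1 + 0.01 * rng.standard_normal(t.size))  # ppm

# Concentration-decay method: the slope of ln(C) vs t gives the air-change rate Q/V
slope, _ = np.polyfit(t, np.log(C), 1)
Q_est = -slope * V
print(f"estimated infiltration rate: {Q_est:.4f} m^3/s (true {Q_true})")
```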
NASA Technical Reports Server (NTRS)
Raymond, William H.; Olson, William S.
1990-01-01
Delay in the spin-up of precipitation early in numerical atmospheric forecasts is a deficiency correctable by diabatic initialization combined with diabatic forcing. For either to be effective requires some knowledge of the magnitude and vertical placement of the latent heating fields. Until recently the best source of cloud and rain water data was the remotely sensed vertical integrated precipitation rate or liquid water content. Vertical placement of the condensation remains unknown. Some information about the vertical distribution of the heating rates and precipitating liquid water and ice can be obtained from retrieval techniques that use a physical model of precipitating clouds to refine and improve the interpretation of the remotely sensed data. A description of this procedure and an examination of its 3-D liquid water products, along with improved modeling methods that enhance or speed-up storm development is discussed.
Improvements in the efficiency of turboexpanders in cryogenic applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agahi, R.R.; Lin, M.C.; Ershaghi, B.
1996-12-31
Process designers have utilized turboexpanders in cryogenic processes because of their higher thermal efficiencies when compared with conventional refrigeration cycles. Process design and equipment performance have improved substantially through the utilization of modern technologies. Turboexpander manufacturers have also adopted computational fluid dynamics software, computer numerical control technology and holography techniques to further improve an already impressive turboexpander efficiency performance. In this paper, the authors explain the design process of the turboexpander utilizing modern technology. Two cases of turboexpanders processing helium (4.35 K) and hydrogen (56 K) will be presented.
Study of Variable Frequency Induction Heating in Steel Making Process
NASA Astrophysics Data System (ADS)
Fukutani, Kazuhiko; Umetsu, Kenji; Itou, Takeo; Isobe, Takanori; Kitahara, Tadayuki; Shimada, Ryuichi
Induction heating technologies have been the standard technologies employed in steel making processes because they are clean, have a high energy density, are highly controllable, etc. However, there is a problem in using them: in general, the frequencies of the electric circuits have to be kept fixed to improve their power factors, and this constraint makes the processes inflexible. In order to overcome this problem, we have developed a new heating technique: a variable frequency power supply with magnetic energy recovery switching. This technique helps to improve the quality of steel products as well as productivity. We have also performed numerical calculations and experiments to evaluate its effect on the temperature distributions of heated steel plates. The obtained results indicate that the application of the technique in steel making processes would be advantageous.
2013-09-30
transiting whales in the Southern California Bight, b) the use of passive underwater acoustic techniques for improved habitat assessment in biologically...sensitive areas and improved ecosystem modeling, and c) the application of the physics of excitable media to numerical modeling of biological choruses...was on the potential impact of man-made sounds on the calling behavior of transiting humpback whales in the Southern California Bight. The main
On the primary variable switching technique for simulating unsaturated-saturated flows
NASA Astrophysics Data System (ADS)
Diersch, H.-J. G.; Perrochet, P.
Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. Both schemes provide different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), where comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.
NASA Astrophysics Data System (ADS)
Madun, A.; Meghzili, S. A.; Tajudin, SAA; Yusof, M. F.; Zainalabidin, M. H.; Al-Gheethi, A. A.; Dan, M. F. Md; Ismail, M. A. M.
2018-04-01
The most important application of various geotechnical construction techniques is ground improvement. Many soil improvement projects have been developed due to the ongoing increase in urban and industrial growth and the need for greater access to land. Stone columns are one of the most effective and feasible techniques for soft clay soil improvement. Stone columns increase the bearing capacity and reduce the settlement of soil. Finite element analyses were performed using the program PLAXIS 2D. An elastic-perfectly plastic constitutive relation, based on the Mohr–Coulomb criterion, governs the soft clay and stone column behaviour. This paper presents how response surface methodology (RSM) software is used to optimize the effect of the diameter and length of the column on the load bearing capacity and settlement of soft clay. Load tests through numerical modelling using PLAXIS 2D were carried out on the loading plate at 66 mm. Stone column load bearing capacity increases with increasing column diameter, and settlement decreases with increasing column length. Results revealed that the larger the column diameter, the higher the load bearing capacity of the soil, while the longer the column, the lower the settlement. However, the optimum design of the stone column varied with each factor (diameter and length) separately for improvement.
Van Hooreweder, Brecht; Apers, Yanni; Lietaert, Karel; Kruth, Jean-Pierre
2017-01-01
This paper provides new insights into the fatigue properties of porous metallic biomaterials produced by additive manufacturing. Cylindrical porous samples with diamond unit cells were produced from Ti6Al4V powder using Selective Laser Melting (SLM). After measuring all morphological and quasi-static properties, compression-compression fatigue tests were performed to determine fatigue strength and to identify important fatigue influencing factors. In a next step, post-SLM treatments were used to improve the fatigue life of these biomaterials by changing the microstructure and by reducing stress concentrators and surface roughness. In particular, the influence of stress relieving, hot isostatic pressing and chemical etching was studied. Analytical and numerical techniques were developed to calculate the maximum local tensile stress in the struts as function of the strut diameter and load. With this method, the variability in the relative density between all samples was taken into account. The local stress in the struts was then used to quantify the exact influence of the applied post-SLM treatments on the fatigue life. A significant improvement of the fatigue life was achieved. Also, the post-SLM treatments, procedures and calculation methods can be applied to different types of porous metallic structures and hence this paper provides useful tools for improving fatigue performance of metallic biomaterials. Additive Manufacturing (AM) techniques such as Selective Laser Melting (SLM) are increasingly being used for producing customized porous metallic biomaterials. These biomaterials are regularly used for biomedical implants and hence a long lifetime is required. In this paper, a set of post-built surface and heat treatments is presented that can be used to significantly improve the fatigue life of porous SLM-Ti6Al4V samples. In addition, a novel and efficient analytical local stress method was developed to accurately quantify the influence of the post-built treatments on the fatigue life. Also numerical simulation techniques were used for validation. The developed methods and techniques can be applied to other types of porous biomaterials and hence provide new and useful tools for improving and predicting the fatigue life of porous biomaterials. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
[Interventional radiology in bone metastases].
Chiras, Jacques; Cormier, Evelyne; Baragan, Hector; Jean, Betty; Rose, Michèle
2007-02-01
Interventional radiology plays a large role in the treatment of bone metastases through numerous percutaneous or endovascular techniques. Vertebroplasty currently appears to be the most important technique for the stabilisation of spine metastases, as it provides satisfactory stabilisation of the vertebra and offers a clear improvement in quality of life. Owing to the success of this technique, cementoplasty of other bones, mainly the pelvic girdle, is developing widely. The heat generated by polymerisation of the cement has a carcinolytic effect, but this effect is not as strong as that produced by radiofrequency ablation. The latter technique currently appears to be the most important development for the definitive destruction of some bone metastases and is progressively replacing alcohol ablation of such lesions. Angiographic techniques remain less widely known, but endovascular embolization is very useful for reducing the risk of surgical treatment of hypervascular metastases. Chemoembolization is currently being developed to combine the pain relief provided by endovascular embolization with the carcinolytic effect of local endovascular chemotherapy. All these techniques should develop considerably over the coming years, and their efficacy and safety should improve greatly as metastases are treated earlier.
NASA Astrophysics Data System (ADS)
Ammari, Habib; Qiu, Lingyun; Santosa, Fadil; Zhang, Wenlong
2017-12-01
In this paper we present a mathematical and numerical framework for a procedure of imaging the anisotropic electrical conductivity tensor by integrating magneto-acoustic tomography with data acquired from diffusion tensor imaging. Magneto-acoustic tomography with magnetic induction (MAT-MI) is a hybrid, non-invasive medical imaging technique to produce conductivity images with improved spatial resolution and accuracy. Diffusion tensor imaging (DTI) is also a non-invasive technique for characterizing the diffusion properties of water molecules in tissues. We propose a model for anisotropic conductivity in which the conductivity is proportional to the diffusion tensor. Under this assumption, we propose an optimal control approach for reconstructing the anisotropic electrical conductivity tensor. We prove convergence and Lipschitz type stability of the algorithm and present numerical examples to illustrate its accuracy and feasibility.
Numerical modeling techniques for flood analysis
NASA Astrophysics Data System (ADS)
Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.
2016-12-01
Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and the estimation of these parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO 2D models provide the most economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly concerning floodplain elevation differences and vertical roughness in grids, were found; these can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the identification of the causes and effects of flooding.
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner base. In the method, the objective functions combining the symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces an equivalent system of differential equations to a system in a given model. Second, since its equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model in comparison with the previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
An Improved Treatment of External Boundary for Three-Dimensional Flow Computations
NASA Technical Reports Server (NTRS)
Tsynkov, Semyon V.; Vatsa, Veer N.
1997-01-01
We present an innovative numerical approach for setting highly accurate nonlocal boundary conditions at the external computational boundaries when calculating three-dimensional compressible viscous flows over finite bodies. The approach is based on application of the difference potentials method by V. S. Ryaben'kii and extends our previous technique developed for the two-dimensional case. The new boundary conditions methodology has been successfully combined with the NASA-developed code TLNS3D and used for the analysis of wing-shaped configurations in subsonic and transonic flow regimes. As demonstrated by the computational experiments, the improved external boundary conditions allow one to greatly reduce the size of the computational domain while still maintaining high accuracy of the numerical solution. Moreover, they may provide for a noticeable speedup of convergence of the multigrid iterations.
NASA Astrophysics Data System (ADS)
Cai, Jiaxiang; Liang, Hua; Zhang, Chun
2018-06-01
Based on the multi-symplectic Hamiltonian formula of the generalized Rosenau-type equation, a multi-symplectic scheme and an energy-preserving scheme are proposed. To improve the accuracy of the solution, we apply the composition technique to the obtained schemes to develop high-order schemes which are also multi-symplectic and energy-preserving respectively. Discrete fast Fourier transform makes a significant improvement to the computational efficiency of schemes. Numerical results verify that all the proposed schemes have satisfactory performance in providing accurate solution and preserving the discrete mass and energy invariants. Numerical results also show that although each basic time step is divided into several composition steps, the computational efficiency of the composition schemes is much higher than that of the non-composite schemes.
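The composition idea can be illustrated on a plain ODE rather than the Rosenau-type PDE of this paper: the classical triple-jump composition turns a second-order symplectic step into a fourth-order one. The harmonic-oscillator test below is only a stand-in for the schemes in the paper, with invented step size and duration.

```python
import numpy as np

def leapfrog(q, p, dt, force):
    """Second-order symplectic (Stormer-Verlet) step."""
    p = p + 0.5 * dt * force(q)
    q = q + dt * p
    p = p + 0.5 * dt * force(q)
    return q, p

def composed4(q, p, dt, force):
    """Fourth-order scheme by triple-jump composition of the second-order step."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    w0 = 1.0 - 2.0 * w1                       # the middle (negative) substep
    for w in (w1, w0, w1):
        q, p = leapfrog(q, p, w * dt, force)
    return q, p

# Harmonic oscillator test: compare the energy error of the base and composed schemes
force = lambda q: -q
energy = lambda q, p: 0.5 * (p * p + q * q)
for stepper in (leapfrog, composed4):
    q, p, dt = 1.0, 0.0, 0.1
    for _ in range(10_000):
        q, p = stepper(q, p, dt, force)
    print(stepper.__name__, abs(energy(q, p) - 0.5))
```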
NASA Technical Reports Server (NTRS)
Garcia-Espada, Susana; Haas, Rudiger; Colomer, Francisco
2010-01-01
An important limitation on the precision of the results obtained by space geodetic techniques like VLBI and GPS is the tropospheric delay caused by the neutral atmosphere, see e.g. [1]. In recent years numerical weather models (NWM) have been applied to improve the mapping functions which are used for tropospheric delay modeling in VLBI and GPS data analyses. In this manuscript we use raytracing to calculate slant delays and apply these to the analysis of European VLBI data. The raytracing is performed through the limited area numerical weather prediction (NWP) model HIRLAM. The advantages of this model are its high spatial (0.2 deg. x 0.2 deg.) and high temporal resolution (in prediction mode, three hours).
Advanced computational techniques for incompressible/compressible fluid-structure interactions
NASA Astrophysics Data System (ADS)
Kumar, Vinod
2005-07-01
Fluid-Structure Interaction (FSI) problems are of great importance to many fields of engineering and pose tremendous challenges to numerical analysts. This thesis addresses some of the hurdles faced in both 2D and 3D real-life time-dependent FSI problems, with particular emphasis on parachute systems. The techniques developed here would help improve the design of parachutes and are of direct relevance to several other FSI problems. The fluid system is solved using the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) finite element formulation for the Navier-Stokes equations of incompressible and compressible flows. The structural dynamics solver is based on a total Lagrangian finite element formulation. The Newton-Raphson method is employed to linearize the otherwise nonlinear system resulting from the fluid and structure formulations. The fluid and structural systems are solved in a decoupled fashion at each nonlinear iteration. While rigorous coupling methods are desirable for FSI simulations, the decoupled solution techniques provide sufficient convergence in the time-dependent problems considered here. In this thesis, common problems in the FSI simulations of parachutes are discussed and possible remedies for a few of them are presented. Further, the effects of the porosity model on the aerodynamic forces of round parachutes are analyzed. Techniques for solving compressible FSI problems are also discussed. Subsequently, a better stabilization technique is proposed to efficiently capture and accurately predict the shocks in supersonic flows. The numerical examples simulated here require high performance computing. Therefore, numerical tools using distributed memory supercomputers with message passing interface (MPI) libraries were developed.
Rangel-Magdaleno, Jose J; Romero-Troncoso, Rene J; Osornio-Rios, Roque A; Cabal-Yepez, Eduardo
2009-01-01
Jerk monitoring, defined as the first derivative of acceleration, has become a major issue in computerized numeric controlled (CNC) machines. Several works highlight the necessity of measuring jerk in a reliable way for improving production processes. Nowadays, the computation of jerk is done by finite differences of the acceleration signal, computed at the Nyquist rate, which leads to low signal-to-quantization noise ratio (SQNR) during the estimation. The novelty of this work is the development of a smart sensor for jerk monitoring from a standard accelerometer, which has improved SQNR. The proposal is based on oversampling techniques that give a better estimation of jerk than that produced by a Nyquist-rate differentiator. Simulations and experimental results are presented to show the overall methodology performance.
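A rough sketch of the contrast the authors describe, using made-up sampling rates and quantization step: differencing quantized acceleration at the Nyquist rate versus averaging oversampled blocks first. This is a simplified stand-in for the smart-sensor processing, not the paper's hardware implementation.

```python
import numpy as np

f_sig, fs_nyq, osr, lsb = 5.0, 1_000, 64, 1e-3
t = np.arange(0, 1, 1 / (fs_nyq * osr))
accel_q = lsb * np.round(np.sin(2 * np.pi * f_sig * t) / lsb)          # quantized accelerometer output
jerk = lambda tt: 2 * np.pi * f_sig * np.cos(2 * np.pi * f_sig * tt)   # true jerk of the test signal

# (a) Finite differences on Nyquist-rate samples of the quantized acceleration
t_nyq = t[::osr]
jerk_nyq = np.gradient(accel_q[::osr], 1 / fs_nyq)
rms_a = np.sqrt(np.mean((jerk_nyq - jerk(t_nyq))**2))

# (b) Oversample, average each block (reducing quantization noise), then difference
t_blk = t.reshape(-1, osr).mean(axis=1)
a_blk = accel_q.reshape(-1, osr).mean(axis=1)
jerk_blk = np.gradient(a_blk, 1 / fs_nyq)
rms_b = np.sqrt(np.mean((jerk_blk - jerk(t_blk))**2))

print(f"Nyquist-rate differencing rms error: {rms_a:.4f}")
print(f"oversampled+averaged rms error:      {rms_b:.4f}")
```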
Microwave holographic metrology for antenna diagnosis
NASA Astrophysics Data System (ADS)
Rahmat-Samii, Y.
1990-11-01
Advances in antenna diagnostic methodologies have been very significant in recent years. In particular, microwave holographic diagnostic techniques have been applied very successfully in improving the performance of reflector and array antennas. These techniques use the knowledge of the measured amplitude and phase of the antenna radiated fields and then take advantage of the existing Fourier transform relationships between the radiated fields and the effective aperture or current distribution to eventually determine reflector surface or array excitation coefficient anomalies. In this paper an overview of the recent developments in applying microwave holography is presented. The theoretical, numerical and measurement aspects of this technique are detailed by providing representative results.
Numerical and Experimental Investigation of the Turbulent Flow in a Ribbed Serpentine Passage
NASA Technical Reports Server (NTRS)
Iaccarino, Gianluca; Kalitzin, Georgi; Elkins, Christopher J.
2003-01-01
In this paper, the turbulent flow in a serpentine with oblique ribs is investigated experimentally and by numerical simulations. The measurements are carried out by using Magnetic Resonance Velocimetry (MRV) and the simulations using the Immersed Boundary (IB) technique. A brief description of these two approaches is reported in following sections. The results are reported in terms of velocity distributions in various planes in the serpentine; differences between measurements and simulations are presented qualitatively and quantitatively. The study of the discrepancy allows us to identify areas of needed improvements in the turbulence modeling.
Computer numeric control generation of toric surfaces
NASA Astrophysics Data System (ADS)
Bradley, Norman D.; Ball, Gary A.; Keller, John R.
1994-05-01
Until recently, the manufacture of toric ophthalmic lenses relied largely upon expensive, manual techniques for generation and polishing. Recent gains in computer numeric control (CNC) technology and tooling enable lens designers to employ single- point diamond, fly-cutting methods in the production of torics. Fly-cutting methods continue to improve, significantly expanding lens design possibilities while lowering production costs. Advantages of CNC fly cutting include precise control of surface geometry, rapid production with high throughput, and high-quality lens surface finishes requiring minimal polishing. As accessibility and affordability increase within the ophthalmic market, torics promise to dramatically expand lens design choices available to consumers.
A hybrid experimental-numerical technique for determining 3D velocity fields from planar 2D PIV data
NASA Astrophysics Data System (ADS)
Eden, A.; Sigurdson, M.; Mezić, I.; Meinhart, C. D.
2016-09-01
Knowledge of 3D, three component velocity fields is central to the understanding and development of effective microfluidic devices for lab-on-chip mixing applications. In this paper we present a hybrid experimental-numerical method for the generation of 3D flow information from 2D particle image velocimetry (PIV) experimental data and finite element simulations of an alternating current electrothermal (ACET) micromixer. A numerical least-squares optimization algorithm is applied to a theory-based 3D multiphysics simulation in conjunction with 2D PIV data to generate an improved estimation of the steady state velocity field. This 3D velocity field can be used to assess mixing phenomena more accurately than would be possible through simulation alone. Our technique can also be used to estimate uncertain quantities in experimental situations by fitting the gathered field data to a simulated physical model. The optimization algorithm reduced the root-mean-squared difference between the experimental and simulated velocity fields in the target region by more than a factor of 4, resulting in an average error less than 12% of the average velocity magnitude.
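The least-squares matching of a simulated field to planar measurement data can be sketched generically; the Lamb-Oseen vortex below is only a stand-in for the ACET multiphysics model, and the grid, noise level and parameter bounds are invented for the example.

```python
import numpy as np
from scipy.optimize import least_squares

# A toy parametric flow model standing in for the simulation: a Lamb-Oseen vortex
def model_uv(params, x, y):
    gamma, core = params                      # circulation strength and core radius
    r2 = x**2 + y**2 + 1e-12
    v_theta = gamma / (2 * np.pi * np.sqrt(r2)) * (1 - np.exp(-r2 / core**2))
    return -v_theta * y / np.sqrt(r2), v_theta * x / np.sqrt(r2)

# Synthetic planar "PIV" measurements on a 2-D grid with noise
x, y = np.meshgrid(np.linspace(-1, 1, 30), np.linspace(-1, 1, 30))
rng = np.random.default_rng(4)
u_true, v_true = model_uv([2.0, 0.3], x, y)
u_meas = u_true + 0.02 * rng.standard_normal(u_true.shape)
v_meas = v_true + 0.02 * rng.standard_normal(v_true.shape)

# Least-squares optimization of the model parameters against the measured plane
def residuals(params):
    u, v = model_uv(params, x, y)
    return np.concatenate([(u - u_meas).ravel(), (v - v_meas).ravel()])

fit = least_squares(residuals, x0=[1.0, 0.5], bounds=([0.0, 0.05], [10.0, 2.0]))
print("recovered parameters:", fit.x)        # should be close to [2.0, 0.3]
```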
NASA Technical Reports Server (NTRS)
Mickey, F. E.; Mcewan, A. J.; Ewing, E. G.; Huyler, W. C., Jr.; Khajeh-Nouri, B.
1970-01-01
An analysis was conducted with the objective of upgrading and improving the loads, stress, and performance prediction methods for Apollo spacecraft parachutes. The subjects considered were: (1) methods for a new theoretical approach to the parachute opening process, (2) new experimental-analytical techniques to improve the measurement of pressures, stresses, and strains in inflight parachutes, and (3) a numerical method for analyzing the dynamical behavior of rapidly loaded pilot chute risers.
Application of Numerical Integration and Data Fusion in Unit Vector Method
NASA Astrophysics Data System (ADS)
Zhang, J.
2012-01-01
The Unit Vector Method (UVM) is a series of orbit determination methods designed by Purple Mountain Observatory (PMO) that have been applied extensively. It obtains the conditional equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can therefore play a major role in orbit determination, and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement and unified the two dynamically, further improving precision and efficiency. In this thesis, further research has been done based on the UVM. Firstly, with the improvement of observation methods and techniques, the types and precision of observational data have improved substantially, which in turn demands higher accuracy in orbit determination. Analytical perturbation theory cannot meet this requirement, so numerical integration of the perturbations has been introduced into the UVM. The accuracy of the dynamical model then matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly, further improving the accuracy of orbit determination. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the weaknesses of the weighting strategy in the original UVM have been clarified and addressed: the calculation of the approximate state transition matrix is simplified, and the weighting strategy is improved for data of different dimensions and different precision. Orbit determination results from simulated and real data show that the work of this thesis is effective: (1) After numerical integration is introduced into the UVM, the accuracy of orbit determination improves obviously, and the method suits the high-accuracy data of current observation apparatus; compared with classical differential improvement with numerical integration, its calculation speed is also improved obviously. (2) After the data fusion method is introduced into the UVM, the weight distribution accords rationally with the accuracy of the different kinds of data, all data are fully used, and the new method also shows good numerical stability and rational weight distribution.
Explosion localization via infrasound.
Szuberla, Curt A L; Olson, John V; Arnoult, Kenneth M
2009-11-01
Two acoustic source localization techniques were applied to infrasonic data and their relative performance was assessed. The standard approach for low-frequency localization uses an ensemble of small arrays to separately estimate far-field source bearings, resulting in a solution from the various back azimuths. This method was compared to one developed by the authors that treats the smaller subarrays as a single, meta-array. In numerical simulation and a field experiment, the latter technique was found to provide improved localization precision everywhere in the vicinity of a 3-km-aperture meta-array, often by an order of magnitude.
Manufacturing engineering: Principles for optimization
NASA Astrophysics Data System (ADS)
Koenig, Daniel T.
Various subjects in the area of manufacturing engineering are addressed. The topics considered include: manufacturing engineering organization concepts and management techniques, factory capacity and loading techniques, capital equipment programs, machine tool and equipment selection and implementation, producibility engineering, methods, planning and work management, and process control engineering in job shops. Also discussed are: maintenance engineering, numerical control of machine tools, fundamentals of computer-aided design/computer-aided manufacture, computer-aided process planning and data collection, group technology basis for plant layout, environmental control and safety, and the Integrated Productivity Improvement Program.
An integrated approach to improving noisy speech perception
NASA Astrophysics Data System (ADS)
Koval, Serguei; Stolbov, Mikhail; Smirnova, Natalia; Khitrov, Mikhail
2002-05-01
For a number of practical purposes and tasks, experts have to decode speech recordings of very poor quality. A combination of techniques is proposed to improve intelligibility and quality of distorted speech messages and thus facilitate their comprehension. Along with the application of noise cancellation and speech signal enhancement techniques removing and/or reducing various kinds of distortions and interference (primarily unmasking and normalization in time and frequency fields), the approach incorporates optimal listener expert tactics based on selective listening, nonstandard binaural listening, accounting for short-term and long-term human ear adaptation to noisy speech, as well as some methods of speech signal enhancement to support speech decoding during listening. The approach integrating the suggested techniques ensures high-quality ultimate results and has successfully been applied by Speech Technology Center experts and by numerous other users, mainly forensic institutions, to perform noisy speech records decoding for courts, law enforcement and emergency services, accident investigation bodies, etc.
Statistical analysis of RHIC beam position monitors performance
NASA Astrophysics Data System (ADS)
Calaga, R.; Tomás, R.
2004-04-01
A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
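A toy version of the SVD-based screening (not the RHIC analysis itself): simulate turn-by-turn readings in which the betatron motion spans two singular modes, inject one noisy channel, and flag the BPM that dominates the non-physical modes. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_turns, n_bpm, tune = 1024, 72, 0.22
turns = np.arange(n_turns)[:, None]
phases = np.cumsum(rng.uniform(0.05, 0.12, n_bpm))[None, :]     # BPM betatron phases

# Turn-by-turn matrix: common betatron motion + small noise, one malfunctioning BPM
B = np.sin(2 * np.pi * tune * turns + phases) + 0.01 * rng.standard_normal((n_turns, n_bpm))
bad = 17
B[:, bad] += 0.5 * rng.standard_normal(n_turns)                 # faulty channel: large uncorrelated noise

# SVD: the betatron motion lives in the two leading modes; a faulty BPM dominates a noise mode
U, s, Vt = np.linalg.svd(B - B.mean(axis=0), full_matrices=False)
noise_v = Vt[2:]                                                # spatial vectors outside the physical modes
score = np.sqrt((noise_v**2 * s[2:, None]**2).sum(axis=0))      # per-BPM energy in the noise modes
print("flagged BPM:", int(np.argmax(score)), "(injected fault at", bad, ")")
```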
Ultrasonic tissue characterization for monitoring nanostructured TiO2-induced bone growth
NASA Astrophysics Data System (ADS)
Rus, G.; García-Martínez, J.
2007-07-01
The use of bioactive nanostructured TiO2 has recently been proposed for improving orthopaedic implant adhesion due to its improved biocompatibility with bone, since it induces: (i) osteoblast function, (ii) apatite nucleation and (iii) protein adsorption. The present work focuses on a non-ionizing radiation emitting technique for quantifying in real time the improvement in terms of mechanical properties of the surrounding bone due to the presence of the nanostructured TiO2 prepared by controlled precipitation and acid ageing. The mechanical strength is the ultimate goal of a bone implant and is directly related to the elastic moduli. Ultrasonics are high frequency mechanical waves and are therefore suited for characterizing elastic moduli. As opposed to echographic techniques, which are not correlated to elastic properties and are not able to penetrate bone, a low frequency ultrasonic transmission test is proposed, in which a P-wave is transmitted through the specimen and recorded. The problem is posed as an inverse problem, in which the unknown is a set of parameters that describe the mechanical constants of the sequence of layers. A finite element numerical model that depends on these parameters is used to predict the transformation of the waveform and compare to the measurement. The parameters that best describe the real tissue are obtained by minimizing the discrepancy between the real and numerically predicted waveforms. A sensitivity study to the uncertainties of the model is performed for establishing the feasibility of using this technique to investigate the macroscopic effect on bone growth of nanostructured TiO2 and its beneficial effect on implant adhesion.
Evaluation of Bogus Vortex Techniques with Four-Dimensional Variational Data Assimilation
NASA Technical Reports Server (NTRS)
Pu, Zhao-Xia; Braun, Scott A.
2000-01-01
The effectiveness of techniques for creating "bogus" vortices in numerical simulations of hurricanes is examined by using the Penn State/NCAR nonhydrostatic mesoscale model (MM5) and its adjoint system. A series of four-dimensional variational data assimilation (4-D VAR) experiments is conducted to generate an initial vortex for Hurricane Georges (1998) in the Atlantic Ocean by assimilating bogus sea-level pressure and surface wind information into the mesoscale numerical model. Several different strategies are tested for improving the vortex representation. The initial vortices produced by the 4-D VAR technique are able to reproduce many of the structural features of mature hurricanes. The vortices also result in significant improvements to the hurricane forecasts in terms of both intensity and track. In particular, with assimilation of only bogus sea-level pressure information, the response in the wind field is contained largely within the divergent component, with strong convergence leading to strong upward motion near the center. Although the intensity of the initial vortex seems to be well represented, a dramatic spin down of the storm occurs within the first 6 h of the forecast. With assimilation of bogus surface wind data only, an expected dominance of the rotational component of the wind field is generated, but the minimum pressure is adjusted inadequately compared to the actual hurricane minimum pressure. Only when both the bogus surface pressure and wind information are assimilated together does the model produce a vortex that represents the actual intensity of the hurricane and results in significant improvements to forecasts of both hurricane intensity and track.
NASA Astrophysics Data System (ADS)
Hozman, J.; Tichý, T.
2017-12-01
Stochastic volatility models capture the real-world features of options better than the classical Black-Scholes treatment. Here we focus on the pricing of European-style options under the Stein-Stein stochastic volatility model, where the option value depends on time, on the price of the underlying asset, and on the volatility as a function of a mean-reverting Ornstein-Uhlenbeck process. A standard mathematical approach to this model leads to a non-stationary second-order degenerate partial differential equation in two spatial variables, completed by a system of boundary and terminal conditions. In order to improve the numerical valuation process for such a pricing equation, we propose a numerical technique based on the discontinuous Galerkin method and the Crank-Nicolson scheme. Finally, reference numerical experiments on real market data illustrate comprehensive empirical findings on options with stochastic volatility.
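As a hedged illustration of the Crank-Nicolson time stepping mentioned above, the sketch below applies the scheme to a plain one-dimensional diffusion equation with a finite-difference space discretization; it is a simplified stand-in, not the discontinuous Galerkin discretization of the Stein-Stein pricing equation used by the authors.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

# Crank-Nicolson for u_t = u_xx on [0, 1] with zero boundary values.
nx, nt, dt = 101, 200, 1e-4
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.sin(np.pi * x)                      # initial condition

# Second-difference operator on the interior nodes.
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(nx - 2, nx - 2)) / dx**2
I = identity(nx - 2)
lhs = (I - 0.5 * dt * A).tocsc()           # implicit half of the step
rhs_op = I + 0.5 * dt * A                  # explicit half of the step

for _ in range(nt):
    u[1:-1] = spsolve(lhs, rhs_op @ u[1:-1])

# Exact heat-equation solution for this initial condition, for comparison.
exact = np.exp(-np.pi**2 * nt * dt) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))           # small discretization error
```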
NASA Technical Reports Server (NTRS)
Baldwin, B. S.; Maccormack, R. W.; Deiwert, G. S.
1975-01-01
The time-splitting explicit numerical method of MacCormack is applied to separated turbulent boundary layer flow problems. Modifications of this basic method are developed to counter difficulties associated with complicated geometry and severe numerical resolution requirements of turbulence model equations. The accuracy of solutions is investigated by comparison with exact solutions for several simple cases. Procedures are developed for modifying the basic method to improve the accuracy. Numerical solutions of high-Reynolds-number separated flows over an airfoil and shock-separated flows over a flat plate are obtained. A simple mixing length model of turbulence is used for the transonic flow past an airfoil. A nonorthogonal mesh of arbitrary configuration facilitates the description of the flow field. For the simpler geometry associated with the flat plate, a rectangular mesh is used, and solutions are obtained based on a two-equation differential model of turbulence.
A diagnostic technique used to obtain cross range radiation centers from antenna patterns
NASA Technical Reports Server (NTRS)
Lee, T. H.; Burnside, W. D.
1988-01-01
A diagnostic technique to obtain cross range radiation centers based on antenna radiation patterns is presented. This method is similar to the synthetic aperture processing of scattered fields in the radar application. Coherent processing of the radiated fields is used to determine the various radiation centers associated with the far-zone pattern of an antenna for a given radiation direction. This technique can be used to identify an unexpected radiation center that creates an undesired effect in a pattern; on the other hand, it can improve a numerical simulation of the pattern by identifying other significant mechanisms. Cross range results for two 8' reflector antennas are presented to illustrate and validate the technique.
De Meulemeester, Kayleigh E; Castelein, Birgit; Coppieters, Iris; Barbe, Tom; Cools, Ann; Cagnie, Barbara
2017-01-01
The aim of this study was to investigate short-term and long-term treatment effects of dry needling (DN) and manual pressure (MP) technique with the primary goal of determining if DN has better effects on disability, pain, and muscle characteristics in treating myofascial neck/shoulder pain in women. In this randomized clinical trial, 42 female office workers with myofascial neck/shoulder pain were randomly allocated to either a DN or MP group and received 4 treatments. They were evaluated with the Neck Disability Index, general numeric rating scale, pressure pain threshold, and muscle characteristics before and after treatment. For each outcome parameter, a linear mixed-model analysis was applied to reveal group-by-time interaction effects or main effects for the factor "time." No significant differences were found between DN and MP. In both groups, significant improvement in the Neck Disability Index was observed after 4 treatments and 3 months (P < .001); the general numerical rating scale also significantly decreased after 3 months. After the 4-week treatment program, there was a significant improvement in pain pressure threshold, muscle elasticity, and stiffness. Both treatment techniques lead to short-term and long-term treatment effects. Dry needling was found to be no more effective than MP in the treatment of myofascial neck/shoulder pain. Copyright © 2016. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Clegg, R. A.; White, D. M.; Hayhurst, C.; Ridel, W.; Harwick, W.; Hiermaier, S.
2003-09-01
The development and validation of an advanced material model for orthotropic materials, such as fibre reinforced composites, is described. The model is specifically designed to facilitate the numerical simulation of impact and shock wave propagation through orthotropic materials and the prediction of subsequent material damage. Initial development of the model concentrated on correctly representing shock wave propagation in composite materials under high and hypervelocity impact conditions [1]. This work has now been extended to further concentrate on the development of improved numerical models and material characterisation techniques for the prediction of damage, including residual strength, in fibre reinforced composite materials. The work is focussed on Kevlar-epoxy; however, materials such as CFRP are also being considered. The paper describes our most recent activities in relation to the implementation of advanced material modelling options in this area. These enable refined non-linear directional characteristics of composite materials to be modelled, in addition to the correct thermodynamic response under shock wave loading. The numerical work is backed by an extensive experimental programme covering a wide range of static and dynamic tests to facilitate derivation of model input data and to validate the predicted material response. Finally, the capability of the developing composite material model is discussed in relation to a hypervelocity impact problem.
A simplified model for TIG-dressing numerical simulation
NASA Astrophysics Data System (ADS)
Ferro, P.; Berto, F.; James, M. N.
2017-04-01
Irrespective of the mechanical properties of the alloy to be welded, the fatigue strength of welded joints is primarily controlled by the stress concentration associated with the weld toe or weld root. In order to reduce the effects of such notch defects in welds, which are influenced by tensile properties of the alloy, post-weld improvement techniques have been developed. The two most commonly used techniques are weld toe grinding and TIG dressing, which are intended to both remove toe defects such as non-metallic intrusions and to re-profile the weld toe region to give a lower stress concentration. In the case of TIG dressing the weld toe is re-melted to provide a smoother transition between the plate and the weld crown and to beneficially modify the residual stress redistribution. Assessing the changes to weld stress state arising from TIG-dressing is most easily accomplished through a complex numerical simulation that requires coupled thermo-fluid dynamics and solid mechanics. However, this can be expensive in terms of computational cost and time needed to reach a solution. The present paper therefore proposes a simplified numerical model that overcomes such drawbacks and which simulates the remelted toe region by means of the activation and deactivation of elements in the numerical model.
Nimmermark, Magnus O; Wang, John J; Maynard, Charles; Cohen, Mauricio; Gilcrist, Ian; Heitner, John; Hudson, Michael; Palmeri, Sebastian; Wagner, Galen S; Pahlm, Olle
2011-01-01
The study purpose is to determine whether numeric and/or graphic ST measurements added to the display of the 12-lead electrocardiogram (ECG) would influence cardiologists' decision to provide myocardial reperfusion therapy. Twenty ECGs with borderline ST-segment deviation during elective percutaneous coronary intervention and 10 controls before balloon inflation were included. Only 5 of the 20 ECGs during coronary balloon occlusion met the 2007 American Heart Association guidelines for ST-elevation myocardial infarction (STEMI). Fifteen cardiologists read 4 sets of these ECGs as the basis for a "yes/no" reperfusion therapy decision. Sets 1 and 4 were the same 12-lead ECGs alone. Set 2 also included numeric ST-segment measurements, and set 3 included both numeric and graphically displayed ST measurements ("ST Maps"). The mean (range) positive reperfusion decisions were 10.6 (2-15), 11.4 (1-19), 9.7 (2-14), and 10.7 (1-15) for sets 1 to 4, respectively. The accuracies of the observers for the 5 STEMI ECGs were 67%, 69%, and 77% for the standard format, the ST numeric format, and the ST graphic format, respectively. The improved detection rate (77% vs 67%) with addition of both numeric and graphic displays did achieve statistical significance (P < .025). The corresponding specificities for the 10 control ECGs were 85%, 79%, and 89%, respectively. In conclusion, a wide variation of reperfusion decisions was observed among clinical cardiologists, and their decisions were not altered by adding ST deviation measurements in numeric and/or graphic displays. Acute coronary occlusion detection rate was low for ECGs meeting STEMI criteria, and this was improved by adding ST-segment measurements in numeric and graphic forms. These results merit further study of the clinical value of this technique for improved acute coronary occlusion treatment decision support. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Towner, Robert L.; Band, Jonathan L.
2012-01-01
An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis) and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
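For readers unfamiliar with the numerical indicators mentioned above, the following minimal sketch computes the Modal Assurance Criterion and a mass-weighted cross-orthogonality matrix, and pairs modes by the largest MAC value. The matrix names and the simple greedy pairing rule are illustrative assumptions, not the adaptive tracking algorithm of the paper.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape sets.

    phi_a: (n_dof, n_modes_a), phi_b: (n_dof, n_modes_b).
    Returns the MAC matrix; entries near 1 indicate correlated shapes."""
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a * phi_a, axis=0),
                   np.sum(phi_b * phi_b, axis=0))
    return num / den

def cross_orthogonality(phi_a, phi_b, mass):
    """Mass-weighted cross-orthogonality check, |phi_a^T M phi_b|."""
    return np.abs(phi_a.T @ mass @ phi_b)

def track_modes(phi_ref, phi_new):
    """Pair each reference mode with the new mode of highest MAC value."""
    return np.argmax(mac(phi_ref, phi_new), axis=1)
```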
Rapid prototyping model for percutaneous nephrolithotomy training.
Bruyère, Franck; Leroux, Cecile; Brunereau, Laurent; Lermusiaux, Patrick
2008-01-01
Rapid prototyping is a technique for creating physical three-dimensional models from computer images more efficiently than classic fabrication techniques. Percutaneous nephrolithotomy (PCNL) is a popular method to remove kidney stones; however, broader use by the urologic community has been hampered by the morbidity associated with needle puncture to gain access to the renal calix (bleeding, pneumothorax, hydrothorax, inadvertent colon injury). A training model to improve technique and understanding of renal anatomy could reduce complications related to renal puncture; however, no model currently exists for resident training. We created a training model using the rapid prototyping technique based on abdominal CT images of a patient scheduled to undergo PCNL. This allowed our staff and residents to train on the model before performing the operation. This model allowed anticipation of particular difficulties inherent to the patient's anatomy. After training, the procedure proceeded without complication, and the patient was discharged at postoperative day 1 without problems. We hypothesize that rapid prototyping could be useful for resident education, allowing the creation of numerous models for research and surgical training. In addition, we anticipate that experienced urologists could find this technique helpful in preparation for difficult PCNL operations.
NASA Astrophysics Data System (ADS)
Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry
2016-03-01
In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. These techniques often complicate interventions by requiring additional steps taken to manually define and initialize virtual models. Furthermore, overlaying virtual elements into real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics have achieved comparable performance to previous work in augmented virtuality techniques, and considerable improvement over standard of care ultrasound guidance.
Towards a generalized computational fluid dynamics technique for all Mach numbers
NASA Technical Reports Server (NTRS)
Walters, R. W.; Slack, D. C.; Godfrey, A. G.
1993-01-01
Currently there exists no single unified approach for efficiently and accurately solving computational fluid dynamics (CFD) problems across the Mach number regime, from truly low speed incompressible flows to hypersonic speeds. There are several CFD codes that have evolved into sophisticated prediction tools with a wide variety of features including multiblock capabilities, generalized chemistry and thermodynamics models among other features. However, as these codes evolve, the demand placed on the end user also increases simply because of the myriad of features that are incorporated into these codes. In order for a user to be able to solve a wide range of problems, several codes may be needed requiring the user to be familiar with the intricacies of each code and their rather complicated input files. Moreover, the cost of training users and maintaining several codes becomes prohibitive. The objective of the current work is to extend the compressible, characteristic-based, thermochemical nonequilibrium Navier-Stokes code GASP to very low speed flows and simultaneously improve convergence at all speeds. Before this work began, the practical speed range of GASP was Mach numbers on the order of 0.1 and higher. In addition, a number of new techniques have been developed for more accurate physical and numerical modeling. The primary focus has been on the development of optimal preconditioning techniques for the Euler and the Navier-Stokes equations with general finite-rate chemistry models and both equilibrium and nonequilibrium thermodynamics models. We began with the work of Van Leer, Lee, and Roe for inviscid, one-dimensional perfect gases and extended their approach to include three-dimensional reacting flows. The basic steps required to accomplish this task were a transformation to stream-aligned coordinates, the formulation of the preconditioning matrix, incorporation into both explicit and implicit temporal integration schemes, and modification of the numerical flux formulae. In addition, we improved the convergence rate of the implicit time integration schemes in GASP through the use of inner iteration strategies and the use of GMRES (Generalized Minimal Residual), which belongs to the class of algorithms referred to as Krylov subspace iteration. Finally, we significantly improved the practical utility of GASP through the addition of mesh sequencing, a technique in which computations begin on a coarse grid and get interpolated onto successively finer grids. The fluid dynamic problems of interest to the propulsion community involve complex flow physics spanning different velocity regimes and possibly involving chemical reactions. This class of problems results in widely disparate time scales causing numerical stiffness. Even in the absence of chemical reactions, eigenvalue stiffness manifests itself at transonic and very low speed flows which can be quantified by the large condition number of the system and evidenced by slow convergence rates. This results in the need for thorough numerical analysis and subsequent implementation of sophisticated numerical techniques for these difficult yet practical problems. As a result of this work, we have been able to extend the range of applicability of compressible codes to very low speed inviscid flows (M = 0.001) and reacting flows.
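As a small, hedged illustration of the kind of preconditioned Krylov inner iteration described above (using SciPy rather than GASP, and an incomplete-LU preconditioner rather than the stream-aligned preconditioning of the paper):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A small sparse test system standing in for one implicit time step.
n = 500
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete-LU factorization used as a preconditioner for GMRES.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=30, maxiter=200)
print("converged" if info == 0 else f"gmres returned info={info}")
```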
Numerical optimization in Hilbert space using inexact function and gradient evaluations
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in trust region literature. The conditions concerning allowable error are remarkably relaxed: for example, the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
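The sketch below shows the basic trust-region machinery the theory addresses: a step from a simple local model, an actual-versus-predicted reduction ratio, and radius updates. It uses a Cauchy-point step in finite dimensions for brevity; the tolerances, update factors, and test problem are illustrative assumptions, not the Hilbert-space algorithm of the report.

```python
import numpy as np

def trust_region_minimize(f, grad, x0, delta0=1.0, delta_max=10.0,
                          eta=0.1, tol=1e-8, max_iter=500):
    """Minimal trust-region iteration using the Cauchy-point (steepest-descent)
    step and the standard actual-vs-predicted reduction ratio test.

    f and grad may return inexact values (e.g., from an approximate PDE
    solve); the ratio test decides whether to accept a step and how to
    resize the trust region."""
    x, delta = np.asarray(x0, dtype=float), float(delta0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        step = -(delta / gnorm) * g        # Cauchy point of the linear model
        predicted = delta * gnorm          # model reduction along -g
        actual = f(x) - f(x + step)
        rho = actual / predicted
        if rho < 0.25:
            delta *= 0.25                  # poor agreement: shrink region
        elif rho > 0.75:
            delta = min(2.0 * delta, delta_max)
        if rho > eta:                      # accept only sufficient decrease
            x = x + step
    return x

# Example: a simple quadratic bowl centred at (3, -2).
c = np.array([3.0, -2.0])
f = lambda x: 0.5 * np.sum((x - c) ** 2)
g = lambda x: x - c
print(trust_region_minimize(f, g, [0.0, 0.0]))   # approaches [3, -2]
```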
In-line phase contrast micro-CT reconstruction for biomedical specimens.
Fu, Jian; Tan, Renbo
2014-01-01
X-ray phase contrast micro computed tomography (micro-CT) can non-destructively provide the internal structure information of soft tissues and low atomic number materials. It has become an invaluable analysis tool for biomedical specimens. Here an in-line phase contrast micro-CT reconstruction technique is reported, which consists of a projection extraction method and the conventional filtered back-projection (FBP) reconstruction algorithm. The projection extraction is implemented by applying the Fourier transform to the forward projections of in-line phase contrast micro-CT. This work comprises a numerical study of the method and its experimental verification using a biomedical specimen dataset measured at an X-ray tube source micro-CT setup. The numerical and experimental results demonstrate that the presented technique can improve the imaging contrast of biomedical specimens. It will be of interest for a wide range of in-line phase contrast micro-CT applications in medicine and biology.
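A loose sketch of the overall pipeline, assuming scikit-image is available: a generic Fourier-domain filter applied to simulated projections stands in for the paper's projection extraction step (the actual extraction formula is not reproduced here), followed by conventional filtered back-projection.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Simulate parallel-beam projections of a test object.
image = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=angles)

# Stand-in for the projection-extraction step: filter each projection in
# the Fourier domain (a simple Gaussian low-pass, purely illustrative)
# before handing the result to conventional filtered back-projection.
freqs = np.fft.fftfreq(sinogram.shape[0])[:, None]
window = np.exp(-(freqs / 0.25) ** 2)
extracted = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * window, axis=0))

reconstruction = iradon(extracted, theta=angles)   # default ramp filter
print(reconstruction.shape)
```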
Reduced and Validated Kinetic Mechanisms for Hydrogen-CO-Air Combustion in Gas Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yiguang Ju; Frederick Dryer
2009-02-07
Rigorous experimental, theoretical, and numerical investigation of various issues relevant to the development of reduced, validated kinetic mechanisms for synthetic gas combustion in gas turbines was carried out - including the construction of new radiation models for combusting flows, improvement of flame speed measurement techniques, measurements and chemical kinetic analysis of H2/CO/CO2/O2/diluent mixtures, revision of the H2/O2 kinetic model to improve flame speed prediction capabilities, and development of a multi-time scale algorithm to improve computational efficiency in reacting flow simulations.
A reduced order, test verified component mode synthesis approach for system modeling applications
NASA Astrophysics Data System (ADS)
Butland, Adam; Avitabile, Peter
2010-05-01
Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed interface normal modes and those based on a combination of free interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed interface normal modes is the inability to easily obtain the required information from testing; the result of this limitation is that constraint mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed interface normal modes. The CMS approach is then used with this test verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique with both numerical and simulated experimental components to describe the system and validate the proposed approach. Actual test data is then used in the approach proposed. Due to typical measurement data contaminants that are always included in any test, the measured data is further processed to remove contaminants and is then used in the proposed approach. The final case using improved data with the reduced order, test verified components is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. Use of the technique, with its strengths and weaknesses, is discussed.
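For reference, a minimal sketch of the Craig-Bampton reduction itself (constraint modes plus fixed-interface normal modes) is given below; the partitioning, function name, and the assumption of symmetric positive-definite partitions are illustrative and do not reproduce the paper's test-verified updating step.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary_dofs, n_modes):
    """Craig-Bampton reduction of (K, M) keeping boundary DOFs physical.

    boundary_dofs: indices of interface DOFs retained explicitly.
    n_modes: number of fixed-interface normal modes kept.
    Returns the reduced matrices and the transformation T such that
    x_full is approximately T @ [x_boundary; modal coordinates]."""
    n = K.shape[0]
    b = np.asarray(boundary_dofs)
    i = np.setdiff1d(np.arange(n), b)          # interior DOFs

    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    Mii = M[np.ix_(i, i)]

    # Constraint modes: static interior response to unit boundary motion.
    psi = -np.linalg.solve(Kii, Kib)

    # Fixed-interface normal modes: lowest eigenvectors of the interior problem.
    w2, phi = eigh(Kii, Mii)
    phi = phi[:, :n_modes]

    # Assemble the Craig-Bampton transformation matrix.
    T = np.zeros((n, len(b) + n_modes))
    T[b, :len(b)] = np.eye(len(b))
    T[np.ix_(i, np.arange(len(b)))] = psi
    T[np.ix_(i, len(b) + np.arange(n_modes))] = phi

    return T.T @ K @ T, T.T @ M @ T, T
```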
Zhang, Xiaoliang; Martin, Alastair; Jordan, Caroline; Lillaney, Prasheel; Losey, Aaron; Pang, Yong; Hu, Jeffrey; Wilson, Mark; Cooke, Daniel; Hetts, Steven W
2017-04-01
It is technically challenging to design compact yet sensitive miniature catheter radio frequency (RF) coils for endovascular interventional MR imaging. In this work, a new design method for catheter RF coils is proposed based on the coaxial transmission line resonator (TLR) technique. Due to its distributed circuit, the TLR catheter coil does not need any lumped capacitors to support its resonance, which simplifies the practical design and construction and provides a straightforward technique for designing miniature catheter-mounted imaging coils that are appropriate for interventional neurovascular procedures. The outer conductor of the TLR serves as an RF shield, which prevents electromagnetic energy loss, and improves coil Q factors. It also minimizes interaction with surrounding tissues and signal losses along the catheter coil. To investigate the technique, a prototype catheter coil was built using the proposed coaxial TLR technique and evaluated with standard RF testing and measurement methods and MR imaging experiments. Numerical simulation was carried out to assess the RF electromagnetic field behavior of the proposed TLR catheter coil and the conventional lumped-element catheter coil. The proposed TLR catheter coil was successfully tuned to 64 MHz for proton imaging at 1.5 T. B1 fields were numerically calculated, showing improved magnetic field intensity of the TLR catheter coil over the conventional lumped-element catheter coil. MR images were acquired from a dedicated vascular phantom using the TLR catheter coil and also the system body coil. The TLR catheter coil is able to provide a significant signal-to-noise ratio (SNR) increase (a factor of 200 to 300) over its imaging volume relative to the body coil. Catheter imaging RF coil design using the proposed coaxial TLR technique is feasible and advantageous in endovascular interventional MR imaging applications.
Numerical simulations of strongly correlated electron and spin systems
NASA Astrophysics Data System (ADS)
Changlani, Hitesh Jaiprakash
Developing analytical and numerical tools for strongly correlated systems is a central challenge for the condensed matter physics community. In the absence of exact solutions and controlled analytical approximations, numerical techniques have often contributed to our understanding of these systems. Exact Diagonalization (ED) requires the storage of at least two vectors the size of the Hilbert space under consideration (which grows exponentially with system size) which makes it affordable only for small systems. The Density Matrix Renormalization Group (DMRG) uses an intelligent Hilbert space truncation procedure to significantly reduce this cost, but in its present formulation is limited to quasi-1D systems. Quantum Monte Carlo (QMC) maps the Schrodinger equation to the diffusion equation (in imaginary time) and only samples the eigenvector over time, thereby avoiding the memory limitation. However, the stochasticity involved in the method gives rise to the "sign problem" characteristic of fermion and frustrated spin systems. The first part of this thesis is an effort to make progress in the development of a numerical technique which overcomes the above mentioned problems. We consider novel variational wavefunctions, christened "Correlator Product States" (CPS), that have a general functional form which hopes to capture essential correlations in the ground states of spin and fermion systems in any dimension. We also consider a recent proposal to modify projector (Green's Function) Quantum Monte Carlo to ameliorate the sign problem for realistic and model Hamiltonians (such as the Hubbard model). This exploration led to our own set of improvements, primarily a semistochastic formulation of projector Quantum Monte Carlo. Despite their limitations, existing numerical techniques can yield physical insights into a wide variety of problems. The second part of this thesis considers one such numerical technique - DMRG - and adapts it to study the Heisenberg antiferromagnet on a generic tree graph. Our attention turns to a systematic numerical and semi-analytical study of the effect of local even/odd sublattice imbalance on the low energy spectrum of antiferromagnets on regular Cayley trees. Finally, motivated by previous experiments and theories of randomly diluted antiferromagnets (where an even/odd sublattice imbalance naturally occurs), we present our study of the Heisenberg antiferromagnet on the Cayley tree at the percolation threshold. Our work shows how to detect "emergent" low energy degrees of freedom and compute the effective interactions between them by using data from DMRG calculations.
Newtonian nudging for a Richards equation-based distributed hydrological model
NASA Astrophysics Data System (ADS)
Paniconi, Claudio; Marrocu, Marino; Putti, Mario; Verbunt, Mark
The objective of data assimilation is to provide physically consistent estimates of spatially distributed environmental variables. In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any of these features not already incorporated. Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
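A minimal sketch of the nudging forcing term is given below for a generic gridded state variable; the gain, weighting, and toy dynamics are illustrative assumptions rather than the four-dimensional weighting functions of the coupled Richards-equation model.

```python
import numpy as np

def nudge_step(state, model_tendency, obs, obs_mask, dt, gain=1e-3, weight=1.0):
    """One explicit time step of a model with a Newtonian-relaxation term.

    d(state)/dt = model_tendency(state) + gain * weight * (obs - state),
    applied only where observations exist (obs_mask).  'gain' and the
    space-time 'weight' stand in for the four-dimensional weighting
    functions described in the paper."""
    forcing = np.where(obs_mask, gain * weight * (obs - state), 0.0)
    return state + dt * (model_tendency(state) + forcing)

# Toy example: relax a decaying scalar field toward sparse observations.
rng = np.random.default_rng(1)
state = rng.uniform(0.2, 0.4, size=50)        # e.g., a soil moisture column
obs = np.full(50, 0.30)
obs_mask = np.zeros(50, dtype=bool)
obs_mask[::10] = True                          # observations at a few nodes
tendency = lambda s: -0.01 * (s - 0.25)        # simple relaxation dynamics
for _ in range(1000):
    state = nudge_step(state, tendency, obs, obs_mask, dt=60.0)
```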
Variational data assimilation for the initial-value dynamo problem.
Li, Kuan; Jackson, Andrew; Livermore, Philip W
2011-11-01
The secular variation of the geomagnetic field as observed at the Earth's surface results from the complex magnetohydrodynamics taking place in the fluid core of the Earth. One way to analyze this system is to use the data in concert with an underlying dynamical model of the system through the technique of variational data assimilation, in much the same way as is employed in meteorology and oceanography. The aim is to discover an optimal initial condition that leads to a trajectory of the system in agreement with observations. Taking the Earth's core to be an electrically conducting fluid sphere in which convection takes place, we develop the continuous adjoint forms of the magnetohydrodynamic equations that govern the dynamical system together with the corresponding numerical algorithms appropriate for a fully spectral method. These adjoint equations enable a computationally fast iterative improvement of the initial condition that determines the system evolution. The initial condition depends on the three dimensional form of quantities such as the magnetic field in the entire sphere. For the magnetic field, conservation of the divergence-free condition for the adjoint magnetic field requires the introduction of an adjoint pressure term satisfying a zero boundary condition. We thus find that solving the forward and adjoint dynamo system requires different numerical algorithms. In this paper, an efficient algorithm for numerically solving this problem is developed and tested for two illustrative problems in a whole sphere: one is a kinematic problem with prescribed velocity field, and the second is associated with the Hall-effect dynamo, exhibiting considerable nonlinearity. The algorithm exhibits reliable numerical accuracy and stability. Using both the analytical and the numerical techniques of this paper, the adjoint dynamo system can be solved directly with the same order of computational complexity as that required to solve the forward problem. These numerical techniques form a foundation for ultimate application to observations of the geomagnetic field over the time scale of centuries.
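As a compact illustration of why the adjoint is useful here, the sketch below computes the gradient of a quadratic misfit with respect to the initial condition of a small linear forward model by a single backward (adjoint) sweep and checks it against a finite difference; the model matrix and observation setup are illustrative assumptions, not the magnetohydrodynamic system of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_steps = 6, 20
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # linear forward model
x_true = rng.standard_normal(n)

states = [x_true]
for _ in range(n_steps):
    states.append(A @ states[-1])
y = [s + 0.01 * rng.standard_normal(n) for s in states]   # noisy observations

def cost_and_adjoint_gradient(x0):
    """J(x0) = 0.5 * sum_k ||x_k - y_k||^2, with dJ/dx0 from one adjoint sweep."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(A @ xs[-1])
    r = [xk - yk for xk, yk in zip(xs, y)]
    J = 0.5 * sum(rk @ rk for rk in r)
    lam = r[-1]                        # adjoint variable at the final time
    for k in range(n_steps - 1, -1, -1):
        lam = A.T @ lam + r[k]         # propagate the misfit backwards in time
    return J, lam

# Verify the adjoint gradient against a finite-difference directional derivative.
x0 = np.zeros(n)
J0, g = cost_and_adjoint_gradient(x0)
d = rng.standard_normal(n)
d /= np.linalg.norm(d)
eps = 1e-6
J1, _ = cost_and_adjoint_gradient(x0 + eps * d)
print((J1 - J0) / eps, g @ d)          # the two values should agree closely
```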
Using Concept Relations to Improve Ranking in Information Retrieval
Price, Susan L.; Delcambre, Lois M.
2005-01-01
Despite improved search engine technology, most searches return numerous documents not directly related to the query. This problem is mitigated if relevant documents appear high on a ranked list of search results. We propose that some queries and the underlying information needs can be modeled as relationships between concepts (relations), and we match relations in queries to relations in documents to try to improve ranking of search results. We investigate four techniques to identify two relationships important in medicine, causes and treats, to improve the ranking of medical text documents relevant to clinical questions about causation and treatment. Preliminary results suggest that identifying relation instances can improve the ranking of search results. PMID:16779114
NASA Astrophysics Data System (ADS)
Lei, F.; Crow, W. T.; Kustas, W. P.; Yang, Y.; Anderson, M. C.
2017-12-01
Improving water use efficiency and maintaining water sustainability are challenging under rapidly changing natural environments. For decades, extensive field investigations and conceptual/physical numerical modeling have been developed to quantify and track surface water and energy fluxes at different spatial and temporal scales. Meanwhile, with the development of satellite-based sensors, land surface eco-hydrological parameters can be retrieved remotely to supplement ground-based observations. However, both models and remote sensing retrievals contain various sources of errors, and an accurate and spatio-temporally continuous simulation and forecasting system at the field scale is crucial for efficient water management in agriculture. Specifically, data assimilation techniques can optimally integrate measurements acquired from various sources (including in-situ and remotely-sensed data) with numerical models through consideration of different types of uncertainties. In this presentation, we will focus on improving the estimation of water and energy fluxes over a vineyard in California, U.S. A high-resolution remotely-sensed Evaporative Fraction (EF) product from the Atmosphere-Land Exchange Inverse (ALEXI) model will be incorporated into a Soil Vegetation Atmosphere Transfer (SVAT) model via a 2-D data assimilation method. The results will show that both the accuracy and spatial variability of soil water content and evapotranspiration in the SVAT model can be enhanced through the assimilation of EF data. Furthermore, we will demonstrate that by taking the optimized soil water flux as the initial condition and combining it with weather forecasts, future field water status can be predicted under different irrigation scenarios. Finally, we will discuss the practical potential of these advances by leveraging our numerical experiment for the design of new irrigation strategies and water management techniques.
Severe storms and local weather research
NASA Technical Reports Server (NTRS)
1981-01-01
Developments in the use of space related techniques to understand storms and local weather are summarized. The observation of lightning, storm development, cloud development, mesoscale phenomena, and ageostrophic circulation are discussed. Data acquisition, analysis, and the development of improved sensor and computer systems capability are described. Signal processing and analysis and application of Doppler lidar data are discussed. Progress in numerous experiments is summarized.
Effects of Interventions Based in Behavior Analysis on Motor Skill Acquisition: A Meta-Analysis
ERIC Educational Resources Information Center
Alstot, Andrew E.; Kang, Minsoo; Alstot, Crystal D.
2013-01-01
Techniques based in applied behavior analysis (ABA) have been shown to be useful across a variety of settings to improve numerous behaviors. Specifically within physical activity settings, several studies have examined the effect of interventions based in ABA on a variety of motor skills, but the overall effects of these interventions are unknown.…
Coristine, Andrew J.; Yerly, Jerome; Stuber, Matthias
2016-01-01
Background: Two-dimensional (2D) spatially selective radiofrequency (RF) pulses may be used to excite restricted volumes. By incorporating a "pencil beam" 2D pulse into a T2-Prep, one may create a "2D-T2-Prep" that combines T2-weighting with an intrinsic outer volume suppression. This may particularly benefit parallel imaging techniques, where artefacts typically originate from residual foldover signal. By suppressing foldover signal with a 2D-T2-Prep, image quality may therefore improve. We present numerical simulations, phantom and in vivo validations to address this hypothesis. Methods: A 2D-T2-Prep and a conventional T2-Prep were used with GRAPPA-accelerated MRI (R = 1.6). The techniques were first compared in numerical phantoms, where per pixel maps of SNR (SNRmulti), noise, and g-factor were predicted for idealized sequences. Physical phantoms, with compartments doped to mimic blood, myocardium, fat, and coronary vasculature, were scanned with both T2-Preparation techniques to determine the actual SNRmulti and vessel sharpness. For in vivo experiments, the right coronary artery (RCA) was imaged in 10 healthy adults, using accelerations of R = 1, 3, and 6, and vessel sharpness was measured for each. Results: In both simulations and phantom experiments, the 2D-T2-Prep improved SNR relative to the conventional T2-Prep, by an amount that depended on both the acceleration factor and the degree of outer volume suppression. For in vivo images of the RCA, vessel sharpness improved most at higher acceleration factors, demonstrating that the 2D-T2-Prep especially benefits accelerated coronary MRA. Conclusion: Suppressing outer volume signal with a 2D-T2-Prep improves image quality particularly well in GRAPPA-accelerated acquisitions in simulations, phantoms, and volunteers, demonstrating that it should be considered when performing accelerated coronary MRA. PMID:27736866
NASA Astrophysics Data System (ADS)
Blochet, Quentin; Delloro, Francesco; N'Guyen, Franck; Jeulin, Dominique; Borit, François; Jeandin, Michel
2017-04-01
This article deals with the effects of surface preparation of the substrate on aluminum cold-sprayed coating bond strength. Different sets of AA2024-T3 specimens have been coated with pure Al 1050 feedstock powder, using a conventional cold spray coating technique. The sets were grit-blasted (GB) before coating. The study focuses on substrate surface topography evolution before coating and coating-substrate interface morphology after coating. To study coating adhesion by the LASAT® technique for each set, specimens with and without preceding GB treatment were tested in load-controlled conditions. Then, several techniques were used to evaluate the effects of substrate surface treatment on the final coating mechanical properties. Irregularities induced by the GB treatment significantly modify the interface morphology. Results showed that particle anchoring was improved dramatically by the presence of craters. The substrate surface was characterized by numerous anchors. Numerical simulation results showed increased particle deformation on the grit-blasted surface. In addition, results showed a strong dependence of the coating-substrate bond strength on the deposited material and surface preparation.
An Overview of the History of Orthopedic Surgery.
Swarup, Ishaan; O'Donnell, Joseph F
Orthopedic surgery has a long and rich history. While the modern term orthopedics was coined in the 1700s, orthopedic principles were beginning to be developed and used during primitive times. The Egyptians continued these practices, and described ways to recognize and manage common orthopedic conditions. The Greeks and Romans subsequently began to study medicine in a systematic manner, and greatly improved our understanding of orthopedic anatomy and surgical technique. After a period of little progress during the Middle Ages, rapid advancement was noted during the Renaissance, including the description of various injuries, improvements in surgical technique, and development of orthopedic hospitals. Collectively, these advances provided the foundation for modern orthopedics. Currently, orthopedic surgery is a rapidly developing field that has benefited from the works of numerous scholars and surgeons. It is important to recognize the successes and failures of the past, in order to advance research and practice as well as improve patient care and clinical outcomes.
NASA's program on icing research and technology
NASA Technical Reports Server (NTRS)
Reinmann, John J.; Shaw, Robert J.; Ranaudo, Richard J.
1989-01-01
NASA's program in aircraft icing research and technology is reviewed. The program relies heavily on computer codes and modern applied physics technology in seeking icing solutions on a finer scale than those offered in earlier programs. Three major goals of this program are to offer new approaches to ice protection, to improve our ability to model the response of an aircraft to an icing encounter, and to provide improved techniques and facilities for ground and flight testing. This paper reviews the following program elements: (1) new approaches to ice protection; (2) numerical codes for deicer analysis; (3) measurement and prediction of ice accretion and its effect on aircraft and aircraft components; (4) special wind tunnel test techniques for rotorcraft icing; (5) improvements of icing wind tunnels and research aircraft; (6) ground de-icing fluids used in winter operation; (7) fundamental studies in icing; and (8) droplet sizing instruments for icing clouds.
Numerical investigation of tube hydroforming of TWT using the Corner Fill Test
NASA Astrophysics Data System (ADS)
Zribi, Temim; Khalfallah, Ali
2018-05-01
Tube hydroforming presents a very good alternative to conventional forming processes for obtaining good quality mechanical parts used in several industrial fields, such as the automotive and aerospace sectors. Research in the field of tube hydroforming is aimed at improving the formability, stiffness and weight reduction of manufactured parts using this process. In recent years, a new method of hydroforming has appeared; it consists of deforming parts made from welded tubes of different thicknesses. This technique, which contributes to the weight reduction of hydroformed tubes, is a good alternative to conventional tube hydroforming and makes it possible to build rigid and light structures at a reduced cost. However, it is possible to improve the weight reduction further by using dissimilar tailor welded tubes (TWT). This paper is a first attempt to analyze by numerical simulation the behavior of TWT hydroformed in square cross-section dies, a configuration commonly called the Corner Fill Test. The tubes considered are composed of two materials assembled by butt welding. The present analysis focuses on the effect of loading paths on the formability of the structure by determining the change in thickness in several sections of the part. A comparison between the results obtained by hydroforming butt-welded tubes made of dissimilar materials and those obtained using a single-material tube is carried out. Numerical calculations show that the bi-material welded tube has better thinning resistance and a more even thickness distribution in the circumferential direction when compared to the single-material tube.
A Meta-Analytic Review of Stand-Alone Interventions to Improve Body Image
Alleva, Jessica M.; Sheeran, Paschal; Webb, Thomas L.; Martijn, Carolien; Miles, Eleanor
2015-01-01
Objective: Numerous stand-alone interventions to improve body image have been developed. The present review used meta-analysis to estimate the effectiveness of such interventions, and to identify the specific change techniques that lead to improvement in body image. Methods: The inclusion criteria were that (a) the intervention was stand-alone (i.e., solely focused on improving body image), (b) a control group was used, (c) participants were randomly assigned to conditions, and (d) at least one pretest and one posttest measure of body image was taken. Effect sizes were meta-analysed and moderator analyses were conducted. A taxonomy of 48 change techniques used in interventions targeted at body image was developed; all interventions were coded using this taxonomy. Results: The literature search identified 62 tests of interventions (N = 3,846). Interventions produced a small-to-medium improvement in body image (d+ = 0.38), a small-to-medium reduction in beauty ideal internalisation (d+ = -0.37), and a large reduction in social comparison tendencies (d+ = -0.72). However, the effect size for body image was inflated by bias both within and across studies, and was reliable but of small magnitude once corrections for bias were applied. Effect sizes for the other outcomes were no longer reliable once corrections for bias were applied. Several features of the sample, intervention, and methodology moderated intervention effects. Twelve change techniques were associated with improvements in body image, and three techniques were contra-indicated. Conclusions: The findings show that interventions engender only small improvements in body image, and underline the need for large-scale, high-quality trials in this area. The review identifies effective techniques that could be deployed in future interventions. PMID:26418470
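For context on how such pooled effect sizes are typically obtained, the sketch below implements standard DerSimonian-Laird random-effects pooling of per-study effect sizes; the input numbers are invented for illustration and are not the review's data.

```python
import numpy as np

def random_effects_pool(d, var):
    """DerSimonian-Laird random-effects pooling of standardized mean differences.

    d: per-study effect sizes (e.g., Cohen's d for body image change);
    var: their sampling variances.  Returns the pooled effect and its
    standard error."""
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var                                  # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)             # heterogeneity statistic
    df = len(d) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                  # between-study variance
    w_star = 1.0 / (var + tau2)                    # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se

# Illustrative (invented) study-level inputs.
print(random_effects_pool([0.5, 0.2, 0.45, 0.3], [0.04, 0.02, 0.05, 0.03]))
```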
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Anagnostou, E. N.; Hartman, B.; Kallos, G. B.
2015-12-01
Weather prediction accuracy has become very important for the Northeast U.S. given the devastating effects of extreme weather events in the recent years. Weather forecasting systems are used towards building strategies to prevent catastrophic losses for human lives and the environment. Concurrently, weather forecast tools and techniques have evolved with improved forecast skill as numerical prediction techniques are strengthened by increased super-computing resources. In this study, we examine the combination of two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) by utilizing a Bayesian regression approach to improve the prediction of extreme weather events for NE U.S. The basic concept behind the Bayesian regression approach is to take advantage of the strengths of two atmospheric modeling systems and, similar to the multi-model ensemble approach, limit their weaknesses which are related to systematic and random errors in the numerical prediction of physical processes. The first part of this study is focused on retrospective simulations of seventeen storms that affected the region in the period 2004-2013. Optimal variances are estimated by minimizing the root mean square error and are applied to out-of-sample weather events. The applicability and usefulness of this approach are demonstrated by conducting an error analysis based on in-situ observations from meteorological stations of the National Weather Service (NWS) for wind speed and wind direction, and NCEP Stage IV radar data, mosaicked from the regional multi-sensor for precipitation. The preliminary results indicate a significant improvement in the statistical metrics of the modeled-observed pairs for meteorological variables using various combinations of the sixteen events as predictors of the seventeenth. This presentation will illustrate the implemented methodology and the obtained results for wind speed, wind direction and precipitation, as well as set the research steps that will be followed in the future.
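As a hedged, simplified stand-in for the model-combination idea described above, the sketch below fits least-squares blending weights for two forecasts on a training subset and evaluates them out-of-sample; the variable names and synthetic data are assumptions, and the actual study uses a Bayesian regression with optimal variances rather than this plain regression.

```python
import numpy as np

def fit_combination_weights(pred_a, pred_b, obs):
    """Least-squares weights (with intercept) for blending two model forecasts.

    pred_a, pred_b, obs: 1-D arrays of collocated forecast/observation pairs
    pooled over the training storms.  Minimizing squared error here is a
    simple stand-in for the Bayesian regression used in the study."""
    X = np.column_stack([np.ones_like(pred_a), pred_a, pred_b])
    beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return beta

def combine(beta, pred_a, pred_b):
    return beta[0] + beta[1] * pred_a + beta[2] * pred_b

# Example with synthetic wind-speed forecasts from two models.
rng = np.random.default_rng(3)
truth = rng.uniform(2, 25, 500)
model_a = truth + rng.normal(1.0, 2.0, truth.size)    # biased, noisy
model_b = 0.9 * truth + rng.normal(0.0, 1.5, truth.size)

beta = fit_combination_weights(model_a[:400], model_b[:400], truth[:400])
blend = combine(beta, model_a[400:], model_b[400:])
rmse = lambda x, y: np.sqrt(np.mean((x - y) ** 2))
print(rmse(model_a[400:], truth[400:]), rmse(blend, truth[400:]))
```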
The GISS sounding temperature impact test
NASA Technical Reports Server (NTRS)
Halem, M.; Ghil, M.; Atlas, R.; Susskind, J.; Quirk, W. J.
1978-01-01
The impact of DST 5 and DST 6 satellite sounding data on mid-range forecasting was studied. The GISS temperature sounding technique, the GISS time-continuous four-dimensional assimilation procedure based on optimal statistical analysis, the GISS forecast model, and the verification techniques developed, including the impact on local precipitation forecasts, are described. It is found that the impact of sounding data was substantial and beneficial for the winter test period, Jan. 29 - Feb. 21, 1976. Forecasts started from initial states obtained with the aid of satellite data showed a mean improvement of about 4 points in the 48- and 72-hour S1 scores as verified over North America and Europe. This corresponds to an 8 to 12 hour improvement in forecast range at 48 hours. An automated local precipitation forecast model applied to 128 cities in the United States showed on average a 15% improvement when satellite data were used for numerical forecasts. The improvement was 75% in the Midwest.
An Accurate and Stable FFT-based Method for Pricing Options under Exp-Lévy Processes
NASA Astrophysics Data System (ADS)
Ding, Deng; Chong U, Sio
2010-05-01
An accurate and stable method for pricing European options in exp-Lévy models is presented. The main idea of this new method is to combine the quadrature technique with the Carr-Madan Fast Fourier Transform method. The theoretical analysis shows that the overall complexity of this new method is still O(N log N) with N grid points, as for the fast Fourier transform methods. Numerical experiments for different exp-Lévy processes also show that the proposed numerical algorithm is accurate and stable for small strike prices K. This develops and improves upon the Carr-Madan method.
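As a hedged illustration of the Carr-Madan representation that underlies such FFT methods, the sketch below evaluates the damped-call Fourier integral by direct quadrature for the Black-Scholes characteristic function (the simplest exp-Lévy case) and checks it against the closed-form price; it omits the FFT strike grid and the quadrature refinements proposed by the authors, and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Model inputs (Black-Scholes as the simplest exp-Levy example).
S0, K, r, sigma, T = 100.0, 95.0, 0.03, 0.2, 1.0
alpha = 1.5                      # Carr-Madan damping parameter

def phi(u):
    """Characteristic function of ln(S_T) under Black-Scholes."""
    return np.exp(1j * u * (np.log(S0) + (r - 0.5 * sigma**2) * T)
                  - 0.5 * sigma**2 * u**2 * T)

def damped_transform(v):
    """Carr-Madan Fourier transform of the damped call price."""
    return (np.exp(-r * T) * phi(v - (alpha + 1) * 1j)
            / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v))

k = np.log(K)
integrand = lambda v: np.real(np.exp(-1j * v * k) * damped_transform(v))
integral, _ = quad(integrand, 0.0, 200.0, limit=200)
call_quadrature = np.exp(-alpha * k) / np.pi * integral

# Closed-form Black-Scholes call for comparison.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
call_bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
print(call_quadrature, call_bs)   # the two values should agree closely
```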
Three Dimensional Imaging of the Nucleon
NASA Astrophysics Data System (ADS)
More, Jai; Mukherjee, Asmita; Nair, Sreeraj
2018-05-01
We study the Wigner distributions of quarks and gluons in the light-front dressed quark model using the overlap of light-front wave functions (LFWFs). We take the target to be a dressed quark, a composite spin-1/2 state of a quark dressed with a gluon. This state allows us to calculate the quark and gluon Wigner distributions analytically in terms of LFWFs using Hamiltonian perturbation theory. We numerically analyze the quark and gluon Wigner distributions and report their nature in contour plots. We use an improved numerical technique to remove the cutoff dependence of the Fourier-transformed integral over the transverse momentum transfer $\Delta_\perp$.
Power corrections in the N -jettiness subtraction scheme
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boughezal, Radja; Liu, Xiaohui; Petriello, Frank
We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $q\bar{q}$ and gg initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss what features of our techniques extend to processes containing final-state jets.
NASA Astrophysics Data System (ADS)
Saxena, A. K.; Kaushik, T. C.; Gupta, Satish C.
2010-03-01
Two low energy (1.6 and 8 kJ) portable electrically exploding foil accelerators have been developed for moderately high pressure shock studies at small laboratory scale. Projectile velocities up to 4.0 km/s have been measured on Kapton flyers of thickness 125 μm and diameter 8 mm, using an in-house developed Fabry-Pérot velocimeter. An asymmetric tilt of typically a few milliradians has been measured in the flyers using a fiber-optic technique. High pressure impact experiments have been carried out on tantalum and aluminum targets up to pressures of 27 and 18 GPa, respectively. Peak particle velocities at the target-glass interface as measured by the Fabry-Pérot velocimeter have been found to be in good agreement with the reported equation of state data. A one-dimensional hydrodynamic code based on realistic models of equation of state and electrical resistivity has been developed to numerically simulate the flyer velocity profiles. The developed numerical scheme is validated against experimental and simulation data reported in the literature on such systems. Numerically computed flyer velocity profiles and final flyer velocities have been found to be in close agreement with the previously reported experimental results, with a significant improvement over reported magnetohydrodynamic simulations. Numerical modeling of the low energy systems reported here predicts flyer velocity profiles higher than the experimental values, indicating the possibility of further improvement to achieve higher shock pressures.
Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark
2002-01-01
Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.
Iuculano, Teresa; Cohen Kadosh, Roi
2014-01-01
Nearly 7% of the population exhibit difficulties in dealing with numbers and performing arithmetic, a condition named Developmental Dyscalculia (DD), which significantly affects the educational and professional outcomes of these individuals, as it often persists into adulthood. Research has mainly focused on behavioral rehabilitation, while little is known about performance changes and neuroplasticity induced by the concurrent application of brain-behavioral approaches. It has been shown that numerical proficiency can be enhanced by applying a small yet constant current through the brain, a non-invasive technique named transcranial electrical stimulation (tES). Here we combined a numerical learning paradigm with transcranial direct current stimulation (tDCS) in two adults with DD to assess the potential benefits of this methodology to remediate their numerical difficulties. Subjects learned to associate artificial symbols with numerical quantities within the context of a trial and error paradigm, while tDCS was applied to the posterior parietal cortex (PPC). The first subject (DD1) received anodal stimulation to the right PPC and cathodal stimulation to the left PPC, which has been associated with improvements in numerical performance in healthy subjects. The second subject (DD2) received anodal stimulation to the left PPC and cathodal stimulation to the right PPC, which has been shown to impair numerical performance in healthy subjects. We examined two indices of numerical proficiency: (i) automaticity of number processing; and (ii) mapping of numbers onto space. Our results are opposite to previous findings with non-dyscalculic subjects. Only anodal stimulation to the left PPC improved both indices of numerical proficiency. These initial results represent an important step to inform the rehabilitation of developmental learning disabilities, and have relevant applications for basic and applied research in cognitive neuroscience, rehabilitation, and education.
Kahramangil, Bora; Mohsin, Khuzema; Alzahrani, Hassan; Bu Ali, Daniah; Tausif, Syed; Kang, Sang-Wook; Kandil, Emad; Berber, Eren
2017-12-01
Numerous new approaches have been described over the years to improve the cosmetic outcomes of thyroid surgery. Transoral approach is a new technique that aims to achieve superior cosmetic outcomes by concealing the incision in the oral cavity. Transoral thyroidectomy through vestibular approach was performed in two institutions on cadaveric models. Procedure was performed endoscopically in one institution, while the robotic technique was utilized at the other. Transoral thyroidectomy was successfully performed at both institutions with robotic and endoscopic techniques. All vital structures were identified and preserved. Transoral thyroidectomy has been performed in animal and cadaveric models, as well as in some clinical studies. Our initial experience indicates the feasibility of this approach. More clinical studies are required to elucidate its full utility.
SOIL AND SEDIMENT SAMPLING METHODS
The EPA Office of Solid Waste and Emergency Response's (OSWER) Office of Superfund Remediation and Technology Innovation (OSRTI) needs innovative methods and techniques to solve new and difficult sampling and analytical problems found at the numerous Superfund sites throughout the United States. Inadequate site characterization and a lack of knowledge of surface and subsurface contaminant distributions hinder EPA's ability to make the best decisions on remediation options and to conduct the most effective cleanup efforts. To assist OSWER, NERL conducts research to improve its capability to more accurately, precisely, and efficiently characterize Superfund, RCRA, LUST, oil spill, and brownfield sites and to improve its risk-based decision-making capabilities; research is being conducted on improving soil and sediment sampling techniques and on improving the sampling and handling of volatile organic compound (VOC) contaminated soils, among the many research programs and tasks being performed at ESD-LV. Under this task, improved sampling approaches and devices will be developed for characterizing the concentration of VOCs in soils. Current approaches and devices used today can lose up to 99% of the VOCs present in the sample due to inherent weaknesses in the device and improper/inadequate collection techniques. This error generally causes decision makers to markedly underestimate the soil VOC concentrations and, therefore, to greatly underestimate the ecological
NASA Technical Reports Server (NTRS)
Lee, J.
1994-01-01
A generalized flow solver using an implicit lower-upper (LU) diagonal decomposition based numerical technique has been coupled with three low-Reynolds-number kappa-epsilon models for the analysis of problems with engineering applications. The feasibility of using the LU technique to obtain efficient solutions to supersonic problems using the kappa-epsilon model has been demonstrated. The flow solver is then used to explore the limitations and convergence characteristics of several popular two-equation turbulence models. Several changes to the LU solver have been made to improve the efficiency of turbulent flow predictions. In general, the low-Reynolds-number kappa-epsilon models are easier to implement than the models with wall functions, but require a much finer near-wall grid to accurately resolve the physics. The three kappa-epsilon models use different approaches to characterize the near-wall regions of the flow. Therefore, the limitations imposed by the near-wall characteristics have been carefully resolved. The convergence characteristics of a particular model using a given numerical technique are also an important, but most often overlooked, aspect of turbulence model predictions. It is found that some convergence characteristics could be sacrificed for more accurate near-wall prediction. However, even this gain in accuracy is not sufficient to model the effects of an external pressure gradient imposed by a shock-wave/boundary-layer interaction. Additional work on turbulence models, especially for compressibility, is required since the solutions obtained with the baseline turbulence models are in only reasonable agreement with the experimental data for the viscous interaction problems.
Compensator improvement for multivariable control systems
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.; Gresham, L. L.
1977-01-01
A theory and the associated numerical technique are developed for an iterative design improvement of the compensation for linear, time-invariant control systems with multiple inputs and multiple outputs. A strict constraint algorithm is used in obtaining a solution of the specified constraints of the control design. The result of the research effort is the multiple input, multiple output Compensator Improvement Program (CIP). The objective of the Compensator Improvement Program is to modify in an iterative manner the free parameters of the dynamic compensation matrix so that the system satisfies frequency domain specifications. In this exposition, the underlying principles of the multivariable CIP algorithm are presented and the practical utility of the program is illustrated with space vehicle related examples.
NASA Technical Reports Server (NTRS)
Berger, B. S.; Duangudom, S.
1973-01-01
A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists in reducing a given initial value problem defined over some interval into a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.
An improved, robust, axial line singularity method for bodies of revolution
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.
1989-01-01
The failures encountered in attempts to increase the range of applicability of the axial line singularity method for representing incompressible, inviscid flow about an inclined, slender body of revolution are noted to be common to all efforts to solve Fredholm equations of the first kind. It is shown that a previously developed smoothing technique yields a robust method for the numerical solution of the governing equations; this technique is easily retrofitted to existing codes, and allows the number of singularities to be increased until the most accurate line singularity solution is obtained.
Supporting clinician educators to achieve "work-work balance".
Maniate, Jerry; Dath, Deepak; Cooke, Lara; Leslie, Karen; Snell, Linda; Busari, Jamiu
2016-10-01
Clinician Educators (CE) have numerous responsibilities in different professional domains, including clinical, education, research, and administration. Many CEs face tensions trying to manage these often competing professional responsibilities and achieve "work-work balance." Rich discussions of techniques for work-work balance amongst CEs at a medical education conference inspired the authors to gather, analyze, and summarize these techniques to share with others. In this paper we present the CE's "Four Ps"; these are practice points that support both the aspiring and established CE to help improve their performance and productivity as CEs, and allow them to approach work-work balance.
Evaluation of Improved Engine Compartment Overheat Detection Techniques.
1986-08-01
radiation properties (emissivity and reflectivity) of the surface. The first task of the numerical procedure is to investigate the radiosity (radiative heat ...). The zone analysis method assumes that the temperature and radiosity are spatially uniform within each zone, that the radiative properties are spatially uniform and independent of direction, and that the enclosure is ... The variation in the radiosity will be nonuniform in distribution in that region.
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)
1999-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) was formed to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. ICOMP is operated by the Ohio Aerospace Institute (OAI) and funded via numerous cooperative agreements by the NASA Glenn Research Center in Cleveland, Ohio. This report describes the activities at ICOMP during 1998, the Institute's thirteenth year of operation.
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)
2001-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) was formed to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. ICOMP is operated by the Ohio Aerospace Institute (OAI) and funded via numerous cooperative agreements by the NASA Glenn Research Center in Cleveland, Ohio. This report describes the activities at ICOMP during 1999, the Institute's fourteenth year of operation.
Institute for Computational Mechanics in Propulsion (ICOMP)
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr. (Editor); Balog, Karen (Editor); Povinelli, Louis A. (Editor)
1998-01-01
The Institute for Computational Mechanics in Propulsion (ICOMP) was formed to develop techniques to improve problem-solving capabilities in all aspects of computational mechanics related to propulsion. ICOMP is operated by the Ohio Aerospace Institute (OAI) and funded via numerous cooperative agreements by the NASA Lewis Research Center in Cleveland, Ohio. This report describes the activities at ICOMP during 1997, the Institute's twelfth year of operation.
SIL-STED microscopy technique enhancing super-resolution of fluorescence microscopy
NASA Astrophysics Data System (ADS)
Park, No-Cheol; Lim, Geon; Lee, Won-sup; Moon, Hyungbae; Choi, Guk-Jong; Park, Young-Pil
2017-08-01
We have characterized a new type of STED microscope which combines a high-numerical-aperture (NA) optical head with a solid immersion lens (SIL), and we call it a SIL-STED microscope. The advantage of the SIL-STED microscope is that the high NA of the SIL makes it superior to a general STED microscope in lateral resolution, thus overcoming the optical diffraction limit at the macromolecular level and enabling advanced super-resolution imaging of cell surface or cell membrane structure and function. This study presents the first implementation of higher-NA illumination in a STED microscope, pushing the fluorescence lateral resolution to about 40 nm. The SIL is made of KTaO3, whose refractive index is about 2.23 and 2.20 at wavelengths of 633 nm and 780 nm, used for excitation and depletion in STED imaging, respectively. Based on vector diffraction theory, the electric field focused by the SIL-STED microscope is numerically calculated so that the point spread function of the microscope and the expected resolution can be analyzed. For further investigation, fluorescence imaging of nanometer-sized fluorescent beads is performed to show the improved performance of the technique.
NASA Astrophysics Data System (ADS)
Grazzini, A.; Lacidogna, G.; Valente, S.; Accornero, F.
2018-06-01
Masonry walls of historical buildings are subject to rising damp effects due to capillary or rain infiltration, which over time produce decay and delamination of historical plasters. In the restoration of masonry buildings, plaster detachment frequently occurs because of mechanical incompatibility of the repair mortar. An innovative laboratory procedure is described for testing the mechanical adhesion of new repair mortars. Static compression tests were carried out on composite stone block-repair mortar specimens, whose specific geometry makes it possible to test the de-bonding process of mortar in adherence with a stone masonry structure. The acoustic emission (AE) technique was employed for estimating the amount of energy released by fracture propagation in the adhesion surface between mortar and stone. A numerical simulation was developed based on the cohesive crack model. The evolution of the detachment process of mortar in a coupled stone brick-mortar system was analysed by triangulation of AE signals, which can improve the numerical model and predict the type of failure in the adhesion surface of the repair plaster. Through the cohesive crack model, it was possible to interpret theoretically the de-bonding phenomena occurring at the interface between stone block and mortar. Therefore, the mechanical behaviour of the interface is characterized.
Development of the symmetrical laser shock test for weak bond inspection.
NASA Astrophysics Data System (ADS)
Sagnard, Maxime; Berthe, Laurent; Ecault, Romain; Touchard, Fabienne; Boustie, Michel
2017-06-01
This paper presents the LAser Shock Adhesion Test (LASAT) using symmetrical laser shocks. The study is part of the ComBoNDT European project, which develops new Non-Destructive Tests (NDT) to assess the adherence properties of bonded composite structures. This NDT technique relies on the creation of a plasma on both sides of the sample using two lasers. The plasma expands and generates shockwaves inside the material. When combined, the shockwaves create a local tensile stress. Properly set, this stress can be used to test interface adherence. Numerous experiments have shown that this adaptive technique can discriminate a good bond from a weak one without damaging the composite structure. Weak bonds are usually created by contaminated surfaces (residues of release agent, finger prints, ...) and were artificially recreated for the ComBoNDT test samples. Numerical simulations are being developed as well to improve the comprehension of the physical phenomenon; ultimately, using these numerical results, one should be able to find the correct laser parameters (intensity, laser spot diameter) to generate the right tensile stress at the desired location. This project has received funding from the European Union's Horizon 2020 research and innovation program under Grant agreement N 63649.
Monitoring by forward scatter radar techniques: an improved second-order analytical model
NASA Astrophysics Data System (ADS)
Falconi, Marta Tecla; Comite, Davide; Galli, Alessandro; Marzano, Frank S.; Pastina, Debora; Lombardo, Pierfrancesco
2017-10-01
In this work, a second-order phase approximation is introduced to provide an improved analytical model of the signal received in forward scatter radar systems. A typical configuration with a rectangular metallic object illuminated while crossing the baseline, in far- or near-field conditions, is considered. An improved second-order model is compared with a simplified one already proposed by the authors and based on a paraxial approximation. A phase error analysis is carried out to investigate benefits and limitations of the second-order modeling. The results are validated by developing full-wave numerical simulations implementing the relevant scattering problem on a commercial tool.
Lubrication of nonconformal contacts. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Jeng, Y. R.
1985-01-01
Minimum film thickness results for the piezoviscous-rigid regime of lubrication are developed for a compressible Newtonian fluid with Roelands viscosity. The results provide a basis for the analysis and design of a wide range of machine elements operating in the piezoviscous-rigid regime of lubrication. A new numerical method of calculating the elastic deformation in contact stress problems is developed using a biquadratic polynomial to approximate the pressure distribution over the whole domain analyzed. The deformation of every node is expressed as a linear combination of the nodal pressures, whose coefficients can be combined into an influence coefficient matrix. This approach has the advantages of improved numerical accuracy, less computing time, and a smaller storage size required for the influence matrix. The ideal elastohydrodynamic lubrication (EHL) analysis is extended to real bearing systems in order to gain an understanding of failure mechanisms in machine elements. The improved elastic deformation calculation is successfully incorporated into the EHL numerical scheme. Using this revised numerical technique and the flow factor model developed by Patir and Cheng (1978), the surface roughness effects on the elastohydrodynamic lubrication of point contacts are considered. Conditions typical of an EHL contact in the piezoviscous-elastic regime entrained in pure rolling are investigated. The results are compared with the smooth surface solutions. Experiments are conducted to study the transient EHL effects in instrument ball bearings.
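The influence-coefficient idea can be summarized as precomputing a matrix C whose entry C[i, j] is the deflection at node i caused by a unit nodal pressure at node j, so the whole deformation field is a single matrix-vector product that can be reused at every iteration. The sketch below illustrates only that structure with a hypothetical smoothed kernel; the actual coefficients in the thesis follow from integrating the biquadratic pressure polynomial over the elastic half-space and are not reproduced here.

```python
import numpy as np

def build_influence_matrix(x, kernel):
    """C[i, j] = deflection at node x[i] caused by a unit pressure at node x[j]."""
    n = len(x)
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            C[i, j] = kernel(x[i], x[j])
    return C

def demo_kernel(xi, xj, a=0.05):
    """Hypothetical smoothed half-space kernel, for illustration only: the
    deflection decays with distance between loaded and observation nodes."""
    return 1.0 / np.sqrt((xi - xj) ** 2 + a ** 2)

x = np.linspace(-1.0, 1.0, 101)             # 1-D grid of nodes across the contact
p = np.maximum(0.0, 1.0 - x ** 2)           # Hertz-like nodal pressure distribution
C = build_influence_matrix(x, demo_kernel)  # assembled once, reused every EHL iteration
w = C @ p                                   # deformation = linear combination of nodal pressures
print(f"maximum deflection (arbitrary units): {w.max():.3f}")
```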
NASA Astrophysics Data System (ADS)
Amina, Benabderrahmane; Miloud, Aminallah; Samir, Laouedj; Abdelylah, Benazza; Solano, J. P.
2016-10-01
In this paper, we present a three-dimensional numerical investigation of heat transfer in a parabolic trough collector receiver with longitudinal fins using different kinds of nanofluid, with an operational temperature of 573 K and a nanoparticle concentration of 1% by volume. The outer surface of the absorber receives a non-uniform heat flux, which is obtained by using the Monte Carlo ray tracing technique. The numerical results are contrasted with empirical results available in the open literature. A significant improvement in heat transfer is obtained when the Reynolds number varies in the range 2.57×10^4 ≤ Re ≤ 2.57×10^5: the tube-side Nusselt number increases by a factor of 1.3 to 1.8. Metallic nanoparticles improve heat transfer more than the other nanoparticles, and combining both mechanisms provides better heat transfer and higher thermo-hydraulic performance.
NASA Astrophysics Data System (ADS)
Součková, Natálie; Kuklová, Jana; Popelka, Lukáš; Matějka, Milan
2012-04-01
This paper focuses on the suppression of the flow separation that occurs on a deflected flap by means of vortex generators (VGs). An airfoil NACA 63A421 with a simple flap and vane-type vortex generators were used. The investigation was carried out using experimental and numerical methods. The data from the numerical simulation of the flapped airfoil without VG control were used for the vortex generator design. Two sizes, two different shapes, and various spacings of the vortex generators were tested. The flow past the airfoil was visualized by three methods, namely the tuft filament technique, oil-flow visualization, and thermo-camera visualization. The experiments were performed in closed-circuit wind tunnels with closed and open test sections. The lift curves for the cases without and with vortex generators were acquired to determine the lift coefficient improvement. The improvement was achieved in several cases by all of the applied methods.
Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography
NASA Astrophysics Data System (ADS)
Chu, Pan; Lei, Jing
2017-11-01
Electrical capacitance tomography (ECT) is deemed to be a powerful visualization measurement technique for parametric measurement in a multiphase flow system. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves the robustness.
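A minimal sketch of the Tikhonov step underlying such a loss function is given below for a linearized ECT model c = S g (sensitivity matrix S, capacitance vector c, permittivity image g). The split Bregman iteration and the low-rank term of the paper are not reproduced, and the matrix sizes and phantom are illustrative assumptions.

```python
import numpy as np

def tikhonov_reconstruct(S, c, lam):
    """Solve min_g ||S g - c||^2 + lam ||g||^2 for the permittivity image g.
    (The paper additionally adds a low-rank term and solves with split Bregman.)"""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ c)

rng = np.random.default_rng(0)
n_meas, n_pix = 66, 256                        # e.g. 12 electrodes -> 66 capacitance pairs
S = rng.standard_normal((n_meas, n_pix))       # stand-in for the real sensitivity matrix
g_true = np.zeros(n_pix); g_true[40:60] = 1.0  # a small high-permittivity object
c = S @ g_true + 0.01 * rng.standard_normal(n_meas)   # noisy capacitance data

g_hat = tikhonov_reconstruct(S, c, lam=5.0)
err = np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true)
print(f"relative reconstruction error: {err:.2f}")
```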
Numerical and experimental investigation of turbine blade film cooling
NASA Astrophysics Data System (ADS)
Berkache, Amar; Dizene, Rabah
2017-12-01
The blades in a gas turbine engine are exposed to extreme temperature levels that exceed the melting temperature of the material. Therefore, efficient cooling is a requirement for high performance of the gas turbine engine. The present study investigates film cooling by means of 3D numerical simulations using a commercial code, Fluent. Three turbulence models, namely the k-ε, RSM and SST models, are applied, and the predictions are compared to experimental measurements conducted with the PIV technique. The experimental model, realized in the ENSEMA laboratory, uses a flat plate with several rows of staggered holes. The performance of the flow injected into the mainstream is analyzed. The comparison shows that the RANS closure models mitigate the over-prediction of centerline film-cooling velocities that is caused by the limitations of the RANS method and its isotropic eddy diffusivity.
Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.
2014-12-01
The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.
Fazlollahtabar, Hamed
2010-12-01
Consumer expectations for automobile seat comfort continue to rise. With this said, it is evident that the current automobile seat comfort development process, which is only sporadically successful, needs to change. In this context, there has been growing recognition of the need for establishing theoretical and methodological foundations for automobile seat comfort. On the other hand, seat producers need to know the comfort required by customers in order to produce seats based on their interests. Current research methodologies apply qualitative approaches because of the anthropometric specifications involved. The most significant weakness of these approaches is the inexactness of the extracted inferences. Despite the qualitative nature of consumer preferences, there are methods to transform the qualitative parameters into numerical values, which could help seat producers improve or enhance their products. This approach would also help the automobile manufacturer to procure seats from the best producer according to the consumers' views. In this paper, a heuristic multi-criteria decision-making technique is applied to express consumer preferences as numerical values. This technique is a combination of the Analytic Hierarchy Process (AHP), the entropy method, and the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). A case study is conducted to illustrate the applicability and the effectiveness of the proposed heuristic approach. Copyright © 2010 Elsevier Ltd. All rights reserved.
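To make the ranking machinery concrete, the sketch below applies entropy weighting followed by TOPSIS to a small, made-up decision matrix of candidate seats; the criteria, scores, and the omission of the AHP stage are illustrative assumptions rather than the paper's case-study data.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from the Shannon entropy of the decision matrix."""
    P = X / X.sum(axis=0)
    k = 1.0 / np.log(X.shape[0])
    E = -k * np.sum(P * np.log(P + 1e-12), axis=0)
    d = 1.0 - E                         # degree of divergence per criterion
    return d / d.sum()

def topsis(X, w, benefit):
    """Closeness of each alternative to the ideal solution (higher is better)."""
    R = X / np.linalg.norm(X, axis=0)   # vector-normalized decision matrix
    V = R * w                           # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus  = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti,  axis=1)
    return d_minus / (d_plus + d_minus)

# Rows: candidate seats; columns: lumbar support, cushioning, vibration (lower is better)
X = np.array([[7.0, 8.0, 3.0],
              [6.0, 9.0, 2.0],
              [8.0, 6.0, 4.0]])
benefit = np.array([True, True, False])
w = entropy_weights(X)
scores = topsis(X, w, benefit)
print("entropy weights:", np.round(w, 3))
print("TOPSIS ranking (best first):", np.argsort(-scores))
```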
Mansour, M M; Spink, A E F
2013-01-01
Grid refinement is introduced in a numerical groundwater model to increase the accuracy of the solution over local areas without compromising the run time of the model. Numerical methods previously developed for grid refinement suffered from certain drawbacks, for example, deficiencies in the implemented interpolation technique, non-reciprocity in head or flow calculations, lack of accuracy resulting from high truncation errors, and numerical problems resulting from the construction of elongated meshes. A refinement scheme based on the divergence theorem and Taylor expansions is presented in this article. This scheme is based on the work of De Marsily (1986) but includes more terms of the Taylor series to improve the numerical solution. In this scheme, flow reciprocity is maintained and a high order of refinement is achievable. The new numerical method is applied to simulate groundwater flows in homogeneous and heterogeneous confined aquifers. It produced results with acceptable degrees of accuracy. This method shows the potential for its application to solving groundwater heads over nested meshes with irregular shapes. © 2012, British Geological Survey © NERC 2012. Ground Water © 2012, National GroundWater Association.
Approximate techniques of structural reanalysis
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1974-01-01
A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques for a wide range of variations in the design variables.
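A minimal sketch of that Taylor-plus-iteration idea is given below, assuming a two-spring model and numbers chosen purely for illustration: the first-order estimate of the displacement after a design change is taken as the starting point of a short fixed-point iteration that reuses the stiffness matrix of the original analyzed design.

```python
import numpy as np

def stiffness(d):
    """Global stiffness of a 2-DOF spring chain; d is the single design variable."""
    k1, k2 = 10.0 * d, 5.0
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

def dK_dd(d):
    """Derivative of the stiffness matrix with respect to the design variable."""
    return np.array([[10.0, 0.0],
                     [0.0,  0.0]])

f = np.array([0.0, 1.0])
d0, dd = 1.0, 0.3                        # analyzed design and proposed design change

u0 = np.linalg.solve(stiffness(d0), f)   # original analysis
du = np.linalg.solve(stiffness(d0), -dK_dd(d0) @ u0)   # sensitivity: K du/dd = -dK/dd u
u_taylor = u0 + du * dd                  # first-order Taylor estimate at d0 + dd

# Use the Taylor estimate as the initial guess of a fixed-point iteration
# K(d0) u_{k+1} = f - (K(d0+dd) - K(d0)) u_k, which reuses the original K(d0).
u = u_taylor.copy()
for _ in range(3):
    rhs = f - (stiffness(d0 + dd) - stiffness(d0)) @ u
    u = np.linalg.solve(stiffness(d0), rhs)

u_exact = np.linalg.solve(stiffness(d0 + dd), f)
print("Taylor only :", np.round(u_taylor, 4))
print("Taylor+iter :", np.round(u, 4))
print("Exact       :", np.round(u_exact, 4))
```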
Estimation variance bounds of importance sampling simulations in digital communication systems
NASA Technical Reports Server (NTRS)
Lu, D.; Yao, K.
1991-01-01
In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.
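As a toy illustration of the quantities involved (not the authors' bounds), the sketch below estimates a small Gaussian tail probability, standing in for a bit error rate, with a mean-shifted importance sampling density, and reports the empirical estimator variance alongside direct Monte Carlo; the threshold and mean shift are arbitrary choices.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
gamma = 4.0                            # error threshold: a "bit error" when noise > gamma
p_true = 0.5 * erfc(gamma / sqrt(2.0)) # exact Gaussian tail probability, for reference
n = 100_000

# Direct Monte Carlo: average of the rare-event indicator
x = rng.standard_normal(n)
h = (x > gamma).astype(float)
p_mc, var_mc = h.mean(), h.var(ddof=1) / n

# Importance sampling: draw from N(theta, 1), biased toward the error region, and
# weight each sample by the likelihood ratio f(y)/g(y) = exp(-theta*y + theta^2/2).
theta = gamma                          # the IS parameter one would tune via such bounds
y = rng.standard_normal(n) + theta
hw = (y > gamma) * np.exp(-theta * y + 0.5 * theta ** 2)
p_is, var_is = hw.mean(), hw.var(ddof=1) / n

print(f"true      : {p_true:.3e}")
print(f"direct MC : {p_mc:.3e}  (estimation variance {var_mc:.2e})")
print(f"IS        : {p_is:.3e}  (estimation variance {var_is:.2e})")
```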
NASA Astrophysics Data System (ADS)
Xia, Huihui; Kan, Ruifeng; Xu, Zhenyu; He, Yabai; Liu, Jianguo; Chen, Bing; Yang, Chenguang; Yao, Lu; Wei, Min; Zhang, Guangle
2017-03-01
We present a system for accurate tomographic reconstruction of the combustion temperature and H2O vapor concentration of a flame based on laser absorption measurements, in combination with an innovative two-step algebraic reconstruction technique. A total of 11 collimated laser beams generated from the outputs of fiber-coupled diode lasers formed a two-dimensional 5 × 6 orthogonal beam grid and probed two H2O absorption transitions (7154.354/7154.353 cm-1 and 7467.769 cm-1). The measurement system was mounted on a rotation platform to achieve a two-fold improvement in spatial resolution. Numerical simulation showed that the proposed two-step algebraic reconstruction technique, which reconstructs temperature and concentration in separate steps, greatly improved the reconstruction accuracy of the species concentration compared with the traditional calculation. Experimental results demonstrated the good performance of the measurement system and the two-step reconstruction technique for applications such as flame monitoring and combustion diagnosis.
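The algebraic reconstruction technique at the heart of such tomographic inversions sweeps through the path-integral equations and projects the current image onto each one in turn. A minimal sketch of that update is given below on a made-up absorption phantom; the separate temperature and concentration reconstruction steps and the actual 5 × 6 beam geometry are not reproduced, and the ray-weight matrix is an illustrative stand-in.

```python
import numpy as np

def art(A, b, n_iter=50, relax=0.5, x0=None):
    """Kaczmarz-type ART: x <- x + relax * (b_i - a_i.x) / ||a_i||^2 * a_i."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    row_norm2 = (A ** 2).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
        x = np.maximum(x, 0.0)          # absorbance cannot be negative
    return x

rng = np.random.default_rng(2)
n_rays, n_pix = 11, 30                  # 11 beams, coarse 30-pixel discretization
A = rng.uniform(0.0, 1.0, (n_rays, n_pix))     # stand-in for path-length weights
x_true = np.exp(-0.5 * ((np.arange(n_pix) - 15) / 4.0) ** 2)  # smooth "flame" profile
b = A @ x_true + 0.01 * rng.standard_normal(n_rays)           # noisy path integrals

x_rec = art(A, b)
print(f"relative error: {np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true):.2f}")
```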
NASA Astrophysics Data System (ADS)
Dixon, Kenneth
A lightning data assimilation technique is developed for use with observations from the World Wide Lightning Location Network (WWLLN). The technique nudges the water vapor mixing ratio toward saturation within 10 km of a lightning observation. This technique is applied to deterministic forecasts of convective events on 29 June 2012, 17 November 2013, and 19 April 2011 as well as an ensemble forecast of the 29 June 2012 event using the Weather Research and Forecasting (WRF) model. Lightning data are assimilated over the first 3 hours of the forecasts, and the subsequent impact on forecast quality is evaluated. The nudged deterministic simulations for all events produce composite reflectivity fields that are closer to observations. For the ensemble forecasts of the 29 June 2012 event, the improvement in forecast quality from lightning assimilation is more subtle than for the deterministic forecasts, suggesting that the lightning assimilation may improve ensemble convective forecasts where conventional observations (e.g., aircraft, surface, radiosonde, satellite) are less dense or unavailable.
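The assimilation rule itself is simple: any model point lying within 10 km of a lightning observation has its water vapor mixing ratio pushed toward the saturation value. A schematic, single-level sketch of that rule is given below; the grid, the Tetens saturation formula, and the nudging weight are illustrative assumptions and not the WRF implementation used in the study.

```python
import numpy as np

def qv_sat(T, p):
    """Saturation mixing ratio (kg/kg) from Tetens' formula; T in K, p in Pa."""
    es = 610.78 * np.exp(17.27 * (T - 273.15) / (T - 35.86))   # saturation vapor pressure
    return 0.622 * es / (p - es)

def nudge_toward_saturation(qv, T, p, xg, yg, flashes, radius=10e3, weight=0.8):
    """Raise qv toward saturation at grid points within `radius` of any flash."""
    qs = qv_sat(T, p)
    near = np.zeros(qv.shape, dtype=bool)
    for fx, fy in flashes:                                  # WWLLN flash locations (m)
        near |= (xg - fx) ** 2 + (yg - fy) ** 2 <= radius ** 2
    target = np.maximum(qv, qs)                             # never dry the column
    return np.where(near, qv + weight * (target - qv), qv)

# Toy 100 km x 100 km domain on a 1 km grid, single vertical level
x = np.arange(0, 100e3, 1e3)
xg, yg = np.meshgrid(x, x)
qv = np.full(xg.shape, 0.008)            # 8 g/kg background moisture
T  = np.full(xg.shape, 290.0)            # K
p  = np.full(xg.shape, 85000.0)          # Pa
flashes = [(30e3, 40e3), (70e3, 60e3)]   # two observed flashes

qv_new = nudge_toward_saturation(qv, T, p, xg, yg, flashes)
print(f"moistened grid points: {(qv_new > qv).sum()}")
```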
Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...
2016-12-30
In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, which require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
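A compact sketch of the smoothed-indicator MLMC estimator is given below for a toy model (an Euler-discretized geometric Brownian motion whose time step halves with each level, standing in for the subsurface simulator); the sigmoid smoothing kernel, its width, and the sample allocation are illustrative assumptions, and the paper's a posteriori calibration of the smoothing function is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def coupled_pair(level, n, T=1.0, mu=0.05, sigma=0.3, q0=1.0):
    """Level-l and level-(l-1) approximations of the quantity of interest Q
    (terminal value of an Euler-discretized GBM), driven by the SAME Brownian
    increments so that the level variances decay."""
    m = 2 ** (level + 2)                 # fine time steps at this level
    dt = T / m
    dW = np.sqrt(dt) * rng.standard_normal((m, n))
    qf = np.full(n, q0)
    for i in range(m):
        qf = qf * (1.0 + mu * dt + sigma * dW[i])
    if level == 0:
        return qf, None
    qc = np.full(n, q0)
    for i in range(0, m, 2):             # coarse path uses summed increments
        qc = qc * (1.0 + mu * 2 * dt + sigma * (dW[i] + dW[i + 1]))
    return qf, qc

def smooth_indicator(z, delta=0.05):
    """Smoothed 1{Q <= q}: replacing the discontinuous indicator by a sigmoid of
    width delta is what restores the variance decay across levels."""
    return 1.0 / (1.0 + np.exp(-z / delta))

def mlmc_cdf(q, levels=4, samples=(40000, 20000, 10000, 5000, 2500)):
    """Telescoping MLMC estimate of F(q) = P(Q <= q)."""
    est = 0.0
    for l in range(levels + 1):
        qf, qc = coupled_pair(l, samples[l])
        gf = smooth_indicator(q - qf)
        est += gf.mean() if l == 0 else (gf - smooth_indicator(q - qc)).mean()
    return est

print(f"MLMC estimate of P(Q <= 1.0): {mlmc_cdf(1.0):.3f}")
```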
On the accuracy and precision of numerical waveforms: effect of waveform extraction methodology
NASA Astrophysics Data System (ADS)
Chu, Tony; Fong, Heather; Kumar, Prayush; Pfeiffer, Harald P.; Boyle, Michael; Hemberger, Daniel A.; Kidder, Lawrence E.; Scheel, Mark A.; Szilagyi, Bela
2016-08-01
We present a new set of 95 numerical relativity simulations of non-precessing binary black holes (BBHs). The simulations sample comprehensively both black-hole spins up to a spin magnitude of 0.9, and cover mass ratios 1-3. The simulations cover on average 24 inspiral orbits, plus merger and ringdown, with low initial orbital eccentricities e < 10^-4. A subset of the simulations extends the coverage of non-spinning BBHs up to mass ratio q = 10. Gravitational waveforms at asymptotic infinity are computed with two independent techniques: extrapolation and Cauchy characteristic extraction. An error analysis based on noise-weighted inner products is performed. We find that numerical truncation error, error due to gravitational wave extraction, and errors due to the Fourier transformation of the finite-length numerical waveforms are of similar magnitude, with gravitational wave extraction errors dominating at noise-weighted mismatches of ~3 × 10^-4. This set of waveforms will serve to validate and improve aligned-spin waveform models for gravitational wave science.
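For readers unfamiliar with the error measure, the sketch below computes a noise-weighted mismatch between two sampled waveforms, maximizing the overlap over relative time and phase shifts with an inverse FFT; the flat (white) noise power spectral density and the toy chirp signals are illustrative assumptions rather than detector noise curves or simulation output.

```python
import numpy as np

def mismatch(h1, h2, dt, psd=None):
    """1 - max over relative time and phase shifts of the noise-weighted overlap."""
    n = len(h1)
    f = np.fft.rfftfreq(n, dt)
    df = 1.0 / (n * dt)
    H1, H2 = np.fft.rfft(h1) * dt, np.fft.rfft(h2) * dt
    Sn = np.ones_like(f) if psd is None else psd(f)     # flat (white) PSD by default

    def norm(H):
        return np.sqrt(4.0 * df * np.sum(np.abs(H) ** 2 / Sn))

    # Complex overlap as a function of time shift; its modulus maximizes over phase.
    Y = np.zeros(n, dtype=complex)
    Y[: len(f)] = H1 * np.conj(H2) / Sn
    z = 4.0 * df * n * np.fft.ifft(Y)
    return 1.0 - np.max(np.abs(z)) / (norm(H1) * norm(H2))

# Toy stand-ins for two extraction variants of the same waveform
dt = 1.0 / 4096
t = np.arange(0.0, 1.0, dt)
h1 = np.sin(2 * np.pi * (30 * t + 40.0 * t ** 2)) * np.exp(-2 * t)
h2 = np.sin(2 * np.pi * (30 * t + 40.2 * t ** 2)) * np.exp(-2 * t)
print(f"noise-weighted mismatch ~ {mismatch(h1, h2, dt):.2e}")
```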
NASA Astrophysics Data System (ADS)
Wang, Xiaoqiang; Ju, Lili; Du, Qiang
2016-07-01
The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
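As a stripped-down illustration of the explicit exponential-time-differencing idea (first order only, in one dimension, and for the Allen-Cahn equation rather than the phase field Willmore dynamics), the sketch below treats the stiff linear operator exactly in Fourier space and the nonlinearity explicitly; the equation, parameters and grid are illustrative assumptions.

```python
import numpy as np

# 1-D Allen-Cahn: u_t = eps^2 u_xx + u - u^3, periodic on [0, 2*pi)
N, eps, dt, n_steps = 256, 0.05, 0.01, 2000
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
L = -eps ** 2 * k ** 2 + 1.0              # Fourier symbol of the linear part

E = np.exp(dt * L)                        # exact propagator of the linear part
# phi_1(dt*L) = (exp(dt*L) - 1) / (dt*L), with the removable singularity at L = 0
phi1 = np.where(np.abs(L) > 1e-12, (E - 1.0) / (dt * L), 1.0)

u = 0.1 * np.cos(3 * x) + 0.05 * np.random.default_rng(4).standard_normal(N)
u_hat = np.fft.fft(u)
for _ in range(n_steps):
    N_hat = np.fft.fft(-np.real(np.fft.ifft(u_hat)) ** 3)   # nonlinear term -u^3
    u_hat = E * u_hat + dt * phi1 * N_hat                   # first-order ETD step
u = np.real(np.fft.ifft(u_hat))
print(f"solution range after {n_steps} steps: [{u.min():.2f}, {u.max():.2f}]")
```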
Eastern approaches for enhancing women's sexuality: mindfulness, acupuncture, and yoga (CME).
Brotto, Lori A; Krychman, Michael; Jacobson, Pamela
2008-12-01
A significant proportion of women report unsatisfying sexual experiences despite no obvious difficulties in the traditional components of sexual response (desire, arousal, and orgasm). Some suggest that nongoal-oriented spiritual elements of sexuality might fill the gap that more contemporary forms of treatment are not addressing. Mindfulness, acupuncture, and yoga are Eastern techniques that have been applied to women's sexuality. Here, we review the literature on their efficacy, drawing on empirical sources. Our search revealed two empirical studies of mindfulness, two of acupuncture, and one of yoga in the treatment of sexual dysfunction. Mindfulness significantly improves several aspects of sexual response and reduces sexual distress in women with sexual desire and arousal disorders. In women with provoked vestibulodynia, acupuncture significantly reduces pain and improves quality of life. There is also a case series of acupuncture significantly improving desire among women with hypoactive sexual desire disorder. Although yoga has only been empirically examined and found to be effective for treating sexual dysfunction (premature ejaculation) in men, numerous historical books cite benefits of yoga for women's sexuality. The empirical literature supporting Eastern techniques, such as mindfulness, acupuncture, and yoga, for women's sexual complaints and loss of satisfaction is sparse but promising. Future research should aim to empirically support Eastern techniques in women's sexuality.
Byliński, Hubert; Gębicki, Jacek; Dymerski, Tomasz; Namieśnik, Jacek
2017-07-04
One of the major sources of error that occur during chemical analysis utilizing the more conventional and established analytical techniques is the possibility of losing part of the analytes during the sample preparation stage. Unfortunately, this sample preparation stage is required to improve analytical sensitivity and precision. Direct techniques have helped to shorten or even bypass the sample preparation stage, and in this review we comment on some of the new direct techniques that are mass-spectrometry based. The study presents information about measurement techniques using mass spectrometry which allow direct sample analysis, without sample preparation or with only limited pre-concentration steps. The MALDI-MS, PTR-MS, SIFT-MS, and DESI-MS techniques are discussed. These solutions have numerous applications in different fields of human activity due to their interesting properties. The advantages and disadvantages of these techniques are presented. The trends in the development of direct analysis using the aforementioned techniques are also presented.
Nahmani, Marc; Lanahan, Conor; DeRosier, David; Turrigiano, Gina G.
2017-01-01
Superresolution microscopy has fundamentally altered our ability to resolve subcellular proteins, but improving on these techniques to study dense structures composed of single-molecule-sized elements has been a challenge. One possible approach to enhance superresolution precision is to use cryogenic fluorescent imaging, reported to reduce fluorescent protein bleaching rates, thereby increasing the precision of superresolution imaging. Here, we describe an approach to cryogenic photoactivated localization microscopy (cPALM) that permits the use of a room-temperature high-numerical-aperture objective lens to image frozen samples in their native state. We find that cPALM increases photon yields and show that this approach can be used to enhance the effective resolution of two photoactivatable/switchable fluorophore-labeled structures in the same frozen sample. This higher resolution, two-color extension of the cPALM technique will expand the accessibility of this approach to a range of laboratories interested in more precise reconstructions of complex subcellular targets. PMID:28348224
A Parallel 2D Numerical Simulation of Tumor Cells Necrosis by Local Hyperthermia
NASA Astrophysics Data System (ADS)
Reis, R. F.; Loureiro, F. S.; Lobosco, M.
2014-03-01
Hyperthermia has been widely used in cancer treatment to destroy tumors. The main idea of hyperthermia is to heat a specific region, such as a tumor, so that above a threshold temperature the tumor cells are destroyed. This can be accomplished by many heat supply techniques, and the use of magnetic nanoparticles that generate heat when an alternating magnetic field is applied has emerged as a promising technique. In the present paper, the Pennes bioheat transfer equation is adopted to model thermal tumor ablation in the context of magnetic nanoparticles. Numerical simulations are carried out considering different injection sites for the nanoparticles in an attempt to achieve better hyperthermia conditions. An explicit finite difference method is employed to solve the equations. However, a large amount of computation is required for this purpose. Therefore, this work also presents an initial attempt to improve performance using OpenMP, a parallel programming API. Experimental results were quite encouraging: speedups around 35 were obtained on a 64-core machine.
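A serial sketch of the explicit finite-difference update of the Pennes equation is given below on a small 2-D tissue patch with a Gaussian heat source standing in for the nanoparticle injection site; the tissue, blood, and source parameters are typical literature values used only for illustration, and the OpenMP parallelization of the paper is not reproduced (the vectorized array update plays the analogous role here).

```python
import numpy as np

# Tissue and blood properties (typical literature values, for illustration only)
rho, c, k = 1000.0, 4200.0, 0.5                      # kg/m^3, J/(kg K), W/(m K)
wb, rho_b, c_b, Ta = 0.0005, 1000.0, 4200.0, 37.0    # perfusion (1/s), blood props, arterial T (C)
Qm = 420.0                                           # metabolic heat, W/m^3

nx = ny = 101
dx = 1e-3                                 # 1 mm grid over a 10 cm x 10 cm patch
dt = 0.9 * rho * c * dx ** 2 / (4 * k)    # below the explicit 2-D diffusion stability limit
T = np.full((nx, ny), 37.0)

# Gaussian heat source centred on the nanoparticle injection site (illustrative magnitude)
xs = np.arange(nx) * dx
X, Y = np.meshgrid(xs, xs, indexing="ij")
Qr = 5e5 * np.exp(-((X - 0.05) ** 2 + (Y - 0.05) ** 2) / (2 * 0.005 ** 2))   # W/m^3

t_end, t = 600.0, 0.0                     # simulate 10 minutes of heating
while t < t_end:
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx ** 2
    T += dt / (rho * c) * (k * lap + wb * rho_b * c_b * (Ta - T) + Qm + Qr)
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = 37.0    # body-temperature boundary
    t += dt

print(f"peak temperature after {t_end / 60:.0f} min: {T.max():.1f} C; "
      f"cells above 43 C: {(T > 43.0).sum()}")
```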
Harte, Philip T.
1994-01-01
Proper discretization of a ground-water-flow field is necessary for the accurate simulation of ground-water flow by models. Although discretization guidelines are available to ensure numerical stability, current guidelines are flexible enough (particularly in vertical discretization) to allow for some ambiguity of model results. Testing of two common types of vertical-discretization schemes (the horizontal and nonhorizontal-model-layer approaches) was done to simulate sloping hydrogeologic units characteristic of New England. Differences in the results of model simulations using these two approaches are small. Numerical errors associated with the use of nonhorizontal model layers are small (4 percent), even though this discretization technique does not adhere to the strict formulation of the finite-difference method. It was concluded that vertical discretization by means of the nonhorizontal layer approach has advantages in representing the hydrogeologic units tested and in simplicity of model-data input. In addition, vertical distortion of model cells by this approach may improve the representation of shallow flow processes.
Analytical and numerical solutions for mass diffusion in a composite cylindrical body
NASA Astrophysics Data System (ADS)
Kumar, A.
1980-12-01
Analytical and numerical solution techniques were investigated to study moisture diffusion problems in cylindrical bodies that are assumed to be composed of a finite number of layers of different materials. A generalized diffusion model for an n-layer cylindrical body with discontinuous moisture content at the interfaces was developed and the formal solutions were obtained. The model is to be used for describing mass transfer rates of any composite body, such as an ear of corn, which can be assumed to consist of two different layers: the inner core representing the woody cob and the outer cylinder representing the kernel layer. Data describing the fully exposed drying characteristics of ear corn at high air velocity were obtained under different drying conditions. Ears of corn were modeled as homogeneous bodies since the composite model did not improve the fit substantially. A computer program using a multidimensional optimization technique showed that the diffusivity was an exponential function of moisture content and an Arrhenius function of the drying-air temperature.
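The fitted diffusivity law referred to above has the generic form D(M, T) = D0 exp(a M) exp(-Ea / (R T)); the sketch below simply evaluates such a law, with coefficients invented purely for illustration (the thesis reports its own fitted constants).

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def diffusivity(M, T, D0=1.0e-4, a=3.0, Ea=30e3):
    """Moisture diffusivity: exponential in moisture content M (decimal, dry basis)
    and Arrhenius in absolute drying-air temperature T (K).
    D0, a, and Ea are placeholder coefficients, not the fitted values from the thesis."""
    return D0 * np.exp(a * M) * np.exp(-Ea / (R * T))

for T_c in (40.0, 60.0, 80.0):
    D = diffusivity(M=0.25, T=T_c + 273.15)
    print(f"T = {T_c:4.0f} C, M = 0.25  ->  D = {D:.3e} (arbitrary units)")
```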
Subjective analysis of energy-management projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morris, R.
The most successful energy conservation projects always reflect human effort to fine-tune engineering and technological improvements. Subjective analysis is a technique for predicting and measuring human interaction before a project begins. The examples of a subjective analysis for office buildings incorporate evaluative questions that are structured to produce numeric values for computer scoring. Each project would need to develop its own pertinent questions and determine appropriate values for the answers.
Time-Accurate Numerical Prediction of Free Flight Aerodynamics of a Finned Projectile
2005-09-01
develop (with fewer dollars) more lethal and effective munitions. The munitions must stay abreast of the latest technology available to our ... consuming. Computer simulations can and have provided an effective means of determining the unsteady aerodynamics and flight mechanics of guided projectiles ... Recently, the time-accurate technique was used to obtain improved results for the Magnus moment and roll damping moment of a spinning projectile at transonic
Resolving phase information of the optical local density of state with scattering near-field probes
NASA Astrophysics Data System (ADS)
Prasad, R.; Vincent, R.
2016-10-01
We theoretically discuss the link between the phase measured using scattering-type scanning near-field optical microscopy (s-SNOM) and the local density of optical states (LDOS). A remarkable result is that the LDOS information is directly included in the phase of the probe. Therefore, by monitoring the spatial variation of the trans-scattering phase, we locally measure the phase modulation associated with the probe and the optical paths. We demonstrate numerically that a technique involving two-phase imaging of a sample with two tips of different sizes should allow the partial LDOS (pLDOS) to be imaged. For this imaging method, numerical comparison with an extinction probe measurement shows a crucial qualitative and quantitative improvement.
Mathematical and Numerical Aspects of the Adaptive Fast Multipole Poisson-Boltzmann Solver
Zhang, Bo; Lu, Benzhuo; Cheng, Xiaolin; ...
2013-01-01
This paper summarizes the mathematical and numerical theories and computational elements of the adaptive fast multipole Poisson-Boltzmann (AFMPB) solver. We introduce and discuss the following components in order: the Poisson-Boltzmann model, boundary integral equation reformulation, surface mesh generation, the node-patch discretization approach, Krylov iterative methods, the new version of fast multipole methods (FMMs), and a dynamic prioritization technique for scheduling parallel operations. For each component, we also remark on feasible approaches for further improvements in efficiency, accuracy and applicability of the AFMPB solver to large-scale long-time molecular dynamics simulations. Lastly, the potential of the solver is demonstrated with preliminary numerical results.
Weighted least squares techniques for improved received signal strength based localization.
Tarrío, Paula; Bernardos, Ana M; Casar, José R
2011-01-01
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
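A compact sketch of the circular (lateration) variant is given below: RSS readings are converted to ranges through a log-distance path-loss model, the circle equations are linearized against a reference anchor, and the resulting system is solved by weighted least squares with weights reflecting the distance-dependent measurement accuracy. The anchor layout, channel parameters, and weighting rule are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def rss_to_distance(rss, p0=-40.0, n=3.0, d0=1.0):
    """Log-distance path-loss model: RSS(d) = p0 - 10*n*log10(d/d0)."""
    return d0 * 10.0 ** ((p0 - rss) / (10.0 * n))

def wls_circular(anchors, d, weights):
    """Weighted least squares position fix from anchor positions and ranges,
    linearized by subtracting the first anchor's circle equation."""
    x0, y0 = anchors[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], d[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d[0] ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    A, b = np.array(A), np.array(b)
    W = np.diag(weights[1:])                  # per-equation confidence
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

rng = np.random.default_rng(5)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 7.0])

d_true = np.linalg.norm(anchors - target, axis=1)
rss = -40.0 - 30.0 * np.log10(d_true) + rng.normal(0.0, 2.0, len(anchors))  # 2 dB shadowing
d_hat = rss_to_distance(rss)

w_uniform = np.ones(len(anchors))
w_accuracy = 1.0 / d_hat ** 2      # nearer anchors -> smaller range error -> larger weight
for name, w in (("unweighted", w_uniform), ("weighted", w_accuracy)):
    est = wls_circular(anchors, d_hat, w)
    print(f"{name:10s} estimate: {np.round(est, 2)}, error {np.linalg.norm(est - target):.2f} m")
```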
Satellite based Ocean Forecasting, the SOFT project
NASA Astrophysics Data System (ADS)
Stemmann, L.; Tintoré, J.; Moneris, S.
2003-04-01
The knowledge of future oceanic conditions would have an enormous impact on human marine-related areas. For such reasons, a number of international efforts are being carried out to obtain reliable and manageable ocean forecasting systems. Among the possible techniques that can be used to estimate the near-future states of the ocean, an ocean forecasting system based on satellite imagery is developed through the Satellite based Ocean ForecasTing (SOFT) project. SOFT, established by the European Commission, considers the development of a forecasting system of the ocean space-time variability based on satellite data by using Artificial Intelligence techniques. This system will be merged with numerical simulation approaches, via assimilation techniques, to obtain a hybrid SOFT-numerical forecasting system of improved performance. The results of the project will provide efficient forecasting of sea-surface temperature structures, currents, dynamic height, and biological activity associated with chlorophyll fields. All these quantities could give valuable information for the planning and management of human activities in marine environments such as navigation, fisheries, pollution control, or coastal management. A detailed identification of present or new needs and potential end-users concerned by such an operational tool is being performed. The project will study solutions adapted to these specific needs.
Yao, Yuan; Du, Fenglei; Wang, Chunjie; Liu, Yuqiu; Weng, Jian; Chen, Feiyan
2015-01-01
This study examined whether long-term abacus-based mental calculation (AMC) training improved numerical processing efficiency and at what stage of information processing the effect appeared. Thirty-three children participated in the study and were randomly assigned to two groups at primary school entry, matched for age, gender and IQ. All children went through the same curriculum except that the abacus group received 2 h per week of AMC training, while the control group did traditional numerical practice for a similar amount of time. After 2 years of training, they were tested with a numerical Stroop task. Electroencephalographic (EEG) and event-related potential (ERP) recording techniques were used to monitor the temporal dynamics during the task. Children were required to determine the numerical magnitude (NC task) or the physical size (PC task) of two numbers presented simultaneously. In the NC task, the AMC group showed faster response times but similar accuracy compared to the control group. In the PC task, the two groups exhibited the same speed and accuracy. The saliency of numerical information relative to physical information was greater in the AMC group. With regard to the ERP results, the AMC group displayed congruity effects in both the earlier (N1) and later (N2 and LPC (late positive component)) time windows, while the control group only displayed congruity effects for the LPC. In the left parietal region, LPC amplitudes were larger for the AMC group than for the control group. Individual differences in LPC amplitudes over the left parietal area showed a positive correlation with RTs in the NC task in both the congruent and neutral conditions. After controlling for the N2 amplitude, this correlation also became significant in the incongruent condition. Our results suggest that AMC training can strengthen the relationship between symbolic representation and numerical magnitude so that numerical information processing becomes quicker and more automatic in AMC children. PMID:26042012
A review of numerical techniques approaching microstructures of crystalline rocks
NASA Astrophysics Data System (ADS)
Zhang, Yahui; Wong, Louis Ngai Yuen
2018-06-01
The macro-mechanical behavior of crystalline rocks, including strength, deformability and failure pattern, is dominantly influenced by their grain-scale structures. Numerical techniques are commonly used to assist in understanding the complicated mechanisms from a microscopic perspective. Each numerical method has its respective strengths and limitations. This review paper elucidates how numerical techniques take geometrical aspects of the grain into consideration. Four categories of numerical methods are examined: particle-based methods, block-based methods, grain-based methods, and node-based methods. Focusing on grain-scale characteristics, specific relevant issues, including the increasing complexity of the micro-structure, the deformation and breakage of model elements, and the fracturing and fragmentation process, are described in more detail. The intrinsic capabilities and limitations of the different numerical approaches in accounting for the micro-mechanics of crystalline rocks and their phenomenological mechanical behavior are thus explicitly presented.
Numerical model updating technique for structures using firefly algorithm
NASA Astrophysics Data System (ADS)
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique used for updating existing numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical models to closely match the experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process, a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be either the material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close match can be achieved between the experimental and numerical models.
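A minimal sketch of the idea, assuming a one-parameter updating problem: the Young's modulus of a cantilever is adjusted by a plain firefly loop until the model tip deflection matches a measured value. The load, geometry, and "measured" deflection below are invented for illustration; the study's actual MATLAB implementation and response quantities differ.

```python
import numpy as np

# Hypothetical experimental data: tip deflection of a cantilever under a tip load.
P, L = 1000.0, 2.0                    # load [N], length [m] (assumed values)
I = 8.0e-6                            # second moment of area [m^4] (assumed)
E_true = 2.0e11                       # "real" structure stiffness [Pa]
delta_measured = P * L**3 / (3 * E_true * I) * 1.02   # +2% measurement error

def objective(E):
    """Discrepancy between numerical-model and measured tip deflection."""
    delta_model = P * L**3 / (3 * E * I)
    return abs(delta_model - delta_measured)

def firefly_update(lo=1.0e11, hi=3.0e11, n_fireflies=15, n_iter=100,
                   beta0=1.0, gamma=1.0, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_fireflies)          # candidate E values
    scale = hi - lo
    for _ in range(n_iter):
        f = np.array([objective(xi) for xi in x])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:                   # j is "brighter": move i toward j
                    r = abs(x[i] - x[j]) / scale
                    beta = beta0 * np.exp(-gamma * r**2)
                    x[i] += beta * (x[j] - x[i]) + alpha * scale * (rng.random() - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    f[i] = objective(x[i])
        alpha *= 0.97                             # gradually cool the random walk
    return x[np.argmin([objective(xi) for xi in x])]

print(f"updated E = {firefly_update():.3e} Pa")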
Etchepareborda, Pablo; Vadnjal, Ana Laura; Federico, Alejandro; Kaufmann, Guillermo H
2012-09-15
We evaluate the extension of the exact nonlinear reconstruction technique developed for digital holography to the phase-recovery problems presented by other optical interferometric methods, which use carrier modulation. It is shown that the introduction of an analytic wavelet analysis in the ridge of the cepstrum transformation corresponding to the analyzed interferogram can be closely related to the well-known wavelet analysis of the interferometric intensity. Subsequently, the phase-recovery process is improved. The advantages and limitations of this framework are analyzed and discussed using numerical simulations in singular scalar light fields and in temporal speckle pattern interferometry.
Evidence-based perianesthesia care: accelerated postoperative recovery programs.
Pasero, Chris; Belden, Jan
2006-06-01
Prolonged stress response after surgery can cause numerous adverse effects, including gastrointestinal dysfunction, muscle wasting, impaired cognition, and cardiopulmonary, infectious, and thromboembolic complications. These events can delay hospital discharge, extend convalescence, and negatively impact long-term prognosis. Recent advances in perioperative management practices have allowed better control of the stress response and improved outcomes for patients undergoing surgery. At the center of the current focus on improved outcomes are evidence-based fast-track surgical techniques and what is commonly referred to as "accelerated postoperative recovery programs." These programs require a multidisciplinary, coordinated effort, and nurses are essential to their successful implementation.
External and internal geometry of European adults.
Bertrand, Samuel; Skalli, Wafa; Delacherie, Laurent; Bonneau, Dominique; Kalifa, Gabriel; Mitton, David
2006-12-15
The primary objective of the study was to bring a deeper knowledge of human anthropometry by investigating the external and internal body geometry of small women, mid-sized men and tall men. Sixty-four healthy European adults were recruited. External measurements were performed using classical anthropometric instruments. Internal measurements of the trunk bones were performed using a stereo-radiographic 3D reconstruction technique. Besides the original procedure presented in this paper for performing in vivo geometrical data acquisition on numerous volunteers, this study provides an extensive description of both the external and internal (trunk skeleton) human body geometry for three morphotypes. Moreover, this study proposes a global external and internal geometrical description of 5th percentile female, 50th percentile male and 95th percentile male subjects. This study resulted in a unique geometrical database enabling the improvement of numerical models of the human body for crash test simulation and offering numerous possibilities in the anthropometry field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagheriasl, Reza; Ghavam, Kamyar; Worswick, Michael
2011-05-04
The effect of temperature on the formability of aluminum alloy sheet is studied by developing Forming Limit Diagrams (FLD) for a 3000-series aluminum alloy using the Marciniak and Kuczynski (M-K) technique in numerical simulation. The numerical model is implemented in LS-DYNA and incorporates Barlat's YLD2000 anisotropic yield function and the temperature-dependent Bergstrom hardening law. Three different temperatures (room temperature, 250 deg. C and 300 deg. C) are studied. For each temperature case, various loading conditions are applied to the M-K defect model. The effect of the material anisotropy is considered by varying the defect angle. A simplified failure criterion is used to predict the onset of necking. Minor and major strains are obtained from the simulations and plotted for each temperature level. It is demonstrated that temperature improves the forming limit of 3000-series aluminum alloy sheet.
Micro-PIV Study of Supercritical CO2-Water Interactions in Porous Micromodels
NASA Astrophysics Data System (ADS)
Kazemifar, Farzan; Blois, Gianluca; Christensen, Kenneth T.
2015-11-01
Multiphase flow of immiscible fluids in porous media is encountered in numerous natural systems and engineering applications such as enhanced oil recovery (EOR) and CO2 sequestration, among others. Geological sequestration of CO2 in saline aquifers has emerged as a viable option for reducing CO2 emissions, and thus it has been the subject of numerous studies in recent years. A key objective is improving the accuracy of numerical models used for field-scale simulations by incorporating or better representing the pore-scale flow physics. This necessitates experimental data for developing, testing and validating such models. We have studied drainage and imbibition processes in a homogeneous, two-dimensional porous micromodel with CO2 and water at reservoir-relevant conditions. The microscopic particle image velocimetry (micro-PIV) technique was applied to obtain spatially- and temporally-resolved velocity vector fields in the aqueous phase. The results provide new insight into the flow processes at the pore scale.
An adaptive gridless methodology in one dimension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, N.T.; Hailey, C.E.
1996-09-01
Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow trends similar to those of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
Shi, Chaoyang; Kojima, Masahiro; Tercero, Carlos; Najdovski, Zoran; Ikeda, Seiichi; Fukuda, Toshio; Arai, Fumihito; Negoro, Makoto
2014-12-01
There are several complications associated with Stent-assisted Coil Embolization (SACE) in cerebral aneurysm treatments, due to damaging operations by surgeons and undesirable mechanical properties of stents. Therefore, it is necessary to develop an in vitro simulator that provides both training and research for evaluating the mechanical properties of stents. A new in vitro simulator for three-dimensional digital subtraction angiography was constructed, followed by aneurysm models fabricated with new materials. Next, this platform was used to provide training and to conduct photoelastic stress analysis to evaluate the SACE technique. The average interaction stress increasingly varied for the two different stents. Improvements for the Maximum-Likelihood Expectation-Maximization method were developed to reconstruct cross-sections with both thickness and stress information. The technique presented can improve a surgeon's skills and quantify the performance of stents to improve mechanical design and classification. This method can contribute to three-dimensional stress and volume variation evaluation and assess a surgeon's skills. Copyright © 2013 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, I.; Sinclair, C.I.K.; Magne, E.
This paper describes the life extension of a semi-submersible drilling rig built in the early 1970s. A nominal design life of 20 years was estimated at the time of building; however, in the interim period, numerous improvements have been made in fatigue life estimation and life improvement techniques, raising the possibility that a further 20 years of operation could be considered. The life extension strategy made use of a number of aspects of offshore technology which were not available at the time of construction of the rig. Finite element studies and results from offshore research programs were used to gauge the effect of fatigue life improvement techniques. The program demonstrated the feasibility of extending the operation of the rig for a further 20 years, with the interval between in-service inspections increased to every five years. It also provided a valuable database of fracture toughness data for the rig materials, which may be used in future work to address reliability issues.
Computational wave dynamics for innovative design of coastal structures
GOTOH, Hitoshi; OKAYASU, Akio
2017-01-01
For innovative designs of coastal structures, Numerical Wave Flumes (NWFs), which are solvers of the Navier-Stokes equations for free-surface flows, are key tools. In this article, various methods and techniques for NWFs are overviewed. In the first half, key techniques of NWFs, namely interface capturing (MAC, VOF, C-CUP), and the significance of NWFs in comparison with conventional wave models are described. In the latter part of the article, recent improvements of the particle method are shown as one of the cores of NWFs. Methods for attenuating unphysical pressure fluctuation and improving accuracy, such as the CMPS method for momentum conservation, Higher-order Source of the Poisson Pressure Equation (PPE), Higher-order Laplacian, Error-Compensating Source in the PPE, and Gradient Correction for ensuring Taylor-series consistency, are reviewed briefly. Finally, the latest frontier of the accurate particle method, including Dynamic Stabilization for providing the minimum-required artificial repulsive force to improve the stability of the computation, and Space Potential Particle for describing the exact free-surface boundary condition, is described. PMID:29021506
Supporting clinician educators to achieve “work-work balance”
Maniate, Jerry; Dath, Deepak; Cooke, Lara; Leslie, Karen; Snell, Linda; Busari, Jamiu
2016-01-01
Clinician Educators (CE) have numerous responsibilities in different professional domains, including clinical, education, research, and administration. Many CEs face tensions trying to manage these often competing professional responsibilities and achieve “work-work balance.” Rich discussions of techniques for work-work balance amongst CEs at a medical education conference inspired the authors to gather, analyze, and summarize these techniques to share with others. In this paper we present the CE’s “Four Ps”; these are practice points that support both the aspiring and established CE to help improve their performance and productivity as CEs, and allow them to approach work-work balance. PMID:28344698
Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet
NASA Technical Reports Server (NTRS)
Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.
2000-01-01
This paper examines the accuracy and calculation speed of the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. The relative advantages of the various methods are also described when the speed of computation is an important consideration.
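The accuracy gap between second- and fourth-order stencils mentioned above can be reproduced with a generic one-dimensional example; the test function and step sizes below are arbitrary, and the snippet is not the electromagnet solver itself.

```python
import numpy as np

def d2_second_order(f, x, h):
    """Standard 3-point central approximation of f''(x), O(h^2)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def d2_fourth_order(f, x, h):
    """5-point central approximation of f''(x), O(h^4)."""
    return (-f(x + 2*h) + 16*f(x + h) - 30*f(x) + 16*f(x - h) - f(x - 2*h)) / (12 * h**2)

f = np.sin                       # test field with known second derivative -sin(x)
x0, exact = 1.0, -np.sin(1.0)
for h in [0.1, 0.05, 0.025]:
    e2 = abs(d2_second_order(f, x0, h) - exact)
    e4 = abs(d2_fourth_order(f, x0, h) - exact)
    print(f"h={h:<6} 2nd-order err={e2:.2e}   4th-order err={e4:.2e}")
```

Halving h cuts the second-order error by roughly a factor of 4 but the fourth-order error by roughly a factor of 16, which is why the higher-order scheme pays off when high accuracy is required on a smooth field.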
Use of makeup, hairstyles, glasses, and prosthetics as adjuncts to scar camouflage.
Sidle, Douglas M; Decker, Jennifer R
2011-08-01
Scars after facial trauma or surgery can be a source of distress for patients, and facial plastic surgeons are frequently called upon to help manage them. Although no technique can remove a scar, numerous treatment modalities have been developed to improve facial scar appearance with varying levels of invasiveness. This article reviews techniques that camouflage scars without surgical intervention. Topical scar treatments, camouflage cosmetics, use of hairstyling and glasses, and facial prosthetics are discussed. In addition, professional counseling is provided on selection and application of topical cosmetics for use as part of an office practice. 2011 Elsevier Inc. All rights reserved.
Genome editing in plants: Advancing crop transformation and overview of tools.
Shah, Tariq; Andleeb, Tayyaba; Lateef, Sadia; Noor, Mehmood Ali
2018-05-07
Genome manipulation technology is an emerging field that has brought a real revolution in genetic engineering and biotechnology. Targeted editing of genomes paves the path to address a wide range of goals, not only to improve the quality and productivity of crops but also to investigate the fundamental roots of biological systems. These goals include the creation of plants with valued compositional properties and with characters that confer resistance to numerous biotic and abiotic stresses. Numerous novel genome editing systems have been introduced during the past few years; these comprise zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and clustered regularly interspaced short palindromic repeats/Cas9 (CRISPR/Cas9). Genome editing techniques are well suited to improving average yield to meet the growing demands arising from the world's existing food shortage and to establish a feasible and environmentally safe agriculture scheme that is more specific, productive, cost-effective and eco-friendly. These exciting novel methods, concisely reviewed herein, have proven themselves as efficient and reliable tools for the genetic improvement of plants. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Adaptive OFDM Radar Waveform Design for Improved Micro-Doppler Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
Here we analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a rotating target having multiple scattering centers. The use of a frequency-diverse OFDM signal enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. We characterize the accuracy of micro-Doppler frequency estimation by computing the Cramer-Rao bound (CRB) on the angular-velocity estimate of the target. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations with respect to the signal-to-noise ratio, the number of temporal samples, and the number of OFDM subcarriers. We also analyze numerically the improvement in estimation accuracy due to the adaptive waveform design. A grid-based maximum likelihood estimation technique is applied to evaluate the corresponding mean-squared error performance.
A solution to the Navier-Stokes equations based upon the Newton Kantorovich method
NASA Technical Reports Server (NTRS)
Davis, J. E.; Gabrielsen, R. E.; Mehta, U. B.
1977-01-01
An implicit finite difference scheme based on the Newton-Kantorovich technique was developed for the numerical solution of the nonsteady, incompressible, two-dimensional Navier-Stokes equations in conservation-law form. The algorithm was second-order time-accurate, noniterative with regard to the nonlinear terms in the vorticity transport equation except at the earliest few time steps, and spatially factored. Numerical results were obtained with the technique for a circular cylinder at Reynolds number 15. The results indicate that the technique is in excellent agreement with other numerical techniques for all geometries and Reynolds numbers investigated, and they indicate a potential for significant reduction in computation time over current iterative techniques.
Cigada, Alfredo; Lurati, Massimiliano; Ripamonti, Francesco; Vanali, Marcello
2008-12-01
This paper introduces a measurement technique aimed at reducing or possibly eliminating the spatial aliasing problem in the beamforming technique. The main disadvantages of beamforming are poor spatial resolution at low frequencies and spatial aliasing at higher frequencies, the latter leading to the identification of false sources. The idea is to move the microphone array during the measurement operation. In this paper, the proposed approach is theoretically and numerically investigated by means of simple sound propagation models, proving its efficiency in reducing spatial aliasing. A number of different array configurations are numerically investigated, together with the most important parameters governing this measurement technique. A set of numerical results concerning the case of a planar rotating array is shown, together with a first experimental validation of the method.
Observation Leads to Improved Operations in Nuclear Medicine.
Religioso, Deo G
2016-01-01
The concept of observation, going out and seeing what is happening in daily operations, would seem like a normal management activity, but in practice the philosophy and technique are often underutilized. Once an observation has been made, the next steps are to test and validate any discoveries on paper. For a process change to be implemented, numerical data are needed to back up observations in order to be heard and taken seriously by the executive team. Boca Raton Regional Hospital saw an opportunity to improve the process for radiopharmaceutical standing orders within its nuclear imaging department. As a result of this observation, the facility realized improved savings and an increase in employee motivation.
Approaches to flame resistant polymeric materials
NASA Technical Reports Server (NTRS)
Liepins, R.
1975-01-01
Four research and development areas are considered for further exploration in the quest for more flame-resistant polymeric materials. It is suggested that improvements in phenolphthalein polycarbonate processability may be gained through linear free energy relationship correlations. Looped functionality in the backbone of a polymer leads to both improved thermal resistance and increased solubility. The guidelines used in pyrolytic carbon production constitute a good starting point for the development of improved flame-resistant materials. Numerous organic reactions requiring high temperatures and the techniques of protected functionality and latent functionality constitute the third area for exploration. Finally, some well-known organic reactions are suggested for the formation of polymers that have not been made before.
Efficient Robust Optimization of Metal Forming Processes using a Sequential Metamodel Based Strategy
NASA Astrophysics Data System (ADS)
Wiebenga, J. H.; Klaseboer, G.; van den Boogaard, A. H.
2011-08-01
The coupling of Finite Element (FE) simulations to mathematical optimization techniques has contributed significantly to product improvements and cost reductions in the metal forming industries. The next challenge is to bridge the gap between deterministic optimization techniques and the industrial need for robustness. This paper introduces a new and generally applicable structured methodology for modeling and solving robust optimization problems. Stochastic design variables or noise variables are taken into account explicitly in the optimization procedure. The metamodel-based strategy is combined with a sequential improvement algorithm to efficiently increase the accuracy of the objective function prediction. This is only done at regions of interest containing the optimal robust design. Application of the methodology to an industrial V-bending process resulted in valuable process insights and an improved robust process design. Moreover, a significant improvement of the robustness (>2σ) was obtained by minimizing the deteriorating effects of several noise variables. The robust optimization results demonstrate the general applicability of the robust optimization strategy and underline the importance of including uncertainty and robustness explicitly in the numerical optimization procedure.
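A compact sketch of a sequential metamodel-based robust optimization loop, assuming a cheap analytic stand-in for the forming response, one design variable, one Gaussian noise variable, and a lower-confidence-bound infill rule; the study's actual FE model, noise set, and sequential improvement criterion differ.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)

def process_response(x, z):
    """Cheap stand-in for an FE forming simulation: x = design, z = noise variable."""
    return (x - 0.3)**2 + 0.4 * np.sin(5 * x) * z + 0.1 * z**2

def robust_objective(x, n_mc=200, k=3.0):
    """Mean + k*sigma of the response over the noise variable (Monte Carlo estimate)."""
    z = rng.normal(0.0, 1.0, n_mc)
    r = process_response(x, z)
    return r.mean() + k * r.std()

# Initial design of experiments on the design variable.
X = np.linspace(0.0, 1.0, 5)
y = np.array([robust_objective(x) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
grid = np.linspace(0.0, 1.0, 201)

for _ in range(8):                                   # sequential improvement loop
    gp.fit(X.reshape(-1, 1), y)
    mu, sd = gp.predict(grid.reshape(-1, 1), return_std=True)
    x_new = grid[np.argmin(mu - sd)]                 # infill where a low value is plausible
    X = np.append(X, x_new)
    y = np.append(y, robust_objective(x_new))

print("robust optimum near x =", X[np.argmin(y)])
```

The point of the sequential step is that the expensive (here, fake) robust objective is only re-evaluated where the metamodel suggests the robust optimum may lie, rather than densely over the whole design space.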
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
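For readers unfamiliar with the Lax-Friedrichs flux splitting mentioned above, a one-dimensional Burgers' equation sketch is given below; it uses a global splitting parameter and first-order upwinding on a periodic uniform grid, unlike the second- and fourth-order 2-D Euler schemes of the study.

```python
import numpy as np

def lax_friedrichs_step(u, dt, dx):
    """One explicit step of Burgers' equation u_t + (u^2/2)_x = 0
    using global Lax-Friedrichs flux splitting and first-order upwinding."""
    f = 0.5 * u**2
    alpha = np.max(np.abs(u))              # global wave-speed bound
    f_plus = 0.5 * (f + alpha * u)         # right-going split flux
    f_minus = 0.5 * (f - alpha * u)        # left-going split flux
    # Numerical flux at each interface i+1/2 (periodic domain).
    F = f_plus + np.roll(f_minus, -1)
    return u - dt / dx * (F - np.roll(F, 1))

# Sine wave steepening into a captured shock.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)
dx = x[1] - x[0]
t, t_end = 0.0, 0.5
while t < t_end:
    dt = min(0.4 * dx / np.max(np.abs(u)), t_end - t)   # CFL-limited step
    u = lax_friedrichs_step(u, dt, dx)
    t += dt
print("min/max after shock formation:", u.min(), u.max())
```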
Recent advances in computational-analytical integral transforms for convection-diffusion problems
NASA Astrophysics Data System (ADS)
Cotta, R. M.; Naveira-Cotta, C. P.; Knupp, D. C.; Zotin, J. L. Z.; Pontes, P. C.; Almeida, A. P.
2017-10-01
A unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at bringing together some of the most recent developments on both accuracy and convergence improvements to this well-established hybrid numerical-analytical methodology for partial differential equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single-domain reformulation strategy for handling complex geometries, an integral balance scheme for dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Selected examples are then presented that illustrate the improvement achieved in each class of extension, in terms of convergence acceleration and accuracy gain, related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries, a multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate, critically compared against the traditional GITT scheme without convergence enhancement and against commercial or dedicated purely numerical approaches.
WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method
NASA Astrophysics Data System (ADS)
Crevoisier, David; Voltz, Marc
2013-04-01
To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach, and the improvement of simulation accuracy by data-assimilation techniques is now used in many application fields. The latest acquisition techniques provide a large amount of experimental data, which increases the efficiency of parameter estimation and inverse modelling approaches. In turn, simulations are often run on large temporal and spatial domains, which requires a large number of model runs. Eventually, despite the regular increase in computing capacities, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes is still a challenge. Ross (2003, Agron J; 95:1352-1361) proposed a method, solving the 1D Richards and convection-diffusion equations, that fulfils these requirements. The method is based on a non-iterative approach which reduces the risk of numerical divergence and allows the use of coarser spatial and temporal discretisations, while assuring satisfactory accuracy of the results. Crevoisier et al. (2009, Adv Wat Res; 32:936-947) proposed some technical improvements and validated this method on a wider range of agro-pedo-climatic situations. In this poster, we present the simulation code WATSFAR, which generalises the Ross method to other mathematical representations of the soil water retention curve (i.e. the standard and modified van Genuchten models) and includes a dual-permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be the least favourable when using standard numerical methods: fine-textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, ... The results of WATSFAR have been compared with the standard finite element model Hydrus. The analysis of these comparisons highlights two main advantages for WATSFAR: i) robustness: even for fine-textured soil or high water and solute fluxes, where Hydrus simulations may fail to converge, no numerical problem appears; and ii) accuracy of the simulations even for loose spatial domain discretisations, which can only be obtained by Hydrus with fine discretisations.
NASA Astrophysics Data System (ADS)
Javernick, L.; Bertoldi, W.; Redolfi, M.
2017-12-01
Accessing or acquiring high-quality, low-cost topographic data has never been easier, due to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, with the ability to capture millimetre resolution and accuracy, or large-scale areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While the overall predictive accuracy of numerical models is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate the models, assess their sensitivity, and validate their performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and have the ability to improve hydraulic model calibration. With new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. The numerical model Delft3D was calibrated through newly created metrics that quantified observed and modeled activation, deactivation, and bank erosion rates. This increased temporal data, of both high-resolution time series and long-term temporal coverage, provided significantly improved calibration routines that refined the calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. Specifically, simulations with good statistical agreement struggled to represent braided planforms (evolving toward meandering), while parameterizations that ensured braiding produced exaggerated activation and bank erosion rates. Marie Sklodowska-Curie Individual Fellowship: River-HMV, 656917
NASA Astrophysics Data System (ADS)
Raikovskiy, N. A.; Tretyakov, A. V.; Abramov, S. A.; Nazmeev, F. G.; Pavlichev, S. V.
2017-08-01
The paper presents a numerical method, based on ANSYS CFX, for studying the cooling medium flow in the water jacket of a self-lubricating sliding bearing. The results of the numerical calculations show satisfactory agreement with the empirical data obtained on the test bed. The verification data confirm the possibility of applying this numerical technique to the analysis of coolant flows in self-lubricating bearings containing a water jacket.
Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Schwartz, C. S.
2017-12-01
Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for the GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique in a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of the predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real time.
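A minimal sketch of the Bayesian linear regression step at a single station, assuming one raw-forecast predictor, a zero-mean Gaussian prior on the coefficients, and made-up wind data; the gridded version described above additionally uses ten ensemble predictors, leave-one-storm-out cross-validation, and interpolation of the coefficients back to the model grid.

```python
import numpy as np

def blr_fit(X, y, sigma2=1.0, tau2=10.0):
    """Posterior mean/covariance of regression coefficients with prior N(0, tau2*I)."""
    n_feat = X.shape[1]
    S_inv = X.T @ X / sigma2 + np.eye(n_feat) / tau2
    S = np.linalg.inv(S_inv)
    m = S @ X.T @ y / sigma2
    return m, S

def blr_correct(x_new, m, S, sigma2=1.0):
    """Predictive mean and variance of the corrected forecast."""
    mean = x_new @ m
    var = sigma2 + x_new @ S @ x_new
    return mean, var

# Toy training set: raw forecast plus an intercept column, biased "observations".
rng = np.random.default_rng(3)
raw = rng.uniform(2.0, 25.0, 80)                        # raw wind forecasts (m/s)
obs = 0.8 * raw + 1.5 + rng.normal(0.0, 1.5, raw.size)  # observed winds with bias
X = np.column_stack([np.ones_like(raw), raw])

m, S = blr_fit(X, obs, sigma2=1.5**2)
corrected, var = blr_correct(np.array([1.0, 20.0]), m, S, sigma2=1.5**2)
print(f"raw 20.0 m/s -> corrected {corrected:.1f} +/- {np.sqrt(var):.1f} m/s")
```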
Using Computational and Mechanical Models to Study Animal Locomotion
Miller, Laura A.; Goldman, Daniel I.; Hedrick, Tyson L.; Tytell, Eric D.; Wang, Z. Jane; Yen, Jeannette; Alben, Silas
2012-01-01
Recent advances in computational methods have made realistic large-scale simulations of animal locomotion possible. This has resulted in numerous mathematical and computational studies of animal movement through fluids and over substrates with the purpose of better understanding organisms’ performance and improving the design of vehicles moving through air and water and on land. This work has also motivated the development of improved numerical methods and modeling techniques for animal locomotion that is characterized by the interactions of fluids, substrates, and structures. Despite the large body of recent work in this area, the application of mathematical and numerical methods to improve our understanding of organisms in the context of their environment and physiology has remained relatively unexplored. Nature has evolved a wide variety of fascinating mechanisms of locomotion that exploit the properties of complex materials and fluids, but only recently are the mathematical, computational, and robotic tools available to rigorously compare the relative advantages and disadvantages of different methods of locomotion in variable environments. Similarly, advances in computational physiology have only recently allowed investigators to explore how changes at the molecular, cellular, and tissue levels might lead to changes in performance at the organismal level. In this article, we highlight recent examples of how computational, mathematical, and experimental tools can be combined to ultimately answer the questions posed in one of the grand challenges in organismal biology: “Integrating living and physical systems.” PMID:22988026
Hasani, Mojtaba H; Gharibzadeh, Shahriar; Farjami, Yaghoub; Tavakkoli, Jahan
2013-09-01
Various numerical algorithms have been developed to solve the Khokhlov-Kuznetsov-Zabolotskaya (KZK) parabolic nonlinear wave equation. In this work, a generalized time-domain numerical algorithm is proposed to solve the diffraction term of the KZK equation. This algorithm solves the transverse Laplacian operator of the KZK equation in three-dimensional (3D) Cartesian coordinates using a finite-difference method based on the five-point implicit backward finite difference and the five-point Crank-Nicolson finite difference discretization techniques. This leads to a more uniform discretization of the Laplacian operator which in turn results in fewer calculation gridding nodes without compromising accuracy in the diffraction term. In addition, a new empirical algorithm based on the LU decomposition technique is proposed to solve the system of linear equations obtained from this discretization. The proposed empirical algorithm improves the calculation speed and memory usage, while the order of computational complexity remains linear in calculation of the diffraction term in the KZK equation. For evaluating the accuracy of the proposed algorithm, two previously published algorithms are used as comparison references: the conventional 2D Texas code and its generalization for 3D geometries. The results show that the accuracy/efficiency performance of the proposed algorithm is comparable with the established time-domain methods.
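As a simplified one-dimensional analogue of the implicit treatment of the diffraction (Laplacian) term, the sketch below advances the heat equation with a Crank-Nicolson step and solves the resulting tridiagonal system with a banded LU-type solver. It is not the 3-D KZK code; the grid, time step, and boundary conditions are illustrative.

```python
import numpy as np
from scipy.linalg import solve_banded

def crank_nicolson_step(u, r):
    """One Crank-Nicolson step of u_t = u_xx on a uniform grid
    (Dirichlet u=0 at both ends), with r = dt / (2*dx**2).

    The implicit side is tridiagonal, so it is solved with a banded solver,
    mirroring the implicit treatment of a Laplacian term."""
    n = u.size
    # Right-hand side: (I + r*L) u^n with L the 1-D Laplacian stencil.
    rhs = u + r * (np.roll(u, -1) - 2 * u + np.roll(u, 1))
    rhs[0] = rhs[-1] = 0.0
    # Left-hand side (I - r*L) stored in banded form for solve_banded.
    ab = np.zeros((3, n))
    ab[0, 1:] = -r          # super-diagonal
    ab[1, :] = 1 + 2 * r    # main diagonal
    ab[2, :-1] = -r         # sub-diagonal
    ab[1, 0] = ab[1, -1] = 1.0
    ab[0, 1] = ab[2, -2] = 0.0          # keep boundary rows as identity
    return solve_banded((1, 1), ab, rhs)

# Gaussian beam-like initial profile diffusing on x in [0, 1].
n, dx, dt = 201, 1.0 / 200, 1.0e-4
x = np.linspace(0.0, 1.0, n)
u = np.exp(-((x - 0.5) / 0.05) ** 2)
for _ in range(500):
    u = crank_nicolson_step(u, dt / (2 * dx**2))
print("peak after diffusion:", u.max())
```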
NASA Technical Reports Server (NTRS)
VanHeukelem, Laurie; Thomas, Crystal S.; Glibert, Patricia M.
2001-01-01
The need for accurate determination of chlorophyll a (chl a) is of interest for numerous reasons. From the need for ground-truth data for remote sensing to pigment detection for laboratory experimentation, it is essential to know the accuracy of the analyses and the factors potentially contributing to variability and error. Numerous methods and instrument techniques are currently employed in the analyses of chl a. These methods range from spectrophotometric quantification, to fluorometric analysis and determination by high performance liquid chromatography. Even within the application of HPLC techniques, methods vary. Here we provide the results of a comparison among methods and provide some guidance for improving the accuracy of these analyses. These results are based on a round-robin conducted among numerous investigators, including several in the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) and HyCODE Programs. Our purpose here is not to present the full results of the laboratory intercalibration; those results will be presented elsewhere. Rather, here we highlight some of the major factors that may contribute to the variability observed. Specifically, we aim to assess the comparability of chl a analyses performed by fluorometry and HPLC, and we identify several factors in the analyses which may contribute disproportionately to this variability.
Transonic small disturbances equation applied to the solution of two-dimensional nonsteady flows
NASA Technical Reports Server (NTRS)
Couston, M.; Angelini, J. J.; Mulak, P.
1980-01-01
Transonic nonsteady flows are of large practical interest. Aeroelastic instability prediction, control-configured vehicle techniques, and rotary wings in forward flight are some examples justifying the effort undertaken to improve knowledge of these problems. The numerical solution of these problems under the potential flow hypothesis is described. The use of an alternating direction implicit scheme allows the efficient resolution of the two-dimensional transonic small perturbations equation.
Knudsen Cell Studies of Ti-Al Thermodynamics
NASA Technical Reports Server (NTRS)
Jacobson, Nathan S.; Copland, Evan H.; Mehrotra, Gopal M.; Auping, Judith; Gray, Hugh R. (Technical Monitor)
2002-01-01
In this paper we describe the Knudsen cell technique for measurement of thermodynamic activities in alloys. Numerous experimental details must be adhered to in order to obtain useful experimental data. These include introduction of an in-situ standard, precise temperature measurement, elimination of thermal gradients, and precise cell positioning. Our first design is discussed and some sample data on Ti-Al alloys is presented. The second modification and associated improvements are also discussed.
NASA Astrophysics Data System (ADS)
Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.
2015-10-01
We report a numerical method for solving the constitutive relations of nonlinear acoustics, where multiple relaxation processes are included in a generalized formulation that allows time-domain numerical solution by an explicit finite-difference scheme. The proposed physical model thus overcomes the limitations of one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK) type models and, because the Lagrangian density is implicitly included in the calculation, the proposed method also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power law attenuation and dispersion, as observed in biological media, the relaxation parameters are fitted both to media with exact frequency power law attenuation/dispersion and to empirically measured attenuation of a variety of tissues that does not follow an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite difference scheme and, on the other hand, to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-difference schemes, leading to fast calculations. The present algorithm is especially suited for practical configurations where spatial discontinuities are present in the domain (e.g. axisymmetric domains or zero normal velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.
Mukdadi, Osama; Shandas, Robin
2004-01-01
Nonlinear wave propagation in tissue can be employed for tissue harmonic imaging, ultrasound surgery, and more effective tissue ablation with high intensity focused ultrasound (HIFU). Wave propagation in soft tissue and scattering from microbubbles (ultrasound contrast agents) are modeled to improve the detectability, signal-to-noise ratio, and contrast harmonic imaging used for the echo particle image velocimetry (Echo-PIV) technique. The wave motion in the nonlinear material (tissue) is studied using a KZK-type parabolic evolution equation. This model considers ultrasound beam diffraction, attenuation, and tissue nonlinearity. The time-domain numerical model is based on that originally developed by Lee and Hamilton [J. Acoust. Soc. Am. 97:906-917 (1995)] for an axisymmetric acoustic field. The initial acoustic waveform emitted from the transducer is assumed to be a broadband wave modulated by a Gaussian envelope. Scattering from microbubbles seeded in the blood stream is characterized: we compute the pressure field impinging on the wall of a coated microbubble, and the dynamics of the oscillating microbubble are modeled using a Rayleigh-Plesset-type equation. Here, the continuity and radial-momentum equations of encapsulated microbubbles are used to account for the lipid layer surrounding the microbubble. Numerical results show the effects of tissue and microbubble nonlinearities on the propagating pressure wave field. These nonlinearities have a strong influence on the waveform distortion and harmonic generation of the propagating and scattered waves. Results also show that microbubbles have stronger nonlinearity than tissue, which improves the S/N ratio. These theoretical predictions of wave phenomena provide further understanding of the biomedical imaging technique and support better system design.
NASA Astrophysics Data System (ADS)
Khatir, Samir; Dekemele, Kevin; Loccufier, Mia; Khatir, Tawfiq; Abdel Wahab, Magd
2018-02-01
In this paper, a technique is presented for the detection and localization of an open crack in beam-like structures using experimentally measured natural frequencies and the Particle Swarm Optimization (PSO) method. The technique considers the variation in local flexibility near the crack. The natural frequencies of a cracked beam are determined experimentally and numerically using the Finite Element Method (FEM). The optimization algorithm is programmed in MATLAB. The algorithm is used to estimate the location and severity of a crack by minimizing the differences between measured and calculated frequencies. The method is verified using experimentally measured data on a cantilever steel beam. The Fourier transform is adopted to improve the frequency resolution. The results demonstrate the good accuracy of the proposed technique.
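A minimal sketch of the PSO-based identification, assuming a toy surrogate that maps (crack location, severity) to three natural frequencies; the real study computes frequencies with FEM and measures them on a cantilever steel beam, so the surrogate and its numbers below are purely illustrative.

```python
import numpy as np

# Hypothetical stand-in for the FE model: maps (crack location, severity) to the
# first three natural frequencies via simple frequency-reduction factors.
F_INTACT = np.array([12.5, 78.3, 219.4])   # Hz, assumed baseline frequencies

def model_frequencies(loc, sev):
    # Frequencies drop more when the crack is near the clamped end (loc -> 0).
    return F_INTACT * (1.0 - sev * np.exp(-3.0 * loc) * np.array([0.30, 0.20, 0.12]))

f_measured = model_frequencies(0.35, 0.4) * (1 + 1e-3)   # "experimental" data

def objective(p):
    loc, sev = p
    return np.sum((model_frequencies(loc, sev) - f_measured) ** 2)

def pso(obj, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=4):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([obj(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest

bounds = np.array([[0.0, 1.0], [0.0, 0.8]])   # normalized location, severity
print("estimated (location, severity):", pso(objective, bounds))
```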
Wavelet Algorithms for Illumination Computations
NASA Astrophysics Data System (ADS)
Schroder, Peter
One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.
Li, Bai; Lin, Mu; Liu, Qiao; Li, Ya; Zhou, Changjun
2015-10-01
Protein folding is a fundamental topic in molecular biology. Conventional experimental techniques for protein structure identification or protein folding recognition require strict laboratory requirements and heavy operating burdens, which have largely limited their applications. Alternatively, computer-aided techniques have been developed to optimize protein structures or to predict the protein folding process. In this paper, we utilize a 3D off-lattice model to describe the original protein folding scheme as a simplified energy-optimal numerical problem, where all types of amino acid residues are binarized into hydrophobic and hydrophilic ones. We apply a balance-evolution artificial bee colony (BE-ABC) algorithm as the minimization solver, which is featured by the adaptive adjustment of search intensity to cater for the varying needs during the entire optimization process. In this work, we establish a benchmark case set with 13 real protein sequences from the Protein Data Bank database and evaluate the convergence performance of BE-ABC algorithm through strict comparisons with several state-of-the-art ABC variants in short-term numerical experiments. Besides that, our obtained best-so-far protein structures are compared to the ones in comprehensive previous literature. This study also provides preliminary insights into how artificial intelligence techniques can be applied to reveal the dynamics of protein folding. Graphical Abstract Protein folding optimization using 3D off-lattice model and advanced optimization techniques.
An overview of groundwater chemistry studies in Malaysia.
Kura, Nura Umar; Ramli, Mohammad Firuz; Sulaiman, Wan Nor Azmin; Ibrahim, Shaharin; Aris, Ahmad Zaharin
2018-03-01
In this paper, numerous studies on groundwater in Malaysia were reviewed with the aim of evaluating past trends and the current status for discerning the sustainability of the water resources in the country. It was found that most of the previous groundwater studies (44 %) focused on the islands and mostly concentrated on qualitative assessment, with more emphasis being placed on seawater intrusion studies. This was followed by inland-based studies, with Selangor state leading the studies, which reflects the current water challenges facing the state. From a methodological perspective, geophysics, graphical methods, and statistical analysis are the dominant techniques (38, 25, and 25 %, respectively). The geophysical methods, especially the 2D resistivity method, cut across many subjects such as seawater intrusion studies, quantitative assessment, and hydraulic parameter estimation. The statistical techniques used include multivariate statistical analysis techniques and ANOVA, among others, most of which are quality-related studies using major ions, in situ parameters, and heavy metals. Conversely, numerical techniques like MODFLOW were somewhat less favored, which is likely due to their inherent complexity and high data demand. This work will facilitate researchers in identifying the specific areas which need improvement and focus, while, at the same time, providing policymakers and managers with an executive summary and knowledge of the current situation in groundwater studies and where more work needs to be done for sustainable development.
JAXA protein crystallization in space: ongoing improvements for growing high-quality crystals
Takahashi, Sachiko; Ohta, Kazunori; Furubayashi, Naoki; Yan, Bin; Koga, Misako; Wada, Yoshio; Yamada, Mitsugu; Inaka, Koji; Tanaka, Hiroaki; Miyoshi, Hiroshi; Kobayashi, Tomoyuki; Kamigaichi, Shigeki
2013-01-01
The Japan Aerospace Exploration Agency (JAXA) started a high-quality protein crystal growth project, now called JAXA PCG, on the International Space Station (ISS) in 2002. Using the counter-diffusion technique, 14 sessions of experiments have been performed as of 2012 with 580 proteins crystallized in total. Over the course of these experiments, a user-friendly interface framework for high accessibility has been constructed and crystallization techniques improved; devices to maximize the use of the microgravity environment have been designed, resulting in some high-resolution crystal growth. If crystallization conditions were carefully fixed in ground-based experiments, high-quality protein crystals grew in microgravity in many experiments on the ISS, especially when a highly homogeneous protein sample and a viscous crystallization solution were employed. In this article, the current status of JAXA PCG is discussed, and a rational approach to high-quality protein crystal growth in microgravity based on numerical analyses is explained. PMID:24121350
Optimizing laser crater enhanced Raman scattering spectroscopy
NASA Astrophysics Data System (ADS)
Lednev, V. N.; Sdvizhenskii, P. A.; Grishin, M. Ya.; Fedorov, A. N.; Khokhlova, O. V.; Oshurko, V. B.; Pershin, S. M.
2018-05-01
The laser crater enhanced Raman scattering (LCERS) spectroscopy technique has been systematically studied in terms of sampling strategy and the influence of powder material properties on spectral intensity enhancement. The same nanosecond pulsed solid-state Nd:YAG laser (532 nm, 10 ns, 0.1-1.5 mJ/pulse) was used for laser crater production and Raman scattering experiments on L-aspartic acid powder. The increased sampling area inside the crater cavity is the key factor for Raman signal improvement in the LCERS technique; thus, Raman signal enhancement was studied as a function of numerous experimental parameters, including lens-to-sample distance and the wavelength (532 and 1064 nm) and laser pulse energy used for crater production. Combining laser pulses of 1064 and 532 nm wavelengths for crater ablation was shown to be an effective way to achieve additional LCERS signal improvement. Powder material properties (particle size distribution, powder compactness) were demonstrated to affect LCERS measurements, with better results achieved for smaller particles and lower compactness.
Fast optically sectioned fluorescence HiLo endomicroscopy.
Ford, Tim N; Lim, Daryl; Mertz, Jerome
2012-02-01
We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies.
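A simplified sketch of the HiLo fusion described above: the local contrast of the uniform/structured difference image selects the in-focus low-frequency content, which is combined with the high-pass part of the uniform image. The synthetic images, filter width, and scaling factor are illustrative assumptions; the published algorithm includes further normalization and bandpass steps.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hilo(uniform, structured, sigma=4.0, eta=2.0):
    """Simplified HiLo fusion of a uniform- and a structured-illumination image.

    In-focus low frequencies are weighted by the local contrast of the
    difference image (out-of-focus light washes the structure out), while
    high frequencies are taken directly from the uniform image, which is
    inherently sectioned at high spatial frequencies."""
    diff = uniform - structured
    local_mean = gaussian_filter(uniform, sigma) + 1e-9
    local_std = np.sqrt(np.clip(
        gaussian_filter(diff**2, sigma) - gaussian_filter(diff, sigma)**2, 0, None))
    contrast = local_std / local_mean
    lo = gaussian_filter(contrast * uniform, sigma)     # sectioned low-pass part
    hi = uniform - gaussian_filter(uniform, sigma)      # high-pass part of uniform image
    return eta * lo + hi

# Synthetic demo: in-focus grid pattern plus smooth out-of-focus background.
rng = np.random.default_rng(5)
y, x = np.mgrid[0:256, 0:256]
in_focus = (np.sin(x / 4.0) > 0).astype(float)
background = gaussian_filter(rng.random((256, 256)), 30) * 4.0
uniform = in_focus + background
structured = in_focus * (0.5 + 0.5 * np.sin(y / 3.0)) + background  # modulation survives only in focus
print(hilo(uniform, structured).shape)
```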
Fast optically sectioned fluorescence HiLo endomicroscopy
NASA Astrophysics Data System (ADS)
Ford, Tim N.; Lim, Daryl; Mertz, Jerome
2012-02-01
We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies.
NASA Technical Reports Server (NTRS)
Salikuddin, M.; Brown, W. H.; Ramakrishnan, R.; Tanna, H. K.
1983-01-01
An improved acoustic impulse technique was developed and used to study the transmission characteristics of duct/nozzle systems. To accomplish this objective, various problems associated with the existing spark-discharge impulse technique were first studied. These included (1) the nonlinear behavior of high intensity pulses, (2) the contamination of the signal with flow noise, (3) low signal-to-noise ratio at high exhaust velocities, and (4) the inability to control or shape the signal generated by the source, especially when multiple spark points were used as the source. The first step in resolving these problems was the replacement of the spark-discharge source with electroacoustic driver(s). Further improvements included (1) synthesizing an acoustic impulse with acoustic driver(s) to control and shape the output signal, (2) time domain signal averaging to remove flow noise from the contaminated signal, (3) signal editing to remove unwanted portions of the time history, (4) spectral averaging, and (5) numerical smoothing. The acoustic power measurement technique was improved by taking multiple in-duct measurements and by a modal decomposition process to account for the contribution of higher order modes in the power computation. The improved acoustic impulse technique was then validated by comparing the results with those derived by an impedance tube method. The mechanism of acoustic power loss that occurs when sound is transmitted through nozzle terminations was investigated. Finally, the refined impulse technique was applied to obtain more accurate results for the acoustic transmission characteristics of a conical nozzle and a multi-lobe, multi-tube suppressor nozzle.
Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa
2008-01-01
This research's main goals were to build a predictor for a turnaround time (TAT) indicator for estimating its values and to use a numerical clustering technique for finding possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. A multiple linear regression predictor of the TAT indicator was built, and clustering techniques were used to improve corrective maintenance task efficiency in a clinical engineering department (CED). The indicator being studied was turnaround time (TAT). Multiple linear regression was used for building a predictive TAT value model. The variables contributing to this model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
An improved DPSM technique for modelling ultrasonic fields in cracked solids
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique
2007-04-01
In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM, several point sources are placed near the transducer face, interfaces and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the strategically placed layers of point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can address the shadow region problem to some extent; complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, and by the proposed modified technique, which nullifies their contributions, are compared. One application of this research is the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
Improved modeling of turbulent forced convection heat transfer in straight ducts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rokni, M.; Sunden, B.
1999-08-01
This investigation concerns numerical calculation of turbulent forced convective heat transfer and fluid flow in their fully developed state at low Reynolds number. The authors have developed a low Reynolds number version of the nonlinear κ-ε model combined with the heat flux models of simple eddy diffusivity (SED), a low Reynolds number version of the generalized gradient diffusion hypothesis (GGDH), and wealth ∝ earning × time (WET) in general three-dimensional geometries. The numerical approach is based on the finite volume technique with a nonstaggered grid arrangement and the SIMPLEC algorithm. Results have been obtained with the nonlinear κ-ε model combined with the Lam-Bremhorst and the Abe-Kondoh-Nagano damping functions for low Reynolds numbers.
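As a minimal illustration of the simple eddy diffusivity (SED) closure named above, the sketch below models the turbulent heat flux as a gradient-diffusion term scaled by the eddy viscosity and a constant turbulent Prandtl number. The field values and the Prandtl number of 0.9 are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np

def sed_heat_flux(nu_t, dT_dx, pr_t=0.9):
    """Simple eddy diffusivity (SED) closure: turbulent heat flux modelled
    as -nu_t/Pr_t * dT/dx (gradient diffusion with a constant turbulent
    Prandtl number).  Illustrative sketch only."""
    return -nu_t / pr_t * np.asarray(dT_dx)

# hypothetical eddy viscosity and temperature gradient at a few cells
nu_t = np.array([1e-4, 2e-4, 3e-4])      # m^2/s
dT_dx = np.array([50.0, 40.0, 30.0])     # K/m
print(sed_heat_flux(nu_t, dT_dx))
```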
Extension of transonic flow computational concepts in the analysis of cavitated bearings
NASA Technical Reports Server (NTRS)
Vijayaraghavan, D.; Keith, T. G., Jr.; Brewe, D. E.
1990-01-01
An analogy between the mathematical modeling of transonic potential flow and the flow in a cavitating bearing is described. Based on the similarities, characteristics of the cavitated region and jump conditions across the film reformation and rupture fronts are developed using the method of weak solutions. The mathematical analogy is extended by utilizing a few computational concepts of transonic flow to numerically model the cavitating bearing. Methods of shock fitting and shock capturing are discussed. Various procedures used in transonic flow computations are adapted to bearing cavitation applications, for example, type differencing, grid transformation, an approximate factorization technique, and Newton's iteration method. These concepts have proved to be successful and have vastly improved the efficiency of numerical modeling of cavitated bearings.
Numerical simulation of transient, incongruent vaporization induced by high power laser
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, C.H.
1981-01-01
A mathematical model and numerical calculations were developed to solve the heat and mass transfer problems specifically for uranium oxide subject to laser irradiation. It can easily be modified for other heat sources and/or other materials. In the uranium-oxygen system, oxygen is the preferentially vaporizing component, and as a result of the finite mobility of oxygen in the solid, an oxygen deficiency is set up near the surface. Because of the bivariant behavior of uranium oxide, the heat transfer problem and the oxygen diffusion problem are coupled, and a numerical method of simultaneously solving the two boundary value problems is studied. The temperature dependence of the thermal properties and oxygen diffusivity, as well as the highly ablative effect on the surface, leads to considerable nonlinearities in both the governing differential equations and the boundary conditions. Based on the earlier work done in this laboratory by Olstad and Olander on iron and on zirconium hydride, the generality of the problem is expanded and the efficiency of the numerical scheme is improved. The finite difference method, along with some advanced numerical techniques, is found to be an efficient way to solve this problem.
Numerical Characterization of Piezoceramics Using Resonance Curves
Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar
2016-01-01
Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods. PMID:28787875
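The basic fitting strategy described above, an FEM forward model inside an iterative loop that minimizes the gap between numerical and experimental impedance curves, can be sketched as an outer least-squares problem. In the hedged example below the FEM solver is replaced by a hypothetical placeholder `fem_impedance`, and the three-parameter material vector is a drastic simplification of the 20 complex constants mentioned in the abstract.

```python
import numpy as np
from scipy.optimize import least_squares

def fem_impedance(material_constants, freqs):
    """Placeholder for the FEM computation of a piezoelectric disk's
    electrical impedance curve; a real implementation would assemble and
    solve the coupled piezoelectric FEM problem at each frequency."""
    c, e, eps = material_constants  # hypothetical reduced parameter set
    return np.abs(1.0 / (1j * 2 * np.pi * freqs * eps) + c / (1.0 + 1j * e * freqs))

def residual(params, freqs, z_measured):
    # difference between the numerical and the experimental impedance curve
    return fem_impedance(params, freqs) - z_measured

freqs = np.linspace(1e5, 5e5, 200)                     # Hz
z_measured = fem_impedance((1.0, 1e-6, 1e-9), freqs)   # synthetic "experiment"
fit = least_squares(residual, x0=(0.5, 5e-7, 5e-10), args=(freqs, z_measured))
print(fit.x)   # recovered material constants
```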
Numerical Characterization of Piezoceramics Using Resonance Curves.
Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar
2016-01-27
Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods.
Formation Flying Design and Applications in Weak Stability Boundary Regions
NASA Technical Reports Server (NTRS)
Folta, David
2003-01-01
Weak stability regions serve as superior locations for interferometric scientific investigations. These regions are often selected to minimize environmental disturbances and maximize observing efficiency. The design of formations in these regions is becoming ever more challenging as more complex missions are envisioned. Algorithms for formation design must be further developed to incorporate a better understanding of the WSB solution space. This development will improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple formation missions in WSB regions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes both algorithm and software development. The Constellation-X, Maxim, and Stellar Imager missions are examples of the use of improved numerical methods for attaining constrained formation geometries and controlling their dynamical evolution. This paper presents a survey of formation missions in the WSB regions and a brief description of the formation design using numerical and dynamical techniques.
Formation flying design and applications in weak stability boundary regions.
Folta, David
2004-05-01
Weak stability regions serve as superior locations for interferometric scientific investigations. These regions are often selected to minimize environmental disturbances and maximize observation efficiency. Designs of formations in these regions are becoming ever more challenging as more complex missions are envisioned. Algorithms for formation design must be further developed to incorporate a better understanding of the weak stability boundary solution space. This development will improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple formation missions in weak stability boundary regions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes both algorithm and software development. The Constellation-X, Maxim, and Stellar Imager missions are examples of the use of improved numeric methods to attain constrained formation geometries and control their dynamical evolution. This paper presents a survey of formation missions in the weak stability boundary regions and a brief description of formation design using numerical and dynamical techniques.
Mesoscale Assimilation of TMI Rainfall Data with 4DVAR: Sensitivity Studies
NASA Technical Reports Server (NTRS)
Tao, Wei-Kuo; Pu, Zhaoxia
2003-01-01
Sensitivity studies are performed on the assimilation of TRMM (Tropical Rainfall Measurement Mission) Microwave Imager (TMI) derived rainfall data into a mesoscale model using a four-dimensional variational data assimilation (4DVAR) technique. A series of numerical experiments is conducted to evaluate the impact of TMI rainfall data on the numerical simulation of Hurricane Bonnie (1998). The results indicate that rainfall data assimilation is sensitive to the error characteristics of the data and the inclusion of physics in the adjoint and forward models. In addition, assimilating the rainfall data alone is helpful for producing a more realistic eye and rain bands in the hurricane but does not ensure improvements in hurricane intensity forecasts. Further study indicated that it is necessary to incorporate TMI rainfall data together with other types of data such as wind data into the model, in which case the inclusion of the rainfall data further improves the intensity forecast of the hurricane. This implies that proper constraints may be needed for rainfall assimilation.
Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.
2010-01-01
The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single capillary, BTEX model. A facilitated transporter model is used to describe mass transfer between plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
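Of the schemes compared above, the plain MacCormack method is the simplest to sketch. The following is a generic predictor-corrector MacCormack step for 1-D linear advection with periodic boundaries, standing in for convection-dominated transport in a BTEX unit; it is not the authors' implementation, and the wave speed, grid and time step are illustrative assumptions.

```python
import numpy as np

def maccormack_advection(u, c, dx, dt, nsteps):
    """MacCormack predictor-corrector scheme for u_t + c u_x = 0 with
    periodic boundaries (forward difference in the predictor, backward
    difference in the corrector)."""
    u = u.copy()
    lam = c * dt / dx
    for _ in range(nsteps):
        # predictor: forward spatial difference
        up = u - lam * (np.roll(u, -1) - u)
        # corrector: backward spatial difference on the predicted values
        u = 0.5 * (u + up - lam * (up - np.roll(up, 1)))
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-200 * (x - 0.3) ** 2)          # initial concentration pulse
u1 = maccormack_advection(u0, c=1.0, dx=x[1] - x[0], dt=0.5 * (x[1] - x[0]), nsteps=400)
```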
NASA Astrophysics Data System (ADS)
Agarwal, P.; El-Sayed, A. A.
2018-06-01
In this paper, a new numerical technique for solving the fractional order diffusion equation is introduced. This technique basically depends on the Non-Standard finite difference method (NSFD) and the Chebyshev collocation method, where the fractional derivatives are described in the Caputo sense. The Chebyshev collocation method with the NSFD method is used to convert the problem into a system of algebraic equations. These equations are solved numerically using Newton's iteration method. The applicability, reliability, and efficiency of the presented technique are demonstrated through some given numerical examples.
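For orientation on the Caputo derivative used above, the sketch below evaluates the widely used L1 finite-difference approximation of a Caputo derivative of order 0 < α < 1 on a uniform grid and checks it against the known derivative of t². This is a generic discretization for illustration only; it is not the Chebyshev operational-matrix/NSFD construction of the paper.

```python
import numpy as np
from math import gamma

def caputo_l1(u, dt, alpha):
    """L1 approximation of the Caputo derivative of order 0 < alpha < 1 at
    the final time level, given the history u[0..n] on a uniform grid."""
    n = len(u) - 1
    coef = dt ** (-alpha) / gamma(2.0 - alpha)
    # b_k = (k+1)^(1-alpha) - k^(1-alpha), k = 0..n-1
    b = np.arange(1, n + 1) ** (1.0 - alpha) - np.arange(0, n) ** (1.0 - alpha)
    diffs = u[1:][::-1] - u[:-1][::-1]        # u_{n-k} - u_{n-k-1}
    return coef * np.dot(b, diffs)

t = np.linspace(0.0, 1.0, 2001)
alpha = 0.5
approx = caputo_l1(t ** 2, t[1] - t[0], alpha)
exact = 2.0 / gamma(3.0 - alpha)              # Caputo derivative of t**2 at t = 1
print(approx, exact)
```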
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Delle Monache, L.; Alessandrini, S.
2016-12-01
Accuracy of weather forecasts in the Northeast U.S. has become very important in recent years, given the serious and devastating effects of extreme weather events. Despite the use of evolved forecasting tools and techniques strengthened by increased super-computing resources, weather forecasting systems still have limitations in predicting extreme events. In this study, we examine the combination of analog ensemble and Bayesian regression techniques to improve the prediction of storms that have impacted the NE U.S., mostly defined by the occurrence of high wind speeds (i.e. blizzards, winter storms, hurricanes and thunderstorms). The wind speed, wind direction and temperature predicted by two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) are combined using the mentioned techniques, exploring the various ways those variables influence the minimization of the prediction error (systematic and random). This study is focused on retrospective simulations of 146 storms that affected the NE U.S. in the period 2005-2016. To evaluate the techniques, a leave-one-out cross-validation procedure was implemented, with the remaining 145 storms used as the training dataset. The analog ensemble method selects a set of past observations that correspond to the best analogs of the numerical weather prediction and provides a set of ensemble members drawn from the selected observation dataset. The set of ensemble members can then be used in a deterministic or probabilistic way. In the Bayesian regression framework, optimal variances are estimated for the training partition by minimizing the root mean square error and are applied to the out-of-sample storm. The preliminary results indicate a significant improvement in the statistical metrics of 10-m wind speed for the 146 storms using both techniques (20-30% bias and error reduction in all observation-model pairs). In this presentation, we discuss the various combinations of atmospheric predictors and techniques and illustrate how the long record of predicted storms is valuable in the improvement of wind speed prediction.
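A bare-bones version of the analog ensemble step described above might look like the following: for the current model forecast, the k most similar past forecasts (in a weighted predictor space) are located and their verifying observations are returned as ensemble members. The predictor set, weights and the random synthetic data are assumptions for illustration; the operational method also matches forecasts over a short time window around the prediction time.

```python
import numpy as np

def analog_ensemble(current_pred, past_preds, past_obs, k=20, weights=None):
    """Minimal analog-ensemble sketch: return the observations that verified
    the k past forecasts closest to the current forecast."""
    past_preds = np.asarray(past_preds, dtype=float)
    current_pred = np.asarray(current_pred, dtype=float)
    w = np.ones(past_preds.shape[1]) if weights is None else np.asarray(weights)
    dist = np.sqrt(((past_preds - current_pred) ** 2 * w).sum(axis=1))
    best = np.argsort(dist)[:k]
    return np.asarray(past_obs)[best]

# hypothetical predictors: [wind speed (m/s), wind direction/100 (deg), temperature (K)]
rng = np.random.default_rng(0)
past_preds = rng.random((1000, 3)) * [20.0, 3.6, 30.0] + [0.0, 0.0, 260.0]
past_obs = rng.random(1000) * 20.0          # verifying 10-m wind speeds
members = analog_ensemble([12.0, 2.1, 275.0], past_preds, past_obs, k=25)
print(members.mean(), members.std())        # deterministic mean and spread
```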
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Q. Y.; Fu, Ricky K. Y.; Chu, Paul K.
2009-08-10
The implantation energy and retained dose uniformity in enhanced glow discharge plasma immersion ion implantation (EGD-PIII) is investigated numerically and experimentally. Depth profiles obtained from different samples processed by EGD-PIII and traditional PIII are compared. The retained doses under different pulse widths are calculated by integrating the area under the depth profiles. Our results indicate that the improvement in the impact energy and retained dose uniformity by this technique is remarkable.
Design Issues for Traffic Management for the ATM UBR + Service for TCP Over Satellite Networks
NASA Technical Reports Server (NTRS)
Jain, Raj
1999-01-01
This project was a comprehensive research program for developing techniques for improving the performance of Internet protocols over Asynchronous Transfer Mode (ATM) based satellite networks. Among the service categories provided by ATM networks, the most commonly used category for data traffic is the unspecified bit rate (UBR) service. UBR allows sources to send data into the network without any feedback control. The project resulted in numerous ATM Forum contributions and papers.
Evaluating waste printed circuit boards recycling: Opportunities and challenges, a mini review.
Awasthi, Abhishek Kumar; Zlamparet, Gabriel Ionut; Zeng, Xianlai; Li, Jinhui
2017-04-01
Rapid generation of waste printed circuit boards has become a very serious issue worldwide. Numerous techniques have been developed in the last decade to resolve the pollution from waste printed circuit boards and to recover valuable metals from the waste printed circuit board stream on a large scale. However, these techniques have their own specific drawbacks that need to be properly addressed. In this review article, these recycling technologies are evaluated based on a strengths, weaknesses, opportunities and threats analysis. Furthermore, substantial research is still required to improve the current technologies for waste printed circuit board recycling with a view to large-scale applications.
Modelling low velocity impact induced damage in composite laminates
NASA Astrophysics Data System (ADS)
Shi, Yu; Soutis, Constantinos
2017-12-01
The paper presents recent progress on modelling low velocity impact induced damage in fibre reinforced composite laminates. It is important to understand the mechanisms of barely visible impact damage (BVID) and how it affects structural performance. To reduce labour intensive testing, the development of finite element (FE) techniques for simulating impact damage becomes essential, and recent effort by the composites research community is reviewed in this work. The FE-predicted damage initiation and propagation can be validated by Non-Destructive Techniques (NDT), which gives confidence in the developed numerical damage models. A reliable damage simulation can assist the design process to optimise laminate configurations, reduce weight and improve the performance of components and structures used in aircraft construction.
NASA Technical Reports Server (NTRS)
Bozeman, Robert E.
1987-01-01
An analytic technique for accounting for the joint effects of Earth oblateness and atmospheric drag on close-Earth satellites is investigated. The technique is analytic in the sense that explicit solutions to the Lagrange planetary equations are given; consequently, no numerical integrations are required in the solution process. The atmospheric density in the technique described is represented by a rotating spherical exponential model with superposed effects of the oblate atmosphere and the diurnal variations. A computer program implementing the process is discussed and sample output is compared with output from program NSEP (Numerical Satellite Ephemeris Program). NSEP uses a numerical integration technique to account for atmospheric drag effects.
Speaker emotion recognition: from classical classifiers to deep neural networks
NASA Astrophysics Data System (ADS)
Mezghani, Eya; Charfeddine, Maha; Nicolas, Henri; Ben Amar, Chokri
2018-04-01
Speaker emotion recognition has been considered among the most challenging tasks in recent years. In fact, automatic systems for security, medicine or education can be improved by considering the affective state of speech. In this paper, a twofold approach for speech emotion classification is proposed: first, a relevant set of features is adopted; second, numerous supervised training techniques, involving classical methods as well as deep learning, are evaluated. Experimental results indicate that deep architectures can improve classification performance on two affective databases, the Berlin Dataset of Emotional Speech and the SAVEE (Surrey Audio-Visual Expressed Emotion) Dataset.
Parallelized modelling and solution scheme for hierarchically scaled simulations
NASA Technical Reports Server (NTRS)
Padovan, Joe
1995-01-01
This two-part paper presents the results of a benchmarked analytical-numerical investigation into the operational characteristics of a unified parallel processing strategy for implicit fluid mechanics formulations. This hierarchical poly tree (HPT) strategy is based on multilevel substructural decomposition. The tree morphology is chosen to minimize memory, communications and computational effort. The methodology is general enough to apply to existing finite difference (FD), finite element (FEM), finite volume (FV) or spectral element (SE) based computer programs without an extensive rewrite of code. In addition to large reductions in memory, communications, and computational effort in a parallel computing environment, substantial reductions are generated in the sequential mode of application. Such improvements grow with increasing problem size. Along with a theoretical development of general 2-D and 3-D HPT, several techniques for expanding the problem size that the current generation of computers is capable of solving are presented and discussed. Among these techniques are several interpolative reduction methods. It was found that, by combining several of these techniques, a relatively small interpolative reduction resulted in substantial performance gains. Several other unique features/benefits are discussed in this paper. Along with Part 1's theoretical development, Part 2 presents a numerical approach to the HPT along with four prototype CFD applications. These demonstrate the potential of the HPT strategy.
Improving finite element results in modeling heart valve mechanics.
Earl, Emily; Mohammadi, Hadi
2018-06-01
Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.
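A much reduced, one-dimensional caricature of the idea above, enhancing coarse-mesh results with a least-squares fit and finite differences, is sketched below: nodal strain values from a hypothetical coarse mesh are fitted with a least-squares polynomial, evaluated on a finer grid, and differentiated there. The data, polynomial degree and 1-D setting are assumptions; the actual continuum model operates on the full strain tensor of the finite element mesh.

```python
import numpy as np

# Hypothetical coarse-mesh nodal values of one strain component along a
# line of nodes (standing in for a coarse finite element solution).
x_coarse = np.linspace(0.0, 1.0, 11)
strain_coarse = 0.05 * x_coarse ** 3

# Least-squares polynomial fit of the nodal strains (a 1-D stand-in for
# the least-squares step of the continuum model).
coeffs = np.polyfit(x_coarse, strain_coarse, deg=3)

# Evaluate on a finer grid and take a finite-difference derivative there.
x_fine = np.linspace(0.0, 1.0, 101)
strain_fine = np.polyval(coeffs, x_fine)
dstrain_fine = np.gradient(strain_fine, x_fine)
```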
NASA Astrophysics Data System (ADS)
Wang, Zhen; Cui, Shengcheng; Yang, Jun; Gao, Haiyang; Liu, Chao; Zhang, Zhibo
2017-03-01
We present a novel hybrid scattering order-dependent variance reduction method to accelerate the convergence rate in both forward and backward Monte Carlo radiative transfer simulations involving highly forward-peaked scattering phase function. This method is built upon a newly developed theoretical framework that not only unifies both forward and backward radiative transfer in scattering-order-dependent integral equation, but also generalizes the variance reduction formalism in a wide range of simulation scenarios. In previous studies, variance reduction is achieved either by using the scattering phase function forward truncation technique or the target directional importance sampling technique. Our method combines both of them. A novel feature of our method is that all the tuning parameters used for phase function truncation and importance sampling techniques at each order of scattering are automatically optimized by the scattering order-dependent numerical evaluation experiments. To make such experiments feasible, we present a new scattering order sampling algorithm by remodeling integral radiative transfer kernel for the phase function truncation method. The presented method has been implemented in our Multiple-Scaling-based Cloudy Atmospheric Radiative Transfer (MSCART) model for validation and evaluation. The main advantage of the method is that it greatly improves the trade-off between numerical efficiency and accuracy order by order.
NASA Astrophysics Data System (ADS)
Tanioka, Y.; Miranda, G. J. A.; Gusman, A. R.
2017-12-01
Recently, tsunami early warning techniques have been improved using tsunami waveforms observed at ocean bottom pressure gauges such as the NOAA DART system or the DONET and S-NET systems in Japan. However, for early warning of near-field tsunamis, it is essential to determine appropriate source models using seismological analysis before large tsunamis hit the coast, especially for tsunami earthquakes, which generate significantly large tsunamis. In this paper, we develop a technique to determine appropriate source models from which appropriate tsunami inundation along the coast can be numerically computed. The technique is tested for four large earthquakes, the 1992 Nicaragua tsunami earthquake (Mw7.7), the 2001 El Salvador earthquake (Mw7.7), the 2004 El Astillero earthquake (Mw7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw7.3), which occurred off Central America. In this study, fault parameters were estimated from the W-phase inversion, then the fault length and width were determined from scaling relationships. At first, the slip amount was calculated from the seismic moment with a constant rigidity of 3.5 x 10**10 N/m2. The tsunami numerical simulation was carried out and compared with the observed tsunami. For the 1992 Nicaragua tsunami earthquake, the computed tsunami was much smaller than the observed one. For the 2004 El Astillero earthquake, the computed tsunami was overestimated. To solve this problem, we constructed a depth-dependent rigidity curve, similar to that suggested by Bilek and Lay (1999). The curve, with a central depth estimated by the W-phase inversion, was used to calculate the slip amount of the fault model. Using these new slip amounts, the tsunami numerical simulations were carried out again. The observed tsunami heights, run-up heights, and inundation areas for the 1992 Nicaragua tsunami earthquake were then well explained by the computed results. The tsunamis from the other three earthquakes were also reasonably well explained by the computed results. Therefore, our technique using a depth-dependent rigidity curve works to estimate an appropriate fault model which reproduces tsunami heights near the coast in Central America. The technique may also work in other subduction zones by finding a depth-dependent rigidity curve for that particular subduction zone.
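The slip calculation described above follows directly from the definition of the seismic moment, slip = M0 / (μ L W). The sketch below contrasts the constant rigidity of 3.5 × 10**10 N/m2 used in the first pass with a softer shallow rigidity; the moment-magnitude conversion and the fault dimensions are illustrative assumptions, not the values of the studied events.

```python
def moment_from_mw(mw):
    """Seismic moment (N m) from moment magnitude, M0 = 10**(1.5*Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def fault_slip(m0, length_m, width_m, rigidity=3.5e10):
    """Average slip (m) from the seismic moment, slip = M0 / (mu * L * W)."""
    return m0 / (rigidity * length_m * width_m)

m0 = moment_from_mw(7.7)                                # ~4.5e20 N m
print(fault_slip(m0, 100e3, 40e3))                      # constant rigidity
print(fault_slip(m0, 100e3, 40e3, rigidity=1.0e10))     # softer shallow rigidity
```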
Lock-in thermographic inspection of squats on rail steel head
NASA Astrophysics Data System (ADS)
Peng, D.; Jones, R.
2013-03-01
The development of squat defects has become a major concern in numerous railway systems throughout the world. Infrared thermography is a relatively new non-destructive inspection technique used for a wide range of applications. However, it has not been used for rail squat detection. Lock-in thermography is a non-destructive inspection technique that utilizes an infrared camera to detect the thermal waves. A thermal image is produced, which displays the local thermal wave variation in phase or amplitude. In inhomogeneous materials, the amplitude and phase of the thermal wave carries information related to both the local thermal properties and the nature of the structure being inspected. By examining the infrared thermal signature of squat damage on the head of steel rails, it was possible to generate a relationship matching squat depth to thermal image phase angle, using appropriate experimental/numerical calibration. The results showed that with the additional data sets obtained from further experimental tests, the clarity of this relationship will be greatly improved to a level whereby infrared thermal contours can be directly translated into the precise subsurface behaviour of a squat.
Evolution and dynamics of shear-layer structures in near-wall turbulence
NASA Technical Reports Server (NTRS)
Johansson, Arne V.; Alfredsson, P. H.; Kim, John
1991-01-01
Near-wall flow structures in turbulent shear flows are analyzed, with particular emphasis on the study of their space-time evolution and connection to turbulence production. The results are obtained from investigation of a database generated from direct numerical simulation of turbulent channel flow at a Reynolds number of 180 based on half-channel width and friction velocity. New light is shed on problems associated with conditional sampling techniques, together with methods to improve these techniques, for use both in physical and numerical experiments. The results clearly indicate that earlier conceptual models of the processes associated with near-wall turbulence production, based on flow visualization and probe measurements need to be modified. For instance, the development of asymmetry in the spanwise direction seems to be an important element in the evolution of near-wall structures in general, and for shear layers in particular. The inhibition of spanwise motion of the near-wall streaky pattern may be the primary reason for the ability of small longitudinal riblets to reduce turbulent skin friction below the value for a flat surface.
NASA Technical Reports Server (NTRS)
Zhu, Shen; Li, C.; Su, Ching-Hua; Lin, B.; Ben, H.; Scripa, R. N.; Lehoczky, S. L.; Curreri, Peter A. (Technical Monitor)
2002-01-01
Tellurium is an element in many II-VI and I-III-VI(sub 2) compounds that are useful materials for fabricating many devices. In melt growth techniques, the thermal properties of the molten phase are important parameters for controlling the growth process to improve semiconducting crystal quality. In this study, the thermal diffusivity of molten tellurium has been measured by a laser flash method in the temperature range from 500 C to 900 C. A pulsed laser with 1064 nm wavelength is focused on one side of the measured sample. The thermal diffusivity can be estimated from the temperature transient at the other side of the sample. A numerical simulation based on the thermal transport process has also been performed. By numerically fitting the experimental results, both the thermal conductivity and heat capacity can be derived. A relaxation phenomenon, which shows a slow drift of the measured thermal conductivity toward the equilibrium value after cooling of the sample, was observed for the first time. The error analysis and the comparison of the results to published data measured by other techniques will be discussed.
NASA Technical Reports Server (NTRS)
Zhu, Shen; Su, Ching-Hua; Li, C.; Lin, B.; Ben, H.; Scripa, R. N.; Lehoczky, S. L.; Curreri, Peter A. (Technical Monitor)
2002-01-01
Tellurium is an element in many II-VI and I-III-VI(sub 2) compounds that are useful materials for fabricating many devices. In melt growth techniques, the thermal properties of the molten phase are important parameters for controlling the growth process to improve semiconducting crystal quality. In this study, the thermal diffusivity of molten tellurium has been measured by a laser flash method in the temperature range from 500 C to 900 C. A pulsed laser with 1064 nm wavelength is focused on one side of the measured sample. The thermal diffusivity can be estimated from the temperature transient at the other side of the sample. A numerical simulation based on the thermal transport process has also been performed. By numerically fitting the experimental results, both the thermal conductivity and heat capacity can be derived. A relaxation phenomenon, which shows a slow drift of the measured thermal conductivity toward the equilibrium value after cooling of the sample, was observed for the first time. The error analysis and the comparison of the results to published data measured by other techniques will be discussed in the presentation.
Geometry-constraint-scan imaging for in-line phase contrast micro-CT.
Fu, Jian; Yu, Guangyuan; Fan, Dekai
2014-01-01
X-ray phase contrast computed tomography (CT) uses the phase shift that x-rays undergo when passing through matter, rather than their attenuation, as the imaging signal and may provide better image quality in soft-tissue and biomedical materials with low atomic number. Here a geometry-constraint-scan imaging technique for in-line phase contrast micro-CT is reported. It consists of two circular-trajectory scans with x-ray detector at different positions, the phase projection extraction method with the Fresnel free-propagation theory and the filter back-projection reconstruction algorithm. This method removes the contact-detector scan and the pure phase object assumption in classical in-line phase contrast Micro-CT. Consequently it relaxes the experimental conditions and improves the image contrast. This work comprises a numerical study of this technique and its experimental verification using a biomedical composite dataset measured at an x-ray tube source Micro-CT setup. The numerical and experimental results demonstrate the validity of the presented method. It will be of interest for a wide range of in-line phase contrast Micro-CT applications in biology and medicine.
An efficient technique for the numerical solution of the bidomain equations.
Whiteley, Jonathan P
2008-08-01
Computing the numerical solution of the bidomain equations is widely accepted to be a significant computational challenge. In this study we extend a previously published semi-implicit numerical scheme with good stability properties that has been used to solve the bidomain equations (Whiteley, J.P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006). A new, efficient numerical scheme is developed which utilizes the observation that the only component of the ionic current that must be calculated on a fine spatial mesh and updated frequently is the fast sodium current. Other components of the ionic current may be calculated on a coarser mesh and updated less frequently, and then interpolated onto the finer mesh. Use of this technique to calculate the transmembrane potential and extracellular potential induces very little error in the solution. For the simulations presented in this study an increase in computational efficiency of over two orders of magnitude over standard numerical techniques is obtained.
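The key device described above, advancing only the fast sodium current on the fine mesh while slower ionic-current components are computed on a coarse mesh, updated infrequently and interpolated, can be sketched in one dimension as below. The meshes and the cosine "slow current" are purely illustrative assumptions, not the bidomain model itself.

```python
import numpy as np

def interpolate_slow_currents(x_fine, x_coarse, i_slow_coarse):
    """Interpolate slowly varying ionic-current components, computed on a
    coarse mesh and updated infrequently, onto the fine mesh where the
    fast sodium current and the potentials are advanced every step."""
    return np.interp(x_fine, x_coarse, i_slow_coarse)

x_fine = np.linspace(0.0, 1.0, 1001)             # cm, fine spatial mesh
x_coarse = np.linspace(0.0, 1.0, 101)            # cm, coarse mesh for slow currents
i_slow = 1e-3 * np.cos(2 * np.pi * x_coarse)     # hypothetical slow current density
i_slow_fine = interpolate_slow_currents(x_fine, x_coarse, i_slow)
```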
A Robust Absorbing Boundary Condition for Compressible Flows
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Jorgenson, Philip C. E.
2005-01-01
An absorbing non-reflecting boundary condition (NRBC) for practical computations in fluid dynamics and aeroacoustics is presented with theoretical proof. This paper is a continuation and improvement of a previous paper by the author. The absorbing NRBC technique is based on a first principle of non-reflection, which contains the essential physics that a plane wave solution of the Euler equations remains intact across the boundary. The technique is theoretically shown to work for a large class of finite volume approaches. When combined with the hyperbolic conservation laws, the NRBC is simple, robust and truly multi-dimensional; no additional implementation is needed beyond the prescribed physical boundary conditions. Several numerical examples in multi-dimensional spaces using two different finite volume schemes are presented to demonstrate its robustness in practical computations. Limitations and remedies of the technique are also discussed.
NASA Astrophysics Data System (ADS)
Gao, Lei; Shi, Zhe; Li, Donghui; Zhang, Guifang; Yang, Yindong; McLean, Alexander; Chattopadhyay, Kinnor
2016-02-01
Electromagnetic levitation (EML) is a contact-less, high-temperature technique which has had extensive application with respect to the investigation of both thermophysical and thermochemical properties of liquid alloy systems. The varying magnetic field generates an induced current inside the metal droplet, and interactions are created which produce both the Lorentz force that provides support against gravity and the Joule heating effect that melts the levitated specimen. Since metal droplets are opaque, transport phenomena inside the droplet cannot be visualized. To address this aspect, several numerical modeling techniques have been developed. The present work reviews the applications of EML techniques as well as the contributions that have been made by the use of mathematical modeling to improve understanding of the inherent processes which are characteristic features of the levitation system.
NASA Astrophysics Data System (ADS)
Chen, Shiyu; Li, Haiyang; Baoyin, Hexi
2018-06-01
This paper investigates a method for optimizing multi-rendezvous low-thrust trajectories using indirect methods. An efficient technique, labeled costate transforming, is proposed to optimize multiple trajectory legs simultaneously rather than optimizing each trajectory leg individually. Complex inner-point constraints and a large number of free variables are the main challenges in optimizing multi-leg transfers via shooting algorithms. This difficulty is reduced by first optimizing each trajectory leg individually; the results may then be utilized as an initial guess in the simultaneous optimization of multiple trajectory legs. In this paper, the limitations of similar techniques in previous research are overcome, and a homotopic approach is employed to improve the convergence efficiency of the shooting process in multi-rendezvous low-thrust trajectory optimization. Numerical examples demonstrate that the newly introduced techniques are valid and efficient.
ERIC Educational Resources Information Center
Sozio, Gerry
2009-01-01
Senior secondary students cover numerical integration techniques in their mathematics courses. In particular, students would be familiar with the "midpoint rule," the elementary "trapezoidal rule" and "Simpson's rule." This article derives these techniques by methods which secondary students may not be familiar with and an approach that…
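For reference, hedged textbook implementations of the three composite rules named in the article are given below; the test integrand and interval are arbitrary, and the number of subintervals for Simpson's rule is assumed even.

```python
import numpy as np

def midpoint(f, a, b, n):
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return (b - a) / n * f(x).sum()

def trapezoidal(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def simpson(f, a, b, n):          # n must be even
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / (3 * n) * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

# all three approximate the integral of sin on [0, pi], whose exact value is 2
for rule in (midpoint, trapezoidal, simpson):
    print(rule.__name__, rule(np.sin, 0.0, np.pi, 100))
```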
Solving fractional optimal control problems within a Chebyshev-Legendre operational technique
NASA Astrophysics Data System (ADS)
Bhrawy, A. H.; Ezz-Eldien, S. S.; Doha, E. H.; Abdelkawy, M. A.; Baleanu, D.
2017-06-01
In this manuscript, we report a new operational technique for approximating the numerical solution of fractional optimal control (FOC) problems. The operational matrix of the Caputo fractional derivative of the orthonormal Chebyshev polynomial and the Legendre-Gauss quadrature formula are used, and then the Lagrange multiplier scheme is employed for reducing such problems into those consisting of systems of easily solvable algebraic equations. We compare the approximate solutions achieved using our approach with the exact solutions and with those presented in other techniques and we show the accuracy and applicability of the new numerical approach, through two numerical examples.
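One ingredient listed above, the Legendre-Gauss quadrature formula, can be illustrated with the standard nodes and weights available in NumPy; the mapping from [-1, 1] to a general interval and the test integrand are the only additions here, and the operational-matrix part of the method is not reproduced.

```python
import numpy as np

# Legendre-Gauss nodes and weights on [-1, 1]; exact for polynomials of
# degree <= 2n - 1.
nodes, weights = np.polynomial.legendre.leggauss(8)

def gauss_integrate(f, a, b, nodes=nodes, weights=weights):
    """Integrate f on [a, b] by mapping the Legendre-Gauss rule from [-1, 1]."""
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.dot(weights, f(x))

print(gauss_integrate(np.exp, 0.0, 1.0))   # exact value: e - 1 ≈ 1.71828
```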
Intermediate-mass-ratio black-hole binaries: numerical relativity meets perturbation theory.
Lousto, Carlos O; Nakano, Hiroyuki; Zlochower, Yosef; Campanelli, Manuela
2010-05-28
We study black-hole binaries in the intermediate-mass-ratio regime 0.01≲q≲0.1 with a new technique that makes use of nonlinear numerical trajectories and efficient perturbative evolutions to compute waveforms at large radii for the leading and nonleading (ℓ, m) modes. As a proof-of-concept, we compute waveforms for q=1/10. We discuss applications of these techniques for LIGO and VIRGO data analysis and the possibility that our technique can be extended to produce accurate waveform templates from a modest number of fully nonlinear numerical simulations.
Numerical solution of potential flow about arbitrary 2-dimensional multiple bodies
NASA Technical Reports Server (NTRS)
Thompson, J. F.; Thames, F. C.
1982-01-01
A procedure for the finite-difference numerical solution of the lifting potential flow about any number of arbitrarily shaped bodies is given. The solution is based on a technique of automatic numerical generation of a curvilinear coordinate system having coordinate lines coincident with the contours of all bodies in the field, regardless of their shapes and number. The effects of all numerical parameters involved are analyzed and appropriate values are recommended. Comparisons with analytic solutions for single Karman-Trefftz airfoils and a circular cylinder pair show excellent agreement. The technique of application of the boundary-fitted coordinate systems to the numerical solution of partial differential equations is illustrated.
Enhanced linear-array photoacoustic beamforming using modified coherence factor.
Mozaffarzadeh, Moein; Yan, Yan; Mehrmohammadi, Mohammad; Makkiabadi, Bahador
2018-02-01
Photoacoustic imaging (PAI) is a promising medical imaging modality providing the spatial resolution of ultrasound imaging and the contrast of optical imaging. For linear-array PAI, a beamformer can be used as the reconstruction algorithm. Delay-and-sum (DAS) is the most prevalent beamforming algorithm in PAI. However, using a DAS beamformer leads to low-resolution images as well as high sidelobes due to the undesired contribution of off-axis signals. The coherence factor (CF) is a weighting method in which each pixel of the reconstructed image is weighted, based on the spatial spectrum of the aperture, mainly to improve the contrast. We show that the numerator of the CF formula contains a DAS term and propose using a delay-multiply-and-sum (DMAS) beamformer in the numerator instead of the available DAS. The proposed weighting technique, the modified CF (MCF), has been evaluated numerically and experimentally and compared to CF. It was shown that MCF leads to lower sidelobes and better detectable targets. The quantitative results of the experiment (using wire targets) show that MCF leads to about 45% and 40% improvements over CF in terms of signal-to-noise ratio and full-width-half-maximum, respectively.
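The quantities mentioned above have standard forms: DAS sums the delayed channel samples for a pixel, the coherence factor is |Σ s_i|² / (N Σ |s_i|²), and DMAS combines signed square-rooted pairwise products of the samples. The sketch below assembles them for a single pixel; the exact normalization of the modified coherence factor is an assumption, since only its general construction (a DMAS term in the numerator) is described in the abstract.

```python
import numpy as np

def das(samples):
    """Delay-and-sum: sum of the delayed channel samples for one pixel."""
    return samples.sum()

def dmas(samples):
    """Delay-multiply-and-sum: sum over i < j of sign(s_i s_j) sqrt(|s_i s_j|)."""
    s = np.sign(samples) * np.sqrt(np.abs(samples))
    return 0.5 * (s.sum() ** 2 - (s ** 2).sum())

def coherence_factor(samples):
    """CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2)."""
    n = len(samples)
    return np.abs(samples.sum()) ** 2 / (n * (np.abs(samples) ** 2).sum())

def modified_cf(samples):
    """Sketch of the paper's idea: a DMAS term replaces the DAS-like
    numerator of CF (the normalization used here is an assumption)."""
    n = len(samples)
    return np.abs(dmas(samples)) / (n * (np.abs(samples) ** 2).sum())

s = np.array([0.9, 1.1, 1.0, -0.2, 0.95])   # delayed samples for one pixel
pixel_das = das(s)
pixel_weighted = pixel_das * modified_cf(s)
```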
Hongbo Guo; Xiaowei He; Muhan Liu; Zeyu Zhang; Zhenhua Hu; Jie Tian
2017-06-01
Cerenkov luminescence tomography (CLT) provides a novel technique for 3-D noninvasive detection of radiopharmaceuticals in living subjects. However, because of the severe scattering of Cerenkov light, the reconstruction accuracy and stability of CLT are still unsatisfactory. In this paper, a modified weight multispectral CLT (wmCLT) reconstruction strategy was developed which splits the Cerenkov radiation spectrum into several sub-spectral bands and weights the sub-spectral results to obtain the final result. To better evaluate the properties of the wmCLT reconstruction strategy in terms of accuracy, stability and practicability, several numerical simulation experiments and in vivo experiments were conducted, and the results obtained were compared with the traditional multispectral CLT (mCLT) and hybrid-spectral CLT (hCLT) reconstruction strategies. The numerical simulation results indicated that the wmCLT strategy significantly improved the accuracy of Cerenkov source localization and intensity quantitation and exhibited good stability in suppressing noise. The comparison of the results achieved from different in vivo experiments further indicated significant improvement of the wmCLT strategy in terms of the shape recovery of the bladder and the spatial resolution of imaging xenograft tumors. Overall, the strategy reported here will facilitate the development of nuclear and optical molecular tomography in theoretical studies.
NASA Astrophysics Data System (ADS)
Alle, Iboukoun Christian; Descloitres, Marc; Vouillamoz, Jean-Michel; Yalo, Nicaise; Lawson, Fabrice Messan Amen; Adihou, Akonfa Consolas
2018-03-01
Hard rock aquifers are of particular importance for supplying people with drinking water in Africa and in the world. Despite the common use of one-dimensional (1D) electrical resistivity techniques to locate drilling sites, the failure rate of boreholes is usually high. For instance, about 40% of boreholes drilled in hard rock aquifers in Benin are unsuccessful. This study investigates why the current use of 1D techniques (e.g. electrical profiling and electrical sounding) can result in inaccurate siting of boreholes, and checks the interest and limitations of two-dimensional (2D) Electrical Resistivity Tomography (ERT). Geophysical numerical modeling and comprehensive 1D and 2D resistivity surveys were carried out in hard rock aquifers in Benin. The experiments carried out at 7 sites located in different hard rock groups confirmed the results of the numerical modeling: the current use of 1D techniques can frequently lead to inaccurate siting, and ERT better reveals hydrogeological targets such as a thick weathered zone (e.g. stratiform fractured layer and preferential weathering associated with a subvertical fractured zone). Moreover, a cost analysis demonstrates that the use of ERT can save money at the scale of a drilling programme if ERT improves the success rate by only 5% as compared to the success rate obtained with 1D techniques. Finally, this study demonstrates, using the example of Benin, that the use of electrical resistivity profiling and sounding for siting boreholes in weathered hard rocks of western Africa should be discarded and replaced by the more efficient ERT technique.
EXPERIMENTAL MODELLING OF AORTIC ANEURYSMS
Doyle, Barry J; Corbett, Timothy J; Cloonan, Aidan J; O’Donnell, Michael R; Walsh, Michael T; Vorp, David A; McGloughlin, Timothy M
2009-01-01
A range of silicone rubbers was created based on existing commercially available materials. These silicones were designed to be visually different from one another and to have distinct material properties, in particular ultimate tensile strengths and tear strengths. In total, eleven silicone rubbers were manufactured, with the materials designed to have a range of increasing tensile strengths from approximately 2-4 MPa and increasing tear strengths from approximately 0.45-0.7 N/mm. The variations in silicones were detected using a standard colour analysis technique. Calibration curves were then created relating colour intensity to individual material properties. All eleven materials were characterised and a 1st order Ogden strain energy function applied. Material coefficients were determined and examined for effectiveness. Six idealised abdominal aortic aneurysm models were also created using the two base materials of the study, with a further model created using a new mixing technique to produce a rubber model with randomly assigned material properties. These models were then examined using videoextensometry and compared to numerical results. Colour analysis revealed a statistically significant linear relationship (p<0.0009) with both tensile strength and tear strength, allowing material strength to be determined using a non-destructive experimental technique. The effectiveness of this technique was assessed by comparing predicted material properties to experimentally measured values, with good agreement in the results. Videoextensometry and numerical modelling revealed minor percentage differences, with all results achieving significance (p<0.0009). This study has successfully designed and developed a range of silicone rubbers that have unique colour intensities and material strengths. Strengths can be readily determined using a non-destructive analysis technique with proven effectiveness. These silicones may further aid an improved understanding of the biomechanical behaviour of aneurysms using experimental techniques. PMID:19595622
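The calibration-curve idea reported above reduces to a simple linear fit of material strength against colour intensity. The numbers below are hypothetical, chosen only to span the 2-4 MPa range quoted in the abstract, and the fit is a sketch of the calibration step rather than the study's actual data.

```python
import numpy as np

# Hypothetical calibration data: mean colour intensity of a silicone sample
# versus its measured ultimate tensile strength (MPa).
intensity = np.array([40.0, 65.0, 90.0, 115.0, 140.0, 165.0])
uts_mpa = np.array([2.0, 2.4, 2.8, 3.2, 3.6, 4.0])

slope, intercept = np.polyfit(intensity, uts_mpa, 1)   # linear calibration curve

def predict_uts(measured_intensity):
    """Non-destructive estimate of tensile strength from colour intensity."""
    return slope * measured_intensity + intercept

print(predict_uts(100.0))
```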
Results from Binary Black Hole Simulations in Astrophysics Applications
NASA Technical Reports Server (NTRS)
Baker, John G.
2007-01-01
Present and planned gravitational wave observatories are opening a new astronomical window to the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions, and illuminating such phenomena as spin-precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits allow tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques with incorporated information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments. Future gravitational wave observatories are expected to make precision measurements of these merger events.
Multiresolution representation and numerical algorithms: A brief review
NASA Technical Reports Server (NTRS)
Harten, Amiram
1994-01-01
In this paper we review recent developments in techniques to represent data in terms of its local scale components. These techniques enable us to obtain data compression by eliminating scale-coefficients which are sufficiently small. This capability for data compression can be used to reduce the cost of many numerical solution algorithms by either applying it to the numerical solution operator in order to get an approximate sparse representation, or by applying it to the numerical solution itself in order to reduce the number of quantities that need to be computed.
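As a minimal sketch of the compression step described above, the example below performs one level of a Haar-type multiresolution decomposition and zeroes the scale (detail) coefficients that fall below a tolerance; the choice of the Haar basis, the test signal and the tolerance are assumptions for illustration, not the constructions of the review.

```python
import numpy as np

def haar_decompose(u):
    """One level of a Haar multiresolution decomposition: local averages
    (coarse representation) and detail (scale) coefficients."""
    coarse = 0.5 * (u[0::2] + u[1::2])
    detail = 0.5 * (u[0::2] - u[1::2])
    return coarse, detail

def haar_reconstruct(coarse, detail):
    u = np.empty(2 * len(coarse))
    u[0::2] = coarse + detail
    u[1::2] = coarse - detail
    return u

def compress(u, tol):
    """Drop scale coefficients below tol (the data-compression step) and
    return the reconstruction from the remaining coefficients."""
    coarse, detail = haar_decompose(u)
    detail[np.abs(detail) < tol] = 0.0
    return haar_reconstruct(coarse, detail)

x = np.linspace(0.0, 1.0, 256)
u = np.tanh(50 * (x - 0.5))          # smooth except near a sharp transition
u_compressed = compress(u, tol=1e-3)
print(np.abs(u - u_compressed).max())
```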
Constrained evolution in numerical relativity
NASA Astrophysics Data System (ADS)
Anderson, Matthew William
The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.
NASA Astrophysics Data System (ADS)
Harzalla, S.; Belgacem, F. Bin Muhammad; Chabaat, M.
2014-12-01
In this paper, a nondestructive technique is used as a tool to control cracks and microcracks in materials. A simulation by a numerical approach such as the finite element method is employed to detect cracks and, eventually, to study their propagation using a crucial parameter, the stress intensity factor. This approach has been used in the aircraft industry to control cracks. In addition, it makes it possible to highlight defects in parts while preserving the integrity of the inspected products. Moreover, it is shown that reliable defect control gives convincing results for improving the quality and safety of the material. Eddy current testing (ECT) is a standard technique in industry for the detection of surface-breaking flaws in magnetic materials such as steels. In this context, simulation tools can be used to improve the understanding of experimental signals, optimize the design of sensors or evaluate the performance of ECT procedures. CEA-LIST has for many years developed semi-analytical models embedded in the CIVA simulation platform dedicated to non-destructive testing. The developments presented herein address the case of flaws located inside a planar and magnetic medium. Simulation results are obtained through the application of the Volume Integral Method (VIM). When considering the ECT of a single flaw, a system of two differential equations is derived from Maxwell's equations. The numerical resolution of the system is carried out using the classical Galerkin variant of the Method of Moments. The probe response is then calculated by application of the Lorentz reciprocity theorem. Finally, the approach itself as well as comparisons between simulation results and measured data are presented.
Current challenges in quantifying preferential flow through the vadose zone
NASA Astrophysics Data System (ADS)
Koestel, John; Larsbo, Mats; Jarvis, Nick
2017-04-01
In this presentation, we give an overview of current challenges in quantifying preferential flow through the vadose zone. A review of the literature suggests that current generation models do not fully reflect the present state of process understanding and empirical knowledge of preferential flow. We believe that the development of improved models will be stimulated by the increasingly widespread application of novel imaging technologies as well as future advances in computational power and numerical techniques. One of the main challenges in this respect is to bridge the large gap between the scales at which preferential flow occurs (pore to Darcy scales) and the scale of interest for management (fields, catchments, regions). Studies at the pore scale are being supported by the development of 3-D non-invasive imaging and numerical simulation techniques. These studies are leading to a better understanding of how macropore network topology and initial/boundary conditions control key state variables like matric potential and thus the strength of preferential flow. Extrapolation of this knowledge to larger scales would require support from theoretical frameworks such as key concepts from percolation and network theory, since we lack measurement technologies to quantify macropore networks at these large scales. Linked hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data enable investigation of the larger-scale heterogeneities that can generate preferential flow patterns at pedon, hillslope and field scales. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help in parameterizing models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).
An improved pulse sequence and inversion algorithm of T2 spectrum
NASA Astrophysics Data System (ADS)
Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu
2017-03-01
The nuclear magnetic resonance transversal relaxation time is widely applied in geological prospecting, both in laboratory and downhole environments. However, current methods used for data acquisition and inversion should be reformed to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence to collect transversal relaxation signals based on the CPMG (Carr, Purcell, Meiboom, and Gill) pulse sequence. The echo spacing is not constant but varies in different windows, depending on prior knowledge or customer requirements. We use the entropy based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard small singular values which cause the inversion instability. A hybrid algorithm combining the iterative TSVD and a simultaneous iterative reconstruction technique is implemented to reach the global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence leads to the same result as CPMG, but with lower echo numbers and computational time. The proposed method is a promising technique for geophysical prospecting and other related fields in future.
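A minimal sketch of the truncated-SVD step underlying such T2 inversions is given below (it does not reproduce the authors' entropy-based truncation rule or the hybrid iterative scheme; the echo times, T2 grid, synthetic spectrum, and truncation criterion are illustrative assumptions):

    import numpy as np

    # Multi-exponential relaxation kernel: echo amplitude at time t_i from component T2_j.
    t = np.linspace(2e-4, 2.0, 2000)                  # echo times, s (illustrative)
    T2 = np.logspace(-4, 1, 64)                       # candidate relaxation times, s
    K = np.exp(-t[:, None] / T2[None, :])

    # Synthetic bimodal T2 spectrum and noisy echo train.
    f_true = np.exp(-0.5 * ((np.log10(T2) + 2) / 0.2) ** 2) \
           + 0.6 * np.exp(-0.5 * ((np.log10(T2) - 0) / 0.3) ** 2)
    rng = np.random.default_rng(0)
    data = K @ f_true + 0.01 * rng.standard_normal(t.size)

    # Truncated SVD: discard small singular values that amplify the noise.
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    rank = int(np.sum(s > 1e-3 * s[0]))               # illustrative truncation rule
    f_est = Vt[:rank].T @ ((U[:, :rank].T @ data) / s[:rank])
    print("retained singular values:", rank)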
NASA Astrophysics Data System (ADS)
Tierz, Pablo; Sandri, Laura; Ramona Stefanescu, Elena; Patra, Abani; Marzocchi, Warner; Costa, Antonio; Sulpizio, Roberto
2014-05-01
Explosive volcanoes and, especially, Pyroclastic Density Currents (PDCs) pose an enormous threat to populations living in the surroundings of volcanic areas. Difficulties in the modeling of PDCs are related to (i) very complex and stochastic physical processes, intrinsic to their occurrence, and (ii) a lack of knowledge about how these processes actually form and evolve. This means that there are deep uncertainties (namely, of aleatory nature due to point (i) above, and of epistemic nature due to point (ii) above) associated with the study and forecast of PDCs. Consequently, the assessment of their hazard is better described in terms of probabilistic approaches rather than by deterministic ones. What is actually done to assess probabilistic hazard from PDCs is to couple deterministic simulators with statistical techniques that can, eventually, supply probabilities and inform about the uncertainties involved. In this work, some examples of both PDC numerical simulators (Energy Cone and TITAN2D) and uncertainty quantification techniques (Monte Carlo sampling -MC-, Polynomial Chaos Quadrature -PCQ- and Bayesian Linear Emulation -BLE-) are presented, and their advantages, limitations and future potential are underlined. The key point in choosing a specific method lies in the balance between its related computational cost, the physical reliability of the simulator and the pursued target of the hazard analysis (type of PDCs considered, time-scale selected for the analysis, particular guidelines received from decision-making agencies, etc.). Although current numerical and statistical techniques have brought important advances in probabilistic volcanic hazard assessment from PDCs, some of them may be further applicable to more sophisticated simulators. In addition, forthcoming improvements could be focused on three main multidisciplinary directions: 1) Validate the simulators frequently used (through comparison with PDC deposits and other simulators), 2) Decrease simulator runtimes (whether by increasing the knowledge about the physical processes or by doing more efficient programming, parallelization, ...) and 3) Improve uncertainty quantification techniques.
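As a hedged illustration of the Monte Carlo coupling between a fast simulator and a probabilistic hazard estimate, the sketch below samples uncertain energy-cone inputs and computes an exceedance probability; the input distributions, the toy runout relation, and the site distance are assumptions, not the values used in this work:

    import numpy as np

    def energy_cone_runout(collapse_height, hl_ratio):
        """Toy energy-cone runout distance (km): runout = H / (H/L) on flat topography."""
        return collapse_height / hl_ratio

    rng = np.random.default_rng(1)
    n_samples = 100_000

    # Aleatory variability of the inputs, expressed as probability distributions (assumed).
    height = rng.uniform(0.5, 3.0, n_samples)                            # collapse height, km
    hl = rng.lognormal(mean=np.log(0.25), sigma=0.3, size=n_samples)     # H/L mobility ratio

    runout = energy_cone_runout(height, hl)

    # Probability that a PDC reaches a site 8 km from the vent (illustrative target).
    site_distance = 8.0
    p_exceed = np.mean(runout >= site_distance)
    print(f"P(runout >= {site_distance} km) = {p_exceed:.3f}")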
NASA Astrophysics Data System (ADS)
Milani, Gabriele; Milani, Federico
2012-12-01
The main problem in the industrial production of thick EPM/EPDM elements is the different temperatures experienced by the internal (cooler) and external regions. Indeed, while internal layers remain essentially under-vulcanized, the external coating is always over-vulcanized, resulting in an overall average tensile strength insufficient to permit the use of the items in several applications where a certain level of performance is required. Possible ways to improve the mechanical properties of the rubber output include a careful calibration of exposure time and curing temperature in traditional heating, or vulcanization through innovative techniques such as microwaves. In the present paper, a comprehensive numerical model able to give predictions on the optimized final mechanical properties of vulcanized 2D and 3D thick rubber items is presented and applied to a meaningful example of engineering interest. A detailed comparative numerical study is finally presented in order to establish the pros and cons of traditional vulcanization versus microwave curing.
A numerical study of steady crystal growth in a vertical Bridgman device
NASA Astrophysics Data System (ADS)
Jalics, Miklos Kalman
Electronics based on semiconductors creates an enormous demand for high-quality semiconductor single crystals. The vertical Bridgman device is commonly used for growing single crystals of a variety of materials such as GaAs, InP and HgCdTe. A mathematical model is presented for steady crystal growth under conditions where crystal growth is determined strictly by heat transfer. The ends of the ampoule are chosen far away from the insulation zone to allow for steady growth. A numerical solution is sought for this mathematical model. The equations are transformed into a rectangular geometry and appropriate finite difference techniques are applied to the transformed equations. Newton's method solves the nonlinear problem. To improve efficiency, GMRES with preconditioning is used to compute the Newton iterates. The numerical results are compared with two current asymptotic theories that assume small Biot numbers. Results indicate that one of the asymptotic theories is accurate even for moderate Biot numbers.
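A minimal sketch of the Newton-GMRES idea used here, applied through SciPy's Jacobian-free Newton-Krylov solver; the nonlinear residual below is an illustrative stand-in for a heat-transfer-type system, not the Bridgman model of this work:

    import numpy as np
    from scipy.optimize import newton_krylov

    n = 100
    h = 1.0 / (n + 1)

    def residual(T):
        """Discrete nonlinear diffusion residual: T'' - T**3 = 0 with T(0)=1, T(1)=0."""
        Tl = np.concatenate(([1.0], T[:-1]))   # left neighbours (Dirichlet value 1 at x=0)
        Tr = np.concatenate((T[1:], [0.0]))    # right neighbours (Dirichlet value 0 at x=1)
        return (Tl - 2.0 * T + Tr) / h**2 - T**3

    # Newton iterations with a Krylov (LGMRES) solve of each linearized system.
    T0 = np.linspace(1.0, 0.0, n)
    T = newton_krylov(residual, T0, method='lgmres', verbose=False)
    print("max residual after convergence:", np.abs(residual(T)).max())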
Numerical Investigation of Flow in a Centrifugal Compressor
NASA Astrophysics Data System (ADS)
Grishin, Yu. A.; Bakulin, V. N.
2015-09-01
With the use of the domestic computational hydrodynamics software suite Flow Vision, based on the method of control volumes, numerical simulation of air compression and delivery by a centrifugal compressor employed for supercharging a piston engine has been carried out. The head-flow characteristics of the compressor, as well as the 3D fields of flow velocity and pressure distributions in the elements of the compressor flow passage, including the interblade channels of the impeller, have been obtained for various regimes. In the regimes of diminished air flow rate, surging phenomena are identified, characterized by a return flow. The application of this numerical experimentation technique will henceforth make it possible to carry out design optimization of the compressor flow passage profile and thus to improve its basic characteristics (the degree of pressure increase, the compressed air flow rate, and the efficiency) as well as to reduce the costs of the development and production of compressors.
Finite Element Based Optimization of Material Parameters for Enhanced Ballistic Protection
NASA Astrophysics Data System (ADS)
Ramezani, Arash; Huber, Daniel; Rothe, Hendrik
2013-06-01
The threat imposed by terrorist attacks is a major hazard for military installations, vehicles and other items. The large numbers of firearms and projectiles that are available pose serious threats to military forces and even civilian facilities. An important task for international research and development is to avert danger to life and limb. This work evaluates the effect of modern armor with numerical simulations. It also provides a brief overview of ballistic tests in order to offer some basic knowledge of the subject, serving as a basis for the comparison of simulation results. The objective of this work is to develop and improve the modern armor used in the security sector. Numerical simulations should replace expensive ballistic tests and find vulnerabilities of items and structures. By progressively changing the material parameters, the armor is to be optimized. A sensitivity analysis yields information regarding the decisive variables, so that vulnerabilities can be easily found and eliminated. To facilitate the simulation, advanced numerical techniques have been employed in the analyses.
How to Overcome Numerical Challenges to Modeling Stirling Engines
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.
2004-01-01
Nuclear thermal to electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for a range of missions, including both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in current designs could be better understood. However, they are difficult to instrument and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted and a review of the methods leading up to and including such computational analysis is presented. And finally it is proposed that the quality and depth of Stirling loss understanding may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-FI technique is presented in detail.
NASA Astrophysics Data System (ADS)
Tambunan, D. R. S.; Sibagariang, Y. P.; Ambarita, H.; Napitupulu, F. H.; Kawai, H.
2018-03-01
The characteristics of the absorber plate of a flat plate solar collector play an important role in the improvement of its performance. In this work, a numerical analysis is carried out to explore the effect of the absorptivity and emissivity of the absorber plate on the performance of the solar collector of a solar water heater. For comparison of the results, a simple solar box cooker with an absorber area of 0.835 m × 0.835 m is designed and fabricated. It is employed to heat water in a container by exposing it to solar radiation in the city of Medan, Indonesia. The transient governing equations are developed, discretized and solved using the forward time step marching technique. The experimental and numerical results show good agreement. The absorptivity of the absorber plate and the emissivity of the glass cover strongly affect the performance of the solar collector.
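A hedged sketch of a forward time-step marching discretization of a lumped absorber-plate/water energy balance is given below; the loss coefficient, water mass, and solar input profile are illustrative assumptions, not the parameters of the collector studied here:

    import numpy as np

    # Illustrative lumped parameters for an absorber plate heating water.
    alpha = 0.92          # plate absorptivity
    area = 0.835 * 0.835  # absorber area, m^2
    U_loss = 8.0          # overall loss coefficient, W/(m^2 K) (assumed)
    m_water = 2.0         # water mass, kg (assumed)
    cp = 4186.0           # specific heat of water, J/(kg K)
    T_amb = 305.0         # ambient temperature, K

    dt = 1.0              # forward time step, s
    t_end = 4 * 3600      # 4 hours of exposure
    T = 300.0             # initial water temperature, K

    for step in range(int(t_end / dt)):
        time_h = step * dt / 3600.0
        G = 900.0 * np.sin(np.pi * time_h / 4.0)          # assumed solar irradiance profile, W/m^2
        q_gain = alpha * G * area                         # absorbed solar power
        q_loss = U_loss * area * (T - T_amb)              # lumped convective/radiative losses
        T = T + dt * (q_gain - q_loss) / (m_water * cp)   # explicit (forward Euler) update

    print(f"water temperature after 4 h: {T - 273.15:.1f} deg C")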
Optimization of porthole die geometrical variables by Taguchi method
NASA Astrophysics Data System (ADS)
Gagliardi, F.; Ciancio, C.; Ambrogio, G.; Filice, L.
2017-10-01
Porthole die extrusion is commonly used to manufacture hollow profiles made of lightweight alloys for numerous industrial applications. The reliability of extruded parts is strongly affected by the quality of the longitudinal and transversal seam welds. Accordingly, the die geometry must be designed correctly and the process parameters must be selected properly to achieve the desired product quality. In this study, numerical 3D simulations have been created and run to investigate the role of various geometrical variables on the punch load and the maximum pressure inside the welding chamber. These are important outputs to take into account, as they affect, respectively, the necessary capacity of the extrusion press and the quality of the welding lines. The Taguchi technique has been used to reduce the number of numerical simulations necessary for considering the influence of twelve different geometric variables. Moreover, analysis of variance (ANOVA) has been implemented to analyze individually the effect of each input parameter on the two responses. Then, the methodology has been utilized to determine the optimal process configuration, individually optimizing the two investigated process outputs. Finally, the responses at the optimized parameters have been verified through finite element simulations, which approximated the predicted values closely. This study shows the feasibility of the Taguchi technique for predicting performance, for optimization and therefore for improving the design of a porthole extrusion process.
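A minimal sketch of the Taguchi post-processing step (signal-to-noise ratios and main effects) for a generic orthogonal-array experiment follows; the L8 array, the simulated responses, and the 'smaller-the-better' choice are illustrative assumptions, not the twelve-variable design of this study:

    import numpy as np

    # Standard L8 orthogonal array for up to 7 two-level factors; rows = runs, columns = factors.
    L8 = np.array([
        [1, 1, 1, 1, 1, 1, 1],
        [1, 1, 1, 2, 2, 2, 2],
        [1, 2, 2, 1, 1, 2, 2],
        [1, 2, 2, 2, 2, 1, 1],
        [2, 1, 2, 1, 2, 1, 2],
        [2, 1, 2, 2, 1, 2, 1],
        [2, 2, 1, 1, 2, 2, 1],
        [2, 2, 1, 2, 1, 1, 2],
    ])

    # Illustrative response per run (e.g. punch load, kN); in practice these come from the FEM runs.
    y = np.array([410.0, 395.0, 430.0, 415.0, 380.0, 372.0, 402.0, 398.0])

    # 'Smaller-the-better' signal-to-noise ratio (single observation per run).
    sn = -10.0 * np.log10(y ** 2)

    # Main effect of each factor: mean S/N at level 1 vs level 2; a larger S/N is better.
    for j in range(L8.shape[1]):
        sn1 = sn[L8[:, j] == 1].mean()
        sn2 = sn[L8[:, j] == 2].mean()
        best = 1 if sn1 > sn2 else 2
        print(f"factor {j + 1}: S/N(level1)={sn1:.2f}  S/N(level2)={sn2:.2f}  -> prefer level {best}")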
A reduced order model based on Kalman filtering for sequential data assimilation of turbulent flows
NASA Astrophysics Data System (ADS)
Meldi, M.; Poux, A.
2017-10-01
A Kalman filter based sequential estimator is presented in this work. The estimator is integrated in the structure of segregated solvers for the analysis of incompressible flows. This technique provides an augmented flow state integrating available observations into the CFD model, naturally preserving a zero-divergence condition for the velocity field. Because of the prohibitive costs associated with a complete Kalman filter application, two model reduction strategies have been proposed and assessed. These strategies dramatically reduce the increase in the computational cost of the model, which amounts to 10%-15% with respect to the classical numerical simulation. In addition, an extended analysis of the behavior of the numerical model covariance Q has been performed. Optimized values are strongly linked to the truncation error of the discretization procedure. The estimator has been applied to the analysis of a number of test cases exhibiting increasing complexity, including turbulent flow configurations. The results show that the augmented flow successfully improves the prediction of the physical quantities investigated, even when the observation is provided in a limited region of the physical domain. In addition, the present work suggests that these data assimilation techniques, which are at an embryonic stage of development in CFD, may have the potential to be pushed even further, using the augmented prediction as a powerful tool for the optimization of the free parameters in the numerical simulation.
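A bare-bones sketch of the Kalman analysis (update) step that such a sequential estimator applies at observation times is shown below; the state dimension, matrices, and noise levels are illustrative, not those of the reduced-order estimator described above:

    import numpy as np

    rng = np.random.default_rng(2)
    n_state, n_obs = 50, 5                     # reduced state size and number of probes (assumed)

    x_f = rng.standard_normal(n_state)         # forecast (CFD) state at the observation time
    P = 0.1 * np.eye(n_state)                  # forecast error covariance (reduced model)
    Q = 1e-3 * np.eye(n_state)                 # model error covariance (cf. the tuning of Q above)
    R = 1e-2 * np.eye(n_obs)                   # observation error covariance
    H = np.zeros((n_obs, n_state))             # observation operator: probes sample 5 state entries
    H[np.arange(n_obs), np.arange(0, n_state, n_state // n_obs)] = 1.0

    y = H @ x_f + 0.1 * rng.standard_normal(n_obs)   # synthetic observation

    # Kalman filter analysis step.
    P = P + Q                                  # inflate forecast covariance with model error
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_a = x_f + K @ (y - H @ x_f)              # augmented (analysis) state
    P = (np.eye(n_state) - K @ H) @ P          # updated covariance

    print("analysis increment norm:", np.linalg.norm(x_a - x_f))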
NASA Astrophysics Data System (ADS)
Setty, Srinivas J.; Cefola, Paul J.; Montenbruck, Oliver; Fiedler, Hauke
2016-05-01
Catalog maintenance for Space Situational Awareness (SSA) demands an accurate and computationally lean orbit propagation and orbit determination technique to cope with the ever increasing number of observed space objects. As an alternative to established numerical and analytical methods, we investigate the accuracy and computational load of the Draper Semi-analytical Satellite Theory (DSST). The standalone version of the DSST was enhanced with additional perturbation models to improve its recovery of short periodic motion. The accuracy of DSST is, for the first time, compared to a numerical propagator with fidelity force models for a comprehensive grid of low, medium, and high altitude orbits with varying eccentricity and different inclinations. Furthermore, the run-time of both propagators is compared as a function of propagation arc, output step size and gravity field order to assess its performance for a full range of relevant use cases. For use in orbit determination, a robust performance of DSST is demonstrated even in the case of sparse observations, which is most sensitive to mismodeled short periodic perturbations. Overall, DSST is shown to exhibit adequate accuracy at favorable computational speed for the full set of orbits that need to be considered in space surveillance. Along with the inherent benefits of a semi-analytical orbit representation, DSST provides an attractive alternative to the more common numerical orbit propagation techniques.
Toward the S3DVAR data assimilation software for the Caspian Sea
NASA Astrophysics Data System (ADS)
Arcucci, Rossella; Celestino, Simone; Toumi, Ralf; Laccetti, Giuliano
2017-07-01
Data Assimilation (DA) is an uncertainty quantification technique used to incorporate observed data into a prediction model in order to improve numerically forecasted results. The forecasting model used for producing oceanographic predictions for the Caspian Sea is the Regional Ocean Modeling System (ROMS). Here we present the computational issues we are facing in the DA software we are developing (named S3DVAR), which implements a Scalable Three Dimensional Variational Data Assimilation model for assimilating sea surface temperature (SST) values over the Caspian Sea, with observations provided by the Group for High Resolution Sea Surface Temperature (GHRSST). We present the algorithmic strategies we employ and the numerical issues for data collected in two of the months which present the most significant variability in water temperature: August and March.
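For background, the variational problem that a 3D-Var scheme of this kind minimizes is usually written in the standard textbook form below (the exact S3DVAR formulation may differ in its preconditioning and discretization):

    J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b) + \tfrac{1}{2}(\mathbf{H}\mathbf{x}-\mathbf{y})^{\mathrm T}\mathbf{R}^{-1}(\mathbf{H}\mathbf{x}-\mathbf{y}),

where x_b is the ROMS background state, y the GHRSST SST observations, H the observation operator, and B and R the background and observation error covariance matrices.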
Update on Postsurgical Scar Management
Commander, Sarah Jane; Chamata, Edward; Cox, Joshua; Dickey, Ryan M.; Lee, Edward I.
2016-01-01
Postoperative scar appearance is often a significant concern among patients, with many seeking advice from their surgeons regarding scar minimization. Numerous products are available that claim to decrease postoperative scar formation and improve wound healing. These products attempt to create an ideal environment for wound healing by targeting the three phases of wound healing: inflammation, proliferation, and remodeling. With that said, preoperative interventions, such as lifestyle modifications and optimization of medical comorbidities, and intraoperative interventions, such as adherence to meticulous operative techniques, are equally important for ideal scarring. In this article, the authors review the available options in postoperative scar management, addressing the benefits of multimodal perioperative intervention. Although numerous treatments exist, no single modality has been proven superior over others. Therefore, each patient should receive a personalized treatment regimen to optimize scar management. PMID:27478420
Further studies on stability analysis of nonlinear Roesser-type two-dimensional systems
NASA Astrophysics Data System (ADS)
Dai, Xiao-Lin
2014-04-01
This paper is concerned with further relaxations of the stability analysis of nonlinear Roesser-type two-dimensional (2D) systems in the Takagi-Sugeno fuzzy form. To achieve the goal, a novel slack matrix variable technique, which is homogenous polynomially parameter-dependent on the normalized fuzzy weighting functions with arbitrary degree, is developed and the algebraic properties of the normalized fuzzy weighting functions are collected into a set of augmented matrices. Consequently, more information about the normalized fuzzy weighting functions is involved and the relaxation quality of the stability analysis is significantly improved. Moreover, the obtained result is formulated in the form of linear matrix inequalities, which can be easily solved via standard numerical software. Finally, a numerical example is provided to demonstrate the effectiveness of the proposed result.
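As an illustration of the final remark (stability conditions expressed as linear matrix inequalities and solved with standard numerical software), the sketch below checks a plain discrete-time Lyapunov LMI with CVXPY; it is a generic one-dimensional example, not the Roesser-type 2D fuzzy conditions derived in the paper:

    import numpy as np
    import cvxpy as cp

    # A stable discrete-time test matrix (spectral radius < 1), chosen for illustration.
    A = np.array([[0.5, 0.2],
                  [-0.1, 0.7]])
    n = A.shape[0]

    # Feasibility LMI: find P > 0 such that A' P A - P < 0.
    P = cp.Variable((n, n), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(n),
                   A.T @ P @ A - P << -eps * np.eye(n)]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve(solver=cp.SCS)

    print("LMI feasible:", problem.status == cp.OPTIMAL)
    print("P =\n", P.value)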
Improved Multi-Axial, Temperature and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.
2002-01-01
An extensive effort has recently been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to completely characterize the effects of multi-axial loading, temperature and time on the failure characteristics of three filled epoxy adhesives (TIGA 321, EA913NA, EA946). As part of this effort, a single general failure criterion was developed that accounted for these effects simultaneously. This model was named the Multi-Axial, Temperature, and Time Dependent or MATT failure criterion. Due to the intricate nature of the failure criterion, some parameters were required to be calculated using complex equations or numerical methods. This paper documents some simple but accurate modifications to the failure criterion to allow for calculations of failure conditions without complex equations or numerical techniques.
NASA Astrophysics Data System (ADS)
Giannaros, Theodore; Kotroni, Vassiliki; Lagouvardos, Kostas
2015-04-01
Lightning data assimilation has been recently attracting increasing attention as a technique implemented in numerical weather prediction (NWP) models for improving precipitation forecasts. In the frame of the TALOS project, we implemented a robust lightning data assimilation technique in the Weather Research and Forecasting (WRF) model with the aim to improve the precipitation prediction in Greece. The assimilation scheme employs lightning as a proxy for the presence or absence of deep convection. In essence, flash data are ingested in WRF to control the Kain-Fritsch (KF) convective parameterization scheme (CPS). When lightning is observed, indicating the occurrence of convective activity, the CPS is forced to attempt to produce convection, whereas the CPS may optionally be prevented from producing convection when no lightning is observed. Eight two-day precipitation events were selected for assessing the performance of the lightning data assimilation technique. The ingestion of lightning in WRF was carried out during the first 6 h of each event and the evaluation focused on the consequent 24 h, constituting a realistic setup that could be used in operational weather forecasting applications. Results show that the implemented assimilation scheme can improve model performance in terms of precipitation prediction. Forecasts employing the assimilation of flash data were found to exhibit more skill than control simulations, particularly for the intense (>20 mm) 24 h rain accumulations. Analysis of results also revealed that the option not to suppress the KF scheme in the absence of observed lightning leads to a generally better performance compared to the experiments employing the full control of the CPS triggering. Overall, the implementation of the lightning data assimilation technique is found to improve the model's ability to represent convection, especially in situations when past convection has modified the mesoscale environment in ways that affect the occurrence and evolution of subsequent convection.
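A schematic sketch of the control logic described above, i.e. using observed flashes as a proxy to force or (optionally) suppress the convective parameterization at each grid column; the function, variable names, and gridded flash field are illustrative assumptions and this is not the actual WRF/KF source code:

    import numpy as np

    def adjust_convective_trigger(flash_count, suppress_without_lightning=False):
        """Return per-column masks: where the KF scheme is forced to attempt convection,
        and where it is allowed to act at all."""
        force_convection = flash_count > 0                     # lightning observed -> deep convection present
        if suppress_without_lightning:
            # Optional mode: prevent the CPS from triggering where no flashes were observed.
            allow_convection = force_convection.copy()
        else:
            # Default mode (evaluated as better above): leave the CPS free where no lightning is seen.
            allow_convection = np.ones_like(force_convection, dtype=bool)
        return force_convection, allow_convection

    # Example: 6-hourly gridded flash counts (assumed) on a small domain.
    flashes = np.array([[0, 3, 0],
                        [1, 0, 0],
                        [0, 0, 5]])
    force, allow = adjust_convective_trigger(flashes, suppress_without_lightning=False)
    print("force convection at grid points:", np.argwhere(force).tolist())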
An efficient numerical technique for calculating thermal spreading resistance
NASA Technical Reports Server (NTRS)
Gale, E. H., Jr.
1977-01-01
An efficient numerical technique for solving the equations resulting from finite difference analyses of fields governed by Poisson's equation is presented. The method is direct (noniterative) and the computer work required varies with the square of the order of the coefficient matrix. The computational work required varies with the cube of this order for standard inversion techniques, e.g., Gaussian elimination, Jordan, Doolittle, etc.
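A hedged modern analogue of the kind of direct (noniterative) solve discussed here, assembling the standard five-point finite-difference Poisson matrix and factorizing it with a sparse direct solver; the grid size and source term are illustrative assumptions:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    n = 64                                   # interior grid points per direction (assumed)
    h = 1.0 / (n + 1)

    # 1-D second-difference operator and 2-D five-point Laplacian via Kronecker sums.
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    T = sp.diags([off, main, off], offsets=[-1, 0, 1])
    I = sp.identity(n)
    A = (sp.kron(I, T) + sp.kron(T, I)) / h**2   # discrete -Laplacian on the unit square

    # Uniform heat source, homogeneous Dirichlet boundary (a thermal-field analogue).
    b = np.ones(n * n)
    u = spsolve(A.tocsr(), b)                # direct sparse factorization and solve
    print("peak of the solution field:", u.max())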
Improved data visualization techniques for analyzing macromolecule structural changes.
Kim, Jae Hyun; Iyer, Vidyashankara; Joshi, Sangeeta B; Volkin, David B; Middaugh, C Russell
2012-10-01
The empirical phase diagram (EPD) is a colored representation of overall structural integrity and conformational stability of macromolecules in response to various environmental perturbations. Numerous proteins and macromolecular complexes have been analyzed by EPDs to summarize results from large data sets from multiple biophysical techniques. The current EPD method suffers from a number of deficiencies including lack of a meaningful relationship between color and actual molecular features, difficulties in identifying contributions from individual techniques, and a limited ability to be interpreted by color-blind individuals. In this work, three improved data visualization approaches are proposed as techniques complementary to the EPD. The secondary, tertiary, and quaternary structural changes of multiple proteins as a function of environmental stress were first measured using circular dichroism, intrinsic fluorescence spectroscopy, and static light scattering, respectively. Data sets were then visualized as (1) RGB colors using three-index EPDs, (2) equiangular polygons using radar charts, and (3) human facial features using Chernoff face diagrams. Data as a function of temperature and pH for bovine serum albumin, aldolase, and chymotrypsin as well as candidate protein vaccine antigens including a serine threonine kinase protein (SP1732) and surface antigen A (SP1650) from S. pneumoniae and hemagglutinin from an H1N1 influenza virus are used to illustrate the advantages and disadvantages of each type of data visualization technique. Copyright © 2012 The Protein Society.
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
High Order Approximations for Compressible Fluid Dynamics on Unstructured and Cartesian Meshes
NASA Technical Reports Server (NTRS)
Barth, Timothy (Editor); Deconinck, Herman (Editor)
1999-01-01
The development of high-order accurate numerical discretization techniques for irregular domains and meshes is often cited as one of the remaining challenges facing the field of computational fluid dynamics. In structural mechanics, the advantages of high-order finite element approximation are widely recognized. This is especially true when high-order element approximation is combined with element refinement (h-p refinement). In computational fluid dynamics, high-order discretization methods are infrequently used in the computation of compressible fluid flow. The hyperbolic nature of the governing equations and the presence of solution discontinuities makes high-order accuracy difficult to achieve. Consequently, second-order accurate methods are still predominately used in industrial applications even though evidence suggests that high-order methods may offer a way to significantly improve the resolution and accuracy for these calculations. To address this important topic, a special course was jointly organized by the Applied Vehicle Technology Panel of NATO's Research and Technology Organization (RTO), the von Karman Institute for Fluid Dynamics, and the Numerical Aerospace Simulation Division at the NASA Ames Research Center. The NATO RTO sponsored course entitled "Higher Order Discretization Methods in Computational Fluid Dynamics" was held September 14-18, 1998 at the von Karman Institute for Fluid Dynamics in Belgium and September 21-25, 1998 at the NASA Ames Research Center in the United States. During this special course, lecturers from Europe and the United States gave a series of comprehensive lectures on advanced topics related to the high-order numerical discretization of partial differential equations with primary emphasis given to computational fluid dynamics (CFD). Additional consideration was given to topics in computational physics such as the high-order discretization of the Hamilton-Jacobi, Helmholtz, and elasticity equations. This volume consists of five articles prepared by the special course lecturers. These articles should be of particular relevance to those readers with an interest in numerical discretization techniques which generalize to very high-order accuracy. The articles of Professors Abgrall and Shu consider the mathematical formulation of high-order accurate finite volume schemes utilizing essentially non-oscillatory (ENO) and weighted essentially non-oscillatory (WENO) reconstruction together with upwind flux evaluation. These formulations are particularly effective in computing numerical solutions of conservation laws containing solution discontinuities. Careful attention is given by the authors to implementational issues and techniques for improving the overall efficiency of these methods. The article of Professor Cockburn discusses the discontinuous Galerkin finite element method. This method naturally extends to high-order accuracy and has an interpretation as a finite volume method. Cockburn addresses two important issues associated with the discontinuous Galerkin method: controlling spurious extrema near solution discontinuities via "limiting" and the extension to second order advective-diffusive equations (joint work with Shu). The articles of Dr. Henderson and Professor Schwab consider the mathematical formulation and implementation of the h-p finite element methods using hierarchical basis functions and adaptive mesh refinement. 
These methods are particularly useful in computing high-order accurate solutions containing perturbative layers and corner singularities. Additional flexibility is obtained using a mortar FEM technique whereby nonconforming elements are interfaced together. Numerous examples are given by Henderson applying the h-p FEM method to the simulation of turbulence and turbulence transition.
NASA Astrophysics Data System (ADS)
Tolipov, A. A.; Elghawail, A.; Shushing, S.; Pham, D.; Essa, K.
2017-09-01
There is a growing demand for flexible manufacturing techniques that meet the rapid changes in customer needs. A finite element analysis numerical optimisation technique was used to optimise the multi-point sheet forming process. Multi-point forming (MPF) is a flexible sheet metal forming technique where the same tool can be readily changed to produce different parts. The process suffers from some geometrical defects such as wrinkling and dimpling, which have been found to be the cause of the major surface quality problems. This study investigated the influence of parameters such as the elastic cushion hardness, blank holder force, coefficient of friction, cushion thickness and radius of curvature on the quality of parts formed in a flexible multi-point stamping die. For those reasons, in this investigation, a multi-point stamping process using a blank holder was carried out in order to study the effects of wrinkling, dimpling, thickness variation and forming force. The aim was to determine the optimum values of these parameters. Finite element modelling (FEM) was employed to simulate the multi-point forming of hemispherical shapes. Using the response surface method, the effects of the process parameters on wrinkling, maximum deviation from the target shape and thickness variation were investigated. The results indicate that an elastic cushion of appropriate thickness, made of polyurethane with a hardness of Shore A90, gives the best outcome. It has also been found that the application of lubrication can improve the shape accuracy of the formed workpiece. These final results were compared with the numerical simulation results of the multi-point forming of hemispherical shapes using a blank holder, and it was found that using a realistic cushion hardness reduces wrinkling and the maximum deviation from the target shape.
[Treatment of idiopathic varicocele: comparative study of three techniques in 128 cases].
Khouni, Hassen; Bouchiba, Nizar; Khelifa, Melik Melek; Ben Ali, Moez; Sebai, Akrem; Dali, Meriem; Charfi, Mehdi; Chouchene, Adnene; El Kateb, Faycel; Bouhaouala, Habib; Balti, Med Hedi
2011-12-01
Several modalities of varicocele treatment are available; however, no therapeutic technique has shown superiority over the others. The aim was to compare the results of three techniques of varicocelectomy. A retrospective analytical and comparative study was conducted of 128 patients treated by three techniques of varicocelectomy: open surgery by the retroperitoneal approach for 42 patients (GI), laparoscopic varicocelectomy for 41 patients (GII) and antegrade scrotal sclerotherapy for 45 patients (GIII), between March 2001 and January 2009. The mean age was 28 years. The main reason for consultation was painful varicocele in 67% of cases, followed by hypofertility in 20.3% of cases and the association of both in 12.5% of cases. The varicocele was left-sided in 71.1% of cases, right-sided in 5.4% of cases and bilateral in 23.43% of cases. The varicocele was subclinical in 6 patients, grade 1 in 16 sides, grade 2 in 105 sides and grade 3 in 31 sides. Sperm count, motility and morphology were comparable between the three groups. Results: The overall success rate was 81.2%, with the highest rate found in group III, treated by antegrade scrotal sclerotherapy (84.4%). Improvement of the spermogram parameters was noted in all three groups; however, a statistically significant difference was found only in patients treated by antegrade scrotal sclerotherapy, mainly concerning sperm count and motility. The highest pregnancy rate was recorded in patients treated by antegrade scrotal sclerotherapy (13.3%). The main postoperative complications were hydrocele (16%) followed by testicular hypotrophy (3 patients). The three techniques of varicocele treatment offer similar success rates and improvement of sperm parameters. However, antegrade scrotal sclerotherapy seems to be the best first-line treatment, given its efficiency, ease of performance, moderate cost and feasibility in case of recurrence after open surgery.
NASA Astrophysics Data System (ADS)
Pajewski, Lara; Giannopoulos, Antonis; van der Kruk, Jan
2015-04-01
This work aims at presenting the ongoing research activities carried out in Working Group 3 (WG3) 'EM methods for near-field scattering problems by buried structures; data processing techniques' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. WG3 is structured in four Projects. Project 3.1 deals with 'Electromagnetic modelling for GPR applications.' Project 3.2 is concerned with 'Inversion and imaging techniques for GPR applications.' The topic of Project 3.3 is the 'Development of intrinsic models for describing near-field antenna effects, including antenna-medium coupling, for improved radar data processing using full-wave inversion.' Project 3.4 focuses on 'Advanced GPR data-processing algorithms.' Electromagnetic modeling tools that are being developed and improved include the Finite-Difference Time-Domain (FDTD) technique and the spectral domain Cylindrical-Wave Approach (CWA). One well-known, versatile freeware FDTD simulator is GprMax, which enables an improved, realistic representation of the soil/material hosting the sought structures and of the GPR antennas. Here, input/output tools are being developed to ease the definition of scenarios and the visualisation of numerical results. The CWA expresses the field scattered by subsurface two-dimensional targets with arbitrary cross-section as a sum of cylindrical waves. In this way, the interaction of multiple scattered fields within the medium hosting the sought targets is taken into account. Recently, the method has been extended to deal with through-the-wall scenarios. One of the inversion techniques currently being improved is Full-Waveform Inversion (FWI) for on-ground, off-ground, and crosshole GPR configurations. In contrast to conventional inversion tools, which are often based on approximations and use only part of the available data, FWI uses the complete measured data and detailed modeling tools to obtain an improved estimation of medium properties. During the first year of the Action, information was collected and shared about the state of the art of the available modelling, imaging, inversion, and data-processing methods. Advancements achieved by WG3 Members were presented during the TU1208 Second General Meeting (April 30 - May 2, 2014, Vienna, Austria) and the 15th International Conference on Ground Penetrating Radar (June 30 - July 4, 2014, Brussels, Belgium). Currently, a database of numerical and experimental GPR responses from natural and manmade structures is being designed. A geometrical and physical description of the scenarios, together with the available synthetic and experimental data, will be at the disposal of the scientific community. Researchers will thus have a further opportunity of testing and validating, against reliable data, their electromagnetic forward- and inverse-scattering techniques, imaging methods and data-processing algorithms.
The motivation to start this database came out during TU1208 meetings and takes inspiration by successful past initiatives carried out in different areas, as the Ipswich and Fresnel databases in the field of free-space electromagnetic scattering, and the Marmousi database in seismic science. Acknowledgement The Authors thank COST, for funding the Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar.'
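As a hedged illustration of the FDTD building block behind simulators such as GprMax, the sketch below implements a textbook 1-D vacuum Yee update (this is not GprMax code; grid size, source, and time stepping are illustrative assumptions):

    import numpy as np

    c0 = 299792458.0                 # speed of light, m/s
    dz = 0.01                        # spatial step, m
    dt = dz / (2.0 * c0)             # time step satisfying the 1-D Courant condition
    nz, nt = 400, 800                # grid cells and time steps (illustrative)

    Ex = np.zeros(nz)
    Hy = np.zeros(nz - 1)

    for n in range(nt):
        # Update the magnetic field (staggered half a cell from Ex).
        Hy += dt / (4e-7 * np.pi * dz) * (Ex[1:] - Ex[:-1])
        # Update the electric field in the interior (simple PEC ends).
        Ex[1:-1] += dt / (8.854e-12 * dz) * (Hy[1:] - Hy[:-1])
        # Soft Gaussian-pulse source near the left boundary (GPR-like excitation).
        Ex[20] += np.exp(-0.5 * ((n - 60) / 15.0) ** 2)

    print("peak |Ex| at the end of the run:", np.abs(Ex).max())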
Pulse-compression ghost imaging lidar via coherent detection.
Deng, Chenjin; Gong, Wenlin; Han, Shensheng
2016-11-14
Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar can easily achieve high single-pulse energy with the use of a long pulse without decreasing the range resolution, and the mechanism of coherent detection can eliminate the influence of stray light, which helps to improve the detection sensitivity and detection range.
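A minimal sketch of the pulse-compression principle exploited by such a lidar, i.e. correlating the received signal with a long frequency-modulated (chirp) pulse to recover fine range resolution; the chirp parameters and single-target echo are assumptions, not the GI lidar system itself:

    import numpy as np

    fs = 100e6                       # sample rate, Hz (assumed)
    T = 20e-6                        # long pulse duration, s
    B = 10e6                         # chirp bandwidth, Hz
    t = np.arange(0, T, 1 / fs)
    chirp = np.exp(1j * np.pi * (B / T) * t ** 2)      # linear FM pulse (complex baseband)

    # Simulated echo: the same pulse delayed by a target at 1.5 km, plus noise.
    delay_n = int(round(2 * 1500.0 / 3e8 * fs))
    rx = np.zeros(4096, dtype=complex)
    rx[delay_n:delay_n + chirp.size] += 0.2 * chirp
    rx += 0.05 * (np.random.randn(rx.size) + 1j * np.random.randn(rx.size))

    # Pulse compression: matched filtering (correlation with the transmitted chirp).
    compressed = np.abs(np.correlate(rx, chirp, mode='valid'))
    est_range = compressed.argmax() / fs * 3e8 / 2
    print(f"estimated target range: {est_range:.1f} m")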
Compound synchronization of four memristor chaotic oscillator systems and secure communication.
Sun, Junwei; Shen, Yi; Yin, Quan; Xu, Chengjie
2013-03-01
In this paper, a novel kind of compound synchronization among four chaotic systems is investigated, where the drive systems have been conceptually divided into two categories: scaling drive systems and base drive systems. Firstly, a sufficient condition is obtained to ensure compound synchronization among four memristor chaotic oscillator systems based on the adaptive technique. Secondly, a secure communication scheme via adaptive compound synchronization of four memristor chaotic oscillator systems is presented. The corresponding theoretical proofs and numerical simulations are given to demonstrate the validity and feasibility of the proposed control technique. The unpredictability of scaling drive systems can additionally enhance the security of communication. The transmitted signals can be split into several parts loaded in the drive systems to improve the reliability of communication.
Heike Kamerlingh Onnes: Master of Experimental Technique and Quantitative Research
NASA Astrophysics Data System (ADS)
Reif-Acherman, Simón
Heike Kamerlingh Onnes (1853-1926), born a century and a half ago, was a major protagonist in the so-called Second Golden Age of Dutch Science. He devoted his career to the emerging field of low-temperature physics. His particular concern was to test the theories of his older compatriot Johannes Diderik van der Waals (1837-1923) by creating a style of research that was characterized by meticulous planning, precise measurement, and constant improvement of techniques and instruments. He made numerous contributions to low-temperature physics, but I focus on his liquefaction of helium, for which he received the Nobel Prize in Physics for 1913, and on his discovery of superconductivity. He became known internationally as le gentleman du zéro absolu.
Process simulation for advanced composites production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allendorf, M.D.; Ferko, S.M.; Griffiths, S.
1997-04-01
The objective of this project is to improve the efficiency and lower the cost of chemical vapor deposition (CVD) processes used to manufacture advanced ceramics by providing the physical and chemical understanding necessary to optimize and control these processes. Project deliverables include: numerical process models; databases of thermodynamic and kinetic information related to the deposition process; and process sensors and software algorithms that can be used for process control. Target manufacturing techniques include CVD fiber coating technologies (used to deposit interfacial coatings on continuous fiber ceramic preforms), chemical vapor infiltration, thin-film deposition processes used in the glass industry, and coating techniques used to deposit wear-, abrasion-, and corrosion-resistant coatings for use in the pulp and paper, metals processing, and aluminum industries.
Numerical simulation of coupled electrochemical and transport processes in battery systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liaw, B.Y.; Gu, W.B.; Wang, C.Y.
1997-12-31
Advanced numerical modeling to simulate dynamic battery performance characteristics for several types of advanced batteries is being conducted using computational fluid dynamics (CFD) techniques. The CFD techniques provide efficient algorithms to solve a large set of highly nonlinear partial differential equations that represent the complex battery behavior governed by coupled electrochemical reactions and transport processes. The authors have recently successfully applied such techniques to model advanced lead-acid, Ni-Cd and Ni-MH cells. In this paper, the authors briefly discuss how the governing equations were numerically implemented, show some preliminary modeling results, and compare them with other modeling or experimental data reported in the literature. The authors describe the advantages and implications of using the CFD techniques and their capabilities in future battery applications.
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
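A brief sketch of one of the metamodeling families compared in the paper (a Gaussian process model) fitted to a single-input, single-output data set; the data-generating function, kernel, and noise level are illustrative assumptions, not the paper's numerical experiments:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # Synthetic single input - single output data from an assumed Data Generating Mechanism.
    rng = np.random.default_rng(3)
    X = np.sort(rng.uniform(0.0, 10.0, 25)).reshape(-1, 1)
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(X.shape[0])

    # Gaussian process metamodel with an RBF kernel plus a noise term.
    kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X, y)

    # Predictions with uncertainty: mean and standard deviation on a test grid.
    X_test = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
    mean, std = gp.predict(X_test, return_std=True)
    print("max predictive standard deviation:", std.max())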
Pattern recognition of satellite cloud imagery for improved weather prediction
NASA Technical Reports Server (NTRS)
Gautier, Catherine; Somerville, Richard C. J.; Volfson, Leonid B.
1986-01-01
The major accomplishment was the successful development of a method for extracting time derivative information from geostationary meteorological satellite imagery. This research is a proof-of-concept study which demonstrates the feasibility of using pattern recognition techniques and a statistical cloud classification method to estimate time rate of change of large-scale meteorological fields from remote sensing data. The cloud classification methodology is based on typical shape function analysis of parameter sets characterizing the cloud fields. The three specific technical objectives, all of which were successfully achieved, are as follows: develop and test a cloud classification technique based on pattern recognition methods, suitable for the analysis of visible and infrared geostationary satellite VISSR imagery; develop and test a methodology for intercomparing successive images using the cloud classification technique, so as to obtain estimates of the time rate of change of meteorological fields; and implement this technique in a testbed system incorporating an interactive graphics terminal to determine the feasibility of extracting time derivative information suitable for comparison with numerical weather prediction products.
Parameterizing unresolved obstacles with source terms in wave modeling: A real-world application
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Kakoulaki, Georgia; Vousdoukas, Michalis; Voukouvalas, Evangelos; Feyen, Luc; Besio, Giovanni
2018-06-01
Parameterizing the dissipative effects of small, unresolved coastal features, is fundamental to improve the skills of wave models. The established technique to deal with this problem consists in reducing the amount of energy advected within the propagation scheme, and is currently available only for regular grids. To find a more general approach, Mentaschi et al., 2015b formulated a technique based on source terms, and validated it on synthetic case studies. This technique separates the parameterization of the unresolved features from the energy advection, and can therefore be applied to any numerical scheme and to any type of mesh. Here we developed an open-source library for the estimation of the transparency coefficients needed by this approach, from bathymetric data and for any type of mesh. The spectral wave model WAVEWATCH III was used to show that in a real-world domain, such as the Caribbean Sea, the proposed approach has skills comparable and sometimes better than the established propagation-based technique.
Full waveform inversion of combined towed streamer and limited OBS seismic data: a theoretical study
NASA Astrophysics Data System (ADS)
Yang, Huachen; Zhang, Jianzhong
2018-06-01
In marine seismic oil exploration, full waveform inversion (FWI) of towed-streamer data is used to reconstruct velocity models. However, the FWI of towed-streamer data easily converges to a local minimum solution due to the lack of low-frequency content. In this paper, we propose a new FWI technique using towed-streamer data, its integrated data sets and limited OBS data. Both integrated towed-streamer seismic data and OBS data have low-frequency components. Therefore, at early iterations in the new FWI technique, the OBS data combined with the integrated towed-streamer data sets reconstruct an appropriate background model. And the towed-streamer seismic data play a major role in later iterations to improve the resolution of the model. The new FWI technique is tested on numerical examples. The results show that when starting models are not accurate enough, the models inverted using the new FWI technique are superior to those inverted using conventional FWI.
Polarization interferometry for real-time spectroscopic plasmonic sensing.
Otto, Lauren M; Mohr, Daniel A; Johnson, Timothy W; Oh, Sang-Hyun; Lindquist, Nathan C
2015-03-07
We present quantitative, spectroscopic polarization interferometry phase measurements on plasmonic surfaces for sensing applications. By adding a liquid crystal variable wave plate in our beam path, we are able to measure phase shifts due to small refractive index changes on the sensor surface. By scanning in a quick sequence, our technique is extended to demonstrate real-time measurements. While this optical technique is applicable to different sensor geometries (e.g., nanoparticles, nanogratings, or nanoapertures), the plasmonic sensors we use here consist of an ultrasmooth gold layer with buried linear gratings. Using these devices and our phase measurement technique, we calculate a figure of merit that shows improvement over measuring only surface plasmon resonance shifts from a reflected intensity spectrum. To demonstrate the general-purpose versatility of our phase-resolved measurements, we also show numerical simulations with another common device architecture: periodic plasmonic slits. Since our technique inherently measures both the intensity and phase of the reflected or transmitted light simultaneously, quantitative sensor device characterization is possible.
Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds
NASA Astrophysics Data System (ADS)
Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea
2013-04-01
Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in flood warning delivery. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, runoff processes in the Singapore Marina Reservoir catchment have a very short time of concentration (roughly one hour), so observational data are nearly useless for runoff predictions and weather predictions are required. Unfortunately, radar nowcasting methods do not allow long-term weather predictions to be carried out, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are usually of limited reliability because of the fast motion and limited spatial extension of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network for the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate the model reliability for longer-term predictions, as well as different time lags to see how past information can improve results. Results show that the proposed approach allows a significant improvement of the prediction accuracy of the WRF model over the Singapore urban area.
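As an illustration of this "virtual sensor" workflow, the sketch below trains a tree-based regressor on lagged predictor variables to forecast rainfall a few hours ahead. The data are synthetic, the variable names are hypothetical, and scikit-learn's ExtraTreesRegressor stands in for the non-parametric tree-based model and input-selection step used in the study.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for hourly WRF state variables ("virtual sensors"); synthetic here,
# in the study they come from WRF grid points around the catchment.
n_hours, n_vars = 2000, 12
X_wrf = rng.normal(size=(n_hours, n_vars))
rain = np.maximum(0.0, X_wrf[:, 0] + 0.5 * X_wrf[:, 3]
                  + rng.normal(scale=0.3, size=n_hours))

def lagged_features(X, lags):
    """Stack time-lagged copies of the predictors, dropping the first max(lags) rows."""
    cols = [np.roll(X, lag, axis=0) for lag in lags]
    return np.hstack(cols)[max(lags):]

lags = [0, 1, 2, 3]          # hours of past information fed to the model
lead = 2                     # prediction lead time in hours
X = lagged_features(X_wrf, lags)
y = rain[max(lags) + lead:]
X = X[:-lead] if lead > 0 else X

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.25)
model = ExtraTreesRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out hours:", round(model.score(X_te, y_te), 3))
```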
Islam, Md Shafiqul; Khan, Kamruzzaman; Akbar, M Ali; Mastroberardino, Antonio
2014-10-01
The purpose of this article is to present an analytical method, namely the improved F-expansion method combined with the Riccati equation, for finding exact solutions of nonlinear evolution equations. The present method is capable of calculating all branches of solutions simultaneously, even if multiple solutions are very close and thus difficult to distinguish with numerical techniques. To verify the computational efficiency, we consider the modified Benjamin-Bona-Mahony equation and the modified Korteweg-de Vries equation. Our results reveal that the method is a very effective and straightforward way of formulating the exact travelling wave solutions of nonlinear wave equations arising in mathematical physics and engineering.
Using real options analysis to support strategic management decisions
NASA Astrophysics Data System (ADS)
Kabaivanov, Stanimir; Markovska, Veneta; Milev, Mariyan
2013-12-01
Decision making is a complex process that requires taking into consideration multiple heterogeneous sources of uncertainty. Standard valuation and financial analysis techniques often fail to properly account for all these sources of risk, as well as for all sources of additional flexibility. In this paper we explore applications of a modified binomial tree method for real options analysis (ROA) in an effort to improve the decision making process. Typical use cases of real options are analyzed, with an elaborate study of the applications and advantages that company management can derive from them. Numerical results based on extending the simple binomial tree approach to multiple sources of uncertainty are provided to demonstrate the improvement effects on management decisions.
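As a minimal illustration of binomial-tree real options valuation, the sketch below prices an American-style option to expand (or abandon) a project on a standard one-factor Cox-Ross-Rubinstein lattice. The paper's modification for multiple heterogeneous sources of uncertainty is not reproduced here, and all parameter values are arbitrary placeholders.

```python
import numpy as np

def real_option_binomial(V0, K, r, sigma, T, steps, option="expand", factor=1.3):
    """Value managerial flexibility on a project with a CRR binomial lattice.

    V0     : current value of the underlying project cash flows
    K      : cost of exercising the option (expansion cost or salvage value)
    option : "expand" -> max(V, factor*V - K); "abandon" -> max(V, K)
    """
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    disc = np.exp(-r * dt)

    def payoff(V):
        return np.maximum(V, factor * V - K) if option == "expand" else np.maximum(V, K)

    # project values and option values at maturity
    j = np.arange(steps + 1)
    value = payoff(V0 * u**j * d**(steps - j))

    # backward induction, option exercisable at every node (American style)
    for n in range(steps - 1, -1, -1):
        V = V0 * u**np.arange(n + 1) * d**(n - np.arange(n + 1))
        value = disc * (p * value[1:] + (1 - p) * value[:-1])
        value = np.maximum(value, payoff(V))
    return value[0]

# e.g. option to expand a project worth 100 by 30% at a cost of 20
print(real_option_binomial(100.0, 20.0, r=0.05, sigma=0.35, T=3.0, steps=200))
```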
A 3D finite element ALE method using an approximate Riemann solution
Chiravalle, V. P.; Morgan, N. R.
2016-08-09
Arbitrary Lagrangian–Eulerian finite volume methods that solve a multidimensional Riemann-like problem at the cell center in a staggered grid hydrodynamic (SGH) arrangement have been proposed. This research proposes a new 3D finite element arbitrary Lagrangian–Eulerian SGH method that incorporates a multidimensional Riemann-like problem. Here, two different Riemann jump relations are investigated. A new limiting method that greatly improves the accuracy of the SGH method on isentropic flows is investigated. A remap method that improves upon a well-known mesh relaxation and remapping technique in order to ensure total energy conservation during the remap is also presented. Numerical details and test problem results are presented.
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Lakshmanan, B.
1993-01-01
A high-speed shear layer is studied using a compressibility-corrected Reynolds stress turbulence model that employs a newly developed model for the pressure-strain correlation. The MacCormack explicit predictor-corrector method is used for solving the governing equations and the turbulence transport equations. The stiffness arising from source terms in the turbulence equations is handled by a semi-implicit numerical technique. Results obtained using the new model show a sharper reduction in growth rate with increasing convective Mach number. Some improvements were also noted in the prediction of the normalized streamwise stress and Reynolds shear stress. The computed results are in good agreement with the experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Everett, W.R.; Rechnitz, G.A.
1999-01-01
A mini review of enzyme-based electrochemical biosensors for inhibition analysis of organophosphorus and carbamate pesticides is presented. Discussion includes the most recent literature to present advances in detection limits, selectivity and real sample analysis. Recent reviews on the monitoring of pesticides and their residues suggest that the classical analytical techniques of gas and liquid chromatography are the most widely used methods of detection. These techniques, although very accurate in their determinations, can be quite time consuming and expensive and usually require extensive sample clean up and pre-concentration. For these and many other reasons, the classical techniques are very difficult to adapt for field use. Numerous researchers, in the past decade, have developed and made improvements on biosensors for use in pesticide analysis. This mini review will focus on recent advances made in enzyme-based electrochemical biosensors for the determination of organophosphorus and carbamate pesticides.
Markatos, Konstantinos; Karaoglanis, Georgios; Damaskos, Christos; Garmpis, Nikolaos; Tsourouflis, Gerasimos; Laios, Konstantinos; Tsoucalas, Gregory
2018-05-01
The purpose of this article is to summarize the work and pioneering achievements in the field of orthopedic surgery of the German orthopedic surgeon Karl Ludloff. Ludloff had an impact in the diagnostics, physical examination, orthopedic imaging, and orthopedic surgical technique of his era. He was a pioneer in the surgical treatment of dysplastic hip, anterior cruciate ligament reconstruction, and hallux valgus. His surgical technique for the correction of hallux valgus, initially stabilized with plaster of Paris, remained unpopular among other orthopedic surgeons for decades. In the 1990s, the advent and use of improved orthopedic materials for fixation attracted the interest of numerous orthopedic surgeons in the Ludloff osteotomy for its ability to correct the deformity in all 3 dimensions, its anatomic outcomes, and its low recurrence rate and patient satisfaction.
NASA Astrophysics Data System (ADS)
Harris, S.; Labahn, J. W.; Frank, J. H.; Ihme, M.
2017-11-01
Data assimilation techniques can be integrated with time-resolved numerical simulations to improve predictions of transient phenomena. In this study, optimal interpolation and nudging are employed for assimilating high-speed, high-resolution measurements obtained for an inert jet into high-fidelity large-eddy simulations. This experimental data set was chosen as it provides both high spatial and temporal resolution for the three-component velocity field in the shear layer of the jet. Our first objective is to investigate the impact that data assimilation has on the resulting flow field for this inert jet. This is accomplished by determining the region influenced by the data assimilation and the corresponding effect on the instantaneous flow structures. The second objective is to determine optimal weightings for two data assimilation techniques. The third objective is to investigate how the frequency at which the data is assimilated affects the overall predictions.
FEM Techniques for High Stress Detection in Accelerated Fatigue Simulation
NASA Astrophysics Data System (ADS)
Veltri, M.
2016-09-01
This work presents the theory and a numerical validation study in support of a novel method for a priori identification of fatigue critical regions, with the aim of accelerating durability design in large FEM problems. The investigation is placed in the context of modern full-body structural durability analysis, where a computationally intensive dynamic solution may be required to identify areas with potential for fatigue damage initiation. The early detection of fatigue critical areas can drive a simplification of the problem size, leading to appreciable improvements in solution time and model handling while allowing the critical areas to be processed in higher detail. The proposed technique is applied to a real-life industrial case in a comparative assessment with established practices. Synthetic damage prediction quantification and visualization techniques allow for a quick and efficient comparison between methods, outlining potential application benefits and boundaries.
Solvent-free melting techniques for the preparation of lipid-based solid oral formulations.
Becker, Karin; Salar-Behzadi, Sharareh; Zimmer, Andreas
2015-05-01
Lipid excipients are applied for numerous purposes such as taste masking, controlled release, improvement of swallowability and moisture protection. Several melting techniques have evolved in the last decades. Common examples are melt coating, melt granulation and melt extrusion. The required equipment ranges from ordinary glass beakers for lab scale up to large machines such as fluid bed coaters, spray dryers or extruders. This allows for upscaling to pilot or production scale. Solvent free melt processing provides a cost-effective, time-saving and eco-friendly method for the food and pharmaceutical industries. This review intends to give a critical overview of the published literature on experiences, formulations and challenges and to show possibilities for future developments in this promising field. Moreover, it should serve as a guide for selecting the best excipients and manufacturing techniques for the development of a product with specific properties using solvent free melt processing.
Efficient finite element simulation of slot spirals, slot radomes and microwave structures
NASA Technical Reports Server (NTRS)
Gong, J.; Volakis, J. L.
1995-01-01
This progress report contains the following two documents: (1) 'Efficient Finite Element Simulation of Slot Antennas using Prismatic Elements' - A hybrid finite element-boundary integral (FE-BI) simulation technique is discussed to treat narrow slot antennas etched on a planar platform. Specifically, the prismatic elements are used to reduce the redundant sampling rates and ease the mesh generation process. Numerical results for an antenna slot and frequency selective surfaces are presented to demonstrate the validity and capability of the technique; and (2) 'Application and Design Guidelines of the PML Absorber for Finite Element Simulations of Microwave Packages' - The recently introduced perfectly matched layer (PML) uniaxial absorber for frequency domain finite element simulations has several advantages. In this paper we present the application of PML for microwave circuit simulations along with design guidelines to obtain a desired level of absorption. Different feeding techniques are also investigated for improved accuracy.
Benhammouda, Brahim; Vazquez-Leal, Hector
2016-01-01
This work presents an analytical solution of some nonlinear delay differential equations (DDEs) with variable delays. Such DDEs are difficult to treat numerically and cannot be solved by existing general purpose codes. A new method of steps combined with the differential transform method (DTM) is proposed as a powerful tool to solve these DDEs. This method reduces the DDEs to ordinary differential equations that are then solved by the DTM. Furthermore, we show that the solutions can be improved by the Laplace-Padé resummation method. Two examples are presented to show the efficiency of the proposed technique. The main advantage of this technique is that it possesses a simple procedure based on a few straightforward steps and can be combined with any analytical method, other than the DTM, such as the homotopy perturbation method.
Wang, Qinghua; Ri, Shien; Tsuda, Hiroshi; Kodera, Masako; Suguro, Kyoichi; Miyashita, Naoto
2017-09-19
Quantitative detection of defects in atomic structures is of great significance to evaluating product quality and exploring quality improvement process. In this study, a Fourier transform filtered sampling Moire technique was proposed to visualize and detect defects in atomic arrays in a large field of view. Defect distributions, defect numbers and defect densities could be visually and quantitatively determined from a single atomic structure image at low cost. The effectiveness of the proposed technique was verified from numerical simulations. As an application, the dislocation distributions in a GaN/AlGaN atomic structure in two directions were magnified and displayed in Moire phase maps, and defect locations and densities were detected automatically. The proposed technique is able to provide valuable references to material scientists and engineers by checking the effect of various treatments for defect reduction. © 2017 IOP Publishing Ltd.
Simulation and Modeling in High Entropy Alloys
NASA Astrophysics Data System (ADS)
Toda-Caraballo, I.; Wróbel, J. S.; Nguyen-Manh, D.; Pérez, P.; Rivera-Díaz-del-Castillo, P. E. J.
2017-11-01
High entropy alloys (HEAs) are a fascinating field of research, with an increasing number of new alloys discovered. This would hardly be conceivable without the aid of materials modeling and computational alloy design to investigate the immense compositional space. The simplicity of the microstructure achieved contrasts with the enormous complexity of its composition, which, in turn, increases the variety of property behavior observed. Simulation and modeling techniques are of paramount importance in the understanding of such material performance. There are numerous examples of how different models have explained the observed experimental results; yet, there are theories and approaches developed for conventional alloys, where the presence of one element is predominant, that need to be adapted or re-developed. In this paper, we review the current state of the art of the modeling techniques applied to explain HEA properties, identifying potential new areas of research to improve the predictability of these techniques.
Numerical simulation of steady cavitating flow of viscous fluid in a Francis hydroturbine
NASA Astrophysics Data System (ADS)
Panov, L. V.; Chirkov, D. V.; Cherny, S. G.; Pylev, I. M.; Sotnikov, A. A.
2012-09-01
A numerical technique was developed for the simulation of cavitating flows through the flow passage of a hydraulic turbine. The technique is based on the solution of the steady 3D Navier-Stokes equations with a liquid phase transfer equation. An approach for setting boundary conditions meeting the requirements of the cavitation testing standard is suggested. Four different models of evaporation and condensation were compared. Numerical simulations for turbines of different specific speeds were compared with experiment.
NASA Astrophysics Data System (ADS)
Baniamerian, Jamaledin; Liu, Shuang; Abbas, Mahmoud Ahmed
2018-04-01
The vertical gradient is an essential tool in interpretation algorithms. It is also the primary enhancement technique for improving the resolution of measured gravity and magnetic field data, since it has higher sensitivity to changes in the physical properties (density or susceptibility) of subsurface structures than the measured field. If the field derivatives are not directly measured with gradiometers, they can be calculated from the collected gravity or magnetic data using numerical methods such as those based on the fast Fourier transform technique. The gradients behave similarly to high-pass filters and enhance the short-wavelength anomalies, which may be associated with either small, shallow sources or high-frequency noise content in the data, and their numerical computation is susceptible to amplification of noise. This behaviour can adversely affect the stability of the derivatives in the presence of even a small level of noise and consequently limit their application in interpretation methods. Adding a smoothing term to the conventional formulation for calculating the vertical gradient in the Fourier domain can improve the stability of numerical differentiation of the field. In this paper, we propose a strategy in which the overall efficiency of the classical algorithm in the Fourier domain is improved by incorporating two different smoothing filters. For the smoothing term, a simple qualitative procedure based on upward continuation of the field to a higher altitude is introduced to estimate the related parameters, which are called the regularization parameter and the cut-off wavenumber in the corresponding filters. The efficiency of these new approaches is validated by computing the first- and second-order derivatives of noise-corrupted synthetic data sets and then comparing the results with the true ones. The filtered and unfiltered vertical gradients are incorporated into the extended Euler deconvolution to estimate the depth and structural index of a magnetic sphere, hence quantitatively evaluating the methods. In a real case, the described algorithms are used to enhance a portion of aeromagnetic data acquired in the Mackenzie Corridor, Northern Mainland, Canada.
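The wavenumber-domain computation of the vertical derivative, together with a stabilising low-pass term, can be sketched as follows. The particular filter forms used here (an upward-continuation-like factor and a Tikhonov-style damping) and all parameter values are illustrative assumptions, not the exact filters proposed in the paper.

```python
import numpy as np

def vertical_gradient(field, dx, dy, order=1, h_smooth=0.0, alpha=0.0):
    """n-th vertical derivative of a potential field sampled on a regular grid.

    In the wavenumber domain the n-th vertical derivative corresponds to
    multiplication by |k|**n.  Two optional stabilisers are included: an
    upward-continuation-like low-pass exp(-|k| * h_smooth) and a Tikhonov-style
    damping 1 / (1 + alpha * |k|**2); both are schematic smoothing filters.
    """
    ny, nx = field.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)

    response = k**order * np.exp(-k * h_smooth) / (1.0 + alpha * k**2)
    return np.real(np.fft.ifft2(np.fft.fft2(field) * response))

# toy usage: noisy synthetic anomaly on a 1 km grid
y, x = np.mgrid[0:128, 0:128] * 1000.0
g = np.exp(-((x - 64e3)**2 + (y - 64e3)**2) / (2 * (8e3)**2))
g_noisy = g + 0.01 * np.random.default_rng(1).normal(size=g.shape)
dgdz = vertical_gradient(g_noisy, dx=1000.0, dy=1000.0, order=1, h_smooth=2000.0)
```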
Fast optically sectioned fluorescence HiLo endomicroscopy
Lim, Daryl; Mertz, Jerome
2012-01-01
We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies. PMID:22463023
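A heavily simplified, didactic version of HiLo fusion is sketched below: the low-frequency in-focus content is estimated from the local contrast of the structured-minus-uniform difference image, the high-frequency content is taken from the uniform image, and the two are blended. The weighting scheme, the Gaussian filter scale and the seam-matching factor are assumptions of this sketch, not the exact GPU processing chain of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hilo(uniform, structured, sigma=4.0, eta=None):
    """Simplified HiLo fusion of a uniform- and a structured-illumination image."""
    diff = structured - uniform
    # local standard deviation of the difference image ~ structured-illumination
    # contrast; out-of-focus light shows little structured contrast
    local_var = gaussian_filter(diff**2, sigma) - gaussian_filter(diff, sigma)**2
    weight = np.sqrt(np.clip(local_var, 0.0, None))
    lo = gaussian_filter(weight * uniform, sigma)        # low-pass, in-focus estimate
    hi = uniform - gaussian_filter(uniform, sigma)       # high-pass of the uniform image
    if eta is None:
        # crude seam-matching factor so the two bands have comparable amplitude
        eta = (np.abs(hi).mean() + 1e-12) / (np.abs(lo).mean() + 1e-12)
    return eta * lo + hi

# toy usage with random images standing in for camera frames
rng = np.random.default_rng(0)
uniform = rng.random((256, 256))
structured = uniform * (1 + 0.5 * np.sin(np.arange(256) / 3.0))[None, :]
img = hilo(uniform, structured)
```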
Sound source measurement by using a passive sound insulation and a statistical approach
NASA Astrophysics Data System (ADS)
Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.
2015-10-01
This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out inside noisy environments while reducing background noise effects. The proposed method is based on the integration of a traditional passive noise insulation system with a statistical approach. The latter is applied to signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. At low frequencies, the statistical approach improves on the sound insulation provided by the passive system alone. The developed measurement technique has been validated by means of numerical simulations and measurements carried out inside a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in a frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured by applying the proposed method and the measurement error related to its application are reported as well.
Review of photorejuvenation: devices, cosmeceuticals, or both?
Rokhsar, Cameron K; Lee, Sandra; Fitzpatrick, Richard E
2005-09-01
Both the public and the medical profession have placed a lot of attention on reversal of signs of aging and photodamage, resulting in numerous cosmeceutical products and nonablative laser techniques designed to achieve these results. The purpose of this report is to briefly review both the cosmeceutical products and nonablative laser techniques that appear to be most promising based on published studies. After this review, recommendations for potential enhancement of benefits by combining cosmeceuticals and laser treatments will be explored. Pulsed dye lasers targeting microvessels, intense pulsed light targeting both melanin and microvessels, and midinfrared lasers targeting dermal water and collagen all appear to have some ability to improve skin texture, color, and wrinkling. Retinoids, vitamin C, alpha-hydroxy acids, and topical growth factors may also stimulate repair mechanisms that result in similar improvements in photodamaged skin. Although supported only by theoretic considerations and anecdotal reports, it seems logical that the concurrent use of appropriate cosmeceuticals with nonablative laser photorejuvenation should result in enhanced benefits.
Fabrication and optical characterization of silica optical fibers containing gold nanoparticles.
de Oliveira, Rafael E P; Sjödin, Niclas; Fokine, Michael; Margulis, Walter; de Matos, Christiano J S; Norin, Lars
2015-01-14
Gold nanoparticles have been used since antiquity for the production of red-colored glasses. More recently, it was determined that this color is caused by plasmon resonance, which additionally increases the material's nonlinear optical response, allowing for the improvement of numerous optical devices. Interest in silica fibers containing gold nanoparticles has increased recently, aiming at the integration of nonlinear devices with conventional optical fibers. However, fabrication is challenging due to the high temperatures required for silica processing, and fibers with gold nanoparticles have so far been demonstrated only using sol-gel techniques. We show a new fabrication technique based on standard preform/fiber fabrication methods, where nanoparticles are nucleated by heat in a furnace or by laser exposure with unprecedented control over particle size, concentration, and distribution. Plasmon absorption peaks exceeding 800 dB m⁻¹ at 514-536 nm wavelengths were observed, indicating higher achievable nanoparticle concentrations than previously reported. The measured resonant nonlinear refractive index, (6.75 ± 0.55) × 10⁻¹⁵ m² W⁻¹, represents an improvement of more than 50×.
Crystalline phases by an improved gradient expansion technique
NASA Astrophysics Data System (ADS)
Carignano, S.; Mannarelli, M.; Anzuini, F.; Benhar, O.
2018-02-01
We develop an innovative technique for studying inhomogeneous phases with a spontaneous broken symmetry. The method relies on the knowledge of the exact form of the free energy in the homogeneous phase and on a specific gradient expansion of the order parameter. We apply this method to quark matter at vanishing temperature and large chemical potential, which is expected to be relevant for astrophysical considerations. The method is remarkably reliable and fast as compared to performing the full numerical diagonalization of the quark Hamiltonian in momentum space and is designed to improve the standard Ginzburg-Landau expansion close to the phase transition points. For definiteness, we focus on inhomogeneous chiral symmetry breaking, accurately reproducing known results for one-dimensional and two-dimensional modulations and examining novel crystalline structures, as well. Consistently with previous results, we find that the energetically favored modulation is the so-called one-dimensional real-kink crystal. We propose a qualitative description of the pairing mechanism to motivate this result.
Peng, Yuyang; Choi, Jaeho
2014-01-01
Improving the energy efficiency of wireless sensor networks (WSNs) has attracted considerable attention in recent years. The multiple-input multiple-output (MIMO) technique has been proven a good candidate for improving energy efficiency, but it may not be feasible in WSNs due to the size limitation of the sensor nodes. As a solution, the cooperative multiple-input multiple-output (CMIMO) technique overcomes this constraint and shows dramatically good performance. In this paper, a new CMIMO scheme based on the spatial modulation (SM) technique, named CMIMO-SM, is proposed for energy-efficiency improvement. We first establish the system model of CMIMO-SM. Based on this model, the transmission approach is introduced graphically. In order to evaluate the performance of the proposed scheme, a detailed analysis of the energy consumption per bit of the proposed scheme, compared with conventional CMIMO, is presented. Guided by this new scheme, we then extend CMIMO-SM to a multihop clustered WSN to further improve energy efficiency by finding an optimal hop length; the traditional equidistant-hop scheme is used for comparison. Results from the simulations and numerical experiments indicate that the proposed scheme achieves significant savings in total energy consumption. Combining the proposed scheme with monitoring sensor nodes will provide good performance in arbitrarily deployed WSNs such as forest fire detection systems.
Ilovitsh, Tali; Meiri, Amihai; Ebeling, Carl G.; Menon, Rajesh; Gerton, Jordan M.; Jorgensen, Erik M.; Zalevsky, Zeev
2013-01-01
Localization of a single fluorescent particle with sub-diffraction-limit accuracy is a key merit in localization microscopy. Existing methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve localization accuracies of single emitters that can reach an order of magnitude below the conventional resolving capabilities of optical microscopy. However, these techniques require a sparse distribution of simultaneously activated fluorophores in the field of view, resulting in a longer time needed for the construction of the full image. In this paper we present the use of a nonlinear image decomposition algorithm termed K-factor, which reduces an image into a nonlinear set of contrast-ordered decompositions whose joint product reassembles the original image. The K-factor technique, when implemented on raw data prior to localization, can improve the localization accuracy of standard existing methods, and also enables the localization of overlapping particles, allowing the use of increased fluorophore activation density and thereby increased data collection speed. Numerical simulations of fluorescence data with random probe positions, and especially at high densities of activated fluorophores, demonstrate an improvement of up to 85% in the localization precision compared to single-fitting techniques. Implementing the proposed concept on experimental data of cellular structures yielded a 37% improvement in resolution for the same super-resolution image acquisition time, and a decrease of 42% in the collection time of super-resolution data with the same resolution. PMID:24466491
Pardini, Tom; Aquila, Andrew; Boutet, Sebastien; ...
2017-06-15
Numerical simulations of the current and future pulse intensity distributions at selected locations along the Far Experimental Hall, the hard X-ray section of the Linac Coherent Light Source (LCLS), are provided. Estimates are given for the pulse fluence, energy and size in and out of focus, taking into account effects due to the experimentally measured divergence of the X-ray beam, and measured figure errors of all X-ray optics in the beam path. Out-of-focus results are validated by comparison with experimental data. Previous work is expanded on, providing quantitatively correct predictions of the pulse intensity distribution. Numerical estimates in focus are particularly important given that the latter cannot be measured with direct imaging techniques due to detector damage. Finally, novel numerical estimates of improvements to the pulse intensity distribution expected as part of the on-going upgrade of the LCLS X-ray transport system are provided. As a result, we suggest how the new generation of X-ray optics to be installed would outperform the old one, satisfying the tight requirements imposed by X-ray free-electron laser facilities.
Noniterative, unconditionally stable numerical techniques for solving condensational and dissolutional growth equations are given. Growth solutions are compared to Gear-code solutions for three cases when growth is coupled to reversible equilibrium chemistry. In all cases, ...
Numerical Assessment of Four-Port Through-Flow Wave Rotor Cycles with Passage Height Variation
NASA Technical Reports Server (NTRS)
Paxson, D. E.; Lindau, Jules W.
1997-01-01
The potential for improved performance of wave rotor cycles through the use of passage height variation is examined. A quasi-one-dimensional CFD code with experimentally validated loss models is used to determine the flowfield in the wave rotor passages. Results indicate that a carefully chosen passage height profile can produce substantial performance gains. Numerical performance data are presented for a specific profile in a four-port, through-flow cycle design, which yielded a computed 4.6% increase in design-point pressure ratio over a comparably sized rotor with constant passage height. In a small gas turbine topping cycle application, this increased pressure ratio would reduce specific fuel consumption to 22% below that of the un-topped engine; a significant improvement over the already impressive 18% reduction predicted for the constant passage height rotor. The simulation code is briefly described. The method used to obtain rotor passage height profiles with enhanced performance is presented. Design and off-design results are shown using two different computational techniques. The paper concludes with some recommendations for further work.
Sivasankar, P; Suresh Kumar, G
2017-01-01
In the present work, the influence of reservoir pH conditions on the dynamics of microbial enhanced oil recovery (MEOR) processes using Pseudomonas putida was analysed numerically with a mathematical model developed for MEOR processes. Further, a new strategy to improve MEOR performance is also proposed. It is concluded from the present study that by reversing the reservoir pH from highly acidic to low alkaline conditions (pH 5-8), the flow and mobility of displaced oil, the displacement efficiency, and the original oil in place (OOIP) recovered are significantly enhanced as a result of improved interfacial tension (IFT) reduction by biosurfactants. At pH 8, a maximum of 26.1% of OOIP was recovered with higher displacement efficiency. The present study introduces a new strategy to increase the recovery efficiency of the MEOR technique by characterizing the biosurfactants for IFT_min/IFT_max values at different pH conditions and subsequently reversing the reservoir pH to the condition at which the IFT_min/IFT_max value is minimum. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirano, Teruyuki; Winn, Joshua N.; Albrecht, Simon
We present an improved formula for the anomalous radial velocity of the star during planetary transits due to the Rossiter-McLaughlin (RM) effect. The improvement comes from a more realistic description of the stellar absorption line profiles, taking into account stellar rotation, macroturbulence, thermal broadening, pressure broadening, and instrumental broadening. Although the formula is derived for the case in which radial velocities are measured by cross-correlation, we show through numerical simulations that the formula accurately describes the cases where the radial velocities are measured with the iodine absorption-cell technique. The formula relies on prior knowledge of the parameters describing macroturbulence, instrumental broadening, and other broadening mechanisms, but even 30% errors in those parameters do not significantly change the results in typical circumstances. We show that the new analytic formula agrees with previous ones that had been computed on a case-by-case basis via numerical simulations. Finally, as one application of the new formula, we reassess the impact of differential rotation on the RM velocity anomaly. We show that differential rotation of a rapidly rotating star may have a significant impact on future RM observations.
Co-processing as a tool to improve aqueous dispersibility of cellulose ethers.
Sharma, Payal; Modi, Sameer R; Bansal, Arvind K
2015-01-01
Cellulose ethers are important materials with numerous applications in pharmaceutical industry. They are widely employed as stabilizers and viscosity enhancers for dispersed systems, binders in granulation process and as film formers for tablets. These polymers, however, exhibit challenge during preparation of their aqueous dispersions. Rapid hydration of their surfaces causes formation of a gel that prevents water from reaching the inner core of the particle. Moreover, the surfaces of these particles become sticky, thus leading to agglomeration, eventually reducing their dispersion kinetics. Numerous procedures have been tested to improve dispersibility of cellulose ethers. These include the use of cross-linking agents, alteration in the synthesis process, adjustment of water content of cellulose ether, modification by attaching hydrophobic substituents and co-processing using various excipients. Among these, co-processing has provided the most encouraging results. This review focuses on the molecular mechanisms responsible for the poor dispersibility of cellulose ethers and the role of co-processing technologies in overcoming the challenge. An attempt has been made to highlight various co-processing techniques and specific role of excipients used for co-processing.
NASA Astrophysics Data System (ADS)
Gu, J.; Yang, H.; Fan, F.; Su, M.
2017-12-01
A transmission and reflection coupled ultrasonic process tomography has been developed, characterized by a proposed dual-mode (DM) reconstruction algorithm, as well as an adaptive search approach to determine an optimal image threshold during image binarization. With respect to hardware, to improve the accuracy of the time-of-flight (TOF) and extend the lowest detection limit of particle size, a cylindrical miniaturized transducer using polyvinylidene fluoride (PVDF) films is designed. In addition, the development of a range-gating technique for the identification of transmission and reflection waves during scanning is discussed. A particle system with four iron particles is then investigated numerically and experimentally to evaluate these proposed methods. The sound pressure distribution in the imaging area is predicted numerically, followed by an analysis of the relationship between the emitting surface width of the transducer and the particle size. After processing of the experimental data for effective waveform extraction and fusion, a comparison between reconstructed results from transmission-mode (TM), reflection-mode (RM), and dual-mode reconstructions is carried out, and the latter shows clear improvements, from blurring reduction to enhancement of the particle boundaries.
Van Dun, Bram; Wouters, Jan; Moonen, Marc
2009-07-01
Auditory steady-state responses (ASSRs) are used for hearing threshold estimation at audiometric frequencies. Hearing impaired newborns, in particular, benefit from this technique, as it allows for a more precise diagnosis than traditional techniques, and a hearing aid can be better fitted at an early age. However, the measurement duration of current single-channel techniques is still too long for widespread clinical use. This paper evaluates the practical performance of a multi-channel electroencephalogram (EEG) processing strategy based on a detection theory approach. A minimum electrode set is determined for ASSRs with frequencies between 80 and 110 Hz using eight-channel EEG measurements of ten normal-hearing adults. This set provides a near-optimal hearing threshold estimate for all subjects and improves response detection significantly for EEG data with numerous artifacts. Multi-channel processing does not significantly improve response detection for EEG data with few artifacts. In this case, the best response detection is obtained when noise-weighted averaging is applied to single-channel data. The same test setup (eight channels, ten normal-hearing subjects) is also used to determine a minimum electrode setup for 10-Hz ASSRs. This configuration allows near-optimal signal-to-noise ratios to be recorded for 80% of subjects.
Modulation aware cluster size optimisation in wireless sensor networks
NASA Astrophysics Data System (ADS)
Sriram Naik, M.; Kumar, Vinay
2017-07-01
Wireless sensor networks (WSNs) play a great role because of their numerous advantages to mankind. The main challenge with WSNs is energy efficiency. In this paper, we have focused on energy minimisation with the help of cluster size optimisation, along with consideration of the modulation effect when the nodes are not able to communicate using the baseband communication technique. Cluster size optimisation is an important technique for improving the performance of WSNs. It provides improvements in energy efficiency, network scalability, network lifetime and latency. We have proposed an analytical expression for cluster size optimisation using the traditional sensing model of nodes for a square sensing field, with consideration of modulation effects. Energy minimisation can be achieved by changing the modulation schemes such as BPSK, 16-QAM, QPSK, 64-QAM, etc., so we consider the effect of different modulation techniques on cluster formation. The nodes in the sensing field are randomly and uniformly deployed. It is also observed that placement of the base station at the centre of the scenario enables only a small number of modulation schemes to work in an energy-efficient manner, whereas placing the base station at the corner of the sensing field enables a larger number of modulation schemes to work in an energy-efficient manner.
Improved analysis techniques for cylindrical and spherical double probes.
Beal, Brian; Johnson, Lee; Brown, Daniel; Blakely, Joseph; Bromaghim, Daron
2012-07-01
A versatile double Langmuir probe technique has been developed by incorporating analytical fits to Laframboise's numerical results for ion current collection by biased electrodes of various sizes relative to the local electron Debye length. Application of these fits to the double probe circuit has produced a set of coupled equations that express the potential of each electrode relative to the plasma potential as well as the resulting probe current as a function of applied probe voltage. These equations can be readily solved via standard numerical techniques in order to determine electron temperature and plasma density from probe current and voltage measurements. Because this method self-consistently accounts for the effects of sheath expansion, it can be readily applied to plasmas with a wide range of densities and low ion temperature (T_i/T_e ≪ 1) without requiring probe dimensions to be asymptotically large or small with respect to the electron Debye length. The presented approach has been successfully applied to experimental measurements obtained in the plume of a low-power Hall thruster, which produced a quasineutral, flowing xenon plasma during operation at 200 W. The measured plasma densities and electron temperatures were in the range of 1 × 10¹² to 1 × 10¹⁷ m⁻³ and 0.5-5.0 eV, respectively. The estimated measurement uncertainty is +6%/-34% in density and ±30% in electron temperature.
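For orientation, the sketch below fits the classic symmetric double-probe characteristic (thin-sheath limit, with a small linear term absorbing residual sheath expansion) to an I-V curve and converts the ion saturation current to density via the Bohm relation. This is a simplified stand-in, not the paper's method based on analytical fits to Laframboise's curves; the probe area, gas and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

E_CHARGE = 1.602e-19          # C
M_XENON = 131.29 * 1.66e-27   # kg

def double_probe_current(V, I_sat, Te_eV, slope):
    """Classic symmetric double-probe characteristic in the thin-sheath limit."""
    return I_sat * np.tanh(V / (2.0 * Te_eV)) + slope * V

def analyze(V, I, probe_area):
    """Fit the I-V curve, then get density from the Bohm relation
    I_sat = 0.61 * n * e * A * sqrt(e*Te/M)."""
    popt, _ = curve_fit(double_probe_current, V, I, p0=[1e-6, 2.0, 0.0])
    I_sat, Te_eV, _ = popt
    n = I_sat / (0.61 * E_CHARGE * probe_area *
                 np.sqrt(E_CHARGE * Te_eV / M_XENON))
    return Te_eV, n

# synthetic characteristic for a 1 mm^2 probe in a ~2 eV, ~1e16 m^-3 xenon plasma
V = np.linspace(-30, 30, 121)
I_clean = double_probe_current(V, 1.2e-6, 2.0, 2e-9)
I_meas = I_clean + 2e-8 * np.random.default_rng(2).normal(size=V.size)
Te, n = analyze(V, I_meas, probe_area=1e-6)
print(f"Te = {Te:.2f} eV, n = {n:.2e} m^-3")
```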
NASA Astrophysics Data System (ADS)
Naumenko, Natalya F.
2014-09-01
A numerical technique characterized by a unified approach for the analysis of different types of acoustic waves utilized in resonators in which a periodic metal grating is used for excitation and reflection of such waves is described. The combination of the Finite Element Method analysis of the electrode domain with the Spectral Domain Analysis (SDA) applied to the adjacent upper and lower semi-infinite regions, which may be multilayered and include air as a special case of a dielectric material, enables rigorous simulation of the admittance in resonators using surface acoustic waves, Love waves, plate modes including Lamb waves, Stonely waves, and other waves propagating along the interface between two media, and waves with transient structure between the mentioned types. The matrix formalism with improved convergence incorporated into SDA provides fast and robust simulation for multilayered structures with arbitrary thickness of each layer. The described technique is illustrated by a few examples of its application to various combinations of LiNbO3, isotropic silicon dioxide and silicon with a periodic array of Cu electrodes. The wave characteristics extracted from the admittance functions change continuously with the variation of the film and plate thicknesses over wide ranges, even when the wave nature changes. The transformation of the wave nature with the variation of the layer thicknesses is illustrated by diagrams and contour plots of the displacements calculated at resonant frequencies.
Zhao, Jianxun; Lu, Hongmin; Deng, Jun
2015-02-01
The planar-scanning technique was applied to the experimental measurement of the electric field and power flux density (PFD) in the exposure area close to a millimeter-wave (MMW) radiator. In the near-field region, the field and PFD were calculated from the plane-wave spectrum of the field sampled on a scan plane far from the radiator. The measurement resolution was improved by reducing the spatial interval between the field samples to a fraction of half the wavelength and implementing multiple iterations of the fast Fourier transform. With reference to the results from the numerical calculation, an experimental evaluation of the planar-scanning measurement was made for a 50 GHz radiator. Placing the probe 1 to 3 wavelengths from the aperture of the radiator, the direct measurement gave near-field data with significant differences from the numerical results. The planar-scanning measurement placed the probe 9 wavelengths away from the aperture and effectively reduced the maximum and averaged differences in the near-field data by 70.6% and 65.5%, respectively. Applied to the dosimetry of an open-ended waveguide and a choke ring antenna for 60 GHz exposure, the technique proved useful for the measurement of the PFD in the near-field exposure area of MMW radiators. © 2015 Wiley Periodicals, Inc.
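The core plane-wave-spectrum step can be sketched as follows: the sampled field is decomposed by FFT into plane waves, each component is multiplied by its propagation phase, and the spectrum is transformed back to give the field on another plane. Evanescent components are simply discarded here, which is reasonable when the scan plane lies several wavelengths from the aperture; the Gaussian test field, sampling and distances are illustrative assumptions, not the measurement configuration of the paper.

```python
import numpy as np

def propagate_angular_spectrum(E, dx, wavelength, dz):
    """Propagate a sampled transverse field E(y, x) by dz via its plane-wave spectrum."""
    k = 2.0 * np.pi / wavelength
    ny, nx = E.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    kz2 = k**2 - KX**2 - KY**2
    prop = np.zeros_like(KX, dtype=complex)
    mask = kz2 > 0                                  # keep propagating waves only
    prop[mask] = np.exp(1j * np.sqrt(kz2[mask]) * dz)
    return np.fft.ifft2(np.fft.fft2(E) * prop)

# toy usage at 50 GHz: Gaussian scan-plane field back-propagated towards the source
wavelength = 3e8 / 50e9                             # ~6 mm
dx = wavelength / 4
y, x = (np.mgrid[0:256, 0:256] - 128) * dx
E_scan = np.exp(-(x**2 + y**2) / (2 * (10 * wavelength)**2))
E_near = propagate_angular_spectrum(E_scan, dx, wavelength, dz=-9 * wavelength)
```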
A Sensitivity Analysis of Circular Error Probable Approximation Techniques
1992-03-01
SENSITIVITY ANALYSIS OF CIRCULAR ERROR PROBABLE APPROXIMATION TECHNIQUES THESIS Presented to the Faculty of the School of Engineering of the Air Force...programming skills. Major Paul Auclair patiently advised me in this endeavor, and Major Andy Howell added numerous insightful contributions. I thank my...techniques. The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6]. Some
Upgrades for the CMS simulation
Lange, D. J.; Hildreth, M.; Ivantchenko, V. N.; ...
2015-05-22
Over the past several years, the CMS experiment has made significant changes to its detector simulation application. The geometry has been generalized to include modifications being made to the CMS detector for 2015 operations, as well as model improvements to the simulation geometry of the current CMS detector and the implementation of a number of approved and possible future detector configurations. These include both completely new tracker and calorimetry systems. We have completed the transition to Geant4 version 10, and we have made significant progress in reducing the CPU resources required to run our Geant4 simulation. These have been achieved through both technical improvements and numerical techniques. Substantial speed improvements have been achieved without changing the physics validation benchmarks that the experiment uses to validate our simulation application for use in production. As a result, we will discuss the methods that we implemented and the corresponding demonstrated performance improvements deployed for our 2015 simulation application.
NASA Astrophysics Data System (ADS)
Wu, Cheng; Zhen Yu, Jian
2018-03-01
Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated and the degree of bias is more pronounced with a low-R² XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If the a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR provide the smallest biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.
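The contrast between OLS and an error-in-variables fit can be reproduced in a few lines. The sketch below uses synthetic data with known X and Y uncertainties and SciPy's odr module for weighted orthogonal distance regression; it is an illustration of the WODR idea, not the Igor Pro Scatter Plot program described in the paper.

```python
import numpy as np
import scipy.odr as odr

rng = np.random.default_rng(7)

# synthetic truth and noisy X, Y observations with known measurement errors
n = 200
x_true = rng.uniform(0, 10, n)
y_true = 2.0 * x_true + 1.0
sx = 0.5 * np.ones(n)                    # std. dev. of X errors
sy = 1.0 * np.ones(n)                    # std. dev. of Y errors
x = x_true + rng.normal(scale=sx)
y = y_true + rng.normal(scale=sy)

# Ordinary least squares (ignores X errors -> slope biased low)
slope_ols, intercept_ols = np.polyfit(x, y, 1)

# Weighted orthogonal distance regression (accounts for both error components)
linear = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(x, y, sx=sx, sy=sy)
fit = odr.ODR(data, linear, beta0=[1.0, 0.0]).run()
slope_odr, intercept_odr = fit.beta

print(f"OLS : slope={slope_ols:.3f}, intercept={intercept_ols:.3f}")
print(f"WODR: slope={slope_odr:.3f}, intercept={intercept_odr:.3f} (true: 2.000, 1.000)")
```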
NASA Astrophysics Data System (ADS)
Yun, Ana; Shin, Jaemin; Li, Yibao; Lee, Seunggyu; Kim, Junseok
We numerically investigate periodic traveling wave solutions for a diffusive predator-prey system with landscape features. The landscape features are modeled through the homogeneous Dirichlet boundary condition which is imposed at the edge of the obstacle domain. To effectively treat the Dirichlet boundary condition, we employ a robust and accurate numerical technique by using a boundary control function. We also propose a robust algorithm for calculating the numerical periodicity of the traveling wave solution. In numerical experiments, we show that periodic traveling waves which move out and away from the obstacle are effectively generated. We explain the formation of the traveling waves by comparing the wavelengths. The spatial asynchrony has been shown in quantitative detail for various obstacles. Furthermore, we apply our numerical technique to the complicated real landscape features.
Klein, Max; Sharma, Rati; Bohrer, Chris H; Avelis, Cameron M; Roberts, Elijah
2017-01-15
Data-parallel programming techniques can dramatically decrease the time needed to analyze large datasets. While these methods have provided significant improvements for sequencing-based analyses, other areas of biological informatics have not yet adopted them. Here, we introduce Biospark, a new framework for performing data-parallel analysis on large numerical datasets. Biospark builds upon the open source Hadoop and Spark projects, bringing domain-specific features for biology. Source code is licensed under the Apache 2.0 open source license and is available at the project website: https://www.assembla.com/spaces/roberts-lab-public/wiki/Biospark CONTACT: eroberts@jhu.eduSupplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
DE 102 - A numerically integrated ephemeris of the moon and planets spanning forty-four centuries
NASA Technical Reports Server (NTRS)
Newhall, X. X.; Standish, E. M.; Willams, J. G.
1983-01-01
It is pointed out that the 1960s were the turning point for the generation of lunar and planetary ephemerides. All previous measurements of the positions of solar system bodies were optical angular measurements. New technological improvements leading to immense changes in observational accuracy are related to developments concerning radar, Viking landers on Mars, and laser ranges to lunar corner cube retroreflectors. Suitable numerical integration techniques and more comprehensive physical models were developed to match the accuracy of the modern data types. The present investigation is concerned with the first integrated ephemeris, DE 102, which covers the entire known span of historical astronomical observations of usable accuracy. The fit is made to modern data. The integration spans the time period from 1411 BC to 3002 AD.
NASA Technical Reports Server (NTRS)
Cardone, V. J.; Pierson, W. J.
1975-01-01
On Skylab, a combination microwave radar-radiometer (S193) made measurements in a tropical hurricane (AVA), a tropical storm, and various extratropical wind systems. The winds at each cell scanned by the instrument were determined by objective numerical analysis techniques. The measured radar backscatter is compared to the analyzed winds and shown to provide an accurate method for measuring winds from space. An operational version of the instrument on an orbiting satellite will be able to provide the kind of measurements in tropical cyclones available today only by expensive and dangerous aircraft reconnaissance. Additionally, the specification of the wind field in the tropical boundary layer should contribute to improved accuracy of tropical cyclone forecasts made with the numerical weather prediction models currently being applied to the tropical atmosphere.
Numerical modeling for an electric-field hyperthermia applicator
NASA Technical Reports Server (NTRS)
Wu, Te-Kao; Chou, C. K.; Chan, K. W.; Mcdougall, J.
1993-01-01
Hyperthermia, in conjunction with radiation and chemotherapy for the treatment of cancers, is an area of current interest. Experiments have shown that hyperthermia can increase the potency of many chemotherapy drugs and the effectiveness of radiation for treating cancer. A combination of whole body or regional hyperthermia with chemotherapy or radiation should improve treatment results. Conventional methods for inducing whole body hyperthermia, such as exposing a patient in a radiant cabinet or under a hot water blanket, conduct heat very slowly from the skin to the body core. Thus a more efficient system, such as the three-plate electric-field hyperthermia applicator (EHA), was developed. This three-plate EHA has one top plate over and two lower plates beneath the patient. It is driven at 27.12 MHz with 500 Watts through a matching circuit. Using this applicator, a 50 kg pig was successfully heated to 42 °C within 45 minutes. However, phantom and animal studies have indicated non-uniform heating near the side of the body. In addition, changes in the size and distance between the electrode plates can affect the heating (or electromagnetic field) pattern. Therefore, numerical models using the method of moments (MOM) or the finite difference time domain (FDTD) technique were developed to optimize the heating pattern of this EHA before it is used for human trials. The accuracy of the numerical modeling is supported by the good agreement between the MOM and FDTD results for the three-plate EHA without a biological body. The versatile FDTD technique is then applied to optimize the EHA design with a human body. Both the numerical and measured data in phantom blocks will be presented. The results of this study will be used to design an optimized system for whole body or regional hyperthermia.
NASA Astrophysics Data System (ADS)
Pandey, Rishi Kumar; Mishra, Hradyesh Kumar
2017-11-01
In this paper, a semi-analytic numerical technique for the solution of the time-space fractional telegraph equation is applied. This numerical technique is based on coupling the homotopy analysis method and the Sumudu transform. It shows a clear advantage over mesh methods such as the finite difference method, and also over polynomial methods such as the perturbation and Adomian decomposition methods. It easily transforms the complex fractional-order derivatives into the simple time domain and interprets the results in the same sense.
Lagrangian analysis of multiscale particulate flows with the particle finite element method
NASA Astrophysics Data System (ADS)
Oñate, Eugenio; Celigueta, Miguel Angel; Latorre, Salvador; Casas, Guillermo; Rossi, Riccardo; Rojek, Jerzy
2014-05-01
We present a Lagrangian numerical technique for the analysis of flows incorporating physical particles of different sizes. The numerical approach is based on the particle finite element method (PFEM) which blends concepts from particle-based techniques and the FEM. The basis of the Lagrangian formulation for particulate flows and the procedure for modelling the motion of small and large particles that are submerged in the fluid are described in detail. The numerical technique for analysis of this type of multiscale particulate flows using a stabilized mixed velocity-pressure formulation and the PFEM is also presented. Examples of application of the PFEM to several particulate flow problems are given.
A numerical projection technique for large-scale eigenvalue problems
NASA Astrophysics Data System (ADS)
Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang
2011-10-01
We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique, used in strongly correlated quantum many-body systems, where first an effective approximate model of smaller complexity is constructed by projecting out high energy degrees of freedom and in turn solving the resulting model by some standard eigenvalue solver. Here we introduce a generalization of this idea, where both steps are performed numerically and which in contrast to the standard projection technique converges in principle to the exact eigenvalues. This approach is not just applicable to eigenvalue problems encountered in many-body systems but also in other areas of research that result in large-scale eigenvalue problems for matrices which have, roughly speaking, mostly a pronounced dominant diagonal part. We will present detailed studies of the approach guided by two many-body models.
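As a rough illustration of the projection idea described above (not the authors' iterative generalization, which converges to the exact eigenvalues), the sketch below builds a sparse symmetric matrix with a pronounced dominant diagonal, keeps only the degrees of freedom with the smallest diagonal entries as the "low-energy" effective model, and diagonalizes the projected block; matrix size, density, and scales are arbitrary choices.

```python
import numpy as np
from scipy.sparse import random as sprandom, diags
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
n = 2000
# Symmetric test matrix with a pronounced dominant diagonal, the regime the paper targets
offdiag = sprandom(n, n, density=5e-3, random_state=0)
H = ((offdiag + offdiag.T) * 0.05 + diags(rng.uniform(0.0, 50.0, n))).tocsr()

# Naive projection: keep the m degrees of freedom with the smallest diagonal
# entries and diagonalize the projected block as an approximate effective model.
m = 200
keep = np.argsort(H.diagonal())[:m]
H_pp = H[keep, :][:, keep].toarray()
approx = np.linalg.eigvalsh(H_pp)[:5]

exact = np.sort(eigsh(H, k=5, which='SA', return_eigenvectors=False))
print("projected block:", np.round(approx, 4))
print("exact (eigsh)  :", np.round(exact, 4))
```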
An improved least cost routing approach for WDM optical network without wavelength converters
NASA Astrophysics Data System (ADS)
Bonani, Luiz H.; Forghani-elahabad, Majid
2016-12-01
The routing and wavelength assignment (RWA) problem has been an attractive problem in optical networks, and consequently several algorithms have been proposed in the literature to solve it. The most common techniques for the dynamic routing subproblem are fixed routing, fixed-alternate routing, and adaptive routing. The first leads to a high blocking probability (BP), and the last entails high computational complexity and requires substantial support from the control and management protocols. The second offers a trade-off between performance and complexity, and hence we consider it for improvement in our work. In fact, considering the RWA problem in a wavelength-routed optical network with no wavelength converters, an improved technique is proposed for the routing subproblem in order to decrease the BP of the network. Based on the fixed-alternate approach, the first k shortest paths (SPs) between each node pair are determined. We then rearrange the SPs according to a newly defined cost for the links and paths. Upon arrival of a connection request, the sorted paths are checked consecutively for an available wavelength according to the most-used technique. We implement our proposed algorithm and the least-hop fixed-alternate algorithm to show how the rearrangement of SPs contributes to a lower BP in the network. The numerical results demonstrate the efficiency of our proposed algorithm in comparison with the others, considering different numbers of available wavelengths.
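A minimal sketch of the fixed-alternate flavor of this approach, assuming a toy four-node topology and plain first-fit wavelength assignment rather than the paper's re-sorted path costs and most-used rule; networkx's shortest_simple_paths supplies the k alternate shortest paths per node pair.

```python
import itertools
import networkx as nx

W = 8  # wavelengths per link (illustrative)

def build_network():
    g = nx.Graph()
    g.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)])
    for u, v in g.edges:
        g[u][v]['in_use'] = set()      # wavelengths currently occupied on the link
    return g

def k_shortest_paths(g, k=3):
    table = {}
    for s, d in itertools.permutations(g.nodes, 2):
        gen = nx.shortest_simple_paths(g, s, d)
        table[(s, d)] = [p for p, _ in zip(gen, range(k))]
    return table

def try_connect(g, paths):
    """Fixed-alternate routing with first-fit wavelength assignment
    (simplified stand-in; the paper re-sorts the alternate paths by a
    link/path cost and assigns wavelengths with the most-used rule)."""
    for path in paths:
        links = list(zip(path, path[1:]))
        for wl in range(W):                      # wavelength-continuity constraint
            if all(wl not in g[u][v]['in_use'] for u, v in links):
                for u, v in links:
                    g[u][v]['in_use'].add(wl)
                return path, wl
    return None                                   # blocked request

g = build_network()
routes = k_shortest_paths(g)
print(try_connect(g, routes[(0, 2)]))
print(try_connect(g, routes[(0, 2)]))
```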
Standoff concealed weapon detection using a 350-GHz radar imaging system
NASA Astrophysics Data System (ADS)
Sheen, David M.; Hall, Thomas E.; Severtsen, Ronald H.; McMakin, Douglas L.; Hatchell, Brian K.; Valdez, Patrick L. J.
2010-04-01
The sub-millimeter (sub-mm) wave frequency band from 300 - 1000 GHz is currently being developed for standoff concealed weapon detection imaging applications. This frequency band is of interest due to the unique combination of high resolution and clothing penetration. The Pacific Northwest National Laboratory (PNNL) is currently developing a 350 GHz, active, wideband, three-dimensional, radar imaging system to evaluate the feasibility of active sub-mm imaging for standoff detection. Standoff concealed weapon and explosive detection is a pressing national and international need for both civilian and military security, as it may allow screening at safer distances than portal screening techniques. PNNL has developed a prototype active wideband 350 GHz radar imaging system based on a wideband, heterodyne, frequency-multiplier-based transceiver system coupled to a quasi-optical focusing system and high-speed rotating conical scanner. This prototype system operates at ranges up to 10+ meters, and can acquire an image in 10 - 20 seconds, which is fast enough to scan cooperative personnel for concealed weapons. The wideband operation of this system provides accurate ranging information, and the images obtained are fully three-dimensional. During the past year, several improvements to the system have been designed and implemented, including increased imaging speed using improved balancing techniques, wider bandwidth, and improved image processing techniques. In this paper, the imaging system is described in detail and numerous imaging results are presented.
NASA Astrophysics Data System (ADS)
Deb, S. K.; Kishtawal, C. M.; Kumar, Prashant; Kiran Kumar, A. S.; Pal, P. K.; Kaushik, Nitesh; Sangar, Ghansham
2016-03-01
The advanced Indian meteorological geostationary satellite INSAT-3D was launched on 26 July 2013 with an improved imager and an infrared sounder and is placed at 82°E over the Indian Ocean region. Advances in the retrieval techniques for different atmospheric parameters, together with the improved imager data, have enhanced the scope for a better understanding of the different tropical atmospheric processes over this region. The retrieval technique and accuracy of one such parameter, Atmospheric Motion Vectors (AMV), have improved significantly with the availability of higher spatial resolution data along with more options of spectral channels in the INSAT-3D imager. The present work is mainly focused on providing brief descriptions of INSAT-3D data and the AMV derivation processes using these data. It also discusses an initial quality assessment of INSAT-3D AMVs for a period of six months from 01 February 2014 to 31 July 2014 against other independent observations: i) Meteosat-7 AMVs available over this region, ii) in-situ radiosonde wind measurements, iii) cloud-tracked winds from the Multi-angle Imaging Spectro-Radiometer (MISR), and iv) numerical model analysis. It is observed from this study that the quality of the newly derived INSAT-3D AMVs is comparable with the existing two versions of Meteosat-7 AMVs over this region. To demonstrate an initial application, INSAT-3D AMVs are assimilated in the Weather Research and Forecasting (WRF) model, and it is found that the assimilation of the newly derived AMVs helped reduce the track forecast errors of the recent cyclonic storm NANAUK over the Arabian Sea. Although the present study is limited to one case, it provides guidance to operational agencies for the implementation of this new AMV dataset for future applications in Numerical Weather Prediction (NWP) over the south Asia region.
NASA Astrophysics Data System (ADS)
Cultrera, Matteo; Boaga, Jacopo; Di Sipio, Eloisa; Dalla Santa, Giorgia; De Seta, Massimiliano; Galgaro, Antonio
2018-05-01
Groundwater tracer tests are often used to improve aquifer characterization, but they present several disadvantages, such as the need to pour solutions or dyes into the aquifer system and alteration of the water's chemical properties. Thus, tracers can affect the groundwater flow mechanics and data interpretation becomes more complex, hindering effective study of ground heat pumps for low enthalpy geothermal systems. This paper presents a preliminary methodology based on a multidisciplinary application of heat as a tracer for defining the main parameters of shallow aquifers. The field monitoring techniques electrical resistivity tomography (ERT) and distributed temperature sensing (DTS) are noninvasive and were applied to a shallow-aquifer test site in northeast Italy. The combination of these measurement techniques supports the definition of the main aquifer parameters and therefore the construction of a reliable conceptual model, which is then described through the numerical code FEFLOW. This model is calibrated with DTS and validated by ERT outcomes. The reliability of the numerical model in terms of fate and transport is thereby enhanced, leading to the potential for better environmental management and protection of groundwater resources through more cost-effective solutions.
NASA Astrophysics Data System (ADS)
Liang, Yabin; Li, Dongsheng; Parvasi, Seyed Mohammad; Kong, Qingzhao; Lim, Ing; Song, Gangbing
2016-09-01
Concrete-encased composite structure is a type of structure that takes the advantages of both steel and concrete materials, showing improved strength, ductility, and fire resistance compared to traditional reinforced concrete structures. The interface between concrete and steel profiles governs the interaction between these two materials under loading; however, debonding damage between the two materials may lead to severe degradation of the load-transfer capacity, which affects the structural performance significantly. In this paper, the electro-mechanical impedance (EMI) technique using piezoceramic transducers was experimentally investigated to detect the bond-slip occurrence of the concrete-encased composite structure. The root-mean-square deviation is used to quantify the variations of the impedance signatures due to the presence of the bond-slip damage. In order to verify the validity of the proposed method, finite element model analysis was performed to simulate the behavior of concrete-steel debonding based on a 3D finite element concrete-steel bond model. The computed impedance signatures from the numerical results are compared with the results obtained from the experimental study, and both the numerical and experimental studies verify the proposed EMI method to detect bond slip of a concrete-encased composite structure.
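A common form of the root-mean-square deviation damage index used with EMI signatures is sketched below on synthetic spectra; the exact normalization and frequency band used in the paper may differ.

```python
import numpy as np

def rmsd_percent(baseline, measured):
    """Root-mean-square deviation (in %) between a baseline impedance
    signature and one measured after possible bond-slip damage."""
    baseline = np.asarray(baseline, float)
    measured = np.asarray(measured, float)
    return 100.0 * np.sqrt(np.sum((measured - baseline) ** 2) / np.sum(baseline ** 2))

# Illustrative use with synthetic real-part impedance spectra
freq = np.linspace(40e3, 400e3, 500)
healthy = 50 + 5 * np.sin(freq / 2e4)
damaged = healthy + 1.5 * np.sin(freq / 1.5e4 + 0.3)   # shifted resonances
print(f"RMSD = {rmsd_percent(healthy, damaged):.2f} %")
```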
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Hixon, Duane; Sankar, L. N.
1993-01-01
During the past two decades, there has been significant progress in the field of numerical simulation of unsteady compressible viscous flows. At present, a variety of solution techniques exist such as the transonic small disturbance analyses (TSD), transonic full potential equation-based methods, unsteady Euler solvers, and unsteady Navier-Stokes solvers. These advances have been made possible by developments in three areas: (1) improved numerical algorithms; (2) automation of body-fitted grid generation schemes; and (3) advanced computer architectures with vector processing and massively parallel processing features. In this work, the GMRES scheme has been considered as a candidate for acceleration of a Newton iteration time marching scheme for unsteady 2-D and 3-D compressible viscous flow calculations; from preliminary calculations, this will provide up to a 65 percent reduction in the computer time requirements over the existing class of explicit and implicit time marching schemes. The proposed method has been tested on structured grids, but is flexible enough for extension to unstructured grids. The described scheme has been tested only on the current generation of vector processor architectures of the Cray Y/MP class, but should be suitable for adaptation to massively parallel machines.
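The flavor of GMRES-accelerated Newton iteration can be shown with a Jacobian-free Newton-Krylov loop on a small stand-in problem (steady 1-D viscous Burgers with assumed boundary values), rather than the authors' 2-D/3-D compressible flow solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def residual(u):
    """Steady 1-D viscous Burgers residual, a small stand-in for the
    compressible Navier-Stokes residual treated in the paper."""
    n = u.size
    h = 1.0 / (n + 1)
    up = np.concatenate(([1.0], u, [-1.0]))            # assumed Dirichlet BCs
    conv = up[1:-1] * (up[2:] - up[:-2]) / (2 * h)      # nonlinear convection
    diff = 0.05 * (up[2:] - 2 * up[1:-1] + up[:-2]) / h**2
    return conv - diff

def newton_gmres(u, tol=1e-10, eps=1e-7):
    """Jacobian-free Newton-Krylov: each Newton step solves J du = -F with
    GMRES, where J*v is approximated by finite differences of the residual."""
    for _ in range(30):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        Jv = lambda v: (residual(u + eps * v) - F) / eps
        J = LinearOperator((u.size, u.size), matvec=Jv)
        du, _ = gmres(J, -F)
        u = u + du
    return u

u = newton_gmres(np.linspace(1.0, -1.0, 63))
print("residual norm:", np.linalg.norm(residual(u)))
```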
NASA Astrophysics Data System (ADS)
Hernández, Daniel; Marangoni, Rafael; Schleichert, Jan; Karcher, Christian; Fröhlich, Thomas; Wondrak, Thomas
2018-03-01
Local Lorentz force velocimetry (local LFV) is a contactless velocity measurement technique for liquid metals. Due to the relative movement between an electrically conductive fluid and a static applied magnetic field, eddy currents and a flow-braking Lorentz force are generated inside the metal melt. This force is proportional to the flow rate or to the local velocity, depending on the volume subset of the flow spanned by the magnetic field. By using small-size magnets, a localized magnetic field distribution is achieved allowing a local velocity assessment in the region adjacent to the wall. In the present study, we describe a numerical model of our experiments at a continuous caster model where the working fluid is GaInSn in eutectic composition. Our main goal is to demonstrate that this electromagnetic technique can be applied to measure vorticity distributions, i.e. to resolve velocity gradients as well. Our results show that by using a cross-shaped magnet system, the magnitude of the torque perpendicular to the surface of the mold significantly increases improving its measurement in a liquid metal flow. According to our numerical model, this torque correlates with the vorticity of the velocity in this direction. Before validating our numerical predictions, an electromagnetic dry calibration of the measurement system composed of a multicomponent force and torque sensor and a cross-shaped magnet was done using a rotating disk made of aluminum. The sensor is able to measure simultaneously all three components of force and torque, respectively. This calibration step cannot be avoided and it is used for an accurate definition of the center of the magnet with respect to the sensor’s coordinate system for torque measurements. Finally, we present the results of the experiments at the mini-LIMMCAST facility showing a good agreement with the numerical model.
NASA Astrophysics Data System (ADS)
Wilson, R. I.; Eble, M. C.
2013-12-01
The U.S. National Tsunami Hazard Mitigation Program (NTHMP) is composed of representatives from coastal states and federal agencies who, under the guidance of NOAA, work together to develop protocols and products to help communities prepare for and mitigate tsunami hazards. Within the NTHMP are several subcommittees responsible for complementary aspects of tsunami assessment, mitigation, education, warning, and response. The Mapping and Modeling Subcommittee (MMS) is composed of state and federal scientists who specialize in tsunami source characterization, numerical tsunami modeling, inundation map production, and warning forecasting. Until September 2012, much of the work of the MMS was authorized through the Tsunami Warning and Education Act, an Act that has since expired but the spirit of which is being adhered to in parallel with reauthorization efforts. Over the past several years, the MMS has developed guidance and best practices for states and territories to produce accurate and consistent tsunami inundation maps for community level evacuation planning, and has conducted benchmarking of numerical inundation models. Recent tsunami events have highlighted the need for other types of tsunami hazard analyses and products for improving evacuation planning, vertical evacuation, maritime planning, land-use planning, building construction, and warning forecasts. As the program responsible for producing accurate and consistent tsunami products nationally, the NTHMP-MMS is initiating a multi-year plan to accomplish the following: 1) Create and build on existing demonstration projects that explore new tsunami hazard analysis techniques and products, such as maps identifying areas of strong currents and potential damage within harbors as well as probabilistic tsunami hazard analysis for land-use planning. 2) Develop benchmarks for validating new numerical modeling techniques related to current velocities and landslide sources. 3) Generate guidance and protocols for the production and use of new tsunami hazard analysis products. 4) Identify multistate collaborations and funding partners interested in these new products. Application of these new products will improve the overall safety and resilience of coastal communities exposed to tsunami hazards.
Kowalski, Wolfgang; Dammer, Markus; Bakczewitz, Frank; Schmitz, Klaus-Peter; Grabow, Niels; Kessler, Olaf
2015-09-01
Drug eluting stents (DES) consist of platform, coating and drug. The platform often is a balloon-expandable bare metal stent made of the CoCr alloy L-605 or stainless steel 316L. The function of the coating, typically a permanent polymer, is to hold and release the drug, which should improve the therapeutic outcome. Before implantation, DES are compressed (crimped) to allow insertion into the human body. During implantation, DES are expanded by balloon inflation. Crimping, as well as expansion, causes high stresses and high strains locally in the DES struts, as well as in the polymer coating. These stresses and strains are important design criteria of DES. Usually, they are calculated numerically by finite element analysis (FEA), but experimental results for validation are hardly available. In this work, the X-ray diffraction (XRD) sin²ψ technique is applied to in-situ determination of the stress conditions of bare metal L-605 stents and Poly-(L-lactide) (PLLA) coated stents. This provides a realistic characterization of the near-surface stress state and a validation option for the numerical FEA. XRD results from terminal stent struts of the bare metal stent show an increasing compressive load stress in the tangential direction with increasing stent expansion. These findings correlate with numerical FEA results. The PLLA coating also bears increasing compressive load stress during expansion. Copyright © 2015 Elsevier Ltd. All rights reserved.
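For orientation, the sin²ψ evaluation reduces to a linear fit of lattice strain against sin²ψ, with the slope converted to stress through X-ray elastic constants; the sketch below uses invented d-spacings and assumed elastic constants, not the paper's L-605 measurements.

```python
import numpy as np

# Illustrative sin^2(psi) evaluation: lattice strain at several psi tilts is
# fitted linearly against sin^2(psi); the slope gives the in-plane stress.
# All numbers below are invented for illustration, not measured stent data.
E, nu = 243e9, 0.29            # assumed elastic constants (Pa)
d0 = 2.047e-10                 # assumed strain-free lattice spacing (m)
psi = np.deg2rad([0, 18, 27, 33, 39, 45])
d = np.array([2.0468, 2.0471, 2.0474, 2.0476, 2.0478, 2.0481]) * 1e-10

strain = (d - d0) / d0
slope, _ = np.polyfit(np.sin(psi) ** 2, strain, 1)
sigma = slope * E / (1 + nu)   # biaxial sin^2(psi) relation
print(f"near-surface stress ~ {sigma/1e6:.0f} MPa")
```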
Kishigami, Satoshi; Bui, Hong-Thuy; Wakayama, Sayaka; Tokunaga, Kenzo; Van Thuan, Nguyen; Hikichi, Takafusa; Mizutani, Eiji; Ohta, Hiroshi; Suetsugu, Rinako; Sata, Tetsutaro; Wakayama, Teruhiko
2007-02-01
Although the somatic cloning technique has been used for numerous applications and for basic research on reprogramming in various species, extremely low success rates have plagued this technique for a decade. Furthermore, in mice, the "clonable" strains have been limited mainly to hybrid F1 strains such as B6D2F1. Recently, we established a new, efficient cloning technique using trichostatin A (TSA), which leads to a 2-5 fold increase in success rates for mouse cloning of B6D2F1 cumulus cells. To further test the validity of this TSA cloning technique, we tried to clone the adult ICR mouse, an outbred strain that has never been directly cloned before. Only when TSA was used did we obtain both male and female cloned mice from cumulus and fibroblast cells of adult ICR mice, with success rates of 4-5%, comparable to the 5-7% of B6D2F1. Thus, the TSA treatment is the first cloning technique to allow us to successfully clone outbred mice, demonstrating that this technique not only improves the success rates of cloning from hybrid strains, but also enables mouse cloning from normally "unclonable" strains.
Reliability enhancement of Navier-Stokes codes through convergence acceleration
NASA Technical Reports Server (NTRS)
Merkle, Charles L.; Dulikravich, George S.
1995-01-01
Methods for enhancing the reliability of Navier-Stokes computer codes through improving convergence characteristics are presented. The improving of these characteristics decreases the likelihood of code unreliability and user interventions in a design environment. The problem referred to as 'stiffness' in the governing equations for propulsion-related flowfields is investigated, particularly in regard to common sources of equation stiffness that lead to convergence degradation of CFD algorithms. Von Neumann stability theory is employed as a tool to study the convergence difficulties involved. Based on the stability results, improved algorithms are devised to ensure efficient convergence in different situations. A number of test cases are considered to confirm a correlation between stability theory and numerical convergence. Examples of turbulent and reacting flow are presented, and a generalized form of the preconditioning matrix is derived to handle these problems, i.e., the problems involving additional differential equations for describing the transport of turbulent kinetic energy, dissipation rate and chemical species. Algorithms for unsteady computations are considered. The extension of the preconditioning techniques and algorithms derived for Navier-Stokes computations to three-dimensional flow problems is discussed. New methods to accelerate the convergence of iterative schemes for the numerical integration of systems of partial differential equations are developed, with a special emphasis on the acceleration of convergence on highly clustered grids.
An Initial Multi-Domain Modeling of an Actively Cooled Structure
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur
1997-01-01
A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR) capability.
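The multigrid convergence acceleration mentioned above can be illustrated with a minimal two-grid correction cycle for a 1-D Poisson problem (weighted-Jacobi smoothing, injection restriction, linear prolongation); this is a generic sketch, not the solver used in the study.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with u(0) = u(1) = 0."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid correction cycle (a minimal stand-in for full multigrid)."""
    u = jacobi(u, f, h, sweeps=3)                                   # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2       # residual of -u'' = f
    rc = r[::2].copy()                                              # restrict by injection
    nc, hc = rc.size, 2 * h
    # solve the coarse-grid error equation -e'' = r exactly
    A = (np.diag(2 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.linspace(0, 1, u.size), np.linspace(0, 1, nc), ec)  # prolong
    return jacobi(u + e, f, h, sweeps=3)                            # post-smooth

n = 129                                             # 2^k + 1 grid points
x = np.linspace(0, 1, n); h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)                    # exact solution sin(pi*x)
u = np.zeros(n)
for cycle in range(10):
    u = two_grid(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```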
New drugs and methods of doping and manipulation.
Thevis, Mario; Kohler, Maxie; Schänzer, Wilhelm
2008-01-01
The issue of doping in sport is multifaceted. New drugs with anabolic properties such as selective androgen receptor modulators, synthetic insulins, blood doping with erythropoietins or homologous and autologous blood transfusions, as well as sample manipulation, have necessitated sensitive, comprehensive and specific detection assays that allow cheats to be identified. New methods based on mass spectrometry, flow cytometry and immunological techniques have been introduced and improved in the past years to support and enhance the antidoping fight. Although numerous approaches are successful and promising, these methods still have some shortcomings.
Improved numerical solutions for chaotic-cancer-model
NASA Astrophysics Data System (ADS)
Yasir, Muhammad; Ahmad, Salman; Ahmed, Faizan; Aqeel, Muhammad; Akbar, Muhammad Zubair
2017-01-01
In the biological sciences, the dynamical system of the cancer model is well known for its sensitivity and chaoticity. The present work provides a detailed computational study of the cancer model by counterbalancing its sensitive dependence on initial conditions and parameter values. The chaotic cancer model is discretized into a system of nonlinear equations that are solved using the well-known Successive Over-Relaxation (SOR) method with proven convergence. This technique makes it possible to solve large systems and provides a more accurate approximation, which is illustrated through tables, time history maps and phase portraits with detailed analysis.
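For reference, a minimal Successive Over-Relaxation iteration on a small diagonally dominant linear system is sketched below; the paper applies SOR to the nonlinear system obtained from the discretized cancer model, which this toy example does not reproduce.

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b (A with nonzero diagonal)."""
    n = b.size
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# Small diagonally dominant test system (illustrative only)
A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])
b = np.array([1.0, 2, 3])
print(sor(A, b), np.linalg.solve(A, b))
```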
On turbulent flows dominated by curvature effects
NASA Technical Reports Server (NTRS)
Cheng, G. C.; Farokhi, S.
1992-01-01
A technique for improving the numerical predictions of turbulent flows with the effect of streamline curvature is developed. Separated flows and the flow in a curved duct are examples of flowfields where streamline curvature plays a dominant role. New algebraic formulations for the eddy viscosity incorporating the k-epsilon turbulence model are proposed to account for various effects of streamline curvature. The loci of flow reversal of the separated flows over various backward-facing steps are employed to test the capability of the proposed turbulence model in capturing the effect of local curvature.
Multiple directed graph large-class multi-spectral processor
NASA Technical Reports Server (NTRS)
Casasent, David; Liu, Shiaw-Dong; Yoneyama, Hideyuki
1988-01-01
Numerical analysis techniques for the interpretation of high-resolution imaging-spectrometer data are described and demonstrated. The method proposed involves the use of (1) a hierarchical classifier with a tree structure generated automatically by a Fisher linear-discriminant-function algorithm and (2) a novel multiple-directed-graph scheme which reduces the local maxima and the number of perturbations required. Results for a 500-class test problem involving simulated imaging-spectrometer data are presented in tables and graphs; 100-percent-correct classification is achieved with an improvement factor of 5.
Treating convection in sequential solvers
NASA Technical Reports Server (NTRS)
Shyy, Wei; Thakur, Siddharth
1992-01-01
The treatment of the convection terms in sequential solvers, a standard procedure found in virtually all pressure-based algorithms, is investigated for computing flow problems with sharp gradients and source terms. Both scalar model problems and the one-dimensional gas dynamics equations have been used to study the various issues involved. Different approaches, including the use of nonlinear filtering techniques and the adoption of TVD-type schemes, have been investigated. Special treatments of the source terms, such as pressure gradients and heat release, have also been devised, yielding insight and improved accuracy of the numerical procedure adopted.
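One representative TVD-type treatment of convection is a MUSCL scheme with a minmod limiter; the sketch below applies it to linear advection of a sharp-gradient profile and is only a generic illustration of the class of schemes investigated, not the authors' solver.

```python
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, c):
    """One step of a MUSCL/minmod TVD scheme for u_t + a u_x = 0 (a > 0),
    periodic domain, CFL number c = a*dt/dx."""
    du_m = u - np.roll(u, 1)               # backward differences
    du_p = np.roll(u, -1) - u              # forward differences
    slope = minmod(du_m, du_p)             # limited slopes
    u_face = u + 0.5 * (1 - c) * slope     # upwind state at each i+1/2 interface
    flux = c * u_face
    return u - (flux - np.roll(flux, 1))

n = 200
x = np.linspace(0, 1, n, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # sharp-gradient initial data
for _ in range(150):
    u = tvd_step(u, c=0.5)
print("min/max stay bounded:", u.min(), u.max())
```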
A Survey of Symplectic and Collocation Integration Methods for Orbit Propagation
NASA Technical Reports Server (NTRS)
Jones, Brandon A.; Anderson, Rodney L.
2012-01-01
Demands on numerical integration algorithms for astrodynamics applications continue to increase. Common methods, like explicit Runge-Kutta, meet the orbit propagation needs of most scenarios, but more specialized scenarios require new techniques to meet both computational efficiency and accuracy needs. This paper provides an extensive survey on the application of symplectic and collocation methods to astrodynamics. Both of these methods benefit from relatively recent theoretical developments, which improve their applicability to artificial satellite orbit propagation. This paper also details their implementation, with several tests demonstrating their advantages and disadvantages.
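As a minimal example of the symplectic class of integrators surveyed, a kick-drift-kick leapfrog applied to the two-body problem keeps the energy error bounded over long propagation spans; units and initial conditions below are arbitrary illustrative choices.

```python
import numpy as np

def leapfrog_orbit(r, v, dt, steps, mu=1.0):
    """Kick-drift-kick leapfrog (a symplectic integrator) for the two-body problem."""
    def accel(r):
        return -mu * r / np.linalg.norm(r) ** 3
    for _ in range(steps):
        v = v + 0.5 * dt * accel(r)
        r = r + dt * v
        v = v + 0.5 * dt * accel(r)
    return r, v

def energy(r, v, mu=1.0):
    return 0.5 * v @ v - mu / np.linalg.norm(r)

r0, v0 = np.array([1.0, 0.0]), np.array([0.0, 1.1])   # mildly eccentric orbit
E0 = energy(r0, v0)
r, v = leapfrog_orbit(r0, v0, dt=0.01, steps=100_000)
print("relative energy error:", abs(energy(r, v) - E0) / abs(E0))
```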
NASA Astrophysics Data System (ADS)
Takahashi, D.; Sawaki, S.; Mu, R.-L.
2016-06-01
A new method for improving the sound insulation performance of double-glazed windows is proposed. This technique uses viscoelastic materials as connectors between the two glass panels to ensure that the appropriate spacing is maintained. An analytical model that makes it possible to discuss the effects of spacing, contact area, and viscoelastic properties of the connectors on the performance in terms of sound insulation is developed. The validity of the model is verified by comparing its results with measured data. The numerical experiments using this analytical model showed the importance of the ability of the connectors to achieve the appropriate spacing and their viscoelastic properties, both of which are necessary for improving the sound insulation performance. In addition, it was shown that the most effective factor is damping: the stronger the damping, the more the insulation performance increases.
Improved optical efficiency of bulk laser amplifiers with femtosecond written waveguides
NASA Astrophysics Data System (ADS)
Bukharin, Mikhail A.; Lyashedko, Andrey; Skryabin, Nikolay N.; Khudyakov, Dmitriy V.; Vartapetov, Sergey K.
2016-04-01
In this paper we propose an improved technique for writing three-dimensional waveguides with direct femtosecond laser inscription technology. The technique allows, to the best of our knowledge for the first time, the production of waveguides with a mode field diameter larger than 200 μm. This result broadens the field of application of femtosecond writing technology to bulk laser schemes and creates an opportunity to develop novel amplifiers with increased efficiency. We propose a novel laser amplifier architecture that combines free-space propagation of a low-divergence signal beam with propagation of the pump radiation inside a femtosecond-written waveguide with a large mode field diameter, confined by the total internal reflection effect. Such a scheme provides constant tight confinement of the pump radiation over the full length of the active laser element (3-10 cm). The novel amplifier architecture was investigated numerically and experimentally in Nd:phosphate glass. Waveguides with a 200 μm mode field diameter were written with a high-frequency femtosecond oscillator. The proposed technique for writing three-dimensional waveguides is based on reducing and compensating the spherical aberration effect by writing in a heat-cumulative regime and dynamically adjusting the pulse energy at different writing depths. It was shown that the written waveguides could increase the optical efficiency of the amplifier up to 4 times compared with corresponding conventional free-space schemes. The novelty of the results lies in the technique for femtosecond writing of waveguides with a large mode field diameter. Their practical relevance lies in the proposed architecture, which allows the optical efficiency of conventional bulk laser schemes, and especially ultrafast pulse laser amplifiers, to be improved by up to 4 times.
An approach to unbiased subsample interpolation for motion tracking.
McCormick, Matthew M; Varghese, Tomy
2013-04-01
Accurate subsample displacement estimation is necessary for ultrasound elastography because of the small deformations that occur and the subsequent application of a derivative operation on local displacements. Many of the commonly used subsample estimation techniques introduce significant bias errors. This article addresses a reduced bias approach to subsample displacement estimation that consists of a two-dimensional windowed-sinc interpolation with numerical optimization. It is shown that a Welch or Lanczos window with a Nelder-Mead simplex or regular-step gradient-descent optimization is well suited for this purpose. Little improvement results from a sinc window radius greater than four data samples. The strain signal-to-noise ratio (SNR) obtained in a uniformly elastic phantom is compared with that of other parabolic and cosine interpolation methods; it is found that the strain SNR is improved over parabolic interpolation from 11.0 to 13.6 in the axial direction and from 0.7 to 1.1 in the lateral direction for an applied 1% axial deformation. The improvement was most significant for small strains and displacement tracking in the lateral direction. This approach does not rely on special properties of the image or similarity function, which is demonstrated by its effectiveness with the application of a previously described regularization technique.
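A one-dimensional sketch of the idea, assuming a synthetic correlation profile: a Lanczos (windowed-sinc) interpolator is maximized with Nelder-Mead to refine the integer-lag peak to subsample precision. The paper's implementation is two-dimensional and also considers a Welch window and regular-step gradient descent.

```python
import numpy as np
from scipy.optimize import minimize

def lanczos_interp(samples, t, a=4):
    """Windowed-sinc (Lanczos, radius a samples) interpolation at real-valued t."""
    n = np.arange(len(samples))
    x = t - n
    w = np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)
    return np.sum(samples * w)

# Synthetic correlation profile whose true peak lies between samples
true_peak = 10.37
lags = np.arange(21)
corr = np.exp(-0.15 * (lags - true_peak) ** 2)

coarse = int(np.argmax(corr))                       # integer-lag estimate
res = minimize(lambda t: -lanczos_interp(corr, t[0]),
               x0=[coarse], method='Nelder-Mead')   # subsample refinement
print("estimated peak:", res.x[0])
```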
Canine and feline blood transfusions: controversies and recent advances in administration practices.
Kisielewicz, Caroline; Self, Ian A
2014-05-01
To discuss and review blood transfusion practices in dogs and cats including collection and storage of blood and administration of products. To report new developments, controversial practices, less conventional blood product administration techniques and where applicable, describe the relevance to anaesthetists and anaesthesia. PubMed and Google Scholar using dog, cat, blood transfusion, packed red blood cells and whole blood as keywords. Blood transfusions improve oxygen carrying capacity and the clinical signs of anaemia. However, there are numerous potential risks and complications associated with transfusions, which may outweigh their benefits. Storage of blood products has improved considerably over time but whilst extended storage times may improve their availability, a phenomenon known as the storage lesion has been identified which affects erythrocyte viability and survival. Leukoreduction involves removing leukocytes and platelets thereby preventing their release of cytokines and bioactive compounds which also contribute to storage lesions and certain transfusion reactions. Newer transfusion techniques are being explored such as cell salvage in surgical patients and subsequent autologous transfusion. Xenotransfusions, using blood and blood products between different species, provide an alternative to conventional blood products. © 2014 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesia and Analgesia.
Evolution of Nickel-titanium Alloys in Endodontics.
Ounsi, Hani F; Nassif, Wadih; Grandini, Simone; Salameh, Ziad; Neelakantan, Prasanna; Anil, Sukumaran
2017-11-01
To improve the clinical use of nickel-titanium (NiTi) endodontic rotary instruments by better understanding the alloys that compose them. A large number of engine-driven NiTi shaping instruments already exists on the market and newer generations are being introduced regularly. While emphasis is placed on design and technique, manufacturers are more discreet about the alloy characteristics that dictate instrument behavior. Along with design and technique, the alloy characteristics of endodontic instruments are among the main variables affecting clinical performance. Modifications to NiTi alloys are numerous and may yield improvements, but also drawbacks. Martensitic instruments seem to display better cyclic fatigue properties at the expense of surface hardness, prompting the need for surface treatments. Conversely, such surface treatments may improve cutting efficiency but are detrimental to the gain in cyclic fatigue resistance. Although the design of the instrument is vital, it should in no way obscure the importance of the properties of the alloy and how they influence the clinical behavior of NiTi instruments. Dentists are mostly clinicians rather than engineers. With the advances in instrumentation design and alloys, they have an obligation to engage more closely with engineering considerations, not only to take advantage of their possibilities but also to acknowledge their limitations.
NASA Astrophysics Data System (ADS)
Liu, Qiang; Chattopadhyay, Aditi
2000-06-01
Aeromechanical stability plays a critical role in helicopter design and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained damping layer (SCL) treatment and composite tailoring is investigated for improved rotor aeromechanical stability using formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface bonded SCLs. A comprehensive theory is used to model the smart box beam. A ground resonance analysis model and an air resonance analysis model are implemented in the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in air resonance analysis under hover condition. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface bonded SCLs for improved damping characteristics. Parameters such as stacking sequence of the composite laminates and placement of SCLs are used as design variables. Detailed numerical studies are presented for aeromechanical stability analysis. It is shown that optimum blade design yields significant increase in rotor lead-lag regressive modal damping compared to the initial system.
NASA Astrophysics Data System (ADS)
Miyagawa, Chihiro; Kobayashi, Takumi; Taishi, Toshinori; Hoshikawa, Keigo
2014-09-01
Based on the growth of 3-inch diameter c-axis sapphire using the vertical Bridgman (VB) technique, numerical simulations were made and used to guide the growth of a 6-inch diameter sapphire. A 2D model of the VB hot-zone was constructed, the seeding interface shape of the 3-inch diameter sapphire as revealed by green laser scattering was estimated numerically, and the temperature distributions of two VB hot-zone models designed for 6-inch diameter sapphire growth were numerically simulated to achieve the optimal growth of large crystals. The hot-zone model with one heater was selected and prepared, and 6-inch diameter c-axis sapphire boules were actually grown, as predicted by the numerical results.
Using phase locking for improving frequency stability and tunability of THz-band gyrotrons
NASA Astrophysics Data System (ADS)
Adilova, Asel B.; Gerasimova, Svetlana A.; Melnikova, Maria M.; Tyshkun, Alexandra V.; Rozhnev, Andrey G.; Ryskin, Nikita M.
2018-04-01
Medium-power (10-100 W) THz-band gyrotrons operating in a continuous-wave (CW) mode are of great importance for many applications such as NMR spectroscopy with dynamic nuclear polarization (DNP/NMR), plasma diagnostics, nondestructive inspection, stand-off detection of radioactive materials, biomedical applications, etc. For all these applications, high frequency stability and tunability within a 1-2 GHz frequency range is typically required. Apart from different existing techniques for frequency stabilization, phase locking has recently attracted strong interest. In this paper, we present the results of theoretical analysis and numerical simulation for several phase locking techniques: (a) phase locking by injection of an external driving signal; (b) mutual phase locking of two coupled gyrotrons; and (c) self-injection locking by a wave reflected from a remote load.
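For intuition about case (a), injection locking is often reduced to the Adler phase equation; the sketch below integrates it for detunings inside and outside the locking band. This is a textbook reduced model chosen here for illustration, not the gyrotron model simulated in the paper.

```python
import numpy as np

def adler_phase(delta_omega, K, dt=1e-3, steps=20000):
    """Integrate the Adler equation d(phi)/dt = delta_omega - K*sin(phi),
    a minimal reduced model of injection locking (an assumption here; the
    paper's gyrotron model is more detailed)."""
    phi = 0.0
    hist = np.empty(steps)
    for i in range(steps):
        phi += dt * (delta_omega - K * np.sin(phi))
        hist[i] = phi
    return hist

K = 1.0
for dw in (0.5, 1.5):              # inside / outside the locking band |dw| <= K
    phi = adler_phase(dw, K)
    drift = (phi[-1] - phi[len(phi) // 2]) / (len(phi) // 2 * 1e-3)
    state = 'locked' if abs(drift) < 1e-2 else 'unlocked'
    print(f"detuning {dw}: mean phase drift {drift:.3f} rad/s ({state})")
```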
Remacha, Clément; Coëtmellec, Sébastien; Brunel, Marc; Lebrun, Denis
2013-02-01
Wavelet analysis provides an efficient tool in numerous signal processing problems and has been implemented in optical processing techniques, such as in-line holography. This paper proposes an improvement of this tool for the case of an elliptical, astigmatic Gaussian (AEG) beam. We show that this mathematical operator allows reconstructing an image of a spherical particle without compression of the reconstructed image, which increases the accuracy of the 3D location of particles and of their size measurement. To validate the performance of this operator we have studied the diffraction pattern produced by a particle illuminated by an AEG beam. This study used mutual intensity propagation, and the particle is defined as a chirped Gaussian sum. The proposed technique was applied and the experimental results are presented.
Direct imaging of isofrequency contours in photonic structures
Regan, E. C.; Igarashi, Y.; Zhen, B.; ...
2016-11-25
The isofrequency contours of a photonic crystal are important for predicting and understanding exotic optical phenomena that are not apparent from high-symmetry band structure visualizations. We demonstrate a method to directly visualize the isofrequency contours of high-quality photonic crystal slabs that show quantitatively good agreement with numerical results throughout the visible spectrum. Our technique relies on resonance-enhanced photon scattering from generic fabrication disorder and surface roughness, so it can be applied to general photonic and plasmonic crystals or even quasi-crystals. We also present an analytical model of the scattering process, which explains the observation of isofrequency contours in our technique. Furthermore, the isofrequency contours provide information about the characteristics of the disorder and therefore serve as a feedback tool to improve fabrication processes.
Bonne, Stephanie L; Turnbull, Isaiah R; Southard, Robert E
2015-06-01
Internal fixation of the ribs has been shown in numerous studies to decrease complications following traumatic rib fractures. Anterior injuries to the chest wall causing cartilaginous fractures, although rare, can cause significant disability and can lead to a variety of complications and, therefore, pose a unique clinical problem. Here, we report the surgical technique used for internal fixation of injuries to the cartilaginous portions of the chest wall in four patients treated at our center. All patients had excellent clinical outcomes and reported improvement in symptoms, with no associated complications. Patients who have injuries to the anterior portions of the chest wall should be considered for internal fixation of the chest wall when the injuries are severe and can lead to clinical disability.
NASA Astrophysics Data System (ADS)
Raghunathan, Raksha; Zhang, Jitao; Wu, Chen; Rippy, Justin; Singh, Manmohan; Larin, Kirill V.; Scarcelli, Giuliano
2017-08-01
Embryogenesis is regulated by numerous changes in mechanical properties of the cellular microenvironment. Thus, studying embryonic mechanophysiology can provide a more thorough perspective of embryonic development, potentially improving early detection of congenital abnormalities as well as evaluating and developing therapeutic interventions. A number of methods and techniques have been used to study cellular biomechanical properties during embryogenesis. While some of these techniques are invasive or involve the use of external agents, others are compromised in terms of spatial and temporal resolutions. We propose the use of Brillouin microscopy in combination with optical coherence tomography (OCT) to measure stiffness as well as structural changes in a developing embryo. While Brillouin microscopy assesses the changes in stiffness among different organs of the embryo, OCT provides the necessary structural guidance.
NASA Technical Reports Server (NTRS)
Lang, Steve; Tao, W.-K.; Simpson, J.; Ferrier, B.; Einaudi, Franco (Technical Monitor)
2001-01-01
Six different convective-stratiform separation techniques, including a new technique that utilizes the ratio of vertical and terminal velocities, are compared and evaluated using two-dimensional numerical simulations of a tropical [Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE)] and midlatitude continental [Preliminary Regional Experiment for STORM-Central (PRESTORM)] squall line. The simulations are made using two different numerical advection schemes: 4th order and positive definite advection. Comparisons are made in terms of rainfall, cloud coverage, mass fluxes, apparent heating and moistening, mean hydrometeor profiles, CFADs (Contoured Frequency with Altitude Diagrams), microphysics, and latent heating retrieval. Overall, it was found that the different separation techniques produced results that qualitatively agreed. However, the quantitative differences were significant. Observational comparisons were unable to conclusively evaluate the performance of the techniques. Latent heating retrieval was shown to be sensitive to the use of separation technique mainly due to the stratiform region for methods that found very little stratiform rain. The midlatitude PRESTORM simulation was found to be nearly invariant with respect to advection type for most quantities while for TOGA COARE fourth order advection produced numerous shallow convective cores and positive definite advection fewer cells that were both broader and deeper penetrating above the freezing level.
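A schematic version of a separation criterion built on the ratio of vertical to terminal velocity is sketched below on synthetic model columns; the 0.5 threshold and the column-wise logic are assumptions for illustration, not the paper's calibrated algorithm.

```python
import numpy as np

def separate_conv_strat(w, vt, ratio_threshold=0.5):
    """Classify model grid columns as convective where the ratio of vertical
    velocity to hydrometeor terminal velocity exceeds a threshold anywhere in
    the column, otherwise stratiform (threshold and logic assumed here)."""
    ratio = np.abs(w) / np.maximum(np.abs(vt), 1e-6)
    return np.where((ratio > ratio_threshold).any(axis=0), 'convective', 'stratiform')

# Synthetic (levels x columns) fields: one strong updraft column among weak ones
nz, nx = 30, 8
w = 0.2 * np.ones((nz, nx)); w[:, 3] = 6.0          # vertical velocity, m/s
vt = 5.0 * np.ones((nz, nx))                        # rain terminal fall speed, m/s
print(separate_conv_strat(w, vt))
```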
NASA Astrophysics Data System (ADS)
Parlangeau, Camille; Lacombe, Olivier; Daniel, Jean-Marc; Schueller, Sylvie
2015-04-01
Inversion of calcite twin data is known to be a powerful tool to reconstruct the past state of stress in carbonate rocks of the crust, especially in fold-and-thrust belts and sedimentary basins. This is of key importance to constrain the results of geomechanical modelling. Without proposing a new inversion scheme, this contribution reports some recent improvements of the most efficient stress inversion technique to date (Etchecopar, 1984), which allows the 5 parameters of the deviatoric paleostress tensors (principal stress orientations and differential stress magnitudes) to be reconstructed from monophase and polyphase twin data sets. The improvements concern, among others, the search for the possible tensors that account for the twin data (twinned and untwinned planes) and the assistance given to the user in defining the best stress tensor solution. We perform a systematic exploration of a hypersphere in 4 dimensions by varying the Euler angles and the stress ratio. We first record all tensors with a minimum penalization function accounting for 20% of the twinned planes. We then define clusters of tensors following a dissimilarity criterion based on the stress distance between the 4 parameters of the reduced stress tensors and a degree of disjunction of the related sets of twinned planes. The percentage of twinned data to be explained by each tensor is then progressively increased and tested using the standard Etchecopar procedure until the best solution that explains the maximum number of twinned planes and the whole set of untwinned planes is reached. This new inversion procedure is tested on monophase and polyphase numerically generated as well as natural calcite twin data in order to define more accurately the ability of the technique to separate more or less similar deviatoric stress tensors applied in sequence to the samples, to test the impact of strain hardening through changes in the critical resolved shear stress for twinning, and to evaluate the possible bias due to measurement uncertainties or clustering of grain optical axes in the samples.
NASA Astrophysics Data System (ADS)
Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.
2017-07-01
Computational efficiency and accuracy of wave-optics-based Monte-Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over the wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of the computational speed.
NASA Technical Reports Server (NTRS)
Dieudonne, J. E.
1978-01-01
A numerical technique was developed which generates linear perturbation models from nonlinear aircraft vehicle simulations. The technique is very general and can be applied to simulations of any system that is described by nonlinear differential equations. The computer program used to generate these models is discussed, with emphasis placed on generation of the Jacobian matrices, calculation of the coefficients needed for solving the perturbation model, and generation of the solution of the linear differential equations. An example application of the technique to a nonlinear model of the NASA terminal configured vehicle is included.
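The core of such a linearization is a finite-difference Jacobian of the nonlinear state equations about an operating point; the sketch below does this for a toy two-state model, not the NASA terminal configured vehicle simulation itself.

```python
import numpy as np

def numerical_jacobians(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx and B = df/du of a nonlinear
    simulation x_dot = f(x, u) about the operating point (x0, u0)."""
    n, m = x0.size, u0.size
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Toy nonlinear point-mass model (illustrative, not the TCV simulation)
def f(x, u):
    v, gamma = x
    thrust = u[0]
    return np.array([thrust - 0.02 * v**2 - 9.81 * np.sin(gamma),
                     (9.81 / max(v, 1.0)) * (np.cos(gamma) - 1.0) + 0.1 * thrust])

A, B = numerical_jacobians(f, np.array([100.0, 0.0]), np.array([2.0]))
print(A, B, sep="\n")
```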
NASA Astrophysics Data System (ADS)
D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice
2018-05-01
In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient as compared with traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics and aimed to be numerically computed. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially in minimizing the leading term of the local truncation error whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
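The exponential (here trigonometric) fitting idea can be seen on a single second-derivative stencil: the fitted denominator depends on the frequency parameter that the adapted method must estimate, and it makes the stencil exact on the prescribed oscillatory solutions. This is a generic illustration of the fitting principle, not the paper's implicit-explicit reaction-diffusion scheme.

```python
import numpy as np

# Trigonometrically fitted second-derivative stencil: the classical central
# difference divides by h**2; the fitted version divides by (2/w**2)*(1 - cos(w*h))
# so that it is exact on cos(w*x) and sin(w*x). The frequency w plays the role of
# the unknown problem-dependent parameter the adapted method must estimate.
w, h, x = 3.0, 0.1, 0.7
u = lambda x: np.cos(w * x)                  # oscillatory test function
exact = -w**2 * np.cos(w * x)

stencil = u(x + h) - 2 * u(x) + u(x - h)
standard = stencil / h**2
fitted = stencil / ((2.0 / w**2) * (1.0 - np.cos(w * h)))

print("standard error:", abs(standard - exact))   # O(h^2) error
print("fitted  error :", abs(fitted - exact))     # exact up to round-off
```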
Fercher, A; Hitzenberger, C; Sticker, M; Zawadzki, R; Karamata, B; Lasser, T
2001-12-03
Dispersive samples introduce a wavelength-dependent phase distortion to the probe beam. This leads to a noticeable loss of depth resolution in high-resolution OCT using broadband light sources. The standard technique to avoid this consequence is to balance the dispersion of the sample by arranging a dispersive material in the reference arm. However, the impact of dispersion is depth dependent, and a corresponding depth-dependent dispersion balancing technique is difficult to implement. Here we present a numerical dispersion compensation technique for Partial Coherence Interferometry (PCI) and Optical Coherence Tomography (OCT) based on numerical correlation of the depth scan signal with a depth-variant kernel. It can be used a posteriori and provides depth-dependent dispersion compensation. Examples of dispersion-compensated depth scan signals obtained from microscope cover glasses are presented.
Shao, Yu; Wang, Shumin
2016-12-01
The numerical simulation of acoustic scattering from elastic objects near a water-sand interface is critical to underwater target identification. Frequency-domain methods are computationally expensive, especially for large-scale broadband problems. A numerical technique is proposed to enable the efficient use of finite-difference time-domain method for broadband simulations. By incorporating a total-field/scattered-field boundary, the simulation domain is restricted inside a tightly bounded region. The incident field is further synthesized by the Fourier transform for both subcritical and supercritical incidences. Finally, the scattered far field is computed using a half-space Green's function. Numerical examples are further provided to demonstrate the accuracy and efficiency of the proposed technique.
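A minimal 1-D pressure-velocity FDTD leapfrog update with a soft source is sketched below to show the time-domain machinery; the paper's total-field/scattered-field boundary, Fourier-synthesized incident field, and half-space Green's function far-field step are omitted from this sketch.

```python
import numpy as np

# Minimal 1-D pressure-velocity FDTD update (staggered leapfrog) with a soft source.
c, rho = 1500.0, 1000.0          # water-like sound speed (m/s) and density (kg/m^3)
dx = 0.01
dt = 0.5 * dx / c                # CFL-stable time step
n = 400
p = np.zeros(n)                  # pressure at integer grid points
u = np.zeros(n - 1)              # particle velocity at half grid points

src = n // 4
for it in range(600):
    u -= dt / (rho * dx) * np.diff(p)              # momentum equation
    p[1:-1] -= rho * c**2 * dt / dx * np.diff(u)   # continuity equation
    p[src] += np.exp(-((it - 60) / 15.0) ** 2)     # soft Gaussian-pulse source

print("peak |p| after 600 steps:", np.abs(p).max())
```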
NASA Technical Reports Server (NTRS)
Baum, J. D.; Levine, J. N.
1980-01-01
The selection of a satisfactory numerical method for calculating the propagation of steep-fronted, shock-like waveforms in a solid rocket motor combustion chamber is discussed. A number of different numerical schemes were evaluated by comparing the results obtained for three problems: the shock tube problem, the linear wave equation, and nonlinear wave propagation in a closed tube. The most promising method, a combination of the Lax-Wendroff, hybrid, and artificial compression techniques, was incorporated into an existing nonlinear instability program. The capability of the modified program to treat steep-fronted wave instabilities in low-smoke tactical motors was verified by solving a number of motor test cases with disturbance amplitudes as high as 80% of the mean pressure.
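For context, the basic Lax-Wendroff step for linear advection is shown below; its overshoot at a steep front is exactly the behavior that motivates blending it with hybrid and artificial compression techniques, as the paper does. This is a generic illustration, not the combustion-chamber code.

```python
import numpy as np

def lax_wendroff_step(u, c):
    """One Lax-Wendroff step for u_t + a u_x = 0 on a periodic grid,
    with Courant number c = a*dt/dx; second-order accurate but oscillatory
    at steep fronts."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)

n = 200
x = np.linspace(0, 1, n, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # steep-fronted waveform
for _ in range(150):
    u = lax_wendroff_step(u, c=0.5)
print("overshoot above 1:", u.max() - 1.0)       # Gibbs-like oscillation
```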
Antibody Microarray for E. coli O157:H7 and Shiga Toxin in Microtiter Plates.
Gehring, Andrew G; Brewster, Jeffrey D; He, Yiping; Irwin, Peter L; Paoli, George C; Simons, Tawana; Tu, Shu-I; Uknalis, Joseph
2015-12-04
Antibody microarray is a powerful analytical technique because of its inherent ability to simultaneously discriminate and measure numerous analytes, making the technique conducive to both the multiplexed detection and identification of bacterial analytes (i.e., whole cells, as well as associated metabolites and/or toxins). We developed a sandwich fluorescent immunoassay combined with a high-throughput, multiwell plate microarray detection format. Inexpensive polystyrene plates were employed containing passively adsorbed, array-printed capture antibodies. During sample reaction, centrifugation was the only strategy found to significantly improve capture, and hence detection, of bacteria (pathogenic Escherichia coli O157:H7) on planar capture surfaces containing printed antibodies, whereas several other sample incubation techniques (e.g., static vs. agitation) had minimal effect. Immobilized bacteria were labeled with a red-orange-fluorescent dye (Alexa Fluor 555) conjugated antibody to allow for quantitative detection of the captured bacteria with a laser scanner. Shiga toxin 1 (Stx1) could be simultaneously detected along with the cells, but none of the agitation techniques employed during incubation improved detection of the relatively small biomolecule. Under optimal conditions, the assay demonstrated limits of detection of ~5.8 × 10⁵ cells/mL and 110 ng/mL for E. coli O157:H7 and Stx1, respectively, in a ~75 min total assay time.
A design approach for improving the performance of single-grid planar retarding potential analyzers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, R. L.; Earle, G. D.
2011-01-15
Planar retarding potential analyzers (RPAs) have a long flight history and have been included on numerous spaceflight missions including Dynamics Explorer, the Defense Meteorological Satellite Program, and the Communications/Navigation Outage Forecast System. RPAs allow for simultaneous measurement of plasma composition, density, temperature, and the component of the velocity vector normal to the aperture plane. Internal conductive grids are used to approximate ideal potential planes within the instrument, but these grids introduce perturbations to the potential map inside the RPA and cause errors in the measurement of the parameters listed above. A numerical technique is presented herein for minimizing these grid errors for a specific mission by varying the depth and spacing of the grid wires. The example mission selected concentrates on plasma dynamics near the sunset terminator in the equatorial region. The international reference ionosphere model is used to discern the average conditions expected for this mission, and a numerical model of the grid-particle interaction is used to choose a grid design that will best fulfill the mission goals.
An Entropy-Based Approach to Nonlinear Stability
NASA Technical Reports Server (NTRS)
Merriam, Marshal L.
1989-01-01
Many numerical methods used in computational fluid dynamics (CFD) incorporate an artificial dissipation term to suppress spurious oscillations and control nonlinear instabilities. The same effect can be accomplished by using upwind techniques, sometimes augmented with limiters to form Total Variation Diminishing (TVD) schemes. An analysis based on numerical satisfaction of the second law of thermodynamics allows many such methods to be compared and improved upon. A nonlinear stability proof is given for discrete scalar equations arising from a conservation law. Solutions to such equations are bounded in the L2 norm if the second law of thermodynamics is satisfied in a global sense over a periodic domain. It is conjectured that an analogous statement is true for discrete equations arising from systems of conservation laws. Analysis and numerical experiments suggest that a more restrictive condition, a positive entropy production rate in each cell, is sufficient to exclude unphysical phenomena such as oscillations and expansion shocks. Construction of schemes which satisfy this condition is demonstrated for linear and nonlinear wave equations and for the one-dimensional Euler equations.
Angus, Simon D.; Piotrowska, Monika Joanna
2014-01-01
Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high-fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent phenomenon of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effective, means of significantly improving clinical efficacy. PMID:25460164
The Temporal Morphology of Infrasound Propagation
NASA Astrophysics Data System (ADS)
Drob, Douglas P.; Garcés, Milton; Hedlin, Michael; Brachet, Nicolas
2010-05-01
Expert knowledge suggests that the performance of automated infrasound event association and source location algorithms could be greatly improved by the ability to continually update station travel-time curves to properly account for the hourly, daily, and seasonal changes of the atmospheric state. With the goal of reducing false alarm rates and improving network detection capability, we endeavor to develop, validate, and integrate this capability into infrasound processing operations at the International Data Centre of the Comprehensive Nuclear Test-Ban Treaty Organization. Numerous studies have demonstrated that incorporation of hybrid ground-to-space (G2S) environmental specifications in numerical calculations of infrasound signal travel time and azimuth deviation yields significantly improved results over that of climatological atmospheric specifications, specifically for tropospheric and stratospheric modes. A robust infrastructure currently exists to generate hybrid G2S vector spherical harmonic coefficients, based on existing operational and empirical models on a real-time basis (every 3 to 6 hours) (Drob et al., 2003). Thus the next requirement in this endeavor is to refine numerical procedures to calculate infrasound propagation characteristics for robust automatic infrasound arrival identification and network detection, location, and characterization algorithms. We present results from a new code that integrates the local (range-independent) τp ray equations to provide travel time, range, turning point, and azimuth deviation for any location on the globe given a G2S vector spherical harmonic coefficient set. The code employs an accurate numerical technique capable of handling square-root singularities. We investigate the seasonal variability of propagation characteristics over a five-year time series for two different stations within the International Monitoring System with the aim of understanding the capabilities of current working knowledge of the atmosphere and infrasound propagation models. The statistical behaviors or occurrence frequency of various propagation configurations are discussed. Representative examples of some of these propagation configuration states are also shown.
Design and Hardware Implementation of a New Chaotic Secure Communication Technique
Xiong, Li; Lu, Yan-Jun; Zhang, Yong-Fang; Zhang, Xin-Guo; Gupta, Parag
2016-01-01
In this paper, a scheme for chaotic modulation secure communication is proposed based on chaotic synchronization of an improved Lorenz system. For the first time, the intensity limit and stability of the transmitted signal, the characteristics of broadband and the requirements for accuracy of electronic components are presented by Multisim simulation. In addition, some improvements are made on the measurement method and the proposed experimental circuit in order to facilitate the experiments of chaotic synchronization, chaotic non-synchronization, experiment without signal and experiment with signal. To illustrate the effectiveness of the proposed scheme, some numerical simulations are presented. Then, the proposed chaotic secure communication circuit is implemented through analog electronic circuit, which is characterized by its high accuracy and good robustness. PMID:27548385
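As a hedged illustration of the chaotic-masking idea (using the classic Lorenz equations rather than the paper's improved system or its analog circuit), a drive-response pair with diffusive coupling on the transmitted variable recovers a small message once the receiver synchronizes:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: chaotic-masking communication with the classic Lorenz system.
# The paper uses an *improved* Lorenz system realized in analog hardware; this only shows
# the synchronization-based recovery principle.
sigma, rho, beta, k = 10.0, 28.0, 8.0 / 3.0, 20.0

def message(t):
    return 0.05 * np.sin(2 * np.pi * 0.5 * t)     # small-amplitude information signal

def coupled(t, s):
    x, y, z, xr, yr, zr = s
    tx = x + message(t)                           # transmitted signal = chaos + message
    dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
    # Receiver: same dynamics driven toward the transmitted signal (diffusive coupling).
    dxr = sigma * (yr - xr) + k * (tx - xr)
    dyr = xr * (rho - zr) - yr
    dzr = xr * yr - beta * zr
    return [dx, dy, dz, dxr, dyr, dzr]

t = np.linspace(0, 40, 8000)
sol = solve_ivp(coupled, (0, 40), [1, 1, 1, -5, 3, 10], t_eval=t, rtol=1e-8)
recovered = (sol.y[0] + message(t)) - sol.y[3]    # tx - xr ≈ message once synchronized
```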
Regularization iteration imaging algorithm for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao
2018-03-01
The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstruction images, in which the image reconstruction task is converted into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, in which the fast-iterative shrinkage thresholding algorithm is introduced to accelerate the convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving the reconstruction precision and robustness.
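The fast iterative shrinkage-thresholding step mentioned above can be sketched in a few lines. The example below solves a generic sparse least-squares problem with a random stand-in sensitivity matrix; the actual ECT forward model, the low-rank term, and the split Bregman wrapper are not reproduced.

```python
import numpy as np

# Hedged sketch of FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1, the accelerated
# shrinkage-thresholding building block named in the abstract. A is a stand-in matrix.

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(80, 200))
x_true = np.zeros(200); x_true[rng.choice(200, 8, replace=False)] = 1.0
x_hat = fista(A, A @ x_true, lam=0.1)             # recovers the sparse support
```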
NASA Technical Reports Server (NTRS)
Riley, Christopher J.
1993-01-01
An engineering inviscid-boundary layer method has been modified for application to slender three-dimensional (3-D) forebodies which are characteristic of transatmospheric vehicles. An improved shock description in the nose region has been added to the inviscid technique which allows the calculation of a wider range of body geometries. The modified engineering method is applied to the perfect gas solution over a slender 3-D configuration at angle of attack. The method predicts surface pressures and laminar heating rates on the windward side of the vehicle that compare favorably with numerical solutions of the thin-layer Navier-Stokes equations. These improvements extend the 3-D capabilities of the engineering method and significantly increase its design applications.
Gradient pattern analysis applied to galaxy morphology
NASA Astrophysics Data System (ADS)
Rosa, R. R.; de Carvalho, R. R.; Sautter, R. A.; Barchi, P. H.; Stalder, D. H.; Moura, T. C.; Rembold, S. B.; Morell, D. R. F.; Ferreira, N. C.
2018-06-01
Gradient pattern analysis (GPA) is a well-established technique for measuring gradient bilateral asymmetries of a square numerical lattice. This paper introduces an improved version of GPA designed for galaxy morphometry. We show the performance of the new method on a selected sample of 54 896 objects from the SDSS-DR7 in common with Galaxy Zoo 1 catalogue. The results suggest that the second gradient moment, G2, has the potential to dramatically improve over more conventional morphometric parameters. It separates early- from late-type galaxies better (˜ 90 per cent) than the CAS system (C˜ 79 per cent, A˜ 50 per cent, S˜ 43 per cent) and a benchmark test shows that it is applicable to hundreds of thousands of galaxies using typical processing systems.
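A toy version of a gradient bilateral-asymmetry statistic on a square lattice is sketched below. The exact definition of the second gradient moment G2 used for galaxy morphometry differs; this only illustrates comparing each gradient vector with its point-reflected counterpart.

```python
import numpy as np

# Hedged toy sketch in the spirit of GPA, not the paper's G2 estimator: a symmetric image
# gives a value near 0, an asymmetric (clumpy) one gives a larger value.

def gradient_asymmetry(image):
    gy, gx = np.gradient(image.astype(float))
    gx_m, gy_m = gx[::-1, ::-1], gy[::-1, ::-1]        # field at the point-reflected pixel
    residual = np.hypot(gx + gx_m, gy + gy_m)          # zero if mirrored vector is opposite
    norm = np.hypot(gx, gy) + np.hypot(gx_m, gy_m) + 1e-12
    return float(np.mean(residual / norm))

rng = np.random.default_rng(0)
yy, xx = np.mgrid[-32:32, -32:32]
smooth_galaxy = np.exp(-(xx**2 + yy**2) / 200.0)       # symmetric "early-type" toy image
clumpy_galaxy = smooth_galaxy + 0.2 * rng.random((64, 64))
print(gradient_asymmetry(smooth_galaxy), gradient_asymmetry(clumpy_galaxy))
```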
NASA Technical Reports Server (NTRS)
Rahmat-Samii, Yahya
1986-01-01
Both offset and symmetric Cassegrain reflector antennas are used in satellite and ground communication systems. It is known that the subreflector diffraction can degrade the performance of these reflectors. A geometrical theory of diffraction/physical optics analysis technique is used to investigate the effects of the extended subreflector, beyond its optical rim, on the reflector efficiency and far-field patterns. Representative numerical results are shown for an offset Cassegrain reflector antenna with different feed illumination tapers and subreflector extensions. It is observed that for subreflector extensions as small as one wavelength, noticeable improvements in the overall efficiencies can be expected. Useful design data are generated for the efficiency curves and far-field patterns.
Density-matrix-based algorithm for solving eigenvalue problems
NASA Astrophysics Data System (ADS)
Polizzi, Eric
2009-03-01
A fast and stable numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques and takes its inspiration from the contour integration and density-matrix representation in quantum mechanics. It will be shown that this algorithm—named FEAST—exhibits high efficiency, robustness, accuracy, and scalability on parallel architectures. Examples from electronic structure calculations of carbon nanotubes are presented, and numerical performances and capabilities are discussed.
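The core idea, an approximate spectral projector built by quadrature of the resolvent along a contour followed by a Rayleigh-Ritz step, can be sketched as follows. This is a bare-bones illustration, not the production FEAST solver, which adds careful subspace sizing, convergence control, and efficient linear solves.

```python
import numpy as np

# Hedged sketch of the contour-integration idea behind FEAST for a real symmetric matrix:
# filter a random subspace with a quadrature approximation of the spectral projector onto
# the eigenvalues inside a circle on [emin, emax], then Rayleigh-Ritz on the filtered basis.

def feast_like(A, emin, emax, m0=10, n_quad=8, n_refine=3):
    n = A.shape[0]
    r, c = 0.5 * (emax - emin), 0.5 * (emax + emin)    # contour radius and centre
    theta = np.pi * (np.arange(n_quad) + 0.5) / n_quad  # midpoint nodes on the upper half-circle
    Y = np.random.default_rng(0).normal(size=(n, m0))
    for _ in range(n_refine):
        Q = np.zeros((n, m0))
        for th in theta:
            z = c + r * np.exp(1j * th)
            S = np.linalg.solve(z * np.eye(n) - A, Y)   # resolvent applied to the subspace
            Q += np.real(r * np.exp(1j * th) * S) / n_quad
        Q, _ = np.linalg.qr(Q)                          # orthonormal filtered basis
        w, V = np.linalg.eigh(Q.T @ A @ Q)              # Rayleigh-Ritz step
        Y = Q @ V
    inside = (w > emin) & (w < emax)
    return w[inside], Y[:, inside]

A = np.diag(np.arange(1.0, 21.0)) + 0.01 * np.ones((20, 20))
vals, vecs = feast_like(A, 4.5, 9.5)                    # eigenvalues inside the search interval
```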
Material parameter measurements at high temperatures
NASA Technical Reports Server (NTRS)
Dominek, A.; Park, A.; Peters, L., Jr.
1988-01-01
Alternate fixtures and techniques for the measurement of the constitutive material parameters at elevated temperatures are presented. The technique utilizes scattered field data from material coated cylinders between parallel plates or material coated hemispheres over a finite size groundplane. The data acquisition is centered around the HP 8510B Network Analyzer. The parameters are then found from a numerical search algorithm using the Newton-Raphson technique with the measured and calculated fields from these canonical scatterers. Numerical and experimental results are shown.
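A generic version of such a search, a multivariate Newton-Raphson iteration with a forward-difference Jacobian driving the mismatch between measured and computed observables to zero, is sketched below; the forward `model` is a hypothetical stand-in for the canonical-scatterer field calculations.

```python
import numpy as np

# Hedged sketch of a Newton-Raphson parameter search. `model` returns residuals between
# hypothetical "measured" and computed observables as a function of the material parameters.

def model(p):
    eps_r, mu_r = p
    return np.array([eps_r * mu_r - 6.0, eps_r - 2.0 * mu_r])   # toy residuals (assumed)

def newton_raphson(f, p0, tol=1e-10, max_iter=50, h=1e-6):
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = f(p)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, p.size))
        for j in range(p.size):                        # forward-difference Jacobian
            dp = np.zeros_like(p); dp[j] = h
            J[:, j] = (f(p + dp) - r) / h
        p = p - np.linalg.solve(J, r)                  # Newton update
    return p

print(newton_raphson(model, [1.0, 1.0]))               # converges to eps_r≈3.46, mu_r≈1.73
```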
Coviello, Joseph Paul; Kakar, Rumit Singh; Reynolds, Timothy James
2017-02-01
While there is limited evidence supporting the use of soft tissue mobilization techniques for Subacromial Pain Syndrome (SAPS), synonymous with subacromial impingement syndrome, previous studies have reported successful outcomes using soft tissue mobilization as a treatment technique. The purpose of this case report is to document the results of Instrument-Assisted Soft Tissue Mobilization (IASTM) for the treatment of SAPS. Diagnosis was reached based on the subject's history, tenderness to palpation, and four out of five positive tests in the diagnostic cluster. Treatment consisted of three visits where the IASTM technique was applied to the pectoral muscles as well as periscapular musculature followed by retesting pain-free shoulder flexion active range of motion (AROM) and Numerical Pain Rating Scale (NPRS) during active shoulder flexion. Scapulothoracic mobilization and stretching were performed after AROM measurement. The subject reported an NPRS of 0/10 and demonstrated improvements in pain free flexion AROM in each of the three treatment sessions post-IASTM: 85° to 181°, 110° to 171°, and 163° to 174°, with some carryover in pain reduction and pain free AROM to the next treatment. Through three treatments, the DASH score improved by 17.34%, the Penn Shoulder Score improved by 29%, worst NPRS decreased from 4/10 to 0/10, and the GROC score was 6. IASTM may have a beneficial acute effect on pain free shoulder flexion. In conjunction with scapulothoracic mobilizations and stretching, IASTM may improve function, decrease pain, and improve patient satisfaction. While this technique will not ameliorate the underlying pathomechanics contributing to SAPS, it may serve as a valuable tool to restore ROM and decrease pain allowing the patient to reap the full benefits of a multi-modal treatment approach. Level of evidence: 5.
Improve Data Mining and Knowledge Discovery Through the Use of MatLab
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Martin, Dawn (Elliott); Beil, Robert
2011-01-01
Data mining is widely used to mine business, engineering, and scientific data. Data mining uses pattern based queries, searches, or other analyses of one or more electronic databases/datasets in order to discover or locate a predictive pattern or anomaly indicative of system failure, criminal or terrorist activity, etc. There are various algorithms, techniques and methods used to mine data, including neural networks, genetic algorithms, decision trees, nearest neighbor method, rule induction association analysis, slice and dice, segmentation, and clustering. These algorithms, techniques, and methods used to detect patterns in a dataset have been used in the development of numerous open source and commercially available products and technology for data mining. Data mining is best realized when latent information in a large quantity of data stored is discovered. No one technique solves all data mining problems; challenges are to select algorithms or methods appropriate to strengthen data/text mining and trending within given datasets. In recent years, throughout industry, academia and government agencies, thousands of data systems have been designed and tailored to serve specific engineering and business needs. Many of these systems use databases with relational algebra and structured query language to categorize and retrieve data. In these systems, data analyses are limited and require prior explicit knowledge of metadata and database relations; lacking exploratory data mining and discoveries of latent information. This presentation introduces MatLab(R) (MATrix LABoratory), an engineering and scientific data analysis tool to perform data mining. MatLab was originally intended to perform purely numerical calculations (a glorified calculator). Now, in addition to having hundreds of mathematical functions, it is a programming language with hundreds of built-in standard functions and numerous available toolboxes. MatLab's ease of data processing, visualization and its enormous availability of built-in functionalities and toolboxes make it suitable to perform numerical computations and simulations as well as a data mining tool. Engineers and scientists can take advantage of the readily available functions/toolboxes to gain wider insight in their respective data mining experiments.
NASA Technical Reports Server (NTRS)
Reese, O. W.
1972-01-01
The numerical calculation is described of the steady-state flow of electrons in an axisymmetric, spherical, electrostatic collector for a range of boundary conditions. The trajectory equations of motion are solved alternately with Poisson's equation for the potential field until convergence is achieved. A direct (noniterative) numerical technique is used to obtain the solution to Poisson's equation. Space charge effects are included for initial current densities as large as 100 A/sq cm. Ways of dealing successfully with the difficulties associated with these high densities are discussed. A description of the mathematical model, a discussion of numerical techniques, results from two typical runs, and the FORTRAN computer program are included.
Locating CVBEM collocation points for steady state heat transfer problems
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.
Numerical approximations for fractional diffusion equations via a Chebyshev spectral-tau method
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Ezz-Eldien, Samer S.
2013-10-01
In this paper, a class of fractional diffusion equations with variable coefficients is considered. An accurate and efficient spectral tau technique for solving the fractional diffusion equations numerically is proposed. This method is based upon Chebyshev tau approximation together with the Chebyshev operational matrix of Caputo fractional differentiation. Such an approach has the advantage of reducing the problem to the solution of a system of algebraic equations, which may then be solved by any standard numerical technique. We apply this general method to solve four specific examples. In each of the examples considered, the numerical results show that the proposed method is of high accuracy and is efficient for solving the time-dependent fractional diffusion equations.
Vas, Lakshmi; Pai, Renuka; Khandagale, Nishigandha; Pattnaik, Manorama
2014-01-01
We report a new technique for pulsed radiofrequency (PRF) of the entire nerve supply of the knee as an option in treating osteoarthritis (OA) of the knee. We targeted both sensory and motor nerves supplying all the structures around the knee: joint, muscles, and skin to address the entire nociception and stiffness leading to peripheral and central sensitization in osteoarthritis. Ten patients with pain, stiffness, and loss of function in both knees were treated with ultrasonography (USG) guided PRF of saphenous, tibial, and common peroneal nerves along with subsartorial, peripatellar, and popliteal plexuses. USG guided PRF of the femoral nerve was also done to address the innervation of the quadriceps muscle. Assessments of pain (Numerical Rating Scale [NRS], pain DETECT) and knee function (Western Ontario and McMaster Universities Osteoarthritis Index [WOMAC]) were documented pre and post PRF at 3 and 6 months. Knee radiographs (Kellgren-Lawrence [K-L] grading) were done before PRF and one week later. All the patients showed a sustained improvement of NRS, pain DETECT, and WOMAC at 3 and 6 months. The significant improvement of patellar position and tibio-femoral joint space was concordant with the patients' reporting of improvement in stiffness and pain. The sustained pain relief and muscle relaxation enabled the patients to optimize physiotherapy, thereby improving endurance training to include the daily activities of life. We conclude that OA knee pain is a product of neuromyopathy and that PRF of the sensory and motor nerves appeared to be a safe, effective, and minimally invasive technique. The reduction of pain and stiffness improved the knee function and probably reduced the peripheral and central sensitization.
NASA Astrophysics Data System (ADS)
Bergado, D. T.; Long, P. V.; Chaiyaput, S.; Balasubramaniam, A. S.
2018-04-01
Soft ground improvement techniques have become the most practical and popular methods to increase soil strength and stiffness and to reduce soil compressibility, including for the soft Bangkok clay. This paper focuses on comparative performances of prefabricated vertical drain (PVD) using surcharge, vacuum and heat preloading as well as the cement-admixed clay of Deep Cement Mixing (DCM) and Stiffened DCM (SDCM) methods. The Vacuum-PVD can increase the horizontal coefficient of consolidation, Ch, resulting in a faster rate of settlement at the same magnitudes of settlement compared to Conventional PVD. Several field methods of applying vacuum preloading are also compared. Moreover, the Thermal PVD and Thermal Vacuum PVD can further increase the coefficient of horizontal consolidation, Ch, with the associated reduction of kh/ks values by reducing the drainage retardation effects in the smear zone around the PVD which resulted in faster rates of consolidation and higher magnitudes of settlements. Furthermore, the equivalent smear effect due to non-uniform consolidation is also discussed in addition to the smear due to the mechanical installation of PVDs. In addition, a new kind of reinforced deep mixing method, namely the Stiffened Deep Cement Mixing (SDCM) pile, is introduced to improve the flexural resistance, improve the field quality control, and prevent unexpected failures of the Deep Cement Mixing (DCM) pile. The SDCM pile consists of a DCM pile reinforced with the insertion of a precast reinforced concrete (RC) core. The full scale test embankment on soft clay improved by SDCM and DCM piles was also analysed. Numerical simulations using the 3D PLAXIS Foundation finite element software have been done to understand the behavior of SDCM and DCM piles. The simulation results indicated that the surface settlements decreased with increasing lengths of the RC cores, and, to a lesser extent, with increasing sectional areas of the RC cores in the SDCM piles. In addition, the lateral movements decreased by increasing the lengths (longer than 4 m) and the sectional areas of the RC cores in the SDCM piles. The results of the numerical simulations closely agreed with the observed data and successfully verified the parameters affecting the performances and behavior of both SDCM and DCM piles.
Preliminary Evidence Supports Modification of Retraction Technique to Prevent Needlestick Injuries
Fa, Bernadette Alvear; Cuny, Eve
2016-01-01
A modified retraction technique was introduced into the DDS degree preclinical anesthesia course in 2011 with the goal of reducing needlestick exposure incidents. In numerous studies of dental exposures, injuries from dental anesthetic needles account for the highest proportion of all exposures. The purpose of this study was to assess the preliminary impact of a modified retraction technique on the incidence of blood and body fluids (BBF) exposure incidents associated with needles during injection. Data from evaluations of students from 2014 and 2015 were obtained and tracked to determine whether the modified retraction technique was “excellent,” “clinically acceptable,” or “clinically unacceptable.” Data were collected to determine if the patient perceived the modified retraction technique as “comfortable” or “correctable when addressed” to help improve student technique for future injections. Likewise, data from the blood-borne exposure database where all information related to BBF exposures is recorded were reviewed and the information separated by year and class. This study presents preliminary data only and because of the small sample size does not lend itself to validation by statistical analysis. However, the technique effectively removes the operator's hand from the field during injection, reducing the risk of accidental intraoral needlestick to the nondominant hand of the operator. PMID:27973940
DNA-based cryptographic methods for data hiding in DNA media.
Marwan, Samiha; Shawish, Ahmed; Nagaty, Khaled
2016-12-01
Information security can be achieved using cryptography, steganography or a combination of them, where data is firstly encrypted using any of the available cryptography techniques and then hid into any hiding medium. Recently, the famous genomic DNA has been introduced as a hiding medium, known as DNA steganography, due to its notable ability to hide huge data sets with a high level of randomness and hence security. Despite the numerous cryptography techniques, to our knowledge only the vigenere cipher and the DNA-based playfair cipher have been combined with the DNA steganography, which keeps space for investigation of other techniques and coming up with new improvements. This paper presents a comprehensive analysis between the DNA-based playfair, vigenere, RSA and the AES ciphers, each combined with a DNA hiding technique. The conducted analysis reports the performance diversity of each combined technique in terms of security, speed, hiding capacity in addition to both key size and data size. Moreover, this paper proposes a modification of the current combined DNA-based playfair cipher technique, which makes it not only simple and fast but also provides a significantly higher hiding capacity and security. The conducted extensive experimental studies confirm such outstanding performance in comparison with all the discussed combined techniques. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Computational method for analysis of polyethylene biodegradation
NASA Astrophysics Data System (ADS)
Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro
2003-12-01
In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into analysis of our model. We first establish its mathematical foundation in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to solve the inverse problem to determine the consumption rate and the β-oxidation rate numerically, and illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks cultivation of a fungus, Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors of the polyethylene biodegradation posed in modeling are indeed appropriate.
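The linear-system structure described above can be sketched as a small compartment model in which each molecular-weight class loses weight to direct consumption and to β-oxidation, the latter transferring weight to the next lighter class. All rate constants below are assumed for illustration, not the values fitted to the GPC data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch of a linear compartment model in the spirit of the abstract. The class
# index runs from light (0) to heavy (n_classes-1); rates are hypothetical.
n_classes = 50
beta = 0.05 * np.ones(n_classes)        # beta-oxidation rates per class (assumed)
consume = np.zeros(n_classes)
consume[:5] = 0.2                       # only the lightest classes are taken up directly

def rhs(t, w):
    dw = -(beta + consume) * w          # losses from each class
    dw[:-1] += beta[1:] * w[1:]         # gain in class i-1 from oxidation of class i
    return dw

# Initial GPC-like weight distribution centred on the heavier classes.
w0 = np.exp(-0.5 * ((np.arange(n_classes) - 35) / 6.0) ** 2)
sol = solve_ivp(rhs, (0.0, 5.0), w0, t_eval=np.linspace(0, 5, 6))
print(sol.y[:, -1].sum() / w0.sum())    # fraction of total weight remaining after 5 weeks
```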
NASA Technical Reports Server (NTRS)
Gallardo, V. C.; Storace, A. S.; Gaffney, E. F.; Bach, L. J.; Stallone, M. J.
1981-01-01
The component element method was used to develop a transient dynamic analysis computer program which is essentially based on modal synthesis combined with a central, finite difference, numerical integration scheme. The methodology leads to a modular or building-block technique that is amenable to computer programming. To verify the analytical method, turbine engine transient response analysis (TETRA) was applied to two blade-out test vehicles that had been previously instrumented and tested. Comparison of the time dependent test data with those predicted by TETRA led to recommendations for refinement or extension of the analytical method to improve its accuracy and overcome its shortcomings. The development of working equations, their discretization, numerical solution scheme, the modular concept of engine modelling, the program logical structure and some illustrated results are discussed. The blade-loss test vehicles (rig and full engine), the type of measured data, and the engine structural model are described.
Efficient Schmidt number scaling in dissipative particle dynamics
NASA Astrophysics Data System (ADS)
Krafnick, Ryan C.; García, Angel E.
2015-12-01
Dissipative particle dynamics is a widely used mesoscale technique for the simulation of hydrodynamics (as well as immersed particles) utilizing coarse-grained molecular dynamics. While the method is capable of describing any fluid, the typical choice of the friction coefficient γ and dissipative force cutoff rc yields an unacceptably low Schmidt number Sc for the simulation of liquid water at standard temperature and pressure. There are a variety of ways to raise Sc, such as increasing γ and rc, but the relative cost of modifying each parameter (and the concomitant impact on numerical accuracy) has heretofore remained undetermined. We perform a detailed search over the parameter space, identifying the optimal strategy for the efficient and accuracy-preserving scaling of Sc, using both numerical simulations and theoretical predictions. The composite results recommend a parameter choice that leads to a speed improvement of a factor of three versus previously utilized strategies.
Global stability of a multiple infected compartments model for waterborne diseases
NASA Astrophysics Data System (ADS)
Wang, Yi; Cao, Jinde
2014-10-01
In this paper, mathematical analysis is carried out for a multiple infected compartments model for waterborne diseases, such as cholera, giardia, and rotavirus. The model accounts for both person-to-person and water-to-person transmission routes. Global stability of the equilibria is studied. In terms of the basic reproduction number R0, we prove that, if R0⩽1, then the disease-free equilibrium is globally asymptotically stable and the infection always disappears; whereas if R0>1, there exists a unique endemic equilibrium which is globally asymptotically stable for the corresponding fast-slow system. Numerical simulations verify our theoretical results and show that the decay rate of waterborne pathogens has a significant impact on the epidemic growth rate. Also, we observe numerically that the unique endemic equilibrium is globally asymptotically stable for the whole system. This statement indicates that the present method needs to be improved by other techniques.
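A single-infected-compartment (SIWR-type) special case of such a model, with both transmission routes and an illustrative next-generation-matrix expression for R0, can be sketched as follows; parameter values are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: a single-infected-compartment waterborne (SIWR-type) model, in population
# fractions. The paper analyses a multiple-infected-compartments generalisation; the
# parameter values below are purely illustrative.
mu, gamma = 1.0 / (60 * 365), 1.0 / 5.0     # birth/death and recovery rates per day (assumed)
beta_i, beta_w = 0.25, 0.15                 # direct and waterborne transmission rates (assumed)
xi, nu = 0.1, 0.2                           # shedding into water and pathogen decay (assumed)

def rhs(t, y):
    S, I, W, R = y
    new_inf = (beta_i * I + beta_w * W) * S
    return [mu - new_inf - mu * S,
            new_inf - (gamma + mu) * I,
            xi * I - nu * W,
            gamma * I - mu * R]

# Next-generation-matrix reduction for this special case (disease-free state S0 = 1).
R0 = (beta_i + beta_w * xi / nu) / (gamma + mu)
sol = solve_ivp(rhs, (0.0, 2000.0), [0.99, 0.01, 0.0, 0.0], max_step=1.0)
print(R0, sol.y[1, -1])                     # infection persists if R0 > 1, dies out otherwise
```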
Review of Computational Stirling Analysis Methods
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.
2004-01-01
Nuclear thermal to electric power conversion carries the promise of longer duration missions and higher scientific data transmission rates back to Earth for both Mars rovers and deep space missions. A free-piston Stirling convertor is a candidate technology that is considered an efficient and reliable power conversion device for such purposes. While already very efficient, it is believed that better Stirling engines can be developed if the losses inherent in its current designs could be better understood. However, they are difficult to instrument and so efforts are underway to simulate a complete Stirling engine numerically. This has only recently been attempted and a review of the methods leading up to and including such computational analysis is presented. Finally, it is proposed that the quality and depth of Stirling loss understanding may be improved by utilizing the higher fidelity and efficiency of recently developed numerical methods. One such method, the Ultra HI-FI technique, is presented in detail.
Numerical prediction of algae cell mixing feature in raceway ponds using particle tracing methods.
Ali, Haider; Cheema, Taqi A; Yoon, Ho-Sung; Do, Younghae; Park, Cheol W
2015-02-01
In the present study, a novel technique, which involves numerical computation of the mixing length of algae particles in raceway ponds, was used to evaluate the mixing process. A value of mixing length that is higher than the maximum streamwise distance (MSD) of algae cells indicates that the cells experienced an adequate turbulent mixing in the pond. A coupling methodology was adapted to map the pulsating effects of a 2D paddle wheel on a 3D raceway pond in this study. The turbulent mixing was examined based on the computations of mixing length, residence time, and algae cell distribution in the pond. The results revealed that the use of particle tracing methodology is an improved approach to define the mixing phenomenon more effectively. Moreover, the algae cell distribution aided in identifying the degree of mixing in terms of mixing length and residence time. © 2014 Wiley Periodicals, Inc.
High numerical aperture multilayer Laue lenses
Morgan, Andrew J.; Prasciolu, Mauro; Andrejczuk, Andrzej; ...
2015-06-01
The ever-increasing brightness of synchrotron radiation sources demands improved X-ray optics to utilise their capability for imaging and probing biological cells, nanodevices, and functional matter on the nanometer scale with chemical sensitivity. Here we demonstrate focusing a hard X-ray beam to an 8 nm focus using a volume zone plate (also referred to as a wedged multilayer Laue lens). This lens was constructed using a new deposition technique that enabled the independent control of the angle and thickness of diffracting layers to microradian and nanometer precision, respectively. This ensured that the Bragg condition is satisfied at each point along the lens, leading to a high numerical aperture that is limited only by its extent. We developed a phase-shifting interferometric method based on ptychography to characterise the lens focus. The precision of the fabrication and characterisation demonstrated here provides the path to efficient X-ray optics for imaging at 1 nm resolution.
Computer simulations of phase field drops on super-hydrophobic surfaces
NASA Astrophysics Data System (ADS)
Fedeli, Livio
2017-09-01
We present a novel quasi-Newton continuation procedure that efficiently solves the system of nonlinear equations arising from the discretization of a phase field model for wetting phenomena. We perform a comparative numerical analysis that shows the improved speed of convergence gained with respect to other numerical schemes. Moreover, we discuss the conditions that, on a theoretical level, guarantee the convergence of this method. At each iterative step, a suitable continuation procedure develops and passes to the nonlinear solver an accurate initial guess. Discretization performs through cell-centered finite differences. The resulting system of equations is solved on a composite grid that uses dynamic mesh refinement and multi-grid techniques. The final code achieves three-dimensional, realistic computer experiments comparable to those produced in laboratory settings. This code offers not only new insights into the phenomenology of super-hydrophobicity, but also serves as a reliable predictive tool for the study of hydrophobic surfaces.
Martin, Jeff W.; Squire, Jeremy A.; Zielenska, Maria
2012-01-01
Osteosarcoma is a primary bone malignancy with a particularly high incidence rate in children and adolescents relative to other age groups. The etiology of this often aggressive cancer is currently unknown, because complicated structural and numeric genomic rearrangements in cancer cells preclude understanding of tumour development. In addition, few consistent genetic changes that may indicate effective molecular therapeutic targets have been reported. However, high-resolution techniques continue to improve knowledge of distinct areas of the genome that are more commonly associated with osteosarcomas. Copy number gains at chromosomes 1p, 1q, 6p, 8q, and 17p as well as copy number losses at chromosomes 3q, 6q, 9, 10, 13, 17p, and 18q have been detected by numerous groups, but definitive oncogenes or tumour suppressor genes remain elusive with respect to many loci. In this paper, we examine studies of the genetics of osteosarcoma to comprehensively describe the heterogeneity and complexity of this cancer. PMID:22685381
Efficient Trajectory Propagation for Orbit Determination Problems
NASA Technical Reports Server (NTRS)
Roa, Javier; Pelaez, Jesus
2015-01-01
Regularized formulations of orbital motion apply a series of techniques to improve the numerical integration of the orbit. Despite their advantages and potential applications, little attention has been paid to the propagation of the partial derivatives of the corresponding set of elements or coordinates, required in many orbit-determination scenarios and optimization problems. This paper fills this gap by presenting the general procedure for integrating the state-transition matrix of the system together with the nominal trajectory using regularized formulations and different sets of elements. The main difficulty comes from introducing an independent variable different from time, because the solution needs to be synchronized. The correction of the time delay is treated from a generic perspective not focused on any particular formulation. The synchronization using time-elements is also discussed. Numerical examples include strongly-perturbed orbits in the Pluto system, motivated by the recent flyby of the New Horizons spacecraft, together with a geocentric flyby of the NEAR spacecraft.
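The basic mechanics of propagating the state-transition matrix alongside the trajectory can be sketched for plain Cartesian Keplerian motion, as below; the regularized formulations, the non-time independent variable, and the time-element synchronization discussed in the paper are deliberately omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: integrate the state-transition matrix (STM) of Keplerian motion via the
# variational equations d(Phi)/dt = A(t) Phi, alongside the nominal Cartesian trajectory.
mu = 398600.4418                                  # Earth GM, km^3/s^2

def rhs(t, y):
    r, v = y[:3], y[3:6]
    Phi = y[6:].reshape(6, 6)
    rn = np.linalg.norm(r)
    a = -mu * r / rn**3
    G = mu * (3.0 * np.outer(r, r) / rn**5 - np.eye(3) / rn**3)   # gravity gradient
    A = np.block([[np.zeros((3, 3)), np.eye(3)], [G, np.zeros((3, 3))]])
    return np.concatenate([v, a, (A @ Phi).ravel()])

y0 = np.concatenate([[7000.0, 0.0, 0.0], [0.0, 7.5, 0.5], np.eye(6).ravel()])
sol = solve_ivp(rhs, (0.0, 5400.0), y0, rtol=1e-10, atol=1e-12)
Phi_tf = sol.y[6:, -1].reshape(6, 6)              # maps initial-state deviations to final ones
```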
Traceable Coulomb blockade thermometry
NASA Astrophysics Data System (ADS)
Hahtela, O.; Mykkänen, E.; Kemppinen, A.; Meschke, M.; Prunnila, M.; Gunnarsson, D.; Roschier, L.; Penttilä, J.; Pekola, J.
2017-02-01
We present a measurement and analysis scheme for determining traceable thermodynamic temperature at cryogenic temperatures using Coulomb blockade thermometry. The uncertainty of the electrical measurement is improved by utilizing two sampling digital voltmeters instead of the traditional lock-in technique. The remaining uncertainty is dominated by that of the numerical analysis of the measurement data. Two analysis methods are demonstrated: numerical fitting of the full conductance curve and measuring the height of the conductance dip. The complete uncertainty analysis shows that using either analysis method the relative combined standard uncertainty (k = 1) in determining the thermodynamic temperature in the temperature range from 20 mK to 200 mK is below 0.5%. In this temperature range, both analysis methods produced temperature estimates that deviated from 0.39% to 0.67% from the reference temperatures provided by a superconducting reference point device calibrated against the Provisional Low Temperature Scale of 2000.
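As a hedged illustration of the simpler of the two analysis routes, the sketch below estimates temperature from the full width at half minimum of a synthetic conductance dip using the commonly quoted first-order CBT relation V1/2 ≈ 5.439 N kB T / e for a uniform N-junction array; the paper's analysis also fits the full conductance curve and carries a complete uncertainty budget.

```python
import numpy as np

# Hedged sketch of the "dip half-width" route in Coulomb blockade thermometry.
# The synthetic dip shape below is an assumed stand-in, not the true CBT line shape.
K_B, E = 1.380649e-23, 1.602176634e-19

def cbt_temperature_from_halfwidth(v, g_norm, n_junctions):
    """Estimate T from a measured normalized conductance dip g_norm(v) = G/G_T."""
    depth = 1.0 - g_norm.min()
    half_level = 1.0 - 0.5 * depth
    inside = v[g_norm <= half_level]              # bias points within the half-depth region
    v_half = inside.max() - inside.min()          # full width at half minimum
    return E * v_half / (5.439 * n_junctions * K_B)

N, T_true = 32, 0.050
v = np.linspace(-0.005, 0.005, 2001)
v_half_true = 5.439 * N * K_B * T_true / E
g = 1.0 - 0.004 / (1.0 + (2.0 * v / v_half_true) ** 2)   # synthetic dip of the expected width
print(cbt_temperature_from_halfwidth(v, g, N))            # ≈ 0.050 K for this synthetic dip
```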
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruno, Michael; Ramos, Juan; Lao, Kang
Horizontal wells combined with multi-stage hydraulic fracturing have been applied to significantly increase production from low permeability formations, contributing to expanded total US production of oil and gas. Not all applications are successful, however. Field observations indicate that poorly designed or placed fracture stages in horizontal wells can result in significant well casing deformation and damage. In some instances, early fracture stages have deformed the casing enough so that it is not possible to drill out plugs in order to complete subsequent fracture stages. Improved fracture characterization techniques are required to identify potential problems early in the development of the field. Over the past decade, several new technologies have been presented as alternatives to characterize the fracture geometry for unconventional reservoirs. Monitoring dynamic casing strain and deformation during hydraulic fracturing represents one of these new techniques. The objective of this research is to evaluate dynamic and static strains imposed on a well casing by single and multiple stage fractures, and to use that information in combination with numerical inversion techniques to estimate fracture characteristics such as length, orientation and post treatment opening. GeoMechanics Technologies, working in cooperation with the Department of Energy, Small Business Innovation Research through DOE SBIR Grant No: DE-SC-0017746, is conducting a research project to complete an advanced analysis of dynamic and static casing strain monitoring to characterize the orientation and dimensions of hydraulic fractures. This report describes our literature review and technical approach. The following conclusions summarize our review and simulation results to date: A literature review was performed related to the fundamental theoretical and analytical developments of stress and strain imposed by hydraulic fracturing along casing completions and deformation monitoring techniques. Analytical solutions have been developed to understand the mechanisms responsible for casing deformation induced by hydraulic fracturing operations. After reviewing a range of casing deformation techniques, including fiber optic sensors, borehole ultrasonic tools and electromagnetic tools, we can state that challenges in deployment, data acquisition and interpretation must still be overcome to ensure successful application of strain measurement and inversion techniques to characterize hydraulic fractures in the field. Numerical models were developed to analyze induced strain along casing, cement and formation interfaces. The location of the monitoring sensor around the completion, mechanical properties of the cement and its condition in the annular space can impact the strain measurement. Field data from fiber optic sensors were evaluated to compare against numerical models. A reasonable match for the fracture height characterization was obtained. Discrepancies in the strain magnitude between the field data and the numerical model were observed and can be caused by temperature effects, the cement condition in the well and the perturbation at the surface during injection. To avoid damage in the fiber optic cable during the perforation (e.g. when setting up multi stage HF scenarios), oriented perforation technologies are suggested. This issue was evidenced in the analyzed field data, where it was not possible to obtain strain measurement below the top of the perforation.
This presented a limitation in characterizing the entire fracture geometry. The comparison results from numerical modeling and field data for fracture characterization show that the proposed methodology should be validated with alternative field demonstration techniques using measurements in an offset observation well to monitor and measure the induced strain. We propose to expand on this research in Phase II with a further study of multi-fracture characterization and field demonstration for horizontal wells.
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
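The Kreisselmeier-Steinhauser aggregation named above folds several objective or constraint values into one smooth envelope function, which a single-objective optimizer can then drive down; a minimal sketch:

```python
import numpy as np

# Hedged sketch of the Kreisselmeier-Steinhauser (KS) envelope: a smooth upper bound on
# max_i g_i that tightens as the draw-down factor rho increases.

def ks(g_values, rho=50.0):
    g = np.asarray(g_values, dtype=float)
    g_max = g.max()                                   # shift for numerical stability
    return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho

g = [0.30, 0.10, 0.28]                                # scaled objective/constraint values (assumed)
print(max(g), ks(g, rho=5.0), ks(g, rho=200.0))       # KS approaches max(g) from above as rho grows
```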
Evidence-based Guidelines for Interpretation of the Panic Disorder Severity Scale
Furukawa, Toshi A.; Shear, M. Katherine; Barlow, David H.; Gorman, Jack M.; Woods, Scott W.; Money, Roy; Etschel, Eva; Engel, Rolf R.; Leucht, Stefan
2008-01-01
Background: The Panic Disorder Severity Scale (PDSS) is promising to be a standard global rating scale for panic disorder. In order for a clinical scale to be useful, we need a guideline for interpreting its scores and their changes, and for defining clinical change points such as response and remission. Methods: We used individual patient data from two large randomized controlled trials of panic disorder (total n=568). Study participants were administered the PDSS and the Clinical Global Impression (CGI)-Severity and -Improvement. We applied the equipercentile linking technique to draw correspondences between PDSS and CGI-Severity, numeric changes in PDSS and CGI-Improvement, and percent changes in PDSS and CGI-Improvement. Results: The interpretation of the PDSS total score differed according to the presence or absence of agoraphobia. When the patients were not agoraphobic, score ranges 0–1 corresponded with "Normal," 2–5 with "Borderline", 6–9 with "Slightly ill", 10–13 with "Moderately ill", and 14 and above with "Markedly ill." When the patients were agoraphobic, score ranges 3–7 meant "Borderline ill," 8–10 "Slightly ill," 11–15 "Moderately ill," and 16 and above "Markedly ill." The relationship between PDSS change and CGI-Improvement was more linear when measured as percent change than as numeric change, and was indistinguishable for those with or without agoraphobia. A decrease of 75–100% was considered "Very much improved," one of 40–74% "Much improved," and one of 10–39% "Minimally improved." Conclusion: We propose that "remission" of panic disorder be defined by PDSS scores of 5 or less and its "response" by 40% or greater reduction. PMID:19006198
Session on techniques and resources for storm-scale numerical weather prediction
NASA Technical Reports Server (NTRS)
Droegemeier, Kelvin
1993-01-01
The session on techniques and resources for storm-scale numerical weather prediction is reviewed. The recommendations of this group are broken down into three areas: modeling and prediction, data requirements in support of modeling and prediction, and data management. The current status, modeling and technological recommendations, data requirements in support of modeling and prediction, and data management are addressed.
High-order conservative finite difference GLM-MHD schemes for cell-centered MHD
NASA Astrophysics Data System (ADS)
Mignone, Andrea; Tzeferacos, Petros; Bodo, Gianluigi
2010-08-01
We present and compare third- as well as fifth-order accurate finite difference schemes for the numerical solution of the compressible ideal MHD equations in multiple spatial dimensions. The selected methods lean on four different reconstruction techniques based on recently improved versions of the weighted essentially non-oscillatory (WENO) schemes, monotonicity preserving (MP) schemes as well as slope-limited polynomial reconstruction. The proposed numerical methods are highly accurate in smooth regions of the flow, avoid loss of accuracy in proximity of smooth extrema and provide sharp non-oscillatory transitions at discontinuities. We suggest a numerical formulation based on a cell-centered approach where all of the primary flow variables are discretized at the zone center. The divergence-free condition is enforced by augmenting the MHD equations with a generalized Lagrange multiplier yielding a mixed hyperbolic/parabolic correction, as in Dedner et al. [J. Comput. Phys. 175 (2002) 645-673]. The resulting family of schemes is robust, cost-effective and straightforward to implement. Compared to previously existing approaches, it completely avoids the CPU intensive workload associated with an elliptic divergence cleaning step and the additional complexities required by staggered mesh algorithms. Extensive numerical testing demonstrates the robustness and reliability of the proposed framework for computations involving both smooth and discontinuous features.
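For reference, the classical fifth-order WENO-JS reconstruction that the improved schemes build on can be sketched in a few lines; the MP limiting, characteristic projection, and GLM coupling of the paper are not shown.

```python
import numpy as np

# Hedged sketch: classical fifth-order WENO-JS reconstruction of the left-biased interface
# value v_{i+1/2} from the five cell averages v_{i-2..i+2}.

def weno5_reconstruct(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    # Candidate third-order reconstructions on the three substencils
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights built from the ideal weights d = (0.1, 0.6, 0.3)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)

x = np.linspace(0.0, 1.0, 6)
print(weno5_reconstruct(*np.sin(2*np.pi*x[:5])))   # smooth data: near fifth-order accurate value
```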
Numerical Modeling of Inclusion Behavior in Liquid Metal Processing
NASA Astrophysics Data System (ADS)
Bellot, Jean-Pierre; Descotes, Vincent; Jardy, Alain
2013-09-01
Thermomechanical performance of metallic alloys is directly related to the metal cleanliness that has always been a challenge for metallurgists. During liquid metal processing, particles can grow or decrease in size either by mass transfer with the liquid phase or by agglomeration/fragmentation mechanisms. As a function of numerical density of inclusions and of the hydrodynamics of the reactor, different numerical modeling approaches are proposed; in the case of an isolated particle, the Lagrangian technique coupled with a dissolution model is applied, whereas in the opposite case of large inclusion phase concentration, the population balance equation must be solved. Three examples of numerical modeling studies achieved at Institut Jean Lamour are discussed. They illustrate the application of the Lagrangian technique (for isolated exogenous inclusion in titanium bath) and the Eulerian technique without or with the aggregation process: for precipitation and growing of inclusions at the solidification front of a Maraging steel, and for endogenous inclusions in the molten steel bath of a gas-stirred ladle, respectively.
NASA Technical Reports Server (NTRS)
Thomas, P. D.
1979-01-01
The theoretical foundation and formulation of a numerical method for predicting the viscous flowfield in and about isolated three dimensional nozzles of geometrically complex configuration are presented. High Reynolds number turbulent flows are of primary interest for any combination of subsonic, transonic, and supersonic flow conditions inside or outside the nozzle. An alternating-direction implicit (ADI) numerical technique is employed to integrate the unsteady Navier-Stokes equations until an asymptotic steady-state solution is reached. Boundary conditions are computed with an implicit technique compatible with the ADI technique employed at interior points of the flow region. The equations are formulated and solved in a boundary-conforming curvilinear coordinate system. The curvilinear coordinate system and computational grid are generated numerically as the solution to an elliptic boundary value problem. A method is developed that automatically adjusts the elliptic system so that the interior grid spacing is controlled directly by the a priori selection of the grid spacing on the boundaries of the flow region.
Preserving Simplecticity in the Numerical Integration of Linear Beam Optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, Christopher K.
2017-07-01
Presented are mathematical tools and methods for the development of numerical integration techniques that preserve the symplectic condition inherent to mechanics. The intended audience is beam physicists with backgrounds in numerical modeling and simulation, with particular attention to beam optics applications. The paper focuses on Lie methods that are inherently symplectic regardless of the integration accuracy order. Section 2 provides the mathematical tools used in the sequel and needed by the reader to extend the covered techniques. Section 3 places those tools in the context of charged-particle beam optics; in particular, linear beam optics is presented in terms of a Lie algebraic matrix representation. Section 4 presents numerical stepping techniques with particular emphasis on a third-order leapfrog method. Section 5 discusses the modeling of field imperfections with particular attention to the fringe fields of quadrupole focusing magnets. The direct computation of a third-order transfer matrix for a fringe field is shown.
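To make the symplectic condition concrete: for a linear map with transfer matrix M it reads MᵀJM = J. The sketch below verifies this numerically for a simple kick-drift-kick leapfrog through a focusing quadrupole. It is a generic second-order illustration, not the third-order Lie-based scheme of the paper, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

# Symplectic form J for one degree of freedom (x, x')
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def is_symplectic(M, tol=1e-12):
    """Check the symplectic condition M^T J M = J."""
    return np.allclose(M.T @ J @ M, J, atol=tol)

def leapfrog_quad(k, L, n_steps):
    """Kick-drift-kick leapfrog transfer matrix through a focusing quadrupole
    of strength k [1/m^2] and length L, composed from symplectic sub-maps."""
    h = L / n_steps
    drift = np.array([[1.0, h], [0.0, 1.0]])
    half_kick = np.array([[1.0, 0.0], [-k * h / 2.0, 1.0]])
    step = half_kick @ drift @ half_kick
    return np.linalg.matrix_power(step, n_steps)

M = leapfrog_quad(k=2.0, L=0.5, n_steps=10)
print("symplectic:", is_symplectic(M))  # True by construction

# Exact thick-lens focusing matrix for comparison (omega = sqrt(k))
w = np.sqrt(2.0) * 0.5
exact = np.array([[np.cos(w), np.sin(w) / np.sqrt(2.0)],
                  [-np.sqrt(2.0) * np.sin(w), np.cos(w)]])
print("max deviation from exact:", np.abs(M - exact).max())
```

Because every sub-map (drift or thin kick) is itself symplectic, the composite map is symplectic regardless of step size, while the deviation from the exact focusing matrix shrinks as the number of steps grows.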
Status and Perspectives of Neutron Imaging Facilities
NASA Astrophysics Data System (ADS)
Lehmann, E.; Trtik, P.; Ridikas, D.
The methodology and the application range of neutron imaging techniques have been significantly improved at numerous facilities worldwide over the last decades. This progress has been achieved through new detector systems, the setup of dedicated, optimized, and flexible beam lines, and a much better understanding of the complete imaging process thanks to complementary simulations. Furthermore, new applications and research topics have been found and implemented. However, since the quality and the number of neutron imaging facilities depend strongly on access to suitable beam ports, there is still enormous potential to implement state-of-the-art neutron imaging techniques at many more facilities. On the one hand, there are prominent and powerful sources that do not plan for, or accept, the implementation of neutron imaging techniques because priority is given exclusively to neutron scattering and irradiation techniques. On the other hand, there are modern and capable instruments that remain under-utilized and lack either the capacity or the know-how to develop attractive user programs and/or industrial partnerships. In this overview of the international status of neutron imaging facilities, we will specify details about the current situation.
Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent
De Sa, Christopher; Feldman, Matthew; Ré, Christopher; Olukotun, Kunle
2018-01-01
Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called Buckwild! that uses both asynchronous execution and low-precision computation. We introduce the DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, and show that it provides a way to both classify these algorithms and model their performance. We leverage this insight to propose and analyze techniques to improve the speed of low-precision SGD. First, we propose software optimizations that can increase throughput on existing CPUs by up to 11×. Second, we propose architectural changes, including a new cache technique we call an obstinate cache, that increase throughput beyond the limits of current-generation hardware. We also implement and analyze low-precision SGD on the FPGA, which is a promising alternative to the CPU for future SGD systems. PMID:29391770
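Since the abstract above centers on low-precision SGD, a minimal sketch may help fix ideas: the snippet below runs least-squares SGD with fixed-point weights and unbiased stochastic rounding of the updates. It is not the Buckwild! implementation (in particular, asynchronous execution is not shown), and the function names, scale, and learning rate are illustrative assumptions.

```python
import numpy as np

def stochastic_round(x, scale):
    """Round x/scale to an integer stochastically, so the quantization is
    unbiased in expectation (the key property behind low-precision SGD)."""
    y = x / scale
    lo = np.floor(y)
    return (lo + (np.random.rand(*np.shape(y)) < (y - lo))).astype(np.int32)

def low_precision_sgd(X, t, lr=0.05, scale=2.0**-8, epochs=5):
    """Least-squares SGD with the model kept in fixed point
    (integer weights times 'scale'); gradient steps are quantized with
    stochastic rounding before being applied."""
    n, d = X.shape
    w_q = np.zeros(d, dtype=np.int32)        # quantized weights
    for _ in range(epochs):
        for i in np.random.permutation(n):
            w = w_q * scale                  # dequantize for the dot product
            grad = (X[i] @ w - t[i]) * X[i]  # gradient of 0.5*(x.w - t)^2
            w_q -= stochastic_round(lr * grad, scale)
    return w_q * scale

# Tiny usage example on synthetic data (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
t = X @ w_true + 0.01 * rng.normal(size=256)
print(low_precision_sgd(X, t))               # should land near w_true
```

Stochastic rounding keeps the quantized update unbiased in expectation, which is the property that analyses of low-precision SGD typically rely on.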
A hybrid perturbation-Galerkin technique for partial differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Anderson, Carl M.
1990-01-01
A two-step hybrid perturbation-Galerkin technique for improving the usefulness of perturbation solutions to partial differential equations which contain a parameter is presented and discussed. In the first step of the method, the leading terms in the asymptotic expansion(s) of the solution about one or more values of the perturbation parameter are obtained using standard perturbation methods. In the second step, the perturbation functions obtained in the first step are used as trial functions in a Bubnov-Galerkin approximation. This semi-analytical, semi-numerical hybrid technique appears to overcome some of the drawbacks of the perturbation and Galerkin methods when they are applied by themselves, while combining some of the good features of each. The technique is illustrated first by a simple example. It is then applied to the problem of determining the flow of a slightly compressible fluid past a circular cylinder and to the problem of determining the shape of a free surface due to a sink above the surface. Solutions obtained by the hybrid method are compared with other approximate solutions, and its possible application to certain problems associated with domain decomposition is discussed.
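Schematically, and only as a reading aid rather than the authors' exact formulation, the two-step hybrid can be summarized as follows, where the u_i come from the perturbation expansion, the δ_i(ε) are amplitudes fixed by the Bubnov-Galerkin conditions, R denotes the PDE residual, and ⟨·,·⟩ is an inner product over the domain.

```latex
% Two-step hybrid: (1) the perturbation expansion supplies the basis u_i(x);
% (2) their amplitudes are re-determined by a Bubnov-Galerkin projection.
\begin{aligned}
\text{Step 1 (perturbation):}\quad
  & u(x;\varepsilon) \sim \sum_{i=0}^{N} \varepsilon^{\,i}\, u_i(x), \\
\text{Step 2 (Galerkin):}\quad
  & \bar u(x;\varepsilon) = \sum_{i=0}^{N} \delta_i(\varepsilon)\, u_i(x),
    \qquad
    \big\langle R\!\left(\bar u\right),\, u_j \big\rangle = 0,
    \quad j = 0,\dots,N .
\end{aligned}
```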
Evaluation of the Impact of AIRS Radiance and Profile Data Assimilation in Partly Cloudy Regions
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Srikishen, Jayanthi; Jedlovec, Gary
2013-01-01
Improvements to global and regional numerical weather prediction have been demonstrated through assimilation of data from NASA's Atmospheric Infrared Sounder (AIRS). Current operational data assimilation systems use AIRS radiances, but the impact on regional forecasts has been much smaller than for global forecasts. Retrieved profiles from AIRS contain much of the information that is contained in the radiances and may be able to reveal reasons for this reduced impact. Assimilating AIRS retrieved profiles in an analysis configuration identical to that used for the radiances, tracking the quantity and quality of the assimilated data in each technique, and examining analysis increments and forecast impact from each data type can yield clues as to the reasons for the reduced impact. By doing this with regional-scale models, individual synoptic features (and the impact of AIRS on these features) can be more easily tracked. This project examines the assimilation of hyperspectral sounder data used in operational numerical weather prediction by comparing operational techniques used for AIRS radiances and research techniques used for AIRS retrieved profiles. Parallel versions of a configuration of the Weather Research and Forecasting (WRF) model with Gridpoint Statistical Interpolation (GSI) are run to examine the impact of AIRS radiances and retrieved profiles. Statistical evaluation of a long-term series of forecast runs will be presented, along with preliminary results of in-depth investigations for select cases comparing the analysis increments in partly cloudy regions and short-term forecast impacts.