Sample records for artificial compressibility method

  1. An Assessment of Artificial Compressibility and Pressure Projection Methods for Incompressible Flow Simulations

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, C.; Smith, Charles A. (Technical Monitor)

    1998-01-01

    The performance of two commonly used numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, is compared. These formulations are selected primarily because they are designed for three-dimensional applications. The computational procedures are compared by obtaining steady-state solutions of a wake vortex and unsteady solutions of a curved duct flow. For steady computations, artificial compressibility was very efficient in terms of computing time and robustness. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally more efficient than the artificial compressibility method. This comparison is intended to give some basis for selecting a method or a flow solution code for large three-dimensional applications where computing resources become a critical issue.
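
    A rough side-by-side of the two formulations compared here, in generic notation that is assumed rather than taken from the paper: artificial compressibility evolves the pressure in pseudo-time, while pressure projection corrects an intermediate velocity with the solution of a Poisson equation.

```latex
% Artificial compressibility: the continuity equation gains a pseudo-time
% pressure term (beta is the artificial compressibility parameter, tau the
% pseudo-time variable)
\frac{1}{\beta}\frac{\partial p}{\partial \tau} + \nabla \cdot \mathbf{u} = 0
% Pressure projection: an intermediate velocity u* is corrected by a
% pressure field obtained from a Poisson equation
\nabla^2 p = \frac{\rho}{\Delta t}\,\nabla \cdot \mathbf{u}^*, \qquad
\mathbf{u}^{n+1} = \mathbf{u}^* - \frac{\Delta t}{\rho}\,\nabla p
```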

  2. Comparison of Artificial Compressibility Methods

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Housman, Jeffrey; Kwak, Dochan

    2004-01-01

    Various artificial compressibility methods for calculating the three-dimensional incompressible Navier-Stokes equations are compared. Each method is described and numerical solutions to test problems are conducted. A comparison based on convergence behavior, accuracy, and robustness is given.

  3. [Research progress on mechanical performance evaluation of artificial intervertebral disc].

    PubMed

    Li, Rui; Wang, Song; Liao, Zhenhua; Liu, Weiqiang

    2018-03-01

    The mechanical properties of an artificial intervertebral disc (AID) are related to the long-term reliability of the prosthesis. Three testing methods, based on different tools, are involved in the mechanical performance evaluation of AIDs: testing with a mechanical simulator, in vitro specimen testing, and finite element analysis. In this study, the testing standards, testing equipment and materials for AIDs are first introduced. The present status of AID static mechanical property tests (static axial compression, static axial compression-shear), dynamic mechanical property tests (dynamic axial compression, dynamic axial compression-shear), creep and stress relaxation tests, device push-out tests, core push-out tests, subsidence tests, etc. is then reviewed. The experimental techniques of in vitro specimen testing and the testing results of available artificial discs are summarized, as are the experimental methods and research status of finite element analysis. Finally, research trends in AID mechanical performance evaluation are forecast: the simulator, load, dynamic cycle, motion mode, specimen and test standard will be important research fields in the future.

  4. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence.

    PubMed

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-04-01

    This study aimed to validate the performance of a novel image compression method that uses a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods and should improve the efficiency of handling the increasing volume of medical imaging data.
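
    The pipeline described above (predict, take residuals, entropy-code) can be illustrated with a much simpler stand-in: a fixed left-neighbor predictor in place of the neural network, and zlib in place of the entropy encoding block. All names and parameters here are illustrative and not from the paper; only the predict/residual/entropy-code structure is shared.

```python
import zlib

import numpy as np

def compress(img):
    """Lossless predictive coding: residuals of a left-neighbor predictor,
    entropy-coded with zlib (a stand-in for a learned neural predictor)."""
    img = img.astype(np.int16)
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]      # predict each pixel from its left neighbor
    resid = img - pred             # residuals cluster near zero on smooth images
    return zlib.compress(resid.tobytes()), img.shape

def decompress(blob, shape):
    """Invert the predictor by accumulating residuals column by column."""
    resid = np.frombuffer(zlib.decompress(blob), dtype=np.int16).reshape(shape)
    out = np.zeros(shape, dtype=np.int16)
    out[:, 0] = resid[:, 0]        # the first column had a zero prediction
    for j in range(1, shape[1]):
        out[:, j] = resid[:, j] + out[:, j - 1]
    return out.astype(np.uint8)
```

    Because the residuals of a good predictor concentrate near zero, the entropy coder compresses them far better than the raw pixels; a neural predictor plays the same role with a learned prediction.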

  5. Digital Image Compression Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.

    1993-01-01

    The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). To assess the reliability of the NNCTC, a comparison is performed of the compression results obtained from digital astronomical images by the NNCTC and by the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, which is based on the H-transform.

  6. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced, and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  7. Estimation of mechanical properties of nanomaterials using artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, V.; Garg, A.; Wong, C. H.; Tai, K.

    2014-09-01

    Computational modeling tools such as molecular dynamics (MD), ab initio, finite element modeling or continuum mechanics models have been extensively applied to study the properties of carbon nanotubes (CNTs) based on given input variables such as temperature, geometry and defects. Artificial intelligence techniques can be used to further complement the application of numerical methods in characterizing the properties of CNTs. In this paper, we have introduced the application of multi-gene genetic programming (MGGP) and support vector regression to formulate the mathematical relationship between the compressive strength of CNTs and input variables such as temperature and diameter. The predictions of compressive strength of CNTs made by these models are compared to those generated using MD simulations. The results indicate that the MGGP method can be deployed as a powerful method for predicting the compressive strength of carbon nanotubes.

  8. Applications of Taylor-Galerkin finite element method to compressible internal flow problems

    NASA Technical Reports Server (NTRS)

    Sohn, Jeong L.; Kim, Yongmo; Chung, T. J.

    1989-01-01

    A two-step Taylor-Galerkin finite element method with Lapidus' artificial viscosity scheme is applied to several test cases for internal compressible inviscid flow problems. Investigations for the effect of supersonic/subsonic inlet and outlet boundary conditions on computational results are particularly emphasized.

  9. Hybrid thermal link-wise artificial compressibility method

    NASA Astrophysics Data System (ADS)

    Obrecht, Christian; Kuznik, Frédéric

    2015-10-01

    Thermal flow prediction is a subject of interest from both scientific and engineering points of view. Our motivation is to develop an accurate, easy-to-implement and highly scalable method for the simulation of convective flows. To this end, we present an extension of the link-wise artificial compressibility method (LW-ACM) for thermal simulation of weakly compressible flows. The novel hybrid formulation uses second-order finite difference operators for the energy equation based on the same stencils as the LW-ACM. For validation purposes, the differentially heated cubic cavity was simulated. The simulations remained stable for Rayleigh numbers up to Ra = 10^8. The Nusselt numbers at the isothermal walls and the dynamic quantities are in good agreement with reference values from the literature. Our results show that the hybrid thermal LW-ACM is an effective and easy-to-use solution for convective flows.

  10. Kinetically reduced local Navier-Stokes equations for simulation of incompressible viscous flows.

    PubMed

    Borok, S; Ansumali, S; Karlin, I V

    2007-12-01

    Recently, another approach to the study of incompressible fluid flow was suggested [S. Ansumali, I. Karlin, and H. Ottinger, Phys. Rev. Lett. 94, 080602 (2005)]: the kinetically reduced local Navier-Stokes (KRLNS) equations. We consider a simplified two-dimensional KRLNS system and compare it with Chorin's artificial compressibility method. A comparison of the two methods for steady-state computation of the flow in a lid-driven cavity at various Reynolds numbers shows that the results from both methods are in good agreement with each other. However, for transient flow, it is demonstrated that the KRLNS equations correctly describe the time evolution of the velocity and of the pressure, unlike the artificial compressibility method.

  11. On Chorin's Method for Stationary Solutions of the Oberbeck-Boussinesq Equation

    NASA Astrophysics Data System (ADS)

    Kagei, Yoshiyuki; Nishida, Takaaki

    2017-06-01

    Stability of stationary solutions of the Oberbeck-Boussinesq system (OB) and the corresponding artificial compressible system is considered. The latter system is obtained by adding the time derivative of the pressure with small parameter ɛ > 0 to the continuity equation of (OB), which was proposed by A. Chorin to find stationary solutions of (OB) numerically. Both systems have the same sets of stationary solutions, and the system (OB) is obtained from the artificial compressible one in the limit ɛ → 0, which is a singular limit. It is proved that if a stationary solution of the artificial compressible system is stable for sufficiently small ɛ > 0, then it is also stable as a solution of (OB). The converse is proved provided that the velocity field of the stationary solution satisfies some smallness condition.

  12. A Semi-implicit Method for Time Accurate Simulation of Compressible Flow

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2001-11-01

    A semi-implicit method for time accurate simulation of compressible flow is presented. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity. Centered discretization in both time and space allows the method to achieve zero artificial attenuation of acoustic waves. The method is an extension of the standard low Mach number pressure correction method to the compressible Navier-Stokes equations, and the main feature of the method is the solution of a Helmholtz type pressure correction equation similar to that of Demirdžić et al. (Int. J. Num. Meth. Fluids, Vol. 16, pp. 1029-1050, 1993). The method is attractive for simulation of acoustic combustion instabilities in practical combustors. In these flows, the Mach number is low; therefore the time step allowed by the convective CFL limitation is significantly larger than that allowed by the acoustic CFL limitation, resulting in significant efficiency gains. Also, the method's property of zero artificial attenuation of acoustic waves is important for accurate simulation of the interaction between acoustic waves and the combustion process. The method has been implemented in a large eddy simulation code, and results from several test cases will be presented.

  13. Numerical Simulation of Flow Through an Artificial Heart

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Kutler, Paul; Kwak, Dochan; Kiris, Cetin

    1989-01-01

    A solution procedure was developed that solves the unsteady, incompressible Navier-Stokes equations, and was used to numerically simulate viscous incompressible flow through a model of the Pennsylvania State artificial heart. The solution algorithm is based on the artificial compressibility method, and uses flux-difference splitting to upwind the convective terms; a line-relaxation scheme is used to solve the equations. The time-accuracy of the method is obtained by iteratively solving the equations at each physical time step. The artificial heart geometry involves a piston-type action with a moving solid wall. A single H-grid is fit inside the heart chamber. The grid is continuously compressed and expanded with a constant number of grid points to accommodate the moving piston. The computational domain ends at the valve openings where nonreflective boundary conditions based on the method of characteristics are applied. Although a number of simplifying assumptions were made regarding the geometry, the computational results agreed reasonably well with an experimental picture. The computer time requirements for this flow simulation, however, are quite extensive. Computational study of this type of geometry would benefit greatly from improvements in computer hardware speed and algorithm efficiency enhancements.

  14. Some results on numerical methods for hyperbolic conservation laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Huanan

    1989-01-01

    This dissertation contains some results on the numerical solution of hyperbolic conservation laws. (1) The author introduces an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme, a second-order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and centered difference schemes into self-adjusting hybrid schemes, called localized ENO schemes. At or near the jumps, he uses the ENO schemes with field-by-field decompositions; otherwise he simply uses the centered difference schemes without the field-by-field decompositions. The method involves a new interpolation analysis. In numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method; in this way, he dramatically improves the resolution of the contact discontinuities at very little additional cost. (3) He introduces a space-time mesh refinement method for time-dependent problems.

  15. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  16. High speed inviscid compressible flow by the finite element method

    NASA Technical Reports Server (NTRS)

    Zienkiewicz, O. C.; Loehner, R.; Morgan, K.

    1984-01-01

    The finite element method and an explicit time stepping algorithm which is based on Taylor-Galerkin schemes with an appropriate artificial viscosity is combined with an automatic mesh refinement process which is designed to produce accurate steady state solutions to problems of inviscid compressible flow in two dimensions. The results of two test problems are included which demonstrate the excellent performance characteristics of the proposed procedures.

  17. Stability of Bifurcating Stationary Solutions of the Artificial Compressible System

    NASA Astrophysics Data System (ADS)

    Teramoto, Yuka

    2018-02-01

    The artificial compressible system gives a compressible approximation of the incompressible Navier-Stokes system. The latter system is obtained from the former one in the zero limit of the artificial Mach number ɛ, which is a singular limit. The sets of stationary solutions of both systems coincide with each other. It is known that if a stationary solution of the incompressible system is asymptotically stable and the velocity field of the stationary solution satisfies an energy-type stability criterion, then it is also stable as a solution of the artificial compressible one for sufficiently small ɛ. In general, the range of ɛ shrinks when the spectrum of the linearized operator for the incompressible system approaches the imaginary axis. This can happen when a stationary bifurcation occurs. It is proved that when a stationary bifurcation from a simple eigenvalue occurs, the range of ɛ can be taken uniformly near the bifurcation point to conclude the stability of the bifurcating solution as a solution of the artificial compressible system.

  18. The effect of artificial bulk viscosity in simulations of forced compressible turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campos, A.; Morgan, B.

    The use of an artificial bulk viscosity for shock stabilization is a common approach employed in turbulence simulations with high-order numerics. The effect of the artificial bulk viscosity is analyzed in the context of large eddy simulations by using as a test case simulations of linearly-forced compressible homogeneous turbulence (Petersen and Livescu, 2010 [12]). This case is unique in that it allows for the specification of a priori target values for total dissipation and ratio of solenoidal to dilatational dissipation. A comparison between these target values and the true predicted levels of dissipation is thus used to investigate the performance of the artificial bulk viscosity. Results show that the artificial bulk viscosity is effective at achieving stable solutions, but also leads to large values of artificial dissipation that outweigh the physical dissipation caused by fluid viscosity. An alternate approach, which employs the artificial thermal conductivity only, shows that the dissipation of dilatational modes is entirely due to the fluid viscosity. However, this method leads to unwanted Gibbs oscillations around the shocklets. The use of shock sensors that further localize the artificial bulk viscosity did not reduce the amount of artificial dissipation introduced by the artificial bulk viscosity. Finally, an improved forcing function that explicitly accounts for the role of the artificial bulk viscosity in the budget of turbulent kinetic energy was explored.

  19. The effect of artificial bulk viscosity in simulations of forced compressible turbulence

    DOE PAGES

    Campos, A.; Morgan, B.

    2018-05-17

    The use of an artificial bulk viscosity for shock stabilization is a common approach employed in turbulence simulations with high-order numerics. The effect of the artificial bulk viscosity is analyzed in the context of large eddy simulations by using as a test case simulations of linearly-forced compressible homogeneous turbulence (Petersen and Livescu, 2010 [12]). This case is unique in that it allows for the specification of a priori target values for total dissipation and ratio of solenoidal to dilatational dissipation. A comparison between these target values and the true predicted levels of dissipation is thus used to investigate the performance of the artificial bulk viscosity. Results show that the artificial bulk viscosity is effective at achieving stable solutions, but also leads to large values of artificial dissipation that outweigh the physical dissipation caused by fluid viscosity. An alternate approach, which employs the artificial thermal conductivity only, shows that the dissipation of dilatational modes is entirely due to the fluid viscosity. However, this method leads to unwanted Gibbs oscillations around the shocklets. The use of shock sensors that further localize the artificial bulk viscosity did not reduce the amount of artificial dissipation introduced by the artificial bulk viscosity. Finally, an improved forcing function that explicitly accounts for the role of the artificial bulk viscosity in the budget of turbulent kinetic energy was explored.

  20. Video data compression using artificial neural network differential vector quantization

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Ashok K.; Bibyk, Steven B.; Ahalt, Stanley C.

    1991-01-01

    An artificial neural network vector quantizer is developed for use in data compression applications such as Digital Video. Differential Vector Quantization is used to preserve edge features, and a new adaptive algorithm, known as Frequency-Sensitive Competitive Learning, is used to develop the vector quantizer codebook. To achieve real-time performance, a custom Very Large Scale Integration Application Specific Integrated Circuit (VLSI ASIC) is being developed to realize the associative memory functions needed in the vector quantization algorithm. By using vector quantization, the need for Huffman coding can be eliminated, resulting in better performance in the presence of channel bit errors than methods that use variable-length codes.
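
    The frequency-sensitive competitive learning rule used to build such a codebook can be sketched in a generic textbook form; all parameter names and values below are illustrative, not taken from the paper. Each training vector updates the codeword that wins a distance competition, with distances scaled by each codeword's win count so that rarely used codewords stay competitive.

```python
import numpy as np

def fscl_codebook(data, k=8, epochs=5, lr=0.1, seed=0):
    """Frequency-sensitive competitive learning for a VQ codebook (sketch)."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), size=k, replace=False)].astype(float)
    wins = np.ones(k)                            # win counts, initialized to one
    for _ in range(epochs):
        for x in data:
            # frequency-weighted distortion: busy codewords are penalized
            d = wins * np.sum((codebook - x) ** 2, axis=1)
            w = int(np.argmin(d))
            codebook[w] += lr * (x - codebook[w])   # move the winner toward x
            wins[w] += 1
    return codebook, wins
```

    At encode time one would quantize with the plain (unweighted) distance; the frequency weighting only shapes the codebook during training.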

  1. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.

  2. GASOLINE: Smoothed Particle Hydrodynamics (SPH) code

    NASA Astrophysics Data System (ADS)

    N-Body Shop

    2017-10-01

    Gasoline solves the equations of gravity and hydrodynamics in astrophysical problems, including simulations of planets, stars, and galaxies. It uses an SPH method that features correct mixing behavior in multiphase fluids and minimal artificial viscosity. This method is identical to the SPH method used in the ChaNGa code (ascl:1105.005), allowing users to extend results to problems requiring >100,000 cores. Gasoline uses a fast, memory-efficient O(N log N) KD-Tree to solve Poisson's Equation for gravity and avoids artificial viscosity in non-shocking compressive flows.

  3. Inverse transonic airfoil design methods including boundary layer and viscous interaction effects

    NASA Technical Reports Server (NTRS)

    Carlson, L. A.

    1979-01-01

    The development and incorporation into TRANDES of a fully conservative analysis method utilizing the artificial compressibility approach is described. The method allows for lifting cases and finite thickness airfoils and utilizes a stretched coordinate system. Wave drag and massive separation studies are also discussed.

  4. [The Identification of the Origin of Chinese Wolfberry Based on Infrared Spectral Technology and the Artificial Neural Network].

    PubMed

    Li, Zhong; Liu, Ming-de; Ji, Shou-xiang

    2016-03-01

    Fourier Transform Infrared Spectroscopy (FTIR) is used to establish a method for quickly identifying the geographic origins of Chinese wolfberry. In the paper, 45 samples of Chinese wolfberry from different places in Qinghai Province are surveyed by FTIR. The original FTIR data matrix is pretreated with common preprocessing and the wavelet transform. Compared with common window-shifting smoothing preprocessing, standard normal variate correction, and multiplicative scatter correction, the wavelet transform is an effective spectral data preprocessing method. Before establishing a model with artificial neural networks, the spectral variables are compressed by means of the wavelet transform so as to increase the training speed of the networks, and the related parameters of the artificial neural network model are also discussed in detail. The survey shows that even if the infrared spectroscopy data is compressed to 1/8 of its original size, the spectral information and analytical accuracy are not deteriorated. The compressed spectral variables are used as modeling inputs of the backpropagation artificial neural network (BP-ANN) model, and the geographic origins of Chinese wolfberry are used as outputs. A three-layer neural network model is built to predict the 10 unknown samples using the MATLAB neural network toolbox error back propagation network. The number of hidden layer neurons is 5, and the number of output layer neurons is 1. The transfer function of the hidden layer is tansig, while the transfer function of the output layer is purelin. The network training function is trainl, and the learning function of weights and thresholds is learngdm, with net.trainParam.epochs = 1000 and net.trainParam.goal = 0.001. A recognition rate of 100% is achieved. It can be concluded that the method is quite suitable for quick discrimination of the producing areas of Chinese wolfberry. The infrared spectral analysis technology combined with artificial neural networks is proved to be a reliable new method for identifying the place of origin of Traditional Chinese Medicine.
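
    The eight-fold data reduction described above can be illustrated with three levels of a plain Haar transform that keeps only the approximation coefficients. This is a generic sketch of wavelet-based spectral compression, not the authors' exact preprocessing.

```python
import numpy as np

def haar_approx(signal, levels=3):
    """Keep only Haar approximation coefficients for `levels` passes,
    halving the data each pass (3 levels -> 1/8 of the original length)."""
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        if len(a) % 2:                 # pad odd lengths by repeating the end
            a = np.append(a, a[-1])
        a = (a[0::2] + a[1::2]) / 2.0  # pairwise averages; details dropped
    return a
```

    Dropping the detail coefficients discards the high-frequency content; for smooth spectra the retained approximation preserves the information that matters for classification.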

  5. DEVELOPMENT OF LOW-DIFFUSION FLUX-SPLITTING METHODS FOR DENSE GAS-SOLID FLOWS

    EPA Science Inventory

    The development of a class of low-diffusion upwinding methods for computing dense gas-solid flows is presented in this work. An artificial compressibility/low-Mach preconditioning strategy is developed for a hyperbolic two-phase flow equation system consisting of separate solids ...

  6. Effects of Cold and Compression on Edema.

    ERIC Educational Resources Information Center

    Sloan, J. P.; And Others

    1988-01-01

    Investigation of ways to treat artificially induced acute inflammatory reactions in human tissue found that neither cooling nor pressure alone reduced the swelling, while a combination of the two methods produced a significant reduction in swelling. (Author/CB)

  7. A Comparative Evaluation of Sorption, Solubility, and Compressive Strength of Three Different Glass Ionomer Cements in Artificial Saliva: An in vitro Study

    PubMed Central

    Bhatia, Hind P; Sood, Shveta; Sharma, Naresh

    2017-01-01

    Aim To evaluate and compare the sorption, solubility, and compressive strength of three different glass ionomer cements in artificial saliva - type IX glass ionomer cement, silver-reinforced glass ionomer cement, and zirconia-reinforced glass ionomer cement, so as to determine the material of choice for stress-bearing areas. Materials and methods A total of 90 cylindrical specimens (4 mm diameter and 6 mm height) were prepared for each material following the manufacturer’s instructions. After subjecting the specimens to thermocycling, 45 specimens were immersed in artificial saliva for 24 hours for compressive strength testing under a universal testing machine, and the other 45 were evaluated for sorption and solubility, by first weighing them by a precision weighing scale (W1), then immersing them in artificial saliva for 28 days and weighing them (W2), and finally dehydrating in an oven for 24 hours and weighing them (W3). Results Group III (zirconomer) shows the highest compressive strength followed by group II (Miracle Mix) and least compressive strength is seen in group I (glass ionomer cement type IX-Extra) with statistically significant differences between the groups. The sorption and solubility values in artificial saliva were highest for glass ionomer cement type IX - Extra-GC (group I) followed by zirconomer-Shofu (group III), and the least value was seen for Miracle Mix-GC (group II). Conclusion Zirconia-reinforced glass ionomer cement is a promising dental material and can be used as a restoration in stress-bearing areas due to its high strength and low solubility and sorption rate. It may be a substitute for silver-reinforced glass ionomer cement due to the added advantage of esthetics. 
    Clinical significance This study provides vital information to pediatric dental surgeons on relatively new restorative materials, as the physical and mechanical properties of the new material are compared with conventional materials to determine the best-suited material in terms of durability, strength, and dimensional stability. This study will boost confidence among dental surgeons in terms of handling characteristics, cost effectiveness, and success rate. It will help pediatric dental surgeons, clinically and scientifically, to use this material in stress-bearing areas in pediatric patients. How to cite this article Bhatia HP, Singh S, Sood S, Sharma N. A Comparative Evaluation of Sorption, Solubility, and Compressive Strength of Three Different Glass Ionomer Cements in Artificial Saliva: An in vitro Study. Int J Clin Pediatr Dent 2017;10(1):49-54. PMID:28377656

  8. Neural network-based landmark detection for mobile robot

    NASA Astrophysics Data System (ADS)

    Sekiguchi, Minoru; Okada, Hiroyuki; Watanabe, Nobuo

    1996-03-01

    The mobile robot can essentially have only relative position data for the real world. However, there are many cases in which the robot has to know where it is located. In those cases, a useful method is to detect landmarks in the real world and adjust the robot's position using the detected landmarks. From this point of view, it is essential to develop a mobile robot that can accomplish path planning successfully using natural or artificial landmarks. However, artificial landmarks are often difficult to construct, and natural landmarks are very complicated to detect. In this paper, a method of acquiring the landmarks necessary for path planning from the mobile robot's sensor data is described. The landmarks discussed here are natural ones, obtained by compressing sensor data from the robot. The sensor data is compressed and memorized using a five-layer neural network called a sand-glass model. The input and output data that the neural network learns are identical copies of the robot's sensor data. Using the intermediate output of the network, a compressed representation is obtained, which expresses the landmark data. Even if the sensor data is ambiguous or enormous, it is easy to detect the landmark because the data is compressed and classified by the neural network. Using the backward three layers, the compressed landmark data can be expanded back to the original data at some level. The trained neural network categorizes the detected sensor data to the known landmarks.

  9. A comparison of artificial compressibility and fractional step methods for incompressible flow computations

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Darian, Armen; Sindir, Munir

    1992-01-01

    We have applied and compared, in terms of efficiency and accuracy, two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can take a large temporal integration step at the expense of higher memory requirements and larger operation counts per step. The fractional step method, meanwhile, splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. Its memory requirement and operation count per time step are low; however, the restriction on the size of the time-marching step is more severe. To explore the strengths and weaknesses of the two methods, we used them to compute a two-dimensional driven cavity flow at Reynolds numbers of 100 and 1000. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change in the dependent variables between two consecutive time steps fell below 10^-5.
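    The artificial compressibility idea summarized above can be sketched on a toy problem. The following is a minimal illustration, not the solver from the paper: the linearized 1-D artificial compressibility system on a periodic grid, marched in pseudo-time with a dissipative Lax-Friedrichs update until the velocity divergence decays. The parameter beta, the grid, and the initial field are arbitrary demo choices.

```python
# Minimal sketch (not from the paper): pseudo-time marching of the linearized
# 1-D artificial-compressibility system
#     dp/dtau + beta * du/dx = 0
#     du/dtau + dp/dx        = 0
# on a periodic grid with a dissipative Lax-Friedrichs update.  The
# divergence-like quantity du/dx is driven toward zero as the pseudo-time
# iteration converges.
import math

n, beta = 64, 2.0                    # grid points, artificial compressibility parameter
dx = 2.0 * math.pi / n
dtau = 0.5 * dx / math.sqrt(beta)    # pseudo-time step within the CFL limit

u = [math.sin(i * dx) for i in range(n)]   # initial velocity (divergent)
p = [0.0] * n

def max_div(u):
    """Largest |du/dx| by central differences on the periodic grid."""
    return max(abs(u[(i + 1) % n] - u[i - 1]) / (2 * dx) for i in range(n))

div0 = max_div(u)
for _ in range(2000):                # march in pseudo-time toward steady state
    un, pn = u[:], p[:]
    for i in range(n):
        ip, im = (i + 1) % n, i - 1
        u[i] = 0.5 * (un[ip] + un[im]) - dtau / (2 * dx) * (pn[ip] - pn[im])
        p[i] = 0.5 * (pn[ip] + pn[im]) - beta * dtau / (2 * dx) * (un[ip] - un[im])

print(div0, max_div(u))   # the divergence residual drops by orders of magnitude
```

    The pseudo-pressure waves carry divergence errors around the domain while the scheme's numerical dissipation damps them, which is the mechanism the method exploits to reach a divergence-free steady state.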

  10. Multigrid Acceleration of Time-Accurate DNS of Compressible Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Broeze, Jan; Geurts, Bernard; Kuerten, Hans; Streng, Martin

    1996-01-01

    An efficient scheme for the direct numerical simulation of 3D transitional and developed turbulent flow is presented. Explicit and implicit time integration schemes for the compressible Navier-Stokes equations are compared. The nonlinear system resulting from the implicit time discretization is solved with an iterative method and accelerated by the application of a multigrid technique. Since we use central spatial discretizations and no artificial dissipation is added to the equations, the smoothing method is less effective than in the more traditional use of multigrid in steady-state calculations. Therefore, a special prolongation method is needed in order to obtain an effective multigrid method. This simulation scheme was studied in detail for compressible flow over a flat plate. In the laminar regime and in the first stages of turbulent flow the implicit method provides a speed-up of a factor 2 relative to the explicit method on a relatively coarse grid. At increased resolution this speed-up is enhanced correspondingly.
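    The coarse-grid acceleration idea referred to above can be illustrated on a model problem. The sketch below is a generic two-grid cycle for the 1-D Poisson equation, not the paper's compressible Navier-Stokes multigrid; the smoother, transfer operators, and grid sizes are arbitrary textbook choices.

```python
# Illustrative two-grid cycle for the 1-D Poisson model problem -u'' = f,
# u(0) = u(1) = 0, as a stand-in for the multigrid acceleration idea.
def smooth(u, f, h, sweeps):
    """Damped Jacobi smoothing of A u = f with A = tridiag(-1, 2, -1)/h^2."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = u[i] + 0.8 * (0.5 * (u[i-1] + u[i+1] + h*h*f[i]) - u[i])
        u[:] = new
    return u

def residual(u, f, h):
    n = len(u)
    r = [0.0] * n
    for i in range(1, n - 1):
        r[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
    return r

def two_grid(u, f, h):
    """Pre-smooth, solve the residual equation on a coarser grid, correct."""
    u = smooth(u, f, h, 3)
    r = residual(u, f, h)
    rc = [r[2*i] for i in range((len(u) + 1) // 2)]   # restrict (injection)
    ec = smooth([0.0] * len(rc), rc, 2*h, 50)         # approximate coarse solve
    for i in range(len(ec) - 1):                      # prolong (linear interpolation)
        u[2*i]   += ec[i]
        u[2*i+1] += 0.5 * (ec[i] + ec[i+1])
    return smooth(u, f, h, 3)

n = 65; h = 1.0 / (n - 1)
f = [1.0] * n
u = [0.0] * n
norms = []
for _ in range(10):
    u = two_grid(u, f, h)
    norms.append(max(abs(x) for x in residual(u, f, h)))
print(norms[0], norms[-1])   # residual shrinks cycle by cycle
```

    The smoother removes high-frequency error cheaply while the coarse-grid correction handles the smooth modes; the paper's special prolongation plays the role of the linear interpolation step here.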

  11. Prediction of compressibility parameters of the soils using artificial neural network.

    PubMed

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan

    2016-01-01

    The compression index and recompression index are among the important compressibility parameters for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit, and plasticity index. In this model, the two output parameters, compression index and recompression index, are predicted in a combined network structure. The proposed ANN model is successful for the prediction of the compression index; however, the predicted recompression index values are less satisfactory than those for the compression index.
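    The combined-network idea can be sketched as follows. This is a hedged illustration only: a small one-hidden-layer network with four inputs (natural water content, initial void ratio, liquid limit, plasticity index) and two outputs (compression index Cc, recompression index Cr), trained on synthetic data generated from a made-up linear relation, since the paper's soil database is not available here.

```python
# Hedged sketch of the combined-network structure: four soil index
# properties in, two compressibility parameters out.  The training data
# are synthetic, generated from an arbitrary made-up relation, so only
# the network structure mirrors the paper.
import random
random.seed(0)

def make_sample():
    wn, e0, ll, pi = (random.random() for _ in range(4))
    cc = 0.5 * e0 + 0.3 * ll + 0.1 * wn    # made-up target relations
    cr = 0.1 * cc + 0.05 * pi
    return [wn, e0, ll, pi], [cc, cr]

data = [make_sample() for _ in range(200)]
H = 6                                       # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(4)] for _ in range(H)]
w2 = [[random.uniform(-0.5, 0.5) for _ in range(H)] for _ in range(2)]

def forward(x):
    h = [max(0.0, sum(w*xi for w, xi in zip(row, x))) for row in w1]   # ReLU
    return h, [sum(w*hi for w, hi in zip(row, h)) for row in w2]

def mse():
    return sum((yh - y)**2 for x, ys in data
               for yh, y in zip(forward(x)[1], ys)) / len(data)

mse0 = mse()
lr = 0.05
for _ in range(300):                        # plain gradient descent
    for x, ys in data:
        h, out = forward(x)
        err = [yh - y for yh, y in zip(out, ys)]
        for k in range(2):                  # output-layer gradients
            for j in range(H):
                w2[k][j] -= lr * err[k] * h[j] / len(data)
        for j in range(H):                  # hidden-layer gradients
            if h[j] > 0:
                g = sum(err[k] * w2[k][j] for k in range(2))
                for i in range(4):
                    w1[j][i] -= lr * g * x[i] / len(data)

mse_final = mse()
print(mse0, mse_final)   # training error decreases
```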

  12. Faster tissue interface analysis from Raman microscopy images using compressed factorisation

    NASA Astrophysics Data System (ADS)

    Palmer, Andrew D.; Bannerman, Alistair; Grover, Liam; Styles, Iain B.

    2013-06-01

    The structure of an artificial ligament was examined using Raman microscopy in combination with novel data analysis. Basis approximation and compressed principal component analysis are shown to provide efficient compression of confocal Raman microscopy images, alongside powerful methods for unsupervised analysis. This scheme accelerates data-mining techniques such as principal component analysis, since they can be performed on the compressed data representation, reducing the factorisation time for a single image from five minutes to under a second. Using this workflow, the interface region between a chemically engineered ligament construct and a bone-mimic anchor was examined. Natural ligament contains a striated interface between bone and tissue that provides improved mechanical load tolerance; a similar interface was found in the ligament construct.

  13. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  14. Artificial fluid properties for large-eddy simulation of compressible turbulent mixing

    NASA Astrophysics Data System (ADS)

    Cook, Andrew W.

    2007-05-01

    An alternative methodology is described for large-eddy simulation (LES) of flows involving shocks, turbulence, and mixing. In lieu of filtering the governing equations, it is postulated that the large-scale behavior of a LES fluid, i.e., a fluid with artificial properties, will be similar to that of a real fluid, provided the artificial properties obey certain constraints. The artificial properties consist of modifications to the shear viscosity, bulk viscosity, thermal conductivity, and species diffusivity of a fluid. The modified transport coefficients are designed to damp out high wavenumber modes, close to the resolution limit, without corrupting lower modes. Requisite behavior of the artificial properties is discussed and results are shown for a variety of test problems, each designed to exercise different aspects of the models. When combined with a tenth-order compact scheme, the overall method exhibits excellent resolution characteristics for turbulent mixing, while capturing shocks and material interfaces in a crisp fashion.

  15. Preconditioning for Numerical Simulation of Low Mach Number Three-Dimensional Viscous Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Tweedt, Daniel L.; Chima, Rodrick V.; Turkel, Eli

    1997-01-01

    A preconditioning scheme has been implemented into a three-dimensional viscous computational fluid dynamics code for turbomachine blade rows. The preconditioning allows the code, originally developed for simulating compressible flow fields, to be applied to nearly-incompressible, low Mach number flows. A brief description is given of the compressible Navier-Stokes equations for a rotating coordinate system, along with the preconditioning method employed. Details about the conservative formulation of artificial dissipation are provided, and different artificial dissipation schemes are discussed and compared. The preconditioned code was applied to a well-documented case involving the NASA large low-speed centrifugal compressor for which detailed experimental data are available for comparison. Performance and flow field data are compared for the near-design operating point of the compressor, with generally good agreement between computation and experiment. Further, significant differences between computational results for the different numerical implementations, revealing different levels of solution accuracy, are discussed.

  16. Artificial acoustic stiffness reduction in fully compressible, direct numerical simulation of combustion

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Trouvé, Arnaud

    2004-09-01

    A pseudo-compressibility method is proposed to relax the acoustic time step restriction found in fully compressible, explicit flow solvers. The method manipulates terms in the governing equations of order Ma^2, where Ma is a characteristic flow Mach number. A decrease in the speed of acoustic waves is obtained by adding an extra term to the balance equation for total energy. This term is proportional to the flow dilatation and uses a decomposition of the dilatational field into an acoustic component and a component due to heat transfer. The present method is a variation of the pressure gradient scaling (PGS) method proposed in Ramshaw et al (1985 Pressure gradient scaling method for fluid flow with nearly uniform pressure J. Comput. Phys. 58 361-76). It achieves gains in computational efficiency similar to PGS: at the cost of a slightly more involved right-hand-side computation, the numerical time step increases by a full order of magnitude. It also features the added benefit of preserving the hydrodynamic pressure field. The original and modified PGS methods are implemented in a parallel direct numerical simulation solver developed for applications to turbulent reacting flows with detailed chemical kinetics. The performance of the pseudo-compressibility methods is illustrated in a series of test problems ranging from isothermal sound propagation to laminar premixed flame problems.
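    The time-step gain claimed for PGS-type methods follows directly from the acoustic CFL condition. A back-of-the-envelope sketch, with illustrative numbers not taken from the paper:

```python
# Sketch of why pressure-gradient scaling (PGS) relaxes the acoustic
# time-step limit: scaling the pressure-gradient term by 1/alpha**2
# lowers the effective sound speed from c to c/alpha, so the acoustic
# CFL condition dt <= CFL * dx / (|u| + c_eff) admits a step roughly
# alpha times larger in low-Mach flow.  All numbers are illustrative.
import math

gamma, R, T = 1.4, 287.0, 300.0
c = math.sqrt(gamma * R * T)         # physical sound speed, ~347 m/s
u, dx, cfl = 3.0, 1.0e-3, 0.5        # slow flow (Ma ~ 0.01), 1 mm cells
alpha = 10.0                         # PGS scaling factor

dt_plain = cfl * dx / (abs(u) + c)
dt_pgs = cfl * dx / (abs(u) + c / alpha)
print(dt_plain, dt_pgs, dt_pgs / dt_plain)
```

    For this low-Mach case the admissible step grows by nearly the full factor alpha, consistent with the order-of-magnitude gain quoted above.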

  17. A nonlinear relaxation/quasi-Newton algorithm for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Edwards, Jack R.; Mcrae, D. S.

    1992-01-01

    A highly efficient implicit method for the computation of steady, two-dimensional compressible Navier-Stokes flowfields is presented. The discretization of the governing equations is hybrid in nature, with flux-vector splitting utilized in the streamwise direction and central differences with flux-limited artificial dissipation used for the transverse fluxes. Line Jacobi relaxation is used to provide a suitable initial guess for a new nonlinear iteration strategy based on line Gauss-Seidel sweeps. The applicability of quasi-Newton methods as convergence accelerators for this and other line relaxation algorithms is discussed, and efficient implementations of such techniques are presented. Convergence histories and comparisons with experimental data are presented for supersonic flow over a flat plate and for several high-speed compression corner interactions. Results indicate a marked improvement in computational efficiency over more conventional upwind relaxation strategies, particularly for flowfields containing large pockets of streamwise subsonic flow.

  18. A Hermite-based lattice Boltzmann model with artificial viscosity for compressible viscous flows

    NASA Astrophysics Data System (ADS)

    Qiu, Ruofan; Chen, Rongqian; Zhu, Chenxiang; You, Yancheng

    2018-05-01

    A lattice Boltzmann model on Hermite basis for compressible viscous flows is presented in this paper. The model is developed in the framework of double-distribution-function approach, which has adjustable specific-heat ratio and Prandtl number. It contains a density distribution function for the flow field and a total energy distribution function for the temperature field. The equilibrium distribution function is determined by Hermite expansion, and the D3Q27 and D3Q39 three-dimensional (3D) discrete velocity models are used, in which the discrete velocity model can be replaced easily. Moreover, an artificial viscosity is introduced to enhance the model for capturing shock waves. The model is tested through several cases of compressible flows, including 3D supersonic viscous flows with boundary layer. The effect of artificial viscosity is estimated. Besides, D3Q27 and D3Q39 models are further compared in the present platform.

  19. Redistribution of Welding Residual Stresses of Crack Tip Opening Displacement Specimen by Local Compression.

    PubMed

    Kim, Young-Gon; Song, Kuk-Hyun; Lee, Dong-Hoon; Joo, Sung-Min

    2018-03-01

    The crack tip opening displacement (CTOD) test, which evaluates the fracture toughness of a cracked material, is very important for ensuring structural stability under severe service environments. The validity of a CTOD test result is judged against several criteria in the specification standards; one of them is the length of the fatigue pre-crack artificially generated inside the specimen. For acceptable CTOD test results, the fatigue pre-crack must have a reasonably sharp crack front. The fatigue crack propagates from the tip of the machined notch and may grow irregularly because of the residual stress field. To overcome this problem, test codes suggest the local compression method, the reversed bending method, and the stepwise high-R-ratio method to reduce the disparity of the residual stress distribution inside the specimen. In this paper, the relation between the degree of local compression and the distribution of welding residual stress is analyzed by finite element analysis in order to determine the effective amount of local compression for the test piece. The analysis results show that the initial welding residual stress varies dramatically in three dimensions during cutting, notch machining, and local compression, owing to the change of internal restraint force. From the simulation results, the authors find that there is an optimum amount of local compression for generating regular fatigue pre-crack propagation: local compression of 0.5% of the model width is the most effective for making the residual stress distribution uniform.

  20. Symposium on Turbulent Shear Flows (8th) Held in Munich, Germany on 9-11 September 1991. Volume 2. Sessions 19-31, Poster Sessions

    DTIC Science & Technology

    1991-09-01

    Excerpt: ...flames in the atmosphere using a second-moment turbulence model (Hermilo Ramirez-Leon, Claude Rey and Jean-Francois Sini, Laboratoire de Mecanique des...); ...directly satisfied by an extended version of the artificial compressibility implicit method (Ramirez-Leon et al., 1991); ...the isotropization-of-production concept. For the compressible fluid case, Ramirez-Leon et al (1990)...

  1. Free-Lagrange methods for compressible hydrodynamics in two space dimensions

    NASA Astrophysics Data System (ADS)

    Crowley, W. E.

    1985-03-01

    Since 1970 a research and development program in Free-Lagrange methods has been active at Livermore. The initial steps were taken with incompressible flows for simplicity. Since then the effort has been concentrated on compressible flows with shocks in two space dimensions and time. In general, the line integral method has been used to evaluate derivatives and the artificial viscosity method has been used to deal with shocks. Basically, two Free-Lagrange formulations for compressible flows in two space dimensions and time have been tested, and both will be described. In method one, all prognostic quantities were node centered and staggered in time, while the artificial viscosity was zone centered. One mesh reconnection philosophy was that the mesh should be optimized so that nearest neighbors were connected together; another was that vertex angles should tend toward equality. In method one, all mesh elements were triangles. In method two, both quadrilateral and triangular mesh elements are permitted. The mesh variables are staggered in space and time as suggested originally by Richtmyer and von Neumann. The mesh reconnection strategy is entirely different in method two: in contrast to the global nearest-neighbor strategy, a more local strategy reconnects in order to keep the integration time step above a user-chosen threshold, and an additional strategy reconnects in the vicinity of large relative fluid motions. Mesh reconnection consists of two parts: (1) the tools that permit nodes to be merged, quads to be split into triangles, and so on; and (2) the strategy that dictates how and when to use the tools. Both tools and strategies change with time in a continuing effort to expand the capabilities of the method. New ideas are continually being tried and evaluated.

  2. Preconditioned characteristic boundary conditions based on artificial compressibility method for solution of incompressible flows

    NASA Astrophysics Data System (ADS)

    Hejranfar, Kazem; Parseh, Kaveh

    2017-09-01

    The preconditioned characteristic boundary conditions based on the artificial compressibility (AC) method are implemented at artificial boundaries for the solution of two- and three-dimensional incompressible viscous flows in the generalized curvilinear coordinates. The compatibility equations and the corresponding characteristic variables (or the Riemann invariants) are mathematically derived and then applied as suitable boundary conditions in a high-order accurate incompressible flow solver. The spatial discretization of the resulting system of equations is carried out by the fourth-order compact finite-difference (FD) scheme. In the preconditioning applied here, the value of AC parameter in the flow field and also at the far-field boundary is automatically calculated based on the local flow conditions to enhance the robustness and performance of the solution algorithm. The code is fully parallelized using the Concurrency Runtime standard and Parallel Patterns Library (PPL) and its performance on a multi-core CPU is analyzed. The incompressible viscous flows around a 2-D circular cylinder, a 2-D NACA0012 airfoil and also a 3-D wavy cylinder are simulated and the accuracy and performance of the preconditioned characteristic boundary conditions applied at the far-field boundaries are evaluated in comparison to the simplified boundary conditions and the non-preconditioned characteristic boundary conditions. It is indicated that the preconditioned characteristic boundary conditions considerably improve the convergence rate of the solution of incompressible flows compared to the other boundary conditions and the computational costs are significantly decreased.
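    The characteristic machinery underlying such boundary conditions can be sketched for the 1-D artificial compressibility system. The snippet below is an illustration with arbitrary values of the AC parameter beta and velocity u, not the paper's preconditioned 3-D curvilinear formulation:

```python
# Sketch (hedged): eigenstructure of the 1-D artificial-compressibility
# system in conservative form, with flux F(q) = (beta*u, u*u + p) for
# q = (p, u).  The flux Jacobian has eigenvalues u +/- sqrt(u*u + beta),
# i.e. an "artificial sound speed" c = sqrt(u*u + beta).  One
# characteristic always enters and one always leaves the domain, which
# is what characteristic far-field conditions exploit.  beta and u are
# arbitrary demo values.
import math

def ac_eigenvalues(u, beta):
    c = math.sqrt(u * u + beta)      # artificial sound speed
    return u - c, u + c

beta = 4.0
for u in (-0.8, 0.0, 0.5, 2.0):
    lam_minus, lam_plus = ac_eigenvalues(u, beta)
    # lam_minus < 0 < lam_plus for every finite u: information always
    # travels both ways, so each boundary has exactly one incoming invariant.
    print(u, lam_minus, lam_plus)
```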

  3. Mechanically activated artificial cell by using microfluidics

    NASA Astrophysics Data System (ADS)

    Ho, Kenneth K. Y.; Lee, Lap Man; Liu, Allen P.

    2016-09-01

    All living organisms sense mechanical forces. Engineering mechanosensitive artificial cell through bottom-up in vitro reconstitution offers a way to understand how mixtures of macromolecules assemble and organize into a complex system that responds to forces. We use stable double emulsion droplets (aqueous/oil/aqueous) to prototype mechanosensitive artificial cells. In order to demonstrate mechanosensation in artificial cells, we develop a novel microfluidic device that is capable of trapping double emulsions into designated chambers, followed by compression and aspiration in a parallel manner. The microfluidic device is fabricated using multilayer soft lithography technology, and consists of a control layer and a deformable flow channel. Deflections of the PDMS membrane above the main microfluidic flow channels and trapping chamber array are independently regulated pneumatically by two sets of integrated microfluidic valves. We successfully compress and aspirate the double emulsions, which result in transient increase and permanent decrease in oil thickness, respectively. Finally, we demonstrate the influx of calcium ions as a response of our mechanically activated artificial cell through thinning of oil. The development of a microfluidic device to mechanically activate artificial cells creates new opportunities in force-activated synthetic biology.

  4. Characterization of synthetic foam structures used to manufacture artificial vertebral trabecular bone.

    PubMed

    Fürst, David; Senck, Sascha; Hollensteiner, Marianne; Esterer, Benjamin; Augat, Peter; Eckstein, Felix; Schrempf, Andreas

    2017-07-01

    Artificial materials reflecting the mechanical properties of human bone are essential for valid and reliable implant testing and design. They also are of great benefit for realistic simulation of surgical procedures. The objective of this study was therefore to characterize two groups of self-developed synthetic foam structures by static compressive testing and by microcomputed tomography. Two mineral fillers and varying amounts of a blowing agent were used to create different expansion behavior of the synthetic open-cell foams. The resulting compressive and morphometric properties thus differed within and also slightly between both groups. Apart from the structural anisotropy, the compressive and morphometric properties of the synthetic foam materials were shown to mirror the respective characteristics of human vertebral trabecular bone in good approximation. In conclusion, the artificial materials created can be used to manufacture valid synthetic bones for surgical training. Further, they provide novel possibilities for studying the relationship between trabecular bone microstructure and biomechanical properties. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. An activated energy approach for accelerated testing of the deformation of UHMWPE in artificial joints.

    PubMed

    Galetz, Mathias Christian; Glatzel, Uwe

    2010-05-01

    The deformation behavior of ultra-high molecular weight polyethylene (UHMWPE) is studied in the temperature range of 23-80 degrees C. Samples are examined in quasi-static compression, tensile, and creep tests to determine the accelerated deformation of UHMWPE at elevated temperatures. The deformation mechanisms under compression load can be described by a single strain-rate- and temperature-dependent Eyring process. The activation energy and volume of that process do not change between 23 degrees C and 50 degrees C, suggesting that the deformation mechanism under compression remains stable within this temperature range. Tribological tests are conducted to transfer this activated energy approach to the deformation behavior under loading typical of artificial knee joints. While the approach does not cover the wear mechanisms close to the surface, testing at higher temperatures is shown to have significant potential to reduce the testing time for lifetime predictions of the macroscopic creep and deformation behavior of artificial joints. Copyright 2010. Published by Elsevier Ltd.

  6. Interior Fluid Dynamics of Liquid-Filled Projectiles

    DTIC Science & Technology

    1989-12-01

    ...the Sandia code. The previous codes are primarily based on finite-difference approximations with relatively coarse grids and were designed without... exploits Chorin's method of artificial compressibility. The steady solution at 11 x 24 x 21 grid points in the r, theta, z directions is obtained by integrating... differences in the radial and axial directions and pseudospectral differencing in the azimuthal direction. Nonuniform grids are introduced for increased...

  7. Numerical solution of the two-dimensional time-dependent incompressible Euler equations

    NASA Technical Reports Server (NTRS)

    Whitfield, David L.; Taylor, Lafayette K.

    1994-01-01

    A numerical method is presented for solving the artificial compressibility form of the 2D time-dependent incompressible Euler equations. The approach is based on using an approximate Riemann solver for the cell-face numerical flux of a finite volume discretization. Characteristic variable boundary conditions are developed and presented for all boundaries and inflow/outflow situations. The system of algebraic equations is solved using the discretized Newton-relaxation (DNR) implicit method. Numerical results are presented for both steady and unsteady flow.
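    The discretized Newton-relaxation idea, an outer Newton iteration whose linear system is solved only approximately by relaxation sweeps, can be sketched on a small algebraic system. The 2x2 system below is an arbitrary stand-in for the discrete finite-volume residual:

```python
# Hedged sketch of the discretized Newton-relaxation (DNR) idea: at each
# outer Newton step, the linear system J dx = -F is solved only
# approximately by a few Gauss-Seidel relaxation sweeps, with J formed by
# finite differences of the discrete residual.  The 2x2 test system is
# arbitrary; the paper applies the approach to the finite-volume residual
# of the artificial-compressibility equations.
def F(x):
    return [x[0]**2 + x[1] - 3.0,
            x[0] + x[1]**2 - 5.0]

def fd_jacobian(x, eps=1e-7):
    """Finite-difference approximation of the residual Jacobian."""
    f0, n = F(x), len(x)
    J = [[0.0] * n for _ in range(n)]
    for j in range(n):
        xp = x[:]
        xp[j] += eps
        fp = F(xp)
        for i in range(n):
            J[i][j] = (fp[i] - f0[i]) / eps
    return J

def gauss_seidel(J, rhs, sweeps=20):
    """Approximate solve of J dx = rhs by relaxation sweeps."""
    n = len(rhs)
    dx = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(J[i][j] * dx[j] for j in range(n) if j != i)
            dx[i] = (rhs[i] - s) / J[i][i]
    return dx

x = [1.0, 1.0]                        # initial guess
for _ in range(30):                   # outer Newton-relaxation iterations
    res = F(x)
    dx = gauss_seidel(fd_jacobian(x), [-r for r in res])
    x = [xi + di for xi, di in zip(x, dx)]
print(x, max(abs(r) for r in F(x)))   # converges to the root (1, 2)
```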

  8. Artificial limb connection

    NASA Technical Reports Server (NTRS)

    Owens, L. J.

    1974-01-01

    Connection simplifies and eases donning and removing artificial limb; eliminates harnesses and clamps; and reduces skin pressures by allowing bone to carry all tensile and part of compressive loads between prosthesis and stump. Because connection is modular, it is easily modified to suit individual needs.

  9. Mechanical properties of silorane-based and methacrylate-based composite resins after artificial aging.

    PubMed

    de Castro, Denise Tornavoi; Lepri, César Penazzo; Valente, Mariana Lima da Costa; dos Reis, Andréa Cândido

    2016-01-01

    The aim of this study was to compare the compressive strength of a silorane-based composite resin (Filtek P90) to that of conventional composite resins (Charisma, Filtek Z250, Fill Magic, and NT Premium) before and after accelerated artificial aging (AAA). For each composite resin, 16 cylindrical specimens were prepared and divided into 2 groups. One group underwent analysis of compressive strength in a universal testing machine 24 hours after preparation, and the other was subjected first to 192 hours of AAA and then the compressive strength test. Data were analyzed by analysis of variance, followed by the Tukey HSD post hoc test (α = 0.05). Some statistically significant differences in compressive strength were found among the commercial brands (P < 0.001). The conventional composite resin Fill Magic presented the best performance before (P < 0.05) and after AAA (P < 0.05). Values for compressive strength of the silorane-based composite were among the lowest obtained, both before and after aging. Comparison of each material before and after AAA revealed that the aging process did not influence the compressive strength of the tested resins (P = 0.785).

  10. Hemodynamic deterioration during extracorporeal membrane oxygenation weaning in a patient with a total artificial heart.

    PubMed

    Hosseinian, Leila; Levin, Matthew A; Fischer, Gregory W; Anyanwu, Anelechi C; Torregrossa, Gianluca; Evans, Adam S

    2015-01-01

    The Total Artificial Heart (Syncardia, Tucson, AZ) is approved for use as a bridge-to-transplant or destination therapy in patients who have irreversible end-stage biventricular heart failure. We present a unique case, in which the inferior vena cava compression by a total artificial heart was initially masked for days by the concurrent placement of an extracorporeal membrane oxygenation cannula. This is the case of a 33-year-old man admitted to our institution with recurrent episodes of ventricular tachycardia requiring emergent total artificial heart and venovenous extracorporeal membrane oxygenation placement. This interesting scenario highlights the importance for critical care physicians to have an understanding of exact anatomical localization of a total artificial heart, extracorporeal membrane oxygenation, and their potential interactions. In total artificial heart patients with hemodynamic compromise or reduced device filling, consideration should always be given to venous inflow compression, particularly in those with smaller body surface area. Transesophageal echocardiogram is a readily available diagnostic tool that must be considered standard of care, not only in the operating room but also in the ICU, when dealing with this complex subpopulation of cardiac patients.

  11. High-strength mineralized collagen artificial bone

    NASA Astrophysics Data System (ADS)

    Qiu, Zhi-Ye; Tao, Chun-Sheng; Cui, Helen; Wang, Chang-Ming; Cui, Fu-Zhai

    2014-03-01

    Mineralized collagen (MC) is a biomimetic material that mimics the natural bone matrix in terms of both chemical composition and microstructure. Biomimetic MC possesses good biocompatibility and osteogenic activity, and is capable of guiding bone regeneration when used for bone defect repair. However, the mechanical strength of existing MC artificial bone is too low to provide effective support at human load-bearing sites, so it can only be used for repair at non-load-bearing sites, such as bone defect filling, bone graft augmentation, and so on. In the present study, a high-strength MC artificial bone material was developed by using collagen as the template for the biomimetic mineralization of calcium phosphate, followed by a cold compression molding process at a certain pressure. The appearance and density of the dense MC were similar to those of natural cortical bone, and its phase composition conformed to that of animal cortical bone, as demonstrated by XRD. Mechanical testing showed that the compressive strength was comparable to that of human cortical bone, while the compressive modulus was as low as that of human cancellous bone. Such high strength is able to provide effective mechanical support for bone defect repair at human load-bearing sites, and the low compressive modulus can help avoid stress shielding in bone regeneration applications. Both in vitro cell experiments and an in vivo implantation assay demonstrated good biocompatibility of the material, and an in vivo stability evaluation indicated that this high-strength MC artificial bone could provide long-term effective mechanical support at human load-bearing sites.

  12. Quality by design approach: application of artificial intelligence techniques of tablets manufactured by direct compression.

    PubMed

    Aksu, Buket; Paradkar, Anant; de Matas, Marcel; Ozer, Ozgen; Güneri, Tamer; York, Peter

    2012-12-01

    The publication of the International Conference on Harmonisation (ICH) Q8, Q9, and Q10 guidelines paved the way for the standardization of quality after the Food and Drug Administration issued current Good Manufacturing Practices guidelines in 2003. "Quality by Design", described in the ICH Q8 guideline, offers a better scientific understanding of critical process and product qualities using knowledge obtained during the life cycle of a product. In this scope, the "knowledge space" is a summary of all process knowledge obtained during product development, and the "design space" is the region in which a product can be manufactured within acceptable limits. To create these spaces, artificial neural networks (ANNs) can be used to capture the multidimensional interactions of the input variables and to bind these variables closely to a design space. This helps guide the experimental design process to include interactions among the input variables, along with modeling and optimization of pharmaceutical formulations. The objective of this study was to develop an integrated multivariate approach to obtain a quality product based on an understanding of the cause-effect relationships between formulation ingredients and product properties, using ANNs and genetic programming, for ramipril tablets prepared by the direct compression method. The data were generated through the systematic application of design of experiments (DoE) principles, and optimization studies were carried out using artificial neural network and neurofuzzy logic programs.

  13. Artificial neural network does better spatiotemporal compressive sampling

    NASA Astrophysics Data System (ADS)

    Lee, Soo-Young; Hsu, Charles; Szu, Harold

    2012-06-01

    Spatiotemporal sparseness is generated naturally by the human visual system, modeled here as an artificial neural network with associative memory. Sparseness is precisely what compressive sensing achieves: information concentration. To concentrate information, one can use spatial correlation, the spatial FFT, the DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). For higher-dimensional spatiotemporal information concentration, however, mathematics alone cannot be as flexible as a living human sensory system, evidently for survival reasons. The rest of the story is given in the paper.

  14. Crack Damage Parameters and Dilatancy of Artificially Jointed Granite Samples Under Triaxial Compression

    NASA Astrophysics Data System (ADS)

    Walton, G.; Alejano, L. R.; Arzua, J.; Markley, T.

    2018-06-01

A database of post-peak triaxial test results was created for artificial joint planes introduced into cylindrical compression samples of Blanco Mera granite. Aside from examining the effect of artificial jointing on major rock and rock mass parameters such as stiffness, peak strength and residual strength, other strength parameters related to brittle cracking and post-yield dilatancy were analyzed. Crack initiation and crack damage values for both the intact and artificially jointed samples were determined, and these damage envelopes were found to be notably impacted by the presence of jointing. The data suggest that with increased density of jointing, the samples transition from a combined matrix-damage and joint-slip yielding mechanism to yield dominated by joint slip. Additionally, post-yield dilation data were analyzed in the context of a mobilized dilation angle model, and the peak dilation angle was found to decrease significantly when joints were present in the samples. These dilatancy results are consistent with hypotheses in the literature on rock mass dilatancy.
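The mobilized dilation angle mentioned in the record can be computed from plastic strain increments. A minimal sketch, using one common axisymmetric-plasticity form (not necessarily the exact formulation used by the authors) and made-up strain increments:

```python
import math

def dilation_angle(d_eps_1, d_eps_v):
    """Mobilized dilation angle (degrees) for axisymmetric triaxial
    compression, using one common plasticity form (compression positive):
        sin(psi) = -d_eps_v / (2*d_eps_1 - d_eps_v)
    d_eps_1: increment of axial plastic strain (> 0 in compression)
    d_eps_v: increment of volumetric plastic strain (< 0 when dilating)
    """
    s = -d_eps_v / (2.0 * d_eps_1 - d_eps_v)
    return math.degrees(math.asin(s))

# Illustrative post-yield increments (made-up values): the sample dilates.
print(round(dilation_angle(1.0e-3, -0.5e-3), 2))
```

A purely compacting increment (d_eps_v ≥ 0 with this sign convention) gives a zero or negative mobilized angle, which is the behavior the joint-slip-dominated samples trend toward.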

  15. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture: Partial Least Squares (PLS) as the traditional chemometric model and Artificial Neural Networks (ANN) as the advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA), giving the methods PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in a pharmaceutical dosage form by processing the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten as a validation set to assess the prediction ability of the suggested methods. The validity of the proposed methods was further assessed using the standard addition technique.
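The PCA data-compression step behind PCA-ANN can be sketched as follows, with synthetic Gaussian bands standing in for the UV spectra of the three drugs (all wavelengths, band positions, and mixture ratios below are invented for illustration):

```python
import numpy as np

# Project mixture spectra onto a few principal components before feeding
# them to a model: 101 wavelength variables collapse to 3 PC scores.
rng = np.random.default_rng(0)
wl = np.linspace(200, 400, 101)          # wavelengths, nm
band = lambda c, w: np.exp(-((wl - c) / w) ** 2)
pure = np.stack([band(240, 15), band(280, 20), band(320, 12)])  # 3 "drugs"

ratios = rng.uniform(0.2, 1.0, size=(25, 3))   # 25 mixtures, as in a DoE
spectra = ratios @ pure + 0.001 * rng.standard_normal((25, 101))

# PCA via SVD of the mean-centred data matrix
X = spectra - spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:3].T                     # 101 variables -> 3 PC scores

explained = (s[:3] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by 3 PCs: {explained:.4f}")
```

Because the mixtures are linear combinations of three pure spectra, three components capture essentially all of the variance; the PC scores, not the raw spectra, would then be the ANN inputs.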

  16. Elastic MCF Rubber with Photovoltaics and Sensing for Use as Artificial or Hybrid Skin (H-Skin): 1st Report on Dry-Type Solar Cell Rubber with Piezoelectricity for Compressive Sensing.

    PubMed

    Shimada, Kunio

    2018-06-05

Ordinary solar cells are very difficult to bend, compress, or stretch. However, if they possessed elastic, flexible, and extensible properties, in addition to piezoelectricity and resistivity, they could be put to effective use as artificial skin installed over human-like robots or humanoids, serving as a husk that generates electric power from solar energy and perceives force or temperature changes. Therefore, we propose a new type of artificial skin, called hybrid skin (H-Skin), for a humanoid robot having hybrid functions. In this study, a novel elastic solar cell is developed from natural rubber that is electrolytically polymerized under an applied magnetic field so that magnetic clusters of metal particles are configured within the rubber. The resulting material, named magnetic compound fluid rubber (MCF rubber), is elastic, flexible, and extensible. The present report deals with a dry-type MCF rubber solar cell that uses photosensitized dye molecules. First, the photovoltaic mechanism in the material is investigated. Next, the changes in its photovoltaic properties under irradiation by visible light are measured under compression, and the effect of compression on its piezoelectric properties is investigated.

  17. An Exploratory Compressive Strength Of Concrete Containing Modified Artificial Polyethylene Aggregate (MAPEA)

    NASA Astrophysics Data System (ADS)

    Hadipramana, J.; Mokhatar, S. N.; Samad, A. A. A.; Hakim, N. F. A.

    2016-11-01

Concrete is widely used around the world as a building and construction material. However, its constituent materials are costly, particularly in the context of the global economic recession. This exploratory study seeks an alternative by replacing natural aggregate with plastic waste. An investigation of Modified Artificial Polyethylene Aggregate (MAPEA) as a natural aggregate replacement in concrete was conducted through experimental work. The MAPEA was created to improve the bonding of Artificial Polyethylene Aggregate (APEA) with the cement paste. Concrete was mixed with 3%, 6%, 9%, and 12% of APEA and MAPEA and cured for 14 and 28 days. Compressive strength tests were conducted to find the optimum composition of MAPEA in concrete, compared against the APEA concrete, and the influence and behaviour of MAPEA in concrete were observed. Scanning electron microscopy was applied to examine the microstructure of the MAPEA and APEA concretes. The results showed that a high proportion of artificial aggregate produced inferior strength, and that the mix with 3% MAPEA achieved the highest compressive strength of the contents tested. The modification of APEA into MAPEA increased concrete strength owing to its surface roughness. However, interfacial zone cracking was still found and decreased the strength of the MAPEA concrete, especially at 28 days of age.

  18. Spectrum recovery method based on sparse representation for segmented multi-Gaussian model

    NASA Astrophysics Data System (ADS)

    Teng, Yidan; Zhang, Ye; Ti, Chunli; Su, Nan

    2016-09-01

Hyperspectral images (HSIs) offer excellent feature discriminability by supplying diagnostic characteristics with high spectral resolution. However, various degradations, including water absorption and band-continuous noise, can corrupt the spectral information. On the other hand, the huge data volume and strong redundancy among spectra create intense demand for compressing HSIs in the spectral dimension, which also leads to loss of spectral information. Reconstruction of the spectral diagnostic characteristics is therefore of irreplaceable significance for subsequent applications of HSIs. This paper introduces a spectrum restoration method for HSIs that makes use of a segmented multi-Gaussian model (SMGM) and sparse representation. An SMGM is established to represent the asymmetric spectral absorption and reflection characteristics, and its rationality and sparsity are discussed. Applying compressed sensing (CS) theory, we implement a sparse representation of the SMGM; the degraded and compressed HSIs can then be reconstructed using the uninjured or key bands. Finally, we apply a low-rank matrix recovery (LRMR) algorithm as post-processing to restore the spatial details. The proposed method was tested on spectral data captured on the ground under artificial water-absorption conditions and on an AVIRIS HSI data set. The experimental results, in terms of qualitative and quantitative assessments, demonstrate the effectiveness of the method in recovering spectral information from both degradation and lossy compression, while the spectral diagnostic characteristics and the spatial geometric features are well preserved.

  19. Videographic assessment of cardiopulmonary resuscitation quality in the pediatric emergency department.

    PubMed

    Donoghue, Aaron; Hsieh, Ting-Chang; Myers, Sage; Mak, Allison; Sutton, Robert; Nadkarni, Vinay

    2015-06-01

To describe the adherence to guidelines for CPR in a tertiary pediatric emergency department (ED) where resuscitations are reviewed by videorecording. Resuscitations in a tertiary pediatric ED are videorecorded as part of a quality improvement project. Patients receiving CPR under videorecorded conditions were eligible for inclusion. CPR parameters were quantified by retrospective review. Data were described by 30-s epoch (compression rate, ventilation rate, compression:ventilation ratio), by segment (duration of single providers' compressions) and by overall event (compression fraction). Duration of interruptions in compressions was measured; tasks completed during pauses were tabulated. 33 children received CPR under videorecorded conditions. A total of 650 min of CPR were analyzed. Chest compressions were performed at <100/min in 90/714 (13%) of epochs; 100-120/min in 309/714 (43%); >120/min in 315/714 (44%). Ventilations were 6-12 breaths/min in 201/708 (23%) of epochs and >12/min in 489/708 (70%). During CPR without an artificial airway, compression:ventilation coordination (15:2) was achieved in 93/234 (40%) of epochs. 178 pauses in CPR occurred; 120 (67%) were <10 s in duration. Of 370 segments of compressions by individual providers, 282/370 (76%) were <2 min in duration. Median compression fraction was 91% (range 88-100%). CPR in a tertiary pediatric ED frequently met recommended parameters for compression rate, pause duration, and compression fraction. Hyperventilation and failure of C:V coordination were very common. Future studies should focus on the impact of training methods on CPR performance as documented by videorecording. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
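The epoch-based metrics in this record (per-epoch compression rate, pauses, compression fraction) can be sketched from a list of compression timestamps. The timestamps, pause threshold, and event duration below are fabricated for illustration:

```python
# Compute per-30-s-epoch compression rates and the overall compression
# fraction from compression timestamps (seconds), as a video reviewer might.
def epoch_rates(times, epoch_s=30.0):
    """Compressions per minute in each 30-s epoch."""
    if not times:
        return []
    n_epochs = int(times[-1] // epoch_s) + 1
    counts = [0] * n_epochs
    for t in times:
        counts[int(t // epoch_s)] += 1
    return [c * 60.0 / epoch_s for c in counts]

def compression_fraction(times, total_s, gap_s=1.5):
    """Fraction of the event spent in compressions: total time minus pause
    time, where a pause is any inter-compression gap longer than gap_s."""
    paused = sum(b - a - gap_s for a, b in zip(times, times[1:]) if b - a > gap_s)
    return (total_s - paused) / total_s

times = [i * 0.55 for i in range(100)]          # ~109/min, no pauses
times += [60.0 + i * 0.55 for i in range(100)]  # a ~5.6 s pause, then resume
print(epoch_rates(times)[:2], round(compression_fraction(times, 120.0), 3))
```

With these invented timestamps the first two epochs come out at 110 and 90 compressions/min, and the single pause drops the compression fraction just below 97%.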

  20. Fluid-driven origami-inspired artificial muscles.

    PubMed

    Li, Shuguang; Vogt, Daniel M; Rus, Daniela; Wood, Robert J

    2017-12-12

    Artificial muscles hold promise for safe and powerful actuation for myriad common machines and robots. However, the design, fabrication, and implementation of artificial muscles are often limited by their material costs, operating principle, scalability, and single-degree-of-freedom contractile actuation motions. Here we propose an architecture for fluid-driven origami-inspired artificial muscles. This concept requires only a compressible skeleton, a flexible skin, and a fluid medium. A mechanical model is developed to explain the interaction of the three components. A fabrication method is introduced to rapidly manufacture low-cost artificial muscles using various materials and at multiple scales. The artificial muscles can be programed to achieve multiaxial motions including contraction, bending, and torsion. These motions can be aggregated into systems with multiple degrees of freedom, which are able to produce controllable motions at different rates. Our artificial muscles can be driven by fluids at negative pressures (relative to ambient). This feature makes actuation safer than most other fluidic artificial muscles that operate with positive pressures. Experiments reveal that these muscles can contract over 90% of their initial lengths, generate stresses of ∼600 kPa, and produce peak power densities over 2 kW/kg-all equal to, or in excess of, natural muscle. This architecture for artificial muscles opens the door to rapid design and low-cost fabrication of actuation systems for numerous applications at multiple scales, ranging from miniature medical devices to wearable robotic exoskeletons to large deployable structures for space exploration. Copyright © 2017 the Author(s). Published by PNAS.

  3. Automated information-analytical system for thunderstorm monitoring and early warning alarms using modern physical sensors and information technologies with elements of artificial intelligence

    NASA Astrophysics Data System (ADS)

    Boldyreff, Anton S.; Bespalov, Dmitry A.; Adzhiev, Anatoly Kh.

    2017-05-01

Methods of artificial intelligence are a good fit for forecasting weather phenomena because they can process large amounts of diverse data. In this paper, a recirculation neural network is implemented for the prediction of thunderstorm events. Large amounts of experimental data from lightning sensors and electric-field mill networks were received and analyzed, and the average recognition accuracy for the sensor signals was calculated. It is shown that recirculation neural networks are a promising solution for forecasting thunderstorms and related weather phenomena: they recognize elements of the sensor signals with high efficiency, and they can compress images and extract their characteristic features for subsequent recognition.

  4. Improving Non-Destructive Concrete Strength Tests Using Support Vector Machines

    PubMed Central

    Shih, Yi-Fan; Wang, Yu-Ren; Lin, Kuo-Liang; Chen, Chin-Wen

    2015-01-01

    Non-destructive testing (NDT) methods are important alternatives when destructive tests are not feasible to examine the in situ concrete properties without damaging the structure. The rebound hammer test and the ultrasonic pulse velocity test are two popular NDT methods to examine the properties of concrete. The rebound of the hammer depends on the hardness of the test specimen and ultrasonic pulse travelling speed is related to density, uniformity, and homogeneity of the specimen. Both of these two methods have been adopted to estimate the concrete compressive strength. Statistical analysis has been implemented to establish the relationship between hammer rebound values/ultrasonic pulse velocities and concrete compressive strength. However, the estimated results can be unreliable. As a result, this research proposes an Artificial Intelligence model using support vector machines (SVMs) for the estimation. Data from 95 cylinder concrete samples are collected to develop and validate the model. The results show that combined NDT methods (also known as SonReb method) yield better estimations than single NDT methods. The results also show that the SVMs model is more accurate than the statistical regression model. PMID:28793627
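The record's finding that the combined (SonReb) estimate beats either single NDT method can be illustrated with a toy regression: strength is generated from both rebound number and pulse velocity, then fitted once with one predictor and once with both. All data and coefficients below are synthetic, not from the study.

```python
import numpy as np

# Synthetic "SonReb" data: strength depends on both rebound number R and
# ultrasonic pulse velocity V, so a two-predictor fit should win.
rng = np.random.default_rng(1)
R = rng.uniform(25, 45, 40)            # rebound numbers
V = rng.uniform(3.5, 4.8, 40)          # pulse velocity, km/s
fc = 0.9 * R + 12.0 * V - 30.0 + rng.normal(0, 1.0, 40)  # assumed relation, MPa

def r2(y, yhat):
    ss = ((y - y.mean()) ** 2).sum()
    return 1.0 - ((y - yhat) ** 2).sum() / ss

# single-method fit (rebound only) vs combined SonReb fit
A1 = np.column_stack([np.ones(40), R])
A2 = np.column_stack([np.ones(40), R, V])
b1, *_ = np.linalg.lstsq(A1, fc, rcond=None)
b2, *_ = np.linalg.lstsq(A2, fc, rcond=None)
print(f"R2 rebound only: {r2(fc, A1 @ b1):.3f}  combined: {r2(fc, A2 @ b2):.3f}")
```

The SVM model in the record plays the same role as the combined regression here, but can additionally capture non-linear dependence on R and V.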

  5. Artificial viscosity in Godunov-type schemes to cure the carbuncle phenomenon

    NASA Astrophysics Data System (ADS)

    Rodionov, Alexander V.

    2017-09-01

    This work presents a new approach for curing the carbuncle instability. The idea underlying the approach is to introduce some dissipation in the form of right-hand sides of the Navier-Stokes equations into the basic method of solving Euler equations; in so doing, we replace the molecular viscosity coefficient by the artificial viscosity coefficient and calculate heat conductivity assuming that the Prandtl number is constant. For the artificial viscosity coefficient we have chosen a formula that is consistent with the von Neumann and Richtmyer artificial viscosity, but has its specific features (extension to multidimensional simulations, introduction of a threshold compression intensity that restricts additional dissipation to the shock layer only). The coefficients and the expression for the characteristic mesh size in this formula are chosen from a large number of Quirk-type problem computations. The new cure for the carbuncle flaw has been tested on first-order schemes (Godunov, Roe, HLLC and AUSM+ schemes) as applied to one- and two-dimensional simulations on smooth structured grids. Its efficiency has been demonstrated on several well-known test problems.
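A minimal 1-D sketch of a von Neumann-Richtmyer-style artificial viscosity with the compression threshold the record describes: a pressure-like term is added only where the flow is being compressed, quadratic in the velocity jump. The coefficient, fields, and grid are illustrative choices, not the paper's formulation.

```python
import numpy as np

# q_i = c_q * rho_i * (du_i)^2 in compressed cells, zero elsewhere.
def artificial_viscosity(rho, u, c_q=2.0, threshold=0.0):
    du = np.diff(u)                        # velocity jump across each cell face
    q = np.zeros_like(du)
    compress = du < -threshold             # act only above a compression threshold
    q[compress] = c_q * rho[:-1][compress] * du[compress] ** 2
    return q

# A step down in velocity (shock-like compression) triggers q; smooth or
# expanding regions receive no extra dissipation.
rho = np.ones(6)
u = np.array([1.0, 1.0, 1.0, 0.2, 0.2, 0.2])   # compression between cells 2 and 3
q = artificial_viscosity(rho, u)
print(q)
```

The `threshold` argument plays the role of the record's threshold compression intensity: raising it confines the added dissipation to strong compressions (shock layers) only.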

  6. Computation of incompressible viscous flows through artificial heart devices with moving boundaries

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Rogers, Stuart; Kwak, Dochan; Chang, I.-DEE

    1991-01-01

The extension of computational fluid dynamics techniques to artificial heart flow simulations is illustrated. Unsteady incompressible Navier-Stokes equations written in 3-D generalized curvilinear coordinates are solved iteratively at each physical time step until the incompressibility condition is satisfied. The solution method is based on the pseudo-compressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. The efficiency and robustness of the time-accurate formulation of the algorithm are tested by computing the flow through model geometries. A channel flow with a moving indentation is computed and validated against experimental measurements and other numerical solutions. To handle the geometric complexity and the moving boundary problems, a zonal method and an overlapping grid embedding scheme are used, respectively. Steady-state solutions for the flow through a tilting-disk heart valve were compared against experimental measurements, and good agreement was obtained. The flow computation during valve opening and closing is carried out to illustrate the moving boundary capability.
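The pseudo-compressibility (artificial compressibility) idea behind this solver can be sketched on a 1-D periodic toy problem: pressure is marched in pseudo-time against the divergence until the velocity field is (discretely) divergence-free. The parameters beta, nu, and dtau, and the explicit centred-difference relaxation, are arbitrary sketch choices, not the paper's implicit upwind scheme.

```python
import numpy as np

# March in pseudo-time:  dp/dtau = -beta * du/dx,  du/dtau = -dp/dx + nu * u_xx,
# until du/dx ~ 0. The viscous term damps the pseudo-acoustic waves.
n, beta, nu, dtau = 32, 1.0, 0.5, 0.02
dx = 2 * np.pi / n
x = np.arange(n) * dx
u = 1.0 + 0.3 * np.sin(x)        # divergence-free target is the constant mode
p = np.zeros(n)

def ddx(f):                       # centred first difference, periodic
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def d2dx(f):                      # centred second difference, periodic
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

for _ in range(3000):             # pseudo-time relaxation at one physical step
    p -= dtau * beta * ddx(u)
    u += dtau * (-ddx(p) + nu * d2dx(u))

print(f"max |du/dx| after relaxation: {np.abs(ddx(u)).max():.1e}")
```

In the real solver this inner iteration runs inside every physical time step; once the divergence drops below tolerance, the flow advances to the next step.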

  7. A Semi-implicit Method for Resolution of Acoustic Waves in Low Mach Number Flows

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles D.; Moin, Parviz

    2002-09-01

    A semi-implicit numerical method for time accurate simulation of compressible flow is presented. By extending the low Mach number pressure correction method, a Helmholtz equation for pressure is obtained in the case of compressible flow. The method avoids the acoustic CFL limitation, allowing a time step restricted only by the convective velocity, resulting in significant efficiency gains. Use of a discretization that is centered in both time and space results in zero artificial damping of acoustic waves. The method is attractive for problems in which Mach numbers are low, and the acoustic waves of most interest are those having low frequency, such as acoustic combustion instabilities. Both of these characteristics suggest the use of time steps larger than those allowable by an acoustic CFL limitation. In some cases it may be desirable to include a small amount of numerical dissipation to eliminate oscillations due to small-wavelength, high-frequency, acoustic modes, which are not of interest; therefore, a provision for doing this in a controlled manner is included in the method. Results of the method for several model problems are presented, and the performance of the method in a large eddy simulation is examined.
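The time-step argument in this record is easy to quantify: an explicit compressible scheme obeys the acoustic CFL limit based on u + c, while the semi-implicit method is limited only by the convective velocity u. The mesh size and velocities below are illustrative low-Mach values, not from the paper.

```python
# Acoustic vs convective CFL time-step limits for a low Mach number flow.
def dt_acoustic(dx, u, c, cfl=1.0):
    return cfl * dx / (abs(u) + c)

def dt_convective(dx, u, cfl=1.0):
    return cfl * dx / abs(u)

dx, u, c = 1.0e-3, 10.0, 340.0      # Mach ~ 0.03 (illustrative values)
speedup = dt_convective(dx, u) / dt_acoustic(dx, u, c)
print(f"time-step gain from dropping the acoustic limit: {speedup:.0f}x")
```

The gain scales roughly as 1/Mach, which is why semi-implicit treatment pays off precisely in the low-Mach combustion-instability regime the record targets.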

  8. A method of automatic control procedures cardiopulmonary resuscitation

    NASA Astrophysics Data System (ADS)

    Bureev, A. Sh.; Zhdanov, D. S.; Kiseleva, E. Yu.; Kutsov, M. S.; Trifonov, A. Yu.

    2015-11-01

This study presents the results of work on creating methods for the automatic control of cardiopulmonary resuscitation (CPR) procedures. A method is presented that controls the CPR procedure, in accordance with current CPR guidelines, by evaluating acoustic data on the dynamics of blood flow at the bifurcation of the carotid arteries and the dynamics of air flow in the trachea. The patient is assessed by analyzing respiratory noise and blood flow in the intervals between chest compressions and artificial pulmonary ventilation. An operating algorithm and a block diagram for a device providing automatic control of CPR procedures have been developed.
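The guideline cycle such a controller must track can be sketched as a simple event scheduler. The 30:2 compression:ventilation ratio and the 100-120/min compression rate are current adult-CPR guideline values; the rate chosen and per-ventilation timing below are illustrative assumptions, not the paper's algorithm.

```python
# Generate a (time, event) timeline for rounds of 30:2 CPR, the pattern a
# monitoring device would check acoustic data against between events.
def cpr_schedule(cycles, rate_cpm=110, vent_gap_s=1.0):
    """Return (time_s, event) pairs for `cycles` rounds of 30:2 CPR."""
    events, t = [], 0.0
    period = 60.0 / rate_cpm
    for _ in range(cycles):
        for _ in range(30):
            events.append((round(t, 2), "compression"))
            t += period
        for _ in range(2):
            events.append((round(t, 2), "ventilation"))
            t += vent_gap_s
    return events

timeline = cpr_schedule(2)
comps = sum(1 for _, e in timeline if e == "compression")
print(comps, "compressions,", len(timeline) - comps, "ventilations")
```

A real controller would compare the observed carotid-flow and tracheal-airflow sounds in the gaps between these scheduled events against the expected pattern.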

  9. Prediction of shear wave velocity using empirical correlations and artificial intelligence methods

    NASA Astrophysics Data System (ADS)

    Maleki, Shahoo; Moradzadeh, Ali; Riabi, Reza Ghavami; Gholami, Raoof; Sadeghzadeh, Farhad

    2014-06-01

A good understanding of the mechanical properties of rock formations is essential during the development and production phases of a hydrocarbon reservoir. Conventionally, these properties are estimated from petrophysical logs, with compressional and shear sonic data being the main inputs to the correlations. In many cases, however, shear sonic data are not acquired during well logging, often for cost-saving reasons; shear wave velocity is then estimated using the empirical correlations or artificial intelligence methods proposed over the last few decades. In this paper, petrophysical logs from a well drilled in the southern part of Iran were used to estimate shear wave velocity using empirical correlations as well as two robust artificial intelligence methods known as Support Vector Regression (SVR) and Back-Propagation Neural Network (BPNN). Although the results obtained by SVR seem reliable, the estimated values are not very precise, and considering the importance of shear sonic data as input to different models, this study suggests acquiring shear sonic data during well logging. It is important to note that the benefits of having reliable shear sonic data for estimating the mechanical properties of rock formations will compensate for the possible additional logging costs.
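One concrete example of the kind of empirical correlation the record evaluates is the Castagna et al. "mudrock line" for clastic silicate rocks, which predicts shear velocity directly from compressional velocity:

```python
# Castagna et al. mudrock line: Vs = 0.8621 * Vp - 1.1724, velocities in km/s.
# Valid only for water-saturated clastic silicate rocks; lithology-specific
# correlations or ML models (as in the record) are needed elsewhere.
def vs_castagna(vp_km_s):
    return 0.8621 * vp_km_s - 1.1724

print(round(vs_castagna(4.0), 3))   # e.g. Vp = 4 km/s
```

Such one-variable correlations are exactly what the SVR and BPNN models in the record try to improve on by also using the other petrophysical logs as inputs.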

  10. Analysis of the microstructure and mechanical performance of composite resins after accelerated artificial aging.

    PubMed

    De Oliveira Daltoé, M; Lepri, C Penazzo; Wiezel, J Guilherme G; Tornavoi, D Cremonezzi; Agnelli, J A Marcondes; Reis, A Cândido Dos

    2013-03-01

Research assessing the behavior of dental materials is important for scientific and industrial development, especially when materials are tested under conditions that simulate the oral environment. This work therefore analyzed the compressive strength and microstructure of three composite resins subjected to accelerated artificial aging (AAA). Three 3M composite resins (P90, P60 and Z100) were analyzed, with 16 specimens prepared for each type (N = 48). Half of each type were subjected to UV-C AAA, after which the surfaces of three aged and three non-aged specimens of each type were examined by scanning electron microscopy (SEM). Eight specimens of each resin, aged and not aged, were then subjected to a compression test. Statistical analysis of the compressive strength values showed a difference between groups (α < 0.05). Aged P60 specimens presented statistically significantly lower compressive strength than those not subjected to AAA. For the other composite resins there was no difference, regardless of aging, a finding confirmed by SEM. The results showed that AAA reduced the compressive strength of the aged P60 resin, as confirmed by SEM surface analysis, which revealed greater structural disarrangement at the material surface.

  11. Fatigue characteristics of carbon nanotube blocks under compression

    NASA Astrophysics Data System (ADS)

    Suhr, J.; Ci, L.; Victor, P.; Ajayan, P. M.

    2008-03-01

In this paper we investigate the mechanical response to repeated high compressive strains of freestanding, long, vertically aligned multiwalled carbon nanotube membranes and show that the arrays of nanotubes under compression behave very similarly to soft tissue and exhibit viscoelastic behavior. Under compressive cyclic loading, the mechanical response of the nanotube blocks shows the initial preconditioning and hysteresis characteristic of viscoelastic materials. Furthermore, no fatigue failure is observed even at high strain amplitudes up to half a million cycles. The outstanding fatigue life and extraordinary soft-tissue-like mechanical behavior suggest that properly engineered carbon nanotube structures could mimic artificial muscles.
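The hysteresis the record reports can be quantified as the energy dissipated per cycle, i.e. the area enclosed by the stress-strain loop. A sketch using the shoelace formula on a synthetic loop (the loop shape and phase lag are invented, not nanotube data):

```python
import numpy as np

# Dissipated energy per load-unload cycle = area of the stress-strain loop.
def loop_area(strain, stress):
    x, y = np.asarray(strain), np.asarray(stress)
    # shoelace formula over the closed polygon of (strain, stress) points
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
strain = 0.05 * (1 + np.cos(theta))          # cyclic strain, 0 to 10%
stress = 1.0 * (1 + np.cos(theta - 0.3))     # stress lags strain: hysteresis
print(f"energy dissipated per cycle: {loop_area(strain, stress):.4f}")
```

A purely elastic material would trace the same path on loading and unloading (zero phase lag, zero loop area); the preconditioning the record describes shows up as a shrinking loop area over the first cycles.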

  12. Lagrangian transported MDF methods for compressible high speed flows

    NASA Astrophysics Data System (ADS)

    Gerlinger, Peter

    2017-06-01

This paper deals with the application of thermochemical Lagrangian MDF (mass density function) methods to compressible sub- and supersonic RANS (Reynolds-Averaged Navier-Stokes) simulations. A new approach to treating molecular transport is presented. This technique, on the one hand, ensures numerical stability of the particle solver in laminar regions of the flow field (e.g. in the viscous sublayer) and, on the other hand, takes differential diffusion into account. A detailed analysis shows that the new method correctly predicts first- and second-order moments on the basis of conventional modeling approaches. Moreover, a number of challenges for MDF particle methods in high speed flows are discussed, e.g. high-cell-aspect-ratio grids close to solid walls, wall heat transfer, shock resolution, and statistical noise that may cause artificial shock systems in supersonic flows. A Mach 2 supersonic mixing channel with multiple shock reflections and a model rocket combustor simulation demonstrate the applicability of this technique to practical problems. Both test cases are simulated successfully for the first time with a hybrid finite-volume (FV)/Lagrangian particle solver (PS).

  13. Cavitation of intercellular spaces is critical to establishment of hydraulic properties of compression wood of Chamaecyparis obtusa seedlings

    PubMed Central

    Nakaba, Satoshi; Hirai, Asami; Kudo, Kayo; Yamagishi, Yusuke; Yamane, Kenichi; Kuroda, Katsushi; Nugroho, Widyanto Dwi; Kitin, Peter; Funada, Ryo

    2016-01-01

    Background and Aims When the orientation of the stems of conifers departs from the vertical as a result of environmental influences, conifers form compression wood that results in restoration of verticality. It is well known that intercellular spaces are formed between tracheids in compression wood, but the function of these spaces remains to be clarified. In the present study, we evaluated the impact of these spaces in artificially induced compression wood in Chamaecyparis obtusa seedlings. Methods We monitored the presence or absence of liquid in the intercellular spaces of differentiating xylem by cryo-scanning electron microscopy. In addition, we analysed the relationship between intercellular spaces and the hydraulic properties of the compression wood. Key Results Initially, we detected small intercellular spaces with liquid in regions in which the profiles of tracheids were not rounded in transverse surfaces, indicating that the intercellular spaces had originally contained no gases. In the regions where tracheids had formed secondary walls, we found that some intercellular spaces had lost their liquid. Cavitation of intercellular spaces would affect hydraulic conductivity as a consequence of the induction of cavitation in neighbouring tracheids. Conclusions Our observations suggest that cavitation of intercellular spaces is the critical event that affects not only the functions of intercellular spaces but also the hydraulic properties of compression wood. PMID:26818592

  14. Modeling of Compressive Strength for Self-Consolidating High-Strength Concrete Incorporating Palm Oil Fuel Ash

    PubMed Central

    Safiuddin, Md.; Raman, Sudharshan N.; Abdus Salam, Md.; Jumaat, Mohd. Zamin

    2016-01-01

Modeling is a very useful method for the performance prediction of concrete. Most of the models available in the literature relate to compressive strength because it is a major mechanical property used in concrete design. Many attempts have been made to develop suitable mathematical models for predicting the compressive strength of different concretes, but not for self-consolidating high-strength concrete (SCHSC) containing palm oil fuel ash (POFA). The present study used artificial neural networks (ANN) to predict the compressive strength of SCHSC incorporating POFA. The ANN model was developed and validated using the mix proportioning and experimental strength data of 20 different SCHSC mixes. Seventy percent (70%) of the data were used to train the ANN model, and the remaining 30% were used for testing. Training was stopped when the root mean square error (RMSE) and the percentage of good patterns reached 0.001 and ≈100%, respectively. The compressive strength values predicted by the trained ANN model were much closer to the experimental values; the coefficient of determination (R2) for the relationship between predicted and experimental compressive strengths was 0.9486, which shows the high accuracy of the network. Furthermore, the predicted compressive strength was very close to the experimental compressive strength during testing of the ANN model, with significantly low absolute and percentage relative errors (mean values of 1.74 MPa and 3.13%, respectively), indicating that the compressive strength of SCHSC including POFA can be efficiently predicted by the ANN. PMID:28773520
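The 70/30 validation protocol in the record can be sketched generically: fit on 70% of the mixes and report hold-out RMSE and R². A linear model on synthetic mix data stands in for the ANN; all proportions, coefficients, and noise levels below are invented for illustration.

```python
import numpy as np

# 20 synthetic "mixes" with 4 proportioning variables and a noisy strength.
rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(20, 4))              # 20 mixes, 4 mix variables
w_true = np.array([30.0, -10.0, 15.0, 5.0])
y = 40.0 + X @ w_true + rng.normal(0, 1.0, 20)   # "measured" strength, MPa

idx = rng.permutation(20)
train, test = idx[:14], idx[14:]                 # 70% / 30% split

A = np.column_stack([np.ones(14), X[train]])
coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
pred = np.column_stack([np.ones(6), X[test]]) @ coef

rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
r2 = 1 - ((pred - y[test]) ** 2).sum() / ((y[test] - y[test].mean()) ** 2).sum()
print(f"hold-out RMSE: {rmse:.2f} MPa, R^2: {r2:.3f}")
```

The record's ANN follows the same protocol, but with the network's RMSE/good-pattern stopping rule replacing the closed-form least-squares fit.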

  15. Modeling of Compressive Strength for Self-Consolidating High-Strength Concrete Incorporating Palm Oil Fuel Ash.

    PubMed

    Safiuddin, Md; Raman, Sudharshan N; Abdus Salam, Md; Jumaat, Mohd Zamin

    2016-05-20

    Modeling is a very useful method for the performance prediction of concrete. Most of the models available in the literature are related to the compressive strength because it is a major mechanical property used in concrete design. Many attempts have been made to develop suitable mathematical models for the prediction of compressive strength of different concretes, but not for self-consolidating high-strength concrete (SCHSC) containing palm oil fuel ash (POFA). The present study used artificial neural networks (ANN) to predict the compressive strength of SCHSC incorporating POFA. The ANN model was developed and validated in this research using the mix proportioning and experimental strength data of 20 different SCHSC mixes. Seventy percent (70%) of the data were used to carry out the training of the ANN model. The remaining 30% of the data were used for testing the model. The training of the ANN model was stopped when the root mean square error (RMSE) and the percentage of good patterns were 0.001 and ≈100%, respectively. The predicted compressive strength values obtained from the trained ANN model were much closer to the experimental values of compressive strength. The coefficient of determination (R²) for the relationship between the predicted and experimental compressive strengths was 0.9486, which shows the high degree of accuracy of the network pattern. Furthermore, the predicted compressive strength was found to be very close to the experimental compressive strength during the testing process of the ANN model. The absolute and percentage relative errors in the testing process were significantly low, with mean values of 1.74 MPa and 3.13%, respectively, which indicated that the compressive strength of SCHSC including POFA can be efficiently predicted by the ANN.
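
    As an illustration of the training procedure the two records above describe (a feed-forward network, a 70/30 train/test split, and an RMSE-based stopping rule), here is a minimal numpy sketch; the data, network topology, learning rate, and thresholds are invented for illustration and are not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for mix-proportioning data: 20 mixes, 5 features.
# The paper's real inputs, topology, and hyperparameters are not given here.
X = rng.uniform(0.0, 1.0, size=(20, 5))
true_w = rng.uniform(-1.0, 1.0, size=5)
y = X @ true_w + 0.05 * rng.standard_normal(20)   # "compressive strength"

n_train = 14                                      # 70/30 split, as in the abstract
X_tr, y_tr = X[:n_train], y[:n_train]
X_te, y_te = X[n_train:], y[n_train:]

def forward(W1, b1, W2, b2, X):
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return h, h @ W2 + b2                         # linear output

W1 = 0.5 * rng.standard_normal((5, 8))
b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal(8)
b2 = 0.0
lr, history = 0.05, []

for epoch in range(5000):
    h, pred = forward(W1, b1, W2, b2, X_tr)
    err = pred - y_tr
    history.append(np.sqrt(np.mean(err ** 2)))    # training RMSE
    if history[-1] < 0.01:                        # RMSE stopping rule
        break
    # Backpropagation through the two layers (full-batch gradient descent).
    dh = np.outer(err, W2) * (1.0 - h ** 2)
    W2 -= lr * h.T @ err / n_train
    b2 -= lr * err.mean()
    W1 -= lr * X_tr.T @ dh / n_train
    b1 -= lr * dh.mean(axis=0)

_, pred_te = forward(W1, b1, W2, b2, X_te)
test_rmse = np.sqrt(np.mean((pred_te - y_te) ** 2))
```

    The held-out RMSE plays the role of the testing-stage error the abstracts report; a real study would also track a goodness-of-fit measure such as R².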

  16. Optimization of matrix tablets controlled drug release using Elman dynamic neural networks and decision trees.

    PubMed

    Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica

    2012-05-30

    The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablet types (hydrophilic/lipid) using the formulation composition, the compression force used for tableting, and tablet porosity and tensile strength as input data. The potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix-forming materials for hydrophilic and lipid matrix tablets, respectively, whereas the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by the direct compression method and tested for in vitro dissolution profiles. Optimization of the static and dynamic neural networks used for modeling of drug release was performed using Monte Carlo simulations or a genetic algorithm optimizer. Decision trees were constructed following discretization of the data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of test matrix tablet formulations indicate that Elman dynamic neural networks, as well as decision trees, are capable of accurate predictions of both hydrophilic and lipid matrix tablet dissolution profiles. The Elman neural networks were compared to the most frequently used static network, the multilayer perceptron, and the superiority of the Elman networks was demonstrated. The developed methods allow a simple, yet very precise, way of predicting drug release for both hydrophilic and lipid matrix tablets with controlled drug release.
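
    The defining feature of an Elman network is a context layer that feeds the hidden layer's previous activation back in as an extra input, which lets the model treat a dissolution profile as a sequence. A minimal forward-pass sketch, with invented dimensions and untrained random weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 4 formulation inputs, 6 hidden/context units;
# the output is the fraction of drug released at each time point.
n_in, n_hid = 4, 6
W_in = 0.3 * rng.standard_normal((n_hid, n_in))
W_ctx = 0.3 * rng.standard_normal((n_hid, n_hid))  # recurrent context weights
W_out = 0.3 * rng.standard_normal(n_hid)
b_h = np.zeros(n_hid)

def elman_forward(x_seq):
    """One pass over a sequence: the context layer feeds the hidden
    layer's previous activation back in - the defining Elman feature."""
    context = np.zeros(n_hid)
    outputs = []
    for x in x_seq:
        context = np.tanh(W_in @ x + W_ctx @ context + b_h)
        outputs.append(1.0 / (1.0 + np.exp(-(W_out @ context))))  # sigmoid output
    return np.array(outputs)

# The same formulation vector is presented at every dissolution time point;
# the evolving context is what makes the predicted profile time-dependent.
x_seq = np.tile(rng.uniform(size=n_in), (10, 1))
profile = elman_forward(x_seq)
```

    A static multilayer perceptron given the same constant input would emit the same output at every time point; the recurrence is what gives the dynamic network its edge on profile data.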

  17. Elastic MCF Rubber with Photovoltaics and Sensing on Hybrid Skin (H-Skin) for Artificial Skin by Utilizing Natural Rubber: Third Report on Electric Charge and Storage under Tension and Compression †.

    PubMed

    Shimada, Kunio

    2018-06-06

    In the series of studies on new types of elastic and compressible artificial skins with hybrid sensing functions, photovoltaics, and battery, we have proposed a hybrid skin (H-Skin) utilizing an electrolytically polymerized magnetic compound fluid (MCF) made of natural rubber latex (NR-latex). Using the experimental results of the first and second reports, we have clarified the feasibility of electric charging under irradiation, and of charging without illumination under compression and elongation. The former was explained for a wet-type MCF rubber solar cell by developing a tunneling theory together with an equivalent electric circuit model. The latter corresponds to a battery rather than to a solar cell. As for the MCF rubber battery, depending on the selected agent type, the MCF rubber can be made to have higher electricity and lighter weight. Therefore, the MCF rubber provides electric charge and storage whether irradiated or not.

  18. Compressional Wave Speed and Absorption Measurements in a Saturated Kaolinite-Water Artificial Sediment.

    DTIC Science & Technology

    (*OCEAN BOTTOM, ULTRASONIC PROPERTIES), (*UNDERWATER SOUND, SOUND TRANSMISSION), KAOLINITE, ABSORPTION, COMPRESSIVE PROPERTIES, POROSITY, VELOCITY, VISCOELASTICITY, MATHEMATICAL MODELS, THESES, SEDIMENTATION

  19. A contourlet transform based algorithm for real-time video encoding

    NASA Astrophysics Data System (ADS)

    Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris

    2012-06-01

    In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
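
    The scalability idea above, decoding at lower resolution by simply dropping detail subbands rather than re-encoding, can be illustrated with a single-level 2D Haar transform standing in for the contourlet decomposition (the contourlet filter banks themselves are considerably more involved):

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar transform: returns the low-pass (LL) band
    and the three detail subbands. A stand-in for the contourlet stage."""
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, (lh, hl, hh)

def inv_haar2d(ll, details):
    lh, hl, hh = details
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

frame = np.arange(64, dtype=float).reshape(8, 8)
ll, details = haar2d(frame)

# Full-resolution decode: exact reconstruction.
full = inv_haar2d(ll, details)

# Low-resolution decode: drop the detail subbands (the "encoded information
# referring to higher resolution than needed") - no re-encoding required.
zeros = tuple(np.zeros_like(d) for d in details)
low = inv_haar2d(ll, zeros)
```

    The decoder chooses the level of detail simply by which subbands it consumes, which is exactly what makes the bitstream scalable.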

  20. Typing SNP based on the near-infrared spectroscopy and artificial neural network

    NASA Astrophysics Data System (ADS)

    Ren, Li; Wang, Wei-Peng; Gao, Yu-Zhen; Yu, Xiao-Wei; Xie, Hong-Ping

    2009-07-01

    Based on the near-infrared spectra (NIRS) of the measured samples as the discriminant variables of their genotypes, a genotype discriminant model for SNPs has been established using a back-propagation artificial neural network (BP-ANN). Taking a SNP (857G > A) of N-acetyltransferase 2 (NAT2) as an example, DNA fragments containing the SNP site were amplified by PCR with a pair of primers to obtain modeling samples of the three genotypes (GG, AA, and GA). The NIR spectra of the amplified samples were directly measured in transmission using a quartz cell. Based on the measured sample spectra, two BP-ANNs were combined to obtain a stronger ability for three-genotype classification. One of them was established to compress the measured NIRS variables using the resilient back-propagation algorithm, and the other, established by the Levenberg-Marquardt algorithm on the compressed spectra, was used as the discriminant model for the three-genotype classification. For the established model, the root mean square errors for the training and prediction sample sets were 0.0135 and 0.0132, respectively. This model could correctly predict the three genotypes (i.e., the accuracy on the prediction samples was up to 100%) and was robust for the prediction of unknown samples. Since the three genotypes of the SNP could be directly determined from the NIR spectra without any preprocessing of the analyzed samples after PCR, this method is simple, rapid, and low-cost.

  1. A gas-kinetic BGK scheme for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    2000-01-01

    This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement of the previous scheme that the particle collision time be less than the time step for the validity of the BGK Navier-Stokes solution is removed. Therefore, the applicable regime of the current method is much enlarged, and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, valid only under that limiting condition. Also presented in this paper are the appropriate implementation of boundary conditions for the kinetic scheme, the different kinetic limiting cases, and the Prandtl number fix. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.

  2. Discontinuous Galerkin finite element method for the nonlinear hyperbolic problems with entropy-based artificial viscosity stabilization

    NASA Astrophysics Data System (ADS)

    Zingan, Valentin Nikolaevich

    This work develops a discontinuous Galerkin finite element discretization of nonlinear hyperbolic conservation equations with efficient and robust high-order stabilization built on an entropy-based artificial viscosity approximation. The solutions of the equations are represented by elementwise polynomials of an arbitrary degree p > 0, which are continuous within each element but discontinuous on the boundaries. The discretization of the equations in time is done by means of high-order explicit Runge-Kutta methods identified with their respective Butcher tableaux. To stabilize the numerical solution in the vicinity of shock waves while simultaneously preserving the smooth parts from smearing, we add a reasonable amount of artificial viscosity in accordance with the physical principle of entropy production in the interior of shock waves. The viscosity coefficient is proportional to the local size of the residual of an entropy equation and is bounded from above by the first-order artificial viscosity defined by a local wave speed. Since the residual of an entropy equation is vanishingly small in smooth regions (of the order of the local truncation error) and arbitrarily large in shocks, the entropy viscosity is almost zero everywhere except at the shocks, where it reaches the first-order upper bound. One- and two-dimensional benchmark test cases are presented for nonlinear hyperbolic scalar conservation laws and the system of compressible Euler equations. These tests demonstrate the satisfactory stability properties of the method as well as optimal convergence rates. All numerical solutions to the test problems agree well with the reference solutions found in the literature. We conclude that the new method developed in the present work is a valuable alternative to currently existing techniques of viscous stabilization.
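
    For 1D Burgers' equation the entropy-viscosity construction can be sketched directly: the coefficient follows the discrete entropy residual and is capped by the first-order viscosity. The constants and the normalization below are illustrative assumptions, not the dissertation's values:

```python
import numpy as np

# Entropy-viscosity coefficient for 1D Burgers' equation, u_t + (u^2/2)_x = 0,
# with entropy eta = u^2/2 and entropy flux psi = u^3/3. The constants
# c_max, c_e and the normalization are illustrative choices.
def entropy_viscosity(u_new, u_old, dx, dt, c_max=0.5, c_e=1.0):
    eta_new, eta_old = u_new ** 2 / 2, u_old ** 2 / 2
    psi = u_new ** 3 / 3
    # Discrete entropy residual: d(eta)/dt + d(psi)/dx (central in x).
    residual = (eta_new - eta_old) / dt
    residual[1:-1] += (psi[2:] - psi[:-2]) / (2 * dx)
    norm = np.max(np.abs(eta_new - eta_new.mean())) + 1e-14
    nu_entropy = c_e * dx ** 2 * np.abs(residual) / norm
    nu_first_order = c_max * dx * np.abs(u_new)     # first-order upper bound
    return np.minimum(nu_first_order, nu_entropy)   # capped entropy viscosity

# Smooth steady data: the residual (hence the viscosity) stays small;
# a jump drives the coefficient up to the first-order bound.
x = np.linspace(0.0, 1.0, 101)
dx, dt = x[1] - x[0], 1e-3
u_smooth = np.sin(2 * np.pi * x)
nu_smooth = entropy_viscosity(u_smooth, u_smooth, dx, dt)
u_shock = np.where(x < 0.5, 1.0, -1.0)
nu_shock = entropy_viscosity(u_shock, u_shock, dx, dt)
```

    Around the jump the entropy residual blows up and the minimum selects the first-order value, exactly the behavior the abstract describes.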

  3. Unified approach for incompressible flows

    NASA Astrophysics Data System (ADS)

    Chang, Tyne-Hsien

    1993-12-01

    A unified approach for solving both compressible and incompressible flows was investigated in this study. The difference in CFD code development between incompressible and compressible flows is due to their mathematical characteristics. However, if one modifies the continuity equation for incompressible flows by introducing pseudocompressibility, the governing equations for incompressible flows have the same mathematical character as those for compressible flows, and the application of a compressible flow code to solve incompressible flows becomes feasible. Among the numerical algorithms developed for compressible flows, the Centered Total Variation Diminishing (CTVD) schemes possess better mathematical properties for damping out spurious oscillations while providing high-order accuracy for high-speed flows. This leads us to believe that CTVD schemes can equally well solve incompressible flows. In this study, the governing equations for incompressible flows comprise the continuity equation and the momentum equations. The continuity equation is modified by adding a time derivative of the pressure term containing the artificial compressibility. The modified continuity equation together with the unsteady momentum equations forms a hyperbolic-parabolic type of time-dependent system of equations; thus, the CTVD schemes can be implemented. In addition, the boundary conditions, both physical and numerical, must be properly specified to obtain an accurate solution. The CFD code for this research is currently in progress. Flow past a circular cylinder will be used for numerical experiments to determine the accuracy and efficiency of the code before applying it to more specific applications.
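
    The pseudocompressibility modification can be sketched in a few lines: the pressure pseudo-time term β∇·u drives the velocity field toward a divergence-free state. Convection and physical viscosity are deliberately dropped here, a constant artificial dissipation stands in for the damping a real scheme gets from its upwinding, and β, Δτ, and ν are illustrative choices:

```python
import numpy as np

# Pseudo-time artificial compressibility on a periodic 2D grid:
#   dp/dtau + beta * div(u) = 0
#   du/dtau + grad(p)       = nu_art * lap(u)
# (convection and physical viscosity omitted; nu_art is an illustrative
# constant artificial dissipation, not part of the exact formulation).
n, beta, dtau, nu_art = 32, 5.0, 0.02, 0.3
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)          # initial field with div(u) = 2 cos(x) cos(y)
v = np.cos(X) * np.sin(Y)
p = np.zeros((n, n))

def ddx(f):  # central difference, periodic
    return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2.0 * h)

def ddy(f):
    return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2.0 * h)

def lap(f):  # 5-point Laplacian, periodic
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / h ** 2

div0 = np.abs(ddx(u) + ddy(v)).max()
for _ in range(2000):
    div = ddx(u) + ddy(v)
    p -= dtau * beta * div                     # modified continuity equation
    u += dtau * (-ddx(p) + nu_art * lap(u))    # reduced momentum equations
    v += dtau * (-ddy(p) + nu_art * lap(v))
div_final = np.abs(ddx(u) + ddy(v)).max()
```

    As the pseudo-time marching converges, the divergence decays by orders of magnitude, which is the sense in which the modified hyperbolic system recovers an incompressible solution.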

  4. MacCormack's technique-based pressure reconstruction approach for PIV data in compressible flows with shocks

    NASA Astrophysics Data System (ADS)

    Liu, Shun; Xu, Jinglei; Yu, Kaikai

    2017-06-01

    This paper proposes an improved approach for the extraction of pressure fields from velocity data, such as those obtained by particle image velocimetry (PIV), especially for steady compressible flows with strong shocks. The principle of this approach is derived from the Navier-Stokes equations, assuming adiabatic conditions and neglecting viscosity at the flow field boundaries measured by PIV. The computing method is based on MacCormack's technique in computational fluid dynamics, so this approach is called the MacCormack method. Moreover, the MacCormack method is compared with several approaches proposed in the previous literature, including the isentropic method, spatial integration, and the Poisson method. The effects of the velocity error level and the PIV spatial resolution on these approaches are also quantified by using artificial velocity data containing shock waves. The results demonstrate that the MacCormack method has higher reconstruction accuracy than the other approaches, and its advantages become more remarkable as the shock strengthens. Furthermore, the performance of the MacCormack method is also validated by using synthetic PIV images with an oblique shock wave, confirming the feasibility and advantage of this approach in real PIV experiments. This work is significant for studies in aerospace engineering, especially of the outer flow fields of supersonic aircraft and the internal flow fields of ramjets.
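
    The simplest member of the family compared above, spatial integration of the momentum equation, can be sketched in 1D: for steady incompressible inviscid flow, dp/dx = -ρu du/dx is marched along the velocity data and checked against Bernoulli's relation. The MacCormack variant adds a predictor-corrector sweep and compressibility and is not reproduced here; the velocity profile is synthetic:

```python
import numpy as np

# "PIV" velocity along a line and the pressure gradient it implies via
# the steady incompressible 1D momentum equation dp/dx = -rho*u*du/dx.
rho = 1.0
x = np.linspace(0.0, 1.0, 201)
u = 1.0 + 0.5 * np.sin(2 * np.pi * x)          # synthetic velocity data
dpdx = -rho * u * np.gradient(u, x)

# Spatial integration (trapezoidal rule), anchored at p(0) = 0.
p = np.concatenate(([0.0],
                    np.cumsum((dpdx[1:] + dpdx[:-1]) / 2 * np.diff(x))))

# Consistency check: p + rho*u^2/2 should be constant (Bernoulli).
bernoulli = p + rho * u ** 2 / 2
```

    Error accumulation along the integration path is exactly the weakness of this family that Poisson- and MacCormack-type reconstructions are designed to mitigate.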

  5. Retention of cardiopulmonary resuscitation skills after hands-only training versus conventional training in novices: a randomized controlled trial

    PubMed Central

    Kim, Young Joon; Cho, Youngsuk; Cho, Gyu Chong; Ji, Hyun Kyung; Han, Song Yi; Lee, Jin Hyuck

    2017-01-01

    Objective: Cardiopulmonary resuscitation (CPR) training can improve performance during simulated cardiac arrest; however, retention of skills after training remains uncertain. Recently, hands-only CPR has been shown to be as effective as conventional CPR. The purpose of this study is to compare the retention rate of CPR skills in laypersons after hands-only or conventional CPR training. Methods: Participants were randomly assigned to 1 of 2 CPR training methods: 80 minutes of hands-only CPR training or 180 minutes of conventional CPR training. Each participant’s CPR skills were evaluated at the end of training and 3 months thereafter using the Resusci Anne manikin with skill-reporting software. Results: In total, 252 participants completed training; there were 125 in the hands-only CPR group and 127 in the conventional CPR group. After 3 months, 118 participants were randomly selected to complete a post-training test. The hands-only CPR group showed a significant decrease in average compression rate (P=0.015), average compression depth (P=0.031), and proportion of adequate compression depth (P=0.011). In contrast, there was no difference in the skills of the conventional CPR group after 3 months. Conclusion: Conventional CPR training appears to be more effective for the retention of chest compression skills than hands-only CPR training; however, the retention of artificial ventilation skills after conventional CPR training is poor. PMID:28717778

  6. Extruded Bread Classification on the Basis of Acoustic Emission Signal With Application of Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Świetlicka, Izabela; Muszyński, Siemowit; Marzec, Agata

    2015-04-01

    The presented work covers the problem of developing a method of extruded bread classification with the application of artificial neural networks. Extruded flat graham, corn, and rye breads differing in water activity were used. The breads were subjected to a compression test with simultaneous registration of the acoustic signal. The amplitude-time records were analyzed both in the time and frequency domains. The acoustic emission signal parameters of single energy, counts, amplitude, and duration were determined for the breads at four water activities: initial (0.362 for rye, 0.377 for corn, and 0.371 for graham bread), 0.432, 0.529, and 0.648. For the classification and clustering process, radial basis function networks and self-organizing maps (Kohonen networks) were used. The artificial neural networks were examined with respect to their ability to classify or to cluster samples according to the bread type, the water activity value, and both of them. The best results were achieved by the radial basis function network in classification according to water activity (88%), while the self-organizing map network yielded 81% during bread type clustering.
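
    A radial basis function network of the kind used here can be sketched as a Gaussian hidden layer on fixed centers with a least-squares linear readout; the "acoustic emission" features below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for acoustic-emission features (energy, counts,
# amplitude, duration) of three bread classes; the real data differ.
class_means = np.array([[0., 0., 0., 0.],
                        [3., 3., 0., 0.],
                        [0., 0., 3., 3.]])
X = np.vstack([m + 0.3 * rng.standard_normal((30, 4)) for m in class_means])
y = np.repeat(np.arange(3), 30)

# Radial basis function network: Gaussian hidden layer on fixed centers,
# linear output layer fitted by least squares.
centers = X[::10]                               # crude center selection
width = 1.0

def rbf_features(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))       # Gaussian activations

Phi = rbf_features(X)
T = np.eye(3)[y]                                # one-hot class targets
W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
pred = rbf_features(X) @ W
accuracy = (pred.argmax(1) == y).mean()
```

    In practice the centers would come from clustering (e.g. the Kohonen map the paper also uses) rather than naive subsampling, and accuracy would be reported on held-out samples.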

  7. Prediction of Flow Stress in Cadmium Using Constitutive Equation and Artificial Neural Network Approach

    NASA Astrophysics Data System (ADS)

    Sarkar, A.; Chakravartty, J. K.

    2013-10-01

    A model is developed to predict the constitutive flow behavior of cadmium during compression testing using an artificial neural network (ANN). The inputs of the neural network are strain, strain rate, and temperature, whereas flow stress is the output. Experimental data obtained from compression tests in the temperature range -30 to 70 °C, strain range 0.1 to 0.6, and strain rate range 10⁻³ to 1 s⁻¹ are employed to develop the model. A three-layer feed-forward ANN is trained with the Levenberg-Marquardt training algorithm. It has been shown that the developed ANN model can efficiently and accurately predict the deformation behavior of cadmium. This trained network could predict the flow stress better than the corresponding constitutive equation.

  8. Lightweight, compressible and electrically conductive polyurethane sponges coated with synergistic multiwalled carbon nanotubes and graphene for piezoresistive sensors.

    PubMed

    Ma, Zhonglei; Wei, Ajing; Ma, Jianzhong; Shao, Liang; Jiang, Huie; Dong, Diandian; Ji, Zhanyou; Wang, Qian; Kang, Songlei

    2018-04-19

    Lightweight, compressible and highly sensitive pressure/strain sensing materials are highly desirable for the development of health monitoring, wearable devices and artificial intelligence. Herein, a very simple, low-cost and solution-based approach is presented to fabricate versatile piezoresistive sensors based on conductive polyurethane (PU) sponges coated with synergistic multiwalled carbon nanotubes (MWCNTs) and graphene. These sensor materials are fabricated by convenient dip-coating layer-by-layer (LBL) electrostatic assembly followed by in situ reduction without using any complicated microfabrication processes. The resultant conductive MWCNT/RGO@PU sponges exhibit very low densities (0.027-0.064 g cm⁻³), outstanding compressibility (up to 75%) and high electrical conductivity benefiting from the porous PU sponges and synergistic conductive MWCNT/RGO structures. In addition, the MWCNT/RGO@PU sponges present larger relative resistance changes and superior sensing performances under external applied pressures (0-5.6 kPa) and a wide range of strains (0-75%) compared with the RGO@PU and MWCNT@PU sponges, due to the synergistic effect of multiple mechanisms: "disconnect-connect" transition of nanogaps, microcracks and fractured skeletons at low compression strain and compressive contact of the conductive skeletons at high compression strain. The electrical and piezoresistive properties of MWCNT/RGO@PU sponges are strongly associated with the dip-coating cycle, suspension concentration, and the applied pressure and strain. Fully functional applications of MWCNT/RGO@PU sponge-based piezoresistive sensors in lighting LED lamps and detecting human body movements are demonstrated, indicating their excellent potential for emerging applications such as health monitoring, wearable devices and artificial intelligence.

  9. Role of Artificial Intelligence Techniques (Automatic Classifiers) in Molecular Imaging Modalities in Neurodegenerative Diseases.

    PubMed

    Cascianelli, Silvia; Scialpi, Michele; Amici, Serena; Forini, Nevio; Minestrini, Matteo; Fravolini, Mario Luca; Sinzinger, Helmut; Schillaci, Orazio; Palumbo, Barbara

    2017-01-01

    Artificial Intelligence (AI) is a very active computer science research field aiming to develop systems that mimic human intelligence; it is helpful in many human activities, including medicine. In this review we present some examples of the use of AI techniques, in particular automatic classifiers such as the Artificial Neural Network (ANN), Support Vector Machine (SVM), Classification Tree (ClT) and ensemble methods like Random Forest (RF), able to analyze findings obtained by positron emission tomography (PET) or single-photon emission computed tomography (SPECT) scans of patients with neurodegenerative diseases, in particular Alzheimer's disease. We also focus our attention on techniques applied to preprocess data and reduce their dimensionality via feature selection or projection into a more representative domain (Principal Component Analysis - PCA - and Partial Least Squares - PLS - are examples of such methods); this is a crucial step when dealing with medical data, since it is necessary to compress patient information and retain only the most useful part in order to discriminate subjects into normal and pathological classes. The main literature papers on the application of these techniques to classify patients with neurodegenerative disease by extracting data from molecular imaging modalities are reported, showing that the increasing development of computer-aided diagnosis systems is very promising as a contribution to the diagnostic process.
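
    The PCA-style dimensionality reduction step the review highlights can be sketched via the SVD of centered data: a few components capture most of the variance, and subjects are projected into that compressed domain before classification. The data here are synthetic and the sizes illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for scan-derived features: 40 subjects, 500 "voxels",
# generated from 3 latent factors plus noise (sizes are illustrative).
n_subjects, n_voxels = 40, 500
latent = rng.standard_normal((n_subjects, 3))
mixing = rng.standard_normal((3, n_voxels))
X = latent @ mixing + 0.1 * rng.standard_normal((n_subjects, n_voxels))

# PCA via the SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()            # variance ratio per component

# Project onto the first k components: (subjects, voxels) -> (subjects, k).
k = 3
scores = Xc @ Vt[:k].T
```

    The compressed `scores` matrix, rather than the raw voxel data, would then feed an ANN, SVM, or random-forest classifier.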

  10. Venous ulcer review

    PubMed Central

    Bevis, Paul; Earnshaw, Jonothan

    2011-01-01

    Clinical question: What is the best treatment for venous ulcers? Results: Compression aids ulcer healing. Pentoxifylline can aid ulcer healing. Artificial skin grafts are more effective than other skin grafts in helping ulcer healing. Correction of underlying venous incompetence reduces ulcer recurrence. Implementation: Potential pitfalls to avoid are: Failure to exclude underlying arterial disease before application of compression. Unusual-looking ulcers or those slow to heal should be biopsied to exclude malignant transformation. PMID:21673869

  11. Strengthening of Aluminum Alloy 2219 by Thermo-mechanical Treatment

    NASA Astrophysics Data System (ADS)

    Li, Xifeng; Lei, Kun; Song, Peng; Liu, Xinqin; Zhang, Fei; Li, Jianfei; Chen, Jun

    2015-10-01

    The strengthening of aluminum alloy 2219 by thermo-mechanical treatment has been compared with artificial aging. Three simple deformation modes, including pre-stretching, compression, and rolling, were used in the thermo-mechanical treatment. The tensile strength, elongation, fracture features, and precipitated phases were investigated. The results show that the strengthening effect of thermo-mechanical treatment is better than that of artificial aging. In particular, the yield strength significantly increases with only a small decrease in elongation. When the specimen is pre-stretched to 8.0%, the yield strength reaches 385.0 MPa, an increase of 22.2% in comparison with that obtained in the aged condition. The maximum tensile strength of 472.4 MPa is achieved with a 4.0% thickness reduction by compression. The fracture morphology reveals locally ductile and brittle failure mechanisms, with coarse second-phase particles distributed on the fracture surface. The intermediate phases θ″ or θ′ precipitate orthogonally in the matrix after thermo-mechanical treatment. Compared with artificial aging, the cold plastic deformation increases the distribution homogeneity and the volume fraction of the θ″ or θ′ precipitates. These features result in a better strengthening effect.

  12. WIND: Computer program for calculation of three dimensional potential compressible flow about wind turbine rotor blades

    NASA Technical Reports Server (NTRS)

    Dulikravich, D. S.

    1980-01-01

    A computer program is presented which numerically solves an exact, full potential equation (FPE) for three-dimensional, steady, inviscid flow through an isolated wind turbine rotor. The program automatically generates a three-dimensional, boundary-conforming grid and iteratively solves the FPE while fully accounting for both the rotating cascade and Coriolis effects. The numerical techniques incorporated involve rotated, type-dependent finite differencing, a finite volume method, artificial viscosity in conservative form, and successive line overrelaxation combined with a sequential grid refinement procedure to accelerate the iterative convergence rate. Consequently, the WIND program is capable of accurately analyzing incompressible and compressible flows, including those that are locally transonic and terminated by weak shocks. The program can also be used to analyze the flow around isolated aircraft propellers and helicopter rotors in hover as long as the total relative Mach number of the oncoming flow is subsonic.
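
    The successive line overrelaxation (SLOR) at the core of such potential solvers can be sketched on its simplest target, Laplace's equation (the incompressible limit of the full potential equation): solve one grid line at a time as a tridiagonal system and overrelax the update. The rotated, type-dependent differencing needed for transonic flow is omitted, and the grid, boundary data, and relaxation factor are illustrative:

```python
import numpy as np

# SLOR for Laplace's equation on the unit square with phi = x on the
# boundary; the exact discrete solution is the linear potential phi = x
# (uniform flow), which the iteration should reproduce.
n = 21
x = np.linspace(0.0, 1.0, n)
phi = np.zeros((n, n))
phi[0, :], phi[-1, :] = x[0], x[-1]   # Dirichlet data, phi(x, y) = x
phi[:, 0] = x
phi[:, -1] = x
omega = 1.7                           # overrelaxation factor (illustrative)

m = n - 2                             # unknowns per grid line
A = (np.diag(-4.0 * np.ones(m)) +
     np.diag(np.ones(m - 1), 1) +
     np.diag(np.ones(m - 1), -1))     # tridiagonal line operator

for sweep in range(200):
    for i in range(1, n - 1):         # relax one grid line at a time
        rhs = -(phi[i - 1, 1:-1] + phi[i + 1, 1:-1])
        rhs[0] -= phi[i, 0]           # known in-line boundary values
        rhs[-1] -= phi[i, -1]
        line = np.linalg.solve(A, rhs)
        phi[i, 1:-1] += omega * (line - phi[i, 1:-1])

exact = np.tile(x, (n, 1)).T          # exact[i, j] = x[i]
```

    Solving whole lines implicitly propagates boundary information across the grid in one sweep, which is why line relaxation converges faster than point relaxation on stretched grids.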

  13. Global Artificial Boundary Conditions for Computation of External Flow Problems with Propulsive Jets

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon; Abarbanel, Saul; Nordstrom, Jan; Ryabenkii, Viktor; Vatsa, Veer

    1998-01-01

    We propose new global artificial boundary conditions (ABC's) for computation of flows with propulsive jets. The algorithm is based on application of the difference potentials method (DPM). Previously, similar boundary conditions have been implemented for calculation of external compressible viscous flows around finite bodies. The proposed modification substantially extends the applicability range of the DPM-based algorithm. In the paper, we present the general formulation of the problem, describe our numerical methodology, and discuss the corresponding computational results. The particular configuration that we analyze is a slender three-dimensional body with boat-tail geometry and supersonic jet exhaust in a subsonic external flow under zero angle of attack. Similarly to the results obtained earlier for the flows around airfoils and wings, current results for the jet flow case corroborate the superiority of the DPM-based ABC's over standard local methodologies from the standpoints of accuracy, overall numerical performance, and robustness.

  14. Incompressible viscous flow computations for the pump components and the artificial heart

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin

    1992-01-01

    A finite difference, three-dimensional incompressible Navier-Stokes formulation for calculating the flow through turbopump components is utilized. The solution method is based on the pseudo-compressibility approach and uses an implicit upwind differencing scheme together with the Gauss-Seidel line relaxation method. Both steady and unsteady flow calculations can be performed with the current algorithm. Here, the equations are solved in steadily rotating reference frames using the steady-state formulation in order to simulate the flow through a turbopump inducer. The eddy viscosity is computed using an algebraic mixing-length turbulence model. Numerical results are compared with experimental measurements, and good agreement is found between the two.

  15. Loss tolerant speech decoder for telecommunications

    NASA Technical Reports Server (NTRS)

    Prieto, Jr., Jaime L. (Inventor)

    1999-01-01

    A method and device for extrapolating past signal-history data for insertion into missing data segments in order to conceal digital speech frame errors. The extrapolation method uses past-signal history that is stored in a buffer. The method is implemented with a device that utilizes a finite-impulse response (FIR) multi-layer feed-forward artificial neural network that is trained by back-propagation for one-step extrapolation of speech compression algorithm (SCA) parameters. Once a speech connection has been established, the speech compression algorithm device begins sending encoded speech frames. As the speech frames are received, they are decoded and converted back into speech signal voltages. During the normal decoding process, pre-processing of the required SCA parameters will occur and the results stored in the past-history buffer. If a speech frame is detected to be lost or in error, then extrapolation modules are executed and replacement SCA parameters are generated and sent as the parameters required by the SCA. In this way, the information transfer to the SCA is transparent, and the SCA processing continues as usual. The listener will not normally notice that a speech frame has been lost because of the smooth transition between the last-received, lost, and next-received speech frames.
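
    The concealment idea, one-step extrapolation of an SCA parameter from a past-history buffer, can be sketched with a least-squares linear predictor standing in for the patent's FIR feed-forward network; the parameter trace, buffer length, and predictor choice are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# A slowly varying stand-in for one SCA parameter across decoded frames.
history_len = 8
t = np.arange(200)
param = np.sin(0.07 * t) + 0.01 * rng.standard_normal(200)

# Build (history buffer -> next value) pairs from normally decoded frames,
# mimicking the preprocessing stored in the past-history buffer.
Xb = np.array([param[i:i + history_len] for i in range(len(param) - history_len)])
yb = param[history_len:]
w, *_ = np.linalg.lstsq(Xb, yb, rcond=None)   # one-step linear predictor

# A frame is detected lost: extrapolate its parameter from the buffer and
# hand the replacement to the decoder in place of the missing value.
buffer = param[-history_len:]
predicted = buffer @ w
```

    Because the replacement parameter is handed to the speech compression algorithm in place of the missing one, the substitution is transparent to the rest of the decoding pipeline, which is the point of the design.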

  16. On Multi-Dimensional Unstructured Mesh Adaption

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    1999-01-01

    Anisotropic unstructured mesh adaption is developed for a truly multi-dimensional upwind fluctuation splitting scheme, as applied to scalar advection-diffusion. The adaption is performed locally using edge swapping, point insertion/deletion, and nodal displacements. Comparisons are made versus the current state of the art for aggressive anisotropic unstructured adaption, which is based on a posteriori error estimates. Demonstration of both schemes on model problems, with features representative of compressible gas dynamics, shows the present method to be superior to the a posteriori adaption for linear advection. The performance of the two methods is more similar when applied to nonlinear advection, with a difference in the treatment of shocks. The a posteriori adaption can excessively cluster points to a shock, while the present multi-dimensional scheme tends to merely align with a shock, using fewer nodes. As a consequence of this alignment tendency, an implementation of eigenvalue limiting for the suppression of expansion shocks is developed for the multi-dimensional distribution scheme. The differences in the treatment of shocks by the adaption schemes, along with the inherently low levels of artificial dissipation in the fluctuation splitting solver, suggest the present method is a strong candidate for applications to compressible gas dynamics.

  17. Novel approach to the fabrication of an artificial small bone using a combination of sponge replica and electrospinning methods

    NASA Astrophysics Data System (ADS)

    Kim, Yang-Hee; Lee, Byong-Taek

    2011-06-01

    In this study, a novel artificial small bone consisting of ZrO2-biphasic calcium phosphate/polymethylmethacrylate-polycaprolactone-hydroxyapatite (ZrO2-BCP/PMMA-PCL-HAp) was fabricated using a combination of sponge replica and electrospinning methods. To mimic the cancellous bone, the ZrO2/BCP scaffold was composed of three layers, ZrO2, ZrO2/BCP and BCP, fabricated by the sponge replica method. The PMMA-PCL fibers loaded with HAp powder were wrapped around the ZrO2/BCP scaffold using the electrospinning process. To imitate the Haversian canal region of the bone, HAp-loaded PMMA-PCL fibers were wrapped around a steel wire of 0.3 mm diameter. As a result, the bundles of fiber wrapped around the wires imitated the osteon structure of the cortical bone. Finally, the ZrO2/BCP scaffold was surrounded by HAp-loaded PMMA-PCL composite bundles. After removal of the steel wires, the ZrO2/BCP scaffold and bundles of HAp-loaded PMMA-PCL formed an interconnected structure resembling the human bone. Its diameter, compressive strength and porosity were approximately 12 mm, 5 MPa and 70%, respectively, and the viability of MG-63 osteoblast-like cells was determined to be over 90% by the MTT (3-(4, 5-dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide) assay. This artificial bone shows excellent cytocompatibility and is a promising bone regeneration material.

  18. Novel approach to the fabrication of an artificial small bone using a combination of sponge replica and electrospinning methods

    PubMed Central

    Kim, Yang-Hee; Lee, Byong-Taek

    2011-01-01

    In this study, a novel artificial small bone consisting of ZrO2-biphasic calcium phosphate/polymethylmethacrylate-polycaprolactone-hydroxyapatite (ZrO2-BCP/PMMA-PCL-HAp) was fabricated using a combination of sponge replica and electrospinning methods. To mimic the cancellous bone, the ZrO2/BCP scaffold was composed of three layers, ZrO2, ZrO2/BCP and BCP, fabricated by the sponge replica method. The PMMA-PCL fibers loaded with HAp powder were wrapped around the ZrO2/BCP scaffold using the electrospinning process. To imitate the Haversian canal region of the bone, HAp-loaded PMMA-PCL fibers were wrapped around a steel wire of 0.3 mm diameter. As a result, the bundles of fiber wrapped around the wires imitated the osteon structure of the cortical bone. Finally, the ZrO2/BCP scaffold was surrounded by HAp-loaded PMMA-PCL composite bundles. After removal of the steel wires, the ZrO2/BCP scaffold and bundles of HAp-loaded PMMA-PCL formed an interconnected structure resembling the human bone. Its diameter, compressive strength and porosity were approximately 12 mm, 5 MPa and 70%, respectively, and the viability of MG-63 osteoblast-like cells was determined to be over 90% by the MTT (3-(4, 5-dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide) assay. This artificial bone shows excellent cytocompatibility and is a promising bone regeneration material. PMID:27877406

  19. Incorporation of omics analyses into artificial gravity research for space exploration countermeasure development.

    PubMed

    Schmidt, Michael A; Goodwin, Thomas J; Pelligra, Ralph

    The next major steps in human spaceflight include flyby, orbital, and landing missions to the Moon, Mars, and near-Earth asteroids. The first crewed deep space mission is expected to launch in 2022, which affords less than 7 years to address the complex question of whether and how to apply artificial gravity to counter the effects of prolonged weightlessness. Various phenotypic changes are demonstrated during artificial gravity experiments. However, the molecular dynamics (genotype and molecular phenotypes) that underlie these morphological, physiological, and behavioral phenotypes are far more complex than previously understood. Thus, targeted molecular assessment of subjects under various G conditions can be expected to miss important patterns of molecular variance that inform the more general phenotypes typically being measured. Use of omics methods can help detect changes across broad molecular networks as various G-loading paradigms are applied. This will be useful in detecting off-target, or unanticipated, effects of the different gravity paradigms applied to humans or animals. Insights gained from these approaches may eventually be used to inform countermeasure development or refine the deployment of existing countermeasures. This convergence of the omics and artificial gravity research communities may be critical if we are to develop the proper artificial gravity solutions under the severely compressed timelines currently established. Thus, the omics community may offer a unique ability to accelerate discovery, provide new insights, and benefit deep space missions in ways that have not been previously considered.

  20. Review of some vortex relations

    NASA Technical Reports Server (NTRS)

    Krause, E.

    1984-01-01

    The evaluation of the circulation from numerical solutions of the momentum and energy equations is discussed for incompressible and compressible flows. It is shown how artificial damping directly influences the time rate of change of the circulation.

  1. Increasing Lift by Releasing Compressed Air on Suction Side of Airfoil

    NASA Technical Reports Server (NTRS)

    Seewald, F

    1927-01-01

    The investigation was limited chiefly to the region of high angles of attack since it is only in this region that any considerable change in the character of the flow can be expected from such artificial aids. The slot, through which compressed air was blown, was formed by two pieces of sheet steel connected by screws at intervals of about 5 cm. It was intended to regulate the width of the slot by means of these screws. Much more compressed air was required than was originally supposed, hence all the delivery pipes were much too small. This experiment, therefore, is to be regarded as only a preliminary one.

  2. [Novel artificial lamina for prevention of epidural adhesions after posterior cervical laminectomy].

    PubMed

    Lü, Chaoliang; Song, Yueming; Liu, Hao; Liu, Limin; Gong, Quan; Li, Tao; Zeng, Jiancheng; Kong, Qingquan; Pei, Fuxing; Tu, Chongqi; Duan, Hong

    2013-07-01

    To evaluate the application of artificial lamina of multi-amino-acid copolymer (MAACP)/nano-hydroxyapatite (n-HA) in prevention of epidural adhesion and compression of scar tissue after posterior cervical laminectomy. Fifteen 2-year-old male goats [weighing (30 ± 2) kg] were randomly divided into experimental group (n=9) and control group (n=6). In the experimental group, C4 laminectomy was performed, followed by MAACP/n-HA artificial lamina implantations; in the control group, only C4 laminectomy was performed. At 4, 12, and 24 weeks after operation, 2, 2, and 5 goats in the experimental group and 2, 2, and 2 goats in the control group were selected for observation of wound infection, artificial laminar fragmentation and displacement, and its shape; Rydell's degree of adhesion criteria was used to evaluate the adhesion degree between 2 groups. X-ray and CT images were observed; at 24 weeks after operation, CT scan was used to measure the spinal canal area and the sagittal diameter of C3, C4, and C5 vertebrae; 2 normal goats served as the normal group; and MRI was used to assess adhesion and compression of scar tissue on the dura and the nerve root. Then goats were sacrificed and histological observation was carried out. After operation, the wound healed well; no toxicity or elimination reaction was observed. According to Rydell's degree of adhesion criteria, adhesion in the experimental group was significantly slighter than that in the control group (Z=-2.52, P=0.00). X-ray and CT scan showed that no dislocation of artificial lamina occurred, new cervical bone formed in the defect, and bony spinal canal was rebuilt in the experimental group. Defects of C4 vertebral plate and spinous process were observed in the control group.
At 24 weeks, the spinal canal area and sagittal diameter of C4 in the experimental group and normal group were significantly larger than those in the control group (P < 0.05), but no significant difference was found between the experimental group and normal group (P > 0.05). MRI showed cerebrospinal fluid signal was unobstructed and no soft tissue projected into the spinal canal in the experimental group; scar tissue projected into the spinal canal and the dura was compressed by scar tissue in the control group. HE staining and Masson trichrome staining showed that the artificial lamina had no obvious degradation with high integrity, and some new bone formed at the interface between the artificial material and bone in the experimental group; fibrous tissue grew into the defect in the control group. The MAACP/n-HA artificial lamina could maintain good biomechanical properties for a long time in vivo and could effectively prevent the epidural scar from growing into the lamina defect area.

  3. Prediction of zeolite-cement-sand unconfined compressive strength using polynomial neural network

    NASA Astrophysics Data System (ADS)

    MolaAbasi, H.; Shooshpasha, I.

    2016-04-01

    The improvement of local soils with cement and zeolite can provide great benefits, including strengthening slopes in slope stability problems, stabilizing problematic soils and preventing soil liquefaction. Recently, dosage methodologies are being developed for improved soils based on a rational criterion as it exists in concrete technology. There are numerous earlier studies showing the possibility of relating Unconfined Compressive Strength (UCS) and cemented sand (CS) parameters (voids/cement ratio) through power-function fits. Taking into account the fact that the existing equations are incapable of estimating UCS well for zeolite-cemented sand (ZCS) mixtures, artificial intelligence methods are used for forecasting it. A polynomial-type neural network is applied to estimate the UCS from more simply determined index properties such as zeolite and cement content, porosity, and curing time. In order to assess the merits of the proposed approach, a total of 216 unconfined compression tests were performed. A comparison is carried out between the experimentally measured UCS and the predictions in order to evaluate the performance of the current method. The results demonstrate that the generalized polynomial-type neural network has a great ability for prediction of the UCS. Finally, a sensitivity analysis of the polynomial model is applied to study the influence of input parameters on the model output. The sensitivity analysis reveals that cement and zeolite content have significant influence on predicting UCS.
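    A minimal stand-in for the paper's polynomial-type (GMDH) network is ordinary least squares on a quadratic polynomial basis over the index properties. The data below is synthetic with a made-up but physically plausible trend; the paper fits 216 real unconfined compression tests.

```python
import numpy as np

# Quadratic polynomial regression mapping index properties (cement content,
# zeolite content, porosity, curing time) to UCS. Assumption: synthetic data
# stands in for the paper's 216 laboratory tests.

rng = np.random.default_rng(0)

def poly_features(X):
    # [1, x_i, x_i * x_j] quadratic basis, the building block of GMDH layers
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

# Synthetic "measurements": UCS rises with cement content and curing time,
# falls with porosity (illustrative trend, not fitted to real data).
X = rng.uniform([0.05, 0.0, 0.3, 7], [0.20, 0.3, 0.5, 90], size=(216, 4))
ucs = (40 * X[:, 0] + 10 * X[:, 1] - 8 * X[:, 2] + 0.02 * X[:, 3]
       + 0.5 * X[:, 0] * X[:, 3] + rng.normal(0, 0.05, 216))

coef, *_ = np.linalg.lstsq(poly_features(X), ucs, rcond=None)
pred = poly_features(X) @ coef
r2 = 1 - np.sum((ucs - pred) ** 2) / np.sum((ucs - np.mean(ucs)) ** 2)
print(round(r2, 3))  # fit quality of the quadratic surrogate
```

    A real GMDH network would grow such quadratic units layer by layer, keeping only the best-performing ones; the single-layer fit above shows the basic building block.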

  4. A Framework of Hyperspectral Image Compression using Neural Networks

    DOE PAGES

    Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; ...

    2015-01-01

    Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same image is taken multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.
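    As a toy version of ANN-based spectral compression, a linear autoencoder (equivalent to PCA via the SVD) compresses each pixel's spectrum to a few coefficients and reconstructs it. The synthetic cube below is an assumed three-endmember mixture, not data from the paper.

```python
import numpy as np

# Spectral compression sketch: a linear autoencoder (PCA via the SVD) stands
# in for the paper's trained ANN compressor. Assumption: the "hyperspectral"
# cube is a synthetic random mixture of three Gaussian endmember spectra.

rng = np.random.default_rng(1)
bands, pixels, k = 64, 500, 3

wl = np.linspace(0, 1, bands)
endmembers = np.stack([np.exp(-((wl - c) / 0.15) ** 2) for c in (0.2, 0.5, 0.8)])
abund = rng.dirichlet(np.ones(3), size=pixels)       # per-pixel abundances
cube = abund @ endmembers + rng.normal(0, 0.01, (pixels, bands))

mean = cube.mean(axis=0)
U, s, Vt = np.linalg.svd(cube - mean, full_matrices=False)
codes = (cube - mean) @ Vt[:k].T          # encoder: 64 bands -> k numbers
recon = codes @ Vt[:k] + mean             # decoder: k numbers -> 64 bands

rel_err = float(np.linalg.norm(cube - recon) / np.linalg.norm(cube))
print(round(rel_err, 4))  # small reconstruction error at ~21x compression
```

    Because the synthetic cube is (noisy) rank three, three components suffice; a trained nonlinear ANN plays the same encoder/decoder role on real scenes where the structure is not linear.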

  5. Effect of different sintering temperature on fly ash based geopolymer artificial aggregate

    NASA Astrophysics Data System (ADS)

    Abdullah, Alida; Abdullah, Mohd Mustafa Al Bakri; Hussin, Kamarudin; Tahir, Muhammad Faheem Mohd

    2017-04-01

    This research was conducted to study the mechanical properties and morphology of fly ash based geopolymer as artificial aggregate at different sintering temperatures. The raw materials used are fly ash, sodium hydroxide, sodium silicate, geopolymer artificial aggregate, Ordinary Portland Cement (OPC), coarse aggregate and fine aggregate. The research starts with the preparation of the geopolymer artificial aggregate. Then, the geopolymer artificial aggregate is sintered at six different temperatures, namely 400°C, 500°C, 600°C, 700°C, 800°C and 900°C, to determine at which temperature the geopolymer artificial aggregate becomes a lightweight aggregate. To characterize the geopolymer artificial aggregate, X-ray Diffraction (XRD) and X-Ray Fluorescence (XRF) analyses were done. The tests and analyses for the artificial aggregate are the aggregate impact test, specific gravity test and Scanning Electron Microscopy (SEM). After that, concrete is produced with two different types of aggregate, namely coarse aggregate and geopolymer artificial aggregate. The tests for the concrete are the compressive strength test, water absorption test and density test. The results obtained are compared and analysed.

  6. Entropy Splitting for High Order Numerical Simulation of Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Sandham, N. D.; Yee, H. C.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    A stable high order numerical scheme for direct numerical simulation (DNS) of shock-free compressible turbulence is presented. The method is applicable to general geometries. It contains no upwinding, artificial dissipation, or filtering. Instead the method relies on the stabilizing mechanisms of an appropriate conditioning of the governing equations and the use of compatible spatial difference operators for the interior points (interior scheme) as well as the boundary points (boundary scheme). An entropy splitting approach splits the inviscid flux derivatives into conservative and non-conservative portions. The spatial difference operators satisfy a summation by parts condition leading to a stable scheme (combined interior and boundary schemes) for the initial boundary value problem using a generalized energy estimate. A Laplacian formulation of the viscous and heat conduction terms on the right hand side of the Navier-Stokes equations is used to ensure that any tendency to odd-even decoupling associated with central schemes can be countered by the fluid viscosity. A special formulation of the continuity equation is used, based on similar arguments. The resulting methods are able to minimize spurious high frequency oscillation producing nonlinear instability associated with pure central schemes, especially for long time integration simulation such as DNS. For validation purposes, the methods are tested in a DNS of compressible turbulent plane channel flow at a friction Mach number of 0.1 where a very accurate turbulence data base exists. It is demonstrated that the methods are robust in terms of grid resolution, and in good agreement with incompressible channel data, as expected at this Mach number. Accurate turbulence statistics can be obtained with moderate grid sizes. Stability limits on the range of the splitting parameter are determined from numerical tests.
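    The stability argument in this abstract rests on spatial difference operators satisfying a summation-by-parts (SBP) property, D = inv(H) Q with Q + Qᵀ = diag(-1, 0, …, 0, 1), the discrete analogue of integration by parts. The second-order operator below (central interior, one-sided boundary closures) is the classic minimal example of such a pair.

```python
import numpy as np

# Minimal second-order summation-by-parts (SBP) first-derivative operator:
# central differencing in the interior, one-sided closures at the boundary,
# with norm matrix H acting as the quadrature that makes the energy estimate
# work. This is a textbook example, not the specific operators of the paper.

n, dx = 20, 1.0 / 19

H = dx * np.diag([0.5] + [1.0] * (n - 2) + [0.5])        # SBP norm (quadrature)
Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
Q[0, 0], Q[-1, -1] = -0.5, 0.5                           # boundary closure
D = np.linalg.inv(H) @ Q                                 # derivative operator

u = np.sin(np.linspace(0, 1, n))
v = np.cos(np.linspace(0, 1, n))

# Discrete integration by parts: (u, Dv)_H + (Du, v)_H = u_N v_N - u_0 v_0
lhs = u @ H @ D @ v + (D @ u) @ H @ v
rhs = u[-1] * v[-1] - u[0] * v[0]
print(lhs, rhs)  # the two sides agree to machine precision
```

    This identity is what yields the generalized energy estimate mentioned in the abstract: boundary terms are controlled exactly, so the combined interior and boundary scheme is provably stable.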

  7. Plasma waves associated with the AMPTE artificial comet

    NASA Technical Reports Server (NTRS)

    Gurnett, D. A.; Anderson, R. R.; Haeusler, B.; Haerendel, G.; Bauer, O. H.

    1985-01-01

    Numerous plasma wave effects were detected by the AMPTE/IRM spacecraft during the artificial comet experiment on December 27, 1984. As the barium ion cloud produced by the explosion expanded over the spacecraft, emissions at the electron plasma frequency and ion plasma frequency provided a determination of the local electron density. The electron density in the diamagnetic cavity produced by the ion cloud reached a peak of more than 5 × 10^5 per cubic centimeter, then decayed smoothly as the cloud expanded, varying approximately as t^-2. As the cloud began to move due to interactions with the solar wind, a region of compressed plasma was encountered on the upstream side of the diamagnetic cavity. The peak electron density in the compression region was about 1.5 × 10^4 per cubic centimeter. Later, a very intense (140 mV/m) broadband burst of electrostatic noise was encountered on the sunward side of the compression region. This noise has characteristics very similar to noise observed in the earth's bow shock, and is believed to be a shocklike interaction produced by an ion beam-plasma instability between the nearly stationary barium ions and the streaming solar wind protons.
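    The density determination described above uses the standard relation between electron density and the electron plasma frequency, which reduces to the rule of thumb f_pe [Hz] ≈ 8980 √(n_e [cm⁻³]). A direct evaluation for the two peak densities quoted in the abstract:

```python
import math

# Electron plasma frequency f_pe = (1 / 2*pi) * sqrt(n_e e^2 / (eps0 m_e)),
# evaluated for the peak densities quoted in the abstract (CODATA constants).

E_CHARGE = 1.602176634e-19   # elementary charge, C
E_MASS = 9.1093837015e-31    # electron mass, kg
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def plasma_frequency_hz(n_e_per_cm3):
    n_m3 = n_e_per_cm3 * 1e6  # cm^-3 -> m^-3
    return math.sqrt(n_m3 * E_CHARGE**2 / (EPS0 * E_MASS)) / (2 * math.pi)

print(plasma_frequency_hz(5e5) / 1e6)    # diamagnetic cavity peak, ~6.3 MHz
print(plasma_frequency_hz(1.5e4) / 1e6)  # compression region peak, ~1.1 MHz
```

    Measuring the emission line at f_pe and inverting this relation is how the spacecraft instrument turns a wave observation into the local electron density.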

  8. Retention of cardiopulmonary resuscitation skills after hands-only training versus conventional training in novices: a randomized controlled trial.

    PubMed

    Kim, Young Joon; Cho, Youngsuk; Cho, Gyu Chong; Ji, Hyun Kyung; Han, Song Yi; Lee, Jin Hyuck

    2017-06-01

    Cardiopulmonary resuscitation (CPR) training can improve performance during simulated cardiac arrest; however, retention of skills after training remains uncertain. Recently, hands-only CPR has been shown to be as effective as conventional CPR. The purpose of this study is to compare the retention rate of CPR skills in laypersons after hands-only or conventional CPR training. Participants were randomly assigned to 1 of 2 CPR training methods: 80 minutes of hands-only CPR training or 180 minutes of conventional CPR training. Each participant's CPR skills were evaluated at the end of training and 3 months thereafter using the Resusci Anne manikin with skill-reporting software. In total, 252 participants completed training; there were 125 in the hands-only CPR group and 127 in the conventional CPR group. After 3 months, 118 participants were randomly selected to complete a post-training test. The hands-only CPR group showed a significant decrease in average compression rate (P=0.015), average compression depth (P=0.031), and proportion of adequate compression depth (P=0.011). In contrast, there was no difference in the skills of the conventional CPR group after 3 months. Conventional CPR training appears to be more effective for the retention of chest compression skills than hands-only CPR training; however, the retention of artificial ventilation skills after conventional CPR training is poor.

  9. [Preparation of nano-nacre artificial bone].

    PubMed

    Chen, Jian-ting; Tang, Yong-zhi; Zhang, Jian-gang; Wang, Jian-jun; Xiao, Ying

    2008-12-01

    To assess the improvements in the properties of nano-nacre artificial bone prepared on the basis of nacre/polylactic acid composite artificial bone and its potential for clinical use. The compound of nano-scale nacre powder and poly-D,L-lactic acid (PDLLA) was used to prepare the cylindrical hollow artificial bone, whose properties including raw material powder scale, pore size, porosity and biomechanical characteristics were compared with another artificial bone made of micron-scale nacre powder and PDLLA. Scanning electron microscopy showed that the average particle size of the nano-nacre powder was 50.4 ± 12.4 nm, and the average pore size of the artificial bone prepared using nano-nacre powder was 215.7 ± 77.5 μm, as compared with the particle size of the micron-scale nacre powder of 5.0 ± 3.0 μm and the pore size of the resultant artificial bone of 205.1 ± 72.0 μm. The porosities of the nano-nacre artificial bone and the micron-nacre artificial bone were (65.4 ± 2.9)% and (53.4 ± 2.2)%, respectively, and the two artificial bones had comparable compressive strength and Young's modulus, but the flexural strength of the nano-nacre artificial bone was lower than that of the micro-nacre artificial bone. The nano-nacre artificial bone allows better biodegradability and possesses appropriate pore size, porosity and biomechanical properties for use as a promising material in bone tissue engineering.

  10. Neural network wavelet technology: A frontier of automation

    NASA Technical Reports Server (NTRS)

    Szu, Harold

    1994-01-01

    Neural networks are an outgrowth of interdisciplinary studies concerning the brain. These studies are guiding the field of Artificial Intelligence towards the so-called 6th Generation Computer. Enormous amounts of resources have been poured into R&D. Wavelet Transforms (WT) have replaced Fourier Transforms (FT) in wideband transient cases since the discovery of WT in 1985. The list of successful applications includes the following: earthquake prediction; radar identification; speech recognition; stock market forecasting; FBI fingerprint image compression; and telecommunication ISDN data compression.
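    A single-level Haar transform is the smallest illustration of why wavelets suit the compression applications listed above: it concentrates a piecewise-smooth signal's energy into few coefficients, so most details can be zeroed with little reconstruction error. (The FBI fingerprint standard, WSQ, uses a far more elaborate multi-level wavelet scheme.)

```python
import numpy as np

# Single-level Haar wavelet transform and threshold-based compression.
# The signal is a smooth oscillation plus a jump placed inside one Haar pair,
# so exactly one detail coefficient carries the transient.

def haar_forward(x):
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (trend)
    diff = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass (detail)
    return avg, diff

def haar_inverse(avg, diff):
    x = np.empty(avg.size * 2)
    x[0::2] = (avg + diff) / np.sqrt(2)
    x[1::2] = (avg - diff) / np.sqrt(2)
    return x

t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * t) + (np.arange(256) >= 129)  # smooth part + jump
avg, diff = haar_forward(signal)

kept = np.abs(diff) > 0.05                    # keep only significant details
compressed_diff = np.where(kept, diff, 0.0)
recon = haar_inverse(avg, compressed_diff)

err = float(np.max(np.abs(signal - recon)))
print(int(kept.sum()), err)  # one detail coefficient survives; error stays small
```

    Keeping the trend half plus a handful of details halves the coefficient count while bounding the pointwise error, which is the essence of transform coding.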

  11. 3D Printed Prisms with Tunable Dispersion for the THz Frequency Range

    NASA Astrophysics Data System (ADS)

    Busch, Stefan F.; Castro-Camus, Enrique; Beltran-Mejia, Felipe; Balzer, Jan C.; Koch, Martin

    2018-04-01

    Here, we present a 3D printed prism for THz waves made out of an artificial dielectric material in which the dispersion can be tuned by external compression. The artificial material consists of thin dielectric layers with variable air spacings, produced using a fused deposition modeling process. The material properties are carefully characterized, and the functionality of the prisms is in good agreement with the underlying theory. These prisms are durable, lightweight, inexpensive, and easy to produce.
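    Why compression tunes the dispersion: for layer spacings far below the wavelength, a stack of dielectric sheets and air gaps behaves as an effective medium; for the electric field parallel to the layers, eps_eff = f·eps_d + (1 − f), where f is the dielectric fill factor. Squeezing the air gaps raises f and hence the effective refractive index. The values below are illustrative assumptions, not the paper's measured material data.

```python
import math

# Effective-medium sketch for a layered artificial dielectric (E parallel to
# the layers). Assumption: eps_d = 2.56 is a typical printed-polymer
# permittivity, and the layer/gap thicknesses are illustrative.

def n_eff_parallel(eps_d, t_layer_um, t_air_um):
    f = t_layer_um / (t_layer_um + t_air_um)  # dielectric fill factor
    return math.sqrt(f * eps_d + (1.0 - f))

eps_d = 2.56
for gap in (300.0, 200.0, 100.0):  # air gap shrinking under compression, in um
    print(gap, round(n_eff_parallel(eps_d, 200.0, gap), 3))
```

    The effective index rises monotonically as the gap shrinks, which is the mechanism that lets external compression tune the prism's deflection angle.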

  12. 3D Printed Prisms with Tunable Dispersion for the THz Frequency Range

    NASA Astrophysics Data System (ADS)

    Busch, Stefan F.; Castro-Camus, Enrique; Beltran-Mejia, Felipe; Balzer, Jan C.; Koch, Martin

    2018-06-01

    Here, we present a 3D printed prism for THz waves made out of an artificial dielectric material in which the dispersion can be tuned by external compression. The artificial material consists of thin dielectric layers with variable air spacings, produced using a fused deposition modeling process. The material properties are carefully characterized, and the functionality of the prisms is in good agreement with the underlying theory. These prisms are durable, lightweight, inexpensive, and easy to produce.

  13. Numerical techniques for solving nonlinear instability problems in smokeless tactical solid rocket motors. [finite difference technique

    NASA Technical Reports Server (NTRS)

    Baum, J. D.; Levine, J. N.

    1980-01-01

    The selection of a satisfactory numerical method for calculating the propagation of steep-fronted, shock-like waveforms in a solid rocket motor combustion chamber is discussed. A number of different numerical schemes were evaluated by comparing the results obtained for three problems: the shock tube problem, the linear wave equation, and nonlinear wave propagation in a closed tube. The most promising method, a combination of the Lax-Wendroff, hybrid and artificial compression techniques, was incorporated into an existing nonlinear instability program. The capability of the modified program to treat steep-fronted wave instabilities in low-smoke tactical motors was verified by solving a number of motor test cases with disturbance amplitudes as high as 80% of the mean pressure.
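    The Lax-Wendroff scheme named in this record is the classic second-order method for wave propagation; on its own it oscillates at steep fronts, which is exactly what the hybrid and artificial compression corrections are there to control. A minimal sketch on the linear advection equation u_t + a·u_x = 0, where its second-order accuracy on smooth profiles is easy to verify:

```python
import numpy as np

# Lax-Wendroff step for linear advection on a periodic domain. This is only
# the base scheme from the abstract; the hybrid/ACM corrections that control
# oscillations at steep fronts are not included.

def lax_wendroff_step(u, a, dt, dx):
    c = a * dt / dx  # Courant number
    return (u
            - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))
            + 0.5 * c**2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1)))

n, a, cfl = 200, 1.0, 0.8
dx = 1.0 / n
dt = cfl * dx / a
x = np.linspace(0, 1, n, endpoint=False)
u = np.sin(2 * np.pi * x)  # smooth profile: second-order accurate, no fronts

steps = int(round(1.0 / dt))      # advect once around the periodic domain
for _ in range(steps):
    u = lax_wendroff_step(u, a, dt, dx)

err = float(np.max(np.abs(u - np.sin(2 * np.pi * x))))
print(err)  # small phase/amplitude error after one full period
```

    Replacing the sine with a step profile makes the scheme's trailing oscillations visible, motivating the hybrid switch and Harten's artificial compression treatment the abstract combines with it.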

  14. Inflammatory cascades mediate synapse elimination in spinal cord compression

    PubMed Central

    2014-01-01

    Background Cervical compressive myelopathy (CCM) is caused by chronic spinal cord compression due to spondylosis, a degenerative disc disease, and ossification of the ligaments. Tip-toe walking Yoshimura (twy) mice are reported to be an ideal animal model for CCM-related neuronal dysfunction, because they develop spontaneous spinal cord compression without any artificial manipulation. Previous histological studies showed that neurons are lost due to apoptosis in CCM, but the mechanism underlying this neurodegeneration was not fully elucidated. The purpose of this study was to investigate the pathophysiology of CCM by evaluating the global gene expression of the compressed spinal cord and comparing the transcriptome analysis with the physical and histological findings in twy mice. Methods Twenty-week-old twy mice were divided into two groups according to the magnetic resonance imaging (MRI) findings: a severe compression (S) group and a mild compression (M) group. The transcriptome was analyzed by microarray and RT-PCR. The cellular pathophysiology was examined by immunohistological analysis and immuno-electron microscopy. Motor function was assessed by Rotarod treadmill latency and stride-length tests. Results Severe cervical calcification caused spinal canal stenosis and low functional capacity in twy mice. The microarray analysis revealed 215 genes that showed significantly different expression levels between the S and the M groups. Pathway analysis revealed that genes expressed at higher levels in the S group were enriched for terms related to the regulation of inflammation in the compressed spinal cord. M1 macrophage-dominant inflammation was present in the S group, and cysteine-rich protein 61 (Cyr61), an inducer of M1 macrophages, was markedly upregulated in these spinal cords. Furthermore, C1q, which initiates the classical complement cascade, was more upregulated in the S group than in the M group. 
The confocal and electron microscopy observations indicated that classically activated microglia/macrophages had migrated to the compressed spinal cord and eliminated synaptic terminals. Conclusions We revealed the detailed pathophysiology of the inflammatory response in an animal model of chronic spinal cord compression. Our findings suggest that complement-mediated synapse elimination is a central mechanism underlying the neurodegeneration in CCM. PMID:24589419

  15. Low Dissipative High Order Shock-Capturing Methods Using Characteristic-Based Filters

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sandham, N. D.; Djomehri, M. J.

    1998-01-01

    An approach which closely maintains the non-dissipative nature of classical fourth- or higher-order spatial differencing away from shock waves and steep gradient regions, while being capable of accurately capturing discontinuities, steep gradients and fine-scale turbulent structures in a stable and efficient manner, is described. The approach is a generalization of the method of Gustafsson and Olsson and the artificial compression method (ACM) of Harten. Spatially non-dissipative fourth- or higher-order compact and non-compact spatial differencings are used as the base schemes. Instead of applying a scalar filter as in Gustafsson and Olsson, an ACM-like term is used to signal the appropriate amount of second- or third-order TVD or ENO types of characteristic-based numerical dissipation. This term acts as a characteristic filter to minimize numerical dissipation for the overall scheme. For time-accurate computations, time discretizations with low dissipation are used. Numerical experiments on 2-D vortical flows, vortex-shock interactions and compressible spatially and temporally evolving mixing layers showed that the proposed schemes have the desired property with only a 10% increase in operations count over standard second-order TVD schemes. Aside from the ability to accurately capture shock-turbulence interaction flows, this approach is also capable of accurately preserving vortex convection. Higher accuracy is achieved with fewer grid points when compared to that of standard second-order TVD or ENO schemes. To demonstrate the applicability of these schemes in sustaining turbulence where shock waves are absent, a simulation of 3-D compressible turbulent channel flow in a small domain is conducted.

  16. Low Dissipative High Order Shock-Capturing Methods using Characteristic-Based Filters

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sandham, N. D.; Djomehri, M. J.

    1998-01-01

    An approach which closely maintains the non-dissipative nature of classical fourth- or higher-order spatial differencing away from shock waves and steep gradient regions, while being capable of accurately capturing discontinuities, steep gradients and fine-scale turbulent structures in a stable and efficient manner, is described. The approach is a generalization of the method of Gustafsson and Olsson and the artificial compression method (ACM) of Harten. Spatially non-dissipative fourth- or higher-order compact and non-compact spatial differencings are used as the base schemes. Instead of applying a scalar filter as in Gustafsson and Olsson, an ACM-like term is used to signal the appropriate amount of second- or third-order TVD or ENO types of characteristic-based numerical dissipation. This term acts as a characteristic filter to minimize numerical dissipation for the overall scheme. For time-accurate computations, time discretizations with low dissipation are used. Numerical experiments on 2-D vortical flows, vortex-shock interactions and compressible spatially and temporally evolving mixing layers showed that the proposed schemes have the desired property with only a 10% increase in operations count over standard second-order TVD schemes. Aside from the ability to accurately capture shock-turbulence interaction flows, this approach is also capable of accurately preserving vortex convection. Higher accuracy is achieved with fewer grid points when compared to that of standard second-order TVD or ENO schemes. To demonstrate the applicability of these schemes in sustaining turbulence where shock waves are absent, a simulation of 3-D compressible turbulent channel flow in a small domain is conducted.

  17. Experimental Study on Artificial Cemented Sand Prepared with Ordinary Portland Cement with Different Contents.

    PubMed

    Li, Dongliang; Liu, Xinrong; Liu, Xianshan

    2015-07-02

    Artificial cemented sand test samples were prepared by using ordinary Portland cement (OPC) as the cementing agent. Through uniaxial compression tests and consolidated drained triaxial compression tests, the stress-strain curves of the artificial cemented sand with different cementing agent contents (0.01, 0.03, 0.05 and 0.08) under various confining pressures (0.00 MPa, 0.25 MPa, 0.50 MPa and 1.00 MPa) were obtained. Based on the test results, the effect of the cementing agent content (Cv) on the physical and mechanical properties of the artificial cemented sand was analyzed, and the Mohr-Coulomb strength theory was modified by using Cv. The research reveals that when Cv is high (e.g., Cv = 0.03, 0.05 or 0.08), the stress-strain curves of the samples indicate a strain softening behavior; under the same confining pressure, as Cv increases, both the peak strength and residual strength of the samples show a significant increase. When Cv is low (e.g., Cv = 0.01), the stress-strain curves of the samples indicate strain hardening behavior. From the test data, a function relating Cv (the cementing agent content) to c' (the cohesion of the sample) and Δϕ' (the increment of the angle of shearing resistance) is obtained. Furthermore, through modification of the Mohr-Coulomb strength theory, the effect of cementing agent content on the strength of the cemented sand is demonstrated.
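    The modification described above enters the Mohr-Coulomb criterion tau = c' + sigma_n·tan(phi') by making the cohesion c' and a friction-angle increment Δϕ' functions of the cementing agent content Cv. The linear dependencies in the sketch below are placeholder assumptions; the paper fits its own functions of Cv to the triaxial data.

```python
import math

# Mohr-Coulomb shear strength with Cv-dependent cohesion and friction-angle
# increment. Assumption: the linear c'(Cv) and delta_phi'(Cv) laws and all
# numeric coefficients are illustrative, not the paper's fitted values.

def shear_strength(sigma_n_mpa, cv, phi0_deg=33.0,
                   c_per_cv=20.0, dphi_per_cv=60.0):
    c = c_per_cv * cv                              # cohesion from cementation, MPa
    phi = math.radians(phi0_deg + dphi_per_cv * cv)  # friction angle + increment
    return c + sigma_n_mpa * math.tan(phi)

for cv in (0.01, 0.03, 0.05, 0.08):  # cement contents tested in the paper
    print(cv, round(shear_strength(0.5, cv), 3))
```

    With any monotone c'(Cv) and Δϕ'(Cv), the predicted strength grows with cement content, matching the trend reported for the peak and residual strengths.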

  18. [Evaluation of mechanical properties of four kinds of composite resins for inlay].

    PubMed

    Jiang, Ling-ling; Liu, Hong; Wang, Jin-rui

    2011-04-01

    To evaluate the compressive strength, wear resistance, hardness, and soaking fatigue of four composite resins for inlays: Ceramage, Surefil, Solitaire 2, and Filtek(TM) Z350. Samples were prepared for the compression, hardness, and wear tests, and additional samples were immersed in artificial saliva for 2 months for the immersion test. An electronic universal testing machine was used to measure compressive strength, hardness was quantified by the micro-Vickers hardness test, and a wear tester was used for the wear test. Scanning electron microscopy (SEM) was used to analyze the microstructures of the worn surfaces of the samples. All data were analyzed with the SPSS 17.0 software package. The compressive strength of Surefil was the highest and was significantly greater than that of the other three resins before soaking (P<0.05); after soaking, there was no significant difference among the composite resins (P>0.05). The hardness of Surefil was also the highest, and significant differences were found among the materials before soaking (P<0.05); after soaking, no significant difference remained between the hardness of Surefil and Filtek(TM) Z350 (P>0.05). The compressive strength and hardness of all four materials decreased after soaking in artificial saliva, but only the compressive strength of Filtek(TM) Z350 showed no significant change after immersion (P>0.05); for the other three materials the change was significant (P<0.05). A significant relationship was observed between wear and hardness for three of the materials (P<0.05). SEM observation showed abrasive wear in all four materials; except for Ceramage, the composite resins also showed adhesive wear. The mechanical properties of Surefil are the best, making it suitable for fabricating posterior inlays, while Filtek(TM) Z350 has the best resistance to fatigue.

  19. The effect of texture granularity on texture synthesis quality

    NASA Astrophysics Data System (ADS)

    Golestaneh, S. Alireza; Subedar, Mahesh M.; Karam, Lina J.

    2015-09-01

    Natural and artificial textures occur frequently in images and in video sequences. Image/video coding systems based on texture synthesis can make use of a reliable texture synthesis quality assessment method in order to improve the compression performance in terms of perceived quality and bit-rate. Existing objective visual quality assessment methods do not perform satisfactorily when predicting the synthesized texture quality. In our previous work, we showed that texture regularity can be used as an attribute for estimating the quality of synthesized textures. In this paper, we study the effect of another texture attribute, namely texture granularity, on the quality of synthesized textures. For this purpose, subjective studies are conducted to assess the quality of synthesized textures with different levels (low, medium, high) of perceived texture granularity using different types of texture synthesis methods.

  20. A transcutaneous energy transmission system for artificial heart adapting to changing impedance.

    PubMed

    Fu, Yang; Hu, Liang; Ruan, Xiaodong; Fu, Xin

    2015-04-01

    This article presents a coil-coupling-based transcutaneous energy transmission system (TETS) for wirelessly powering an implanted artificial heart. Maintaining high efficiency is especially important for a TETS, yet it is usually difficult because the transmission impedance changes in practice, commonly owing to power-requirement variations with different body movements and to coil-couple malposition accompanying skin peristalsis. The TETS introduced in this article is designed around a class-E power amplifier (E-PA), whose efficiency exceeds 95% when its load is kept within a certain range. A coupled network based on parallel-series capacitors, combining resonance-matching and impedance-compressing functions, is proposed to enhance the energy-transmission efficiency and capacity of the coil-couple through resonance, while compressing the range over which the transmission impedance varies so as to meet the load requirements of the E-PA and thus keep the overall TETS efficiency high. An analytical model of the designed TETS is built to analyze the effect of the network and to provide a basis for the subsequent parameter determination. Corresponding algorithms are then provided to determine the optimal TETS parameters for good performance in both resonance matching and impedance compression. The design is validated by a series of experiments showing that the TETS can transmit a wide range of power with a total efficiency of at least 70%, and commonly beyond 80%, even when the coil-couple is seriously malpositioned. The proposed design methodology can be applied to any existing E-PA-based TETS to improve its performance in actual applications. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  1. Recycling ground granulated blast furnace slag as cold bonded artificial aggregate partially used in self-compacting concrete.

    PubMed

    Gesoğlu, Mehmet; Güneyisi, Erhan; Mahmood, Swara Fuad; Öz, Hatice Öznur; Mermerdaş, Kasım

    2012-10-15

    Ground granulated blast furnace slag (GGBFS), a by-product of the iron industry, was recycled as artificial coarse aggregate through a cold-bonding pelletization process. The artificial slag aggregates (ASA) partially replaced the natural coarse aggregates in the production of self-compacting concrete (SCC). Moreover, fly ash (FA), one of the most widely used mineral admixtures in the concrete industry, was incorporated as part of the total binder content to impart the desired fluidity to the SCCs. A total of six concrete mixtures with various ASA replacement levels (0%, 20%, 40%, 60%, and 100%) were designed with a water-to-binder (w/b) ratio of 0.32. Fresh properties of the SCCs were evaluated through slump-flow time, flow diameter, V-funnel flow time, and L-box filling-height ratio. The compressive strength of the hardened SCCs was determined after 28 days of curing. Increasing the ASA replacement level decreased the amount of superplasticizer required to achieve a constant slump-flow diameter. Moreover, the passing ability and viscosity of the SCCs improved with increasing ASA content. The maximum compressive strength was achieved for the SCC with 60% ASA replacement. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Finite-Difference Lattice Boltzmann Scheme for High-Speed Compressible Flow: Two-Dimensional Case

    NASA Astrophysics Data System (ADS)

    Gan, Yan-Biao; Xu, Ai-Guo; Zhang, Guang-Cai; Zhang, Ping; Zhang, Lei; Li, Ying-Jun

    2008-07-01

    Lattice Boltzmann (LB) modeling of high-speed compressible flows has long been attempted by various authors. A common weakness of most previous models is instability when the Mach number of the flow is large. In this paper we present a finite-difference LB model that works for flows with flexible ratios of specific heats and a wide range of Mach numbers, from 0 to 30 or higher. In addition to the discrete-velocity model of Watari [Physica A 382 (2007) 502], a modified Lax-Wendroff finite-difference scheme and an artificial viscosity are introduced; the combination of the finite-difference scheme and the added artificial viscosity must balance numerical stability against accuracy. The proposed model is validated by recovering the results of well-known benchmark tests: shock tubes and shock reflections. The new model may be used to track shock waves and/or to study the non-equilibrium processes in the transition between regular and Mach reflections of shock waves.
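
    The role of the artificial viscosity can be illustrated with a minimal 1-D sketch (linear advection rather than the LB model itself, and an assumed viscosity coefficient): a plain Lax-Wendroff step produces over/undershoots at a discontinuity, and an added second-difference viscosity trades some sharpness for stability:

```python
def lax_wendroff_step(u, cfl, eps=0.0):
    """One Lax-Wendroff step for u_t + a*u_x = 0 on a periodic grid,
    with an optional second-difference artificial-viscosity term eps."""
    n = len(u)
    out = [0.0] * n
    for j in range(n):
        um, up = u[j - 1], u[(j + 1) % n]
        d2 = up - 2.0 * u[j] + um
        out[j] = u[j] - 0.5 * cfl * (up - um) + 0.5 * cfl * cfl * d2 + eps * d2
    return out

# Advect a step profile: plain LW oscillates at the jump; the
# artificial viscosity damps the overshoot at the cost of smearing.
u0 = [1.0 if 5 <= j < 15 else 0.0 for j in range(40)]
plain, damped = list(u0), list(u0)
for _ in range(20):
    plain = lax_wendroff_step(plain, cfl=0.5)
    damped = lax_wendroff_step(damped, cfl=0.5, eps=0.2)
```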

  3. Merging of the Dirac points in electronic artificial graphene

    NASA Astrophysics Data System (ADS)

    Feilhauer, J.; Apel, W.; Schweitzer, L.

    2015-12-01

    Theory predicts that graphene under uniaxial compressive strain in the armchair direction should undergo a topological phase transition from a semimetal into an insulator. Owing to the change of the hopping integrals under compression, both Dirac points shift away from the corners of the Brillouin zone towards each other. For sufficiently large strain, the Dirac points merge and an energy gap appears. However, such a topological phase transition has not yet been observed in normal graphene (due to its large stiffness), nor in any other electronic system. We show numerically and analytically that such a merging of the Dirac points can be observed in electronic artificial graphene created from a two-dimensional electron gas by applying a triangular lattice of repulsive antidots. Here, the effect of strain is modeled by tuning the distance between the repulsive potentials along the armchair direction. Our results show that the merging of the Dirac points should be observable in a recent experiment with molecular graphene.

  4. First North American 50 cc Total Artificial Heart Experience: Conversion from a 70 cc Total Artificial Heart.

    PubMed

    Khalpey, Zain; Kazui, Toshinobu; Ferng, Alice S; Connell, Alana; Tran, Phat L; Meyer, Mark; Rawashdeh, Badi; Smith, Richard G; Sweitzer, Nancy K; Friedman, Mark; Lick, Scott; Slepian, Marvin J; Copeland, Jack G

    2016-01-01

    The 70 cc total artificial heart (TAH) has been utilized as a bridge to transplant (BTT) for biventricular failure. However, use of the 70 cc TAH has been limited to large patients because of low output caused by pulmonary and systemic vein compression after chest closure. The 50 cc TAH was therefore developed by SynCardia (Tucson, AZ) to accommodate smaller chest cavities. We report the first TAH exchange from a 70 cc to a 50 cc device due to a fit difficulty: the chest could not be closed over the 70 cc TAH even though the patient met the conventional 70 cc fit criteria, but was successfully closed over a 50 cc TAH.

  5. Ultrasonic Phased Array Compressive Imaging in Time and Frequency Domain: Simulation, Experimental Verification and Real Application

    PubMed Central

    Bai, Zhiliang; Chen, Shili; Jia, Lecheng; Zeng, Zhoumo

    2018-01-01

    Embracing the fact that certain signals and images can be recovered from far fewer measurements than traditional methods require, compressive sensing (CS) offers a solution to the huge data-collection burden of phased array-based material characterization. This article describes how a CS framework can be used to effectively compress ultrasonic phased array images in the time and frequency domains. By projecting each image onto its Discrete Cosine transform domain, a novel scheme was implemented to verify the potential of CS for data reduction and to explore its reconstruction accuracy. Results from CIVA simulations indicate that CS in both the time and frequency domains can accurately reconstruct array images from fewer samples than the Nyquist theorem requires. In experimental verification on three types of artificial flaws, considerable data reduction was achieved with defects clearly preserved, but the Nyquist limit could not be beaten in the time domain; qualified recovery in the frequency domain, however, does beat it, a real breakthrough for phased array image reconstruction. As a case study, the proposed CS procedure was applied to the inspection of an engine cylinder cavity containing different pit defects, and the results show that orthogonal matching pursuit (OMP)-based CS delivers the required performance in a real application. PMID:29738452
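
    A 1-D analogue of the DCT-domain compression step can be sketched as follows (illustrative only; the paper works on 2-D array images and uses OMP for recovery): a signal that is sparse in the DCT basis is reconstructed almost exactly from its few largest coefficients:

```python
import math

def dct(x):
    """Orthonormal 1-D DCT-II."""
    n = len(x)
    return [(math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n))
            * sum(x[j] * math.cos(math.pi * (j + 0.5) * k / n)
                  for j in range(n))
            for k in range(n)]

def idct(c):
    """Inverse of the orthonormal DCT-II above (a DCT-III)."""
    n = len(c)
    return [c[0] * math.sqrt(1.0 / n)
            + sum(c[k] * math.sqrt(2.0 / n)
                  * math.cos(math.pi * (j + 0.5) * k / n)
                  for k in range(1, n))
            for j in range(n)]

def compress(x, keep):
    """Zero all but the `keep` largest-magnitude DCT coefficients."""
    c = dct(x)
    thresh = sorted((abs(v) for v in c), reverse=True)[keep - 1]
    return [v if abs(v) >= thresh else 0.0 for v in c]

# Two DCT modes -> keeping 2 of 32 coefficients reconstructs the signal.
x = [math.cos(math.pi * (j + 0.5) * 2 / 32)
     + 0.5 * math.cos(math.pi * (j + 0.5) * 5 / 32) for j in range(32)]
x_hat = idct(compress(x, keep=2))
err = max(abs(a - b) for a, b in zip(x, x_hat))
```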

  6. Supercomputer implementation of finite element algorithms for high speed compressible flows

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Ramakrishnan, R.

    1986-01-01

    Prediction of compressible flow phenomena using the finite element method is of recent origin and considerable interest. Two shock-capturing finite element formulations for high-speed compressible flows are described. A Taylor-Galerkin formulation uses a Taylor series expansion in time coupled with a Galerkin weighted-residual statement. The Taylor-Galerkin algorithms use explicit artificial dissipation, and the performance of three dissipation models is compared. A Petrov-Galerkin algorithm has as its basis the concept of streamline upwinding. Vectorization strategies are developed to implement the finite element formulations on the NASA Langley VPS-32. The vectorization scheme results in finite element programs that use vectors whose length is of the order of the number of nodes or elements, and the vectorization procedure speeds up processing rates by over two orders of magnitude. The Taylor-Galerkin and Petrov-Galerkin algorithms are evaluated for 2D inviscid flows on criteria such as solution accuracy, shock resolution, computational speed, and storage requirements. The convergence rates of both algorithms are enhanced by local time-stepping schemes. Extension of the vectorization procedure to predicting 2D viscous and 3D inviscid flows is demonstrated. Conclusions are drawn regarding the applicability of the finite element procedures to realistic problems that require hundreds of thousands of nodes.

  7. Modeling the Effects of Cu Content and Deformation Variables on the High-Temperature Flow Behavior of Dilute Al-Fe-Si Alloys Using an Artificial Neural Network.

    PubMed

    Shakiba, Mohammad; Parson, Nick; Chen, X-Grant

    2016-06-30

    The hot deformation behavior of Al-0.12Fe-0.1Si alloys with varied amounts of Cu (0.002-0.31 wt %) was investigated by uniaxial compression tests conducted at different temperatures (400 °C-550 °C) and strain rates (0.01-10 s-1). The results demonstrated that flow stress decreased with increasing deformation temperature and decreasing strain rate, while flow stress increased with increasing Cu content for all deformation conditions studied due to the solute drag effect. Based on the experimental data, an artificial neural network (ANN) model was developed to study the relationship between chemical composition, deformation variables and high-temperature flow behavior. A three-layer feed-forward back-propagation artificial neural network with 20 neurons in a hidden layer was established in this study. The input parameters were Cu content, temperature, strain rate and strain, while the flow stress was the output. The performance of the proposed model was evaluated using the K-fold cross-validation method. The results showed excellent generalization capability of the developed model. Sensitivity analysis indicated that the strain rate is the most important parameter, while the Cu content exhibited a modest but significant influence on the flow stress.
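
    A minimal sketch of the 4-20-1 feed-forward topology described above, with fixed illustrative weights rather than the paper's trained parameters (in practice the inputs would be normalized and the weights learned by back-propagation):

```python
import math

def forward(x, W1, b1, W2, b2):
    """One forward pass of a three-layer feed-forward net:
    tanh hidden units, linear output (the 4-20-1 topology above)."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2

# Fixed illustrative weights for a 4-20-1 net; inputs are
# [Cu wt %, temperature (C), strain rate (1/s), strain].
n_in, n_hidden = 4, 20
W1 = [[0.01 * (i + j + 1) for j in range(n_in)] for i in range(n_hidden)]
b1 = [0.0] * n_hidden
W2 = [0.1] * n_hidden
b2 = 5.0
stress = forward([0.3, 500.0, 1.0, 0.5], W1, b1, W2, b2)
```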

  8. Modeling the Effects of Cu Content and Deformation Variables on the High-Temperature Flow Behavior of Dilute Al-Fe-Si Alloys Using an Artificial Neural Network

    PubMed Central

    Shakiba, Mohammad; Parson, Nick; Chen, X.-Grant

    2016-01-01

    The hot deformation behavior of Al-0.12Fe-0.1Si alloys with varied amounts of Cu (0.002–0.31 wt %) was investigated by uniaxial compression tests conducted at different temperatures (400 °C–550 °C) and strain rates (0.01–10 s−1). The results demonstrated that flow stress decreased with increasing deformation temperature and decreasing strain rate, while flow stress increased with increasing Cu content for all deformation conditions studied due to the solute drag effect. Based on the experimental data, an artificial neural network (ANN) model was developed to study the relationship between chemical composition, deformation variables and high-temperature flow behavior. A three-layer feed-forward back-propagation artificial neural network with 20 neurons in a hidden layer was established in this study. The input parameters were Cu content, temperature, strain rate and strain, while the flow stress was the output. The performance of the proposed model was evaluated using the K-fold cross-validation method. The results showed excellent generalization capability of the developed model. Sensitivity analysis indicated that the strain rate is the most important parameter, while the Cu content exhibited a modest but significant influence on the flow stress. PMID:28773658

  9. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Pierce, Charles; Moin, Parviz

    2002-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustic combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Both of these characteristics suggest the use of larger time steps than those allowed by an acoustic CFL condition. The turbulent combustion model used is the Combined Conserved Scalar/Level Set Flamelet model of Duchamp de Lageneste and Pitsch for partially premixed combustion. Comparison of LES results to the experiments of Besson et al will be presented.

  10. Application of complex discrete wavelet transform in classification of Doppler signals using complex-valued artificial neural network.

    PubMed

    Ceylan, Murat; Ceylan, Rahime; Ozbay, Yüksel; Kara, Sadik

    2008-09-01

    In biomedical signal classification, compressing the waveform data is vital because of the huge amount of data involved. This paper presents two structures that use feature extraction algorithms to decrease the size of the feature set in the training and test data. The proposed structures, named wavelet transform-complex-valued artificial neural network (WT-CVANN) and complex wavelet transform-complex-valued artificial neural network (CWT-CVANN), use the real and complex discrete wavelet transforms, respectively, for feature extraction. The aim of using the wavelet transform is to compress the data and reduce the training time of the network without decreasing the accuracy rate. In this study, the presented structures were applied to the classification of carotid arterial Doppler ultrasound signals, acquired from the left carotid arteries of 38 patients and 40 healthy volunteers. The patient group included 22 males and 16 females with an established diagnosis of the early phase of atherosclerosis through coronary or aortofemoropopliteal (lower extremity) angiographies (mean age, 59 years; range, 48-72 years). The healthy volunteers were young non-smokers without apparent risk of atherosclerosis, including 28 males and 12 females (mean age, 23 years; range, 19-27 years). Sensitivity, specificity, and average detection rate were calculated for comparison after the training and test phases of all structures were completed. These parameters demonstrated that, in accordance with our aim, the training times of the CVANN and the real-valued artificial neural network (RVANN) were reduced by the feature extraction algorithms without decreasing the accuracy rate.
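
    As a simplified stand-in for the (complex) wavelet feature extraction, a real Haar DWT already shows how the transform compresses a waveform into a short feature vector; the toy signal and decomposition depth are assumptions:

```python
def haar_step(x):
    """One Haar DWT level: pairwise averages (approximation) and
    pairwise differences (detail), each half the input length."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return approx, detail

def haar_features(x, levels):
    """Keep only the coarsest approximation as a compressed feature
    vector, halving the signal length at every level."""
    for _ in range(levels):
        x, _ = haar_step(x)
    return x

signal = [float(j % 8) for j in range(64)]  # toy periodic waveform
features = haar_features(signal, levels=3)  # 64 samples -> 8 features
```

    The 8-element feature vector, rather than the 64-sample waveform, would then feed the classifier, shortening training without discarding the coarse signal shape.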

  11. Scalar conservation and boundedness in simulations of compressible flow

    NASA Astrophysics Data System (ADS)

    Subbareddy, Pramod K.; Kartha, Anand; Candler, Graham V.

    2017-11-01

    With the proper combination of high-order, low-dissipation numerical methods, physics-based subgrid-scale models, and boundary conditions it is becoming possible to simulate many combustion flows at relevant conditions. However, non-premixed flows are a particular challenge because the thickness of the fuel/oxidizer interface scales inversely with Reynolds number. Sharp interfaces can also be present in the initial or boundary conditions. When higher-order numerical methods are used, there are often aphysical undershoots and overshoots in the scalar variables (e.g. passive scalars, species mass fractions or progress variable). These numerical issues are especially prominent when low-dissipation methods are used, since sharp jumps in flow variables are not always coincident with regions of strong variation in the scalar fields: consequently, special detection mechanisms and dissipative fluxes are needed. Most numerical methods diffuse the interface, resulting in artificial mixing and spurious reactions. In this paper, we propose a numerical method that mitigates this issue. We present methods for passive and active scalars, and demonstrate their effectiveness with several examples.
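
    The boundedness issue can be illustrated with a minimal 1-D sketch (not the paper's method): a low-dissipation central difference produces aphysical over/undershoots at a sharp scalar interface, while a dissipative upwind flux keeps the scalar within its initial bounds:

```python
def advect(u, cfl, steps, scheme):
    """Advect u (speed > 0) on a periodic grid.

    'upwind'  : first-order, dissipative, provably bounded
    'central' : second-order, low-dissipation, can over/undershoot
    """
    n = len(u)
    for _ in range(steps):
        new = [0.0] * n
        for j in range(n):
            if scheme == "upwind":
                new[j] = u[j] - cfl * (u[j] - u[j - 1])
            else:
                new[j] = u[j] - 0.5 * cfl * (u[(j + 1) % n] - u[j - 1])
        u = new
    return u

# A sharp scalar interface, e.g. a species mass fraction in [0, 1].
u0 = [1.0 if j < 10 else 0.0 for j in range(40)]
upwind = advect(list(u0), cfl=0.5, steps=20, scheme="upwind")
central = advect(list(u0), cfl=0.5, steps=20, scheme="central")
```

    Schemes like those proposed in the paper aim for the boundedness of the upwind flux without paying its full dissipation cost everywhere.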

  12. Autonomous learning derived from experimental modeling of physical laws.

    PubMed

    Grabec, Igor

    2013-05-01

    This article deals with the experimental description of physical laws by the probability density function of measured data. A Gaussian mixture model specified by representative data and related probabilities is utilized for this purpose. The information cost function of the model is described in terms of information entropy as the sum of the estimation error and the redundancy. A new method is proposed for searching for the minimum of the cost function. The number of resulting prototype data depends on the accuracy of measurement, and their adaptation resembles a self-organized, highly non-linear cooperation between neurons in an artificial neural network: a prototype datum corresponds to the memorized content, while the related probability corresponds to the excitability of the neuron. The method includes no free parameters except the objectively determined accuracy of the measurement system and is therefore convenient for autonomous execution. Since the representative data are generally less numerous than the measured ones, the method is applicable to a rather general and objective compression of overwhelming experimental data in automatic data-acquisition systems. Such compression is demonstrated on analytically determined random noise and on measured traffic flow data. The flow over a day is described by a vector of 24 components, and the set of 365 vectors measured over one year is compressed by autonomous learning to just 4 representative vectors and related probabilities. These vectors represent the flow on normal working days and on weekends or holidays, while the related probabilities correspond to the relative frequencies of these days. This example reveals that autonomous learning yields a new basis for the interpretation of representative data and the optimal model structure. Copyright © 2012 Elsevier Ltd. All rights reserved.
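
    A hard-assignment analogue of the prototype adaptation (plain 1-D k-means rather than the paper's Gaussian-mixture/entropy formulation; the synthetic "daily flow" numbers are invented for illustration) shows how a year of data compresses to a few prototypes with associated probabilities:

```python
def kmeans_1d(data, k, iters=50):
    """Compress data to k prototype values plus relative frequencies:
    a hard-assignment analogue of mixture-model prototypes."""
    protos = sorted(data)[::max(1, len(data) // k)][:k]  # spread-out seeds
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - protos[i]))
            clusters[nearest].append(x)
        protos = [sum(c) / len(c) if c else protos[i]
                  for i, c in enumerate(clusters)]
    probs = [len(c) / len(data) for c in clusters]
    return protos, probs

# Synthetic "daily flows": 261 working days near 100 and 104 weekend
# days near 40 (invented numbers, mimicking the example above).
data = ([100.0 + (d % 5) for d in range(261)]
        + [40.0 + (d % 3) for d in range(104)])
protos, probs = kmeans_1d(data, k=2)
```

    The two prototypes land near the working-day and weekend flow levels, and the probabilities recover their relative frequencies over the year.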

  13. Improved finite difference schemes for transonic potential calculations

    NASA Technical Reports Server (NTRS)

    Hafez, M.; Osher, S.; Whitlow, W., Jr.

    1984-01-01

    Engquist and Osher (1980) have introduced a finite difference scheme for solving the transonic small disturbance equation, taking into account cases in which only compression shocks are admitted. Osher et al. (1983) studied a class of schemes for the full potential equation. It is proved that these schemes satisfy a new discrete 'entropy inequality' which rules out expansion shocks. However, the conducted analysis is restricted to steady two-dimensional flows. The present investigation is concerned with the adoption of a heuristic approach. The full potential equation in conservation form is solved with the aid of a modified artificial density method, based on flux biasing. It is shown that, with the current scheme, expansion shocks are not possible.

  14. Visualization Techniques in Space and Atmospheric Sciences

    NASA Technical Reports Server (NTRS)

    Szuszczewicz, E. P. (Editor); Bredekamp, Joseph H. (Editor)

    1995-01-01

    Unprecedented volumes of data will be generated by research programs that investigate the Earth as a system and the origin of the universe, which will in turn require analysis and interpretation that will lead to meaningful scientific insight. Providing a widely distributed research community with the ability to access, manipulate, analyze, and visualize these complex, multidimensional data sets depends on a wide range of computer science and technology topics. Data storage and compression, data base management, computational methods and algorithms, artificial intelligence, telecommunications, and high-resolution display are just a few of the topics addressed. A unifying theme throughout the papers with regards to advanced data handling and visualization is the need for interactivity, speed, user-friendliness, and extensibility.

  15. IMNN: Information Maximizing Neural Networks

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets to a handful of summaries vastly simplifies both frequentist and Bayesian inference, but heuristically chosen summaries may inadvertently miss important information. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs robustly find optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  16. Examination of Buckling Behavior of Thin-Walled Al-Mg-Si Alloy Extrusions

    NASA Astrophysics Data System (ADS)

    Vazdirvanidis, Athanasios; Koumarioti, Ioanna; Pantazopoulos, George; Rikos, Andreas; Toulfatzis, Anagnostis; Kostazos, Protesilaos; Manolakos, Dimitrios

    To achieve a combination of improved crash tolerance and maximum strength in aluminium automotive extrusions, a research program was carried out. The main objective was to study the buckling behavior of AA6063 alloy thin-walled square tubes under axial quasi-static load after various artificial aging treatments. Variables included the cooling rate after solid solution treatment, the duration of the first stage of artificial aging, and the time and temperature of the second stage of artificial aging. Metallography and tensile testing were employed to develop deeper knowledge of the effect of the aging process parameters. FEM analysis with the computer code LS-DYNA was additionally applied for deformation-mode investigation and crashworthiness prediction. Results showed that data from the actual compression tests and the numerical modeling were in considerable agreement.

  17. Evaluation of a technique to generate artificially thickened boundary layers in supersonic and hypersonic flows

    NASA Technical Reports Server (NTRS)

    Porro, A. R.; Hingst, W. R.; Davis, D. O.; Blair, A. B., Jr.

    1991-01-01

    The feasibility of using a contoured honeycomb model to generate a thick boundary layer in high-speed, compressible flow was investigated. The contour of the honeycomb was tailored to selectively remove momentum in a minimum of streamwise distance to create an artificially thickened turbulent boundary layer. Three wind tunnel experiments were conducted to verify the concept. Results indicate that this technique is a viable concept, especially for high-speed inlet testing applications. In addition, the compactness of the honeycomb boundary layer simulator allows relatively easy integration into existing wind tunnel model hardware.

  18. Macroscopic crack formation and extension in pristine and artificially aged PBX 9501

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Cheng; Thompson, Darla G

    2010-01-01

    A technique has been developed to quantitatively describe macroscopic cracks, both their location and extent, in heterogeneous high-explosive and mock materials. By combining this technique with deformation-field measurement using digital image correlation (DIC), we observe and measure the initiation, extension, and coalescence of internal cracks during compression of Brazilian disks made of pristine and artificially aged PBX 9501 high explosive. Our results show quantitatively that aged PBX 9501 is not only weaker but also much more brittle than the pristine material, and thus more susceptible to macroscopic cracking.

  19. Probabilistic machine learning and artificial intelligence.

    PubMed

    Ghahramani, Zoubin

    2015-05-28

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  20. Probabilistic machine learning and artificial intelligence

    NASA Astrophysics Data System (ADS)

    Ghahramani, Zoubin

    2015-05-01

    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.

  1. Tuning the superstructure of ultrahigh-molecular-weight polyethylene/low-molecular-weight polyethylene blend for artificial joint application.

    PubMed

    Xu, Ling; Chen, Chen; Zhong, Gan-Ji; Lei, Jun; Xu, Jia-Zhuang; Hsiao, Benjamin S; Li, Zhong-Ming

    2012-03-01

    An easy approach is reported to achieve high mechanical properties in an ultrahigh-molecular-weight polyethylene (UHMWPE)-based polyethylene (PE) blend for artificial joint applications without sacrificing the excellent wear and fatigue behavior of UHMWPE. A PE blend with the desired fluidity was obtained by melt-mixing UHMWPE with low-molecular-weight polyethylene (LMWPE) and was then processed by a modified injection molding technology, oscillatory shear injection molding (OSIM). Morphological observation of the OSIM PE blend showed that the LMWPE contained a well-defined, interlocking shish-kebab self-reinforced superstructure. Adding a small amount (2 wt %) of long-chain polyethylene to the LMWPE greatly promoted the formation of shish-kebabs. The ultimate tensile strength increased considerably, from 27.6 MPa for conventionally compression-molded UHMWPE to 78.4 MPa for the OSIM PE blend along the flow direction and 33.5 MPa in the transverse direction. The impact strength of the OSIM PE blend increased by 46% and 7% in the directions parallel and perpendicular to the shear flow, respectively. Wear and fatigue resistance were comparable to those of conventionally compression-molded UHMWPE. The superb performance of the OSIM PE blend originated from the formation of a rich interlocking shish-kebab superstructure while maintaining the unique properties of UHMWPE. These results suggest that the OSIM PE blend has high potential for artificial joint applications. © 2012 American Chemical Society

  2. Modeling the Flow Behavior, Recrystallization, and Crystallographic Texture in Hot-Deformed Fe-30 Wt Pct Ni Austenite

    NASA Astrophysics Data System (ADS)

    Abbod, M. F.; Sellars, C. M.; Cizek, P.; Linkens, D. A.; Mahfouf, M.

    2007-10-01

    The present work describes a hybrid modeling approach developed for predicting the flow behavior, recrystallization characteristics, and crystallographic texture evolution in a Fe-30 wt pct Ni austenitic model alloy subjected to hot plane strain compression. A series of compression tests were performed at temperatures between 850 °C and 1050 °C and strain rates between 0.1 and 10 s⁻¹. The evolution of grain structure, crystallographic texture, and dislocation substructure was characterized in detail for a deformation temperature of 950 °C and strain rates of 0.1 and 10 s⁻¹, using electron backscatter diffraction and transmission electron microscopy. The hybrid modeling method utilizes a combination of empirical, physically-based, and neuro-fuzzy models. The flow stress is described as a function of the applied variables of strain rate and temperature using an empirical model. The recrystallization behavior is predicted from the measured microstructural state variables of internal dislocation density, subgrain size, and misorientation between subgrains using a physically-based model. The texture evolution is modeled using artificial neural networks.
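    For the empirical flow-stress component, hot-working models of this kind are commonly written with the Zener-Hollomon parameter and a sinh law; the sketch below is a generic illustration with placeholder constants, not the paper's fitted Fe-30Ni model.

```python
import math

def flow_stress_sinh(strain_rate, T_kelvin, Q=270e3, A=1e10,
                     alpha=0.012, n=5.0):
    # Empirical hot-working flow stress via the Zener-Hollomon
    # parameter and the sinh (Sellars-Tegart) law:
    #   Z = edot * exp(Q / (R * T)),  sigma = (1/alpha) * asinh((Z/A)**(1/n))
    # All constants here are illustrative placeholders, not fitted
    # Fe-30Ni values.
    R = 8.314  # gas constant, J/(mol K)
    Z = strain_rate * math.exp(Q / (R * T_kelvin))
    return (1.0 / alpha) * math.asinh((Z / A) ** (1.0 / n))

# Flow stress rises with strain rate and falls with temperature:
s_slow = flow_stress_sinh(0.1, 1223.0)    # ~950 C, 0.1 /s
s_fast = flow_stress_sinh(10.0, 1223.0)   # ~950 C, 10 /s
```

    The same function form is what an empirical fit would calibrate (Q, A, alpha, n) against the measured stress-strain data.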

  3. Review of Orbital Propellant Transfer Techniques and the Feasibility of a Thermal Bootstrap Propellant Transfer Concepts

    NASA Technical Reports Server (NTRS)

    Yoshikawa, H. H.; Madison, I. B.

    1971-01-01

    This study was performed in support of the NASA Task B-2 Study Plan for Space Basing. The nature of space-based operations implies that orbital transfer of propellant is a prime consideration. The intent of this report is (1) to report on the findings and recommendations of existing literature on space-based propellant transfer techniques, and (2) to determine possible alternatives to the recommended methods. The reviewed literature recommends, in general, the use of conventional liquid transfer techniques (i.e., pumping) in conjunction with an artificially induced gravitational field. An alternate concept that was studied, the Thermal Bootstrap Transfer Process, is based on the compression of a two-phase fluid with subsequent condensation to a liquid (vapor compression/condensation). This concept utilizes the intrinsic energy capacities of the tanks and propellant by exploiting temperature differentials and available energy differences. The results indicate the thermodynamic feasibility of the Thermal Bootstrap Transfer Process for a specific range of tank sizes, temperatures, fill-factors and receiver tank heat transfer coefficients.

  4. Experience in using a numerical scheme with artificial viscosity at solving the Riemann problem for a multi-fluid model of multiphase flow

    NASA Astrophysics Data System (ADS)

    Bulovich, S. V.; Smirnov, E. M.

    2018-05-01

    The paper covers application of the artificial viscosity technique to numerical simulation of unsteady one-dimensional multiphase compressible flows on the base of the multi-fluid approach. The system of the governing equations is written under assumption of the pressure equilibrium between the "fluids" (phases). No interfacial exchange is taken into account. A model for evaluation of the artificial viscosity coefficient that (i) assumes identity of this coefficient for all interpenetrating phases and (ii) uses the multiphase-mixture Wood equation for evaluation of a scale speed of sound has been suggested. Performance of the artificial viscosity technique has been evaluated via numerical solution of a model problem of pressure discontinuity breakdown in a three-fluid medium. It has been shown that a relatively simple numerical scheme, explicit and first-order, combined with the suggested artificial viscosity model, predicts a physically correct behavior of the moving shock and expansion waves, and a subsequent refinement of the computational grid results in a monotonic approaching to an asymptotic time-dependent solution, without non-physical oscillations.
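    The Wood equation used to set the scale speed of sound in the artificial viscosity model has a compact form; a minimal NumPy sketch follows, with illustrative air-water numbers rather than the paper's three-fluid setup, and all names being assumptions.

```python
import numpy as np

def wood_speed_of_sound(alphas, rhos, cs):
    # Wood equation for a pressure-equilibrium mixture:
    #   1 / (rho_m * c_w**2) = sum_i alpha_i / (rho_i * c_i**2)
    alphas, rhos, cs = (np.asarray(a, float) for a in (alphas, rhos, cs))
    rho_m = np.sum(alphas * rhos)                 # mixture density
    compress = np.sum(alphas / (rhos * cs ** 2))  # mixture compressibility
    return 1.0 / np.sqrt(rho_m * compress)

# Air-water at 50/50 volume fraction: the mixture sound speed drops far
# below that of either pure phase, a classic two-phase result.
c_mix = wood_speed_of_sound([0.5, 0.5], [1.2, 1000.0], [340.0, 1500.0])
```

    The sharp dip of the mixture sound speed is why a mixture-based scale is needed when sizing the artificial viscosity coefficient.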

  5. Performance and durability of concrete made with demolition waste and artificial fly ash-clay aggregates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakaria, M.; Cabrera, J.G.

    1996-12-31

    Demolition aggregates and artificial aggregates made with waste materials are two alternatives being studied as replacements for natural aggregates in the production of concrete. Natural aggregate sources in Europe are increasingly scarce and subject to restrictions based on environmental regulations. In many areas of the developing world, sources of good-quality aggregates are very limited or practically unavailable, and it has therefore become necessary to study alternative materials. This paper presents a laboratory study on the use of demolition bricks and artificial aggregates made from fly ash-clay as coarse aggregates to make concrete. The concretes made either with demolition bricks or artificial aggregates are compared with a control mix made with natural gravel aggregates. The strength and durability characteristics of these concretes are evaluated using compressive strength and transport properties, such as gas and water permeability, as criteria. The results show clearly that concretes of good performance and durability can be produced using aggregates from demolition rubble or artificial aggregates made with wastes such as fly ash.

  6. [Biomechanical analysis of different ProDisc-C arthroplasty design parameters after implanted: a numerical sensitivity study based on finite element method].

    PubMed

    Tang, Qiaohong; Mo, Zhongjun; Yao, Jie; Li, Qi; Du, Chenfei; Wang, Lizhen; Fan, Yubo

    2014-12-01

    This study aimed to estimate the effects of different ProDisc-C arthroplasty designs after implantation in the C5-C6 cervical spine. A finite element (FE) model of the intact C5-C6 segment, including the vertebrae and disc, was developed and validated. A ball-and-socket artificial disc prosthesis model (ProDisc-C, Synthes) was implanted into the validated FE model, and the curvature of the ProDisc-C prosthesis was varied. All models were loaded with a compressive force of 74 N and a pure moment of 1.8 Nm in flexion-extension, bilateral bending, and axial torsion separately. The results indicated that variation in the curvature of the ball-and-socket configuration influences the range of motion in flexion/extension, while there were no apparent differences under the other loading conditions. Increasing the curvature relieves the stress concentration in the polyethylene, but it also brings adverse outcomes, such as increased facet joint force and ligament tension. Therefore, the design of artificial discs should be considered comprehensively so as to preserve the range of motion while avoiding these adverse effects, so as not to compromise the long-term clinical results.

  7. High order accurate and low dissipation method for unsteady compressible viscous flow computation on helicopter rotor in forward flight

    NASA Astrophysics Data System (ADS)

    Xu, Li; Weng, Peifen

    2014-02-01

    An improved fifth-order weighted essentially non-oscillatory (WENO-Z) scheme combined with the moving overset grid technique has been developed to compute unsteady compressible viscous flows over a helicopter rotor in forward flight. In order to enforce periodic rotation and pitching of the rotor and relative motion between rotor blades, the moving overset grid technique is extended, and a special judgement standard is introduced near the odd surface of the blade grid during the search for donor cells using the Inverse Map method. The WENO-Z scheme is adopted for reconstructing left and right state values, with the Roe Riemann solver updating the inviscid fluxes, and is compared with the monotone upwind scheme for scalar conservation laws (MUSCL) and the classical WENO scheme. Since the WENO schemes require a six-point stencil to build the fifth-order flux, a method of three layers of fringes for hole boundaries and artificial external boundaries is proposed to carry out flow information exchange between chimera grids. The time advance of the unsteady solution is performed by a fully implicit dual time stepping method with Newton-type LU-SGS subiteration, where the solutions of a pseudo-steady computation serve as the initial fields for the unsteady flow computation. Numerical results for a non-variable-pitch rotor and a periodically variable-pitch rotor in forward flight reveal that the approach can effectively capture the vortex wake with low dissipation and reaches periodic solutions quickly.
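    The WENO-Z reconstruction at the heart of the scheme can be sketched at a single left-biased interface, using the standard Jiang-Shu smoothness indicators and the global tau5 weight device; this is a generic illustration, independent of the paper's solver, grids and flux treatment.

```python
def weno5z(f, eps=1e-40):
    # Fifth-order WENO-Z reconstruction of the left-biased interface
    # value from five cell averages f = [f_{i-2}, ..., f_{i+2}].
    fm2, fm1, f0, fp1, fp2 = f
    # candidate third-order reconstructions on the three substencils
    q0 = (2 * fm2 - 7 * fm1 + 11 * f0) / 6.0
    q1 = (-fm1 + 5 * f0 + 2 * fp1) / 6.0
    q2 = (2 * f0 + 5 * fp1 - fp2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13 / 12 * (fm2 - 2 * fm1 + f0) ** 2 + 0.25 * (fm2 - 4 * fm1 + 3 * f0) ** 2
    b1 = 13 / 12 * (fm1 - 2 * f0 + fp1) ** 2 + 0.25 * (fm1 - fp1) ** 2
    b2 = 13 / 12 * (f0 - 2 * fp1 + fp2) ** 2 + 0.25 * (3 * f0 - 4 * fp1 + fp2) ** 2
    # WENO-Z weights via the global indicator tau5 = |b0 - b2|
    tau5 = abs(b0 - b2)
    a0, a1, a2 = (d * (1.0 + tau5 / (b + eps))
                  for d, b in ((0.1, b0), (0.6, b1), (0.3, b2)))
    s = a0 + a1 + a2
    return (a0 * q0 + a1 * q1 + a2 * q2) / s
```

    On smooth data all three substencil values agree and the weights revert to the ideal (0.1, 0.6, 0.3), recovering the fifth-order upstream reconstruction.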

  8. A design tool for predicting the capillary transport characteristics of fuel cell diffusion media using an artificial neural network

    NASA Astrophysics Data System (ADS)

    Kumbur, E. C.; Sharp, K. V.; Mench, M. M.

    Developing a robust, intelligent design tool for multivariate optimization of multi-phase transport in fuel cell diffusion media (DM) is of utmost importance to develop advanced DM materials. This study explores the development of a DM design algorithm based on an artificial neural network (ANN) that can be used as a powerful tool for predicting the capillary transport characteristics of fuel cell DM. Direct measurements of drainage capillary pressure-saturation curves of the differently engineered DMs (5, 10 and 20 wt.% PTFE) were performed at room temperature under three compression levels (0, 0.6 and 1.4 MPa) [E.C. Kumbur, K.V. Sharp, M.M. Mench, J. Electrochem. Soc. 154(12) (2007) B1295-B1304; E.C. Kumbur, K.V. Sharp, M.M. Mench, J. Electrochem. Soc. 154(12) (2007) B1305-B1314; E.C. Kumbur, K.V. Sharp, M.M. Mench, J. Electrochem. Soc. 154(12) (2007) B1315-B1324]. The generated benchmark data were utilized to systematically train a three-layered ANN framework that employs the feed-forward error back-propagation methodology. The designed ANN successfully predicts the measured capillary pressures within an average uncertainty of ±5.1% of the measured data, confirming that the present ANN model can be used as a design tool within the range of tested parameters. The ANN simulations reveal that tailoring the DM with high PTFE loading and applying high compression pressure lead to a higher capillary pressure, therefore promoting the liquid water transport within the pores of the DM. Any increase in hydrophobicity of the DM is found to amplify the compression effect, thus yielding a higher capillary pressure for the same saturation level and compression.
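    A three-layered feed-forward network trained with error back-propagation, as named in the abstract, can be sketched from scratch in NumPy; the inputs and targets below are synthetic stand-ins for the PTFE/compression/saturation benchmark data, and every name and constant is an assumption, not the study's network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: three inputs (think PTFE loading, compression,
# saturation) and one target (think capillary pressure), all made up.
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = (np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2])[:, None]

# Three-layered network: input -> tanh hidden layer -> linear output.
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

mse0 = float(np.mean((forward(X)[1] - y) ** 2))  # error before training

lr = 0.1
for _ in range(3000):                   # full-batch gradient descent
    h, pred = forward(X)
    err = pred - y                      # output error to back-propagate
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)  # chain rule through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((forward(X)[1] - y) ** 2))  # error after training
```

    Training drives the mean-squared error well below its starting value, which is the whole of the back-propagation idea; a real design tool would add validation data and early stopping.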

  9. [Botulinum toxin type A does not affect spontaneous discharge but blocks sympathetic-sensory coupling in chronically compressed rat dorsal root ganglion neurons].

    PubMed

    Yang, Hong-jun; Peng, Kai-run; Hu, San-jue; Duan, Jian-hong

    2007-11-01

    To study the effect of botulinum toxin type A (BTXA) on spontaneous discharge and sympathetic-sensory coupling in chronically compressed dorsal root ganglion (DRG) neurons in rats. In the chronically compressed rat DRG, spontaneous activities of single fibers from DRG neurons were recorded and their changes observed after BTXA application on the damaged DRG. Sympathetic modulation of the spontaneous discharge from the compressed DRG neurons was observed by electric stimulation of the lumbar sympathetic trunk, and changes in this effect were evaluated after intravenous BTXA injection in the rats. Active spontaneous discharges were recorded in the injured DRG neurons, and 47 injured DRG neurons responded to Ca2+-free artificial cerebrospinal fluid but not to BTXA treatment. Sixty-four percent of the neurons in the injured DRG responded to sympathetic stimulation, and this response was blocked by intravenous injection of BTXA. BTXA does not affect spontaneous activities of injured DRG neurons, but blocks sympathetic-sensory coupling in these neurons.

  10. A Study on the Data Compression Technology-Based Intelligent Data Acquisition (IDAQ) System for Structural Health Monitoring of Civil Structures

    PubMed Central

    Jeon, Joonryong

    2017-01-01

    In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, the embedded software technology (EST) has been applied to it to implement diverse logics needed in the process of acquiring, processing and transmitting data. In order to utilize IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and also optimized it on the random El-Centro seismic waveform. All techniques developed in this study have been embedded to our system. The data compression technology-based IDAQ system was proven valid in acquiring valid signals in a compressed size. PMID:28704945

  11. A Study on the Data Compression Technology-Based Intelligent Data Acquisition (IDAQ) System for Structural Health Monitoring of Civil Structures.

    PubMed

    Heo, Gwanghee; Jeon, Joonryong

    2017-07-12

    In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, the embedded software technology (EST) has been applied to it to implement diverse logics needed in the process of acquiring, processing and transmitting data. In order to utilize IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and also optimized it on the random El-Centro seismic waveform. All techniques developed in this study have been embedded to our system. The data compression technology-based IDAQ system was proven valid in acquiring valid signals in a compressed size.

  12. Embedding speech into virtual realities

    NASA Technical Reports Server (NTRS)

    Bohn, Christian-Arved; Krueger, Wolfgang

    1993-01-01

    In this work a speaker-independent speech recognition system is presented, which is suitable for implementation in Virtual Reality applications. The use of an artificial neural network in connection with a special compression of the acoustic input leads to a system, which is robust, fast, easy to use and needs no additional hardware, beside a common VR-equipment.

  13. Depicting mass flow rate of R134a /LPG refrigerant through straight and helical coiled adiabatic capillary tubes of vapor compression refrigeration system using artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Gill, Jatinder; Singh, Jagdev

    2018-07-01

    In this work, an experimental investigation is carried out with an R134a/LPG refrigerant mixture to depict the mass flow rate through straight and helical coil adiabatic capillary tubes in a vapor compression refrigeration system. Various experiments were conducted under steady-state conditions, changing the capillary tube length, inner diameter, coil diameter and degree of subcooling. The results showed that the mass flow rate through the helical coil capillary tube was about 5-16% lower than that through the straight capillary tube. Dimensionless correlation and Artificial Neural Network (ANN) models were developed to predict the mass flow rate. The dimensionless correlation and ANN model predictions agreed well with experimental results, yielding absolute fractions of variance of 0.961 and 0.988, root-mean-square errors of 0.489 and 0.275, and mean absolute percentage errors of 4.75% and 2.31%, respectively. The results suggest that the ANN model gives better statistical predictions than the dimensionless correlation model.
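    The three fit statistics quoted (absolute fraction of variance, root-mean-square error, mean absolute percentage error) have standard textbook forms, sketched below; the numbers are made up purely to exercise the formulas, and the paper's exact conventions may differ.

```python
import numpy as np

def fit_metrics(y_true, y_pred):
    # Common textbook definitions of the three statistics:
    # absolute fraction of variance (R^2), root-mean-square error,
    # and mean absolute percentage error.
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    return r2, rmse, mape

# Made-up mass flow rates (g/s) purely to exercise the formulas.
r2, rmse, mape = fit_metrics([10.0, 12.0, 8.0, 11.0],
                             [10.5, 11.5, 8.2, 10.8])
```
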

  14. Automatic physical inference with information maximizing neural networks

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
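    The quantity an IMNN maximizes is the Fisher information of the compressed summaries. A simulation-based estimate of that quantity (not the network itself) can be sketched as below, under a Gaussian-likelihood assumption; the simulator interface and all constants are illustrative assumptions.

```python
import numpy as np

def fisher_from_sims(simulator, theta, dtheta, n_sims=5000, seed=0):
    # Gaussian-likelihood Fisher information of a summary statistic:
    #   F = (dmu/dtheta)^T C^{-1} (dmu/dtheta),
    # with the mean derivative (central difference) and covariance
    # estimated from forward simulations, as in likelihood-free settings.
    rng = np.random.default_rng(seed)
    sims = np.array([simulator(theta, rng) for _ in range(n_sims)])
    cov = np.atleast_2d(np.cov(sims, rowvar=False))
    mu_hi = np.mean([simulator(theta + dtheta, rng) for _ in range(n_sims)], axis=0)
    mu_lo = np.mean([simulator(theta - dtheta, rng) for _ in range(n_sims)], axis=0)
    dmu = np.atleast_1d((mu_hi - mu_lo) / (2.0 * dtheta))
    return float(dmu @ np.linalg.solve(cov, dmu))

# Toy check: for the sample mean of m iid N(theta, 1) draws, the exact
# Fisher information is m; the estimate should land close to it.
m = 10
sample_mean = lambda th, rng: np.array([rng.normal(th, 1.0, m).mean()])
F = fisher_from_sims(sample_mean, 1.0, 0.05)
```

    An IMNN replaces the hand-picked summary (here the sample mean) with a network whose weights are trained to make this same F as large as possible.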

  15. A practical method for estimating maximum shear modulus of cemented sands using unconfined compressive strength

    NASA Astrophysics Data System (ADS)

    Choo, Hyunwook; Nam, Hongyeop; Lee, Woojin

    2017-12-01

    The composition of naturally cemented deposits is very complicated; thus, estimating the maximum shear modulus (Gmax, or shear modulus at very small strains) of cemented sands using the previous empirical formulas is very difficult. The purpose of this experimental investigation is to evaluate the effects of particle size and cement type on the Gmax and unconfined compressive strength (qucs) of cemented sands, with the ultimate goal of estimating Gmax of cemented sands using qucs. Two sands were artificially cemented using Portland cement or gypsum under varying cement contents (2%-9%) and relative densities (30%-80%). Unconfined compression tests and bender element tests were performed, and the results from previous studies of two cemented sands were incorporated in this study. The results of this study demonstrate that the effect of particle size on the qucs and Gmax of four cemented sands is insignificant, and the variation of qucs and Gmax can be captured by the ratio between volume of void and volume of cement. qucs and Gmax of sand cemented with Portland cement are greater than those of sand cemented with gypsum. However, the relationship between qucs and Gmax of the cemented sand is not affected by the void ratio, cement type and cement content, revealing that Gmax of the complex naturally cemented soils with unknown in-situ void ratio, cement type and cement content can be estimated using qucs.
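    The bender element measurement underlying Gmax reduces to Gmax = ρVs², with the shear wave velocity taken from the tip-to-tip travel path and first arrival time; a minimal sketch with illustrative numbers, not data from the study.

```python
def gmax_from_bender(tip_to_tip_m, travel_time_s, density_kg_m3):
    # Small-strain shear modulus from a bender element test:
    #   Vs = L / t  (shear wave velocity),  Gmax = rho * Vs**2
    vs = tip_to_tip_m / travel_time_s
    return density_kg_m3 * vs ** 2

# Illustrative numbers: 100 mm tip-to-tip path, 0.5 ms first arrival,
# 1800 kg/m^3 specimen -> Vs = 200 m/s, Gmax = 72 MPa.
gmax_pa = gmax_from_bender(0.100, 0.5e-3, 1800.0)
```
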

  16. Microvascular Decompression for Classical Trigeminal Neuralgia Caused by Venous Compression: Novel Anatomic Classifications and Surgical Strategy.

    PubMed

    Wu, Min; Fu, Xianming; Ji, Ying; Ding, Wanhai; Deng, Dali; Wang, Yehan; Jiang, Xiaofeng; Niu, Chaoshi

    2018-05-01

    Microvascular decompression of the trigeminal nerve is the most effective treatment for trigeminal neuralgia. However, when encountering classical trigeminal neuralgia caused by venous compression, the procedure becomes much more difficult, and failure or recurrence because of incomplete decompression may become frequent. This study aimed to investigate the anatomic variation of the culprit veins and discuss the surgical strategy for different types. We performed a retrospective analysis of 64 consecutive cases in whom veins were considered as responsible vessels alone or combined with other adjacent arteries. The study classified culprit veins according to operative anatomy and designed personalized approaches and decompression management according to different forms of compressive veins. Curative effects were assessed by the Barrow Neurological Institute (BNI) pain intensity score and BNI facial numbness score. The most commonly encountered veins were the superior petrosal venous complex (SPVC), which was artificially divided into 4 types according to both venous tributary distribution and empty point site. We synthetically considered these factors and selected an approach to expose the trigeminal root entry zone, including the suprafloccular transhorizontal fissure approach and infratentorial supracerebellar approach. The methods of decompression consist of interposing and transposing by using Teflon, and sometimes with the aid of medical adhesive. Nerve combing (NC) of the trigeminal root was conducted in situations of extremely difficult neurovascular compression, instead of sacrificing veins. Pain completely disappeared in 51 patients, and the excellent outcome rate was 79.7%. There were 13 patients with pain relief treated with reoperation. Postoperative complications included 10 cases of facial numbness, 1 case of intracranial infection, and 1 case of high-frequency hearing loss. 
Accurate recognition of the anatomic variation of the SPVC is crucial for the management of classical trigeminal neuralgia caused by venous compression. Selecting an appropriate approach and using reasonable decompression methods can bring complete postoperative pain relief for most cases. NC can be an alternative choice for extremely difficult cases, but it leads to facial numbness more frequently. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Control Theory based Shape Design for the Incompressible Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Cowles, G.; Martinelli, L.

    2003-12-01

    A design method for shape optimization in incompressible turbulent viscous flow has been developed and validated for inverse design. The gradient information is determined using a control theory based algorithm. With such an approach, the cost of computing the gradient is negligible. An additional adjoint system must be solved which requires the cost of a single steady state flow solution. Thus, this method has an enormous advantage over traditional finite-difference based algorithms. The method of artificial compressibility is utilized to solve both the flow and adjoint systems. An algebraic turbulence model is used to compute the eddy viscosity. The method is validated using several inverse wing design test cases. In each case, the program must modify the shape of the initial wing such that its pressure distribution matches that of the target wing. Results are shown for the inversion of both finite thickness wings as well as zero thickness wings which can be considered a model of yacht sails.
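    The artificial compressibility formulation used here for both the flow and adjoint systems replaces the incompressibility constraint with a pseudo-time pressure equation. A minimal periodic NumPy sketch of that pseudo-time system follows, with convection and viscosity omitted and all names illustrative, not the validated design code.

```python
import numpy as np

def dxp(f):  # forward x-difference on a periodic grid
    return np.roll(f, -1, axis=0) - f

def dyp(f):  # forward y-difference on a periodic grid
    return np.roll(f, -1, axis=1) - f

def pseudo_time_step(u, v, p, beta=1.0, dtau=0.1):
    # One explicit pseudo-time step of the artificial compressibility
    # system (convection and viscosity omitted for brevity):
    #   dp/dtau = -beta * div(u),  du/dtau = -dp/dx,  dv/dtau = -dp/dy
    # At pseudo-steady state div(u) = 0: incompressibility is recovered.
    div = dxp(u) + dyp(v)
    return u - dtau * dxp(p), v - dtau * dyp(p), p - dtau * beta * div

# A discretely divergence-free field built from a stream function is a
# fixed point of the step (with uniform pressure), up to round-off.
n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
psi = np.sin(x)[:, None] * np.sin(x)[None, :]
u, v = dyp(psi), -dxp(psi)   # div(u, v) = 0 to machine precision
p = np.zeros((n, n))
u2, v2, p2 = pseudo_time_step(u, v, p)
```

    A full solver marches such steps in pseudo-time, with convective and viscous terms added, until div(u) falls below tolerance at each physical time level.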

  18. Composite strengthening. [of nonferrous, fiber reinforced alloys

    NASA Technical Reports Server (NTRS)

    Stoloff, N. S.

    1976-01-01

    The mechanical behavior of unidirectionally reinforced metals is examined, with particular attention to fabrication techniques for artificial composites and eutectic alloys and to principles of fiber reinforcement. The properties of artificial composites are discussed in terms of strength of fiber composites, strength of ribbon-reinforced composites, crack initiation, crack propagation, and creep behavior. The properties of eutectic composites are examined relative to tensile strength, compressive strength, fracture, high-temperature strength, and fatigue. In the case of artificial composites, parallelism of fibers, good bonding between fibers and matrix, and freedom of fibers from damage are all necessary to ensure superior performance. For many eutectic systems there are stringent boundary conditions relative to melt purity and superheat, atmosphere control, temperature gradient, and growth rate in order to provide near-perfect alignment of the reinforcements with a minimum of growth defects.

  19. A Method for Large Eddy Simulation of Acoustic Combustion Instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton; Moin, Parviz

    2003-11-01

    A method for performing Large Eddy Simulation of acoustic combustion instabilities is presented. By extending the low Mach number pressure correction method to the case of compressible flow, a numerical method is developed in which the Poisson equation for pressure is replaced by a Helmholtz equation. The method avoids the acoustic CFL condition by using implicit time advancement, leading to large efficiency gains at low Mach number. The method also avoids artificial damping of acoustic waves. The numerical method is attractive for the simulation of acoustics combustion instabilities, since these flows are typically at low Mach number, and the acoustic frequencies of interest are usually low. Additionally, new boundary conditions based on the work of Poinsot and Lele have been developed to model the acoustic effect of a long channel upstream of the computational inlet, thus avoiding the need to include such a channel in the computational domain. The turbulent combustion model used is the Level Set model of Duchamp de Lageneste and Pitsch for premixed combustion. Comparison of LES results to the reacting experiments of Besson et al. will be presented.

  20. Electroelastic fields in artificially created vortex cores in epitaxial BiFeO3 thin films

    DOE PAGES

    Winchester, Ben; Wisinger, Nina Balke; Cheng, X. X.; ...

    2015-08-03

    Here we employ phase-field modeling to explore the elastic properties of artificially created 1-D domain walls in (001) p-oriented BiFeO3 thin films, composed of a junction of the four polarization variants, all with the same out-of-plane polarization. It was found that these junctions exhibit peculiarly high electroelastic fields induced by the neighboring ferroelastic/ferroelectric domains. The vortex core exhibits a volume expansion, while the anti-vortex core is under compression. We also discuss possible ways to control the electroelastic field, such as varying material constants and applying a transverse electric field.

  1. Effect of Zirconia and Alumina Fillers on the Microstructure and Mechanical Strength of Dental Glass Ionomer Cements

    PubMed Central

    Souza, Júlio C. M.; Silva, Joel B.; Aladim, Andrea; Carvalho, Oscar; Nascimento, Rubens M.; Silva, Filipe S.; Martinelli, Antonio E.; Henriques, Bruno

    2016-01-01

    Background: Glass-ionomer cements exert a protective effect on the dentin-pulp complex owing to fluoride ion release and chemical bonding to the dental structures. On the other hand, these materials have poor physico-mechanical properties in comparison with restorative resin composites. The main aim of this work was to evaluate the influence of zirconia and/or alumina fillers on the microstructure and strength of a resin-modified glass-ionomer cement after thermal cycling. Methods: An in vitro experimental study was carried out on 9 groups (n = 10) of cylindrical samples (6 x 4 mm) made from resin-modified glass-ionomer (Vitremer, 3M, USA) with different contents of alumina and/or zirconia fillers. A nano-hybrid resin composite was tested as a control group. Samples were mechanically characterized by axial compression tests and scanning electron microscopy (SEM) coupled with energy-dispersive X-ray spectroscopy (EDS), before and after thermal cycling. Thermal cycling was performed for 3000, 6000 and 10000 cycles in Fusayama's artificial saliva at 5 and 60 °C. Results: An improvement in compressive strength was noticed for the glass-ionomer reinforced with alumina fillers in comparison with the commercial glass ionomer. SEM images revealed the morphology and distribution of alumina or zirconia in the microstructure of the glass-ionomers. Defects such as cracks and pores were also detected in the glass-ionomer cements. The materials tested were not affected by thermal cycling in artificial saliva. Conclusion: Addition of inorganic particles at the nano-scale, such as alumina, can increase the mechanical properties of glass-ionomer cements. However, cracks and pores present in the glass-ionomer can negatively affect the mechanical properties of the material because they are areas of stress concentration. PMID:27053969

  2. In vitro wear assessment of the Charité Artificial Disc according to ASTM recommendations.

    PubMed

    Serhan, Hassan A; Dooris, Andrew P; Parsons, Matthew L; Ares, Paul J; Gabriel, Stefan M

    2006-08-01

    Biomechanical laboratory research. To evaluate the potential for Ultra High Molecular Weight Polyethylene (UHMWPE) wear debris from the Charité Artificial Disc. Cases of osteolysis from artificial discs are extremely rare, but hip and knee studies demonstrate the osteolytic potential and clinical concern of UHMWPE wear debris. Standards for testing artificial discs continue to evolve, and there are few detailed reports of artificial disc wear characterizations. Implant assemblies were tested to 10 million cycles of +/- 7.5 degrees flexion-extension or +/- 7.5 degrees left/right lateral bending, both with +/- 2 degrees axial rotation and 900 N to 1,850 N cyclic compression. Cores were weighed, measured, and photographed. Soak and loaded soak controls were used. Wear debris was analyzed via scanning electron microscopy and particle counters. The average total wear of the implants was 0.11 and 0.13 mg per million cycles, before and after accounting for serum absorption, respectively. Total height loss was approximately 0.2 mm. Wear debris ranged from submicron to > 10 microm in size. Under these test conditions, the Charité Artificial Disc produced minimal wear debris. Debris size and morphology tended to be similar to other CoCr-UHMWPE joints. More testing is necessary to evaluate the implants under a spectrum of loading conditions.

  3. Supercomputing Aspects for Simulating Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kris, Cetin C.

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on explicit message passing across processors and is primarily suited for distributed memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared memory systems. For the entire turbo-pump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other, and provides great flexibility when the boundary movement creates large displacements. 
Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in the present computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
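
The artificial compressibility idea referenced throughout these records can be sketched in a few lines. Below is a toy steady Stokes problem on a periodic grid marched in pseudo-time; the grid size, β, ν, Δτ, and the forcing are illustrative assumptions, not the INS3D setup:

```python
import numpy as np

# Minimal sketch of Chorin-style artificial compressibility: march the
# pseudo-time system
#   du/dtau = nu*Lap(u) - grad(p) + f,   dp/dtau = -beta*div(u)
# to steady state, which drives div(u) -> 0. Periodic domain with spectral
# derivatives; all parameter values are illustrative choices.

N = 32
x = np.linspace(0.0, 2.0*np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(N, d=1.0/N)                 # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")

def ddx(f): return np.real(np.fft.ifft2(1j*KX*np.fft.fft2(f)))
def ddy(f): return np.real(np.fft.ifft2(1j*KY*np.fft.fft2(f)))
def lap(f): return np.real(np.fft.ifft2(-(KX**2 + KY**2)*np.fft.fft2(f)))

# forcing with a divergence-free part (sin Y, 0) and a curl-free part
fx = np.sin(Y) + np.sin(X)*np.cos(Y)
fy = np.cos(X)*np.sin(Y)

nu, beta, dtau = 0.1, 1.0, 0.02
u = np.zeros((N, N)); v = np.zeros((N, N)); p = np.zeros((N, N))

for _ in range(10000):
    div = ddx(u) + ddy(v)
    u += dtau*(nu*lap(u) - ddx(p) + fx)
    v += dtau*(nu*lap(v) - ddy(p) + fy)
    p -= dtau*beta*div                         # artificial compressibility

# at convergence the curl-free forcing is absorbed by p and u -> sin(Y)/nu
print(np.max(np.abs(ddx(u) + ddy(v))))         # divergence -> ~0
```

The pseudo-time term β∇·u turns the elliptic pressure equation into a damped hyperbolic relaxation; for time-accurate flows this inner iteration must be converged at every physical time step, which is exactly the cost trade-off the abstract describes.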

  4. Comparison of High-Order and Low-Order Methods for Large-Eddy Simulation of a Compressible Shear Layer

    NASA Technical Reports Server (NTRS)

    Mankbadi, Mina R.; Georgiadis, Nicholas J.; DeBonis, James R.

    2015-01-01

    The objective of this work is to compare a high-order solver with a low-order solver for performing Large-Eddy Simulations (LES) of a compressible mixing layer. The high-order method is the Wave-Resolving LES (WRLES) solver employing a Dispersion Relation Preserving (DRP) scheme. The low-order solver is the Wind-US code, which employs the second-order Roe Physical scheme. Both solvers are used to perform LES of the turbulent mixing between two supersonic streams at a convective Mach number of 0.46. The high-order and low-order methods are evaluated at two different levels of grid resolution. For a fine grid resolution, the low-order method produces a very similar solution to the high-order method. At this fine resolution the effects of numerical scheme, subgrid scale modeling, and filtering were found to be negligible. Both methods predict turbulent stresses that are in reasonable agreement with experimental data. However, when the grid resolution is coarsened, the difference between the two solvers becomes apparent. The low-order method deviates from experimental results when the resolution is no longer adequate. The high-order DRP solution shows minimal grid dependence. The effects of subgrid scale modeling and spatial filtering were found to be negligible at both resolutions. For the high-order solver on the fine mesh, a parametric study of the spanwise width was conducted to determine its effect on solution accuracy. An insufficient spanwise width was found to impose an artificial spanwise mode and limit the resolved spanwise modes. We estimate that the spanwise depth needs to be 2.5 times larger than the largest coherent structures to capture the largest spanwise mode and accurately predict turbulent mixing.
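
The resolving power that DRP-type schemes optimize for can be illustrated with modified wavenumbers. The sketch below compares generic central stencils (the actual WRLES/DRP coefficients are not reproduced here):

```python
import numpy as np

# A central difference approximates d/dx by an "effective" (modified)
# wavenumber  k*dx = sum_j 2*a_j*sin(j*k*dx).  Higher-order (and, further,
# dispersion-optimized) stencils track the exact line k*dx much deeper into
# the resolved spectrum, which is what wave-resolving LES relies on.

def modified_wavenumber(kdx, coeffs):
    # coeffs: a_1..a_J of an antisymmetric central stencil
    return sum(2.0*a*np.sin((j + 1)*kdx) for j, a in enumerate(coeffs))

second_order = [0.5]                           # (u[i+1]-u[i-1])/(2 dx)
sixth_order = [3.0/4.0, -3.0/20.0, 1.0/60.0]   # standard 7-point stencil

kdx = 1.0                                      # a moderately resolved wave
err2 = abs(modified_wavenumber(kdx, second_order) - kdx)
err6 = abs(modified_wavenumber(kdx, sixth_order) - kdx)
print(err2, err6)                              # the 7-point stencil is far closer
```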

  6. Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2016-09-01

    Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.
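
A minimal single-phase lattice Boltzmann step makes the method class concrete. This is a plain D2Q9 BGK sketch on a periodic grid; it omits the adaptivity, LES model, and moving boundaries that are the paper's contribution, and all parameter values are illustrative:

```python
import numpy as np

# D2Q9 BGK lattice Boltzmann: collide toward a local equilibrium, then
# stream along the nine lattice velocities. BGK collisions conserve mass
# and momentum exactly, which the assertions below verify.

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
NX = NY = 16
tau = 0.8                                  # relaxation time (illustrative)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# initial state: uniform density, a small mean flow plus sinusoidal shear
rho = np.ones((NX, NY))
ux = (0.05 + 0.02*np.sin(2*np.pi*np.arange(NX)/NX))[:, None]*np.ones((1, NY))
uy = np.zeros((NX, NY))
f = equilibrium(rho, ux, uy)

for _ in range(50):
    rho = f.sum(axis=0)
    ux = (f*c[:, 0, None, None]).sum(axis=0)/rho
    uy = (f*c[:, 1, None, None]).sum(axis=0)/rho
    f += -(f - equilibrium(rho, ux, uy))/tau          # BGK collision
    for i in range(9):                                # streaming
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print(rho.sum())   # total mass is conserved
```

The weak compressibility mentioned in the abstract is intrinsic to this formulation: the scheme solves a slightly compressible system whose low-Mach limit recovers incompressible Navier-Stokes.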

  7. Gas-Kinetic Theory Based Flux Splitting Method for Ideal Magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    1998-01-01

    A gas-kinetic solver is developed for the ideal magnetohydrodynamics (MHD) equations. The new scheme is based on the direct splitting of the flux function of the MHD equations with the inclusion of "particle" collisions in the transport process. Consequently, the artificial dissipation in the new scheme is much reduced in comparison with the MHD Flux Vector Splitting Scheme. At the same time, the new scheme is compared with the well-developed Roe-type MHD solver. It is concluded that the kinetic MHD scheme is more robust and efficient than the Roe-type method, and its accuracy is competitive. In this paper the general principle of splitting the macroscopic flux function based on gas-kinetic theory is presented. The flux construction strategy may shed some light on the possible modification of AUSM- and CUSP-type schemes for the compressible Euler equations, as well as on the development of new schemes for a non-strictly hyperbolic system.
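
The splitting principle can be demonstrated numerically in its simplest kinetic setting. The toy below splits the 1D mass flux of a Maxwellian into right- and left-moving particle contributions (not Xu's MHD construction; the state values are arbitrary):

```python
import numpy as np

# Kinetic flux splitting in one line of physics: the mass flux carried by a
# Maxwellian distribution g(v) is split into half-space moments,
#   F+ = int_{v>0} v g(v) dv,   F- = int_{v<0} v g(v) dv,
# and the two parts recombine exactly into the full macroscopic flux rho*U.

rho, U, T = 1.2, 0.3, 1.0                   # illustrative state (gas constant R = 1)
v = np.linspace(-12.0, 12.0, 200001)
dv = v[1] - v[0]
g = rho/np.sqrt(2*np.pi*T)*np.exp(-(v - U)**2/(2*T))   # 1D Maxwellian

F_plus = np.sum(np.where(v > 0, v*g, 0.0))*dv
F_minus = np.sum(np.where(v < 0, v*g, 0.0))*dv

print(F_plus, F_minus, F_plus + F_minus)    # F+ + F- == rho*U
```

In an upwind scheme, F+ is evaluated from the left cell state and F- from the right one; adding collisions to the transport, as the paper does, is what reduces the dissipation relative to collisionless flux vector splitting.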

  8. Fidelity Optimization of Microprocessor System Simulations.

    DTIC Science & Technology

    1981-03-01

    effort feasible in terms of required CPU time would be to employ a separate clock with an artificially compressed time base in the serial...

  9. Microstructure and mechanical properties of composite resins subjected to accelerated artificial aging.

    PubMed

    dos Reis, Andréa Cândido; de Castro, Denise Tornavoi; Schiavon, Marco Antônio; da Silva, Leandro Jardel; Agnelli, José Augusto Marcondes

    2013-01-01

    The aim of this study was to investigate the influence of accelerated artificial aging (AAA) on the microstructure and mechanical properties of the Filtek Z250, Filtek Supreme, 4 Seasons, Herculite, P60, Tetric Ceram, Charisma and Filtek Z100 composite resins. The composites were characterized by Fourier-transform infrared spectroscopy (FTIR) and thermal analyses (differential scanning calorimetry, DSC, and thermogravimetry, TG). The microstructure of the materials was examined by scanning electron microscopy. Surface hardness and compressive strength data of the resins were recorded and the mean values were analyzed statistically by ANOVA and Tukey's test (α=0.05). The results showed significant differences among the commercial brands for surface hardness (F=86.74, p<0.0001) and compressive strength (F=40.31, p<0.0001), but AAA did not affect the properties (surface hardness: F=0.39, p=0.53; compressive strength: F=2.82, p=0.09) of any of the composite resins. FTIR, DSC and TG analyses showed that resin polymerization was complete, and there were no differences between the spectra and thermal curve profiles of the materials obtained before and after AAA. TG confirmed the absence of volatile compounds and evidenced good thermal stability up to 200 °C, and similar amounts of residues were found in all resins evaluated before and after AAA. The AAA treatment did not significantly affect the resin surface. Therefore, regardless of the resin brand, AAA did not influence the microstructure or the mechanical properties.

  10. Scalar conservation and boundedness in simulations of compressible flow

    DOE PAGES

    Subbareddy, Pramod K.; Kartha, Anand; Candler, Graham V.

    2017-08-07

    With the proper combination of high-order, low-dissipation numerical methods, physics-based subgrid-scale models, and boundary conditions it is becoming possible to simulate many combustion flows at relevant conditions. However, non-premixed flows are a particular challenge because the thickness of the fuel/oxidizer interface scales inversely with Reynolds number. Sharp interfaces can also be present in the initial or boundary conditions. When higher-order numerical methods are used, there are often aphysical undershoots and overshoots in the scalar variables (e.g. passive scalars, species mass fractions, or progress variable). These numerical issues are especially prominent when low-dissipation methods are used, since sharp jumps in flow variables are not always coincident with regions of strong variation in the scalar fields; consequently, special detection mechanisms and dissipative fluxes are needed. Most numerical methods diffuse the interface, resulting in artificial mixing and spurious reactions. In this paper, we propose a numerical method that mitigates this issue. We present methods for passive and active scalars, and demonstrate their effectiveness with several examples.
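
The over/undershoot problem described above is easy to reproduce with a toy 1D linear advection of a sharp scalar step (the paper's actual detection and flux machinery is not shown):

```python
import numpy as np

# Advect a step profile with first-order upwind versus second-order
# Lax-Wendroff on a periodic grid. The higher-order flux produces aphysical
# over/undershoots at the jump; monotone upwind stays bounded in [0, 1].

N, cfl, steps = 200, 0.5, 100
phi0 = np.where(np.arange(N) < N//2, 1.0, 0.0)   # sharp scalar interface

up = phi0.copy()
lw = phi0.copy()
for _ in range(steps):
    up = up - cfl*(up - np.roll(up, 1))                             # upwind
    lw = (lw - 0.5*cfl*(np.roll(lw, -1) - np.roll(lw, 1))
             + 0.5*cfl**2*(np.roll(lw, -1) - 2*lw + np.roll(lw, 1)))  # Lax-Wendroff

print(up.min(), up.max())   # stays within [0, 1]
print(lw.min(), lw.max())   # undershoots below 0 and overshoots above 1
```

For a species mass fraction, the Lax-Wendroff-style undershoot below zero is exactly the kind of aphysical state that triggers spurious reactions.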

  11. The Hugoniot and chemistry of ablator plastic below 100 GPa

    DOE PAGES

    Akin, M. C.; Fratanduono, D. E.; Chau, R.

    2016-01-25

    The equation of state of glow discharge polymer (GDP) was measured to high precision using the two-stage light gas gun at Lawrence Livermore National Laboratory at pressures up to 70 GPa. Both absolute measurements and impedance matching techniques were used to determine the principal and secondary Hugoniots. GDP likely reacts at about 30 GPa, demonstrated by specific emission at 450 nm coupled with changes to the Hugoniot and reshock points. As a result of these reactions, the shock pressure in GDP evolves in time, leading to a possible decrease in pressure as compression increases, or negative compressibility, and causing complex pressure profiles within the plastic. Velocity wave profile variation was observed as a function of position on each shot, suggesting some internal variation of GDP may be present, which would be consistent with previous observations. The complex temporal and possibly structural evolution of GDP under shock compression suggests that calculations of compression and pressure based upon bulk or mean measurements may lead to artificially low pressures and high compressions. Evidence for this includes a large shift in calculating reshock pressures based on the reflected Hugoniot. In conclusion, these changes also suggest other degradation mechanisms for inertial confinement fusion implosions.

  12. Artificial Boundary Conditions for Computation of Oscillating External Flows

    NASA Technical Reports Server (NTRS)

    Tsynkov, S. V.

    1996-01-01

    In this paper, we propose a new technique for the numerical treatment of external flow problems with oscillatory behavior of the solution in time. Specifically, we consider the case of unbounded compressible viscous plane flow past a finite body (airfoil). Oscillations of the flow in time may be caused by the time-periodic injection of fluid into the boundary layer, which, in accordance with experimental data, may essentially increase the performance of the airfoil. To conduct the actual computations, we have to somehow restrict the original unbounded domain, that is, to introduce an artificial (external) boundary and to further consider only a finite computational domain. Consequently, we will need to formulate some artificial boundary conditions (ABC's) at the introduced external boundary. The ABC's we are aiming to obtain must meet a fundamental requirement. One should be able to uniquely complement the solution calculated inside the finite computational domain to its infinite exterior so that the original problem is solved within the desired accuracy. Our construction of such ABC's for oscillating flows is based on an essential assumption: the Navier-Stokes equations can be linearized in the far field against the free-stream background. To actually compute the ABC's, we represent the far-field solution as a Fourier series in time and then apply the Difference Potentials Method (DPM) of V. S. Ryaben'kii. This paper contains a general theoretical description of the algorithm for setting the DPM-based ABC's for time-periodic external flows. Based on our experience in implementing analogous ABC's for steady-state problems (a simpler case), we expect that these boundary conditions will become an effective tool for constructing robust numerical methods to calculate oscillatory flows.
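
A far simpler relative of such artificial boundary conditions illustrates the stakes: below, a 1D wave equation is truncated with either a reflecting Dirichlet wall or a characteristic (Sommerfeld) condition at the artificial boundary. The DPM-based ABCs of the paper are much more general; this sketch only shows why the choice matters:

```python
import numpy as np

# Leapfrog for u_tt = c^2 u_xx at CFL = 1 (exact transport). At the right,
# artificial boundary we compare a hard wall u = 0 against the outgoing
# characteristic condition u_t + c u_x = 0, discretized by upwinding.

N, c, dt = 400, 1.0, 1.0                   # dx = 1, CFL = 1
x = np.arange(N, dtype=float)

def g(z):                                  # Gaussian pulse profile
    return np.exp(-0.01*(z - 100.0)**2)

def run(sommerfeld, steps=350):
    u_prev = g(x)                          # u(x, 0)
    u = g(x - c*dt)                        # u(x, dt): purely right-moving
    for _ in range(steps):
        u_next = np.empty_like(u)
        u_next[1:-1] = u[2:] + u[:-2] - u_prev[1:-1]   # leapfrog interior
        u_next[0] = 0.0
        if sommerfeld:
            u_next[-1] = u[-1] - c*dt*(u[-1] - u[-2])  # outgoing characteristic
        else:
            u_next[-1] = 0.0                           # hard Dirichlet wall
        u_prev, u = u, u_next
    return u

print(np.abs(run(True)).max())    # ~0: the pulse exits the artificial boundary
print(np.abs(run(False)).max())   # O(1): the pulse reflects back into the domain
```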

  13. Synthesis of an Al-Mn-Based Alloy Containing In Situ-Formed Quasicrystals and Evaluation of Its Mechanical and Corrosion Properties

    NASA Astrophysics Data System (ADS)

    Naglič, Iztok; Samardžija, Zoran; Delijić, Kemal; Kobe, Spomenka; Leskovar, Blaž; Markoli, Boštjan

    2018-05-01

    An Al-Mn alloy with additions of copper, magnesium, and silicon was prepared and cast into a copper mold. It contains in situ-formed icosahedral quasicrystals (iQCs), as confirmed by electron backscatter diffraction. The aim of this work is to present the mechanical and corrosion properties of this alloy and compare its properties with some conventional commercial materials. The compressive strength and compressive yield strength were 751 MPa and 377 MPa, while the compressive fracture strain was 19%. It was observed that intensive shearing caused the final fracture of the specimens and the fractured iQC dendrites still showed cohesion with the α-Al matrix. The polarization resistance and corrosion rate of the artificially aged alloy were 7.30 kΩ and 1.2 μm/year. The evaluated properties are comparable to conventional, discontinuously reinforced aluminum metal-matrix composites and structural wrought aluminum alloys.

  14. Biomechanics of a Fixed–Center of Rotation Cervical Intervertebral Disc Prosthesis

    PubMed Central

    Crawford, Neil R.; Baek, Seungwon; Sawa, Anna G.U.; Safavi-Abbasi, Sam; Sonntag, Volker K.H.; Duggal, Neil

    2012-01-01

    Background: Past in vitro experiments studying artificial discs have focused on range of motion. It is also important to understand how artificial discs affect other biomechanical parameters, especially alterations to kinematics. The purpose of this in vitro investigation was to quantify how disc replacement with a ball-and-socket disc arthroplasty device (ProDisc-C; Synthes, West Chester, Pennsylvania) alters biomechanics of the spine relative to the normal condition (positive control) and simulated fusion (negative control). Methods: Specimens were tested in multiple planes by use of pure moments under load control and again in displacement control during flexion-extension with a constant 70-N compressive follower load. Optical markers measured 3-dimensional vertebral motion, and a strain gauge array measured C4-5 facet loads. Results: Range of motion and lax zone after disc replacement were not significantly different from normal values except during lateral bending, whereas plating significantly reduced motion in all loading modes (P < .002). Plating but not disc replacement shifted the location of the axis of rotation anteriorly relative to the intact condition (P < 0.01). Coupled axial rotation per degree of lateral bending was 25% ± 48% greater than normal after artificial disc replacement (P = .05) but 37% ± 38% less than normal after plating (P = .002). Coupled lateral bending per degree of axial rotation was 37% ± 21% less than normal after disc replacement (P < .001) and 41% ± 36% less than normal after plating (P = .001). Facet loads did not change significantly relative to normal after anterior plating or arthroplasty, except that facet loads were decreased during flexion in both conditions (P < .03). Conclusions: In all parameters studied, deviations from normal biomechanics were less substantial after artificial disc placement than after anterior plating. PMID:25694869

  15. Simulation of a pulsatile total artificial heart: Development of a partitioned Fluid Structure Interaction model

    NASA Astrophysics Data System (ADS)

    Sonntag, Simon J.; Kaufmann, Tim A. S.; Büsen, Martin R.; Laumen, Marco; Linde, Torsten; Schmitz-Rode, Thomas; Steinseifer, Ulrich

    2013-04-01

    Heart disease is one of the leading causes of death in the world. Due to a shortage in donor organs, artificial hearts can be a bridge to transplantation or even serve as a destination therapy for patients with terminal heart insufficiency. A pusher-plate-driven pulsatile membrane pump, the Total Artificial Heart (TAH) ReinHeart, is currently under development at the Institute of Applied Medical Engineering of RWTH Aachen University. This paper presents the methodology of a fully coupled three-dimensional time-dependent Fluid Structure Interaction (FSI) simulation of the TAH using a commercial partitioned block-Gauss-Seidel coupling package. Partitioned coupling of the incompressible fluid with the slender flexible membrane as well as a high fluid/structure density ratio of about unity led inherently to a deterioration of the stability ('artificial added mass instability'). The objective was to conduct a stable simulation with high accuracy of the pumping process. In order to achieve stability, a combined resistance and pressure outlet boundary condition as well as the interface artificial compressibility method was applied. An analysis of the contact algorithm and turbulence condition is presented. Independence tests are performed for the structural and the fluid mesh, the time step size and the number of pulse cycles. Because of the large deformation of the fluid domain, a variable mesh stiffness depending on certain mesh properties was specified for the fluid elements. Adaptive remeshing was avoided. Different approaches for the mesh stiffness function are compared with respect to convergence, preservation of mesh topology and mesh quality. The resulting mesh aspect ratios, mesh expansion factors and mesh orthogonalities are evaluated in detail. The membrane motion and flow distribution of the coupled simulations are compared with a top-view recording and stereo Particle Image Velocimetry (PIV) measurements, respectively, of the actual pump.
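
The 'artificial added mass instability' mentioned above can be reduced to a two-line toy model (masses and forces here are illustrative, not the ReinHeart model):

```python
# A structure of mass m_s feels a fluid reaction force -m_a*a, where m_a is
# the fluid's added mass. The monolithic solution of
#   m_s*a = f_ext - m_a*a   is   a = f_ext/(m_s + m_a).
# A naive block-Gauss-Seidel swap between the two solvers diverges whenever
# m_a/m_s >= 1 (density ratio near unity, as for blood and a thin membrane),
# while under-relaxation with omega < 2/(1 + m_a/m_s) restores convergence.

m_s, m_a, f_ext = 1.0, 1.5, 1.0          # m_a/m_s > 1: naive coupling diverges
a_exact = f_ext/(m_s + m_a)

def iterate(omega, iters=200):
    a = 0.0
    for _ in range(iters):
        force = f_ext - m_a*a            # "fluid" solve: reaction to last a
        a_new = force/m_s                # "structure" solve
        a = (1 - omega)*a + omega*a_new  # relaxed interface update
    return a

naive = iterate(omega=1.0)               # blows up: iteration factor -1.5
relaxed = iterate(omega=0.5)             # omega < 2/(1 + 1.5) = 0.8: converges

print(relaxed, a_exact, naive)
```

Interface artificial compressibility, as used in the paper, attacks the same feedback loop by making the fluid subproblem slightly compressible at the coupling interface instead of (or in addition to) relaxing the update.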

  16. Compression experiments on artificial, alpine and marine ice: implications for ice-shelf/continental interactions

    NASA Astrophysics Data System (ADS)

    Dierckx, Marie; Goossens, Thomas; Samyn, Denis; Tison, Jean-Louis

    2010-05-01

    Antarctic ice shelves are important components of continental ice dynamics, in that they control grounded ice flow towards the ocean. As such, Antarctic ice shelves are a key parameter for the stability of the Antarctic ice sheet in the context of global change. Marine ice, formed by sea water accretion beneath some ice shelves, displays distinct physical (grain textures, bubble content, ...) and chemical (salinity, isotopic composition, ...) characteristics as compared to glacier ice and sea ice. The aim is to refine Glen's flow relation (generally used to describe ice behaviour in deformation) as a function of various parameters (temperature, salinity, debris, grain size, ...) and thereby improve the deformation laws used in dynamic ice-shelf models, which would then give more accurate and/or realistic predictions of ice-shelf stability. To better understand the mechanical properties of natural ice, deformation experiments were performed on ice samples in the laboratory, using a pneumatic compression device. To do so, we developed a custom-built compression rig operated by pneumatic drives. It has been designed for performing uniaxial compression tests at constant load and under unconfined conditions. The operating pressure ranges from about 0.5 to 10 bar. This allows the experimental conditions to be modified to match the conditions found at the grounding zone (in the 1 bar range). To maintain the ice at low temperature, the samples are immersed in a silicone oil bath connected to an external refrigeration system. During the experiments, the vertical displacement of the piston and the applied force are measured by sensors connected to a digital acquisition system. We started our experiments with artificial ice and went on with continental ice samples from glaciers in the Alps. The first results allowed us to acquire realistic mechanical data for natural ice. 
Ice viscosity was calculated for different types of artificial ice, using Glen's flow law, and showed the importance of impurity content and ice crystallography (grain size, ice fabrics, ...) for the deformation behaviour. Glacier ice was also used in our experiments. Calculations of the flow parameter A give a value of 3.10 × 10⁻¹⁶ s⁻¹ kPa⁻³ at a temperature of -10 °C. These results are in accordance with previous laboratory deformation studies. Compression tests show the effectiveness of the deformation unit for uniaxial strain experiments. In the future, deformation of marine ice and of the ice mélange (consisting of a melange of marine ice, broken blocks of continental ice, and blown snow further metamorphosed into firn and then ice) will be studied, to obtain a comprehensive understanding of the parameters that influence the behaviour of both ice types and how they affect the overall flow of the ice shelf and potential future sea-level rise.
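
The reported flow parameter plugs directly into Glen's law. In the sketch below, A is taken from the abstract and n = 3 is the standard exponent for ice (assumed, since the abstract does not state it):

```python
# Glen's flow law:  strain rate  e = A * tau^n.  The effective viscosity
# eta = tau/(2e) then falls rapidly with stress, which is why ice behaves as
# a shear-thinning non-Newtonian fluid.

A = 3.1e-16    # s^-1 kPa^-3, flow parameter at -10 deg C (from the abstract)
n = 3          # Glen exponent (standard assumption)

def strain_rate(tau_kpa):
    return A*tau_kpa**n                         # s^-1

def effective_viscosity(tau_kpa):
    return tau_kpa/(2.0*strain_rate(tau_kpa))   # kPa * s

# at 100 kPa, a typical glacier driving stress:
print(strain_rate(100.0), effective_viscosity(100.0))
```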

  17. Vanishing Viscosity Approach to the Compressible Euler Equations for Transonic Nozzle and Spherically Symmetric Flows

    NASA Astrophysics Data System (ADS)

    Chen, Gui-Qiang G.; Schrecker, Matthew R. I.

    2018-04-01

    We are concerned with globally defined entropy solutions to the Euler equations for compressible fluid flows in transonic nozzles with general cross-sectional areas. Such nozzles include the de Laval nozzles and other more general nozzles whose cross-sectional area functions are allowed at the nozzle ends to be either zero (closed ends) or infinity (unbounded ends). To achieve this, in this paper, we develop a vanishing viscosity method to construct globally defined approximate solutions and then establish essential uniform estimates in weighted L^p norms for the whole range of physical adiabatic exponents γ ∈ (1, ∞), so that the viscosity approximate solutions satisfy the general L^p compensated compactness framework. The viscosity method is designed to incorporate artificial viscosity terms with the natural Dirichlet boundary conditions to ensure the uniform estimates. Then such estimates lead to both the convergence of the approximate solutions and the existence theory of globally defined finite-energy entropy solutions to the Euler equations for transonic flows that may have different end-states in the class of nozzles with general cross-sectional areas for all γ ∈ (1, ∞). The approach and techniques developed here apply to other problems with similar difficulties. In particular, we successfully apply them to construct globally defined spherically symmetric entropy solutions to the Euler equations for all γ ∈ (1, ∞).
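
For reference, the nozzle system in question can be written in the following standard form (the pressure law and notation are assumed for illustration; the paper's precise formulation may differ):

```latex
% Isentropic Euler flow in a nozzle of cross-sectional area a(x):
\begin{aligned}
  \rho_t + (\rho u)_x &= -\frac{a'(x)}{a(x)}\,\rho u,\\
  (\rho u)_t + \bigl(\rho u^2 + p(\rho)\bigr)_x &= -\frac{a'(x)}{a(x)}\,\rho u^2,
  \qquad p(\rho) = \kappa\rho^{\gamma},\quad \gamma \in (1,\infty).
\end{aligned}
```

The vanishing viscosity construction appends artificial terms of the form ε ρ_xx and ε (ρu)_xx to these equations, derives estimates uniform in ε, and passes ε → 0 via the compensated compactness framework mentioned above.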

  18. Acoustic metric of the compressible draining bathtub

    NASA Astrophysics Data System (ADS)

    Cherubini, C.; Filippi, S.

    2011-10-01

    The draining bathtub flow, a cornerstone in the theory of acoustic black holes, is here extended to the case of exact solutions for compressible nonviscous flows characterized by a polytropic equation of state. Investigating the analytical configurations obtained for selected values of the polytropic index, it is found that each of them becomes nonphysical at the so-called limiting circle. By studying the null geodesics structure of the corresponding acoustic line elements, it is shown that such a geometrical locus coincides with the acoustic event horizon. This region is characterized also by an infinite value of space-time curvature, so the acoustic analogy breaks down there. Possible applications for artificial and natural vortices are finally discussed.
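
For orientation, the constant-density geometry that this work generalizes is the standard draining-bathtub acoustic metric of the analogue-gravity literature (quoted here up to a conformal factor; signs depend on orientation conventions, so treat the cross terms as an assumption):

```latex
% Incompressible draining-bathtub flow and its acoustic line element:
\vec{v} = \frac{-A\,\hat{r} + B\,\hat{\theta}}{r}, \qquad
ds^2 = -\Bigl(c^2 - \frac{A^2 + B^2}{r^2}\Bigr)\,dt^2
     + \frac{2A}{r}\,dr\,dt - 2B\,d\theta\,dt + dr^2 + r^2\,d\theta^2 .
```

In this constant-sound-speed case the acoustic horizon sits where the radial inflow reaches c, at r = A/c, with the ergosurface at r = √(A² + B²)/c; the compressible polytropic solutions studied here replace these fixed radii with the limiting-circle behavior described in the abstract.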

  19. In vivo study of the biocompatibility of a novel compressed collagen hydrogel scaffold for artificial corneas.

    PubMed

    Xiao, Xianghua; Pan, Shiyin; Liu, Xianning; Zhu, Xiuping; Connon, Che John; Wu, Jie; Mi, Shengli

    2014-06-01

    The experiments were designed to evaluate the biocompatibility of a plastically compressed collagen scaffold (PCCS). The ultrastructure of the PCCS was observed via scanning electron microscopy. Twenty New Zealand white rabbits were randomly divided into experimental and control groups that received corneal pocket transplantation with the PCCS and an amniotic membrane, respectively. The contralateral eye of each implanted rabbit served as the normal group. On the 1st, 7th, 14th, 21st, 30th, 60th, 90th, and 120th postoperative day, the eyes were observed via a slit lamp. On the 120th postoperative day, the rabbit eyes were enucleated to examine the tissue compatibility of the implanted stroma. The PCCS was white and translucent. The scanning electron microscopy results showed that fibers within the PCCS were densely packed and evenly arranged. No edema, inflammation, or neovascularization was observed on the ocular surface under a slit lamp, and few lymphocytes were observed in the stroma of the rabbit cornea on histological examination. In conclusion, the PCCS has extremely high biocompatibility and is a promising scaffold for an artificial cornea. Copyright © 2013 Society of Plastics Engineers.

  20. Modeling constitutive behavior of a 15Cr-15Ni-2.2Mo-Ti modified austenitic stainless steel under hot compression using artificial neural network

    NASA Astrophysics Data System (ADS)

    Mandal, Sumantra

    2006-11-01

    In this paper, an artificial neural network (ANN) model has been suggested to predict the constitutive flow behavior of a 15Cr-15Ni-2.2Mo-Ti modified austenitic stainless steel under hot deformation. Hot compression tests in the temperature range 850-1250 °C and strain rate range 10⁻³-10² s⁻¹ were carried out. These tests provided the required data for training the neural network and for subsequent testing. The inputs of the neural network are strain, log strain rate and temperature, while flow stress is obtained as output. A three-layer feed-forward network with ten neurons in a single hidden layer and a back-propagation learning algorithm has been employed. A very good correlation between experimental and predicted results has been obtained. The effect of temperature and strain rate on flow behavior has been simulated employing the ANN model. The results have been found to be consistent with the metallurgical trend. Finally, a Monte Carlo analysis has been carried out to determine the noise sensitivity of the developed model.
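
The described architecture is small enough to sketch end to end: a 3-10-1 tanh network trained by plain backpropagation. The training data below are synthetic stand-ins (a smooth made-up stress surface), not the hot compression measurements:

```python
import numpy as np

# Inputs mirror the paper's three features (strain, log strain rate,
# temperature, normalized); the single output stands in for flow stress.

rng = np.random.default_rng(0)
Xd = rng.uniform(-1.0, 1.0, size=(200, 3))                       # inputs
yd = (np.sin(Xd[:, 0]) + 0.5*Xd[:, 1] - 0.3*Xd[:, 2])[:, None]   # synthetic "stress"

W1 = rng.normal(0, 0.5, (3, 10)); b1 = np.zeros((1, 10))
W2 = rng.normal(0, 0.5, (10, 1)); b2 = np.zeros((1, 1))
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)           # hidden layer of ten tanh neurons
    return h, h @ W2 + b2              # linear output

_, y0 = forward(Xd)
loss0 = np.mean((y0 - yd)**2)

for _ in range(2000):                  # full-batch gradient descent
    h, y = forward(Xd)
    err = 2.0*(y - yd)/len(Xd)         # dL/dy for the MSE loss
    gW2 = h.T @ err; gb2 = err.sum(0, keepdims=True)
    dh = (err @ W2.T)*(1 - h**2)       # backprop through tanh
    gW1 = Xd.T @ dh; gb1 = dh.sum(0, keepdims=True)
    W1 -= lr*gW1; b1 -= lr*gb1; W2 -= lr*gW2; b2 -= lr*gb2

_, y1 = forward(Xd)
loss1 = np.mean((y1 - yd)**2)
print(loss0, loss1)                    # training loss drops substantially
```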

  1. [Recent insights into the possibilities of resuscitation of dogs and cats].

    PubMed

    How, K L; Reens, N; Stokhof, A A; Hellebrekers, L J

    1998-08-15

    This article reviews the present state of the art of resuscitation of dogs and cats. The purpose of resuscitation is to revive animals so that the vital functions resume together with a normal brain function. Resuscitation must be started as soon as the cardiopulmonary arrest has been confirmed. Adequate ventilation and effective circulation to the most vital body organs, the heart and the brain, have the highest priority. They can be achieved by endotracheal intubation, artificial ventilation with 100% oxygen and rhythmic compression of the closed chest or direct cardiac massage following thoracotomy. Medical therapy is an important part of resuscitation. In the absence of a central venous route, deep endotracheal administration is the preferred method of administration. Most medications can be administered through the endotracheal tube in this fashion.

  2. Compression-induced structural and mechanical changes of fibrin-collagen composites.

    PubMed

    Kim, O V; Litvinov, R I; Chen, J; Chen, D Z; Weisel, J W; Alber, M S

    2017-07-01

    Fibrin and collagen as well as their combinations play an important biological role in tissue regeneration and are widely employed in surgery as fleeces or sealants and in bioengineering as tissue scaffolds. Earlier studies demonstrated that fibrin-collagen composite networks displayed improved tensile mechanical properties compared to the isolated protein matrices. Unlike previous studies, here unconfined compression was applied to a fibrin-collagen filamentous polymer composite matrix to study its structural and mechanical responses to compressive deformation. Combining collagen with fibrin resulted in formation of a composite hydrogel exhibiting synergistic mechanical properties compared to the isolated fibrin and collagen matrices. Specifically, the composite matrix revealed a one order of magnitude increase in the shear storage modulus at compressive strains > 0.8 in response to compression compared to the mechanical features of individual components. These material enhancements were attributed to the observed structural alterations, such as network density changes, an increase in connectivity along with criss-crossing, and bundling of fibers. In addition, the compressed composite collagen/fibrin networks revealed a non-linear transformation of their viscoelastic properties with softening and stiffening regimes. These transitions were shown to depend on protein concentrations. Namely, a decrease in protein content drastically affected the mechanical response of the networks to compression by shifting the onset of stiffening to higher degrees of compression. Since both natural and artificially composed extracellular matrices experience compression in various (patho)physiological conditions, our results provide new insights into the structural biomechanics of the polymeric composite matrix that can help to create fibrin-collagen sealants, sponges, and tissue scaffolds with tunable and predictable mechanical properties. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Analysis-Preserving Video Microscopy Compression via Correlation and Mathematical Morphology

    PubMed Central

    Shao, Chong; Zhong, Alfred; Cribb, Jeremy; Osborne, Lukas D.; O’Brien, E. Timothy; Superfine, Richard; Mayer-Patel, Ketan; Taylor, Russell M.

    2015-01-01

    The large amount of video data produced by multi-channel, high-resolution microscopy systems drives the need for a new high-performance, domain-specific video compression technique. We describe a novel compression method for video microscopy data based on Pearson's correlation and mathematical morphology, which makes use of the point-spread function (PSF) in the microscopy video acquisition phase. We compare our method to other lossless compression methods and to lossy JPEG, JPEG2000, and H.264 compression for various kinds of video microscopy data, including fluorescence video and brightfield video. We find that for certain data sets the new method compresses much better than lossless compression with no impact on analysis results. It achieved a best compressed size of 0.77% of the original size, 25× smaller than the best lossless technique (which yields 20% for the same video). The compressed size scales with the video's scientific data content. Further testing showed that existing lossy algorithms greatly impacted data analysis at similar compression sizes. PMID:26435032
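
    The core idea of exploiting inter-frame correlation can be illustrated with a toy sketch: frames nearly identical to their predecessor (high Pearson correlation) are stored as back-references rather than pixels. This is only a hedged illustration of the correlation step, not the published PSF/morphology pipeline; the threshold and data below are made up.

```python
import numpy as np

def pearson(a, b):
    # Pearson correlation coefficient between two frames
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

def compress(frames, threshold=0.99):
    # Keep the first frame; replace near-duplicates with a reference marker.
    kept, plan = [frames[0]], ["keep"]
    for prev, cur in zip(frames, frames[1:]):
        if pearson(prev, cur) >= threshold:
            plan.append("ref")      # store a back-reference only
        else:
            kept.append(cur)
            plan.append("keep")
    return kept, plan

rng = np.random.default_rng(1)
base = rng.random((16, 16))
frames = [base,
          base + rng.normal(0, 1e-4, (16, 16)),   # nearly identical frame
          rng.random((16, 16))]                   # genuinely new content
kept, plan = compress(frames)
```

    In this sketch only two of the three frames are stored; the compressed size thus scales with the amount of genuinely new content, mirroring the behavior the abstract reports.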

  4. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique, are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate index representation to its final size. The efficiency of the lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
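
    The embedding principle described above, nudging integer indices whose value is uncertain by one unit so that they carry auxiliary bits, can be sketched as follows. The parity scheme and names here are illustrative assumptions, not the patented algorithm.

```python
def embed(indices, bits):
    # Force each carrier index's parity to match one auxiliary bit; the
    # +/-1 adjustment stays within the one-unit uncertainty of lossy
    # quantization, so the host data are not meaningfully degraded.
    out = []
    for idx, bit in zip(indices, bits):
        out.append(idx if idx % 2 == bit else idx + 1)
    return out + list(indices[len(bits):])

def extract(indices, nbits):
    # Auxiliary bits are recovered directly from index parities.
    return [idx % 2 for idx in indices[:nbits]]

carrier = [12, 7, 3, 44, 9, 10]   # hypothetical quantized indices
payload = [1, 0, 1, 1]            # auxiliary bits to hide
stego = embed(carrier, payload)
recovered = extract(stego, len(payload))
```

    Each carrier index moves by at most one unit, which is why schemes of this family can ride on top of quantization-based codecs without visibly altering the decompressed output.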

  5. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate index representation to its final size. The efficiency of the lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  6. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats with corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics), and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute the IQ of each result for each compression method. The second step is to create several regression models per compression method by analyzing the IQ measurements versus the compression parameters over a number of compressed images. The third step is to compress the given image at the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regression models. If the IQ is specified by a compression ratio (e.g., 100), we select the compression method with the highest IQ (SSIM or PSNR); if it is specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
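
    The three-step scenario above can be sketched in miniature: fit a regression of an IQ metric (PSNR here) against a compression parameter, then invert the model to pick the parameter that meets a specified IQ target. The calibration numbers are synthetic assumptions; the paper fits such models per method (JPEG, JPEG2000, BPG, TIFF).

```python
import numpy as np

def psnr(ref, test):
    # Peak signal-to-noise ratio for 8-bit imagery (peak value 255)
    rmse = np.sqrt(np.mean((ref.astype(float) - test.astype(float)) ** 2))
    return 20 * np.log10(255.0 / rmse)

# Step 1 (synthetic): measured PSNR for a sweep of a quality parameter q
q = np.array([10, 30, 50, 70, 90], dtype=float)
measured_psnr = np.array([28.0, 33.5, 37.0, 41.0, 46.5])

# Step 2: a linear regression model of PSNR versus q
coef = np.polyfit(q, measured_psnr, 1)   # [slope, intercept]

# Step 3: invert the model to find the q achieving a specified IQ
target = 40.0                            # specified IQ (PSNR in dB)
q_needed = (target - coef[1]) / coef[0]
```

    With real images, one such model would be fitted per codec, and the codec whose inverted model gives the highest compression ratio at the target IQ would be selected.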

  7. Effects of Time-Compressed Speech Training on Multiple Functional and Structural Neural Mechanisms Involving the Left Superior Temporal Gyrus.

    PubMed

    Maruyama, Tsukasa; Takeuchi, Hikaru; Taki, Yasuyuki; Motoki, Kosuke; Jeong, Hyeonjeong; Kotozaki, Yuka; Nakagawa, Seishu; Nouchi, Rui; Iizuka, Kunio; Yokoyama, Ryoichi; Yamamoto, Yuki; Hanawa, Sugiko; Araki, Tsuyoshi; Sakaki, Kohei; Sasaki, Yukako; Magistro, Daniele; Kawashima, Ryuta

    2018-01-01

    Time-compressed speech is an artificial form of rapidly presented speech. Training with time-compressed speech of a second language (TCSSL) leads to adaptation toward TCSSL. Here, we investigated the effects of 4 weeks of training with TCSSL on diverse cognitive functions and neural systems using the fractional amplitude of spontaneous low-frequency fluctuations (fALFF), resting-state functional connectivity (RSFC) with the left superior temporal gyrus (STG), fractional anisotropy (FA), and regional gray matter volume (rGMV) of young adults by magnetic resonance imaging. There were no significant differences in change of performance on measures of cognitive functions or second language skills after training with TCSSL compared with the active control group. However, compared with the active control group, training with TCSSL was associated with increased fALFF, RSFC, and FA and decreased rGMV involving areas in the left STG. These results provide no evidence of a far-transfer effect of time-compressed speech training on a wide range of cognitive functions and second language skills in young adults. However, they demonstrate effects of time-compressed speech training on gray and white matter structures as well as on resting-state intrinsic activity and connectivity involving the left STG, which plays a key role in listening comprehension.

  8. Comparison of Implicit Schemes for the Incompressible Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1995-01-01

    For a computational flow simulation tool to be useful in a design environment, it must be very robust and efficient. To develop such a tool for incompressible flow applications, a number of different implicit schemes are compared for several two-dimensional flow problems in the current study. The schemes include Point-Jacobi relaxation, Gauss-Seidel line relaxation, incomplete lower-upper decomposition, and the generalized minimum residual method preconditioned with each of the three other schemes. The efficiency of the schemes is measured in terms of the computing time required to obtain a steady-state solution for the laminar flow over a backward-facing step, the flow over a NACA 4412 airfoil, and the flow over a three-element airfoil using overset grids. The flow solver used in the study is the INS2D code that solves the incompressible Navier-Stokes equations using the method of artificial compressibility and upwind differencing of the convective terms. The results show that the generalized minimum residual method preconditioned with the incomplete lower-upper factorization outperforms all other methods by at least a factor of 2.
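
    The artificial compressibility method named above relaxes the continuity equation with a pseudo-time pressure derivative, roughly dp/dτ + β ∇·u = 0, so that pressure and velocity can be marched together until the divergence vanishes. The following toy numpy sketch (made-up grid, parameters, and an added artificial viscosity for damping; not the INS2D scheme) shows the pseudo-time iteration driving a 2-D periodic velocity field toward zero divergence.

```python
import numpy as np

n = 32
h = 1.0 / n
beta = 1.0        # artificial compressibility parameter
dtau = 0.2 * h    # pseudo-time step (kept well inside the CFL limit)
nu = 0.02         # artificial viscosity, added so the iteration damps

x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)   # not divergence-free
v = np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)
p = np.zeros((n, n))

def ddx(f):  # central difference, periodic
    return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)

def ddy(f):
    return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)

def lap(f):  # 5-point Laplacian, periodic
    return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
            np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / h**2

div0 = np.abs(ddx(u) + ddy(v)).max()
for _ in range(2000):
    p -= dtau * beta * (ddx(u) + ddy(v))   # pseudo-compressible continuity
    u += dtau * (-ddx(p) + nu * lap(u))    # momentum relaxation
    v += dtau * (-ddy(p) + nu * lap(v))
div_final = np.abs(ddx(u) + ddy(v)).max()
```

    This explicit relaxation is the simplest member of the family; the implicit schemes compared in the record above (line relaxation, ILU, preconditioned GMRES) exist precisely because explicit pseudo-time marching like this converges slowly.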

  9. The behavior of compression and degradation for municipal solid waste and combined settlement calculation method.

    PubMed

    Shi, Jianyong; Qian, Xuede; Liu, Xiaodong; Sun, Long; Liao, Zhiqiang

    2016-09-01

    The total compression of municipal solid waste (MSW) consists of primary, secondary, and decomposition compressions, and it is usually difficult to distinguish between the three. In this study, oedometer tests were used to separate the primary and secondary compressions and to determine the primary and secondary compression coefficients. In addition, the ending time of primary compression was determined from MSW compression tests under a degradation-inhibited condition achieved by adding vinegar. The secondary compression occurring during the primary compression stage accounts for a relatively high proportion of both the total compression and the total secondary compression. The relationship between degradation ratio and time was obtained from independent tests. Furthermore, a combined calculation method covering all three parts of MSW compression, including organics degradation, is proposed based on a one-dimensional compression method. The relationship between the methane generation potential L0 of the LandGEM model and the degradation compression index is also discussed. A special column compression apparatus, which can simulate the whole compression process of municipal solid waste in China, was designed. The new combined calculation method was analyzed against the results of a 197-day column compression test. Degradation compression is the main component of MSW compression in the middle of the test period. Copyright © 2015 Elsevier Ltd. All rights reserved.
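
    A combined settlement of the kind described, primary plus secondary plus degradation compression, can be sketched as below. All parameter values and the exact form of the degradation term are hypothetical placeholders, not the paper's calibrated formulation; the sketch only shows how the three contributions add on top of the classic one-dimensional terms.

```python
import math

def settlement(H0, Cc, sigma0, sigma, Calpha, t, tp, Edg, deg_ratio):
    # primary compression: classic one-dimensional log-stress term
    primary = H0 * Cc * math.log10(sigma / sigma0)
    # secondary (creep) compression after the end of primary compression tp
    secondary = H0 * Calpha * math.log10(max(t / tp, 1.0))
    # degradation compression, scaled by the organics degradation ratio
    degradation = H0 * Edg * deg_ratio
    return primary + secondary + degradation

# hypothetical 10 m MSW column, stress raised from 50 to 200 kPa
s_100d = settlement(10.0, 0.25, 50.0, 200.0, 0.03, t=100, tp=30,
                    Edg=0.2, deg_ratio=0.4)
s_197d = settlement(10.0, 0.25, 50.0, 200.0, 0.03, t=197, tp=30,
                    Edg=0.2, deg_ratio=0.7)
```

    With these placeholder numbers the degradation term grows fastest between the two dates, echoing the paper's finding that degradation dominates in the middle of the test period.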

  10. Bunch length compression method for free electron lasers to avoid parasitic compressions

    DOEpatents

    Douglas, David R.; Benson, Stephen; Nguyen, Dinh Cong; Tennant, Christopher; Wilson, Guy

    2015-05-26

    A bunch length compression method for a free electron laser (FEL) that avoids parasitic compressions by 1) applying acceleration on the falling portion of the RF waveform, 2) compressing using a positive momentum compaction (R.sub.56>0), and 3) compensating for aberrations by using nonlinear magnets in the compressor beam line.

  11. Novel methodology to obtain salient biomechanical characteristics of insole materials.

    PubMed

    Lavery, L A; Vela, S A; Ashry, H R; Lanctot, D R; Athanasiou, K A

    1997-06-01

    Viscoelastic inserts are commonly used as artificial shock absorbers to prevent neuropathic foot ulcerations by decreasing pressure on the sole of the foot. Unfortunately, there is little scientific information available to guide physicians in the selection of appropriate insole materials. Therefore, a novel methodology was developed to form a rational platform for biomechanical characterizations of insole material durability, which consisted of in vivo gait analysis and in vitro bioengineering measurements. Results show significant differences in the compressive stiffness of the tested insoles and the rate of change over time in both compressive stiffness and peak pressures measured. Good correlations were found between pressure-time integral and Young's modulus (r2 = 0.93), and total energy applied and Young's modulus (r2 = 0.87).

  12. Calculation of Water Entry Problem for Free-falling Bodies Using a Developed Cartesian Cut Cell Mesh

    NASA Astrophysics Data System (ADS)

    Wenhua, Wang; Yanying, Wang

    2010-05-01

    This paper describes the application of a free surface capturing method on a Cartesian cut cell mesh to the water entry problem for free-falling bodies with body-fluid interaction. The incompressible Euler equations for a variable-density fluid system are taken as the governing equations, and the free surface is treated as a contact discontinuity by the free surface capturing method. To deal conveniently with moving body boundaries, the Cartesian cut cell technique is adopted to generate a boundary-fitted mesh around the body edge by cutting solid regions out of a background Cartesian mesh. On this mesh system, the governing equations are discretized by the finite volume method, and the inviscid flux at each cell edge is evaluated with Roe's approximate Riemann solver. For unsteady calculation in the time domain, a time-accurate solution is achieved by a dual time-stepping technique with the artificial compressibility method. For the body-fluid interaction, the projection method for the momentum equations and an exact Riemann solution are applied to calculate the fluid pressure on the solid boundary. Finally, the method is validated on test cases of water entry for free-falling bodies.
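
    The dual time-stepping technique mentioned above can be sketched on a scalar model equation: each physical (implicit) time step is driven to convergence by an inner pseudo-time iteration on the unsteady residual. The model ODE, step sizes, and iteration counts below are illustrative assumptions, not the paper's solver.

```python
import numpy as np

def f(u):
    return -u   # model ODE: du/dt = -u, exact solution exp(-t)

dt, dtau = 0.1, 0.02   # physical and pseudo-time steps (assumed values)
u_n = 1.0
history = [u_n]
for _ in range(20):                      # physical time steps
    U = u_n                              # pseudo-time initial guess
    for _ in range(500):                 # inner pseudo-time iterations
        R = -(U - u_n) / dt + f(U)       # unsteady residual (backward Euler)
        U += dtau * R                    # march in pseudo-time until R -> 0
    u_n = U
    history.append(u_n)

exact = np.exp(-0.1 * np.arange(21))
err = max(abs(h_ - e) for h_, e in zip(history, exact))
```

    When the inner loop converges, each step reproduces the implicit backward-Euler update exactly; in a flow solver the same structure lets the artificial compressibility iteration play the role of the inner loop at every physical time step.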

  13. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier-Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  14. A robust and accurate numerical method for transcritical turbulent flows at supercritical pressure with an arbitrary equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawai, Soshi, E-mail: kawai@cfd.mech.tohoku.ac.jp; Terashima, Hiroshi; Negishi, Hideyo

    2015-11-01

    This paper addresses issues in high-fidelity numerical simulations of transcritical turbulent flows at supercritical pressure. The proposed strategy builds on a tabulated look-up table method based on REFPROP database for an accurate estimation of non-linear behaviors of thermodynamic and fluid transport properties at the transcritical conditions. Based on the look-up table method we propose a numerical method that satisfies high-order spatial accuracy, spurious-oscillation-free property, and capability of capturing the abrupt variation in thermodynamic properties across the transcritical contact surface. The method introduces artificial mass diffusivity to the continuity and momentum equations in a physically-consistent manner in order to capture the steep transcritical thermodynamic variations robustly while maintaining spurious-oscillation-free property in the velocity field. The pressure evolution equation is derived from the full compressible Navier–Stokes equations and solved instead of solving the total energy equation to achieve the spurious pressure oscillation free property with an arbitrary equation of state including the present look-up table method. Flow problems with and without physical diffusion are employed for the numerical tests to validate the robustness, accuracy, and consistency of the proposed approach.

  15. Effects of flashlight guidance on chest compression performance in cardiopulmonary resuscitation in a noisy environment.

    PubMed

    You, Je Sung; Chung, Sung Phil; Chang, Chul Ho; Park, Incheol; Lee, Hye Sun; Kim, SeungHo; Lee, Hahn Shick

    2013-08-01

    In real cardiopulmonary resuscitation (CPR), noise can arise from instructional voices and environmental sounds in places such as a battlefield and industrial and high-traffic areas. A feedback device using a flashing light was designed to overcome noise-induced stimulus saturation during CPR. This study was conducted to determine whether 'flashlight' guidance influences CPR performance in a simulated noisy setting. We recruited 30 senior medical students with no previous experience of using flashlight-guided CPR to participate in this prospective, simulation-based, crossover study. The experiment was conducted in a simulated noisy situation using a cardiac arrest model without ventilation. Noise such as patrol car and fire engine sirens was artificially generated. The flashlight guidance device emitted light pulses at the rate of 100 flashes/min. Participants also received instructions to achieve the desired rate of 100 compressions/min. CPR performances were recorded with a Resusci Anne mannequin with a computer skill-reporting system. There were significant differences between the control and flashlight groups in mean compression rate (MCR), MCR/min and visual analogue scale. However, there were no significant differences in correct compression depth, mean compression depth, correct hand position, and correctly released compression. The flashlight group constantly maintained the pace at the desired 100 compressions/min. Furthermore, the flashlight group had a tendency to keep the MCR constant, whereas the control group had a tendency to decrease it after 60 s. Flashlight-guided CPR is particularly advantageous for maintaining a desired MCR during hands-only CPR in noisy environments, where metronome pacing might not be clearly heard.

  16. Importance of Tensile Strength on the Shear Behavior of Discontinuities

    NASA Astrophysics Data System (ADS)

    Ghazvinian, A. H.; Azinfar, M. J.; Geranmayeh Vaneghi, R.

    2012-05-01

    In this study, the shear behavior of discontinuities whose two rock walls have distinct compressive strengths was investigated. The designed profiles consisted of regular artificial joints molded from five types of plaster mortars, each representing a distinct uniaxial compressive strength. The compressive strengths of the plaster specimens ranged from 5.9 to 19.5 MPa. These specimens were molded with a regular triangular asperity profile and were designed so as to achieve joint walls with different strength material combinations. The results showed that the shear behavior of discontinuities possessing different joint wall compressive strengths (DDJCS) tested under constant normal load (CNL) conditions is the same as that of discontinuities with identical joint wall strengths, but the shear strength of DDJCS is governed by the minor joint wall compressive strength. In addition, the values predicted by Barton's empirical criterion were found to be greater than the experimental results. The finding indicates that there is a correlation between the joint roughness coefficient (JRC), normal stress, and mechanical strength. It was observed that the mode of failure of asperities is either pure tensile, pure shear, or a combination of both. Therefore, Barton's strength criterion, which considers the compressive strength of joint walls, was modified by substituting the compressive strength with the tensile strength. The validity of the modified criterion was examined by comparing the predicted shear values with the laboratory shear test results reported by Grasselli (Ph.D. thesis n.2404, Civil Engineering Department, EPFL, Lausanne, Switzerland, 2001). These comparisons indicate that the modified criterion can predict the shear strength of joints more precisely.
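
    Barton's empirical criterion referenced above takes the form τ = σₙ·tan(φᵦ + JRC·log₁₀(JCS/σₙ)); the study's modification substitutes a tensile strength for the joint wall compressive strength JCS. The parameter values in this sketch are illustrative, not the paper's data.

```python
import math

def barton_shear(sigma_n, jrc, wall_strength, phi_b_deg):
    # tau = sigma_n * tan(phi_b + JRC * log10(strength / sigma_n))
    angle = math.radians(phi_b_deg + jrc * math.log10(wall_strength / sigma_n))
    return sigma_n * math.tan(angle)

# original criterion: joint wall compressive strength JCS (illustrative values)
tau_compressive = barton_shear(sigma_n=1.0, jrc=10, wall_strength=15.0,
                               phi_b_deg=30)
# modified criterion: substitute the (much lower) tensile strength
tau_tensile = barton_shear(sigma_n=1.0, jrc=10, wall_strength=2.0,
                           phi_b_deg=30)
```

    Because the tensile strength is far smaller than the compressive strength, the modified criterion yields a lower, less optimistic shear strength, consistent with the over-prediction the study observed for the original form.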

  17. A novel method of edema fluid drainage in obstructive lymphedema of limbs by implantation of hydrophobic silicone tubes.

    PubMed

    Olszewski, Waldemar L; Zaleska, Marzanna

    2015-10-01

    Lymphedema of limbs is caused by partial or total obstruction of lymphatic collectors as a consequence of skin and deep soft tissue inflammation, trauma of soft tissues and bones, lymphadenectomy, and irradiation in cancer therapy. According to the statistics of the World Health Organization, around 300 million people are affected by pathologic edema of limbs. Effective treatment of such large cohorts has been a challenge for centuries, and none of the conservative and surgical methods applied so far has proved to restore the shape and function of limbs to normal. Currently, physiotherapy is the therapy of choice, either as the main modality or as a supplement to surgical procedures, which fall into two groups: bridging drainage and excisional techniques. The microsurgical operations can be performed if some parts of the peripheral collecting lymphatics remain patent and partially drain edematous regions. However, in advanced cases of lymphedema, all main lymphatics are obstructed and tissue fluid accumulates in the interstitial spaces, spontaneously forming "blind channels" or "lakes." The only solution would be to create artificial pathways for edema fluid to flow away to the nonobstructed regions where absorption of fluid can take place. The aim of this study was to form artificial pathways for edema fluid flow by subcutaneous implantation of silicone tubes placed along the limb from the lower leg to the lumbar or hypogastric region. In a group of 20 patients with obstructive lymphedema of the lower limbs that developed after lymphadenectomy and irradiation of the pelvis because of uterine cancer with unsuccessful conservative therapy, implantation was done, followed by external compression as intermittent pneumatic compression and elastic support of tissues. Postoperative circumference measurements, lymphoscintigraphy, and ultrasonography of tissues were carried out during 2 years of follow-up.
Calf circumference decreased rapidly from the day of implantation, by a mean of 3% over the following weeks, and stabilized thereafter. Patency of the tubes and accumulation of fluid around them were seen on ultrasonography and lymphoscintigraphy in all cases. No tissue cellular reaction to the silicone tubes was noted. The simplicity of the surgical procedure, the decrease of limb edema, and the lack of tissue reaction to the implant make the method worth applying in advanced stages of lymphedema with large volumes of accumulated tissue edema fluid. Copyright © 2015 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  18. Thermoplastic composites for veneering posterior teeth-a feasibility study.

    PubMed

    Gegauff, Anthony G; Garcia, Jose L; Koelling, Kurt W; Seghi, Robert R

    2002-09-01

    This pilot study was conducted to explore selected commercially-available thermoplastic composites that potentially had physical properties superior to currently available dental systems for restoring esthetic posterior crowns. Polyurethane, polycarbonate, and poly(ethylene/tetrafluoroethylene) (ETFE) composites and unfilled polyurethane specimens were injection molded to produce shapes adaptive to five standardized mechanical tests. The mechanical testing included abrasive wear rate, yield strength, apparent fracture toughness (strength ratio), flexural strength, and compressive strength. Compared to commercially available dental composites, abrasion wear rates were lower for all materials tested, yield strength was greater for the filled polycarbonates and filled polyurethane resins, fracture toughness testing was invalid (strength ratios were calculated for comparison of the pilot test materials), flexural strength was roughly similar except for the filled ETFE which was significantly greater, and compressive strength was lower. Commercially available thermoplastic resin composites, such as polyurethane, demonstrate the potential for development of an artificial crown material which exceeds the mechanical properties of currently available esthetic systems, if compressive strength can be improved.

  19. Raman study of radiation-damaged zircon under hydrostatic compression

    NASA Astrophysics Data System (ADS)

    Nasdala, Lutz; Miletich, Ronald; Ruschel, Katja; Váczi, Tamás

    2008-12-01

    Pressure-induced changes of Raman band parameters of four natural, gem-quality zircon samples with different degrees of self-irradiation damage, and synthetic ZrSiO4 without radiation damage, have been studied under hydrostatic compression in a diamond anvil cell up to ~10 GPa. Radiation-damaged zircon shows similar up-shifts of internal SiO4 stretching modes at elevated pressures as non-damaged ZrSiO4. Only minor changes of band-widths were observed in all cases. This makes it possible to estimate the degree of radiation damage from the width of the ν3(SiO4) band of zircon inclusions in situ, almost independent from potential “fossilized pressures” or compressive strain acting on the inclusions. An application is the non-destructive analysis of gemstones such as corundum or spinel: broadened Raman bands are a reliable indicator of self-irradiation damage in zircon inclusions, whose presence allows one to exclude artificial color enhancement by high-temperature treatment of the specimen.

  20. The first teleautomatic low-voltage prosthesis with multiple therapeutic applications: a new version of the German artificial sphincter system.

    PubMed

    Ruthmann, Olaf; Richter, Sabine; Seifert, Gabriel; Karcz, Wojciech; Goldschmidtboeing, Frank; Lemke, Thomas; Biancuzzi, Giovanni; Woias, Peter; Schmidt, Thomas; Schwarzbach, Stefan; Vodermayer, Bernhard; Hopt, Ulrich; Schrag, Hans-Jurgen

    2010-08-01

    To date, no artificial sphincter prosthesis for urinary or fecal incontinence can also be deployed elsewhere, for example in the upper gastrointestinal tract. Conventional systems are conceptually similar but are constructed specifically for distinct applications and are manually operated. The German Artificial Sphincter System (GASS) II is the evolution of a highly integrative, modular, telemetric sphincter prosthesis with more than one application. Redesigning the pump and integrating multilayer actuators allows us to reduce the input voltage to -10 to +20 V (V(PP) = 30 V). This provides a flow rate of 2.23 mL/min and a counterpressure stability of 260 mbar. Furthermore, multiple applications have become feasible thanks to our standardized connection system, therapy-specific compression units, and application-specific software. These innovations allow us to address not only severe fecal and urinary incontinence, erectile dysfunction, and therapy-resistant reflux disease, but also morbid adiposity within the gamut of therapeutic GASS applications.

  1. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    PubMed

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to determine the difference in chest compression quality between a modified chest compression method using a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants using the modified method formed the smartphone group (33 people); those using the standardized method formed the traditional group (31 people). Both groups used the same practice and evaluation manikins, while the smartphone group additionally used applications running on the Android and iOS operating systems (OS) of two smartphone products (G, i). Measurements were conducted from September 25 to 26, 2012, and data were analyzed with the SPSS WIN 12.0 program. The traditional group achieved a more proper compression depth than the smartphone group (53.77 mm vs. 48.35 mm, p < 0.01) and a higher proportion of proper chest compressions (73.96% vs. 60.51%, p < 0.05). The traditional group also reported higher awareness of chest compression accuracy (3.83 vs. 2.32 points, p < 0.001). In an additional question administered only to the smartphone group, the main reasons given for rating the modified method negatively were hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).

  2. Design Space Approach in Optimization of Fluid Bed Granulation and Tablets Compression Process

    PubMed Central

    Djuriš, Jelena; Medarević, Djordje; Krstić, Marko; Vasiljević, Ivana; Mašić, Ivana; Ibrić, Svetlana

    2012-01-01

    The aim of this study was to optimize fluid bed granulation and tablet compression processes using the design space approach. Type of diluent, binder concentration, temperature during mixing, granulation and drying, spray rate, and atomization pressure were recognized as critical formulation and process parameters. They were varied in the first set of experiments in order to estimate their influence on critical quality attributes, that is, granule characteristics (size distribution, flowability, bulk density, tapped density, Carr's index, Hausner's ratio, and moisture content), using a Plackett-Burman experimental design. Type of diluent and atomization pressure were selected as the most important parameters. In the second set of experiments, a design space for the process parameters (atomization pressure and compression force) and its influence on tablet characteristics was developed. Percent of paracetamol released and tablet hardness were determined as critical quality attributes. Artificial neural networks (ANNs) were applied in order to determine the design space. ANN models showed that atomization pressure mainly influences the dissolution profile, whereas compression force mainly affects tablet hardness. Based on the obtained ANN models, it is possible to predict tablet hardness and the paracetamol release profile for any combination of the analyzed factors. PMID:22919295
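
    The design-space mapping from (atomization pressure, compression force) to (release, hardness) can be sketched with a simple response-surface fit. Note the swap: the study used ANNs, while this sketch uses quadratic least squares for brevity, and all data below are synthetic values merely mimicking the reported trends.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical factors: atomization pressure P (bar), compression force F (kN)
P = rng.uniform(1.0, 3.0, 40)
F = rng.uniform(5.0, 25.0, 40)
# synthetic responses: release driven mainly by P, hardness mainly by F
release = 90 - 8.0 * (P - 1.0) + rng.normal(0, 0.5, 40)
hardness = 40 + 3.0 * (F - 5.0) + rng.normal(0, 0.5, 40)

# quadratic response-surface basis and least-squares fit per response
X = np.column_stack([np.ones_like(P), P, F, P * F, P**2, F**2])
coef_r, *_ = np.linalg.lstsq(X, release, rcond=None)
coef_h, *_ = np.linalg.lstsq(X, hardness, rcond=None)

def predict(p, f, coef):
    # evaluate the fitted surface at one (pressure, force) point
    x = np.array([1.0, p, f, p * f, p**2, f**2])
    return float(x @ coef)

r_pred = predict(2.0, 15.0, coef_r)   # predicted % released
h_pred = predict(2.0, 15.0, coef_h)   # predicted hardness
```

    Once such surfaces (or, as in the study, ANN models) are fitted, any factor combination inside the design space can be screened for acceptable hardness and release without a new experiment.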

  3. Compressive strength of delaminated aerospace composites.

    PubMed

    Butler, Richard; Rhead, Andrew T; Liu, Wenli; Kontis, Nikolaos

    2012-04-28

    An efficient analytical model is described which predicts the value of compressive strain below which buckle-driven propagation of delaminations in aerospace composites will not occur. An extension of this efficient strip model which accounts for propagation transverse to the direction of applied compression is derived. In order to provide validation for the strip model a number of laminates were artificially delaminated producing a range of thin anisotropic sub-laminates made up of 0°, ±45° and 90° plies that displayed varied buckling and delamination propagation phenomena. These laminates were subsequently subject to experimental compression testing and nonlinear finite element analysis (FEA) using cohesive elements. Comparison of strip model results with those from experiments indicates that the model can conservatively predict the strain at which propagation occurs to within 10 per cent of experimental values provided (i) the thin-film assumption made in the modelling methodology holds and (ii) full elastic coupling effects do not play a significant role in the post-buckling of the sub-laminate. With such provision, the model was more accurate and produced fewer non-conservative results than FEA. The accuracy and efficiency of the model make it well suited to application in optimum ply-stacking algorithms to maximize laminate strength.

  4. Cavitation of intercellular spaces is critical to establishment of hydraulic properties of compression wood of Chamaecyparis obtusa seedlings.

    PubMed

    Nakaba, Satoshi; Hirai, Asami; Kudo, Kayo; Yamagishi, Yusuke; Yamane, Kenichi; Kuroda, Katsushi; Nugroho, Widyanto Dwi; Kitin, Peter; Funada, Ryo

    2016-03-01

    When the orientation of the stems of conifers departs from the vertical as a result of environmental influences, conifers form compression wood that results in restoration of verticality. It is well known that intercellular spaces are formed between tracheids in compression wood, but the function of these spaces remains to be clarified. In the present study, we evaluated the impact of these spaces in artificially induced compression wood in Chamaecyparis obtusa seedlings. We monitored the presence or absence of liquid in the intercellular spaces of differentiating xylem by cryo-scanning electron microscopy. In addition, we analysed the relationship between intercellular spaces and the hydraulic properties of the compression wood. Initially, we detected small intercellular spaces with liquid in regions in which the profiles of tracheids were not rounded in transverse surfaces, indicating that the intercellular spaces had originally contained no gases. In the regions where tracheids had formed secondary walls, we found that some intercellular spaces had lost their liquid. Cavitation of intercellular spaces would affect hydraulic conductivity as a consequence of the induction of cavitation in neighbouring tracheids. Our observations suggest that cavitation of intercellular spaces is the critical event that affects not only the functions of intercellular spaces but also the hydraulic properties of compression wood. © The Author 2016. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Numerical Simulation of Creep Characteristic for Composite Rock Mass with Weak Interlayer

    NASA Astrophysics Data System (ADS)

    Li, Jian-guang; Zhang, Zuo-liang; Zhang, Yu-biao; Shi, Xiu-wen; Wei, Jian

    2017-06-01

    Composite rock masses with weak interlayers are widespread in engineering, and it is essential to study their creep behavior, which can cause stability problems in rock engineering and production accidents. However, because sampling is difficult and specimens are often lost or damaged during delivery and machining, enough natural layered composite rock mass samples are rarely available, so indirect test methods have been widely used. In this paper, we used the ANSYS software (a general finite element package produced by ANSYS, Inc.) to carry out numerical simulations based on uniaxial compression creep experiments on artificial composite rock masses with weak interlayers, after fitting the experimental data. The results show that the laws obtained from the numerical simulations and from the experiments are consistent. This confirms that numerical simulation of the creep characteristics of rock masses with ANSYS is feasible, and the method can also be extended to the simulation of weak intercalations in other underground engineering.

  6. Machine-learning in astronomy

    NASA Astrophysics Data System (ADS)

    Hobson, Michael; Graff, Philip; Feroz, Farhan; Lasenby, Anthony

    2014-05-01

    Machine-learning methods may be used to perform many tasks required in the analysis of astronomical data, including: data description and interpretation, pattern recognition, prediction, classification, compression, inference and many more. An intuitive and well-established approach to machine learning is the use of artificial neural networks (NNs), which consist of a group of interconnected nodes, each of which processes information that it receives and then passes this product on to other nodes via weighted connections. In particular, I discuss the first public release of the generic neural network training algorithm, called SkyNet, and demonstrate its application to astronomical problems focusing on its use in the BAMBI package for accelerated Bayesian inference in cosmology, and the identification of gamma-ray bursters. The SkyNet and BAMBI packages, which are fully parallelised using MPI, are available at http://www.mrao.cam.ac.uk/software/.

  7. Revealing physical interaction networks from statistics of collective dynamics

    PubMed Central

    Nitzan, Mor; Casadiego, Jose; Timme, Marc

    2017-01-01

    Revealing physical interactions in complex systems from observed collective dynamics constitutes a fundamental inverse problem in science. Current reconstruction methods require access to a system’s model or dynamical data at a level of detail often not available. We exploit changes in invariant measures, in particular distributions of sampled states of the system in response to driving signals, and use compressed sensing to reveal physical interaction networks. Dynamical observations following driving suffice to infer physical connectivity even if they are temporally disordered, are acquired at large sampling intervals, and stem from different experiments. Testing various nonlinear dynamic processes emerging on artificial and real network topologies indicates high reconstruction quality for existence as well as type of interactions. These results advance our ability to reveal physical interaction networks in complex synthetic and natural systems. PMID:28246630

  8. High Fidelity Simulations for Unsteady Flow Through the Orbiter LH2 Feedline Flowliner

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Kwak, Dochan; Chan, William; Housman, Jeffrey

    2005-01-01

    High fidelity computations were carried out to analyze the orbiter LH2 feedline flowliner. Various computational models were used to characterize the unsteady flow features in the turbopump, including the orbiter Low-Pressure-Fuel-Turbopump (LPFTP) inducer, the orbiter manifold and a test article used to represent the manifold. Unsteady flow originating from the orbiter LPFTP inducer is one of the major contributors to the high-frequency cyclic loading that results in high-cycle fatigue damage to the gimbal flowliners just upstream of the LPFTP. The flow fields for the orbiter manifold and representative test article are computed and analyzed for similarities and differences. An incompressible Navier-Stokes flow solver, INS3D, based on the artificial compressibility method, was used to compute the flow of liquid hydrogen in each test article.
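    The artificial compressibility idea this record mentions (Chorin's formulation, used by solvers such as INS3D) can be sketched in a few lines: the continuity equation is augmented with a pseudo-time pressure derivative, dp/dτ + β ∇·u = 0, and the coupled system is marched in pseudo-time until the velocity field is nearly divergence-free. The grid, parameters, and toy Stokes-like relaxation below are illustrative assumptions, not the INS3D implementation.

    ```python
    import numpy as np

    def lap(f, h):
        """Five-point Laplacian on the interior nodes of a 2-D grid."""
        return (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:] + f[1:-1, :-2]
                - 4.0 * f[1:-1, 1:-1]) / h**2

    def pseudo_time_step(u, v, p, beta, nu, dtau, h):
        """One artificial-compressibility step:
        dp/dtau = -beta * div(u);  du/dtau = nu * lap(u) - grad(p)."""
        div = ((u[2:, 1:-1] - u[:-2, 1:-1])
               + (v[1:-1, 2:] - v[1:-1, :-2])) / (2.0 * h)
        p[1:-1, 1:-1] -= dtau * beta * div
        u[1:-1, 1:-1] += dtau * (nu * lap(u, h)
                                 - (p[2:, 1:-1] - p[:-2, 1:-1]) / (2.0 * h))
        v[1:-1, 1:-1] += dtau * (nu * lap(v, h)
                                 - (p[1:-1, 2:] - p[1:-1, :-2]) / (2.0 * h))
        return u, v, p

    n = 33
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    u = np.repeat(x[:, None], n, axis=1)   # u = x, so div(u) = 1 initially
    v = np.zeros((n, n))
    p = np.zeros((n, n))
    for _ in range(4000):
        u, v, p = pseudo_time_step(u, v, p, beta=1.0, nu=0.1, dtau=2e-3, h=h)
    div = ((u[2:, 1:-1] - u[:-2, 1:-1])
           + (v[1:-1, 2:] - v[1:-1, :-2])) / (2.0 * h)
    print(float(np.abs(div).max()))  # far below the initial value of 1
    ```

    The artificial pressure wave speed √β controls how fast divergence errors are propagated out; production solvers pair this with implicit time stepping and upwinding, which are omitted here for brevity.
    
    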

  9. Resource recycling through artificial lightweight aggregates from sewage sludge and derived ash using boric acid flux to lower co-melting temperature.

    PubMed

    Hu, Shao-Hua; Hu, Shen-Chih; Fu, Yen-Pei

    2012-02-01

    This study focuses on artificial lightweight aggregates (ALWAs) formed from sewage sludge and ash at lowered co-melting temperatures using boric acid as the fluxing agent. The weight percentages of boric acid in the conditioned mixtures of sludge and ash were 13% and 22%, respectively. The ALWA derived from sewage sludge was synthesized under the following conditions: preheating at 400 °C for 0.5 hr and sintering at 850 °C for 1 hr. The analytical results for water adsorption, bulk density, apparent porosity, and compressive strength were 3.88%, 1.05 g/cm3, 3.93%, and 29.7 MPa, respectively. Scanning electron microscope (SEM) images of the ALWA show that the trends in water adsorption and apparent porosity were opposite to those of bulk density. This was due to the inner pores being sealed off by lower-melting-point material at the aggregates' surface. In the case of ash-derived aggregates, water adsorption, bulk density, apparent porosity, and compressive strength were 0.82%, 0.91 g/cm3, 0.82%, and 28.0 MPa, respectively. Both the sludge- and ash-derived aggregates meet the legal standards for ignition loss and soundness in Taiwan for construction or heat insulation materials.

  10. Effects of bubbling operations on a thermally stratified reservoir: implications for water quality amelioration.

    PubMed

    Fernandez, R L; Bonansea, M; Cosavella, A; Monarde, F; Ferreyra, M; Bresciano, J

    2012-01-01

    Artificial thermal mixing of the water column is a common method of addressing water quality problems with the most popular method of destratification being the bubble curtain. The air or oxygen distribution along submerged multiport diffusers is based on similar basic principles as those of outfall disposal systems. Moreover, the disposal of sequestered greenhouse gases into the ocean, as recently proposed by several researchers to mitigate the global warming problem, requires analogous design criteria. In this paper, the influence of a bubble-plume is evaluated using full-scale temperature and water quality data collected in San Roque Reservoir, Argentina. A composite system consisting of seven separated diffusers connected to four 500 kPa compressors was installed at this reservoir by the end of 2008. The original purpose of this air bubble system was to reduce the stratification, so that the water body may completely mix under natural phenomena and remain well oxygenated throughout the year. By using a combination of the field measurements and modelling, this work demonstrates that thermal mixing by means of compressed air may improve water quality; however, if improperly sized or operated, such mixing can also cause deterioration. Any disruption in aeration during the destratification process, for example, may result in a reduction of oxygen levels due to the higher hypolimnetic temperatures. Further, the use of artificial destratification appears to have insignificant influence on reducing evaporation rates in relatively shallow impoundments such as San Roque reservoir.

  11. Methods to Prescribe Particle Motion to Minimize Quadrature Error in Meshfree Methods

    NASA Astrophysics Data System (ADS)

    Templeton, Jeremy; Erickson, Lindsay; Morris, Karla; Poliakoff, David

    2015-11-01

    Meshfree methods are an attractive approach for simulating material systems undergoing large-scale deformation, such as spray break-up, free surface flows, and droplets. Particles, which can be easily moved, are used as nodes and/or quadrature points rather than relying on a fixed mesh. Most methods move particles according to the local fluid velocity, which allows the convection terms in the Navier-Stokes equations to be accounted for easily. However, this is a trade-off against numerical accuracy, as the flow can often move particles into configurations with high quadrature error, and artificial compressibility is often required to prevent particles from forming undesirable regions of high and low concentration. In this work, we consider the other side of the trade-off: moving particles based on reducing numerical error. Methods derived from molecular dynamics show that particles can be moved to minimize a surrogate for the solution error, resulting in substantially more accurate simulations at a fixed cost. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  12. A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition With Security Guarantee in e-Health Applications.

    PubMed

    Ma, JiaLi; Zhang, TanTan; Dong, MingChui

    2015-05-01

    This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: first stage AFD executes efficient lossy compression with high fidelity; second stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from MIT-BIH arrhythmia benchmark database, the proposed method achieves averaged compression ratio (CR) of 17.6-44.5 and percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate of ECG compression method for pervasive e-health applications.
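    The two figures of merit quoted in this record, compression ratio (CR) and percentage root-mean-square difference (PRD), are standard and easy to compute. The sketch below uses a synthetic stand-in signal and made-up sizes, not the AFD+SS compressor itself.

    ```python
    import numpy as np

    def cr(original_bits, compressed_bits):
        """Compression ratio: original size over compressed size."""
        return original_bits / compressed_bits

    def prd(x, x_rec):
        """Percentage root-mean-square difference:
        100 * ||x - x_rec|| / ||x||."""
        x = np.asarray(x, dtype=float)
        x_rec = np.asarray(x_rec, dtype=float)
        return 100.0 * np.linalg.norm(x - x_rec) / np.linalg.norm(x)

    x = np.sin(np.linspace(0.0, 8.0 * np.pi, 1000))  # stand-in for an ECG record
    x_rec = x + np.random.default_rng(0).normal(0.0, 0.01, x.size)

    print(cr(16000, 800))            # hypothetical sizes -> 20.0
    print(round(prd(x, x_rec), 2))   # roughly 1.4 for this noise level
    ```

    A "highly linear PRD-CR relationship", as the abstract claims, means that plotting these two numbers across compression settings gives nearly a straight line, so reconstruction quality can be predicted from the chosen ratio.
    
    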

  13. Chemically-bonded brick production based on burned clay by means of semidry pressing

    NASA Astrophysics Data System (ADS)

    Voroshilov, Ivan; Endzhievskaya, Irina; Vasilovskaya, Nina

    2016-01-01

    We presented a study on the possibility of using the burnt rocks of the Krasnoyarsk Territory for production of chemically-bonded materials in the form of bricks which are so widely used in multistory housing and private house construction. The radiographic analysis of the composition of burnt rock was conducted and a modifier to adjust the composition uniformity was identified. The mixing moisture content was identified and optimal amount at 13-15% was determined. The method of semidry pressing has been chosen. The process of obtaining moldings has been theoretically proved; the advantages of chemically-bonded wall materials compared to ceramic brick were shown. The production of efficient artificial stone based on material burnt rocks, which is comparable with conventionally effective ceramic materials or effective with cell tile was proved, the density of the burned clay-based cell tile makes up to 1630-1785 kg m3, with compressive strength of 13.6-20.0 MPa depending on the compression ratio and cement consumption, frost resistance index is F50, and the thermal conductivity in the masonry is λ = 0,459-0,546 W m * °C. The clear geometric dimensions of pressed products allow the use of the chemically-bonded brick based on burnt clay as a facing brick.

  14. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
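    The core manipulation both patent records describe, nudging quantizer indices that are uncertain by one unit so that adjacent values carry auxiliary bits, can be illustrated with a simple parity scheme. This is an illustrative stand-in, not the patented algorithm.

    ```python
    # Each quantizer index tolerates a +/-1 change, so its parity (least
    # significant bit) can be set to carry one auxiliary bit.
    def embed(indices, bits):
        out = list(indices)
        for i, bit in enumerate(bits):
            if out[i] % 2 != bit:
                # move the index to an adjacent value to match the bit
                out[i] += 1 if out[i] % 2 == 0 else -1
        return out

    def extract(indices, n_bits):
        """Reverse process: read the parity of the first n_bits indices."""
        return [idx % 2 for idx in indices[:n_bits]]

    idx = [10, 13, 7, 42, 99, 5]      # hypothetical quantizer indices
    msg = [1, 0, 1, 1, 0, 0]          # auxiliary bits to hide
    stego = embed(idx, msg)
    print(stego)
    ```

    Because every change is at most one unit, the embedded data stays within the quantizer's inherent uncertainty, which is the property the patents exploit.
    
    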

  15. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  16. Nakagami-based total variation method for speckle reduction in thyroid ultrasound images.

    PubMed

    Koundal, Deepika; Gupta, Savita; Singh, Sukhwinder

    2016-02-01

    A good statistical model is necessary for the reduction in speckle noise. The Nakagami model is more general than the Rayleigh distribution for statistical modeling of speckle in ultrasound images. In this article, the Nakagami-based noise removal method is presented to enhance thyroid ultrasound images and to improve clinical diagnosis. The statistics of log-compressed image are derived from the Nakagami distribution following a maximum a posteriori estimation framework. The minimization problem is solved by optimizing an augmented Lagrange and Chambolle's projection method. The proposed method is evaluated on both artificial speckle-simulated and real ultrasound images. The experimental findings reveal the superiority of the proposed method both quantitatively and qualitatively in comparison with other speckle reduction methods reported in the literature. The proposed method yields an average signal-to-noise ratio gain of more than 2.16 dB over the non-convex regularizer-based speckle noise removal method, 3.83 dB over the Aubert-Aujol model, 1.71 dB over the Shi-Osher model and 3.21 dB over the Rudin-Lions-Osher model on speckle-simulated synthetic images. Furthermore, visual evaluation of the despeckled images shows that the proposed method suppresses speckle noise well while preserving the textures and fine details. © IMechE 2015.

  17. Effects of Time-Compressed Speech Training on Multiple Functional and Structural Neural Mechanisms Involving the Left Superior Temporal Gyrus

    PubMed Central

    Maruyama, Tsukasa; Taki, Yasuyuki; Motoki, Kosuke; Jeong, Hyeonjeong; Kotozaki, Yuka; Nakagawa, Seishu; Iizuka, Kunio; Yokoyama, Ryoichi; Yamamoto, Yuki; Hanawa, Sugiko; Araki, Tsuyoshi; Sakaki, Kohei; Sasaki, Yukako; Magistro, Daniele; Kawashima, Ryuta

    2018-01-01

    Time-compressed speech is an artificial form of rapidly presented speech. Training with time-compressed speech of a second language (TCSSL) leads to adaptation toward TCSSL. Here, we newly investigated the effects of 4 weeks of training with TCSSL on diverse cognitive functions and neural systems, using the fractional amplitude of spontaneous low-frequency fluctuations (fALFF), resting-state functional connectivity (RSFC) with the left superior temporal gyrus (STG), fractional anisotropy (FA), and regional gray matter volume (rGMV) of young adults measured by magnetic resonance imaging. There were no significant differences in the change of performance on measures of cognitive functions or second language skills after training with TCSSL compared with the active control group. However, compared with the active control group, training with TCSSL was associated with increased fALFF, RSFC, and FA and decreased rGMV involving areas in the left STG. These results lacked evidence of a far-transfer effect of time-compressed speech training on a wide range of cognitive functions and second language skills in young adults. However, they demonstrated effects of time-compressed speech training on gray and white matter structures as well as on resting-state intrinsic activity and connectivity involving the left STG, which plays a key role in listening comprehension. PMID:29675038

  18. Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions

    DTIC Science & Technology

    2017-09-01

    Experimental validation of model updating and damage detection via eigenvalue sensitivity methods with artificial boundary conditions, by Matthew D. Bouwense. Distribution unlimited.

  19. On lossy transform compression of ECG signals with reference to deformation of their parameter values.

    PubMed

    Koski, Antti; Tossavainen, Timo; Juhola, Martti

    2004-01-01

    Electrocardiogram (ECG) signals are the most prominent biomedical signal type used in clinical medicine. Their compression is important and widely researched in the medical informatics community. In the previous literature, compression efficacy has been investigated only in terms of how much known or newly developed methods reduced the storage required by the compressed forms of the original ECG signals; sometimes statistical signal evaluations based on, for example, root mean square error were studied. In previous research we developed a refined method for signal compression and tested it jointly with several known techniques on other biomedical signals. Our method of so-called successive approximation quantization, used with wavelets, was one of the most successful in those tests. In this paper, we studied to what extent these lossy compression methods altered the values of medical parameters (medical information) computed from the signals. Since the methods are lossy, some information is lost in compression once a high enough compression ratio is reached. We found that ECG signals sampled at 400 Hz could be compressed to one fourth of their original storage space while the values of their medical parameters changed less than 5% due to compression, which indicates reliable results.

  20. A New Compression Method for FITS Tables

    NASA Technical Reports Server (NTRS)

    Pence, William; Seaman, Rob; White, Richard L.

    2010-01-01

    As the size and number of FITS binary tables generated by astronomical observatories increase, so does the need for a more efficient compression method to reduce the amount of disk space and network bandwidth required to archive and download the data tables. We have developed a new compression method for FITS binary tables that is modeled after the FITS tiled-image compression convention that has been in use for the past decade. Tests of this new method on a sample of FITS binary tables from a variety of current missions show that on average this new compression technique saves about 50% more disk space than simply compressing the whole FITS file with gzip. Other advantages of this method are (1) the compressed FITS table is itself a valid FITS table, (2) the FITS headers remain uncompressed, thus allowing rapid read and write access to the keyword values, and (3) in the common case where the FITS file contains multiple tables, each table is compressed separately and may be accessed without having to uncompress the whole file.
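    The gain described here comes largely from compressing each column of a binary table separately, so that similar byte patterns sit next to each other, rather than gzipping the row-interleaved file. That effect can be demonstrated with a toy table and zlib; the record layout below is an assumption for illustration, not the FITS convention's exact format.

    ```python
    import struct
    import zlib

    # a toy table of (int id, float value, 7-byte name) rows
    rows = [(i, 2.5 * i, b"SRC%04d" % i) for i in range(2000)]

    # row-interleaved serialization, analogous to the raw binary table
    row_bytes = b"".join(struct.pack(">id7s", i, x, s) for i, x, s in rows)
    whole = len(zlib.compress(row_bytes, 9))

    # column-oriented serialization: each column compressed on its own
    col_streams = [
        zlib.compress(b"".join(struct.pack(">i", i) for i, _, _ in rows), 9),
        zlib.compress(b"".join(struct.pack(">d", x) for _, x, _ in rows), 9),
        zlib.compress(b"".join(s for _, _, s in rows), 9),
    ]
    per_column = sum(len(c) for c in col_streams)

    print(per_column < whole)  # column-wise layout usually compresses better
    ```

    Per-column streams also give the random access the abstract highlights: one column (or one table) can be decompressed without touching the rest of the file.
    
    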

  1. An Optimal Seed Based Compression Algorithm for DNA Sequences

    PubMed Central

    Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan

    2016-01-01

    This paper proposes a seed-based lossless compression algorithm for DNA sequences which uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than existing lossless DNA sequence compression algorithms. PMID:27555868
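    The Lempel-Ziv-style substitution the abstract describes, replacing repeats found earlier in the sequence with (offset, length) references, can be sketched as a greedy toy coder. This illustrates the principle only; it is not the paper's seed-and-mismatch algorithm.

    ```python
    def compress(seq, min_len=4):
        """Greedy LZ-style coder: emit (offset, length) for repeats of at
        least min_len seen earlier, otherwise emit a literal base."""
        out, i = [], 0
        while i < len(seq):
            best = None
            for j in range(i):  # scan previously seen text for a match
                k = 0
                while (i + k < len(seq) and j + k < i
                       and seq[j + k] == seq[i + k]):
                    k += 1
                if k >= min_len and (best is None or k > best[1]):
                    best = (j, k)
            if best:
                out.append(best)        # (offset, length) reference
                i += best[1]
            else:
                out.append(seq[i])      # literal base
                i += 1
        return out

    def decompress(tokens):
        s = ""
        for t in tokens:
            if isinstance(t, tuple):
                off, length = t
                s += s[off:off + length]
            else:
                s += t
        return s

    dna = "ACGTACGTACGTTTACGTACGT"
    tokens = compress(dna)
    print(tokens)  # mixture of literals and (offset, length) references
    ```

    The paper's method differs in that it builds the repeat dictionary offline and also records near-repeats with their mismatch positions, which plain LZ substitution cannot represent.
    
    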

  2. Advances in high throughput DNA sequence data compression.

    PubMed

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz

    2016-06-01

    Advances in high throughput sequencing technologies and reduction in cost of sequencing have led to exponential growth in high throughput DNA sequence data. This growth has posed challenges such as storage, retrieval, and transmission of sequencing data. Data compression is used to cope with these challenges. Various methods have been developed to compress genomic and sequencing data. In this article, we present a comprehensive review of compression methods for genome and reads compression. Algorithms are categorized as referential or reference free. Experimental results and comparative analysis of various methods for data compression are presented. Finally, key challenges and research directions in DNA sequence data compression are highlighted.

  3. Stabilization and target delivery of Nattokinase using compression coating.

    PubMed

    Law, D; Zhang, Z

    2007-05-01

    The aim of this work is to develop a new formulation to stabilize the nutraceutical enzyme Nattokinase (NKCP) in powder form and to control its release rate as it passes through the human gastrointestinal tract. NKCP powders were first compacted into a tablet, which was then coated with a mixture of the enteric material Eudragit L100-55 (EL100-55) and hydroxypropylcellulose (HPC) by direct compression. The activity of the enzyme was determined using an amidolytic assay, and its release rates in artificial gastric juice and an intestinal fluid were quantified using a bicinchoninic acid assay. The results show that the activity of NKCP was pressure independent and that, in in vitro experiments, the coated tablets protected NKCP from denaturation in the gastric juice and achieved its controlled release to the intestine.

  4. Novel Data Reduction Based on Statistical Similarity

    DOE PAGES

    Lee, Dongeun; Sim, Alex; Choi, Jaesik; ...

    2016-07-18

    Applications such as scientific simulations and power grid monitoring are generating so much data so quickly that compression is essential to reduce storage requirements or transmission capacity. To achieve better compression, one is often willing to discard some repeated information. These lossy compression methods are primarily designed to minimize the Euclidean distance between the original data and the compressed data, but this measure of distance severely limits either reconstruction quality or compression performance. In this paper, we propose a new class of compression method by redefining the distance measure with a statistical concept known as exchangeability. This approach captures essential features of the data while reducing the storage requirement. We report our design and implementation of such a compression method, named IDEALEM. To demonstrate its effectiveness, we apply it to a set of power grid monitoring data and show that it reduces the volume of data much more than the best known compression methods while maintaining the quality of the compressed data. In these tests, IDEALEM captures extraordinary events in the data, and its compression ratios can far exceed 100.
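    The shift of distance measure described here, accepting a block if its statistics rather than its pointwise values match a stored block, can be sketched as follows. The quantile-distance test, threshold, and synthetic data are illustrative stand-ins for IDEALEM's exchangeability test.

    ```python
    import numpy as np

    def similar(a, b, tol):
        """Crude statistical similarity: compare empirical quantiles."""
        return np.max(np.abs(np.sort(a) - np.sort(b))) < tol

    def reduce_blocks(blocks, tol):
        """Store a block only if no already-stored block is statistically
        similar; otherwise encode a reference to the similar block."""
        stored, encoded = [], []
        for blk in blocks:
            for i, ref in enumerate(stored):
                if similar(blk, ref, tol):
                    encoded.append(i)          # reference, not raw data
                    break
            else:
                encoded.append(len(stored))
                stored.append(blk)
        return stored, encoded

    rng = np.random.default_rng(1)
    blocks = [rng.normal(0.0, 1.0, 64) for _ in range(20)]  # one regime
    blocks += [rng.normal(5.0, 1.0, 64) for _ in range(5)]  # a rare event
    stored, encoded = reduce_blocks(blocks, tol=2.0)
    print(len(stored), "stored of", len(blocks))
    ```

    Note how the rare high-mean regime is kept as its own stored block, mirroring the abstract's point that extraordinary events survive the reduction even at high compression ratios.
    
    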

  5. LagLoc - a new surgical technique for locking plate systems.

    PubMed

    Triana, Miguel; Gueorguiev, Boyko; Sommer, Christoph; Stoffel, Karl; Agarwal, Yash; Zderic, Ivan; Helfen, Tobias; Krieg, James C; Krause, Fabian; Knobe, Matthias; Richards, R Geoff; Lenz, Mark

    2018-06-19

    Treatment of oblique and spiral fractures remains challenging. The aim of this study was to introduce and investigate the new LagLoc technique for locked plating with generation of interfragmentary compression, combining the advantages of the lag-screw and locking-head-screw techniques. An oblique fracture was simulated in artificial diaphyseal bones assigned to three groups for plating with a 7-hole locking compression plate. Group I was plated with three locking screws in holes 1, 4 and 7, with the central screw crossing the fracture line. In group II the central hole was occupied by a lag screw perpendicular to the fracture line. Group III was instrumented using the LagLoc technique as follows: hole 4 was predrilled perpendicularly to the plate, followed by overdrilling of the near cortex and insertion of a locking screw whose head was covered by a holding sleeve to temporarily prevent locking in the plate hole and thereby generate interfragmentary compression; subsequently, the screw head was released and locked in the plate hole, and holes 1 and 7 were occupied by locking screws. Interfragmentary compression in the fracture gap was measured using pressure sensors. All screws in the three groups were tightened to 4 Nm torque. Interfragmentary compression in group I (167 ± 25 N) was significantly lower than in groups II (431 ± 21 N) and III (379 ± 59 N), p ≤ 0.005. The difference in compression between groups II and III was not significant (p = 0.999). The new LagLoc technique offers an alternative tool for generating interfragmentary compression with locking plates by combining the biomechanical advantages of lag-screw and locking-screw fixation. This article is protected by copyright. All rights reserved.

  6. Cultural Resources Inventory of the Montz Freshwater Diversion Project Corridor, St. Charles Parish, Louisiana. Volume 1.

    DTIC Science & Technology

    1986-06-23

    to St. Louis Cathedral; they did so, again citing health hazards. However, in 1846, Wood built a cotton compress at the location despite the presence...price of meat has always been low, and that the artificial restrictions of commerce have prevented the development of other outlets (Robin 1966:114...relationships among the variables. The cultural resource professional will avoid pigeon-holing data gathering operations into unalterable research

  7. Calculation methods for compressible turbulent boundary layers, 1976

    NASA Technical Reports Server (NTRS)

    Bushnell, D. M.; Cary, A. M., Jr.; Harris, J. E.

    1977-01-01

    Equations and closure methods for compressible turbulent boundary layers are discussed. Flow phenomena peculiar to the calculation of these boundary layers were considered, along with calculations of three-dimensional compressible turbulent boundary layers. Procedures for calculating nonsimilar two- and three-dimensional compressible turbulent boundary layers were appended, including finite difference, finite element, and mass-weighted residual methods.

  8. Artificial Satellites Observations Using the Complex of Telescopes of RI "MAO"

    NASA Astrophysics Data System (ADS)

    Sybiryakova, Ye. S.; Shulga, O. V.; Vovk, V. S.; Kaliuzny, M. P.; Bushuev, F. I.; Kulichenko, M. O.; Haloley, M. I.; Chernozub, V. M.

    2017-02-01

    Special methods, instruments, and software for the observation of space objects and the processing of the results were developed. The combined method, which consists of the separate accumulation of images of reference stars and artificial objects, is the main method used in observations of artificial space objects. It is used for observations of artificial objects in all types of orbits.

  9. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression introduces distortion, or error, in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and at what compression ratio the quality of a compressed image is judged poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists read the 99 images (some were duplicates) at four levels: original, 5:1, 10:1, and 15:1 compression. The six readers agreed more than by chance alone, and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm was significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  10. 2D-RBUC for efficient parallel compression of residuals

    NASA Astrophysics Data System (ADS)

    Đurđević, Đorđe M.; Tartalja, Igor I.

    2018-02-01

    In this paper, we present a method for lossless compression of residuals with efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent terrain models. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed a data decompression algorithm for modern GPU architectures, already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades off a small efficiency degradation for a non-negligible compression-ratio benefit (measured up to 91%).

  11. Method and apparatus for holding two separate metal pieces together for welding

    NASA Technical Reports Server (NTRS)

    Mcclure, S. R. (Inventor)

    1980-01-01

    A method of holding two separate metal pieces together for welding is described including the steps of overlapping a portion of one of the metal pieces on a portion of the other metal piece, encasing the overlapping metal piece in a compressible device, drawing the compressible device into an enclosure, and compressing a portion of the compressible device around the overlapping portions of the metal pieces for holding the metal pieces under constant and equal pressure during welding. The preferred apparatus for performing the method utilizes a support mechanism to support the two separate metal pieces in an overlapping configuration; a compressible device surrounding the support mechanism and at least one of the metal pieces, and a compressing device surrounding the compressible device for compressing the compressible device around the overlapping portions of the metal pieces, thus providing constant and equal pressure at all points on the overlapping portions of the metal pieces.

  12. System using data compression and hashing adapted for use for multimedia encryption

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffland, Douglas R

    2011-07-12

    A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
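    The three-stage pipeline described in this record (compress, select, hash) can be sketched in a few lines. This is an illustrative sketch, not the patented system: zlib and SHA-256 stand in for whichever compression and hash functions the actual implementation uses, and the offset/length selection rule is a hypothetical stand-in for the data acquisition module.

```python
import hashlib
import zlib

def derive_keyword(media_bytes: bytes, offset: int = 0, length: int = 64) -> str:
    """Sketch of the pipeline: compress the media signal, select a slice of
    the compressed stream, and hash that slice into a keyword."""
    compressed = zlib.compress(media_bytes)           # data compression module
    selected = compressed[offset:offset + length]     # data acquisition module
    return hashlib.sha256(selected).hexdigest()       # hashing module
```

Because the keyword is derived from the compressed stream rather than the raw signal, the same media always yields the same keyword while small changes in the media change the compressed bytes and hence the key.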

  13. Sequential neural text compression.

    PubMed

    Schmidhuber, J; Heil, S

    1996-01-01

    The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
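    The core idea of combining a predictor with statistical coding (charge each symbol its ideal code length of -log2 p under the predictor's distribution) can be illustrated with a simple adaptive byte-frequency model standing in for the predictive neural net. This is a sketch of the principle, not the authors' network:

```python
import math
from collections import defaultdict

def predictive_code_length(text: str) -> float:
    """Ideal code length in bits when each symbol is coded at cost
    -log2 p(symbol) under an adaptive predictor, as an entropy coder
    would achieve. A real system would replace the Laplace-smoothed
    frequency table with a predictive neural net."""
    counts = defaultdict(lambda: 1)   # one pseudo-count per symbol
    total = 256                       # pseudo-counts for all byte values
    bits = 0.0
    for byte in text.encode():
        bits += -math.log2(counts[byte] / total)  # statistical coding cost
        counts[byte] += 1                         # update predictor online
        total += 1
    return bits
```

The better the predictor, the closer the assigned probabilities track the actual symbols and the shorter the code; this is exactly where a trained neural predictor gains over the dictionary-based Lempel-Ziv schemes, at the price of much slower coding.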

  14. Coil compression in simultaneous multislice functional MRI with concentric ring slice-GRAPPA and SENSE.

    PubMed

    Chu, Alan; Noll, Douglas C

    2016-10-01

    Simultaneous multislice (SMS) imaging is a useful way to accelerate functional magnetic resonance imaging (fMRI). As acceleration becomes more aggressive, an increasingly large number of receive coils is required to separate the slices, which significantly increases the computational burden. We propose a coil compression method that works with concentric ring non-Cartesian SMS imaging and should work with Cartesian SMS as well. We evaluate the method on fMRI scans of several subjects and compare it to standard coil compression methods. The proposed method uses a slice-separation k-space kernel to simultaneously compress coil data into a set of virtual coils. Five subjects were scanned using both non-SMS fMRI and SMS fMRI with three simultaneous slices. The SMS fMRI scans were processed using the proposed method, along with other conventional methods. Code is available at https://github.com/alcu/sms. The proposed method maintained functional activation with fewer virtual coils than standard SMS coil compression methods. Compression of non-SMS fMRI maintained activation with slightly fewer virtual coils than the proposed method, but does not have the acceleration advantages of SMS fMRI. The proposed method is a practical way to compress and reconstruct concentric ring SMS data and improves the preservation of functional activation over standard coil compression methods in fMRI. Magn Reson Med 76:1196-1209, 2016. © 2015 Wiley Periodicals, Inc.
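    For context, the conventional baseline this record compares against is SVD-based coil compression: project the data from the physical coils onto the dominant principal coil combinations. A minimal sketch (shapes and names are illustrative, and this is the standard baseline, not the paper's slice-GRAPPA-kernel method):

```python
import numpy as np

def svd_coil_compress(kspace: np.ndarray, n_virtual: int) -> np.ndarray:
    """Standard SVD coil compression: map (n_samples, n_coils) k-space
    data onto the n_virtual strongest virtual coils."""
    # Right singular vectors give the principal coil combinations.
    _, _, vh = np.linalg.svd(kspace, full_matrices=False)
    compression_matrix = vh[:n_virtual].conj().T   # (n_coils, n_virtual)
    return kspace @ compression_matrix             # (n_samples, n_virtual)
```

When the coil data are well approximated by a low-rank mixture, almost all of the signal energy survives compression to a handful of virtual coils, which is what makes the subsequent reconstruction cheaper.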

  15. Information Compression, Multiple Alignment, and the Representation and Processing of Knowledge in the Brain

    PubMed Central

    Wolff, J. Gerard

    2016-01-01

    The SP theory of intelligence, with its realization in the SP computer model, aims to simplify and integrate observations and concepts across artificial intelligence, mainstream computing, mathematics, and human perception and cognition, with information compression as a unifying theme. This paper describes how abstract structures and processes in the theory may be realized in terms of neurons, their interconnections, and the transmission of signals between neurons. This part of the SP theory—SP-neural—is a tentative and partial model for the representation and processing of knowledge in the brain. Empirical support for the SP theory—outlined in the paper—provides indirect support for SP-neural. In the abstract part of the SP theory (SP-abstract), all kinds of knowledge are represented with patterns, where a pattern is an array of atomic symbols in one or two dimensions. In SP-neural, the concept of a “pattern” is realized as an array of neurons called a pattern assembly, similar to Hebb's concept of a “cell assembly” but with important differences. Central to the processing of information in SP-abstract is information compression via the matching and unification of patterns (ICMUP) and, more specifically, information compression via the powerful concept of multiple alignment, borrowed and adapted from bioinformatics. Processes such as pattern recognition, reasoning and problem solving are achieved via the building of multiple alignments, while unsupervised learning is achieved by creating patterns from sensory information and also by creating patterns from multiple alignments in which there is a partial match between one pattern and another. It is envisaged that, in SP-neural, short-lived neural structures equivalent to multiple alignments will be created via an inter-play of excitatory and inhibitory neural signals. 
It is also envisaged that unsupervised learning will be achieved by the creation of pattern assemblies from sensory information and from the neural equivalents of multiple alignments, much as in the non-neural SP theory—and significantly different from the “Hebbian” kinds of learning which are widely used in the kinds of artificial neural network that are popular in computer science. The paper discusses several associated issues, with relevant empirical evidence. PMID:27857695

  16. Information Compression, Multiple Alignment, and the Representation and Processing of Knowledge in the Brain.

    PubMed

    Wolff, J Gerard

    2016-01-01

    The SP theory of intelligence, with its realization in the SP computer model, aims to simplify and integrate observations and concepts across artificial intelligence, mainstream computing, mathematics, and human perception and cognition, with information compression as a unifying theme. This paper describes how abstract structures and processes in the theory may be realized in terms of neurons, their interconnections, and the transmission of signals between neurons. This part of the SP theory, SP-neural, is a tentative and partial model for the representation and processing of knowledge in the brain. Empirical support for the SP theory, outlined in the paper, provides indirect support for SP-neural. In the abstract part of the SP theory (SP-abstract), all kinds of knowledge are represented with patterns, where a pattern is an array of atomic symbols in one or two dimensions. In SP-neural, the concept of a "pattern" is realized as an array of neurons called a pattern assembly, similar to Hebb's concept of a "cell assembly" but with important differences. Central to the processing of information in SP-abstract is information compression via the matching and unification of patterns (ICMUP) and, more specifically, information compression via the powerful concept of multiple alignment, borrowed and adapted from bioinformatics. Processes such as pattern recognition, reasoning and problem solving are achieved via the building of multiple alignments, while unsupervised learning is achieved by creating patterns from sensory information and also by creating patterns from multiple alignments in which there is a partial match between one pattern and another. It is envisaged that, in SP-neural, short-lived neural structures equivalent to multiple alignments will be created via an interplay of excitatory and inhibitory neural signals. 
It is also envisaged that unsupervised learning will be achieved by the creation of pattern assemblies from sensory information and from the neural equivalents of multiple alignments, much as in the non-neural SP theory, and significantly different from the "Hebbian" kinds of learning which are widely used in the kinds of artificial neural network that are popular in computer science. The paper discusses several associated issues, with relevant empirical evidence.

  17. Image splitting and remapping method for radiological image compression

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.
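    The abstract does not spell out the decomposition, but one plausible toy reading of "image splitting and gray-level remapping" is to partition a high-contrast image at a gray-level threshold into two narrow-range sub-images that a downstream coder handles more easily. The thresholding rule, the +1 offset, and the function names below are all assumptions made for illustration, not the authors' algorithm:

```python
import numpy as np

def split_and_remap(img: np.ndarray, threshold: int):
    """Split an image into a low-range part and a remapped high-range part.
    The +1 offset marks occupied high-range pixels so the split is lossless."""
    low = np.where(img < threshold, img, 0)                    # low gray levels
    high = np.where(img >= threshold, img - threshold + 1, 0)  # remapped levels
    return low, high

def merge(low: np.ndarray, high: np.ndarray, threshold: int) -> np.ndarray:
    """Inverse operation: recombine the two sub-images exactly."""
    return np.where(high > 0, high + threshold - 1, low)
```

Each sub-image spans a reduced gray-level range, so a transform or vector-quantization coder sees lower-entropy input than the original wide-range image.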

  18. Elliptic flow computation by low Reynolds number two-equation turbulence models

    NASA Technical Reports Server (NTRS)

    Michelassi, V.; Shih, T.-H.

    1991-01-01

    A detailed comparison of ten low-Reynolds-number k-epsilon models is carried out. The flow solver, based on an implicit approximate factorization method, is designed for incompressible, steady two-dimensional flows. The conservation of mass is enforced by the artificial compressibility approach, and the computational domain is discretized using centered finite differences. The turbulence model predictions of the flow past a hill are compared with experiments at Re = 10^6. The effects of the grid spacing together with the numerical efficiency of the various formulations are investigated. The results show that the models provide a satisfactory prediction of the flow field in the presence of a favorable pressure gradient, while the accuracy rapidly deteriorates when a strong adverse pressure gradient is encountered. A newly proposed model form that does not explicitly depend on the wall distance seems promising for application to complex geometries.

  19. Calculation of afterbody flows with a composite velocity formulation

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Rubin, S. G.; Khosla, P. K.

    1983-01-01

    A recently developed technique for numerical solution of the Navier-Stokes equations for subsonic, laminar flows is investigated. It is extended here to allow for the computation of transonic and turbulent flows. The basic approach involves a multiplicative composite of the appropriate velocity representations for the inviscid and viscous flow regions. The resulting equations are structured so that far from the surface of the body the momentum equations lead to the Bernoulli equation for the pressure, while the continuity equation reduces to the familiar potential equation. Close to the body surface, the governing equations and solution techniques are characteristic of those describing interacting boundary layers. The velocity components are computed with a coupled strongly implicit procedure. For transonic flows the artificial compressibility method is used to treat supersonic regions. Calculations are made for both laminar and turbulent flows over axisymmetric afterbody configurations. Present results compare favorably with other numerical solutions and/or experimental data.

  20. High performance sandwich structured Si thin film anodes with LiPON coating

    NASA Astrophysics Data System (ADS)

    Luo, Xinyi; Lang, Jialiang; Lv, Shasha; Li, Zhengcao

    2018-06-01

    The sandwich structured silicon thin film anodes with lithium phosphorus oxynitride (LiPON) coating are synthesized via the radio frequency magnetron sputtering method, whereas the thicknesses of both layers are in the nanometer range, i.e. between 50 and 200 nm. In this sandwich structure, the separator simultaneously functions as a flexible substrate, while the LiPON layer is regarded as a protective layer. This sandwich structure combines the advantages of flexible substrate, which can help silicon release the compressive stress, and the LiPON coating, which can provide a stable artificial solid-electrolyte interphase (SEI) film on the electrode. As a result, the silicon anodes are protected well, and the cells exhibit high reversible capacity, excellent cycling stability and good rate capability. All the results demonstrate that this sandwich structure can be a promising option for high performance Si thin film lithium ion batteries.

  1. Astrometric Search Method for Individually Resolvable Gravitational Wave Sources with Gaia

    NASA Astrophysics Data System (ADS)

    Moore, Christopher J.; Mihaylov, Deyan P.; Lasenby, Anthony; Gilmore, Gerard

    2017-12-01

    Gravitational waves (GWs) cause the apparent position of distant stars to oscillate with a characteristic pattern on the sky. Astrometric measurements (e.g., those made by Gaia) provide a new way to search for GWs. The main difficulty facing such a search is the large size of the data set; Gaia observes more than one billion stars. In this Letter the problem of searching for GWs from individually resolvable supermassive black hole binaries using astrometry is addressed for the first time; it is demonstrated how the data set can be compressed by a factor of more than 10^6, with a loss of sensitivity of less than 1%. This technique was successfully used to recover artificially injected GW signals from mock Gaia data and to assess the GW sensitivity of Gaia. Throughout the Letter the complementarity of Gaia and pulsar timing searches for GWs is highlighted.

  2. High performance sandwich structured Si thin film anodes with LiPON coating

    NASA Astrophysics Data System (ADS)

    Luo, Xinyi; Lang, Jialiang; Lv, Shasha; Li, Zhengcao

    2018-04-01

    The sandwich structured silicon thin film anodes with lithium phosphorus oxynitride (LiPON) coating are synthesized via the radio frequency magnetron sputtering method, whereas the thicknesses of both layers are in the nanometer range, i.e. between 50 and 200 nm. In this sandwich structure, the separator simultaneously functions as a flexible substrate, while the LiPON layer is regarded as a protective layer. This sandwich structure combines the advantages of flexible substrate, which can help silicon release the compressive stress, and the LiPON coating, which can provide a stable artificial solid-electrolyte interphase (SEI) film on the electrode. As a result, the silicon anodes are protected well, and the cells exhibit high reversible capacity, excellent cycling stability and good rate capability. All the results demonstrate that this sandwich structure can be a promising option for high performance Si thin film lithium ion batteries.

  3. An efficient nonlinear relaxation technique for the three-dimensional, Reynolds-averaged Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Edwards, Jack R.; Mcrae, D. S.

    1993-01-01

    An efficient implicit method for the computation of steady, three-dimensional, compressible Navier-Stokes flowfields is presented. A nonlinear iteration strategy based on planar Gauss-Seidel sweeps is used to drive the solution toward a steady state, with approximate factorization errors within a crossflow plane reduced by the application of a quasi-Newton technique. A hybrid discretization approach is employed, with flux-vector splitting utilized in the streamwise direction and central differences with artificial dissipation used for the transverse fluxes. Convergence histories and comparisons with experimental data are presented for several 3-D shock-boundary layer interactions. Both laminar and turbulent cases are considered, with turbulent closure provided by a modification of the Baldwin-Barth one-equation model. For the problems considered (175,000-325,000 mesh points), the algorithm provides steady-state convergence in 900-2000 CPU seconds on a single processor of a Cray Y-MP.

  4. High Fidelity Simulations of Unsteady Flow through Turbopumps and Flowliners

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin C.; Kwak, Dochan; Chan, William; Housman, Jeff

    2006-01-01

    High fidelity computations were carried out to analyze the orbiter LH2 feedline flowliner. Computations were performed on the Columbia platform, a 10,240-processor supercluster consisting of 20 Altix nodes with 512 processors each. Various computational models were used to characterize the unsteady flow features in the turbopump, including the orbiter Low-Pressure-Fuel-Turbopump (LPFTP) inducer, the orbiter manifold, and a test article used to represent the manifold. Unsteady flow originating from the orbiter LPFTP inducer is one of the major contributors to the high frequency cyclic loading that results in high cycle fatigue damage to the gimbal flowliners just upstream of the LPFTP. The flow fields for the orbiter manifold and representative test article are computed and analyzed for similarities and differences. The incompressible Navier-Stokes flow solver INS3D, based on the artificial compressibility method, was used to compute the flow of liquid hydrogen in each test article.

  5. Calculations of steady and transient channel flows with a time-accurate L-U factorization scheme

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.

    1991-01-01

    Calculations of steady and unsteady, transonic, turbulent channel flows with a time accurate, lower-upper (L-U) factorization scheme are presented. The L-U factorization scheme is formally second-order accurate in time and space, and it is an extension of the steady state flow solver (RPLUS) used extensively to solve compressible flows. A time discretization method and the implementation of a consistent boundary condition specific to the L-U factorization scheme are also presented. The turbulence is described by the Baldwin-Lomax algebraic turbulence model. The present L-U scheme yields stable numerical results with the use of much smaller artificial dissipations than those used in the previous steady flow solver for steady and unsteady channel flows. The capability to solve time dependent flows is shown by solving very weakly excited and strongly excited, forced oscillatory, channel flows.

  6. A technique to remove the tensile instability in weakly compressible SPH

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyang; Yu, Peng

    2018-01-01

    When smoothed particle hydrodynamics (SPH) is directly applied to the numerical simulation of transient viscoelastic free surface flows, a numerical problem called tensile instability arises. In this paper, we develop an optimized particle shifting technique to remove the tensile instability in SPH. The basic equations governing free surface flow of an Oldroyd-B fluid are considered and approximated by an improved SPH scheme, which includes a correction of the kernel gradient and the introduction of a Rusanov flux into the continuity equation. To verify the effectiveness of the optimized particle shifting technique in removing the tensile instability, three test cases are simulated: the impacting drop, the injection molding of a C-shaped cavity, and the extrudate swell. The numerical results obtained are compared with those simulated by other numerical methods, and a comparison among different numerical techniques for removing the tensile instability (e.g., the artificial stress method) is further performed. All numerical results agree well with the available data.

  7. A matrix-form GSM-CFD solver for incompressible fluids and its application to hemodynamics

    NASA Astrophysics Data System (ADS)

    Yao, Jianyao; Liu, G. R.

    2014-10-01

    A GSM-CFD solver for incompressible flows is developed based on the gradient smoothing method (GSM). A matrix-form algorithm and a corresponding data structure for GSM are devised to efficiently approximate the spatial gradients of field variables using the gradient smoothing operation. The calculated gradient values on various test fields show that the proposed GSM exactly reproduces a linear field and is second-order accurate on all kinds of meshes. The GSM is found to be much more robust to mesh deformation and therefore more suitable for problems with complicated geometries. Integrated with the artificial compressibility approach, the GSM is extended to solve incompressible flows. As an example, a flow simulation of the carotid bifurcation is carried out to show the effectiveness of the proposed GSM-CFD solver. The blood is modeled as an incompressible Newtonian fluid, and the vessel is treated as a rigid wall in this paper.

  8. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    NASA Astrophysics Data System (ADS)

    Levenets, A. V.

    2018-01-01

    The task of compressing measurement data remains urgent for information-measurement systems. This paper proposes the basic principles necessary for designing highly effective systems for the compression of telemetric information. The basis of the proposed principles is the representation of a telemetric frame as a single information space in which existing correlations can be found. Methods of data transformation and compression algorithms realizing the proposed principles are described. The compression ratio of the proposed compression algorithm is about 1.8 times higher than that of a classic algorithm. The results of the study of these methods and algorithms thus show their good prospects.
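    The general idea of treating the frame stack as one information space and exploiting its correlations can be illustrated with a simple transform: group each channel's samples together, delta-encode along time so slowly varying channels become runs of small values, then hand the result to a generic entropy coder. This is an illustrative sketch in the spirit of the record, not the paper's actual algorithm:

```python
import zlib
import numpy as np

def compress_telemetry(frames: np.ndarray) -> bytes:
    """frames: (n_frames, n_channels) array of telemetry samples.
    Transpose so each channel is contiguous, delta-encode along time to
    expose temporal correlation, then compress with a generic coder."""
    by_channel = frames.T.astype(np.int32)           # (channels, frames)
    deltas = np.diff(by_channel, axis=1, prepend=0)  # mostly small values
    return zlib.compress(deltas.tobytes())
```

Slowly varying channels turn into long runs of zeros and ones after the delta step, which a byte-level coder like zlib compresses far better than the raw interleaved frames.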

  9. Load Transmission Through Artificial Hip Joints due to Stress Wave Loading

    NASA Astrophysics Data System (ADS)

    Tanabe, Y.; Uchiyama, T.; Yamaoka, H.; Ohashi, H.

    Since wear of the polyethylene (Ultra High Molecular Weight Polyethylene, or UHMWPE) acetabular cup is considered to be the main cause of loosening of the artificial hip joint, cross-linked UHMWPE with high durability to wear has been developed. This paper deals with impact load transmission through the complex of an artificial hip joint consisting of a UHMWPE acetabular cup (or liner), a metallic femoral head, and a stem. Impact compressive tests on the complex were performed using the split-Hopkinson pressure bar apparatus. To investigate the effects of liner material (conventional or cross-linked UHMWPE), liner size, setting angle, and test temperature on force transmission, the impact load transmission ratio (ILTR) was experimentally determined. The ILTR decreased with an increase of the setting angle, independent of liner material, liner size, and test temperature. The ILTR values at 37°C were larger than those at 24°C and 60°C. The ILTR also appeared to be affected by the type of material as well as the size of the liner.

  10. Experimental spinal cord trauma: a review of mechanically induced spinal cord injury in rat models.

    PubMed

    Abdullahi, Dauda; Annuar, Azlina Ahmad; Mohamad, Masro; Aziz, Izzuddin; Sanusi, Junedah

    2017-01-01

    It has been shown that animal spinal cord compression (using methods such as clips, balloons, spinal cord strapping, or calibrated forceps) mimics the persistent spinal canal occlusion that is common in human spinal cord injury (SCI). These methods can be used to investigate the effects of compression or to know the optimal timing of decompression (as duration of compression can affect the outcome of pathology) in acute SCI. Compression models involve prolonged cord compression and are distinct from contusion models, which apply only transient force to inflict an acute injury to the spinal cord. While the use of forceps to compress the spinal cord is a common choice due to it being inexpensive, it has not been critically assessed against the other methods to determine whether it is the best method to use. To date, there is no available review specifically focused on the current compression methods of inducing SCI in rats; thus, we performed a systematic and comprehensive publication search to identify studies on experimental spinalization in rat models, and this review discusses the advantages and limitations of each method.

  11. Quality of reconstruction of compressed off-axis digital holograms by frequency filtering and wavelets.

    PubMed

    Cheremkhin, Pavel A; Kurbatova, Ekaterina A

    2018-01-01

    Compression of digital holograms can significantly help with the storage and transmission of objects and data in 2D and 3D form, and with their reconstruction. Wavelet-based compression of standard images achieves high compression ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, direct application of wavelets does not yield high compression; however, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms for compression of off-axis digital holograms is considered. The combined technique is based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on reconstruction of images from the compressed holograms are performed, and a comparative analysis of the applicability of various wavelets and of methods for additional compression of wavelet coefficients is carried out. Optimum compression parameters for these methods can be estimated. The size of the holographic data was reduced by a factor of up to 190.
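    The "additional compression of wavelet coefficients by thresholding and quantization" step can be sketched independently of the particular wavelet. The sketch below assumes the transform has already been applied; the keep fraction and level count are illustrative parameters, not values from the paper:

```python
import numpy as np

def threshold_and_quantize(coeffs, keep_fraction=0.1, levels=256):
    """Zero all but the largest-magnitude fraction of coefficients
    (thresholding), then map survivors onto a small set of integer
    levels (uniform quantization). Reconstruct approximately as
    quantized * scale."""
    mags = np.abs(coeffs).ravel()
    cutoff = np.quantile(mags, 1.0 - keep_fraction)        # magnitude cutoff
    kept = np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)  # thresholding
    max_mag = np.abs(kept).max()
    scale = max_mag / (levels // 2 - 1) if max_mag > 0 else 1.0
    quantized = np.round(kept / scale).astype(np.int16)     # quantization
    return quantized, scale
```

After this step the coefficient array is sparse and integer-valued, so a standard entropy coder compresses it far better than the dense floating-point transform output.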

  12. A streamlined artificial variable free version of simplex method.

    PubMed

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some notable benefits over the traditional simplex method: it does not need any artificial variables or artificial constraints, and it can start from any feasible or infeasible basis of an LP. The method follows the same pivoting sequence as simplex phase 1 but without any explicit description of artificial variables, which also makes it space efficient. A dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, our methods give the user full freedom to choose whether to start with the primal or the dual artificial-variable-free version, without any reformulation of the LP structure. Last but not least, the method provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

  13. A Streamlined Artificial Variable Free Version of Simplex Method

    PubMed Central

    Inayatullah, Syed; Touheed, Nasir; Imtiaz, Muhammad

    2015-01-01

    This paper proposes a streamlined form of the simplex method which provides some notable benefits over the traditional simplex method: it does not need any artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. The method follows the same pivoting sequence as simplex phase 1, but without any explicit description of artificial variables, which also makes it space efficient. A dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem whose initial basis is both primal and dual infeasible, these methods give the user full freedom to choose whether to start with the primal or the dual artificial-free version, without any reformulation of the LP structure. Last but not least, the method provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before optimality achievement. PMID:25767883
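
    For context on the tableau pivoting that artificial-variable techniques (and the streamlined variant above) build on, here is a minimal classical primal simplex for maximization with inequality constraints and a nonnegative right-hand side, so the slack basis is feasible and no artificial variables are needed at all. This is not the authors' algorithm, whose details the abstract does not give; it only illustrates the entering/leaving pivot mechanics, and the example LP is invented.

```python
import numpy as np

def simplex_max(c, A, b):
    """Maximize c@x subject to A@x <= b, x >= 0, assuming b >= 0
    (slack basis is then feasible, so no artificial variables are required)."""
    m, n = A.shape
    # tableau: [A | I | b] with objective row [-c | 0 | 0]
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[m, :n] = -np.asarray(c, float)
    basis = list(range(n, n + m))
    while True:
        j = int(np.argmin(T[m, :-1]))            # entering column (Dantzig rule)
        if T[m, j] >= -1e-9:
            break                                 # no negative reduced cost: optimal
        col = T[:m, j]
        if np.all(col <= 1e-9):
            raise ValueError("unbounded LP")
        safe = np.where(col > 1e-9, col, 1.0)
        ratios = np.where(col > 1e-9, T[:m, -1] / safe, np.inf)
        i = int(np.argmin(ratios))                # leaving row (minimum-ratio test)
        T[i] /= T[i, j]                           # pivot
        for r in range(m + 1):
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[m, -1]

# maximize 3x + 2y s.t. x + y <= 4, x + 3y <= 6, x, y >= 0
x, val = simplex_max([3, 2], np.array([[1., 1.], [1., 3.]]), [4., 6.])
```

    When the initial basis is infeasible, the classical remedy is phase 1 with artificial variables; the paper's contribution is to follow the same pivot sequence without ever materializing them.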

  14. Supersaturation of Nitrogen Gas caused by Artificial Aeration in Reservoirs.

    DTIC Science & Technology

    1982-09-01

    (The record excerpt for this report is extraction residue: fragments of the list of figures, e.g. "B-16 ... compressors at Skinner Reservoir" and "B-17 Compressed air bubbling up the face of the dam at Vail Reservoir", followed by columns of tabulated dissolved-gas measurements.)

  15. Numerical solution of the incompressible Navier-Stokes equations. Ph.D. Thesis - Stanford Univ., Mar. 1989

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.

    1990-01-01

    The current work was initiated in an effort to obtain an efficient, accurate, and robust algorithm for the numerical solution of the incompressible Navier-Stokes equations in two- and three-dimensional generalized curvilinear coordinates for both steady-state and time-dependent flow problems. This is accomplished with the use of the method of artificial compressibility and a high-order flux-difference splitting technique for the differencing of the convective terms. Time accuracy is obtained in the numerical solutions by subiterating the equations in pseudo-time for each physical time step. The system of equations is solved with a line-relaxation scheme which allows the use of very large pseudo-time steps, leading to fast convergence for steady-state problems as well as for the subiterations of time-dependent problems. Numerous laminar test flow problems are computed and presented with a comparison against analytically known solutions or experimental results. These include the flow in a driven cavity, the flow over a backward-facing step, the steady and unsteady flow over a circular cylinder, flow over an oscillating plate, flow through a one-dimensional inviscid channel with oscillating back pressure, the steady-state flow through a square duct with a 90 degree bend, and the flow through an artificial heart configuration with moving boundaries. An adequate comparison with the analytical or experimental results is obtained in all cases. Numerical comparisons of the upwind differencing with central differencing plus artificial dissipation indicate that the upwind differencing provides a much more robust algorithm, which requires significantly less computing time. The time-dependent problems require on the order of 10 to 20 subiterations, indicating that the elliptic nature of the problem does require a substantial amount of computing effort.

  16. Method for compression molding of thermosetting plastics utilizing a temperature gradient across the plastic to cure the article

    NASA Technical Reports Server (NTRS)

    Heier, W. C. (Inventor)

    1974-01-01

    A method is described for compression molding of a thermosetting plastic composition. Heat is applied to the compressed charge in a mold cavity and adjusted to hold the molding temperature at the interface of the cavity surface and the compressed compound, producing a thermal front. This thermal front advances into the evacuated compound at approximately right angles to the compression load and toward a thermal fence formed at the opposite surface of the compressed compound.

  17. An Efficient, Lossless Database for Storing and Transmitting Medical Images

    NASA Technical Reports Server (NTRS)

    Fenstermacher, Marc J.

    1998-01-01

    This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set redundancy refers to the common information that exists in a set of similar images. SRC methods take advantage of this common information and can achieve improved compression of similar images by reducing their set redundancy. The current research resulted in the development of three new lossless SRC compression methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination), and MaxGBA (Max-Guided Bit Allocation).
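
    The central idea of set redundancy can be shown with a toy experiment: store one reference image (here the per-pixel median, in the spirit of the median-aided method, though the actual MARS/MAZE/MaxGBA algorithms are not specified in this abstract) and compress only each image's residual. The synthetic image set and the use of zlib as the back-end coder are illustrative assumptions.

```python
import zlib

import numpy as np

rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # structure shared by the set
images = [np.clip(base.astype(int) + rng.integers(-2, 3, base.shape), 0, 255).astype(np.uint8)
          for _ in range(8)]                                  # similar images: base + small noise

# baseline: compress every image independently
independent = sum(len(zlib.compress(im.tobytes())) for im in images)

# set redundancy: store the median image once, then only small residuals
reference = np.median(np.stack(images), axis=0).astype(int)
residuals = [(im.astype(int) - reference + 128).astype(np.uint8) for im in images]
src = len(zlib.compress(reference.astype(np.uint8).tobytes())) \
    + sum(len(zlib.compress(r.tobytes())) for r in residuals)
```

    The residuals use only a handful of symbol values, so they compress far better than the originals, and the scheme stays lossless because each image is exactly `reference + residual - 128`.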

  18. Fast lossless compression via cascading Bloom filters

    PubMed Central

    2014-01-01

    Background Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. Results We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Conclusions Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. 
In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly. PMID:25252952
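
    The encode/decode idea above can be sketched with a single Bloom filter: hash the reads into the filter, then decode by querying every read-length window of the reference. Bloom filters have no false negatives, so every original read is recovered (possibly alongside a few false-positive windows, which the paper's cascade of filters and other machinery address; those parts, and all the sizes below, are simplified assumptions rather than BARCODE itself).

```python
import hashlib
import random

class Bloom:
    def __init__(self, m=1 << 16, k=4):
        self.bits, self.m, self.k = bytearray(m // 8), m, k
    def _hashes(self, item):
        for i in range(self.k):                      # k independent hash positions
            d = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(d[:4], "big") % self.m
    def add(self, item):
        for h in self._hashes(item):
            self.bits[h // 8] |= 1 << (h % 8)
    def __contains__(self, item):
        return all(self.bits[h // 8] >> (h % 8) & 1 for h in self._hashes(item))

random.seed(1)
ref = "".join(random.choice("ACGT") for _ in range(300))     # toy reference genome
L = 20
reads = [ref[i:i + L] for i in random.sample(range(300 - L), 25)]  # error-free reads

bf = Bloom()
for r in reads:                                   # "encode": no alignment, just hashing
    bf.add(r)

# "decode": slide a read-length window over the reference and query the filter
recovered = {ref[i:i + L] for i in range(300 - L + 1) if ref[i:i + L] in bf}
```

    Because membership tests replace alignment, encoding is a single hashing pass over the reads, which is where the order-of-magnitude speedup over alignment-based compressors comes from.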

  19. Fast lossless compression via cascading Bloom filters.

    PubMed

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. 
In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space slightly.

  20. METHOD OF FIXING NITROGEN FOR PRODUCING OXIDES OF NITROGEN

    DOEpatents

    Harteck, P.; Dondes, S.

    1959-08-01

    A method is described for fixing nitrogen from air by compressing the air, irradiating the compressed air in a nuclear reactor, cooling to remove NO/ sub 2/, compressing the cooled gas, further cooling to remove N/sub 2/O and recirculating the cooled compressed air to the reactor.

  1. Development of a High-Order Navier-Stokes Solver Using Flux Reconstruction to Simulate Three-Dimensional Vortex Structures in a Curved Artery Model

    NASA Astrophysics Data System (ADS)

    Cox, Christopher

    Low-order numerical methods are widespread in academic solvers and ubiquitous in industrial solvers due to their robustness and usability. High-order methods are less robust and more complicated to implement; however, they exhibit low numerical dissipation and have the potential to improve the accuracy of flow simulations at a lower computational cost when compared to low-order methods. This motivates our development of a high-order compact method using Huynh's flux reconstruction scheme for solving unsteady incompressible flow on unstructured grids. We use Chorin's classic artificial compressibility formulation with dual time stepping to solve unsteady flow problems. In 2D, an implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time stepping scheme using both steady and unsteady incompressible flow problems. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation. The high-order solver is extended to 3D and parallelized using MPI. Due to its simplicity, time marching for 3D problems is done explicitly. The feasibility of using the current implicit time stepping scheme for large scale three-dimensional problems with high-order polynomial basis still remains to be seen. We directly use the aforementioned numerical solver to simulate pulsatile flow of a Newtonian blood-analog fluid through a rigid 180-degree curved artery model. One of the most physiologically relevant forces within the cardiovascular system is the wall shear stress. 
This force is important because atherosclerotic regions are strongly correlated with curvature and branching in the human vasculature, where the shear stress is both oscillatory and multidirectional. Also, the combined effect of curvature and pulsatility in cardiovascular flows produces unsteady vortices. The aim of this research, as it relates to cardiovascular fluid dynamics, is to predict the spatial and temporal evolution of vortical structures generated by secondary flows, as well as to assess the correlation between multiple vortex pairs and wall shear stress. We use a physiologically relevant (pulsatile) flow rate and generate results using both fully developed and uniform entrance conditions, the latter being motivated by the fact that flow upstream of a curved artery may not have sufficient straight entrance length to become fully developed. Under the two pulsatile inflow conditions, we characterize the morphology and evolution of various vortex pairs and their subsequent effect on relevant haemodynamic wall shear stress metrics.

  2. Biodiesel from plant seed oils as an alternate fuel for compression ignition engines-a review.

    PubMed

    Vijayakumar, C; Ramesh, M; Murugesan, A; Panneerselvam, N; Subramaniam, D; Bharathiraja, M

    2016-12-01

    The modern scenario reveals that the world is facing an energy crisis due to the dwindling sources of fossil fuels. Environment protection agencies are increasingly concerned about atmospheric pollution due to the burning of fossil fuels. Alternative fuel research is gaining momentum for these reasons. Plant seed oils (vegetable oils) are cleaner, sustainable, and renewable, so they can be the most suitable alternative fuel for compression ignition (CI) engines. This paper reviews the availability of different types of plant seed oils, several methods for the production of biodiesel from vegetable oils, and their properties. The different types of oils considered in this review are cashew nut shell liquid (CNSL) oil, ginger oil, eucalyptus oil, rice bran oil, Calophyllum inophyllum, hazelnut oil, sesame oil, clove stem oil, sardine oil, honge oil, polanga oil, mahua oil, rubber seed oil, cotton seed oil, neem oil, jatropha oil, egunsi melon oil, shea butter, linseed oil, Mohr oil, sea lemon oil, pumpkin oil, tobacco seed oil, jojoba oil, and mustard oil. Several methods for the production of biodiesel, including transesterification, pre-treatment, pyrolysis, and water emulsion, are discussed. The various fuel properties considered for review, such as specific gravity, viscosity, calorific value, flash point, and fire point, are presented. The review also covers the advantages, limitations, performance, and emission characteristics of engines using plant seed oil biodiesel. Finally, the modeling and optimization of engines for various biofuels, with different input and output parameters, using artificial neural networks, response surface methodology, and the Taguchi method are included.

  3. JPEG2000 vs. full frame wavelet packet compression for smart card medical records.

    PubMed

    Leehan, Joaquín Azpirox; Lerallut, Jean-Francois

    2006-01-01

    This paper describes a comparison among different compression methods to be used in the context of electronic health records in the newer version of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high (33:1 and 50:1) compression rates. Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.

  4. A material-sparing method for assessment of powder deformation characteristics using data collected during a single compression-decompression cycle.

    PubMed

    Katz, Jeffrey M; Roopwani, Rahul; Buckner, Ira S

    2013-10-01

    Compressibility profiles, or functions of solid fraction versus applied pressure, are used to provide insight into the fundamental mechanical behavior of powders during compaction. These functions, collected during compression (in-die) or post ejection (out-of-die), indicate the amount of pressure that a given powder formulation requires to be compressed to a given density or thickness. To take advantage of the benefits offered by both methods, the data collected in-die during a single compression-decompression cycle will be used to generate the equivalent of a complete out-of-die compressibility profile that has been corrected for both elastic and viscoelastic recovery of the powder. This method has been found to be both a precise and accurate means of evaluating out-of-die compressibility for four common tableting excipients. Using this method, a comprehensive characterization of powder compaction behavior, specifically in relation to plastic/brittle, elastic and viscoelastic deformation, can be obtained. Not only is the method computationally simple, but it is also material-sparing. The ability to characterize powder compressibility using this approach can improve productivity and streamline tablet development studies. © 2013 Wiley Periodicals, Inc. and the American Pharmacists Association.
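
    The paper's single-cycle analysis itself is not reproduced here, but the kind of compressibility profile it works with, solid fraction versus applied pressure, is commonly summarized with the standard Heckel transform ln(1/(1-D)) = K*P + A, whose slope gives the mean yield pressure 1/K. The sketch below fits that line to synthetic (hypothetical) in-die data; the pressure range and material constants are invented for illustration.

```python
import numpy as np

# synthetic in-die data: solid fraction D rising with compaction pressure P (MPa)
P = np.linspace(50, 250, 20)
K_true, A_true = 0.004, 0.6
D = 1 - np.exp(-(A_true + K_true * P))       # data generated to satisfy the Heckel equation

y = np.log(1 / (1 - D))                      # Heckel transform: linear in P for plastic deformation
K, A = np.polyfit(P, y, 1)                   # slope and intercept of the Heckel plot
mean_yield_pressure = 1 / K                  # common plasticity summary statistic
```

    Deviations of real data from this straight line at low and high pressure are themselves informative (particle rearrangement and elastic effects, respectively), which is why out-of-die profiles corrected for elastic recovery, as in the paper, are preferred for material characterization.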

  5. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.

  6. Determine the Compressive Strength of Calcium Silicate Bricks by Combined Nondestructive Method

    PubMed Central

    2014-01-01

    The paper deals with the application of combined nondestructive method for assessment of compressive strength of calcium silicate bricks. In this case, it is a combination of the rebound hammer method and ultrasonic pulse method. Calibration relationships for determining compressive strength of calcium silicate bricks obtained from nondestructive parameter testing for the combined method as well as for the L-type Schmidt rebound hammer and ultrasonic pulse method are quoted here. Calibration relationships are known for their close correlation and are applicable in practice. The highest correlation between parameters from nondestructive measurement and predicted compressive strength is obtained using the SonReb combined nondestructive method. Combined nondestructive SonReb method was proved applicable for determination of compressive strength of calcium silicate bricks at checking tests in a production plant and for evaluation of bricks built in existing masonry structures. PMID:25276864

  7. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
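
    The quantization-with-dithering step can be illustrated in a few lines of numpy. This is a simplified sketch of subtractive dithering, not the exact FITS/fpack implementation: a reproducible random offset is added before rounding and subtracted on restore, so the error stays bounded by half the quantization step and is unbiased on average. The pixel statistics and step size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
pixels = rng.normal(loc=100.0, scale=5.0, size=100_000).astype(np.float32)

q_step = 0.5                                    # quantization step chosen relative to the noise
dither = rng.random(pixels.size)                # reproducible per-pixel offsets in [0, 1)

quantized = np.round(pixels / q_step + dither).astype(np.int32)   # integers to entropy-code
restored = (quantized - dither) * q_step        # subtracting the same offsets cancels the bias

err = restored - pixels.astype(np.float64)
```

    Because the restore step subtracts the identical dither sequence, the quantization error behaves like zero-mean noise rather than a systematic rounding bias, which is what preserves photometric precision at a given compression level.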

  8. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
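
    The benefit of the DPCM transformation described above is easy to demonstrate: differencing along each row concentrates a correlated image's values near zero, and a generic dictionary coder then does much better. The sketch uses zlib (a Lempel-Ziv variant) rather than LZW, and a synthetic smooth image standing in for a radiograph.

```python
import zlib

import numpy as np

# smooth synthetic 8-bit image: neighboring pixels are highly correlated
x, y = np.meshgrid(np.linspace(0, 4, 256), np.linspace(0, 4, 256))
img = (100 + 60 * np.sin(x) * np.cos(y) + 20 * x).astype(np.uint8)

direct = len(zlib.compress(img.tobytes()))

# DPCM: replace each pixel with its difference from the previous pixel in the row
diff = np.diff(img.astype(np.int16), axis=1, prepend=0)
dpcm_bytes = (diff & 0xFF).astype(np.uint8)      # wrap to one byte; still lossless mod 256
dpcm = len(zlib.compress(dpcm_bytes.tobytes()))

recon = np.cumsum(diff, axis=1)                  # exact lossless reconstruction
```

    The differential image uses only a handful of byte values clustered around zero, so the entropy-coding stage has far less work to do, mirroring the DPCM+LZW result reported in the article.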

  9. Chemically-bonded brick production based on burned clay by means of semidry pressing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voroshilov, Ivan, E-mail: Nixon.06@mail.ru; Endzhievskaya, Irina, E-mail: icaend@mail.ru; Vasilovskaya, Nina, E-mail: icaend@mail.ru

    We present a study on the possibility of using the burnt rocks of the Krasnoyarsk Territory for the production of chemically-bonded materials in the form of bricks, which are widely used in multistory housing and private house construction. A radiographic analysis of the composition of the burnt rock was conducted, and a modifier to adjust the composition uniformity was identified. The mixing moisture content was studied and the optimal amount was determined to be 13-15%. The method of semidry pressing was chosen, the process of obtaining moldings was theoretically justified, and the advantages of chemically-bonded wall materials over ceramic brick were shown. The production of an efficient artificial stone based on burnt rocks, comparable with conventionally effective ceramic materials or effective cellular tile, was demonstrated: the density of the burnt clay-based cellular tile is 1630-1785 kg/m^3, with a compressive strength of 13.6-20.0 MPa depending on the compression ratio and cement consumption; the frost resistance index is F50, and the thermal conductivity in the masonry is λ = 0.459-0.546 W/(m·°C). The precise geometric dimensions of the pressed products allow the chemically-bonded brick based on burnt clay to be used as a facing brick.

  10. Mechanical stiffness of TMJ condylar cartilage increases after artificial aging by ribose.

    PubMed

    Mirahmadi, Fereshteh; Koolstra, Jan Harm; Lobbezoo, Frank; van Lenthe, G Harry; Ghazanfari, Samaneh; Snabel, Jessica; Stoop, Reinout; Everts, Vincent

    2018-03-01

    Aging is accompanied by a series of changes in mature tissues that influence their properties and functions. Collagen, as one of the main extracellular components of cartilage, becomes highly crosslinked during aging. In this study, the aim was to examine whether a correlation exists between collagen crosslinking induced by artificial aging and mechanical properties of the temporomandibular joint (TMJ) condyle. To evaluate this hypothesis, collagen crosslinks were induced using ribose incubation. Porcine TMJ condyles were incubated for 7 days with different concentrations of ribose. The compressive modulus and stiffness ratio (incubated versus control) was determined after loading. Glycosaminoglycan and collagen content, and the number of crosslinks were analyzed. Tissue structure was visualized by microscopy using different staining methods. Concomitant with an increasing concentration of ribose, an increase of collagen crosslinks was found. The number of crosslinks increased almost 50 fold after incubation with the highest concentration of ribose. Simultaneously, the stiffness ratio of the samples showed a significant increase after incubation with the ribose. Pearson correlation analyses showed a significant positive correlation between the overall stiffness ratio and the crosslink level; the higher the number of crosslinks the higher the stiffness. The present model, in which ribose was used to mimic certain aspects of age-related changes, can be employed as an in vitro model to study age-related mechanical changes in the TMJ condyle. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. The new concept of the monitoring and appraisal of bone union inflexibility of fractures treated by Dynastab DK external fixator.

    PubMed

    Lenz, Gerhard P; Stasiak, Andrzej; Deszczyński, Jarosław; Karpiński, Janusz; Stolarczyk, Artur; Ziółkowski, Marcin; Szczesny, Grzegorz

    2003-10-30

    Background. This work deals with heuristic techniques based on artificial intelligence, mainly artificial non-linear multilayer neural networks, which were used to assess the bone union of fractures treated with Dynastab DK orthopaedic stabilizers. Material and methods. The authors used computer software based on multilayer neural networks, which makes it possible to predict the course of bone union at early stages of therapy. The neural net was trained on fifty-six cases of bone fracture treated with Dynastab DK stabilizers. Using the trained net, seventeen fractures of long bone shafts were examined for stiffness and for prediction of bone union. Results. Analyzing the results, it should be underlined that the mechanical properties of the bone union in the fracture gap change non-linearly as a function of time; in particular, major changes were observed during the fourth month of fracture treatment. There is a strong correlation between measure number two and measure number six: measure number two is stricter and in fact refers to flexion, while measure number six refers to compression of the bone in the fracture gap. Conclusions. Consequently, deflection loads are especially hazardous for healing bone. The very strong correlation between the real and predicted curves demonstrates the correctness of the neural model.

  12. Implicit solution of three-dimensional internal turbulent flows

    NASA Technical Reports Server (NTRS)

    Michelassi, V.; Liou, M.-S.; Povinelli, Louis A.; Martelli, F.

    1991-01-01

    The scalar form of the approximate factorization method was used to develop a new code for the solution of three-dimensional internal laminar and turbulent compressible flows. The Navier-Stokes equations in their Reynolds-averaged form were iterated in time until a steady solution was reached. Particular attention was given to the implicit and explicit artificial damping schemes, which proved to be particularly efficient in speeding up convergence and enhancing the robustness of the algorithm. A conservative treatment of these terms at the domain boundaries was proposed in order to avoid undesired artificial mass and/or momentum fluxes. Turbulence effects were accounted for by the zero-equation Baldwin-Lomax turbulence model and the q-omega two-equation model. The flow in a developing S-duct was then solved in the laminar regime at a Reynolds number (Re) of 790 and in the turbulent regime at Re = 40,000 using the Baldwin-Lomax model. The Stanitz elbow was then solved using an inviscid version of the same code at an inlet Mach number of 0.4. Grid dependence and convergence rate were investigated, showing that for this solver the implicit damping scheme may play a critical role in the convergence characteristics. The same flow at Re = 2.5 x 10^6 was solved with the Baldwin-Lomax and the q-omega models. Both approaches show satisfactory agreement with experiments, although the q-omega model was slightly more accurate.

  13. Data compression techniques applied to high resolution high frame rate video technology

    NASA Technical Reports Server (NTRS)

    Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.

    1989-01-01

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing. The results of the survey are presented. They include a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for implementation of video data compression in high speed imaging systems. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.

  14. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Only little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.

  15. Path Planning for Robot based on Chaotic Artificial Potential Field Method

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng

    2018-03-01

    Robot path planning in unknown environments is one of the hot research topics in the field of robot control. Aiming at the shortcomings of traditional artificial potential field methods, we propose a new path planning method for robots based on a chaotic artificial potential field. The method adopts the potential function as the objective function and introduces the robot's direction of movement as the control variable, combining an improved artificial potential field method with a chaotic optimization algorithm. Simulations have been carried out and the results demonstrate the superior practicality and high efficiency of the proposed method.
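    For context, a minimal sketch of the classical artificial potential field (APF) planner that such work improves upon: the robot descends the gradient of an attractive goal potential plus a repulsive obstacle potential. The gains, geometry, and step size below are made-up illustration values, and the chaotic optimization component of the paper is omitted.

```python
# Classical APF planner: one gradient step on U = U_att + U_rep per call.
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
    fx = k_att * (goal[0] - pos[0])              # attractive force toward goal
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:                     # repulsion inside radius d0
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 3
            fx += mag * dx
            fy += mag * dy
    n = math.hypot(fx, fy) or 1.0                # move at constant speed
    return (pos[0] + step * fx / n, pos[1] + step * fy / n)

pos, goal = (0.0, 0.0), (10.0, 10.0)
obstacles = [(6.0, 4.0)]
for _ in range(1000):
    pos = apf_step(pos, goal, obstacles)
    if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < 0.1:
        break
```

    The well-known weakness motivating the chaotic variant is visible in this form: with symmetric geometry the attractive and repulsive forces can cancel, trapping the robot in a local minimum.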

  16. Geometric decompositions of collective motion

    NASA Astrophysics Data System (ADS)

    Mischiati, Matteo; Krishnaprasad, P. S.

    2017-04-01

    Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes, including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.

  17. Sound Emission of Rotor Induced Deformations of Generator Casings

    NASA Technical Reports Server (NTRS)

    Polifke, W.; Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)

    2001-01-01

    The casing of large electrical generators can be deformed slightly by the rotor's magnetic field. The sound emission produced by these periodic deformations, which could possibly exceed guaranteed noise emission limits, is analysed analytically and numerically. From the deformation of the casing, the normal velocity of the generator's surface is computed. Taking into account the corresponding symmetry, an analytical solution for the acoustic pressure outside the generator is found in terms of the Hankel function of second order. The normal velocity of the generator surface provides the required boundary condition for the acoustic pressure and determines the magnitude of the pressure oscillations. For the numerical simulation, the nonlinear 2D Euler equations are formulated in a perturbation form for low Mach number computational aeroacoustics (CAA). The spatial derivatives are discretized by the classical sixth-order central interior scheme and a third-order boundary scheme. Spurious high-frequency oscillations are damped by a characteristic-based artificial compression method (ACM) filter. The time derivatives are approximated by the classical fourth-order Runge-Kutta method. The numerical results are in excellent agreement with the analytical solution.

  18. Geometric decompositions of collective motion

    PubMed Central

    Krishnaprasad, P. S.

    2017-01-01

    Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes—including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots. PMID:28484319

  19. Determination of total polyphenol index in wines employing a voltammetric electronic tongue.

    PubMed

    Cetó, Xavier; Gutiérrez, Juan Manuel; Gutiérrez, Manuel; Céspedes, Francisco; Capdevila, Josefina; Mínguez, Santiago; Jiménez-Jorquera, Cecilia; del Valle, Manel

    2012-06-30

    This work reports the application of a voltammetric electronic tongue system (ET) made from an array of modified graphite-epoxy composites plus a gold microelectrode in the qualitative and quantitative analysis of polyphenols found in wine. Wine samples were analyzed using cyclic voltammetry without any sample pretreatment. The obtained responses were preprocessed employing the discrete wavelet transform (DWT) in order to compress and extract significant features from the voltammetric signals, and the resulting approximation coefficients fed a multivariate calibration method (artificial neural network-ANN-or partial least squares-PLS-) which accomplished the quantification of total polyphenol content. Results for the external test subset were compared with those obtained with the Folin-Ciocalteu (FC) method and the UV absorbance polyphenol index (I(280)) as reference values, with highly significant correlation coefficients of 0.979 and 0.963, respectively, in the range from 50 to 2400 mg L(-1) gallic acid equivalents. In a separate experiment, qualitative discrimination of different polyphenols found in wine was also assessed by principal component analysis (PCA). Copyright © 2012 Elsevier B.V. All rights reserved.

  20. A novel ECG data compression method based on adaptive Fourier decomposition

    NASA Astrophysics Data System (ADS)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
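    The two figures of merit quoted above, compression ratio (CR) and percentage root-mean-square difference (PRD), are straightforward to compute. The sketch below uses one common PRD definition (some variants subtract the signal mean first); the synthetic sine signal, the assumed 11-bit sample width, and the compressed size are stand-ins, not MIT-BIH data.

```python
# CR and PRD, the standard figures of merit for ECG compressors.
import math

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def prd(x, x_rec):
    """PRD in percent between original x and reconstruction x_rec."""
    num = sum((a - b) ** 2 for a, b in zip(x, x_rec))
    den = sum(a ** 2 for a in x)
    return 100.0 * math.sqrt(num / den)

x = [math.sin(0.1 * n) for n in range(1000)]   # stand-in for an ECG record
x_rec = [v + 0.001 for v in x]                 # tiny reconstruction error
cr = compression_ratio(11 * 1000, 310)         # 11-bit samples, assumed sizes
p = prd(x, x_rec)
```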

  1. Graphics processing unit-assisted lossless decompression

    DOEpatents

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.
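    A Rice code with parameter k, the lossless scheme named in the patent, writes each non-negative value n as unary(n >> k) followed by the k low bits of n. This toy encoder/decoder works on a bitstring for readability; a real implementation would pack bits into words (and, per the patent, decode many packets in parallel on the GPU).

```python
# Toy Rice coder: unary quotient + k-bit binary remainder per value.

def rice_encode(values, k):
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.append("1" * q + "0")                     # quotient in unary
        bits.append(format(r, f"0{k}b") if k else "")  # remainder in k bits
    return "".join(bits)

def rice_decode(bits, count, k):
    values, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == "1":                          # read unary quotient
            q += 1
            i += 1
        i += 1                                         # skip terminating 0
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        values.append((q << k) | r)
    return values

data = [3, 0, 7, 12, 1, 5]
enc = rice_encode(data, k=2)
assert rice_decode(enc, len(data), k=2) == data        # lossless round trip
```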

  2. Multiresolution Distance Volumes for Progressive Surface Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laney, D E; Bertram, M; Duchaineau, M A

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.

  3. Uncertainty Propagation for Turbulent, Compressible Flow in a Quasi-1D Nozzle Using Stochastic Methods

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Mathelin, Lionel; Hussaini, M. Yousuff; Bataille, Francoise

    2003-01-01

    This paper describes a fully spectral, Polynomial Chaos method for the propagation of uncertainty in numerical simulations of compressible, turbulent flow, as well as a novel stochastic collocation algorithm for the same application. The stochastic collocation method is key to the efficient use of stochastic methods on problems with complex nonlinearities, such as those associated with the turbulence model equations in compressible flow and for CFD schemes requiring solution of a Riemann problem. Both methods are applied to compressible flow in a quasi-one-dimensional nozzle. The stochastic collocation method is roughly an order of magnitude faster than the fully Galerkin Polynomial Chaos method on the inviscid problem.
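    The efficiency argument for collocation can be illustrated in miniature: a Gaussian uncertain input is propagated through a nonlinear model by evaluating the model only at Gauss-Hermite quadrature nodes, instead of thousands of Monte Carlo samples. The toy model below stands in for the nozzle solver, and the node count is an assumption.

```python
# Stochastic collocation sketch: E[model(X)] for X ~ N(mu, sigma^2)
# via probabilists' Gauss-Hermite quadrature (weight exp(-x^2/2)).
import math
import numpy as np

def collocate_mean(model, mu, sigma, deg=8):
    nodes, weights = np.polynomial.hermite_e.hermegauss(deg)
    samples = model(mu + sigma * nodes)       # deg model evaluations only
    return float(weights @ samples) / math.sqrt(2.0 * math.pi)

model = lambda x: x ** 2                      # toy nonlinear quantity of interest
mean = collocate_mean(model, mu=0.0, sigma=1.0)
# exact E[X^2] = 1 for a standard normal; 8 nodes recover it to round-off
assert abs(mean - 1.0) < 1e-10
```

    The rule is exact for polynomial integrands up to degree 2*deg - 1, which is why so few solver calls suffice when the response is smooth in the random variable.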

  4. Graphical classification of DNA sequences of HLA alleles by deep learning.

    PubMed

    Miyake, Jun; Kaneshita, Yuhei; Asatani, Satoshi; Tagawa, Seiichi; Niioka, Hirohiko; Hirano, Takashi

    2018-04-01

    Alleles of human leukocyte antigen (HLA)-A DNAs are classified and expressed graphically using deep learning (a stacked autoencoder). Nucleotide sequence data corresponding to a length of 822 bp, collected from the Immuno Polymorphism Database, were compressed to a two-dimensional representation and plotted. Profiles of the two-dimensional plots indicate that the alleles can be classified, as clusters are formed. The two-dimensional plot of HLA-A DNAs gives a clear outlook for characterizing the various alleles.

  5. On Numerical Heating

    NASA Astrophysics Data System (ADS)

    Liou, Meng-Sing

    2013-11-01

    The development of computational fluid dynamics over the last few decades has yielded enormous successes and capabilities that are being routinely employed today; however there remain some open problems to be properly resolved. One example is the so-called overheating problem, which can arise in two very different scenarios, from either colliding or receding streams. Common in both is a localized, numerically over-predicted temperature. Von Neumann reported the former, a compressive overheating, nearly 70 years ago and numerically smeared the temperature peak by introducing artificial diffusion. However, the latter is unphysical in an expansive (rarefying) situation; it still dogs every method known to the author. We will present a study aiming at resolving this overheating problem and we find that: (1) the entropy increase is one-to-one linked to the increase in the temperature rise and (2) the overheating is inevitable in the current computational fluid dynamics framework in practice. Finally we will show a simple hybrid method that fundamentally cures the overheating problem in a rarefying flow, but also retains the property of accurate shock capturing. Moreover, this remedy (enhancement of current numerical methods) can be included easily in the present Eulerian codes. This work is performed under NASA's Fundamental Aeronautics Program.

  6. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    NASA Astrophysics Data System (ADS)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve a 1-D representation of image blocks. Under the circumstances of 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of video into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods can simultaneously attain a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods at low bitrates.
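    The zero-error claim for a linear code has a familiar linear-algebra reading: a linear autoencoder with tied weights is equivalent to a rank-k PCA projection, which is lossless exactly when the patch vectors lie in a k-dimensional subspace (the situation clustering aims to arrange). The synthetic patches below are an assumption standing in for the paper's image blocks.

```python
# Linear autoencoder as a rank-1 PCA projection: exact reconstruction
# when the data genuinely lie on a 1-D subspace.
import numpy as np

rng = np.random.default_rng(0)
basis = rng.normal(size=(1, 16))             # patches lie on a 1-D subspace
patches = rng.normal(size=(50, 1)) @ basis   # 50 patches, 16 pixels each

# "Train" the linear autoencoder: the top right singular vector is the
# optimal 1-D linear code for these patches.
_, _, vt = np.linalg.svd(patches, full_matrices=False)
encode = vt[:1].T                            # 16 -> 1
code = patches @ encode                      # 1-D representation per patch
reconstructed = code @ encode.T              # 1 -> 16

err = np.max(np.abs(reconstructed - patches))
assert err < 1e-9                            # zero error up to round-off
```

    A nonlinear bottleneck of the same width offers no such guarantee, which is the contrast the abstract draws.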

  7. Collective intelligence of the artificial life community on its own successes, failures, and future.

    PubMed

    Rasmussen, Steen; Raven, Michael J; Keating, Gordon N; Bedau, Mark A

    2003-01-01

    We describe a novel Internet-based method for building consensus and clarifying conflicts in large stakeholder groups facing complex issues, and we use the method to survey and map the scientific and organizational perspectives of the artificial life community during the Seventh International Conference on Artificial Life (summer 2000). The issues addressed in this survey included artificial life's main successes, main failures, main open scientific questions, and main strategies for the future, as well as the benefits and pitfalls of creating a professional society for artificial life. By illuminating the artificial life community's collective perspective on these issues, this survey illustrates the value of such methods of harnessing the collective intelligence of large stakeholder groups.

  8. Tissue-engineered articular cartilage exhibits tension-compression nonlinearity reminiscent of the native cartilage.

    PubMed

    Kelly, Terri-Ann N; Roach, Brendan L; Weidner, Zachary D; Mackenzie-Smith, Charles R; O'Connell, Grace D; Lima, Eric G; Stoker, Aaron M; Cook, James L; Ateshian, Gerard A; Hung, Clark T

    2013-07-26

    The tensile modulus of articular cartilage is much larger than its compressive modulus. This tension-compression nonlinearity enhances interstitial fluid pressurization and decreases the frictional coefficient. The current set of studies examines the tensile and compressive properties of cylindrical chondrocyte-seeded agarose constructs over different developmental stages through a novel method that combines osmotic loading, video microscopy, and uniaxial unconfined compression testing. This method was previously used to examine tension-compression nonlinearity in native cartilage. Engineered cartilage, cultured under free-swelling (FS) or dynamically loaded (DL) conditions, was tested in unconfined compression in hypertonic and hypotonic salt solutions. The apparent equilibrium modulus decreased with increasing salt concentration, indicating that increasing the bath solution osmolarity shielded the fixed charges within the tissue, shifting the measured moduli along the tension-compression curve and revealing the intrinsic properties of the tissue. With this method, we were able to measure the tensile (401±83kPa for FS and 678±473kPa for DL) and compressive (161±33kPa for FS and 348±203kPa for DL) moduli of the same engineered cartilage specimens. These moduli are comparable to values obtained from traditional methods, validating this technique for measuring the tensile and compressive properties of hydrogel-based constructs. This study shows that engineered cartilage exhibits tension-compression nonlinearity reminiscent of the native tissue, and that dynamic deformational loading can yield significantly higher tensile properties. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Bystander fatigue and CPR quality by older bystanders: a randomized crossover trial comparing continuous chest compressions and 30:2 compressions to ventilations.

    PubMed

    Liu, Shawn; Vaillancourt, Christian; Kasaboski, Ann; Taljaard, Monica

    2016-11-01

    This study sought to measure bystander fatigue and cardiopulmonary resuscitation (CPR) quality after five minutes of CPR using the continuous chest compression (CCC) versus the 30:2 chest compression to ventilation method in older lay persons, a population most likely to perform CPR on cardiac arrest victims. This randomized crossover trial took place at three tertiary care hospitals and a seniors' center. Participants were aged ≥55 years without significant physical limitations (frailty score ≤3/7). They completed two 5-minute CPR sessions (using 30:2 and CCC) on manikins; sessions were separated by a rest period. We used concealed block randomization to determine CPR method order. Metronome feedback maintained a compression rate of 100/minute. We measured heart rate (HR), mean arterial pressure (MAP), and Borg Exertion Scale. CPR quality measures included total number of compressions and number of adequate compressions (depth ≥5 cm). Sixty-three participants were enrolled: mean age 70.8 years, female 66.7%, past CPR training 60.3%. Bystander fatigue was similar between CPR methods: mean difference in HR -0.59 (95% CI -3.51-2.33), MAP 1.64 (95% CI -0.23-3.50), and Borg 0.46 (95% CI 0.07-0.84). Compared to 30:2, participants using CCC performed more chest compressions (480.0 v. 376.3, mean difference 107.7; p<0.0001) and more adequate chest compressions (381.5 v. 324.9, mean difference 62.0; p=0.0001), although good compressions/minute declined significantly faster with the CCC method (p=0.0002). CPR quality decreased significantly faster when performing CCC compared to 30:2. However, performing CCC produced more adequate compressions overall with a similar level of fatigue compared to the 30:2 method.

  10. Monitoring compaction and compressibility changes in offshore chalk reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dean, G.; Hardy, R.; Eltvik, P.

    1994-03-01

    Some of the North Sea's largest and most important oil fields are in chalk reservoirs. In these fields, it is important to measure reservoir compaction and compressibility because compaction can result in platform subsidence. Also, compaction drive is a main drive mechanism in these fields, so an accurate reserves estimate cannot be made without first measuring compressibility. Estimating compaction and reserves is difficult because compressibility changes throughout field life. Installing accurate, permanent downhole pressure gauges on offshore chalk fields makes it possible to use a new method to monitor compressibility: measurement of reservoir pressure changes caused by the tide. This tidal-monitoring technique is an in-situ method that can greatly increase compressibility information. It can be used to estimate compressibility and to measure compressibility variation over time. This paper concentrates on application of the tidal-monitoring technique to North Sea chalk reservoirs. However, the method is applicable to any tidal offshore area and can be applied whenever necessary to monitor in-situ rock compressibility. One such application would be if platform subsidence were expected.

  11. Digital mammography, cancer screening: Factors important for image compression

    NASA Technical Reports Server (NTRS)

    Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria

    1993-01-01

    The use of digital mammography for breast cancer screening poses several novel problems such as development of digital sensors, computer assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community for this medical application and identify possible dual use technologies within the NASA centers.

  12. Word aligned bitmap compression method, data structure, and apparatus

    DOEpatents

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
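    The word-aligned idea can be sketched compactly: the bitmap is cut into 31-bit groups (the payload of a 32-bit word), and runs of all-zero or all-one groups collapse into counted "fill" words while everything else stays a "literal" word. For readability this sketch keeps groups as strings and tuples; real WAH packs each into an actual 32-bit machine word and caps the run counter at 2^30.

```python
# Simplified Word-Aligned Hybrid (WAH) compression for a 32-bit word size.

GROUP = 31  # payload bits per 32-bit word

def wah_compress(bits):
    """bits: string of '0'/'1'; returns a list of literal/fill words."""
    groups = [bits[i:i + GROUP].ljust(GROUP, "0")
              for i in range(0, len(bits), GROUP)]
    words = []
    for g in groups:
        if g == "0" * GROUP or g == "1" * GROUP:
            fill = g[0]
            if words and words[-1][0] == "fill" and words[-1][1] == fill:
                words[-1] = ("fill", fill, words[-1][2] + 1)  # extend run
            else:
                words.append(("fill", fill, 1))
        else:
            words.append(("literal", g))
    return words

def wah_decompress(words):
    out = []
    for w in words:
        out.append(w[1] if w[0] == "literal" else w[1] * GROUP * w[2])
    return "".join(out)

bitmap = "0" * 310 + "1011" + "0" * 27 + "1" * 62   # sparse index bitmap
enc = wah_compress(bitmap)
assert wah_decompress(enc) == bitmap                # lossless round trip
assert len(enc) == 3                                # 13 groups -> 3 words
```

    Because fill words carry run lengths, logical AND/OR and bit counting can operate run-against-run without decompressing, which is the source of the query speedups described above.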

  13. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is obtained by alternating minimization. The proposed method addresses the difficulty of choosing a sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the proposed image reconstruction algorithm based on blind compressed sensing can recover high-quality image signals under under-sampling conditions.
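    A toy alternating-minimization loop in the spirit of the model above: with the dictionary unknown, a sparse-coding step (least squares plus hard thresholding) and a dictionary update alternate. The dimensions, threshold, and iteration count are illustrative assumptions, and the compressive measurement operator of a full blind-CS formulation is omitted for brevity.

```python
# Alternating minimization for Y ~ D @ A with unknown dictionary D
# and sparse coefficients A (dictionary-learning flavour of blind CS).
import numpy as np

rng = np.random.default_rng(1)
true_D = rng.normal(size=(20, 5))
true_A = rng.normal(size=(5, 100)) * (rng.random((5, 100)) < 0.3)
Y = true_D @ true_A                                 # observed signals

D = rng.normal(size=(20, 5))                        # random initial dictionary
for _ in range(15):
    A = np.linalg.lstsq(D, Y, rcond=None)[0]        # code update
    A[np.abs(A) < 0.05] = 0.0                       # enforce sparsity
    D = np.linalg.lstsq(A.T, Y.T, rcond=None)[0].T  # dictionary update
    D /= np.linalg.norm(D, axis=0, keepdims=True)   # normalise atoms

# The learned dictionary should span the signal subspace: projecting Y
# onto col(D) leaves essentially no residual.
residual = np.linalg.norm(Y - D @ np.linalg.lstsq(D, Y, rcond=None)[0])
assert residual < 1e-6 * np.linalg.norm(Y)
```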

  14. The deconvolution of complex spectra by artificial immune system

    NASA Astrophysics Data System (ADS)

    Galiakhmetova, D. I.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.

    2017-11-01

    An application of the artificial immune system method to the decomposition of complex spectra is presented. The results of decomposing a model contour consisting of three Gaussian components are demonstrated. The artificial immune system is a modern stochastic optimization method inspired by the behaviour of the biological immune system.
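    The clonal-selection principle behind such optimizers can be sketched briefly: the fittest "antibodies" are cloned and mutated, and fresh random candidates maintain diversity. For simplicity this sketch fits only the amplitude of a single Gaussian contour rather than the paper's three-component deconvolution; population sizes and mutation scale are assumptions.

```python
# Minimal clonal-selection (artificial immune system) optimizer fitting
# the height h of a Gaussian contour to a "measured" spectrum.
import math
import random

random.seed(3)
xs = [i * 0.1 for i in range(-50, 51)]
target = [2.0 * math.exp(-x * x) for x in xs]           # model spectrum, h = 2

def affinity(h):                                        # negative misfit
    return -sum((h * math.exp(-x * x) - t) ** 2 for x, t in zip(xs, target))

pop = [random.uniform(0.0, 5.0) for _ in range(20)]     # antibody population
for _ in range(60):
    pop.sort(key=affinity, reverse=True)
    best = pop[:5]                                      # select fittest
    clones = [h + random.gauss(0.0, 0.1) for h in best for _ in range(3)]
    pop = best + clones + [random.uniform(0.0, 5.0) for _ in range(5)]

pop.sort(key=affinity, reverse=True)
assert abs(pop[0] - 2.0) < 0.1       # recovered amplitude of the Gaussian
```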

  15. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    PubMed

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance than the SPIHT algorithm. The enhanced set partitioning in hierarchical trees (ESPIHT) algorithm runs faster than the SPIHT algorithm. In addition, the proposed algorithm reduces the number of bits in the stored or transmitted bit stream. It was applied to the compression of multichannel ECG data, and a specific procedure based on the modified algorithm is presented for more efficient compression of multichannel ECG data. The method was employed on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attained significant results in the compression of multichannel ECG data. Furthermore, to compress a signal that is stored for a long time, the proposed multichannel compression method can be utilized efficiently.

  16. Performance evaluation of the multiple-image optical compression and encryption method by increasing the number of target images

    NASA Astrophysics Data System (ADS)

    Aldossari, M.; Alfalou, A.; Brosseau, C.

    2017-08-01

    In an earlier study [Opt. Express 22, 22349-22368 (2014)], a method that simultaneously compresses and encrypts closely resembling images was proposed and validated. This multiple-image optical compression and encryption (MIOCE) method is based on a special fusion of the different target-image spectra in the spectral domain. Here, to assess the capacity of the MIOCE method, we evaluate the influence of the number of target images. This analysis allows us to determine the performance limits of the method. To achieve this goal, we use a criterion based on the root mean square (RMS) [Opt. Lett. 35, 1914-1916 (2010)] and the compression ratio to determine the spectral-plane area. The different spectral areas are then merged in a single spectral plane. By choosing specific areas, we can compress together 38 images instead of the 26 possible with the classical MIOCE method. The quality of the reconstructed image is evaluated using the mean-square-error (MSE) criterion.

  17. Architecture for one-shot compressive imaging using computer-generated holograms.

    PubMed

    Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D

    2016-09-10

    We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.

  18. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) is aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed the artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize parameter tuning and problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid-adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these sensors are scheme independent and can be stand-alone options for numerical algorithms other than the Yee et al. scheme.
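    For reference, one common form of the Harten-style ACM switch that the wavelet sensors replace: from first differences of a sensed quantity it returns values near 1 at sharp fronts and near 0 in smooth regions, gating where extra dissipation is added. The test profiles below are illustrative.

```python
# ACM-style discontinuity sensor built from first differences of u.
import math

def acm_sensor(u, eps=1e-12):
    d = [u[i + 1] - u[i] for i in range(len(u) - 1)]   # forward differences
    theta = [0.0] * len(u)
    for j in range(1, len(u) - 1):
        theta[j] = abs(d[j] - d[j - 1]) / (abs(d[j]) + abs(d[j - 1]) + eps)
    return theta

smooth = [math.exp(0.05 * j) for j in range(40)]       # smooth monotone profile
shock = [0.0 if j < 20 else 1.0 for j in range(40)]    # discontinuous profile
assert max(acm_sensor(smooth)) < 0.1    # little dissipation in smooth flow
assert max(acm_sensor(shock)) > 0.9     # sensor fires at the discontinuity
```

    Note the sensor also fires at smooth extrema, where consecutive differences change sign; this over-detection is part of what motivates the wavelet-based alternatives above.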

  19. An Application of the Difference Potentials Method to Solving External Problems in CFD

    NASA Technical Reports Server (NTRS)

    Ryaben'kii, Victor S.; Tsynkov, Semyon V.

    1997-01-01

    Numerical solution of infinite-domain boundary-value problems requires some special techniques that make the problem amenable to treatment on the computer. Indeed, the problem must be discretized in such a way that the computer operates with only a finite amount of information. Therefore, the original infinite-domain formulation must be altered and/or augmented so that on one hand the solution is not changed (or changed only slightly) and on the other hand the finite discrete formulation becomes tractable. One widely used approach to constructing such discretizations consists of truncating the unbounded original domain and then setting artificial boundary conditions (ABC's) at the newly formed external boundary. The role of the ABC's is to close the truncated problem and at the same time to ensure that the solution found inside the finite computational domain is maximally close to (in the ideal case, exactly the same as) the corresponding fragment of the original infinite-domain solution. Let us emphasize that the proper treatment of artificial boundaries may have a profound impact on the overall quality and performance of numerical algorithms. The latter statement is corroborated by numerous computational experiments and especially concerns the area of CFD, in which external problems present a wide class of practically important formulations. In this paper, we review some work that has been done over the recent years on constructing highly accurate nonlocal ABC's for calculation of compressible external flows. The approach is based on implementation of the generalized potentials and pseudodifferential boundary projection operators analogous to those proposed first by Calderon. The difference potentials method (DPM) by Ryaben'kii is used for the effective computation of the generalized potentials and projections. 
The resulting ABC's clearly outperform the existing methods from the standpoints of accuracy and robustness, in many cases noticeably speed up the multigrid convergence, and at the same time are quite comparable to other methods from the standpoints of geometric universality and simplicity of implementation.

  20. The Polygon-Ellipse Method of Data Compression of Weather Maps

    DTIC Science & Technology

    1994-03-28

    Report No. DOT/FAA/RD-9416, Project Report ATC-213, AD-A278 958. The Polygon-Ellipse Method of Data Compression of Weather Maps. J. L. Gertz. 28...a means must be found to compress this image. The Polygon-Ellipse (PE) encoding algorithm developed in this report represents weather regions...severely compress the image. For example, Mode S would require approximately a 10-fold compression. In addition, the algorithms used to perform the

  1. Compressible Convection Experiment using Xenon Gas in a Centrifuge

    NASA Astrophysics Data System (ADS)

    Menaut, R.; Alboussiere, T.; Corre, Y.; Huguet, L.; Labrosse, S.; Deguen, R.; Moulin, M.

    2017-12-01

    We present here an experiment especially designed to study compressible convection in the laboratory. For significant compressible convection effects, the parameters of the experiment have to be optimized: we use xenon gas in a cubic cell. This cell is placed in a centrifuge to artificially increase the apparent gravity and is heated from below. With these choices, we are able to reach a dissipation number close to the value in Earth's outer core. We will present our results for different heating fluxes and rotation rates. We succeeded in observing an adiabatic gradient of 3 K/cm in the cell. Studies of pressure and temperature fluctuations lead us to think that, at high heating flux, the convection takes the form of a single roll in the cell. Moreover, these fluctuations show that the flow is geostrophic due to the high rotation speed. This important role of rotation, via Coriolis force effects, in our experimental setup led us to develop a 2D quasi-geostrophic compressible model in the anelastic liquid approximation. We test this model numerically with the finite element solver FreeFem++ and compare its results with our experimental data. In conclusion, we will present our project for the next experiment, in which the cubic cell will be replaced by an annular cell. We will discuss the new effects expected from this geometry, such as Rossby waves and zonal flows.

  2. Load-unloading response of intact and artificially degraded articular cartilage correlated with near infrared (NIR) absorption spectra.

    PubMed

    Afara, I O; Singh, S; Oloyede, A

    2013-04-01

    The conventional mechanical properties of articular cartilage, such as compressive stiffness, have been demonstrated to be limited in their capacity to distinguish intact (visually normal) from degraded cartilage samples. In this paper, we explore the correlation between a new mechanical parameter, namely the reswelling of articular cartilage following unloading from a given compressive load, and the near infrared (NIR) spectrum. The capacity to distinguish mechanically intact from proteoglycan-depleted tissue on the basis of the "reswelling" characteristic was first established, and the result was subsequently correlated with the NIR spectral data of the respective tissue samples. To achieve this, normal intact and enzymatically degraded samples were subjected to both NIR probing and mechanical compression based on a load-unload-reswelling protocol. The parameter δr, characteristic of the osmotic "reswelling" of the matrix after unloading to a constant small load on the order of the osmotic pressure of cartilage, was obtained for the different sample types. Multivariate statistical analysis was employed to determine the degree of correlation between δr and the NIR absorption spectrum of the relevant specimens using Partial Least Squares (PLS) regression. The results show a strong relationship (R(2)=95.89%, p<0.0001) between the spectral data and δr. This correlation of δr with NIR spectral data suggests the potential for determining the reswelling characteristics non-destructively. It was also observed that δr values bear a significant relationship with cartilage matrix integrity, as indicated by proteoglycan content, and can therefore differentiate between normal and artificially degraded proteoglycan-depleted cartilage samples.
It is therefore argued that the reswelling of cartilage, which is both biochemical (osmotic) and mechanical (hydrostatic pressure) in origin, could be a strong candidate for characterizing the tissue, especially in regions surrounding focal cartilage defects in joints. Copyright © 2012 Elsevier Ltd. All rights reserved.
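    The δr-NIR correlation above was obtained with PLS regression. As a reference point, the core of PLS1 (the single-response NIPALS variant) fits in a few lines; the sketch below uses synthetic data and is a generic illustration, not the authors' chemometric pipeline:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal PLS1 (NIPALS) for a single response. Returns (coef, x_mean,
    y_mean) so that predictions are (X - x_mean) @ coef + y_mean."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)        # weight vector
        t = Xc @ w                       # score
        tt = float(t @ t)
        p = Xc.T @ t / tt                # X loading
        q = float(yc @ t) / tt           # y loading
        Xc = Xc - np.outer(t, p)         # deflation
        yc = yc - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    coef = W @ np.linalg.solve(P.T @ W, np.array(Q))
    return coef, x_mean, y_mean

def pls1_predict(X, coef, x_mean, y_mean):
    return (X - x_mean) @ coef + y_mean

# Synthetic demo: a noisy linear relation stands in for delta_r vs. spectra.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.5, 0.0, -0.7, 0.2]) + 0.1 * rng.normal(size=100)
coef, xm, ym = pls1_fit(X, y, n_components=5)
mse = float(np.mean((pls1_predict(X, coef, xm, ym) - y) ** 2))
```

    With as many components as predictors, PLS1 reproduces the ordinary least-squares fit; fewer components trade fit for robustness, which is the usual motivation in spectroscopy.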

  3. 3D fabrication and characterization of phosphoric acid scaffold with a HA/β-TCP weight ratio of 60:40 for bone tissue engineering applications.

    PubMed

    Wang, Yanen; Wang, Kai; Li, Xinpei; Wei, Qinghua; Chai, Weihong; Wang, Shuzhi; Che, Yu; Lu, Tingli; Zhang, Bo

    2017-01-01

    A key requirement for room-temperature three-dimensional printing (3-DP) of medical implants is the availability of printable and biocompatible binder-powder systems. Polyvinyl alcohol (PVA) and phosphoric acid solutions of different concentrations were chosen as binders to make the artificial stent biocompatible with sufficient compressive strength. In order to achieve an optimum balance between the bioceramic powder and the binder solution, the biocompatibility and mechanical properties of these artificial stent samples were tested using the two kinds of binder solution. This study demonstrated a printable binder formulation at room temperature for 3D artificial bone scaffolds. A 0.6 wt% PVA solution was ejected easily via inkjet printing, with a supplementation of 0.25 wt% Tween 80 to reduce the surface tension of the polyvinyl alcohol solution. Compared with the polyvinyl alcohol scaffolds, the phosphoric acid scaffolds had better mechanical properties. Though both scaffolds supported cell proliferation, the absorbance of the polyvinyl alcohol scaffolds was higher than that of the phosphoric acid scaffolds. The artificial stents with a hydroxyapatite/beta-tricalcium phosphate (HA/β-TCP) weight ratio of 60:40 showed good biocompatibility for both scaffolds. Considering the scaffolds' mechanical and biocompatible properties, the phosphoric acid scaffolds with a HA/β-TCP weight ratio of 60:40 may be the best combination for bone tissue engineering applications.

  4. Mechanical instability of monocrystalline and polycrystalline methane hydrates

    PubMed Central

    Wu, Jianyang; Ning, Fulong; Trinh, Thuat T.; Kjelstrup, Signe; Vlugt, Thijs J. H.; He, Jianying; Skallerud, Bjørn H.; Zhang, Zhiliang

    2015-01-01

    Despite observations of massive methane release and geohazards associated with gas hydrate instability in nature, as well as ductile flow accompanying hydrate dissociation in artificial polycrystalline methane hydrates in the laboratory, the destabilising mechanisms of gas hydrates under deformation and their grain-boundary structures have not yet been elucidated at the molecular level. Here we report direct molecular dynamics simulations of the material instability of monocrystalline and polycrystalline methane hydrates under mechanical loading. The results show dislocation-free brittle failure in monocrystalline hydrates and an unexpected crossover from strengthening to weakening in polycrystals. Upon uniaxial depressurisation, strain-induced hydrate dissociation accompanied by grain-boundary decohesion and sliding destabilises the polycrystals. In contrast, upon compression, appreciable solid-state structural transformation dominates the response. These findings provide molecular insight not only into the metastable structures of grain boundaries, but also into unusual ductile flow with hydrate dissociation as observed during macroscopic compression experiments. PMID:26522051

  5. Constitutive flow behaviour of austenitic stainless steels under hot deformation: artificial neural network modelling to understand, evaluate and predict

    NASA Astrophysics Data System (ADS)

    Mandal, Sumantra; Sivaprasad, P. V.; Venugopal, S.; Murthy, K. P. N.

    2006-09-01

    An artificial neural network (ANN) model is developed to predict the constitutive flow behaviour of austenitic stainless steels during hot deformation. The input parameters are alloy composition and process variables whereas flow stress is the output. The model is based on a three-layer feed-forward ANN with a back-propagation learning algorithm. The neural network is trained with an in-house database obtained from hot compression tests on various grades of austenitic stainless steels. The performance of the model is evaluated using a wide variety of statistical indices. Good agreement between experimental and predicted data is obtained. The correlation between individual alloying elements and high temperature flow behaviour is investigated by employing the ANN model. The results are found to be consistent with the physical phenomena. The model can be used as a guideline for new alloy development.

  6. A Voltammetric Electronic Tongue for the Resolution of Ternary Nitrophenol Mixtures

    PubMed Central

    González-Calabuig, Andreu; Cetó, Xavier

    2018-01-01

    This work reports the applicability of a voltammetric sensor array able to quantify the content of 2,4-dinitrophenol, 4-nitrophenol, and picric acid in artificial samples using the electronic tongue (ET) principles. The ET is based on cyclic voltammetry signals, obtained from an array of metal disk electrodes and a graphite epoxy composite electrode, compressed using the discrete wavelet transform and modelled with chemometric tools such as artificial neural networks (ANNs). ANNs were employed to build the quantitative prediction model. In this manner, a set of standards based on a full factorial design, ranging from 0 to 300 mg·L−1, was prepared to build the model; afterward, the model was validated with a completely independent set of standards. The model successfully predicted the concentration of the three considered phenols with a normalized root mean square error of 0.030 and 0.076 for the training and test subsets, respectively, and r ≥ 0.948. PMID:29342848
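    The compression step above, voltammograms reduced by a discrete wavelet transform before ANN modelling, can be illustrated with a minimal sketch. The abstract does not specify the wavelet; a Haar transform with hard thresholding of the detail coefficients stands in here:

```python
import math

def haar_dwt(signal, levels):
    """Multi-level orthonormal Haar DWT: (approximation, details coarse->fine)."""
    approx, details = list(signal), []
    for _ in range(levels):
        half = len(approx) // 2
        s = [(approx[2*i] + approx[2*i+1]) / math.sqrt(2.0) for i in range(half)]
        d = [(approx[2*i] - approx[2*i+1]) / math.sqrt(2.0) for i in range(half)]
        details.append(d)
        approx = s
    return approx, details[::-1]

def haar_idwt(approx, details):
    s = list(approx)
    for d in details:                      # coarse -> fine
        nxt = []
        for a, b in zip(s, d):
            nxt.append((a + b) / math.sqrt(2.0))
            nxt.append((a - b) / math.sqrt(2.0))
        s = nxt
    return s

def compress(signal, levels, keep):
    """Zero all but the largest `keep` fraction of detail coefficients."""
    approx, details = haar_dwt(signal, levels)
    mags = sorted((abs(c) for d in details for c in d), reverse=True)
    thresh = mags[max(1, int(keep * len(mags))) - 1]
    return approx, [[c if abs(c) >= thresh else 0.0 for c in d] for d in details]

# Demo on a smooth synthetic peak (a stand-in for a voltammogram)
sig = [math.exp(-((i - 32) / 6.0) ** 2) for i in range(64)]
a, d = haar_dwt(sig, 3)
roundtrip_err = max(abs(x - y) for x, y in zip(sig, haar_idwt(a, d)))
ca, cd = compress(sig, levels=3, keep=0.25)
lossy_err = max(abs(x - y) for x, y in zip(sig, haar_idwt(ca, cd)))
```

    Keeping a quarter of the detail coefficients preserves the smooth peak almost exactly, which is why wavelet compression is an effective front end for an ANN.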

  7. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

    In the paper we present a method that improves lossy compression of true color or other multispectral images. The essence of the method is to project the initial color planes onto the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do that, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding optimal losses of KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain from 0.2 to 2 dB at typical compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can work on common hardware.
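    Projecting onto the KL basis amounts to diagonalizing the inter-plane covariance. A minimal sketch (a direct eigendecomposition of the small C×C covariance, not the authors' fast low-memory algorithm; the data are random stand-ins for CMYK planes):

```python
import numpy as np

def kl_decorrelate(planes):
    """planes: (C, H, W) array of color planes. Returns (kl_planes, basis, means)."""
    C = planes.shape[0]
    X = planes.reshape(C, -1).astype(float)
    means = X.mean(axis=1, keepdims=True)
    Xc = X - means
    cov = Xc @ Xc.T / Xc.shape[1]          # C x C inter-plane covariance
    evals, evecs = np.linalg.eigh(cov)     # KL basis = covariance eigenvectors
    basis = evecs[:, np.argsort(evals)[::-1]]
    Y = basis.T @ Xc                       # decorrelated "planes"
    return Y.reshape(planes.shape), basis, means

def kl_reconstruct(kl_planes, basis, means):
    C = kl_planes.shape[0]
    X = basis @ kl_planes.reshape(C, -1) + means
    return X.reshape(kl_planes.shape)

# Demo: four correlated 8x8 planes (a toy stand-in for CMYK)
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 8))
planes = np.stack([w * base + 0.1 * rng.normal(size=(8, 8))
                   for w in (1.0, 0.8, 0.6, 0.4)])
Y, basis, means = kl_decorrelate(planes)
Y2 = Y.reshape(4, -1)
cov_Y = Y2 @ Y2.T / Y2.shape[1]
max_offdiag = float(np.max(np.abs(cov_Y - np.diag(np.diag(cov_Y)))))
recon_err = float(np.max(np.abs(kl_reconstruct(Y, basis, means) - planes)))
```

    The transformed planes are mutually uncorrelated, so each can be compressed independently; the orthogonal basis makes reconstruction exact before any quantization.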

  8. Unified approach for incompressible flows

    NASA Astrophysics Data System (ADS)

    Chang, Tyne-Hsien

    1995-07-01

    A unified approach for solving incompressible flows has been investigated in this study. The numerical CTVD (Centered Total Variation Diminishing) scheme used in this study was successfully developed by Sanders and Li for compressible flows, especially at high speeds. The CTVD scheme possesses good mathematical properties for damping out spurious oscillations while providing high-order accuracy for high speed flows. This leads us to believe that the CTVD scheme can be applied equally well to incompressible flows. Because of the mathematical differences between the governing equations for incompressible and compressible flows, the scheme cannot be applied directly to incompressible flows. However, if one modifies the continuity equation for incompressible flows by introducing pseudo-compressibility, the governing equations for incompressible flows acquire the same mathematical character as those for compressible flows. The application of the algorithm to incompressible flows thus becomes feasible. In this study, the governing equations for incompressible flows comprise the continuity equation and the momentum equations. The continuity equation is modified by adding a time derivative of the pressure term containing the artificial compressibility. The modified continuity equation, together with the unsteady momentum equations, forms a hyperbolic-parabolic type of time-dependent system of equations. Thus, the CTVD scheme can be implemented. In addition, the physical and numerical boundary conditions are properly treated via characteristic boundary conditions. Accordingly, a CFD code has been developed for this research and is currently under testing. Flow past a circular cylinder was chosen for numerical experiments to determine the accuracy and efficiency of the code. The code has shown some promising results.
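    The pseudo-compressibility modification described above can be demonstrated on a toy problem. The sketch below is a plain 1-D artificial-compressibility iteration (explicit first-order updates, not the CTVD scheme of the abstract; all parameters are illustrative): the continuity equation becomes (1/β)∂p/∂τ + ∂u/∂x = 0, so pressure and velocity are marched together in pseudo-time until the velocity field is divergence-free.

```python
import math

# Staggered 1-D grid: N pressure cells, N+1 velocity faces; the end velocities
# are held fixed, and viscosity damps the pseudo-acoustic transients.
N, dx, dtau = 16, 1.0, 0.2
beta, nu = 1.0, 0.3          # artificial compressibility and viscosity

u = [1.0 + 0.5 * math.sin(2.0 * math.pi * i / N) for i in range(N + 1)]
u[0] = u[N] = 1.0            # fixed inflow/outflow velocities
p = [0.0] * N

def divergence(u):
    return [(u[i + 1] - u[i]) / dx for i in range(N)]

div0 = max(abs(d) for d in divergence(u))
for _ in range(10000):
    div = divergence(u)
    for i in range(N):       # pseudo-compressible continuity update
        p[i] -= dtau * beta * div[i]
    unew = u[:]
    for i in range(1, N):    # momentum: pressure gradient + viscous damping
        unew[i] = u[i] + dtau * (-(p[i] - p[i - 1]) / dx
                                 + nu * (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx ** 2)
    u = unew
div_final = max(abs(d) for d in divergence(u))
```

    At pseudo-steady state the added pressure derivative vanishes and the original incompressible continuity equation is recovered, which is the essence of the artificial compressibility method.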

  9. Unified approach for incompressible flows

    NASA Technical Reports Server (NTRS)

    Chang, Tyne-Hsien

    1995-01-01

    A unified approach for solving incompressible flows has been investigated in this study. The numerical CTVD (Centered Total Variation Diminishing) scheme used in this study was successfully developed by Sanders and Li for compressible flows, especially at high speeds. The CTVD scheme possesses good mathematical properties for damping out spurious oscillations while providing high-order accuracy for high speed flows. This leads us to believe that the CTVD scheme can be applied equally well to incompressible flows. Because of the mathematical differences between the governing equations for incompressible and compressible flows, the scheme cannot be applied directly to incompressible flows. However, if one modifies the continuity equation for incompressible flows by introducing pseudo-compressibility, the governing equations for incompressible flows acquire the same mathematical character as those for compressible flows. The application of the algorithm to incompressible flows thus becomes feasible. In this study, the governing equations for incompressible flows comprise the continuity equation and the momentum equations. The continuity equation is modified by adding a time derivative of the pressure term containing the artificial compressibility. The modified continuity equation, together with the unsteady momentum equations, forms a hyperbolic-parabolic type of time-dependent system of equations. Thus, the CTVD scheme can be implemented. In addition, the physical and numerical boundary conditions are properly treated via characteristic boundary conditions. Accordingly, a CFD code has been developed for this research and is currently under testing. Flow past a circular cylinder was chosen for numerical experiments to determine the accuracy and efficiency of the code. The code has shown some promising results.

  10. Transform-Based Channel-Data Compression to Improve the Performance of a Real-Time GPU-Based Software Beamformer.

    PubMed

    Lok, U-Wai; Li, Pai-Chi

    2016-03-01

    Graphics processing unit (GPU)-based software beamforming has advantages over hardware-based beamforming of easier programmability and a faster design cycle, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits the real-time performance. Data compression methods can be applied to the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck of the real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and the process of decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system that we built in-house to demonstrate the feasibility of the proposed algorithm in a real system. 
It was found that channel data from a 64-channel system can be transferred using the standard USB 3.0 interface in most practical imaging applications.

  11. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding additional complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employs the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
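    For reference, the DCT building block of such hybrid schemes is compact enough to state directly. A naive O(N²) orthonormal DCT-II and its inverse (a generic illustration, not the paper's MATLAB pipeline):

```python
import math

def dct2(x):
    """Orthonormal DCT-II of a 1-D sequence (naive O(N^2) form)."""
    N = len(x)
    return [
        (math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
        * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2.0 * N))
              for n in range(N))
        for k in range(N)
    ]

def idct2(X):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    N = len(X)
    return [
        sum((math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
            * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2.0 * N))
            for k in range(N))
        for n in range(N)
    ]

# Demo: a smooth ramp concentrates nearly all energy in the first coefficients,
# which is the energy compaction that makes DCT-based quantization effective.
x = [float(i) for i in range(8)]
X = dct2(x)
roundtrip_err = max(abs(a - b) for a, b in zip(x, idct2(X)))
low_freq_energy = sum(c * c for c in X[:2]) / sum(c * c for c in X)
```

    Production codecs use a fast O(N log N) factorization and 2-D separable transforms, but the energy-compaction behavior shown here is what the hybrid scheme exploits.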

  12. JPEG and wavelet compression of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels using JPEG and wavelet methods. The compressed images were assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or image quality became too poor to make a reliable diagnosis.

  13. Method for compression of binary data

    DOEpatents

    Berlin, Gary J.

    1996-01-01

    The disclosed method for compression of a series of data bytes, based on LZSS-based compression methods, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte length instructions and pointers are read using word instructions, thus reducing the time required for decompression.
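    A simplified sketch of the patent's idea, flag bits accumulated in a separate buffer and appended after the token stream so that decompression reads bytes and words without per-token bit manipulation, might look as follows (the 12-bit-offset/4-bit-length token format is illustrative, not the patented layout):

```python
MIN_LEN, MAX_LEN, WINDOW = 3, 18, 4095

def compress(data):
    """LZSS-style compression with flag bits buffered separately."""
    tokens = bytearray()
    flags = []                              # the separate flag-bit buffer
    i, n = 0, len(data)
    while i < n:
        best_len, best_off = 0, 0
        for j in range(max(0, i - WINDOW), i):   # naive longest-match search
            l = 0
            while l < MAX_LEN and i + l < n and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= MIN_LEN:
            flags.append(0)                                   # pointer token
            tokens.append(best_off >> 4)
            tokens.append(((best_off & 0xF) << 4) | (best_len - MIN_LEN))
            i += best_len
        else:
            flags.append(1)                                   # literal token
            tokens.append(data[i])
            i += 1
    packed = bytearray()                    # flag bits appended after all data
    for k in range(0, len(flags), 8):
        byte = 0
        for idx, bit in enumerate(flags[k:k + 8]):
            byte |= bit << (7 - idx)
        packed.append(byte)
    return bytes(tokens) + bytes(packed), len(flags)

def decompress(blob, n_tokens):
    flag_bytes = blob[len(blob) - (n_tokens + 7) // 8:]
    out, pos = bytearray(), 0
    for t in range(n_tokens):
        if (flag_bytes[t >> 3] >> (7 - (t & 7))) & 1:         # literal
            out.append(blob[pos]); pos += 1
        else:                                                 # (offset, length)
            off = (blob[pos] << 4) | (blob[pos + 1] >> 4)
            length = (blob[pos + 1] & 0xF) + MIN_LEN
            pos += 2
            for _ in range(length):
                out.append(out[-off])       # byte-wise copy handles overlaps
            # overlapping copies reproduce runs, as in standard LZ77/LZSS
    return bytes(out)

data = b"abracadabra abracadabra abracadabra!"
blob, n_tokens = compress(data)
restored = decompress(blob, n_tokens)
```

    Because the flags are read from their own region, the decompressor touches literals with byte loads and pointers with word loads, which is the speed advantage the patent claims.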

  14. An unstructured mesh arbitrary Lagrangian-Eulerian unsteady incompressible flow solver and its application to insect flight aerodynamics

    NASA Astrophysics Data System (ADS)

    Su, Xiaohui; Cao, Yuanwei; Zhao, Yong

    2016-06-01

    In this paper, an unstructured mesh Arbitrary Lagrangian-Eulerian (ALE) incompressible flow solver is developed to investigate the aerodynamics of insect hovering flight. The proposed finite-volume ALE Navier-Stokes solver is based on the artificial compressibility method (ACM) with a high-resolution characteristics-based scheme on unstructured grids. The present ALE model is validated and assessed through flow past an oscillating cylinder. Good agreement with experimental results and other numerical solutions is obtained, which demonstrates the accuracy and capability of the present model. The lift generation mechanisms of a 2D wing in hovering motion, including wake capture, delayed stall, rapid pitch, as well as clap and fling, are then studied and illustrated using the current ALE model. Moreover, the optimal angular amplitude in the symmetric model, 45°, is reported in detail for the first time, using averaged lift and the energy power method. In addition, the lift generation of the complete cyclic clap-and-fling motion, which few researchers have simulated using the ALE method owing to the large deformations involved, is studied and clarified for the first time. The present ALE model is found to be a useful tool for investigating the lift generation mechanisms of insect wing flight.

  15. Artificial Neural Network-Based Three-dimensional Continuous Response Relationship Construction of 3Cr20Ni10W2 Heat-Resisting Alloy and Its Application in Finite Element Simulation

    NASA Astrophysics Data System (ADS)

    Li, Le; Wang, Li-yong

    2018-04-01

    The application of an accurate constitutive relationship in finite element simulation contributes significantly to accurate simulation results, which play a critical role in process design and optimization. In this investigation, the true stress-strain data of 3Cr20Ni10W2 heat-resisting alloy were obtained from a series of isothermal compression tests conducted over a wide temperature range of 1203-1403 K and a strain rate range of 0.01-10 s-1 on a Gleeble 1500 testing machine. The constitutive relationship was then modeled by an optimally constructed and well-trained back-propagation artificial neural network (BP-ANN). Evaluation of the BP-ANN model revealed that it has admirable performance in characterizing and predicting the flow behavior of 3Cr20Ni10W2 heat-resisting alloy. Meanwhile, a comparison between an improved Arrhenius-type constitutive equation and the BP-ANN model shows that the latter has higher accuracy. Consequently, the developed BP-ANN model was used to predict abundant stress-strain data beyond the limited experimental conditions and to construct the three-dimensional continuous response relationship among temperature, strain rate, strain, and stress. Finally, the three-dimensional continuous response relationship was applied to the numerical simulation of isothermal compression tests. The results show that such a constitutive relationship can significantly improve the accuracy of numerical simulation of hot forming processes.
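    A BP-ANN of the kind described, one hidden layer trained by gradient-descent back-propagation, can be sketched on synthetic stand-in data (the network size, learning rate, and target function below are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hot-compression data: inputs play the role of
# normalized temperature, log strain rate and strain; the target is an
# arbitrary smooth function standing in for flow stress.
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = (np.sin(X[:, 0]) + 0.5 * X[:, 1] - 0.3 * X[:, 2] ** 2).reshape(-1, 1)

# Three-layer feed-forward net (3-16-1) with tanh hidden activation,
# trained by plain full-batch back-propagation.
W1 = rng.normal(scale=0.5, size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

loss0 = float(np.mean((forward(X)[1] - y) ** 2))
for _ in range(5000):
    H, out = forward(X)
    err = (out - y) / len(X)              # gradient of 0.5*MSE w.r.t. output
    dH = (err @ W2.T) * (1.0 - H ** 2)    # back-propagate through tanh
    W2 -= lr * (H.T @ err); b2 -= lr * err.sum(axis=0)
    W1 -= lr * (X.T @ dH);  b1 -= lr * dH.sum(axis=0)
loss1 = float(np.mean((forward(X)[1] - y) ** 2))
```

    Once trained, such a network interpolates smoothly between tested conditions, which is what lets the authors generate dense stress-strain data for the finite element simulation.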

  16. Force sensitive carbon nanotube arrays for biologically inspired airflow sensing

    NASA Astrophysics Data System (ADS)

    Maschmann, Matthew R.; Dickinson, Ben; Ehlert, Gregory J.; Baur, Jeffery W.

    2012-09-01

    The compressive electromechanical response of aligned carbon nanotube (CNT) arrays is evaluated for use as an artificial hair sensor (AHS) transduction element. CNT arrays with heights of 12, 75, and 225 µm are examined. The quasi-static and dynamic sensitivity to force, response time, and signal drift are examined within the range of applied stresses predicted by a mechanical model applicable to the conceptual CNT array-based AHS (0-1 kPa). Each array is highly sensitive to compressive loading, with a maximum observed gauge factor of 114. The arrays demonstrate a repeatable response to dynamic cycling after a break-in period of approximately 50 cycles. Using a four-wire measurement electrode configuration, the change in contact resistance between the array and the electrodes is observed to dominate the electromechanical response of the arrays. The response time of the CNT arrays is of the order of 10 ms. When the arrays are subjected to constant stress, mechanical creep is observed that results in a signal drift, which generally diminishes the responsiveness of the arrays, particularly at stresses approaching 1 kPa. The results of this study serve as a preliminary proof of concept for utilizing CNT arrays as the transduction mechanism of a proposed artificial hair sensor. Such a low-profile, light-weight flow sensor is expected to be useful in a number of applications, including navigation and state awareness of small air vehicles, similar in function to the natural hair-cell receptors utilized by insects and bats.
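    The reported gauge factor is the slope of relative resistance change versus strain, GF = (ΔR/R₀)/ε. A small sketch that recovers it from a loading sweep by least squares (the sweep data here are synthetic, generated with the paper's maximum value of 114):

```python
def gauge_factor(strain, resistance):
    """Least-squares slope of relative resistance change vs. strain:
    GF = d(dR/R0)/d(strain), with R0 the unloaded resistance."""
    r0 = resistance[0]
    y = [(r - r0) / r0 for r in resistance]
    mx = sum(strain) / len(strain)
    my = sum(y) / len(y)
    num = sum((s - mx) * (v - my) for s, v in zip(strain, y))
    den = sum((s - mx) ** 2 for s in strain)
    return num / den

# Synthetic sweep: R0 = 1 kOhm and a true gauge factor of 114; the data
# themselves are made up for the demo, not measurements from the paper.
strain = [i * 1.0e-4 for i in range(11)]               # 0 ... 0.1% strain
resistance = [1000.0 * (1.0 + 114.0 * e) for e in strain]
gf = gauge_factor(strain, resistance)
```

    On real CNT-array data the fit would be taken over the repeatable post-break-in cycles, since creep and drift bias a naive two-point estimate.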

  17. A semi-analytic model of magnetized liner inertial fusion

    DOE PAGES

    McBride, Ryan D.; Slutz, Stephen A.

    2015-05-21

    Presented is a semi-analytic model of magnetized liner inertial fusion (MagLIF). This model accounts for several key aspects of MagLIF, including: (1) preheat of the fuel (optionally via laser absorption); (2) pulsed-power-driven liner implosion; (3) liner compressibility with an analytic equation of state, artificial viscosity, internal magnetic pressure, and ohmic heating; (4) adiabatic compression and heating of the fuel; (5) radiative losses and fuel opacity; (6) magnetic flux compression with Nernst thermoelectric losses; (7) magnetized electron and ion thermal conduction losses; (8) end losses; (9) enhanced losses due to prescribed dopant concentrations and contaminant mix; (10) deuterium-deuterium and deuterium-tritium primary fusion reactions for arbitrary deuterium to tritium fuel ratios; and (11) magnetized α-particle fuel heating. We show that this simplified model, with its transparent and accessible physics, can be used to reproduce the general 1D behavior presented throughout the original MagLIF paper [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)]. We also discuss some important physics insights gained as a result of developing this model, such as the dependence of radiative loss rates on the radial fraction of the fuel that is preheated.
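    Ingredient (4), adiabatic compression of the fuel, has a textbook scaling worth recording: in a cylindrical implosion, mass conservation gives ρ ∝ r⁻² and the ideal-gas adiabat gives T ∝ ρ^(γ−1). The sketch below shows only this scaling, not the semi-analytic model with its loss terms; the numbers are illustrative:

```python
# Adiabatic heating of fuel in a cylindrical implosion: rho ~ r^-2 from mass
# conservation, T ~ rho^(gamma - 1) along the ideal-gas adiabat. This is the
# lossless textbook limit, not the full model; T0 and CR are illustrative.
gamma = 5.0 / 3.0                        # monatomic ideal gas

def adiabatic_temperature(T0, r0, r):
    """Fuel temperature after convergence from radius r0 to r."""
    rho_ratio = (r0 / r) ** 2            # cylindrical convergence
    return T0 * rho_ratio ** (gamma - 1.0)

T0 = 200.0                               # eV, a preheat-scale temperature
CR = 25.0                                # convergence ratio r0/r
T_stag = adiabatic_temperature(T0, CR, 1.0)
```

    With γ = 5/3 the temperature grows as CR^(4/3), which is why substantial preheat plus modest convergence can reach multi-keV fuel temperatures before the loss terms of the full model are accounted for.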

  18. A semi-analytic model of magnetized liner inertial fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McBride, Ryan D.; Slutz, Stephen A.

    Presented is a semi-analytic model of magnetized liner inertial fusion (MagLIF). This model accounts for several key aspects of MagLIF, including: (1) preheat of the fuel (optionally via laser absorption); (2) pulsed-power-driven liner implosion; (3) liner compressibility with an analytic equation of state, artificial viscosity, internal magnetic pressure, and ohmic heating; (4) adiabatic compression and heating of the fuel; (5) radiative losses and fuel opacity; (6) magnetic flux compression with Nernst thermoelectric losses; (7) magnetized electron and ion thermal conduction losses; (8) end losses; (9) enhanced losses due to prescribed dopant concentrations and contaminant mix; (10) deuterium-deuterium and deuterium-tritium primary fusion reactions for arbitrary deuterium to tritium fuel ratios; and (11) magnetized α-particle fuel heating. We show that this simplified model, with its transparent and accessible physics, can be used to reproduce the general 1D behavior presented throughout the original MagLIF paper [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)]. We also discuss some important physics insights gained as a result of developing this model, such as the dependence of radiative loss rates on the radial fraction of the fuel that is preheated.

  19. An evaluation of the thermal and mechanical properties of a salt-modified polyvinyl alcohol hydrogel for a knee meniscus application.

    PubMed

    Curley, Colin; Hayes, Jennifer C; Rowan, Neil J; Kennedy, James E

    2014-12-01

    The treatment of irreparable knee meniscus tears remains a major challenge for the orthopaedic community. The main purpose of this research was to analyse the mechanical properties and thermal behaviour of a salt-modified polyvinyl alcohol hydrogel, in order to assess its potential for use as an artificial meniscal implant. Aqueous polyvinyl alcohol was treated with a sodium sulphate solution to precipitate out the polyvinyl alcohol, resulting in a pliable hydrogel. The freeze-thaw process, a strictly physical method of crosslinking, was employed to crosslink the hydrogel. Physical crosslinks in the form of crystalline regions were induced within the hydrogel structure, which resulted in a large increase in mechanical resistance. Results showed that the optimal sodium sulphate addition of 6.6% (w/v) Na2SO4 in 8.33% (w/v) PVA causes the PVA to precipitate out of its solution. The effect of multiple freeze-thaw cycles was also investigated. The investigation comprised a variety of well-established characterisation techniques, such as differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FTIR), mechanical analysis, rheometry and swelling studies. DSC analysis showed that samples crosslinked using the freeze-thaw process display a thermal shift due to increased crosslink density. FTIR analysis confirmed that crystallisation is present at 1142 cm(-1) and also showed that no chemical alteration occurs when PVA is treated with sodium sulphate. Swelling studies indicated that PVA/sodium sulphate hydrogels absorb less water than untreated hydrogels due to the increased amount of PVA present. Compressive strength analysis of PVA/sodium sulphate hydrogels prepared at -80°C displayed average maximum loads of 2472 N, 2482.4 N and 2476 N over 1, 3 and 5 freeze-thaw cycles, respectively. Mechanical analysis of the hydrogel indicated that the material is thermally stable and resistant to breakdown by compressive force.
These properties are crucial for potential use as a meniscus or cartilage replacement. As such, the results of this study indicate that polyvinyl alcohol modified with sodium sulphate may be a suitable material for the construction of an artificial knee meniscus. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Geometric scaling of artificial hair sensors for flow measurement under different conditions

    NASA Astrophysics Data System (ADS)

    Su, Weihua; Reich, Gregory W.

    2017-03-01

    Artificial hair sensors (AHSs) have been developed for prediction of the local flow speed and aerodynamic force around an airfoil and subsequent application in vibration control of the airfoil. Usually, a specific sensor design is only sensitive to flow speeds within its operating flow measurement region. This paper aims at extending this flow measurement concept to different flow speed conditions by properly sizing the parameters of the sensors, including the dimensions of the artificial hair, capillary, and carbon nanotubes (CNTs) that make up the sensor design, based on a baseline sensor design and its working flow condition. In doing so, the glass fiber hair is modeled as a cantilever beam with an elastic foundation, subject to the distributed aerodynamic drag over the length of the hair. Hair length and diameter, capillary depth, and CNT height are scaled by keeping the maximum compressive strain of the CNTs constant for different sensors under different speed conditions. Numerical studies demonstrate the feasibility of the geometric scaling methodology by designing AHSs for aircraft with different dimensions and flight conditions, starting from the same baseline sensor. Finally, the operating bandwidth of the scaled sensors is explored.

  1. Texture characterization for joint compression and classification based on human perception in the wavelet domain.

    PubMed

    Fahmy, Gamal; Black, John; Panchanathan, Sethuraman

    2006-06-01

    Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG-7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.

  2. A Novel Method of Newborn Chest Compression: A Randomized Crossover Simulation Study.

    PubMed

    Smereka, Jacek; Szarpak, Lukasz; Ladny, Jerzy R; Rodriguez-Nunez, Antonio; Ruetzler, Kurt

    2018-01-01

    Objective: To compare a novel two-thumb chest compression technique with standard techniques during newborn resuscitation performed by novice physicians in terms of median depth of chest compressions, degree of full chest recoil, and effective compression efficacy. Patients and Methods: A total of 74 novice physicians with less than 1 year of work experience participated in the study. They performed chest compressions using three techniques: (A) The new two-thumb technique (nTTT). This novel method of chest compressions in an infant consists of using two thumbs directed at an angle of 90° to the chest while closing the fingers of both hands in a fist. (B) TFT. With this method, the rescuer compresses the sternum with the tips of two fingers. (C) TTHT. Two thumbs are placed over the lower third of the sternum, with the fingers encircling the torso and supporting the back. Results: The median depth of chest compressions was 3.8 (IQR, 3.7-3.9) cm for nTTT, 2.1 (IQR, 1.7-2.5) cm for TFT, and 3.6 (IQR, 3.5-3.8) cm for TTHT. There was a significant difference between nTTT and TFT, and between TTHT and TFT (p < 0.001), for each time interval during resuscitation. The degree of full chest recoil was 93% (IQR, 91-97) for nTTT, 99% (IQR, 96-100) for TFT, and 90% (IQR, 74-91) for TTHT. There was a statistically significant difference in the degree of complete chest relaxation between nTTT and TFT (p < 0.001), between nTTT and TTHT (p = 0.016), and between TFT and TTHT (p < 0.001). Conclusion: The median chest compression depth for nTTT and TTHT is significantly higher than that for TFT. The degree of full chest recoil was highest for TFT, followed by nTTT and TTHT. The effective compression efficiency with nTTT was higher than with TTHT and TFT. Our novel newborn chest compression method in this manikin study provided adequate chest compression depth and degree of full chest recoil, as well as very good effective compression efficiency. 
Further clinical studies are necessary to confirm these initial results.

  3. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

    Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved, while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that up to 4 to 7.5 fold compression can be obtained with the above methods without any perceptible change in the appearance of video sequences.
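    The staircase procedure used above is a standard psychophysical threshold-estimation loop. The following is a minimal sketch assuming a simple 1-up/1-down rule and averaging the ratios at reversal points; the paper's actual protocol, step sizes, and observer interface are not specified here, and the `visible` oracle is a stand-in for a human response:

    ```python
    def staircase_threshold(visible, start, step, trials=60):
        """1-up/1-down staircase: raise the compression ratio after an
        'imperceptible' response, lower it after a 'perceptible' one.
        `visible(ratio)` -> True if the observer saw a difference."""
        ratio = start
        reversals = []
        last_dir = 0
        for _ in range(trials):
            seen = visible(ratio)
            direction = -1 if seen else +1     # back off when artefacts are seen
            if last_dir and direction != last_dir:
                reversals.append(ratio)        # record each turning point
            last_dir = direction
            ratio = max(1.0, ratio + direction * step)
        # threshold estimate: mean ratio over the reversal points
        return sum(reversals) / len(reversals)

    # Simulated observer whose true visibility threshold is a ratio of 6.0
    est = staircase_threshold(lambda r: r > 6.0, start=2.0, step=0.5)
    ```

    With a noiseless simulated observer the staircase oscillates around the true threshold, so the reversal average lands close to 6.0; real observers add response noise, which the reversal averaging is designed to smooth out.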

  4. Application of PDF methods to compressible turbulent flows

    NASA Astrophysics Data System (ADS)

    Delarue, B. J.; Pope, S. B.

    1997-09-01

    A particle method applying the probability density function (PDF) approach to turbulent compressible flows is presented. The method is applied to several turbulent flows, including the compressible mixing layer, and good agreement is obtained with experimental data. The PDF equation is solved using a Lagrangian/Monte Carlo method. To accurately account for the effects of compressibility on the flow, the velocity PDF formulation is extended to include thermodynamic variables such as the pressure and the internal energy. The mean pressure, the determination of which has been the object of active research over the last few years, is obtained directly from the particle properties. It is therefore not necessary to link the PDF solver with a finite-volume type solver. The stochastic differential equations (SDE) which model the evolution of particle properties are based on existing second-order closures for compressible turbulence, limited in application to low turbulent Mach number flows. Tests are conducted in decaying isotropic turbulence to compare the performances of the PDF method with the Reynolds-stress closures from which it is derived, and in homogeneous shear flows, at which stage comparison with direct numerical simulation (DNS) data is conducted. The model is then applied to the plane compressible mixing layer, reproducing the well-known decrease in the spreading rate with increasing compressibility. It must be emphasized that the goal of this paper is not as much to assess the performance of models of compressibility effects, as it is to present an innovative and consistent PDF formulation designed for turbulent inhomogeneous compressible flows, with the aim of extending it further to deal with supersonic reacting flows.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlanderer, Stefan C., E-mail: stefan.schlanderer@unimelb.edu.au; Weymouth, Gabriel D., E-mail: G.D.Weymouth@soton.ac.uk; Sandberg, Richard D., E-mail: richard.sandberg@unimelb.edu.au

    This paper introduces a virtual boundary method for compressible viscous fluid flow that is capable of accurately representing moving bodies in flow and aeroacoustic simulations. The method is the compressible extension of the boundary data immersion method (BDIM; Maertens and Weymouth, 2015). The BDIM equations for the compressible Navier–Stokes equations are derived and the accuracy of the method for the hydrodynamic representation of solid bodies is demonstrated with challenging test cases, including a fully turbulent boundary layer flow and a supersonic instability wave. In addition we show that the compressible BDIM is able to accurately represent noise radiation from moving bodies and flow-induced noise generation without any penalty in allowable time step.

  6. Least Median of Squares Filtering of Locally Optimal Point Matches for Compressible Flow Image Registration

    PubMed Central

    Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
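    The outlier-removal idea above can be illustrated compactly. The sketch below is our own simplification, not the paper's full forward-search procedure: each candidate model is a single displacement vector, the model whose squared residuals have the smallest median wins, and point matches far from it are discarded as erroneous local minimizers:

    ```python
    import random

    def lms_filter(displacements, thresh_scale=2.5, seed=1):
        """Least-median-of-squares filtering of 2-D displacement estimates
        (illustrative sketch under the assumptions stated above)."""
        rng = random.Random(seed)

        def sq_res(model, d):
            return (d[0] - model[0]) ** 2 + (d[1] - model[1]) ** 2

        best, best_med = None, float("inf")
        for model in rng.sample(displacements, min(20, len(displacements))):
            res = sorted(sq_res(model, d) for d in displacements)
            med = res[len(res) // 2]           # median of squared residuals
            if med < best_med:
                best, best_med = model, med
        # cutoff derived from the robust median scale estimate
        cutoff = thresh_scale ** 2 * max(best_med, 1e-12)
        return [d for d in displacements if sq_res(best, d) <= cutoff]

    good = [(1.0 + 0.1 * i, 2.0) for i in range(8)]   # coherent motion near (1, 2)
    bad = [(9.0, -5.0), (12.0, 7.0)]                  # spurious grid-search matches
    inliers = lms_filter(good + bad)
    ```

    Because the median (unlike the mean) is unaffected by up to half the residuals being arbitrarily large, the two spurious matches cannot pull the fitted model toward themselves and are filtered out.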

  7. Three-dimensional numerical simulation for plastic injection-compression molding

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; Yu, Wenjie; Liang, Junjie; Lang, Jianlin; Li, Dequn

    2018-03-01

    Compared with conventional injection molding, injection-compression molding can mold optical parts with higher precision and lower flow residual stress. However, the melt flow process in a closed cavity becomes more complex because of the moving cavity boundary during compression and the nonlinear problems caused by the non-Newtonian polymer melt. In this study, a 3D simulation method was developed for injection-compression molding. In this method, an arbitrary Lagrangian-Eulerian formulation was introduced to model the moving-boundary flow problem in the compression stage. The non-Newtonian characteristics and compressibility of the polymer melt were considered. The melt flow and pressure distribution in the cavity were investigated by using the proposed simulation method and compared with those of injection molding. Results reveal that the fountain flow effect becomes significant when the cavity thickness increases during compression. The back flow also plays an important role in the flow pattern and redistribution of cavity pressure. The discrepancy in pressures at different points along the flow path is complicated, rather than monotonically decreasing as in injection molding.

  8. Method for compression of binary data

    DOEpatents

    Berlin, G.J.

    1996-03-26

    The disclosed method for compression of a series of data bytes, based on LZSS compression, provides faster decompression of the stored data. The method involves the creation of a flag bit buffer in a random access memory device for temporary storage of flag bits generated during normal LZSS-based compression. The flag bit buffer stores the flag bits separately from their corresponding pointers and uncompressed data bytes until all input data has been read. Then, the flag bits are appended to the compressed output stream of data. Decompression can be performed much faster because bit manipulation is only required when reading the flag bits and not when reading uncompressed data bytes and pointers. Uncompressed data is read using byte-length instructions and pointers are read using word instructions, thus reducing the time required for decompression. 5 figs.
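    The separate-flag-buffer layout can be sketched in a toy LZSS variant. The stream layout below (a 4-byte token count, byte-aligned tokens, then packed flag bytes at the tail) and the one-byte offset/length fields are our own simplifications for illustration, not the patent's encoding; the point is that the decompressor touches individual bits only in the flag area, never inside the token stream:

    ```python
    import struct

    MIN_MATCH, WINDOW = 3, 255

    def compress(data: bytes) -> bytes:
        tokens, flags = bytearray(), []
        i = 0
        while i < len(data):
            best_len, best_off = 0, 0
            for j in range(max(0, i - WINDOW), i):   # brute-force window search
                l = 0
                while i + l < len(data) and data[j + l] == data[i + l] and l < 255:
                    l += 1
                if l > best_len:
                    best_len, best_off = l, i - j
            if best_len >= MIN_MATCH:
                flags.append(1)                       # flag 1: (offset, length) pointer
                tokens += bytes((best_off, best_len))
                i += best_len
            else:
                flags.append(0)                       # flag 0: literal byte
                tokens.append(data[i])
                i += 1
        flag_bytes = bytearray((len(flags) + 7) // 8)
        for n, f in enumerate(flags):                 # pack flags, appended at the end
            if f:
                flag_bytes[n // 8] |= 1 << (n % 8)
        return struct.pack(">I", len(flags)) + bytes(tokens) + bytes(flag_bytes)

    def decompress(blob: bytes) -> bytes:
        ntok = struct.unpack(">I", blob[:4])[0]
        nflag = (ntok + 7) // 8
        tokens, flag_bytes = blob[4:len(blob) - nflag], blob[len(blob) - nflag:]
        out, pos = bytearray(), 0
        for n in range(ntok):
            if flag_bytes[n // 8] >> (n % 8) & 1:     # bit work confined to flag area
                off, length = tokens[pos], tokens[pos + 1]
                for _ in range(length):
                    out.append(out[-off])             # overlap-safe back copy
                pos += 2
            else:
                out.append(tokens[pos])
                pos += 1
        return bytes(out)

    data = b"the quick brown fox " * 12
    assert decompress(compress(data)) == data
    ```

    Because literals and pointers stay byte-aligned, the decompressor reads them with plain byte/word loads, which is the speed advantage the patent claims over interleaving a flag bit before every token.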

  9. Compression Frequency Choice for Compression Mass Gauge Method and Effect on Measurement Accuracy

    NASA Astrophysics Data System (ADS)

    Fu, Juan; Chen, Xiaoqian; Huang, Yiyong

    2013-12-01

    Gauging the liquid fuel mass in a tank on a spacecraft under microgravity conditions is difficult. Without the presence of strong buoyancy, the configuration of the liquid and gas in the tank is uncertain, and more than one bubble may exist in the liquid part. All these factors affect the measurement accuracy of a liquid mass gauge, especially for a method called the Compression Mass Gauge (CMG). Four resonance sources affect the choice of compression frequency for the CMG method: structural resonance, liquid sloshing, transducer resonance and bubble resonance. A ground experimental apparatus was designed and built to validate the gauging method and the influence of different compression frequencies at different fill levels on the measurement accuracy. Harmonic phenomena should be considered during filter design when processing test data. Results demonstrate that the ground experiment system performs well with high accuracy and that the measurement accuracy increases as the compression frequency climbs at low fill levels. However, low compression frequencies are the better choice for high fill levels. Liquid sloshing degrades the measurement accuracy when the surface is excited into waves by external disturbance at the liquid natural frequency. The measurement accuracy is still acceptable at small vibration amplitudes.
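    The physical principle behind compression mass gauging can be sketched numerically. This is a simplified textbook model, not the paper's apparatus: a small imposed volume oscillation dV produces a pressure response dP carried almost entirely by the gas ullage (the liquid being nearly incompressible), and for adiabatic compression p·V^γ = const gives dP/p0 ≈ γ·dV/V_gas, from which the gas volume and hence the liquid mass follow; all numbers in the example are invented:

    ```python
    def liquid_mass_cmg(v_tank_m3, dV_m3, p0_pa, dP_pa, rho_kg_m3, gamma=1.4):
        """Infer liquid mass from the measured pressure amplitude dP produced
        by a known volume oscillation dV (idealized adiabatic-gas model)."""
        v_gas = gamma * p0_pa * dV_m3 / dP_pa    # ullage volume from dP/p0 = gamma*dV/V
        v_liquid = v_tank_m3 - v_gas
        return rho_kg_m3 * v_liquid

    # Hypothetical example: 1 m^3 tank, 80% full of fuel at 800 kg/m^3,
    # ullage gas at 200 kPa, 1e-4 m^3 compression stroke.
    v_gas_true = 0.2
    dP = 1.4 * 200e3 * 1e-4 / v_gas_true         # forward model of the response
    mass = liquid_mass_cmg(1.0, 1e-4, 200e3, dP, 800.0)
    ```

    The resonance discussion in the abstract matters precisely because any structural, sloshing, transducer or bubble resonance near the drive frequency distorts the measured dP amplitude that this inversion depends on.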

  10. Chapter 22: Compressed Air Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Benton, Nathanael; Burns, Patrick

    Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: High-efficiency/variable speed drive (VSD) compressor replacing modulating, load/unload, or constant-speed compressor; and Compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

  11. Artificial enzymes with protein scaffolds: structural design and modification.

    PubMed

    Matsuo, Takashi; Hirota, Shun

    2014-10-15

    Recent developments in biochemical experimental techniques and bioinformatics have enabled us to create a variety of artificial biocatalysts with protein scaffolds (namely 'artificial enzymes'). The construction methods for these catalysts include genetic mutation, chemical modification using synthetic molecules and/or a combination of these methods. Designed evolution strategies based on the structural information of host proteins have become increasingly popular as an effective approach to construct artificial protein-based biocatalysts with desired reactivities. From the viewpoint of applying artificial enzymes to organic synthesis, recently constructed artificial enzymes mediating oxidation, reduction and C-C bond formation/cleavage are introduced in this review article. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Compression of next-generation sequencing quality scores using memetic algorithm

    PubMed Central

    2014-01-01

    Background The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files, and designs a compression codebook using MA-based multimodal optimization. The input data is then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains a higher compression ratio than the other state-of-the-art methods. Particularly, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747

  13. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also provided are methods for generating a rate-distortion-optimal quantization table, for using discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.

  14. An effective and efficient compression algorithm for ECG signals with irregular periods.

    PubMed

    Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son

    2006-06-01

    This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform existing methods in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, this algorithm is insensitive to irregular ECG periods. Thus either the irregular ECG signals or the QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, this algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance the compression performance in irregular ECG cases.
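    The 1-D-to-2-D preprocessing pipeline can be sketched end to end. Each stage below is a deliberately naive stand-in for the paper's component (local-maximum thresholding instead of a real QRS detector, zero-padding for length equalization), so the sketch shows only the data flow, not the published algorithm:

    ```python
    def ecg_to_2d(signal, thresh):
        """Convert a 1-D ECG trace into a 2-D beat matrix: detect peaks,
        cut beat-to-beat segments, sort by period, equalize lengths."""
        # naive QRS detection: local maxima above an amplitude threshold
        peaks = [i for i in range(1, len(signal) - 1)
                 if signal[i] >= thresh
                 and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]]
        # segmentation: one row per beat, from one peak to the next
        beats = [signal[a:b] for a, b in zip(peaks, peaks[1:])]
        beats.sort(key=len)                              # period sorting
        width = max(len(b) for b in beats)
        return [b + [0.0] * (width - len(b)) for b in beats]  # length equalization

    # Synthetic trace: unit spikes with irregular periods of 5, 7 and 6 samples
    sig = [0.0] * 30
    for p in (2, 7, 14, 20):
        sig[p] = 1.0
    img = ecg_to_2d(sig, 0.5)
    ```

    Sorting beats by period places similar-length rows next to each other, which is exactly what makes the resulting image smoother in the vertical direction and hence cheaper for a 2-D codec such as JPEG2000 to encode.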

  15. Flexible hemispheric microarrays of highly pressure-sensitive sensors based on breath figure method.

    PubMed

    Wang, Zhihui; Zhang, Ling; Liu, Jin; Jiang, Hao; Li, Chunzhong

    2018-05-30

    Recently, flexible pressure sensors featuring high sensitivity, broad sensing range and real-time detection have attracted great attention owing to their crucial role in the development of artificial intelligent devices and healthcare systems. Herein, highly sensitive pressure sensors based on hemisphere-microarray flexible substrates are fabricated via inversely templating honeycomb structures derived from a facile and static breath figure process. The interlocked and subtle microstructures greatly improve the sensing characteristics and compressibility of the as-prepared pressure sensor, endowing it with a sensitivity as high as 196 kPa^(-1) and a wide pressure sensing range (0-100 kPa), as well as other superior performance, including a low detection limit of 0.5 Pa, fast response time (<26 ms) and high reversibility (>10 000 cycles). Based on this outstanding sensing performance, the potential capability of our pressure sensor in capturing physiological information and recognizing speech signals has been demonstrated, indicating promising applications in wearable and intelligent electronics.

  16. Strong convective storm nowcasting using a hybrid approach of convolutional neural network and hidden Markov model

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Jiang, Ling; Han, Lei

    2018-04-01

    Convective storm nowcasting refers to the prediction of convective weather initiation, development, and decay over a very short term (typically 0-2 h). Despite marked progress over the past years, severe convective storm nowcasting remains a challenge. With the boom of machine learning, techniques such as the convolutional neural network (CNN) have been applied successfully in many fields. In this paper, we build a severe convective weather nowcasting system based on a CNN and a hidden Markov model (HMM) using reanalysis meteorological data. The goal of convective storm nowcasting is to predict whether there will be a convective storm in 30 min. We compress the VDRAS reanalysis data to low-dimensional data with the CNN as the observation vector of the HMM, then obtain the development trend of strong convective weather in the form of a time series. Results show that our method can extract robust features without any artificial selection of features, and can capture the development trend of strong convective storms.

  17. Comparative study of numerical schemes of TVD3, UNO3-ACM and optimized compact scheme

    NASA Technical Reports Server (NTRS)

    Lee, Duck-Joo; Hwang, Chang-Jeon; Ko, Duck-Kon; Kim, Jae-Wook

    1995-01-01

    Three different schemes are employed to solve the benchmark problem. The first one is a conventional TVD-MUSCL (Monotone Upwind Schemes for Conservation Laws) scheme. The second scheme is a UNO3-ACM (Uniformly Non-Oscillatory Artificial Compression Method) scheme. The third scheme is an optimized compact finite difference scheme modified by us: 4th-order Runge-Kutta time stepping and 4th-order pentadiagonal compact spatial discretization with maximum resolution characteristics. The problems of category 1 are solved by using the second (UNO3-ACM) and third (Optimized Compact) schemes. The problems of category 2 are solved by using the first (TVD3) and second (UNO3-ACM) schemes. The problem of category 5 is solved by using the first (TVD3) scheme. It can be concluded from the present calculations that the Optimized Compact scheme and the UNO3-ACM show good resolutions for category 1 and category 2, respectively.

  18. Fast and accurate face recognition based on image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Blasch, Erik

    2017-05-01

    Image compression is desired for many image-related applications, especially for network-based applications with bandwidth and storage constraints. Typical reports from the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands. The PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed with the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed with three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
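    The compression-based matching idea is closely related to the normalized compression distance and can be sketched with a general-purpose compressor. In the sketch below, zlib stands in for JPEG, byte strings stand in for images, and the particular ratio formula is our own illustrative choice rather than the paper's CCR definition: a gallery item similar to the probe makes the concatenated "mixed" input highly compressible, raising the score.

    ```python
    import zlib

    def ccr(probe: bytes, gallery: bytes) -> float:
        """Composite-compression-ratio-style score: separate compressed
        sizes divided by the compressed size of the concatenation."""
        c = lambda b: len(zlib.compress(b, 9))
        return (c(probe) + c(gallery)) / c(probe + gallery)

    def match(probe, gallery_set):
        # the largest score corresponds to the matched identity
        return max(gallery_set, key=lambda g: ccr(probe, gallery_set[g]))

    # Toy "images": repetitive byte strings standing in for pixel data
    probe = b"dark-eyes oval-face small-nose " * 40
    gallery = {
        "subject_a": b"dark-eyes oval-face small-nose " * 40,
        "subject_b": b"light-eyes round-face wide-nose " * 40,
    }
    best = match(probe, gallery)
    ```

    When probe and gallery share content, the compressor encodes the second copy almost for free, so the concatenation compresses to little more than one copy and the ratio approaches 2; for unrelated inputs it stays near 1.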

  19. Cardiopulmonary Resuscitation in Adults and Children With Mechanical Circulatory Support: A Scientific Statement From the American Heart Association.

    PubMed

    Peberdy, Mary Ann; Gluck, Jason A; Ornato, Joseph P; Bermudez, Christian A; Griffin, Russell E; Kasirajan, Vigneshwar; Kerber, Richard E; Lewis, Eldrin F; Link, Mark S; Miller, Corinne; Teuteberg, Jeffrey J; Thiagarajan, Ravi; Weiss, Robert M; O'Neil, Brian

    2017-06-13

    Cardiac arrest in patients on mechanical support is a new phenomenon brought about by the increased use of this therapy in patients with end-stage heart failure. This American Heart Association scientific statement highlights the recognition and treatment of cardiovascular collapse or cardiopulmonary arrest in an adult or pediatric patient who has a ventricular assist device or total artificial heart. Specific, expert consensus recommendations are provided for the role of external chest compressions in such patients. © 2017 American Heart Association, Inc.

  20. On the Vanishing Dissipation Limit for the Full Navier-Stokes-Fourier System with Non-slip Condition

    NASA Astrophysics Data System (ADS)

    Wang, Y.-G.; Zhu, S.-Y.

    2018-06-01

    In this paper, we study the vanishing dissipation limit problem for the full Navier-Stokes-Fourier equations with non-slip boundary condition in a smooth bounded domain Ω ⊆ R^3. By using Kato's idea (Math Sci Res Inst Publ 2:85-98, 1984) of constructing an artificial boundary layer, we obtain a sufficient condition for the convergence of the solution of the full Navier-Stokes-Fourier equations to the solution of the compressible Euler equations in the energy space L^2(Ω) uniformly in time.

  1. Application of artificial intelligence to the management of urological cancer.

    PubMed

    Abbod, Maysam F; Catto, James W F; Linkens, Derek A; Hamdy, Freddie C

    2007-10-01

    Artificial intelligence techniques, such as artificial neural networks, Bayesian belief networks and neuro-fuzzy modeling systems, are complex mathematical models based on the human neuronal structure and thinking. Such tools are capable of generating data-driven models of biological systems without making assumptions based on statistical distributions. A large body of research on the use of artificial intelligence in urology has been reported. We reviewed the basic concepts behind artificial intelligence techniques and explored the applications of this new dynamic technology in various aspects of urological cancer management. A detailed and systematic review of the literature was performed using the MEDLINE and Inspec databases to discover reports using artificial intelligence in urological cancer. The characteristics of machine learning and their implementation were described, and reports of artificial intelligence use in urological cancer were reviewed. While most researchers in this field were found to focus on artificial neural networks to improve the diagnosis, staging and prognostic prediction of urological cancers, some groups are exploring other techniques, such as expert systems and neuro-fuzzy modeling systems. Compared with traditional regression statistics, artificial intelligence methods appear to be accurate and more explorative for analyzing large data cohorts. Furthermore, they allow individualized prediction of disease behavior. Each artificial intelligence method has characteristics that make it suitable for different tasks. The lack of transparency of artificial neural networks hinders global scientific community acceptance of this method, but this can be overcome by neuro-fuzzy modeling systems.

  2. Compression of the Global Land 1-km AVHRR dataset

    USGS Publications Warehouse

    Kess, B. L.; Steinwand, D.R.; Reichenbach, S.E.

    1996-01-01

    Large datasets, such as the Global Land 1-km Advanced Very High Resolution Radiometer (AVHRR) Data Set (Eidenshink and Faundeen 1994), require compression methods that provide efficient storage and quick access to portions of the data. A method of lossless compression is described that provides multiresolution decompression within geographic subwindows of multi-spectral, global, 1-km AVHRR images. The compression algorithm segments each image into blocks and compresses each block in a hierarchical format. Users can access the data by specifying either a geographic subwindow or the whole image and a resolution (1, 2, 4, 8, or 16 km). The Global Land 1-km AVHRR data are presented in the Interrupted Goode's Homolosine map projection. These images contain masked regions for non-land areas, which comprise 80 per cent of the image. A quadtree algorithm is used to compress the masked regions. The compressed region data are stored separately from the compressed land data. Results show that the masked regions compress to 0.143 per cent of the bytes they occupy in the test image and the land areas are compressed to 33.2 per cent of their original size. The entire image is compressed hierarchically to 6.72 per cent of the original image size, reducing the data from 9.05 gigabytes to 623 megabytes. These results are compared to the first-order entropy of the residual image produced with lossless Joint Photographic Experts Group predictors. Compression results are also given for Lempel-Ziv-Welch (LZW) and LZ77, the algorithms used by UNIX compress and GZIP respectively. In addition to providing multiresolution decompression of geographic subwindows of the data, the hierarchical approach and the use of quadtrees for storing the masked regions give a marked improvement over these popular methods.
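    The quadtree treatment of the masked regions is easy to illustrate. The sketch below is a minimal recursive quadtree encoder for a binary land/non-land mask, not the USGS implementation: any uniform block, however large, collapses to a single leaf, which is why an image that is 80 per cent ocean mask costs almost nothing to store.

    ```python
    def encode(mask, x=0, y=0, size=None):
        """Quadtree-encode a square binary mask (size must be a power of 2):
        uniform blocks become 0/1 leaves, mixed blocks split into quadrants."""
        if size is None:
            size = len(mask)
        vals = {mask[y + j][x + i] for j in range(size) for i in range(size)}
        if len(vals) == 1:
            return vals.pop()                        # uniform leaf
        h = size // 2
        return [encode(mask, x, y, h),     encode(mask, x + h, y, h),
                encode(mask, x, y + h, h), encode(mask, x + h, y + h, h)]

    def count_leaves(node):
        return 1 if not isinstance(node, list) else sum(map(count_leaves, node))

    # 8x8 mask: all "ocean" (1) except a 2x2 "land" block in the corner
    mask = [[1] * 8 for _ in range(8)]
    for j in range(2):
        for i in range(2):
            mask[j][i] = 0
    tree = encode(mask)   # 64 pixels collapse to 7 quadtree leaves
    ```

    Storing the quadtree separately from the land-pixel data mirrors the paper's design: the mask structure decodes quickly on its own, and only the comparatively small land fraction needs the heavier hierarchical codec.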

  3. Artificial insect wings with biomimetic wing morphology and mechanical properties.

    PubMed

    Liu, Zhiwei; Yan, Xiaojun; Qi, Mingjing; Zhu, Yangsheng; Huang, Dawei; Zhang, Xiaoyong; Lin, Liwei

    2017-09-26

    The pursuit of a high lift force for insect-scale flapping-wing micro aerial vehicles (FMAVs) requires that their artificial wings possess biomimetic wing features which are close to those of their natural counterpart. In this work, we present both fabrication and testing methods for artificial insect wings with biomimetic wing morphology and mechanical properties. The artificial cicada (Hyalessa maculaticollis) wing is fabricated through a high precision laser cutting technique and a bonding process of multilayer materials. Through controlling the shape of the wing venation, the fabrication method can achieve three-dimensional wing architecture, including cambers or corrugations. Besides the artificial cicada wing, the proposed fabrication method also shows a promising versatility for diverse wing types. Considering the artificial cicada wing's characteristics of small size and light weight, special mechanical testing systems are designed to investigate its mechanical properties. Flexural stiffness, maximum deformation rate and natural frequency are measured and compared with those of its natural counterpart. Test results reveal that the mechanical properties of the artificial cicada wing depend strongly on its vein thickness, which can be used to optimize an artificial cicada wing's mechanical properties in the future. As such, this work provides a new form of artificial insect wings which can be used in the field of insect-scale FMAVs.

  4. Hot-compress: A new postdeposition treatment for ZnO-based flexible dye-sensitized solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haque Choudhury, Mohammad Shamimul, E-mail: shamimul129@gmail.com; Department of Electrical and Electronic Engineering, International Islamic University Chittagong, b154/a, College Road, Chittagong 4203; Kishi, Naoki

    2016-08-15

    Highlights: • A new postdeposition treatment named hot-compress is introduced. • Hot-compression gives a homogeneous compact-layer ZnO photoanode. • I-V and EIS analysis data confirm the efficacy of this method. • Charge transport resistance was reduced by the application of hot-compression. - Abstract: This article introduces a new postdeposition treatment named hot-compress for flexible zinc oxide-based dye-sensitized solar cells. This postdeposition treatment includes the application of compression pressure at an elevated temperature. The optimum compression pressure of 130 MPa at an optimum compression temperature of 70 °C gives better photovoltaic performance compared to the conventional cells. The aptness of this method was confirmed by investigating scanning electron microscopy images, X-ray diffraction, current-voltage and electrochemical impedance spectroscopy analysis of the prepared cells. Proper heating during compression lowers the charge transport resistance and lengthens the electron lifetime of the device. As a result, the overall power conversion efficiency of the device was improved by about 45% compared to the conventional room-temperature compressed cell.

  5. Introducing a novel gravitation-based high-velocity compaction analysis method for pharmaceutical powders.

    PubMed

    Tanner, Timo; Antikainen, Osmo; Ehlers, Henrik; Yliruusi, Jouko

    2017-06-30

    With modern tableting machines large amounts of tablets are produced with high output. Consequently, methods to examine powder compression in a high-velocity setting are in demand. In the present study, a novel gravitation-based method was developed to examine powder compression. A steel bar is dropped on a punch to compress microcrystalline cellulose and starch samples inside the die. The distance of the bar is being read by a high-accuracy laser displacement sensor which provides a reliable distance-time plot for the bar movement. In-die height and density of the compact can be seen directly from this data, which can be examined further to obtain information on velocity, acceleration and energy distribution during compression. The energy consumed in compact formation could also be seen. Despite the high vertical compression speed, the method was proven to be cost-efficient, accurate and reproducible. Copyright © 2017 Elsevier B.V. All rights reserved.
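The distance-time trace described above supports exactly this kind of post-processing. As a rough sketch (generic finite differences with assumed names, not the authors' implementation), velocity and kinetic energy follow directly from the sampled bar positions:

```python
def bar_kinematics(times, positions, mass):
    """Estimate bar velocity (forward differences) and kinetic energy per
    sampling interval from a distance-time trace, as a laser displacement
    sensor would provide."""
    velocities = [(s1 - s0) / (t1 - t0)
                  for (t0, s0), (t1, s1) in zip(zip(times, positions),
                                                zip(times[1:], positions[1:]))]
    energies = [0.5 * mass * v ** 2 for v in velocities]
    return velocities, energies
```

Acceleration would follow by differencing the velocities once more, which is how the energy distribution during compression can be examined from the same plot.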

  6. Internal combustion engine for natural gas compressor operation

    DOEpatents

    Hagen, Christopher; Babbitt, Guy

    2016-12-27

    This application concerns systems and methods for compressing natural gas with an internal combustion engine. In a representative embodiment, a method is featured which includes placing a first cylinder of an internal combustion engine in a compressor mode, and compressing a gas within the first cylinder, using the cylinder as a reciprocating compressor. In some embodiments a compression check valve system is used to regulate pressure and flow within cylinders of the engine during a compression process.

  7. Artificial submicron or nanometer speckle fabricating technique and electron microscope speckle photography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Zhanwei; Xie Huimin; Fang Daining

    2007-03-15

    In this article, a novel artificial submicron- or nanometer-scale speckle fabricating technique is proposed by taking advantage of submicron or nanometer particles. In the technique, submicron or nanometer particles were adhered to an object surface by using an ultrasonic dispersing technique. The particles on the object surface can be regarded as submicron or nanometer speckle when using a scanning electron microscope at a specific magnification. In addition, an electron microscope speckle photography (EMSP) method is developed to measure in-plane submicron or nanometer deformation of the object coated with the artificial submicron or nanometer speckles. The principle of the artificial submicron or nanometer speckle fabricating technique and the EMSP method are discussed in detail in this article. Some typical applications of this method are offered. The experimental results verified that the artificial submicron or nanometer speckle fabricating technique and the EMSP method are feasible.

  8. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
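The codebook-training step at the heart of vector quantization can be sketched with plain k-means. This is a minimal illustration under assumed names, not the paper's modified energy-based variant; centroids are initialized deterministically from the first k training vectors:

```python
def train_codebook(vectors, k, iters=10):
    """Train a VQ codebook with plain k-means (Lloyd iterations)."""
    centroids = [list(v) for v in vectors[:k]]   # deterministic init
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:                        # assign to nearest centroid
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(v, centroids[c])))
            groups[j].append(v)
        for c, g in enumerate(groups):           # recompute centroids
            if g:
                centroids[c] = [sum(col) / len(g) for col in zip(*g)]
    return centroids

def quantize(v, codebook):
    """Code a vector as the index of its nearest codebook entry."""
    return min(range(len(codebook)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(v, codebook[c])))
```

Coding an image block then stores only the codebook index, which is where the compression comes from; the decoder looks the vector back up in the shared codebook.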

  10. Restless behavior increases over time, but not with compressibility of the flooring surface, during forced standing at the feed bunk.

    PubMed

    Krebs, N; Berry, S L; Tucker, C B

    2011-01-01

    Interest in the use of rubber flooring in freestall barns has increased, but little is known about which design features of these surfaces are important for cattle. In 2 experiments, we evaluated how the type and compressibility of the flooring surface in front of the feed bunk influenced the behavioral response to 4 h of forced standing after morning milking. Two flooring types were compared: rubber and concrete. Rubber was tested at 3 levels of compressibility: 2, 4, and 35 times as compressible as concrete. Four hours of forced standing was evaluated because it mimicked conditions that can occur on dairies, particularly when waiting for artificial insemination or veterinary treatment. The effects of cow weight and hoof surface area, gait score, and hoof health on the response to treatment were evaluated. Restless behavior, as measured by number of steps, almost doubled over the 4 h of forced standing, regardless of flooring material. Cows lay down, on average, within 5 min after access to the lying area was provided. These results indicate that the 4 h of forced standing was uncomfortable. No differences in restless behavior were observed in association with the type or compressibility of the flooring surface in front of the feed bunk. Cow size, hoof health, and gait score did not consistently explain the response to the flooring treatments or stepping rate, although these populations of animals were generally healthy. It is unclear whether comfort did not differ between the flooring options tested during 4 h of forced standing or whether alternative methodology, such as measuring more subtle shifts in weight, is required to assess design features of rubber flooring. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  11. Compressive sampling by artificial neural networks for video

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt

    2011-06-01

    We describe a smart surveillance strategy for handling novelty changes. Current sensors seem to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that could be pseudo-orthogonal among themselves, and thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to make retrievable graphical indexes. We coined this organized sparseness Compressive Sampling: sensing but skipping over redundancy without altering the original image. Thus, we illustrate with video the survival tactics which animals that roam the Earth use daily. They acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between mathematical Compressive Sensing and this biological mechanism used for survival. We have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed up further, our mixed-signal circuit design of frame differencing is built into on-chip processing hardware. A CMOS trans-conductance amplifier is designed here to generate a linear current output using a pair of differential input voltages from 2 photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" synaptic weight by Hebbian outer products; "read" by inner product and pointwise nonlinear threshold), to localize and track the threat targets.
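In software form, the frame-differencing core of this scheme reduces to a thresholded change mask. This is a toy sketch of the idea only (the abstract describes a mixed-signal circuit, not Python); the function name and threshold parameter are assumptions:

```python
def change_mask(prev_frame, curr_frame, threshold):
    """Organized sparseness from frame differencing: 1 where a pixel changed
    by more than the threshold, 0 where the scene is stagnant (so, as in the
    retina, an unchanged location reports nothing)."""
    return [[1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, curr_frame)]
```

The resulting sparse binary mask is what the abstract proposes storing and indexing, rather than the redundant full frames.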

  12. Adaptive Encoding for Numerical Data Compression.

    ERIC Educational Resources Information Center

    Yokoo, Hidetoshi

    1994-01-01

    Discusses the adaptive compression of computer files of numerical data whose statistical properties are not given in advance. A new lossless coding method for this purpose, which utilizes Adelson-Velskii and Landis (AVL) trees, is proposed. The method is effective to any word length. Its application to the lossless compression of gray-scale images…

  13. A Discriminative Sentence Compression Method as Combinatorial Optimization Problem

    NASA Astrophysics Data System (ADS)

    Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki

    In the study of automatic summarization, the main research topic was `important sentence extraction', but nowadays `sentence compression' is a hot research topic. Conventional sentence compression methods usually transform a given sentence into a parse tree or a dependency tree and modify it to get a shorter sentence. However, this approach is sometimes too rigid. In this paper, we regard sentence compression as a combinatorial optimization problem that extracts an optimal subsequence of words. Hori et al. also proposed a similar method, but they used only a small number of features, and their weights were tuned by hand. We introduce a large number of features, such as part-of-speech bigrams and word position in the sentence. Furthermore, we train the system by discriminative learning. According to our experiments, our method obtained a better score than other methods with statistical significance.
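The compression-as-subsequence-selection formulation can be illustrated with an exhaustive toy search. This sketch uses a hand-supplied bigram scoring function and brute-force enumeration purely for clarity; the paper's system instead learns weights for many features discriminatively and would not enumerate all subsequences:

```python
from itertools import combinations

def compress_sentence(words, score_bigram, target_len):
    """Pick the order-preserving word subsequence of the target length whose
    adjacent-bigram scores sum highest (toy exhaustive search)."""
    best = max(combinations(words, target_len),
               key=lambda sub: sum(score_bigram(a, b)
                                   for a, b in zip(sub, sub[1:])))
    return list(best)
```

`itertools.combinations` yields index-ordered tuples, so word order is preserved, which is exactly the subsequence constraint in the optimization view of compression.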

  14. A novel method to detect ignition angle of diesel

    NASA Astrophysics Data System (ADS)

    Li, Baofu; Peng, Yong; Huang, Hongzhong

    2018-04-01

    This paper is based on the combustion signal collected by a combustion sensor of the piezomagnetic type, taking the question of how to detect the start of diesel combustion as its starting point. It analyzes the operating principle and pressure change of the combustion sensor, the compression peak signal of the diesel engine during the compression process, and several common methods. The author puts forward a new idea: the ignition angle timing can be determined more accurately by the compression peak decomposition method. The method is then compared with several common methods.

  15. Two-level image authentication by two-step phase-shifting interferometry and compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Meng, Xiangfeng; Yin, Yongkai; Yang, Xiulun; Wang, Yurong; Li, Xianye; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-01-01

    A two-level image authentication method is proposed; the method is based on two-step phase-shifting interferometry, double random phase encoding, and compressive sensing (CS) theory, by which the certification image can be encoded into two interferograms. Through discrete wavelet transform (DWT), sparseness processing, Arnold transform, and data compression, two compressed signals can be generated and delivered to two different participants of the authentication system. Only the participant who possesses the first compressed signal attempts to pass the low-level authentication. The application of Orthogonal Matching Pursuit CS reconstruction, inverse Arnold transform, inverse DWT, two-step phase-shifting wavefront reconstruction, and inverse Fresnel transform can result in the output of a remarkable peak in the central location of the nonlinear correlation coefficient distributions of the recovered image and the standard certification image. Then, the other participant, who possesses the second compressed signal, is authorized to carry out the high-level authentication. Therefore, both compressed signals are collected to reconstruct the original meaningful certification image with a high correlation coefficient. Theoretical analysis and numerical simulations verify the feasibility of the proposed method.

  16. Numerical methods for large eddy simulation of acoustic combustion instabilities

    NASA Astrophysics Data System (ADS)

    Wall, Clifton T.

    Acoustic combustion instabilities occur when interaction between the combustion process and acoustic modes in a combustor results in periodic oscillations in pressure, velocity, and heat release. If sufficiently large in amplitude, these instabilities can cause operational difficulties or the failure of combustor hardware. In many situations, the dominant instability is the result of the interaction between a low frequency acoustic mode of the combustor and the large scale hydrodynamics. Large eddy simulation (LES), therefore, is a promising tool for the prediction of these instabilities, since both the low frequency acoustic modes and the large scale hydrodynamics are well resolved in LES. Problems with the tractability of such simulations arise, however, due to the difficulty of solving the compressible Navier-Stokes equations efficiently at low Mach number and due to the large number of acoustic periods that are often required for such instabilities to reach limit cycles. An implicit numerical method for the solution of the compressible Navier-Stokes equations has been developed which avoids the acoustic CFL restriction, allowing for significant efficiency gains at low Mach number, while still resolving the low frequency acoustic modes of interest. In the limit of a uniform grid the numerical method causes no artificial damping of acoustic waves. New, non-reflecting boundary conditions have also been developed for use with the characteristic-based approach of Poinsot and Lele (1992). The new boundary conditions are implemented in a manner which allows for significant reduction of the computational domain of an LES by eliminating the need to perform LES in regions where one-dimensional acoustics significantly affect the instability but details of the hydrodynamics do not. These new numerical techniques have been demonstrated in an LES of an experimental combustor. 
The new techniques are shown to be an efficient means of performing LES of acoustic combustion instabilities and are shown to accurately predict the occurrence and frequency of the dominant mode of the instability observed in the experiment.

  17. The least-squares finite element method for low-mach-number compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Yu, Sheng-Tao

    1994-01-01

    The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, of which incompressible flow is an extreme case. Conventional approaches require special treatments for low-speed flow calculations: finite difference and finite volume methods are based on the use of a staggered grid or a preconditioning technique, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that no such difficulty exists for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated by using the full compressible flow equations.

  18. Data Characterization Using Artificial-Star Tests: Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Hu, Yi; Deng, Licai; de Grijs, Richard; Liu, Qiang

    2011-01-01

    Traditional artificial-star tests are widely applied to photometry in crowded stellar fields. However, to obtain reliable binary fractions (and their uncertainties) of remote, dense, and rich star clusters, one needs to recover huge numbers of artificial stars. Hence, this will consume much computation time for data reduction of the images to which the artificial stars must be added. In this article, we present a new method applicable to data sets characterized by stable, well-defined, point-spread functions, in which we add artificial stars to the retrieved-data catalog instead of to the raw images. Taking the young Large Magellanic Cloud cluster NGC 1818 as an example, we compare results from both methods and show that they are equivalent, while our new method saves significant computational time.
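The catalog-level idea can be caricatured in a few lines: an injected star is "recovered" if its measured magnitude, after photometric scatter, is brighter than the catalog's detection limit. This is a deliberately simplified sketch with assumed names and a Gaussian-scatter assumption, nothing like the full point-spread-function machinery of the paper:

```python
import random

def completeness(mags, limit, sigma, seed=0):
    """Fraction of artificial stars recovered at a detection limit, with
    Gaussian photometric scatter added to each true magnitude (smaller
    magnitude = brighter, so 'recovered' means measured mag < limit)."""
    rng = random.Random(seed)  # seeded for reproducibility
    recovered = sum(1 for m in mags if m + rng.gauss(0.0, sigma) < limit)
    return recovered / len(mags)
```

The point of the paper is that such catalog-level bookkeeping is vastly cheaper than re-reducing images with huge numbers of injected stars, yet gives equivalent completeness estimates when the point-spread function is stable.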

  19. Compression Testing of Textile Composite Materials

    NASA Technical Reports Server (NTRS)

    Masters, John E.

    1996-01-01

    The applicability of existing test methods, which were developed primarily for laminates made of unidirectional prepreg tape, to textile composites is an area of concern. The issue is whether the values measured for the 2-D and 3-D braided, woven, stitched, and knit materials are accurate representations of the true material response. This report provides a review of efforts to establish a compression test method for textile reinforced composite materials. Experimental data have been gathered from several sources and evaluated to assess the effectiveness of a variety of test methods. The effectiveness of the individual test methods to measure the material's modulus and strength is determined. Data are presented for 2-D triaxial braided, 3-D woven, and stitched graphite/epoxy material. However, the determination of a recommended test method and specimen dimensions is based, primarily, on experimental results obtained by the Boeing Defense and Space Group for 2-D triaxially braided materials. They evaluated seven test methods: NASA Short Block, Modified IITRI, Boeing Open Hole Compression, Zabora Compression, Boeing Compression after Impact, NASA ST-4, and a Sandwich Column Test.

  20. Effect of compression pressure on inhalation grade lactose as carrier for dry powder inhalations

    PubMed Central

    Raut, Neha Sureshrao; Jamaiwar, Swapnil; Umekar, Milind Janrao; Kotagale, Nandkishor Ramdas

    2016-01-01

    Introduction: This study focused on the potential effects of compression forces experienced during lactose (InhaLac 70, 120, and 230) storage and transport on the flowability and aerosol performance in a dry powder inhaler formulation. Materials and Methods: Lactose was subjected to typical compression forces of 4, 10, and 20 N/cm2. Powder flowability and particle size distribution of un-compressed and compressed lactose were evaluated by Carr's index, Hausner's ratio, the angle of repose and the laser diffraction method. Aerosol performance of un-compressed and compressed lactose was assessed in dispersion studies using a glass twin-stage liquid impinger at flow rates of 40-80 L/min. Results: At these compression forces, the flowability of compressed lactose was the same or slightly improved. Furthermore, compression of lactose caused a decrease in in vitro aerosol dispersion performance. Conclusion: The present study illustrates that, as carrier size increases, a concurrent decrease in drug aerosolization performance is observed. Thus, the compression of the lactose fines onto the surfaces of the larger lactose particles due to compression pressures was hypothesized to be the cause of these observed performance variations. The simulation of storage and transport on an industrial scale can induce significant variations in formulation performance, and it could be a source of batch-to-batch variations. PMID:27014618
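Carr's index and Hausner's ratio used above are simple functions of bulk and tapped density. For reference, the standard pharmacopoeial formulas (generic helpers, not the authors' code):

```python
def carr_index(bulk_density, tapped_density):
    """Carr's (compressibility) index in per cent:
    100 * (tapped - bulk) / tapped. Higher values mean poorer flow."""
    return 100.0 * (tapped_density - bulk_density) / tapped_density

def hausner_ratio(bulk_density, tapped_density):
    """Hausner's ratio: tapped density over bulk density.
    Values near 1.0 indicate free-flowing powder."""
    return tapped_density / bulk_density
```

For example, a powder with bulk density 0.4 g/mL and tapped density 0.5 g/mL has a Carr's index of 20% and a Hausner's ratio of 1.25, conventionally classed as "fair" flowability.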

  1. New patient-controlled abdominal compression method in radiography: radiation dose and image quality.

    PubMed

    Piippo-Huotari, Oili; Norrman, Eva; Anderzén-Carlsson, Agneta; Geijer, Håkan

    2018-05-01

    The radiation dose for patients can be reduced with many methods and one way is to use abdominal compression. In this study, the radiation dose and image quality for a new patient-controlled compression device were compared with conventional compression and compression in the prone position. The aim was to compare the radiation dose and image quality of patient-controlled compression with conventional and prone compression in general radiography, using an experimental design with a quantitative approach. After obtaining the approval of the ethics committee, a consecutive sample of 48 patients was examined with the standard clinical urography protocol. The radiation doses were measured as dose-area product and analyzed with a paired t-test. The image quality was evaluated by visual grading analysis. Four radiologists evaluated each image individually by scoring nine criteria modified from the European quality criteria for diagnostic radiographic images. There was no significant difference in radiation dose or image quality between conventional and patient-controlled compression. The prone position resulted in both a higher dose and inferior image quality. Patient-controlled compression gave dose levels similar to conventional compression and lower than prone compression. Image quality was similar with both patient-controlled and conventional compression and was judged to be better than in the prone position.
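The dose comparison above relies on a paired t-test; the statistic itself is short enough to write out. A generic sketch of the textbook formula, not the study's analysis code:

```python
import math

def paired_t_statistic(x, y):
    """Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)) for the paired
    differences d_i = x_i - y_i, using the sample (n-1) variance."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of d
    return mean / math.sqrt(var / n)
```

The statistic is then compared against a t distribution with n - 1 degrees of freedom; pairing is what makes each patient their own control across compression methods.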

  2. Tomographic Image Compression Using Multidimensional Transforms.

    ERIC Educational Resources Information Center

    Villasenor, John D.

    1994-01-01

    Describes a method for compressing tomographic images obtained using Positron Emission Tomography (PET) and Magnetic Resonance (MR) by applying transform compression using all available dimensions. This takes maximum advantage of redundancy of the data, allowing significant increases in compression efficiency and performance. (13 references) (KRN)

  3. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
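The two-band split underlying subband coding can be shown with the simplest possible filter pair, the Haar averages and differences. This is an illustrative building block only; the paper's coder likely uses longer filter banks, and the function names are assumptions:

```python
def haar_analysis(signal):
    """One level of a two-band (Haar) subband split: pairwise averages form
    the coarse band, pairwise half-differences the detail band. The coarse
    band alone is the 'first refinement' a progressive receiver would get."""
    coarse = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return coarse, detail

def haar_synthesis(coarse, detail):
    """Exact inverse of haar_analysis: interleave sums and differences."""
    out = []
    for c, d in zip(coarse, detail):
        out += [c + d, c - d]
    return out
```

Progressive transmission falls out naturally: send the coarse band first so the seismologist sees an approximate waveform, then send detail bands on request to refine it losslessly.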

  4. Logarithmic compression methods for spectral data

    DOEpatents

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
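The compression step, keeping only spectral bins whose log magnitude exceeds a threshold together with their phases, can be sketched on a generic complex spectrum. The log Gabor transform itself is omitted here, and the function names are assumptions for illustration:

```python
import cmath
import math

def compress_spectrum(spectrum, threshold):
    """Keep only bins whose log magnitude exceeds the threshold, storing
    (index, log-magnitude, phase) triples for transmission."""
    kept = []
    for i, x in enumerate(spectrum):
        mag = abs(x)
        if mag > 0 and math.log(mag) > threshold:
            kept.append((i, math.log(mag), cmath.phase(x)))
    return kept

def expand_spectrum(kept, n):
    """Rebuild an n-bin complex spectrum from the transmitted triples;
    discarded bins come back as zero."""
    out = [0j] * n
    for i, logmag, phase in kept:
        out[i] = cmath.rect(math.exp(logmag), phase)
    return out
```

Working in log magnitude compresses the dynamic range before thresholding, so weak bins are dropped while strong bins survive with full phase information.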

  5. A Dynamic Eddy Viscosity Model for the Shallow Water Equations Solved by Spectral Element and Discontinuous Galerkin Methods

    NASA Astrophysics Data System (ADS)

    Marras, Simone; Suckale, Jenny; Giraldo, Francis X.; Constantinescu, Emil

    2016-04-01

    We present the solution of the viscous shallow water equations where viscosity is built as a residual-based subgrid scale model originally designed for large eddy simulation of compressible [1] and stratified flows [2]. The necessity of viscosity for a shallow water model not only finds motivation from mathematical analysis [3], but is supported by physical reasoning as can be seen by an analysis of the energetics of the solution. We simulated the flow of an idealized wave as it hits a set of obstacles. The kinetic energy spectrum of this flow shows that, although the inviscid Galerkin solutions -by spectral elements and discontinuous Galerkin [4]- preserve numerical stability in spite of the spurious oscillations in the proximity of the wave fronts, the slope of the energy cascade deviates from the theoretically expected values. We show that only a sufficiently small amount of dynamically adaptive viscosity removes the unwanted high-frequency modes while preserving the overall sharpness of the solution. In addition, it yields a physically plausible energy decay. This work is motivated by a larger interest in the application of a shallow water model to the solution of tsunami triggered coastal flows. In particular, coastal flows in regions around the world where coastal parks made of mitigation hills of different sizes and configurations are considered as a means to deviate the power of the incoming wave. References [1] M. Nazarov and J. Hoffman (2013) "Residual-based artificial viscosity for simulation of turbulent compressible flow using adaptive finite element methods" Int. J. Numer. Methods Fluids, 71:339-357 [2] S. Marras, M. Nazarov, F. X. Giraldo (2015) "Stabilized high-order Galerkin methods based on a parameter-free dynamic SGS model for LES" J. Comput. Phys. 301:77-101 [3] J. F. Gerbeau and B. Perthame (2001) "Derivation of the viscous Saint-Venant system for laminar shallow water; numerical validation" Discrete Contin. Dyn. Syst. Ser. B, 1:89-102 [4] F. X. Giraldo and M. Restelli (2010) "High-order semi-implicit time-integrators for a triangular discontinuous Galerkin oceanic shallow water model" Int. J. Numer. Methods Fluids, 63:1077-1102

  6. HUGO: Hierarchical mUlti-reference Genome cOmpression for aligned reads

    PubMed Central

    Li, Pinghao; Jiang, Xiaoqian; Wang, Shuang; Kim, Jihoon; Xiong, Hongkai; Ohno-Machado, Lucila

    2014-01-01

    Background and objective Short-read sequencing is becoming the standard of practice for the study of structural variants associated with disease. However, with the growth of sequence data largely surpassing reasonable storage capability, the biomedical community is challenged with the management, transfer, archiving, and storage of sequence data. Methods We developed Hierarchical mUlti-reference Genome cOmpression (HUGO), a novel compression algorithm for aligned reads in the sorted Sequence Alignment/Map (SAM) format. We first aligned short reads against a reference genome and stored exactly mapped reads for compression. For the inexact mapped or unmapped reads, we realigned them against different reference genomes using an adaptive scheme by gradually shortening the read length. Regarding the base quality value, we offer lossy and lossless compression mechanisms. The lossy compression mechanism for the base quality values uses k-means clustering, where a user can adjust the balance between decompression quality and compression rate. The lossless compression can be produced by setting k (the number of clusters) to the number of different quality values. Results The proposed method produced a compression ratio in the range 0.5–0.65, which corresponds to 35–50% storage savings based on experimental datasets. The proposed approach achieved 15% more storage savings over CRAM and a comparable compression ratio with Samcomp (CRAM and Samcomp are two of the state-of-the-art genome compression algorithms). The software is freely available at https://sourceforge.net/projects/hierachicaldnac/ with a General Public License (GPL) license. Limitation Our method requires having different reference genomes and prolongs the execution time for additional alignments. Conclusions The proposed multi-reference-based compression algorithm for aligned reads outperforms existing single-reference based algorithms. PMID:24368726

  7. Analysis Resilient Algorithm on Artificial Neural Network Backpropagation

    NASA Astrophysics Data System (ADS)

    Saputra, Widodo; Tulus; Zarlis, Muhammad; Widia Sembiring, Rahmat; Hartama, Dedy

    2017-12-01

    Decision makers require prediction to anticipate future planning. Artificial Neural Network (ANN) Backpropagation is one such method; however, it still has a weakness, namely long training time. This is a reason to improve the method in order to accelerate training. One improved variant of ANN Backpropagation is the resilient method, which changes the network weights and biases through a direct adaptation process based on local gradient information from every learning iteration. Prediction results on the Istanbul Stock Exchange training data become better: the Mean Square Error (MSE) value becomes smaller and accuracy increases.
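
    The resilient scheme referred to here is commonly known as Rprop: each weight keeps its own step size, adapted from the sign of successive local gradients rather than their magnitude. The sketch below applies the standard Rprop update rule to a single weight on a toy quadratic; the constants (eta+ = 1.2, eta- = 0.5) are the usual literature defaults, not values taken from this paper.

```python
def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One Rprop update for a single weight.

    Only the *sign* of the local gradient is used: if it agrees with the
    previous iteration the step size grows; if it flips, the step size shrinks
    and the update is skipped. Returns (new_weight, remembered_grad, new_step).
    """
    if grad * prev_grad > 0:          # same direction: accelerate
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:        # overshot a minimum: back off
        step = max(step * eta_minus, step_min)
        grad = 0.0                    # skip the move right after a sign change
    if grad > 0:
        w -= step
    elif grad < 0:
        w += step
    return w, grad, step

# Toy problem: minimise f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
w, prev_g, step = 0.0, 0.0, 0.1
for _ in range(60):
    g = 2.0 * (w - 3.0)
    w, prev_g, step = rprop_step(w, g, prev_g, step)
```

    Because only gradient signs matter, the per-weight steps grow geometrically on plateaus, which is exactly what shortens the long training times the abstract mentions.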

  8. Adaptive Numerical Dissipation Control in High Order Schemes for Multi-D Non-Ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, B.

    2005-01-01

    The required type and amount of numerical dissipation/filter to accurately resolve all relevant multiscales of complex MHD unsteady high-speed shock/shear/turbulence/combustion problems are not only physical problem dependent, but also vary from one flow region to another. In addition, proper and efficient control of the divergence of the magnetic field (Div(B)) numerical error for high order shock-capturing methods poses extra requirements for the considered type of CPU intensive computations. The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that include viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed and leave the rest of the region free from numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears), and redundant multiresolution wavelets (WAV) (for the above types of flow feature). These filters also provide a natural and efficient way for the minimization of Div(B) numerical error.

  9. Science-based Region-of-Interest Image Compression

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.

    2004-01-01

    As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.

  10. Method of controlling coherent synchrotron radiation-driven degradation of beam quality during bunch length compression

    DOEpatents

    Douglas, David R [Newport News, VA]; Tennant, Christopher D [Williamsburg, VA]

    2012-07-10

    A method of avoiding CSR-induced beam quality defects in free-electron laser operation by (a) controlling the rate of compression and (b) using a novel means of integrating the compression with the remainder of the transport system; both are accomplished by means of dispersion modulation. A large dispersion is created in the penultimate dipole magnet of the compression region, leading to rapid compression; this large dispersion is demagnified and dispersion suppression performed in a final small dipole. As a result, the bunch is short for only a small angular extent of the transport, and the resulting CSR excitation is small.

  11. Numerical study of the flow in a three-dimensional thermally driven cavity

    NASA Astrophysics Data System (ADS)

    Rauwoens, Pieter; Vierendeels, Jan; Merci, Bart

    2008-06-01

    Solutions for the fully compressible Navier-Stokes equations are presented for the flow and temperature fields in a cubic cavity with large horizontal temperature differences. The ideal-gas approximation for air is assumed and viscosity is computed using Sutherland's law. The three-dimensional case forms an extension of previous studies performed on a two-dimensional square cavity. The influence of imposed boundary conditions in the third dimension is investigated as a numerical experiment. Comparison is made between convergence rates in case of periodic and free-slip boundary conditions. Results with no-slip boundary conditions are presented as well. The effect of the Rayleigh number is studied. Results are computed using a finite volume method on a structured, collocated grid. An explicit third-order discretization for the convective part and an implicit central discretization for the acoustic part and for the diffusive part are used. To stabilize the scheme an artificial dissipation term for the pressure and the temperature is introduced. The discrete equations are solved using a time-marching method with restrictions on the timestep corresponding to the explicit parts of the solver. Multigrid is used as acceleration technique.

  12. Performance of Low Dissipative High Order Shock-Capturing Schemes for Shock-Turbulence Interactions

    NASA Technical Reports Server (NTRS)

    Sandham, N. D.; Yee, H. C.

    1998-01-01

    Accurate and efficient direct numerical simulation of turbulence in the presence of shock waves represents a significant challenge for numerical methods. The objective of this paper is to evaluate the performance of high order compact and non-compact central spatial differencing employing total variation diminishing (TVD) shock-capturing dissipations as characteristic based filters for two model problems combining shock wave and shear layer phenomena. A vortex pairing model evaluates the ability of the schemes to cope with shear layer instability and eddy shock waves, while a shock wave impingement on a spatially-evolving mixing layer model studies the accuracy of computation of vortices passing through a sequence of shock and expansion waves. A drastic increase in accuracy is observed if a suitable artificial compression formulation is applied to the TVD dissipations. With this modification to the filter step the fourth-order non-compact scheme shows improved results in comparison to second-order methods, while retaining the good shock resolution of the basic TVD scheme. For this characteristic based filter approach, however, the benefits of compact schemes or schemes with higher than fourth order are not sufficient to justify the higher complexity near the boundary and/or the additional computational cost.

  13. A new wavelet transform to sparsely represent cortical current densities for EEG/MEG inverse problems.

    PubMed

    Liao, Ke; Zhu, Min; Ding, Lei

    2013-08-01

    The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating the compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on the generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built by three EEG/MEG software packages, and their structural compressibility was evaluated and compared using the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to two other available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low-coherence measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. Unconventional supercapacitors from nanocarbon-based electrode materials to device configurations.

    PubMed

    Liu, Lili; Niu, Zhiqiang; Chen, Jun

    2016-07-25

    As energy storage devices, supercapacitors that are also called electrochemical capacitors possess high power density, excellent reversibility and long cycle life. The recent boom in electronic devices with different functions in transparent LED displays, stretchable electronic systems and artificial skin has increased the demand for supercapacitors to move towards light, thin, integrated macro- and micro-devices with transparent, flexible, stretchable, compressible and/or wearable abilities. The successful fabrication of such supercapacitors depends mainly on the preparation of innovative electrode materials and the design of unconventional supercapacitor configurations. Tremendous research efforts have been recently made to design and construct innovative nanocarbon-based electrode materials and supercapacitors with unconventional configurations. We review here recent developments in supercapacitors from nanocarbon-based electrode materials to device configurations. The advances in nanocarbon-based electrode materials mainly include the assembly technologies of macroscopic nanostructured electrodes with different dimensions of carbon nanotubes/nanofibers, graphene, mesoporous carbon, activated carbon, and their composites. The electrodes with macroscopic nanostructured carbon-based materials overcome the issues of low conductivity, poor mechanical properties, and limited dimensions that are faced by conventional methods. The configurational design of advanced supercapacitor devices is presented with six types of unconventional supercapacitor devices: flexible, micro-, stretchable, compressible, transparent and fiber supercapacitors. Such supercapacitors display unique configurations and excellent electrochemical performance at different states such as bending, stretching, compressing and/or folding. 
For example, all-solid-state simplified supercapacitors that are based on nanostructured graphene composite paper are able to maintain 95% of the original capacity at a 180° folding state. The progress made so far will guide further developments in the structural design of nanocarbon-based electrode materials and the configurational diversity of supercapacitor devices. Future developments and prospects in the controllable assembly of macroscopic nanostructured electrodes and the innovation of unconventional supercapacitor configurations are also discussed. This should shed light on the R&D of supercapacitors.

  15. Heating and Acceleration of Charged Particles by Weakly Compressible Magnetohydrodynamic Turbulence

    NASA Astrophysics Data System (ADS)

    Lynn, Jacob William

    We investigate the interaction between low-frequency magnetohydrodynamic (MHD) turbulence and a distribution of charged particles. Understanding this physics is central to understanding the heating of the solar wind, as well as the heating and acceleration of other collisionless plasmas. Our central method is to simulate weakly compressible MHD turbulence using the Athena code, along with a distribution of test particles which feel the electromagnetic fields of the turbulence. We also construct analytic models of transit-time damping (TTD), which results from the mirror force caused by compressible (fast or slow) MHD waves. Standard linear-theory models in the literature require an exact resonance between particle and wave velocities to accelerate particles. The models developed in this thesis go beyond standard linear theory to account for the fact that wave-particle interactions decorrelate over a short time, which allows particles with velocities off resonance to undergo acceleration and velocity diffusion. We use the test particle simulation results to calibrate and distinguish between different models for this velocity diffusion. Test particle heating is larger than the linear theory prediction, due to continued acceleration of particles with velocities off-resonance. We also include an artificial pitch-angle scattering to the test particle motion, representing the effect of high-frequency waves or velocity-space instabilities. For low scattering rates, we find that the scattering enforces isotropy and enhances heating by a modest factor. For much higher scattering rates, the acceleration is instead due to a non-resonant effect, as particles "frozen" into the fluid adiabatically gain and lose energy as eddies expand and contract. Lastly, we generalize our calculations to allow for relativistic test particles. 
Linear theory predicts that relativistic particles with velocities much higher than the speed of waves comprising the turbulence would undergo no acceleration; resonance-broadening modifies this conclusion and allows for a continued Fermi-like acceleration process. This may affect the observed spectra of black hole accretion disks by accelerating relativistic particles into a quasi-powerlaw tail.

  16. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.

  17. Method for testing the strength and structural integrity of nuclear fuel particles

    DOEpatents

    Lessing, P.A.

    1995-10-17

    An accurate method for testing the strength of nuclear fuel particles is disclosed. Each particle includes an upper and lower portion, and is placed within a testing apparatus having upper and lower compression members. The upper compression member includes a depression therein which is circular and sized to receive only part of the upper portion of the particle. The lower compression member also includes a similar depression. The compression members are parallel to each other with the depressions therein being axially aligned. The fuel particle is then placed between the compression members and engaged within the depressions. The particle is then compressed between the compression members until it fractures. The amount of force needed to fracture the particle is thereafter recorded. This technique allows a broader distribution of forces and provides more accurate results compared with systems which distribute forces at singular points on the particle. 13 figs.

  18. Method for testing the strength and structural integrity of nuclear fuel particles

    DOEpatents

    Lessing, Paul A.

    1995-01-01

    An accurate method for testing the strength of nuclear fuel particles. Each particle includes an upper and lower portion, and is placed within a testing apparatus having upper and lower compression members. The upper compression member includes a depression therein which is circular and sized to receive only part of the upper portion of the particle. The lower compression member also includes a similar depression. The compression members are parallel to each other with the depressions therein being axially aligned. The fuel particle is then placed between the compression members and engaged within the depressions. The particle is then compressed between the compression members until it fractures. The amount of force needed to fracture the particle is thereafter recorded. This technique allows a broader distribution of forces and provides more accurate results compared with systems which distribute forces at singular points on the particle.

  19. Adaptive efficient compression of genomes

    PubMed Central

    2012-01-01

    Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times often are slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
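
    The referential idea, storing only the differences between an input and a known reference, can be illustrated with a toy substitution-only encoder. This is not the paper's algorithm (which handles real genomes and tunes the memory/speed trade-off); the sequences and the equal-length, substitutions-only assumptions are purely illustrative.

```python
def ref_compress(target: str, reference: str):
    """Encode `target` as the list of (position, base) substitutions relative
    to `reference`. Assumes equal length and substitutions only; real
    referential compressors also handle insertions and deletions."""
    assert len(target) == len(reference)
    return [(i, b) for i, (a, b) in enumerate(zip(reference, target)) if a != b]

def ref_decompress(diffs, reference: str) -> str:
    """Rebuild the target by applying the stored substitutions to the reference."""
    seq = list(reference)
    for pos, base in diffs:
        seq[pos] = base
    return "".join(seq)

reference = "ACGTACGTACGTACGT"
target    = "ACGTACCTACGTACGA"
diffs = ref_compress(target, reference)   # only 2 edits need to be stored
```

    Since two genomes of the same species differ in a tiny fraction of positions, storing only the edit list is what makes ratios on the order of 400:1 achievable.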

  20. Administrators Gaming Test- and Observation-Based Teacher Evaluation Methods: To Conform To or Confront the System

    ERIC Educational Resources Information Center

    Geiger, Tray J.; Amrein-Beardsley, Audrey

    2017-01-01

    In this commentary, we discuss three types of data manipulations that can occur within teacher evaluation methods: artificial inflation, artificial deflation, and artificial conflation. These types of manipulation are more popularly known in the education profession as instances of Campbell's Law (1976), which states that the higher the…

  1. Artificial cranial deformation in newborns in the pre-Columbian Andes.

    PubMed

    Schijman, Edgardo

    2005-11-01

    Artificial deformation of the neonatal cranial vault is one form of permanent alteration of the body that human beings have performed since the beginning of history as a way of differentiating themselves from others. These procedures have been observed on all continents, although the practice became most widespread among the aborigines who lived in the Andean region of South America. It has been suggested that the expansion of this practice started with the Scythians, from their original settlements in central Asia, and spread toward the rest of Asia and Europe, and it is believed that Asiatic peoples carried this cultural custom to America when they arrived on the current coasts of Alaska after crossing the Bering Strait. The practice of deforming newborn heads was present throughout the American continent, from North America to Patagonia, but cranial molding in neonates was most widely practiced in the Andean region, from Venezuela to Guyana, Colombia, Ecuador, Peru, Bolivia, Chile, and Argentina. Intentional deformation of the head in neonates was carried out in different ways: by compression of the head with boards and pads; by compression with adjusted bindings; or by restraining the child on specially designed cradle-boards. The purpose of head shaping varied according to culture and region: while in certain regions it was a symbol of nobility or separated the different social groups within society, in others it served to emphasize ethnic differences or was performed for aesthetic, magical or religious reasons. There is no evidence of any neurological impairment among indigenous groups who practiced cranial deformation in newborns.

  2. Privacy protection in surveillance systems based on JPEG DCT baseline compression and spectral domain watermarking

    NASA Astrophysics Data System (ADS)

    Sablik, Thomas; Velten, Jörg; Kummert, Anton

    2015-03-01

    A novel system for automatic privacy protection in digital media, based on spectral-domain watermarking and JPEG compression, is described in the present paper. In a first step, private areas are detected; a detection method is presented for this purpose. The implemented method uses Haar cascades to detect faces. Integral images are used to speed up the calculations and the detection. Multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral-domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread-spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared regarding their robustness. Moreover, the performance of the method on tampered images is presented.
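
    Spread-spectrum watermarking of the kind used here embeds data by adding a key-seeded pseudo-noise pattern to transform coefficients and recovers it by correlation. The 1-D sketch below operates on an arbitrary coefficient vector rather than actual JPEG DCT blocks, and the embedding strength `alpha`, the key, and the vector length are assumptions for the example, not parameters from the paper.

```python
import random

def pn_sequence(length: int, key: int):
    """Key-seeded ±1 pseudo-noise pattern (the watermark carrier)."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(length)]

def embed(coeffs, bit: int, key: int, alpha: float = 2.0):
    """Add +alpha*pn for bit 1, -alpha*pn for bit 0. In a real system the
    coefficients would be mid-frequency JPEG DCT coefficients."""
    pn = pn_sequence(len(coeffs), key)
    sign = 1.0 if bit else -1.0
    return [c + sign * alpha * p for c, p in zip(coeffs, pn)]

def detect(coeffs, key: int) -> int:
    """Correlate with the same key's pattern; the correlation sign is the bit."""
    pn = pn_sequence(len(coeffs), key)
    corr = sum(c * p for c, p in zip(coeffs, pn))
    return 1 if corr > 0 else 0

# Host "coefficients": a fixed pseudo-random vector standing in for DCT data.
host = [random.Random(7).uniform(-10, 10) for _ in range(256)]
marked = embed(host, bit=1, key=42)
```

    Because the pseudo-noise energy is spread over many coefficients, each individual change is small (low visibility) while the correlation sum remains large (robust detection), which is why the approach tolerates JPEG quantization.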

  3. Low complication rate of sellar reconstruction by artificial dura mater during endoscopic endonasal transsphenoidal surgery.

    PubMed

    Ye, Yuanliang; Wang, Fuyu; Zhou, Tao; Luo, Yi

    2017-12-01

    To evaluate the effect of sellar reconstruction during pituitary adenoma resection surgery by the endoscopic endonasal transsphenoidal approach using an artificial cerebral dura mater patch. This was a retrospective study of 1281 patients who underwent endoscopic transsphenoidal resection for the treatment of pituitary adenomas between December 2006 and May 2014 at the Neurosurgery Department of the People's Liberation Army General Hospital. The patients were classified into 4 grades according to the intraoperative cerebrospinal fluid (CSF) leakage site. All patients were followed up for 3 months by telephone and outpatient visits. One thousand seventy-three (83.7%) patients underwent sellar reconstruction using artificial dura mater patched outside the sellar region (method A), 106 (8.3%) using artificial dura mater patched inside the sellar region (method B), and 102 (8.0%) using artificial dura mater and a mucosal flap (method C). Method A was used for grade 0 to 1 leakage, method B for grade 1 to 2 leakage, and method C for grade 2 to 3 leakage. During the 3-month follow-up, postoperative CSF leakage was observed in 7 patients (0.6%): 2 among patients who underwent method B (1.9%) and 5 among those who underwent method C (4.9%). Meningitis was diagnosed in 13 patients (1.0%): 2 among patients who underwent method A (0.2%), 4 among those who underwent method B (3.8%), and 7 among those who underwent method C (6.7%). Compared with other reconstruction methods, sellar reconstruction surgery that used only artificial dura mater as the repair material had a low rate of complications. Copyright © 2017 The Authors. Published by Wolters Kluwer Health, Inc. All rights reserved.

  4. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    NASA Astrophysics Data System (ADS)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. 
Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that it can compress, Bit Grooming guarantees the specified precision throughout the full floating-point range. Data quantization by Bit Grooming is irreversible (i.e., lossy) yet transparent, meaning that no extra processing is required by data users/readers. Hence Bit Grooming can easily reduce data storage volume without sacrificing scientific precision or imposing extra burdens on users.
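
    The shave/set alternation at the heart of Bit Grooming can be sketched directly on IEEE-754 single-precision bit patterns. The `keep_bits` parameter below counts retained mantissa bits rather than decimal digits (the digit-to-bit mapping NCO actually uses is more careful), so treat this as an illustration of the mechanism, not the NCO implementation.

```python
import struct

def groom(value: float, keep_bits: int, set_bits: bool) -> float:
    """Quantize one IEEE-754 single: 'shave' (zero-fill) or 'set' (one-fill)
    the mantissa bits beyond the keep_bits most significant ones (0..23)."""
    (i,) = struct.unpack("<I", struct.pack("<f", value))
    mask = (1 << (23 - keep_bits)) - 1        # float32 has a 23-bit mantissa
    i = (i | mask) if set_bits else (i & ~mask)
    return struct.unpack("<f", struct.pack("<I", i))[0]

def bit_groom(values, keep_bits=12):
    """Alternately shave and set consecutive values, cancelling the low bias
    that pure bit shaving would introduce into the array mean."""
    return [groom(v, keep_bits, set_bits=(n % 2 == 1)) for n, v in enumerate(values)]

vals = bit_groom([3.14159, 3.14159], keep_bits=12)
# vals[0] is shaved (<= the float32 input), vals[1] is set (>= it); the runs of
# trailing zero/one bits are what DEFLATE-style lossless coders then exploit.
```

    The groomed values are still ordinary floats, which is why readers need no extra processing: the quantization only manifests as better downstream lossless compression.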

  5. Hyperspectral image compressing using wavelet-based method

    NASA Astrophysics Data System (ADS)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands. Therefore each object presented in the image can be identified from its spectral response. However, such imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution for these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands. Then the wavelet-based algorithm is applied to each subspace. Finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
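
    The band-subspace decomposition step can be illustrated by splitting the band sequence wherever the correlation between neighbouring bands drops. This is a simplification of the correlation-matrix analysis in the paper; the synthetic bands and the 0.9 threshold are assumptions for the example.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length, non-constant band vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def partition_bands(bands, threshold=0.9):
    """Split the band sequence into contiguous subspaces wherever the
    correlation between neighbouring bands falls below the threshold."""
    groups, current = [], [0]
    for i in range(1, len(bands)):
        if pearson(bands[i - 1], bands[i]) >= threshold:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups

# Four synthetic "bands" of 6 pixels each: bands 0-1 correlated, 2-3 correlated.
bands = [
    [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
    [1.1, 2.1, 3.0, 4.2, 5.1, 6.0],
    [9.0, 1.0, 8.0, 2.0, 7.0, 3.0],
    [9.1, 1.2, 8.1, 2.0, 7.2, 3.1],
]
groups = partition_bands(bands)
```

    Each resulting subspace contains spectrally redundant bands, so a transform coder (wavelet plus PCA in the paper) can be applied per subspace without global band reordering.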

  6. Three-dimensional imaging of artificial fingerprint by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Larin, Kirill V.; Cheng, Yezeng

    2008-03-01

    Fingerprint recognition is one of the most popular biometric methods. However, because of their reliance on surface topography, fingerprint recognition scanners are easily spoofed, e.g. using artificial fingerprint dummies. Thus, biometric fingerprint identification devices need to be more accurate and secure to deal with different fraudulent methods, including dummy fingerprints. Previously, we demonstrated that Optical Coherence Tomography (OCT) images revealed the presence of artificial fingerprints (made from different household materials, such as cement and liquid silicone rubber) at all times, while the artificial fingerprints easily spoofed a commercial fingerprint reader. We also demonstrated that an analysis of the autocorrelation of the OCT images could be used in automatic recognition systems. Here, we exploited three-dimensional (3D) imaging of the artificial fingerprint by OCT to generate a vivid 3D image of both the artificial fingerprint layer and the real fingerprint layer beneath. The reconstructed 3D image could not only indicate whether an artificial material intended to spoof the scanner is present above the real finger, but could also provide the hacker's fingerprint. The results of these studies suggest that Optical Coherence Tomography could be a powerful real-time noninvasive method for accurate identification of artificial fingerprints as well as real fingerprints.

  7. Biodegradation of artificial monolayers applied to water storages to reduce evaporative loss.

    PubMed

    Pittaway, P; Herzig, M; Stuckey, N; Larsen, K

    2015-01-01

    Repeat applications of an artificial monolayer to the interfacial boundary layer of large agricultural water storages during periods of high evaporative demand remain the most commercially feasible water conservation strategy. However, the interfacial boundary layer (or microlayer) is ecologically distinct from subsurface water, and repeat monolayer applications may adversely affect microlayer processes. In this study, the natural cleansing mechanisms operating within the microlayer were investigated to compare the biodegradability of two fatty alcohol (C16OH and C18OH) and one glycol ether (C18E1) monolayer compounds. The C16OH and C18OH compounds were more susceptible to microbial degradation, but the C18E1 compound was most susceptible to indirect photodegradation. On clean water the surface pressure and evaporation reduction achieved with a compressed C18E1 monolayer were superior to those of the C18OH monolayer, but on brown water the surface pressure dropped rapidly. These results suggest artificial monolayers are readily degraded by the synergy between photo- and microbial degradation. The residence time of C18OH and C18E1 monolayers on clear water is sufficient for cost-effective water conservation. However, the susceptibility of C18E1 to photodegradation indicates that the application of this monolayer to brown water may not be cost-effective.

  8. Genetic algorithm applied to the selection of factors in principal component-artificial neural networks: application to QSAR study of calcium channel antagonist activity of 1,4-dihydropyridines (nifedipine analogous).

    PubMed

    Hemmateenejad, Bahram; Akhond, Morteza; Miri, Ramin; Shamsipur, Mojtaba

    2003-01-01

    A QSAR algorithm, principal component-genetic algorithm-artificial neural network (PC-GA-ANN), has been applied to a set of newly synthesized calcium channel blockers, which are of special interest because of their role in cardiac diseases. A data set of 124 1,4-dihydropyridines bearing different ester substituents at the C-3 and C-5 positions of the dihydropyridine ring and nitroimidazolyl, phenylimidazolyl, and methylsulfonylimidazolyl groups at the C-4 position with known Ca(2+) channel binding affinities was employed in this study. Ten different sets of descriptors (837 descriptors) were calculated for each molecule. Principal component analysis was used to compress the descriptor groups into principal components. The most significant descriptors of each set were selected and used as input for the ANN. The genetic algorithm (GA) was used to select the best set of extracted principal components. A feed-forward artificial neural network trained with the back-propagation-of-error algorithm was used to model the nonlinear relationship between the selected principal components and the biological activity of the dihydropyridines. A comparison between PC-GA-ANN and routine PC-ANN shows that the first model yields better predictive ability.

  9. Numerical study on the responses of groundwater and strata to pumping and recharge in a deep confined aquifer

    NASA Astrophysics Data System (ADS)

    Zhang, Yang-Qing; Wang, Jian-Hua; Chen, Jin-Jian; Li, Ming-Guang

    2017-05-01

    Groundwater drawdown and strata settlements induced by dewatering in confined aquifers can be relieved by artificial recharge. In this study, numerical simulations of a field multi-well pumping-recharge test in a deep confined aquifer are conducted to analyze the responses of groundwater and strata to pumping and recharge. A three-dimensional numerical model is developed in a finite-difference software package; it considers the fluid-mechanical interaction using Biot consolidation theory. The predicted groundwater drawdown and ground settlements are compared with measured data to validate the numerical analysis of the pumping and recharge. Both the numerical results and the measured data indicate that the effect of recharge on controlling groundwater drawdown and strata settlements correlates with the injection rate and well arrangement. Since the groundwater drawdown induced by pumping can be controlled by artificial recharge, soil compression can be relieved by reducing changes in the effective stress of the soils. Consequently, strata settlement induced by pumping can be relieved by artificial recharge, and ground settlements can be eliminated if an appropriate injection rate and well arrangement are determined. Moreover, the changes in pore pressure and seepage force induced by pumping and recharge also result in significant horizontal deformations in the strata near the recharge wells.

  10. Sentence Processing in an Artificial Language: Learning and Using Combinatorial Constraints

    ERIC Educational Resources Information Center

    Amato, Michael S.; MacDonald, Maryellen C.

    2010-01-01

    A study combining artificial grammar and sentence comprehension methods investigated the learning and online use of probabilistic, nonadjacent combinatorial constraints. Participants learned a small artificial language describing cartoon monsters acting on objects. Self-paced reading of sentences in the artificial language revealed comprehenders'…

  11. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    PubMed

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation accelerated by virtual coil compression. As a general approach to autocalibrating parallel imaging, SPIRiT improves on traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that its self-consistent formulation is better conditioned, suggesting that SPIRiT is the better candidate for k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and is then applied to 2D-navigated iEPI diffusion imaging. To reduce reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed to accelerate computation for Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with conventional coil compression, shot-coil compression achieved higher compression rates with reduced errors. The simulations and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  12. A low cost method of testing compression-after-impact strength of composite laminates

    NASA Technical Reports Server (NTRS)

    Nettles, Alan T.

    1991-01-01

    A method was devised to test the compression strength of composite laminate specimens that are much thinner and wider than other tests require. The specimen can be up to 7.62 cm (3 in) wide and as thin as 1.02 mm (.04 in). The best features of the Illinois Institute of Technology Research Institute (IITRI) fixture are combined with an antibuckling jig developed and used at the University of Dayton Research Institute to obtain a method of compression testing thin, wide test coupons on any 20 kip (or larger) loading frame. Up to 83 pct. less composite material is needed for the test coupons compared with the most commonly used compression-after-impact (CAI) tests, which call for 48-ply-thick (approx. 6.12 mm) test coupons. Another advantage of the new method is that composite coupons of the exact lay-up and thickness of production parts can be tested for CAI strength, thus yielding more meaningful results. The new method was used to compression test 8- and 16-ply laminates of T300/934 carbon/epoxy. These results were compared with those obtained using ASTM standard D 3410-87 (Celanese compression test). CAI testing was performed on IM6/3501-6, IM7/SP500, and IM7/F3900. The new test method and associated fixture work well and are a valuable asset to MSFC's damage tolerance program.

  13. Development and validation of an improved mechanical thorax for simulating cardiopulmonary resuscitation with adjustable chest stiffness and simulated blood flow.

    PubMed

    Eichhorn, Stefan; Spindler, Johannes; Polski, Marcin; Mendoza, Alejandro; Schreiber, Ulrich; Heller, Michael; Deutsch, Marcus Andre; Braun, Christian; Lange, Rüdiger; Krane, Markus

    2017-05-01

    Investigations of compression frequency, duty cycle, or waveform during CPR are typically rooted in animal research or computer simulations. Our goal was to build a mechanical model incorporating alternate stiffness settings and an integrated blood flow system, enabling defined, reproducible comparisons of CPR efficacy. Based on thoracic stiffness data measured in human cadavers, such a model was constructed using valve-controlled pneumatic pistons and an artificial heart. The model offers two realistic levels of chest elasticity, with a blood flow apparatus that reflects changes in compression depth and waveform. We conducted CPR at the two opposing levels of physiologic stiffness using a LUCAS device, a motor-driven plunger, and a group of volunteers. In high-stiffness mode, blood flow generated by volunteers was significantly lower after just 2 min of CPR, whereas flow generated by the LUCAS device was superior by comparison. Optimal blood flow was obtained with the motor-driven plunger using a trapezoidal waveform. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  14. Mechanical and Physical Properties of Polyester Polymer Concrete Using Recycled Aggregates from Concrete Sleepers

    PubMed Central

    Carrión, Francisco; Montalbán, Laura; Real, Julia I.

    2014-01-01

    Currently, reuse of solid waste from disused infrastructures is an important environmental issue to study. In this research, polymer concrete was developed by mixing orthophthalic unsaturated polyester resin, artificial microfillers (calcium carbonate), and waste aggregates (basalt and limestone) coming from the recycling process of concrete sleepers. The variation of the mechanical and physical properties of the polymer concrete (compressive strength, flexural strength, modulus of elasticity, density, and water absorption) was analyzed based on the modification of different variables: nature of the recycled aggregates, resin contents (11 wt%, 12 wt%, and 13 wt%), and particle-size distributions of microfillers used. The results show the influence of these variables on the mechanical performance of polymer concrete. Compressive and flexural strength of the recycled polymer concrete were improved by increasing the amount of polyester resin and by optimizing the particle-size distribution of the microfillers. Besides, the results show the feasibility of developing a polymer concrete with excellent mechanical behavior. PMID:25243213

  15. Co-melting technology in resource recycling of sludge derived from stone processing.

    PubMed

    Hu, Shao-Hua; Hu, Shen-Chih; Fu, Yen-Pei

    2012-12-01

    Stone processing sludge (SPS) is a by-product of stone-processing wastewater treatment; it is suitable for use as a raw material for making artificial lightweight aggregates (ALWAs). In this study, boric acid was utilized as a flux to lower sintering temperature. The formation of the viscous glassy phase was observed by DTA curve and changes in XRD patterns. Experiments were conducted to find the optimal combination of sintering temperature, sintering time, and boric acid dosage to produce an ALWA of favorable characteristics in terms of water absorption, bulk density, apparent porosity, compressive strength and weight loss to satisfy Taiwan's regulatory requirements for construction and insulation materials. Optimal results gave a sintering temperature of 850 degrees C for 15 min at a boric acid dosage of 15% by weight of SPS. Results for ALWA favorable characteristics were: 0.21% (water absorption), 0.35% (apparent porosity), 1.67 g/cm3 (bulk density), 66.94 MPa (compressive strength), and less than 0.1% (weight loss).

  17. Mechanical and physical properties of polyester polymer concrete using recycled aggregates from concrete sleepers.

    PubMed

    Carrión, Francisco; Montalbán, Laura; Real, Julia I; Real, Teresa

    2014-01-01

    Currently, reuse of solid waste from disused infrastructures is an important environmental issue to study. In this research, polymer concrete was developed by mixing orthophthalic unsaturated polyester resin, artificial microfillers (calcium carbonate), and waste aggregates (basalt and limestone) coming from the recycling process of concrete sleepers. The variation of the mechanical and physical properties of the polymer concrete (compressive strength, flexural strength, modulus of elasticity, density, and water absorption) was analyzed based on the modification of different variables: nature of the recycled aggregates, resin contents (11 wt%, 12 wt%, and 13 wt%), and particle-size distributions of microfillers used. The results show the influence of these variables on the mechanical performance of polymer concrete. Compressive and flexural strength of the recycled polymer concrete were improved by increasing the amount of polyester resin and by optimizing the particle-size distribution of the microfillers. Besides, the results show the feasibility of developing a polymer concrete with excellent mechanical behavior.

  18. Viscous compressible flow direct and inverse computation and illustrations

    NASA Technical Reports Server (NTRS)

    Yang, T. T.; Ntone, F.

    1986-01-01

    An algorithm for laminar and turbulent viscous compressible two-dimensional flows is presented. For the application of precise boundary conditions over an arbitrary body surface, a body-fitted coordinate system is used in the physical plane. A thin-layer approximation of the Navier-Stokes equations is introduced to keep the viscous terms relatively simple. The flow field computation is performed in the transformed plane. A factorized, implicit scheme is used to facilitate the computation. Sample calculations for Couette flow, developing pipe flow, an isolated airfoil, two-dimensional compressor cascade flow, and segmental compressor blade design are presented. To a certain extent, the effective use of the direct solver depends on the user's skill in setting up the gridwork, the time step size, and the choice of artificial viscosity. The design feature of the algorithm, an iterative scheme that corrects the geometry for a specified surface pressure distribution, works well for subsonic flows. A more elaborate correction scheme is required to treat transonic flows, where local shock waves may be involved.

  19. Fundamental study of compression for movie files of coronary angiography

    NASA Astrophysics Data System (ADS)

    Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie

    2005-04-01

    With network distribution of movie files in mind, lossy-compressed movie files with small sizes could be useful. We chose three kinds of coronary stricture movies with different motion speeds as test objects: movies at slow, normal, and fast heart rates. MPEG-1, DivX 5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9-Video Compression Manager) movies were made from the three kinds of AVI-format movies with different motion speeds. Five kinds of movies, the four compressed versions and the uncompressed AVI (used instead of the DICOM format), were evaluated by Thurstone's method. The evaluation factors were sharpness, granularity, contrast, and comprehensive evaluation. For the virtual bradycardia movie, AVI received the best evaluation on all factors except granularity. For the virtual normal movie, a different compression technique excelled on each evaluation factor. For the virtual tachycardia movie, MPEG-1 received the best evaluation on all factors except contrast. Which compression format works best thus depends on the speed of the movie, because the compression algorithms differ; we attribute this to differences in interframe compression. Movie compression algorithms combine interframe and intraframe compression, and since each compression method affects the image differently, the relation between the compression algorithm and our results needs to be examined.
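
    The ranking above uses Thurstone's paired-comparison scaling. As a rough illustration (not the authors' implementation), the sketch below computes Thurstone Case V scale values from a win matrix using only the Python standard library; the codec names and counts are hypothetical.

```python
from statistics import NormalDist

def thurstone_case_v(wins):
    """Thurstone Case V scale values from a paired-comparison win matrix.

    wins[i][j] = number of times stimulus i was preferred over stimulus j.
    Each stimulus's scale value is the mean z-score of the proportions of
    judgments in its favor against every other stimulus.
    """
    n = len(wins)
    z = NormalDist()
    scores = []
    for i in range(n):
        zs = []
        for j in range(n):
            if i == j:
                continue
            total = wins[i][j] + wins[j][i]
            # proportion preferring i over j, clipped away from 0 and 1
            # so the inverse normal CDF stays finite
            p = min(max(wins[i][j] / total, 0.01), 0.99)
            zs.append(z.inv_cdf(p))
        scores.append(sum(zs) / len(zs))
    return scores

# Hypothetical judgments for three codecs: the first is usually preferred
wins = [[0, 8, 9],
        [2, 0, 7],
        [1, 3, 0]]
print(thurstone_case_v(wins))
```

    The resulting scale values put the stimuli on a common interval scale, which is how the five movie formats can be ranked per evaluation factor.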

  20. A cost-effective strategy for nonoscillatory convection without clipping

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Niknafs, H. S.

    1990-01-01

    Clipping of narrow extrema and distortion of smooth profiles are well-known problems associated with so-called high-resolution nonoscillatory convection schemes. A strategy is presented for accurately simulating highly convective flows containing discontinuities, such as density fronts or shock waves, without distorting smooth profiles or clipping narrow local extrema. The convection algorithm is based on non-artificially-diffusive third-order upwinding in smooth regions, with automatic adaptive stencil expansion to (in principle, arbitrarily) higher-order upwinding locally, in regions of rapidly changing gradients. This is highly cost effective because the wider stencil is used only where needed: in isolated narrow regions. A recently developed universal limiter assures sharp monotonic resolution of discontinuities without introducing artificial diffusion or numerical compression. An adaptive discriminator is constructed to distinguish between spurious overshoots and physical peaks; this automatically relaxes the limiter near local turning points, thereby avoiding loss of resolution in narrow extrema. Examples are given for one-dimensional pure convection of scalar profiles at constant velocity.
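
    To make the flux-limiting idea concrete, here is a minimal sketch (not Leonard's ULTIMATE limiter or adaptive-stencil scheme) of 1D constant-velocity advection with a generic flux-limited upwind method. Koren's limiter is used, which recovers third-order accuracy in smooth regions while keeping a step profile monotone; the grid size and Courant number are arbitrary choices.

```python
def phi(r):
    # Koren's limiter: third-order accurate on smooth data, TVD overall
    return max(0.0, min(2.0 * r, (1.0 + 2.0 * r) / 3.0, 2.0))

def advect(u, c, steps):
    """Advect a periodic profile u at Courant number c (velocity > 0)."""
    n = len(u)
    for _ in range(steps):
        f = [0.0] * n  # f[i]: flux through the face between cells i and i+1
        for i in range(n):
            dn = u[(i + 1) % n] - u[i]   # downwind difference
            up = u[i] - u[i - 1]         # upwind difference
            # ratio of successive gradients; if the downwind difference
            # vanishes, the correction term vanishes too, so r is arbitrary
            r = up / dn if abs(dn) > 1e-12 else 2.0
            f[i] = u[i] + 0.5 * phi(r) * dn
        u = [u[i] - c * (f[i] - f[i - 1]) for i in range(n)]
    return u

# A step profile stays bounded in [0, 1] with no spurious oscillations
u0 = [1.0 if 10 <= i < 30 else 0.0 for i in range(64)]
u1 = advect(u0[:], 0.4, 50)
```

    With c <= 0.5 the update is a convex combination of neighboring values, so no new extrema appear; this is the same monotonicity property the abstract's universal limiter enforces, minus the adaptive relaxation near physical peaks.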

  1. Fatigue Behavior Degradation due to the Interlaminar Conditions in Lightweight Piezoelectric Composite Actuator (lipca)

    NASA Astrophysics Data System (ADS)

    Kim, Cheol-Woong; Yoon, Kwang-Joon

    The advanced piezoelectric ceramic composite actuator called LIPCA (LIghtweight Piezoelectric Composite Actuator) replaces the Al foil and stainless steel of THUNDER with FRP, and the laminate configuration was optimized to maximize stress transfer and the fiber bridging effect. This study evaluated the fatigue characteristics of LIPCA under the resonance frequency and the accompanying changes in its interlaminar phase; the residual stress distribution was also estimated. In conclusion, first, compared with the fatigue life of LIPCA without artificial delamination (intact LIPCA), the fatigue life of LIPCA with an embedded artificial delamination decreased by up to 50%. Second, micro-void growth and coalescence in the epoxy occurred actively at the interlaminar phase subjected to large tensile stress. Finally, a balanced configuration of compressive and tensile residual stresses was found to develop, and the required increment in displacement performance was satisfied.

  2. Respiratory pattern changes during costovertebral joint movement.

    PubMed

    Shannon, R

    1980-05-01

    Experiments were conducted to determine if costovertebral joint manipulation (CVJM) could influence the respiratory pattern. Phrenic efferent activity (PA) was monitored in dogs that were anesthetized with Dial-urethane, vagotomized, paralyzed, and artificially ventilated. Ribs 6-10 (bilaterally) were cut and separated from ribs 5-11. Branches of thoracic nerves 5-11 were cut, leaving only the joint nerve supply intact. Manual joint movement in an inspiratory or expiratory direction had an inhibitory effect on PA. Sustained displacement of the ribs could inhibit PA for a duration equal to numerous respiratory cycles. CVJM in synchrony with PA resulted in an increased respiratory rate. The inspiratory inhibitory effect of joint receptor stimulation was elicited with manual chest compression in vagotomized spontaneously breathing dogs, but not with artificial lung inflation or deflation. It is concluded that the effect of CVJM on the respiratory pattern is due to stimulation of joint mechanoreceptors, and that they exert their influence in part via the medullary-pontine rhythm generator.

  3. CAS2D: FORTRAN program for nonrotating blade-to-blade, steady, potential transonic cascade flows

    NASA Technical Reports Server (NTRS)

    Dulikravich, D. S.

    1980-01-01

    An exact, full-potential-equation (FPE) model for the steady, irrotational, homentropic, and homoenergetic flow of a compressible, homocompositional, inviscid fluid through two-dimensional planar cascades of airfoils was derived, together with its appropriate boundary conditions. A computer program, CAS2D, was developed that numerically solves an artificially time-dependent form of the actual FPE. The governing equation was discretized by using type-dependent, rotated finite differencing and the finite area technique. The flow field was discretized by providing a boundary-fitted, nonuniform computational mesh. The mesh was generated by using a sequence of conformal mapping, nonorthogonal coordinate stretching, and local, isoparametric, bilinear mapping functions. The discretized form of the FPE was solved iteratively by using successive line overrelaxation. Possible isentropic shocks were correctly captured by explicitly adding an artificial viscosity in conservative form. In addition, a three-level consecutive mesh-refinement feature makes CAS2D a reliable and fast algorithm for the analysis of transonic, two-dimensional cascade flows.
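
    CAS2D relaxes the discretized potential equation by successive line overrelaxation. As a much-simplified illustration of the relaxation idea only (point SOR on Laplace's equation rather than line relaxation on the full potential equation; grid size and relaxation factor are arbitrary choices), consider:

```python
def sor_laplace(n, omega=1.7, tol=1e-6, max_iter=10000):
    """Solve Laplace's equation on an n x n grid with Dirichlet boundaries
    (top edge held at 1, other edges at 0) by successive over-relaxation.

    Returns the solution array and the number of sweeps used.
    """
    u = [[0.0] * n for _ in range(n)]
    for j in range(n):
        u[0][j] = 1.0  # top boundary
    for it in range(max_iter):
        worst = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # 5-point stencil average, over-relaxed by omega
                new = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1])
                delta = new - u[i][j]
                u[i][j] += omega * delta
                worst = max(worst, abs(delta))
        if worst < tol:
            return u, it + 1
    return u, max_iter

u, sweeps = sor_laplace(17)
# By symmetry, the center value converges to 0.25
```

    Over-relaxation (omega > 1) accelerates convergence markedly over plain Gauss-Seidel; line relaxation, as in CAS2D, updates a whole grid line implicitly per step for further speedup.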

  4. Material Data Representation of Hysteresis Loops for Hastelloy X Using Artificial Neural Networks

    NASA Technical Reports Server (NTRS)

    Alam, Javed; Berke, Laszlo; Murthy, Pappu L. N.

    1993-01-01

    The artificial neural network (ANN) model proposed by Rumelhart, Hinton, and Williams is applied to develop a functional approximation of material data in the form of hysteresis loops from a nickel-base superalloy, Hastelloy X. Several different ANN configurations are used to model hysteresis loops at different cycles for this alloy. The ANN models were successful in reproducing the hysteresis loops used for its training. However, because of sharp bends at the two ends of hysteresis loops, a drift occurs at the corners of the loops where loading changes to unloading and vice versa (the sharp bends occurred when the stress-strain curves were reproduced by adding stress increments to the preceding values of the stresses). Therefore, it is possible only to reproduce half of the loading path. The generalization capability of the network was tested by using additional data for two other hysteresis loops at different cycles. The results were in good agreement. Also, the use of ANN led to a data compression ratio of approximately 22:1.

  5. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    PubMed

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  6. Energy recovery during expansion of compressed gas using power plant low-quality heat sources

    DOEpatents

    Ochs, Thomas L [Albany, OR; O'Connor, William K [Lebanon, OR

    2006-03-07

    A method of recovering energy from a cool compressed gas, compressed liquid, vapor, or supercritical fluid is disclosed which includes incrementally expanding the compressed gas, compressed liquid, vapor, or supercritical fluid through a plurality of expansion engines and heating the gas, vapor, compressed liquid, or supercritical fluid entering at least one of the expansion engines with a low quality heat source. Expansion engines such as turbines and multiple expansions with heating are disclosed.
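
    A back-of-envelope ideal-gas calculation (not the patented apparatus; the temperatures, pressure ratio, and gas properties below are illustrative assumptions) shows why splitting the expansion into stages with low-quality-heat reheating between them extracts more work than a single expansion:

```python
import math

def adiabatic_work(T_in, pr, gamma=1.4, cp=1005.0):
    """Specific work (J/kg) from isentropic expansion of an ideal gas
    through pressure ratio pr = p_out / p_in < 1, inlet temperature T_in (K)."""
    T_out = T_in * pr ** ((gamma - 1.0) / gamma)
    return cp * (T_in - T_out)

T0 = 320.0              # cool compressed gas, slightly above ambient (K)
overall_pr = 1.0 / 10.0 # expand through a 10:1 overall pressure ratio

# Single expansion through the full pressure ratio:
w_single = adiabatic_work(T0, overall_pr)

# Two stages of equal pressure ratio, with the gas reheated back to T0
# between stages by a low-quality heat source (e.g., condenser cooling water):
stage_pr = math.sqrt(overall_pr)
w_two_stage = 2.0 * adiabatic_work(T0, stage_pr)

# Reheating between stages raises the total work extracted
print(w_single, w_two_stage)
```

    Each reheat restores the inlet temperature of the next stage, so the later stages expand hotter gas and deliver more work, which is the thermodynamic rationale for incremental expansion with interstage heating.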

  7. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    NASA Astrophysics Data System (ADS)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

    Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Of these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to sample below the Nyquist rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used to learn over-complete sparse representations of these compressed datasets. Finally, fault classification is achieved in two stages: a pre-training classification stage based on a stacked autoencoder and a softmax regression layer, forming the deep net (the first stage), and a re-training classification stage based on the backpropagation (BP) algorithm, forming the fine-tuning (the second stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
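
    The front end of such a pipeline is the compressed-sensing measurement step itself. As a toy sketch (not the paper's pipeline or autoencoder; signal, sizes, and spike positions are made up for illustration), a random Gaussian measurement matrix projects a long signal down to a few compressed measurements, which a classifier could then consume directly:

```python
import random

random.seed(42)

n, m = 256, 32   # original signal length and number of compressed measurements

# A sparse toy "vibration" signal: mostly zeros with a few spikes
x = [0.0] * n
for k in (17, 80, 201):
    x[k] = 1.0

# Random Gaussian measurement matrix Phi (m x n), a classic CS choice
phi = [[random.gauss(0.0, 1.0 / m ** 0.5) for _ in range(n)] for _ in range(m)]

# Compressed measurements y = Phi @ x: m numbers stand in for n raw samples
y = [sum(phi[i][j] * x[j] for j in range(n)) for i in range(m)]

print(len(y))  # 32
```

    Classifying from y directly, rather than first reconstructing x, is the "compressed domain" idea the paper builds on; the sparse autoencoder then learns an over-complete representation of such measurements.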

  8. 49 CFR Appendix D to Part 173 - Test Methods for Dynamite (Explosive, Blasting, Type A)

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... weighed to determine the percent of weight loss. 3. Test method D-3—Compression Exudation Test The entire... from the glass tube and weighed to determine the percent of weight loss. EC02MR91.067 ... assembly is placed under the compression rod, and compression is applied by means of the weight on the...

  9. Three dimensional range geometry and texture data compression with space-filling curves.

    PubMed

    Chen, Xia; Zhang, Song

    2017-10-16

    This paper presents a novel method to effectively store three-dimensional (3D) data and 2D texture data into a regular 24-bit image. The proposed method uses the Hilbert space-filling curve to map the normalized unwrapped phase map to two 8-bit color channels, and saves the third color channel for 2D texture storage. By further leveraging existing 2D image and video compression techniques, the proposed method can achieve high compression ratios while effectively preserving data quality. Since the encoding and decoding processes can be applied to most of the current 2D media platforms, this proposed compression method can make 3D data storage and transmission available for many electrical devices without requiring special hardware changes. Experiments demonstrate that if a lossless 2D image/video format is used, both original 3D geometry and 2D color texture can be accurately recovered; if lossy image/video compression is used, only black-and-white or grayscale texture can be properly recovered, but much higher compression ratios (e.g., 1543:1 against the ASCII OBJ format) are achieved with slight loss of 3D geometry quality.
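
    The mapping step relies on the Hilbert space-filling curve, which visits every cell of a 2D grid while keeping consecutive indices spatially adjacent. A sketch of the standard integer index/coordinate conversion (the classic iterative algorithm, shown here for the mapping step only, not the paper's full phase-encoding pipeline):

```python
def _rot(n, x, y, rx, ry):
    """Rotate/flip a quadrant so the sub-curves share a consistent orientation."""
    if ry == 0:
        if rx == 1:
            x, y = n - 1 - x, n - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    """Map cell (x, y) on an n x n grid (n a power of two) to its
    distance d along the Hilbert curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = _rot(n, x, y, rx, ry)
        s //= 2
    return d

def d2xy(n, d):
    """Inverse mapping: Hilbert distance d back to cell (x, y)."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        x, y = _rot(s, x, y, rx, ry)
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

    Because neighboring indices stay neighbors on the grid, traversing the phase map in Hilbert order preserves spatial locality, which is what lets the subsequent 2D image/video compression work well on the reordered data.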

  10. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform acceptable outer planet mission at reduced downlink telemetry bit rates.

  11. Comparative study of label and label-free techniques using shotgun proteomics for relative protein quantification.

    PubMed

    Sjödin, Marcus O D; Wetterhall, Magnus; Kultima, Kim; Artemenko, Konstantin

    2013-06-01

    The analytical performance of three different strategies for relative protein quantification using shotgun proteomics, iTRAQ (isobaric tag for relative and absolute quantification), dimethyl labeling (DML), and label-free (LF), has been evaluated. The methods were explored using samples containing (i) bovine proteins in known ratios and (ii) bovine proteins in known ratios spiked into Escherichia coli. The latter case mimics the actual conditions in a typical biological sample, with a few differentially expressed proteins and a bulk of proteins with unchanged ratios. Additionally, the evaluation was performed on both QStar and LTQ-FTICR mass spectrometers. LF LTQ-FTICR was found to have the highest proteome coverage, while the highest accuracy based on the artificially regulated proteins was found for DML LTQ-FTICR (54%). A varying linearity (k: 0.55-1.16, r(2): 0.61-0.96) was shown for all methods within the selected dynamic ranges. All methods were found to consistently underestimate bovine protein ratios when matrix proteins were added; however, LF LTQ-FTICR was more tolerant of this compression effect. A single peptide was demonstrated to be sufficient for reliable quantification using iTRAQ. A ranking system utilizing several parameters important for quantitative proteomics showed that the overall performance of the five different methods was: DML LTQ-FTICR > iTRAQ QStar > LF LTQ-FTICR > DML QStar > LF QStar. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Iterative dictionary construction for compression of large DNA data sets.

    PubMed

    Kuruppu, Shanika; Beresford-Smith, Bryan; Conway, Thomas; Zobel, Justin

    2012-01-01

    Genomic repositories increasingly include individual as well as reference sequences, which tend to share long identical and near-identical strings of nucleotides. However, the sequential processing used by most compression algorithms, and the volumes of data involved, mean that these long-range repetitions are not detected. An order-insensitive, disk-based dictionary construction method can detect this repeated content and use it to compress collections of sequences. We explore a dictionary construction method that improves repeat identification in large DNA data sets. COMRAD, our adaptation of an existing disk-based method, identifies exact repeated content in collections of sequences with similarities within and across the set of input sequences. COMRAD compresses the data over multiple passes, which is an expensive process, but allows COMRAD to compress large data sets within reasonable time and space. COMRAD allows for random access to individual sequences and subsequences without decompressing the whole data set. COMRAD has no competitor in terms of the size of data sets that it can compress (extending to many hundreds of gigabytes) and, even for smaller data sets, the results are competitive compared to alternatives; as an example, 39 S. cerevisiae genomes compressed to 0.25 bits per base.
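
    The multi-pass dictionary idea above can be illustrated with a toy sketch. This is not the actual COMRAD algorithm: the fixed k-mer length, pass count, and the use of private-use codepoints as dictionary references are all illustrative simplifications.

```python
from collections import Counter

def build_dictionary(sequences, k=8, min_count=2, rounds=3):
    """Toy multi-pass dictionary builder (not the real COMRAD algorithm):
    each pass finds the most frequent length-k substring across all
    sequences and replaces it with a single-token dictionary reference."""
    dictionary = []
    for _ in range(rounds):
        counts = Counter()
        for seq in sequences:
            for i in range(len(seq) - k + 1):
                counts[seq[i:i + k]] += 1
        if not counts:
            break
        repeat, n = counts.most_common(1)[0]
        if n < min_count:
            break
        token = chr(0xE000 + len(dictionary))  # private-use codepoint as reference
        dictionary.append(repeat)
        sequences = [s.replace(repeat, token) for s in sequences]
    return sequences, dictionary

def expand(seq, dictionary):
    """Recover the original sequence from its tokenised form."""
    for idx in reversed(range(len(dictionary))):
        seq = seq.replace(chr(0xE000 + idx), dictionary[idx])
    return seq

genomes = ["ACGTACGTTTGACGTACGT", "GGACGTACGTTTACGTACGT"]
packed, dic = build_dictionary(genomes, k=8)
assert all(expand(p, dic) == g for p, g in zip(packed, genomes))
```

    Expanding in reverse dictionary order handles later entries that may themselves contain earlier tokens, mirroring the hierarchical repeats a multi-pass scheme builds up.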

  13. Operations Monitoring Assistant System Design

    DTIC Science & Technology

    1986-07-01

    Logic. Artificial Intelligence 25(1):75-94, January. ... Nils J. Nilsson. Problem-Solving Methods in Artificial Intelligence. McGraw-Hill Book ... operations monitoring assistant (OMA) system is designed that combines operations research, artificial intelligence, and human reasoning techniques and ... KnowledgeCraft (from Carnegie Group), and S.1 (from Teknowledge). These tools incorporate the best methods of applied artificial intelligence, and

  14. Alteration of blue pigment in artificial iris in ocular prosthesis: effect of paint, drying method and artificial aging.

    PubMed

    Goiato, Marcelo Coelho; Fernandes, Aline Úrsula Rocha; dos Santos, Daniela Micheline; Hadadd, Marcela Filié; Moreno, Amália; Pesqueira, Aldiéris Alves

    2011-02-01

    The artificial iris is the structure responsible for the dissimulation and aesthetics of an ocular prosthesis. The objective of the present study was to evaluate the color stability of the artificial iris of microwave-polymerized ocular prostheses as a function of paint type, drying method and accelerated aging. A total of 40 discs of microwave-polymerized acrylic resin were fabricated and divided according to the blue paint type (n = 5): hydrosoluble acrylic, nitrocellulose automotive, hydrosoluble gouache and oil paints. Paints were dried either naturally or under an infrared light bulb. Each specimen consisted of one disc in colorless acrylic resin and another colored with a basic sclera pigment. Painting was performed on one surface of one of the discs. The specimens were submitted to an artificial aging chamber under ultraviolet light for 1008 h. A reflective spectrophotometer was used to evaluate color changes. Data were evaluated by 3-way repeated-measures ANOVA and the Tukey HSD test (α = 0.05). All paints suffered color alteration. The oil paint presented the highest color resistance to artificial aging regardless of drying method. Copyright © 2010 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  15. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modification of one of the most famous PSA learning algorithms--the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On a faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On a slower scale, output neurons compete for the fulfillment of their "own interests": on this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it is briefly analyzed how (or why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method.
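
    A minimal sketch of the underlying Subspace Learning Algorithm (the classic Oja subspace rule, without the paper's time-oriented hierarchical modification; data dimensions and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 2                                   # data dimension, subspace dimension

# Synthetic data concentrated in a random 2-D subspace plus small noise.
basis, _ = np.linalg.qr(rng.standard_normal((d, m)))
Z = rng.standard_normal((3000, m)) * np.array([3.0, 2.0])
X = Z @ basis.T + 0.05 * rng.standard_normal((3000, d))

# Oja's subspace rule: dW = eta * (x y^T - W y y^T), with y = W^T x.
W = 0.1 * rng.standard_normal((d, m))
eta = 0.002
for _ in range(5):                            # a few passes over the data
    for x in X:
        y = W.T @ x
        W += eta * (np.outer(x, y) - W @ np.outer(y, y))

# The learned columns should span the true principal subspace (but, as the
# abstract notes, SLA alone does not rotate them onto the eigenvectors).
Q, _ = np.linalg.qr(W)
overlap = np.linalg.svd(Q.T @ basis, compute_uv=False)
```

    The singular values in `overlap` are the cosines of the principal angles between the learned and true subspaces; values near 1 indicate convergence to the principal subspace.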

  16. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    A Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to batteries because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSN) and communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in a WVSN. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platforms used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in WVSN.
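
    As a concrete instance of the trade-off, here is a sketch of one of the simplest bi-level schemes, row-wise run-length coding, with a crude bits-transmitted proxy for radio energy (the 8-bit run-length field and the tiny test image are assumptions, not the paper's test set):

```python
def rle_bilevel(rows):
    """Row-wise run-length coding of a bi-level image (list of 0/1 lists).
    Each row is encoded as its first pixel value followed by run lengths."""
    encoded = []
    for row in rows:
        runs, current, length = [], row[0], 0
        for px in row:
            if px == current:
                length += 1
            else:
                runs.append(length)
                current, length = px, 1
        runs.append(length)
        encoded.append((row[0], runs))
    return encoded

def rle_decode(encoded):
    rows = []
    for first, runs in encoded:
        row, value = [], first
        for length in runs:
            row.extend([value] * length)
            value ^= 1                       # runs alternate between 0 and 1
        rows.append(row)
    return rows

image = [[0] * 60 + [1] * 4, [1] * 64]
packed = rle_bilevel(image)
assert rle_decode(packed) == image

# Crude energy model (assumed): radio energy scales with transmitted bits.
raw_bits = sum(len(r) for r in image)                     # 1 bit per pixel
coded_bits = sum(1 + 8 * len(runs) for _, runs in packed)  # 8-bit run lengths
```

    For images with long runs the coded stream is far smaller than the raw bitmap; for noisy images it can grow larger, which is exactly why the complexity/efficiency comparison in the paper matters.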

  17. Review of Artificial Abrasion Test Methods for PV Module Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, David C.; Muller, Matt T.; Simpson, Lin J.

    This review is intended to identify the method or methods--and the basic details of those methods--that might be used to develop an artificial abrasion test. Methods used in the PV literature were compared with their closest implementation in existing standards. Also, meetings of the International PV Quality Assurance Task Force Task Group 12-3 (TG12-3, which is concerned with coated glass) were used to identify established test methods. Feedback from the group, which included many of the authors from the PV literature, included insights not explored within the literature itself. The combined experience and examples from the literature are intended to provide an assessment of the present industry practices and an informed path forward. Recommendations toward artificial abrasion test methods are then identified based on the experiences in the literature and feedback from the PV community. The review here is strictly focused on abrasion. Assessment methods, including optical performance (e.g., transmittance or reflectance), surface energy, and verification of chemical composition were not examined. Methods of artificially soiling PV modules or other specimens were not examined. The weathering of artificially or naturally soiled specimens (which may ultimately include combined temperature and humidity, thermal cycling and ultraviolet light) was also not examined. A sense of the purpose or application of an abrasion test method within the PV industry should, however, be evident from the literature.

  18. Compressive sensing method for recognizing cat-eye effect targets.

    PubMed

    Li, Li; Li, Hui; Dang, Ersheng; Liu, Bo

    2013-10-01

    This paper proposes a cat-eye effect target recognition method based on compressive sensing (CS) and presents a recognition method (sample processing before reconstruction based on compressed sensing, or SPCS) for image processing. In this method, the linear projections of the original image sequences are used to remove dynamic background distractions and extract cat-eye effect targets. Furthermore, the corresponding imaging mechanism for acquiring active and passive image sequences is put forward. This method uses fewer images to recognize cat-eye effect targets, reduces data storage, and translates traditional target identification, based on original image processing, into the processing of measurement vectors. The experimental results show that the SPCS method is feasible and superior to the shape-frequency dual criteria method.
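
    The role of linear projections in cancelling the background can be sketched as follows. The measurement matrix, scene model, and sizes are illustrative, not the paper's imaging setup; the point is that linearity lets the active/passive background subtraction happen entirely in the low-dimensional measurement domain:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 64                                   # n pixels, m << n measurements
phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix

background = rng.standard_normal(n)              # cluttered scene
target = np.zeros(n)
target[100:110] = 5.0                            # bright cat-eye return

frame_active = background + target               # laser illumination on
frame_passive = background                       # laser illumination off

# Subtracting the two measurement vectors cancels the background exactly,
# leaving the measurements of the target alone.
y_diff = phi @ frame_active - phi @ frame_passive
assert np.allclose(y_diff, phi @ target)
```

    Only `m` numbers per frame are stored and processed instead of `n` pixels, which is the storage reduction the abstract refers to.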

  19. Compression of head-related transfer function using autoregressive-moving-average models and Legendre polynomials.

    PubMed

    Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob

    2013-11-01

    Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
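
    A simplified one-dimensional analogue of the Legendre-truncation step (the paper uses Legendre polynomials on the sphere; here a 1-D interval fit with an invented smooth coefficient track stands in):

```python
import numpy as np
from numpy.polynomial import legendre

x = np.linspace(-1, 1, 200)
f = np.exp(-x**2) * np.cos(3 * x)        # stand-in for a smooth coefficient track

coef = legendre.legfit(x, f, deg=12)     # 13 Legendre coefficients vs 200 samples
recon = legendre.legval(x, coef)

err_db = 20 * np.log10(np.max(np.abs(f - recon)) / np.max(np.abs(f)))
assert err_db < -40                      # far below a 4 dB spectral-error budget
```

    Because smooth functions have rapidly decaying Legendre coefficients, a low-order truncation reconstructs them almost exactly; the number of coefficients needed is a direct measure of spatial complexity, as the abstract notes.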

  20. Two-dimensional compression of surface electromyographic signals using column-correlation sorting and image encoders.

    PubMed

    Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O

    2009-01-01

    We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
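
    The correlation-sorting preprocessing can be sketched with a greedy column reordering. This is a simplified stand-in for the authors' method, and the interleaved synthetic columns are invented for illustration:

```python
import numpy as np

def sort_columns_by_correlation(img):
    """Greedy column reordering: start from column 0 and repeatedly append
    the unused column most correlated with the one just placed (a simplified
    stand-in for the correlation-sorting preprocessing)."""
    order, remaining = [0], set(range(1, img.shape[1]))
    while remaining:
        last = img[:, order[-1]]
        best = max(remaining, key=lambda j: np.corrcoef(last, img[:, j])[0, 1])
        order.append(best)
        remaining.remove(best)
    return img[:, order]

rng = np.random.default_rng(2)
t = np.linspace(0, 6, 64)
# Columns alternate between two waveform families, as interleaved S-EMG
# segments might; sorting should group similar columns together.
cols = [np.sin(t) * (1 + 0.05 * k) if k % 2 == 0 else np.cos(t) * (1 + 0.05 * k)
        for k in range(8)]
img = np.stack(cols, axis=1) + 0.01 * rng.standard_normal((64, 8))

adj_energy = lambda a: float(np.sum(np.diff(a, axis=1) ** 2))
sorted_img = sort_columns_by_correlation(img)
assert adj_energy(sorted_img) < adj_energy(img)
```

    Lower adjacent-column difference energy means a smoother 2-D "image", which is what lets intraframe image and video coders such as JPEG2000 and H.264/AVC compress the signal more effectively.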

  1. A simple accurate chest-compression depth gauge using magnetic coils during cardiopulmonary resuscitation

    NASA Astrophysics Data System (ADS)

    Kandori, Akihiko; Sano, Yuko; Zhang, Yuhua; Tsuji, Toshio

    2015-12-01

    This paper describes a new method for calculating chest compression depth and a simple chest-compression gauge for validating the accuracy of the method. The chest-compression gauge has two plates incorporating two magnetic coils, a spring, and an accelerometer. The coils are located at both ends of the spring, and the accelerometer is set on the bottom plate. Waveforms obtained using the magnetic coils (hereafter, "magnetic waveforms"), which are proportional to the compression-force waveforms, and the acceleration waveforms were measured at the same time. The weight factor expressing the relationship between the second derivatives of the magnetic waveforms and the measured acceleration waveforms was calculated. An estimated compression-displacement (depth) waveform was obtained by multiplying the weight factor and the magnetic waveforms. Displacements of two large springs (with spring constants similar to that of a thorax) and displacements of a cardiopulmonary resuscitation training manikin were measured using the gauge to validate the accuracy of the calculated waveform. A laser-displacement detection system was used to compare the real displacement waveform and the estimated waveform. Intraclass correlation coefficients (ICCs) between the real displacements measured with the laser system and the estimated displacement waveforms were calculated. The estimated displacement error of the compression depth was within 2 mm (<1 standard deviation). All ICCs (two springs and a manikin) were above 0.85 (0.99 in the case of one of the springs). The developed simple chest-compression gauge, based on a new calculation method, provides an accurate compression depth (estimation error < 2 mm).
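
    The weight-factor calculation can be sketched numerically. The signal models, the coil-to-depth proportionality constant, and the noise level are assumptions made for illustration, not the paper's measured data:

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 1000.0
t = np.arange(0, 2, 1 / fs)

# Assumed ground truth: 5 cm compressions at 2 Hz.
depth = 0.025 * (1 - np.cos(2 * np.pi * 2 * t))          # metres
coil = 40.0 * depth                                      # coil signal ∝ force ∝ depth (assumed)
accel = (0.025 * (2 * np.pi * 2) ** 2 * np.cos(2 * np.pi * 2 * t)
         + 0.1 * rng.standard_normal(t.size))            # noisy accelerometer

# Weight factor: least-squares fit of the coil signal's second derivative
# to the measured acceleration; depth estimate = w * coil.
coil_dd = np.gradient(np.gradient(coil, 1 / fs), 1 / fs)
w = float(coil_dd @ accel / (coil_dd @ coil_dd))
depth_est = w * coil

err_mm = 1000 * np.max(np.abs(depth_est - depth))
assert err_mm < 2.0      # within the 2 mm accuracy reported above
```

    Because both the coil signal and the acceleration derive from the same displacement, a single scalar fit suffices; no integration of the accelerometer (with its drift problems) is needed.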

  2. The compression–error trade-off for large gridded data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silver, Jeremy D.; Zender, Charles S.

    The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.

  3. The compression–error trade-off for large gridded data sets

    DOE PAGES

    Silver, Jeremy D.; Zender, Charles S.

    2017-01-27

    The netCDF-4 format is widely used for large gridded scientific data sets and includes several compression methods: lossy linear scaling and the non-lossy deflate and shuffle algorithms. Many multidimensional geoscientific data sets exhibit considerable variation over one or several spatial dimensions (e.g., vertically) with less variation in the remaining dimensions (e.g., horizontally). On such data sets, linear scaling with a single pair of scale and offset parameters often entails considerable loss of precision. We introduce an alternative compression method called "layer-packing" that simultaneously exploits lossy linear scaling and lossless compression. Layer-packing stores arrays (instead of a scalar pair) of scale and offset parameters. An implementation of this method is compared with lossless compression, storing data at fixed relative precision (bit-grooming) and scalar linear packing in terms of compression ratio, accuracy and speed. When viewed as a trade-off between compression and error, layer-packing yields similar results to bit-grooming (storing between 3 and 4 significant figures). Bit-grooming and layer-packing offer significantly better control of precision than scalar linear packing. Relative performance, in terms of compression and errors, of bit-groomed and layer-packed data were strongly predicted by the entropy of the exponent array, and lossless compression was well predicted by entropy of the original data array. Layer-packed data files must be "unpacked" to be readily usable. The compression and precision characteristics make layer-packing a competitive archive format for many scientific data sets.
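
    The difference between scalar linear packing and layer-packing can be sketched with 16-bit quantization. The synthetic field, whose magnitude grows exponentially with the vertical level, is an invented illustration of the vertical-variation problem the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(3)
nz, ny, nx = 10, 32, 32

# Field whose magnitude varies strongly with the vertical level.
data = np.exp(np.arange(nz))[:, None, None] * (1 + 0.1 * rng.standard_normal((nz, ny, nx)))

def pack(a, bits=16):
    """Linear scale/offset quantization, immediately dequantized so the
    round-trip error can be inspected."""
    lo, hi = a.min(), a.max()
    scale = (hi - lo) / (2**bits - 1)
    q = np.round((a - lo) / scale)
    return q * scale + lo

scalar = pack(data)                                  # one scale/offset pair
layered = np.stack([pack(layer) for layer in data])  # per-layer scale/offset

rel_err = lambda approx: float(np.max(np.abs(approx - data) / np.abs(data)))
assert rel_err(layered) < rel_err(scalar)
```

    With a single scale for the whole array, the small-magnitude bottom layers absorb quantization steps sized for the largest layer; per-layer scales keep the relative error roughly uniform across levels.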

  4. Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jin; Liu, Zilong

    2017-12-01

    Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectrum bands and also exploit spatial correlations of each band. However, the use of an NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images. The method is based on a pair-wise multilevel grouping approach for the NTD to overcome its high computational cost. The proposed method has a low complexity with only a slight decrease in coding performance compared to the conventional NTD. Experiments confirm that the method requires less processing time while keeping better coding performance than the case in which the NTD is not used. The proposed approach has potential application in the lossy compression of hyper-spectral or multi-spectral images.

  5. Method and apparatus for signal compression

    DOEpatents

    Carangelo, R.M.

    1994-02-08

    The method and apparatus of the invention effects compression of an analog electrical signal (e.g., representing an interferogram) by introducing into it a component that is a cubic function thereof, normally as a nonlinear negative signal in a feedback loop of an Op Amp. The compressed signal will most desirably be digitized and then digitally decompressed so as to produce a signal that emulates the original. 8 figures.

  6. Method and apparatus for signal compression

    DOEpatents

    Carangelo, Robert M.

    1994-02-08

    The method and apparatus of the invention effects compression of an analog electrical signal (e.g., representing an interferogram) by introducing into it a component that is a cubic function thereof, normally as a nonlinear negative signal in a feedback loop of an Op Amp. The compressed signal will most desirably be digitized and then digitally decompressed so as to produce a signal that emulates the original.
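
    A numerical sketch of the cubic relationship the two patent records above describe, assuming the compressed signal v satisfies x = v + a·v³ (a simplified reading of the abstract; the coefficient value is illustrative):

```python
import numpy as np

a = 0.5  # assumed cubic-feedback coefficient (illustrative value)

def compress(x):
    """Model of the analog stage: find v such that x = v + a*v**3.
    For a > 0 the map is strictly monotonic, so the cubic has exactly
    one real root."""
    v = np.empty_like(x)
    for i, xi in enumerate(x):
        roots = np.roots([a, 0.0, 1.0, -xi])
        v[i] = roots[np.abs(roots.imag) < 1e-9].real[0]
    return v

def decompress(v):
    """Digital decompression simply re-applies the cubic relationship."""
    return v + a * v ** 3

x = np.linspace(-10.0, 10.0, 41)                      # stand-in for an interferogram
v = compress(x)
assert np.max(np.abs(v)) < 0.3 * np.max(np.abs(x))    # dynamic range reduced
assert np.allclose(decompress(v), x, atol=1e-8)       # emulation of the original
```

    The compressed signal's reduced dynamic range is what makes it easier to digitize; the exact cubic inverse is why the digitized signal can be decompressed to emulate the original, as the abstract states.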

  7. Collisionless coupling processes in AMPTE releases

    NASA Technical Reports Server (NTRS)

    Lui, A. T. Y.

    1990-01-01

    An evaluation is made of results obtained to date by in situ measurements, numerical simulations, and theoretical considerations of Active Magnetospheric Particle Tracer Explorer chemical releases bearing on the nature of collisionless coupling processes. It is noted that both laminar and turbulent forces act to couple the solar wind momentum and energy to the release cloud; the magnetic field compression formed in this interaction plays an important intermediary role in coupling the two plasmas, and the intense electrostatic turbulence generated enhances the interaction. A scenario accounting for several features in the observed evolution of the December 27, 1984 artificial comet release is presented.

  8. An investigation of the compressive strength of PRD-49-3/Epoxy composites

    NASA Technical Reports Server (NTRS)

    Kulkarni, S. V.; Rice, J. S.; Rosen, B. W.

    1973-01-01

    The development of unidirectional fiber composite materials is discussed. The mechanical and physical properties of the materials are described. Emphasis is placed on analyzing the compressive behavior of composite materials and developing methods for increasing compressive strength. The test program for evaluating the various procedures for improving compressive strength is reported.

  9. Scan-Line Methods in Spatial Data Systems

    DTIC Science & Technology

    1990-09-04

    algorithms in detail to show some of the implementation issues. Data Compression: Storage and transmission times can be reduced by using compression ... goes through the data. Luckily, there are good one-directional compression algorithms, such as run-length coding, in which each scan line can be ... independently compressed. These are the algorithms to use in a parallel scan-line system. Data compression is usually only used for long-term storage of

  10. Potential application of a triaxial three-dimensional fabric (3-DF) as an implant.

    PubMed

    Shikinami, Y; Kawarada, H

    1998-01-01

    Various three-dimensional fabrics (3-DFs) woven with a triaxial three-dimensional (3A-3D) structure in which the warps, wefts and vertical fibres are three-dimensionally orientated with orthogonal, off-angle, cylindrical or complex fibre alignments using a single long fibre, which may be one of several kinds of fibres, have been developed. The physical strengths and behaviour of these fabrics under different external forces were measured for such stress-strain relationships as compressive, tensile and cyclic bending, compressive-torsional and compressive-tensile systems to evaluate the effect of the continuous loading caused by living body movements over a long period of time. The 3-DFs led to downward convex 'J'-shaped curves in stress-strain profiles, because they were markedly flexible at low strain levels, but became rigid as strain increased. In this behaviour they reflected the behaviour of natural cartilage rather than that of conventional artificial biomaterials. There were also some 3-DFs that showed hysteresis loss curves with quite similar mechanical strengths and behaviour to natural intervertebral discs with regard to the compressive-tensile cyclic stress and showed little variation from the first 'J'-shaped hysteresis profile even after 100,000 deformation cycles. Accordingly, it has been shown that, without a doubt, 3-DFs can be effective implants possessing both design and mechanical biocompatibilities as well as the durability necessary for long-term implantation in the living body. The surface of bioinert linear low-density polyethylene coating on multifilaments of ultra-high molecular weight polyethylene, a constructional fibre of 3A-3D weaving, was modified by treatment with corona-discharge and spray-coating of unsintered hydroxyapatite powder to impart chemical (surface) compatibility and biological activity, respectively.
Since the modified surface of the 3-DF was ascertained to have affinity and activity with simulated body fluid, an orthogonal 3-DF block was implanted in the tibia of a rabbit. Sufficient surrounding tissues entering into the textural space of the 3-DF could be observed at 4 weeks after implantation and the load necessary to break the block away from the bone reached a high value at 8 weeks. These results decisively showed that the 3-DFs could also acquire chemical (surface) and biological biocompatibilities and bonding capacity with bone and soft tissues through modification of the surface of the constructional fibre. The 3-DFs have definite potential in such applications as novel and effective artificial articular cartilages, intervertebral discs, menisci and materials for osteosynthesis and prosthesis, and the like.

  11. Quantitative Analysis of Ca, Mg, and K in the Roots of Angelica pubescens f. biserrata by Laser-Induced Breakdown Spectroscopy Combined with Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, J.; Shi, M.; Zheng, P.; Xue, Sh.; Peng, R.

    2018-03-01

    Laser-induced breakdown spectroscopy has been applied for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens Maxim. f. biserrata Shan et Yuan used in traditional Chinese medicine. Ca II 317.993 nm, Mg I 517.268 nm, and K I 769.896 nm spectral lines have been chosen to set up calibration models for the analysis using the external standard and artificial neural network methods. The linear correlation coefficients of the predicted concentrations versus the standard concentrations of six samples determined by the artificial neural network method are 0.9896, 0.9945, and 0.9911 for Ca, Mg, and K, respectively, which are better than for the external standard method. The artificial neural network method also gives better performance compared with the external standard method for the average and maximum relative errors, average relative standard deviations, and most maximum relative standard deviations of the predicted concentrations of Ca, Mg, and K in the six samples. Finally, it is shown that the artificial neural network method gives better performance compared to the external standard method for the quantitative analysis of Ca, Mg, and K in the roots of Angelica pubescens.

  12. Compression techniques in tele-radiology

    NASA Astrophysics Data System (ADS)

    Lu, Tianyu; Xiong, Zixiang; Yun, David Y.

    1999-10-01

    This paper describes a prototype telemedicine system for remote 3D radiation treatment planning. Due to the voluminous medical image data and image streams generated at interactive frame rates in the application, the importance of deploying adjustable lossy to lossless compression techniques is emphasized in order to achieve acceptable performance via various kinds of communication networks. In particular, the compression of the data substantially reduces the transmission time and therefore allows large-scale radiation distribution simulation and interactive volume visualization using remote supercomputing resources in a timely fashion. The compression algorithms currently used in the software we developed are the JPEG and H.263 lossy methods and the Lempel-Ziv (LZ77) lossless method. Both objective and subjective assessments of the effect of lossy compression methods on the volume data are conducted. Favorable results are obtained showing that a substantial compression ratio is achievable within the distortion tolerance. From our experience, we conclude that 30 dB (PSNR) is about the lower bound for acceptable quality when applying lossy compression to anatomy volume data (e.g. CT). For computer-simulated data, much higher PSNR (up to 100 dB) can be expected. This work not only introduces a novel approach for delivering medical services that will have significant impact on the existing cooperative image-based services, but also provides a platform for physicians to assess the effects of lossy compression techniques on the diagnostic and aesthetic appearance of medical imaging.
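
    The 30 dB acceptability floor refers to the peak signal-to-noise ratio, which can be computed as follows (defining the peak as the reference's dynamic range is one common convention, assumed here; the synthetic volume and noise level are illustrative):

```python
import numpy as np

def psnr(ref, test, peak=None):
    """Peak signal-to-noise ratio in dB between a reference volume and its
    lossy reconstruction. `peak` defaults to the reference's dynamic range."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((ref - test) ** 2)
    if peak is None:
        peak = ref.max() - ref.min()
    return float(10 * np.log10(peak**2 / mse))

rng = np.random.default_rng(4)
volume = rng.uniform(0, 255, size=(16, 16, 16))      # stand-in for a CT volume
noisy = volume + rng.normal(0, 4.0, size=volume.shape)
assert psnr(volume, noisy) > 30          # above the ~30 dB acceptability floor
```

    Each additional 6 dB corresponds to roughly halving the RMS reconstruction error, which is why computer-simulated data with little high-frequency content can reach PSNR values far above the 30 dB floor.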

  13. Artificial life and Piaget.

    PubMed

    Mueller, Ulrich; Grobman, K H.

    2003-04-01

    Artificial life provides important theoretical and methodological tools for the investigation of Piaget's developmental theory. This new method uses artificial neural networks to simulate living phenomena in a computer. A recent study by Parisi and Schlesinger suggests that artificial life might reinvigorate the Piagetian framework. We contrast artificial life with traditional cognitivist approaches, discuss the role of innateness in development, and examine the relation between physiological and psychological explanations of intelligent behaviour.

  14. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
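
    The wavelet-based spatial compression step can be sketched with a single-level 2-D Haar transform and coefficient thresholding. The patent's actual transform choice and block algorithm are not reproduced here; this only illustrates how an orthonormal wavelet transform concentrates a smooth image's information into few coefficients:

```python
import numpy as np

def haar2d(img):
    """One level of the orthonormal 2-D Haar transform."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2)      # row averages
    d = (img[0::2] - img[1::2]) / np.sqrt(2)      # row details
    rows = np.vstack([a, d])
    a2 = (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)
    d2 = (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2)
    return np.hstack([a2, d2])

def ihaar2d(c):
    """Exact inverse of haar2d."""
    n = c.shape[1] // 2
    a2, d2 = c[:, :n], c[:, n:]
    rows = np.empty_like(c)
    rows[:, 0::2] = (a2 + d2) / np.sqrt(2)
    rows[:, 1::2] = (a2 - d2) / np.sqrt(2)
    m = c.shape[0] // 2
    a, d = rows[:m], rows[m:]
    img = np.empty_like(c)
    img[0::2] = (a + d) / np.sqrt(2)
    img[1::2] = (a - d) / np.sqrt(2)
    return img

x, y = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
img = np.sin(4 * x) + np.cos(3 * y)               # smooth synthetic image plane

coef = haar2d(img)
thresh = np.quantile(np.abs(coef), 0.75)          # keep the largest 25% of coefficients
kept = np.where(np.abs(coef) >= thresh, coef, 0.0)
recon = ihaar2d(kept)
assert np.mean((recon - img) ** 2) < 1e-2 * np.mean(img**2)
```

    Because the transform is orthonormal, the reconstruction error equals exactly the energy of the discarded coefficients, so analysis on the reduced coefficient matrix loses little information.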

  15. 5 CFR 610.404 - Requirement for time-accounting method.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... REGULATIONS HOURS OF DUTY Flexible and Compressed Work Schedules § 610.404 Requirement for time-accounting method. An agency that authorizes a flexible work schedule or a compressed work schedule under this...

  16. A Finite Element Analysis for Predicting the Residual Compressive Strength of Impact-Damaged Sandwich Panels

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Jackson, Wade C.

    2008-01-01

    A simple analysis method has been developed for predicting the residual compressive strength of impact-damaged sandwich panels. The method is tailored for honeycomb core-based sandwich specimens that exhibit an indentation growth failure mode under axial compressive loading, which is driven largely by the crushing behavior of the core material. The analysis method is in the form of a finite element model, where the impact-damaged facesheet is represented using shell elements and the core material is represented using spring elements, aligned in the thickness direction of the core. The nonlinear crush response of the core material used in the analysis is based on data from flatwise compression tests. A comparison with a previous analysis method and some experimental data shows good agreement with results from this new approach.

  17. A Finite Element Analysis for Predicting the Residual Compression Strength of Impact-Damaged Sandwich Panels

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Jackson, Wade C.

    2008-01-01

    A simple analysis method has been developed for predicting the residual compression strength of impact-damaged sandwich panels. The method is tailored for honeycomb core-based sandwich specimens that exhibit an indentation growth failure mode under axial compression loading, which is driven largely by the crushing behavior of the core material. The analysis method is in the form of a finite element model, where the impact-damaged facesheet is represented using shell elements and the core material is represented using spring elements, aligned in the thickness direction of the core. The nonlinear crush response of the core material used in the analysis is based on data from flatwise compression tests. A comparison with a previous analysis method and some experimental data shows good agreement with results from this new approach.

  18. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.

    PubMed

    Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H; Gibescu, Madeleine; Liotta, Antonio

    2018-06-19

    Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of two consecutive layers of neurons into a scale-free topology during learning. Our method replaces the fully-connected layers of artificial neural networks with sparse ones before training, quadratically reducing the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
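
    The prune-and-regrow cycle described above can be sketched as follows. This is a minimal illustration of the general idea (magnitude-based pruning of a fraction of the active connections, random regrowth elsewhere), not the authors' implementation; the function names and the fraction `zeta` are our own choices.

```python
import numpy as np

def init_sparse_mask(n_in, n_out, density=0.1, rng=None):
    """Erdos-Renyi-style random sparse mask for one weight matrix."""
    rng = rng or np.random.default_rng(0)
    return (rng.random((n_in, n_out)) < density).astype(bool)

def evolve_topology(weights, mask, zeta=0.3, rng=None):
    """One evolution step in the spirit of sparse evolutionary training:
    prune the zeta fraction of active connections with smallest magnitude,
    then regrow the same number at random inactive positions, so the
    overall sparsity level stays constant."""
    rng = rng or np.random.default_rng(0)
    active = np.flatnonzero(mask)
    n_prune = int(zeta * active.size)
    # prune the smallest-magnitude active weights
    order = np.argsort(np.abs(weights.flat[active]))
    pruned = active[order[:n_prune]]
    mask.flat[pruned] = False
    # regrow at random currently-inactive positions with a fresh small init
    inactive = np.flatnonzero(~mask)
    regrown = rng.choice(inactive, size=n_prune, replace=False)
    mask.flat[regrown] = True
    weights.flat[regrown] = rng.normal(0.0, 0.01, size=n_prune)
    return weights * mask, mask
```

    In a full training loop this step would be interleaved with ordinary gradient updates on the masked weights.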

  19. Enhancement of DRPE performance with a novel scheme based on new RAC: Principle, security analysis and FPGA implementation

    NASA Astrophysics Data System (ADS)

    Neji, N.; Jridi, M.; Alfalou, A.; Masmoudi, N.

    2016-02-01

    The double random phase encryption (DRPE) method is a well-known all-optical architecture with many advantages, especially in terms of encryption efficiency. However, the method presents some vulnerabilities against attacks and requires a large quantity of information to encode the complex output plane. In this paper, we present an innovative hybrid technique to enhance the performance of the DRPE method in terms of compression and encryption. An optimized simultaneous compression and encryption method is applied to the real and imaginary components of the DRPE output plane. The technique consists of an innovative randomized arithmetic coder (RAC) that compresses the DRPE output planes well and at the same time enhances the encryption. The RAC is obtained by an appropriate selection of conditions in the binary arithmetic coding (BAC) process and by using a pseudo-random number to encrypt the corresponding outputs. The proposed technique is capable of processing video content and is compliant with modern video coding standards such as H.264 and HEVC. Simulations demonstrate that the proposed crypto-compression system overcomes the drawbacks of the DRPE method: the cryptographic properties of DRPE are enhanced while a compression rate of one-sixth can be achieved. FPGA implementation results show the high performance of the proposed method in terms of maximum operating frequency, hardware occupation, and dynamic power consumption.

  20. A recursively formulated first-order semianalytic artificial satellite theory based on the generalized method of averaging. Volume 1: The generalized method of averaging applied to the artificial satellite problem

    NASA Technical Reports Server (NTRS)

    Mcclain, W. D.

    1977-01-01

    A recursively formulated, first-order, semianalytic artificial satellite theory, based on the generalized method of averaging is presented in two volumes. Volume I comprehensively discusses the theory of the generalized method of averaging applied to the artificial satellite problem. Volume II presents the explicit development in the nonsingular equinoctial elements of the first-order average equations of motion. The recursive algorithms used to evaluate the first-order averaged equations of motion are also presented in Volume II. This semianalytic theory is, in principle, valid for a term of arbitrary degree in the expansion of the third-body disturbing function (nonresonant cases only) and for a term of arbitrary degree and order in the expansion of the nonspherical gravitational potential function.

  1. Comparison of reversible methods for data compression

    NASA Astrophysics Data System (ADS)

    Heer, Volker K.; Reinfelder, Hans-Erich

    1990-07-01

    Widely differing methods for data compression described in the ACR-NEMA draft are used in medical imaging. In our contribution we briefly review various methods and discuss their relevant advantages and disadvantages. In detail, we evaluate first-order DPCM, pyramid transformation, and S-transformation. As coding algorithms we compare fixed and adaptive Huffman coding as well as Lempel-Ziv coding. Our comparison is performed on typical medical images from CT, MR, DSA, and DLR (Digital Luminescence Radiography). Apart from the achieved compression factors, we take into account the CPU time and main memory required both for compression and for decompression. For a realistic comparison we implemented the mentioned algorithms in the C programming language on a MicroVAX II and a SPARCstation 1.
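
    The kind of comparison described here (compression factor plus CPU time in both directions, with a reversibility check) is easy to reproduce with modern general-purpose codecs. A minimal sketch using Python's standard-library codecs, which combine LZ-style dictionary methods with entropy coding much like the algorithms above; the codec set and test data are illustrative only:

```python
import bz2, lzma, time, zlib

def compare_codecs(data: bytes):
    """Compare reversible codecs on the same input: compression ratio
    plus CPU time for compression and for decompression."""
    codecs = {"zlib": (zlib.compress, zlib.decompress),
              "bz2":  (bz2.compress,  bz2.decompress),
              "lzma": (lzma.compress, lzma.decompress)}
    results = {}
    for name, (comp, decomp) in codecs.items():
        t0 = time.perf_counter(); packed = comp(data); t1 = time.perf_counter()
        restored = decomp(packed); t2 = time.perf_counter()
        assert restored == data  # reversibility (lossless) check
        results[name] = {"ratio": len(data) / len(packed),
                         "compress_s": t1 - t0,
                         "decompress_s": t2 - t1}
    return results
```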

  2. Biomechanical analysis using FEA and experiments of a standard plate method versus three cable methods for fixing acetabular fractures with simultaneous THA.

    PubMed

    Aziz, Mina S R; Dessouki, Omar; Samiezadeh, Saeid; Bougherara, Habiba; Schemitsch, Emil H; Zdero, Radovan

    2017-08-01

    Acetabular fractures potentially account for up to half of all pelvic fractures, while pelvic fractures potentially account for over one-tenth of all human bone fractures. This is the first biomechanical study to assess acetabular fracture fixation using plates versus cables in the presence of a total hip arthroplasty, as done for the elderly. In Phase 1, finite element (FE) models compared a standard plate method versus 3 cable methods for repairing an acetabular fracture (type: anterior column plus posterior hemi-transverse) subjected to a physiological-type compressive load of 2207 N representing 3 × body weight for a 75 kg person during walking. FE stress maps were compared to choose the most mechanically stable cable method, i.e. lowest peak bone stress. In Phase 2, mechanical tests were then done in artificial hemipelvises to compare the standard plate method versus the optimal cable method selected from Phase 1. FE analysis results showed peak bone stresses of 255 MPa (Plate method), 205 MPa (Mears cable method), 250 MPa (Kang cable method), and 181 MPa (Mouhsine cable method). Mechanical tests then showed that the Plate method versus the Mouhsine cable method selected from Phase 1 had higher stiffness (662 versus 385 N/mm, p=0.001), strength (3210 versus 2060 N, p=0.009), and failure energy (8.8 versus 6.2 J, p=0.002), whilst they were statistically equivalent for interfragmentary sliding (p≥0.179) and interfragmentary gapping (p≥0.08). The Plate method had superior mechanical properties, but the Mouhsine cable method may be a reasonable alternative if osteoporosis prevents good screw thread interdigitation during plating. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  3. n-Gram-Based Text Compression.

    PubMed

    Nguyen, Vu H; Nguyen, Hien T; Duong, Hieu N; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly, based on its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram, achieving dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 text files with different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods.
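
    The greedy sliding-window encoding described above can be sketched with a toy dictionary. This is an illustrative reconstruction of the idea, not the authors' code; real dictionaries are built from a multi-gigabyte corpus, and codes would be packed into 2-4 bytes rather than kept as Python integers:

```python
def build_dict(corpus_words, max_n=5):
    """Toy n-gram dictionary: map each n-gram (n = 1..max_n) seen in the
    corpus to an integer code."""
    table = {}
    for n in range(1, max_n + 1):
        for i in range(len(corpus_words) - n + 1):
            table.setdefault(tuple(corpus_words[i:i + n]), len(table))
    return table

def encode(words, table, max_n=5):
    """Greedy sliding window: at each position, take the longest n-gram
    (up to max_n words) present in the dictionary."""
    codes, i = [], 0
    while i < len(words):
        for n in range(min(max_n, len(words) - i), 0, -1):
            gram = tuple(words[i:i + n])
            if gram in table:
                codes.append(table[gram]); i += n; break
        else:
            raise KeyError(f"out-of-dictionary word: {words[i]}")
    return codes

def decode(codes, table):
    inv = {v: k for k, v in table.items()}
    return [w for c in codes for w in inv[c]]
```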

  4. n-Gram-Based Text Compression

    PubMed Central

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods. PMID:27965708

  5. Authenticity examination of compressed audio recordings using detection of multiple compression and encoders' identification.

    PubMed

    Korycki, Rafal

    2014-05-01

    Since the appearance of digital audio recordings, audio authentication has been becoming increasingly difficult. The currently available technologies and free editing software allow a forger to cut or paste any single word without audible artifacts. Nowadays, the only method for digital audio files commonly approved by forensic experts is the ENF criterion. It consists of fluctuation analysis of the mains frequency induced in electronic circuits of recording devices. Therefore, its effectiveness is strictly dependent on the presence of the mains signal in the recording, which is a rare occurrence. Recently, much attention has been paid to authenticity analysis of compressed multimedia files, and several solutions have been proposed for detection of double compression in both digital video and digital audio. This paper addresses the problem of tampering detection in compressed audio files and discusses new methods that can be used for authenticity analysis of digital recordings. The presented approaches consist of evaluating statistical features extracted from the MDCT coefficients as well as other parameters that may be obtained from compressed audio files. The calculated feature vectors are used for training selected machine learning algorithms. Detecting multiple compression helps uncover tampering activities, as does identifying traces of montage in digital audio recordings. To enhance the methods' robustness, an encoder identification algorithm based on analysis of inherent compression parameters was developed and applied. The effectiveness of the tampering detection algorithms is tested on a predefined large music database consisting of nearly one million compressed audio files. The influence of the compression algorithms' parameters on classification performance is discussed based on the results of the current study. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. Techniques for information extraction from compressed GPS traces : final report.

    DOT National Transportation Integrated Search

    2015-12-31

    Developing techniques for extracting information requires a good understanding of the methods used to compress the traces. Many techniques for compressing trace data consisting of position (i.e., latitude/longitude) and time values have been developed....
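
    The report does not name a specific compression technique, but a widely used method for simplifying position traces is the Douglas-Peucker algorithm, sketched here for illustration (the `epsilon` tolerance is in the same units as the coordinates):

```python
def douglas_peucker(points, epsilon):
    """Simplify a polyline of (x, y) points: always keep the endpoints,
    and recursively keep the point farthest from the chord whenever it
    deviates by more than epsilon."""
    def perp_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == dy == 0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        # perpendicular distance from p to the line through a and b
        return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i_max = max(range(len(dists)), key=dists.__getitem__)
    if dists[i_max] <= epsilon:
        return [points[0], points[-1]]          # chord is close enough
    left = douglas_peucker(points[:i_max + 2], epsilon)
    right = douglas_peucker(points[i_max + 1:], epsilon)
    return left[:-1] + right                    # drop duplicated split point
```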

  7. Novel Near-Lossless Compression Algorithm for Medical Sequence Images with Adaptive Block-Based Spatial Prediction.

    PubMed

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2016-12-01

    To address the low compression efficiency of lossless compression and the low image quality of general near-lossless compression, a novel near-lossless compression algorithm based on adaptive spatial prediction is proposed in this paper for medical sequence images intended for diagnostic use. The proposed method employs adaptive block-size-based spatial prediction to predict blocks directly in the spatial domain, and applies the Lossless Hadamard Transform before quantization to improve the quality of reconstructed images. The block-based prediction breaks the pixel neighborhood constraint and takes full advantage of the local spatial correlations found in medical images. The adaptive block size guarantees a more rational division of images and improved use of the local structure. The results indicate that the proposed algorithm can efficiently compress medical images and produces a better peak signal-to-noise ratio (PSNR) under the same pre-defined distortion than other near-lossless methods.

  8. A new method of artificial latent fingerprint creation using artificial sweat and inkjet printer.

    PubMed

    Hong, Sungwook; Hong, Ingi; Han, Aleum; Seo, Jin Yi; Namgung, Juyoung

    2015-12-01

    In order to study fingerprinting in the field of forensic science, it is very important to have two or more latent fingerprints with identical chemical composition and intensity. In reality, however, it is impossible to obtain identical fingerprints, because a fingerprint comes out slightly differently every time. A previous study proposed an artificial fingerprint creation method in which inkjet ink was replaced with a solution of amino acids and sodium chloride: the components of human sweat. However, this method had some drawbacks: divalent cations were not added when formulating the artificial sweat solution, and diluted solutions were used for creating weakly deposited latent fingerprints. In this study, a method was developed to overcome the drawbacks of the previous approach. Several divalent cations were added because the amino acid-ninhydrin (or some of its analogues) complex is known to react with divalent cations to produce a photoluminescent product; similarly, the amino acid-1,2-indanedione complex is known to be catalyzed by a small amount of zinc ions to produce a highly photoluminescent product. Also, a new technique was developed that enables the intensity of printed latent fingerprint patterns to be adjusted. In this method, image processing software is used to control the intensity of the master fingerprint patterns, which adjusts the printing intensity of the latent fingerprints. This new method opens the way to producing more realistic artificial fingerprints of various strengths from one artificial sweat working solution. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. High-quality compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps, instead of solving a single minimization problem. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
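
    The regularization step at the core of such schemes can be sketched as a projected Landweber iteration. This is a generic illustration under the assumption of a nonnegativity constraint; the paper's guided-filter denoising step is omitted:

```python
import numpy as np

def projected_landweber(A, y, n_iter=500, step=None):
    """Recover a nonnegative signal x from measurements y = A @ x via
    Landweber iterations with projection onto the nonnegative orthant:
        x <- max(0, x + step * A.T @ (y - A @ x))
    (An interleaved denoising step, e.g. a guided filter, is omitted.)"""
    if step is None:
        # convergence requires step < 2 / ||A||_2^2
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = np.maximum(0.0, x + step * A.T @ (y - A @ x))
    return x
```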

  10. Formulation, Characterization and Physicochemical Evaluation of Ranitidine Effervescent Tablets

    PubMed Central

    Aslani, Abolfazl; Jahangiri, Hajar

    2013-01-01

    Purpose: The aim of this study was to design, formulate and physicochemically evaluate effervescent ranitidine hydrochloride (HCl) tablets, since they are easily administered, whereas the elderly and children sometimes have difficulty swallowing solid oral dosage forms. Methods: Effervescent ranitidine HCl tablets were prepared in a dosage of 300 mg by fusion and direct compression methods. The powder blend and granule mixture were evaluated for various pre-compression characteristics, such as angle of repose, compressibility index, mean particle size and Hausner's ratio. The tablets were evaluated for post-compression features including weight variation, hardness, friability, drug content, dissolution time, carbon dioxide content, effervescence time, pH, content uniformity and water content. Effervescent systems with appropriate pre- and post-compression qualities that dissolved rapidly in water were selected as the best formulations. Results: The results showed that the flowability achieved with the fusion method is greater than that of direct compression, and the F5 and F6 formulations of the 300 mg tablets were selected as the best formulations because of their physicochemical characteristics. Conclusion: In this study, citric acid, sodium bicarbonate and sweeteners (including mannitol, sucrose and aspartame) were selected. Aspartame, mint and orange flavors were most effective for masking the bitter taste of ranitidine. The fusion method is the better alternative in terms of physicochemical and physical properties. PMID:24312854

  11. Reconstruction of Complex Network based on the Noise via QR Decomposition and Compressed Sensing.

    PubMed

    Li, Lixiang; Xu, Dafei; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian

    2017-11-08

    It is generally known that the states of network nodes are stable and have strong correlations in a linear network system. We find that, without a control input, compressed sensing alone cannot reconstruct complex networks in which the states of nodes are generated through the linear network system. However, noise can drive the dynamics between nodes to break the stability of the system state. Therefore, a new method integrating QR decomposition and compressed sensing is proposed to solve the reconstruction problem of complex networks with the assistance of input noise. The state matrix of the system is decomposed by QR decomposition. We construct the measurement matrix with the aid of Gaussian noise so that the sparse input matrix can be reconstructed by compressed sensing. We also discover that noise can build a bridge between the dynamics and the topological structure. Experiments comparing the proposed method with compressed sensing alone show that it is more accurate and more efficient in reconstructing four model networks and six real networks. In addition, the proposed method can reconstruct not only sparse complex networks but also dense complex networks.
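
    The compressed sensing recovery step that the paper builds on can be illustrated with a standard greedy solver such as Orthogonal Matching Pursuit (shown here as a generic sketch; the paper's QR-decomposition stage and noise-constructed measurement matrix are not reproduced):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from
    y = Phi @ x. Each step picks the column most correlated with the
    residual, then re-fits by least squares on the chosen support."""
    residual, support = y.astype(float), []
    for _ in range(k):
        corr = np.abs(Phi.T @ residual)
        corr[support] = 0.0                    # never re-pick a column
        support.append(int(np.argmax(corr)))
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```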

  12. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences of large genomes, and significantly better compression results show that "DNABIT Compress" outperforms the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
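
    The abstract's bit-assignment scheme for repeats is not spelled out, but the baseline idea it builds on (a fixed 2-bit code per base, i.e. 2 bits/base before any repeat handling) can be sketched as follows; the helper names are our own:

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    """Pack a DNA string at 2 bits per base (4 bases per byte).
    The base count must be stored separately to undo the padding."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for ch in seq[i:i + 4]:
            byte = (byte << 2) | CODE[ch]
        byte <<= 2 * (4 - len(seq[i:i + 4]))  # left-align a final partial byte
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, n_bases: int) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:n_bases])
```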

  13. Matched Filtering for Heart Rate Estimation on Compressive Sensing ECG Measurements.

    PubMed

    Da Poian, Giulia; Rozell, Christopher J; Bernardini, Riccardo; Rinaldo, Roberto; Clifford, Gari D

    2017-09-14

    Compressive Sensing (CS) has recently been applied as a low-complexity compression framework for long-term monitoring of electrocardiogram (ECG) signals using Wireless Body Sensor Networks. Long-term recording of ECG signals can be useful for diagnostic purposes and for monitoring the evolution of several widespread diseases. In particular, beat-to-beat intervals provide important clinical information, and these can be derived from the ECG signal by computing the distance between QRS complexes (R-peaks). Numerous methods for R-peak detection are available for uncompressed ECG. However, in the case of compressively sensed data, signal reconstruction requires relatively complex optimisation algorithms, which may entail significant energy consumption. This article addresses the problem of heart rate estimation from compressive sensing ECG recordings while avoiding reconstruction of the entire signal. We consider a framework where the ECG signals are represented in the form of CS linear measurements. The QRS locations are estimated in the compressed domain by computing the correlation of the compressed ECG with a known QRS template. Experiments on actual ECG signals show that our novel solution is competitive with methods applied to the reconstructed signals. By avoiding the reconstruction procedure, the proposed method proves to be very convenient for real-time, low-power applications.
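
    The key property exploited by such compressed-domain detection is that a random Gaussian measurement matrix approximately preserves inner products, so correlating the compressed signal against compressed shifted templates locates the template without reconstruction. A minimal sketch (the matrix size `m` and the toy template are illustrative; the paper's actual estimator is more elaborate):

```python
import numpy as np

def compressed_matched_filter(x, template, m, rng=None):
    """Locate a known template inside a signal using only m << n random
    measurements of the signal. For a random Gaussian Phi (scaled by
    1/sqrt(m)), <Phi x, Phi t_s> approximates <x, t_s> for each shifted
    copy t_s of the template, so the best shift is found in the
    compressed domain."""
    rng = rng or np.random.default_rng(0)
    n = len(x)
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)
    y = Phi @ x                                  # the only access to x
    scores = []
    for s in range(n - len(template) + 1):
        t = np.zeros(n)
        t[s:s + len(template)] = template
        scores.append(y @ (Phi @ t))             # compressed-domain correlation
    return int(np.argmax(scores))
```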

  14. 46 CFR 188.10-21 - Compressed gas.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... PROVISIONS Definition of Terms Used in This Subchapter § 188.10-21 Compressed gas. This term includes any... by the Reid method covered by the American Society for Testing Materials Method of Test for Vapor...

  15. Artificial intelligence in the diagnosis of low back pain.

    PubMed

    Mann, N H; Brown, M D

    1991-04-01

    Computerized methods are used to recognize the characteristics of patient pain drawings. Artificial neural network (ANN) models are compared with expert predictions and traditional statistical classification methods when placing the pain drawings of low back pain patients into one of five clinically significant categories. A discussion is undertaken outlining the differences in these classifiers and the potential benefits of the ANN model as an artificial intelligence technique.

  16. Synthesis of Carbon Foam from Waste Artificial Marble Powder and Carboxymethyl Cellulose via Electron Beam Irradiation and Its Characterization

    PubMed Central

    Kim, Hong Gun; Kim, Yong Sun; Kwac, Lee Ku; Chae, Su-Hyeong; Shin, Hye Kyoung

    2018-01-01

    Carbon foams were prepared by carbonization of carboxymethyl cellulose (CMC)/waste artificial marble powder (WAMP) composites obtained via electron beam irradiation (EBI); these composites were prepared by mixing eco-friendly CMC with WAMP as a filler to improve its poor mechanical strength. Gel fractions of the CMC/WAMP composites obtained at various EBI doses were investigated, and it was found that the composites obtained at an EBI dose of 80 kGy showed the highest gel fraction (95%); hence, the composite prepared at this dose was selected for preparing the carbon foam. Thermogravimetric analysis of the CMC/WAMP composites obtained at 80 kGy showed that the addition of WAMP increased the thermal stability and carbon residues of the composites at 900 °C. SEM images showed that the cell walls of the CMC/WAMP carbon foams were thicker than those of the CMC carbon foam. In addition, energy dispersive X-ray spectroscopy showed that the CMC/WAMP carbon foams contained small amounts of aluminum derived from WAMP. The results confirmed that the increased WAMP content, and hence increased aluminum content, improved the thermal conductivity of the composites and their corresponding carbon foams. Moreover, the addition of WAMP increased the compressive strength of the CMC/WAMP composites and hence the strength of their corresponding carbon foams. In conclusion, this synthesis method is encouraging, as it produces carbon foams with a porous structure, good mechanical properties and good thermal conductivity. PMID:29565300

  17. Comparative data compression techniques and multi-compression results

    NASA Astrophysics Data System (ADS)

    Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.

    2013-12-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time savings. In communication, we always want to transmit data efficiently and noise-free. This paper provides some techniques for lossless compression of text-type data, together with comparative results for multiple and single compression, which will help to identify the better compression output and to develop compression algorithms.
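
    The single-versus-multiple compression comparison mentioned above is easy to demonstrate: once a stream has been compressed, its redundancy is largely gone, so a second pass gains little or nothing. A small sketch using zlib as a stand-in for the paper's text compressors:

```python
import zlib

def multi_compress(data: bytes, passes: int):
    """Apply zlib repeatedly and report the size after each pass."""
    sizes = [len(data)]
    for _ in range(passes):
        data = zlib.compress(data, 9)   # level 9 = maximum compression
        sizes.append(len(data))
    return sizes
```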

  18. Lossless compression of otoneurological eye movement signals.

    PubMed

    Tossavainen, Timo; Juhola, Martti

    2002-12-01

    We studied the performance of several lossless compression algorithms on eye movement signals recorded in otoneurological balance and other physiological laboratories. Despite the wide use of these signals, their compression had not been studied prior to our research. The compression methods were based on the common model of using a predictor to decorrelate the input and an entropy coder to encode the residual. We found that these eye movement signals, recorded at 400 Hz with 13-bit amplitude resolution, could be losslessly compressed with a compression ratio of about 2.7.
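
    The predictor-plus-entropy-coder model mentioned above can be illustrated with the simplest predictor (first-order DPCM) and a zero-order entropy estimate of the residual; the signal below is synthetic, not real eye movement data:

```python
import math
from collections import Counter

def entropy_bits(samples):
    """Zero-order entropy (bits/sample) of a sequence - a lower bound on
    what an ideal memoryless entropy coder would spend on it."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def delta_residual(samples):
    """First-order linear predictor: predict each sample as the previous
    one and keep the prediction error (the classic DPCM front end).
    The first sample is stored verbatim so the transform is invertible."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
```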

  19. Lossy compression of weak lensing data

    DOE PAGES

    Vanderveld, R. Ali; Bernstein, Gary M.; Stoughton, Chris; ...

    2011-07-12

    Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
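
    The variance-stabilizing idea behind square-root compression can be sketched as follows. This illustrates the principle only (quantize the square root of the counts so that quantization error stays below shot noise), not the exact Bernstein et al. coder; the step size of 0.5 is our illustrative choice:

```python
import numpy as np

def sqrt_compress(counts, step=0.5):
    """Poisson counts N carry shot noise ~sqrt(N), but sqrt(N) has
    roughly constant noise (~0.5), so quantizing sqrt(N) with a fixed
    step discards less information than the photon noise itself while
    shrinking the dynamic range dramatically."""
    return np.round(np.sqrt(counts) / step).astype(np.int32)

def sqrt_decompress(codes, step=0.5):
    return (codes * step) ** 2
```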

  20. High-performance compression of astronomical images

    NASA Technical Reports Server (NTRS)

    White, Richard L.

    1993-01-01

    Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
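
    One level of the 2-D H-transform is just sums and differences over 2x2 blocks, which is why the method can be made exactly reversible in integer arithmetic. A minimal sketch for even-sized images (the multi-level recursion on the sum band and the subsequent coefficient coding are omitted):

```python
import numpy as np

def htransform_level(img):
    """One level of the 2-D H-transform on an even-sized integer image:
    each 2x2 block [[a, b], [c, d]] maps to
        s = a+b+c+d,  hx = a+b-c-d,  hy = a-b+c-d,  hd = a-b-c+d,
    which is exactly invertible in integer arithmetic."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return (a + b + c + d, a + b - c - d, a - b + c - d, a - b - c + d)

def inverse_htransform_level(s, hx, hy, hd):
    # each sum below equals exactly 4x the original pixel, so // 4 is exact
    a = (s + hx + hy + hd) // 4
    b = (s + hx - hy - hd) // 4
    c = (s - hx + hy - hd) // 4
    d = (s - hx - hy + hd) // 4
    h, w = s.shape
    img = np.empty((2 * h, 2 * w), dtype=s.dtype)
    img[0::2, 0::2] = a; img[0::2, 1::2] = b
    img[1::2, 0::2] = c; img[1::2, 1::2] = d
    return img
```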

  1. Isentropic compressive wave generator impact pillow and method of making same

    DOEpatents

    Barker, Lynn M.

    1985-01-01

    An isentropic compressive wave generator and method of making same. The wave generator comprises a disk or flat "pillow" member having component materials of different shock impedances formed in a configuration resulting in a smooth shock impedance gradient over its thickness, for interpositioning between an impactor member and a target specimen to produce a shock wave with a smooth, predictable rise time. The method of making the pillow member comprises reducing the component materials to powder form and forming the pillow member by sedimentation and compressive techniques.

  2. Isentropic compressive wave generator and method of making same

    DOEpatents

    Barker, L.M.

    An isentropic compressive wave generator and method of making same are disclosed. The wave generator comprises a disk or flat pillow member having component materials of different shock impedances formed in a configuration resulting in a smooth shock impedance gradient over the thickness thereof for interpositioning between an impactor member and a target specimen for producing a shock wave of a smooth predictable rise time. The method of making the pillow member comprises the reduction of the component materials to a powder form and forming the pillow member by sedimentation and compressive techniques.

  3. Complex-Difference Constrained Compressed Sensing Reconstruction for Accelerated PRF Thermometry with Application to MRI Induced RF Heating

    PubMed Central

    Cao, Zhipeng; Oh, Sukhoon; Otazo, Ricardo; Sica, Christopher T.; Griswold, Mark A.; Collins, Christopher M.

    2014-01-01

    Purpose: To introduce a novel compressed sensing reconstruction method to accelerate proton resonance frequency (PRF) shift temperature imaging for evaluation of MRI-induced radiofrequency (RF) heating. Methods: A compressed sensing approach that exploits sparsity of the complex difference between post-heating and baseline images is proposed to accelerate PRF temperature mapping. The method exploits the intra- and inter-image correlations to promote sparsity and remove shared aliasing artifacts. Validations were performed on simulations and on retrospectively undersampled data acquired in ex vivo and in vivo studies, comparing performance with previously proposed techniques. Results: The proposed complex-difference-constrained compressed sensing reconstruction method improved the reconstruction of smooth and local PRF temperature change images compared with various available reconstruction methods in a simulation study, a retrospective study with heating of a human forearm in vivo, and a retrospective study with heating of a sample of beef ex vivo. Conclusion: Complex-difference-based compressed sensing with a fully-sampled baseline image improves the reconstruction accuracy of accelerated PRF thermometry. It can be used to improve the volumetric coverage and temporal resolution in evaluation of RF heating due to MRI, and may help facilitate and validate temperature-based methods for safety assurance. PMID:24753099

  4. A hybrid data compression approach for online backup service

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Because of the large number of backup users, reducing the massive data load is a key problem for system designers, and data compression provides a good solution. Traditional data compression applications tend to adopt a single method, which has limitations: data stream compression can only realize intra-file compression, de-duplication only eliminates inter-file redundant data, and neither alone achieves the compression efficiency that backup service software requires. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to remove intra-file redundancy. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the suitability of each algorithm for particular situations is analyzed. The performance analysis shows that the hybrid compression policy yields a substantial improvement.
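
    The two-level policy described here can be sketched with standard-library pieces: a content-addressed chunk store eliminates identical chunks across users (global de-duplication), while zlib stands in for the intra-file stream compressor. The class name, chunk size, and recipe format are invented for illustration, not the paper's design.

```python
import hashlib
import zlib

class HybridBackupStore:
    """Toy two-level backup store: global dedup + per-chunk stream compression."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.store = {}                  # digest -> compressed chunk, shared by all users

    def backup(self, data: bytes) -> list:
        """Split data into chunks, store each unique chunk once, return a recipe."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.store:             # global (inter-file) de-duplication
                self.store[digest] = zlib.compress(chunk)   # intra-chunk compression
            recipe.append(digest)
        return recipe

    def restore(self, recipe: list) -> bytes:
        return b"".join(zlib.decompress(self.store[d]) for d in recipe)

store = HybridBackupStore()
a = b"shared header " * 1000 + b"user A payload"
b = b"shared header " * 1000 + b"user B payload"
ra, rb = store.backup(a), store.backup(b)
assert store.restore(ra) == a and store.restore(rb) == b
```

    Because the two files share their leading chunks, the store holds fewer chunks than the two recipes reference in total, which is the inter-file saving the abstract describes.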

  5. Evaluation of δ2H and δ18O of water in pores extracted by compression method-effects of closed pores and comparison to direct vapor equilibration and laser spectrometry method

    NASA Astrophysics Data System (ADS)

    Nakata, Kotaro; Hasegawa, Takuma; Oyama, Takahiro; Miyakawa, Kazuya

    2018-06-01

    Stable isotopes (δ2H and δ18O) of water can help our understanding of origin, mixing and migration of groundwater. In the formation with low permeability, it provides information about migration mechanism of ion such as diffusion and/or advection. Thus it has been realized as very important information to understand the migration of water and ions in it. However, in formation with low permeability it is difficult to obtain the ground water sample as liquid and water in pores needs to be extracted to estimate it. Compressing rock is the most common and widely used method of extracting water in pores. However, changes in δ2H and δ18O may take place during compression because changes in ion concentration have been reported in previous studies. In this study, two natural rocks were compressed, and the changes in the δ2H and δ18O with compression pressure were investigated. Mechanisms for the changes in water isotopes observed during the compression were then discussed. In addition, δ2H and δ18O of water in pores were also evaluated by direct vapor equilibration and laser spectrometry (DVE-LS) and δ2H and δ18O were compared with those obtained by compression. δ2H was found to change during the compression and a part of this change was found to be explained by the effect of water from closed pores extracted by compression. In addition, water isotopes in both open and closed pores were estimated by combining the results of 2 kinds of compression experiments. Water isotopes evaluated by compression that not be affected by water from closed pores showed good agreements with those obtained by DVE-LS indicating compression could show the mixed information of water from open and closed pores, while DVE-LS could show the information only for open pores. Thus, the comparison of water isotopes obtained by compression and DVE-LS could provide the information about water isotopes in closed and open pores.

  6. Design of two-dimensional channels with prescribed velocity distributions along the channel walls

    NASA Technical Reports Server (NTRS)

    Stanitz, John D

    1953-01-01

    A general method of design is developed for two-dimensional unbranched channels with prescribed velocities as a function of arc length along the channel walls. The method is developed for both compressible and incompressible, irrotational, nonviscous flow and applies to the design of elbows, diffusers, nozzles, and so forth. In part I solutions are obtained by relaxation methods; in part II solutions are obtained by a Green's function. Five numerical examples are given in part I including three elbow designs with the same prescribed velocity as a function of arc length along the channel walls but with incompressible, linearized compressible, and compressible flow. One numerical example is presented in part II for an accelerating elbow with linearized compressible flow, and the time required for the solution by a Green's function in part II was considerably less than the time required for the same solution by relaxation methods in part I.

  7. Phase unwinding for dictionary compression with multiple channel transmission in magnetic resonance fingerprinting.

    PubMed

    Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A

    2018-06-01

    Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels if the B1+ phases are included in the dictionary. We describe a general method that allows the transmit phases to be omitted, and show that this enables straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images, and used them to unwind the phase in each time frame. All phase-unwound time frames were combined into a single set before performing SVD-based compression. We conducted synthetic, phantom, and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in vivo with highly undersampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 min. Our method reduces the dimensions of the dictionary atoms and enables any fingerprint compression strategy to be implemented in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc. All rights reserved.
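
    The SVD-based dictionary compression step can be sketched in a few lines of numpy: project every fingerprint time course onto the leading right-singular vectors of the dictionary, so each atom is represented by a handful of "compressed time frames". The sizes and the synthetic low-rank dictionary below are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms, n_frames, rank = 200, 50, 6    # illustrative sizes, not from the paper

# Synthetic low-rank "dictionary": each fingerprint is a mixture of a few
# temporal modes, mimicking the redundancy that SVD compression exploits.
modes = rng.standard_normal((rank, n_frames))
weights = rng.standard_normal((n_atoms, rank))
dictionary = weights @ modes             # n_atoms x n_frames

# Compression: keep only the leading right-singular vectors as a basis.
U, s, Vt = np.linalg.svd(dictionary, full_matrices=False)
basis = Vt[:rank]                        # rank x n_frames compression basis
compressed = dictionary @ basis.T        # n_atoms x rank "compressed time frames"

# For exactly low-rank data the round trip is lossless to machine precision.
reconstructed = compressed @ basis
err = np.linalg.norm(dictionary - reconstructed) / np.linalg.norm(dictionary)
```

    Real dictionaries are only approximately low rank, so the retained rank trades reconstruction error against speed, which is the 6-vs-16 frame trade-off reported above.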

  8. Elastic MCF Rubber with Photovoltaics and Sensing on Hybrid Skin (H-Skin) for Artificial Skin by Utilizing Natural Rubber: 2nd Report on the Effect of Tension and Compression on the Hybrid Photo- and Piezo-Electricity Properties in Wet-Type Solar Cell Rubber.

    PubMed

    Shimada, Kunio

    2018-06-06

    In contrast to ordinary solid-state solar cells, a flexible, elastic, extensible, and lightweight solar cell has the potential to be extremely useful in many new engineering applications, such as the field of robotics. Therefore, we propose a new type of artificial skin for humanoid robots with hybrid functions, which we have termed hybrid skin (H-Skin). To realize the fabrication of such a solar cell, we continue to utilize the principles of ordinary wet-type (dye-sensitized) solar cells in this follow-up to our first report. In the first report, we dealt with both photovoltaic and piezoelectric effects in dry-type magnetic compound fluid (MCF) rubber solar cells, which arise because the polyisoprene, the oleic acid of the magnetic fluid (MF), and water serve as p- and n-type semiconductors. In the present report, we deal with wet-type MCF rubber solar cells using sensitized dyes and electrolytes. Photoreactions generated through the synthesis of these components were investigated in an experiment using irradiation with visible and ultraviolet light. In addition, magnetic clusters were formed by the aggregation of Fe₃O₄ in the MF, and the metal particles created the hetero-junction structure of the semiconductors. Both the photo- and piezo-electricity generated in the MCF rubber solar cell were described with a physical model, and the effects of tension and compression on the electrical properties were evaluated. Finally, we experimentally demonstrated the effect of the distance between the electrodes of the solar cell on photoelectricity and built-in electricity.

  9. Percutaneous osteoplasty with a bone marrow nail for fractures of long bones: experimental study.

    PubMed

    Nakata, Kouhei; Kawai, Nobuyuki; Sato, Morio; Cao, Guang; Sahara, Shinya; Tanihata, Hirohiko; Takasaka, Isao; Minamiguchi, Hiroyuki; Nakai, Tomoki

    2010-09-01

    To develop percutaneous osteoplasty with the use of a bone marrow nail for fixation of long-bone fractures, and to evaluate its feasibility and safety in vivo and in vitro. Six long bones in three healthy swine were used in the in vivo study. Acrylic cement was injected through an 11-gauge bone biopsy needle and a catheter into a covered metallic stent placed within the long bone, creating a bone marrow nail. In the in vitro study, we determined the bending, tug, and compression strengths of acrylic cement nails 9 cm long and 8 mm in diameter (N = 10). The bending strength of artificially fractured bones (N = 6) restored with the bone marrow nail and cement augmentation was then compared with that of normal long bones (N = 6). Percutaneous osteoplasty with a bone marrow nail was successfully achieved within 1 hour in all swine. After osteoplasty, all swine regained the ability to run until they were euthanized. Blood tests and pathologic findings showed no adverse effects. The mean bending, tug, and compression strengths of the nail were 91.4 N/mm² (range, 75.0-114.1 N/mm²), 20.9 N/mm² (range, 6.6-30.4 N/mm²), and 103.0 N/mm² (range, 96.3-110.0 N/mm²), respectively. The ratio of the bending strength of artificially fractured bones restored with the bone marrow nail and cement augmentation to that of normal long bone was 0.32. Percutaneous osteoplasty with a bone marrow nail and cement augmentation appears to have potential in treating fractures of non-weight-bearing long bones. Copyright 2010 SIR. Published by Elsevier Inc. All rights reserved.

  10. Stability of compressible Taylor-Couette flow

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Chow, Chuen-Yen

    1991-01-01

    Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between the result obtained by assuming small Mach number with a uniform temperature distribution and that based on fully incompressible analysis.

  11. Multiple-image encryption via lifting wavelet transform and XOR operation based on compressive ghost imaging scheme

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Yang, Xiulun; Wang, Yurong; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2018-03-01

    A multiple-image encryption method via lifting wavelet transform (LWT) and XOR operation is proposed, based on a row-scanning compressive ghost imaging scheme. In the encryption process, a scrambling operation is applied to the sparse images transformed by LWT, the XOR operation is then performed on the scrambled images, and the resulting XOR images are compressed by row-scanning compressive ghost imaging, through which the ciphertext images are detected by bucket detector arrays. During decryption, a participant who possesses the correct key group can successfully reconstruct the corresponding plaintext image by measurement-key regeneration, compression-algorithm reconstruction, XOR operation, sparse image recovery, and inverse LWT (iLWT). Theoretical analysis and numerical simulations validate the feasibility of the proposed method.
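
    Two ingredients named in this record, a lifting wavelet step and keyed XOR, can be sketched with an integer Haar lift (a minimal stand-in for the LWT sparsifying transform) whose detail coefficients are XOR-scrambled. The ghost-imaging measurement stage is omitted, and the key handling is invented for illustration.

```python
import numpy as np

def haar_lift(x):
    """One integer Haar lifting step: exactly invertible on integer data."""
    even, odd = x[0::2].astype(np.int32), x[1::2].astype(np.int32)
    d = odd - even                  # predict step: detail coefficients
    s = even + (d >> 1)             # update step: approximation coefficients
    return s, d

def haar_unlift(s, d):
    """Invert haar_lift and re-interleave the samples."""
    even = s - (d >> 1)
    odd = d + even
    out = np.empty(2 * s.size, dtype=np.int32)
    out[0::2], out[1::2] = even, odd
    return out

rng = np.random.default_rng(5)
img = rng.integers(0, 256, 64, dtype=np.int32)   # toy 1D "image" row

s, d = haar_lift(img)
key = rng.integers(0, 256, d.size)               # hypothetical shared key
cipher = d ^ key                                 # XOR scrambling of details

# A holder of the key undoes the XOR and inverts the lift losslessly.
assert np.array_equal(haar_unlift(s, cipher ^ key), img)
```

    The lifting form is used here because its integer predict/update steps invert exactly, so the XOR layer can be peeled off without rounding loss.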

  12. Technical Note: Validation of two methods to determine contact area between breast and compression paddle in mammography.

    PubMed

    Branderhorst, Woutjan; de Groot, Jerry E; van Lier, Monique G J T B; Highnam, Ralph P; den Heeten, Gerard J; Grimbergen, Cornelis A

    2017-08-01

    To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method of determining the contact area is essential for accurately calculating the average compression pressure applied by the paddle. For a set of 300 breast compressions, we measured the contact area between breast and paddle both capacitively, using a transparent foil with indium-tin-oxide (ITO) coating attached to the paddle, and retrospectively from the obtained mammograms using image processing software (Volpara Enterprise, algorithm version 1.5.2). A gold standard was obtained from video images of the compressed breast. During each compression, the breast was illuminated from the sides to create a dark shadow on the video image where the breast was in contact with the compression paddle. We manually segmented the shadows captured at the time of x-ray exposure and measured their areas. We found a strong correlation between the manual segmentations and the capacitive measurements [r = 0.989, 95% CI (0.987, 0.992)] and between the manual segmentations and the image processing software [r = 0.978, 95% CI (0.972, 0.982)]. Bland-Altman analysis showed a bias of -0.0038 dm² for the capacitive measurement [SD 0.0658, 95% limits of agreement (-0.1329, 0.1252)] and -0.0035 dm² for the image processing software [SD 0.0962, 95% limits of agreement (-0.1921, 0.1850)]. The size of the contact area between the paddle and the breast can be determined accurately and precisely, both in real time using the capacitive method and retrospectively using image processing software. This result is beneficial for scientific research, data analysis, and quality control systems that depend on either of these two methods for determining the average pressure on the breast during mammographic compression. © 2017 Sigmascreening B.V. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
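
    The two statistics reported here, the Pearson correlation and the Bland-Altman bias with 95% limits of agreement, are straightforward to compute; the sketch below does so on synthetic paired measurements (the data and noise level are invented, not the study's).

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical paired contact-area measurements in dm^2:
# a gold standard and a method that adds small measurement noise.
gold = rng.uniform(0.5, 2.0, 300)
method = gold + rng.normal(0.0, 0.05, 300)

r = np.corrcoef(gold, method)[0, 1]          # Pearson correlation

diff = method - gold
bias = diff.mean()                           # Bland-Altman bias
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
```

    On real data one would also plot `diff` against the pairwise means to check that the disagreement does not depend on the size of the contact area.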

  13. Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals

    DOE PAGES

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.

    2018-03-20

    A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. The method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Second, we implement a rank reduction strategy to compress this tensor into a suitable low-rank tensor format using standard tensor compression tools. This allows a high dimensional integrand function to be represented as a small sum of products of low dimensional functions. Finally, a low dimensional Gauss–Hermite quadrature rule is used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.
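
    The payoff of the low-rank form can be sketched directly: once the integrand is a product of 1D factors, a d-dimensional Gauss–Hermite integral collapses into a product of cheap 1D quadratures. The dimension, rule size, and test function below are illustrative choices, not the paper's.

```python
import numpy as np

d = 10                                               # dimension (illustrative)
nodes, wts = np.polynomial.hermite.hermgauss(5)      # 5-point 1D Gauss-Hermite rule

def integrate_separated(factors_1d):
    """Integrate prod_i f_i(x_i) * exp(-|x|^2) over R^d as a product of 1D sums."""
    result = 1.0
    for f in factors_1d:
        result *= np.sum(wts * f(nodes))             # one cheap 1D quadrature per factor
    return result

# Example: f_i(x) = x^2 in every dimension.
# The exact 1D value of integral x^2 exp(-x^2) dx is sqrt(pi)/2.
approx = integrate_separated([lambda x: x**2] * d)
exact = (np.sqrt(np.pi) / 2.0) ** d
```

    A sum of R such rank-1 products costs R*d one-dimensional quadratures instead of 5^d tensor-grid evaluations, which is the curse-of-dimensionality relief the abstract refers to.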

  14. Point-Cloud Compression for Vehicle-Based Mobile Mapping Systems Using Portable Network Graphics

    NASA Astrophysics Data System (ADS)

    Kohira, K.; Masuda, H.

    2017-09-01

    A mobile mapping system is effective for capturing dense point-clouds of roads and roadside objects. Point-clouds of urban areas, residential areas, and arterial roads are useful for maintenance of infrastructure, map creation, and automatic driving. However, the data size of point-clouds measured over large areas is enormous: a large storage capacity is required to store such point-clouds, and heavy network loads result if point-clouds are transferred through the network. It is therefore desirable to reduce the data size of point-clouds without deterioration of quality. In this research, we propose a novel point-cloud compression method for vehicle-based mobile mapping systems. In our compression method, point-clouds are mapped onto 2D pixels using GPS time and the parameters of the laser scanner. The images are then encoded in the Portable Network Graphics (PNG) format and compressed using the PNG algorithm. In our experiments, our method efficiently compressed point-clouds without deteriorating the quality.
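
    A minimal stand-in for this pipeline: scan-ordered range values are arranged on a 2D grid (rows as scan lines, columns as firing angle), a PNG-style vertical delta filter is applied, and the result is stored losslessly. Here zlib replaces the actual PNG encoder, and the grid shape and synthetic scene are invented for illustration.

```python
import zlib
import numpy as np

rows, cols = 64, 256
t = np.linspace(0, 1, rows * cols)
ranges = (2000 + 500 * np.sin(8 * np.pi * t)).astype(np.uint16)  # smooth scene (mm)
grid = ranges.reshape(rows, cols)        # rows ~ scan lines, cols ~ firing angle

# PNG-like "up" filter: neighboring scan lines are similar, so row
# differences (modulo 2^16) are small and highly compressible.
filtered = np.diff(grid, axis=0, prepend=np.zeros((1, cols), dtype=np.uint16))
raw = grid.tobytes()
packed = zlib.compress(filtered.tobytes(), level=9)
ratio = len(raw) / len(packed)

def unpack(blob):
    """Invert the delta filter; uint16 wraparound makes the round trip exact."""
    f = np.frombuffer(zlib.decompress(blob), dtype=np.uint16).reshape(rows, cols)
    return np.cumsum(f, axis=0, dtype=np.uint16)

assert np.array_equal(unpack(packed), grid)   # lossless round trip
```

    The filtering step is what makes the image-based mapping pay off: raw ranges compress poorly, while inter-row differences of a physically smooth scene compress well.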

  15. Application of the SeDeM Diagram and a new mathematical equation in the design of direct compression tablet formulation.

    PubMed

    Suñé-Negre, Josep M; Pérez-Lozano, Pilar; Miñarro, Montserrat; Roig, Manel; Fuster, Roser; Hernández, Carmen; Ruhí, Ramon; García-Montoya, Encarna; Ticó, Josep R

    2008-08-01

    Application of the new SeDeM Method is proposed for studying the galenic properties of excipients in terms of their applicability to direct-compression technology. Through experimental determination of the parameters of the SeDeM Method and their subsequent mathematical treatment and graphical expression (the SeDeM Diagram), six different diluents were analysed to determine whether they were suitable for direct compression (DC). Based on the properties of these diluents, a mathematical equation was established to identify the best DC diluent and the optimum amount to be used when defining a suitable formula for direct compression, depending on the SeDeM properties of the active pharmaceutical ingredient (API) to be used. The results obtained confirm that the SeDeM Method is an appropriate and effective tool for determining a viable formulation for tablets prepared by direct compression, and can thus be used as the basis for the relevant pharmaceutical development.

  16. Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.

    A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. The method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Second, we implement a rank reduction strategy to compress this tensor into a suitable low-rank tensor format using standard tensor compression tools. This allows a high dimensional integrand function to be represented as a small sum of products of low dimensional functions. Finally, a low dimensional Gauss–Hermite quadrature rule is used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method using very few function evaluations as compared to other integration strategies.

  17. Analysis of axial compressive loaded beam under random support excitations

    NASA Astrophysics Data System (ADS)

    Xiao, Wensheng; Wang, Fengde; Liu, Jian

    2017-12-01

    An analytical procedure for investigating the response spectrum of a uniform Bernoulli-Euler beam under axial compressive load subjected to random support excitations is implemented, based on the Mindlin-Goodman method and the mode superposition method in the frequency domain. The random response spectrum of a simply supported beam subjected to white-noise excitation and to Pierson-Moskowitz spectrum excitation is investigated, and the characteristics of the response spectrum are explored. Moreover, the effect of the axial compressive load is studied, and a method to determine the axial load is proposed. The results show that the response spectrum consists mainly of the beam's additional displacement response spectrum when the excitation is white noise, whereas the quasi-static displacement response spectrum is the main component when the excitation is the Pierson-Moskowitz spectrum. Under white-noise excitation, the amplitude of the power spectral density function decreased as the axial compressive load increased, while the frequency band of the vibration response spectrum widened with increasing axial compressive load.

  18. Lossless compression of AVIRIS data: Comparison of methods and instrument constraints

    NASA Technical Reports Server (NTRS)

    Roger, R. E.; Arnold, J. F.; Cavenor, M. C.; Richards, J. A.

    1992-01-01

    A family of lossless compression methods, allowing exact image reconstruction, is evaluated for compressing Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image data. The methods are based on Differential Pulse Code Modulation (DPCM). The compressed data have an entropy of order 6 bits/pixel. A theoretical model indicates that significantly better lossless compression is unlikely to be achieved because of limits imposed by the noise in the AVIRIS channels. AVIRIS data differ from data produced by other visible/near-infrared sensors, such as LANDSAT-TM or SPOT, in several ways. First, the data are recorded at a greater resolution (12 bits, though packed into 16-bit words). Second, the spectral channels are relatively narrow and provide continuous coverage of the spectrum, so the data in adjacent channels are generally highly correlated. Third, the noise characteristics of AVIRIS are defined by the channels' Noise Equivalent Radiances (NERs), which show that, at some wavelengths, the least significant 5 or 6 bits of data are essentially noise.
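
    The DPCM idea behind these methods can be illustrated in a few lines: predict each sample from its neighbor, and compare the entropy of the residuals with that of the raw values, since the residual entropy is what bounds lossless coding. The data below are a synthetic correlated signal, not AVIRIS samples.

```python
import numpy as np

rng = np.random.default_rng(2)
# Random walk: strongly correlated neighboring samples, like adjacent pixels.
signal = np.cumsum(rng.integers(-3, 4, 10000)) + 512

# DPCM stage: previous-sample predictor; residuals are the prediction errors.
residuals = np.diff(signal, prepend=signal[0])

def entropy_bits(x):
    """Empirical zeroth-order entropy of a symbol stream, in bits/symbol."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

h_raw = entropy_bits(signal)         # many distinct values -> high entropy
h_dpcm = entropy_bits(residuals)     # few small residual values -> low entropy
```

    For correlated data the residual entropy is far below the raw entropy, which is why DPCM is the natural first stage; sensor noise then sets the floor the abstract's theoretical model describes.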

  19. Evaluation of shear-compressive strength properties for laminated GFRP composites in electromagnet system

    NASA Astrophysics Data System (ADS)

    Song, Jun Hee; Kim, Hak Kun; Kim, Sam Yeon

    2014-07-01

    Laminated fiber-reinforced composites can be applied to an insulating structure of a nuclear fusion device. It is necessary to investigate the interlaminar fracture characteristics of the laminated composites for the assurance of design and structural integrity. The three methods used to prepare the glass fiber reinforced plastic composites tested in this study were vacuum pressure impregnation, high pressure laminate (HPL), and prepreg laminate. We discuss the design criteria for safe application of composites and the shear-compressive test methods for evaluating mechanical properties of the material. Shear-compressive tests could be performed successfully using series-type test jigs that were inclined 0°, 30°, 45°, 60°, and 75° to the normal axis. Shear strength depends strongly on the applied compressive stress. The design range of allowable shear stress was extended by use of the appropriate composite fabrication method. HPL had the largest design range, and the allowable interlaminar shear stress was 0.254 times the compressive stress.

  20. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are normalized in energy and sorted in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of an image effectively and provides a sound basis for selecting the number of compressive observations. The results also show that, since the selection of the number of observations is based on the sparsity estimated with the given energy threshold, the proposed method can ensure the quality of image reconstruction.
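
    The sparsity estimate described here can be sketched directly: take an orthonormal 2D DCT, sort the normalized coefficient energies, and count how many dominant coefficients reach a preset energy threshold (assumed 99% below). The test image and threshold are illustrative, not the paper's.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows = frequencies, columns = samples)."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mat = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    mat[0] /= np.sqrt(2.0)
    return mat

def estimate_sparsity(img, energy_threshold=0.99):
    """Fraction of DCT coefficients needed to capture the energy threshold."""
    C = dct_matrix(img.shape[0])
    coeffs = C @ img @ C.T                       # separable 2D DCT
    energy = np.sort((coeffs ** 2).ravel())[::-1]
    energy /= energy.sum()                       # normalized, descending
    k = np.searchsorted(np.cumsum(energy), energy_threshold) + 1
    return k / img.size

# A smooth synthetic "image" should need only a small fraction of coefficients.
x = np.linspace(0, 1, 64)
img = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))
ratio = estimate_sparsity(img)
```

    The returned fraction is the quantity the abstract uses to pick the number of compressive measurements adaptively instead of fixing it empirically.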

  1. Adaptive compressed sensing of multi-view videos based on the sparsity estimation

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-11-01

    Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of video reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are normalized in energy and sorted in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of video frames effectively and provides a sound basis for selecting the number of compressive observations. The results also show that, since the selection of the number of observations is based on the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.

  2. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
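
    The dithering step credited above with preserving photometric precision can be illustrated with subtractive dithering: add seed-reproducible uniform noise before rounding and subtract the same noise after decoding, which removes the systematic bias plain quantization introduces. The quantization step and constant test signal are assumptions for illustration, not the fpack parameterization.

```python
import numpy as np

rng = np.random.default_rng(3)
pixels = np.full(100000, 1001.3)               # worst case: constant signal
q = 4.0                                        # assumed coarse quantization step

# Subtractive dithering: the same uniform noise is added before rounding
# and subtracted after decoding (reproducible from a stored seed).
dither = rng.uniform(0.0, 1.0, pixels.size)
quantized = np.round(pixels / q - 0.5 + dither)   # integers sent to the coder
restored = (quantized + 0.5 - dither) * q

plain = np.round(pixels / q) * q               # plain quantization, no dither

bias_dither = np.mean(restored - pixels)       # ~0: mean photometry preserved
bias_plain = np.mean(plain - pixels)           # systematic offset, up to q/2
```

    With dithering the per-pixel error stays bounded by q/2 but averages to zero over many pixels, which is why aperture photometry on the quantized image keeps its precision.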

  3. Method and apparatus for extracting water from air

    DOEpatents

    Spletzer, Barry L.; Callow, Diane Schafer; Marron, Lisa C.; Salton, Jonathan R.

    2002-01-01

    The present invention provides a method and apparatus for extracting liquid water from moist air using minimal energy input. The method comprises compressing moist air under conditions that foster the condensation of liquid water. The air can be decompressed under conditions that do not foster the vaporization of the condensate. The decompressed, dried air can be exchanged for a fresh charge of moist air and the process repeated. The liquid condensate can be removed for use. The apparatus can comprise a compression chamber having a variable internal volume. An intake port allows moist air into the compression chamber. An exhaust port allows dried air out of the compression chamber. A condensation device fosters condensation at the desired conditions. A condensate removal port allows liquid water to be removed.
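
    Why compressing moist air condenses water can be shown with a back-of-envelope calculation: isothermal compression multiplies the vapor partial pressure, while the saturation pressure depends only on temperature. The Magnus/Tetens saturation formula and the 50% humidity, 3:1 compression figures below are assumed for illustration, not taken from the patent.

```python
import math

def p_sat(temp_c):
    """Saturation vapor pressure in Pa (Magnus/Tetens approximation)."""
    return 610.78 * math.exp(17.27 * temp_c / (temp_c + 237.3))

temp_c = 25.0
rh = 0.5                                  # 50% relative humidity
p_vapor = rh * p_sat(temp_c)              # vapor partial pressure before compression

compression_ratio = 3.0                   # isothermal 3:1 volume reduction
p_compressed = p_vapor * compression_ratio

condenses = p_compressed > p_sat(temp_c)  # above saturation: excess must condense
excess_fraction = 1.0 - p_sat(temp_c) / p_compressed   # fraction of vapor shed
```

    At these assumed conditions the compressed vapor pressure is 1.5 times saturation, so about a third of the vapor condenses; decompressing the now-drier air without cooling keeps the condensate liquid, as the patent describes.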

  4. Video compression of coronary angiograms based on discrete wavelet transform with block classification.

    PubMed

    Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P

    1996-01-01

    A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Motion Picture Experts Group's (MPEG's) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve compression efficiency in certain cases. The method can be generalized for any dynamic image sequence application sensitive to block artifacts.

  5. Construction of Optimal-Path Maps for Homogeneous-Cost-Region Path-Planning Problems

    DTIC Science & Technology

    1989-09-01

    of Artificial Intelligence ... 24. Kirkpatrick, S., Gelatt Jr., C. D., and Vecchi, M. P., "Optimization by Simulated Annealing", Science, Vol...studied in depth by researchers in such fields as artificial intelligence, robotics, and computational geometry. Most methods require homogeneous...the results of the research. II. RELEVANT RESEARCH A. APPLICABLE CONCEPTS FROM ARTIFICIAL INTELLIGENCE 1. Search Methods One of the central

  6. Artificial fingerprint recognition by using optical coherence tomography with autocorrelation analysis.

    PubMed

    Cheng, Yezeng; Larin, Kirill V

    2006-12-20

    Fingerprint recognition is one of the most widely used methods of biometrics. The method relies on the surface topography of a finger and is thus potentially vulnerable to spoofing by artificial dummies with embedded fingerprints. In this study, we applied the optical coherence tomography (OCT) technique to distinguish artificial materials commonly used for spoofing fingerprint scanning systems from real skin. Several artificial fingerprint dummies made from household cement and liquid silicone rubber were prepared and tested using a commercial fingerprint reader and an OCT system. While the artificial fingerprints easily spoofed the commercial fingerprint reader, OCT images revealed their presence at all times. We also demonstrated that an autocorrelation analysis of the OCT images could potentially be used in automatic recognition systems.
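
    One plausible form of the autocorrelation analysis mentioned here: the normalized autocorrelation of a depth profile keeps periodic peaks for a layered, skin-like structure but decays quickly for an unstructured material. The signals below are synthetic illustrations with an assumed layer period, not OCT data.

```python
import numpy as np

rng = np.random.default_rng(4)
z = np.arange(512)                                   # depth samples
layered = np.sin(2 * np.pi * z / 64) + 0.2 * rng.standard_normal(512)  # skin-like
homogeneous = rng.standard_normal(512)               # dummy-like, no structure

def autocorr(x):
    """Normalized autocorrelation at non-negative lags."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    return ac / ac[0]

peak_layered = autocorr(layered)[64]     # one assumed layer period later
peak_homog = autocorr(homogeneous)[64]   # same lag, unstructured signal
```

    A simple automatic classifier could threshold the autocorrelation at the expected layer spacing, accepting only profiles that show the periodic structure of real skin.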

  7. Artificial fingerprint recognition by using optical coherence tomography with autocorrelation analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Yezeng; Larin, Kirill V.

    2006-12-01

    Fingerprint recognition is one of the most widely used methods of biometrics. The method relies on the surface topography of a finger and is thus potentially vulnerable to spoofing by artificial dummies with embedded fingerprints. In this study, we applied the optical coherence tomography (OCT) technique to distinguish artificial materials commonly used for spoofing fingerprint scanning systems from real skin. Several artificial fingerprint dummies made from household cement and liquid silicone rubber were prepared and tested using a commercial fingerprint reader and an OCT system. While the artificial fingerprints easily spoofed the commercial fingerprint reader, OCT images revealed their presence at all times. We also demonstrated that an autocorrelation analysis of the OCT images could potentially be used in automatic recognition systems.

  8. A new method for identification of natural, artificial and in vitro cultured Calculus bovis using high-performance liquid chromatography-mass spectrometry

    PubMed Central

    Liu, Yonggang; Tan, Peng; Liu, Shanshan; Shi, Hang; Feng, Xin; Ma, Qun

    2015-01-01

    Objective: Calculus bovis has been widely used in Chinese herbology for the treatment of hyperpyrexia, convulsions, and epilepsy. Because of its limited supply and high market price, its substitutes, artificial and in vitro cultured Calculus bovis, are increasingly used, and adulteration is a serious problem. It is therefore crucial to establish a fast and simple method for discriminating natural, artificial and in vitro cultured Calculus bovis. Bile acids, among the main active constituents, are an important indicator for evaluating the quality of Calculus bovis and its substitutes. Several techniques have been developed to analyze bile acids in Calculus bovis; however, because bile acids have poor ultraviolet absorbance and high structural similarity, an effective technology for identification and quality control is still lacking. Methods: In this study, high-performance liquid chromatography (HPLC) coupled with tandem mass spectrometry (LC/MS/MS) was applied to the analysis of bile acids, which effectively identified natural, artificial and in vitro cultured Calculus bovis and provides a new method for their quality control. Results: Natural, artificial and in vitro cultured Calculus bovis were differentiated by bile acid analysis. A new compound with a protonated molecule at m/z 405 was found, which we named 3α, 12α-dihydroxy-7-oxo-5α-cholanic acid. This compound was discovered in in vitro cultured Calculus bovis but was almost undetectable in natural and artificial Calculus bovis. A total of 13 constituents were identified. Among them, three bio-markers, glycocholic acid, glycodeoxycholic acid and taurocholic acid (TCA), were detected in both natural and artificial Calculus bovis, but the concentration of TCA differed between the two kinds of Calculus bovis. In addition, the characteristics of the bile acids were illustrated.
Conclusions: The HPLC coupled with tandem MS (LC/MS/MS) method was feasible, easy, rapid and accurate in identifying natural, artificial and in vitro cultured Calculus bovis. PMID:25829769

  9. Development of a Magnetic Attachment Method for Bionic Eye Applications.

    PubMed

    Fox, Kate; Meffin, Hamish; Burns, Owen; Abbott, Carla J; Allen, Penelope J; Opie, Nicholas L; McGowan, Ceara; Yeoh, Jonathan; Ahnood, Arman; Luu, Chi D; Cicione, Rosemary; Saunders, Alexia L; McPhedran, Michelle; Cardamone, Lisa; Villalobos, Joel; Garrett, David J; Nayagam, David A X; Apollo, Nicholas V; Ganesan, Kumaravelu; Shivdasani, Mohit N; Stacey, Alastair; Escudie, Mathilde; Lichter, Samantha; Shepherd, Robert K; Prawer, Steven

    2016-03-01

    Successful visual prostheses require stable, long-term attachment. Epiretinal prostheses, in particular, require attachment methods that fix the prosthesis onto the retina. The most common method is fixation with a retinal tack; however, tacks cause retinal trauma, and surgical proficiency is important to ensure optimal placement of the prosthesis near the macula. Accordingly, alternative attachment methods are required. In this study, we detail a novel method of magnetic attachment for an epiretinal prosthesis using two prosthesis components positioned on opposing sides of the retina. The magnetic attachment technique was piloted in a feline animal model (chronic, nonrecovery implantation). We also detail a new method of reliably controlling the magnet coupling force using heat. We found that the force exerted upon the tissue separating the two components could be minimized, as the measured force is proportionately smaller at the working distance. We thus detail, for the first time, a surgical method using customized magnets to position and affix an epiretinal prosthesis on the retina. The position of the epiretinal prosthesis is reliable, and its location on the retina is accurately controlled by the placement of a secondary magnet in the suprachoroidal location. The electrode sits less than 50 microns above the retina at the center of the device, although there were pressure points at the two edges due to curvature misalignment. The degree of retinal compression found in this study was unacceptably high; nevertheless, the normal structure of the retina remained intact under the electrodes. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  10. Application of artificial neural network for heat transfer in porous cone

    NASA Astrophysics Data System (ADS)

    Athani, Abdulgaphur; Ahamad, N. Ameer; Badruddin, Irfan Anjum

    2018-05-01

    Heat transfer in porous media is a classical research area that has been active for many decades. It is generally studied using numerical methods such as the finite element method and the finite difference method, which solve the coupled partial differential equations by converting them into simpler algebraic forms. The current work utilizes an alternative approach, an artificial neural network, which mimics the learning characteristics of biological neurons. The heat transfer in a porous medium fixed in a cone is predicted using a backpropagation neural network, which reproduces this behavior quite accurately.
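    The backpropagation approach can be sketched as follows. This is a minimal illustration, not the authors' network: a one-hidden-layer MLP is trained by hand-coded backpropagation to fit a hypothetical heat-transfer trend, here an assumed average Nusselt number as a smooth function of a normalized Rayleigh number.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 64).reshape(-1, 1)   # normalized Rayleigh number (assumed input)
y = 1.0 + 2.0 * X**0.25                    # assumed Nu(Ra) trend (illustrative)

# One hidden layer of 16 tanh units, linear output.
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)               # forward pass
    pred = h @ W2 + b2
    err = pred - y
    # backpropagation of the mean-squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(float(np.mean((pred - y) ** 2)))     # final mean-squared training error
```

    Once trained, the network acts as a cheap surrogate: predictions at unseen parameter values cost a single forward pass instead of a full finite element solve.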

  11. JPRS Report. Environmental Issues.

    DTIC Science & Technology

    1991-06-11

    the aspects of management of raising, artificial reproduction, artificial insemination, raising of the young, male reproduction capability...method of fresh sperm artificial insemination to impregnate pandas. In 1980 Mei Mei, a panda in the Chengdu Zoo, successfully gave birth to a... insemination for panda reproduction. Beginning in 1980, in 10 years of artificially raising giant pandas, the Chengdu Zoo has had 15 pregnancies

  12. Parametric study on single shot peening by dimensional analysis method incorporated with finite element method

    NASA Astrophysics Data System (ADS)

    Wu, Xian-Qian; Wang, Xi; Wei, Yan-Peng; Song, Hong-Wei; Huang, Chen-Guang

    2012-06-01

    Shot peening is a widely used surface treatment that generates compressive residual stress near the surface of metallic materials to increase fatigue life and resistance to corrosion fatigue, cracking, etc. Compressive residual stress and the dent profile are important factors in evaluating the effectiveness of the shot peening process. In this paper, the influence of dimensionless parameters on the maximum compressive residual stress and the maximum dent depth was investigated. First, dimensionless relations among the processing parameters affecting the maximum compressive residual stress and the maximum dent depth were deduced by dimensional analysis. Second, the influence of each dimensionless parameter on the dimensionless variables was investigated by the finite element method, and empirical formulas were fitted to the simulation results for each dimensionless parameter. Finally, the simulation results and the empirical formulas were compared and found to be in good agreement, providing a useful approach for analyzing the influence of each individual parameter.
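    The dimensional-analysis step can be illustrated with a small numerical sketch. The variables and values below are illustrative, not the paper's exact groups: typical shot-peening inputs are collapsed into a dimensionless loading intensity, a dynamic-pressure-to-strength ratio, so that a dimensionless dent depth can be expressed as a function of it.

```python
# Illustrative shot-peening parameters (assumed values, SI units).
rho = 7800.0       # shot density, kg/m^3 (steel shot)
v = 50.0           # impact velocity, m/s
sigma_y = 600e6    # target yield stress, Pa
D = 0.6e-3         # shot diameter, m

# Dimensionless loading intensity: dynamic pressure over yield strength.
pi_1 = rho * v**2 / sigma_y
print(pi_1)        # dimensionless, so any consistent unit system gives the same value

# The dent depth delta would then be reported as the dimensionless ratio
# delta / D = f(pi_1, ...), fitted from finite element simulations.
```

    Because `pi_1` is dimensionless, one simulation campaign over `pi_1` covers every combination of density, velocity, and strength that produces the same ratio, which is exactly why the paper's empirical formulas are written in dimensionless form.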

  13. A method of vehicle license plate recognition based on PCANet and compressive sensing

    NASA Astrophysics Data System (ADS)

    Ye, Xianyi; Min, Feng

    2018-03-01

    The manual feature extraction of traditional vehicle license plate methods is not robust to diverse imaging conditions, while the high-dimensional features extracted with a Principal Component Analysis Network (PCANet) lead to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from character images. Then, a sparse measurement matrix, a very sparse matrix satisfying the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is trained to recognize the dimension-reduced features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition rate and time. Compared with omitting the compressive sensing step, the proposed method works with a lower feature dimension and is correspondingly more efficient.
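    The dimension-reduction step can be sketched as follows. The matrix construction and sizes are assumptions for illustration (an Achlioptas-style very sparse random projection), not necessarily the authors' exact measurement matrix: it maps a high-dimensional feature vector to a much lower dimension while approximately preserving norms and pairwise distances, the property that RIP-style arguments rely on.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_reduced = 4096, 256      # assumed PCANet feature size and target size
s = np.sqrt(n_features)                # sparsity parameter: ~1/s entries are nonzero

# Entries are +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1 - 1/s, 1/(2s).
probs = [1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]
Phi = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                 size=(n_reduced, n_features), p=probs)
Phi /= np.sqrt(n_reduced)              # scale so norms are preserved in expectation

x = rng.normal(size=n_features)        # stand-in for one extracted feature vector
y = Phi @ x                            # 4096-dim feature -> 256 measurements

# Norms are approximately preserved, so the SVM can be trained on y instead of x.
print(np.linalg.norm(y) / np.linalg.norm(x))
```

    Because most entries of `Phi` are zero, the projection is cheap to store and apply, which is where the claimed efficiency gain over classifying the raw PCANet features comes from.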

  14. A novel high-frequency encoding algorithm for image compression

    NASA Astrophysics Data System (ADS)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed, whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at the compression stage and a new concurrent binary search algorithm at the decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply the DCT to each block; (2) apply a high-frequency minimization method to the AC coefficients, reducing each block by two-thirds and resulting in a minimized array; (3) build a look-up table of probability data to enable recovery of the original high frequencies at the decompression stage; (4) apply a delta or differential operator to the list of DC components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At the decompression stage, the look-up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC coefficients, while the DC components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images, including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG, with quality equivalent to JPEG2000. Concerning 3D surface reconstruction from images, the proposed method is demonstrated to be superior to both JPEG and JPEG2000.
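    Steps (1) and (4) of the pipeline can be sketched directly; the high-frequency minimization, look-up table, and arithmetic coder of steps (2), (3), and (5) are omitted here. The block size and test image are assumptions: an 8x8 block DCT via an orthonormal DCT-II matrix, followed by a differential (delta) operator over the list of DC components.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    j = np.arange(n)
    C = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

C = dct_matrix(8)
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(float)  # stand-in image

# Step (1): divide into 8x8 blocks and apply the 2-D DCT to each.
blocks = [C @ img[i:i + 8, j:j + 8] @ C.T
          for i in range(0, 32, 8) for j in range(0, 32, 8)]

# Step (4): delta-code the DC components (top-left coefficient of each block).
dc = np.array([b[0, 0] for b in blocks])
delta = np.diff(dc, prepend=0.0)

# Delta coding is lossless: the cumulative sum recovers the DC list exactly.
print(np.allclose(np.cumsum(delta), dc))
```

    Neighboring blocks tend to have similar DC values, so the deltas cluster near zero and compress far better under the subsequent entropy coder than the raw DC values would.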

  15. Compressive Sensing of Roller Bearing Faults via Harmonic Detection from Under-Sampled Vibration Signals

    PubMed Central

    Tang, Gang; Hou, Wei; Wang, Huaqing; Luo, Ganggang; Ma, Jianwei

    2015-01-01

    The Shannon sampling principle requires substantial amounts of data to ensure the accuracy of on-line monitoring of roller bearing fault signals, which makes data monitoring cumbersome. A novel method based on compressed vibration signals for detecting roller bearing faults is therefore developed in this study. Since harmonics often represent the fault characteristic frequencies in vibration signals, a compressive sensing framework based on characteristic harmonics is proposed to detect bearing faults. A compressed vibration signal is first acquired through a sensing matrix, with information preserved by a well-designed sampling strategy. The under-sampled vibration signal can then be reconstructed, while the characteristic harmonics are detected from the sparse measurements through a compressive matching pursuit strategy. In the proposed method, bearing fault features depend on the existence of characteristic harmonics, which can typically be detected directly from the compressed data well before reconstruction is complete. Sampling and detection may therefore be performed simultaneously, without complete recovery of the under-sampled signals. The effectiveness of the proposed method is validated by simulations and experiments. PMID:26473858
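    The detection-without-recovery idea can be shown in a toy setting. All signal parameters, the Gaussian sensing matrix, and the harmonic dictionary below are assumptions for illustration, not the paper's setup: a fault signal containing harmonics of a characteristic frequency is compressed to one eighth of its length, and a single matching-pursuit correlation step identifies the dominant harmonic directly from the compressed measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, fs = 1024, 128, 1024.0                 # full length, compressed length, Hz
t = np.arange(n) / fs
f_fault = 60.0                               # characteristic fault frequency (assumed)
x = (np.sin(2 * np.pi * f_fault * t)
     + 0.5 * np.sin(2 * np.pi * 2 * f_fault * t)
     + 0.1 * rng.normal(size=n))             # fundamental + harmonic + noise

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = Phi @ x                                  # compressed measurements (8x fewer samples)

# Dictionary of candidate harmonic atoms, projected through the same Phi.
freqs = np.arange(10.0, 200.0, 2.0)
atoms = np.stack([np.sin(2 * np.pi * f * t) for f in freqs])
A = atoms @ Phi.T                            # each row is Phi @ atom

# Matching pursuit step: pick the atom most correlated with the measurements.
scores = np.abs(A @ y) / np.linalg.norm(A, axis=1)
print(freqs[np.argmax(scores)])
```

    Because random projections approximately preserve inner products, the correlation ranking survives compression, so the characteristic frequency is found without ever reconstructing the full-length signal.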

  16. Low Complexity Compression and Speed Enhancement for Optical Scanning Holography

    PubMed Central

    Tsang, P. W. M.; Poon, T.-C.; Liu, J.-P.; Kim, T.; Kim, Y. S.

    2016-01-01

    In this paper we report a low-complexity compression method that is suitable for compact optical scanning holography (OSH) systems with different optical settings. Our proposed method can be divided into two major parts. First, an automatic decision maker selects the rows of holographic pixels to be scanned. This process speeds up hologram acquisition and also lowers the data rate. Second, each row of down-sampled pixels is converted into a one-bit representation with delta modulation (DM). Existing DM-based hologram compression techniques suffer from the disadvantage that a core parameter, commonly known as the step size, has to be determined in advance. However, the correct step size for compressing each hologram row depends on the dynamic range of the pixels, which can vary significantly with the object scene as well as across OSH systems with different optical settings. We have overcome this problem by incorporating a dynamic step-size adjustment scheme. The proposed method is applied to the compression of holograms acquired with two different OSH systems, demonstrating a compression ratio of over two orders of magnitude while preserving favorable fidelity in the reconstructed images. PMID:27708410
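    One-bit delta modulation with a dynamic step size can be sketched as follows. The adaptation rule here (grow the step when consecutive bits agree, shrink it when they alternate) is a common textbook scheme assumed for illustration, not necessarily the authors' exact adjustment.

```python
import numpy as np

def delta_modulate(row, step0=0.1, grow=1.5, shrink=0.6):
    """One-bit delta modulation with adaptive step size.

    Returns the bit stream and the encoder's running estimate; a decoder
    can rebuild the estimate from the bits alone by repeating the same
    step-size adaptation.
    """
    bits, recon = [], []
    est, step, prev_bit = 0.0, step0, 1
    for sample in row:
        bit = 1 if sample >= est else -1
        step = step * (grow if bit == prev_bit else shrink)  # dynamic step size
        est += bit * step
        bits.append(bit)
        recon.append(est)
        prev_bit = bit
    return np.array(bits), np.array(recon)

t = np.linspace(0, 1, 400)
row = np.sin(2 * np.pi * 3 * t) * np.exp(-t)   # stand-in for one hologram row
bits, recon = delta_modulate(row)

print(bits.size, float(np.mean((recon - row) ** 2)))
```

    Each pixel costs exactly one bit, and because the step size adapts on the fly, the same encoder tracks rows with very different dynamic ranges without any pre-computed parameter.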

  17. Turbulent structure of stably stratified inhomogeneous flow

    NASA Astrophysics Data System (ADS)

    Iida, Oaki

    2018-04-01

    The effects of a buoyancy force that stabilizes disturbances are investigated in an inhomogeneous flow in which disturbances are dispersed from the turbulent to the non-turbulent field in the direction perpendicular to gravity. Attaching a fringe region, in which disturbances are excited by an artificial body force, a Fourier spectral method is used for the inhomogeneous flow stirred at one side of the cuboid computational box. As a result, it is found that the turbulent kinetic energy is dispersed as layered structures elongated in the streamwise direction through vibrating motion. A close look at the layered structures shows that they are flanked by colder fluids at the top and hotter fluids at the bottom; hence they are vertically compressed and horizontally expanded by the buoyancy associated with the countergradient heat flux. They are nevertheless punctuated by the vertical expansion of fluids at their forefront, which is related to the downgradient heat flux, indicating that the layered structures are gravity currents. However, the phase between temperature fluctuations and vertical velocity is shifted by π/2 rad, indicating that the temperature fluctuations are generated by the propagation of internal gravity waves.

  18. Analysis of spurious oscillation modes for the shallow water and Navier-Stokes equations

    USGS Publications Warehouse

    Walters, R.A.; Carey, G.F.

    1983-01-01

    The origin and nature of spurious oscillation modes that appear in mixed finite element methods are examined. In particular, the shallow water equations are considered and a modal analysis for the one-dimensional problem is developed. From the resulting dispersion relations we find that the spurious modes in elevation are associated with zero frequency and large wave number (wavelengths of the order of the nodal spacing) and consequently are zero-velocity modes. The spurious modal behavior is the result of the finite spatial discretization. By means of an artificial compressibility and limiting argument we are able to resolve the similar problem for the Navier-Stokes equations. The relationship of this simpler analysis to alternative consistency arguments is explained. This modal approach provides an explanation of the phenomenon in question and permits us to deduce the cause of the very complex behavior of spurious modes observed in numerical experiments with the shallow water equations and Navier-Stokes equations. Furthermore, this analysis is not limited to finite element formulations, but is also applicable to finite difference formulations. © 1983.
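    The artificial compressibility device invoked above can be demonstrated with a minimal 1-D periodic sketch (illustrative only, not the paper's modal analysis): the incompressibility constraint u_x = 0 is relaxed to a pseudo-time pressure equation p_tau + beta u_x = 0, and the coupled system is marched until the discrete divergence vanishes. Note that on this collocated central-difference grid a checkerboard pressure field has zero discrete gradient, exactly the kind of zero-frequency spurious mode the analysis above addresses.

```python
import numpy as np

n, beta, nu, dtau = 64, 5.0, 0.05, 0.002   # grid size, artificial compressibility,
dx = 1.0 / n                               # viscosity, pseudo-time step
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x)                  # initial velocity with nonzero divergence
p = np.zeros(n)

def ddx(f):
    """Periodic central difference."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

def lap(f):
    """Periodic second difference."""
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

for _ in range(20000):
    u = u + dtau * (-ddx(p) + nu * lap(u))   # pseudo-time momentum equation
    p = p - dtau * beta * ddx(u)             # artificial compressibility: relaxed u_x = 0

print(float(np.abs(ddx(u)).max()))           # discrete divergence driven toward zero
```

    The pseudo-time steps obey the usual stability limits (diffusion number nu*dtau/dx^2 about 0.41, acoustic CFL sqrt(beta)*dtau/dx about 0.29), and at convergence the artificial pressure waves have carried all the divergence out of the velocity field.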

  19. Hybrid position/force control of multi-arm cooperating robots

    NASA Technical Reports Server (NTRS)

    Hayati, Samad

    1986-01-01

    This paper extends the theory of hybrid position/force control to the case of multi-arm cooperating robots. Cooperation between n robot arms is achieved by controlling each arm such that the burden of actuation is shared between the arms in a nonconflicting way as they control the position of, and force on, a designated point on an object. The object, which may or may not be in contact with a rigid environment, is assumed to be held rigidly by n robot end-effectors. Natural and artificial position and force constraints are defined for a point on the object, and two selection matrices are obtained to control the arms. The position control loops are designed based on each manipulator's Cartesian-space dynamic equations. In the position control subspace, a feature is provided which allows the robot arms to exert additional forces/torques to achieve compression, tension, or torsion in the object without affecting the execution of the motion trajectories. In the force control subspace, a method is introduced to minimize the total squared force/torque magnitude while realizing the net desired force/torque on the environment.
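    The selection-matrix idea can be shown with a toy single-point example. The gains, frames, and proportional loops below are assumptions for illustration, not the paper's full Cartesian-space design: a diagonal selection matrix S routes each task axis to the position loop, and its complement I - S routes the remaining axes to the force loop, so the two subspaces never conflict.

```python
import numpy as np

S = np.diag([1.0, 1.0, 0.0])      # x, y: position-controlled; z: force-controlled
I = np.eye(3)

def hybrid_command(x, x_des, f, f_des, kp=50.0, kf=0.8):
    """Combine a position loop and a force loop via complementary selections."""
    u_pos = kp * (x_des - x)      # simple proportional position loop (assumed)
    u_force = kf * (f_des - f)    # simple proportional force loop (assumed)
    return S @ u_pos + (I - S) @ u_force

x = np.array([0.0, 0.0, 0.1]); x_des = np.array([0.1, 0.0, 0.0])
f = np.array([0.0, 0.0, 2.0]);  f_des = np.array([0.0, 0.0, 5.0])
u = hybrid_command(x, x_des, f, f_des)
print(u)                          # z component depends only on the force error
```

    Because S and I - S project onto orthogonal complementary subspaces, the position error along z (which is force-controlled) is deliberately ignored, which is the nonconflicting sharing the abstract describes.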

  20. Review: Regional land subsidence accompanying groundwater extraction

    USGS Publications Warehouse

    Galloway, Devin L.; Burbey, Thomas J.

    2011-01-01

    The extraction of groundwater can generate land subsidence by causing the compaction of susceptible aquifer systems, typically unconsolidated alluvial or basin-fill aquifer systems comprising aquifers and aquitards. Various ground-based and remotely sensed methods are used to measure and map subsidence. Many areas of subsidence caused by groundwater pumping have been identified and monitored, and corrective measures to slow or halt subsidence have been devised. Two principal means are used to mitigate subsidence caused by groundwater withdrawal—reduction of groundwater withdrawal, and artificial recharge. Analysis and simulation of aquifer-system compaction follow from the basic relations between head, stress, compressibility, and groundwater flow and are addressed primarily using two approaches—one based on conventional groundwater flow theory and one based on linear poroelasticity theory. Research and development to improve the assessment and analysis of aquifer-system compaction, the accompanying subsidence and potential ground ruptures are needed in the topic areas of the hydromechanical behavior of aquitards, the role of horizontal deformation, the application of differential synthetic aperture radar interferometry, and the regional-scale simulation of coupled groundwater flow and aquifer-system deformation to support resource management and hazard mitigation measures.
