Sample records for field method based

  1. Sensitivity-based virtual fields for the non-linear virtual fields method

    NASA Astrophysics Data System (ADS)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2017-09-01

    The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.

  2. Path Planning for Robot based on Chaotic Artificial Potential Field Method

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng

    2018-03-01

    Robot path planning in unknown environments is one of the hot research topics in the field of robot control. To address the shortcomings of traditional artificial potential field methods, we propose a new robot path planning method based on a chaotic artificial potential field. The method adopts the potential function as the objective function and introduces the robot's direction of movement as the control variable, combining an improved artificial potential field method with a chaotic optimization algorithm. Simulations have been carried out, and the results demonstrate the superior practicality and high efficiency of the proposed method.
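
    For orientation, a minimal Python sketch of a conventional attractive/repulsive potential field step is given below; it does not include the paper's chaotic optimization of the movement direction, and the gains, influence radius, and goal/obstacle layout are illustrative assumptions.

        import numpy as np

        def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
            """Move one step along the negative gradient of the combined potential."""
            grad = k_att * (pos - goal)                      # gradient of the attractive potential
            for obs in obstacles:
                d = np.linalg.norm(pos - obs)
                if d < d0:                                   # repulsion acts only inside radius d0
                    grad += k_rep * (1.0 / d0 - 1.0 / d) * (pos - obs) / d**3
            return pos - step * grad / (np.linalg.norm(grad) + 1e-12)

        pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
        obstacles = [np.array([5.0, 5.2])]
        for _ in range(400):
            pos = apf_step(pos, goal, obstacles)
        print(pos)   # approaches the goal while skirting the obstacle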

  3. Multiframe super resolution reconstruction method based on light field angular images

    NASA Astrophysics Data System (ADS)

    Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao

    2017-12-01

    The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.

  4. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field sources localization with a bistatic multiple input, multiple output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of approximated model in most existing near-field sources localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.

  5. A new gradient shimming method based on undistorted field map of B0 inhomogeneity.

    PubMed

    Bao, Qingjia; Chen, Fang; Chen, Li; Song, Kan; Liu, Zao; Liu, Chaoyang

    2016-04-01

    Most existing gradient shimming methods for NMR spectrometers estimate field maps that resolve B0 inhomogeneity spatially from dual gradient-echo (GRE) images acquired at different echo times. However, the distortions induced by B0 inhomogeneity that always exists in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate shimming. This work proposes a new gradient shimming method based on an undistorted field map of B0 inhomogeneity obtained by a more accurate field map estimation technique. Compared to the traditional field map estimation method, this new method exploits both the positive and negative polarities of the frequency-encoding gradients to eliminate the distortions caused by B0 inhomogeneity in the field map. Next, a corresponding automatic post-processing procedure is introduced to obtain an undistorted B0 field map based on knowledge of the invariant characteristics of the B0 inhomogeneity and the variant polarity of the encoding gradient. The experimental results on both simulated and real gradient shimming tests demonstrate the high performance of this new method. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Systems and Methods for Implementing Robust Carbon Nanotube-Based Field Emitters

    NASA Technical Reports Server (NTRS)

    Kristof, Valerie (Inventor); Manohara, Harish (Inventor); Toda, Risaku (Inventor)

    2015-01-01

    Systems and methods in accordance with embodiments of the invention implement carbon nanotube-based field emitters. In one embodiment, a method of fabricating a carbon nanotube field emitter includes: patterning a substrate with a catalyst, where the substrate has thereon disposed a diffusion barrier layer; growing a plurality of carbon nanotubes on at least a portion of the patterned catalyst; and heating the substrate to an extent where it begins to soften such that at least a portion of at least one carbon nanotube becomes enveloped by the softened substrate.

  7. An optical flow-based method for velocity field of fluid flow estimation

    NASA Astrophysics Data System (ADS)

    Głomb, Grzegorz; Świrniak, Grzegorz; Mroczka, Janusz

    2017-06-01

    The aim of this paper is to present a method for estimating flow-velocity vector fields using the Lucas-Kanade algorithm. The optical flow measurements are based on the Particle Image Velocimetry (PIV) technique, which is commonly used in fluid mechanics laboratories in both research institutes and industry. Common approaches to the optical characterization of velocity fields are based on the computation of partial derivatives of the image intensity using finite differences. Nevertheless, the accuracy of velocity field computations is low, because an exact estimation of spatial derivatives is very difficult in the presence of rapid intensity changes in the PIV images caused by particles having small diameters. The method discussed in this paper solves this problem by interpolating the PIV images using Gaussian radial basis functions. This provides a significant improvement in the accuracy of the velocity estimation but, more importantly, allows for the evaluation of the derivatives at intermediate points between pixels. Numerical analysis proves that the method is able to estimate a separate vector for each particle with a 5 × 5 px² window, whereas a classical correlation-based method needs at least 4 particle images. With the use of a specialized multi-step hybrid approach to data analysis, the method improves the estimation of the particle displacement far above 1 px.
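
    For orientation, a minimal windowed Lucas-Kanade estimator using plain finite differences is sketched below in Python; the paper's Gaussian radial-basis-function interpolation and multi-step hybrid refinement are not reproduced, and the window size is an illustrative choice.

        import numpy as np

        def lucas_kanade(img1, img2, y, x, half=2):
            """Estimate the (u, v) displacement of a (2*half+1)^2 window centred on (y, x)."""
            win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
            Ix = np.gradient(img1, axis=1)[win].ravel()                    # spatial derivatives
            Iy = np.gradient(img1, axis=0)[win].ravel()
            It = (img2.astype(float) - img1.astype(float))[win].ravel()   # temporal derivative
            A = np.column_stack([Ix, Iy])
            (u, v), *_ = np.linalg.lstsq(A, -It, rcond=None)               # least-squares flow
            return u, v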

  8. 3D SAPIV particle field reconstruction method based on adaptive threshold.

    PubMed

    Qu, Xiangju; Song, Yang; Jin, Ying; Li, Zhenhua; Wang, Xuezhen; Guo, ZhenYan; Ji, Yunjing; He, Anzhi

    2018-03-01

    Particle image velocimetry (PIV) is an essential flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. Three-dimensional (3D) PIV methods can supply a full understanding of a 3D structure, the complete stress tensor, and the vorticity vector in complex flows. In synthetic aperture particle image velocimetry (SAPIV), the flow field can be measured at large particle intensities from the same direction by different cameras. During SAPIV particle reconstruction, particles are commonly reconstructed by manually setting a threshold to filter out unfocused particles in the refocused images. In this paper, the particle intensity distribution in refocused images is analyzed, and a SAPIV particle field reconstruction method based on an adaptive threshold is presented. By using the adaptive threshold to filter the 3D measurement volume integrally, the three-dimensional location information of the focused particles can be reconstructed. The cross correlations between the images captured by the cameras and the images projected by the reconstructed particle field are calculated for different threshold values. The optimal threshold is determined by cubic curve fitting and is defined as the threshold value that causes the correlation coefficient to reach its maximum. A numerical simulation of a 16-camera array and a particle field at two adjacent time events quantitatively evaluates the performance of the proposed method. An experimental system consisting of an array of 16 cameras was used to reconstruct four adjacent frames in a vortex flow field. The results show that the proposed reconstruction method can effectively reconstruct the 3D particle fields.
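
    The threshold-selection idea described above can be sketched as follows in Python; the correlation values are assumed to come from comparing the camera images with re-projections of the reconstructed particle field, and the sample numbers here are purely illustrative.

        import numpy as np

        def optimal_threshold(thresholds, correlations):
            """Fit a cubic to (threshold, correlation) samples and return the maximising threshold."""
            coeffs = np.polyfit(thresholds, correlations, 3)
            fine = np.linspace(min(thresholds), max(thresholds), 1000)
            return fine[np.argmax(np.polyval(coeffs, fine))]

        t = np.array([0.1, 0.2, 0.3, 0.4, 0.5])        # candidate thresholds
        c = np.array([0.62, 0.74, 0.81, 0.78, 0.65])   # hypothetical correlation coefficients
        print(optimal_threshold(t, c))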

  9. Object-based change detection method using refined Markov random field

    NASA Astrophysics Data System (ADS)

    Peng, Daifeng; Zhang, Yongjun

    2017-01-01

    In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is implemented to measure the distance among different histogram distributions. Meanwhile, object heterogeneity is calculated by combining the spectral and textural histogram distances using adaptive weights. Third, an expectation-maximization algorithm is applied to determine the change category of each object, and the initial change map is then generated. Finally, a refined change map is produced by employing the proposed refined object-based MRF method. Three experiments were conducted and compared with some state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods used in this paper, which confirms its validity and effectiveness in OBCD.

  10. Spatial sound field synthesis and upmixing based on the equivalent source method.

    PubMed

    Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang

    2014-01-01

    Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more loudspeaker channels than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness of the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments were performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality.
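
    A generic single-frequency sketch of regularized multichannel inverse filtering is given below in Python; the transfer matrix, pressures, and regularization weight are stand-ins rather than the paper's ESM propagation model, and the 8-microphone/16-source shape simply mirrors the underdetermined case mentioned above.

        import numpy as np

        def regularized_inverse(G, p, beta=1e-2):
            """Solve min ||G q - p||^2 + beta*||q||^2 for the equivalent-source strengths q."""
            GH = G.conj().T
            return np.linalg.solve(GH @ G + beta * np.eye(G.shape[1]), GH @ p)

        rng = np.random.default_rng(1)
        G = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))   # mics x sources
        p = rng.standard_normal(8) + 1j * rng.standard_normal(8)                # measured pressures
        q = regularized_inverse(G, p)
        print(np.linalg.norm(G @ q - p))                                        # small residual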

  11. FLASHFLOOD: A 3D Field-based similarity search and alignment method for flexible molecules

    NASA Astrophysics Data System (ADS)

    Pitman, Michael C.; Huber, Wolfgang K.; Horn, Hans; Krämer, Andreas; Rice, Julia E.; Swope, William C.

    2001-07-01

    A three-dimensional field-based similarity search and alignment method for flexible molecules is introduced. The conformational space of a flexible molecule is represented in terms of fragments and torsional angles of allowed conformations. A user-definable property field is used to compute features of fragment pairs. Features are generalizations of CoMMA descriptors (Silverman, B.D. and Platt, D.E., J. Med. Chem., 39 (1996) 2129.) that characterize local regions of the property field by its local moments. The features are invariant under coordinate system transformations. Features taken from a query molecule are used to form alignments with fragment pairs in the database. An assembly algorithm is then used to merge the fragment pairs into full structures, aligned to the query. Key to the method is the use of a context adaptive descriptor scaling procedure as the basis for similarity. This allows the user to tune the weights of the various feature components based on examples relevant to the particular context under investigation. The property fields may range from simple, phenomenological fields, to fields derived from quantum mechanical calculations. We apply the method to the dihydrofolate/methotrexate benchmark system, and show that when one injects relevant contextual information into the descriptor scaling procedure, better results are obtained more efficiently. We also show how the method works and include computer times for a query from a database that represents approximately 23 million conformers of seventeen flexible molecules.

  12. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method

    PubMed Central

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    The real-time accurate measurement of the geomagnetic field is the foundation for achieving high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. On the basis of a systematic analysis of the sources of geomagnetic-field measurement error, this paper builds a complete measurement model into which the previously unconsidered geomagnetic daily variation field is introduced. The paper proposes an extended Kalman-filter-based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the statistically optimal solution. The experimental results showed that the compensated strength of the geomagnetic field remained close to the real value and the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its ability to remove the dependence on a high-precision measurement instrument. PMID:28445508
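
    As a loosely related illustration of the Kalman-style estimation used here, the Python sketch below runs one EKF update for a 3-axis hard-iron bias using the known local field magnitude as the measurement; the paper's actual error model (scale factors, misalignment, daily variation, etc.) is considerably richer, and the noise value R is an arbitrary assumption.

        import numpy as np

        def ekf_bias_update(b, P, m, B_ref, R=25.0):
            """One EKF update of the bias estimate b from a magnetometer sample m (all in nT)."""
            h = np.linalg.norm(m - b)                  # predicted field magnitude
            H = ((b - m) / h).reshape(1, 3)            # Jacobian of h with respect to b
            S = H @ P @ H.T + R                        # innovation covariance (scalar here)
            K = P @ H.T / S                            # Kalman gain
            b = b + (K * (B_ref - h)).ravel()          # state (bias) update
            P = (np.eye(3) - K @ H) @ P                # covariance update
            return b, P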

  13. Compressible cavitation with stochastic field method

    NASA Astrophysics Data System (ADS)

    Class, Andreas; Dumond, Julien

    2012-11-01

    Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrange particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method, which solves pdf transport based on Euler fields, has been proposed; it eliminates the necessity to mix Euler and Lagrange techniques or to prescribe pdf assumptions. In the present work, part of the PhD project "Design and analysis of a Passive Outflow Reducer relying on cavitation", a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high-velocity flow such that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed pdf or binning methods can be easily extended to the stochastic field formulation.

  14. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array

    PubMed Central

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Tao, Yuan

    2018-01-01

    Classic core-based instrument transformers are prone to magnetic saturation, which affects their measurement accuracy and limits their application in measuring large direct currents (DC). Moreover, protection and control systems may exhibit malfunctions due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors that are evenly distributed on a circle. The average value over all Hall sensors is regarded as the final measurement. The calculation model is established for the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study on an off-center primary conductor is conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%. PMID:29734742

  15. Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array.

    PubMed

    Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Abu-Siada, Ahmed; Tao, Yuan

    2018-05-05

    Classic core-based instrument transformers are prone to magnetic saturation, which affects their measurement accuracy and limits their application in measuring large direct currents (DC). Moreover, protection and control systems may exhibit malfunctions due to such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors that are evenly distributed on a circle. The average value over all Hall sensors is regarded as the final measurement. The calculation model is established for the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study on an off-center primary conductor is conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%.
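
    A toy Python model of the ring-of-sensors averaging idea is sketched below: n Hall sensors on a circle measure the tangential field of the centred conductor plus an interfering parallel wire, and the average is converted back to a current through Ampere's law. The sensor count, radius, and currents are illustrative assumptions, not the paper's parameters.

        import numpy as np

        MU0 = 4e-7 * np.pi

        def wire_field(sensors, wire_xy, current):
            """In-plane B vectors of an infinite straight wire (axis out of plane) at the sensors."""
            r = sensors - wire_xy
            d2 = np.sum(r**2, axis=1)
            return (MU0 * current / (2 * np.pi * d2))[:, None] * np.column_stack([-r[:, 1], r[:, 0]])

        n, R = 12, 0.05                                          # 12 sensors on a 5 cm radius circle
        ang = 2 * np.pi * np.arange(n) / n
        sensors = R * np.column_stack([np.cos(ang), np.sin(ang)])
        tangential = np.column_stack([-np.sin(ang), np.cos(ang)])

        B = wire_field(sensors, np.array([0.0, 0.0]), 1000.0)    # measured conductor, 1000 A
        B += wire_field(sensors, np.array([3 * R, 0.0]), 500.0)  # interfering parallel wire at 3R
        B_avg = np.mean(np.sum(B * tangential, axis=1))           # average tangential component
        print(B_avg * 2 * np.pi * R / MU0)                        # estimated current, close to 1000 A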

  16. Low Field Squid MRI Devices, Components and Methods

    NASA Technical Reports Server (NTRS)

    Hahn, Inseob (Inventor); Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor)

    2013-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  17. Low Field Squid MRI Devices, Components and Methods

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor); Hahn, Inseob (Inventor)

    2014-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  18. Low field SQUID MRI devices, components and methods

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor); Hahn, Inseob (Inventor)

    2011-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  19. Low field SQUID MRI devices, components and methods

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin I. (Inventor); Eom, Byeong H (Inventor); Hahn, Inseob (Inventor)

    2010-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  20. Teaching Geographic Field Methods Using Paleoecology

    ERIC Educational Resources Information Center

    Walsh, Megan K.

    2014-01-01

    Field-based undergraduate geography courses provide numerous pedagogical benefits including an opportunity for students to acquire employable skills in an applied context. This article presents one unique approach to teaching geographic field methods using paleoecological research. The goals of this course are to teach students key geographic…

  1. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    NASA Astrophysics Data System (ADS)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holography (CGH) faces the great challenge that a high space-bandwidth product (SBP) is required in real-time holographic video display systems. The paper is based on the point-cloud method and takes advantage of the propagation reversibility of Fresnel diffraction along the propagation direction and of the fact that the fringe pattern of a point source, known as a Gabor zone plate, has spatial symmetry, so it can be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed. First, the principal fringe patterns (PFPs) at the virtual plane are pre-calculated by the acceleration algorithm and stored. Secondly, the Fresnel diffraction fringe pattern at the dummy plane is obtained. Finally, the field is Fresnel-propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on Liquid Crystal on Silicon (LCOS) are set up to demonstrate the validity of the proposed method: under the premise of ensuring the quality of the 3D reconstruction, the method proposed in the paper can be applied to shorten the computation time and improve computational efficiency.
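
    A direct (non-accelerated) point-cloud Fresnel accumulation is sketched below in Python to show the Gabor-zone-plate fringes the abstract refers to; the N-LUT/symmetry speed-ups are not reproduced, and the wavelength, pixel pitch, and point cloud are illustrative assumptions.

        import numpy as np

        wavelength, pitch, nx, ny = 532e-9, 8e-6, 512, 512
        x = (np.arange(nx) - nx / 2) * pitch
        y = (np.arange(ny) - ny / 2) * pitch
        X, Y = np.meshgrid(x, y)

        points = [(0.0, 0.0, 0.20, 1.0), (1e-3, -5e-4, 0.25, 0.8)]      # (x, y, z, amplitude)
        field = np.zeros((ny, nx), dtype=complex)
        for px, py, pz, amp in points:
            r2 = (X - px) ** 2 + (Y - py) ** 2
            field += amp * np.exp(1j * np.pi * r2 / (wavelength * pz))  # Fresnel zone-plate fringe
        hologram = np.angle(field)   # phase-only pattern, e.g. for an LCOS modulator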

  2. A Systematic Evaluation of Field-Based Screening Methods for the Assessment of Anterior Cruciate Ligament (ACL) Injury Risk.

    PubMed

    Fox, Aaron S; Bonacci, Jason; McLean, Scott G; Spittle, Michael; Saunders, Natalie

    2016-05-01

    Laboratory-based measures provide an accurate method to identify risk factors for anterior cruciate ligament (ACL) injury; however, these methods are generally prohibitive to the wider community. Screening methods that can be completed in a field or clinical setting may be more applicable for wider community use. Examination of field-based screening methods for ACL injury risk can aid in identifying the most applicable method(s) for use in these settings. The objective of this systematic review was to evaluate and compare field-based screening methods for ACL injury risk to determine their efficacy of use in wider community settings. An electronic database search was conducted on the SPORTDiscus™, MEDLINE, AMED and CINAHL databases (January 1990-July 2015) using a combination of relevant keywords. A secondary search of the same databases, using relevant keywords from identified screening methods, was also undertaken. Studies identified as potentially relevant were independently examined by two reviewers for inclusion. Where consensus could not be reached, a third reviewer was consulted. Original research articles that examined screening methods for ACL injury risk that could be undertaken outside of a laboratory setting were included for review. Two reviewers independently assessed the quality of included studies. Included studies were categorized according to the screening method they examined. A description of each screening method, and data pertaining to the ability to prospectively identify ACL injuries, validity and reliability, recommendations for identifying 'at-risk' athletes, equipment and training required to complete screening, time taken to screen athletes, and applicability of the screening method across sports and athletes were extracted from relevant studies. Of 1077 citations from the initial search, a total of 25 articles were identified as potentially relevant, with 12 meeting all inclusion/exclusion criteria. From the secondary search, eight

  3. A Photoluminescence-Based Field Method for Detection of Traces of Explosives

    PubMed Central

    Menzel, E. Roland; Menzel, Laird W.; Schwierking, Jake R.

    2004-01-01

    We report a photoluminescence-based field method for detecting traces of explosives. In its standard version, the method utilizes a commercially available color spot test kit for treating explosive traces on filter paper after swabbing. The colored products are fluorescent under illumination with a laser that operates on three C-size flashlight batteries and delivers light at 532 nm. In the fluorescence detection mode, by visual inspection, the typical sensitivity gain is a factor of 100. The method is applicable to a wide variety of explosives. In its time-resolved version, intended for in situ work, explosives are tagged with europium complexes. In terms of instrumentation, the time-resolved detection, again visual, can be accomplished in a facile fashion. The europium luminescence excitation utilizes a laser operating at 355 nm. We demonstrate the feasibility of CdSe quantum dot sensitization of europium luminescence for time-resolved purposes. This would allow the use of the above 532 nm laser. PMID:15349512

  4. A novel autonomous real-time position method based on polarized light and geomagnetic field.

    PubMed

    Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Wang, Lu; Wang, Zhiwen

    2015-04-08

    Many animals exploit polarized light in order to calibrate their magnetic compasses for navigation. For example, some birds are equipped with biological magnetic and celestial compasses enabling them to migrate between the Western and Eastern Hemispheres. The Vikings' ability to derive true direction from polarized light is also widely accepted. However, their amazing navigational capabilities are still not completely clear. Inspired by birds' and Vikings' ancient navigational skills, here we present a combined real-time position method based on the use of polarized light and the geomagnetic field. The new method works independently of any artificial signal source, with no accumulation of errors, and can obtain the position and the orientation directly. The novel device simply consists of two polarized light sensors, a 3-axis compass and a computer. The field experiments demonstrate the device's performance.

  5. A novel autonomous real-time position method based on polarized light and geomagnetic field

    NASA Astrophysics Data System (ADS)

    Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Wang, Lu; Wang, Zhiwen

    2015-04-01

    Many animals exploit polarized light in order to calibrate their magnetic compasses for navigation. For example, some birds are equipped with biological magnetic and celestial compasses enabling them to migrate between the Western and Eastern Hemispheres. The Vikings' ability to derive true direction from polarized light is also widely accepted. However, their amazing navigational capabilities are still not completely clear. Inspired by birds' and Vikings' ancient navigational skills, here we present a combined real-time position method based on the use of polarized light and the geomagnetic field. The new method works independently of any artificial signal source, with no accumulation of errors, and can obtain the position and the orientation directly. The novel device simply consists of two polarized light sensors, a 3-axis compass and a computer. The field experiments demonstrate the device's performance.

  6. Evolutionary programming-based univector field navigation method for fast mobile robots.

    PubMed

    Kim, Y J; Kim, J H; Kwon, D S

    2001-01-01

    Most navigation techniques with obstacle avoidance do not consider the robot orientation at the target position. These techniques deal with the robot position only and are independent of its orientation and velocity. To solve these problems, this paper proposes a novel univector field method for fast mobile robot navigation, which introduces a normalized two-dimensional vector field. The method provides fast-moving robots with the desired posture at the target position and obstacle avoidance. To obtain the sub-optimal vector field, a function approximator is used and trained by evolutionary programming. Two kinds of vector fields are trained, one for final posture acquisition and the other for obstacle avoidance. Computer simulations and real experiments are carried out for a fast-moving mobile robot to demonstrate the effectiveness of the proposed scheme.

  7. Harmonics analysis of the ITER poloidal field converter based on a piecewise method

    NASA Astrophysics Data System (ADS)

    Xudong, WANG; Liuwei, XU; Peng, FU; Ji, LI; Yanan, WU

    2017-12-01

    Poloidal field (PF) converters provide controlled DC voltage and current to PF coils. The many harmonics generated by the PF converter flow into the power grid and seriously affect power systems and electric equipment. Due to the complexity of the system, the traditional integral operation in Fourier analysis is complicated and inaccurate. This paper presents a piecewise method to calculate the harmonics of the ITER PF converter. The relationship between the grid input current and the DC output current of the ITER PF converter is deduced. The grid current is decomposed into the sum of some simple functions. By calculating simple function harmonics based on the piecewise method, the harmonics of the PF converter under different operation modes are obtained. In order to examine the validity of the method, a simulation model is established based on Matlab/Simulink and a relevant experiment is implemented in the ITER PF integration test platform. Comparative results are given. The calculated results are found to be consistent with simulation and experiment. The piecewise method is proved correct and valid for calculating the system harmonics.
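
    As a quick numerical cross-check of the characteristic harmonics such an analysis targets, the Python snippet below FFTs an idealized six-pulse converter line current (flat-topped, no commutation overlap); it uses the FFT rather than the paper's piecewise analytical decomposition, and the waveform is an idealization rather than ITER PF operating data.

        import numpy as np

        N = 4096
        theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
        i = np.zeros(N)
        i[(theta > np.pi / 6) & (theta < 5 * np.pi / 6)] = 1.0        # +Id for 120 degrees
        i[(theta > 7 * np.pi / 6) & (theta < 11 * np.pi / 6)] = -1.0  # -Id for 120 degrees
        spec = np.abs(np.fft.rfft(i)) / (N / 2)
        for h in (5, 7, 11, 13):
            print(h, round(spec[h] / spec[1], 3))   # ~1/h for h = 6k +/- 1; triplens vanish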

  8. A robust sub-pixel edge detection method of infrared image based on tremor-based retinal receptive field model

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang

    2008-03-01

    Because of the complex thermal objects in an infrared image, the prevalent image edge detection operators are often suited only to certain scenes and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when assuming a convolution-based receptive field architecture. A DoG (Difference-of-Gaussians) model filter based on the ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. Aiming at the blurred edges of an infrared image, subsequent orthogonal polynomial interpolation and sub-pixel edge detection in the rough edge pixel neighborhood are adopted to locate the foregoing rough edges at the sub-pixel level. Numerical simulations show that this method can locate the target edge accurately and robustly.
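
    The DoG (ON-center receptive field) filtering stage can be sketched as below in Python; the sigma values are illustrative, and the tremor averaging and orthogonal-polynomial sub-pixel refinement described above are not reproduced.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_response(image, sigma_center=1.0, sigma_surround=1.6):
            """Difference-of-Gaussians response approximating an ON-center receptive field."""
            img = image.astype(float)
            return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)
        # Zero-crossings (sign changes) of the response mark candidate contour pixels.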

  9. Evaluation of Three Field-Based Methods for Quantifying Soil Carbon

    PubMed Central

    Izaurralde, Roberto C.; Rice, Charles W.; Wielopolski, Lucian; Ebinger, Michael H.; Reeves, James B.; Thomson, Allison M.; Francis, Barry; Mitra, Sudeep; Rappaport, Aaron G.; Etchevers, Jorge D.; Sayre, Kenneth D.; Govaerts, Bram; McCarty, Gregory W.

    2013-01-01

    Three advanced technologies to measure soil carbon (C) density (g C m−2) are deployed in the field and the results compared against those obtained by the dry combustion (DC) method. The advanced methods are: a) Laser Induced Breakdown Spectroscopy (LIBS), b) Diffuse Reflectance Fourier Transform Infrared Spectroscopy (DRIFTS), and c) Inelastic Neutron Scattering (INS). The measurements and soil samples were acquired at Beltsville, MD, USA and at Centro International para el Mejoramiento del Maíz y el Trigo (CIMMYT) at El Batán, Mexico. At Beltsville, soil samples were extracted at three depth intervals (0–5, 5–15, and 15–30 cm) and processed for analysis in the field with the LIBS and DRIFTS instruments. The INS instrument determined soil C density to a depth of 30 cm via scanning and stationary measurements. Subsequently, soil core samples were analyzed in the laboratory for soil bulk density (kg m−3), C concentration (g kg−1) by DC, and results reported as soil C density (kg m−2). Results from each technique were derived independently and contributed to a blind test against results from the reference (DC) method. A similar procedure was employed at CIMMYT in Mexico, but only with the LIBS and DRIFTS instruments. Following conversion to common units, we found that the LIBS, DRIFTS, and INS results can be compared directly with those obtained by the DC method. The first two methods and the standard DC require soil sampling and need soil bulk density information to convert soil C concentrations to soil C densities, while the INS method does not require soil sampling. We conclude that, in comparison with the DC method, the three instruments (a) showed acceptable performances, although further work is needed to improve calibration techniques, and (b) demonstrated their portability and their capacity to perform under field conditions. PMID:23383225

  10. A velocity probe-based method for continuous detonation and shock measurement in near-field underwater explosion.

    PubMed

    Li, Kebin; Li, Xiaojie; Yan, Honghao; Wang, Xiaohong; Miao, Yusong

    2017-12-01

    A new velocity probe which permits recording the time history of detonation and shock waves has been developed by improving a commercial probe in both principle and structure. A method based on the probe is then designed to measure the detonation velocity and near-field shock parameters in a single underwater explosion, by which the oblique shock wave front of cylindrical charges and the peak pressure attenuation curve of a spherical explosive are obtained. A further derivation of detonation pressure, adiabatic exponent, and other shock parameters is conducted. The present method offers a novel and reliable parameter determination for near-field underwater explosions.

  11. A velocity probe-based method for continuous detonation and shock measurement in near-field underwater explosion

    NASA Astrophysics Data System (ADS)

    Li, Kebin; Li, Xiaojie; Yan, Honghao; Wang, Xiaohong; Miao, Yusong

    2017-12-01

    A new velocity probe which permits recording the time history of detonation and shock waves has been developed by improving a commercial probe in both principle and structure. A method based on the probe is then designed to measure the detonation velocity and near-field shock parameters in a single underwater explosion, by which the oblique shock wave front of cylindrical charges and the peak pressure attenuation curve of a spherical explosive are obtained. A further derivation of detonation pressure, adiabatic exponent, and other shock parameters is conducted. The present method offers a novel and reliable parameter determination for near-field underwater explosions.

  12. An analytical method to calculate equivalent fields to irregular symmetric and asymmetric photon fields.

    PubMed

    Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima

    2014-01-01

    The equivalent field is frequently used for central-axis depth-dose calculations of rectangular- and irregular-shaped photon beams. As most of the proposed models to calculate the equivalent square field are dosimetry based, a simple physical-based method to calculate the equivalent square field size was used as the basis of this study. A table of the sides of the equivalent square or rectangular fields was constructed and then compared with the well-known tables by BJR and Venselaar et al., with average relative error percentages of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, the percentage depth doses (PDDs) were measured for some special irregular symmetric and asymmetric treatment fields and their equivalent squares on a Siemens Primus Plus linear accelerator for both energies, 6 and 18 MV. The mean relative difference of the PDD measurements for these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field. © 2013 American Association of Medical Dosimetrists. Published by American Association of Medical Dosimetrists. All rights reserved.
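
    For a plain rectangular field, the widely used area-to-perimeter rule gives a quick equivalent-square estimate, sketched below in Python; this common rule is shown only for orientation and is not necessarily the exact physical formulation the authors used.

        def equivalent_square_side(a, b):
            """Equivalent square side from the area-to-perimeter rule: 4*Area/Perimeter = 2ab/(a+b)."""
            return 2.0 * a * b / (a + b)

        print(equivalent_square_side(10.0, 20.0))   # ~13.3 cm for a 10 cm x 20 cm field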

  13. A novel autonomous real-time position method based on polarized light and geomagnetic field

    PubMed Central

    Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Wang, Lu; Wang, Zhiwen

    2015-01-01

    Many animals exploit polarized light in order to calibrate their magnetic compasses for navigation. For example, some birds are equipped with biological magnetic and celestial compasses enabling them to migrate between the Western and Eastern Hemispheres. The Vikings' ability to derive true direction from polarized light is also widely accepted. However, their amazing navigational capabilities are still not completely clear. Inspired by birds' and Vikings' ancient navigational skills, here we present a combined real-time position method based on the use of polarized light and the geomagnetic field. The new method works independently of any artificial signal source, with no accumulation of errors, and can obtain the position and the orientation directly. The novel device simply consists of two polarized light sensors, a 3-axis compass and a computer. The field experiments demonstrate the device's performance. PMID:25851793

  14. An efficient impedance method for induced field evaluation based on a stabilized Bi-conjugate gradient algorithm.

    PubMed

    Wang, Hua; Liu, Feng; Xia, Ling; Crozier, Stuart

    2008-11-21

    This paper presents a stabilized bi-conjugate gradient algorithm (BiCGstab) that can significantly improve the performance of the impedance method, which has been widely applied to model low-frequency field induction phenomena in voxel phantoms. The improved impedance method offers remarkable computational advantages in terms of convergence performance and memory consumption over the conventional successive over-relaxation (SOR)-based algorithm. The scheme has been validated against other numerical/analytical solutions on a lossy, multilayered sphere phantom excited by an ideal coil loop. To demonstrate the computational performance and application capability of the developed algorithm, the fields induced inside a human phantom by a low-frequency hyperthermia device are evaluated. The simulation results show the numerical accuracy and superior performance of the method.
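
    A generic sketch of solving a large sparse system with BiCGstab is given below (Python/SciPy); the matrix is a simple stand-in, not an impedance-network model of a voxel phantom.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import bicgstab

        n = 1000
        A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)
        x, info = bicgstab(A, b)                      # info == 0 signals convergence
        print(info, np.linalg.norm(A @ x - b))        # residual of the converged solution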

  15. Non-uniform refractive index field measurement based on light field imaging technique

    NASA Astrophysics Data System (ADS)

    Du, Xiaokun; Zhang, Yumin; Zhou, Mengjie; Xu, Dong

    2018-02-01

    In this paper, a method for measuring a non-uniform refractive index field based on the light field imaging technique is proposed. First, the light field camera is used to collect the four-dimensional light field data, and then the light field data are decoded according to the light field imaging principle to obtain image sequences of the refractive index field at different acquisition angles. Subsequently, the PIV (Particle Image Velocimetry) technique is used to extract the ray offset of each image. Finally, the distribution of the non-uniform refractive index field can be calculated by inverting the deflection of the light rays. Compared with traditional optical methods, which require multiple optical detectors at multiple angles to collect data synchronously, the method proposed in this paper needs only a light field camera and a single shot. The effectiveness of the method has been verified by an experiment that quantitatively measures the distribution of the refractive index field above the flame of an alcohol lamp.

  16. Comparison of on-site field measured inorganic arsenic in rice with laboratory measurements using a field deployable method: Method validation.

    PubMed

    Mlangeni, Angstone Thembachako; Vecchi, Valeria; Norton, Gareth J; Raab, Andrea; Krupp, Eva M; Feldmann, Joerg

    2018-10-15

    A commercial arsenic field kit designed to measure inorganic arsenic (iAs) in water was modified into a field deployable method (FDM) to measure iAs in rice. While the method has been validated to give precise and accurate results in the laboratory, its on-site field performance has not been evaluated. This study was designed to test the method on-site in Malawi in order to evaluate its accuracy and precision in determining iAs on-site by comparing it with a validated reference method, and to provide original data on inorganic arsenic in Malawian rice and rice-based products. The method was validated against the established laboratory-based HPLC-ICP-MS. Statistical tests indicated there were no significant differences between on-site and laboratory iAs measurements determined using the FDM (p = 0.263, α = 0.05) or between on-site measurements and measurements determined using HPLC-ICP-MS (p = 0.299, α = 0.05). This method allows quick (within 1 h) and efficient on-site screening of iAs concentrations in rice. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Few studies pay attention to prediction for analog circuits. The few existing methods lack correlation with circuit analysis when extracting and calculating features, so that FI (fault indicator) calculation often lacks rationality, thus affecting prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components account for the largest share of faults in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and the degradation of single components in the model in order to obtain a more reasonable FI feature set via calculation. According to the obtained FI feature set, it establishes a novel model of the degradation trend of analog circuits' single components. Lastly, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of analog circuits' single components. Since the calculation of the FI feature set is more reasonable, the prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments.

  18. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.

  19. Field-based evaluation of a male-specific (F+) RNA coliphage concentration method.

    PubMed

    Chandler, J C; Pérez-Méndez, A; Paar, J; Doolittle, M M; Bisha, B; Goodridge, L D

    2017-01-01

    Fecal contamination of water poses a significant risk to public health due to the potential presence of pathogens, including enteric viruses. Therefore, sensitive, reliable and easy-to-use methods for the concentration, detection and quantification of microorganisms associated with the safety and quality of water are needed. In this study, we performed a field evaluation of an anion exchange resin-based method to concentrate male-specific (F+) RNA coliphages (FRNA), fecal indicator organisms, from diverse environmental waters that were suspected to be contaminated with feces. In this system, FRNA coliphages are adsorbed to anion exchange resin and direct nucleic acid isolation is performed, yielding a sample amenable to real-time reverse transcriptase (RT)-PCR detection. Matrix-dependent inhibition of this method was evaluated using known quantities of spiked FRNA coliphages belonging to four genogroups (GI, GII, GIII and GIV). RT-PCR-based detection was successful in 97%, 72%, 85% and 98% of the samples spiked (10^6 pfu/l) with GI, GII, GIII and GIV, respectively. Differential FRNA coliphage genogroup detection was linked to inhibitors that altered RT-PCR assay efficiency. No association between inhibition and the physicochemical properties of the water samples was apparent. Additionally, the anion exchange resin method facilitated detection of naturally present FRNA coliphages in 40 of 65 environmental water samples (61.5%), demonstrating the viability of this system to concentrate FRNA coliphages from water. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling

    PubMed Central

    Tian, Shulin; Yang, Chenglin

    2014-01-01

    Few studies pay attention to prediction for analog circuits. The few existing methods lack correlation with circuit analysis when extracting and calculating features, so that FI (fault indicator) calculation often lacks rationality, thus affecting prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components account for the largest share of faults in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and the degradation of single components in the model in order to obtain a more reasonable FI feature set via calculation. According to the obtained FI feature set, it establishes a novel model of the degradation trend of analog circuits' single components. Lastly, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of analog circuits' single components. Since the calculation of the FI feature set is more reasonable, the prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853

  1. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    NASA Astrophysics Data System (ADS)

    Sun, S.; Chen, C.; WANG, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information consists generally of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography does not need any a priori information or large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially when their distribution is complex and irregular. Therefore, we attempt to use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ along their respective directions, and this characteristic is also present in their probability tomography results. We therefore use some rules to combine the probability tomography results of ∂ΔΤ⁄∂x, ∂ΔΤ⁄∂y and ∂ΔΤ⁄∂z into a new result, which is used for extracting a priori information, and then incorporate that information into the model objective function as spatial weighting functions to invert the final magnetic susceptibility. Synthetic magnetic examples with and without the a priori information extracted from the probability tomography results were compared; the results show that the former are more concentrated and resolve the source body edges with higher resolution. The method is finally applied to an iron mine in China with field-measured ΔΤ data and performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M

  2. Force Field for Water Based on Neural Network.

    PubMed

    Wang, Hao; Yang, Weitao

    2018-05-18

    We developed a novel neural network-based force field for water, trained on high-level ab initio theory. The force field was built on the electrostatically embedded many-body expansion method truncated at two-body interactions. The many-body expansion method is a common strategy to partition the total Hamiltonian of large systems into a hierarchy of few-body terms. Neural networks were trained to represent the electrostatically embedded one-body and two-body interactions, which require as input only one- and two-water-molecule calculations at the level of the ab initio electronic structure method CCSD/aug-cc-pVDZ embedded in the molecular mechanics water environment, making the approach efficient for general force field construction. Structural and dynamic properties of liquid water calculated with our force field show good agreement with experimental results. We constructed two sets of neural network-based force fields: non-polarizable and polarizable force fields. Simulation results show that the non-polarizable force field using fixed TIP3P charges already behaves well, since polarization effects and many-body effects are implicitly included through the electrostatic embedding scheme. Our results demonstrate that the electrostatically embedded many-body expansion combined with neural networks provides a promising and systematic way to build next-generation force fields at high accuracy and low computational cost, especially for large systems.
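
    The bookkeeping of the truncated, embedded many-body expansion can be sketched as below in Python; e1 and e2 stand in for the trained one- and two-body networks and are trivial placeholders here, so only the structure of the expansion is illustrated.

        from itertools import combinations

        def e1(monomer):
            return 0.0                     # placeholder for the embedded one-body network

        def e2(dimer):
            return 0.0                     # placeholder for the embedded two-body network

        def total_energy(monomers):
            """E ~= sum_i E1(i) + sum_{i<j} [E2(i, j) - E1(i) - E1(j)]."""
            E = sum(e1(m) for m in monomers)
            for i, j in combinations(range(len(monomers)), 2):
                E += e2((monomers[i], monomers[j])) - e1(monomers[i]) - e1(monomers[j])
            return E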

  3. Construction method of QC-LDPC codes based on multiplicative group of finite field in optical communication

    NASA Astrophysics Data System (ADS)

    Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui

    2016-09-01

    In order to meet the needs of the high-speed development of optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of the code constructed by this method has no cycles of length 4, which ensures that the obtained code has good distance properties. Simulation results show that at a bit error rate (BER) of 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is 0.2 dB and 0.4 dB higher, respectively, than those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
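
    The expansion of an exponent matrix into circulant permutation matrices, the step shared by most QC-LDPC constructions, is sketched below in Python; the exponent pattern used here is a simple stand-in, not the multiplicative-group design of the paper.

        import numpy as np

        def cpm(shift, size):
            """size x size circulant permutation matrix: the identity cyclically shifted by 'shift'."""
            return np.roll(np.eye(size, dtype=np.uint8), shift, axis=1)

        def expand(exponents, size):
            """Replace every exponent e by its CPM to assemble the parity-check matrix H."""
            return np.block([[cpm(e % size, size) for e in row] for row in exponents])

        exponents = [[(i * j) % 7 for j in range(4)] for i in range(2)]   # stand-in exponent matrix
        H = expand(exponents, 7)
        print(H.shape, H.sum(axis=0))   # every column has weight 2 (one 1 per block row)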

  4. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resulting matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence", i.e., the optimal regularized solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that is generally totally corrupted by errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the study of the regularizing properties of Krylov subspace methods such as conjugate gradients, LSQR (least squares QR) and the recently proposed hybrid method. A discussion and comparison of the available stopping rules are included. A vibrating plate is considered as an example to validate our results.
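
    The "iterations as regularization" behaviour can be illustrated with the Python sketch below, in which LSQR is stopped after different iteration counts on a noisy, ill-conditioned synthetic system; the matrix is a random stand-in for the discretized boundary-integral operator, and the noise level is an arbitrary assumption.

        import numpy as np
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(0)
        n = 200
        U, _ = np.linalg.qr(rng.standard_normal((n, n)))
        V, _ = np.linalg.qr(rng.standard_normal((n, n)))
        A = U @ np.diag(np.logspace(0, -8, n)) @ V.T      # rapidly decaying singular values
        x_true = V[:, :20] @ rng.standard_normal(20)      # solution dominated by leading modes
        b = A @ x_true + 1e-4 * rng.standard_normal(n)    # noisy "measurements"

        for iters in (5, 20, 200):
            x = lsqr(A, b, iter_lim=iters)[0]
            print(iters, np.linalg.norm(x - x_true))      # error typically dips, then grows again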

  5. Modeling the electrostatic field localization in nanostructures based on DLC films using the tunneling microscopy methods

    NASA Astrophysics Data System (ADS)

    Yakunin, Alexander N.; Aban'shin, Nikolay P.; Avetisyan, Yuri A.; Akchurin, Georgy G.; Akchurin, Garif G.

    2018-04-01

    A model for calculating the electrostatic field in the system "probe of a tunneling microscope - nanostructure based on a DLC film" was developed. Finite-element modeling of the field localization was carried out, taking into account the morphological and topological features of the nanostructure. The obtained results and their interpretation contribute to the development of conceptual models of tunneling electric transport processes. The potential for effective use of tunneling microscopy methods in the development of new nanophotonic devices is demonstrated.

  6. Building MapObjects attribute field in cadastral database based on the method of Jackson system development

    NASA Astrophysics Data System (ADS)

    Chen, Zhu-an; Zhang, Li-ting; Liu, Lu

    2009-10-01

    ESRI's GIS component MapObjects is applied in many cadastral information systems because of its compactness and flexibility. In such systems, some cadastral information is saved directly in the cadastral database in MapObjects' shapefile format. However, MapObjects does not provide a function for building attribute fields in a map layer's attribute data file in the cadastral database, so users cannot save analysis results. This paper designs and implements an attribute-field-building function for MapObjects based on the Jackson system development method.

  7. A new method for gravity field recovery based on frequency analysis of spherical harmonics

    NASA Astrophysics Data System (ADS)

    Cai, Lin; Zhou, Zebing

    2017-04-01

    Existing methods for gravity field recovery are mostly based on the space-wise and time-wise approaches, whose core processes are constructing the observation equations and solving them by the least squares method. It should be noted that a least squares solution is an approximation. On the other hand, in 1-D data (time series) analysis the harmonic coefficients can be obtained directly and precisely by computing the Fast Fourier Transform (FFT). The question of whether the spherical harmonic coefficients can be obtained directly and precisely by computing the 2-D FFT of satellite gravity mission measurements is therefore of great significance, since it may lead to a new understanding of the signal components of the gravity field and allow the field to be determined quickly by taking advantage of the FFT. As in the 1-D case, the 2-D FFT of the satellite measurements can be computed rapidly. If the relationship between spherical harmonics and 2-D Fourier frequencies, and the transfer function from measurements to spherical harmonic coefficients, can be determined, the question above can be answered. The objective of this research project is therefore to establish a new method based on frequency analysis of spherical harmonics that directly computes the spherical harmonic coefficients of the gravity field, which differs from recovery by least squares. In the 1-D FFT there is a one-to-one correspondence between the frequency spectrum and the time series, and the 2-D FFT has a similar relationship. However, any spherical harmonic of degree or order higher than one contains multiple frequencies, and these frequencies may be aliased. Fortunately, the elements and ratios of these frequencies can be determined, so the coefficients of the spherical harmonics can be computed from the 2-D FFT. This relationship can be written as a set of equations, equivalent to a matrix, which is fixed and can be derived in advance. To date the relationship has been determined. Some preliminary
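
    The 1-D analogy invoked above is easy to verify numerically: the FFT of an evenly sampled series returns its harmonic coefficients exactly, with no least-squares fit (an illustration only, not the project's 2-D spherical harmonic transfer matrix):

```python
import numpy as np

# 1-D analogy: the FFT returns the harmonic coefficients of an evenly sampled
# series directly and exactly, with no least-squares fit.
N = 64
t = np.arange(N) / N
series = 2.0 * np.cos(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

c = np.fft.rfft(series) / N
print(2 * c[3].real)   # ~ 2.0 (cosine amplitude at frequency 3)
print(-2 * c[7].imag)  # ~ 0.5 (sine amplitude at frequency 7)
```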

  8. Method for making field-structured memory materials

    DOEpatents

    Martin, James E.; Anderson, Robert A.; Tigges, Chris P.

    2002-01-01

    A method of forming a dual-level memory material using field-structured materials. The field-structured materials are formed from a dispersion of ferromagnetic particles in a polymerizable liquid medium, such as a urethane acrylate-based photopolymer, which are applied as a film to a support and then exposed in selected portions of the film to an applied magnetic or electric field. The field can be applied either uniaxially or biaxially at field strengths up to 150 G or higher to form the field-structured materials. After polymerizing the field-structured materials, a magnetic field can be applied to selected portions of the polymerized field-structured material to yield a dual-level memory material on the support, wherein the dual-level memory material supports read-and-write binary data memory and write-once, read-many memory.

  9. Implementation of a Serial Replica Exchange Method in a Physics-Based United-Residue (UNRES) Force Field

    PubMed Central

    Shen, Hujun; Czaplewski, Cezary; Liwo, Adam; Scheraga, Harold A.

    2009-01-01

    The kinetic-trapping problem in simulating protein folding can be overcome by using a Replica Exchange Method (REM). However, in implementing REM in molecular dynamics simulations, synchronization between processors on parallel computers is required, and communication between processors limits its ability to sample conformational space in a complex system efficiently. To minimize communication between processors during the simulation, a Serial Replica Exchange Method (SREM) has been proposed recently by Hagan et al. (J. Phys. Chem. B 2007, 111, 1416–1423). Here, we report the implementation of this new SREM algorithm with our physics-based united-residue (UNRES) force field. The method has been tested on the protein 1E0L with a temperature-independent UNRES force field and on terminally blocked deca-alanine (Ala10) and 1GAB with the recently introduced temperature-dependent UNRES force field. With the temperature-independent force field, SREM reproduces the results of REM but is more efficient in terms of wall-clock time and scales better on distributed-memory machines. However, exact application of SREM to the temperature-dependent UNRES algorithm requires the determination of a four-dimensional distribution of UNRES energy components instead of a one-dimensional energy distribution for each temperature, which is prohibitively expensive. Hence, we assumed that the temperature dependence of the force field can be ignored for neighboring temperatures. This version of SREM worked for Ala10 which is a simple system but failed to reproduce the thermodynamic results as well as regular REM on the more complex 1GAB protein. Hence, SREM can be applied to the temperature-independent but not to the temperature-dependent UNRES force field. PMID:20011673
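
    For context, the temperature replica exchange acceptance step that SREM approximates locally can be sketched as follows (the generic Metropolis criterion for REM, not the UNRES-specific implementation; kcal/mol units assumed):

```python
import numpy as np

def swap_accepted(E_i, E_j, T_i, T_j, k_B=0.0019872041, rng=np.random.default_rng()):
    """Metropolis criterion for exchanging configurations between replicas at
    temperatures T_i and T_j (Boltzmann constant in kcal/(mol K); generic REM)."""
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_j - E_i)
    return delta <= 0 or rng.random() < np.exp(-delta)

# Example swap attempt between neighbouring temperatures; with these numbers
# the move is accepted with probability exp(-delta), roughly 0.8.
print(swap_accepted(E_i=-120.0, E_j=-118.0, T_i=300.0, T_j=320.0))
```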

  10. Identification of active sources inside cavities using the equivalent source method-based free-field recovery technique

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Hu, Ding-Yu; Zhang, Yong-Bin; Jing, Wen-Qian

    2015-06-01

    In previous studies, an equivalent source method (ESM)-based technique for recovering the free sound field in a noisy environment has been successfully applied to exterior problems. In order to evaluate its performance in a more general noisy environment, that technique is used here to identify active sources inside cavities, where the sound field is composed of the field radiated by the active sources and the field reflected by the walls. A patch approach with two semi-closed surfaces covering the target active sources is presented to perform the measurements, and the field that would be radiated by these target active sources into free space is extracted from the mixed field by using the proposed technique; this extracted field is then used as the input to near-field acoustic holography for source identification. Simulation and experimental results validate the effectiveness of the proposed technique for source identification in cavities, and show the feasibility of performing the measurements with a double-layer planar array.
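
    The core equivalent source step that such techniques build on can be sketched as a regularized least-squares fit of monopole source strengths to measured pressures (a generic ESM sketch with illustrative geometry and regularization, not the paper's patch formulation):

```python
import numpy as np

def greens_matrix(field_pts, source_pts, k):
    """Free-field monopole Green's functions exp(-ikr)/(4*pi*r)."""
    r = np.linalg.norm(field_pts[:, None, :] - source_pts[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def esm_reconstruct(p_meas, mic_pts, src_pts, recon_pts, k, reg=1e-3):
    """Fit equivalent source strengths q from measured pressures via Tikhonov-
    regularized least squares, then radiate them to the reconstruction points.
    Generic ESM sketch; geometry and regularization are illustrative only."""
    G = greens_matrix(mic_pts, src_pts, k)
    A = G.conj().T @ G + reg * np.eye(G.shape[1])
    q = np.linalg.solve(A, G.conj().T @ p_meas)
    return greens_matrix(recon_pts, src_pts, k) @ q
```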

  11. A simple calculation method for determination of equivalent square field

    PubMed Central

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-01-01

    Determination of the equivalent square field for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software, and is usually accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula for the equivalent field based on an analysis of scatter reduction due to the inverse square law. Tables based on experimental data are published by agencies such as the ICRU (International Commission on Radiation Units and Measurements), and mathematical formulas that yield the equivalent square of an irregular rectangular field are used extensively in dose computation; however, these approaches lead to complicated and time-consuming formulas, which motivated the current study. In this work, by considering the contribution of scattered radiation to the absorbed dose at the point of measurement, a numerical relation was obtained, from which a simple formula for calculating the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square of a rectangular field, and it may also be used for a shielded field or an off-axis point. In addition, the equivalent field of a rectangular field can be calculated to a good approximation from the concept of scatter reduction with the inverse square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are used extensively in treatment planning. PMID:22557801
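
    For comparison, the widely used empirical area-to-perimeter (Sterling) rule for the equivalent square can be stated in a couple of lines; it is the standard textbook approximation, not the scatter/inverse-square formula derived in this paper:

```python
def equivalent_square_side(a_cm, b_cm):
    """Classic area-to-perimeter (Sterling) rule for the equivalent square of an
    a x b rectangular field: side = 4A/P = 2ab/(a+b). Shown for comparison only;
    this is not the scatter/inverse-square formula derived in the paper above."""
    return 2.0 * a_cm * b_cm / (a_cm + b_cm)

print(equivalent_square_side(5.0, 20.0))  # ~8.0 cm equivalent square side
```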

  12. A simple calculation method for determination of equivalent square field.

    PubMed

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-04-01

    Determination of the equivalent square field for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software, and is usually accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula for the equivalent field based on an analysis of scatter reduction due to the inverse square law. Tables based on experimental data are published by agencies such as the ICRU (International Commission on Radiation Units and Measurements), and mathematical formulas that yield the equivalent square of an irregular rectangular field are used extensively in dose computation; however, these approaches lead to complicated and time-consuming formulas, which motivated the current study. In this work, by considering the contribution of scattered radiation to the absorbed dose at the point of measurement, a numerical relation was obtained, from which a simple formula for calculating the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square of a rectangular field, and it may also be used for a shielded field or an off-axis point. In addition, the equivalent field of a rectangular field can be calculated to a good approximation from the concept of scatter reduction with the inverse square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are used extensively in treatment planning.

  13. Sheet metals characterization using the virtual fields method

    NASA Astrophysics Data System (ADS)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2018-05-01

    In this work, a characterisation method involving a deep-notched specimen subjected to tensile loading is introduced. This specimen leads to heterogeneous states of stress and strain, the latter being measured using a stereo DIC system (MatchID). This heterogeneity enables the identification of multiple material parameters in a single test. In order to identify material parameters from the DIC data, an inverse method called the Virtual Fields Method is employed. The method, combined with the recently developed sensitivity-based virtual fields, makes it possible to optimally locate the areas of the test where information about each material parameter is encoded, improving the accuracy of the identification over traditional user-defined virtual fields. It is shown that a single test performed at 45° to the rolling direction is sufficient to obtain all anisotropic plastic parameters, thus reducing the experimental effort involved in characterisation. The paper presents the methodology and some numerical validation.

  14. A biomolecular detection method based on charge pumping in a nanogap embedded field-effect-transistor biosensor

    NASA Astrophysics Data System (ADS)

    Kim, Sungho; Ahn, Jae-Hyuk; Park, Tae Jung; Lee, Sang Yup; Choi, Yang-Kyu

    2009-06-01

    A unique direct electrical detection method for biomolecules, charge pumping, was demonstrated using a nanogap-embedded field-effect transistor (FET). With the aid of the charge pumping method, the sensitivity can reach below the 1 ng/ml concentration regime for antigen-antibody binding in an avian influenza case. Biomolecules immobilized in the nanogap are mainly responsible for the pronounced changes of the interface trap density due to modulation of the energy level of the trap. This finding is supported by a numerical simulation. The proposed detection method for biomolecules using a nanogap-embedded FET represents a foundation for a chip-based biosensor capable of high sensitivity.

  15. A novel method for unsteady flow field segmentation based on stochastic similarity of direction

    NASA Astrophysics Data System (ADS)

    Omata, Noriyasu; Shirayama, Susumu

    2018-04-01

    Recent developments in fluid dynamics research have opened up the possibility for the detailed quantitative understanding of unsteady flow fields. However, the visualization techniques currently in use generally provide only qualitative insights. A method for dividing the flow field into physically relevant regions of interest can help researchers quantify unsteady fluid behaviors. Most methods at present compare the trajectories of virtual Lagrangian particles. The time-invariant features of an unsteady flow are also frequently of interest, but the Lagrangian specification only reveals time-variant features. To address these challenges, we propose a novel method for the time-invariant spatial segmentation of an unsteady flow field. This segmentation method does not require Lagrangian particle tracking but instead quantitatively compares the stochastic models of the direction of the flow at each observed point. The proposed method is validated with several clustering tests for 3D flows past a sphere. Results show that the proposed method reveals the time-invariant, physically relevant structures of an unsteady flow.

  16. [Research on the temperature field detection method of hot forging based on long-wavelength infrared spectrum].

    PubMed

    Zhang, Yu-Cun; Wei, Bin; Fu, Xian-Bin

    2014-02-01

    A temperature field detection method based on the long-wavelength infrared spectrum is proposed for hot forging. This method combines primary spectrum pyrometry with a three-stage FP-cavity LCTF. By optimizing the solutions of three groups of nonlinear equations in the mathematical model of temperature detection, errors are reduced, so the measurement results are more objective and accurate. The three-stage FP-cavity LCTF system was then designed on the principle of crystal birefringence; it realizes rapid selection of any wavelength within a certain range, making the response of the temperature measuring system fast and accurate. As a result, without knowledge of the forging's emissivity, the method can acquire accurate temperature field information and effectively suppress the background radiation around the hot forging and the ambient light that impair temperature detection accuracy. Finally, MATLAB results showed that the infrared spectrum transmitted through the three-stage FP-cavity LCTF meets the design requirements, and experiments verified the feasibility of the temperature measurement method. Compared with a traditional single-band thermal infrared imager, the measurement accuracy is improved.

  17. Graphene-based field-effect transistor biosensors

    DOEpatents

    Chen, Junhong; Mao, Shun; Lu, Ganhua

    2017-06-14

    The disclosure provides a field-effect transistor (FET)-based biosensor and uses thereof. In particular, it relates to FET-based biosensors using thermally reduced graphene-based sheets as a conducting channel decorated with nanoparticle-biomolecule conjugates. The present disclosure also relates to FET-based biosensors using metal nitride/graphene hybrid sheets. The disclosure provides a method for detecting a target biomolecule in a sample using the FET-based biosensor described herein.

  18. A new method for incoherent combining of far-field laser beams based on multiple faculae recognition

    NASA Astrophysics Data System (ADS)

    Ye, Demao; Li, Sichao; Yan, Zhihui; Zhang, Zenan; Liu, Yuan

    2018-03-01

    Compared to coherent beam combining, incoherent beam combining can deliver a high-power laser output with high efficiency, simple structure, low cost and high resistance to thermal damage, and it is easy to realize in engineering. Higher power on target is achieved by incoherent beam combination using multi-channel optical path correction. However, each channel forms its own spot in the far field, and a low overlap ratio between the faculae prevents a high laser power density. In order to improve the combat effectiveness of the system, it is necessary to overlap the different faculae and thereby improve the energy density on target. Hence, a novel method for incoherent combining of far-field laser beams is presented. The method combines piezoelectric ceramic actuation with an evaluation algorithm for the faculae coincidence degree, based on high-precision multi-channel optical path correction. The results show that the faculae recognition algorithm has low latency (less than 10 ms), which can meet the needs of practical engineering. Furthermore, the real-time focusing ability on far-field faculae is improved, which is beneficial to the engineering of high-energy laser weapons and other laser jamming systems.

  19. A cavitation model based on Eulerian stochastic fields

    NASA Astrophysics Data System (ADS)

    Magagnato, F.; Dumond, J.

    2013-12-01

    Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  20. Investigation of base pairs containing oxidized guanine using ab initio method and ABEEMσπ polarizable force field.

    PubMed

    Liu, Cui; Wang, Yang; Zhao, Dongxia; Gong, Lidong; Yang, Zhongzhi

    2014-02-01

    The integrity of the genetic information is constantly threatened by oxidizing agents, and oxidized guanines have been linked to different types of cancer. Theoretical approaches supplement the assorted experimental techniques and bring new insight and opportunities to investigate the underlying microscopic mechanisms. Unfortunately, there is no force field specific to DNA systems that include oxidized guanines. Taking high-level ab initio calculations as a benchmark, we developed the ABEEMσπ fluctuating-charge force field, which uses multiple fluctuating charges per atom, and applied it to study the energies, structures and mutations of base pairs containing oxidized guanines. The geometries were obtained with reference to other studies or by optimization at the B3LYP/6-31+G* level, which proved the most rational and time-saving among the 24 quantum mechanical methods selected and tested in this work. The energies were determined at the MP2/aug-cc-pVDZ level with BSSE corrections. Results show that the constructed potential function can accurately simulate the changes of the H-bonds and of the buckle angle formed by the two base planes induced by oxidized guanine, and it provides reliable information on hydrogen bonding, stacking interactions and the mutation processes. The performance of the ABEEMσπ polarizable force field in predicting bond lengths, bond angles, dipole moments, etc. is generally better than that of common force fields, and the accuracy of the ABEEMσπ PFF is close to that of the MP2 method. This shows that the ABEEMσπ model is a reliable choice for further research on the dynamic behavior of DNA fragments including oxidized guanine. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Wave field restoration using three-dimensional Fourier filtering method.

    PubMed

    Kawasaki, T; Takai, Y; Ikuta, T; Shimizu, R

    2001-11-01

    A wave field restoration method in transmission electron microscopy (TEM) was mathematically derived based on a three-dimensional (3D) image formation theory. Wave field restoration using this method together with spherical aberration correction was experimentally confirmed in through-focus images of amorphous tungsten thin film, and the resolution of the reconstructed phase image was successfully improved from the Scherzer resolution limit to the information limit. In an application of this method to a crystalline sample, the surface structure of Au(110) was observed in a profile-imaging mode. The processed phase image showed quantitatively the atomic relaxation of the topmost layer.

  2. Magnetic space-based field measurements

    NASA Technical Reports Server (NTRS)

    Langel, R. A.

    1981-01-01

    Satellite measurements of the geomagnetic field began with the launch of Sputnik 3 in May 1958 and have continued sporadically in the intervening years. A list of spacecraft that have made significant contributions to an understanding of the near-earth geomagnetic field is presented. A new era in near-earth magnetic field measurements began with NASA's launch of Magsat in October 1979. Attention is given to geomagnetic field modeling, crustal magnetic anomaly studies, and investigations of the inner earth. It is concluded that satellite-based magnetic field measurements make global surveys practical for both field modeling and for the mapping of large-scale crustal anomalies. They are the only practical method of accurately modeling the global secular variation. Magsat is providing a significant contribution, both because of the timeliness of the survey and because its vector measurement capability represents an advance in the technology of such measurements.

  3. Creating analytically divergence-free velocity fields from grid-based data

    NASA Astrophysics Data System (ADS)

    Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.

    2016-10-01

    We present a method, based on B-splines, to calculate a C2 continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10] this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method results in qualitatively and quantitatively superior trajectories that results in more accurate identification of Lagrangian coherent structures.
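
    The identity the method rests on, that v = curl(A) is divergence-free for any smooth vector potential A, can be checked with a quick finite-difference sketch (an illustration only; the paper's B-spline construction and grid matching are not reproduced here):

```python
import numpy as np

# Sample a smooth vector potential A on a regular grid and take its discrete curl;
# the resulting velocity field v = curl(A) is analytically divergence-free.
n = 48
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
Ax = np.sin(np.pi * Y) * Z
Ay = np.cos(np.pi * Z) * X
Az = np.sin(np.pi * X * Y)

def d(f, axis):
    """Finite-difference derivative (central differences in the interior)."""
    return np.gradient(f, h, axis=axis)

u = d(Az, 1) - d(Ay, 2)
v = d(Ax, 2) - d(Az, 0)
w = d(Ay, 0) - d(Ax, 1)

div = d(u, 0) + d(v, 1) + d(w, 2)
# Away from the one-sided boundary stencils the discrete curl is divergence-free
# to round-off; near the boundary a small truncation error remains.
print(np.abs(div[2:-2, 2:-2, 2:-2]).max())
```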

  4. Dynamic Test Method Based on Strong Electromagnetic Pulse for Electromagnetic Shielding Materials with Field-Induced Insulator-Conductor Phase Transition

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Zhao, Min; Wang, Qingguo

    2018-01-01

    In order to measure the pulse shielding performance of materials that exhibit a field-induced insulator-conductor phase transition when used for electromagnetic shielding, a dynamic test method based on a coaxial fixture was proposed. The experimental system was built from a square pulse source, coaxial cable, coaxial fixture, attenuator, oscilloscope and insulating components. The S11 parameter of the test system was measured, indicating a working frequency range from 300 kHz to 7.36 GHz. The insulation is good enough to avoid discharge between conductors when material samples are exposed to strong electromagnetic pulse fields up to 831 kV/m. This method is suitable for obtaining the strong electromagnetic pulse shielding performance of annular materials of a given thickness that exhibit the field-induced insulator-conductor phase transition.

  5. A sparse equivalent source method for near-field acoustic holography.

    PubMed

    Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter

    2017-01-01

    This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.

  6. Template‐based field map prediction for rapid whole brain B0 shimming

    PubMed Central

    Shi, Yuhang; Vannesjo, S. Johanna; Miller, Karla L.

    2017-01-01

    Purpose In typical MRI protocols, time is spent acquiring a field map to calculate the shim settings for best image quality. We propose a fast template‐based field map prediction method that yields near‐optimal shims without measuring the field. Methods The template‐based prediction method uses prior knowledge of the B0 distribution in the human brain, based on a large database of field maps acquired from different subjects, together with subject‐specific structural information from a quick localizer scan. The shimming performance of using the template‐based prediction is evaluated in comparison to a range of potential fast shimming methods. Results Static B0 shimming based on predicted field maps performed almost as well as shimming based on individually measured field maps. In experimental evaluations at 7 T, the proposed approach yielded a residual field standard deviation in the brain of on average 59 Hz, compared with 50 Hz using measured field maps and 176 Hz using no subject‐specific shim. Conclusions This work demonstrates that shimming based on predicted field maps is feasible. The field map prediction accuracy could potentially be further improved by generating the template from a subset of subjects, based on parameters such as head rotation and body mass index. Magn Reson Med 80:171–180, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:29193340

  7. Index cost estimate based BIM method - Computational example for sports fields

    NASA Astrophysics Data System (ADS)

    Zima, Krzysztof

    2017-07-01

    The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, the geometry of the construction object and unit costs of sports facilities is shown. Calculations with the Index Cost Estimate Based BIM method using Case Based Reasoning are presented, too. The article presents local and global similarity measurement and an example of a BIM-based quantity takeoff process. The outcome of the CBR-based cost calculations is presented as the final result.

  8. An efficient method for the fusion of light field refocused images

    NASA Astrophysics Data System (ADS)

    Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei

    2018-04-01

    Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Most multi-focus image fusion algorithms are not designed for large numbers of source images, and the traditional DWT-based fusion approach has serious problems in dealing with many multi-focus images, causing color distortion and ringing artifacts. To solve this problem, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach in various settings, and the results demonstrate that the proposed method performs much better both visually and quantitatively.
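
    A minimal two-image fusion sketch along these lines, assuming PyWavelets' swt2/iswt2 and a simple max-absolute-detail / averaged-approximation rule (the paper's exact fusion rule and multi-image handling are not reproduced):

```python
import numpy as np
import pywt

def fuse_pair_swt(img_a, img_b, wavelet="db2", level=2):
    """Fuse two grayscale multi-focus images with the stationary wavelet transform:
    average the approximation bands, keep the detail coefficient of larger magnitude.
    Image sides must be divisible by 2**level (SWT requirement)."""
    ca = pywt.swt2(img_a, wavelet, level=level)
    cb = pywt.swt2(img_b, wavelet, level=level)
    fused = []
    for (aA, (aH, aV, aD)), (bA, (bH, bV, bD)) in zip(ca, cb):
        details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                        for x, y in ((aH, bH), (aV, bV), (aD, bD)))
        fused.append(((aA + bA) / 2.0, details))
    return pywt.iswt2(fused, wavelet)
```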

  9. Human exposure assessment in the near field of GSM base-station antennas using a hybrid finite element/method of moments technique.

    PubMed

    Meyer, Frans J C; Davidson, David B; Jakobus, Ulrich; Stuchly, Maria A

    2003-02-01

    A hybrid finite-element method (FEM)/method of moments (MoM) technique is employed for specific absorption rate (SAR) calculations in a human phantom in the near field of a typical group special mobile (GSM) base-station antenna. The MoM is used to model the metallic surfaces and wires of the base-station antenna, and the FEM is used to model the heterogeneous human phantom. The advantages of each of these frequency domain techniques are, thus, exploited, leading to a highly efficient and robust numerical method for addressing this type of bioelectromagnetic problem. The basic mathematical formulation of the hybrid technique is presented. This is followed by a discussion of important implementation details-in particular, the linear algebra routines for sparse, complex FEM matrices combined with dense MoM matrices. The implementation is validated by comparing results to MoM (surface equivalence principle implementation) and finite-difference time-domain (FDTD) solutions of human exposure problems. A comparison of the computational efficiency of the different techniques is presented. The FEM/MoM implementation is then used for whole-body and critical-organ SAR calculations in a phantom at different positions in the near field of a base-station antenna. This problem cannot, in general, be solved using the MoM or FDTD due to computational limitations. This paper shows that the specific hybrid FEM/MoM implementation is an efficient numerical tool for accurate assessment of human exposure in the near field of base-station antennas.

  10. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    NASA Astrophysics Data System (ADS)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin2 field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories and with secured redshift. The algorithm retrieved 91% of the galaxies with only 9% false detection. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method is when faint sources are located in the vicinity of bright spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).

  11. Apparatuses and methods for generating electric fields

    DOEpatents

    Scott, Jill R; McJunkin, Timothy R; Tremblay, Paul L

    2013-08-06

    Apparatuses and methods relating to generating an electric field are disclosed. An electric field generator may include a semiconductive material configured in a physical shape substantially different from a shape of an electric field to be generated thereby. The electric field is generated when a voltage drop exists across the semiconductive material. A method for generating an electric field may include applying a voltage to a shaped semiconductive material to generate a complex, substantially nonlinear electric field. The shape of the complex, substantially nonlinear electric field may be configured for directing charged particles to a desired location. Other apparatuses and methods are disclosed.

  12. New Multigrid Method Including Elimination Algolithm Based on High-Order Vector Finite Elements in Three Dimensional Magnetostatic Field Analysis

    NASA Astrophysics Data System (ADS)

    Hano, Mitsuo; Hotta, Masashi

    A new multigrid method based on high-order vector finite elements is proposed in this paper. Low-level discretizations in this method are obtained by using low-order vector finite elements on the same mesh. The Gauss-Seidel method is used as a smoother, and the linear equation at the lowest level is solved by the ICCG method. However, it is often found that multigrid solutions do not converge to the ICCG solutions. An elimination algorithm for the constant term using a null space of the coefficient matrix is also described. In three-dimensional magnetostatic field analysis, the convergence time and number of iterations of this multigrid method are compared with those of the conventional ICCG method.

  13. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
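
    For orientation, a plain orthogonal matching pursuit step, a simplified relative of the StOMP scheme extended in the paper, can be sketched as follows (the thresholding, prior-information and non-negativity extensions are not shown):

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Plain orthogonal matching pursuit: greedily select the column of A most
    correlated with the residual, then re-fit the active set by least squares.
    A simplified stand-in for StOMP, which selects many columns per stage by
    thresholding the correlations."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```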

  14. Field-gradient partitioning for fracture and frictional contact in the material point method [Fracture and frictional contact in the material point method using damage-field gradients for velocity-field partitioning]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Homel, Michael A.; Herbold, Eric B.

    Contact and fracture in the material point method require grid-scale enrichment or partitioning of material into distinct velocity fields to allow for displacement or velocity discontinuities at a material interface. We present a new method in which a kernel-based damage field is constructed from the particle data. The gradient of this field is used to dynamically repartition the material into contact pairs at each node. Our approach avoids the need to construct and evolve explicit cracks or contact surfaces and is therefore well suited to problems involving complex 3-D fracture with crack branching and coalescence. A straightforward extension of this approach permits frictional ‘self-contact’ between surfaces that are initially part of a single velocity field, enabling more accurate simulation of granular flow, porous compaction, fragmentation, and comminution of brittle materials. Finally, numerical simulations of self-contact and dynamic crack propagation are presented to demonstrate the accuracy of the approach.

  15. Field-gradient partitioning for fracture and frictional contact in the material point method [Fracture and frictional contact in the material point method using damage-field gradients for velocity-field partitioning]

    DOE PAGES

    Homel, Michael A.; Herbold, Eric B.

    2016-08-15

    Contact and fracture in the material point method require grid-scale enrichment or partitioning of material into distinct velocity fields to allow for displacement or velocity discontinuities at a material interface. We present a new method in which a kernel-based damage field is constructed from the particle data. The gradient of this field is used to dynamically repartition the material into contact pairs at each node. Our approach avoids the need to construct and evolve explicit cracks or contact surfaces and is therefore well suited to problems involving complex 3-D fracture with crack branching and coalescence. A straightforward extension of this approach permits frictional ‘self-contact’ between surfaces that are initially part of a single velocity field, enabling more accurate simulation of granular flow, porous compaction, fragmentation, and comminution of brittle materials. Finally, numerical simulations of self-contact and dynamic crack propagation are presented to demonstrate the accuracy of the approach.

  16. Multiple field-based methods to assess the potential impacts of seismic surveys on scallops.

    PubMed

    Przeslawski, Rachel; Huang, Zhi; Anderson, Jade; Carroll, Andrew G; Edmunds, Matthew; Hurt, Lynton; Williams, Stefan

    2018-04-01

    Marine seismic surveys are an important tool to map geology beneath the seafloor and manage petroleum resources, but they are also a source of underwater noise pollution. A mass mortality of scallops in the Bass Strait, Australia occurred a few months after a marine seismic survey in 2010, and fishing groups were concerned about the potential relationship between the two events. The current study used three field-based methods to investigate the potential impact of marine seismic surveys on scallops in the region: 1) dredging and 2) deployment of Autonomous Underwater Vehicles (AUVs) were undertaken to examine the potential response of two species of scallops (Pecten fumatus, Mimachlamys asperrima) before, two months after, and ten months after a 2015 marine seismic survey; and 3) MODIS satellite data revealed patterns of sea surface temperatures from 2006-2016. Results from the dredging and AUV components show no evidence of scallop mortality attributable to the seismic survey, although sub-lethal effects cannot be excluded. The remote sensing revealed a pronounced thermal spike in the eastern Bass Strait between February and May 2010, overlapping the scallop beds that suffered extensive mortality and coinciding almost exactly with dates of operation for the 2010 seismic survey. The acquisition of in situ data coupled with consideration of commercial seismic arrays meant that results were ecologically realistic, while the paired field-based components (dredging, AUV imagery) provided a failsafe against challenges associated with working wholly in the field. This study expands our knowledge of the potential environmental impacts of marine seismic survey and will inform future applications for marine seismic surveys, as well as the assessment of such applications by regulatory authorities. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  17. Functional Assessment-Based Interventions for Students with or At-Risk for High-Incidence Disabilities: Field Testing Single-Case Synthesis Methods

    ERIC Educational Resources Information Center

    Common, Eric Alan; Lane, Kathleen Lynne; Pustejovsky, James E.; Johnson, Austin H.; Johl, Liane Elizabeth

    2017-01-01

    This systematic review investigated one systematic approach to designing, implementing, and evaluating functional assessment-based interventions (FABI) for use in supporting school-age students with or at-risk for high-incidence disabilities. We field tested several recently developed methods for single-case design syntheses. First, we appraised…

  18. Alternative methods of flexible base compaction acceptance.

    DOT National Transportation Integrated Search

    2012-05-01

    In the Texas Department of Transportation, flexible base construction is governed by a series of stockpile : and field tests. A series of concerns with these existing methods, along with some premature failures in the : field, led to this project inv...

  19. Reprocessing the GRACE-derived gravity field time series based on data-driven method for ocean tide alias error mitigation

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Sneeuw, Nico; Jiang, Weiping

    2017-04-01

    The GRACE mission has contributed greatly to temporal gravity field monitoring in the past few years. However, ocean tides cause notable alias errors for single-pair spaceborne gravimetry missions like GRACE in two ways. First, undersampling from the satellite orbit aliases high-frequency tidal signals into the gravity signal. Second, the ocean tide models used for de-aliasing in the gravity field retrieval carry errors, which alias directly into the recovered gravity field. The GRACE satellites are in a non-repeat orbit, which precludes alias error spectral estimation based on a repeat period. Moreover, the gravity field recovery is conducted at not strictly monthly intervals and has occasional gaps, resulting in an unevenly sampled time series. In view of these two aspects, we investigate a data-driven method to mitigate the ocean tide alias error in a post-processing mode.

  20. Generalized theoretical method for the interaction between arbitrary nonuniform electric field and molecular vibrations: Toward near-field infrared spectroscopy and microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iwasa, Takeshi, E-mail: tiwasa@mail.sci.hokudai.ac.jp; Takenaka, Masato; Taketsugu, Tetsuya

    A theoretical method to compute infrared absorption spectra when a molecule is interacting with an arbitrary nonuniform electric field such as near-fields is developed and numerically applied to simple model systems. The method is based on the multipolar Hamiltonian, where the light-matter interaction is described by a spatial integral of the inner product of the molecular polarization and the applied electric field. The computation scheme is developed under the harmonic approximation for the molecular vibrations and within the framework of modern electronic structure calculations such as density functional theory. Infrared reflection absorption and near-field infrared absorption are considered as model systems. The obtained IR spectra successfully reflect the spatial structure of the applied electric field and the corresponding vibrational modes, demonstrating the applicability of the present method to the analysis of modern nanovibrational spectroscopy using near-fields. The present method can use arbitrary electric fields and thus can integrate the two fields of computational chemistry and electromagnetics.

  1. Generalized theoretical method for the interaction between arbitrary nonuniform electric field and molecular vibrations: Toward near-field infrared spectroscopy and microscopy.

    PubMed

    Iwasa, Takeshi; Takenaka, Masato; Taketsugu, Tetsuya

    2016-03-28

    A theoretical method to compute infrared absorption spectra when a molecule is interacting with an arbitrary nonuniform electric field such as near-fields is developed and numerically applied to simple model systems. The method is based on the multipolar Hamiltonian, where the light-matter interaction is described by a spatial integral of the inner product of the molecular polarization and the applied electric field. The computation scheme is developed under the harmonic approximation for the molecular vibrations and within the framework of modern electronic structure calculations such as density functional theory. Infrared reflection absorption and near-field infrared absorption are considered as model systems. The obtained IR spectra successfully reflect the spatial structure of the applied electric field and the corresponding vibrational modes, demonstrating the applicability of the present method to the analysis of modern nanovibrational spectroscopy using near-fields. The present method can use arbitrary electric fields and thus can integrate the two fields of computational chemistry and electromagnetics.

  2. Study of the method of water-injected meat identifying based on low-field nuclear magnetic resonance

    NASA Astrophysics Data System (ADS)

    Xu, Jianmei; Lin, Qing; Yang, Fang; Zheng, Zheng; Ai, Zhujun

    2018-01-01

    The aim of this study was to apply the low-field nuclear magnetic resonance technique to study the regular variation of the transverse relaxation spectrum parameters of water-injected meat with the proportion of injected water. On this basis, one-way ANOVA and discriminant analysis were used to analyse the differences between these parameters in their capacity to distinguish the water-injection proportion, and a model for identifying water-injected meat was established. The results show that, except for T21b, T22e and T23b, the other parameters of the T2 relaxation spectrum changed regularly with the water-injection proportion. The ability of the different parameters to distinguish the water-injection proportion differed. With S, P22 and T23m as predictor variables, Fisher and Bayes models were established by discriminant analysis, and qualitative and quantitative classification of water-injected meat can be realized. The correct discrimination rates of both validation and cross-validation were 88%, and the model was stable.
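
    A minimal sketch of the discriminant analysis step, assuming scikit-learn and made-up feature vectors of the three relaxation parameters named above (the numbers are illustrative, not taken from the study):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Hypothetical feature vectors [S, P22, T23m] for three injection levels (0%, 10%, 20%).
levels = np.repeat([0, 10, 20], 30)
X = np.column_stack([
    1000 + 15 * levels + rng.normal(0, 40, levels.size),   # S
    12 + 0.3 * levels + rng.normal(0, 1.0, levels.size),    # P22
    350 + 8 * levels + rng.normal(0, 25, levels.size),      # T23m
])

lda = LinearDiscriminantAnalysis().fit(X, levels)
print("training accuracy:", lda.score(X, levels))
print("predicted injection level:", lda.predict([[1150, 15.5, 430]])[0])
```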

  3. Field by field hybrid upwind splitting methods

    NASA Technical Reports Server (NTRS)

    Coquel, Frederic; Liou, Meng-Sing

    1993-01-01

    A new and general approach to upwind splitting is presented. The design principle combines the robustness of flux vector splitting schemes in the capture of nonlinear waves and the accuracy of some flux difference splitting schemes in the resolution of linear waves. The new schemes are derived following a general hybridization technique performed directly at the basic level of the field by field decomposition involved in FDS methods. The scheme does not use a spatial switch to be tuned up according to the local smoothness of the approximate solution.

  4. A novel background field removal method for MRI using projection onto dipole fields (PDF).

    PubMed

    Liu, Tian; Khalidov, Ildar; de Rochefort, Ludovic; Spincemaille, Pascal; Liu, Jing; Tsiouris, A John; Wang, Yi

    2011-11-01

    For optimal image quality in susceptibility-weighted imaging and accurate quantification of susceptibility, it is necessary to isolate the local field generated by local magnetic sources (such as iron) from the background field that arises from imperfect shimming and variations in magnetic susceptibility of surrounding tissues (including air). Previous background removal techniques have limited effectiveness depending on the accuracy of model assumptions or information input. In this article, we report an observation that the magnetic field for a dipole outside a given region of interest (ROI) is approximately orthogonal to the magnetic field of a dipole inside the ROI. Accordingly, we propose a nonparametric background field removal technique based on projection onto dipole fields (PDF). In this PDF technique, the background field inside an ROI is decomposed into a field originating from dipoles outside the ROI using the projection theorem in Hilbert space. This novel PDF background removal technique was validated on a numerical simulation and a phantom experiment and was applied in human brain imaging, demonstrating substantial improvement in background field removal compared with the commonly used high-pass filtering method. Copyright © 2011 John Wiley & Sons, Ltd.

  5. The method of generating functions in exact scalar field inflationary cosmology

    NASA Astrophysics Data System (ADS)

    Chervon, Sergey V.; Fomin, Igor V.; Beesham, Aroonkumar

    2018-04-01

    The construction of exact solutions in scalar field inflationary cosmology is of growing interest. In this work, we review the results which have been obtained with the help of one of the most effective methods, viz., the method of generating functions for the construction of exact solutions in scalar field cosmology. We also include in the debate the superpotential method, which may be considered as the bridge to the slow roll approximation equations. Based on the review, we suggest a classification for the generating functions, and find a connection for all of them with the superpotential.

  6. Surface Profile and Stress Field Evaluation using Digital Gradient Sensing Method

    DOE PAGES

    Miao, C.; Sundaram, B. M.; Huang, L.; ...

    2016-08-09

    Shape and surface topography evaluation from measured orthogonal slope/gradient data is of considerable engineering significance since many full-field optical sensors and interferometers readily output accurate data of that kind. This has applications ranging from metrology of optical and electronic elements (lenses, silicon wafers, thin film coatings), surface profile estimation, wave front and shape reconstruction, to name a few. In this context, a new methodology for surface profile and stress field determination based on a recently introduced non-contact, full-field optical method called digital gradient sensing (DGS), capable of measuring small angular deflections of light rays, coupled with a robust finite-difference-based least-squares integration (HFLI) scheme in the Southwell configuration, is advanced here. The method is demonstrated by evaluating (a) surface profiles of mechanically warped silicon wafers and (b) stress gradients near growing cracks in planar phase objects.

  7. Extending the Fellegi-Sunter probabilistic record linkage method for approximate field comparators.

    PubMed

    DuVall, Scott L; Kerber, Richard A; Thomas, Alun

    2010-02-01

    Probabilistic record linkage is a method commonly used to determine whether demographic records refer to the same person. The Fellegi-Sunter method is a probabilistic approach that uses field weights based on log likelihood ratios to determine record similarity. This paper introduces an extension of the Fellegi-Sunter method that incorporates approximate field comparators in the calculation of field weights. The data warehouse of a large academic medical center was used as a case study. The approximate comparator extension was compared with the Fellegi-Sunter method in its ability to find duplicate records previously identified in the data warehouse using different demographic fields and matching cutoffs. The approximate comparator extension misclassified 25% fewer pairs and had a larger Welch's T statistic than the Fellegi-Sunter method for all field sets and matching cutoffs. The accuracy gain provided by the approximate comparator extension grew as less information was provided and as the matching cutoff increased. Given the ubiquity of linkage in both clinical and research settings, the incremental improvement of the extension has the potential to make a considerable impact.
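
    A compact sketch of a Fellegi-Sunter field weight with an approximate comparator folded in, assuming a generic string similarity score in [0, 1] such as Jaro-Winkler (the linear interpolation below is a common illustrative choice, not necessarily the paper's exact extension):

```python
import math

def agreement_weights(m, u):
    """Classic Fellegi-Sunter log-likelihood-ratio weights for one field:
    full-agreement weight and full-disagreement weight."""
    return math.log2(m / u), math.log2((1 - m) / (1 - u))

def approximate_field_weight(similarity, m, u):
    """Interpolate between disagreement and agreement weights using an
    approximate comparator score in [0, 1] (e.g. Jaro-Winkler). Illustrative
    weighting rule, not necessarily the paper's exact extension."""
    w_agree, w_disagree = agreement_weights(m, u)
    return w_disagree + similarity * (w_agree - w_disagree)

# Example: last-name field with m = 0.95, u = 0.01 and a 0.9 similarity score.
print(approximate_field_weight(0.9, m=0.95, u=0.01))
```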

  8. Dual stage potential field method for robotic path planning

    NASA Astrophysics Data System (ADS)

    Singh, Pradyumna Kumar; Parida, Pramod Kumar

    2018-04-01

    Path planning is at the root of all autonomous mobile robot systems, and various methods are used to optimize the path to be followed by the robot. Artificial potential field based path planning is one of the most widely used approaches. Various algorithms have been proposed using the potential field approach, but most encounter common problems while heading towards the goal or target, i.e. the local minima problem, zero-potential regions, complex-shaped obstacles and the target-near-obstacle problem. In this paper we provide a new algorithm in which two types of potential functions are used one after another: the former is used to obtain the probable points and the latter to obtain the optimum path. In this algorithm we consider only static obstacles and a static goal.
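
    The basic attractive/repulsive potential step that such planners build on can be sketched as follows (the textbook single-stage artificial potential field formulation, not the dual-stage algorithm proposed here; gains and geometry are illustrative):

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=50.0, d0=1.5, step=0.05):
    """One gradient-descent step on the classic artificial potential:
    U = 0.5*k_att*|q-goal|^2 + sum of repulsive terms active within range d0."""
    force = -k_att * (q - goal)                      # attractive force
    for obs in obstacles:
        diff = q - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                            # repulsion only near obstacles
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return q + step * force / max(np.linalg.norm(force), 1e-9)

q = np.array([0.0, 0.0])
goal = np.array([10.0, 10.0])
obstacles = [np.array([5.0, 5.2])]
for _ in range(400):
    q = apf_step(q, goal, obstacles)
print(q)  # ends near the goal unless trapped in a local minimum
```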

  9. Detection of concrete dam leakage using an integrated geophysical technique based on flow-field fitting method

    NASA Astrophysics Data System (ADS)

    Dai, Qianwei; Lin, Fangpeng; Wang, Xiaoping; Feng, Deshan; Bayless, Richard C.

    2017-05-01

    An integrated geophysical investigation was performed at S dam located at Dadu basin in China to assess the condition of the dam curtain. The key methodology of the integrated technique used was flow-field fitting method, which allowed identification of the hydraulic connections between the dam foundation and surface water sources (upstream and downstream), and location of the anomalous leakage outlets in the dam foundation. Limitations of the flow-field fitting method were complemented with resistivity logging to identify the internal erosion which had not yet developed into seepage pathways. The results of the flow-field fitting method and resistivity logging were consistent when compared with data provided by seismic tomography, borehole television, water injection test, and rock quality designation.

  10. Bi-color near infrared thermoreflectometry: a method for true temperature field measurement.

    PubMed

    Sentenac, Thierry; Gilblas, Rémi; Hernandez, Daniel; Le Maoult, Yannick

    2012-12-01

    In the context of radiative temperature field measurement, this paper deals with an innovative method, called bicolor near infrared thermoreflectometry, for the measurement of true temperature fields without prior knowledge of the emissivity field of an opaque material. This method is achieved by a simultaneous measurement, in the near infrared spectral band, of the radiance temperature fields and of the emissivity fields measured indirectly by reflectometry. The theoretical framework of the method is introduced and the principle of the measurements at two wavelengths is detailed. The crucial features of the indirect measurement of emissivity are the measurement of bidirectional reflectivities in a single direction and the introduction of an unknown variable, called the "diffusion factor." Radiance temperature and bidirectional reflectivities are then merged into a bichromatic system based on Kirchhoff's laws. The assumption of the system, based on the invariance of the diffusion factor for two nearby wavelengths, and the value of the chosen wavelengths, are then discussed in relation to a database of several material properties. A thermoreflectometer prototype was developed, dimensioned, and evaluated. Experiments were carried out to outline its trueness in challenging cases. First, experiments were performed on a metallic sample with a high emissivity value. The bidirectional reflectivity was then measured from low signals. The results on erbium oxide demonstrate the power of the method with materials with high emissivity variations in the near infrared spectral band.

  11. A Novel Method of Localization for Moving Objects with an Alternating Magnetic Field

    PubMed Central

    Gao, Xiang; Yan, Shenggang; Li, Bin

    2017-01-01

    Magnetic detection technology has wide applications in the fields of geological exploration, biomedical treatment, wreck removal and localization of unexploded ordnance. A large number of methods have been developed to locate targets with static magnetic fields; however, the relation between the problem of localization of moving objects with alternating magnetic fields and the localization with a static magnetic field is rarely studied. A novel method of target localization based on coherent demodulation is proposed in this paper. The problem of localization of moving objects with an alternating magnetic field was transformed into the localization with a static magnetic field. The Levenberg-Marquardt (L-M) algorithm was applied to calculate the position of the target with magnetic field data measured by a single three-component magnetic sensor. Theoretical simulation and experimental results demonstrate the effectiveness of the proposed method. PMID:28430153
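
    As a rough illustration of the final fitting step, the sketch below fits a static point-dipole position and moment to three-component field samples with SciPy's Levenberg-Marquardt solver; the coherent demodulation that converts the alternating-field problem into this static-equivalent one is not reproduced, and the geometry, moment, noise level and initial guess are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

MU0 = 4e-7 * np.pi

def dipole_field(r_sensor, r_dip, m_dip):
    """Magnetic flux density of a point dipole at sensor positions r_sensor (N, 3)."""
    r = r_sensor - r_dip
    d = np.linalg.norm(r, axis=1, keepdims=True)
    return MU0 / (4 * np.pi) * (3 * r * (r @ m_dip)[:, None] / d**5 - m_dip / d**3)

def residuals(p, r_sensor, b_meas):
    # p = [x, y, z, mx, my, mz]
    return (dipole_field(r_sensor, p[:3], p[3:]) - b_meas).ravel()

# Hypothetical synthetic data: a tri-axial sensor sampling the field at several
# known points relative to the (static-equivalent) target.
rng = np.random.default_rng(0)
r_sensor = rng.uniform(-5, 5, size=(30, 3))
true_p = np.array([1.0, -2.0, 3.0, 50.0, 20.0, -10.0])      # position + moment
b_meas = dipole_field(r_sensor, true_p[:3], true_p[3:])
b_meas += 1e-12 * rng.standard_normal(b_meas.shape)          # sensor noise

x0 = np.array([0.5, -1.0, 2.0, 20.0, 10.0, 0.0])             # rough initial guess
fit = least_squares(residuals, x0, args=(r_sensor, b_meas), method='lm')
print(fit.x[:3])   # estimated target position
```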

  12. Using Wavelet Bases to Separate Scales in Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Michlin, Tracie L.

    This thesis investigates the use of Daubechies wavelets to separate scales in local quantum field theory. Field theories have an infinite number of degrees of freedom on all distance scales. Quantum field theories are believed to describe the physics of subatomic particles. These theories have no known mathematically convergent approximation methods. Daubechies wavelet bases can be used to separate degrees of freedom on different distance scales. Volume and resolution truncations lead to mathematically well-defined truncated theories that can be treated using established methods. This work demonstrates that flow equation methods can be used to block diagonalize truncated field theoretic Hamiltonians by scale. This eliminates the fine scale degrees of freedom. This may lead to approximation methods and provide an understanding of how to formulate well-defined fine resolution limits.

  13. 3D photo mosaicing of Tagiri shallow vent field by an autonomous underwater vehicle (3rd report) - Mosaicing method based on navigation data and visual features -

    NASA Astrophysics Data System (ADS)

    Maki, Toshihiro; Ura, Tamaki; Singh, Hanumant; Sakamaki, Takashi

    Large-area seafloor imaging will bring significant benefits to various fields such as academic research, resource surveys, marine development, security, and search-and-rescue. The authors have proposed a navigation method of an autonomous underwater vehicle for seafloor imaging, and verified its performance through mapping tubeworm colonies over an area of 3,000 square meters using the AUV Tri-Dog 1 at Tagiri vent field, Kagoshima bay in Japan (Maki et al., 2008, 2009). This paper proposes a post-processing method to build a natural photo mosaic from a number of pictures taken by an underwater platform. The method first removes lens distortion and non-uniformities of color and lighting from each image, and then ortho-rectification is performed based on the camera pose and seafloor geometry estimated from navigation data. The image alignment is based on both navigation data and visual characteristics, implemented as an extension of the image-based method (Pizarro et al., 2003). Using the two types of information yields an image alignment that is consistent both globally and locally, and makes the method applicable to data sets with few visual cues. The method was evaluated using a data set obtained by the AUV Tri-Dog 1 at the vent field in Sep. 2009. A seamless, uniformly illuminated photo mosaic covering an area of around 500 square meters was created from 391 pictures, which covers unique features of the field such as bacteria mats and tubeworm colonies.

  14. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)

    2011-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
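
    A minimal sketch of the encode/decode idea, using PyWavelets on a synthetic raster height field: a multi-level 2-D discrete wavelet transform stores the terrain, and zeroing the finer detail bands emulates reconstruction at a lower level of detail. The terrain data, wavelet choice and level count are illustrative assumptions; the patented terrain-block and triangle-strip rendering pipeline is not shown.

```python
import numpy as np
import pywt

# Hypothetical raster height field (e.g., a 256x256 terrain tile).
rng = np.random.default_rng(1)
height = rng.standard_normal((256, 256)).cumsum(0).cumsum(1)

# Encode: multi-level 2-D discrete wavelet transform.
coeffs = pywt.wavedec2(height, wavelet='db2', level=4)

def reconstruct(coeffs, keep_levels):
    """Rebuild the height field using only the coarsest `keep_levels` detail
    bands; finer detail bands are zeroed to emulate a lower level of detail."""
    pruned = [coeffs[0]]
    for i, details in enumerate(coeffs[1:], start=1):
        if i <= keep_levels:
            pruned.append(details)
        else:
            pruned.append(tuple(np.zeros_like(d) for d in details))
    return pywt.waverec2(pruned, wavelet='db2')

coarse = reconstruct(coeffs, keep_levels=1)   # low level of detail
fine = reconstruct(coeffs, keep_levels=4)     # full detail
print(np.abs(fine - height).max())            # ~0 up to numerical error
```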

  15. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor)

    2010-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.

  16. Multi-phase-field method for surface tension induced elasticity

    NASA Astrophysics Data System (ADS)

    Schiedung, Raphael; Steinbach, Ingo; Varnik, Fathollah

    2018-01-01

    A method, based on the multi-phase-field framework, is proposed that adequately accounts for the effects of a coupling between surface free energy and elastic deformation in solids. The method is validated via a number of analytically solvable problems. In addition to stress states at mechanical equilibrium in complex geometries, the underlying multi-phase-field framework naturally allows us to account for the influence of surface energy induced stresses on phase transformation kinetics. This issue, which is of fundamental importance on the nanoscale, is demonstrated in the limit of fast diffusion for a solid sphere, which melts due to the well-known Gibbs-Thomson effect. This melting process is slowed down when coupled to surface energy induced elastic deformation.

  17. Improving chemical shift encoding‐based water–fat separation based on a detailed consideration of magnetic field contributions

    PubMed Central

    Ruschke, Stefan; Eggers, Holger; Meineke, Jakob; Rummeny, Ernst J.; Karampinos, Dimitrios C.

    2018-01-01

    Purpose To improve the robustness of existing chemical shift encoding‐based water–fat separation methods by incorporating a priori information of the magnetic field distortions in complex‐based water–fat separation. Methods Four major field contributions are considered: inhomogeneities of the scanner magnet, the shim field, an object‐based field map estimate, and a residual field. The former two are completely determined by spherical harmonic expansion coefficients directly available from the magnetic resonance (MR) scanner. The object‐based field map is forward simulated from air–tissue interfaces inside the field of view (FOV). The missing residual field originates from the object outside the FOV and is investigated by magnetic field simulations on a numerical whole body phantom. In vivo the spatially linear first‐order component of the residual field is estimated by measuring echo misalignments after demodulation of other field contributions resulting in a linear residual field. Gradient echo datasets of the cervical and the ankle region without and with shimming were acquired, where all four contributions were incorporated in the water–fat separation with two algorithms from the ISMRM water–fat toolbox and compared to water–fat separation with less incorporated field contributions. Results Incorporating all four field contributions as demodulation steps resulted in reduced temporal and spatial phase wraps leading to almost swap‐free water–fat separation results in all datasets. Conclusion Demodulating estimates of major field contributions reduces the phase evolution to be driven by only small differences in local tissue susceptibility, which supports the field smoothness assumption of existing water–fat separation techniques. PMID:29424458

  18. Perspective: Ab initio force field methods derived from quantum mechanics

    NASA Astrophysics Data System (ADS)

    Xu, Peng; Guidez, Emilie B.; Bertoni, Colleen; Gordon, Mark S.

    2018-03-01

    It is often desirable to accurately and efficiently model the behavior of large molecular systems in the condensed phase (thousands to tens of thousands of atoms) over long time scales (from nanoseconds to milliseconds). In these cases, ab initio methods are difficult due to the increasing computational cost with the number of electrons. A more computationally attractive alternative is to perform the simulations at the atomic level using a parameterized function to model the electronic energy. Many empirical force fields have been developed for this purpose. However, the functions that are used to model interatomic and intermolecular interactions contain many fitted parameters obtained from selected model systems, and such classical force fields cannot properly simulate important electronic effects. Furthermore, while such force fields are computationally affordable, they are not reliable when applied to systems that differ significantly from those used in their parameterization. They also cannot provide the information necessary to analyze the interactions that occur in the system, making the systematic improvement of the functional forms that are used difficult. Ab initio force field methods aim to combine the merits of both types of methods. The ideal ab initio force fields are built on first principles and require no fitted parameters. Ab initio force field methods surveyed in this perspective are based on fragmentation approaches and intermolecular perturbation theory. This perspective summarizes their theoretical foundation, key components in their formulation, and discusses key aspects of these methods such as accuracy and formal computational cost. The ab initio force fields considered here were developed for different targets, and this perspective also aims to provide a balanced presentation of their strengths and shortcomings. Finally, this perspective suggests some future directions for this actively developing area.

  19. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches.
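
    For orientation, the following is a generic stagewise orthogonal matching pursuit sketch for a linear model y = A x with sparse x, written against a synthetic compressed-sensing example. The paper's adaptations (prior information on the emission field, non-negativity, and non-rectangular geometries) are not included; the threshold parameter and problem sizes are illustrative assumptions.

```python
import numpy as np

def stomp(A, y, n_stages=10, t=2.5):
    """Stagewise Orthogonal Matching Pursuit (generic form).

    At each stage, matched-filter correlations of the residual are thresholded
    at t times the formal noise level; all columns above threshold join the
    active set, and coefficients are re-fit by least squares on that set.
    """
    n = A.shape[1]
    active = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    residual = y.copy()
    for _ in range(n_stages):
        corr = A.T @ residual
        sigma = np.linalg.norm(residual) / np.sqrt(len(y))
        new = np.abs(corr) > t * sigma
        if not np.any(new & ~active):
            break
        active |= new
        coef, *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
        x = np.zeros(n)
        x[active] = coef
        residual = y - A @ x
    return x

# Hypothetical test: recover a 20-sparse vector of length 512 from 128 measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((128, 512)) / np.sqrt(128)
x_true = np.zeros(512)
x_true[rng.choice(512, 20, replace=False)] = 5 * rng.standard_normal(20)
y = A @ x_true + 0.01 * rng.standard_normal(128)
x_hat = stomp(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```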

  20. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, J.; Lee, J.; Yadav, V.

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches.

  1. Laboratory and field based evaluation of chromatography ...

    EPA Pesticide Factsheets

    The Monitor for AeRosols and GAses in ambient air (MARGA) is an on-line ion-chromatography-based instrument designed for speciation of the inorganic gas and aerosol ammonium-nitrate-sulfate system. Previous work to characterize the performance of the MARGA has been primarily based on field comparison to other measurement methods to evaluate accuracy. While such studies are useful, the underlying reasons for disagreement among methods are not always clear. This study examines aspects of MARGA accuracy and precision specifically related to automated chromatography analysis. Using laboratory standards, analytical accuracy, precision, and method detection limits derived from the MARGA chromatography software are compared to an alternative software package (Chromeleon, Thermo Scientific Dionex). Field measurements are used to further evaluate instrument performance, including the MARGA’s use of an internal LiBr standard to control accuracy. Using gas/aerosol ratios and aerosol neutralization state as a case study, the impact of chromatography on measurement error is assessed. The new generation of on-line chromatography-based gas and particle measurement systems have many advantages, including simultaneous analysis of multiple pollutants. The Monitor for Aerosols and Gases in Ambient Air (MARGA) is such an instrument that is used in North America, Europe, and Asia for atmospheric process studies as well as routine monitoring. While the instrument has been evaluat

  2. Underwater electric field detection system based on weakly electric fish

    NASA Astrophysics Data System (ADS)

    Xue, Wei; Wang, Tianyu; Wang, Qi

    2018-04-01

    Weakly electric fish sense their surroundings in complete darkness through their active electric field detection system. However, because the detection capacity of the electric field is limited, the detection distance is short and the detection accuracy is low. In this paper, a method of underwater detection based on rotating current field theory is proposed to improve the performance of underwater electric field detection systems. First, building on the results of previous researchers, we constructed a mathematical model of the underwater detection system based on rotating current field theory. We then built a principle prototype and carried out detection experiments on metal objects in a water environment, laying the foundation for further experiments.

  3. σ-SCF: A direct energy-targeting method to mean-field excited states

    NASA Astrophysics Data System (ADS)

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D.; Van Voorhis, Troy

    2017-12-01

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry—a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states—ground or excited—are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.
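
    In standard notation, the combination of direct energy targeting and variance-based optimization described above can be summarized by a functional of the following form (our rendering of the idea, not a transcription of the paper's equations):

```latex
% Energy-targeting objective for a trial determinant |\Phi> and a
% user-supplied target energy \omega: minimize
W_{\omega}[\Phi]
  = \langle \Phi \,|\, (\hat{H} - \omega)^{2} \,|\, \Phi \rangle
  = \langle \hat{H}^{2} \rangle_{\Phi}
    - 2\,\omega\,\langle \hat{H} \rangle_{\Phi}
    + \omega^{2}.
% W_{\omega} \ge 0 and vanishes only for exact eigenstates; its minimizer
% approximates the eigenstate whose energy lies closest to \omega, so the
% optimization is an unconstrained local minimization with no aufbau
% constraint and no variational collapse to the ground state.
```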

  4. σ-SCF: A direct energy-targeting method to mean-field excited states.

    PubMed

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D; Van Voorhis, Troy

    2017-12-07

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry-a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states-ground or excited-are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H 2 , HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.

  5. A field based detection method for Rose rosette virus using isothermal probe-based Reverse transcription-recombinase polymerase amplification assay.

    PubMed

    Babu, Binoy; Washburn, Brian K; Ertek, Tülin Sarigül; Miller, Steven H; Riddle, Charles B; Knox, Gary W; Ochoa-Corona, Francisco M; Olson, Jennifer; Katırcıoğlu, Yakup Zekai; Paret, Mathews L

    2017-09-01

    Rose rosette disease, caused by Rose rosette virus (RRV; genus Emaravirus) is a major threat to the rose industry in the U.S. The only strategy currently available for disease management is early detection and eradication of the infected plants, thereby limiting its potential spread. Current RT-PCR based diagnostic methods for RRV are time consuming and are inconsistent in detecting the virus from symptomatic plants. Real-time RT-qPCR assay is highly sensitive for detection of RRV, but it is expensive and requires well-equipped laboratories. Neither RT-PCR nor RT-qPCR can be used in field-based testing for RRV. Hence a novel probe based, isothermal reverse transcription-recombinase polymerase amplification (RT-exoRPA) assay, using a primer/probe designed based on the nucleocapsid gene of the RRV has been developed. The assay is highly specific and did not give a positive reaction to other viruses infecting roses belonging to both inclusive and exclusive genera. Dilution assays using the in vitro transcript showed that the primer/probe set is highly sensitive, with a detection limit of 1 fg/μl. In addition, a rapid technique for the extraction of viral RNA (<5 min) has been standardized from RRV infected tissue sources, using PBS-T buffer (pH 7.4), which facilitates the virus adsorption onto the PCR tubes at 4°C for 2 min, followed by denaturation to release the RNA. RT-exoRPA analysis of the infected plants using the primer/probe indicated that the virus could be detected from leaves, stems, petals, pollen, primary roots and secondary roots. In addition, the assay was efficiently used in the diagnosis of RRV from different rose varieties, collected from different states in the U.S. The entire process, including the extraction, can be completed in 25 min with less sophisticated equipment. The developed assay can be used with high efficiency in large scale field testing for rapid detection of RRV in commercial nurseries and landscapes.

  6. Full-Field Strain Measurement On Titanium Welds And Local Elasto-Plastic Identification With The Virtual Fields Method

    NASA Astrophysics Data System (ADS)

    Tattoli, F.; Pierron, F.; Rotinat, R.; Casavola, C.; Pappalettere, C.

    2011-01-01

    One of the main problems in welding is the microstructural transformation within the area affected by the thermal history. The resulting heterogeneous microstructure within the weld nugget and the heat affected zones is often associated with changes in local material properties. The present work deals with the identification of material parameters governing the elasto-plastic behaviour of the fused and heat affected zones as well as the base material for titanium hybrid welded joints (Ti6Al4V alloy). The material parameters are identified from heterogeneous strain fields with the Virtual Fields Method. This method is based on a relevant use of the principle of virtual work and it has been shown to be useful and much less time consuming than classical finite element model updating approaches applied to similar problems. The paper will present results and discuss the problem of selection of the weld zones for the identification.

  7. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  8. Refraction-based X-ray Computed Tomography for Biomedical Purpose Using Dark Field Imaging Method

    NASA Astrophysics Data System (ADS)

    Sunaguchi, Naoki; Yuasa, Tetsuya; Huo, Qingkai; Ichihara, Shu; Ando, Masami

    We have proposed a tomographic x-ray imaging system using DFI (dark field imaging) optics along with a data-processing method to extract information on refraction from the measured intensities, and a reconstruction algorithm to reconstruct a refractive-index field from the projections generated from the extracted refraction information. The DFI imaging system consists of a tandem optical system of Bragg- and Laue-case crystals, a positioning device system for a sample, and two CCD (charge coupled device) cameras. Then, we developed a software code to simulate the data-acquisition, data-processing, and reconstruction methods to investigate the feasibility of the proposed methods. Finally, in order to demonstrate its efficacy, we imaged a sample with DCIS (ductal carcinoma in situ) excised from a breast cancer patient using a system constructed at the vertical wiggler beamline BL-14C in KEK-PF. Its CT images depicted a variety of fine histological structures, such as milk ducts, duct walls, secretions, adipose and fibrous tissue. They correlate well with histological sections.

  9. A counting-weighted calibration method for a field-programmable-gate-array-based time-to-digital converter

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Ho

    2017-05-01

    In this work, we propose a counting-weighted calibration method for field-programmable-gate-array (FPGA)-based time-to-digital converter (TDC) to provide non-linearity calibration for use in positron emission tomography (PET) scanners. To deal with the non-linearity in FPGA, we developed a counting-weighted delay line (CWD) to count the delay time of the delay cells in the TDC in order to reduce the differential non-linearity (DNL) values based on code density counts. The performance of the proposed CWD-TDC with regard to linearity far exceeds that of TDC with a traditional tapped delay line (TDL) architecture, without the need for nonlinearity calibration. When implemented in a Xilinx Vertix-5 FPGA device, the proposed CWD-TDC achieved time resolution of 60 ps with integral non-linearity (INL) and DNL of [-0.54, 0.24] and [-0.66, 0.65] least-significant-bit (LSB), respectively. This is a clear indication of the suitability of the proposed FPGA-based CWD-TDC for use in PET scanners.
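
    The code-density linearity evaluation referenced above (DNL/INL expressed in LSB) is a standard procedure and can be sketched as follows; the bin count, hit count and width profile are synthetic assumptions, and the counting-weighted delay-line architecture itself is not modeled.

```python
import numpy as np

def code_density_linearity(hit_counts):
    """Estimate DNL and INL (in LSB) from a code-density test.

    hit_counts[i] is how many uniformly distributed random hits fell into
    TDC bin i; an ideal converter puts the same count in every bin.
    """
    hit_counts = np.asarray(hit_counts, dtype=float)
    widths = hit_counts / hit_counts.mean()   # bin width in ideal-LSB units
    dnl = widths - 1.0
    inl = np.cumsum(dnl)
    return dnl, inl

# Hypothetical measurement: 300k random hits over a 64-bin delay line with
# a mild systematic width variation.
rng = np.random.default_rng(3)
true_widths = 1.0 + 0.2 * np.sin(np.linspace(0, 3 * np.pi, 64))
probs = true_widths / true_widths.sum()
counts = rng.multinomial(300_000, probs)
dnl, inl = code_density_linearity(counts)
print(f"DNL in [{dnl.min():.2f}, {dnl.max():.2f}] LSB, "
      f"INL in [{inl.min():.2f}, {inl.max():.2f}] LSB")
```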

  10. A method for the estimate of the wall diffusion for non-axisymmetric fields using rotating external fields

    NASA Astrophysics Data System (ADS)

    Frassinetti, L.; Olofsson, K. E. J.; Fridström, R.; Setiadi, A. C.; Brunsell, P. R.; Volpe, F. A.; Drake, J.

    2013-08-01

    A new method for the estimate of the wall diffusion time of non-axisymmetric fields is developed. The method based on rotating external fields and on the measurement of the wall frequency response is developed and tested in EXTRAP T2R. The method allows the experimental estimate of the wall diffusion time for each Fourier harmonic and the estimate of the wall diffusion toroidal asymmetries. The method intrinsically considers the effects of three-dimensional structures and of the shell gaps. Far from the gaps, experimental results are in good agreement with the diffusion time estimated with a simple cylindrical model that assumes a homogeneous wall. The method is also applied with non-standard configurations of the coil array, in order to mimic tokamak-relevant settings with a partial wall coverage and active coils of large toroidal extent. The comparison with the full coverage results shows good agreement if the effects of the relevant sidebands are considered.

  11. Grassmann phase space methods for fermions. II. Field theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalton, B.J., E-mail: bdalton@swin.edu.au; Jeffers, J.; Barnett, S.M.

    In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation, creation operators suggests the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. This paper presents a phase space theory for fermion systems based on distribution functionals, which replace the density operator and involve Grassmann fields representing anti-commuting fermion field annihilation, creation operators. It is an extension of a previous phase space theory paper for fermions (Paper I) based on separate modes, in which the density operator is replaced by a distribution function depending on Grassmann phase space variables which represent the mode annihilation and creation operators. This further development of the theory is important for the situation when large numbers of fermions are involved, resulting in too many modes to treat separately. Here Grassmann fields, distribution functionals, functional Fokker–Planck equations and Ito stochastic field equations are involved. Typical applications to a trapped Fermi gas of interacting spin 1/2 fermionic atoms and to multi-component Fermi gases with non-zero range interactions are presented, showing that the Ito stochastic field equations are local in these cases. For the spin 1/2 case we also show how simple solutions can be obtained both for the untrapped case and for an optical lattice trapping potential.

  12. Model-based coefficient method for calculation of N leaching from agricultural fields applied to small catchments and the effects of leaching reducing measures

    NASA Astrophysics Data System (ADS)

    Kyllmar, K.; Mårtensson, K.; Johnsson, H.

    2005-03-01

    A method to calculate N leaching from arable fields using model-calculated N leaching coefficients (NLCs) was developed. Using the process-based modelling system SOILNDB, leaching of N was simulated for four leaching regions in southern Sweden with 20-year climate series and a large number of randomised crop sequences based on regional agricultural statistics. To obtain N leaching coefficients, mean values of annual N leaching were calculated for each combination of main crop, following crop and fertilisation regime for each leaching region and soil type. The field-NLC method developed could be useful for following up water quality goals in e.g. small monitoring catchments, since it allows normal leaching from actual crop rotations and fertilisation to be determined regardless of the weather. The method was tested using field data from nine small intensively monitored agricultural catchments. The agreement between calculated field N leaching and measured N transport in catchment stream outlets, 19-47 and 8-38 kg ha⁻¹ yr⁻¹, respectively, was satisfactory in most catchments when contributions from land uses other than arable land and uncertainties in groundwater flows were considered. The possibility of calculating effects of crop combinations (crop and following crop) is of considerable value since changes in crop rotation constitute a large potential for reducing N leaching. When the effect of a number of potential measures to reduce N leaching (i.e. applying manure in spring instead of autumn; postponing ploughing-in of ley and green fallow in autumn; undersowing a catch crop in cereals and oilseeds; and increasing the area of catch crops by substituting winter cereals and winter oilseeds with corresponding spring crops) was calculated for the arable fields in the catchments using field-NLCs, N leaching was reduced by between 34 and 54% for the separate catchments when the best possible effect on the entire potential area was assumed.
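
    The coefficient-based bookkeeping can be illustrated with a toy sketch: each field contributes a precomputed leaching coefficient looked up by its crop sequence, fertilisation, soil and region, and the catchment estimate is the area-weighted mean. All table keys and values below are made-up placeholders, not SOILNDB output.

```python
# Field-NLC style aggregation: each arable field contributes a precomputed
# leaching coefficient (kg N per ha per year) looked up by its crop sequence
# and management; the catchment estimate is the area-weighted mean.
# All coefficient values and field records below are made-up placeholders.

nlc_table = {
    # (crop, following_crop, fertilisation, soil, region): kg N ha-1 yr-1
    ("spring_barley", "winter_wheat", "mineral", "sand", "south"): 38.0,
    ("winter_wheat",  "catch_crop",   "manure",  "sand", "south"): 22.0,
    ("ley",           "spring_oats",  "mineral", "clay", "south"): 12.0,
}

fields = [
    {"area_ha": 14.0, "key": ("spring_barley", "winter_wheat", "mineral", "sand", "south")},
    {"area_ha": 22.5, "key": ("winter_wheat",  "catch_crop",   "manure",  "sand", "south")},
    {"area_ha": 9.0,  "key": ("ley",           "spring_oats",  "mineral", "clay", "south")},
]

total_area = sum(f["area_ha"] for f in fields)
total_load = sum(f["area_ha"] * nlc_table[f["key"]] for f in fields)
print(f"catchment mean leaching: {total_load / total_area:.1f} kg N ha-1 yr-1")
```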

  13. Novel Texture-based Visualization Methods for High-dimensional Multi-field Data Sets

    DTIC Science & Technology

    2013-07-06

    Novel image acquisition and simulation techniques have made it possible to record a large number of co-located data fields describing function, structure, anatomical changes, metabolic activity, blood perfusion, and cellular remodelling. In this paper we investigate texture-based visualization methods for such high-dimensional multi-field data sets.

  14. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified using a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller, and that it is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
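
    A minimal sketch of the incoherent broadband matched-field step: a Bartlett correlation between measured and replica fields is averaged over frequencies and maximized over a grid of candidate source positions. Free-field monopole replicas stand in for the actual tunnel acoustic model, and the sensor layout, frequencies and search grid are illustrative assumptions.

```python
import numpy as np

def monopole_replica(src, receivers, k):
    """Free-field monopole pressure (complex) at the receiver array."""
    r = np.linalg.norm(receivers - src, axis=1)
    return np.exp(1j * k * r) / r

def broadband_bartlett(p_meas, receivers, grid, freqs, c=1500.0):
    """Incoherently averaged Bartlett correlation over frequencies.

    p_meas: dict freq -> measured complex pressure vector at the array.
    Returns the index of the grid point with the highest averaged correlation.
    """
    scores = np.zeros(len(grid))
    for f in freqs:
        k = 2 * np.pi * f / c
        d = p_meas[f] / np.linalg.norm(p_meas[f])
        for i, g in enumerate(grid):
            w = monopole_replica(g, receivers, k)
            w /= np.linalg.norm(w)
            scores[i] += np.abs(np.vdot(w, d)) ** 2
    return np.argmax(scores), scores / len(freqs)

# Hypothetical setup: 8 hull-mounted sensors above the propeller, coarse search grid.
rng = np.random.default_rng(4)
receivers = np.c_[rng.uniform(-0.5, 0.5, 8), rng.uniform(-0.5, 0.5, 8), np.zeros(8)]
true_src = np.array([0.1, -0.05, -0.6])
freqs = [3000.0, 4000.0, 5000.0]
p_meas = {f: monopole_replica(true_src, receivers, 2 * np.pi * f / 1500.0) for f in freqs}
grid = [np.array([x, y, -0.6]) for x in np.linspace(-0.3, 0.3, 13)
        for y in np.linspace(-0.3, 0.3, 13)]
best, _ = broadband_bartlett(p_meas, receivers, grid, freqs)
print(grid[best])
```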

  15. 3D reconstruction based on light field images

    NASA Astrophysics Data System (ADS)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

    This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work was carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. The structure from motion (SFM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a 3D sparse point cloud. The method shows that 3D reconstruction can be achieved with only two light field captures, rather than the dozen or more captures required by traditional cameras. This effectively addresses the time-consuming and laborious aspects of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
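
    A two-view sketch of the SIFT-plus-SFM pipeline with OpenCV, assuming the sub-aperture views have already been extracted and that an approximate pinhole intrinsic matrix K is available for them; the file names and intrinsics below are placeholders, and light-field-specific calibration is not addressed.

```python
import cv2
import numpy as np

# Placeholder inputs: two sub-aperture views extracted from the light field
# captures, plus an (assumed) pinhole intrinsic matrix K for those views.
img1 = cv2.imread("subaperture_view_1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("subaperture_view_2.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])

# 1. SIFT feature detection and matching with a ratio test.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
           if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Two-view structure from motion: essential matrix, relative pose, triangulation.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask_pose = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T          # sparse 3-D point cloud (up to scale)
print(cloud.shape)
```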

  16. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios owing to their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity with the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into "unsaturated" and "saturated" categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083

  17. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  18. Defeaturing CAD models using a geometry-based size field and facet-based reduction operators.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quadros, William Roshan; Owen, Steven James

    2010-04-01

    We propose a method to automatically defeature a CAD model by detecting irrelevant features using a geometry-based size field and a method to remove the irrelevant features via facet-based operations on a discrete representation. A discrete B-Rep model is first created by obtaining a faceted representation of the CAD entities. The candidate facet entities are then marked for reduction by using a geometry-based size field. This is accomplished by estimating local mesh sizes based on geometric criteria. If the field value at a facet entity goes below a user specified threshold value then it is identified as an irrelevant feature and is marked for reduction. The reduction of marked facet entities is primarily performed using an edge collapse operator. Care is taken to retain a valid geometry and topology of the discrete model throughout the procedure. The original model is not altered as the defeaturing is performed on a separate discrete model. Associativity between the entities of the discrete model and that of original CAD model is maintained in order to decode the attributes and boundary conditions applied on the original CAD entities onto the mesh via the entities of the discrete model. Example models are presented to illustrate the effectiveness of the proposed approach.

  19. An integral equation method for calculating sound field diffracted by a rigid barrier on an impedance ground.

    PubMed

    Zhao, Sipei; Qiu, Xiaojun; Cheng, Jianchun

    2015-09-01

    This paper proposes a different method for calculating a sound field diffracted by a rigid barrier based on the integral equation method, where a virtual boundary is assumed above the rigid barrier to divide the whole space into two subspaces. Based on the Kirchhoff-Helmholtz equation, the sound field in each subspace is determined with the source inside and the boundary conditions on the surface, and then the diffracted sound field is obtained by using the continuation conditions on the virtual boundary. Simulations are carried out to verify the feasibility of the proposed method. Compared to the MacDonald method and other existing methods, the proposed method is a rigorous solution for whole space and is also much easier to understand.

  20. Modelling of induced electric fields based on incompletely known magnetic fields

    NASA Astrophysics Data System (ADS)

    Laakso, Ilkka; De Santis, Valerio; Cruciani, Silvano; Campi, Tommaso; Feliziani, Mauro

    2017-08-01

    Determining the induced electric fields in the human body is a fundamental problem in bioelectromagnetics that is important for both evaluation of safety of electromagnetic fields and medical applications. However, existing techniques for numerical modelling of induced electric fields require detailed information about the sources of the magnetic field, which may be unknown or difficult to model in realistic scenarios. Here, we show how induced electric fields can accurately be determined in the case where the magnetic fields are known only approximately, e.g. based on field measurements. The robustness of our approach is shown in numerical simulations for both idealized and realistic scenarios featuring a personalized MRI-based head model. The approach allows for modelling of the induced electric fields in biological bodies directly based on real-world magnetic field measurements.

  1. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. In contrast to explicit edge direction estimation, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges) and the GR constraint is imposed on the interpolated image through the Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state from the state space. To lower the computational complexity of MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.

  2. SU-E-J-246: A Deformation-Field Map Based Liver 4D CBCT Reconstruction Method Using Gold Nanoparticles as Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, W; Zhang, Y; Ren, L

    2014-06-01

    Purpose: To investigate the feasibility of using nanoparticle markers to validate liver tumor motion together with a deformation field map-based four dimensional (4D) cone-beam computed tomography (CBCT) reconstruction method. Methods: A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In this method, each phase of the 4D-CBCT is considered as a deformation of a prior CT volume. The DFM is solved by a motion modeling and free-form deformation (MM-FD) technique, using a data fidelity constraint and the deformation energy minimization. For liver imaging, there is low contrast of a liver tumor in on-board projections. A validation of liver tumor motion using implanted gold nanoparticles, along with the MM-FD deformation technique, is implemented to reconstruct on-board 4D-CBCT liver radiotherapy images. These nanoparticles were placed around the liver tumor to reflect the tumor positions in both CT simulation and on-board image acquisition. When reconstructing each phase of the 4D-CBCT, the migrations of the gold nanoparticles act as a constraint to regularize the deformation field, along with the data fidelity and the energy minimization constraints. In this study, multiple tumor diameters and positions were simulated within the liver for on-board 4D-CBCT imaging. The on-board 4D-CBCT reconstructed by the proposed method was compared with the "ground truth" image. Results: The preliminary data, which use reconstructions for lung radiotherapy, suggest that the advanced reconstruction algorithm including the gold nanoparticle constraint will result in volume percentage differences (VPD) between lesions in reconstructed images by MM-FD and "ground truth" on-board images of 11.5% (± 9.4%) and a center of mass shift of 1.3 mm (± 1.3 mm) for liver radiotherapy. Conclusion: The advanced MM-FD technique, enforcing the additional constraints from gold nanoparticles, results in improved reconstruction accuracy.

  3. Atmospheric Blocking and Intercomparison of Objective Detection Methods: Flow Field Characteristics

    NASA Astrophysics Data System (ADS)

    Pinheiro, M. C.; Ullrich, P. A.; Grotjahn, R.

    2017-12-01

    A number of objective methods for identifying and quantifying atmospheric blocking have been developed over the last couple of decades, but there is variable consensus on the resultant blocking climatology. This project examines blocking climatologies as produced by three different methods: two anomaly-based methods, and the geopotential height gradient method of Tibaldi and Molteni (1990). The results highlight the differences in blocking that arise from the choice of detection method, with emphasis on the physical characteristics of the flow field and the subsequent effects on the blocking patterns that emerge.
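
    Of the three detectors, the Tibaldi-Molteni (1990) geopotential height gradient criterion is simple enough to sketch directly; the thresholds and latitude offsets below follow the commonly cited formulation, the test field is synthetic, and the two anomaly-based methods of the study are not reproduced.

```python
import numpy as np

def tm90_blocked(z500, lats, lon_index, delta=0.0):
    """Tibaldi-Molteni (1990) instantaneous blocking test at one longitude.

    z500: 2-D array of 500-hPa geopotential height [m], shape (lat, lon).
    lats: 1-D latitude array matching z500's first axis.
    delta: latitude offset in degrees (TM90 tests -4, 0 and +4).
    """
    phi_n, phi_0, phi_s = 80.0 + delta, 60.0 + delta, 40.0 + delta
    i_n = np.argmin(np.abs(lats - phi_n))
    i_0 = np.argmin(np.abs(lats - phi_0))
    i_s = np.argmin(np.abs(lats - phi_s))
    ghgs = (z500[i_0, lon_index] - z500[i_s, lon_index]) / (phi_0 - phi_s)
    ghgn = (z500[i_n, lon_index] - z500[i_0, lon_index]) / (phi_n - phi_0)
    return ghgs > 0.0 and ghgn < -10.0   # thresholds in m per degree latitude

# Hypothetical field: a local meridional-gradient reversal (blocking-like) at one longitude.
lats = np.arange(20.0, 90.0, 2.5)
z500 = 5500.0 - 5.0 * (lats[:, None] - 20.0) + np.zeros((len(lats), 144))
z500[:, 40] = 5600.0 - 12.0 * np.abs(lats - 60.0)   # ridge centred near 60N
print(tm90_blocked(z500, lats, lon_index=40), tm90_blocked(z500, lats, lon_index=10))
```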

  4. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    NASA Astrophysics Data System (ADS)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic database and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods in solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrate the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, the graphic processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for multiphase-field model.
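
    As a generic illustration of the idea (not the paper's CALPHAD coupling or M-slope treatment), the sketch below builds a second-order Taylor surrogate of an "expensive" thermodynamic-like function around a reference composition, so that subsequent evaluations avoid repeated exact calls; the test function and step size are assumptions.

```python
import numpy as np

def taylor2_extrapolator(f, x_ref, h=1e-4):
    """Build a second-order Taylor surrogate of a scalar function f(x) around
    x_ref using central finite differences (a stand-in for the few exact
    thermodynamic evaluations a phase-field step actually performs)."""
    x_ref = np.asarray(x_ref, dtype=float)
    n = len(x_ref)
    f0 = f(x_ref)
    grad = np.zeros(n)
    hess = np.zeros((n, n))
    eye = np.eye(n)
    for i in range(n):
        grad[i] = (f(x_ref + h * eye[i]) - f(x_ref - h * eye[i])) / (2 * h)
        for j in range(i, n):
            fpp = f(x_ref + h * eye[i] + h * eye[j])
            fpm = f(x_ref + h * eye[i] - h * eye[j])
            fmp = f(x_ref - h * eye[i] + h * eye[j])
            fmm = f(x_ref - h * eye[i] - h * eye[j])
            hess[i, j] = hess[j, i] = (fpp - fpm - fmp + fmm) / (4 * h * h)

    def surrogate(x):
        dx = np.asarray(x, dtype=float) - x_ref
        return f0 + grad @ dx + 0.5 * dx @ hess @ dx

    return surrogate

# Hypothetical "expensive" free-energy-like function of two solute fractions.
g = lambda c: 8.0 * c[0] ** 2 + 3.0 * c[0] * c[1] + 5.0 * np.log1p(c[1])
approx = taylor2_extrapolator(g, x_ref=[0.10, 0.05])
print(g([0.11, 0.06]), approx([0.11, 0.06]))   # close near the reference point
```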

  5. Application of Discrete Huygens Method for Diffraction of Transient Ultrasonic Field

    NASA Astrophysics Data System (ADS)

    Alia, A.

    2018-01-01

    Several time-domain methods have been widely used to predict impulse response in acoustics. Despite its great potential, the Discrete Huygens Method (DHM) has not been as widely used in the domain of ultrasonic diffraction as in other fields. In fact, little can be found in the literature about the application of the DHM to the diffraction phenomenon that can be described in terms of direct and edge waves, a concept suggested by Young as early as 1802. In this paper, a simple axisymmetric DHM model has been used to simulate the transient ultrasonic field radiation of a baffled transducer and its diffraction by a target located on axis. The results are validated by impulse response based calculations. They indicate the capability of DHM to simulate diffraction occurring at transducer and target edges and to predict the complicated transient field in pulse mode.

  6. [Electormagnetic field of the mobile phone base station: case study].

    PubMed

    Bieńkowski, Paweł; Zubrzak, Bartłomiej; Surma, Robert

    2011-01-01

    The paper presents changes in the electromagnetic field intensity in a school building and its surroundings after a mobile phone base station was installed on the roof of the school. The EMF intensity measured before the base station was launched (electromagnetic background measurement) is compared with that measured after it started operating (two independent control measurements). Analyses of the measurements are presented, and the authors also propose a method of adjusting the electromagnetic field distribution in the area of the radiating antennas' side lobes to reduce the EMF level in the proximity of the base station. The presented method involves regulating the antenna inclination. On the basis of the measurements, it was found that the EMF intensity increased in the building and its surroundings, but the measured values meet the requirements of the Polish environmental protection law with wide margins.

  7. Partial homogeneity based high-resolution nuclear magnetic resonance spectra under inhomogeneous magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Zhiliang; Lin, Liangjie; Lin, Yanqin, E-mail: linyq@xmu.edu.cn, E-mail: chenz@xmu.edu.cn

    2014-09-29

    In nuclear magnetic resonance (NMR), it is of great necessity and importance to obtain high-resolution spectra, especially under inhomogeneous magnetic fields. In this study, a method based on partial homogeneity is proposed for retrieving high-resolution one-dimensional NMR spectra under inhomogeneous fields. Signals from a series of small voxels, which offer high resolution due to their small sizes, are recorded simultaneously. Then, an inhomogeneity correction algorithm is developed based on pattern recognition to correct the influence brought by field inhomogeneity automatically, thus yielding high-resolution information. Experiments on chemical solutions and fish spawn were carried out to demonstrate the performance of the proposed method. The proposed method serves as a single-radiofrequency-pulse high-resolution NMR spectroscopy under inhomogeneous fields and may provide an alternative for obtaining high-resolution spectra of in vivo living systems or chemical-reaction systems, where the performance of conventional techniques is usually degraded by field inhomogeneity.

  8. The Method of Fundamental Solutions using the Vector Magnetic Dipoles for Calculation of the Magnetic Fields in the Diagnostic Problems Based on Full-Scale Modelling Experiment

    NASA Astrophysics Data System (ADS)

    Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.

    2016-04-01

    The article describes the calculation of magnetic fields in diagnostic problems of technical systems based on full-scale modelling experiments. Using the gridless method of fundamental solutions and its variants in combination with grid methods (finite differences and finite elements) makes it possible to considerably reduce the dimensionality of the field calculation task and hence the calculation time. Fictitious magnetic charges are used when implementing the method. In addition, much attention is given to the calculation accuracy; errors occur when the distance between the charges is chosen poorly. The authors propose using vector magnetic dipoles to improve the accuracy of the magnetic field calculation, and examples of this approach are given. The article presents the results of this research, which allow the authors to recommend the use of this approach in the method of fundamental solutions for full-scale modelling tests of technical systems.

  9. A Multifunctional Interface Method for Coupling Finite Element and Finite Difference Methods: Two-Dimensional Scalar-Field Problems

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2002-01-01

    A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.

  10. A novel construction method of QC-LDPC codes based on the subgroup of the finite field multiplicative group for optical transmission systems

    NASA Astrophysics Data System (ADS)

    Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-01-01

    According to the requirements of the increasing development of optical transmission systems, a novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite field multiplicative group is proposed. This construction method effectively avoids girth-4 phenomena and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of code length and code rate. The simulation results show that the error correction performance of the QC-LDPC(3 780,3 540) code with a code rate of 93.7% constructed by the proposed method is excellent: at a bit error rate (BER) of 10⁻⁷, its net coding gain is 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher, respectively, than those of the QC-LDPC(5 334,4 962) code constructed by the method based on the inverse element characteristics of the finite field multiplicative group, the SCG-LDPC(3 969,3 720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32 640,30 592) code in ITU-T G.975.1, and the classic RS(255,239) code widely used in optical transmission systems in ITU-T G.975. Therefore, the constructed QC-LDPC(3 780,3 540) code is more suitable for optical transmission systems.
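
    The subgroup-based exponent selection of the paper is not reproduced here; the sketch below only illustrates the generic QC-LDPC construction step of expanding an exponent base matrix into a parity-check matrix built from circulant permutation matrices, with a toy multiplicative rule standing in for the actual subgroup construction.

```python
# A generic sketch (not the paper's exact construction): expand an exponent base
# matrix into a QC-LDPC parity-check matrix by replacing each entry e with the
# e-th cyclic shift of a p x p identity matrix (a circulant permutation matrix).
import numpy as np

def circulant_permutation(p, e):
    """p x p identity matrix cyclically shifted by e columns."""
    return np.roll(np.eye(p, dtype=np.uint8), e, axis=1)

def expand_base_matrix(E, p):
    """Replace every exponent E[i, j] with a circulant permutation block."""
    rows = [np.hstack([circulant_permutation(p, e) for e in row]) for row in E]
    return np.vstack(rows)

p = 7                                    # circulant size (toy value)
a, b = 3, 2                              # elements of the multiplicative group mod p
# toy exponent matrix E[i, j] = a^i * b^j mod p (illustrative only)
E = np.array([[pow(a, i, p) * pow(b, j, p) % p for j in range(4)]
              for i in range(2)])
H = expand_base_matrix(E, p)
print("parity-check matrix shape:", H.shape)   # (2*p, 4*p)
print("column weights:", H.sum(axis=0)[:8])    # each column has weight 2 here
```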

  11. Self-consistent Green's function embedding for advanced electronic structure methods based on a dynamical mean-field concept

    NASA Astrophysics Data System (ADS)

    Chibani, Wael; Ren, Xinguo; Scheffler, Matthias; Rinke, Patrick

    2016-04-01

    We present an embedding scheme for periodic systems that facilitates the treatment of the physically important part (here a unit cell or a supercell) with advanced electronic structure methods that are computationally too expensive to apply to the full periodic system. The rest of the periodic system is treated with computationally less demanding approaches, e.g., Kohn-Sham density-functional theory, in a self-consistent manner. Our scheme is based on the concept of dynamical mean-field theory formulated in terms of Green's functions. Our real-space dynamical mean-field embedding scheme features two nested Dyson equations, one for the embedded cluster and another for the periodic surrounding. The total energy is computed from the resulting Green's functions. The performance of our scheme is demonstrated by treating the embedded region with hybrid functionals and many-body perturbation theory in the GW approach for simple bulk systems. The total energy and the density of states converge rapidly with respect to the computational parameters and approach their bulk limit with increasing cluster (i.e., computational supercell) size.
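
    As a purely structural illustration of a Green's-function self-consistency loop built around a Dyson equation (not the actual DMFT-based embedding of the paper), the following toy Python sketch iterates a placeholder self-energy update for a two-level cluster until convergence; the Hamiltonian, feedback rule and all numbers are illustrative.

```python
# A toy sketch of a Green's-function self-consistency loop built around a Dyson
# equation, illustrating the structure (embedded region + self-energy from the
# surrounding) rather than the actual DMFT-based embedding of the paper.
import numpy as np

def retarded_green(omega, H, sigma, eta=1e-2):
    """G(omega) = [(omega + i*eta) I - H - Sigma(omega)]^-1."""
    n = H.shape[0]
    return np.linalg.inv((omega + 1j * eta) * np.eye(n) - H - sigma)

# toy "cluster" Hamiltonian for the embedded region
H = np.array([[0.0, -1.0],
              [-1.0, 0.5]])
omega = 0.2
sigma = np.zeros_like(H, dtype=complex)   # start from a zero self-energy

for it in range(100):
    G = retarded_green(omega, H, sigma)
    # toy feedback: the surroundings contribute a self-energy proportional to
    # the cluster Green's function (placeholder physics, not the paper's update)
    sigma_new = 0.3 * G
    if np.max(np.abs(sigma_new - sigma)) < 1e-10:
        break
    sigma = 0.5 * sigma + 0.5 * sigma_new  # simple mixing for stability

print("iterations used:", it + 1)
print("local density of states at omega:", (-1.0 / np.pi) * np.imag(np.diag(G)))
```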

  12. Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method.

    PubMed

    Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping

    2017-04-03

    Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. Simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, which are based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of the depth of field.

  13. [Sub-field imaging spectrometer design based on Offner structure].

    PubMed

    Wu, Cong-Jun; Yan, Chang-Xiang; Liu, Wei; Dai, Hu

    2013-08-01

    To satisfy imaging spectrometers' requirements for miniaturization, light weight and a large field of view in space applications, the current optical design of imaging spectrometers with the Offner structure was analyzed, and a simple method to design an imaging spectrometer with a concave grating based on current approaches was given. Using the method offered, a sub-field imaging spectrometer with a 400 km orbital altitude, a 0.4-1.0 μm wavelength range, an F-number of 5, a 720 mm focal length and a 4.3° total field of view was designed. Optical fiber was used to transfer the image at the telescope's focal plane to three slits arranged in the same plane so as to achieve sub-field imaging. A CCD detector with 1 024 x 1 024 pixels of 18 μm x 18 μm was used to receive the images of the three slits after dispersion. According to ZEMAX optimization and tolerance analysis, the system can satisfy a 5 nm spectral resolution and a 5 m spatial resolution, and the MTF is over 0.62 at 28 lp/mm. The field of view of the system is almost 3 times that of similar instruments used in space probes.

  14. Dynamic updating atlas for heart segmentation with a nonlinear field-based model.

    PubMed

    Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng

    2017-09-01

    Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon the atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic-update algorithm using a nonlinear deformation field scheme. The proposed method is based on features extracted from double-source CT (DSCT) slices; these features form the basis for constructing an average model, and the created reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences, and the algorithm achieved an acceptable accuracy (1.0-2.8 mm). The proposed method, which combines a nonlinear field-based model with a dynamic atlas-updating strategy, can provide an effective and accurate way to perform whole-heart segmentation. Its success largely relies on the effective use of the prior knowledge in the atlas and the similarity explored among the to-be-segmented DSCT sequences.

  15. Wavelet-based hierarchical surface approximation from height fields

    Treesearch

    Sang-Mook Lee; A. Lynn Abbott; Daniel L. Schmoldt

    2004-01-01

    This paper presents a novel hierarchical approach to triangular mesh generation from height fields. A wavelet-based multiresolution analysis technique is used to estimate local shape information at different levels of resolution. Using predefined templates at the coarsest level, the method constructs an initial triangulation in which underlying object shapes are well...

  16. Magnetic tracking for TomoTherapy systems: gradiometer based methods to filter eddy-current magnetic fields.

    PubMed

    McGary, John E; Xiong, Zubiao; Chen, Ji

    2013-07-01

    TomoTherapy systems lack real-time tumor tracking. A possible solution is to use electromagnetic markers; however, the eddy-current magnetic fields generated in response to a magnetic source can be comparable to the signal, thus degrading the localization accuracy. Therefore, the tracking system must be designed to account for the eddy fields created along the conducting surfaces of the inner bore. The aim of this work is to investigate localization accuracy using magnetic field gradients to determine feasibility for TomoTherapy applications. Electromagnetic models are used to simulate the magnetic fields created by a source and its simultaneous generation of eddy currents within a conducting cylinder. The source position is calculated using a least-squares fit of simulated sensor data, with the dipole equation as the model equation. To account for field gradients across the sensor area (≈25 cm²), an iterative method is used to estimate the magnetic field at the sensor center. Spatial gradients are calculated with two arrays of uniaxial, paired sensors that form a gradiometer array, where the sensors are considered ideal. Experimental measurements of magnetic fields within the TomoTherapy bore are shown to be 1%-10% less than calculated with the electromagnetic model. Localization results using a 5 × 5 array of gradiometers are, in general, 2-4 times more accurate than those of a planar array of sensors, depending on the solenoid orientation and position. Simulation results show that the localization accuracy using a gradiometer array is within 1.3 mm over a distance of 20 cm from the array plane. In comparison, localization errors using a single array are within 5 mm. The results indicate that the gradiometer method merits further study due to the accuracy achieved with ideal sensors. Future studies should include realistic sensor models and extensive numerical studies to estimate the expected magnetic tracking accuracy within a TomoTherapy system before proceeding
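
    A minimal sketch of the underlying localization step, under the same idealized assumptions (point-dipole source, ideal sensors, no eddy-current contribution), is shown below: simulated sensor readings are fitted with the dipole equation by nonlinear least squares to recover the source position; all geometry and parameter values are illustrative.

```python
# A minimal sketch of dipole localization by least squares (ideal sensors, no
# eddy-current fields): simulate point-dipole readings at known sensor positions
# and recover the source position and moment by fitting the dipole equation.
import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7

def dipole_field(r_sensor, r_src, m):
    """Point-dipole field at sensor positions (n, 3)."""
    d = r_sensor - r_src
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    rhat = d / dist
    return MU0_4PI * (3.0 * rhat * np.sum(rhat * m, axis=1, keepdims=True) - m) / dist**3

# planar 5 x 5 sensor array in the z = 0 plane (ideal point sensors)
xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 5), np.linspace(-0.1, 0.1, 5))
sensors = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])

true_pos = np.array([0.02, -0.01, 0.15])        # 15 cm from the array plane
true_m = np.array([0.0, 0.0, 0.05])             # dipole moment, A*m^2
data = dipole_field(sensors, true_pos, true_m)

def residuals(p):
    return (dipole_field(sensors, p[:3], p[3:]) - data).ravel()

p0 = np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.01])  # initial guess
fit = least_squares(residuals, p0)
print("recovered position [m]:", fit.x[:3])
```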

  17. Process system and method for fabricating submicron field emission cathodes

    DOEpatents

    Jankowski, Alan F.; Hayes, Jeffrey P.

    1998-01-01

    A process method and system for making field emission cathodes exists. The deposition source divergence is controlled to produce field emission cathodes with height-to-base aspect ratios that are uniform over large substrate surface areas while using very short source-to-substrate distances. The rate of hole closure is controlled from the cone source. The substrate surface is coated in well defined increments. The deposition source is apertured to coat pixel areas on the substrate. The entire substrate is coated using a manipulator to incrementally move the whole substrate surface past the deposition source. Either collimated sputtering or evaporative deposition sources can be used. The position of the aperture and its size and shape are used to control the field emission cathode size and shape.

  18. Towards robust and repeatable sampling methods in eDNA based studies.

    PubMed

    Dickie, Ian A; Boyer, Stephane; Buckley, Hannah; Duncan, Richard P; Gardner, Paul; Hogg, Ian D; Holdaway, Robert J; Lear, Gavin; Makiola, Andreas; Morales, Sergio E; Powell, Jeff R; Weaver, Louise

    2018-05-26

    DNA based techniques are increasingly used for measuring the biodiversity (species presence, identity, abundance and community composition) of terrestrial and aquatic ecosystems. While there are numerous reviews of molecular methods and bioinformatic steps, there has been little consideration of the methods used to collect the samples upon which these later steps are based. This represents a critical knowledge gap, as methodologically sound field sampling is the foundation for subsequent analyses. We reviewed field sampling methods used for metabarcoding studies of both terrestrial and freshwater ecosystem biodiversity over a nearly three-year period (n = 75). We found that 95% (n = 71) of these studies used subjective sampling methods, inappropriate field methods, and/or failed to provide critical methodological information. It would be possible for researchers to replicate only 5% of the metabarcoding studies in our sample, a poorer level of reproducibility than for ecological studies in general. Our findings suggest that greater attention to field sampling methods and reporting is necessary in eDNA-based studies of biodiversity to ensure robust outcomes and future reproducibility. Methods must be fully and accurately reported, and protocols developed that minimise subjectivity. Standardisation of sampling protocols would be one way to help improve reproducibility, and would have the additional benefit of allowing compilation and comparison of data across studies.

  19. σ -SCF: A Direct Energy-targeting Method To Mean-field Excited States

    NASA Astrophysics Data System (ADS)

    Ye, Hongzhou; Welborn, Matthew; Ricke, Nathan; van Voorhis, Troy

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g. Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF, tend to fall into the lowest solution consistent with a given symmetry - a problem known as "variational collapse". In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states - ground or excited - are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). This work was funded by a Grant from NSF (CHE-1464804).
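
    The self-consistent-field machinery of σ-SCF is not reproduced here, but the central energy-targeting idea can be illustrated with a small matrix example: minimizing the variance <(H − ω)²> over normalized states selects the eigenvector whose eigenvalue lies closest to the target guess ω. The sketch below uses a random symmetric matrix as a toy "Hamiltonian".

```python
# A finite-matrix illustration of the energy-targeting idea: minimizing the
# variance <(H - omega)^2> over normalized states picks out the eigenvector
# whose eigenvalue lies closest to the target energy omega. This is only a
# linear-algebra sketch, not the self-consistent-field procedure of the paper.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
H = (A + A.T) / 2.0                      # toy Hermitian "Hamiltonian"
evals, evecs = np.linalg.eigh(H)

omega = evals[3] + 0.1                   # target a guess near the 4th eigenvalue

# minimize <(H - omega)^2>: lowest eigenvector of (H - omega I)^2
M = (H - omega * np.eye(8)) @ (H - omega * np.eye(8))
w, v = np.linalg.eigh(M)
targeted = v[:, 0]
energy = targeted @ H @ targeted

print("target guess:                        ", omega)
print("energy of variance-minimizing state: ", energy)
print("closest exact eigenvalue:            ", evals[np.argmin(abs(evals - omega))])
```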

  20. Single-camera displacement field correlation method for centrosymmetric 3D dynamic deformation measurement

    NASA Astrophysics Data System (ADS)

    Zhao, Jiaye; Wen, Huihui; Liu, Zhanwei; Rong, Jili; Xie, Huimin

    2018-05-01

    Three-dimensional (3D) deformation measurements are a key issue in experimental mechanics. In this paper, a displacement field correlation (DFC) method to measure centrosymmetric 3D dynamic deformation using a single camera is proposed for the first time. When 3D deformation information is collected by a camera at a tilted angle, the measured displacement fields are coupling fields of both the in-plane and out-of-plane displacements. The features of the coupling field are analysed in detail, and a decoupling algorithm based on DFC is proposed. The 3D deformation to be measured can be inverted and reconstructed using only one coupling field. The accuracy of this method was validated by a high-speed impact experiment that simulated an underwater explosion. The experimental results show that the approach proposed in this paper can be used in 3D deformation measurements with higher sensitivity and accuracy, and is especially suitable for high-speed centrosymmetric deformation. In addition, this method avoids the non-synchronisation problem associated with using a pair of high-speed cameras, as is common in 3D dynamic measurements.

  1. A comparison of hydroponic and soil-based screening methods to identify salt tolerance in the field in barley

    PubMed Central

    Tavakkoli, Ehsan; Fatehi, Foad; Rengasamy, Pichu; McDonald, Glenn K.

    2012-01-01

    Success in breeding crops for yield and other quantitative traits depends on the use of methods to evaluate genotypes accurately under field conditions. Although many screening criteria have been suggested to distinguish between genotypes for their salt tolerance under controlled environmental conditions, there is a need to test these criteria in the field. In this study, the salt tolerance, ion concentrations, and accumulation of compatible solutes of genotypes of barley with a range of putative salt tolerance were investigated using three growing conditions (hydroponics, soil in pots, and natural saline field). Initially, 60 genotypes of barley were screened for their salt tolerance and uptake of Na+, Cl–, and K+ at 150 mM NaCl and, based on this, a subset of 15 genotypes was selected for testing in pots and in the field. Expression of salt tolerance in saline solution culture was not a reliable indicator of the differences in salt tolerance between barley plants that were evident in saline soil-based comparisons. Significant correlations were observed in the rankings of genotypes on the basis of their grain yield production at a moderately saline field site and their relative shoot growth in pots at ECe 7.2 [Spearman’s rank correlation (rs)=0.79] and ECe 15.3 (rs=0.82) and the crucial parameter of leaf Na+ (rs=0.72) and Cl– (rs=0.82) concentrations at ECe 7.2 dS m−1. This work has established screening procedures that correlated well with grain yield at sites with moderate levels of soil salinity. This study also showed that both salt exclusion and osmotic tolerance are involved in salt tolerance and that the relative importance of these traits may differ with the severity of the salt stress. In soil, ion exclusion tended to be more important at low to moderate levels of stress but osmotic stress became more important at higher stress levels. Salt exclusion coupled with a synthesis of organic solutes were shown to be important components of salt

  2. Palaeomagnetic dating method accounting for post-depositional remanence and its application to geomagnetic field modelling

    NASA Astrophysics Data System (ADS)

    Nilsson, A.; Suttie, N.

    2016-12-01

    Sedimentary palaeomagnetic data may exhibit some degree of smoothing of the recorded field due to the gradual processes by which the magnetic signal is 'locked in' over time. Here we present a new Bayesian method to construct age-depth models based on palaeomagnetic data, taking into account and correcting for the potential lock-in delay. The age-depth model is built on the widely used "Bacon" dating software by Blaauw and Christen (2011, Bayesian Analysis 6, 457-474) and is designed to combine both radiocarbon and palaeomagnetic measurements. To our knowledge, this is the first palaeomagnetic dating method that addresses the potential problems related to post-depositional remanent magnetisation acquisition in age-depth modelling. Age-depth models, including a site-specific lock-in depth and lock-in filter function, produced with this method are shown to be consistent with independent results based on radiocarbon wiggle-match dated sediment sections. Besides its primary use as a dating tool, our new method can also be used specifically to identify the most likely lock-in parameters for a specific record. We explore the potential to use these results to construct high-resolution geomagnetic field models based on sedimentary palaeomagnetic data, adjusting for the smoothing induced by post-depositional remanent magnetisation acquisition. Potentially, this technique could enable reconstructions of the Holocene geomagnetic field with the same amplitude of variability observed in archaeomagnetic field models for the past three millennia.

  3. New Methods of Low-Field Magnetic Resonance Imaging for Application to Traumatic Brain Injury

    DTIC Science & Technology

    2013-02-01

    magnet based), the development of novel high-speed parallel imaging detection systems, and work on advanced adaptive reconstruction methods ...signal many times within the acquisition time. We present here a new method for 3D OMRI based on b-SSFP at a constant field of 6.5 mT that provides up...developing injury-sensitive MRI based on the detection of free radicals associated with injury using the Overhauser effect and subsequently imaging that

  4. Subaperture correlation based digital adaptive optics for full field optical coherence tomography.

    PubMed

    Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A

    2013-05-06

    This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full field imaging systems, provided the complex object field information can be extracted. This method corrects for the wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics, spatial light modulators (SLMs) or additional cameras. We show that this method does not require knowledge of any system parameters. In the simulation study, we consider a full field swept source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample to demonstrate proof of principle.

  5. Study on copper phthalocyanine and perylene-based ambipolar organic light-emitting field-effect transistors produced using neutral beam deposition method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dae-Kyu; Oh, Jeong-Do; Shin, Eun-Sol

    2014-04-28

    The neutral cluster beam deposition (NCBD) method has been applied to the production and characterization of ambipolar, heterojunction-based organic light-emitting field-effect transistors (OLEFETs) with a top-contact, multi-digitated, long-channel geometry. Organic thin films of n-type N,N′-ditridecylperylene-3,4,9,10-tetracarboxylic diimide and p-type copper phthalocyanine were successively deposited on the hydroxyl-free polymethyl-methacrylate (PMMA)-coated SiO₂ dielectrics using the NCBD method. Characterization of the morphological and structural properties of the organic active layers was performed using atomic force microscopy and X-ray diffraction. Various device parameters such as hole- and electron-carrier mobilities, threshold voltages, and electroluminescence (EL) were derived from the fits of the observed current-voltage and current-voltage-light emission characteristics of OLEFETs. The OLEFETs demonstrated good field-effect characteristics, well-balanced ambipolarity, and substantial EL under ambient conditions. The device performance, which is strongly correlated with the surface morphology and the structural properties of the organic active layers, is discussed along with the operating conduction mechanism.

  6. Radio frequency electromagnetic field compliance assessment of multi-band and MIMO equipped radio base stations.

    PubMed

    Thors, Björn; Thielens, Arno; Fridén, Jonas; Colombi, Davide; Törnevik, Christer; Vermeeren, Günter; Martens, Luc; Joseph, Wout

    2014-05-01

    In this paper, different methods for practical numerical radio frequency exposure compliance assessments of radio base station products were investigated. Both multi-band base station antennas and antennas designed for multiple input multiple output (MIMO) transmission schemes were considered. For the multi-band case, various standardized assessment methods were evaluated in terms of the resulting compliance distance with respect to the reference levels and basic restrictions of the International Commission on Non-Ionizing Radiation Protection. Both single-frequency and multiple-frequency (cumulative) compliance distances were determined using numerical simulations for a mobile communication base station antenna transmitting in four frequency bands between 800 and 2600 MHz. The assessments were conducted in terms of root-mean-squared electromagnetic fields, whole-body averaged specific absorption rate (SAR) and peak 10 g averaged SAR. In general, assessments based on peak field strengths were found to be less computationally intensive but to lead to larger compliance distances than spatial averaging of electromagnetic fields used in combination with localized SAR assessments. For adult exposure, the results indicated that even shorter compliance distances were obtained by using assessments based on localized and whole-body SAR. Numerical simulations using base station products employing MIMO transmission schemes were performed as well and were in agreement with reference measurements. The applicability of various field combination methods for correlated exposure was investigated, and best-estimate methods were proposed. Our results showed that field combining methods generally considered conservative can be used to efficiently assess the compliance boundary dimensions of single- and dual-polarized multicolumn base station antennas with only minor increases in compliance distances.

  7. Calculation of acoustic field based on laser-measured vibration velocities on ultrasonic transducer surface

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin

    2018-05-01

    Determination of the distribution of a generated acoustic field is valuable for studying ultrasonic transducers, as it provides guidance for transducer design and a basis for analyzing their performance. A method for calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field based on specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser scanning measurement of discrete points, followed by a data-fitting computation. In addition, to ensure the calculation accuracy for the whole field, even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is first validated on numerical piezoelectric transducer models. The acoustic pressure distributions generated by a transducer operating in a homogeneous and in an inhomogeneous medium, respectively, are both calculated by the proposed method and compared with the results from other existing methods. Then, the method is further experimentally validated with two actual ultrasonic transducers used for flow measurement in our lab. The amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physics coupling computations where the effect of the acoustic field should be taken into account.

  8. Process system and method for fabricating submicron field emission cathodes

    DOEpatents

    Jankowski, A.F.; Hayes, J.P.

    1998-05-05

    A process method and system for making field emission cathodes exists. The deposition source divergence is controlled to produce field emission cathodes with height-to-base aspect ratios that are uniform over large substrate surface areas while using very short source-to-substrate distances. The rate of hole closure is controlled from the cone source. The substrate surface is coated in well defined increments. The deposition source is apertured to coat pixel areas on the substrate. The entire substrate is coated using a manipulator to incrementally move the whole substrate surface past the deposition source. Either collimated sputtering or evaporative deposition sources can be used. The position of the aperture and its size and shape are used to control the field emission cathode size and shape. 3 figs.

  9. Hamiltonian lattice field theory: Computer calculations using variational methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zako, Robert L.

    1991-12-03

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.
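
    As a small illustration of the Rayleigh-Ritz principle underlying the algorithm (applied to a simple quantum mechanical system rather than a lattice field theory), the sketch below diagonalizes an anharmonic oscillator Hamiltonian in a truncated harmonic-oscillator basis; the approximate eigenvalues are upper bounds that improve with basis size. All parameters are illustrative.

```python
# A minimal Rayleigh-Ritz sketch (not the paper's lattice field theory setup):
# approximate eigenvalues of an anharmonic oscillator H = p^2/2 + x^2/2 + lam*x^4
# by diagonalizing H in a truncated harmonic-oscillator basis. Rayleigh-Ritz
# guarantees the approximate eigenvalues are upper bounds that improve with
# basis size.
import numpy as np

def anharmonic_eigs(nbasis, lam):
    n = np.arange(nbasis)
    # position operator in the harmonic-oscillator basis: x = (a + a^dag)/sqrt(2)
    x = np.zeros((nbasis, nbasis))
    x[n[:-1], n[:-1] + 1] = np.sqrt(n[1:] / 2.0)
    x = x + x.T
    H = np.diag(n + 0.5) + lam * np.linalg.matrix_power(x, 4)
    return np.linalg.eigvalsh(H)

for nb in (10, 20, 40):
    print(nb, anharmonic_eigs(nb, 0.1)[:3])   # lowest three levels converge from above
```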

  10. Developing new extension of GafChromic RTQA2 film to patient quality assurance field using a plan-based calibration method

    NASA Astrophysics Data System (ADS)

    Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang

    2015-10-01

    GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient-specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information of the calculated dose image and the film grayscale image to create a dose versus pixel value calibration model. This model was used to calibrate the film grayscale image to a film relative dose image. The dose agreement between the calculated and film dose images was analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested using this method. Moreover, three MLC gap errors and two MLC transmission errors were introduced into the eight RapidArc cases, respectively, to test the robustness of this method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to obtain a good 2D relative dose calibration result. The mean gamma passing rate of the eight patients was 97.90% ± 1.7%, which showed good dose consistency between the calculated and film dose images. In the error test, the PBC method could over-calibrate the film, which means some dose errors in the film would be falsely corrected to keep the dose in the film consistent with the dose in the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative of the dose calibration curve would be non-monotonic, which would expose the dose abnormality. By using the PBC method, we extended the application of the more economical RTQA2 film to patient-specific QA. The robustness of the PBC method has been improved by analyzing the monotonicity of the derivative of the

  11. Integrating Field-Based Research into the Classroom: An Environmental Sampling Exercise

    ERIC Educational Resources Information Center

    DeSutter, T.; Viall, E.; Rijal, I.; Murdoff, M.; Guy, A.; Pang, X.; Koltes, S.; Luciano, R.; Bai, X.; Zitnick, K.; Wang, S.; Podrebarac, F.; Casey, F.; Hopkins, D.

    2010-01-01

    A field-based, soil methods, and instrumentation course was developed to expose graduate students to numerous strategies for measuring soil parameters. Given the northern latitude of North Dakota State University and the rapid onset of winter, this course met once per week for the first 8 weeks of the fall semester and centered on the field as a…

  12. Vision Sensor-Based Road Detection for Field Robot Navigation

    PubMed Central

    Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen

    2015-01-01

    Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514

  13. Evanescent Field Based Photoacoustics: Optical Property Evaluation at Surfaces

    PubMed Central

    Goldschmidt, Benjamin S.; Rudy, Anna M.; Nowak, Charissa A.; Tsay, Yowting; Whiteside, Paul J. D.; Hunt, Heather K.

    2016-01-01

    Here, we present a protocol to estimate material and surface optical properties using the photoacoustic effect combined with total internal reflection. Optical property evaluation of thin films and of the surfaces of bulk materials is an important step in understanding new optical material systems and their applications. The method presented can estimate thickness and refractive index, and can use the absorptive properties of materials for detection. This metrology system uses evanescent field-based photoacoustics (EFPA), a field of research based upon the interaction of an evanescent field with the photoacoustic effect. This interaction and the resulting family of techniques allow optical properties to be probed within a few hundred nanometers of the sample surface. This optical near field allows for the highly accurate estimation of material properties on the same scale as the field itself, such as refractive index and film thickness. With the use of EFPA and its sub-techniques, such as total internal reflection photoacoustic spectroscopy (TIRPAS) and optical tunneling photoacoustic spectroscopy (OTPAS), it is possible to evaluate a material at the nanoscale in a single consolidated instrument, without the need for many instruments and experiments that may be cost prohibitive. PMID:27500652

  14. Extracting flat-field images from scene-based image sequences using phase correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caron, James N., E-mail: Caron@RSImd.com; Montes, Marcos J.; Obermark, Jerome L.

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of the focal plane array electronics and for unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface; the flat-field image is then normalized and removed from the images. There are circumstances, such as in remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence and estimate the static scene. The scene is then removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
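
    A minimal integer-pixel version of the phase-correlation registration step (the work itself uses a sub-pixel variant) can be sketched as follows: the peak of the inverse FFT of the normalized cross-power spectrum gives the translation between two frames; the test images below are synthetic.

```python
# A minimal sketch of phase-correlation registration (integer-pixel version; the
# paper uses a sub-pixel variant): the peak of the inverse FFT of the normalized
# cross-power spectrum gives the translation between two frames.
import numpy as np

def phase_correlation_shift(img1, img2):
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12              # normalize -> pure phase
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # map peak indices to signed shifts
    if dy > img1.shape[0] // 2: dy -= img1.shape[0]
    if dx > img1.shape[1] // 2: dx -= img1.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
scene = rng.normal(size=(128, 128))
shifted = np.roll(scene, (5, -9), axis=(0, 1))   # frame displaced by (5, -9)
print(phase_correlation_shift(shifted, scene))   # expect (5, -9)
```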

  15. Optical properties in GaAs/AlGaAs semiparabolic quantum wells by the finite difference method: Combined effects of electric field and magnetic field

    NASA Astrophysics Data System (ADS)

    Yan, Ru-Yu; Tang, Jian; Zhang, Zhi-Hai; Yuan, Jian-Hui

    2018-05-01

    In the present work, the optical properties of GaAs/AlGaAs semiparabolic quantum wells (QWs) are studied under applied electric and magnetic fields using the compact-density-matrix method. The energy eigenvalues and corresponding eigenfunctions of the system are calculated using the finite difference method. The nonlinear optical rectification (OR) and optical absorption coefficients (OACs), as modulated by the applied electric and magnetic fields, are then investigated. It is found that the position and magnitude of the resonant peaks of the nonlinear OR and OACs depend strongly on the applied electric field, magnetic field and confinement potential frequencies. This offers a new way to control device applications based on the intersubband transitions of electrons in this system.
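
    As a minimal illustration of the finite difference approach (in dimensionless units with illustrative parameters, not the paper's GaAs/AlGaAs material constants), the sketch below solves the 1D Schrödinger equation for a semiparabolic well with a linear electric-field term and shows how the lowest subband energies shift with the field.

```python
# A minimal finite-difference sketch (dimensionless units, illustrative
# parameters only): solve the 1D Schroedinger equation for a semiparabolic well
# with a linear electric-field term, -1/2 psi'' + V(z) psi = E psi, and inspect
# how the field shifts the lowest subband energies.
import numpy as np

def subband_energies(field, n=800, zmax=12.0, omega=1.0):
    z = np.linspace(0.0, zmax, n)                # z >= 0: semiparabolic side
    dz = z[1] - z[0]
    V = 0.5 * omega**2 * z**2 + field * z        # parabolic confinement + field
    # second-derivative operator with psi(0) = psi(zmax) = 0 (hard walls)
    main = 1.0 / dz**2 + V
    off = -0.5 / dz**2 * np.ones(n - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:3]

for F in (0.0, 0.5, 1.0):
    print("field =", F, " lowest levels:", subband_energies(F))
```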

  16. A zonal method for modeling powered-lift aircraft flow fields

    NASA Technical Reports Server (NTRS)

    Roberts, D. W.

    1989-01-01

    A zonal method for modeling powered-lift aircraft flow fields is based on the coupling of a three-dimensional Navier-Stokes code to a potential flow code. By minimizing the extent of the viscous Navier-Stokes zones, the zonal method can be a cost-effective flow analysis tool. The successful coupling of the zonal solutions provides the viscous/inviscid interactions that are necessary to achieve convergent and unique overall solutions. The feasibility of coupling the two vastly different codes is demonstrated. The interzone boundaries were overlapped to facilitate the passing of boundary condition information between the codes. Routines were developed to extract the normal velocity boundary conditions for the potential flow zone from the viscous zone solution. Similarly, the velocity vector direction along with the total conditions were obtained from the potential flow solution to provide boundary conditions for the Navier-Stokes solution. Studies were conducted to determine the influence of the overlap of the interzone boundaries and of the convergence of the zonal solutions on the convergence of the overall solution. The zonal method was applied to a jet impingement problem to model the suckdown effect that results from the entrainment of the inviscid zone flow by the viscous zone jet. The resultant potential flow solution created a lower pressure on the base of the vehicle, which produces the suckdown load. The feasibility of the zonal method was demonstrated. By enhancing the Navier-Stokes code for powered-lift flow fields and optimizing the convergence of the coupled analysis, a practical flow analysis tool will result.

  17. An improved schlieren method for measurement and automatic reconstruction of the far-field focal spot

    PubMed Central

    Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye

    2017-01-01

    The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, thereby reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct a high dynamic-range image of far-field focal spots and to improve the reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was cut out of the main lobe image, and the position of this cut-out was varied within a 100×100 pixel region; the position giving the largest correlation coefficient between the side lobe image and the cut main lobe image was identified as the best matching point. Finally, the least squares method was used to fit the center of the small schlieren ball in the side lobe image, with an error of less than 1 pixel. The experimental results show that this method enables accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than through traditional reconstruction based on manual splicing, the method improves the efficiency of focal-spot reconstruction and thus offers better experimental precision. PMID:28207758
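
    The least-squares fit of the schlieren-ball center can be illustrated with a standard algebraic circle fit, in which edge points (x_i, y_i) are fitted to x² + y² + a·x + b·y + c = 0, a problem that is linear in (a, b, c); the data below are synthetic and the fit is only a sketch of the general technique, not the facility's processing chain.

```python
# A minimal sketch of a least-squares circle fit used to locate a circle center:
# edge points (x_i, y_i) are fitted to x^2 + y^2 + a*x + b*y + c = 0, which is
# linear in (a, b, c). Synthetic data only.
import numpy as np

def fit_circle(x, y):
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (a, bcoef, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -a / 2.0, -bcoef / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

# synthetic noisy edge points of a circle centered at (120.3, 80.7), radius 25
rng = np.random.default_rng(2)
t = rng.uniform(0.0, 2.0 * np.pi, 200)
x = 120.3 + 25.0 * np.cos(t) + rng.normal(0, 0.3, t.size)
y = 80.7 + 25.0 * np.sin(t) + rng.normal(0, 0.3, t.size)
print(fit_circle(x, y))   # center recovered to sub-pixel accuracy
```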

  18. Direct magnetic field estimation based on echo planar raw data.

    PubMed

    Testud, Frederik; Splitthoff, Daniel Nicolas; Speck, Oliver; Hennig, Jürgen; Zaitsev, Maxim

    2010-07-01

    Gradient recalled echo echo planar imaging is widely used in functional magnetic resonance imaging. The fast data acquisition is, however, very sensitive to field inhomogeneities which manifest themselves as artifacts in the images. Typically used correction methods have the common deficit that the data for the correction are acquired only once at the beginning of the experiment, assuming the field inhomogeneity distribution B(0) does not change over the course of the experiment. In this paper, methods to extract the magnetic field distribution from the acquired k-space data or from the reconstructed phase image of a gradient echo planar sequence are compared and extended. A common derivation for the presented approaches provides a solid theoretical basis, enables a fair comparison and demonstrates the equivalence of the k-space and the image phase based approaches. The image phase analysis is extended here to calculate the local gradient in the readout direction and improvements are introduced to the echo shift analysis, referred to here as "k-space filtering analysis." The described methods are compared to experimentally acquired B(0) maps in phantoms and in vivo. The k-space filtering analysis presented in this work demonstrated to be the most sensitive method to detect field inhomogeneities.

  19. Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.

    PubMed

    Xiao Yang; Jianjiang Feng; Jie Zhou

    2014-05-01

    The dictionary-based orientation field estimation approach has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Realizing that ridge orientations at different locations of fingerprints have different characteristics, we propose a localized dictionaries-based orientation field estimation algorithm, in which a noisy orientation patch at a given location, output by a local estimation approach, is replaced by a real orientation patch from the local dictionary at the same location. The precondition for applying localized dictionaries is that the pose of the latent fingerprint must be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about the fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show that the proposed method outperforms previous ones markedly.

  20. A component compensation method for magnetic interferential field

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Wan, Chengbiao; Pan, Mengchun; Liu, Zhongyan; Sun, Xiaoyong

    2017-04-01

    A new component searching with scalar restriction method (CSSRM) is proposed to compensate the magnetic interference field caused by the ferromagnetic material of the platform and to improve magnetometer measurement performance. In CSSRM, the objective function for parameter estimation minimizes the difference between the measured and reference values of the magnetic field (components and magnitude). Two scalar compensation methods are compared with CSSRM, and the simulation results indicate that CSSRM can estimate all interference parameters and the external magnetic field vector with high accuracy. The magnetic field magnitude and components compensated with CSSRM coincide with the true values very well. An experiment was carried out for a tri-axial fluxgate magnetometer mounted in a measurement system together with inertial sensors. After compensation, the error standard deviations of both the magnetic field components and the magnitude are reduced from more than a thousand nT to less than 20 nT. This suggests that CSSRM provides an effective way to improve the performance of magnetic interference field compensation.
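
    The CSSRM algorithm itself is not reproduced here; the following generic sketch illustrates component compensation with a linear interference model B_meas = A·B_true + b (soft- and hard-iron style terms), whose parameters are estimated by least squares from measurements taken at known reference field vectors (all values are synthetic).

```python
# A generic sketch of component compensation (not the CSSRM algorithm itself):
# assume a linear interference model B_meas = A @ B_true + b (soft- and hard-iron
# style terms) and estimate A and b by least squares from measurements taken at
# known reference field vectors.
import numpy as np

rng = np.random.default_rng(3)
A_true = np.eye(3) + 0.05 * rng.normal(size=(3, 3))     # soft-iron-like distortion
b_true = np.array([300.0, -150.0, 80.0])                # hard-iron-like offset, nT

# synthetic reference field vectors (e.g. from rotating the system in a known field)
B_ref = 50000.0 * rng.normal(size=(200, 3))
B_meas = B_ref @ A_true.T + b_true + rng.normal(0, 5.0, (200, 3))

# solve B_meas = [B_ref, 1] @ [A^T; b] column-by-column via least squares
X = np.column_stack([B_ref, np.ones(len(B_ref))])
params, *_ = np.linalg.lstsq(X, B_meas, rcond=None)
A_est, b_est = params[:3].T, params[3]

# compensation: invert the estimated model
B_comp = (B_meas - b_est) @ np.linalg.inv(A_est).T
print("residual std after compensation [nT]:", (B_comp - B_ref).std(axis=0))
```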

  1. Prediction of sound fields in acoustical cavities using the boundary element method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kipp, C. R.; Bernhard, R. J.

    1985-01-01

    A method was developed to predict sound fields in acoustical cavities. The method is based on the indirect boundary element method. An isoparametric quadratic boundary element is incorporated. Pressure, velocity and/or impedance boundary conditions may be applied to a cavity by using this method. The capability to include acoustic point sources within the cavity is implemented. The method is applied to the prediction of sound fields in spherical and rectangular cavities. All three boundary condition types are verified. Cases with a point source within the cavity domain are also studied. Numerically determined cavity pressure distributions and responses are presented. The numerical results correlate well with available analytical results.

  2. Method to optimize patch size based on spatial frequency response in image rendering of the light field

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Wang, Yanan; Zhu, Zhenhao; Su, Jinhui

    2018-05-01

    A focused plenoptic camera can effectively transform angular and spatial information to yield a refocused rendered image with high resolution. However, choosing a proper patch size poses a significant problem for the image-rendering algorithm. By using a spatial frequency response measurement, a method to obtain a suitable patch size is presented. By evaluating the spatial frequency response curves, the optimized patch size can be obtained quickly and easily. Moreover, the range of depth over which images can be rendered without artifacts can be estimated. Experiments show that the results of the image rendered based on frequency response measurement are in accordance with the theoretical calculation, which indicates that this is an effective way to determine the patch size. This study may provide support to light-field image rendering.

  3. Micro-resonator-based electric field sensors with long durations of sensitivity

    NASA Astrophysics Data System (ADS)

    Ali, Amir R.

    2017-05-01

    In this paper, we present a new fabrication method for the whispering gallery mode (WGM) microsphere-based electric field sensor that allows for longer periods of sensitivity. Recently, a WGM-based photonic electric field sensor was proposed using a coupled dielectric microsphere-beam. The external electric field imposes an electrostriction force on the dielectric beam, deflecting it. The beam, in turn, compresses the sphere, causing a shift in its WGM. As part of the fabrication process, the PDMS micro-beams and spheres are cured at high temperature (100°C) and subsequently poled by exposure to a strong external electric field (8 MV/m) for two hours. The poling process allows for the deposition of surface charges, thereby increasing the electrostriction effect. This methodology is called curing-then-poling (CTP). Although the sensors become sufficiently sensitive to the electric field, they start de-poling within 10 minutes after poling, hence losing sensitivity. In an attempt to mitigate this problem and to lock in the polarization for a longer period, we use an alternate methodology whereby the beam is poled and cured simultaneously (curing-while-poling, or CWP). The new fabrication method allows the polarization (and hence the sensitivity to electric fields) to be retained longer (1500 minutes). An analysis is carried out along with preliminary experiments. Results show that electric fields as small as 100 V/m can be detected with a 300 μm diameter sphere sensor a day after poling.

  4. Historic Methods for Capturing Magnetic Field Images

    ERIC Educational Resources Information Center

    Kwan, Alistair

    2016-01-01

    I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection…

  5. Trajectory control method of stratospheric airship based on the sliding mode control and prediction in wind field

    NASA Astrophysics Data System (ADS)

    Zhang, Jia-shi; Yang, Xi-xiang

    2017-11-01

    The stratospheric airship has large inertia, long time delays, and is strongly disturbed by the wind field, so trajectory control is very difficult. A lateral three-degree-of-freedom dynamic model that accounts for wind interference is built, the dynamic equations are linearized by small-perturbation theory, and a trajectory control method combining sliding mode control with prediction is proposed; the trajectory controller is designed, and a simulation analysis is carried out taking the HAA airship as the reference. Results show that the improved sliding mode control method with feed-forward not only solves the trajectory control problem of the airship in a wind field well, but also effectively improves the control accuracy of the traditional sliding mode control method. It provides a useful reference for dynamic modeling and trajectory control of stratospheric airships.
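
    A minimal sliding-mode control sketch for a double integrator with a bounded wind-like disturbance (a crude stand-in for the airship dynamics, with illustrative gains) is given below; it shows the sliding surface, the switching control law, and the resulting bounded tracking error.

```python
# A minimal sliding-mode control sketch for a double integrator with a bounded
# wind-like disturbance (illustrative stand-in for the airship dynamics):
#   x'' = u + d,  sliding surface s = e' + lam*e,  u = x_ref'' - lam*e' - k*sign(s)
import numpy as np

dt, lam, k = 0.01, 1.0, 2.0
t = np.arange(0.0, 20.0, dt)
x_ref = np.sin(0.2 * t)                          # reference trajectory
v_ref = 0.2 * np.cos(0.2 * t)
a_ref = -0.04 * np.sin(0.2 * t)

x, v = 1.0, 0.0                                  # initial state off the trajectory
err = []
for i in range(len(t)):
    d = 0.5 * np.sin(0.7 * t[i])                 # bounded disturbance, |d| < k
    e, edot = x - x_ref[i], v - v_ref[i]
    s = edot + lam * e
    u = a_ref[i] - lam * edot - k * np.sign(s)   # sliding-mode control law
    v += (u + d) * dt                            # Euler integration of x'' = u + d
    x += v * dt
    err.append(abs(e))
print("tracking error after transient:", max(err[-500:]))
```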

  6. [Optimization of measurement methods for a multi-frequency electromagnetic field from mobile phone base station using broadband EMF meter].

    PubMed

    Bieńkowski, Paweł; Cała, Paweł; Zubrzak, Bartłomiej

    2015-01-01

    This paper presents the characteristics of the mobile phone base station (BS) as an electromagnetic field (EMF) source. The most common system configurations and their construction are described. The parameters of the radiated EMF are discussed in the context of the access methods and other parameters of the radio transmission. Attention is also paid to the antennas used in this technology. The influence of the individual components of a multi-frequency EMF, most commonly found in BS surroundings, on the resultant EMF strength value indicated by popular broadband EMF meters was analyzed. Examples of the metrological characteristics of the most common EMF probes and 2 measurement scenarios for a multisystem base station, with and without microwave relays, are shown. The presented method for measuring the multi-frequency EMF using 2 broadband probes allows for significant minimization of the measurement uncertainty. Equations and formulas that can be used to calculate the actual EMF intensity from multi-frequency sources are shown; they have been verified under laboratory conditions on a standard setup as well as under real conditions in a survey of an existing base station with microwave relays. The presented measurement methodology for multi-frequency EMF from a BS with microwave relays was thus validated in both laboratory and real conditions, and it has been shown to be the optimal approach to the evaluation of EMF exposure in BS surroundings. Alternative approaches with much greater uncertainty (the precaution method) or a more complex measuring procedure (the source-exclusion method) are also presented.
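
    The paper's own equations are not reproduced here; the small numeric example below only illustrates the standard way multi-frequency readings are commonly combined: the resultant RMS field strength as the root-sum-square of per-band components, and a summed exposure quotient against per-band limits (all band values and limits are hypothetical placeholders).

```python
# A small numeric illustration of standard multi-frequency combination formulas
# (not necessarily the exact equations of the paper): the resultant RMS field is
# the root-sum-square of the per-frequency components, and exposure against
# frequency-dependent limits is often checked with a summed exposure quotient.
import math

# hypothetical per-band field strengths at the measurement point, V/m
bands = {"GSM900": 2.1, "GSM1800": 1.4, "UMTS2100": 0.9, "microwave_link": 0.3}
# hypothetical per-band limits, V/m (placeholders, not regulatory values)
limits = {"GSM900": 7.0, "GSM1800": 7.0, "UMTS2100": 7.0, "microwave_link": 7.0}

e_total = math.sqrt(sum(e**2 for e in bands.values()))
quotient = sum((e / limits[b])**2 for b, e in bands.items())

print(f"resultant field strength: {e_total:.2f} V/m")
print(f"summed exposure quotient: {quotient:.3f}  (compliant if <= 1)")
```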

  7. Evaluation of field methods for vertical high resolution aquifer characterization

    NASA Astrophysics Data System (ADS)

    Vienken, T.; Tinter, M.; Rogiers, B.; Leven, C.; Dietrich, P.

    2012-12-01

    The delineation and characterization of subsurface (hydro-)stratigraphic structures is one of the challenging tasks of hydrogeological site investigation. Knowledge of the spatial distribution of soil-specific properties and hydraulic conductivity (K) is a prerequisite for understanding flow and fluid transport processes. This is especially true for heterogeneous unconsolidated sedimentary deposits with a complex sedimentary architecture. One commonly used approach to investigate and characterize sediment heterogeneity is soil sampling and lab analysis, e.g. of the grain size distribution. Tests conducted on 108 samples show that calculation of K based on the grain size distribution is not suitable for high-resolution aquifer characterization of highly heterogeneous sediments, due to sampling effects and large differences in calculated K values between the applied formulas (Vienken & Dietrich 2011). Therefore, extensive tests were conducted at two test sites under different geological conditions to evaluate the performance of innovative Direct Push (DP) based approaches for the vertical high-resolution determination of K. Different DP based sensor probes for in-situ subsurface characterization based on electrical, hydraulic, and textural soil properties were used to obtain high-resolution vertical profiles. The applied DP based tools proved to be a suitable and efficient alternative to traditional approaches. Despite differences in resolution, all of the applied methods captured the main aquifer structure. Correlation of the DP based K estimates and proxies with DP based slug tests shows that it is possible to describe the aquifer hydraulic structure on a sub-meter scale by combining DP slug test data and continuous DP measurements. Even though the correlations are site specific and appropriate DP tools must be chosen, DP is a reliable and efficient alternative for characterizing even strongly heterogeneous sites with complex structured sedimentary aquifers (Vienken et
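
    As a small worked example of the grain-size-based K estimation mentioned above (and of how strongly it depends on the empirical coefficient), the sketch below applies Hazen's rule K [cm/s] ≈ C·d10² (d10 in mm) with a few commonly quoted coefficient values; the grain diameter is illustrative, not from the study sites.

```python
# A small worked example of grain-size-based K estimation with Hazen's empirical
# rule, K [cm/s] ~ C * d10^2 (d10 in mm), to illustrate how strongly the result
# depends on the empirical coefficient C (commonly quoted in the range ~0.4-1.2
# for clean sands). The coefficient values are illustrative, not site-specific.
d10_mm = 0.25                    # effective grain diameter of a fine-medium sand

for C in (0.4, 0.8, 1.2):
    K_cm_s = C * d10_mm**2
    K_m_d = K_cm_s * 864.0       # 1 cm/s = 864 m/d
    print(f"C = {C:.1f}:  K = {K_cm_s:.4f} cm/s  ({K_m_d:.1f} m/d)")
```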

  8. Qualitative Methods in Field Research: An Indonesian Experience in Community Based Practice.

    ERIC Educational Resources Information Center

    Lysack, Catherine L.; Krefting, Laura

    1994-01-01

    Cross-cultural evaluation of a community-based rehabilitation project in Indonesia used three methods: focus groups, questionnaires, and key informant interviews. A continuous cyclical approach to data collection and concern for cultural sensitivity increased the rigor of the research. (SK)

  9. 76 FR 28664 - Method 301-Field Validation of Pollutant Measurement Methods From Various Waste Media

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-18

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 63 [OAR-2004-0080, FRL-9306-8] RIN 2060-AF00 Method 301--Field Validation of Pollutant Measurement Methods From Various Waste Media AGENCY: Environmental Protection Agency (EPA). ACTION: Final rule. SUMMARY: This action amends EPA's Method 301, Field Validation...

  10. Repeatability of Brain Volume Measurements Made with the Atlas-based Method from T1-weighted Images Acquired Using a 0.4 Tesla Low Field MR Scanner.

    PubMed

    Goto, Masami; Suzuki, Makoto; Mizukami, Shinya; Abe, Osamu; Aoki, Shigeki; Miyati, Tosiaki; Fukuda, Michinari; Gomi, Tsutomu; Takeda, Tohoru

    2016-10-11

    An understanding of the repeatability of measured results is important for both the atlas-based and voxel-based morphometry (VBM) methods of magnetic resonance (MR) brain volumetry. However, many recent studies that have investigated the repeatability of brain volume measurements have been performed using static magnetic fields of 1-4 tesla, and no study has used a low-strength static magnetic field. The aim of this study was to investigate the repeatability of measured volumes using the atlas-based method and a low-strength static magnetic field (0.4 tesla). Ten healthy volunteers participated in this study. Using a 0.4 tesla magnetic resonance imaging (MRI) scanner and a quadrature head coil, three-dimensional T1-weighted images (3D-T1WIs) were obtained from each subject, twice on the same day. VBM8 software was used to construct segmented normalized images [gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images]. The regions-of-interest (ROIs) of GM, WM, CSF, hippocampus (HC), orbital gyrus (OG), and cerebellum posterior lobe (CPL) were generated using WFU PickAtlas. The percentage change was defined as [100 × (measured volume with first segmented image − mean volume in each subject)/(mean volume in each subject)]. The average percentage change was calculated as the percentage change in the 6 ROIs of the 10 subjects. The mean of the average percentage changes for each ROI was as follows: GM, 0.556%; WM, 0.324%; CSF, 0.573%; HC, 0.645%; OG, 1.74%; and CPL, 0.471%. The average percentage change was higher for the orbital gyrus than for the other ROIs. We consider that repeatability of the atlas-based method is similar between 0.4 and 1.5 tesla MR scanners. To our knowledge, this is the first report to show that the level of repeatability with a 0.4 tesla MR scanner is adequate for the estimation of brain volume change by the atlas-based method.
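
    A minimal numeric sketch of the percentage-change metric defined above; the formula follows the abstract, while the volume values and the Python helper name are made up for illustration.

      # Hypothetical example of the per-subject percentage-change metric:
      # 100 * (volume from first segmented image - per-subject mean volume) / (per-subject mean volume)

      def percentage_change(vol_first: float, vol_second: float) -> float:
          mean_vol = (vol_first + vol_second) / 2.0      # mean of the two same-day scans
          return 100.0 * (vol_first - mean_vol) / mean_vol

      # made-up gray-matter volumes (mL) for one subject, two scans on the same day
      print(round(percentage_change(652.1, 640.3), 3))   # -> 0.913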

  11. Geochemical field method for determination of nickel in plants

    USGS Publications Warehouse

    Reichen, L.E.

    1951-01-01

    The use of biogeochemical data in prospecting for nickel emphasizes the need for a simple, moderately accurate field method for the determination of nickel in plants. In order to follow leads provided by plants of unusual nickel content without loss of time, the plants should be analyzed and the results given to the field geologist promptly. The method reported in this paper was developed to meet this need. Speed is gained by elimination of the customary drying and controlled ashing; the fresh vegetation is ashed in an open dish over a gasoline stove. The ash is put into solution with hydrochloric acid and the solution buffered. A chromograph is used to make a confined spot with an aliquot of the ash solution on dimethylglyoxime reagent paper. As little as 0.025% nickel in plant ash can be determined. With a simple modification, 0.003% can be detected. Data are given comparing the results with those obtained by an accepted laboratory procedure; results by the field method are within 30% of the laboratory values. The field method for nickel in plants meets the requirements of biogeochemical prospecting with respect to accuracy, simplicity, speed, and ease of performance in the field. With experience, an analyst can make 30 determinations in an 8-hour work day in the field.

  12. Maximizing research study effectiveness in malaria elimination settings: a mixed methods study to capture the experiences of field-based staff.

    PubMed

    Canavati, Sara E; Quintero, Cesia E; Haller, Britt; Lek, Dysoley; Yok, Sovann; Richards, Jack S; Whittaker, Maxine Anne

    2017-09-11

    In a drug-resistant, malaria elimination setting like Western Cambodia, field research is essential for the development of novel anti-malarial regimens and the public health solutions necessary to monitor the spread of resistance and eliminate infection. Such field studies often face a variety of similar implementation challenges, but these are rarely captured in a systematic way or used to optimize future study designs that might overcome similar challenges. Field-based research staff often have extensive experience and can provide valuable insight regarding these issues, but their perspectives and experiences are rarely documented and seldom integrated into future research protocols. This mixed-methods analysis sought to gain an understanding of the daily challenges encountered by research field staff in the artemisinin-resistant, malaria elimination setting of Western Cambodia. In doing so, this study seeks to understand how the experiences and opinions of field staff can be captured, and used to inform future study designs. Twenty-two reports from six field-based malaria studies conducted in Western Cambodia were reviewed using content analysis to identify challenges to conducting the research. Informal Interviews, Focus Group Discussions and In-depth Interviews were also conducted among field research staff. Thematic analysis of the data was undertaken using Nvivo 9 ® software. Triangulation and critical case analysis was also used. There was a lack of formalized avenues through which field workers could report challenges experienced when conducting the malaria studies. Field research staff faced significant logistical barriers to participant recruitment and data collection, including a lack of available transportation to cover long distances, and the fact that mobile and migrant populations (MMPs) are usually excluded from studies because of challenges in follow-up. Cultural barriers to communication also hindered participant recruitment and created

  13. A simple field method to identify foot strike pattern during running.

    PubMed

    Giandolini, Marlène; Poupard, Thibaut; Gimenez, Philippe; Horvais, Nicolas; Millet, Guillaume Y; Morin, Jean-Benoît; Samozino, Pierre

    2014-05-07

    Identifying foot strike patterns in running is an important issue for sport clinicians, coaches and the footwear industry. Current methods allow the monitoring of either many steps in laboratory conditions or only a few steps in the field. Because measuring running biomechanics during actual practice is critical, our purpose is to validate a method aiming at identifying foot strike patterns during continuous field measurements. Based on heel and metatarsal accelerations, this method requires two uniaxial accelerometers. The time between heel and metatarsal acceleration peaks (THM) was compared to the foot strike angle in the sagittal plane (αfoot) obtained by 2D video analysis for various conditions of speed, slope, footwear, foot strike and state of fatigue. Acceleration and kinematic measurements were performed at 1000 Hz and 120 Hz, respectively, during 2-min treadmill running bouts. Significant correlations were observed between THM and αfoot for 14 out of 15 conditions. The overall correlation coefficient was r=0.916 (P<0.0001, n=288). The THM method is thus highly reliable for a wide range of speeds and slopes, for all types of foot strike except extreme forefoot strike during which the heel rarely or never strikes the ground, and for different footwear and states of fatigue. We proposed a classification based on THM (FFS < -5.49 ms …); the method is reliable for distinguishing rearfoot and non-rearfoot strikers in situ. Copyright © 2014 Elsevier Ltd. All rights reserved.
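
    A minimal sketch of the accelerometer-based THM computation described above, i.e. the time between the heel and metatarsal acceleration peaks for one step. The 1000 Hz sampling rate matches the abstract, while the synthetic single-step signals are placeholders.

      # Sketch: time between heel and metatarsal acceleration peaks (THM) for one step.
      import numpy as np

      fs = 1000.0                                   # accelerometer sampling rate (Hz)
      t = np.arange(0, 0.3, 1.0 / fs)               # 300 ms window around one foot strike

      # synthetic single-step traces: heel peak at 120 ms, metatarsal peak at 145 ms
      heel = np.exp(-((t - 0.120) / 0.005) ** 2)
      meta = np.exp(-((t - 0.145) / 0.005) ** 2)

      thm_ms = (t[np.argmax(meta)] - t[np.argmax(heel)]) * 1000.0
      print(f"THM = {thm_ms:.1f} ms")               # positive THM: heel peak occurs first (rearfoot-type strike)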

  14. A Method to Measure the Transverse Magnetic Field and Orient the Rotational Axis of Stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leone, Francesco; Scalia, Cesare; Gangi, Manuele

    Direct measurements of stellar magnetic fields are based on the splitting of spectral lines into polarized Zeeman components. With a few exceptions, Zeeman signatures are hidden in data noise, and a number of methods have been developed to measure the average, over the visible stellar disk, of longitudinal components of the magnetic field. At present, faint stars are only observable via low-resolution spectropolarimetry, which is a method based on the regression of the Stokes V signal against the first derivative of Stokes I. Here, we present an extension of this method to obtain a direct measurement of the transverse component of stellar magnetic fields by the regression of high-resolution Stokes Q and U as a function of the second derivative of Stokes I. We also show that it is possible to determine the orientation in the sky of the rotation axis of a star on the basis of the periodic variability of the transverse component due to its rotation. The method is applied to data obtained with the Catania Astrophysical Observatory Spectropolarimeter along the rotational period of the well-known magnetic star β CrB.
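
    A toy illustration of the regression idea described above, assuming fully synthetic profiles: the slope of Stokes Q (or U) against the second derivative of Stokes I is taken as proportional to the transverse field, and the physical calibration factors (Landé factor, wavelength dependence) are deliberately omitted.

      import numpy as np

      wl = np.linspace(-1.0, 1.0, 401)                       # wavelength offset (arbitrary units)
      stokes_i = 1.0 - 0.6 * np.exp(-wl**2 / 0.05)           # synthetic absorption line
      d2i = np.gradient(np.gradient(stokes_i, wl), wl)       # second derivative of Stokes I

      true_slope = 3.0e-3                                    # stand-in for the transverse-field scaling
      stokes_q = true_slope * d2i + np.random.normal(0, 1e-4, wl.size)

      slope, _ = np.polyfit(d2i, stokes_q, 1)                # least-squares regression of Q against d2I
      print(f"recovered slope ~ {slope:.2e} (proportional to the transverse field)")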

  15. [Modeling and analysis of volume conduction based on field-circuit coupling].

    PubMed

    Tang, Zhide; Liu, Hailong; Xie, Xiaohui; Chen, Xiufa; Hou, Deming

    2012-08-01

    Numerical simulations of volume conduction can be used to analyze the process of energy transfer and explore the effects of some physical factors on energy transfer efficiency. We analyzed the 3D quasi-static electric field by the finite element method, and developed a 3D coupled field-circuit model of volume conduction based on the coupling between the circuit and the electric field. The model includes a circuit simulation of the volume conduction to provide direct theoretical guidance for energy transfer optimization design. A field-circuit coupling model with circular cylinder electrodes was established on the platform of the software FEM3.5. Based on this, the effects of electrode cross-section area, electrode distance and circuit parameters on the performance of the volume conduction system were obtained, which provides a basis for the optimized design of energy transfer efficiency.

  16. A GPU-based calculation using the three-dimensional FDTD method for electromagnetic field analysis.

    PubMed

    Nagaoka, Tomoaki; Watanabe, Soichi

    2010-01-01

    Numerical simulations with the numerical human model using the finite-difference time domain (FDTD) method have recently been performed frequently in a number of fields of biomedical engineering. However, the FDTD calculation is very time-consuming. We focus, therefore, on general purpose programming on the graphics processing unit (GPGPU). The three-dimensional FDTD method was implemented on the GPU using Compute Unified Device Architecture (CUDA). In this study, we used the NVIDIA Tesla C1060 as a GPGPU board. The performance of the GPU is evaluated in comparison with the performance of a conventional CPU and a vector supercomputer. The results indicate that three-dimensional FDTD calculations using a GPU can significantly reduce the run time in comparison with a conventional CPU, even for a straightforward GPU implementation of the three-dimensional FDTD method, while the GPU/CPU speed ratio varies with the calculation domain and thread block size.

  17. Force-free magnetic fields - The magneto-frictional method

    NASA Technical Reports Server (NTRS)

    Yang, W. H.; Sturrock, P. A.; Antiochos, S. K.

    1986-01-01

    The problem under discussion is that of calculating magnetic field configurations in which the Lorentz force j x B is everywhere zero, subject to specified boundary conditions. We choose to represent the magnetic field in terms of Clebsch variables in the form B = grad alpha x grad beta. These variables are constant on any field line so that each field line is labeled by the corresponding values of alpha and beta. When the field is described in this way, the most appropriate choice of boundary conditions is to specify the values of alpha and beta on the bounding surface. We show that such field configurations may be calculated by a magneto-frictional method. We imagine that the field lines move through a stationary medium, and that each element of magnetic field is subject to a frictional force parallel to and opposing the velocity of the field line. This concept leads to an iteration procedure for modifying the variables alpha and beta, that tends asymptotically towards the force-free state. We apply the method first to a simple problem in two rectangular dimensions, and then to a problem of cylindrical symmetry that was previously discussed by Barnes and Sturrock (1972). In one important respect, our new results differ from the earlier results of Barnes and Sturrock, and we conclude that the earlier article was in error.

  18. Methods of measuring soil moisture in the field

    USGS Publications Warehouse

    Johnson, A.I.

    1962-01-01

    For centuries, the amount of moisture in the soil has been of interest in agriculture. The subject of soil moisture is also of great importance to the hydrologist, forester, and soils engineer. Much equipment and many methods have been developed to measure soil moisture under field conditions. This report discusses and evaluates the various methods for measurement of soil moisture and describes the equipment needed for each method. The advantages and disadvantages of each method are discussed and an extensive list of references is provided for those desiring to study the subject in more detail. The gravimetric method is concluded to be the most satisfactory method for most problems requiring one-time moisture-content data. The radioactive method is normally best for obtaining repeated measurements of soil moisture in place. It is concluded that all methods have some limitations and that the ideal method for measurement of soil moisture under field conditions has yet to be perfected.
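
    The gravimetric method singled out above reduces to two weighings and one division; the sketch below uses the standard dry-mass-basis definition of water content, with placeholder sample masses.

      # Gravimetric soil moisture: weigh the sample wet, oven-dry it, weigh it again.
      # The masses below are hypothetical; the formula is the standard dry-mass basis.

      def gravimetric_water_content(mass_wet_g: float, mass_dry_g: float) -> float:
          """Water content as a fraction of the dry soil mass."""
          return (mass_wet_g - mass_dry_g) / mass_dry_g

      print(gravimetric_water_content(152.4, 131.0))   # -> ~0.163, i.e. 16.3 % by dry mass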

  19. A combined vector potential-scalar potential method for FE computation of 3D magnetic fields in electrical devices with iron cores

    NASA Technical Reports Server (NTRS)

    Wang, R.; Demerdash, N. A.

    1991-01-01

    A method of combined use of magnetic vector potential based finite-element (FE) formulations and magnetic scalar potential (MSP) based formulations for computation of three-dimensional magnetostatic fields is introduced. In this method, the curl-component of the magnetic field intensity is computed by a reduced magnetic vector potential. This field intensity forms the basis of a forcing function for a global magnetic scalar potential solution over the entire volume of the region. This method allows one to include iron portions sandwiched in between conductors within partitioned current-carrying subregions. The method is most suited for large-scale global-type 3-D magnetostatic field computations in electrical devices, and in particular rotating electric machinery.

  20. An energy-based equilibrium contact angle boundary condition on jagged surfaces for phase-field methods.

    PubMed

    Frank, Florian; Liu, Chen; Scanziani, Alessio; Alpak, Faruk O; Riviere, Beatrice

    2018-08-01

    We consider an energy-based boundary condition to impose an equilibrium wetting angle for the Cahn-Hilliard-Navier-Stokes phase-field model on voxel-set-type computational domains. These domains typically stem from μCT (micro computed tomography) imaging of porous rock and approximate a smooth (on the μm scale) domain with a certain resolution. Planar surfaces that are perpendicular to the main axes are naturally approximated by a layer of voxels. However, planar surfaces in any other direction and curved surfaces yield a jagged/topologically rough surface approximation by voxels. For the standard Cahn-Hilliard formulation, where the contact angle between the diffuse interface and the domain boundary (fluid-solid interface/wall) is 90°, jagged surfaces have no impact on the contact angle. However, a prescribed contact angle smaller or larger than 90° on jagged voxel surfaces is amplified. As a remedy, we propose the introduction of surface energy correction factors for each fluid-solid voxel face that counterbalance the difference of the voxel-set surface area with the underlying smooth one. The discretization of the model equations is performed with the discontinuous Galerkin method. However, the presented semi-analytical approach of correcting the surface energy is equally applicable to other direct numerical methods such as finite elements, finite volumes, or finite differences, since the correction factors appear in the strong formulation of the model. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Method for confining the magnetic field of the cross-tail current inside the magnetopause

    NASA Technical Reports Server (NTRS)

    Sotirelis, T.; Tsyganenko, N. A.; Stern, D. P.

    1994-01-01

    A method is presented for analytically representing the magnetic field due to the cross-tail current and its closure on the magnetopause. It is an extension of a method used by Tsyganenko (1989b) to confine the dipole field inside an ellipsoidal magnetopause using a scalar potential. Given a model of the cross-tail current, the implied net magnetic field is obtained by adding to the cross-tail current field a potential field B = - del gamma, which makes all field lines divide into two disjoint groups, separated by the magnetopause (i.e., the combined field is made to have zero normal component at the magnetopause). The magnetopause is assumed to be an ellipsoid of revolution (a prolate spheroid) as an approximation to observations (Sibeck et al., 1991). This assumption permits the potential gamma to be expressed in spheroidal coordinates, expanded in spheroidal harmonics, and its terms evaluated by performing inversion integrals. Finally, the field outside the magnetopause is replaced by zero, resulting in a consistent current closure along the magnetopause. This procedure can also be used to confine the modeled field of any other interior magnetic source, though the model current must always flow in closed circuits. The method is demonstrated on the T87 cross-tail current; examples illustrate the effect of changing the size and shape of the prescribed magnetopause, and a comparison is made to an independent numerical scheme based on the Biot-Savart equation.

  2. Prediction of sonic boom from experimental near-field overpressure data. Volume 1: Method and results

    NASA Technical Reports Server (NTRS)

    Glatt, C. R.; Hague, D. S.; Reiners, S. J.

    1975-01-01

    A computerized procedure for predicting sonic boom from experimental near-field overpressure data has been developed. The procedure extrapolates near-field pressure signatures for a specified flight condition to the ground by the Thomas method. Near-field pressure signatures are interpolated from a data base of experimental pressure signatures. The program is an independently operated ODIN (Optimal Design Integration) program which obtains flight path information from other ODIN programs or from input.

  3. Image restoration method based on Hilbert transform for full-field optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Na, Jihoon; Choi, Woo June; Choi, Eun Seo; Ryu, Seon Young; Lee, Byeong Ha

    2008-01-01

    A full-field optical coherence tomography (FF-OCT) system utilizing a simple but novel image restoration method suitable for a high-speed system is demonstrated. An en-face image is retrieved from only two phase-shifted interference fringe images by using the Hilbert transform. With a thermal light source, a high-resolution FF-OCT system having axial and transverse resolutions of 1 and 2.2 μm, respectively, was implemented. The feasibility of the proposed scheme is confirmed by presenting the obtained en-face images of biological samples such as a piece of garlic and a gold beetle. The proposed method is robust to errors in the amount of the phase shift and does not leave residual fringes. The use of just two interference images and the strong immunity to phase errors provide great advantages in the imaging speed and the system design flexibility of a high-speed, high-resolution FF-OCT system.
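
    A hedged sketch of envelope extraction from two phase-shifted fringe images in the spirit of the Hilbert-transform approach above. The π phase shift between the two frames, the fringe carrier along one image axis, and the simple subtraction used to remove the DC term are assumptions of this sketch, not details taken from the paper.

      import numpy as np
      from scipy.signal import hilbert

      ny, nx = 128, 128
      x = np.arange(nx)
      envelope_true = np.random.rand(ny, nx)                 # stand-in for the sample reflectivity map
      carrier = 2 * np.pi * x / 16.0                         # assumed fringe carrier along the x axis

      img1 = 1.0 + envelope_true * np.cos(carrier)           # fringe image, phase 0
      img2 = 1.0 + envelope_true * np.cos(carrier + np.pi)   # fringe image, phase pi

      ac = 0.5 * (img1 - img2)                               # DC removed; pure fringe term remains
      envelope = np.abs(hilbert(ac, axis=1))                 # analytic signal magnitude -> en-face amplitude
      print(envelope.shape)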

  4. Novel method for detecting weak magnetic fields at low frequencies

    NASA Astrophysics Data System (ADS)

    González-Martínez, S.; Castillo-Torres, J.; Mendoza-Santos, J. C.; Zamorano-Ulloa, R.

    2005-06-01

    A low-level-intensity magnetic field detection system has been designed and developed based on an amplification-selection process of signals. This configuration is also very sensitive to magnetic field changes produced by harmonic-like electrical currents transported in finite-length wires. Experimental and theoretical results for the detection of magnetic fields as low as 10⁻⁹ T at 120 Hz are also presented, with an accuracy of around 13%. The assembled equipment is designed to measure the electromotive force induced in a free-magnetic-core coil in order to recover signals that are previously selected, even though their intensities are much lower than the ambient electromagnetic radiation. The prototype has a signal-to-noise ratio of 60 dB. This system also has the advantage of being usable as a portable measurement unit. The concept and prototype may be applied, for example, as a nondestructive method to analyze corrosion formation in metallic oil pipelines that are subjected to cathodic protection.
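
    An order-of-magnitude illustration of why narrow-band amplification is needed at the field level quoted above: Faraday's law gives the peak EMF induced in an air-core pickup coil by a sinusoidal field. The turn count and coil area below are arbitrary assumptions, not parameters of the instrument described in the paper.

      import math

      B0 = 1e-9          # peak flux density (T), as quoted above
      f = 120.0          # frequency (Hz)
      N = 10_000         # number of turns (assumed)
      A = 1e-3           # coil cross-section area (m^2), assumed

      emf_peak = N * A * 2 * math.pi * f * B0    # Faraday's law for a sinusoidal field
      print(f"peak EMF ~ {emf_peak:.2e} V")      # ~7.5e-6 V, hence the need for selective amplification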

  5. Determining the tensile response of materials at high temperature using DIC and the Virtual Fields Method

    NASA Astrophysics Data System (ADS)

    Valeri, Guillermo; Koohbor, Behrad; Kidane, Addis; Sutton, Michael A.

    2017-04-01

    An experimental approach based on Digital Image Correlation (DIC) is successfully applied to predict the uniaxial stress-strain response of 304 stainless steel specimens subjected to nominally uniform temperatures ranging from room temperature to 900 °C. A portable induction heating device equipped with custom made water-cooled copper coils is used to heat the specimen. The induction heater is used in conjunction with a conventional tensile frame to enable high temperature tension experiments. A stereovision camera system equipped with appropriate band pass filters is employed to facilitate the study of full-field deformation response of the material at elevated temperatures. Using the temperature and load histories along with the full-field strain data, a Virtual Fields Method (VFM) based approach is implemented to identify constitutive parameters governing the plastic deformation of the material at high temperature conditions. Results from these experiments confirm that the proposed method can be used to measure the full-field deformation of materials subjected to thermo-mechanical loading.

  6. Topology optimization based design of unilateral NMR for generating a remote homogeneous field.

    PubMed

    Wang, Qi; Gao, Renjing; Liu, Shutian

    2017-06-01

    This paper presents a topology optimization based design method for the design of unilateral nuclear magnetic resonance (NMR), with which a remote homogeneous field can be obtained. The topology optimization is actualized by seeking out the optimal layout of ferromagnetic materials within a given design domain. The design objective is defined as generating a sensitive magnetic field with optimal homogeneity and maximal field strength within a required region of interest (ROI). The sensitivity of the objective function with respect to the design variables is derived and the method for solving the optimization problem is presented. A design example is provided to illustrate the utility of the design method, specifically the ability to improve the quality of the magnetic field over the required ROI by determining the optimal structural topology for the ferromagnetic poles. In both simulations and experiments, the sensitive region of the magnetic field is about 2 times larger than that of the reference design, validating the feasibility of the design method. Copyright © 2017. Published by Elsevier Inc.

  7. Integral imaging based light field display with enhanced viewing resolution using holographic diffuser

    NASA Astrophysics Data System (ADS)

    Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun

    2017-11-01

    An integral imaging based light field display method is proposed by use of holographic diffuser, and enhanced viewing resolution is gained over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics, which interpolates the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved and independent of the limitation imposed by Nyquist sampling frequency. An integral imaging system with low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using holographic diffuser are demonstrated, verifying the feasibility of the method.

  8. Perspectives on the simulation of protein–surface interactions using empirical force field methods

    PubMed Central

    Latour, Robert A.

    2014-01-01

    Protein–surface interactions are of fundamental importance for a broad range of applications in the fields of biomaterials and biotechnology. Present experimental methods are limited in their ability to provide a comprehensive depiction of these interactions at the atomistic level. In contrast, empirical force field based simulation methods inherently provide the ability to predict and visualize protein–surface interactions with full atomistic detail. These methods, however, must be carefully developed, validated, and properly applied before confidence can be placed in results from the simulations. In this perspectives paper, I provide an overview of the critical aspects that I consider being of greatest importance for the development of these methods, with a focus on the research that my combined experimental and molecular simulation groups have conducted over the past decade to address these issues. These critical issues include the tuning of interfacial force field parameters to accurately represent the thermodynamics of interfacial behavior, adequate sampling of these types of complex molecular systems to generate results that can be comparable with experimental data, and the generation of experimental data that can be used for simulation results evaluation and validation. PMID:25028242

  9. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important task in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reform this problem into an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in planning paths. In the planning space, the calculated path is shorter and smoother than that obtained using the traditional APF method. In addition, the improved method can solve the dead point problem effectively.
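
    A minimal sketch of the traditional APF building block that the abstract above improves upon: an attractive potential toward the goal plus a repulsive potential near obstacles, with the vehicle stepping along the negative gradient. The gains, cutoff radius, obstacle layout, and step rule are arbitrary assumptions, not the paper's augmented optimal-control formulation.

      import numpy as np

      goal = np.array([10.0, 10.0])
      obstacles = [np.array([5.0, 6.5])]
      k_att, k_rep, d0 = 1.0, 50.0, 2.0             # gains and repulsion cutoff radius (assumed)

      def apf_force(p):
          force = -k_att * (p - goal)               # attractive term: pulls toward the goal
          for obs in obstacles:
              d = np.linalg.norm(p - obs)
              if d < d0:                            # repulsive term acts only inside the cutoff radius
                  force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * (p - obs)
          return force

      p = np.array([0.0, 0.0])
      for _ in range(400):                          # follow the negative potential gradient
          f = apf_force(p)
          p = p + 0.05 * f / (np.linalg.norm(f) + 1e-9)
      print(p)                                      # ends near the goal for this layout; other layouts may need tuning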

  10. Path planning in uncertain flow fields using ensemble method

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.

    2016-10-01

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.

  11. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    NASA Astrophysics Data System (ADS)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric detail can be observed in high resolution remote sensing images, and ground objects display rich texture, structure, shape and hierarchical semantic characteristics, with individual landscape elements represented by small groups of pixels. In recent years, object-based remote sensing analysis methodology has become widely accepted and applied in high resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method is made up of four blocks: (1) the hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmentation regions; (3) the relations between the hierarchical ground-object semantics and the over-segmentation regions are defined within the conditional random fields framework; (4) the hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution remote sensing image data (GeoEye) is used to verify the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies that it is suitable for the classification of high resolution remote sensing images.

  12. Method of depositing multi-layer carbon-based coatings for field emission

    DOEpatents

    Sullivan, John P.; Friedmann, Thomas A.

    1999-01-01

    A novel field emitter device for cold cathode field emission applications, comprising a multi-layer resistive carbon film. The multi-layered film of the present invention is comprised of at least two layers of a resistive carbon material, preferably amorphous-tetrahedrally coordinated carbon, such that the resistivities of adjacent layers differ. For electron emission from the surface, the preferred structure comprises a top layer having a lower resistivity than the bottom layer. For edge emitting structures, the preferred structure of the film comprises a plurality of carbon layers, wherein adjacent layers have different resistivities. Through selection of deposition conditions, including the energy of the depositing carbon species, the presence or absence of certain elements such as H, N, inert gases or boron, carbon layers having desired resistivities can be produced. Field emitters made according to the present invention display improved electron emission characteristics in comparison to conventional field emitter materials.

  13. Evolutionary Based Techniques for Fault Tolerant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Larchev, Gregory V.; Lohn, Jason D.

    2006-01-01

    The use of SRAM-based Field Programmable Gate Arrays (FPGAs) is becoming more and more prevalent in space applications. Commercial-grade FPGAs are potentially susceptible to permanently debilitating Single-Event Latchups (SELs). Repair methods based on Evolutionary Algorithms may be applied to FPGA circuits to enable successful fault recovery. This paper presents the experimental results of applying such methods to repair four commonly used circuits (quadrature decoder, 3-by-3-bit multiplier, 3-by-3-bit adder, 440-7 decoder) into which a number of simulated faults have been introduced. The results suggest that evolutionary repair techniques can improve the process of fault recovery when used instead of or as a supplement to Triple Modular Redundancy (TMR), which is currently the predominant method for mitigating FPGA faults.

  14. Method of depositing multi-layer carbon-based coatings for field emission

    DOEpatents

    Sullivan, J.P.; Friedmann, T.A.

    1999-08-10

    A novel field emitter device is disclosed for cold cathode field emission applications, comprising a multi-layer resistive carbon film. The multi-layered film of the present invention is comprised of at least two layers of a resistive carbon material, preferably amorphous-tetrahedrally coordinated carbon, such that the resistivities of adjacent layers differ. For electron emission from the surface, the preferred structure comprises a top layer having a lower resistivity than the bottom layer. For edge emitting structures, the preferred structure of the film comprises a plurality of carbon layers, wherein adjacent layers have different resistivities. Through selection of deposition conditions, including the energy of the depositing carbon species, the presence or absence of certain elements such as H, N, inert gases or boron, carbon layers having desired resistivities can be produced. Field emitters made according to the present invention display improved electron emission characteristics in comparison to conventional field emitter materials. 8 figs.

  15. Field-Based Teacher Education in Literacy: Preparing Teachers in Real Classroom Contexts

    ERIC Educational Resources Information Center

    DeGraff, Tricia L.; Schmidt, Cynthia M.; Waddell, Jennifer H.

    2015-01-01

    For the past two decades, scholars have advocated for reforms in teacher education that emphasize relevant connections between theory and practice in university coursework and focus on clinical experiences. This paper is based on our experiences in designing and implementing an integrated literacy methods course in a field-based teacher education…

  16. A method to describe inelastic gamma field distribution in neutron gamma density logging.

    PubMed

    Zhang, Feng; Zhang, Quanying; Liu, Juntao; Wang, Xinguang; Wu, He; Jia, Wenbao; Ti, Yongzhou; Qiu, Fei; Zhang, Xiaoyang

    2017-11-01

    Pulsed neutron gamma density logging (NGD) is of great significance for radioprotection and density measurement in LWD; however, current methods have difficulty with quantitative calculation and single-factor analysis of the inelastic gamma field distribution. In order to clarify the NGD mechanism, a new method is developed to describe the inelastic gamma field distribution. Based on fast-neutron scattering and gamma attenuation, the inelastic gamma field distribution is characterized by the inelastic scattering cross section, the fast-neutron scattering free path, the formation density and other parameters, and the contribution of formation parameters to the field distribution is quantitatively analyzed. The results show that the contribution of density attenuation is opposite to that of the inelastic scattering cross section and the fast-neutron scattering free path. As the detector spacing increases, density attenuation gradually plays a dominant role in the gamma field distribution, which means that a large detector spacing is more favorable for the density measurement. In addition, the relationship between density sensitivity and detector spacing was studied on the basis of this gamma field distribution, and the spacings of the near and far gamma-ray detectors were determined accordingly. The research provides theoretical guidance for tool parameter design and density determination in the pulsed neutron gamma density logging technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Hand-Held Refractometer-Based Measurement and Excess Permittivity Analysis Method for Detection of Diesel Oils Adulterated by Kerosene in Field Conditions

    PubMed Central

    Peiponen, Kai-Erik

    2018-01-01

    Adulteration of fuels is a major problem, especially in developing and third world countries. One such case is the adulteration of diesel oil by kerosene. This problem contributes to air pollution, which leads to other far-reaching adverse effects, such as climate change. The objective of this study was to develop a relatively easy measurement method based on an inexpensive, handheld Abbe refractometer for the detection of adulteration and estimation of the ascending order of the amount of kerosene present in adulterated samples in field conditions. We achieved this by increasing the volume of pure diesel sample in the adulterated diesel oil, and measuring the trend of refractive index change, and next, exploiting the true and ideal permittivities of the binary mixture. The permittivity can be obtained with the aid of the measured refractive index of a liquid. Due to the molecular interactions, the true and ideal permittivities of diesel–kerosene binary liquid mixture have a mismatch which can be used to screen for adulterated diesel oils. The difference between the true and the ideal permittivity is the so-called excess permittivity. We first investigated a training set of diesel oils in laboratory in Finland, using the accurate table model Abbe refractometer and depicting the behavior of the excess permittivity of the mixture of diesel oil and kerosene. Then, we measured same samples in the laboratory using a handheld refractometer. Finally, preliminary field measurements using the handheld device were performed in Tanzania to assess the accuracy and possibility of applying the suggested method in field conditions. We herein show that it is not only possible to detect even relatively low adulteration levels of diesel in kerosene—namely, 5%, 10%, and 15%—but also it is possible to monitor the ascending order of adulteration for different adulterated diesel samples. We propose that the method of increasing the volume of an unknown (suspected) diesel oil

  18. Hand-Held Refractometer-Based Measurement and Excess Permittivity Analysis Method for Detection of Diesel Oils Adulterated by Kerosene in Field Conditions.

    PubMed

    Kanyathare, Boniphace; Peiponen, Kai-Erik

    2018-05-14

    Adulteration of fuels is a major problem, especially in developing and third world countries. One such case is the adulteration of diesel oil by kerosene. This problem contributes to air pollution, which leads to other far-reaching adverse effects, such as climate change. The objective of this study was to develop a relatively easy measurement method based on an inexpensive, handheld Abbe refractometer for the detection of adulteration and estimation of the ascending order of the amount of kerosene present in adulterated samples in field conditions. We achieved this by increasing the volume of pure diesel sample in the adulterated diesel oil, and measuring the trend of refractive index change, and next, exploiting the true and ideal permittivities of the binary mixture. The permittivity can be obtained with the aid of the measured refractive index of a liquid. Due to the molecular interactions, the true and ideal permittivities of diesel-kerosene binary liquid mixture have a mismatch which can be used to screen for adulterated diesel oils. The difference between the true and the ideal permittivity is the so-called excess permittivity. We first investigated a training set of diesel oils in laboratory in Finland, using the accurate table model Abbe refractometer and depicting the behavior of the excess permittivity of the mixture of diesel oil and kerosene. Then, we measured same samples in the laboratory using a handheld refractometer. Finally, preliminary field measurements using the handheld device were performed in Tanzania to assess the accuracy and possibility of applying the suggested method in field conditions. We herein show that it is not only possible to detect even relatively low adulteration levels of diesel in kerosene-namely, 5%, 10%, and 15%-but also it is possible to monitor the ascending order of adulteration for different adulterated diesel samples. We propose that the method of increasing the volume of an unknown (suspected) diesel oil sample by
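
    A hedged numeric sketch of the excess-permittivity screen described in the two records above. It assumes that the optical permittivity of a non-absorbing liquid is the square of the measured refractive index and that the "ideal" mixture permittivity is the volume-fraction-weighted linear mix, which is one common convention; the refractive-index values are placeholders, not measured values from the paper.

      def permittivity(n: float) -> float:
          return n * n                                 # non-absorbing liquid: eps ~ n^2

      def excess_permittivity(n_mix: float, n_diesel: float, n_kerosene: float,
                              frac_diesel: float) -> float:
          eps_ideal = (frac_diesel * permittivity(n_diesel)
                       + (1.0 - frac_diesel) * permittivity(n_kerosene))
          return permittivity(n_mix) - eps_ideal       # nonzero value flags the adulteration signature

      # placeholder refractive indices for pure diesel, pure kerosene, and a 90/10 blend
      print(excess_permittivity(n_mix=1.4585, n_diesel=1.4600, n_kerosene=1.4430, frac_diesel=0.90))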

  19. New Method for Solving Inductive Electric Fields in the Ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.

    2005-12-01

    We present a new method for calculating inductive electric fields in the ionosphere. It is well established that on large scales the ionospheric electric field is a potential field. This is understandable, since the temporal variations of large-scale current systems are generally quite slow, on timescales of several minutes, so inductive effects should be small. However, studies of Alfven wave reflection have indicated that in some situations inductive phenomena could well play a significant role in the reflection process, and thus modify the nature of ionosphere-magnetosphere coupling. The inputs to our calculation method are the time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfven wave reflection from a uniformly conducting ionosphere.

  20. Changes in Teaching Efficacy during a Professional Development School-Based Science Methods Course

    ERIC Educational Resources Information Center

    Swars, Susan L.; Dooley, Caitlin McMunn

    2010-01-01

    This mixed methods study offers a theoretically grounded description of a field-based science methods course within a Professional Development School (PDS) model (i.e., PDS-based course). The preservice teachers' (n = 21) experiences within the PDS-based course prompted significant changes in their personal teaching efficacy, with the…

  1. Determination of the maximum-depth to potential field sources by a maximum structural index method

    NASA Astrophysics Data System (ADS)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources can be a significant help in data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using, for example, the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  2. Inverse solutions for electrical impedance tomography based on conjugate gradients methods

    NASA Astrophysics Data System (ADS)

    Wang, M.

    2002-01-01

    A multistep inverse solution for two-dimensional electric field distribution is developed to deal with the nonlinear inverse problem of electric field distribution in relation to its boundary condition and the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method where the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
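
    A generic sketch of the truncated conjugate-gradient step described above: the linearized sensitivity equation is solved in a least-squares sense, and stopping after a small, fixed number of CG iterations limits the error amplified by the ill-conditioned sensitivity matrix. The matrix and data vector below are random placeholders, not a real EIT sensitivity matrix or measurement set.

      import numpy as np

      rng = np.random.default_rng(0)
      S = rng.standard_normal((104, 316))          # sensitivity matrix (measurements x pixels), placeholder
      dz = rng.standard_normal(104)                # normalized change in mutual impedance, placeholder

      def truncated_cg(S, dz, n_iter=10):
          A = S.T @ S                              # normal equations (symmetric positive semidefinite)
          b = S.T @ dz
          x = np.zeros(S.shape[1])
          r = b - A @ x
          p = r.copy()
          for _ in range(n_iter):                  # early stopping acts as the regularization
              Ap = A @ p
              alpha = (r @ r) / (p @ Ap)
              x += alpha * p
              r_new = r - alpha * Ap
              beta = (r_new @ r_new) / (r @ r)
              p = r_new + beta * p
              r = r_new
          return x

      dsigma = truncated_cg(S, dz, n_iter=10)      # conductivity update for one inverse step
      print(dsigma.shape)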

  3. Non-invasive continuous imaging of drug release from soy-based skin equivalent using wide-field interferometry

    NASA Astrophysics Data System (ADS)

    Gabai, Haniel; Baranes-Zeevi, Maya; Zilberman, Meital; Shaked, Natan T.

    2013-04-01

    We propose an off-axis interferometric imaging system as a simple and unique modality for continuous, non-contact and non-invasive wide-field imaging and characterization of drug release from its polymeric device used in biomedicine. In contrast to the current gold-standard methods in this field, usually based on chromatographic and spectroscopic techniques, our method requires no user intervention during the experiment, and only one test-tube is prepared. We experimentally demonstrate imaging and characterization of drug release from soy-based protein matrix, used as skin equivalent for wound dressing with controlled anesthetic, Bupivacaine drug release. Our preliminary results demonstrate the high potential of our method as a simple and low-cost modality for wide-field imaging and characterization of drug release from drug delivery devices.

  4. A Novel Weighted Kernel PCA-Based Method for Optimization and Uncertainty Quantification

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Chen, X.; Tong, C. H.

    2016-12-01

    It has been demonstrated that machine learning methods can be successfully applied to uncertainty quantification for geophysical systems through the use of the adjoint method coupled with kernel PCA-based optimization. In addition, it has been shown through weighted linear PCA how optimization with respect to both observation weights and feature space control variables can accelerate convergence of such methods. Linear machine learning methods, however, are inherently limited in their ability to represent features of non-Gaussian stochastic random fields, as they are based on only the first two statistical moments of the original data. Nonlinear spatial relationships and multipoint statistics leading to the tortuosity characteristic of channelized media, for example, are captured only to a limited extent by linear PCA. With the aim of coupling the kernel-based and weighted methods discussed, we present a novel mathematical formulation of kernel PCA, Weighted Kernel Principal Component Analysis (WKPCA), that both captures nonlinear relationships and incorporates the attribution of significance levels to different realizations of the stochastic random field of interest. We also demonstrate how new instantiations retaining defining characteristics of the random field can be generated using Bayesian methods. In particular, we present a novel WKPCA-based optimization method that minimizes a given objective function with respect to both feature space random variables and observation weights through which optimal snapshot significance levels and optimal features are learned. We showcase how WKPCA can be applied to nonlinear optimal control problems involving channelized media, and in particular demonstrate an application of the method to learning the spatial distribution of material parameter values in the context of linear elasticity, and discuss further extensions of the method to stochastic inversion.
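
    A generic sketch of a weighted kernel PCA eigenproblem in the spirit of the WKPCA idea above, but not the authors' exact formulation: per-snapshot weights are folded symmetrically into an RBF kernel matrix before centering and eigendecomposition. The data, weights, and kernel bandwidth are placeholders.

      import numpy as np

      rng = np.random.default_rng(1)
      snapshots = rng.standard_normal((50, 200))           # 50 realizations of a random field, placeholder
      weights = rng.uniform(0.5, 1.5, 50)                  # significance levels per realization, placeholder
      weights /= weights.sum()

      # RBF kernel between snapshots
      sq_dists = np.sum((snapshots[:, None, :] - snapshots[None, :, :]) ** 2, axis=-1)
      K = np.exp(-sq_dists / (2 * np.median(sq_dists)))

      # fold weights in symmetrically, then center and eigendecompose
      D = np.diag(np.sqrt(weights))
      Kw = D @ K @ D
      n = Kw.shape[0]
      H = np.eye(n) - np.ones((n, n)) / n                  # centering matrix
      Kc = H @ Kw @ H
      eigvals, eigvecs = np.linalg.eigh(Kc)
      print(eigvals[::-1][:5])                             # leading weighted kernel principal values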

  5. Identification of rice field using Multi-Temporal NDVI and PCA method on Landsat 8 (Case Study: Demak, Central Java)

    NASA Astrophysics Data System (ADS)

    Sukmono, Abdi; Ardiansyah

    2017-01-01

    Paddy is one of the most important agricultural crops in Indonesia. Indonesia's per capita consumption of rice in 2013 amounted to 78.82 kg/capita/year. In 2017, the Indonesian government has the mission of making Indonesia self-sufficient in food. The Indonesian government should therefore be able to ensure the stable fulfillment of basic food needs, which requires, among other things, rice field mapping. Accurate mapping of rice fields can use a quick and easy method such as remote sensing. In this study, multi-temporal Landsat 8 images are used to identify rice fields based on rice planting time, combined with other methods to extract information from the imagery: the Normalized Difference Vegetation Index (NDVI), Principal Component Analysis (PCA) and band combination. Image classification is performed using nine classes: water, settlements, mangrove, gardens, fields, rice fields 1st, rice fields 2nd, rice fields 3rd and rice fields 4th. The results showed that the rice field area obtained from the PCA method was 50,009 ha, from the band combination 51,016 ha, and from the NDVI method 45,893 ha. The accuracy levels obtained were 84.848% for the PCA method, 81.818% for the band combination, and 75.758% for the NDVI method.
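
    The NDVI used above, written out for Landsat 8 OLI reflectance bands (band 4 = red, band 5 = near infrared); the two small arrays are placeholder reflectance values, not data from the study.

      import numpy as np

      red = np.array([[0.08, 0.12], [0.10, 0.05]])    # band 4 reflectance (placeholder)
      nir = np.array([[0.35, 0.20], [0.30, 0.45]])    # band 5 reflectance (placeholder)

      ndvi = (nir - red) / (nir + red + 1e-12)        # small epsilon avoids division by zero
      print(ndvi)                                     # dense vegetation such as paddy gives values closer to 1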

  6. On-orbit assembly of a team of flexible spacecraft using potential field based method

    NASA Astrophysics Data System (ADS)

    Chen, Ti; Wen, Hao; Hu, Haiyan; Jin, Dongping

    2017-04-01

    In this paper, a novel control strategy is developed based on the artificial potential field for the on-orbit autonomous assembly of four flexible spacecraft without inter-member collision. Each flexible spacecraft is simplified as a hub-beam model with truncated beam modes in the floating frame of reference, and the communication graph among the four spacecraft is assumed to be a ring topology. The four spacecraft are driven to a pre-assembly configuration first and then to the assembly configuration. In order to design the artificial potential field for the first step, each spacecraft is outlined by an ellipse and a circular virtual leader is introduced. The potential field mainly depends on the attitude error between the flexible spacecraft and its neighbor, the radial Euclidean distance between the ellipse and the circle, and the classical Euclidean distance between the centers of the ellipse and the circle. It can be demonstrated that there are no local minima for the potential function and that the global minimum is zero. If the function is equal to zero, the solution is not a single state but a set, and all states in the set correspond to the desired configurations. The Lyapunov analysis guarantees that the four spacecraft asymptotically converge to the target configuration. Moreover, another potential field is also included to avoid inter-member collision. In the control design of the second step, only a small modification is made to the controller from the first step. Finally, the successful application of the proposed control law to the assembly mission is verified by two case studies.

  7. Calorimetric method of ac loss measurement in a rotating magnetic field.

    PubMed

    Ghoshal, P K; Coombs, T A; Campbell, A M

    2010-07-01

    A method is described for calorimetric ac-loss measurements of high-Tc superconductors (HTS) at 80 K. It is based on a technique used at 4.2 K for conventional superconducting wires that allows an easy loss measurement in parallel or perpendicular external field orientation. This paper focuses on the ac loss measurement setup and its calibration in a rotating magnetic field. The experimental setup demonstrates loss measurement using a temperature-rise method under the influence of a rotating magnetic field: the slight temperature increase of the sample in an ac field is used as a measure of the losses. The aim is to simulate the loss in rotating machines using HTS. This is a unique technique for measuring the total ac loss in HTS at power frequencies. The sample is mounted onto a cold finger extended from a liquid nitrogen heat exchanger (HEX). The thermal insulation between the HEX and the sample is provided by a sample holder of low thermal conductivity and low eddy-current heating, placed in a vacuum vessel. A temperature sensor and a noninductive heater have been incorporated in the sample holder, allowing a rapid sample change. The main part of the data obtained in the calorimetric measurement is used for calibration. The focus is on the accuracy and calibrations required to predict the actual ac losses in HTS. This setup has the advantage of being able to measure the total ac loss under the influence of a continuously moving field, as experienced by any rotating machine.
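
    A hedged illustration of one common calorimetric calibration scheme consistent with the noninductive heater described above: a known heater power produces a reference temperature rise, and the ac loss is read off by assuming the sample temperature rise scales linearly with dissipated power. The exact procedure and numbers used in the paper may differ; the values below are placeholders.

      def ac_loss_from_temperature_rise(dT_ac_K: float, dT_cal_K: float, P_cal_W: float) -> float:
          """Estimate the ac loss (W), assuming a linear temperature-rise-to-power relation."""
          return P_cal_W * dT_ac_K / dT_cal_K

      # placeholder readings: 0.18 K rise in the rotating field, 0.25 K rise at 10 mW heater power
      print(ac_loss_from_temperature_rise(0.18, 0.25, 0.010))   # -> 0.0072 W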

  8. Method for evaluating human exposure to 60 HZ electric fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deno, D.W.; Silva, M.

    1984-07-01

    This paper describes a method that has been successfully used to evaluate human exposure to 60 Hz electric fields. An exposure measuring system that uses an electric field sensor vest and data collection instrumentation is presented. Exposure concepts and activity factors are discussed and experimental data collected with the exposure system are provided. This method can be used to measure exposure to a wide range of electric fields, with intensities from less than 1 V/m to more than 10 kV/m. Results may be translated to characterize various exposure criteria (time histogram of unperturbed field, surface fields, internal current density, total body current, etc.).

  9. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing

    PubMed Central

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline, image-processing-based method for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map using bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce the dimensionality of this vector, thus highlighting the main fault features and reducing subsequent computing resources, t-Distributed Stochastic Neighbor Embedding (t-SNE) is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical pieces of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.

  10. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    PubMed

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline, image-processing-based method for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map using bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce the dimensionality of this vector, thus highlighting the main fault features and reducing subsequent computing resources, t-Distributed Stochastic Neighbor Embedding (t-SNE) is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical pieces of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.
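
    As a rough illustration of the processing chain described above (image features, dimensionality reduction, classification), the sketch below runs synthetic high-dimensional feature vectors, standing in for SURF descriptors of bi-spectrum contour maps, through t-SNE and a simple classifier. The k-nearest-neighbour classifier replaces the probabilistic neural network of the paper, and the data are invented, so this only outlines the structure of the pipeline.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the high-dimensional feature vectors that the paper
# extracts (via SURF) from bi-spectrum contour maps; two fault classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 128)),
               rng.normal(0.8, 1.0, (100, 128))])
y = np.repeat([0, 1], 100)

# Dimensionality reduction with t-SNE (as in the paper). t-SNE has no
# out-of-sample transform, so this sketch embeds all samples at once and
# splits afterwards -- a simplification for illustration only.
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)

# A k-NN classifier stands in for the probabilistic neural network of the
# original method.
X_tr, X_te, y_tr, y_te = train_test_split(X_2d, y, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```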

  11. Local sensor based on nanowire field effect transistor from inhomogeneously doped silicon on insulator

    NASA Astrophysics Data System (ADS)

    Presnov, Denis E.; Bozhev, Ivan V.; Miakonkikh, Andrew V.; Simakin, Sergey G.; Trifonov, Artem S.; Krupenin, Vladimir A.

    2018-02-01

    We present an original method for fabricating a sensitive field/charge sensor based on a field effect transistor (FET) with a nanowire channel that uses CMOS-compatible processes only. A FET with a kink-like silicon nanowire channel was fabricated from an inhomogeneously doped silicon-on-insulator wafer very close (~100 nm) to the extremely sharp corner of a silicon chip forming the local probe. A single e-beam lithographic process with a shadow deposition technique, followed by two separate reactive ion etching processes, was used to define the narrow semiconductor nanowire channel. The sensor's charge sensitivity was evaluated to be in the range of 0.1-0.2 e/√Hz from the analysis of its transport and noise characteristics. The proposed method provides a good opportunity for the relatively simple manufacture of a local field sensor for measuring electric field distributions, potential profiles, and charge dynamics for a wide range of mesoscopic objects. Diagnostic systems and devices based on such sensors can be used in various fields of physics, chemistry, materials science, biology, electronics, medicine, etc.

  12. General design method for three-dimensional potential flow fields. 1: Theory

    NASA Technical Reports Server (NTRS)

    Stanitz, J. D.

    1980-01-01

    A general design method was developed for steady, three dimensional, potential, incompressible or subsonic-compressible flow. In this design method, the flow field, including the shape of its boundary, was determined for arbitrarily specified, continuous distributions of velocity as a function of arc length along the boundary streamlines. The method applied to the design of both internal and external flow fields, including, in both cases, fields with planar symmetry. The analytic problems associated with stagnation points, closure of bodies in external flow fields, and prediction of turning angles in three dimensional ducts were reviewed.

  13. Norman Ramsey and the Separated Oscillatory Fields Method

    Science.gov Websites

    methods of investigation; in particular, he contributed many refinements of the molecular beam method for the study of atomic and molecular properties, he invented the separated oscillatory field method of atomic and molecular spectroscopy and it is the practical basis for the most precise atomic clocks

  14. Phase unwrapping using region-based markov random field model.

    PubMed

    Dong, Ying; Ji, Jim

    2010-01-01

    Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides phase unwrapping similar to or better than the Phase Unwrapping MAx-flow/min-cut (PUMA) and ZpM methods.

  15. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10^N when N>1.
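
    The shift-based idea can be illustrated on a synthetic one-dimensional field: two field calculations at a nominal and a perturbed parameter value are aligned by a cross-correlation shift, and a Gaussian parameter distribution is then mapped into an amplitude distribution by shifting the nominal field. The field model, receiver location and uncertainty below are all invented; this is only a schematic of the approach, not the authors' formulation.

```python
import numpy as np

# Toy 1-D illustration of the field-shift idea: two field calculations at a
# nominal and a perturbed parameter value, an estimated alignment shift, and
# a Monte Carlo mapping of the parameter PDF into an amplitude PDF. The field
# model here is entirely synthetic.
r = np.linspace(0.0, 10.0, 2001)

def field(p):
    # stand-in "acoustic" amplitude whose spatial pattern depends on the
    # uncertain parameter p
    return np.abs(np.cos(2 * np.pi * r / p) * np.exp(-0.1 * r))

p0, dp = 1.00, 0.02
A0, A1 = field(p0), field(p0 + dp)

# spatial shift (in samples) that best aligns the two calculations
lags = np.arange(-200, 201)
score = [np.sum(A0[200:-200] * np.roll(A1, k)[200:-200]) for k in lags]
shift_per_dp = lags[int(np.argmax(score))]

# propagate a Gaussian parameter uncertainty into an amplitude PDF at a
# fixed receiver location by shifting the nominal field
rng = np.random.default_rng(1)
p_samples = rng.normal(p0, 0.01, 20000)
idx = 1200                                     # receiver index (arbitrary)
shifted_idx = idx + np.round((p_samples - p0) / dp * shift_per_dp).astype(int)
shifted_idx = np.clip(shifted_idx, 0, r.size - 1)
amplitudes = A0[shifted_idx]
print("mean, std of predicted amplitude:", amplitudes.mean(), amplitudes.std())
```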

  16. Stress field modeling of the Carpathian Basin based on compiled tectonic maps

    NASA Astrophysics Data System (ADS)

    Albert, Gáspár; Ungvári, Zsuzsanna; Szentpéteri, Krisztián

    2014-05-01

    The estimation of the stress field in the Carpathian Basin has been tackled by several authors. Their modeling methods are usually based on measurements (borehole, focal mechanism and geodesic data), and the result is a possible structural pattern of the region. Our method works indirectly: the analysis aims to project a possible 2D stress field over the already mapped/known/compiled lineament pattern. This includes a component-wise interpolation of the tensor field, based on an irregular point cloud generated in the buffer zone of the mapped lineaments. The interpolated values appear on contour and tensor maps and show the relative stress field of the area. In 2006 Horváth et al. compiled the 'Atlas of the present-day geodynamics of the Pannonian basin'. To test our method we processed the lineaments of the 1:1 500 000 scale 'Map of neotectonic (active) structures' published in this atlas. The geodynamic parameters (i.e., normal, reverse, right- and left-lateral strike-slip faults, etc.) of the lines on this map were mostly explained in the legend. We classified the linear elements according to these parameters and created a geo-referenced mapping database. This database contains the polyline sections of the map lineaments as vectors (i.e., line sections) and the directions of the stress field as attributes of these vectors. The directions of the dip-parallel, strike-parallel and vertical stress vectors are calculated from the geodynamical parameters of the line section. Since we created relative stress field properties, the eigenvalues of the vectors were scaled to a maximum of one. Each point in the point cloud inherits the stress property of the line section from which it was derived. During the modeling we tried several point-cloud generation and interpolation methods. The analysis of the interpolated tensor fields revealed that the model was able to reproduce a geodynamic synthesis of the Carpathian Basin, which can be correlated with the synthesis of the

  17. An image-based reaction field method for electrostatic interactions in molecular dynamics simulations of aqueous solutions

    NASA Astrophysics Data System (ADS)

    Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei

    2009-10-01

    In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.

  18. Adaptive Set-Based Methods for Association Testing

    PubMed Central

    Su, Yu-Chen; Gauderman, W. James; Berhane, Kiros; Lewinger, Juan Pablo

    2017-01-01

    With a typical sample size of a few thousand subjects, a single genomewide association study (GWAS) using traditional one-SNP-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. While self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly ‘adapt’ to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a LASSO based test. PMID:26707371
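
    A much-simplified, permutation-based version of an ARTP-style test is sketched below; the truncation points, the simulated genotypes and phenotype, and the use of a single permutation layer for both stages are illustrative assumptions rather than the full nested ARTP procedure described in the paper.

```python
import numpy as np
from scipy import stats

# Simplified ARTP-style set test, illustrative only. Phenotype labels are
# permuted to build the null; the same permutations serve both the per-
# truncation-point p-values and the final adaptive p-value.
rng = np.random.default_rng(0)
n, m = 500, 20                      # subjects, SNPs in the set
G = rng.binomial(2, 0.3, (n, m)).astype(float)
y = 0.25 * G[:, 0] + 0.25 * G[:, 1] + rng.normal(size=n)   # two causal SNPs

def snp_pvalues(y_vec):
    # marginal association p-value for each SNP (Pearson correlation test)
    out = np.empty(m)
    for j in range(m):
        _, out[j] = stats.pearsonr(G[:, j], y_vec)
    return out

def truncated_stats(pvals, ks):
    # rank-truncated product statistic, on the -log scale, for each k
    s = np.sort(pvals)
    return np.array([-np.log(s[:k]).sum() for k in ks])

ks = [1, 2, 5, 10, m]               # candidate truncation points (arbitrary)
B = 999
obs = truncated_stats(snp_pvalues(y), ks)
perm = np.array([truncated_stats(snp_pvalues(rng.permutation(y)), ks)
                 for _ in range(B)])

# per-truncation-point permutation p-values for the observed data and for
# every permutation
p_obs = (1 + (perm >= obs).sum(axis=0)) / (B + 1)
p_perm = (1 + (perm[:, None, :] >= perm[None, :, :]).sum(axis=0)) / (B + 1)

# adaptive step: take the best truncation point, then assess that minimum
# against the permutation distribution of minima
artp_p = (1 + (p_perm.min(axis=1) <= p_obs.min()).sum()) / (B + 1)
print("ARTP-style set p-value:", artp_p)
```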

  19. Therapy Decision Support Based on Recommender System Methods

    PubMed Central

    Gräßer, Felix; Beckert, Stefanie; Küster, Denise; Schmitt, Jochen; Abraham, Susanne; Malberg, Hagen

    2017-01-01

    We present a system for data-driven therapy decision support based on techniques from the field of recommender systems. Two methods for therapy recommendation, namely, Collaborative Recommender and Demographic-based Recommender, are proposed. Both algorithms aim to predict the individual response to different therapy options using diverse patient data and recommend the therapy which is assumed to provide the best outcome for a specific patient and time, that is, consultation. The proposed methods are evaluated using a clinical database incorporating patients suffering from the autoimmune skin disease psoriasis. The Collaborative Recommender proves to generate both better outcome predictions and recommendation quality. However, due to sparsity in the data, this approach cannot provide recommendations for the entire database. In contrast, the Demographic-based Recommender performs worse on average but covers more consultations. Consequently, both methods profit from a combination into an overall recommender system. PMID:29065657
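
    A minimal collaborative-filtering sketch along these lines is shown below: therapy responses of similar patients are averaged to predict the response of a new patient, and the therapy with the best predicted response is recommended. The patient-by-therapy matrix, the similarity measure and the neighbourhood size are invented for illustration and do not reflect the paper's clinical data or algorithmic details.

```python
import numpy as np

# Schematic collaborative recommender on a synthetic patient x therapy
# outcome matrix (higher = better response, NaN = therapy never tried).
rng = np.random.default_rng(0)
R = rng.uniform(0, 10, (30, 4))
R[rng.random(R.shape) < 0.5] = np.nan           # sparsify

def predict(R, patient, therapy, k=5):
    """Mean outcome among the k most similar patients who tried this therapy."""
    target = R[patient]
    scores = []
    for other in range(R.shape[0]):
        if other == patient or np.isnan(R[other, therapy]):
            continue
        common = ~np.isnan(target) & ~np.isnan(R[other])
        if common.sum() < 2:
            continue
        # similarity = negative mean squared difference on co-rated therapies
        sim = -np.mean((target[common] - R[other, common]) ** 2)
        scores.append((sim, R[other, therapy]))
    if not scores:
        return np.nan
    scores.sort(reverse=True)
    return float(np.mean([outcome for _, outcome in scores[:k]]))

patient = 0
preds = [predict(R, patient, t) for t in range(R.shape[1])]
print("predicted responses:", np.round(preds, 2))
print("recommended therapy:", int(np.nanargmax(preds)))
```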

  20. Field-induced phase transitions in chiral smectic liquid crystals studied by the constant current method

    NASA Astrophysics Data System (ADS)

    Dhaouadi, H.; Zgueb, R.; Riahi, O.; Trabelsi, F.; Othman, T.

    2016-05-01

    In ferroelectric liquid crystals, phase transitions can be induced by an electric field. The constant current method allows these transitions to be quickly localized, and thus the (E,T) phase diagram of the studied product can be obtained. In this work, we make a slight modification to the measurement principles based on this method. This modification allows the characteristic parameters of the ferroelectric liquid crystal to be measured quantitatively. The use of a square current signal highlights a ferroelectric hysteresis phenomenon with remnant polarization at zero field, which indicates a memory effect in this compound.

  1. Experiments of multichannel least-square methods for sound field reproduction inside aircraft mock-up: Objective evaluations

    NASA Astrophysics Data System (ADS)

    Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.

    2016-08-01

    Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithfully reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction: confined spaces, the need for invisible sound sources and a very specific acoustical environment make open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) not ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics as well as sound field extrapolation and sound field characterization.

  2. Field demonstration of on-site analytical methods for TNT and RDX in ground water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig, H.; Ferguson, G.; Markos, A.

    1996-12-31

    A field demonstration was conducted to assess the performance of eight commercially available and emerging colorimetric, immunoassay, and biosensor on-site analytical methods for the explosives 2,4,6-trinitrotoluene (TNT) and hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) in ground water and leachate at the Umatilla Army Depot Activity, Hermiston, Oregon and US Naval Submarine Base, Bangor, Washington, Superfund sites. Ground water samples were analyzed by each of the on-site methods and the results compared to laboratory analysis using high performance liquid chromatography (HPLC) with EPA SW-846 Method 8330. The commercial methods evaluated include the EnSys, Inc., TNT and RDX colorimetric test kits (EPA SW-846 Methods 8515 and 8510) with a solid phase extraction (SPE) step, the DTECH/EM Science TNT and RDX immunoassay test kits (EPA SW-846 Methods 4050 and 4051), and the Ohmicron TNT immunoassay test kit. The emerging methods tested include the antibody-based Naval Research Laboratory (NRL) Continuous Flow Immunosensor (CFI) for TNT and RDX, and the Fiber Optic Biosensor (FOB) for TNT. Accuracy of the on-site methods was evaluated using linear regression analysis and relative percent difference (RPD) comparison criteria. Over the range of conditions tested, the colorimetric methods showed the highest accuracy for TNT and RDX. The colorimetric method was selected for routine ground water monitoring at the Umatilla site, and further field testing of the NRL CFI and FOB biosensors will continue at both Superfund sites.

  3. Bayesian Methods for Effective Field Theories

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah

    Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that

  4. The fast multipole method and point dipole moment polarizable force fields.

    PubMed

    Coles, Jonathan P; Masella, Michel

    2015-01-14

    We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of the approach by performing single-point energy calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show long-time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of the fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.

  5. An Evaluation of a Numerical Prediction Method for Electric Field Strength of Low Frequency Radio Waves based on Wave-Hop Ionospheric Propagation

    NASA Astrophysics Data System (ADS)

    Kitauchi, H.; Nozaki, K.; Ito, H.; Kondo, T.; Tsuchiya, S.; Imamura, K.; Nagatsuma, T.; Ishii, M.

    2014-12-01

    We present our recent efforts to evaluate a numerical prediction method for the electric field strength of ionospherically propagated low frequency (LF) radio waves, based on the wave-hop propagation theory described in Section 2.4 of Recommendation ITU-R P.684-6 (2012), "Prediction of field strength at frequencies below about 150 kHz," issued by the International Telecommunication Union Radiocommunication Sector (ITU-R). As part of the Japanese Antarctic Research Expedition (JARE), we conducted continuous on-board measurements of the electric field strengths and phases of LF 40 kHz and 60 kHz radio signals (call sign JJY) along both legs of the voyage between Tokyo, Japan, and Syowa Station, the Japanese Antarctic station at 69° 00' S, 39° 35' E on East Ongul Island, Lützow-Holm Bay, East Antarctica. The measurements were made with a newly developed, highly sensitive receiving system, comprising an orthogonally crossed double-loop antenna and digital-signal-processing lock-in amplifiers, installed on board the Japanese Antarctic research vessel (RV) Shirase. Using this system during the 55th JARE, from November 2013 to April 2014, we obtained new data sets of electric field strength for propagation of the LF JJY 40 kHz and 60 kHz radio waves out to approximately 13,000-14,000 km. Comparisons between these on-board measurements and the numerical wave-hop predictions of field strength for long-range LF propagation show that our results qualitatively support the recommended theory for great-circle paths of approximately 7,000-8,000 km and 13,000-14,000 km.

  6. Sodium and potassium content of 24 h urinary collections: a comparison between field- and laboratory-based analysers.

    PubMed

    Yin, Xuejun; Neal, Bruce; Tian, Maoyi; Li, Zhifang; Petersen, Kristina; Komatsu, Yuichiro; Feng, Xiangxian; Wu, Yangfeng

    2018-04-01

    Measurement of mean population Na and K intakes typically uses laboratory-based assays, which can add significant logistical burden and costs. A valid field-based measurement method would be a significant advance. In the current study, we used 24 h urine samples to compare estimates of Na, K and Na:K ratio based upon assays done using the field-based Horiba twin meter v. laboratory-based methods. The performance of the Horiba twin meter was determined by comparing field-based estimates of mean Na and K against those obtained using laboratory-based methods. The reported 95 % limits of agreement of Bland-Altman plots were calculated based on a regression approach for non-uniform differences. The 24 h urine samples were collected as part of an ongoing study being done in rural China. One hundred and sixty-six complete 24 h urine samples were qualified for estimating 24 h urinary Na and K excretion. Mean Na and K excretion were estimated as 170·4 and 37·4 mmol/d, respectively, using the meter-based assays; and 193·4 and 43·8 mmol/d, respectively, using the laboratory-based assays. There was excellent relative reliability (intraclass correlation coefficient) for both Na (0·986) and K (0·986). Bland-Altman plots showed moderate-to-good agreement between the two methods. Na and K intake estimations were moderately underestimated using assays based upon the Horiba twin meter. Compared with standard laboratory-based methods, the portable device was more practical and convenient.
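
    For reference, classical Bland-Altman limits of agreement can be computed as in the sketch below. Note that the study itself reported limits based on a regression approach for non-uniform differences, and the numbers here are synthetic rather than the actual urine data.

```python
import numpy as np

# Minimal Bland-Altman agreement analysis between a field meter and a
# laboratory assay; the numbers below are synthetic, not the study data.
rng = np.random.default_rng(0)
lab   = rng.normal(193.4, 60.0, 166)             # mmol/d, lab-based Na
meter = lab - 23.0 + rng.normal(0.0, 15.0, 166)  # meter underestimates

diff = meter - lab                               # plotted against (meter+lab)/2
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)       # uniform-difference limits
print(f"mean bias: {bias:.1f} mmol/d, "
      f"95% limits of agreement: ({loa[0]:.1f}, {loa[1]:.1f})")
```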

  7. Efficient and stable exponential time differencing Runge-Kutta methods for phase field elastic bending energy models

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqiang; Ju, Lili; Du, Qiang

    2016-07-01

    The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
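
    The core exponential-time-differencing idea can be shown on a much simpler problem: a first-order ETD step for a one-dimensional Allen-Cahn equation with a Fourier spectral discretization. The equation, the parameters and the first-order scheme are illustrative stand-ins for the higher-order ETD Runge-Kutta schemes and the phase field bending-energy model treated in the paper.

```python
import numpy as np

# First-order exponential time differencing (ETD1) for the 1-D Allen-Cahn
# equation u_t = eps*u_xx + u - u^3 with a Fourier spectral discretization.
N, L_dom, eps, dt, steps = 256, 2 * np.pi, 0.01, 0.01, 2000
x = np.linspace(0.0, L_dom, N, endpoint=False)
k = np.fft.fftfreq(N, d=L_dom / N) * 2 * np.pi
Lhat = -eps * k**2                              # linear operator in Fourier space

E = np.exp(dt * Lhat)
# phi_1(z) = (e^z - 1)/z, with the removable singularity at z = 0 handled
phi1 = np.where(Lhat == 0.0, dt, (E - 1.0) / np.where(Lhat == 0.0, 1.0, Lhat))

u = 0.1 * np.cos(x) + 0.05 * np.random.default_rng(0).normal(size=N)
for _ in range(steps):
    Nhat = np.fft.fft(u - u**3)                 # explicit nonlinear term
    u = np.real(np.fft.ifft(E * np.fft.fft(u) + phi1 * Nhat))

print("u range after evolution:", u.min(), u.max())   # phases saturate near +/-1
```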

  8. Ferroelectric field-effect transistors based on solution-processed electrochemically exfoliated graphene

    NASA Astrophysics Data System (ADS)

    Heidler, Jonas; Yang, Sheng; Feng, Xinliang; Müllen, Klaus; Asadi, Kamal

    2018-06-01

    Memories based on graphene that could be mass produced using low-cost methods have not yet received much attention. Here we demonstrate graphene ferroelectric (dual-gate) field effect transistors. The graphene has been obtained using electrochemical exfoliation of graphite. Field-effect transistors are realized using a monolayer of graphene flakes deposited by the Langmuir-Blodgett protocol. Ferroelectric field effect transistor memories are realized using a random ferroelectric copolymer poly(vinylidenefluoride-co-trifluoroethylene) in a top gated geometry. The memory transistors reveal ambipolar behaviour with both electron and hole accumulation channels. We show that the non-ferroelectric bottom gate can be advantageously used to tune the on/off ratio.

  9. Asymmetrical flow field flow fractionation methods to characterize submicron particles: application to carbon-based aggregates and nanoplastics.

    PubMed

    Gigault, Julien; El Hadri, Hind; Reynaud, Stéphanie; Deniau, Elise; Grassl, Bruno

    2017-11-01

    In the last 10 years, asymmetrical flow field flow fractionation (AF4) has been one of the most promising approaches to characterize colloidal particles. Nevertheless, despite its potential, it is still considered a complex technique to set up, and the theory is difficult to apply to the characterization of complex samples containing submicron particles and nanoparticles. In the present work, we developed and propose a simple analytical strategy to rapidly determine the presence of several submicron populations in an unknown sample with one programmed AF4 method. To illustrate this method, we analyzed polystyrene particles and fullerene aggregates of sizes covering the whole colloidal size distribution. A global and fast AF4 method (method O) allowed us to screen for the presence of particles with sizes ranging from 1 to 800 nm. By examination of the fractionating power Fd, as proposed in the literature, convenient fractionation resolution was obtained for sizes ranging from 10 to 400 nm. The global Fd values, as well as the steric inversion diameter, for the whole colloidal size distribution correspond to the predicted values obtained by model studies. On the basis of this method, and without changing the channel components or mobile phase composition, four isocratic subfraction methods were performed to achieve further high-resolution separation as a function of different size classes: 10-100 nm, 100-200 nm, 200-450 nm, and 450-800 nm in diameter. Finally, all the methods developed were applied to the characterization of nanoplastics, which have received great attention in recent years. Graphical Abstract: Characterization of nanoplastics by asymmetrical flow field flow fractionation within the colloidal size range.

  10. Appearance-based face recognition and light-fields.

    PubMed

    Gross, Ralph; Matthews, Iain; Baker, Simon

    2004-04-01

    Arguably the most important decision to be made when developing an object recognition algorithm is selecting the scene measurements or features on which to base the algorithm. In appearance-based object recognition, the features are chosen to be the pixel intensity values in an image of the object. These pixel intensities correspond directly to the radiance of light emitted from the object along certain rays in space. The set of all such radiance values over all possible rays is known as the plenoptic function or light-field. In this paper, we develop a theory of appearance-based object recognition from light-fields. This theory leads directly to an algorithm for face recognition across pose that uses as many images of the face as are available, from one upwards. All of the pixels, whichever image they come from, are treated equally and used to estimate the (eigen) light-field of the object. The eigen light-field is then used as the set of features on which to base recognition, analogously to how the pixel intensities are used in appearance-based face and object recognition.
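
    The appearance-based ingredient, projecting raw pixel vectors onto an eigen-subspace and classifying by nearest neighbour, can be sketched as below on synthetic "images". The eigen light-field step of the paper, which pools pixels from an arbitrary number of poses into one light-field estimate, is not reproduced; the data and the subspace dimension are arbitrary choices.

```python
import numpy as np

# Classical appearance-based recognition in an eigen-subspace (eigenface
# style): project pixel vectors onto principal components and classify by
# nearest neighbour in that subspace. Images here are synthetic vectors.
rng = np.random.default_rng(0)
n_ids, per_id, dim = 10, 5, 400                  # 10 subjects, 20x20 "images"
prototypes = rng.normal(0.0, 1.0, (n_ids, dim))
X = np.vstack([p + 0.3 * rng.normal(size=(per_id, dim)) for p in prototypes])
y = np.repeat(np.arange(n_ids), per_id)

# hold out the last image of each identity as a probe
test = np.arange(per_id - 1, n_ids * per_id, per_id)
train = np.setdiff1d(np.arange(n_ids * per_id), test)

mean = X[train].mean(axis=0)
U, s, Vt = np.linalg.svd(X[train] - mean, full_matrices=False)
W = Vt[:15]                                      # top 15 eigen-directions

def project(A):
    return (A - mean) @ W.T

gallery, probes = project(X[train]), project(X[test])
d = ((probes[:, None, :] - gallery[None, :, :]) ** 2).sum(-1)
pred = y[train][d.argmin(axis=1)]
print("rank-1 identification rate:", (pred == y[test]).mean())
```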

  11. Learning outcomes of in-person and virtual field-based geoscience instruction at Grand Canyon National Park: complementary mixed-methods analyses

    NASA Astrophysics Data System (ADS)

    Semken, S. C.; Ruberto, T.; Mead, C.; Bruce, G.; Buxner, S.; Anbar, A. D.

    2017-12-01

    Students with limited access to field-based geoscience learning can benefit from immersive, student-centered virtual-reality and augmented-reality field experiences. While no digital modalities currently envisioned can truly supplant field-based learning, they afford students access to geologically illustrative but inaccessible places on Earth and beyond. As leading producers of immersive virtual field trips (iVFTs), we investigate complementary advantages and disadvantages of iVFTs and in-person field trips (ipFTs). Settings for our mixed-methods study were an intro historical-geology class (n = 84) populated mostly by non-majors and an advanced Southwest geology class (n = 39) serving mostly majors. Both represent the diversity of our urban Southwestern research university. For the same credit, students chose either an ipFT to the Trail of Time (ToT) Exhibition at Grand Canyon National Park (control group) or an online Grand Canyon iVFT (experimental group), in the same time interval. Learning outcomes for each group were identically drawn from elements of the ToT and assessed using pre/post concept sketching and inquiry exercises. Student attitudes and cognitive-load factors for both groups were assessed pre/post using the PANAS instrument (Watson et al., 1998) and with affective surveys. Analysis of pre/post concept sketches indicated improved knowledge in both groups and classes, but more so in the iVFT group. PANAS scores from the intro class showed the ipFT students having significantly stronger (p = .004) positive affect immediately prior to the experience than the iVFT students, possibly reflecting their excitement about the trip to come. Post-experience, the two groups were no longer significantly different, possibly due to the fatigue associated with a full-day ipFT. Two lines of evidence suggest that the modalities were comparable in expected effectiveness. First, the information relevant for the concept sketch was specifically covered in both

  12. Auxiliary-field-based trial wave functions in quantum Monte Carlo calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chia -Chen; Rubenstein, Brenda M.; Morales, Miguel A.

    2016-12-19

    Quantum Monte Carlo (QMC) algorithms have long relied on Jastrow factors to incorporate dynamic correlation into trial wave functions. While Jastrow-type wave functions have been widely employed in real-space algorithms, they have seen limited use in second-quantized QMC methods, particularly in projection methods that involve a stochastic evolution of the wave function in imaginary time. Here we propose a scheme for generating Jastrow-type correlated trial wave functions for auxiliary-field QMC methods. The method is based on decoupling the two-body Jastrow into one-body projectors coupled to auxiliary fields, which then operate on a single determinant to produce a multideterminant trial wave function. We demonstrate that intelligent sampling of the most significant determinants in this expansion can produce compact trial wave functions that reduce errors in the calculated energies. Lastly, our technique may be readily generalized to accommodate a wide range of two-body Jastrow factors and applied to a variety of model and chemical systems.

  13. Junction-based field emission structure for field emission display

    DOEpatents

    Dinh, Long N.; Balooch, Mehdi; McLean, II, William; Schildbach, Marcus A.

    2002-01-01

    A junction-based field emission display, wherein the junctions are formed by depositing a semiconducting or dielectric, low work function, negative electron affinity (NEA) silicon-based compound film (SBCF) onto a metal or n-type semiconductor substrate. The SBCF can be doped to become a p-type semiconductor. A small forward bias voltage is applied across the junction so that electron transport is from the substrate into the SBCF region. Upon entering into this NEA region, many electrons are released into the vacuum level above the SBCF surface and accelerated toward a positively biased phosphor screen anode, hence lighting up the phosphor screen for display. To turn off, simply switch off the applied potential across the SBCF/substrate. May be used for field emission flat panel displays.

  14. A field day of soil regulation methods

    NASA Astrophysics Data System (ADS)

    Kempter, Axel; Kempter, Carmen

    2015-04-01

    The subject of soil plays an important role in school geography. In the upper classes in particular, pupils are expected to apply their knowledge of soil in other subjects as well. An assessment of economic and agricultural development and of development potential, for example, requires interweaving physical-geographic and human-geographic factors. Treating the subject of soil also requires integrating results from different fields such as physics, chemistry and biology. The subject therefore lends itself to cross-disciplinary lessons and offers opportunities for practical work as well as excursions. Besides conveying specialist knowledge and supporting methodological and action competences, the field day should emphasize independent learning and practical work through stimulating, problem-oriented exercises that train the methods. This aim was to be achieved by treating the subject of soil in an interdisciplinary, task-oriented learning process during the field day. The methods and experiments had to be selected sensibly within both time and material constraints. During the field day the pupils characterized soil texture, soil colour, the soil profile, the soil skeleton, lime content, ion exchange (soils as filter materials), pH value and water retention capacity, and tested for different ions such as Fe3+, Mg2+, Cl- and NO3-. The pupils worked at stations and evaluated the data to obtain an overall picture of the soil at the end. Depending on the number of locations, the available time and the group size, different procedures can be offered: either groups of experts carry out the same experiment at all locations and split into different groups for the evaluation, or each group runs through all stations. The results were compared and discussed at the end.

  15. Design method of ARM based embedded iris recognition system

    NASA Astrophysics Data System (ADS)

    Wang, Yuanbo; He, Yuqing; Hou, Yushi; Liu, Ting

    2008-03-01

    With the advantages of non-invasiveness, uniqueness, stability and a low false recognition rate, iris recognition has been successfully applied in many fields. Up to now, most iris recognition systems have been PC-based. However, a PC is not portable and consumes more power. In this paper, we propose an embedded iris recognition system based on ARM. Considering the requirements of iris image acquisition and the recognition algorithm, we analyzed the design of the iris image acquisition module, designed the ARM processing module and its peripherals, studied the Linux platform and the recognition algorithm on this platform, and finally realized the design of an ARM-based iris imaging and recognition system. Experimental results show that the ARM platform we used is fast enough to run the iris recognition algorithm, and that the data stream flows smoothly between the camera and the ARM chip under the embedded Linux system. This is an effective way to realize a portable embedded iris recognition system on ARM.

  16. Percent body fat estimations in college men using field and laboratory methods: a three-compartment model approach.

    PubMed

    Moon, Jordan R; Tobkin, Sarah E; Smith, Abbie E; Roberts, Michael D; Ryan, Eric D; Dalbo, Vincent J; Lockwood, Chris M; Walter, Ashley A; Cramer, Joel T; Beck, Travis W; Stout, Jeffrey R

    2008-04-21

    Methods used to estimate percent body fat can be classified as laboratory or field techniques. However, the validity of these methods compared to multiple-compartment models has not been fully established. The purpose of this study was to determine the validity of field and laboratory methods for estimating percent fat (%fat) in healthy college-age men compared to the Siri three-compartment model (3C). Thirty-one Caucasian men (22.5 +/- 2.7 yrs; 175.6 +/- 6.3 cm; 76.4 +/- 10.3 kg) had their %fat estimated by bioelectrical impedance analysis (BIA) using the BodyGram computer program (BIA-AK) and a population-specific equation (BIA-Lohman), near-infrared interactance (NIR) (Futrex(R) 6100/XL), four circumference-based military equations [Marine Corps (MC), Navy and Air Force (NAF), Army (A), and Friedl], air-displacement plethysmography (BP), and hydrostatic weighing (HW). All circumference-based military equations (MC = 4.7% fat, NAF = 5.2% fat, A = 4.7% fat, Friedl = 4.7% fat) along with NIR (NIR = 5.1% fat) produced an unacceptable total error (TE). Both laboratory methods produced acceptable TE values (HW = 2.5% fat; BP = 2.7% fat). The BIA-AK and BIA-Lohman field methods produced acceptable TE values (2.1% fat). A significant difference was observed for the MC and NAF equations compared to both the 3C model and HW (p < 0.006). Results indicate that the BP and HW are valid laboratory methods when compared to the 3C model to estimate %fat in college-age Caucasian men. When the use of a laboratory method is not feasible, BIA-AK and BIA-Lohman are acceptable field methods to estimate %fat in this population.

  17. Are rapid population estimates accurate? A field trial of two different assessment methods.

    PubMed

    Grais, Rebecca F; Coulombier, Denis; Ampuero, Julia; Lucas, Marcelino E S; Barretto, Avertino T; Jacquier, Guy; Diaz, Francisco; Balandine, Serge; Mahoudeau, Claude; Brown, Vincent

    2006-09-01

    Emergencies resulting in large-scale displacement often lead to populations resettling in areas where basic health services and sanitation are unavailable. To plan relief-related activities quickly, rapid population size estimates are needed. The currently recommended Quadrat method estimates total population by extrapolating the average population size living in square blocks of known area to the total site surface. An alternative approach, the T-Square, provides a population estimate based on analysis of the spatial distribution of housing units taken throughout a site. We field tested both methods and validated the results against a census in Esturro Bairro, Beira, Mozambique. Compared to the census (population: 9,479), the T-Square yielded a better population estimate (9,523) than the Quadrat method (7,681; 95% confidence interval: 6,160-9,201), but was more difficult for field survey teams to implement. Although applicable only to similar sites, several general conclusions can be drawn for emergency planning.
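
    The Quadrat-style extrapolation described above amounts to scaling the mean per-block count to the full site surface, as in the sketch below; the block counts, block size and site area are invented, and the T-Square estimator, which relies on distances to and between housing units, is not sketched here.

```python
import numpy as np

# Quadrat-style extrapolation: count people in a sample of square blocks of
# known area and scale the mean block count to the full site surface. All
# numbers below are invented for illustration.
block_counts = np.array([41, 35, 52, 38, 47, 44, 33, 49, 40, 45])  # people/block
block_area   = 25.0 * 25.0          # m^2 per surveyed block
site_area    = 150_000.0            # m^2 total settlement surface

density = block_counts.mean() / block_area          # people per m^2
estimate = density * site_area
se = block_counts.std(ddof=1) / np.sqrt(block_counts.size) / block_area * site_area
print(f"estimated population: {estimate:.0f} (+/- {1.96 * se:.0f}, 95% CI)")
```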

  18. A novel wide-field-of-view display method with higher central resolution for hyper-realistic head dome projector

    NASA Astrophysics Data System (ADS)

    Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko

    2007-02-01

    In this paper, we propose a novel display method to realize a high-resolution image in a central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely, high central visual acuity and low peripheral visual acuity, and pixel shift technology, which is one of the resolution-enhancing technologies for projectors. The projected image with our method is a fine wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images with our method in terms of sensation of reality. According to the result, we obtained 1.5 times higher resolution in the central visual field and a greater sensation of reality by using our method.

  19. A new method to measure galaxy bias by combining the density and weak lensing fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pujol, Arnau; Chang, Chihway; Gaztañaga, Enrique

    We present a new method to measure redshift-dependent galaxy bias by combining information from the galaxy density field and the weak lensing field. This method is based on the work of Amara et al., who use the galaxy density field to construct a bias-weighted convergence field κg. The main difference between Amara et al.'s work and our new implementation is that here we present another way to measure galaxy bias, using tomography instead of bias parametrizations. The correlation between κg and the true lensing field κ allows us to measure galaxy bias using different zero-lag correlations between the two fields. Our method measures the linear bias factor on linear scales, under the assumption of no stochasticity between galaxies and matter. We use the Marenostrum Institut de Ciències de l'Espai (MICE) simulation to measure the linear galaxy bias for a flux-limited sample (i < 22.5) in tomographic redshift bins using this method. This article is the first to study the accuracy and systematic uncertainties associated with the implementation of the method, and the regime in which it is consistent with the linear galaxy bias defined by projected two-point correlation functions (2PCF). We find that our method is consistent with a linear bias at the per cent level for scales larger than 30 arcmin, while non-linearities appear at smaller scales. This measurement is a good complement to other measurements of bias, since it does not depend strongly on σ8 as the 2PCF measurements do. We will apply this method to the Dark Energy Survey Science Verification data in a follow-up article.
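
    Schematically, a zero-lag correlation bias estimate can be illustrated on synthetic maps as below, where a bias-weighted map is built directly as κg = b·κ plus noise and b is recovered from a ratio of zero-lag correlations. The paper's construction of κg from galaxy counts and the exact ratios it uses are not reproduced here.

```python
import numpy as np

# Schematic zero-lag correlation bias estimate on synthetic fields: recover
# an assumed linear bias b from <kappa_g kappa> / <kappa kappa>.
rng = np.random.default_rng(0)
true_b = 1.4
kappa   = rng.normal(0.0, 0.02, 512 * 512)          # "true" convergence field
kappa_g = true_b * kappa + rng.normal(0.0, 0.01, kappa.size)

cross = np.mean(kappa_g * kappa)                    # <kappa_g kappa>
auto  = np.mean(kappa * kappa)                      # <kappa kappa>
print("recovered bias:", cross / auto)              # ~1.4 if noise is uncorrelated
```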

  20. Current trends in nanomaterial embedded field effect transistor-based biosensor.

    PubMed

    Nehra, Anuj; Pal Singh, Krishna

    2015-12-15

    Recently, metal-, polymer-, and carbon-based biocompatible nanomaterials have been increasingly incorporated into biosensing applications, with various nanostructures used to increase the efficacy and sensitivity of most detection devices, including field effect transistor (FET)-based devices. These nanomaterial-based approaches have also become ideal for integrating biomolecules, especially in the fabrication of ultrasensitive, low-cost, and robust FET-based biosensors; they are notably successful at binding the targeted entities in the confined gated micro-region for high functionality. Furthermore, applying nanomaterial-based FET biosensors to various applications requires the detection of many targets with high selectivity and specificity. We assess how such devices have enabled elevated biosensor performance in terms of high sensitivity, selectivity and low detection limits. We review the recent literature to illustrate the diversity of FET-based biosensors built on various kinds of nanomaterials in different applications, and conclude that graphene- and graphene-composite-based FET devices are comparatively more efficient and sensitive, with the highest signal-to-noise ratios. Lastly, the future prospects and limitations of the field are also discussed. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  2. Adaptive Set-Based Methods for Association Testing.

    PubMed

    Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo

    2016-02-01

    With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test. © 2015 WILEY PERIODICALS, INC.

  3. Mean Field Type Control with Congestion (II): An Augmented Lagrangian Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Achdou, Yves, E-mail: achdou@ljll.univ-paris-diderot.fr; Laurière, Mathieu

    This work deals with a numerical method for solving a mean-field type control problem with congestion. It is the continuation of an article by the same authors, in which suitably defined weak solutions of the system of partial differential equations arising from the model were discussed and existence and uniqueness were proved. Here, the focus is put on numerical methods: a monotone finite difference scheme is proposed and shown to have a variational interpretation. Then an Alternating Direction Method of Multipliers for solving the variational problem is addressed. It is based on an augmented Lagrangian. Two kinds of boundary conditions are considered: periodic conditions and more realistic boundary conditions associated to state constrained problems. Various test cases and numerical results are presented.

  4. Linear least-squares method for global luminescent oil film skin friction field analysis

    NASA Astrophysics Data System (ADS)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
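
    The computational core of such an approach, an overdetermined linear least-squares solve, can be sketched as below. In the paper, the design matrix and right-hand side come from discretizing the thin-oil-film equation over the GLOF image sequence, whereas here they are synthetic placeholders.

```python
import numpy as np

# Generic linear least-squares solve of an overdetermined system A x = b, the
# computational core of an LLS approach. In the paper, A and b are assembled
# from the thin-oil-film equation and the image sequence, and x collects the
# unknown skin-friction components; here everything is synthetic.
rng = np.random.default_rng(0)
m, n = 500, 2                                   # many equations, few unknowns
A = rng.normal(size=(m, n))
x_true = np.array([1.2, -0.7])                  # stand-in "skin-friction" components
b = A @ x_true + 0.05 * rng.normal(size=m)      # noisy observations

x_hat, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print("estimated components:", x_hat)
print("residual norm:", np.linalg.norm(A @ x_hat - b))
```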

  5. New light field camera based on physical based rendering tracing

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity because of the limitations of the computation technology of the time. With the rapid advancement of computer technology over the last decade, this limitation has been lifted and light field technology has quickly returned to the research spotlight. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of using a traditional optical simulation approach to study light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distributions and typically lacks the capability to render realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation for creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provides a way to link virtual scenes with real measurement results. Several images developed with the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It is shown that this light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. The operational constraints, performance metrics and computation resources associated with this light field camera technique are presented in detail.

  6. Improvements of the Profil Cultural Method for a better Low-tech Field Assessment of Soil Structure under no-till

    NASA Astrophysics Data System (ADS)

    Roger-Estrade, Jean; Boizard, Hubert; Peigné, Josephine; Sasal, Maria Carolina; Guimaraes, Rachel; Piron, Denis; Tomis, Vincent; Vian, Jean-François; Cadoux, Stephane; Ralisch, Ricardo; Filho, Tavares; Heddadj, Djilali; de Battista, Juan; Duparque, Annie

    2016-04-01

    In France, agronomists have studied the effects of cropping systems on soil structure, using a field method based on a visual description of soil structure. The "profil cultural" method (Manichon and Gautronneau, 1987) has been designed to perform a field diagnostic of the effects of tillage and compaction on soil structure dynamics. This method is of great use to agronomists improving crop management for a better preservation of soil structure. However, this method was developed and mainly used in conventional tillage systems, with ploughing. As several forms of reduced, minimum and no tillage systems are expanding in many parts of the world, it is necessary to re-evaluate the ability of this method to describe and interpret soil macrostructure in unploughed situations. In unploughed fields, the soil structure dynamics of untilled layers are mainly driven by compaction and by regeneration through natural agents (climatic conditions, root growth and macrofauna), and it is of major importance to evaluate the contribution of these natural processes to soil structure regeneration. These concerns have led us to adapt the standard method and to propose amendments based on a series of field observations and experimental work in different situations of cropping systems, soil types and climatic conditions. We improved the description of crack type and we introduced an index of biological activity, based on the visual examination of clods. To test the improved method, a comparison with the reference method was carried out and the ability of the "profil cultural" method to make a diagnosis was tested on five experiments in France, Brazil and Argentina. Using the improved method, the impact of cropping systems on soil functioning was better assessed when natural processes were integrated into the description.

  7. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    NASA Astrophysics Data System (ADS)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.

  8. Light-field camera-based 3D volumetric particle image velocimetry with dense ray tracing reconstruction technique

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; New, T. H.; Soria, Julio

    2017-07-01

    This paper presents a dense ray tracing reconstruction technique for single light-field camera-based particle image velocimetry. The new approach pre-determines the location of a particle through inverse dense ray tracing and reconstructs the voxel value using the multiplicative algebraic reconstruction technique (MART). Simulation studies were undertaken to identify the effects of iteration number, relaxation factor, particle density, voxel-pixel ratio, and velocity gradient on the performance of the proposed dense ray tracing-based MART method (DRT-MART). The results demonstrate that the DRT-MART method achieves higher reconstruction resolution at significantly better computational efficiency than the MART method (4-50 times faster). Both DRT-MART and MART approaches were applied to measure the velocity field of a low speed jet flow, which revealed that for the same computational cost, the DRT-MART method accurately resolves the jet velocity field with improved precision, especially for the velocity component along the depth direction.
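
    The record above builds on the multiplicative algebraic reconstruction technique (MART). A generic MART voxel update, not the DRT variant of the paper, can be sketched as below; the weighting matrix, pixel intensities, and relaxation factor are placeholders rather than values from the study.

    ```python
    import numpy as np

    def mart(W, p, n_iter=5, mu=1.0, eps=1e-12):
        """Generic multiplicative algebraic reconstruction technique (MART).

        W : (n_pixels, n_voxels) weighting matrix (ray/voxel intersection weights)
        p : (n_pixels,) recorded pixel intensities
        Returns the reconstructed voxel intensities E of shape (n_voxels,).
        """
        E = np.ones(W.shape[1])                      # uniform initial guess
        for _ in range(n_iter):
            for i in range(W.shape[0]):              # loop over pixel equations
                proj = W[i] @ E                      # current projection of voxels onto pixel i
                if proj > eps and p[i] > eps:
                    # multiplicative correction, damped by the relaxation factor mu
                    E *= (p[i] / proj) ** (mu * W[i])
        return E
    ```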

  9. Numerical Solutions of the Mean-Value Theorem: New Methods for Downward Continuation of Potential Fields

    NASA Astrophysics Data System (ADS)

    Zhang, Chong; Lü, Qingtian; Yan, Jiayong; Qi, Guang

    2018-04-01

    Downward continuation can enhance small-scale sources and improve resolution. Nevertheless, the common methods have disadvantages in obtaining optimal results because of divergence and instability. We derive the mean-value theorem for potential fields, which could be the theoretical basis of some data processing and interpretation. Based on numerical solutions of the mean-value theorem, we present convergent and stable downward continuation methods that use the first-order vertical derivatives and their upward continuation. By applying one of our methods to both synthetic and real cases, we show that our method is stable, convergent and accurate. Meanwhile, compared with the fast Fourier transform Taylor series method and the integrated second vertical derivative Taylor series method, our method has very little boundary effect and remains stable in the presence of noise. We find that the features of fading anomalies emerge properly in our downward continuation relative to the original fields at lower heights.
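
    For context, the conventional wavenumber-domain continuation operator, whose noise amplification in the downward direction motivates the stabilized scheme above, can be sketched as follows. This is the classical FFT operator with a crude low-pass stabilization, not the mean-value-theorem method of the paper; the grid spacings and damping threshold are illustrative.

    ```python
    import numpy as np

    def continue_field(field, dx, dy, h, damping=None):
        """Classical wavenumber-domain continuation of a gridded potential field.

        h > 0 continues the field downward (unstable, amplifies short wavelengths),
        h < 0 continues it upward (stable). `damping` optionally zeroes wavenumbers
        with |k|*|h| above the given threshold as a crude stabilisation of the
        downward case. This is the conventional operator the paper improves upon.
        """
        ny, nx = field.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
        k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
        spectrum = np.fft.fft2(field) * np.exp(k * h)
        if damping is not None:
            spectrum[k * abs(h) > damping] = 0.0     # simple low-pass stabilisation
        return np.real(np.fft.ifft2(spectrum))
    ```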

  10. Ocean Wave Simulation Based on Wind Field

    PubMed Central

    2016-01-01

    Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy source for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development in recent years in the field of computer graphics, few of them consider constructing the ocean surface height field from the perspective of wind force driving the ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continuous and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results show that the new method is capable of producing a realistic ocean scene under the influence of wind fields at real-time rates. PMID:26808718

  11. Ocean Wave Simulation Based on Wind Field.

    PubMed

    Li, Zhongyi; Wang, Hao

    2016-01-01

    Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy source for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development in recent years in the field of computer graphics, few of them consider constructing the ocean surface height field from the perspective of wind force driving the ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continuous and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results show that the new method is capable of producing a realistic ocean scene under the influence of wind fields at real-time rates.
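
    As a toy illustration of the superposition idea, a height field can be formed by summing small bumps that drift with the wind. The sketch below is not the authors' wave-particle formulation (which controls particle creation, damping, and wrapping against the wind field data); every parameter is invented for the example.

    ```python
    import numpy as np

    # Toy superposition of wind-driven "wave particles": each particle is a small
    # Gaussian bump advected by a uniform wind, and the surface height is their sum.
    rng = np.random.default_rng(1)
    n_particles = 200
    pos = rng.uniform(0.0, 100.0, size=(n_particles, 2))    # particle positions (m)
    amp = rng.uniform(0.02, 0.1, size=n_particles)           # bump amplitudes (m)
    wind = np.array([2.0, 0.5])                               # uniform wind drift (m/s)

    x, y = np.meshgrid(np.linspace(0, 100, 256), np.linspace(0, 100, 256))

    def height_field(t):
        h = np.zeros_like(x)
        centers = pos + wind * t                              # particles advected by wind
        for (cx, cy), a in zip(centers, amp):
            r2 = (x - cx) ** 2 + (y - cy) ** 2
            h += a * np.exp(-r2 / 4.0)                        # Gaussian bump per particle
        return h

    surface = height_field(t=3.0)
    print(surface.shape, surface.max())
    ```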

  12. Percent body fat estimations in college men using field and laboratory methods: A three-compartment model approach

    PubMed Central

    Moon, Jordan R; Tobkin, Sarah E; Smith, Abbie E; Roberts, Michael D; Ryan, Eric D; Dalbo, Vincent J; Lockwood, Chris M; Walter, Ashley A; Cramer, Joel T; Beck, Travis W; Stout, Jeffrey R

    2008-01-01

    Background Methods used to estimate percent body fat can be classified as laboratory or field techniques. However, the validity of these methods compared to multiple-compartment models has not been fully established. The purpose of this study was to determine the validity of field and laboratory methods for estimating percent fat (%fat) in healthy college-age men compared to the Siri three-compartment model (3C). Methods Thirty-one Caucasian men (22.5 ± 2.7 yrs; 175.6 ± 6.3 cm; 76.4 ± 10.3 kg) had their %fat estimated by bioelectrical impedance analysis (BIA) using the BodyGram™ computer program (BIA-AK) and a population-specific equation (BIA-Lohman), near-infrared interactance (NIR) (Futrex® 6100/XL), four circumference-based military equations [Marine Corps (MC), Navy and Air Force (NAF), Army (A), and Friedl], air-displacement plethysmography (BP), and hydrostatic weighing (HW). Results All circumference-based military equations (MC = 4.7% fat, NAF = 5.2% fat, A = 4.7% fat, Friedl = 4.7% fat) along with NIR (NIR = 5.1% fat) produced an unacceptable total error (TE). Both laboratory methods produced acceptable TE values (HW = 2.5% fat; BP = 2.7% fat). The BIA-AK and BIA-Lohman field methods also produced acceptable TE values (2.1% fat). A significant difference was observed for the MC and NAF equations compared to both the 3C model and HW (p < 0.006). Conclusion Results indicate that BP and HW are valid laboratory methods when compared to the 3C model to estimate %fat in college-age Caucasian men. When the use of a laboratory method is not feasible, BIA-AK and BIA-Lohman are acceptable field methods to estimate %fat in this population. PMID:18426582
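
    For readers unfamiliar with how a measured body density is converted to a fat percentage, the classic two-compartment Siri equation is shown below. It is a simpler relation than the three-compartment model used as the criterion in this study and is included only as an illustration.

    ```python
    def siri_two_compartment_percent_fat(body_density_g_per_ml: float) -> float:
        """Classic two-compartment Siri equation: %fat = 495 / Db - 450.

        Illustrates how a body density (e.g. from hydrostatic weighing or
        air-displacement plethysmography) converts to percent body fat. The study
        above uses a three-compartment model that also incorporates total body
        water, which is not reproduced here.
        """
        return 495.0 / body_density_g_per_ml - 450.0

    print(round(siri_two_compartment_percent_fat(1.070), 1))  # ~12.6 %fat
    ```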

  13. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    NASA Astrophysics Data System (ADS)

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

    Brain activity can be recorded by means of EEG (Electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for the brain activity, which is especially important when a seizure occurs. The studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to characterize patterns of brain activity. The inverse problem, in which the underlying sources must be determined from the field sampled at the different electrodes, is more difficult because it may not have a unique solution, or because the search for the solution is hampered by a low spatial resolution that may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or go undetected, and well-known source localization methods such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power as a function of location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to incorporate a priori information about the sparsity of the signal. The problem is formulated and solved using a regularization method such as Tikhonov regularization, which calculates a solution that is the best compromise between two cost functions to be minimized, one related to the fitting of the data and the other concerning the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. With respect to the model considered for the head and brain sources, the result obtained allows to
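
    The regularization idea mentioned above, a compromise between fitting the data and a penalty on the solution, can be sketched with the classic quadratic (L2) Tikhonov form. The paper couples this with a sparsity-promoting term, which is not reproduced here, and the lead-field matrix and source configuration below are synthetic placeholders.

    ```python
    import numpy as np

    def tikhonov_inverse(L, b, lam):
        """Classic Tikhonov-regularised solution of L x = b.

        Minimises ||L x - b||^2 + lam * ||x||^2, i.e. the compromise between data
        fit and a quadratic penalty. (The work above augments this idea with a
        sparsity constraint to sharpen MUSIC localisation; only the standard L2
        form is sketched here.)
        """
        n = L.shape[1]
        return np.linalg.solve(L.T @ L + lam * np.eye(n), L.T @ b)

    # Hypothetical lead-field matrix (electrodes x candidate sources) and data.
    rng = np.random.default_rng(2)
    L = rng.normal(size=(32, 200))
    x_true = np.zeros(200)
    x_true[[10, 87]] = 1.0                                     # two active sources
    b = L @ x_true + 0.05 * rng.normal(size=32)
    x_hat = tikhonov_inverse(L, b, lam=1.0)
    print("strongest reconstructed sources:", np.argsort(-np.abs(x_hat))[:5])
    ```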

  14. Electric Field Quantitative Measurement System and Method

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2016-01-01

    A method and system are provided for making a quantitative measurement of an electric field. A plurality of antennas separated from one another by known distances are arrayed in a region that extends in at least one dimension. A voltage difference between at least one selected pair of antennas is measured. Each voltage difference is divided by the known distance associated with the selected pair of antennas corresponding thereto to generate a resulting quantity. The plurality of resulting quantities defined over the region quantitatively describe an electric field therein.
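
    A minimal numerical sketch of the measurement principle, each voltage difference divided by the known antenna spacing yielding a local field estimate, is shown below with hypothetical values.

    ```python
    import numpy as np

    # Sketch of the quantitative estimate described above: each measured voltage
    # difference divided by the known antenna separation gives a local estimate of
    # the electric-field component along the array (all values are hypothetical).
    voltage_diff = np.array([0.12, 0.25, 0.38])   # volts, between adjacent antenna pairs
    spacing = np.array([0.05, 0.05, 0.05])        # metres, known separations
    field_estimates = voltage_diff / spacing       # V/m along the array
    print(field_estimates)
    ```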

  15. Multigrid Methods for the Computation of Propagators in Gauge Fields

    NASA Astrophysics Data System (ADS)

    Kalkreuter, Thomas

    Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in algorithms in a covariant way. The kernel C of the restriction operator which averages from one grid to the next coarser grid is defined by projection on the ground-state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18⁴ conjugate gradient is superior.

  16. Developing Preservice Teachers' Self-Efficacy through Field-Based Science Teaching Practice with Elementary Students

    ERIC Educational Resources Information Center

    Flores, Ingrid M.

    2015-01-01

    Thirty preservice teachers enrolled in a field-based science methods course were placed at a public elementary school for coursework and for teaching practice with elementary students. Candidates focused on building conceptual understanding of science content and pedagogical methods through innovative curriculum development and other course…

  17. Adaptive-Grid Methods for Phase Field Models of Microstructure Development

    NASA Technical Reports Server (NTRS)

    Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.

    1999-01-01

    In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.

  18. A regularization method for extrapolation of solar potential magnetic fields

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.

  19. Determination of traces of cobalt in soils: A field method

    USGS Publications Warehouse

    Almond, H.

    1953-01-01

    The growing use of geochemical prospecting methods in the search for ore deposits has led to the development of a field method for the determination of cobalt in soils. The determination is based on the fact that cobalt reacts with 2-nitroso-1-naphthol to yield a pink compound that is soluble in carbon tetrachloride. The carbon tetrachloride extract is shaken with dilute cyanide to complex interfering elements and to remove excess reagent. The cobalt content is estimated by comparing the pink color in the carbon tetrachloride with a standard series prepared from standard solutions. The cobalt 2-nitroso-1-naphtholate system in carbon tetrachloride follows Beer's law. As little as 1 p.p.m. can be determined in a 0.1-gram sample. The method is simple and fast and requires only simple equipment. More than 40 samples can be analyzed per man-day with an accuracy within 30% or better.

  20. Method of electric field flow fractionation wherein the polarity of the electric field is periodically reversed

    DOEpatents

    Stevens, Fred J.

    1992-01-01

    A novel method of electric field flow fractionation for separating solute molecules from a carrier solution is disclosed. The method of the invention utilizes an electric field that is periodically reversed in polarity, in a time-dependent, wave-like manner. The parameters of the waveform, including amplitude, frequency and wave shape may be varied to optimize separation of solute species. The waveform may further include discontinuities to enhance separation.

  1. MEMS-based fuel cells with integrated catalytic fuel processor and method thereof

    DOEpatents

    Jankowski, Alan F [Livermore, CA; Morse, Jeffrey D [Martinez, CA; Upadhye, Ravindra S [Pleasanton, CA; Havstad, Mark A [Davis, CA

    2011-08-09

    Described herein is a means to incorporate catalytic materials into the fuel flow field structures of MEMS-based fuel cells, which enable catalytic reforming of a hydrocarbon based fuel, such as methane, methanol, or butane. Methods of fabrication are also disclosed.

  2. LEAKAGE CHARACTERISTICS OF BASE OF RIVERBANK BY SELF POTENTIAL METHOD AND EXAMINATION OF EFFECTIVENESS OF SELF POTENTIAL METHOD TO HEALTH MONITORING OF BASE OF RIVERBANK

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko

    Field measurement by the Self Potential Method using a copper sulfate electrode was performed at the base of a riverbank of the WATARASE River, where a leakage problem exists, in order to examine the leakage characteristics. The measurement results showed a typical S-shape, which indicates the existence of flowing groundwater. The results agreed well with the measurement results obtained by the Ministry of Land, Infrastructure and Transport. The results of 1 m depth ground temperature detection and Chain-Array detection also showed good agreement with the results of the Self Potential Method. The correlation between the Self Potential value and the groundwater velocity was examined in a model experiment, and a clear correlation was found. These results indicate that the Self Potential Method is an effective method for examining the groundwater characteristics at the base of a riverbank with a leakage problem.

  3. Forced Ignition Study Based On Wavelet Method

    NASA Astrophysics Data System (ADS)

    Martelli, E.; Valorani, M.; Paolucci, S.; Zikoski, Z.

    2011-05-01

    The control of ignition in a rocket engine is a critical problem for combustion chamber design. Therefore it is essential to fully understand the mechanism of ignition during its earliest stages. In this paper the characteristics of flame kernel formation and initial propagation in a hydrogen-argon-oxygen mixing layer are studied using 2D direct numerical simulations with detailed chemistry and transport properties. The flame kernel is initiated by adding an energy deposition source term in the energy equation. The effect of unsteady strain rate is studied by imposing a 2D turbulence velocity field, which is initialized by means of a synthetic field. An adaptive wavelet method, based on interpolating wavelets is used in this study to solve the compressible reactive Navier-Stokes equations. This method provides an alternative means to refine the computational grid points according to local demands of the physical solution. The present simulations show that in the very early instants the kernel perturbed by the turbulent field is characterized by an increased burning area and a slightly increased radical formation. In addition, the calculations show that the wavelet technique yields a significant reduction in the number of degrees of freedom necessary to achieve a prescribed solution accuracy.

  4. How robust are burn severity indices when applied in a new region? Evaluation of alternate field-based and remote-sensing methods

    Treesearch

    C. Alina Cansler; Donald McKenzie

    2012-01-01

    Remotely sensed indices of burn severity are now commonly used by researchers and land managers to assess fire effects, but their relationship to field-based assessments of burn severity has been evaluated only in a few ecosystems. This analysis illustrates two cases in which methodological refinements to field-based and remotely sensed indices of burn severity...

  5. Field calibration of blowfly-derived DNA against traditional methods for assessing mammal diversity in tropical forests.

    PubMed

    Lee, Ping-Shin; Gan, Han Ming; Clements, Gopalasamy Reuben; Wilson, John-James

    2016-11-01

    Mammal diversity assessments based on DNA derived from invertebrates have been suggested as alternatives to assessments based on traditional methods; however, no study has field-tested both approaches simultaneously. In Peninsular Malaysia, we calibrated the performance of mammal DNA derived from blowflies (Diptera: Calliphoridae) against traditional methods used to detect species. We first compared five methods (cage trapping, mist netting, hair trapping, scat collection, and blowfly-derived DNA) in a forest reserve with no recent reports of megafauna. Blowfly-derived DNA and mist netting detected the joint highest number of species (n = 6). Only one species was detected by multiple methods. Compared to the other methods, blowfly-derived DNA detected both volant and non-volant species. In another forest reserve, rich in megafauna, we calibrated blowfly-derived DNA against camera traps. Blowfly-derived DNA detected more species (n = 11) than camera traps (n = 9), with only one species detected by both methods. The rarefaction curve indicated that blowfly-derived DNA would continue to detect more species with greater sampling effort. With further calibration, blowfly-derived DNA may join the list of traditional field methods. Areas for further investigation include blowfly feeding and dispersal biology, primer biases, and the assembly of a comprehensive and taxonomically-consistent DNA barcode reference library.

  6. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods.

    PubMed

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community.

  7. Clustering Scientific Publications Based on Citation Relations: A Systematic Comparison of Different Methods

    PubMed Central

    Šubelj, Lovro; van Eck, Nees Jan; Waltman, Ludo

    2016-01-01

    Clustering methods are applied regularly in the bibliometric literature to identify research areas or scientific fields. These methods are for instance used to group publications into clusters based on their relations in a citation network. In the network science literature, many clustering methods, often referred to as graph partitioning or community detection techniques, have been developed. Focusing on the problem of clustering the publications in a citation network, we present a systematic comparison of the performance of a large number of these clustering methods. Using a number of different citation networks, some of them relatively small and others very large, we extensively study the statistical properties of the results provided by different methods. In addition, we also carry out an expert-based assessment of the results produced by different methods. The expert-based assessment focuses on publications in the field of scientometrics. Our findings seem to indicate that there is a trade-off between different properties that may be considered desirable for a good clustering of publications. Overall, map equation methods appear to perform best in our analysis, suggesting that these methods deserve more attention from the bibliometric community. PMID:27124610
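
    As a small, self-contained illustration of grouping publications by their citation relations, the sketch below applies a standard modularity-based community detection routine from networkx to a toy citation graph. This is only a stand-in: the study above finds that map equation (Infomap-style) methods perform best, and the graph here is invented.

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Toy citation network: nodes are publications, edges are citation relations.
    G = nx.Graph()
    G.add_edges_from([
        ("p1", "p2"), ("p2", "p3"), ("p1", "p3"),      # one tightly citing group
        ("p4", "p5"), ("p5", "p6"), ("p4", "p6"),      # another group
        ("p3", "p4"),                                   # a single bridging citation
    ])

    # Modularity-based clustering, used here purely as an accessible example.
    clusters = greedy_modularity_communities(G)
    for i, community in enumerate(clusters):
        print(f"cluster {i}: {sorted(community)}")
    ```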

  8. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method.

    PubMed

    Deng, Yongbo; Korvink, Jan G

    2016-05-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable.

  9. Topology optimization for three-dimensional electromagnetic waves using an edge element-based finite-element method

    PubMed Central

    Korvink, Jan G.

    2016-01-01

    This paper develops a topology optimization procedure for three-dimensional electromagnetic waves with an edge element-based finite-element method. In contrast to the two-dimensional case, three-dimensional electromagnetic waves must include an additional divergence-free condition for the field variables. The edge element-based finite-element method is used to both discretize the wave equations and enforce the divergence-free condition. For wave propagation described in terms of the magnetic field in the widely used class of non-magnetic materials, the divergence-free condition is imposed on the magnetic field. This naturally leads to a nodal topology optimization method. When wave propagation is described using the electric field, the divergence-free condition must be imposed on the electric displacement. In this case, the material in the design domain is assumed to be piecewise homogeneous to impose the divergence-free condition on the electric field. This results in an element-wise topology optimization algorithm. The topology optimization problems are regularized using a Helmholtz filter and a threshold projection method and are analysed using a continuous adjoint method. In order to ensure the applicability of the filter in the element-wise topology optimization version, a regularization method is presented to project the nodal into an element-wise physical density variable. PMID:27279766
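
    The Helmholtz filter and threshold projection used above to regularize the design variables can be illustrated in a generic one-dimensional form, independent of the electromagnetic edge-element solver; the filter radius, projection sharpness, and threshold below are typical but arbitrary choices, not values from the paper.

    ```python
    import numpy as np

    def helmholtz_filter_1d(rho, dx, r):
        """Helmholtz-type density filter: solve -r^2 * rho_f'' + rho_f = rho
        with zero-flux boundaries (the standard PDE filter in topology optimization)."""
        n = rho.size
        c = (r / dx) ** 2
        main = np.full(n, 1.0 + 2.0 * c)
        main[0] = main[-1] = 1.0 + c              # zero-flux (Neumann) ends
        A = (np.diag(main)
             + np.diag(np.full(n - 1, -c), 1)
             + np.diag(np.full(n - 1, -c), -1))
        return np.linalg.solve(A, rho)

    def threshold_projection(rho_f, beta=8.0, eta=0.5):
        """Smoothed Heaviside projection pushing filtered densities towards 0/1."""
        num = np.tanh(beta * eta) + np.tanh(beta * (rho_f - eta))
        den = np.tanh(beta * eta) + np.tanh(beta * (1.0 - eta))
        return num / den

    rho = np.random.default_rng(3).uniform(0, 1, 50)   # raw design variables
    rho_physical = threshold_projection(helmholtz_filter_1d(rho, dx=1.0, r=2.0))
    print(rho_physical.min(), rho_physical.max())
    ```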

  10. Determination of small-field correction factors for cylindrical ionization chambers using a semiempirical method

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won

    2016-02-01

    A semiempirical method based on the averaging effect of the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the correction factors for the beam quality change produced by the difference in size between the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution; in the method, we modeled the variable and inhomogeneous energy fluence function within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving the developed system of equations on the basis of the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors k_{Q_{msr},Q}^{f_{smf},f_{ref}} were comparable to the values derived from the MC computations performed using Monaco®. For example, for a 6 MV photon beam and a field size of 1 × 1 cm², k_{Q_{msr},Q}^{f_{smf},f_{ref}} was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber. On the other hand, the k_{Q_{msr},Q}^{f_{smf},f_{ref}} values determined from the MC computations were 1.121 and 1.031, respectively; the difference between the proposed method and the MC computation is less than 2%. In addition, we determined the k_{Q_{msr},Q}^{f_{smf},f_{ref}} values for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers as well. We devised a method for determining k_{Q_{msr},Q}^{f_{smf},f_{ref}} from both the measurement of the output factors and model-based mathematical computation. The proposed method can be useful in cases where MC simulation is not applicable in clinical settings.

  11. DO TIE LABORATORY BASED METHODS REALLY REFLECT FIELD CONDITIONS

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...

  12. How to Plan a Theme Based Field Day

    ERIC Educational Resources Information Center

    Shea, Scott A.; Fagala, Lisa M.

    2006-01-01

    Having a theme-based field day is a great way to get away from doing the traditional track-and-field type events, such as the softball throw, 50 yard dash, and sack race, year after year. In a theme-based field day format all stations or events are planned around a particular theme. This allows the teacher to be creative while also adding…

  13. Bootstrapping conformal field theories with the extremal functional method.

    PubMed

    El-Showk, Sheer; Paulos, Miguel F

    2013-12-13

    The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low-lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary, with no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less well-known CFTs in the near future.

  14. Primary combination of phase-field and discrete dislocation dynamics methods for investigating athermal plastic deformation in various realistic Ni-base single crystal superalloy microstructures

    NASA Astrophysics Data System (ADS)

    Gao, Siwen; Rajendran, Mohan Kumar; Fivel, Marc; Ma, Anxin; Shchyglo, Oleg; Hartmaier, Alexander; Steinbach, Ingo

    2015-10-01

    Three-dimensional discrete dislocation dynamics (DDD) simulations in combination with the phase-field method are performed to investigate the influence of different realistic Ni-base single crystal superalloy microstructures with the same volume fraction of γ′ precipitates on plastic deformation at room temperature. The phase-field method is used to generate realistic microstructures as the boundary conditions for DDD simulations in which a constant high uniaxial tensile load is applied along different crystallographic directions. In addition, the lattice mismatch between the γ and γ′ phases is taken into account as a source of internal stresses. Due to the high antiphase boundary energy and the rare formation of superdislocations, precipitate cutting is not observed in the present simulations. Therefore, the plastic deformation is mainly caused by dislocation motion in γ matrix channels. From a comparison of the macroscopic mechanical response and the dislocation evolution for different microstructures in each loading direction, we found that, for a given γ′ phase volume fraction, the optimal microstructure should possess narrow and homogeneous γ matrix channels.

  15. Robustness-Based Simplification of 2D Steady and Unsteady Vector Fields.

    PubMed

    Skraba, Primoz; Bei Wang; Guoning Chen; Rosen, Paul

    2015-08-01

    Vector field simplification aims to reduce the complexity of the flow by removing features in order of their relevance and importance, to reveal prominent behavior and obtain a compact representation for interpretation. Most existing simplification techniques based on the topological skeleton successively remove pairs of critical points connected by separatrices, using distance or area-based relevance measures. These methods rely on the stable extraction of the topological skeleton, which can be difficult due to instability in numerical integration, especially when processing highly rotational flows. In this paper, we propose a novel simplification scheme derived from the recently introduced topological notion of robustness which enables the pruning of sets of critical points according to a quantitative measure of their stability, that is, the minimum amount of vector field perturbation required to remove them. This leads to a hierarchical simplification scheme that encodes flow magnitude in its perturbation metric. Our novel simplification algorithm is based on degree theory and has minimal boundary restrictions. Finally, we provide an implementation under the piecewise-linear setting and apply it to both synthetic and real-world datasets. We show local and complete hierarchical simplifications for steady as well as unsteady vector fields.

  16. A field method for making a quantitative estimate of altered tuff in sandstone

    USGS Publications Warehouse

    Cadigan, R.A.

    1954-01-01

    The use of benzidine to identify altered tuff in sandstone is practical for field or field laboratory studies associated with stratigraphic correlations, mineral deposit investigations, or paleogeographic interpretations. The method is based on the ability of a saturated benzidine (C12H12N2) solution to produce a blue stain on montmorillonite-bearing tuff grains. The method is substantiated by the results of microscopic, X-ray spectrometer, and spectrographic tests, which lead to the conclusion that: (1) the benzidine stain test differentiates grains of different composition, (2) the white or gray grains which are stained a uniform blue color are fragments of altered tuff, and (3) white or gray grains which stain in only a few small spots are probably silicified tuff. An amount of sand grains taken from a hand specimen or an outcrop, about as much as will be held by a penny, is spread out on a nonabsorbent white surface and soaked with benzidine for 5 minutes. The approximate number of blue grains and the average grain size are used in a chart to determine a reference number which measures the relative order of abundance. The chart, based on a volume relationship, corrects for the variation in the number of grains in the sample as the grain size varies. Practical use of the method depends on a knowledge of several precautionary measures as well as an understanding of the limitations of benzidine staining tests.

  17. Field trip method as an effort to reveal student environmental literacy on biodiversity issue and context

    NASA Astrophysics Data System (ADS)

    Rijal, M.; Saefudin; Amprasto

    2018-05-01

    The field trip method, through the investigation of local biodiversity cases, can provide educational experiences for students. This learning activity was an effort to reveal students' environmental literacy on biodiversity. The aims of the study were (1) to describe the activities through which students obtained information about the biodiversity issue and its context during the field trip, (2) to describe the students' findings during the field trip, and (3) to reveal the students' environmental literacy based on a pre-test and post-test. The research used a weak-experimental design and involved 34 senior high school students in Bandung, Indonesia. The research instruments for collecting data were an environmental literacy test, observation sheets, and questionnaire sheets for students. The data were analyzed using quantitative descriptive methods. The results show that more than 79% of the students gave a positive view of each field trip activity, i.e., student activity during work (97%-100%), during gathering information (79%-100%), during exchanging information with friends (82%-100%), and student interest in biodiversity after the field trip activity (85%-100%). The students gained knowledge about the diversity of vertebrate animals and their characteristics, the status and condition of the animals, and the sources of the animals in the biodiversity cases studied. The students' environmental literacy tends to be at a moderate level based on the test. Meanwhile, the average scores for attitudes and actions were greater than those for the knowledge and cognitive skill components.

  18. Helical magnetic fields in molecular clouds?. A new method to determine the line-of-sight magnetic field structure in molecular clouds

    NASA Astrophysics Data System (ADS)

    Tahani, M.; Plume, R.; Brown, J. C.; Kainulainen, J.

    2018-06-01

    Context. Magnetic fields pervade the interstellar medium (ISM) and are believed to be important in the process of star formation, yet probing magnetic fields in star formation regions is challenging. Aims: We propose a new method to use Faraday rotation measurements in small-scale star forming regions to find the direction and magnitude of the component of the magnetic field along the line of sight. We test the proposed method in four relatively nearby regions of Orion A, Orion B, Perseus, and California. Methods: We use rotation measure data from the literature. We adopt a simple approach based on relative measurements to estimate the rotation measure due to the molecular clouds over the Galactic contribution. We then use a chemical evolution code along with extinction maps of each cloud to find the electron column density of the molecular cloud at the position of each rotation measure data point. Combining the rotation measures produced by the molecular clouds and the electron column density, we calculate the line-of-sight magnetic field strength and direction. Results: In California and Orion A, we find clear evidence that the magnetic fields on one side of these filamentary structures point towards us and point away from us on the other side. Even though the magnetic fields in Perseus might seem to suggest the same behavior, not enough data points are available to draw such conclusions. In Orion B, as well, there are not enough data points available to detect such behavior. This magnetic field reversal is consistent with a helical magnetic field morphology. In the vicinity of available Zeeman measurements in OMC-1, OMC-B, and the dark cloud Barnard 1, we find magnetic field values of -23 ± 38 μG, -129 ± 28 μG, and 32 ± 101 μG, respectively, which are in agreement with the Zeeman measurements. Tables 1 to 7 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http
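
    The final conversion step described in the Methods, from a cloud's rotation measure and electron column density to a line-of-sight field strength, follows the standard Faraday rotation relation and can be sketched as below. The subtraction of the Galactic contribution and the chemical/extinction modelling are assumed to have been done already, and the numbers are purely illustrative.

    ```python
    def los_magnetic_field_uG(rm_cloud_rad_m2: float, electron_column_cm3_pc: float) -> float:
        """Line-of-sight field strength from the standard Faraday rotation relation.

        RM = 0.812 * integral( n_e * B_parallel dl ) with RM in rad m^-2, n_e in
        cm^-3, dl in pc and B in microgauss, so an average B_parallel is roughly
        RM / (0.812 * N_e) when N_e is the electron column density in cm^-3 pc.
        (The paper estimates RM_cloud by subtracting the Galactic contribution and
        derives N_e from a chemical-evolution model plus extinction maps; both
        steps are assumed done here, and the example numbers are invented.)
        """
        return rm_cloud_rad_m2 / (0.812 * electron_column_cm3_pc)

    print(round(los_magnetic_field_uG(rm_cloud_rad_m2=-40.0, electron_column_cm3_pc=2.0), 1))  # ~ -24.6 uG
    ```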

  19. Field-based generation and social validation of managers and staff competencies for small community residences.

    PubMed

    Thousand, J S; Burchard, S N; Hasazi, J E

    1986-01-01

    Characteristics and competencies for four staff positions in community residences for individuals with mental retardation were identified utilizing multiple empirical and deductive methods with field-based practitioners and field-based experts. The more commonly used competency generation methods of expert opinion and job performance analysis generated a high degree of knowledge and skill-based competencies similar to course curricula. Competencies generated by incumbent practitioners through open-ended methods of personal structured interview and critical incident analysis were ones which related to personal style, interpersonal interaction, and humanistic orientation. Although seldom included in staff, paraprofessional, or professional training curricula, these latter competencies include those identified by Carl Rogers as essential for developing an effective helping relationship in a therapeutic situation (i.e., showing liking, interest, and respect for the clients; being able to communicate positive regard to the client). Of 21 core competency statements selected as prerequisites to employment for all four staff positions, the majority (17 of 21) represented interpersonal skills important to working with others, including responsiveness to resident needs, personal valuation of persons with mental retardation, and normalization principles.

  20. Validation of Field Methods to Assess Body Fat Percentage in Elite Youth Soccer Players.

    PubMed

    Munguia-Izquierdo, Diego; Suarez-Arrones, Luis; Di Salvo, Valter; Paredes-Hernandez, Victor; Alcazar, Julian; Ara, Ignacio; Kreider, Richard; Mendez-Villanueva, Alberto

    2018-05-01

    This study determined the most effective field method for quantifying body fat percentage in male elite youth soccer players and developed prediction equations based on anthropometric variables. Forty-four male elite-standard youth soccer players aged 16.3-18.0 years underwent body fat percentage assessments, including bioelectrical impedance analysis and the calculation of various skinfold-based prediction equations. Dual X-ray absorptiometry provided a criterion measure of body fat percentage. Correlation coefficients, bias, limits of agreement, and differences were used as validity measures, and regression analyses were used to develop soccer-specific prediction equations. The equations from Sarria et al. (1998) and Durnin & Rahaman (1967) reached very large correlations and the lowest biases, and they reached neither the practically worthwhile difference nor the substantial difference between methods. The new youth soccer-specific skinfold equation included a combination of triceps and supraspinale skinfolds. None of the practical methods compared in this study are adequate for estimating body fat percentage in male elite youth soccer players, except for the equations from Sarria et al. (1998) and Durnin & Rahaman (1967). The new youth soccer-specific equation calculated in this investigation is the only field method specifically developed and validated in elite male players, and it shows potentially good predictive power. © Georg Thieme Verlag KG Stuttgart · New York.

  1. A new method of quantitative cavitation assessment in the field of a lithotripter.

    PubMed

    Jöchle, K; Debus, J; Lorenz, W J; Huber, P

    1996-01-01

    Transient cavitation seems to be a very important effect in the interaction of pulsed high-energy ultrasound with biologic tissues. Using a newly developed laser optical system we are able to determine the life-span of transient cavities (relative error less than +/- 5%) in the focal region of a lithotripter (Lithostar, Siemens). The laser scattering method is based on the detection of scattered laser light reflected during a bubble's life. This method requires no sensor material in the path of the sound field; thus, it avoids any interference with bubble dynamics during the measurement. Knowledge of the time of bubble decay allows conclusions to be reached on the destructive power of the cavities. By combining the results of the life-span measurements with the maximum bubble radius obtained from stroboscopic photographs, we found that the measured time of bubble decay and the time predicted by Rayleigh's law differ by only about 13%, even in the case of complex bubble fields. It can be shown that the laser scattering method is suitable for assessing cavitation events quantitatively. Moreover, it will enable us to compare different medical ultrasound sources that have the capability to generate cavitation.
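
    The comparison with Rayleigh's law mentioned above uses the classical collapse time of an empty spherical cavity; a minimal sketch of that reference formula, with illustrative values rather than the paper's data, is given below.

    ```python
    import math

    def rayleigh_collapse_time(r_max_m: float, density_kg_m3: float = 998.0,
                               delta_p_pa: float = 101_325.0) -> float:
        """Classical Rayleigh collapse time of an empty spherical cavity:
        t_c = 0.915 * R_max * sqrt(rho / delta_p).
        This is the reference against which the measured bubble lifetimes are
        compared above; the numbers below are illustrative, not from the paper."""
        return 0.915 * r_max_m * math.sqrt(density_kg_m3 / delta_p_pa)

    # A 1 mm maximum bubble radius in water at atmospheric driving pressure:
    print(f"{rayleigh_collapse_time(1e-3) * 1e6:.1f} microseconds")  # ~ 90.8 us
    ```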

  2. Epidemic spreading in weighted networks: an edge-based mean-field solution.

    PubMed

    Yang, Zimo; Zhou, Tao

    2012-05-01

    Weight distribution greatly impacts the epidemic spreading taking place on top of networks. This paper presents a study of a susceptible-infected-susceptible model on regular random networks with different kinds of weight distributions. Simulation results show that the more homogeneous weight distribution leads to higher epidemic prevalence, which, unfortunately, could not be captured by the traditional mean-field approximation. This paper gives an edge-based mean-field solution for general weight distribution, which can quantitatively reproduce the simulation results. This method could be applied to characterize the nonequilibrium steady states of dynamical processes on weighted networks.
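
    For contrast with the edge-based solution proposed in the paper, the standard individual-based (quenched) mean-field SIS baseline on a weighted network can be sketched as a fixed-point iteration; the toy weighted graph and rate parameters below are invented for the example.

    ```python
    import numpy as np

    def sis_mean_field_prevalence(W, beta, mu, n_iter=500):
        """Steady-state infection probabilities of the individual-based mean-field
        SIS model on a weighted network: the fixed point of
        p_i = beta * s_i / (mu + beta * s_i) with s_i = sum_j W_ij * p_j.

        This is the standard mean-field baseline; the paper above derives a more
        accurate edge-based mean-field solution for weighted networks."""
        p = np.full(W.shape[0], 0.5)
        for _ in range(n_iter):
            s = W @ p
            p = beta * s / (mu + beta * s)
        return p

    # Toy symmetric weighted random graph, a few links per node, illustrative only.
    rng = np.random.default_rng(4)
    n = 200
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        for j in rng.choice(others, size=4, replace=False):
            w = rng.exponential(1.0)
            W[i, j] = W[j, i] = w
    print("epidemic prevalence:", sis_mean_field_prevalence(W, beta=0.3, mu=1.0).mean())
    ```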

  3. Laser-based methods for the analysis of low molecular weight compounds in biological matrices.

    PubMed

    Kiss, András; Hopfgartner, Gérard

    2016-07-15

    Laser-based desorption and/or ionization methods play an important role in the field of the analysis of low molecular-weight compounds (LMWCs) because they allow direct analysis with high-throughput capabilities. In the recent years there were several new improvements in ionization methods with the emergence of novel atmospheric ion sources such as laser ablation electrospray ionization or laser diode thermal desorption and atmospheric pressure chemical ionization and in sample preparation methods with the development of new matrix compounds for matrix-assisted laser desorption/ionization (MALDI). Also, the combination of ion mobility separation with laser-based ionization methods starts to gain popularity with access to commercial systems. These developments have been driven mainly by the emergence of new application fields such as MS imaging and non-chromatographic analytical approaches for quantification. This review aims to present these new developments in laser-based methods for the analysis of low-molecular weight compounds by MS and several potential applications. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. A Refined Crop Drought Monitoring Method Based on the Chinese GF-1 Wide Field View Data

    PubMed Central

    Chang, Sheng; Wu, Bingfang; Yan, Nana; Zhu, Jianjun; Wen, Qi; Xu, Feng

    2018-01-01

    In this study, modified perpendicular drought index (MPDI) models based on the red-near-infrared spectral space are established for the first time through an analysis of the spectral characteristics of GF-1 wide field view (WFV) data, which have a high spatial resolution of 16 m and a revisit frequency of up to once every 4 days. The GF-1 data come from the Chinese-made, new-generation, high-resolution GF-1 remote sensing satellites. Soil-type spatial data are introduced to simulate soil lines for different soil types, reducing the errors caused by using a single soil line. Multiple vegetation indices are employed to analyze the response of the MPDI models. Relative soil moisture content (RSMC) and precipitation data acquired at selected stations are used to optimize the drought models, and the best one is the two-band enhanced vegetation index (EVI2)-based MPDI model. Statistics on the crop area significantly affected by drought, obtained from a local governmental department, were used for validation. High correlations and small differences in drought-affected crop area were found between the field observation data from the local governmental department and the EVI2-based MPDI results. The percentage bias is between −21.8% and 14.7% in the five sub-areas, with an accuracy above 95% when the performance is evaluated over the whole study region. Overall, the proposed EVI2-based MPDI for GF-1 WFV data has great potential for reliably monitoring crop drought at a relatively high frequency and spatial resolution. Because there is currently almost no drought model based on GF-1 data, full exploitation of the advantages of GF-1 satellite data and further improvement of the capacity to observe ground surface objects can provide a high temporal and spatial resolution data source for the refined monitoring of crop droughts. PMID:29690639

  5. A Refined Crop Drought Monitoring Method Based on the Chinese GF-1 Wide Field View Data.

    PubMed

    Chang, Sheng; Wu, Bingfang; Yan, Nana; Zhu, Jianjun; Wen, Qi; Xu, Feng

    2018-04-23

    In this study, modified perpendicular drought index (MPDI) models based on the red-near-infrared spectral space are established for the first time through an analysis of the spectral characteristics of GF-1 wide field view (WFV) data, which have a high spatial resolution of 16 m and a revisit frequency of up to once every 4 days. The GF-1 data come from the Chinese-made, new-generation, high-resolution GF-1 remote sensing satellites. Soil-type spatial data are introduced to simulate soil lines for different soil types, reducing the errors caused by using a single soil line. Multiple vegetation indices are employed to analyze the response of the MPDI models. Relative soil moisture content (RSMC) and precipitation data acquired at selected stations are used to optimize the drought models, and the best one is the two-band enhanced vegetation index (EVI2)-based MPDI model. Statistics on the crop area significantly affected by drought, obtained from a local governmental department, were used for validation. High correlations and small differences in drought-affected crop area were found between the field observation data from the local governmental department and the EVI2-based MPDI results. The percentage bias is between −21.8% and 14.7% in the five sub-areas, with an accuracy above 95% when the performance is evaluated over the whole study region. Overall, the proposed EVI2-based MPDI for GF-1 WFV data has great potential for reliably monitoring crop drought at a relatively high frequency and spatial resolution. Because there is currently almost no drought model based on GF-1 data, full exploitation of the advantages of GF-1 satellite data and further improvement of the capacity to observe ground surface objects can provide a high temporal and spatial resolution data source for the refined monitoring of crop droughts.
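
    The perpendicular drought index family on which the study builds is computed from red and near-infrared reflectance together with the soil-line slope. The sketch below follows the PDI/MPDI definitions as commonly given in the literature (the study fits a separate soil line per soil type), and every numeric value, including the assumed pure-vegetation reflectances, is illustrative.

    ```python
    import numpy as np

    def pdi(red, nir, m):
        """Perpendicular drought index from red/NIR reflectance and soil-line slope m."""
        return (red + m * nir) / np.sqrt(m ** 2 + 1.0)

    def mpdi(red, nir, m, fv, red_veg=0.05, nir_veg=0.50):
        """Modified PDI: removes the vegetation contribution using the vegetation
        fraction fv and assumed pure-vegetation reflectances (red_veg, nir_veg).
        Formulas follow the commonly cited PDI/MPDI definitions; the soil-line
        slope m is fitted per soil type in the study above, and all numbers here
        are illustrative."""
        return (red + m * nir - fv * (red_veg + m * nir_veg)) / ((1.0 - fv) * np.sqrt(m ** 2 + 1.0))

    red = np.array([0.12, 0.20])   # hypothetical red-band reflectances
    nir = np.array([0.35, 0.25])   # hypothetical near-infrared reflectances
    fv = np.array([0.60, 0.10])    # vegetation fraction, e.g. from a vegetation index
    print(pdi(red, nir, m=1.2), mpdi(red, nir, m=1.2, fv=fv))
    ```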

  6. Verification of a ground-based method for simulating high-altitude, supersonic flight conditions

    NASA Astrophysics Data System (ADS)

    Zhou, Xuewen; Xu, Jian; Lv, Shuiyan

    Ground-based methods for accurately representing high-altitude, high-speed flight conditions have been an important research topic in the aerospace field. Based on an analysis of the requirements for high-altitude supersonic flight tests, a ground-based test bed was designed combining a Laval nozzle, which is often found in wind tunnels, with a rocket sled system. Sled tests were used to verify the performance of the test bed. The test results indicated that the test bed produced a uniform flow field with a static pressure and density equivalent to atmospheric conditions at an altitude of 13-15 km and a flow velocity of approximately Mach 2.4. This test method has the advantages of accuracy, fewer experimental limitations, and reusability.

  7. Characterization for elastic constants of fused deposition modelling-fabricated materials based on the virtual fields method and digital image correlation

    NASA Astrophysics Data System (ADS)

    Cao, Quankun; Xie, Huimin

    2017-12-01

    Fused deposition modelling (FDM), a widely used rapid prototyping process, is a promising technique in manufacturing engineering. In this work, a method for characterizing elastic constants of FDM-fabricated materials is proposed. First of all, according to the manufacturing process of FDM, orthotropic constitutive model is used to describe the mechanical behavior. Then the virtual fields method (VFM) is applied to characterize all the mechanical parameters (Q_{11}, Q_{22}, Q_{12}, Q_{66}) using the full-field strain, which is measured by digital image correlation (DIC). Since the principal axis of the FDM-fabricated structure is sometimes unknown due to the complexity of the manufacturing process, a disk in diametrical compression is used as the load configuration so that the loading angle can be changed conveniently. To verify the feasibility of the proposed method, finite element method (FEM) simulation is conducted to obtain the strain field of the disk. The simulation results show that higher accuracy can be achieved when the loading angle is close to 30°. Finally, a disk fabricated by FDM was used for the experiment. By rotating the disk, several tests with different loading angles were conducted. To determine the position of the principal axis in each test, two groups of parameters (Q_{11}, Q_{22}, Q_{12}, Q_{66}) are calculated by two different groups of virtual fields. Then the corresponding loading angle can be determined by minimizing the deviation between two groups of the parameters. After that, the four constants (Q_{11}, Q_{22}, Q_{12}, Q_{66}) were determined from the test with an angle of 27°.

  8. Estimation of phase derivatives using discrete chirp-Fourier-transform-based method.

    PubMed

    Gorthi, Sai Siva; Rastogi, Pramod

    2009-08-15

    Estimation of phase derivatives is an important task in many interferometric measurements in optical metrology. This Letter introduces a method based on discrete chirp-Fourier transform for accurate and direct estimation of phase derivatives, even in the presence of noise. The method is introduced in the context of the analysis of reconstructed interference fields in digital holographic interferometry. We present simulation and experimental results demonstrating the utility of the proposed method.
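
    The following is a minimal numerical sketch of the underlying idea: over a short window, the interference phase is modeled as a chirp, phase(n) ≈ 2π(f·n + c·n²/2), and a brute-force discrete chirp-Fourier transform picks the (f, c) pair that best matches the data, from which the local phase derivative follows. The window length, search grids, and signal model are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def dcft_peak(window, freqs, chirps):
    # Brute-force discrete chirp-Fourier transform of one window:
    # dechirp with each candidate chirp rate c, correlate with
    # exp(-2j*pi*f*n), and keep the (f, c) pair of largest magnitude.
    n = np.arange(len(window))
    best_mag, best_fc = -1.0, (0.0, 0.0)
    for c in chirps:
        dechirped = window * np.exp(-1j * np.pi * c * n ** 2)
        spec = np.abs(np.exp(-2j * np.pi * np.outer(freqs, n)) @ dechirped)
        k = int(np.argmax(spec))
        if spec[k] > best_mag:
            best_mag, best_fc = spec[k], (freqs[k], c)
    return best_fc

# synthetic reconstructed interference field with a quadratic phase
rng = np.random.default_rng(0)
n = np.arange(256)
phase = 2 * np.pi * (0.05 * n + 1e-4 * n ** 2)   # true derivative: 2*pi*(0.05 + 2e-4*n)
field = np.exp(1j * phase) + 0.05 * rng.standard_normal(n.size)

f, c = dcft_peak(field[:64],
                 freqs=np.linspace(0.0, 0.2, 201),
                 chirps=np.linspace(0.0, 5e-4, 51))
print("phase derivative at window start ≈", 2 * np.pi * f, "rad/sample")
print("estimated chirp rate ≈", c, "(true value 2e-4)")
```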

  9. Dewey's Concept of Experience for Inquiry-Based Landscape Drawing during Field Studies

    ERIC Educational Resources Information Center

    Tillmann, Alexander; Albrecht, Volker; Wunderlich, Jürgen

    2017-01-01

    The epistemological and educational philosophy of John Dewey is used as a theoretical basis to analyze processes of knowledge construction during geographical field studies. The experience of landscape drawing as a method of inquiry and a starting point for research-based learning is empirically evaluated. The basic drawing skills are acquired…

  10. FIELD OPERATIONS AND METHODS FOR MEASURING THE ECOLOGICAL CONDITION OF NON-WADEABLE RIVERS AND STREAMS

    EPA Science Inventory

    The methods and instructions for field operations presented in this manual for surveys of non-wadeable streams and rivers were developed and tested based on 55 sample sites in the Mid-Atlantic region and 53 sites in an Oregon study during two years of pilot and demonstration proj...

  11. Field Analysis of Microbial Contamination Using Three Molecular Methods in Parallel

    NASA Technical Reports Server (NTRS)

    Morris, H.; Stimpson, E.; Schenk, A.; Kish, A.; Damon, M.; Monaco, L.; Wainwright, N.; Steele, A.

    2010-01-01

    Advanced technologies with the capability of detecting microbial contamination remain an integral tool for the next stage of space agency proposed exploration missions. To maintain a clean, operational spacecraft environment with minimal potential for forward contamination, such technology is a necessity, particularly, the ability to analyze samples near the point of collection and in real-time both for conducting biological scientific experiments and for performing routine monitoring operations. Multiple molecular methods for detecting microbial contamination are available, but many are either too large or not validated for use on spacecraft. Two methods, the adenosine triphosphate (ATP) and Limulus Amebocyte Lysate (LAL) assays have been approved by the NASA Planetary Protection Office for the assessment of microbial contamination on spacecraft surfaces. We present the first parallel field analysis of microbial contamination pre- and post-cleaning using these two methods as well as universal primer-based polymerase chain reaction (PCR).

  12. The Investigation of Attitude Changes of Elementary Preservice Teachers in a Competency-Based, Field-Oriented Science Methods Course and Attitude Changes of Classroom Teachers Cooperating with the Field Component.

    ERIC Educational Resources Information Center

    Piper, Martha K.

    Thirty-six students enrolled in an elementary science methods course were randomly selected and given an instrument using Osgood's semantic differential approach the first week of class, the sixth week on campus prior to field experiences, and the thirteenth week following field experiences. The elementary teachers who had observed the university…

  13. Intelligent Gearbox Diagnosis Methods Based on SVM, Wavelet Lifting and RBR

    PubMed Central

    Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng

    2010-01-01

    Given the problems in intelligent gearbox diagnosis methods, it is difficult to obtain the desired information and a large enough sample size to study; therefore, we propose the application of various methods for gearbox fault diagnosis, including wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, a SVM could be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault; thus effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built based on the field rules summarized by experts to identify the detailed fault type. Results have shown that SVM is a powerful tool to accomplish gearbox fault pattern recognition when the sample size is small, whereas the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis. PMID:22399894
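
    A minimal sketch of the feature-extraction and first-stage classification step described above (wavelet packet decomposition, band energies as feature vectors, SVM for normal/faulty pattern recognition) is given below. The wavelet, decomposition level, synthetic signals, and the use of PyWavelets and scikit-learn are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def band_energy_features(signal, wavelet="db4", level=3):
    # Wavelet packet decomposition; the energy of each terminal band,
    # normalized by the total energy, is one feature.
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    return energies / energies.sum()

# synthetic training data: class 0 = "normal", class 1 = "faulty" (impulsive)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        sig = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.standard_normal(t.size)
        if label == 1:
            sig[::128] += 3.0          # crude stand-in for periodic fault impulses
        X.append(band_energy_features(sig))
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)
test = np.sin(2 * np.pi * 50 * t) + 0.2 * rng.standard_normal(t.size)
print("predicted class:", clf.predict([band_energy_features(test)]))
```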

  14. Intelligent gearbox diagnosis methods based on SVM, wavelet lifting and RBR.

    PubMed

    Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng

    2010-01-01

    Given the problems in intelligent gearbox diagnosis methods, it is difficult to obtain the desired information and a large enough sample size to study; therefore, we propose the application of various methods for gearbox fault diagnosis, including wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, a SVM could be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault; thus effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built based on the field rules summarized by experts to identify the detailed fault type. Results have shown that SVM is a powerful tool to accomplish gearbox fault pattern recognition when the sample size is small, whereas the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis.

  15. Domain Adaptation Methods for Improving Lab-to-field Generalization of Cocaine Detection using Wearable ECG.

    PubMed

    Natarajan, Annamalai; Angarita, Gustavo; Gaiser, Edward; Malison, Robert; Ganesan, Deepak; Marlin, Benjamin M

    2016-09-01

    Mobile health research on illicit drug use detection typically involves a two-stage study design where data to learn detectors is first collected in lab-based trials, followed by a deployment to subjects in a free-living environment to assess detector performance. While recent work has demonstrated the feasibility of wearable sensors for illicit drug use detection in the lab setting, several key problems can limit lab-to-field generalization performance. For example, lab-based data collection often has low ecological validity, the ground-truth event labels collected in the lab may not be available at the same level of temporal granularity in the field, and there can be significant variability between subjects. In this paper, we present domain adaptation methods for assessing and mitigating potential sources of performance loss in lab-to-field generalization and apply them to the problem of cocaine use detection from wearable electrocardiogram sensor data.

  16. A data base of geologic field spectra

    NASA Technical Reports Server (NTRS)

    Kahle, A. B.; Goetz, A. F. H.; Paley, H. N.; Alley, R. E.; Abbott, E. A.

    1981-01-01

    It is noted that field samples measured in the laboratory do not always present an accurate picture of the ground surface sensed by airborne or spaceborne instruments because of the heterogeneous nature of most surfaces and because samples are disturbed and surface characteristics changed by collection and handling. The development of new remote sensing instruments relies on the analysis of surface materials in their natural state. The existence of thousands of Portable Field Reflectance Spectrometer (PFRS) spectra has necessitated a single, all-inclusive data base that permits greatly simplified searching and sorting procedures and facilitates further statistical analyses. The data base developed at JPL for cataloging geologic field spectra is discussed.

  17. Grain growth prediction based on data assimilation by implementing 4DVar on multi-phase-field model

    NASA Astrophysics Data System (ADS)

    Ito, Shin-ichi; Nagao, Hiromichi; Kasuya, Tadashi; Inoue, Junya

    2017-12-01

    We propose a method to predict grain growth based on data assimilation by using a four-dimensional variational method (4DVar). When implemented on a multi-phase-field model, the proposed method allows us to calculate the predicted grain structures and uncertainties in them that depend on the quality and quantity of the observational data. We confirm through numerical tests involving synthetic data that the proposed method correctly reproduces the true phase-field assumed in advance. Furthermore, it successfully quantifies uncertainties in the predicted grain structures, where such uncertainty quantifications provide valuable information to optimize the experimental design.
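
    The sketch below illustrates the 4DVar idea on a deliberately simple scalar "grain radius" model rather than the multi-phase-field model used in the paper: the initial state is chosen to minimize a cost made of a background term plus the misfit to sparse, noisy observations over the assimilation window. The toy growth law, error variances, and optimizer are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def forward(r0, steps=50, k=0.1, r_max=5.0):
    # Toy growth model: logistic evolution of a mean grain radius.
    r = np.empty(steps)
    r[0] = r0
    for i in range(1, steps):
        r[i] = r[i - 1] + k * r[i - 1] * (1.0 - r[i - 1] / r_max)
    return r

rng = np.random.default_rng(1)
truth = forward(0.5)
obs_idx = np.arange(0, 50, 5)                     # sparse observation times
obs = truth[obs_idx] + 0.05 * rng.standard_normal(obs_idx.size)

def cost(x):
    # 4DVar-style cost: background term plus observation misfit over the window.
    r0_bg, sigma_bg, sigma_obs = 0.8, 0.3, 0.05
    traj = forward(x[0])
    j_bg = 0.5 * ((x[0] - r0_bg) / sigma_bg) ** 2
    j_obs = 0.5 * np.sum(((traj[obs_idx] - obs) / sigma_obs) ** 2)
    return j_bg + j_obs

res = minimize(cost, x0=[0.8], method="Nelder-Mead")
print("estimated initial radius:", res.x[0], "(truth: 0.5)")
```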

  18. A GIS-based method for household recruitment in a prospective pesticide exposure study.

    PubMed

    Allpress, Justine L E; Curry, Ross J; Hanchette, Carol L; Phillips, Michael J; Wilcosky, Timothy C

    2008-04-30

    Recent advances in GIS technology and remote sensing have provided new opportunities to collect ecologic data on agricultural pesticide exposure. Many pesticide studies have used historical or records-based data on crops and their associated pesticide applications to estimate exposure by measuring residential proximity to agricultural fields. Very few of these studies collected environmental and biological samples from study participants. One of the reasons for this is the cost of identifying participants who reside near study fields and analyzing samples obtained from them. In this paper, we present a cost-effective, GIS-based method for crop field selection and household recruitment in a prospective pesticide exposure study in a remote location. For the most part, our multi-phased approach was carried out in a research facility, but involved two brief episodes of fieldwork for ground truthing purposes. This method was developed for a larger study designed to examine the validity of indirect pesticide exposure estimates by comparing measured exposures in household dust, water and urine with records-based estimates that use crop location, residential proximity and pesticide application data. The study focused on the pesticide atrazine, a broadleaf herbicide used in corn production and one of the most widely-used pesticides in the U.S. We successfully used a combination of remotely-sensed data, GIS-based methods and fieldwork to select study fields and recruit participants in Illinois, a state with high corn production and heavy atrazine use. Our several-step process consisted of the identification of potential study fields and residential areas using aerial photography; verification of crop patterns and land use via site visits; development of a GIS-based algorithm to define recruitment areas around crop fields; acquisition of geocoded household-level data within each recruitment area from a commercial vendor; and confirmation of final participant household

  19. A method for gear fatigue life prediction considering the internal flow field of the gear pump

    NASA Astrophysics Data System (ADS)

    Shen, Haidong; Li, Zhiqiang; Qi, Lele; Qiao, Liang

    2018-01-01

    The gear pump is the most widely used positive-displacement hydraulic pump and is the main power source of many hydraulic systems. Its performance is influenced by many factors, such as the working environment, maintenance, and fluid pressure. Unlike a gear transmission system, the internal flow field of a gear pump has a strong impact on gear life, so the internal hydraulic conditions need to be considered when predicting gear fatigue life. In this paper, taking a certain aircraft gear pump as the research object and aiming at its typical failure form, gear contact fatigue, a prediction method based on virtual simulation is proposed. The method uses CFD (computational fluid dynamics) software to analyze the pressure distribution of the internal flow field of the gear pump and constructs a one-way fluid-structure coupling model of the gear to obtain the tooth-surface contact stress in Ansys Workbench. Finally, the nominal stress method and Miner's cumulative damage theory are employed to calculate the gear contact fatigue life based on a modified material P-S-N curve. Engineering practice shows that the method is feasible and efficient.
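
    The final life-calculation step (an S-N curve combined with Miner's linear damage rule) can be illustrated with a short sketch. The Basquin-form curve constants and the tooth-contact stress spectrum below are made-up numbers for illustration, not the modified material P-S-N curve or the loads from the paper.

```python
def cycles_to_failure(stress_amp, sigma_f=2000.0, b=-0.12):
    # Basquin-form S-N relation sigma_a = sigma_f * (2N)^b, solved for N.
    return 0.5 * (stress_amp / sigma_f) ** (1.0 / b)

def miner_life(stress_blocks):
    # stress_blocks: (stress amplitude [MPa], cycles per duty block) pairs.
    # Miner's rule: failure when the summed damage D = sum(n_i / N_i) reaches 1,
    # so the number of duty blocks to failure is 1 / (damage per block).
    damage_per_block = sum(n / cycles_to_failure(s) for s, n in stress_blocks)
    return 1.0 / damage_per_block

# hypothetical tooth-contact stress spectrum from the coupled CFD/FE analysis
spectrum = [(450.0, 2.0e4), (380.0, 8.0e4), (300.0, 3.0e5)]
print("duty blocks to failure:", miner_life(spectrum))
```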

  20. Measurement method of magnetic field for the wire suspended micro-pendulum accelerometer.

    PubMed

    Lu, Yongle; Li, Leilei; Hu, Ning; Pan, Yingjun; Ren, Chunhua

    2015-04-13

    The force producer is one of the core components of a wire-suspended micro-pendulum accelerometer, and the stability of the permanent magnet in the force producer determines the consistency of the acceleration sensor's scale factor. For an assembled accelerometer, direct measurement of the magnetic field strength is not feasible, as a magnetometer probe cannot be placed inside the micro-scale space of the sensor. This paper proposes an indirect measurement method for the remnant magnetization of the micro-pendulum accelerometer. The measurement is based on the working principle of the accelerometer, using the current output in several different scenarios to resolve the remnant magnetization of the permanent magnet. An iterative least-squares algorithm is used to adjust the data because of the nonlinearity of the problem. The calculated remnant magnetization was 1.035 T; compared to the true value, the error was less than 0.001 T. The proposed method provides effective theoretical guidance for measuring the magnetic field of the wire-suspended micro-pendulum accelerometer and for correcting the scale factor, temperature influence coefficients, etc.

  1. Traffic Video Image Segmentation Model Based on Bayesian and Spatio-Temporal Markov Random Field

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Bao, Xu; Li, Dawei; Yin, Yongwen

    2017-10-01

    A traffic video image is a dynamic image whose background and foreground change constantly, which leads to occlusion; in this case it is difficult to obtain an accurate segmentation with general methods. A segmentation algorithm based on Bayesian inference and a spatio-temporal Markov random field (ST-MRF) is put forward. It builds energy-function models of the observation field and the label field for a motion image sequence with the Markov property; then, according to Bayes' rule, it uses the interaction between the label field and the observation field, that is, the relationship between the label field's prior probability and the observation field's likelihood, to obtain the maximum a posteriori estimate of the label field. The ICM algorithm is then used to extract the moving object, completing the segmentation. Finally, the ST-MRF method and the Bayesian method combined with ST-MRF are compared. Experimental results show that the segmentation time of the Bayesian method combined with ST-MRF is shorter than that of ST-MRF alone and the computational workload is small; in heavy-traffic dynamic scenes the method also achieves a better segmentation effect.
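
    A minimal sketch of the label-update step (an ICM-style minimization of a data term plus a Potts smoothness term for a binary foreground/background label field) is shown below. The Gaussian background model, the flat foreground penalty, the parameter values, and the synchronous update are simplifying assumptions, not the energy functions defined in the paper.

```python
import numpy as np

def icm_segment(frame, background, sigma=10.0, tau=3.0, beta=2.0, n_iter=5):
    # Label field: 0 = background, 1 = foreground (moving object).
    diff2 = ((frame - background) / sigma) ** 2
    e_data0 = 0.5 * diff2                          # data energy of label 0
    e_data1 = np.full_like(diff2, 0.5 * tau ** 2)  # flat data energy of label 1
    labels = (e_data1 < e_data0).astype(int)       # threshold initialization
    for _ in range(n_iter):
        padded = np.pad(labels, 1, mode="edge")
        fg_neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                         padded[1:-1, :-2] + padded[1:-1, 2:]).astype(float)
        e0 = e_data0 + beta * fg_neighbours          # disagreeing neighbours if label 0
        e1 = e_data1 + beta * (4.0 - fg_neighbours)  # disagreeing neighbours if label 1
        labels = (e1 < e0).astype(int)               # synchronous ICM-style update
    return labels

rng = np.random.default_rng(0)
background = np.full((60, 80), 100.0)
frame = background + 8.0 * rng.standard_normal(background.shape)
frame[20:40, 30:55] += 60.0                          # synthetic moving object
print(icm_segment(frame, background).sum(), "foreground pixels")
```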

  2. Combining phase-field crystal methods with a Cahn-Hilliard model for binary alloys

    NASA Astrophysics Data System (ADS)

    Balakrishna, Ananya Renuka; Carter, W. Craig

    2018-04-01

    Diffusion-induced phase transitions typically change the lattice symmetry of the host material. In battery electrodes, for example, Li ions (diffusing species) are inserted between layers in a crystalline electrode material (host). This diffusion induces lattice distortions and defect formations in the electrode. The structural changes to the lattice symmetry affect the host material's properties. Here, we propose a 2D theoretical framework that couples a Cahn-Hilliard (CH) model, which describes the composition field of a diffusing species, with a phase-field crystal (PFC) model, which describes the host-material lattice symmetry. We couple the two continuum models via coordinate transformation coefficients. We introduce the transformation coefficients in the PFC method to describe affine lattice deformations. These transformation coefficients are modeled as functions of the composition field. Using this coupled approach, we explore the effects of coarse-grained lattice symmetry and distortions on a diffusion-induced phase transition process. In this paper, we demonstrate the working of the CH-PFC model through three representative examples: First, we describe base cases with hexagonal and square symmetries for two composition fields. Next, we illustrate how the CH-PFC method interpolates lattice symmetry across a diffuse phase boundary. Finally, we compute a Cahn-Hilliard type of diffusion and model the accompanying changes to lattice symmetry during a phase transition process.

  3. Two-Wavelength Multi-Gigahertz Frequency Comb-Based Interferometry for Full-Field Profilometry

    NASA Astrophysics Data System (ADS)

    Choi, Samuel; Kashiwagi, Ken; Kojima, Shuto; Kasuya, Yosuke; Kurokawa, Takashi

    2013-10-01

    The multi-gigahertz frequency comb-based interferometer exhibits only the interference amplitude peak without phase fringes, which enables a rapid axial scan for full-field profilometry and tomography. Despite these substantial technical advantages, a remaining problem is that the interference intensity undulates depending on the interference phase. To avoid this problem, we propose a compensation technique for the interference signals that uses two frequency combs with slightly different center wavelengths. Compensated full-field surface profile measurements of a cover glass and onion skin were demonstrated experimentally to verify the advantages of the proposed method.

  4. Evolution of solar magnetic fields - A new approach to MHD initial-boundary value problems by the method of nearcharacteristics

    NASA Technical Reports Server (NTRS)

    Nakagawa, Y.

    1980-01-01

    A method of analysis for the MHD initial-boundary problem is presented in which the model's formulation is based on the method of nearcharacteristics developed by Werner (1968) and modified by Shin and Kot (1978). With this method, the physical causality relationship can be traced from the perturbation to the response as in the method of characteristics, while achieving the advantage of a considerable reduction in mathematical procedures. The method offers the advantage of examining not only the evolution of nonforce free fields, but also the changes of physical conditions in the atmosphere accompanying the evolution of magnetic fields. The physical validity of the method is demonstrated with examples, and their significance in interpreting observations is discussed.

  5. Electric field strength determination in filamentary DBDs by CARS-based four-wave mixing

    NASA Astrophysics Data System (ADS)

    Boehm, Patrick; Kettlitz, Manfred; Brandenburg, Ronny; Hoeft, Hans; Czarnetzki, Uwe

    2016-09-01

    The electric field strength is a basic parameter of non-thermal plasmas. Therefore, a profound knowledge of the electric field distribution is crucial. In this contribution, a four-wave mixing technique based on coherent anti-Stokes Raman spectroscopy (CARS) is used to measure electric field strengths in filamentary dielectric barrier discharges (DBDs). The discharges are operated with a pulsed voltage in nitrogen at atmospheric pressure. Small amounts of hydrogen (10 vol%) are admixed as a tracer gas to evaluate the electric field strength in the 1 mm discharge gap. Absolute values of the electric field strength are determined by calibration of the CARS setup with high voltage amplitudes below the ignition threshold of the arrangement. Alteration of the electric field strength has been observed during the internal polarity reversal and the breakdown process. In this case the major advantage over emission-based methods is that this technique can be used independently of emission, e.g. in the pre-phase and in between two consecutive, opposite discharge pulses where no emission occurs at all. This work was supported by the Deutsche Forschungsgemeinschaft, Forschergruppe FOR 1123 and Sonderforschungsbereich TRR 24 ``Fundamentals of complex plasmas''.

  6. Using Educational Data Mining Methods to Assess Field-Dependent and Field-Independent Learners' Complex Problem Solving

    ERIC Educational Resources Information Center

    Angeli, Charoula; Valanides, Nicos

    2013-01-01

    The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…

  7. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance

    USGS Publications Warehouse

    Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.

    2017-01-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten

  8. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance.

    PubMed

    Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S

    2017-10-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten

  9. Gradient-based interpolation method for division-of-focal-plane polarimeters.

    PubMed

    Gao, Shengkui; Gruev, Viktor

    2013-01-14

    Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.
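
    A minimal sketch of the core idea of gradient-guided demosaicking for a 2 x 2 DoFP polarizer mosaic is given below: missing samples of one polarization channel are filled by averaging the pair of same-channel neighbours (diagonal first, then horizontal/vertical) whose absolute difference, i.e. local gradient, is smaller. The mosaic layout, two-pass ordering, and border handling are assumptions for illustration and do not reproduce the authors' exact algorithm.

```python
import numpy as np

def interpolate_dofp_channel(raw, offset):
    # raw: full DoFP mosaic (H x W, H and W even);
    # offset: (row, col) of this polarization channel within each 2x2 super-pixel.
    h, w = raw.shape
    chan = np.zeros((h, w))
    known = np.zeros((h, w), dtype=bool)
    chan[offset[0]::2, offset[1]::2] = raw[offset[0]::2, offset[1]::2]
    known[offset[0]::2, offset[1]::2] = True

    def fill(i, j, pairs):
        # pairs: opposite-neighbour offset pairs; average along the pair
        # with the smaller absolute gradient.
        best = None
        for d1, d2 in pairs:
            a, b = chan[i + d1[0], j + d1[1]], chan[i + d2[0], j + d2[1]]
            g = abs(a - b)
            if best is None or g < best[0]:
                best = (g, 0.5 * (a + b))
        chan[i, j] = best[1]
        known[i, j] = True

    # pass 1: pixels whose four diagonal neighbours belong to this channel
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if (not known[i, j] and known[i - 1, j - 1] and known[i + 1, j + 1]
                    and known[i - 1, j + 1] and known[i + 1, j - 1]):
                fill(i, j, [((-1, -1), (1, 1)), ((-1, 1), (1, -1))])
    # pass 2: remaining interior pixels now have horizontal and vertical neighbours
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if not known[i, j]:
                fill(i, j, [((0, -1), (0, 1)), ((-1, 0), (1, 0))])
    return chan  # border pixels are left unfilled in this sketch

# usage: interpolate the channel assumed at the top-left of each super-pixel
mosaic = np.random.default_rng(0).uniform(0.0, 1.0, (8, 8))
i0 = interpolate_dofp_channel(mosaic, offset=(0, 0))
```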

  10. Field Evaluation of the Pedostructure-Based Model (Kamel®)

    USDA-ARS?s Scientific Manuscript database

    This study involves a field evaluation of the pedostructure-based model Kamel and comparisons between Kamel and the Hydrus-1D model for predicting profile soil moisture. This paper also presents a sensitivity analysis of Kamel with an evaluation field site used as the base scenario. The field site u...

  11. Dynamics of multiple viscoelastic carbon nanotube based nanocomposites with axial magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karličić, Danilo; Cajić, Milan; Murmu, Tony

    2014-06-21

    Nanocomposites and magnetic field effects on nanostructures have received great attention in recent years. A large amount of research work has focused on developing a proper theoretical framework for describing the many physical effects appearing in structures at the nanoscale. A great step in this direction was the successful application of Eringen's nonlocal continuum field theory. In the present paper, a free transverse vibration analysis is carried out for a system composed of multiple single-walled carbon nanotubes (MSWCNT) embedded in a polymer matrix and under the influence of an axial magnetic field. An equivalent nonlocal model of the MSWCNT is adopted as a viscoelastically coupled multi-nanobeam system (MNBS) under the influence of a longitudinal magnetic field. Governing equations of motion are derived using Newton's second law and nonlocal Rayleigh beam theory, which take into account small-scale effects, the effect of nanobeam angular acceleration, internal damping, and the Maxwell relation. Explicit expressions for the complex natural frequency are derived based on the method of separation of variables and the trigonometric method for the "clamped-chain" system. In addition, an analytical method is proposed to obtain the asymptotic damped natural frequency and the critical damping ratio, which are independent of boundary conditions and of the number of nanobeams in the MNBS. The validity of the obtained results is confirmed by comparing the complex frequencies obtained via the trigonometric method with those obtained using numerical methods. The influence of the longitudinal magnetic field on the free vibration response of the viscoelastically coupled MNBS is discussed in detail. In addition, numerical results are presented to point out the effects of the nonlocal parameter, internal damping, and the parameters of the viscoelastic medium on the complex natural frequencies of the system. The results demonstrate the efficiency of the suggested methodology to find the

  12. A method for real time detecting of non-uniform magnetic field

    NASA Astrophysics Data System (ADS)

    Marusenkov, Andriy

    2015-04-01

    The principle of measuring magnetic signatures to observe diverse objects is widely used in near-surface work (unexploded ordnance (UXO), engineering and environmental studies, archaeology) as well as in security and vehicle detection systems. As a rule, the magnitude of the signals to be measured is much lower than that of the quasi-uniform Earth magnetic field. Magnetometers for these purposes usually contain two or more spatially separated sensors to estimate the full tensor gradient of the magnetic field or, more frequently, only some gradient components. Both scalar and vector magnetic sensors can be used. The identity of the scale factors and the proper alignment of the sensitivity axes of the vector sensors are very important for deep suppression of the ambient field and detection of weak target signals. As a rule, a periodic calibration procedure is used to keep the sensors' parameters matched as closely as possible. In the present report we propose a technique for detecting magnetic anomalies that is almost insensitive to imperfect matching of the sensors. The method is based on the idea that the difference signal between the two sensors behaves quite differently when the instrument is rotated or moved in a uniform field versus a non-uniform field. Due to the misfit of the calibration parameters, the difference signal observed during rotation in the uniform field is similar to the total signal, i.e. the sum of the signals of both sensors. Zero change of the difference and total signals is expected if the instrument moves in the uniform field along a straight line. In contrast, the same movement in a non-uniform field produces a response in each of the sensors. If one measures dB/dx and moves along the x direction, the sensor signals are shifted in time with a lag proportional to the distance between the sensors and the speed of movement. This means that the difference signal looks like the derivative of the total signal during movement in the non-uniform field. So, using quite simple
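
    The behaviour described above can be illustrated numerically: for two sensors separated along the direction of motion, a scale-factor mismatch only adds a constant offset to the difference signal in a uniform field, whereas a local anomaly makes the difference signal vary like the spatial derivative of the total signal. The field model, sensor separation, and mismatch value below are assumptions for illustration.

```python
import numpy as np

def sensor_pair_signals(field, x, separation=0.5, mismatch=1e-3):
    # Two scalar sensors moved along x, separated by `separation`;
    # the second sensor has a slightly different scale factor (mismatch).
    b1 = field(x)
    b2 = (1.0 + mismatch) * field(x + separation)
    return b1 + b2, b2 - b1        # total signal, difference signal

x = np.linspace(0.0, 20.0, 2001)
uniform = lambda s: 50000.0 * np.ones_like(s)                              # nT
anomaly = lambda s: 50000.0 + 200.0 * np.exp(-((s - 10.0) / 1.5) ** 2)     # local target

for name, f in [("uniform field", uniform), ("non-uniform field", anomaly)]:
    total, diff = sensor_pair_signals(f, x)
    # In the uniform field the difference is only a constant mismatch offset;
    # in the non-uniform field it tracks the derivative of the total signal.
    variation = np.max(np.abs(diff - diff.mean()))
    print(f"{name:18s}  variation of the difference signal = {variation:7.2f} nT")
```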

  13. Improved methods for fan sound field determination

    NASA Technical Reports Server (NTRS)

    Cicon, D. E.; Sofrin, T. G.; Mathews, D. C.

    1981-01-01

    Several methods for determining acoustic mode structure in aircraft turbofan engines using wall microphone data were studied. A method for reducing data was devised and implemented which makes the definition of discrete coherent sound fields measured in the presence of engine speed fluctuation more accurate. For the analytical methods, algorithms were developed to define the dominant circumferential modes from full and partial circumferential arrays of microphones. Axial arrays were explored to define mode structure as a function of cutoff ratio, and the use of data taken at several constant speeds was also evaluated in an attempt to reduce instrumentation requirements. Sensitivities of the various methods to microphone density, array size and measurement error were evaluated and results of these studies showed these new methods to be impractical. The data reduction method used to reduce the effects of engine speed variation consisted of an electronic circuit which windowed the data so that signal enhancement could occur only when the speed was within a narrow range.

  14. A new bias field correction method combining N3 and FCM for improved segmentation of breast density on MRI.

    PubMed

    Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying

    2011-01-01

    Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular tissue (dense tissue). A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy-C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and then the generated bias field is smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to the N3 and FCM alone corrected images and to images corrected with another method, coherent local intensity clustering (CLIC). The segmentation quality based on the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them. In the first reading session, the radiologist found (N3+FCM > N3 > FCM) ranking in 17 breasts, (N3+FCM > N3 = FCM) ranking in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3

  15. Interpreting the cross-sectional flow field in a river bank based on a genetic-algorithm two-dimensional heat-transport method (GA-VS2DH)

    NASA Astrophysics Data System (ADS)

    Su, Xiaoru; Shu, Longcang; Chen, Xunhong; Lu, Chengpeng; Wen, Zhonghui

    2016-12-01

    Interactions between surface waters and groundwater are of great significance for evaluating water resources and protecting ecosystem health. Heat as a tracer method is widely used in determination of the interactive exchange with high precision, low cost and great convenience. The flow in a river-bank cross-section occurs in vertical and lateral directions. In order to depict the flow path and its spatial distribution in bank areas, a genetic algorithm (GA) two-dimensional (2-D) heat-transport nested-loop method for variably saturated sediments, GA-VS2DH, was developed based on Microsoft Visual Basic 6.0. VS2DH was applied to model a 2-D bank-water flow field and GA was used to calibrate the model automatically by minimizing the difference between observed and simulated temperatures in bank areas. A hypothetical model was developed to assess the reliability of GA-VS2DH in inverse modeling in a river-bank system. Some benchmark tests were conducted to recognize the capability of GA-VS2DH. The results indicated that the simulated seepage velocity and parameters associated with GA-VS2DH were acceptable and reliable. Then GA-VS2DH was applied to two field sites in China with different sedimentary materials, to verify the reliability of the method. GA-VS2DH could be applied in interpreting the cross-sectional 2-D water flow field. The estimates of horizontal hydraulic conductivity at the Dawen River and Qinhuai River sites are 1.317 and 0.015 m/day, which correspond to sand and clay sediment in the two sites, respectively.

  16. Extending methods: using Bourdieu's field analysis to further investigate taste

    NASA Astrophysics Data System (ADS)

    Schindel Dimick, Alexandra

    2015-06-01

    In this commentary on Per Anderhag, Per-Olof Wickman and Karim Hamza's article Signs of taste for science, I consider how their study is situated within the concern for the role of science education in the social and cultural production of inequality. Their article provides a finely detailed methodology for analyzing the constitution of taste within science education classrooms. Nevertheless, because the authors' socially situated methodology draws upon Bourdieu's theories, it seems equally important to extend these methods to consider how and why students make particular distinctions within a relational context—a key aspect of Bourdieu's theory of cultural production. By situating the constitution of taste within Bourdieu's field analysis, researchers can explore the ways in which students' tastes and social positionings are established and transformed through time, space, place, and their ability to navigate the field. I describe the process of field analysis in relation to the authors' paper and suggest that combining the authors' methods with a field analysis can provide a strong methodological and analytical framework in which theory and methods combine to create a detailed understanding of students' interest in relation to their context.

  17. Cheminformatics meets molecular mechanics: a combined application of knowledge-based pose scoring and physical force field-based hit scoring functions improves the accuracy of structure-based virtual screening.

    PubMed

    Hsieh, Jui-Hua; Yin, Shuangye; Wang, Xiang S; Liu, Shubin; Dokholyan, Nikolay V; Tropsha, Alexander

    2012-01-23

    Poor performance of scoring functions is a well-known bottleneck in structure-based virtual screening (VS), which is most frequently manifested in the scoring functions' inability to discriminate between true ligands and known nonbinders (therefore designated as binding decoys). This deficiency leads to a large number of false positive hits resulting from VS. We have hypothesized that filtering out or penalizing docking poses recognized as non-native (i.e., pose decoys) should improve the performance of VS in terms of improved identification of true binders. Using several concepts from the field of cheminformatics, we have developed a novel approach to identifying pose decoys from an ensemble of poses generated by computational docking procedures. We demonstrate that the use of a target-specific pose (scoring) filter in combination with a physical force field-based scoring function (MedusaScore) leads to significant improvement of hit rates in VS studies for 12 of the 13 benchmark sets from the clustered version of the Database of Useful Decoys (DUD). This new hybrid scoring function outperforms several conventional structure-based scoring functions, including XSCORE::HMSCORE, ChemScore, PLP, and Chemgauss3, in 6 out of 13 data sets at the early stage of VS (up to 1% decoys of the screening database). We compare our hybrid method with several novel VS methods that were recently reported to have good performances on the same DUD data sets. We find that the ligands retrieved using our method are chemically more diverse in comparison with two ligand-based methods (FieldScreen and FLAP::LBX). We also compare our method with FLAP::RBLB, a high-performance VS method that also utilizes both the receptor and the cognate ligand structures. Interestingly, we find that the top ligands retrieved using our method are highly complementary to those retrieved using FLAP::RBLB, hinting at effective directions for best VS applications. We suggest that this integrative VS approach combining

  18. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are well adopted in image super-resolution. In this paper, we propose a new learning-based approach using contourlet transform and Markov random field. The proposed algorithm employs contourlet transform rather than the conventional wavelet to represent image features and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with the experiments on facial, vehicle plate, and real scene images. A better visual quality is achieved in terms of peak signal to noise ratio and the image structural similarity measurement.

  19. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-08-12

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison.
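
    A minimal sketch of the mapping stage described above (PCA to compress the concurrent multi-sensor outputs, then a small neural network regressor from the reduced features to position) is shown below, using a toy one-dimensional sensor model. The dipole-like field model, noise level, network size, and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
SENSOR_X = np.linspace(-4.0, 4.0, 9)          # 9-sensor array positions

def sensor_outputs(pos):
    # Toy stand-in for the field magnitude seen by each fixed sensor as the
    # magnet translates along x, plus measurement noise.
    return 1.0 / (1.0 + (SENSOR_X - pos) ** 2) + 0.01 * rng.standard_normal(SENSOR_X.size)

# training data over the travel range
positions = rng.uniform(-3.0, 3.0, 500)
X = np.array([sensor_outputs(p) for p in positions])

model = make_pipeline(
    PCA(n_components=3),                       # pseudo-linear dimensionality reduction
    MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000, random_state=0),
)
model.fit(X, positions)

test = np.array([sensor_outputs(1.25) for _ in range(5)])
print("estimated positions:", model.predict(test))
```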

  20. Matched field localization based on CS-MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng

    2016-04-01

    The problems caused by having too few or too many snapshots and by coherent sources in underwater acoustic positioning are considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed, built on a sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (singular value decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, and a new, more concise sparse mathematical model is obtained, which reduces both the scale of the localization problem and the noise level; the new sparse model is then solved by the CS-MUSIC algorithm, a combination of the CS (compressive sensing) and MUSIC (multiple signal classification) methods. The proposed algorithm can effectively overcome the difficulties caused by correlated sources and by a shortage of snapshots, and it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, as is shown in the paper.
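
    The subspace step the abstract relies on (an SVD of the observation/snapshot matrix, with the dominant left singular vectors spanning the signal subspace and the remainder forming the noise subspace used for a MUSIC spectrum) can be sketched as follows for a simple plane-wave line array. This toy example uses uncorrelated sources and a conventional steering model; the matched-field replica vectors, the compressive-sensing formulation, and the handling of coherent sources that CS-MUSIC adds are not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
n_sensors, n_snapshots, n_sources = 16, 20, 2
d = 0.5                                        # element spacing in wavelengths
true_doas = np.deg2rad([-12.0, 25.0])

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_doas])
S = rng.standard_normal((n_sources, n_snapshots)) \
    + 1j * rng.standard_normal((n_sources, n_snapshots))
noise = 0.05 * (rng.standard_normal((n_sensors, n_snapshots))
                + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = A @ S + noise                              # observation (snapshot) matrix

# SVD of the observation matrix: leading left singular vectors = signal subspace,
# remaining vectors = noise subspace used to form the MUSIC pseudo-spectrum.
U, _, _ = np.linalg.svd(X)
En = U[:, n_sources:]

grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])
peaks, _ = find_peaks(spectrum)
top = peaks[np.argsort(spectrum[peaks])[-n_sources:]]
print("estimated DOAs (deg):", np.sort(np.round(np.rad2deg(grid[top]), 1)))
```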

  1. Remote monitoring and fault recovery for FPGA-based field controllers of telescope and instruments

    NASA Astrophysics Data System (ADS)

    Zhu, Yuhua; Zhu, Dan; Wang, Jianing

    2012-09-01

    With their increasing size and growing number of functions, modern telescopes widely use a control architecture consisting of a central control unit plus field controllers. An FPGA-based field controller has the advantage of being field programmable, which provides great convenience for modifying the software and hardware of the control system. It also gives a good platform for implementing new control schemes. Because there are many controlled nodes working in a poor environment at scattered locations, the reliability and stability of the field controller must be fully considered. This paper mainly describes how we use FPGA-based field controllers and remote access over Ethernet to construct a monitoring system with multiple nodes. When a failure appears, the FPGA chip first performs self-recovery in accordance with pre-defined recovery strategies. If the chip cannot be restored, remote reconstruction of the field controller can be carried out through network intervention. This paper also introduces the network-based remote reconstruction solution for the controller, the system structure and transport protocol, as well as the implementation methods. The idea of the hardware and software design based on the FPGA is given. After actual operation on large telescopes, the desired results have been achieved. The improvement increases system reliability and reduces the maintenance workload, showing good prospects for application and popularization.

  2. Global positioning method based on polarized light compass system

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Yang, Jiangtao; Wang, Yubo; Tang, Jun; Shen, Chong

    2018-05-01

    This paper presents a global positioning method based on a polarized light compass system. A main limitation of polarization-based positioning is the environment, such as weak or locally destroyed polarization patterns; the solution given in this paper is polarization image de-noising and segmentation. To this end, a pulse coupled neural network is employed to enhance positioning performance. The prominent advantages of the present positioning technique are as follows: (i) compared to existing positioning methods based on polarized light, better sun-tracking accuracy can be achieved, and (ii) the robustness and accuracy of positioning under weak and locally destroyed polarization environments, such as cloud cover or building shielding, are improved significantly. Finally, field experiments are presented to demonstrate the effectiveness and applicability of the proposed global positioning technique. The experiments show that our proposed method outperforms the conventional polarization positioning method, with real-time longitude and latitude accuracies of up to 0.0461° and 0.0911°, respectively.

  3. A full vectorial generalized discontinuous Galerkin beam propagation method (GDG-BPM) for nonsmooth electromagnetic fields in waveguides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan Kai; Cai Wei; Ji Xia

    2008-07-20

    In this paper, we propose a new full vectorial generalized discontinuous Galerkin beam propagation method (GDG-BPM) to accurately handle the discontinuities in electromagnetic fields associated with wave propagations in inhomogeneous optical waveguides. The numerical method is a combination of the traditional beam propagation method (BPM) with a newly developed generalized discontinuous Galerkin (GDG) method [K. Fan, W. Cai, X. Ji, A generalized discontinuous Galerkin method (GDG) for Schroedinger equations with nonsmooth solutions, J. Comput. Phys. 227 (2008) 2387-2410]. The GDG method is based on a reformulation, using distributional variables to account for solution jumps across material interfaces, of Schroedinger equations resulting from paraxial approximations of vector Helmholtz equations. Four versions of the GDG-BPM are obtained for either the electric or magnetic field components. Modeling of wave propagations in various optical fibers using the full vectorial GDG-BPM is included. Numerical results validate the high order accuracy and the flexibility of the method for various types of interface jump conditions.

  4. Simple, Low-Cost Data Collection Methods for Agricultural Field Studies.

    ERIC Educational Resources Information Center

    Koenig, Richard T.; Winger, Marlon; Kitchen, Boyd

    2000-01-01

    Summarizes relatively simple and inexpensive methods for collecting data from agricultural field studies. Describes methods involving on-farm testing, crop yield measurement, quality evaluations, weed control effectiveness, plant nutrient status, and other measures. Contains 29 references illustrating how these methods were used to conduct…

  5. 3D displacement field measurement with correlation based on the micro-geometrical surface texture

    NASA Astrophysics Data System (ADS)

    Bubaker-Isheil, Halima; Serri, Jérôme; Fontaine, Jean-François

    2011-07-01

    Image correlation methods are widely used in experimental mechanics to obtain displacement field measurements. Currently, these methods are applied using digital images of the initial and deformed surfaces sprayed with black or white paint. Speckle patterns are then captured and the correlation is performed with a high degree of accuracy to an order of 0.01 pixels. In 3D, however, stereo-correlation leads to a lower degree of accuracy. Correlation techniques are based on the search for a sub-image (or pattern) displacement field. The work presented in this paper introduces a new correlation-based approach for 3D displacement field measurement that uses an additional 3D laser scanner and a CMM (Coordinate Measurement Machine). Unlike most existing methods that require the presence of markers on the observed object (such as black speckle, grids or random patterns), this approach relies solely on micro-geometrical surface textures such as waviness, roughness and aperiodic random defects. The latter are assumed to remain sufficiently small thus providing an adequate estimate of the particle displacement. The proposed approach can be used in a wide range of applications such as sheet metal forming with large strains. The method proceeds by first obtaining cloud points using the 3D laser scanner mounted on a CMM. These points are used to create 2D maps that are then correlated. In this respect, various criteria have been investigated for creating maps consisting of patterns, which facilitate the correlation procedure. Once the maps are created, the correlation between both configurations (initial and moved) is carried out using traditional methods developed for field measurements. Measurement validation was conducted using experiments in 2D and 3D with good results for rigid displacements in 2D, 3D and 2D rotations.

  6. Introducing Field-Based Geologic Research Using Soil Geomorphology

    ERIC Educational Resources Information Center

    Eppes, Martha Cary

    2009-01-01

    A field-based study of soils and the factors that influence their development is a strong, broad introduction to geologic concepts and research. A course blueprint is detailed where students design and complete a semester-long field-based soil geomorphology project. Students are first taught basic soil concepts and to describe soil, sediment and…

  7. Adaptive Markov Random Fields for Example-Based Super-resolution of Faces

    NASA Astrophysics Data System (ADS)

    Stephenson, Todd A.; Chen, Tsuhan

    2006-12-01

    Image enhancement of low-resolution images can be done through methods such as interpolation, super-resolution using multiple video frames, and example-based super-resolution. Example-based super-resolution, in particular, is suited to images that have a strong prior (for those frameworks that work on only a single image, it is more like image restoration than traditional, multiframe super-resolution). For example, hallucination and Markov random field (MRF) methods use examples drawn from the same domain as the image being enhanced to determine what the missing high-frequency information is likely to be. We propose to use even stronger prior information by extending MRF-based super-resolution to use adaptive observation and transition functions, that is, to make these functions region-dependent. We show with face images how we can adapt the modeling for each image patch so as to improve the resolution.

  8. The system analysis of light field information collection based on the light field imaging

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Li, Wenhua; Hao, Chenyang

    2016-10-01

    Augmented reality (AR) technology is becoming a research focus, and the AR effect of light field imaging makes research on light field cameras attractive. Since the emergence of the light field camera, micro-array structures have been adopted in most light field information acquisition systems (LFIAS), mainly micro lens array (MLA) and micro pinhole array (MPA) systems. This paper reviews the structures of the LFIAS commonly used in light field cameras in recent years. The LFIAS are analyzed based on the theory of geometrical optics. Meanwhile, this paper presents a novel LFIAS, a plane grating system, which we call a "micro aperture array (MAA)." The LFIAS are also analyzed based on information optics. This paper shows that there are only small differences among the multiple images produced by the plane grating system, and that the plane grating system can collect and record the amplitude and phase information of the light field.

  9. Piezoresistor-equipped fluorescence-based cantilever probe for near-field scanning.

    PubMed

    Kan, Tetsuo; Matsumoto, Kiyoshi; Shimoyama, Isao

    2007-08-01

    Scanning near-field optical microscopes (SNOMs) with fluorescence-based probes are promising tools for evaluating the optical characteristics of nanoaperture devices used for biological investigations, and this article reports on the development of a microfabricated fluorescence-based SNOM probe with a piezoresistor. The piezoresistor was built into a two-legged root of a 160-µm-long cantilever. To improve the displacement sensitivity of the cantilever, the piezoresistor's doped area was shallowly formed on the cantilever surface. A fluorescent bead, 500 nm in diameter, was attached to the bottom of the cantilever end as a light-intensity-sensitive material in the visible-light range. The surface of the scanned sample was simply detected by the probe's end being displaced by contact with the sample. Measuring displacements piezoresistively is advantageous because it eliminates the noise arising from the use of the optical-lever method and is free of any disturbance in the absorption or the emission spectrum of the fluorescent material at the probe tip. The displacement sensitivity was estimated to be 6.1 × 10⁻⁶ nm⁻¹, and the minimum measurable displacement was small enough for near-field measurement. This probe enabled clear scanning images of the light field near a 300 × 300 nm² aperture to be obtained in the near-field region where the tip-sample distance is much shorter than the light wavelength. This scanning result indicates that the piezoresistive way of tip-sample distance regulation is effective for characterizing nanoaperture optical devices.

  10. Magnetic Helicity Estimations in Models and Observations of the Solar Magnetic Field. III. Twist Number Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Pariat, E.; Moraitis, K.

    We study the writhe, twist, and magnetic helicity of different magnetic flux ropes, based on models of the solar coronal magnetic field structure. These include an analytical force-free Titov–Démoulin equilibrium solution, non-force-free magnetohydrodynamic simulations, and nonlinear force-free magnetic field models. The geometrical boundary of the magnetic flux rope is determined by the quasi-separatrix layer and the bottom surface, and the axis curve of the flux rope is determined by its overall orientation. The twist is computed by the Berger–Prior formula, which is suitable for arbitrary geometry and both force-free and non-force-free models. The magnetic helicity is estimated by the twist multiplied by the square of the axial magnetic flux. We compare the obtained values with those derived by a finite volume helicity estimation method. We find that the magnetic helicity obtained with the twist method agrees with the helicity carried by the purely current-carrying part of the field within uncertainties for most test cases. It is also found that the current-carrying part of the model field is relatively significant at the very location of the magnetic flux rope. This qualitatively explains the agreement between the magnetic helicity computed by the twist method and the helicity contributed purely by the current-carrying magnetic field.

  11. New method for solving inductive electric fields in the non-uniformly conducting ionosphere

    NASA Astrophysics Data System (ADS)

    Vanhamäki, H.; Amm, O.; Viljanen, A.

    2006-10-01

    We present a new calculation method for solving inductive electric fields in the ionosphere. The time series of the potential part of the ionospheric electric field, together with the Hall and Pedersen conductances, serves as the input to this method. The output is the time series of the induced rotational part of the ionospheric electric field. The calculation method works in the time domain and can be used with non-uniform, time-dependent conductances. In addition, no particular symmetry requirements are imposed on the input potential electric field. The presented method makes use of special non-local vector basis functions called the Cartesian Elementary Current Systems (CECS). This vector basis offers a convenient way of representing curl-free and divergence-free parts of 2-dimensional vector fields and makes it possible to solve the induction problem using simple linear algebra. The new calculation method is validated by comparing it with previously published results for Alfvén wave reflection from a uniformly conducting ionosphere.

  12. Transformations Based on Continuous Piecewise-Affine Velocity Fields

    PubMed Central

    Freifeld, Oren; Hauberg, Søren; Batmanghelich, Kayhan; Fisher, John W.

    2018-01-01

    We propose novel finite-dimensional spaces of well-behaved ℝn → ℝn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available. PMID:28092517

  13. Transformations based on continuous piecewise-affine velocity fields

    DOE PAGES

    Freifeld, Oren; Hauberg, Soren; Batmanghelich, Kayhan; ...

    2017-01-11

    Here, we propose novel finite-dimensional spaces of well-behaved Rn → Rn transformations. The latter are obtained by (fast and highly-accurate) integration of continuous piecewise-affine velocity fields. The proposed method is simple yet highly expressive, effortlessly handles optional constraints (e.g., volume preservation and/or boundary conditions), and supports convenient modeling choices such as smoothing priors and coarse-to-fine analysis. Importantly, the proposed approach, partly due to its rapid likelihood evaluations and partly due to its other properties, facilitates tractable inference over rich transformation spaces, including using Markov-Chain Monte-Carlo methods. Its applications include, but are not limited to: monotonic regression (more generally, optimization over monotonic functions); modeling cumulative distribution functions or histograms; time-warping; image warping; image registration; real-time diffeomorphic image editing; data augmentation for image classifiers. Our GPU-based code is publicly available.

  14. Application of State Quantization-Based Methods in HEP Particle Transport Simulation

    NASA Astrophysics Data System (ADS)

    Santi, Lucio; Ponieman, Nicolás; Jun, Soon Yung; Genser, Krzysztof; Elvira, Daniel; Castro, Rodrigo

    2017-10-01

    Simulation of particle-matter interactions in complex geometries is one of the main tasks in high energy physics (HEP) research. An essential aspect of it is an accurate and efficient particle transportation in a non-uniform magnetic field, which includes the handling of volume crossings within a predefined 3D geometry. Quantized State Systems (QSS) is a family of numerical methods that provides attractive features for particle transportation processes, such as dense output (sequences of polynomial segments changing only according to accuracy-driven discrete events) and lightweight detection and handling of volume crossings (based on simple root-finding of polynomial functions). In this work we present a proof-of-concept performance comparison between a QSS-based standalone numerical solver and an application based on the Geant4 simulation toolkit, with its default Runge-Kutta based adaptive step method. In a case study with a charged particle circulating in a vacuum (with interactions with matter turned off), in a uniform magnetic field, and crossing up to 200 volume boundaries twice per turn, simulation results showed speedups of up to 6 times in favor of QSS, while it was 10 times slower in the case with zero volume boundaries.
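    To give a flavor of how a quantized state method differs from a conventional time-stepping integrator, the sketch below implements a minimal first-order QSS (QSS1) scheme for a scalar ODE in Python; the quantum size and the test equation are illustrative assumptions, and a production solver would additionally handle vector states, higher orders and volume-boundary events.

      import numpy as np

      def qss1(f, x0, t_end, dq):
          """Minimal first-order Quantized State System (QSS1) integrator for dx/dt = f(x).
          The state follows piecewise-linear segments; a step is taken whenever the state
          deviates from its quantized value by the quantum dq."""
          t, x = 0.0, x0
          q = x0                      # quantized state seen by f
          ts, xs = [t], [x]
          while t < t_end:
              dx = f(q)               # slope is constant until the next quantization event
              if dx == 0.0:
                  break               # no further change
              dt = dq / abs(dx)       # time until |x - q| reaches the quantum
              t += dt
              x += dx * dt
              q = x                   # re-quantize
              ts.append(t)
              xs.append(x)
          return np.array(ts), np.array(xs)

      # Usage: exponential decay dx/dt = -x, compared against the exact solution.
      ts, xs = qss1(lambda x: -x, x0=1.0, t_end=5.0, dq=0.01)
      print(xs[-1], np.exp(-ts[-1]))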

  15. Effective-field renormalization-group method for Ising systems

    NASA Astrophysics Data System (ADS)

    Fittipaldi, I. P.; De Albuquerque, D. F.

    1992-02-01

    A new effective-field renormalization-group (EFRG) scheme for computing critical properties of Ising spin systems is proposed and used to study the phase diagrams of a quenched bond-mixed spin Ising model on square and Kagomé lattices. The present EFRG approach yields results which improve substantially on those obtained from the standard mean-field renormalization-group (MFRG) method. In particular, it is shown that the EFRG scheme correctly distinguishes the geometry of the lattice structure even when working with the smallest possible clusters, namely N'=1 and N=2.

  16. Assessment of real-time PCR based methods for quantification of pollen-mediated gene flow from GM to conventional maize in a field study.

    PubMed

    Pla, Maria; La Paz, José-Luis; Peñas, Gisela; García, Nora; Palaudelmàs, Montserrat; Esteve, Teresa; Messeguer, Joaquima; Melé, Enric

    2006-04-01

    Maize is one of the main crops worldwide and an increasing number of genetically modified (GM) maize varieties are cultivated and commercialized in many countries in parallel to conventional crops. Given the labeling rules established e.g. in the European Union and the necessary coexistence between GM and non-GM crops, it is important to determine the extent of pollen dissemination from transgenic maize to other cultivars under field conditions. The most widely used methods for quantitative detection of GMO are based on real-time PCR, which implies the results are expressed in genome percentages (in contrast to seed or grain percentages). Our objective was to assess the ability of real-time PCR based assays to accurately quantify the content of transgenic grains in non-GM fields, in comparison with the real cross-fertilization rate as determined by phenotypical analysis. We performed this study in a region where both GM and conventional maize are normally cultivated and used the predominant transgenic maize Mon810 in combination with a conventional maize variety which displays the characteristic of white grains (therefore allowing cross-pollination quantification as the percentage of yellow grains). Our results indicated an excellent correlation between real-time PCR results and the number of cross-fertilized grains at Mon810 levels of 0.1-10%. In contrast, the Mon810 percentage estimated by weight of grains produced less accurate results. Finally, we present and discuss the pattern of pollen-mediated gene flow from GM to conventional maize in an example case under field conditions.

  17. Limitations of the background field method applied to Rayleigh-Bénard convection

    NASA Astrophysics Data System (ADS)

    Nobili, Camilla; Otto, Felix

    2017-09-01

    We consider Rayleigh-Bénard convection as modeled by the Boussinesq equations, in the case of infinite Prandtl numbers and with no-slip boundary condition. There is a broad interest in bounds of the upwards heat flux, as given by the Nusselt number Nu, in terms of the forcing via the imposed temperature difference, as given by the Rayleigh number in the turbulent regime Ra ≫ 1. In several studies, the background field method applied to the temperature field has been used to provide upper bounds on Nu in terms of Ra. In these applications, the background field method comes in the form of a variational problem where one optimizes a stratified temperature profile subject to a certain stability condition; the method is believed to capture the marginal stability of the boundary layer. The best available upper bound via this method is Nu ≲ Ra^(1/3) (ln Ra)^(1/15); it proceeds via the construction of a stable temperature background profile that increases logarithmically in the bulk. In this paper, we show that the background temperature field method cannot provide a tighter upper bound in terms of the power of the logarithm. However, by another method, one does obtain the tighter upper bound Nu ≲ Ra^(1/3) (ln ln Ra)^(1/3), so that the result of this paper implies that the background temperature field method is unphysical in the sense that it cannot provide the optimal bound.

  18. A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams (Final Report)

    EPA Science Inventory

    EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate g...

  19. New Method to Calculate the Time Variation of the Force Field Parameter

    NASA Astrophysics Data System (ADS)

    Santiago, A.; Lara, A.; Enríquez-Rivera, O.; Caballero-Lopez, R. A.

    2018-03-01

    Galactic cosmic rays (CRs) entering the heliosphere are affected by interplanetary magnetic fields and solar wind disturbances, resulting in the modulation of the CR total flux observed in the inner heliosphere. The so-called force field model is often used to compute the galactic CR spectrum modulated by the solar activity because it characterizes this process by only one parameter (the modulation potential, ϕ). In this work, we present two types of an empirical simplification (ES) method used to reconstruct the time variation of the modulation potential (Δϕ). Our ES offers a simple and fast alternative to compute Δϕ at any desired time. The first ES type is based on the empirical fact that the dependence between Δϕ and neutron monitor (NM) count rates can be parameterized by a second-degree polynomial. The second ES type is based on the assumption that there is an inverse relation between Δϕ and NM count rates. We report the parameters found for the two types, which may be used to compute Δϕ for some NMs in a very fast and efficient way. In order to test the validity of the proposed ES, we compare our results with Δϕ obtained from the literature. Finally, we apply our method to obtain the proton and helium spectra of primary CRs near the Earth at four randomly selected times.
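    The first ES type amounts to fitting a second-degree polynomial between Δϕ and NM count rates. A minimal Python sketch of such a fit is shown below; the count rates and Δϕ values are placeholder numbers, not data from the paper.

      import numpy as np

      # Hypothetical illustration: parameterize the modulation potential variation
      # (dphi, in MV) as a second-degree polynomial of normalized neutron monitor
      # count rates, in the spirit of the first ES type. Placeholder data only.
      nm_rate = np.array([0.88, 0.90, 0.93, 0.96, 1.00])   # NM count rate / reference rate
      dphi    = np.array([420., 360., 280., 190., 100.])    # modulation potential variation (MV)

      coeffs = np.polyfit(nm_rate, dphi, deg=2)             # [a, b, c] of a*r**2 + b*r + c
      dphi_model = np.polyval(coeffs, nm_rate)

      print("fitted coefficients:", coeffs)
      print("rms residual (MV):", np.sqrt(np.mean((dphi_model - dphi) ** 2)))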

  20. 3-D Magnetotelluric Forward Modeling And Inversion Incorporating Topography By Using Vector Finite-Element Method Combined With Divergence Corrections Based On The Magnetic Field (VFEH++)

    NASA Astrophysics Data System (ADS)

    Shi, X.; Utada, H.; Jiaying, W.

    2009-12-01

    The vector finite-element method combined with divergence corrections based on the magnetic field H, referred to as the VFEH++ method, is developed to simulate the magnetotelluric (MT) responses of 3-D conductivity models. The advantages of the new VFEH++ method are the use of edge elements to eliminate vector parasites and the divergence corrections to explicitly guarantee the divergence-free conditions in the whole modeling domain. 3-D MT topographic responses are modeled using the new VFEH++ method and compared with those calculated by other numerical methods. The results show that MT responses can be modeled with high accuracy using the VFEH++ method. The VFEH++ algorithm is also employed for 3-D MT data inversion incorporating topography. The 3-D MT inverse problem is formulated as a minimization problem of the regularized misfit function. In order to avoid the huge memory requirement and very long time needed to compute the Jacobian sensitivity matrix for the Gauss-Newton method, we employ the conjugate gradient (CG) approach to solve the inversion equation. In each iteration of the CG algorithm, the costly computation is the product of the Jacobian sensitivity matrix with a model vector x, or of its transpose with a data vector y, each of which can be transformed into a pseudo-forward modeling. This avoids explicit calculation and storage of the full Jacobian matrix, which leads to considerable savings in the memory required by the inversion program on a PC. The performance of the CG algorithm is illustrated by several typical 3-D models with horizontal and topographic earth surfaces. The results show that the VFEH++ and CG algorithms can be effectively applied to 3-D MT field data inversion.
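    The key computational trick is that CG only ever needs the products J·x and Jᵀ·y, each obtainable from a pseudo-forward modeling, so the Jacobian never has to be formed. The Python sketch below shows a matrix-free CG solve of regularized normal equations under that assumption; the toy forward operator, regularization weight and iteration limits are illustrative, not the VFEH++ implementation.

      import numpy as np

      def cg_normal_equations(jvp, jtvp, data, n_model, lam=1e-2, n_iter=50, tol=1e-8):
          """Matrix-free conjugate gradients on the regularized normal equations
          (J^T J + lam I) m = J^T d, using only the products J*x (jvp) and J^T*y (jtvp).
          In an MT inversion each product would come from a pseudo-forward modeling run,
          so the Jacobian is never formed explicitly."""
          def A(x):                       # action of (J^T J + lam I)
              return jtvp(jvp(x)) + lam * x

          m = np.zeros(n_model)
          b = jtvp(data)
          r = b - A(m)
          p = r.copy()
          rs = r @ r
          for _ in range(n_iter):
              Ap = A(p)
              alpha = rs / (p @ Ap)
              m += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return m

      # Toy usage with an explicit matrix standing in for the forward operator.
      J = np.random.default_rng(0).normal(size=(40, 20))
      m_true = np.ones(20)
      d = J @ m_true
      m_est = cg_normal_equations(lambda x: J @ x, lambda y: J.T @ y, d, n_model=20)
      print(np.linalg.norm(m_est - m_true))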

  1. A New Method for Analyzing Near-Field Faraday Probe Data in Hall Thrusters

    NASA Technical Reports Server (NTRS)

    Huang, Wensheng; Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Kamhawi, Hani

    2013-01-01

    This paper presents a new method for analyzing near-field Faraday probe data obtained from Hall thrusters. Traditional methods spawned from far-field Faraday probe analysis rely on assumptions that are not applicable to near-field Faraday probe data. In particular, arbitrary choices for the point of origin and limits of integration have made interpretation of the results difficult. The new method, called iterative pathfinding, uses the evolution of the near-field plume with distance to provide feedback for determining the location of the point of origin. Although still susceptible to the choice of integration limits, this method presents a systematic approach to determining the origin point for calculating the divergence angle. The iterative pathfinding method is applied to near-field Faraday probe data taken in a previous study from the NASA-300M and NASA-457Mv2 Hall thrusters. Since these two thrusters use centrally mounted cathodes, the current density associated with the cathode plume is removed before applying iterative pathfinding. A procedure is presented for removing the cathode plume. The results of the analysis are compared to far-field probe analysis results. This paper ends with checks on the validity of the new method and discussions on the implications of the results.

  2. A New Method for Analyzing Near-Field Faraday Probe Data in Hall Thrusters

    NASA Technical Reports Server (NTRS)

    Huang, Wensheng; Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Kamhawi, Hani

    2013-01-01

    This paper presents a new method for analyzing near-field Faraday probe data obtained from Hall thrusters. Traditional methods spawned from far-field Faraday probe analysis rely on assumptions that are not applicable to near-field Faraday probe data. In particular, arbitrary choices for the point of origin and limits of integration have made interpretation of the results difficult. The new method, called iterative pathfinding, uses the evolution of the near-field plume with distance to provide feedback for determining the location of the point of origin. Although still susceptible to the choice of integration limits, this method presents a systematic approach to determining the origin point for calculating the divergence angle. The iterative pathfinding method is applied to near-field Faraday probe data taken in a previous study from the NASA-300M and NASA-457Mv2 Hall thrusters. Since these two thrusters use centrally mounted cathodes, the current density associated with the cathode plume is removed before applying iterative pathfinding. A procedure is presented for removing the cathode plume. The results of the analysis are compared to far-field probe analysis results. This paper ends with checks on the validity of the new method and discussions on the implications of the results.

  3. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    PubMed Central

    Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this problem remains challenging due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve the image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature among all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of the proposed algorithm. It has been concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics. PMID:26089965

  4. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.

    PubMed

    Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this problem remains challenging due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve the image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature among all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of the proposed algorithm. It has been concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics.
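    A much-simplified Python sketch of fusing two registered images by combining PCA-derived weights with a single-level DWT is given below; the wavelet choice, the global (rather than region-wise) weighting and the synthetic test images are assumptions made for illustration and do not reproduce the paper's algorithm.

      import numpy as np
      import pywt

      def pca_weights(a, b):
          # PCA on the two (registered, equally sized) images: the principal
          # eigenvector of their joint covariance gives global fusion weights.
          data = np.vstack([a.ravel(), b.ravel()])
          cov = np.cov(data)
          vals, vecs = np.linalg.eigh(cov)
          v = np.abs(vecs[:, np.argmax(vals)])          # principal eigenvector
          return v / v.sum()                            # normalized fusion weights

      def fuse_dwt_pca(img_a, img_b, wavelet="db2"):
          w_a, w_b = pca_weights(img_a, img_b)
          ca, (cha, cva, cda) = pywt.dwt2(img_a, wavelet)
          cb, (chb, cvb, cdb) = pywt.dwt2(img_b, wavelet)
          fused = (w_a * ca + w_b * cb,
                   (w_a * cha + w_b * chb, w_a * cva + w_b * cvb, w_a * cda + w_b * cdb))
          return pywt.idwt2(fused, wavelet)

      # Toy usage with two noisy versions of the same synthetic image.
      rng = np.random.default_rng(0)
      base = np.tile(np.linspace(0, 1, 128), (128, 1))
      fused = fuse_dwt_pca(base + rng.normal(0, 0.1, base.shape),
                           base + rng.normal(0, 0.1, base.shape))
      print(fused.shape)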

  5. An Improved Image Matching Method Based on Surf Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.

    2018-04-01

    Many state-of-the-art image matching methods based on feature matching have been widely studied in the remote sensing field. These feature-matching methods achieve high operating efficiency, but suffer from low accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory and a series of constraint conditions to increase feature point detection and matching accuracy. First, the color invariant transformation model is introduced for the two matching images, aiming at obtaining more color information during the matching process, and information entropy theory is used to extract the most informative content of the two matching images. Then the SURF algorithm is applied to detect and describe points in the images. Finally, constraint conditions, including Delaunay triangulation construction, a similarity function and a projective invariant, are employed to eliminate mismatches so as to improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.
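    For orientation, a bare-bones SURF matching pipeline in Python/OpenCV is sketched below (SURF requires an opencv-contrib build with the non-free modules enabled). The Lowe ratio test and a RANSAC homography stand in for the paper's constraint conditions; the file names and thresholds are illustrative assumptions.

      import cv2
      import numpy as np

      # Replace the placeholder file names with your own registered image pair.
      img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

      surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
      kp1, des1 = surf.detectAndCompute(img1, None)
      kp2, des2 = surf.detectAndCompute(img2, None)

      matcher = cv2.BFMatcher(cv2.NORM_L2)
      raw = matcher.knnMatch(des1, des2, k=2)
      good = [m for m, n in raw if m.distance < 0.7 * n.distance]   # Lowe ratio test

      src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
      dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
      H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)       # reject mismatches
      print("inlier matches:", int(mask.sum()))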

  6. Geophysics-based method of locating a stationary earth object

    DOEpatents

    Daily, Michael R [Albuquerque, NM; Rohde, Steven B [Corrales, NM; Novak, James L [Albuquerque, NM

    2008-05-20

    A geophysics-based method for determining the position of a stationary earth object uses the periodic changes in the gravity vector of the earth caused by the sun- and moon-orbits. Because the local gravity field is highly irregular over a global scale, a model of local tidal accelerations can be compared to actual accelerometer measurements to determine the latitude and longitude of the stationary object.

  7. Lagrangian based methods for coherent structure detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allshouse, Michael R., E-mail: mallshouse@chaos.utexas.edu; Peacock, Thomas, E-mail: tomp@mit.edu

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.

  8. Lagrangian based methods for coherent structure detection

    NASA Astrophysics Data System (ADS)

    Allshouse, Michael R.; Peacock, Thomas

    2015-09-01

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.

  9. Detection of a sudden change of the field time series based on the Lorenz system

    PubMed Central

    Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan

    2017-01-01

    We conducted an exploratory study of the detection of a sudden change of the field time series based on the numerical solution of the Lorenz system. First, the time when the Lorenz path jumped between the regions on the left and right of the equilibrium point of the Lorenz system was quantitatively marked and the sudden change time of the Lorenz system was obtained. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered as a vector time series. We transformed the vector time series into a time series using the vector inner product, considering the geometric and topological features of the Lorenz system path. Third, the sudden change of the resulting time series was detected using the sliding t-test method. Comparing the test results with the quantitatively marked time indicated that the method could detect every sudden change of the Lorenz path, thus the method is effective. Finally, we used the method to detect the sudden change of the pressure field time series and temperature field time series, and obtained good results for both series, which indicates that the method can be applied to high-dimensional vector time series. Mathematically, there is no essential difference between the field time series and vector time series; thus, we provide a new method for the detection of the sudden change of the field time series. PMID:28141832

  10. Detection of a sudden change of the field time series based on the Lorenz system.

    PubMed

    Da, ChaoJiu; Li, Fang; Shen, BingLu; Yan, PengCheng; Song, Jian; Ma, DeShan

    2017-01-01

    We conducted an exploratory study of the detection of a sudden change of the field time series based on the numerical solution of the Lorenz system. First, the time when the Lorenz path jumped between the regions on the left and right of the equilibrium point of the Lorenz system was quantitatively marked and the sudden change time of the Lorenz system was obtained. Second, the numerical solution of the Lorenz system was regarded as a vector; thus, this solution could be considered as a vector time series. We transformed the vector time series into a time series using the vector inner product, considering the geometric and topological features of the Lorenz system path. Third, the sudden change of the resulting time series was detected using the sliding t-test method. Comparing the test results with the quantitatively marked time indicated that the method could detect every sudden change of the Lorenz path, thus the method is effective. Finally, we used the method to detect the sudden change of the pressure field time series and temperature field time series, and obtained good results for both series, which indicates that the method can be applied to high-dimensional vector time series. Mathematically, there is no essential difference between the field time series and vector time series; thus, we provide a new method for the detection of the sudden change of the field time series.
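    The detection idea can be prototyped in a few lines of Python: integrate the Lorenz system, reduce the vector solution to a scalar series with an inner product, and scan it with a sliding t-test. In the sketch below the reference vector, window length and detection threshold are illustrative choices rather than the settings used in the paper.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.stats import ttest_ind

      def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = s
          return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

      t_eval = np.linspace(0, 40, 8000)
      sol = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-8)
      v = sol.y.T                                  # vector time series, shape (N, 3)
      series = v @ np.array([1.0, 0.0, 0.0])       # inner product with a reference vector

      win = 200                                    # sliding window length
      t_stat = np.zeros(len(series))
      for i in range(win, len(series) - win):
          left, right = series[i - win:i], series[i:i + win]
          t_stat[i] = ttest_ind(left, right, equal_var=False).statistic

      changes = np.where(np.abs(t_stat) > 10.0)[0]  # flag candidate sudden changes
      print("candidate change points:", changes[:10])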

  11. A self-consistent field method for galactic dynamics

    NASA Technical Reports Server (NTRS)

    Hernquist, Lars; Ostriker, Jeremiah P.

    1992-01-01

    The present study describes an algorithm for evolving collisionless stellar systems in order to investigate the evolution of systems with density profiles like the R^(1/4) law, using only a few terms in the expansions. A good fit is obtained for a truncated isothermal distribution, which renders the method appropriate for galaxies with flat rotation curves. Calculations employing N of about 10^6-10^7 are straightforward on existing supercomputers, making possible simulations having significantly smoother fields than with direct methods such as tree-codes. Orbits are found in a given static or time-dependent gravitational field; the potential, phi(r, t), is revised from the resultant density, rho(r, t). Possible scientific uses of this technique are discussed, including tidal perturbations of dwarf galaxies, the adiabatic growth of central masses in spheroidal galaxies, instabilities in realistic galaxy models, and secular processes in galactic evolution.

  12. Method for imaging with low frequency electromagnetic fields

    DOEpatents

    Lee, Ki H.; Xie, Gan Q.

    1994-01-01

    A method for imaging with low frequency electromagnetic fields, and for interpreting the electromagnetic data using ray tomography, in order to determine the earth conductivity with high accuracy and resolution. The imaging method includes the steps of placing one or more transmitters, at various positions in a plurality of transmitter holes, and placing a plurality of receivers in a plurality of receiver holes. The transmitters generate electromagnetic signals which diffuse through a medium, such as earth, toward the receivers. The measured diffusion field data H is then transformed into wavefield data U. The traveltimes corresponding to the wavefield data U are then obtained by charting the wavefield data U, using a different regularization parameter α for each transform. The desired property of the medium, such as conductivity, is then derived from the velocity, which in turn is constructed from the wavefield data U using ray tomography.

  13. Method for imaging with low frequency electromagnetic fields

    DOEpatents

    Lee, K.H.; Xie, G.Q.

    1994-12-13

    A method is described for imaging with low frequency electromagnetic fields, and for interpreting the electromagnetic data using ray tomography, in order to determine the earth conductivity with high accuracy and resolution. The imaging method includes the steps of placing one or more transmitters, at various positions in a plurality of transmitter holes, and placing a plurality of receivers in a plurality of receiver holes. The transmitters generate electromagnetic signals which diffuse through a medium, such as earth, toward the receivers. The measured diffusion field data H is then transformed into wavefield data U. The travel times corresponding to the wavefield data U are then obtained by charting the wavefield data U, using a different regularization parameter α for each transform. The desired property of the medium, such as conductivity, is then derived from the velocity, which in turn is constructed from the wavefield data U using ray tomography. 13 figures.

  14. [The Diagnostics of Detonation Flow External Field Based on Multispectral Absorption Spectroscopy Technology].

    PubMed

    Lü, Xiao-jing; Li, Ning; Weng, Chun-sheng

    2016-03-01

    Compared with traditional sampling-based sensing methods, absorption spectroscopy is well suited to detonation flow diagnostics, since it provides a fast-response, nonintrusive, and sensitive solution for in situ measurements of multiple flow-field parameters. With traditional absorption spectroscopy, the temperature and concentration results are averages along the laser path; however, the boundary of the detonation flow external field is unknown and changes continuously while the detonation engine operates, so traditional absorption spectroscopy is no longer suitable for detonation diagnostics. The trend of line strength with temperature varies among different absorption lines, so by increasing the number of absorption lines along the test path, more information about the non-uniform flow field can be obtained. In this paper, based on multispectral absorption technology, a reconstruction model of the detonation flow external field distribution was established according to simulation results from the space-time conservation element and solution element method, and a diagnostic method for the detonation flow external field is given. The model deviation and calculation error of the adopted least squares method were studied by simulation; the maximum concentration and temperature calculation errors were 20.1% and 3.2%, respectively. Four absorption lines of H2O were chosen and the detonation flow was scanned simultaneously. A detonation external flow testing system was set up for a valveless gas-liquid continuous pulse detonation engine with a diameter of 80 mm. By scanning the H2O absorption lines at a high repetition rate of 10 kHz, on-line detection of the detonation external flow was realized by the direct absorption method combined with time-division multiplexing, and the reconstruction of the dynamic temperature distribution was realized for the first time, both verifying the feasibility of the test method. The test results

  15. Interior reconstruction method based on rotation-translation scanning model.

    PubMed

    Wang, Xianchao; Tang, Ziyue; Yan, Bin; Li, Lei; Bao, Shanglian

    2014-01-01

    In various applications of computed tomography (CT), it is common that the reconstructed object extends beyond the field of view (FOV), or we may intend to use a FOV which only covers the region of interest (ROI) for the sake of reducing radiation dose. These kinds of imaging situations often lead to interior reconstruction problems, which are difficult cases in the field of CT reconstruction due to the truncated projection data at every view angle. In this paper, an interior reconstruction method is developed based on a rotation-translation (RT) scanning model. The method is implemented by first scanning the reconstructed region, and then scanning a small region outside the support of the reconstructed object after translating the rotation centre. The differentiated backprojection (DBP) images of the reconstruction region and the small region outside the object can be respectively obtained from the data of the two scans without a data rebinning process. Finally, the projection onto convex sets (POCS) algorithm is applied to reconstruct the interior region. Numerical simulations are conducted to validate the proposed reconstruction method.

  16. Method of determining interwell oil field fluid saturation distribution

    DOEpatents

    Donaldson, Erle C.; Sutterfield, F. Dexter

    1981-01-01

    A method of determining the oil and brine saturation distribution in an oil field by taking electrical current and potential measurements among a plurality of open-hole wells geometrically distributed throughout the oil field. Poisson's equation is utilized to develop fluid saturation distributions from the electrical current and potential measurements. Both signal generating equipment and chemical means are used to develop current flow among the several open-hole wells.

  17. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    PubMed

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-06-01

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Studying the Transfer of Magnetic Helicity in Solar Active Regions with the Connectivity-based Helicity Flux Density Method

    NASA Astrophysics Data System (ADS)

    Dalmasse, K.; Pariat, É.; Valori, G.; Jing, J.; Démoulin, P.

    2018-01-01

    In the solar corona, magnetic helicity slowly and continuously accumulates in response to plasma flows tangential to the photosphere and magnetic flux emergence through it. Analyzing this transfer of magnetic helicity is key for identifying its role in the dynamics of active regions (ARs). The connectivity-based helicity flux density method was recently developed for studying the 2D and 3D transfer of magnetic helicity in ARs. The method takes into account the 3D nature of magnetic helicity by explicitly using knowledge of the magnetic field connectivity, which allows it to faithfully track the photospheric flux of magnetic helicity. Because the magnetic field is not measured in the solar corona, modeled 3D solutions obtained from force-free magnetic field extrapolations must be used to derive the magnetic connectivity. Different extrapolation methods can lead to markedly different 3D magnetic field connectivities, thus questioning the reliability of the connectivity-based approach in observational applications. We address these concerns by applying this method to the isolated and internally complex AR 11158 with different magnetic field extrapolation models. We show that the connectivity-based calculations are robust to different extrapolation methods, in particular with regard to identifying regions of opposite magnetic helicity flux. We conclude that the connectivity-based approach can be reliably used in observational analyses and is a promising tool for studying the transfer of magnetic helicity in ARs and relating it to their flaring activity.

  19. High-field neutral beam injection for improving the Q of a gas dynamic trap-based fusion neutron source

    NASA Astrophysics Data System (ADS)

    Zeng, Qiusun; Chen, Dehong; Wang, Minghuang

    2017-12-01

    In order to improve the fusion energy gain (Q) of a gas dynamic trap (GDT)-based fusion neutron source, a method in which the neutral beam is obliquely injected at a higher magnetic field position rather than at the mid-plane of the GDT is proposed. This method is beneficial for confining a higher density of fast ions at the turning point in the zone with a higher magnetic field, as well as for obtaining a higher mirror ratio by reducing the mid-plane field rather than increasing the mirror field. In this situation, collision scattering loss of fast ions with higher density will occur and change the confinement time, power balance and particle balance. Using an updated calculation model with high-field neutral beam injection for a GDT-based fusion neutron source conceptual design, we obtained four optimal design schemes for a GDT-based fusion neutron source in which Q was improved two- to three-fold compared with a conventional design scheme, while respecting the limitations for avoiding plasma instabilities, especially the fire-hose instability. The distribution of fast ions could be optimized by building a proper magnetic field configuration with enough space for neutron shielding and by multi-beam neutral particle injection at different axial points.

  20. Field-based Information Technology in Geology Education: GeoPads

    NASA Astrophysics Data System (ADS)

    Knoop, P. A.; van der Pluijm, B.

    2004-12-01

    During the past two summers, we have successfully incorporated a field-based information technology component into our senior-level, field geology course (GS-440) at the University of Michigan's Camp Davis Geology Field Station, near Jackson, WY. Using GeoPads -- rugged TabletPCs equipped with electronic notebook software, GIS, GPS, and wireless networking -- we have significantly enhanced our field mapping exercises and field trips. While fully retaining the traditional approaches and advantages of field instruction, GeoPads offer important benefits in the development of students' spatial reasoning skills. GeoPads enable students to record observations and directly create geologic maps in the field, using a combination of an electronic field notebook (Microsoft OneNote) tightly integrated with pen-enabled GIS software (ArcGIS-ArcMap). Specifically, this arrangement permits students to analyze and manipulate their data in multiple contexts and representations -- while still in the field -- using both traditional 2-D map views, as well as richer 3-D contexts. Such enhancements provide students with powerful exploratory tools that aid the development of spatial reasoning skills, allowing more intuitive interactions with 2-D representations of our 3-D world. Additionally, field-based GIS mapping enables better error-detection, through immediate interaction with current observations in the context of both supporting data (e.g., topographic maps, aerial photos, magnetic surveys) and students' ongoing observations. The overall field-based IT approach also provides students with experience using tools that are increasingly relevant to their future academic or professional careers.

  1. Quantum Field Energy Sensor based on the Casimir Effect

    NASA Astrophysics Data System (ADS)

    Ludwig, Thorsten

    The Casimir effect converts vacuum fluctuations into a measurable force. Some new energy technologies aim to convert these vacuum fluctuations into commonly used forms of energy such as electricity or mechanical motion. In order to study these energy technologies it is helpful to have sensors for the energy density of vacuum fluctuations. In today's scientific instrumentation and scanning microscope technologies there are several common methods to measure sub-nanonewton forces. While commercial atomic force microscopes (AFM) mostly work with silicon cantilevers, there are a large number of reports on the use of quartz tuning forks to obtain high-resolution force measurements or to create new force sensors. Both methods have certain advantages and disadvantages over the other. In this report the two methods are described and compared with regard to their usability for Casimir force measurements. Furthermore, a design for a quantum field energy sensor based on the Casimir force measurement is described. In addition, some general considerations on extracting energy from vacuum fluctuations are given.

  2. Nondestructive assessment of timber bridges using a vibration-based method

    Treesearch

    Xiping Wang; James P. Wacker; Robert J. Ross; Brian K. Brashaw

    2005-01-01

    This paper describes an effort to develop a global dynamic testing technique for evaluating the overall stiffness of timber bridge superstructures. A forced vibration method was used to measure the natural frequency of single-span timber bridges in the laboratory and field. An analytical model based on simple beam theory was proposed to represent the relationship...

  3. The 'Arm Force Field' method to predict manual arm strength based on only hand location and force direction.

    PubMed

    La Delfa, Nicholas J; Potvin, Jim R

    2017-03-01

    This paper describes the development of a novel method (termed the 'Arm Force Field' or 'AFF') to predict manual arm strength (MAS) for a wide range of body orientations, hand locations and any force direction. This method used an artificial neural network (ANN) to predict the effects of hand location and force direction on MAS, and included a method to estimate the contribution of the arm's weight to the predicted strength. The AFF method predicted the MAS values very well (r² = 0.97, RMSD = 5.2 N, n = 456) and maintained good generalizability with external test data (r² = 0.842, RMSD = 13.1 N, n = 80). The AFF can be readily integrated within any DHM ergonomics software, and appears to be a more robust, reliable and valid method of estimating the strength capabilities of the arm, when compared to current approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
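    As a stand-in for the AFF idea, the Python sketch below trains a small neural network mapping hand location and unit force direction to manual arm strength; the synthetic training data, network size and the omission of the arm-weight correction are assumptions made purely for illustration, not the published model.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Synthetic stand-in data: inputs are hand location (x, y, z) and unit force
      # direction (u, v, w); the target is a fake manual arm strength in newtons.
      rng = np.random.default_rng(1)
      X = rng.uniform(-1, 1, size=(456, 6))                                # [x, y, z, u, v, w]
      y = 60 + 25 * X[:, 2] - 15 * X[:, 5] + rng.normal(0, 3, size=456)    # fake MAS (N)

      net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
      net.fit(X, y)

      query = np.array([[0.3, 0.1, 0.5, 0.0, 0.0, -1.0]])   # one hand pose / force direction
      print("predicted manual arm strength (N):", net.predict(query)[0])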

  4. Teaching Basic Field Skills Using Screen-Based Virtual Reality Landscapes

    NASA Astrophysics Data System (ADS)

    Houghton, J.; Robinson, A.; Gordon, C.; Lloyd, G. E. E.; Morgan, D. J.

    2016-12-01

    We are using screen-based virtual reality landscapes, created using the Unity 3D game engine, to augment the training geoscience students receive in preparing for fieldwork. Students explore these landscapes as they would real ones, interacting with virtual outcrops to collect data, determine location, and map the geology. Skills for conducting field geological surveys - collecting, plotting and interpreting data; time management and decision making - are introduced interactively and intuitively. As with real landscapes, the virtual landscapes are open-ended terrains with embedded data. This means the game does not structure student interaction with the information, as it is through experience that the student learns the best methods to work successfully and efficiently. These virtual landscapes are not replacements for geological fieldwork, but rather virtual spaces between classroom and field in which to train and reinforce essential skills. Importantly, these virtual landscapes offer accessible parallel provision for students unable to visit, or fully partake in visiting, the field. The project has received positive feedback from both staff and students. Results show students find it easier to focus on learning these basic field skills in a classroom, rather than a field setting, and make the same mistakes as when learning in the field, validating the realistic nature of the virtual experience and providing an opportunity to learn from these mistakes. The approach also saves time, and therefore resources, in the field as basic skills are already embedded. 70% of students report increased confidence with how to map boundaries and 80% have found the virtual training a useful experience. We are also developing landscapes based on real places with 3D photogrammetric outcrops, and a virtual urban landscape in which Engineering Geology students can conduct a site investigation. This project is a collaboration between the University of Leeds and Leeds College of Art, UK, and all

  5. Simple quality assurance method of dynamic tumor tracking with the gimbaled linac system using a light field.

    PubMed

    Miura, Hideharu; Ozawa, Shuichi; Hayata, Masahiro; Tsuda, Shintaro; Yamada, Kiyoshi; Nagata, Yasushi

    2016-09-08

    We proposed a simple visual method for evaluating the dynamic tumor tracking (DTT) accuracy of a gimbal mechanism using a light field. A single photon beam was set with a field size of 30 × 30 mm² at a gantry angle of 90°. The center of a cube phantom was set up at the isocenter of a motion table, and 4D modeling was performed based on the tumor and infrared (IR) marker motion. After 4D modeling, the cube phantom was replaced with a sheet of paper, which was placed perpendicularly, and a light field was projected on the sheet of paper. The light field was recorded using a web camera in a treatment room that was as dark as possible. Calculated images from each image obtained using the camera were summed to compose a total summation image. Sinusoidal motion sequences were produced by moving the phantom with a fixed amplitude of 20 mm and different breathing periods of 2, 4, 6, and 8 s. The light field was projected on the sheet of paper under three conditions: with the moving phantom and DTT based on the motion of the phantom, with the moving phantom and non-DTT, and with a stationary phantom for comparison. The values of tracking errors using the light field were 1.12 ± 0.72, 0.31 ± 0.19, 0.27 ± 0.12, and 0.15 ± 0.09 mm for breathing periods of 2, 4, 6, and 8 s, respectively. The tracking accuracy showed dependence on the breathing period. We proposed a simple quality assurance (QA) process for the tracking accuracy of a gimbal mechanism system using a light field and web camera. Our method can assess the tracking accuracy using a light field without irradiation and clearly visualize distributions like film dosimetry. © 2016 The Authors.

  6. Method of using triaxial magnetic fields for making particle structures

    DOEpatents

    Martin, James E.; Anderson, Robert A.; Williamson, Rodney L.

    2005-01-18

    A method of producing three-dimensional particle structures with enhanced magnetic susceptibility in three dimensions by applying a triaxial energetic field to a magnetic particle suspension and subsequently stabilizing said particle structure. Combinations of direct current and alternating current fields in three dimensions produce particle gel structures, honeycomb structures, and foam-like structures.

  7. A mixed pseudospectral/finite difference method for a thermally driven fluid in a nonuniform gravitational field

    NASA Technical Reports Server (NTRS)

    Macaraeg, M. G.

    1985-01-01

    A numerical study of the steady, axisymmetric flow in a heated, rotating spherical shell is conducted to model the Atmospheric General Circulation Experiment (AGCE) proposed to run aboard a later shuttle mission. The AGCE will consist of concentric rotating spheres confining a dielectric fluid. By imposing a dielectric field across the fluid a radial body force will be created. The numerical solution technique is based on the incompressible Navier-Stokes equations. In the method, a pseudospectral technique is used in the latitudinal direction, and a second-order accurate finite difference scheme discretizes the time and radial derivatives. This paper discusses the development and performance of this numerical scheme for the AGCE, which has been modelled in the past only by pure FD formulations. In addition, previous models have not investigated the effect of using a dielectric force to simulate terrestrial gravity. The effect of this dielectric force on the flow field is investigated, as well as a parameter study of varying rotation rates and boundary temperatures. Among the effects noted are the production of larger velocities and enhanced reversals of radial temperature gradients for a body force generated by the electric field.

  8. Physically consistent data assimilation method based on feedback control for patient-specific blood flow analysis.

    PubMed

    Ii, Satoshi; Adib, Mohd Azrul Hisham Mohd; Watanabe, Yoshiyuki; Wada, Shigeo

    2018-01-01

    This paper presents a novel data assimilation method for patient-specific blood flow analysis based on feedback control theory called the physically consistent feedback control-based data assimilation (PFC-DA) method. In the PFC-DA method, the signal, which is the residual error term of the velocity when comparing the numerical and reference measurement data, is cast as a source term in a Poisson equation for the scalar potential field that induces flow in a closed system. The pressure values at the inlet and outlet boundaries are recursively calculated by this scalar potential field. Hence, the flow field is physically consistent because it is driven by the calculated inlet and outlet pressures, without any artificial body forces. As compared with existing variational approaches, although this PFC-DA method does not guarantee the optimal solution, only one additional Poisson equation for the scalar potential field is required, providing a remarkable improvement for such a small additional computational cost at every iteration. Through numerical examples for 2D and 3D exact flow fields, with both noise-free and noisy reference data as well as a blood flow analysis on a cerebral aneurysm using actual patient data, the robustness and accuracy of this approach is shown. Moreover, the feasibility of a patient-specific practical blood flow analysis is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.
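    The core building block of the approach is a Poisson solve whose source term is the residual between computed and measured velocities, with the resulting scalar potential feeding back into the driving pressures. The Python sketch below shows only that building block on a 2-D grid; the Jacobi iteration, homogeneous boundary conditions and synthetic residual are simplifying assumptions, not the published implementation.

      import numpy as np

      def solve_poisson(source, h, n_sweeps=2000):
          """Jacobi iteration for the 2-D Poisson equation laplacian(phi) = source
          with homogeneous Dirichlet boundaries. In the PFC-DA spirit, the source is
          the residual between computed and measured velocities, and the resulting
          scalar potential would be used to update the driving pressures."""
          phi = np.zeros_like(source)
          for _ in range(n_sweeps):
              phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                        phi[1:-1, 2:] + phi[1:-1, :-2] -
                                        h * h * source[1:-1, 1:-1])
          return phi

      # Toy usage: residual between a "computed" and a "measured" velocity field.
      n, h = 64, 1.0 / 63
      u_num = np.random.default_rng(2).normal(0, 0.1, size=(n, n))
      u_ref = np.zeros((n, n))
      residual = u_num - u_ref
      phi = solve_poisson(residual, h)
      print("potential range:", phi.min(), phi.max())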

  9. a Marker-Based Eulerian-Lagrangian Method for Multiphase Flow with Supersonic Combustion Applications

    NASA Astrophysics Data System (ADS)

    Fan, Xiaofeng; Wang, Jiangfeng

    2016-06-01

    The atomization of liquid fuel is an intricate dynamic process from a continuous phase to a discrete phase. Fuel spray processes in supersonic flow are modeled with an Eulerian-Lagrangian computational fluid dynamics methodology. The method combines two distinct techniques into an integrated numerical simulation method to simulate the atomization processes. The traditional finite volume method based on a stationary (Eulerian) Cartesian grid is used to resolve the flow field, and multi-component Navier-Stokes equations are adopted in the present work, accounting for the mass exchange and heat transfer involved in the vaporization process. The marker-based moving (Lagrangian) grid is utilized to depict the behavior of atomized liquid sprays injected into a gaseous environment, and the discrete droplet model [13] is adopted. To verify the current approach, the proposed method is applied to simulate processes of liquid atomization in supersonic cross flow. Three classic breakup models, the TAB model, wave model and K-H/R-T hybrid model, are discussed. The numerical results are compared quantitatively from multiple perspectives, including spray penetration height and droplet size distribution. In addition, the complex flow field structures induced by the presence of the liquid spray are illustrated and discussed. It is validated that the marker-based Eulerian-Lagrangian method is effective and reliable.

  10. Virtual local target method for avoiding local minimum in potential field based robot navigation.

    PubMed

    Zou, Xi-Yong; Zhu, Jing

    2003-01-01

    A novel robot navigation algorithm with global path generation capability is presented. The local minimum is one of the most intractable but frequently encountered problems in potential field based robot navigation. By appropriately appointing virtual local targets along the journey, it can be solved effectively. The key concepts employed in this algorithm are the rules that govern when and how to appoint these virtual local targets. When the robot finds itself in danger of a local minimum, a virtual local target is appointed to replace the global goal temporarily according to the rules. After the virtual target is reached, the robot continues on its journey by heading towards the global goal. The algorithm prevents the robot from running into local minima. Simulation results showed that it is very effective in complex obstacle environments.
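    A toy Python version of the idea is sketched below: the robot descends an attractive-plus-repulsive potential, and when the gradient nearly vanishes far from the goal a temporary virtual local target is appointed until the robot escapes. The obstacle layout, gains and the particular target-placement rule are illustrative assumptions rather than the rules defined in the paper.

      import numpy as np

      def grad_potential(p, goal, obstacles, k_att=1.0, k_rep=5.0, d0=1.5):
          g = k_att * (p - goal)                                        # attractive gradient
          for obs in obstacles:
              d = np.linalg.norm(p - obs)
              if d < d0:
                  g += k_rep * (1.0 / d0 - 1.0 / d) * (p - obs) / d**3  # repulsive gradient
          return g

      goal = np.array([10.0, 0.0])
      obstacles = [np.array([5.0, 0.0])]
      p = np.array([0.0, 0.0])
      virtual_target = None

      for step in range(2000):
          target = virtual_target if virtual_target is not None else goal
          g = grad_potential(p, target, obstacles)
          if np.linalg.norm(g) < 1e-3 and np.linalg.norm(p - goal) > 0.5:
              virtual_target = p + np.array([0.0, 2.0])      # appoint a virtual local target
          elif virtual_target is not None and np.linalg.norm(p - virtual_target) < 0.2:
              virtual_target = None                          # resume heading to the goal
          p = p - 0.05 * g                                   # gradient-descent motion step
          if np.linalg.norm(p - goal) < 0.1:
              break

      print("final position:", p, "steps:", step)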

  11. FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    The Field Analytical Screening Program (FASP) pentachlorophenol (PCP) method uses a gas chromatograph (GC) equipped with a megabore capillary column and flame ionization detector (FID) and electron capture detector (ECD) to identify and quantify PCP. The FASP PCP method is design...

  12. Providing Culturally Responsive Teaching in Field-Based and Student Teaching Experiences: A Case Study

    ERIC Educational Resources Information Center

    Kea, Cathy D.; Trent, Stanley C.

    2013-01-01

    This mixed design study chronicles the yearlong outcomes of 27 undergraduate preservice teacher candidates' ability to design and deliver culturally responsive lesson plans during field-based experience lesson observations and student teaching settings after receiving instruction in a special education methods course. While components of…

  13. A telluric method for natural field induced polarization studies

    NASA Astrophysics Data System (ADS)

    Zorin, Nikita; Epishkin, Dmitrii; Yakovlev, Andrey

    2016-12-01

    Natural field induced polarization (NFIP) is a branch of low-frequency electromagnetics designed for detection of buried polarizable objects from magnetotelluric (MT) data. The conventional approach to the method deals with normalized MT apparent resistivity. We show that it is more favorable to extract the IP effect solely from electric (telluric) transfer functions instead. For lateral localization of polarizable bodies it is convenient to work with the telluric tensor determinant, which does not depend on the rotation of the receiving electric dipoles. The applicability of the new method was verified in the course of a large-scale field survey. The field work was conducted in a well-explored area in East Kazakhstan known for the presence of various IP sources such as graphite, magnetite, and sulfide mineralization. A new multichannel processing approach allowed the determination of the telluric tensor components with very good accuracy. This suggests that in some cases NFIP data may be used not only for detection of polarizable objects, but also for a rough estimation of their spectral IP characteristics.

  14. Method of improving field emission characteristics of diamond thin films

    DOEpatents

    Krauss, A.R.; Gruen, D.M.

    1999-05-11

    A method of preparing diamond thin films with improved field emission properties is disclosed. The method includes preparing a diamond thin film on a substrate, such as Mo, W, Si and Ni. An atmosphere of hydrogen (molecular or atomic) can be provided above the already deposited film to form absorbed hydrogen to reduce the work function and enhance field emission properties of the diamond film. In addition, hydrogen can be absorbed on intergranular surfaces to enhance electrical conductivity of the diamond film. The treated diamond film can be part of a microtip array in a flat panel display. 3 figs.

  15. Method of improving field emission characteristics of diamond thin films

    DOEpatents

    Krauss, Alan R.; Gruen, Dieter M.

    1999-01-01

    A method of preparing diamond thin films with improved field emission properties. The method includes preparing a diamond thin film on a substrate, such as Mo, W, Si and Ni. An atmosphere of hydrogen (molecular or atomic) can be provided above the already deposited film to form absorbed hydrogen to reduce the work function and enhance field emission properties of the diamond film. In addition, hydrogen can be absorbed on intergranular surfaces to enhance electrical conductivity of the diamond film. The treated diamond film can be part of a microtip array in a flat panel display.

  16. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  17. An Efficient and Examinable Illegal Fallow Fields Detecting Method with Spatio-Temporal Information Integration

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Hao; Chu, Tzu-How

    2017-04-01

    To control rice production and farm usage in Taiwan, the Agriculture and Food Agency (AFA) has published a series of policies since 1983 to subsidize farmers to plant different crops or to practice fallow. Because there was no efficient and examinable mechanism to verify the fallow fields surveyed by township offices, illegal fallow fields recurred each year. In this research, we used remote sensing images, GIS data of fields, and application records of fallow fields to establish an illegal fallow field detection method in Yulin County in central Taiwan. This method included: 1. collecting multi-temporal images from the FS-2 or SPOT series over four time periods; 2. combining the application records and GIS field data to verify the locations of fallow fields; 3. conducting ground truth surveys and classifying images with ISODATA and Maximum Likelihood Classification (MLC); 4. defining the land cover type of fallow fields by zonal statistics; 5. verifying accuracy against ground truth; 6. developing a potential illegal fallow field survey method and benefit estimation. We used 190 fallow fields (127 legal and 63 illegal) as ground truth; the producer and user accuracies of illegal fallow field interpretation were 71.43% and 38.46%, respectively. If township offices surveyed the 117 fields classified as illegal fallow, 45 of the 63 truly illegal fallow fields would be detected. By using our method, township offices can save 38.42% of the manpower needed to detect illegal fallow fields while achieving an examinable producer accuracy of 71.43%.
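
    For reference, the reported accuracies follow directly from the counts given above (45 correctly detected illegal fields out of 63 truly illegal fields and 117 fields classified as illegal); a minimal check:

```python
# Producer and user accuracy of the "illegal fallow" class from a 2x2 confusion matrix,
# using the counts reported in the abstract above.
tp = 45          # illegal fields correctly classified as illegal
fn = 63 - 45     # illegal fields missed (classified as legal)
fp = 117 - 45    # legal fields wrongly classified as illegal

producer_accuracy = tp / (tp + fn)   # 45 / 63  -> 71.43%
user_accuracy = tp / (tp + fp)       # 45 / 117 -> 38.46%

print(f"producer accuracy: {producer_accuracy:.2%}")
print(f"user accuracy:     {user_accuracy:.2%}")
```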

  18. Online virtual isocenter based radiation field targeting for high performance small animal microirradiation

    NASA Astrophysics Data System (ADS)

    Stewart, James M. P.; Ansell, Steve; Lindsay, Patricia E.; Jaffray, David A.

    2015-12-01

    Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre-scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed an improved targeting accuracy, with the three-dimensional vector targeting error across six different collimators reduced from 0.56 ± 0.05 mm (mean ± SD) to 0.05 ± 0.05 mm for an isotropic imaging voxel size of 0.1 mm.

  19. A new method for indirectly estimating infiltration of paddy fields in situ

    NASA Astrophysics Data System (ADS)

    Xu, Yunqiang; Su, Baolin; Wang, Hongqi; He, Jingyi

    2018-06-01

    Infiltration is one of the major processes in water balance research and pollution load estimation in paddy fields. In this study, a new method for indirectly estimating infiltration of paddy fields in situ was proposed and implemented in the Taihu Lake basin. When there is no rainfall, irrigation, or artificial drainage, the water depth variation of a paddy field is influenced only by evapotranspiration and infiltration (E + F). Firstly, (E + F) was estimated by determining the steady rate of decrease of the water depth; then the evapotranspiration (ET) of the paddy field was calculated using the crop coefficient method with the recommended FAO-56 Penman-Monteith equation; finally, the infiltration of the paddy field was obtained by subtracting ET from (E + F). Results show that the mean infiltration of the studied paddy field during the rice jointing-booting period was 7.41 mm day⁻¹, and the mean vertical infiltration and lateral seepage of the paddy field were 5.46 and 1.95 mm day⁻¹, respectively.
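
    A minimal sketch of the water balance arithmetic described above; the daily water depth drop, crop coefficient Kc, and reference evapotranspiration ET0 below are hypothetical illustration values, not the study's data:

```python
# Infiltration F = (E + F) - ET, where (E + F) is the steady rate of water
# depth decrease and ET is estimated with the crop coefficient method.
def daily_infiltration(depth_drop_mm_per_day, kc, et0_mm_per_day):
    """Return infiltration (mm/day) inferred from the measured water depth drop."""
    et = kc * et0_mm_per_day          # crop evapotranspiration, FAO-56 style
    return depth_drop_mm_per_day - et

# e.g. a 12.0 mm/day drop, an assumed Kc = 1.1, and an assumed ET0 = 4.2 mm/day
print(daily_infiltration(12.0, kc=1.1, et0_mm_per_day=4.2))  # -> 7.38 mm/day
```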

  20. Fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots.

    PubMed

    Yoo, Jeong-Ki; Kim, Jong-Hwan

    2012-02-01

    When a humanoid robot moves in a dynamic environment, a simple process of planning and following a path may not guarantee competent performance for dynamic obstacle avoidance, because the robot acquires limited information from the environment using a local vision sensor. Thus, it is essential to update its local map as frequently as possible to obtain more information through gaze control while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with the modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria based on local map confidence, waypoint, self-localization, and obstacles are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for the criteria, the fuzzy integral is applied to each candidate gaze direction for global evaluation. For effective dynamic obstacle avoidance, partial evaluation functions for self-localization error and surrounding obstacles are also used to generate a virtual dynamic obstacle for the modified-univector field method, which generates the path and velocity of the robot toward the next waypoint. The proposed architecture is verified through comparison with the conventional weighted sum-based approach in simulations using a simulator developed for HanSaRam-IX (HSR-IX).

  1. Field Evaluation of Advanced Methods of Subsurface Exploration for Transit Tunneling

    DOT National Transportation Integrated Search

    1980-06-01

    This report presents the results of a field evaluation of advanced methods of subsurface exploration on an ongoing urban rapid transit tunneling project. The objective of this study is to evaluate, through a field demonstration project, the feasibili...

  2. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis, or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
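
    A minimal sketch of the final interpolation step under stated assumptions: the displacement field is taken as given (here a synthetic constant shift), and intermediate intensities are averaged between corresponding points; this is not the authors' implementation.

```python
# Synthesize an intermediate slice from two neighbours and a dense displacement
# field by averaging intensities at corresponding points.
import numpy as np
from scipy.ndimage import map_coordinates

def interpolate_slice(img0, img1, disp, t=0.5):
    """img0, img1: 2D arrays; disp: (2, H, W) flow mapping img0 -> img1 in
    (row, col) order; t: fractional position of the interpolated slice."""
    h, w = img0.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # A point passing through (yy, xx) at time t started near (yy, xx) - t*disp
    # in img0 and ends near (yy, xx) + (1 - t)*disp in img1.
    coords0 = np.array([yy - t * disp[0], xx - t * disp[1]])
    coords1 = np.array([yy + (1 - t) * disp[0], xx + (1 - t) * disp[1]])
    warped0 = map_coordinates(img0, coords0, order=1, mode="nearest")
    warped1 = map_coordinates(img1, coords1, order=1, mode="nearest")
    return (1 - t) * warped0 + t * warped1

# Tiny demo: a square shifted by 3 pixels in x, with the matching constant flow
img0 = np.zeros((64, 64)); img0[20:30, 20:30] = 1.0
img1 = np.zeros((64, 64)); img1[20:30, 23:33] = 1.0
disp = np.zeros((2, 64, 64)); disp[1] = 3.0
mid = interpolate_slice(img0, img1, disp, t=0.5)
print(round(mid.sum()))  # ~100: the square keeps its area, shifted by 1.5 px
```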

  3. [Development and application of electroanalytical methods in biomedical fields].

    PubMed

    Kusu, Fumiyo

    2015-01-01

    To summarize our electroanalytical research in the biomedical field over the past 43 years, this review describes studies on specular reflection measurement, redox potential determination, amperometric acid sensing, HPLC with electrochemical detection, and potential oscillation across a liquid membrane. The specular reflection method was used to clarify the adsorption of neurotransmitters and their related drugs onto a gold electrode and the interaction between dental alloys and compound iodine glycerin. A voltammetric screening test using a redox potential for the antioxidative effect of flavonoids was proposed. Amperometric acid sensing based on the measurement of the reduction prepeak current of 2-methyl-1,4-naphthoquinone (VK3) or 3,5-di-tert-butyl-1,2-benzoquinone (DBBQ) was applied to determine the acid values of fats and oils, the titratable acidity of coffee, and the enzyme activity of lipase, free fatty acids (FFAs) in serum, short-chain fatty acids in feces, etc. The electrode reactions of phenothiazines, catechins, and cholesterol were applied to biomedical analysis using HPLC with electrochemical detection. A three-channel electrochemical detection system was utilized for the sensitive determination of redox compounds in Chinese herbal medicines. The behavior of barbituric acid derivatives was examined based on potential oscillation measurements.

  4. Development of a three-dimensional correction method for optical distortion of flow field inside a liquid droplet.

    PubMed

    Gim, Yeonghyeon; Ko, Han Seo

    2016-04-15

    In this Letter, a three-dimensional (3D) optical correction method, which was verified by simulation, was developed to reconstruct droplet-based flow fields. In the simulation, a synthetic phantom was reconstructed using a simultaneous multiplicative algebraic reconstruction technique with three detectors positioned at the synthetic object (represented by the phantom), with offset angles of 30° relative to each other. Additionally, a projection matrix was developed using the ray tracing method. If the phantom is in liquid, the image of the phantom can be distorted since the light passes through a convex liquid-vapor interface. Because of the optical distortion effect, the projection matrix used to reconstruct a 3D field should be supplemented by the revision ray, instead of the original projection ray. The revision ray can be obtained from the refraction ray occurring on the surface of the liquid. As a result, the error on the reconstruction field of the phantom could be reduced using the developed optical correction method. In addition, the developed optical method was applied to a Taylor cone which was caused by the high voltage between the droplet and the substrate.

  5. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    PubMed

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey level distribution of the local image. The means of this objective function have a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is thus fully used, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  6. A Novel High Sensitivity Sensor for Remote Field Eddy Current Non-Destructive Testing Based on Orthogonal Magnetic Field

    PubMed Central

    Xu, Xiaojie; Liu, Ming; Zhang, Zhanbin; Jia, Yueling

    2014-01-01

    Remote field eddy current is an effective non-destructive testing method for ferromagnetic tubular structures. In view of conventional sensors' disadvantages, such as low signal-to-noise ratio and poor sensitivity to axial cracks, a novel high sensitivity sensor based on orthogonal magnetic field excitation is proposed. Firstly, through a three-dimensional finite element simulation, the remote field effect under orthogonal magnetic field excitation is determined, and an appropriate configuration which can generate an orthogonal magnetic field for a tubular structure is developed. Secondly, the optimized selection of key parameters such as frequency, exciting currents, and shielding modes is analyzed in detail, and different types of pick-up coils, including a new self-differential mode pick-up coil, are designed and analyzed. Lastly, the proposed sensor is verified experimentally on various types of defects manufactured on a section of a ferromagnetic tube. Experimental results show that the proposed sensor can greatly improve the sensitivity of defect detection, especially for axial cracks whose depth is less than 40% of the wall thickness, which are very difficult to detect and identify with conventional sensors. Another noteworthy advantage of the proposed sensor is that it has almost equal sensitivity to various types of defects when a self-differential mode pick-up coil is adopted. PMID:25615738

  7. Practical quantum mechanics-based fragment methods for predicting molecular crystal properties.

    PubMed

    Wen, Shuhao; Nanda, Kaushik; Huang, Yuanhang; Beran, Gregory J O

    2012-06-07

    Significant advances in fragment-based electronic structure methods have created a real alternative to force-field and density functional techniques in condensed-phase problems such as molecular crystals. This perspective article highlights some of the important challenges in modeling molecular crystals and discusses techniques for addressing them. First, we survey recent developments in fragment-based methods for molecular crystals. Second, we use examples from our own recent research on a fragment-based QM/MM method, the hybrid many-body interaction (HMBI) model, to analyze the physical requirements for a practical and effective molecular crystal model chemistry. We demonstrate that it is possible to predict molecular crystal lattice energies to within a couple of kJ mol⁻¹ and lattice parameters to within a few percent in small-molecule crystals. Fragment methods provide a systematically improvable approach to making predictions in the condensed phase, which is critical to making robust predictions regarding the subtle energy differences found in molecular crystals.

  8. Improved methods for the measurement and analysis of stellar magnetic fields

    NASA Technical Reports Server (NTRS)

    Saar, Steven H.

    1988-01-01

    The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about ±20 percent.

  9. A comparison between GO/aperture-field and physical-optics methods for offset reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1984-01-01

    Both geometrical optics (GO)/aperture-field and physical-optics (PO) methods are used extensively in the diffraction analysis of offset parabolic and dual reflectors. An analytical/numerical comparative study is performed to demonstrate the limitations of the GO/aperture-field method for accurately predicting the sidelobe and null positions and levels. In particular, it is shown that for offset parabolic reflectors and for feeds located at the focal point, the predicted far-field patterns (amplitude) by the GO/aperture-field method will always be symmetric even in the offset plane. This, of course, is inaccurate for the general case and it is shown that the physical-optics method can result in asymmetric patterns for cases in which the feed is located at the focal point. Representative numerical data are presented and a comparison is made with available measured data.

  10. A geologic approach to field methods in fluvial geomorphology

    USGS Publications Warehouse

    Fitzpatrick, Faith A.; Thornbush, Mary J; Allen, Casey D; Fitzpatrick, Faith A.

    2014-01-01

    A geologic approach to field methods in fluvial geomorphology is useful for understanding causes and consequences of past, present, and possible future perturbations in river behavior and floodplain dynamics. Field methods include characterizing river planform and morphology changes and floodplain sedimentary sequences over long periods of time along a longitudinal river continuum. Techniques include topographic and bathymetric surveying of fluvial landforms in valley bottoms and describing floodplain sedimentary sequences through coring, trenching, and examining pits and exposures. Historical sediment budgets that include floodplain sedimentary records can characterize past and present sources and sinks of sediment along a longitudinal river continuum. Describing paleochannels and floodplain vertical accretion deposits, estimating long-term sedimentation rates, and constructing historical sediment budgets can assist in management of aquatic resources, habitat, sedimentation, and flooding issues.

  11. Thorough exploration of complex environments with a space-based potential field

    NASA Astrophysics Data System (ADS)

    Kenealy, Alina; Primiano, Nicholas; Keyes, Alex; Lyons, Damian M.

    2015-01-01

    Robotic exploration, for the purposes of search and rescue or explosive device detection, can be improved by using a team of multiple robots. Potential field navigation methods offer natural and efficient distributed exploration algorithms in which team members are mutually repelled to spread out and cover the area efficiently. However, they also suffer from field minima issues. Liu and Lyons proposed a Space-Based Potential Field (SBPF) algorithm that disperses robots efficiently and also ensures they are driven in a distributed fashion to cover complex geometry. In this paper, the approach is modified to handle two problems with the original SBPF method: fast exploration of enclosed spaces, and fast navigation of convex obstacles. Firstly, a "gate-sensing" function was implemented. The function draws the robot to narrow openings, such as doors or corridors that it might otherwise pass by, to ensure every room can be explored. Secondly, an improved obstacle field conveyor belt function was developed which allows the robot to avoid walls and barriers while using their surface as a motion guide to avoid being trapped. Simulation results, where the modified SBPF program controls the MobileSim Pioneer 3-AT simulator program, are presented for a selection of maps that capture difficult to explore geometries. Physical robot results are also presented, where a team of Pioneer 3-AT robots is controlled by the modified SBPF program. Data collected prior to the improvements, new simulation results, and robot experiments are presented as evidence of performance improvements.

  12. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report first on our work on the development of numerical methods for tangent curve computation.
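
    As an illustration of tangent curve computation (details assumed, not taken from the report), a classical RK4 integrator traced through a bilinearly sampled 2D vector field:

```python
# Tracing a tangent curve (streamline) through a sampled 2D vector field.
import numpy as np

def sample_field(field, p):
    """Bilinearly sample a (H, W, 2) vector field at point p = (x, y)."""
    h, w, _ = field.shape
    x = np.clip(p[0], 0, w - 1.001)
    y = np.clip(p[1], 0, h - 1.001)
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * field[y0, x0] + fx * (1 - fy) * field[y0, x0 + 1]
            + (1 - fx) * fy * field[y0 + 1, x0] + fx * fy * field[y0 + 1, x0 + 1])

def trace_tangent_curve(field, seed, h=0.5, n_steps=200):
    """Integrate dp/ds = v(p) with classical RK4, starting from seed = (x, y)."""
    p = np.asarray(seed, dtype=float)
    curve = [p.copy()]
    for _ in range(n_steps):
        k1 = sample_field(field, p)
        k2 = sample_field(field, p + 0.5 * h * k1)
        k3 = sample_field(field, p + 0.5 * h * k2)
        k4 = sample_field(field, p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        curve.append(p.copy())
    return np.array(curve)

# Demo: a simple rotational field v = (-(y - 32), x - 32) sampled on a 64x64 grid
ys, xs = np.mgrid[0:64, 0:64]
field = np.dstack([-(ys - 32.0), xs - 32.0]) * 0.05
print(trace_tangent_curve(field, seed=(48.0, 32.0), h=0.5, n_steps=100)[-1])
```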

  13. An atlas-based multimodal registration method for 2D images with discrepancy structures.

    PubMed

    Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng

    2018-06-04

    An atlas-based multimodal registration method for 2-dimensional images with discrepancy structures was proposed in this paper. The atlas was utilized to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the sum of squared intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures.

  14. A New Method for Coronal Magnetic Field Reconstruction

    NASA Astrophysics Data System (ADS)

    Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung

    2017-08-01

    A precise way of coronal magnetic field reconstruction (extrapolation) is an indispensable tool for the understanding of various solar activities. A variety of reconstruction codes have been developed so far and are available to researchers nowadays, but each bears its own shortcomings. In this paper, a new efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of the magnetic field and current density at the bottom boundary to avoid overspecification of the reconstruction problem, and employs vector potentials to guarantee divergence-freeness. In our method, the normal component of the current density is imposed not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a possible numerical instability that occasionally arises in codes using A. In real reconstruction problems, the information for the lateral and top boundaries is absent. The arbitrariness of the boundary conditions imposed there, as well as various preprocessing, brings about the diversity of resulting solutions. We impose the source surface condition at the top boundary to accommodate flux imbalance, which always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most figures of merit devised by Schrijver et al. (2006). We have also applied our code to a real active region, NOAA 11974, in which two M-class flares and a halo CME took place. The EUV observation shows the sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwining flux tubes exist before the flare and their shackling is released after the CME, with one of them opened up. We suggest that the erupting loop is created by magnetic reconnection between

  15. Sound Source Localization Using Non-Conformal Surface Sound Field Transformation Based on Spherical Harmonic Wave Decomposition

    PubMed Central

    Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie

    2017-01-01

    Spherical microphone arrays have received increasing attention for their ability to locate a sound source at an arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located by using spherical near-field acoustic holography. The reconstruction surface and holography surface are conformal surfaces in the conventional sound field transformation based on the generalized Fourier transform. When the sound source is on a cylindrical surface, it is difficult to locate using a spherical conformal-surface transform. A non-conformal sound field transformation, built on a transfer matrix based on spherical harmonic wave decomposition, is proposed in this paper; it transforms the field measured on a spherical surface onto a cylindrical surface using spherical array data. The theoretical expressions of the proposed method are deduced, and the performance of the method is simulated. Moreover, an experiment on sound source localization using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal surface sound field transformation from a spherical surface to a cylindrical surface is realized by using the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is extended, and the localization ability of the spherical array is improved. PMID:28489065

  16. Remote-Sensing-Based Evaluation of Relative Consumptive Use Between Flood- and Drip-Irrigated Fields

    NASA Astrophysics Data System (ADS)

    Martinez Baquero, G. F.; Jordan, D. L.; Whittaker, A. T.; Allen, R. G.

    2013-12-01

    Governments and water authorities are compelled to evaluate the impacts of agricultural irrigation on economic development and sustainability as water supply shortages continue to increase in many communities. One of the strategies commonly used to reduce such impacts is the conversion of traditional irrigation methods towards more water-efficient practices. As part of a larger effort by the New Mexico Interstate Stream Commission to understand the environmental and economic impact of converting from flood irrigation to drip irrigation, this study evaluates the water-saving effectiveness of drip irrigation in Deming, New Mexico, using a remote-sensing-based technique combined with ground data collection. The remote-sensing-based technique used relative temperature differences as a proxy for water use to show relative differences in crop consumptive use between flood- and drip-irrigated fields. Temperature analysis showed that, on average, drip-irrigated fields were cooler than flood-irrigated fields, indicating higher water use. The higher consumption of water by drip-irrigated fields was supported by a determination of evapotranspiration (ET) from all fields using the METRIC Landsat-based surface energy balance model. METRIC analysis yielded higher instantaneous ET for drip-irrigated fields when compared to flood-irrigated fields and confirmed that drip-irrigated fields consumed more water than flood-irrigated fields planted with the same crop. More water use generally results in more biomass and hence higher crop yield, and this too was confirmed by greater relative Normalized Difference Vegetation Index for the drip irrigated fields. Results from this study confirm previous estimates regarding the impacts of increased efficiency of drip irrigation on higher water consumption in the area (Ward and Pulido-Velazquez, 2008). The higher water consumption occurs with drip because, with the limited water supplies and regulated maximum limits on pumping amounts, the

  17. Field methods to measure surface displacement and strain with the Video Image Correlation method

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Horton, Charles M.; Mcneill, Stephen R.; Lansing, Matthew D.

    1994-01-01

    The objective of this project was to develop methods and application procedures to measure displacement and strain fields during the structural testing of aerospace components using paint speckle in conjunction with the Video Image Correlation (VIC) system.

  18. An optimized content-aware image retargeting method: toward expanding the perceived visual field of the high-density retinal prosthesis recipients

    NASA Astrophysics Data System (ADS)

    Li, Heng; Zeng, Yajie; Lu, Zhuofan; Cao, Xiaofei; Su, Xiaofan; Sui, Xiaohong; Wang, Jing; Chai, Xinyu

    2018-04-01

    Objective. Retinal prosthesis devices have shown great value in restoring some sight for individuals with profoundly impaired vision, but the visual acuity and visual field provided by prostheses greatly limit recipients' visual experience. In this paper, we employ computer vision approaches to expand the perceptible visual field in patients potentially implanted with a high-density retinal prosthesis, while maintaining visual acuity as much as possible. Approach. We propose an optimized content-aware image retargeting method, introducing salient object detection based on color and intensity-difference contrast, aiming to remap important information of a scene into a small visual field while preserving its original scale as much as possible. It may improve prosthetic recipients' perceived visual field and aid in performing some visual tasks (e.g. object detection and object recognition). To verify our method, psychophysical experiments, detecting object number and recognizing objects, are conducted under simulated prosthetic vision. As controls, we use three other image retargeting techniques: Cropping, Scaling, and seam-assisted shrinkability. Main results. Results show that our method performs best in preserving key features and has significantly higher recognition accuracy than the other three image retargeting methods under the condition of small visual field and low resolution. Significance. The proposed method is beneficial for expanding the perceived visual field of prosthesis recipients and improving their object detection and recognition performance. It suggests that our method may provide an effective option for the image processing module in future high-density retinal implants.

  19. Studies on system and measuring method of far-field beam divergency in near field by Ronchi ruling

    NASA Astrophysics Data System (ADS)

    Zhou, Chenbo; Yang, Li; Ma, Wenli; Yan, Peiying; Fan, Tianquan; He, Shangfeng

    1996-10-01

    Until now, a distance as large as seven times the Rayleigh range or more has been needed to measure the far-field Gaussian beam divergence. Such a method is very inconvenient for determining the output beam divergence of industrial products such as He-Ne lasers, and the measuring unit occupies a large space. The measurement and the measuring accuracy are greatly influenced by the environment. The application of a Ronchi ruling to the measurement of the far-field divergence of a Gaussian beam in the near field is analyzed in this paper. The theoretical research and the experiments show that this measuring method is convenient for industrial application. The measuring system consists of a precision mechanical unit which scans the Gaussian beam with a micro-displaced Ronchi ruling, a signal sampling system, a single-chip microcomputer data processing system, and an electronic unit with microprinter output. The system is stable and its repeatability errors are low. The spot size and far-field divergence of a visible Gaussian laser beam can be measured with the system.

  20. A phase match based frequency estimation method for sinusoidal signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao

    2015-04-01

    Accurate frequency estimation significantly affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross-correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that the proposed method has better frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to effectively improving the precision of LFMCW radars.
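
    As a hedged illustration of the linear prediction property mentioned above (not the paper's phase-match estimator): a single sinusoid satisfies x[n] + x[n-2] = 2 cos(w) x[n-1], which yields a simple least-squares frequency estimate.

```python
# Frequency estimation of a single real sinusoid from its linear prediction property.
import numpy as np

def lp_frequency_estimate(x, fs):
    """Estimate the frequency (Hz) of a single real sinusoid sampled at fs."""
    x = np.asarray(x, dtype=float)
    num = np.sum(x[1:-1] * (x[2:] + x[:-2]))
    den = 2.0 * np.sum(x[1:-1] ** 2)
    cos_w = np.clip(num / den, -1.0, 1.0)   # cos of the normalized angular frequency
    return np.arccos(cos_w) * fs / (2.0 * np.pi)

# Quick check on a lightly noisy 1230 Hz tone sampled at 10 kHz
rng = np.random.default_rng(0)
fs, f0 = 10_000.0, 1_230.0
n = np.arange(2048)
x = np.cos(2 * np.pi * f0 * n / fs + 0.4) + 0.01 * rng.standard_normal(n.size)
print(lp_frequency_estimate(x, fs))  # close to 1230 Hz
```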

  1. Extremely high-definition full-parallax computer-generated hologram created by the polygon-based method.

    PubMed

    Matsushima, Kyoji; Nakahara, Sumio

    2009-12-01

    A large-scale full-parallax computer-generated hologram (CGH) with four billion (2^16 x 2^16) pixels is created to reconstruct a fine true 3D image of a scene, with occlusions. The polygon-based method numerically generates the object field of a surface object, whose shape is provided by a set of vertex data of polygonal facets, while the silhouette method makes it possible to reconstruct the occluded scene. A novel technique using the segmented frame buffer is presented for handling and propagating large wave fields even in the case where the whole wave field cannot be stored in memory. We demonstrate that the full-parallax CGH, calculated by the proposed method and fabricated by a laser lithography system, reconstructs a fine 3D image accompanied by a strong sensation of depth.

  2. FIELD ANALYTICAL SCREENING PROGRAM: PCB METHOD - INNOVATIVE TECHNOLOGY REPORT

    EPA Science Inventory

    This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...

  3. Opto-mechatronics issues in solid immersion lens based near-field recording

    NASA Astrophysics Data System (ADS)

    Park, No-Cheol; Yoon, Yong-Joong; Lee, Yong-Hyun; Kim, Joong-Gon; Kim, Wan-Chin; Choi, Hyun; Lim, Seungho; Yang, Tae-Man; Choi, Moon-Ho; Yang, Hyunseok; Rhim, Yoon-Chul; Park, Young-Pil

    2007-06-01

    We analyzed the effects of an external shock on the collision problem in solid immersion lens (SIL) based near-field recording (NFR) through a shock response analysis, and proposed a possible solution to this problem by adopting a protector and a safety mode. With this proposed method, collisions between the SIL and the media can be avoided. We also showed a possible solution to the contamination problem in SIL-based NFR through a numerical air flow analysis. In addition, we introduced possible solid immersion lens designs to increase the fabrication and assembly tolerances of an optical head with a replicated lens. Potentially, these research results could advance NFR technology toward commercial products.

  4. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity) and from that relationship derives a freshwater aquatic life benchmark. This benchmark of 300 µS/cm may be applied to waters in Appalachian streams that are dominated by calcium and magnesium salts of sulfate and bicarbonate at circum-neutral to mildly alkaline pH. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.

  5. Unmanned Aerial Vehicle Remote Sensing for Field-Based Crop Phenotyping: Current Status and Perspectives.

    PubMed

    Yang, Guijun; Liu, Jiangang; Zhao, Chunjiang; Li, Zhenhong; Huang, Yanbo; Yu, Haiyang; Xu, Bo; Yang, Xiaodong; Zhu, Dongmei; Zhang, Xiaoyan; Zhang, Ruyang; Feng, Haikuan; Zhao, Xiaoqing; Li, Zhenhai; Li, Heli; Yang, Hao

    2017-01-01

    Phenotyping plays an important role in crop science research; the accurate and rapid acquisition of phenotypic information of plants or cells in different environments is helpful for exploring the inheritance and expression patterns of the genome, determining the association between genomic and phenotypic information, and increasing crop yield. Traditional methods for acquiring crop traits, such as plant height, leaf color, leaf area index (LAI), chlorophyll content, biomass, and yield, rely on manual sampling, which is time-consuming and laborious. Unmanned aerial vehicle remote sensing platforms (UAV-RSPs) equipped with different sensors have recently become an important approach for fast and non-destructive high-throughput phenotyping, and have the advantages of flexible and convenient operation, on-demand access to data, and high spatial resolution. UAV-RSPs are a powerful tool for studying phenomics and genomics. Because users who wish to derive phenotypic parameters from large fields and trials need methods and applications for UAV-based field phenotyping that require minimal field work while yielding highly reliable results, the current status and perspectives on UAV-RSPs for field-based phenotyping are reviewed here, based on a literature survey of crop phenotyping using UAV-RSPs in the Web of Science™ Core Collection database and case studies by NERCITA. Guidance for the selection of UAV platforms and remote sensing sensors, the commonly adopted methods and typical applications for analyzing phenotypic traits with UAV-RSPs, and the challenges of crop phenotyping with UAV-RSPs are considered. The review can provide theoretical and technical support to promote the applications of UAV-RSPs for crop phenotyping.

  6. Unmanned Aerial Vehicle Remote Sensing for Field-Based Crop Phenotyping: Current Status and Perspectives

    PubMed Central

    Yang, Guijun; Liu, Jiangang; Zhao, Chunjiang; Li, Zhenhong; Huang, Yanbo; Yu, Haiyang; Xu, Bo; Yang, Xiaodong; Zhu, Dongmei; Zhang, Xiaoyan; Zhang, Ruyang; Feng, Haikuan; Zhao, Xiaoqing; Li, Zhenhai; Li, Heli; Yang, Hao

    2017-01-01

    Phenotyping plays an important role in crop science research; the accurate and rapid acquisition of phenotypic information of plants or cells in different environments is helpful for exploring the inheritance and expression patterns of the genome, determining the association between genomic and phenotypic information, and increasing crop yield. Traditional methods for acquiring crop traits, such as plant height, leaf color, leaf area index (LAI), chlorophyll content, biomass, and yield, rely on manual sampling, which is time-consuming and laborious. Unmanned aerial vehicle remote sensing platforms (UAV-RSPs) equipped with different sensors have recently become an important approach for fast and non-destructive high-throughput phenotyping, and have the advantages of flexible and convenient operation, on-demand access to data, and high spatial resolution. UAV-RSPs are a powerful tool for studying phenomics and genomics. Because users who wish to derive phenotypic parameters from large fields and trials need methods and applications for UAV-based field phenotyping that require minimal field work while yielding highly reliable results, the current status and perspectives on UAV-RSPs for field-based phenotyping are reviewed here, based on a literature survey of crop phenotyping using UAV-RSPs in the Web of Science™ Core Collection database and case studies by NERCITA. Guidance for the selection of UAV platforms and remote sensing sensors, the commonly adopted methods and typical applications for analyzing phenotypic traits with UAV-RSPs, and the challenges of crop phenotyping with UAV-RSPs are considered. The review can provide theoretical and technical support to promote the applications of UAV-RSPs for crop phenotyping. PMID:28713402

  7. Advanced image based methods for structural integrity monitoring: Review and prospects

    NASA Astrophysics Data System (ADS)

    Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.

    2018-02-01

    There is a growing trend in engineering to develop methods for structural integrity monitoring and characterization of the in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics has brought about a paradigm change in phenomena sensing. Hence, several widely applicable optical approaches are playing a significant role in support of experiments. The current review manuscript describes advanced image based methods for structural integrity monitoring, and focuses on methods such as Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (Shearography). These non-contact full-field techniques rely on intensive image processing methods to measure mechanical behaviour, and they evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.

  8. A universal strategy for the creation of machine learning-based atomistic force fields

    NASA Astrophysics Data System (ADS)

    Huan, Tran Doan; Batra, Rohit; Chapman, James; Krishnan, Sridevi; Chen, Lihua; Ramprasad, Rampi

    2017-09-01

    Emerging machine learning (ML)-based approaches provide powerful and novel tools to study a variety of physical and chemical problems. In this contribution, we outline a universal strategy to create ML-based atomistic force fields, which can be used to perform high-fidelity molecular dynamics simulations. This scheme involves (1) preparing a big reference dataset of atomic environments and forces with sufficiently low noise, e.g., using density functional theory or higher-level methods, (2) utilizing a generalizable class of structural fingerprints for representing atomic environments, (3) optimally selecting diverse and non-redundant training datasets from the reference data, and (4) proposing various learning approaches to predict atomic forces directly (and rapidly) from atomic configurations. From the atomistic forces, accurate potential energies can then be obtained by appropriate integration along a reaction coordinate or along a molecular dynamics trajectory. Based on this strategy, we have created model ML force fields for six elemental bulk solids, including Al, Cu, Ti, W, Si, and C, and show that all of them can reach chemical accuracy. The proposed procedure is general and universal, in that it can potentially be used to generate ML force fields for any material using the same unified workflow with little human intervention. Moreover, the force fields can be systematically improved by adding new training data progressively to represent atomic environments not encountered previously.
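
    A minimal sketch of step (4) under stated assumptions: random placeholder "fingerprints" stand in for real descriptors, and kernel ridge regression (scikit-learn) stands in for whichever learner is actually used; this is not the authors' code.

```python
# Learning one Cartesian force component from structural fingerprints.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder reference data: each row is a fingerprint of one atomic
# environment; y holds one force component from, e.g., DFT (hypothetical here).
X = rng.normal(size=(2000, 32))          # hypothetical fingerprint vectors
y = np.tanh(X @ rng.normal(size=32))     # hypothetical force component

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05)
model.fit(X_train, y_train)

rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
print(f"test RMSE: {rmse:.4f}")
```

    In practice the same fit would be repeated per force component (or with a vector-valued learner), and the training set would be pruned for diversity as described above.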

  9. Near-Field Source Localization by Using Focusing Technique

    NASA Astrophysics Data System (ADS)

    He, Hongyang; Wang, Yide; Saillard, Joseph

    2008-12-01

    We discuss two fast algorithms to localize multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce the computational cost. We then present a focusing-based method which does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with the well-studied far-field methods. With the estimated bearing, the range estimate of each source is consequently obtained by using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared through Monte Carlo simulations and against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computational cost nor high-order statistics.
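
    For illustration, a minimal far-field MUSIC sketch for a uniform linear array, i.e. the kind of bearing estimation that becomes applicable once focusing has mapped the near-field data to a far-field model; the array geometry, source angles, and noise level are assumptions, not the paper's setup.

```python
# Far-field MUSIC bearing estimation on a uniform linear array (ULA).
import numpy as np

def music_spectrum(R, n_sources, n_sensors, d=0.5, angles=np.linspace(-90, 90, 361)):
    """R: sensor covariance matrix; d: element spacing in wavelengths."""
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : n_sensors - n_sources]        # noise subspace (smallest eigenvalues)
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))
        spectrum.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(spectrum)

# Two uncorrelated narrowband sources at -20 and 35 degrees, 8-element ULA
rng = np.random.default_rng(1)
m, snapshots = 8, 500
doas = np.deg2rad([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(doas)))
S = (rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))) / np.sqrt(2)
N = 0.1 * (rng.normal(size=(m, snapshots)) + 1j * rng.normal(size=(m, snapshots)))
X = A @ S + N
R = X @ X.conj().T / snapshots
angles, p = music_spectrum(R, n_sources=2, n_sensors=m)
print(angles[np.argsort(p)[-2:]])  # the two largest values fall near -20 and 35 degrees
```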

  10. Visual field examination method using virtual reality glasses compared with the Humphrey perimeter.

    PubMed

    Tsapakis, Stylianos; Papaconstantinou, Dimitrios; Diagourtas, Andreas; Droutsas, Konstantinos; Andreanos, Konstantinos; Moschos, Marilita M; Brouzas, Dimitrios

    2017-01-01

    To present a visual field examination method using virtual reality glasses and evaluate the reliability of the method by comparing the results with those of the Humphrey perimeter. Virtual reality glasses, a smartphone with a 6 inch display, and software that implements a fast-threshold 3 dB step staircase algorithm for the central 24° of the visual field (52 points) were used to test 20 eyes of 10 patients, who were tested in a random and consecutive order as they appeared in our glaucoma department. The results were compared with those obtained from the same patients using the Humphrey perimeter. A high correlation coefficient (r = 0.808, P < 0.0001) was found between the virtual reality visual field test and the Humphrey perimeter visual field. Visual field examination results using virtual reality glasses have a high correlation with the Humphrey perimeter, suggesting the method may be suitable for clinical use.
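
    A minimal simulation of a 3 dB step staircase at a single test location; the starting level, the noisy response model, and the stopping rule are illustrative assumptions, not the software described above.

```python
# A simple up-down staircase with 3 dB steps for one visual field location.
import random

def staircase_threshold(true_threshold_db, start_db=35.0, step_db=3.0, max_reversals=4):
    """Simulate a staircase; the stimulus is 'seen' when its intensity (dB)
    exceeds the observer's (noisy) threshold."""
    level, last_seen, reversals = start_db, None, 0
    while reversals < max_reversals:
        seen = level > true_threshold_db + random.gauss(0.0, 1.0)
        if last_seen is not None and seen != last_seen:
            reversals += 1                           # response flipped: one reversal
        level += -step_db if seen else step_db       # seen -> dimmer, unseen -> brighter
        last_seen = seen
    return level                                     # crude threshold estimate

random.seed(0)
print(staircase_threshold(true_threshold_db=30.0))   # ends near 30 dB
```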

  11. Community-based human–elephant conflict mitigation: The value of an evidence-based approach in promoting the uptake of effective methods

    PubMed Central

    Gunaryadi, Donny; Sugiyo

    2017-01-01

    Human–elephant conflict (HEC) is a serious threat to elephants and can cause major economic losses. It is widely accepted that reduction of HEC will often require community-based methods for repelling elephants but there are few tests of such methods. We tested community-based crop-guarding methods with and without novel chili-based elephant deterrents and describe changes in farmers’ willingness to adopt these methods following our demonstration of their relative effectiveness. In three separate field-trials that took place over almost two years (October 2005 –May 2007) in two villages adjacent to Way Kambas National Park (WKNP) in Indonesia, we found that community-based crop-guarding was effective at keeping Asian elephants (Elephas maximus) out of crop fields in 91.2% (52 out of 57), 87.6% (156 out of 178), and 80.0% (16 out of 20) of attempted raids. Once the method had been shown to be effective at demonstration sites, farmers in 16 villages around WKNP voluntarily adopted it during the July 2008 to March 2009 period and were able to repel elephants in 73.9% (150 out of 203) of attempted raids, with seven villages repelling 100% of attempted raids. These 16 villages had all experienced high levels of HEC in the preceding years; e.g. they accounted for >97% of the 742 HEC incidents recorded for the entire park in 2006. Our work shows, therefore, that a simple evidence-based approach can facilitate significant reductions in HEC at the protected area scale. PMID:28510590

  12. Hyperspectral Imaging and Related Field Methods: Building the Science

    NASA Technical Reports Server (NTRS)

    Goetz, Alexander F. H.; Steffen, Konrad; Wessman, Carol

    1999-01-01

    The proposal requested funds for the computing power to bring hyperspectral image processing into undergraduate and graduate remote sensing courses. This upgrade made it possible to handle more students in these oversubscribed courses and to enhance CSES' summer short course entitled "Hyperspectral Imaging and Data Analysis" provided for government, industry, university and military. Funds were also requested to build field measurement capabilities through the purchase of spectroradiometers, canopy radiation sensors and a differential GPS system. These instruments provided systematic and complete sets of field data for the analysis of hyperspectral data with the appropriate radiometric and wavelength calibration as well as atmospheric data needed for application of radiative transfer models. The proposed field equipment made it possible to team-teach a new field methods course, unique in the country, that took advantage of the expertise of the investigators rostered in three different departments, Geology, Geography and Biology.

  13. Evaluation of Methods for In-Situ Calibration of Field-Deployable Microphone Phased Arrays

    NASA Technical Reports Server (NTRS)

    Humphreys, William M.; Lockard, David P.; Khorrami, Mehdi R.; Culliton, William G.; McSwain, Robert G.

    2017-01-01

    Current field-deployable microphone phased arrays for aeroacoustic flight testing require the placement of hundreds of individual sensors over a large area. Depending on the duration of the test campaign, the microphones may be required to stay deployed at the testing site for weeks or even months. This presents a challenge in regards to tracking the response (i.e., sensitivity) of the individual sensors as a function of time in order to evaluate the health of the array. To address this challenge, two different methods for in-situ tracking of microphone responses are described. The first relies on the use of an aerial sound source attached as a payload on a hovering small Unmanned Aerial System (sUAS) vehicle. The second relies on the use of individually excited ground-based sound sources strategically placed throughout the array pattern. Testing of the two methods was performed in microphone array deployments conducted at Fort A.P. Hill in 2015 and at Edwards Air Force Base in 2016. The results indicate that the drift in individual sensor responses can be tracked reasonably well using both methods. Thus, in-situ response tracking methods are useful as a diagnostic tool for monitoring the health of a phased array during long duration deployments.

  14. Quantitative Evaluation of the Total Magnetic Moments of Colloidal Magnetic Nanoparticles: A Kinetics-based Method.

    PubMed

    Liu, Haiyi; Sun, Jianfei; Wang, Haoyao; Wang, Peng; Song, Lina; Li, Yang; Chen, Bo; Zhang, Yu; Gu, Ning

    2015-06-08

    A kinetics-based method is proposed to quantitatively characterize the collective magnetization of colloidal magnetic nanoparticles. The method is based on the relationship between the magnetic force on a colloidal droplet and the movement of the droplet under a gradient magnetic field. Through computational analysis of the kinetic parameters, such as displacement, velocity, and acceleration, the magnetization of colloidal magnetic nanoparticles can be calculated. In our experiments, the values measured by using our method exhibited a better linear correlation with magnetothermal heating than those obtained by using a vibrating sample magnetometer and a magnetic balance. This finding indicates that this method may be more suitable than the commonly used methods for evaluating the collective magnetism of colloidal magnetic nanoparticles under low magnetic fields. Accurate evaluation of the magnetic properties of colloidal nanoparticles is of great importance for the standardization of magnetic nanomaterials and for their practical application in biomedicine.
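
    A minimal sketch of the kinematic reasoning described above; the droplet mass, the field gradient, and the neglect of drag are illustrative assumptions, not the authors' protocol.

```python
# Estimate the collective magnetic moment of a droplet from its tracked motion
# in a gradient field: differentiate displacement to get acceleration, apply
# Newton's second law, and use F = m_moment * dB/dx.
import numpy as np

def magnetic_moment_from_track(t, x, droplet_mass_kg, dBdx_T_per_m):
    """t: time stamps (s); x: droplet positions (m) along the field gradient."""
    v = np.gradient(x, t)                 # velocity
    a = np.gradient(v, t)                 # acceleration
    force = droplet_mass_kg * a           # net force (drag etc. neglected here)
    return force / dBdx_T_per_m           # magnetic moment estimate (A*m^2) per sample

# Hypothetical track: a ~2 mg droplet with constant acceleration in a 10 T/m gradient
t = np.linspace(0.0, 1.0, 51)
x = 0.5 * 2e-3 * t**2                     # constant acceleration of 2e-3 m/s^2
m = magnetic_moment_from_track(t, x, droplet_mass_kg=2e-6, dBdx_T_per_m=10.0)
print(m.mean())                           # ~4e-10 A*m^2
```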

  15. An improved method for the calculation of Near-Field Acoustic Radiation Modes

    NASA Astrophysics Data System (ADS)

    Liu, Zu-Bin; Maury, Cédric

    2016-02-01

    Sensing and controlling Acoustic Radiation Modes (ARMs) in the near-field of vibrating structures is of great interest for broadband noise reduction or enhancement, as ARMs are velocity distributions defined over a vibrating surface that independently and optimally contribute to the acoustic power in the acoustic field. However, present methods only provide far-field ARMs (FFARMs), which are inadequate for the acoustic near-field problem. The Near-Field Acoustic Radiation Modes (NFARMs) are first studied with an improved numerical method, the Pressure-Velocity method, which relies on the eigendecomposition of the acoustic transfers between the vibrating source and a conformal observation surface, including the sound pressure and velocity transfer matrices. The active and reactive parts of the sound power are separated and lead to the active and reactive ARMs. NFARMs are studied for a 2D baffled beam and for a 3D baffled plate, as are the differences between the NFARMs and the classical FFARMs. Comparisons of the NFARMs are analyzed when varying the frequency and the observation distance to the source. It is found that the efficiencies and shapes of the optimal active ARMs are independent of the distance, while those of the reactive ARMs depend distinctly on it.

  16. Chip-based wide field-of-view nanoscopy

    NASA Astrophysics Data System (ADS)

    Diekmann, Robin; Helle, Øystein I.; Øie, Cristina I.; McCourt, Peter; Huser, Thomas R.; Schüttpelz, Mark; Ahluwalia, Balpreet S.

    2017-04-01

    Present optical nanoscopy techniques use a complex microscope for imaging and a simple glass slide to hold the sample. Here, we demonstrate the inverse: the use of a complex, but mass-producible optical chip, which hosts the sample and provides a waveguide for the illumination source, and a standard low-cost microscope to acquire super-resolved images via two different approaches. Waveguides composed of a material with high refractive-index contrast provide a strong evanescent field that is used for single-molecule switching and fluorescence excitation, thus enabling chip-based single-molecule localization microscopy. Additionally, multimode interference patterns induce spatial fluorescence intensity variations that enable fluctuation-based super-resolution imaging. As chip-based nanoscopy separates the illumination and detection light paths, total-internal-reflection fluorescence excitation is possible over a large field of view, with up to 0.5 mm × 0.5 mm being demonstrated. Using multicolour chip-based nanoscopy, we visualize fenestrations in liver sinusoidal endothelial cells.

  17. A two-microphone method for the determination of the mode amplitude distribution in high-frequency ducted broadband sound fields.

    PubMed

    Joseph, P F

    2017-10-01

    This paper describes a measurement technique that allows the modal amplitude distribution to be determined in ducts with mean flow and reflections. The method is based only on measurements of the acoustic pressure two-point coherence at the duct wall. The technique is primarily applicable to broadband sound fields in the high frequency limit and whose mode amplitudes are mutually incoherent. The central assumption underlying the technique is that the relative mode amplitude distribution is independent of frequency. The two-microphone method proposed in this paper is also used to determine the transmitted sound power and far field pressure directivity.

  18. Deghosting based on the transmission matrix method

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong

    2017-12-01

    As seismic exploration and subsequent exploitation advance, marine acquisition systems with towed streamers have become an important means of seismic data acquisition. However, the air-water reflective interface can generate surface-related multiples, including ghosts, which affect the accuracy and performance of subsequent seismic data processing algorithms. We therefore derive a deghosting method from a new perspective, i.e. using the transmission matrix (T-matrix) method instead of inverse scattering series. The T-matrix-based deghosting algorithm includes all scattering effects and converges absolutely. Initially, the effectiveness of the proposed method is demonstrated using synthetic data obtained from a designed layered model, and its noise-resistant property is also illustrated using noisy synthetic data contaminated by random noise. Numerical examples on complicated data from the open SMAART Pluto model and field marine data further demonstrate the validity and flexibility of the proposed method. After deghosting, low-frequency components are recovered reasonably and spurious high-frequency components are attenuated; the recovered low-frequency components will be useful for subsequent full waveform inversion. The proposed deghosting method is currently suitable for two-dimensional towed-streamer cases with accurate constant depth information, and its extension to variable-depth streamers in three-dimensional cases will be studied in the future.

  19. Uncertainty Evaluations of the CRCS In-orbit Field Radiometric Calibration Methods for Thermal Infrared Channels of FENGYUN Meteorological Satellites

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Rong, Z.; Min, M.; Hao, X.; Yang, H.

    2017-12-01

    Meteorological satellites have become an irreplaceable weather and ocean-observing tool in China. These satellites are used to monitor natural disasters and improve the efficiency of many sectors of the Chinese national economy. Space-derived data are indispensable in the fields of meteorology, hydrology, and agriculture, as well as in disaster monitoring in China, a large agricultural country. For this reason, China is making a sustained effort to build and enhance its meteorological observing system and application system. The first Chinese polar-orbiting weather satellite was launched in 1988. Since then China has launched 14 meteorological satellites, 7 of which are sun-synchronous and 7 of which are geostationary; China will continue its two types of meteorological satellite programs. In order to achieve the in-orbit absolute radiometric calibration of the operational meteorological satellites' thermal infrared channels, the China radiometric calibration sites (CRCS) established a set of in-orbit field absolute radiometric calibration methods (FCM) for thermal infrared (TIR) channels, and the uncertainty of this method was evaluated and analyzed based on TERRA/AQUA MODIS observations. Comparisons between the MODIS at-pupil brightness temperatures (BTs) and the top-of-atmosphere BTs simulated with a radiative transfer model (RTM) based on field measurements showed that the accuracy of the current in-orbit field absolute radiometric calibration methods is better than 1.00 K (@300 K, K=1) in the thermal infrared channels. Therefore, the current CRCS field calibration method for TIR channels applied to Chinese meteorological satellites provides favorable calibration accuracy: better than 0.75 K (@300 K, K=1) for the 10.5-11.5 µm channel and better than 0.85 K (@300 K, K=1) for the 11.5-12.5 µm channel.

  20. Neuronal current detection with low-field magnetic resonance: simulations and methods.

    PubMed

    Cassará, Antonino Mario; Maraviglia, Bruno; Hartwig, Stefan; Trahms, Lutz; Burghoff, Martin

    2009-10-01

    The noninvasive detection of neuronal currents in active brain networks [or direct neuronal imaging (DNI)] by means of nuclear magnetic resonance (NMR) remains a scientific challenge. Many different attempts using NMR scanners with magnetic fields >1 T (high-field NMR) have been made in past years to detect phase shifts or magnitude changes in the NMR signals. However, the many physiological (e.g., the concurrent BOLD effect, the weakness of the neuronally induced magnetic field) and technical (e.g., limited spatial resolution) obstacles to observing the weak signals have led to some contradictory results. In contrast, only a few attempts have been made using low-field NMR techniques. As such, this paper was aimed at reviewing two recent developments on this front. The detection schemes discussed in this manuscript, the resonant mechanism (RM) and the DC method, are specific to NMR instrumentation with main fields below the Earth's magnetic field (50 microT), and in some cases below a few microtesla (ULF-NMR). However, the experimental validation of both techniques, with differing sensitivity to the various neuronal activities at specific temporal and spatial resolutions, is still in progress and requires carefully designed magnetic field sensor technology. Additional care must be taken to ensure stringent magnetic shielding from ambient magnetic field fluctuations. In this review, we discuss the characteristics and prospects of these two methods in detecting neuronal currents, along with the technical requirements on the instrumentation.

  1. mHealth Series: mHealth project in Zhao County, rural China – Description of objectives, field site and methods

    PubMed Central

    van Velthoven, Michelle Helena; Li, Ye; Wang, Wei; Du, Xiaozhen; Wu, Qiong; Chen, Li; Majeed, Azeem; Rudan, Igor; Zhang, Yanfeng; Car, Josip

    2013-01-01

    Background We set up a collaboration between researchers in China and the UK that aimed to explore the use of mHealth in China. This is the first paper in a series on a large mHealth project that is part of this collaboration. This paper includes the aims and objectives of the mHealth project, our field site, and the detailed methods of two studies. Field site The field site for this mHealth project was Zhao County, which lies 280 km south of Beijing in Hebei Province, China. Methods We describe the methodology of two studies: (i) a mixed methods study exploring factors influencing sample size calculations for mHealth-based health surveys and (ii) a cross-over study determining the validity of an mHealth text messaging data collection tool. The first study used mixed methods, both quantitative and qualitative, including: (i) two surveys with caregivers of young children, (ii) interviews with caregivers, village doctors and participants of the cross-over study, and (iii) researchers' views. We combined data from caregivers, village doctors and researchers to provide an in-depth understanding of factors influencing sample size calculations for mHealth-based health surveys. The second study used a randomised cross-over design to compare the traditional face-to-face survey method to the new text messaging survey method. We assessed data equivalence (intrarater agreement), the amount of information in responses, reasons for giving different responses, the response rate, characteristics of non-responders, and the error rate. Conclusions This paper described the objectives, field site and methods of a large mHealth project that is part of a collaboration between researchers in China and the UK. The mixed methods study evaluating factors that influence sample size calculations could help future studies estimate reliable sample sizes. The cross-over study comparing face-to-face and text message survey data collection

  2. Concept mapping as a method to enhance evidence-based public health.

    PubMed

    van Bon-Martens, Marja J H; van de Goor, Ien A M; van Oers, Hans A M

    2017-02-01

    In this paper we explore the suitability of concept mapping as a method for integrating knowledge from science, practice, and policy. In earlier research we described and analysed five cases of concept mapping procedures in the Netherlands, serving different purposes and fields in public health. In the current paper, seven new concept mapping studies of co-produced work are added to extend this analysis. For each of these twelve studies we analysed: (1) how the method was able to integrate knowledge from practice with scientific knowledge by facilitating dialogue and collaboration between different stakeholders in the field of public health, such as academic researchers, practitioners, policy-makers and the public; (2) how the method was able to bring theory development a step further (scientific relevance); and (3) how the method was able to act as a sound basis for practical decision-making (practical relevance). Based on the answers to these research questions, all but one study was considered useful for building more evidence-based public health, even though the extent to which they underpinned actual decision-making varied. The chance of actually being implemented in practice seems strongly related to the extent to which the responsible decision-makers are involved in the way the concept map is prepared and executed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    PubMed

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method of two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. According to the single instruction multiple threads (SIMT) execution model of CUDA, various optimization measures such as coalesced access and shared memory are adopted in this study. To eliminate edge distortion in the interpolated images, a natural suture algorithm is utilized in the overlapping regions, while a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes is adopted. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method is of considerable reference value for image interpolation applications.
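
    As a rough illustration of the interpolation itself (not of the abstract's CUDA kernels, coalesced-access or shared-memory optimizations), the sketch below fits Gaussian RBF weights on the CPU with NumPy and evaluates the interpolant on a finer grid; the kernel width and sample values are placeholders.

```python
import numpy as np

def grbf_interpolate(known_xy, known_vals, query_xy, sigma=1.5):
    """Interpolate scattered 2D samples with a Gaussian radial basis function.

    Solves K w = v for the RBF weights, then evaluates the interpolant
    at the query points; sigma controls the kernel width.
    """
    d2 = ((known_xy[:, None, :] - known_xy[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    w = np.linalg.solve(K, known_vals)                # RBF weights

    d2q = ((query_xy[:, None, :] - known_xy[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2q / (2.0 * sigma ** 2)) @ w

# Toy example: interpolate a 2x2 pixel neighbourhood onto a finer 5x5 grid
pix = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
vals = np.array([10.0, 40.0, 20.0, 30.0])
fine = np.stack(np.meshgrid(np.linspace(0, 1, 5),
                            np.linspace(0, 1, 5)), -1).reshape(-1, 2)
print(grbf_interpolate(pix, vals, fine).reshape(5, 5))
```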

  4. Temporal gravity field modeling based on least square collocation with short-arc approach

    NASA Astrophysics Data System (ADS)

    ran, jiangjun; Zhong, Min; Xu, Houze; Liu, Chengshu; Tangdamrongsub, Natthachet

    2014-05-01

    After the launch of the Gravity Recovery And Climate Experiment (GRACE) in 2002, several research centers have attempted to produce the finest gravity model based on different approaches. In this study, we present an alternative approach to derive the Earth's gravity field, and two main objectives are discussed. Firstly, we seek the optimal method to estimate the accelerometer parameters, and secondly, we intend to recover the monthly gravity model based on the least squares collocation method. This method has received less attention than the least squares adjustment method because of its massive computational resource requirements. The positions of the twin satellites are treated as pseudo-observations and unknown parameters at the same time. The variance-covariance matrices of the pseudo-observations and the unknown parameters are valuable information for improving the accuracy of the estimated gravity solutions. Our analyses showed that introducing a drift parameter as an additional accelerometer parameter, compared to using only a bias parameter, leads to a significant improvement of our estimated monthly gravity field. The gravity errors outside the continents are significantly reduced with the selected set of accelerometer parameters. We introduce the improved gravity model, namely the second version of the Institute of Geodesy and Geophysics, Chinese Academy of Sciences model (IGG-CAS 02). The accuracy of the IGG-CAS 02 model is comparable to the gravity solutions computed by the Geoforschungszentrum (GFZ), the Center for Space Research (CSR) and the NASA Jet Propulsion Laboratory (JPL). In terms of equivalent water height, the correlation coefficients over the study regions (the Yangtze River valley, the Sahara desert, and the Amazon) among the four gravity models are greater than 0.80.

  5. Numerical inverse method predicting acoustic spinning modes radiated by a ducted fan from free-field test data.

    PubMed

    Lewy, Serge

    2008-07-01

    Spinning modes generated by a ducted turbofan at a given frequency determine the acoustic free-field directivity. An inverse method starting from measured directivity patterns is attractive because it provides information on the noise sources without requiring tedious experimental spinning-mode analyses. Following a previous article, the equations are based on analytical modal splitting inside a cylindrical duct and on a Rayleigh or a Kirchhoff integral over the duct exit cross section to obtain the far-field directivity. The equations are equal in number to the free-field measurement locations, and the unknowns are the propagating mode amplitudes (there are generally more unknowns than equations). A MATLAB procedure has been implemented using either the pseudoinverse function or the backslash operator. A constraint comes from the fact that the squared modal amplitudes must be positive, which involves an iterative least squares fit. Numerical simulations are discussed along with several examples based on tests performed by Rolls-Royce in the framework of a European project. The computation is very fast and fits the measured directivities well, but the solution depends on the method and is not unique. This means that the initial set of modes should be chosen according to any known physical property of the acoustic sources.
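
    One common way to impose the positivity constraint on the squared modal amplitudes is non-negative least squares; the hedged sketch below uses SciPy's NNLS on an invented transfer matrix, which is only a stand-in for the abstract's MATLAB pseudoinverse/backslash procedure and its iterative fit, not a reproduction of it.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical setup: each column of A is the far-field directivity produced by
# one duct mode with unit squared amplitude; b holds the measured directivity.
rng = np.random.default_rng(0)
n_angles, n_modes = 30, 8                 # here, more equations than unknowns
A = np.abs(rng.normal(size=(n_angles, n_modes)))
x_true = np.array([2.0, 0.0, 1.5, 0.0, 0.7, 0.0, 0.0, 0.3])
b = A @ x_true + 0.01 * rng.normal(size=n_angles)

# A plain pseudoinverse solution may return unphysical negative amplitudes ...
x_pinv = np.linalg.pinv(A) @ b

# ... whereas non-negative least squares enforces the constraint x >= 0
x_nnls, residual = nnls(A, b)
print("pseudoinverse :", np.round(x_pinv, 2))
print("non-negative  :", np.round(x_nnls, 2))
```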

  6. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    PubMed

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  7. Comparison of PCR-Based Diagnosis with Centrifuged-Based Enrichment Method for Detection of Borrelia persica in Animal Blood Samples.

    PubMed

    Naddaf, S R; Kishdehi, M; Siavashi, Mr

    2011-01-01

    The mainstay of diagnosis of relapsing fever (RF) is demonstration of the spirochetes in Giemsa-stained thick blood smears, but during non-fever periods the bacteria are very scanty and rarely detected in blood smears by microscopy. This study aimed to evaluate the sensitivity of different methods developed for detection of low-grade spirochetemia. Animal blood samples with low degrees of spirochetemia were tested with two PCRs and a nested PCR targeting the flaB, GlpQ, and rrs genes. Also, a centrifuged-based enrichment method and Giemsa staining were performed on blood samples with various degrees of spirochetemia. The flaB-PCR and nested rrs-PCR turned positive at various degrees of spirochetemia, including in blood samples that were negative by dark-field microscopy. The GlpQ-PCR was positive as long as at least one spirochete was seen in 5-10 microscopic fields. The sensitivity of GlpQ-PCR increased when DNA from the Buffy Coat Layer (BCL) was used as template. The centrifuged-based enrichment method turned positive at concentrations as low as 50 bacteria/ml of blood, while Giemsa thick staining detected bacteria at concentrations ≥ 25000 bacteria/ml. The centrifuged-based enrichment method thus appeared as much as 500-fold more sensitive than thick smears, which makes it even superior to some PCR assays. Due to its simplicity and minimal laboratory requirements, this method can be considered a valuable tool for diagnosis of RF in rural health centers.

  8. Method and apparatus for steady-state magnetic measurement of poloidal magnetic field near a tokamak plasma

    DOEpatents

    Woolley, R.D.

    1998-09-08

    A method and apparatus are disclosed for the steady-state measurement of poloidal magnetic field near a tokamak plasma, where the tokamak is configured with respect to a cylindrical coordinate system having z, phi (toroidal), and r axes. The method is based on combining the two magnetic field principles of induction and torque. The apparatus includes a rotor assembly having a pair of inductive magnetic field pickup coils which are concentrically mounted, orthogonally oriented in the r and z directions, and coupled to remotely located electronics which include electronic integrators for determining magnetic field changes. The rotor assembly includes an axle oriented in the toroidal direction, with the axle mounted on pivot support brackets which in turn are mounted on a baseplate. First and second springs are located between the baseplate and the rotor assembly restricting rotation of the rotor assembly about its axle, the second spring providing a constant tensile preload in the first spring. A strain gauge is mounted on the first spring, and electronic means to continually monitor strain gauge resistance variations is provided. Electronic means for providing a known current pulse waveform to be periodically injected into each coil to create a time-varying torque on the rotor assembly in the toroidal direction causes mechanical strain variations proportional to the torque in the mounting means and springs so that strain gauge measurement of the variation provides periodic magnetic field measurements independent of the magnetic field measured by the electronic integrators. 6 figs.

  9. Method and apparatus for steady-state magnetic measurement of poloidal magnetic field near a tokamak plasma

    DOEpatents

    Woolley, Robert D.

    1998-01-01

    A method and apparatus for the steady-state measurement of poloidal magnetic field near a tokamak plasma, where the tokamak is configured with respect to a cylindrical coordinate system having z, phi (toroidal), and r axes. The method is based on combining the two magnetic field principles of induction and torque. The apparatus includes a rotor assembly having a pair of inductive magnetic field pickup coils which are concentrically mounted, orthogonally oriented in the r and z directions, and coupled to remotely located electronics which include electronic integrators for determining magnetic field changes. The rotor assembly includes an axle oriented in the toroidal direction, with the axle mounted on pivot support brackets which in turn are mounted on a baseplate. First and second springs are located between the baseplate and the rotor assembly restricting rotation of the rotor assembly about its axle, the second spring providing a constant tensile preload in the first spring. A strain gauge is mounted on the first spring, and electronic means to continually monitor strain gauge resistance variations is provided. Electronic means for providing a known current pulse waveform to be periodically injected into each coil to create a time-varying torque on the rotor assembly in the toroidal direction causes mechanical strain variations proportional to the torque in the mounting means and springs so that strain gauge measurement of the variation provides periodic magnetic field measurements independent of the magnetic field measured by the electronic integrators.

  10. A rapid and non-invasive method for measuring the peak positive pressure of HIFU fields by a laser beam.

    PubMed

    Wang, Hua; Zeng, Deping; Chen, Ziguang; Yang, Zengtao

    2017-04-12

    Based on the acousto-optic interaction, we propose a laser deflection method for rapidly, non-invasively and quantitatively measuring the peak positive pressure of HIFU fields. In the characterization of HIFU fields, the effect of nonlinear propagation is considered. The relation between the laser deflection length and the peak positive pressure is derived, and the laser deflection method is then assessed by comparing it with the hydrophone method. The experimental results show that the peak positive pressure measured by the laser deflection method is slightly higher than that obtained by the hydrophone, and the two are in reasonable agreement. Considering that the peak pressure measured by hydrophones is always underestimated, the laser deflection method is assumed to be more accurate than the hydrophone method, owing to the absence of the errors of hydrophone spatial-averaging measurements and of the influence of waveform distortion on hydrophone corrections. Moreover, since the Lorentz formula remains applicable in high-pressure environments, the laser deflection method exhibits great potential for measuring HIFU fields at high pressure amplitudes. Additionally, the laser deflection method provides a rapid way to measure the peak positive pressure, without the scan time required by hydrophones.

  11. Dark-field microscopic image stitching method for surface defects evaluation of large fine optics.

    PubMed

    Liu, Dong; Wang, Shitong; Cao, Pin; Li, Lu; Cheng, Zhongtao; Gao, Xin; Yang, Yongying

    2013-03-11

    One of the challenges in surface defects evaluation of large fine optics is to detect defects of microns on surfaces of tens or hundreds of millimeters. Sub-aperture scanning and stitching is considered to be a practical and efficient method. However, because there are usually few defects on large-aperture fine optics, many sub-aperture images contain no defects or only one run-through line feature, and traditional stitching methods encounter mismatch problems. In this paper, a feature-based multi-cycle image stitching algorithm is proposed to solve the problem. The overlapping areas of sub-apertures are categorized based on the features they contain. Different types of overlapping areas are then stitched in different cycles with different methods. The stitching trace is changed to follow the one determined by the features. The whole stitching procedure is a region-growing-like process: sub-aperture blocks grow bigger after each cycle and finally the full-aperture image is obtained. A comparison experiment shows that the proposed method is well suited to stitching sub-apertures whose overlapping areas contain very little feature information and stitches the dark-field microscopic sub-aperture images very well.

  12. A comparative study of spin coated and floating film transfer method coated poly (3-hexylthiophene)/poly (3-hexylthiophene)-nanofibers based field effect transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiwari, Shashi; Balasubramanian, S. K.; Takashima, Wataru

    2014-09-07

    A comparative study on the electrical performance, optical properties, and surface morphology of poly(3-hexylthiophene) (P3HT) and P3HT-nanofiber based "normally on" type p-channel field effect transistors (FETs), fabricated by two different coating techniques, is reported here. Nanofibers are prepared in the laboratory by self-assembly of P3HT molecules into nanofibers in an appropriate solvent. P3HT (0.3 wt. %) and P3HT-nanofibers (∼0.25 wt. %) are used as semiconductor transport materials for deposition over the FET channel through spin coating as well as through our recently developed floating film transfer method (FTM). FETs fabricated using FTM show superior performance compared to spin coated devices; however, the mobility of FTM-film based FETs is comparable to the mobility of the spin coated ones. The devices based on P3HT-nanofibers (using both techniques) show much better performance than the P3HT FETs. The best performance among all the fabricated organic field effect transistors is observed for the FTM coated P3HT-nanofiber FETs. This improved performance of the nanofiber FETs is due to the ordering of the fibers and also due to the fact that the fibers offer excellent charge transport because of point-to-point transmission. The optical properties and structural morphologies (P3HT and P3HT-nanofibers) are studied using a UV-visible absorption spectrophotometer and atomic force microscopy, respectively. The coating techniques and the effect of fiber formation for organic conductors provide information for the fabrication of organic devices with improved performance.

  13. Multi-scale Methods in Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Polyzou, W. N.; Michlin, Tracie; Bulut, Fatih

    2018-05-01

    Daubechies wavelets are used to make an exact multi-scale decomposition of quantum fields. For reactions that involve a finite energy and take place in a finite volume, the number of relevant quantum mechanical degrees of freedom is finite. The wavelet decomposition has natural resolution and volume truncations that can be used to isolate the relevant degrees of freedom. The application of flow equation methods to construct effective theories that decouple coarse and fine scale degrees of freedom is examined.

  14. FIELD SCREENING METHODS FOR HAZARDOUS WASTES AND TOXIC CHEMICALS

    EPA Science Inventory

    The purpose of this document is to present the technical papers that were presented at the Second International Symposium on Field Screening Methods for Hazardous Wastes and Toxic Chemicals. Sixty platform presentations were made and included in one of ten sessions: chemical sensor...

  15. An Optimal Deconvolution Method for Reconstructing Pneumatically Distorted Near-Field Sonic Boom Pressure Measurements

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Haering, Edward A., Jr.; Ehernberger, L. J.

    1996-01-01

    In-flight measurements of the SR-71 near-field sonic boom were obtained by an F-16XL airplane at flightpath separation distances from 40 to 740 ft. Twenty-two signatures were obtained from Mach 1.60 to Mach 1.84 and altitudes from 47,600 to 49,150 ft. The shock wave signatures were measured by the total and static sensors on the F-16XL noseboom. These near-field signature measurements were distorted by pneumatic attenuation in the pitot-static sensors; these effects were accounted for using optimal deconvolution. Measurement system magnitude and phase characteristics were determined from ground-based step-response tests and extrapolated to flight conditions using analytical models. Deconvolution was implemented using Fourier transform methods. Comparisons of the shock wave signatures reconstructed from the total and static pressure data are presented. The good agreement achieved gives confidence in the quality of the reconstruction analysis. Although originally developed to reconstruct the sonic boom signatures from SR-71 sonic boom flight tests, the methods presented here generally apply to other types of highly attenuated or distorted pneumatic measurements.
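
    The sketch below illustrates frequency-domain deconvolution with a small stabilising constant, in the spirit of the Fourier-transform reconstruction described above; the first-order pneumatic-lag sensor model, its time constant and the sample rate are assumptions chosen for illustration, not the flight instrumentation's measured response.

```python
import numpy as np

fs = 2000.0                                   # assumed sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)

# Idealised pressure signature standing in for a near-field sonic boom
true = np.where((t > 0.3) & (t < 0.5), 1.0 - 10.0 * (t - 0.3), 0.0)

# Assumed measurement-system response: first-order pneumatic lag with time constant tau
tau = 0.01
f = np.fft.rfftfreq(t.size, 1.0 / fs)
H = 1.0 / (1.0 + 2j * np.pi * f * tau)
measured = np.fft.irfft(np.fft.rfft(true) * H, n=t.size)

# Regularised (Wiener-like) inverse filter applied in the frequency domain
eps = 1e-4
H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)
recovered = np.fft.irfft(np.fft.rfft(measured) * H_inv, n=t.size)

print("max reconstruction error:", np.max(np.abs(recovered - true)))
```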

  16. Local Field Response Method Phenomenologically Introducing Spin Correlations

    NASA Astrophysics Data System (ADS)

    Tomaru, Tatsuya

    2018-03-01

    The local field response (LFR) method is a way of searching for the ground state in a similar manner to quantum annealing. However, the LFR method operates on a classical machine, and quantum effects are introduced through a priori information and through phenomenological means reflecting the states during the computations. The LFR method has been treated with a one-body approximation, and therefore, the effect of entanglement has not been sufficiently taken into account. In this report, spin correlations are phenomenologically introduced as one of the effects of entanglement, by which multiple tunneling at anticrossing points is taken into account. As a result, the accuracy of solutions for a 128-bit system increases by 31% compared with that without spin correlations.

  17. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    PubMed

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate the characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities), one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated by artifacts that can vary from scan line to scan line, as well as by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.
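
    A minimal sketch of the levelling idea follows: a low-order polynomial trend is fitted with iteratively reweighted least squares (Tukey biweights) so that features and outliers are downweighted, and the trend is then subtracted. This is a simplified stand-in for the robust local regression used in the paper, and all sizes and constants are illustrative.

```python
import numpy as np

def robust_level(img, order=1, n_iter=10):
    """Subtract a robustly fitted polynomial trend (default: a plane) from an image."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Design matrix with monomials x^i * y^j of total degree <= order
    cols = [(x ** i * y ** j).ravel()
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(cols).astype(float)
    z = img.ravel().astype(float)
    w = np.ones_like(z)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], z * sw, rcond=None)
        r = z - A @ coef
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12   # robust scale (MAD)
        u = np.clip(r / (4.685 * s), -1.0, 1.0)
        w = (1.0 - u ** 2) ** 2            # Tukey biweight: outliers get weight ~0
    return img - (A @ coef).reshape(ny, nx)

# Example: tilted background plus a bright feature and one bad scan line
img = 0.02 * np.add.outer(np.arange(64.0), np.arange(64.0))
img[30:34, 30:34] += 5.0
img[10, :] += 3.0
print("residual std after levelling:", round(float(robust_level(img).std()), 3))
```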

  18. A machine learning approach to the potential-field method for implicit modeling of geological structures

    NASA Astrophysics Data System (ADS)

    Gonçalves, Ítalo Gomes; Kumaira, Sissa; Guadagnin, Felipe

    2017-06-01

    Implicit modeling has experienced a rise in popularity over the last decade due to its advantages in terms of speed and reproducibility in comparison with manual digitization of geological structures. The potential-field method consists in interpolating a scalar function that indicates to which side of a geological boundary a given point belongs, based on cokriging of point data and structural orientations. This work proposes a vector potential-field solution from a machine learning perspective, recasting the problem as multi-class classification, which alleviates some of the original method's assumptions. The potentials related to each geological class are interpreted in a compositional data framework. Variogram modeling is avoided through the use of maximum likelihood to train the model, and an uncertainty measure is introduced. The methodology was applied to the modeling of a sample dataset provided with the software Move™. The calculations were implemented in the R language and 3D visualizations were prepared with the rgl package.

  19. Removing systematic errors in interionic potentials of mean force computed in molecular simulations using reaction-field-based electrostatics

    PubMed Central

    Baumketner, Andrij

    2009-01-01

    The performance of reaction-field methods to treat electrostatic interactions is tested in simulations of ions solvated in water. The potentials of mean force between a sodium chloride ion pair and between the side chains of lysine and aspartate are computed using umbrella sampling and molecular dynamics simulations. It is found that, in comparison with lattice sum calculations, the charge-group-based approaches to reaction-field treatments produce a large error in the association energy of the ions that exhibits a strong systematic dependence on the size of the simulation box. The atom-based implementation of the reaction field is seen to (i) improve the overall quality of the potential of mean force and (ii) remove the dependence on the size of the simulation box. It is suggested that the atom-based truncation be used in reaction-field simulations of mixed media. PMID:19292522

  20. Matched Field Processing Based on Least Squares with a Small Aperture Hydrophone Array.

    PubMed

    Wang, Qi; Wang, Yingmin; Zhu, Guolei

    2016-12-30

    The receiver hydrophone array is the signal front-end and plays an important role in matched field processing; it usually covers the whole water column from the sea surface to the bottom. Such a large aperture array is very difficult to realize. To solve this problem, an approach called matched field processing based on least squares with a small aperture hydrophone array is proposed, which first decomposes the received acoustic fields into a depth-function matrix and the amplitudes of the normal modes. All the mode amplitudes are then estimated using least squares in the minimum-norm sense, and the estimated amplitudes are used to recalculate the received acoustic fields of the small aperture array, so that the recalculated fields contain more environmental information. Finally, extensive numerical experiments with three small aperture arrays are processed in a classical shallow-water environment, and the performance of matched field passive localization is evaluated. The results show that the proposed method makes the recalculated fields contain more acoustic information about the source and improves the performance of matched field passive localization with a small aperture array, so the proposed algorithm is proved to be effective.

  1. Matched Field Processing Based on Least Squares with a Small Aperture Hydrophone Array

    PubMed Central

    Wang, Qi; Wang, Yingmin; Zhu, Guolei

    2016-01-01

    The receiver hydrophone array is the signal front-end and plays an important role in matched field processing; it usually covers the whole water column from the sea surface to the bottom. Such a large aperture array is very difficult to realize. To solve this problem, an approach called matched field processing based on least squares with a small aperture hydrophone array is proposed, which first decomposes the received acoustic fields into a depth-function matrix and the amplitudes of the normal modes. All the mode amplitudes are then estimated using least squares in the minimum-norm sense, and the estimated amplitudes are used to recalculate the received acoustic fields of the small aperture array, so that the recalculated fields contain more environmental information. Finally, extensive numerical experiments with three small aperture arrays are processed in a classical shallow-water environment, and the performance of matched field passive localization is evaluated. The results show that the proposed method makes the recalculated fields contain more acoustic information about the source and improves the performance of matched field passive localization with a small aperture array, so the proposed algorithm is proved to be effective. PMID:28042828
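
    The amplitude-estimation step described above can be illustrated with a short sketch: given a depth-function (mode-shape) matrix sampled only at the small-aperture hydrophone depths, the Moore-Penrose pseudoinverse yields the minimum-norm least-squares mode amplitudes, which can then be used to resynthesize the field over the whole water column. The sinusoidal mode shapes, depths and amplitudes below are toy values, not the output of a real normal-mode model.

```python
import numpy as np

# Toy "normal modes": sinusoidal depth functions in a 100 m deep waveguide
depth_grid = np.linspace(0.0, 100.0, 201)
n_modes = 6
Phi_full = np.stack([np.sin((m + 1) * np.pi * depth_grid / 100.0)
                     for m in range(n_modes)], axis=1)

# Small-aperture array: only four hydrophones between 40 m and 70 m
rx_depths = np.array([40.0, 50.0, 60.0, 70.0])
Phi_array = np.stack([np.sin((m + 1) * np.pi * rx_depths / 100.0)
                      for m in range(n_modes)], axis=1)

rng = np.random.default_rng(1)
a_true = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
p_meas = Phi_array @ a_true                     # field sampled by the array

# Minimum-norm least-squares estimate of the mode amplitudes (underdetermined system)
a_hat = np.linalg.pinv(Phi_array) @ p_meas

# Recalculated field over the whole water column, as used for matched field processing
p_recalc = Phi_full @ a_hat
print("estimated mode amplitudes:", np.round(a_hat, 2))
```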

  2. The force analysis for superparamagnetic nanoparticles-based gene delivery in an oscillating magnetic field

    NASA Astrophysics Data System (ADS)

    Sun, Jiajia; Shi, Zongqian; Jia, Shenli; Zhang, Pengbo

    2017-04-01

    Due to their peculiar magnetic properties and their ability to function in cell-level biological interactions, superparamagnetic nanoparticles (SMNPs) have become attractive carriers for gene delivery. Superparamagnetic nanoparticles with surface-bound gene vectors can be attracted to the surface of cells by the Kelvin force provided by an external magnetic field. In this article, the influence of an oscillating magnetic field on the characteristics of magnetofection is studied in terms of the magnetophoretic velocity. The magnetic field of a cylindrical permanent magnet is calculated by the equivalent current source (ECS) method, and the Kelvin force is derived using the effective moment method. The results show that a static magnetic field accelerates the sedimentation of the particles and drives the particles inward towards the axis of the magnet. Based on the investigation of the magnetophoretic velocity of the particle under a horizontally oscillating magnetic field, an oscillating velocity within the amplitude of the magnet oscillation is observed. Furthermore, simulation results indicate that the oscillation amplitude plays an important role in regulating the active region, where the particles may present oscillating motion. The analysis of the magnetophoretic velocity gives insight into the physical mechanism of magnetofection. It is also helpful for the optimal design of the magnetofection system.
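
    As a back-of-envelope illustration only (not the paper's ECS field calculation or effective-moment derivation), the sketch below balances a point-dipole Kelvin force against Stokes drag to estimate a terminal magnetophoretic velocity; every particle and field value is an assumed placeholder.

```python
import numpy as np

# Illustrative values only (not taken from the paper)
r = 50e-9                          # particle radius, m
V = 4.0 / 3.0 * np.pi * r ** 3     # particle volume, m^3
M_s = 3.0e5                        # particle magnetization, A/m (assumed saturated)
grad_B = 20.0                      # field-gradient magnitude, T/m
eta = 1.0e-3                       # viscosity of water, Pa*s

# Point-dipole Kelvin force: F = m * dB/dz with moment m = M_s * V
m = M_s * V
F = m * grad_B

# Terminal magnetophoretic velocity from the Stokes-drag balance F = 6*pi*eta*r*v
v = F / (6.0 * np.pi * eta * r)
print(f"Kelvin force = {F:.2e} N, terminal velocity = {v * 1e6:.2f} um/s")
```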

  3. Comparison of SVM RBF-NN and DT for crop and weed identification based on spectral measurement over corn fields

    USDA-ARS?s Scientific Manuscript database

    It is important to find an appropriate pattern-recognition method for in-field plant identification based on spectral measurement in order to classify the crop and weeds accurately. In this study, the method of Support Vector Machine (SVM) was evaluated and compared with two other methods, Decision ...

  4. A novel method of temperature compensation for piezoresistive microcantilever-based sensors.

    PubMed

    Han, Jianqiang; Wang, Xiaofei; Yan, Tianhong; Li, Yan; Song, Meixuan

    2012-03-01

    Microcantilevers with integrated piezoresistors have been applied to in situ surface stress measurement in the field of biochemical sensors. It is well known that piezoresistive cantilever-based sensors are sensitive to ambient temperature changes due to the highly temperature-dependent piezoresistive effect and the mismatch in thermal expansion of the composite materials. This paper proposes a novel method of temperature drift compensation for microcantilever-based sensors with a piezoresistive full Wheatstone bridge integrated at the clamped ends: the amplified output voltage of a reference cantilever is subtracted from the output voltage of the sensing cantilever through a simple temperature compensating circuit. Experiments show that the temperature drift of microcantilever sensors can be significantly reduced by this method.

  5. Magnetic irreversibility: An important amendment in the zero-field-cooling and field-cooling method

    NASA Astrophysics Data System (ADS)

    Teixeira Dias, Fábio; das Neves Vieira, Valdemar; Esperança Nunes, Sabrina; Pureur, Paulo; Schaf, Jacob; Fernanda Farinela da Silva, Graziele; de Paiva Gouvêa, Cristol; Wolff-Fabris, Frederik; Kampert, Erik; Obradors, Xavier; Puig, Teresa; Roa Rovira, Joan Josep

    2016-02-01

    The present work reports on experimental procedures to correct significant deviations of magnetization data, caused by magnetic relaxation, due to small field cycling by sample transport in the inhomogeneous applied magnetic field of commercial magnetometers. The extensively used method of measuring the magnetic irreversibility by first cooling the sample in zero field, switching on a constant applied magnetic field and measuring the magnetization M(T) while slowly warming the sample, and subsequently measuring M(T) while slowly cooling it back in the same field, is very sensitive even to small displacements of the magnetization curve. In our melt-processed YBaCuO superconducting sample we observed displacements of the irreversibility limit of up to 7 K in high fields. Such displacements are detected only on confronting the magnetic irreversibility limit with other measurements, such as zero resistance, in which the sample remains fixed and so is not affected by such relaxation. We measured the magnetic irreversibility, Tirr(H), using a vibrating sample magnetometer (VSM) from Quantum Design. The zero resistance data, Tc0(H), were obtained using a PPMS from Quantum Design. On confronting our irreversibility lines with those of zero resistance, we observed that the Tc0(H) data fell several kelvin above the Tirr(H) data, which obviously contradicts the well-known properties of superconductivity. In order to obtain consistent Tirr(H) data in the H-T plane, it was necessary to perform many additional measurements as a function of the amplitude of the sample transport and to extrapolate the Tirr(H) data for each applied field to zero amplitude.

  6. Introduction to Vector Field Visualization

    NASA Technical Reports Server (NTRS)

    Kao, David; Shen, Han-Wei

    2010-01-01

    Vector field visualization techniques are essential to help us understand the complex dynamics of flow fields. These can be found in a wide range of applications such as the study of flows around an aircraft, the blood flow in our heart chambers, ocean circulation models, and severe weather predictions. The vector fields from these various applications can be visually depicted using a number of techniques such as particle traces and advecting textures. In this tutorial, we present several fundamental algorithms in flow visualization including particle integration, particle tracking in time-dependent flows, and seeding strategies. For flows near surfaces, a wide variety of synthetic texture-based algorithms have been developed to depict near-body flow features. The most common approach is based on the Line Integral Convolution (LIC) algorithm. There also exist extensions of LIC to support more flexible texture generation for 3D flow data. This tutorial reviews these algorithms. Tensor fields are found in several real-world applications and also require the aid of visualization to help users understand their data sets. Examples where one can find tensor fields include mechanics, to see how materials respond to external forces; civil engineering and geomechanics of roads and bridges; and the study of neural pathways via diffusion tensor imaging. This tutorial will provide an overview of the different tensor field visualization techniques, discuss basic tensor decompositions, and go into detail on glyph-based, deformation-based, and streamline-based methods. Practical examples will be used when presenting the methods, and applications from some case studies will be used as part of the motivation.
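
    A minimal sketch of the particle-integration step mentioned in the tutorial is given below: a streamline is traced through a steady 2D field with classical fourth-order Runge-Kutta. The analytic rotational field is only a stand-in for a real flow dataset.

```python
import numpy as np

def velocity(p):
    """Steady 2D test field: counter-clockwise rotation about the origin."""
    x, y = p
    return np.array([-y, x])

def trace_streamline(seed, h=0.05, n_steps=200):
    """Integrate a particle trace with classical fourth-order Runge-Kutta."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = pts[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        pts.append(p + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)

line = trace_streamline([1.0, 0.0])
# For this rotational field the trace should stay on the unit circle
print("max radius drift:", np.abs(np.linalg.norm(line, axis=1) - 1.0).max())
```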

  7. Dim target detection method based on salient graph fusion

    NASA Astrophysics Data System (ADS)

    Hu, Ruo-lan; Shen, Yi-yan; Jiang, Jun

    2018-02-01

    Dim target detection is a key problem in the field of digital image processing. With the development of multi-spectral imaging sensors, it has become a trend to improve dim target detection performance by fusing information from different spectral images. In this paper, a dim target detection method based on salient graph fusion is proposed. In the method, multi-direction Gabor filters and multi-scale contrast filters are combined to construct a salient graph from the digital image. A maximum-salience fusion strategy is then designed to fuse the salient graphs from the different spectral images, and a top-hat filter is used to detect dim targets in the fused salient graph. Experimental results show that the proposed method improves the probability of target detection and reduces the probability of false alarm on cluttered background images.
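
    The sketch below illustrates only the fusion and detection stages: per-band saliency maps (a simple multi-scale local-contrast measure stands in for the Gabor and contrast filters of the paper) are fused by a pixel-wise maximum, and a white top-hat filter highlights dim point targets. The filter sizes and the synthetic two-band scene are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def contrast_saliency(img, scales=(3, 7, 15)):
    """Multi-scale local-contrast saliency (a stand-in for Gabor/contrast filtering)."""
    sal = np.zeros_like(img, dtype=float)
    for s in scales:
        local_mean = ndimage.uniform_filter(img.astype(float), size=s)
        sal = np.maximum(sal, np.abs(img - local_mean))
    return sal

rng = np.random.default_rng(0)
band1 = rng.normal(0.0, 1.0, (128, 128))        # two spectral bands with clutter
band2 = rng.normal(0.0, 1.0, (128, 128))
band2[64, 64] += 8.0                             # dim point target present in band 2 only

# Maximum-salience fusion across bands, then top-hat filtering for detection
fused = np.maximum(contrast_saliency(band1), contrast_saliency(band2))
tophat = ndimage.white_tophat(fused, size=9)
peak = np.unravel_index(np.argmax(tophat), tophat.shape)
print("strongest response at:", peak)            # should land on (64, 64)
```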

  8. a Modeling Method of Fluttering Leaves Based on Point Cloud

    NASA Astrophysics Data System (ADS)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and the falling-leaves model has wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy the real-time needs of practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  9. Development of glucose measurement system based on pulsed laser-induced ultrasonic method

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Wan, Bin; Liu, Guodong; Xiong, Zhihua

    2016-09-01

    In this study, a glucose measurement system based on the pulsed laser-induced ultrasonic technique was established. The system uses the lateral detection mode: an Nd:YAG-pumped optical parametric oscillator (OPO) pulsed laser serves as the excitation source, and a high-sensitivity ultrasonic transducer serves as the signal detector to capture the photoacoustic signals of glucose. In the experiments, the real-time photoacoustic signals of glucose aqueous solutions with different concentrations were captured by the ultrasonic transducer and a digital oscilloscope, and the photoacoustic peak-to-peak values were obtained over the wavelength range from 1300 nm to 2300 nm. The characteristic absorption wavelengths of glucose were determined via the difference spectrum method and the second derivative method. In addition, prediction models for glucose concentration were established via the multivariable linear regression algorithm, and the optimal prediction model with its corresponding optimal wavelengths was selected. The results showed that the glucose measurement system based on pulsed laser-induced ultrasonic detection is feasible. Therefore, the measurement scheme and prediction model have potential value for non-invasive monitoring of glucose concentration, especially in the food safety and biomedical fields.
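
    A hedged sketch of the regression step only: assuming photoacoustic peak-to-peak amplitudes at a few characteristic wavelengths are available for samples of known concentration, an ordinary least-squares model maps amplitudes to concentration. All numbers below are synthetic placeholders, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic training set: peak-to-peak amplitudes at three characteristic
# wavelengths for 12 glucose solutions of known concentration (mg/dL)
conc = np.linspace(50, 400, 12)
coef_true = np.array([0.004, 0.002, -0.001])
X = np.outer(conc, coef_true) + 0.5 + 0.01 * rng.normal(size=(12, 3))

# Multivariable linear regression: amplitude features -> concentration
A = np.column_stack([X, np.ones(len(conc))])      # add an intercept term
beta, *_ = np.linalg.lstsq(A, conc, rcond=None)

# Predict the concentration of a new sample from its measured amplitudes
x_new = coef_true * 220.0 + 0.5
pred = np.append(x_new, 1.0) @ beta
print(f"predicted concentration: {pred:.1f} mg/dL (sample prepared at 220)")
```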

  10. New reversing freeform lens design method for LED uniform illumination with extended source and near field

    NASA Astrophysics Data System (ADS)

    Zhao, Zhili; Zhang, Honghai; Zheng, Huai; Liu, Sheng

    2018-03-01

    In light-emitting diode (LED) array illumination (e.g. LED backlighting), obtaining high uniformity under the harsh conditions of a large distance-height ratio (DHR), an extended source and the near field is a key and challenging issue. In this study, we present a new reversing freeform lens design algorithm based on the illuminance distribution function (IDF) instead of the traditional light intensity distribution, which allows uniform LED illumination under the above-mentioned harsh conditions. The IDF of the freeform lens can be obtained by the proposed mathematical method, considering the effects of large DHR, extended source and near-field target at the same time. To support these claims, a slim direct-lit LED backlight with a DHR equal to 4 is designed. In comparison with traditional lenses, the illuminance uniformity of the LED backlight with the new lens increases significantly from 0.45 to 0.84, and the CV(RMSE) decreases dramatically from 0.24 to 0.03 under the harsh conditions. Meanwhile, the luminance uniformity of the LED backlight with the new lens is as high as 0.92 under the conditions of an extended source and near field. This new method provides a practical and effective way to solve the problem of large DHR, extended source and near field for LED array illumination.

  11. a Data Field Method for Urban Remotely Sensed Imagery Classification Considering Spatial Correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Qin, K.; Zeng, C.; Zhang, E. B.; Yue, M. X.; Tong, X.

    2016-06-01

    Spatial correlation between pixels is important information for remotely sensed imagery classification. The data field method and spatial autocorrelation statistics have been utilized to describe and model the spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey-level change between the central pixel and the neighbourhood pixels results in exaggerating the contribution of the central pixel to the whole local window. Besides, Geary's C has also been proven to characterise and quantify well the spatial correlation between each pixel and its neighbourhood pixels, but the extracted objects are badly delineated, with a distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by considering each pixel in the window as the central pixel to compute statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of the multiple features (e.g. the spectral feature and the spatial correlation feature). In order to validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes. The results show that the developed method outperforms the traditional method in terms of classification accuracy.
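
    A hedged sketch of the final classification stage, using scikit-learn (which the paper does not necessarily use): per-pixel spectral values plus a simplified spatial feature (here just a local mean, standing in for the enhanced data-field/Geary's C statistics) are stacked and fed to an RBF-kernel SVM. The synthetic two-band scene is illustrative.

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic two-band image with two classes split left/right
labels = np.zeros((60, 60), dtype=int)
labels[:, 30:] = 1
bands = np.stack([rng.normal(labels * 1.0, 0.8),     # spectral band 1
                  rng.normal(labels * -0.5, 0.8)])   # spectral band 2

# Simplified spatial feature: local mean of band 1 over a 5x5 window
spatial = ndimage.uniform_filter(bands[0], size=5)

# Per-pixel feature vectors = spectral features + spatial feature
features = np.stack([bands[0], bands[1], spatial], axis=-1).reshape(-1, 3)
y = labels.ravel()

# Train on a random subset of pixels, then classify the whole image
idx = rng.choice(features.shape[0], size=600, replace=False)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(features[idx], y[idx])
pred = clf.predict(features).reshape(labels.shape)
print("pixel accuracy:", (pred == labels).mean())
```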

  12. Method and systems for collecting data from multiple fields of view

    NASA Technical Reports Server (NTRS)

    Schwemmer, Geary K. (Inventor)

    2002-01-01

    Systems and methods for processing light from multiple fields (48, 54, 55) of view without excessive machinery for scanning optical elements. In an exemplary embodiment of the invention, multiple holographic optical elements (41, 42, 43, 44, 45), integrated on a common film (4), diffract and project light from respective fields of view.

  13. A multi-block adaptive solving technique based on lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Xie, Jiahua; Li, Xiaoyue; Ma, Zhenghai; Zou, Jianfeng; Zheng, Yao

    2018-05-01

    In this paper, a parallel adaptive CFD algorithm is developed in-house by combining the multi-block Lattice Boltzmann Method (LBM) with Adaptive Mesh Refinement (AMR). The mesh refinement criterion of this algorithm is based on the density, velocity and vorticity of the flow field. The refined grid boundary is obtained by extending outward half a ghost cell from the coarse grid boundary, which makes the adaptive mesh more compact and the boundary treatment more convenient. Two numerical examples, backward-facing step flow separation and the unsteady flow around a circular cylinder, demonstrate that the algorithm captures the vortex structures of the cold flow field accurately.

  14. Quantitative evaluation method of the threshold adjustment and the flat field correction performances of hybrid photon counting pixel detectors

    NASA Astrophysics Data System (ADS)

    Medjoubi, K.; Dawiec, A.

    2017-12-01

    A simple method is proposed in this work for quantitative evaluation of the quality of the threshold adjustment and the flat-field correction of hybrid photon counting (HPC) pixel detectors. The approach is based on the Photon Transfer Curve (PTC), i.e. the measurement of the standard deviation of the signal in flat-field images. Fixed pattern noise (FPN), easily identifiable in the curve, is linked to the residual threshold dispersion, sensor inhomogeneity and the remnant errors of flat-fielding techniques. The analytical expression of the signal-to-noise ratio curve is developed for HPC detectors and successfully used as a fit function applied to experimental data obtained with the XPAD detector. The FPN, quantified by the photon response non-uniformity (PRNU), is measured for different configurations (threshold adjustment method and flat-fielding technique) and is shown to be usable for selecting the settings that give the best image quality from a commercial or R&D detector.
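
    The sketch below illustrates one way to extract the fixed-pattern (PRNU) term from a stack of flat-field frames: temporal averaging suppresses photon shot noise, and the spatial standard deviation of the per-pixel means, corrected for the residual shot-noise term and normalised by the mean signal, estimates the PRNU. It does not reproduce the paper's analytical SNR fit, and the synthetic detector parameters are assumed values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic flat-field stack from a photon-counting detector: 100 frames of
# 128x128 pixels, mean flux 1000 counts, with 1% pixel-to-pixel gain dispersion
# standing in for residual threshold dispersion / flat-field errors.
mean_counts = 1000.0
gain = 1.0 + 0.01 * rng.normal(size=(128, 128))           # fixed pattern
frames = rng.poisson(mean_counts * gain, size=(100, 128, 128))

per_pixel_mean = frames.mean(axis=0)          # temporal averaging suppresses shot noise
signal = per_pixel_mean.mean()

# Spatial variance of the per-pixel means = residual shot-noise term + fixed-pattern term
spatial_var = per_pixel_mean.var()
shot_var_residual = signal / frames.shape[0]              # Poisson variance / N frames
prnu = np.sqrt(max(spatial_var - shot_var_residual, 0.0)) / signal
print(f"estimated PRNU: {prnu * 100:.2f} %  (simulated value: 1.00 %)")
```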

  15. Comparability between various field and laboratory wood-stove emission-measurement methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrillis, R.C.; Jaasma, D.R.

    1991-01-01

    The paper compares various field and laboratory woodstove emission measurement methods. In 1988, the U.S. EPA promulgated performance standards for residential wood heaters (woodstoves). Over the past several years, a number of field studies have been undertaken to determine the actual level of emission reduction achieved by new technology woodstoves in everyday use. The studies have required the development and use of particulate and gaseous emission sampling equipment compatible with operation in private homes. Since woodstoves are tested for certification in the laboratory using EPA Methods 5G and 5H, it is of interest to determine the correlation between these regulatory methods and the in-house equipment. Two in-house sampling systems have been used most widely: one is an intermittent, pump-driven particulate sampler that collects particulate and condensible organics on a filter and organic adsorbent resin; the other uses an evacuated cylinder as the motive force, with particulate and condensible organics collected in a condenser and dual filter. Both samplers can operate unattended for 1-week periods. A large number of tests have been run comparing Methods 5G and 5H to both samplers. The paper presents these comparison data and determines the relationships between the regulatory methods and the field samplers.

  16. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    DOE PAGES

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    2015-12-07

    The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.

  17. A method of multiplex PCR for detection of field released Beauveria bassiana, a fungal entomopathogen applied for pest management in jute (Corchorus olitorius).

    PubMed

    Biswas, Chinmay; Dey, Piyali; Gotyal, B S; Satpathy, Subrata

    2015-04-01

    The fungal entomopathogen Beauveria bassiana is a promising biocontrol agent for many pests, and some B. bassiana strains have been found effective against jute pests. To monitor the survival of field-released B. bassiana, a rapid and efficient detection technique is essential. Conventional methods, such as plating or direct culture, which are based on cultivation on selective media followed by microscopy, are time-consuming and not very sensitive. PCR-based methods are rapid, sensitive and reliable; however, a single-primer PCR may fail to amplify some of the strains, whereas multiplex PCR increases the possibility of detection because it uses multiple primers. Therefore, in the present investigation a multiplex PCR protocol was developed by multiplexing three primers, SCA 14, SCA 15 and SCB 9, to detect field-released B. bassiana strains from the soil as well as the foliage of a jute field. Using our multiplex PCR protocol, all five B. bassiana strains could be detected from soil, and three strains, viz. ITCC 6063, ITCC 4563 and ITCC 4796, could be detected even from the crop foliage 45 days after spraying.

  18. The Field: The Proper Location for Methods Courses.

    ERIC Educational Resources Information Center

    Cardarelli, Aldo F.

    1981-01-01

    The position taken here is that extensive field work with children, built around a particular set of competencies that the university student is responsible for demonstrating, will accomplish far more than the campus-centered, inferentially based lecture approach. Such a program operating at an urban university is outlined. (Author/SJL)

  19. Accuracy-enhanced constitutive parameter identification using virtual fields method and special stereo-digital image correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongya; Pan, Bing; Grédiac, Michel; Song, Weidong

    2018-04-01

    The virtual fields method (VFM) is generally used with two-dimensional digital image correlation (2D-DIC) or the grid method (GM) for identifying constitutive parameters. However, when small out-of-plane translation/rotation occurs in the test specimen, 2D-DIC and GM are prone to yield inaccurate measurements, which further lessens the accuracy of parameter identification using VFM. In this work, an easy-to-implement but effective "special" stereo-DIC (SS-DIC) method is proposed for accuracy-enhanced VFM identification. The SS-DIC not only delivers accurate deformation measurement without being affected by unavoidable out-of-plane movement/rotation of a test specimen, but also ensures evenly distributed calculation data in space, which leads to simple data processing. Based on the accurate kinematic fields with evenly distributed measurement points determined by the SS-DIC method, constitutive parameters can be identified by VFM with enhanced accuracy. Uniaxial tensile tests of a perforated aluminum plate and pure shear tests of a prismatic aluminum specimen verified the effectiveness and accuracy of the proposed method. Experimental results show that the constitutive parameters identified by VFM using SS-DIC are more accurate and stable than those identified by VFM using 2D-DIC. It is suggested that the proposed SS-DIC can be used as a standard measuring tool for mechanical identification using VFM.

  1. Representation of magnetic fields in space. [special attention to Geomagnetic fields and Magnetospheric models

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1976-01-01

    Several mathematical methods which are available for the description of magnetic fields in space are reviewed. Examples of the application of such methods are given, with particular emphasis on work related to the geomagnetic field, and their individual properties and associated problems are described. The methods are grouped in five main classes: (1) methods based on the current density, (2) methods using the scalar magnetic potential, (3) toroidal and poloidal components of the field and spherical vector harmonics, (4) Euler potentials, and (5) local expansions of the field near a given reference point. Special attention is devoted to models of the magnetosphere, to the uniqueness of the scalar potential as derived from observed data, and to the L parameter.

  2. A molecular-field-based similarity study of non-nucleoside HIV-1 reverse transcriptase inhibitors

    NASA Astrophysics Data System (ADS)

    Mestres, Jordi; Rohrer, Douglas C.; Maggiora, Gerald M.

    1999-01-01

    This article describes a molecular-field-based similarity method for aligning molecules by matching their steric and electrostatic fields and an application of the method to the alignment of three structurally diverse non-nucleoside HIV-1 reverse transcriptase inhibitors. A brief description of the method, as implemented in the program MIMIC, is presented, including a discussion of pairwise and multi-molecule similarity-based matching. The application provides an example that illustrates how relative binding orientations of molecules can be determined in the absence of detailed structural information on their target protein. In the particular system studied here, availability of the X-ray crystal structures of the respective ligand-protein complexes provides a means for constructing an 'experimental model' of the relative binding orientations of the three inhibitors. The experimental model is derived by using MIMIC to align the steric fields of the three protein P66 subunit main chains, producing an overlay with a 1.41 Å average rms distance between the corresponding Cα's in the three chains. The inter-chain residue similarities for the backbone structures show that the main-chain conformations are conserved in the region of the inhibitor-binding site, with the major deviations located primarily in the 'finger' and RNase H regions. The resulting inhibitor structure overlay provides an experimental-based model that can be used to evaluate the quality of the direct a priori inhibitor alignment obtained using MIMIC. It is found that the 'best' pairwise alignments do not always correspond to the experimental model alignments. Therefore, simply combining the best pairwise alignments will not necessarily produce the optimal multi-molecule alignment. However, the best simultaneous three-molecule alignment was found to reproduce the experimental inhibitor alignment model. A pairwise consistency index has been derived which gauges the quality of combining the pairwise

  3. Dynamic blocked transfer stiffness method of characterizing the magnetic field and frequency dependent dynamic viscoelastic properties of MRE

    NASA Astrophysics Data System (ADS)

    Poojary, Umanath R.; Hegde, Sriharsha; Gangadharan, K. V.

    2016-11-01

    Magnetorheological elastomer (MRE) is a potential resilient element for semi-active vibration isolators. MRE-based isolators adapt to the different frequencies of vibration arising from the source, isolating the structure over a wider frequency range. The performance of an MRE isolator depends on the magnetic-field- and frequency-dependent characteristics of the MRE. The present study focuses on experimentally evaluating the dynamic stiffness and loss factor of MRE through the dynamic blocked transfer stiffness method. The dynamic stiffness of MRE exhibits strong magnetic field dependence and mild frequency dependence. The enhancement in dynamic stiffness saturates with increasing magnetic field and frequency. The inconsistent variation of the loss factor with the magnetic field substantiates the inability of MRE to offer independent control over its damping characteristics.
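
    As a minimal sketch of how a blocked transfer stiffness measurement is reduced to a dynamic stiffness and loss factor (assuming single-frequency harmonic excitation and synthetic, noise-free signals rather than the authors' test rig): the complex dynamic stiffness is the ratio of the blocked-force to input-displacement phasors at the excitation frequency, and the loss factor is the tangent of the phase lag between them.

      import numpy as np

      fs, f_exc, T = 5000.0, 20.0, 2.0                  # sample rate, excitation frequency, duration
      t = np.arange(0, T, 1.0 / fs)
      x = 1e-3 * np.sin(2 * np.pi * f_exc * t)           # input displacement [m]
      F = 120.0 * np.sin(2 * np.pi * f_exc * t + 0.15)   # blocked force [N], 0.15 rad phase lag

      def phasor(sig, t, f):
          """Complex amplitude of sig at frequency f (single-bin Fourier estimate)."""
          return 2.0 * np.mean(sig * np.exp(-2j * np.pi * f * t))

      K = phasor(F, t, f_exc) / phasor(x, t, f_exc)      # complex dynamic stiffness [N/m]
      print(f"|K*| = {abs(K):.3e} N/m, loss factor = {K.imag / K.real:.3f}")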

  4. DNA-based methods of geochemical prospecting

    DOEpatents

    Ashby, Matthew [Mill Valley, CA]

    2011-12-06

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  5. A field method for soil erosion measurements in agricultural and natural lands

    Treesearch

    Y.P. Hsieh; K.T. Grant; G.C. Bugna

    2009-01-01

    Soil erosion is one of the most important watershed processes in nature, yet quantifying it under field conditions remains a challenge. The lack of soil erosion field data is a major factor hindering our ability to predict soil erosion in a watershed. We present here the development of a simple and sensitive field method that quantifies soil erosion and the resulting...

  6. Field Supervisor Perspectives on Evidence-Based Practice: Familiarity, Feasibility, and Implementation

    ERIC Educational Resources Information Center

    Heffernan, Kristin; Dauenhauer, Jason

    2017-01-01

    The Council on Social Work Education has designated field education as social work's signature pedagogy, putting field supervisors in a key role of preparing students as competent social workers. This study examined field supervisors' Evidence Based Practice (EBP) behaviors using a modified version of the Evidence-Based Practice Process Assessment…

  7. Self-Alignment MEMS IMU Method Based on the Rotation Modulation Technique on a Swing Base

    PubMed Central

    Chen, Zhiyong; Yang, Haotian; Wang, Chengbin; Lin, Zhihui; Guo, Meifeng

    2018-01-01

    The micro-electro-mechanical-system (MEMS) inertial measurement unit (IMU) has been widely used in the field of inertial navigation due to its small size, low cost, and light weight, but aligning MEMS IMUs remains a challenge for researchers. MEMS IMUs have been conventionally aligned on a static base, requiring other sensors, such as magnetometers or satellites, to provide auxiliary information, which limits its application range to some extent. Therefore, improving the alignment accuracy of MEMS IMU as much as possible under swing conditions is of considerable value. This paper proposes an alignment method based on the rotation modulation technique (RMT), which is completely self-aligned, unlike the existing alignment techniques. The effect of the inertial sensor errors is mitigated by rotating the IMU. Then, inertial frame-based alignment using the rotation modulation technique (RMT-IFBA) achieved coarse alignment on the swing base. The strong tracking filter (STF) further improved the alignment accuracy. The performance of the proposed method was validated with a physical experiment, and the results of the alignment showed that the standard deviations of pitch, roll, and heading angle were 0.0140°, 0.0097°, and 0.91°, respectively, which verified the practicality and efficacy of the proposed method for the self-alignment of the MEMS IMU on a swing base. PMID:29649150

  8. Research on Visualization Design Method in the Field of New Media Software Engineering

    NASA Astrophysics Data System (ADS)

    Deqiang, Hu

    2018-03-01

    With the rapid development of science and technology, increasingly fierce market competition and growing user demand, a new design and application method has emerged in the field of new media software engineering: the visualization design method. Applying the visualization design method to new media software engineering not only improves the operational efficiency of new media software projects but, more importantly, enhances the quality of software development through appropriate media of communication and transformation; on this basis, the progress and development of new media software engineering in China are also continuously promoted. This article therefore analyses the application of the visualization design method in new media software engineering concretely, starting from an overview of visualization design methods and a systematic analysis of the underlying technology.

  9. Developing a methodology for the inverse estimation of root architectural parameters from field based sampling schemes

    NASA Astrophysics Data System (ADS)

    Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry

    2017-04-01

    Root traits are increasingly important in the breeding of new crop varieties; for example, longer and fewer lateral roots are suggested to improve the drought resistance of wheat. Thus, detailed root architectural parameters are important. However, classical field sampling of roots only provides aggregated information such as root length density (coring), root counts per area (trenches) or root arrival curves at certain depths (rhizotubes). We investigate the possibility of obtaining information about the root system architecture of plants from classical field-based root sampling schemes, using sensitivity analysis and inverse parameter estimation. The methodology was developed in a virtual experiment in which a root architectural model, parameterized for winter wheat, was used to simulate root system development in a field. This provided the ground truth, which is normally unknown in a real field experiment. The three sampling schemes (coring, trenching, and rhizotubes) were applied virtually and the aggregated information computed. The Morris OAT global sensitivity analysis method was then performed to determine the most sensitive parameters of the root architecture model for the three different sampling methods. The estimated means and standard deviations of the elementary effects of a total of 37 parameters were evaluated. Upper and lower bounds of the parameters were obtained from the literature and published data on winter wheat root architectural parameters. Root length density profiles from coring, arrival curve characteristics observed in rhizotubes, and root counts in grids of the trench profile method were evaluated statistically to investigate the influence of each parameter using five different error functions. The number of branches, insertion angle, inter-nodal distance, and elongation rates are the most sensitive parameters, and the parameter sensitivity varies slightly with depth. Most parameters and their interaction with the other parameters show
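
    A minimal sketch of Morris one-at-a-time (OAT) elementary effects is given below; the toy model, parameter names and ranges are invented for illustration and are not taken from the study.

      import numpy as np

      rng = np.random.default_rng(0)

      def model(p):
          """Toy aggregated observable, e.g. a root-length-density-like quantity."""
          n_branch, angle, internode, elong = p
          return n_branch * elong / internode + 0.1 * np.cos(angle)

      names = ["n_branch", "angle", "internode", "elong"]
      lo = np.array([2.0, 0.2, 0.5, 0.5])
      hi = np.array([10.0, 1.5, 3.0, 2.0])
      delta, r = 0.1, 20                       # relative step, number of random base points

      effects = np.zeros((r, len(names)))
      for k in range(r):
          x = lo + rng.random(len(names)) * (hi - lo)        # random base point
          f0 = model(x)
          for i in range(len(names)):
              xp = x.copy()
              xp[i] += delta * (hi[i] - lo[i])               # perturb one parameter at a time
              effects[k, i] = (model(xp) - f0) / delta       # elementary effect

      mu_star = np.abs(effects).mean(axis=0)   # mean absolute effect (importance)
      sigma = effects.std(axis=0)              # spread (nonlinearity / interactions)
      for n, m, s in zip(names, mu_star, sigma):
          print(f"{n:10s} mu* = {m:7.3f}  sigma = {s:7.3f}")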

  10. Ice Water Classification Using Statistical Distribution Based Conditional Random Fields in RADARSAT-2 Dual Polarization Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Li, F.; Zhang, S.; Hao, W.; Zhu, T.; Yuan, L.; Xiao, F.

    2017-09-01

    In this paper, a Statistical Distribution based Conditional Random Fields (STA-CRF) algorithm is exploited for improving marginal ice-water classification. Pixel-level ice concentration is presented for the comparison of the CRF-based methods. Furthermore, in order to explore an effective statistical distribution model to be integrated into STA-CRF, five statistical distribution models are investigated. The STA-CRF methods are tested on two scenes around Prydz Bay and Adélie Depression, which contain a variety of ice types during the melt season. Experimental results indicate that the proposed method can resolve the sea-ice edge well in the Marginal Ice Zone (MIZ) and shows a robust distinction between ice and water.

  11. Effects of a High Magnetic Field on the Microstructure of Ni-Based Single-Crystal Superalloys During Directional Solidification

    NASA Astrophysics Data System (ADS)

    Xuan, Weidong; Lan, Jian; Liu, Huan; Li, Chuanjun; Wang, Jiang; Ren, Weili; Zhong, Yunbo; Li, Xi; Ren, Zhongming

    2017-08-01

    High magnetic fields are widely used to improve the microstructure and properties of materials during the solidification process. During the preparation of single-crystal turbine blades, the microstructure of the superalloy is the main factor that determines its mechanical properties. In this work, the effects of a high magnetic field on the microstructure of Ni-based single-crystal superalloys PWA1483 and CMSX-4 during directional solidification were investigated experimentally. The results showed that the magnetic field modified the primary dendrite arm spacing, γ' phase size, and microsegregation of the superalloys. In addition, the size and volume fractions of γ/γ' eutectic and the microporosity were decreased in a high magnetic field. Analysis of variance (ANOVA) results showed that the effect of a high magnetic field on the microstructure during directional solidification was significant (p < 0.05). Based on both experimental results and theoretical analysis, the modification of microstructure was attributed to thermoelectric magnetic convection occurring in the interdendritic regions under a high magnetic field. The present work provides a new method to optimize the microstructure of Ni-based single-crystal superalloy blades by applying a high magnetic field.

  12. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectrum imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are arriving in torrents. To utilize them effectively and fully, research on automated processing methods for celestial data is therefore imperative. In the present work, we investigated how to recognize galaxies and quasars from their spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far from Earth, and their spectra are usually contaminated by various noise. Recognizing these two types of spectra is therefore a typical problem in automatic spectral classification. Furthermore, the method utilized, nearest neighbor, is one of the most typical, classic and mature algorithms in pattern recognition and data mining, and it is often used as a benchmark when developing novel algorithms. For applicability in practice, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the superiority of NN is that this method does not need to be trained, which is useful for incremental learning and parallel computation in the processing of massive spectral data. In conclusion, the results of this work are helpful for studying the classification of galaxy and quasar spectra.
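
    A minimal nearest-neighbour classification sketch in the spirit of the paper is shown below; the "spectra" are synthetic stand-ins rather than survey data, and the feature extraction is deliberately trivial.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      n, n_pix = 400, 200
      wave = np.linspace(0, 1, n_pix)

      # Fake "galaxy" spectra (smooth continuum) vs. "quasar" spectra (broad emission bump)
      galaxies = 1.0 + 0.3 * wave + 0.05 * rng.normal(size=(n, n_pix))
      quasars = 1.0 + 0.3 * wave + 0.8 * np.exp(-(wave - 0.4) ** 2 / 0.002) \
                + 0.05 * rng.normal(size=(n, n_pix))
      X = np.vstack([galaxies, quasars])
      y = np.array([0] * n + [1] * n)          # 0 = galaxy, 1 = quasar

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)   # "fit" only stores the data
      print(f"recognition ratio: {clf.score(X_te, y_te):.3f}")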

  13. A Web service substitution method based on service cluster nets

    NASA Astrophysics Data System (ADS)

    Du, YuYue; Gai, JunJing; Zhou, MengChu

    2017-11-01

    Service substitution is an important research topic in the fields of Web services and service-oriented computing. This work presents a novel method to analyse and substitute Web services. A new concept, called a Service Cluster Net Unit, is proposed based on Web service clusters. A service cluster is converted into a Service Cluster Net Unit. Then it is used to analyse whether the services in the cluster can satisfy some service requests. Meanwhile, the substitution methods of an atomic service and a composite service are proposed. The correctness of the proposed method is proved, and the effectiveness is shown and compared with the state-of-the-art method via an experiment. It can be readily applied to e-commerce service substitution to meet the business automation needs.

  14. Consistent simulation of droplet evaporation based on the phase-field multiphase lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Safari, Hesameddin; Rahimian, Mohammad Hassan; Krafczyk, Manfred

    2014-09-01

    In the present article, we extend and generalize our previous article [H. Safari, M. H. Rahimian, and M. Krafczyk, Phys. Rev. E 88, 013304 (2013), 10.1103/PhysRevE.88.013304] to include the gradient of the vapor concentration at the liquid-vapor interface as the driving force for vaporization allowing the evaporation from the phase interface to work for arbitrary temperatures. The lattice Boltzmann phase-field multiphase modeling approach with a suitable source term, accounting for the effect of the phase change on the velocity field, is used to solve the two-phase flow field. The modified convective Cahn-Hilliard equation is employed to reconstruct the dynamics of the interface topology. The coupling between the vapor concentration and temperature field at the interface is modeled by the well-known Clausius-Clapeyron correlation. Numerous validation tests including one-dimensional and two-dimensional cases are carried out to demonstrate the consistency of the presented model. Results show that the model is able to predict the flow features around and inside an evaporating droplet quantitatively in quiescent as well as convective environments.
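
    The Clausius-Clapeyron coupling mentioned above can be sketched as follows; the constants are illustrative values for water, and the paper's lattice units and non-dimensionalisation are not reproduced.

      import numpy as np

      R = 8.314          # J/(mol K)
      L_vap = 40.7e3     # latent heat of vaporisation, J/mol
      p_ref, T_ref = 101325.0, 373.15   # reference saturation point (water, 1 atm)

      def saturation_pressure(T):
          """Clausius-Clapeyron: p_sat(T) = p_ref * exp(-L/R * (1/T - 1/T_ref))."""
          return p_ref * np.exp(-L_vap / R * (1.0 / T - 1.0 / T_ref))

      def interface_vapor_mass_fraction(T, p_tot=101325.0, M_v=0.018, M_air=0.029):
          """Vapor mass fraction imposed at the liquid-vapor interface for a given temperature."""
          x_v = saturation_pressure(T) / p_tot                  # mole fraction
          return x_v * M_v / (x_v * M_v + (1.0 - x_v) * M_air)  # mass fraction

      for T in (300.0, 330.0, 360.0):
          print(f"T = {T:5.1f} K  ->  Y_v,interface = {interface_vapor_mass_fraction(T):.3f}")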

  15. Consistent simulation of droplet evaporation based on the phase-field multiphase lattice Boltzmann method.

    PubMed

    Safari, Hesameddin; Rahimian, Mohammad Hassan; Krafczyk, Manfred

    2014-09-01

    In the present article, we extend and generalize our previous article [H. Safari, M. H. Rahimian, and M. Krafczyk, Phys. Rev. E 88, 013304 (2013)] to include the gradient of the vapor concentration at the liquid-vapor interface as the driving force for vaporization allowing the evaporation from the phase interface to work for arbitrary temperatures. The lattice Boltzmann phase-field multiphase modeling approach with a suitable source term, accounting for the effect of the phase change on the velocity field, is used to solve the two-phase flow field. The modified convective Cahn-Hilliard equation is employed to reconstruct the dynamics of the interface topology. The coupling between the vapor concentration and temperature field at the interface is modeled by the well-known Clausius-Clapeyron correlation. Numerous validation tests including one-dimensional and two-dimensional cases are carried out to demonstrate the consistency of the presented model. Results show that the model is able to predict the flow features around and inside an evaporating droplet quantitatively in quiescent as well as convective environments.

  16. Verification of the ISO calibration method for field pyranometers under tropical sky conditions

    NASA Astrophysics Data System (ADS)

    Janjai, Serm; Tohsing, Korntip; Pattarapanitchai, Somjet; Detkhon, Pasakorn

    2017-02-01

    Field pyranometers need to be calibrated annually, and the International Organization for Standardization (ISO) has defined a standard method (ISO 9847) for calibrating them. According to this standard method for outdoor calibration, the field pyranometers have to be compared against a reference pyranometer over a period of 2 to 14 days, depending on sky conditions. In this work, the ISO 9847 standard method was verified under tropical sky conditions. To verify the standard method, calibration of field pyranometers was conducted at a tropical site located in Nakhon Pathom (13.82° N, 100.04° E), Thailand, under various sky conditions. The sky conditions were monitored using a sky camera. The calibration results obtained for different calibration periods under various sky conditions were analyzed. It was found that the calibration periods specified by the standard method could be reduced without significant change in the final calibration result. In addition, recommendations on the use of this standard method in the tropics are presented and discussed.
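
    A simplified sketch of the outdoor comparison underlying ISO 9847 (not the full standard procedure) is shown below: the field pyranometer's responsivity is taken as the ratio of summed signals against the reference instrument over accepted stable-sky intervals; the time series is synthetic.

      import numpy as np

      def responsivity(v_field_uV, irr_ref_Wm2, stable):
          """Field pyranometer responsivity [uV per W m^-2] from paired readings on stable intervals."""
          v = np.asarray(v_field_uV)[stable]
          g = np.asarray(irr_ref_Wm2)[stable]
          return v.sum() / g.sum()

      rng = np.random.default_rng(0)
      t = np.arange(600)                                      # one-minute samples over a clear day
      irr_ref = 800.0 * np.sin(np.pi * t / 600.0)             # reference irradiance, W/m^2
      v_field = 8.5 * irr_ref + rng.normal(0.0, 5.0, t.size)  # field output, uV (true 8.5 uV per W/m^2)
      stable = irr_ref > 100.0                                # discard low-sun / unstable intervals
      print(f"responsivity = {responsivity(v_field, irr_ref, stable):.3f} uV W^-1 m^2")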

  17. Method and apparatus for determining vertical heat flux of geothermal field

    DOEpatents

    Poppendiek, Heinz F.

    1982-01-01

    A method and apparatus for determining the vertical heat flux of a geothermal field, and for mapping the entire field, is based upon an elongated heat-flux transducer (10) comprising a length of tubing (12) of relatively low thermal conductivity with a thermopile (20) inside for measuring the thermal gradient between the ends of the transducer after it has been positioned in a borehole long enough to reach thermal equilibrium. The transducer is thermally coupled to the surrounding earth by a fluid annulus, preferably water or mud. A second transducer, comprising a length of tubing of relatively high thermal conductivity, is used for a second thermal-gradient measurement. The ratio of the first measurement to the second is used to determine the earth's thermal conductivity, k∞, from a precalculated graph. Using the value of thermal conductivity thus determined, the undisturbed vertical earth temperature gradient, b, is then obtained from predetermined steady-state heat-balance equations that relate the undisturbed vertical earth temperature distribution at some distance from the borehole and the earth's thermal conductivity to the temperature gradients in the transducers and their thermal conductivities. The product of the earth's thermal conductivity, k∞, and the earth's undisturbed vertical temperature gradient, b, then gives the earth's vertical heat flux. The process can be repeated for many boreholes of a geothermal field to map the vertical heat flux.
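
    The final heat-flux calculation can be sketched as follows; the ratio-to-conductivity lookup table is a made-up stand-in for the "precalculated graph" referred to in the patent, and the gradient b is taken as given rather than solved from the heat-balance equations.

      import numpy as np

      # Assumed lookup: transducer gradient ratio -> earth conductivity k_inf [W/(m K)]
      ratio_table = np.array([1.2, 1.6, 2.0, 2.4, 2.8])
      k_table = np.array([0.8, 1.4, 2.0, 2.6, 3.2])

      def vertical_heat_flux(grad_low_k, grad_high_k, b):
          """q = k_inf * b, with k_inf interpolated from the transducer gradient ratio.

          grad_low_k, grad_high_k : thermal gradients in the low- and high-conductivity transducers [K/m]
          b                       : undisturbed vertical earth temperature gradient [K/m]
          """
          ratio = grad_low_k / grad_high_k
          k_inf = np.interp(ratio, ratio_table, k_table)
          return k_inf * b

      print(f"q = {vertical_heat_flux(0.09, 0.045, 0.03):.3f} W/m^2")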

  18. A CLEAN-based method for mosaic deconvolution

    NASA Astrophysics Data System (ADS)

    Gueth, F.; Guilloteau, S.; Viallefond, F.

    1995-03-01

    Mosaicing may be used in aperture synthesis to map large fields of view. So far, only MEM techniques have been used to deconvolve mosaic images (Cornwell 1988). A CLEAN-based method has been developed in which the CLEAN components are located using a modified expression. This allows better use of the information and a consequent noise reduction in the overlapping regions. Simulations show that this method gives correct CLEAN maps and recovers most of the flux of the sources. The inclusion of short-spacing visibilities in the data set is strongly required; their absence introduces an artificial lack of structure on the corresponding scales in the mosaic images. The formation of "stripes" in CLEAN maps may also occur, but this phenomenon can be significantly reduced by using the Steer-Dewdney-Ito algorithm (Steer, Dewdney & Ito 1984) to identify the CLEAN components. Typical IRAM interferometer pointing errors do not have a significant effect on the reconstructed images.
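
    For orientation, a plain Hogbom-style CLEAN loop on a single field is sketched below; it illustrates the component search that the mosaic method modifies, but it is not the joint mosaic algorithm of the paper (there is no weighting of overlapping pointings).

      import numpy as np

      def hogbom_clean(dirty, psf, gain=0.1, threshold=1e-3, max_iter=500):
          """Return (model, residual) after iterative subtraction of scaled, shifted PSFs."""
          residual = dirty.copy()
          model = np.zeros_like(dirty)
          cy, cx = np.unravel_index(np.argmax(psf), psf.shape)   # PSF peak position
          for _ in range(max_iter):
              iy, ix = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
              peak = residual[iy, ix]
              if abs(peak) < threshold:
                  break
              model[iy, ix] += gain * peak
              shifted = np.roll(np.roll(psf, iy - cy, axis=0), ix - cx, axis=1)
              residual -= gain * peak * shifted                  # remove a fraction of the peak
          return model, residual

      # Toy example: two point sources convolved with a Gaussian beam
      ny = nx = 64
      yy, xx = np.mgrid[0:ny, 0:nx]
      psf = np.exp(-(((yy - 32) ** 2 + (xx - 32) ** 2) / 18.0))
      sky = np.zeros((ny, nx)); sky[20, 40] = 1.0; sky[45, 22] = 0.6
      dirty = np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(np.fft.ifftshift(psf))))
      model, resid = hogbom_clean(dirty, psf)
      print(f"recovered flux: {model.sum():.2f} (true 1.60)")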

  19. Cartographic generalization of urban street networks based on gravitational field theory

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Li, Yongshu; Li, Zheng; Guo, Jiawei

    2014-05-01

    The automatic generalization of urban street networks is a constant and important aspect of geographical information science. Previous studies show that the dual graph for street-street relationships more accurately reflects the overall morphological properties and importance of streets than do other methods. In this study, we construct a dual graph to represent street-street relationship and propose an approach to generalize street networks based on gravitational field theory. We retain the global structural properties and topological connectivity of an original street network and borrow from gravitational field theory to define the gravitational force between nodes. The concept of multi-order neighbors is introduced and the gravitational force is taken as the measure of the importance contribution between nodes. The importance of a node is defined as the result of the interaction between a given node and its multi-order neighbors. Degree distribution is used to evaluate the level of maintaining the global structure and topological characteristics of a street network and to illustrate the efficiency of the suggested method. Experimental results indicate that the proposed approach can be used in generalizing street networks and retaining their density characteristics, connectivity and global structure.
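
    A hedged sketch of the gravitational-force idea on a dual street graph follows; the node "masses" (here simply degrees), the neighbourhood order and the toy graph are illustrative choices, not the paper's values.

      import networkx as nx

      def gravitational_importance(G, order=3):
          """Importance of each street node from interactions with its multi-order neighbours."""
          degree = dict(G.degree())                # node "mass": degree used as a simple proxy
          importance = {}
          for u in G.nodes():
              lengths = nx.single_source_shortest_path_length(G, u, cutoff=order)
              importance[u] = sum(degree[u] * degree[v] / d ** 2
                                  for v, d in lengths.items() if d > 0)
          return importance

      # Toy dual graph: nodes are streets, edges mean the streets intersect
      G = nx.Graph([("Main", "Oak"), ("Main", "Pine"), ("Oak", "Pine"),
                    ("Pine", "Elm"), ("Elm", "Cedar")])
      for street, score in sorted(gravitational_importance(G).items(), key=lambda kv: -kv[1]):
          print(f"{street:6s} importance = {score:.2f}")

    In a generalization step, the lowest-importance streets would then be candidates for removal while the overall topological connectivity is preserved.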

  20. Phase field approaches of bone remodeling based on TIP

    NASA Astrophysics Data System (ADS)

    Ganghoffer, Jean-François; Rahouadj, Rachid; Boisse, Julien; Forest, Samuel

    2016-01-01

    The process of bone remodeling includes a cycle of repair, renewal, and optimization. This adaptation process, in response to variations in external loads and chemical driving factors, involves three main types of bone cells: osteoclasts, which remove the old pre-existing bone; osteoblasts, which form the new bone in a second phase; and osteocytes, sensing cells embedded in the bone matrix, which trigger the aforementioned sequence of events. The remodeling process involves mineralization of the bone in the diffuse interface separating the marrow, which contains all specialized cells, from the newly formed bone. The main objective advocated in this contribution is the setting up of a modeling and simulation framework relying on the phase field method to capture the evolution of the diffuse interface between the new bone and the marrow at the scale of individual trabeculae. The phase field describes the degree of mineralization of this diffuse interface; it varies continuously between the lower value (no mineral) and unity (fully mineralized phase, e.g. new bone), allowing the consideration of a diffuse moving interface. The modeling framework is the theory of continuous media, for which field equations for the mechanical, chemical, and interfacial phenomena are written, based on the thermodynamics of irreversible processes. Additional models for the cellular activity are formulated to describe the coupling of the cell activity responsible for bone production/resorption to the kinetics of the internal variables. Kinetic equations for the internal variables are obtained from a pseudo-potential of dissipation. The combination of the balance equations for the microforce associated with the phase field and the kinetic equations leads to the Ginzburg-Landau equation satisfied by the phase field, with a source term accounting for the dissipative microforce. Simulations illustrating the proposed framework are performed in a one-dimensional situation showing the evolution of
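
    A generic one-dimensional Ginzburg-Landau (Allen-Cahn type) relaxation of such a mineralisation phase field is sketched below, with a constant source term standing in for the cell-driven dissipative microforce; the parameter values are illustrative, and the full chemo-mechanical coupling of the paper is not reproduced.

      import numpy as np

      n, dx, dt = 200, 1.0, 0.05
      kappa, W, mobility, source = 2.0, 1.0, 1.0, 0.02        # illustrative values

      x = np.arange(n) * dx
      phi = 0.5 * (1.0 + np.tanh((x - 50.0) / 5.0))           # marrow (phi = 0) to new bone (phi = 1)

      def dwell_prime(phi):
          """Derivative of the double-well free energy f = W * phi^2 * (1 - phi)^2."""
          return 2.0 * W * phi * (1.0 - phi) * (1.0 - 2.0 * phi)

      for _ in range(2000):
          p = np.pad(phi, 1, mode="edge")                     # zero-flux boundaries
          lap = (p[2:] + p[:-2] - 2.0 * phi) / dx ** 2
          # Allen-Cahn relaxation with a constant "microforce" source driving mineralisation
          phi = phi + dt * mobility * (kappa * lap - dwell_prime(phi) + source)

      print(f"mineralisation front now near x = {x[np.argmin(np.abs(phi - 0.5))]:.1f}")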

  1. Evaluation Method for Fieldlike-Torque Efficiency by Modulation of the Resonance Field

    NASA Astrophysics Data System (ADS)

    Kim, Changsoo; Kim, Dongseuk; Chun, Byong Sun; Moon, Kyoung-Woong; Hwang, Chanyong

    2018-05-01

    The spin Hall effect has attracted a lot of interest in spintronics because it offers the possibility of a faster switching route with an electric current than with a spin-transfer-torque device. Recently, fieldlike spin-orbit torque has been shown to play an important role in the magnetization switching mechanism. However, there is no simple method for observing the fieldlike spin-orbit torque efficiency. We suggest a method for measuring fieldlike spin-orbit torque using a linear change in the resonance field in spectra of direct-current (dc)-tuned spin-torque ferromagnetic resonance. The fieldlike spin-orbit torque efficiency can be obtained in both a macrospin simulation and in experiments by simply subtracting the Oersted field from the shifted amount of resonance field. This method analyzes the effect of fieldlike torque using dc in a normal metal; therefore, only the dc resistivity and the dimensions of each layer are considered in estimating the fieldlike spin-torque efficiency. The evaluation of fieldlike-torque efficiency of a newly emerging material by modulation of the resonance field provides a shortcut in the development of an alternative magnetization switching device.

  2. High-speed engine/component performance assessment using exergy and thrust-based methods

    NASA Technical Reports Server (NTRS)

    Riggins, D. W.

    1996-01-01

    This investigation summarizes a comparative study of two high-speed engine performance assessment techniques based on energy (available work) and thrust-potential (thrust availability). Simple flow-fields utilizing Rayleigh heat addition and one-dimensional flow with friction are used to demonstrate the fundamental inability of conventional energy techniques to predict engine component performance, aid in component design, or accurately assess flow losses. The use of the thrust-based method on these same examples demonstrates its ability to yield useful information in all these categories. Energy and thrust are related and discussed from the stand-point of their fundamental thermodynamic and fluid dynamic definitions in order to explain the differences in information obtained using the two methods. The conventional definition of energy is shown to include work which is inherently unavailable to an aerospace Brayton engine. An engine-based energy is then developed which accurately accounts for this inherently unavailable work; performance parameters based on this quantity are then shown to yield design and loss information equivalent to the thrust-based method.

  3. Stochastic-field cavitation model

    NASA Astrophysics Data System (ADS)

    Dumond, J.; Magagnato, F.; Class, A.

    2013-07-01

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  4. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically-attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or model correlation and updating for larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high spatial resolution, simultaneous, measurements. Combined with vision based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurements and subsequent modal analysis, based on techniques such as the digital image correlation (DIC) and the point-tracking. However, they typically require speckle pattern or high-contrast markers to be placed on the surface of structures, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviate the need of structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little

  5. Multipolar Ewald Methods, 2: Applications Using a Quantum Mechanical Force Field

    PubMed Central

    2015-01-01

    A fully quantum mechanical force field (QMFF) based on a modified “divide-and-conquer” (mDC) framework is applied to a series of molecular simulation applications, using a generalized Particle Mesh Ewald method extended to multipolar charge densities. Simulation results are presented for three example applications: liquid water, p-nitrophenylphosphate reactivity in solution, and crystalline N,N-dimethylglycine. Simulations of liquid water using a parametrized mDC model are compared to TIP3P and TIP4P/Ew water models and experiment. The mDC model is shown to be superior for cluster binding energies and generally comparable for bulk properties. Examination of the dissociative pathway for dephosphorylation of p-nitrophenylphosphate shows that the mDC method evaluated with the DFTB3/3OB and DFTB3/OPhyd semiempirical models bracket the experimental barrier, whereas DFTB2 and AM1/d-PhoT QM/MM simulations exhibit deficiencies in the barriers, the latter for which is related, in part, to the anomalous underestimation of the p-nitrophenylate leaving group pKa. Simulations of crystalline N,N-dimethylglycine are performed and the overall structure and atomic fluctuations are compared with the experiment and the general AMBER force field (GAFF). The QMFF, which was not parametrized for this application, was shown to be in better agreement with crystallographic data than GAFF. Our simulations highlight some of the application areas that may benefit from using new QMFFs, and they demonstrate progress toward the development of accurate QMFFs using the recently developed mDC framework. PMID:25691830

  6. The reduced basis method for the electric field integral equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fares, M., E-mail: fares@cerfacs.f; Hesthaven, J.S., E-mail: Jan_Hesthaven@Brown.ed; Maday, Y., E-mail: maday@ann.jussieu.f

    We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.

  7. Field Science Ethnography: Methods For Systematic Observation on an Expedition

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The Haughton-Mars expedition is a multidisciplinary project, exploring an impact crater in an extreme environment to determine how people might live and work on Mars. The expedition seeks to understand and field test Mars facilities, crew roles, operations, and computer tools. I combine an ethnographic approach, to establish a baseline understanding of how scientists prefer to live and work when relatively unencumbered, with a participatory design approach of experimenting with procedures and tools in the context of use. This paper focuses on field methods for systematically recording and analyzing the expedition's activities. Systematic photography and time-lapse video are combined with concept mapping to organize and present information. This hybrid approach is generally applicable to the study of modern field expeditions having a dozen or more multidisciplinary participants, spread over a large terrain during multiple field seasons.

  8. Cross-comparison and evaluation of air pollution field estimation methods

    NASA Astrophysics Data System (ADS)

    Yu, Haofei; Russell, Armistead; Mulholland, James; Odman, Talat; Hu, Yongtao; Chang, Howard H.; Kumar, Naresh

    2018-04-01

    Accurate estimates of human exposure are critical for air pollution health studies, and a variety of methods are currently being used to assign pollutant concentrations to populations. Results from these methods may differ substantially, which can affect the outcomes of health impact assessments. Here, we applied 14 methods for developing spatiotemporal air pollutant concentration fields of eight pollutants to the Atlanta, Georgia region. These methods include eight methods relying mostly on air quality observations (CM: central monitor; SA: spatial average; IDW: inverse distance weighting; KRIG: kriging; TESS-D: discontinuous tessellation; TESS-NN: natural neighbor tessellation with interpolation; LUR: land use regression; AOD: downscaled satellite-derived aerosol optical depth), one using the RLINE dispersion model, and five using a chemical transport model (CMAQ), with and without observational data to constrain results. The derived fields were evaluated and compared. Overall, all methods perform better in urban than in rural areas, and for secondary rather than primary pollutants. We found the CM and SA methods may be appropriate only for small domains and for secondary pollutants, though the SA method led to large negative spatial correlations when using data withholding for PM2.5 (spatial correlation coefficient R = -0.81). The TESS-D method was found to have major limitations. Results of the IDW, KRIG and TESS-NN methods are similar: they are better suited for secondary pollutants because of their satisfactory temporal performance (e.g., average temporal R2 > 0.85 for PM2.5 but less than 0.35 for the primary pollutant NO2), and they are suitable for areas with relatively dense monitoring networks because of their limited ability to capture spatial concentration variability, as indicated by the negative spatial R (lower than -0.2 for PM2.5 when assessed using data withholding). The performance of the LUR and AOD methods were similar to
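
    A minimal inverse-distance-weighting (IDW) sketch, one of the observation-based methods compared above, is given below; the monitor locations and values are invented.

      import numpy as np

      def idw(xy_obs, z_obs, xy_query, power=2.0):
          """Interpolate observations z_obs at points xy_obs onto xy_query."""
          d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
          d = np.maximum(d, 1e-12)               # avoid division by zero at monitor locations
          w = 1.0 / d ** power
          return (w * z_obs).sum(axis=1) / w.sum(axis=1)

      # Five hypothetical PM2.5 monitors (km coordinates) and their daily means (ug/m^3)
      monitors = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 8.0], [12.0, 12.0], [2.0, 14.0]])
      pm25 = np.array([11.2, 9.8, 13.5, 8.9, 12.1])
      grid = np.array([[5.0, 5.0], [9.0, 10.0]])
      print(idw(monitors, pm25, grid))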

  9. A Field-Based Learning Experience for Introductory Level GIS Students

    ERIC Educational Resources Information Center

    Carlson, Tom

    2007-01-01

    This article describes a pedagogic foundation for introducing a field-based geographic information systems (GIS) experience to the GIS curriculum at the university level and uses a dual evaluation methodology to monitor student learning and satisfaction. Students learned the basics of field-based global position systems (GPS) and GIS data…

  10. Stable isotope labelling methods in mass spectrometry-based quantitative proteomics.

    PubMed

    Chahrour, Osama; Cobice, Diego; Malone, John

    2015-09-10

    Mass-spectrometry-based proteomics has evolved as a promising technology over the last decade and is undergoing dramatic development in a number of different areas, such as mass spectrometric instrumentation, peptide identification algorithms and bioinformatic computational data analysis. The improved methodology allows quantitative measurement of relative or absolute protein amounts, which is essential for gaining insights into their functions and dynamics in biological systems. Several different strategies are possible, involving stable isotope labels (ICAT, ICPL, IDBEST, iTRAQ, TMT, IPTL, SILAC), label-free statistical assessment approaches (MRM, SWATH) and absolute quantification methods (AQUA), each having specific strengths and weaknesses. Inductively coupled plasma mass spectrometry (ICP-MS), which is still widely recognised as an elemental detector, has recently emerged as a complementary technique to the previous methods. The new application area for ICP-MS targets the fast-growing field of proteomics-related research, allowing absolute protein quantification using suitable element-based tags. This document describes the different stable isotope labelling methods, which incorporate metabolic labelling in live cells, ICP-MS-based detection and post-harvest chemical label tagging for protein quantification, in addition to summarising their pros and cons. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Generation of arbitrary vector fields based on a pair of orthogonal elliptically polarized base vectors.

    PubMed

    Xu, Danfeng; Gu, Bing; Rui, Guanghao; Zhan, Qiwen; Cui, Yiping

    2016-02-22

    We present an arbitrary vector field with hybrid polarization based on the combination of a pair of orthogonal elliptically polarized base vectors on the Poincaré sphere. It is shown that the created vector field depends only on the latitude angle 2χ and is independent of the longitude angle 2ψ on the Poincaré sphere. By adjusting the latitude angle 2χ, which is related to two identical waveplates in a common-path interferometric arrangement, one can obtain an arbitrary type of vector field. Experimentally, we demonstrate the generation of such vector fields and confirm the distribution of the state of polarization by measurement of the Stokes parameters. In addition, we investigate the tight focusing properties of these vector fields. It is found that the additional degree of freedom 2χ provided by the arbitrary vector field with hybrid polarization allows one to control the spatial structure of polarization and to engineer the focusing field.

  12. The multi-line slope method for measuring the effective magnetic field of cool stars: an application to the solar-like cycle of ɛ Eri

    NASA Astrophysics Data System (ADS)

    Scalia, C.; Leone, F.; Gangi, M.; Giarrusso, M.; Stift, M. J.

    2017-12-01

    One method for the determination of integrated longitudinal stellar fields from low-resolution spectra is the so-called slope method, which is based on the regression of the Stokes V signal against the first derivative of Stokes I. Here we investigate the possibility of extending this technique to measure the magnetic fields of cool stars from high-resolution spectra. For this purpose we developed a multi-line modification to the slope method, called the multi-line slope method. We tested this technique by analysing synthetic spectra computed with the COSSAM code and real observations obtained with the high-resolution spectropolarimeters Narval, HARPSpol and the Catania Astrophysical Observatory Spectropolarimeter (CAOS). We show that the multi-line slope method is a fast alternative to the least squares deconvolution technique for the measurement of the effective magnetic fields of cool stars. Using a Fourier transform on the effective magnetic field variations of the star ε Eri, we find that the long-term periodicity of the field corresponds to the 2.95-yr period of the stellar dynamo, revealed by the variation of the activity index.
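
    A toy version of the slope-method regression under the weak-field approximation, V(λ) ≈ -k g_eff λ0² B dI/dλ with k ≈ 4.67e-13 (λ in Angstrom, B in gauss), is sketched below; the spectra are synthetic, and real data would first require continuum normalisation and careful line selection.

      import numpy as np

      k = 4.67e-13
      lam0, g_eff, B_true = 6000.0, 1.2, 150.0          # Angstrom, effective Lande factor, gauss

      # Synthetic Stokes I: a few Gaussian absorption lines over a flat continuum
      rng = np.random.default_rng(0)
      lam = np.linspace(5990.0, 6010.0, 4000)
      I = np.ones_like(lam)
      for c, depth, width in [(5993.0, 0.4, 0.08), (5999.5, 0.6, 0.06), (6006.0, 0.3, 0.1)]:
          I -= depth * np.exp(-((lam - c) / width) ** 2)

      dI_dlam = np.gradient(I, lam)
      V = -k * g_eff * lam0 ** 2 * B_true * dI_dlam + 1e-4 * rng.normal(size=lam.size)

      # Slope method: regress V against dI/dlambda over all lines simultaneously
      slope = np.polyfit(dI_dlam, V, 1)[0]
      B_est = -slope / (k * g_eff * lam0 ** 2)
      print(f"recovered effective field: {B_est:.1f} G (true {B_true} G)")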

  13. Optical proximity correction (OPC) in near-field lithography with pixel-based field sectioning time modulation

    NASA Astrophysics Data System (ADS)

    Oh, Seonghyeon; Han, Dandan; Shim, Hyeon Bo; Hahn, Jae W.

    2018-01-01

    Subwavelength features have been successfully demonstrated in near-field lithography. In this study, the point spread function (PSF) of a near-field beam spot from a plasmonic ridge nanoaperture is discussed with regard to the complex decaying characteristic of a non-propagating wave and the asymmetry of the field distribution for pattern design. We relaxed the shape complexity of the field distribution with pixel-based optical proximity correction (OPC) for simplifying the pattern image distortion. To enhance the pattern fidelity for a variety of arbitrary patterns, field-sectioning structures are formulated via convolutions with a time-modulation function and a transient PSF along the near-field dominant direction. The sharpness of corners and edges, and line shortening can be improved by modifying the original target pattern shape using the proposed approach by considering both the pattern geometry and directionality of the field decay for OPC in near-field lithography.

  14. Optical proximity correction (OPC) in near-field lithography with pixel-based field sectioning time modulation.

    PubMed

    Oh, Seonghyeon; Han, Dandan; Shim, Hyeon Bo; Hahn, Jae W

    2018-01-26

    Subwavelength features have been successfully demonstrated in near-field lithography. In this study, the point spread function (PSF) of a near-field beam spot from a plasmonic ridge nanoaperture is discussed with regard to the complex decaying characteristic of a non-propagating wave and the asymmetry of the field distribution for pattern design. We relaxed the shape complexity of the field distribution with pixel-based optical proximity correction (OPC) for simplifying the pattern image distortion. To enhance the pattern fidelity for a variety of arbitrary patterns, field-sectioning structures are formulated via convolutions with a time-modulation function and a transient PSF along the near-field dominant direction. The sharpness of corners and edges, and line shortening can be improved by modifying the original target pattern shape using the proposed approach by considering both the pattern geometry and directionality of the field decay for OPC in near-field lithography.

  15. Membership determination of open clusters based on a spectral clustering method

    NASA Astrophysics Data System (ADS)

    Gao, Xin-Hua

    2018-06-01

    We present a spectral clustering (SC) method aimed at segregating reliable members of open clusters in multi-dimensional space. The SC method is a non-parametric clustering technique that performs cluster division using eigenvectors of the similarity matrix; no prior knowledge of the clusters is required. This method is more flexible in dealing with multi-dimensional data compared to other methods of membership determination. We use this method to segregate the cluster members of five open clusters (Hyades, Coma Ber, Pleiades, Praesepe, and NGC 188) in five-dimensional space; fairly clean cluster members are obtained. We find that the SC method can capture a small number of cluster members (weak signal) from a large number of field stars (heavy noise). Based on these cluster members, we compute the mean proper motions and distances for the Hyades, Coma Ber, Pleiades, and Praesepe clusters, and our results are in general quite consistent with the results derived by other authors. The test results indicate that the SC method is highly suitable for segregating cluster members of open clusters based on high-precision multi-dimensional astrometric data such as Gaia data.
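
    A hedged sketch of eigenvector-based segregation with scikit-learn's SpectralClustering on a synthetic five-dimensional astrometric sample (proper motions, parallax, position) is given below; it illustrates the idea rather than the authors' exact pipeline or Gaia data.

      import numpy as np
      from sklearn.cluster import SpectralClustering
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)
      # 60 "cluster" members: tight in proper motion and parallax; 600 field stars: diffuse
      members = rng.normal([20.0, -45.0, 7.5, 130.0, 25.0], [0.5, 0.5, 0.2, 0.3, 0.3], (60, 5))
      field = rng.normal([0.0, 0.0, 2.0, 130.0, 25.0], [15.0, 15.0, 2.0, 3.0, 3.0], (600, 5))
      X = StandardScaler().fit_transform(np.vstack([members, field]))

      labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                                  n_neighbors=15, random_state=0).fit_predict(X)
      counts = np.bincount(labels[:60], minlength=2)    # how the true members are distributed
      print(f"members recovered in the dominant cluster: {counts.max()} / 60")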

  16. A new signal restoration method based on deconvolution of the Point Spread Function (PSF) for the Flat-Field Holographic Concave Grating UV spectrometer system

    NASA Astrophysics Data System (ADS)

    Dai, Honglin; Luo, Yongdao

    2013-12-01

    In recent years, with the development of the Flat-Field Holographic Concave Grating, such gratings have been adopted by all kinds of UV spectrometers. By means of a single optical surface, the Flat-Field Holographic Concave Grating performs both dispersion and imaging, which makes the UV spectrometer system design quite compact. However, the calibration of the Flat-Field Holographic Concave Grating is very difficult, and various factors make its imaging quality difficult to guarantee, so the spectrum signal must be restored before use. Guided by the theory of signals and systems, and after a series of experiments, we found that our UV spectrometer system is a Linear Space-Variant System. This means that the PSF of every pixel of the system, which contains thousands of pixels, would have to be measured, which is obviously a large amount of work. To deal with this problem, we propose a novel signal restoration method. This method divides the system into several Linear Space-Invariant subsystems and then performs signal restoration with the corresponding PSFs. Our experiments show that this method is effective and inexpensive.
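
    The divide-and-restore idea can be sketched as follows: the detector axis is split into a few sections, each treated as locally space-invariant with its own PSF and restored with a simple FFT-based Wiener filter; the PSFs and spectrum below are synthetic stand-ins for measured quantities.

      import numpy as np

      def wiener_deconvolve(signal, psf, nsr=1e-3):
          """1-D Wiener deconvolution with an assumed noise-to-signal ratio nsr."""
          H = np.fft.fft(psf, n=signal.size)
          G = np.conj(H) / (np.abs(H) ** 2 + nsr)
          return np.real(np.fft.ifft(np.fft.fft(signal) * G))

      def gaussian_psf(n, sigma):
          x = np.arange(n) - n // 2
          p = np.exp(-x ** 2 / (2.0 * sigma ** 2))
          return np.fft.ifftshift(p / p.sum())          # centred at sample 0 for the FFT

      n_pix, n_sections = 900, 3
      true = np.zeros(n_pix)
      true[[150, 420, 700]] = [1.0, 0.7, 0.5]           # three emission lines

      sigmas = [2.0, 4.0, 7.0]                          # blur broadens towards the edge of the flat field
      measured = np.zeros(n_pix)
      restored = np.zeros(n_pix)
      edges = np.linspace(0, n_pix, n_sections + 1).astype(int)
      for (a, b), s in zip(zip(edges[:-1], edges[1:]), sigmas):
          psf = gaussian_psf(b - a, s)
          blurred = np.real(np.fft.ifft(np.fft.fft(true[a:b]) * np.fft.fft(psf)))
          measured[a:b] = blurred
          restored[a:b] = wiener_deconvolve(blurred, psf)   # section-wise restoration
      print(f"line at pixel 420: measured peak {measured[420]:.2f}, restored peak {restored[420]:.2f}")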

  17. Two-component relativistic coupled-cluster methods using mean-field spin-orbit integrals

    NASA Astrophysics Data System (ADS)

    Liu, Junzi; Shen, Yue; Asthana, Ayush; Cheng, Lan

    2018-01-01

    A novel implementation of the two-component spin-orbit (SO) coupled-cluster singles and doubles (CCSD) method and the CCSD augmented with the perturbative inclusion of triple excitations [CCSD(T)] method using mean-field SO integrals is reported. The new formulation of SO-CCSD(T) features an atomic-orbital-based algorithm for the particle-particle ladder term in the CCSD equation, which not only removes the computational bottleneck associated with the large molecular-orbital integral file but also accelerates the evaluation of the particle-particle ladder term by around a factor of 4 by taking advantage of the spin-free nature of the instantaneous electron-electron Coulomb interaction. Benchmark calculations of the SO splittings for the thallium atom and a set of diatomic 2Π radicals as well as of the bond lengths and harmonic frequencies for a set of closed-shell diatomic molecules are presented. The basis-set and core-correlation effects in the calculations of these properties have been carefully analyzed.

  18. Section summary: Ground-based field measurements

    Treesearch

    Nophea Sasaki

    2013-01-01

    Although deforestation has been the main focus of international debate in REDD+, forest degradation could result in even greater carbon emissions because it can take place in any accessible forest. Accounting for emission factors requires the use of a stock-change or gain-loss approach, depending on the forests in question. Ground-based field measurements are a...

  19. New variable selection methods for zero-inflated count data with applications to the substance abuse field

    PubMed Central

    Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming

    2011-01-01

    Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences that arise from ignoring zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, and exact fit, and the lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field, so that our findings generalize readily to real settings. Applications of the methodology are demonstrated by empirical analyses of data from a well-known alcohol study. PMID:21563207
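    The penalized one-step SCAD estimator studied in the paper is not available off the shelf, but the underlying zero-inflated Poisson model can be fit directly with statsmodels. The simulated data, predictor count, and inflation specification below are illustrative assumptions used only to show the base model being selected over.

      import numpy as np
      import statsmodels.api as sm
      from statsmodels.discrete.count_model import ZeroInflatedPoisson

      rng = np.random.default_rng(0)
      n, p = 500, 4
      X = rng.normal(size=(n, p))                              # candidate predictors
      lam = np.exp(0.3 + X @ np.array([0.5, 0.0, -0.4, 0.0]))  # two truly active predictors
      structural_zero = rng.random(n) < 0.3                    # ~30% excess zeros
      y = np.where(structural_zero, 0, rng.poisson(lam))

      exog = sm.add_constant(X)
      model = ZeroInflatedPoisson(y, exog, exog_infl=np.ones((n, 1)), inflation="logit")
      result = model.fit(disp=False)
      print(result.summary())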

  20. Interlaminar Stresses by Refined Beam Theories and the Sinc Method Based on Interpolation of Highest Derivative

    NASA Technical Reports Server (NTRS)

    Slemp, Wesley C. H.; Kapania, Rakesh K.; Tessler, Alexander

    2010-01-01

    Computation of interlaminar stresses from the higher-order shear and normal deformable beam theory and the refined zigzag theory was performed using the Sinc method based on Interpolation of Highest Derivative. The Sinc method based on Interpolation of Highest Derivative was proposed as an efficient method for determining through-the-thickness variations of interlaminar stresses from one- and two-dimensional analyses by integration of the equilibrium equations of three-dimensional elasticity. However, the use of traditional equivalent single-layer theories often results in inaccuracies near the boundaries and when the laminae have extremely large differences in material properties. Interlaminar stresses in symmetric cross-ply laminated beams were obtained by solving the higher-order shear and normal deformable beam theory and the refined zigzag theory with the Sinc method based on Interpolation of Highest Derivative. Interlaminar stresses and bending stresses from the present approach were compared with a detailed finite element solution obtained by ABAQUS/Standard. The results illustrate the ease with which the Sinc method based on Interpolation of Highest Derivative can be used to obtain the through-the-thickness distributions of interlaminar stresses from the beam theories. Moreover, the results indicate that the refined zigzag theory is a substantial improvement over the Timoshenko beam theory due to its piecewise continuous displacement field, which more accurately represents interlaminar discontinuities in the strain field. The higher-order shear and normal deformable beam theory more accurately captures the interlaminar stresses at the ends of the beam because it allows transverse normal strain. However, the continuous nature of its displacement field requires a large number of monomial terms before the interlaminar stresses are computed as accurately as with the refined zigzag theory.
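    The post-processing step described here, recovering transverse shear stress by integrating the three-dimensional equilibrium equation d(sigma_xx)/dx + d(sigma_xz)/dz = 0 through the thickness, can be sketched with simple finite differences, as below. The grid, the traction-free bottom surface, and the finite-difference derivative are assumptions for illustration; the paper uses beam-theory stress fields and Sinc-based differentiation rather than this scheme.

      import numpy as np

      def transverse_shear_from_equilibrium(sigma_xx, x, z):
          """Recover sigma_xz(x, z) by integrating d(sigma_xx)/dx through the
          thickness, with sigma_xz = 0 at the bottom free surface.
          sigma_xx has shape (nx, nz); x and z are grid coordinates."""
          dsxx_dx = np.gradient(sigma_xx, x, axis=0)        # in-plane stress gradient
          # Trapezoidal integration in z: sigma_xz(z) = -∫ d(sigma_xx)/dx dz'
          increments = 0.5 * (dsxx_dx[:, 1:] + dsxx_dx[:, :-1]) * np.diff(z)
          sigma_xz = -np.concatenate(
              [np.zeros((sigma_xx.shape[0], 1)), np.cumsum(increments, axis=1)],
              axis=1)
          return sigma_xz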