Sample records for method requires accurate

  1. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

    Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
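    The abstract's GMRES-ILU(0) remark is concrete enough to sketch: at each physical time step the artificial compressibility method must solve a large sparse linear system quickly, and an incomplete LU factorization preconditions GMRES to that end. A minimal stand-in (not the authors' solver; the Poisson-like matrix and SciPy's spilu, which only approximates ILU(0) at low fill, are assumptions for illustration):

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 200
    # Stand-in sparse system: a 1-D Poisson-like operator (illustrative only).
    A = sp.diags([-1.0, 2.001, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    ilu = spla.spilu(A, fill_factor=1)           # low-fill ILU, approximating ILU(0)
    M = spla.LinearOperator(A.shape, ilu.solve)  # preconditioner, M roughly A^-1

    x, info = spla.gmres(A, b, M=M)
    print("converged" if info == 0 else f"GMRES info={info}")
    ```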

  2. Integrated Life-Cycle Framework for Maintenance, Monitoring and Reliability of Naval Ship Structures

    DTIC Science & Technology

    2012-08-15

    number of times, a fast and accurate method for analyzing the ship hull is required. In order to obtain this required computational speed and accuracy... Naval Engineers Fleet Maintenance & Modernization Symposium (FMMS 2011) [8] and the Eleventh International Conference on Fast Sea Transportation (FAST)... probabilistic strength of the ship hull. First, a novel deterministic method for the fast and accurate calculation of the strength of the ship hull is

  3. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, even though more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages.

  4. Fluorescence polarization immunoassays for rapid, accurate, and sensitive determination of mycotoxins

    USDA-ARS's Scientific Manuscript database

    Analytical methods for the determination of mycotoxins in foods are commonly based on chromatographic techniques (GC, HPLC or LC-MS). Although these methods permit a sensitive and accurate determination of the analyte, they require skilled personnel and are time-consuming, expensive, and unsuitable ...

  5. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    PubMed Central

    Huang, Lei

    2015-01-01

    To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments, and time-varying estimators are used to obtain the mean and variance of the unknown observation noise. Using the robust Kalman filter, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied in modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
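    A minimal sketch of the core idea, with several simplifications that are ours rather than the paper's: the AR coefficients are placed in the Kalman state, the observation row is built from lagged samples, and the paper's robust time-varying observation-noise estimator is replaced by a fixed R:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_phi = np.array([0.6, -0.2])                 # AR(2) coefficients to recover
    y = np.zeros(500)
    for k in range(2, len(y)):
        y[k] = true_phi @ y[k-2:k][::-1] + rng.normal(scale=0.1)

    x = np.zeros(2)                                  # state: [phi1, phi2]
    P = np.eye(2)
    Q, R = 1e-8 * np.eye(2), 0.1**2                  # fixed noise statistics (assumed)
    for k in range(2, len(y)):
        H = y[k-2:k][::-1].reshape(1, 2)             # observation row: [y[k-1], y[k-2]]
        P = P + Q                                    # parameters assumed near-constant
        S = H @ P @ H.T + R
        K = P @ H.T / S
        x = x + (K * (y[k] - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    print("estimated AR coefficients:", x)
    ```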

  6. Accurate mass measurement by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry. I. Measurement of positive radical ions using porphyrin standard reference materials.

    PubMed

    Griffiths, Nia W; Wyatt, Mark F; Kean, Suzanna D; Graham, Andrew E; Stein, Bridget K; Brenton, A Gareth

    2010-06-15

    A method for the accurate mass measurement of positive radical ions by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOFMS) is described. Initial use of a conjugated oligomeric calibration material was rejected in favour of a series of meso-tetraalkyl/tetraalkylaryl-functionalised porphyrins, from which the two calibrants required for a particular accurate mass measurement were chosen. While all measurements of monoisotopic species were within ±5 ppm, and the method was rigorously validated using chemometrics, mean values of five measurements were used for extra confidence in the generation of potential elemental formulae. Potential difficulties encountered when measuring compounds containing multi-isotopic elements are discussed, where the monoisotopic peak is no longer the lowest mass peak, and a simple mass-correction solution can be applied. The method requires no significant expertise to implement, but care and attention are required to obtain valid measurements. The method is operationally simple and will prove useful to the analytical chemistry community. Copyright (c) 2010 John Wiley & Sons, Ltd.

  7. An instrument for rapid, accurate determination of fuel moisture content

    Treesearch

    Stephen S. Sackett

    1980-01-01

    Moisture contents of dead and living fuels are key variables in fire behavior. Accurate, real-time fuel moisture data are required for prescribed burning and wildfire behavior predictions. The convection oven method has become the standard for direct fuel moisture content determination. Efforts to quantify fuel moisture through indirect methods have not been...

  8. Methods to estimate irrigated reference crop evapotranspiration - a review.

    PubMed

    Kumar, R; Jat, M K; Shankar, V

    2012-01-01

    Efficient water management of crops requires accurate irrigation scheduling which, in turn, requires accurate measurement of crop water requirements. Irrigation is applied to replenish depleted moisture for optimum plant growth. Reference evapotranspiration plays an important role in the determination of water requirements for crops and in irrigation scheduling. Various models/approaches, ranging from empirical to physically based distributed models, are available for the estimation of reference evapotranspiration. Mathematical models are useful tools to estimate the evapotranspiration and water requirement of crops, which is essential information for designing or choosing the best water management practices. In this paper the most commonly used models/approaches, which are suitable for the estimation of daily water requirements for agricultural crops grown in different agro-climatic regions, are reviewed. Further, an effort has been made to compare the accuracy of various widely used methods under different climatic conditions.
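    As a concrete instance of the empirical end of the model spectrum reviewed here, the temperature-based Hargreaves-Samani equation is commonly included in such comparisons; a minimal sketch with illustrative inputs (the specific values are not from the paper):

    ```python
    def hargreaves_et0(t_mean, t_max, t_min, ra):
        """Reference evapotranspiration ET0 in mm/day (Hargreaves-Samani).
        ra: extraterrestrial radiation expressed in mm/day of evaporation."""
        return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

    # Illustrative mid-summer values: about 5.5 mm/day.
    print(hargreaves_et0(t_mean=25.0, t_max=32.0, t_min=18.0, ra=15.0))
    ```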

  9. Evaluation of automated threshold selection methods for accurately sizing microscopic fluorescent cells by image analysis.

    PubMed Central

    Sieracki, M E; Reichenbach, S E; Webb, K L

    1989-01-01

    The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
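    The winning criterion is easy to state in code: compute the intensity profile across the object edge, differentiate twice, and threshold at the intensity where the second derivative is minimal. A minimal sketch on a synthetic blurred edge (the profile shape and smoothing width are assumptions, not the study's data):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    x = np.arange(100, dtype=float)
    profile = gaussian_filter1d((x > 50).astype(float), sigma=3)  # blurred edge

    d2 = np.gradient(np.gradient(profile))
    edge_index = np.argmin(d2)        # second-derivative minimum marks the edge
    threshold = profile[edge_index]
    print(edge_index, round(threshold, 3))
    ```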

  10. Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.

    PubMed

    Morrison, Shane A; Luttbeg, Barney; Belden, Jason B

    2016-11-01

    Most current-use pesticides have short half-lives in the water column, and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate, as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require many fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies and a range of contaminant water half-lives (t50 = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% of the true 96-h TWA and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling required only two samples to prevent highly inaccurate results and to obtain median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are the more accurate method for monitoring contaminants with short water half-lives due to the reduced frequency of extreme values, especially with uncertainties around the timing of pulsed events. However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
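    The arithmetic behind the comparison can be sketched directly: simulate a pulse decaying with a 0.5-d water half-life, then compare the true 96-h TWA (which an ideal integrative sampler returns by construction) against an estimate from a few grab samples. An illustrative model, not the authors' simulation code:

    ```python
    import numpy as np

    half_life_d = 0.5
    k = np.log(2.0) / (half_life_d * 24.0)      # first-order decay rate, 1/h
    t = np.linspace(0.0, 96.0, 10_000)          # hours
    c = np.exp(-k * t)                          # concentration, C0 = 1

    true_twa = c.mean()                         # dense uniform grid ~ integral / 96 h
    grab = np.interp([0.0, 48.0, 96.0], t, c)   # three discrete grab samples
    print(f"true 96-h TWA (= ideal integrative sample): {true_twa:.3f}")
    print(f"estimate from 3 grab samples: {grab.mean():.3f}")
    ```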

  11. A comparative study on different methods of automatic mesh generation of human femurs.

    PubMed

    Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A

    1998-01-01

    The aim of this study was to evaluate comparatively five methods for automating mesh generation (AMG) when used to mesh a human femur. The AMG methods considered included: mapped mesh, which provides hexahedral elements through a direct mapping of the elements onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful to assess the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur") and the finite element analysis predictions were compared to experimental measurements. All methods were evaluated in terms of human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires a significant human effort but is very accurate and it allows a tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires a significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.

  12. A new Lagrangian random choice method for steady two-dimensional supersonic/hypersonic flow

    NASA Technical Reports Server (NTRS)

    Loh, C. Y.; Hui, W. H.

    1991-01-01

    Glimm's (1965) random choice method has been successfully applied to compute steady two-dimensional supersonic/hypersonic flow using a new Lagrangian formulation. The method is easy to program, fast to execute, yet it is very accurate and robust. It requires no grid generation, resolves slipline and shock discontinuities crisply, can handle boundary conditions most easily, and is applicable to hypersonic as well as supersonic flow. It represents an accurate and fast alternative to the existing Eulerian methods. Many computed examples are given.

  13. Robust and automated three-dimensional segmentation of densely packed cell nuclei in different biological specimens with Lines-of-Sight decomposition.

    PubMed

    Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C

    2015-06-08

    Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.

  14. Automated seed localization from CT datasets of the prostate.

    PubMed

    Brinkmann, D H; Kline, R W

    1998-09-01

    With the increasing utilization of permanent brachytherapy implants for treating carcinoma of the prostate, the importance of accurate post-treatment dose calculation also increases for assessing patient outcome and planning future treatments. An automatic method for seed localization of permanent brachytherapy implants, using CT datasets of the prostate, has been developed and tested on a phantom using an actual patient's planned seed distribution. This method was also compared with results from the three-film technique for three patient datasets. The automatic method is as accurate as or more accurate than the three-film technique for 1 mm, 3 mm, and 5 mm contiguous CT slices, and eliminates the inter- and intra-observer variability of the manual methods. The automated method improves the localization of brachytherapy seeds while reducing the time required for the user to input information, and is demonstrated to be less operator dependent, less time consuming, and potentially more accurate than the three-film technique.

  15. Serial Scanning and Registration of High Resolution Quantitative Computed Tomography Volume Scans for the Determination of Local Bone Density Changes

    NASA Technical Reports Server (NTRS)

    Whalen, Robert T.; Napel, Sandy; Yan, Chye H.

    1996-01-01

    Progress in development of the methods required to study bone remodeling as a function of time is reported. The following topics are presented: 'A New Methodology for Registration Accuracy Evaluation', 'Registration of Serial Skeletal Images for Accurately Measuring Changes in Bone Density', and 'Precise and Accurate Gold Standard for Multimodality and Serial Registration Method Evaluations.'

  16. Quantitative Hydrocarbon Energies from the PMO Method.

    ERIC Educational Resources Information Center

    Cooper, Charles F.

    1979-01-01

    Details a procedure for accurately calculating the quantum mechanical energies of hydrocarbons using the perturbational molecular orbital (PMO) method, which does not require the use of a computer. (BT)

  17. Physiological motion modeling for organ-mounted robots.

    PubMed

    Wood, Nathan A; Schwartzman, David; Zenati, Marco A; Riviere, Cameron N

    2017-12-01

    Organ-mounted robots passively compensate for heartbeat and respiratory motion. In model-guided procedures, this motion can be a significant source of information that can be used to aid in localization or to add dynamic information to static preoperative maps. Models for estimating periodic motion are proposed for both position and orientation. These models are then tested on animal data and optimal orders are identified. Finally, methods for online identification are demonstrated. Models using exponential coordinates and Euler-angle parameterizations are as accurate as models using quaternion representations, yet require a quarter fewer parameters. Models which incorporate more than four cardiac or three respiration harmonics are no more accurate. Finally, online methods estimate model parameters as accurately as offline methods within three respiration cycles. These methods provide a complete framework for accurately modelling the periodic deformation of points anywhere on the surface of the heart in a closed chest. Copyright © 2017 John Wiley & Sons, Ltd.
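    A minimal sketch of the kind of periodic model described, fit by ordinary linear least squares (the sampling rate, fundamental frequencies, and synthetic signal are illustrative assumptions; the paper reports that four cardiac and three respiration harmonics suffice):

    ```python
    import numpy as np

    def design(t, f_card, f_resp, n_card=4, n_resp=3):
        """Design matrix: constant plus cardiac and respiratory harmonics."""
        cols = [np.ones_like(t)]
        for f, n in ((f_card, n_card), (f_resp, n_resp)):
            for h in range(1, n + 1):
                cols += [np.cos(2*np.pi*h*f*t), np.sin(2*np.pi*h*f*t)]
        return np.column_stack(cols)

    rng = np.random.default_rng(1)
    t = np.linspace(0, 30, 3000)                    # 30 s sampled at 100 Hz
    y = (2*np.sin(2*np.pi*1.2*t)                    # cardiac component
         + 5*np.sin(2*np.pi*0.25*t)                 # respiratory component
         + rng.normal(0, 0.1, t.size))

    A = design(t, f_card=1.2, f_resp=0.25)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("residual RMS:", np.sqrt(np.mean((A @ coeffs - y) ** 2)))
    ```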

  18. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  19. Learning Multiple Band-Pass Filters for Sleep Stage Estimation: Towards Care Support for Aged Persons

    NASA Astrophysics Data System (ADS)

    Takadama, Keiki; Hirose, Kazuyuki; Matsushima, Hiroyasu; Hattori, Kiyohiko; Nakajima, Nobuo

    This paper proposes a sleep stage estimation method that can provide an accurate estimation for each person without attaching any devices to the body. In particular, our method learns appropriate multiple band-pass filters to extract the specific wave pattern of the heartbeat, which is required to estimate the sleep stage. For an accurate estimation, this paper employs a Learning Classifier System (LCS) as the data-mining technique and extends it to estimate the sleep stage. Extensive experiments on five subjects in different physical conditions confirm the following implications: (1) the proposed method provides more accurate sleep stage estimation than the conventional method, and (2) the sleep stage estimation calculated by the proposed method is robust regardless of the physical condition of the subject.
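    A single band-pass filter of the kind the method would learn can be sketched with a fixed Butterworth design (the pass band, filter order, and sampling rate are assumptions; the paper's contribution is learning multiple such bands with an LCS rather than fixing one):

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 100.0                                     # sampling rate in Hz (assumed)
    # Pass band around typical resting heart rates (0.8-2.0 Hz, i.e. 48-120 bpm).
    b, a = butter(4, [0.8, 2.0], btype="bandpass", fs=fs)

    t = np.arange(0, 10, 1 / fs)
    signal = np.sin(2*np.pi*1.1*t) + 0.5*np.sin(2*np.pi*0.25*t)  # heart + respiration
    heart_band = filtfilt(b, a, signal)            # zero-phase band-pass filtering
    ```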

  20. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    NASA Astrophysics Data System (ADS)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
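    The symmetric rank-one (SR1) update named in the abstract is standard and compact; a minimal sketch (the skipping tolerance is a common safeguard, not a detail taken from the article):

    ```python
    import numpy as np

    def sr1_update(B, s, y, eps=1e-8):
        """SR1 update of Hessian approximation B from step s and gradient change y."""
        r = y - B @ s
        denom = r @ s
        # Standard safeguard: skip the update when the denominator is tiny.
        if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
            return B
        return B + np.outer(r, r) / denom

    B = np.eye(2)
    B = sr1_update(B, s=np.array([0.1, 0.0]), y=np.array([0.25, 0.05]))
    print(B)
    ```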

  1. Implement Method for Automated Testing of Markov Chain Convergence into INVERSE for ORNL12-RS-108J: Advanced Multi-Dimensional Forward and Inverse Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bledsoe, Keith C.

    2015-04-01

    The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory’s INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE: the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus the time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
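    The Gelman-Rubin metric itself is standard and easy to sketch: compare within-chain and between-chain variance across parallel chains and stop when the potential scale reduction factor R-hat approaches 1. A textbook formulation, not the INVERSE implementation:

    ```python
    import numpy as np

    def gelman_rubin(chains):
        """Potential scale reduction factor R-hat.
        chains: array of shape (m, n) = m parallel chains of n samples each."""
        m, n = chains.shape
        means = chains.mean(axis=1)
        W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
        B = n * means.var(ddof=1)                  # between-chain variance
        var_hat = (n - 1) / n * W + B / n          # pooled posterior variance estimate
        return np.sqrt(var_hat / W)

    rng = np.random.default_rng(2)
    print(gelman_rubin(rng.normal(size=(4, 1000))))  # ~1 for converged chains
    ```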

  2. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring that all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, which believes it discredits abstinence-only material.

  3. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for the treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on the quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
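    The TEW estimate is simple enough to sketch: counts in two narrow windows flanking the photopeak are scaled to the photopeak width to approximate the scatter under the peak. A common formulation with illustrative counts and window widths, not the paper's acquisition settings:

    ```python
    def tew_primary(c_peak, c_low, c_high, w_peak, w_low, w_high):
        """Scatter-corrected photopeak counts via the triple-energy-window method."""
        scatter = (c_low / w_low + c_high / w_high) * w_peak / 2.0
        return c_peak - scatter

    # Window widths in keV; counts are illustrative.
    print(tew_primary(c_peak=10_000, c_low=1_200, c_high=300,
                      w_peak=60.0, w_low=6.0, w_high=6.0))
    ```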

  4. TLE uncertainty estimation using robust weighted differencing

    NASA Astrophysics Data System (ADS)

    Geul, Jacco; Mooij, Erwin; Noomen, Ron

    2017-05-01

    Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).
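    The robust-regression ingredient can be sketched with a Huber loss, which downweights outlying TLE differences so no manual filtering is needed (an assumed simplification; the paper's error model and weighting are more detailed):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)
    dt = np.linspace(0, 5, 200)                         # days since TLE epoch
    err = 0.3 + 0.8 * dt + rng.normal(0, 0.1, dt.size)  # position difference, km
    err[::25] += 5.0                                    # occasional outliers

    residual = lambda p: p[0] + p[1] * dt - err
    fit = least_squares(residual, x0=[0.0, 0.0], loss="huber", f_scale=0.3)
    print("initial uncertainty %.2f km, growth %.2f km/day" % tuple(fit.x))
    ```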

  5. A New Cluster Analysis-Marker-Controlled Watershed Method for Separating Particles of Granular Soils.

    PubMed

    Alam, Md Ferdous; Haque, Asadul

    2017-10-18

    An accurate determination of the particle-level fabric of granular soils from tomography data requires the maximum correct separation of particles. The popular marker-controlled watershed separation method is widely used to separate particles. However, the watershed method alone is not capable of producing the maximum separation of particles when the soil has been subjected to boundary stresses that lead to crushing of particles. In this paper, a new separation method, named the Monash Particle Separation Method (MPSM), is introduced. The new method automatically determines the optimal contrast coefficient based on a cluster evaluation framework to produce the most accurate separation outcomes. Finally, the particles which could not be separated by the optimal contrast coefficient were separated by integrating cuboid markers generated from clustering by Gaussian mixture models into the routine watershed method. The MPSM was validated on a uniformly graded sand volume subjected to one-dimensional compression loading up to 32 MPa. It was demonstrated that the MPSM is capable of producing the best possible separation of particles required for fabric analysis.

  6. PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences.

    PubMed

    Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong; Warnow, Tandy

    2015-05-01

    We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate--slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory.

  7. Accurate quantification of magnetic particle properties by intra-pair magnetophoresis for nanobiotechnology

    NASA Astrophysics Data System (ADS)

    van Reenen, Alexander; Gao, Yang; Bos, Arjen H.; de Jong, Arthur M.; Hulsen, Martien A.; den Toonder, Jaap M. J.; Prins, Menno W. J.

    2013-07-01

    The application of magnetic particles in biomedical research and in-vitro diagnostics requires accurate characterization of their magnetic properties, with single-particle resolution and good statistics. Here, we report intra-pair magnetophoresis as a method to accurately quantify the field-dependent magnetic moments of magnetic particles and to rapidly generate histograms of the magnetic moments with good statistics. We demonstrate our method with particles of different sizes and from different sources, with a measurement precision of a few percent. We expect that intra-pair magnetophoresis will be a powerful tool for the characterization and improvement of particles for the upcoming field of particle-based nanobiotechnology.

  8. Determination of molybdenum in soils and rocks: A geochemical semimicro field method

    USGS Publications Warehouse

    Ward, F.N.

    1951-01-01

    Reconnaissance work in geochemical prospecting requires a simple, rapid, and moderately accurate method for the determination of small amounts of molybdenum in soils and rocks. The useful range of the suggested procedure is from 1 to 32 p.p.m. of molybdenum, but the upper limit can be extended. Duplicate determinations on eight soil samples containing less than 10 p.p.m. of molybdenum agree within 1 p.p.m., and a comparison of field results with those obtained by a conventional laboratory procedure shows that the method is sufficiently accurate for use in geochemical prospecting. The time required for analysis and the quantities of reagents needed have been decreased to provide essentially a "test tube" method for the determination of molybdenum in soils and rocks. With a minimum amount of skill, one analyst can make 30 molybdenum determinations in an 8-hour day.

  9. On the Development of Parameterized Linear Analytical Longitudinal Airship Models

    NASA Technical Reports Server (NTRS)

    Kulczycki, Eric A.; Johnson, Joseph R.; Bayard, David S.; Elfes, Alberto; Quadrelli, Marco B.

    2008-01-01

    In order to explore Titan, a moon of Saturn, airships must be able to traverse the atmosphere autonomously. To achieve this, an accurate model and accurate control of the vehicle must be developed so that it is understood how the airship will react to specific sets of control inputs. This paper explains how longitudinal aircraft stability derivatives can be used with airship parameters to create a linear model of the airship solely by combining geometric and aerodynamic airship data. This method does not require system identification of the vehicle. All of the required data can be derived from computational fluid dynamics and wind tunnel testing. This alternate method of developing dynamic airship models will reduce time and cost. Results are compared to other stable airship dynamic models to validate the methods. Future work will address a lateral airship model using the same methods.

  10. Rapid, cost-effective and accurate quantification of Yucca schidigera Roezl. steroidal saponins using HPLC-ELSD method.

    PubMed

    Tenon, Mathieu; Feuillère, Nicolas; Roller, Marc; Birtić, Simona

    2017-04-15

    Yucca GRAS-labelled saponins have been, and are increasingly being, used in the food/feed, pharmaceutical and cosmetic industries. Existing techniques presently used for Yucca steroidal saponin quantification remain either inaccurate and misleading, or accurate but time-consuming and cost-prohibitive. The method reported here addresses all of the above challenges. The HPLC/ELSD technique is an accurate and reliable method that yields results of appropriate repeatability and reproducibility. This method does not over- or underestimate levels of steroidal saponins. The HPLC/ELSD method does not require a pure standard for each and every saponin to quantify the group of steroidal saponins. The method is a time- and cost-effective technique that is suitable for routine industrial analyses. HPLC/ELSD methods yield a saponin fingerprint specific to the plant species. As the method is capable of distinguishing saponin profiles from taxonomically distant species, it can unravel plant adulteration issues. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  11. Procedure for the systematic orientation of digitised cranial models. Design and validation.

    PubMed

    Bailo, M; Baena, S; Marín, J J; Arredondo, J M; Auría, J M; Sánchez, B; Tardío, E; Falcón, L

    2015-12-01

    Comparison of bony pieces requires that they be oriented systematically to ensure that homologous regions are compared. Few orientation methods are highly accurate; this is particularly true for methods applied to three-dimensional models obtained by surface scanning, a technique whose special features make it a powerful tool in forensic contexts. The aim of this study was to develop and evaluate a systematic, assisted orientation method for aligning three-dimensional cranial models relative to the Frankfurt Plane, which would produce accurate orientations independent of operator and anthropological expertise. The study sample comprised four crania of known age and sex. All the crania were scanned and reconstructed using an Eva Artec™ portable 3D surface scanner and, subsequently, the positions of certain characteristic landmarks were determined by three different operators using the Rhinoceros 3D surface modelling software. Intra-observer analysis showed a tendency for orientation to be more accurate when using the assisted method than when using conventional manual orientation. Inter-observer analysis showed that experienced evaluators achieve results at least as accurate, if not more accurate, using the assisted method than those obtained using manual orientation, while inexperienced evaluators achieved more accurate orientation using the assisted method. The method tested is an innovative system capable of providing very precise, systematic and automated spatial orientations of virtual cranial models relative to standardised anatomical planes, independent of the operator and operator experience. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. 48 CFR 1615.406-2 - Certificate of accurate cost or pricing data for community-rated carriers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Section 1615.406-2, Federal Acquisition... Contracting Methods and Contract Types; Contracting by Negotiation; Contract Pricing. Certificate of accurate cost or pricing data for community-rated carriers. The contracting officer will require a carrier...

  13. [Automated procedure for volumetric measurement of metastases: estimation of tumor burden].

    PubMed

    Fabel, M; Bolte, H

    2008-09-01

    Cancer is a common and increasing disease worldwide. Therapy monitoring in oncologic patient care requires accurate and reliable measurement methods for evaluating the tumor burden. RECIST (response evaluation criteria in solid tumors) and WHO criteria are still the current standards for therapy response evaluation, with inherent disadvantages due to considerable interobserver variation in the manual diameter estimations. Volumetric analysis of lung, liver and lymph node metastases, for example, promises to be a more accurate, precise and objective method for tumor burden estimation.

  14. A colorimetric method for the determination of carboxyhaemoglobin over a wide range of concentrations

    PubMed Central

    Trinder, P.; Harper, F. E.

    1962-01-01

    A colorimetric technique for the determination of carboxyhaemoglobin in blood is described. Carbon monoxide released from blood in a standard Conway unit reacts with palladous chloride/arsenomolybdate solution to produce a blue colour. Using 0·5 to 2 ml. of blood, the method will estimate carboxyhaemoglobin accurately at levels from 0·1% to 100% of total haemoglobin and in the presence of other abnormal pigments. A number of methods are available for the determination of carboxyhaemoglobin; none is accurate below a concentration of 1·5 g. carboxyhaemoglobin per 100 ml. but for most clinical purposes this is not important. For forensic purposes and occasionally in clinical use, an accurate determination of carboxyhaemoglobin below 750 mg. per 100 ml. may be required and no really satisfactory method is at present available. Some time ago when it was important to know whether a person who was found dead in a burning house had died before or after the fire had started, we became interested in developing a method which would determine accurately carboxyhaemoglobin at levels of 750 mg. per 100 ml. PMID:13922505

  15. Rapid and accurate prediction of degradant formation rates in pharmaceutical formulations using high-performance liquid chromatography-mass spectrometry.

    PubMed

    Darrington, Richard T; Jiao, Jim

    2004-04-01

    Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
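    The initial-rate logic reduces to a linear fit and an extrapolation: regress early degradant levels measured by LC/MS against time at the storage condition, then project the time to the specification limit. All numbers below are illustrative, not the study's data:

    ```python
    import numpy as np

    weeks = np.array([0.0, 2.0, 4.0, 8.0, 12.0])
    degradant_pct = np.array([0.000, 0.008, 0.017, 0.031, 0.049])  # LC/MS results

    rate, intercept = np.polyfit(weeks, degradant_pct, 1)  # initial rate, %/week
    spec_limit_pct = 0.5                                   # assumed specification
    shelf_life_weeks = (spec_limit_pct - intercept) / rate
    print(f"projected shelf life: {shelf_life_weeks:.0f} weeks")
    ```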

  16. Segmentation of bone pixels from EROI Image using clustering method for bone age assessment

    NASA Astrophysics Data System (ADS)

    Bakthula, Rajitha; Agarwal, Suneeta

    2016-03-01

    The bone age of a human can be identified using carpal and epiphysis bones ossification, which is limited to teen age. The accurate age estimation depends on best separation of bone pixels and soft tissue pixels in the ROI image. The traditional approaches like canny, sobel, clustering, region growing and watershed can be applied, but these methods requires proper pre-processing and accurate initial seed point estimation to provide accurate results. Therefore this paper proposes new approach to segment the bone from soft tissue and background pixels. First pixels are enhanced using BPE and the edges are identified by HIPI. Later a K-Means clustering is applied for segmentation. The performance of the proposed approach has been evaluated and compared with the existing methods.

  17. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  18. A Fast and Effective Pyridine-Free Method for the Determination of Hydroxyl Value of Hydroxyl-Terminated Polybutadiene and Other Hydroxy Compounds

    NASA Astrophysics Data System (ADS)

    Alex, Ancy Smitha; Kumar, Vijendra; Sekkar, V.; Bandyopadhyay, G. G.

    2017-07-01

    Hydroxyl-terminated polybutadiene (HTPB) is the workhorse propellant binder for launch vehicle and missile applications. Accurate determination of the hydroxyl value (OHV) of HTPB is crucial for tailoring the ultimate mechanical and ballistic properties of the derived propellant. This article describes a fast, effective, pyridine-free methodology based on acetic anhydride, N-methyl imidazole, and toluene for the determination of the OHV of nonpolar polymers like HTPB and of other hydroxyl compounds. This method gives accurate and reproducible results comparable to standard methods and is superior to existing methods in terms of user friendliness, efficiency, and time requirement.
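    The back-titration arithmetic underlying any acetylation-based OHV determination is standard; a minimal sketch with illustrative titre volumes (the paper's contribution is the pyridine-free reagent system, not this formula):

    ```python
    def hydroxyl_value(blank_ml, sample_ml, normality, sample_g):
        """OHV in mg KOH per g sample, from blank and sample titre volumes;
        56.1 is the molar mass of KOH in g/mol."""
        return (blank_ml - sample_ml) * normality * 56.1 / sample_g

    # Illustrative numbers: ~44.9 mg KOH/g, a typical range for HTPB.
    print(hydroxyl_value(blank_ml=25.0, sample_ml=21.8, normality=0.5, sample_g=2.0))
    ```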

  19. A photogrammetric technique for generation of an accurate multispectral optical flow dataset

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2017-06-01

    The availability of an accurate dataset is the key requirement for the successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets were developed in recent years and gave rise to many powerful algorithms. However, most of the datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with an accurate ground truth. The generation of an accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling or laser scanning. Such techniques either work only with synthetic optical flow or provide only a sparse ground truth optical flow. In this paper a new photogrammetric method for the generation of an accurate ground truth optical flow is proposed. The method combines the benefits of the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.

  20. Shape design sensitivity analysis using domain information

    NASA Technical Reports Server (NTRS)

    Seong, Hwal-Gyeong; Choi, Kyung K.

    1985-01-01

    A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.

  1. Forward and store telemedicine using Motion Pictures Expert Group: a novel approach to pediatric tele-echocardiography.

    PubMed

    Woodson, Kristina E; Sable, Craig A; Cross, Russell R; Pearson, Gail D; Martin, Gerard R

    2004-11-01

    Live transmission of echocardiograms over integrated services digital network lines is accurate and has led to improvements in the delivery of pediatric cardiology care. Permanent archiving of the live studies has not previously been reported. Specific obstacles to permanent storage of telemedicine files have included the ability to produce accurate images without a significant increase in storage requirements. We evaluated the accuracy of Motion Pictures Expert Group (MPEG) digitization of incoming video streams and assessed the storage requirements of these files for infants in a real-time pediatric tele-echocardiography program. All major cardiac diagnoses were correctly diagnosed by review of MPEG images. MPEG file size ranged from 11.1 to 182 MB (56.5 ± 29.9 MB). MPEG digitization during live neonatal telemedicine is accurate and provides an efficient method for storage. This modality has acceptable storage requirements; file sizes are comparable to other digital modalities.

  2. Quantitative comparison of in situ soil CO2 flux measurement methods

    Treesearch

    Jennifer D. Knoepp; James M. Vose

    2002-01-01

    Development of reliable regional or global carbon budgets requires accurate measurement of soil CO2 flux. We conducted laboratory and field studies to determine the accuracy and comparability of methods commonly used to measure in situ soil CO2 fluxes. Methods compared included CO2...

  3. Comparing three sampling techniques for estimating fine woody down dead biomass

    Treesearch

    Robert E. Keane; Kathy Gray

    2013-01-01

    Designing woody fuel sampling methods that quickly, accurately and efficiently assess biomass at relevant spatial scales requires extensive knowledge of each sampling method's strengths, weaknesses and tradeoffs. In this study, we compared various modifications of three common sampling methods (planar intercept, fixed-area microplot and photoload) for estimating...

  4. 14 CFR 417.203 - Compliance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... analysis method is based on accurate data and scientific principles and is statistically valid. The FAA... safety analysis must also meet the requirements for methods of analysis contained in appendices A and B... from an identical or similar launch if the analysis still applies to the later launch. (b) Method of...

  5. 14 CFR 417.203 - Compliance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... analysis method is based on accurate data and scientific principles and is statistically valid. The FAA... safety analysis must also meet the requirements for methods of analysis contained in appendices A and B... from an identical or similar launch if the analysis still applies to the later launch. (b) Method of...

  6. 14 CFR 417.203 - Compliance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... analysis method is based on accurate data and scientific principles and is statistically valid. The FAA... safety analysis must also meet the requirements for methods of analysis contained in appendices A and B... from an identical or similar launch if the analysis still applies to the later launch. (b) Method of...

  7. NEW TARGET AND CONTROL ASSAYS FOR QUANTITATIVE POLYMERASE CHAIN REACTION (QPCR) ANALYSIS OF ENTEROCOCCI IN WATER

    EPA Science Inventory

    Enterococci are frequently monitored in water samples as indicators of fecal pollution. Attention is now shifting from culture-based methods for enumerating these organisms to more rapid molecular methods such as QPCR. Accurate quantitative analyses by this method require highly...

  8. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods.

    PubMed

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P

    2016-03-24

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.

  9. Evaluating the accuracy of wear formulae for acetabular cup liners.

    PubMed

    Wu, James Shih-Shyn; Hsu, Shu-Ling; Chen, Jian-Horng

    2010-02-01

    This study proposes two methods for exploring the wear volume of a worn liner. The first method is a numerical method, in which SolidWorks software is used to create models of the worn out regions of liners at various wear directions and depths. The second method is an experimental one, in which a machining center is used to mill polyoxymethylene to manufacture worn and unworn liner models, then the volumes of the models are measured. The results show that the SolidWorks software is a good tool for presenting the wear pattern and volume of a worn liner. The formula provided by Ilchmann is the most suitable for computing liner volume loss, but is not accurate enough. This study suggests that a more accurate wear formula is required. This is crucial for accurate evaluation of the performance of hip components implanted in patients, as well as for designing new hip components.

  10. Approximating high-dimensional dynamics by barycentric coordinates with linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.

  11. Approximating high-dimensional dynamics by barycentric coordinates with linear programming.

    PubMed

    Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma

    2015-01-01

    The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
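    A minimal sketch of the central construction under our own simplifying assumptions: nonnegative barycentric weights summing to one are found by linear programming to reconstruct a query point from library points with minimal L1 error (the papers' delay embedding, neighbor selection, and prediction step are omitted):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(4)
    library = rng.normal(size=(20, 3))     # n points in d-dimensional phase space
    query = library[:4].mean(axis=0)       # lies in their convex hull by construction
    n, d = library.shape

    # Variables z = [w_1..w_n, e_1..e_d]; minimize sum(e) with |X^T w - q| <= e,
    # sum(w) = 1, and w, e >= 0.
    c = np.concatenate([np.zeros(n), np.ones(d)])
    A_ub = np.block([[library.T, -np.eye(d)], [-library.T, -np.eye(d)]])
    b_ub = np.concatenate([query, -query])
    A_eq = np.concatenate([np.ones(n), np.zeros(d)]).reshape(1, -1)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + d))
    weights = res.x[:n]
    print("L1 reconstruction error:", np.abs(library.T @ weights - query).sum())
    ```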

  12. A New Cluster Analysis-Marker-Controlled Watershed Method for Separating Particles of Granular Soils

    PubMed Central

    Alam, Md Ferdous

    2017-01-01

    An accurate determination of the particle-level fabric of granular soils from tomography data requires the maximum correct separation of particles. The popular marker-controlled watershed method is widely used to separate particles. However, the watershed method alone cannot produce the maximum separation of particles in soils subjected to boundary stresses that lead to particle crushing. In this paper, a new separation method, named the Monash Particle Separation Method (MPSM), is introduced. The new method automatically determines the optimal contrast coefficient based on a cluster evaluation framework to produce the most accurate separation outcomes. Finally, the particles which could not be separated with the optimal contrast coefficient were separated by integrating cuboid markers, generated from clustering by Gaussian mixture models, into the routine watershed method. The MPSM was validated on a uniformly graded sand volume subjected to one-dimensional compression loading up to 32 MPa. It was demonstrated that the MPSM is capable of producing the best possible separation of particles required for fabric analysis. PMID:29057823
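
    For context, the routine marker-controlled watershed pass that the MPSM builds on can be sketched in a few lines with scikit-image; the cluster-based choice of contrast coefficient and the cuboid-marker refinement described above are not reproduced here, and the `min_distance` parameter is illustrative.

```python
# A minimal sketch of routine marker-controlled watershed separation
# on a binary particle volume (2-D or 3-D array of 0/1).
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def separate_particles(binary_volume, min_distance=5):
    """Label touching particles by flooding the negated distance transform
    from local-maximum markers."""
    distance = ndi.distance_transform_edt(binary_volume)
    peaks = peak_local_max(distance, min_distance=min_distance,
                           labels=binary_volume.astype(int))
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary_volume)
```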

  13. Estimates of Power Plant NOx Emissions and Lifetimes from OMI NO2 Satellite Retrievals

    NASA Technical Reports Server (NTRS)

    de Foy, Benjamin; Lu, Zifeng; Streets, David G.; Lamsal, Lok N.; Duncan, Bryan N.

    2015-01-01

    Isolated power plants with well-characterized emissions serve as an ideal test case for methods that estimate emissions using satellite data. In this study, we evaluate the Exponentially-Modified Gaussian (EMG) method and the box model method based on mass balance for estimating known NOx emissions from satellite retrievals made by the Ozone Monitoring Instrument (OMI). We consider 29 power plants in the USA which have large NOx plumes that do not overlap with other sources and which have emissions data from the Continuous Emission Monitoring System (CEMS). This enables us to identify constraints required by the methods, such as which wind data to use and how to calculate background values. We found that the lifetimes estimated by the methods are too short to be representative of the chemical lifetime. Instead, we introduce a separate lifetime parameter to account for the discrepancy between estimates using real data and those that theory would predict. In terms of emissions, the EMG method required averages from multiple years to give accurate results, whereas the box model method gave accurate results for individual ozone seasons.
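
    As a rough illustration of the EMG step, one can fit an exponentially modified Gaussian plus a background to the NO2 line density as a function of downwind distance, then read off a burden and an e-folding lifetime. The function, variable names, and starting values below are assumptions made for the sketch, not the authors' exact formulation.

```python
# A minimal sketch of an EMG fit to a plume line density.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import exponnorm

def emg(x, burden, K, loc, scale, background):
    # exponnorm is scipy's exponentially modified Gaussian; K sets the
    # exponential (e-folding) length relative to the Gaussian width.
    return burden * exponnorm.pdf(x, K, loc=loc, scale=scale) + background

def fit_emg(x_km, line_density, wind_speed_km_s):
    dx = x_km[1] - x_km[0]
    p0 = [line_density.sum() * dx, 2.0, 0.0, 10.0, line_density.min()]
    popt, _ = curve_fit(emg, x_km, line_density, p0=p0)
    burden, K, loc, scale, background = popt
    x0 = K * scale                       # e-folding distance of the decay
    tau = x0 / wind_speed_km_s           # effective lifetime parameter
    emissions = burden / tau             # mass burden divided by lifetime
    return emissions, tau
```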

  14. Critical evaluation of five methods for quantifying chewing lice (Insecta: Phthiraptera).

    PubMed

    Clayton, D H; Drown, D M

    2001-12-01

    Five methods for estimating the abundance of chewing lice (Insecta: Phthiraptera) were tested. To evaluate the methods, feral pigeons (Columba livia) and 2 species of ischnoceran lice were used. The fraction of lice removed by each method was compared, and least squares linear regression was used to determine how well each method predicted total abundance. Total abundance was assessed in most cases using KOH dissolution. The 2 methods involving dead birds (body washing and post-mortem-ruffling) provided better results than 3 methods involving live birds (dust-ruffling, fumigation chambers, and visual examination). Body washing removed the largest fraction of lice (>82%) and was an extremely accurate predictor of total abundance (r2 = 0.99). Post-mortem-ruffling was also an accurate predictor of total abundance (r2 > or = 0.88), even though it removed a smaller proportion of lice (<70%) than body washing. Dust-ruffling and fumigation chambers removed even fewer lice, but were still reasonably accurate predictors of total abundance, except in the case of data sets restricted to birds with relatively few lice. Visual examination, the only method not requiring that lice be removed from the host, was an accurate predictor of louse abundance, except in the case of wing lice on lightly parasitized birds.
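
    The evaluation criterion in this record reduces to simple least-squares regression; a minimal sketch (with illustrative arrays) of the r-squared computation used to judge each method:

```python
# Regress total abundance (from KOH dissolution) on the count each
# sampling method recovers, and report r^2.
from scipy.stats import linregress

def predictive_r2(method_counts, total_counts):
    fit = linregress(method_counts, total_counts)
    return fit.rvalue ** 2
```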

  15. Quantitative characterization of metastatic disease in the spine. Part I. Semiautomated segmentation using atlas-based deformable registration and the level set method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardisty, M.; Gordon, L.; Agarwal, P.

    2007-08-15

    Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions to automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining the morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user.

  16. ECOLOGICAL AND SOCIOECONOMIC BENEFITS OF RESTORING AMD-IMPAIRED STREAMS: EMERGY-BASED VALUATION

    EPA Science Inventory

    Sound environmental decisions require an integrated, systemic method of valuation that accurately accounts for environmental and social, as well as economic, costs and benefits. More inclusive methods are particularly needed for assessing ecological benefits because these are so...

  17. Efficient Methods of Estimating Switchgrass Biomass Supplies

    USDA-ARS?s Scientific Manuscript database

    Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...

  18. EVALUATION OF METHODS FOR SAMPLING, RECOVERY, AND ENUMERATION OF BACTERIA APPLIED TO THE PHYLLOPANE

    EPA Science Inventory

    Determining the fate and survival of genetically engineered microorganisms released into the environment requires the development and application of accurate and practical methods of detection and enumeration. Several experiments were performed to examine quantitative recovery met...

  19. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on the time required for the evaluation of each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
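
    Of the five parameterizations, the hyper-dual approach is the most self-contained to illustrate: evaluating a function on a hyper-dual argument returns exact first and second derivatives with no step-size error. A minimal sketch (addition and multiplication only, enough for polynomial models; illustrative, not the report's implementation):

```python
# Hyper-dual numbers: a + b*e1 + c*e2 + d*e1*e2, with e1^2 = e2^2 = 0.
class HyperDual:
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)

    __rmul__ = __mul__

def derivatives(f, x):
    """Return f(x), f'(x), f''(x) from a single hyper-dual evaluation
    with unit perturbations in both e1 and e2."""
    out = f(HyperDual(x, 1.0, 1.0, 0.0))
    return out.a, out.b, out.d

# Example: f(x) = x^3 at x = 2 gives (8.0, 12.0, 12.0).
print(derivatives(lambda x: x * x * x, 2.0))
```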

  20. High-throughput quantification of hydroxyproline for determination of collagen.

    PubMed

    Hofman, Kathleen; Hall, Bronwyn; Cleaver, Helen; Marshall, Susan

    2011-10-15

    An accurate and high-throughput assay for collagen is essential for collagen research and development of collagen products. Hydroxyproline is routinely assayed to provide a measurement for collagen quantification. The time required for sample preparation using acid hydrolysis and neutralization prior to assay is what limits the current method for determining hydroxyproline. This work describes the conditions of alkali hydrolysis that, when combined with the colorimetric assay defined by Woessner, provide a high-throughput, accurate method for the measurement of hydroxyproline. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. Accurate Energy Transaction Allocation using Path Integration and Interpolation

    NASA Astrophysics Data System (ADS)

    Bhide, Mandar Mohan

    This thesis investigates many of the popular cost allocation methods which are based on actual usage of the transmission network. The Energy Transaction Allocation (ETA) method originally proposed by A. Fradi, S. Brigonne and B. Wollenberg, which offers the unique advantage of accurately allocating transmission network usage, is discussed subsequently. A modified calculation of ETA based on a simple interpolation technique is then proposed. The proposed methodology not only increases the accuracy of the calculation but also decreases the number of calculations to less than half of that required by the original ETA method.

  2. 40 CFR 75.48 - Petition for an alternative monitoring system.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... method of ensuring an accurate assessment of operating hourly conditions on a real-time basis. (9) A...) Hourly test data for the alternative monitoring system at each required operating level and fuel type... continuous emissions monitoring system at each required operating level and fuel type. The fuel type...

  3. 40 CFR 75.48 - Petition for an alternative monitoring system.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... method of ensuring an accurate assessment of operating hourly conditions on a real-time basis. (9) A...) Hourly test data for the alternative monitoring system at each required operating level and fuel type... continuous emissions monitoring system at each required operating level and fuel type. The fuel type...

  4. 40 CFR 75.48 - Petition for an alternative monitoring system.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... method of ensuring an accurate assessment of operating hourly conditions on a real-time basis. (9) A...) Hourly test data for the alternative monitoring system at each required operating level and fuel type... continuous emissions monitoring system at each required operating level and fuel type. The fuel type...

  5. 40 CFR 75.48 - Petition for an alternative monitoring system.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... method of ensuring an accurate assessment of operating hourly conditions on a real-time basis. (9) A...) Hourly test data for the alternative monitoring system at each required operating level and fuel type... continuous emissions monitoring system at each required operating level and fuel type. The fuel type...

  6. Methods for Documenting Systematic Review Searches: A Discussion of Common Issues

    ERIC Educational Resources Information Center

    Rader, Tamara; Mann, Mala; Stansfield, Claire; Cooper, Chris; Sampson, Margaret

    2014-01-01

    Introduction: As standardized reporting requirements for systematic reviews are being adopted more widely, review authors are under greater pressure to accurately record their search process. With careful planning, documentation to fulfill the Preferred Reporting Items for Systematic Reviews and Meta-Analyses requirements can become a valuable…

  7. 40 CFR 75.48 - Petition for an alternative monitoring system.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... method of ensuring an accurate assessment of operating hourly conditions on a real-time basis. (9) A...) Hourly test data for the alternative monitoring system at each required operating level and fuel type... continuous emissions monitoring system at each required operating level and fuel type. The fuel type...

  8. Gasometric Determination of CO[subscript 2] Released from Carbonate Materials

    ERIC Educational Resources Information Center

    Fagerlund, Johan; Zevenhoven, Ron; Hulden, Stig-Goran; Sodergard, Berndt

    2010-01-01

    To determine the carbonation degree of materials used in mineral carbonation experiments, a fast, simple, and sufficiently accurate method is required. For this purpose, a method based on the reaction between carbonates and hydrochloric acid was developed. It was noted that this method could also be used to teach undergraduate students some basic…

  9. Methods to compute reliabilities for genomic predictions of feed intake

    USDA-ARS?s Scientific Manuscript database

    For new traits without historical reference data, cross-validation is often the preferred method to validate reliability (REL). Time truncation is less useful because few animals gain substantial REL after the truncation point. Accurate cross-validation requires separating genomic gain from pedigree...

  10. Non-Invasive Methods for Iron Concentration Assessment

    NASA Astrophysics Data System (ADS)

    Carneiro, Antonio A. O.; Baffa, Oswaldo; Angulo, Ivan L.; Covas, Dimas T.

    2002-08-01

    Iron excess is commonly observed in patients with transfusional iron overload. Iron chelation therapy in these patients requires accurate determination of the magnitude of iron excess. The most promising method for noninvasive assessment of iron stores is based on measurements of hepatic magnetic susceptibility.

  11. A new background subtraction method for Western blot densitometry band quantification through image analysis software.

    PubMed

    Gallo-Oller, Gabriel; Ordoñez, Raquel; Dotor, Javier

    2018-06-01

    Since its first description, Western blot has been widely used in molecular labs. It constitutes a multistep method that allows the detection and/or quantification of proteins from simple to complex protein mixtures. The Western blot quantification step is critical for obtaining accurate and reproducible results. Because of the technical knowledge required for densitometry analysis, together with resource availability, standard office scanners are often used for image acquisition of developed Western blot films. Furthermore, the use of semi-quantitative software such as ImageJ (Java-based image-processing and analysis software) is clearly increasing in different scientific fields. In this work, we describe the use of an office scanner coupled with the ImageJ software, together with a new image background subtraction method, for accurate Western blot quantification. The proposed method represents an affordable, accurate and reproducible approach that can be used where resources are limited. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Cross-Sectional HIV Incidence Estimation in HIV Prevention Research

    PubMed Central

    Brookmeyer, Ron; Laeyendecker, Oliver; Donnell, Deborah; Eshleman, Susan H.

    2013-01-01

    Accurate methods for estimating HIV incidence from cross-sectional samples would have great utility in prevention research. This report describes recent improvements in cross-sectional methods that significantly increase their accuracy. These improvements are based on the use of multiple biomarkers to identify recent HIV infections. These multi-assay algorithms (MAAs) use assays in a hierarchical approach for testing that minimizes the effort and cost of incidence estimation. These MAAs do not require mathematical adjustments for accurate estimation of the incidence rates in study populations in the year prior to sample collection. MAAs provide a practical, accurate, and cost-effective approach for cross-sectional HIV incidence estimation that can be used for HIV prevention research and global epidemic monitoring. PMID:23764641
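
    The estimator that a multi-assay algorithm ultimately feeds is the standard cross-sectional one: recent infections divided by the product of the HIV-negative count and the mean duration of the "recent" state. A minimal sketch with illustrative numbers (the MAA's assay hierarchy and cutoffs are omitted):

```python
# Snapshot incidence estimate from a single cross-sectional survey.
def cross_sectional_incidence(n_recent, n_negative, mean_window_years):
    """Annual incidence: recent infections divided by person-time at risk,
    approximated by negatives times the mean duration of recency."""
    return n_recent / (n_negative * mean_window_years)

# e.g. 25 recent results among 5000 HIV-negatives with a 0.4-year window:
print(cross_sectional_incidence(25, 5000, 0.4))  # 0.0125 = 1.25 per 100 py
```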

  13. Utility of a novel error-stepping method to improve gradient-based parameter identification by increasing the smoothness of the local objective surface: a case-study of pulmonary mechanics.

    PubMed

    Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut

    2014-05-01

    Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001) despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
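
    The forward simulation being compared can be sketched for a single-compartment pulmonary model, R·dV/dt + E·V = P(t), integrated here with a textbook RK4 step. This shows only the Runge-Kutta baseline; the abstract does not describe the error-stepping integrator in enough detail to reproduce it.

```python
# RK4 forward simulation of a single-compartment lung model, used inside
# parameter identification to evaluate candidate (E, R) pairs.
def rk4_step(f, t, v, dt):
    k1 = f(t, v)
    k2 = f(t + dt / 2, v + dt * k1 / 2)
    k3 = f(t + dt / 2, v + dt * k2 / 2)
    k4 = f(t + dt, v + dt * k3)
    return v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def simulate_volume(pressure, E, R, dt, n_steps=1000, v0=0.0):
    """Forward-simulate lung volume V(t) given airway pressure P(t)."""
    dvdt = lambda t, v: (pressure(t) - E * v) / R
    v, t, out = v0, 0.0, [v0]
    for _ in range(n_steps):
        v = rk4_step(dvdt, t, v, dt)
        t += dt
        out.append(v)
    return out
```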

  14. A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.

    2017-03-01

    Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.

  15. Accurate measurement of transgene copy number in crop plants using droplet digital PCR

    USDA-ARS?s Scientific Manuscript database

    Technical abstract: Genetic transformation is a powerful means for the improvement of crop plants, but requires labor and resource intensive methods. An efficient method for identifying single copy transgene insertion events from a population of independent transgenic lines is desirable. Currently ...

  16. Accurate measure of transgene copy number in crop plants using droplet digital PCR

    USDA-ARS?s Scientific Manuscript database

    Genetic transformation is a powerful means for the improvement of crop plants, but requires labor- and resource-intensive methods. An efficient method for identifying single-copy transgene insertion events from a population of independent transgenic lines is desirable. Currently, transgene copy numb...

  17. Multiplexed microsatellite recovery using massively parallel sequencing

    Treesearch

    T.N. Jennings; B.J. Knaus; T.D. Mullins; S.M. Haig; R.C. Cronn

    2011-01-01

    Conservation and management of natural populations requires accurate and inexpensive genotyping methods. Traditional microsatellite, or simple sequence repeat (SSR), marker analysis remains a popular genotyping method because of the comparatively low cost of marker development, ease of analysis and high power of genotype discrimination. With the availability of...

  18. LONGITUDINAL COHORT METHODS STUDIES

    EPA Science Inventory

    Accurate exposure classification tools are required to link exposure with health effects in epidemiological studies. Exposure classification for occupational studies is relatively easy compared to predicting residential childhood exposures. Recent NHEXAS (Maryland) study articl...

  19. Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions

    NASA Astrophysics Data System (ADS)

    Chen, Nan; Majda, Andrew J.

    2018-02-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Unlike traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only on the order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
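
    The hybrid estimate described above can be sketched as a mixture over a small ensemble: each member contributes a Gaussian kernel in the low-dimensional resolved variables times its conditional Gaussian in the high-dimensional subspace. The sketch below assumes precomputed ensemble inputs (low-dimensional samples, conditional means and covariances) and an illustrative kernel bandwidth; it is a reading of the abstract, not the authors' code.

```python
# A minimal sketch of the hybrid Gaussian-mixture / kernel density estimate.
import numpy as np
from scipy.stats import multivariate_normal

def hybrid_pdf(u_low, u_high, low_samples, cond_means, cond_covs, bandwidth):
    """p(u_low, u_high) ~ mean over ensemble members of
    kernel(u_low; x_i, bandwidth) * N(u_high; m_i, R_i)."""
    terms = []
    for x_i, m_i, R_i in zip(low_samples, cond_means, cond_covs):
        k = multivariate_normal.pdf(u_low, mean=x_i, cov=bandwidth)
        g = multivariate_normal.pdf(u_high, mean=m_i, cov=R_i)
        terms.append(k * g)
    return np.mean(terms)
```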

  20. A review of models and micrometeorological methods used to estimate wetland evapotranspiration

    USGS Publications Warehouse

    Drexler, J.Z.; Snyder, R.L.; Spano, D.; Paw, U.K.T.

    2004-01-01

    Within the past decade or so, the accuracy of evapotranspiration (ET) estimates has improved due to new and increasingly sophisticated methods. Yet despite a plethora of choices concerning methods, estimation of wetland ET remains insufficiently characterized due to the complexity of surface characteristics and the diversity of wetland types. In this review, we present models and micrometeorological methods that have been used to estimate wetland ET and discuss their suitability for particular wetland types. Hydrological, soil monitoring and lysimetric methods to determine ET are not discussed. Our review shows that, due to the variability and complexity of wetlands, there is no single approach that is the best for estimating wetland ET. Furthermore, there is no single foolproof method to obtain an accurate, independent measure of wetland ET. Because all of the methods reviewed, with the exception of eddy covariance and LIDAR, require measurements of net radiation (Rn) and soil heat flux (G), highly accurate measurements of these energy components are key to improving measurements of wetland ET. Many of the major methods used to determine ET can be applied successfully to wetlands of uniform vegetation and adequate fetch, however, certain caveats apply. For example, with accurate Rn and G data and small Bowen ratio (β) values, the Bowen ratio energy balance method can give accurate estimates of wetland ET. However, large errors in latent heat flux density can occur near sunrise and sunset when the Bowen ratio β → −1.0. The eddy covariance method provides a direct measurement of latent heat flux density (λE) and sensible heat flux density (H), yet this method requires considerable expertise and expensive instrumentation to implement. A clear advantage of using the eddy covariance method is that λE can be compared with Rn − G − H, thereby allowing for an independent test of accuracy. The surface renewal method is inexpensive to replicate and, therefore, shows particular promise for characterizing variability in ET as a result of spatial heterogeneity. LIDAR is another method that has special utility in a heterogeneous wetland environment, because it provides an integrated value for ET from a surface. The main drawback of LIDAR is the high cost of equipment and the need for an independent ET measure to assess accuracy. If Rn and G are measured accurately, the Priestley-Taylor equation can be used successfully with site-specific calibration factors to estimate wetland ET. The 'crop' cover coefficient (Kc) method can provide accurate wetland ET estimates if calibrated for the environmental and climatic characteristics of a particular area. More complicated equations such as the Penman and Penman-Monteith equations also can be used to estimate wetland ET, but surface variability and lack of information on aerodynamic and surface resistances make use of such equations somewhat questionable. © 2004 John Wiley and Sons, Ltd.
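
    Of the methods reviewed, the Priestley-Taylor estimate is compact enough to state directly: λE = α·Δ/(Δ+γ)·(Rn − G). A minimal sketch follows, using the standard FAO-56 expression for the slope Δ of the saturation vapour pressure curve; the default α = 1.26 is the classic open-water value and, as the review notes, should be replaced with a site-specific calibration for wetlands.

```python
# Priestley-Taylor latent heat flux estimate (illustrative parameter values).
import math

def priestley_taylor_le(rn, g, t_air_c, alpha=1.26, gamma=0.066):
    """Latent heat flux density, in the same units as rn and g (e.g. W/m2).
    gamma: psychrometric constant in kPa/degC."""
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))   # kPa
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2                  # kPa/degC
    return alpha * delta / (delta + gamma) * (rn - g)

print(priestley_taylor_le(rn=400.0, g=50.0, t_air_c=25.0))  # ~327 W/m2
```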

  1. On a modified streamline curvature method for the Euler equations

    NASA Technical Reports Server (NTRS)

    Cordova, Jeffrey Q.; Pearson, Carl E.

    1988-01-01

    A modification of the streamline curvature method leads to a quasilinear second-order partial differential equation for the streamline coordinate function. The existence of a stream function is not required. The method is applied to subsonic and supersonic nozzle flow, and to axially symmetric flow with swirl. For many situations, the associated numerical method is both fast and accurate.

  2. Towards an Optimized Method of Olive Tree Crown Volume Measurement

    PubMed Central

    Miranda-Fuentes, Antonio; Llorens, Jordi; Gamarra-Diezma, Juan L.; Gil-Ribes, Jesús A.; Gil, Emilio

    2015-01-01

    Accurate crown characterization of large isolated olive trees is vital for adjusting spray doses in three-dimensional crop agriculture. Among the many methodologies available, laser sensors have proved to be the most reliable and accurate. However, their operation is time consuming and requires specialist knowledge, so a simpler crown characterization method is needed. To this end, three methods were evaluated and compared with LiDAR measurements to determine their accuracy: the Vertical Crown Projected Area method (VCPA), the Ellipsoid Volume method (VE) and the Tree Silhouette Volume method (VTS). Trials were performed in three different kinds of olive tree plantations: intensive, adapted one-trunked traditional and traditional. In total, 55 trees were characterized. Results show that all three methods are appropriate for estimating crown volume, reaching high coefficients of determination: R2 = 0.783, 0.843 and 0.824 for VCPA, VE and VTS, respectively. However, discrepancies arise when evaluating tree plantations separately, especially for traditional trees. Here, correlations between LiDAR volume and other parameters showed that the mean vector calculated for the VCPA method had the highest correlation for traditional trees; thus, its use in traditional plantations is highly recommended. PMID:25658396

  3. Fast Markerless Tracking for Augmented Reality in Planar Environment

    NASA Astrophysics Data System (ADS)

    Basori, Ahmad Hoirul; Afif, Fadhil Noer; Almazyad, Abdulaziz S.; AbuJabal, Hamza Ali S.; Rehman, Amjad; Alkawaz, Mohammed Hazim

    2015-12-01

    Markerless tracking for augmented reality should not only be accurate but also fast enough to provide seamless synchronization between real and virtual beings. Currently reported methods show that vision-based tracking is accurate but requires high computational power. This paper proposes a real-time hybrid method for tracking unknown environments in markerless augmented reality. The proposed method combines a vision-based approach with accelerometer and gyroscope sensors as a camera pose predictor. To align the augmentation relative to camera motion, tracking is done by substituting feature-based camera estimation with a combination of inertial sensors and a complementary filter to provide a more dynamic response. The proposed method managed to track unknown environments with faster processing time than available feature-based approaches. Moreover, the proposed method can sustain its estimation in situations where feature-based tracking loses its track. The collaborative sensor tracking performed the task at about 22.97 FPS, up to five times faster than the feature-based tracking method used for comparison. Therefore, the proposed method can be used to track unknown environments without depending on the number of features in the scene, while requiring lower computational cost.

  4. Improved methods for the determination of drying conditions and fraction insoluble solids (FIS) in biomass pretreatment slurry

    DOE PAGES

    Sluiter, Amie; Sluiter, Justin; Wolfrum, Ed; ...

    2016-05-20

    Accurate and precise chemical characterization of biomass feedstocks and process intermediates is a requirement for successful technical and economic evaluation of biofuel conversion technologies. The uncertainty in primary measurements of the fraction insoluble solids (FIS) content of dilute acid pretreated corn stover slurry is the major contributor to uncertainty in yield calculations for enzymatic hydrolysis of cellulose to glucose. This uncertainty is propagated through process models and impacts modeled fuel costs. The challenge in measuring FIS is obtaining an accurate measurement of insoluble matter in the pretreated materials, while appropriately accounting for all biomass derived components. Three methods were tested to improve this measurement. One used physical separation of liquid and solid phases, and two utilized direct determination of dry matter content in two fractions. We offer a comparison of drying methods. Lastly, our results show that utilizing a microwave dryer to directly determine dry matter content is the optimal method for determining FIS, based on the low time requirements and the method optimization done using model slurries.

  5. Accurate image-charge method by the use of the residue theorem for core-shell dielectric sphere

    NASA Astrophysics Data System (ADS)

    Fu, Jing; Xu, Zhenli

    2018-02-01

    An accurate image-charge method (ICM) is developed for ionic interactions outside a core-shell structured dielectric sphere. Core-shell particles have wide applications for which the theoretical investigation requires efficient methods for the Green's function used to calculate pairwise interactions of ions. The ICM is based on an inverse Mellin transform from the coefficients of spherical harmonic series of the Green's function such that the polarization charge due to dielectric boundaries is represented by a series of image point charges and an image line charge. The residue theorem is used to accurately calculate the density of the line charge. Numerical results show that the ICM is promising in fast evaluation of the Green's function, and thus it is useful for theoretical investigations of core-shell particles. This routine can also be applicable for solving other problems with spherical dielectric interfaces such as multilayered media and Debye-Hückel equations.

  6. Accurate and facile determination of the index of refraction of organic thin films near the carbon 1s absorption edge.

    PubMed

    Yan, Hongping; Wang, Cheng; McCarn, Allison R; Ade, Harald

    2013-04-26

    A practical and accurate method to obtain the index of refraction, especially the decrement δ, across the carbon 1s absorption edge is demonstrated. The combination of absorption spectra scaled to the Henke atomic scattering factor database, the use of the doubly subtractive Kramers-Kronig relations, and high precision specular reflectivity measurements from thin films allow the notoriously difficult-to-measure δ to be determined with high accuracy. No independent knowledge of the film thickness or density is required. High confidence interpolation between relatively sparse measurements of δ across an absorption edge is achieved. Accurate optical constants determined by this method are expected to greatly improve the simulation and interpretation of resonant soft x-ray scattering and reflectivity data. The method is demonstrated using poly(methyl methacrylate) and should be extendable to all organic materials.

  7. Improving the accuracy of burn-surface estimation.

    PubMed

    Nichter, L S; Williams, J; Bryant, C A; Edlich, R F

    1985-09-01

    A user-friendly computer-assisted method of calculating total body surface area burned (TBSAB) has been developed. This method is more accurate, faster, and subject to less error than conventional methods. For comparison, the ability of 30 physicians to estimate TBSAB was tested. Parameters studied included the effect of prior burn care experience, the influence of burn size, the ability to accurately sketch the size of burns on standard burn charts, and the ability to estimate percent TBSAB from the sketches. Despite the ability of physicians at all levels of training to accurately sketch TBSAB, significant burn size over-estimation (p < 0.01) and large interrater variability of potential consequence was noted. Direct benefits of a computerized system are many. These include the need for minimal user experience and the ability for wound-trend analysis, permanent record storage, calculation of fluid and caloric requirements, hemodynamic parameters, and the ability to compare meaningfully the different treatment protocols.

  8. Evaluation of four methods for estimating leaf area of isolated trees

    Treesearch

    P.J. Peper; E.G. McPherson

    2003-01-01

    The accurate modeling of the physiological and functional processes of urban forests requires information on the leaf area of urban tree species. Several non-destructive, indirect leaf area sampling methods have shown good performance for homogenous canopies. These methods have not been evaluated for use in urban settings where trees are typically isolated and...

  9. A general method for bead-enhanced quantitation by flow cytometry

    PubMed Central

    Montes, Martin; Jaensson, Elin A.; Orozco, Aaron F.; Lewis, Dorothy E.; Corry, David B.

    2009-01-01

    Flow cytometry provides accurate relative cellular quantitation (percent abundance) of cells from diverse samples, but technical limitations of most flow cytometers preclude accurate absolute quantitation. Several quantitation standards are now commercially available which, when added to samples, permit absolute quantitation of CD4+ T cells. However, these reagents are limited by their cost, technical complexity, requirement for additional software and/or limited applicability. Moreover, few studies have validated the use of such reagents in complex biological samples, especially for quantitation of non-T cells. Here we show that adding known quantities of polystyrene fluorescence standardization beads to samples permits accurate quantitation of CD4+ T cells from complex cell samples. This procedure, here termed single bead-enhanced cytofluorimetry (SBEC), was equally capable of enumerating eosinophils as well as subcellular fragments of apoptotic cells, moieties with very different optical and fluorescent characteristics. Relative to other proprietary products, SBEC is simple, inexpensive and requires no special software, suggesting that the method is suitable for the routine quantitation of most cells and other particles by flow cytometry. PMID:17067632
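
    The arithmetic behind bead-enhanced quantitation is a simple ratio: absolute concentration follows from the gated cell events, the gated bead events, the known number of beads spiked in, and the sample volume. A minimal sketch with illustrative values:

```python
# Absolute cell concentration from a bead-spiked flow cytometry sample.
def absolute_count(cell_events, bead_events, beads_added, volume_ul):
    """Cells per microliter of the original sample."""
    return (cell_events / bead_events) * beads_added / volume_ul

# e.g. 12000 CD4+ events, 8000 bead events, 50000 beads in 100 uL:
print(absolute_count(12000, 8000, 50_000, 100.0))  # 750 cells/uL
```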

  10. Factors Affecting Accuracy and Time Requirements of a Glucose Oxidase-Peroxidase Assay for Determination of Glucose

    USDA-ARS?s Scientific Manuscript database

    Accurate and rapid assays for glucose are desirable for analysis of glucose and starch in food and feedstuffs. An established colorimetric glucose oxidase-peroxidase method for glucose was modified to reduce analysis time, and evaluated for factors that affected accuracy. Time required to perform t...

  11. 3-D rigid body tracking using vision and depth sensors.

    PubMed

    Gedik, O Serdar; Alatan, A Aydn

    2013-10-01

    In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required, with accurate pose estimates needed to increase reliability and decrease jitter overall. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, based on the fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, based on estimation of optical flow using intensity and shape index map data of the 3-D point cloud, increases both 2-D and 3-D tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and a superior performance is clearly observed, both objectively via error metrics and subjectively for the rendered scenes.

  12. Registration of T2-weighted and diffusion-weighted MR images of the prostate: comparison between manual and landmark-based methods

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Jiang, Yulei; Soylu, Fatma N.; Tomek, Mark; Sensakovic, William; Oto, Aytekin

    2012-02-01

    Quantitative analysis of multi-parametric magnetic resonance (MR) images of the prostate, including T2-weighted (T2w) and diffusion-weighted (DW) images, requires accurate image registration. We compared two registration methods between T2w and DW images. We collected pre-operative MR images of 124 prostate cancer patients (68 patients scanned with a GE scanner and 56 with Philips scanners). A landmark-based rigid registration was done based on six prostate landmarks in both T2w and DW images identified by a radiologist. Independently, a researcher manually registered the same images. A radiologist visually evaluated the registration results by using a 5-point ordinal scale of 1 (worst) to 5 (best). The Wilcoxon signed-rank test was used to determine whether the radiologist's ratings of the results of the two registration methods were significantly different. Results demonstrated that both methods were accurate: the average ratings were 4.2, 3.3, and 3.8 for GE, Philips, and all images, respectively, for the landmark-based method; and 4.6, 3.7, and 4.2, respectively, for the manual method. The manual registration results were more accurate than the landmark-based registration results (p < 0.0001 for GE, Philips, and all images). Therefore, the manual method produces more accurate registration between T2w and DW images than the landmark-based method.

  13. A practical model for pressure probe system response estimation (with review of existing models)

    NASA Astrophysics Data System (ADS)

    Hall, B. F.; Povey, T.

    2018-04-01

    The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
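
    As a first-pass check of the kind this model improves upon, the tube-plus-transducer-cavity system is sometimes approximated as a Helmholtz resonator, f = (c/2π)·√(A/(L·V)). The sketch below uses that rough formula with illustrative dimensions and no end correction; it is not the model presented in the paper.

```python
# Rough Helmholtz-resonator estimate of a pressure probe line's resonance.
import math

def helmholtz_frequency(tube_radius_m, tube_length_m, cavity_volume_m3,
                        sound_speed=343.0):
    """Resonant frequency f = (c / 2*pi) * sqrt(A / (L * V)) in Hz."""
    area = math.pi * tube_radius_m ** 2
    return (sound_speed / (2.0 * math.pi)) * math.sqrt(
        area / (tube_length_m * cavity_volume_m3))

# e.g. 0.5 mm radius, 0.5 m line, 50 mm^3 transducer volume -> ~300 Hz
print(helmholtz_frequency(0.5e-3, 0.5, 50e-9))
```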

  14. Calculating the Financial Impact of Population Growth on Education.

    ERIC Educational Resources Information Center

    Cline, Daniel H.

    It is particularly difficult to make accurate enrollment projections for areas that are experiencing a rapid expansion in their population. The traditional method of calculating cohort survival ratios must be modified and supplemented with additional information to ensure accuracy; cost projection methods require detailed analyses of current costs…

  15. Hemlock woolly adelgid biological control: molecular methods to distinguish Laricobius nigrinus, L. rubidus, and their hybrids

    Treesearch

    Nathan P. Havill; Gina Davis; Joanne Klein; Adalgisa Caccone; Scott Salom

    2011-01-01

    Molecular diagnostics use DNA-based methods to assign unknown organisms to species. As such, they rely on a priori species designation by taxonomists and require validation with enough samples to capture the variation within species for accurately selecting diagnostic characters.

  16. On the reliability of Fusarium oxysporum f. sp. niveum research: Do we need standardized testing methods?

    USDA-ARS?s Scientific Manuscript database

    Fusarium oxysporum f. sp. niveum (Fon) is a pathogen highly variable in aggressiveness that requires a standardized testing method to more accurately define isolate aggressiveness (races) and to identify resistant watermelon lines. Isolates of Fon vary in aggressiveness from weakly to highly aggres...

  17. Z-scan theoretical and experimental studies for accurate measurements of the nonlinear refractive index and absorption of optical glasses near damage threshold

    NASA Astrophysics Data System (ADS)

    Olivier, Thomas; Billard, Franck; Akhouayri, Hassan

    2004-06-01

    Self-focusing is one of the dramatic phenomena that may occur during the propagation of a high power laser beam in a nonlinear material. This phenomenon leads to a degradation of the wave front and may also lead to photoinduced damage of the material. Realistic simulations of the propagation of high power laser beams require an accurate knowledge of the nonlinear refractive index γ. In the particular case of fused silica and in the nanosecond regime, it seems that electronic mechanisms as well as electrostriction and thermal effects can lead to a significant refractive index variation. Compared to the different methods used to measure this parameter, the Z-scan method is simple, offers good sensitivity and may give absolute measurements if the incident beam is accurately characterized. However, this method requires a very good knowledge of the incident beam and of its propagation inside a nonlinear sample. We used a split-step propagation algorithm to simulate Z-scan curves for arbitrary beam shape, sample thickness and nonlinear phase shift. According to our simulations and a rigorous analysis of the Z-scan measured signal, it appears that some abusive approximations lead to very important errors. Thus, by reducing possible errors in the interpretation of Z-scan experimental studies, we performed accurate measurements of the nonlinear refractive index of fused silica that show the significant contribution of nanosecond mechanisms.

  18. Unsupervised Learning —A Novel Clustering Method for Rolling Bearing Faults Identification

    NASA Astrophysics Data System (ADS)

    Kai, Li; Bo, Luo; Tao, Ma; Xuefeng, Yang; Guangming, Wang

    2017-12-01

    To promptly process massive fault data and automatically provide accurate diagnosis results, numerous studies have been conducted on intelligent fault diagnosis of rolling bearings. Among these studies, supervised learning methods such as artificial neural networks, support vector machines and decision trees are commonly used. These methods can detect rolling bearing failure effectively, but achieving better detection results often requires a large number of training samples. Based on the above, a novel clustering method is proposed in this paper. This novel method is able to find the correct number of clusters automatically. The effectiveness of the proposed method is validated using datasets from rolling element bearings. The diagnosis results show that the proposed method can accurately detect fault types from small samples. Meanwhile, the diagnosis results remain highly accurate even for massive samples.
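
    One common way to select the number of clusters automatically, shown here for illustration since the paper's specific criterion is not given in the abstract, is to sweep candidate counts and keep the one that maximizes the silhouette score:

```python
# Automatic cluster-count selection by silhouette score (illustrative).
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def auto_cluster(features, k_max=10):
    """Return the cluster count and labels maximizing the silhouette score."""
    best_k, best_score, best_labels = None, -1.0, None
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        score = silhouette_score(features, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels
```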

  19. Pathloss Calculation Using the Transmission Line Matrix and Finite Difference Time Domain Methods With Coarse Grids

    DOE PAGES

    Nutaro, James; Kuruganti, Teja

    2017-02-24

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
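
    The idea can be illustrated in one dimension: a standard leapfrog update of the scalar wave equation on a grid whose spacing is many wavelengths, with pathloss read from the peak amplitude reaching each cell. The grid size, spacing, and source below are illustrative, not the paper's configuration.

```python
# 1-D leapfrog wave-equation update on a deliberately coarse grid.
import numpy as np

c, dx = 3e8, 50.0                   # 50 m spacing: many wavelengths at GHz
dt = 0.9 * dx / c                   # CFL-stable time step
n, steps = 400, 180                 # stop before the pulse reaches the edges
u_prev = np.zeros(n)
u = np.zeros(n)
u[n // 2] = 1.0                     # impulsive initial condition at the source
peak = np.abs(u).copy()

r2 = (c * dt / dx) ** 2
for _ in range(steps):
    u_next = 2 * u - u_prev + r2 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    u_prev, u = u, u_next
    peak = np.maximum(peak, np.abs(u))

# Pathloss relative to the strongest cell (the source location), in dB.
pathloss_db = -20 * np.log10(np.maximum(peak, 1e-12) / peak.max())
```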

  20. A suite of microplate reader-based colorimetric methods to quantify ammonium, nitrate, orthophosphate and silicate concentrations for aquatic nutrient monitoring.

    PubMed

    Ringuet, Stephanie; Sassano, Lara; Johnson, Zackary I

    2011-02-01

    A sensitive, accurate and rapid analysis of major nutrients in aquatic systems is essential for monitoring and maintaining healthy aquatic environments. In particular, monitoring ammonium (NH4+) concentrations is necessary for maintenance of many fish stocks, while accurate monitoring and regulation of ammonium, orthophosphate (PO43-), silicate (Si(OH)4) and nitrate (NO3-) concentrations are required for regulating algae production. Monitoring of wastewater streams is also required for many aquaculture, municipal and industrial wastewater facilities to comply with local, state or federal water quality effluent regulations. Traditional methods for quantifying these nutrient concentrations often require laborious techniques or expensive specialized equipment making these analyses difficult. Here we present four alternative microcolorimetric assays that are based on a standard 96-well microplate format and microplate reader that simplify the quantification of each of these nutrients. Each method uses small sample volumes (200 µL), has a detection limit ≤ 1 µM in freshwater and ≤ 2 µM in saltwater, precision of at least 8% and compares favorably with standard analytical procedures. Routine use of these techniques in the laboratory and at an aquaculture facility to monitor nutrient concentrations associated with microalgae growth demonstrates that they are rapid, accurate and highly reproducible among different users. These techniques offer an alternative to standard nutrient analyses and because they are based on the standard 96-well format, they significantly decrease the cost and time of processing while maintaining high precision and sensitivity.
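
    All four assays share the same standard-curve step: fit absorbance against known standards run in the same plate, then invert the line to convert sample absorbances to concentrations. A minimal sketch with illustrative values:

```python
# Linear standard-curve calibration for a microplate colorimetric assay.
import numpy as np

def calibrate(std_conc_um, std_absorbance):
    """Fit absorbance = slope * concentration + intercept."""
    slope, intercept = np.polyfit(std_conc_um, std_absorbance, 1)
    return slope, intercept

def to_concentration(absorbance, slope, intercept):
    """Invert the standard curve for unknown samples."""
    return (np.asarray(absorbance) - intercept) / slope

slope, intercept = calibrate([0, 2, 5, 10, 20], [0.02, 0.09, 0.21, 0.40, 0.79])
print(to_concentration([0.15, 0.33], slope, intercept))
```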

  1. Biosensors for spatiotemporal detection of reactive oxygen species in cells and tissues.

    PubMed

    Erard, Marie; Dupré-Crochet, Sophie; Nüße, Oliver

    2018-05-01

    Redox biology has become a major issue in numerous areas of physiology. Reactive oxygen species (ROS) have a broad range of roles from signal transduction to growth control and cell death. To understand the nature of these roles, accurate measurement of the reactive compounds is required. An increasing number of tools for ROS detection is available; however, the specificity and sensitivity of these tools are often insufficient. Furthermore, their specificity has been rarely evaluated in complex physiological conditions. Many ROS probes are sensitive to environmental conditions in particular pH, which may interfere with ROS detection and cause misleading results. Accurate detection of ROS in physiology and pathophysiology faces additional challenges concerning the precise localization of the ROS and the timing of their production and disappearance. Certain ROS are membrane permeable, and certain ROS probes move across cells and organelles. Targetable ROS probes such as fluorescent protein-based biosensors are required for accurate localization. Here we analyze these challenges in more detail, provide indications on the strength and weakness of current tools for ROS detection, and point out developments that will provide improved ROS detection methods in the future. There is no universal method that fits all situations in physiology and cell biology. A detailed knowledge of the ROS probes is required to choose the appropriate method for a given biological problem. The knowledge of the shortcomings of these probes should also guide the development of new sensors.

  2. Some problems of the calculation of three-dimensional boundary layer flows on general configurations

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Kaups, K.; Mosinskis, G. J.; Rehn, J. A.

    1973-01-01

    An accurate solution of the three-dimensional boundary layer equations over general configurations such as those encountered in aircraft and space shuttle design requires a very efficient, fast, and accurate numerical method with suitable turbulence models for the Reynolds stresses. The efficiency, speed, and accuracy of a three-dimensional numerical method, together with turbulence models for the Reynolds stresses, are examined. The numerical method is the implicit two-point finite difference approach (Box Method) developed by Keller and applied to the boundary layer equations by Keller and Cebeci. In addition, some of the problems that may arise in the solution of these equations for three-dimensional boundary layer flows over general configurations are studied.

  3. Cognitive task analysis-based design and authoring software for simulation training.

    PubMed

    Munro, Allen; Clark, Richard E

    2013-10-01

    The development of more effective medical simulators requires a collaborative team effort where three kinds of expertise are carefully coordinated: (1) exceptional medical expertise focused on providing complete and accurate information about the medical challenges (i.e., critical skills and knowledge) to be simulated; (2) instructional expertise focused on the design of simulation-based training and assessment methods that produce maximum learning and transfer to patient care; and (3) software development expertise that permits the efficient design and development of the software required to capture expertise, present it in an engaging way, and assess student interactions with the simulator. In this discussion, we describe a method of capturing more complete and accurate medical information for simulators and combine it with new instructional design strategies that emphasize the learning of complex knowledge. Finally, we describe three different types of software support (Development/Authoring, Run Time, and Post Run Time) required at different stages in the development of medical simulations and the instructional design elements of the software required at each stage. We describe the contributions expected of each kind of software and the different instructional control authoring support required. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.

  4. Yellow River Icicle Hazard Dynamic Monitoring Using UAV Aerial Remote Sensing Technology

    NASA Astrophysics Data System (ADS)

    Wang, H. B.; Wang, G. H.; Tang, X. M.; Li, C. H.

    2014-02-01

    Monitoring the response of Yellow River icicle hazard change requires accurate and repeatable topographic surveys. A new method based on unmanned aerial vehicle (UAV) aerial remote sensing technology is proposed for real-time data processing in Yellow River ice hazard dynamic monitoring. The monitoring area is located in the Yellow River ice intensive care area in southern Baotou, Inner Mongolia autonomous region; the monitoring period ran from 20 February to 30 March 2013. Using the proposed video data processing method, automatic extraction of 1832 video key frames covering an area of 7.8 km2 took 34.786 seconds. The stitching and correction time was 122.34 seconds and the accuracy was better than 0.5 m. Through comparison of the precisely processed, stitched video image sequence, the method detects changes in the Yellow River ice and accurately locates the ice bar, improving on the traditional visual method by more than 100 times. The results provide accurate decision-aid information for the Yellow River ice prevention headquarters. Finally, the effect of the dam break was repeatedly monitored, and the ice break was measured to five-meter accuracy through accurate monitoring and evaluation analysis.

  5. Optimal control of a variable spin speed CMG system for space vehicles. [Control Moment Gyros

    NASA Technical Reports Server (NTRS)

    Liu, T. C.; Chubb, W. B.; Seltzer, S. M.; Thompson, Z.

    1973-01-01

    Many future NASA programs require very accurate pointing stability, well beyond anything attempted to date. This paper suggests a control system capable of meeting these requirements. An optimal control law for the suggested system is specified. However, since no direct method of solution is known for this complicated system, a computational technique using successive approximations is used to develop the required solution. The method of calculus of variations is applied to estimate the changes in the index of performance as well as the inequality constraints on state variables and terminal conditions. Thus, an algorithm is obtained by the steepest descent method and/or the conjugate gradient method. Numerical examples are given to show the optimal controls.

  6. Real-time, haptics-enabled simulator for probing ex vivo liver tissue.

    PubMed

    Lister, Kevin; Gao, Zhan; Desai, Jaydev P

    2009-01-01

    The advent of complex surgical procedures has driven the need for realistic surgical training simulators. Comprehensive simulators that provide realistic visual and haptic feedback during surgical tasks are required to familiarize surgeons with the procedures they are to perform. Complex organ geometry inherent to biological tissues and intricate material properties drive the need for finite element methods to assure accurate tissue displacement and force calculations. Advances in real-time finite element methods have not reached the state where they are applicable to soft tissue surgical simulation. Therefore a real-time, haptics-enabled simulator for probing of soft tissue has been developed which utilizes preprocessed finite element data (derived from accurate constitutive model of the soft-tissue obtained from carefully collected experimental data) to accurately replicate the probing task in real-time.

  7. pycola: N-body COLA method code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias

    2015-09-01

    pycola is a multithreaded Python/Cython N-body code, implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using LPT while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.

  8. A Method of Calibrating Airspeed Installations on Airplanes at Transonic and Supersonic Speeds by the Use of Accelerometer and Attitude-Angle Measurements

    NASA Technical Reports Server (NTRS)

    Zalovick, John A; Lina, Lindsay J; Trant, James P , Jr

    1953-01-01

    A method is described for calibrating airspeed installations on airplanes at transonic and supersonic speeds in vertical-plane maneuvers, making use of measurements of normal and longitudinal accelerations and attitude angle. In this method all the required instrumentation is carried within the airplane. An analytical study of the effects of various sources of error on the accuracy of an airspeed calibration by the accelerometer method indicated that the required measurements can be made accurately enough to ensure a satisfactory calibration.
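
    The report's exact equations are not quoted above, but the core of the accelerometer method is integrating the along-path acceleration, with the gravity component removed using the measured attitude angle, to obtain true airspeed independently of the pitot-static system. A hedged Python sketch with synthetic inputs:

        import numpy as np

        g = 9.80665                               # m/s^2
        t = np.linspace(0.0, 20.0, 2001)          # time, s (synthetic record)
        theta = np.full_like(t, np.deg2rad(5.0))  # attitude angle, assumed pitch
        ax = np.full_like(t, 0.30)                # longitudinal accelerometer, g units

        # Accelerometers sense specific force, so the along-path gravity
        # component g*sin(theta) must be removed before integrating.
        dVdt = g * (ax - np.sin(theta))
        dV = 0.5 * (dVdt[1:] + dVdt[:-1]) * np.diff(t)      # trapezoidal rule
        V = 100.0 + np.concatenate(([0.0], np.cumsum(dV)))  # m/s, V(0) assumed

        print(f"true airspeed at end of maneuver: {V[-1]:.1f} m/s")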

  9. Developing of method for primary frequency control droop and deadband actual values estimation

    NASA Astrophysics Data System (ADS)

    Nikiforov, A. A.; Chaplin, A. G.

    2017-11-01

    Operation of thermal power plant generation equipment that participates in standardized primary frequency control (SPFC) must meet specific requirements. These requirements are formalized as nine algorithmic criteria, which are used for automatic monitoring of power plant participation in SPFC. One of these criteria, the estimation of the actual values of the primary frequency control droop and deadband, is considered in detail in this report. Experience shows that the existing estimation method sometimes does not work properly. The authors offer an alternative method that allows the droop and deadband actual values to be estimated more accurately. This method was implemented as a software application.
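
    The report's algorithm is not given in the abstract; as an illustration of the estimation task, the sketch below fits droop and deadband to a recorded frequency-deviation/power-response pair by scanning candidate deadbands and regressing the response outside each one (the ratings and noise level are assumptions):

        import numpy as np

        f_nom, P_nom = 50.0, 300.0                  # Hz, MW (assumed ratings)
        df = np.linspace(-0.2, 0.2, 401)            # frequency deviation, Hz
        true_db, true_droop = 0.02, 0.05            # 20 mHz deadband, 5% droop
        x_true = np.where(np.abs(df) > true_db, df - np.sign(df) * true_db, 0.0)
        dP = -x_true / f_nom / true_droop * P_nom \
             + np.random.default_rng(0).normal(0.0, 0.5, df.size)

        best_db, best_res, best_slope = 0.0, np.inf, -1.0
        for db in np.linspace(0.0, 0.1, 101):       # scan candidate deadbands
            x = np.where(np.abs(df) > db, df - np.sign(df) * db, 0.0)
            k, *_ = np.linalg.lstsq(x[:, None], dP, rcond=None)
            res = np.sum((dP - x * k[0]) ** 2)
            if res < best_res:
                best_db, best_res, best_slope = db, res, k[0]

        droop = -(P_nom / f_nom) / best_slope       # invert dP = k * x
        print(f"deadband ~ {best_db*1e3:.0f} mHz, droop ~ {droop*100:.1f} %")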

  10. Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.

    PubMed

    Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J

    1993-05-01

    This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data from PEF tests performed by the patients in their own homes and places of work. Unfortunately, there are high error rates in data produced and recorded by the patient; most of these are transcription errors, and some patients falsify their records. The PEF measurement itself is not effort-independent: the data produced depend on the way in which the patient performs the test. Patients are taught how to perform the test, giving maximal effort to the expiration being measured; if the measurement is performed incorrectly, errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced, and a commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. In investigating current methods of measuring PEF and other pulmonary quantities, a greater understanding was obtained of the limitations of current methods of measurement and of the quantities being measured. (ABSTRACT TRUNCATED AT 250 WORDS)

  11. Accuracy evaluation of fluoroscopy-based 2D and 3D pose reconstruction with unicompartmental knee arthroplasty.

    PubMed

    Van Duren, B H; Pandit, H; Beard, D J; Murray, D W; Gill, H S

    2009-04-01

    The recent development in Oxford lateral unicompartmental knee arthroplasty (UKA) design requires a valid method of assessing its kinematics, in particular the use of single-plane fluoroscopy to reconstruct the 3D kinematics of the implanted knee. The method has been used previously to investigate the kinematics of UKA, but mostly it has been used in conjunction with total knee arthroplasty (TKA), and no accuracy assessment of the method when used for UKA has previously been reported. In this study we performed computer simulation tests to investigate the effect that the different geometry of the unicompartmental implant has on the accuracy of the method in comparison to total knee implants, and a phantom was built to perform in vitro tests to determine the accuracy of the method for UKA. The computer simulations suggested that the use of the method for UKA would prove less accurate than for TKAs, with the rotational degrees of freedom for the femur showing the greatest disparity between UKA and TKA. The phantom tests showed that the in-plane translations were accurate to <0.5 mm RMS, while the out-of-plane translations were less accurate at 4.1 mm RMS. The rotational accuracies were between 0.6 degrees and 2.3 degrees, which is less accurate than values reported in the literature for TKA; however, the method is sufficient for studying overall knee kinematics.

  12. Compression-based distance (CBD): a simple, rapid, and accurate method for microbiota composition comparison

    PubMed Central

    2013-01-01

    Background: Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. Results: We create a metric, called compression-based distance (CBD), for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. Conclusion: CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets. PMID:23617892

  13. Compression-based distance (CBD): a simple, rapid, and accurate method for microbiota composition comparison.

    PubMed

    Yang, Fang; Chia, Nicholas; White, Bryan A; Schook, Lawrence B

    2013-04-23

    Perturbations in intestinal microbiota composition have been associated with a variety of gastrointestinal tract-related diseases. The alleviation of symptoms has been achieved using treatments that alter the gastrointestinal tract microbiota toward that of healthy individuals. Identifying differences in microbiota composition through the use of 16S rRNA gene hypervariable tag sequencing has profound health implications. Current computational methods for comparing microbial communities are usually based on multiple alignments and phylogenetic inference, making them time consuming and requiring exceptional expertise and computational resources. As sequencing data rapidly grows in size, simpler analysis methods are needed to meet the growing computational burdens of microbiota comparisons. Thus, we have developed a simple, rapid, and accurate method, independent of multiple alignments and phylogenetic inference, to support microbiota comparisons. We create a metric, called compression-based distance (CBD), for quantifying the degree of similarity between microbial communities. CBD uses the repetitive nature of hypervariable tag datasets and well-established compression algorithms to approximate the total information shared between two datasets. Three published microbiota datasets were used as test cases for CBD as an applicable tool. Our study revealed that CBD recaptured 100% of the statistically significant conclusions reported in the previous studies, while achieving a decrease in computational time required when compared to similar tools without expert user intervention. CBD provides a simple, rapid, and accurate method for assessing distances between gastrointestinal tract microbiota 16S hypervariable tag datasets.
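
    Neither record spells out the CBD formula. The metric is in the spirit of the well-known normalized compression distance, which the following Python sketch illustrates with gzip (the published CBD may differ in its exact normalization):

        import gzip

        def csize(data: bytes) -> int:
            """Compressed size in bytes."""
            return len(gzip.compress(data, compresslevel=9))

        def compression_distance(x: bytes, y: bytes) -> float:
            # Shared repetitive content compresses away when the two
            # datasets are concatenated, so similar inputs score low.
            cx, cy, cxy = csize(x), csize(y), csize(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        # Stand-ins for 16S hypervariable tag datasets.
        a = b"ACGTACGTACGT" * 200
        b = b"ACGTACGAACGT" * 200
        c = b"TTGCAGGGCCAA" * 200

        print(compression_distance(a, b))   # similar communities -> small
        print(compression_distance(a, c))   # dissimilar -> larger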

  14. Development of Star Tracker System for Accurate Estimation of Spacecraft Attitude

    DTIC Science & Technology

    2009-12-01

    For a high-cost spacecraft with accurate pointing requirements, the use of a star tracker is the preferred method for attitude determination. The...solutions, however there are certain costs with using this algorithm. There are significantly more features a triangle can provide when compared to an...to the other. The non-rotating geocentric equatorial frame provides an inertial frame for the two-body problem of a satellite in orbit. In this

  15. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, Mark W.; George, William A.

    1987-01-01

    A process for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. This involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution comprised of glacial acetic acid and H₂O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg₂Cl₂. This involves dissolving a precise amount of Hg₂Cl₂ in an electrolyte solution comprised of concentrated HCl and H₂O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required, pre-determined quantity of Hg.

  16. Fast-tracking determination of homozygous transgenic lines and transgene stacking using a reliable quantitative real-time PCR assay.

    PubMed

    Wang, Xianghong; Jiang, Daiming; Yang, Daichang

    2015-01-01

    The selection of homozygous lines is a crucial step in the characterization of newly generated transgenic plants, and it is particularly time- and labor-consuming when transgene stacking is required. Here, we report a fast and accurate method based on quantitative real-time PCR, with the rice gene RBE4 as a reference gene, for the selection of homozygous lines when stacking multiple transgenes in rice. This method can be used to determine the stacking of up to three transgenes within four generations. Selection accuracy reached 100% for a single locus and 92.3% for two loci. The method confers distinct advantages over current transgenic research methodologies, as it is more accurate, rapid, and reliable. Therefore, this protocol could be used to efficiently select homozygous plants and to expedite the time- and labor-consuming processes normally required for multiple transgene stacking. The protocol was standardized for determination of multiple gene stacking in molecular breeding via marker-assisted selection.
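
    The abstract does not give the calling rule, but a common way to score zygosity from qPCR data is relative copy number via 2^-ΔΔCt against a known hemizygous calibrator; the sketch below uses that convention with hypothetical Ct values and thresholds:

        def relative_copy_number(ct_t, ct_ref, ct_t_cal, ct_ref_cal):
            """2^-ddCt relative to a known hemizygous calibrator line."""
            ddct = (ct_t - ct_ref) - (ct_t_cal - ct_ref_cal)
            return 2.0 ** (-ddct)

        def call_zygosity(rq, lo=0.75, hi=1.5):
            # The hemizygous calibrator carries one transgene copy, so
            # rq ~ 1 suggests hemizygous and rq ~ 2 homozygous; the
            # thresholds here are illustrative, not from the paper.
            if rq < lo:
                return "null segregant"
            return "homozygous" if rq > hi else "hemizygous"

        # Hypothetical Ct values for a transgene and the RBE4 reference.
        rq = relative_copy_number(ct_t=24.1, ct_ref=22.0,
                                  ct_t_cal=25.3, ct_ref_cal=22.1)
        print(f"relative quantity {rq:.2f} -> {call_zygosity(rq)}")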

  17. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, M.W.; George, W.A.

    1987-07-07

    A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. This involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution comprised of glacial acetic acid and H₂O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg₂Cl₂. This involves dissolving a precise amount of Hg₂Cl₂ in an electrolyte solution comprised of concentrated HCl and H₂O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required, pre-determined quantity of Hg. 1 fig.

  18. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time-dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being carried out to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that have resulted from this work. A review of computational aeroacoustics has recently been given by Lele.

  19. Mobility-based correction for accurate determination of binding constants by capillary electrophoresis-frontal analysis.

    PubMed

    Qian, Cheng; Kovalchik, Kevin A; MacLennan, Matthew S; Huang, Xiaohua; Chen, David D Y

    2017-06-01

    Capillary electrophoresis frontal analysis (CE-FA) can be used to determine the binding affinity of molecular interactions. However, its current data processing method mandates specific requirements on the mobilities of the binding pair in order to obtain accurate binding constants. This work shows that significant errors result when the mobilities of the interacting species do not meet these requirements, so the applicability of CE-FA in many real-world applications becomes questionable. An electrophoretic mobility-based correction method is developed in this work based on the flux of each species. A simulation program and a pair of model compounds are used to verify the new equations and evaluate the effectiveness of this method. Ibuprofen and hydroxypropyl-β-cyclodextrin are used to demonstrate the differences in the binding constant obtained by CE-FA when different calculation methods are used, and the results are compared with those obtained by affinity capillary electrophoresis (ACE). The results suggest that CE-FA, with the mobility-based correction method, can be a generally applicable method for a much wider range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

    Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity, so as to obtain accurate position estimates. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge the proposed method requires is the approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate the proposed method and compare it with the analytical integration approach, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
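
    As a rough illustration of the adaptive stage (the paper's full pipeline, including the linear filtering stage, is not reproduced), the sketch below runs a band-limited Fourier linear combiner adapted by LMS over an assumed tremor band; because the reference signals span only that band, slow drift is largely rejected:

        import numpy as np

        fs = 250.0
        t = np.arange(0.0, 10.0, 1.0 / fs)
        motion = 1.5 * np.sin(2 * np.pi * 9.0 * t)   # quasi-periodic motion, mm
        measured = motion + 0.3 * t                  # plus slow integration drift

        freqs = np.arange(7.0, 13.0, 0.5)            # assumed band of frequencies
        mu = 0.01                                    # LMS adaptation gain
        w = np.zeros(2 * freqs.size)
        estimate = np.zeros_like(t)

        for k, tk in enumerate(t):
            x = np.concatenate([np.sin(2 * np.pi * freqs * tk),
                                np.cos(2 * np.pi * freqs * tk)])
            y = w @ x                  # combiner output, band-limited by design
            w += 2 * mu * (measured[k] - y) * x      # LMS weight update
            estimate[k] = y

        rms = np.sqrt(np.mean((estimate[-int(fs):] - motion[-int(fs):]) ** 2))
        print(f"RMS error over the final second: {rms:.2f} mm")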

  1. a Method of Generating dem from Dsm Based on Airborne Insar Data

    NASA Astrophysics Data System (ADS)

    Lu, W.; Zhang, J.; Xue, G.; Wang, C.

    2018-04-01

    Traditional terrestrial survey methods for acquiring DEMs cannot meet the requirement of acquiring large quantities of data in real time, whereas a DSM can be obtained quickly using dual-antenna synthetic aperture radar interferometry, and a DEM generated from the DSM is faster and more accurate to produce. It is therefore important to derive DEMs from DSMs based on airborne InSAR data, and this paper addresses a method for doing so accurately. Two steps are applied to acquire an accurate DEM. First, when the DSM is generated by interferometry, unavoidable factors such as layover and shadow produce gross errors that affect data accuracy, so an adaptive threshold segmentation method is adopted to remove the gross errors, with the threshold selected according to the coherence of the interferometry. Second, the DEM is generated by a progressive triangulated irregular network densification filtering algorithm. Finally, the experimental results are compared with existing high-precision DEM results. The results show that this method can effectively filter out buildings, vegetation and other objects to obtain a high-precision DEM.
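
    A minimal sketch of the first step, assuming the coherence-based rule amounts to masking low-coherence pixels and filling them from their neighbourhood (the paper's exact threshold rule is not given):

        import numpy as np

        def clean_dsm(dsm, coherence, base_threshold=0.4):
            """Mask gross errors where coherence is low, then fill holes."""
            # Adapt the threshold to the scene: never accept pixels below
            # the base value, and drop the worst 10% in any case.
            thr = max(base_threshold, np.percentile(coherence, 10))
            masked = np.where(coherence >= thr, dsm, np.nan)
            filled = masked.copy()
            for i, j in np.argwhere(np.isnan(masked)):
                win = masked[max(i - 2, 0):i + 3, max(j - 2, 0):j + 3]
                v = np.nanmedian(win)                # neighbourhood median
                filled[i, j] = v if np.isfinite(v) else np.nanmedian(masked)
            return filled

        rng = np.random.default_rng(0)
        dsm = rng.normal(100.0, 5.0, (50, 50))       # synthetic heights, m
        coh = rng.uniform(0.2, 1.0, (50, 50))        # synthetic coherence
        print(clean_dsm(dsm, coh).shape)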

  2. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  3. Characterizing dispersal patterns in a threatened seabird with limited genetic structure

    Treesearch

    Laurie A. Hall; Per J. Palsboll; Steven R. Beissinger; James T. Harvey; Martine Berube; Martin G. Raphael; Kim Nelson; Richard T. Golightly; Laura McFarlane-Tranquilla; Scott H. Newman; M. Zachariah Peery

    2009-01-01

    Genetic assignment methods provide an appealing approach for characterizing dispersal patterns on ecological time scales, but require sufficient genetic differentiation to accurately identify migrants and a large enough sample size of migrants to, for example, compare dispersal between sexes or age classes. We demonstrate that assignment methods can be rigorously used...

  4. A hydrogen gas-water equilibration method produces accurate and precise stable hydrogen isotope ratio measurements in nutrition studies

    USDA-ARS?s Scientific Manuscript database

    Stable hydrogen isotope methodology is used in nutrition studies to measure growth, breast milk intake, and energy requirement. Isotope ratio MS is the best instrumentation to measure the stable hydrogen isotope ratios in physiological fluids. Conventional methods to convert physiological fluids to ...

  5. Accuracy of airspeed measurements and flight calibration procedures

    NASA Technical Reports Server (NTRS)

    Huston, Wilber B

    1948-01-01

    The sources of error that may enter into the measurement of airspeed by pitot-static methods are reviewed in detail together with methods of flight calibration of airspeed installations. Special attention is given to the problem of accurate measurements of airspeed under conditions of high speed and maneuverability required of military airplanes. (author)

  6. Multiscale Reactive Molecular Dynamics

    DTIC Science & Technology

    2012-08-15

    biology cannot be described without considering electronic and nuclear-level dynamics and their coupling to slower, cooperative motions of the system. These inherently multiscale problems require computationally efficient and accurate methods to...condensed phase systems with computational efficiency orders of magnitude greater than currently possible with ab initio simulation methods, thus

  7. Adjusting slash pine growth and yield for silvicultural treatments

    Treesearch

    Stephen R. Logan; Barry D. Shiver

    2006-01-01

    With intensive silvicultural treatments such as fertilization and competition control now commonplace in today's slash pine (Pinus elliottii Engelm.) plantations, a method to adjust current growth and yield models is required to accurately account for yield increases due to these practices. Some commonly used ad-hoc methods, such as raising site...

  8. Digital simulation of scalar optical diffraction: revisiting chirp function sampling criteria and consequences.

    PubMed

    Voelz, David G; Roggemann, Michael C

    2009-11-10

    Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
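
    Following the abstract's classification, a small helper can flag the sampling regime of the Fresnel chirp for the single-FFT (direct) method, using the critical distance z_c = N*dx^2/lambda at which the chirp is critically sampled (a simplified criterion; the paper develops the consequences in detail):

        import numpy as np

        def chirp_sampling_regime(wavelength, dx, n, z):
            """Classify sampling of exp(i*pi*x^2/(wavelength*z)) on an
            n-point grid of pitch dx for single-FFT Fresnel propagation."""
            z_crit = n * dx**2 / wavelength      # critical sampling distance
            if np.isclose(z, z_crit, rtol=1e-3):
                return "ideally sampled"
            return "oversampled" if z > z_crit else "undersampled"

        lam, dx, n = 0.5e-6, 10e-6, 1024         # 500 nm, 10 um pitch
        for z in (0.05, n * dx**2 / lam, 1.0):   # metres
            print(f"z = {z:.4f} m -> {chirp_sampling_regime(lam, dx, n, z)}")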

  9. A discontinuous control volume finite element method for multi-phase flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Salinas, P.; Pavlidis, D.; Xie, Z.; Osman, H.; Pain, C. C.; Jackson, M. D.

    2018-01-01

    We present a new, high-order, control-volume-finite-element (CVFE) method for multiphase porous media flow with discontinuous 1st-order representation for pressure and discontinuous 2nd-order representation for velocity. The method has been implemented using unstructured tetrahedral meshes to discretize space. The method locally and globally conserves mass. However, unlike conventional CVFE formulations, the method presented here does not require the use of control volumes (CVs) that span the boundaries between domains with differing material properties. We demonstrate that the approach accurately preserves discontinuous saturation changes caused by permeability variations across such boundaries, allowing efficient simulation of flow in highly heterogeneous models. Moreover, accurate solutions are obtained at significantly lower computational cost than using conventional CVFE methods. We resolve a long-standing problem associated with the use of classical CVFE methods to model flow in highly heterogeneous porous media.

  10. Accuracy of least-squares methods for the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bochev, Pavel B.; Gunzburger, Max D.

    1993-01-01

    Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is the fact that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well-documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. It is found, despite the failure of standard methods for deriving error estimates, that computational evidence suggests that these methods are, at the least, nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.

  11. Application of singular value decomposition to structural dynamics systems with constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Pinson, L. D.

    1985-01-01

    Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and convenient in eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
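
    A compact version of the idea in Python: build a null-space basis for the constraint matrix from the SVD, project the mass and stiffness matrices onto it, and solve the reduced eigenproblem (illustrative only; the paper works with general structural models):

        import numpy as np

        def constrained_frequencies(M, K, A, tol=1e-10):
            """Eliminate constraints A q = 0 via an SVD null-space basis."""
            _, s, Vt = np.linalg.svd(A)
            rank = int(np.sum(s > tol * s[0]))
            T = Vt[rank:].T                     # columns span null(A)
            Mr, Kr = T.T @ M @ T, T.T @ K @ T   # reduced system matrices
            w2 = np.linalg.eigvals(np.linalg.solve(Mr, Kr))
            return np.sqrt(np.sort(w2.real))

        # Two masses rigidly linked (q1 = q2): one constrained mode remains.
        M = np.diag([1.0, 2.0])
        K = np.array([[20.0, -4.0], [-4.0, 12.0]])
        A = np.array([[1.0, -1.0]])
        print("natural frequencies (rad/s):", constrained_frequencies(M, K, A))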

  12. Finite difference time domain (FDTD) method for modeling the effect of switched gradients on the human body in MRI.

    PubMed

    Zhao, Huawei; Crozier, Stuart; Liu, Feng

    2002-12-01

    Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model. Copyright 2002 Wiley-Liss, Inc.

  13. Easy Leaf Area: Automated digital image analysis for rapid and accurate measurement of leaf area.

    PubMed

    Easlon, Hsien Ming; Bloom, Arnold J

    2014-07-01

    Measurement of leaf areas from digital photographs has traditionally required significant user input unless backgrounds are carefully masked. Easy Leaf Area was developed to batch process hundreds of Arabidopsis rosette images in minutes, removing background artifacts and saving results to a spreadsheet-ready CSV file. • Easy Leaf Area uses the color ratios of each pixel to distinguish leaves and calibration areas from their background and compares leaf pixel counts to a red calibration area to eliminate the need for camera distance calculations or manual ruler scale measurement that other software methods typically require. Leaf areas estimated by this software from images taken with a camera phone were more accurate than ImageJ estimates from flatbed scanner images. • Easy Leaf Area provides an easy-to-use method for rapid measurement of leaf area and nondestructive estimation of canopy area from digital images.
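
    A toy version of the pixel-ratio idea (the thresholds and the calibration protocol here are assumptions, not Easy Leaf Area's actual values):

        import numpy as np

        def leaf_area_cm2(rgb, calibration_cm2=4.0):
            """Estimate leaf area from a photo with a red calibration square."""
            img = rgb.astype(float) + 1e-6           # avoid division by zero
            r, g, b = img[..., 0], img[..., 1], img[..., 2]
            leaf = (g / r > 1.2) & (g / b > 1.2)     # green-dominant pixels
            calib = (r / g > 1.5) & (r / b > 1.5)    # red calibration square
            return leaf.sum() * calibration_cm2 / max(calib.sum(), 1)

        photo = np.full((400, 400, 3), 50, dtype=np.uint8)  # gray background
        photo[50:150, 50:250] = (30, 160, 40)    # synthetic leaf
        photo[300:340, 300:340] = (200, 20, 20)  # 4 cm^2 calibration square
        print(f"estimated leaf area: {leaf_area_cm2(photo):.1f} cm^2")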

  14. [A peak recognition algorithm designed for chromatographic peaks of transformer oil].

    PubMed

    Ou, Linjun; Cao, Jian

    2014-09-01

    In the field of chromatographic peak identification for transformer oil, the traditional first-order derivative approach requires a slope threshold to achieve peak identification. To address its shortcomings of low automation and susceptibility to distortion, the first-order derivative method was improved by applying a moving-average iterative method and normalized analysis techniques to identify the peaks. Accurate identification of the chromatographic peaks was realized through multiple iterations of the moving average of the signal and square-wave curves to determine the optimal value of the normalized peak identification parameters, combined with absolute peak retention times and a peak window. The experimental results show that this algorithm can accurately identify the peaks and is not sensitive to noise, chromatographic peak width, or peak shape changes. It has strong adaptability to meet the on-site requirements of online monitoring devices for dissolved gases in transformer oil.
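
    An illustrative rendering of the smoothing-plus-normalization scheme (the window size, iteration count, and height floor are assumptions):

        import numpy as np

        def find_peaks(signal, iterations=3, window=5, floor=0.05):
            """Iterated moving average, normalization, then peak picking."""
            smooth = np.asarray(signal, dtype=float)
            kernel = np.ones(window) / window
            for _ in range(iterations):             # multiple smoothing passes
                smooth = np.convolve(smooth, kernel, mode="same")
            norm = (smooth - smooth.min()) / (np.ptp(smooth) + 1e-12)
            # A sample is a peak if it exceeds both neighbours and the floor.
            mid = norm[1:-1]
            return np.where((mid > norm[:-2]) & (mid > norm[2:]) &
                            (mid > floor))[0] + 1

        t = np.linspace(0.0, 10.0, 1000)
        chrom = (np.exp(-(t - 3.0) ** 2 / 0.02)     # synthetic chromatogram
                 + 0.6 * np.exp(-(t - 7.0) ** 2 / 0.05)
                 + np.random.default_rng(2).normal(0.0, 0.01, t.size))
        print("peak retention times:", t[find_peaks(chrom)])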

  15. Study on photoelectric parameter measurement method of high capacitance solar cell

    NASA Astrophysics Data System (ADS)

    Zhang, Junchao; Xiong, Limin; Meng, Haifeng; He, Yingwei; Cai, Chuan; Zhang, Bifeng; Li, Xiaohui; Wang, Changshi

    2018-01-01

    High-efficiency solar cells usually have a high-capacitance characteristic, so measurement of their photoelectric performance usually requires a long pulse width and a long sweep time. The effects of irradiance non-uniformity, probe shielding and spectral mismatch on the I-V curve measurement are analyzed experimentally. A compensation method for the irradiance loss caused by probe shielding is proposed, realizing accurate measurement of the irradiance intensity during the I-V curve measurement of a solar cell. Based on the characteristic that the open-circuit voltage of a solar cell is sensitive to junction temperature, an accurate method for measuring the temperature of a solar cell under continuous irradiation is proposed. Finally, a measurement method with high accuracy and a wide application range for high-capacitance solar cells is presented.

  16. Automated particle correspondence and accurate tilt-axis detection in tilted-image pairs

    DOE PAGES

    Shatsky, Maxim; Arbelaez, Pablo; Han, Bong-Gyoon; ...

    2014-07-01

    Tilted electron microscope images are routinely collected for an ab initio structure reconstruction as a part of the Random Conical Tilt (RCT) or Orthogonal Tilt Reconstruction (OTR) methods, as well as for various applications using the "free-hand" procedure. These procedures all require identification of particle pairs in two corresponding images as well as accurate estimation of the tilt-axis used to rotate the electron microscope (EM) grid. Here we present a computational approach, PCT (particle correspondence from tilted pairs), based on tilt-invariant context and projection matching that addresses both problems. The method benefits from treating the two problems as a single optimization task. It automatically finds corresponding particle pairs and accurately computes tilt-axis direction even in the cases when the EM grid is not perfectly planar.

  17. Accurate Quantification of T Cells by Measuring Loss of Germline T-Cell Receptor Loci with Generic Single Duplex Droplet Digital PCR Assays.

    PubMed

    Zoutman, Willem H; Nell, Rogier J; Versluis, Mieke; van Steenderen, Debby; Lalai, Rajshri N; Out-Luiting, Jacoba J; de Lange, Mark J; Vermeer, Maarten H; Langerak, Anton W; van der Velden, Pieter A

    2017-03-01

    Quantifying T cells accurately in a variety of tissues of benign, inflammatory, or malignant origin can be of great importance in a variety of clinical applications. Flow cytometry and immunohistochemistry are considered to be gold-standard methods for T-cell quantification. However, these methods require fresh, frozen, or fixated cells and tissue of a certain quality. In addition, conventional and droplet digital PCR (ddPCR), whether or not followed by deep sequencing techniques, have been used to elucidate T-cell content by focusing on rearranged T-cell receptor (TCR) genes. These approaches typically target the whole TCR repertoire, thereby supplying additional information about TCR use. We alternatively developed and validated two novel generic single duplex ddPCR assays to quantify T cells accurately by measuring loss of specific germline TCR loci and compared them with flow cytometry-based quantification. These assays target sequences between the Dδ2 and Dδ3 genes (TRD locus) and Dβ1 and Jβ1.1 genes (TRB locus) that become deleted systematically early during lymphoid differentiation. Because these ddPCR assays require small amounts of DNA instead of freshly isolated, frozen, or fixated material, initially unanalyzable (scarce) specimens can be assayed from now on, supplying valuable information about T-cell content. Our ddPCR method provides a novel and sensitive way to quantify T cells that is relatively fast, accurate, and independent of the cellular context. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  18. A Lagrangian Transport Eulerian Reaction Spatial (LATERS) Markov Model for Prediction of Effective Bimolecular Reactive Transport

    NASA Astrophysics Data System (ADS)

    Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi

    2017-11-01

    Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. Depending on the problem at hand, a Lagrangian framework can offer benefits, while in other cases an Eulerian one has advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify the closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when the assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.

  19. Prospective regularization design in prior-image-based reconstruction

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2015-12-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.

  20. Design of Rail Instrumentation for Wind Tunnel Sonic Boom Measurements and Computational-Experimental Comparisons

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Elmiligui, A.; Aftosmis, M.; Morgenstern, J.; Durston, D.; Thomas, S.

    2012-01-01

    An innovative pressure rail concept for wind tunnel sonic boom testing of modern aircraft configurations with very low overpressures was designed with an adjoint-based solution-adapted Cartesian grid method. The computational method requires accurate free-air calculations of a test article as well as solutions modeling the influence of rail and tunnel walls. Specialized grids for accurate Euler and Navier-Stokes sonic boom computations were used on several test articles including complete aircraft models with flow-through nacelles. The computed pressure signatures are compared with recent results from the NASA 9- x 7-foot Supersonic Wind Tunnel using the advanced rail design.

  1. Justification of Estimates for Fiscal Year 1984 Submitted to Congress.

    DTIC Science & Technology

    1983-01-01

    sponsoring different aspects related to unique manufacturing methods than those pursued by DARPA, and duplication of effort is prevented by direct...weapons systems. Rapid and economical methods of satisfying these requirements must significantly precede weapons systems developments to prevent... methods for obtaining accurate and efficient geodetic measurements. Also, a major advanced sensor/G&G data collection capability is being undertaken by DNA

  2. Spatial Statistics for Tumor Cell Counting and Classification

    NASA Astrophysics Data System (ADS)

    Wirjadi, Oliver; Kim, Yoo-Jin; Breuel, Thomas

    To count and classify cells in histological sections is a standard task in histology. One example is the grading of meningiomas, benign tumors of the meninges, which requires assessing the fraction of proliferating cells in an image. As this process is very time consuming when performed manually, automation is required. To address such problems, we propose a novel application of Markov point process methods in computer vision, leading to algorithms for computing the locations of circular objects in images. In contrast to previous algorithms using such spatial statistics methods in image analysis, the present one is fully trainable; this is achieved by combining point process methods with statistical classifiers. Using simulated data, the method proposed in this paper is shown to be more accurate and more robust to noise than standard image processing methods. On the publicly available SIMCEP benchmark for cell image analysis algorithms, the cell count performance of the present method is significantly more accurate than results published elsewhere, especially when cells form dense clusters. Furthermore, the proposed system performs as well as a state-of-the-art algorithm for the computer-aided histological grading of meningiomas when combined with a simple k-nearest neighbor classifier for identifying proliferating cells.

  3. A Simple Method for Reproducing Orbital Plots for Illustration Using Microsoft Paint and Microsoft Excel

    NASA Astrophysics Data System (ADS)

    Niebuhr, Cole

    2018-04-01

    Papers published in the astronomical community, particularly in the field of double star research, often contain plots that display the positions of the component stars relative to each other on a Cartesian coordinate plane. Due to the complexities of plotting a three-dimensional orbit into a two-dimensional image, it is often difficult to include an accurate reproduction of the orbit for comparison purposes. Methods to circumvent this obstacle do exist; however, many of these protocols result in low-quality blurred images or require specific and often expensive software. Here, a method is reported using Microsoft Paint and Microsoft Excel to produce high-quality images with an accurate reproduction of a partial orbit.

  4. One-dimensional wave bottom boundary layer model comparison: specific eddy viscosity and turbulence closure models

    USGS Publications Warehouse

    Puleo, J.A.; Mouraenko, O.; Hanes, D.M.

    2004-01-01

    Six one-dimensional-vertical wave bottom boundary layer models are analyzed based on different methods for estimating the turbulent eddy viscosity: Laminar, linear, parabolic, k—one equation turbulence closure, k−ε—two equation turbulence closure, and k−ω—two equation turbulence closure. Resultant velocity profiles, bed shear stresses, and turbulent kinetic energy are compared to laboratory data of oscillatory flow over smooth and rough beds. Bed shear stress estimates for the smooth bed case were most closely predicted by the k−ω model. Normalized errors between model predictions and measurements of velocity profiles over the entire computational domain collected at 15° intervals for one-half a wave cycle show that overall the linear model was most accurate. The least accurate were the laminar and k−ε models. Normalized errors between model predictions and turbulence kinetic energy profiles showed that the k−ω model was most accurate. Based on these findings, when the smallest overall velocity profile prediction error is required, the processing requirements and error analysis suggest that the linear eddy viscosity model is adequate. However, if accurate estimates of bed shear stress and TKE are required then, of the models tested, the k−ω model should be used.

  5. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
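
    The heart of the method, spectral evaluation of spatial derivatives, fits in a few lines of Python (a 1-D analogue; the paper works in three dimensions with the full propagation machinery):

        import numpy as np

        # Differentiate u(x) by multiplying its FFT by i*k and inverting.
        n, length = 256, 2.0 * np.pi
        x = np.arange(n) * length / n
        u = np.sin(3.0 * x)

        k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # wavenumbers
        dudx = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

        err = np.max(np.abs(dudx - 3.0 * np.cos(3.0 * x)))
        print(f"max error vs analytic derivative: {err:.2e}")  # spectral accuracy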

  6. Reproducible analyses of microbial food for advanced life support systems

    NASA Technical Reports Server (NTRS)

    Petersen, Gene R.

    1988-01-01

    The use of yeasts in controlled ecological life support systems (CELSS) for microbial food regeneration in space required the accurate and reproducible analysis of intracellular carbohydrate and protein levels. The reproducible analysis of glycogen was a key element in estimating overall content of edibles in candidate yeast strains. Typical analytical methods for estimating glycogen in Saccharomyces were not found to be entirely applicable to other candidate strains. Rigorous cell lysis coupled with acid/base fractionation followed by specific enzymatic glycogen analyses were required to obtain accurate results in two strains of Candida. A profile of edible fractions of these strains was then determined. The suitability of yeasts as food sources in CELSS food production processes is discussed.

  7. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers

    PubMed Central

    Han, Buhm; Kang, Hyun Min; Eskin, Eleazar

    2009-01-01

    With the development of high-throughput sequencing and genotyping technologies, the number of markers collected in genetic association studies is growing rapidly, increasing the importance of methods for correcting for multiple hypothesis testing. The permutation test is widely considered the gold standard for accurate multiple testing correction, but it is often computationally impractical for these large datasets. Recently, several studies proposed efficient alternative approaches to the permutation test based on the multivariate normal distribution (MVN). However, they cannot accurately correct for multiple testing in genome-wide association studies for two reasons. First, these methods require partitioning of the genome into many disjoint blocks and ignore all correlations between markers from different blocks. Second, the true null distribution of the test statistic often fails to follow the asymptotic distribution at the tails of the distribution. We propose an accurate and efficient method for multiple testing correction in genome-wide association studies—SLIDE. Our method accounts for all correlation within a sliding window and corrects for the departure of the true null distribution of the statistic from the asymptotic distribution. In simulations using the Wellcome Trust Case Control Consortium data, the error rate of SLIDE's corrected p-values is more than 20 times smaller than the error rate of the previous MVN-based methods' corrected p-values, while SLIDE is orders of magnitude faster than the permutation test and other competing methods. We also extend the MVN framework to the problem of estimating the statistical power of an association study with correlated markers and propose an efficient and accurate power estimation method SLIP. SLIP and SLIDE are available at http://slide.cs.ucla.edu. PMID:19381255

  8. METHODS STUDIES FOR THE NATIONAL CHILDREN'S STUDY: SEMIPERMEABLE MEMBRANE DEVICE (SPMD)

    EPA Science Inventory

    Accurate exposure classification tools are required to link exposure with health effects in epidemiological studies. Although long-term integrated exposure measurements are a critical component of exposure assessment, the ability to include these measurements into epidemiologic...

  9. METHODS STUDIES FOR THE NATIONAL CHILDREN'S STUDY: MOLECULARLY IMPRINTED POLYMERS

    EPA Science Inventory

    Accurate exposure classification tools are required to link exposure with health effects in epidemiological studies. Although long-term integrated exposure measurements are a critical component of exposure assessment, the ability to include these measurements into epidemiologic...

  10. Noise Reduction in High-Throughput Gene Perturbation Screens

    USDA-ARS?s Scientific Manuscript database

    Motivation: Accurate interpretation of perturbation screens is essential for a successful functional investigation. However, the screened phenotypes are often distorted by noise, and their analysis requires specialized statistical analysis tools. The number and scope of statistical methods available...

  11. Precise and accurate assay of pregnenolone and five other neurosteroids in monkey brain tissue by LC-MS/MS.

    PubMed

    Dury, Alain Y; Ke, Yuyong; Labrie, Fernand

    2016-09-01

    A series of steroids present in the brain have been named "neurosteroids" following suggestions of their possible role in central nervous system impairments such as anxiety disorders, depression, premenstrual dysphoric disorder (PMDD), addiction, and even neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. Study of their potential role requires a sensitive and accurate assay of their concentration in the monkey brain, the closest model to the human. We have thus developed a robust, precise and accurate liquid chromatography-tandem mass spectrometry method for the assay of pregnenolone, pregnanolone, epipregnanolone, allopregnanolone, epiallopregnanolone, and androsterone in the cynomolgus monkey brain. The extraction method includes a thorough sample cleanup using protein precipitation and phospholipid removal, followed by hexane liquid-liquid extraction and a Girard T ketone-specific derivatization. This method opens the possibility of investigating the potential implication of these six steroids in the most suitable animal model for neurosteroid-related research. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Algorithms and architecture for multiprocessor based circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deutsch, J.T.

    Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch-level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell-level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective-trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.

  13. Using Multiple Barometers to Detect the Floor Location of Smart Phones with Built-in Barometric Sensors for Indoor Positioning

    PubMed Central

    Xia, Hao; Wang, Xiaogang; Qiao, Yanyou; Jian, Jun; Chang, Yuanfei

    2015-01-01

    Following the popularity of smart phones and the development of the mobile Internet, demand for accurate indoor positioning has grown rapidly in recent years. Previous indoor positioning methods focused on plane locations on a floor and did not provide accurate floor positioning. In this paper, we propose a method that uses multiple barometers as references for the floor positioning of smart phones with built-in barometric sensors. Some related studies used the barometric formula to estimate the altitude of mobile devices and compared the altitude with the heights of the floors in a building to obtain the floor number. These studies assume that the accurate height of each floor is known, which is not always the case. They also did not consider the difference in the barometric-pressure pattern at different floors, which may lead to errors in the altitude computation. Our method does not require knowledge of the accurate heights of buildings and stories. It is robust, less sensitive to factors such as temperature and humidity, and considers the difference in barometric-pressure change trends at different floors. We performed a series of experiments to validate the effectiveness of this method, and the results are encouraging. PMID:25835189

  14. Accurate optimization of amino acid form factors for computing small-angle X-ray scattering intensity of atomistic protein structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Dudu; Yang, Sichun; Lu, Lanyuan

    2016-06-20

    Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell represented by explicit CG water molecules and the correction of protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by the q range of the SAXS data) become necessary for effective structure modelling.

  15. A synthetic visual plane algorithm for visibility computation in consideration of accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang

    2017-12-01

    Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of line-of-sight information from all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We used discretization of the horizon to gain good performance in efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width): the method is more accurate at smaller zone widths, but this requires a longer operating time, so users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast or faster than R2. Although SVP performs worse than reference plane and depth map with respect to efficiency, it is superior in accuracy to these other two algorithms.
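
    The core test that SVP builds on can be sketched compactly: walking outward from the viewer along a ray, a point is visible only if its elevation angle exceeds the running horizon formed by all nearer points. The NumPy sketch below shows that running-horizon test along a single ray; it illustrates the principle only, not the authors' SVP implementation with its synthesized global horizon and zone discretization, and the array values are made up.

    ```python
    import numpy as np

    def visible_along_ray(elev, viewer_height=1.7):
        """Mark cells visible from index 0 along one ray: a cell is seen
        only if its elevation angle beats the horizon of nearer cells."""
        vz = elev[0] + viewer_height
        horizon = -np.inf                   # steepest angle seen so far
        visible = np.zeros(elev.size, dtype=bool)
        visible[0] = True
        for i in range(1, elev.size):
            angle = (elev[i] - vz) / i      # tangent of elevation angle
            if angle > horizon:
                visible[i] = True
                horizon = angle             # this cell raises the horizon
        return visible

    print(visible_along_ray(np.array([10.0, 12, 11, 15, 13, 16])))
    # [ True  True False  True False False]
    ```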

  16. Low-cycle fatigue testing methods

    NASA Technical Reports Server (NTRS)

    Lieurade, H. P.

    1978-01-01

    Sound design of highly stressed mechanical components requires accurate knowledge of the service behavior of materials. The main methods available to designers are: determination of the mechanical properties of the material after cyclic stabilization; plotting of resistance-to-plastic-deformation curves; assessment of the effect of temperature on low-cycle fatigue life; and simulation of the behavior of notched parts.

  17. Smart fast blood counting of trace volumes of body fluids from various mammalian species using a compact custom-built microscope cytometer (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Smith, Zachary J.; Gao, Tingjuan; Lin, Tzu-Yin; Carrade-Holt, Danielle; Lane, Stephen M.; Matthews, Dennis L.; Dwyre, Denis M.; Wachsmann-Hogiu, Sebastian

    2016-03-01

    Cell counting in human body fluids such as blood, urine, and CSF is a critical step in the diagnostic process for many diseases. Current automated methods for cell counting are based on flow cytometry systems. However, these automated methods are bulky, costly, require significant user expertise, and are not well suited to counting cells in fluids other than blood. Therefore, their use is limited to large central laboratories that process enough blood volume to recoup the significant capital investment these instruments require. We present in this talk a combination of (1) a low-cost microscope system, (2) a simple sample preparation method, and (3) fully automated analysis designed to provide cell counts in blood and body fluids. We show results on humans as well as companion and farm animals, demonstrating that red cell, white cell, and platelet counts, as well as hemoglobin concentration, can be accurately obtained in blood, along with a 3-part white cell differential in human samples. We can also accurately count red and white cells in body fluids with a limit of detection about 3 orders of magnitude smaller than that of current automated instruments. This method uses less than 1 microliter of blood, and less than 5 microliters of body fluids, to make its measurements, making it highly compatible with finger-stick style collections, as well as appropriate for small animals such as laboratory mice where larger-volume blood collections are dangerous to the animal's health.

  18. Shrinkage regression-based methods for microarray missing value imputation.

    PubMed

    Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng

    2013-01-01

    Missing values commonly occur in the microarray data, which usually contain more than 5% missing values with up to 90% of genes affected. Inaccurate missing value estimation results in reducing the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. To further improve the performances of the regression-based methods, we propose shrinkage regression-based methods. Our methods take the advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. Besides, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
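
    As a rough illustration of the pipeline described above, the sketch below imputes a single missing entry: it selects the k genes most correlated with the target by Pearson correlation, fits least-squares coefficients, and then shrinks them. It assumes only the target gene's row contains missing values, and the uniform shrinkage factor is a stand-in for the paper's adjusted-coefficient estimator.

    ```python
    import numpy as np

    def impute_entry(X, gene, col, k=10, shrink=0.9):
        """Estimate the missing value X[gene, col] from the k genes most
        correlated with `gene` (Pearson), via shrunken least squares."""
        obs = ~np.isnan(X[gene])
        obs[col] = False                     # leave the target column out
        corr = np.array([abs(np.corrcoef(X[g, obs], X[gene, obs])[0, 1])
                         for g in range(X.shape[0])])
        corr[gene] = -1.0                    # never select the target itself
        nbrs = np.argsort(corr)[-k:]         # k most similar genes
        A = X[nbrs][:, obs].T                # design matrix (samples x k)
        beta, *_ = np.linalg.lstsq(A, X[gene, obs], rcond=None)
        return float(X[nbrs, col] @ (shrink * beta))
    ```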

  19. Using structure to explore the sequence alignment space of remote homologs.

    PubMed

    Kuziemko, Andrew; Honig, Barry; Petrey, Donald

    2011-10-01

    Protein structure modeling by homology requires an accurate sequence alignment between the query protein and its structural template. However, sequence alignment methods based on dynamic programming (DP) are typically unable to generate accurate alignments for remote sequence homologs, thus limiting the applicability of modeling methods. A central problem is that the alignment that is "optimal" in terms of the DP score does not necessarily correspond to the alignment that produces the most accurate structural model. That is, the correct alignment based on structural superposition will generally have a lower score than the optimal alignment obtained from sequence. Variations of the DP algorithm have been developed that generate alternative alignments that are "suboptimal" in terms of the DP score, but these still encounter difficulties in detecting the correct structural alignment. We present here a new alternative sequence alignment method that relies heavily on the structure of the template. By initially aligning the query sequence to individual fragments in secondary structure elements and combining high-scoring fragments that pass basic tests for "modelability", we can generate accurate alignments within a small ensemble. Our results suggest that the set of sequences that can currently be modeled by homology can be greatly extended.

  20. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes. © 2016 Elsevier Inc. All rights reserved.

  1. A k-Space Method for Moderately Nonlinear Wave Propagation

    PubMed Central

    Jing, Yun; Wang, Tianren; Clement, Greg T.

    2013-01-01

    A k-space method for moderately nonlinear wave propagation in absorptive media is presented. The Westervelt equation is first transferred into k-space via Fourier transformation, and is solved by a modified wave-vector time-domain scheme. The present approach is not limited to forward propagation or the parabolic approximation. One- and two-dimensional problems are investigated to verify the method by comparing results to analytic solutions and the finite-difference time-domain (FDTD) method. It is found that to obtain accurate results in homogeneous media, the grid size can be as little as two points per wavelength, and for a moderately nonlinear problem, the Courant–Friedrichs–Lewy number can be as large as 0.4. Through comparisons with the conventional FDTD method, the k-space method for nonlinear wave propagation is shown here to be computationally more efficient and accurate. The k-space method is then employed to study three-dimensional nonlinear wave propagation through the skull, which shows that relatively accurate focusing can be achieved in the brain at a high frequency by sending a low frequency from the transducer. Finally, implementations of the k-space method on a single graphics processing unit show that it requires about one-seventh the computation time of a single-core CPU calculation. PMID:22899114
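
    For the lossless linear limit of such a scheme, the Fourier-space update is exact: û(t+Δt) = 2 cos(c|k|Δt) û(t) − û(t−Δt). The sketch below implements that one-dimensional propagator with NumPy FFTs; the Westervelt nonlinearity and absorption handled by the paper's modified scheme are omitted, and all grid parameters are illustrative.

    ```python
    import numpy as np

    # 1-D lossless wave equation via the exact k-space propagator:
    # u_hat(t+dt) = 2*cos(c*|k|*dt)*u_hat(t) - u_hat(t-dt)
    N, L, c, dt = 256, 1.0, 1.0, 1e-3
    x = np.linspace(0, L, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
    prop = 2 * np.cos(c * np.abs(k) * dt)

    u_prev = np.exp(-((x - 0.5) ** 2) / 0.001)   # Gaussian pulse, at rest
    u = u_prev.copy()
    for _ in range(400):
        u_new = np.real(np.fft.ifft(prop * np.fft.fft(u))) - u_prev
        u_prev, u = u, u_new                     # pulse splits and travels
    ```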

  2. Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions

    NASA Astrophysics Data System (ADS)

    Chen, N.; Majda, A.

    2017-12-01

    Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only on the order of 100 ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
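
    A toy version of the mixture idea can be written in a few lines: each ensemble member contributes a kernel in the sampled variable times its analytic conditional Gaussian in the remaining variable. All names and values below are illustrative; the paper's closed-form conditional statistics come from a data assimilation framework not reproduced here.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    M = 100                              # small ensemble
    u_i = rng.standard_normal(M)         # sampled (low-dim) variable
    mean_i = 0.5 * u_i                   # analytic conditional means
    var_i = 0.2                          # analytic conditional variance

    def joint_pdf(u, v, h=0.3):
        """p(u,v) ~ (1/M) sum_i K_h(u - u_i) * N(v; mean_i, var_i):
        a kernel in u coupled with each member's Gaussian in v."""
        kern = norm.pdf(u, loc=u_i, scale=h)
        gauss = norm.pdf(v, loc=mean_i, scale=np.sqrt(var_i))
        return float(np.mean(kern * gauss))  # mixture integrates to 1

    print(joint_pdf(0.0, 0.0))
    ```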

  3. Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions

    NASA Technical Reports Server (NTRS)

    Pilon, Anthony R.; Lyrintzis, Anastasios S.

    1997-01-01

    The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. Additionally, the enhancements are generalized so that they may be used in any aeroacoustics problem.

  4. Hybrid Theory of Electron-Hydrogenic Systems Elastic Scattering

    NASA Technical Reports Server (NTRS)

    Bhatia, A. K.

    2007-01-01

    Accurate electron-hydrogen and electron-hydrogenic cross sections are required to interpret fusion experiments, laboratory plasma physics and properties of the solar and astrophysical plasmas. We have developed a method in which the short-range and long-range correlations can be included at the same time in the scattering equations. The phase shifts have rigorous lower bounds and the scattering lengths have rigorous upper bounds. The phase shifts in the resonance region can be used to calculate very accurately the resonance parameters.

  5. Path changing methods applied to the 4-D guidance of STOL aircraft.

    DOT National Transportation Integrated Search

    1971-11-01

    Prior to the advent of large-scale commercial STOL service, some challenging navigation and guidance problems must be solved. Proposed terminal area operations may require that these aircraft be capable of accurately flying complex flight paths, and ...

  6. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    NASA Astrophysics Data System (ADS)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
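
    One simple way to realize the hybrid combination described above is to fit a smooth global model to the sparse overlay measurements and then interpolate the measured residuals back onto the dense grid, so genuine local errors survive the up-sampling. The sketch below is a hedged illustration of that idea with synthetic data and a quadratic global model, not the authors' OCM implementation.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(1)
    xy = rng.uniform(-1, 1, (200, 2))            # sparse sample sites
    meas = 0.3 * xy[:, 0] ** 2 + 0.05 * rng.standard_normal(200)

    def basis(x, y):                             # quadratic global model
        return np.column_stack([np.ones_like(x), x, y,
                                x ** 2, y ** 2, x * y])

    G = basis(xy[:, 0], xy[:, 1])
    coef, *_ = np.linalg.lstsq(G, meas, rcond=None)
    resid = meas - G @ coef                      # measured local errors

    # Dense fingerprint = global model + interpolated measured residuals.
    gx, gy = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
    dense = (basis(gx.ravel(), gy.ravel()) @ coef
             + griddata(xy, resid, (gx, gy), method='nearest').ravel()
             ).reshape(gx.shape)
    ```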

  7. A rapid method of estimating the collision frequencies between the earth and the earth-crossing bodies

    NASA Technical Reports Server (NTRS)

    Su, Shin-Yi; Kessler, Donald J.

    1991-01-01

    The present study examines a very fast method of calculating the collision frequency between two low-eccentricity orbiting bodies for evaluating the evolution of earth-orbiting objects such as space debris. The results are very accurate and the required computer time is negligible. The method is now applied without modification to calculate the collision frequencies for moderately and highly eccentric orbits.

  8. Improvement of Computer Software Quality through Software Automated Tools.

    DTIC Science & Technology

    1986-08-31

    requirement for increased emphasis on software quality assurance has led to the creation of various methods of verification and validation. Experience...result was a vast array of methods, systems, languages and automated tools to assist in the process. Given that the primary role of quality assurance is...Unfortunately, there is no single method, tool or technique that can ensure accurate, reliable and cost effective software. Therefore, government and industry

  9. Integrals for IBS and beam cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burov, A.; /Fermilab

    Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in the stochastic cooling, wake fields and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied for the dispersion integral. Some methodical aspects of the IBS analysis are discussed.

  10. Integrals for IBS and Beam Cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burov, A.

    Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in the stochastic cooling, wake fields and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied for the dispersion integral. Some methodical aspects of the IBS analysis are discussed.
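
    The record does not spell out the author's fast method, but the dispersion integral it mentions is a Hilbert transform, and a standard fast route is FFT-based evaluation in O(N log N) instead of direct quadrature at every time step. A minimal sketch using scipy's analytic-signal routine:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    t = np.linspace(0, 1, 1024, endpoint=False)
    f = np.cos(2 * np.pi * 5 * t)

    analytic = hilbert(f)        # f + i*H[f], computed via the FFT
    Hf = np.imag(analytic)       # Hilbert transform of f

    # For cos(w t) the Hilbert transform is sin(w t):
    print(np.max(np.abs(Hf - np.sin(2 * np.pi * 5 * t))))  # ~1e-13
    ```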

  11. Robust and Accurate Shock Capturing Method for High-Order Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Atkins, Harold L.; Pampell, Alyssa

    2011-01-01

    A simple yet robust and accurate approach for capturing shock waves using a high-order discontinuous Galerkin (DG) method is presented. The method uses the physical viscous terms of the Navier-Stokes equations as suggested by others; however, the proposed formulation of the numerical viscosity is continuous and compact by construction, and does not require the solution of an auxiliary diffusion equation. This work also presents two analyses that guided the formulation of the numerical viscosity and certain aspects of the DG implementation. A local eigenvalue analysis of the DG discretization applied to a shock containing element is used to evaluate the robustness of several Riemann flux functions, and to evaluate algorithm choices that exist within the underlying DG discretization. A second analysis examines exact solutions to the DG discretization in a shock containing element, and identifies a "model" instability that will inevitably arise when solving the Euler equations using the DG method. This analysis identifies the minimum viscosity required for stability. The shock capturing method is demonstrated for high-speed flow over an inviscid cylinder and for an unsteady disturbance in a hypersonic boundary layer. Numerical tests are presented that evaluate several aspects of the shock detection terms. The sensitivity of the results to model parameters is examined with grid and order refinement studies.

  12. Group refractive index quantification using a Fourier domain short coherence Sagnac interferometer.

    PubMed

    Montonen, Risto; Kassamakov, Ivan; Lehmann, Peter; Österberg, Kenneth; Hæggström, Edward

    2018-02-15

    The group refractive index is important in length calibration of Fourier domain interferometers by transparent transfer standards. We demonstrate accurate group refractive index quantification using a Fourier domain short coherence Sagnac interferometer. Because of a justified linear length calibration function, the calibration constants cancel out in the evaluation of the group refractive index, which is then obtained accurately from two uncalibrated lengths. Measurements of two standard thickness coverslips revealed group indices of 1.5426±0.0042 and 1.5434±0.0046, with accuracies quoted at the 95% confidence level. This agreed with the dispersion data of the coverslip manufacturer and therefore validates our method. Our method provides a sample specific and accurate group refractive index quantification using the same Fourier domain interferometer that is to be calibrated for the length. This reduces significantly the requirements of the calibration transfer standard.
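
    The cancellation argument reduces to simple arithmetic: if the length calibration is linear, L_true = a·L_raw, the constant a drops out of the ratio of two raw readings, leaving the group index. A tiny sketch with illustrative numbers (not the paper's data):

    ```python
    # Uncalibrated interferometer readings of the same coverslip:
    L_optical_raw = 262.4     # optical-thickness reading (n_g * d, raw units)
    L_geometric_raw = 170.1   # geometric-thickness reading (d, raw units)

    # The calibration constant cancels in the ratio:
    n_group = L_optical_raw / L_geometric_raw
    print(f"group index = {n_group:.4f}")   # 1.5427 for these made-up values
    ```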

  13. Enhanced dual-frequency pattern scheme based on spatial-temporal fringes method

    NASA Astrophysics Data System (ADS)

    Wang, Minmin; Zhou, Canlin; Si, Shuchun; Lei, Zhenkun; Li, Xiaolei; Li, Hui; Li, YanJie

    2018-07-01

    One of the major challenges of employing a dual-frequency phase-shifting algorithm for phase retrieval is its sensitivity to noise. Yun et al. proposed a dual-frequency method based on Fourier transform profilometry, but its low-frequency lobes lie too close to each other for accurate band-pass filtering. In light of this problem, a novel dual-frequency pattern based on the spatial-temporal fringes (STF) method is developed in this paper. Three fringe patterns with two different frequencies are required. The low-frequency phase is obtained from two low-frequency fringe patterns by the STF method, so the signal lobes can be extracted accurately because they are far away from each other. The high-frequency phase is retrieved from the remaining fringe pattern without the impact of the DC component. Simulations and experiments are conducted to demonstrate the excellent precision of the proposed method.

  14. Creating analytically divergence-free velocity fields from grid-based data

    NASA Astrophysics Data System (ADS)

    Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.

    2016-10-01

    We present a method, based on B-splines, to calculate a C2 continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous, analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method yields qualitatively and quantitatively superior trajectories and more accurate identification of Lagrangian coherent structures.
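
    The identity the method relies on, div(curl A) = 0, also holds exactly for commuting central-difference operators, which is easy to verify numerically. The check below uses an arbitrary smooth analytic potential and is an illustration of the principle only; the paper's actual contribution, a C2 B-spline potential fitted to gridded velocity data, is not reproduced here.

    ```python
    import numpy as np

    n = 64
    h = 1.0 / (n - 1)
    x = np.linspace(0, 1, n)
    X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
    # Arbitrary smooth vector potential A = (Ax, Ay, Az):
    Ax = np.sin(np.pi * Y) * Z
    Ay = np.cos(np.pi * Z) * X
    Az = np.sin(np.pi * X * Y)

    def d(f, axis):                    # central differences in the interior
        return np.gradient(f, h, axis=axis)

    u = d(Az, 1) - d(Ay, 2)            # velocity = curl(A)
    v = d(Ax, 2) - d(Az, 0)
    w = d(Ay, 0) - d(Ax, 1)
    div = d(u, 0) + d(v, 1) + d(w, 2)
    # Interior divergence vanishes to roundoff (~1e-12 here):
    print(np.abs(div[2:-2, 2:-2, 2:-2]).max())
    ```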

  15. Isothermal separation processes

    NASA Technical Reports Server (NTRS)

    England, C.

    1982-01-01

    The isothermal processes of membrane separation, supercritical extraction and chromatography were examined using availability analysis. The general approach was to derive equations that identified where energy is consumed in these processes and how they compare with conventional separation methods. These separation methods are characterized by pure work inputs, chiefly in the form of a pressure drop which supplies the required energy. Equations were derived for the energy requirement in terms of regular solution theory. This approach is believed to accurately predict the work of separation in terms of the heat of solution and the entropy of mixing. It can form the basis of a convenient calculation method for optimizing membrane and solvent properties for particular applications. Calculations were made on the energy requirements for a membrane process separating air into its components.

  16. Coronary arteries segmentation based on the 3D discrete wavelet transform and 3D neutrosophic transform.

    PubMed

    Chen, Shuo-Tsung; Wang, Tzung-Dau; Lee, Wen-Jeng; Huang, Tsai-Wei; Hung, Pei-Kai; Wei, Cheng-Yu; Chen, Chung-Ming; Kung, Woon-Man

    2015-01-01

    Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study proposes a novel segmentation method for coronary arteries that allows automatic and accurate detection of coronary pathologies. The proposed segmentation method has two parts. First, 3D region growing is applied to give the initial segmentation of the coronary arteries. Next, the vessel information, carried by the HHH subband coefficients of the 3D DWT, is detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with a 3D neutrosophic transformation can accurately detect the coronary arteries, and each subbranch of the segmented arteries is segmented correctly by the proposed method. The results are compared with ground-truth values obtained from commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007, and indicate that the proposed method is more efficient. Based on the initial segmentation of coronary arteries obtained from 3D region growing, one-level 3D DWT and 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.

  17. Estimating net rainfall, evaporation and water storage of a bare soil from sequential L-band emissivities

    NASA Technical Reports Server (NTRS)

    Stroosnijder, L.; Lascano, R. J.; Newton, R. W.; Vanbavel, C. H. M.

    1984-01-01

    A general method is proposed that uses a time series of L-band emissivities as input to a hydrological model for continuously monitoring net rainfall and evaporation as well as the water content over the entire soil profile. The model requires a sufficiently accurate and general relation between soil emissivity and surface moisture content. A model was developed that requires the soil hydraulic properties as an additional input but does not need any weather data. The method is shown to be numerically consistent.

  18. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.

  19. Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods

    NASA Astrophysics Data System (ADS)

    Kozdon, J. E.; Wilcox, L.

    2013-12-01

    Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library and temporal adaptivity will be accomplished through local time stepping. In this presentation we will present the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.

  20. Semiautomated Segmentation of Polycystic Kidneys in T2-Weighted MR Images.

    PubMed

    Kline, Timothy L; Edwards, Marie E; Korfiatis, Panagiotis; Akkus, Zeynettin; Torres, Vicente E; Erickson, Bradley J

    2016-09-01

    The objective of the present study is to develop and validate a fast, accurate, and reproducible method that will increase and improve institutional measurement of total kidney volume and thereby avoid the higher costs, increased operator processing time, and inherent subjectivity associated with manual contour tracing. We developed a semiautomated segmentation approach, known as the minimal interaction rapid organ segmentation (MIROS) method, which results in human interaction during measurement of total kidney volume on MR images being reduced to a few minutes. This software tool automatically steps through slices and requires rough definition of kidney boundaries supplied by the user. The approach was verified on T2-weighted MR images of 40 patients with autosomal dominant polycystic kidney disease of varying degrees of severity. The MIROS approach required less than 5 minutes of user interaction in all cases. When compared with the ground-truth reference standard, MIROS showed no significant bias and had low variability (mean ± 2 SD, 0.19% ± 6.96%). The MIROS method will greatly facilitate future research studies in which accurate and reproducible measurements of cystic organ volumes are needed.

  1. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    PubMed

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
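
    The scale of the problem follows from binomial statistics: the number of observations needed to pin down a small proportion p grows like (1 − p)/p. A simplified calculation (treating each counted synapse as a Bernoulli trial, not the full disector bookkeeping) gives the flavor:

    ```python
    import math

    def n_synapses_needed(p, rel_err=0.2, z=1.96):
        """Synapses to classify so the 95% CI half-width is rel_err * p,
        under a simple binomial model of the sampling problem."""
        return math.ceil(z ** 2 * (1 - p) / (rel_err ** 2 * p))

    # A pathway at 0.2% of all synapses, pinned down to +/-20%:
    print(n_synapses_needed(0.002))   # ~48,000 synapses to examine
    ```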

  2. Estimating abundance and survival in the endangered Point Arena Mountain beaver using noninvasive genetic methods

    Treesearch

    William J. Zielinski; Fredrick V. Schlexer; T. Luke George; Kristine L. Pilgrim; Michael K. Schwartz

    2013-01-01

    The Point Arena mountain beaver (Aplodontia rufa nigra) is federally listed as an endangered subspecies that is restricted to a small geographic range in coastal Mendocino County, California. Management of this imperiled taxon requires accurate information on its demography and vital rates. We developed noninvasive survey methods, using hair snares to sample DNA and to...

  3. Quantifying the Thermal Fatigue of CPV Modules

    NASA Astrophysics Data System (ADS)

    Bosco, Nick; Kurtz, Sarah

    2010-10-01

    A method is presented to quantify thermal fatigue in the CPV die-attach from meteorological data. A comparative study between cities demonstrates a significant difference in the accumulated damage. These differences are most sensitive to the number of larger-ΔT thermal cycles experienced at a location. High-frequency data (<1/min) may be required to employ this method most accurately.
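
    A common way to turn cycle counts into a single damage number, plausibly similar in spirit to the method above although the paper's constants are not given here, is a Coffin-Manson-type life law combined with Miner's rule. The constants A and n below are placeholders, not values from the paper.

    ```python
    import numpy as np

    def miner_damage(delta_T, A=1e7, n=2.0):
        """Accumulated die-attach damage for a list of thermal-cycle
        ranges, assuming cycles-to-failure N_f = A * dT**-n (Coffin-
        Manson form) and linear damage summation (Miner's rule)."""
        delta_T = np.asarray(delta_T, dtype=float)
        N_f = A * delta_T ** (-n)          # cycles-to-failure per range
        return float(np.sum(1.0 / N_f))    # damage fraction (failure at 1)

    # Larger swings dominate: with n = 2, one 60 K cycle equals nine 20 K cycles.
    print(miner_damage([20] * 9), miner_damage([60]))
    ```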

  4. Applications of an automated stem measurer for precision forestry

    Treesearch

    N. Clark

    2001-01-01

    Accurate stem measurements are required for the determination of many silvicultural prescriptions, i.e., what are we going to do with a stand of trees. This would only be amplified in a precision forestry context. Many methods have been proposed for optimal ways to evaluate stems for a variety of characteristics. These methods usually involve the acquisition of total...

  5. Lattice Boltzmann model for simulation of magnetohydrodynamics

    NASA Technical Reports Server (NTRS)

    Chen, Shiyi; Chen, Hudong; Martinez, Daniel; Matthaeus, William

    1991-01-01

    A numerical method, based on a discrete Boltzmann equation, is presented for solving the equations of magnetohydrodynamics (MHD). The algorithm provides advantages similar to the cellular automaton method in that it is local and easily adapted to parallel computing environments. Because of much lower noise levels and less stringent requirements on lattice size, the method appears to be more competitive with traditional solution methods. Examples show that the model accurately reproduces both linear and nonlinear MHD phenomena.

  6. Selection of actuator locations for static shape control of large space structures by heuristic integer programing

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Adelman, H. M.

    1984-01-01

    Orbiting spacecraft such as large space antennas must maintain a highly accurate shape to operate satisfactorily. Such structures require active and passive controls to maintain an accurate shape under a variety of disturbances. Methods for the optimum placement of control actuators for correcting static deformations are described. In particular, attention is focused on the case where control locations have to be selected from a large set of available sites, so that integer programing methods are called for. The effectiveness of three heuristic techniques for obtaining a near-optimal site selection is compared. In addition, efficient reanalysis techniques for the rapid assessment of control effectiveness are presented. Two examples are used to demonstrate the methods: a simple beam structure and a 55-m space-truss parabolic antenna.

  7. Analysis and evaluation of methods for backcalculation of Mr values : volume 1 : research report : final report.

    DOT National Transportation Integrated Search

    1993-01-01

    Use of the 1986 AASHTO Design Guide requires accurate estimates of the resilient modulus of flexible pavement materials. Traditionally, these properties have been determined from either laboratory testing or by backcalculation from deflection data. S...

  8. Simple Approaches for Measuring Dry Atmospheric Nitrogen Deposition to Watersheds

    EPA Science Inventory

    Assessing the effects of atmospheric nitrogen (N) deposition on surface water quality requires accurate accounts of total N deposition (wet, dry, and cloud vapor); however, dry deposition is difficult to measure and is often spatially variable. Affordable passive sampling methods...

  9. A mobile phone user interface for image-based dietary assessment

    NASA Astrophysics Data System (ADS)

    Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A.; Boushey, Carol J.; Delp, Edward J.

    2014-02-01

    Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods, such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use.

  10. A Mobile Phone User Interface for Image-Based Dietary Assessment

    PubMed Central

    Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A.; Boushey, Carol J.; Delp, Edward J.

    2016-01-01

    Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods, such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use. PMID:28572696

  11. A Mobile Phone User Interface for Image-Based Dietary Assessment.

    PubMed

    Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A; Boushey, Carol J; Delp, Edward J

    2014-02-02

    Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods, such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use.

  12. PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences

    PubMed Central

    Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong

    2015-01-01

    We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate, slightly better than SATé trees, with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory. PMID:25549288

  13. An evaluation of methods for estimating decadal stream loads

    NASA Astrophysics Data System (ADS)

    Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-11-01

    Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicate that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.

  14. An evaluation of methods for estimating decadal stream loads

    USGS Publications Warehouse

    Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.

    2016-01-01

    Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen – lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale’s ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicate that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
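
    Beale's ratio estimator, singled out above for its accuracy and low bias, has a compact closed form. The sketch below implements one common version (finite-population correction omitted); l and q are paired sampled-day loads and flows, and mu_q is the mean flow over the full record. This is a textbook formulation, not code from the studies cited above.

    ```python
    import numpy as np

    def beale_load(l, q, mu_q):
        """Beale's bias-corrected ratio estimate of mean daily load."""
        l, q = np.asarray(l, float), np.asarray(q, float)
        n, lbar, qbar = l.size, l.mean(), q.mean()
        s_lq = np.cov(l, q, ddof=1)[0, 1]     # sample covariance
        s_qq = q.var(ddof=1)                  # sample flow variance
        return (mu_q * (lbar / qbar)
                * (1 + s_lq / (n * lbar * qbar))
                / (1 + s_qq / (n * qbar ** 2)))
    ```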

  15. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    PubMed

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.

  16. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-01-01

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods. PMID:29401681

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nutaro, James; Kuruganti, Teja

    Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
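
    The free-space reference that such a coarse-grid run can be checked against is the standard Friis pathloss, 20·log10(4πd/λ). A quick calculation:

    ```python
    import numpy as np

    def friis_pathloss_db(d_m, f_hz):
        """Free-space pathloss in dB at distance d_m and frequency f_hz."""
        lam = 3e8 / f_hz                    # wavelength in meters
        return 20 * np.log10(4 * np.pi * d_m / lam)

    print(friis_pathloss_db(1000.0, 900e6))   # ~91.5 dB at 1 km, 900 MHz
    ```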

  18. Gas Chromatograph/Mass Spectrometer

    NASA Technical Reports Server (NTRS)

    Wey, Chowen

    1995-01-01

    A gas chromatograph/mass spectrometer (GC/MS) is used to measure and identify combustion species present in trace concentrations. This advanced extractive diagnostic method measures concentrations down to parts per billion (ppb) and differentiates between different types of hydrocarbons. It is applicable for petrochemical, waste incinerator, diesel transportation, and electric utility companies in accurately monitoring the types of hydrocarbon emissions generated by fuel combustion, in order to meet stricter environmental requirements. Other potential applications include manufacturing processes requiring precise detection of toxic gaseous chemicals, biomedical applications requiring precise identification of accumulating gaseous species, and gas utility operations requiring high-sensitivity leak detection.

  19. LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koller, Josep; Reeves, Geoffrey D; Friedel, Reiner H W

    2008-01-01

    Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pron. L-star) values which are used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10^5 calls to a magnetic field model, which makes the L* calculations very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density by applying adiabatic invariants including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculation using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster compared to the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly due to the long computing time of accurate L* values. Without them, real-time applications are limited in accuracy. Reanalysis applications of past conditions in the inner magnetosphere are used to understand physical processes and their effects. Without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. However, with a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles' worth of data suddenly becomes feasible.
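
    The surrogate idea can be sketched generically: train a small network on input-output pairs from the expensive calculation, then query the network instead. A minimal illustration with scikit-learn, where `expensive_lstar` is a hypothetical stand-in for the TSK03-based L* integration (the real inputs would be satellite position, time, and geomagnetic conditions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_lstar(x):
    """Hypothetical placeholder for the slow L* integration, which in
    reality makes ~1e5 magnetic field model calls per evaluation."""
    return np.sin(x[:, 0]) + 0.5 * np.cos(2.0 * x[:, 1])

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(5000, 2))   # illustrative 2-D inputs
y_train = expensive_lstar(X_train)             # slow calls done once, offline

# Small feed-forward surrogate; answers later queries in microseconds
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(X_train, y_train)

X_test = rng.uniform(-3, 3, size=(1000, 2))
err = np.abs(surrogate.predict(X_test) - expensive_lstar(X_test))
print("mean |error| of surrogate:", err.mean())
```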

  20. Singlet oxygen detection in biological systems: Uses and limitations.

    PubMed

    Koh, Eugene; Fluhr, Robert

    2016-07-02

    The study of singlet oxygen in biological systems is challenging in many ways. Singlet oxygen is a relatively unstable, ephemeral molecule, and its properties make it highly reactive with many biomolecules, making it difficult to quantify accurately. Several methods have been developed to study this elusive molecule, but most studies thus far have focused on conditions that produce relatively large amounts of singlet oxygen. However, more sensitive methods are required as one begins to explore the levels of singlet oxygen involved in signaling and regulatory processes. Here we discuss the various methods used in the study of singlet oxygen, and outline their uses and limitations.

  1. A facile electrode preparation method for accurate electrochemical measurements of double-side-coated electrode from commercial Li-ion batteries

    NASA Astrophysics Data System (ADS)

    Zhou, Ge; Wang, Qiyu; Wang, Shuo; Ling, Shigang; Zheng, Jieyun; Yu, Xiqian; Li, Hong

    2018-04-01

    Post mortem electrochemical analysis, including charge-discharge and electrochemical impedance spectroscopy (EIS) measurements, is a critical step for revealing the failure mechanisms of commercial lithium-ion batteries (LIBs). These measurements usually require reassembling a coin cell with an electrode that, in commercial LIBs, is often double-side-coated. Unlike the single-side-coated electrodes typically used in coin-cell measurements, such double-side-coated electrodes are difficult to measure accurately because the back side is also coated with active material. In this study, we report a facile tape-covering sample preparation method, which can effectively suppress the influence of the back side of double-side-coated electrodes on capacity and EIS measurements in coin cells. By covering the unwanted side with tape, the areal capacity of the investigated side of the electrode has been accurately measured with an experimental error of about 0.5% at various current densities, and accurate EIS measurements and analysis have been conducted as well.

  2. Calculation of steady and unsteady transonic flow using a Cartesian mesh and gridless boundary conditions with application to aeroelasticity

    NASA Astrophysics Data System (ADS)

    Kirshman, David

    A numerical method for the solution of inviscid compressible flow using an array of embedded Cartesian meshes in conjunction with gridless surface boundary conditions is developed. The gridless boundary treatment is implemented by means of a least squares fitting of the conserved flux variables using a cloud of nodes in the vicinity of the surface geometry. The method allows for accurate treatment of the surface boundary conditions using a grid resolution an order of magnitude coarser than that required by typical Cartesian approaches. Additionally, the method does not suffer from issues associated with thin body geometry or extremely fine cut cells near the body. Unlike some methods that consider a gridless (or "meshless") treatment throughout the entire domain, multi-grid acceleration can be effectively incorporated and issues associated with global conservation are alleviated. The gridless surface boundary condition provides for efficient and simple problem setup, since the definition of the body geometry is generated independently from the field mesh and automatically incorporated into the field discretization of the domain. The applicability of the method is first demonstrated for steady flow of single and multi-element airfoil configurations. Using this method, comparisons with traditional body-fitted grid simulations reveal that steady flow solutions can be obtained accurately with minimal effort associated with grid generation. The method is then extended to unsteady flow predictions. In this application, flow field simulations for the prescribed oscillation of an airfoil indicate excellent agreement with experimental data. Furthermore, it is shown that the phase lag associated with shock oscillation is accurately predicted without the need for a deformable mesh. Lastly, the method is applied to the prediction of transonic flutter using a two-dimensional wing model, in which comparisons with moving mesh simulations yield nearly identical results. As a result, applicability of the method to transient and vibrating fluid-structure interaction problems is established, in which the requirement for a deformable mesh is eliminated.
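
    The least-squares cloud fit at the heart of a gridless boundary treatment can be illustrated generically: fit a local linear model to field values at nearby nodes and read off the value and gradient at a surface point. A sketch of that idea only, not the thesis's exact flux formulation:

```python
import numpy as np

def fit_local_plane(points, values, x0):
    """Least-squares linear fit u(x) ~ a + g.(x - x0) over a cloud of
    nearby nodes; returns the value a and gradient estimate g at x0."""
    d = points - x0                                  # node offsets from x0
    A = np.hstack([np.ones((len(d), 1)), d])         # [1, dx, dy] design matrix
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coef[0], coef[1:]

# Check: recover a known linear field u = 2 + 3x - y from scattered nodes
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(20, 2))
vals = 2 + 3 * pts[:, 0] - pts[:, 1]
u0, grad = fit_local_plane(pts, vals, np.zeros(2))
print(u0, grad)   # ~2.0 and ~[3.0, -1.0]
```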

  3. Transforming Multidisciplinary Customer Requirements to Product Design Specifications

    NASA Astrophysics Data System (ADS)

    Ma, Xiao-Jie; Ding, Guo-Fu; Qin, Sheng-Feng; Li, Rong; Yan, Kai-Yin; Xiao, Shou-Ne; Yang, Guang-Wu

    2017-09-01

    With the increasing complexity of mechatronic products, it is necessary to involve multidisciplinary design teams; the traditional customer requirements modeling used by single-discipline teams is therefore difficult to apply in a multidisciplinary team and project, since team members with various disciplinary backgrounds may have different interpretations of the customers' requirements. A new synthesized multidisciplinary customer requirements modeling method is provided for obtaining and describing the common understanding of customer requirements (CRs) and, more importantly, transforming them into detailed and accurate product design specifications (PDS) to interact with different team members effectively. A case study of designing a high-speed train verifies the rationality and feasibility of the proposed multidisciplinary requirement modeling method for complex mechatronic product development. This research offers guidance for realizing customer-driven personalized customization of complex mechatronic products.

  4. Predictive Analytical Model for Isolator Shock-Train Location in a Mach 2.2 Direct-Connect Supersonic Combustion Tunnel

    NASA Astrophysics Data System (ADS)

    Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel

    2016-11-01

    This study develops an analytical model for predicting the leading shock of a shock-train in the constant-area isolator section of a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For a given pressure rise across the isolator, quasi-1D relations for isentropic and normal-shock flows can be used to predict the normal shock location in the isolator. The surface pressure distribution through the isolator was measured during experiments, so both the actual and predicted locations can be calculated. Three methods of finding the shock-train location are examined: one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. It is shown that the analytical model performs better than the non-physics-based model in all cases. The analytical model is less accurate than the pressure threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate and requires as little as a single pressure measurement. This makes the method potentially useful for unstart control applications.
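
    The quasi-1D building block of such a model can be sketched directly: invert the normal-shock static pressure ratio, p2/p1 = 1 + 2*gamma*(M1^2 - 1)/(gamma + 1), for the upstream Mach number, then locate the duct station where the local Mach number matches. A minimal sketch with an illustrative pressure ratio, not the paper's full model:

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def shock_mach_from_pressure_ratio(p_ratio: float) -> float:
    """Upstream Mach number of a normal shock producing a given static
    pressure ratio p2/p1 (standard quasi-1D normal-shock relation)."""
    return math.sqrt((p_ratio - 1.0) * (GAMMA + 1.0) / (2.0 * GAMMA) + 1.0)

# A single wall-pressure measurement giving p2/p1 = 4.5:
M1 = shock_mach_from_pressure_ratio(4.5)
print(M1)  # 2.0 -> find the duct station where the local Mach equals M1
```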

  5. Overview of NASA GRC Electrified Aircraft Propulsion Systems Analysis Methods

    NASA Technical Reports Server (NTRS)

    Schnulo, Sydney

    2017-01-01

    The accurate modeling and analysis of electrified aircraft propulsion concepts requires intricate coupling of subsystem components. The major challenge in electrified aircraft propulsion concept modeling lies in understanding how the subsystems "talk" to each other and the dependencies they have on one another.

  6. Accurate measurement of transgene copy number in crop plants using droplet digital PCR.

    PubMed

    Collier, Ray; Dasgupta, Kasturi; Xing, Yan-Ping; Hernandez, Bryan Tarape; Shao, Min; Rohozinski, Dominica; Kovak, Emma; Lin, Jeanie; de Oliveira, Maria Luiza P; Stover, Ed; McCue, Kent F; Harmon, Frank G; Blechl, Ann; Thomson, James G; Thilmony, Roger

    2017-06-01

    Genetic transformation is a powerful means for the improvement of crop plants, but requires labor- and resource-intensive methods. An efficient method for identifying single-copy transgene insertion events from a population of independent transgenic lines is desirable. Currently, transgene copy number is estimated by either Southern blot hybridization analyses or quantitative polymerase chain reaction (qPCR) experiments. Southern hybridization is a convincing and reliable method, but it also is expensive, time-consuming and often requires a large amount of genomic DNA and radioactively labeled probes. Alternatively, qPCR requires less DNA and is potentially simpler to perform, but its results can lack the accuracy and precision needed to confidently distinguish between one- and two-copy events in transgenic plants with large genomes. To address this need, we developed a droplet digital PCR-based method for transgene copy number measurement in an array of crops: rice, citrus, potato, maize, tomato and wheat. The method utilizes specific primers to amplify target transgenes, and endogenous reference genes in a single duplexed reaction containing thousands of droplets. Endpoint amplicon production in the droplets is detected and quantified using sequence-specific fluorescently labeled probes. The results demonstrate that this approach can generate confident copy number measurements in independent transgenic lines in these crop species. This method and the compendium of probes and primers will be a useful resource for the plant research community, enabling the simple and accurate determination of transgene copy number in these six important crop species. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
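
    The copy-number arithmetic behind ddPCR is standard Poisson statistics on droplet counts. A minimal sketch with hypothetical droplet counts and an assumed ~0.85 nL droplet volume (which cancels in the target/reference ratio):

```python
import math

def ddpcr_conc(n_positive: int, n_total: int, droplet_vol_ul: float = 0.00085):
    """Poisson-corrected copies per microliter from droplet counts."""
    lam = -math.log(1.0 - n_positive / n_total)   # mean copies per droplet
    return lam / droplet_vol_ul

def transgene_copy_number(target_pos, ref_pos, n_total, ploidy=2):
    """Copies per genome: target/reference concentration ratio scaled by
    the ploidy of the single-copy endogenous reference gene."""
    return ploidy * ddpcr_conc(target_pos, n_total) / ddpcr_conc(ref_pos, n_total)

# Hypothetical duplexed reaction: 15,000 accepted droplets
print(transgene_copy_number(target_pos=2600, ref_pos=5000, n_total=15000))
# ~0.94, i.e. consistent with a single-copy (hemizygous) insertion
```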

  7. Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.

    PubMed

    Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P

    2016-07-01

    Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid challenges with obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of the Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied for analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m(-3), respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. Copyright © 2016 Elsevier B.V. All rights reserved.
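
    The standard-addition step reduces to a linear fit: spike known analyte masses into vials of the already-polluted air, regress response on added mass, and take the slope as the calibration factor; the x-intercept magnitude gives the native amount. A sketch with purely illustrative numbers, not the paper's data:

```python
import numpy as np

# Hypothetical benzene standard addition in a 20-mL vial:
added_ng = np.array([0.0, 5.0, 10.0, 20.0, 40.0])        # spiked mass, ng
peak_area = np.array([1210.0, 2090.0, 2980.0, 4750.0, 8280.0])  # GC-MS response

slope, intercept = np.polyfit(added_ng, peak_area, 1)
native_ng = intercept / slope    # amount already present in the vial air
print(f"slope factor: {slope:.1f} area/ng, native amount: {native_ng:.2f} ng")
```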

  8. A wave superposition method formulated in digital acoustic space

    NASA Astrophysics Data System (ADS)

    Hwang, Yong-Sin

    In this thesis, a new formulation of the Wave Superposition method is proposed wherein the conventional mesh approach is replaced by a simple 3-D digital work space that easily accommodates shape optimization for minimizing or maximizing radiation efficiency. As sound quality is in demand in almost all product designs, and because of fierce competition between product manufacturers, fast and accurate computational methods for shape optimization are always desired. Because the conventional Wave Superposition method relies solely on mesh geometry, it cannot accommodate fast shape changes in the design stage of a consumer product or machinery, where many iterations of shape changes are required. Since the use of a mesh hinders easy shape changes, a new approach for representing geometry is introduced by constructing a uniform lattice in a 3-D digital work space. A voxel (a portmanteau of the words volumetric and pixel) is essentially a volume element defined by the uniform lattice, and does not require separate connectivity information as a mesh element does. In the presented method, geometry is represented with voxels that can easily adapt to shape changes; it is therefore more suitable for shape optimization. The new method was validated by computing the radiated sound power of structures with simple and complex geometries and complex mode shapes. It was shown that matching volume velocity is key to an accurate analysis. A sensitivity study showed that the method requires at least six elements per acoustic wavelength, and a complexity study showed a minimal reduction in computational time.

  9. The calibration methods for Multi-Filter Rotating Shadowband Radiometer: a review

    NASA Astrophysics Data System (ADS)

    Chen, Maosi; Davis, John; Tang, Hongzhao; Ownby, Carolyn; Gao, Wei

    2013-09-01

    The continuous, over two-decade data record from the Multi-Filter Rotating Shadowband Radiometer (MFRSR) is ideal for climate research, which requires timely and accurate information on important atmospheric components such as gases, aerosols, and clouds. Except for parameters derived from MFRSR measurement ratios, which are not impacted by calibration error, most applications require accurate calibration factor(s), angular correction, and spectral response function(s) from calibration. Although a laboratory lamp (or reference) calibration can provide all the information needed to convert the instrument readings to actual radiation, in situ calibration methods are implemented routinely (daily) to fill the gaps between lamp calibrations. In this paper, the basic structure and the data collection and pretreatment of the MFRSR are described. The laboratory lamp calibration and its limitations are summarized. The cloud screening algorithms for MFRSR data are presented. The in situ calibration methods, the standard Langley method and its variants, the ratio-Langley method, the general method, Alexandrov's comprehensive method, and Chen's multi-channel method, are outlined. None of these methods suits all situations, because each assumes that some property, such as aerosol optical depth (AOD), total optical depth (TOD), precipitable water vapor (PWV), effective size of aerosol particles, or Ångström coefficient, is invariant over time. These assumptions are not universally valid, and some rarely hold. In practice, daily calibration factors derived from these methods should be smoothed to restrain error.
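
    The standard Langley method itself is a one-line regression: under the Beer-Lambert law, V = V0*exp(-tau*m), so a fit of ln V against airmass m extrapolates to the top-of-atmosphere signal V0, valid only while the total optical depth tau is constant over the scan. A sketch with hypothetical clear-sky readings:

```python
import numpy as np

# Hypothetical clear-morning Langley scan: airmass m and channel signal V
m = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 5.0])
V = np.array([1.196, 1.126, 1.060, 0.997, 0.939, 0.833])

# ln V = ln V0 - tau * m  (Beer-Lambert with constant tau)
slope, intercept = np.polyfit(m, np.log(V), 1)
print(f"extrapolated top-of-atmosphere signal V0 = {np.exp(intercept):.3f}")
print(f"total optical depth tau = {-slope:.3f}")
```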

  10. Numerical Simulations of Hypersonic Boundary Layer Transition

    NASA Astrophysics Data System (ADS)

    Bartkowicz, Matthew David

    Numerical schemes for supersonic flows tend to use large amounts of artificial viscosity for stability. This tends to damp out the small-scale structures in the flow. Recently, some low-dissipation methods have been proposed which selectively eliminate the artificial viscosity in regions that do not require it. This work builds upon the low-dissipation method of Subbareddy and Candler, which uses the flux vector splitting method of Steger and Warming but identifies the dissipation portion in order to eliminate it. Computing accurate fluxes typically relies on large grid stencils or coupled linear systems that become computationally expensive to solve. Unstructured grids allow CFD solutions to be obtained on complex geometries; unfortunately, it then becomes difficult to create a large stencil or the coupled linear system. Accurate solutions require grids that quickly become too large to be feasible. In this thesis a method is proposed to obtain more accurate solutions using relatively local data, making it suitable for unstructured grids composed of hexahedral elements. Fluxes are reconstructed using local gradients to extend the range of data used. The method is then validated on several test problems. Simulations of boundary layer transition are then performed. An elliptic cone at Mach 8 is simulated based on an experiment at the Princeton Gasdynamics Laboratory. A simulated acoustic noise boundary condition is imposed to model the noisy conditions of the wind tunnel, and the transitioning boundary layer is observed. A computation of an isolated roughness element is performed based on an experiment in Purdue's Mach 6 quiet wind tunnel. The mechanism for transition is identified as an instability in the upstream separation region, and a comparison is made to experimental data. In the CFD, a fully turbulent boundary layer is observed downstream.

  11. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.

  12. Dynamic earthquake rupture simulations on nonplanar faults embedded in 3D geometrically complex, heterogeneous elastic solids

    NASA Astrophysics Data System (ADS)

    Duru, Kenneth; Dunham, Eric M.

    2016-01-01

    Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
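
    The SBP property underpinning these energy estimates is easy to exhibit at low order: for D = H^{-1} Q, the matrix Q satisfies Q + Q^T = B = diag(-1, 0, ..., 0, 1), which mimics integration by parts discretely. A sketch using the classic second-order operator as a stand-in for the paper's sixth-order interior stencils:

```python
import numpy as np

def sbp_d1(n: int, h: float):
    """Classic second-order SBP first-derivative operator D = H^{-1} Q:
    central differences in the interior, one-sided at the boundaries,
    with a boundary-weighted diagonal norm H."""
    H = h * np.eye(n)
    H[0, 0] = H[-1, -1] = h / 2.0
    Q = 0.5 * (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5
    return np.linalg.inv(H) @ Q, H, Q

D, H, Q = sbp_d1(8, 0.1)
B = np.zeros((8, 8))
B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(Q + Q.T, B))   # True: the discrete integration-by-parts rule
```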

  13. Localizing on-scalp MEG sensors using an array of magnetic dipole coils.

    PubMed

    Pfeiffer, Christoph; Andersen, Lau M; Lundqvist, Daniel; Hämäläinen, Matti; Schneiderman, Justin F; Oostenveld, Robert

    2018-01-01

    Accurate estimation of the neural activity underlying magnetoencephalography (MEG) signals requires co-registration, i.e., determination of the position and orientation of the sensors with respect to the head. In modern MEG systems, an array of hundreds of low-Tc SQUID sensors is used to localize a set of small, magnetic dipole-like (head-position indicator, HPI) coils that are attached to the subject's head. With accurate prior knowledge of the positions and orientations of the sensors with respect to one another, the HPI coils can be localized with high precision, and thereby the positions of the sensors in relation to the head. With advances in magnetic field sensing technologies, e.g., high-Tc SQUIDs and optically pumped magnetometers (OPM), that require less extreme operating temperatures than low-Tc SQUID sensors, on-scalp MEG is on the horizon. To utilize the full potential of on-scalp MEG, flexible sensor arrays are preferable. Conventional co-registration is impractical for such systems as the relative positions and orientations of the sensors to each other are subject-specific and hence not known a priori. Herein, we present a method for co-registration of on-scalp MEG sensors. We propose to invert the conventional co-registration approach and localize the sensors relative to an array of HPI coils on the subject's head. We show that given accurate prior knowledge of the positions of the HPI coils with respect to one another, the sensors can be localized with high precision. We simulated our method with realistic parameters and layouts for sensor and coil arrays. Results indicate co-registration is possible with sub-millimeter accuracy, but the performance strongly depends upon a number of factors. Accurate calibration of the coils and precise determination of the positions and orientations of the coils with respect to one another are crucial. Finally, we propose methods to tackle practical challenges to further improve the method.
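
    The forward model underlying such coil localization is the point-dipole field; fitting sensor readings against it recovers each coil's position and moment. A minimal sketch of the dipole formula only, with illustrative coil and sensor values:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(m, r):
    """Magnetic field (tesla) of a point dipole with moment m (A*m^2) at
    displacement r (m): B = mu0/(4*pi) * (3(m.rhat)rhat - m) / |r|^3."""
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rn**3

# Field of a small z-oriented coil (moment 1e-8 A*m^2) at a sensor 3 cm
# away on the coil axis -- tens of picotesla, easily detectable
print(dipole_field(np.array([0.0, 0.0, 1e-8]), np.array([0.0, 0.0, 0.03])))
```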

  14. Localizing on-scalp MEG sensors using an array of magnetic dipole coils

    PubMed Central

    Andersen, Lau M.; Lundqvist, Daniel; Hämäläinen, Matti; Schneiderman, Justin F.; Oostenveld, Robert

    2018-01-01

    Accurate estimation of the neural activity underlying magnetoencephalography (MEG) signals requires co-registration, i.e., determination of the position and orientation of the sensors with respect to the head. In modern MEG systems, an array of hundreds of low-Tc SQUID sensors is used to localize a set of small, magnetic dipole-like (head-position indicator, HPI) coils that are attached to the subject’s head. With accurate prior knowledge of the positions and orientations of the sensors with respect to one another, the HPI coils can be localized with high precision, and thereby the positions of the sensors in relation to the head. With advances in magnetic field sensing technologies, e.g., high-Tc SQUIDs and optically pumped magnetometers (OPM), that require less extreme operating temperatures than low-Tc SQUID sensors, on-scalp MEG is on the horizon. To utilize the full potential of on-scalp MEG, flexible sensor arrays are preferable. Conventional co-registration is impractical for such systems as the relative positions and orientations of the sensors to each other are subject-specific and hence not known a priori. Herein, we present a method for co-registration of on-scalp MEG sensors. We propose to invert the conventional co-registration approach and localize the sensors relative to an array of HPI coils on the subject’s head. We show that given accurate prior knowledge of the positions of the HPI coils with respect to one another, the sensors can be localized with high precision. We simulated our method with realistic parameters and layouts for sensor and coil arrays. Results indicate co-registration is possible with sub-millimeter accuracy, but the performance strongly depends upon a number of factors. Accurate calibration of the coils and precise determination of the positions and orientations of the coils with respect to one another are crucial. Finally, we propose methods to tackle practical challenges to further improve the method. PMID:29746486

  15. Establishment of a high accuracy geoid correction model and geodata edge match

    NASA Astrophysics Data System (ADS)

    Xi, Ruifeng

    This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter level regional geoids and centimeter level local geoids to meet regional surveying and local engineering requirements. This research also provides a highly accurate pre-processing, post-processing, and adjustment method for static DGPS network data, and a procedure for a large GPS network like the state-level HARN project. The research also developed an efficient and accurate methodology to join soil coverages in GIS ARC/INFO. A total of 181 GPS stations has been pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with all stations having a 0.5 ppm average relative accuracy. A total of 167 GPS stations in and around Iowa have been included in the adjustment. After evaluating GEOID96 and GEOID99, a more accurate and suitable geoid model has been established for Iowa. This new Iowa regional geoid model improved the accuracy from sub-decimeter level (10-20 cm) to 5-10 cm. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.
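
    The kinematic filtering step can be sketched with a scalar random-walk Kalman filter; the noise levels below are illustrative, not the dissertation's tuned values:

```python
import numpy as np

def kalman_1d(z, q=1e-6, r=4e-4, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: process noise q and measurement
    noise r (variances, here in m^2) are illustrative placeholders."""
    x, p, out = x0, p0, []
    for zi in z:
        p += q                 # predict: the geoid height drifts slowly
        k = p / (p + r)        # Kalman gain
        x += k * (zi - x)      # update with the new height measurement
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

# Noisy height-anomaly measurements around a true value of 0.250 m
rng = np.random.default_rng(3)
z = 0.250 + 0.02 * rng.standard_normal(200)
print(kalman_1d(z)[-1])   # converges toward 0.250 with cm-level residuals
```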

  16. Molecular Simulation of the Free Energy for the Accurate Determination of Phase Transition Properties of Molecular Solids

    NASA Astrophysics Data System (ADS)

    Sellers, Michael; Lisal, Martin; Brennan, John

    2015-06-01

    Investigating the ability of a molecular model to accurately represent a real material is crucial to model development and use. When the model simulates materials in extreme conditions, one property worth evaluating is the phase transition point. However, phase transitions are often overlooked or approximated because of the difficulty or inaccuracy of simulating them. Techniques such as super-heating or super-squeezing a material to induce a phase change suffer from inherent timescale limitations leading to "over-driving," and dual-phase simulations require many long runs and frequently yield an inexact location of phase coexistence. We present a compilation of methods for the determination of solid-solid and solid-liquid phase transition points through the accurate calculation of the chemical potential. The methods are applied to the Smith-Bharadwaj atomistic potential's representation of cyclotrimethylene trinitramine (RDX) to accurately determine its melting point (Tm) and the alpha-to-gamma solid phase transition pressure. We also determine Tm for a coarse-grain model of RDX, and compare its value to experiment and its atomistic counterpart. All methods are employed via the LAMMPS simulator, resulting in 60-70 simulations totaling 30-50 ns. Approved for public release. Distribution is unlimited.

  17. Spectral ratio method for measuring emissivity

    USGS Publications Warehouse

    Watson, K.

    1992-01-01

    The spectral ratio method is based on the concept that although spectral radiances are very sensitive to small changes in temperature, their ratios are not. Only an approximate estimate of temperature is required; thus, for example, we can determine the emissivity ratio to an accuracy of 1% with a temperature estimate that is only accurate to 12.5 K. Selecting the maximum value of the channel brightness temperatures provides an unbiased estimate. Laboratory and field spectral data are easily converted into spectral ratio plots. The ratio method is limited by the system signal-to-noise ratio and spectral bandwidth. The images can appear quite noisy because ratios enhance high frequencies and may require spatial filtering. Atmospheric effects tend to rescale the ratios and require using an atmospheric model or a calibration site.
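
    The insensitivity is easy to demonstrate numerically: form the two-channel emissivity ratio from measured radiances and Planck radiances at an estimated temperature, then vary that estimate. A sketch with hypothetical band centers and emissivities, not Watson's exact band configuration:

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_radiance(wl_m, T):
    """Blackbody spectral radiance (W m^-2 sr^-1 m^-1)."""
    return (2 * H * C**2 / wl_m**5) / (math.exp(H * C / (wl_m * KB * T)) - 1)

def emissivity_ratio(L1, L2, wl1, wl2, T_est):
    """Two-channel emissivity ratio from measured radiances L1, L2 and a
    rough temperature estimate T_est."""
    return (L1 / planck_radiance(wl1, T_est)) / (L2 / planck_radiance(wl2, T_est))

# Synthetic target: emissivities 0.95 / 0.90 at 10.5 / 11.5 um, true T = 300 K
wl1, wl2, T_true = 10.5e-6, 11.5e-6, 300.0
L1 = 0.95 * planck_radiance(wl1, T_true)
L2 = 0.90 * planck_radiance(wl2, T_true)
for T_est in (287.5, 300.0, 312.5):   # temperature estimate off by +/-12.5 K
    print(T_est, emissivity_ratio(L1, L2, wl1, wl2, T_est))
# The ratio changes only slightly across the range, unlike the radiances
```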

  18. Evaluation of methods for determining hardware projected life

    NASA Technical Reports Server (NTRS)

    1971-01-01

    An investigation of existing methods of predicting hardware life is summarized by reviewing programs having long life requirements, current research efforts on long life problems, and technical papers reporting work on life-prediction techniques. The results indicate that there are no accurate quantitative means of predicting life for system-level hardware. The effectiveness of test programs and the causes of hardware failures are also considered.

  19. A Hybrid On-line Verification Method of Relay Setting

    NASA Astrophysics Data System (ADS)

    Gao, Wangyuan; Chen, Qing; Si, Ji; Huang, Xin

    2017-05-01

    Along with the rapid development of the power industry, grid structures are becoming more sophisticated. The validity and rationality of protective relaying are vital to the security of power systems. To increase that security, it is essential to verify the setting values of relays online. Traditional verification methods mainly include comparison of protection range and comparison of calculated setting value. To realize on-line verification, verification speed is key. The result of comparing protection ranges is accurate, but the computation burden is heavy and the verification is slow. Comparing calculated setting values is much faster, but the result is conservative and inaccurate. Taking overcurrent protection as an example, this paper analyzes the advantages and disadvantages of the two traditional methods and proposes a hybrid method of on-line verification that synthesizes the advantages of both. This hybrid method can meet the requirements of accurate on-line verification.

  20. The Voronoi Implicit Interface Method for computing multiphase physics.

    PubMed

    Saye, Robert I; Sethian, James A

    2011-12-06

    We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method's accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann's law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces.

  1. An efficient approach to BAC based assembly of complex genomes.

    PubMed

    Visendi, Paul; Berkman, Paul J; Hayashi, Satomi; Golicz, Agnieszka A; Bayer, Philipp E; Ruperao, Pradeep; Hurgobin, Bhavna; Montenegro, Juan; Chan, Chon-Kit Kenneth; Staňková, Helena; Batley, Jacqueline; Šimková, Hana; Doležel, Jaroslav; Edwards, David

    2016-01-01

    There has been an exponential growth in the number of genome sequencing projects since the introduction of next generation DNA sequencing technologies. Genome projects have increasingly involved assembly of whole genome data, which produces inferior assemblies compared to traditional Sanger sequencing of genomic fragments cloned into bacterial artificial chromosomes (BACs). While whole genome shotgun sequencing using next generation sequencing (NGS) is relatively fast and inexpensive, it is extremely challenging for highly complex genomes, where polyploidy or high repeat content confounds accurate assembly, or where a highly accurate 'gold' reference is required. Several attempts have been made to improve genome sequencing approaches by incorporating NGS methods, with variable success. We present the application of a novel BAC sequencing approach which combines indexed pools of BACs, Illumina paired read sequencing, a sequence assembler specifically designed for complex BAC assembly, and a custom bioinformatics pipeline. We demonstrate this method by sequencing and assembling BAC cloned fragments from the bread wheat and sugarcane genomes. We demonstrate that our assembly approach is accurate, robust, cost effective and scalable, with applications for complete genome sequencing in large and complex genomes.

  2. System for routine surface anthropometry using reprojection registration

    NASA Astrophysics Data System (ADS)

    Sadleir, R. J.; Owens, R. A.; Hartmann, P. E.

    2003-11-01

    Range data measurement can be usefully applied to non-invasive monitoring of anthropometric changes due to disease, healing, or normal physiological processes. We have developed a computer vision system that allows routine capture of biological surface shapes and accurate measurement of anthropometric changes, using a structured light stripe triangulation system. In many applications involving relocation of soft tissue for image-guided surgery or anthropometry, it is neither accurate nor practical to apply fiducial markers directly to the body. This system features a novel method of achieving subject re-registration that involves application of fiducials by a standard data projector. Calibration of this reprojector is achieved using a variation of structured lighting techniques. The method allows accurate and comparable repositioning of elastic surfaces. Tests of repositioning using the reprojector found a significant improvement in subject registration compared to an earlier method which used video overlay comparison only. It has a current application to the measurement of breast volume changes in lactating mothers, but may be extended to any application where repeatable positioning and measurement are required.

  3. In situ accurate determination of the zero time delay between two independent ultrashort laser pulses by observing the oscillation of an atomic excited wave packet.

    PubMed

    Zhang, Qun; Hepburn, John W

    2008-08-15

    We propose a novel method that uses the oscillation of an atomic excited wave packet observed through a pump-probe technique to accurately determine the zero time delay between a pair of ultrashort laser pulses. This physically based approach provides an easy fix for the intractable problem of synchronizing two different femtosecond laser pulses in a practical experimental environment, especially where an in situ time zero measurement with high accuracy is required.

  4. A new measurement method of coatings thickness based on lock-in thermography

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Yu; Meng, Xiang-bin; Ma, Yong-chao

    2016-05-01

    Coatings are widely used in modern industry and play an important role. Coating thickness is directly related to the performance of functional coatings; therefore, rapid and accurate thickness inspection is of great significance. Existing thickness measurement methods have difficulty achieving fast, accurate, on-site, non-destructive inspection because of cost, accuracy, damage during inspection, and other limitations. This paper starts by introducing the principle of lock-in thermography, and then performs an in-depth study of its application to coating inspection through numerical modeling and analysis. The numerical analysis explores the relationship between coating thickness and phase, and this relationship lays the foundation for accurate calculation of coating thickness. The author sets up a lock-in thermography inspection system and conducts an experiment using thermal barrier coating specimens. The specimen coating thickness is measured and calibrated to verify the quantitative inspection. Experimental results show that the lock-in thermography method can perform fast coating inspection with an accuracy of about 95%. Therefore, the method can meet field-testing requirements for engineering projects.
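
    The lock-in phase measurement itself is a quadrature correlation at the modulation frequency. A generic digital lock-in sketch on a synthetic pixel history (the modulation frequency, phase lag, and noise level are illustrative, not the paper's experimental values):

```python
import numpy as np

def lockin(signal, t, f_mod):
    """Recover amplitude and phase at the modulation frequency by
    correlating with in-phase and quadrature references."""
    ref_i = np.sin(2 * np.pi * f_mod * t)
    ref_q = np.cos(2 * np.pi * f_mod * t)
    I = 2.0 * np.mean(signal * ref_i)
    Q = 2.0 * np.mean(signal * ref_q)
    return np.hypot(I, Q), np.arctan2(Q, I)

# Synthetic pixel history: 1 Hz modulation, 40-degree thermal phase lag
t = np.arange(0.0, 10.0, 0.01)
rng = np.random.default_rng(4)
sig = 0.8 * np.sin(2 * np.pi * 1.0 * t - np.deg2rad(40)) \
      + 0.05 * rng.standard_normal(len(t))

amp, phase = lockin(sig, t, 1.0)
print(amp, np.rad2deg(phase))   # ~0.8 and ~-40 deg; phase maps to thickness
```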

  5. The generalized scattering coefficient method for plane wave scattering in layered structures

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Li, Chao; Wang, Huai-Yu; Zhou, Yun-Song

    2017-02-01

    The generalized scattering coefficient (GSC) method is pedagogically derived and employed to study the scattering of plane waves in homogeneous and inhomogeneous layered structures. The numerical stabilities and accuracies of this method and other commonly used numerical methods are discussed and compared. For homogeneous layered structures, concise scattering formulas with clear physical interpretations and strong numerical stability are obtained by introducing the GSCs. For inhomogeneous layered structures, three numerical methods are employed: the staircase approximation method, the power series expansion method, and the differential equation based on the GSCs. We investigate the accuracies and convergence behaviors of these methods by comparing their predictions to the exact results. The conclusions are as follows. The staircase approximation method has a slow convergence in spite of its simple and intuitive implementation, and a fine stratification within the inhomogeneous layer is required for obtaining accurate results. The expansion method results are sensitive to the expansion order, and the treatment becomes very complicated for relatively complex configurations, which restricts its applicability. By contrast, the GSC-based differential equation possesses a simple implementation while providing fast and accurate results.
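
    For comparison, the transfer-matrix treatment of a homogeneous layer stack, one of the commonly used methods against which the GSC approach is weighed, can be sketched in a few lines (normal incidence and non-magnetic layers assumed; this is the standard method, not the GSC formulation itself):

```python
import numpy as np

def layer_matrix(n, d, wl):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d / wl          # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(ns, ds, wl, n_in=1.0, n_out=1.0):
    """Reflectance of a layer stack via the transfer-matrix method."""
    M = np.eye(2)
    for n, d in zip(ns, ds):
        M = M @ layer_matrix(n, d, wl)
    (m11, m12), (m21, m22) = M
    r = (n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) / \
        (n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22)
    return abs(r) ** 2

# Quarter-wave MgF2-like layer (n = 1.38) on glass at 550 nm:
# reflectance drops to ~1.3%, the classic antireflection result
print(reflectance([1.38], [550e-9 / (4 * 1.38)], 550e-9, n_out=1.52))
```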

  6. Lightweight, Miniature Inertial Measurement System

    NASA Technical Reports Server (NTRS)

    Tang, Liang; Crassidis, Agamemnon

    2012-01-01

    A miniature, lighter-weight, and highly accurate inertial navigation system (INS) is coupled with GPS receivers to provide stable and highly accurate positioning, attitude, and inertial measurements while being subjected to highly dynamic maneuvers. In contrast to conventional methods that use extensive, ground-based, real-time tracking and control units that are expensive, large, and require excessive amounts of power to operate, this method focuses on the development of an estimator that makes use of a low-cost, miniature accelerometer array fused with traditional measurement systems and GPS. Through the use of a position tracking estimation algorithm, onboard accelerometer readings are numerically integrated and transformed using attitude information to obtain an estimate of position in the inertial frame. Position and velocity estimates are subject to drift over time due to accelerometer sensor bias and high vibration, and so require integration with GPS information using a Kalman filter to provide highly accurate and reliable inertial tracking estimates. The method implemented here uses the local gravitational field vector. Upon determining the location of the local gravitational field vector relative to two consecutive sensors, the orientation of the device may then be estimated and the attitude determined. Improved attitude estimates further enhance the inertial position estimates. The device can be powered either by batteries or by the power source onboard its target platforms. A DB9 port provides the I/O to external systems, and the device is designed to be mounted in a waterproof case for all-weather conditions.

  7. Effect of Heat Generation of Ultrasound Transducer on Ultrasonic Power Measured by Calorimetric Method

    NASA Astrophysics Data System (ADS)

    Uchida, Takeyoshi; Kikuchi, Tsuneo

    2013-07-01

    Ultrasonic power is one of the key quantities closely related to the safety of medical ultrasonic equipment. An ultrasonic power standard is required for establishment of safety. Generally, an ultrasonic power standard below approximately 20 W is established by the radiation force balance (RFB) method as the most accurate measurement method. However, RFB is not suitable for high ultrasonic power because of thermal damage to the absorbing target. Consequently, an alternative method to RFB is required. We have been developing a measurement technique for high ultrasonic power by the calorimetric method. In this study, we examined the effect of heat generation of an ultrasound transducer on ultrasonic power measured by the calorimetric method. As a result, an excessively high ultrasonic power was measured owing to the effect of heat generation from internal loss in the transducer. A reference ultrasound transducer with low heat generation is required for a high ultrasonic power standard established by the calorimetric method.
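
    The calorimetric estimate itself is P = m·c·dT/dt evaluated from the initial heating slope; transducer self-heating adds to that slope and biases the estimate high, which is the effect examined here. A sketch with hypothetical bath data:

```python
import numpy as np

# Hypothetical water-bath temperature record during sonication
t = np.array([0, 10, 20, 30, 40, 50, 60], dtype=float)          # s
T = np.array([25.00, 25.11, 25.23, 25.34, 25.46, 25.57, 25.69])  # deg C

mass_kg = 0.5          # mass of water in the calorimeter
c_water = 4186.0       # specific heat of water, J/(kg*K)

dTdt = np.polyfit(t, T, 1)[0]        # initial heating slope, K/s
power_w = mass_kg * c_water * dTdt   # P = m * c * dT/dt
print(f"apparent ultrasonic power: {power_w:.1f} W")
# Note: transducer internal losses also heat the bath, so this apparent
# value overestimates the true radiated ultrasonic power.
```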

  8. Information Measures for Statistical Orbit Determination

    ERIC Educational Resources Information Center

    Mashiku, Alinda K.

    2013-01-01

    The current Space Situational Awareness (SSA) effort is faced with the huge task of tracking an increasing number of space objects. Tracking space objects requires frequent and accurate monitoring for orbit maintenance and collision avoidance using methods for statistical orbit determination. Statistical orbit determination enables us to obtain…

  9. Writing a curriculum vitae, resume or data sheet.

    PubMed

    Saltman, D

    1995-02-01

    This paper outlines a method for the preparation of a curriculum vitae, resume or data sheet, which is an essential document for professional people seeking employment or promotion. However, it needs to be accurate and relevant to the circumstances of the position, and requires regular updating.

  10. RECOVERY OF SEMI-VOLATILE ORGANIC COMPOUNDS DURING SAMPLE PREPARATION: IMPLICATIONS FOR CHARACTERIZATION OF AIRBORNE PARTICULATE MATTER

    EPA Science Inventory

    Semi-volatile compounds present special analytical challenges not met by conventional methods for analysis of ambient particulate matter (PM). Accurate quantification of PM-associated organic compounds requires validation of the laboratory procedures for recovery over a wide v...

  11. Quantitative PCR for Detection and Enumeration of Genetic Markers of Bovine Fecal Pollution

    EPA Science Inventory

    Accurate assessment of health risks associated with bovine (cattle) fecal pollution requires a reliable host-specific genetic marker and a rapid quantification method. We report the development of quantitative PCR assays for the detection of two recently described cow feces-spec...

  12. Direct Allocation Costing: Informed Management Decisions in a Changing Environment.

    ERIC Educational Resources Information Center

    Mancini, Cesidio G.; Goeres, Ernest R.

    1995-01-01

    It is argued that colleges and universities can use direct allocation costing to provide quantitative information needed for decision making. This method of analysis requires institutions to modify traditional ideas of costing, looking to the private sector for examples of accurate costing techniques. (MSE)

  13. Iterative combination of national phenotype, genotype, pedigree, and foreign information

    USDA-ARS?s Scientific Manuscript database

    Single step methods can combine all sources of information into accurate rankings for animals with and without genotypes. Equations that require inverting the genomic relationship matrix G work well with limited numbers of animals, but equivalent models without inversion are needed as numbers increa...

  14. Estimation of relative free energies of binding using pre-computed ensembles based on the single-step free energy perturbation and the site-identification by Ligand competitive saturation approaches.

    PubMed

    Raman, E Prabhu; Lakkaraju, Sirish Kaushik; Denny, Rajiah Aldrin; MacKerell, Alexander D

    2017-06-05

    Accurate and rapid estimation of relative binding affinities of ligand-protein complexes is a requirement of computational methods for their effective use in rational ligand design. Of the approaches commonly used, free energy perturbation (FEP) methods are considered one of the most accurate, although they require significant computational resources. Accordingly, it is desirable to have alternative methods of similar accuracy but greater computational efficiency to facilitate ligand design. In the present study relative free energies of binding are estimated for one or two non-hydrogen atom changes in compounds targeting the proteins ACK1 and p38 MAP kinase using three methods. The methods include standard FEP, single-step free energy perturbation (SSFEP) and the site-identification by ligand competitive saturation (SILCS) ligand grid free energy (LGFE) approach. Results show the SSFEP and SILCS LGFE methods to be competitive with or better than the FEP results for the studied systems, with SILCS LGFE giving the best agreement with experimental results. This is supported by additional comparisons with published FEP data on p38 MAP kinase inhibitors. While both the SSFEP and SILCS LGFE approaches require a significant upfront computational investment, they offer a 1000-fold computational savings over FEP for calculating the relative affinities of ligand modifications once those pre-computations are complete. An illustrative example of the potential application of these methods in the context of screening large numbers of transformations is presented. Thus, the SSFEP and SILCS LGFE approaches represent viable alternatives for actively driving ligand design during drug discovery and development. © 2016 Wiley Periodicals, Inc.
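
    The estimator behind single-step FEP is the Zwanzig exponential average, ΔA = −kT·ln⟨exp(−ΔU/kT)⟩, evaluated over snapshots of a pre-computed ensemble of the unmodified ligand. A sketch with synthetic ΔU samples (kT at 300 K in kcal/mol); real applications would compute ΔU from the actual trajectory:

```python
import numpy as np

def fep_delta_a(delta_u, kT=0.596):
    """Zwanzig free energy difference (same units as delta_u):
    dA = -kT * ln < exp(-dU / kT) >, averaged over end-state snapshots."""
    du = np.asarray(delta_u)
    return -kT * np.log(np.mean(np.exp(-du / kT)))

# delta_u: energy change of mutating the ligand (e.g. H -> CH3) evaluated
# on pre-computed snapshots; synthetic Gaussian samples for illustration
rng = np.random.default_rng(5)
delta_u = rng.normal(1.2, 0.8, size=5000)   # kcal/mol
print(fep_delta_a(delta_u))   # relative free energy term for this end state
```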

  15. Laboratory Methods for the Measurement of Pollutants in Water and Waste Effluents

    NASA Technical Reports Server (NTRS)

    Ballinger, Dwight G.

    1971-01-01

    The need for accurate, precise, and rapid analytical procedures for the examination of water and waste samples requires the use of a variety of instruments. The instrumentation in water laboratories includes atomic absorption, UV-visible, and infrared spectrophotometers, automatic colorimetric analyzers, gas chromatographs, and mass spectrometers. Because of the emphasis on regulatory action, attention is being directed toward quality control of analytical results. Among the challenging problems are the differentiation of metallic species in water at nanogram concentrations, rapid measurement of free cyanide and free ammonia, more sensitive methods for arsenic and selenium, and improved characterization of organic contaminants.

  16. Low-dimensional, morphologically accurate models of subthreshold membrane potential

    PubMed Central

    Kellems, Anthony R.; Roos, Derrick; Xiao, Nan; Cox, Steven J.

    2009-01-01

    The accurate simulation of a neuron’s ability to integrate distributed synaptic input typically requires the simultaneous solution of tens of thousands of ordinary differential equations; to understand how a cell distinguishes between input patterns, we apparently need a model that is biophysically accurate down to the space scale of a single spine, i.e., 1 μm. We argue here that one can retain this highly detailed input structure while dramatically reducing the overall system dimension if one is content to accurately reproduce the associated membrane potential at a small number of places, e.g., at the site of action potential initiation, under subthreshold stimulation. The latter hypothesis permits us to approximate the active cell model with an associated quasi-active model, which in turn we reduce by both time-domain (Balanced Truncation) and frequency-domain (ℋ2 approximation of the transfer function) methods. We apply and contrast these methods on a suite of typical cells, achieving up to four orders of magnitude in dimension reduction and an associated speed-up in the simulation of dendritic democratization and resonance. We also append a threshold mechanism and indicate that this reduction has the potential to deliver an accurate quasi-integrate-and-fire model. PMID:19172386

  17. Three-Signal Method for Accurate Measurements of Depolarization Ratio with Lidar

    NASA Technical Reports Server (NTRS)

    Reichardt, Jens; Baumgart, Rudolf; McGee, Thomas J.

    2003-01-01

    A method is presented that permits the determination of atmospheric depolarization-ratio profiles from three elastic-backscatter lidar signals with different sensitivity to the state of polarization of the backscattered light. The three-signal method is insensitive to experimental errors and does not require calibration of the measurement, which could cause large systematic uncertainties of the results, as is the case in the lidar technique conventionally used for the observation of depolarization ratios.

  18. A machine learning method for fast and accurate characterization of depth-of-interaction gamma cameras

    NASA Astrophysics Data System (ADS)

    Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen

    2017-11-01

    Measuring the depth-of-interaction (DOI) of gamma photons enables increasing the resolution of emission imaging systems. Several design variants of DOI-sensitive detectors have been recently introduced to improve the performance of scanners for positron emission tomography (PET). However, the accurate characterization of the response of DOI detectors, necessary to accurately measure the DOI, remains an unsolved problem. Numerical simulations are, at the state of the art, imprecise, while measuring the characteristics of DOI detectors directly is hindered by the impossibility of imposing the depth of interaction in an experimental setup. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite mixture model. The method is purely data-driven, not requiring simulations, and is applicable to a wide range of detector types. The proposed method was evaluated both in a simulation study and with data acquired using a monolithic gamma camera designed for PET (the cMiCE detector), demonstrating the accurate recovery of the DOI characteristics. The combination of the proposed calibration technique with maximum a posteriori estimation of the coordinates of interaction provided a depth resolution of ≈1.14 mm for the simulated PET detector and ≈1.74 mm for the cMiCE detector. The software and experimental data are made available at http://occiput.mgh.harvard.edu/depthembedding/.

  19. New method for GC/FID and GC-C-IRMS Analysis of plasma free fatty acid concentration and isotopic enrichment

    PubMed Central

    Kangani, Cyrous O.; Kelley, David E.; DeLany, James P.

    2008-01-01

    A simple, direct and accurate method for the determination of concentration and enrichment of free fatty acids in human plasma was developed. The validation and comparison to a conventional method are reported. Three amide derivatives, dimethyl, diethyl and pyrrolidide, were investigated in order to achieve optimal resolution of the individual fatty acids. This method involves the use of dimethylamine/Deoxo-Fluor to derivatize plasma free fatty acids to their dimethylamides. This derivatization method is very mild and efficient, and is selective only towards free fatty acids so that no separation from a total lipid extract is required. The direct method gave lower concentrations for palmitic acid and stearic acid and increased concentrations for oleic acid and linoleic acid in plasma as compared to the methyl ester derivative after thin-layer chromatography. The [13C]palmitate isotope enrichment measured using the direct method was significantly higher than that observed with the BF3/MeOH-TLC method. The present method provided accurate and precise measures of concentration as well as enrichment when analyzed with gas chromatography combustion-isotope ratio-mass spectrometry. PMID:18757250

  20. New method for GC/FID and GC-C-IRMS analysis of plasma free fatty acid concentration and isotopic enrichment.

    PubMed

    Kangani, Cyrous O; Kelley, David E; Delany, James P

    2008-09-15

    A simple, direct and accurate method for the determination of concentration and enrichment of free fatty acids (FFAs) in human plasma was developed. The validation and comparison to a conventional method are reported. Three amide derivatives, dimethyl, diethyl and pyrrolidide, were investigated in order to achieve optimal resolution of the individual fatty acids. This method involves the use of dimethylamine/Deoxo-Fluor to derivatize plasma free fatty acids to their dimethylamides. This derivatization method is very mild and efficient, and is selective only towards FFAs, so that no separation from a total lipid extract is required. The direct method gave lower concentrations for palmitic acid and stearic acid and increased concentrations for oleic acid and linoleic acid in plasma as compared to the methyl ester derivative after thin-layer chromatography. The [13C]palmitate isotope enrichment measured using the direct method was significantly higher than that observed with the BF3/MeOH-TLC method. The present method provided accurate and precise measures of concentration as well as enrichment when analyzed with gas chromatography combustion-isotope ratio-mass spectrometry.

  1. Evaluation of immunoturbidimetric rheumatoid factor method from Diagam on Abbott c8000 analyzer: comparison with immunonephelemetric method.

    PubMed

    Dupuy, Anne Marie; Hurstel, Rémy; Bargnoux, Anne Sophie; Badiou, Stéphanie; Cristol, Jean Paul

    2014-01-01

    Rheumatoid factor (RF) consists of autoantibodies, and because of its heterogeneity its determination is not easy. Currently, nephelometry and ELISA are considered the reference methods. Due to consolidation, many laboratories have fully automated turbidimetric instruments, and dedicated nephelometric systems are not always available. In addition, nephelometry is more accurate, but it is time-consuming and expensive and requires a specific device, resulting in lower efficiency. Turbidimetry could be an attractive alternative. The turbidimetric RF test from Diagam meets the requirements of accuracy and precision for optimal clinical use, with an acceptable measuring range, and could be an alternative for the determination of RF without the cost of a dedicated instrument, making consolidation and blood saving possible.

  2. Simple Criteria to Determine the Set of Key Parameters of the DRPE Method by a Brute-force Attack

    NASA Astrophysics Data System (ADS)

    Nalegaev, S. S.; Petrov, N. V.

    Known techniques for breaking Double Random Phase Encoding (DRPE) that bypass the resource-intensive brute-force method require at least two conditions: the attacker knows the encryption algorithm, and there is access to pairs of source and encoded images. Our numerical results show that for accurate recovery by a numerical brute-force attack, one needs only some a priori information about the source images, which can be quite general. From the results of our numerical experiments on optical data encryption by DRPE with digital holography, we propose four simple criteria for guaranteed and accurate data recovery. These criteria can be applied when grayscale, binary (including QR codes), or color images are used as the source.

  3. Methods and techniques for measuring gas emissions from agricultural and animal feeding operations.

    PubMed

    Hu, Enzhu; Babcock, Esther L; Bialkowski, Stephen E; Jones, Scott B; Tuller, Markus

    2014-01-01

    Emissions of gases from agricultural and animal feeding operations contribute to climate change, produce odors, degrade sensitive ecosystems, and pose a threat to public health. The complexity of the processes and environmental variables affecting these emissions complicates accurate and reliable quantification of gas fluxes and production rates. Although a plethora of measurement technologies exist, each method has limitations that hamper accurate quantification of gas fluxes. Despite a growing interest in gas emission measurements, only a few available technologies include real-time, continuous monitoring capabilities. Commonly applied state-of-the-art measurement frameworks and technologies are critically examined and discussed, and recommendations for future research to address real-time monitoring requirements for forthcoming regulation and management needs are provided.

  4. Molybdenum disulfide and water interaction parameters

    NASA Astrophysics Data System (ADS)

    Heiranian, Mohammad; Wu, Yanbin; Aluru, Narayana R.

    2017-09-01

    Understanding the interaction between water and molybdenum disulfide (MoS2) is of crucial importance for investigating the physics of various applications involving MoS2-water interfaces. An accurate force field is required to describe water-MoS2 interactions. In this work, water-MoS2 force field parameters are derived using the high-accuracy random phase approximation (RPA) method and validated by comparison with experiments. The parameters obtained from the RPA method yield water-MoS2 interface properties (solid-liquid work of adhesion) in good agreement with the experimental measurements. An accurate description of the MoS2-water interaction will facilitate the study of MoS2 in applications such as DNA sequencing, sea water desalination, and power generation.
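    For context, force fields of this kind typically reduce the water-MoS2 interaction to pairwise terms such as a 12-6 Lennard-Jones potential; the sketch below uses placeholder well-depth and size parameters, not the RPA-derived values reported in the paper:

    ```python
    import numpy as np

    def lj_energy(r_nm, epsilon_kj_mol=0.6, sigma_nm=0.33):
        """12-6 Lennard-Jones pair energy (kJ/mol); parameters are placeholders."""
        sr6 = (sigma_nm / r_nm) ** 6
        return 4.0 * epsilon_kj_mol * (sr6 ** 2 - sr6)

    # The depth and location of the attractive well control interface
    # properties such as the solid-liquid work of adhesion.
    r = np.linspace(0.30, 1.00, 8)
    print(lj_energy(r))
    ```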

  5. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E. Lynn; Seong, Jeong Chang; Steinwand, Dan

    2002-01-01

    Modeling regional and global activities of climatic and human-induced change requires accurate geographic data from which we can develop mathematical and statistical tabulations of attributes and properties of the environment. Many of these models depend on data formatted as raster cells or matrices of pixel values. Recently, it has been demonstrated that regional and global raster datasets are subject to significant error from mathematical projection and that these errors are of such magnitude that model results may be jeopardized (Steinwand et al., 1995; Yang et al., 1996; Usery and Seong, 2001; Seong and Usery, 2001). There is a need to develop methods of projection that maintain the accuracy of these datasets to support regional and global analyses and modeling.

  6. Eye movement perimetry in glaucoma.

    PubMed

    Trope, G E; Eizenman, M; Coyle, E

    1989-08-01

    Present-day computerized perimetry is often inaccurate and unreliable owing to the need to maintain central fixation over long periods while repressing the normal response to presentation of peripheral stimuli. We tested a new method of perimetry that does not require prolonged central fixation. During this test eye movements were encouraged on presentation of a peripheral target. Twenty-three eyes were studied with an Octopus perimeter, with a technician monitoring eye movements. The sensitivity was 100% and the specificity 23%. The low specificity was due to the technician's inability to accurately monitor small eye movements in the central 6 degrees field. If small eye movements are monitored accurately with an eye tracker, eye movement perimetry could become an alternative method to standard perimetry.

  7. Prediction of phospholipidosis-inducing potential of drugs by in vitro biochemical and physicochemical assays followed by multivariate analysis.

    PubMed

    Kuroda, Yukihiro; Saito, Madoka

    2010-03-01

    An in vitro method to predict the phospholipidosis-inducing potential of cationic amphiphilic drugs (CADs) was developed using biochemical and physicochemical assays. The following parameters were applied to principal component analysis: the physicochemical parameters pKa and clogP, the dissociation constant of CADs from phospholipid, inhibition of enzymatic phospholipid degradation, and the metabolic stability of CADs. In the score plot, phospholipidosis-inducing drugs (amiodarone, propranolol, imipramine, chloroquine) were plotted locally, forming a subspace for positive CADs, while non-inducing drugs (chlorpromazine, chloramphenicol, disopyramide, lidocaine) were scattered outside the subspace, allowing a clear discrimination between the two classes of CADs. CADs that often produce false results in conventional physicochemical or cell-based assay methods were accurately classified by our method. Basic and lipophilic disopyramide could be accurately predicted as a nonphospholipidogenic drug. Moreover, chlorpromazine, which is often falsely predicted as a phospholipidosis-inducing drug by in vitro methods, could be accurately classified. Because this method uses the pharmacokinetic parameters pKa, clogP, and metabolic stability, which are usually obtained in the early stages of drug development, it newly requires only two parameters: binding to phospholipid and inhibition of the lipid degradation enzyme. Therefore, this method provides a cost-effective approach to predicting the phospholipidosis-inducing potential of a drug. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
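    A hedged sketch of the multivariate step with scikit-learn: a drug-by-parameter matrix is standardized and projected by principal component analysis, and a new compound would be classified by where its score falls; all numeric values below are invented for illustration and are not the assay data from the study:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Rows: drugs; columns: pKa, clogP, phospholipid dissociation constant,
    # enzyme-inhibition measure, metabolic stability (all values hypothetical).
    X = np.array([[9.4, 7.6, 0.2, 0.9, 0.8],   # inducer-like profile
                  [9.5, 3.0, 0.5, 0.7, 0.5],   # inducer-like profile
                  [8.1, 2.6, 2.0, 0.1, 0.9],   # non-inducer-like profile
                  [7.9, 2.3, 1.8, 0.2, 0.6]])  # non-inducer-like profile

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
    print(scores)  # inducers and non-inducers should separate in the score plot
    ```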

  8. Measurement of limb volume: laser scanning versus volume displacement.

    PubMed

    McKinnon, John Gregory; Wong, Vanessa; Temple, Walley J; Galbraith, Callum; Ferry, Paul; Clynch, George S; Clynch, Colin

    2007-10-01

    Determining the prevalence and treatment success of surgical lymphedema requires accurate and reproducible measurement. A new method of measurement of limb volume is described. A series of inanimate objects of known and unknown volume was measured using digital laser scanning and water displacement. A similar comparison was made with 10 human volunteers. Digital scanning was evaluated by comparison to the established method of water displacement, then to itself to determine reproducibility of measurement. (1) Objects of known volume: Laser scanning accurately measured the calculated volume but water displacement became less accurate as the size of the object increased. (2) Objects of unknown volume: As average volume increased, there was an increasing bias of underestimation of volume by the water displacement method. The coefficient of reproducibility of water displacement was 83.44 ml. In contrast, the reproducibility of the digital scanning method was 19.0 ml. (3) Human data: The mean difference between water displacement volume and laser scanning volume was 151.7 ml (SD +/- 189.5). The coefficient of reproducibility of water displacement was 450.8 ml whereas for laser scanning it was 174 ml. Laser scanning is an innovative method of measuring tissue volume that combines precision and reproducibility and may have clinical utility for measuring lymphedema. 2007 Wiley-Liss, Inc
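    The coefficients of reproducibility quoted above are conventionally derived from repeat measurements; a minimal sketch, assuming the Bland-Altman convention of 1.96 times the standard deviation of the paired differences (conventions differ, so this is an assumption rather than necessarily the paper's exact definition):

    ```python
    import numpy as np

    def coefficient_of_reproducibility(first, second):
        """Repeatability coefficient for paired repeat measurements (ml here)."""
        diffs = np.asarray(first, float) - np.asarray(second, float)
        return 1.96 * np.std(diffs, ddof=1)

    # Hypothetical repeat limb-volume measurements (ml) with one method:
    print(coefficient_of_reproducibility([2510, 2604, 2498], [2531, 2588, 2507]))
    ```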

  9. Fully automated tumor segmentation based on improved fuzzy connectedness algorithm in brain MR images.

    PubMed

    Harati, Vida; Khayati, Rasoul; Farzan, Abdolreza

    2011-07-01

    Uncontrollable and unlimited cell growth leads to tumor genesis in the brain. If brain tumors are not diagnosed early and cured properly, they can cause permanent brain damage or even death. As in all methods of treatment, any information about tumor position and size is important for successful treatment; hence, finding an accurate and fully automated method to provide this information to physicians is necessary. A fully automatic and accurate method for tumor region detection and segmentation in brain magnetic resonance (MR) images is suggested. The presented approach is an improved fuzzy connectedness (FC) algorithm based on a scale in which the seed point is selected automatically. The algorithm is independent of the tumor type in terms of its pixel intensity. Tumor segmentation evaluation results based on similarity criteria (similarity index (SI) 92.89%, overlap fraction (OF) 91.75%, and extra fraction (EF) 3.95%) indicate a higher performance of the proposed approach compared to conventional methods, especially in MR images of tumor regions with low contrast. Thus, the suggested method is useful for increasing the capability of automatic estimation of tumor size and position in brain tissues, which provides more accurate investigation of the required surgery, chemotherapy, and radiotherapy procedures. Copyright © 2011 Elsevier Ltd. All rights reserved.
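    The three reported overlap criteria have standard definitions for binary masks; a short sketch, assuming SI is the Dice coefficient, OF the fraction of the reference that is covered, and EF the falsely segmented area relative to the reference:

    ```python
    import numpy as np

    def overlap_metrics(seg, ref):
        """SI, OF, EF for binary segmentation and reference masks."""
        seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
        inter = np.logical_and(seg, ref).sum()
        si = 2.0 * inter / (seg.sum() + ref.sum())        # similarity index (Dice)
        of = inter / ref.sum()                            # overlap fraction
        ef = np.logical_and(seg, ~ref).sum() / ref.sum()  # extra fraction
        return si, of, ef
    ```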

  10. Space mapping method for the design of passive shields

    NASA Astrophysics Data System (ADS)

    Sergeant, Peter; Dupré, Luc; Melkebeek, Jan

    2006-04-01

    The aim of the paper is to find the optimal geometry of a passive shield for the reduction of the magnetic stray field of an axisymmetric induction heater. For the optimization, a space mapping algorithm is used that requires two models. The first is an accurate model with a high computational effort as it contains finite element models. The second is less accurate, but it has a low computational effort as it uses an analytical model: the shield is replaced by a number of mutually coupled coils. The currents in the shield are found by solving an electrical circuit. Space mapping combines both models to obtain the optimal passive shield fast and accurately. The presented optimization technique is compared with gradient, simplex, and genetic algorithms.

  11. Characterization of in-flight performance of ion propulsion systems

    NASA Astrophysics Data System (ADS)

    Sovey, James S.; Rawlin, Vincent K.

    1993-06-01

    In-flight measurements of ion propulsion performance, ground test calibrations, and diagnostic performance measurements were reviewed. It was found that accelerometers provided the most accurate in-flight thrust measurements compared with four other methods that were surveyed. An experiment has also demonstrated that pre-flight alignment of the thrust vector was sufficiently accurate so that gimbal adjustments and use of attitude control thrusters were not required to counter disturbance torques caused by thrust vector misalignment. The effects of facility background pressure, facility enhanced charge-exchange reactions, and contamination on ground-based performance measurements are also discussed. Vacuum facility pressures for inert-gas ion thruster life tests and flight qualification tests will have to be less than 2 mPa to ensure accurate performance measurements.

  12. Characterization of in-flight performance of ion propulsion systems

    NASA Technical Reports Server (NTRS)

    Sovey, James S.; Rawlin, Vincent K.

    1993-01-01

    In-flight measurements of ion propulsion performance, ground test calibrations, and diagnostic performance measurements were reviewed. It was found that accelerometers provided the most accurate in-flight thrust measurements compared with four other methods that were surveyed. An experiment has also demonstrated that pre-flight alignment of the thrust vector was sufficiently accurate so that gimbal adjustments and use of attitude control thrusters were not required to counter disturbance torques caused by thrust vector misalignment. The effects of facility background pressure, facility enhanced charge-exchange reactions, and contamination on ground-based performance measurements are also discussed. Vacuum facility pressures for inert-gas ion thruster life tests and flight qualification tests will have to be less than 2 mPa to ensure accurate performance measurements.

  13. Singlet oxygen detection in biological systems: Uses and limitations

    PubMed Central

    Koh, Eugene; Fluhr, Robert

    2016-01-01

    The study of singlet oxygen in biological systems is challenging in many ways. Singlet oxygen is a relatively unstable, ephemeral molecule, and its properties make it highly reactive with many biomolecules, making it difficult to quantify accurately. Several methods have been developed to study this elusive molecule, but most studies thus far have focused on conditions that produce relatively large amounts of singlet oxygen. However, more sensitive methods are required as one begins to explore the levels of singlet oxygen involved in signaling and regulatory processes. Here we discuss the various methods used in the study of singlet oxygen, and outline their uses and limitations. PMID:27231787

  14. Automated analysis of plethysmograms for functional studies of hemodynamics

    NASA Astrophysics Data System (ADS)

    Zatrudina, R. Sh.; Isupov, I. B.; Gribkov, V. Yu.

    2018-04-01

    The most promising method for the quantitative determination of indicators of cardiovascular tone and of cerebral hemodynamics is impedance plethysmography. The accurate determination of these indicators requires the correct identification of the characteristic points in the thoracic and cranial impedance plethysmograms, respectively. An algorithm for the automatic analysis of these plethysmograms is presented. The algorithm is based on the strict temporal relationships between the phases of the cardiac cycle and the characteristic points of the plethysmogram. The proposed algorithm does not require estimation of initial data or selection of processing parameters. Use of the method on healthy subjects showed a very low detection error for characteristic points.
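    A hedged sketch of the temporal-relationship idea using SciPy: systolic maxima are located first, and a secondary characteristic point is then searched for inside a fixed window after each maximum; the synthetic waveform and the window limits are assumptions for illustration, not the algorithm's actual parameters:

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    fs = 500.0                                   # sampling rate, Hz
    t = np.arange(0.0, 10.0, 1.0 / fs)
    # Placeholder pulse-like waveform standing in for a measured plethysmogram.
    signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)

    peaks, _ = find_peaks(signal, distance=int(0.5 * fs))  # systolic maxima
    # Search a fixed 0.2-0.4 s window after each maximum for a secondary point;
    # the window stands in for the cardiac-phase timing relationships.
    lo, hi = int(0.2 * fs), int(0.4 * fs)
    notches = [p + lo + np.argmin(signal[p + lo:p + hi])
               for p in peaks if p + hi < len(signal)]
    print(len(peaks), len(notches))
    ```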

  15. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 Msolar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 Msolar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  16. A new automated passive capillary lysimeter for logging real-time drainage water fluxes

    USDA-ARS?s Scientific Manuscript database

    Effective monitoring of chemical transport through the soil profile requires accurate and appropriate instrumentation to measure drainage water fluxes below the root zone of cropping system. The objectives of this study were to methodically describe in detail the construction and installation of a n...

  17. Constructing Sample Space with Combinatorial Reasoning: A Mixed Methods Study

    ERIC Educational Resources Information Center

    McGalliard, William A., III.

    2012-01-01

    Recent curricular developments suggest that students at all levels need to be statistically literate and able to efficiently and accurately make probabilistic decisions. Furthermore, statistical literacy is a requirement to being a well-informed citizen of society. Research also recognizes that the ability to reason probabilistically is supported…

  18. Permanent Disability Evaluation

    PubMed Central

    Chovil, A. C.

    1975-01-01

    This paper is a review of the theory and practice of disability evaluation with emphasis on the distinction between medical impairment and disability. The requirements for making an accurate assessment of medical impairments are discussed. The author suggests three basic standards which can be used for establishing a simplified method of assessing physical impairment. PMID:20469213

  19. Continuous flow hygroscopicity-resolved relaxed eddy accumulation (Hy-Res REA) method of measuring size-resolved sodium chloride particle fluxes

    EPA Science Inventory

    The accurate representation of aerosols in climate models requires direct ambient measurement of the size- and composition-dependent particle production fluxes. Here, we present the design, testing, and analysis of data collected through the first instrument capable of measuring ...

  20. The Voronoi Implicit Interface Method for computing multiphase physics

    PubMed Central

    Saye, Robert I.; Sethian, James A.

    2011-01-01

    We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces. PMID:22106269

  1. The Voronoi Implicit Interface Method for computing multiphase physics

    DOE PAGES

    Saye, Robert I.; Sethian, James A.

    2011-11-21

    In this paper, we introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. Finally, we test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces.

  2. Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model

    NASA Astrophysics Data System (ADS)

    Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.

    2007-05-01

    Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The abundance of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem and predict the strip crown, a new customized semi-analytical modeling technique that couples the Finite Element Method (FEM) with classical solid mechanics was developed to model the deflection of the rolls and strip while under load. The technique employed offers several important advantages over traditional methods to calculate strip crown, including continuity of elastic foundations, non-iterative solution when using predetermined foundation moduli, continuous third-order displacement fields, simple stress-field determination, and a comparatively faster solution time.

  3. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2003-01-01

    An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives be calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
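    The forward-over-reverse strategy described here is what modern automatic-differentiation tools expose directly; a minimal sketch in Python with the JAX library, using a made-up smooth function as a stand-in for the flow code (the paper itself applies ADIFOR to Fortran):

    ```python
    import jax
    import jax.numpy as jnp

    # Hypothetical smooth "lift coefficient" of two design variables, standing
    # in for the flow solver that ADIFOR differentiates in the paper.
    def lift(x):
        return jnp.sin(x[0]) * jnp.exp(-x[1] ** 2) + 0.5 * x[0] * x[1]

    grad_lift = jax.jacrev(lift)              # reverse (adjoint) first derivatives
    hess_lift = jax.jacfwd(jax.jacrev(lift))  # forward-over-reverse Hessian

    x0 = jnp.array([0.3, 0.1])
    print(grad_lift(x0))  # gradient vector
    print(hess_lift(x0))  # complete Hessian matrix
    ```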

  4. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2001-01-01

    An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle-of-attack, and freestream Mach number.

  5. Error Reduction Program. [combustor performance evaluation codes

    NASA Technical Reports Server (NTRS)

    Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.

    1985-01-01

    The details of a study to select, incorporate and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two-dimensional and three-dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation of a scheme in 3D-TEACH which is usually more accurate than the hybrid differencing method and never less accurate.

  6. Measurement of compressed breast thickness by optical stereoscopic photogrammetry.

    PubMed

    Tyson, Albert H; Mawdsley, Gordon E; Yaffe, Martin J

    2009-02-01

    The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.

  7. Binding and Diffusion of Lithium in Graphite: Quantum Monte Carlo Benchmarks and Validation of van der Waals Density Functional Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganesh, P.; Kim, Jeongnim; Park, Changwon

    2014-11-03

    Highly accurate diffusion quantum Monte Carlo (QMC) studies of the adsorption and diffusion of atomic lithium in AA-stacked graphite are compared with van der Waals-including density functional theory (DFT) calculations. Predicted QMC lattice constants for pure AA graphite agree with experiment. Pure AA-stacked graphite is shown to challenge many van der Waals methods even when they are accurate for conventional AB graphite. Moreover, the highest overall DFT accuracy, considering pure AA-stacked graphite as well as lithium binding and diffusion, is obtained by the self-consistent van der Waals functional vdW-DF2, although errors in binding energies remain. Empirical approaches based on point charges such as DFT-D are inaccurate unless the local charge transfer is assessed. Our results demonstrate that the lithium-carbon system requires a simultaneous, highly accurate description of both charge transfer and van der Waals interactions, favoring self-consistent approaches.

  8. Functional Mobility Testing: A Novel Method to Create Suit Design Requirements

    NASA Technical Reports Server (NTRS)

    England, Scott A.; Benson, Elizabeth A.; Rajulu, Sudhakar L.

    2008-01-01

    This study was performed to aid in the creation of design requirements for the next generation of space suits that more accurately describe the level of mobility necessary for a suited crewmember, through the use of an innovative methodology utilizing functional mobility. A novel method was utilized involving the collection of kinematic data while 20 subjects (10 male, 10 female) performed pertinent functional tasks that will be required of a suited crewmember during various phases of a lunar mission. These tasks were selected based on relevance and criticality from a larger list of tasks that may be carried out by the crew. Kinematic data were processed through Vicon BodyBuilder software to calculate joint angles for the ankle, knee, hip, torso, shoulder, elbow, and wrist. Maximum functional mobility was consistently lower than maximum isolated mobility. This study suggests that conventional methods for establishing design requirements for human-systems interfaces based on maximal isolated joint capabilities may overestimate the required mobility. Additionally, this method provides a valuable means of evaluating systems created from these requirements by comparing the mobility available in a new spacesuit, or the mobility required to use a new piece of hardware, to this newly established database of functional mobility.

  9. Modeling and scaleup of steamflood in a heterogeneous reservoir

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dehghani, K.; Basham, W.M.; Durlofsky, L.J.

    1995-11-01

    A series of simulation runs was conducted for different geostatistically derived cross-sectional models to study the degree of heterogeneity required for proper modeling of steamfloods in a thick, heavy-oil reservoir with thin diatomite barriers. Different methods for coarsening the most detailed models were applied, and performance predictions for the coarsened and detailed models were compared. Use of a general scaleup method provided the most accurate coarse-grid models.

  10. An Automated Method for High-Definition Transcranial Direct Current Stimulation Modeling*

    PubMed Central

    Huang, Yu; Su, Yuzhuo; Rorden, Christopher; Dmochowski, Jacek; Datta, Abhishek; Parra, Lucas C.

    2014-01-01

    Targeted transcranial stimulation with electric currents requires accurate models of the current flow from scalp electrodes to the human brain. Idiosyncratic anatomy of individual brains and heads leads to significant variability in such current flows across subjects, thus, necessitating accurate individualized head models. Here we report on an automated processing chain that computes current distributions in the head starting from a structural magnetic resonance image (MRI). The main purpose of automating this process is to reduce the substantial effort currently required for manual segmentation, electrode placement, and solving of finite element models. In doing so, several weeks of manual labor were reduced to no more than 4 hours of computation time and minimal user interaction, while current-flow results for the automated method deviated by less than 27.9% from the manual method. Key facilitating factors are the addition of three tissue types (skull, scalp and air) to a state-of-the-art automated segmentation process, morphological processing to correct small but important segmentation errors, and automated placement of small electrodes based on easily reproducible standard electrode configurations. We anticipate that such an automated processing will become an indispensable tool to individualize transcranial direct current stimulation (tDCS) therapy. PMID:23367144

  11. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    NASA Astrophysics Data System (ADS)

    An, Zhe; Rey, Daniel; Ye, Jingxin; Abarbanel, Henry D. I.

    2017-01-01

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.
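    A minimal sketch of plain (non-delayed) nudging on a small chaotic system, assuming the Lorenz-63 equations with only the x component observed; the gain and initial conditions are illustrative, and the time-delay generalization of Rey et al. is not shown:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    sigma, rho, beta, gain = 10.0, 28.0, 8.0 / 3.0, 20.0

    def lorenz(t, u):
        x, y, z = u
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_eval = np.linspace(0.0, 20.0, 4001)
    truth = solve_ivp(lorenz, (0, 20), [1.0, 1.0, 1.0],
                      t_eval=t_eval, rtol=1e-8).y

    def nudged(t, u):
        dx, dy, dz = lorenz(t, u)
        x_obs = np.interp(t, t_eval, truth[0])       # the single observed variable
        return [dx + gain * (x_obs - u[0]), dy, dz]  # nudge toward the observation

    est = solve_ivp(nudged, (0, 20), [-5.0, 12.0, 9.0],
                    t_eval=t_eval, rtol=1e-8).y
    print("final-state error:", np.linalg.norm(est[:, -1] - truth[:, -1]))
    ```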

  12. Wavenumber-extended high-order oscillation control finite volume schemes for multi-dimensional aeroacoustic computations

    NASA Astrophysics Data System (ADS)

    Kim, Sungtae; Lee, Soogab; Kim, Kyu Hong

    2008-04-01

    A new numerical method for accurate and efficient aeroacoustic computations of multi-dimensional compressible flows has been developed. The core idea of the scheme is to unite the advantages of the wavenumber-extended optimized scheme and the M-AUSMPW+/MLP schemes by predicting the physical distribution of flow variables more accurately in multiple space dimensions. A wavenumber-extended optimization procedure for the finite volume approach, based on the conservative requirement, is newly proposed to enhance the accuracy needed to capture the acoustic portion of the solution in smooth regions. Furthermore, a new mechanism for distinguishing between continuous and discontinuous regions, based on the Gibbs phenomenon at discontinuities, is introduced to eliminate excessive numerical dissipation in continuous regions by restricting the application of MLP according to the distinguishing function. To investigate the effectiveness of the developed method, a sequence of benchmark simulations, including spherical wave propagation, nonlinear wave propagation, a shock tube problem, and a vortex preservation test problem, was executed. The utility of the new method for aeroacoustic applications is further verified on more realistic shock-vortex interaction and muzzle blast flow problems by comparison with previous numerical and experimental results.

  13. Electromagnetic Extended Finite Elements for High-Fidelity Multimaterial Problems LDRD Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siefert, Christopher; Bochev, Pavel Blagoveston; Kramer, Richard Michael Jack

    Surface effects are critical to the accurate simulation of electromagnetics (EM) as current tends to concentrate near material surfaces. Sandia EM applications, which include exploding bridge wires for detonator design, electromagnetic launch of flyer plates for material testing and gun design, lightning blast-through for weapon safety, electromagnetic armor, and magnetic flux compression generators, all require accurate resolution of surface effects. These applications operate in a large deformation regime, where body-fitted meshes are impractical and multimaterial elements are the only feasible option. State-of-the-art methods use various mixture models to approximate the multi-physics of these elements. The empirical nature of these models can significantly compromise the accuracy of the simulation in this very important surface region. We propose to substantially improve the predictive capability of electromagnetic simulations by removing the need for empirical mixture models at material surfaces. We do this by developing an eXtended Finite Element Method (XFEM) and an associated Conformal Decomposition Finite Element Method (CDFEM) which satisfy the physically required compatibility conditions at material interfaces. We demonstrate the effectiveness of these methods for diffusion and diffusion-like problems on node, edge and face elements in 2D and 3D. We also present preliminary work on h-hierarchical elements and remap algorithms.

  14. Next Day Building Load Predictions based on Limited Input Features Using an On-Line Laterally Primed Adaptive Resonance Theory Artificial Neural Network.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Christian Birk; Robinson, Matt; Yasaei, Yasser

    Optimal integration of thermal energy storage within commercial building applications requires accurate load predictions. Several methods exist that provide an estimate of a building's future needs, including component-based models and data-driven algorithms. This work implemented an algorithm previously untested for this application, called a Laterally Primed Adaptive Resonance Theory (LAPART) artificial neural network (ANN). The LAPART algorithm provided accurate results over a two-month period where minimal historical data and a small number of input types were available. These results are significant, because common practice has often overlooked the implementation of an ANN. ANNs have often been perceived as too complex and as requiring large amounts of data to provide accurate results. The LAPART neural network was implemented in an on-line learning manner, which refers to the continuous updating of the training data as time passes. For this experiment, training began with a single day of data and grew to two months. This approach provides a platform for immediate implementation that requires minimal time and effort. The results from the LAPART algorithm were compared with statistical regression and a component-based model. The comparison was based on the predictions' linear relationship with the measured data, mean squared error, mean bias error, and the cost savings achieved by the respective prediction techniques. The results show that the LAPART algorithm provided a reliable and cost-effective means to predict the building load for the next day.

  15. A novel implementation of homodyne time interval analysis method for primary vibration calibration

    NASA Astrophysics Data System (ADS)

    Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo

    2011-12-01

    In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method, and their causes, are described with respect to its software algorithm and hardware implementation, based on which a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data acquisition card, a primary vibration calibration system using the simplified method can accurately measure the complex sensitivity of accelerometers, meeting the uncertainty requirements laid down in the pertinent ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and comparison experiments, and its performance is analyzed. This simplified method is recommended for national metrology institutes of developing countries and for industrial primary vibration calibration labs because of its simplified algorithm and low hardware requirements.

  16. Bayesian-based estimation of acoustic surface impedance: Finite difference frequency domain approach.

    PubMed

    Bockman, Alexander; Fackler, Cameron; Xiang, Ning

    2015-04-01

    Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the difference between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent to the experiment, model, and numerics. A geometry-agnostic method is developed here, and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.

  17. Development of cable drive systems for an automated assembly project

    NASA Technical Reports Server (NTRS)

    Monroe, Charles A., Jr.

    1990-01-01

    In a robotic assembly project, a method was needed to accurately position a robot and a structure which the robot was to assemble. The requirements for high precision and relatively long travel distances dictated the use of cable drive systems. The design of the mechanisms used in translating the robot and in rotating the assembly under construction is discussed. The design criteria are discussed, and the effect of particular requirements on the design is noted. Finally, the measured performance of the completed mechanism is compared with design requirements.

  18. Proximal sensing for soil carbon accounting

    NASA Astrophysics Data System (ADS)

    England, Jacqueline R.; Viscarra Rossel, Raphael A.

    2018-05-01

    Maintaining or increasing soil organic carbon (C) is vital for securing food production and for mitigating greenhouse gas (GHG) emissions, climate change, and land degradation. Some land management practices in cropping, grazing, horticultural, and mixed farming systems can be used to increase organic C in soil, but to assess their effectiveness, we need accurate and cost-efficient methods for measuring and monitoring the change. To determine the stock of organic C in soil, one requires measurements of soil organic C concentration, bulk density, and gravel content, but using conventional laboratory-based analytical methods is expensive. Our aim here is to review the current state of proximal sensing for the development of new soil C accounting methods for emissions reporting and in emissions reduction schemes. We evaluated sensing techniques in terms of their rapidity, cost, accuracy, safety, readiness, and their state of development. The most suitable method for measuring soil organic C concentrations appears to be visible-near-infrared (vis-NIR) spectroscopy and, for bulk density, active gamma-ray attenuation. Sensors for measuring gravel have not been developed, but an interim solution with rapid wet sieving and automated measurement appears useful. Field-deployable, multi-sensor systems are needed for cost-efficient soil C accounting. Proximal sensing can be used for soil organic C accounting, but the methods need to be standardized and procedural guidelines need to be developed to ensure proficient measurement and accurate reporting and verification. These are particularly important if the schemes use financial incentives for landholders to adopt management practices to sequester soil organic C. We list and discuss requirements for developing new soil C accounting methods based on proximal sensing, including requirements for recording, verification, and auditing.
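    The three quantities listed (organic C concentration, bulk density, and gravel content) combine into a stock estimate through a standard layer formula; a small sketch, with the per-hectare unit conversion noted in the comments:

    ```python
    def soil_carbon_stock(oc_percent, bulk_density_g_cm3, depth_cm, gravel_frac):
        """Soil organic C stock (t/ha) for one depth layer.

        1 g/cm^3 over a 1 cm layer on 1 ha weighs 100 t, and OC%/100 converts
        the concentration, so the two factors of 100 cancel.
        """
        return oc_percent * bulk_density_g_cm3 * depth_cm * (1.0 - gravel_frac)

    # Example: 1.2% organic C, bulk density 1.3 g/cm^3, 0-30 cm layer, 5% gravel.
    print(soil_carbon_stock(1.2, 1.3, 30.0, 0.05))  # ~44.5 t/ha
    ```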

  19. Accurate determination of the geoid undulation N

    NASA Astrophysics Data System (ADS)

    Lambrou, E.; Pantazis, G.; Balodimos, D. D.

    2003-04-01

    This work, related to the activities of the CERGOP Study Group 'Geodynamics of the Balkan Peninsula', presents a method for the determination of the variation ΔN and, indirectly, of the geoid undulation N with an accuracy of a few millimeters. It is based on the determination of the components ξ, η of the deflection of the vertical using modern geodetic instruments (a digital total station and a GPS receiver). An analysis of the method is given. Accuracy of the order of 0.01 arcsec in the estimated values of the astronomical coordinates Φ and Λ is achieved. The result of applying the proposed method in an area around Athens is presented. In this test application, a system is used which takes advantage of the capabilities of modern geodetic instruments. The GPS receiver permits the determination of the geodetic coordinates in a chosen reference system and, in addition, provides accurate timing information. The astronomical observations are performed with a digital total station with electronic registering of angles and time. The required accuracy of the coordinate values is achieved in about four hours of fieldwork. In addition, the instrumentation is lightweight, easily transportable, and can be set up in the field very quickly. Combined with a streamlined data reduction procedure and the use of up-to-date astrometric data, the values of the components ξ, η of the deflection of the vertical and, eventually, the changes ΔN of the geoid undulation are determined easily and accurately. In conclusion, this work demonstrates that it is quite feasible to create an accurate map of the geoid undulation, especially in areas that present large geoid variations and where other methods cannot give accurate and reliable results.

  20. Image Contrast Immersion Method for Measuring Refractive Index Applied to Spider Silks

    DTIC Science & Technology

    2011-09-26

    … transparent, low visibility orb web. Refractometry is the most widely used technique for accurately measuring n for transparent media. It has been in use for more than a century. There are several standard refractometry methods [8]. Most require a bulk sample with surfaces polished to optical … [8] A. J. Werner, “Methods in high precision refractometry of optical glasses,” Appl. Opt. 7(5), 837–843 (1968).

  1. Nonideal isentropic gas flow through converging-diverging nozzles

    NASA Technical Reports Server (NTRS)

    Bober, W.; Chow, W. L.

    1990-01-01

    A method for treating nonideal gas flows through converging-diverging nozzles is described. The method incorporates the Redlich-Kwong equation of state. The Runge-Kutta method is used to obtain a solution. Numerical results were obtained for methane gas. Typical plots of pressure, temperature, and area ratios as functions of Mach number are given. From the plots, it can be seen that there exists a range of reservoir conditions that require the gas to be treated as nonideal if an accurate solution is to be obtained.
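    For reference, the Redlich-Kwong equation of state used in the method has a closed form; the sketch below evaluates it for methane with critical constants from standard tables, and simply compares the nonideal pressure against the ideal-gas value at one state point:

    ```python
    import numpy as np

    R, Tc, Pc = 8.314, 190.6, 4.599e6  # J/(mol K); methane Tc (K) and Pc (Pa)
    a = 0.42748 * R**2 * Tc**2.5 / Pc  # Redlich-Kwong attraction parameter
    b = 0.08664 * R * Tc / Pc          # Redlich-Kwong covolume

    def pressure_rk(T, V):
        """Redlich-Kwong pressure (Pa) at T (K) and molar volume V (m^3/mol)."""
        return R * T / (V - b) - a / (np.sqrt(T) * V * (V + b))

    T, V = 300.0, 1.0e-4  # a dense, reservoir-like state where nonideality matters
    print(pressure_rk(T, V), R * T / V)  # nonideal vs. ideal-gas pressure
    ```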

  2. Molecular Building Block-Based Electronic Charges for High-Throughput Screening of Metal-Organic Frameworks for Adsorption Applications.

    PubMed

    Argueta, Edwin; Shaji, Jeena; Gopalan, Arun; Liao, Peilin; Snurr, Randall Q; Gómez-Gualdrón, Diego A

    2018-01-09

    Metal-organic frameworks (MOFs) are porous crystalline materials with attractive properties for gas separation and storage. Their remarkable tunability makes it possible to create millions of MOF variations but creates the need for fast material screening to identify promising structures. Computational high-throughput screening (HTS) is a possible solution, but its usefulness is tied to accurate predictions of MOF adsorption properties. Accurate adsorption simulations often require an accurate description of electrostatic interactions, which depend on the electronic charges of the MOF atoms. HTS-compatible methods to assign charges to MOF atoms need to accurately reproduce electrostatic potentials (ESPs) and be computationally affordable, but current methods present an unsatisfactory trade-off between computational cost and accuracy. We illustrate a method to assign charges to MOF atoms based on ab initio calculations on MOF molecular building blocks. A library of building blocks with built-in charges is thus created and used by an automated MOF construction code to create hundreds of MOFs with charges "inherited" from the constituent building blocks. The molecular building block-based (MBBB) charges are similar to REPEAT charges-which are charges that reproduce ESPs obtained from ab initio calculations on crystallographic unit cells of nanoporous crystals-and thus similar predictions of adsorption loadings, heats of adsorption, and Henry's constants are obtained with either method. The presented results indicate that the MBBB method to assign charges to MOF atoms is suitable for use in computational high-throughput screening of MOFs for applications that involve adsorption of molecules such as carbon dioxide.

  3. Orientation of airborne laser scanning point clouds with multi-view, multi-scale image blocks.

    PubMed

    Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik

    2009-01-01

    Comprehensive 3D modeling of our environment requires integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method performs registration by minimizing the distances between an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Once registration in object space is completed, the relative orientation between the images and the laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of the relative orientation increased when more images were included in the block. The orientations of the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately with the interactive method. Because the interactive method forces laser scanning data to fit the images, inaccurate rotations cause corresponding shifts in image positions. However, in a test case in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly to within a couple of centimeters.
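    The first method's core operation, a least-squares rigid alignment, can be sketched with the SVD-based Kabsch solution; this assumes point correspondences are already known, whereas real laser-scanning registration must also establish them (for example by nearest-neighbor search inside an ICP loop):

    ```python
    import numpy as np

    def rigid_align(source, target):
        """Least-squares rotation R and translation t mapping paired source -> target."""
        src, tgt = np.asarray(source, float), np.asarray(target, float)
        cs, ct = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - cs).T @ (tgt - ct)           # cross-covariance of centered points
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, ct - R @ cs
    ```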

  4. Orientation of Airborne Laser Scanning Point Clouds with Multi-View, Multi-Scale Image Blocks

    PubMed Central

    Rönnholm, Petri; Hyyppä, Hannu; Hyyppä, Juha; Haggrén, Henrik

    2009-01-01

    Comprehensive 3D modeling of our environment requires integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method performs registration by minimizing the distances between an airborne laser point cloud and a 3D model. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Once registration in object space is completed, the relative orientation between the images and the laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of the relative orientation increased when more images were included in the block. The orientations of the first and second methods were compared. The comparison showed that correct rotations were the most difficult to detect accurately with the interactive method. Because the interactive method forces laser scanning data to fit the images, inaccurate rotations cause corresponding shifts in image positions. However, in a test case in which the orientation differences included only shifts, the interactive method could solve the relative orientation of an aerial image and airborne laser scanning data repeatedly to within a couple of centimeters. PMID:22454569

  5. Contribution a l'inspection automatique des pieces flexibles a l'etat libre sans gabarit de conformation

    NASA Astrophysics Data System (ADS)

    Sattarpanah Karganroudi, Sasan

    The competitive industrial market demands that manufacturing companies provide the market with a higher quality of production. The quality control department in industrial sectors verifies the geometrical requirements of products within consistent tolerances. These requirements are presented in Geometric Dimensioning and Tolerancing (GD&T) standards. However, conventional measuring and dimensioning methods for manufactured parts are time-consuming and costly. Nowadays, manual and tactile measuring methods have been replaced by Computer-Aided Inspection (CAI) methods. The CAI methods apply improvements in computational calculations and 3-D data acquisition devices (scanners) to compare the scan mesh of manufactured parts with the Computer-Aided Design (CAD) model. Metrology standards, such as ASME-Y14.5 and ISO-GPS, require implementing the inspection in a free-state, wherein the part is subject only to its own weight. Non-rigid parts are exempted from the free-state inspection rule because of their significant geometrical deviation in a free-state with respect to the tolerances. Despite the developments in CAI methods, the inspection of non-rigid parts still remains a serious challenge. Conventional inspection methods apply complex fixtures to non-rigid parts to retrieve the functional shape of these parts on physical fixtures; however, the fabrication and setup of these fixtures are sophisticated and expensive. The cost of fixtures is doubled, since the client and manufacturing sectors require separate, independent inspection fixtures. To eliminate the need for costly and time-consuming inspection fixtures, fixtureless inspection methods for non-rigid parts based on CAI methods have been developed. These methods aim at distinguishing the flexible deformations of parts in a free-state from defects. Fixtureless inspection methods are required to be automatic, reliable, reasonably accurate and repeatable for non-rigid parts with complex shapes. The scan model, which is acquired as point clouds, represents the shape of a part in a free-state. Afterward, the inspection of defects is performed by comparing the scan and CAD models, but these models are presented in different coordinate systems. Indeed, the scan model is presented in the measurement coordinate system whereas the CAD model is introduced in the design coordinate system. To accomplish the inspection and facilitate an accurate comparison between the models, a registration process is required to align the scan and CAD models in a common coordinate system. The registration includes a virtual compensation for the flexible deformation of the parts in a free-state. Then, the inspection is implemented as a geometrical comparison between the CAD and scan models. This thesis focuses on developing automatic and accurate fixtureless CAI methods for non-rigid parts, along with assessing the robustness of the methods. To this end, an automatic fixtureless CAI method for non-rigid parts based on filtering registration points is developed to identify and quantify defects more accurately on the surface of scan models. In our automatic fixtureless CAI method, the flexible deformation of parts in a free-state is compensated for by applying Finite Element Non-rigid Registration (FENR) to deform the CAD model towards the scan mesh. The displacement boundary conditions (BCs) for FENR are determined based on the corresponding sample points, which are generated by the Generalized Numerical Inspection Fixture (GNIF) method on the CAD and scan models. These corresponding sample points are evenly distributed on the surface of the models. The comparison between this deformed CAD model and the scan mesh is intended to evaluate and quantify the defects on the scan model. However, some sample points can be located close to or on defect areas, which results in an inaccurate estimation of defects. These sample points are automatically filtered out in our CAI method based on curvature and von Mises stress criteria. Once they are filtered out, the remaining sample points are used in a new FENR, which allows an accurate evaluation of defects with respect to the tolerances. The performance and robustness of all CAI methods generally need to be assessed with respect to actual measurements. This thesis also introduces a new validation metric for the Verification and Validation (V&V) of CAI methods based on ASME recommendations. The developed V&V approach uses a nonparametric statistical hypothesis test, namely the Kolmogorov-Smirnov (K-S) test. In addition to validating the defect size, the K-S test allows a deeper evaluation based on the distance distribution of defects. The robustness of the CAI method with respect to uncertainties such as scanning noise is quantitatively assessed using the developed validation metric. Due to the compliance of non-rigid parts, a geometrically deviated part can still be assembled in the assembly-state. This thesis also presents a fixtureless CAI method for geometrically deviated (i.e., presenting defects) non-rigid parts to evaluate the feasibility of mounting these parts in the functional assembly-state. Our developed Virtual Mounting Assembly-State Inspection (VMASI) method performs a non-rigid registration to virtually mount the scan mesh in the assembly-state. To this end, the point cloud of the scan model representing the part in a free-state is deformed to meet the assembly constraints, such as fixation positions (e.g. mounting holes). In some cases, the functional shape of a deviated part can be retrieved by applying assembly loads, limited to permissible loads, on the surface of the part. The required assembly loads are estimated through our developed Restraining Pressures Optimization (RPO), which aims at displacing the deviated scan model to bring the mounting holes within tolerance. Therefore, the deviated scan model can be assembled if the mounting holes on the predicted functional shape of the scan model attain the tolerance range. Different industrial parts are used to evaluate the performance of the methods developed in this thesis. The automatic inspection identifies different types of small (local) and big (global) defects on the parts and results in an accurate evaluation of defects. The robustness of this inspection method is also validated with respect to different levels of scanning noise, with promising results. Meanwhile, the VMASI method is performed on various parts with different types of defects, showing that in some cases the functional shape of deviated parts can be retrieved by mounting them on a virtual fixture in the assembly-state under restraining loads.

  6. Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data

    NASA Astrophysics Data System (ADS)

    Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.

    2015-06-01

    In the analysis of dynamic PET data, compartmental kinetic analysis methods require accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard among the methods used to measure the AIF, it is usually not preferred as it is an invasive method. An alternative is the simultaneous estimation method (SIME), where physiological parameters and the AIF are estimated together, using information from different anatomical regions. Due to the large number of parameters to estimate in its optimisation, SIME is a computationally complex method and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously obtained DSC-MRI scan. With the assumption that the true value of one of the six parameters of the PET-AIF model can be derived from an MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.
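
    To make concrete why an accurate AIF matters, the sketch below applies Patlak graphical analysis, a standard kinetic method (not SIME itself), to a synthetic tissue curve with a known AIF; all curves and parameter values are invented for illustration.

    ```python
    # Patlak graphical analysis: recover the influx constant Ki from a tissue
    # curve once the AIF Cp(t) is known. Synthetic data; SIME itself instead
    # estimates the AIF jointly with the kinetic parameters.
    import numpy as np

    t = np.linspace(0.1, 60.0, 240)                 # minutes
    cp = 10.0 * t * np.exp(-t / 4.0) + 0.5          # invented arterial input function
    Ki_true, V0 = 0.05, 0.3
    cum = np.cumsum(cp) * (t[1] - t[0])             # integral of Cp from 0 to t
    ct = Ki_true * cum + V0 * cp                    # irreversible-uptake tissue model

    x, y = cum / cp, ct / cp                        # Patlak coordinates
    late = t > 20                                   # fit only the linear late portion
    Ki_est, _ = np.polyfit(x[late], y[late], 1)
    print(f"estimated Ki = {Ki_est:.4f} (true {Ki_true})")
    ```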

  7. Accuracy of available methods for quantifying the heat power generation of nanoparticles for magnetic hyperthermia.

    PubMed

    Andreu, Irene; Natividad, Eva

    2013-12-01

    In magnetic hyperthermia, characterising the specific functionality of magnetic nanoparticle arrangements is essential to plan therapies by simulating maximum achievable temperatures. This functionality, i.e. the heat power released upon application of an alternating magnetic field, is quantified by means of the specific absorption rate (SAR), also referred to as specific loss power (SLP). Many research groups are currently involved in the SAR/SLP determination of newly synthesised materials by several methods, either magnetic or calorimetric, some of which are affected by important and unquantifiable uncertainties that may turn measurements into rough estimates. This paper reviews all these methods, discussing in particular the sources of uncertainty as well as their possible minimisation. In general, magnetic methods, although accurate, do not operate under the conditions of magnetic hyperthermia. Calorimetric methods do, but the easiest to implement, the initial-slope method in isoperibol conditions, suffers from inaccuracies arising from the mismatch between thermal models, experimental set-ups and measuring conditions, while the most accurate, the pulse-heating method in adiabatic conditions, requires more complex set-ups.
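
    For orientation, here is a minimal sketch of the initial-slope estimate discussed above: the SAR follows from the early rate of temperature rise, assuming the heat capacity is dominated by the carrier liquid. The heating curve, masses and fit window are invented; the sensitivity to the fit window is precisely one of the inaccuracies the review points out.

    ```python
    # Initial-slope SAR estimate under isoperibol conditions (minimal sketch).
    # Assumes the heat capacity is dominated by the carrier liquid (water here).
    import numpy as np

    def sar_initial_slope(time_s, temp_C, m_sample_g, m_np_g, c_p=4.186, n_fit=10):
        """SAR in W per gram of magnetic material from the first n_fit points."""
        slope = np.polyfit(time_s[:n_fit], temp_C[:n_fit], 1)[0]   # dT/dt in K/s
        return c_p * m_sample_g * slope / m_np_g                   # J/(g K) * g * (K/s) / g

    t = np.linspace(0, 300, 301)                    # invented heating curve that
    T = 25 + 8.0 * (1 - np.exp(-t / 120.0))         # saturates as losses grow
    print(f"SAR ~ {sar_initial_slope(t, T, m_sample_g=1.0, m_np_g=0.01):.1f} W/g")
    ```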

  8. Continuing education: online monitoring of haemodialysis dose.

    PubMed

    Vartia, Aarne

    2018-01-25

    Kt/V urea reflects the efficacy of haemodialysis scaled to patient size (urea distribution volume). The guidelines recommend monthly Kt/V measurements based on blood samples. Modern haemodialysis machines are equipped with accessories monitoring the dose online at every session, without extra costs, blood samples or computers. To describe the principles, devices, benefits and shortcomings of online monitoring of haemodialysis dose. A critical literature overview and discussion. UV absorbance methods measure Kt/V; ionic dialysance methods measure Kt (the product of clearance and treatment time, i.e. the cleared volume without scaling). Both are easy and useful methods, but comparison is difficult due to problems in scaling the dialysis dose to the patient's size. The best dose estimation method is the one which predicts quality of life and survival most accurately. There is some evidence on the predictive value of ionic dialysance Kt, but more documentation is required on the UV method. Online monitoring is a useful tool in everyday quality assurance, but blood samples are still required for more accurate kinetic modelling. After reading this article the reader should be able to: Understand the elements of the Kt/V equation for dialysis dose. Compare and contrast different methods of measurement of dialysis dose. Reflect on the importance of adequate dialysis dose for patient survival and life quality. © 2018 European Dialysis and Transplant Nurses Association/European Renal Care Association.
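
    For reference, the blood-sample-based dose that online monitors are compared against is commonly computed with the second-generation Daugirdas single-pool formula. A minimal sketch with made-up session values follows (the formula is standard; the code is not from the article).

    ```python
    # Second-generation Daugirdas single-pool Kt/V from pre/post blood samples.
    import math

    def sp_ktv(urea_pre, urea_post, hours, uf_litres, weight_kg):
        """spKt/V = -ln(R - 0.008 t) + (4 - 3.5 R) UF / W, R = post/pre ratio."""
        r = urea_post / urea_pre
        return -math.log(r - 0.008 * hours) + (4 - 3.5 * r) * uf_litres / weight_kg

    # invented session: 70% urea reduction over 4 h, 2 L ultrafiltered, 70 kg patient
    print(round(sp_ktv(20.0, 6.0, 4.0, 2.0, 70.0), 2))   # ~1.4
    ```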

  9. Efficient and robust pupil size and blink estimation from near-field video sequences for human-machine interaction.

    PubMed

    Chen, Siyuan; Epps, Julien

    2014-12-01

    Monitoring pupil and blink dynamics has applications in cognitive load measurement during human-machine interaction. However, accurate, efficient, and robust pupil size and blink estimation pose significant challenges to the efficacy of real-time applications due to the variability of eye images; hence, methods to date have required manual intervention for fine tuning of parameters. In this paper, a novel self-tuning threshold method, which is applicable to any infrared-illuminated eye images without a tuning parameter, is proposed for segmenting the pupil from the background images recorded by a low-cost webcam placed near the eye. A convex hull and a dual-ellipse fitting method are also proposed to select pupil boundary points and to detect the eyelid occlusion state. Experimental results on a realistic video dataset show that the measurement accuracy using the proposed methods is higher than that of widely used manually tuned parameter methods or fixed parameter methods. Importantly, it demonstrates convenience and robustness for an accurate and fast estimate of eye activity in the presence of variations due to different users, task types, load, and environments. Cognitive load measurement in human-machine interaction can benefit from this computationally efficient implementation without requiring a threshold calibration beforehand. Thus, one can envisage a mini IR camera embedded in a lightweight glasses frame, like Google Glass, for convenient applications of real-time adaptive aiding and task management in the future.

  10. Linear segmentation algorithm for detecting layer boundary with lidar.

    PubMed

    Mao, Feiyue; Gong, Wei; Logan, Timothy

    2013-11-04

    The automatic detection of aerosol- and cloud-layer boundaries (base and top) is important in atmospheric lidar data processing, because the boundary information is not only useful for environment and climate studies, but can also be used as input for further data processing. Previous methods have demonstrated limitations in defining the base and top, in window-size setting, and have neglected the in-layer attenuation. To overcome these limitations, we present a new layer detection scheme for up-looking lidars based on linear segmentation with a reasonable threshold setting, boundary selecting, and false-positive removal strategies. Preliminary results from both real and simulated data show that this algorithm can not only detect the layer base as accurately as the simple multi-scale method, but can also detect the layer top more accurately. Our algorithm can be directly applied to uncalibrated data without requiring any additional measurements or window size selections.

  11. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to deal with the low-efficiency problem of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique to achieve the coarse frequency estimation (locating the peak of the FFT amplitude spectrum) is more efficient than conventional searching methods. Thus, the proposed estimation algorithm requires fewer hardware and software resources and can achieve even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
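
    To illustrate the coarse-plus-fine structure (though with a plain FFT peak search rather than the authors' modified zero-crossing technique), the sketch below locates the FFT amplitude peak as the coarse step and refines it by parabolic interpolation of the log spectrum; the signal and sampling parameters are invented.

    ```python
    # Generic coarse+fine frequency estimator: FFT peak (coarse) refined by
    # parabolic interpolation of the log spectrum (fine). Invented signal.
    import numpy as np

    def estimate_freq(x, fs):
        X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
        k = int(np.argmax(X[1:-1])) + 1             # coarse: peak bin
        a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
        delta = 0.5 * (a - c) / (a - 2 * b + c)     # fine: sub-bin offset
        return (k + delta) * fs / len(x)

    fs, f0 = 1000.0, 123.456
    t = np.arange(4096) / fs
    x = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 3 * f0 * t)  # harmonic
    print(f"{estimate_freq(x, fs):.3f} Hz")         # close to 123.456
    ```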

  12. Accurate Modeling Method for Cu Interconnect

    NASA Astrophysics Data System (ADS)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section, in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for the extraction of the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  13. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
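
    As a baseline for comparison, parameter sensitivities of an optimum can always be estimated by brute force: re-solve the optimization problem at perturbed parameter values and central-difference the solutions. The sketch below does this for an invented toy problem; it is the kind of reference calculation such estimators are checked against, not the RQP-based method of the paper.

    ```python
    # Brute-force parameter sensitivity of an optimum: re-optimize at perturbed
    # parameter values and central-difference the solutions. Invented problem.
    import numpy as np
    from scipy.optimize import minimize

    def x_star(p):
        """Minimizer of f(x; p) = (x0 - p)^2 + (x1 - 2p)^2 + x0*x1."""
        f = lambda x: (x[0] - p) ** 2 + (x[1] - 2 * p) ** 2 + x[0] * x[1]
        return minimize(f, np.zeros(2), tol=1e-12).x

    p0, h = 1.0, 1e-4
    dx_dp = (x_star(p0 + h) - x_star(p0 - h)) / (2 * h)   # sensitivity dx*/dp
    print(dx_dp)                                          # analytically (0, 2)
    ```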

  14. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  15. New high order schemes in BATS-R-US

    NASA Astrophysics Data System (ADS)

    Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.

    2013-12-01

    The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, combined with a second order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th order accurate Monotonicity Preserving scheme (MP5, Suresh and Huynh, 1997), and the 5th order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second order TVD scheme at resolution changes. For spherical grids the new schemes are only second order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy, as well as challenging space physics applications. The high order schemes are less robust than the TVD scheme, and some tricks and effort are required to make the code work. When the high order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on twice finer grids. For three-dimensional time-dependent simulations this means that the high order scheme is almost 10 times faster and requires 8 times less storage than the second order method.

  16. Detection of Orbital Debris Collision Risks for the Automated Transfer Vehicle

    NASA Technical Reports Server (NTRS)

    Peret, L.; Legendre, P.; Delavault, S.; Martin, T.

    2007-01-01

    In this paper, we present a general collision risk assessment method, which has been applied through numerical simulations to the Automated Transfer Vehicle (ATV) case. During the ATV ascent towards the International Space Station, close approaches between the ATV and objects of the USSTRATCOM catalog will be monitored through collision risk assessment. Usually, collision risk assessment relies on an exclusion volume or a probability threshold method. Probability methods are more effective than exclusion volumes but require accurate covariance data. In this work, we propose to use a criterion defined by an adaptive exclusion area. This criterion does not require any probability calculation but is more effective than exclusion volume methods, as demonstrated by our numerical experiments. The results of these studies, when confirmed and finalized, will be used for ATV operations.

  17. Solving the shrinkage-induced PDMS alignment registration issue in multilayer soft lithography

    NASA Astrophysics Data System (ADS)

    Moraes, Christopher; Sun, Yu; Simmons, Craig A.

    2009-06-01

    Shrinkage of polydimethylsiloxane (PDMS) complicates alignment registration between layers during multilayer soft lithography fabrication. This often hinders the development of large-scale microfabricated arrayed devices. Here we report a rapid method to construct large-area, multilayered devices with stringent alignment requirements. This technique, which exploits a previously unrecognized aspect of sandwich mold fabrication, improves device yield, enables highly accurate alignment over large areas of multilayered devices and does not require strict regulation of fabrication conditions or extensive calibration processes. To demonstrate this technique, a microfabricated Braille display was developed and characterized. High device yield and accurate alignment within 15 µm were achieved over three layers for an array of 108 Braille units spread over a 6.5 cm2 area, demonstrating the fabrication of well-aligned devices with greater ease and efficiency than previously possible.

  18. Potential energy surface interpolation with neural networks for instanton rate calculations

    NASA Astrophysics Data System (ADS)

    Cooper, April M.; Hallmen, Philipp P.; Kästner, Johannes

    2018-03-01

    Artificial neural networks are used to fit a potential energy surface (PES). We demonstrate the benefits of using not only energies but also their first and second derivatives as training data for the neural network. This ensures smooth and accurate Hessian surfaces, which are required for rate constant calculations using instanton theory. Our aim was a local, accurate fit rather than a global PES because instanton theory requires information on the potential only in the close vicinity of the main tunneling path. Elongations along vibrational normal modes at the transition state are used as coordinates for the neural network. The method is applied to the hydrogen abstraction reaction from methanol, calculated on a coupled-cluster level of theory. The reaction is essential in astrochemistry to explain the deuteration of methanol in the interstellar medium.
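
    The benefit of training on derivatives as well as energies can be seen even in a toy least-squares setting. Below, a 1-D polynomial surrogate stands in for the neural network and is fitted jointly to synthetic energies and gradients; the functional form and data are invented for illustration.

    ```python
    # Joint least-squares fit of a 1-D surrogate "PES" to energies AND gradients.
    # A polynomial stands in for the neural network; data are invented.
    import numpy as np

    true_E = lambda x: 0.5 * x**2 - 0.1 * x**3
    true_g = lambda x: x - 0.3 * x**2               # analytic derivative of true_E

    xs, deg = np.linspace(-1, 1, 5), 4
    A_E = np.vander(xs, deg + 1)                    # energy rows: x^4 ... x^0
    A_g = np.zeros_like(A_E)                        # gradient rows: d/dx of each basis
    for j in range(deg):
        A_g[:, j] = (deg - j) * xs ** (deg - j - 1)
    A = np.vstack([A_E, A_g])
    b = np.concatenate([true_E(xs), true_g(xs)])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)    # energies and gradients together
    print(np.polyval(coef, 0.5), true_E(0.5))       # surrogate vs ground truth
    ```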

  19. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.

  20. Combination volumetric and gravimetric sorption instrument for high accuracy measurements of methane adsorption

    NASA Astrophysics Data System (ADS)

    Burress, Jacob; Bethea, Donald; Troub, Brandon

    2017-05-01

    The accurate measurement of adsorbed gas up to high pressures (˜100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ˜0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.
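
    For context, the accumulating errors of the Sievert-type method come from its dose-by-dose mole balance: each adsorbed increment is inferred as a small difference between large gas-phase inventories, so per-dose errors add up. A minimal ideal-gas sketch follows; real instruments apply compressibility corrections Z(p,T), and the volumes and pressures here are invented.

    ```python
    # Sievert-type dose-by-dose mole balance, ideal gas (real rigs use Z(p,T)).
    # The adsorbed increment is a difference of inventories, so errors accumulate.
    R = 8.314  # J/(mol K)

    def adsorbed_after_dose(n_ads, p_man, p_prev, p_eq, v_man, v_cell, T):
        """Update adsorbed moles after one dose from the manifold into the cell."""
        n_before = (p_man * v_man + p_prev * v_cell) / (R * T)
        n_after = p_eq * (v_man + v_cell) / (R * T)
        return n_ads + (n_before - n_after)         # the difference sits on the sample

    n, p_prev = 0.0, 0.0
    for p_man, p_eq in [(5e5, 2e5), (8e5, 4e5), (12e5, 7e5)]:   # invented doses (Pa)
        n = adsorbed_after_dose(n, p_man, p_prev, p_eq,
                                v_man=25e-6, v_cell=10e-6, T=298.0)
        p_prev = p_eq
    print(f"cumulative adsorbed amount: {n * 1e3:.2f} mmol")
    ```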

  1. Combination volumetric and gravimetric sorption instrument for high accuracy measurements of methane adsorption.

    PubMed

    Burress, Jacob; Bethea, Donald; Troub, Brandon

    2017-05-01

    The accurate measurement of adsorbed gas up to high pressures (∼100 bars) is critical for the development of new materials for adsorbed gas storage. The typical Sievert-type volumetric method introduces accumulating errors that can become large at maximum pressures. Alternatively, gravimetric methods employing microbalances require careful buoyancy corrections. In this paper, we present a combination gravimetric and volumetric system for methane sorption measurements on samples between ∼0.5 and 1 g. The gravimetric method described requires no buoyancy corrections. The tandem use of the gravimetric method allows for a check on the highest uncertainty volumetric measurements. The sources and proper calculation of uncertainties are discussed. Results from methane measurements on activated carbon MSC-30 and metal-organic framework HKUST-1 are compared across methods and within the literature.

  2. Ion mobility spectrometry: A personal view of its development at UCSB

    DTIC Science & Technology

    2014-09-15

    molecules. As we progressed we realized that new, more accurate algorithms were needed to augment our early projection approximation (PA) for determining...required. The goal was to maintain some of the speed of the projection approximation and retain the accuracy of the trajectory method. Christian...Bleiholder, while a postdoc in my group, did just that by development of the projection superposition approximation (PSA) [31–35]. This new method is 100

  3. Critical Reading Skills and Translation Ability of Thai EFL Students: Pragmatic, Syntactic, and Semantic Aspects

    ERIC Educational Resources Information Center

    Sriwantaneeyakul, Suttawan

    2018-01-01

    Translation ability requires many language skills to produce an accurate and complete text; however, one important skill, critical reading in the research, has been neglected. This research, therefore, employed the explanatory sequential mixed method to investigate the differences in Thai-English translation ability between students with a high…

  4. Sensitivity and accuracy of DNA based methods used to describe aquatic communities for early detection of invasive fish species

    EPA Science Inventory

    For biomonitoring efforts aimed at early detection of aquatic invasive species (AIS), the ability to detect rare individuals is key and requires accurate species level identification to maintain a low occurrence probability of non-detection errors (failure to detect a present spe...

  5. Projecting Enrollment in Rural Schools: A Study of Three Vermont School Districts

    ERIC Educational Resources Information Center

    Grip, Richard S.

    2004-01-01

    Large numbers of rural districts have experienced sharp declines in enrollment, unlike their suburban counterparts. Accurate enrollment projections are required, whether a district needs to build new schools or consolidate existing ones. For school districts having more than 600 students, a quantitative method such as the Cohort-Survival Ratio…

  6. Trends in OMR Techniques and Equipment.

    ERIC Educational Resources Information Center

    Ward, Obie; Poulos, Cynthia

    Various aspects of the Optical Mark Reader (OMR) used by the Atlanta Public School System are discussed. First considered are the required features of the OMR scanner. Following this, methods of motivating users to record data accurately are described. Finally, a description of how forms are designed for the convenience of users is provided. (PB)

  7. Shining a Light on Awareness: A Review of Functional Near-Infrared Spectroscopy for Prolonged Disorders of Consciousness

    PubMed Central

    Rupawala, Mohammed; Dehghani, Hamid; Lucas, Samuel J. E.; Tino, Peter; Cruse, Damian

    2018-01-01

    Qualitative clinical assessments of the recovery of awareness after severe brain injury require an assessor to differentiate purposeful behavior from spontaneous behavior. As many such behaviors are minimal and inconsistent, behavioral assessments are susceptible to diagnostic errors. Advanced neuroimaging tools can bypass behavioral responsiveness and reveal evidence of covert awareness and cognition within the brains of some patients, thus providing a means for more accurate diagnoses, more accurate prognoses, and, in some instances, facilitated communication. The majority of reports to date have employed the neuroimaging methods of functional magnetic resonance imaging, positron emission tomography, and electroencephalography (EEG). However, each neuroimaging method has its own advantages and disadvantages (e.g., signal resolution, accessibility, etc.). Here, we describe a burgeoning technique of non-invasive optical neuroimaging—functional near-infrared spectroscopy (fNIRS)—and review its potential to address the clinical challenges of prolonged disorders of consciousness. We also outline the potential for simultaneous EEG to complement the fNIRS signal and suggest the future directions of research that are required in order to realize its clinical potential. PMID:29872420

  8. Validation of SCIAMACHY and TOMS UV Radiances Using Ground and Space Observations

    NASA Technical Reports Server (NTRS)

    Hilsenrath, E.; Bhartia, P. K.; Bojkov, B. R.; Kowalewski, M.; Labow, G.; Ahmad, Z.

    2004-01-01

    Verification of stratospheric ozone recovery remains a high priority for environmental research and policy definition. Models predict an ozone recovery at a much lower rate than the measured depletion rate observed to date. Therefore, improved precision of the satellite and ground ozone observing systems is required over the long term to verify the recovery. We show that validation of satellite radiances from space and from the ground can be a very effective means of correcting long-term drifts of backscatter-type satellite measurements and can be used to cross-calibrate all BUV instruments in orbit (TOMS, SBUV/2, GOME, SCIAMACHY, OMI, GOME-2, OMPS). This method bypasses the retrieval algorithms, for both satellite and ground-based measurements, that are normally used to validate and correct the satellite data. Radiance comparisons employ forward models and are inherently more accurate than inverse (retrieval) algorithms. This approach, however, requires well-calibrated instruments and an accurate radiative transfer model that accounts for aerosols. TOMS and SCIAMACHY calibrations are checked to demonstrate this method and its applicability for long-term trends.

  9. FPA Depot - Web Application

    NASA Technical Reports Server (NTRS)

    Avila, Edwin M. Martinez; Muniz, Ricardo; Szafran, Jamie; Dalton, Adam

    2011-01-01

    Lines of code (LOC) analysis is one of the methods used to measure programmer productivity and estimate schedules of programming projects. The Launch Control System (LCS) had previously used this method to estimate the amount of work and to plan development efforts. The disadvantage of using LOC as a measure of effort is that coding accounts for only 30% to 35% of the total effort of software projects [8]. Because of this disadvantage, Jamie Szafran of the System Software Branch of Control And Data Systems (NE-C3) at Kennedy Space Center developed a web application called Function Point Analysis (FPA) Depot, which uses function points instead of LOC for a better estimation of the hours needed to develop each piece of software. The objective of this web application is that the LCS software architecture team can use the data to more accurately estimate the effort required to implement customer requirements. This paper describes the evolution of the domain model used for function point analysis as project managers continually strive to generate more accurate estimates.

  10. Combination of ray-tracing and the method of moments for electromagnetic radiation analysis using reduced meshes

    NASA Astrophysics Data System (ADS)

    Delgado, Carlos; Cátedra, Manuel Felipe

    2018-05-01

    This work presents a technique that allows a very noticeable relaxation of the computational requirements for full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points with significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results, requiring a fraction of the resources that the conventional analysis would utilize.

  11. Quantification of optical absorption coefficient from acoustic spectra in the optical diffusive regime using photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Guo, Zijian; Favazza, Christopher; Wang, Lihong V.

    2012-02-01

    Photoacoustic (PA) tomography (PAT) can image optical absorption contrast with ultrasonic spatial resolution in the optical diffusive regime. Multi-wavelength PAT can noninvasively monitor hemoglobin oxygen saturation (sO2) with high sensitivity and fine spatial resolution. However, accurate quantification in PAT requires knowledge of the optical fluence distribution, acoustic wave attenuation, and detection system bandwidth. We propose a method to circumvent this requirement using acoustic spectra of PA signals acquired at two optical wavelengths. With the acoustic spectral method, the absorption coefficients of an oxygenated bovine blood phantom at 560 and 575 nm were quantified with errors of less than 5%.

  12. Accurate, safe, and rapid method of intraoperative tumor identification for totally laparoscopic distal gastrectomy: injection of mixed fluid of sodium hyaluronate and patent blue.

    PubMed

    Nakagawa, Masatoshi; Ehara, Kazuhisa; Ueno, Masaki; Tanaka, Tsuyoshi; Kaida, Sachiko; Udagawa, Harushi

    2014-04-01

    In totally laparoscopic distal gastrectomy, determining the resection line with safe proximal margins is often difficult, particularly for tumors located in a relatively upper area. This is because, in contrast to open surgery, identifying lesions by palpating or opening the stomach is essentially impossible. This study introduces a useful method of tumor identification that is accurate, safe, and rapid. On the operation day, after inducing general anesthesia, a mixture of sodium hyaluronate and patent blue is injected into the submucosal layer of the proximal margin. When resecting stomach, all marker spots should be on the resected side. In all cases, the proximal margin is examined histologically by using frozen sections during the operation. From October 2009 to September 2011, a prospective study that evaluated this method was performed. A total of 34 patients who underwent totally laparoscopic distal gastrectomy were enrolled in this study. Approximately 5 min was required to complete the procedure. Proximal margins were negative in all cases, and the mean ± standard deviation length of the proximal margin was 23.5 ± 12.8 mm. No side effects, such as allergy, were encountered. As a method of tumor identification for totally laparoscopic distal gastrectomy, this procedure appears accurate, safe, and rapid.

  13. Improved Modeling of Finite-Rate Turbulent Combustion Processes in Research Combustors

    NASA Technical Reports Server (NTRS)

    VanOverbeke, Thomas J.

    1998-01-01

    The objective of this thesis is to further develop and test a stochastic model of turbulent combustion in recirculating flows. There is a requirement to increase the accuracy of multi-dimensional combustion predictions. As turbulence affects reaction rates, this interaction must be more accurately evaluated. In this work, a more physically correct way of handling the interaction of turbulence with combustion is further developed and tested. As turbulence involves randomness, stochastic modeling is used. Averaged values such as temperature and species concentration are found by integrating the probability density function (pdf) over the range of the scalar. The model in this work does not assume the pdf type, but solves for the evolution of the pdf using the Monte Carlo solution technique. The model is further developed by including a more robust reaction solver, by using accurate thermodynamics, and by more accurate transport elements. The stochastic method is used with the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE), which solves for velocity, pressure, turbulent kinetic energy, and dissipation; the pdf solver solves for temperature and species concentration. Thus, the method is partially familiar to combustor engineers. The method is compared to benchmark experimental data and baseline calculations. The baseline method was tested on isothermal flows, evaporating sprays, and combusting sprays. Pdf and baseline predictions were performed for three diffusion flames and one premixed flame. The pdf method predicted lower combustion rates than the baseline method, in agreement with the data, except for the premixed flame, for which the baseline and stochastic predictions bounded the experimental data. The use of a continuous mixing model or a relax-to-mean mixing model had little effect on the prediction of average temperature. Two grids were used in a hydrogen diffusion flame simulation; grid density did not affect the predictions except for peak temperature and tangential velocity. The hybrid pdf method did take longer and required more memory, but it has a theoretical basis for extension to many reaction steps, which cannot be said of current turbulent combustion models.

  14. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating the reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure times and other project data to provide earlier and more accurate estimates of system reliability.

  15. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    NASA Astrophysics Data System (ADS)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

    Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, and has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress and a few practical issues, providing accurate palm vein readings has remained an unsolved issue in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features and enhance the discrimination. The results, based on two public databases (the PolyU Hyperspectral Palmprint database and the PolyU Multispectral Palmprint database), show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] for the hyperspectral database, and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) for the multispectral database.

  16. Computational considerations for the simulation of shock-induced sound

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Carpenter, Mark H.

    1996-01-01

    The numerical study of aeroacoustic problems places stringent demands on the choice of a computational algorithm, because it requires the ability to propagate disturbances of small amplitude and short wavelength. The demands are particularly high when shock waves are involved, because the chosen algorithm must also resolve discontinuities in the solution. The extent to which a high-order-accurate shock-capturing method can be relied upon for aeroacoustics applications that involve the interaction of shocks with other waves has not been previously quantified. Such a study is initiated in this work. A fourth-order-accurate essentially nonoscillatory (ENO) method is used to investigate the solutions of inviscid, compressible flows with shocks in a quasi-one-dimensional nozzle flow. The design order of accuracy is achieved in the smooth regions of a steady-state test case. However, in an unsteady test case, only first-order results are obtained downstream of a sound-shock interaction. The difficulty in obtaining a globally high-order-accurate solution in such a case with a shock-capturing method is demonstrated through the study of a simplified, linear model problem. Some of the difficult issues and ramifications for aeroacoustics simulations of flows with shocks that are raised by these results are discussed.

  17. Molecular Spectroscopy by Ab Initio Methods

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Partridge, Harry; Arnold, James O. (Technical Monitor)

    1994-01-01

    Due to recent advances in methods and computers, the accuracy of ab initio calculations has reached a point where these methods can be used to provide accurate spectroscopic constants for small molecules; this will be illustrated with several examples. We will show how ab initio calculations were used to identify the Hermann infrared system in N2 and two band systems in CO. The identification of all three of these band systems relied on very accurate calculations of quintet states. The analysis of the infrared spectra of cool stars requires knowledge of the intensity of vibrational transitions in SiO for high nu and J levels. While experiment can supply very accurate dipole moments for nu = 0 to 3, this is insufficient to construct a global dipole moment function. We show how theory, combined with experiment, can be used to generate the line intensities up to nu = 40 and J = 250. The spectroscopy of transition-metal-containing systems is very difficult for both theory and experiment. We will discuss the identification of the ground state of Ti2 and the spectroscopy of AlCu as examples of how theory can contribute to the understanding of these complex systems.

  18. A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization

    PubMed Central

    Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao

    2016-01-01

    The Scalar Triangulation and Ranging (STAR) method, which is based upon the unique properties of the magnetic gradient contraction, is a ferromagnetic target localization method with high real-time performance. Only one measurement point is required in the STAR method, and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by asphericity errors, and an inaccurate position estimate leads to larger errors in the estimation of the magnetic moment. To improve the localization accuracy, a modified STAR method is proposed, in which the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate, which meets the requirement of high real-time localization. Simulations and field experiments have been done to evaluate the performance of the proposed method. The results indicate that target parameters estimated by the modified STAR method are more accurate than with the traditional STAR method. PMID:27999322

  19. Evaluation of AMOEBA: a spectral-spatial classification method

    USGS Publications Warehouse

    Jenson, Susan K.; Loveland, Thomas R.; Bryant, J.

    1982-01-01

    Multispectral remotely sensed images have been treated as arbitrary multivariate spectral data for purposes of clustering and classifying. However, the spatial properties of image data can also be exploited. AMOEBA is a clustering and classification method that is based on a spatially derived model for image data. In an evaluation test, Landsat data were classified with both AMOEBA and a widely used spectral classifier. The test showed that irrigated crop types can be classified as accurately with the AMOEBA method as with the generally used spectral method ISOCLS; the AMOEBA method, however, requires less computer time.

  20. A parametric method for determining the number of signals in narrow-band direction finding

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Fuhrmann, Daniel R.

    1991-08-01

    A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
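
    For reference, the eigenvalue-based criterion of Wax and Kailath that serves as the nonparametric baseline can be sketched in a few lines; the eigenvalues below are invented to mimic two sources above a noise floor.

    ```python
    # Wax & Kailath MDL on correlation-matrix eigenvalues: the nonparametric
    # baseline the proposed parametric criterion is compared against.
    import numpy as np

    def mdl_num_signals(eigvals, n_snapshots):
        """Signal count from eigenvalues sorted in descending order."""
        p, scores = len(eigvals), []
        for k in range(p):
            tail = eigvals[k:]                       # candidate noise eigenvalues
            gm = np.exp(np.mean(np.log(tail)))       # geometric mean
            am = np.mean(tail)                       # arithmetic mean
            ll = -n_snapshots * (p - k) * np.log(gm / am)
            penalty = 0.5 * k * (2 * p - k) * np.log(n_snapshots)
            scores.append(ll + penalty)
        return int(np.argmin(scores))

    eigvals = np.array([9.0, 4.0, 1.1, 1.0, 0.95, 0.9])   # invented: 2 sources, noise ~1
    print(mdl_num_signals(eigvals, n_snapshots=200))       # -> 2
    ```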

  1. Spatial recurrence analysis: A sensitive and fast detection tool in digital mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prado, T. L.; Galuzio, P. P.; Lopes, S. R.

    Efficient diagnosis of breast cancer requires fast digital mammographic image processing. Many breast lesions, both benign and malignant, are barely visible to the untrained eye and require accurate and reliable methods of image processing. We propose a new method of digital mammographic image analysis that meets both needs. It uses the concept of spatial recurrence as the basis of a spatial recurrence quantification analysis, which is the spatial extension of the well-known time recurrence analysis. The recurrence-based quantifiers are able to evidence breast lesions as well as the best standard image processing methods available, but with better control over the spurious fragments in the image.
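
    A minimal sketch of the idea, assuming patch-wise intensity vectors as the spatial states: build a recurrence matrix from pairwise patch distances and summarize it with the recurrence rate, the simplest recurrence quantifier. This is an illustration, not the authors' pipeline.

    ```python
    # Spatial recurrence sketch: recurrence matrix over patch intensity vectors,
    # summarized by the recurrence rate. Illustration only.
    import numpy as np

    def spatial_recurrence_rate(img, patch=3, eps=0.1):
        """Fraction of patch pairs whose intensity profiles lie within eps."""
        h, w = img.shape
        vecs = np.array([img[i:i + patch, j:j + patch].ravel()
                         for i in range(h - patch + 1)
                         for j in range(w - patch + 1)])
        d = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=-1)
        return (d < eps).mean()      # drops where atypical structure breaks recurrence

    smooth = np.outer(np.linspace(0, 1, 12), np.ones(12))   # homogeneous "tissue"
    lesion = smooth.copy()
    lesion[4:8, 4:8] += 0.8                                  # localized anomaly
    print(spatial_recurrence_rate(smooth), spatial_recurrence_rate(lesion))
    ```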

  2. First- and Second-Order Sensitivity Analysis of a P-Version Finite Element Equation Via Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    1998-01-01

    Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require a large memory space. This project will apply an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirements, computational efficiency, and accuracy.
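
    Mechanically, automatic differentiation propagates derivative values alongside function values through every operation. ADIFOR achieves this by transforming Fortran source; the toy Python sketch below produces the same values for code built from additions and multiplications by overloading operators on dual numbers.

    ```python
    # Toy forward-mode automatic differentiation with dual numbers. ADIFOR does
    # this by Fortran source transformation; overloading gives the same values.
    class Dual:
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.dot * o.val + self.val * o.dot)   # product rule
        __rmul__ = __mul__

    def f(x):                       # any code path built from + and * differentiates
        return 3 * x * x + 2 * x + 1

    y = f(Dual(2.0, 1.0))           # seed dx/dx = 1
    print(y.val, y.dot)             # 17.0 and f'(2) = 14.0
    ```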

  3. A Multimodal Deep Log-Based User Experience (UX) Platform for UX Evaluation

    PubMed Central

    Ali Khan, Wajahat; Hur, Taeho; Muhammad Bilal, Hafiz Syed; Ul Hassan, Anees; Lee, Sungyoung

    2018-01-01

    The user experience (UX) is an emerging field in user research and design, and the development of UX evaluation methods presents a challenge for both researchers and practitioners. Different UX evaluation methods have been developed to extract accurate UX data. Among UX evaluation methods, the mixed-method approach of triangulation has gained importance. It provides more accurate and precise information about the user while interacting with the product. However, this approach requires skilled UX researchers and developers to integrate multiple devices, synchronize them, analyze the data, and ultimately produce an informed decision. In this paper, a method and system for measuring the overall UX over time using a triangulation method are proposed. The proposed platform incorporates observational and physiological measurements in addition to traditional ones. The platform reduces the subjective bias and validates the user’s perceptions, which are measured by different sensors through objectification of the subjective nature of the user in the UX assessment. The platform additionally offers plug-and-play support for different devices and powerful analytics for obtaining insight on the UX in terms of multiple participants. PMID:29783712

  4. A Multimodal Deep Log-Based User Experience (UX) Platform for UX Evaluation.

    PubMed

    Hussain, Jamil; Khan, Wajahat Ali; Hur, Taeho; Bilal, Hafiz Syed Muhammad; Bang, Jaehun; Hassan, Anees Ul; Afzal, Muhammad; Lee, Sungyoung

    2018-05-18

    The user experience (UX) is an emerging field in user research and design, and the development of UX evaluation methods presents a challenge for both researchers and practitioners. Different UX evaluation methods have been developed to extract accurate UX data. Among UX evaluation methods, the mixed-method approach of triangulation has gained importance. It provides more accurate and precise information about the user while interacting with the product. However, this approach requires skilled UX researchers and developers to integrate multiple devices, synchronize them, analyze the data, and ultimately produce an informed decision. In this paper, a method and system for measuring the overall UX over time using a triangulation method are proposed. The proposed platform incorporates observational and physiological measurements in addition to traditional ones. The platform reduces the subjective bias and validates the user's perceptions, which are measured by different sensors through objectification of the subjective nature of the user in the UX assessment. The platform additionally offers plug-and-play support for different devices and powerful analytics for obtaining insight on the UX in terms of multiple participants.

  5. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    NASA Astrophysics Data System (ADS)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are becoming increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide both high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates the process of convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
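
    The pre-determined training scenarios for such an ANN are typically generated with a cheap forward dispersion model. A minimal Gaussian plume sketch follows; the dispersion-coefficient curves are rough textbook-style assumptions, not the model used in the paper.

    ```python
    # Minimal Gaussian plume forward model of the kind used to generate training
    # scenarios; the sigma curves are rough assumptions, not the paper's model.
    import numpy as np

    def plume(q, u, x, y, z, h, a=0.22, b=0.20):
        """Concentration (g/m^3) for release rate q (g/s) and wind speed u (m/s)."""
        sy = a * x / np.sqrt(1 + 1e-4 * x)          # crude sigma_y(x), sigma_z(x)
        sz = b * x
        return (q / (2 * np.pi * u * sy * sz)
                * np.exp(-y**2 / (2 * sy**2))
                * (np.exp(-(z - h)**2 / (2 * sz**2))
                   + np.exp(-(z + h)**2 / (2 * sz**2))))   # ground reflection term

    # one training sample: concentration field on a downwind grid
    xs, ys = np.meshgrid(np.linspace(50, 2000, 40), np.linspace(-300, 300, 31))
    c = plume(q=5.0, u=3.0, x=xs, y=ys, z=1.5, h=10.0)
    print(c.shape, c.max())
    ```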

  6. Autonomous celestial navigation based on Earth ultraviolet radiance and fast gradient statistic feature extraction

    NASA Astrophysics Data System (ADS)

    Lu, Shan; Zhang, Hanmo

    2016-01-01

    To meet the requirements of autonomous orbit determination, this paper proposes a fast curve-fitting method based on Earth ultraviolet features to obtain an accurate Earth vector direction, in order to achieve high-precision autonomous navigation. Firstly, combining the stable characteristics of Earth ultraviolet radiance with atmospheric radiative transfer modeling software, the paper simulates the Earth ultraviolet radiation model at different times and chooses the proper observation band. Then a fast, improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract Earth ultraviolet limb features accurately. The Earth centroid location in the simulated images is then estimated via least-squares fitting using part of the limb edges. Taking advantage of the estimated Earth vector direction and Earth distance, an Extended Kalman Filter (EKF) is finally applied to realize autonomous navigation. Experimental results indicate that the proposed method can achieve sub-pixel Earth centroid location estimation and greatly enhance autonomous celestial navigation precision.
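
    The least-squares centroid-from-limb step can be illustrated with an algebraic (Kasa) circle fit to edge pixels. The sketch below uses synthetic partial-limb points; it shows the fitting idea only and is not the authors' code.

    ```python
    # Algebraic (Kasa) least-squares circle fit of limb-edge pixels; a stand-in
    # for the centroid estimation step. Edge extraction is assumed already done.
    import numpy as np

    def fit_circle(px, py):
        """Solve x^2 + y^2 = 2 a x + 2 b y + c; centre (a, b), radius sqrt(c+a^2+b^2)."""
        A = np.column_stack([2 * px, 2 * py, np.ones_like(px)])
        (a, b, c), *_ = np.linalg.lstsq(A, px**2 + py**2, rcond=None)
        return a, b, np.sqrt(c + a**2 + b**2)

    theta = np.linspace(0.2, 1.3, 60)               # only part of the limb is visible
    px = 512 + 420 * np.cos(theta) + np.random.normal(0, 0.5, 60)
    py = 480 + 420 * np.sin(theta) + np.random.normal(0, 0.5, 60)
    print(fit_circle(px, py))                       # close to (512, 480, 420)
    ```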

  7. Structure and Stability of Molecular Crystals with Many-Body Dispersion-Inclusive Density Functional Tight Binding.

    PubMed

    Mortazavi, Majid; Brandenburg, Jan Gerit; Maurer, Reinhard J; Tkatchenko, Alexandre

    2018-01-18

    Accurate prediction of structure and stability of molecular crystals is crucial in materials science and requires reliable modeling of long-range dispersion interactions. Semiempirical electronic structure methods are computationally more efficient than their ab initio counterparts, allowing structure sampling with significant speedups. We combine the Tkatchenko-Scheffler van der Waals method (TS) and the many-body dispersion method (MBD) with third-order density functional tight-binding (DFTB3) via a charge population-based method. We find an overall good performance for the X23 benchmark database of molecular crystals, despite an underestimation of crystal volume that can be traced to the DFTB parametrization. We achieve accurate lattice energy predictions with DFT+MBD energetics on top of vdW-inclusive DFTB3 structures, resulting in a speedup of up to 3000 times compared with a full DFT treatment. This suggests that vdW-inclusive DFTB3 can serve as a viable structural prescreening tool in crystal structure prediction.

  8. Calculation of protein-ligand binding affinities.

    PubMed

    Gilson, Michael K; Zhou, Huan-Xiang

    2007-01-01

    Accurate methods of computing the affinity of a small molecule with a protein are needed to speed the discovery of new medications and biological probes. This paper reviews physics-based models of binding, beginning with a summary of the changes in potential energy, solvation energy, and configurational entropy that influence affinity, and a theoretical overview to frame the discussion of specific computational approaches. Important advances are reported in modeling protein-ligand energetics, such as the incorporation of electronic polarization and the use of quantum mechanical methods. Recent calculations suggest that changes in configurational entropy strongly oppose binding and must be included if accurate affinities are to be obtained. The linear interaction energy (LIE) and molecular mechanics Poisson-Boltzmann surface area (MM-PBSA) methods are analyzed, as are free energy pathway methods, which show promise and may be ready for more extensive testing. Ultimately, major improvements in modeling accuracy will likely require advances on multiple fronts, as well as continued validation against experiment.

  9. Separation and quantitation of polyethylene glycols 400 and 3350 from human urine by high-performance liquid chromatography.

    PubMed

    Ryan, C M; Yarmush, M L; Tompkins, R G

    1992-04-01

    Polyethylene glycol 3350 (PEG 3350) is useful as an orally administered probe to measure in vivo intestinal permeability to macromolecules. Previous methods to detect polyethylene glycol (PEG) excreted in the urine have been hampered by inherent inaccuracies associated with liquid-liquid extraction and turbidimetric analysis. For accurate quantitation by previous methods, radioactive labels were required. This paper describes a method to separate and quantitate PEG 3350 and PEG 400 in human urine that is independent of radioactive labels and is accurate in clinical practice. The method uses sized regenerated cellulose membranes and mixed ion-exchange resin for sample preparation and high-performance liquid chromatography with refractive index detection for analysis. The 24-h excretion for normal individuals after an oral dose of 40 g of PEG 3350 and 5 g of PEG 400 was 0.12 +/- 0.04% of the original dose of PEG 3350 and 26.3 +/- 5.1% of the original dose of PEG 400.

  10. A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium

    NASA Astrophysics Data System (ADS)

    Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand

    2014-05-01

    The field of atmospheric radiation has seen the development of more accurate and faster methods to take into account absorption in participating media. Radiative fog appears under clear-sky conditions due to significant cooling during the night, so scattering is left out. Modelling fog formation requires a sufficiently accurate method to compute cooling rates. Thanks to high performance computing, a multi-spectral approach to solving the radiative transfer equation (RTE) is most often used. Nevertheless, coupling three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the following method uses analytical absorption functions fitted by Sasamori (1968) to Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit formula relating emissivity functions and the linear absorption coefficient. In the cooling-to-space approximation, this analytical expression gives very accurate results compared with a correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm: one-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method, and the three-dimensional RTE under the grey-medium assumption is then solved with the DOM. Comparisons with measurements of radiative quantities during the ParisFOG field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature and gas concentrations.
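
    For the isothermal case mentioned above, a grey-medium emissivity of the form eps(L) = 1 - exp(-kappa L) inverts directly to a linear absorption coefficient; a minimal sketch, assuming the caller supplies a Sasamori-type broadband emissivity value and a path length:

      import numpy as np

      def grey_absorption_coefficient(emissivity, path_length):
          """kappa = -ln(1 - eps) / L for a grey medium; requires eps < 1."""
          eps = np.asarray(emissivity, dtype=float)
          return -np.log1p(-eps) / path_length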

  11. Incorporation of Fixed Installation Costs into Optimization of Groundwater Remediation with a New Efficient Surrogate Nonlinear Mixed Integer Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Shoemaker, Christine; Wan, Ying

    2016-04-01

    Optimization of nonlinear water resources management issues which have a mixture of fixed (e.g. construction cost for a well) and variable (e.g. cost per gallon of water pumped) costs has not been well addressed, because prior algorithms for the resulting nonlinear mixed integer problems have required many groundwater simulations (with different configurations of the decision variables), especially when the solution space is multimodal. In particular, heuristic methods like genetic algorithms have often been used in the water resources area, but they require so many groundwater simulations that only small systems have been solved. Hence there is a need for a method that reduces the number of expensive groundwater simulations. A recently published algorithm for nonlinear mixed integer programming using surrogates was shown in this study to greatly reduce the computational effort for obtaining accurate answers to problems involving fixed costs for well construction as well as variable costs for pumping, because of a substantial reduction in the number of groundwater simulations required. Results are presented for a US EPA hazardous waste site. The nonlinear mixed integer surrogate algorithm is general and can be used on other problems arising in hydrology, with open source codes in Matlab and Python ("pySOT" on Bitbucket).

  12. Rapid analysis of scattering from periodic dielectric structures using accelerated Cartesian expansions.

    PubMed

    Baczewski, Andrew D; Miller, Nicholas C; Shanker, Balasubramaniam

    2012-04-01

    The analysis of fields in periodic dielectric structures arises in numerous applications of recent interest, ranging from photonic bandgap structures and plasmonically active nanostructures to metamaterials. To achieve an accurate representation of the fields in these structures using numerical methods, dense spatial discretization is required. This, in turn, affects the cost of analysis, particularly for integral-equation-based methods, for which traditional iterative methods require O(N²) operations, N being the number of spatial degrees of freedom. In this paper, we introduce a method for the rapid solution of volumetric electric field integral equations used in the analysis of doubly periodic dielectric structures. The crux of our method is the accelerated Cartesian expansion algorithm, which is used to evaluate the requisite potentials in O(N) cost. Results are provided that corroborate our claims of acceleration without compromising accuracy, as well as the application of our method to a number of compelling photonics applications.

  13. Statistical Orbit Determination using the Particle Filter for Incorporating Non-Gaussian Uncertainties

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell

    2012-01-01

    The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in the heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e. the Kalman Filter and its derivatives) provide state estimates based on only the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of the full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied in the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
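
    The propagate/weight/resample cycle at the core of a bootstrap particle filter can be sketched generically as below; the dynamics and likelihood callables are placeholders, not an orbit propagator, so this only illustrates the mechanism the abstract refers to.

      import numpy as np

      def bootstrap_pf(measurements, propagate, likelihood, particles, seed=0):
          rng = np.random.default_rng(seed)
          particles = np.array(particles, dtype=float)   # shape (n, state_dim)
          n = len(particles)
          means = []
          for y in measurements:
              particles = propagate(particles, rng)      # sample the dynamics
              w = likelihood(y, particles)               # weight by the measurement
              w = w / w.sum()
              means.append(w @ particles)                # posterior mean estimate
              idx = rng.choice(n, size=n, p=w)           # multinomial resampling
              particles = particles[idx]
          return np.array(means)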

  14. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we perform extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car. PMID:26927108
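
    A toy version of the predict-and-gate idea in the first two steps (not the authors' implementation): several least-squares AR predictors of different orders forecast the next navigation sample, and a measurement is flagged as an outlier when it strays from their consensus by more than a grid-sized tolerance; the orders and tolerance here are assumptions.

      import numpy as np

      def ar_predict(history, order):
          """One-step prediction from a least-squares AR(order) fit."""
          h = np.asarray(history, dtype=float)
          X = np.column_stack([h[i:len(h) - order + i] for i in range(order)])
          coef, *_ = np.linalg.lstsq(X, h[order:], rcond=None)
          return h[-order:] @ coef

      def consensus_gate(history, measurement, orders=(2, 3, 4), tol=0.5):
          preds = [ar_predict(history, p) for p in orders]
          return abs(measurement - np.median(preds)) <= tol  # False -> outlier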

  15. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to effectively fulfill precise error correction in a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we perform extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car.

  16. Optimal sixteenth order convergent method based on quasi-Hermite interpolation for computing roots.

    PubMed

    Zafar, Fiza; Hussain, Nawab; Fatimah, Zirwah; Kharal, Athar

    2014-01-01

    We present a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed using quasi-Hermite interpolation and has order of convergence sixteen. As this method requires four function evaluations and one derivative evaluation at each step, it is optimal in the sense of the Kung and Traub conjecture. Comparisons are given with some other newly developed sixteenth-order methods. The interval Newton method is also used to find sufficiently accurate initial approximations. Some figures show the enclosure of finitely many zeroes of nonlinear equations in an interval. Basins of attraction show the effectiveness of the method.

  17. Solving large scale structure in ten easy steps with COLA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  18. A neural network method to correct bidirectional effects in water-leaving radiance

    NASA Astrophysics Data System (ADS)

    Fan, Yongzhen; Li, Wei; Voss, Kenneth J.; Gatebe, Charles K.; Stamnes, Knut

    2017-02-01

    The standard method to convert the measured water-leaving radiances from the observation direction to the nadir direction developed by Morel and coworkers requires knowledge of the chlorophyll concentration (CHL). Also, the standard method was developed for open ocean water, which makes it unsuitable for turbid coastal waters. We introduce a neural network method to convert the water-leaving radiance (or the corresponding remote sensing reflectance) from the observation direction to the nadir direction. This method does not require any prior knowledge of the water constituents or the inherent optical properties (IOPs). This method is fast, accurate and can be easily adapted to different remote sensing instruments. Validation using NuRADS measurements in different types of water shows that this method is suitable for both open ocean and coastal waters. In open ocean or chlorophyll-dominated waters, our neural network method produces corrections similar to those of the standard method. In turbid coastal waters, especially sediment-dominated waters, a significant improvement was obtained compared to the standard method.
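
    As a rough, hedged illustration of such a mapping (not the authors' network), a small MLP can be trained to take a directional remote sensing reflectance plus viewing/solar geometry and return the nadir-equivalent value; the synthetic training pairs below are placeholders for the radiative transfer simulations a real model would be trained on.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      # features: [Rrs(theta_v, phi), sensor zenith, relative azimuth, solar zenith]
      X = rng.random((5000, 4))
      y = X[:, 0] * (1.0 - 0.1 * np.cos(X[:, 2]) * X[:, 1])  # synthetic stand-in

      net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
      net.fit(X, y)
      rrs_nadir = net.predict(X[:3])  # nadir-equivalent reflectance estimates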

  19. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as the elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, minimize the need for simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) and enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variation during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined, the forward-inverse ANNs enable computationally inexperienced users to obtain fast and accurate estimates of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined by direct scanning of the parameter space as the optimization approach, and both approaches are shown to yield comparable results.

  20. Guidance for laboratories performing molecular pathology for cancer patients

    PubMed Central

    Cree, Ian A; Deans, Zandra; Ligtenberg, Marjolijn J L; Normanno, Nicola; Edsjö, Anders; Rouleau, Etienne; Solé, Francesc; Thunnissen, Erik; Timens, Wim; Schuuring, Ed; Dequeker, Elisabeth; Murray, Samuel; Dietel, Manfred; Groenen, Patricia; Van Krieken, J Han

    2014-01-01

    Molecular testing is becoming an important part of the diagnosis of any patient with cancer. The challenge to laboratories is to meet this need, using reliable methods and processes to ensure that patients receive a timely and accurate report on which their treatment will be based. The aim of this paper is to provide minimum requirements for the management of molecular pathology laboratories. This general guidance should be augmented by the specific guidance available for different tumour types and tests. Preanalytical considerations are important, and careful consideration of the way in which specimens are obtained and reach the laboratory is necessary. Sample receipt and handling follow standard operating procedures, but some alterations may be necessary if molecular testing is to be performed, for instance to control tissue fixation. DNA and RNA extraction can be standardised and should be checked for quality and quantity of output on a regular basis. The choice of analytical method(s) depends on clinical requirements, desired turnaround time, and expertise available. Internal quality control, regular internal audit of the whole testing process, laboratory accreditation, and continual participation in external quality assessment schemes are prerequisites for delivery of a reliable service. A molecular pathology report should accurately convey the information the clinician needs to treat the patient with sufficient information to allow for correct interpretation of the result. Molecular pathology is developing rapidly, and further detailed evidence-based recommendations are required for many of the topics covered here. PMID:25012948

  1. Using radiance predicted by the P3 approximation in a spherical geometry to predict tissue optical properties

    NASA Astrophysics Data System (ADS)

    Dickey, Dwayne J.; Moore, Ronald B.; Tulip, John

    2001-01-01

    For photodynamic therapy of solid tumors, such as prostatic carcinoma, to be achieved, an accurate model to predict tissue parameters and light dose must be found. Presently, most analytical light dosimetry models are fluence based and are not clinically viable for tissue characterization. Other methods of predicting optical properties, such as Monte Carlo, are accurate but far too time consuming for clinical application. However, radiance predicted by the P3-Approximation, an analytical solution to the transport equation, may be a viable and accurate alternative. The P3-Approximation accurately predicts optical parameters in intralipid/methylene-blue-based phantoms in a spherical geometry. The optical parameters furnished by the radiance, when introduced into the fluence predicted by both the P3-Approximation and Grosjean theory, correlate well with experimental data. The P3-Approximation also predicts the optical properties of prostate tissue, agreeing with documented optical parameters. The P3-Approximation could be the clinical tool necessary to facilitate PDT of solid tumors because of the limited number of invasive measurements required and the speed with which accurate calculations can be performed.

  2. Neuropsychological Test Selection for Cognitive Impairment Classification: A Machine Learning Approach

    PubMed Central

    Williams, Jennifer A.; Schmitter-Edgecombe, Maureen; Cook, Diane J.

    2016-01-01

    Introduction: Reducing the amount of testing required to accurately detect cognitive impairment is clinically relevant. The aim of this research was to determine the fewest number of clinical measures required to accurately classify participants as healthy older adult, mild cognitive impairment (MCI) or dementia using a suite of classification techniques. Methods: Two variable-selection machine learning models (i.e., naive Bayes, decision tree), a logistic regression, and two participant datasets (i.e., clinical diagnosis, clinical dementia rating; CDR) were explored. Participants classified using clinical diagnosis criteria included 52 individuals with dementia, 97 with MCI, and 161 cognitively healthy older adults. Participants classified using CDR included 154 individuals with CDR = 0, 93 individuals with CDR = 0.5, and 25 individuals with CDR = 1.0+. Twenty-seven demographic, psychological, and neuropsychological variables were available for variable selection. Results: No significant difference was observed between the naive Bayes, decision tree, and logistic regression models for classification of either the clinical diagnosis or the CDR dataset. Participant classification (70.0-99.1%), geometric mean (60.9-98.1%), sensitivity (44.2-100%), and specificity (52.7-100%) were generally satisfactory. Unsurprisingly, the MCI/CDR = 0.5 participant group was the most challenging to classify. Through variable selection, only 2-9 variables were required for classification, and these varied between datasets in a clinically meaningful way. Conclusions: The current study results reveal that machine learning techniques can accurately classify cognitive impairment and reduce the number of measures required for diagnosis. PMID:26332171

  3. Generic sample preparation combined with high-resolution liquid chromatography-time-of-flight mass spectrometry for unification of urine screening in doping-control laboratories.

    PubMed

    Peters, R J B; Oosterink, J E; Stolker, A A M; Georgakopoulos, C; Nielen, M W F

    2010-04-01

    A unification of doping-control screening procedures of prohibited small molecule substances--including stimulants, narcotics, steroids, beta2-agonists and diuretics--is highly urgent in order to free resources for new classes such as banned proteins. Conceptually this may be achieved by the use of a combination of one gas chromatography-time-of-flight mass spectrometry method and one liquid chromatography-time-of-flight mass spectrometry method. In this work a quantitative screening method using high-resolution liquid chromatography in combination with accurate-mass time-of-flight mass spectrometry was developed and validated for determination of glucocorticosteroids, beta2-agonists, thiazide diuretics, and narcotics and stimulants in urine. To enable the simultaneous isolation of all the compounds of interest and the necessary purification of the resulting extracts, a generic extraction and hydrolysis procedure was combined with a solid-phase extraction modified for these groups of compounds. All 56 compounds are determined using positive electrospray ionisation with the exception of the thiazide diuretics for which the best sensitivity was obtained by using negative electrospray ionisation. The results show that, with the exception of clenhexyl, procaterol, and reproterol, all compounds can be detected below the respective minimum required performance level and the results for linearity, repeatability, within-lab reproducibility, and accuracy show that the method can be used for quantitative screening. If qualitative screening is sufficient the instrumental analysis may be limited to positive ionisation, because all analytes including the thiazides can be detected at the respective minimum required levels in the positive mode. The results show that the application of accurate-mass time-of-flight mass spectrometry in combination with generic extraction and purification procedures is suitable for unification and expansion of the window of screening methods of doping laboratories. Moreover, the full-scan accurate-mass data sets obtained still allow retrospective examination for emerging doping agents, without re-analyzing the samples.

  4. Lung vessel segmentation in CT images using graph-cuts

    NASA Astrophysics Data System (ADS)

    Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.

    2016-03-01

    Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as it incorporates neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and the vesselness from a Hessian-based filter represents shape. Due to the number of voxels in high resolution CT scans, the memory requirement and time consumption for building a graph structure are very high. In order to make the graph representation computationally tractable, voxels that are considered clearly background are removed from the graph nodes using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationships of the remaining voxels. Vessels are segmented by minimizing the energy cost function within the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results on the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to previous methods, with F1 scores of 0.76 and 0.69. On the VESSEL12 data-set, our method obtained competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.
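
    A compact sketch of a cost function in this spirit, assuming the PyMaxflow package: the unary (t-link) terms blend normalized intensity and vesselness, and a constant pairwise (n-link) weight enforces smoothness; the blending weight and capacities are illustrative assumptions, not the paper's tuned values.

      import numpy as np
      import maxflow  # PyMaxflow

      def vessel_graph_cut(intensity, vesselness, alpha=0.5, pairwise=1.0):
          # Appearance + shape data term; both inputs normalized to [0, 1].
          score = alpha * intensity + (1.0 - alpha) * vesselness
          g = maxflow.Graph[float]()
          nodes = g.add_grid_nodes(intensity.shape)
          g.add_grid_edges(nodes, pairwise)              # smoothness (n-links)
          g.add_grid_tedges(nodes, score, 1.0 - score)   # unary terms (t-links)
          g.maxflow()
          return g.get_grid_segments(nodes)              # boolean side-of-cut labels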

  5. The extended Fourier pseudospectral time-domain method for atmospheric sound propagation.

    PubMed

    Hornikx, Maarten; Waxler, Roger; Forssén, Jens

    2010-10-01

    An extended Fourier pseudospectral time-domain (PSTD) method is presented to model atmospheric sound propagation by solving the linearized Euler equations. In this method, evaluation of spatial derivatives is based on an eigenfunction expansion, and evaluation on a spatial grid requires only two spatial points per wavelength. Time iteration is done using a low-storage optimized six-stage Runge-Kutta method. This method is applied to two-dimensional non-moving media models, one with screens and one for an urban canyon, with generally high accuracy in both amplitude and phase. For a moving atmosphere, accurate results have been obtained in models with both a uniform and a logarithmic wind velocity profile over a rigid ground surface and in the presence of a screen. The method has also been validated for three-dimensional sound propagation over a screen. For that application, the developed method is on the order of 100 times faster than the second-order-accurate FDTD solution of the linearized Euler equations. The method is found to be well suited for atmospheric sound propagation simulations where effects of complex meteorology and straight rigid boundary surfaces are to be investigated.

  6. Monitoring for airborne allergens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burge, H.A.

    1992-07-01

    Monitoring for allergens can provide some information on the kinds and levels of exposure experienced by local patient populations, provided volumetric methods are used for sample collection and analysis is accurate and consistent. Such data can also be used to develop standards for the specific environment and to begin to develop predictive models. Comparing outdoor allergen aerosols between different monitoring sites requires identical collection and analysis methods and some kind of rational standard, whether arbitrary or based on recognized health effects.

  7. Development of Physics-Based Hurricane Wave Response Functions: Application to Selected Sites on the U.S. Gulf Coast

    NASA Astrophysics Data System (ADS)

    McLaughlin, P. W.; Kaihatu, J. M.; Irish, J. L.; Taylor, N. R.; Slinn, D.

    2013-12-01

    Recent hurricane activity in the Gulf of Mexico has led to a need for accurate, computationally efficient prediction of hurricane damage so that communities can better assess the risk of local socio-economic disruption. This study focuses on developing robust, physics-based non-dimensional equations that accurately predict maximum significant wave height at different locations near a given hurricane track. These equations (denoted as Wave Response Functions, or WRFs) were developed from presumed physical dependencies between wave heights and hurricane characteristics and fit with data from numerical models of waves and surge under hurricane conditions. After curve fitting, constraints which correct for a fully developed sea state were used to limit the wind wave growth. When applied to the region near Gulfport, MS, back prediction of maximum significant wave height yielded root mean square errors between 0.22 and 0.42 m at open-coast stations and between 0.07 and 0.30 m at bay stations when compared to the numerical model data. The WRF method was also applied to Corpus Christi, TX and Panama City, FL with similar results. Back prediction errors will be included in uncertainty evaluations connected to risk calculations using joint probability methods. These methods require thousands of simulations to quantify extreme value statistics, thus requiring the use of reduced methods such as the WRF to represent the relevant physical processes.

  8. Large scale nonlinear programming for the optimization of spacecraft trajectories

    NASA Astrophysics Data System (ADS)

    Arrieta-Camacho, Juan Jose

    Despite the availability of high fidelity mathematical models, the computation of accurate optimal spacecraft trajectories has never been an easy task. While simplified models of spacecraft motion can provide useful estimates of energy requirements, sizing, and cost, the actual launch window and maneuver scheduling must rely on more accurate representations. We propose an alternative for the computation of optimal transfers that uses an accurate representation of the spacecraft dynamics. Like other methodologies for trajectory optimization, this alternative is able to consider all major disturbances. In contrast, it can handle equality and inequality constraints explicitly throughout the trajectory; it requires neither the derivation of costate equations nor the identification of the constrained arcs. The alternative consists of two steps: (1) discretizing the dynamic model using high-order collocation at Radau points, which displays numerical advantages, and (2) solving the resulting Nonlinear Programming (NLP) problem using an interior point method, which does not suffer from the performance bottleneck associated with identifying the active set, as sequential quadratic programming methods do; in this way the methodology exploits the availability of sound numerical methods and next-generation NLP solvers. In practice the methodology is versatile; it can be applied to a variety of aerospace problems such as homing, guidance, and aircraft collision avoidance, and it is particularly well suited for low-thrust spacecraft trajectory optimization. Examples are presented which consider the optimization of a low-thrust orbit transfer subject to the main disturbances due to Earth's gravity field together with lunar and solar attraction. Another example considers the optimization of a multiple asteroid rendezvous problem. In both cases, the ability of the proposed methodology to consider non-standard objective functions and constraints is illustrated. Future research directions are identified, involving the automatic scheduling and optimization of trajectory correction maneuvers. The sensitivity information provided by the methodology is expected to be invaluable in such research pursuits. The collocation scheme and nonlinear programming algorithm presented in this work complement other existing methodologies by providing reliable and efficient numerical methods able to handle large-scale, nonlinear dynamic models.

  9. Active learning based segmentation of Crohn's disease from abdominal MRI.

    PubMed

    Mahapatra, Dwarikanath; Vos, Franciscus M; Buhmann, Joachim M

    2016-05-01

    This paper proposes a novel active learning (AL) framework, and combines it with semi-supervised learning (SSL), for segmenting Crohn's disease (CD) tissues from abdominal magnetic resonance (MR) images. Robust fully supervised learning (FSL) based classifiers require large amounts of labeled data spanning different disease severities. Obtaining such data is time consuming and requires considerable expertise. SSL methods use a few labeled samples and leverage the information from many unlabeled samples to train an accurate classifier. AL queries the labels of the most informative samples and maximizes the gain from the labeling effort. Our primary contribution is in designing a query strategy that combines novel context information with classification uncertainty and feature similarity. Combining SSL and AL gives a robust segmentation method that: (1) optimally uses few labeled samples and many unlabeled samples; and (2) requires lower training time. Experimental results show our method achieves higher segmentation accuracy than FSL methods with fewer samples and reduced training effort.
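
    A minimal sketch of a query score in this spirit (the paper's context term is richer than the plain feature-distance used here, and the weights are assumptions): classification uncertainty is blended with distance to the nearest labeled sample so that queried samples are both informative and diverse.

      import numpy as np

      def query_scores(proba, X_unlabeled, X_labeled, w_unc=0.7, w_div=0.3):
          uncertainty = 1.0 - proba.max(axis=1)           # low confidence -> high score
          d = np.linalg.norm(X_unlabeled[:, None, :] - X_labeled[None, :, :], axis=2)
          diversity = d.min(axis=1)                       # distance to nearest label
          diversity /= diversity.max() + 1e-12
          return w_unc * uncertainty + w_div * diversity  # query the argmax first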

  10. Integrated force method versus displacement method for finite element analysis

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Berke, L.; Gallagher, R. H.

    1991-01-01

    A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EEs) are integrated with the global compatibility conditions (CCs) to form the governing set of equations. In IFM the CCs are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equation to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared relative to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.

  11. Integrated force method versus displacement method for finite element analysis

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Berke, Laszlo; Gallagher, Richard H.

    1990-01-01

    A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EE's) are integrated with the global compatibility conditions (CC's) to form the governing set of equations. In IFM the CC's are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equation to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared relative to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.

  12. Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard

    PubMed Central

    Jha, Abhinav K.; Kupinski, Matthew A.; Rodríguez, Jeffrey J.; Stephen, Renu M.; Stopeck, Alison T.

    2012-01-01

    In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both accuracy and precision. We also propose consistency checks for this evaluation technique. PMID:22713231

  13. Accurate method for luminous transmittance and signal detection quotients measurements in sunglasses lenses

    NASA Astrophysics Data System (ADS)

    Loureiro, A. D.; Gomes, L. M.; Ventura, L.

    2018-02-01

    The international standard ISO 12312-1 proposes transmittance tests that quantify how dark sunglasses lenses are and whether or not they are suitable for driving. Performing these tests normally requires a spectrometer. In this study, we present and theoretically analyze an accurate alternative method for performing these measurements using simple components. Using three LEDs and a four-channel sensor, we generated weighting functions similar to the standard ones for luminous and traffic-light transmittances. From spectroscopy data for 89 sunglasses lenses, we calculated the luminous transmittance and signal detection quotients using our weighting functions and the standard ones, and compared the results with mean-difference Tukey plots. All tested sunglasses lenses were classified into the right category and correctly as suitable or not for driving. The greatest absolute errors for the luminous transmittance and the red, yellow, green and blue signal detection quotients were 0.15%, 0.17, 0.06, 0.04 and 0.18, respectively. This method will be used in a device capable of performing transmittance tests (visible, traffic lights and ultraviolet (UV)) according to the standard. It is important to measure the luminous transmittance and relative visual attenuation quotients correctly in order to report whether or not sunglasses are suitable for driving. Moreover, the standard UV requirements depend on the luminous transmittance.
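
    The luminous transmittance at the heart of these tests is a weighted average of the lens's spectral transmittance over the illuminant and the photopic response; a minimal sketch, assuming all curves are tabulated on a common visible-band wavelength grid:

      import numpy as np

      def luminous_transmittance(wl, t_spectral, s_d65, v_photopic):
          """tau_v = integral(t * S * V) / integral(S * V) over the visible band."""
          weight = s_d65 * v_photopic
          return np.trapz(t_spectral * weight, wl) / np.trapz(weight, wl)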

  14. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    DOE PAGES

    An, Zhe; Rey, Daniel; Ye, Jingxin; ...

    2017-01-16

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  15. Estimating the state of a geophysical system with sparse observations: time delay methods to achieve accurate initial states for prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Zhe; Rey, Daniel; Ye, Jingxin

    The problem of forecasting the behavior of a complex dynamical system through analysis of observational time-series data becomes difficult when the system expresses chaotic behavior and the measurements are sparse, in both space and/or time. Despite the fact that this situation is quite typical across many fields, including numerical weather prediction, the issue of whether the available observations are "sufficient" for generating successful forecasts is still not well understood. An analysis by Whartenby et al. (2013) found that in the context of the nonlinear shallow water equations on a β plane, standard nudging techniques require observing approximately 70 % of the full set of state variables. Here we examine the same system using a method introduced by Rey et al. (2014a), which generalizes standard nudging methods to utilize time delayed measurements. We show that in certain circumstances, it provides a sizable reduction in the number of observations required to construct accurate estimates and high-quality predictions. In particular, we find that this estimate of 70 % can be reduced to about 33 % using time delays, and even further if Lagrangian drifter locations are also used as measurements.

  16. Magnetic dipole moment estimation and compensation for an accurate attitude control in nano-satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Sako, Nobutada; Nakasuka, Shinichi

    2011-06-01

    Nano-satellites provide space access to a broader range of satellite developers and attract interest as an application of space development. Several new nano-satellite missions have recently been proposed with sophisticated objectives such as remote sensing and the observation of astronomical objects. In these advanced missions, some nano-satellites must meet strict attitude requirements for obtaining scientific data or images. For a LEO nano-satellite, the magnetic attitude disturbance dominates over other environmental disturbances as a result of the small moment of inertia, and this effect should be cancelled for precise attitude control. This research focuses on how to cancel the magnetic disturbance in orbit. This paper presents a unique method to estimate and compensate for the residual magnetic moment, which interacts with the geomagnetic field and causes the magnetic disturbance. An extended Kalman filter is used to estimate the magnetic disturbance. For more practical consideration of the magnetic disturbance compensation, this method has been examined on PRISM (Pico-satellite for Remote-sensing and Innovative Space Missions). The method will also be used for a nano-astrometry satellite mission. This paper concludes that magnetic disturbance estimation and compensation are useful for nano-satellite missions which require highly accurate attitude control.
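
    The measurement model behind this estimation is the disturbance torque tau = m x B. As a hedged batch illustration of that model (the paper itself uses an extended Kalman filter), the residual dipole m can be recovered by linear least squares from a series of geomagnetic field vectors and torque estimates:

      import numpy as np

      def skew(v):
          return np.array([[0.0, -v[2], v[1]],
                           [v[2], 0.0, -v[0]],
                           [-v[1], v[0], 0.0]])

      def estimate_dipole(B_series, tau_series):
          # tau = m x B = -[B]x m, so stack -[B_k]x and solve for m.
          A = np.vstack([-skew(B) for B in B_series])
          b = np.concatenate([np.asarray(t, float) for t in tau_series])
          m, *_ = np.linalg.lstsq(A, b, rcond=None)
          return m   # estimated residual magnetic moment (A m^2)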

  17. A Method for Monitoring Organic Chlorides, Hydrochloric Acid and Chlorine in Air

    NASA Technical Reports Server (NTRS)

    Dennison, J. E.; Menichelli, R. P.

    1971-01-01

    While not commonly present in nonurban atmospheres, organic chlorides, hydrochloric acid and chlorine are significant in industrial air pollution and industrial hygiene. Based on a microcoulometer, a much more sensitive method than has heretofore been available has been developed for monitoring these air impurities. The method has a response time (90%) of about twenty seconds, requires no calibration, is accurate to +/- 2.5%, and is specific except for bromide and iodide interferences. The instrument is portable and has been operated unattended for 18 hours without difficulty.

  18. An innovative exercise method to simulate orbital EVA work - Applications to PLSS automatic controls

    NASA Technical Reports Server (NTRS)

    Lantz, Renee; Vykukal, H.; Webbon, Bruce

    1987-01-01

    An exercise method has been proposed which may satisfy the current need for a laboratory simulation representative of muscular, cardiovascular, respiratory, and thermoregulatory responses to work during orbital extravehicular activity (EVA). The simulation incorporates arm crank ergometry with a unique body support mechanism that allows all body position stabilization forces to be reacted at the feet. By instituting this exercise method in laboratory experimentation, an advanced portable life support system (PLSS) thermoregulatory control system can be designed to more accurately reflect the specific work requirements of orbital EVA.

  19. A visual training tool for the Photoload sampling technique

    Treesearch

    Violet J. Holley; Robert E. Keane

    2010-01-01

    This visual training aid is designed to provide Photoload users a tool to increase the accuracy of fuel loading estimations when using the Photoload technique. The Photoload Sampling Technique (RMRS-GTR-190) provides fire managers a sampling method for obtaining consistent, accurate, inexpensive, and quick estimates of fuel loading. It is designed to require only one...

  20. Physical oceanography from satellites: Currents and the slope of the sea surface

    NASA Technical Reports Server (NTRS)

    Sturges, W.

    1974-01-01

    A global scheme using satellite altimetry in conjunction with thermometry techniques provides for more accurate determinations of first order leveling networks by overcoming discrepancies between ocean leveling and land leveling methods. The high noise content in altimetry signals requires filtering or correction for tides, etc., as well as carefully planned sampling schemes.

  1. Mapping the potential for high severity wildfire in the western United States

    Treesearch

    Greg Dillon; Penny Morgan; Zack Holden

    2011-01-01

    Each year, large areas are burned in wildfires across the Western United States. Assessing the ecological effects of these fires is crucial to effective postfire management. This requires accurate, efficient, and economical methods to assess the severity of fires at broad landscape scales (Brennan and Hardwick 1999; Parsons and others 2010). While postfire assessment...

  2. Hg0 and HgCl2 Reference Gas Standards: NIST Traceability and Comparability (And EPA ALT Methods for Hg and HCl )

    EPA Science Inventory

    EPA and NIST have collaborated to establish the necessary procedures for establishing the required NIST traceability of commercially-provided Hg0 and HgCl2 reference generators. This presentation will discuss the approach of a joint EPA/NIST study to accurately quantify the tru...

  3. A comparison of five sampling techniques to estimate surface fuel loading in montane forests

    Treesearch

    Pamela G. Sikkink; Robert E. Keane

    2008-01-01

    Designing a fuel-sampling program that accurately and efficiently assesses fuel load at relevant spatial scales requires knowledge of each sample method's strengths and weaknesses. We obtained loading values for six fuel components using five fuel load sampling techniques at five locations in western Montana, USA. The techniques included fixed-area plots, planar...

  4. Ensemble framework based real-time respiratory motion prediction for adaptive radiotherapy applications.

    PubMed

    Tatinati, Sivanagaraja; Nazarpour, Kianoush; Tech Ang, Wei; Veluvolu, Kalyana C

    2016-08-01

    Successful treatment of tumors with motion-adaptive radiotherapy requires accurate prediction of respiratory motion, ideally with a prediction horizon larger than the latency of the radiotherapy system. Accurate prediction of respiratory motion is, however, a non-trivial task due to the presence of irregularities and intra-trace variabilities, such as baseline drift and temporal changes in the fundamental frequency pattern. In this paper, to enhance the accuracy of respiratory motion prediction, we propose a stacked regression ensemble framework that integrates heterogeneous respiratory motion prediction algorithms. We further address two crucial issues in developing a successful ensemble framework: (1) selection of appropriate prediction methods to ensemble (level-0 methods) among the best existing prediction methods; and (2) finding a suitable generalization approach that can successfully exploit the relative advantages of the chosen level-0 methods. The efficacy of the developed ensemble framework is assessed with real respiratory motion traces acquired from 31 patients undergoing treatment. Results show that the developed ensemble framework improves the prediction performance significantly compared to the best existing methods.
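
    As a toy illustration of the level-0 / generalizer structure, assuming scikit-learn (the lag embedding, model choices and horizon below are stand-ins for the dedicated respiratory predictors the paper combines):

      import numpy as np
      from sklearn.ensemble import StackingRegressor
      from sklearn.linear_model import Ridge
      from sklearn.neural_network import MLPRegressor
      from sklearn.svm import SVR

      def lag_matrix(trace, lags=10, horizon=5):
          X = np.column_stack([trace[i:len(trace) - lags - horizon + 1 + i]
                               for i in range(lags)])
          return X, trace[lags + horizon - 1:]

      rng = np.random.default_rng(0)
      trace = np.sin(np.linspace(0.0, 60.0, 2000)) + 0.05 * rng.standard_normal(2000)
      X, y = lag_matrix(trace)
      ensemble = StackingRegressor(
          estimators=[("svr", SVR()), ("mlp", MLPRegressor(max_iter=1000))],
          final_estimator=Ridge())               # level-1 generalizer
      ensemble.fit(X[:1500], y[:1500])
      predictions = ensemble.predict(X[1500:])   # multi-step-ahead forecasts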

  5. Fuzzy-C-Means Clustering Based Segmentation and CNN-Classification for Accurate Segmentation of Lung Nodules

    PubMed

    K, Jalal Deen; R, Ganesan; A, Merline

    2017-07-27

    Objective: Accurate segmentation of abnormal and healthy lungs is crucial for reliable computer-aided disease diagnostics. Methods: For this purpose a stack of chest CT scans is processed. In this paper, novel methods are proposed for segmentation of the multimodal grayscale lung CT scan. In the conventional approach, the required regions of interest (ROI) are identified using a Markov–Gibbs Random Field (MGRF) model. Result: The results of the proposed FCM- and CNN-based process are compared with the results obtained from the conventional method using the MGRF model. The results illustrate that the proposed method is able to segment the various kinds of complex multimodal medical images precisely. Conclusion: To obtain an exact boundary of the regions, every empirical dispersion of the image is computed by Fuzzy C-Means clustering segmentation, and a classification process based on a Convolutional Neural Network (CNN) classifier is performed to distinguish normal tissue from abnormal tissue. The experimental evaluation is done using the Interstitial Lung Disease (ILD) database.
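
    A compact sketch of the fuzzy C-means step on gray values (the CNN classification stage is not reproduced here; the cluster count and fuzzifier m are illustrative assumptions):

      import numpy as np

      def fuzzy_cmeans(values, n_clusters=3, m=2.0, iters=100, seed=0):
          rng = np.random.default_rng(seed)
          x = np.asarray(values, dtype=float).ravel()
          centers = rng.choice(x, n_clusters, replace=False)
          for _ in range(iters):
              d = np.abs(x[:, None] - centers[None, :]) + 1e-12
              u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)),
                               axis=2)                     # membership degrees
              um = u ** m
              centers = (um.T @ x) / um.sum(axis=0)        # weighted center update
          return centers, u                                # centers and memberships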

  6. Star tracking method based on multiexposure imaging for intensified star trackers.

    PubMed

    Yu, Wenbo; Jiang, Jie; Zhang, Guangjun

    2017-07-20

    The requirements for the dynamic performance of star trackers are rapidly increasing with the development of space exploration technologies. However, insufficient knowledge of the angular acceleration has largely decreased the performance of existing star tracking methods, and star trackers may even fail to track under highly dynamic conditions. This study proposes a star tracking method based on multiexposure imaging for intensified star trackers. An accurate estimation model of the complete motion parameters, including the angular velocity and angular acceleration, is established according to the working characteristics of multiexposure imaging. The estimation of the complete motion parameters is utilized to generate the predictive star image accurately. Therefore, the correct matching and tracking between stars in the real and predictive star images can be reliably accomplished under highly dynamic conditions. Simulations with specific dynamic conditions are conducted to verify the feasibility and effectiveness of the proposed method. Experiments with real night-sky observations are also conducted for further verification. Simulations and experiments demonstrate that the proposed method is effective and shows excellent performance under highly dynamic conditions.

  7. Fuzzy-C-Means Clustering Based Segmentation and CNN-Classification for Accurate Segmentation of Lung Nodules

    PubMed Central

    K, Jalal Deen; R, Ganesan; A, Merline

    2017-01-01

    Objective: Accurate segmentation of abnormal and healthy lungs is crucial for reliable computer-aided disease diagnostics. Methods: For this purpose a stack of chest CT scans is processed. In this paper, novel methods are proposed for segmentation of the multimodal grayscale lung CT scan. In the conventional approach, the required regions of interest (ROI) are identified using a Markov–Gibbs Random Field (MGRF) model. Result: The results of the proposed FCM- and CNN-based process are compared with the results obtained from the conventional method using the MGRF model. The results illustrate that the proposed method is able to segment the various kinds of complex multimodal medical images precisely. Conclusion: To obtain an exact boundary of the regions, every empirical dispersion of the image is computed by Fuzzy C-Means clustering segmentation, and a classification process based on a Convolutional Neural Network (CNN) classifier is performed to distinguish normal tissue from abnormal tissue. The experimental evaluation is done using the Interstitial Lung Disease (ILD) database. PMID:28749127

  8. 3D Boolean operations in virtual surgical planning.

    PubMed

    Charton, Jerome; Laurentjoye, Mathieu; Kim, Youngjun

    2017-10-01

    Boolean operations in computer-aided design or computer graphics are a set of operations (e.g. intersection, union, subtraction) between two objects (e.g. a patient model and an implant model) that are important in performing accurate and reproducible virtual surgical planning. This requires accurate and robust techniques that can handle various types of data, such as a surface extracted from volumetric data, synthetic models, and 3D scan data. This article compares the performance of the proposed method (Boolean operations by a robust, exact, and simple method between two colliding shells (BORES)) and an existing method based on the Visualization Toolkit (VTK). In all tests presented in this article, BORES could handle complex configurations as well as report impossible configurations of the input. In contrast, the VTK implementations were unstable, did not handle singular edges or coplanar collisions, and produced several defects. The proposed method of Boolean operations, BORES, is efficient and appropriate for virtual surgical planning. Moreover, it is simple and easy to implement. In future work, we will extend the proposed method to handle non-colliding components.

  9. Monitoring the metering performance of an electronic voltage transformer on-line based on cyber-physics correlation analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang

    2017-10-01

    Metering performance is the key parameter of an electronic voltage transformer (EVT), and it requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for the key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physics correlation analysis. Exploiting the electrical and physical properties of a substation running in three-phase symmetry, principal component analysis is used to separate the metering deviation caused by primary-side fluctuations from that caused by an EVT anomaly. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing the change in these statistics. The experimental results show that the method accurately monitors the metering deviation of a Class 0.2 EVT, demonstrating on-line evaluation of metering performance without a standard voltage transformer.
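    A hedged sketch of the separation step described above: under three-phase symmetry the per-phase voltages are strongly correlated, so the leading principal component tracks common primary fluctuations while the residual flags a drifting transformer. This is a generic PCA illustration, not the authors' statistic; all numbers are synthetic.

```python
import numpy as np

# Under three-phase symmetry the per-phase secondary voltages are strongly
# correlated; the leading principal component captures common primary-side
# fluctuations, and the residual grows when one channel's metering drifts.
def residual_statistic(V):
    Vc = V - V.mean(axis=0)                       # column-centered (n, 3) data
    U, s, Wt = np.linalg.svd(Vc, full_matrices=False)
    common = np.outer(U[:, 0] * s[0], Wt[0])      # rank-1 common-mode part
    return np.linalg.norm(Vc - common, axis=1)    # per-sample anomaly score

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
base = 1.0 + 0.01 * np.sin(2 * np.pi * 0.5 * t)   # shared primary fluctuation
V = np.column_stack([base, base, base]) + rng.normal(0, 1e-4, (2000, 3))
V[1000:, 2] *= 1.002                              # 0.2% drift on one phase
r = residual_statistic(V)
print(r[1000:].mean() / r[:1000].mean())          # drift inflates the residual
```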

  10. Triangulation methods for automated docking

    NASA Technical Reports Server (NTRS)

    Bales, John W.

    1996-01-01

    An automated docking system must have a reliable method for determining range and orientation of the passive (target) vehicle with respect to the active vehicle. This method must also provide accurate information on the rates of change of range to and orientation of the passive vehicle. The method must be accurate within required tolerances and capable of operating in real time. The method being developed at Marshall Space Flight Center employs a single TV camera, a laser illumination system and a target consisting, in its minimal configuration, of three retro-reflectors. Two of the retro-reflectors are mounted flush to the same surface, with the third retro-reflector mounted to a post fixed midway between the other two and jutting at a right angle from the surface. For redundancy, two additional retroreflectors are mounted on the surface on a line at right angles to the line containing the first two retro-reflectors, and equally spaced on either side of the post. The target vehicle will contain a large target for initial acquisition and several smaller targets for close range.

  11. An Edge-Based Method for the Incompressible Navier-Stokes Equations on Polygonal Meshes

    NASA Astrophysics Data System (ADS)

    Wright, Jeffrey A.; Smith, Richard W.

    2001-05-01

    A pressure-based method is presented for discretizing the unsteady incompressible Navier-Stokes equations using hybrid unstructured meshes. The edge-based data structure and assembly procedure adopted lead naturally to a strictly conservative discretization, which is valid for meshes composed of n-sided polygons. Particular attention is given to the construction of a pressure-velocity coupling procedure which is supported by edge data, resulting in a relatively simple numerical method that is consistent with the boundary and initial conditions required by the incompressible Navier-Stokes equations. Edge formulas are presented for assembling the momentum equations, which are based on an upwind-biased linear reconstruction of the velocity field. Similar formulas are presented for assembling the pressure equation. The method is demonstrated to be second-order accurate in space and time for two Navier-Stokes problems admitting an exact solution. Results for several other well-known problems are also presented, including lid-driven cavity flow, impulsively started cylinder flow, and unsteady vortex shedding from a circular cylinder. Although the method is by construction minimalist, it is shown to be accurate and robust for the problems considered.

  12. Quantitative photoacoustic microscopy of optical absorption coefficients from acoustic spectra in the optical diffusive regime

    NASA Astrophysics Data System (ADS)

    Guo, Zijian; Favazza, Christopher; Garcia-Uribe, Alejandro; Wang, Lihong V.

    2012-06-01

    Photoacoustic (PA) microscopy (PAM) can image optical absorption contrast with ultrasonic spatial resolution in the optical diffusive regime. Conventionally, accurate quantification in PAM requires knowledge of the optical fluence attenuation, acoustic pressure attenuation, and detection bandwidth. We circumvent this requirement by quantifying the optical absorption coefficients from the acoustic spectra of PA signals acquired at multiple optical wavelengths. With the acoustic spectral method, the absorption coefficients of an oxygenated bovine blood phantom at 560, 565, 570, and 575 nm were quantified with errors of <3%. We also quantified the total hemoglobin concentration and hemoglobin oxygen saturation in a live mouse. Compared with the conventional amplitude method, the acoustic spectral method provides greater quantification accuracy in the optical diffusive regime. The limitations of the acoustic spectral method were also discussed.

  13. Quantitative photoacoustic microscopy of optical absorption coefficients from acoustic spectra in the optical diffusive regime

    PubMed Central

    Guo, Zijian; Favazza, Christopher; Garcia-Uribe, Alejandro

    2012-01-01

    Photoacoustic (PA) microscopy (PAM) can image optical absorption contrast with ultrasonic spatial resolution in the optical diffusive regime. Conventionally, accurate quantification in PAM requires knowledge of the optical fluence attenuation, acoustic pressure attenuation, and detection bandwidth. We circumvent this requirement by quantifying the optical absorption coefficients from the acoustic spectra of PA signals acquired at multiple optical wavelengths. With the acoustic spectral method, the absorption coefficients of an oxygenated bovine blood phantom at 560, 565, 570, and 575 nm were quantified with errors of <3%. We also quantified the total hemoglobin concentration and hemoglobin oxygen saturation in a live mouse. Compared with the conventional amplitude method, the acoustic spectral method provides greater quantification accuracy in the optical diffusive regime. The limitations of the acoustic spectral method were also discussed. PMID:22734767

  14. Quantitative photoacoustic microscopy of optical absorption coefficients from acoustic spectra in the optical diffusive regime.

    PubMed

    Guo, Zijian; Favazza, Christopher; Garcia-Uribe, Alejandro; Wang, Lihong V

    2012-06-01

    Photoacoustic (PA) microscopy (PAM) can image optical absorption contrast with ultrasonic spatial resolution in the optical diffusive regime. Conventionally, accurate quantification in PAM requires knowledge of the optical fluence attenuation, acoustic pressure attenuation, and detection bandwidth. We circumvent this requirement by quantifying the optical absorption coefficients from the acoustic spectra of PA signals acquired at multiple optical wavelengths. With the acoustic spectral method, the absorption coefficients of an oxygenated bovine blood phantom at 560, 565, 570, and 575 nm were quantified with errors of <3%. We also quantified the total hemoglobin concentration and hemoglobin oxygen saturation in a live mouse. Compared with the conventional amplitude method, the acoustic spectral method provides greater quantification accuracy in the optical diffusive regime. The limitations of the acoustic spectral method were also discussed.

  15. Using Image Processing to Determine Emphysema Severity

    NASA Astrophysics Data System (ADS)

    McKenzie, Alexander; Sadun, Alberto

    2010-10-01

    Currently X-rays and computerized tomography (CT) scans are used to detect emphysema, but other tests are required to accurately quantify the amount of lung that has been affected by the disease. These images clearly show whether a patient has emphysema but, by visual inspection alone, cannot quantify the degree of the disease, which presents as subtle, dark spots on the lung. Our goal is to use these CT scans to accurately diagnose and determine emphysema severity levels in patients. This will be accomplished by performing several different analyses of CT scan images from several patients representing a wide range of severity of the disease. In addition to analyzing the original CT data, this process will convert the data to one- and two-bit images and will then examine the deviation from a normal distribution curve to determine skewness. Our preliminary results show that this method of assessment appears to be more accurate and robust than the currently utilized methods, which involve looking at percentages of radiodensities in the air passages of the lung.
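    A hedged sketch of the described assessment follows: the skewness of the lung-region attenuation histogram measures its deviation from a normal-looking distribution. The -950 HU cut-off used for the one-bit image is a conventional low-attenuation threshold assumed here, not taken from the abstract.

```python
import numpy as np
from scipy import stats

# Score a set of lung-region Hounsfield-unit values: histogram skewness plus
# the fraction of voxels below an assumed low-attenuation threshold.
def emphysema_scores(hu_values):
    skewness = stats.skew(hu_values)          # asymmetry of the HU histogram
    laa = np.mean(hu_values < -950.0)         # low-attenuation area fraction
    return skewness, laa

rng = np.random.default_rng(0)
hu = rng.normal(-860.0, 40.0, 10000)          # toy "lung parenchyma" HU values
print(emphysema_scores(hu))
```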

  16. Image Processing Diagnostics: Emphysema

    NASA Astrophysics Data System (ADS)

    McKenzie, Alex

    2009-10-01

    Currently the computerized tomography (CT) scan can detect emphysema sooner than traditional X-rays, but other tests are required to measure more accurately the amount of affected lung. CT scan images clearly show whether a patient has emphysema but, by visual inspection alone, cannot quantify the degree of the disease, as it appears merely as subtle, barely distinct, dark spots on the lung. Our goal is to create a software plug-in to interface with existing open source medical imaging software, to automate the process of accurately diagnosing and determining emphysema severity levels in patients. This will be accomplished by performing a number of statistical calculations using data taken from CT scan images of several patients representing a wide range of severity of the disease. These analyses include an examination of the deviation from a normal distribution curve to determine skewness, a commonly used statistical parameter. Our preliminary results show that this method of assessment appears to be more accurate and robust than currently utilized methods which involve looking at percentages of radiodensities in air passages of the lung.

  17. Measurement of compressed breast thickness by optical stereoscopic photogrammetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, Albert H.; Mawdsley, Gordon E.; Yaffe, Martin J.

    2009-02-15

    The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.
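    The ranging relation behind stereo photogrammetry admits a one-line sketch: two calibrated cameras with baseline B and focal length f (in pixels) see a surface point with disparity d, giving depth Z = f*B/d. All numbers below are purely illustrative assumptions, not the authors' calibration.

```python
# Stereo depth from disparity: Z = f * B / d. Paddle-surface height then
# follows from depth relative to a known reference plane (e.g. breast support).
def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

# illustrative values: f = 2000 px, B = 0.10 m, d = 400 px -> Z = 0.5 m
print(depth_from_disparity(2000.0, 0.10, 400.0))
```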

  18. Delamination Modeling of Composites for Improved Crash Analysis

    NASA Technical Reports Server (NTRS)

    Fleming, David C.

    1999-01-01

    Finite element crash modeling of composite structures is limited by the inability of current commercial crash codes to accurately model delamination growth. Efforts are made to implement and assess delamination modeling techniques using a current finite element crash code, MSC/DYTRAN. Three methods are evaluated, including a straightforward method based on monitoring forces in elements or constraints representing an interface; a cohesive fracture model proposed in the literature; and the virtual crack closure technique commonly used in fracture mechanics. Results are compared with dynamic double cantilever beam test data from the literature. Examples show that it is possible to accurately model delamination propagation in this case. However, the computational demands required for accurate solution are great and reliable property data may not be available to support general crash modeling efforts. Additional examples are modeled including an impact-loaded beam, damage initiation in laminated crushing specimens, and scaled aircraft subfloor structures in which composite sandwich structures are used as energy-absorbing elements. These examples illustrate some of the difficulties in modeling delamination as part of a finite element crash analysis.

  19. Performance of commercial platforms for rapid genotyping of polymorphisms affecting warfarin dose.

    PubMed

    King, Cristi R; Porche-Sorbet, Rhonda M; Gage, Brian F; Ridker, Paul M; Renaud, Yannick; Phillips, Michael S; Eby, Charles

    2008-06-01

    Initiation of warfarin therapy is associated with bleeding owing to its narrow therapeutic window and unpredictable therapeutic dose. Pharmacogenetic-based dosing algorithms can improve accuracy of initial warfarin dosing but require rapid genotyping for cytochrome P-450 2C9 (CYP2C9) *2 and *3 single nucleotide polymorphisms (SNPs) and a vitamin K epoxide reductase (VKORC1) SNP. We evaluated 4 commercial systems: INFINITI analyzer (AutoGenomics, Carlsbad, CA), Invader assay (Third Wave Technologies, Madison, WI), Tag-It Mutation Detection assay (Luminex Molecular Diagnostics, formerly Tm Bioscience, Toronto, Canada), and Pyrosequencing (Biotage, Uppsala, Sweden). We genotyped 112 DNA samples and resolved any discrepancies with bidirectional sequencing. The INFINITI analyzer was 100% accurate for all SNPs and required 8 hours. Invader and Tag-It were 100% accurate for CYP2C9 SNPs, 99% accurate for VKORC1 -1639/3673 SNP, and required 3 hours and 8 hours, respectively. Pyrosequencing was 99% accurate for CYP2C9 *2, 100% accurate for CYP2C9 *3, and 100% accurate for VKORC1 and required 4 hours. Current commercial platforms provide accurate and rapid genotypes for pharmacogenetic dosing during initiation of warfarin therapy.

  20. Towards the development of universal, fast and highly accurate docking/scoring methods: a long way to go

    PubMed Central

    Moitessier, N; Englebienne, P; Lee, D; Lawandi, J; Corbeil, C R

    2008-01-01

    Accelerating the drug discovery process requires predictive computational protocols capable of reducing or simplifying the synthetic and/or combinatorial challenge. Docking-based virtual screening methods have been developed and successfully applied to a number of pharmaceutical targets. In this review, we first present the current status of docking and scoring methods, with exhaustive lists of these. We next discuss reported comparative studies, outlining criteria for their interpretation. In the final section, we describe some of the remaining developments that would potentially lead to a universally applicable docking/scoring method. PMID:18037925

  1. Improvements to the kernel function method of steady, subsonic lifting surface theory

    NASA Technical Reports Server (NTRS)

    Medan, R. T.

    1974-01-01

    The application of a kernel function lifting surface method to three dimensional, thin wing theory is discussed. A technique for determining the influence functions is presented. The technique is shown to require fewer quadrature points, while still calculating the influence functions accurately enough to guarantee convergence with an increasing number of spanwise quadrature points. The method also treats control points on the wing leading and trailing edges. The report introduces and employs an aspect of the kernel function method which apparently has never been used before and which significantly enhances the efficiency of the kernel function approach.

  2. New Automotive Air Conditioning System Simulation Tool Developed in MATLAB/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiss, T.; Chaney, L.; Meyer, J.

    Further improvements in vehicle fuel efficiency require accurate evaluation of the vehicle's transient total power requirement. When operated, the air conditioning (A/C) system is the largest auxiliary load on a vehicle; therefore, accurate evaluation of the load it places on the vehicle's engine and/or energy storage system is especially important. Vehicle simulation software, such as 'Autonomie,' has been used by OEMs to evaluate vehicles' energy performance. A transient A/C simulation tool incorporated into vehicle simulation models would also provide a tool for developing more efficient A/C systems through a thorough consideration of the transient A/C system performance. The dynamic system simulation software Matlab/Simulink was used to develop new and more efficient vehicle energy system controls. The various modeling methods used for the new simulation tool are described in detail. Comparison with measured data is provided to demonstrate the validity of the model.

  3. New signal processing technique for density profile reconstruction using reflectometry.

    PubMed

    Clairet, F; Ricaud, B; Briolle, F; Heuraux, S; Bottereau, C

    2011-08-01

    Reflectometry profile measurement requires an accurate determination of the plasma reflected signal. Along with a good resolution and a high signal to noise ratio of the phase measurement, adequate data analysis is required. A new data processing based on time-frequency tomographic representation is used. It provides a clearer separation between multiple components and improves isolation of the relevant signals. In this paper, this data processing technique is applied to two sets of signals coming from two different reflectometer devices used on the Tore Supra tokamak. For the standard density profile reflectometry, it improves the initialization process and its reliability, providing a more accurate profile determination in the far scrape-off layer with density measurements as low as 10^16 m^-3. For a second reflectometer, which provides measurements in front of a lower hybrid launcher, this method improves the separation of the relevant plasma signal from multi-reflection processes due to the proximity of the plasma.

  4. Joint reconstruction of the initial pressure and speed of sound distributions from combined photoacoustic and ultrasound tomography measurements

    NASA Astrophysics Data System (ADS)

    Matthews, Thomas P.; Anastasio, Mark A.

    2017-12-01

    The initial pressure and speed of sound (SOS) distributions cannot both be stably recovered from photoacoustic computed tomography (PACT) measurements alone. Adjunct ultrasound computed tomography (USCT) measurements can be employed to estimate the SOS distribution. Under the conventional image reconstruction approach for combined PACT/USCT systems, the SOS is estimated from the USCT measurements alone and the initial pressure is estimated from the PACT measurements by use of the previously estimated SOS. This approach ignores the acoustic information in the PACT measurements and may require many USCT measurements to accurately reconstruct the SOS. In this work, a joint reconstruction method where the SOS and initial pressure distributions are simultaneously estimated from combined PACT/USCT measurements is proposed. This approach allows accurate estimation of both the initial pressure distribution and the SOS distribution while requiring few USCT measurements.

  5. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pelt, Daniël M.; Gürsoy, Dogˇa; Palenstijn, Willem Jan

    2016-04-28

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy's standard reconstruction method.
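    A hedged usage sketch of the integration is given below. The tomopy.astra wrapper and option names follow the scheme described in this paper, but should be checked against the installed TomoPy/ASTRA versions; the CUDA-based method additionally assumes a GPU build of the ASTRA toolbox.

```python
import tomopy

# Simulate data with TomoPy, then delegate reconstruction to an ASTRA GPU
# algorithm through the tomopy.astra wrapper described in the paper.
obj = tomopy.shepp3d(64)                 # synthetic 3D phantom
theta = tomopy.angles(90)                # 90 projection angles
proj = tomopy.project(obj, theta)        # forward-project to sinograms
rec = tomopy.recon(proj, theta,
                   algorithm=tomopy.astra,
                   options={'method': 'SIRT_CUDA',   # iterative GPU method
                            'proj_type': 'cuda',
                            'num_iter': 150})
print(rec.shape)
```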

  6. Use of Foodomics for Control of Food Processing and Assessing of Food Safety.

    PubMed

    Josić, D; Peršurić, Ž; Rešetar, D; Martinović, T; Saftić, L; Kraljević Pavelić, S

    Food chain, food safety, and food-processing sectors face new challenges due to the globalization of the food chain and changes in modern consumer preferences. In addition, gradually increasing microbial resistance, changes in climate, and human errors in food handling remain a pending barrier for efficient global food safety management. Consequently, the development, validation, and implementation of rapid, sensitive, and accurate methods for assessing food safety, often termed foodomics methods, are required. Even so, the growing role of these high-throughput foodomic methods, based on genomic, transcriptomic, proteomic, and metabolomic techniques, has yet to be fully acknowledged by regulatory agencies and bodies. The sensitivity and accuracy of these methods are superior to previously used standard analytical procedures, and the new methods are suitable to address a number of novel requirements posed by the food production sector and the global food market. © 2017 Elsevier Inc. All rights reserved.

  7. Integration of TomoPy and the ASTRA toolbox for advanced processing and reconstruction of tomographic synchrotron data

    PubMed Central

    Pelt, Daniël M.; Gürsoy, Doǧa; Palenstijn, Willem Jan; Sijbers, Jan; De Carlo, Francesco; Batenburg, Kees Joost

    2016-01-01

    The processing of tomographic synchrotron data requires advanced and efficient software to be able to produce accurate results in reasonable time. In this paper, the integration of two software toolboxes, TomoPy and the ASTRA toolbox, which, together, provide a powerful framework for processing tomographic data, is presented. The integration combines the advantages of both toolboxes, such as the user-friendliness and CPU-efficient methods of TomoPy and the flexibility and optimized GPU-based reconstruction methods of the ASTRA toolbox. It is shown that both toolboxes can be easily installed and used together, requiring only minor changes to existing TomoPy scripts. Furthermore, it is shown that the efficient GPU-based reconstruction methods of the ASTRA toolbox can significantly decrease the time needed to reconstruct large datasets, and that advanced reconstruction methods can improve reconstruction quality compared with TomoPy’s standard reconstruction method. PMID:27140167

  8. Supercomputing Aspects for Simulating Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin C.

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since the space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi-Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on explicit message passing across processors and is primarily suited for distributed memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared memory systems. For the entire turbo-pump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other and provides great flexibility when the boundary movement creates large displacements. Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in the present computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.
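    As a concrete illustration of the projection idea invoked here, the sketch below implements a periodic 2D Chorin projection step with an FFT Poisson solve. It is a minimal stand-in, not the INS3D overset-grid solver; grid size, time step, and viscosity are illustrative.

```python
import numpy as np

# Chorin pressure projection on a periodic 2D domain: (1) predict a velocity
# ignoring pressure, (2) solve a Poisson equation for pressure from the
# predictor's divergence, (3) subtract the pressure gradient to restore a
# divergence-free field.
n, dt, nu = 64, 1e-3, 1e-2
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
u = np.sin(X) * np.cos(Y)                 # Taylor-Green initial velocity
v = -np.cos(X) * np.sin(Y)

k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX**2 + KY**2
K2_safe = K2.copy()
K2_safe[0, 0] = 1.0                       # avoid 0/0 for the mean mode

def d_dx(f, kk):                          # spectral first derivative
    return np.real(np.fft.ifft2(1j * kk * np.fft.fft2(f)))

def lap(f):                               # spectral Laplacian
    return np.real(np.fft.ifft2(-K2 * np.fft.fft2(f)))

for _ in range(200):
    # 1) predictor: explicit advection + diffusion, no pressure
    us = u + dt * (-u * d_dx(u, KX) - v * d_dx(u, KY) + nu * lap(u))
    vs = v + dt * (-u * d_dx(v, KX) - v * d_dx(v, KY) + nu * lap(v))
    # 2) pressure Poisson equation: lap(p) = div(u*) / dt
    div_hat = np.fft.fft2(d_dx(us, KX) + d_dx(vs, KY))
    p_hat = -div_hat / (dt * K2_safe)
    p_hat[0, 0] = 0.0                     # pressure defined up to a constant
    p = np.real(np.fft.ifft2(p_hat))
    # 3) corrector: subtract the pressure gradient -> divergence-free field
    u = us - dt * d_dx(p, KX)
    v = vs - dt * d_dx(p, KY)

print(np.abs(d_dx(u, KX) + d_dx(v, KY)).max())   # divergence ~ machine noise
```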

  9. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
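    A hedged sketch of the matching criterion underlying DVC follows: the zero-normalized cross-correlation between a reference subvolume and a trial deformed subvolume (1.0 = perfect match, insensitive to brightness and contrast offsets). In the paper, the IC-GN refinement then polishes the best integer-voxel offset; the volumes below are synthetic.

```python
import numpy as np

# Zero-normalized cross-correlation between two equally sized subvolumes.
def zncc(ref, trial):
    a = ref - ref.mean()
    b = trial - trial.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
vol = rng.random((40, 40, 40))
ref = vol[5:15, 5:15, 5:15]
print(zncc(ref, vol[6:16, 5:15, 5:15]))   # wrong offset: low correlation
print(zncc(ref, vol[5:15, 5:15, 5:15]))   # correct offset: 1.0
```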

  10. Single Locked Nucleic Acid-Enhanced Nanopore Genetic Discrimination of Pathogenic Serotypes and Cancer Driver Mutations.

    PubMed

    Tian, Kai; Chen, Xiaowei; Luan, Binquan; Singh, Prashant; Yang, Zhiyu; Gates, Kent S; Lin, Mengshi; Mustapha, Azlin; Gu, Li-Qun

    2018-05-22

    Accurate and rapid detection of single-nucleotide polymorphism (SNP) in pathogenic mutants is crucial for many fields such as food safety regulation and disease diagnostics. Current detection methods involve laborious sample preparations and expensive characterizations. Here, we investigated a single locked nucleic acid (LNA) approach, facilitated by a nanopore single-molecule sensor, to accurately determine SNPs for detection of Shiga toxin producing Escherichia coli (STEC) serotype O157:H7, and cancer-derived EGFR L858R and KRAS G12D driver mutations. Current LNA applications require incorporation and optimization of multiple LNA nucleotides, but we found that in the nanopore system a single LNA introduced in the probe is sufficient to enhance the SNP discrimination capability by over 10-fold, allowing accurate detection of the pathogenic mutant DNA mixed in a large amount of the wild-type DNA. Importantly, the molecular mechanistic study suggests that such a significant improvement is due to the effect of the single LNA that both stabilizes the fully matched base-pair and destabilizes the mismatched base-pair. This sensitive method, with a simplified, low cost, easy-to-operate LNA design, could be generalized for various applications that need rapid and accurate identification of single-nucleotide variations.

  11. A macro-micro robot for precise force applications

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Wang, Yulun

    1993-01-01

    This paper describes an 8 degree-of-freedom macro-micro robot capable of performing tasks which require accurate force control. Applications such as polishing, finishing, grinding, deburring, and cleaning are a few examples of tasks which need this capability. Currently these tasks are either performed manually or with dedicated machinery because of the lack of a flexible and cost effective tool, such as a programmable force-controlled robot. The basic design and control of the macro-micro robot is described in this paper. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8 degree of freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load balanced for maximum execution speed on the multiprocessor system.
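    The control law family named above admits a small sketch. This is a hedged, generic impedance-control relation, not the authors' controller: the tool tip is made to behave like a mass-spring-damper about the commanded trajectory, so contact force is shaped by the choice of (M, B, K) gains rather than by stiff position tracking. All values are illustrative.

```python
# Impedance force law: F = M*e'' + B*e' + K*e, where e = x - x_desired is the
# tip position error. Choosing softer (B, K) yields gentler contact forces.
def impedance_force(e, e_dot, e_ddot, M=1.0, B=40.0, K=400.0):
    return M * e_ddot + B * e_dot + K * e

print(impedance_force(e=0.002, e_dot=0.01, e_ddot=0.0))  # 1.2 N command
```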

  12. Determination of left ventricular volume, ejection fraction, and myocardial mass by real-time three-dimensional echocardiography

    NASA Technical Reports Server (NTRS)

    Qin, J. X.; Shiota, T.; Thomas, J. D.

    2000-01-01

    Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.

  13. Time-Spectral Rotorcraft Simulations on Overset Grids

    NASA Technical Reports Server (NTRS)

    Leffell, Joshua I.; Murman, Scott M.; Pulliam, Thomas H.

    2014-01-01

    The Time-Spectral method is derived as a Fourier collocation scheme and applied to NASA's overset Reynolds-averaged Navier-Stokes (RANS) solver OVERFLOW. The paper outlines the Time-Spectral OVERFLOW implementation. Successful low-speed laminar plunging NACA 0012 airfoil simulations demonstrate the capability of the Time-Spectral method to resolve the highly-vortical wakes typical of more expensive three-dimensional rotorcraft configurations. Dealiasing, in the form of spectral vanishing viscosity (SVV), facilitates the convergence of Time-Spectral calculations of high-frequency flows. Finally, simulations of the isolated V-22 Osprey tiltrotor for both hover and forward (edgewise) flight validate the three-dimensional Time-Spectral OVERFLOW implementation. The Time-Spectral hover simulation matches the time-accurate calculation using a single harmonic. Significantly more temporal modes and SVV are required to accurately compute the forward flight case because of its more active, high-frequency wake.
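    The operation at the heart of the Time-Spectral method can be shown in a few lines: for N samples spanning one period, the time derivative is evaluated by the Fourier collocation (FFT) route, which is exact for resolved harmonics. A minimal sketch, not the OVERFLOW implementation:

```python
import numpy as np

# Fourier-collocation time derivative over one period T: differentiate in
# frequency space, where each mode k picks up a factor of i*omega_k.
def spectral_dt(u, T):
    N = u.shape[0]
    omega = 2.0 * np.pi * np.fft.fftfreq(N, d=T / N)   # angular frequencies
    return np.real(np.fft.ifft(1j * omega * np.fft.fft(u)))

T = 2.0 * np.pi
t = np.linspace(0.0, T, 8, endpoint=False)
print(np.allclose(spectral_dt(np.sin(t), T), np.cos(t)))   # True
```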

  14. Determination of left ventricular volume, ejection fraction, and myocardial mass by real-time three-dimensional echocardiography.

    PubMed

    Qin, J X; Shiota, T; Thomas, J D

    2000-11-01

    Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.

  15. MLFMA-accelerated Nyström method for ultrasonic scattering - Numerical results and experimental validation

    NASA Astrophysics Data System (ADS)

    Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron

    2018-04-01

    Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirement. Particularly, we demonstrate the need for higher order geometry and field approximation in modeling NDE measurements. Also, we illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.

  16. Elementary solutions of coupled model equations in the kinetic theory of gases

    NASA Technical Reports Server (NTRS)

    Kriese, J. T.; Siewert, C. E.; Chang, T. S.

    1974-01-01

    The method of elementary solutions is employed to solve two coupled integrodifferential equations sufficient for determining temperature-density effects in a linearized BGK model in the kinetic theory of gases. Full-range completeness and orthogonality theorems are proved for the developed normal modes and the infinite-medium Green's function is constructed as an illustration of the full-range formalism. The appropriate homogeneous matrix Riemann problem is discussed, and half-range completeness and orthogonality theorems are proved for a certain subset of the normal modes. The required existence and uniqueness theorems relevant to the H matrix, basic to the half-range analysis, are proved, and an accurate and efficient computational method is discussed. The half-space temperature-slip problem is solved analytically, and a highly accurate value of the temperature-slip coefficient is reported.

  17. A time-lapse photography method for monitoring salmon (Oncorhynchus spp.) passage and abundance in streams

    PubMed Central

    Leacock, William B.; Eby, Lisa A.; Stanford, Jack A.

    2016-01-01

    Accurately estimating population sizes is often a critical component of fisheries research and management. Although there is a growing appreciation of the importance of small-scale salmon population dynamics to the stability of salmon stock-complexes, our understanding of these populations is constrained by a lack of efficient and cost-effective monitoring tools for streams. Weirs are expensive, labor intensive, and can disrupt natural fish movements. While conventional video systems avoid some of these shortcomings, they are expensive and require excessive amounts of labor to review footage for data collection. Here, we present a novel method for quantifying salmon in small streams (<15 m wide, <1 m deep) that uses both time-lapse photography and video in a model-based double sampling scheme. This method produces an escapement estimate nearly as accurate as a video-only approach, but with substantially less labor, money, and effort. It requires servicing only every 14 days, detects salmon 24 h/day, is inexpensive, and produces escapement estimates with confidence intervals. In addition to escapement estimation, we present a method for estimating in-stream salmon abundance across time, data needed by researchers interested in predator-prey interactions or nutrient subsidies. We combined daily salmon passage estimates with stream specific estimates of daily mortality developed using previously published data. To demonstrate proof of concept for these methods, we present results from two streams in southwest Kodiak Island, Alaska in which high densities of sockeye salmon spawn. PMID:27326378

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Degenhardt, R.; PFH, Private University of Applied Sciences Goettingen, Composite Engineering Campus Stade; Araujo, F. C. de

    European aircraft industry demands for reduced development and operating costs. Structural weight reduction by exploitation of structural reserves in composite aerospace structures contributes to this aim, however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents different advances from the area of computational stability analysis of composite aerospace structures which contribute to that field. For stringer stiffened panels main results of the finished EU project COCOMAT are given. It investigated the exploitation of reserves in primary fibre composite fuselage structures through an accurate and reliable simulation of postbuckling and collapse. For unstiffened cylindrical composite shells, a proposal for a new design method is presented.

  19. Bayesian approach to analyzing holograms of colloidal particles.

    PubMed

    Dimiduk, Thomas G; Manoharan, Vinothan N

    2016-10-17

    We demonstrate a Bayesian approach to tracking and characterizing colloidal particles from in-line digital holograms. We model the formation of the hologram using Lorenz-Mie theory. We then use a tempered Markov-chain Monte Carlo method to sample the posterior probability distributions of the model parameters: particle position, size, and refractive index. Compared to least-squares fitting, our approach allows us to more easily incorporate prior information about the parameters and to obtain more accurate uncertainties, which are critical for both particle tracking and characterization experiments. Our approach also eliminates the need to supply accurate initial guesses for the parameters, so it requires little tuning.
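    The sampling machinery can be illustrated with a toy model. The sketch below is a generic Metropolis random walk on a one-parameter stand-in likelihood, not the paper's tempered sampler or its Lorenz-Mie forward model; the noise level and step size are illustrative assumptions.

```python
import numpy as np

# Metropolis MCMC on a toy posterior: a flat prior and a Gaussian likelihood
# around a stand-in forward model. The chain's spread gives the parameter
# uncertainty directly, with no initial guess or least-squares fit required.
def log_post(theta, data, sigma=0.05):
    return -0.5 * np.sum((data - np.sin(theta))**2) / sigma**2

rng = np.random.default_rng(1)
data = np.sin(0.7) + rng.normal(0.0, 0.05)      # synthetic observation
theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.1)         # random-walk proposal
    if np.log(rng.random()) < log_post(prop, data) - log_post(theta, data):
        theta = prop
    chain.append(theta)
burned = np.array(chain[1000:])
print(burned.mean(), burned.std())              # posterior mean and width
```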

  20. Improved method for rapid and accurate isolation and identification of Streptococcus mutans and Streptococcus sobrinus from human plaque samples.

    PubMed

    Villhauer, Alissa L; Lynch, David J; Drake, David R

    2017-08-01

    Mutans streptococci (MS), specifically Streptococcus mutans (SM) and Streptococcus sobrinus (SS), are bacterial species frequently targeted for investigation due to their role in the etiology of dental caries. Differentiation of S. mutans and S. sobrinus is an essential part of exploring the role of these organisms in disease progression and the impact of the presence of either/both on a subject's caries experience. Of vital importance to the study of these organisms is an identification protocol that allows us to distinguish between the two species in an easy, accurate, and timely manner. While conducting a 5-year birth cohort study in a Northern Plains American Indian tribe, the need for a more rapid procedure for isolating and identifying high volumes of MS was recognized. We report here on the development of an accurate and rapid method for MS identification. Accuracy, ease of use, and material and time requirements for morphological differentiation on selective agar, biochemical tests, and various combinations of PCR primers were compared. The final protocol included preliminary identification based on colony morphology followed by PCR confirmation of species identification using primers targeting regions of the glucosyltransferase (gtf) genes of SM and SS. This method of isolation and identification was found to be highly accurate, more rapid than the previous methodology used, and easily learned. It resulted in more efficient use of both time and material resources. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Monitoring of formaldehyde in air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balmat, J.L.; Meadows, G.W.

    1985-10-01

    Any one of several monitoring methods, depending on requirement and circumstance, can be used to measure employee exposure to formaldehyde. Ordinarily, monitoring at DuPont is performed by sampling with impingers containing 1% aqueous sodium bisulfite or with silica gel tubes. The collected formaldehyde is measured spectrophotometrically after reaction with chromotropic acid. Results from studies on a selected number of formaldehyde monitoring methods reveal that reliable methods are available for area and personnel monitoring over both short term and long term. Accurate results are obtained from short-term monitoring (15 min at 1 L/min) with impingers of formaldehyde concentrations as low as 0.14 ppm. The current studies show that long-term monitoring (8 hr at 0.5 L/min) can be performed accurately at concentrations as low as 0.05 ppm. Accurate results also are obtained from short-term monitoring (15 min at 500 mL/min) with silica gel tubes of concentrations as low as 0.11 ppm formaldehyde. Passive monitors provide the most convenient means of obtaining 8-hour time-weighted average (TWA) data. The Pro-Tek Formaldehyde Badge was demonstrated to reliably monitor formaldehyde concentrations varying from 0-0.5 ppm or 0-3 ppm. Investigation of the Lion Formaldemeter disclosed that instantaneous and accurate (+/- 5%) measurement of formaldehyde in air can be made over a concentration range of 0.3-5 ppm in the absence of other substances that are oxidizable in its fuel cell detector.

  2. Development of Dimensionless Surge Response Functions for Hazard Assessment at Panama City, Florida

    NASA Astrophysics Data System (ADS)

    Taylor, N. R.; Irish, J. L.; Hagen, S. C.; Kaihatu, J. M.; McLaughlin, P. W.

    2013-12-01

    Reliable and robust methods of extreme value analysis in hurricane surge forecasting are of high importance in the coastal engineering profession. The Joint Probability Method (JPM) has become the preferred statistical method over the Historical Surge Population (HSP) method, due to its ability to give more accurate surge predictions, as demonstrated by Irish et al. in 2011 (J. Geophys. Res.). One disadvantage to this method is its high computational cost; a single location can require hundreds of simulated storms, each needing one thousand computational hours or more to complete. One way of overcoming this issue is to use an interpolating function, called a surge response function, to reduce the required number of simulations to a manageable number. These sampling methods, which use physical scaling laws, have been shown to significantly reduce the number of simulated storms needed for application of the JPM method. In 2008, Irish et al. (J. Phys. Oceanogr.) demonstrated that hurricane surge scales primarily as a function of storm size and intensity. Additionally, Song et al. in 2012 (Nat. Hazards) showed that surge response functions incorporating bathymetric variations yield highly accurate surge estimates along the Texas coastline. This study applies the Song et al. model to 73 stations along the open coast, and 273 stations within the bays, in Panama City, Florida. The model performs well for the open coast and bay areas; surge levels at most stations along the open coast were predicted with RMS errors below 0.40 meters, and R2 values at or above 0.80. The R2 values for surge response functions within bays were consistently at or above 0.75. Surge levels at most stations within the North Bay and East Bay were predicted with RMS errors below 0.40 meters; within the West Bay, surge was predicted with RMS errors below 0.52 meters. Accurately interpolating surge values along the Panama City coast and bays enables efficient use of the JPM model in order to develop reliable probabilistic surge estimates for use in planning and design for hurricane mitigation.

  3. Fast and high-order numerical algorithms for the solution of multidimensional nonlinear fractional Ginzburg-Landau equation

    NASA Astrophysics Data System (ADS)

    Mohebbi, Akbar

    2018-02-01

    In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For a homogeneous FGLE, we propose a method which has fourth-order accuracy in the time component and spectral accuracy in the space variable, and for the nonhomogeneous one, we introduce another scheme based on the Crank-Nicolson approach which has second-order accuracy in the time variable. Due to the use of the Fourier spectral method for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with the analytical solutions. The results show that the present methods are accurate and require low CPU time. It is illustrated that the numerical results are in good agreement with the theoretical ones.
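    To make the split-step idea concrete, the 1D sketch below evolves an assumed FGLE form u_t = u - (1+ia)(-Lap)^(alpha/2) u - (1+ib)|u|^2 u; the equation form and coefficients are illustrative, not taken from the paper. The fractional-linear step is exact in Fourier space and the cubic step has a closed-form pointwise solution, so no nonlinear algebraic system is ever solved.

```python
import numpy as np

# Split-step Fourier sketch for an assumed fractional Ginzburg-Landau form:
# the fractional Laplacian acts as the multiplier |k|^alpha in Fourier space,
# and the cubic part u' = -(1+ib)|u|^2 u is integrated exactly per grid point.
n, L, dt, alpha, a, b = 256, 20.0 * np.pi, 1e-2, 1.5, 0.5, 1.0
x = np.linspace(0.0, L, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
lin = np.exp(dt * (1.0 - (1.0 + 1j * a) * np.abs(k)**alpha))  # linear multiplier

u = np.exp(-(x - L / 2)**2) * np.exp(1j * 0.5 * x)            # initial pulse
for _ in range(500):
    u = np.fft.ifft(lin * np.fft.fft(u))                      # fractional-linear step
    r2 = np.abs(u)**2                                         # exact cubic step:
    u *= np.exp(-0.5j * b * np.log1p(2.0 * r2 * dt)) / np.sqrt(1.0 + 2.0 * r2 * dt)

print(np.max(np.abs(u)))   # bounded amplitude: the cubic term saturates growth
```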

  4. Prediction of Stereochemistry using Q2MM

    PubMed Central

    2016-01-01

    Conspectus: The standard method of screening ligands for selectivity in asymmetric, transition metal-catalyzed reactions requires experimental testing of hundreds of ligands from ligand libraries. This “trial and error” process is costly in terms of time as well as resources and, in general, is scientifically and intellectually unsatisfying as it reveals little about the underlying mechanism behind the selectivity. The accurate computational prediction of stereoselectivity in enantioselective catalysis requires adequate conformational sampling of the selectivity-determining transition state but has to be fast enough to compete with experimental screening techniques to be useful for the synthetic chemist. Although electronic structure calculations are accurate and general, they are too slow to allow for sampling or fast screening of ligand libraries. The combined requirements can be fulfilled by using appropriately fitted transition state force fields (TSFFs) that represent the transition state as a minimum and allow fast conformational sampling using Monte Carlo. Quantum-guided molecular mechanics (Q2MM) is an automated force field parametrization method that generates accurate, reaction-specific TSFFs by fitting the functional form of an arbitrary force field to electronic structure calculations through minimization of an objective function. A key feature that distinguishes the Q2MM method from many other automated parametrization procedures is the use of the Hessian matrix in addition to geometric parameters and relative energies. This alleviates the known problems of overfitting of TSFFs. After validation of the TSFF by comparison to electronic structure results for a test set and available experimental data, the stereoselectivity of a reaction can be calculated by summation over the Boltzmann-averaged relative energies of the conformations leading to the different stereoisomers. The Q2MM method has been applied successfully to perform virtual ligand screens on a range of transition metal-catalyzed reactions that are important from both an industrial and an academic perspective. In this Account, we provide an overview of the continued improvement of the prediction of stereochemistry using Q2MM-derived TSFFs using four examples from different stages of development: (i) Pd-catalyzed allylation, (ii) OsO4-catalyzed asymmetric dihydroxylation (AD) of alkenes, (iii) Rh-catalyzed hydrogenation of enamides, and (iv) Ru-catalyzed hydrogenation of ketones. In the current form, correlation coefficients of 0.8–0.9 between calculated and experimental ee values are typical for a wide range of substrate–ligand combinations, and suitable ligands can be predicted for a given substrate with ∼80% accuracy. Although the generation of a TSFF requires an initial effort and will therefore be most useful for widely used reactions that require frequent screening campaigns, the method allows for a rapid virtual screen of large ligand libraries to focus experimental efforts on the most promising substrate–ligand combinations. PMID:27064579
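    The final selectivity step lends itself to a small worked sketch: Boltzmann-weight the conformer energies leading to each stereoisomer and convert the populations to an enantiomeric excess. The energies below are illustrative, not from any Q2MM screen.

```python
import numpy as np

# ee from Boltzmann-weighted conformer energies (kcal/mol) leading to the
# R and S transition states: ee = (w_R - w_S) / (w_R + w_S).
def ee_from_energies(E_R, E_S, T=298.15):
    kT = 0.0019872041 * T                        # Boltzmann constant, kcal/mol/K
    wR = np.exp(-np.asarray(E_R) / kT).sum()
    wS = np.exp(-np.asarray(E_S) / kT).sum()
    return (wR - wS) / (wR + wS)

print(ee_from_energies([0.0, 0.4], [1.2, 1.5]))  # ~0.75 -> 75% ee favoring R
```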

  5. Dynamic earthquake rupture simulation on nonplanar faults embedded in 3D geometrically complex, heterogeneous Earth models

    NASA Astrophysics Data System (ADS)

    Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.

    2014-12-01

    Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a numerical method for: (a) enforcing nonlinear friction laws in a consistent and provably stable manner, suitable for efficient explicit time integration; (b) dynamic propagation of earthquake ruptures along rough faults; and (c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th order accurate in the interior and 3rd order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. Time stepping is performed with a 4th order accurate explicit low-storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.

  6. Three dimensional scattering center imaging techniques

    NASA Technical Reports Server (NTRS)

    Younger, P. R.; Burnside, W. D.

    1991-01-01

    Two methods to image scattering centers in 3-D are presented. The first method uses 2-D images generated from Inverse Synthetic Aperture Radar (ISAR) measurements taken by two vertically offset antennas. This technique is shown to provide accurate 3-D imaging capability which can be added to an existing ISAR measurement system, requiring only the addition of a second antenna. The second technique uses target impulse responses generated from wideband radar measurements from three slightly different offset antennas. This technique is shown to identify the dominant scattering centers on a target in nearly real time. The number of measurements required to image a target using this technique is very small relative to traditional imaging techniques.

  7. Rapid quantification of vesicle concentration for DOPG/DOPC and Cardiolipin/DOPC mixed lipid systems of variable composition.

    PubMed

    Elmer-Dixon, Margaret M; Bowler, Bruce E

    2018-05-19

    A novel approach to quantify mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time-consuming, require large amounts of material, and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method requires only a UV-Vis spectrometer and does not destroy the sample. Mie scattering data from absorbance measurements are used as input into a Matlab program to calculate the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments. Copyright © 2018. Published by Elsevier Inc.

  8. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  9. Aerodynamic laser-heated contactless furnace for neutron scattering experiments at elevated temperatures

    NASA Astrophysics Data System (ADS)

    Landron, Claude; Hennet, Louis; Coutures, Jean-Pierre; Jenkins, Tudor; Alétru, Chantal; Greaves, Neville; Soper, Alan; Derbyshire, Gareth

    2000-04-01

    Conventional radiative furnaces require sample containment that encourages contamination at elevated temperatures, and they generally need windows which restrict the entrance and exit solid angles required for diffraction and scattering measurements. We describe a contactless, windowless furnace based on aerodynamic levitation and laser heating which has been designed for high temperature neutron scattering experiments. Data from initial experiments are reported for crystalline and amorphous oxides at temperatures up to 1900 °C, using the spallation neutron source ISIS together with our laser-heated aerodynamic levitator. Accurate reproduction of thermal expansion coefficients and radial distribution functions has been obtained, demonstrating the utility of aerodynamic levitation for neutron scattering measurements.

  10. A Monte Carlo technique for signal level detection in implanted intracranial pressure monitoring.

    PubMed

    Avent, R K; Charlton, J D; Nagle, H T; Johnson, R N

    1987-01-01

    Statistical monitoring techniques like CUSUM, Trigg's tracking signal and EMP filtering have a major advantage over more recent techniques, such as Kalman filtering, because of their inherent simplicity. In many biomedical applications, such as electronic implantable devices, these simpler techniques have greater utility because of the reduced requirements on power, logic complexity and sampling speed. The determination of signal means using some of the earlier techniques is reviewed in this paper, and a new Monte Carlo based method with greater capability to sparsely sample a waveform and obtain an accurate mean value is presented. This technique may find widespread use as a trend detection method when reduced power consumption is a requirement.
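
    A minimal illustration of the sparse-sampling idea (not the paper's algorithm), with an entirely synthetic pressure-like waveform: a mean level estimated from a small random subsample tracks the full-sampling mean at a fraction of the sampling cost.

      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 60.0, 60_000)            # 60 s at 1 kHz (hypothetical)
      icp = 12 + 2 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.5, t.size)

      idx = rng.choice(t.size, size=200, replace=False)   # sparse random sample
      print(icp.mean(), icp[idx].mean())            # full mean vs sparse estimate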

  11. SLTCAP: A Simple Method for Calculating the Number of Ions Needed for MD Simulation.

    PubMed

    Schmit, Jeremy D; Kariyawasam, Nilusha L; Needham, Vince; Smith, Paul E

    2018-04-10

    An accurate depiction of electrostatic interactions in molecular dynamics requires the correct number of ions in the simulation box to capture screening effects. However, the number of ions that should be added to the box is seldom given by the bulk salt concentration because a charged biomolecule solute will perturb the local solvent environment. We present a simple method for calculating the number of ions that requires only the total solute charge, solvent volume, and bulk salt concentration as inputs. We show that the most commonly used method for adding salt to a simulation results in an effective salt concentration that is too high. These findings are confirmed using simulations of lysozyme. We have established a web server where these calculations can be readily performed to aid simulation setup.
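
    A sketch of this style of calculation in Python, assuming the screening relation N± = N0 exp(∓ asinh(Q / 2N0)), where N0 = c0·V·N_A is the number of ion pairs the solvent volume would hold at the bulk concentration and Q is the total solute charge in units of e; consult the paper or its web server for the exact published expressions, and note that the example numbers below are illustrative only.

      import numpy as np

      N_A = 6.02214076e23

      def ion_counts(solute_charge, solvent_volume_L, bulk_conc_M):
          n0 = bulk_conc_M * solvent_volume_L * N_A
          phi = np.arcsinh(solute_charge / (2.0 * n0))
          n_cations = n0 * np.exp(-phi)   # a positive solute needs fewer cations
          n_anions = n0 * np.exp(+phi)    # ... and more anions (difference = Q)
          return round(n_cations), round(n_anions)

      # e.g., a +8 e solute in ~2e-22 L of solvent at 150 mM salt (made-up numbers)
      print(ion_counts(+8, 2.0e-22, 0.150))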

  12. Non-steady state modelling of wheel-rail contact problem

    NASA Astrophysics Data System (ADS)

    Guiral, A.; Alonso, A.; Baeza, L.; Giménez, J. G.

    2013-01-01

    Among all the algorithms to solve the wheel-rail contact problem, Kalker's FastSim has become the most useful computation tool since it combines a low computational cost and enough precision for most of the typical railway dynamics problems. However, some types of dynamic problems require the use of a non-steady state analysis. Alonso and Giménez developed a non-stationary method based on FastSim, which provides both sufficiently accurate results and a low computational cost. However, it presents some limitations: the method is developed for one time-dependent creepage, and its accuracy for varying normal forces has not been checked. This article presents the changes required to deal with both problems and compares the results with those given by Kalker's Variational Method for rolling contact.

  13. A New Calibration Method for Commercial RGB-D Sensors.

    PubMed

    Darwish, Walid; Tang, Shenjun; Li, Wenbin; Chen, Wu

    2017-05-24

    Commercial RGB-D sensors such as Kinect and Structure Sensors have been widely used in the game industry, where geometric fidelity is not of utmost importance. For applications in which high quality 3D is required, i.e., 3D building models of centimeter-level accuracy, accurate and reliable calibrations of these sensors are required. This paper presents a new model for calibrating the depth measurements of RGB-D sensors based on the structured light concept. Additionally, a new automatic method is proposed for the calibration of all RGB-D parameters, including internal calibration parameters for all cameras, the baseline between the infrared and RGB cameras, and the depth error model. When compared with traditional calibration methods, this new model shows a significant improvement in depth precision for both near and far ranges.

  14. Determination of Carbonyl Groups in Pyrolysis Bio-oils Using Potentiometric Titration: Review and Comparison of Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Black, Stuart; Ferrell, Jack R.

    Carbonyl compounds present in bio-oils are known to be responsible for bio-oil property changes upon storage and during upgrading. As such, carbonyl content has previously been used as a method of tracking bio-oil aging and condensation reactions with less variability than viscosity measurements. Given the importance of carbonyls in bio-oils, accurate analytical methods for their quantification are very important for the bio-oil community. Potentiometric titration methods based on carbonyl oximation have long been used for the determination of carbonyl content in pyrolysis bio-oils. In this study, we present a modification of the traditional carbonyl oximation procedures that results in less reaction time, smaller sample size, higher precision, and more accurate carbonyl determinations. Some compounds such as carbohydrates are not measured by the traditional method (modified Nicolaides method), resulting in low estimations of the carbonyl content. Furthermore, we have shown that reaction completion for the traditional method can take up to 300 hours. The new method presented here (the modified Faix method) reduces the reaction time to 2 hours, uses triethanolamine (TEA) in place of pyridine, and requires a smaller sample size for the analysis. Carbonyl contents determined using this new method are consistently higher than when using the traditional titration methods.

  15. Assessing and comparison of different machine learning methods in parent-offspring trios for genotype imputation.

    PubMed

    Mikhchi, Abbas; Honarvar, Mahmood; Kashan, Nasser Emam Jomeh; Aminafshar, Mehdi

    2016-06-21

    Genotype imputation is an important tool for prediction of unknown genotypes for both unrelated individuals and parent-offspring trios. Several imputation methods are available and can either employ universal machine learning methods, or deploy algorithms dedicated to inferring missing genotypes. In this research the performance of eight machine learning methods (Support Vector Machine, K-Nearest Neighbors, Extreme Learning Machine, Radial Basis Function, Random Forest, AdaBoost, LogitBoost, and TotalBoost) was compared in terms of imputation accuracy, computation time and the factors affecting imputation accuracy. The methods were evaluated using real and simulated datasets to impute the un-typed SNPs in parent-offspring trios. The results show that imputation of parent-offspring trios can be accurate. The Random Forest and Support Vector Machine were more accurate than the other machine learning methods, while TotalBoost performed slightly worse than the others. The running times differed between methods: the Extreme Learning Machine was consistently the fastest algorithm, whereas the Radial Basis Function required long imputation times as the sample size increased. The tested methods can be an alternative for imputation of un-typed SNPs at low rates of missing data. However, it is recommended that other machine learning methods also be evaluated for imputation. Copyright © 2016 Elsevier Ltd. All rights reserved.
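
    For readers who want to reproduce this kind of comparison, the sketch below contrasts two of the studied classifiers (Random Forest and Support Vector Machine) with scikit-learn, treating an un-typed SNP as a class label predicted from typed SNPs; the synthetic data and target are hypothetical stand-ins for the authors' trio datasets.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(1)
      X = rng.integers(0, 3, size=(2000, 50)).astype(float)        # typed SNPs (0/1/2)
      y = (X[:, 24] + X[:, 25] > 2).astype(int) + (X[:, 26] > 1)   # hypothetical un-typed SNP

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      for model in (RandomForestClassifier(n_estimators=200, random_state=0),
                    SVC(kernel="rbf", C=1.0)):
          model.fit(X_tr, y_tr)
          print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))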

  16. Determination of Carbonyl Groups in Pyrolysis Bio-oils Using Potentiometric Titration: Review and Comparison of Methods

    DOE PAGES

    Black, Stuart; Ferrell, Jack R.

    2016-01-06

    Carbonyl compounds present in bio-oils are known to be responsible for bio-oil property changes upon storage and during upgrading. As such, carbonyl content has previously been used as a method of tracking bio-oil aging and condensation reactions with less variability than viscosity measurements. Given the importance of carbonyls in bio-oils, accurate analytical methods for their quantification are very important for the bio-oil community. Potentiometric titration methods based on carbonyl oximation have long been used for the determination of carbonyl content in pyrolysis bio-oils. In this study, we present a modification of the traditional carbonyl oximation procedures that results in less reaction time, smaller sample size, higher precision, and more accurate carbonyl determinations. Some compounds such as carbohydrates are not measured by the traditional method (modified Nicolaides method), resulting in low estimations of the carbonyl content. Furthermore, we have shown that reaction completion for the traditional method can take up to 300 hours. The new method presented here (the modified Faix method) reduces the reaction time to 2 hours, uses triethanolamine (TEA) in place of pyridine, and requires a smaller sample size for the analysis. Carbonyl contents determined using this new method are consistently higher than when using the traditional titration methods.

  17. A new implementation of the CMRH method for solving dense linear systems

    NASA Astrophysics Data System (ADS)

    Heyouni, M.; Sadok, H.

    2008-04-01

    The CMRH method [H. Sadok, Methodes de projections pour les systemes lineaires et non lineaires, Habilitation thesis, University of Lille 1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only long-term-recurrence method that does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.
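
    The Hessenberg process that replaces the Arnoldi step can be sketched in a few lines of Python (no pivoting, for clarity; practical CMRH implementations pivot for numerical stability):

      import numpy as np

      def hessenberg_process(A, b, m):
          # Builds L (n x (m+1)) and Hessenberg Hm ((m+1) x m) with A @ L[:, :m] = L @ Hm.
          # L[:, k] has zeros in rows 0..k-1 and a 1 in row k, so each new vector is
          # orthogonal to the standard unit vectors rather than to the other vectors.
          n = b.size
          L = np.zeros((n, m + 1))
          Hm = np.zeros((m + 1, m))
          L[:, 0] = b / b[0]                  # assumes b[0] != 0 (pivoting avoids this)
          for k in range(m):
              u = A @ L[:, k]
              for j in range(k + 1):
                  Hm[j, k] = u[j]             # eliminate row j using L[:, j]
                  u -= Hm[j, k] * L[:, j]
              Hm[k + 1, k] = u[k + 1]
              L[:, k + 1] = u / Hm[k + 1, k]
          return L, Hm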

  18. A segmentation method for lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise

    PubMed Central

    Zhang, Wei; Zhang, Xiaolong; Qiang, Yan; Tian, Qi; Tang, Xiaoxian

    2017-01-01

    The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previous research investigating nodule segmentation algorithms cannot entirely segment cavitary nodules, and the segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphological optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. The adaptive weight coefficient is then constructed to calculate the distance required between superpixels to achieve precise lung nodule positioning and to obtain the subsequent clustering starting block. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. Thereafter, a fast DBSCAN superpixel sequence clustering algorithm, optimized by clustering only the lung nodules and using an adaptive threshold, is used to obtain lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences. PMID:28880916
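
    As an illustration of the clustering stage only, the snippet below runs scikit-learn's off-the-shelf DBSCAN on hypothetical superpixel feature vectors (centroid x, y, slice index, mean intensity); the paper's optimized sequence-clustering variant, features and thresholds differ.

      import numpy as np
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(2)
      nodule = rng.normal([40, 60, 5, 0.8], [2, 2, 1, 0.05], size=(80, 4))
      background = rng.uniform([0, 0, 0, 0.0], [128, 128, 10, 0.3], size=(400, 4))
      features = np.vstack([nodule, background])

      labels = DBSCAN(eps=4.0, min_samples=8).fit_predict(features)
      print(np.unique(labels, return_counts=True))    # -1 = noise, >= 0 = clusters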

  19. Time-frequency analysis of band-limited EEG with BMFLC and Kalman filter for BCI applications

    PubMed Central

    2013-01-01

    Background: Time-frequency analysis of electroencephalogram (EEG) during different mental tasks received significant attention. As EEG is non-stationary, time-frequency analysis is essential to analyze brain states during different mental tasks. Further, the time-frequency information of EEG signal can be used as a feature for classification in brain-computer interface (BCI) applications. Methods: To accurately model the EEG, band-limited multiple Fourier linear combiner (BMFLC), a linear combination of truncated multiple Fourier series models, is employed. A state-space model for BMFLC in combination with Kalman filter/smoother is developed to obtain accurate adaptive estimation. By virtue of construction, BMFLC with Kalman filter/smoother provides accurate time-frequency decomposition of the band-limited signal. Results: The proposed method is computationally fast and is suitable for real-time BCI applications. To evaluate the proposed algorithm, a comparison with short-time Fourier transform (STFT) and continuous wavelet transform (CWT) for both synthesized and real EEG data is performed in this paper. The proposed method is applied to BCI Competition data IV for ERD detection in comparison with existing methods. Conclusions: Results show that the proposed algorithm can provide optimal time-frequency resolution as compared to STFT and CWT. For ERD detection, BMFLC-KF outperforms STFT and BMFLC-KS in real-time applicability with low computational requirement. PMID:24274109
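
    A minimal BMFLC-plus-Kalman-filter sketch, under our reading of the model: the band-limited signal is written as a truncated Fourier series on a fixed frequency grid, y_k = sum_i a_i sin(w_i t_k) + b_i cos(w_i t_k), and the Fourier weights are tracked as a random-walk state by a scalar-measurement Kalman filter. The band edges, grid spacing and noise levels below are hypothetical.

      import numpy as np

      fs, f_lo, f_hi, df = 256.0, 8.0, 12.0, 0.5        # alpha band on a 0.5 Hz grid
      freqs = np.arange(f_lo, f_hi + df, df)
      m = freqs.size

      q, r = 1e-4, 1e-1                                 # process / measurement noise
      x = np.zeros(2 * m)                               # state: [a_1..a_m, b_1..b_m]
      P = np.eye(2 * m)

      t = np.arange(0.0, 2.0, 1.0 / fs)
      y = np.sin(2 * np.pi * 10.0 * t) + 0.1 * np.random.randn(t.size)  # test signal

      amp = np.zeros((t.size, m))                       # time-frequency amplitudes
      for k, tk in enumerate(t):
          Hk = np.concatenate([np.sin(2 * np.pi * freqs * tk),
                               np.cos(2 * np.pi * freqs * tk)])
          P = P + q * np.eye(2 * m)                     # predict (random-walk state)
          S = Hk @ P @ Hk + r                           # innovation variance
          K = (P @ Hk) / S                              # Kalman gain
          x = x + K * (y[k] - Hk @ x)                   # measurement update
          P = P - np.outer(K, Hk) @ P
          amp[k] = np.hypot(x[:m], x[m:])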

  20. Importance and globalization status of good manufacturing practice (GMP) requirements for pharmaceutical excipients

    PubMed Central

    Abdellah, Abubaker; Noordin, Mohamed Ibrahim; Wan Ismail, Wan Azman

    2013-01-01

    Pharmaceutical excipients are no longer regarded as inert materials: they can improve the quality, stability, functionality, safety, solubility and patient acceptance of products. They can also interact with the active ingredients and alter the characteristics of the medicament. The globalization of the medicines supply chain enhances the importance of globalized good manufacturing practice (GMP) requirements for pharmaceutical excipients. This review was intended to assess the globalization status of good manufacturing practice (GMP) requirements for pharmaceutical excipients. The review outcomes demonstrate that there is a lack of accurately defined methods to evaluate and measure excipients' safety. Furthermore, good manufacturing practice requirements for excipients are not effectively globalized. PMID:25685037

  1. Application of the conjugate-gradient method to ground-water models

    USGS Publications Warehouse

    Manteuffel, T.A.; Grove, D.B.; Konikow, Leonard F.

    1984-01-01

    The conjugate-gradient method can efficiently and accurately solve finite-difference approximations to the ground-water flow equation. An aquifer-simulation model using the conjugate-gradient method was applied to a problem of ground-water flow in an alluvial aquifer at the Rocky Mountain Arsenal, Denver, Colorado. For this application, the accuracy and efficiency of the conjugate-gradient method compared favorably with other available methods for steady-state flow. However, its efficiency relative to other available methods depends on the nature of the specific problem. The main advantage of the conjugate-gradient method is that it does not require the use of iteration parameters, thereby eliminating this partly subjective procedure. (USGS)
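
    The iteration itself is the textbook conjugate-gradient method; a matrix-free Python sketch on a 5-point finite-difference Laplacian (a stand-in for a steady-state ground-water flow grid, with an arbitrary point source) looks like this:

      import numpy as np

      n = 50

      def A(v):                                  # 5-point Laplacian, zero boundary
          u = v.reshape(n, n)
          out = 4.0 * u
          out[1:, :] -= u[:-1, :]; out[:-1, :] -= u[1:, :]
          out[:, 1:] -= u[:, :-1]; out[:, :-1] -= u[:, 1:]
          return out.ravel()

      b = np.zeros(n * n)
      b[(n // 2) * n + n // 2] = 1.0             # a single "well" as the source term
      x = np.zeros_like(b)
      r = b - A(x); p = r.copy(); rs = r @ r
      for _ in range(500):
          Ap = A(p)
          alpha = rs / (p @ Ap)
          x += alpha * p                         # no iteration parameters to tune
          r -= alpha * Ap
          rs_new = r @ r
          if np.sqrt(rs_new) < 1e-10:
              break
          p = r + (rs_new / rs) * p
          rs = rs_new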

  2. 3D liver volume reconstructed for palpation training.

    PubMed

    Tibamoso, Gerardo; Perez-Gutierrez, Byron; Uribe-Quevedo, Alvaro

    2013-01-01

    Virtual Reality systems for medical procedures, such as the palpation of different organs, require fast, robust, accurate and reliable computational methods to provide realism during interaction with the 3D biological models. This paper presents the segmentation, reconstruction and palpation simulation of a healthy liver volume as a tool for training. The chosen method considers the mechanical characteristics and liver properties to correctly simulate palpation interactions, which makes it appropriate as a complementary tool for training medical students to become familiar with liver anatomy.

  3. Adaptive Implicit Non-Equilibrium Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philip, Bobby; Wang, Zhen; Berrill, Mark A

    2013-01-01

    We describe methods for accurate and efficient long term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
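
    The step-size control idea can be shown generically (the paper's controller may differ in detail): given a local error estimate err, a target tolerance tol, and the integrator order p, an elementary controller rescales the step while limiting how abruptly it may change.

      def new_step_size(dt, err, tol, p, safety=0.9, grow=2.0, shrink=0.2):
          # dt_new = safety * dt * (tol / err)^(1 / (p + 1)), clipped to [shrink, grow] * dt
          factor = safety * (tol / max(err, 1e-300)) ** (1.0 / (p + 1))
          return dt * min(grow, max(shrink, factor))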

  4. Minimum Energy Pathways for Chemical Reactions

    NASA Technical Reports Server (NTRS)

    Walch, S. P.; Langhoff, S. R. (Technical Monitor)

    1995-01-01

    Computed potential energy surfaces are often required for computation of such parameters as rate constants as a function of temperature, product branching ratios, and other detailed properties. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method to obtain accurate energetics, gives useful results for a number of chemically important systems. The talk will focus on a number of applications to reactions leading to NOx and soot formation in hydrocarbon combustion.

  5. Calibrations of the LHD Thomson scattering system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, I., E-mail: yamadai@nifs.ac.jp; Funaba, H.; Yasuhara, R.

    2016-11-15

    The Thomson scattering diagnostic systems are widely used for the measurements of absolute local electron temperatures and densities of fusion plasmas. In order to obtain accurate and reliable temperature and density data, careful calibrations of the system are required. We have tried several calibration methods since the second LHD experiment campaign in 1998. We summarize the current status of the calibration methods for the electron temperature and density measurements by the LHD Thomson scattering diagnostic system. Future plans are briefly discussed.

  6. Calibrations of the LHD Thomson scattering system.

    PubMed

    Yamada, I; Funaba, H; Yasuhara, R; Hayashi, H; Kenmochi, N; Minami, T; Yoshikawa, M; Ohta, K; Lee, J H; Lee, S H

    2016-11-01

    The Thomson scattering diagnostic systems are widely used for the measurements of absolute local electron temperatures and densities of fusion plasmas. In order to obtain accurate and reliable temperature and density data, careful calibrations of the system are required. We have tried several calibration methods since the second LHD experiment campaign in 1998. We summarize the current status of the calibration methods for the electron temperature and density measurements by the LHD Thomson scattering diagnostic system. Future plans are briefly discussed.

  7. Regional measurement of body nitrogen

    NASA Technical Reports Server (NTRS)

    Palmer, H. E.

    1976-01-01

    Studies of methods for determining changes in the muscle mass of arms and legs are described. N-13 measurements were made in phantom and cadaver parts after neutron irradiation. The reproducibility in these measurements was found to be excellent and the radiation dose required to provide sufficient activation was determined. Potassium-40 measurements were made on persons who lost muscle mass due to leg injuries. It appears that K-40 measurements may provide the most accurate and convenient method for determining muscle mass changes.

  8. Translation position determination in ptychographic coherent diffraction imaging.

    PubMed

    Zhang, Fucai; Peterson, Isaac; Vila-Comamala, Joan; Diaz, Ana; Berenguer, Felisa; Bean, Richard; Chen, Bo; Menzel, Andreas; Robinson, Ian K; Rodenburg, John M

    2013-06-03

    Accurate knowledge of translation positions is essential in ptychography to achieve a good image quality and the diffraction limited resolution. We propose a method to retrieve and correct position errors during the image reconstruction iterations. Sub-pixel position accuracy after refinement is shown to be achievable within several tens of iterations. Simulation and experimental results for both optical and X-ray wavelengths are given. The method improves both the quality of the retrieved object image and relaxes the position accuracy requirement while acquiring the diffraction patterns.

  9. A fast calibration method for 3-D tracking of ultrasound images using a spatial localizer.

    PubMed

    Pagoulatos, N; Haynor, D R; Kim, Y

    2001-09-01

    We have developed a fast calibration method for computing the position and orientation of 2-D ultrasound (US) images in 3-D space where a position sensor is mounted on the US probe. This calibration is required in the fields of 3-D ultrasound and registration of ultrasound with other imaging modalities. Most of the existing calibration methods require a complex and tedious experimental procedure. Our method is simple and it is based on a custom-built phantom. Thirty N-fiducials (markers in the shape of the letter "N") embedded in the phantom provide the basis for our calibration procedure. We calibrated a 3.5-MHz sector phased-array probe with a magnetic position sensor, and we studied the accuracy and precision of our method. A typical calibration procedure requires approximately 2 min. We conclude that we can achieve accurate and precise calibration using a single US image, provided that a large number (approximately ten) of N-fiducials are captured within the US image, enabling a representative sampling of the imaging plane.

  10. Improved dynamic analysis method using load-dependent Ritz vectors

    NASA Technical Reports Server (NTRS)

    Escobedo-Torres, J.; Ricles, J. M.

    1993-01-01

    The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by having a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
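
    For reference, the classical load-dependent Ritz recurrence is short enough to sketch: the first vector is the static response to the load pattern f, and each subsequent vector is the static response to the inertia forces of the previous one, orthogonalized in the mass-matrix inner product. K, M and f are assumed given, with K symmetric positive definite.

      import numpy as np
      from scipy.linalg import cho_factor, cho_solve

      def load_dependent_ritz(K, M, f, m):
          Kc = cho_factor(K)                      # factor the stiffness matrix once
          X = np.zeros((f.size, m))
          for i in range(m):
              rhs = f if i == 0 else M @ X[:, i - 1]
              x = cho_solve(Kc, rhs)              # static solve K x = rhs
              for j in range(i):                  # Gram-Schmidt in the M inner product
                  x -= (X[:, j] @ (M @ x)) * X[:, j]
              X[:, i] = x / np.sqrt(x @ (M @ x))  # M-normalize
          return X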

  11. A generalised multiple-mass based method for the determination of the live mass of a force transducer

    NASA Astrophysics Data System (ADS)

    Montalvão, Diogo; Baker, Thomas; Ihracska, Balazs; Aulaqi, Muhammad

    2017-01-01

    Many applications in Experimental Modal Analysis (EMA) require that the sensors' masses are known. This is because the added mass from sensors will affect the structural mode shapes, and in particular the natural frequencies. EMA requires the measurement of the exciting forces at given coordinates, which is often made using piezoelectric force transducers. In such a case, the live mass of the force transducer, i.e. the mass as 'seen' by the structure in perpendicular directions, must be measured somehow, so that compensation methods like mass cancellation can be performed. This, however, presents the problem of how to obtain an accurate measurement of the live mass. If the system is perfectly calibrated, then a reasonably accurate estimate can be made using a straightforward method available in most classical textbooks based on Newton's second law. However, this is often not the case (for example, when the transducer's sensitivity has changed over time, when it is unknown, or when the connection influences the transmission of the force). In a self-calibrating iterative method, both the live mass and calibration factor are determined, but this paper shows that the problem may be ill-conditioned, producing misleading results if certain conditions are not met. Therefore, a more robust method is presented and discussed in this paper, reducing the ill-conditioning problems and the need to know the calibration factors beforehand. The three methods are compared and discussed through numerical and experimental examples, showing that classical EMA is still a field of research that deserves attention from scientists and engineers.

  12. Solution of axisymmetric and two-dimensional inviscid flow over blunt bodies by the method of lines

    NASA Technical Reports Server (NTRS)

    Hamilton, H. H., II

    1978-01-01

    Comparisons with experimental data and the results of other computational methods demonstrated that very accurate solutions can be obtained by using relatively few lines with the method of lines approach. This method is semidiscrete and has relatively low core storage requirements as compared with fully discrete methods, since very little data were stored across the shock layer. This feature is very attractive for three-dimensional problems because it enables computer storage requirements to be reduced by approximately an order of magnitude. In the present study it was found that nine lines was a practical upper limit for two-dimensional and axisymmetric problems. This condition limits application of the method to smooth body geometries where relatively few lines would be adequate to describe changes in the flow variables around the body. Extension of the method to three dimensions was conceptually straightforward; however, three-dimensional applications would also be limited to smooth body geometries, although not necessarily to a total of nine lines.

  13. Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques

    PubMed Central

    Petersen, Richard C.

    2014-01-01

    Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. Polymer-matrix discontinuous quartz fiber-reinforced composites were prepared for flexural mechanical testing to accentuate toughness differences, comprising 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms are discussed, especially for fiber-reinforced composites. PMID:25620817
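
    For orientation, the standard textbook relations connecting the quantities named above (generic forms only, not the paper's new correction factor) are:

      K_{Ic} = Y\,\sigma\sqrt{\pi a}, \qquad
      \mathcal{G}_{Ic} = \frac{K_{Ic}^{2}}{E}\ \ (\text{plane stress}), \qquad
      r_{p} = \frac{1}{2\pi}\left(\frac{K_{Ic}}{\sigma_{ys}}\right)^{2}.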

  14. Accurate 3D reconstruction of bony surfaces using ultrasonic synthetic aperture techniques for robotic knee arthroplasty.

    PubMed

    Kerr, William; Rowe, Philip; Pierce, Stephen Gareth

    2017-06-01

    Robotically guided knee arthroplasty systems generally require an individualized, preoperative 3D model of the knee joint. This is typically measured using Computed Tomography (CT), which provides the required accuracy for preoperative surgical intervention planning. Ultrasound imaging presents an attractive alternative to CT, allowing for reductions in cost and the elimination of doses of ionizing radiation, whilst maintaining the accuracy of the 3D model reconstruction of the joint. Traditional phased array ultrasound imaging methods, however, are susceptible to poor resolution and signal to noise ratios (SNR). Alleviating these weaknesses by offering superior focusing power, synthetic aperture methods have been investigated extensively within ultrasonic non-destructive testing. Despite this, they have yet to be fully exploited in medical imaging. In this paper, the ability of a robotically deployed ultrasound imaging system based on synthetic aperture methods to accurately reconstruct bony surfaces is investigated. Employing the Total Focussing Method (TFM) and the Synthetic Aperture Focussing Technique (SAFT), two samples were imaged which were representative of the bones of the knee joint: a human-shaped, composite distal femur and a bovine distal femur. Data were captured using a 5 MHz, 128 element 1D phased array, which was manipulated around the samples using a robotic positioning system. Three dimensional surface reconstructions were then produced and compared with reference models measured using a precision laser scanner. Mean errors of 0.82 mm and 0.88 mm were obtained for the composite and bovine samples, respectively, thus demonstrating the feasibility of the approach to deliver the sub-millimetre accuracy required for the application. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
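
    A bare-bones delay-and-sum Total Focussing Method sketch is given below; the array geometry, sampling rate and sound speed are placeholder values, and fmc is assumed to hold the already-captured full matrix of transmit-receive A-scans.

      import numpy as np

      c, fs, n_el = 1540.0, 50e6, 128            # sound speed (m/s), sample rate, elements
      elem_x = (np.arange(n_el) - (n_el - 1) / 2) * 0.3e-3   # 0.3 mm pitch, array at z = 0
      fmc = np.zeros((n_el, n_el, 4000))         # fmc[tx, rx, sample] (placeholder data)

      xs = np.linspace(-20e-3, 20e-3, 200)       # image grid (m)
      zs = np.linspace(5e-3, 45e-3, 200)
      image = np.zeros((zs.size, xs.size))
      for iz, z in enumerate(zs):
          d = np.hypot(elem_x[:, None] - xs[None, :], z)     # element-to-pixel distances
          for tx in range(n_el):
              # sample index of the tx -> pixel -> rx round trip, per (rx, pixel)
              idx = np.clip(((d[tx] + d) / c * fs).astype(int), 0, fmc.shape[2] - 1)
              image[iz] += np.take_along_axis(fmc[tx], idx, axis=1).sum(axis=0)
      image = np.abs(image)                      # envelope detection would follow here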

  15. Metrology for hydrogen energy applications: a project to address normative requirements

    NASA Astrophysics Data System (ADS)

    Haloua, Frédérique; Bacquart, Thomas; Arrhenius, Karine; Delobelle, Benoît; Ent, Hugo

    2018-03-01

    Hydrogen represents a clean and storable energy solution that could meet worldwide energy demands and reduce greenhouse gas emissions. The joint research project (JRP) ‘Metrology for sustainable hydrogen energy applications’ addresses standardisation needs through pre- and co-normative metrology research in the fast emerging sector of hydrogen fuel, meeting the requirements of the European Directive 2014/94/EU by supplementing the revision of two ISO standards that are currently too generic to enable a sustainable implementation of hydrogen. The hydrogen purity dispensed at refueling points should comply with the technical specifications of ISO 14687-2 for fuel cell electric vehicles. The rapid progress of fuel cell technology now requires revising this standard towards less constraining limits for the 13 gaseous impurities. In parallel, optimized validated analytical methods are proposed to reduce the number of analyses. The study also aims to develop and validate traceable methods to accurately assess the hydrogen mass absorbed and stored in metal hydride tanks; this is a research axis for the revision of the ISO 16111 standard to develop this safe storage technique for hydrogen. The probability of hydrogen impurity presence affecting fuel cells and analytical techniques for traceable measurements of hydrogen impurities will be assessed, and new data on maximum concentrations of impurities based on degradation studies will be proposed. Novel validated methods for measuring the hydrogen mass absorbed in metal hydride tanks of the AB, AB2 and AB5 types referenced in ISO 16111 will be determined, as the methods currently available do not provide accurate results. The outputs will have a direct impact on the standardisation work for the ISO 16111 and ISO 14687-2 revisions in the relevant working groups of ISO/TC 197 ‘Hydrogen technologies’.

  16. Instillation and Fixation Methods Useful in Mouse Lung Cancer Research.

    PubMed

    Limjunyawong, Nathachit; Mock, Jason; Mitzner, Wayne

    2015-08-31

    The ability to instill live agents, cells, or chemicals directly into the lung without injuring or killing the mice is an important tool in lung cancer research. Although a number of methods have been published showing how to intubate mice for pulmonary function measurements, none are without potential problems for rapid tracheal instillation in large cohorts of mice. In the present paper, a simple and quick method is described that enables an investigator to carry out such instillations in an efficient manner. The method does not require any special tools or lighting and can be learned with very little practice. It involves anesthetizing a mouse, making a small incision in the neck to visualize the trachea, and then inserting an intravenous catheter directly. The small incision is quickly closed with tissue adhesive, and the mice are allowed to recover. A skilled student or technician can do instillations at an average rate of 2 min/mouse. Once the cancer is established, there is frequently a need for quantitative histologic analysis of the lungs. Traditionally, pathologists do not standardize lung inflation during fixation, and analyses are often based on a scoring system that can be quite subjective. While this may sometimes be adequate for gross estimates of the size of a lung tumor, any proper stereological quantification of lung structure or cells requires a reproducible fixation procedure and subsequent lung volume measurement. Here we describe simple, reliable procedures for both fixing the lungs under pressure and then accurately measuring the fixed lung volume. The only requirement is a laboratory balance that is accurate over a range of 1 mg-300 g. The procedures presented here thus could greatly improve the ability to create, treat, and analyze lung cancers in mice.

  17. A new class of accurate, mesh-free hydrodynamic simulation methods

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2015-06-01

    We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume 'overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO. This maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require 'artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced 'particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of 'grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize 'grid noise'. These differences are important for a range of astrophysical problems.

  18. A stable high-order perturbation of surfaces method for numerical simulation of diffraction problems in triply layered media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu

    The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.

  19. Quantification of urinary zwitterionic organic acids using weak-anion exchange chromatography with tandem MS detection.

    PubMed

    Bishop, Michael Jason; Crow, Brian S; Kovalcik, Kasey D; George, Joe; Bralley, James A

    2007-04-01

    A rapid and accurate quantitative method was developed and validated for the analysis of four urinary organic acids with nitrogen-containing functional groups, formiminoglutamic acid (FIGLU), pyroglutamic acid (PYRGLU), 5-hydroxyindoleacetic acid (5-HIAA), and 2-methylhippuric acid (2-METHIP), by liquid chromatography tandem mass spectrometry (LC/MS/MS). The chromatography was developed using a weak anion-exchange amino column that provided mixed-mode retention of the analytes. The elution gradient relied on changes in mobile phase pH over a concave gradient, without the use of counter-ions or concentrated salt buffers. A simple sample preparation was used, only requiring the dilution of urine prior to instrumental analysis. The method was validated based on linearity (r² ≥ 0.995), accuracy (85-115%), precision (CV < 12%), sample preparation stability (

  20. Dereplication, Aggregation and Scoring Tool (DAS Tool) v1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SIEBER, CHRISTIAN

    Communities of uncultivated microbes are critical to ecosystem function and microorganism health, and a key objective of metagenomic studies is to analyze organism-specific metabolic pathways and reconstruct community interaction networks. This requires accurate assignment of genes to genomes, yet existing binning methods often fail to predict a reasonable number of genomes and report many bins of low quality and completeness. Furthermore, the performance of existing algorithms varies between samples and biotypes. Here, we present a dereplication, aggregation and scoring strategy, DAS Tool, that combines the strengths of a flexible set of established binning algorithms. DAS Tool applied to a constructed community generated more accurate bins than any automated method. Further, when applied to samples of different complexity, including soil, natural oil seeps, and the human gut, DAS Tool recovered substantially more near-complete genomes than any single binning method alone. Included were three genomes from a novel lineage. The ability to reconstruct many near-complete genomes from metagenomics data will greatly advance genome-centric analyses of ecosystems.

  1. High-Throughput Histopathological Image Analysis via Robust Cell Segmentation and Hashing

    PubMed Central

    Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting

    2015-01-01

    Computer-aided diagnosis of histopathological images usually requires examining all cells for accurate diagnosis. Traditional computational methods may have efficiency issues when performing cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (the adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among half a million cells. PMID:26599156

  2. Computation of records of streamflow at control structures

    USGS Publications Warehouse

    Collins, Dannie L.

    1977-01-01

    Traditional methods of computing streamflow records on large, low-gradient streams require a continuous record of water-surface slope over a natural channel reach. This slope must be of sufficient magnitude to be accurately measured with available stage-measuring devices. On highly regulated streams, this slope approaches zero during periods of low flow and accurate measurement is difficult. Methods are described to calibrate multipurpose regulating control structures to more accurately compute streamflow records on highly regulated streams. Hydraulic theory, assuming steady, uniform flow during a computational interval, is described for five different types of flow control. The controls are: Tainter gates, hydraulic turbines, fixed spillways, navigation locks, and crest gates. Detailed calibration procedures are described for the five different controls as well as for several flow regimes for some of the controls. The instrumentation package and computer programs necessary to collect and process the field data are discussed. Two typical calibration procedures and measurement data are presented to illustrate the accuracy of the methods. (Woodard-USGS)

  3. Robust approximation of image illumination direction in a segmentation-based crater detection algorithm for spacecraft navigation

    NASA Astrophysics Data System (ADS)

    Maass, Bolko

    2016-12-01

    This paper describes an efficient and easily implemented algorithmic approach to extracting an approximation to an image's dominant projected illumination direction, based on intermediary results from a segmentation-based crater detection algorithm (CDA), at a computational cost that is negligible in comparison to that of the prior stages of the CDA. Most contemporary CDAs built for spacecraft navigation use this illumination direction as a means of improving performance or even require it to function at all. Deducing the illumination vector from the image alone reduces the reliance on external information such as the accurate knowledge of the spacecraft inertial state, accurate time base and solar system ephemerides. Therefore, a method such as the one described in this paper is a prerequisite for true "Lost in Space" operation of a purely segmentation-based crater detecting and matching method for spacecraft navigation. The proposed method is verified using ray-traced lunar elevation model data, asteroid image data, and in a laboratory setting with a camera in the loop.
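
    A crude stand-in for the idea (not the paper's algorithm, which reuses intermediaries of the crater detection pipeline): over shaded terrain, intensity gradients point from shadowed toward lit pixels, so a circular mean of the strongest gradient directions gives a rough estimate of the projected illumination direction.

      import numpy as np

      def illumination_direction(img):
          gy, gx = np.gradient(img.astype(float))
          mag = np.hypot(gx, gy)
          keep = mag > np.percentile(mag, 90)      # use only strong shading edges
          ux, uy = gx[keep] / mag[keep], gy[keep] / mag[keep]
          # circular mean of gradient unit vectors (gradients point dark -> bright,
          # i.e. roughly toward the light over shaded relief)
          return np.arctan2(uy.sum(), ux.sum())    # radians, in image coordinates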

  4. Accurate, Streamlined Analysis of mRNA Translation by Sucrose Gradient Fractionation

    PubMed Central

    Aboulhouda, Soufiane; Di Santo, Rachael; Therizols, Gabriel; Weinberg, David

    2017-01-01

    The efficiency with which proteins are produced from mRNA molecules can vary widely across transcripts, cell types, and cellular states. Methods that accurately assay the translational efficiency of mRNAs are critical to gaining a mechanistic understanding of post-transcriptional gene regulation. One way to measure translational efficiency is to determine the number of ribosomes associated with an mRNA molecule, normalized to the length of the coding sequence. The primary method for this analysis of individual mRNAs is sucrose gradient fractionation, which physically separates mRNAs based on the number of bound ribosomes. Here, we describe a streamlined protocol for accurate analysis of mRNA association with ribosomes. Compared to previous protocols, our method incorporates internal controls and improved buffer conditions that together reduce artifacts caused by non-specific mRNA–ribosome interactions. Moreover, our direct-from-fraction qRT-PCR protocol eliminates the need for RNA purification from gradient fractions, which greatly reduces the amount of hands-on time required and facilitates parallel analysis of multiple conditions or gene targets. Additionally, no phenol waste is generated during the procedure. We initially developed the protocol to investigate the translationally repressed state of the HAC1 mRNA in S. cerevisiae, but we also detail adapted procedures for mammalian cell lines and tissues. PMID:29170751

  5. Variable Threshold Method for Determining the Boundaries of Imaged Subvisible Particles.

    PubMed

    Cavicchi, Richard E; Collett, Cayla; Telikepalli, Srivalli; Hu, Zhishang; Carrier, Michael; Ripple, Dean C

    2017-06-01

    An accurate assessment of particle characteristics and concentrations in pharmaceutical products by flow imaging requires accurate particle sizing and morphological analysis. Analysis of images begins with the definition of particle boundaries. Commonly a single threshold defines the level for a pixel in the image to be included in the detection of particles, but depending on the threshold level, this results in either missing translucent particles or oversizing of less transparent particles due to the halos and gradients in intensity near the particle boundaries. We have developed an imaging analysis algorithm that sets the threshold for a particle based on the maximum gray value of the particle. We show that this results in tighter boundaries for particles with high contrast, while conserving the number of highly translucent particles detected. The method is implemented as a plugin for FIJI, an open-source image analysis software. The method is tested for calibration beads in water and glycerol/water solutions, a suspension of microfabricated rods, and stir-stressed aggregates made from IgG. The result is that appropriate thresholds are automatically set for solutions with a range of particle properties, and that improved boundaries will allow for more accurate sizing results and potentially improved particle classification studies. Published by Elsevier Inc.
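
    A sketch of the thresholding rule as we read it, with a hypothetical global level and fraction: particles are first detected at a permissive global threshold, and each one is then re-thresholded at a level tied to its own maximum gray value, tightening boundaries for high-contrast particles without discarding faint ones.

      import numpy as np
      from scipy import ndimage

      def variable_threshold(img, global_level, fraction=0.5):
          labels, n = ndimage.label(img > global_level)   # permissive first pass
          refined = np.zeros(img.shape, dtype=bool)
          for i in range(1, n + 1):
              region = labels == i
              local_level = fraction * img[region].max()  # per-particle threshold
              refined |= region & (img >= max(local_level, global_level))
          return refined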

  6. Dimension reduction method for SPH equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2011-08-26

    Smoothed Particle Hydrodynamics (SPH) models of complex multiscale processes often result in a system of ODEs with an enormous number of unknowns. Furthermore, a time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g. average velocity, concentration and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of the effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation. An SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations, while providing accurate approximation of the solution and accurate prediction of the average behavior of the system.

  7. Discrepancy between the composition of some commercial cat foods and their package labelling and suitability for meeting nutritional requirements.

    PubMed

    Gosper, E C; Raubenheimer, D; Machovsky-Capuska, G E; Chaves, A V

    2016-01-01

    To investigate if the label information and nutrient composition of commercial cat foods are accurate and compliant with the Australian Standard (AS 5812-2011) and if they meet the nutritional requirements of an adult cat. A chemical analysis of 10 wet and 10 dry commercial cat foods labelled as 'nutritionally complete' for the adult cat was performed. The results were compared with the package composition values, the Australian Standard and the unique dietary requirements of the cat. In addition, the results of the chemical analysis were compared with the nutrient requirements published by the Association of American Feed Control Officials and the National Research Council. When compared with the Australian Standard, 9 of the 20 cat foods did not adhere to their 'guaranteed analysis' and 8 did not adhere to the standards for nutrient composition. Also, various deficiencies and excesses of crude protein, crude fat, fatty acids and amino acids were observed in the majority of the cat foods. The results of this study highlight a need for an improved method of ensuring that label information and nutrient composition are accurate and comply with the Australian Standard (AS 5812-2011), to ensure the adult cat's unique dietary requirements are being met by commercial adult cat food. © 2016 Australian Veterinary Association.

  8. QRTEngine: An easy solution for running online reaction time experiments using Qualtrics.

    PubMed

    Barnhoorn, Jonathan S; Haasnoot, Erwin; Bocanegra, Bruno R; van Steenbergen, Henk

    2015-12-01

    Performing online behavioral research is gaining increased popularity among researchers in psychological and cognitive science. However, the currently available methods for conducting online reaction time experiments are often complicated and typically require advanced technical skills. In this article, we introduce the Qualtrics Reaction Time Engine (QRTEngine), an open-source JavaScript engine that can be embedded in the online survey development environment Qualtrics. The QRTEngine can be used to easily develop browser-based online reaction time experiments with accurate timing within current browser capabilities, and it requires only minimal programming skills. After introducing the QRTEngine, we briefly discuss how to create and distribute a Stroop task. Next, we describe a study in which we investigated the timing accuracy of the engine under different processor loads using external chronometry. Finally, we show that the QRTEngine can be used to reproduce classic behavioral effects in three reaction time paradigms: a Stroop task, an attentional blink task, and a masked-priming task. These findings demonstrate that QRTEngine can be used as a tool for conducting online behavioral research even when this requires accurate stimulus presentation times.

  9. A time-dependent neutron transport method of characteristics formulation with time derivative propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu

    2016-02-15

    A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
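
    As an illustration of the kind of BDF approximation involved (the order used in the paper may differ), the second-order backward differentiation formula for a source term q evaluated at time level n reads:

      \left.\frac{\partial q}{\partial t}\right|_{t_{n}} \approx \frac{3\,q^{\,n} - 4\,q^{\,n-1} + q^{\,n-2}}{2\,\Delta t}.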

  10. Controlled-Root Approach To Digital Phase-Locked Loops

    NASA Technical Reports Server (NTRS)

    Stephens, Scott A.; Thomas, J. Brooks

    1995-01-01

    The controlled-root approach is an improved method for the analysis and design of digital phase-locked loops (DPLLs) in which performance is tailored more flexibly and directly to satisfy design requirements. It was developed rigorously from first principles for fully digital loops, making DPLL theory and design simpler and more straightforward (particularly for third- or fourth-order DPLLs) and controlling performance more accurately in the case of high gain.

  11. Direct numerical simulation of transition and turbulence in a spatially evolving boundary layer

    NASA Technical Reports Server (NTRS)

    Rai, Man M.; Moin, Parviz

    1991-01-01

    A high-order-accurate finite-difference approach to direct simulations of transition and turbulence in compressible flows is described. Attention is given to the high-free-stream-disturbance case, in which transition to turbulence occurs close to the leading edge; because the region that must be resolved is correspondingly short, computational requirements are reduced. A method for numerically generating free-stream disturbances is presented.

  12. Method for Determination of the Wind Velocity and Direction

    NASA Technical Reports Server (NTRS)

    Dahlin, Goesta Johan

    1988-01-01

    Accurate determination of the position of an artillery piece, for example, by sound-measurement systems that measure the muzzle noise requires access to wind data that is representative of the portion of the air through which the sound wave propagates to the microphone base of the system. The invention provides a system for determining such representative wind data.

  13. Simulation of Electric Propulsion Thrusters (Preprint)

    DTIC Science & Technology

    2011-02-07

    activity concerns the plumes produced by electric thrusters. Detailed information on the plumes is required for safe integration of the thruster...ground-based laboratory facilities. Device modelling also plays an important role in plume simulations by providing accurate boundary conditions at...methods used to model the flow of gas and plasma through electric propulsion devices. Discussion of the numerical analysis of other aspects of

  14. Renormalization group theory outperforms other approaches in statistical comparison between upscaling techniques for porous media

    NASA Astrophysics Data System (ADS)

    Hanasoge, Shravan; Agarwal, Umang; Tandon, Kunj; Koelman, J. M. Vianney A.

    2017-09-01

    Determining the pressure differential required to achieve a desired flow rate in a porous medium requires solving Darcy's law, a Laplace-like equation, with a spatially varying tensor permeability. In various scenarios, the permeability coefficient is sampled at high spatial resolution, which makes solving Darcy's equation numerically prohibitively expensive. As a consequence, much effort has gone into creating upscaled or low-resolution effective models of the coefficient while ensuring that the estimated flow rate is well reproduced, bringing to the fore the classic tradeoff between computational cost and numerical accuracy. Here we perform a statistical study to characterize the relative success of upscaling methods on a large sample of permeability coefficients that are above the percolation threshold. We introduce a technique based on mode-elimination renormalization group theory (MG) to build coarse-scale permeability coefficients. Comparing the results with coefficients upscaled using other methods, we find that MG is consistently more accurate, particularly due to its ability to address the tensorial nature of the coefficients. As we have implemented it, MG places a low computational demand, and accurate flow-rate estimates are obtained when using MG-upscaled permeabilities that approach or are beyond the percolation threshold.
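
    For reference, the equation being upscaled combines Darcy's law with incompressible mass conservation, giving the Laplace-like pressure equation with a spatially varying tensor coefficient:

    ```latex
    \mathbf{q} = -\frac{K(\mathbf{x})}{\mu}\,\nabla p, \qquad
    \nabla \cdot \mathbf{q} = 0
    \quad\Longrightarrow\quad
    \nabla \cdot \bigl(K(\mathbf{x})\,\nabla p\bigr) = 0 .
    ```

    Upscaling replaces the fine-scale tensor K(x) with a coarse effective tensor chosen so that the solution of this equation still reproduces the fine-scale flow rate.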

  15. Design and evaluation of a freeform lens by using a method of luminous intensity mapping and a differential equation

    NASA Astrophysics Data System (ADS)

    Essameldin, Mahmoud; Fleischmann, Friedrich; Henning, Thomas; Lang, Walter

    2017-02-01

    Freeform optical systems are playing an important role in the field of illumination engineering for redistributing light intensity, because of their capability of achieving accurate and efficient results. The authors presented the basic idea of the freeform lens design method at the 117th annual meeting of the German Society of Applied Optics (DGAO Proceedings). Here, we demonstrate the feasibility of the design method by designing and evaluating a freeform lens. The concepts of luminous intensity mapping, energy conservation, and a differential equation are combined in designing a lens for non-imaging applications. The procedures required to design a lens, including the simulations, are explained in detail. The optical performance is investigated by means of a numerical simulation of optical ray tracing. For evaluation, the results are compared with those of another recently published design method, showing the accurate performance of the proposed method using a reduced number of mapping angles. As a part of the tolerance analyses of the fabrication processes, the influence of light source misalignments (translation and orientation) on the beam-shaping performance is presented. Finally, the importance of considering the extended light source when designing a freeform lens with the proposed method is discussed.

  16. A novel knowledge-based potential for RNA 3D structure evaluation

    NASA Astrophysics Data System (ADS)

    Yang, Yi; Gu, Qi; Zhang, Ben-Gong; Shi, Ya-Zhou; Shao, Zhi-Gang

    2018-03-01

    Ribonucleic acids (RNAs) play a vital role in biology, and knowledge of their three-dimensional (3D) structure is required to understand their biological functions. Structural prediction methods have recently been developed to address this issue, but most existing methods produce a series of candidate RNA 3D structures rather than a single one, so evaluation of the predicted structures is generally indispensable. Although several methods have been proposed to assess RNA 3D structures, the existing methods are not precise enough. In this work, a new all-atom knowledge-based potential is developed for more accurately evaluating RNA 3D structures. The potential not only includes local and nonlocal interactions but also fully considers the specificity of each RNA by introducing a retraining mechanism. Based on extensive test sets generated from independent methods, the proposed potential correctly distinguished the native state and ranked near-native conformations so as to effectively select the best. Furthermore, the proposed potential precisely captured RNA structural features such as base-stacking and base-pairing. Comparisons with existing potential methods show that the proposed potential is very reliable and accurate in RNA 3D structure evaluation. Project supported by the National Science Foundation of China (Grants Nos. 11605125, 11105054, 11274124, and 11401448).

  17. Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; Huang, Lianjie

    2015-01-28

    Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.

  18. Marker-free registration for the accurate integration of CT images and the subject's anatomy during navigation surgery of the maxillary sinus

    PubMed Central

    Kang, S-H; Kim, M-K; Kim, J-H; Park, H-K; Park, W

    2012-01-01

    Objective This study compared three marker-free registration methods that are applicable to a navigation system that can be used for maxillary sinus surgery, and evaluated the associated errors, with the aim of determining which registration method is the most applicable for operations that require accurate navigation. Methods The CT digital imaging and communications in medicine (DICOM) data of ten maxillary models in DICOM files were converted into stereolithography file format. All of the ten maxillofacial models were scanned three dimensionally using a light-based three-dimensional scanner. The methods applied for registration of the maxillofacial models utilized the tooth cusp, bony landmarks and maxillary sinus anterior wall area. The errors during registration were compared between the groups. Results There were differences between the three registration methods in the zygoma, sinus posterior wall, molar alveolar, premolar alveolar, lateral nasal aperture and the infraorbital areas. The error was smallest using the overlay method for the anterior wall of the maxillary sinus, and the difference was statistically significant. Conclusion The navigation error can be minimized by conducting registration using the anterior wall of the maxillary sinus during image-guided surgery of the maxillary sinus. PMID:22499127

  19. Comprehensive tire-road friction coefficient estimation based on signal fusion method under complex maneuvering operations

    NASA Astrophysics Data System (ADS)

    Li, L.; Yang, K.; Jia, G.; Ran, X.; Song, J.; Han, Z.-Q.

    2015-05-01

    The accurate estimation of the tire-road friction coefficient plays a significant role in vehicle dynamics control. The estimation method should be timely and reliable for control purposes, meaning that the contact friction characteristics between the tire and the road should be recognized before an intervention is needed, to protect the driver and passengers from drifting and loss of control. In addition, the estimation method should be stable and feasible under complex maneuvering operations to guarantee control performance as well. A signal fusion method combining the available signals to estimate the road friction is suggested in this paper, built on individual friction estimates for braking, driving, and steering conditions. From the input characteristics and the states of the vehicle and tires obtained from sensors, the maneuvering condition is recognized; from this, the certainty factors of the friction estimates for the three conditions mentioned above are obtained, and the comprehensive road friction is then calculated. Experimental vehicle tests validate the effectiveness of the proposed method through complex maneuvering operations; the estimated road friction coefficient based on the signal fusion method is sufficiently timely and accurate to satisfy control demands.
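
    The abstract does not give the fusion rule explicitly; a minimal sketch of one plausible certainty-weighted combination, with hypothetical names and values, is:

    ```python
    def fuse_friction(estimates, certainties):
        """Certainty-weighted fusion of per-condition friction estimates.

        estimates   -- dict: condition -> estimated friction coefficient mu
        certainties -- dict: condition -> certainty factor in [0, 1]
        keyed by the maneuvering conditions, e.g. braking/driving/steering.
        """
        total = sum(certainties.values())
        if total == 0.0:
            return None  # no condition is active enough to trust any estimate
        return sum(certainties[c] * estimates[c] for c in estimates) / total

    # hypothetical snapshot: strong braking, slight steering input
    mu = fuse_friction({"braking": 0.62, "driving": 0.55, "steering": 0.70},
                       {"braking": 0.8, "driving": 0.1, "steering": 0.3})
    ```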

  20. "What-Where-Which" Episodic Retrieval Requires Conscious Recollection and Is Promoted by Semantic Knowledge

    PubMed Central

    Saive, Anne-Lise; Royet, Jean-Pierre; Garcia, Samuel; Thévenet, Marc; Plailly, Jane

    2015-01-01

    Episodic memory is defined as the conscious retrieval of specific past events. Whether accurate episodic retrieval requires a recollective experience or if a feeling of knowing is sufficient remains unresolved. We recently devised an ecological approach to investigate the controlled cued-retrieval of episodes composed of unnamable odors (What) located spatially (Where) within a visual context (Which context). By combining the Remember/Know procedure with our laboratory-ecological approach in an original way, the present study demonstrated that the accurate odor-evoked retrieval of complex and multimodal episodes overwhelmingly required conscious recollection. A feeling of knowing, even when associated with a high level of confidence, was not sufficient to generate accurate episodic retrieval. Interestingly, we demonstrated that the recollection of accurate episodic memories was promoted by odor retrieval-cue familiarity and describability. In conclusion, our study suggested that semantic knowledge about retrieval-cues increased the recollection which is the state of awareness required for the accurate retrieval of complex episodic memories. PMID:26630170

  1. "What-Where-Which" Episodic Retrieval Requires Conscious Recollection and Is Promoted by Semantic Knowledge.

    PubMed

    Saive, Anne-Lise; Royet, Jean-Pierre; Garcia, Samuel; Thévenet, Marc; Plailly, Jane

    2015-01-01

    Episodic memory is defined as the conscious retrieval of specific past events. Whether accurate episodic retrieval requires a recollective experience or if a feeling of knowing is sufficient remains unresolved. We recently devised an ecological approach to investigate the controlled cued-retrieval of episodes composed of unnamable odors (What) located spatially (Where) within a visual context (Which context). By combining the Remember/Know procedure with our laboratory-ecological approach in an original way, the present study demonstrated that the accurate odor-evoked retrieval of complex and multimodal episodes overwhelmingly required conscious recollection. A feeling of knowing, even when associated with a high level of confidence, was not sufficient to generate accurate episodic retrieval. Interestingly, we demonstrated that the recollection of accurate episodic memories was promoted by odor retrieval-cue familiarity and describability. In conclusion, our study suggested that semantic knowledge about retrieval-cues increased the recollection which is the state of awareness required for the accurate retrieval of complex episodic memories.

  2. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    NASA Astrophysics Data System (ADS)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

    Effect of collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to the Δ+ <= 2 required in DNS. At larger grid resolutions SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Reτ = 442) with Δ+ = 4 and no model, DSM, and WALE were 1.7%, 2.6%, and 3.1% with SRT, and 8.3%, 7.5%, and 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.

  3. Large-scale 3D geoelectromagnetic modeling using parallel adaptive high-order finite element method

    DOE PAGES

    Grayver, Alexander V.; Kolev, Tzanio V.

    2015-11-01

    Here, we have investigated the use of the adaptive high-order finite-element method (FEM) for geoelectromagnetic modeling. Because high-order FEM is challenging from the numerical and computational points of view, most published finite-element studies in geoelectromagnetics use the lowest order formulation. Solution of the resulting large system of linear equations poses the main practical challenge. We have developed a fully parallel and distributed robust and scalable linear solver based on the optimal block-diagonal and auxiliary space preconditioners. The solver was found to be efficient for high finite element orders, unstructured and nonconforming locally refined meshes, a wide range of frequencies, large conductivity contrasts, and number of degrees of freedom (DoFs). Furthermore, the presented linear solver is in essence algebraic; i.e., it acts on the matrix-vector level and thus requires no information about the discretization, boundary conditions, or physical source used, making it readily efficient for a wide range of electromagnetic modeling problems. To get accurate solutions at reduced computational cost, we have also implemented goal-oriented adaptive mesh refinement. The numerical tests indicated that if highly accurate modeling results were required, the high-order FEM in combination with the goal-oriented local mesh refinement required less computational time and DoFs than the lowest order adaptive FEM.

  5. Temporal and spatial binning of TCSPC data to improve signal-to-noise ratio and imaging speed

    NASA Astrophysics Data System (ADS)

    Walsh, Alex J.; Beier, Hope T.

    2016-03-01

    Time-correlated single photon counting (TCSPC) is the most robust method for fluorescence lifetime imaging using laser scanning microscopes. However, TCSPC is inherently slow, making it ineffective for capturing rapid events: because at most a single photon is recorded per laser pulse, acquisition times are long, and the fluorescence emission efficiency must be kept low to avoid biasing the measurement towards short lifetimes. Furthermore, thousands of photons per pixel are required for traditional instrument-response deconvolution and fluorescence lifetime exponential decay estimation. Instrument-response deconvolution and fluorescence exponential decay estimation can be performed in several ways, including iterative least squares minimization and Laguerre deconvolution. This paper compares the limitations and accuracy of these fluorescence decay analysis techniques in estimating double exponential decays across many data characteristics, including various lifetime values, lifetime component weights, signal-to-noise ratios, and numbers of photons detected. Furthermore, techniques to improve data fitting, including binning data temporally and spatially, are evaluated as methods to improve decay fits and reduce image acquisition time. Simulation results demonstrate that binning temporally to 36 or 42 time bins improves the accuracy of fits for low-photon-count data. Such a technique reduces the number of photons required for accurate component estimation when lifetime values are known, such as for commercial fluorescent dyes and FRET experiments, and can improve imaging speed 10-fold.
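
    Temporal binning of a TCSPC decay is simply a sum over adjacent time channels; a minimal numpy sketch (the 256-channel count is an assumption, not from the paper):

    ```python
    import numpy as np

    def rebin_decay(counts, n_bins):
        """Sum adjacent TCSPC time channels down to n_bins coarser bins
        (any remainder channels at the tail are dropped)."""
        counts = np.asarray(counts)
        width = len(counts) // n_bins
        return counts[: width * n_bins].reshape(n_bins, width).sum(axis=1)

    # e.g. a 256-channel decay histogram binned to 36 bins
    coarse = rebin_decay(np.random.poisson(5.0, size=256), 36)
    ```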

  6. A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays

    PubMed Central

    Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.

    2013-01-01

    Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures - these are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures and proportions per sample without need for a-priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type specific gene expression using existing large pools of publically available microarray datasets. PMID:23990767

  7. Evaluation of a simplified gross thrust calculation method for a J85-21 afterburning turbojet engine in an altitude facility

    NASA Technical Reports Server (NTRS)

    Baer-Riedhart, J. L.

    1982-01-01

    A simplified gross thrust calculation method was evaluated on its ability to predict the gross thrust of a modified J85-21 engine. The method used tailpipe pressure data and ambient pressure data to predict the gross thrust. The method's algorithm is based on a one-dimensional analysis of the flow in the afterburner and nozzle. The test results showed that the method was notably accurate over the engine operating envelope when compared with thrust measured in the altitude facility. A summary of these results, the simplified gross thrust method and its requirements, and the test techniques used are discussed in this paper.

  8. Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.

  9. ITALICS: an algorithm for normalization and DNA copy number calling for Affymetrix SNP arrays.

    PubMed

    Rigaill, Guillem; Hupé, Philippe; Almeida, Anna; La Rosa, Philippe; Meyniel, Jean-Philippe; Decraene, Charles; Barillot, Emmanuel

    2008-03-15

    Affymetrix SNP arrays can be used to measure the DNA copy number of 11,000-500,000 SNPs along the genome. Their high density facilitates the precise localization of genomic alterations and makes them a powerful tool for studies of cancers and copy number polymorphism. Like other microarray technologies, they are influenced by non-relevant sources of variation, requiring correction. Moreover, the amplitude of variation induced by non-relevant effects is similar to or greater than that of the biologically relevant effect (i.e. true copy number), making it difficult to estimate the non-relevant effects accurately without also modeling the biologically relevant effect. We addressed this problem by developing ITALICS, a normalization method that estimates both biological and non-relevant effects in an alternate, iterative manner, accurately eliminating irrelevant effects. We compared our normalization method with other existing and available methods, and found that ITALICS outperformed these methods on several in-house datasets and one public dataset. These results were validated biologically by quantitative PCR. The R package ITALICS (ITerative and Alternative normaLIzation and Copy number calling for affymetrix Snp arrays) has been submitted to Bioconductor.
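
    The alternate, iterative estimation can be pictured as follows; this is a generic sketch of the idea, not the ITALICS code, and the nuisance covariates named in the comments are only examples:

    ```python
    import numpy as np

    def alternate_normalize(raw, estimate_signal, fit_nuisance, n_iter=5):
        """Alternately re-estimate non-relevant (nuisance) effects and the
        biologically relevant signal, in the spirit of ITALICS.

        raw             -- observed log-intensities (1-D array)
        estimate_signal -- callable: corrected data -> copy-number estimate
        fit_nuisance    -- callable: residual -> fitted non-relevant effects
                           (e.g. a regression on GC content, fragment length)
        """
        signal = np.zeros_like(raw)
        corrected = raw
        for _ in range(n_iter):
            nuisance = fit_nuisance(raw - signal)  # fit nuisance with signal removed
            corrected = raw - nuisance             # correct the data
            signal = estimate_signal(corrected)    # re-estimate the biological effect
        return corrected, signal
    ```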

  10. Continuous wavelet transform analysis and modal location analysis acoustic emission source location for nuclear piping crack growth monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohd, Shukri; Holford, Karen M.; Pullin, Rhys

    2014-02-12

    Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique termed 'Wavelet Transform analysis and Modal Location (WTML)', based on Lamb wave theory and time-frequency analysis, that can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter, with a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods such as the time of arrival (TOA) technique and DeltaT location. The results of the study show that the WTML method produces more accurate location results compared with TOA and triple point filtering location methods. The accuracy of the WTML approach is comparable with the DeltaT location method but requires no initial acoustic calibration of the structure.
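
    For comparison, the classical TOA-style ingredient is the arrival-time difference between sensor pairs, often estimated from the cross-correlation peak; a minimal sketch (illustrative, not the WTML algorithm):

    ```python
    import numpy as np

    def arrival_time_difference(sig_a, sig_b, fs):
        """Estimate the delay of sig_a relative to sig_b (in seconds) from
        the peak of their cross-correlation; a planar location scheme then
        converts the pairwise delays into a source position."""
        corr = np.correlate(sig_a, sig_b, mode="full")
        lag = np.argmax(corr) - (len(sig_b) - 1)
        return lag / fs  # positive: sig_a lags sig_b
    ```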

  11. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.

  12. Highly accurate symplectic element based on two variational principles

    NASA Astrophysics Data System (ADS)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    Because of the stability requirements on numerical results, the mathematical theory of classical mixed methods is relatively complex. Generalized mixed methods, however, are automatically stable, and their building process is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle and the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy calculation by a finite element computer program, higher accuracy, and wide applicability for various linear elasticity compressible and nearly incompressible material problems. It is possible that NCSE8 becomes even more advantageous for fracture problems due to its better accuracy of stresses.

  13. Statistical Post-Processing of Wind Speed Forecasts to Estimate Relative Economic Value

    NASA Astrophysics Data System (ADS)

    Courtney, Jennifer; Lynch, Peter; Sweeney, Conor

    2013-04-01

    The objective of this research is to get the best possible wind speed forecasts for the wind energy industry by using an optimal combination of well-established forecasting and post-processing methods. We start with the ECMWF 51-member ensemble prediction system (EPS), which is underdispersive and hence uncalibrated. We aim to produce wind speed forecasts that are more accurate and better calibrated than the EPS. The 51 members of the EPS are clustered to 8 weighted representative members (RMs), chosen to minimize the within-cluster spread while maximizing the inter-cluster spread. The forecasts are then downscaled using two limited area models, WRF and COSMO, at two resolutions, 14 km and 3 km. This process creates four distinguishable ensembles which are used as input to statistical post-processes requiring multi-model forecasts. Two such processes are presented here. The first, Bayesian Model Averaging, has been shown to provide more calibrated and accurate wind speed forecasts than the ECMWF EPS using this multi-model input data. The second, heteroscedastic censored regression, is also showing positive results. We compare the two post-processing methods, applied to a year of hindcast wind speed data around Ireland, using an array of deterministic and probabilistic verification techniques, such as MAE, CRPS, probability integral transforms and verification rank histograms, to show which method provides the most accurate and calibrated forecasts. However, the value of a forecast to an end-user cannot be fully quantified by just the accuracy and calibration measurements mentioned, as the relationship between skill and value is complex. Capturing the full potential of the forecast benefits also requires detailed knowledge of the end-users' weather-sensitive decision-making processes and, most importantly, the economic impact on their income. Finally, we present the continuous relative economic value of both post-processing methods to identify which is more beneficial to the wind energy industry of Ireland.

  14. Guidance for laboratories performing molecular pathology for cancer patients.

    PubMed

    Cree, Ian A; Deans, Zandra; Ligtenberg, Marjolijn J L; Normanno, Nicola; Edsjö, Anders; Rouleau, Etienne; Solé, Francesc; Thunnissen, Erik; Timens, Wim; Schuuring, Ed; Dequeker, Elisabeth; Murray, Samuel; Dietel, Manfred; Groenen, Patricia; Van Krieken, J Han

    2014-11-01

    Molecular testing is becoming an important part of the diagnosis of any patient with cancer. The challenge to laboratories is to meet this need, using reliable methods and processes to ensure that patients receive a timely and accurate report on which their treatment will be based. The aim of this paper is to provide minimum requirements for the management of molecular pathology laboratories. This general guidance should be augmented by the specific guidance available for different tumour types and tests. Preanalytical considerations are important, and careful consideration of the way in which specimens are obtained and reach the laboratory is necessary. Sample receipt and handling follow standard operating procedures, but some alterations may be necessary if molecular testing is to be performed, for instance to control tissue fixation. DNA and RNA extraction can be standardised and should be checked for quality and quantity of output on a regular basis. The choice of analytical method(s) depends on clinical requirements, desired turnaround time, and expertise available. Internal quality control, regular internal audit of the whole testing process, laboratory accreditation, and continual participation in external quality assessment schemes are prerequisites for delivery of a reliable service. A molecular pathology report should accurately convey the information the clinician needs to treat the patient with sufficient information to allow for correct interpretation of the result. Molecular pathology is developing rapidly, and further detailed evidence-based recommendations are required for many of the topics covered here.

  15. Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations

    PubMed Central

    Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey

    2012-01-01

    Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N2) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
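
    A minimal Lanczos sketch of the core operation, computing y ≈ D^{1/2} z for a symmetric positive-definite diffusion matrix D and a standard normal vector z (illustrative only; the paper's block variant and stopping criteria are not reproduced, and no Lanczos breakdown handling is included):

    ```python
    import numpy as np
    from scipy.linalg import eigh_tridiagonal

    def lanczos_sqrt_mv(D, z, m=30):
        """Approximate sqrt(D) @ z with an m-step Lanczos (Krylov) recurrence."""
        n = z.shape[0]
        V = np.zeros((n, m))
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        nrm = np.linalg.norm(z)
        V[:, 0] = z / nrm
        for j in range(m):
            w = D @ V[:, j]
            alpha[j] = V[:, j] @ w
            w -= alpha[j] * V[:, j]
            if j > 0:
                w -= beta[j - 1] * V[:, j - 1]
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                V[:, j + 1] = w / beta[j]
        # evaluate f(T) e1 on the small tridiagonal T, with f = sqrt
        theta, S = eigh_tridiagonal(alpha, beta)
        fT_e1 = S @ (np.sqrt(np.maximum(theta, 0.0)) * S[0, :])
        return nrm * (V @ fT_e1)

    # correlated Brownian noise for a synthetic SPD matrix standing in for
    # an RPY-type diffusion tensor
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 200))
    D = A @ A.T + 200 * np.eye(200)
    noise = lanczos_sqrt_mv(D, rng.standard_normal(200))
    ```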

  16. First-principles engineering of charged defects for two-dimensional quantum technologies

    NASA Astrophysics Data System (ADS)

    Wu, Feng; Galatas, Andrew; Sundararaman, Ravishankar; Rocca, Dario; Ping, Yuan

    2017-12-01

    Charged defects in two-dimensional (2D) materials have emerging applications in quantum technologies such as quantum emitters and quantum computation. The advancement of these technologies requires a rational design of ideal defect centers, demanding reliable computation methods for the quantitatively accurate prediction of defect properties. We present an accurate, parameter-free, and efficient procedure to evaluate the quasiparticle defect states and thermodynamic charge transition levels of defects in 2D materials. Importantly, we solve critical issues that stem from the strongly anisotropic screening in 2D materials, that have so far precluded the accurate prediction of charge transition levels in these materials. Using this procedure, we investigate various defects in monolayer hexagonal boron nitride (h -BN ) for their charge transition levels, stable spin states, and optical excitations. We identify CBVN (nitrogen vacancy adjacent to carbon substitution of boron) to be the most promising defect candidate for scalable quantum bit and emitter applications.

  17. Research on fully distributed optical fiber sensing security system localization algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Xu; Hou, Jiacheng; Liu, Kun; Liu, Tiegen

    2013-12-01

    A new fully distributed optical fiber sensing and location technology based on Mach-Zehnder interferometers is studied. In this security system, a new climbing-point locating algorithm based on the short-time average zero-crossing rate is presented. By calculating the zero-crossing rates of multiple grouped data separately, it not only exploits the advantages of frequency-analysis methods to determine the most effective data group more accurately, but also meets the requirements of a real-time monitoring system. Supplemented with a short-time energy calculation on the grouped signals, the most effective data group can be quickly picked out. Finally, accurate location of the climbing point can be achieved through the cross-correlation localization algorithm. The experimental results show that the proposed algorithm can realize accurate location of the climbing point while the outside interference noise of non-climbing behavior is effectively filtered out.
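
    A minimal sketch of the short-time average zero-crossing rate computation (frame sizes are illustrative, not the paper's values):

    ```python
    import numpy as np

    def short_time_zcr(x, frame_len, hop):
        """Short-time average zero-crossing rate of a 1-D sensor signal:
        one rate per frame, frames advanced by `hop` samples."""
        x = np.asarray(x, dtype=float)
        rates = []
        for start in range(0, len(x) - frame_len + 1, hop):
            frame = x[start : start + frame_len]
            signs = np.signbit(frame).astype(np.int8)
            crossings = np.sum(np.abs(np.diff(signs)))
            rates.append(crossings / frame_len)
        return np.array(rates)

    # frames with anomalous rates help flag the most effective data group
    rates = short_time_zcr(np.random.standard_normal(100000),
                           frame_len=1024, hop=512)
    ```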

  18. A Numerical Method for Calculating the Wave Drag of a Configuration from the Second Derivative of the Area Distribution of a Series of Equivalent Bodies of Revolution

    NASA Technical Reports Server (NTRS)

    Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.

    1959-01-01

    A method based on linearized and slender-body theories, which is easily adapted to electronic-machine computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method to calculate zero-lift wave drag is evaluated relative to another numerical method which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers is comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method only requires up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution and less time for a larger number of bodies.

  19. Motion tracking in the liver: Validation of a method based on 4D ultrasound using a nonrigid registration technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijayan, Sinara, E-mail: sinara.vijayan@ntnu.no; Klein, Stefan; Hofstad, Erlend Fagertun

    Purpose: Treatments like radiotherapy and focused ultrasound in the abdomen require accurate motion tracking, in order to optimize dosage delivery to the target and minimize damage to critical structures and healthy tissues around the target. 4D ultrasound is a promising modality for motion tracking during such treatments. In this study, the authors evaluate the accuracy of motion tracking in the liver based on deformable registration of 4D ultrasound images. Methods: The offline analysis was performed using a nonrigid registration algorithm that was specifically designed for motion estimation from dynamic imaging data. The method registers the entire 4D image data sequence in a groupwise optimization fashion, thus avoiding a bias toward a specifically chosen reference time point. Three healthy volunteers were scanned over several breathing cycles (12 s) from three different positions and angles on the abdomen; a total of nine 4D scans for the three volunteers. Well-defined anatomic landmarks were manually annotated in all 96 time frames for assessment of the automatic algorithm. The error of the automatic motion estimation method was compared with interobserver variability. The authors also performed experiments to investigate the influence of parameters defining the deformation field flexibility and evaluated how well the method performed with a lower temporal resolution in order to establish the minimum frame rate required for accurate motion estimation. Results: The registration method estimated liver motion with an error of 1 mm (75% percentile over all datasets), which was lower than the interobserver variability of 1.4 mm. The results were only slightly dependent on the degrees of freedom of the deformation model. The registration error increased to 2.8 mm with an eight times lower temporal resolution. Conclusions: The authors conclude that the methodology was able to accurately track the motion of the liver in the 4D ultrasound data. The authors believe that the method has potential in interventions on moving abdominal organs such as MR or ultrasound guided focused ultrasound therapy and radiotherapy, pending the method is enabled to run in real-time. The data and the annotations used for this study are made publicly available for those who would like to test other methods on 4D liver ultrasound data.

  20. A comparison of five methods to predict genomic breeding values of dairy bulls from genome-wide SNP markers

    PubMed Central

    2009-01-01

    Background Genomic selection (GS) uses molecular breeding values (MBV) derived from dense markers across the entire genome for selection of young animals. The accuracy of MBV prediction is important for a successful application of GS. Recently, several methods have been proposed to estimate MBV. Initial simulation studies have shown that these methods can accurately predict MBV. In this study we compared the accuracies and possible bias of five different regression methods in an empirical application in dairy cattle. Methods Genotypes of 7,372 SNP and highly accurate EBV of 1,945 dairy bulls were used to predict MBV for protein percentage (PPT) and a profit index (Australian Selection Index, ASI). Marker effects were estimated by least squares regression (FR-LS), Bayesian regression (Bayes-R), random regression best linear unbiased prediction (RR-BLUP), partial least squares regression (PLSR) and nonparametric support vector regression (SVR) in a training set of 1,239 bulls. Accuracy and bias of MBV prediction were calculated from cross-validation of the training set and tested against a test team of 706 young bulls. Results For both traits, FR-LS using a subset of SNP was significantly less accurate than all other methods which used all SNP. Accuracies obtained by Bayes-R, RR-BLUP, PLSR and SVR were very similar for ASI (0.39-0.45) and for PPT (0.55-0.61). Overall, SVR gave the highest accuracy. All methods resulted in biased MBV predictions for ASI, for PPT only RR-BLUP and SVR predictions were unbiased. A significant decrease in accuracy of prediction of ASI was seen in young test cohorts of bulls compared to the accuracy derived from cross-validation of the training set. This reduction was not apparent for PPT. Combining MBV predictions with pedigree based predictions gave 1.05 - 1.34 times higher accuracies compared to predictions based on pedigree alone. Some methods have largely different computational requirements, with PLSR and RR-BLUP requiring the least computing time. Conclusions The four methods which use information from all SNP namely RR-BLUP, Bayes-R, PLSR and SVR generate similar accuracies of MBV prediction for genomic selection, and their use in the selection of immediate future generations in dairy cattle will be comparable. The use of FR-LS in genomic selection is not recommended. PMID:20043835
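
    Of the compared methods, RR-BLUP is the simplest to sketch: it amounts to ridge regression of phenotypes on all SNP genotypes with a common shrinkage on marker effects. A toy illustration with synthetic data (dimensions and the shrinkage value are arbitrary, not the paper's):

    ```python
    import numpy as np

    def rr_blup(X, y, lam):
        """Ridge estimate of marker effects: all SNPs fitted jointly and
        shrunk toward zero by lam (related to the ratio of residual
        variance to per-marker genetic variance)."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(500, 2000)).astype(float)  # bulls x SNPs, 0/1/2
    X -= X.mean(axis=0)                                     # center genotypes
    y = rng.standard_normal(500)                            # stand-in for EBV
    beta = rr_blup(X, y, lam=100.0)
    mbv = X @ beta                                          # molecular breeding values
    ```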

  1. Ground Vibration Attenuation Measurement using Triaxial and Single Axis Accelerometers

    NASA Astrophysics Data System (ADS)

    Mohammad, A. H.; Yusoff, N. A.; Madun, A.; Tajudin, S. A. A.; Zahari, M. N. H.; Chik, T. N. T.; Rahman, N. A.; Annuar, Y. M. N.

    2018-04-01

    Peak particle velocity (PPV) is an important term for describing the level of vibration amplitude, especially for traveling waves attenuating with distance. Vibration measurement using a triaxial accelerometer is needed to obtain accurate PPV values; however, detailed measurements are limited by the size of the sensors and the number of channels available on the data acquisition module. In this paper, an attempt to estimate accurate PPV has been made by using only one triaxial accelerometer together with multiple single-axis accelerometers for the ground vibration measurement. A field test was conducted on soft ground using nine single-axis accelerometers and a triaxial accelerometer installed at nine receiver locations R1 to R9. Based on the obtained results, the method shows convincing agreement between the actual PPV and the calculated PPV, with a ratio of 0.97. With the designed method, the size of the vibration measurement equipment can be reduced, with fewer channels required.
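
    With all three orthogonal components available, the PPV follows from the peak of the resultant velocity; a minimal sketch:

    ```python
    import numpy as np

    def ppv_resultant(vx, vy, vz):
        """Peak particle velocity: maximum over time of the vector-sum
        (resultant) of the three orthogonal particle-velocity components."""
        v = np.sqrt(np.asarray(vx)**2 + np.asarray(vy)**2 + np.asarray(vz)**2)
        return float(v.max())
    ```

    A single-axis sensor can only report the peak of one component, which in general underestimates the true resultant; that is the gap the combined triaxial/single-axis arrangement is designed to close.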

  2. INAA Application for Trace Element Determination in Biological Reference Material

    NASA Astrophysics Data System (ADS)

    Atmodjo, D. P. D.; Kurniawati, S.; Lestiani, D. D.; Adventini, N.

    2017-06-01

    Trace element determination in biological samples is often used in studies of health and toxicology. Given the essentiality and toxicity of trace elements, their determination requires an accurate method, which implies that a good Quality Control (QC) procedure should be performed. In this study, QC for trace element determination in biological samples was applied by analyzing the Standard Reference Material (SRM) Bovine Muscle 8414 NIST using Instrumental Neutron Activation Analysis (INAA). Three selected trace elements, Fe, Zn, and Se, were determined. Accuracy is expressed as %recovery and precision as the coefficient of variance (%CV). The results showed that the %recovery of Fe, Zn, and Se was in the range of 99.4-107%, 92.7-103%, and 91.9-112%, respectively, whereas the %CV values were 2.92, 3.70, and 5.37%, respectively. These results show that the INAA method is precise and accurate for trace element determination in biological matrices.

  3. Methods for Real-Time Prediction of the Mode of Travel Using Smartphone-Based GPS and Accelerometer Data

    PubMed Central

    Martin, Bryan D.; Wolfson, Julian; Adomavicius, Gediminas; Fan, Yingling

    2017-01-01

    We propose and compare combinations of several methods for classifying transportation activity data from smartphone GPS and accelerometer sensors. We have two main objectives. First, we aim to classify our data as accurately as possible. Second, we aim to reduce the dimensionality of the data as much as possible in order to reduce the computational burden of the classification. We combine dimension reduction and classification algorithms and compare them with a metric that balances accuracy and dimensionality. In doing so, we develop a classification algorithm that accurately classifies five different modes of transportation (i.e., walking, biking, car, bus and rail) while being computationally simple enough to run on a typical smartphone. Further, we use data that required no behavioral changes from the smartphone users to collect. Our best classification model uses the random forest algorithm to achieve 96.8% accuracy. PMID:28885550
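
    The classification stage itself is off-the-shelf; a minimal scikit-learn sketch with synthetic stand-in features (the paper's exact feature set is not reproduced, so the accuracy here is near chance and only demonstrates the pipeline):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 10))  # windowed GPS/accelerometer features
    y = rng.choice(["walk", "bike", "car", "bus", "rail"], size=1000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    ```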

  5. OSM-Classic : An optical imaging technique for accurately determining strain

    NASA Astrophysics Data System (ADS)

    Aldrich, Daniel R.; Ayranci, Cagri; Nobes, David S.

    OSM-Classic is a program designed in MATLAB® to provide a method of accurately determining strain in a test sample using an optical imaging technique. Measuring strain for the mechanical characterization of materials is most commonly performed with extensometers, LVDTs (linear variable differential transformers), and strain gauges; however, these strain measurement methods suffer from their fragile nature, and it is not particularly easy to attach these devices to the material for testing. To alleviate these potential problems, an optical approach that does not require contact with the specimen can be implemented to measure the strain. OSM-Classic is a software package that interrogates a series of images to determine elongation of a test sample and hence strain of the specimen. It was designed to provide a graphical user interface that includes image processing with a dynamic region of interest. Additionally, the strain is calculated directly, providing active feedback during the processing.

  6. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
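
    For context, a single recursive-least-squares update has the familiar gain/covariance form; a minimal sketch (the residual-autocorrelation correction described above is not included):

    ```python
    import numpy as np

    def rls_update(theta, P, x, y, lam=1.0):
        """One RLS step with forgetting factor lam.

        theta -- current parameter estimate, shape (p,)
        P     -- current covariance-like matrix, shape (p, p)
        x     -- regressor vector for the new sample, shape (p,)
        y     -- new scalar measurement
        """
        Px = P @ x
        k = Px / (lam + x @ Px)              # gain vector
        theta = theta + k * (y - x @ theta)  # innovation update
        P = (P - np.outer(k, Px)) / lam      # covariance update
        return theta, P
    ```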

  7. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    NASA Technical Reports Server (NTRS)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.

  8. Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties

    NASA Astrophysics Data System (ADS)

    Xie, Tian; Grossman, Jeffrey C.

    2018-04-01

    The use of machine learning methods for accelerating the design of crystalline materials usually requires manually constructed feature vectors or complex transformation of atom coordinates to input the crystal structure, which either constrains the model to certain crystal types or makes it difficult to provide chemical insights. Here, we develop a crystal graph convolutional neural networks framework to directly learn material properties from the connection of atoms in the crystal, providing a universal and interpretable representation of crystalline materials. Our method provides a highly accurate prediction of density functional theory calculated properties for eight different properties of crystals with various structure types and compositions after being trained with 10^4 data points. Further, our framework is interpretable because one can extract the contributions from local chemical environments to global properties. Using an example of perovskites, we show how this information can be utilized to discover empirical rules for materials design.

  9. Measurement of lung volumes from supine portable chest radiographs.

    PubMed

    Ries, A L; Clausen, J L; Friedman, P J

    1979-12-01

    Lung volumes in supine nonambulatory patients are physiological parameters often difficult to measure with current techniques (plethysmograph, gas dilution). Existing radiographic methods for measuring lung volumes require standard upright chest radiographs. Accordingly, in 31 normal supine adults, we determined helium-dilution functional residual and total lung capacities and measured planimetric lung field areas (LFA) from corresponding portable anteroposterior and lateral radiographs. Low radiation dose methods, which delivered less than 10% of that from standard portable X-ray technique, were utilized. Correlation between lung volume and radiographic LFA was highly significant (r = 0.96, SEE = 10.6%). Multiple-step regressions using height and chest diameter correction factors reduced variance, but weight and radiographic magnification factors did not. In 17 additional subjects studied for validation, the regression equations accurately predicted radiographic lung volume. Thus, this technique can provide accurate and rapid measurement of lung volume in studies involving supine patients.

  10. BEST: Improved Prediction of B-Cell Epitopes from Antigen Sequences

    PubMed Central

    Gao, Jianzhao; Faraggi, Eshel; Zhou, Yaoqi; Ruan, Jishou; Kurgan, Lukasz

    2012-01-01

    Accurate identification of immunogenic regions in a given antigen chain is a difficult and actively pursued problem. Although accurate predictors for T-cell epitopes are already in place, the prediction of B-cell epitopes requires further research. We overview the available approaches for the prediction of B-cell epitopes and propose a novel and accurate sequence-based solution. Our BEST (B-cell Epitope prediction using Support vector machine Tool) method predicts epitopes from antigen sequences, in contrast to some methods that predict only from short sequence fragments, using a new architecture based on averaging selected scores generated from sliding 20-mers by a Support Vector Machine (SVM). The SVM predictor utilizes a comprehensive and custom designed set of inputs generated by combining information derived from the chain, sequence conservation, similarity to known (training) epitopes, and predicted secondary structure and relative solvent accessibility. Empirical evaluation on benchmark datasets demonstrates that BEST outperforms several modern sequence-based B-cell epitope predictors, including ABCPred, the method by Chen et al. (2007), BCPred, COBEpro, BayesB, and CBTOPE, when considering predictions from antigen chains and from chain fragments. Our method obtains a cross-validated area under the receiver operating characteristic curve (AUC) for the fragment-based prediction of 0.81 and 0.85, depending on the dataset. The AUCs of BEST on the benchmark sets of full antigen chains equal 0.57 and 0.6, which is, respectively, significantly and slightly better than the next best method we tested. We also present case studies to contrast the propensity profiles generated by BEST and several other methods. PMID:22761950

  11. An approximate Riemann solver for thermal and chemical nonequilibrium flows

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.

    1994-01-01

    Among the many methods available for the determination of inviscid fluxes across a surface of discontinuity, the flux-difference-splitting technique that employs Roe-averaged variables has been used extensively by the CFD community because of its simplicity and its ability to capture shocks exactly. This method, originally developed for perfect gas flows, has since been extended to equilibrium as well as nonequilibrium flows. Determination of the Roe-averaged variables for the case of a perfect gas flow is a simple task; however, for thermal and chemical nonequilibrium flows, some of the variables are not uniquely defined. Methods available in the literature to determine these variables seem to lack sound bases. The present paper describes a simple, yet accurate, method to determine all the variables for nonequilibrium flows in the Roe-average state. The basis for this method is the requirement that the Roe-averaged variables form a consistent set of thermodynamic variables. The present method satisfies the requirement that the square of the speed of sound be positive.
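
    For the perfect-gas case that the paper describes as simple, the Roe-averaged state follows from square-root-of-density weighting; a minimal 1-D sketch under those standard definitions, ending with the positivity check on the squared sound speed that the paper's requirement echoes:

        import numpy as np

        def roe_average(rho_L, u_L, H_L, rho_R, u_R, H_R, gamma=1.4):
            """Roe-averaged density, velocity, total enthalpy, and sound speed
            for a perfect gas, using sqrt(rho)-weighted averaging."""
            wL, wR = np.sqrt(rho_L), np.sqrt(rho_R)
            rho_hat = wL * wR
            u_hat = (wL * u_L + wR * u_R) / (wL + wR)
            H_hat = (wL * H_L + wR * H_R) / (wL + wR)
            a2 = (gamma - 1.0) * (H_hat - 0.5 * u_hat**2)
            assert a2 > 0.0, "squared Roe-averaged sound speed must be positive"
            return rho_hat, u_hat, H_hat, np.sqrt(a2)

        # Sod-like left/right states (density, velocity, total enthalpy).
        print(roe_average(1.0, 0.0, 2.5, 0.125, 0.0, 2.0))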

  12. Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order

    NASA Astrophysics Data System (ADS)

    Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy

    Several methods for testing the stability of first-quadrant, quarter-plane two-dimensional (2-D) recursive digital filters were suggested in the 1970s and '80s. Though Jury's row and column algorithms and the row- and column-concatenation stability tests have been considered highly efficient mapping methods, they still fall short of accuracy, since they need an infinite number of steps to decide the exact stability of a filter, and the computational time required is enormous. In this paper, we present a procedurally very simple algebraic method requiring only two steps when applied to the second-order 2-D quarter-plane filter. We extend the same method to second-order non-symmetric half-plane (NSHP) filters. Ample examples are given for both these types of filters, as well as for some lower-order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and obtained the same decisions, showing that our method is sufficiently accurate.
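
    For contrast with such algebraic tests, a brute-force numerical check samples the filter's denominator polynomial on the unit bicircle; the sketch below combines two 1-D root conditions with a sampled torus condition (exact only up to grid resolution), and is emphatically not the authors' two-step method:

        import numpy as np

        def is_stable_quarter_plane(b, n_grid=256):
            """Numerical stability check for H = 1/B, with B given as
            coefficients b[m, n] of w1^m * w2^n, where w = z^-1. Stability
            requires B != 0 on the closed unit bidisc, reducible to two 1-D
            root conditions plus B != 0 on the torus |w1| = |w2| = 1."""
            # 1-D slices B(w1, 1) and B(1, w2): no roots inside |w| <= 1.
            for p in (b.sum(axis=1), b.sum(axis=0)):
                roots = np.roots(p[::-1])     # np.roots wants descending powers
                if np.any(np.abs(roots) <= 1.0):
                    return False
            # Sampled torus condition on an n_grid x n_grid frequency grid.
            w = np.exp(2j * np.pi * np.arange(n_grid) / n_grid)
            V1 = np.vander(w, b.shape[0], increasing=True)   # powers of w1
            V2 = np.vander(w, b.shape[1], increasing=True)   # powers of w2
            B = V1 @ b @ V2.T
            return bool(np.all(np.abs(B) > 1e-9))

        # Example: B = 1 - 0.5*w1 - 0.4*w2 (all zeros outside the bidisc).
        b = np.array([[1.0, -0.4], [-0.5, 0.0]])
        print(is_stable_quarter_plane(b))     # True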

  13. Computational efficiency for the surface renewal method

    NASA Astrophysics Data System (ADS)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and that were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal-processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. Increased computational speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
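
    One example of such a vectorized simplification is computing the structure functions of the high-frequency scalar record over many candidate lags at once; a minimal sketch, assuming a 10 Hz temperature series (structure functions of this kind are a common ingredient of SR ramp fitting, though the authors' exact algorithms may differ):

        import numpy as np

        def structure_functions(x, lags, orders=(2, 3, 5)):
            """Vectorized n-th order structure functions
            S_n(r) = mean((x_t - x_{t-r})^n) for each lag r in samples."""
            out = {}
            for r in lags:
                d = x[r:] - x[:-r]            # all lag-r differences at once
                out[r] = {n: np.mean(d**n) for n in orders}
            return out

        rng = np.random.default_rng(0)
        temp = np.cumsum(rng.normal(0, 0.05, size=18000))  # 30 min at 10 Hz
        S = structure_functions(temp, lags=[1, 2, 5, 10])
        print(S[5][3])   # third-order structure function at a 0.5 s lag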

  14. Rapid analysis of scattering from periodic dielectric structures using accelerated Cartesian expansions

    DOE PAGES

    Baczewski, Andrew David; Miller, Nicholas C.; Shanker, Balasubramaniam

    2012-03-22

    Here, the analysis of fields in periodic dielectric structures arises in numerous applications of recent interest, ranging from photonic bandgap structures and plasmonically active nanostructures to metamaterials. To achieve an accurate representation of the fields in these structures using numerical methods, dense spatial discretization is required. This, in turn, affects the cost of analysis, particularly for integral-equation-based methods, for which traditional iterative methods require O(N^2) operations, N being the number of spatial degrees of freedom. In this paper, we introduce a method for the rapid solution of volumetric electric field integral equations used in the analysis of doubly periodic dielectric structures. The crux of our method is the accelerated Cartesian expansion algorithm, which is used to evaluate the requisite potentials in O(N) cost. Results are provided that corroborate our claims of acceleration without compromising accuracy, as well as the application of our method to a number of compelling photonics applications.
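
    To make the scaling concrete, the baseline being accelerated is the direct pairwise evaluation of potentials; a toy O(N^2) sketch with a free-space kernel (not the periodic Green's function actually used in the paper):

        import numpy as np

        def direct_potentials(points, charges):
            """Direct O(N^2) evaluation of phi_i = sum_{j != i} q_j / |r_i - r_j|.
            Fast schemes such as accelerated Cartesian expansions approximate
            the far-field part of this sum hierarchically to reach O(N)."""
            diff = points[:, None, :] - points[None, :, :]
            dist = np.linalg.norm(diff, axis=-1)
            np.fill_diagonal(dist, np.inf)     # exclude self-interaction
            return (charges[None, :] / dist).sum(axis=1)

        rng = np.random.default_rng(1)
        pts = rng.random((500, 3))
        q = rng.normal(size=500)
        print(direct_potentials(pts, q)[:3])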

  15. Accurate continuous geographic assignment from low- to high-density SNP data.

    PubMed

    Guillot, Gilles; Jónsson, Hákon; Hinge, Antoine; Manchih, Nabil; Orlando, Ludovic

    2016-04-01

    Large-scale genotype datasets can help track the dispersal patterns of epidemiological outbreaks and predict the geographic origins of individuals. Such genetically based geographic assignments also have a range of possible applications in forensics, for profiling both victims and criminals, and in wildlife management, where poaching hotspot areas can be located. They, however, require fast and accurate statistical methods to handle the growing amount of genetic information made available by genotype arrays and next-generation sequencing technologies. We introduce a novel statistical method for geopositioning individuals of unknown origin from genotypes. Our method is based on a geostatistical model trained with a dataset of georeferenced genotypes. Statistical inference under this model can be implemented within the theoretical framework of Integrated Nested Laplace Approximation, which represents one of the major recent breakthroughs in statistics, as it does not require Monte Carlo simulations. We compare the performance of our method with that of an alternative method for geospatial inference, SPA, in a simulation framework. We highlight the accuracy and limits of continuous spatial assignment methods at various scales by analyzing genotype datasets from a diversity of species, including the Florida Scrub-jay Aphelocoma coerulescens, Arabidopsis thaliana, and humans, representing 41-197,146 SNPs. Our method appears best suited to the analysis of medium-sized datasets (a few tens of thousands of loci), such as the reduced-representation sequencing data that are becoming increasingly available in ecology. Availability: http://www2.imm.dtu.dk/∼gigu/Spasiba/. Contact: gilles.b.guillot@gmail.com. Supplementary data are available at Bioinformatics online.
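
    The published model performs geostatistical inference with INLA; purely as an illustration of genotype-based geopositioning, the toy sketch below places a query individual by similarity-weighted averaging of reference coordinates (nothing here reflects the authors' actual model):

        import numpy as np

        def assign_location(query, train_genotypes, train_coords, k=10):
            """Toy continuous assignment: average the coordinates of the k
            genetically most similar reference individuals."""
            # Genetic distance as mean squared allele-count difference (0/1/2).
            d = np.mean((train_genotypes - query) ** 2, axis=1)
            nearest = np.argsort(d)[:k]
            w = 1.0 / (d[nearest] + 1e-9)      # inverse-distance weights
            return (w[:, None] * train_coords[nearest]).sum(axis=0) / w.sum()

        rng = np.random.default_rng(2)
        G = rng.integers(0, 3, size=(200, 1000)).astype(float)   # 200 x 1000 SNPs
        coords = rng.uniform([-10, 40], [10, 60], size=(200, 2)) # lon, lat
        print(assign_location(G[0], G[1:], coords[1:]))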

  16. Nontronite mineral identification in nilgiri hills of tamil nadu using hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Vigneshkumar, M.; Yarakkula, Kiran

    2017-11-01

    Hyperspectral remote sensing is a tool for identifying minerals, complementing field investigation. Tamil Nadu has abundant mineral reserves, including roughly 30% of India's titanium, 52% of its molybdenum, 59% of its garnet, 69% of its dunite, 75% of its vermiculite, and 81% of its lignite. Meeting user and industry requirements calls for mineral extraction, and identifying minerals reliably requires sophisticated tools; hyperspectral remote sensing provides continuous and accurate extraction of earth-surface information. Nontronite is an iron-rich mineral found mainly in the Nilgiri hills of Tamil Nadu, India. Because of the large number of bands, hyperspectral data require several preprocessing steps, such as bad-band removal, destriping, radiance conversion, and atmospheric correction. The atmospheric correction is performed using the FLAASH method. Spectral data reduction is carried out with the minimum noise fraction (MNF) method, and spatial information is reduced using the pixel purity index (PPI) with 10,000 iterations. The selected endmembers are compared with spectral libraries such as USGS, JPL, and JHU; the nontronite mineral matches with a probability of 0.85. Finally, the classification is accomplished using the spectral angle mapper (SAM) method.
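
    The final classification step, the spectral angle mapper, reduces to the angle between a pixel spectrum and a library endmember; a minimal sketch with synthetic spectra (the threshold and data are illustrative):

        import numpy as np

        def spectral_angle(pixel, endmember):
            """SAM: angle in radians between a pixel spectrum and a reference
            endmember; a smaller angle means a better match."""
            cos = np.dot(pixel, endmember) / (np.linalg.norm(pixel) *
                                              np.linalg.norm(endmember))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        def classify_sam(cube, endmembers, max_angle=0.1):
            """Assign each pixel the best-matching endmember index, or -1 if
            no endmember falls within the angular threshold."""
            h, w, bands = cube.shape
            pixels = cube.reshape(-1, bands)
            angles = np.array([[spectral_angle(p, e) for e in endmembers]
                               for p in pixels])
            best = angles.argmin(axis=1)
            best[angles.min(axis=1) > max_angle] = -1
            return best.reshape(h, w)

        rng = np.random.default_rng(3)
        cube = rng.random((4, 4, 50))     # toy 4x4 image with 50 bands
        lib = rng.random((3, 50))         # three toy library spectra
        print(classify_sam(cube, lib, max_angle=1.5))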

  17. Investigation of test methods, material properties and processes for solar cell encapsulants

    NASA Technical Reports Server (NTRS)

    Willis, P. B.

    1985-01-01

    The historical development of ethylene vinyl acetate (EVA) is presented, including the functional requirements, polymer selection, curing, stabilization, production, and module processing. The construction and use of a new method for the accelerated aging of polymers are detailed. The method more closely resembles the conditions that may be encountered in actual module field exposure and may additionally permit service life to be predicted accurately. The use of hardboard as a low-cost candidate substrate material is studied. The performance of surface antisoiling treatments useful for imparting a self-cleaning property to modules is updated.

  18. A fast non-contact imaging photoplethysmography method using a tissue-like model

    NASA Astrophysics Data System (ADS)

    McDuff, Daniel J.; Blackford, Ethan B.; Estepp, Justin R.; Nishidate, Izumi

    2018-02-01

    Imaging photoplethysmography (iPPG) allows non-contact, concomitant measurement and visualization of peripheral blood flow using just an RGB camera. Most iPPG methods require a window of temporal data and complex computation, which makes real-time measurement and spatial visualization impossible. We present a fast, "window-less", non-contact imaging photoplethysmography method, based on a tissue-like model of the skin, that allows accurate measurement of heart rate and heart-rate-variability parameters. The error in heart rate estimates is equivalent to state-of-the-art techniques, and the computation is much faster.
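
    For contrast, a sketch of the conventional windowed baseline the authors improve upon: spatially average the skin pixels per frame, band-pass the trace, and read heart rate off the dominant spectral peak over a multi-second window (this is the generic approach, not the tissue-like-model method):

        import numpy as np
        from scipy.signal import butter, filtfilt

        def windowed_hr(green_trace, fs=30.0):
            """Baseline windowed iPPG: band-pass the spatially averaged
            green-channel trace to the plausible heart-rate band, then take
            the dominant FFT frequency."""
            b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
            filt = filtfilt(b, a, green_trace - green_trace.mean())
            spec = np.abs(np.fft.rfft(filt))
            freqs = np.fft.rfftfreq(len(filt), 1.0 / fs)
            return 60.0 * freqs[spec.argmax()]    # beats per minute

        # Synthetic 10 s trace at 30 fps: a 72 bpm pulse plus noise.
        t = np.arange(0, 10, 1 / 30.0)
        trace = (0.5 * np.sin(2 * np.pi * 1.2 * t)
                 + np.random.default_rng(4).normal(0, 0.2, t.size))
        print(round(windowed_hr(trace)))   # ~72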

  19. Development and application of a local linearization algorithm for the integration of quaternion rate equations in real-time flight simulation problems

    NASA Technical Reports Server (NTRS)

    Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.

    1973-01-01

    High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local-linearization procedure for integrating dynamic-system equations on a digital computer in real time. The procedure is applied specifically to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local-linearization approach is shown to have desirable stability characteristics and gives a significant improvement in accuracy over classical second-order integration methods.
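
    A minimal sketch of the local-linearization idea for the quaternion kinematics q' = (1/2) Omega(omega) q: hold the body rates constant over each step and apply the exact closed-form matrix exponential, rather than a truncated classical update (an illustration of the approach, not the paper's exact algorithm):

        import numpy as np

        def omega_matrix(w):
            """4x4 skew matrix Omega(w) for q = [q0, q1, q2, q3], scalar first."""
            wx, wy, wz = w
            return np.array([[0.0, -wx, -wy, -wz],
                             [wx,  0.0,  wz, -wy],
                             [wy, -wz,  0.0,  wx],
                             [wz,  wy, -wx,  0.0]])

        def step_local_linearization(q, w, dt):
            """Exact solution of q' = 0.5*Omega(w)*q for w held constant:
            q_next = [cos(|w|dt/2) I + sin(|w|dt/2)/|w| * Omega(w)] q."""
            n = np.linalg.norm(w)
            if n < 1e-12:
                return q
            half = 0.5 * n * dt
            return (np.cos(half) * np.eye(4)
                    + (np.sin(half) / n) * omega_matrix(w)) @ q

        q = np.array([1.0, 0.0, 0.0, 0.0])
        w = np.array([0.0, 0.0, np.pi])       # rad/s, a fast yaw rate
        for _ in range(100):                  # 1 s at 100 Hz
            q = step_local_linearization(q, w, 0.01)
        print(q, np.linalg.norm(q))  # ~[0, 0, 0, 1]: 180 deg yaw, unit norm kept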

  20. Image analysis methods for assessing levels of image plane nonuniformity and stochastic noise in a magnetic resonance image of a homogeneous phantom.

    PubMed

    Magnusson, P; Olsson, L E

    2000-08-01

    Magnetic resonance image-plane nonuniformity and stochastic noise are properties that greatly influence the outcome of quantitative magnetic resonance imaging (MRI) evaluations, such as gel-dosimetry measurements using MRI. To study these properties, robust and accurate image analysis methods are required. New nonuniformity-level assessment methods were designed, since previous methods were found to be insufficiently robust and accurate. The new and previously reported nonuniformity-level assessment methods were analyzed with respect to, for example, insensitivity to stochastic noise, and previously reported stochastic-noise-level assessment methods with respect to insensitivity to nonuniformity. Using the same image data, different methods were found to assess significantly different levels of nonuniformity. Nonuniformity levels obtained using methods that count pixels in an intensity interval and those obtained using methods that use only intensity values were found not to be comparable. The latter were found preferable, since they assess the quantity intrinsically sought. A new method, which calculates a deviation image with every pixel representing the deviation from a reference intensity, was least sensitive to stochastic noise. Furthermore, unlike any other analyzed method, it includes all intensity variations across the phantom area and allows for studies of nonuniformity shapes. This new method was designed for accurate studies of nonuniformities in gel-dosimetry measurements, but could also be used with benefit in quality assurance and acceptance testing of MRI, scintillation camera, and computed tomography systems. The stochastic noise level was found to be greatly method dependent. Two methods were found to be insensitive to nonuniformity and also simple to use in practice. One assesses the stochastic noise level as the average of the levels at five different positions within the phantom area, and the other assesses the stochastic noise in a region outside the phantom area.
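
    A minimal sketch of the deviation-image idea, assuming a uniform-phantom image and a reference intensity taken as the mean of a small central region (all names and numbers are illustrative):

        import numpy as np

        def deviation_image(img, mask, ref_box=5):
            """Deviation image: every pixel inside the phantom mask expressed
            as a percentage deviation from a central reference intensity."""
            cy, cx = np.array(img.shape) // 2
            r = ref_box // 2
            ref = img[cy - r:cy + r + 1, cx - r:cx + r + 1].mean()
            dev = np.full(img.shape, np.nan)
            dev[mask] = 100.0 * (img[mask] - ref) / ref
            return dev

        rng = np.random.default_rng(5)
        img = 1000.0 + rng.normal(0, 10, (64, 64))     # toy uniform phantom
        img += np.linspace(0, 50, 64)[None, :]         # synthetic nonuniformity
        mask = np.ones_like(img, dtype=bool)
        dev = deviation_image(img, mask)
        print(np.nanmax(np.abs(dev)))   # worst-case percentage deviation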
