Chen, X.; Ashcroft, I. A.; Wildman, R. D.; Tuck, C. J.
2015-01-01
A method using experimental nanoindentation and inverse finite-element analysis (FEA) has been developed that enables the spatial variation of material constitutive properties to be accurately determined. The method was used to measure property variation in a three-dimensional printed (3DP) polymeric material. The accuracy of the method depends on the applicability of the constitutive model used in the inverse FEA; hence, four candidate material models (viscoelastic, viscoelastic–viscoplastic, nonlinear viscoelastic and nonlinear viscoelastic–viscoplastic) were evaluated, with the last enabling the best fit to experimental data. Significant changes in material properties were seen in the depth direction of the 3DP sample, which could be linked to the degree of cross-linking within the material, a feature inherent in a UV-cured layer-by-layer construction method. It is proposed that the method is a powerful tool in the analysis of manufacturing processes with potential spatial property variation that will also enable the accurate prediction of final manufactured part performance. PMID:26730216
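The inverse-analysis idea above can be sketched in miniature: choose the constitutive parameter whose simulated response best matches the measured indentation curve. In the paper a full FEA plays the role of the forward model; here a one-parameter closed-form loading curve (P = C·h², a Kick's-law-like relation) stands in for it, and all names and data are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of inverse identification: fit a single stiffness-like
# parameter C in a loading-curve model P = C * h**2 to "measured" indentation
# data by minimizing the squared misfit over a grid of candidate values.

def load_curve(C, depths):
    """Model indentation load P at each depth h (P = C * h**2)."""
    return [C * h**2 for h in depths]

def misfit(C, depths, measured):
    """Sum of squared differences between model and measured loads."""
    return sum((p - m)**2 for p, m in zip(load_curve(C, depths), measured))

def fit_stiffness(depths, measured, c_grid):
    """Grid-search the parameter value minimizing the misfit."""
    return min(c_grid, key=lambda C: misfit(C, depths, measured))

# Synthetic "experiment" generated with C = 70, then recovered by the search.
depths = [0.1 * i for i in range(1, 11)]
measured = [70.0 * h**2 for h in depths]
best_C = fit_stiffness(depths, measured, [60 + 0.5 * k for k in range(41)])
```

In the real method the grid search would be replaced by a gradient-based optimizer driving repeated FEA runs, but the structure (forward model, misfit, minimization) is the same.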
Real-time, haptics-enabled simulator for probing ex vivo liver tissue.
Lister, Kevin; Gao, Zhan; Desai, Jaydev P
2009-01-01
The advent of complex surgical procedures has driven the need for realistic surgical training simulators. Comprehensive simulators that provide realistic visual and haptic feedback during surgical tasks are required to familiarize surgeons with the procedures they are to perform. Complex organ geometry inherent to biological tissues and intricate material properties drive the need for finite element methods to assure accurate tissue displacement and force calculations. Advances in real-time finite element methods have not reached the state where they are applicable to soft tissue surgical simulation. Therefore a real-time, haptics-enabled simulator for probing of soft tissue has been developed which utilizes preprocessed finite element data (derived from accurate constitutive model of the soft-tissue obtained from carefully collected experimental data) to accurately replicate the probing task in real-time.
Macintyre, Lisa
2011-11-01
Accurate measurement of the pressure delivered by medical compression products is highly desirable both in monitoring treatment and in developing new pressure inducing garments or products. There are several complications in measuring pressure at the garment/body interface and at present no ideal pressure measurement tool exists for this purpose. This paper summarises a thorough evaluation of the accuracy and reproducibility of measurements taken following both of Tekscan Inc.'s recommended calibration procedures for I-scan sensors; and presents an improved method for calibrating and using I-scan pressure sensors. The proposed calibration method enables accurate (±2.1 mmHg) measurement of pressures delivered by pressure garments to body parts with a circumference ≥30 cm. This method is too cumbersome for routine clinical use but is very useful, accurate and reproducible for product development or clinical evaluation purposes. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.
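The calibration step described above can be illustrated generically: map raw sensor output to reference pressures with a least-squares linear fit, then apply that map to new readings. This is a minimal sketch of the idea only; the actual I-scan calibration procedure is more involved, and every number below is invented.

```python
# Illustrative two-step sensor calibration: fit a linear map from raw sensor
# readings to known reference pressures (mmHg) by ordinary least squares,
# then use it to convert a new reading.

def fit_linear(raw, reference):
    """Least-squares slope and intercept mapping raw readings to mmHg."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(reference) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, reference))
    sxx = sum((x - mx)**2 for x in raw)
    slope = sxy / sxx
    return slope, my - slope * mx

def calibrate(reading, slope, intercept):
    """Convert a raw reading to a pressure estimate in mmHg."""
    return slope * reading + intercept

raw = [120.0, 240.0, 360.0, 480.0]   # raw sensor output at known loads
ref = [10.0, 20.0, 30.0, 40.0]       # reference pressures, mmHg
slope, intercept = fit_linear(raw, ref)
pressure = calibrate(300.0, slope, intercept)
```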
Robotic CCD microscope for enhanced crystal recognition
Segelke, Brent W.; Toppani, Dominique
2007-11-06
A robotic CCD microscope and procedures to automate crystal recognition. The robotic CCD microscope and procedures enable more accurate crystal recognition, leading to fewer false negatives and fewer false positives, and enable detection of smaller crystals than is possible with other methods available today.
ERIC Educational Resources Information Center
Beare, R. A.
2008-01-01
Professional astronomers use specialized software not normally available to students to determine the rotation periods of asteroids from fragmented light curve data. This paper describes a simple yet accurate method based on Microsoft Excel[R] that enables students to find periods in asteroid light curve and other discontinuous time series data of…
Kinetographic determination of airplane flight characteristics
NASA Technical Reports Server (NTRS)
Raethjen, P; Knott, H
1927-01-01
The author's first experiments with a glider on flight characteristics demonstrated that an accurate flight-path measurement would enable determination of the polar diagram from a gliding flight. Since then he has endeavored to obtain accurate flight measurements by means of kinetograph (motion-picture camera). Different methods of accomplishing this are presented.
Computer Aided Evaluation of Higher Education Tutors' Performance
ERIC Educational Resources Information Center
Xenos, Michalis; Papadopoulos, Thanos
2007-01-01
This article presents a method for computer-aided tutor evaluation: Bayesian Networks are used for organizing the collected data about tutors and for enabling accurate estimations and predictions about future tutor behavior. The model provides indications about each tutor's strengths and weaknesses, which enables the evaluator to exploit strengths…
Sakamoto, Takuya; Imasaka, Ryohei; Taki, Hirofumi; Sato, Toru; Yoshioka, Mototaka; Inoue, Kenichi; Fukuda, Takeshi; Sakai, Hiroyuki
2016-04-01
The objectives of this paper are to propose a method that can accurately estimate the human heart rate (HR) using an ultrawideband (UWB) radar system, and to determine the performance of the proposed method through measurements. The proposed method uses the feature points of a radar signal to estimate the HR efficiently and accurately. Fourier- and periodicity-based methods are inappropriate for estimation of instantaneous HRs in real time because heartbeat waveforms are highly variable, even within the beat-to-beat interval. We define six radar waveform features that enable correlation processing to be performed quickly and accurately. In addition, we propose a feature topology signal that is generated from a feature sequence without using amplitude information. This feature topology signal is used to find unreliable feature points, and thus, to suppress inaccurate HR estimates. Measurements were taken using UWB radar, while simultaneously performing electrocardiography measurements in an experiment that was conducted on nine participants. The proposed method achieved an average root-mean-square error in the interbeat interval of 7.17 ms for the nine participants. The results demonstrate the effectiveness and accuracy of the proposed method. The significance of this study for biomedical research is that the proposed method will be useful in the realization of a remote vital signs monitoring system that enables accurate estimation of HR variability, which has been used in various clinical settings for the treatment of conditions such as diabetes and arterial hypertension.
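The interbeat-interval (IBI) accuracy figure quoted above can be made concrete with a small sketch: given the times of detected heartbeat feature points, form successive differences and compare them with reference (ECG-derived) intervals via root-mean-square error. The radar feature-detection step itself is omitted, and the timing data below are synthetic, not from the study.

```python
# Minimal IBI comparison: successive differences of feature-point times,
# scored against a reference by root-mean-square error.

def interbeat_intervals(times):
    """Successive differences of heartbeat feature-point times (seconds)."""
    return [b - a for a, b in zip(times, times[1:])]

def rmse(xs, ys):
    """Root-mean-square error between two equal-length sequences."""
    return (sum((x - y)**2 for x, y in zip(xs, ys)) / len(xs)) ** 0.5

radar_times = [0.00, 0.82, 1.63, 2.46, 3.27]   # detected from radar (synthetic)
ecg_times   = [0.00, 0.81, 1.64, 2.45, 3.27]   # simultaneous ECG reference
err = rmse(interbeat_intervals(radar_times), interbeat_intervals(ecg_times))
```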
A comparison of two methods for quantifying parasitic nematode fecundity
USDA-ARS?s Scientific Manuscript database
Accurate measures of nematode fecundity can provide important information for investigating parasite life history evolution, transmission potential, and effects on host health. Understanding differences among fecundity assessment protocols and standardizing methods, where possible, will enable compa...
Robust and accurate vectorization of line drawings.
Hilaire, Xavier; Tombre, Karl
2006-06-01
This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
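The random-sampling segmentation mentioned above is in the spirit of RANSAC-style fitting, which a short sketch can illustrate: repeatedly hypothesize a line from two sampled skeleton points and keep the hypothesis with the most inliers. The paper's actual primitive fitting and feasibility-domain computation are richer than this; the code shows only the sampling idea, on invented points.

```python
# RANSAC-style line fitting: sample point pairs, count inliers within a
# perpendicular-distance tolerance, keep the best hypothesis.
import random

def line_through(p, q):
    """Line a*x + b*y + c = 0 through distinct points p and q (unit normal)."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    norm = (a * a + b * b) ** 0.5
    return a / norm, b / norm, -(a * x1 + b * y1) / norm

def ransac_line(points, trials=200, tol=0.5, seed=0):
    """Return the sampled line with the most inliers, and the inlier count."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(trials):
        p, q = rng.sample(points, 2)       # two distinct points
        a, b, c = line_through(p, q)
        inliers = sum(abs(a * x + b * y + c) <= tol for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = (a, b, c), inliers
    return best, best_inliers

# Ten collinear points (y = x) plus two outliers.
pts = [(i, i) for i in range(10)] + [(0, 9), (9, 0)]
line, n_in = ransac_line(pts)
```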
Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François
2017-02-15
In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.
Enabling multiplexed testing of pooled donor cells through whole-genome sequencing.
Chan, Yingleong; Chan, Ying Kai; Goodman, Daniel B; Guo, Xiaoge; Chavez, Alejandro; Lim, Elaine T; Church, George M
2018-04-19
We describe a method that enables the multiplex screening of a pool of many different donor cell lines. Our method accurately predicts each donor proportion from the pool without requiring the use of unique DNA barcodes as markers of donor identity. Instead, we take advantage of common single nucleotide polymorphisms, whole-genome sequencing, and an algorithm to calculate the proportions from the sequencing data. By testing using simulated and real data, we showed that our method robustly predicts the individual proportions from a mixed-pool of numerous donors, thus enabling the multiplexed testing of diverse donor cells en masse. More information is available at https://pgpresearch.med.harvard.edu/poolseq/.
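The proportion-estimation idea can be sketched in its simplest, two-donor form: each donor contributes known per-SNP allele frequencies (0, 0.5 or 1 for a diploid individual), and the mixing proportion is the one whose predicted pooled frequencies best match the observed ones. The published method solves the general many-donor case with a proper algorithm; this grid search is a toy stand-in on made-up data.

```python
# Two-donor proportion estimation: find the mixing proportion p minimizing
# the squared error between observed and predicted pooled allele frequencies.

def predicted(p, f1, f2):
    """Pooled allele frequencies when donor 1 contributes proportion p."""
    return [p * a + (1 - p) * b for a, b in zip(f1, f2)]

def best_proportion(observed, f1, f2, steps=1000):
    """Grid-search the donor-1 proportion best explaining the pool."""
    def sse(p):
        return sum((o - e)**2 for o, e in zip(observed, predicted(p, f1, f2)))
    return min((k / steps for k in range(steps + 1)), key=sse)

donor1 = [0.0, 0.5, 1.0, 0.5, 0.0]   # per-SNP allele frequencies, donor 1
donor2 = [1.0, 0.0, 0.5, 0.5, 0.5]   # per-SNP allele frequencies, donor 2
observed = predicted(0.3, donor1, donor2)   # pool is 30% donor 1
p_hat = best_proportion(observed, donor1, donor2)
```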
Parameterizing Coefficients of a POD-Based Dynamical System
NASA Technical Reports Server (NTRS)
Kalb, Virginia L.
2010-01-01
A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation.
Parameter-continuation software can be used on the parameterized dynamical system to derive a bifurcation diagram that accurately predicts the temporal flow behavior.
Browning, Brian L.; Browning, Sharon R.
2009-01-01
We present methods for imputing data for ungenotyped markers and for inferring haplotype phase in large data sets of unrelated individuals and parent-offspring trios. Our methods make use of known haplotype phase when it is available, and our methods are computationally efficient so that the full information in large reference panels with thousands of individuals is utilized. We demonstrate that substantial gains in imputation accuracy accrue with increasingly large reference panel sizes, particularly when imputing low-frequency variants, and that unphased reference panels can provide highly accurate genotype imputation. We place our methodology in a unified framework that enables the simultaneous use of unphased and phased data from trios and unrelated individuals in a single analysis. For unrelated individuals, our imputation methods produce well-calibrated posterior genotype probabilities and highly accurate allele-frequency estimates. For trios, our haplotype-inference method is four orders of magnitude faster than the gold-standard PHASE program and has excellent accuracy. Our methods enable genotype imputation to be performed with unphased trio or unrelated reference panels, thus accounting for haplotype-phase uncertainty in the reference panel. We present a useful measure of imputation accuracy, allelic R2, and show that this measure can be estimated accurately from posterior genotype probabilities. Our methods are implemented in version 3.0 of the BEAGLE software package. PMID:19200528
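The dosage-based accuracy idea mentioned above can be illustrated generically: posterior genotype probabilities are collapsed into expected allele dosages, which are then compared with the true genotypes via a squared correlation. This is in the spirit of, but not identical to, BEAGLE's allelic R2 measure; all probabilities and genotypes below are invented for illustration.

```python
# Dosage-based imputation accuracy: expected ALT-allele counts from posterior
# genotype probabilities, scored against truth by squared Pearson correlation.

def dosage(probs):
    """Expected ALT-allele count from (P(RR), P(RA), P(AA))."""
    p_rr, p_ra, p_aa = probs
    return p_ra + 2.0 * p_aa

def r_squared(xs, ys):
    """Squared Pearson correlation between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx)**2 for x in xs)
    syy = sum((y - my)**2 for y in ys)
    return sxy * sxy / (sxx * syy)

posteriors = [(0.9, 0.1, 0.0), (0.1, 0.8, 0.1), (0.0, 0.2, 0.8), (0.7, 0.3, 0.0)]
truth = [0, 1, 2, 0]                       # true ALT-allele counts
dosages = [dosage(p) for p in posteriors]
acc = r_squared(dosages, truth)
```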
Profitable capitation requires accurate costing.
West, D A; Hicks, L L; Balas, E A; West, T D
1996-01-01
In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages.
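A worked toy example makes the contrast concrete: aggregate costing spreads total cost evenly over treatments, while activity-based costing builds each treatment's cost from the resources it actually consumed. All figures below are invented for illustration.

```python
# Aggregate vs activity-based costing on two very different treatments.

def aggregate_cost(total_cost, n_treatments):
    """Average cost per treatment, ignoring resource differences."""
    return total_cost / n_treatments

def abc_cost(minutes, supplies, rate_per_min, overhead_per_treatment):
    """Cost built up from nursing time, supplies, and allocated overhead."""
    return minutes * rate_per_min + supplies + overhead_per_treatment

treatments = [
    {"minutes": 15, "supplies": 5.0},    # quick dressing change
    {"minutes": 90, "supplies": 40.0},   # complex wound care
]
detailed = [abc_cost(t["minutes"], t["supplies"], 0.8, 10.0) for t in treatments]
avg = aggregate_cost(sum(detailed), len(treatments))
```

Here the average (74.5) overstates the cheap treatment and understates the expensive one, which is exactly the distortion that matters when bidding capitated rates.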
PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences.
Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong; Warnow, Tandy
2015-05-01
We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate--slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory.
Deng, Yong; Luo, Zhaoyang; Jiang, Xu; Xie, Wenhao; Luo, Qingming
2015-07-01
We propose a method based on a decoupled fluorescence Monte Carlo model for constructing fluorescence Jacobians to enable accurate quantification of fluorescence targets within turbid media. The effectiveness of the proposed method is validated using two cylindrical phantoms enclosing fluorescent targets within homogeneous and heterogeneous background media. The results demonstrate that our method can recover relative concentrations of the fluorescent targets with higher accuracy than the perturbation fluorescence Monte Carlo method. This suggests that our method is suitable for quantitative fluorescence diffuse optical tomography, especially for in vivo imaging of fluorophore targets for diagnosis of different diseases and abnormalities.
Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2001-01-01
A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.
What can formal methods offer to digital flight control systems design
NASA Technical Reports Server (NTRS)
Good, Donald I.
1990-01-01
Formal methods research is beginning to produce methods that will enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional, continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of extensive mathematical modeling which are common in other parts of flight system engineering. Formal methods research shows that by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing risk of unsafe flight control.
Collins, Kodi; Warnow, Tandy
2018-06-19
PASTA is a multiple sequence alignment method that uses divide-and-conquer plus iteration to enable base alignment methods to scale with high accuracy to large sequence datasets. By default, PASTA includes MAFFT L-INS-i; our new extension of PASTA enables the use of MAFFT G-INS-i, MAFFT Homologs, CONTRAlign, and ProbCons. We analyzed the performance of each base method and PASTA using these base methods on 224 datasets from BAliBASE 4 with at least 50 sequences. We show that PASTA enables the most accurate base methods to scale to larger datasets at reduced computational effort, and generally improves alignment and tree accuracy on the largest BAliBASE datasets. PASTA is available at https://github.com/kodicollins/pasta and has also been integrated into the original PASTA repository at https://github.com/smirarab/pasta. Supplementary data are available at Bioinformatics online.
Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification.
Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried; De Vos, Winnok H
2017-01-01
A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.
Detection of blur artifacts in histopathological whole-slide images of endomyocardial biopsies.
Hang Wu; Phan, John H; Bhatia, Ajay K; Cundiff, Caitlin A; Shehata, Bahig M; Wang, May D
2015-01-01
Histopathological whole-slide images (WSIs) have emerged as an objective and quantitative means for image-based disease diagnosis. However, WSIs may contain acquisition artifacts that affect downstream image feature extraction and quantitative disease diagnosis. We develop a method for detecting blur artifacts in WSIs using distributions of local blur metrics. As features, these distributions enable accurate classification of WSI regions as sharp or blurry. We evaluate our method using over 1000 portions of an endomyocardial biopsy (EMB) WSI. Results indicate that local blur metrics accurately detect blurry image regions.
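One common local blur metric, used here as a generic illustration, is the variance of a Laplacian response: it is high over sharp detail and collapses toward zero on blurred regions, so thresholding it classifies tiles. The paper aggregates distributions of such metrics with a trained classifier; the threshold and toy images below are assumptions for the sketch.

```python
# Laplacian-variance blur metric over an image tile (list of pixel rows),
# with a simple threshold classifier for "blurry" vs "sharp".

def laplacian_variance(tile):
    """Variance of a 4-neighbour Laplacian over interior pixels."""
    vals = []
    for i in range(1, len(tile) - 1):
        for j in range(1, len(tile[0]) - 1):
            lap = (4 * tile[i][j] - tile[i - 1][j] - tile[i + 1][j]
                   - tile[i][j - 1] - tile[i][j + 1])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean)**2 for v in vals) / len(vals)

def is_blurry(tile, threshold=10.0):
    """Classify a tile as blurry when its Laplacian variance is low."""
    return laplacian_variance(tile) < threshold

sharp = [[255 * ((i + j) % 2) for j in range(8)] for i in range(8)]  # checkerboard
blurry = [[128 for _ in range(8)] for _ in range(8)]                  # flat patch
```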
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Gregory; Mistrick, Ph.D., Richard; Lee, Eleanor
2011-01-21
We describe two methods which rely on bidirectional scattering distribution functions (BSDFs) to model the daylighting performance of complex fenestration systems (CFS), enabling greater flexibility and accuracy in evaluating arbitrary assemblies of glazing, shading, and other optically-complex coplanar window systems. Two tools within Radiance enable a) efficient annual performance evaluations of CFS, and b) accurate renderings of CFS despite the loss of spatial resolution associated with low-resolution BSDF datasets for inhomogeneous systems. Validation, accuracy, and limitations of the methods are discussed.
Sequencing small genomic targets with high efficiency and extreme accuracy
Schmitt, Michael W.; Fox, Edward J.; Prindle, Marc J.; Reid-Bayliss, Kate S.; True, Lawrence D.; Radich, Jerald P.; Loeb, Lawrence A.
2015-01-01
The detection of minority variants in mixed samples demands methods for enrichment and accurate sequencing of small genomic intervals. We describe an efficient approach based on sequential rounds of hybridization with biotinylated oligonucleotides, enabling more than one-million fold enrichment of genomic regions of interest. In conjunction with error correcting double-stranded molecular tags, our approach enables the quantification of mutations in individual DNA molecules. PMID:25849638
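The error-correction idea behind molecular tags can be sketched simply: reads sharing a tag derive from one original DNA molecule, so a per-position majority consensus within each tag group suppresses isolated sequencing errors. The paper's duplex (double-stranded) consensus is more stringent than this single-strand majority vote; the reads below are synthetic.

```python
# Tag-grouped consensus calling: group reads by molecular tag, then take a
# per-position majority base within each group.
from collections import Counter, defaultdict

def consensus(reads):
    """Per-position majority base across equal-length reads."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*reads))

def tag_consensus(tagged_reads):
    """Group (tag, sequence) pairs by tag and call one consensus per tag."""
    groups = defaultdict(list)
    for tag, seq in tagged_reads:
        groups[tag].append(seq)
    return {tag: consensus(seqs) for tag, seqs in groups.items()}

reads = [
    ("AAT", "ACGTACGT"),
    ("AAT", "ACGTACGA"),   # one sequencing error at the last base
    ("AAT", "ACGTACGT"),
    ("GGC", "TTTTCCCC"),
]
calls = tag_consensus(reads)
```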
Droplet Digital™ PCR Next-Generation Sequencing Library QC Assay.
Heredia, Nicholas J
2018-01-01
Digital PCR is a valuable tool for quantifying next-generation sequencing (NGS) libraries precisely and accurately. Accurate quantification of NGS libraries enables accurate loading of the libraries onto the sequencer and thus improves sequencing performance by reducing underloading and overloading errors. Accurate quantification also benefits users by enabling uniform loading of indexed/barcoded libraries, which in turn greatly improves sequencing uniformity of the indexed/barcoded samples. The advantages gained by employing the Droplet Digital PCR (ddPCR™) library QC assay include precise and accurate quantification in addition to size quality assessment, enabling users to QC their sequencing libraries with confidence.
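The Poisson statistics underlying droplet digital PCR can be shown in a few lines: from the fraction of negative droplets, the mean copies per droplet is lambda = -ln(negatives/total), which gives a concentration once the droplet volume is known. The droplet volume and counts below are illustrative assumptions, not instrument specifications or assay values.

```python
# Poisson occupancy correction for digital PCR: concentration from the
# negative-droplet fraction and an assumed droplet volume.
import math

def copies_per_microliter(n_negative, n_total, droplet_volume_nl=0.85):
    """Concentration estimate (copies/uL) from a droplet count."""
    lam = -math.log(n_negative / n_total)      # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)    # nL -> uL

conc = copies_per_microliter(n_negative=12000, n_total=20000)
```

The logarithm is what corrects for droplets that received more than one copy, which a naive positive-droplet count would miss.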
Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.
Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J
2018-06-01
Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.
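The underlying intuition of phase-shift quantification can be illustrated with a generic cross-correlation sketch: find the circular lag that best aligns pre- and post-stimulus records sampled over one cycle. TIPA itself is a different and more careful procedure; this code only conveys what "estimating a phase shift" means, on synthetic sinusoidal data.

```python
# Phase shift via circular cross-correlation: the lag maximizing the inner
# product between a shifted pre-stimulus record and the post-stimulus record.
import math

def circular_shift(xs, lag):
    """Rotate a sequence right by `lag` samples."""
    return xs[-lag:] + xs[:-lag] if lag else list(xs)

def best_lag(pre, post):
    """Lag (in samples) whose shift of `pre` best matches `post`."""
    def score(lag):
        shifted = circular_shift(pre, lag)
        return sum(a * b for a, b in zip(shifted, post))
    return max(range(len(pre)), key=score)

n = 24                                    # hourly samples over one cycle
pre = [math.sin(2 * math.pi * t / n) for t in range(n)]
post = [math.sin(2 * math.pi * (t - 3) / n) for t in range(n)]  # 3 h delay
shift_hours = best_lag(pre, post)
```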
Testing a simple field method for assessing nitrate removal in riparian zones
Philippe Vidon; Michael G. Dosskey
2008-01-01
Being able to identify riparian sites that function better for nitrate removal from groundwater is critical to using riparian zones efficiently for water quality management. For this purpose, managers need a method that is quick, inexpensive, and accurate enough to enable effective management decisions. This study assesses the precision and accuracy of a simple...
Efficacy of curtailment announcements as a predictor of lumber supply
Henry Spelter
2001-01-01
A practical method for tracking the effect of curtailment announcements on lumber supply is described and tested. Combining announcements of closures and curtailments with mill capacities enables the creation of accurate forward-looking assessments of lumber supply 1 to 2 months into the future. For three American and Canadian lumber- producing regions, the method...
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated DCP raw image, which is corrected for scattering effects by canopies, and a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs from 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
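Once an image has been segmented into sky and canopy, the core gap-fraction computation is just the sky-pixel share, which a minimal sketch can show. The paper's actual contribution (scattering correction and sky reconstruction from raw images) happens before this step; the brightness threshold and toy image below are invented for illustration.

```python
# Gap fraction as the fraction of pixels classified as sky in a toy
# brightness image (rows of pixel values).

def gap_fraction(image, sky_threshold=200):
    """Fraction of pixels brighter than the sky threshold."""
    pixels = [p for row in image for p in row]
    sky = sum(p > sky_threshold for p in pixels)
    return sky / len(pixels)

# 4x4 toy brightness image: 5 bright sky pixels among 16.
image = [
    [250, 250,  40,  30],
    [240,  35,  20,  25],
    [ 30, 245,  22, 250],
    [ 28,  26,  24,  21],
]
gf = gap_fraction(image)
```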
ERIC Educational Resources Information Center
Mulik, James D.; Sawicki, Eugene
1979-01-01
Accurate for the analysis of ions in solution, this form of analysis enables the analyst to directly assay many compounds that previously were difficult or impossible to analyze. The method is a combination of the methodologies of ion exchange, liquid chromatography, and conductimetric determination with eluant suppression. (Author/RE)
Stability of rigid rotors supported by air foil bearings: Comparison of two fundamental approaches
NASA Astrophysics Data System (ADS)
Larsen, Jon S.; Santos, Ilmar F.; von Osmanski, Sebastian
2016-10-01
High speed direct drive motors enable the use of Air Foil Bearings (AFB) in a wide range of applications due to the elimination of gear forces. Unfortunately, AFB supported rotors are lightly damped, and an accurate prediction of their Onset Speed of Instability (OSI) is therefore important. This paper compares two fundamental methods for predicting the OSI. One is based on a nonlinear time domain simulation and another is based on a linearised frequency domain method and a perturbation of the Reynolds equation. Both methods are based on equivalent models and should predict similar results. Significant discrepancies are observed leading to the question, is the classical frequency domain method sufficiently accurate? The discrepancies and possible explanations are discussed in detail.
Estimating cardiac fiber orientations in pig hearts using registered ultrasound and MR image volumes
NASA Astrophysics Data System (ADS)
Dormer, James D.; Meng, Yuguang; Zhang, Xiaodong; Jiang, Rong; Wagner, Mary B.; Fei, Baowei
2017-03-01
Heart fiber mechanics can be important predictors of current and future cardiac function. Accurate knowledge of these mechanics could enable cardiologists to provide a diagnosis before conditions progress. Magnetic resonance diffusion tensor imaging (MR-DTI) has been used to determine cardiac fiber orientations. Ultrasound is capable of providing anatomical information in real time, enabling a physician to quickly adjust parameters to optimize image scans. If known fiber orientations from a template heart measured using DTI can be accurately deformed onto a cardiac ultrasound volume, fiber orientations could be estimated for the patient without the need for a costly MR scan while still providing cardiologists valuable information about the heart mechanics. In this study, we apply the method to pig hearts, which are a close representation of human heart anatomy. Experiments on pig hearts show that the registration method achieved an average Dice similarity coefficient (DSC) of 0.819 ± 0.050 between the ultrasound and deformed MR volumes and that the proposed ultrasound-based method is able to estimate the cardiac fiber orientation in pig hearts.
Identity-by-Descent-Based Phasing and Imputation in Founder Populations Using Graphical Models
Palin, Kimmo; Campbell, Harry; Wright, Alan F; Wilson, James F; Durbin, Richard
2011-01-01
Accurate knowledge of haplotypes, the combination of alleles co-residing on a single copy of a chromosome, enables powerful gene mapping and sequence imputation methods. Since humans are diploid, haplotypes must be derived from genotypes by a phasing process. In this study, we present a new computational model for haplotype phasing based on pairwise sharing of haplotypes inferred to be Identical-By-Descent (IBD). We apply the Bayesian network based model in a new phasing algorithm, called systematic long-range phasing (SLRP), that can capitalize on the close genetic relationships in isolated founder populations, and show with simulated and real genome-wide genotype data that SLRP substantially reduces the rate of phasing errors compared to previous phasing algorithms. Furthermore, the method accurately identifies regions of IBD, enabling linkage-like studies without pedigrees, and can be used to impute most genotypes with very low error rate. Genet. Epidemiol. 35:853-860, 2011. © 2011 Wiley Periodicals, Inc. PMID:22006673
Spectral estimation of received phase in the presence of amplitude scintillation
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.
1988-01-01
A technique is demonstrated for obtaining the spectral parameters of the received carrier phase in the presence of carrier amplitude scintillation, by means of a digital phase-locked loop. Since the random amplitude fluctuations generate time-varying loop characteristics, straightforward processing of the phase detector output does not provide accurate results. The method developed here performs a time-varying inverse filtering operation on the corrupted observables, thus recovering the original phase process and enabling accurate estimation of its underlying parameters.
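A highly simplified sketch of the idea, assuming a hypothetical model in which the corrupted observable is the true phase scaled by a known time-varying gain; the time-varying inverse filtering then reduces to dividing out that gain.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
true_phase = np.cumsum(rng.normal(0, 0.01, n))     # random-walk phase process
gain = 0.5 + 0.4 * np.sin(np.linspace(0, 20, n))   # time-varying amplitude effect
observed = gain * true_phase                       # corrupted observable

recovered = observed / gain   # time-varying inverse operation
print(np.allclose(recovered, true_phase))  # -> True
```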
NASA Astrophysics Data System (ADS)
Diodato, A.; Cafarelli, A.; Schiappacasse, A.; Tognarelli, S.; Ciuti, G.; Menciassi, A.
2018-02-01
High intensity focused ultrasound (HIFU) is an emerging therapeutic solution that enables non-invasive treatment of several pathologies, mainly in oncology. On the other hand, accurate targeting of moving abdominal organs (e.g. liver, kidney, pancreas) is still an open challenge. This paper proposes a novel method to compensate the physiological respiratory motion of organs during HIFU procedures, by exploiting a robotic platform for ultrasound-guided HIFU surgery provided with a therapeutic annular phased array transducer. The proposed method enables us to keep the same contact point between the transducer and the patient’s skin during the whole procedure, thus minimizing the modification of the acoustic window during the breathing phases. The motion of the target point is compensated through the rotation of the transducer around a virtual pivot point, while the focal depth is continuously adjusted thanks to the axial electronically steering capabilities of the HIFU transducer. The feasibility of the angular motion compensation strategy has been demonstrated in a simulated respiratory-induced organ motion environment. Based on the experimental results, the proposed method appears to be significantly accurate (i.e. the maximum compensation error is always under 1 mm), thus paving the way for the potential use of this technique for in vivo treatment of moving organs, and therefore enabling a wide use of HIFU in clinics.
NASA Astrophysics Data System (ADS)
Graus, Matthew S.; Neumann, Aaron K.; Timlin, Jerilyn A.
2017-01-01
Fungi in the Candida genus are the most common fungal pathogens. They not only cause high morbidity and mortality but can also cost billions of dollars in healthcare. To alleviate this burden, early and accurate identification of Candida species is necessary. However, standard identification procedures can take days and have a large false negative error. The method described in this study takes advantage of hyperspectral confocal fluorescence microscopy, which enables quick and accurate identification and characterization of the unique autofluorescence spectra from different Candida species, with up to 84% accuracy when grown in conditions that closely mimic physiological conditions.
Information Measures for Statistical Orbit Determination
ERIC Educational Resources Information Center
Mashiku, Alinda K.
2013-01-01
The current Space Situational Awareness (SSA) enterprise faces the huge task of tracking an increasing number of space objects. The tracking of space objects requires frequent and accurate monitoring for orbit maintenance and collision avoidance using methods for statistical orbit determination. Statistical orbit determination enables us to obtain…
Parallel Cartesian grid refinement for 3D complex flow simulations
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2013-11-01
A second order accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data-structure is adopted, enabling robust calculations for incompressible flows while avoiding both the need for synchronization of the solution between different levels of refinement and the usage of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized by using a novel second order fully implicit scheme for unstructured Cartesian grids and solved using an efficient Krylov subspace solver. The momentum equations are also discretized with second order accuracy, and the high performance Newton-Krylov method is used for integrating them in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement results in a significant reduction of the solution error. The effectiveness of the fractional step method results in the stability of the overall algorithm and enables accurate multi-resolution real-life simulations. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.
Electrochemical thermodynamic measurement system
Reynier, Yvan [Meylan, FR; Yazami, Rachid [Los Angeles, CA; Fultz, Brent T [Pasadena, CA
2009-09-29
The present invention provides systems and methods for accurately characterizing thermodynamic and materials properties of electrodes and electrochemical energy storage and conversion systems. Systems and methods of the present invention are configured for simultaneously collecting a suite of measurements characterizing a plurality of interconnected electrochemical and thermodynamic parameters relating to the electrode reaction state of advancement, voltage and temperature. Enhanced sensitivity provided by the present methods and systems combined with measurement conditions that reflect thermodynamically stabilized electrode conditions allow very accurate measurement of thermodynamic parameters, including state functions such as the Gibbs free energy, enthalpy and entropy of electrode/electrochemical cell reactions, that enable prediction of important performance attributes of electrode materials and electrochemical systems, such as the energy, power density, current rate and the cycle life of an electrochemical cell.
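The thermodynamic state functions mentioned can be obtained from open-circuit voltage measured at several temperatures, via the standard electrochemical relations ΔG = -nFE and ΔS = nF(dE/dT). A sketch with invented numbers (not the patent's data):

```python
import numpy as np

F = 96485.0            # Faraday constant, C/mol
n = 1                  # electrons transferred per reaction
T = np.array([288.0, 298.0, 308.0])      # temperatures, K (illustrative)
E = np.array([3.4020, 3.4000, 3.3980])   # measured open-circuit voltage, V

dEdT = np.polyfit(T, E, 1)[0]            # slope of E(T) by linear fit
T0, E0 = 298.0, 3.4000
dG = -n * F * E0                         # Gibbs free energy change, J/mol
dS = n * F * dEdT                        # entropy change, J/(mol K)
dH = dG + T0 * dS                        # enthalpy change, J/mol
print(round(dG), round(dS, 2))
```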
Sergé, Arnauld; Bernard, Anne-Marie; Phélipot, Marie-Claire; Bertaux, Nicolas; Fallet, Mathieu; Grenot, Pierre; Marguet, Didier; He, Hai-Tao; Hamon, Yannick
2013-01-01
We introduce a series of experimental procedures enabling sensitive calcium monitoring in T cell populations by confocal video-microscopy. Tracking and post-acquisition analysis was performed using Methods for Automated and Accurate Analysis of Cell Signals (MAAACS), a fully customized program that associates a high throughput tracking algorithm, an intuitive reconnection routine and a statistical platform to provide, at a glance, the calcium barcode of a population of individual T-cells. Combined with a sensitive calcium probe, this method allowed us to unravel the heterogeneity in shape and intensity of the calcium response in T cell populations and especially in naive T cells, which display intracellular calcium oscillations upon stimulation by antigen presenting cells. PMID:24086124
NASA Astrophysics Data System (ADS)
Huang, Rongrong; Pomin, Vitor H.; Sharp, Joshua S.
2011-09-01
Improved methods for structural analyses of glycosaminoglycans (GAGs) are required to understand their functional roles in various biological processes. Major challenges in structural characterization of complex GAG oligosaccharides using liquid chromatography-mass spectrometry (LC-MS) include the accurate determination of the patterns of sulfation due to gas-phase losses of the sulfate groups upon collisional activation and inefficient on-line separation of positional sulfation isomers prior to MS/MS analyses. Here, a sequential chemical derivatization procedure including permethylation, desulfation, and acetylation was demonstrated to enable both on-line LC separation of isomeric mixtures of chondroitin sulfate (CS) oligosaccharides and accurate determination of sites of sulfation by MSn. The derivatized oligosaccharides have sulfate groups replaced with acetyl groups, which are sufficiently stable to survive MSn fragmentation and reflect the original sulfation patterns. A standard reversed-phase LC-MS system with a capillary C18 column was used for separation, and MSn experiments using collision-induced dissociation (CID) were performed. Our results indicate that the combination of this derivatization strategy and MSn methodology enables accurate identification of the sulfation isomers of CS hexasaccharides with either saturated or unsaturated nonreducing ends. Moreover, derivatized CS hexasaccharide isomer mixtures become separable by the LC-MS method due to different positions of acetyl modifications.
Daniell method for power spectral density estimation in atomic force microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labuda, Aleksander
An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
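A minimal sketch of the Daniell estimator discussed above: rather than averaging periodograms of shorter segments (the Bartlett approach), it smooths a single full-length periodogram over neighbouring frequency bins. Normalisation details here are illustrative.

```python
import numpy as np

def daniell_psd(x, fs=1.0, m=4):
    """Daniell PSD estimate: one full-length periodogram smoothed by a
    (2m+1)-point moving average over frequency bins (cf. Bartlett, which
    averages periodograms of shorter data segments instead)."""
    n = len(x)
    pxx = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)   # raw periodogram
    kernel = np.ones(2 * m + 1) / (2 * m + 1)      # Daniell smoothing window
    return np.convolve(pxx, kernel, mode="same")

rng = np.random.default_rng(0)
noise = rng.normal(size=1024)      # white-noise stand-in for thermal motion
psd = daniell_psd(noise)
```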
El Mendili, Mohamed-Mounir; Chen, Raphaël; Tiret, Brice; Villard, Noémie; Trunet, Stéphanie; Pélégrini-Issac, Mélanie; Lehéricy, Stéphane; Pradat, Pierre-François; Benali, Habib
2015-01-01
To design a fast and accurate semi-automated segmentation method for spinal cord 3T MR images and to construct a template of the cervical spinal cord. A semi-automated double threshold-based method (DTbM) was proposed enabling both cross-sectional and volumetric measures from 3D T2-weighted turbo spin echo MR scans of the spinal cord at 3T. Eighty-two healthy subjects, 10 patients with amyotrophic lateral sclerosis, 10 with spinal muscular atrophy and 10 with spinal cord injuries were studied. DTbM was compared with active surface method (ASM), threshold-based method (TbM) and manual outlining (ground truth). Accuracy of segmentations was scored visually by a radiologist in cervical and thoracic cord regions. Accuracy was also quantified at the cervical and thoracic levels as well as at C2 vertebral level. To construct a cervical template from healthy subjects' images (n=59), a standardization pipeline was designed leading to well-centered straight spinal cord images and accurate probability tissue map. Visual scoring showed better performance for DTbM than for ASM. Mean Dice similarity coefficient (DSC) was 95.71% for DTbM and 90.78% for ASM at the cervical level and 94.27% for DTbM and 89.93% for ASM at the thoracic level. Finally, at C2 vertebral level, mean DSC was 97.98% for DTbM compared with 98.02% for TbM and 96.76% for ASM. DTbM showed similar accuracy compared with TbM, but with the advantage of limited manual interaction. A semi-automated segmentation method with limited manual intervention was introduced and validated on 3T images, enabling the construction of a cervical spinal cord template.
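A double-threshold rule of the general kind described can be sketched as follows (a generic hysteresis-style illustration, not the authors' exact DTbM): pixels above a high threshold are definite cord, and pixels above a low threshold are kept only when connected to a definite region.

```python
import numpy as np
from scipy import ndimage

def double_threshold(img, low, high):
    """Hysteresis-style double thresholding: keep weak pixels (> low)
    only if their connected component contains a strong pixel (> high)."""
    strong = img > high
    weak = img > low
    labels, _ = ndimage.label(weak)            # 4-connected components
    keep = np.unique(labels[strong])           # components touching strong pixels
    return np.isin(labels, keep[keep > 0])

img = np.array([[0.9, 0.6, 0.1],
                [0.1, 0.6, 0.1],
                [0.1, 0.1, 0.7]])
mask = double_threshold(img, 0.5, 0.8)
# The isolated 0.7 pixel is weak-only and gets rejected.
```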
Whole-genome regression and prediction methods applied to plant and animal breeding.
de Los Campos, Gustavo; Hickey, John M; Pong-Wong, Ricardo; Daetwyler, Hans D; Calus, Mario P L
2013-02-01
Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models where phenotypes are regressed on thousands of markers concurrently. Methods exist that allow implementing these large-p with small-n regressions, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade.
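A minimal sketch of one parametric WGR model, ridge regression with p markers much larger than n individuals, solved in its dual (n × n) form; sizes, genotype codes, and the penalty are illustrative, not from the article.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 50, 2000                                 # small n, large p
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))    # SNP genotype codes 0/1/2
beta_true = np.zeros(p)
beta_true[:10] = rng.normal(0, 0.5, 10)         # a few causal markers
y = X @ beta_true + rng.normal(0, 1.0, n)       # phenotypes

lam = 100.0
# Dual (kernel) form of ridge: solve an n x n system instead of p x p,
# which is what makes the large-p with small-n regression tractable.
alpha = np.linalg.solve(X @ X.T + lam * np.eye(n), y)
beta_hat = X.T @ alpha                          # all p effects jointly shrunk
y_fit = X @ beta_hat
```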
Using Qualitative Methods for Revising Items in the Hispanic Stress Inventory
ERIC Educational Resources Information Center
Cervantes, Richard C.; Goldbach, Jeremy T.; Padilla, Amado M.
2012-01-01
Despite progress in the development of measures to assess psychosocial stress experiences in the general population, a lack of culturally informed assessment instruments exists to enable clinicians and researchers to detect and accurately diagnose mental health concerns among Hispanics. The Hispanic Stress Inventory (HSI) was developed…
Improving Attachments of Non-Invasive (Type III) Electronic Data Loggers to Cetaceans
2014-09-30
the assessment of tag impact on animal health and well-being. Specifically, we are working to develop methods that will enable accurate estimates...currently not available for any marine mammal, about animal health and activity has the potential to revolutionize how animals are cared for in these
Pollock, Samuel B; Hu, Amy; Mou, Yun; Martinko, Alexander J; Julien, Olivier; Hornsby, Michael; Ploder, Lynda; Adams, Jarrett J; Geng, Huimin; Müschen, Markus; Sidhu, Sachdev S; Moffat, Jason; Wells, James A
2018-03-13
Human cells express thousands of different surface proteins that can be used for cell classification, or to distinguish healthy and disease conditions. A method capable of profiling a substantial fraction of the surface proteome simultaneously and inexpensively would enable more accurate and complete classification of cell states. We present a highly multiplexed and quantitative surface proteomic method using genetically barcoded antibodies called phage-antibody next-generation sequencing (PhaNGS). Using 144 preselected antibodies displayed on filamentous phage (Fab-phage) against 44 receptor targets, we assess changes in B cell surface proteins after the development of drug resistance in a patient with acute lymphoblastic leukemia (ALL) and in adaptation to oncogene expression in a Myc-inducible Burkitt lymphoma model. We further show PhaNGS can be applied at the single-cell level. Our results reveal that a common set of proteins including FLT3, NCR3LG1, and ROR1 dominate the response to similar oncogenic perturbations in B cells. Linking high-affinity, selective, genetically encoded binders to NGS enables direct and highly multiplexed protein detection, comparable to RNA-sequencing for mRNA. PhaNGS has the potential to profile a substantial fraction of the surface proteome simultaneously and inexpensively to enable more accurate and complete classification of cell states. Copyright © 2018 the Author(s). Published by PNAS.
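The PhaNGS readout described reduces, at its simplest, to counting antibody barcodes in sequencing reads and normalising to frequencies; a toy sketch with invented barcode sequences:

```python
from collections import Counter

# Hypothetical NGS reads, each carrying one Fab-phage DNA barcode;
# barcode strings here are invented for illustration.
reads = ["ACGT", "ACGT", "TTGA", "ACGT", "TTGA", "GGCC"]

counts = Counter(reads)                       # barcode -> read count
total = sum(counts.values())
freq = {bc: c / total for bc, c in counts.items()}   # relative abundance
print(freq["ACGT"])  # -> 0.5
```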
Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei
2018-03-15
Leakage is the most important failure mode in aircraft hydraulic systems, caused by wear and tear between friction pairs of components. The accurate detection of abrasive debris can reveal the wear condition and predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, which potentially enables a more accurate diagnosis and prognosis of the aviation hydraulic system's ongoing failures. To address the serious mixing of pipe abrasive debris, this paper focuses on the superimposed abrasive debris separation of an RMF abrasive sensor based on the degenerate unmixing estimation technique. Through accurately separating and calculating the morphology and amount of the abrasive debris, the RMF-based abrasive sensor can provide the system with the wear trend and size estimates of the wear particles. A well-designed experiment was conducted and the result shows that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection.
NASA Astrophysics Data System (ADS)
Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan
2017-12-01
Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least squares ellipse fitting and convex hull in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering the conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in some poor conditions.
NASA Astrophysics Data System (ADS)
Jung, Jaewoon; Sugita, Yuji; Ten-no, S.
2010-02-01
An analytic gradient expression is formulated and implemented for the second-order Møller-Plesset perturbation theory (MP2) based on the generalized hybrid orbital QM/MM method. The method enables us to obtain an accurate geometry at a reasonable computational cost. The performance of the method is assessed for various isomers of alanine dipeptide. We also compare the optimized structures of fumaramide-derived [2]rotaxane and cAMP-dependent protein kinase with experiment.
Orun, A B; Seker, H; Uslan, V; Goodyer, E; Smith, G
2017-06-01
The textural structure of 'skin age'-related subskin components enables us to identify and analyse their unique characteristics, thus making substantial progress towards establishing an accurate skin age model. This is achieved by a two-stage process. First, textural analysis is applied using laser speckle imaging, which is sensitive to textural effects within the λ = 650 nm spectral band region. In the second stage, a Bayesian inference method is used to select attributes from which a predictive model is built. This technique enables us to contrast different skin age models, such as the laser speckle effect against the more widely used normal light (LED) imaging method, whereby it is shown that our laser speckle-based technique yields better results. The method introduced here is non-invasive, low cost and capable of operating in real time, and has the potential to compete against high-cost instrumentation such as confocal microscopy or similar imaging devices used for skin age identification purposes. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
New Developments in Cathodoluminescence Spectroscopy for the Study of Luminescent Materials
den Engelsen, Daniel; Fern, George R.; Harris, Paul G.; Ireland, Terry G.; Silver, Jack
2017-01-01
Herein, we describe three advanced techniques for cathodoluminescence (CL) spectroscopy that have recently been developed in our laboratories. The first is a new method to accurately determine the CL-efficiency of thin layers of phosphor powders. When a wide band phosphor with a band gap (Eg > 5 eV) is bombarded with electrons, charging of the phosphor particles will occur, which eventually leads to erroneous results in the determination of the luminous efficacy. To overcome this problem of charging, a comparison method has been developed, which enables accurate measurement of the current density of the electron beam. The study of CL from phosphor specimens in a scanning electron microscope (SEM) is the second subject to be treated. A detailed description of a measuring method to determine the overall decay time of single phosphor crystals in a SEM without beam blanking is presented. The third technique is based on the unique combination of microscopy and spectrometry in the transmission electron microscope (TEM) of Brunel University London (UK). This combination enables the recording of CL-spectra of nanometre-sized specimens and determining spatial variations in CL emission across individual particles by superimposing the scanning TEM and CL-images. PMID:28772671
Beke, Tamás; Czajlik, András; Csizmadia, Imre G; Perczel, András
2006-02-02
Nanofibers, nanofilms and nanotubes constructed of one to four strands of oligo-alpha- and oligo-beta-peptides were obtained by using carefully selected building units. Lego-type approaches based on thermoneutral isodesmic reactions can be used to reconstruct the total energies of both linear and tubular periodic nanostructures with acceptable accuracy. Total energies of several different nanostructures were accurately determined with errors typically falling in the subchemical range. Thus, attention will be focused on the description of suitable isodesmic reactions that have enabled the determination of the total energy of polypeptides and therefore offer a very fast, efficient and accurate method to obtain energetic information on large and even very large nanosystems.
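The lego-type reconstruction can be caricatured as follows: if each added building unit contributes a constant energy increment (the thermoneutral isodesmic assumption), the total energy of a long chain follows from just two small fragments. Numbers are arbitrary illustrative units, not computed energies.

```python
# Lego-type energy reconstruction sketch: estimate the total energy of an
# n-unit periodic chain from small fragments, assuming each added unit
# contributes a constant increment (thermoneutral isodesmic assumption).
E_monomer = -10.0    # illustrative fragment energy (arbitrary units)
E_dimer = -21.5      # illustrative fragment energy (arbitrary units)
increment = E_dimer - E_monomer      # energy gained per added unit

def chain_energy(n):
    """Extrapolated total energy of an n-unit chain."""
    return E_monomer + (n - 1) * increment

print(chain_energy(4))  # -10 + 3 * (-11.5) -> -44.5
```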
PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences
Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong
2015-01-01
We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate—slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory. PMID:25549288
Ranking Reputation and Quality in Online Rating Systems
Liao, Hao; Zeng, An; Xiao, Rui; Ren, Zhuo-Ming; Chen, Duan-Bing; Zhang, Yi-Cheng
2014-01-01
How to design an accurate and robust ranking algorithm is a fundamental problem with wide applications in many real systems. It is especially significant in online rating systems due to the existence of spammers. In the literature, many well-performing iterative ranking methods have been proposed. These methods can effectively recognize unreliable users and reduce their weight in judging the quality of objects, and finally lead to a more accurate evaluation of the online products. In this paper, we design an iterative ranking method with high performance in both accuracy and robustness. More specifically, a reputation redistribution process is introduced to enhance the influence of highly reputed users and two penalty factors enable the algorithm resistance to malicious behaviors. Validation of our method is performed in both artificial and real user-object bipartite networks. PMID:24819119
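A toy version of such an iterative reputation scheme, in the general spirit the abstract describes (the paper's exact reputation-redistribution and penalty rules are not reproduced here): object quality is a reputation-weighted mean of ratings, and a user's reputation shrinks with their disagreement from the consensus.

```python
import numpy as np

# Ratings matrix: rows = users, cols = objects (invented toy data).
R = np.array([[5.0, 4.0, 1.0],
              [5.0, 4.0, 1.0],
              [1.0, 1.0, 5.0]])   # third user rates against consensus

rep = np.ones(3)                  # start with equal reputations
for _ in range(20):
    quality = rep @ R / rep.sum()            # reputation-weighted quality
    err = np.abs(R - quality).mean(axis=1)   # each user's disagreement
    rep = 1.0 / (err + 1e-6)                 # down-weight unreliable users

# quality converges toward the consensus [5, 4, 1]; the dissenting
# user ends with the lowest reputation.
print(quality.round(2), rep.argmin())
```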
Bai, Yuqiang; Nichols, Jason J
2017-05-01
The thickness of the tear film has been investigated using both invasive and non-invasive methods. While invasive methods are largely historical, more recent non-invasive methods are generally based on optical approaches that provide accurate, precise, and rapid measures. Optical microscopy, interferometry, and optical coherence tomography (OCT) have been developed to characterize the thickness of the tear film or certain aspects of the tear film (e.g., the lipid layer). This review provides an in-depth overview of contemporary optical techniques used in studying the tear film, including both advantages and limitations of these approaches. It is anticipated that further developments of high-resolution OCT and other interferometric methods will enable a more accurate and precise measurement of the thickness of the tear film and its related dynamic properties. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bache, S; Belley, M; Benning, R
2014-06-15
Purpose: Pre-clinical micro-radiation therapy studies often utilize very small beams (∼0.5-5 mm) and require accurate dose delivery in order to effectively investigate treatment efficacy. Here we present a novel high-resolution absolute 3D dosimetry procedure, capable of ∼100-micron isotropic dosimetry in anatomically accurate rodent-morphic phantoms. Methods: Anatomically accurate rat-shaped 3D dosimeters were made using 3D printing techniques from outer body contours and spinal contours outlined on CT. The dosimeters were made from a radiochromic plastic material, PRESAGE, and incorporated high-Z PRESAGE inserts mimicking the spine. A simulated 180-degree spinal arc treatment was delivered through a two-step process: (i) cone-beam-CT image-guided positioning was performed to precisely position the rat dosimeter for treatment on the XRad225 small animal irradiator; then (ii) the simulated spine treatment was delivered as a 180-degree arc with a 20 mm × 10 mm cone at 225 kVp. The dose distribution was determined from the optical density change using a high-resolution in-house optical-CT system. Absolute dosimetry was enabled through calibration against a novel nano-particle scintillation detector positioned in a channel in the center of the distribution. Results: Sufficient contrast between regular PRESAGE (tissue equivalent) and high-Z PRESAGE (spinal insert) was observed to enable highly accurate image-guided alignment and targeting. The PRESAGE was found to have a linear optical density (OD) change with respect to dose (R{sup 2} = 0.9993). The absolute dose for a 360-second irradiation at isocenter was found to be 9.21 Gy when measured by OD change and 9.4 Gy with the nano-particle detector, an agreement within 2%.
The 3D dose distribution was measured at 500-micron resolution. Conclusion: This work demonstrates, for the first time, the feasibility of accurate absolute 3D dose measurement in anatomically accurate rat phantoms containing variable-density PRESAGE material (tissue equivalent and bone equivalent). This method enables precise treatment verification of micro-radiation therapies and enhances the robustness of tumor radio-response studies. This work was supported by NIH R01CA100835.
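The linear OD-to-dose calibration underlying the absolute dosimetry step in the record above can be sketched as follows. This is a minimal illustration with made-up OD readings, not data from the study; in practice the fit would be anchored to the scintillation detector measurement.

```python
import numpy as np

# Hypothetical calibration data: optical density change vs. delivered dose (Gy).
dose = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
delta_od = np.array([0.00, 0.11, 0.21, 0.32, 0.41, 0.52])

# Linear fit: delta_OD = a * dose + b (polyfit returns slope first)
a, b = np.polyfit(dose, delta_od, 1)

# Coefficient of determination R^2 for the fit
pred = a * dose + b
ss_res = np.sum((delta_od - pred) ** 2)
ss_tot = np.sum((delta_od - delta_od.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Invert the calibration: convert a measured OD change back to absolute dose
dose_from_od = lambda od: (od - b) / a
```

Once calibrated, each voxel's OD change from the optical-CT reconstruction can be mapped to dose through `dose_from_od`.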
Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.
Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong
2016-11-11
Although the GW approximation is recognized as one of the most accurate theories for predicting materials' excited-state properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials.
Identifying and quantifying secondhand smoke in multiunit homes with tobacco smoke odor complaints
NASA Astrophysics Data System (ADS)
Dacunto, Philip J.; Cheng, Kai-Chung; Acevedo-Bolton, Viviana; Klepeis, Neil E.; Repace, James L.; Ott, Wayne R.; Hildemann, Lynn M.
2013-06-01
Accurate identification and quantification of the secondhand tobacco smoke (SHS) that drifts between multiunit homes (MUHs) is essential for assessing resident exposure and health risk. We collected 24 gaseous and particle measurements over 6-9 day monitoring periods in five nonsmoking MUHs with reported SHS intrusion problems. Nicotine tracer sampling showed evidence of SHS intrusion in all five homes during the monitoring period; logistic regression and chemical mass balance (CMB) analysis enabled identification and quantification of some of the precise periods of SHS entry. Logistic regression models identified SHS in eight periods when residents complained of SHS odor, and CMB provided estimates of SHS magnitude in six of these eight periods. Both approaches properly identified or apportioned all six cooking periods used as no-SHS controls. Finally, both approaches enabled identification and/or apportionment of suspected SHS in five additional periods when residents did not report smelling smoke. The time resolution of this methodology goes beyond sampling methods involving single tracers (such as nicotine), enabling the precise identification of the magnitude and duration of SHS intrusion, which is essential for accurate assessment of human exposure.
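Chemical mass balance (CMB) apportionment of the kind described above can be illustrated as a nonnegative least-squares fit of measured species concentrations to source profiles. The profiles and contributions below are hypothetical stand-ins, not the study's data:

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative source profiles (columns): fractional composition of each
# measured species for two sources, e.g. SHS and cooking aerosol.
profiles = np.array([
    [0.60, 0.10],   # species 1 (e.g., a particle-bound tracer)
    [0.30, 0.50],   # species 2
    [0.10, 0.40],   # species 3
])

# Synthesize a "measured" sample from known source contributions
true_contrib = np.array([5.0, 2.0])
measured = profiles @ true_contrib

# CMB apportionment: solve measured ≈ profiles @ x subject to x >= 0
contrib, residual = nnls(profiles, measured)
```

In a real application the profiles come from source characterization experiments and the fit residual indicates how well the assumed sources explain the sample.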
Direct evaluation of free energy for large system through structure integration approach.
Takeuchi, Kazuhito; Tanaka, Ryohei; Yuge, Koretaka
2015-09-30
We propose a new approach, 'structure integration', enabling direct evaluation of the configurational free energy for large systems. The present approach is based on statistical information of the lattice. Through first-principles-based simulation, we find that the present method accurately evaluates the configurational free energy in disordered states above the critical temperature.
ERIC Educational Resources Information Center
Fellenz, Martin R.
2006-01-01
A key challenge for management instructors using graded groupwork with students is to find ways to maximize student learning from group projects while ensuring fair and accurate assessment methods. This article presents the Groupwork Peer-Evaluation Protocol (GPEP) that enables the assessment of individual contributions to graded student…
USDA-ARS?s Scientific Manuscript database
The amount of secondary cell wall (SCW) cellulose in the fiber affects the quality and commercial value of cotton. Accurate assessments of SCW cellulose are essential for improving cotton fibers. Fourier Transform Infrared (FT-IR) spectroscopy enables distinguishing SCW from other cell wall componen...
Sakaguchi, Yohei; Hayama, Tadashi; Yoshida, Hideyuki; Itoyama, Miki; Todoroki, Kenichiro; Yamaguchi, Masatoshi; Nohta, Hitoshi
2014-12-15
A separation-oriented derivatization method exploiting the specific fluorous affinity between perfluoroalkyl-containing compounds was applied to selective liquid chromatography/tandem mass spectrometric (LC/MS/MS) analysis of sialyl oligosaccharides. The perfluoroalkyl-labeled sialyl oligosaccharides could be selectively retained on an LC column with a perfluoroalkyl-modified stationary phase and effectively distinguished from non-derivatized species. Sialyl oligosaccharides (3'-sialyllactose, 6'-sialyllactose, sialyllacto-N-tetraose a, sialyllacto-N-tetraose b, sialyllacto-N-tetraose c, and disialyllacto-N-tetraose) were derivatized with 4,4,5,5,6,6,7,7,8,8,9,9,10,10,11,11,11-heptadecafluoroundecylamine via amidation in the presence of 4-(4,6-dimethoxy-1,3,5-triazin-2-yl)-4-methylmorpholinium chloride (condensation reagent). The obtained derivatives were directly injected onto the fluorous LC column without any pretreatment and then detected by positive electrospray ionization MS/MS. The method enabled accurate determination of the sialyl oligosaccharides in biological samples such as human urine and human milk, because there was no interference from matrix effects during LC/MS/MS analysis. The limits of detection of the examined sialyl oligosaccharides, defined as signal-to-noise (S/N) = 3, were in the range 0.033-0.13 nM. Accuracy in the range 95.6-108% was achieved, and the precision (relative standard deviation) was within 9.4%. The method thus enabled highly selective and sensitive analysis of sialyl oligosaccharides, permitting accurate measurement of even trace amounts in biological matrices, and may prove to be a powerful tool for the analysis of various sialyl oligosaccharides. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Bourke, M.; Balme, M.; Beyer, R. A.; Williams, K. K.
2004-01-01
Methods traditionally used to estimate the relative height of surface features on Mars include photoclinometry, shadow length, and stereography. The MOLA data set enables a more accurate assessment of the surface topography of Mars; however, many small-scale aeolian bedforms remain below the sample resolution of the MOLA data set. In response, a number of research teams have adopted and refined existing methods and applied them to high-resolution (2-6 m/pixel) narrow-angle MOC satellite images. Collectively, these methods provide data on a range of morphometric parameters, many not previously available for dunes on Mars, including dune height, width, length, surface area, volume, and longitudinal and cross profiles. These data will facilitate a more accurate analysis of aeolian bedforms on Mars. In this paper we undertake a comparative analysis of methods used to determine the height of aeolian dunes and ripples.
A Comparison of Methods Used to Estimate the Height of Sand Dunes on Mars
NASA Technical Reports Server (NTRS)
Bourke, M. C.; Balme, M.; Beyer, R. A.; Williams, K. K.; Zimbelman, J.
2006-01-01
The collection of morphometric data on small-scale landforms from other planetary bodies is difficult. We assess four methods that can be used to estimate the height of aeolian dunes on Mars. These are (1) stereography, (2) slip face length, (3) profiling photoclinometry, and (4) Mars Orbiter Laser Altimeter (MOLA). Results show that there is good agreement among the methods when conditions are ideal. However, limitations inherent to each method inhibited their accurate application to all sites. Collectively, these techniques provide data on a range of morphometric parameters, some of which were not previously available for dunes on Mars. They include dune height, width, length, surface area, volume, and longitudinal and transverse profiles. The utilization of these methods will facilitate a more accurate analysis of aeolian dunes on Mars and enable comparison with dunes on other planetary surfaces.
Reconstructing Spatial Distributions from Anonymized Locations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horey, James L; Forrest, Stephanie; Groat, Michael
2012-01-01
Devices such as mobile phones, tablets, and sensors are often equipped with GPS that accurately reports a person's location. Combined with wireless communication, these devices enable a wide range of new social tools and applications. These same qualities, however, leave location-aware applications vulnerable to privacy violations. This paper introduces the Negative Quad Tree, a privacy protection method for location-aware applications. The method is broadly applicable to applications that use spatial density information, such as social applications that measure the popularity of social venues. The method employs a simple anonymization algorithm running on mobile devices, and a more complex reconstruction algorithm on a central server. This strategy is well suited to low-powered mobile devices. The paper analyzes the accuracy of the reconstruction method in a variety of simulated and real-world settings and demonstrates that the method is accurate enough to be used in many real-world scenarios.
Design and control of a macro-micro robot for precise force applications
NASA Technical Reports Server (NTRS)
Wang, Yulun; Mangaser, Amante; Laby, Keith; Jordan, Steve; Wilson, Jeff
1993-01-01
Creating a robot that can delicately interact with its environment has been the goal of much research. Two difficulties have made this goal hard to attain. First, control strategies that enable precise force manipulation are difficult to execute in real time, because such algorithms have been too computationally complex for available controllers. Second, a robot mechanism that can quickly and precisely execute a force command is difficult to design: actuation joints must be sufficiently stiff, frictionless, and lightweight so that desired torques can be accurately applied. This paper describes a robotic system that is capable of delicate manipulation. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8-degree-of-freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load-balanced for maximum execution speed on the multiprocessor system. Delicate force tasks, such as polishing, finishing, cleaning, and deburring, are the target applications of the robot.
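The impedance control idea referenced above can be sketched for a single degree of freedom: the commanded tip force follows a virtual spring-damper law toward a desired state. The gains, mass, and target below are arbitrary illustrative values, not parameters of the macro-micro robot:

```python
def impedance_force(x, v, x_d, v_d, stiffness, damping):
    """Desired tip force from a simple impedance law F = K*e + B*e_dot."""
    return stiffness * (x_d - x) + damping * (v_d - v)

# Simulate a 1-DOF tip (point mass m) driven by the impedance law toward
# a fixed target position with zero target velocity.
m, dt = 1.0, 1e-3
K, B = 100.0, 20.0            # critically damped for m = 1
x, v = 0.0, 0.0
x_d, v_d = 0.1, 0.0           # hold 0.1 m
for _ in range(10000):        # 10 s of explicit-Euler integration
    f = impedance_force(x, v, x_d, v_d, K, B)
    v += (f / m) * dt
    x += v * dt
```

With K = 100 and B = 20 the 1 kg mass is critically damped, so the tip settles at the target without overshoot; contact tasks would add the environment force to the simulated dynamics.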
Psychophysical contrast calibration
To, Long; Woods, Russell L; Goldstein, Robert B; Peli, Eli
2013-01-01
Electronic displays and computer systems offer numerous advantages for clinical vision testing. Laboratory and clinical measurements of various functions, and in particular of (letter) contrast sensitivity, require accurately calibrated display contrast. In the laboratory this is achieved using expensive light meters. We developed and evaluated a novel method that uses only the psychophysical responses of a person with normal vision to calibrate the luminance contrast of displays for experimental and clinical applications. Our method combines psychophysical techniques (1) for detection (and thus elimination or reduction) of display-saturating nonlinearities; (2) for luminance (gamma function) estimation and linearization without use of a photometer; and (3) for measuring, again without a photometer, the luminance ratios of the display's three color channels, which are used in a bit-stealing procedure to expand the luminance resolution of the display. Using a photometer we verified that the calibration achieved with this procedure is accurate for both LCD and CRT displays, enabling testing of letter contrast sensitivity down to 0.5%. Our calibration procedure enables clinical, Internet, and home implementation of electronic contrast testing, together with verification of the calibration. PMID:23643843
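The gamma-linearization step described above can be sketched as a lookup table. This assumes a simple power-law display model and an already-estimated gamma; it is not the authors' psychophysical estimation procedure itself:

```python
import numpy as np

def linearizing_lut(gamma, levels=256):
    """Lookup table mapping a desired linear luminance fraction to the
    digital value producing it on a display with L proportional to
    (v/255)**gamma."""
    target = np.linspace(0.0, 1.0, levels)              # desired linear output
    return np.round(255.0 * target ** (1.0 / gamma)).astype(int)

# Suppose the psychophysical matching procedure estimated gamma ~ 2.2
lut = linearizing_lut(2.2)

# Driving the display through the LUT yields approximately linear luminance
luminance = (lut / 255.0) ** 2.2
```

The residual error comes only from 8-bit quantization, which is exactly what the bit-stealing procedure mentioned in the abstract is used to mitigate.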
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model based) crude approximation to the final perfusion quantities (Blood flow, Blood volume, Mean Transit Time and Delay) using the Welch-Satterthwaite approximation for gamma fitted concentration time curves (CTC). The second method is a fast accurate deconvolution method, we call Analytical Fourier Filtering (AFF). The third is another fast accurate deconvolution technique using Showalter's method, we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
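The regularized Fourier-domain deconvolution at the heart of such perfusion methods can be sketched in one dimension. This is a generic Tikhonov-filtered deconvolution on synthetic curves, illustrating the principle rather than the AFF or ASSF implementations; the AIF shape, residue function, and regularization weight are all made up:

```python
import numpy as np

def tikhonov_deconvolve(aif, ctc, lam=0.1):
    """Recover the flow-scaled residue function (IRF) from a tissue
    concentration-time curve and arterial input function by Fourier-domain
    deconvolution with a Tikhonov filter."""
    A = np.fft.fft(aif)
    C = np.fft.fft(ctc)
    R = np.conj(A) * C / (np.abs(A) ** 2 + lam ** 2)
    return np.real(np.fft.ifft(R))

# Synthetic example on a circular grid of 64 time points
t = np.arange(64, dtype=float)
aif = t * np.exp(-t / 3.0)                 # illustrative gamma-like AIF
residue = 2.0 * np.exp(-t / 8.0)           # flow-scaled residue function
ctc = np.real(np.fft.ifft(np.fft.fft(aif) * np.fft.fft(residue)))  # convolution

est = tikhonov_deconvolve(aif, ctc, lam=1e-3)
cbf_est = est.max()                        # blood flow ~ peak of the IRF
```

Blood volume and mean transit time follow from the area under the IRF and its ratio to the peak, respectively; the regularization weight trades noise amplification against bias, which is the role the analytical filters play in AFF and ASSF.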
Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2002-01-01
NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref. 1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model (GMC, which allows the incorporation of complex local inelastic constitutive models), MAC/GMC finds its most important application in metal matrix composites, such as the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can and have been built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.
Apparatus and method for variable angle slant hole collimator
Lee, Seung Joon; Kross, Brian J.; McKisson, John E.
2017-07-18
A variable angle slant hole (VASH) collimator for providing collimation of high energy photons such as gamma rays during radiological imaging of humans. The VASH collimator includes a stack of multiple collimator leaves and a means of quickly aligning each leaf to provide various projection angles. Rather than rotate the detector around the subject, the VASH collimator enables the detector to remain stationary while the projection angle of the collimator is varied for tomographic acquisition. High collimator efficiency is achieved by maintaining the leaves in accurate alignment through the various projection angles. Individual leaves include unique angled cuts to maintain a precise target collimation angle. Matching wedge blocks driven by two actuators with twin-lead screws accurately position each leaf in the stack resulting in the precise target collimation angle. A computer interface with the actuators enables precise control of the projection angle of the collimator.
NASA Astrophysics Data System (ADS)
Casamayou-Boucau, Yannick; Ryder, Alan G.
2017-09-01
Anisotropy resolved multidimensional emission spectroscopy (ARMES) provides valuable insights into multi-fluorophore proteins (Groza et al 2015 Anal. Chim. Acta 886 133-42). Fluorescence anisotropy adds to the multidimensional fluorescence dataset information about the physical size of the fluorophores and/or the rigidity of the surrounding micro-environment. The first ARMES studies used standard thin film polarizers (TFP) that had negligible transmission between 250 and 290 nm, preventing accurate measurement of intrinsic protein fluorescence from tyrosine and tryptophan. Replacing TFP with pairs of broadband wire grid polarizers enabled standard fluorescence spectrometers to accurately measure anisotropies between 250 and 300 nm, which was validated with solutions of perylene in the UV and Erythrosin B and Phloxine B in the visible. In all cases, anisotropies were accurate to better than ±1% when compared to literature measurements made with Glan Thompson or TFP polarizers. Better dual wire grid polarizer UV transmittance and the use of excitation-emission matrix measurements for ARMES required complete Rayleigh scatter elimination. This was achieved by chemometric modelling rather than classical interpolation, which enabled the acquisition of pure anisotropy patterns over wider spectral ranges. In combination, these three improvements permit the accurate implementation of ARMES for studying intrinsic protein fluorescence.
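The anisotropy values that such polarizer pairs enable follow the standard G-factor-corrected formula; the intensities below are arbitrary illustrative numbers, not measurements from the study:

```python
def g_factor(i_hv, i_hh):
    """Instrument correction factor G from horizontally polarized excitation:
    ratio of vertically to horizontally analyzed emission."""
    return i_hv / i_hh

def anisotropy(i_vv, i_vh, g):
    """Steady-state fluorescence anisotropy r = (Ivv - G*Ivh)/(Ivv + 2*G*Ivh)."""
    return (i_vv - g * i_vh) / (i_vv + 2.0 * g * i_vh)

# Hypothetical polarized intensity readings (arbitrary units)
g = g_factor(480.0, 500.0)        # G = 0.96
r = anisotropy(1000.0, 400.0, g)  # ~0.35, within the 0-0.4 one-photon limit
```

For an excitation-emission matrix, this computation is simply repeated per wavelength pair after the Rayleigh scatter has been removed, which is why the chemometric scatter modeling noted above matters.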
Continuous Shape Estimation of Continuum Robots Using X-ray Images
Lobaton, Edgar J.; Fu, Jinghua; Torres, Luis G.; Alterovitz, Ron
2015-01-01
We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot’s shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints. PMID:26279960
Continuous Shape Estimation of Continuum Robots Using X-ray Images.
Lobaton, Edgar J; Fu, Jinghua; Torres, Luis G; Alterovitz, Ron
2013-05-06
We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot's shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints.
Use of the DISST Model to Estimate the HOMA and Matsuda Indexes Using Only a Basal Insulin Assay
Docherty, Paul D.; Chase, J. Geoffrey
2014-01-01
Background: It is hypothesized that early detection of reduced insulin sensitivity (SI) could prompt interventions that may reduce the considerable financial strain type 2 diabetes mellitus (T2DM) places on global health care. Reducing the cost of already inexpensive SI metrics such as the Matsuda and HOMA indexes would enable more widespread, economically feasible use of these metrics for screening. The goal of this research was to determine a means of reducing the number of insulin samples, and therefore the cost, required to provide an accurate Matsuda Index value. Method: The Dynamic Insulin Sensitivity and Secretion Test (DISST) model was used with the glucose and basal insulin measurements from an Oral Glucose Tolerance Test (OGTT) to predict patient insulin responses. The insulin response to the OGTT was determined via population-based regression analysis that incorporated the 60-minute glucose and basal insulin values. Results: The proposed method derived accurate and precise Matsuda Indices compared to the fully sampled Matsuda Index (R = .95) using only the basal insulin assay and 4 glucose measurements. Using a model employing the basal insulin also allows determination of the 1-day HOMA value. Conclusion: The DISST model was successfully modified to allow accurate prediction of an individual's insulin response to the OGTT. In turn, this enabled highly accurate and precise estimation of a Matsuda Index using only the glucose and basal insulin assays. As insulin assays account for the majority of the cost of the Matsuda Index, this model offers a significant reduction in assay cost. PMID:24876431
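The two indexes in question are computed from glucose and insulin samples with standard published formulas. The sketch below uses conventional mg/dL and microU/mL units with hypothetical values; it does not implement the DISST model, which would supply the predicted insulin curve in place of measured samples:

```python
import math

def homa_ir(fasting_glucose_mgdl, fasting_insulin_uUml):
    """HOMA-IR from fasting values (glucose in mg/dL, insulin in microU/mL)."""
    return fasting_glucose_mgdl * fasting_insulin_uUml / 405.0

def matsuda_index(g0, i0, g_mean, i_mean):
    """Matsuda whole-body insulin sensitivity index.
    g0, i0: fasting glucose (mg/dL) and insulin (microU/mL);
    g_mean, i_mean: mean glucose and insulin over the OGTT."""
    return 10000.0 / math.sqrt(g0 * i0 * g_mean * i_mean)

# Hypothetical OGTT values
homa = homa_ir(90.0, 10.0)
matsuda = matsuda_index(90.0, 10.0, 130.0, 60.0)
```

Because the Matsuda Index needs the mean insulin over the OGTT, replacing the measured insulin curve with a regression-predicted one is what removes all but the basal insulin assay from the cost.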
A segmentation editing framework based on shape change statistics
NASA Astrophysics Data System (ADS)
Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen
2017-02-01
Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to edit the produced segmentation manually, slice by slice. Because such editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by drawing only a sparse set of contours is needed. This paper describes such a framework as applied to a single object. Constrained by the additional information provided by the manually segmented contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation into a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function that consists of two terms: an external contour-match term and an internal shape-change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (a Dice segmentation accuracy increase of 10%) with very sparse contours (only 10%), which promises to greatly decrease the work required from the user.
Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp
The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.
El Mendili, Mohamed-Mounir; Trunet, Stéphanie; Pélégrini-Issac, Mélanie; Lehéricy, Stéphane; Pradat, Pierre-François; Benali, Habib
2015-01-01
Objective: To design a fast and accurate semi-automated segmentation method for spinal cord 3T MR images and to construct a template of the cervical spinal cord. Materials and Methods: A semi-automated double threshold-based method (DTbM) was proposed, enabling both cross-sectional and volumetric measures from 3D T2-weighted turbo spin echo MR scans of the spinal cord at 3T. Eighty-two healthy subjects, 10 patients with amyotrophic lateral sclerosis, 10 with spinal muscular atrophy and 10 with spinal cord injuries were studied. DTbM was compared with the active surface method (ASM), a threshold-based method (TbM) and manual outlining (ground truth). Accuracy of segmentations was scored visually by a radiologist in cervical and thoracic cord regions. Accuracy was also quantified at the cervical and thoracic levels as well as at the C2 vertebral level. To construct a cervical template from healthy subjects' images (n=59), a standardization pipeline was designed, leading to well-centered, straightened spinal cord images and an accurate tissue probability map. Results: Visual scoring showed better performance for DTbM than for ASM. Mean Dice similarity coefficient (DSC) was 95.71% for DTbM and 90.78% for ASM at the cervical level, and 94.27% for DTbM and 89.93% for ASM at the thoracic level. Finally, at the C2 vertebral level, mean DSC was 97.98% for DTbM compared with 98.02% for TbM and 96.76% for ASM. DTbM showed accuracy similar to that of TbM, but with the advantage of limited manual interaction. Conclusion: A semi-automated segmentation method with limited manual intervention was introduced and validated on 3T images, enabling the construction of a cervical spinal cord template. PMID:25816143
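The Dice similarity coefficient used above to score segmentation accuracy can be computed directly from two binary masks; the toy masks below are illustrative:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks, in percent:
    DSC = 2 * |A ∩ B| / (|A| + |B|) * 100."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 200.0 * intersection / (a.sum() + b.sum())

# Toy example: an automatic and a manual "spinal cord" mask, shifted by one row
auto = np.zeros((8, 8), dtype=bool)
auto[2:6, 2:6] = True      # 16 voxels
manual = np.zeros((8, 8), dtype=bool)
manual[3:7, 2:6] = True    # 16 voxels, 12 of which overlap with auto
```

A DSC of 100% means perfect overlap with the ground-truth outline; values in the mid-90s, as reported above, indicate near-complete agreement.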
Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions
Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong
2016-01-01
Although the GW approximation is recognized as one of the most accurate theories for predicting materials' excited-state properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials. PMID:27833140
Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2016-09-01
An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization-enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method, using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and the relationships between the true local field and the estimated local field for REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization-enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true and estimated local fields. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In the human experiment, no obvious artifact-related errors were present with REV-SHARP. The proposed REV-SHARP is a new method that combines a variable spherical kernel size with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
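The SHARP family of methods removes the background field by applying a (delta minus spherical-mean-value) filter, which annihilates harmonic backgrounds, and then inverting that filter with regularization. The following 1D sketch illustrates the idea with a moving-average kernel and a Tikhonov-filtered inverse; it is a toy analogue, not the 3D REV-SHARP algorithm, and the kernel radius and lambda are arbitrary:

```python
import numpy as np

def smv_kernel(n, radius):
    """1D analogue of a spherical-mean-value kernel: a normalized
    moving average of width 2*radius + 1 on a circular grid of n points."""
    k = np.zeros(n)
    for offset in range(-radius, radius + 1):
        k[offset % n] = 1.0 / (2 * radius + 1)
    return k

def sharp_like_filter(total_field, radius, lam):
    """SHARP-style background removal in 1D: apply (delta - SMV) to
    suppress the smooth background, then invert the filter with
    Tikhonov regularization to restore the local field."""
    n = len(total_field)
    d = np.zeros(n)
    d[0] = 1.0
    h = d - smv_kernel(n, radius)            # high-pass (delta - SMV) kernel
    H = np.fft.fft(h)
    F = np.fft.fft(total_field)
    out = np.conj(H) * H * F / (np.abs(H) ** 2 + lam ** 2)
    return np.real(np.fft.ifft(out))

# Slowly varying "background" vs. rapidly varying "local" field
n = 128
t = np.arange(n)
background = np.cos(2 * np.pi * t / n)
local = np.cos(2 * np.pi * 30 * t / n)
removed_bg = sharp_like_filter(background, radius=10, lam=0.2)
kept_local = sharp_like_filter(local, radius=10, lam=0.2)
```

Shrinking the kernel radius near boundaries, as VSHARP and REV-SHARP do, preserves cortical regions that a single large kernel would erode.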
A dynamic family practice information system for enhanced financial management.
Hofman, M N; Hughes, R L
1982-08-01
The definition of the fiscal unit/family structure has enabled users of the FMIS, or family practices in Colorado and Wyoming, to maintain accurate, ongoing financial information on their patients. In turn, this structure has been a major incentive for maintaining accurate family information, and a distinct benefit to FMIS users. This article has presented the rationale, design, and method of implementation of the fiscal unit structure, with the intention of presenting this concept to others for use in other information systems used in maintaining family-oriented financial and medical information for medical practices.
Brennan, T.M.; Hammons, B.E.; Tsao, J.Y.
1992-12-15
A method for on-line accurate monitoring and precise control of molecular beam epitaxial growth of Groups III-III-V or Groups III-V-V layers in an advanced semiconductor device incorporates reflection mass spectrometry. The reflection mass spectrometry is responsive to intentional perturbations in molecular fluxes incident on a substrate by accurately measuring the molecular fluxes reflected from the substrate. The reflected flux is extremely sensitive to the state of the growing surface and the measurements obtained enable control of newly forming surfaces that are dynamically changing as a result of growth. 3 figs.
Brennan, Thomas M.; Hammons, B. Eugene; Tsao, Jeffrey Y.
1992-01-01
A method for on-line accurate monitoring and precise control of molecular beam epitaxial growth of Groups III-III-V or Groups III-V-V layers in an advanced semiconductor device incorporates reflection mass spectrometry. The reflection mass spectrometry is responsive to intentional perturbations in molecular fluxes incident on a substrate by accurately measuring the molecular fluxes reflected from the substrate. The reflected flux is extremely sensitive to the state of the growing surface and the measurements obtained enable control of newly forming surfaces that are dynamically changing as a result of growth.
Li, Tongyang; Wang, Shaoping; Zio, Enrico; Shi, Jian; Hong, Wei
2018-01-01
Leakage is the most important failure mode in aircraft hydraulic systems, caused by wear and tear between the friction pairs of components. The accurate detection of abrasive debris can reveal the wear condition and predict a system's lifespan. The radial magnetic field (RMF)-based debris detection method provides an online solution for monitoring the wear condition intuitively, which potentially enables a more accurate diagnosis and prognosis of an aviation hydraulic system's ongoing failures. To address the serious mixing of pipe abrasive debris, this paper focuses on the separation of superimposed abrasive debris signals from an RMF abrasive sensor based on the degenerate unmixing estimation technique. By accurately separating and calculating the morphology and amount of the abrasive debris, the RMF-based abrasive sensor can provide the system's wear trend and size estimates of the wear particles. A well-designed experiment was conducted and the result shows that the proposed method can effectively separate the mixed debris and give an accurate count of the debris based on RMF abrasive sensor detection. PMID:29543733
NASA Technical Reports Server (NTRS)
Ameri, Ali; Shyam, Vikram; Rigby, David; Poinsatte, Philip; Thurman, Douglas; Steinthorsson, Erlendur
2014-01-01
Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include laminar-turbulent transition, turbulent mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy but at a considerably higher cost. In recent years, hybrid schemes which take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method, applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn-HT code and applied to film cooling flows. In this report the method and its implementation are briefly described. The film effectiveness results obtained for film cooling from a row of 30 holes with a pitch of 3.0 diameters emitting air at a nominal density ratio of unity and four blowing ratios of 0.5, 1.0, 1.5 and 2.0 are shown. Flow features under those conditions are also described.
NASA Astrophysics Data System (ADS)
Kahrobaee, Saeed; Hejazi, Taha-Hossein
2017-07-01
Austenitizing and tempering temperatures are the key characteristics of the heat treating process of AISI D2 tool steel. Controlling them therefore enables the heat treatment process to be designed more accurately, resulting in better-balanced mechanical properties. The aim of this work is to develop a multiresponse predictive model that finds these characteristics nondestructively from a set of parameters of the magnetic Barkhausen noise technique and the hysteresis loop method. To produce various microstructural changes, identical specimens from the AISI D2 steel sheet were austenitized in the range 1025-1130 °C for 30 min, oil-quenched and finally tempered at various temperatures between 200 °C and 650 °C. A set of nondestructive data was gathered based on a general factorial design of experiments and used for training and testing the multiple response surface model. Finally, an optimization model has been proposed to minimize the prediction error. Results revealed that applying the Barkhausen and hysteresis loop methods simultaneously, coupled with the multiresponse model, has the potential to serve as a reliable and accurate nondestructive tool for predicting the austenitizing and tempering temperatures (which, in turn, characterize the microstructural changes) of parts with unknown heat treating conditions.
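The "multiple response surface model" fitted here is, in essence, one quadratic surface per response fitted to factorial-design data. A NumPy least-squares sketch on synthetic data (the feature meanings, the quadratic map generating the temperatures, and all numbers are illustrative assumptions, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two nondestructive features (e.g. a Barkhausen noise
# amplitude and a hysteresis-loop coercivity), scaled to [0, 1].
X = rng.uniform(0.0, 1.0, size=(200, 2))

# Two responses per specimen: austenitizing and tempering temperature (degC),
# generated here from an arbitrary quadratic map for demonstration.
y = np.column_stack([
    1025 + 105 * X[:, 0] + 20 * X[:, 1] ** 2,
    200 + 450 * X[:, 1] - 30 * X[:, 0] * X[:, 1],
])

def quad_design(X):
    """Full quadratic response-surface design matrix in two factors."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# One least-squares response surface per output column (multiresponse fit).
coef, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
pred = quad_design(X) @ coef
```

With the responses generated from an exact quadratic map, the fitted surfaces reproduce them to machine precision; on real data the residuals would feed the paper's error-minimizing optimization step.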
Quantitative analysis of periodontal pathogens by ELISA and real-time polymerase chain reaction.
Hamlet, Stephen M
2010-01-01
The development of analytical methods enabling the accurate identification and enumeration of bacterial species colonizing the oral cavity has led to the identification of a small number of bacterial pathogens that are major factors in the etiology of periodontal disease. Further, these methods also underpin more recent epidemiological analyses of the impact of periodontal disease on general health. Given the complex milieu of over 700 species of microorganisms known to exist within the complex biofilms found in the oral cavity, the identification and enumeration of oral periodontopathogens has not been an easy task. In recent years, however, some of the intrinsic limitations of the more traditional microbiological analyses previously used have been overcome with the advent of immunological and molecular analytical methods. Of the plethora of methodologies reported in the literature, the enzyme-linked immunosorbent assay (ELISA), which combines the specificity of antibody with the sensitivity of simple enzyme assays, and the polymerase chain reaction (PCR) have been widely utilized in both laboratory and clinical applications. Although conventional PCR does not allow quantitation of the target organism, real-time PCR (rtPCR) has the ability to detect amplicons as they accumulate in "real time", allowing subsequent quantitation. These methods enable the accurate quantitation of as few as 10^2 (using rtPCR) to 10^4 (using ELISA) periodontopathogens in dental plaque samples.
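As a concrete illustration of rtPCR quantitation, a standard curve relates the threshold cycle Ct linearly to log10 of the starting copy number; fitting that curve on dilution standards and inverting it yields absolute counts. The sketch below uses synthetic standard-curve values (the Ct readings, slope, and intercept are illustrative assumptions, not data from this chapter):

```python
import numpy as np

# Dilution standards: known starting copy numbers and measured Ct values
# (synthetic numbers with a slope of -3.3 cycles/decade, i.e. ~100% efficiency).
log10_copies = np.array([2.0, 3.0, 4.0, 5.0, 6.0])   # 10^2 .. 10^6 copies
ct_standards = np.array([31.4, 28.1, 24.8, 21.5, 18.2])

# Fit the standard curve: Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(log10_copies, ct_standards, 1)

def copies_from_ct(ct_sample):
    """Invert the standard curve to quantify an unknown sample."""
    return 10.0 ** ((ct_sample - intercept) / slope)

# Amplification efficiency: ~1.0 means the template doubles every cycle.
efficiency = 10.0 ** (-1.0 / slope) - 1.0
```

A sample crossing threshold at Ct = 24.8 reads back as roughly 10^4 starting copies, illustrating how rtPCR turns cycle counts into the absolute pathogen counts discussed above.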
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver
2009-11-20
Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the increasing number of data dimensions and data objects is presenting tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable, for the first time, measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The MATLAB-based analysis framework and the visualization have been integrated, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics.
To gain insight into the complex physical processes of particle acceleration, physicists model LWFAs computationally. The datasets produced by LWFA simulations are (i) extremely large, (ii) of varying spatial and temporal resolution, (iii) heterogeneous, and (iv) high-dimensional, making analysis and knowledge discovery from complex LWFA simulation data a challenging task. To address these challenges this thesis describes the integration of the visualization system VisIt and the state-of-the-art index/query system FastBit, enabling interactive visual exploration of extremely large three-dimensional particle datasets. Researchers are especially interested in beams of high-energy particles formed during the course of a simulation. This thesis describes novel methods for automatic detection and analysis of particle beams enabling a more accurate and efficient data analysis process. By integrating these automated analysis methods with visualization, this research enables more accurate, efficient, and effective analysis of LWFA simulation data than previously possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyar, M. Darby; McCanta, Molly; Breves, Elly
2016-03-01
Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe3+ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe3+ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyar, M. Darby; McCanta, Molly; Breves, Elly
2016-03-01
Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe3+ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe3+ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
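A minimal sketch of the lasso approach described above, using scikit-learn on synthetic spectra. The channel count, the positions of the informative channels, and the regularization strength are illustrative assumptions; the actual study fits measured XAS spectra:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_samples, n_channels = 80, 200

# Synthetic "full spectral region": each row is one glass spectrum.
spectra = rng.normal(size=(n_samples, n_channels))

# Assume %Fe3+ depends strongly on a few energy channels (sparse ground truth).
w_true = np.zeros(n_channels)
w_true[[10, 55, 120]] = [20.0, -15.0, 10.0]
fe3_percent = 50.0 + spectra @ w_true

# Lasso keeps the informative channels and shrinks the rest toward zero.
model = Lasso(alpha=0.1)
model.fit(spectra[:60], fe3_percent[:60])
pred = model.predict(spectra[60:])
rmse = float(np.sqrt(np.mean((pred - fe3_percent[60:]) ** 2)))
```

On this noiseless toy problem the held-out RMSE lands comfortably inside the ±3.6% absolute accuracy the study reports for real spectra; the sparsity of the lasso solution is what makes the per-channel weights interpretable.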
Montagne, Louise; Derhourhi, Mehdi; Piton, Amélie; Toussaint, Bénédicte; Durand, Emmanuelle; Vaillant, Emmanuel; Thuillier, Dorothée; Gaget, Stefan; De Graeve, Franck; Rabearivelo, Iandry; Lansiaux, Amélie; Lenne, Bruno; Sukno, Sylvie; Desailloud, Rachel; Cnop, Miriam; Nicolescu, Ramona; Cohen, Lior; Zagury, Jean-François; Amouyal, Mélanie; Weill, Jacques; Muller, Jean; Sand, Olivier; Delobel, Bruno; Froguel, Philippe; Bonnefond, Amélie
2018-05-16
For the molecular diagnosis of extreme forms of obesity, accurate detection of both copy number variations (CNVs) and point mutations is crucial for optimal care of the patients and genetic counseling for their families. Whole-exome sequencing (WES) has considerably benefited this molecular diagnosis, but its poor ability to detect CNVs remains a major limitation. We aimed to develop a method (CoDE-seq) enabling the accurate detection of both CNVs and point mutations in one step. CoDE-seq is based on an augmented WES method, using probes distributed uniformly throughout the genome. CoDE-seq was validated in 40 patients for whom chromosomal DNA microarray data were available. CNVs and mutations were assessed in 82 children/young adults with suspected Mendelian obesity and/or intellectual disability and in their parents when available (n total = 145). CoDE-seq not only detected all of the 97 CNVs identified by chromosomal DNA microarrays but also found 84 additional CNVs, due to its better resolution. Compared with CoDE-seq and chromosomal DNA microarrays, WES failed to detect 37% and 14% of CNVs, respectively. In the 82 patients, a likely molecular diagnosis was achieved in >30% of the patients. Half of the genetic diagnoses were explained by CNVs and the other half by mutations. CoDE-seq has proven cost-efficient and highly effective as it avoids the sequential genetic screening approaches currently used in clinical practice for the accurate detection of CNVs and point mutations. Copyright © 2018 The Authors. Published by Elsevier GmbH. All rights reserved.
Langoju, Rajesh; Patil, Abhijit; Rastogi, Pramod
2007-11-20
Signal processing methods based on maximum-likelihood theory, discrete chirp Fourier transform, and spectral estimation methods have enabled accurate measurement of phase in phase-shifting interferometry in the presence of a nonlinear response of the piezoelectric transducer to the applied voltage. We present a statistical study of these generalized nonlinear phase step estimation methods to identify the best method by deriving the Cramér-Rao bound. We also address important aspects of these methods for implementation in practical applications and compare the performance of the best-identified method with other benchmarking algorithms in the presence of harmonics and noise.
Dual beam translator for use in Laser Doppler anemometry
Brudnoy, David M.
1987-01-01
A method and apparatus for selectively translating the path of at least one pair of light beams in a Laser Doppler anemometry device whereby the light paths are translated in a direction parallel to the original beam paths so as to enable attainment of spatial coincidence of the two intersection volumes and permit accurate measurements of Reynolds shear stress.
Dual beam translator for use in Laser Doppler anemometry
Brudnoy, D.M.
1984-04-12
A method and apparatus for selectively translating the path of at least one pair of light beams in a Laser Doppler anemometry device whereby the light paths are translated in a direction parallel to the original beam paths so as to enable attainment of spatial coincidence of the two intersection volumes and permit accurate measurements of Reynolds shear stress.
Montero-Dorta, Antonio D.; Bolton, Adam S.; Brownstein, Joel R.; ...
2016-06-09
The history of the expanding universe is encoded in the large-scale distribution of galaxies throughout space. By mapping out the three-dimensional locations of millions of galaxies with powerful telescopes, we can directly measure this expansion history. When interpreted using Einstein's theory of gravity, this expansion history lets us infer the contents of the universe, including the amount and nature of "dark energy", an as-yet unexplained energy density associated with the empty vacuum of space. However, to make these measurements and inferences accurately, we must understand and control for a large number of experimental effects. This paper develops a novel method for large cosmological galaxy surveys, and applies it to data from the "BOSS" experiment of the Third Sloan Digital Sky Survey. This method enables an accurate statistical characterization of the "completeness" of the BOSS experiment: the probability that a given galaxy at a given place in the universe is actually detected and successfully measured. It also enables the accurate determination of the underlying demographics of the galaxy population being studied by the experiment. These two ingredients can then be used to make a more accurate comparison between the results of the experiment and the theoretical models that predict the observable effects of dark energy.
Accurate radiative transfer calculations for layered media.
Selden, Adrian C
2016-07-01
Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics.
Computational Fluid Dynamics of Whole-Body Aircraft
NASA Astrophysics Data System (ADS)
Agarwal, Ramesh
1999-01-01
The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.
A method for the rapid detection of urinary tract infections.
Olsson, Carl; Kapoor, Deepak; Howard, Glenn
2012-04-01
To determine the reliability of a rapid detection method compared with the reference standard streaked agar plate in diagnosing the presence of urinary tract infection (UTI). De-identified clean catch urine specimens from 980 office visit patients were processed during a 30-day period. Classic 1-μL and 10-μL streaked agar plates were used in parallel with the new CultureStat Rapid UTI Detection System (CSRUDS). Urine results were evaluated using the CSRUDS at 30 and 90 minutes after collection. A comparative analysis of the subsequent plate results versus the CSRUDS results was achieved for 973 of these samples. Positive UTI conditions were accurately identified by both CSRUDS and agar streak plate methods. CSRUDS accurately identified UTI negative conditions with 99.3% reliability at 90 minutes. The negative predictive value of CSRUDS was 99.2% at 30 minutes. Current agar plating for first-round UTI screening has substantial documented problems that can negatively affect an accurate and timely UTI diagnosis. A novel rapid detection system, the CSRUDS provides UTI negative/positive same-day results in ≤ 90 minutes from the start of test. Such rapidly available results will enable more accurate and timely clinical decisions to be made in the urology office, particularly regarding infection status before urologic instrumentation. Copyright © 2012 Elsevier Inc. All rights reserved.
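The reliability figures quoted above (99.3% reliability for negatives, 99.2% negative predictive value) are standard confusion-matrix statistics computed against the reference-standard agar plate. A minimal sketch, using hypothetical counts for illustration only (the study's raw cell counts are not given in the abstract):

```python
def screening_stats(tp, fp, tn, fn):
    """Screening-test metrics from confusion-matrix counts:
    tp/fp/tn/fn = true/false positives and negatives versus the
    reference-standard streaked agar plate."""
    return {
        "sensitivity": tp / (tp + fn),   # positives correctly flagged
        "specificity": tn / (tn + fp),   # negatives correctly cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for illustration only (not the study's data):
stats = screening_stats(tp=180, fp=5, tn=780, fn=8)
```

A high NPV is the clinically decisive number here: it is the probability that a patient reported negative by the rapid system is truly free of UTI, which is what makes same-day rule-out before instrumentation defensible.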
Accurate Modeling Method for Cu Interconnect
NASA Astrophysics Data System (ADS)
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
Multi-fidelity machine learning models for accurate bandgap predictions of solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab
Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. In using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelities, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.
Multi-fidelity machine learning models for accurate bandgap predictions of solids
Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab
2016-12-28
Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. In using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelities, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.
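The multi-fidelity idea can be approximated in a few lines with a simpler "delta-learning" surrogate: train a Gaussian process on the difference between high- and low-fidelity results, then add the learned correction to cheap low-fidelity predictions. This is a hedged simplification of the paper's co-kriging formulation, with toy one-dimensional functions standing in for the two DFT fidelity levels:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy fidelity levels standing in for semi-local vs. hybrid DFT bandgaps.
def low_fidelity(x):   # cheap, systematically biased
    return np.sin(2.0 * x) + 0.5 * x

def high_fidelity(x):  # expensive "ground truth"
    return low_fidelity(x) + 0.4 + 0.1 * x ** 2

# A handful of costly high-fidelity evaluations.
x_train = np.linspace(0.0, 2.0, 8)[:, None]
delta = high_fidelity(x_train.ravel()) - low_fidelity(x_train.ravel())

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-8)
gp.fit(x_train, delta)                     # learn the smooth fidelity gap

x_test = np.linspace(0.0, 2.0, 50)[:, None]
correction, std = gp.predict(x_test, return_std=True)   # std = uncertainty
prediction = low_fidelity(x_test.ravel()) + correction
```

Because the fidelity gap is smoother than either function, a few high-fidelity points suffice, which is the economic argument behind multi-fidelity learning; the predictive standard deviation plays the role of the paper's confidence measure.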
Matsunaga, Hiroko; Goto, Mari; Arikawa, Koji; Shirai, Masataka; Tsunoda, Hiroyuki; Huang, Huan; Kambara, Hideki
2015-02-15
Analyses of gene expressions in single cells are important for understanding detailed biological phenomena. Here, a highly sensitive and accurate method by sequencing (called "bead-seq") to obtain a whole gene expression profile for a single cell is proposed. A key feature of the method is to use a complementary DNA (cDNA) library on magnetic beads, which enables adding washing steps to remove residual reagents in a sample preparation process. By adding the washing steps, the next steps can be carried out under the optimal conditions without losing cDNAs. Error sources were carefully evaluated to conclude that the first several steps were the key steps. It is demonstrated that bead-seq is superior to the conventional methods for single-cell gene expression analyses in terms of reproducibility, quantitative accuracy, and biases caused during sample preparation and sequencing processes. Copyright © 2014 Elsevier Inc. All rights reserved.
Sensing and Active Flow Control for Advanced BWB Propulsion-Airframe Integration Concepts
NASA Technical Reports Server (NTRS)
Fleming, John; Anderson, Jason; Ng, Wing; Harrison, Neal
2005-01-01
In order to realize the substantial performance benefits of serpentine boundary layer ingesting diffusers, this study investigated the use of enabling flow control methods to reduce engine-face flow distortion. Computational methods and novel flow control modeling techniques were utilized that allowed for rapid, accurate analysis of flow control geometries. Results were validated experimentally using the Techsburg Ejector-based wind tunnel facility; this facility is capable of simulating the high-altitude, high subsonic Mach number conditions representative of BWB cruise conditions.
Automated Tumor Volumetry Using Computer-Aided Image Segmentation
Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A.; Ali, Zarina S.; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M.; Davatzikos, Christos
2015-01-01
Rationale and Objectives: Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. Materials and Methods: A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Results: Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0–5 rating scale where 5 indicated perfect segmentation. Conclusions: The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. PMID:25770633
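The Dice measure used for the quantitative validation is simple to state: twice the overlap of the two masks divided by the sum of their sizes. A minimal sketch (the mask shapes and values are illustrative, not from the study):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap of two binary masks: 2|A∩B| / (|A| + |B|); 1.0 is perfect."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((10, 10), dtype=bool)
manual[2:8, 2:8] = True            # 36-voxel "manual" tumor mask
auto = np.zeros((10, 10), dtype=bool)
auto[3:8, 2:8] = True              # 30-voxel "automatic" mask, fully inside it
```

Here the intersection is 30 voxels, so the Dice score is 2*30/(36+30) = 60/66, about 0.91; in 3D the same formula applies voxel-wise to the segmented tumor volumes.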
Direct volume estimation without segmentation
NASA Astrophysics Data System (ADS)
Zhen, X.; Wang, Z.; Islam, A.; Bhaduri, M.; Chan, I.; Li, S.
2015-03-01
Volume estimation plays an important role in clinical diagnosis. For example, cardiac ventricular volumes, including the left ventricle (LV) and right ventricle (RV), are important clinical indicators of cardiac function. Accurate and automatic estimation of the ventricular volumes is essential to the assessment of cardiac function and the diagnosis of heart diseases. Conventional methods depend on an intermediate segmentation step which is performed either manually or automatically. However, manual segmentation is extremely time-consuming, subjective and highly non-reproducible; automatic segmentation is still challenging, computationally expensive, and completely unsolved for the RV. Towards accurate and efficient direct volume estimation, our group has been researching learning-based methods without segmentation by leveraging state-of-the-art machine learning techniques. Our direct estimation methods remove the additional segmentation step and can naturally deal with various volume estimation tasks. Moreover, they are extremely flexible and can be used for volume estimation of either joint bi-ventricles (LV and RV) or the individual LV/RV. We comparatively study the performance of direct methods on cardiac ventricular volume estimation by comparing with segmentation-based methods. Experimental results show that direct estimation methods provide more accurate estimation of cardiac ventricular volumes than segmentation-based methods. This indicates that direct estimation methods not only provide a convenient and mature clinical tool for cardiac volume estimation but also enable diagnosis of cardiac diseases to be conducted in a more efficient and reliable way.
Quantitation of sweet steviol glycosides by means of a HILIC-MS/MS-SIDA approach.
Well, Caroline; Frank, Oliver; Hofmann, Thomas
2013-11-27
Meeting the rising consumer demand for natural food ingredients, steviol glycosides, the sweet principle of Stevia rebaudiana Bertoni (Bertoni), have recently been approved as food additives in the European Union. As regulatory constraints require sensitive methods to analyze the sweet-tasting steviol glycosides in foods and beverages, a HILIC-MS/MS method was developed enabling the accurate and reliable quantitation of the major steviol glycosides stevioside, rebaudiosides A-F, steviolbioside, rubusoside, and dulcoside A by using the corresponding deuterated 16,17-dihydrosteviol glycosides as suitable internal standards. This quantitation not only enables the analysis of the individual steviol glycosides in foods and beverages but also can support the optimization of breeding and postharvest downstream processing of Stevia plants to produce preferentially sweet and least bitter tasting Stevia extracts.
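The quantitation principle of a stable isotope dilution assay (SIDA) reduces to a peak-area ratio against the spiked isotope-labeled internal standard. The sketch below shows that arithmetic; the function name and all numbers are illustrative, not values from the paper:

```python
def sida_concentration(area_analyte, area_istd, conc_istd, response_factor=1.0):
    """Stable isotope dilution analysis: the analyte concentration follows from
    the analyte / internal-standard peak-area ratio (e.g. from MS/MS traces),
    the known spiked internal-standard concentration, and a response factor
    determined in calibration."""
    return (area_analyte / area_istd) * conc_istd / response_factor

# e.g. stevioside against its deuterated 16,17-dihydrosteviol glycoside standard
# (illustrative numbers): area ratio 2.0 at a 50 mg/L spike -> 100 mg/L
conc = sida_concentration(area_analyte=2.4e6, area_istd=1.2e6, conc_istd=50.0)
```

Because the labeled standard co-elutes and ionizes nearly identically to the analyte, matrix effects largely cancel in the ratio, which is what makes the method accurate in complex food and beverage matrices.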
Gutzwiller renormalization group
Lanatà, Nicola; Yao, Yong -Xin; Deng, Xiaoyu; ...
2016-01-06
We develop a variational scheme called the “Gutzwiller renormalization group” (GRG), which enables us to calculate the ground state of Anderson impurity models (AIM) with arbitrary numerical precision. Our method exploits the low-entanglement property of the ground state of local Hamiltonians in combination with the framework of the Gutzwiller wave function and indicates that the ground state of the AIM has a very simple structure, which can be represented very accurately in terms of a surprisingly small number of variational parameters. Furthermore, we perform benchmark calculations of the single-band AIM that validate our theory and suggest that the GRG might enable us to study complex systems beyond the reach of the other methods presently available and pave the way to interesting generalizations, e.g., to nonequilibrium transport in nanostructures.
Coplanar electrode microfluidic chip enabling accurate sheathless impedance cytometry.
De Ninno, Adele; Errico, Vito; Bertani, Francesca Romana; Businaro, Luca; Bisegna, Paolo; Caselli, Federica
2017-03-14
Microfluidic impedance cytometry offers a simple non-invasive method for single-cell analysis. Coplanar electrode chips are especially attractive due to ease of fabrication, yielding miniaturized, reproducible, and ultimately low-cost devices. However, their accuracy is challenged by the dependence of the measured signal on particle trajectory within the interrogation volume, which manifests itself as an error in the estimated particle size unless some kind of focusing system is used. In this paper, we present an original five-electrode coplanar chip enabling accurate particle sizing without the need for focusing. The chip layout is designed to provide a peculiar signal shape from which a new metric correlating with particle trajectory can be extracted. This metric is exploited to correct the estimated size of polystyrene beads of 5.2, 6 and 7 μm nominal diameter, reaching coefficients of variation lower than the manufacturers' quoted values. The potential impact of the proposed device in the field of life sciences is demonstrated with an application to Saccharomyces cerevisiae yeast.
Gruen, Dieter M.; Young, Charles E.; Pellin, Michael J.
1989-01-01
A method and apparatus for extracting for quantitative analysis ions of selected atomic components of a sample. A lens system is configured to provide a slowly diminishing field region for a volume containing the selected atomic components, enabling accurate energy analysis of ions generated in the slowly diminishing field region. The lens system also enables focusing on a sample of a charged particle beam, such as an ion beam, along a path length perpendicular to the sample and extraction of the charged particles along a path length also perpendicular to the sample. Improvement of signal to noise ratio is achieved by laser excitation of ions to selected autoionization states before carrying out quantitative analysis. Accurate energy analysis of energetic charged particles is assured by using a preselected resistive thick film configuration disposed on an insulator substrate for generating predetermined electric field boundary conditions to achieve for analysis the required electric field potential. The spectrometer also is applicable in the fields of SIMS, ISS and electron spectroscopy.
Gruen, D.M.; Young, C.E.; Pellin, M.J.
1989-08-08
A method and apparatus are described for extracting for quantitative analysis ions of selected atomic components of a sample. A lens system is configured to provide a slowly diminishing field region for a volume containing the selected atomic components, enabling accurate energy analysis of ions generated in the slowly diminishing field region. The lens system also enables focusing on a sample of a charged particle beam, such as an ion beam, along a path length perpendicular to the sample and extraction of the charged particles along a path length also perpendicular to the sample. Improvement of signal to noise ratio is achieved by laser excitation of ions to selected auto-ionization states before carrying out quantitative analysis. Accurate energy analysis of energetic charged particles is assured by using a preselected resistive thick film configuration disposed on an insulator substrate for generating predetermined electric field boundary conditions to achieve for analysis the required electric field potential. The spectrometer also is applicable in the fields of SIMS, ISS and electron spectroscopy. 8 figs.
Mathematical Model and Calibration Procedure of a PSD Sensor Used in Local Positioning Systems.
Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Domingo-Perez, Francisco; Tsirigotis, Georgios
2016-09-15
Here, we propose a mathematical model and a calibration procedure for a PSD (position sensitive device) sensor equipped with an optical system, to enable accurate measurement of the angle of arrival of one or more beams of light emitted by infrared (IR) transmitters located at distances of between 4 and 6 m. To achieve this objective, it was necessary to characterize the intrinsic parameters that model the system and obtain their values. This first approach was based on a pin-hole model, to which system nonlinearities were added, and this was used to model the points obtained with the nA currents provided by the PSD. In addition, we analyzed the main sources of error, including PSD sensor signal noise, gain factor imbalances and PSD sensor distortion. The results indicated that the proposed model and method provided satisfactory calibration and yielded precise parameter values, enabling accurate measurement of the angle of arrival with a low degree of error, as evidenced by the experimental results.
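The idealized first-order geometry behind such a PSD pin-hole model can be sketched as follows (the paper adds nonlinearity, gain-imbalance and distortion terms on top of this; the function names and example dimensions below are hypothetical, for illustration only):

```python
import math

def psd_position(i1, i2, length_mm):
    # Standard 1-D PSD relation: the normalized imbalance of the two
    # terminal photocurrents is proportional to the light-spot offset
    # from the sensor centre.
    return (length_mm / 2.0) * (i2 - i1) / (i1 + i2)

def angle_of_arrival(i1, i2, length_mm, focal_mm):
    # Pin-hole model: tan(theta) = spot offset / effective focal length.
    x = psd_position(i1, i2, length_mm)
    return math.degrees(math.atan2(x, focal_mm))
```

In the paper's setting, calibration amounts to estimating the intrinsic parameters (here just `length_mm` and `focal_mm`, plus the nonlinear corrections) from observations of transmitters at known positions.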
Shao, Xu; Milner, Ben
2005-08-01
This work proposes a method to reconstruct an acoustic speech signal solely from a stream of mel-frequency cepstral coefficients (MFCCs) as may be encountered in a distributed speech recognition (DSR) system. Previous methods for speech reconstruction have required, in addition to the MFCC vectors, fundamental frequency and voicing components. In this work the voicing classification and fundamental frequency are predicted from the MFCC vectors themselves using two maximum a posteriori (MAP) methods. The first method enables fundamental frequency prediction by modeling the joint density of MFCCs and fundamental frequency using a single Gaussian mixture model (GMM). The second scheme uses a set of hidden Markov models (HMMs) to link together a set of state-dependent GMMs, which enables a more localized modeling of the joint density of MFCCs and fundamental frequency. Experimental results on speaker-independent male and female speech show that accurate voicing classification and fundamental frequency prediction is attained when compared to hand-corrected reference fundamental frequency measurements. The use of the predicted fundamental frequency and voicing for speech reconstruction is shown to give very similar speech quality to that obtained using the reference fundamental frequency and voicing.
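The first prediction scheme (a single GMM over the joint density of spectral features and fundamental frequency) can be sketched for a scalar feature: the estimate is a responsibility-weighted sum of per-component linear regressions. This is a simplified stand-in for the paper's MFCC vectors, with hypothetical component parameters:

```python
import math

def gauss(x, mu, var):
    # Univariate normal density.
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def predict_f0(x, components):
    # components: list of (weight, mu_x, var_x, mu_f0, cov_x_f0) tuples of a
    # joint GMM over (feature, f0). The prediction is the posterior-weighted
    # sum of the per-component conditional means E[f0 | x, component k].
    resp = [w * gauss(x, mu_x, var_x) for (w, mu_x, var_x, _, _) in components]
    z = sum(resp)
    f0 = 0.0
    for r, (_, mu_x, var_x, mu_f0, cov) in zip(resp, components):
        f0 += (r / z) * (mu_f0 + (cov / var_x) * (x - mu_x))
    return f0
```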
Yu, Feiqiao Brian; Blainey, Paul C; Schulz, Frederik; Woyke, Tanja; Horowitz, Mark A; Quake, Stephen R
2017-07-05
Metagenomics and single-cell genomics have enabled genome discovery from unknown branches of life. However, extracting novel genomes from complex mixtures of metagenomic data can still be challenging and represents an ill-posed problem which is generally approached with ad hoc methods. Here we present a microfluidic-based mini-metagenomic method which offers a statistically rigorous approach to extract novel microbial genomes while preserving single-cell resolution. We used this approach to analyze two hot spring samples from Yellowstone National Park and extracted 29 new genomes, including three deeply branching lineages. The single-cell resolution enabled accurate quantification of genome function and abundance, down to 1% in relative abundance. Our analyses of genome level SNP distributions also revealed low to moderate environmental selection. The scale, resolution, and statistical power of microfluidic-based mini-metagenomics make it a powerful tool to dissect the genomic structure of microbial communities while effectively preserving the fundamental unit of biology, the single cell.
Petukhov, Viktor; Guo, Jimin; Baryawno, Ninib; Severe, Nicolas; Scadden, David T; Samsonova, Maria G; Kharchenko, Peter V
2018-06-19
Recent single-cell RNA-seq protocols based on droplet microfluidics use massively multiplexed barcoding to enable simultaneous measurements of transcriptomes for thousands of individual cells. The increasing complexity of such data creates challenges for subsequent computational processing and troubleshooting of these experiments, with few software options currently available. Here, we describe a flexible pipeline for processing droplet-based transcriptome data that implements barcode corrections, classification of cell quality, and diagnostic information about the droplet libraries. We introduce advanced methods for correcting composition bias and sequencing errors affecting cellular and molecular barcodes to provide more accurate estimates of molecular counts in individual cells.
Alali, Sanaz; Gribble, Adam; Vitkin, I Alex
2016-03-01
A new polarimetry method is demonstrated to image the entire Mueller matrix of a turbid sample using four photoelastic modulators (PEMs) and a charge coupled device (CCD) camera, with no moving parts. Accurate wide-field imaging is enabled with a field-programmable gate array (FPGA) optical gating technique and an evolutionary algorithm (EA) that optimizes imaging times. This technique accurately and rapidly measured the Mueller matrices of air, polarization elements, and turbid phantoms. The system should prove advantageous for Mueller matrix analysis of turbid samples (e.g., biological tissues) over large fields of view, in less than a second.
Preface: Special Topic: From Quantum Mechanics to Force Fields.
Piquemal, Jean-Philip; Jordan, Kenneth D
2017-10-28
This Special Topic issue entitled "From Quantum Mechanics to Force Fields" is dedicated to the ongoing efforts of the theoretical chemistry community to develop a new generation of accurate force fields based on data from high-level electronic structure calculations and to develop faster electronic structure methods for testing and designing force fields as well as for carrying out simulations. This issue includes a collection of 35 original research articles that illustrate recent theoretical advances in the field. It provides a timely snapshot of recent developments in the generation of approaches to enable more accurate molecular simulations of processes important in chemistry, physics, biophysics, and materials science.
Gulati, Sanchita; During, David; Mainland, Jeff; Wong, Agnes M F
2018-01-01
One of the key challenges for healthcare organizations is the development of relevant and accurate cost information. In this paper, we used the time-driven activity-based costing (TDABC) method to calculate the costs of treating individual patients with specific medical conditions over their full cycle of care. We discussed how TDABC provides a critical, systematic and data-driven approach to estimate costs accurately and dynamically, as well as its potential to enable structural and rational cost reduction to bring about a sustainable healthcare system. © 2018 Longwoods Publishing.
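The TDABC arithmetic itself is simple: each resource gets a capacity cost rate (cost of supplied capacity over practical capacity), and a patient's cost is the sum of rate × time over every activity in the care cycle. A minimal sketch with made-up numbers:

```python
def capacity_cost_rate(total_resource_cost, practical_capacity_minutes):
    # TDABC step 1: cost per minute of making the resource available.
    return total_resource_cost / practical_capacity_minutes

def episode_cost(process_map):
    # TDABC step 2: sum rate x time over the activities a patient consumes.
    # process_map: list of (cost_rate_per_minute, minutes_used).
    return sum(rate * minutes for rate, minutes in process_map)
```

For example, a nursing unit costing 60,000 per month with 12,000 practical minutes yields a 5.00/min rate; a visit consuming 30 nurse-minutes plus 15 physician-minutes at 10.00/min costs 300.00 over that cycle of care.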
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Yang, Xiu; Zheng, Bin
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
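The key payoff of a gPC surrogate is that, once the coefficients are known (the paper recovers them sparsely via compressive sensing), the target property and its statistics follow almost for free from the orthogonality of the basis. A one-dimensional Hermite sketch (illustrative only; the paper works with high-dimensional conformational variables):

```python
import math

def hermite(n, x):
    # Probabilists' Hermite polynomials He_n, orthogonal under the standard
    # normal weight: He_0 = 1, He_1 = x, He_n = x He_{n-1} - (n-1) He_{n-2}.
    if n == 0:
        return 1.0
    if n == 1:
        return x
    return x * hermite(n - 1, x) - (n - 1) * hermite(n - 2, x)

def surrogate(coeffs, xi):
    # Evaluate the gPC surrogate sum_k c_k He_k(xi) at a conformational state xi.
    return sum(c * hermite(k, xi) for k, c in enumerate(coeffs))

def gpc_mean_var(coeffs):
    # Orthogonality gives the moments directly from the coefficients:
    # mean = c_0, variance = sum_{k>=1} c_k^2 * k!.
    mean = coeffs[0]
    var = sum(c * c * math.factorial(k) for k, c in enumerate(coeffs) if k >= 1)
    return mean, var
```

For instance, the property g(ξ) = ξ² has exact expansion coefficients [1, 0, 1], so its mean (1) and variance (2) under a standard normal input are read off the coefficients without any Monte Carlo sampling.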
Masud, Mohammad Shahed; Borisyuk, Roman; Stuart, Liz
2017-07-15
This study analyses multiple spike train (MST) data, defines their functional connectivity and subsequently visualises an accurate diagram of connections. This is a challenging problem: for example, it is difficult to distinguish common input to two spike trains from a direct functional connection between them. The new method presented in this paper is based on the traditional pairwise cross-correlation function (CCF) and a new combination of statistical techniques. First, the CCF is used to create the Advanced Correlation Grid (ACG), in which both the significant peak of the CCF and the corresponding time delay are used for detailed analysis of connectivity. Second, these two features of functional connectivity are used to classify connections. Finally, a visualization technique is used to represent the topology of functional connections. Examples are presented to demonstrate the new Advanced Correlation Grid method and to show how it enables discrimination between (i) influence from one spike train to another through an intermediate spike train and (ii) influence from one common spike train on another pair of analysed spike trains. The ACG method enables scientists to automatically distinguish direct connections from spurious connections, such as common-source and indirect connections, whereas existing methods require in-depth analysis to identify such connections. The ACG is a new and effective method for studying the functional connectivity of multiple spike trains: it can accurately identify all direct connections and automatically distinguish common-source and indirect connections. Copyright © 2017 Elsevier B.V. All rights reserved.
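The building block of the ACG, the pairwise cross-correlation with its peak height and delay, can be sketched as a plain spike-time-difference histogram (the paper adds significance testing and the grid-based classification on top of this):

```python
def cross_correlogram(train_a, train_b, max_lag, bin_width):
    # Histogram of spike-time differences (t_b - t_a) within +/- max_lag.
    n_bins = int(2 * max_lag / bin_width)
    counts = [0] * n_bins
    for ta in train_a:
        for tb in train_b:
            d = tb - ta
            if -max_lag <= d < max_lag:
                counts[int((d + max_lag) / bin_width)] += 1
    return counts

def peak_and_delay(counts, max_lag, bin_width):
    # The two ACG features: the CCF peak height and the lag (bin centre)
    # at which it occurs.
    k = max(range(len(counts)), key=lambda i: counts[i])
    return counts[k], -max_lag + (k + 0.5) * bin_width
```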
Estimates of Power Plant NOx Emissions and Lifetimes from OMI NO2 Satellite Retrievals
NASA Technical Reports Server (NTRS)
de Foy, Benjamin; Lu, Zifeng; Streets, David G.; Lamsal, Lok N.; Duncan, Bryan N.
2015-01-01
Isolated power plants with well characterized emissions serve as an ideal test case of methods to estimate emissions using satellite data. In this study we evaluate the Exponentially-Modified Gaussian (EMG) method and the box model method based on mass balance for estimating known NOx emissions from satellite retrievals made by the Ozone Monitoring Instrument (OMI). We consider 29 power plants in the USA which have large NOx plumes that do not overlap with other sources and which have emissions data from the Continuous Emission Monitoring System (CEMS). This enables us to identify constraints required by the methods, such as which wind data to use and how to calculate background values. We found that the lifetimes estimated by the methods are too short to be representative of the chemical lifetime. Instead, we introduce a separate lifetime parameter to account for the discrepancy between estimates using real data and those that theory would predict. In terms of emissions, the EMG method required averages from multiple years to give accurate results, whereas the box model method gave accurate results for individual ozone seasons.
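Both estimators reduce to a simple mass balance once the satellite columns are fitted: the downwind decay gives an effective decay length x0 (and lifetime τ = x0/w for wind speed w), and emissions follow as burden over lifetime. A sketch of the exponentially modified Gaussian shape and the resulting emissions estimate (parameter names here are illustrative, not the paper's notation):

```python
import math

def emg_pdf(x, mu, sigma, lam):
    # Exponentially modified Gaussian: a Gaussian (mu, sigma) convolved with
    # a one-sided exponential of rate lam; this shape is fitted to the
    # along-wind NO2 line densities.
    arg = (lam / 2.0) * (2.0 * mu + lam * sigma ** 2 - 2.0 * x)
    return (lam / 2.0) * math.exp(arg) * math.erfc(
        (mu + lam * sigma ** 2 - x) / (math.sqrt(2.0) * sigma))

def emissions_from_fit(burden, wind_speed, decay_length):
    # Mass balance: E = N * w / x0, i.e. the observed burden divided by
    # the effective lifetime tau = x0 / w.
    return burden * wind_speed / decay_length
```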
Direct and simultaneous estimation of cardiac four chamber volumes by multioutput sparse regression.
Zhen, Xiantong; Zhang, Heye; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo
2017-02-01
Cardiac four-chamber volume estimation plays a fundamental and crucial role in the clinical quantitative analysis of whole-heart function. It is a challenging task due to the complexity of the four chambers, including great appearance variation, large shape deformation and interference between chambers. Direct estimation has recently emerged as an effective and convenient tool for cardiac ventricular volume estimation. However, existing direct estimation methods were developed specifically for a single ventricle, i.e., the left ventricle (LV), or for bi-ventricles; they cannot be directly used for four-chamber volume estimation due to the great combinatorial variability and highly complex anatomical interdependency of the four chambers. In this paper, we propose a new, general framework for direct and simultaneous four-chamber volume estimation. We address two key issues, cardiac image representation and simultaneous four-chamber volume estimation, which together enable accurate and efficient four-chamber volume estimation. We generate compact and discriminative image representations by supervised descriptor learning (SDL), which removes irrelevant information and extracts discriminative features. We achieve direct and simultaneous four-chamber volume estimation by multioutput sparse latent regression (MSLR), which jointly models nonlinear input-output relationships and captures four-chamber interdependence. The proposed method is highly general and independent of imaging modality, providing a regression framework that can be used extensively for clinical data prediction to achieve automated diagnosis. Experiments on both MR and CT images show that our method achieves high performance, with a correlation coefficient of up to 0.921 against ground truth obtained manually by human experts, which is clinically significant and enables more accurate, convenient and comprehensive assessment of cardiac function.
Copyright © 2016 Elsevier B.V. All rights reserved.
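The joint, direct character of the estimation can be illustrated with a deliberately tiny stand-in for MSLR: several output volumes regressed simultaneously on one shared image representation (ordinary least squares on a scalar feature; the paper's actual model is sparse, latent and nonlinear):

```python
def fit_multioutput(xs, ys):
    # Least-squares slopes, one per output, all sharing the same scalar
    # feature x (no intercept): w_j = <x, y_j> / <x, x>.
    sxx = sum(x * x for x in xs)
    n_out = len(ys[0])
    return [sum(x * y[j] for x, y in zip(xs, ys)) / sxx for j in range(n_out)]

def predict_volumes(ws, x):
    # All chamber volumes are predicted jointly from the one representation.
    return [w * x for w in ws]
```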
NASA Astrophysics Data System (ADS)
Date, Kumi; Ishigure, Takaaki
2017-02-01
Polymer optical waveguides with graded-index (GI) circular cores are fabricated using the Mosquito method, in which the positions of parallel cores are accurately controlled. Such accurate arrangement is of great importance for high optical coupling efficiency with other optical components such as fiber ribbons. In the Mosquito method that we developed, a core monomer in a viscous liquid state is dispensed into another liquid-state monomer for the cladding via a syringe needle. Hence, the core positions are likely to shift during or after the dispensing process due to several factors. We investigate the factors specifically affecting the core height. When the core and cladding monomers are selected appropriately, the effect of gravity is negligible, so the core height remains uniform, resulting in accurate core heights. The height variance is controlled to within ±2 μm for the 12 cores. Meanwhile, a larger shift in core height is observed when the needle tip is positioned far from the substrate surface. One possible reason for this needle-tip height dependence is asymmetric volume contraction during monomer curing. We find a linear relationship between the original needle-tip height and the observed core height. This relationship is implemented in the needle-scan program to stabilize the core height in different layers. As a result, the core heights are accurately controlled even when the cores are aligned at various heights. These results indicate that the Mosquito method enables the fabrication of waveguides in which the cores are aligned three-dimensionally with high positional accuracy.
Auto-adaptive finite element meshes
NASA Technical Reports Server (NTRS)
Richter, Roland; Leyland, Penelope
1995-01-01
Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural as well as geometrical optimization. The methods described are applied to a number of ICASE test cases that are particularly interesting for unsteady flow simulations.
Automated tumor volumetry using computer-aided image segmentation.
Gaonkar, Bilwaj; Macyszyn, Luke; Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A; Ali, Zarina S; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M; Davatzikos, Christos
2015-05-01
Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0-5 rating scale where 5 indicated perfect segmentation. The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
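The geodesic distance transform at the core of the method can be sketched as Dijkstra's algorithm on the pixel grid, with step costs that grow across intensity edges so that the distance from user-placed seeds rises sharply at the tumor boundary (a minimal 4-connected version; the edge-cost weighting `beta` is an illustrative choice, not the paper's exact formulation):

```python
import heapq

def geodesic_distance(image, seeds, beta=1.0):
    # Dijkstra from the seed pixels; moving between neighbours costs the
    # unit grid step plus beta times the intensity difference, so the
    # geodesic distance grows quickly across strong edges.
    h, w = len(image), len(image[0])
    dist = [[float("inf")] * w for _ in range(h)]
    heap = []
    for r, c in seeds:
        dist[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + 1.0 + beta * abs(image[nr][nc] - image[r][c])
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist
```

A pixel can then be assigned to the tumor when its geodesic distance to the tumor seeds is smaller than its distance to the background seeds.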
Multiphase Interface Tracking with Fast Semi-Lagrangian Contouring.
Li, Xiaosheng; He, Xiaowei; Liu, Xuehui; Zhang, Jian J; Liu, Baoquan; Wu, Enhua
2016-08-01
We propose a semi-Lagrangian method for multiphase interface tracking. In contrast to previous methods, our method maintains an explicit polygonal mesh, which is reconstructed from an unsigned distance function and an indicator function, to track the interface of an arbitrary number of phases. The surface mesh is reconstructed at each step using an efficient multiphase polygonization procedure with precomputed stencils, while the distance and indicator functions are updated with accurate semi-Lagrangian path tracing from the meshes of the last step. Furthermore, we provide an adaptive data structure, the multiphase distance tree, to accelerate the updating of both the distance function and the indicator function. In addition, the adaptive structure also enables us to contour the distance tree accurately with simple bisection techniques. The major advantage of our method is that it can easily handle topological changes without ambiguities and preserves both sharp features and volume well. We evaluate its efficiency, accuracy and robustness with several examples in the results section.
NASA Astrophysics Data System (ADS)
Farges, Bérangère; Duchez, David; Dussap, Claude-Gilles; Cornet, Jean-François
2012-01-01
In microgravity, one of the major challenges encountered in biological life support systems (BLSS) is gas-liquid transfer, with, for instance, the necessity to provide CO2 (carbon source, pH control) and to recover the evolved O2 in photobioreactors used as atmosphere bioregenerative systems. This paper first describes the development of a system enabling accurate characterization of the mass-transfer limiting step for a PTFE membrane module, used as a possible efficient solution to gas-liquid transfer in microgravity. This original technical apparatus, together with a technical assessment of membrane permeability to different gases, is associated with a balance model, thus completely determining the CO2 mass-transfer problem between phases. First results are given and discussed for the CO2 mass-transfer coefficient kL,CO2 obtained in absorption experiments at pH 8 using the hollow-fiber membrane module. The consistency of the proposed method, based on gas- and liquid-phase balances verifying carbon conservation, enables a very accurate determination of the kL,CO2 value as the main limiting step of the whole process. Nevertheless, further experiments are still needed to demonstrate that the proposed method could serve in the future as a reference method for mass-transfer coefficient determination when using membrane modules for BLSS in reduced-gravity or microgravity conditions.
NASA Technical Reports Server (NTRS)
Ameri, Ali A.; Shyam, Vikram; Rigby, David; Poinsatte, Phillip; Thurman, Douglas; Steinthorsson, Erlendur
2014-01-01
Computational fluid dynamics (CFD) analysis using the Reynolds-averaged Navier-Stokes (RANS) formulation for turbomachinery-related flows has enabled improved engine component designs. RANS methodology has limitations related to its inability to accurately describe the spectrum of flow phenomena encountered in engines. Examples of flows that are difficult to compute accurately with RANS include phenomena such as laminar/turbulent transition, turbulent mixing due to the mixing of streams, and separated flows. Large eddy simulation (LES) can improve accuracy but at a considerably higher cost. In recent years, hybrid schemes that take advantage of both unsteady RANS and LES have been proposed. This study investigated an alternative scheme, the time-filtered Navier-Stokes (TFNS) method, applied to compressible flows. The method developed by Shih and Liu was implemented in the Glenn-Heat-Transfer (Glenn-HT) code and applied to film-cooling flows. In this report the method and its implementation are briefly described. The film effectiveness results obtained for film cooling from a row of 30deg holes with a pitch of 3.0 diameters emitting air at a nominal density ratio of unity and two blowing ratios of 0.5 and 1.0 are shown. Flow features under those conditions are also described.
NASA Astrophysics Data System (ADS)
Pedemonte, Stefano; Pierce, Larry; Van Leemput, Koen
2017-11-01
Measuring the depth of interaction (DOI) of gamma photons enables an increase in the resolution of emission imaging systems. Several design variants of DOI-sensitive detectors have been recently introduced to improve the performance of scanners for positron emission tomography (PET). However, the accurate characterization of the response of DOI detectors, necessary to accurately measure the DOI, remains an unsolved problem. Numerical simulations are, at the state of the art, imprecise, while directly measuring the characteristics of DOI detectors is hindered by the impossibility of imposing the depth of interaction in an experimental set-up. In this article we introduce a machine learning approach for extracting accurate forward models of gamma imaging devices from simple pencil-beam measurements, using a nonlinear dimensionality reduction technique in combination with a finite mixture model. The method is purely data-driven, does not require simulations, and is applicable to a wide range of detector types. The proposed method was evaluated both in a simulation study and with data acquired using a monolithic gamma camera designed for PET (the cMiCE detector), demonstrating the accurate recovery of the DOI characteristics. The combination of the proposed calibration technique with maximum a posteriori estimation of the coordinates of interaction provided a depth resolution of ≈1.14 mm for the simulated PET detector and ≈1.74 mm for the cMiCE detector. The software and experimental data are made available at http://occiput.mgh.harvard.edu/depthembedding/.
Choi, Ted; Eskin, Eleazar
2013-01-01
Gene expression data, in conjunction with information on genetic variants, have enabled studies to identify expression quantitative trait loci (eQTLs) or polymorphic locations in the genome that are associated with expression levels. Moreover, recent technological developments and cost decreases have further enabled studies to collect expression data in multiple tissues. One advantage of multiple tissue datasets is that studies can combine results from different tissues to identify eQTLs more accurately than examining each tissue separately. The idea of aggregating results of multiple tissues is closely related to the idea of meta-analysis which aggregates results of multiple genome-wide association studies to improve the power to detect associations. In principle, meta-analysis methods can be used to combine results from multiple tissues. However, eQTLs may have effects in only a single tissue, in all tissues, or in a subset of tissues with possibly different effect sizes. This heterogeneity in terms of effects across multiple tissues presents a key challenge to detect eQTLs. In this paper, we develop a framework that leverages two popular meta-analysis methods that address effect size heterogeneity to detect eQTLs across multiple tissues. We show by using simulations and multiple tissue data from mouse that our approach detects many eQTLs undetected by traditional eQTL methods. Additionally, our method provides an interpretation framework that accurately predicts whether an eQTL has an effect in a particular tissue. PMID:23785294
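The simplest of the meta-analytic combinations that the multi-tissue eQTL framework above builds on is inverse-variance-weighted (fixed-effects) meta-analysis of per-tissue effect estimates. A minimal sketch, with invented effect sizes and standard errors:

```python
import numpy as np

# Hedged sketch: combine per-tissue eQTL association estimates with
# inverse-variance-weighted (fixed-effects) meta-analysis. The betas and
# standard errors below are illustrative, not from any real dataset.

def fixed_effects_meta(betas, ses):
    """Pool effect sizes across tissues; returns pooled beta, se, z-score."""
    betas = np.asarray(betas, float)
    w = 1.0 / np.asarray(ses, float) ** 2      # inverse-variance weights
    beta = np.sum(w * betas) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return beta, se, beta / se

# The same true effect measured in three tissues with different noise levels
beta, se, z = fixed_effects_meta([0.50, 0.45, 0.55], [0.10, 0.20, 0.10])
print(round(beta, 3), round(z, 2))    # pooled estimate and z-score
```

The paper's contribution is handling the case this model ignores: effects present in only a subset of tissues, for which heterogeneity-aware (random-effects-style) statistics are needed.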
NASA Astrophysics Data System (ADS)
Unger, Jakob; Sun, Tianchen; Chen, Yi-Ling; Phipps, Jennifer E.; Bold, Richard J.; Darrow, Morgan A.; Ma, Kwan-Liu; Marcu, Laura
2018-01-01
An important step in establishing the diagnostic potential of emerging optical imaging techniques is accurate registration between imaging data and the corresponding tissue histopathology typically used as the gold standard in clinical diagnostics. We present a method to precisely register data acquired with a point-scanning spectroscopic imaging technique from fresh surgical tissue specimen blocks with corresponding histological sections. Using a visible aiming beam to augment point-scanning multispectral time-resolved fluorescence spectroscopy on video images, we evaluate two different markers for the registration with histology: fiducial markers made with a 405-nm CW laser and the tissue block's outer shape characteristics. We compare the registration performance of benchmark methods using either the fiducial markers or the outer shape characteristics alone with a hybrid method using both feature types. The hybrid method performed best, reaching an average error of 0.78±0.67 mm. This method provides a rigorous framework for validating the diagnostic capabilities of optical fiber-based techniques and furthermore enables the application of supervised machine learning techniques to automate tissue characterization.
Fractal propagation method enables realistic optical microscopy simulations in biological tissues
Glaser, Adam K.; Chen, Ye; Liu, Jonathan T.C.
2017-01-01
Current simulation methods for light transport in biological media have limited efficiency and realism when applied to three-dimensional microscopic light transport in biological tissues with refractive heterogeneities. We describe here a technique which combines a beam propagation method valid for modeling light transport in media with weak variations in refractive index, with a fractal model of refractive index turbulence. In contrast to standard simulation methods, this fractal propagation method (FPM) is able to accurately and efficiently simulate the diffraction effects of focused beams, as well as the microscopic heterogeneities present in tissue that result in scattering, refractive beam steering, and the aberration of beam foci. We validate the technique and the relationship between the FPM model parameters and conventional optical parameters used to describe tissues, and also demonstrate the method’s flexibility and robustness by examining the steering and distortion of Gaussian and Bessel beams in tissue with comparison to experimental data. We show that the FPM has utility for the accurate investigation and optimization of optical microscopy methods such as light-sheet, confocal, and nonlinear microscopy. PMID:28983499
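The beam-propagation core that methods like the FPM above extend can be sketched in one dimension with the split-step Fourier scheme: alternate a diffraction step in the spatial-frequency domain with a refraction step applied as a phase screen. Here a plain random phase screen stands in for the paper's fractal refractive-index model, and all optical parameters are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a 1-D split-step (Fourier) beam propagation method for
# media with weak refractive-index variations. The fractal index model of
# the FPM is replaced here by a simple uncorrelated random screen; units
# and parameters are illustrative, not the paper's.

n_pts, dx = 1024, 0.1e-6               # transverse grid (m)
wavelength, n0 = 0.5e-6, 1.33          # vacuum wavelength, background index
k0 = 2 * np.pi / wavelength
x = (np.arange(n_pts) - n_pts // 2) * dx
kx = 2 * np.pi * np.fft.fftfreq(n_pts, dx)

field = np.exp(-(x / 2e-6) ** 2).astype(complex)   # Gaussian input beam
power_in = np.sum(np.abs(field) ** 2)

dz = 1e-6
rng = np.random.default_rng(1)
for _ in range(50):
    # diffraction step: paraxial propagator applied in the Fourier domain
    field = np.fft.ifft(np.fft.fft(field)
                        * np.exp(-1j * kx ** 2 * dz / (2 * k0 * n0)))
    # refraction step: weak random index fluctuation dn ~ 1e-4 as a phase screen
    dn = 1e-4 * rng.standard_normal(n_pts)
    field *= np.exp(1j * k0 * dn * dz)

power_out = np.sum(np.abs(field) ** 2)
print(round(power_out / power_in, 6))  # phase-only steps conserve power
```

The power-conservation check at the end is a standard sanity test for this scheme, since both sub-steps are unit-modulus multiplications.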
King, Andrew J; Hochheiser, Harry; Visweswaran, Shyam; Clermont, Gilles; Cooper, Gregory F
2017-01-01
Eye-tracking is a valuable research tool that is used in laboratory and limited field environments. We take steps toward developing methods that enable widespread adoption of eye-tracking and its real-time application in clinical decision support. Eye-tracking will enhance awareness and enable intelligent views, more precise alerts, and other forms of decision support in the Electronic Medical Record (EMR). We evaluated a low-cost eye-tracking device and found the device's accuracy to be non-inferior to a more expensive device. We also developed and evaluated an automatic method for mapping eye-tracking data to interface elements in the EMR (e.g., a displayed laboratory test value). Mapping was 88% accurate across the six participants in our experiment. Finally, we piloted the use of the low-cost device and the automatic mapping method to label training data for a Learning EMR (LEMR) which is a system that highlights the EMR elements a physician is predicted to use.
A macro-micro robot for precise force applications
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Wang, Yulun
1993-01-01
This paper describes an 8 degree-of-freedom macro-micro robot capable of performing tasks which require accurate force control. Applications such as polishing, finishing, grinding, deburring, and cleaning are a few examples of tasks which need this capability. Currently these tasks are either performed manually or with dedicated machinery because of the lack of a flexible and cost effective tool, such as a programmable force-controlled robot. The basic design and control of the macro-micro robot is described in this paper. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8 degree of freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load balanced for maximum execution speed on the multiprocessor system.
Multi-modality image registration for effective thermographic fever screening
NASA Astrophysics Data System (ADS)
Dwith, C. Y. N.; Ghassemi, Pejhman; Pfefer, Joshua; Casamento, Jon; Wang, Quanzeng
2017-02-01
Fever screening based on infrared thermographs (IRTs) is a viable mass screening approach during infectious disease pandemics, such as Ebola and Severe Acute Respiratory Syndrome (SARS), for temperature monitoring in public places like hospitals and airports. IRTs have been found to be powerful, quick and non-invasive methods for detecting elevated temperatures. Moreover, regions medially adjacent to the inner canthi (called the canthi regions in this paper) are preferred sites for fever screening. Accurate localization of the canthi regions can be achieved through multi-modality registration of infrared (IR) and white-light images. Here we propose a registration method with a coarse-to-fine strategy, using different registration models based on landmarks and edge detection on eye contours. We evaluated the registration accuracy to be within ±2.7 mm, which enables accurate localization of the canthi regions.
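The landmark stage of a coarse-to-fine registration like the one above is commonly solved in closed form with the SVD-based Procrustes/Kabsch solution for the best rigid transform between matched point pairs. A minimal 2-D sketch with invented point coordinates (the abstract does not specify the registration model, so this is one standard choice, not necessarily the authors'):

```python
import numpy as np

# Hedged sketch: least-squares rigid transform (rotation + translation)
# from matched landmark pairs via the SVD/Kabsch solution. The point
# coordinates below are illustrative.

def rigid_from_landmarks(src, dst):
    """Rigid transform (R, t) minimizing sum ||R @ s + t - d||^2."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic test: recover a known 30-degree rotation plus translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([2.0, -1.0])
src = np.array([[0, 0], [1, 0], [0, 1], [2, 3]], float)
dst = src @ R_true.T + t_true

R, t = rigid_from_landmarks(src, dst)
err = np.max(np.abs(dst - (src @ R.T + t)))
print(err < 1e-9)
```

In practice the fine stage would then refine this coarse alignment using the edge/contour features the abstract mentions.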
Computation of transmitted and received B1 fields in magnetic resonance imaging.
Milles, Julien; Zhu, Yue Min; Chen, Nan-Kuei; Panych, Lawrence P; Gimenez, Gérard; Guttmann, Charles R G
2006-05-01
Computation of B1 fields is a key issue for determination and correction of intensity nonuniformity in magnetic resonance images. This paper presents a new method for computing transmitted and received B1 fields. Our method combines a modified MRI acquisition protocol and an estimation technique based on the Levenberg-Marquardt algorithm and spatial filtering. It enables accurate estimation of transmitted and received B1 fields for both homogeneous and heterogeneous objects. The method is validated using numerical simulations and experimental data from phantom and human scans. The experimental results are in agreement with theoretical expectations.
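The Levenberg-Marquardt estimation step described above can be sketched on a simplified multi-flip-angle signal model, s(alpha) = b1r * sin(b1t * alpha), where b1t and b1r are relative transmit and receive field factors. The model, flip angles, and noise level are assumptions for illustration, not the paper's acquisition protocol; the LM loop itself is a minimal damped Gauss-Newton implementation.

```python
import numpy as np

# Hedged sketch: estimate relative transmit (b1t) and receive (b1r) field
# factors by fitting s(alpha) = b1r * sin(b1t * alpha) to multi-flip-angle
# data with a minimal Levenberg-Marquardt loop. Numbers are illustrative.

alphas = np.deg2rad(np.array([10, 20, 30, 45, 60, 90], float))
b1t_true, b1r_true = 0.85, 1.20
rng = np.random.default_rng(2)
s = b1r_true * np.sin(b1t_true * alphas) + rng.normal(0, 1e-3, alphas.size)

def residuals(p):
    b1t, b1r = p
    return b1r * np.sin(b1t * alphas) - s

def jacobian(p):
    b1t, b1r = p
    return np.column_stack([b1r * alphas * np.cos(b1t * alphas),
                            np.sin(b1t * alphas)])

p, lam = np.array([1.0, 1.0]), 1e-3
for _ in range(50):
    r, J = residuals(p), jacobian(p)
    A = J.T @ J + lam * np.eye(2)      # LM damping of the normal equations
    step = np.linalg.solve(A, -J.T @ r)
    if np.sum(residuals(p + step) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.5   # accept step, relax damping
    else:
        lam *= 10.0                    # reject step, increase damping

print(np.round(p, 3))                  # recovered (b1t, b1r)
```

The paper additionally applies spatial filtering across voxels; the sketch covers only the per-voxel nonlinear fit.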
An Algebraic Approach to Guarantee Harmonic Balance Method Using Gröbner Base
NASA Astrophysics Data System (ADS)
Yagi, Masakazu; Hisakado, Takashi; Okumura, Kohshi
The harmonic balance (HB) method is a well-known principle for analyzing periodic oscillations in nonlinear networks and systems. Because the HB method has a truncation error, approximate solutions have been guaranteed by error bounds. However, the numerical computation of these bounds is very time-consuming compared with solving the HB equation itself. This paper proposes an algebraic representation of the error bound using a Gröbner base. The algebraic representation enables a considerable reduction in the computational cost of the error bound. Moreover, using singular points of the algebraic representation, we can obtain accurate breakpoints of the error bound from collisions.
NASA Astrophysics Data System (ADS)
Yuan, Wu; Kut, Carmen; Liang, Wenxuan; Li, Xingde
2017-03-01
Cancer is known to alter the local optical properties of tissues. The detection of OCT-based optical attenuation provides a quantitative method to efficiently differentiate cancer from non-cancer tissues. In particular, the intraoperative use of quantitative OCT can provide direct visual guidance in real time for accurate identification of cancer tissues, especially those without any obvious structural layers, such as brain cancer. However, current methods are suboptimal in providing high-speed and accurate OCT attenuation mapping for intraoperative brain cancer detection. In this paper, we report a novel frequency-domain (FD) algorithm to enable robust and fast characterization of optical attenuation as derived from OCT intensity images. The performance of this FD algorithm was compared with traditional fitting methods by analyzing datasets containing images from freshly resected human brain cancer and from a silica phantom acquired by a 1310 nm swept-source OCT (SS-OCT) system. With graphics processing unit (GPU)-based CUDA C/C++ implementation, this new attenuation mapping algorithm can offer robust and accurate quantitative interpretation of OCT images in real time during brain surgery.
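The traditional depth-domain fit that frequency-domain algorithms like the one above are compared against can be sketched in a few lines: under a single-scattering assumption, the OCT A-line intensity decays as I(z) ~ I0 * exp(-2*mu*z) (two-way path), so mu follows from a linear fit to the log-intensity. The model and all values here are illustrative, not from the paper.

```python
import numpy as np

# Hedged sketch of the conventional depth-domain attenuation estimate:
# fit log I(z) linearly, where I(z) ~ I0 * exp(-2 * mu * z) under a
# single-scattering model (factor 2 for the round-trip path). Synthetic data.

z = np.linspace(0, 1.0, 200)           # depth (mm)
mu_true = 2.0                          # attenuation coefficient (1/mm)
rng = np.random.default_rng(3)
intensity = np.exp(-2 * mu_true * z) * np.exp(rng.normal(0, 0.02, z.size))

slope, _ = np.polyfit(z, np.log(intensity), 1)
mu_est = -slope / 2.0                  # undo the two-way factor
print(round(mu_est, 2))
```

Per-pixel fits of this kind are what make attenuation mapping slow, which is the bottleneck the paper's FD algorithm and GPU implementation address.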
A Self-Directed Method for Cell-Type Identification and Separation of Gene Expression Microarrays
Zuckerman, Neta S.; Noam, Yair; Goldsmith, Andrea J.; Lee, Peter P.
2013-01-01
Gene expression analysis is generally performed on heterogeneous tissue samples consisting of multiple cell types. Current methods developed to separate heterogeneous gene expression rely on prior knowledge of the cell-type composition and/or signatures, which are not available in most public datasets. We present a novel method to identify the cell-type composition, signatures and proportions per sample without the need for a priori information. The method was successfully tested on controlled and semi-controlled datasets and performed as accurately as current methods that do require additional information. As such, this method enables the analysis of cell-type-specific gene expression using existing large pools of publicly available microarray datasets. PMID:23990767
Nir, Oded; Marvin, Esra; Lahav, Ori
2014-11-01
Measuring and modeling pH in concentrated aqueous solutions in an accurate and consistent manner is of paramount importance to many R&D and industrial applications, including RO desalination. Nevertheless, unified definitions and standard procedures have yet to be developed for solutions with ionic strength higher than ∼0.7 M, and implementation of conventional pH determination approaches may lead to significant errors. In this work a systematic yet simple methodology for measuring pH in concentrated solutions (dominated by Na⁺/Cl⁻) was developed and evaluated, with the aim of achieving consistency with the Pitzer ion-interaction approach. Results indicate that the addition of 0.75 M of NaCl to NIST buffers, followed by assigning a new standard pH (calculated based on the Pitzer approach), enabled reducing measurement errors to below 0.03 pH units in seawater RO brines (ionic strength up to 2 M). To facilitate its use, the method was developed to be both conceptually and practically analogous to the conventional pH measurement procedure. The method was used to measure the pH of seawater RO retentates obtained at varying recovery ratios. The results matched the pH values predicted by an accurate RO transport model more closely. Calibrating the model with the measured pH values enabled better boron transport prediction. A Donnan-induced phenomenon, affecting pH in both retentate and permeate streams, was identified and quantified.
An approach for accurate simulation of liquid mixing in a T-shaped micromixer.
Matsunaga, Takuya; Lee, Ho-Joon; Nishino, Koichi
2013-04-21
In this paper, we propose a new computational method for efficient evaluation of the fluid mixing behaviour in a T-shaped micromixer with a rectangular cross section at high Schmidt number under steady-state conditions. Our approach enables a low-cost, high-quality simulation based on tracking of fluid particles for convective fluid mixing and subsequent solving of a model of the species equation for molecular diffusion. The examined parameter range is Re = 1.33 × 10⁻² to 240 at Sc = 3600. The proposed method is shown to simulate the mixing quality well even in the engulfment regime, where ordinary grid-based simulation is not able to obtain accurate solutions with affordable mesh sizes due to numerical diffusion at high Sc. The obtained results agree well with a backward random-walk Monte Carlo simulation, by which the accuracy of the proposed method is verified. For further investigation of the characteristics of the proposed method, the Sc dependency is examined in a wide range of Sc from 10 to 3600 at Re = 200. The study reveals that the model discrepancy error emerges more significantly in the concentration distribution at lower Sc, while the resulting mixing quality is accurate over the entire range.
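A mixing-quality metric of the kind used to quantify results in micromixer studies like the one above is 1 - sigma/sigma_max, where sigma is the standard deviation of the cross-sectional concentration field and sigma_max that of the fully segregated state. A minimal sketch with invented concentration samples (the abstract does not state which exact metric the authors use):

```python
import numpy as np

# Hedged sketch: mixing quality (mixing index) for a cross-sectional
# concentration field. 0 = fully segregated, 1 = perfectly mixed.
# The sample data are illustrative.

def mixing_quality(c):
    """c: concentration samples in [0, 1] over a channel cross-section."""
    c = np.asarray(c, float)
    c_mean = c.mean()
    sigma_max = np.sqrt(c_mean * (1.0 - c_mean))   # fully segregated limit
    return 1.0 - c.std() / sigma_max

segregated = np.array([0.0] * 50 + [1.0] * 50)     # two unmixed streams
mixed = np.full(100, 0.5)                          # perfectly mixed
print(round(mixing_quality(segregated), 3), round(mixing_quality(mixed), 3))
```

Such a scalar metric is what allows the particle-tracking results to be compared directly against the Monte Carlo reference in the abstract.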
NASA Astrophysics Data System (ADS)
Zhang, Zh.
2018-02-01
An analytical method is presented which enables the non-uniform velocity and pressure distributions at the impeller inlet of a pump to be accurately computed. The analyses are based on potential flow theory and the geometrical similarity of the streamline distribution along the leading edge of the impeller blades. The method is thus called the streamline similarity method (SSM). The obtained geometrical form of the flow distribution is then simply described by the geometrical variable G(s) and the first structural constant G_I. As clearly demonstrated and also validated by experiments, both the flow velocity and the pressure distributions at the impeller inlet are usually highly non-uniform. This knowledge is indispensable for impeller blade designs that fulfill the shockless inlet flow condition. By introducing the second structural constant G_II, the paper also presents a simple and accurate computation of the shock loss which occurs at the impeller inlet. The introduction of the two structural constants contributes greatly to the computational accuracy. As further indicated, all computations presented in this paper can also be applied to the non-uniform exit flow out of a Francis turbine impeller for accurately computing the related mean values.
Improvements to robotics-inspired conformational sampling in rosetta.
Stein, Amelie; Kortemme, Tanja
2013-01-01
To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion and omega-angle sampling and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.
Hybrid Particle-Continuum Numerical Methods for Aerospace Applications
2011-01-01
may require kinetic analysis. Another possible option that will enable high-mass Mars missions is supersonic retro-propulsion [17], where a jet is...exploration missions [15]. 2.3 Plumes Another class of multi-scale flows of interest is rocket exhaust plumes. Efficient and accurate predictions of...atmospheric exhaust plumes at high altitudes are necessary to ensure that the chemical rocket maintains efficiency while also assuring that the vehicle heating
Sackmann, Eric K; Majlof, Lars; Hahn-Windgassen, Annett; Eaton, Brent; Bandzava, Temo; Daulton, Jay; Vandenbroucke, Arne; Mock, Matthew; Stearns, Richard G; Hinkson, Stephen; Datwani, Sammy S
2016-02-01
Acoustic liquid handling uses high-frequency acoustic signals that are focused on the surface of a fluid to eject droplets with high accuracy and precision for various life science applications. Here we present a multiwell source plate, the Echo Qualified Reservoir (ER), which can acoustically transfer over 2.5 mL of fluid per well in 25-nL increments using an Echo 525 liquid handler. We demonstrate two Labcyte technologies, Dynamic Fluid Analysis (DFA) methods and a high-voltage (HV) grid, that are required to maintain accurate and precise fluid transfers from the ER at this volume scale. DFA methods were employed to dynamically assess the energy requirements of the fluid and adjust the acoustic ejection parameters to maintain a constant velocity droplet. Furthermore, we demonstrate that the HV grid enhances droplet velocity and coalescence at the destination plate. These technologies enabled 5-µL per destination well transfers to a 384-well plate, with accuracy and precision values better than 4%. Last, we used the ER and Echo 525 liquid handler to perform a quantitative polymerase chain reaction (qPCR) assay to demonstrate an application that benefits from the flexibility and larger volume capabilities of the ER.
SimPhospho: a software tool enabling confident phosphosite assignment.
Suni, Veronika; Suomi, Tomi; Tsubosaka, Tomoya; Imanishi, Susumu Y; Elo, Laura L; Corthals, Garry L
2018-03-27
Mass spectrometry combined with enrichment strategies for phosphorylated peptides has been successfully employed for two decades to identify sites of phosphorylation. However, unambiguous phosphosite assignment is considered challenging. Given that site-specific phosphorylation events function as different molecular switches, validation of phosphorylation sites is of utmost importance. In our earlier study we developed a method based on simulated phosphopeptide spectral libraries, which enables highly sensitive and accurate phosphosite assignments. To promote more widespread use of this method, we here introduce a software implementation with improved usability and performance. We present SimPhospho, a fast and user-friendly tool for accurate simulation of phosphopeptide tandem mass spectra. Simulated phosphopeptide spectral libraries are used to validate and supplement database search results, with the goal of improving reliable phosphoproteome identification and reporting. The presented program can be easily used together with the Trans-Proteomic Pipeline and integrated in a phosphoproteomics data analysis workflow. SimPhospho is open source, implemented in C++, and available for Windows, Linux and Mac operating systems at https://sourceforge.net/projects/simphospho/. A user's manual with a detailed description of data analysis using SimPhospho, as well as test data, can be found as supplementary material of this article. Supplementary data are available at https://www.btk.fi/research/computational-biomedicine/software/.
Kim, Byoungjip; Kang, Seungwoo; Ha, Jin-Young; Song, Junehwa
2015-07-16
In this paper, we introduce a novel smartphone framework called VisitSense that automatically detects and predicts a smartphone user's place visits from ambient radio to enable behavioral targeting for mobile ads in large shopping malls. VisitSense enables mobile app developers to adopt visit-pattern-aware mobile advertising for shopping mall visitors in their apps. It also benefits mobile users by allowing them to receive highly relevant mobile ads that are aware of their place visit patterns in shopping malls. To achieve this goal, VisitSense employs accurate visit detection and prediction methods. For accurate visit detection, we develop a change-based detection method that takes into consideration the stability change of ambient radio and the mobility change of users. It performs well in large shopping malls, where ambient radio is quite noisy and causes existing algorithms to fail easily. In addition, we propose a causality-based visit prediction model to capture the causality in sequential visit patterns for effective prediction. We have developed a VisitSense prototype system and a visit-pattern-aware mobile advertising application based on it. Furthermore, we deployed the system in the COEX Mall, one of the largest shopping malls in Korea, and conducted diverse experiments to show the effectiveness of VisitSense.
Red blood cell transport mechanisms in polyester thread-based blood typing devices.
Nilghaz, Azadeh; Ballerini, David R; Guan, Liyun; Li, Lizi; Shen, Wei
2016-02-01
A recently developed blood typing diagnostic based on a polyester thread substrate has shown great promise for use in medical emergencies and in impoverished regions. The device is easy to use and transport, while also being inexpensive, accurate, and rapid. This study used a fluorescent confocal microscope to delve deeper into how red blood cells were behaving within the polyester thread-based diagnostic at the cellular level, and how plasma separation could be made to visibly occur on the thread, making it possible to identify blood type in a single step. Red blood cells were stained and the plasma phase dyed with fluorescent compounds to enable them to be visualised under the confocal microscope at high magnification. The mechanisms uncovered were in surprising contrast with those found for a similar, paper-based method. Red blood cell aggregates did not flow over each other within the thread substrate as expected, but suffered from a restriction to their flow which resulted in the chromatographic separation of the RBCs from the liquid phase of the blood. It is hoped that these results will lead to the optimisation of the method to enable more accurate and sensitive detection, increasing the range of blood systems that can be detected.
Use of an auxiliary basis set to describe the polarization in the fragment molecular orbital method
NASA Astrophysics Data System (ADS)
Fedorov, Dmitri G.; Kitaura, Kazuo
2014-03-01
We developed a dual basis approach within the fragment molecular orbital formalism enabling efficient and accurate use of large basis sets. The method was tested on water clusters and polypeptides and applied to perform geometry optimization of chignolin (PDB: 1UAO) in solution at the level of DFT/6-31++G**, obtaining a structure in agreement with experiment (RMSD of 0.4526 Å). The polarization in polypeptides is discussed with a comparison of the α-helix and β-strand.
Solution of quadratic matrix equations for free vibration analysis of structures.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
An efficient digital computer procedure and the related numerical algorithm are presented herein for the solution of quadratic matrix equations associated with free vibration analysis of structures. Such a procedure enables accurate and economical analysis of the natural frequencies and associated modes of discretized structures. The numerically stable algorithm is based on the Sturm sequence method, which fully exploits the banded form of the associated stiffness and mass matrices. The related computer program, written in FORTRAN V for the JPL UNIVAC 1108 computer, proves to be substantially more accurate and economical than other existing procedures for such analysis. Numerical examples are presented for two structures: a cantilever beam and a semicircular arch.
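The Sturm sequence property the procedure above exploits is that, for a symmetric tridiagonal matrix, the sign changes in the sequence of leading principal minors of A - x*I count the eigenvalues below x, which enables bisection to any eigenvalue. A minimal sketch for the tridiagonal case (the paper handles general banded quadratic problems; the matrix here is illustrative):

```python
# Hedged sketch: Sturm-count bisection for eigenvalues of a symmetric
# tridiagonal matrix with diagonal `diag` and off-diagonal `off`.
# Pure Python; the example matrix is illustrative.

def count_eigs_below(diag, off, x):
    """Number of eigenvalues of the tridiagonal matrix less than x."""
    # Leading principal minors of (A - x*I): p0 = 1, p1 = a1 - x,
    # p_i = (a_i - x) * p_{i-1} - b_{i-1}^2 * p_{i-2}.
    count, d_prev, d = 0, 1.0, diag[0] - x
    if d < 0:                           # sign change between p0 and p1
        count += 1
    for i in range(1, len(diag)):
        d_prev, d = d, (diag[i] - x) * d - off[i - 1] ** 2 * d_prev
        if d == 0.0:
            d = 1e-300                  # nudge off an exact zero
        if (d < 0) != (d_prev < 0):     # sign change => one more eig below x
            count += 1
    return count

def kth_smallest(diag, off, k, lo=-1e6, hi=1e6, tol=1e-10):
    """Bisection for the k-th smallest eigenvalue (k = 1, 2, ...)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_eigs_below(diag, off, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# [[2,-1,0],[-1,2,-1],[0,-1,2]] has eigenvalues 2 - sqrt(2), 2, 2 + sqrt(2)
diag, off = [2.0, 2.0, 2.0], [-1.0, -1.0]
print(round(kth_smallest(diag, off, 1), 6))
```

The same count also makes the method economical for banded stiffness/mass pencils: each bisection step costs only one factorization-free pass over the band.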
Rapid fusion method for the determination of refractory thorium and uranium isotopes in soil samples
Maxwell, Sherrod L.; Hutchison, Jay B.; McAlister, Daniel R.
2015-02-14
Recently, approximately 80% of participating laboratories failed to accurately determine uranium isotopes in soil samples in the U.S. Department of Energy Mixed Analyte Performance Evaluation Program (MAPEP) Session 30, due to incomplete dissolution of refractory particles in the samples. Failing laboratories employed acid dissolution methods, including hydrofluoric acid, to recover uranium from the soil matrix. The failures illustrate the importance of rugged soil dissolution methods for the accurate measurement of analytes in the sample matrix. A new rapid fusion method has been developed by the Savannah River National Laboratory (SRNL) to prepare 1-2 g soil sample aliquots very quickly, with total dissolution of refractory particles. Soil samples are fused with sodium hydroxide at 600 °C in zirconium crucibles to enable complete dissolution of the sample. Uranium and thorium are separated on stacked TEVA and TRU extraction chromatographic resin cartridges, prior to isotopic measurements by alpha spectrometry on cerium fluoride microprecipitation sources. Plutonium can also be separated and measured using this method. Batches of 12 samples can be prepared for measurement in <5 hours.
A Versatile Cell Death Screening Assay Using Dye-Stained Cells and Multivariate Image Analysis.
Collins, Tony J; Ylanko, Jarkko; Geng, Fei; Andrews, David W
2015-11-01
A novel dye-based method for measuring cell death in image-based screens is presented. Unlike conventional high- and medium-throughput cell death assays, which measure only one form of cell death accurately, multivariate analysis of micrographs of cells stained with an inexpensive mix of the red dye nonyl acridine orange and a nuclear stain made it possible to quantify cell death induced by a variety of different agonists even without a positive control. Surprisingly, using a single known cytotoxic agent as a positive control for training a multivariate classifier allowed accurate quantification of cytotoxicity for mechanistically unrelated compounds, enabling generation of dose-response curves. Comparison with low-throughput biochemical methods suggested that cell death was accurately distinguished from cell stress induced by low concentrations of the bioactive compounds Tunicamycin and Brefeldin A. High-throughput image-based format analyses of more than 300 kinase inhibitors correctly identified 11 as cytotoxic with only 1 false positive. The simplicity and robustness of this dye-based assay make it particularly suited to live-cell screening for toxic compounds.
Improving Fidelity of Launch Vehicle Liftoff Acoustic Simulations
NASA Technical Reports Server (NTRS)
Liever, Peter; West, Jeff
2016-01-01
Launch vehicles experience high acoustic loads during ignition and liftoff, driven by the interaction of rocket plume generated acoustic waves with launch pad structures. Highly parallelized Computational Fluid Dynamics (CFD) analysis tools optimized for the NAS computer systems, such as the Loci/CHEM program, now enable simulation of time-accurate, turbulent, multi-species plume formation and interaction with launch pad geometry, capturing the generation of acoustic noise at the source regions in the plume shear layers and impingement regions. These CFD solvers are robust in capturing the acoustic fluctuations, but they are too dissipative to accurately resolve the propagation of the acoustic waves throughout the launch environment domain along the vehicle. A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed to improve such liftoff acoustic environment predictions. The framework combines the existing highly scalable NASA production CFD code, Loci/CHEM, with a high-order accurate discontinuous Galerkin (DG) solver, Loci/THRUST, developed in the same computational framework. Loci/THRUST employs a low-dissipation, high-order, unstructured DG method to accurately propagate acoustic waves away from the source regions across large distances. The DG solver is currently capable of computing up to 4th-order solutions for non-linear, conservative acoustic field propagation. Higher-order boundary conditions are implemented to accurately model the reflection and refraction of acoustic waves on launch pad components. The DG solver accepts generalized unstructured meshes, enabling efficient application of common mesh generation tools for CHEM and THRUST simulations. The DG solution is coupled with the CFD solution at interface boundaries placed near the CFD acoustic source regions. Both simulations are executed simultaneously with coordinated boundary condition data exchange.
Exclusion-Based Capture and Enumeration of CD4+ T Cells from Whole Blood for Low-Resource Settings.
Howard, Alexander L; Pezzi, Hannah M; Beebe, David J; Berry, Scott M
2014-06-01
In developing countries, demand exists for a cost-effective method to evaluate human immunodeficiency virus patients' CD4(+) T-helper cell count. The TH (CD4) cell count is the current marker used to identify when an HIV patient has progressed to acquired immunodeficiency syndrome, which results when the immune system can no longer prevent certain opportunistic infections. A system to perform TH count that obviates the use of costly flow cytometry will enable physicians to more closely follow patients' disease progression and response to therapy in areas where such advanced equipment is unavailable. Our system of two serially-operated immiscible phase exclusion-based cell isolations coupled with a rapid fluorescent readout enables exclusion-based isolation and accurate counting of T-helper cells at lower cost and from a smaller volume of blood than previous methods. TH cell isolation via immiscible filtration assisted by surface tension (IFAST) compares well against the established Dynal T4 Quant Kit and is sensitive at CD4 counts representative of immunocompromised patients (less than 200 TH cells per microliter of blood). Our technique retains use of open, simple-to-operate devices that enable IFAST as a high-throughput, automatable sample preparation method, improving throughput over previous low-resource methods. © 2013 Society for Laboratory Automation and Screening.
Lei, Huan; Yang, Xiu; Zheng, Bin; ...
2015-11-05
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan
2011-04-01
In this study, we propose and evaluate a method for spectral characterization of acousto-optic tunable filter (AOTF) hyperspectral imaging systems in the near-infrared (NIR) spectral region from 900 nm to 1700 nm. The proposed spectral characterization method is based on the SRM-2035 standard reference material, exhibiting distinct spectral features, which enables robust non-rigid matching of the acquired and reference spectra. The matching is performed by simultaneously optimizing the parameters of the AOTF tuning curve, spectral resolution, baseline, and multiplicative effects. In this way, the tuning curve (frequency-wavelength characteristics) and the corresponding spectral resolution of the AOTF hyperspectral imaging system can be characterized simultaneously. Also, the method enables simple spectral characterization of the entire imaging plane of hyperspectral imaging systems. The results indicate that the method is accurate and efficient and can easily be integrated with systems operating in diffuse reflection or transmission modes. Therefore, the proposed method is suitable for characterization, calibration, or validation of AOTF hyperspectral imaging systems. © 2011 Society for Applied Spectroscopy
High-Throughput Histopathological Image Analysis via Robust Cell Segmentation and Hashing
Zhang, Xiaofan; Xing, Fuyong; Su, Hai; Yang, Lin; Zhang, Shaoting
2015-01-01
Computer-aided diagnosis of histopathological images usually requires examining all cells for accurate diagnosis. Traditional computational methods may have efficiency issues when performing cell-level analysis. In this paper, we propose a robust and scalable solution to enable such analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method achieved promising accuracy and running time by searching among half a million cells. PMID:26599156
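The hashing-based retrieval step above can be illustrated with a generic random-hyperplane locality-sensitive hashing (LSH) scheme. The paper does not specify its hashing method or features, so the 4-D "cell feature" vectors and names below are toy assumptions, not the authors' pipeline:

```python
import random

def make_hyperplanes(dim, n_bits, seed=0):
    """Random Gaussian hyperplanes; each contributes one bit of the hash."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_code(vec, planes):
    # One bit per hyperplane: which side of the plane the vector lies on.
    bits = 0
    for i, plane in enumerate(planes):
        if sum(a * b for a, b in zip(plane, vec)) >= 0:
            bits |= 1 << i
    return bits

def hamming(a, b):
    # Similar vectors tend to get codes differing in few bits.
    return bin(a ^ b).count("1")

planes = make_hyperplanes(4, 32)

# Toy "cell feature" database: two loose clusters in 4-D.
db = {
    "adeno_1": [1.0, 0.9, 0.1, 0.0],
    "adeno_2": [0.9, 1.1, 0.0, 0.1],
    "squam_1": [0.0, 0.1, 1.0, 0.9],
}
codes = {name: lsh_code(v, planes) for name, v in db.items()}

query = [0.95, 1.0, 0.05, 0.05]   # resembles the "adeno" cluster
qcode = lsh_code(query, planes)
best = min(codes, key=lambda name: hamming(codes[name], qcode))
```

Because comparing compact binary codes is far cheaper than comparing raw feature vectors, this style of hashing is what makes searching among half a million cells feasible in near real time.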
Distance measurement using frequency scanning interferometry with mode-hopped laser
NASA Astrophysics Data System (ADS)
Medhat, M.; Sobee, M.; Hussein, H. M.; Terra, O.
2016-06-01
In this paper, frequency scanning interferometry is implemented to measure distances up to 5 m absolutely. The setup consists of a Michelson interferometer, an external cavity tunable diode laser, and an ultra-low expansion (ULE) Fabry-Pérot (FP) cavity to measure the frequency scanning range. The distance is measured by simultaneously acquiring the interference fringes from the Michelson and FP interferometers while scanning the laser frequency. An online fringe processing technique is developed to calculate the distance from the fringe ratio while removing the segments resulting from the laser mode-hops, without significantly affecting the measurement accuracy. This fringe processing method enables accurate distance measurements up to 5 m with a measurement repeatability of ±3.9×10-6 L. An accurate translation stage is used to determine the FP cavity free spectral range and therefore allow accurate measurement. Finally, the setup is applied to the short-distance calibration of a laser distance meter (LDM).
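The fringe-ratio principle can be sketched directly: a Michelson interferometer produces N = 2LΔν/c fringes over a frequency scan Δν, while the FP cavity produces M = Δν/FSR fringes, so the (unknown) scan range cancels in the ratio. The code below is an idealized version (vacuum propagation, no dispersion or mode-hop handling), with illustrative numbers rather than the paper's:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_fringe_ratio(n_michelson, n_fp, fsr_hz):
    """Absolute distance from the ratio of fringe counts.
    Michelson: N = 2*L*dv/C fringes over a frequency scan dv.
    Fabry-Perot: M = dv/FSR fringes, so dv cancels:
        L = (N / M) * C / (2 * FSR)
    """
    return (n_michelson / n_fp) * C / (2.0 * fsr_hz)

# Round-trip check with an assumed 1.5 GHz FSR cavity and a 100 GHz scan
# over a true distance of 5 m.
fsr = 1.5e9
dv = 100e9
L_true = 5.0
n_mich = 2.0 * L_true * dv / C    # fringes the Michelson would produce
n_fp = dv / fsr                   # fringes the FP cavity would produce
L_est = distance_from_fringe_ratio(n_mich, n_fp, fsr)
```

This also shows why an accurate FSR value matters: any relative error in the FSR maps one-to-one into relative error in the measured distance, which is why the paper calibrates the cavity with a translation stage.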
NASA Astrophysics Data System (ADS)
Takagi, Kenta; Omote, Masanori; Kawasaki, Akira
2010-03-01
The orderly build-up of monosized microspheres with sizes of hundreds of micrometres enabled us to develop three-dimensional (3D) photonic crystal devices for terahertz electromagnetic waves. We designed and manufactured an original 3D particle assembly system capable of fabricating arbitrary periodic structures from these spherical particles. This method employs a pick-and-place assembling approach with robotic manipulation and interparticle laser microwelding in order to incorporate a contrivance for highly accurate arraying: an operation that compensates the size deviation of raw monosized particles. Pre-examination of particles of various materials revealed that interparticle laser welding must be achieved with local melting by suppressing heat diffusion from the welding area. By optimizing the assembly conditions, we succeeded in fabricating an accurate periodic structure with a diamond lattice from 400 µm polyethylene composite particles. This structure demonstrated a photonic bandgap in the terahertz frequency range.
Kang, Dongwan D.; Froula, Jeff; Egan, Rob; ...
2015-01-01
Grouping large genomic fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Because of the complex nature of these communities, existing metagenome binning methods often miss a large number of microbial species. In addition, most of the tools are not scalable to large datasets. Here we introduce automated software called MetaBAT that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. MetaBAT outperforms alternative methods in accuracy and computational efficiency on both synthetic and real metagenome datasets. Lastly, it automatically forms hundreds of high-quality genome bins on a very large assembly consisting of millions of contigs in a matter of hours on a single node. MetaBAT is open source software and available at https://bitbucket.org/berkeleylab/metabat.
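One of the two signals MetaBAT combines, tetranucleotide frequency, can be sketched directly. The distance below is plain Euclidean rather than MetaBAT's empirical probabilistic distance, and the sequences are toy data, so this illustrates the feature rather than the tool:

```python
from itertools import product

def tetranucleotide_freq(seq):
    """Normalized 4-mer frequency vector, one signal used (alongside
    coverage/abundance) to group contigs into genome bins."""
    seq = seq.upper()
    counts = {"".join(k): 0 for k in product("ACGT", repeat=4)}
    total = 0
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in counts:          # skip windows containing N, etc.
            counts[kmer] += 1
            total += 1
    return {k: v / total for k, v in counts.items()} if total else counts

def euclidean(p, q):
    return sum((p[k] - q[k]) ** 2 for k in p) ** 0.5

# Contigs from the same genome tend to have similar 4-mer profiles.
a = tetranucleotide_freq("ATCG" * 300)                # repetitive toy contig
b = tetranucleotide_freq("ATCG" * 290 + "GGCC" * 10)  # mostly the same genome
c = tetranucleotide_freq("GGCCTTAA" * 150)            # different composition
```

In a real binner this 256-dimensional profile is computed per contig and combined with per-sample coverage before clustering.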
Yu, Feiqiao Brian; Blainey, Paul C; Schulz, Frederik; Woyke, Tanja; Horowitz, Mark A; Quake, Stephen R
2017-01-01
Metagenomics and single-cell genomics have enabled genome discovery from unknown branches of life. However, extracting novel genomes from complex mixtures of metagenomic data can still be challenging and represents an ill-posed problem which is generally approached with ad hoc methods. Here we present a microfluidic-based mini-metagenomic method which offers a statistically rigorous approach to extract novel microbial genomes while preserving single-cell resolution. We used this approach to analyze two hot spring samples from Yellowstone National Park and extracted 29 new genomes, including three deeply branching lineages. The single-cell resolution enabled accurate quantification of genome function and abundance, down to 1% in relative abundance. Our analyses of genome level SNP distributions also revealed low to moderate environmental selection. The scale, resolution, and statistical power of microfluidic-based mini-metagenomics make it a powerful tool to dissect the genomic structure of microbial communities while effectively preserving the fundamental unit of biology, the single cell. DOI: http://dx.doi.org/10.7554/eLife.26580.001 PMID:28678007
Remote monitoring of LED lighting system performance
NASA Astrophysics Data System (ADS)
Thotagamuwa, Dinusha R.; Perera, Indika U.; Narendran, Nadarajah
2016-09-01
The concept of connected lighting systems using LED lighting for the creation of intelligent buildings is becoming attractive to building owners and managers. In this application, the two most important parameters include power demand and the remaining useful life of the LED fixtures. The first enables energy-efficient buildings and the second helps building managers schedule maintenance services. The failure of an LED lighting system can be parametric (such as lumen depreciation) or catastrophic (such as complete cessation of light). Catastrophic failures in LED lighting systems can create serious consequences in safety critical and emergency applications. Therefore, both failure mechanisms must be considered and the shorter of the two must be used as the failure time. Furthermore, because of significant variation between the useful lives of similar products, it is difficult to accurately predict the life of LED systems. Real-time data gathering and analysis of key operating parameters of LED systems can enable the accurate estimation of the useful life of a lighting system. This paper demonstrates the use of a data-driven method (Euclidean distance) to monitor the performance of an LED lighting system and predict its time to failure.
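The data-driven Euclidean-distance idea amounts to comparing each new snapshot of operating parameters against a healthy baseline, and watching the distance trend grow as the fixture degrades. The sketch below is a minimal illustration with made-up feature names (lumen output, forward voltage, board temperature) and values, not the paper's actual monitoring pipeline:

```python
import math

def baseline_stats(history):
    """Per-feature mean of healthy-operation snapshots."""
    n = len(history)
    dim = len(history[0])
    return [sum(row[i] for row in history) / n for i in range(dim)]

def health_distance(sample, baseline):
    """Euclidean distance of a new measurement from the healthy baseline;
    a growing distance over time signals approaching failure."""
    return math.sqrt(sum((s - b) ** 2 for s, b in zip(sample, baseline)))

# Healthy snapshots: [lumen output (lm), forward voltage (V), board temp (C)]
healthy = [
    [1000.0, 2.95, 45.0],
    [ 998.0, 2.96, 45.5],
    [1002.0, 2.94, 44.8],
]
base = baseline_stats(healthy)

d_ok  = health_distance([ 999.0, 2.95, 45.1], base)   # near baseline
d_bad = health_distance([ 850.0, 3.20, 62.0], base)   # depreciated and hot
```

In practice the features would be normalized so that no single unit (e.g. lumens) dominates the distance; that scaling step is omitted here for brevity.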
Hi-Plex for Simple, Accurate, and Cost-Effective Amplicon-based Targeted DNA Sequencing.
Pope, Bernard J; Hammet, Fleur; Nguyen-Dumont, Tu; Park, Daniel J
2018-01-01
Hi-Plex is a suite of methods to enable simple, accurate, and cost-effective highly multiplex PCR-based targeted sequencing (Nguyen-Dumont et al., Biotechniques 58:33-36, 2015). At its core is the principle of using gene-specific primers (GSPs) to "seed" (or target) the reaction and universal primers to "drive" the majority of the reaction. In this manner, effects on amplification efficiencies across the target amplicons can, to a large extent, be restricted to early seeding cycles. Product sizes are defined within a relatively narrow range to enable high-specificity size selection, replication uniformity across target sites (including in the context of fragmented input DNA such as that derived from fixed tumor specimens (Nguyen-Dumont et al., Biotechniques 55:69-74, 2013; Nguyen-Dumont et al., Anal Biochem 470:48-51, 2015), and application of high-specificity genetic variant calling algorithms (Pope et al., Source Code Biol Med 9:3, 2014; Park et al., BMC Bioinformatics 17:165, 2016). Hi-Plex offers a streamlined workflow that is suitable for testing large numbers of specimens without the need for automation.
Koohbor, Behrad; Kidane, Addis; Lu, Wei-Yang
2016-06-27
As an optimum energy-absorbing material system, polymeric foams are needed to dissipate the kinetic energy of an impact while maintaining the impact force transferred to the protected object at a low level. As a result, it is crucial to accurately characterize the load bearing and energy dissipation performance of foams under high strain rate loading conditions. Accurate measurement of the deformation response of foams is challenging because of their low mechanical impedance. In the present work, a non-parametric method is successfully implemented to enable the accurate assessment of the compressive constitutive response of rigid polymeric foams subjected to impact loading conditions. The method is based on stereovision high-speed photography in conjunction with 3D digital image correlation, and allows for accurate evaluation of the inertia stresses developed within the specimen during deformation. Full-field distributions of stress, strain and strain rate are used to extract the local constitutive response of the material at any given location along the specimen axis. In addition, the effective energy absorbed by the material is calculated. Finally, results obtained from the proposed non-parametric analysis are compared with data obtained from conventional test procedures.
Lung tumor diagnosis and subtype discovery by gene expression profiling.
Wang, Lu-yong; Tu, Zhuowen
2006-01-01
The optimal treatment of patients with complex diseases, such as cancers, depends on accurate diagnosis using a combination of clinical and histopathological data. In many scenarios, this becomes tremendously difficult because of limitations in clinical presentation and histopathology. To accurately diagnose complex diseases, molecular classification based on gene or protein expression profiles is indispensable for modern medicine. Moreover, many heterogeneous diseases comprise various potential subtypes at the molecular level and differ remarkably in their response to therapies. It is therefore critical to accurately predict disease subgroups from gene expression profiles. More fundamental knowledge of the molecular basis and classification of disease could aid in the prediction of patient outcome, the informed selection of therapies, and the identification of novel molecular targets for therapy. In this paper, we propose a new disease diagnostic method, the probabilistic boosting tree (PB tree) method, applied to gene expression profiles of lung tumors. It enables accurate disease classification and subtype discovery. It automatically constructs a tree in which each node combines a number of weak classifiers into a strong classifier, and subtype discovery is naturally embedded in the learning process. Our algorithm achieves excellent diagnostic performance and is capable of detecting disease subtypes based on gene expression profiles.
Barnes, Brian B.; Wilson, Michael B.; Carr, Peter W.; Vitha, Mark F.; Broeckling, Corey D.; Heuberger, Adam L.; Prenni, Jessica; Janis, Gregory C.; Corcoran, Henry; Snow, Nicholas H.; Chopra, Shilpi; Dhandapani, Ramkumar; Tawfall, Amanda; Sumner, Lloyd W.; Boswell, Paul G.
2014-01-01
Gas chromatography-mass spectrometry (GC-MS) is a primary tool used to identify compounds in complex samples. Both mass spectra and GC retention times are matched to those of standards, but it is often impractical to have standards on hand for every compound of interest, so we must rely on shared databases of MS data and GC retention information. Unfortunately, retention databases (e.g. linear retention index libraries) are experimentally restrictive, notoriously unreliable, and strongly instrument dependent, relegating GC retention information to a minor, often negligible role in compound identification despite its potential power. A new methodology called “retention projection” has great potential to overcome the limitations of shared chromatographic databases. In this work, we tested the reliability of the methodology in five independent laboratories. We found that even when each lab ran nominally the same method, the methodology was 3-fold more accurate than retention indexing because it properly accounted for unintentional differences between the GC-MS systems. When the labs used different methods of their own choosing, retention projections were 4- to 165-fold more accurate. More importantly, the distribution of error in the retention projections was predictable across different methods and labs, thus enabling automatic calculation of retention time tolerance windows. Tolerance windows at 99% confidence were generally narrower than those widely used even when physical standards are on hand to measure their retention. With its high accuracy and reliability, the new retention projection methodology makes GC retention a reliable, precise tool for compound identification, even when standards are not available to the user. PMID:24205931
A Critical Assessment of the Aluminum Cartridge Case Failure Mechanism
1976-03-01
achievements of an exploratory development program at Frankford Arsenal. The program was initiated to determine the engineering parameters required for...third that of brass cases, are ideal for improving the combat load effectiveness of an infantryman, a combat vehicle or a gunship. To enable...damage sustained by aluminum cartridge cases during "burn-through". A more accurate method of determining the effect of "burn-through" in a
Winzer, Eva; Luger, Maria; Schindler, Karin
2018-06-01
Regular monitoring of food intake is hardly integrated in clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate, quick, and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method (a picture taken after the meal) and the pre-postMeal method (pictures taken before and after the meal), and the visual estimation method (plate diagram; PD) were compared against the reference method of weighed food records (WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content were examined. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) than for the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and also smaller overestimation and underestimation. This method accurately and precisely estimated portion sizes in all food items. Furthermore, the total food waste was 22% for lunch over the study period. The highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable for monitoring food intake in a clinical setting, enabling quantitative and qualitative dietary assessment. Thus, nutritional care might be initiated earlier. This method might also be advantageous for quantitative and qualitative evaluation of food waste, with a resulting reduction in costs.
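Once before- and after-meal portion estimates are available, turning them into intake and waste figures is simple arithmetic. A minimal sketch of that step, with illustrative component names and nutrient densities (not values from the study):

```python
def consumed_fraction(before_g, after_g):
    """Fraction of a served item eaten, from portion estimates derived
    from the before- and after-meal photographs."""
    if before_g <= 0:
        raise ValueError("served portion must be positive")
    return max(0.0, min(1.0, (before_g - after_g) / before_g))

def intake(served, leftovers, energy_per_g):
    """Energy intake (kcal) and food waste (g) summed over dish components.
    All three dicts are keyed by component name."""
    total_kcal = 0.0
    waste_g = 0.0
    for item, grams in served.items():
        eaten = consumed_fraction(grams, leftovers.get(item, 0.0))
        total_kcal += eaten * grams * energy_per_g[item]
        waste_g += (1 - eaten) * grams
    return total_kcal, waste_g

# Illustrative lunch: all values are made up, not from the study.
served       = {"soup": 250.0, "main": 350.0, "dessert": 120.0}
leftovers    = {"soup":  50.0, "main": 175.0, "dessert":   0.0}
energy_per_g = {"soup": 0.4,   "main": 1.5,   "dessert": 2.8}
kcal, waste_g = intake(served, leftovers, energy_per_g)
```

The same per-component bookkeeping supports the study's waste analysis: summing the uneaten grams by category reveals which items (e.g. salads vs. desserts) drive the waste.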
DOE Office of Scientific and Technical Information (OSTI.GOV)
Partridge Jr, William P.; Choi, Jae-Soon
By directly resolving spatial and temporal species distributions within operating honeycomb monolith catalysts, spatially resolved capillary inlet mass spectrometry (SpaciMS) provides a uniquely enabling perspective for advancing automotive catalysis. Specifically, the ability to follow the spatiotemporal evolution of reactions throughout the catalyst is a significant advantage over inlet-and-effluent-limited analysis. Intracatalyst resolution elucidates numerous catalyst details including the network and sequence of reactions, clarifying reaction pathways; the relative rates of different reactions and impacts of operating conditions and catalyst state; and reaction dynamics and intermediate species that exist only within the catalyst. These details provide a better understanding of how the catalyst functions and have basic and practical benefits; e.g., catalyst system design; strategies for on-road catalyst state assessment, control, and on-board diagnostics; and creating robust and accurate predictive catalyst models. Moreover, such spatiotemporally distributed data provide for critical model assessment, and identification of improvement opportunities that might not be apparent from effluent assessment; i.e., while an incorrectly formulated model may provide correct effluent predictions, one that can accurately predict the spatiotemporal evolution of reactions along the catalyst channels will be more robust, accurate, and reliable. In such ways, intracatalyst diagnostics comprehensively enable improved design and development tools, and faster and lower-cost development of more efficient and durable automotive catalyst systems. Beyond these direct contributions, SpaciMS has spawned and been applied to enable other analytical techniques for resolving transient distributed intracatalyst performance.
This chapter focuses on SpaciMS applications and associated catalyst insights and improvements, with specific sections related to lean NOx traps, selective catalytic reduction catalysts, oxidation catalysts, and particulate filters. The objective is to promote broader use and development of intracatalyst analytical methods, and thereby expand the insights resulting from this detailed perspective for advancing automotive catalyst technologies.
Ilin, Yelena; Choi, Ji Sun; Harley, Brendan A C; Kraft, Mary L
2015-11-17
A major challenge for expanding specific types of hematopoietic cells ex vivo for the treatment of blood cell pathologies is identifying the combinations of cellular and matrix cues that direct hematopoietic stem cells (HSC) to self-renew or differentiate into cell populations ex vivo. Microscale screening platforms enable minimizing the number of rare HSCs required to screen the effects of numerous cues on HSC fate decisions. These platforms create a strong demand for label-free methods that accurately identify the fate decisions of individual hematopoietic cells at specific locations on the platform. We demonstrate the capacity to identify discrete cells along the HSC differentiation hierarchy via multivariate analysis of Raman spectra. Notably, cell state identification is accurate for individual cells and independent of the biophysical properties of the functionalized polyacrylamide gels upon which these cells are cultured. We report partial least-squares discriminant analysis (PLS-DA) models of single cell Raman spectra enable identifying four dissimilar hematopoietic cell populations across the HSC lineage specification. Successful discrimination was obtained for a population enriched for long-term repopulating HSCs (LT-HSCs) versus their more differentiated progeny, including closely related short-term repopulating HSCs (ST-HSCs) and fully differentiated lymphoid (B cells) and myeloid (granulocytes) cells. The lineage-specific differentiation states of cells from these four subpopulations were accurately identified independent of the stiffness of the underlying biomaterial substrate, indicating subtle spectral variations that discriminated these populations were not masked by features from the culture substrate. 
This approach enables identifying the lineage-specific differentiation stages of hematopoietic cells on biomaterial substrates of differing composition and may facilitate correlating hematopoietic cell fate decisions with the extrinsic cues that elicited them.
NASA Astrophysics Data System (ADS)
Banerjee, Ipsita
2009-03-01
Knowledge of the pathways governing cellular differentiation to a specific phenotype will enable generation of desired cell fates by careful alteration of the governing network through adequate manipulation of the cellular environment. With this aim, we have developed a novel method to reconstruct the underlying regulatory architecture of a differentiating cell population from discrete temporal gene expression data. We utilize an inherent feature of biological networks, that of sparsity, in formulating the network reconstruction problem as a bi-level mixed-integer programming problem. The formulation optimizes the network topology at the upper level and the network connectivity strength at the lower level. The method is first validated on in silico data before being applied to the complex system of embryonic stem (ES) cell differentiation. This formulation enables efficient identification of the underlying network topology, which can accurately predict the steps necessary for directing differentiation to subsequent stages. Concurrent experimental verification demonstrated excellent agreement with model prediction.
A spectrally tunable LED sphere source enables accurate calibration of tristimulus colorimeters
NASA Astrophysics Data System (ADS)
Fryc, I.; Brown, S. W.; Ohno, Y.
2006-02-01
The Four-Color Matrix method (FCM) was developed to improve the accuracy of chromaticity measurements of various display colors. The method is valid for each type of display having similar spectra. To develop the Four-Color correction matrix, spectral measurements of primary red, green, blue, and white colors of a display are needed. Consequently, a calibration facility should be equipped with a number of different displays. This is very inconvenient and expensive. A spectrally tunable light source (STS) that can mimic different display spectral distributions would eliminate the need for maintaining a wide variety of displays and would enable a colorimeter to be calibrated for a number of different displays using the same setup. Simulations show that an STS that can create red, green, blue and white distributions that are close to the real spectral power distribution (SPD) of a display works well with the FCM for the calibration of colorimeters.
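In spirit, the Four-Color Matrix correction fits a 3x3 matrix mapping the colorimeter's readings of the red, green, blue, and white primaries to their reference tristimulus values. The sketch below uses a generic linear least-squares formulation (the published FCM derivation may differ in detail), with a toy consistency check on synthetic data:

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def correction_matrix(instrument, reference):
    """Least-squares 3x3 matrix R with R @ instrument ~= reference.
    Both arguments are 3x4: columns are the red, green, blue, white readings."""
    # Normal equations: row i of R solves (X X^T) r_i = X y_i.
    XXt = [[sum(instrument[i][k] * instrument[j][k] for k in range(4))
            for j in range(3)] for i in range(3)]
    R = []
    for i in range(3):
        Xy = [sum(instrument[j][k] * reference[i][k] for k in range(4))
              for j in range(3)]
        R.append(solve3(XXt, Xy))
    return R

# Toy check: reference data generated from a known matrix is recovered.
true_R = [[0.98, 0.05, 0.00],
          [0.02, 1.01, 0.03],
          [0.00, 0.04, 1.10]]
instrument = [[1.0, 0.0, 0.0, 1.0],   # X readings for R, G, B, W
              [0.0, 1.0, 0.0, 1.0],   # Y readings
              [0.0, 0.0, 1.0, 1.0]]   # Z readings
reference = [[sum(true_R[i][j] * instrument[j][k] for j in range(3))
              for k in range(4)] for i in range(3)]
R = correction_matrix(instrument, reference)
```

With a spectrally tunable source reproducing each display's primaries, a matrix like this could be refit per display type without keeping physical displays on hand, which is the convenience the paper's simulations argue for.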
Soler, Miguel A; de Marco, Ario; Fortuna, Sara
2016-10-10
Nanobodies (VHHs) have proved to be valuable substitutes for conventional antibodies in molecular recognition. Their small size represents a precious advantage for rational mutagenesis based on modelling. Here we address the problem of predicting how Camelidae nanobody sequences can tolerate mutations by developing a simulation protocol based on all-atom molecular dynamics and whole-molecule docking. The method was tested on two sets of nanobodies characterized experimentally for their biophysical features. One set contained point mutations introduced to humanize a wild type sequence; in the second, the CDRs were swapped between single-domain frameworks with Camelidae and human hallmarks. The method resulted in accurate scoring approaches to predict experimental yields and enabled identification of the structural modifications induced by mutations. This work is a promising tool for the in silico development of single-domain antibodies and opens the opportunity to customize single functional domains of larger macromolecules.
NASA Astrophysics Data System (ADS)
Soler, Miguel A.; De Marco, Ario; Fortuna, Sara
2016-10-01
Nanobodies (VHHs) have proved to be valuable substitutes for conventional antibodies in molecular recognition. Their small size represents a precious advantage for rational mutagenesis based on modelling. Here we address the problem of predicting how Camelidae nanobody sequences can tolerate mutations by developing a simulation protocol based on all-atom molecular dynamics and whole-molecule docking. The method was tested on two sets of nanobodies characterized experimentally for their biophysical features. One set contained point mutations introduced to humanize a wild type sequence; in the second, the CDRs were swapped between single-domain frameworks with Camelidae and human hallmarks. The method resulted in accurate scoring approaches to predict experimental yields and enabled identification of the structural modifications induced by mutations. This work is a promising tool for the in silico development of single-domain antibodies and opens the opportunity to customize single functional domains of larger macromolecules.
Analysis of an optimization-based atomistic-to-continuum coupling method for point defects
Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...
2015-11-16
Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.
Methods for thermodynamic evaluation of battery state of health
Yazami, Rachid; McMenamin, Joseph; Reynier, Yvan; Fultz, Brent T
2013-05-21
Described are systems and methods for accurately characterizing thermodynamic and materials properties of electrodes and battery systems and for characterizing the state of health of electrodes and battery systems. Measurement of physical attributes of electrodes and batteries corresponding to thermodynamically stabilized electrode conditions permits determination of thermodynamic parameters, including state functions such as the Gibbs free energy, enthalpy, and entropy of electrode/electrochemical cell reactions, which enable prediction of important performance attributes of electrode materials and battery systems, such as energy, power density, current rate, cycle life, and state of health. Also provided are systems and methods for charging a battery according to its state of health.
Methods and systems for thermodynamic evaluation of battery state of health
Yazami, Rachid; McMenamin, Joseph; Reynier, Yvan; Fultz, Brent T
2014-12-02
Described are systems and methods for accurately characterizing thermodynamic and materials properties of electrodes and battery systems and for characterizing the state of health of electrodes and battery systems. Measurement of physical attributes of electrodes and batteries corresponding to thermodynamically stabilized electrode conditions permits determination of thermodynamic parameters, including state functions such as the Gibbs free energy, enthalpy, and entropy of electrode/electrochemical cell reactions, which enable prediction of important performance attributes of electrode materials and battery systems, such as energy, power density, current rate, cycle life, and state of health. Also provided are systems and methods for charging a battery according to its state of health.
NASA Astrophysics Data System (ADS)
Soto, Marcelo A.; Denisov, Andrey; Angulo-Vinuesa, Xabier; Martin-Lopez, Sonia; Thévenaz, Luc; Gonzalez-Herraez, Miguel
2017-04-01
A method for distributed birefringence measurements is proposed based on the interference pattern generated by the interrogation of a dynamic Brillouin grating (DBG) using two short consecutive optical pulses. Compared to existing DBG interrogation techniques, the method presented here offers improved sensitivity to birefringence changes thanks to the interferometric effect generated by the reflections of the two pulses. Experimental results demonstrate the possibility of obtaining the longitudinal birefringence profile of a 20 m-long Panda fibre with an accuracy of 10⁻⁸ using 16 averages and 30 cm spatial resolution. The method enables sub-metric and highly accurate distributed temperature and strain sensing.
NASA Astrophysics Data System (ADS)
John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.
2016-04-01
We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.
Saad, Ahmed S; Abo-Talib, Nisreen F; El-Ghobashy, Mohamed R
2016-01-05
Different methods have been introduced to enhance the selectivity of UV-spectrophotometry, thus enabling accurate determination of co-formulated components; however, mixtures whose components exhibit wide variation in absorptivities have been an obstacle to the application of UV-spectrophotometry. The developed ratio difference at coabsorptive point method (RDC) represents a simple, effective solution to this problem: the additive property of light absorbance enabled the two components to be treated as multiples of the lower-absorptivity component at a certain wavelength (the coabsorptive point), at which their total concentration multiples could be determined, whereas the other component was selectively determined by applying the ratio difference method in a single step. A mixture of perindopril arginine (PA) and amlodipine besylate (AM) exemplifies this problem, where the low absorptivity of PA relative to AM hinders selective spectrophotometric determination of PA. The developed method successfully determined both components in the overlapped region of their spectra with accuracies of 99.39±1.60 and 100.51±1.21 for PA and AM, respectively. The method was validated as per the USP guidelines and showed no significant difference upon statistical comparison with a reported chromatographic method. Copyright © 2015 Elsevier B.V. All rights reserved.
Alsina, Adolfo; Lai, Wu Ming; Wong, Wai Kin; Qin, Xianan; Zhang, Min; Park, Hyokeun
2017-11-04
Mitochondria are essential for cellular survival and function. In neurons, mitochondria are transported to various subcellular regions as needed. Thus, defects in the axonal transport of mitochondria are related to the pathogenesis of neurodegenerative diseases, and the movement of mitochondria has been the subject of intense research. However, the inability to accurately track mitochondria with subpixel accuracy has hindered this research. Here, we report an automated method for tracking mitochondria based on the center of fluorescence. This tracking method, which is accurate to approximately one-tenth of a pixel, uses the centroid of an individual mitochondrion and provides information regarding the distance traveled between consecutive imaging frames, instantaneous speed, net distance traveled, and average speed. Importantly, this new tracking method enables researchers to observe both directed motion and undirected movement (i.e., in which the mitochondrion moves randomly within a small region, following a sub-diffusive motion). This method significantly improves our ability to analyze the movement of mitochondria and sheds light on the dynamic features of mitochondrial movement. Copyright © 2017 Elsevier Inc. All rights reserved.
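The centroid-based tracking described in this abstract lends itself to a compact sketch. The following Python illustration is not the authors' code; the function names, the uniform frame interval `dt`, and the `pixel_size` parameter are assumptions. It computes the intensity-weighted centroid of each frame and derives the motion metrics the abstract lists: per-frame displacement, instantaneous speed, net distance traveled, and average speed.

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centre of fluorescence (subpixel position)."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return np.array([(frame * ys).sum() / total,
                     (frame * xs).sum() / total])

def track(frames, dt=1.0, pixel_size=1.0):
    """Track one mitochondrion across frames via its fluorescence centroid."""
    pos = np.array([centroid(f) for f in frames]) * pixel_size
    steps = np.linalg.norm(np.diff(pos, axis=0), axis=1)  # frame-to-frame distances
    return {
        "positions": pos,
        "step_distances": steps,
        "instantaneous_speed": steps / dt,
        "net_distance": np.linalg.norm(pos[-1] - pos[0]),
        "average_speed": steps.sum() / (dt * len(steps)),
    }
```

A fluorescent spot spread over several pixels yields a centroid at a fractional pixel position, which is what gives this class of method its roughly 0.1-pixel accuracy.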
Endoscopic ultrasound guided fine needle aspiration and useful ancillary methods
Tadic, Mario; Stoos-Veic, Tajana; Kusec, Rajko
2014-01-01
The role of endoscopic ultrasound (EUS) in evaluating pancreatic pathology has been well documented since the beginning of its clinical use. High spatial resolution and close proximity to the evaluated organs within the mediastinum and abdominal cavity allow detection of small focal lesions and precise tissue acquisition from suspected lesions within the reach of this method. Fine needle aspiration (FNA) is considered of additional value to EUS and is performed to obtain a tissue diagnosis. Tissue acquisition from suspected lesions for cytological or histological analysis allows not only the differentiation between malignant and non-malignant lesions but also, in most cases, the accurate distinction between the various types of malignant lesions. It is well documented that the best results are achieved only if an adequate sample is obtained for further analysis, if the material is processed in an appropriate way, and if adequate ancillary methods are performed. This is a multi-step process and can be quite a challenge in some cases. In this article, we discuss the technical aspects of tissue acquisition by EUS-guided FNA (EUS-FNA), as well as the role of an on-site cytopathologist, various means of specimen processing, and the selection of the appropriate ancillary method for providing an accurate tissue diagnosis and maximizing the yield of this method. The main goal of this review is to alert endosonographers not only to the different possibilities of tissue acquisition, namely EUS-FNA, but also to bring to their attention the importance of proper sample processing in the evaluation of various lesions in the gastrointestinal tract and other accessible organs. All aspects of tissue acquisition (needles, suction, use of stylet, complications, etc.) have been well discussed lately. Adequate tissue samples enable comprehensive diagnoses, which answer the main clinical questions, thus enabling targeted therapy. PMID:25339816
NASA Astrophysics Data System (ADS)
Kumamoto, Yasuaki; Minamikawa, Takeo; Kawamura, Akinori; Matsumura, Junichi; Tsuda, Yuichiro; Ukon, Juichiro; Harada, Yoshinori; Tanaka, Hideo; Takamatsu, Tetsuro
2017-02-01
Nerve-sparing surgery is essential to avoid functional deficits of the limbs and organs. Raman scattering, a label-free, minimally invasive, and accurate modality, is one of the best candidate technologies for detecting nerves in nerve-sparing surgery. However, Raman scattering imaging is too time-consuming to be employed in surgery. Here we present a rapid and accurate nerve visualization method using a multipoint Raman imaging technique that enables simultaneous spectral measurement from different locations (n=32) of a sample. Five seconds are sufficient to measure n=32 spectra with good S/N from a given tissue. Principal component regression discriminant analysis discriminated spectra obtained from peripheral nerves (n=863 from n=161 myelinated nerves) and connective tissue (n=828 from n=121 tendons) with sensitivity and specificity of 88.3% and 94.8%, respectively. To compensate for the sparseness of the multipoint-Raman-derived tissue discrimination image, which is too sparse to visualize nerve arrangement, we used morphological information obtained from a bright-field image. When merged with the sparse tissue discrimination image, a morphological image of a sample shows what proportion of the Raman measurement points in a given structure is classified as nerve. Setting the nerve detection criterion at 40% or more "nerve" points within a structure, myelinated nerves (n=161) and tendons (n=121) were discriminated with sensitivity and specificity of 97.5%. The presented technique, utilizing a sparse multipoint Raman image and a bright-field image, enables rapid, safe, and accurate detection of peripheral nerves.
The Finite-Surface Method for incompressible flow: a step beyond staggered grid
NASA Astrophysics Data System (ADS)
Hokpunna, Arpiruk; Misaka, Takashi; Obayashi, Shigeru
2017-11-01
We present a newly developed higher-order finite-surface method for the incompressible Navier-Stokes equations (NSE). This method defines the velocities as surface-averaged values on the surfaces of the pressure cells. Consequently, mass conservation on the pressure cells becomes an exact equation. The only things left to approximate are the momentum equation and the pressure at the new time step. Under certain conditions, the exact mass conservation enables an explicit n-th-order accurate NSE solver to be used with a pressure treatment that is two or four orders less accurate without losing the apparent convergence rate. This feature is not possible with finite-volume or finite-difference methods. We use Fourier analysis with a model spectrum to determine these conditions and find that the range covers standard boundary-layer flows. The formal convergence and performance of the proposed scheme are compared with a sixth-order finite-volume method. Finally, the accuracy and performance of the method are evaluated in turbulent channel flows. This work is partially funded by a research collaboration from IFS, Tohoku University, and the ASEAN+3 funding scheme from CMUIC, Chiang Mai University.
A Peroxidase-linked Spectrophotometric Assay for the Detection of Monoamine Oxidase Inhibitors
Zhi, Kangkang; Yang, Zhongduo; Sheng, Jie; Shu, Zongmei; Shi, Yin
2016-01-01
To develop a new, more accurate spectrophotometric method for detecting monoamine oxidase inhibitors in plant extracts, a series of amine substrates were selected, their ability to be oxidized by monoamine oxidase was evaluated by HPLC, and a new substrate was used to develop a peroxidase-linked spectrophotometric assay. 4-(Trifluoromethyl)benzylamine (11) proved to be an excellent substrate for the peroxidase-linked spectrophotometric assay, so a new peroxidase-linked spectrophotometric assay was set up. The principle of the method is that MAO converts 11 into an aldehyde, ammonia, and hydrogen peroxide. In the presence of peroxidase, the hydrogen peroxide oxidizes 4-aminoantipyrine; the oxidized 4-aminoantipyrine can then condense with vanillic acid to give a red quinoneimine dye. The production of the quinoneimine dye was detected at 490 nm by a microplate reader. The ΔOD value between the blank group and the blank negative control group in this new method is twice that in Holt's method, which makes the procedure more accurate and avoids producing false-positive results. The new method will be helpful for researchers screening monoamine oxidase inhibitors in deep-color plant extracts. PMID:27610153
Read clouds uncover variation in complex regions of the human genome
Bishara, Alex; Liu, Yuling; Weng, Ziming; Kashef-Haghighi, Dorna; Newburger, Daniel E.; West, Robert; Sidow, Arend; Batzoglou, Serafim
2015-01-01
Although an increasing amount of human genetic variation is being identified and recorded, determining variants within repeated sequences of the human genome remains a challenge. Most population and genome-wide association studies have therefore been unable to consider variation in these regions. Core to the problem is the lack of a sequencing technology that produces reads with sufficient length and accuracy to enable unique mapping. Here, we present a novel methodology of using read clouds, obtained by accurate short-read sequencing of DNA derived from long fragment libraries, to confidently align short reads within repeat regions and enable accurate variant discovery. Our novel algorithm, Random Field Aligner (RFA), captures the relationships among the short reads governed by the long read process via a Markov Random Field. We utilized a modified version of the Illumina TruSeq synthetic long-read protocol, which yielded shallow-sequenced read clouds. We test RFA through extensive simulations and apply it to discover variants on the NA12878 human sample, for which shallow TruSeq read cloud sequencing data are available, and on an invasive breast carcinoma genome that we sequenced using the same method. We demonstrate that RFA facilitates accurate recovery of variation in 155 Mb of the human genome, including 94% of 67 Mb of segmental duplication sequence and 96% of 11 Mb of transcribed sequence, that are currently hidden from short-read technologies. PMID:26286554
NASA Astrophysics Data System (ADS)
Simmons, Daniel; Cools, Kristof; Sewell, Phillip
2016-11-01
Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.
Quantitating Iron in Serum Ferritin by Use of ICP-MS
NASA Technical Reports Server (NTRS)
Smith, Scott M.; Gillman, Patricia L.
2003-01-01
A laboratory method has been devised to enable measurement of the concentration of iron bound in ferritin from small samples of blood (serum). Derived partly from a prior method that depends on large samples of blood, this method involves the use of an inductively coupled plasma mass spectrometer (ICP-MS). Ferritin is a complex of iron with the protein apoferritin. Heretofore, measurements of the concentration of serum ferritin (as distinguished from direct measurements of the concentration of iron in serum ferritin) have been used to assess iron stores in humans. Low levels of serum ferritin could indicate the first stage of iron depletion. High levels of serum ferritin could indicate high levels of iron (for example, in connection with hereditary hemochromatosis, an iron-overload illness that is characterized by progressive organ damage and can be fatal). However, the picture is complicated: a high level of serum ferritin could also indicate stress and/or inflammation instead of (or in addition to) iron overload, and low serum iron concentration could indicate inflammation rather than iron deficiency. Only when concentrations of both serum iron and serum ferritin increase and decrease together can the patient's iron status be assessed accurately. Hence, in enabling accurate measurement of the iron content of serum ferritin, the present method can improve the diagnosis of the patient's iron status. The prior method of measuring the concentration of iron involves the use of an atomic-absorption spectrophotometer with a graphite furnace. The present method incorporates a modified version of the sample-preparation process of the prior method. First, ferritin is isolated; more specifically, it is immobilized by immunoprecipitation with rabbit antihuman polyclonal antibody bound to agarose beads. The ferritin is then separated from other iron-containing proteins and free iron by a series of centrifugation and wash steps.
Next, the ferritin is digested with nitric acid to extract its iron content. Finally, a micronebulizer is used to inject the sample of the product of the digestion into the ICP-MS for analysis of its iron content. The sensitivity of the ICP-MS is high enough to enable it to characterize samples smaller than those required in the prior method (samples can be 0.15 to 0.60 mL).
Frame Shift/warp Compensation for the ARID Robot System
NASA Technical Reports Server (NTRS)
Latino, Carl D.
1991-01-01
The Automatic Radiator Inspection Device (ARID) is a system aimed at automating the tedious task of inspecting orbiter radiator panels. The ARID must have the ability to aim a camera accurately at the desired inspection points, of which there are on the order of 13,000. The ideal inspection points are known; however, the panel may be relocated due to inaccurate parking and warpage. A method of determining the mathematical description of a translated as well as a warped surface by accurate measurement of only a few points on the surface is developed here. The method uses a linear warp model whose effect is superimposed on the rigid-body translation. Due to the small angles involved, small-angle approximations are possible, which greatly reduces the computational complexity. Given an accurate linear warp model, all the desired translation and warp parameters can be obtained from knowledge of the ideal locations of four fiducial points and the corresponding measurements of these points on the actual radiator surface. The method uses three of the fiducials to define a plane and the fourth to define the warp. Given this information, it is possible to determine a transformation that enables the ARID system to translate any desired inspection point on the ideal surface to its corresponding value on the actual surface.
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank, up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
2013-07-01
Additionally, a physically consistent BRDF and radiation pressure model is utilized, thus enabling an accurate physical link between the observed photometric brightness and the attitudinal… The half-vector between the …source and the observer is Ĥ = (L̂ + V̂)/‖L̂ + V̂‖ (2), with angles α and β from N̂, and is used in many analytic BRDF models.
Indexed variation graphs for efficient and accurate resistome profiling.
Rowe, Will P M; Winn, Martyn D
2018-05-14
Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
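GROOT itself is written in Go and combines variation graphs with an LSH Forest index. As a rough illustration of the MinHash-based read classification idea underlying such tools, here is a small self-contained Python sketch; all names and parameters are hypothetical, it uses a plain similarity search over k-mer MinHash signatures rather than a true LSH Forest, and it omits the variation-graph alignment stage entirely.

```python
import hashlib

def kmers(seq, k=7):
    """Set of k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def minhash(seq, k=7, num_hashes=64):
    """MinHash signature: per salted hash function, the minimum over the k-mer set."""
    sig = []
    for salt in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.blake2b(f"{salt}:{km}".encode(),
                                           digest_size=8).digest(), "big")
            for km in kmers(seq, k)))
    return sig

def similarity(sig_a, sig_b):
    """Estimated Jaccard similarity: fraction of matching signature slots."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def classify(read, reference_genes, k=7, threshold=0.1):
    """Assign a read to the most similar reference gene, or None below threshold."""
    read_sig = minhash(read, k)
    best, best_sim = None, threshold
    for name, sig in reference_genes.items():
        s = similarity(read_sig, sig)
        if s > best_sim:
            best, best_sim = name, s
    return best
```

An LSH Forest replaces the linear scan in `classify` with a prefix-tree index over the signatures, which is what makes querying fast at metagenome scale.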
NASA Technical Reports Server (NTRS)
Neugroschel, A.
1981-01-01
New methods are presented and illustrated that enable the accurate determination of the diffusion length of minority carriers in the narrow regions of a solar cell or a diode. Other methods now available are inaccurate for the desired case in which the width of the region is less than the diffusion length. Once the diffusion length is determined by the new methods, this result can be combined with measured dark I-V characteristics and with small-signal admittance characteristics to enable determination of the recombination currents in each quasi-neutral region of the cell - for example, in the emitter, low-doped base, and high-doped base regions of the BSF (back-surface-field) cell. This approach leads to values for the effective surface recombination velocity of the high-low junction forming the back-surface field of BSF cells or the high-low emitter junction of HLE cells. These methods are also applicable for measuring the minority-carrier lifetime in thin epitaxial layers grown on substrates with opposite conductivity type.
Zhou, Tony; Dickson, Jennifer L; Geoffrey Chase, J
2018-01-01
Continuous glucose monitoring (CGM) devices have been effective in managing diabetes and offer potential benefits for use in the intensive care unit (ICU). Use of CGM devices in the ICU has been limited, primarily due to their higher point-accuracy errors compared with the traditional intermittent blood glucose (BG) measurements currently used. General models of CGM errors, including drift and random errors, are lacking, but would enable better design of protocols to utilize these devices. This article presents an autoregressive (AR) based modeling method that separately characterizes the drift and random noise of the GlySure CGM sensor (GlySure Limited, Oxfordshire, UK). Clinical sensor data (n = 33) and reference measurements were used to generate 2 AR models to describe sensor drift and noise. These models were used to generate 100 Monte Carlo simulations based on reference blood glucose measurements. These were then compared to the original CGM clinical data using mean absolute relative difference (MARD) and a Trend Compass. The point-accuracy MARD was very similar between simulated and clinical data (9.6% vs 9.9%). A Trend Compass was used to assess trend accuracy and found that simulated and clinical sensor profiles were similar (simulated trend index 11.4° vs clinical trend index 10.9°). The model and method accurately represent cohort sensor behavior across patients, providing a general modeling approach for any such sensor by separately characterizing each type of error that can arise in the data. Overall, it enables better protocol design based on accurate expected CGM sensor behavior, as well as enabling analysis of what level of each type of sensor error would be necessary to obtain desired glycemic control safety and performance with a given protocol.
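The drift-plus-noise decomposition described above can be illustrated with a simple AR(1) sketch. The abstract does not give the paper's actual model orders, fitted coefficients, or identification procedure, so everything below, including the function names and example parameters, is an assumption for illustration only: fit an AR(1) process to a residual series, then overlay simulated drift and noise processes on a reference BG trace for Monte Carlo runs.

```python
import numpy as np

def fit_ar1(x):
    """Least-squares AR(1) fit of x[t] = a*x[t-1] + e[t]; returns (a, noise std)."""
    x0, x1 = x[:-1], x[1:]
    a = np.dot(x0, x1) / np.dot(x0, x0)
    resid = x1 - a * x0
    return a, resid.std(ddof=1)

def simulate_ar1(a, sigma, n, rng):
    """Generate one AR(1) realisation of length n."""
    e = rng.normal(0.0, sigma, n)
    out = np.empty(n)
    out[0] = e[0]
    for t in range(1, n):
        out[t] = a * out[t - 1] + e[t]
    return out

def monte_carlo_cgm(reference_bg, a_drift, s_drift, a_noise, s_noise, runs, seed=0):
    """Overlay independently simulated drift and noise on a reference BG trace."""
    rng = np.random.default_rng(seed)
    n = len(reference_bg)
    sims = []
    for _ in range(runs):
        drift = simulate_ar1(a_drift, s_drift, n, rng)   # slowly varying error
        noise = simulate_ar1(a_noise, s_noise, n, rng)   # fast random error
        sims.append(np.asarray(reference_bg, dtype=float) + drift + noise)
    return np.array(sims)
```

Characterizing drift and noise separately, as the paper does, lets a protocol designer ask which error component dominates the MARD and trend-index behavior of a simulated sensor.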
Liu, Derek; Sloboda, Ron S
2014-05-01
Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
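The two-step shift described above, an integer shift implemented as convolution with a unit impulse plus a fractional shift through a third-order Lagrange filter, can be sketched in 1-D. The paper works with 3-D dose kernels; this numpy version, with its function names and circular boundary handling, is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def lagrange_fd_taps(D, order=3):
    """FIR taps of an order-N Lagrange fractional-delay filter approximating x[n-D]."""
    taps = np.ones(order + 1)
    for i in range(order + 1):
        for m in range(order + 1):
            if m != i:
                taps[i] *= (D - m) / (i - m)
    return taps

def frac_filter(x, f):
    """Circularly delay x by 1 + f samples with a third-order Lagrange filter
    (a 4-tap filter is most accurate for delays in [1, 2])."""
    taps = lagrange_fd_taps(1.0 + f)
    y = np.zeros_like(x, dtype=float)
    for k, h in enumerate(taps):
        y += h * np.roll(x, k)   # np.roll(x, k)[n] == x[(n - k) % N]
    return y

def shift_kernel(kernel, shift):
    """Shift a sampled 1-D kernel by an arbitrary (non-integer) amount:
    integer part via a circular roll, fractional part via the Lagrange filter.
    The roll is reduced by 1 to absorb the filter's built-in 1-sample delay."""
    i = int(np.floor(shift))
    f = shift - i
    return frac_filter(np.roll(kernel, i - 1), f)
```

For a smooth kernel the 4-tap filter reproduces the fractionally shifted samples to within the cubic interpolation error, which is consistent with the sub-2% dose accuracy reported beyond 3 mm from each seed.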
The 1000 Genomes Project: data management and community access.
Clarke, Laura; Zheng-Bradley, Xiangqun; Smith, Richard; Kulesha, Eugene; Xiao, Chunlin; Toneva, Iliana; Vaughan, Brendan; Preuss, Don; Leinonen, Rasko; Shumway, Martin; Sherry, Stephen; Flicek, Paul
2012-04-27
The 1000 Genomes Project was launched as one of the largest distributed data collection and analysis projects ever undertaken in biology. In addition to the primary scientific goals of creating both a deep catalog of human genetic variation and extensive methods to accurately discover and characterize variation using new sequencing technologies, the project makes all of its data publicly available. Members of the project data coordination center have developed and deployed several tools to enable widespread data access.
Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2014-11-01
Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method is presented that enables the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.
Cryo-Imaging and Software Platform for Analysis of Molecular MR Imaging of Micrometastases
Qutaish, Mohammed Q.; Zhou, Zhuxian; Prabhu, David; Liu, Yiqiao; Busso, Mallory R.; Izadnegahdar, Donna; Gargesha, Madhusudhana; Lu, Hong; Lu, Zheng-Rong
2018-01-01
We created and evaluated a preclinical, multimodality imaging, and software platform to assess molecular imaging of small metastases. This included experimental methods (e.g., GFP-labeled tumor and high resolution multispectral cryo-imaging), nonrigid image registration, and interactive visualization of imaging agent targeting. We describe technological details earlier applied to GFP-labeled metastatic tumor targeting by molecular MR (CREKA-Gd) and red fluorescent (CREKA-Cy5) imaging agents. Optimized nonrigid cryo-MRI registration enabled nonambiguous association of MR signals to GFP tumors. Interactive visualization of out-of-RAM volumetric image data allowed one to zoom to a GFP-labeled micrometastasis, determine its anatomical location from color cryo-images, and establish the presence/absence of targeted CREKA-Gd and CREKA-Cy5. In a mouse with >160 GFP-labeled tumors, we determined that in the MR images every tumor in the lung >0.3 mm2 had visible signal and that some metastases as small as 0.1 mm2 were also visible. More tumors were visible in CREKA-Cy5 than in CREKA-Gd MRI. Tape transfer method and nonrigid registration allowed accurate (<11 μm error) registration of whole mouse histology to corresponding cryo-images. Histology showed inflammation and necrotic regions not labeled by imaging agents. This mouse-to-cells multiscale and multimodality platform should uniquely enable more informative and accurate studies of metastatic cancer imaging and therapy. PMID:29805438
Kim, Byoungjip; Kang, Seungwoo; Ha, Jin-Young; Song, Junehwa
2015-01-01
In this paper, we introduce a novel smartphone framework called VisitSense that automatically detects and predicts a smartphone user’s place visits from ambient radio to enable behavioral targeting for mobile ads in large shopping malls. VisitSense enables mobile app developers to adopt visit-pattern-aware mobile advertising for shopping mall visitors in their apps. It also benefits mobile users by allowing them to receive highly relevant mobile ads that are aware of their place visit patterns in shopping malls. To achieve this goal, VisitSense employs accurate visit detection and prediction methods. For accurate visit detection, we develop a change-based detection method that takes into consideration the stability change of ambient radio and the mobility change of users. It performs well in large shopping malls, where ambient radio is quite noisy and causes existing algorithms to easily fail. In addition, we propose a causality-based visit prediction model to capture the causality in sequential visit patterns for effective prediction. We have developed a VisitSense prototype system, and a visit-pattern-aware mobile advertising application based on it. Furthermore, we deploy the system in the COEX Mall, one of the largest shopping malls in Korea, and conduct diverse experiments to show the effectiveness of VisitSense. PMID:26193275
NASA Astrophysics Data System (ADS)
Pipa, A. V.; Koskulics, J.; Brandenburg, R.; Hoder, T.
2012-11-01
The concept of the simplest equivalent circuit for a dielectric barrier discharge (DBD) is critically reviewed. It is shown that the approach is consistent with experimental data measured either in large-scale sinusoidal-voltage driven or miniature pulse-voltage driven DBDs. An expression for the charge transferred through the gas gap q(t) is obtained with an accurate account for the displacement current and the values of DBD reactor capacitance. This enables (i) the significant reduction of experimental error in the determination of q(t) in pulsed DBDs, (ii) the verification of the classical electrical theory of ozonizers about maximal transferred charge qmax, and (iii) the development of a graphical method for the determination of qmax from charge-voltage characteristics (Q-V plots, often referred to as Lissajous figures) measured under pulsed excitation. The method of graphical presentation of qmax is demonstrated with an example of a Q-V plot measured under pulsed excitation. The relations between the discharge current jR(t), the transferred charge q(t), and the measurable parameters are presented in new forms, which enable the qualitative interpretation of the measured current and voltage waveforms without knowledge of the value of the dielectric barrier capacitance Cd. For quantitative evaluation of electrical measurements, however, accurate estimation of Cd remains important.
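As a complement to the graphical treatment of qmax: in the classical electrical theory of ozonizers (Manley), the area enclosed by the Q-V Lissajous figure equals the energy dissipated in the discharge per voltage cycle. A minimal numerical sketch of that area calculation, on synthetic loop data, is:

```python
import numpy as np

def loop_area(v, q):
    """Area enclosed by a closed charge-voltage (Lissajous) loop, via the
    shoelace formula. By Manley's classical analysis this equals the energy
    dissipated per voltage cycle. Points must trace the loop in order."""
    v, q = np.asarray(v, float), np.asarray(q, float)
    return 0.5 * abs(np.dot(v, np.roll(q, -1)) - np.dot(q, np.roll(v, -1)))
```

For a synthetic elliptical loop with semi-axes a (volts) and b (coulombs), the computed area converges to pi*a*b as the sampling is refined.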
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined, and several numerical examples are presented to corroborate the findings.
Accurate vehicle classification including motorcycles using piezoelectric sensors.
DOT National Transportation Integrated Search
2013-03-01
State and federal departments of transportation are charged with classifying vehicles and monitoring mileage traveled. Accurate data reporting enables suitable roadway design for safety and capacity. Vehicle classifiers currently employ inductive loo...
Robotically assisted small animal MRI-guided mouse biopsy
NASA Astrophysics Data System (ADS)
Wilson, Emmanuel; Chiodo, Chris; Wong, Kenneth H.; Fricke, Stanley; Jung, Mira; Cleary, Kevin
2010-02-01
Small mammals, namely mice and rats, play an important role in biomedical research. Imaging, in conjunction with accurate therapeutic agent delivery, has tremendous value in small animal research since it enables serial, non-destructive testing of animals and facilitates the study of biomarkers of disease progression. The small size of organs in mice lends some difficulty to accurate biopsies and therapeutic agent delivery. Image guidance with the use of robotic devices should enable more accurate and repeatable targeting for biopsies and delivery of therapeutic agents, as well as the ability to acquire tissue from a pre-specified location based on image anatomy. This paper presents our work in integrating a robotic needle guide device, specialized stereotaxic mouse holder, and magnetic resonance imaging, with a long-term goal of performing accurate and repeatable targeting in anesthetized mice studies.
Beidas, Rinad S; Maclean, Johanna Catherine; Fishman, Jessica; Dorsey, Shannon; Schoenwald, Sonja K; Mandell, David S; Shea, Judy A; McLeod, Bryce D; French, Michael T; Hogue, Aaron; Adams, Danielle R; Lieberman, Adina; Becker-Haimes, Emily M; Marcus, Steven C
2016-09-15
This randomized trial will compare three methods of assessing fidelity to cognitive-behavioral therapy (CBT) for youth to identify the most accurate and cost-effective method. The three methods include self-report (i.e., therapist completes a self-report measure on the CBT interventions used in session while circumventing some of the typical barriers to self-report), chart-stimulated recall (i.e., therapist reports on the CBT interventions used in session via an interview with a trained rater, and with the chart to assist him/her) and behavioral rehearsal (i.e., therapist demonstrates the CBT interventions used in session via a role-play with a trained rater). Direct observation will be used as the gold-standard comparison for each of the three methods. This trial will recruit 135 therapists in approximately 12 community agencies in the City of Philadelphia. Therapists will be randomized to one of the three conditions. Each therapist will provide data from three unique sessions, for a total of 405 sessions. All sessions will be audio-recorded and coded using the Therapy Process Observational Coding System for Child Psychotherapy-Revised Strategies scale. This will enable comparison of each measurement approach to direct observation of therapist session behavior to determine which most accurately assesses fidelity. Cost data associated with each method will be gathered. To gather stakeholder perspectives of each measurement method, we will use purposive sampling to recruit 12 therapists from each condition (total of 36 therapists) and 12 supervisors to participate in semi-structured qualitative interviews. Results will provide needed information on how to accurately and cost-effectively measure therapist fidelity to CBT for youth, as well as important information about stakeholder perspectives with regard to each measurement method. Findings will inform fidelity measurement practices in future implementation studies as well as in clinical practice. 
NCT02820623, June 3rd, 2016.
Saukko, Annina E A; Honkanen, Juuso T J; Xu, Wujun; Väänänen, Sami P; Jurvelin, Jukka S; Lehto, Vesa-Pekka; Töyräs, Juha
2017-12-01
Cartilage injuries may be detected using contrast-enhanced computed tomography (CECT) by observing variations in the distribution of anionic contrast agent within cartilage. Currently, clinical CECT enables detection of injuries and related post-traumatic degeneration based on two subsequent CT scans. The first scan allows segmentation of articular surfaces and lesions while the latter scan allows evaluation of tissue properties. Segmentation of articular surfaces from the latter scan is difficult since the contrast agent diffusion diminishes the image contrast at surfaces. We hypothesize that this can be overcome by mixing anionic contrast agent (ioxaglate) with bismuth oxide nanoparticles (BiNPs) too large to diffuse into cartilage, inducing a high contrast at the surfaces. Here, a dual contrast method employing this mixture is evaluated by determining the depth-wise X-ray attenuation profiles in intact, enzymatically degraded, and mechanically injured osteochondral samples (n = 3 × 10) using a microCT immediately and at 45 min after immersion in contrast agent. BiNPs were unable to diffuse into cartilage, producing high contrast at articular surfaces. Ioxaglate enabled the detection of enzymatic and mechanical degeneration. In conclusion, the dual contrast method allowed detection of injuries and degeneration simultaneously with accurate cartilage segmentation using a single scan conducted at 45 min after contrast agent administration.
A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes
With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model’s horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model’s terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.
Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; O'Grady, Gregory; Cheng, Leo K; Angeli, Timothy R
2018-02-01
High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode-activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust in atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation-time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error in manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.
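A simplified sketch of the wavefront-orientation idea for a single interior electrode, assuming unit electrode spacing, an axis set limited to the four principal grid directions, and no dysrhythmia or boundary handling:

```python
import numpy as np

def interpolate_at(at, i, j):
    """Estimate the activation time (AT) at electrode (i, j) as the mean of the
    pair of adjacent electrodes lying along the axis best aligned with the
    local propagation direction (orthogonal to the wavefront). Illustrative
    interior-point sketch, not the full published algorithm."""
    # local propagation direction from central differences of neighboring ATs
    gy = (at[i + 1, j] - at[i - 1, j]) / 2.0
    gx = (at[i, j + 1] - at[i, j - 1]) / 2.0
    # candidate axes: horizontal, vertical, and the two diagonals
    axes = [(0, 1), (1, 0), (1, 1), (1, -1)]
    di, dj = max(axes, key=lambda a: abs(a[0] * gy + a[1] * gx) / np.hypot(*a))
    return 0.5 * (at[i + di, j + dj] + at[i - di, j - dj])
```

For a planar wave the AT field is locally linear, so the mean of a symmetric electrode pair through the point recovers the point's AT exactly, which is why the approach needs no explicit distance calculation.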
NASA Astrophysics Data System (ADS)
Skrzypek, N.; Warren, S. J.; Faherty, J. K.; Mortlock, D. J.; Burgasser, A. J.; Hewett, P. C.
2015-02-01
Aims: We present a method, named photo-type, to identify and accurately classify L and T dwarfs onto the standard spectral classification system using photometry alone. This enables the creation of large and deep homogeneous samples of these objects efficiently, without the need for spectroscopy. Methods: We created a catalogue of point sources with photometry in 8 bands, ranging from 0.75 to 4.6 μm, selected from an area of 3344 deg2, by combining SDSS, UKIDSS LAS, and WISE data. Sources with 13.0
Automatic lumbar spine measurement in CT images
NASA Astrophysics Data System (ADS)
Mao, Yunxiang; Zheng, Dong; Liao, Shu; Peng, Zhigang; Yan, Ruyi; Liu, Junhua; Dong, Zhongxing; Gong, Liyan; Zhou, Xiang Sean; Zhan, Yiqiang; Fei, Jun
2017-03-01
Accurate lumbar spine measurement in CT images provides an essential way for quantitative analysis of spinal diseases such as spondylolisthesis and scoliosis. In today's clinical workflow, the measurements are manually performed by radiologists and surgeons, which is time consuming and irreproducible. Therefore, an automatic and accurate lumbar spine measurement algorithm is highly desirable. In this study, we propose a method to automatically calculate five different lumbar spine measurements in CT images. There are three main stages of the proposed method: First, a learning based spine labeling method, which integrates both the image appearance and spine geometry information, is used to detect lumbar and sacrum vertebrae in CT images. Then, a multi-atlas based image segmentation method is used to segment each lumbar vertebra and the sacrum based on the detection result. Finally, measurements are derived from the segmentation result of each vertebra. Our method has been evaluated on 138 spinal CT scans to automatically calculate five widely used clinical spine measurements. Experimental results show that our method can achieve more than 90% success rates across all the measurements. Our method also significantly improves the measurement efficiency compared to manual measurements. Besides benefiting the routine clinical diagnosis of spinal diseases, our method also enables large-scale data analytics for scientific and clinical research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millis, Andrew
Understanding the behavior of interacting electrons in molecules and solids so that one can predict new superconductors, catalysts, light harvesters, energy and battery materials and optimize existing ones is the "quantum many-body problem". This is one of the scientific grand challenges of the 21st century. A complete solution to the problem has been proven to be exponentially hard, meaning that straightforward numerical approaches fail. New insights and new methods are needed to provide accurate yet feasible approximate solutions. This CMSCN project brought together chemists and physicists to combine insights from the two disciplines to develop innovative new approaches. Outcomes included the Density Matrix Embedding method, a new, computationally inexpensive and extremely accurate approach that may enable first principles treatment of superconducting and magnetic properties of strongly correlated materials, new techniques for existing methods including an Adaptively Truncated Hilbert Space approach that will vastly expand the capabilities of the dynamical mean field method, a self-energy embedding theory and a new memory-function based approach to the calculations of the behavior of driven systems. The methods developed under this project are now being applied to improve our understanding of superconductivity, to calculate novel topological properties of materials and to characterize and improve the properties of nanoscale devices.
NASA Astrophysics Data System (ADS)
Ge, Zhouyang; Loiseau, Jean-Christophe; Tammisola, Outi; Brandt, Luca
2018-01-01
Aiming for the simulation of colloidal droplets in microfluidic devices, we present here a numerical method for two-fluid systems subject to surface tension and depletion forces among the suspended droplets. The algorithm is based on an efficient solver for the incompressible two-phase Navier-Stokes equations, and uses a mass-conserving level set method to capture the fluid interface. The four novel ingredients proposed here are, firstly, an interface-correction level set (ICLS) method; global mass conservation is achieved by performing an additional advection near the interface, with a correction velocity obtained by locally solving an algebraic equation, which is easy to implement in both 2D and 3D. Secondly, we report a second-order accurate geometric estimation of the curvature at the interface and, thirdly, the combination of the ghost fluid method with the fast pressure-correction approach enabling an accurate and fast computation even for large density contrasts. Finally, we derive a hydrodynamic model for the interaction forces induced by depletion of surfactant micelles and combine it with a multiple level set approach to study short-range interactions among droplets in the presence of attracting forces.
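The curvature entering the surface-tension force can be computed from the level set field as kappa = div(grad(phi)/|grad(phi)|). The sketch below is a generic second-order finite-difference version of that formula, not the geometric estimator proposed in the paper:

```python
import numpy as np

def curvature(phi, h):
    """Interface curvature kappa = div(grad(phi)/|grad(phi)|) of a 2D level-set
    field phi on a uniform grid of spacing h, via central differences.
    Generic illustrative sketch; the small epsilon guards |grad(phi)| = 0."""
    py, px = np.gradient(phi, h)           # np.gradient: axis 0 (y), then axis 1 (x)
    norm = np.sqrt(px ** 2 + py ** 2) + 1e-12
    dny_dy, _ = np.gradient(py / norm, h)  # d(n_y)/dy
    _, dnx_dx = np.gradient(px / norm, h)  # d(n_x)/dx
    return dnx_dx + dny_dy
```

For the signed-distance field of a circle of radius R, the computed curvature approaches 1/R on the interface as the grid is refined, a standard sanity check for such estimators.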
Quantification of HTLV-1 Clonality and TCR Diversity
Laydon, Daniel J.; Melamed, Anat; Sim, Aaron; Gillet, Nicolas A.; Sim, Kathleen; Darko, Sam; Kroll, J. Simon; Douek, Daniel C.; Price, David A.; Bangham, Charles R. M.; Asquith, Becca
2014-01-01
Estimation of immunological and microbiological diversity is vital to our understanding of infection and the immune response. For instance, what is the diversity of the T cell repertoire? These questions are partially addressed by high-throughput sequencing techniques that enable identification of immunological and microbiological “species” in a sample. Estimators of the number of unseen species are needed to estimate population diversity from sample diversity. Here we test five widely used non-parametric estimators, and develop and validate a novel method, DivE, to estimate species richness and distribution. We used three independent datasets: (i) viral populations from subjects infected with human T-lymphotropic virus type 1; (ii) T cell antigen receptor clonotype repertoires; and (iii) microbial data from infant faecal samples. When applied to datasets with rarefaction curves that did not plateau, existing estimators systematically increased with sample size. In contrast, DivE consistently and accurately estimated diversity for all datasets. We identify conditions that limit the application of DivE. We also show that DivE can be used to accurately estimate the underlying population frequency distribution. We have developed a novel method that is significantly more accurate than commonly used biodiversity estimators in microbiological and immunological populations. PMID:24945836
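Chao1 is one of the widely used non-parametric richness estimators of the kind evaluated above (shown here for illustration; it is not the DivE method the paper develops). A minimal implementation from per-species counts:

```python
from collections import Counter

def chao1(counts):
    """Chao1 lower-bound estimate of species richness from per-species counts.
    Uses the numbers of singletons (f1) and doubletons (f2); falls back to the
    bias-corrected form when no doubletons are observed."""
    counts = [c for c in counts if c > 0]
    s_obs = len(counts)
    f = Counter(counts)
    f1, f2 = f[1], f[2]
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0
```

The estimator adds to the observed richness a term driven by rare species, which is why samples whose rarefaction curves have not plateaued push such estimators upward with sample size, as the study observes.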
Chung, Younshik; Chang, IlJoon
2015-11-01
Recently introduced vehicle black box systems, or in-vehicle video event data recorders, enable drivers to collect more accurate crash information such as location, time, and situation at the pre-crash and crash moments, which can be analyzed to find crash causal factors more accurately. This study presents the vehicle black box system in brief and its application status in Korea. Based on the crash data obtained from the vehicle black box system, this study analyzes the accuracy of crash data collected by the existing road crash recording method, in which police officers record data based on the accident parties' statements or eyewitness accounts. The analysis results show that the crash data observed by the existing method have an average of 84.48m of spatial difference and standard deviation of 157.75m, as well as an average 29.05min of temporal error and standard deviation of 19.24min. Additionally, the average and standard deviation of crash speed errors were found to be 9.03km/h and 7.21km/h, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.
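Spatial errors like those reported above are typically computed as great-circle distances between the police-recorded and black-box crash coordinates. A standard haversine sketch (the coordinates in the test are hypothetical, not from the study's data):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points,
    using the haversine formula on a spherical Earth (radius 6371 km)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))
```

Averaging such distances over all matched crash records gives the mean spatial difference (and its standard deviation) quoted in the abstract.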
Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses
Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy
2015-01-01
Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. 
Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
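The weighting scheme itself is simple to express: instead of passing each bin's re-estimated tree to the summary method once, it is repeated once per gene in the bin. A minimal sketch, with trees represented as opaque strings:

```python
def weighted_binning_trees(bin_trees, bin_sizes):
    """Expand per-bin re-estimated gene trees into the weighted multiset fed to
    a coalescent-based summary method: each bin's tree is repeated once per
    gene in the bin, so larger bins carry proportionally more weight.
    Sketch of the weighting step only, not the binning heuristic itself."""
    assert len(bin_trees) == len(bin_sizes)
    out = []
    for tree, size in zip(bin_trees, bin_sizes):
        out.extend([tree] * size)
    return out
```

This replication is what restores statistical consistency: the summary method again sees gene trees in proportion to the loci that support them, rather than one vote per bin.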
Blood vessels segmentation of hatching eggs based on fully convolutional networks
NASA Astrophysics Data System (ADS)
Geng, Lei; Qiu, Ling; Wu, Jun; Xiao, Zhitao
2018-04-01
Fully convolutional networks (FCNs), trained end-to-end, pixels-to-pixels, predict a result for each pixel and have been widely used for semantic segmentation. In order to realize blood vessel segmentation of hatching eggs, a method based on FCNs is proposed in this paper. The training datasets are composed of patches extracted from very few images to augment the data. The network combines lower-layer features with deconvolution to enable precise segmentation. The proposed method avoids the problem that training deep networks requires large-scale samples. Experimental results on hatching eggs demonstrate that this method yields more accurate segmentation outputs than previous approaches. It provides a convenient reference for subsequent fertility detection.
A semi-automatic method for positioning a femoral bone reconstruction for strict view generation.
Milano, Federico; Ritacco, Lucas; Gomez, Adrian; Gonzalez Bernaldo de Quiros, Fernan; Risk, Marcelo
2010-01-01
In this paper we present a semi-automatic method for femoral bone positioning after 3D image reconstruction from Computed Tomography images. This serves as grounding for the definition of strict axial, longitudinal and anterior-posterior views, overcoming the problem of patient positioning biases in 2D femoral bone measuring methods. After the bone reconstruction is aligned to a standard reference frame, new tomographic slices can be generated, on which unbiased measures may be taken. This could allow not only accurate inter-patient comparisons but also intra-patient comparisons, i.e., comparisons of images of the same patient taken at different times. This method could enable medical doctors to diagnose and follow up several bone deformities more easily.
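One common way to align a reconstruction to a standard reference frame, given matched anatomical landmarks, is least-squares rigid registration (the Kabsch/SVD method). This is a generic sketch under the assumption that landmark correspondences are already available; it is not necessarily the authors' procedure:

```python
import numpy as np

def kabsch_align(P, Q):
    """Rotation R and translation t that best align point set P onto Q in the
    least-squares sense (Kabsch algorithm): q_i ~ R @ p_i + t.
    P and Q are (n, 3) arrays of corresponding landmarks."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Once the bone model is mapped into the standard frame this way, the strict axial, longitudinal, and anterior-posterior slices can be generated independently of how the patient was positioned in the scanner.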
3D printing functional materials and devices (Conference Presentation)
NASA Astrophysics Data System (ADS)
McAlpine, Michael C.
2017-05-01
The development of methods for interfacing high performance functional devices with biology could impact regenerative medicine, smart prosthetics, and human-machine interfaces. Indeed, the ability to three-dimensionally interweave biological and functional materials could enable the creation of devices possessing unique geometries, properties, and functionalities. Yet, most high quality functional materials are two dimensional, hard and brittle, and require high crystallization temperatures for maximal performance. These properties render the corresponding devices incompatible with biology, which is three-dimensional, soft, stretchable, and temperature sensitive. We overcome these dichotomies by: 1) using 3D printing and scanning for customized, interwoven, anatomically accurate device architectures; 2) employing nanotechnology as an enabling route for overcoming mechanical discrepancies while retaining high performance; and 3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This three-dimensional blending of functional materials and 'living' platforms may enable next-generation 3D printed devices.
Brain vascular image segmentation based on fuzzy local information C-means clustering
NASA Astrophysics Data System (ADS)
Hu, Chaoen; Liu, Xia; Liang, Xiao; Hui, Hui; Yang, Xin; Tian, Jie
2017-02-01
Light sheet fluorescence microscopy (LSFM) is a powerful optical-resolution fluorescence microscopy technique that enables observation of the mouse brain vascular network at cellular resolution. However, micro-vessel structures show intensity inhomogeneity in LSFM images, which makes it difficult to extract line structures. In this work, we developed a vascular image segmentation method that enhances vessel details, which should be useful for estimating statistics such as micro-vessel density. Since the eigenvalues of the Hessian matrix and their signs describe different geometric structures in images, enabling the construction of a vascular similarity function and the enhancement of line signals, the main idea of our method is to cluster the pixel values of the enhanced image. Our method contained three steps: 1) calculate the multiscale gradients and the differences between eigenvalues of the Hessian matrix. 2) To generate the enhanced micro-vessel structures, a feedforward neural network was trained on 2.26 million pixels to model the correlations between multi-scale gradients and the differences between eigenvalues. 3) Fuzzy local information c-means (FLICM) clustering was used to cluster the pixel values of the enhanced image. To verify the feasibility and effectiveness of this method, mouse brain vascular images were acquired with a commercial light-sheet microscope in our lab. Experiments showed that the Dice similarity coefficient of the segmentation reached 85%. The results illustrate that our approach dramatically improves the vascular image and enables accurate extraction of blood vessels in LSFM images.
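A single-scale, Frangi-style sketch of Hessian-eigenvalue line enhancement, the geometric idea behind step 1 above; it stands in for (and is not) the paper's learned multi-scale enhancement, and assumes a pre-smoothed input image:

```python
import numpy as np

def vesselness2d(img, beta=0.5, c=None):
    """Frangi-style 2D line filter: a bright tubular structure has one large
    negative Hessian eigenvalue and one near-zero eigenvalue. Single-scale
    illustrative sketch; the input is assumed already smoothed."""
    im = np.asarray(img, float)
    gy, gx = np.gradient(im)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    hm = 0.5 * (hxy + hyx)                     # symmetrized mixed derivative
    # eigenvalues of the symmetric 2x2 Hessian at every pixel
    disc = np.sqrt((hxx - hyy) ** 2 + 4.0 * hm ** 2)
    la = 0.5 * (hxx + hyy + disc)
    lb = 0.5 * (hxx + hyy - disc)
    # order so that |l1| <= |l2|
    swap = np.abs(la) > np.abs(lb)
    l1 = np.where(swap, lb, la)
    l2 = np.where(swap, la, lb)
    rb2 = (l1 / (l2 + 1e-12)) ** 2             # blobness vs. lineness ratio
    s2 = l1 ** 2 + l2 ** 2                     # second-order structure strength
    if c is None:
        c = 0.5 * np.sqrt(s2.max()) + 1e-12
    v = np.exp(-rb2 / (2.0 * beta ** 2)) * (1.0 - np.exp(-s2 / (2.0 * c * c)))
    return np.where(l2 < 0, v, 0.0)            # keep bright ridges only
```

On a synthetic bright ridge, the response is strong along the ridge centerline and near zero in flat background, which is the property that makes the enhanced image suitable for subsequent clustering.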
Characterization of Adipose Tissue Product Quality Using Measurements of Oxygen Consumption Rate.
Suszynski, Thomas M; Sieber, David A; Mueller, Kathryn; Van Beek, Allen L; Cunningham, Bruce L; Kenkel, Jeffrey M
2018-03-14
Fat grafting is a common procedure in plastic surgery but is associated with unpredictable graft retention. Adipose tissue (AT) "product" quality is affected by the methods used for harvest, processing and transfer, which vary widely amongst surgeons. Currently, there is no method available to accurately assess the quality of AT. In this study, we present a novel method for the assessment of AT product quality through direct measurements of oxygen consumption rate (OCR). OCR has exhibited potential in predicting outcomes following pancreatic islet transplant. Our study aim was to reapportion existing technology for use with AT preparations and to confirm that these measurements are feasible. OCR was successfully measured for en bloc and post-processed AT using a stirred microchamber system. OCR was then normalized to DNA content (OCR/DNA), which represents the AT product quality. Mean (±SE) OCR/DNA values for fresh en bloc and post-processed AT were 149.8 (±9.1) and 61.1 (±6.1) nmol/min/mg DNA, respectively. These preliminary data suggest that: (1) OCR and OCR/DNA measurements of AT harvested using a conventional protocol are feasible; and (2) standard AT processing results in a decrease in overall AT product quality. OCR measurements of AT using existing technology can be performed and enable accurate, real-time, quantitative assessment of AT product quality prior to transfer. The availability and further validation of this type of assay could enable optimization of fat grafting protocols by providing a tool for the more detailed study of procedural variables that affect AT product quality.
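The OCR/DNA normalization itself is simple arithmetic; a minimal sketch with illustrative (not measured) inputs, plus the fractional quality loss implied by the abstract's mean values:

```python
def ocr_per_dna(ocr_nmol_min, dna_mg):
    """Normalize oxygen consumption rate to DNA content (nmol/min/mg DNA)."""
    if dna_mg <= 0:
        raise ValueError("DNA content must be positive")
    return ocr_nmol_min / dna_mg

# Illustrative readings for one AT sample (hypothetical numbers).
quality = ocr_per_dna(ocr_nmol_min=0.30, dna_mg=0.002)   # 150.0 nmol/min/mg DNA

# Fractional quality loss from processing, using the abstract's mean values.
loss = 1.0 - ocr_per_dna(61.1, 1.0) / ocr_per_dna(149.8, 1.0)
```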
Advances in magnetic tweezers for single molecule and cell biophysics.
Kilinc, Devrim; Lee, Gil U
2014-01-01
Magnetic tweezers (MTW) enable highly accurate forces to be transduced to molecules to study mechanotransduction at the molecular or cellular level. We review recent MTW studies in single molecule and cell biophysics that demonstrate the flexibility of this technique. We also discuss technical advances in the method on several fronts, i.e., from novel approaches for the measurement of torque to multiplexed biophysical assays. Finally, we describe multi-component nanorods with enhanced optical and magnetic properties and discuss their potential as future MTW probes.
Boolean logic analysis for flow regime recognition of gas-liquid horizontal flow
NASA Astrophysics Data System (ADS)
Ramskill, Nicholas P.; Wang, Mi
2011-10-01
In order to develop a flowmeter for the accurate measurement of multiphase flows, it is of the utmost importance to correctly identify the prevailing flow regime so that the optimal metering method can be selected. In this study, the horizontal flow of air and water in a pipeline was studied under a multitude of conditions using electrical resistance tomography, but the flow regimes presented in this paper are limited to plug and bubble air-water flows. This study proposes a novel method for recognition of the prevalent flow regime using only a fraction of the data, thus rendering the analysis more efficient. By considering the average conductivity of five zones along the central axis of the tomogram, key features can be identified, enabling recognition of the prevalent flow regime. Boolean logic and frequency spectrum analysis have been applied for flow regime recognition. Visualization of the flow using the reconstructed images provides a qualitative comparison between different flow regimes. Application of the Boolean logic scheme enables a quantitative comparison of the flow patterns, thus reducing the subjectivity in the identification of the prevalent flow regime.
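A Boolean-logic regime classifier of the kind described, operating on the mean conductivities of five axial tomogram zones, might look like the following sketch; the zone thresholds here are illustrative assumptions, not the paper's calibrated rules.

```python
def classify_regime(zone_conductivity, water_ref=1.0):
    """Classify horizontal gas-liquid flow from the mean conductivities of five
    axial tomogram zones (z[0] top ... z[4] bottom). Thresholds are illustrative."""
    z = [c / water_ref for c in zone_conductivity]  # normalize to a full-water pipe
    top_gas = z[0] < 0.5            # large gas fraction near the top wall
    bottom_liquid = z[4] > 0.9      # continuous liquid layer at the bottom
    if top_gas and bottom_liquid:
        return "plug"               # elongated gas pockets riding along the top
    if all(v > 0.7 for v in z):
        return "bubble"             # dispersed bubbles: conductivity high everywhere
    return "unknown"

# Example zone profile: gas-rich top, liquid bottom -> plug flow.
regime = classify_regime([0.3, 0.6, 0.8, 0.95, 0.98])
```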
Ultraaccurate genome sequencing and haplotyping of single human cells.
Chu, Wai Keung; Edge, Peter; Lee, Ho Suk; Bansal, Vikas; Bafna, Vineet; Huang, Xiaohua; Zhang, Kun
2017-11-21
Accurate detection of variants and long-range haplotypes in genomes of single human cells remains very challenging. Common approaches require extensive in vitro amplification of genomes of individual cells using DNA polymerases and high-throughput short-read DNA sequencing. These approaches have two notable drawbacks. First, polymerase replication errors could generate tens of thousands of false-positive calls per genome. Second, relatively short sequence reads contain little to no haplotype information. Here we report a method, dubbed SISSOR (single-stranded sequencing using microfluidic reactors), for accurate single-cell genome sequencing and haplotyping. A microfluidic processor is used to separate the Watson and Crick strands of the double-stranded chromosomal DNA in a single cell and to randomly partition megabase-size DNA strands into multiple nanoliter compartments for amplification and construction of barcoded libraries for sequencing. The separation and partitioning of large single-stranded DNA fragments of the homologous chromosome pairs allows for the independent sequencing of each of the complementary and homologous strands. This enables the assembly of long haplotypes and the reduction of sequence errors by using the redundant sequence information and haplotype-based error removal. We demonstrated the ability to sequence single-cell genomes with error rates as low as 10⁻⁸ and to generate DNA fragments averaging 500 kb in length that can be assembled into haplotype contigs with N50 greater than 7 Mb. The performance could be further improved with more uniform amplification and more accurate sequence alignment. The ability to obtain accurate genome sequences and haplotype information from single cells will enable applications of genome sequencing for diverse clinical needs. Copyright © 2017 the Author(s). Published by PNAS.
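The error-removal idea rests on the fact that a true base call must be complementary between the independently sequenced Watson and Crick strand reads. A toy sketch of that consensus step (position-aligned strings for simplicity; the `strand_consensus` helper is hypothetical, not from the paper):

```python
def strand_consensus(watson, crick):
    """Keep a base call only where the two complementary strand reads agree;
    discordant positions (likely amplification errors) become 'N'."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    out = []
    for w, c in zip(watson, crick):
        out.append(w if comp.get(w) == c else "N")
    return "".join(out)

# One simulated polymerase error at the last position of the Watson read.
calls = strand_consensus("ACGTA", "TGCAA")
```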
Skeletal assessment with finite element analysis: relevance, pitfalls and interpretation.
Campbell, Graeme Michael; Glüer, Claus-C
2017-07-01
Finite element models simulate the mechanical response of bone under load, enabling noninvasive assessment of strength. Models generated from quantitative computed tomography (QCT) incorporate the geometry and spatial distribution of bone mineral density (BMD) to simulate physiological and traumatic loads as well as orthopaedic implant behaviour. The present review discusses the current strengths and weaknesses of finite element models for application to skeletal biomechanics. In cadaver studies, finite element models provide better estimations of strength compared to BMD. Data from clinical studies are encouraging; however, the superiority of finite element models over BMD measures for fracture prediction has not been shown conclusively, and may be sex and site dependent. Therapeutic effects on bone strength are larger than for BMD; however, model validation has only been performed on untreated bone. High-resolution modalities and novel image processing methods may enhance the structural representation and predictive ability. Despite extensive use of finite element models to study orthopaedic implant stability, accurate simulation of the bone-implant interface and fracture progression remains a significant challenge. Skeletal finite element models provide noninvasive assessments of strength and implant stability. Improved structural representation and implant surface interaction may enable more accurate models of fragility in the future.
Shape Sensing Techniques for Continuum Robots in Minimally Invasive Surgery: A Survey.
Shi, Chaoyang; Luo, Xiongbiao; Qi, Peng; Li, Tianliang; Song, Shuang; Najdovski, Zoran; Fukuda, Toshio; Ren, Hongliang
2017-08-01
Continuum robots provide inherent structural compliance with high dexterity to access surgical target sites along tortuous anatomical paths under constrained environments, and they enable complex and delicate operations to be performed through small incisions in minimally invasive surgery. These advantages enable their broad application with minimal trauma and make challenging clinical procedures possible with miniaturized instrumentation and high curvilinear access capabilities. However, their inherently deformable designs make it difficult to realize 3-D intraoperative real-time shape sensing to accurately model their shape. Solutions to this limitation would in turn advance the closely associated techniques of closed-loop control, path planning, human-robot interaction, and surgical manipulation safety in minimally invasive surgery. Although extensive model-based research that relies on kinematics and mechanics has been performed, accurate shape sensing of continuum robots remains challenging, particularly in cases of unknown and dynamic payloads. This survey investigates recent advances in alternative emerging techniques for 3-D shape sensing in this field and focuses on the following categories: fiber-optic-sensor-based, electromagnetic-tracking-based, and intraoperative imaging modality-based shape-reconstruction methods. The limitations of existing technologies and the prospects of new technologies are also discussed.
CAD-RADS - a new clinical decision support tool for coronary computed tomography angiography.
Foldyna, Borek; Szilveszter, Bálint; Scholtz, Jan-Erik; Banerji, Dahlia; Maurovich-Horvat, Pál; Hoffmann, Udo
2018-04-01
Coronary computed tomography angiography (CTA) has been established as an accurate method to non-invasively assess coronary artery disease (CAD). The proposed 'Coronary Artery Disease Reporting and Data System' (CAD-RADS) may enable standardised reporting of the broad spectrum of coronary CTA findings related to the presence, extent and composition of coronary atherosclerosis. The CAD-RADS classification is a comprehensive tool for summarising findings on a per-patient-basis dependent on the highest-grade coronary artery lesion, ranging from CAD-RADS 0 (absence of CAD) to CAD-RADS 5 (total occlusion of a coronary artery). In addition, it provides suggestions for clinical management for each classification, including further testing and therapeutic options. Despite some limitations, CAD-RADS may facilitate improved communication between imagers and patient caregivers. As such, CAD-RADS may enable a more efficient use of coronary CTA leading to more accurate utilisation of invasive coronary angiograms. Furthermore, widespread use of CAD-RADS may facilitate registry-based research of diagnostic and prognostic aspects of CTA. • CAD-RADS is a tool for standardising coronary CTA reports. • CAD-RADS includes clinical treatment recommendations based on CTA findings. • CAD-RADS has the potential to reduce variability of CTA reports.
Eliminating time dispersion from seismic wave modeling
NASA Astrophysics Data System (ADS)
Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik
2018-04-01
We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
Kobayashi, Shinya; Ishikawa, Tatsuya; Mutoh, Tatsushi; Hikichi, Kentaro; Suzuki, Akifumi
2012-01-01
Background: Surgical placement of a ventriculoperitoneal shunt (VPS) is the main strategy to manage hydrocephalus. However, the failure rate associated with placement of ventricular catheters remains high. Methods: A hybrid operating room, equipped with a flat-panel detector digital subtraction angiography system containing C-arm cone-beam computed tomography (CB-CT) imaging, has recently been developed and utilized to assist neurosurgical procedures. We have developed a novel technique using intraoperative fluoroscopy and a C-arm CB-CT system to facilitate accurate placement of a VPS. Results: Using this novel technique, 39 consecutive ventricular catheters were placed accurately, and no ventricular catheter failures were experienced during the follow-up period. Only two patients experienced obstruction of the VPS, both of which occurred in the extracranial portion of the shunt system. Conclusion: Surgical placement of a VPS assisted by flat panel detector CT-guided real-time fluoroscopy enabled accurate placement of ventricular catheters and was associated with a decreased need for shunt revision. PMID:23226605
A comparison of quantitative methods for clinical imaging with hyperpolarized (13)C-pyruvate.
Daniels, Charlie J; McLean, Mary A; Schulte, Rolf F; Robb, Fraser J; Gill, Andrew B; McGlashan, Nicholas; Graves, Martin J; Schwaiger, Markus; Lomas, David J; Brindle, Kevin M; Gallagher, Ferdia A
2016-04-01
Dissolution dynamic nuclear polarization (DNP) enables the metabolism of hyperpolarized (13)C-labelled molecules, such as the conversion of [1-(13)C]pyruvate to [1-(13)C]lactate, to be dynamically and non-invasively imaged in tissue. Imaging of this exchange reaction in animal models has been shown to detect early treatment response and correlate with tumour grade. The first human DNP study has recently been completed, and, for widespread clinical translation, simple and reliable methods are necessary to accurately probe the reaction in patients. However, there is currently no consensus on the most appropriate method to quantify this exchange reaction. In this study, an in vitro system was used to compare several kinetic models, as well as simple model-free methods. Experiments were performed using a clinical hyperpolarizer, a human 3 T MR system, and spectroscopic imaging sequences. The quantitative methods were compared in vivo by using subcutaneous breast tumours in rats to examine the effect of pyruvate inflow. The two-way kinetic model was the most accurate method for characterizing the exchange reaction in vitro, and the incorporation of a Heaviside step inflow profile was best able to describe the in vivo data. The lactate time-to-peak and the lactate-to-pyruvate area under the curve ratio were simple model-free approaches that accurately represented the full reaction, with the time-to-peak method performing indistinguishably from the best kinetic model. Finally, extracting data from a single pixel was a robust and reliable surrogate of the whole region of interest. This work has identified appropriate quantitative methods for future work in the analysis of human hyperpolarized (13)C data. © 2016 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
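The two-way exchange kinetics and the model-free metrics compared in the study (lactate time-to-peak and the lactate-to-pyruvate AUC ratio) can be sketched with a simple forward-Euler simulation; all rate constants and the boxcar (Heaviside-style) inflow profile below are illustrative assumptions, not fitted values.

```python
import numpy as np

def simulate_exchange(kpl=0.05, klp=0.01, r1p=1 / 30, r1l=1 / 25,
                      t_end=60.0, dt=0.1, inflow_end=8.0, rate_in=1.0):
    """Forward-Euler simulation of two-way pyruvate<->lactate exchange with
    T1 relaxation and a boxcar pyruvate inflow. Constants are illustrative."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    P = np.zeros(n)          # hyperpolarized pyruvate signal
    L = np.zeros(n)          # hyperpolarized lactate signal
    for i in range(1, n):
        inflow = rate_in if t[i] < inflow_end else 0.0
        dP = inflow - (kpl + r1p) * P[i - 1] + klp * L[i - 1]
        dL = kpl * P[i - 1] - (klp + r1l) * L[i - 1]
        P[i] = P[i - 1] + dt * dP
        L[i] = L[i - 1] + dt * dL
    ttp = t[np.argmax(L)]                 # lactate time-to-peak
    auc_ratio = L.sum() / P.sum()         # lactate/pyruvate AUC ratio (dt cancels)
    return t, P, L, ttp, auc_ratio

t, P, L, ttp, auc_ratio = simulate_exchange()
```

With these toy constants the lactate peak falls after the inflow window ends, which is the qualitative behaviour the time-to-peak metric exploits.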
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labuda, Aleksander; Proksch, Roger
An ongoing challenge in atomic force microscope (AFM) experiments is the quantitative measurement of cantilever motion. The vast majority of AFMs use the optical beam deflection (OBD) method to infer the deflection of the cantilever. The OBD method is easy to implement, has impressive noise performance, and tends to be mechanically robust. However, it represents an indirect measurement of the cantilever displacement, since it is fundamentally an angular rather than a displacement measurement. Here, we demonstrate a metrological AFM that combines an OBD sensor with a laser Doppler vibrometer (LDV) to enable accurate measurements of the cantilever velocity and displacement. The OBD/LDV AFM allows a host of quantitative measurements to be performed, including in-situ measurements of cantilever oscillation modes in piezoresponse force microscopy. As an example application, we demonstrate how this instrument can be used for accurate quantification of piezoelectric sensitivity, a longstanding goal in the electromechanical community.
Development of a PCR Diagnostic System for Iris yellow spot tospovirus in Quarantine
Shin, Yong-Gil; Rho, Jae-Young
2014-01-01
Iris yellow spot virus (IYSV) is a plant pathogenic virus which has been reported to continuously occur in onion bulbs, allium field crops, seed crops, lisianthus, and irises. In South Korea, IYSV is a "controlled" virus that has not been reported, and inspection is performed when crops of the genus Iris are imported into South Korea. In this study, reverse-transcription polymerase chain reaction (RT-PCR) and nested PCR inspection methods that can detect IYSV in imported crops of the genus Iris at quarantine sites were developed. In addition, a modified positive plasmid, which can be used as a positive control during inspection, was developed. This modified plasmid can facilitate more accurate inspection by enabling the detection of laboratory contamination in an inspection system. The inspection methods developed in this study are expected to contribute, through the prompt and accurate inspection of IYSV at quarantine sites, to plant quarantine in South Korea. PMID:25506310
Qin, Wan
2017-01-01
Accurate measurement of edema volume is essential for the investigation of tissue response and recovery following a traumatic injury. The measurements must be noninvasive and repetitive over time so as to monitor tissue response throughout the healing process. Such techniques are particularly necessary for the evaluation of therapeutics that are currently in development to suppress or prevent edema formation. In this study, we propose to use optical coherence tomography (OCT) technique to image and quantify edema in a mouse ear model where the injury is induced by a superficial-thickness burn. Extraction of edema volume is achieved by an attenuation compensation algorithm performed on the three-dimensional OCT images, followed by two segmentation procedures. In addition to edema volume, the segmentation method also enables accurate thickness mapping of edematous tissue, which is an important characteristic of the external symptoms of edema. To the best of our knowledge, this is the first method for noninvasively measuring absolute edema volume. PMID:27282161
Structural Loads Analysis for Wave Energy Converters
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi
2017-06-03
This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analysis and fluid-structure-interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process.
One-step synthesis of multi-emission carbon nanodots for ratiometric temperature sensing
NASA Astrophysics Data System (ADS)
Nguyen, Vanthan; Yan, Lihe; Xu, Huanhuan; Yue, Mengmeng
2018-01-01
Measuring temperature with greater precision at localized small length scales or in a nonperturbative manner is a necessity in widespread applications, such as integrated photonic devices, micro/nano electronics, biology, and medical diagnostics. In this context, the use of nanoscale fluorescent temperature probes is regarded as the most promising method for temperature sensing because they are noninvasive, accurate, and enable remote micro/nanoscale imaging. Here, we propose a novel ratiometric fluorescent sensor for nanothermometry using carbon nanodots (C-dots). The C-dots were synthesized by a one-step method using femtosecond laser ablation and exhibit a unique multi-emission property due to emissions from the abundant functional groups on their surface. The as-prepared C-dots demonstrate excellent ratiometric temperature sensing under single-wavelength excitation, achieving high temperature sensitivity with a 1.48% change per °C in the ratiometric response over a wide temperature range (5-85 °C) in aqueous buffer. The ratiometric sensor shows excellent reversibility and stability, holding great promise for the accurate measurement of temperature in many practical applications.
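Recovering temperature from the emission ratio amounts to inverting a linear calibration with the reported 1.48%/°C sensitivity; in the sketch below the reference ratio and reference temperature are assumed calibration points, not values from the paper.

```python
def temperature_from_ratio(r, r_ref=1.0, t_ref=25.0, sensitivity=0.0148):
    """Invert a linear ratiometric calibration R(T) = R_ref * (1 + s*(T - T_ref)).
    s = 1.48%/degC from the abstract; R_ref and T_ref are assumed calibration points."""
    return t_ref + (r / r_ref - 1.0) / sensitivity

# A measured ratio 14.8% above the reference implies +10 degC above T_ref.
temp = temperature_from_ratio(1.148)
```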
An overview of distributed microgrid state estimation and control for smart grids.
Rana, Md Masud; Li, Li
2015-02-12
Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated into the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. This article then proposes a discrete-time linear quadratic regulator to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid therefore forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF based microgrid SE and control algorithm provides accurate SE and control compared with existing methods.
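A minimal sketch of a linear Kalman filter state estimator of the kind proposed, on a toy two-state system standing in for microgrid state deviations; the system matrices and noise levels are invented for illustration and carry no physical calibration.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter state estimator."""
    x_pred = A @ x                           # predict state
    P_pred = A @ P @ A.T + Q                 # predict covariance
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)    # update with measurement z
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy two-state system (e.g., two bus state deviations), directly measured.
rng = np.random.default_rng(0)
A = np.array([[0.95, 0.02], [0.01, 0.97]])
H = np.eye(2)
Q = 1e-4 * np.eye(2)
R = 1e-2 * np.eye(2)
x_true = np.array([1.0, -0.5])
x_est = np.zeros(2)
P = np.eye(2)
for _ in range(100):
    x_true = A @ x_true
    z = H @ x_true + 0.1 * rng.standard_normal(2)   # noisy measurement
    x_est, P = kalman_step(x_est, P, z, A, H, Q, R)
err = np.linalg.norm(x_est - x_true)
```

After a hundred steps the filtered estimate tracks the true state well below the raw measurement noise level, which is the property the paper's SE stage relies on.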
Read clouds uncover variation in complex regions of the human genome.
Bishara, Alex; Liu, Yuling; Weng, Ziming; Kashef-Haghighi, Dorna; Newburger, Daniel E; West, Robert; Sidow, Arend; Batzoglou, Serafim
2015-10-01
Although an increasing amount of human genetic variation is being identified and recorded, determining variants within repeated sequences of the human genome remains a challenge. Most population and genome-wide association studies have therefore been unable to consider variation in these regions. Core to the problem is the lack of a sequencing technology that produces reads with sufficient length and accuracy to enable unique mapping. Here, we present a novel methodology of using read clouds, obtained by accurate short-read sequencing of DNA derived from long fragment libraries, to confidently align short reads within repeat regions and enable accurate variant discovery. Our novel algorithm, Random Field Aligner (RFA), captures the relationships among the short reads governed by the long read process via a Markov Random Field. We utilized a modified version of the Illumina TruSeq synthetic long-read protocol, which yielded shallow-sequenced read clouds. We test RFA through extensive simulations and apply it to discover variants on the NA12878 human sample, for which shallow TruSeq read cloud sequencing data are available, and on an invasive breast carcinoma genome that we sequenced using the same method. We demonstrate that RFA facilitates accurate recovery of variation in 155 Mb of the human genome, including 94% of 67 Mb of segmental duplication sequence and 96% of 11 Mb of transcribed sequence, that are currently hidden from short-read technologies. © 2015 Bishara et al.; Published by Cold Spring Harbor Laboratory Press.
In vivo imaging of cancer cell size and cellularity using temporal diffusion spectroscopy.
Jiang, Xiaoyu; Li, Hua; Xie, Jingping; McKinley, Eliot T; Zhao, Ping; Gore, John C; Xu, Junzhong
2017-07-01
A temporal diffusion MRI spectroscopy based approach has been developed to quantify cancer cell size and density in vivo. The novel imaging microstructural parameters using limited spectrally edited diffusion (IMPULSED) method selects a specific limited diffusion spectral window for accurate quantification of cell sizes ranging from 10 to 20 μm in common solid tumors. In practice, this is achieved by a combination of a single long-diffusion-time pulsed gradient spin echo (PGSE) acquisition and three low-frequency oscillating gradient spin echo (OGSE) acquisitions. To validate our approach, hematoxylin and eosin staining and immunostaining of cell membranes, in concert with whole slide imaging, were used to visualize nuclei and cell boundaries and hence enabled accurate estimates of cell size and cellularity. Based on a two-compartment model (incorporating intra- and extracellular spaces), accurate estimates of cell sizes were obtained in vivo for three types of human colon cancers. The IMPULSED-derived apparent cellularities showed a stronger correlation (r = 0.81; P < 0.0001) with histology-derived cellularities than conventional ADCs (r = -0.69; P < 0.03). The IMPULSED approach samples a specific region of the temporal diffusion spectrum with enhanced sensitivity to length scales of 10-20 μm, and enables measurements of cell sizes and cellularities in solid tumors in vivo. Magn Reson Med 78:156-164, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Ojanperä, Ilkka; Kolmonen, Marjo; Pelander, Anna
2012-05-01
Clinical and forensic toxicology and doping control deal with hundreds or thousands of drugs that may cause poisoning or are abused, are illicit, or are prohibited in sports. Rapid and reliable screening for all these compounds of different chemical and pharmaceutical nature, preferably in a single analytical method, is a substantial effort for analytical toxicologists. Combined chromatography-mass spectrometry techniques with standardised reference libraries have been most commonly used for this purpose. In the last ten years, the focus has shifted from gas chromatography-mass spectrometry to liquid chromatography-mass spectrometry, because of progress in instrument technology and partly because of the polarity and low volatility of many new relevant substances. High-resolution mass spectrometry (HRMS), which enables accurate mass measurement at high resolving power, has recently evolved to a stage at which it is rapidly displacing unit-resolution, quadrupole-dominated instrumentation. The main HRMS techniques today are time-of-flight mass spectrometry and Orbitrap Fourier-transform mass spectrometry. Both techniques enable a range of different drug-screening strategies that essentially rely on measuring a compound's or a fragment's mass with sufficiently high accuracy that its elemental composition can be determined directly. The accurate mass and isotopic pattern act as a filter for confirming the identity of a compound or even for identification of an unknown. High mass resolution is essential for improving confidence in accurate mass results in the analysis of complex biological samples. This review discusses recent applications of HRMS in analytical toxicology.
Processing methods for photoacoustic Doppler flowmetry with a clinical ultrasound scanner
NASA Astrophysics Data System (ADS)
Bücking, Thore M.; van den Berg, Pim J.; Balabani, Stavroula; Steenbergen, Wiendelt; Beard, Paul C.; Brunker, Joanna
2018-02-01
Photoacoustic flowmetry (PAF) based on time-domain cross correlation of photoacoustic signals is a promising technique for deep tissue measurement of blood flow velocity. Signal processing has previously been developed for single element transducers. Here, the processing methods for acoustic resolution PAF using a clinical ultrasound transducer array are developed and validated using a 64-element transducer array with a -6 dB detection band of 11 to 17 MHz. Measurements were performed on a flow phantom consisting of a tube (580 μm inner diameter) perfused with human blood flowing at physiological speeds ranging from 3 to 25 mm / s. The processing pipeline comprised: image reconstruction, filtering, displacement detection, and masking. High-pass filtering and background subtraction were found to be key preprocessing steps to enable accurate flow velocity estimates, which were calculated using a cross-correlation based method. In addition, the regions of interest in the calculated velocity maps were defined using a masking approach based on the amplitude of the cross-correlation functions. These developments enabled blood flow measurements using a transducer array, bringing PAF one step closer to clinical applicability.
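The core displacement-detection step, cross-correlating successive signals and taking the correlation peak, can be sketched as follows; the synthetic signals and the scan parameters in the velocity comment are illustrative, not the phantom's values.

```python
import numpy as np

def detect_shift(sig_a, sig_b):
    """Displacement (in samples) of sig_b relative to sig_a via the peak of
    their cross-correlation, the core of time-domain PA flowmetry."""
    corr = np.correlate(sig_b - sig_b.mean(), sig_a - sig_a.mean(), mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag, corr

# Synthetic signal pair: the same absorber pattern moved by 5 samples
# between two laser pulses (illustrative geometry only).
rng = np.random.default_rng(0)
pattern = rng.standard_normal(200)
a = pattern
b = np.roll(pattern, 5)          # moved 5 samples "downstream"
lag, corr = detect_shift(a, b)
# velocity = lag * pixel_size / pulse_interval, e.g. 5 * 50e-6 m / 10e-3 s
```

Mean subtraction here plays the role of the high-pass filtering and background subtraction that the paper identifies as key preprocessing steps.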
Eboigbodin, Kevin; Filén, Sanna; Ojalehto, Tuomas; Brummer, Mirko; Elf, Sonja; Pousi, Kirsi; Hoser, Mark
2016-06-01
Rapid and accurate diagnosis of influenza viruses plays an important role in infection control, as well as in preventing the misuse of antibiotics. Isothermal nucleic acid amplification methods offer significant advantages over the polymerase chain reaction (PCR), since they are more rapid and do not require the sophisticated instruments needed for thermal cycling. We previously described a novel isothermal nucleic acid amplification method, 'Strand Invasion Based Amplification' (SIBA®), with high analytical sensitivity and specificity, for the detection of DNA. In this study, we describe the development of a variant of the SIBA method, namely, reverse transcription SIBA (RT-SIBA), for the rapid detection of viral RNA targets. The RT-SIBA method includes a reverse transcriptase enzyme that allows one-step reverse transcription of RNA to complementary DNA (cDNA) and simultaneous amplification and detection of the cDNA by SIBA under isothermal reaction conditions. The RT-SIBA method was found to be more sensitive than PCR for the detection of influenza A and B and could detect 100 copies of influenza RNA within 15 min. The development of RT-SIBA will enable rapid and accurate diagnosis of viral RNA targets within point-of-care or central laboratory settings.
Häyrynen, Teppo; Osterkryger, Andreas Dyhl; de Lasson, Jakob Rosenkrantz; Gregersen, Niels
2017-09-01
Recently, an open geometry Fourier modal method based on a new combination of an open boundary condition and a non-uniform k-space discretization was introduced for rotationally symmetric structures, providing a more efficient approach for modeling nanowires and micropillar cavities [J. Opt. Soc. Am. A33, 1298 (2016)JOAOD61084-752910.1364/JOSAA.33.001298]. Here, we generalize the approach to three-dimensional (3D) Cartesian coordinates, allowing for the modeling of rectangular geometries in open space. The open boundary condition is a consequence of having an infinite computational domain described using basis functions that span the whole space. The strength of the method lies in discretizing the Fourier integrals using a non-uniform circular "dartboard" sampling of the Fourier k space. We show that our sampling technique leads to a more accurate description of the continuum of the radiation modes that leak out from the structure. We also compare our approach to conventional discretization with direct and inverse factorization rules commonly used in established Fourier modal methods. We apply our method to a variety of optical waveguide structures and demonstrate that the method leads to a significantly improved convergence, enabling more accurate and efficient modeling of open 3D nanophotonic structures.
Yang, Zong-Lin; Li, Hui; Wang, Bing; Liu, Shu-Ying
2016-02-15
Neurotransmitters (NTs) and their metabolites are known to play an essential role in maintaining various physiological functions in the nervous system. However, detecting NTs together with their metabolites in biological samples is difficult. A new method for the detection of NTs and their metabolites by high performance liquid chromatography coupled with Q Exactive hybrid quadrupole-Orbitrap high-resolution accurate mass spectrometry (HPLC-HRMS) was established in this paper. The method represents a significant extension of Q Exactive MS to quantitative analysis and enabled rapid quantification of ten compounds within 18 min. Good linearity was obtained, with correlation coefficients above 0.99. The limit of detection (LOD) and limit of quantitation (LOQ) ranges were 0.0008-0.05 nmol/mL and 0.002-25.0 nmol/mL, respectively. Precision (relative standard deviation, RSD) was 0.36-12.70%, and recoveries ranged between 81.83% and 118.04%. Concentrations of these compounds in mouse hypothalamus were determined by Q Exactive LC-MS with this method. Copyright © 2016 Elsevier B.V. All rights reserved.
Sun, Bing; Zheng, Yun-Ling
2018-01-01
Currently there is no sensitive, precise, and reproducible method to quantitate alternative splicing of mRNA transcripts. Droplet digital™ PCR (ddPCR™) analysis allows for accurate digital counting for quantification of gene expression. Human telomerase reverse transcriptase (hTERT) is one of the essential components required for telomerase activity and for the maintenance of telomeres. Several alternatively spliced forms of hTERT mRNA in human primary and tumor cells have been reported in the literature. Using one pair of primers and two probes for hTERT, four alternatively spliced forms of hTERT (α-/β+, α+/β- single deletions, α-/β- double deletion, and nondeletion α+/β+) were accurately quantified through a novel analysis method via data collected from a single ddPCR reaction. In this chapter, we describe this ddPCR method that enables direct quantitative comparison of four alternatively spliced forms of the hTERT messenger RNA without the need for internal standards or multiple pairs of primers specific for each variant, eliminating the technical variation due to differential PCR amplification efficiency for different amplicons and the challenges of quantification using standard curves. This simple and straightforward method should have general utility for quantifying alternatively spliced gene transcripts.
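Digital counting in ddPCR rests on Poisson statistics: because a droplet can hold more than one template molecule, the mean copies per droplet is recovered from the fraction of positive droplets rather than read off directly. A minimal sketch of that correction (the droplet counts and nominal droplet volume below are illustrative, not from the chapter):

```python
import math

def copies_per_droplet(n_positive, n_total):
    """Poisson-corrected mean template copies per droplet,
    lambda = -ln(1 - p) where p is the positive-droplet fraction."""
    return -math.log(1.0 - n_positive / n_total)

def copies_per_microliter(n_positive, n_total, droplet_nl=0.85):
    """Concentration, assuming a nominal droplet volume in nanoliters."""
    return copies_per_droplet(n_positive, n_total) / (droplet_nl * 1e-3)

# e.g. 5000 positive droplets out of 20000 accepted droplets
lam = copies_per_droplet(5000, 20000)   # mean copies per droplet
```

Applying the same correction to each probe channel gives the absolute abundance of each splice form, which is why no standard curve is needed.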
TCRmodel: high resolution modeling of T cell receptors from sequence.
Gowthaman, Ragul; Pierce, Brian G
2018-05-22
T cell receptors (TCRs), along with antibodies, are responsible for specific antigen recognition in the adaptive immune response, and millions of unique TCRs are estimated to be present in each individual. Understanding the structural basis of TCR targeting has implications for vaccine design, autoimmunity, and T cell therapies for cancer. Given advances in deep sequencing that yield immune repertoire-level TCR sequence data, fast and accurate modeling methods are needed to elucidate the shared and unique 3D structural features of these molecules that underlie their antigen targeting and cross-reactivity. We developed a new algorithm in the program Rosetta to model TCRs from sequence, and implemented this functionality in a web server, TCRmodel. The server provides an easy-to-use interface and quickly generates models that users can investigate in the browser and download. Benchmarking of this method using a set of nonredundant, recently released TCR crystal structures shows that the models are accurate and compare favorably to those from another available modeling method. This server enables the community to obtain insights into TCRs of interest, and can be combined with methods to model and design TCR recognition of antigens. The TCRmodel server is available at: http://tcrmodel.ibbr.umd.edu/.
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining the thawing time of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed, ranging from relatively simple analytical equations based on numerous assumptions to empirical approaches that sometimes require complex calculations. In this paper, analytical, empirical and graphical models are presented and critically reviewed, and the conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Numerical methods are not always practical, however, as the calculations are time-consuming and the specialized software and equipment can be costly. For these reasons, analytical-empirical models are more useful for engineering purposes. It is demonstrated that there is no simple, accurate and generally applicable analytical method for thawing time prediction; consequently, simplified methods are needed for estimating the thawing time of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or the development of new ones, that will enable accurate determination of thawing time over a wide range of practical heat transfer conditions during processing. PMID:27904387
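A concrete example of the "relatively simple analytical equations" in this family is a Plank-type estimate for a slab. Plank's model is strictly a quasi-steady freezing-time equation, and applying it to thawing requires modified coefficients (as in Cleland-Earle-type corrections), so the sketch below is illustrative only and all parameter values are hypothetical:

```python
def plank_time(rho, latent, t_medium, t_melt, thickness, h, k, P=0.5, R=0.125):
    """Plank-type phase-change time for a slab of full thickness a:
    t = (rho * L / dT) * (P * a / h + R * a**2 / k).
    rho: density, latent: latent heat, h: surface heat transfer coefficient,
    k: thermal conductivity of the thawed layer. P, R are slab shape factors."""
    dT = abs(t_medium - t_melt)
    a = thickness
    return rho * latent / dT * (P * a / h + R * a * a / k)

# hypothetical product: 2 cm vs 4 cm slab thawed in air at 20 C
t_thin = plank_time(1000.0, 334000.0, 20.0, 0.0, 0.02, 20.0, 0.5)
t_thick = plank_time(1000.0, 334000.0, 20.0, 0.0, 0.04, 20.0, 0.5)
```

Note the characteristic structure: a surface-resistance term linear in thickness and an internal-conduction term quadratic in thickness, so doubling the thickness more than doubles the estimated time.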
Characteristics of the transmission loss apparatus at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Grosveld, F. W.
1983-01-01
A description of the Transmission Loss Apparatus at NASA Langley Research Center, which is specifically designed to accommodate general aviation type aircraft structures, is presented. The measurement methodology, referred to as the Plate Reference Method, is discussed and compared with the classical method as described in the Standard of the American Society for Testing and Materials. This measurement procedure enables reliable and accurate noise transmission loss measurements down to the 50 Hz one-third octave band. The transmission loss characteristics of add-on acoustical treatments, applied to the basic structure, can be established by inclusion of appropriate absorption corrections for the treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wampler, William R.; Myers, Samuel M.; Modine, Normand A.
2017-09-01
The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, whose computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform-bulk values for such densities.
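The scattering treatment of an arbitrary longitudinal profile can be illustrated with the textbook transfer-matrix version for a piecewise-constant potential. This is a generic sketch in units with hbar^2/(2m) = 1, not the authors' code; for a single rectangular barrier it reproduces the analytic transmission coefficient:

```python
import numpy as np

def transmission(E, V, x_edges):
    """Transmission probability through a piecewise-constant potential.
    V[j] is the potential in region j; x_edges are the interface positions."""
    q = np.sqrt(E - np.asarray(V, dtype=complex))   # wavevector per region

    def D(qj, x):  # enforces continuity of psi and psi' at position x
        e = np.exp(1j * qj * x)
        return np.array([[e, 1.0 / e], [1j * qj * e, -1j * qj / e]])

    M = np.eye(2, dtype=complex)
    for j, x in enumerate(x_edges):
        M = np.linalg.inv(D(q[j + 1], x)) @ D(q[j], x) @ M
    # unit wave incident from the left, no left-moving wave on the right
    A_out = M[0, 0] - M[0, 1] * M[1, 0] / M[1, 1]
    return (q[-1].real / q[0].real) * abs(A_out) ** 2

# rectangular barrier of height 2 and width 1, incident energy 1
T = transmission(E=1.0, V=[0.0, 2.0, 0.0], x_edges=[0.0, 1.0])
```

For this case the closed-form result is T = 1/cosh^2(kappa * a) with kappa = sqrt(V0 - E) = 1, which the numerical matrix product matches; general profiles are handled by adding more regions.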
Arc Fault Detection & Localization by Electromagnetic-Acoustic Remote Sensing
NASA Astrophysics Data System (ADS)
Vasile, C.; Ioana, C.
2017-05-01
Electrical arc faults occurring in photovoltaic systems are dangerous because of their economic impact on production and distribution. In this paper we propose a complete system, with a focus on methodology, that enables detection and localization of arc faults by means of an electromagnetic-acoustic sensing system. By exploiting the multiple emissions of the arc fault, in conjunction with a real-time signal processing detection method, we ensure accurate detection and localization. The full paper presents the complete system, the methods employed, and their results and performance in greater detail, alongside planned further work.
On estimation of secret message length in LSB steganography in spatial domain
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2004-06-01
In this paper, we present a new method for estimating the secret message length of bit-streams embedded using the Least Significant Bit embedding (LSB) at random pixel positions. We introduce the concept of a weighted stego image and then formulate the problem of determining the unknown message length as a simple optimization problem. The methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant estimator accuracy analysis using statistical image models.
Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems
Wu, Jun; Su, Zhou; Li, Jianhua
2017-01-01
Fog computing, which shifts intelligence and resources from the remote cloud to edge networks, has the potential to provide low-latency communication from sensing data sources to users. For objects from the Internet of Things (IoT) to the cloud, a new trend is for objects to establish social-like relationships with each other, efficiently bringing the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. These advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which fog services are provisioned based on “friend” relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking increases the complexity and security risks of fog computing services, making security service recommendation difficult in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems. PMID:28758943
Crowd Sensing-Enabling Security Service Recommendation for Social Fog Computing Systems.
Wu, Jun; Su, Zhou; Wang, Shen; Li, Jianhua
2017-07-30
Fog computing, which shifts intelligence and resources from the remote cloud to edge networks, has the potential to provide low-latency communication from sensing data sources to users. For objects from the Internet of Things (IoT) to the cloud, a new trend is for objects to establish social-like relationships with each other, efficiently bringing the benefits of developed sociality to a complex environment. As fog services become more sophisticated, it will become more convenient for fog users to share their own services, resources, and data via social networks. Meanwhile, efficient social organization can enable more flexible, secure, and collaborative networking. These advantages make the social network a potential architecture for fog computing systems. In this paper, we design an architecture for social fog computing, in which fog services are provisioned based on "friend" relationships. To the best of our knowledge, this is the first attempt at an organized fog computing system based on a social model. Meanwhile, social networking increases the complexity and security risks of fog computing services, making security service recommendation difficult in social fog computing. To address this, we propose a novel crowd sensing-enabling security service provisioning method to recommend security services accurately in social fog computing systems. Simulation results show the feasibility and efficiency of the crowd sensing-enabling security service recommendation method for social fog computing systems.
Garcia Lopez, Sebastian; Kim, Philip M.
2014-01-01
Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity for their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
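The core of a stochastic gradient boosting predictor can be sketched from scratch with regression stumps under squared loss: each round fits a one-split tree to the current residuals on a random subsample and adds a shrunken copy to the ensemble. This toy (synthetic features and target, depth-1 trees) only illustrates the SGB idea; ELASPIC itself uses full decision trees and curated biophysical features:

```python
import numpy as np

def fit_stump(X, r):
    """Best single-split (feature, threshold, left/right mean) for residuals r."""
    best = (np.inf, 0, 0.0, 0.0, 0.0)
    for j in range(X.shape[1]):
        order = np.argsort(X[:, j])
        xs, rs = X[order, j], r[order]
        for i in range(8, len(xs) - 8, max(1, len(xs) // 32)):
            t = 0.5 * (xs[i - 1] + xs[i])
            sse = rs[:i].var() * i + rs[i:].var() * (len(rs) - i)
            if sse < best[0]:
                best = (sse, j, t, rs[:i].mean(), rs[i:].mean())
    return best[1:]

def sgb_fit(X, y, n_stumps=200, lr=0.1, subsample=0.5, seed=0):
    rng = np.random.default_rng(seed)
    F = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_stumps):
        idx = rng.choice(len(y), int(subsample * len(y)), replace=False)
        j, t, lv, rv = fit_stump(X[idx], y[idx] - F[idx])  # fit residuals
        F += lr * np.where(X[:, j] < t, lv, rv)            # shrunken update
        stumps.append((j, t, lv, rv))
    return y.mean(), lr, stumps

def sgb_predict(model, X):
    base, lr, stumps = model
    F = np.full(len(X), base)
    for j, t, lv, rv in stumps:
        F += lr * np.where(X[:, j] < t, lv, rv)
    return F

# synthetic nonlinear regression target standing in for ddG values
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (800, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(800)
model = sgb_fit(X, y)
pred = sgb_predict(model, X)
```

The subsampling per round is the "stochastic" part; it decorrelates the weak learners and acts as a regularizer.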
ICE-COLA: fast simulations for weak lensing observables
NASA Astrophysics Data System (ADS)
Izard, Albert; Fosalba, Pablo; Crocce, Martin
2018-01-01
Approximate methods to full N-body simulations provide a fast and accurate solution to the development of mock catalogues for the modelling of galaxy clustering observables. In this paper we extend ICE-COLA, based on an optimized implementation of the approximate COLA method, to produce weak lensing maps and halo catalogues in the light-cone using an integrated and self-consistent approach. We show that despite the approximate dynamics, the catalogues thus produced enable an accurate modelling of weak lensing observables one decade beyond the characteristic scale where the growth becomes non-linear. In particular, we compare ICE-COLA to the MICE Grand Challenge N-body simulation for some fiducial cases representative of upcoming surveys and find that, for sources at redshift z = 1, their convergence power spectra agree to within 1 per cent up to high multipoles (i.e. of order 1000). The corresponding shear two point functions, ξ+ and ξ-, yield similar accuracy down to 2 and 20 arcmin respectively, while tangential shear around a z = 0.5 lens sample is accurate down to 4 arcmin. We show that such accuracy is stable against an increased angular resolution of the weak lensing maps. Hence, this opens the possibility of using approximate methods for the joint modelling of galaxy clustering and weak lensing observables and their covariance in ongoing and future galaxy surveys.
Burnett, Greg C [Livermore, CA; Holzrichter, John F [Berkeley, CA; Ng, Lawrence C [Danville, CA
2006-08-08
The present invention is a system and method for characterizing human (or animate) speech voiced excitation functions and acoustic signals, for removing unwanted acoustic noise which often occurs when a speaker uses a microphone in common environments, and for synthesizing personalized or modified human (or other animate) speech upon command from a controller. A low power EM sensor is used to detect the motions of windpipe tissues in the glottal region of the human speech system before, during, and after voiced speech is produced by a user. From these tissue motion measurements, a voiced excitation function can be derived. Further, the excitation function provides speech production information to enhance noise removal from human speech and it enables accurate transfer functions of speech to be obtained. Previously stored excitation and transfer functions can be used for synthesizing personalized or modified human speech. Configurations of EM sensor and acoustic microphone systems are described to enhance noise cancellation and to enable multiple articulator measurements.
Burnett, Greg C.; Holzrichter, John F.; Ng, Lawrence C.
2004-03-23
The present invention is a system and method for characterizing human (or animate) speech voiced excitation functions and acoustic signals, for removing unwanted acoustic noise which often occurs when a speaker uses a microphone in common environments, and for synthesizing personalized or modified human (or other animate) speech upon command from a controller. A low power EM sensor is used to detect the motions of windpipe tissues in the glottal region of the human speech system before, during, and after voiced speech is produced by a user. From these tissue motion measurements, a voiced excitation function can be derived. Further, the excitation function provides speech production information to enhance noise removal from human speech and it enables accurate transfer functions of speech to be obtained. Previously stored excitation and transfer functions can be used for synthesizing personalized or modified human speech. Configurations of EM sensor and acoustic microphone systems are described to enhance noise cancellation and to enable multiple articulator measurements.
Burnett, Greg C.; Holzrichter, John F.; Ng, Lawrence C.
2006-02-14
The present invention is a system and method for characterizing human (or animate) speech voiced excitation functions and acoustic signals, for removing unwanted acoustic noise which often occurs when a speaker uses a microphone in common environments, and for synthesizing personalized or modified human (or other animate) speech upon command from a controller. A low power EM sensor is used to detect the motions of windpipe tissues in the glottal region of the human speech system before, during, and after voiced speech is produced by a user. From these tissue motion measurements, a voiced excitation function can be derived. Further, the excitation function provides speech production information to enhance noise removal from human speech and it enables accurate transfer functions of speech to be obtained. Previously stored excitation and transfer functions can be used for synthesizing personalized or modified human speech. Configurations of EM sensor and acoustic microphone systems are described to enhance noise cancellation and to enable multiple articulator measurements.
Burnett, Greg C.; Holzrichter, John F.; Ng, Lawrence C.
2006-04-25
The present invention is a system and method for characterizing human (or animate) speech voiced excitation functions and acoustic signals, for removing unwanted acoustic noise which often occurs when a speaker uses a microphone in common environments, and for synthesizing personalized or modified human (or other animate) speech upon command from a controller. A low power EM sensor is used to detect the motions of windpipe tissues in the glottal region of the human speech system before, during, and after voiced speech is produced by a user. From these tissue motion measurements, a voiced excitation function can be derived. Further, the excitation function provides speech production information to enhance noise removal from human speech and it enables accurate transfer functions of speech to be obtained. Previously stored excitation and transfer functions can be used for synthesizing personalized or modified human speech. Configurations of EM sensor and acoustic microphone systems are described to enhance noise cancellation and to enable multiple articulator measurements.
Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Martínez, Todd J; Kongsted, Jacob
2017-12-12
The calculation of spectral properties for photoactive proteins is challenging because of the large cost of electronic structure calculations on large systems. Mixed quantum mechanical (QM) and molecular mechanical (MM) methods are typically employed to make such calculations computationally tractable. This study addresses the connection between the minimal QM region size and the method used to model the MM region in the calculation of absorption properties, exemplified here by calculations on the green fluorescent protein. We find that polarizable embedding is necessary for a qualitatively correct description of the MM region, and that this enables the use of much smaller QM regions compared to fixed-charge electrostatic embedding. Furthermore, absorption intensities converge very slowly with system size, and inclusion of effective external field effects in the MM region through polarizabilities is therefore very important. Thus, this embedding scheme enables accurate prediction of intensities for systems that are too large to be treated fully quantum mechanically.
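The essential difference from fixed-charge embedding is that polarizable MM sites carry induced dipoles that respond self-consistently to the field: mu_i = alpha_i (E0_i + sum_j T_ij mu_j). A conceptual sketch of that coupled linear system for isotropic point polarizabilities in atomic units (a toy, not the polarizable-embedding library's implementation):

```python
import numpy as np

def induced_dipoles(pos, alpha, E0):
    """Solve mu_i = alpha_i (E0_i + sum_{j!=i} T_ij mu_j) as one linear system."""
    n = len(pos)
    A = np.zeros((3 * n, 3 * n))
    for i in range(n):
        A[3*i:3*i+3, 3*i:3*i+3] = np.eye(3) / alpha[i]
        for j in range(n):
            if i == j:
                continue
            r = np.asarray(pos[i], float) - np.asarray(pos[j], float)
            d = np.linalg.norm(r)
            rh = r / d
            T = (3.0 * np.outer(rh, rh) - np.eye(3)) / d**3  # dipole tensor
            A[3*i:3*i+3, 3*j:3*j+3] = -T
    return np.linalg.solve(A, np.asarray(E0, float).ravel()).reshape(n, 3)

# two polarizable sites on the z axis, uniform external field along z
mu = induced_dipoles(pos=[[0, 0, 0], [0, 0, 2.0]],
                     alpha=[1.0, 1.0],
                     E0=[[0, 0, 1.0], [0, 0, 1.0]])
```

For this symmetric pair the head-to-tail coupling enhances each dipole to alpha*E/(1 - 2*alpha/d^3) = 4/3, illustrating how mutual polarization modifies the local field that the QM region, in turn, feels.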
Radiative transport produced by oblique illumination of turbid media with collimated beams
NASA Astrophysics Data System (ADS)
Gardner, Adam R.; Kim, Arnold D.; Venugopalan, Vasan
2013-06-01
We examine the general problem of light transport initiated by oblique illumination of a turbid medium with a collimated beam. This situation has direct relevance to the analysis of cloudy atmospheres, terrestrial surfaces, soft condensed matter, and biological tissues. We introduce a solution approach to the equation of radiative transfer that governs this problem, and develop a comprehensive spherical harmonics expansion method utilizing Fourier decomposition (SHEFN). The SHEFN approach enables the solution of problems lacking azimuthal symmetry and provides both the spatial and directional dependence of the radiance. We also introduce the method of sequential-order smoothing that enables the calculation of accurate solutions from the results of two sequential low-order approximations. We apply the SHEFN approach to determine the spatial and angular dependence of both internal and boundary radiances from strongly and weakly scattering turbid media. These solutions are validated using more costly Monte Carlo simulations and reveal important insights regarding the evolution of the radiant field generated by oblique collimated beams spanning ballistic and diffusely scattering regimes.
Uervirojnangkoorn, Monarin; Zeldin, Oliver B.; Lyubimov, Artem Y.; ...
2015-03-17
There is considerable potential for X-ray free electron lasers (XFELs) to enable determination of macromolecular crystal structures that are difficult to solve using current synchrotron sources. Prior XFEL studies often involved the collection of thousands to millions of diffraction images, in part due to limitations of data processing methods. We implemented a data processing system based on classical post-refinement techniques, adapted to specific properties of XFEL diffraction data. When applied to XFEL data from three different proteins collected using various sample delivery systems and XFEL beam parameters, our method improved the quality of the diffraction data as well as the resulting refined atomic models and electron density maps. Moreover, the number of observations for a reflection necessary to assemble an accurate data set could be reduced to a few observations. In conclusion, these developments will help expand the applicability of XFEL crystallography to challenging biological systems, including cases where sample is limited.
Uervirojnangkoorn, Monarin; Zeldin, Oliver B.; Lyubimov, Artem Y.; ...
2015-03-17
There is considerable potential for X-ray free electron lasers (XFELs) to enable determination of macromolecular crystal structures that are difficult to solve using current synchrotron sources. Prior XFEL studies often involved the collection of thousands to millions of diffraction images, in part due to limitations of data processing methods. We implemented a data processing system based on classical post-refinement techniques, adapted to specific properties of XFEL diffraction data. When applied to XFEL data from three different proteins collected using various sample delivery systems and XFEL beam parameters, our method improved the quality of the diffraction data as well as the resulting refined atomic models and electron density maps. Moreover, the number of observations for a reflection necessary to assemble an accurate data set could be reduced to a few observations. These developments will help expand the applicability of XFEL crystallography to challenging biological systems, including cases where sample is limited.
Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Lyubimov, Artem Y; Hattne, Johan; Brewster, Aaron S; Sauter, Nicholas K; Brunger, Axel T; Weis, William I
2015-01-01
There is considerable potential for X-ray free electron lasers (XFELs) to enable determination of macromolecular crystal structures that are difficult to solve using current synchrotron sources. Prior XFEL studies often involved the collection of thousands to millions of diffraction images, in part due to limitations of data processing methods. We implemented a data processing system based on classical post-refinement techniques, adapted to specific properties of XFEL diffraction data. When applied to XFEL data from three different proteins collected using various sample delivery systems and XFEL beam parameters, our method improved the quality of the diffraction data as well as the resulting refined atomic models and electron density maps. Moreover, the number of observations for a reflection necessary to assemble an accurate data set could be reduced to a few observations. These developments will help expand the applicability of XFEL crystallography to challenging biological systems, including cases where sample is limited. DOI: http://dx.doi.org/10.7554/eLife.05421.001 PMID:25781634
Titrimetric and photometric methods for determination of hypochlorite in commercial bleaches.
Jonnalagadda, Sreekanth B; Gengan, Prabhashini
2010-01-01
Two methods, a simple titration method and a photometric method, are developed for the determination of hypochlorite, based on its reaction with hydrogen peroxide and titration of the residual peroxide with acidic permanganate. In the titration method, the residual hydrogen peroxide is titrated with standard permanganate solution to estimate the hypochlorite concentration. The photometric method instead measures the concentration of permanganate remaining after reaction with the residual hydrogen peroxide, and employs four ranges of calibration curves to enable accurate determination of hypochlorite. The new photometric method measures hypochlorite in the range 1.90 x 10(-3) to 1.90 x 10(-2) M with high accuracy and low variance. The hypochlorite concentrations of diverse commercial bleach samples, and of seawater enriched with hypochlorite, were estimated using the proposed method and compared with the arsenite method. Statistical analysis validates the superiority of the proposed method.
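The back-titration arithmetic follows from the 1:1 stoichiometry of OCl- with H2O2 and the 2:5 permanganate-peroxide titration ratio. A minimal sketch of that calculation (all volumes and concentrations below are invented for illustration, not taken from the paper):

```python
def hypochlorite_molarity(v_sample_ml, c_h2o2, v_h2o2_ml, c_kmno4, v_kmno4_ml):
    """[OCl-] from back-titration of residual H2O2 with permanganate.
    OCl- + H2O2 -> Cl- + O2 + H2O                      (1 : 1)
    2 MnO4- + 5 H2O2 + 6 H+ -> 2 Mn2+ + 5 O2 + 8 H2O   (2 : 5)"""
    n_h2o2_added = c_h2o2 * v_h2o2_ml           # mmol of peroxide added
    n_h2o2_left = 2.5 * c_kmno4 * v_kmno4_ml    # mmol left, from titration
    return (n_h2o2_added - n_h2o2_left) / v_sample_ml   # mol/L

# 10.0 mL bleach sample + 10.0 mL of 0.100 M H2O2;
# residual peroxide consumes 8.00 mL of 0.0200 M KMnO4
c = hypochlorite_molarity(10.0, 0.100, 10.0, 0.0200, 8.00)
```

Here 1.000 mmol of peroxide is added, 0.400 mmol survives the hypochlorite reaction, so 0.600 mmol of OCl- was present in the 10 mL sample, i.e. 0.0600 M.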
Comparative study of signalling methods for high-speed backplane transceiver
NASA Astrophysics Data System (ADS)
Wu, Kejun
2017-11-01
A combined transient-simulation and statistical analysis is proposed for the comparative study of signalling methods in high-speed backplane transceivers. The method enables fast and accurate signal-to-noise ratio and symbol error rate estimation of a serial link over a four-dimensional design space comprising channel characteristics, noise scenarios, equalisation schemes, and signalling methods, and chooses an efficient sampling size for performance evaluation. A comparative study of non-return-to-zero (NRZ), PAM-4, and four-phase shifted sinusoid symbol (PSS-4) signalling using parameterised behaviour-level simulation shows that PAM-4 and PSS-4 have substantial advantages over conventional NRZ in most cases. A comparison between PAM-4 and PSS-4 shows that PAM-4 suffers significant bit error rate degradation as the noise level increases.
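The statistical side of such a comparison often reduces to Gaussian-noise symbol error probabilities. A minimal sketch comparing NRZ and PAM-4 at equal peak amplitude, using the standard textbook formulas rather than the paper's combined method (PSS-4 is omitted):

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ser_nrz(amp, sigma):
    # two levels at +/-amp; single decision threshold at 0
    return qfunc(amp / sigma)

def ser_pam4(amp, sigma):
    # four levels at -amp, -amp/3, +amp/3, +amp; eye opening is 1/3 of NRZ,
    # and inner levels have errors on both sides (factor 3/2 on average)
    return 1.5 * qfunc(amp / (3.0 * sigma))
```

The threefold loss of eye opening at fixed swing is exactly why PAM-4 degrades faster than NRZ as the noise level rises, consistent with the abstract's conclusion.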
Nanolock-Nanopore Facilitated Digital Diagnostics of Cancer Driver Mutation in Tumor Tissue.
Wang, Yong; Tian, Kai; Shi, Ruicheng; Gu, Amy; Pennella, Michael; Alberts, Lindsey; Gates, Kent S; Li, Guangfu; Fan, Hongxin; Wang, Michael X; Gu, Li-Qun
2017-07-28
Cancer driver mutations are clinically significant biomarkers. In precision medicine, accurate detection of these oncogenic changes in patients would enable early diagnosis of cancer, individually tailored targeted therapy, and precise monitoring of treatment response. Here we investigated a novel nanolock-nanopore method for single-molecule detection of the serine/threonine protein kinase gene BRAF V600E mutation in tumor tissues of thyroid cancer patients. The method relies on a noncovalent, mutation sequence-specific nanolock. We found that the nanolock formed on the mutant allele/probe duplex separates the duplex dehybridization procedure into two sequential steps in the nanopore. Remarkably, this stepwise unzipping kinetics produces a unique nanopore electric marker, with which a single DNA molecule of the cancer mutant allele can be unmistakably identified against various backgrounds of the normal wild-type allele. The single-molecule sensitivity for the mutant allele enables both binary diagnostics and quantitative analysis of mutation occurrence. In the current configuration, the method can detect BRAF V600E mutant DNA at abundances below 1% in tumor tissue. The nanolock-nanopore method can be adapted to detect a broad spectrum of both transversion and transition DNA mutations, with applications from diagnostics to targeted therapy.
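Because each nanopore event reports one molecule, mutant abundance comes from digital counting, and a binomial confidence interval conveys how reliably a sub-1% fraction is resolved. A generic sketch using the Wilson score interval (the event counts below are invented, not from the study):

```python
import math

def wilson_interval(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# e.g. 6 mutant-signature unzipping events among 1200 single-molecule events
lo, hi = wilson_interval(6, 1200)   # point estimate 0.5% mutant fraction
```

The interval width shrinks as roughly 1/sqrt(n), so recording more translocation events directly tightens the quantification of low-abundance mutant alleles.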
NASA Astrophysics Data System (ADS)
Sharma, Manu; Bhatt, Jignesh S.; Joshi, Manjunath V.
2018-04-01
Lung cancer is one of the most common causes of cancer death worldwide, with a low survival rate due mainly to late diagnosis. With hardware advancements in computed tomography (CT), it is now possible to capture high-resolution images of the lung region; however, efficient algorithms are needed to detect lung cancer at earlier stages from the acquired CT images. To this end, we propose a two-step algorithm for early detection of lung cancer. Given the CT image, we first extract a patch centered on the nodule and segment the lung nodule region, using Otsu's method followed by morphological operations. This step enables accurate segmentation through the use of a data-driven threshold and, unlike other methods, does not require complete contour information of the nodule. In the second step, a deep convolutional neural network (CNN) classifies the nodule in the segmented patch as malignant or benign. Accurate segmentation of even tiny nodules, followed by classification with a deep CNN, enables early detection of lung cancer. Experiments were conducted using 6306 CT images from the LIDC-IDRI database, achieving a test accuracy of 84.13%, with sensitivity and specificity of 91.69% and 73.16%, respectively, clearly outperforming state-of-the-art algorithms.
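The segmentation step described above combines a data-driven Otsu threshold with morphological clean-up. A from-scratch sketch of both pieces (the synthetic bimodal image and the 3x3 structuring element are illustrative choices, not the paper's exact pipeline):

```python
import numpy as np

def otsu_threshold(img):
    """Data-driven threshold maximizing between-class variance (Otsu)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # cumulative first moment
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def binary_open(mask):
    """3x3 morphological opening (erosion then dilation) to remove specks."""
    def shifts(m):
        p = np.pad(m, 1)
        return np.stack([p[1 + dy:1 + dy + m.shape[0], 1 + dx:1 + dx + m.shape[1]]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    eroded = shifts(mask).all(axis=0)
    return shifts(eroded).any(axis=0)

# synthetic patch: dark background class vs bright nodule-like class
rng = np.random.default_rng(0)
demo = np.clip(np.concatenate([rng.normal(60, 10, 5000),
                               rng.normal(190, 10, 5000)]), 0, 255)
demo = demo.astype(np.uint8).reshape(100, 100)
t = otsu_threshold(demo)

speckled = np.zeros((20, 20), dtype=bool)
speckled[5:11, 5:11] = True     # blob standing in for a nodule
speckled[15, 15] = True         # isolated noise pixel
cleaned = binary_open(speckled)
```

Opening removes structures smaller than the structuring element while largely preserving the blob, which is why it pairs naturally with a global threshold.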
Baek, Hyun Jae; Shin, JaeWook; Jin, Gunwoo; Cho, Jaegeol
2017-10-24
Photoplethysmographic signals are useful for heart rate variability analysis in practical ambulatory applications. While reducing the sampling rate of signals is an important consideration for modern wearable devices that enable 24/7 continuous monitoring, few studies have investigated how to compensate for the low timing resolution of low-sampling-rate signals in accurate heart rate variability analysis. In this study, we evaluated the parabola approximation method against the conventional cubic spline interpolation method for the time, frequency, and nonlinear domain variables of heart rate variability. For each parameter, the intra-class correlation, standard error of measurement, Bland-Altman 95% limits of agreement, and root mean squared relative error are presented, and the elapsed time taken to compute each interpolation algorithm was investigated. The results indicate that parabola approximation is a simple, fast, and accurate method for compensating the low timing resolution of pulse beat intervals, with performance comparable to conventional cubic spline interpolation. Even though the absolute values of the heart rate variability variables calculated from a signal sampled at 20 Hz did not exactly match those calculated from a reference signal sampled at 250 Hz, the parabola approximation method remains a good interpolation method for assessing trends in HRV measurements for low-power wearable applications.
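The parabola approximation fits a quadratic through the discrete peak sample and its two neighbours to recover a sub-sample peak time. A minimal sketch on a synthetic pulse (a real pipeline would first detect each PPG pulse peak; the waveform and rates here are illustrative):

```python
import numpy as np

def refine_peak_time(y, i, fs):
    """Sub-sample peak time via a parabola through samples i-1, i, i+1."""
    a, b, c = y[i - 1], y[i], y[i + 1]
    delta = 0.5 * (a - c) / (a - 2.0 * b + c)   # offset in samples, |delta| <= 0.5
    return (i + delta) / fs

fs = 20.0                                # low sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
y = 1.0 - (t - 0.512) ** 2               # pulse whose true peak is at 0.512 s
i = int(np.argmax(y))                    # nearest sample lies at 0.50 s
t_peak = refine_peak_time(y, i, fs)      # recovers the sub-sample peak time
```

For a locally quadratic pulse apex the recovery is exact, which is why this three-point formula can stand in for full cubic spline interpolation at a fraction of the cost.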
Electron-correlated fragment-molecular-orbital calculations for biomolecular and nano systems.
Tanaka, Shigenori; Mochizuki, Yuji; Komeiji, Yuto; Okiyama, Yoshio; Fukuzawa, Kaori
2014-06-14
Recent developments in the fragment molecular orbital (FMO) method for theoretical formulation, implementation, and application to nano and biomolecular systems are reviewed. The FMO method has enabled ab initio quantum-mechanical calculations for large molecular systems such as protein-ligand complexes at a reasonable computational cost in a parallelized way. There has been a wealth of application outcomes from the FMO method in the fields of biochemistry, medicinal chemistry and nanotechnology, in which the electron correlation effects play vital roles. With the aid of the advances in high-performance computing, the FMO method promises larger, faster, and more accurate simulations of biomolecular and related systems, including the descriptions of dynamical behaviors in solvent environments. The current status and future prospects of the FMO scheme are addressed in these contexts.
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2003-01-01
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
NASA Astrophysics Data System (ADS)
Liu, Jie; Thiel, Walter
2018-04-01
We present an efficient implementation of configuration interaction with single excitations (CIS) for semiempirical orthogonalization-corrected OMx methods and standard modified neglect of diatomic overlap (MNDO)-type methods for the computation of vertical excitation energies as well as analytical gradients and nonadiabatic couplings. This CIS implementation is combined with Tully's fewest switches algorithm to enable surface hopping simulations of excited-state nonadiabatic dynamics. We introduce an accurate and efficient expression for the semiempirical evaluation of nonadiabatic couplings, which offers a significant speedup for medium-size molecules and is suitable for use in long nonadiabatic dynamics runs. As a pilot application, the semiempirical CIS implementation is employed to investigate ultrafast energy transfer processes in a phenylene ethynylene dendrimer model.
Liu, Jie; Thiel, Walter
2018-04-21
We present an efficient implementation of configuration interaction with single excitations (CIS) for semiempirical orthogonalization-corrected OMx methods and standard modified neglect of diatomic overlap (MNDO)-type methods for the computation of vertical excitation energies as well as analytical gradients and nonadiabatic couplings. This CIS implementation is combined with Tully's fewest switches algorithm to enable surface hopping simulations of excited-state nonadiabatic dynamics. We introduce an accurate and efficient expression for the semiempirical evaluation of nonadiabatic couplings, which offers a significant speedup for medium-size molecules and is suitable for use in long nonadiabatic dynamics runs. As a pilot application, the semiempirical CIS implementation is employed to investigate ultrafast energy transfer processes in a phenylene ethynylene dendrimer model.
Holzrichter, John F; Burnett, Greg C; Ng, Lawrence C
2013-05-21
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2007-10-16
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario
2017-06-01
The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT dataset of 9 severely resorbed extraction sockets was analyzed by means of two image processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. These were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with micro-CT to test accuracy. Statistical differences in alveolar socket volume were found between the different methods of volumetric analysis (P<0.0001). The automated segmentation using Mimics was the most reliable and accurate method, with a relative error of 1.5%, considerably smaller than the error of 7% introduced by the manual method using Mimics and of 10% by the automated method using ImageJ. The currently proposed automated segmentation protocol for the three-dimensional rendering of alveolar sockets showed more accurate results, excellent inter-observer similarity and increased user friendliness. The clinical application of this method enables a three-dimensional evaluation of extraction socket healing after reconstructive procedures and during follow-up visits.
NASA Astrophysics Data System (ADS)
Goh, Chin-Teng; Cruden, Andrew
2014-11-01
Capacitance and resistance are the fundamental electrical parameters used to evaluate the electrical characteristics of a supercapacitor, namely the dynamic voltage response, energy capacity, state of charge and health condition. The constant capacitance method of British Standards EN62391 and EN62576 can be improved upon with a differential capacitance that more accurately describes the dynamic voltage response of supercapacitors. This paper presents a novel bivariate-quadratic-based method to model the dynamic voltage response of supercapacitors under high-current charge-discharge cycling, and to enable the derivation of the differential capacitance and energy capacity directly from terminal measurements, i.e. voltage and current, rather than from multiple pulsed-current or excitation-signal tests across different bias levels. The estimation results achieved are in close agreement with experimental measurements, within a relative error of 0.2%, at various high current levels (25-200 A), more accurate than the constant capacitance method (4-7%). The archival value of this paper is the introduction of an improved quantification method for the electrical characteristics of supercapacitors, and the disclosure of the distinct properties of supercapacitors: the nonlinear capacitance-voltage characteristic, the capacitance variation between charging and discharging, and the distribution of energy capacity across the operating voltage window.
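The core idea — recovering a voltage-dependent differential capacitance C(v) = i / (dv/dt) from terminal measurements alone — can be sketched on a synthetic constant-current charge. The linear C(v) = C0 + k·v below is an invented model for illustration, not the paper's bivariate quadratic:

```python
import numpy as np

# Invented voltage-dependent capacitance and a constant charging current
C0, k, I = 100.0, 20.0, 10.0              # F, F/V, A

# C(v) dv/dt = I integrates to C0*v + k*v**2/2 = I*t, giving v(t) in closed form
t = np.linspace(0.0, 60.0, 6001)
v = (-C0 + np.sqrt(C0 ** 2 + 2.0 * k * I * t)) / k

# Differential capacitance estimated from the terminal waveforms only
dvdt = np.gradient(v, t)                  # numerical dv/dt
C_diff = I / dvdt                         # recovers C(v) point by point
```

Away from the record edges (where `np.gradient` falls back to one-sided differences), the estimate matches the underlying C0 + k·v to well under 0.1%, illustrating why terminal measurements suffice without pulsed-current tests at each bias level.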
Image quality assessment using deep convolutional networks
NASA Astrophysics Data System (ADS)
Li, Yezhou; Ye, Xiang; Li, Yong
2017-12-01
This paper proposes a method of accurately assessing image quality without a reference image by using a deep convolutional neural network. Existing training-based methods usually utilize a compact set of linear filters for learning features of images captured by different sensors to assess their quality. These methods may not be able to learn the semantic features that are intimately related to those used in human subjective assessment. Observing this drawback, this work proposes training a deep convolutional neural network (CNN) with labelled images for image quality assessment. The ReLU in the CNN allows non-linear transformations for extracting high-level image features, providing a more reliable assessment of image quality than linear filters. To enable the neural network to take images of arbitrary size as input, spatial pyramid pooling (SPP) is introduced to connect the top convolutional layer and the fully-connected layer. In addition, the SPP makes the CNN robust to object deformations to a certain extent. The proposed method, taking an image as input, carries out an end-to-end learning process and outputs the quality of the image. It is tested on public datasets. Experimental results show that it outperforms existing methods by a large margin and can accurately assess the quality of images taken by different sensors and of varying sizes.
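Spatial pyramid pooling maps a feature map of any spatial size to a fixed-length vector by max-pooling over a fixed grid of cells at several scales, which is what lets the fully-connected layer accept arbitrary input sizes. A NumPy sketch of the pooling operation itself (the level set and shapes are illustrative, not the paper's architecture):

```python
import numpy as np

def spp(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid max-pooling: any HxWxC map -> fixed-length vector
    of length C * sum(l*l for l in levels)."""
    h, w, c = feature_map.shape
    out = []
    for l in levels:
        hs = np.linspace(0, h, l + 1).astype(int)   # cell boundaries in H
        ws = np.linspace(0, w, l + 1).astype(int)   # cell boundaries in W
        for i in range(l):
            for j in range(l):
                cell = feature_map[hs[i]:hs[i + 1], ws[j]:ws[j + 1], :]
                out.append(cell.max(axis=(0, 1)))   # max-pool each cell
    return np.concatenate(out)

# Two inputs of very different sizes yield vectors of identical length:
a = spp(np.random.rand(33, 57, 8))
b = spp(np.random.rand(224, 224, 8))
# both have length 8 * (1 + 4 + 16) = 168
```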
Accurately controlled sequential self-folding structures by polystyrene film
NASA Astrophysics Data System (ADS)
Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse
2017-08-01
Four-dimensional (4D) printing overcomes traditional fabrication limitations by designing heterogeneous materials that enable the printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing approach for self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film provide the heat stimulus by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle-lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are fabricated to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under a controlled stimulus (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, using silver ink printed on polystyrene films, to 4D print self-folding structures with electrically induced sequential folding and angular control.
Protein–protein docking by fast generalized Fourier transforms on 5D rotational manifolds
Padhorny, Dzmitry; Kazennov, Andrey; Zerbe, Brandon S.; Porter, Kathryn A.; Xia, Bing; Mottarella, Scott E.; Kholodov, Yaroslav; Ritchie, David W.; Vajda, Sandor; Kozakov, Dima
2016-01-01
Energy evaluation using fast Fourier transforms (FFTs) enables sampling billions of putative complex structures and hence revolutionized rigid protein–protein docking. However, in current methods, efficient acceleration is achieved only in either the translational or the rotational subspace. Developing an efficient and accurate docking method that expands FFT-based sampling to five rotational coordinates is an extensively studied but still unsolved problem. The algorithm presented here retains the accuracy of earlier methods but yields at least 10-fold speedup. The improvement is due to two innovations. First, the search space is treated as the product manifold SO(3)×(SO(3)∖S1), where SO(3) is the rotation group representing the space of the rotating ligand, and (SO(3)∖S1) is the space spanned by the two Euler angles that define the orientation of the vector from the center of the fixed receptor toward the center of the ligand. This representation enables the use of efficient FFT methods developed for SO(3). Second, we select the centers of highly populated clusters of docked structures, rather than the lowest energy conformations, as predictions of the complex, and hence there is no need for very high accuracy in energy evaluation. Therefore, it is sufficient to use a limited number of spherical basis functions in the Fourier space, which increases the efficiency of sampling while retaining the accuracy of docking results. A major advantage of the method is that, in contrast to classical approaches, increasing the number of correlation function terms is computationally inexpensive, which enables using complex energy functions for scoring. PMID:27412858
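The translational half of FFT-based docking scores all relative shifts of two grids in a single pass via the correlation theorem. A toy sketch with indicator-cube "shape complementarity" grids (the paper's contribution — extending the FFT acceleration to five rotational coordinates on SO(3) — is not reproduced here):

```python
import numpy as np

def fft_correlation(receptor, ligand):
    """Cross-correlation of two 3D grids over all cyclic translations,
    computed in one pass: c[t] = sum_x ligand[x] * receptor[x + t]."""
    F = np.fft.fftn(receptor)
    G = np.fft.fftn(ligand)
    return np.real(np.fft.ifftn(F * np.conj(G)))

n = 16
receptor = np.zeros((n, n, n)); receptor[4:8, 4:8, 4:8] = 1.0
ligand = np.zeros((n, n, n));   ligand[0:4, 0:4, 0:4] = 1.0

score = fft_correlation(receptor, ligand)       # all n**3 shifts at once
best = np.unravel_index(np.argmax(score), score.shape)
# best recovers the (4, 4, 4) shift that superimposes the two cubes
```

One FFT pass evaluates all 4096 translations, which is the speedup that made exhaustive rigid docking feasible in the first place.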
NASA Astrophysics Data System (ADS)
Adidharma, Hertanto; Tan, Sugata P.
2016-07-01
Canonical Monte Carlo simulations of face-centered cubic (FCC) and hexagonal close-packed (HCP) Lennard-Jones (LJ) solids are conducted at very low temperatures (0.10 ≤ T∗ ≤ 1.20) and high densities (0.96 ≤ ρ∗ ≤ 1.30). A simple and robust method is introduced to determine whether or not the cutoff distance used in the simulation is large enough to provide accurate thermodynamic properties, which enables us to distinguish the properties of FCC from those of HCP LJ solids with confidence, despite their close similarities. Free-energy expressions derived from the simulation results are also proposed, not only to describe the properties of those individual structures but also the FCC-liquid, FCC-vapor, and FCC-HCP solid phase equilibria.
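One common way to gauge cutoff adequacy is the magnitude of the standard long-range tail correction to the LJ energy, which assumes g(r) = 1 beyond the cutoff. A sketch of that textbook correction (this is an illustration of the issue, not necessarily the specific criterion the paper introduces):

```python
import math

def lj_energy_tail(rho, rc, epsilon=1.0, sigma=1.0):
    """Standard long-range (tail) correction to the LJ energy per particle
    for cutoff rc, assuming g(r) = 1 beyond the cutoff:
    u_tail = (8/3) * pi * rho * eps * [ (sigma/rc)^9 / 3 - (sigma/rc)^3 ]."""
    sr3 = (sigma / rc) ** 3
    return (8.0 / 3.0) * math.pi * rho * epsilon * (sr3 ** 3 / 3.0 - sr3)

# At a solid-like reduced density, the neglected tail shrinks rapidly
# as the cutoff grows -- a quick adequacy check:
tails = {rc: lj_energy_tail(1.1, rc) for rc in (2.5, 4.0, 6.0)}
```

At rho* = 1.1 the tail is about -0.59 epsilon per particle for rc = 2.5 sigma, large compared with typical FCC-HCP free-energy differences, which is why cutoff adequacy must be checked before the two structures can be distinguished with confidence.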
Reducing misfocus-related motion artefacts in laser speckle contrast imaging.
Ringuette, Dene; Sigal, Iliya; Gad, Raanan; Levi, Ofer
2015-01-01
Laser Speckle Contrast Imaging (LSCI) is a flexible, easy-to-implement technique for measuring blood flow speeds in-vivo. In order to obtain reliable quantitative data from LSCI, the object must remain in the focal plane of the imaging system for the duration of the measurement session. However, since LSCI suffers from inherent frame-to-frame noise, it often requires a moving average filter to produce quantitative results. This frame-to-frame noise also makes the implementation of a rapid autofocus system challenging. In this work, we demonstrate an autofocus method and system based on a novel measure of misfocus that serves as an accurate and noise-robust feedback mechanism. This measure of misfocus is shown to enable localization of best focus with sub-depth-of-field sensitivity, yielding more accurate estimates of blood flow speeds and blood vessel diameters.
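The basic LSCI quantity is the local speckle contrast K = σ/μ computed over a sliding window; motion blurs the speckle within the exposure and lowers K. A minimal sketch with synthetic intensities (this illustrates the standard contrast computation, not the paper's misfocus measure):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, win=7):
    """Local speckle contrast K = sigma/mean over a sliding win x win window."""
    w = sliding_window_view(img, (win, win))
    mu = w.mean(axis=(-1, -2))
    sd = w.std(axis=(-1, -2))
    return sd / mu

rng = np.random.default_rng(1)
static = rng.exponential(1.0, (64, 64))   # fully developed speckle: K near 1
blurred = 0.2 * static + 0.8              # crude stand-in for motion blur: lower K

K_static = speckle_contrast(static).mean()
K_blur = speckle_contrast(blurred).mean()
```

Flow maps are thresholded or inverted versions of such K images; the frame-to-frame variability of K is the noise the paper's feedback mechanism must be robust to.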
Jin, Huaiping; Chen, Xiangguang; Yang, Jianwen; Wu, Lei; Wang, Li
2014-11-01
The lack of accurate process models and reliable online sensors for substrate measurements poses significant challenges for controlling substrate feeding accurately, automatically and optimally in fed-batch fermentation industries. It is still common practice to regulate the feeding rate based upon manual operations. To address this issue, a hybrid intelligent control method is proposed to enable automatic substrate feeding. The resulting control system consists of three modules: a presetting module for providing initial set-points; a predictive module for estimating substrate concentration online based on a new time-interval-varying soft sensing algorithm; and a feedback compensator using expert rules. The effectiveness of the proposed approach is demonstrated through its successful application to the industrial fed-batch chlortetracycline fermentation process. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Lee, S; Kim, C S; Shin, Y G; Kim, J H; Kim, Y S; Jheong, W H
2016-03-01
The Peach rosette mosaic virus (PRMV) is a plant pathogen of the genus Nepovirus, and has been designated as a controlled quarantine virus in Korea. In this study, a specific reverse transcription (RT)-PCR marker set, nested PCR marker set, and modified-plasmid positive control were developed to promptly and accurately diagnose PRMV at plant-quarantine sites. The final selected PRMV-specific RT-PCR marker was PRMV-N10/C70 (967 bp), and the nested PCR product of 419 bp was finally amplified. The modified-plasmid positive control, in which the SalI restriction-enzyme region (GTCGAC) was inserted, verified PRMV contamination in a comparison with the control, enabling a more accurate diagnosis. It is expected that the developed method will continuously contribute to the plant-quarantine process in Korea.
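The inserted SalI recognition site (GTCGAC) is what distinguishes the modified-plasmid positive control from a genuine PRMV amplicon: only the control sequence carries the site, so digestion (or sequence inspection) flags contamination by the control. A sketch with made-up sequences (not the actual PRMV-N10/C70 amplicon):

```python
def find_restriction_sites(seq, site="GTCGAC"):
    """Return 0-based positions where a restriction site occurs (SalI: GTCGAC)."""
    seq = seq.upper()
    return [i for i in range(len(seq) - len(site) + 1)
            if seq[i:i + len(site)] == site]

# Hypothetical sequences for illustration: a wild-type amplicon with no SalI
# site, and a control amplicon carrying the engineered GTCGAC insertion.
wild_type = "ATGCCGTTAGGCAATTCCGGA"
control   = "ATGCCGTTAGTCGACATTCCGGA"   # GTCGAC starts at position 9
```

A positive RT-PCR band that cuts with SalI therefore indicates carry-over of the control rather than a true PRMV detection.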
Predicting poverty and wealth from mobile phone metadata.
Blumenstock, Joshua; Cadamuro, Gabriel; On, Robert
2015-11-27
Accurate and timely estimates of population characteristics are a critical input to social and economic research and policy. In industrialized economies, novel sources of data are enabling new approaches to demographic profiling, but in developing countries, fewer sources of big data exist. We show that an individual's past history of mobile phone use can be used to infer his or her socioeconomic status. Furthermore, we demonstrate that the predicted attributes of millions of individuals can, in turn, accurately reconstruct the distribution of wealth of an entire nation or infer the asset distribution of microregions composed of just a few households. In resource-constrained environments where censuses and household surveys are rare, this approach creates an option for gathering localized and timely information at a fraction of the cost of traditional methods. Copyright © 2015, American Association for the Advancement of Science.
Solving the shrinkage-induced PDMS alignment registration issue in multilayer soft lithography
NASA Astrophysics Data System (ADS)
Moraes, Christopher; Sun, Yu; Simmons, Craig A.
2009-06-01
Shrinkage of polydimethylsiloxane (PDMS) complicates alignment registration between layers during multilayer soft lithography fabrication. This often hinders the development of large-scale microfabricated arrayed devices. Here we report a rapid method to construct large-area, multilayered devices with stringent alignment requirements. This technique, which exploits a previously unrecognized aspect of sandwich mold fabrication, improves device yield, enables highly accurate alignment over large areas of multilayered devices and does not require strict regulation of fabrication conditions or extensive calibration processes. To demonstrate this technique, a microfabricated Braille display was developed and characterized. High device yield and accurate alignment within 15 µm were achieved over three layers for an array of 108 Braille units spread over a 6.5 cm2 area, demonstrating the fabrication of well-aligned devices with greater ease and efficiency than previously possible.
NASA Astrophysics Data System (ADS)
Wang, N.; Shen, Y.; Yang, D.; Bao, X.; Li, J.; Zhang, W.
2017-12-01
Accurate and efficient forward modeling methods are important for high-resolution full waveform inversion. Compared with the elastic case, solving the anelastic wave equation requires more computational time because of the need to compute additional material-independent anelastic functions. A numerical scheme with a large Courant-Friedrichs-Lewy (CFL) condition number enables us to use a large time step to simulate wave propagation, which improves computational efficiency. In this work, we apply the fourth-order strong stability preserving Runge-Kutta method with an optimal CFL coefficient to solve the anelastic wave equation. We use a fourth-order DRP/opt MacCormack scheme for the spatial discretization, and we approximate the rheological behavior of the Earth using the generalized Maxwell body model. With a larger CFL condition number, we find that computational efficiency is significantly improved compared with the traditional fourth-order Runge-Kutta method. Then, we apply the scattering-integral method to calculate travel time and amplitude sensitivity kernels with respect to velocity and attenuation structures. For each source, we carry out one forward simulation and save the time-dependent strain tensor. For each station, we carry out three 'backward' simulations for the three components and save the corresponding strain tensors. The sensitivity kernels at each point in the medium are the convolution of the two sets of strain tensors. Finally, we show several synthetic tests to verify the effectiveness of the strong stability preserving Runge-Kutta method in generating accurate synthetics in full waveform modeling, and in generating accurate strain tensors for calculating sensitivity kernels at regional and global scales.
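Strong-stability-preserving Runge-Kutta schemes are convex combinations of forward-Euler steps, which is what lets them retain stability at larger CFL numbers. A sketch of the classic third-order SSPRK(3,3) scheme in Shu-Osher form (a lower-order relative of the fourth-order optimal-CFL scheme used in the abstract, shown here for brevity):

```python
def ssprk3_step(f, u, dt):
    """One step of third-order SSP Runge-Kutta (Shu-Osher form):
    each stage is a forward-Euler step, combined with convex weights."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))

# Accuracy check on u' = -u, u(0) = 1 (exact solution exp(-t))
f = lambda u: -u
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssprk3_step(f, u, dt)
# u now approximates exp(-1) to roughly 1e-7
```

Because every stage is a scaled forward-Euler update, any stability property that holds for forward Euler under a CFL restriction carries over, scaled by the scheme's SSP coefficient.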
Testing of a novel pin array guide for accurate three-dimensional glenoid component positioning.
Lewis, Gregory S; Stevens, Nicole M; Armstrong, April D
2015-12-01
A substantial challenge in total shoulder replacement is accurate positioning and alignment of the glenoid component. This challenge arises from limited intraoperative exposure and complex arthritic-driven deformity. We describe a novel pin array guide and method for patient-specific guiding of the glenoid central drill hole. We also experimentally tested the hypothesis that this method would reduce errors in version and inclination compared with 2 traditional methods. Polymer models of glenoids were created from computed tomography scans from 9 arthritic patients. Each 3-dimensional (3D) printed scapula was shrouded to simulate the operative situation. Three different methods for central drill alignment were tested, all with the target orientation of 5° retroversion and 0° inclination: no assistance, assistance by preoperative 3D imaging, and assistance by the pin array guide. Version and inclination errors of the drill line were compared. Version errors using the pin array guide (3° ± 2°) were significantly lower than version errors associated with no assistance (9° ± 7°) and preoperative 3D imaging (8° ± 6°). Inclination errors were also significantly lower using the pin array guide compared with no assistance. The new pin array guide substantially reduced errors in orientation of the central drill line. The guide method is patient specific but does not require rapid prototyping and instead uses adjustments to an array of pins based on automated software calculations. This method may ultimately provide a cost-effective solution enabling surgeons to obtain accurate orientation of the glenoid. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gliese, U.; Avanov, L. A.; Barrie, A.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Zeuch, M.; Pollock, C. J.; Jacques, A. D.
2013-12-01
The Fast Plasma Investigation (FPI) of the NASA Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs), with 4 of each type on each of 4 spacecraft, to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on ground and in flight. The response uniformity and reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated by setting a fixed detection threshold and, subsequently, measuring a detection system count rate plateau curve to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection amplifier threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully and individually characterize each of the fundamental parameters of the detection system. We present a new detection system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement.
The fundamental concepts of this method, named threshold scan, will be presented. It will be shown how to derive all the individual detection system parameters. This new method has been successfully applied to achieve a highly accurate calibration of the 16 Dual Electron Spectrometers and 16 Dual Ion Spectrometers of the MMS mission. The practical application of the method will be presented together with the achieved calibration results and their significance. Finally, it will be shown how this method will be applied to ensure the best possible in flight calibration during the mission.
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L.; Glass, Christopher E.; Schuster, David M.
2015-01-01
Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-Averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time up to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched those on a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location of peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods, such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating the frequency content of the unsteady pressures and oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.
Clark, Samuel A; Hickey, John M; Daetwyler, Hans D; van der Werf, Julius H J
2012-02-09
The theory of genomic selection is based on the prediction of the effects of genetic markers in linkage disequilibrium with quantitative trait loci. However, genomic selection also relies on relationships between individuals to accurately predict genetic value. This study aimed to examine the importance of information on relatives versus that of unrelated or more distantly related individuals for the estimation of genomic breeding values. Simulated and real data were used to examine the effects of various degrees of relationship on the accuracy of genomic selection. Genomic Best Linear Unbiased Prediction (gBLUP) was compared to two pedigree-based BLUP methods, one with a shallow one-generation pedigree and the other with a deep ten-generation pedigree. The accuracy of estimated breeding values for different groups of selection candidates that had varying degrees of relationship to a reference data set of 1750 animals was investigated. The gBLUP method predicted breeding values more accurately than BLUP. The most accurate breeding values were estimated using gBLUP for closely related animals. Similarly, the pedigree-based BLUP methods were also accurate for closely related animals; however, when the pedigree-based BLUP methods were used to predict unrelated animals, the accuracy was close to zero. In contrast, gBLUP breeding values for animals that had no pedigree relationship with animals in the reference data set retained substantial accuracy. An animal's relationship to the reference data set is an important factor for the accuracy of genomic predictions. Animals that share a close relationship to the reference data set had the highest accuracy from genomic predictions. However, a baseline accuracy, driven by the size of the reference data set and the effective population size, enables gBLUP to estimate a breeding value for unrelated animals within a population (breed), using information previously ignored by pedigree-based BLUP methods.
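The genomic relationships that gBLUP exploits — including weak relationships among nominally unrelated animals — are captured in a genomic relationship matrix. A sketch of VanRaden's first formulation on simulated genotypes (marker-effect estimation and the BLUP solve itself are omitted):

```python
import numpy as np

def grm(M):
    """VanRaden genomic relationship matrix from a genotype matrix M
    (individuals x markers, coded 0/1/2 copies of one allele)."""
    p = M.mean(axis=0) / 2.0              # observed allele frequencies
    Z = M - 2.0 * p                       # centre each marker by 2p
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(2)
M = rng.binomial(2, 0.5, size=(10, 500)).astype(float)   # 10 animals, 500 SNPs
G = grm(M)
# Diagonal entries hover around 1; each column of G sums to zero because
# markers are centred by their sample frequencies.
```

Off-diagonal entries of G quantify realized genomic similarity even without any recorded pedigree, which is exactly the information pedigree-based BLUP ignores.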
Flip-avoiding interpolating surface registration for skull reconstruction.
Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye
2018-03-30
Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.
Satellite disintegration dynamics
NASA Technical Reports Server (NTRS)
Dasenbrock, R. R.; Kaufman, B.; Heard, W. B.
1975-01-01
The subject of satellite disintegration is examined in detail. Elements of the orbits of individual fragments, determined by DOD space surveillance systems, are used to accurately predict the time and place of fragmentation. Dual time-independent and time-dependent analyses are performed for simulated and real breakups. Methods of statistical mechanics are used to study the evolution of the fragment clouds. The fragments are treated as an ensemble of non-interacting particles. A solution of Liouville's equation is obtained which enables the spatial density to be calculated as a function of position, time and initial velocity distribution.
Interferometric observations of an artificial satellite.
Preston, R A; Ergas, R; Hinteregger, H F; Knight, C A; Robertson, D S; Shapiro, I I; Whitney, A R; Rogers, A E; Clark, T A
1972-10-27
Very-long-baseline interferometric observations of radio signals from the TACSAT synchronous satellite, even though extending over only 7 hours, have enabled an excellent orbit to be deduced. Precision in differenced delay and delay-rate measurements reached 0.15 nanosecond (approximately 5 centimeters in equivalent differenced distance) and 0.05 picosecond per second (approximately 0.002 centimeter per second in equivalent differenced velocity), respectively. The results from this initial three-station experiment demonstrate the feasibility of using the method for accurate satellite tracking and for geodesy. Comparisons are made with other techniques.
Improved numerical solutions for chaotic-cancer-model
NASA Astrophysics Data System (ADS)
Yasir, Muhammad; Ahmad, Salman; Ahmed, Faizan; Aqeel, Muhammad; Akbar, Muhammad Zubair
2017-01-01
In biological sciences, the dynamical system of the cancer model is well known for its sensitivity and chaoticity. The present work provides a detailed computational study of the cancer model by counterbalancing its sensitive dependence on initial conditions and parameter values. The chaotic cancer model is discretized into a system of nonlinear equations that are solved using the well-known Successive-Over-Relaxation (SOR) method with proven convergence. This technique enables the solution of large systems and provides a more accurate approximation, which is illustrated through tables, time-history maps and phase portraits with detailed analysis.
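As a concrete illustration of the SOR iteration used here, the sketch below solves a small diagonally dominant linear system; the 3×3 system and relaxation factor are illustrative stand-ins for the discretized cancer model, not the paper's equations.

```python
def sor(A, b, omega=1.25, iters=200):
    """Successive-Over-Relaxation for A x = b, starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Gauss-Seidel update blended with the old value via omega
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
    return x

A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = sor(A, b)  # exact solution is (1, 2, 3)
```

For diagonally dominant systems like this one, 1 < omega < 2 typically accelerates convergence over plain Gauss-Seidel.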
The New Tropospheric Product of the International GNSS Service
NASA Technical Reports Server (NTRS)
Byun, Sung H.; Bar-Sever, Yoaz E.; Gendt, Gerd
2005-01-01
We compare this new approach for generating the IGS tropospheric products with the previous approach, which was based on explicit combination of total zenith delay contributions from the IGS ACs. The new approach enables the IGS to rapidly generate highly accurate and highly reliable total zenith delay time series for many hundreds of sites, thus increasing the utility of the products to weather modelers, climatologists, and GPS analysts. In this paper we describe this new method and discuss issues of accuracy, quality control, and the utility of the new products, and assess their benefits.
Sediment acoustic index method for computing continuous suspended-sediment concentrations
Landers, Mark N.; Straub, Timothy D.; Wood, Molly S.; Domanski, Marian M.
2016-07-11
Once developed, sediment acoustic index ratings must be validated with additional suspended-sediment samples, beyond the period of record used in the rating development, to verify that the regression model continues to adequately represent sediment conditions within the stream. Changes in ADVM configuration or installation, or replacement with another ADVM, may require development of a new rating. The best practices described in this report can be used to develop continuous estimates of suspended-sediment concentration and load using sediment acoustic surrogates to enable more informed and accurate responses to diverse sedimentation issues.
Diagnosis of hydronephrosis: comparison of radionuclide scanning and sonography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malave, S.R.; Neiman, H.L.; Spies, S.M.
1980-12-01
Diagnostic sonographic and radioisotope scanning techniques have been shown to be useful in the diagnosis of obstructive uropathy. The accuracy of both methods was compared and sonography was found to provide the more accurate data (sensitivity, 90%; specificity, 98%; accuracy, 97%). Sonography provides excellent anatomic information and enables one to grade the degree of dilatation. Renal radionuclide studies were less sensitive in detecting obstruction, particularly in the presence of chronic renal disease, but offered additional information regarding relative renal blood flow, total effective renal plasma flow, and interval change in renal parenchymal function.
New Term Weighting Formulas for the Vector Space Method in Information Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chisholm, E.; Kolda, T.G.
The goal in information retrieval is to enable users to automatically and accurately find data relevant to their queries. One possible approach to this problem is to use the vector space model, which models documents and queries as vectors in the term space. The components of the vectors are determined by the term weighting scheme, a function of the frequencies of the terms in the document or query as well as throughout the collection. We discuss popular term weighting schemes and present several new schemes that offer improved performance.
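A common baseline among the popular term weighting schemes the report discusses is tf-idf with cosine similarity. The sketch below is a generic illustration (toy documents, logarithmic idf), not the report's new formulas.

```python
import math

def build_weighting(docs):
    """Return a function mapping text to a tf-idf vector over the corpus vocabulary."""
    tokens = [d.split() for d in docs]
    vocab = sorted({t for toks in tokens for t in toks})
    n = len(docs)
    # idf: rarer terms across the collection get higher weight
    idf = {t: math.log(n / sum(1 for toks in tokens if t in toks)) for t in vocab}

    def vec(text):
        toks = text.split()
        return [toks.count(t) * idf[t] for t in vocab]
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["satellite orbit tracking", "orbit determination method", "wine proline assay"]
vec = build_weighting(docs)
scores = [cosine(vec("orbit tracking"), vec(d)) for d in docs]
```

Documents sharing more (and rarer) query terms score higher; documents with no overlap score zero.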
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiche, Helmut Matthias; Vogel, Sven C.
New in situ data for the U-C system are presented, with the goal of improving knowledge of the phase diagram to enable production of new ceramic fuels. The non-quenchable, cubic δ-phase, which is fundamental to computational methods, was identified. Rich datasets from the formation synthesis of uranium carbide yield kinetics data that allow the benchmarking of modeling, thermodynamic parameters, etc. The order-disorder transition (carbon sublattice melting) was observed owing to the equal sensitivity of neutrons to both elements. This dynamic has not been accurately described in some recent simulation-based publications.
Process-Hardened, Multi-Analyte Sensor for Characterizing Rocket Plume Constituents
NASA Technical Reports Server (NTRS)
Goswami, Kisholoy
2011-01-01
A multi-analyte sensor was developed that enables simultaneous detection of rocket engine combustion-product molecules in a launch-vehicle ground test stand. The sensor was developed using a pin-printing method by incorporating multiple sensor elements on a single chip. It demonstrated accurate and sensitive detection of analytes such as carbon dioxide, carbon monoxide, kerosene, isopropanol, and ethylene from a single measurement. The use of pin-printing technology enables high-volume fabrication of the sensor chip, which will ultimately eliminate the need for individual sensor calibration since many identical sensors are made in one batch. Tests were performed using a single-sensor chip attached to a fiber-optic bundle. The use of a fiber bundle allows placement of the opto-electronic readout device at a place remote from the test stand. The sensors are rugged for operation in harsh environments.
Metric Evaluation Pipeline for 3d Modeling of Urban Scenes
NASA Astrophysics Data System (ADS)
Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.
2017-05-01
Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state of the art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline developed as publicly available open source software to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
NASA Astrophysics Data System (ADS)
Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.
2014-12-01
The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries.
Future challenges will involve the further integration and analysis of this data across the social sciences to facilitate the impacts across the societal domain, including timely analysis to more accurately predict and forecast future climate and environmental state.
Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation
NASA Technical Reports Server (NTRS)
DePriest, Douglas; Morgan, Carolyn
2003-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.
In-flight wind identification and soft landing control for autonomous unmanned powered parafoils
NASA Astrophysics Data System (ADS)
Luo, Shuzhen; Tan, Panlong; Sun, Qinglin; Wu, Wannan; Luo, Haowen; Chen, Zengqiang
2018-04-01
For an autonomous unmanned powered parafoil, the ability to perform a final flare manoeuvre against the wind direction allows a considerable reduction of horizontal and vertical velocities at impact, enabling a soft landing for the safe delivery of sensitive loads; a lack of knowledge about the surface-layer winds can compromise the terminal flare manoeuvre. Moreover, unknown or erroneous winds can also prevent the parafoil system from reaching the target area. To realize accurate trajectory tracking and a terminal soft landing in an unknown wind environment, an efficient in-flight wind identification method using only Global Positioning System (GPS) data and the recursive least squares method is proposed to identify the variable wind information online. Furthermore, a novel linear extended state observation filter is proposed to filter the groundspeed of the powered parafoil system calculated from the GPS information, providing a best estimate of the present wind during flight. Simulation experiments and real airdrop tests demonstrate the ability of this method to identify the variable wind field in flight, and it can help the powered parafoil system achieve accurate tracking control and a soft landing in an unknown wind field with high landing accuracy and strong wind-resistance ability.
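The core identification idea can be sketched with a scalar recursive-least-squares (forgetting-factor) update per wind component: the wind sample at each step is the GPS groundspeed minus the vehicle's air-relative velocity. The airspeed model, forgetting factor, and synthetic data below are illustrative assumptions, not the paper's exact formulation.

```python
import random

def make_rls(lam=0.98, p0=100.0):
    """Scalar recursive least squares with forgetting factor lam."""
    state = {"w": 0.0, "p": p0}

    def update(y):
        k = state["p"] / (lam + state["p"])       # gain
        state["w"] += k * (y - state["w"])        # innovation correction
        state["p"] = (1.0 - k) * state["p"] / lam # covariance update
        return state["w"]
    return update

random.seed(1)
true_wind = (3.0, -1.5)          # m/s, unknown to the estimator
rls_x, rls_y = make_rls(), make_rls()
for _ in range(400):
    # each "measurement" is groundspeed minus air-relative velocity,
    # i.e. the wind plus GPS-derived noise (synthetic here)
    wx_est = rls_x(true_wind[0] + random.gauss(0.0, 0.5))
    wy_est = rls_y(true_wind[1] + random.gauss(0.0, 0.5))
```

The forgetting factor lets the estimate track a slowly varying wind rather than averaging over the whole flight.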
Robert-Peillard, Fabien; Boudenne, Jean-Luc; Coulomb, Bruno
2014-05-01
This paper presents a simple, accurate and multi-sample method for the determination of proline in wines using a 96-well microplate technique. Proline is the most abundant amino acid in wine and is an important parameter related to wine characteristics or maturation processes of grape. In the current study, an improved application of the general method based on sodium hypochlorite oxidation and o-phthaldialdehyde (OPA)-thiol spectrofluorometric detection is described. The main interfering compounds for specific proline detection in wines are strongly reduced by selective reaction with OPA in a preliminary step under well-defined pH conditions. Application of the protocol after a 500-fold dilution of wine samples provides a working range between 0.02 and 2.90 g L⁻¹, with a limit of detection of 7.50 mg L⁻¹. Comparison and validation on real wine samples by ion-exchange chromatography prove that this procedure yields accurate results. The simplicity of the protocol, with no need for centrifugation or filtration, organic solvents or high temperature, enables its full implementation in plastic microplates and efficient application for routine analysis of proline in wines. Copyright © 2013 Elsevier Ltd. All rights reserved.
Kanematsu, Nobuyuki
2011-04-01
This work addresses computing techniques for dose calculations in treatment planning with proton and ion beams, based on an efficient kernel-convolution method referred to as grid-dose spreading (GDS) and an accurate heterogeneity-correction method referred to as Gaussian beam splitting. The original GDS algorithm suffered from distortion of dose distribution for beams tilted with respect to the dose-grid axes. Use of intermediate grids normal to the beam field has solved the beam-tilting distortion. Interplay of arrangement between beams and grids was found to be another intrinsic source of artifact. Inclusion of rectangular-kernel convolution in beam transport, to share the beam contribution among the nearest grids in a regulatory manner, has solved the interplay problem. This algorithmic framework was applied to a tilted proton pencil beam and a broad carbon-ion beam. In these cases, while the elementary pencil beams individually split into several tens, the calculation time increased only by several times with the GDS algorithm. The GDS and beam-splitting methods will complementarily enable accurate and efficient dose calculations for radiotherapy with protons and ions. Copyright © 2010 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Assessing the fit of site-occupancy models
MacKenzie, D.I.; Bailey, L.L.
2004-01-01
Few species are likely to be so evident that they will always be detected at a site when present. Recently a model has been developed that enables estimation of the proportion of area occupied when the target species is not detected with certainty. Here we apply this modeling approach to data collected on terrestrial salamanders in the Plethodon glutinosus complex in the Great Smoky Mountains National Park, USA, and wish to address the question 'how accurately does the fitted model represent the data?' The goodness-of-fit of the model needs to be assessed in order to make accurate inferences. This article presents a method where a simple Pearson chi-square statistic is calculated and a parametric bootstrap procedure is used to determine whether the observed statistic is unusually large. We found evidence that the most global model considered provides a poor fit to the data, hence we estimated an overdispersion factor to adjust model selection procedures and inflate standard errors. Two hypothetical datasets with known assumption violations are also analyzed, illustrating that the method may be used to guide researchers in making appropriate inferences. The results of a simulation study are presented to provide a broader view of the method's properties.
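The chi-square-plus-parametric-bootstrap recipe can be sketched in miniature. For brevity the "model" below is a simple binomial detection model fitted by maximum likelihood, not the full site-occupancy likelihood; the counts are synthetic.

```python
import random
from math import comb

def fit_p(counts, n_surveys):
    """MLE of the detection probability from per-site detection counts."""
    n_sites = sum(counts)
    return sum(k * c for k, c in enumerate(counts)) / (n_sites * n_surveys)

def pearson_x2(counts, n_surveys, p):
    """Pearson chi-square of observed vs binomial-expected category counts."""
    n_sites = sum(counts)
    x2 = 0.0
    for k, obs in enumerate(counts):
        exp = n_sites * comb(n_surveys, k) * p**k * (1 - p)**(n_surveys - k)
        x2 += (obs - exp) ** 2 / exp
    return x2

def bootstrap_pvalue(counts, n_surveys, n_boot=500, seed=7):
    rng = random.Random(seed)
    n_sites = sum(counts)
    p_hat = fit_p(counts, n_surveys)
    x2_obs = pearson_x2(counts, n_surveys, p_hat)
    hits = 0
    for _ in range(n_boot):
        # simulate a dataset under the fitted model, refit, and recompute X^2
        sim = [0] * (n_surveys + 1)
        for _ in range(n_sites):
            k = sum(1 for _ in range(n_surveys) if rng.random() < p_hat)
            sim[k] += 1
        if pearson_x2(sim, n_surveys, fit_p(sim, n_surveys)) >= x2_obs:
            hits += 1
    return hits / n_boot  # fraction of bootstrap statistics at least as large

# synthetic: number of sites with k = 0..5 detections out of 5 surveys
counts = [30, 25, 20, 12, 8, 5]
pval = bootstrap_pvalue(counts, 5)
```

A small p-value indicates the observed statistic is unusually large relative to the model, i.e. evidence of poor fit.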
Visell, Yon
2015-04-01
This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
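The inverse-transform construction at the heart of the algorithm can be sketched as follows: closed-form CDFs are inverted to turn uniform random numbers into inter-event times and jump magnitudes, yielding a time-domain stochastic jump process. The exponential inter-event times and Pareto-distributed jump sizes below are illustrative choices, not the paper's exact lattice statistics.

```python
import random
from math import log

def inverse_exponential(u, rate):
    # invert F(t) = 1 - exp(-rate * t) for the inter-event time
    return -log(1.0 - u) / rate

def inverse_pareto(u, xmin, alpha):
    # invert F(x) = 1 - (xmin / x) ** alpha for the jump magnitude
    return xmin * (1.0 - u) ** (-1.0 / alpha)

def jump_process(t_end, rate=50.0, xmin=0.01, alpha=1.8, seed=3):
    """Return (time, stress-drop) events on [0, t_end)."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += inverse_exponential(rng.random(), rate)
        if t >= t_end:
            return events
        events.append((t, inverse_pareto(rng.random(), xmin, alpha)))

events = jump_process(1.0)
```

Because each event needs only two uniform draws and two function evaluations, such a process can be generated at audio rates.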
Confidence Region of Least Squares Solution for Single-Arc Observations
NASA Astrophysics Data System (ADS)
Principe, G.; Armellin, R.; Lewis, H.
2016-09-01
The total number of active satellites, rocket bodies, and debris larger than 10 cm is currently about 20,000. Considering all resident space objects larger than 1 cm, this rises to an estimated minimum of 500,000 objects. Latest-generation sensor networks will be able to detect small-size objects, producing millions of observations per day. Due to observability constraints, it is likely that long gaps between observations will occur for small objects. This requires determining the space object (SO) orbit and accurately describing the associated uncertainty when observations are acquired on a single arc. The aim of this work is to revisit the classical least squares method taking advantage of the high-order Taylor expansions enabled by differential algebra. In particular, the high-order expansion of the residuals with respect to the state is used to implement an arbitrary-order least squares solver, avoiding the typical approximations of differential correction methods. In addition, the same expansions are used to accurately characterize the confidence region of the solution, going beyond the classical Gaussian distributions. The properties and performance of the proposed method are discussed using optical observations of objects in LEO, HEO, and GEO.
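For contrast with the arbitrary-order solver, the classical first-order differential correction that the authors generalize reduces to a Gauss-Newton iteration on the normal equations. The sketch below applies it to a toy two-parameter exponential measurement model rather than an orbit, purely for illustration.

```python
from math import exp

def gauss_newton(ts, ys, a, b, iters=30):
    """First-order differential correction for the model y = a * exp(b * t)."""
    for _ in range(iters):
        r = [y - a * exp(b * t) for t, y in zip(ts, ys)]      # residuals
        J = [(exp(b * t), a * t * exp(b * t)) for t in ts]    # Jacobian rows
        # normal equations (J'J) dx = J'r, solved in closed form for 2 params
        n00 = sum(j0 * j0 for j0, _ in J)
        n01 = sum(j0 * j1 for j0, j1 in J)
        n11 = sum(j1 * j1 for _, j1 in J)
        g0 = sum(j0 * ri for (j0, _), ri in zip(J, r))
        g1 = sum(j1 * ri for (_, j1), ri in zip(J, r))
        det = n00 * n11 - n01 * n01
        a += (n11 * g0 - n01 * g1) / det
        b += (n00 * g1 - n01 * g0) / det
    return a, b

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * exp(0.3 * t) for t in ts]            # noiseless "observations"
a_hat, b_hat = gauss_newton(ts, ys, 1.8, 0.25)   # start near the truth
```

The linearization of the residuals at each step is exactly the first-order truncation that the differential-algebra approach replaces with a full high-order Taylor expansion.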
Wagland, S T; Dudley, R; Naftaly, M; Longhurst, P J
2013-11-01
Two novel techniques are presented in this study which together aim to provide a system able to determine the renewable energy potential of mixed waste materials. An image analysis tool was applied to two waste samples prepared using known quantities of source-segregated recyclable materials. The technique was used to determine the composition of the wastes, where through the use of waste component properties the biogenic content of the samples was calculated. The percentage renewable energy determined by image analysis for each sample was accurate to within 5% of the actual values calculated. Microwave-based multiple-point imaging (AutoHarvest) was used to demonstrate the ability of such a technique to determine the moisture content of mixed samples. This proof-of-concept experiment produced moisture measurements accurate to within 10%. Overall, the image analysis tool was able to determine the renewable energy potential of the mixed samples, and the AutoHarvest should enable net calorific value calculations through the provision of moisture content measurements. The proposed system is suitable for combustion facilities, and enables the operator to understand the renewable energy potential of the waste prior to combustion. Copyright © 2013 Elsevier Ltd. All rights reserved.
Tameem, Hussain Z.; Sinha, Usha S.
2011-01-01
Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features. PMID:21785520
Conjoined twins – role of imaging and recent advances
Francis, Swati; Basti, Ram Shenoy; Suresh, Hadihally B.; Rajarathnam, Annie; Cunha, Prema D.; Rao, Sujaya V.
2017-01-01
Introduction Conjoined twins are identical twins with fused bodies, joined in utero. They are rare complications of monochorionic twinning. The purpose of this study is to describe the various types of conjoined twins, the role of imaging and recent advances aiding in their management. Material and methods This was a twin institutional study involving 3 cases of conjoined twins diagnosed over a period of 6 years from 2010 to 2015. All the 3 cases were identified antenatally by ultrasound. Only one case was further evaluated by MRI. Results Three cases of conjoined twins (cephalopagus, thoracopagus and omphalopagus) were accurately diagnosed on antenatal ultrasound. After detailed counseling of the parents and obtaining written consent, all the three cases of pregnancy were terminated. Delivery of the viable conjoined twins was achieved without any complications to the mothers, and all the three conjoined twins died after a few minutes. Conclusion Ultrasound enables an early and accurate diagnosis of conjoined twins, which is vital for obstetric management. MRI is reserved for better tissue characterization. Termination of pregnancy when opted, should be done at an early stage as later stages are fraught with problems. Recent advances, such as 3D printing, may aid in surgical pre-planning, thereby enabling successful surgical separation of conjoined twins. PMID:29375901
Chang, Feng-Yu; Tsai, Meng-Tsan; Wang, Zu-Yi; Chi, Chun-Kai; Lee, Cheng-Kuang; Yang, Chih-Hsun; Chan, Ming-Che; Lee, Ya-Ju
2015-11-16
Blood coagulation is the clotting and subsequent dissolution of the clot following repair of the damaged tissue. However, inducing blood coagulation is difficult for some patients with homeostasis dysfunction or during surgery. In this study, we proposed a method to develop an integrated system that combines optical coherence tomography (OCT) and laser microsurgery for blood coagulation. Also, an algorithm for positioning of the treatment location from OCT images was developed. With OCT scanning, 2D/3D OCT images and angiography of tissue can be obtained simultaneously, enabling noninvasive reconstruction of the morphological and microvascular structures for real-time monitoring of changes in biological tissues during laser microsurgery. Instead of high-cost pulsed lasers, continuous-wave laser diodes (CW-LDs) with central wavelengths of 450 nm and 532 nm are used for blood coagulation, corresponding to the higher absorption coefficients of oxyhemoglobin and deoxyhemoglobin. Experimental results showed that the location of laser exposure can be accurately controlled with the proposed approach of imaging-based feedback positioning. Moreover, blood coagulation can be efficiently induced by CW-LDs and the coagulation process can be monitored in real time with OCT. This technology can potentially provide accurate positioning for laser microsurgery and control the laser exposure to avoid extra damage through real-time OCT imaging.
The Use of LIDAR and Volunteered Geographic Information to Map Flood Extents and Inundation
NASA Astrophysics Data System (ADS)
McDougall, K.; Temple-Watts, P.
2012-07-01
Floods are one of the most destructive natural disasters that threaten communities and properties. In recent decades, flooding has claimed more lives, destroyed more houses and ruined more agricultural land than any other natural hazard. The accurate prediction of the areas of inundation from flooding is critical to saving lives and property, but relies heavily on accurate digital elevation and hydrologic models. The 2011 Brisbane floods provided a unique opportunity to capture high resolution digital aerial imagery as the floods neared their peak, allowing the capture of areas of inundation over the various city suburbs. This high quality imagery, together with accurate LiDAR data over the area and publicly available volunteered geographic imagery through repositories such as Flickr, enabled the reconstruction of flood extents and the assessment of both area and depth of inundation for the assessment of damage. In this study, approximately 20 images of flood-damaged properties were utilised to identify the peak of the flood. Accurate position and height values were determined through the use of RTK GPS and conventional survey methods. This information was then utilised in conjunction with river gauge information to generate a digital flood surface. The LiDAR-generated DEM was then intersected with the flood surface to reconstruct the area of inundation. The model-determined areas of inundation were then compared to the mapped flood extent from the high resolution digital imagery to assess the accuracy of the process. The paper concludes that accurate flood extent prediction or mapping is possible through this method, although its accuracy is dependent on the number and location of sampled points. The utilisation of LiDAR-generated DEMs and DSMs can also provide an excellent mechanism to estimate depths of inundation and hence flood damage.
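The DEM/flood-surface intersection step can be sketched as a cell-by-cell comparison: wherever the interpolated flood surface exceeds the LiDAR ground elevation, the cell is inundated to the difference. The tiny grids below are illustrative stand-ins for real rasters.

```python
def inundation(dem, flood_surface):
    """Per-cell inundation depth: flood surface minus ground, floored at zero."""
    depth = []
    for dem_row, fs_row in zip(dem, flood_surface):
        depth.append([max(fs - g, 0.0) for g, fs in zip(dem_row, fs_row)])
    return depth

dem = [[4.0, 5.2, 6.1],
       [3.8, 4.9, 5.5],
       [3.5, 4.2, 5.0]]      # ground elevations (m)
flood = [[5.0, 5.0, 5.0],
         [4.9, 4.9, 4.9],
         [4.8, 4.8, 4.8]]    # interpolated flood surface (m)
depth = inundation(dem, flood)
flooded_cells = sum(d > 0 for row in depth for d in row)
```

The flood-extent polygon is then the boundary of the positive-depth cells, which is what gets compared against the imagery-mapped extent.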
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
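The temporal-binning step that improved fit accuracy amounts to collapsing the 256-bin decay histogram into 42 coarser bins while conserving photon counts. The sketch below uses a synthetic single-exponential decay; real TCSPC data would come from the instrument.

```python
from math import exp

def rebin(decay, n_out):
    """Rebin a photon-count histogram into n_out coarser bins, conserving counts."""
    n_in = len(decay)
    out = [0] * n_out
    for i, c in enumerate(decay):
        out[i * n_out // n_in] += c   # map each fine bin to a coarse bin
    return out

decay_256 = [int(100 * exp(-i / 40.0)) for i in range(256)]  # synthetic decay
decay_42 = rebin(decay_256, 42)
```

Each coarse bin aggregates roughly six fine bins, raising counts per bin (and hence fit stability at low total photons) at the cost of temporal resolution within the decay.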
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is especially pronounced when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
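The core idea of the subspace constraint above — modeling each voxel time series as a combination of a few temporal basis functions, so reconstruction reduces to a small linear least-squares fit — can be sketched in miniature. This is a hedged toy illustration (a K = 2 basis, solved by normal equations), not the authors' reconstruction code.

```python
# Hedged sketch of the subspace idea (not the paper's implementation): a
# length-T time series x is modeled as x = Phi @ c, where Phi is a T x K
# temporal basis with K << T. Fitting c is a small linear least-squares
# problem; here K = 2 and we solve the 2x2 normal equations directly.
def fit_subspace_coeffs(phi, x):
    """Solve min_c ||phi c - x||^2 via the 2x2 normal equations (K = 2)."""
    n = len(x)
    # Normal equations: (Phi^T Phi) c = Phi^T x
    a11 = sum(phi[t][0] * phi[t][0] for t in range(n))
    a12 = sum(phi[t][0] * phi[t][1] for t in range(n))
    a22 = sum(phi[t][1] * phi[t][1] for t in range(n))
    b1 = sum(phi[t][0] * x[t] for t in range(n))
    b2 = sum(phi[t][1] * x[t] for t in range(n))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Basis: constant + linear ramp; the signal lies exactly in the subspace,
# so the recovered coefficients are exact.
T = 8
phi = [[1.0, t / (T - 1)] for t in range(T)]
x = [2.0 + 3.0 * t / (T - 1) for t in range(T)]
c = fit_subspace_coeffs(phi, x)
assert abs(c[0] - 2.0) < 1e-9 and abs(c[1] - 3.0) < 1e-9
```

In the paper's setting the basis is learned from the MRF dictionary and the system additionally includes the imaging operator, but the least-squares structure is the same.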
Xu, Feng; Beyazoglu, Turker; Hefner, Evan; Gurkan, Umut Atakan
2011-01-01
Cellular alignment plays a critical role in functional, physical, and biological characteristics of many tissue types, such as muscle, tendon, nerve, and cornea. Current efforts toward regeneration of these tissues include replicating the cellular microenvironment by developing biomaterials that facilitate cellular alignment. To assess the functional effectiveness of the engineered microenvironments, one essential criterion is quantification of cellular alignment. Therefore, there is a need for rapid, accurate, and adaptable methodologies to quantify cellular alignment for tissue engineering applications. To address this need, we developed an automated method, binarization-based extraction of alignment score (BEAS), to determine cell orientation distribution in a wide variety of microscopic images. This method combines a sequenced application of median and band-pass filters, locally adaptive thresholding approaches and image processing techniques. Cellular alignment score is obtained by applying a robust scoring algorithm to the orientation distribution. We validated the BEAS method by comparing the results with the existing approaches reported in literature (i.e., manual, radial fast Fourier transform-radial sum, and gradient based approaches). Validation results indicated that the BEAS method resulted in statistically comparable alignment scores with the manual method (coefficient of determination R2=0.92). Therefore, the BEAS method introduced in this study could enable accurate, convenient, and adaptable evaluation of engineered tissue constructs and biomaterials in terms of cellular alignment and organization. PMID:21370940
Towle, Erica L.; Richards, Lisa M.; Kazmi, S. M. Shams; Fox, Douglas J.; Dunn, Andrew K.
2013-01-01
BACKGROUND Assessment of the vasculature is critical for overall success in cranial vascular neurological surgery procedures. Although several methods of monitoring cortical perfusion intraoperatively are available, not all are appropriate or convenient in a surgical environment. Recently, 2 optical methods have emerged that are able to obtain high spatial resolution images with easily implemented instrumentation: indocyanine green (ICG) angiography and laser speckle contrast imaging (LSCI). OBJECTIVE To evaluate the usefulness of ICG and LSCI in measuring vessel perfusion. METHODS An experimental setup was developed that simultaneously collects measurements of ICG fluorescence and LSCI in a rodent model. A 785-nm laser diode was used for both excitation of the ICG dye and the LSCI illumination. A photothrombotic clot model was used to occlude specific vessels within the field of view to enable comparison of the 2 methods for monitoring vessel perfusion. RESULTS The induced blood flow change demonstrated that ICG is an excellent method for visualizing the volume and type of vessel at a single point in time; however, it is not always an accurate representation of blood flow. In contrast, LSCI provides a continuous and accurate measurement of blood flow changes without the need of an external contrast agent. CONCLUSION These 2 methods should be used together to obtain a complete understanding of tissue perfusion. PMID:22843129
Treating knee pain: history taking and accurate diagnoses.
Barratt, Julian
2010-07-01
Prompt and effective diagnosis and treatment for common knee problems depend on practitioners' ability to distinguish between traumatic and inflammatory knee conditions. This article aims to enable practitioners to make accurate assessments, carry out knee examinations and undertake selected special tests as necessary before discharging or referring patients.
Recent Progress in the Remote Detection of Vapours and Gaseous Pollutants.
ERIC Educational Resources Information Center
Moffat, A. J.; And Others
Work has been continuing on the correlation spectrometry techniques described at previous remote sensing symposiums. Advances in the techniques are described which enable accurate quantitative measurements of diffused atmospheric gases to be made using controlled light sources, accurate quantitative measurements of gas clouds relative to…
A better understanding of SOA formation, properties and behavior in the humid eastern U.S. including dependence on anthropogenic emissions (RFA Q #1, 2). More accurate air quality prediction enabling more accurate air quality management (EPA Goal #1). Scientific insights co...
NASA Technical Reports Server (NTRS)
Pagnutti, Mary; Holekamp, Kara; Ryan, Robert E.; Vaughan, Ronand; Russell, Jeff; Prados, Don; Stanley, Thomas
2005-01-01
Remotely sensed ground reflectance is the foundation of any interoperability or change detection technique. Satellite intercomparisons and accurate vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), require the generation of accurate reflectance maps (NDVI is used to describe or infer a wide variety of biophysical parameters and is defined in terms of near-infrared (NIR) and red band reflectances). Accurate reflectance-map generation from satellite imagery relies on the removal of solar and satellite geometry and of atmospheric effects and is generally referred to as atmospheric correction. Atmospheric correction of remotely sensed imagery to ground reflectance has been widely applied to a few systems only. The ability to obtain atmospherically corrected imagery and products from various satellites is essential to enable widescale use of remotely sensed, multitemporal imagery for a variety of applications. An atmospheric correction approach derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that can be applied to high-spatial-resolution satellite imagery under many conditions was evaluated to demonstrate a reliable, effective reflectance map generation method. Additional information is included in the original extended abstract.
Machine Learning of Accurate Energy-Conserving Molecular Force Fields
NASA Astrophysics Data System (ADS)
Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel; Poltavsky, Igor; Schütt, Kristof; Müller, Klaus-Robert; GDML Collaboration
Efficient and accurate access to the Born-Oppenheimer potential energy surface (PES) is essential for long time scale molecular dynamics (MD) simulations. Using conservation of energy - a fundamental property of closed classical and quantum mechanical systems - we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio MD trajectories (AIMD). The GDML implementation is able to reproduce global potential-energy surfaces of intermediate-size molecules with an accuracy of 0.3 kcal/mol for energies and 1 kcal/mol/Å for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, malonaldehyde, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative MD simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods.
Wagner, Rebecca; Wetzel, Stephanie J; Kern, John; Kingston, H M Skip
2012-02-01
The employment of chemical weapons by rogue states and/or terrorist organizations is an ongoing concern in the United States. The quantitative analysis of nerve agents must be rapid and reliable for use in the private and public sectors. Current methods describe a tedious and time-consuming derivatization for gas chromatography-mass spectrometry and liquid chromatography in tandem with mass spectrometry. Two solid-phase extraction (SPE) techniques for the analysis of glyphosate and methylphosphonic acid are described with the utilization of isotopically enriched analytes for quantitation via atmospheric pressure chemical ionization-quadrupole time-of-flight mass spectrometry (APCI-Q-TOF-MS) that does not require derivatization. Solid-phase extraction-isotope dilution mass spectrometry (SPE-IDMS) involves pre-equilibration of a naturally occurring sample with an isotopically enriched standard. The second extraction method, i-Spike, involves loading an isotopically enriched standard onto the SPE column before the naturally occurring sample. The sample and the spike are then co-eluted from the column enabling precise and accurate quantitation via IDMS. The SPE methods in conjunction with IDMS eliminate concerns of incomplete elution, matrix and sorbent effects, and MS drift. For accurate quantitation with IDMS, the isotopic contribution of all atoms in the target molecule must be statistically taken into account. This paper describes two newly developed sample preparation techniques for the analysis of nerve agent surrogates in drinking water as well as statistical probability analysis for proper molecular IDMS. The methods described in this paper demonstrate accurate molecular IDMS using APCI-Q-TOF-MS with limits of quantitation as low as 0.400 mg/kg for glyphosate and 0.031 mg/kg for methylphosphonic acid. Copyright © 2012 John Wiley & Sons, Ltd.
Hughes, Paul; Deng, Wenjie; Olson, Scott C; Coombs, Robert W; Chung, Michael H; Frenkel, Lisa M
2016-03-01
Accurate analysis of minor populations of drug-resistant HIV requires analysis of a sufficient number of viral templates. We assessed the effect of experimental conditions on the analysis of HIV pol 454 pyrosequences generated from plasma using (1) the "Insertion-deletion (indel) and Carry Forward Correction" (ICC) pipeline, which clusters sequence reads using a nonsubstitution approach and can correct for indels and carry forward errors, and (2) the "Primer Identification (ID)" method, which facilitates construction of a consensus sequence to correct for sequencing errors and allelic skewing. The Primer ID and ICC methods produced similar estimates of viral diversity, but differed in the number of sequence variants generated. Sequence preparation for ICC was comparably simple, but was limited by an inability to assess the number of templates analyzed and allelic skewing. The more costly Primer ID method corrected for allelic skewing and provided the number of viral templates analyzed, which revealed that amplifiable HIV templates varied across specimens and did not correlate with clinical viral load. This latter observation highlights the value of the Primer ID method, which by determining the number of templates amplified, enables more accurate assessment of minority species in the virus population, which may be relevant to prescribing effective antiretroviral therapy.
NASA Astrophysics Data System (ADS)
Bell, L. R.; Dowling, J. A.; Pogson, E. M.; Metcalfe, P.; Holloway, L.
2017-01-01
Accurate, efficient auto-segmentation methods are essential for the clinical efficacy of adaptive radiotherapy delivered with highly conformal techniques. Current atlas-based auto-segmentation techniques are adequate in this respect, however they fail to account for inter-observer variation. An atlas-based segmentation method that incorporates inter-observer variation is proposed. This method is validated for a whole breast radiotherapy cohort containing 28 CT datasets with CTVs delineated by eight observers. To optimise atlas accuracy, the cohort was divided into categories by mean body mass index and laterality, with atlases generated for each in a leave-one-out approach. Observer CTVs were merged and thresholded to generate an auto-segmentation model representing both inter-observer and inter-patient differences. For each category, the atlas was registered to the left-out dataset to enable propagation of the auto-segmentation from atlas space. Auto-segmentation time was recorded. The segmentation was compared to the gold-standard contour using the Dice similarity coefficient (DSC) and mean absolute surface distance (MASD). Comparison with the smallest and largest CTV was also made. This atlas-based auto-segmentation method incorporating inter-observer variation was shown to be efficient (<4 min) and accurate for whole breast radiotherapy, with good agreement (DSC > 0.7, MASD < 9.3 mm) between the auto-segmented contours and CTV volumes.
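The Dice similarity coefficient used as the agreement metric above is straightforward to compute; a minimal sketch, with segmentations represented as sets of voxel coordinates, follows. The example masks are invented for illustration.

```python
# Illustrative computation of the Dice similarity coefficient (DSC) used to
# compare an auto-segmented contour with a gold-standard one. Segmentations
# are represented here as sets of voxel coordinates.
def dice(a, b):
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

# Two hypothetical 2-D masks sharing 3 of 4 voxels each.
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
gold = {(0, 1), (1, 0), (1, 1), (2, 1)}
assert dice(auto, auto) == 1.0
assert abs(dice(auto, gold) - 0.75) < 1e-12  # 2*3 / (4+4)
```

A DSC > 0.7, as reported in the study, is a commonly used threshold for acceptable overlap in radiotherapy contouring.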
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using the Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as the Principal Component Analysis (PCA) are based on utilizing up to second order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
Sanchez Lopez, Hector; Freschi, Fabio; Trakic, Adnan; Smith, Elliot; Herbert, Jeremy; Fuentes, Miguel; Wilson, Stephen; Liu, Limei; Repetto, Maurizio; Crozier, Stuart
2014-05-01
This article aims to present a fast, efficient and accurate multi-layer integral method (MIM) for the evaluation of complex spatiotemporal eddy currents in nonmagnetic and thin volumes of irregular geometries induced by arbitrary arrangements of gradient coils. The volume of interest is divided into a number of layers, wherein the thickness of each layer is assumed to be smaller than the skin depth and where one of the linear dimensions is much smaller than the remaining two dimensions. The diffusion equation of the current density is solved both in time-harmonic and transient domain. The experimentally measured magnetic fields produced by the coil and the induced eddy currents as well as the corresponding time-decay constants were in close agreement with the results produced by the MIM. Relevant parameters such as power loss and force induced by the eddy currents in a split cryostat were simulated using the MIM. The proposed method is capable of accurately simulating the current diffusion process inside thin volumes, such as the magnet cryostat. The method permits the a priori calculation of optimal pre-emphasis parameters. The MIM enables unified designs of gradient coil-magnet structures for an optimal mitigation of deleterious eddy current effects. Copyright © 2013 Wiley Periodicals, Inc.
Rapid detection of potyviruses from crude plant extracts.
Silva, Gonçalo; Oyekanmi, Joshua; Nkere, Chukwuemeka K; Bömer, Moritz; Kumar, P Lava; Seal, Susan E
2018-04-01
Potyviruses (genus Potyvirus; family Potyviridae) are widely distributed and represent one of the most economically important genera of plant viruses. Therefore, their accurate detection is a key factor in developing efficient control strategies. However, this can sometimes be problematic particularly in plant species containing high amounts of polysaccharides and polyphenols such as yam (Dioscorea spp.). Here, we report the development of a reliable, rapid and cost-effective detection method for the two most important potyviruses infecting yam based on reverse transcription-recombinase polymerase amplification (RT-RPA). The developed method, named 'Direct RT-RPA', detects each target virus directly from plant leaf extracts prepared with a simple and inexpensive extraction method avoiding laborious extraction of high-quality RNA. Direct RT-RPA enables the detection of virus-positive samples in under 30 min at a single low operation temperature (37 °C) without the need for any expensive instrumentation. The Direct RT-RPA tests constitute robust, accurate, sensitive and quick methods for detection of potyviruses from recalcitrant plant species. The minimal sample preparation requirements and the possibility of storing RPA reagents without cold chain storage, allow Direct RT-RPA to be adopted in minimally equipped laboratories and with potential use in plant clinic laboratories and seed certification facilities worldwide. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Characterizing the D2 statistic: word matches in biological sequences.
Forêt, Sylvain; Wilson, Susan R; Burden, Conrad J
2009-01-01
Word matches are often used in sequence comparison methods, either as a measure of sequence similarity or in the first search steps of algorithms such as BLAST or BLAT. The D2 statistic is the number of matches of words of k letters between two sequences. Recent advances have been made in the characterization of this statistic and in the approximation of its distribution. Here, these results are extended to the case of approximate word matches. We compute the exact value of the variance of the D2 statistic for the case of a uniform letter distribution, and introduce a method to provide accurate approximations of the variance in the remaining cases. This enables the distribution of D2 to be approximated for typical situations arising in biological research. We apply these results to the identification of cis-regulatory modules, and show that this method detects such sequences with a high accuracy. The ability to approximate the distribution of D2 for both exact and approximate word matches will enable the use of this statistic in a more precise manner for sequence comparison, database searches, and identification of transcription factor binding sites.
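The D2 statistic itself is simple to state and compute: count every matching pair of k-letter word positions between the two sequences. A minimal sketch (exact matches only, not the approximate-match extension the paper develops):

```python
# Minimal illustration of the D2 statistic: the number of matches of words
# (k-mers) of length k between two sequences, counting every matching pair
# of positions. This covers exact matches only.
def d2(seq_a, seq_b, k):
    counts = {}
    for i in range(len(seq_a) - k + 1):
        w = seq_a[i:i + k]
        counts[w] = counts.get(w, 0) + 1
    return sum(counts.get(seq_b[j:j + k], 0)
               for j in range(len(seq_b) - k + 1))

# "AC" occurs twice in the first sequence and once in the second; each
# co-occurring pair of word positions contributes one match.
assert d2("ACAC", "ACGT", 2) == 2
# Two identical homopolymers of length 4 share 3 x 3 = 9 word-pair matches.
assert d2("AAAA", "AAAA", 2) == 9
```

The distributional results in the paper concern how such counts behave under random letter models, which is what makes D2 usable as a calibrated similarity score.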
NASA Astrophysics Data System (ADS)
Utama, M. Iqbal Bakti; Lu, Xin; Zhan, Da; Ha, Son Tung; Yuan, Yanwen; Shen, Zexiang; Xiong, Qihua
2014-10-01
Patterning two-dimensional materials into specific spatial arrangements and geometries is essential for both fundamental studies of materials and practical applications in electronics. However, the currently available patterning methods generally require etching steps that rely on complicated and expensive procedures. We report here a facile patterning method for atomically thin MoSe2 films using stripping with an SU-8 negative resist layer exposed to electron beam lithography. Additional steps of chemical and physical etching were not necessary in this SU-8 patterning method. The SU-8 patterning was used to define a ribbon channel from a field effect transistor of MoSe2 film, which was grown by chemical vapor deposition. The narrowing of the conduction channel area with SU-8 patterning was crucial in suppressing the leakage current within the device, thereby allowing a more accurate interpretation of the electrical characterization results from the sample. An electrical transport study, enabled by the SU-8 patterning, showed a variable range hopping behavior at high temperatures.
NASA Astrophysics Data System (ADS)
Lozano-Vega, Gildardo; Benezeth, Yannick; Marzani, Franck; Boochs, Frank
2014-09-01
Accurate recognition of airborne pollen taxa is crucial for understanding and treating allergic diseases which affect an important proportion of the world population. Modern computer vision techniques enable the detection of discriminant characteristics. Apertures are among the important characteristics which have not been adequately explored until now. A flexible method of detection, localization, and counting of apertures of different pollen taxa with varying appearances is proposed. Aperture description is based on primitive images following the bag-of-words strategy. A confidence map is estimated based on the classification of sampled regions. The method is designed to be extended modularly to new aperture types employing the same algorithm by building individual classifiers. The method was evaluated on the top five allergenic pollen taxa in Germany, and its robustness to unseen particles was verified.
Wells, David B; Bhattacharya, Swati; Carr, Rogan; Maffeo, Christopher; Ho, Anthony; Comer, Jeffrey; Aksimentiev, Aleksei
2012-01-01
Molecular dynamics (MD) simulations have become a standard method for the rational design and interpretation of experimental studies of DNA translocation through nanopores. The MD method, however, offers a multitude of algorithms, parameters, and other protocol choices that can affect the accuracy of the resulting data as well as computational efficiency. In this chapter, we examine the most popular choices offered by the MD method, seeking an optimal set of parameters that enable the most computationally efficient and accurate simulations of DNA and ion transport through biological nanopores. In particular, we examine the influence of short-range cutoff, integration timestep and force field parameters on the temperature and concentration dependence of bulk ion conductivity, ion pairing, ion solvation energy, DNA structure, DNA-ion interactions, and the ionic current through a nanopore.
Incorrect Match Detection Method for Arctic Sea-Ice Reconstruction Using Uav Images
NASA Astrophysics Data System (ADS)
Kim, J.-I.; Kim, H.-C.
2018-05-01
Shapes and surface roughness, which are considered key indicators in understanding Arctic sea-ice, can be measured from the digital surface model (DSM) of the target area. An unmanned aerial vehicle (UAV) flying at low altitude in principle enables accurate DSM generation. However, the characteristics of sea-ice, with its textureless surface and incessant motion, make image matching difficult for DSM generation. In this paper, we propose a method for effectively detecting incorrect matches before correcting a sea-ice DSM derived from UAV images. The proposed method variably adjusts the size of the search window to analyze the matching results of the generated DSM and to distinguish incorrect matches. Experimental results showed that the sea-ice DSM produced large errors along the textureless surfaces, and that the incorrect matches could be effectively detected by the proposed method.
Automated analysis of clonal cancer cells by intravital imaging
Coffey, Sarah Earley; Giedt, Randy J; Weissleder, Ralph
2013-01-01
Longitudinal analyses of single cell lineages over prolonged periods have been challenging, particularly in processes characterized by high cell turnover such as inflammation, proliferation, or cancer. RGB marking has emerged as an elegant approach for enabling such investigations. However, methods for automated image analysis continue to be lacking. Here, to address this, we created a number of different multicolored poly- and monoclonal cancer cell lines for in vitro and in vivo use. To classify these cells in large scale data sets, we subsequently developed and tested an automated algorithm based on hue selection. Our results showed that this method allows accurate analyses at a fraction of the computational time required by more complex color classification methods. Moreover, the methodology should be broadly applicable to both in vitro and in vivo analyses. PMID:24349895
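Hue-based classification of RGB-marked cells, as described above, amounts to binning each cell's colour by its hue. A hedged sketch follows; the hue intervals and labels are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of hue-based cell classification (the intervals and marker
# names here are illustrative assumptions, not the paper's values).
# RGB-marked cells are assigned to clones by which hue interval their mean
# colour falls into; hue is computed with the stdlib colorsys module.
import colorsys

def classify_by_hue(rgb, bins):
    """Return the label of the hue interval containing this colour's hue."""
    h, _s, _v = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb])
    for label, (lo, hi) in bins.items():
        if lo <= h < hi:
            return label
    return "unclassified"

# Hypothetical hue intervals (on colorsys's 0..1 hue scale) for three markers.
bins = {"red": (0.0, 0.08), "green": (0.25, 0.42), "blue": (0.55, 0.72)}
assert classify_by_hue((230, 30, 30), bins) == "red"
assert classify_by_hue((30, 220, 40), bins) == "green"
assert classify_by_hue((40, 40, 230), bins) == "blue"
```

Because hue lookup is a single table scan per cell, this kind of classifier runs far faster than full multi-channel colour clustering, which is consistent with the speed advantage the abstract reports.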
Fast and accurate enzyme activity measurements using a chip-based microfluidic calorimeter.
van Schie, Morten M C H; Ebrahimi, Kourosh Honarmand; Hagen, Wilfred R; Hagedoorn, Peter-Leon
2018-03-01
Recent developments in microfluidic and nanofluidic technologies have resulted in development of new chip-based microfluidic calorimeters with potential use in different fields. One application would be the accurate high-throughput measurement of enzyme activity. Calorimetry is a generic way to measure activity of enzymes, but unlike conventional calorimeters, chip-based calorimeters can be easily automated and implemented in high-throughput screening platforms. However, application of chip-based microfluidic calorimeters to measure enzyme activity has been limited due to problems associated with miniaturization such as incomplete mixing and a decrease in volumetric heat generated. To address these problems we introduced a calibration method and devised a convenient protocol for using a chip-based microfluidic calorimeter. Using the new calibration method, the progress curve of alkaline phosphatase, which has product inhibition for phosphate, measured by the calorimeter was the same as that recorded by UV-visible spectroscopy. Our results may enable use of current chip-based microfluidic calorimeters in a simple manner as a tool for high-throughput screening of enzyme activity with potential applications in drug discovery and enzyme engineering. Copyright © 2017. Published by Elsevier Inc.
3-D rigid body tracking using vision and depth sensors.
Gedik, O Serdar; Alatan, A Aydın
2013-10-01
In robotics and augmented reality applications, model-based 3-D tracking of rigid objects is generally required. Accurate pose estimates are needed to increase reliability and decrease overall jitter. Among the many pose estimation solutions in the literature, pure vision-based 3-D trackers require either manual initialization or offline training stages. On the other hand, trackers relying on pure depth sensors are not suitable for AR applications. An automated 3-D tracking algorithm, which is based on fusion of vision and depth sensors via an extended Kalman filter, is proposed in this paper. A novel measurement-tracking scheme, which is based on estimation of optical flow using intensity and shape index map data of a 3-D point cloud, increases 2-D, as well as 3-D, tracking performance significantly. The proposed method requires neither manual initialization of pose nor offline training, while enabling highly accurate 3-D tracking. The accuracy of the proposed method is tested against a number of conventional techniques, and a superior performance is clearly observed, both objectively via error metrics and subjectively in the rendered scenes.
An Overview of Distributed Microgrid State Estimation and Control for Smart Grids
Rana, Md Masud; Li, Li
2015-01-01
Given the significant concerns regarding carbon emission from the fossil fuels, global warming and energy crisis, the renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to the long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. Then this article proposes a discrete-time linear quadratic regulation to control the state deviations of the microgrid incorporating multiple DERs. Therefore, integrating these two approaches with application to the smart grid forms a novel contribution in the green energy and control research communities. Finally, the simulation results show that the proposed KF based microgrid SE and control algorithm provides an accurate SE and control compared with the existing method. PMID:25686316
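The Kalman-filter building block underlying the state estimator above can be illustrated in one dimension. This is a generic textbook KF sketch, not the paper's accuracy-dependent filter or its LQR controller; the noise values are invented for the example.

```python
# A minimal scalar Kalman filter sketch to illustrate the state-estimation
# building block the paper builds on (the paper's accuracy-dependent KF and
# LQR controller are more elaborate; this is a generic textbook KF with
# identity dynamics and made-up noise parameters).
def kalman_1d(z_seq, x0, p0, q, r):
    """Filter noisy measurements z_seq of a (nearly) constant state."""
    x, p = x0, p0
    estimates = []
    for z in z_seq:
        p = p + q            # predict: state unchanged, uncertainty grows
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # update with the measurement residual
        p = (1.0 - k) * p    # posterior uncertainty shrinks
        estimates.append(x)
    return estimates

# Noisy sensor readings around a true state of 5.0 (e.g., a bus voltage).
z = [5.2, 4.8, 5.1, 4.9, 5.05, 4.95]
est = kalman_1d(z, x0=0.0, p0=1.0, q=1e-4, r=0.1)
assert abs(est[-1] - 5.0) < 0.2  # converges near the true state
```

In the paper's microgrid setting the state is a vector of bus quantities and the dynamics and measurement matrices come from the grid model, but the predict/update cycle is the same.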
78 FR 56174 - In-Core Thermocouples at Different Elevations and Radial Positions in Reactor Core
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-12
...in-core thermocouples at different elevations and radial positions throughout the reactor core to enable NPP operators to accurately...
Kolb, Brian; Lentz, Levi C.; Kolpak, Alexie M.
2017-04-26
Modern ab initio methods have rapidly increased our understanding of solid state materials properties, chemical reactions, and the quantum interactions between atoms. However, poor scaling often renders direct ab initio calculations intractable for large or complex systems. There are two obvious avenues through which to remedy this problem: (i) develop new, less expensive methods to calculate system properties, or (ii) make existing methods faster. This paper describes an open source framework designed to pursue both of these avenues. PROPhet (short for PROPerty Prophet) utilizes machine learning techniques to find complex, non-linear mappings between sets of material or system properties. The result is a single code capable of learning analytical potentials, non-linear density functionals, and other structure-property or property-property relationships. These capabilities enable highly accurate mesoscopic simulations, facilitate computation of expensive properties, and enable the development of predictive models for systematic materials design and optimization. Here, this work explores the coupling of machine learning to ab initio methods through means both familiar (e.g., the creation of various potentials and energy functionals) and less familiar (e.g., the creation of density functionals for arbitrary properties), serving both to demonstrate PROPhet's ability to create exciting post-processing analysis tools and to open the door to improving ab initio methods themselves with these powerful machine learning techniques.
Fast left ventricle tracking in CMR images using localized anatomical affine optical flow
NASA Astrophysics Data System (ADS)
Queirós, Sandro; Vilaça, João. L.; Morais, Pedro; Fonseca, Jaime C.; D'hooge, Jan; Barbosa, Daniel
2015-03-01
In daily cardiology practice, assessment of left ventricular (LV) global function using non-invasive imaging remains central for the diagnosis and follow-up of patients with cardiovascular diseases. Despite the different methodologies currently accessible for LV segmentation in cardiac magnetic resonance (CMR) images, a fast and complete LV delineation is still limitedly available for routine use. In this study, a localized anatomically constrained affine optical flow method is proposed for fast and automatic LV tracking throughout the full cardiac cycle in short-axis CMR images. Starting from an automatically delineated LV in the end-diastolic frame, the endocardial and epicardial boundaries are propagated by estimating the motion between adjacent cardiac phases using optical flow. In order to reduce the computational burden, the motion is only estimated in an anatomical region of interest around the tracked boundaries and subsequently integrated into a local affine motion model. Such localized estimation enables the capture of complex motion patterns, while still being spatially consistent. The method was validated on 45 CMR datasets taken from the 2009 MICCAI LV segmentation challenge. The proposed approach proved to be robust and efficient, with an average distance error of 2.1 mm and a correlation with reference ejection fraction of 0.98 (1.9 +/- 4.5%). Moreover, it proved to be fast, taking 5 seconds for the tracking of a full 4D dataset (30 ms per image). Overall, a novel fast, robust and accurate LV tracking methodology was proposed, enabling accurate assessment of relevant global function cardiac indices, such as volumes and ejection fraction.
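The core numerical step described above, integrating local optical-flow point displacements into a single affine motion model and using it to propagate a contour, can be sketched as follows. This is an illustrative least-squares formulation, not the authors' code; function names and the assumption of already-matched point sets are mine.

```python
import numpy as np

def fit_affine_motion(src, dst):
    """Least-squares 2D affine model (A, t) mapping src points to dst points.

    src, dst: (N, 2) arrays of matched coordinates, e.g. point positions and
    their optical-flow-displaced counterparts inside a local ROI."""
    n = src.shape[0]
    # Design matrix [x y 1] so that dst = src @ A.T + t
    X = np.hstack([src, np.ones((n, 1))])             # (N, 3)
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2)
    A, t = params[:2].T, params[2]
    return A, t

def propagate_contour(contour, A, t):
    """Move contour points (N, 2) to the next cardiac phase."""
    return contour @ A.T + t
```

Fitting one affine model per boundary, rather than keeping the raw per-pixel flow, is what keeps the propagated contour spatially consistent.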
Liu, Chan-Chan; Cheng, Ming-En; Peng, Huasheng; Duan, Hai-Yan; Huang, Luqi
2015-05-01
Authentication is the first priority when evaluating the quality of Chinese herbal medicines, particularly highly toxic medicines. The most commonly used authentication methods are morphological identification and microscopic identification. Unfortunately, these methods could not effectively evaluate some herbs with complex interior structures, such as root of Aconitum species with a circular conical shape and an interior structure with successive changes. Defining the part that should be selected as the standard plays an essential role in accurate microscopic identification. In this study, we first present a visual 3D model of Aconitum carmichaeli Debx. constructed from microscopic analysis of serial sections. Based on this model, we concluded that the point of largest root diameter should be used as the standard for comparison and identification. The interior structure at this point is reproducible and its shape and appearance can easily be used to distinguish among species. We also report details of the interior structures of parts not shown in the 3D model, such as stone cells and cortical thickness. To demonstrate the usefulness of the results from the 3D model, we have distinguished the microscopic structures, at their largest segments, of the other three Aconitum species used as local habitat species of Caowu. This work provides the basis for resolution of some debate regarding the microstructural differences among these species. Thus, we conclude that the 3D model composed of serial sections has enabled the selection of a standard cross-section that will enable the accurate identification of Aconitum species in Chinese medicine. © 2015 Wiley Periodicals, Inc.
de Vries, W H K; Veeger, H E J; Cutti, A G; Baten, C; van der Helm, F C T
2010-07-20
Inertial Magnetic Measurement Systems (IMMS) are becoming increasingly popular by allowing for measurements outside the motion laboratory. The latest models enable long term, accurate measurement of segment motion in terms of joint angles, if initial segment orientations can accurately be determined. The standard procedure for definition of segmental orientation is based on the measurement of positions of bony landmarks (BLM). However, IMMS do not deliver position information, so an alternative method to establish IMMS based, anatomically understandable segment orientations is proposed. For five subjects, IMMS recordings were collected in a standard anatomical position for definition of static axes, and during a series of standardized motions for the estimation of kinematic axes of rotation. For all axes, the intra- and inter-individual dispersion was estimated. Subsequently, local coordinate systems (LCS) were constructed on the basis of the combination of IMMS axes with the lowest dispersion and compared with BLM based LCS. The repeatability of the method appeared to be high; for every segment at least two axes could be determined with a dispersion of at most 3.8 degrees. Comparison of IMMS based with BLM based LCS yielded compatible results for the thorax, but less compatible results for the humerus, forearm and hand, where differences in orientation rose to 17.2 degrees. Although different from the 'gold standard' BLM based LCS, IMMS based LCS can be constructed repeatably, enabling the estimation of segment orientations outside the laboratory. A procedure for the definition of local reference frames using IMMS is proposed. © 2010 Elsevier Ltd. All rights reserved.
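Combining two measured axes into an orthonormal local coordinate system, the step where the lowest-dispersion IMMS axes are turned into an LCS, is typically done with cross products. A minimal sketch (the axis roles and function name are illustrative, not the paper's convention):

```python
import numpy as np

def lcs_from_axes(primary, secondary):
    """Orthonormal LCS from two measured, possibly non-orthogonal axes.

    The primary axis is kept exactly as the z axis; the secondary axis only
    fixes the plane containing x. Returns a 3x3 matrix with columns x, y, z."""
    z = primary / np.linalg.norm(primary)
    x = np.cross(secondary, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)        # completes a right-handed frame
    return np.column_stack([x, y, z])
```

Because the primary axis is reproduced exactly, dispersion in the secondary axis only perturbs the rotation about the primary one, which is why the lowest-dispersion axis should be chosen as primary.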
A Proposal for Modeling Real Hardware, Weather and Marine Conditions for Underwater Sensor Networks
Climent, Salvador; Capella, Juan Vicente; Blanc, Sara; Perles, Angel; Serrano, Juan José
2013-01-01
Network simulators are useful for researching protocol performance, appraising new hardware capabilities and evaluating real application scenarios. However, these tasks can only be achieved when using accurate models and real parameters that enable the extraction of trustworthy results and conclusions. This paper presents an underwater wireless sensor network ecosystem for the ns-3 simulator. This ecosystem is composed of a new energy-harvesting model and a low-cost, low-power underwater wake-up modem model that, alongside existing models, enables the performance of accurate simulations by providing real weather and marine conditions from the location where the real application is to be deployed. PMID:23748171
Overview of aerothermodynamic loads definition study
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.
1989-01-01
Over the years, NASA has been conducting the Advanced Earth-to-Orbit (AETO) Propulsion Technology Program to provide the knowledge, understanding, and design methodology that will allow the development of advanced Earth-to-orbit propulsion systems with high performance, extended service life, automated operations, and diagnostics for in-flight health monitoring. The objective of the Aerothermodynamic Loads Definition Study is to develop methods to more accurately predict the operating environment in AETO propulsion systems, such as the Space Shuttle Main Engine (SSME) powerhead. The approach taken consists of two parts: to modify, apply, and disseminate existing computational fluid dynamics tools in response to current needs and to develop new technology that will enable more accurate computation of the time averaged and unsteady aerothermodynamic loads in the SSME powerhead. The software tools are detailed. Significant progress was made in the area of turbomachinery, where there is an overlap between the AETO efforts and research in the aeronautical gas turbine field.
Brandenburg, Jan Gerit; Caldeweyher, Eike; Grimme, Stefan
2016-06-21
We extend the recently introduced PBEh-3c global hybrid density functional [S. Grimme et al., J. Chem. Phys., 2015, 143, 054107] by a screened Fock exchange variant based on the Henderson-Janesko-Scuseria exchange hole model. While the excellent performance of the global hybrid is maintained for small covalently bound molecules, its performance for computed condensed phase mass densities is further improved. Most importantly, a speed up of 30 to 50% can be achieved and especially for small orbital energy gap cases, the method is numerically much more robust. The latter point is important for many applications, e.g., for metal-organic frameworks, organic semiconductors, or protein structures. This enables an accurate density functional based electronic structure calculation of a full DNA helix structure on a single core desktop computer which is presented as an example in addition to comprehensive benchmark results.
Lee, Mi Kyung; Coker, David F
2016-08-18
An accurate approach for computing intermolecular and intrachromophore contributions to spectral densities to describe the electronic-nuclear interactions relevant for modeling excitation energy transfer processes in light harvesting systems is presented. The approach is based on molecular dynamics (MD) calculations of classical correlation functions of long-range contributions to excitation energy fluctuations and a separate harmonic analysis and single-point gradient quantum calculations for electron-intrachromophore vibrational couplings. A simple model is also presented that enables detailed analysis of the shortcomings of standard MD-based excitation energy fluctuation correlation function approaches. The method introduced here avoids these problems, and its reliability is demonstrated in accurate predictions for bacteriochlorophyll molecules in the Fenna-Matthews-Olson pigment-protein complex, where excellent agreement with experimental spectral densities is found. This efficient approach can provide instantaneous spectral densities for treating the influence of fluctuations in environmental dissipation on fast electronic relaxation.
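The MD-based route from a trajectory of excitation-energy fluctuations to a spectral density can be sketched as a cosine transform of the classical autocorrelation function with a harmonic quantum-correction prefactor βω/2. This is one common convention, not the paper's exact prescription (which treats intermolecular and intrachromophore contributions separately); names and units here are illustrative.

```python
import numpy as np

def spectral_density(gap_traj, dt, omegas, beta=1.0):
    """J(w) from a classical trajectory of energy-gap fluctuations dU(t).

    Uses J(w) ~ (beta*w/2) * integral cos(w t) <dU(0) dU(t)> dt, with an
    unbiased autocorrelation estimate and a simple rectangle-rule integral."""
    du = gap_traj - gap_traj.mean()
    n = len(du)
    # unbiased autocorrelation C(t) for lags 0 .. n-1
    c = np.correlate(du, du, mode="full")[n - 1:] / np.arange(n, 0, -1)
    t = np.arange(n) * dt
    return np.array([beta * w / 2.0 * np.sum(np.cos(w * t) * c) * dt
                     for w in omegas])
```

For a trajectory dominated by one vibrational frequency, the resulting J(ω) peaks at that frequency, which is how such plots are compared against experimental spectral densities.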
Simulating Colour Vision Deficiency from a Spectral Image.
Shrestha, Raju
2016-01-01
People with colour vision deficiency (CVD) have difficulty seeing full colour contrast and can miss some of the features in a scene. As a part of universal design, researchers have been working on how to modify and enhance the colour of images in order to make them see the scene with good contrast. For this, it is important to know how the original colour image is seen by different individuals with CVD. This paper proposes a methodology to simulate accurate colour deficient images from a spectral image using cone sensitivity of different cases of deficiency. As the method enables generation of accurate colour deficient images, the methodology is believed to help better understand the limitations of colour vision deficiency and that in turn leads to the design and development of more effective imaging technologies for better and wider accessibility in the context of universal design.
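The pipeline above, from spectral image through cone sensitivities to a deficiency simulation, can be sketched in a toy form. The protanopia step here simply collapses the L channel onto a scaled M channel so the chosen white point is preserved; the paper uses proper per-case deficiency models, so treat this as a structural illustration only.

```python
import numpy as np

def cone_responses(spectral_img, sensitivities):
    """LMS cone excitations from a spectral image.

    spectral_img: (H, W, B) radiance/reflectance per wavelength band.
    sensitivities: (3, B) L, M, S cone sensitivity curves.
    Returns an (H, W, 3) LMS image."""
    return np.einsum('hwb,cb->hwc', spectral_img, sensitivities)

def simulate_protanopia(lms, white_lms):
    """Crude L-cone-loss simulation: replace L with the scaled M response
    that maps the given white point onto itself (illustrative only)."""
    out = lms.copy()
    out[..., 0] = lms[..., 1] * (white_lms[0] / white_lms[1])
    return out
```

Note that any two stimuli differing only in their L excitation map to the same simulated image, which is exactly the confusion behaviour a dichromat simulation must reproduce.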
NASA Astrophysics Data System (ADS)
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For the purpose of forest management, high resolution LIDAR and optical remote sensing imageries are used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusted dominant scales calculation method and a new crown horizontal cutting method of tree canopy height model (CHM) to detect and delineate tree crowns from LIDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and tree crowns consist of multi-scale branches. The major concept of the method is to develop an automatic selecting strategy of feature scale on CHM, and a multi-scale morphological reconstruction-open crown decomposition (MRCD) to get morphological multi-scale features of CHM by: cutting CHM from treetop to the ground; analysing and refining the dominant multiple scales with differential horizontal profiles to get treetops; segmenting the LiDAR CHM using a watershed segmentation approach marked with MRCD treetops. This method has solved the problems of false detection of CHM side-surface extracted by the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of CHM, and enables more accurate detection and segmentation of treetops and crowns. Besides, the MRCD method can also be extended to tree crown extraction from high resolution optical remote sensing imagery. In an experiment on aerial LiDAR CHM of a forest of multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
Sellers, Benjamin D; James, Natalie C; Gobbi, Alberto
2017-06-26
Reducing internal strain energy in small molecules is critical for designing potent drugs. Quantum mechanical (QM) and molecular mechanical (MM) methods are often used to estimate these energies. In an effort to determine which methods offer an optimal balance in accuracy and performance, we have carried out torsion scan analyses on 62 fragments. We compared nine QM and four MM methods to reference energies calculated at a higher level of theory: CCSD(T)/CBS single point energies (coupled cluster with single, double, and perturbative triple excitations at the complete basis set limit) calculated on optimized geometries using MP2/6-311+G**. The results show that both the more recent MP2.X perturbation method as well as MP2/CBS perform quite well. In addition, combining a Hartree-Fock geometry optimization with a MP2/CBS single point energy calculation offers a fast and accurate compromise when dispersion is not a key energy component. Among MM methods, the OPLS3 force field accurately reproduces CCSD(T)/CBS torsion energies on more test cases than the MMFF94s or Amber12:EHT force fields, which struggle with aryl-amide and aryl-aryl torsions. Using experimental conformations from the Cambridge Structural Database, we highlight three example structures for which OPLS3 significantly overestimates the strain. The energies and conformations presented should enable scientists to estimate the expected error for the methods described and we hope will spur further research into QM and MM methods.
Estimation of maternal and neonatal mortality at the subnational level in Liberia.
Moseson, Heidi; Massaquoi, Moses; Bawo, Luke; Birch, Linda; Dahn, Bernice; Zolia, Yah; Barreix, Maria; Gerdts, Caitlin
2014-11-01
To establish representative local-area baseline estimates of maternal and neonatal mortality using a novel adjusted sisterhood method. The status of maternal and neonatal health in Bomi County, Liberia, was investigated in June 2013 using a population-based survey (n=1985). The standard direct sisterhood method was modified to account for place and time of maternal death to enable calculation of subnational estimates. The modified method of measuring maternal mortality successfully enabled the calculation of area-specific estimates. Of 71 reported deaths of sisters, 18 (25.4%) were due to pregnancy-related causes and had occurred in the past 3 years in Bomi County. The estimated maternal mortality ratio was 890 maternal deaths for every 100 000 live births (95% CI, 497-1301). The neonatal mortality rate was estimated to be 47 deaths for every 1000 live births (95% CI, 42-52). In total, 322 (16.9%) of 1900 women with accurate age data reported having had a stillbirth. The modified direct sisterhood method may be useful to other countries seeking a more regionally nuanced understanding of areas in which neonatal and maternal mortality levels still need to be reduced to meet Millennium Development Goals. Copyright © 2014 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.
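The two headline indicators reduce to simple rates once deaths and live births have been counted. A sketch with hypothetical counts (the abstract does not report the survey's live-birth denominator, so the numbers below are illustrative only):

```python
def maternal_mortality_ratio(maternal_deaths, live_births):
    """Maternal deaths per 100 000 live births."""
    return 100_000 * maternal_deaths / live_births

def neonatal_mortality_rate(neonatal_deaths, live_births):
    """Neonatal deaths per 1 000 live births."""
    return 1_000 * neonatal_deaths / live_births
```

The sisterhood adjustment in the paper affects which deaths enter the numerator (restricting by place and time of death), not the form of these rates.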
NASA Astrophysics Data System (ADS)
Siok, Katarzyna; Jenerowicz, Agnieszka; Woroszkiewicz, Małgorzata
2017-07-01
Archival aerial photographs are often the only reliable source of information about the area. However, these data are single-band data that do not allow unambiguous detection of particular forms of land cover. Thus, the authors of this article seek to develop a method of coloring panchromatic aerial photographs, which enable increasing the spectral information of such images. The study used data integration algorithms based on pansharpening, implemented in commonly used remote sensing programs: ERDAS, ENVI, and PCI. Aerial photos and Landsat multispectral data recorded in 1987 and 2016 were chosen. This study proposes the use of modified intensity-hue-saturation and Brovey methods. The use of these methods enabled the addition of red-green-blue (RGB) components to monochrome images, thus enhancing their interpretability and spectral quality. The limitations of the proposed method relate to the availability of RGB satellite imagery, the accuracy of mutual orientation of the aerial and the satellite data, and the imperfection of archival aerial photographs. Therefore, it should be expected that the results of coloring will not be perfect compared to the results of the fusion of recent data with a similar ground sampling resolution, but still, they will allow a more accurate and efficient classification of land cover registered on archival aerial photographs.
Investigation of metabolic objectives in cultured hepatocytes.
Uygun, Korkut; Matthew, Howard W T; Huang, Yinlun
2007-06-15
Using optimization based methods to predict fluxes in metabolic flux balance models has been a successful approach for some microorganisms, enabling construction of in silico models and even inference of some regulatory motifs. However, this success has not been translated to mammalian cells. The lack of knowledge about metabolic objectives in mammalian cells is a major obstacle that prevents utilization of various metabolic engineering tools and methods for tissue engineering and biomedical purposes. In this work, we investigate and identify possible metabolic objectives for hepatocytes cultured in vitro. To achieve this goal, we present a special data-mining procedure for identifying metabolic objective functions in mammalian cells. This multi-level optimization based algorithm enables identifying the major fluxes in the metabolic objective from MFA data in the absence of information about critical active constraints of the system. Further, once the objective is determined, active flux constraints can also be identified and analyzed. This information can be potentially used in a predictive manner to improve cell culture results or clinical metabolic outcomes. As a result of the application of this method, it was found that in vitro cultured hepatocytes maximize oxygen uptake, coupling of urea and TCA cycles, and synthesis of serine and urea. Selection of these fluxes as the metabolic objective enables accurate prediction of the flux distribution in the system given a limited amount of flux data; thus presenting a workable in silico model for cultured hepatocytes. An overall picture of homeostasis also emerges from the findings.
Interactive-rate Motion Planning for Concentric Tube Robots.
Torres, Luis G; Baykal, Cenk; Alterovitz, Ron
2014-05-01
Concentric tube robots may enable new, safer minimally invasive surgical procedures by moving along curved paths to reach difficult-to-reach sites in a patient's anatomy. Operating these devices is challenging due to their complex, unintuitive kinematics and the need to avoid sensitive structures in the anatomy. In this paper, we present a motion planning method that computes collision-free motion plans for concentric tube robots at interactive rates. Our method's high speed enables a user to continuously and freely move the robot's tip while the motion planner ensures that the robot's shaft does not collide with any anatomical obstacles. Our approach uses a highly accurate mechanical model of tube interactions, which is important since small movements of the tip position may require large changes in the shape of the device's shaft. Our motion planner achieves its high speed and accuracy by combining offline precomputation of a collision-free roadmap with online position control. We demonstrate our interactive planner in a simulated neurosurgical scenario where a user guides the robot's tip through the environment while the robot automatically avoids collisions with the anatomical obstacles.
An M-estimator for reduced-rank system identification.
Chen, Shaojie; Liu, Kai; Yang, Yuguang; Xu, Yuting; Lee, Seonjoo; Lindquist, Martin; Caffo, Brian S; Vogelstein, Joshua T
2017-01-15
High-dimensional time-series data from a wide variety of domains, such as neuroscience, are being generated every day. Fitting statistical models to such data, to enable parameter estimation and time-series prediction, is an important computational primitive. Existing methods, however, are unable to cope with the high-dimensional nature of these data, due to both computational and statistical reasons. We mitigate both kinds of issues by proposing an M-estimator for Reduced-rank System IDentification (MR. SID). A combination of low-rank approximations, ℓ1 and ℓ2 penalties, and some numerical linear algebra tricks yields an estimator that is computationally efficient and numerically stable. Simulations and real data examples demonstrate the usefulness of this approach in a variety of problems. In particular, we demonstrate that MR. SID can accurately estimate spatial filters, connectivity graphs, and time-courses from native resolution functional magnetic resonance imaging data. MR. SID therefore enables big time-series data to be analyzed using standard methods, readying the field for further generalizations including non-linear and non-Gaussian state-space models.
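The low-rank-plus-ℓ2 flavour of such an estimator can be sketched for the simplest case, the transition matrix of a linear state-space model. This illustrative version combines a ridge solution with an Eckart-Young rank truncation; MR. SID's ℓ1 penalty, observation model, and numerical tricks are not reproduced here.

```python
import numpy as np

def reduced_rank_transition(X, rank, lam=1e-3):
    """Rank-constrained, ridge-penalized estimate of A in x_{t+1} = A x_t + e.

    X: (T, p) multivariate time series. Returns a (p, p) rank-`rank` matrix."""
    X0, X1 = X[:-1], X[1:]
    p = X.shape[1]
    # ridge solution: A = X1^T X0 (X0^T X0 + lam I)^{-1}
    A = X1.T @ X0 @ np.linalg.inv(X0.T @ X0 + lam * np.eye(p))
    # project onto the best rank-r approximation (Eckart-Young)
    U, s, Vt = np.linalg.svd(A)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```

The rank truncation is what makes the estimate usable when p is large relative to T: far fewer effective parameters than the p^2 entries of an unconstrained transition matrix.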
NASA Astrophysics Data System (ADS)
Reece, Amy E.
The microfabrication of microfluidic control systems and advances in molecular amplification tools have enabled the miniaturization of single cell analytical platforms for the efficient, highly selective enumeration and molecular characterization of rare and diseased cells from clinical samples. In many cases, the high-throughput nature of microfluidic inertial focusing has enabled the popularization of this new class of Lab-on-a-Chip devices that exhibit numerous advantages over conventional methods as prognostic and diagnostic tools. Inertial focusing is the passive, sheathless alignment of particles and cells to precise spatiotemporal equilibrium positions that arise from a force balance between opposing inertial lift forces and hydrodynamic repulsions. The applicability of inertial focusing to a spectrum of filtration, separation and encapsulation challenges places heavy emphasis upon the accurate description of the hydrodynamic forces responsible for predictable inertial focusing behavior. These inertial focusing fundamentals, limitations and their applications are studied extensively throughout this work.
Extending Strong Scaling of Quantum Monte Carlo to the Exascale
NASA Astrophysics Data System (ADS)
Shulenburger, Luke; Baczewski, Andrew; Luo, Ye; Romero, Nichols; Kent, Paul
Quantum Monte Carlo is one of the most accurate and most computationally expensive methods for solving the electronic structure problem. In spite of its significant computational expense, its massively parallel nature is ideally suited to petascale computers, which have enabled a wide range of applications to relatively large molecular and extended systems. Exascale capabilities have the potential to enable the application of QMC to significantly larger systems, capturing much of the complexity of real materials such as defects and impurities. However, both memory and computational demands will require significant changes to current algorithms to realize this possibility. This talk will detail both the causes of the problem and potential solutions. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corp, a wholly owned subsidiary of Lockheed Martin Corp, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Simultaneous Estimation of Withaferin A and Z-Guggulsterone in Marketed Formulation by RP-HPLC.
Agrawal, Poonam; Vegda, Rashmi; Laddha, Kirti
2015-07-01
A simple, rapid, precise and accurate high-performance liquid chromatography (HPLC) method was developed for simultaneous estimation of withaferin A and Z-guggulsterone in a polyherbal formulation containing Withania somnifera and Commiphora wightii. The chromatographic separation was achieved on a Purosphere RP-18 column (particle size 5 µm) with a mobile phase consisting of Solvent A (acetonitrile) and Solvent B (water) with the following gradients: 0-7 min, 50% A in B; 7-9 min, 50-80% A in B; 9-20 min, 80% A in B at a flow rate of 1 mL/min and detection at 235 nm. The marker compounds were well separated on the chromatogram within 20 min. The results obtained indicate accuracy and reliability of the developed simultaneous HPLC method for the quantification of withaferin A and Z-guggulsterone. The proposed method was found to be reproducible, specific, precise and accurate for simultaneous estimation of these marker compounds in a combined dosage form. The HPLC method was appropriate and the two markers are well resolved, enabling efficient quantitative analysis of withaferin A and Z-guggulsterone. The method can be successfully used for quantitative analysis of these two marker constituents in a marketed polyherbal formulation. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Hoang, Tuan L.; Marian, Jaime; Bulatov, Vasily V.; Hosemann, Peter
2015-11-01
An improved version of a recently developed stochastic cluster dynamics (SCD) method (Marian and Bulatov, 2012) [6] is introduced as an alternative to rate theory (RT) methods for solving coupled ordinary differential equation (ODE) systems for irradiation damage simulations. SCD circumvents by design the curse of dimensionality of the variable space that renders traditional ODE-based RT approaches inefficient when handling complex defect population comprised of multiple (more than two) defect species. Several improvements introduced here enable efficient and accurate simulations of irradiated materials up to realistic (high) damage doses characteristic of next-generation nuclear systems. The first improvement is a procedure for efficiently updating the defect reaction-network and event selection in the context of a dynamically expanding reaction-network. Next is a novel implementation of the τ-leaping method that speeds up SCD simulations by advancing the state of the reaction network in large time increments when appropriate. Lastly, a volume rescaling procedure is introduced to control the computational complexity of the expanding reaction-network through occasional reductions of the defect population while maintaining accurate statistics. The enhanced SCD method is then applied to model defect cluster accumulation in iron thin films subjected to triple ion-beam (Fe3+, He+ and H+) irradiations, for which standard RT or spatially-resolved kinetic Monte Carlo simulations are prohibitively expensive.
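The τ-leaping idea described above, advancing the reaction network by firing Poisson-distributed event counts over a macroscopic time increment instead of simulating events one by one, can be illustrated on a single-species birth-death toy model. This is not the SCD defect network; rates, names, and parameters are illustrative.

```python
import numpy as np

def tau_leap_birth_death(n0, birth, death, tau, steps, rng):
    """Tau-leaping for 0 -> X (rate `birth`) and X -> 0 (per-capita `death`).

    Each leap draws Poisson counts for both channels over the increment tau,
    which is accurate while the propensities change little within a leap."""
    n = n0
    for _ in range(steps):
        gains = rng.poisson(birth * tau)
        losses = rng.poisson(death * n * tau)
        n = max(n + gains - losses, 0)  # clamp: population cannot go negative
    return n
```

The speed-up over exact event-by-event simulation grows with the population, which is why leaping matters most at the high damage doses the paper targets.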
Accurate chemical master equation solution using multi-finite buffers
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
Reddy, M Rami; Singh, U C; Erion, Mark D
2004-05-26
Free-energy perturbation (FEP) is considered the most accurate computational method for calculating relative solvation and binding free-energy differences. Despite some success in applying FEP methods to both drug design and lead optimization, FEP calculations are rarely used in the pharmaceutical industry. One factor limiting the use of FEP is its low throughput, which is attributed in part to the dependence of conventional methods on the user's ability to develop accurate molecular mechanics (MM) force field parameters for individual drug candidates and the time required to complete the process. In an attempt to find an FEP method that could eventually be automated, we developed a method that uses quantum mechanics (QM) for treating the solute, MM for treating the solute surroundings, and the FEP method for computing free-energy differences. The thread technique was used in all transformations and proved to be essential for the successful completion of the calculations. Relative solvation free energies for 10 structurally diverse molecular pairs were calculated, and the results were in close agreement with both the calculated results generated by conventional FEP methods and the experimentally derived values. While considerably more CPU demanding than conventional FEP methods, this method (QM/MM-based FEP) alleviates the need for development of molecule-specific MM force field parameters and therefore may enable future automation of FEP-based calculations. Moreover, calculation accuracy should be improved over conventional methods, especially for calculations reliant on MM parameters derived in the absence of experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Han, B; Xing, L
2016-06-15
Purpose: EPID-based patient-specific quality assurance provides verification of the planning setup and delivery process that phantomless QA and log-file based virtual dosimetry methods cannot achieve. We present a method for EPID-based QA utilizing spatially-variant EPID response kernels that allows for direct calculation of the entrance fluence and 3D phantom dose. Methods: An EPID dosimetry system was utilized for 3D dose reconstruction in a cylindrical phantom for the purposes of end-to-end QA. Monte Carlo (MC) methods were used to generate pixel-specific point-spread functions (PSFs) characterizing the spatially non-uniform EPID portal response in the presence of phantom scatter. The spatially-variant PSFs were decomposed into spatially-invariant basis PSFs with the symmetric central-axis kernel as the primary basis kernel and off-axis representing orthogonal perturbations in pixel-space. This compact and accurate characterization enables the use of a modified Richardson-Lucy deconvolution algorithm to directly reconstruct entrance fluence from EPID images without iterative scatter subtraction. High-resolution phantom dose kernels were cogenerated in MC with the PSFs enabling direct recalculation of the resulting phantom dose by rapid forward convolution once the entrance fluence was calculated. A Delta4 QA phantom was used to validate the dose reconstructed in this approach. Results: The spatially-invariant representation of the EPID response accurately reproduced the entrance fluence with >99.5% fidelity with a simultaneous reduction of >60% in computational overhead. 3D dose for 10^6 voxels was reconstructed for the entire phantom geometry. A 3D global gamma analysis demonstrated a >95% pass rate at 3%/3mm. Conclusion: Our approach demonstrates the capabilities of an EPID-based end-to-end QA methodology that is more efficient than traditional EPID dosimetry methods.
Displacing the point of measurement external to the QA phantom reduces the necessary complexity of the phantom itself while offering a method that is highly scalable and inherently generalizable to rotational and trajectory based deliveries. This research was partially supported by Varian.
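Richardson-Lucy deconvolution, the core of the fluence-reconstruction step described above, can be sketched in a few lines. This is a textbook 1D version with an invented symmetric kernel, not the authors' modified, spatially-variant implementation:

```python
def richardson_lucy(observed, psf, iterations=50):
    """Minimal 1D Richardson-Lucy deconvolution.

    observed : measured signal (underlying signal blurred by psf)
    psf      : point-spread function, assumed normalized to sum 1
    Returns a multiplicative-update estimate of the underlying signal.
    """
    def convolve(x, k):
        half = len(k) // 2
        out = [0.0] * len(x)
        for i in range(len(x)):
            for j, kj in enumerate(k):
                idx = i + j - half
                if 0 <= idx < len(x):
                    out[i] += x[idx] * kj
        return out

    psf_mirror = psf[::-1]
    estimate = [1.0] * len(observed)   # flat, strictly positive initial guess
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate
```

For noiseless data the iteration progressively sharpens the estimate back toward the original source; the nonnegativity of the multiplicative update is what makes it attractive for fluence reconstruction.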
NASA Astrophysics Data System (ADS)
Blum, Volker
This talk describes recent advances of a general, efficient, accurate all-electron electronic theory approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark quality codes for materials, but scalable to large system sizes (1,000s of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O(N) hybrid functional based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be efficiently yet accurately applied using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition metal compound based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.
Li, Jun; Jiang, Bin; Song, Hongwei; ...
2015-04-17
Here, we survey the recent advances in theoretical understanding of quantum state resolved dynamics, using the title reactions as examples. It is shown that the progress was made possible by major developments in two areas. First, an accurate analytical representation of many high-level ab initio points over a large configuration space can now be made with high fidelity and the necessary permutation symmetry. The resulting full-dimensional global potential energy surfaces enable dynamical calculations using either quasi-classical trajectory or more importantly quantum mechanical methods. The second advance is the development of accurate and efficient quantum dynamical methods, which are necessary for providing a reliable treatment of quantum effects in reaction dynamics such as tunneling, resonances, and zero-point energy. The powerful combination of the two advances has allowed us to achieve a quantitatively accurate characterization of the reaction dynamics, which unveiled rich dynamical features such as steric steering, strong mode specificity, and bond selectivity. The dependence of reactivity on reactant modes can be rationalized by the recently proposed sudden vector projection model, which attributes the mode specificity and bond selectivity to the coupling of reactant modes with the reaction coordinate at the relevant transition state. The deeper insights provided by these theoretical studies have advanced our understanding of reaction dynamics to a new level.
NASA Astrophysics Data System (ADS)
Lazariev, A.; Allouche, A.-R.; Aubert-Frécon, M.; Fauvelle, F.; Piotto, M.; Elbayed, K.; Namer, I.-J.; van Ormondt, D.; Graveron-Demilly, D.
2011-11-01
High-resolution magic angle spinning (HRMAS) nuclear magnetic resonance (NMR) is playing an increasingly important role for diagnosis. This technique enables setting up metabolite profiles of ex vivo pathological and healthy tissue. The need to monitor diseases and pharmaceutical follow-up requires an automatic quantitation of HRMAS 1H signals. However, for several metabolites, the values of chemical shifts of proton groups may slightly differ according to the micro-environment in the tissue or cells, in particular to its pH. This hampers the accurate estimation of the metabolite concentrations mainly when using quantitation algorithms based on a metabolite basis set: the metabolite fingerprints are not correct anymore. In this work, we propose an accurate method coupling quantum mechanical simulations and quantitation algorithms to handle basis-set changes. The proposed algorithm automatically corrects mismatches between the signals of the simulated basis set and the signal under analysis by maximizing the normalized cross-correlation between the mentioned signals. Optimized chemical shift values of the metabolites are obtained. This method, QM-QUEST, provides more robust fitting while limiting user involvement and respects the correct fingerprints of metabolites. Its efficiency is demonstrated by accurately quantitating 33 signals from tissue samples of human brains with oligodendroglioma, obtained at 11.7 tesla. The corresponding chemical shift changes of several metabolites within the series are also analyzed.
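The alignment step described here, maximizing the normalized cross-correlation between a simulated basis-set signal and the measured signal, can be sketched as a simple integer-shift search. This toy version with invented signals is not the QM-QUEST implementation, which optimizes the chemical shifts of individual proton groups:

```python
def best_shift(reference, signal, max_shift):
    """Return the integer shift of `signal` that maximizes normalized
    cross-correlation (NCC) with `reference` - a toy analogue of
    aligning a simulated metabolite signal to a measured one."""
    def ncc(a, b):
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = sum((x - ma) ** 2 for x in a) ** 0.5
        db = sum((y - mb) ** 2 for y in b) ** 0.5
        return num / (da * db) if da > 0 and db > 0 else 0.0

    best, best_score = 0, float("-inf")
    for shift in range(-max_shift, max_shift + 1):
        # slide one signal against the other and score the overlap
        score = ncc(reference[shift:], signal) if shift >= 0 \
            else ncc(reference, signal[-shift:])
        if score > best_score:
            best, best_score = shift, score
    return best
```

In practice the search would run over fractional chemical-shift offsets per metabolite rather than whole-sample integer shifts, but the maximization criterion is the same.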
Living Labs: overview of ecological approaches for health promotion and rehabilitation.
Korman, M; Weiss, P L; Kizony, R
2016-01-01
The term "Living Lab" was coined to reflect the use of sensors to monitor human behavior in real life environments. Until recently such measurements had been feasible only within experimental laboratory settings. The objective of this paper is to highlight research on health care sensing and monitoring devices that enable direct, objective and accurate capture of real-world functioning. Selected articles exemplifying the key technologies that allow monitoring of the motor-cognitive activity of persons with disabilities during naturally occurring daily experiences in real-life settings are discussed in terms of (1) the ways in which the Living Lab approach has been used to date, (2) limitations related to clinical assessment in rehabilitation settings and (3) three categories of the instruments most commonly used for this purpose: personal technologies, ambient technologies and external assistive systems. Technology's most important influences on clinical practice and rehabilitation are a shift from laboratory-based to field-centered research and a transition from in-clinic performance to daily life activities. Numerous applications show its potential for real-time clinical assessment. Current technological solutions that may provide clinicians with objective, unobtrusive measurements of health and function, as well as tools that support rehabilitation on an individual basis in natural environments, provide an important asset to standard clinical measures. Until recently objective clinical assessment could not be readily performed in a client's daily functional environment. Novel technologies now provide health care sensing and monitoring devices that enable direct, objective and accurate capture of real-world functioning. Such technologies are referred to as a "Living Lab" approach since they enable the capture of objective and non-obtrusive data that clinicians can use to assess performance.
Research and development in this field help clinicians support independence and quality of life for people who have disabilities or who are aging, and promote more effective methods of long-term rehabilitation and maintenance of a healthy lifestyle.
Jorjani, Hadi; Zavolan, Mihaela
2014-04-01
Accurate identification of transcription start sites (TSSs) is an essential step in the analysis of transcription regulatory networks. In higher eukaryotes, the capped analysis of gene expression technology enabled comprehensive annotation of TSSs in genomes such as those of mice and humans. In bacteria, an equivalent approach, termed differential RNA sequencing (dRNA-seq), has recently been proposed, but the application of this approach to a large number of genomes is hindered by the paucity of computational analysis methods. With few exceptions, when the method has been used, annotation of TSSs has been largely done manually. In this work, we present a computational method called 'TSSer' that enables the automatic inference of TSSs from dRNA-seq data. The method rests on a probabilistic framework for identifying both genomic positions that are preferentially enriched in the dRNA-seq data as well as preferentially captured relative to neighboring genomic regions. Evaluating our approach for TSS calling on several publicly available datasets, we find that TSSer achieves high consistency with the curated lists of annotated TSSs, but identifies many additional TSSs. Therefore, TSSer can accelerate genome-wide identification of TSSs in bacterial genomes and can aid in further characterization of bacterial transcription regulatory networks. TSSer is freely available under GPL license at http://www.clipz.unibas.ch/TSSer/index.php
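A drastically simplified sketch of dRNA-seq TSS calling may help fix ideas: positions whose read-start counts are enriched in the treated library relative to the untreated one, and that are local maxima, are flagged. The thresholds and data below are invented, and TSSer itself uses a probabilistic model rather than this ratio heuristic:

```python
def call_tss(plus_lib, minus_lib, ratio_min=5.0, pseudo=1.0):
    """Toy TSS caller over per-position read-start counts.

    plus_lib  : counts from the enriched (TEX-treated) library
    minus_lib : counts from the untreated control library
    Flags positions that are both enriched over the control (with a
    pseudocount to stabilize low-coverage ratios) and local maxima.
    """
    tss = []
    for i in range(len(plus_lib)):
        ratio = (plus_lib[i] + pseudo) / (minus_lib[i] + pseudo)
        left = plus_lib[i - 1] if i > 0 else 0
        right = plus_lib[i + 1] if i < len(plus_lib) - 1 else 0
        if ratio >= ratio_min and plus_lib[i] >= max(left, right):
            tss.append(i)
    return tss
```

A real caller would additionally model strand, neighboring-region capture bias, and replicate consistency, which is exactly the gap the abstract says TSSer fills.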
Geometry of an outcrop-scale duplex in Devonian flysch, Maine
Bradley, D.C.; Bradley, L.M.
1994-01-01
We describe an outcrop-scale duplex consisting of 211 exposed repetitions of a single bed. The duplex marks an early Acadian (Middle Devonian) oblique thrust zone in the Lower Devonian flysch of northern Maine. Detailed mapping at a scale of 1:8 has enabled us to accurately measure parameters such as horse length and thickness, ramp angles and displacements; we compare these and derivative values with those of published descriptions of duplexes, and with theoretical models. Shortening estimates based on line balancing are consistently smaller than those from two methods of area balancing, suggesting that layer-parallel shortening preceded thrusting. © 1994.
Bartschat, Klaus; Kushner, Mark J.
2016-01-01
Electron collisions with atoms, ions, molecules, and surfaces are critically important to the understanding and modeling of low-temperature plasmas (LTPs), and so in the development of technologies based on LTPs. Recent progress in obtaining experimental benchmark data and the development of highly sophisticated computational methods is highlighted. With the cesium-based diode-pumped alkali laser and remote plasma etching of Si3N4 as examples, we demonstrate how accurate and comprehensive datasets for electron collisions enable complex modeling of plasma-using technologies that empower our high-technology–based society. PMID:27317740
High speed, long distance, data transmission multiplexing circuit
Mariotti, Razvan
1991-01-01
A high speed serial data transmission multiplexing circuit, which is operable to accurately transmit data over long distances (up to 3 km), and to multiplex, select and continuously display real time analog signals in a bandwidth from DC to 100 kHz. The circuit is made fault tolerant by use of a programmable flywheel algorithm, which enables the circuit to tolerate one transmission error before losing synchronization of the transmitted frames of data. A method of encoding and framing captured and transmitted data is used which has a low overhead and prevents some particular transmitted data patterns from locking an included detector/decoder circuit.
Modeling 3D PCMI using the Extended Finite Element Method with higher order elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, W.; Spencer, Benjamin W.
2017-03-31
This report documents the recent development to enable XFEM to work with higher order elements. It also demonstrates the application of higher order (quadratic) elements to both 2D and 3D models of PCMI problems, where discrete fractures in the fuel are represented using XFEM. The modeling results demonstrate the ability of the higher order XFEM to accurately capture the effects of a crack on the response in the vicinity of the intersecting surfaces of cracked fuel and cladding, as well as represent smooth responses in the regions away from the crack.
From the Landgrave in Kassel to Isaac Newton
NASA Astrophysics Data System (ADS)
Høg, E.
2018-01-01
Landgrave Wilhelm IV established in 1560 the first permanent astronomical observatory in Europe. When he met the young Tycho Brahe in 1575 he recognized his genius and recommended him warmly to the Danish king Frederik II. Wilhelm and Tycho must share the credit for renewing astronomy with very accurate observations of the positions of stars by new instrumentation and new methods. Tycho's observations of planets during 20 years enabled Johannes Kepler to derive the laws of planetary motion. These laws put Isaac Newton in a position to publish the laws of physical motion and universal gravitation in 1687 - the basis for the technical revolution.
Identification of an Efficient Gene Expression Panel for Glioblastoma Classification
Zelaya, Ivette; Laks, Dan R.; Zhao, Yining; Kawaguchi, Riki; Gao, Fuying; Kornblum, Harley I.; Coppola, Giovanni
2016-01-01
We present here a novel genetic algorithm-based random forest (GARF) modeling technique that enables a reduction in the complexity of large gene disease signatures to highly accurate, greatly simplified gene panels. When applied to 803 glioblastoma multiforme samples, this method allowed the 840-gene Verhaak et al. gene panel (the standard in the field) to be reduced to a 48-gene classifier, while retaining 90.91% classification accuracy, and outperforming the best available alternative methods. Additionally, using this approach we produced a 32-gene panel which allows for better consistency between RNA-seq and microarray-based classifications, improving cross-platform classification retention from 69.67% to 86.07%. A webpage producing these classifications is available at http://simplegbm.semel.ucla.edu. PMID:27855170
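The genetic-algorithm half of GARF can be sketched generically: evolve small feature panels, scoring each candidate panel with a fitness function. The sketch below is a toy with invented parameters, and it takes a user-supplied fitness callable in place of the paper's random-forest classification accuracy:

```python
import random

def ga_feature_select(n_features, fitness, pop_size=20, panel_size=5,
                      generations=30, mut_rate=0.3, rng=None):
    """Toy genetic algorithm that shrinks a large feature space to a
    small panel, in the spirit of (but much simpler than) GARF.
    `fitness` scores a candidate panel; higher is better."""
    rng = rng or random.Random(0)
    pop = [rng.sample(range(n_features), panel_size) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)         # crossover: merge two panels
            pool = list(set(a) | set(b))
            rng.shuffle(pool)
            child = pool[:panel_size]
            while len(child) < panel_size:      # pad if parents overlapped
                child.append(rng.randrange(n_features))
            if rng.random() < mut_rate:         # mutation: swap in a random gene
                child[rng.randrange(panel_size)] = rng.randrange(n_features)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

Plugging in cross-validated random-forest accuracy as the fitness, as the paper does, turns this skeleton into a signature-reduction tool; the panel size plays the role of the 48- and 32-gene panels reported above.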
The research on medical image classification algorithm based on PLSA-BOW model.
Cao, C H; Cao, H L
2016-04-29
With the rapid development of modern medical imaging technology, medical image classification has become increasingly important for medical diagnosis and treatment. To address the problems caused by polysemous words and synonyms, this study combines the bag-of-words model with PLSA (Probabilistic Latent Semantic Analysis) and proposes the PLSA-BOW (Probabilistic Latent Semantic Analysis-Bag of Words) model. We carry the bag-of-words model over from the text domain to the image domain and construct a visual bag-of-words model, which further improves the accuracy of bag-of-words-based classification. The experimental results show that the PLSA-BOW model leads to more accurate medical image classification.
Ouyang, Lei; Yao, Ling; Zhou, Taohong; Zhu, Lihua
2018-10-16
Malachite Green (MG) is a banned pesticide for aquaculture products. As a required inspection item, its fast and accurate determination before the products reach the market is very important. Surface enhanced Raman scattering (SERS) is a promising tool for MG sensing, but several problems must be overcome, such as fairly poor sensitivity and reproducibility, especially laser-induced chemical conversion and photo-bleaching during SERS observation. By using a graphene-wrapped Ag array based flexible membrane sensor, a modified SERS strategy was proposed for the sensitive and accurate detection of MG. The graphene layer functioned as an inert protector impeding chemical conversion of the bioproduct Leucomalachite Green (LMG) to MG during SERS detection, and as a heat transmitter preventing laser-induced photo-bleaching, which enables the separate detection of MG and LMG in fish extracts. The combination of the Ag array and the graphene cover also produced plentiful, densely and uniformly distributed hot spots, leading to an analytical enhancement factor of up to 3.9 × 10^8 and excellent reproducibility (relative standard deviation as low as 5.8% for 70 runs). The proposed method was easily used for MG detection with a limit of detection (LOD) as low as 2.7 × 10^-11 mol L^-1. The flexibility of the sensor makes it suitable for fast in-field detection of MG residues on the scales of a living fish via a surface extraction and paste-transfer procedure. The developed strategy was successfully applied to the analysis of real samples, showing good prospects for both fast inspection and quantitative detection of MG. Copyright © 2018 Elsevier B.V. All rights reserved.
Koumbaris, George; Kypri, Elena; Tsangaras, Kyriakos; Achilleos, Achilleas; Mina, Petros; Neofytou, Maria; Velissariou, Voula; Christopoulou, Georgia; Kallikas, Ioannis; González-Liñán, Alicia; Benusiene, Egle; Latos-Bielenska, Anna; Marek, Pietryga; Santana, Alfredo; Nagy, Nikoletta; Széll, Márta; Laudanski, Piotr; Papageorgiou, Elisavet A; Ioannides, Marios; Patsalis, Philippos C
2016-06-01
There is great need for the development of highly accurate cost effective technologies that could facilitate the widespread adoption of noninvasive prenatal testing (NIPT). We developed an assay based on the targeted analysis of cell-free DNA for the detection of fetal aneuploidies of chromosomes 21, 18, and 13. This method enabled the capture and analysis of selected genomic regions of interest. An advanced fetal fraction estimation and aneuploidy determination algorithm was also developed. This assay allowed for accurate counting and assessment of chromosomal regions of interest. The analytical performance of the assay was evaluated in a blind study of 631 samples derived from pregnancies of at least 10 weeks of gestation that had also undergone invasive testing. Our blind study exhibited 100% diagnostic sensitivity and specificity and correctly classified 52/52 (95% CI, 93.2%-100%) cases of trisomy 21, 16/16 (95% CI, 79.4%-100%) cases of trisomy 18, 5/5 (95% CI, 47.8%-100%) cases of trisomy 13, and 538/538 (95% CI, 99.3%-100%) normal cases. The test also correctly identified fetal sex in all cases (95% CI, 99.4%-100%). One sample failed prespecified assay quality control criteria, and 19 samples were nonreportable because of low fetal fraction. The extent to which free fetal DNA testing can be applied as a universal screening tool for trisomy 21, 18, and 13 depends mainly on assay accuracy and cost. Cell-free DNA analysis of targeted genomic regions in maternal plasma enables accurate and cost-effective noninvasive fetal aneuploidy detection, which is critical for widespread adoption of NIPT. © 2016 American Association for Clinical Chemistry.
NASA Technical Reports Server (NTRS)
Gliese, U.; Avanov, L. A.; Barrie, A. C.; Kujawski, J. T.; Mariano, A. J.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Gershman, D. J.; Dorelli, J. C.;
2015-01-01
The Fast Plasma Investigation (FPI) on NASA's Magnetospheric Multiscale (MMS) mission employs 16 Dual Electron Spectrometers (DESs) and 16 Dual Ion Spectrometers (DISs), with 4 of each type on each of 4 spacecraft, to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on the ground and in flight. The response uniformity, the reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions of magnetically reconnecting plasmas. Traditionally, the micro-channel plate (MCP) based detection systems for electrostatic particle spectrometers have been calibrated using the plateau curve technique. In this, a fixed detection threshold is set. The detection system count rate is then measured as a function of MCP voltage to determine the MCP voltage that ensures the count rate has reached a constant value independent of further variation in the MCP voltage. This is achieved when most of the MCP pulse height distribution (PHD) is located at higher values (larger pulses) than the detection system discrimination threshold. This method is adequate in single-channel detection systems and in multi-channel detection systems with very low crosstalk between channels. However, in dense multi-channel systems, it can be inadequate. Furthermore, it fails to fully describe the behavior of the detection system and individually characterize each of its fundamental parameters. To improve this situation, we have developed a detailed phenomenological description of the detection system, its behavior and its signal, crosstalk and noise sources.
Based on this, we have devised a new detection system calibration method that enables accurate and repeatable measurement and calibration of MCP gain, MCP efficiency, signal loss due to variation in gain and efficiency, crosstalk from effects both above and below the MCP, noise margin, and stability margin in one single measurement. More precise calibration is highly desirable as the instruments will produce higher quality raw data that will require less post-acquisition data correction using results from in-flight pitch angle distribution measurements and ground calibration measurements. The detection system description and the fundamental concepts of this new calibration method, named threshold scan, will be presented. It will be shown how to derive all the individual detection system parameters and how to choose the optimum detection system operating point. This new method has been successfully applied to achieve a highly accurate calibration of the DESs and DISs of the MMS mission. The practical application of the method will be presented together with the achieved calibration results and their significance. Finally, it will be shown that, with further detailed modeling, this method can be extended for use in flight to achieve and maintain a highly accurate detection system calibration across a large number of instruments during the mission.
NASA Astrophysics Data System (ADS)
Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.
2018-04-01
The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
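For readers unfamiliar with level-set front tracking, a deliberately low-order sketch may be useful. This is essentially the simple first-order upwind discretization of phi_t + speed * |grad phi| = 0 that the paper improves upon (the paper's method uses fifth-order WENO and third-order Runge-Kutta, plus reinitialization); grid and speed values below are invented:

```python
def advance_level_set(phi, speed, dx, dt, steps):
    """First-order Godunov-upwind update of a level-set field phi whose
    zero contour (the fire front) moves outward at rate `speed`."""
    ny, nx = len(phi), len(phi[0])
    for _ in range(steps):
        new = [row[:] for row in phi]
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                # one-sided differences, upwinded for an outward-moving front
                dxm = (phi[j][i] - phi[j][i - 1]) / dx
                dxp = (phi[j][i + 1] - phi[j][i]) / dx
                dym = (phi[j][i] - phi[j - 1][i]) / dx
                dyp = (phi[j + 1][i] - phi[j][i]) / dx
                gx = max(max(dxm, 0.0), -min(dxp, 0.0))
                gy = max(max(dym, 0.0), -min(dyp, 0.0))
                grad = (gx * gx + gy * gy) ** 0.5
                new[j][i] = phi[j][i] - dt * speed * grad
        phi = new
    return phi
```

Initializing phi as signed distance to a small circle and stepping forward makes the zero contour expand, which is the behavior whose rate-of-spread error the paper quantifies for low- versus high-order schemes.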
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
A single camera roentgen stereophotogrammetry method for static displacement analysis.
Gussekloo, S W; Janssen, B A; George Vosselman, M; Bout, R G
2000-06-01
A new method to quantify motion or deformation of bony structures has been developed, since quantification is often difficult due to overlying tissue, and the currently used roentgen stereophotogrammetry method requires significant investment. In our method, a single stationary roentgen source is used, as opposed to the usual two, which, in combination with a fixed radiogram cassette holder, forms a camera with constant interior orientation. By rotating the experimental object, it is possible to achieve a sufficient angle between the various viewing directions, enabling photogrammetric calculations. The photogrammetric procedure was performed on digitised radiograms and involved template matching to increase accuracy. Co-ordinates of spherical markers in the head of a bird (Rhea americana) were calculated with an accuracy of 0.12 mm. When these co-ordinates were used in a deformation analysis, relocations of about 0.5 mm could be accurately determined.
Reconstructing metastatic seeding patterns of human cancers
Reiter, Johannes G.; Makohon-Moore, Alvin P.; Gerold, Jeffrey M.; Bozic, Ivana; Chatterjee, Krishnendu; Iacobuzio-Donahue, Christine A.; Vogelstein, Bert; Nowak, Martin A.
2017-01-01
Reconstructing the evolutionary history of metastases is critical for understanding their basic biological principles and has profound clinical implications. Genome-wide sequencing data has enabled modern phylogenomic methods to accurately dissect subclones and their phylogenies from noisy and impure bulk tumour samples at unprecedented depth. However, existing methods are not designed to infer metastatic seeding patterns. Here we develop a tool, called Treeomics, to reconstruct the phylogeny of metastases and map subclones to their anatomic locations. Treeomics infers comprehensive seeding patterns for pancreatic, ovarian, and prostate cancers. Moreover, Treeomics correctly disambiguates true seeding patterns from sequencing artifacts; 7% of variants were misclassified by conventional statistical methods. These artifacts can skew phylogenies by creating illusory tumour heterogeneity among distinct samples. In silico benchmarking on simulated tumour phylogenies across a wide range of sample purities (15–95%) and sequencing depths (25–800×) demonstrates the accuracy of Treeomics compared with existing methods. PMID:28139641
Geometrically complex 3D-printed phantoms for diffuse optical imaging.
Dempsey, Laura A; Persad, Melissa; Powell, Samuel; Chitnis, Danial; Hebden, Jeremy C
2017-03-01
Tissue-equivalent phantoms that mimic the optical properties of human and animal tissues are commonly used in diffuse optical imaging research to characterize instrumentation or evaluate an image reconstruction method. Although many recipes have been produced for generating solid phantoms with specified absorption and transport scattering coefficients at visible and near-infrared wavelengths, the construction methods are generally time-consuming and are unable to create complex geometries. We present a method of generating phantoms using a standard 3D printer. A simple recipe was devised which enables printed phantoms to be produced with precisely known optical properties. To illustrate the capability of the method, we describe the creation of an anatomically accurate, tissue-equivalent premature infant head optical phantom with a hollow brain space based on MRI atlas data. A diffuse optical image of the phantom is acquired when a high contrast target is inserted into the hollow space filled with an aqueous scattering solution.
Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds
NASA Astrophysics Data System (ADS)
Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.
2012-11-01
A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.
Charge carrier mobility in thin films of organic semiconductors by the gated van der Pauw method
Rolin, Cedric; Kang, Enpu; Lee, Jeong-Hwan; Borghs, Gustaaf; Heremans, Paul; Genoe, Jan
2017-01-01
Thin film transistors based on high-mobility organic semiconductors are prone to contact problems that complicate the interpretation of their electrical characteristics and the extraction of important material parameters such as the charge carrier mobility. Here we report on the gated van der Pauw method for the simple and accurate determination of the electrical characteristics of thin semiconducting films, independently from contact effects. We test our method on thin films of seven high-mobility organic semiconductors of both polarities: device fabrication is fully compatible with common transistor process flows and device measurements deliver consistent and precise values for the charge carrier mobility and threshold voltage in the high-charge carrier density regime that is representative of transistor operation. The gated van der Pauw method is broadly applicable to thin films of semiconductors and enables a simple and clean parameter extraction independent from contact effects. PMID:28397852
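The van der Pauw relation exp(−πR_A/R_s) + exp(−πR_B/R_s) = 1 has no closed-form solution for the sheet resistance R_s when the two four-point resistances differ, but the left-hand side is monotone in R_s, so a bisection solve suffices. A minimal sketch (the resistance values are hypothetical):

```python
import math

def vdp_sheet_resistance(r_a, r_b, tol=1e-12):
    """Solve exp(-pi*r_a/Rs) + exp(-pi*r_b/Rs) = 1 for Rs by bisection.
    The left side increases monotonically with Rs, so bracketing is easy."""
    f = lambda rs: math.exp(-math.pi * r_a / rs) + math.exp(-math.pi * r_b / rs) - 1.0
    lo, hi = 1e-9 * (r_a + r_b), 1e9 * (r_a + r_b)
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Symmetric case has the closed form Rs = pi*R/ln(2); asymmetric case does not
rs_sym = vdp_sheet_resistance(100.0, 100.0)
rs_asym = vdp_sheet_resistance(120.0, 80.0)
```

In the gated variant described above, repeating this extraction versus gate voltage yields the sheet conductance as a function of induced charge density, from which mobility and threshold voltage follow without contact effects.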
Waites, Ken B; Duffy, Lynn B; Bébéar, Cécile M; Matlow, Anne; Talkington, Deborah F; Kenny, George E; Totten, Patricia A; Bade, Donald J; Zheng, Xiaotian; Davidson, Maureen K; Shortridge, Virginia D; Watts, Jeffrey L; Brown, Steven D
2012-11-01
An international multilaboratory collaborative study was conducted to develop standard media and consensus methods for the performance and quality control of antimicrobial susceptibility testing of Mycoplasma pneumoniae, Mycoplasma hominis, and Ureaplasma urealyticum using broth microdilution and agar dilution techniques. A reference strain from the American Type Culture Collection was designated for each species, which was to be used for quality control purposes. Repeat testing of replicate samples of each reference strain by participating laboratories utilizing both methods and different lots of media enabled a 3- to 4-dilution MIC range to be established for drugs in several different classes, including tetracyclines, macrolides, ketolides, lincosamides, and fluoroquinolones. This represents the first multilaboratory collaboration to standardize susceptibility testing methods and to designate quality control parameters to ensure accurate and reliable assay results for mycoplasmas and ureaplasmas that infect humans.
A dimensionally split Cartesian cut cell method for hyperbolic conservation laws
NASA Astrophysics Data System (ADS)
Gokhale, Nandan; Nikiforakis, Nikos; Klein, Rupert
2018-07-01
We present a dimensionally split method for solving hyperbolic conservation laws on Cartesian cut cell meshes. The approach combines local geometric and wave speed information to determine a novel stabilised cut cell flux, and we provide a full description of its three-dimensional implementation in the dimensionally split framework of Klein et al. [1]. The convergence and stability of the method are proved for the one-dimensional linear advection equation, while its multi-dimensional numerical performance is investigated through the computation of solutions to a number of test problems for the linear advection and Euler equations. When compared to the cut cell flux of Klein et al., it was found that the new flux alleviates the problem of oscillatory boundary solutions produced by the former at higher Courant numbers, and also enables the computation of more accurate solutions near stagnation points. Being dimensionally split, the method is simple to implement and extends readily to multiple dimensions.
Accurate formulas for interaction force and energy in frequency modulation force spectroscopy
NASA Astrophysics Data System (ADS)
Sader, John E.; Jarvis, Suzanne P.
2004-03-01
Frequency modulation atomic force microscopy utilizes the change in resonant frequency of a cantilever to detect variations in the interaction force between cantilever tip and sample. While a simple relation exists enabling the frequency shift to be determined for a given force law, the required complementary inverse relation does not exist for arbitrary oscillation amplitudes of the cantilever. In this letter we address this problem and present simple yet accurate formulas that enable the interaction force and energy to be determined directly from the measured frequency shift. These formulas are valid for any oscillation amplitude and interaction force, and are therefore of widespread applicability in frequency modulation dynamic force spectroscopy.
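In the small-amplitude limit the frequency shift reduces to Δf/f₀ = −F′(z)/(2k), so the interaction force follows from a single integration of the measured shift. The sketch below demonstrates this limit on a synthetic van der Waals force law; it is not the arbitrary-amplitude formulas of the letter, and all parameter values are hypothetical:

```python
import numpy as np

f0, k = 150e3, 40.0                # assumed resonance frequency (Hz), stiffness (N/m)
z = np.linspace(1.0, 50.0, 5000)   # tip-sample distance (arbitrary units)
C = 1.0                            # assumed van der Waals prefactor H*R/6
F_true = -C / z**2                 # model attractive force
df = -(f0 / (2.0 * k)) * (2.0 * C / z**3)   # small-amplitude shift: -(f0/2k) F'(z)

# Invert: F(z) = (2k/f0) * integral_z^zmax df dz'  (tail beyond zmax neglected)
integ = np.concatenate(([0.0], np.cumsum(0.5 * (df[1:] + df[:-1]) * np.diff(z))))
F_rec = (2.0 * k / f0) * (integ[-1] - integ)
```

The recovered force matches the model up to the neglected tail; the formulas in the letter generalise this inversion to any oscillation amplitude.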
Shields, C Wyatt; Reyes, Catherine D; López, Gabriel P
2015-03-07
Accurate and high throughput cell sorting is a critical enabling technology in molecular and cellular biology, biotechnology, and medicine. While conventional methods can provide high efficiency sorting in short timescales, advances in microfluidics have enabled the realization of miniaturized devices offering similar capabilities that exploit a variety of physical principles. We classify these technologies as either active or passive. Active systems generally use external fields (e.g., acoustic, electric, magnetic, and optical) to impose forces to displace cells for sorting, whereas passive systems use inertial forces, filters, and adhesion mechanisms to purify cell populations. Cell sorting on microchips provides numerous advantages over conventional methods by reducing the size of necessary equipment, eliminating potentially biohazardous aerosols, and simplifying the complex protocols commonly associated with cell sorting. Additionally, microchip devices are well suited for parallelization, enabling complete lab-on-a-chip devices for cellular isolation, analysis, and experimental processing. In this review, we examine the breadth of microfluidic cell sorting technologies, while focusing on those that offer the greatest potential for translation into clinical and industrial practice and that offer multiple, useful functions. We organize these sorting technologies by the type of cell preparation required (i.e., fluorescent label-based sorting, bead-based sorting, and label-free sorting) as well as by the physical principles underlying each sorting mechanism.
Solution of axisymmetric and two-dimensional inviscid flow over blunt bodies by the method of lines
NASA Technical Reports Server (NTRS)
Hamilton, H. H., II
1978-01-01
Comparisons with experimental data and the results of other computational methods demonstrated that very accurate solutions can be obtained by using relatively few lines with the method of lines approach. This method is semidiscrete and has relatively low core storage requirements as compared with fully discrete methods, since very little data were stored across the shock layer. This feature is very attractive for three dimensional problems because it enables computer storage requirements to be reduced by approximately an order of magnitude. In the present study it was found that nine lines was a practical upper limit for two dimensional and axisymmetric problems. This condition limits application of the method to smooth body geometries where relatively few lines would be adequate to describe changes in the flow variables around the body. Extension of the method to three dimensions was conceptually straightforward; however, three dimensional applications would also be limited to smooth body geometries, although not necessarily to a total of nine lines.
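The semidiscrete idea can be illustrated on a far simpler problem than blunt-body flow: discretise space into lines, obtaining a system of ODEs that is then integrated in time. A minimal method-of-lines sketch for the 1-D heat equation, checked against the exact decaying solution:

```python
import numpy as np

# Method of lines for u_t = u_xx on (0, pi), u(0)=u(pi)=0, u(x,0)=sin(x).
# Space is discretised into "lines"; the resulting ODE system is advanced by RK4.
N = 49
x = np.linspace(0.0, np.pi, N + 2)[1:-1]      # interior nodes
dx = np.pi / (N + 1)
u = np.sin(x)

def rhs(u):
    d2 = np.empty_like(u)
    d2[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    d2[0] = u[1] - 2 * u[0]                    # Dirichlet boundary value 0
    d2[-1] = u[-2] - 2 * u[-1]
    return d2 / dx**2

dt, steps = 1e-3, 500                          # integrate to t = 0.5
for _ in range(steps):
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u = u + (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

err = np.max(np.abs(u - np.exp(-0.5) * np.sin(x)))
```

Only the solution values along the lines are stored, which is the low-storage property the abstract highlights relative to fully discrete methods.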
Application of dietary fiber method AOAC 2011.25 in fruit and comparison with AOAC 991.43 method.
Tobaruela, Eric de C; Santos, Aline de O; Almeida-Muradian, Ligia B de; Araujo, Elias da S; Lajolo, Franco M; Menezes, Elizabete W
2018-01-01
The AOAC 2011.25 method enables the quantification of most of the dietary fiber (DF) components according to the definition proposed by the Codex Alimentarius. This study aimed to compare the DF content in fruits analyzed by the AOAC 2011.25 and AOAC 991.43 methods. Plums (Prunus salicina), atemoyas (Annona x atemoya), jackfruits (Artocarpus heterophyllus), and mature coconuts (Cocos nucifera) from different Brazilian regions (3 lots/fruit) were analyzed for DF, resistant starch, and fructans contents. The AOAC 2011.25 method was evaluated for precision, accuracy, and linearity in different food matrices and carbohydrate standards. The DF contents of plums, atemoyas, and jackfruits obtained by AOAC 2011.25 were higher than those obtained by AOAC 991.43 due to the presence of fructans. The DF content of mature coconuts obtained by the same methods did not present a significant difference. The AOAC 2011.25 method is recommended for fruits with considerable fructans content because it achieves more accurate values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pauwels, Xavier, E-mail: xpauwels@hotmail.com; Azahaf, Mustapha, E-mail: mustapha.azahaf@chru-lille.fr; Lassailly, Guillaume, E-mail: guillaume.lassailly@chru-lille.fr
Purpose Most transplant centers use chemoembolisation as locoregional bridge therapy for hepatocellular carcinoma (HCC) before liver transplantation (LT). Chemoembolisation using beads loaded with doxorubicin (DEBDOX) is a promising technique that enables delivery of a large quantity of drugs against HCC. We sought to assess the imaging–histologic correlation after DEBDOX chemoembolisation. Materials and Methods All consecutive patients who had undergone DEBDOX chemoembolisation before receiving liver graft for HCC were included. Tumour response was evaluated according to Response Evaluation Criteria in Solid Tumours (RECIST) and modified RECIST (mRECIST) criteria. The result of final imaging made before LT was correlated with histological data to predict tumour necrosis. Results Twenty-eight patients underwent 43 DEBDOX procedures for 45 HCC. Therapy had a significant effect as shown by a decrease in the mean size of the largest nodule (p = 0.02) and the sum of viable part of tumour sizes according to mRECIST criteria (p < 0.001). An objective response using mRECIST criteria was significantly correlated with mean tumour necrosis ≥90% (p = 0.03). A complete response using mRECIST criteria enabled accurate prediction of complete tumour necrosis (p = 0.01). Correlations using RECIST criteria were not significant. Conclusion Our data confirm the potential benefit of DEBDOX chemoembolisation as bridge therapy before LT, and they provide a rational basis for new studies focusing on recurrence-free survival after LT. Radiologic evaluation according to mRECIST criteria enables accurate prediction of tumour necrosis, whereas RECIST criteria do not.
A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Jin; Yu, Yaming; Van Dyk, David A.
2014-10-20
Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
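The effect of marginalising a calibration product can be sketched with a toy grid-based model: a Poisson-distributed count whose mean carries an uncertain multiplicative effective area. All numbers below are hypothetical, and the brute-force grid marginalisation stands in for the paper's PCA-based machinery:

```python
import numpy as np

rng = np.random.default_rng(0)
s_true, a_true, T = 5.0, 1.0, 100.0       # source flux, effective area, exposure
n_obs = rng.poisson(s_true * a_true * T)

s_grid = np.linspace(0.1, 10.0, 400)
a_grid = np.linspace(0.7, 1.3, 200)
S, A = np.meshgrid(s_grid, a_grid, indexing="ij")
mu = S * A * T
# Poisson log-likelihood plus a Gaussian prior on the calibration (10% uncertainty)
log_post = n_obs * np.log(mu) - mu - 0.5 * ((A - 1.0) / 0.1) ** 2

# Fully Bayesian: marginalise the calibration product a
post_s = np.exp(log_post - log_post.max()).sum(axis=1)
post_s /= post_s.sum()

# Standard analysis: condition on the nominal calibration a = 1
j = int(np.argmin(np.abs(a_grid - 1.0)))
post_fix = np.exp(log_post[:, j] - log_post[:, j].max())
post_fix /= post_fix.sum()

mean_s = float(np.sum(post_s * s_grid))
sd_marg = float(np.sqrt(np.sum(post_s * (s_grid - mean_s) ** 2)))
mean_fix = float(np.sum(post_fix * s_grid))
sd_fix = float(np.sqrt(np.sum(post_fix * (s_grid - mean_fix) ** 2)))
```

Conditioning on the nominal calibration understates the flux uncertainty; marginalising widens the posterior to a valid width, which is the bias/variance point the abstract makes.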
Rapid Harmonic Analysis of Piezoelectric MEMS Resonators.
Puder, Jonathan M; Pulskamp, Jeffrey S; Rudy, Ryan Q; Cassella, Cristian; Rinaldi, Matteo; Chen, Guofeng; Bhave, Sunil A; Polcawich, Ronald G
2018-06-01
This paper reports on a novel simulation method combining the speed of analytical evaluation with the accuracy of finite-element analysis (FEA). This method is known as the rapid analytical-FEA technique (RAFT). The ability of the RAFT to accurately predict frequency response orders of magnitude faster than conventional simulation methods while providing deeper insights into device design not possible with other types of analysis is detailed. Simulation results from the RAFT across wide bandwidths are compared to measured results of resonators fabricated with various materials, frequencies, and topologies with good agreement. These include resonators targeting beam extension, disk flexure, and Lamé beam modes. An example scaling analysis is presented and other applications enabled are discussed as well. The supplemental material includes example code for implementation in ANSYS, although any commonly employed FEA package may be used.
Exact kinetic energy enables accurate evaluation of weak interactions by the FDE-vdW method.
Sinha, Debalina; Pavanello, Michele
2015-08-28
The correlation energy of interaction is an elusive and sought-after interaction between molecular systems. By partitioning the response function of the system into subsystem contributions, the Frozen Density Embedding (FDE)-vdW method provides a computationally amenable nonlocal correlation functional based on the adiabatic connection fluctuation dissipation theorem applied to subsystem density functional theory. In reproducing potential energy surfaces of weakly interacting dimers, we show that FDE-vdW, either employing semilocal or exact nonadditive kinetic energy functionals, is in quantitative agreement with high-accuracy coupled cluster calculations (overall mean unsigned error of 0.5 kcal/mol). When employing the exact kinetic energy (which we term the Kohn-Sham (KS)-vdW method), the binding energies are generally closer to the benchmark, and the energy surfaces are also smoother.
Background oriented schlieren in a density stratified fluid.
Verso, Lilly; Liberzon, Alex
2015-10-01
Non-intrusive quantitative fluid density measurement methods are essential in stratified flow experiments. Digital imaging has led to synthetic schlieren methods, in which variations of the index of refraction are reconstructed computationally. In this study, an extension to one of these methods, called background oriented schlieren, is proposed. The extension enables accurate reconstruction of the density field in stratified liquid experiments. Typically, the experiments are performed with the light source, background pattern, and camera positioned on opposite sides of a transparent vessel. The multimedia imaging through air-glass-water-glass-air leads to an additional aberration that destroys the reconstruction. A two-step calibration and image remapping transform are the key components that correct the images through the stratified media and provide non-intrusive, full-field density measurements of transparent liquids.
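The core displacement-estimation step of background oriented schlieren can be sketched in 1-D: cross-correlate a reference background pattern with its apparently shifted image and refine the peak with a parabolic subpixel fit. The pattern and shift below are synthetic, and the 2-D windowing and calibration remap of the actual method are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(256.0)
pattern = rng.random(256)
# Smooth the random dot pattern so the correlation peak is well defined
kern = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
pattern = np.convolve(pattern, kern / kern.sum(), mode="same")

shift_true = 3.0  # apparent displacement (pixels) caused by the density field
shifted = np.interp(x - shift_true, x, pattern)

# Demeaned cross-correlation over integer lags
p0, s0 = pattern - pattern.mean(), shifted - shifted.mean()
lags = np.arange(-10, 11)
corr = np.array([np.dot(p0[10:246], s0[10 + l:246 + l]) for l in lags])

# Parabolic subpixel refinement around the integer peak
p = int(np.argmax(corr))
dl = 0.5 * (corr[p - 1] - corr[p + 1]) / (corr[p - 1] - 2 * corr[p] + corr[p + 1])
shift_est = float(lags[p] + dl)
```

In the full method the displacement field is then related to refractive-index gradients, and the two-step calibration corrects the air-glass-water refraction before this step is meaningful.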
Finite-element lattice Boltzmann simulations of contact line dynamics
NASA Astrophysics Data System (ADS)
Matin, Rastin; Misztal, Marek Krzysztof; Hernández-García, Anier; Mathiesen, Joachim
2018-01-01
The lattice Boltzmann method has become one of the standard techniques for simulating a wide range of fluid flows. However, the intrinsic coupling of momentum and space discretization restricts the traditional lattice Boltzmann method to regular lattices. Alternative off-lattice Boltzmann schemes exist for both single- and multiphase flows that decouple the velocity discretization from the underlying spatial grid. The current study extends the applicability of these off-lattice methods by introducing a finite element formulation that enables simulating contact line dynamics for partially wetting fluids. This work exemplifies the implementation of the scheme and furthermore presents benchmark experiments that show the scheme reduces spurious currents at the liquid-vapor interface by at least two orders of magnitude compared to a nodal implementation and allows for predicting the equilibrium states accurately in the range of moderate contact angles.
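For contrast with the off-lattice finite-element formulation, the standard on-lattice D2Q9 BGK scheme (single phase, periodic, regular lattice) fits in a few dozen lines; this minimal sketch damps a shear wave at the viscosity ν = (τ − 1/2)/3 set by the relaxation time. Parameters are illustrative only:

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                       # BGK relaxation time; nu = (tau - 0.5)/3
N = 32

def equilibrium(rho, ux, uy):
    f = np.empty((9, N, N))
    usq = ux**2 + uy**2
    for q in range(9):
        cu = c[q, 0] * ux + c[q, 1] * uy
        f[q] = w[q] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    return f

y = np.arange(N)
ux = 0.05 * np.sin(2*np.pi*y/N)[None, :] * np.ones((N, N))   # shear wave
uy = np.zeros((N, N))
rho = np.ones((N, N))
f = equilibrium(rho, ux, uy)
mass0 = f.sum()

for _ in range(200):
    rho = f.sum(axis=0)
    ux = sum(c[q, 0] * f[q] for q in range(9)) / rho
    uy = sum(c[q, 1] * f[q] for q in range(9)) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau        # collision (BGK)
    for q in range(9):                               # streaming (periodic)
        f[q] = np.roll(np.roll(f[q], c[q, 0], axis=0), c[q, 1], axis=1)

rho = f.sum(axis=0)
amp = float(np.abs(sum(c[q, 0] * f[q] for q in range(9)) / rho).max())
```

The intrinsic coupling visible here — streaming moves populations exactly one lattice site per step — is what restricts this formulation to regular grids and motivates the off-lattice schemes the paper extends.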
Aeroheating Predictions for X-34 Using an Inviscid-Boundary Layer Method
NASA Technical Reports Server (NTRS)
Riley, Christopher J.; Kleb, William L.; Alter, Steven J.
1998-01-01
Radiative equilibrium surface temperatures and surface heating rates from a combined inviscid-boundary layer method are presented for the X-34 Reusable Launch Vehicle for several points along the hypersonic descent portion of its trajectory. Inviscid, perfect-gas solutions are generated with the Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) and the Data-Parallel Lower-Upper Relaxation (DPLUR) code. Surface temperatures and heating rates are then computed using the Langley Approximate Three-Dimensional Convective Heating (LATCH) engineering code employing both laminar and turbulent flow models. The combined inviscid-boundary layer method provides accurate predictions of surface temperatures over most of the vehicle and requires much less computational effort than a Navier-Stokes code. This enables the generation of a more thorough aerothermal database which is necessary to design the thermal protection system and specify the vehicle's flight limits.
NASA Astrophysics Data System (ADS)
Wong, Kin-Yiu; Gao, Jiali
2007-12-01
Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic systems beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good accordance with experiments. We hope that our method could be used by non-path-integral experts or experimentalists as a "black box" for any given system.
Discretization of Continuous Time Discrete Scale Invariant Processes: Estimation and Spectra
NASA Astrophysics Data System (ADS)
Rezakhah, Saeid; Maleki, Yasaman
2016-07-01
By imposing a flexible sampling scheme, we obtain a discretization of continuous-time discrete scale invariant (DSI) processes, which is itself a subsidiary discrete-time DSI process. Then, by introducing a simple random measure, we construct a second continuous-time DSI process that provides a proper approximation of the first one. This enables us to establish a bilateral relation between the covariance functions of the subsidiary process and the new continuous-time process. The time varying spectral representation of such a continuous-time DSI process is characterized, and its spectrum is estimated. A new method for estimating the time-dependent Hurst parameter of such processes is also provided, which gives a more accurate estimate. The performance of this estimation method is studied via simulation. Finally, this method is applied to real data from the S&P 500 and Dow Jones indices for some special periods.
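The simplest relative of such Hurst-parameter estimation is variance scaling of increments, Var(X_{t+m} − X_t) ∝ m^{2H}. A sketch on ordinary Brownian motion (true H = 1/2), standing in for the paper's DSI-specific spectral estimator:

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(100_000))  # Brownian path, true H = 0.5

# Variance of increments scales as m^(2H); fit the slope in log-log space
lags = np.array([1, 2, 4, 8, 16, 32, 64])
v = np.array([np.var(x[m:] - x[:-m]) for m in lags])
H_est, _ = np.polyfit(np.log(lags), 0.5 * np.log(v), 1)
```

For DSI processes the scaling law holds only between scale-invariance epochs, which is why a time-dependent estimator of the kind described above is needed.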
GapMap: Enabling Comprehensive Autism Resource Epidemiology
Albert, Nikhila; Schwartz, Jessey; Du, Michael
2017-01-01
Background For individuals with autism spectrum disorder (ASD), finding resources can be a lengthy and difficult process. The difficulty in obtaining global, fine-grained autism epidemiological data hinders researchers from quickly and efficiently studying large-scale correlations among ASD, environmental factors, and geographical and cultural factors. Objective The objective of this study was to define resource load and resource availability for families affected by autism and subsequently create a platform to enable a more accurate representation of prevalence rates and resource epidemiology. Methods We created a mobile application, GapMap, to collect locational, diagnostic, and resource use information from individuals with autism to compute accurate prevalence rates and better understand autism resource epidemiology. GapMap is hosted on AWS S3, running on a React and Redux front-end framework. The backend framework is comprised of an AWS API Gateway and Lambda Function setup, with secure and scalable end points for retrieving prevalence and resource data, and for submitting participant data. Measures of autism resource scarcity, including resource load, resource availability, and resource gaps were defined and preliminarily computed using simulated or scraped data. Results The average distance from an individual in the United States to the nearest diagnostic center is approximately 182 km (50 miles), with a standard deviation of 235 km (146 miles). The average distance from an individual with ASD to the nearest diagnostic center, however, is only 32 km (20 miles), suggesting that individuals who live closer to diagnostic services are more likely to be diagnosed. 
Conclusions This study confirmed that individuals closer to diagnostic services are more likely to be diagnosed and proposes GapMap, a means to measure and enable the alleviation of increasingly overburdened diagnostic centers and resource-poor areas where parents are unable to diagnose their children as quickly and easily as needed. GapMap will collect information that will provide more accurate data for computing resource loads and availability, uncovering the impact of resource epidemiology on age and likelihood of diagnosis, and gathering localized autism prevalence rates. PMID:28473303
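The distance-to-nearest-diagnostic-center statistic reported above can be computed with a haversine nearest-neighbour query; the center coordinates below are hypothetical stand-ins, not GapMap's actual resource database:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres on a sphere of radius 6371 km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp/2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl/2)**2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

# Hypothetical diagnostic-center coordinates (New York, Los Angeles)
centers = [(40.7128, -74.0060), (34.0522, -118.2437)]

def nearest_center_km(lat, lon):
    return min(haversine_km(lat, lon, clat, clon) for clat, clon in centers)

d_chicago = nearest_center_km(41.8781, -87.6298)  # an individual in Chicago
```

Averaging this quantity over a population sample, and separately over diagnosed individuals, gives the two mean distances compared in the Results.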
DOE Office of Scientific and Technical Information (OSTI.GOV)
van Rij, Jennifer A; Yu, Yi-Hsiang; Guo, Yi
This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analysis and fluid-structure-interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process. The objective of this study is to verify the generalized body-modes approach in comparison to high-fidelity FSI simulations to accurately predict structural deflections and stress loads in a WEC. Two verification cases are considered, a free-floating barge and a fixed-bottom column. Details for both the generalized body-modes models and FSI models are first provided. Results for each of the models are then compared and discussed. Finally, based on the verification results obtained, future plans for incorporating the generalized body-modes method into the WEC simulation tool, WEC-Sim, and the overall WEC design process are discussed.
Rapid Screening of Cancer Margins in Tissue with Multimodal Confocal Microscopy
Gareau, Daniel S.; Jeon, Hana; Nehal, Kishwer S.; Rajadhyaksha, Milind
2012-01-01
Background Complete and accurate excision of cancer is guided by the examination of histopathology. However, preparation of histopathology is labor intensive and slow, leading to insufficient sampling of tissue and incomplete and/or inaccurate excision of margins. We demonstrate the potential utility of multimodal confocal mosaicing microscopy for rapid screening of cancer margins, directly in fresh surgical excisions, without the need for conventional embedding, sectioning or processing. Materials/Methods A multimodal confocal mosaicing microscope was developed to image basal cell carcinoma margins in surgical skin excisions, with resolution that shows nuclear detail. Multimodal contrast is with fluorescence for imaging nuclei and reflectance for cellular cytoplasm and dermal collagen. Thirty-five excisions of basal cell carcinomas from Mohs surgery were imaged, and the mosaics were analyzed by comparison to the corresponding frozen pathology. Results Confocal mosaics are produced in about 9 minutes, displaying tissue in fields-of-view of 12 mm with 2X magnification. A digital staining algorithm transforms black and white contrast to purple and pink, which simulates the appearance of standard histopathology. Mosaicing enables rapid digital screening, which mimics the examination of histopathology. Conclusions Multimodal confocal mosaicing microscopy offers a technology platform to potentially enable real-time pathology at the bedside. The imaging may serve as an adjunct to conventional histopathology, to expedite screening of margins and guide surgery toward more complete and accurate excision of cancer. PMID:22721570
Scaling up high throughput field phenotyping of corn and soy research plots using ground rovers
NASA Astrophysics Data System (ADS)
Peshlov, Boyan; Nakarmi, Akash; Baldwin, Steven; Essner, Scott; French, Jasenka
2017-05-01
Crop improvement programs require large and meticulous selection processes that effectively and accurately collect and analyze data to generate quality plant products as efficiently as possible and to develop superior cropping and/or crop improvement methods. Typically, data collection for such testing is performed by field teams using hand-held instruments or manually controlled devices. Although steps are taken to reduce error, data collected in this manner can be unreliable due to human error and fatigue, which reduces the ability to make accurate selection decisions. Monsanto engineering teams have developed a high-clearance mobile platform (Rover) as a step towards high-throughput, high-accuracy phenotyping at an industrial scale. The rovers are equipped with GPS navigation, multiple cameras and sensors, and on-board computers to acquire data and compute plant vigor metrics per plot. The supporting IT systems enable automatic path planning, plot identification, image and point-cloud data QA/QC, and near-real-time analysis in which results are streamed to enterprise databases for additional statistical analysis and product advancement decisions. Since the rover program was launched in North America in 2013, the number of research plots we can analyze in a growing season has expanded dramatically. This work describes some of the successes and challenges in scaling up the rover platform for automated phenotyping to enable science at scale.
Beyond Captions: Linking Figures with Abstract Sentences in Biomedical Articles
Bockhorst, Joseph P.; Conroy, John M.; Agarwal, Shashank; O’Leary, Dianne P.; Yu, Hong
2012-01-01
Although figures in scientific articles have high information content and concisely communicate many key research findings, they are currently underutilized by literature search and retrieval systems. Many systems ignore figures, and those that do not typically only consider caption text. This study describes and evaluates a fully automated approach for associating figures in the body of a biomedical article with sentences in its abstract. We use supervised methods to learn probabilistic language models, hidden Markov models, and conditional random fields for predicting associations between abstract sentences and figures. Three kinds of evidence are used: text in abstract sentences and figures, relative positions of sentences and figures, and the patterns of sentence/figure associations across an article. Each information source is shown to have predictive value, and models that use all kinds of evidence are more accurate than models that do not. Our most accurate method has an F-score of 69% on a cross-validation experiment, is competitive with the accuracy of human experts, has significantly better predictive accuracy than state-of-the-art methods and enables users to access figures associated with an abstract sentence with an average of 1.82 fewer mouse clicks. A user evaluation shows that human users find our system beneficial. The system is available at http://FigureItOut.askHERMES.org. PMID:22815711
Free energy landscape for the binding process of Huperzine A to acetylcholinesterase
Bai, Fang; Xu, Yechun; Chen, Jing; Liu, Qiufeng; Gu, Junfeng; Wang, Xicheng; Ma, Jianpeng; Li, Honglin; Onuchic, José N.; Jiang, Hualiang
2013-01-01
Drug-target residence time (t = 1/koff, where koff is the dissociation rate constant) has become an important index in discovering better- or best-in-class drugs. However, little effort has been dedicated to developing computational methods that can accurately predict this kinetic parameter or related parameters, koff and the activation free energy of dissociation (ΔGoff≠). In this paper, energy landscape theory that has been developed to understand protein folding and function is extended to develop a generally applicable computational framework that is able to construct a complete ligand-target binding free energy landscape. This enables both the binding affinity and the binding kinetics to be accurately estimated. We applied this method to simulate the binding event of the anti-Alzheimer's disease drug (−)-Huperzine A to its target acetylcholinesterase (AChE). The computational results are in excellent agreement with our concurrent experimental measurements. All of the predicted values of binding free energy and activation free energies of association and dissociation deviate from the experimental data by less than 1 kcal/mol. The method also provides atomic-resolution information for the (−)-Huperzine A binding pathway, which may be useful in designing more potent AChE inhibitors. We expect this methodology to be widely applicable to drug discovery and development. PMID:23440190
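The kinetic quantities named in the abstract above can be made concrete. Below is a minimal sketch (not the authors' code) of how a measured dissociation rate constant converts to a residence time and, via the Eyring transition-state expression with an assumed transmission coefficient of 1, to an activation free energy of dissociation; the example rate constant is hypothetical.

```python
import math

# Physical constants (SI)
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def residence_time(k_off):
    """Drug-target residence time t = 1/k_off (k_off in 1/s)."""
    return 1.0 / k_off

def dg_off(k_off, T=298.15):
    """Activation free energy of dissociation (kcal/mol) from k_off via the
    Eyring equation k_off = (kB*T/h) * exp(-dG/(R*T)), assuming a
    transmission coefficient of 1."""
    dg_joules_per_mol = R * T * math.log((KB * T / H) / k_off)
    return dg_joules_per_mol / 4184.0  # J/mol -> kcal/mol

# Hypothetical slow dissociater: k_off = 1e-4 s^-1
print(residence_time(1e-4))       # 10000.0 s
print(round(dg_off(1e-4), 1))     # roughly 23 kcal/mol at room temperature
```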
NASA Astrophysics Data System (ADS)
Madsen, Niels Kristian; Godtliebsen, Ian H.; Losilla, Sergio A.; Christiansen, Ove
2018-01-01
A new implementation of vibrational coupled-cluster (VCC) theory is presented, where all amplitude tensors are represented in the canonical polyadic (CP) format. The CP-VCC algorithm solves the non-linear VCC equations without ever constructing the amplitudes or error vectors in full dimension but still formally includes the full parameter space of the VCC[n] model in question resulting in the same vibrational energies as the conventional method. In a previous publication, we have described the non-linear-equation solver for CP-VCC calculations. In this work, we discuss the general algorithm for evaluating VCC error vectors in CP format including the rank-reduction methods used during the summation of the many terms in the VCC amplitude equations. Benchmark calculations for studying the computational scaling and memory usage of the CP-VCC algorithm are performed on a set of molecules including thiadiazole and an array of polycyclic aromatic hydrocarbons. The results show that the reduced scaling and memory requirements of the CP-VCC algorithm allows for performing high-order VCC calculations on systems with up to 66 vibrational modes (anthracene), which indeed are not possible using the conventional VCC method. This paves the way for obtaining highly accurate vibrational spectra and properties of larger molecules.
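As a toy illustration of the canonical polyadic format that the CP-VCC algorithm above exploits, the sketch below stores a small order-4 tensor as factor matrices and reconstructs the full array only for verification; the mode sizes and rank are arbitrary assumptions, and a real CP algorithm never forms the full tensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# CP (canonical polyadic) format: an order-d tensor is stored as d factor
# matrices U_k of shape (n_k, R) instead of its full n_1 x ... x n_d array:
#   T[i1,...,id] = sum_r U_1[i1,r] * ... * U_d[id,r]
def cp_to_full(factors):
    """Reconstruct the full tensor from CP factor matrices (checking only;
    the point of CP algorithms is to avoid ever forming this array)."""
    R = factors[0].shape[1]
    shape = tuple(U.shape[0] for U in factors)
    T = np.zeros(shape)
    for r in range(R):
        comp = factors[0][:, r]
        for U in factors[1:]:
            comp = np.multiply.outer(comp, U[:, r])
        T += comp
    return T

# A rank-3 CP tensor with 4 modes of size 5: 4*5*3 = 60 stored numbers
# versus 5**4 = 625 in full format.
factors = [rng.standard_normal((5, 3)) for _ in range(4)]
full = cp_to_full(factors)
stored = sum(U.size for U in factors)
print(full.shape)      # (5, 5, 5, 5)
print(stored, 5 ** 4)  # 60 625
```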
Laan, Nick; de Bruin, Karla G.; Slenter, Denise; Wilhelm, Julie; Jermy, Mark; Bonn, Daniel
2015-01-01
Bloodstain Pattern Analysis is a forensic discipline in which, among others, the position of victims can be determined at crime scenes on which blood has been shed. To determine where the blood source was, investigators use a straight-line approximation for the trajectory, ignoring the effects of gravity and drag and thus overestimating the height of the source. We determined how accurately the location of the origin can be estimated when including gravity and drag in the trajectory reconstruction. We created eight bloodstain patterns at one meter distance from the wall. The origin’s location was determined for each pattern with the straight-line approximation, with our method including gravity, and with our method including both gravity and drag. The latter two methods require the volume and impact velocity of each bloodstain, which we are able to determine with a 3D scanner and advanced fluid dynamics, respectively. We conclude that by including gravity and drag in the trajectory calculation, the origin’s location can be determined roughly four times more accurately than with the straight-line approximation. Our study enables investigators to determine if the victim was sitting or standing, or it might be possible to connect wounds on the body to specific patterns, which is important for crime scene reconstruction. PMID:26099070
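The overestimation described above can be reproduced with elementary mechanics. The sketch below (illustrative droplet parameters, not the authors' fluid-dynamics model) forward-integrates a droplet trajectory under gravity and quadratic air drag, then back-projects a straight line from the impact point to show that the straight-line method places the source too high.

```python
import math

def simulate(x0, y0, v, angle_deg, cd=0.5, d=2e-3, rho_air=1.2,
             rho_blood=1060.0, dt=1e-4, x_wall=1.0):
    """Forward-integrate a blood droplet (sphere of diameter d) under gravity
    and quadratic air drag until it reaches the wall at x = x_wall.
    Returns impact position and velocity. Illustrative only."""
    m = rho_blood * math.pi * d ** 3 / 6.0        # droplet mass
    area = math.pi * (d / 2.0) ** 2               # cross-section
    k = 0.5 * rho_air * cd * area / m             # drag factor, 1/m
    th = math.radians(angle_deg)
    vx, vy = v * math.cos(th), v * math.sin(th)
    x, y = x0, y0
    while x < x_wall:
        s = math.hypot(vx, vy)
        vx += (-k * s * vx) * dt                  # drag opposes motion
        vy += (-9.81 - k * s * vy) * dt           # gravity plus drag
        x += vx * dt
        y += vy * dt
    return x, y, vx, vy

# Droplet launched horizontally at 4 m/s from height 1.2 m toward a wall 1 m away
x, y, vx, vy = simulate(0.0, 1.2, 4.0, 0.0)
# Straight-line reconstruction: trace the impact velocity direction back to x = 0
y_straight = y - (vy / vx) * x
print(round(y, 3), round(y_straight, 3))  # true source is 1.2 m; straight line is higher
```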
Templated fabrication of hollow nanospheres with 'windows' of accurate size and tunable number.
Xie, Duan; Hou, Yidong; Su, Yarong; Gao, Fuhua; Du, Jinglei
2015-01-01
The 'windows' or 'doors' on the surface of a closed hollow structure can enable the exchange of material and information between the interior and exterior of one hollow sphere or between two hollow spheres, and this exchange can also be controlled by altering the windows' size. Thus, it is very interesting and important to achieve the fabrication and adjustment of the 'windows' or 'doors' on the surface of a closed hollow structure. In this paper, we propose a new method based on template-assisted deposition to fabricate hollow spheres with windows of accurate size and number. By precisely controlling the deposition parameters (i.e., deposition angle and number), hollow spheres with windows of total size from 0% to 50% of the surface and number from 1 to 6 have been successfully achieved. A geometrical model has been developed for the morphology simulation and size calculation of the windows, and the simulation results agree well with the experiments. This model will greatly improve the convenience and efficiency of this template-assisted deposition method. In addition, these hollow spheres with desired windows can also be dispersed into liquid or arranged regularly on any desired substrate. These advantages will maximize their applications in many fields, such as drug transport and nano-research containers.
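The total window size quoted above can be estimated with simple spherical-cap geometry. The sketch below assumes each window is a circular cap of half-angle theta and ignores overlap between caps; it is a simplified stand-in for the geometrical model described in the abstract.

```python
import math

def cap_area_fraction(theta_deg):
    """Fraction of a sphere's surface covered by one circular 'window'
    (a spherical cap of half-angle theta): (1 - cos(theta)) / 2.
    Simplified model; overlap between neighbouring windows is ignored."""
    theta = math.radians(theta_deg)
    return (1.0 - math.cos(theta)) / 2.0

def total_window_fraction(n_windows, theta_deg):
    """Total fractional open area for n identical non-overlapping windows."""
    return n_windows * cap_area_fraction(theta_deg)

print(round(cap_area_fraction(60.0), 3))          # 0.25: one 60-degree cap covers 25%
print(round(total_window_fraction(6, 33.6), 3))   # six smaller caps together approach 50%
```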
USDA-ARS's Scientific Manuscript database
Molecular tools enable the collection of accurate estimates of human blood index (HBI) in Phlebotomus argentipes. The refinement of a metacyclic-specific qPCR assay to identify L. donovani in P. argentipes would enable quantification of the entomological inoculation rate (EIR) for the first time. Li...
Poller, Wolfram C; Löwa, Norbert; Wiekhorst, Frank; Taupitz, Matthias; Wagner, Susanne; Möller, Konstantin; Baumann, Gert; Stangl, Verena; Trahms, Lutz; Ludwig, Antje
2016-02-01
In vivo tracking of nanoparticle-labeled cells by magnetic resonance imaging (MRI) crucially depends on accurate determination of cell-labeling efficacy prior to transplantation. Here, we analyzed the feasibility and accuracy of magnetic particle spectroscopy (MPS) for estimation of cell-labeling efficacy in living THP-1 cells incubated with very small superparamagnetic iron oxide nanoparticles (VSOP). Cell viability and proliferation capacity were not affected by the MPS measurement procedure. In VSOP samples without cell contact, MPS enabled highly accurate quantification. In contrast, MPS constantly overestimated the amount of cell associated and internalized VSOP. Analyses of the MPS spectrum shape expressed as harmonic ratio A₅/A₃ revealed distinct changes in the magnetic behavior of VSOP in response to cellular uptake. These changes were proportional to the deviation between MPS and actual iron amount, therefore allowing for adjusted iron quantification. Transmission electron microscopy provided visual evidence that changes in the magnetic properties correlated with cell surface interaction of VSOP as well as with alterations of particle structure and arrangement during the phagocytic process. Altogether, A₅/A₃-adjusted MPS enables highly accurate, cell-preserving VSOP quantification and furthermore provides information on the magnetic characteristics of internalized VSOP.
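The harmonic ratio A₅/A₃ used above can be illustrated with a toy magnetic particle spectroscopy model: a Langevin magnetization response to a sinusoidal drive field generates odd harmonics whose ratio depends on how deeply the particles are driven into saturation. The field amplitudes below are arbitrary, and relaxation effects are ignored.

```python
import numpy as np

def langevin(xi):
    """Langevin function L(xi) = coth(xi) - 1/xi, with the small-argument
    limit L(xi) ~ xi/3 handled explicitly to avoid division by zero."""
    xi = np.asarray(xi, dtype=float)
    small = np.abs(xi) < 1e-6
    safe = np.where(small, 1.0, xi)
    out = 1.0 / np.tanh(safe) - 1.0 / safe
    return np.where(small, xi / 3.0, out)

def harmonic_ratio(xi_amplitude, n_periods=20, n_samples=4096):
    """A5/A3 harmonic ratio of a toy MPS signal: magnetization modeled as
    the Langevin response to a sinusoidal drive field (no relaxation);
    the nonlinearity generates odd harmonics at bins 3f and 5f."""
    t = np.arange(n_samples) / n_samples * n_periods  # time in drive periods
    m = langevin(xi_amplitude * np.sin(2 * np.pi * t))
    spec = np.abs(np.fft.rfft(m))
    return spec[5 * n_periods] / spec[3 * n_periods]

# Stronger drive (deeper into saturation) -> richer harmonic content:
print(round(harmonic_ratio(10.0), 3))
print(round(harmonic_ratio(2.0), 3))
```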
NASA Astrophysics Data System (ADS)
Pan, Shijia; Mirshekari, Mostafa; Fagert, Jonathon; Ramirez, Ceferino Gabriel; Chung, Albert Jin; Hu, Chih Chi; Shen, John Paul; Zhang, Pei; Noh, Hae Young
2018-02-01
Many human activities induce excitations on ambient structures with various objects, causing the structures to vibrate. Accurate vibration excitation source detection and characterization enable human activity information inference, hence allowing human activity monitoring for various smart building applications. By utilizing structural vibrations, we can achieve sparse and non-intrusive sensing, unlike pressure- and vision-based methods. Many approaches have been presented on vibration-based source characterization, and they often either focus on one excitation type or have limited performance due to the dispersion and attenuation effects of the structures. In this paper, we present our method to characterize two main types of excitations induced by human activities (impulse and slip-pulse) on multiple structures. By understanding the physical properties of waves and their propagation, the system can achieve accurate excitation tracking on different structures without large-scale labeled training data. Specifically, our algorithm takes properties of surface waves generated by impulse and of body waves generated by slip-pulse into account to handle the dispersion and attenuation effects when different types of excitations happen on various structures. We then evaluate the algorithm through multiple scenarios. Our method achieves up to a six times improvement in impulse localization accuracy and a three times improvement in slip-pulse trajectory length estimation compared to existing methods that do not take wave properties into account.
A state space based approach to localizing single molecules from multi-emitter images.
Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J
2017-01-28
Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.
Kraemer, D; Chen, G
2014-02-01
Accurate measurements of thermal conductivity are of great importance for materials research and development. Steady-state methods determine thermal conductivity directly from the proportionality between heat flow and an applied temperature difference (Fourier's law). Although theoretically simple, in practice, achieving high accuracies with steady-state methods is challenging and requires rather complex experimental setups due to temperature sensor uncertainties and parasitic heat loss. We developed a simple differential steady-state method in which the sample is mounted between an electric heater and a temperature-controlled heat sink. Our method calibrates for parasitic heat losses from the electric heater during the measurement by maintaining a constant heater temperature close to the environmental temperature while varying the heat sink temperature. This enables a large signal-to-noise ratio which permits accurate measurements of samples with small thermal conductance values without an additional heater calibration measurement or sophisticated heater guards to eliminate parasitic heater losses. Additionally, the differential nature of the method largely eliminates the uncertainties of the temperature sensors, permitting measurements with small temperature differences, which is advantageous for samples with high thermal conductance values and/or with strongly temperature-dependent thermal conductivities. In order to accelerate measurements of more than one sample, the proposed method allows for measuring several samples consecutively at each temperature measurement point without adding significant error. We demonstrate the method by performing thermal conductivity measurements on commercial bulk thermoelectric Bi2Te3 samples in the temperature range of 30-150 °C with an error below 3%.
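A minimal data-reduction sketch of the differential idea above: with the heater held at a fixed temperature, parasitic losses appear as a constant offset, so a linear fit of heater power against temperature difference yields the sample conductance and, via Fourier's law, the conductivity. The geometry and synthetic data below are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def thermal_conductivity(q_heater, t_sink, area, length, t_heater):
    """Differential steady-state reduction: fit Q = G*dT + Q_loss, where the
    intercept Q_loss is the (constant) parasitic heater loss and the slope G
    is the sample conductance; k = G * L / A (Fourier's law)."""
    dT = t_heater - np.asarray(t_sink)
    G, q_parasitic = np.polyfit(dT, np.asarray(q_heater), 1)
    return G * length / area, q_parasitic

# Synthetic data: k = 1.5 W/(m K) sample, 5 mm x 5 mm cross-section, 10 mm long,
# plus a constant 3 mW parasitic loss from the heater held at 40 C.
A, L, k_true = 25e-6, 10e-3, 1.5
G_true = k_true * A / L
t_sink = np.array([20.0, 25.0, 30.0, 35.0])
q = G_true * (40.0 - t_sink) + 3e-3
k_est, q_loss = thermal_conductivity(q, t_sink, A, L, t_heater=40.0)
print(round(k_est, 3), round(q_loss * 1e3, 1))  # 1.5 W/(m K), 3.0 mW recovered
```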
Barbara, Joanna E; Castro-Perez, Jose M
2011-10-30
Electrophilic reactive metabolite screening by liquid chromatography/mass spectrometry (LC/MS) is commonly performed during drug discovery and early-stage drug development. Accurate mass spectrometry has excellent utility in this application, but sophisticated data processing strategies are essential to extract useful information. Herein, a unified approach to glutathione (GSH) trapped reactive metabolite screening with high-resolution LC/TOF MS(E) analysis and drug-conjugate-specific in silico data processing was applied to rapid analysis of test compounds without the need for stable- or radio-isotope-labeled trapping agents. Accurate mass defect filtering (MDF) with a C-heteroatom dealkylation algorithm dynamic with mass range was compared to linear MDF and shown to minimize false positive results. MS(E) data-filtering, time-alignment and data mining post-acquisition enabled detection of 53 GSH conjugates overall formed from 5 drugs. Automated comparison of sample and control data in conjunction with the mass defect filter enabled detection of several conjugates that were not evident with mass defect filtering alone. High- and low-energy MS(E) data were time-aligned to generate in silico product ion spectra which were successfully applied to structural elucidation of detected GSH conjugates. Pseudo neutral loss and precursor ion chromatograms derived post-acquisition demonstrated 50.9% potential coverage, at best, of the detected conjugates by any individual precursor or neutral loss scan type. In contrast with commonly applied neutral loss and precursor-based techniques, the unified method has the advantage of applicability across different classes of GSH conjugates. The unified method was also successfully applied to cyanide trapping analysis and has potential for application to alternate trapping agents. Copyright © 2011 John Wiley & Sons, Ltd.
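A toy version of mass defect filtering for GSH conjugates is sketched below: a candidate ion passes if its mass defect is close to that expected for drug + GSH − 2H. The window width, the acetaminophen example, and the use of a single linear window are illustrative assumptions, not the dynamic filter settings used in the study.

```python
def mass_defect(mz):
    """Fractional part of an accurate mass (the 'mass defect')."""
    return mz - int(mz)

# Monoisotopic shifts (assumed values for this sketch)
GSH_SHIFT = 305.06816   # net glutathione addition: GSH - 2H
PROTON = 1.00728        # proton mass for [M+H]+

def mdf_pass(neutral_drug_mass, mz_observed, window_mda=40.0):
    """Toy linear mass defect filter for GSH-conjugate screening: an observed
    [M+H]+ ion passes if its mass defect lies within +/- window_mda (mDa) of
    the defect expected for the protonated drug-GSH conjugate."""
    expected = neutral_drug_mass + GSH_SHIFT + PROTON
    delta_mda = (mass_defect(mz_observed) - mass_defect(expected)) * 1000.0
    return abs(delta_mda) <= window_mda

# Acetaminophen (151.0633 Da): its GSH conjugate [M+H]+ is expected near 457.14
print(mdf_pass(151.0633, 457.1388))  # True: defect matches the expected conjugate
print(mdf_pass(151.0633, 457.3400))  # False: defect far too high for this conjugate
```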
On the development of a methodology for extensive in-situ and continuous atmospheric CO2 monitoring
NASA Astrophysics Data System (ADS)
Wang, K.; Chang, S.; Jhang, T.
2010-12-01
Carbon dioxide is recognized as the dominant greenhouse gas contributing to anthropogenic global warming. Stringent controls on carbon dioxide emissions are viewed as necessary steps in controlling atmospheric carbon dioxide concentrations. From the viewpoint of policy making, regulation of carbon dioxide emissions and its monitoring are keys to the success of stringent emission controls. In particular, extensive atmospheric CO2 monitoring is a crucial step to ensure that CO2 emission control strategies are closely followed. In this work we develop a methodology that enables reliable and accurate in-situ and continuous atmospheric CO2 monitoring for policy making. The methodology comprises the use of a gas filter correlation (GFC) instrument for in-situ CO2 monitoring, the use of CO2 working standards accompanying the continuous measurements, and the use of NOAA WMO CO2 standard gases for calibrating the working standards. The use of GFC instruments enables a 1-second data sampling frequency, with the interference of water vapor removed by an added dryer. The CO2 measurements are conducted in the following timed and cycled manner: a zero-CO2 measurement, measurements of two standard CO2 gases, and ambient air measurements. The standard CO2 gases are calibrated against NOAA WMO CO2 standards. The methodology has been used for indoor CO2 measurements in a commercial office (about 120 people working inside), for ambient CO2 measurements, and, installed in a fleet of in-service commercial cargo ships, for monitoring CO2 over the global marine boundary layer. These measurements demonstrate that our method is reliable, accurate, and traceable to NOAA WMO CO2 standards. The portability of the instrument and the working standards makes the method readily applicable to large-scale and extensive CO2 measurements.
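The zero/standard/ambient measurement cycle described above supports a simple two-point calibration tied to the working standards. The sketch below assumes a linear detector response over the bracketed range; all responses and concentrations are hypothetical.

```python
def calibrate_cycle(resp_std1, resp_std2, conc_std1, conc_std2):
    """Two-point calibration from one measurement cycle, using two CO2
    working standards (themselves tied to the NOAA WMO scale). Returns a
    function mapping instrument response to CO2 mole fraction (ppm),
    assuming a linear detector response. Illustrative sketch only."""
    slope = (conc_std2 - conc_std1) / (resp_std2 - resp_std1)

    def to_ppm(resp):
        return conc_std1 + slope * (resp - resp_std1)

    return to_ppm

# Hypothetical cycle: standards at 380 and 420 ppm bracket an ambient reading
to_ppm = calibrate_cycle(resp_std1=1.90, resp_std2=2.10,
                         conc_std1=380.0, conc_std2=420.0)
print(round(to_ppm(2.00), 1))  # 400.0 ppm for a response halfway between standards
```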
In situ droplet surface tension and viscosity measurements in gas metal arc welding
NASA Astrophysics Data System (ADS)
Bachmann, B.; Siewert, E.; Schein, J.
2012-05-01
In this paper, we present an adaptation of a drop oscillation technique that enables in situ measurements of thermophysical properties of an industrial pulsed gas metal arc welding (GMAW) process. Surface tension, viscosity, density and temperature were derived, expanding the portfolio of existing methods and previously published measurements of surface tension in pulsed GMAW. Natural oscillations of pure liquid iron droplets are recorded during the material transfer with a high-speed camera. Frame rates up to 30 000 fps were utilized to visualize iron droplet oscillations, which were in the low kHz range. Image processing algorithms were employed for edge contour extraction of the droplets and to derive parameters such as oscillation frequencies and damping rates along different dimensions of the droplet. Accurate surface tension measurements were achieved by incorporating the effect of temperature on density. These are compared with a second method that has been developed to accurately determine the mass of droplets produced during the GMAW process, which enables precise surface tension measurements with accuracies up to 1% and permits the study of thermophysical properties even for metals whose density depends strongly on temperature. Thermophysical properties of pure liquid iron droplets formed by a wire with 1.2 mm diameter were investigated in a pulsed GMAW process with a base current of 100 A and a pulse current of 600 A. Surface tension and viscosity of a sample droplet were 1.83 ± 0.02 N m-1 and 2.9 ± 0.3 mPa s, respectively. The corresponding droplet temperature and density are 2040 ± 50 K and 6830 ± 50 kg m-3, respectively.
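For the fundamental (l = 2) mode, small-amplitude droplet oscillations relate frequency and damping to surface tension and viscosity through the classical Rayleigh and Lamb expressions, sketched below. The droplet radius, frequency and damping time are illustrative values of the right order for liquid iron, not the paper's measured data.

```python
import math

def surface_tension(freq_hz, mass_kg):
    """Surface tension from the fundamental (l=2) Rayleigh oscillation mode
    of a free droplet: f^2 = 8*sigma / (3*pi*m), so sigma = 3*pi*m*f^2 / 8.
    Assumes small-amplitude, inviscid oscillations."""
    return 3.0 * math.pi * mass_kg * freq_hz ** 2 / 8.0

def viscosity(damping_time_s, radius_m, density):
    """Dynamic viscosity from Lamb's damping time of the l=2 mode,
    tau = R^2 / (5*nu), hence eta = rho * R^2 / (5*tau)."""
    return density * radius_m ** 2 / (5.0 * damping_time_s)

# Illustrative droplet: radius ~0.64 mm, liquid-iron density 6830 kg/m^3
r = 0.64e-3
m = 6830.0 * 4.0 / 3.0 * math.pi * r ** 3     # ~7.5 mg
sigma = surface_tension(455.0, m)             # assumed oscillation frequency
eta = viscosity(0.19, r, 6830.0)              # assumed damping time of 0.19 s
print(round(sigma, 2))        # N/m, of the order reported for liquid iron
print(round(eta * 1e3, 1))    # mPa s
```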
Bian, Xiao-Peng; Yang, Tao; Lin, An-Jun; Jiang, Shao-Yong
2015-01-01
We have developed a technique for the rapid, precise and accurate determination of sulfur isotopes (δ(34)S) by MC-ICP-MS applicable to a range of sulfur-bearing solutions of different sulfur content. The 10 ppm Alfa-S solution (ammonium sulfate solution, working standard of the authors' lab) was used to bracket other Alfa-S solutions of different concentrations, and the measured δ(34)SV-CDT values of the Alfa-S solutions deviate from the reference value to varying degrees (concentration effect). The stability of the concentration effect has been verified and a correction curve has been constructed based on Alfa-S solutions to correct measured δ(34)SV-CDT values. The curve has been applied successfully to AS solutions (dissolved ammonium sulfate from the authors' lab) and pore water samples, validating the reliability of our analytical method. This method also enables us to measure the sulfur concentration simultaneously when analyzing the sulfur isotope composition. There is a strong linear correlation (R(2)>0.999) between the sulfur concentrations and the intensity ratios of samples and the standard. We have constructed a regression curve based on Alfa-S solutions and this curve has been successfully used to determine the sulfur concentrations of AS solutions and pore water samples. The analytical technique presented here enables rapid, precise and accurate S isotope measurement for a wide range of sulfur-bearing solutions, in particular for pore water samples with complex matrices and varying sulfur concentrations. Simultaneous measurement of sulfur concentrations is also available. Copyright © 2014 Elsevier B.V. All rights reserved.
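Generic standard-sample bracketing, which underlies the scheme above, can be sketched as follows; the measured ratios are hypothetical, and the paper's concentration-effect correction is a separate empirically fitted step not shown here.

```python
def delta34s(r_sample, r_standard):
    """delta-34S (per mil) of a sample 34S/32S ratio relative to a standard:
    delta = (R_sample / R_std - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

def bracketed_delta(r_before, r_sample, r_after):
    """Standard-sample bracketing: instrumental drift is removed by
    interpolating the standard measured before and after the sample.
    Sketch of the generic SSB scheme only."""
    r_std = 0.5 * (r_before + r_after)
    return delta34s(r_sample, r_std)

# Hypothetical measured 34S/32S ratios for one bracketed sample
print(round(bracketed_delta(0.044200, 0.044350, 0.044210), 2))  # 3.28 per mil
```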
Image analysis driven single-cell analytics for systems microbiology.
Balomenos, Athanasios D; Tsakanikas, Panagiotis; Aspridou, Zafiro; Tampakaki, Anastasia P; Koutsoumanis, Konstantinos P; Manolakos, Elias S
2017-04-04
Time-lapse microscopy is an essential tool for capturing and correlating bacterial morphology and gene expression dynamics at single-cell resolution. However, state-of-the-art computational methods are limited in terms of the complexity of cell movies that they can analyze and their lack of automation. The proposed Bacterial image analysis driven Single Cell Analytics (BaSCA) computational pipeline addresses these limitations, thus enabling high-throughput systems microbiology. BaSCA can segment and track multiple bacterial colonies and single cells as they grow and divide over time (cell segmentation and lineage tree construction) to give rise to dense communities with thousands of interacting cells in the field of view. It combines advanced image processing and machine learning methods to deliver very accurate bacterial cell segmentation and tracking (F-measure over 95%), even when processing images of imperfect quality with several overcrowded colonies in the field of view. In addition, BaSCA extracts on the fly a plethora of single-cell properties, which get organized into a database summarizing the analysis of the cell movie. We present alternative ways to analyze and visually explore the spatiotemporal evolution of single-cell properties in order to understand trends and epigenetic effects across cell generations. The robustness of BaSCA is demonstrated across different imaging modalities and microscopy types. BaSCA can be used to accurately and efficiently analyze cell movies both at high resolution (single-cell level) and at large scale (communities with many dense colonies), as needed to shed light on, e.g., how bacterial community effects and epigenetic information transfer play a role in phenomena important for human health, such as biofilm formation and the emergence of persisters. Moreover, it enables studying the role of single-cell stochasticity without losing sight of the community effects that may drive it.
Kanai, Masatake; Mano, Shoji; Nishimura, Mikio
2017-01-11
Plant seeds accumulate large amounts of storage reserves comprising biodegradable organic matter. Humans rely on seed storage reserves for food and as industrial materials. Gene expression profiles are powerful tools for investigating metabolic regulation in plant cells. Therefore, detailed, accurate gene expression profiles during seed development are required for crop breeding. Acquiring highly purified RNA is essential for producing these profiles. Efficient methods are needed to isolate highly purified RNA from seeds. Here, we describe a method for isolating RNA from seeds containing large amounts of oils, proteins, and polyphenols, which have inhibitory effects on high-purity RNA isolation. Our method enables highly purified RNA to be obtained from seeds without the use of phenol, chloroform, or additional processes for RNA purification. This method is applicable to Arabidopsis, rapeseed, and soybean seeds. Our method will be useful for monitoring the expression patterns of low level transcripts in developing and mature seeds.
NASA Astrophysics Data System (ADS)
Vincenti, Henri; Vay, Jean-Luc
2018-07-01
The advent of massively parallel supercomputers, with their distributed-memory technology using many processing units, has favored the development of highly-scalable local low-order solvers at the expense of harder-to-scale global very high-order spectral methods. Indeed, FFT-based methods, which were very popular on shared-memory computers, have been largely replaced by finite-difference (FD) methods for the solution of many problems, including plasma simulations with electromagnetic Particle-In-Cell (PIC) methods. For some problems, such as the modeling of so-called "plasma mirrors" for the generation of high-energy particles and ultra-short radiation, we have shown that the inaccuracies of standard FD-based PIC methods prevent modeling on present supercomputers at sufficient accuracy. We demonstrate here that a new method, based on the use of local FFTs, enables ultrahigh-order accuracy with unprecedented scalability, and thus, for the first time, the accurate modeling of plasma mirrors in 3D.
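The accuracy advantage of spectral over finite-difference solvers comes from the fact that differentiation is exact in Fourier space: multiply each Fourier coefficient by i times its wavenumber. A toy illustration of that principle, using a naive O(n^2) DFT for self-containment (the paper's actual method applies local FFTs on distributed subdomains, which this sketch does not attempt):

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) discrete Fourier transform (stand-in for an FFT)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    """Inverse transform of dft()."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def spectral_derivative(samples, length):
    """Differentiate a periodic signal sampled uniformly over `length` by
    multiplying its Fourier coefficients by i * (2*pi*k/length).
    Differentiation is exact in Fourier space, which is the source of
    spectral methods' accuracy advantage over finite differences."""
    n = len(samples)
    X = dft(samples)
    # Wavenumber ordering for even n: 0..n/2-1, Nyquist mode (zeroed), then -(n/2-1)..-1
    ks = list(range(0, n // 2)) + [0] + list(range(-(n // 2) + 1, 0))
    Y = [1j * (2 * math.pi * k / length) * Xk for Xk, k in zip(X, ks)]
    return [y.real for y in idft(Y)]
```

Applied to a band-limited signal such as sin(x) on a periodic grid, the derivative is exact to machine precision, whereas a finite-difference stencil of any fixed order carries a truncation error that this class of methods avoids entirely.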
NASA Astrophysics Data System (ADS)
Sastry, Kumara Narasimha
2007-03-01
Effective and efficient multiscale modeling is essential to advance both the science and synthesis in a wide array of fields such as physics, chemistry, materials science, biology, biotechnology and pharmacology. This study investigates the efficacy and potential of using genetic algorithms for multiscale materials modeling and addresses some of the challenges involved in designing competent algorithms that solve hard problems quickly, reliably and accurately. In particular, this thesis demonstrates the use of genetic algorithms (GAs) and genetic programming (GP) in multiscale modeling with the help of two non-trivial case studies in materials science and chemistry. The first case study explores the utility of genetic programming (GP) in multi-timescale alloy kinetics simulations. In essence, GP is used to bridge molecular dynamics and kinetic Monte Carlo methods to span orders of magnitude in simulation time. Specifically, GP is used to symbolically regress an in-line barrier function from a limited set of molecular dynamics simulations, enabling kinetic Monte Carlo simulations that reach seconds of real time. Results on a non-trivial example of vacancy-assisted migration on a surface of a face-centered cubic (fcc) copper-cobalt (CuxCo1-x) alloy show that GP predicts all barriers with 0.1% error from calculations for less than 3% of active configurations, independent of the type of potentials used to obtain the learning set of barriers via molecular dynamics. The resulting method enables a 2--9 orders-of-magnitude increase in real-time dynamics simulations while taking 4--7 orders of magnitude less CPU time. The second case study presents the application of multiobjective genetic algorithms (MOGAs) in multiscale quantum chemistry simulations. Specifically, MOGAs are used to bridge high-level quantum chemistry and semiempirical methods to provide an accurate representation of complex molecular excited-state and ground-state behavior.
Results on ethylene and benzene---two common building blocks in organic chemistry---indicate that MOGAs produce high-quality semiempirical methods that (1) are stable to small perturbations, (2) yield accurate configuration energies on untested and critical excited states, and (3) yield ab initio quality excited-state dynamics. The proposed method enables simulations of more complex systems to realistic, multi-picosecond timescales, well beyond previous attempts or the expectations of human experts, with a 2--3 orders-of-magnitude reduction in computational cost. While the two applications use simple evolutionary operators, in order to tackle more complex systems, their scalability and limitations have to be investigated. The second part of the thesis addresses some of the challenges involved in the successful design of genetic algorithms and genetic programming for multiscale modeling. The first issue addressed is the scalability of genetic programming, where facetwise models are built to assess the population size required by GP to ensure an adequate supply of raw building blocks and accurate decision-making between competing building blocks. This study also presents a design of competent genetic programming, where traditional fixed recombination operators are replaced by building and sampling probabilistic models of promising candidate programs. The proposed scalable GP, called extended compact GP (eCGP), combines ideas from the extended compact genetic algorithm (eCGA) and probabilistic incremental program evolution (PIPE) and adaptively identifies, propagates and exchanges important subsolutions of a search problem. Results show that eCGP scales cubically with problem size on both GP-easy and GP-hard problems. Finally, facetwise models are developed to explore the scalability limits of MOGAs, addressing the ability of multiobjective algorithms to reliably maintain Pareto-optimal solutions.
The results show that even when the building blocks are accurately identified, massive multimodality of the search problems can easily overwhelm the nicher (diversity-preserving operator) and lead to exponential scale-up. Facetwise models are developed that incorporate the combined effects of model accuracy, decision making, and sub-structure supply, as well as the effect of niching on population sizing, to predict a limit on the growth rate of the maximum number of sub-structures that can compete in the two objectives without causing the niching method to fail. The results show that if the number of competing building blocks between multiple objectives is less than the proposed limit, multiobjective GAs scale up polynomially with the problem size on boundedly-difficult problems.
NASA Astrophysics Data System (ADS)
Li, Zhi-Guo; Chen, Qi-Feng; Gu, Yun-Jun; Zheng, Jun; Chen, Xiang-Rong
2016-10-01
The accurate hydrodynamic description of an event or system that addresses the equations of state, phase transitions, dissociations, ionizations, and compressions determines how materials respond to a wide range of physical environments. Understanding dense matter behavior in extreme conditions requires the continual development of diagnostic methods for accurate measurements of the physical parameters. Here, we present a comprehensive diagnostic technique that comprises optical pyrometry, velocity interferometry, and time-resolved spectroscopy. This technique was applied to shock compression experiments on dense gaseous deuterium-helium mixtures driven by a two-stage light-gas gun. The advantage of this approach lies in providing measurements of multiple physical parameters in a single experiment, such as light radiation histories, particle velocity profiles, and time-resolved spectra, which enables simultaneous measurements of shock velocity, particle velocity, pressure, density, and temperature and expands understanding of dense high-pressure shock situations. The combination of multiple diagnostics also allows different experimental observables to be measured and cross-checked. Additionally, it implements an accurate measurement of the principal Hugoniots of deuterium-helium mixtures, which provides a benchmark for the impedance-matching measurement technique.
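The link from the measured shock and particle velocities to pressure, density, and internal energy is given by the standard Rankine-Hugoniot jump conditions; a minimal sketch with illustrative values (not the paper's data):

```python
def hugoniot_state(rho0, us, up, p0=0.0):
    """Rankine-Hugoniot jump conditions for a steady shock.

    rho0 : initial density (kg/m^3)
    us   : shock velocity (m/s)
    up   : particle velocity behind the shock (m/s)
    p0   : initial pressure (Pa)
    Returns final pressure (Pa), final density (kg/m^3), and the
    specific internal-energy jump (J/kg).
    """
    p = p0 + rho0 * us * up                          # momentum conservation
    rho = rho0 * us / (us - up)                      # mass conservation
    de = 0.5 * (p + p0) * (1.0 / rho0 - 1.0 / rho)   # energy conservation
    return p, rho, de
```

With both us and up measured in a single shot, pressure and density follow directly; adding pyrometry for temperature over-determines the state, which is what allows the cross-checking between diagnostics described above.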
Assessment of distraction from erotic stimuli by nonerotic interference.
Anderson, Alex B; Hamilton, Lisa Dawn
2015-01-01
Distraction from erotic cues during sexual encounters is a major contributor to sexual difficulties in men and women. Being able to assess distraction in studies of sexual arousal will help clarify underlying contributions to sexual problems. The current study aimed to identify the most accurate assessment of distraction from erotic cues in healthy men (n = 29) and women (n = 38). Participants were assigned to a no distraction, low distraction, or high distraction condition. Distraction was induced using an auditory distraction task presented during the viewing of an erotic video. Attention to erotic cues was assessed using three methods: a written quiz, a visual quiz, and a self-reported distraction measure. Genital and psychological sexual responses were also measured. Self-reported distraction and written quiz scores most accurately represented the level of distraction present, while self-reported distraction also corresponded with a decrease in genital arousal. Findings support the usefulness of self-report measures in conjunction with a brief quiz on the erotic material as the most accurate and sensitive ways to simply measure experimentally-induced distraction. Insight into distraction assessment techniques will enable evaluation of naturally occurring distraction in patients suffering from sexual problems.
From Pressure to Path: Barometer-based Vehicle Tracking
Ho, Bo-Jhang; Martin, Paul; Swaminathan, Prashanth; Srivastava, Mani
2017-01-01
Pervasive mobile devices have enabled countless context- and location-based applications that facilitate navigation, life-logging, and more. As we build the next generation of smart cities, it is important to leverage the rich sensing modalities that these numerous devices have to offer. This work demonstrates how mobile devices can be used to accurately track driving patterns based solely on pressure data collected from the device's barometer. Specifically, by correlating pressure time-series data against topographic elevation data and road maps for a given region, a centralized computer can estimate the likely paths through which individual users have driven, providing an exceptionally low-power method for measuring driving patterns of a given individual or for analyzing group behavior across multiple users. This work also brings to bear a more nefarious side effect of pressure-based path estimation: a mobile application can, without consent and without notifying the user, use pressure data to accurately detect an individual's driving behavior, compromising both user privacy and security. We further analyze the ability to predict driving trajectories in terms of the variance in barometer pressure and geographical elevation, demonstrating cases in which more than 80% of paths can be accurately predicted. PMID:29503981
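The core idea — convert the barometer trace to relative altitude, then score it against candidate road elevation profiles — can be sketched as follows. The barometric-formula inversion and the sum-of-squares scoring here are illustrative assumptions, not the authors' exact estimator:

```python
# Sketch of barometric path matching: pressure -> altitude, then pick the
# candidate road profile whose (mean-removed) elevations best fit the trace.
SEA_LEVEL_PA = 101325.0

def pressure_to_altitude(p_pa, p0=SEA_LEVEL_PA):
    """International barometric formula (troposphere approximation), metres."""
    return 44330.0 * (1.0 - (p_pa / p0) ** (1.0 / 5.255))

def best_matching_path(pressure_trace_pa, candidate_elevation_profiles):
    """Return the candidate elevation profile (same length as the trace)
    with the smallest sum-of-squares misfit after removing each series'
    mean, so only the *shape* of the altitude change matters."""
    alt = [pressure_to_altitude(p) for p in pressure_trace_pa]
    a_mean = sum(alt) / len(alt)
    rel = [a - a_mean for a in alt]

    def misfit(profile):
        m = sum(profile) / len(profile)
        return sum((r - (e - m)) ** 2 for r, e in zip(rel, profile))

    return min(candidate_elevation_profiles, key=misfit)
```

Removing the mean makes the match insensitive to slow weather-driven pressure offsets, which is why the relative shape of elevation change, rather than absolute altitude, is the useful signal.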
Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R
2014-01-01
We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
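A simple way to see how a card sort can predict confusions: treat any two vehicles a participant places in the same pile as candidates for confusion at test. The sketch below illustrates that co-occurrence idea only; it is not the authors' scoring procedure, and their stronger (80%+) predictions came from multidimensional scaling of the similarity ratings. The vehicle labels are made up for illustration:

```python
from itertools import combinations

def predicted_confusions(card_sort_piles):
    """Given a participant's card sort (each pile a list of item labels),
    predict that any two items sharing a pile will be mutually confused
    at identification test. Returns a set of sorted label pairs."""
    pairs = set()
    for pile in card_sort_piles:
        for a, b in combinations(sorted(pile), 2):
            pairs.add((a, b))
    return pairs
```

Comparing this predicted set against the confusion pairs actually observed at test gives the kind of hit rate reported above for the card-sorting method.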
Custom implant design for large cranial defects.
Marreiros, Filipe M M; Heuzé, Y; Verius, M; Unterhofer, C; Freysinger, W; Recheis, W
2016-12-01
The aim of this work was to introduce a computer-aided design (CAD) tool that enables the design of implants for large skull defects (>100 [Formula: see text]). Functional and aesthetically correct custom implants are extremely important for patients with large cranial defects. For these cases, preoperative fabrication of implants is recommended to avoid problems of donor site morbidity, sufficiency of donor material and quality. Finally, crafting the correct shape is a non-trivial task increasingly complicated by defect size. We present a CAD tool to design such implants for the neurocranium. A combination of geometric morphometrics and radial basis functions, namely thin-plate splines, allows semiautomatic implant generation. The method uses symmetry and the best-fitting shape to estimate missing data directly within the radiologic volume data. In addition, this approach delivers correct implant fitting via a boundary-fitting approach. This method generates a smooth implant surface, free of sharp edges, that follows the main contours of the boundary, enabling accurate implant placement in the defect site intraoperatively. The present approach is evaluated and compared to existing methods. On average, 89.29% (range 72.64-100%) of missing landmarks were estimated with an error of less than or equal to 1 mm. In conclusion, the results show that our CAD tool can generate patient-specific implants with high accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeraatkar, Navid; Farahani, Mohammad Hossein; Rahmim, Arman
Purpose: Given increasing efforts in biomedical research utilizing molecular imaging methods, development of dedicated high-performance small-animal SPECT systems has been growing rapidly in the last decade. In the present work, we propose and assess an alternative concept for SPECT imaging enabling desktop open-gantry imaging of small animals. Methods: The system, PERSPECT, consists of an imaging desk with a set of tilted detector and pinhole collimator placed beneath it. The object to be imaged is simply placed on the desk. Monte Carlo (MC) and analytical simulations were utilized to accurately model and evaluate the proposed concept and design. Furthermore, a dedicated image reconstruction algorithm, finite-aperture-based circular projections (FABCP), was developed and validated for the system, enabling more accurate modeling of the system and higher-quality reconstructed images. Image quality was quantified as a function of different tilt angles in the acquisition and number of iterations in the reconstruction algorithm. Furthermore, more complex phantoms including Derenzo, Defrise, and mouse whole-body phantoms were simulated and studied. Results: The sensitivity of the PERSPECT was 207 cps/MBq. It was quantitatively demonstrated that for a tilt angle of 30°, comparable image qualities were obtained in terms of normalized squared error, contrast, uniformity, noise, and spatial resolution measurements, the latter at ∼0.6 mm. Furthermore, quantitative analyses demonstrated that 3 iterations of FABCP image reconstruction (16 subsets/iteration) led to optimally reconstructed images. Conclusions: The PERSPECT, using a novel imaging protocol, can achieve comparable image quality performance in comparison with a conventional pinhole SPECT with the same configuration.
The dedicated FABCP algorithm, which was developed for reconstruction of data from the PERSPECT system, can produce high-quality images for small-animal imaging via accurate modeling of the system as incorporated in the forward- and back-projection steps. Meanwhile, the developed MC model and the analytical simulator of the system can be applied in further studies on development and evaluation of the system.
Schwarz, T; Weber, M; Wörner, M; Renkawitz, T; Grifka, J; Craiovan, B
2017-05-01
Accurate assessment of cup orientation on postoperative radiographs is essential for evaluating outcome after THA. However, accuracy is impeded by the deviation of the central X-ray beam in relation to the cup and the impossibility of measuring retroversion on standard pelvic radiographs. In an experimental trial, we built an artificial cup holder enabling the setting of different angles of anatomical anteversion and inclination. Twelve different cup orientations were investigated by three examiners. After comparing the two methods for radiographic measurement of the cup position developed by Lewinnek and Widmer, we showed how to differentiate between anteversion and retroversion in each cup position by using a second plane. To show the effect of the central beam offset on the cup, we X-rayed a defined cup position using a multidirectional central beam offset. According to Murray's definition of anteversion and inclination, we created a novel corrective procedure to balance measurement errors caused by deviation of the central beam. Measurement of the 12 different cup positions with Lewinnek's method yielded a mean deviation of [Formula: see text] (95% CI 1.3-2.3) from the original cup anteversion. The respective deviation with the Widmer/Liaw method was [Formula: see text] (95% CI 2.4-4.0). In each case, retroversion could be differentiated from anteversion with a second radiograph. Because of the multidirectional central beam offset ([Formula: see text] cm) from the acetabular cup in the cup holder ([Formula: see text] anteversion and [Formula: see text] inclination), the mean absolute difference was [Formula: see text] (range [Formula: see text] to [Formula: see text]) for anteversion and [Formula: see text] (range [Formula: see text] to [Formula: see text]) for inclination.
The application of our novel mathematical correction of the central beam offset reduced deviation to a mean difference of [Formula: see text] for anteversion and [Formula: see text] for inclination. This novel calculation for central beam offset correction enables highly accurate measurement of the cup position.
Archer John Porter Martin CBE 1 March 1910 - 28 July 2002.
Lovelock, James
2004-01-01
We judge the worth of a scientist by the benefits he or she brings to science and society; by this measure Archer Martin was outstanding, and rightfully his contribution was recognized with a Nobel Prize. Scientific instruments and instrumental methods now come almost entirely from commercial sources and we take them for granted, often with little idea of how they work. Archer Martin was of a different time, when scientists would often devise their own new instruments, which usually they fully understood, and then use them to explore the world. The chromatographic methods and instruments Martin devised were at least as crucial in the genesis and development of molecular biology as were those from X-ray crystallography. Liquid partition chromatography, especially in its two-dimensional paper form, revealed the amino acid composition of proteins and the nucleic acid composition of DNA and RNA with a rapid and elegant facility. Gas chromatography (GC) enabled the accurate and rapid analysis of lipids, which previously had been painfully slow and little more than a greasy, sticky confusion of beaker chemistry. Martin's instruments enabled progress in sciences ranging from geophysics to biology, and without him we might have waited decades before another equivalent genius appeared. More than this, the environmental awareness that Rachel Carson gave us would never have solidified as it did without the evidence of global change measured by GC. This instrumental method provided accurate evidence about the ubiquity of pesticides and pollutants and later made us aware of the growing accumulation in the atmosphere of chlorinated fluorocarbons, nitrous oxide and other ozone-depleting chemicals. If all this were not enough to glorify Martin's partition chromatography, there is the undoubted fact that its simplicity, economy and exquisite resolving power transformed the chemical industry and made possible so many of the conveniences we now take for granted.
Accurate chemical master equation solution using multi-finite buffers
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-06-29
Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks, where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by $O(n!)$, exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, an 11-node phage-lambda epigenetic circuit, and a 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.
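For a single birth-death process — the building block into which the framework above decomposes a network — the truncated steady state and an estimate of the probability mass lost to truncation can be computed in closed form. A minimal sketch assuming constant rates (where the steady state is a truncated Poisson distribution; the ACME machinery itself handles far more general networks):

```python
def birth_death_steady_state(k_birth, k_death, n_max):
    """Steady-state distribution of a birth-death process with constant
    birth rate k_birth and per-molecule death rate k_death, truncated at
    copy number n_max. Detailed balance gives
        pi(n) proportional to (k_birth/k_death)^n / n!
    Also returns a crude estimate of the mass beyond n_max (the first
    neglected term), in the spirit of an a priori truncation error."""
    a = k_birth / k_death
    w = [1.0]
    for n in range(1, n_max + 1):
        w.append(w[-1] * a / n)      # recursive Poisson weights a^n / n!
    z = sum(w)
    pi = [x / z for x in w]
    tail_estimate = (w[-1] * a / (n_max + 1)) / z
    return pi, tail_estimate
```

Picking the smallest n_max for which the tail estimate falls below a tolerance mirrors, in this toy setting, the precomputation of finite-buffer sizes for a given error bound described in the abstract.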
Accurate spectroscopic redshift of the multiply lensed quasar PSOJ0147 from the Pan-STARRS survey
NASA Astrophysics Data System (ADS)
Lee, C.-H.
2017-09-01
Context. The gravitational lensing time delay method provides a one-step determination of the Hubble constant (H0) with an uncertainty level on par with the cosmic distance ladder method. However, to further investigate the nature of dark energy, an H0 estimate down to the 1% level is greatly needed. This requires dozens of strongly lensed quasars that are yet to be delivered by ongoing and forthcoming all-sky surveys. Aims: In this work we aim to determine the spectroscopic redshift of PSOJ0147, the first strongly lensed quasar candidate found in the Pan-STARRS survey. The main goal of our work is to derive an accurate redshift estimate of the background quasar for cosmography. Methods: To obtain timely spectroscopic follow-up, we took advantage of the fast-track service programme carried out by the Nordic Optical Telescope. Using a grism covering 3200-9600 Å, we identified prominent emission line features, such as Lyα, N V, O I, C II, Si IV, C IV, and [C III] in the spectra of the background quasar of the PSOJ0147 lens system. This enables us to accurately determine the redshift of the background quasar. Results: The spectrum of the background quasar exhibits prominent absorption features bluewards of the strong emission lines, such as Lyα, N V, and C IV. These blue absorption lines indicate that the background source is a broad absorption line (BAL) quasar. Unfortunately, the BAL features hamper an accurate determination of redshift using the above-mentioned strong emission lines. Nevertheless, we are able to determine a redshift of 2.341 ± 0.001 from three of the four lensed quasar images with the clean forbidden line [C III]. In addition, we also derive a maximum outflow velocity of 9800 km s-1 from the broad absorption features bluewards of the C IV emission line. This value of maximum outflow velocity is in good agreement with other BAL quasars.
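The redshift and outflow-velocity figures follow from elementary relations, sketched below using the approximate rest wavelengths of [C III] λ1909 and C IV λ1549 (the exact line centers adopted in the paper are not reproduced here, and the Doppler estimate is the simple non-relativistic one):

```python
C_KM_S = 299792.458  # speed of light, km/s

def redshift(lambda_obs, lambda_rest):
    """z = (lambda_obs - lambda_rest) / lambda_rest, wavelengths in the
    same units (e.g. Angstroms)."""
    return lambda_obs / lambda_rest - 1.0

def outflow_velocity(lambda_emit, lambda_absorbed):
    """Non-relativistic Doppler estimate of a BAL outflow speed (km/s)
    from the blueshift of an absorption trough relative to its emission
    line, both measured in the quasar rest frame."""
    return C_KM_S * (lambda_emit - lambda_absorbed) / lambda_emit
```

A clean forbidden line such as [C III] is preferred for the redshift precisely because, as noted above, the resonance lines (Lyα, N V, C IV) are eaten into by the blueshifted BAL troughs, biasing any centroid measurement.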
Fast and Accurate Exhaled Breath Ammonia Measurement
Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.
2014-01-01
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz-enhanced photoacoustic spectroscopy (QEPAS), based on a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations. PMID:24962141
NASA Astrophysics Data System (ADS)
Zhao, Huangxuan; Wang, Guangsong; Lin, Riqiang; Gong, Xiaojing; Song, Liang; Li, Tan; Wang, Wenjia; Zhang, Kunya; Qian, Xiuqing; Zhang, Haixia; Li, Lin; Liu, Zhicheng; Liu, Chengbo
2018-04-01
For the diagnosis and evaluation of ophthalmic diseases, imaging and quantitative characterization of vasculature in the iris are very important. The recently developed photoacoustic imaging, which is ultrasensitive in imaging endogenous hemoglobin molecules, provides a highly efficient label-free method for imaging blood vasculature in the iris. However, the development of advanced vascular quantification algorithms is still needed to enable accurate characterization of the underlying vasculature. We have developed a vascular information quantification algorithm based on a three-dimensional (3-D) Hessian matrix and applied it to process iris vasculature images obtained with a custom-built optical-resolution photoacoustic imaging system (OR-PAM). For the first time, we demonstrate in vivo 3-D vascular structures of a rat iris with a label-free imaging method and also accurately extract quantitative vascular information, such as vessel diameter, vascular density, and vascular tortuosity. Our results indicate that the developed algorithm is capable of quantifying the vasculature in 3-D photoacoustic images of the iris in vivo, thus enhancing the diagnostic capability of the OR-PAM system for vascular-related ophthalmic diseases.
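Of the quantities extracted, vessel tortuosity has the simplest common definition — centerline arc length divided by endpoint chord length — sketched below as an assumption; the paper's exact metric may differ:

```python
import math

def tortuosity(centerline):
    """Distance-metric tortuosity of a vessel centerline given as a list
    of (x, y, z) points: total arc length along the polyline divided by
    the straight-line distance between its endpoints. A value of 1.0
    means a perfectly straight vessel; larger values mean more winding."""
    arc = sum(math.dist(p, q) for p, q in zip(centerline, centerline[1:]))
    chord = math.dist(centerline[0], centerline[-1])
    return arc / chord
```

Applied to centerlines traced through the segmented 3-D photoacoustic volume, this kind of ratio turns the raw vessel geometry into a single scalar that can be compared across subjects or disease states.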
Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization
NASA Astrophysics Data System (ADS)
Lee, Kyungbook; Song, Seok Goo
2017-09-01
Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying the nonparametric co-regionalization, which was proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and, therefore, enables us to simulate numerous rupture scenarios, including large events (M > 7.0). It also gives us an opportunity to check the shape of true input correlation models in stochastic modeling after being deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
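For intuition about the expected SFS in the simplest setting, the single-population, constant-size neutral coalescent gives the textbook result E[ξ_i] = θ/i for i = 1..n−1; a short sketch (this is the classical special case, not the multi-population algorithm of the paper):

```python
import numpy as np

def expected_sfs(n, theta=1.0):
    """Expected site frequency spectrum under the constant-size neutral
    coalescent: E[xi_i] = theta / i for i = 1..n-1 (n = sample size)."""
    return theta / np.arange(1, n)

sfs = expected_sfs(10, theta=2.0)
# singletons (i = 1) are the most abundant frequency class
```

Demographic complications (growth, splits, migration) distort this 1/i shape, which is exactly what SFS-based inference exploits.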
Predicting speech intelligibility in noise for hearing-critical jobs
NASA Astrophysics Data System (ADS)
Soli, Sigfrid D.; Laroche, Chantal; Giguere, Christian
2003-10-01
Many jobs require auditory abilities such as speech communication, sound localization, and sound detection. An employee for whom these abilities are impaired may constitute a safety risk for himself or herself, for fellow workers, and possibly for the general public. A number of methods have been used to predict these abilities from diagnostic measures of hearing (e.g., the pure-tone audiogram); however, these methods have not proved to be sufficiently accurate for predicting performance in the noise environments where hearing-critical jobs are performed. We have taken an alternative and potentially more accurate approach. A direct measure of speech intelligibility in noise, the Hearing in Noise Test (HINT), is instead used to screen individuals. The screening criteria are validated by establishing the empirical relationship between the HINT score and the auditory abilities of the individual, as measured in laboratory recreations of real-world workplace noise environments. The psychometric properties of the HINT enable screening of individuals with an acceptable amount of error. In this presentation, we will describe the predictive model and report the results of field measurements and laboratory studies used to provide empirical validation of the model. [Work supported by Fisheries and Oceans Canada.]
Efficient computation of the joint sample frequency spectra for multiple populations
Kamm, John A.; Terhorst, Jonathan; Song, Yun S.
2016-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity. PMID:28239248
Neutron Polarization Analysis for Biphasic Solvent Extraction Systems
Motokawa, Ryuhei; Endo, Hitoshi; Nagao, Michihiro; ...
2016-06-16
Here we performed neutron polarization analysis (NPA) of extracted organic phases containing complexes, comprised of Zr(NO3)4 and tri-n-butyl phosphate, which enabled decomposition of the intensity distribution of small-angle neutron scattering (SANS) into the coherent and incoherent scattering components. The coherent scattering intensity, containing structural information, and the incoherent scattering compete over a wide range of magnitude of scattering vector, q, specifically when q is larger than q* ≈ 1/Rg, where Rg is the radius of gyration of the scatterer. Therefore, it is important to determine the incoherent scattering intensity exactly to perform an accurate structural analysis from SANS data when Rg is small, such as the aforementioned extracted coordination species. Although NPA is the best method for evaluating the incoherent scattering component for accurately determining the coherent scattering in SANS, this method is not used frequently in SANS data analysis because it is technically challenging. In this study, we successfully demonstrated that experimental determination of the incoherent scattering using NPA is suitable for sample systems containing a small scatterer with a weak coherent scattering intensity, such as extracted complexes in biphasic solvent extraction systems.
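The coherent/incoherent separation that NPA provides rests on the standard spin-flip decomposition: nuclear spin-incoherent scattering contributes 1/3 to the non-spin-flip channel and 2/3 to the spin-flip channel, while coherent scattering is entirely non-spin-flip. A toy NumPy illustration (the Lorentzian form factor and intensity levels are invented for demonstration, not data from the study):

```python
import numpy as np

q = np.linspace(0.05, 0.5, 50)
I_coh_true = 10.0 / (1 + (q * 30) ** 2)   # toy coherent signal (Lorentzian)
I_inc_true = 0.8                           # flat spin-incoherent background

# Measured polarized channels: non-spin-flip (nsf) and spin-flip (sf)
I_nsf = I_coh_true + I_inc_true / 3
I_sf = 2 * I_inc_true / 3

# Recover the components from the two channels
I_coh = I_nsf - I_sf / 2                   # coherent signal
I_inc = 1.5 * I_sf                         # incoherent level
```

When the coherent signal is weak (small scatterers, large q), this subtraction is exactly what rescues the structural analysis.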
Modifying PASVART to solve singular nonlinear 2-point boundary problems
NASA Technical Reports Server (NTRS)
Fulton, James P.
1988-01-01
To study the buckling and post-buckling behavior of shells and various other structures, one must solve a nonlinear 2-point boundary problem. Since closed-form analytic solutions for such problems are virtually nonexistent, numerical approximations are inevitable. This makes the availability of accurate and reliable software indispensable. In a series of papers Lentini and Pereyra, expanding on the work of Keller, developed PASVART: an adaptive finite difference solver for nonlinear 2-point boundary problems. While the program does produce extremely accurate solutions with great efficiency, it is hindered by a major limitation. PASVART will only locate isolated solutions of the problem. In buckling problems, the solution set is not unique. It will contain singular or bifurcation points, where different branches of the solution set may intersect. Thus, PASVART is useless precisely when the problem becomes interesting. To resolve this deficiency we propose a modification of PASVART that will enable the user to perform a more complete bifurcation analysis: PASVART would be combined with the Thurston bifurcation solution, an adaptation of Newton's method motivated by the work of Koiter and reinterpreted as an iterative computational method by Thurston.
NASA Astrophysics Data System (ADS)
Yao, Cang Lang; Li, Jian Chen; Gao, Wang; Tkatchenko, Alexandre; Jiang, Qing
2017-12-01
We propose an effective method to accurately determine the defect formation energy Ef and charge transition level ɛ of the point defects using exclusively cohesive energy Ecoh and the fundamental band gap Eg of pristine host materials. We find that Ef of the point defects can be effectively separated into geometric and electronic contributions with a functional form: Ef=χ Ecoh+λ Eg , where χ and λ are dictated by the geometric and electronic factors of the point defects (χ and λ are defect dependent). Such a linear combination of Ecoh and Eg reproduces Ef with an accuracy better than 5% for electronic structure methods ranging from hybrid density-functional theory (DFT) to many-body random-phase approximation (RPA) and experiments. Accordingly, ɛ is also determined by Ecoh/Eg and the defect geometric/electronic factors. The identified correlation is rather general for monovacancies and interstitials, which holds in a wide variety of semiconductors covering Si, Ge, phosphorenes, ZnO, GaAs, and InP, and enables one to obtain reliable values of Ef and ɛ of the point defects for RPA and experiments based on semilocal DFT calculations.
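The separable form Ef = χ·Ecoh + λ·Eg can be fit by ordinary least squares once reference Ef values are available for a given defect. A minimal sketch with invented numbers (the real study derives χ and λ per defect from DFT/RPA data; these values are illustrative only):

```python
import numpy as np

# Hypothetical (Ecoh, Eg, Ef) data for four semiconductors -- invented
# for illustration, not values from the paper.
Ecoh = np.array([4.6, 3.9, 3.3, 6.5])   # cohesive energy (eV/atom)
Eg   = np.array([1.1, 0.7, 1.4, 3.4])   # fundamental band gap (eV)
Ef   = np.array([3.6, 2.9, 2.9, 5.6])   # vacancy formation energy (eV)

# Least-squares fit of Ef = chi * Ecoh + lam * Eg (no intercept)
A = np.column_stack([Ecoh, Eg])
(chi, lam), *_ = np.linalg.lstsq(A, Ef, rcond=None)
pred = A @ np.array([chi, lam])
```

Once χ and λ are known for a defect type, Ef (and from it the charge transition level ε) can be predicted for a new host from its Ecoh and Eg alone, which is the practical payoff the abstract describes.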
A fast numerical scheme for causal relativistic hydrodynamics with dissipation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takamoto, Makoto, E-mail: takamoto@tap.scphys.kyoto-u.ac.jp; Inutsuka, Shu-ichiro
2011-08-01
Highlights: (1) We have developed a new multi-dimensional numerical scheme for causal relativistic hydrodynamics with dissipation. (2) Our new scheme can calculate the evolution of dissipative relativistic hydrodynamics faster and more effectively than existing schemes. (3) Since we use the Riemann solver for solving the advection steps, our method can capture shocks very accurately. - Abstract: In this paper, we develop a stable and fast numerical scheme for relativistic dissipative hydrodynamics based on Israel-Stewart theory. Israel-Stewart theory is a stable and causal description of dissipation in relativistic hydrodynamics, although it includes a relaxation process with the timescale of collisions between constituent particles, which introduces stiff equations and makes practical numerical calculation difficult. In our new scheme, we use Strang's splitting method, and use the piecewise exact solutions for solving the extremely short timescale problem. In addition, since we split the calculations into an inviscid step and a dissipative step, a Riemann solver can be used for obtaining the numerical flux for the inviscid step. The use of a Riemann solver enables us to capture shocks very accurately. Simple numerical examples are shown. The present scheme can be applied to various high energy phenomena of astrophysics and nuclear physics.
Inthachot, Montri; Boonjing, Veera; Intakosum, Sarun
2016-01-01
This study investigated the use of Artificial Neural Network (ANN) and Genetic Algorithm (GA) for prediction of Thailand's SET50 index trend. ANN is a widely accepted machine learning method that uses past data to predict future trend, while GA is an algorithm that can find better subsets of input variables for importing into ANN, hence enabling more accurate prediction by its efficient feature selection. The imported data were chosen technical indicators highly regarded by stock analysts, each represented by 4 input variables that were based on past time spans of 4 different lengths: 3-, 5-, 10-, and 15-day spans before the day of prediction. This import undertaking generated a big set of diverse input variables with an exponentially higher number of possible subsets that GA culled down to a manageable number of more effective ones. SET50 index data of the past 6 years, from 2009 to 2014, were used to evaluate this hybrid intelligence prediction accuracy, and the hybrid's prediction results were found to be more accurate than those made by a method using only one input variable for one fixed length of past time span. PMID:27974883
Inthachot, Montri; Boonjing, Veera; Intakosum, Sarun
2016-01-01
This study investigated the use of Artificial Neural Network (ANN) and Genetic Algorithm (GA) for prediction of Thailand's SET50 index trend. ANN is a widely accepted machine learning method that uses past data to predict future trend, while GA is an algorithm that can find better subsets of input variables for importing into ANN, hence enabling more accurate prediction by its efficient feature selection. The imported data were chosen technical indicators highly regarded by stock analysts, each represented by 4 input variables that were based on past time spans of 4 different lengths: 3-, 5-, 10-, and 15-day spans before the day of prediction. This import undertaking generated a big set of diverse input variables with an exponentially higher number of possible subsets that GA culled down to a manageable number of more effective ones. SET50 index data of the past 6 years, from 2009 to 2014, were used to evaluate this hybrid intelligence prediction accuracy, and the hybrid's prediction results were found to be more accurate than those made by a method using only one input variable for one fixed length of past time span.
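The GA-over-feature-subsets idea can be sketched with a tiny genetic algorithm selecting inputs for a least-squares predictor (a stand-in for the paper's ANN; the data, fitness function, and GA settings below are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 16 candidate indicator features, only 4 actually informative.
n, p = 300, 16
X = rng.normal(size=(n, p))
y = X[:, [0, 3, 7, 11]].sum(axis=1) + 0.1 * rng.normal(size=n)

def fitness(mask):
    """Validation R^2 of a least-squares fit restricted to the masked features."""
    if not mask.any():
        return -np.inf
    Xtr, Xva, ytr, yva = X[:200, mask], X[200:, mask], y[:200], y[200:]
    w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    resid = yva - Xva @ w
    return 1 - resid.var() / yva.var()

pop = rng.random((30, p)) < 0.5              # population of feature masks
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]  # truncation selection
    # uniform crossover: each gene drawn from a random parent
    children = parents[rng.integers(0, 10, size=(30, p)), np.arange(p)]
    children ^= rng.random((30, p)) < 0.02   # bit-flip mutation
    pop = children

best = pop[np.argmax([fitness(m) for m in pop])]
```

The GA's job, as in the paper, is to cull an exponentially large space of input-variable subsets down to one that generalizes well.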
NASA Astrophysics Data System (ADS)
Laune, Jordan; Tzeferacos, Petros; Feister, Scott; Fatenejad, Milad; Yurchak, Roman; Flocke, Norbert; Weide, Klaus; Lamb, Donald
2017-10-01
Thermodynamic and opacity properties of materials are necessary to accurately simulate laser-driven laboratory experiments. Such data are compiled in tabular format since the thermodynamic range that needs to be covered cannot be described with one single theoretical model. Moreover, tabulated data can be made available prior to runtime, reducing both compute cost and code complexity. This approach is employed by the FLASH code. Equation of state (EoS) and opacity data come in various formats, matrix-layouts, and file-structures. We discuss recent developments on opacplot2, an open-source Python module that manipulates tabulated EoS and opacity data. We present software that builds upon opacplot2 and enables easy-to-use conversion of different table formats into the IONMIX format, the native tabular input used by FLASH. Our work enables FLASH users to take advantage of a wider range of accurate EoS and opacity tables in simulating HEDP experiments at the National Laser User Facilities.
NASA Technical Reports Server (NTRS)
Smyrlis, Yiorgos S.; Papageorgiou, Demetrios T.
1991-01-01
The results of extensive computations are presented in order to accurately characterize transitions to chaos for the Kuramoto-Sivashinsky equation. In particular, the oscillatory dynamics in a window that supports a complete sequence of period doubling bifurcations preceding chaos is followed. As many as thirteen period doublings are followed and used to compute the Feigenbaum number for the cascade and so enable, for the first time, an accurate numerical evaluation of the theory of universal behavior of nonlinear systems, for an infinite dimensional dynamical system. Furthermore, the dynamics at the threshold of chaos exhibit a fractal behavior which is demonstrated and used to compute a universal scaling factor that enables the self-similar continuation of the solution into a chaotic regime.
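The Feigenbaum estimate from a period-doubling cascade works the same way for any unimodal map; a sketch using the well-known logistic-map bifurcation parameter values (illustrative of the procedure, not the Kuramoto-Sivashinsky data of the paper):

```python
# Bifurcation parameter values r_n of the logistic map x -> r x (1 - x)
# at the onset of period 2^n (n = 1..5); standard published values.
r = [3.0, 3.449490, 3.544090, 3.564407, 3.568759]

# Successive ratios of parameter gaps converge to Feigenbaum's constant
# delta = 4.6692..., the universal rate for period-doubling cascades.
deltas = [(r[i] - r[i - 1]) / (r[i + 1] - r[i]) for i in range(1, len(r) - 1)]
```

Following thirteen doublings, as the paper does, gives many more such ratios and a correspondingly accurate estimate of the universal constant.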
Yendrek, Craig R.; Tomaz, Tiago; Montes, Christopher M.; Cao, Youyuan; Morse, Alison M.; Brown, Patrick J.; McIntyre, Lauren M.; Leakey, Andrew D.B.
2017-01-01
High-throughput, noninvasive field phenotyping has revealed genetic variation in crop morphological, developmental, and agronomic traits, but rapid measurements of the underlying physiological and biochemical traits are needed to fully understand genetic variation in plant-environment interactions. This study tested the application of leaf hyperspectral reflectance (λ = 500–2,400 nm) as a high-throughput phenotyping approach for rapid and accurate assessment of leaf photosynthetic and biochemical traits in maize (Zea mays). Leaf traits were measured with standard wet-laboratory and gas-exchange approaches alongside measurements of leaf reflectance. Partial least-squares regression was used to develop a measure of leaf chlorophyll content, nitrogen content, sucrose content, specific leaf area, maximum rate of phosphoenolpyruvate carboxylation, [CO2]-saturated rate of photosynthesis, and leaf oxygen radical absorbance capacity from leaf reflectance spectra. Partial least-squares regression models accurately predicted five out of seven traits and were more accurate than previously used simple spectral indices for leaf chlorophyll, nitrogen content, and specific leaf area. Correlations among leaf traits and statistical inferences about differences among genotypes and treatments were similar for measured and modeled data. The hyperspectral reflectance approach to phenotyping was dramatically faster than traditional measurements, enabling over 1,000 rows to be phenotyped during midday hours over just 2 to 4 d, and offers a nondestructive method to accurately assess physiological and biochemical trait responses to environmental stress. PMID:28049858
Inferring probabilistic stellar rotation periods using Gaussian processes
NASA Astrophysics Data System (ADS)
Angus, Ruth; Morton, Timothy; Aigrain, Suzanne; Foreman-Mackey, Daniel; Rajpaul, Vinesh
2018-02-01
Variability in the light curves of spotted, rotating stars is often non-sinusoidal and quasi-periodic - spots move on the stellar surface and have finite lifetimes, causing stellar flux variations to slowly shift in phase. A strictly periodic sinusoid therefore cannot accurately model a rotationally modulated stellar light curve. Physical models of stellar surfaces have many drawbacks preventing effective inference, such as highly degenerate or high-dimensional parameter spaces. In this work, we test an appropriate effective model: a Gaussian Process with a quasi-periodic covariance kernel function. This highly flexible model allows sampling of the posterior probability density function of the periodic parameter, marginalizing over the other kernel hyperparameters using a Markov Chain Monte Carlo approach. To test the effectiveness of this method, we infer rotation periods from 333 simulated stellar light curves, demonstrating that the Gaussian process method produces periods that are more accurate than both a sine-fitting periodogram and an autocorrelation function method. We also demonstrate that it works well on real data, by inferring rotation periods for 275 Kepler stars with previously measured periods. We provide a table of rotation periods for these and many more, altogether 1102 Kepler objects of interest, and their posterior probability density function samples. Because this method delivers posterior probability density functions, it will enable hierarchical studies involving stellar rotation, particularly those involving population modelling, such as inferring stellar ages, obliquities in exoplanet systems, or characterizing star-planet interactions. The code used to implement this method is available online.
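A common form of the quasi-periodic covariance kernel multiplies a periodic term by a squared-exponential envelope, so the signal repeats with period P but decorrelates over the spot-evolution timescale l. A sketch (the hyperparameter values are illustrative, and the exact kernel used in the paper may differ):

```python
import numpy as np

def quasi_periodic_kernel(t1, t2, A=1.0, l=20.0, gamma=1.0, P=10.0):
    """Quasi-periodic covariance: periodic term damped by a
    squared-exponential envelope of length scale l."""
    tau = np.subtract.outer(t1, t2)
    return A * np.exp(-tau**2 / (2 * l**2)
                      - gamma * np.sin(np.pi * tau / P)**2)

t = np.linspace(0, 50, 200)
K = quasi_periodic_kernel(t, t)
# draw one simulated "light curve" from the GP prior (jitter for stability)
flux = np.random.default_rng(1).multivariate_normal(
    np.zeros(len(t)), K + 1e-8 * np.eye(len(t)))
```

In the inference setting, P is the rotation-period parameter whose posterior is sampled while the other hyperparameters are marginalized over with MCMC.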
Thermal dosimetry for bladder hyperthermia treatment. An overview.
Schooneveldt, Gerben; Bakker, Akke; Balidemaj, Edmond; Chopra, Rajiv; Crezee, Johannes; Geijsen, Elisabeth D; Hartmann, Josefin; Hulshof, Maarten C C M; Kok, H Petra; Paulides, Margarethus M; Sousa-Escandon, Alejandro; Stauffer, Paul R; Maccarini, Paolo F
2016-06-01
The urinary bladder is a fluid-filled organ. This makes, on the one hand, the internal surface of the bladder wall relatively easy to heat and ensures in most cases a relatively homogeneous temperature distribution; on the other hand the variable volume, organ motion, and moving fluid cause artefacts for most non-invasive thermometry methods, and require additional efforts in planning accurate thermal treatment of bladder cancer. We give an overview of the thermometry methods currently used and investigated for hyperthermia treatments of bladder cancer, and discuss their advantages and disadvantages within the context of the specific disease (muscle-invasive or non-muscle-invasive bladder cancer) and the heating technique used. The role of treatment simulation to determine the thermal dose delivered is also discussed. Generally speaking, invasive measurement methods are more accurate than non-invasive methods, but provide more limited spatial information; therefore, a combination of both is desirable, preferably supplemented by simulations. Current efforts at research and clinical centres continue to improve non-invasive thermometry methods and the reliability of treatment planning and control software. Due to the challenges in measuring temperature across the non-stationary bladder wall and surrounding tissues, more research is needed to increase our knowledge about the penetration depth and typical heating pattern of the various hyperthermia devices, in order to further improve treatments. The ability to better determine the delivered thermal dose will enable clinicians to investigate the optimal treatment parameters, and consequentially, to give better controlled, thus even more reliable and effective, thermal treatments.
RIO: a new computational framework for accurate initial data of binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-06-01
We present a computational framework ( Rio) in the ADM 3+1 approach for numerical relativity. This work enables us to carry out high resolution calculations for initial data of two arbitrary black holes. We use the transverse conformal treatment, the Bowen-York and the puncture methods. For the numerical solution of the Hamiltonian constraint we use the domain decomposition and the spectral decomposition of Galerkin-Collocation. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show the convergence of the Rio code. This code allows for easy deployment of large calculations. We show how the spin of one of the black holes is manifest in the conformal factor.
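A useful check for any Hamiltonian-constraint solver of this kind is the analytic Brill-Lindquist solution, the time-symmetric special case of puncture data (zero Bowen-York momenta and spins), where the conformal factor is simply ψ = 1 + Σ m_i/(2 r_i). A sketch (grid and puncture parameters are illustrative; this is a validation case, not the Rio solver itself):

```python
import numpy as np

def brill_lindquist_psi(x, y, z, punctures):
    """Conformal factor for time-symmetric two-black-hole initial data:
    psi = 1 + sum_i m_i / (2 r_i), with r_i the distance to puncture i."""
    psi = np.ones_like(x, dtype=float)
    for (m, cx, cy, cz) in punctures:
        r = np.sqrt((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2)
        psi += m / (2 * r)
    return psi

# evaluate along a line offset from the axis so no point hits a puncture
x = np.linspace(-6, 6, 100)
psi = brill_lindquist_psi(x, 0.5 * np.ones_like(x), np.zeros_like(x),
                          [(1.0, -2.0, 0.0, 0.0), (1.0, 2.0, 0.0, 0.0)])
```

A numerical solver such as the one described above should reproduce this ψ when the momenta and spins are switched off, which makes it a natural convergence test.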
High-Fidelity 3D-Nanoprinting via Focused Electron Beams: Computer-Aided Design (3BID)
Fowlkes, Jason D.; Winkler, Robert; Lewis, Brett B.; ...
2018-02-14
Currently, there are few techniques that allow true 3D-printing on the nanoscale. The most promising candidate to fill this void is focused electron-beam-induced deposition (FEBID), a resist-free, nanofabrication compatible, direct-write method. The basic working principles of a computer-aided design (CAD) program (3BID) enabling 3D-FEBID are presented and the program is simultaneously released for download. The 3BID capability significantly expands the currently limited toolbox for 3D-nanoprinting, providing access to geometries for optoelectronic, plasmonic, and nanomagnetic applications that were previously unattainable due to the lack of a suitable method for synthesis. In conclusion, the CAD approach supplants trial and error toward the more precise/accurate FEBID required for real applications/device prototyping.
Veenstra, Richard D
2016-01-01
The development of the patch clamp technique has enabled investigators to directly measure gap junction conductance between isolated pairs of small cells with resolution to the single channel level. The dual patch clamp recording technique requires specialized equipment and the acquired skill to reliably establish gigaohm seals and the whole cell recording configuration with high efficiency. This chapter describes the equipment needed and methods required to achieve accurate measurement of macroscopic and single gap junction channel conductances. Inherent limitations with the dual whole cell recording technique and methods to correct for series access resistance errors are defined as well as basic procedures to determine the essential electrical parameters necessary to evaluate the accuracy of gap junction conductance measurements using this approach.
A phased antenna array for surface plasmons
Dikken, Dirk Jan W.; Korterik, Jeroen P.; Segerink, Frans B.; Herek, Jennifer L.; Prangsma, Jord C.
2016-01-01
Surface plasmon polaritons are electromagnetic waves that propagate tightly bound to metal surfaces. The concentration of the electromagnetic field at the surface as well as the short wavelength of surface plasmons enable sensitive detection methods and miniaturization of optics. We present an optical frequency plasmonic analog to the phased antenna array as it is well known in radar technology and radio astronomy. Individual holes in a thick gold film act as dipolar emitters of surface plasmon polaritons whose phase is controlled individually using a digital spatial light modulator. We show experimentally, using a phase sensitive near-field microscope, that this optical system allows accurate directional emission of surface waves. This compact and flexible method allows for dynamically shaping the propagation of plasmons and holds promise for nanophotonic applications employing propagating surface plasmons. PMID:27121099
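The phase-steering principle is the same as in a classical antenna array: a linear phase ramp Δφ across the emitters moves the main lobe to sin θ = −Δφ/(kd). A sketch for idealized isotropic point emitters (not the hole dipoles or plasmon dispersion of the experiment; geometry values are illustrative):

```python
import numpy as np

def array_factor(theta, n_emitters=8, spacing=0.6, dphi=0.0):
    """Far-field array factor of n point emitters with uniform spacing
    (in wavelengths) and a linear phase ramp dphi (radians) per emitter."""
    n = np.arange(n_emitters)
    k = 2 * np.pi  # wavenumber in units of 1/wavelength
    phase = n[:, None] * (k * spacing * np.sin(theta)[None, :] + dphi)
    return np.abs(np.exp(1j * phase).sum(axis=0))

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
dphi = -2 * np.pi * 0.6 * 0.5   # steer the main lobe to sin(theta) = 0.5
af = array_factor(theta, dphi=dphi)
peak = theta[np.argmax(af)]     # expected near 30 degrees
```

In the experiment the spatial light modulator plays the role of `dphi`, setting the phase of each hole's plasmon emission individually.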
High-Fidelity 3D-Nanoprinting via Focused Electron Beams: Computer-Aided Design (3BID)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowlkes, Jason D.; Winkler, Robert; Lewis, Brett B.
Currently, there are few techniques that allow true 3D-printing on the nanoscale. The most promising candidate to fill this void is focused electron-beam-induced deposition (FEBID), a resist-free, nanofabrication compatible, direct-write method. The basic working principles of a computer-aided design (CAD) program (3BID) enabling 3D-FEBID are presented and the program is simultaneously released for download. The 3BID capability significantly expands the currently limited toolbox for 3D-nanoprinting, providing access to geometries for optoelectronic, plasmonic, and nanomagnetic applications that were previously unattainable due to the lack of a suitable method for synthesis. In conclusion, the CAD approach supplants trial and error toward the more precise/accurate FEBID required for real applications/device prototyping.
Local facet approximation for image stitching
NASA Astrophysics Data System (ADS)
Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun
2018-01-01
Image stitching aims at eliminating multiview parallax and generating a seamless panorama given a set of input images. This paper proposes a local adaptive stitching method, which could achieve both accurate and robust image alignments across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables the parallax handling in three-dimensional space. We also provide a concise but effective global projectivity preserving technique that smoothly varies the transformations from local adaptive to global planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in the comparative experiments on several challenging cases.
Zhang, Ruiguo
2017-01-01
Objective: Amiodarone-induced thyrotoxicosis (AIT) is caused by amiodarone as a side effect of cardiovascular disease treatment. Based on the differences in their pathological and physiological mechanisms, many methods have been developed so far to differentiate AIT subtypes, such as colour flow Doppler sonography (CFDS) and 24-h radioiodine uptake (RAIU). However, these methods suffer from inadequate accuracy in distinguishing different types of AITs and sometimes lead to misdiagnosis and delayed treatments. Therefore, there is an unmet demand for an efficient method for accurate classification of AIT. Methods: Technetium-99m methoxyisobutylisonitrile (99mTc-MIBI) thyroid imaging was performed on 15 patients for AIT classification, and the results were compared with other conventional methods such as CFDS, RAIU and 99mTcO4 imaging. Results: High uptake and retention of MIBI in thyroid tissue is characteristic in Type I AIT, while in sharp contrast, low uptake of MIBI in the thyroid tissue was observed in Type II AIT. Mixed-type AIT shows uptake value between Types I and II. MIBI imaging outperforms other methods with a lower misdiagnosis rate. Conclusion: Among the methods evaluated, MIBI imaging enables an accurate identification of Type I, II and mixed-type AITs by showing distinct images for different types of AITs. The results obtained from our selected subjects revealed that MIBI imaging is a reliable method for diagnosis and classification of AITs and MIBI imaging has potential in the treatment of thyroid diseases. Advances in knowledge: 99mTc-MIBI imaging is a useful method for the diagnosis of AIT. It can distinguish different types of AITs especially for mixed-type AIT, which is usually difficult to treat. 99mTc-MIBI has potential advantages over conventional methods in the efficient treatment of AIT. PMID:28106465
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
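The Green-Kubo estimate — a transport coefficient as the time integral of a flux autocorrelation function — can be checked on a synthetic signal with a known answer. A sketch using an AR(1)/Ornstein-Uhlenbeck surrogate flux (not molecular-dynamics data; the single-trajectory estimator here is simpler than the paper's multi-replica scheme with error bounds):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, tau, n = 0.01, 0.5, 500_000
phi = np.exp(-dt / tau)

# Synthetic stationary "flux": AR(1) surrogate with unit variance, whose
# autocorrelation integral is known analytically (= tau).
eps = rng.normal(size=n) * np.sqrt(1 - phi**2)
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = phi * x[i - 1] + eps[i]

# Green-Kubo estimate: integrate the flux autocorrelation up to a cutoff.
kmax = int(10 * tau / dt)
acf = np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(kmax)])
coeff = dt * (acf[0] / 2 + acf[1:].sum())   # trapezoid-like time integral

exact = tau  # integral of exp(-t/tau) from 0 to infinity, unit variance
```

The residual scatter of `coeff` around `exact` is exactly the kind of statistical error the paper's on-the-fly stationarity and variance analysis is designed to bound.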
NASA Astrophysics Data System (ADS)
Jones, Reese E.; Mandadapu, Kranthi K.
2012-04-01
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
A Hybrid Windkessel Model of Blood Flow in Arterial Tree Using Velocity Profile Method
NASA Astrophysics Data System (ADS)
Aboelkassem, Yasser; Virag, Zdravko
2016-11-01
For the study of pulsatile blood flow in the arterial system, we derived a coupled Windkessel-Womersley mathematical model. Initially, a six-element Windkessel model is proposed to describe hemodynamic transport in terms of constant resistance, inductance and capacitance. This model can be seen as a two-compartment model in which the compartments are connected by a rigid pipe, modeled by one inductor and one resistor. The first, viscoelastic compartment models the proximal part of the aorta, the second, elastic compartment represents the rest of the arterial tree, and the aorta can be seen as the connecting pipe. Although the proposed six-element lumped model was able to accurately reconstruct the aortic pressure, it cannot be used to predict the axial velocity distribution in the aorta, the wall shear stress, and consequently the proper time-varying pressure drop. We therefore modified this lumped model by replacing the connecting-pipe circuit elements with a vessel of radius R and length L. Resolving the pulsatile flow motions in the vessel instantaneously, together with the Windkessel-like model, enables accurate prediction not only of the aortic pressure but also of the wall shear stress and the frictional pressure drop. The proposed hybrid model has been validated using several in-vivo aortic pressure and flow-rate data sets acquired from different species, such as humans, dogs and pigs. The method accurately predicts the time variation of wall shear stress and frictional pressure drop. Institute for Computational Medicine, Dept. Biomedical Engineering.
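For orientation, the simplest member of the Windkessel family, the classic two-element model C dP/dt = Q(t) - P/R, can be integrated in a few lines. This is a toy sketch for context only, not the six-element hybrid model of the abstract; the parameter values and the forward-Euler scheme are illustrative assumptions.

```python
def windkessel_2e(q, dt, R, C, p0=0.0):
    """Integrate the classic two-element Windkessel ODE
       C dP/dt = Q(t) - P/R
    with forward Euler. q is the inflow sampled at step dt."""
    p = [p0]
    for qk in q[:-1]:
        dp = (qk - p[-1] / R) / C
        p.append(p[-1] + dt * dp)
    return p
```

With constant inflow the pressure relaxes to Q*R with time constant RC; richer models such as the paper's hybrid refine exactly this behaviour by resolving the velocity profile in the connecting vessel.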
Coupled-cluster based R-matrix codes (CCRM): Recent developments
NASA Astrophysics Data System (ADS)
Sur, Chiranjib; Pradhan, Anil K.
2008-05-01
We report the ongoing development of the new coupled-cluster R-matrix codes (CCRM) for treating electron-ion scattering and radiative processes within the framework of the relativistic coupled-cluster (RCC) method, interfaced with the standard R-matrix methodology. The RCC method is size consistent and, in principle, equivalent to an all-order many-body perturbation theory. It is one of the most accurate many-body theories and has been applied to several systems. This project should enable the study of electron interactions with heavy atoms and ions, utilizing not only high-speed computing platforms but also an improved theoretical description of the relativistic and correlation effects for the target atoms/ions, as treated extensively within the RCC method. Here we present a comprehensive outline of the newly developed theoretical method and a schematic representation of the new suite of CCRM codes. We begin with the flowchart and a description of the various stages involved in this development, retaining notations and nomenclature for the stages analogous to those of the standard R-matrix codes.
Development and Validation of an HPLC Method for Karanjin in Pongamia pinnata linn. Leaves.
Katekhaye, S; Kale, M S; Laddha, K S
2012-01-01
A rapid, simple and specific reversed-phase HPLC method has been developed for the analysis of karanjin in Pongamia pinnata Linn. leaves. HPLC analysis was performed on a C(18) column using an 85:13.5:1.5 (v/v) mixture of methanol, water and acetic acid as the isocratic mobile phase at a flow rate of 1 ml/min. UV detection was at 300 nm. The method was validated for accuracy, precision, linearity and specificity. Validation revealed that the method is specific, accurate, precise, reliable and reproducible. Good linear correlation coefficients (r(2)>0.997) were obtained for calibration plots in the ranges tested. The limit of detection was 4.35 μg and the limit of quantification was 16.56 μg. Intra- and inter-day RSDs of retention times and peak areas were less than 1.24%, and recovery was between 95.05 and 101.05%. The established HPLC method is appropriate, enabling efficient quantitative analysis of karanjin in Pongamia pinnata leaves.
Development and Validation of an HPLC Method for Karanjin in Pongamia pinnata linn. Leaves
Katekhaye, S; Kale, M. S.; Laddha, K. S.
2012-01-01
A rapid, simple and specific reversed-phase HPLC method has been developed for the analysis of karanjin in Pongamia pinnata Linn. leaves. HPLC analysis was performed on a C18 column using an 85:13.5:1.5 (v/v) mixture of methanol, water and acetic acid as the isocratic mobile phase at a flow rate of 1 ml/min. UV detection was at 300 nm. The method was validated for accuracy, precision, linearity and specificity. Validation revealed that the method is specific, accurate, precise, reliable and reproducible. Good linear correlation coefficients (r2>0.997) were obtained for calibration plots in the ranges tested. The limit of detection was 4.35 μg and the limit of quantification was 16.56 μg. Intra- and inter-day RSDs of retention times and peak areas were less than 1.24%, and recovery was between 95.05 and 101.05%. The established HPLC method is appropriate, enabling efficient quantitative analysis of karanjin in Pongamia pinnata leaves. PMID:23204626
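The linearity, LOD and LOQ figures reported in validations like the one above are typically derived from a least-squares calibration line. A generic sketch, not tied to this paper's data; the ICH-style formulas LOD = 3.3*sd/slope and LOQ = 10*sd/slope are a common convention assumed here:

```python
import math

def calibration_stats(conc, response):
    """Least-squares calibration line with r^2 for linearity, plus LOD/LOQ
    from 3.3*sd/slope and 10*sd/slope, where sd is the residual standard
    deviation of the fit (an ICH-style convention)."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(response) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, response))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(conc, response))
    ss_tot = sum((y - my) ** 2 for y in response)
    r2 = 1.0 - ss_res / ss_tot
    sd = math.sqrt(ss_res / (n - 2))  # residual standard deviation
    return slope, intercept, r2, 3.3 * sd / slope, 10.0 * sd / slope
```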
Grundy, H H; Reece, P; Buckley, M; Solazzo, C M; Dowle, A A; Ashford, D; Charlton, A J; Wadsley, M K; Collins, M J
2016-01-01
Gelatine is a component of a wide range of foods. It is manufactured as a by-product of the meat industry from bone and hide, mainly from bovine and porcine sources. Accurate food labelling enables consumers to make informed decisions about the food they buy. Since labelling currently relies heavily on due diligence involving a paper trail, there could be benefits for the consumer industries in developing a reliable test for the species origin of gelatine. We present a method to determine the species origin of gelatines by peptide mass spectrometry. An evaluative comparison is also made with ELISA and PCR technologies. Commercial gelatines were found to contain undeclared species. Furthermore, undeclared bovine peptides were observed in commercial injection matrices. This analytical method could therefore support the food industry in determining the species authenticity of gelatine in foods. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
Advanced Testing Method for Ground Thermal Conductivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xiaobing; Clemenzi, Rick; Liu, Su
A new method is developed that can quickly and more accurately determine the effective ground thermal conductivity (GTC) based on thermal response test (TRT) results. Ground thermal conductivity is an important parameter for sizing ground heat exchangers (GHEXs) used by geothermal heat pump systems. The conventional GTC test method usually requires a TRT for 48 hours with a very stable electric power supply throughout the entire test. In contrast, the new method reduces the required test time by 40%–60% or more, and it can determine GTC even with an unstable or intermittent power supply. Consequently, it can significantly reduce the cost of GTC testing and increase its use, which will enable optimal design of geothermal heat pump systems. Further, this new method provides more information about the thermal properties of the GHEX and the ground than previous techniques. It can verify the installation quality of GHEXs and has the potential, if developed, to characterize the heterogeneous thermal properties of the ground formation surrounding the GHEXs.
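Conventional TRT analysis, which the new method improves upon, commonly fits the late-time temperature response to the infinite line-source approximation T ≈ (q′/(4πk))·ln(t) + b, so the conductivity follows from the slope. A sketch of that baseline estimate, for illustration only; the paper's faster method is not reproduced here:

```python
import math

def conductivity_from_trt(times, temps, q_per_len):
    """Ground thermal conductivity from late-time TRT data via the infinite
    line-source approximation: T ~ (q'/(4*pi*k)) * ln(t) + b, hence
    k = q' / (4*pi*slope), with the slope from a least-squares fit of the
    mean fluid temperature against ln(t). q_per_len is heat rate per unit
    borehole length (W/m)."""
    x = [math.log(t) for t in times]
    n = len(x)
    mx = sum(x) / n
    my = sum(temps) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, temps))
             / sum((xi - mx) ** 2 for xi in x))
    return q_per_len / (4.0 * math.pi * slope)
```

The classic approach needs tens of hours of data with stable power precisely because the fit is only valid at late times and a drifting heat rate corrupts the slope, which is the limitation the abstract's method addresses.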
Mass Spectrometry Based Ultrasensitive DNA Methylation Profiling Using Target Fragmentation Assay.
Lin, Xiang-Cheng; Zhang, Ting; Liu, Lan; Tang, Hao; Yu, Ru-Qin; Jiang, Jian-Hui
2016-01-19
Efficient tools for profiling DNA methylation in specific genes are essential for epigenetics and clinical diagnostics. Current DNA methylation profiling techniques have been limited by inconvenient implementation, requirements for specific reagents, and inferior accuracy in quantifying the degree of methylation. We have developed a novel mass spectrometry method, the target fragmentation assay (TFA), which enables methylation profiling of specific sequences. This method combines selective capture of the DNA target from restriction cleavage of genomic DNA, using magnetic separation, with MS detection of the nonenzymatic hydrolysates of the target DNA. The method is shown to be highly sensitive, with a detection limit as low as 0.056 amol, allowing direct profiling of methylation from genomic DNA without preamplification. Moreover, it offers a unique advantage in accurately determining the DNA methylation level. The clinical applicability was demonstrated by DNA methylation analysis of prostate tissue samples, implying the potential of this method as a useful tool for DNA methylation profiling in the early detection of related diseases.
Focke, Felix; Haase, Ilka; Fischer, Markus
2011-01-26
Usually, spices are identified morphologically using simple tools such as magnifying glasses or microscopes. Molecular biological methods such as the polymerase chain reaction (PCR), on the other hand, enable accurate and specific detection even in complex matrices. Generally, the plants from which spices originate have diverse genetic backgrounds and relationships, and the processing methods used in spice production are complex and product-specific. Consequently, developing a reliable DNA-based method for spice analysis is challenging; once established, however, such a method can easily be adapted to less difficult food matrices. In the current study, several alternative methods for the isolation of DNA from spices were developed and evaluated in detail with regard to (i) purity (photometric methods), (ii) yield (fluorimetric methods), and (iii) amplifiability (PCR). Whole-genome amplification methods were used to preamplify isolates to improve the ratio between amplifiable DNA and inhibiting substances. Specific primer sets were designed, and the PCR conditions were optimized to detect 18 spices selectively. Assays of self-made spice mixtures were performed to prove the applicability of the developed methods.
Integrating complex business processes for knowledge-driven clinical decision support systems.
Kamaleswaran, Rishikesan; McGregor, Carolyn
2012-01-01
This paper presents in detail the component of the Complex Business Process for Stream Processing framework that is responsible for integrating complex business processes to enable knowledge-driven Clinical Decision Support System (CDSS) recommendations. CDSSs aid the clinician in supporting the care of patients by providing accurate data analysis and evidence-based recommendations. However, the incorporation of a dynamic knowledge-management system that supports the definition and enactment of complex business processes and real-time data streams has not been researched. In this paper we discuss the process web service as an innovative method of providing contextual information to a real-time data stream processing CDSS.
Quantitative phylogenetic assessment of microbial communities in diverse environments.
von Mering, C; Hugenholtz, P; Raes, J; Tringe, S G; Doerks, T; Jensen, L J; Ward, N; Bork, P
2007-02-23
The taxonomic composition of environmental communities is an important indicator of their ecology and function. We used a set of protein-coding marker genes, extracted from large-scale environmental shotgun sequencing data, to provide a more direct, quantitative, and accurate picture of community composition than that provided by traditional ribosomal RNA-based approaches depending on the polymerase chain reaction. Mapping marker genes from four diverse environmental data sets onto a reference species phylogeny shows that certain communities evolve faster than others. The method also enables determination of preferred habitats for entire microbial clades and provides evidence that such habitat preferences are often remarkably stable over time.
Donor acceptor electronic couplings in π-stacks: How many states must be accounted for?
NASA Astrophysics Data System (ADS)
Voityuk, Alexander A.
2006-04-01
The two-state model is commonly used to estimate the donor-acceptor electronic coupling Vda for electron transfer. However, in some important cases, e.g. for DNA π-stacks, this scheme fails to provide accurate values of Vda because of multistate effects. The Generalized Mulliken-Hush (GMH) method enables a multistate treatment of Vda. In this Letter, we analyze the dependence of the calculated electronic couplings on the number of adiabatic states included in the model and suggest a simple scheme to determine this number. The superexchange correction to the two-state approximation is shown to provide good estimates of the electronic coupling.
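In the two-state limit, the GMH coupling reduces to a closed-form expression in the adiabatic energy gap ΔE, the dipole-moment difference Δμ, and the transition dipole μ_tr. A direct transcription of that standard formula (the variable names are ours):

```python
import math

def gmh_coupling(delta_e, delta_mu, mu_tr):
    """Two-state Generalized Mulliken-Hush donor-acceptor coupling:
       V_da = |mu_tr| * dE / sqrt(dmu^2 + 4*mu_tr^2)
    with dE the adiabatic energy gap, dmu the adiabatic dipole-moment
    difference, and mu_tr the transition dipole (consistent units)."""
    return abs(mu_tr) * delta_e / math.sqrt(delta_mu ** 2 + 4.0 * mu_tr ** 2)
```

When Δμ = 0 the two states are fully mixed and V_da = ΔE/2, the familiar avoided-crossing limit; the multistate treatment discussed in the abstract generalizes this via a block diagonalization over more adiabatic states.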
NASA Astrophysics Data System (ADS)
Huang, Wei; Ma, Chengfu; Chen, Yuhang
2014-12-01
A method for simple and reliable displacement measurement with nanoscale resolution is proposed. The measurement is realized by combining conventional optical microscopy imaging of a specially coded nonperiodic microstructure, namely a two-dimensional zero-reference mark (2-D ZRM), with subsequent correlation analysis of the obtained image sequence. The autocorrelation peak contrast of the ZRM code is maximized with well-developed artificial intelligence algorithms, which enables robust and accurate displacement determination. To improve the resolution, subpixel image correlation analysis is employed. Finally, we experimentally demonstrate the quasi-static and dynamic displacement characterization ability of a micro 2-D ZRM.
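The correlation-with-subpixel-refinement step can be illustrated in one dimension: locate the cross-correlation peak, then refine it with a three-point parabolic fit. A toy sketch; the paper works with 2-D ZRM images and optimized codes, so the 1-D signals and the parabolic interpolator here are illustrative assumptions:

```python
import numpy as np

def subpixel_shift(ref, img):
    """Estimate the 1-D displacement of img relative to ref by cross-
    correlation, refined with a three-point parabolic fit around the
    correlation peak (a common subpixel interpolation scheme)."""
    corr = np.correlate(img - img.mean(), ref - ref.mean(), mode='full')
    k = int(np.argmax(corr))
    frac = 0.0
    if 0 < k < len(corr) - 1:
        c0, c1, c2 = corr[k - 1], corr[k], corr[k + 1]
        denom = c0 - 2.0 * c1 + c2
        if denom != 0.0:
            frac = 0.5 * (c0 - c2) / denom  # parabola vertex offset
    return (k - (len(ref) - 1)) + frac
```

The integer part comes from the argmax of the correlation; the fractional part is the vertex of the parabola through the peak and its two neighbours, which is what pushes the resolution below one pixel.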
Data mining in bioinformatics using Weka.
Frank, Eibe; Hall, Mark; Trigg, Len; Holmes, Geoffrey; Witten, Ian H
2004-10-12
The Weka machine learning workbench provides a general-purpose environment for automatic classification, regression, clustering and feature selection-common data mining problems in bioinformatics research. It contains an extensive collection of machine learning algorithms and data pre-processing methods complemented by graphical user interfaces for data exploration and the experimental comparison of different machine learning techniques on the same problem. Weka can process data given in the form of a single relational table. Its main objectives are to (a) assist users in extracting useful information from data and (b) enable them to easily identify a suitable algorithm for generating an accurate predictive model from it. http://www.cs.waikato.ac.nz/ml/weka.
Practical quantification of necrosis in histological whole-slide images.
Homeyer, André; Schenk, Andrea; Arlt, Janine; Dahmen, Uta; Dirsch, Olaf; Hahn, Horst K
2013-06-01
Since the histological quantification of necrosis is a common task in medical research and practice, we evaluate different image analysis methods for quantifying necrosis in whole-slide images. In a practical usage scenario, we assess the impact of different classification algorithms and feature sets on both accuracy and computation time. We show how a well-chosen combination of multiresolution features and an efficient postprocessing step enables the accurate quantification of necrosis in gigapixel images in less than a minute. The results are general enough to be applied to other areas of histological image analysis as well. Copyright © 2013 Elsevier Ltd. All rights reserved.
Electrical resistivity measurements on fragile organic single crystals in the diamond anvil cell
NASA Astrophysics Data System (ADS)
Adachi, T.; Tanaka, H.; Kobayashi, H.; Miyazaki, T.
2001-05-01
A method of sample assembly for four-probe resistivity measurements on fragile organic single crystals using a diamond anvil cell is presented. A procedure to keep insulation between the metal gasket and four leads of thin gold wires bonded to the sample crystal by gold paint is described in detail. The resistivity measurements performed on a single crystal of an organic semiconductor and that of neutral molecules up to 15 GPa and down to 4.2 K showed that this new procedure of four-probe diamond anvil resistivity measurements enables us to obtain sufficiently accurate resistivity data of organic crystals.
Fast and accurate de novo genome assembly from long uncorrected reads
Vaser, Robert; Sović, Ivan; Nagarajan, Niranjan
2017-01-01
The assembly of long reads from Pacific Biosciences and Oxford Nanopore Technologies typically requires resource-intensive error-correction and consensus-generation steps to obtain high-quality assemblies. We show that the error-correction step can be omitted and that high-quality consensus sequences can be generated efficiently with a SIMD-accelerated, partial-order alignment–based, stand-alone consensus module called Racon. Based on tests with PacBio and Oxford Nanopore data sets, we show that Racon coupled with miniasm enables consensus genomes with similar or better quality than state-of-the-art methods while being an order of magnitude faster. PMID:28100585
Kawata, Masaaki; Sato, Chikara
2007-06-01
In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large, representative dataset of two-dimensional (2D) average images drawn from a huge number of raw images is key to achieving high resolution. Because the alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. The newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 channel and the sodium channel. In every data set, the new method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.
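The multireference alignment at the heart of MRA-type schemes scores each raw image against every reference over all shifts and keeps the best match. A 1-D toy analogue of that inner loop; the statistical peak comparison and exclusion steps that distinguish MRMA are not reproduced, and the names and normalization are ours:

```python
import numpy as np

def best_alignment(raw, references):
    """Score a raw signal against several references by brute-force
    cross-correlation over all shifts; return (best_reference_index,
    shift, normalized_score). A 1-D toy analogue of multireference
    alignment of 2-D projection images."""
    best = (None, 0, -np.inf)
    r = raw - raw.mean()
    for i, ref in enumerate(references):
        f = ref - ref.mean()
        corr = np.correlate(r, f, mode='full')
        k = int(np.argmax(corr))
        shift = k - (len(ref) - 1)
        score = corr[k] / (np.linalg.norm(r) * np.linalg.norm(f))
        if score > best[2]:
            best = (i, shift, score)
    return best
```

MRMA's refinement, as described above, is to retain several such candidate peaks per raw image and statistically reject those that fall outside the dense cluster of correctly aligned coordinates, rather than trusting the single best score.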