Sample records for auto-calibrating partially parallel

  1. Simultaneous auto-calibration and gradient delays estimation (SAGE) in non-Cartesian parallel MRI using low-rank constraints.

    PubMed

    Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael

    2018-03-09

    To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank. This property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware across different axes and RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to higher rank and corrupted calibration information which hampers the reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix. The Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at a signal to noise ratio level as low as 5. The method is able to effectively remove artifacts resulting from gradient timing delays and restore image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.
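    The core idea above, picking the gradient delay that minimizes the rank of the calibration matrix, can be illustrated in a toy 1-D setting. The sketch below is illustrative only and is not the SAGE implementation: it models an inter-channel timing delay as a linear phase ramp on one channel and replaces the paper's Gauss-Newton solver with a grid search; all names, sizes, and signals are made up.

```python
import numpy as np

def calib_matrix(data, w=8):
    """Sliding windows of multichannel k-space data, channels side by side,
    stacked into a calibration matrix (as in GRAPPA/ESPIRiT calibration)."""
    nc, n = data.shape
    return np.array([np.concatenate([data[c, i:i + w] for c in range(nc)])
                     for i in range(n - w + 1)])

def rank_surrogate(data, keep=2):
    """Sum of trailing singular values: near zero when the calibration
    matrix is numerically low rank."""
    s = np.linalg.svd(calib_matrix(data), compute_uv=False)
    return s[keep:].sum()

# Toy data: a two-frequency signal seen by two channels; channel 2 carries
# an uncorrected timing delay, i.e. a linear phase ramp along k.
n = 64
k = np.arange(n)
sig = np.exp(2j * np.pi * 0.05 * k) + 0.6 * np.exp(2j * np.pi * 0.11 * k)
true_delay = 0.03
data = np.vstack([sig, sig * np.exp(2j * np.pi * true_delay * k)])

def corrected(c):
    """Apply a candidate delay correction to channel 2."""
    return np.vstack([data[0], data[1] * np.exp(-2j * np.pi * c * k)])

# Keep the candidate that minimizes the rank surrogate (SAGE uses
# Gauss-Newton; a grid search suffices for this 1-D toy).
cands = np.linspace(0.0, 0.06, 61)
best = cands[np.argmin([rank_surrogate(corrected(c)) for c in cands])]
print("estimated delay:", round(float(best), 3))  # -> close to 0.03
```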

  2. Parallel computing for automated model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, John S.; Danielson, Gary R.; Schulz, Douglas A.

    2002-07-29

    Natural resources model calibration is a significant burden on computing and staff resources in modeling efforts. Most assessments must consider multiple calibration objectives (for example, the magnitude and timing of stream flow peaks). An automated calibration process that allows real-time updating of data/models, freeing scientists to focus effort on improving models, is needed. We are in the process of building a fully featured multi-objective calibration tool capable of processing multiple models cheaply and efficiently using null cycle computing. Our parallel processing and calibration software routines have been written generically, but our focus has been on natural resources model calibration. So far, the natural resources models have been friendly to parallel calibration efforts in that they require no inter-process communication, need only a small amount of input data, and output only a small amount of statistical information for each calibration run. A typical auto-calibration run might involve running a model 10,000 times with a variety of input parameters and summary statistical output. In the past, model calibration has been done against individual models for each data set. The individual model runs are relatively fast, ranging from seconds to minutes. The process was run on a single computer using a simple iterative process. We have completed two auto-calibration prototypes and are currently designing a more feature-rich tool. Our prototypes have focused on running the calibration in a distributed, cross-platform computing environment. They allow incorporation of "smart" calibration parameter generation (using artificial intelligence processing techniques). Null cycle computing similar to SETI@Home has also been a focus of our efforts. This paper details the design of the latest prototype and discusses our plans for the next revision of the software.
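    Because the model runs above are independent and exchange no data, the calibration loop amounts to mapping a forward model over candidate parameter sets and keeping the best summary statistic. A minimal sketch, with a made-up exponential "model" and a local thread pool standing in for the distributed workers (a real deployment would farm runs out to idle machines):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def run_model(params):
    """Stand-in for one model run: returns the parameter set together with
    a summary statistic (sum of squared errors against 'observed' flow)."""
    a, b = params
    t = np.linspace(0.0, 1.0, 100)
    simulated = a * np.exp(-b * t)
    observed = 2.0 * np.exp(-3.0 * t)     # pretend field data
    return params, float(((simulated - observed) ** 2).sum())

# Candidate parameter sets; a real tool would generate these with a
# heuristic/AI search rather than random sampling.
rng = np.random.default_rng(0)
candidates = [tuple(p) for p in rng.uniform(0.5, 5.0, size=(200, 2))]

# Runs are fully independent (no inter-process communication), so a plain
# map over a worker pool is all the coordination needed.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_model, candidates))

best_params, best_sse = min(results, key=lambda r: r[1])
print("best (a, b):", tuple(round(v, 2) for v in best_params))
```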

  3. Multiway analysis methods applied to the fluorescence excitation-emission dataset for the simultaneous quantification of valsartan and amlodipine in tablets

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda

    2017-09-01

    In this study, excitation-emission matrix datasets, which have strongly overlapping bands, were processed using four different chemometric calibration algorithms: parallel factor analysis, Tucker3, three-way partial least squares, and unfolded partial least squares, for the simultaneous quantitative estimation of valsartan and amlodipine besylate in tablets. No preliminary separation step was used before applying the parallel factor analysis, Tucker3, three-way partial least squares, and unfolded partial least squares approaches to the analysis of the related drug substances in samples. A three-way excitation-emission matrix data array was obtained by concatenating excitation-emission matrices of the calibration set, validation set, and commercial tablet samples. The excitation-emission matrix data array was used to build the parallel factor analysis, Tucker3, three-way partial least squares, and unfolded partial least squares calibrations and to predict the amounts of valsartan and amlodipine besylate in samples. For all the methods, calibration and prediction of valsartan and amlodipine besylate were performed in the working concentration range of 0.25-4.50 μg/mL. The validity and the performance of all the proposed methods were checked using the validation parameters. From the analysis results, it was concluded that the described two-way and three-way algorithmic methods were very useful for the simultaneous quantitative resolution and routine analysis of the related drug substances in marketed samples.

  4. 3D hyperpolarized C-13 EPI with calibrationless parallel imaging

    NASA Astrophysics Data System (ADS)

    Gordon, Jeremy W.; Hansen, Rie B.; Shin, Peter J.; Feng, Yesu; Vigneron, Daniel B.; Larson, Peder E. Z.

    2018-04-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and temporal resolution. Calibrationless parallel imaging approaches are well-suited for this application because they eliminate the need to acquire coil profile maps or auto-calibration data. In this work, we explored the utility of a calibrationless parallel imaging method (SAKE) and corresponding sampling strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism.

  5. Quantitative metrics for evaluating parallel acquisition techniques in diffusion tensor imaging at 3 Tesla.

    PubMed

    Ardekani, Siamak; Selva, Luis; Sayre, James; Sinha, Usha

    2006-11-01

    Single-shot echo-planar based diffusion tensor imaging is prone to geometric and intensity distortions. Parallel imaging is a means of reducing these distortions while preserving spatial resolution. A quantitative comparison at 3 T of parallel imaging for diffusion tensor images (DTI) using k-space (generalized auto-calibrating partially parallel acquisitions; GRAPPA) and image domain (sensitivity encoding; SENSE) reconstructions at different acceleration factors, R, is reported here. Images were evaluated using 8 human subjects, with repeated scans for 2 subjects to estimate reproducibility. Mutual information (MI) was used to assess the global changes in geometric distortions. The effects of parallel imaging techniques on random noise and reconstruction artifacts were evaluated by placing 26 regions of interest and computing the standard deviation of the apparent diffusion coefficient and fractional anisotropy, along with the error of fitting the data to the diffusion model (residual error). The larger positive values in the mutual information index with increasing R values confirmed the anticipated decrease in distortions. Further, the MI index of GRAPPA sequences for a given R factor was larger than that of the corresponding mSENSE images. The residual error was lowest in the images acquired without parallel imaging and, among the parallel reconstruction methods, the R = 2 acquisitions had the least error. The standard deviation, accuracy, and reproducibility of the apparent diffusion coefficient and fractional anisotropy in homogeneous tissue regions showed that GRAPPA acquired with R = 2 had the least amount of systematic and random noise; of these, significant differences with mSENSE, R = 2 were found only for the fractional anisotropy index. Evaluation of the current implementation of parallel reconstruction algorithms identified GRAPPA acquired with R = 2 as optimal for diffusion tensor imaging.
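    The mutual information index used above to quantify geometric distortion can be estimated from a joint intensity histogram of two images. A minimal sketch (not the authors' pipeline; the images and the "distortion" below are synthetic):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information from the joint intensity histogram; higher MI
    indicates better geometric agreement between the two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image B
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# MI of an image with itself exceeds MI with a geometrically shifted copy.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
warped = np.roll(img, 3, axis=0)   # crude stand-in for geometric distortion
print(mutual_information(img, img) > mutual_information(img, warped))  # True
```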

  6. Simultaneous multi-slice combined with PROPELLER.

    PubMed

    Norbeck, Ola; Avventi, Enrico; Engström, Mathias; Rydén, Henric; Skare, Stefan

    2018-08-01

    Simultaneous multi-slice (SMS) imaging is an advantageous method for accelerating MRI scans, allowing reduced scan time, increased slice coverage, or high temporal resolution with limited image quality penalties. In this work we combine the advantages of SMS acceleration with the motion correction and artifact reduction capabilities of the PROPELLER technique. A PROPELLER sequence was developed with support for CAIPIRINHA and phase-optimized multiband radio frequency pulses. To minimize the time spent on acquiring calibration data, both in-plane generalized autocalibrating partially parallel acquisition (GRAPPA) and slice-GRAPPA weights for all PROPELLER blade angles were calibrated on a single fully sampled PROPELLER blade volume. Therefore, the proposed acquisition included a single fully sampled blade volume, with the remaining blades accelerated in both the phase and slice encoding directions without additional autocalibrating signal lines. Comparison to 3D RARE was performed, as well as demonstration of 3D motion correction performance on the SMS PROPELLER data. We show that PROPELLER acquisitions can be efficiently accelerated with SMS using a short embedded calibration. The potential in combining these two techniques was demonstrated with a high-quality 1.0 × 1.0 × 1.0 mm³ resolution T2-weighted volume, free from banding artifacts, and capable of 3D retrospective motion correction, with higher effective resolution compared to 3D RARE. With the combination of SMS acceleration and PROPELLER imaging, thin-sliced reformattable T2-weighted image volumes with 3D retrospective motion correction capabilities can be rapidly acquired with low sensitivity to flow and head motion. Magn Reson Med 80:496-506, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  7. Two-way and three-way approaches to ultra high performance liquid chromatography-photodiode array dataset for the quantitative resolution of a two-component mixture containing ciprofloxacin and ornidazole.

    PubMed

    Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda

    2016-09-01

    Two-way and three-way calibration models were applied to ultra high performance liquid chromatography with photodiode array data with coeluted peaks in the same wavelength and time regions for the simultaneous quantitation of ciprofloxacin and ornidazole in tablets. The chromatographic data cube (tensor) was obtained by recording chromatographic spectra of the standard and sample solutions containing ciprofloxacin and ornidazole with sulfadiazine as an internal standard as a function of time and wavelength. Parallel factor analysis and trilinear partial least squares were used as three-way calibrations for the decomposition of the tensor, whereas three-way unfolded partial least squares was applied as a two-way calibration to the unfolded dataset obtained from the data array of ultra high performance liquid chromatography with photodiode array detection. The validity and ability of two-way and three-way analysis methods were tested by analyzing validation samples: synthetic mixture, interday and intraday samples, and standard addition samples. Results obtained from two-way and three-way calibrations were compared to those provided by traditional ultra high performance liquid chromatography. The proposed methods, parallel factor analysis, trilinear partial least squares, unfolded partial least squares, and traditional ultra high performance liquid chromatography were successfully applied to the quantitative estimation of the solid dosage form containing ciprofloxacin and ornidazole. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Parallel auto-correlative statistics with VTK.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
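    For reference, the statistic the auto-correlative engine tabulates is the sample autocorrelation at a set of lags. A minimal serial sketch in Python (the VTK engines themselves are C++; their parallel versions aggregate independently computed per-block partial sums):

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation r(k) = cov(x_t, x_{t+k}) / var(x)
    for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    var = (x @ x) / n
    return np.array([(x[:n - k] @ x[k:]) / (n * var)
                     for k in range(max_lag + 1)])

# A sine with period 50: perfectly correlated with itself at lag 0, and
# strongly correlated again one full period later.
t = np.arange(500)
r = autocorrelation(np.sin(2 * np.pi * t / 50), max_lag=50)
print(round(r[0], 3), round(r[50], 3))  # -> 1.0 0.9
```

The lag-50 value is 0.9 rather than 1.0 because the unpadded sum covers only 450 of the 500 samples (450/500 = 0.9).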

  9. Multilevel Parallelization of AutoDock 4.2.

    PubMed

    Norgan, Andrew P; Coffman, Paul K; Kocher, Jean-Pierre A; Katzmann, David J; Sosa, Carlos P

    2011-04-28

    Virtual (computational) screening is an increasingly important tool for drug discovery. AutoDock is a popular open-source application for performing molecular docking, the prediction of ligand-receptor interactions. AutoDock is a serial application, though several previous efforts have parallelized various aspects of the program. In this paper, we report on a multi-level parallelization of AutoDock 4.2 (mpAD4). Using MPI and OpenMP, AutoDock 4.2 was parallelized for use on MPI-enabled systems and to multithread the execution of individual docking jobs. In addition, code was implemented to reduce input/output (I/O) traffic by reusing grid maps at each node from docking to docking. Performance of mpAD4 was examined on two multiprocessor computers. Using MPI with OpenMP multithreading, mpAD4 scales with near linearity on the multiprocessor systems tested. In situations where I/O is limiting, reuse of grid maps reduces both system I/O and overall screening time. Multithreading of AutoDock's Lamarckian Genetic Algorithm with OpenMP increases the speed of execution of individual docking jobs, and when combined with MPI parallelization can significantly reduce the execution time of virtual screens. This work is significant in that mpAD4 speeds the execution of certain molecular docking workloads and allows the user to optimize the degree of system-level (MPI) and node-level (OpenMP) parallelization to best fit both workloads and computational resources.

  10. Applications of New Surrogate Global Optimization Algorithms including Efficient Synchronous and Asynchronous Parallelism for Calibration of Expensive Nonlinear Geophysical Simulation Models.

    NASA Astrophysics Data System (ADS)

    Shoemaker, C. A.; Pang, M.; Akhtar, T.; Bindel, D.

    2016-12-01

    New parallel surrogate global optimization algorithms are developed and applied to objective functions that are expensive simulations (possibly with multiple local minima). The algorithms can be applied to most geophysical simulations, including those with nonlinear partial differential equations. The optimization does not require that the simulations themselves be parallelized. Asynchronous (and synchronous) parallel execution is available in the optimization toolbox "pySOT". The parallel algorithms are modified from their serial versions to eliminate fine-grained parallelism. The optimization is computed with the open-source software pySOT, a Surrogate Global Optimization Toolbox that allows the user to pick the type of surrogate (or ensembles of surrogates), the search procedure on the surrogate, and the type of parallelism (synchronous or asynchronous). pySOT also allows the user to develop new algorithms by modifying parts of the code. In the applications here, the objective function takes up to 30 minutes for one simulation, and serial optimization can take over 200 hours. Results from Yellowstone (NSF) and NCSS (Singapore) supercomputers are given for groundwater contaminant hydrology simulations with applications to model parameter estimation and decontamination management. All results are compared with alternatives. The first results are for optimization of pumping at many wells to reduce cost for decontamination of groundwater at a superfund site. The optimization runs with up to 128 processors. Superlinear speed-up is obtained for up to 16 processors, and efficiency with 64 processors is over 80%. Each evaluation of the objective function requires the solution of nonlinear partial differential equations to describe the impact of spatially distributed pumping and model parameters on model predictions for the spatial and temporal distribution of groundwater contaminants. The second application uses asynchronous parallel global optimization for groundwater quality model calibration. The time for a single objective function evaluation varies unpredictably, so asynchronous parallel calculations improve load balancing and efficiency. The third application (done at NCSS) incorporates new global surrogate multi-objective parallel search algorithms into pySOT and applies them to a large watershed calibration problem.

  11. Development of a generic auto-calibration package for regional ecological modeling and application in the Central Plains of the United States

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Li, Zhengpeng; Dahal, Devendra; Young, Claudia J.; Schmidt, Gail L.; Liu, Jinxun; Davis, Brian; Sohl, Terry L.; Werner, Jeremy M.; Oeding, Jennifer

    2014-01-01

    Process-oriented ecological models are frequently used for predicting potential impacts of global changes such as climate and land-cover changes, which can be useful for policy making. It is critical but challenging to automatically derive optimal parameter values at different scales, especially at the regional scale, and to validate model performance. In this study, we developed an automatic calibration (auto-calibration) function for a well-established biogeochemical model, the General Ensemble Biogeochemical Modeling System (GEMS)-Erosion Deposition Carbon Model (EDCM), using a data assimilation technique, the Shuffled Complex Evolution algorithm, and a model-inversion R package, the Flexible Modeling Environment (FME). The new functionality can support multi-parameter and multi-objective auto-calibration of EDCM at both the pixel and regional levels. We also developed a post-processing procedure for GEMS that provides options to save the pixel-based or aggregated county-land cover specific parameter values for subsequent simulations. In our case study, we successfully applied the updated model (EDCM-Auto) to a single crop pixel with a corn-wheat rotation and to a large ecological region (Level II), the Central USA Plains. The evaluation results indicate that EDCM-Auto is applicable at multiple scales and is capable of handling land cover changes (e.g., crop rotations). The model also performs well in capturing the spatial pattern of grain yield production for crops and net primary production (NPP) for other ecosystems across the region, a good example of implementing calibration and validation of ecological models with readily available survey data (grain yield) and remote sensing data (NPP) at regional and national levels. The developed platform for auto-calibration can be readily expanded to incorporate other model inversion algorithms and potential R packages, and can also be applied to other ecological models.

  12. Application of a hybrid MPI/OpenMP approach for parallel groundwater model calibration using multi-core computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach to exploit two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions in two applications: a reactive transport model and a field-scale coupled flow and transport model. In the reactive transport model, a single parallelizable loop is identified, using GPROF, to account for over 97% of the total computational time. Addition of a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop where Cray PAT detects over 90% cache miss rates. With this loop rewritten, a speedup similar to the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network, or on multiple compute nodes of a cluster as slaves using parallel PEST, to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
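    The parallelized Levenberg-Marquardt step described above distributes the finite-difference Jacobian columns, one extra forward solve each, across workers. A minimal sketch with a toy forward model and a local thread pool standing in for MPI ranks (the model, names, and sizes are made up):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def residuals(p):
    """Stand-in forward model; in HGC5 each call would solve the coupled
    nonlinear flow/transport PDEs."""
    t = np.linspace(0.0, 1.0, 50)
    observed = 1.5 * np.exp(-0.8 * t)      # synthetic observations
    return p[0] * np.exp(-p[1] * t) - observed

def jacobian_parallel(p, h=1e-6, workers=4):
    """Finite-difference Jacobian: one extra forward solve per parameter,
    evaluated concurrently (the step parallelized with MPI in the paper)."""
    r0 = residuals(p)
    def column(i):
        q = p.copy()
        q[i] += h
        return (residuals(q) - r0) / h
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return np.column_stack(list(ex.map(column, range(len(p)))))

# Damped Gauss-Newton (Levenberg-Marquardt) iterations from a rough guess.
p, lam = np.array([1.0, 1.0]), 1e-3
for _ in range(10):
    J, r = jacobian_parallel(p), residuals(p)
    p = p + np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
print("calibrated parameters:", np.round(p, 3))  # -> approx [1.5, 0.8]
```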

  13. StrAuto: automation and parallelization of STRUCTURE analysis.

    PubMed

    Chhatre, Vikram E; Emerson, Kevin J

    2017-03-24

    Population structure inference using the software STRUCTURE has become an integral part of population genetic studies covering a broad spectrum of taxa, including humans. The ever-expanding size of genetic data sets poses computational challenges for this analysis. Although at least one tool currently implements parallel computing to reduce the computational burden of this analysis, it does not fully automate the use of replicate STRUCTURE runs required for downstream inference of the optimal K. There is a pressing need for a tool that can deploy population structure analysis on high performance computing clusters. We present an updated version of the popular Python program StrAuto, to streamline population structure analysis using parallel computing. StrAuto implements a pipeline that combines STRUCTURE analysis with the Evanno ΔK analysis and visualization of results using STRUCTURE HARVESTER. Using benchmarking tests, we demonstrate that StrAuto significantly reduces the computational time needed to perform iterative STRUCTURE analysis by distributing runs over two or more processors. StrAuto is the first tool to integrate STRUCTURE analysis with post-processing using a pipeline approach, in addition to implementing parallel computation, a setup ideal for deployment on computing clusters. StrAuto is distributed under the GNU GPL (General Public License) and is available to download from http://strauto.popgen.org.
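    The post-processing step StrAuto automates, Evanno's ΔK over replicate runs, is a small computation once the parallel STRUCTURE runs have produced their log-likelihoods. A sketch with synthetic replicate values (not StrAuto's code): ΔK at each K is the second difference of the mean ln P(X|K), normalized by the replicate standard deviation.

```python
import numpy as np

def evanno_delta_k(lnP):
    """Evanno's ΔK from a dict mapping K -> list of ln P(X|K) over
    replicate STRUCTURE runs. Defined for interior K values only."""
    ks = sorted(lnP)
    means = {k: np.mean(lnP[k]) for k in ks}
    sds = {k: np.std(lnP[k], ddof=1) for k in ks}
    return {k: abs(means[k + 1] - 2 * means[k] + means[k - 1]) / sds[k]
            for k in ks[1:-1]}

# Synthetic replicates where the likelihood curve bends hardest at K=3.
lnP = {1: [-500, -502], 2: [-400, -401], 3: [-330, -331],
       4: [-325, -326], 5: [-321, -323]}
dk = evanno_delta_k(lnP)
print("optimal K:", max(dk, key=dk.get))  # -> 3
```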

  14. Parallel Reconstruction Using Null Operations (PRUNO)

    PubMed Central

    Zhang, Jian; Liu, Chunlei; Moseley, Michael E.

    2011-01-01

    A novel iterative k-space data-driven technique, namely Parallel Reconstruction Using Null Operations (PRUNO), is presented for parallel imaging reconstruction. In PRUNO, both data calibration and image reconstruction are formulated as linear algebra problems based on a generalized system model. An optimal data calibration strategy is demonstrated using singular value decomposition (SVD), and an iterative conjugate-gradient approach is proposed to efficiently solve for missing k-space samples during reconstruction. With its generalized formulation and precise mathematical model, PRUNO reconstruction yields good accuracy, flexibility, and stability. Both computer simulation and in vivo studies have shown that PRUNO produces much better reconstruction quality than generalized autocalibrating partially parallel acquisition (GRAPPA), especially at high acceleration rates. With the aid of PRUNO reconstruction, ultrahigh-acceleration parallel imaging can be performed with decent image quality. For example, we have done successful PRUNO reconstruction at a reduction factor of 6 (effective factor of 4.44) with 8 coils and only a few autocalibration signal (ACS) lines. PMID:21604290
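    The SVD-based calibration step can be illustrated on toy data: the right singular vectors of the calibration matrix with (near-)zero singular values act as "null operations" that annihilate every consistent window of multi-coil k-space data. A minimal sketch (not the PRUNO implementation; the data and sizes are synthetic):

```python
import numpy as np

def calibration_nullspace(acs, w=6, tol=1e-8):
    """SVD-based calibration: rows of A are sliding windows of the
    multi-coil ACS signal (coils concatenated); right singular vectors
    with tiny singular values span the null space."""
    nc, n = acs.shape
    A = np.array([np.concatenate([acs[c, i:i + w] for c in range(nc)])
                  for i in range(n - w + 1)])
    _, s, Vh = np.linalg.svd(A)
    # Conjugate so each returned kernel g satisfies (window) . g ~= 0.
    return Vh[s < tol * s[0]].conj()

# Toy two-coil ACS data whose calibration matrix is low rank (rank 2).
k = np.arange(48)
sig = np.exp(2j * np.pi * 0.04 * k) + 0.5 * np.exp(2j * np.pi * 0.09 * k)
acs = np.vstack([sig, 0.8 * sig])
N = calibration_nullspace(acs)

# Every null kernel annihilates any window of consistent data.
window = np.concatenate([acs[c, 10:16] for c in range(2)])
print(float(np.abs(N @ window).max()) < 1e-8)  # True
```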

  15. LORAKS Makes Better SENSE: Phase-Constrained Partial Fourier SENSE Reconstruction without Phase Calibration

    PubMed Central

    Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P.

    2016-01-01

    Purpose Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. Theory and Methods The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly-accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely-used calibrationless uniformly-undersampled trajectories. Results Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. Conclusion The SENSE-LORAKS framework provides promising new opportunities for highly-accelerated MRI. PMID:27037836

  16. Fast l₁-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime.

    PubMed

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-06-01

    We present l₁-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l₁-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l₁-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of l₁-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.
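    The iterative soft-thresholding step that minimizes the cross-channel joint-sparsity objective has a simple closed form: shrink each wavelet coefficient's joint (across-channel) magnitude while preserving per-channel phase. A minimal sketch of that proximal step alone (not the authors' code; the full l₁-SPIRiT iteration alternates this with SPIRiT consistency projections):

```python
import numpy as np

def joint_soft_threshold(coeffs, lam):
    """Joint soft-thresholding over channels (rows): shrink the per-
    coefficient magnitude across all channels by lam, clipping at zero,
    so jointly small coefficients vanish in every channel at once."""
    mag = np.sqrt((np.abs(coeffs) ** 2).sum(axis=0, keepdims=True))
    scale = np.maximum(1.0 - lam / np.maximum(mag, 1e-12), 0.0)
    return coeffs * scale

# Two channels sharing the same support: columns with small joint
# magnitude are zeroed across *both* channels simultaneously.
coeffs = np.array([[3.0, 0.1, -2.0, 0.05],
                   [4.0, 0.1,  0.0, 0.02]])
out = joint_soft_threshold(coeffs, lam=0.5)
print(out[:, 1], out[:, 3])  # -> [0. 0.] [0. 0.]
```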

  17. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    PubMed Central

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions. PMID:22345529

  18. Inductance analyzer based on auto-balanced circuit for precision measurement of fluxgate impedance

    NASA Astrophysics Data System (ADS)

    Setiadi, Rahmondia N.; Schilling, Meinhard

    2018-05-01

    An instrument for fluxgate sensor impedance measurement based on an auto-balanced circuit has been designed and characterized. The circuit design is adjusted to comply with the fluxgate sensor characteristics, which are low impedance and a highly saturable core with very high permeability. The system utilizes a NI-DAQ card and LabVIEW to process the signal acquisition and evaluation. Several fixed reference resistances are employed for system calibration using linear regression. A multimeter HP 34401A and impedance analyzer Agilent 4294A are used as calibrator and validator for the resistance and inductance measurements. Here, we realized a fluxgate analyzer instrument based on an auto-balanced circuit that measures the resistance and inductance of the device under test with small error, using a much lower excitation current than the calibrators to avoid core saturation.

  19. LORAKS makes better SENSE: Phase-constrained partial Fourier SENSE reconstruction without phase calibration.

    PubMed

    Kim, Tae Hyung; Setsompop, Kawin; Haldar, Justin P

    2017-03-01

    Parallel imaging and partial Fourier acquisition are two classical approaches for accelerated MRI. Methods that combine these approaches often rely on prior knowledge of the image phase, but the need to obtain this prior information can place practical restrictions on the data acquisition strategy. In this work, we propose and evaluate SENSE-LORAKS, which enables combined parallel imaging and partial Fourier reconstruction without requiring prior phase information. The proposed formulation is based on combining the classical SENSE model for parallel imaging data with the more recent LORAKS framework for MR image reconstruction using low-rank matrix modeling. Previous LORAKS-based methods have successfully enabled calibrationless partial Fourier parallel MRI reconstruction, but have been most successful with nonuniform sampling strategies that may be hard to implement for certain applications. By combining LORAKS with SENSE, we enable highly accelerated partial Fourier MRI reconstruction for a broader range of sampling trajectories, including widely used calibrationless uniformly undersampled trajectories. Our empirical results with retrospectively undersampled datasets indicate that when SENSE-LORAKS reconstruction is combined with an appropriate k-space sampling trajectory, it can provide substantially better image quality at high-acceleration rates relative to existing state-of-the-art reconstruction approaches. The SENSE-LORAKS framework provides promising new opportunities for highly accelerated MRI. Magn Reson Med 77:1021-1035, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  20. Auto-Calibration and Fault Detection and Isolation of Skewed Redundant Accelerometers in Measurement While Drilling Systems.

    PubMed

    Seyed Moosavi, Seyed Mohsen; Moaveni, Bijan; Moshiri, Behzad; Arvan, Mohammad Reza

    2018-02-27

    The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and executed auto-calibration, fault diagnosis and isolation of the accelerometers in this tool. The optimal structure, comprising four accelerometers, was selected and designed precisely in accordance with the physical shape of the existing MWD tool. A new four-accelerometer structure was designed, implemented and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration of the skewed redundant accelerometers, and of all combinations of three accelerometers, was performed. Consequently, the biases, scale factors, and misalignment factors of the accelerometers were successfully estimated. By introducing defects into sensors in the new optimal skewed redundant structure, the fault was detected using the proposed FDI method and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate with at least three correct sensors.
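
    A minimal sketch of how four skewed accelerometers provide both a fused measurement and a parity-based fault check is given below. The tetrahedral sensing geometry and the threshold are assumptions for illustration; fault isolation, as in the paper, requires additional logic (e.g., the combinations-of-three analysis), since a single parity relation by itself only detects inconsistency.

```python
import numpy as np

# Hypothetical skewed 4-accelerometer geometry: each row is a unit
# sensing axis (a tetrahedral layout, chosen only for illustration).
H = np.array([[ 1,  1,  1],
              [ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]], dtype=float) / np.sqrt(3.0)

H_pinv = np.linalg.pinv(H)      # least-squares fusion matrix
P = np.eye(4) - H @ H_pinv      # projector onto the parity space

def fuse(m):
    """Least-squares 3-axis acceleration from 4 redundant readings."""
    return H_pinv @ m

def fault_detected(m, tol=0.05):
    """Consistent readings lie in range(H), so the parity residual
    P @ m is (near) zero unless at least one sensor is faulty."""
    return float(np.linalg.norm(P @ m)) > tol
```

    With any one sensor removed, the remaining three axes still span 3-D space, which is why the system can keep operating on three correct sensors.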

  1. Auto-Calibration and Fault Detection and Isolation of Skewed Redundant Accelerometers in Measurement While Drilling Systems

    PubMed Central

    Seyed Moosavi, Seyed Mohsen; Moshiri, Behzad; Arvan, Mohammad Reza

    2018-01-01

    The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and executed auto-calibration, fault diagnosis and isolation of the accelerometers in this tool. The optimal structure, comprising four accelerometers, was selected and designed precisely in accordance with the physical shape of the existing MWD tool. A new four-accelerometer structure was designed, implemented and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration of the skewed redundant accelerometers, and of all combinations of three accelerometers, was performed. Consequently, the biases, scale factors, and misalignment factors of the accelerometers were successfully estimated. By introducing defects into sensors in the new optimal skewed redundant structure, the fault was detected using the proposed FDI method and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate with at least three correct sensors. PMID:29495434

  2. Enhancing the usability and performance of structured association mapping algorithms using automation, parallelization, and visualization in the GenAMap software system

    PubMed Central

    2012-01-01

    Background: Structured association mapping is proving to be a powerful strategy to find genetic polymorphisms associated with disease. However, these algorithms are often distributed as command-line implementations that require expertise and effort to customize and put into practice. Because of the difficulty required to use these cutting-edge techniques, geneticists often revert to simpler, less powerful methods. Results: To make structured association mapping more accessible to geneticists, we have developed an automatic processing system called Auto-SAM. Auto-SAM enables geneticists to run structured association mapping algorithms automatically, using parallelization. Auto-SAM includes algorithms to discover gene networks and find population structure. Auto-SAM can also run popular association mapping algorithms, in addition to five structured association mapping algorithms. Conclusions: Auto-SAM is available through GenAMap, a front-end desktop visualization tool. GenAMap and Auto-SAM are implemented in Java; binaries for GenAMap can be downloaded from http://sailing.cs.cmu.edu/genamap. PMID:22471660

  3. Non-motor tasks improve adaptive brain-computer interface performance in users with severe motor impairment

    PubMed Central

    Faller, Josef; Scherer, Reinhold; Friedrich, Elisabeth V. C.; Costa, Ursula; Opisso, Eloy; Medina, Josep; Müller-Putz, Gernot R.

    2014-01-01

    Individuals with severe motor impairment can use event-related desynchronization (ERD) based BCIs as assistive technology. Auto-calibrating and adaptive ERD-based BCIs that users control with motor imagery tasks (“SMR-AdBCI”) have proven effective for healthy users. We aim to find an improved configuration of such an adaptive ERD-based BCI for individuals with severe motor impairment as a result of spinal cord injury (SCI) or stroke. We hypothesized that an adaptive ERD-based BCI that automatically selects a user-specific class combination from motor-related and non-motor-related mental tasks during initial auto-calibration (“Auto-AdBCI”) could allow for higher control performance than a conventional SMR-AdBCI. To answer this question we performed offline analyses on two sessions (21 data sets total) of cue-guided, five-class electroencephalography (EEG) data recorded from individuals with SCI or stroke. On data from the twelve individuals in Session 1, we first identified three bipolar derivations for the SMR-AdBCI. In a similar way, we determined three bipolar derivations and four mental tasks for the Auto-AdBCI. We then simulated both the SMR-AdBCI and the Auto-AdBCI configurations on the unseen data from the nine participants in Session 2 and compared the results. On the unseen data of Session 2 from individuals with SCI or stroke, we found that automatically selecting a user-specific class combination from motor-related and non-motor-related mental tasks during initial auto-calibration (Auto-AdBCI) significantly (p < 0.01) improved classification performance compared to an adaptive ERD-based BCI that used only motor imagery tasks (SMR-AdBCI; average accuracy of 75.7 vs. 66.3%). PMID:25368546

  4. Remote gaze tracking system on a large display.

    PubMed

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-10-07

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways. First, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system must be large; the proposed system therefore includes two cameras that can be moved simultaneously by panning and tilting mechanisms: a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.

  5. Remote Gaze Tracking System on a Large Display

    PubMed Central

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-01-01

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways. First, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system must be large; the proposed system therefore includes two cameras that can be moved simultaneously by panning and tilting mechanisms: a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s. PMID:24105351

  6. Spelling is Just a Click Away - A User-Centered Brain-Computer Interface Including Auto-Calibration and Predictive Text Entry.

    PubMed

    Kaufmann, Tobias; Völker, Stefan; Gunesch, Laura; Kübler, Andrea

    2012-01-01

    Brain-computer interfaces (BCI) based on event-related potentials (ERP) allow for the selection of characters from a visually presented character-matrix and thus provide a communication channel for users with neurodegenerative disease. Although they have been a topic of research for more than 20 years and have repeatedly been proven to be a reliable communication method, BCIs are almost exclusively used in experimental settings, handled by qualified experts. This study investigates whether ERP-BCIs can be handled independently by laymen without expert support, which is essential for establishing BCIs in end-users' daily life situations. Furthermore, we compared the classic character-by-character text entry against a predictive text entry (PTE) that directly incorporates predictive text into the character-matrix. N = 19 BCI novices handled a user-centered ERP-BCI application on their own without expert support. The software individually adjusted classifier weights and control parameters in the background, invisible to the user (auto-calibration). All participants were able to operate the software on their own and to twice correctly spell a sentence with the auto-calibrated classifier (once with PTE, once without). Our PTE increased spelling speed and, importantly, did not reduce accuracy. In sum, this study demonstrates the feasibility of auto-calibrating ERP-BCI use by laymen, independently of expert support, and the strong benefit of integrating predictive text directly into the character-matrix.

  7. Spelling is Just a Click Away – A User-Centered Brain–Computer Interface Including Auto-Calibration and Predictive Text Entry

    PubMed Central

    Kaufmann, Tobias; Völker, Stefan; Gunesch, Laura; Kübler, Andrea

    2012-01-01

    Brain–computer interfaces (BCI) based on event-related potentials (ERP) allow for the selection of characters from a visually presented character-matrix and thus provide a communication channel for users with neurodegenerative disease. Although they have been a topic of research for more than 20 years and have repeatedly been proven to be a reliable communication method, BCIs are almost exclusively used in experimental settings, handled by qualified experts. This study investigates whether ERP–BCIs can be handled independently by laymen without expert support, which is essential for establishing BCIs in end-users' daily life situations. Furthermore, we compared the classic character-by-character text entry against a predictive text entry (PTE) that directly incorporates predictive text into the character-matrix. N = 19 BCI novices handled a user-centered ERP–BCI application on their own without expert support. The software individually adjusted classifier weights and control parameters in the background, invisible to the user (auto-calibration). All participants were able to operate the software on their own and to twice correctly spell a sentence with the auto-calibrated classifier (once with PTE, once without). Our PTE increased spelling speed and, importantly, did not reduce accuracy. In sum, this study demonstrates the feasibility of auto-calibrating ERP–BCI use by laymen, independently of expert support, and the strong benefit of integrating predictive text directly into the character-matrix. PMID:22833713

  8. Auto-Bäcklund transformations for a matrix partial differential equation

    NASA Astrophysics Data System (ADS)

    Gordoa, P. R.; Pickering, A.

    2018-07-01

    We derive auto-Bäcklund transformations, analogous to those of the matrix second Painlevé equation, for a matrix partial differential equation. We also then use these auto-Bäcklund transformations to derive matrix equations involving shifts in a discrete variable, a process analogous to the use of the auto-Bäcklund transformations of the matrix second Painlevé equation to derive a discrete matrix first Painlevé equation. The equations thus derived then include amongst other examples a semidiscrete matrix equation which can be considered to be an extension of this discrete matrix first Painlevé equation. The application of this technique to the auto-Bäcklund transformations of the scalar case of our partial differential equation has not been considered before, and so the results obtained here in this scalar case are also new. Other equations obtained here using this technique include a scalar semidiscrete equation which arises in the case of the second Painlevé equation, and which does not seem to have been thus derived previously.

  9. Data consistency criterion for selecting parameters for k-space-based reconstruction in parallel imaging.

    PubMed

    Nana, Roger; Hu, Xiaoping

    2010-01-01

    k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
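
    The data consistency error defined above can be sketched for a toy one-dimensional, single-channel k-space, with a two-tap interpolation kernel standing in for a GRAPPA kernel; the kernel and data here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def data_consistency_error(kspace, mask, kernel):
    """DCE: sum of squared differences between acquired samples and
    their estimates interpolated from neighbouring data.

    kspace: reconstructed (fully filled) 1-D k-space, complex
    mask:   boolean, True where a sample was actually acquired
    kernel: (w_left, w_right) weights estimating a sample from its
            two immediate neighbours (stand-in for a GRAPPA kernel)
    """
    wl, wr = kernel
    # Re-estimate every interior sample from its neighbours ...
    est = wl * kspace[:-2] + wr * kspace[2:]
    acq = mask[1:-1]
    # ... and compare only where data were truly acquired.
    diff = kspace[1:-1][acq] - est[acq]
    return float(np.sum(np.abs(diff) ** 2))
```

    A shift-invariant kernel that is consistent with the acquired data yields a small DCE, so candidate kernel supports or calibration settings can be ranked by this value.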

  10. Prototype of an auto-calibrating, context-aware, hybrid brain-computer interface.

    PubMed

    Faller, J; Torrellas, S; Miralles, F; Holzner, C; Kapeller, C; Guger, C; Bund, J; Müller-Putz, G R; Scherer, R

    2012-01-01

    We present the prototype of a context-aware framework that allows users to control smart home devices and to access internet services via a Hybrid BCI system of an auto-calibrating sensorimotor rhythm (SMR) based BCI and another assistive device (Integra Mouse mouth joystick). While there is extensive literature that describes the merit of Hybrid BCIs, auto-calibrating and co-adaptive ERD BCI training paradigms, specialized BCI user interfaces, context-awareness and smart home control, there is up to now, no system that includes all these concepts in one integrated easy-to-use framework that can truly benefit individuals with severe functional disabilities by increasing independence and social inclusion. Here we integrate all these technologies in a prototype framework that does not require expert knowledge or excess time for calibration. In a first pilot-study, 3 healthy volunteers successfully operated the system using input signals from an ERD BCI and an Integra Mouse and reached average positive predictive values (PPV) of 72 and 98% respectively. Based on what we learned here we are planning to improve the system for a test with a larger number of healthy volunteers so we can soon bring the system to benefit individuals with severe functional disability.

  11. Procedure for calibrating the Technicon Colorimeter I.

    PubMed

    Black, J C; Furman, W B

    1975-05-01

    We describe a rapid method for calibrating the Technicon AutoAnalyzer colorimeter I. Test solutions of bromphenol blue are recommended for the calibration, in preference to solutions of potassium dichromate, based on considerations of the instrument's working range and of the stray light characteristics of the associated filters.

  12. Exploiting Auto-Collimation for Real-Time Onboard Monitoring of Space Optical Camera Geometric Parameters

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, H.; Liu, D.; Miu, Y.

    2018-05-01

    Precise geometric parameters are essential to ensure the positioning accuracy of space optical cameras. However, state-of-the-art on-orbit calibration methods inevitably suffer from long update cycles and poor timeliness. To this end, in this paper we exploit the optical auto-collimation principle and propose a real-time onboard calibration scheme for monitoring key geometric parameters. Specifically, in the proposed scheme, auto-collimation devices are first designed by installing collimated light sources, area-array CCDs, and prisms inside the satellite payload system. Through these devices, changes in the geometric parameters are elegantly converted into changes in the spot image positions. The variation of the geometric parameters can be derived by extracting and processing the spot images. An experimental platform is then set up to verify the feasibility and analyze the precision of the proposed scheme. The experimental results demonstrate that it is feasible to apply the optical auto-collimation principle for real-time onboard monitoring.
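
    The conversion at the heart of the scheme — from a measured spot displacement to an angular change — follows from the auto-collimation relation d = 2·f·θ: a mirror tilt θ deflects the reflected beam by 2θ, which the collimating optics focus to a spot shift d for focal length f. The numbers below are hypothetical.

```python
import math

def tilt_from_spot_shift(shift_mm, focal_mm):
    """Invert the auto-collimation relation d = 2 * f * theta to
    recover the tilt angle (radians) from the spot displacement."""
    return shift_mm / (2.0 * focal_mm)

# Hypothetical numbers: 500 mm collimator focal length, 10 um pixels,
# spot moved by 3.2 pixels on the area-array CCD.
shift_mm = 3.2 * 0.010
theta = tilt_from_spot_shift(shift_mm, 500.0)   # radians
arcsec = math.degrees(theta) * 3600.0           # ~6.6 arcsec
```

    The long focal length leverages a tiny angular change into a measurable spot shift, which is what makes sub-arcsecond monitoring feasible with an ordinary area-array CCD.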

  13. Auto-Restricted Zone Demonstration in Memphis, TN

    DOT National Transportation Integrated Search

    1988-03-01

    The Service and Methods Demonstration conducted in downtown Memphis was one of 4 parallel demonstrations involving downtown auto-restricted zones (ARZs). Unlike the other three demonstrations (Boston, New York, and Providence), the Memphis ARZ was al...

  14. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    USDA-ARS?s Scientific Manuscript database

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  15. 77 FR 39767 - Self-Regulatory Organizations; National Stock Exchange, Inc.; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-05

    ... has executed on average per trading day (excluding partial trading days) in AutoEx or Order Delivery... (``AutoEx'') shall mean only those executed shares of the ETP Holder that are submitted in AutoEx mode... period, a combined ADV in both AutoEx and Order Delivery of at least 11.5 million shares, of which at...

  16. ArF scanner performance improvement by using track integrated CD optimization

    NASA Astrophysics Data System (ADS)

    Huang, Jacky; Yu, Shinn-Sheng; Ke, Chih-Ming; Wu, Timothy; Wang, Yu-Hsi; Gau, Tsai-Sheng; Wang, Dennis; Li, Allen; Yang, Wenge; Kaoru, Araki

    2006-03-01

    In advanced semiconductor processing, shrinking CD is one of the main objectives when moving to the next-generation technology node. Improving CD uniformity (CDU) while shrinking CD is one of the biggest challenges. From ArF lithography CD error budget analysis, PEB (post-exposure bake) contributes more than 40% of CD variation. It turns out that hot plate performance, such as CD matching and within-plate temperature control, plays a key role in litho cell wafers per hour (WPH). Traditionally, wired or wireless thermal sensor wafers were used to match and optimize hot plates. However, sensor-to-sensor matching and sensor data quality as a function of sensor lifetime or thermal history are still unknown. These concerns make sensor wafers more suitable for coarse mean-temperature adjustment. For precise temperature adjustment, especially within-hot-plate temperature uniformity, CD rather than sensor wafer temperature is a better and more straightforward metrology for calibrating hot plates. In this study, we evaluated TEL clean track integrated optical CD metrology (IM) combined with TEL CD Optimizer (CDO) software to improve 193-nm resist within-wafer and wafer-to-wafer CD uniformity. Within-wafer CD uniformity is mainly affected by temperature non-uniformity on the PEB hot plate. Based on the CD and PEB sensitivity of photoresists, a physical model has been established to control CD uniformity through fine-tuning of the PEB temperature settings. CD data collected by the track-integrated CD metrology are fed into this model, and the adjustment of the PEB settings is calculated and executed through the track's internal APC system. This auto-measurement, auto-feed-forward, auto-calibration and auto-adjustment system reduces engineer key-in errors and shortens the hot plate calibration cycle time. This PEB auto-calibration system can easily bring hot-plate-to-hot-plate CD matching to within 0.5 nm and within-wafer CDU (3σ) to less than 1.5 nm.
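
    In its simplest linear form, the CD-to-temperature feedback described above divides each zone's CD error by the resist's PEB sensitivity. The sketch below uses invented CD readings and an assumed sensitivity; the actual CDO model and APC interface are not public.

```python
import numpy as np

# Hypothetical per-zone CD readings (nm) from the track-integrated CD
# metrology, a target CD, and an assumed resist PEB sensitivity
# (nm of CD change per degree C; negative: hotter PEB -> smaller CD).
cd_measured = np.array([90.8, 90.2, 89.6, 90.5, 89.9])
cd_target   = 90.0
sensitivity = -2.0  # nm per degC (assumed)

# Zone temperature offsets fed back to the hot plate:
# delta_T = (CD_target - CD_measured) / sensitivity
delta_T = (cd_target - cd_measured) / sensitivity
```

    Under this linear model, applying the offsets drives every zone's predicted CD exactly to target (cd_measured + sensitivity * delta_T equals cd_target); in practice the loop iterates because the real CD response is only locally linear.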

  17. Smart System for Bicarbonate Control in Irrigation for Hydroponic Precision Farming

    PubMed Central

    Cambra, Carlos; Lacuesta, Raquel

    2018-01-01

    Improving sustainability in agriculture is an important challenge nowadays. The automation of irrigation processes via low-cost sensors can help spread technological advances in a sector strongly influenced by economic costs. This article presents an auto-calibrated pH sensor able to detect and adjust imbalances in the pH levels of the nutrient solution used in hydroponic agriculture. The sensor is composed of a pH probe and a set of micropumps that sequentially pour the different liquid solutions used to maintain the sensor calibration, as well as the water samples from the channels that contain the nutrient solution. To implement our architecture, we use an auto-calibrated pH sensor connected to a wireless node. Several nodes compose the wireless sensor network (WSN) that controls our greenhouse. The sensors periodically measure the pH level of each hydroponic support and send the information to a database (DB), which stores and analyzes the data to warn farmers about the measurements. The data can then be accessed through a user-friendly web-based interface that can be reached over the Internet using desktop or mobile devices. This paper also shows the design and test bench for both the auto-calibrated pH sensor and the wireless network to check their correct operation. PMID:29693611
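
    At its core, the auto-calibration the micropumps enable is a two-point calibration of the pH electrode against buffer solutions. A minimal sketch, assuming standard pH 7 and pH 4 buffers and an ideal Nernstian electrode response (the specific buffers and voltages are assumptions, not the paper's values):

```python
def two_point_ph_calibration(v_buf7, v_buf4):
    """Build a voltage-to-pH mapping from electrode voltages (volts)
    measured in pH 7 and pH 4 buffers, assuming the linear response
    pH = 7 + (v - v_buf7) / slope."""
    slope = (v_buf4 - v_buf7) / (4.0 - 7.0)  # volts per pH unit
    def to_ph(v):
        return 7.0 + (v - v_buf7) / slope
    return to_ph

# Ideal electrode at 25 C: about -59.16 mV per pH unit, 0 V at pH 7
to_ph = two_point_ph_calibration(v_buf7=0.0, v_buf4=0.17748)
```

    Re-running this routine each time the micropumps pour fresh buffer keeps the slope and offset current as the electrode ages.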

  18. Smart System for Bicarbonate Control in Irrigation for Hydroponic Precision Farming.

    PubMed

    Cambra, Carlos; Sendra, Sandra; Lloret, Jaime; Lacuesta, Raquel

    2018-04-25

    Improving sustainability in agriculture is an important challenge nowadays. The automation of irrigation processes via low-cost sensors can help spread technological advances in a sector strongly influenced by economic costs. This article presents an auto-calibrated pH sensor able to detect and adjust imbalances in the pH levels of the nutrient solution used in hydroponic agriculture. The sensor is composed of a pH probe and a set of micropumps that sequentially pour the different liquid solutions used to maintain the sensor calibration, as well as the water samples from the channels that contain the nutrient solution. To implement our architecture, we use an auto-calibrated pH sensor connected to a wireless node. Several nodes compose the wireless sensor network (WSN) that controls our greenhouse. The sensors periodically measure the pH level of each hydroponic support and send the information to a database (DB), which stores and analyzes the data to warn farmers about the measurements. The data can then be accessed through a user-friendly web-based interface that can be reached over the Internet using desktop or mobile devices. This paper also shows the design and test bench for both the auto-calibrated pH sensor and the wireless network to check their correct operation.

  19. A framework for propagation of uncertainty contributed by parameterization, input data, model structure, and calibration/validation data in watershed modeling

    USDA-ARS?s Scientific Manuscript database

    The progressive improvement of computer science and the development of auto-calibration techniques mean that calibration of simulation models is no longer a major challenge for watershed planning and management. Modelers now increasingly focus on challenges such as improved representation of watershed...

  20. Taming parallel I/O complexity with auto-tuning

    DOE PAGES

    Behzad, Babak; Luu, Huong Vu Thanh; Huchette, Joseph; ...

    2013-11-17

    We present an auto-tuning system for optimizing I/O performance of HDF5 applications and demonstrate its value across platforms, applications, and at scale. The system uses a genetic algorithm to search a large space of tunable parameters and to identify effective settings at all layers of the parallel I/O stack. The parameter settings are applied transparently by the auto-tuning system via dynamically intercepted HDF5 calls. To validate our auto-tuning system, we applied it to three I/O benchmarks (VPIC, VORPAL, and GCRM) that replicate the I/O activity of their respective applications. We tested the system with different weak-scaling configurations (128, 2048, and 4096 CPU cores) that generate 30 GB to 1 TB of data, and executed these configurations on diverse HPC platforms (Cray XE6, IBM BG/P, and Dell Cluster). In all cases, the auto-tuning framework identified tunable parameters that substantially improved write performance over default system settings. In conclusion, we consistently demonstrate I/O write speedups between 2x and 100x for test configurations.
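
    The genetic search over tunable parameters can be sketched as follows. The parameter space, the mock benchmark standing in for an actual I/O run, and the GA settings are all illustrative assumptions, not the paper's.

```python
import random

# Illustrative tunable-parameter space, loosely modelled on parallel
# I/O stack knobs (names and values are assumptions).
SPACE = {
    "stripe_count":   [4, 8, 16, 32],
    "stripe_size_mb": [1, 4, 16, 64],
    "cb_nodes":       [1, 2, 4, 8],
}

def mock_bandwidth(cfg):
    """Stand-in for running an I/O benchmark with configuration cfg
    and measuring write bandwidth (higher is better)."""
    return cfg["stripe_count"] * cfg["stripe_size_mb"] / (1 + abs(cfg["cb_nodes"] - 4))

def random_cfg(rng):
    return {k: rng.choice(v) for k, v in SPACE.items()}

def mutate(cfg, rng):
    child = dict(cfg)
    k = rng.choice(list(SPACE))
    child[k] = rng.choice(SPACE[k])  # re-draw one knob at random
    return child

def ga_search(generations=20, pop_size=8, seed=0):
    rng = random.Random(seed)
    pop = [random_cfg(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mock_bandwidth, reverse=True)
        survivors = pop[: pop_size // 2]           # elitist selection
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=mock_bandwidth)

best = ga_search()
```

    In the real system each fitness evaluation is an actual benchmark run, so the GA's sample efficiency (dozens of runs instead of an exhaustive sweep) is the point of the approach.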

  1. A Dynamic Range Enhanced Readout Technique with a Two-Step TDC for High Speed Linear CMOS Image Sensors.

    PubMed

    Gao, Zhiyuan; Yang, Congjie; Xu, Jiangtao; Nie, Kaiming

    2015-11-06

    This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high-speed linear CMOS image sensors. A multi-capacitor, self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is auto-adjusted by asynchronously switching different capacitors onto the integration node according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate. The conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within -T(clk)~+T(clk). A linear CMOS image sensor pixel array was designed in a 0.13 μm CMOS process to verify this DR-enhanced high-speed readout technique. The post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and the ADC achieves 60.22 dB SNDR and 9.71 bit ENOB at a conversion rate of 2 MS/s after calibration, an improvement of 14.04 dB and 2.4 bit over the SNDR and ENOB without calibration.
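
    The coarse/fine split of a two-step TDC can be illustrated with an idealized behavioural model: a counter resolves whole clock periods and the fine stage quantizes the sub-clock residue. This is a sketch of the general technique, not the paper's circuit.

```python
def two_step_tdc(t, t_clk=1.0, fine_bits=4):
    """Idealized two-step time-to-digital conversion: the coarse
    stage counts whole clock periods, the fine stage quantizes the
    residue into 2**fine_bits sub-clock bins."""
    coarse = int(t // t_clk)
    residue = t - coarse * t_clk
    fine = int(residue / t_clk * (1 << fine_bits))
    return coarse, fine

def tdc_to_time(coarse, fine, t_clk=1.0, fine_bits=4):
    """Reconstruct time from the (coarse, fine) code pair, placing
    the estimate at the centre of the fine bin."""
    return coarse * t_clk + (fine + 0.5) * t_clk / (1 << fine_bits)
```

    In the ideal model the residual error is bounded by half a fine bin, t_clk / 2**(fine_bits + 1); in hardware, delay skew between the two stages breaks this bound, which is what the paper's calibration scheme corrects.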

  2. Polymer Coatings Degradation Properties

    DTIC Science & Technology

    1985-02-01

    undertaken (24). The Box-Jenkins approach first evaluates the partial auto-correlation function and determines the order of the moving average memory function... Tables 15 and 16 show the results of the partial auto-correlation plots. Second-order moving averages with the appropriate lags were... coated films. Kaempf, Guenter; Papenroth, Wolfgang; Kunststoffe, 1982, Volume 72, Number 7, Pages 424-429. Parameters influencing the accelerated

  3. Design and realization of photoelectric instrument binocular optical axis parallelism calibration system

    NASA Astrophysics Data System (ADS)

    Ying, Jia-ju; Chen, Yu-dan; Liu, Jie; Wu, Dong-sheng; Lu, Jun

    2016-10-01

    Maladjustment of the binocular optical axis parallelism of a photoelectric instrument directly affects the observation result. A binocular optical axis parallelism digital calibration system is designed. On the basis of the calibration principle for the binocular optical axes of photoelectric instruments, the system scheme is designed and the digital calibration system is realized, comprising four modules: a multiband parallel light tube, optical axis translation, an image acquisition system, and a software system. According to the different characteristics of thermal infrared imagers and low-light-level night viewers, different algorithms are used to localize the center of the cross reticle. Binocular optical axis parallelism calibration is thus realized for both low-light-level night viewers and thermal infrared imagers.

  4. Auto-calibration of GF-1 WFV images using flat terrain

    NASA Astrophysics Data System (ADS)

    Zhang, Guo; Xu, Kai; Huang, Wenchao

    2017-12-01

    Four wide field view (WFV) cameras with 16-m multispectral medium resolution and a combined swath of 800 km are onboard the Gaofen-1 (GF-1) satellite, which can increase the revisit frequency to less than 4 days and enable large-scale land monitoring. The detection and elimination of WFV camera distortions is key for subsequent applications. Due to the wide swath of WFV images, geometric calibration using either conventional methods based on a ground control field (GCF) or GCF-independent methods is problematic. This is predominantly because current GCFs in China fail to cover a whole WFV image, and most GCF-independent methods are designed for close-range photogrammetry or computer vision. This study proposes an auto-calibration method that uses flat terrain to detect nonlinear distortions in GF-1 WFV images. First, a classic geometric calibration model is built for the GF-1 WFV camera and at least two overlapping images covering flat terrain are collected; the residuals between the real elevation and that calculated by forward intersection are then used to solve for the nonlinear distortion parameters of the WFV images. Experiments demonstrate that the orientation accuracy of the proposed method, evaluated with GCF check points (CPs), is within 0.6 pixel, and the residual errors manifest as random errors. Validation using Google Earth CPs further proves the effectiveness of the auto-calibration, and the whole scene is undistorted compared with results obtained without the calibration parameters. The orientation accuracies of the proposed method and the GCF method are compared; the maximum difference is approximately 0.3 pixel, and the factors behind this discrepancy are analyzed. Generally, this method can effectively compensate for distortions in the GF-1 WFV camera.

  5. How Does Higher Frequency Monitoring Data Affect the Calibration of a Process-Based Water Quality Model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, L.

    2014-12-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, but even in well-studied catchments, streams are often only sampled at a fortnightly or monthly frequency. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by one process-based catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the MCMC-DREAM algorithm. Using daily rather than fortnightly data resulted in improved simulation of the magnitude of peak TDP concentrations, in turn resulting in improved model performance statistics. Marginal posteriors were better constrained by the higher frequency data, resulting in a large reduction in parameter-related uncertainty in simulated TDP (the 95% credible interval decreased from 26 to 6 μg/l). The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, leading to the recommendation that parameters should not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or there is a real need to do so for the model to fulfil its purpose. 
Secondary study aims were to highlight the subjective elements involved in auto-calibration and suggest practical improvements that could make models such as INCA-P more suited to auto-calibration and uncertainty analyses. Two key improvements include model simplification, so that all model parameters can be included in an analysis of this kind, and better documenting of recommended ranges for each parameter, to help in choosing sensible priors.
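    The credible-interval comparison above can be illustrated with a minimal sketch. The samples below are synthetic stand-ins with assumed spreads, not INCA-P posterior output: the 95% credible interval is simply the 2.5th-97.5th percentile band of posterior predictions.

```python
import numpy as np

# Sketch (not INCA-P itself): compute a 95% credible interval width from
# posterior predictive samples of TDP concentration. The spreads are assumed
# numbers chosen to mimic the reported narrowing from ~26 to ~6 ug/l.
rng = np.random.default_rng(1)
tdp_fortnightly = rng.normal(20.0, 6.5, 5000)   # loosely constrained posterior
tdp_daily = rng.normal(20.0, 1.5, 5000)         # better-constrained posterior

def credible_interval_width(samples, level=0.95):
    lo, hi = np.percentile(samples, [(1 - level) / 2 * 100,
                                     (1 + level) / 2 * 100])
    return hi - lo

print(credible_interval_width(tdp_fortnightly))  # wide band
print(credible_interval_width(tdp_daily))        # much narrower band
```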

  6. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    PubMed

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging data, which can be formulated as a regularized SPIRiT problem. The projection over convex sets (POCS) method has been used to solve the regularized SPIRiT problem, but the quality of the reconstructed image still needs improvement. Methods such as nonlinear conjugate gradients (NLCG) can achieve higher spatial resolution but demand complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, the latter solved by a proposed split Bregman based denoising algorithm, and adopts the Barzilai-Borwein method to update the step size. Experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for data sets with 8 channels and 2 times faster for the data set with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.
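    The Barzilai-Borwein step-size rule mentioned above can be sketched on a toy quadratic (not the SPIRiT objective): the step is the ratio s·s / s·y, where s is the change in iterates and y the change in gradients.

```python
import numpy as np

# Minimal sketch of the Barzilai-Borwein (BB1) step-size update used inside a
# gradient method, demonstrated on a toy quadratic f(x) = 0.5 x^T A x rather
# than the regularized SPIRiT problem.
def bb_step(x_prev, x_curr, g_prev, g_curr):
    s = x_curr - x_prev          # change in iterates
    y = g_curr - g_prev          # change in gradients
    return float(s @ s) / float(s @ y)

A = np.diag([1.0, 10.0])         # toy ill-conditioned quadratic
def grad(x):
    return A @ x

x_prev = np.array([1.0, 1.0])
x = x_prev - 0.1 * grad(x_prev)  # one fixed-step iteration to initialize
for _ in range(20):
    g = grad(x)
    if np.linalg.norm(g) < 1e-12:
        break                    # converged; avoids a 0/0 step size
    alpha = bb_step(x_prev, x, grad(x_prev), g)
    x_prev, x = x, x - alpha * g

print(np.linalg.norm(x))         # approaches the minimizer at the origin
```

    The appeal of the BB rule, as in the paper's setting, is that it captures curvature information from successive gradients at the cost of one inner product, without line searches.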

  7. Auto-calibration of a one-dimensional hydrodynamic-ecological model using a Monte Carlo approach: simulation of hypoxic events in a polymictic lake

    NASA Astrophysics Data System (ADS)

    Luo, L.

    2011-12-01

    Automated calibration of complex deterministic water quality models with a large number of biogeochemical parameters can reduce time-consuming iterative simulations involving empirical judgements of model fit. We undertook auto-calibration of the one-dimensional hydrodynamic-ecological lake model DYRESM-CAEDYM, using a Monte Carlo sampling (MCS) method, in order to test the applicability of this procedure for shallow, polymictic Lake Rotorua (New Zealand). The calibration procedure involved independently minimising the root-mean-square error (RMSE) and maximizing the Pearson correlation coefficient (r) and Nash-Sutcliffe efficiency coefficient (Nr) for comparisons of model state variables against measured data. An assigned number of parameter permutations was used for 10,000 simulation iterations. The 'optimal' temperature calibration produced an RMSE of 0.54 °C, an Nr-value of 0.99 and an r-value of 0.98 through the whole water column, based on comparisons with 540 observed water temperatures collected between 13 July 2007 and 13 January 2009. The modeled bottom dissolved oxygen concentration (20.5 m below surface) was compared with 467 available observations. The calculated RMSE of the simulations compared with the measurements was 1.78 mg L-1, the Nr-value was 0.75 and the r-value was 0.87. The auto-calibrated model was further tested against an independent data set by simulating bottom-water hypoxia events for the period 15 January 2009 to 8 June 2011 (875 days). This verification produced an accurate simulation of five hypoxic events corresponding to DO < 2 mg L-1 during the summers of 2009-2011. The RMSE was 2.07 mg L-1, the Nr-value 0.62 and the r-value 0.81, based on the available data set of 738 days. The auto-calibration software for DYRESM-CAEDYM developed here is substantially less time-consuming and more efficient in parameter optimisation than the traditional manual calibration that has been standard practice for similarly complex water quality models.
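    The three agreement statistics used in the calibration above have standard closed forms; a minimal sketch on illustrative data (not the Lake Rotorua series):

```python
import numpy as np

# Sketch of the three fit statistics used in the calibration: RMSE,
# Nash-Sutcliffe efficiency, and the Pearson correlation coefficient.
def rmse(obs, sim):
    return np.sqrt(np.mean((sim - obs) ** 2))

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pearson_r(obs, sim):
    return np.corrcoef(obs, sim)[0, 1]

# illustrative observed/simulated values (e.g. water temperature, deg C)
obs = np.array([10.2, 11.5, 13.0, 14.8, 16.1, 15.2, 13.9])
sim = np.array([10.0, 11.9, 12.6, 15.0, 16.4, 15.0, 13.5])

print(rmse(obs, sim), nash_sutcliffe(obs, sim), pearson_r(obs, sim))
```

    Note that RMSE is minimized while Nr and r are maximized, which is why the study treats the three criteria independently.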

  8. Externally Calibrated Parallel Imaging for 3D Multispectral Imaging Near Metallic Implants Using Broadband Ultrashort Echo Time Imaging

    PubMed Central

    Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.

    2017-01-01

    Purpose To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613

  9. Automated response matching for organic scintillation detector arrays

    NASA Astrophysics Data System (ADS)

    Aspinall, M. D.; Joyce, M. J.; Cave, F. D.; Plenteda, R.; Tomanin, A.

    2017-07-01

    This paper identifies a digitizer technology with unique features that facilitates feedback control for the realization of a software-based technique for automatically calibrating detector responses. Three such auto-calibration techniques have been developed and are described, along with an explanation of the main configuration settings and potential pitfalls. Automating this process increases repeatability, simplifies user operation, and enables remote, periodic system calibration where consistency across detector responses is critical.

  10. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    PubMed

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  11. A fiber-coupled displacement measuring interferometer for determination of the posture of a reflective surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Shuai; Hu, Peng-Cheng, E-mail: hupc@hit.edu.cn; Ding, Xue-Mei, E-mail: X.M.Ding@outlook.com

    A fiber-coupled displacement measuring interferometer capable of determining the posture of the reflective surface of a measuring mirror is proposed. The newly constructed instrument combines fiber-coupled displacement and angular measurement technologies, and has the advantages of both fiber-coupled and spatially beam-separated interferometers. A portable dual position sensitive detector (PSD)-based unit within the proposed interferometer measures the parallelism of the two source beams to guide the fiber-coupling adjustment. The portable dual PSD-based unit measures not only the pitch and yaw of the retro-reflector but also the posture of the reflective surface. The experimental results of displacement calibration show that the deviations between the proposed interferometer and a reference one, an Agilent 5530, at two different common beam directions are both less than ±35 nm, verifying the effectiveness of the beam parallelism measurement. The experimental results of angular calibration show that deviations of pitch and yaw from an auto-collimator (as a reference) are less than ±2 arc sec, proving the proposed interferometer's effectiveness for determining the posture of a reflective surface.

  12. Development of the auto-steering software and equipment technology (ASSET)

    NASA Astrophysics Data System (ADS)

    McKay, Mark D.; Anderson, Matthew O.; Wadsworth, Derek C.

    2003-09-01

    The Idaho National Engineering and Environmental Laboratory (INEEL), through collaboration with INSAT Co., has developed a low cost robotic auto-steering system for parallel contour swathing. The capability to perform parallel contour swathing while minimizing "skip" and "overlap" is a necessity for cost-effective crop management within precision agriculture. Current methods for performing parallel contour swathing consist of using a Differential Global Position System (DGPS) coupled with a light bar system to prompt an operator where to steer. The complexity of operating heavy equipment, ensuring proper chemical mixture and application, and steering to a light bar indicator can be overwhelming to an operator. To simplify these tasks, an inexpensive robotic steering system has been developed and tested on several farming implements. This development leveraged research conducted by the INEEL and Utah State University. The INEEL-INSAT Auto-Steering Software and Equipment Technology provides the following: 1) the ability to drive in a straight line within +/- 2 feet while traveling at least 15 mph, 2) interfaces to a Real Time Kinematic (RTK) DGPS and sub-meter DGPS, 3) safety features such as Emergency-stop, steering wheel deactivation, computer watchdog deactivation, etc., and 4) a low-cost, field-ready system that is easily adapted to other systems.

  13. Enhancing model prediction reliability through improved soil representation and constrained model auto calibration - A paired watershed study

    USDA-ARS?s Scientific Manuscript database

    Process based and distributed watershed models possess a large number of parameters that are not directly measured in field and need to be calibrated through matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practic...

  14. Software Tools for Design and Performance Evaluation of Intelligent Systems

    DTIC Science & Technology

    2004-08-01

    Self-calibration of Three-Legged Modular Reconfigurable Parallel Robots Based on Leg-End Distance Errors," Robotica, Vol. 19, pp. 187-198. [4...9] Lintott, A. B., and Dunlop, G. R., "Parallel Topology Robot Calibration," Robotica. [10] Vischer, P., and Clavel, R., "Kinematic Calibration...of the Parallel Delta Robot," Robotica, Vol. 16, pp. 207-218, 1998. [11] Joshi, S.A., and Surianarayan, A., "Calibration of a 6-DOF Cable Robot Using

  15. A 32-channel photon counting module with embedded auto/cross-correlators for real-time parallel fluorescence correlation spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, S.; Labanca, I.; Rech, I.

    2014-10-15

    Fluorescence correlation spectroscopy (FCS) is a well-established technique to study binding interactions or the diffusion of fluorescently labeled biomolecules in vitro and in vivo. Fast FCS experiments require parallel data acquisition and analysis which can be achieved by exploiting a multi-channel Single Photon Avalanche Diode (SPAD) array and a corresponding multi-input correlator. This paper reports a 32-channel FPGA based correlator able to perform 32 auto/cross-correlations simultaneously over a lag-time ranging from 10 ns up to 150 ms. The correlator is included in a 32 × 1 SPAD array module, providing a compact and flexible instrument for high throughput FCS experiments. However, some inherent features of SPAD arrays, namely afterpulsing and optical crosstalk effects, may introduce distortions in the measurement of auto- and cross-correlation functions. We investigated these limitations to assess their impact on the module and evaluate possible workarounds.

  16. [Peripheral refraction and retinal contour in children with myopia by results of refractometry and partial coherence interferometry].

    PubMed

    Tarutta, E P; Milash, S V; Tarasova, N A; Romanova, L I; Markosian, G A; Epishina, M V

    2014-01-01

    To determine the posterior pole contour of the eye based on the relative peripheral refractive error and relative eye length. A parallel study was performed, which enrolled 38 children (76 eyes) with myopia from -1.25 to -10.82 diopters. The patients underwent peripheral refraction assessment with a WR-5100K Binocular Auto Refractometer ("Grand Seiko", Japan) and partial coherence interferometry with an IOLMaster ("Carl Zeiss", Germany) to obtain the relative eye length in areas located 15 and 30 degrees nasal and temporal from the central fovea along the horizontal meridian. In general, refractometry and interferometry showed high coincidence of defocus signs and values for the areas located 15 and 30 degrees nasal, as well as 15 degrees temporal, from the fovea. However, in 41% of patients the defocus signs determined by the two methods mismatched in one or more areas; most mismatch cases involved mild myopia. We suppose that such mismatches are caused by optical peculiarities of the anterior eye segment that affect refractometry results.

  17. Fast hydrological model calibration based on the heterogeneous parallel computing accelerated shuffled complex evolution method

    NASA Astrophysics Data System (ADS)

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke

    2018-01-01

    Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
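    The performance-critical step the paper parallelizes is the evaluation of many candidate parameter sets. The sketch below is illustrative only: the paper uses OpenMP and CUDA on heterogeneous hardware, whereas here a Python thread pool evaluates a population of parameter vectors concurrently against a stand-in objective function.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch: concurrently evaluate the objective for a population
# of candidate parameter sets, the step whose cost dominates SCE-UA when the
# hydrometeorological record is long. The objective is a hypothetical
# stand-in for a rainfall-runoff model run followed by an error metric.
def objective(params):
    return float(np.sum((params - 0.3) ** 2))

rng = np.random.default_rng(2)
population = [rng.uniform(0.0, 1.0, 4) for _ in range(64)]

with ThreadPoolExecutor(max_workers=8) as pool:
    scores = list(pool.map(objective, population))   # order-preserving

best = population[int(np.argmin(scores))]
print(min(scores), best)
```

    The key design point, shared with the paper's OpenMP/CUDA implementation, is that candidate evaluations are independent, so they map directly onto parallel workers without changing the SCE-UA shuffling logic.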

  18. Simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii using excitation-emission matrix fluorescence coupled with chemometrics methods

    NASA Astrophysics Data System (ADS)

    Bai, Xue-Mei; Liu, Tie; Liu, De-Long; Wei, Yong-Ju

    2018-02-01

    A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method was proposed for the simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii. By combining EEM data with chemometrics methods, the simultaneous determination of α-asarone and β-asarone in this complex Traditional Chinese medicine system was achieved successfully, even in the presence of unexpected interferents. A physical or chemical separation step was avoided through the use of "mathematical separation". Six second-order calibration methods were used, including parallel factor analysis (PARAFAC), alternating trilinear decomposition (ATLD), alternating penalty trilinear decomposition (APTLD), self-weighted alternating trilinear decomposition (SWATLD), and unfolded partial least-squares (U-PLS) and multidimensional partial least-squares (N-PLS) with residual bilinearization (RBL). In addition, an HPLC method was developed to further validate the presented strategy. For the validation samples, the analytical results obtained by the six second-order calibration methods were comparably accurate, but for the Acorus tatarinowii samples the results indicated a slightly better predictive ability of the N-PLS/RBL procedure over the other methods.

  19. Partial Derivatives, the Energy Crisis, and Urban Transportation: Calculus for Concerned Citizens

    ERIC Educational Resources Information Center

    Bent, Henry A.

    1976-01-01

    Presented is a method utilizing partial derivatives for determining the energy saved per dollar by substituting bicycling for driving a car while not reducing jobs dependent upon the auto industry. (SL)

  20. Three-dimensional through-time radial GRAPPA for renal MR angiography.

    PubMed

    Wright, Katherine L; Lee, Gregory R; Ehses, Philipp; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-10-01

    To achieve high temporal and spatial resolution for contrast-enhanced time-resolved MR angiography exams (trMRAs), fast imaging techniques such as non-Cartesian parallel imaging must be used. In this study, the three-dimensional (3D) through-time radial generalized autocalibrating partially parallel acquisition (GRAPPA) method is used to reconstruct highly accelerated stack-of-stars data for time-resolved renal MRAs. Through-time radial GRAPPA has been recently introduced as a method for non-Cartesian GRAPPA weight calibration, and a similar concept can also be used in 3D acquisitions. By combining different sources of calibration information, acquisition time can be reduced. Here, different GRAPPA weight calibration schemes are explored in simulation, and the results are applied to reconstruct undersampled stack-of-stars data. Simulations demonstrate that an accurate and efficient approach to 3D calibration is to combine a small number of central partitions with as many temporal repetitions as exam time permits. These findings were used to reconstruct renal trMRA data with an in-plane acceleration factor as high as 12.6 with respect to the Nyquist sampling criterion, where the lowest root mean squared error value of 16.4% was achieved when using a calibration scheme with 8 partitions, 16 repetitions, and a 4 projection × 8 read point segment size. 3D through-time radial GRAPPA can be used to successfully reconstruct highly accelerated non-Cartesian data. By using in-plane radial undersampling, a trMRA can be acquired with a temporal footprint less than 4 s/frame with a spatial resolution of approximately 1.5 mm × 1.5 mm × 3 mm. © 2014 Wiley Periodicals, Inc.

  1. Note: Digital laser frequency auto-locking for inter-satellite laser ranging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Yingxin; Yeh, Hsien-Chi, E-mail: yexianji@mail.hust.edu.cn; Li, Hongyin

    2016-05-15

    We present a prototype of a laser frequency auto-locking and re-locking control system designed for laser frequency stabilization in an inter-satellite laser ranging system. The controller has been implemented on field programmable gate arrays and programmed with LabVIEW software. It allows initial frequency calibration and lock-in of a free-running laser to a Fabry-Pérot cavity, and because it recovers automatically from unlocked conditions, it benefits automated in-orbit operations. The program design and experimental results are presented.

  2. PKA, novel PKC isoforms, and ERK is mediating PACAP auto-regulation via PAC1R in human neuroblastoma NB-1 cells.

    PubMed

    Georg, Birgitte; Falktoft, Birgitte; Fahrenkrug, Jan

    2016-12-01

    The neuropeptide PACAP is expressed throughout the central and peripheral nervous system, where it modulates diverse physiological functions including neuropeptide gene expression. We here report that in human neuroblastoma NB-1 cells PACAP transiently induces its own expression. Maximal PACAP mRNA expression was found after stimulation with PACAP for 3 h. PACAP auto-regulation was found to be mediated by activation of PACAP-specific PAC1Rs, as PACAP had >100-fold higher efficacy than VIP, and the PAC1R-selective agonist Maxadilan potently induced PACAP gene expression. Experiments with pharmacological kinase inhibitors revealed that both PKA and novel, but not conventional, PKC isozymes were involved in the PACAP auto-regulation. Inhibition of MAPK/ERK kinase (MEK) also impeded the induction, and we found that PKA, novel PKC and ERK acted in parallel and were thus not part of the same pathways. The expression of the transcription factor EGR1, previously described as a target of PACAP signalling, was found to be transiently induced by PACAP, and pharmacological inhibition of either PKC or MEK1/2 abolished PACAP-mediated EGR1 induction. In contrast, inhibition of PKA increased PACAP-mediated EGR1 induction. Experiments using siRNA against EGR1 to lower its expression did, however, not affect the PACAP auto-regulation, indicating that this immediate early gene product is not part of PACAP auto-regulation in NB-1 cells. We here reveal that in NB-1 neuroblastoma cells PACAP induces its own expression by activation of PAC1R, and that the signalling differs from the PAC1R signalling mediating induction of VIP in the same cells. PACAP auto-regulation depends on parallel activation of PKA, novel PKC isoforms, and ERK, while EGR1 does not seem to be part of the PACAP auto-regulation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Confirmatory factor analysis of the Autonomy over Tobacco Scale (AUTOS) in adults.

    PubMed

    Wellman, Robert J; DiFranza, Joseph R; O'Loughlin, Jennifer

    2015-11-01

    The Autonomy over Tobacco Scale (AUTOS), a 12-item self-administered questionnaire, was designed to measure autonomy in three correlated lower-order symptom domains: withdrawal, psychological dependence, and cue-induced craving. The factor structure of the AUTOS remains an open question; confirmatory analyses in adolescents supported the hierarchical structure, while exploratory analyses in adolescents and adults yielded single-factor solutions. Here we seek to determine whether the hypothesized hierarchical structure is valid in adult smokers. The AUTOS was administered to two independent convenience samples of adult current smokers: a calibration sample recruited in the US for online studies, and a confirmation sample drawn from the prospective Nicotine Dependence in Teens study in Montreal. We tested competing hierarchical and single-factor models using the robust weighted least-squares (WLSMV) estimation method. A single-factor model that allowed correlated error variances between theoretically related items fit well in the calibration sample (n = 434): χ²SB(52) = 165.71; χ²/df = 3.19; SRMR = 0.03; CFI = 0.96; NNFI = 0.95; RMSEA = 0.07 (95% CI: 0.06, 0.08). Reliability of the single factor was high (ωB = 0.92) and construct validity was adequate. In the confirmation sample (n = 335), a similar model fit well: χ²SB(53) = 126.94; χ²/df = 2.44; SRMR = 0.04; CFI = 0.95; NNFI = 0.93; RMSEA = 0.07 (95% CI: 0.05, 0.08). Reliability of the single factor was again high (ωB = 0.92) and construct validity was adequate. The AUTOS is unidimensional in adult smokers. Copyright © 2015 Elsevier Ltd. All rights reserved.
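    The RMSEA values quoted above can be approximately reproduced from the chi-square statistics with the standard closed form RMSEA = sqrt(max(χ² - df, 0) / (df·(N - 1))); output from WLSMV-adjusted software can differ slightly from this hand calculation.

```python
import math

# Reproduce RMSEA from the reported chi-square, degrees of freedom, and N,
# using the standard closed form (population-corrected software output may
# differ slightly under WLSMV estimation).
def rmsea(chi2, df, n):
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(rmsea(165.71, 52, 434), 2))  # 0.07, matching the reported value
print(round(rmsea(126.94, 53, 335), 2))  # ~0.06 here; the paper reports 0.07
```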

  4. DOVIS 2.0: an efficient and easy to use parallel virtual screening tool based on AutoDock 4.0.

    PubMed

    Jiang, Xiaohui; Kumar, Kamal; Hu, Xin; Wallqvist, Anders; Reifman, Jaques

    2008-09-08

    Small-molecule docking is an important tool in studying receptor-ligand interactions and in identifying potential drug candidates. Previously, we developed a software tool (DOVIS) to perform large-scale virtual screening of small molecules in parallel on Linux clusters, using AutoDock 3.05 as the docking engine. DOVIS enables the seamless screening of millions of compounds on high-performance computing platforms. In this paper, we report significant advances in the software implementation of DOVIS 2.0, including enhanced screening capability, improved file system efficiency, and extended usability. To keep DOVIS up-to-date, we upgraded the software's docking engine to the more accurate AutoDock 4.0 code. We developed a new parallelization scheme to improve runtime efficiency and modified the AutoDock code to reduce excessive file operations during large-scale virtual screening jobs. We also implemented an algorithm to output docked ligands in an industry standard format, sd-file format, which can be easily interfaced with other modeling programs. Finally, we constructed a wrapper-script interface to enable automatic rescoring of docked ligands by arbitrarily selected third-party scoring programs. The significance of the new DOVIS 2.0 software compared with the previous version lies in its improved performance and usability. The new version makes the computation highly efficient by automating load balancing, significantly reducing excessive file operations by more than 95%, providing outputs that conform to industry standard sd-file format, and providing a general wrapper-script interface for rescoring of docked ligands. The new DOVIS 2.0 package is freely available to the public under the GNU General Public License.

  5. Online pH measurement technique in seawater desalination

    NASA Astrophysics Data System (ADS)

    Wang, Haibo; Wu, Kaihua; Hu, Shaopeng

    2009-11-01

    The measurement of pH is essential in seawater desalination, and the glass electrode is the main pH sensor used. Because the internal impedance of a glass electrode is high and the sensor signal is easily disturbed, a signal processing circuit with high input impedance was designed. Because of the high salinity of seawater and the characteristics of the glass electrode, ultrasonic cleaning technology was used to clean the pH sensor online. Temperature compensation was also designed to reduce the measurement error caused by variations in environmental temperature. Additionally, the potential drift of the pH sensor was analyzed and an automatic calibration method was proposed. In order to monitor pH variations online during seawater desalination, three operating modes were designed: online monitoring, ultrasonic cleaning and auto-calibration. The current pH is measured and displayed in online monitoring mode, the pH sensor is cleaned in ultrasonic cleaning mode, and the sensor is calibrated in auto-calibration mode. Experiments showed that this pH measurement technology meets the technical requirements for desalination, and that the glass electrode can be promptly cleaned online, greatly lengthening its service life.
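    Temperature compensation for a glass electrode typically rests on the Nernst slope scaling with absolute temperature. A minimal sketch, assuming an isopotential point at pH 7 and the usual sign convention (typical electrode specifications, not details stated in the abstract):

```python
# Sketch of glass-electrode temperature compensation: the Nernst slope
# (mV per pH unit) scales with absolute temperature, so the same electrode
# voltage maps to different pH values at different temperatures.
R = 8.314462618        # gas constant, J/(mol K)
F = 96485.33212        # Faraday constant, C/mol
LN10 = 2.302585093

def nernst_slope_mV(temp_c):
    return 1000.0 * R * (temp_c + 273.15) * LN10 / F

def ph_from_voltage(e_mv, temp_c, e_iso_mv=0.0, ph_iso=7.0):
    # assumes an isopotential point at pH 7 and voltage decreasing with pH,
    # a typical electrode spec rather than a value given in the abstract
    return ph_iso - (e_mv - e_iso_mv) / nernst_slope_mV(temp_c)

print(round(nernst_slope_mV(25.0), 2))          # ~59.16 mV/pH at 25 C
print(round(ph_from_voltage(-59.16, 25.0), 2))  # ~8.0
```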

  6. Salicylic acid deposition from wash-off products: comparison of in vivo and porcine deposition models.

    PubMed

    Davies, M A

    2015-10-01

    Salicylic acid (SA) is a widely used active in anti-acne face wash products. Only about 1-2% of the total dose is actually deposited on skin during washing, and more efficient deposition systems are sought. The objective of this work was to develop an improved method, including data analysis, to measure deposition of SA from wash-off formulae. Full fluorescence excitation-emission matrices (EEMs) were acquired for non-invasive measurement of deposition of SA from wash-off products. Multivariate data analysis methods - parallel factor analysis and N-way partial least-squares regression - were used to develop and compare deposition models on human volunteers and porcine skin. Although both models are useful, there are differences between them. First, the range of linear response to dosages of SA was 60 μg cm-2 in vivo compared to 25 μg cm-2 on porcine skin. Second, the actual shape of the SA band differed between substrates. The methods employed in this work highlight the utility of EEMs, in conjunction with multivariate analysis tools such as parallel factor analysis and multiway partial least-squares calibration, in determining sources of spectral variability in skin and quantifying exogenous species deposited on skin. The human model exhibited the widest range of linearity, but the porcine model is still useful up to deposition levels of 25 μg cm-2, or with nonlinear calibration models. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  7. Characteristics of hydrogen produced by partial oxidation and auto-thermal reforming in a small methanol reformer

    NASA Astrophysics Data System (ADS)

    Horng, Rong-Fang; Chou, Huann-Ming; Lee, Chiou-Hwang; Tsai, Hsien-Te

    This paper experimentally investigates the transient characteristics of a small methanol reformer using partial oxidation (POX) and auto-thermal reforming (ATR) for fuel cell applications. The parameters varied were heating temperature, methanol supply rate, steady mode shifting temperature, and the O₂/C (O₂/CH₃OH) and S/C (H₂O/CH₃OH) molar ratios, with the main aim of promoting a rapid response and a high flow rate of hydrogen. The experiments showed that a high steady mode shifting temperature resulted in a faster temperature rise at the catalyst outlet, while a low steady mode shifting temperature resulted in a lower final hydrogen concentration. However, when the mode shifting temperature was too high, the hydrogen production response was not necessarily improved. The optimum steady mode shifting temperature for this experimental set-up was found to be approximately 75 °C. Further, the hydrogen concentration produced by the auto-thermal process was as high as 49.12% with a volume flow rate of up to 23.0 L min⁻¹, compared to 40.0% and 20.5 L min⁻¹ produced by partial oxidation.

  8. New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration

    NASA Astrophysics Data System (ADS)

    Keshavarz, Kasra; Alizadeh, Hossein

    2017-04-01

    Water resource models are powerful tools to support the water management decision-making process, developed to address a broad range of issues including land use and climate change impact analysis, water allocation, systems design and operation, waste load control and allocation, etc. These models fall into two categories, simulation and optimization models, whose calibration has been widely addressed in the literature. Efforts in recent decades have produced two main families of auto-calibration methods: uncertainty-based algorithms such as GLUE, MCMC and PEST, and optimization-based algorithms, including single-objective methods such as SCE-UA and multi-objective methods such as MOCOM-UA and MOSCEM-UA. Although algorithms benefiting from the capabilities of both families, such as SUFI-2, have been developed, this paper proposes a new auto-calibration algorithm that can both find optimal parameter values with respect to multiple objectives, like optimization-based algorithms, and provide interval estimations of parameters, like uncertainty-based algorithms. The algorithm is developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 provides a routine to find the best point and interval estimations of parameters and the corresponding 95% prediction uncertainty bands (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst then has to select point and interval estimations of parameters that are non-dominated with respect to both uncertainty measures.
Based on these properties of SUFI-2, two important questions arise, and answering them motivated this research: given that the final selection in SUFI-2 is based on the two measures, or objectives, while SUFI-2 contains no multi-objective optimization mechanism, are the final estimations Pareto-optimal? And can systematic methods be applied to select the final estimations? To deal with these questions, a new auto-calibration algorithm is proposed in which the uncertainty measures are treated as two objectives, and non-dominated interval estimations of parameters are found by coupling Monte Carlo simulation with Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate the parameters of a water resources planning model of the Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in previous studies using the WEAP software to analyze the impacts of different water resources management strategies, including dam construction, increasing cultivation area, utilization of more efficient irrigation technologies, changing crop pattern, etc. Comparing the Pareto frontier obtained by the proposed auto-calibration algorithm with the SUFI-2 results revealed that the new algorithm leads to a better, and also continuous, Pareto frontier, even though it is more computationally expensive. Finally, Nash and Kalai-Smorodinsky bargaining methods were used to choose a compromise interval estimation on the Pareto frontier.
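
    The two uncertainty measures named above can be computed directly from the 95PPU band: the p-factor is the fraction of observations falling inside the band, and the r-factor is the mean band width normalized by the standard deviation of the observations. A minimal sketch with illustrative data:

```python
from statistics import pstdev

def p_factor(obs, lower, upper):
    """Fraction of observations enclosed by the 95PPU band."""
    inside = sum(1 for o, lo, hi in zip(obs, lower, upper) if lo <= o <= hi)
    return inside / len(obs)

def r_factor(obs, lower, upper):
    """Average 95PPU band width divided by the std. dev. of observations."""
    mean_width = sum(hi - lo for lo, hi in zip(lower, upper)) / len(obs)
    return mean_width / pstdev(obs)

# Illustrative observed series and band bounds
obs   = [2.0, 3.0, 5.0, 4.0, 6.0]
lower = [1.5, 2.5, 4.0, 4.5, 5.0]
upper = [2.5, 3.5, 6.0, 5.5, 7.0]
print(p_factor(obs, lower, upper))  # 0.8 (4 of 5 observations inside)
print(round(r_factor(obs, lower, upper), 2))
```

A calibration is better when the p-factor is high and the r-factor is low, which is exactly the two-objective trade-off the proposed algorithm optimizes.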

  9. Antenna Calibration and Measurement Equipment

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Cortes, Manuel Vazquez

    2012-01-01

    A document describes the Antenna Calibration & Measurement Equipment (ACME) system that will provide the Deep Space Network (DSN) with instrumentation enabling a trained RF engineer at each complex to perform antenna calibration measurements and to generate antenna calibration data. This data includes continuous-scan auto-bore-based data acquisition with all-sky data gathering in support of 4th-order pointing model generation requirements. Other data include antenna subreflector focus, system noise temperature and tipping curves, antenna efficiency reports, system linearity, and instrument calibration. The ACME system design is based on the on-the-fly (OTF) mapping technique and architecture. ACME has contributed to improving the RF performance of the DSN by approximately a factor of two. It has also improved the pointing performance of the DSN antennas and the productivity of its personnel and calibration engineers.

  10. What factors affect the carriage of epinephrine auto-injectors by teenagers?

    PubMed

    Macadam, Clare; Barnett, Julie; Roberts, Graham; Stiefel, Gary; King, Rosemary; Erlewyn-Lajeunesse, Michel; Holloway, Judith A; Lucas, Jane S

    2012-02-02

    Teenagers with allergies are at particular risk of severe and fatal reactions, but epinephrine auto-injectors are not always carried as prescribed. We investigated barriers to carriage. Patients aged 12-18 years under the care of a specialist allergy clinic who had previously been prescribed an auto-injector were invited to participate. Semi-structured interviews explored the factors that positively or negatively impacted on carriage. Twenty teenagers with food or venom allergies were interviewed. Only two patients had used their auto-injector in the community, although several had been treated for severe reactions in hospital. Most teenagers made complex risk assessments to determine whether to carry the auto-injector. Most but not all decisions were rational and were at least partially informed by knowledge. Factors affecting carriage included location, who else would be present, the attitudes of others and physical features of the auto-injector. Teenagers made frequent risk assessments when deciding whether to carry their auto-injectors, and generally wanted to remain safe. Their decisions were complex, multi-faceted and highly individualised. Rather than aiming for 100% carriage of auto-injectors, which remains an ambitious ideal, personalised education packages should aim to empower teenagers to make and act upon informed risk assessments.

  11. A novel second-order standard addition analytical method based on data processing with multidimensional partial least-squares and residual bilinearization.

    PubMed

    Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C

    2009-10-05

    In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.

  12. Development of a computer-based automated pure tone hearing screening device: a preliminary clinical trial.

    PubMed

    Gan, Kok Beng; Azeez, Dhifaf; Umat, Cila; Ali, Mohd Alauddin Mohd; Wahab, Noor Alaudin Abdul; Mukari, Siti Zamratol Mai-Sarah

    2012-10-01

    Hearing screening is important for the early detection of hearing loss. The requirements of specialized equipment, skilled personnel, and quiet environments for valid screening results limit its application in schools and health clinics. This study aimed to develop an automated hearing screening kit (auto-kit) with the capability of real-time noise level monitoring to ensure that the screening is performed in an environment that conforms to the standard. The auto-kit consists of a laptop, a 24-bit resolution sound card, headphones, a microphone, and a graphical user interface, which is calibrated according to the American National Standards Institute S3.6-2004 standard. The auto-kit can present four test tones (500, 1000, 2000, and 4000 Hz) at a 25 or 40 dB HL screening cut-off level. The clinical results at the 40 dB HL screening cut-off level showed that the auto-kit has a sensitivity of 92.5% and a specificity of 75.0%. Because the 500 Hz test tone is not included in the standard hearing screening procedure, it can be excluded from the auto-kit test procedure. The exclusion of the 500 Hz test tone improved the specificity of the auto-kit from 75.0% to 92.3%, which suggests that the auto-kit could be a valid hearing screening device. In conclusion, the auto-kit may be a valuable hearing screening tool, especially in countries where resources are limited.
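
    The reported sensitivity and specificity follow from the standard screening definitions; a minimal sketch (the counts below are illustrative values chosen to reproduce the reported percentages, not the study's raw data):

```python
def sensitivity(tp, fn):
    """True-positive rate: fraction screened positive among those
    with hearing loss on the gold-standard test."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction screened negative among those
    without hearing loss."""
    return tn / (tn + fp)

# Illustrative counts consistent with the reported 92.5% / 75.0%
print(round(100 * sensitivity(tp=37, fn=3), 1))   # 92.5
print(round(100 * specificity(tn=30, fp=10), 1))  # 75.0
```

Dropping the 500 Hz tone removes a common source of false positives (ambient low-frequency noise), which is why it raises specificity without changing the definition above.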

  13. MOLA: a bootable, self-configuring system for virtual screening using AutoDock4/Vina on computer clusters.

    PubMed

    Abreu, Rui Mv; Froufe, Hugo Jc; Queiroz, Maria João Rp; Ferreira, Isabel Cfr

    2010-10-28

    Virtual screening of small molecules using molecular docking has become an important tool in drug discovery. However, large-scale virtual screening is time demanding and usually requires dedicated computer clusters. There are a number of software tools that perform virtual screening using AutoDock4, but they require access to dedicated Linux computer clusters. Also, no software is available for performing virtual screening with Vina using computer clusters. In this paper we present MOLA, an easy-to-use graphical user interface tool that automates parallel virtual screening using AutoDock4 and/or Vina on bootable, non-dedicated computer clusters. MOLA automates several tasks including: ligand preparation, parallel AutoDock4/Vina job distribution and result analysis. When the virtual screening project finishes, an OpenOffice spreadsheet file opens with the ligands ranked by binding energy and distance to the active site. All result files can automatically be recorded on a USB flash drive or on the hard-disk drive using VirtualBox. MOLA works inside a customized Live CD GNU/Linux operating system, developed by us, that bypasses the operating system installed on the computers used in the cluster. This operating system boots from a CD on the master node and then clusters other computers as slave nodes via Ethernet connections. MOLA is an ideal virtual screening tool for non-experienced users with a limited number of multi-platform heterogeneous computers available and no access to dedicated Linux computer clusters. When a virtual screening project finishes, the computers can simply be restarted to their original operating system. The originality of MOLA lies in the fact that any platform-independent computer available can be added to the cluster, without ever using the computer's hard-disk drive and without interfering with the installed operating system.
With a cluster of 10 processors, and a potential maximum speed-up of 10×, the parallel algorithm of MOLA achieved a speed-up of 8.64× using AutoDock4 and 8.60× using Vina.

  14. Automatic Thread-Level Parallelization in the Chombo AMR Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christen, Matthias; Keen, Noel; Ligocki, Terry

    2011-05-26

    The increasing on-chip parallelism has substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed to map software to the hardware in order to leverage its architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite-difference-type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language serves as an already-used target language for the automatic migration of the large number of existing algorithms to a hybrid MPI+OpenMP implementation. It also provides access to an auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.
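
    The auto-tuning methodology mentioned above, stripped to its essence, is an empirical search over candidate parameter values by timing each variant and keeping the fastest. A generic sketch (the block-size kernel below is a toy stand-in, not ChomboFortran):

```python
import timeit

def auto_tune(kernel, candidate_params, n_repeat=3):
    """Empirical auto-tuning: run the kernel with each candidate
    parameter value and return the one with the lowest measured time."""
    best, best_t = None, float("inf")
    for p in candidate_params:
        t = min(timeit.repeat(lambda: kernel(p), number=1, repeat=n_repeat))
        if t < best_t:
            best, best_t = p, t
    return best

# Toy kernel: summing a list in chunks of a tunable block size
data = list(range(100_000))
def kernel(block):
    return sum(sum(data[i:i + block]) for i in range(0, len(data), block))

best_block = auto_tune(kernel, [64, 512, 4096, 32768])
print(best_block)  # fastest block size on this machine
```

The winning value is hardware-dependent, which is precisely why tuned parameters are selected at install or run time rather than hard-coded.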

  15. Method for improving catalyst function in auto-thermal and partial oxidation reformer-based processors

    DOEpatents

    Ahmed, Shabbir; Papadias, Dionissios D.; Lee, Sheldon H.D.; Ahluwalia, Rajesh K.

    2014-08-26

    The invention provides a method for reforming fuel, the method comprising contacting the fuel to an oxidation catalyst so as to partially oxidize the fuel and generate heat; warming incoming fuel with the heat while simultaneously warming a reforming catalyst with the heat; and reacting the partially oxidized fuel with steam using the reforming catalyst.

  16. Design and numerical simulation on an auto-cumulative flowmeter in horizontal oil-water two-phase flow

    NASA Astrophysics Data System (ADS)

    Xie, Beibei; Kong, Lingfu; Kong, Deming; Kong, Weihang; Li, Lei; Liu, Xingbin; Chen, Jiliang

    2017-11-01

    In order to accurately measure the flow rate under low-yield horizontal well conditions, an auto-cumulative flowmeter (ACF) was proposed. Using the proposed flowmeter, the oil flow rate in horizontal oil-water two-phase segregated flow can be finely extracted. The computational fluid dynamics software Fluent was used to simulate the fluid behavior of the ACF in oil-water two-phase flow. To calibrate the simulated measurements of the ACF, a novel oil flow rate measurement method was further proposed. Models of the ACF were simulated to obtain and calibrate the oil flow rate under different total flow rates and oil cuts. Using the finite-element method, the structure of the seven conductance probes in the ACF was simulated, and the response values of the probes under oil-water segregated flow conditions were obtained. Experiments on oil-water segregated flow under different heights of oil accumulation in horizontal oil-water two-phase flow were carried out to calibrate the ACF. The validity of the oil flow rate measurement in horizontal oil-water two-phase flow was verified by the simulation and experimental results.

  17. Design and numerical simulation on an auto-cumulative flowmeter in horizontal oil-water two-phase flow.

    PubMed

    Xie, Beibei; Kong, Lingfu; Kong, Deming; Kong, Weihang; Li, Lei; Liu, Xingbin; Chen, Jiliang

    2017-11-01

    In order to accurately measure the flow rate under low-yield horizontal well conditions, an auto-cumulative flowmeter (ACF) was proposed. Using the proposed flowmeter, the oil flow rate in horizontal oil-water two-phase segregated flow can be finely extracted. The computational fluid dynamics software Fluent was used to simulate the fluid behavior of the ACF in oil-water two-phase flow. To calibrate the simulated measurements of the ACF, a novel oil flow rate measurement method was further proposed. Models of the ACF were simulated to obtain and calibrate the oil flow rate under different total flow rates and oil cuts. Using the finite-element method, the structure of the seven conductance probes in the ACF was simulated, and the response values of the probes under oil-water segregated flow conditions were obtained. Experiments on oil-water segregated flow under different heights of oil accumulation in horizontal oil-water two-phase flow were carried out to calibrate the ACF. The validity of the oil flow rate measurement in horizontal oil-water two-phase flow was verified by the simulation and experimental results.

  18. Auto-Detection of Partial Discharges in Power Cables by Discrete Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Yasuda, Yoh; Hara, Takehisa; Urano, Koji; Chen, Min

    One of the most serious failures that can occur in XLPE power cables is breakdown of the insulation. The established way to prevent such an accident is to detect the partial corona discharges that occur at small voids in the organic insulator. Detecting these partial discharges is difficult, however, because external noise contaminates the measured data, and the discharge patterns are hard to identify at a glance. For this reason, a number of studies have applied neural network (NN) systems, widely used for pattern recognition, to partial discharge detection. We have been developing such an NN system for automatic detection of partial discharges, into which we previously fed the numerical waveform data directly and obtained adequate performance. In this paper, we employ the Discrete Wavelet Transform (DWT) to obtain more detailed transformed data as input to the NN system. Using the DWT, the waveform data can be expressed in time-frequency space, enabling effective detection of partial discharges by the NN system. We present results of DWT analysis applied to measured partial discharge and noise signals, as well as results from the NN system operating on the transformed data.
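
    The DWT front-end decomposes the measured waveform into time-frequency coefficients before they are fed to the NN; one level of the Haar wavelet, the simplest member of the DWT family, can be sketched as follows (the NN stage is omitted):

```python
import math

def haar_dwt_level(signal):
    """One level of the Haar DWT: returns (approximation, detail)
    coefficients, i.e. orthonormal low-pass and high-pass halves."""
    s = 1 / math.sqrt(2)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

# A sharp, discharge-like transient concentrates its energy in the
# detail (high-frequency) coefficients, unlike broadband noise
sig = [0.0, 0.0, 1.0, -1.0, 0.0, 0.0, 0.0, 0.0]
approx, detail = haar_dwt_level(sig)
print(detail)  # spike energy concentrated at index 1
```

Repeating the transform on the approximation coefficients yields the multi-level time-frequency representation used as NN input.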

  19. Hierarchical auto-configuration addressing in mobile ad hoc networks (HAAM)

    NASA Astrophysics Data System (ADS)

    Ram Srikumar, P.; Sumathy, S.

    2017-11-01

    Addressing plays a vital role in networking, as a device must be assigned a unique address in order to participate in data communication in any network. Protocols defining different types of addressing have been proposed in the literature, and address auto-configuration is a key requirement for self-organizing networks. Existing auto-configuration-based addressing protocols require broadcasting probes to all nodes in the network before a proper address can be assigned to a new node, and further broadcasts are needed to reflect the status of the acquired address in the network. Such methods incur high communication overheads due to repetitive flooding. To address this overhead, a new partially stateful address allocation scheme, the Hierarchical Auto-configuration Addressing (HAAM) scheme, is extended and proposed. Hierarchical addressing reduces the latency and overhead incurred during address configuration, while the partially stateful allocation algorithm assigns addresses without flooding or global state awareness, which reduces communication overhead and space complexity, respectively. Nodes are assigned addresses hierarchically so that the graph of the network is maintained as a spanning tree, effectively avoiding the broadcast storm problem. The proposed HAAM algorithm handles network splits and merges efficiently in large-scale mobile ad hoc networks while incurring low communication overheads.
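
    The flood-free hierarchical allocation idea can be illustrated with a toy scheme in which each parent hands out addresses derived from its own address plus a local child counter, keeping the network a spanning tree (the dotted address format is an invented illustration, not the HAAM wire format):

```python
class HierarchicalAllocator:
    """Toy hierarchical address allocator: each parent assigns child
    addresses from purely local state, so no network-wide flooding or
    global duplicate-address detection is needed."""

    def __init__(self):
        self.children = {}  # parent address -> number of children so far

    def allocate(self, parent):
        n = self.children.get(parent, 0) + 1
        self.children[parent] = n
        return f"{parent}.{n}"  # child address extends the parent's

alloc = HierarchicalAllocator()
root = "1"
a = alloc.allocate(root)  # first child of the root
b = alloc.allocate(root)  # second child of the root
c = alloc.allocate(a)     # grandchild, allocated by node a locally
print(a, b, c)            # 1.1 1.2 1.1.1
```

Because every address encodes its parent chain, the allocation graph is a spanning tree by construction, and uniqueness holds as long as each parent's counter is locally consistent.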

  20. What factors affect the carriage of epinephrine auto-injectors by teenagers?

    PubMed Central

    2012-01-01

    Background Teenagers with allergies are at particular risk of severe and fatal reactions, but epinephrine auto-injectors are not always carried as prescribed. We investigated barriers to carriage. Methods Patients aged 12-18 years old under a specialist allergy clinic, who had previously been prescribed an auto-injector were invited to participate. Semi-structured interviews explored the factors that positively or negatively impacted on carriage. Results Twenty teenagers with food or venom allergies were interviewed. Only two patients had used their auto-injector in the community, although several had been treated for severe reactions in hospital. Most teenagers made complex risk assessments to determine whether to carry the auto-injector. Most but not all decisions were rational and were at least partially informed by knowledge. Factors affecting carriage included location, who else would be present, the attitudes of others and physical features of the auto-injector. Teenagers made frequent risk assessments when deciding whether to carry their auto-injectors, and generally wanted to remain safe. Their decisions were complex, multi-faceted and highly individualised. Conclusions Rather than aiming for 100% carriage of auto-injectors, which remains an ambitious ideal, personalised education packages should aim to empower teenagers to make and act upon informed risk assessments. PMID:22409884

  1. Real-time dedispersion for fast radio transient surveys, using auto tuning on many-core accelerators

    NASA Astrophysics Data System (ADS)

    Sclocco, A.; van Leeuwen, J.; Bal, H. E.; van Nieuwpoort, R. V.

    2016-01-01

    Dedispersion, the removal of deleterious smearing of impulsive signals by the interstellar matter, is one of the most intensive processing steps in any radio survey for pulsars and fast transients. We here present a study of the parallelization of this algorithm on many-core accelerators, including GPUs from AMD and NVIDIA, and the Intel Xeon Phi. We find that dedispersion is inherently memory-bound. Even in a perfect scenario, hardware limitations keep the arithmetic intensity low, thus limiting performance. We next exploit auto-tuning to adapt dedispersion to different accelerators, observations, and even telescopes. We demonstrate that the optimal settings differ between observational setups, and that auto-tuning significantly improves performance. This impacts time-domain surveys from Apertif to SKA.
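
    Incoherent dedispersion itself amounts to shifting each frequency channel by the cold-plasma dispersion delay and summing; a minimal numpy sketch with an illustrative synthetic pulse:

```python
import numpy as np

K_DM = 4.148808e3  # dispersion constant, s * MHz^2 * pc^-1 * cm^3

def delays(freqs_mhz, dm, f_ref_mhz):
    """Dispersion delay (s) of each channel relative to a reference frequency."""
    return K_DM * dm * (freqs_mhz ** -2.0 - f_ref_mhz ** -2.0)

def dedisperse(data, freqs_mhz, dm, dt):
    """Shift each channel (rows of data) back by its delay and sum.
    data: (n_chan, n_samp) filterbank block; dt: sampling time in s."""
    shifts = np.round(delays(freqs_mhz, dm, freqs_mhz.max()) / dt).astype(int)
    out = np.zeros(data.shape[1])
    for row, s in zip(data, shifts):
        out += np.roll(row, -s)  # simplistic wrap-around shift
    return out

# Synthetic dispersed pulse: each channel pulses at its own delay,
# so dedispersion at the correct DM realigns all channels
freqs = np.linspace(1500.0, 1200.0, 16)  # MHz, high to low
dt, dm = 1e-3, 50.0
data = np.zeros((16, 512))
shifts = np.round(delays(freqs, dm, freqs.max()) / dt).astype(int)
data[np.arange(16), 100 + shifts] = 1.0
series = dedisperse(data, freqs, dm, dt)
print(int(series.argmax()), series.max())  # peak realigned at sample 100
```

The memory-bound character noted in the abstract is visible even here: the work per sample is a shift and an add, so performance is dominated by data movement, and the tunable quantities (channel blocking, shift granularity) are exactly what auto-tuning explores.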

  2. Calibration Of Partial-Pressure-Of-Oxygen Sensors

    NASA Technical Reports Server (NTRS)

    Yount, David W.; Heronimus, Kevin

    1995-01-01

    Report and analysis of, and discussion of improvements in, procedure for calibrating partial-pressure-of-oxygen sensors to satisfy Spacelab calibration requirements released. Sensors exhibit fast drift, which results in short calibration period not suitable for Spacelab. By assessing complete process of determining total drift range available, calibration procedure modified to eliminate errors and still satisfy requirements without compromising integrity of system.

  3. Control of the low-load region in partially premixed combustion

    NASA Astrophysics Data System (ADS)

    Ingesson, Gabriel; Yin, Lianhao; Johansson, Rolf; Tunestal, Per

    2016-09-01

    Partially premixed combustion (PPC) is a low-temperature, direct-injection combustion concept that has been shown to give promising emission levels and efficiencies over a wide operating range. In this concept, high EGR ratios, high octane-number fuels and early injection timings are used to slow down the auto-ignition reactions and to enhance fuel and air mixing before the start of combustion. A drawback of this concept is the combustion stability in the low-load region, where a high octane-number fuel might cause misfire and low combustion efficiency. This paper investigates the problem of low-load PPC controller design for increased engine efficiency. First, low-load PPC data obtained from a multi-cylinder heavy-duty engine are presented. The data show that combustion efficiency can be increased by using a pilot injection and that there is a non-linearity in the relation between injection and combustion timing. Furthermore, intake conditions should be set so as to avoid operating points with unfavourable combinations of global equivalence ratio and in-cylinder temperature. Model predictive control simulations were used together with a calibrated engine model to find a gas-system controller that fulfilled this task. The findings are then summarized in a suggested engine controller design. Finally, an experimental performance evaluation of the suggested controller is presented.

  4. An Overview of Kinematic and Calibration Models Using Internal/External Sensors or Constraints to Improve the Behavior of Spatial Parallel Mechanisms

    PubMed Central

    Majarena, Ana C.; Santolaria, Jorge; Samper, David; Aguilar, Juan J.

    2010-01-01

    This paper presents an overview of the literature on kinematic and calibration models of parallel mechanisms, the influence of sensors in the mechanism accuracy and parallel mechanisms used as sensors. The most relevant classifications to obtain and solve kinematic models and to identify geometric and non-geometric parameters in the calibration of parallel robots are discussed, examining the advantages and disadvantages of each method, presenting new trends and identifying unsolved problems. This overview tries to answer and show the solutions developed by the most up-to-date research to some of the most frequent questions that appear in the modelling of a parallel mechanism, such as how to measure, the number of sensors and necessary configurations, the type and influence of errors or the number of necessary parameters. PMID:22163469

  5. Evaluation of “Autotune” calibration against manual calibration of building energy models

    DOE PAGES

    Chaudhary, Gaurav; New, Joshua; Sanyal, Jibonananda; ...

    2016-08-26

    Our paper demonstrates the application of Autotune, a methodology aimed at automatically producing calibrated building energy models using measured data, in two case studies. In the first case, a building model is de-tuned by deliberately injecting faults into more than 60 parameters. This model was then calibrated using Autotune, and its accuracy with respect to the original model was evaluated in terms of the industry-standard normalized mean bias error and coefficient of variation of root mean squared error metrics set forth in ASHRAE Guideline 14. In addition to whole-building energy consumption, outputs including lighting, plug load profiles, HVAC energy consumption, zone temperatures, and other variables were analyzed. In the second case, Autotune calibration is compared directly to experts' manual calibration of an emulated-occupancy, full-size residential building, with comparable calibration results in much less time. Lastly, our paper concludes with a discussion of the key strengths and weaknesses of auto-calibration approaches.
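
    The ASHRAE Guideline 14 metrics mentioned above are simple functions of the measured and simulated series; a minimal sketch (the p = 1 degrees-of-freedom correction and the sample values are illustrative):

```python
import math

def nmbe(measured, simulated, p=1):
    """Normalized mean bias error (%), ASHRAE Guideline 14 style."""
    n = len(measured)
    mean_m = sum(measured) / n
    bias = sum(m - s for m, s in zip(measured, simulated))
    return 100.0 * bias / ((n - p) * mean_m)

def cv_rmse(measured, simulated, p=1):
    """Coefficient of variation of RMSE (%), ASHRAE Guideline 14 style."""
    n = len(measured)
    mean_m = sum(measured) / n
    sse = sum((m - s) ** 2 for m, s in zip(measured, simulated))
    return 100.0 * math.sqrt(sse / (n - p)) / mean_m

# Illustrative hourly energy values (kWh)
m = [10.0, 12.0, 11.0, 13.0, 14.0]
s = [10.5, 11.5, 11.0, 12.5, 14.5]
print(round(nmbe(m, s), 2), round(cv_rmse(m, s), 2))  # 0.0 4.17
```

Note that an unbiased model can still have a large CV(RMSE), which is why both metrics are checked when judging a calibration.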

  6. An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model

    NASA Astrophysics Data System (ADS)

    Tiernan, E. D.; Hodges, B. R.

    2017-12-01

    The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine makes use of a preliminary sensitivity analysis to reduce the dimensions of the parameter space, at which point a multi-objective function, genetic algorithm (modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for the catchment behavior that could not have been reasonably obtained through manual calibration.
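
    The Pareto front that the genetic algorithm reports is the set of candidate parameter sets not dominated in any objective; a minimal non-dominated filter (both objectives minimized; the candidate values are illustrative):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Objectives could be e.g. (1 - NSE, peak-flow error) for a SWMM candidate
candidates = [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5), (0.3, 0.95)]
print(pareto_front(candidates))  # (0.5, 0.5) and (0.3, 0.95) are dominated
```

NSGA-II adds fast non-dominated sorting and crowding-distance selection on top of this basic dominance test to keep the front diverse across generations.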

  7. Automating calibration, sensitivity and uncertainty analysis of complex models using the R package Flexible Modeling Environment (FME): SWAT as an example

    USGS Publications Warehouse

    Wu, Y.; Liu, S.

    2012-01-01

    Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), which is a physically-based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration, and sensitivity and uncertainty analysis capabilities through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT for parameter optimization. Results indicate that both methods performed well and similarly in searching for a set of optimal parameters. Nonetheless, the R-SWAT-FME is more attractive due to its instant visualization, and its potential to take advantage of other R packages (e.g., inverse modeling and statistical graphics).
The methods presented in the paper are readily adaptable to other model applications that require capability for automated calibration, and sensitivity and uncertainty analysis.

  8. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets. Then, we reconstructed each subset using CS and averaged the results to obtain a final CS k-space reconstruction. We used both a standard CS reconstruction and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
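
    The decomposition step splits the regular (equidistant) k-space sampling pattern into random subsets, each of which is incoherent enough for CS; the splitting itself can be sketched as follows (line indices and the choice of two subsets are illustrative):

```python
import random

def decompose(sample_lines, n_subsets, seed=0):
    """Randomly partition equidistantly sampled k-space line indices
    into n_subsets disjoint subsets; each subset is then reconstructed
    separately with CS and the results are averaged."""
    rng = random.Random(seed)
    lines = list(sample_lines)
    rng.shuffle(lines)
    return [sorted(lines[i::n_subsets]) for i in range(n_subsets)]

acquired = range(0, 256, 2)  # equidistant undersampling (R = 2) of 256 lines
subsets = decompose(acquired, 2)
print(len(subsets[0]), len(subsets[1]))  # 64 64
assert set(subsets[0]) | set(subsets[1]) == set(acquired)  # nothing lost
```

Each random subset breaks the coherent aliasing pattern of the regular grid, which is what lets the CS step suppress the fold-over artifacts before GRAPPA calibration.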

  9. PFMCal: Photonic force microscopy calibration extended for its application in high-frequency microrheology

    NASA Astrophysics Data System (ADS)

    Butykai, A.; Domínguez-García, P.; Mor, F. M.; Gaál, R.; Forró, L.; Jeney, S.

    2017-11-01

    The present document is an update of the previously published MATLAB code for the calibration of optical tweezers in the high-resolution detection of the Brownian motion of non-spherical probes [1]. Here, an alternative version of the original code is outlined, based on the same physical theory [2] but focused on automating the calibration of measurements that use spherical probes. The new code is useful for high-frequency microrheology studies, where the probe radius is known but the viscosity of the surrounding fluid may not be. This extended calibration methodology is automatic, with no need for a user interface. A code for calibration by means of thermal noise analysis [3] is also included; this method can be applied to viscoelastic fluids if the trap stiffness has been previously estimated [4]. The new code can be executed in MATLAB and in GNU Octave. Program Files doi:http://dx.doi.org/10.17632/s59f3gz729.1 Licensing provisions: GPLv3 Programming language: MATLAB 2016a (MathWorks Inc.) and GNU Octave 4.0 Operating system: Linux and Windows. Supplementary material: A new document README.pdf includes basic running instructions for the new code. Journal reference of previous version: Computer Physics Communications, 196 (2015) 599 Does the new version supersede the previous version?: No. It adds alternative but compatible code while providing similar calibration factors. Nature of problem (approx. 50-250 words): The original code uses a MATLAB-provided graphical user interface, which is not available in GNU Octave and cannot be used outside proprietary software such as MATLAB. In addition, calibration with spherical probes requires an automatic method when large amounts of data are processed for microrheology. Solution method (approx. 50-250 words): The new code can be executed in the latest version of MATLAB and in GNU Octave, a free and open-source alternative to MATLAB.
This code implements an automatic calibration process that requires only writing the input data in the main script. Additionally, we include a calibration method based on thermal noise statistics, which can be used with viscoelastic fluids if the trap stiffness has been previously estimated. Reasons for the new version: This version extends the functionality of PFMCal to the particular case of spherical probes and unknown fluid viscosities. The extended code is automatic, works on different operating systems, and is compatible with GNU Octave. Summary of revisions: The original MATLAB program of the previous version, executed by PFMCal.m, is unchanged. Here, we have added two additional main scripts, PFMCal_auto.m and PFMCal_histo.m, which implement the automated calibration process and calibration through Boltzmann statistics, respectively. The process of calibration using this code for spherical beads is described in the README.pdf file provided with the new code submission. We obtain different calibration factors, β (given in μm/V), according to [2], related to two statistical quantities: the mean-squared displacement (MSD), βMSD, and the velocity autocorrelation function (VAF), βVAF. Using that methodology, the trap stiffness, k, and the zero-shear viscosity of the fluid, η, can be calculated if the value of the particle's radius, a, is known beforehand. For comparison, the extended code also includes calibration using the corner frequency of the power-spectral density (PSD) [5], providing a calibration factor βPSD. In addition, with a prior estimate of the trap stiffness and the known value of the particle's radius, we can use thermal noise statistics to obtain calibration factors β according to the quadratic form of the optical potential, βE, and related to the Gaussian distribution of the bead's positions, βσ2.
This method has been demonstrated to be applicable to the calibration of optical tweezers with non-Newtonian viscoelastic polymeric liquids [4]. An example of the results of this calibration process is summarized in Table 1. Using the data provided with the new code submission, for water and acetone, we calculate all the calibration factors using the original PFMCal.m and the new non-GUI code PFMCal_auto.m and PFMCal_histo.m. Regarding the new code, PFMCal_auto.m returns η, k, βMSD, βVAF and βPSD, while PFMCal_histo.m provides βσ2 and βE. Table 1 shows that we obtain the expected viscosity of the two fluids at this temperature and that the different methods give good agreement between trap stiffnesses and calibration factors. Additional comments including Restrictions and Unusual features (approx. 50-250 words): The original code, PFMCal.m, runs under MATLAB using the Statistics Toolbox. The extended code, PFMCal_auto.m and PFMCal_histo.m, can be executed without modification in MATLAB or GNU Octave. The code has been tested on Linux and Windows operating systems.
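
    The Boltzmann-statistics factor βσ2 mentioned above follows from equipartition: the trap stiffness k and the variance of the bead's position satisfy k⟨x²⟩ = kBT, so a prior estimate of k converts the detector's voltage variance into a length-per-volt factor. A minimal sketch with synthetic data follows; the simulated parameters and the function name are illustrative assumptions, not part of the published code.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def beta_from_variance(signal_volts, stiffness, temperature=295.0):
    """Thermal-noise (Boltzmann statistics) calibration factor in m/V.
    Equipartition gives stiffness * <x^2> = kB * T, so a prior trap
    stiffness turns the detector's voltage variance into a length scale."""
    var_v = np.var(signal_volts)  # detector-signal variance in V^2
    return np.sqrt(KB * temperature / (stiffness * var_v))

# Synthetic bead signal: Gaussian positions in a trap of stiffness
# k = 1e-6 N/m, seen by a detector whose true sensitivity is 2e-7 m/V.
rng = np.random.default_rng(0)
k_trap, beta_true = 1e-6, 2e-7
x_m = rng.normal(0.0, np.sqrt(KB * 295.0 / k_trap), size=200_000)  # metres
beta = beta_from_variance(x_m / beta_true, stiffness=k_trap)       # recover m/V
```

    With enough samples the recovered factor converges to the detector's true sensitivity; the published code additionally derives β from the MSD, the VAF, and the PSD corner frequency for cross-checking.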

  10. A dosimetry study comparing NCS report-5, IAEA TRS-381, AAPM TG-51 and IAEA TRS-398 in three clinical electron beam energies

    NASA Astrophysics Data System (ADS)

    Palmans, Hugo; Nafaa, Laila; de Patoul, Nathalie; Denis, Jean-Marc; Tomsej, Milan; Vynckier, Stefaan

    2003-05-01

    New codes of practice for reference dosimetry in clinical high-energy photon and electron beams have been published recently, to replace the air kerma based codes of practice that have governed the dosimetry of these beams for the past twenty years. In the present work, we compared dosimetry based on the two most widespread absorbed dose based recommendations (AAPM TG-51 and IAEA TRS-398) with two air kerma based recommendations (NCS report-5 and IAEA TRS-381). Measurements were performed in three clinical electron beam energies using two NE2571-type cylindrical chambers, two Markus-type plane-parallel chambers and two NACP-02-type plane-parallel chambers. Dosimetry based on direct calibrations of all chambers in 60Co was investigated, as well as dosimetry based on cross-calibrations of plane-parallel chambers against a cylindrical chamber in a high-energy electron beam. Furthermore, 60Co perturbation factors for plane-parallel chambers were derived. It is shown that the use of 60Co calibration factors could result in deviations of more than 2% for plane-parallel chambers between the old and new codes of practice, whereas the use of cross-calibration factors, which is the first recommendation in the new codes, reduces the differences to less than 0.8% for all situations investigated here. The results thus show that neither the chamber-to-chamber variations nor the obtained absolute dose values are significantly altered by changing from air kerma based dosimetry to absorbed dose based dosimetry when using calibration factors obtained from the Laboratory for Standard Dosimetry, Ghent, Belgium. The values of the 60Co perturbation factor for plane-parallel chambers (katt·km for the air kerma based and pwall for the absorbed dose based codes of practice) obtained from comparing the results based on 60Co calibrations and cross-calibrations agree, within the experimental uncertainties, with the results of other investigators.

  11. Reflector automatic acquisition and pointing based on auto-collimation theodolite.

    PubMed

    Luo, Jun; Wang, Zhiqian; Wen, Zhuoman; Li, Mingzhu; Liu, Shaojin; Shen, Chengwu

    2018-01-01

    An auto-collimation theodolite (ACT) for reflector automatic acquisition and pointing is designed based on the principle of autocollimators and theodolites. First, the principle of auto-collimation and theodolites is reviewed, and then the coaxial ACT structure is developed. Subsequently, the acquisition and pointing strategies for reflector measurements are presented, which first quickly acquires the target over a wide range and then points the laser spot to the charge coupled device zero position. Finally, experiments are conducted to verify the acquisition and pointing performance, including the calibration of the ACT, the comparison of the acquisition mode and pointing mode, and the accuracy measurement in horizontal and vertical directions. In both directions, a measurement accuracy of ±3″ is achieved. The presented ACT is suitable for automatic pointing and monitoring the reflector over a small scanning area and can be used in a wide range of applications such as bridge structure monitoring and cooperative target aiming.

  13. DOVIS: an implementation for high-throughput virtual screening using AutoDock.

    PubMed

    Zhang, Shuxing; Kumar, Kamal; Jiang, Xiaohui; Wallqvist, Anders; Reifman, Jaques

    2008-02-27

    Molecular-docking-based virtual screening is an important tool in drug discovery that is used to significantly reduce the number of possible chemical compounds to be investigated. In addition to the selection of a sound docking strategy with appropriate scoring functions, another technical challenge is to screen millions of compounds in silico in a reasonable time. To meet this challenge, it is necessary to use high-performance computing (HPC) platforms and techniques. However, the development of an integrated HPC system that makes efficient use of its elements is not trivial. We have developed an application termed DOVIS that uses AutoDock (version 3) as the docking engine and runs in parallel on a Linux cluster. DOVIS can efficiently dock large numbers (millions) of small molecules (ligands) to a receptor, screening 500 to 1,000 compounds per processor per day. Furthermore, in DOVIS the docking session is fully integrated and automated: the inputs are specified via a graphical user interface, the calculations are integrated with a Linux cluster queuing system for parallel processing, and the results can be visualized and queried. DOVIS removes most of the complexities and organizational problems associated with large-scale high-throughput virtual screening, and provides a convenient and efficient solution for AutoDock users on a Linux cluster platform.
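
    The parallelization strategy, partitioning a large ligand library across workers and collecting the scored results, can be sketched as below. A thread pool stands in for the Linux cluster queuing system, and dock() is a placeholder for a real AutoDock invocation; all names and scores here are fabricated for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def dock(ligand):
    """Placeholder for a single AutoDock run; returns (ligand, score).
    The 'score' here is fabricated for illustration only."""
    return ligand, -float(len(ligand))

def screen(ligands, workers=4):
    """Distribute ligands over workers, as a cluster scheduler would,
    and return results sorted by score (lowest = best)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(dock, ligands))
    return sorted(results, key=lambda r: r[1])

hits = screen(["aspirin", "ibuprofen", "caffeine", "paracetamol"])
```

    In the real system each worker is a cluster node running AutoDock on a chunk of the library, and the queuing system, rather than a thread pool, handles dispatch and fault tolerance.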

  14. Productive High Performance Parallel Programming with Auto-tuned Domain-Specific Embedded Languages

    DTIC Science & Technology

    2013-01-02


  15. Index to Benet Weapons Laboratory (LCWSL) Technical Reports - 1979

    DTIC Science & Technology

    1980-09-01


  16. Comparison of anterior chamber depth measurements by 3-dimensional optical coherence tomography, partial coherence interferometry biometry, Scheimpflug rotating camera imaging, and ultrasound biomicroscopy.

    PubMed

    Nakakura, Shunsuke; Mori, Etsuko; Nagatomi, Nozomi; Tabuchi, Hitoshi; Kiuchi, Yoshiaki

    2012-07-01

    To evaluate the congruity of anterior chamber depth (ACD) measurements using 4 devices. Saneikai Tsukazaki Hospital, Himeji City, Japan. Comparative case series. In 1 eye of 42 healthy participants, the ACD was measured by 3-dimensional corneal and anterior segment optical coherence tomography (CAS-OCT), partial coherence interferometry (PCI), Scheimpflug imaging, and ultrasound biomicroscopy (UBM). The differences between the measurements were evaluated by 2-way analysis of variance and post hoc analysis. Agreement between the measurements was evaluated using Bland-Altman analysis. To evaluate the true ACD using PCI, the automatically calculated ACD minus the central corneal thickness measured by CAS-OCT was defined as PCI true. Two ACD measurements were also taken with CAS-OCT. The mean ACD was 3.72 ± 0.23 mm (SD) (PCI), 3.18 ± 0.23 mm (PCI true), 3.24 ± 0.25 mm (Scheimpflug), 3.03 ± 0.25 mm (UBM), 3.14 ± 0.24 mm (CAS-OCT auto), and 3.12 ± 0.24 mm (CAS-OCT manual). Significant differences were observed between the PCI, Scheimpflug, and UBM measurements and those of the other methods. Post hoc analysis showed no significant differences between PCI true and CAS-OCT auto or between CAS-OCT auto and CAS-OCT manual. Strong correlations were observed between all measurements; however, Bland-Altman analysis showed good agreement only between PCI true and Scheimpflug imaging and between CAS-OCT auto and CAS-OCT manual. The ACD measurements obtained from PCI biometry, Scheimpflug imaging, CAS-OCT, and UBM were significantly different and not interchangeable, except between PCI true and CAS-OCT auto and between CAS-OCT auto and CAS-OCT manual. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  17. Can we still dream when the mind is blank? Sleep and dream mentations in auto-activation deficit.

    PubMed

    Leu-Semenescu, Smaranda; Uguccioni, Ginevra; Golmard, Jean-Louis; Czernecki, Virginie; Yelnik, Jerome; Dubois, Bruno; Forgeot d'Arc, Baudouin; Grabli, David; Levy, Richard; Arnulf, Isabelle

    2013-10-01

    Bilateral damage to the basal ganglia causes auto-activation deficit, a neuropsychological syndrome characterized by striking apathy, with a loss of self-driven behaviour that is partially reversible with external stimulation. Some patients with auto-activation deficit also experience a mental emptiness, which is defined as an absence of any self-reported thoughts. We asked whether this deficit in spontaneous activation of mental processing may be reversed during REM sleep, when dreaming activity is potentially elicited by bottom-up brainstem stimulation on the cortex. Sleep and video monitoring over two nights and cognitive tests were performed on 13 patients with auto-activation deficit secondary to bilateral striato-pallidal lesions and 13 healthy subjects. Dream mentations were collected from home diaries and after forced awakenings in non-REM and REM sleep. The home diaries were blindly analysed for length, complexity and bizarreness. A mental blank during wakefulness was complete in six patients and partial in one patient. Four (31%) patients with auto-activation deficit (versus 92% of control subjects) reported mentations when awakened from REM sleep, even when they demonstrated a mental blank during the daytime (n = 2). However, the patients' dream reports were infrequent, short, devoid of any bizarre or emotional elements and tended to be less complex than the dream mentations of control subjects. The sleep duration, continuity and stages were similar between the groups, except for a striking absence of sleep spindles in 6 of 13 patients with auto-activation deficit, despite an intact thalamus. The presence of spontaneous dreams in REM sleep in the absence of thoughts during wakefulness in patients with auto-activation deficit supports the idea that simple dream imagery is generated by brainstem stimulation and is sent to the sensory cortex. 
However, the lack of complexity in these dream mentations suggests that the full dreaming process (scenario, emotions, etc.) requires these sensations to be interpreted by higher-order cortical areas. The absence of sleep spindles in patients with localized basal ganglia lesions highlights the role of the pallidum and striatum in spindling activity during non-REM sleep.

  18. A flux calibration device for the SuperNova Integral Field Spectrograph (SNIFS)

    NASA Astrophysics Data System (ADS)

    Lombardo, Simona; Aldering, Greg; Hoffmann, Akos; Kowalski, Marek; Kuesters, Daniel; Reif, Klaus; Rigault, Michael

    2014-07-01

    Observational cosmology employing optical surveys often requires precise flux calibration. In this context we present the SNIFS Calibration Apparatus (SCALA), a flux calibration system developed for the SuperNova Integral Field Spectrograph (SNIFS) operating at the University of Hawaii 2.2 m telescope. SCALA consists of a hexagonal array of 18 small parabolic mirrors distributed over the face of, and feeding parallel light to, the telescope entrance pupil. The mirrors are illuminated by integrating spheres and a wavelength-tunable (from UV to IR) light source, generating light beams with opening angles of 1°. These nearly parallel beams are flat and flux-calibrated at a subpercent level, enabling us to calibrate the "telescope + SNIFS" system at the required precision.

  19. Tandem High-Dose Chemotherapy and Autologous Stem Cell Transplantation for High-Grade Gliomas in Children and Adolescents

    PubMed Central

    2017-01-01

    With the aim to investigate the outcome of tandem high-dose chemotherapy and autologous stem cell transplantation (HDCT/auto-SCT) for high-grade gliomas (HGGs), we retrospectively reviewed the medical records of 30 patients with HGGs (16 glioblastomas, 7 anaplastic astrocytomas, and 7 other HGGs) treated between 2006 and 2015. Gross or near total resection was possible in 11 patients. Front-line treatment after surgery was radiotherapy (RT) in 14 patients and chemotherapy in the remaining 16 patients, including 3 patients less than 3 years of age. Eight of 12 patients who remained progression free and 5 of the remaining 18 patients who experienced progression during induction treatment underwent the first HDCT/auto-SCT with the carboplatin + thiotepa + etoposide (CTE) regimen, and 11 of them proceeded to the second HDCT/auto-SCT with the cyclophosphamide + melphalan (CyM) regimen. One patient died from hepatic veno-occlusive disease (VOD) during the second HDCT/auto-SCT; otherwise, toxicities were manageable. Four patients in complete response (CR) and 3 of 7 patients in partial response (PR) or second PR at the first HDCT/auto-SCT remained event free; however, 2 patients with progressive tumor experienced progression again. The probability of 3-year overall survival (OS) after the first HDCT/auto-SCT in the 11 patients in CR, PR, or second PR was 58.2% ± 16.9%. Tumor status at the first HDCT/auto-SCT was the only significant factor for outcome after HDCT/auto-SCT. There was no difference in survival between glioblastoma and other HGGs. This study suggests that the outcome of HGGs in children and adolescents after HDCT/auto-SCT is encouraging if the patient can achieve CR or PR before HDCT/auto-SCT. PMID:28049229

  20. Calibration of EBT2 film by the PDD method with scanner non-uniformity correction.

    PubMed

    Chang, Liyun; Chui, Chen-Shou; Ding, Hueisch-Jy; Hwang, Ing-Ming; Ho, Sheng-Yow

    2012-09-21

    The EBT2 film together with a flatbed scanner is a convenient dosimetry QA tool for verification of clinical radiotherapy treatments. However, it suffers from a relatively high degree of uncertainty and a tedious film calibration process for every new lot of films, including cutting the films into several small pieces, exposing them with different doses, restoring them back, and selecting the proper region of interest (ROI) for each piece for curve fitting. In this work, we present a percentage depth dose (PDD) method that can accurately calibrate the EBT2 film, together with a scanner non-uniformity correction, and provide an easy way to perform film dosimetry. All films were scanned before and after irradiation in one of two homemade 2 mm thick acrylic frames (one portrait and the other landscape) located at a fixed position on the scan bed of an Epson 10000XL scanner. After the pre-irradiation scan, the film was placed parallel to the beam central axis and sandwiched between six polystyrene plates (5 cm thick each), followed by irradiation with a 20 × 20 cm² 6 MV photon beam. Two different beam-on times were used on two different films to deliver doses to the film ranging from 32 to 320 cGy. After the post-irradiation scan, the net optical densities for a total of 235 points on the beam central axis of the films were auto-extracted and compared with the corresponding depth doses, calculated from measurements with a 0.6 cc Farmer chamber and the related PDD table, to perform the curve fitting. The portrait film orientation was selected for routine calibration, since the central beam axis on the film is then parallel to the scanning direction, where no non-uniformity correction is needed (Ferreira et al 2009 Phys. Med. Biol. 54 1073-85). To perform the scanner non-uniformity calibration, the cross-beam profiles of the film were analysed by referencing the measured profiles from a Profiler™.
Finally, to verify our method, the films were exposed to 60° physical wedge fields and composite fields, and their relative dose profiles were compared with those from water phantom measurements. The fitting uncertainty was less than 0.5% owing to the many calibration points, and the overall calibration uncertainty was within 3% for doses above 50 cGy when the average of four films was used for the calibration. According to our study, the non-uniformity calibration factor was found to be independent of the given dose for the EBT2 film, and the relative dose differences between the profiles measured by the film and by the Profiler were within 1.5% after applying the non-uniformity correction. For the verification tests, the relative dose differences between the measurements by films and in the water phantom, when the average of three films was used, were generally within 3% for the 60° wedge fields and the composite fields. In conclusion, our method is convenient, time-saving and cost-effective, since no film cutting is needed and only two films with two exposures are required.
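
    The core of the PDD calibration, pairing each on-axis net optical density with the dose implied by the chamber-measured PDD and then fitting a sensitometric curve, can be sketched as follows. The polynomial fit, the synthetic film response, and all names are illustrative assumptions, not the paper's actual fitting function.

```python
import numpy as np

def pdd_calibration(net_od, depth_cm, d_max_dose, pdd_table, deg=3):
    """Fit a dose-vs-netOD calibration curve from PDD data: every point on
    the beam central axis pairs a measured net optical density with the
    dose d_max_dose * PDD(depth) / 100. A polynomial stands in for the
    actual sensitometric fitting function."""
    dose = d_max_dose * np.interp(depth_cm, pdd_table[:, 0], pdd_table[:, 1]) / 100.0
    return np.poly1d(np.polyfit(net_od, dose, deg))

# Illustrative PDD table (depth in cm, PDD in %) for a 6 MV beam, and a
# synthetic film response netOD = 0.004 * dose^0.9.
pdd = np.array([[1.5, 100.0], [5.0, 86.0], [10.0, 67.0], [15.0, 52.0], [20.0, 40.0]])
depths = np.linspace(1.5, 20.0, 50)
doses = 300.0 * np.interp(depths, pdd[:, 0], pdd[:, 1]) / 100.0
net_od = 0.004 * doses ** 0.9
cal = pdd_calibration(net_od, depths, d_max_dose=300.0, pdd_table=pdd)
```

    Because a single beam-on time yields many depth points, one exposure supplies a dense set of calibration pairs, which is why the paper needs only two films with two exposures instead of many cut pieces.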

  1. SU-F-J-93: Automated Segmentation of High-Resolution 3D WholeBrain Spectroscopic MRI for Glioblastoma Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, E; Shu, H; Cordova, J

    Purpose: We report on an automated segmentation algorithm for defining radiation therapy target volumes using spectroscopic MR images (sMRI) acquired at a nominal voxel resolution of 100 microliters. Methods: Whole-brain sMRI combining 3D echo-planar spectroscopic imaging, generalized auto-calibrating partially-parallel acquisitions, and elliptical k-space encoding was conducted on a 3T MRI scanner with a 32-channel head coil array. Metabolite maps generated include choline (Cho), creatine (Cr), and N-acetylaspartate (NAA), as well as Cho/NAA, Cho/Cr, and NAA/Cr ratio maps. Automated segmentation was achieved by concomitantly considering sMRI metabolite maps with standard contrast-enhancing (CE) imaging in a pipeline that first uses the water signal for skull stripping. Subsequently, an initial blob of the tumor region is identified by searching for regions of FLAIR abnormalities that also display reduced NAA activity, using a mean ratio correlation and morphological filters. These regions are used as the starting point for a geodesic level-set refinement that adapts the initial blob to the fine details specific to each metabolite. Results: Accuracy of the segmentation model was tested on a cohort of 12 patients with sMRI datasets acquired pre-, mid- and post-treatment, providing a broad range of enhancement patterns. Compared to classical imaging, where heterogeneity in tumor appearance and shape across patients posed a greater challenge to the algorithm, regions of abnormal activity were easily detected in the sMRI metabolite maps when combining the detail available in standard imaging with the local enhancement produced by the metabolites. Results can be imported into treatment planning, leading in general to an increase in the target volumes (GTV60) when using sMRI+CE MRI compared to standard CE MRI alone.
Conclusion: Integration of automated segmentation of sMRI metabolite maps into planning is feasible and will likely streamline acceptance of this new acquisition modality in clinical practice.
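
    The initial-blob step of such a pipeline, intersecting FLAIR-abnormal voxels with reduced-NAA voxels and cleaning the result with morphological filters, might look like the following sketch on synthetic volumes. The thresholds and function names are illustrative assumptions, not the published algorithm's values.

```python
import numpy as np
from scipy import ndimage

def candidate_tumor_mask(flair, naa, flair_z=2.0, naa_frac=0.6):
    """Initial tumor blob: voxels with abnormally high FLAIR signal
    AND reduced NAA, cleaned with morphological opening. The largest
    connected component is kept as the seed for level-set refinement."""
    flair_abn = flair > flair.mean() + flair_z * flair.std()
    naa_low = naa < naa_frac * naa.mean()
    mask = ndimage.binary_opening(flair_abn & naa_low, iterations=1)
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Synthetic 3D volumes with a bright-FLAIR / low-NAA sphere as the "tumor".
z, y, x = np.ogrid[:32, :32, :32]
sphere = (z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 36
rng = np.random.default_rng(0)
flair = rng.normal(1.0, 0.05, (32, 32, 32)) + 2.0 * sphere
naa = rng.normal(1.0, 0.05, (32, 32, 32)) - 0.8 * sphere
seed = candidate_tumor_mask(flair, naa)
```

    In the actual pipeline this seed blob is then refined by a geodesic level set that adapts the boundary to each metabolite map.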

  2. Autologous Transplantation in Follicular Lymphoma with Early Therapy Failure: A National LymphoCare Study and Center for International Blood and Marrow Transplant Research Analysis.

    PubMed

    Casulo, Carla; Friedberg, Jonathan W; Ahn, Kwang W; Flowers, Christopher; DiGilio, Alyssa; Smith, Sonali M; Ahmed, Sairah; Inwards, David; Aljurf, Mahmoud; Chen, Andy I; Choe, Hannah; Cohen, Jonathon; Copelan, Edward; Farooq, Umar; Fenske, Timothy S; Freytes, Cesar; Gaballa, Sameh; Ganguly, Siddhartha; Jethava, Yogesh; Kamble, Rammurti T; Kenkre, Vaishalee P; Lazarus, Hillard; Lazaryan, Aleksandr; Olsson, Richard F; Rezvani, Andrew R; Rizzieri, David; Seo, Sachiko; Shah, Gunjan L; Shah, Nina; Solh, Melham; Sureda, Anna; William, Basem; Cumpston, Aaron; Zelenetz, Andrew D; Link, Brian K; Hamadani, Mehdi

    2018-06-01

    Patients with follicular lymphoma (FL) experiencing early therapy failure (ETF) within 2 years of frontline chemoimmunotherapy have poor overall survival (OS). We analyzed data from the Center for International Blood and Marrow Transplant Research (CIBMTR) and the National LymphoCare Study (NLCS) to determine whether autologous hematopoietic cell transplant (autoHCT) can improve outcomes in this high-risk FL subgroup. ETF was defined as failure to achieve at least partial response after frontline chemoimmunotherapy or lymphoma progression within 2 years of frontline chemoimmunotherapy. We identified 2 groups: the non-autoHCT cohort (patients from the NLCS with ETF not undergoing autoHCT) and the autoHCT cohort (CIBMTR patients with ETF undergoing autoHCT). All patients received rituximab-based chemotherapy as frontline treatment; 174 non-autoHCT patients and 175 autoHCT patients were identified and analyzed. There was no difference in 5-year OS between the 2 groups (60% versus 67%, respectively; P = .16). A planned subgroup analysis showed that patients with ETF receiving autoHCT soon after treatment failure (≤1 year of ETF; n = 123) had higher 5-year OS than those without autoHCT (73% versus 60%, P = .05). On multivariate analysis, early use of autoHCT was associated with significantly reduced mortality (hazard ratio, .63; 95% confidence interval, .42 to .94; P = .02). Patients with FL experiencing ETF after frontline chemoimmunotherapy lack optimal therapy. We demonstrate improved OS when receiving autoHCT within 1 year of treatment failure. Results from this unique collaboration between the NLCS and CIBMTR support consideration of early consolidation with autoHCT in select FL patients experiencing ETF. Copyright © 2017 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.

  3. Autoantibody Signaling in Pemphigus Vulgaris: Development of an Integrated Model

    PubMed Central

    Sajda, Thomas; Sinha, Animesh A.

    2018-01-01

    Pemphigus vulgaris (PV) is an autoimmune skin blistering disease affecting both cutaneous and mucosal epithelia. Blister formation in PV is known to result from the binding of autoantibodies (autoAbs) to keratinocyte antigens. The primary antigenic targets of pathogenic autoAbs are known to be desmoglein 3 and, to a lesser extent, desmoglein 1, cadherin-family proteins that partially comprise the desmosome, a protein structure responsible for maintaining cell adhesion, although additional autoAbs, whose role in blister formation is still unclear, are also known to be present in PV patients. Nevertheless, there remain large gaps in knowledge concerning the precise mechanisms through which autoAb binding induces blister formation. Consequently, the primary therapeutic interventions for PV focus on systemic immunosuppression, whose side effects represent a significant health risk to patients. In an effort to identify novel, disease-specific therapeutic targets, a multitude of studies attempting to elucidate the pathogenic mechanisms downstream of autoAb binding have led to significant advancements in the understanding of autoAb-mediated blister formation. Despite this enhanced characterization of disease processes, a satisfactory explanation of autoAb-induced acantholysis still does not exist. Here, we carefully review the literature investigating the pathogenic disease mechanisms in PV and, taking into account the full scope of results from these studies, provide a novel, comprehensive theory of blister formation in PV. PMID:29755451

  4. Effect of Group-III precursors on unintentional gallium incorporation during epitaxial growth of InAlN layers by metalorganic chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Kim, Jeomoh; Ji, Mi-Hee; Detchprohm, Theeradetch; Dupuis, Russell D.; Fischer, Alec M.; Ponce, Fernando A.; Ryou, Jae-Hyun

    2015-09-01

    Unintentional incorporation of gallium (Ga) in InAlN layers grown with different molar flow rates of Group-III precursors by metalorganic chemical vapor deposition has been investigated experimentally. The Ga mole fraction in the InAl(Ga)N layer increased significantly with the trimethylindium (TMIn) flow rate, while the trimethylaluminum flow rate controlled the Al mole fraction. The evaporation of metallic Ga from the liquid-phase eutectic system formed between the pyrolized In from injected TMIn and pre-deposited metallic Ga was responsible for the Ga auto-incorporation into the InAl(Ga)N layer. A theoretical calculation of the equilibrium vapor pressure of liquid-phase Ga and the effective partial pressure of Group-III precursors, based on the growth parameters used in this study, confirms the influence of Group-III precursors on Ga auto-incorporation. More Ga atoms can evaporate from the liquid-phase Ga on the surrounding surfaces in the growth chamber, and significant Ga auto-incorporation can then occur, because the equilibrium vapor pressure of Ga is comparable to the effective partial pressure of the input Group-III precursors during growth of the InAl(Ga)N layer.

  5. Internal motions of HII regions and giant HII regions

    NASA Technical Reports Server (NTRS)

    Chu, You-Hua; Kennicutt, Robert C., Jr.

    1994-01-01

    We report new echelle observations of the kinematics of 30 HII regions in the Large Magellanic Cloud (LMC), including the 30 Doradus giant HII region. All of the HII regions possess supersonic velocity dispersions, which can be attributed to a combination of turbulent motions and discrete velocity splitting produced by stellar winds and/or embedded supernova remnants (SNRs). The core of 30 Dor is unique, with a complex velocity structure that parallels its chaotic optical morphology. We use our calibrated echelle data to measure the physical properties and energetic requirements of these velocity structures. The most spectacular structures in 30 Dor are several fast expanding shells, which appear to be produced at least partially by SNRs.

  6. Automatic Camera Orientation and Structure Recovery with Samantha

    NASA Astrophysics Data System (ADS)

    Gherardi, R.; Toldo, R.; Garro, V.; Fusiello, A.

    2011-09-01

    SAMANTHA is a software system capable of computing camera orientation and structure recovery from a sparse block of casual images without human intervention. It can process either calibrated or uncalibrated images; in the latter case an autocalibration routine is run. Pictures are organized into a hierarchical tree with single images as leaves and partial reconstructions as internal nodes. The method proceeds bottom-up until it reaches the root node, which corresponds to the final result. This framework is one order of magnitude faster than sequential approaches, inherently parallel, and less sensitive to the error accumulation that causes drift. We have verified the quality of our reconstructions both qualitatively, by producing compelling point clouds, and quantitatively, by comparing them with laser scans serving as ground truth.

  7. Balancing Exploration, Uncertainty Representation and Computational Time in Many-Objective Reservoir Policy Optimization

    NASA Astrophysics Data System (ADS)

    Zatarain-Salazar, J.; Reed, P. M.; Quinn, J.; Giuliani, M.; Castelletti, A.

    2016-12-01

    As we confront the challenges of managing river basin systems with a large number of reservoirs and increasingly uncertain tradeoffs impacting their operations (due to, e.g. climate change, changing energy markets, population pressures, ecosystem services, etc.), evolutionary many-objective direct policy search (EMODPS) solution strategies will need to address the computational demands associated with simulating more uncertainties and therefore optimizing over increasingly noisy objective evaluations. Diagnostic assessments of state-of-the-art many-objective evolutionary algorithms (MOEAs) to support EMODPS have highlighted that search time (or number of function evaluations) and auto-adaptive search are key features for successful optimization. Furthermore, auto-adaptive MOEA search operators are themselves sensitive to having a sufficient number of function evaluations to learn successful strategies for exploring complex spaces and for escaping from local optima when stagnation is detected. Fortunately, recent parallel developments allow coordinated runs that enhance auto-adaptive algorithmic learning and can handle scalable and reliable search with limited wall-clock time, but at the expense of the total number of function evaluations. In this study, we analyze this tradeoff between parallel coordination and depth of search using different parallelization schemes of the Multi-Master Borg on a many-objective stochastic control problem. We also consider the tradeoff between better representing uncertainty in the stochastic optimization, and simplifying this representation to shorten the function evaluation time and allow for greater search. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple competing objectives for hydropower production, urban water supply, recreation and environmental flows need to be balanced. 
Our results provide guidance for balancing exploration, uncertainty, and computational demands when using the EMODPS framework to discover key tradeoffs within the LSRB system.

  8. How does higher frequency monitoring data affect the calibration of a process-based water quality model?

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, Leah; Helliwell, Rachel

    2015-04-01

    Process-based catchment water quality models are increasingly used as tools to inform land management. However, for such models to be reliable they need to be well calibrated and shown to reproduce key catchment processes. Calibration can be challenging for process-based models, which tend to be complex and highly parameterised. Calibrating a large number of parameters generally requires a large amount of monitoring data, spanning all hydrochemical conditions. However, regulatory agencies and research organisations generally only sample at a fortnightly or monthly frequency, even in well-studied catchments, often missing peak flow events. The primary aim of this study was therefore to investigate how the quality and uncertainty of model simulations produced by a process-based, semi-distributed catchment model, INCA-P (the INtegrated CAtchment model of Phosphorus dynamics), were improved by calibration to higher frequency water chemistry data. Two model calibrations were carried out for a small rural Scottish catchment: one using 18 months of daily total dissolved phosphorus (TDP) concentration data, another using a fortnightly dataset derived from the daily data. To aid comparability, calibrations were carried out automatically using the Markov Chain Monte Carlo - DiffeRential Evolution Adaptive Metropolis (MCMC-DREAM) algorithm. Calibration to daily data resulted in improved simulation of peak TDP concentrations and improved model performance statistics. Parameter-related uncertainty in simulated TDP was large when fortnightly data was used for calibration, with a 95% credible interval of 26 μg/l. This uncertainty is comparable in size to the difference between Water Framework Directive (WFD) chemical status classes, and would therefore make it difficult to use this calibration to predict shifts in WFD status. The 95% credible interval reduced markedly with the higher frequency monitoring data, to 6 μg/l. 
The number of parameters that could be reliably auto-calibrated was lower for the fortnightly data, with a physically unrealistic TDP simulation being produced when too many parameters were allowed to vary during model calibration. Parameters should therefore not be varied spatially for models such as INCA-P unless there is solid evidence that this is appropriate, or a real need to do so for the model to fulfil its purpose. This study highlights the potential pitfalls of using low-frequency time series of observed water quality to calibrate complex process-based models. For reliable model calibrations to be produced, monitoring programmes need to be designed to capture system variability, in particular nutrient dynamics during high-flow events. In addition, there is a need for simpler models, so that all model parameters can be included in auto-calibration and uncertainty analysis, and to reduce the data needed for calibration.
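
The uncertainty metric used above, the width of a 95% credible interval, is simply the central percentile range of the posterior samples. A minimal sketch with synthetic samples; the means and spreads below are illustrative, chosen only to echo the wide-versus-narrow contrast reported:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic posterior-predictive samples of TDP concentration (ug/l)
# for two hypothetical calibrations (values are illustrative only).
tdp_fortnightly = rng.normal(loc=20.0, scale=6.6, size=10_000)
tdp_daily = rng.normal(loc=20.0, scale=1.5, size=10_000)

def credible_interval_width(samples, level=0.95):
    """Width of the central credible interval at the given level."""
    lo, hi = np.percentile(samples, [100 * (1 - level) / 2,
                                     100 * (1 + level) / 2])
    return hi - lo

print(round(credible_interval_width(tdp_fortnightly), 1))  # wide interval
print(round(credible_interval_width(tdp_daily), 1))        # narrow interval
```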

  9. Calibration of the ARID robot

    NASA Technical Reports Server (NTRS)

    Doty, Keith L

    1992-01-01

    The author has formulated a new, general model for specifying the kinematic properties of serial manipulators. The new model's kinematic parameters do not suffer discontinuities when nominally parallel adjacent axes deviate from exact parallelism. From this new theory the author develops a first-order, lumped-parameter calibration model for the ARID manipulator. Next, the author develops a calibration methodology for the ARID based on visual and acoustic sensing. A sensor platform, consisting of a camera and four sonars attached to the ARID end frame, performs calibration measurements. A calibration measurement consists of processing one visual frame of an accurately placed calibration image and recording four acoustic range measurements. A minimum of two measurement protocols determine the kinematics calibration model of the ARID for a particular region, assuming the joint displacements are accurately measured, the calibration surface is planar, and the kinematic parameters do not vary rapidly in the region. No theoretical or practical limitations appear to contraindicate the feasibility of the calibration method developed here.

  10. Auto-calibrated scanning-angle prism-type total internal reflection microscopy for nanometer-precision axial position determination and optional variable-illumination-depth pseudo total internal reflection microscopy

    DOEpatents

    Fang, Ning; Sun, Wei

    2015-04-21

    A method, apparatus, and system for improved VA-TIRFM microscopy. The method comprises automatically controlled calibration of one or more laser sources by precise control of the presentation of each laser relative to a sample, in small incremental changes of incident angle over a range of critical TIR angles. The calibration then allows precise scanning of the sample at any of the calibrated angles for higher, more accurate resolution and better reconstruction of the scans for super-resolution reconstruction of the sample. Optionally, the system can be controlled to present the excitation laser at sub-critical incident angles for pseudo-TIRFM. Optionally, both above-critical-angle and sub-critical-angle measurements can be accomplished with the same system.
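
The physics behind calibrated-angle TIRF can be made concrete with the standard evanescent-field penetration-depth relation d = λ / (4π √(n₁² sin²θ − n₂²)). This formula is textbook optics, not taken from the patent; the refractive indices and wavelength below are typical illustrative values:

```python
import math

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """Evanescent-field penetration depth for total internal reflection.

    Valid only above the critical angle theta_c = arcsin(n2 / n1).
    """
    theta = math.radians(theta_deg)
    term = (n1 * math.sin(theta)) ** 2 - n2 ** 2
    if term <= 0:
        raise ValueError("angle is at or below the critical angle (no TIR)")
    return wavelength_nm / (4 * math.pi * math.sqrt(term))

# Glass (n1 = 1.515) / water (n2 = 1.33) interface, 532 nm excitation.
theta_c = math.degrees(math.asin(1.33 / 1.515))   # critical angle, ~61.4 deg
for angle in (62.0, 65.0, 70.0):
    d = penetration_depth(532, 1.515, 1.33, angle)
    print(f"{angle:.0f} deg: d = {d:.0f} nm")
```

The depth shrinks rapidly as the incident angle increases past the critical angle, which is why fine, calibrated angle control translates directly into illumination-depth control.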

  11. A domain specific language for performance portable molecular dynamics algorithms

    NASA Astrophysics Data System (ADS)

    Saunders, William Robert; Grant, James; Müller, Eike Hermann

    2018-03-01

    Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.
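
The separation-of-concerns idea can be illustrated with a toy pairwise-kernel abstraction: the domain specialist supplies only the per-pair interaction, while the framework owns (and could parallelise or code-generate) the loop structure. The names below are hypothetical, not the paper's actual API:

```python
import itertools

def pairwise_loop(positions, kernel):
    """Framework side: owns the iteration structure. Here a serial double
    loop; a real backend could instead emit OpenMP, MPI or CUDA code."""
    total = 0.0
    for (i, ri), (j, rj) in itertools.combinations(enumerate(positions), 2):
        total += kernel(ri, rj)
    return total

# "Science" side: the specialist writes only the per-pair interaction.
def inverse_square(ri, rj):
    r2 = sum((a - b) ** 2 for a, b in zip(ri, rj))
    return 1.0 / r2

positions = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
print(pairwise_loop(positions, inverse_square))  # 1/1 + 1/4 + 1/5 = 1.45
```

Because the framework alone decides how pairs are enumerated, the same science code can be retargeted to different hardware without modification, which is the core of the abstraction described above.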

  12. Research on auto-calibration technology of the image plane's center of 360-degree and all round looking camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojun; Xu, Xiping

    2015-10-01

    The 360-degree and all round looking camera, being suitable for automatic analysis and judgment of the ambient environment of its carrier by image-recognition algorithms, is usually applied in the opto-electronic radar of robots and smart cars. In order to ensure the stability and consistency of image processing results in mass production, it is necessary to make sure that the centers of the image planes of different cameras coincide, which requires calibrating the position of each image plane's center. The traditional mechanical calibration method, and the electronic adjustment mode of inputting offsets manually, both suffer from reliance on the human eye, inefficiency, and a wide spread of errors. In this paper, an approach for auto-calibration of the image plane of this camera is presented. The image produced by the 360-degree and all round looking camera is ring-shaped, consisting of two concentric circles: a smaller inner circle and a bigger outer circle. The proposed technique exploits exactly this characteristic. By recognizing the two circles with a Hough-transform algorithm and calculating the center position, we obtain the accurate center of the image, that is, the deviation between the optic axis and the center of the image sensor. The program then configures the image sensor chip over the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice, improving productivity and guaranteeing consistent product quality.
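
The circle-detection step can be sketched with a minimal Hough-style accumulator for circle centers. This is a generic illustration in NumPy, not the paper's implementation; the radius, image size, and vote resolution are all illustrative assumptions:

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Each edge point lies at distance `radius` from the unknown center,
    so it votes along a circle of that radius around itself; the
    accumulator peak is the center estimate."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic ring: edge points on a circle of radius 20 centered at (30, 34).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
edges = [(round(30 + 20 * np.sin(a)), round(34 + 20 * np.cos(a))) for a in t]
print(hough_circle_center(edges, radius=20, shape=(64, 64)))
```

In the real camera both concentric circles would be detected, and the recovered center, compared against the sensor center, yields the offset to program into the sensor over I2C.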

  13. Calibration of Wide-Field Deconvolution Microscopy for Quantitative Fluorescence Imaging

    PubMed Central

    Lee, Ji-Sook; Wee, Tse-Luen (Erika); Brown, Claire M.

    2014-01-01

    Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure. PMID:24688321
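
The slope comparison described above amounts to a least-squares line fit of measured mean intensity against the manufacturer's relative intensity, after normalisation. A sketch with synthetic numbers (all intensities below are illustrative, not the study's data):

```python
import numpy as np

# Manufacturer's relative microsphere intensities, plus synthetic
# measured mean intensities for the two data sets (illustrative only).
relative = np.array([0.003, 0.01, 0.03, 0.1, 0.3, 1.0])
original = 900.0 * relative + 40.0        # wide-field means (a.u.)
deconvolved = 5200.0 * relative + 110.0   # deconvolved means (a.u.)

def normalized_slope(measured, relative):
    """Fit a line to measured vs. relative intensity after normalizing
    the measured values to their maximum."""
    slope, _intercept = np.polyfit(relative, measured / measured.max(), 1)
    return slope

s_orig = normalized_slope(original, relative)
s_deconv = normalized_slope(deconvolved, relative)
print(round(s_orig, 3), round(s_deconv, 3))
```

If deconvolution preserves relative quantitation, the two normalized slopes should agree closely, which is the criterion the study uses.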

  14. Concerted regulation of ISWI by an autoinhibitory domain and the H4 N-terminal tail

    PubMed Central

    Ludwigsen, Johanna; Pfennig, Sabrina; Singh, Ashish K; Schindler, Christina; Harrer, Nadine; Forné, Ignasi; Zacharias, Martin; Mueller-Planitz, Felix

    2017-01-01

    ISWI-family nucleosome remodeling enzymes need the histone H4 N-terminal tail to mobilize nucleosomes. Here we mapped the H4-tail binding pocket of ISWI. Surprisingly the binding site was adjacent to but not overlapping with the docking site of an auto-regulatory motif, AutoN, in the N-terminal region (NTR) of ISWI, indicating that AutoN does not act as a simple pseudosubstrate as suggested previously. Rather, AutoN cooperated with a hitherto uncharacterized motif, termed AcidicN, to confer H4-tail sensitivity and discriminate between DNA and nucleosomes. A third motif in the NTR, ppHSA, was functionally required in vivo and provided structural stability by clamping the NTR to Lobe 2 of the ATPase domain. This configuration is reminiscent of Chd1 even though Chd1 contains an unrelated NTR. Our results shed light on the intricate structural and functional regulation of ISWI by the NTR and uncover surprising parallels with Chd1. DOI: http://dx.doi.org/10.7554/eLife.21477.001 PMID:28109157

  15. Experimental verification of internal parameter in magnetically coupled boost used as PV optimizer in parallel association

    NASA Astrophysics Data System (ADS)

    Sawicki, Jean-Paul; Saint-Eve, Frédéric; Petit, Pierre; Aillerie, Michel

    2017-02-01

    This paper presents experimental results verifying a formula for computing the duty cycle of the pulse-width-modulation control of a DC-DC converter designed and realized in our laboratory. This converter, called the Magnetically Coupled Boost (MCB), is sized to step up the voltage of a single photovoltaic module to supply grid inverters directly. The duty-cycle formula is checked first by identifying an internal parameter, the auto-transformer ratio, and second by checking the stability of the operating point on the photovoltaic-module side. Consideration of the nature of the generator source and of the load connected to the converter suggests additional experiments to decide whether the auto-transformer ratio parameter can be used with a fixed value or, on the contrary, requires an adaptive value. Effects of load variations on converter behavior, and the impact of possible shading of the photovoltaic module, are also discussed, with the aim of designing robust control laws for the parallel association that compensate for unwanted effects due to output-voltage coupling.

  16. Model inversion via multi-fidelity Bayesian optimization: a new paradigm for parameter estimation in haemodynamics, and beyond.

    PubMed

    Perdikaris, Paris; Karniadakis, George Em

    2016-05-01

    We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space, and the efficient pursuit to identify global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration versus exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. © 2016 The Author(s).
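
The core loop of such a framework, namely fitting a Gaussian-process surrogate and then using the predictive posterior variance to trade exploration against exploitation, can be sketched in a few lines. This is a generic single-fidelity sketch with a lower-confidence-bound rule, not the authors' multi-fidelity framework; the kernel length-scale, the exploration weight of 2, and the toy objective are all illustrative assumptions:

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """GP regression posterior mean and variance with an RBF kernel."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    solve = np.linalg.solve(K, Ks)               # K^{-1} Ks
    mean = solve.T @ y_train
    var = np.diag(rbf(x_test, x_test)) - np.sum(Ks * solve, axis=0)
    return mean, np.maximum(var, 0.0)

def objective(x):            # stand-in for an expensive model run
    return (x - 0.65) ** 2

grid = np.linspace(0, 1, 201)
x_obs = np.array([0.1, 0.5, 0.9])                # initial designs
y_obs = objective(x_obs)
for _ in range(10):
    mean, var = gp_posterior(x_obs, y_obs, grid)
    # Lower confidence bound: low predicted mean OR high uncertainty.
    x_next = grid[np.argmin(mean - 2.0 * np.sqrt(var))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))
print(round(float(x_obs[np.argmin(y_obs)]), 2))
```

The acquisition rule is what keeps the number of expensive evaluations small: new samples go either where the surrogate predicts a good value or where it is still uncertain.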

  17. Model inversion via multi-fidelity Bayesian optimization: a new paradigm for parameter estimation in haemodynamics, and beyond

    PubMed Central

    Perdikaris, Paris; Karniadakis, George Em

    2016-01-01

    We present a computational framework for model inversion based on multi-fidelity information fusion and Bayesian optimization. The proposed methodology targets the accurate construction of response surfaces in parameter space, and the efficient pursuit to identify global optima while keeping the number of expensive function evaluations at a minimum. We train families of correlated surrogates on available data using Gaussian processes and auto-regressive stochastic schemes, and exploit the resulting predictive posterior distributions within a Bayesian optimization setting. This enables a smart adaptive sampling procedure that uses the predictive posterior variance to balance the exploration versus exploitation trade-off, and is a key enabler for practical computations under limited budgets. The effectiveness of the proposed framework is tested on three parameter estimation problems. The first two involve the calibration of outflow boundary conditions of blood flow simulations in arterial bifurcations using multi-fidelity realizations of one- and three-dimensional models, whereas the last one aims to identify the forcing term that generated a particular solution to an elliptic partial differential equation. PMID:27194481

  18. A Co-Adaptive Brain-Computer Interface for End Users with Severe Motor Impairment

    PubMed Central

    Faller, Josef; Scherer, Reinhold; Costa, Ursula; Opisso, Eloy; Medina, Josep; Müller-Putz, Gernot R.

    2014-01-01

    Co-adaptive training paradigms for event-related desynchronization (ERD) based brain-computer interfaces (BCI) have proven effective for healthy users. As of yet, it is not clear whether co-adaptive training paradigms can also benefit users with severe motor impairment. The primary goal of our paper was to evaluate a novel cue-guided, co-adaptive BCI training paradigm with severely impaired volunteers. The co-adaptive BCI supports a non-control state, which is an important step toward intuitive, self-paced control. A secondary aim was to have the same participants operate a specifically designed self-paced BCI training paradigm based on the auto-calibrated classifier. The co-adaptive BCI analyzed the electroencephalogram from three bipolar derivations (C3, Cz, and C4) online, while the 22 end users alternately performed right hand movement imagery (MI), left hand MI and relax with eyes open (non-control state). After less than five minutes, the BCI auto-calibrated and proceeded to provide visual feedback for the MI task that could be classified better against the non-control state. The BCI continued to regularly recalibrate. In every calibration step, the system performed trial-based outlier rejection and trained a linear discriminant analysis classifier based on one auto-selected logarithmic band-power feature. In 24 minutes of training, the co-adaptive BCI worked significantly (p = 0.01) better than chance for 18 of 22 end users. The self-paced BCI training paradigm worked significantly (p = 0.01) better than chance in 11 of 20 end users. The presented co-adaptive BCI complements existing approaches in that it supports a non-control state, requires very little setup time, requires no BCI expert and works online based on only two electrodes. The preliminary results from the self-paced BCI paradigm compare favorably to previous studies, and the collected data will allow further improvement of self-paced BCI systems for disabled users. PMID:25014055
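
The feature/classifier combination described, a single logarithmic band-power feature fed to a linear discriminant, can be sketched on synthetic data. The band, sampling rate, and the simulated mu-rhythm suppression below are illustrative assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

def log_band_power(signal, fs, band):
    """Log of the mean spectral power of `signal` within (lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(power[mask].mean())

def make_trial(amplitude, fs=250, seconds=2):
    """Synthetic one-channel trial: an 11 Hz (mu-band) rhythm plus noise."""
    t = np.arange(fs * seconds) / fs
    return amplitude * np.sin(2 * np.pi * 11 * t) + rng.normal(0, 1, t.size)

# Motor imagery suppresses the mu rhythm (ERD) -> lower band power.
X = np.array([[log_band_power(make_trial(a), 250, (8, 13))]
              for a in [2.0] * 40 + [0.4] * 40])
y = np.array([0] * 40 + [1] * 40)            # 0 = rest, 1 = imagery

# Two-class LDA on the 1-D feature: pooled variance, linear rule.
m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
s2 = 0.5 * (X[y == 0].var(ddof=1) + X[y == 1].var(ddof=1))
w = (m1 - m0) / s2
b = -0.5 * float(w @ (m0 + m1))
pred = (X @ w + b > 0).astype(int)
print((pred == y).mean())                    # training accuracy
```

In the actual system this calibration is repeated regularly on incoming trials (with outlier rejection), which is what makes the paradigm co-adaptive.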

  19. InGaAs/InP SPAD photon-counting module with auto-calibrated gate-width generation and remote control

    NASA Astrophysics Data System (ADS)

    Tosi, Alberto; Ruggeri, Alessandro; Bahgat Shehata, Andrea; Della Frera, Adriano; Scarcella, Carmelo; Tisa, Simone; Giudice, Andrea

    2013-01-01

    We present a photon-counting module based on an InGaAs/InP SPAD (Single-Photon Avalanche Diode) for detecting single photons at wavelengths up to 1.7 μm. The module exploits a novel architecture for generating and calibrating the gate width, along with other functions (such as module supervision, counting and processing of detected photons, etc.). The gate width, i.e. the time interval during which the SPAD is ON, is user-programmable in the range from 500 ps to 1.5 μs by means of two different delay-generation methods implemented in an FPGA (Field-Programmable Gate Array). In order to compensate for chip-to-chip delay variation, an auto-calibration circuit picks out the combination of delays that best matches the selected gate width. The InGaAs/InP module accepts asynchronous and aperiodic signals and introduces very low timing jitter. Moreover, the photon-counting module provides other new features, such as a microprocessor for system supervision, a touch-screen for local user interface, and an Ethernet link for smart remote control. Thanks to its fully programmable and configurable architecture, the instrument provides high system flexibility and can easily meet the requirements of the many different applications requiring single-photon-level sensitivity in the near infrared with very low photon timing jitter.

  20. Final report on EURAMET.L-S21: `Supplementary comparison of parallel thread gauges'

    NASA Astrophysics Data System (ADS)

    Mudronja, Vedran; Šimunovic, Vedran; Acko, Bojan; Matus, Michael; Bánréti, Edit; István, Dicso; Thalmann, Rudolf; Lassila, Antti; Lillepea, Lauri; Bartolo Picotto, Gian; Bellotti, Roberto; Pometto, Marco; Ganioglu, Okhan; Meral, Ilker; Salgado, José Antonio; Georges, Vailleau

    2015-01-01

    The results of a comparison of parallel thread gauges between ten European countries are presented. Three thread plugs and three thread rings were calibrated in one loop. The Croatian National Laboratory for Length (HMI/FSB-LPMD) acted as the coordinator and pilot laboratory of the comparison. Thread angle, thread pitch, simple pitch diameter and pitch diameter were measured. Pitch diameters were calibrated within the 1a, 2a, 1b and 2b calibration categories in accordance with the EURAMET cg-10 calibration guide. The good agreement between the measurement results, and the differences due to the different calibration categories, are analysed in this paper. This was the first EURAMET comparison of parallel thread gauges based on the EURAMET cg-10 calibration guide, and it marks a step towards the harmonization of future comparisons with the registration of CMC values for thread gauges. Main text. To reach the main text of this paper, click on Final Report. Note that this text is that which appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCL, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  1. Power-MOSFET Voltage Regulator

    NASA Technical Reports Server (NTRS)

    Miller, W. N.; Gray, O. E.

    1982-01-01

    Ninety-six parallel MOSFET devices with a two-stage feedback circuit form a high-current dc voltage regulator that also acts as a fully-on solid-state switch when the fuel-cell output falls below the regulated voltage. Ripple voltage is less than 20 mV; transient recovery time is less than 50 ms. The parallel MOSFETs act as a high-current dc regulator and switch. The regulator can be used wherever large direct currents must be controlled, and can be applied to inverters, industrial furnaces, photovoltaic solar generators, dc motors, and electric autos.

  2. Sensing Impacts of the Fate of Trace Explosives Signatures Under Environmental Conditions

    DTIC Science & Technology

    2010-01-01

    vial with a pair of clean metal tweezers. A 10 mL aliquot of CHROMASOLV® Plus HPLC-grade acetone was dispensed on the wide surfaces of the sample...Evaporator Workstation under a nitrogen purge stream in a 50 ºC water bath and reconstituted with CHROMASOLV® HPLC-grade acetonitrile to 500 µL... simultaneously on the two parallel GC columns, using a refrigerated (ɠ °C) 100-vial autosampler and two parallel auto-injectors. Column 1 (Restek 562719

  3. Parallel and Preemptable Dynamically Dimensioned Search Algorithms for Single and Multi-objective Optimization in Water Resources

    NASA Astrophysics Data System (ADS)

    Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.

    2015-12-01

    We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms, including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms differ from most existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and then sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption terminates simulation model runs early (for example, prior to completely simulating the model calibration period) when intermediate results indicate that the candidate solution is so poor it will have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems, from multi-core desktops to a supercomputer system) and package them for future modellers within a model-independent calibration software package called Ostrich, as well as in MATLAB versions.
Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration and in many cases linear or near linear speedups are observed.
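
The serial backbone of the DDS family can be sketched as follows. This is a simplified single-objective version: the perturbation factor of 0.2 matches the commonly used DDS default, but boundary handling is reduced to clamping, and pre-emption and parallelism are omitted:

```python
import math
import random

def dds(objective, bounds, max_evals, seed=1):
    """Serial Dynamically Dimensioned Search (simplified sketch): the
    probability of perturbing each decision variable shrinks as the
    search progresses, moving from global to increasingly local search."""
    rng = random.Random(seed)
    best = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_val = objective(best)
    for i in range(1, max_evals):
        p = 1.0 - math.log(i) / math.log(max_evals)   # perturbation prob.
        cand = list(best)
        dims = [d for d in range(len(bounds)) if rng.random() < p]
        if not dims:                  # always perturb at least one variable
            dims = [rng.randrange(len(bounds))]
        for d in dims:
            lo, hi = bounds[d]
            cand[d] += rng.gauss(0, 0.2 * (hi - lo))
            cand[d] = min(max(cand[d], lo), hi)  # clamp (full DDS reflects)
        val = objective(cand)
        if val <= best_val:           # greedy acceptance of improvements
            best, best_val = cand, val
    return best, best_val

sphere = lambda x: sum(v * v for v in x)
best, val = dds(sphere, [(-5.0, 5.0)] * 10, max_evals=2000)
print(round(val, 3))
```

Because each candidate depends only on the current best solution, candidates can be generated and evaluated asynchronously, which is exactly the property the parallel implementations above exploit.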

  4. Design of an Auto-zeroed, Differential, Organic Thin-film Field-effect Transistor Amplifier for Sensor Applications

    NASA Technical Reports Server (NTRS)

    Binkley, David M.; Verma, Nikhil; Crawford, Robert L.; Brandon, Erik; Jackson, Thomas N.

    2004-01-01

    Organic strain gauges and other sensors require high-gain, precision dc amplification to process their low-level output signals. Ideally, amplifiers would be fabricated using organic thin-film field-effect transistors (OTFTs) adjacent to the sensors. However, OTFT amplifiers exhibit low gain and high input-referred dc offsets that must be effectively managed. This paper presents a four-stage, cascaded differential OTFT amplifier utilizing switched-capacitor auto-zeroing. Each stage provides a nominal voltage gain of four through a differential pair driving low-impedance active loads, which provide common-mode output voltage control. p-type pentacene OTFTs are used for the amplifier devices and auto-zero switches. Simulations indicate the amplifier provides a nominal voltage gain of 280 V/V and effectively amplifies a 1-mV dc signal in the presence of 500-mV amplifier input-referred dc offset voltages. Future work could include the addition of digital gain calibration and offset correction of residual offsets associated with charge-injection imbalance in the differential circuits.

  5. An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.

    PubMed

    Chen, Lei; Wei, Guoyuan; Shen, Zhenyao

    2015-10-21

    To address the computationally intensive and technically complex problem of controlling nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive form, and a new framework was proposed that integrates this algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study of a typical watershed in the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of the optimization was improved by the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.
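
The auto-adaptive idea, letting the algorithm tune its own search parameters rather than calibrating them by hand, can be illustrated with a small real-coded GA whose mutation strength reacts to population diversity. This is a generic illustration, not the paper's algorithm; the adaptation rule, rates, and toy objective are all assumptions:

```python
import random
import statistics

def adaptive_ga(objective, dim, pop_size=30, generations=120, seed=3):
    """Elitist real-coded GA with a self-adjusting mutation strength:
    low fitness diversity (stagnation) raises mutation to escape local
    optima; otherwise mutation decays to refine solutions."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    sigma = 1.0
    for _ in range(generations):
        pop.sort(key=objective)
        diversity = statistics.pstdev(objective(p) for p in pop)
        sigma = min(2.0, sigma * 1.5) if diversity < 1e-3 \
            else max(0.05, sigma * 0.95)
        parents = pop[: pop_size // 2]          # elitist truncation
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 + rng.gauss(0, sigma)
                     for x, y in zip(a, b)]     # blend crossover + mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=objective)

sphere = lambda x: sum(v * v for v in x)
best = adaptive_ga(sphere, dim=5)
print(round(sphere(best), 4))
```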

  6. Decoupling Principle Analysis and Development of a Parallel Three-Dimensional Force Sensor

    PubMed Central

    Zhao, Yanzhi; Jiao, Leihao; Weng, Dacheng; Zhang, Dan; Zheng, Rencheng

    2016-01-01

    In the development of multi-dimensional force sensors, dimension coupling is the ubiquitous factor restricting improvements in measurement accuracy. To effectively reduce the influence of dimension coupling on parallel multi-dimensional force sensors, a novel parallel three-dimensional force sensor is proposed using a mechanical decoupling principle, in which the influence of friction on dimension coupling is effectively reduced by replacing sliding friction with rolling friction. In this paper, a mathematical model is established from the structural model of the parallel three-dimensional force sensor, and the modeling and analysis of the mechanical decoupling are carried out. The coupling degree (ε) of the designed sensor is defined and calculated, and the results show that the mechanically decoupled parallel structure of the sensor possesses good decoupling performance. A prototype of the parallel three-dimensional force sensor was developed, and FEM analysis was carried out. A load calibration and data acquisition experiment system was built, and calibration experiments were then performed. According to the calibration experiments, the measurement error is less than 2.86% and the coupling error is less than 3.02%. The experimental results show that the sensor system possesses high measuring accuracy, which provides a basis for applied research on parallel multi-dimensional force sensors. PMID:27649194

  7. Auto- and hetero-associative memory using a 2-D optical logic gate

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1989-01-01

    An optical associative memory system suitable for both auto- and hetero-associative recall is demonstrated. This system uses the Hamming distance as the similarity measure between a binary input and a memory image, with the aid of a two-dimensional optical EXCLUSIVE OR (XOR) gate and a parallel electronic comparator module. Based on the Hamming distance measurement, this optical associative memory performs a nearest-neighbor search, and the result is displayed in the output plane in real time. This optical associative memory is fast and noniterative, and it produces no spurious output states, in contrast with the Hopfield neural network model.
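
    The Hamming-distance nearest-neighbor recall described above can be emulated in a few lines, with the optical XOR gate and comparator replaced by array operations. The binary patterns below are hypothetical; the optics themselves are not modeled.

```python
import numpy as np

# Two stored binary memory images (hypothetical 8-pixel patterns).
memories = {
    "A": np.array([1, 0, 1, 1, 0, 0, 1, 0]),
    "B": np.array([0, 0, 1, 0, 1, 1, 1, 0]),
}

def recall(probe):
    # XOR gives the per-pixel mismatch map (the optical XOR gate's role);
    # summing it yields the Hamming distance (the comparator's role).
    distances = {key: int(np.sum(np.bitwise_xor(mem, probe)))
                 for key, mem in memories.items()}
    return min(distances, key=distances.get)  # nearest-neighbor winner

noisy_a = np.array([1, 0, 1, 1, 0, 1, 1, 0])  # "A" with one flipped pixel
winner = recall(noisy_a)
```

    A single noninterative pass over the stored set suffices, which mirrors why the optical system produces no spurious states.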

  8. Auto- and hetero-associative memory using a 2-D optical logic gate

    NASA Astrophysics Data System (ADS)

    Chao, Tien-Hsin

    1989-06-01

    An optical associative memory system suitable for both auto- and hetero-associative recall is demonstrated. This system uses the Hamming distance as the similarity measure between a binary input and a memory image, with the aid of a two-dimensional optical EXCLUSIVE OR (XOR) gate and a parallel electronic comparator module. Based on the Hamming distance measurement, this optical associative memory performs a nearest-neighbor search, and the result is displayed in the output plane in real time. This optical associative memory is fast and noniterative, and it produces no spurious output states, in contrast with the Hopfield neural network model.

  9. The classification of the patients with pulmonary diseases using breath air samples spectral analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yury V.; Borisov, Alexey V.; Kuzmin, Dmitry A.; Bulanova, Anna A.

    2016-08-01

    A technique for exhaled breath sampling is discussed. A procedure for wavelength auto-calibration is proposed and tested. A comparison of the experimental data with the model absorption spectra of 5% CO2 is conducted. Classification results for the three study groups, obtained using support vector machine and principal component analysis methods, are presented.

  10. Multi-projector auto-calibration and placement optimization for non-planar surfaces

    NASA Astrophysics Data System (ADS)

    Li, Dong; Xie, Jinghui; Zhao, Lu; Zhou, Lijing; Weng, Dongdong

    2015-10-01

    Non-planar projection has been widely applied in virtual reality, digital entertainment, and exhibitions because of its flexible layout and immersive display effects. Compared with planar projection, a non-planar projection is more difficult to achieve because projector calibration and image distortion correction are difficult processes. This paper uses a cylindrical screen as an example to present a new method for automatically calibrating a multi-projector system in a non-planar environment without using 3D reconstruction. The method corrects the geometric calibration error caused by the screen's manufacturing imperfections, such as an undulating surface or a slant in the vertical plane. In addition, based on actual projection demands, this paper presents overall performance evaluation criteria for the multi-projector system; according to these criteria, we determined the optimal placement of the projectors. The method also extends to surfaces that can be parameterized, such as spheres, ellipsoids, and paraboloids, demonstrating broad applicability.

  11. Calibrationless parallel magnetic resonance imaging: a joint sparsity model.

    PubMed

    Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab

    2013-12-05

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity maps for SENSE and SMASH, and the interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we propose a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis- and synthesis-prior joint-sparsity problems; this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain scan and an eight-channel Shepp-Logan phantom. Two sampling methods were used: variable-density random sampling and non-Cartesian radial sampling. An acceleration factor of 4 was used for the brain data and a factor of 6 for the phantom. The reconstruction results were quantitatively evaluated using the Normalised Mean Squared Error between the reconstructed images and the originals; the qualitative evaluation was based on the reconstructed images themselves. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and L1 SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.

  12. Development and application of an automated precision solar radiometer

    NASA Astrophysics Data System (ADS)

    Qiu, Gang-gang; Li, Xin; Zhang, Quan; Zheng, Xiao-bing; Yan, Jing

    2016-10-01

    Automated field vicarious calibration is becoming a growing trend for satellite remote sensors; it requires a solar radiometer that can automatically measure reliable data over long periods, whatever the weather conditions, and transfer the measurements to the user's office. An automated precision solar radiometer has been developed for measuring the solar spectral irradiance received at the Earth's surface. The instrument consists of 8 parallel, separate silicon-photodiode-based channels with narrow band-pass filters covering the visible to near-IR regions. Each channel has a 2.0° full-angle Field of View (FOV). The detectors and filters are temperature-stabilized at 30 ± 0.2 °C using a thermal energy converter. The instrument is pointed toward the sun via an auto-tracking system that actively tracks the sun to within ±0.1°. It collects data automatically and communicates with the user terminal through BDS (China's BeiDou Navigation Satellite System), while also recording data, including working state and errors, redundantly in internal memory. The solar radiometer is automated in the sense that it requires no supervision throughout the whole working process. It calculates start and stop times every day to match sunrise and sunset, and stops working once precipitation begins. Calibrated via Langley curves and compared with simultaneous CE318 observations, the difference in Aerosol Optical Depth (AOD) is within 5%. The solar radiometer ran under all kinds of harsh weather conditions in the Gobi Desert at Dunhuang and obtained AODs nearly continuously for eight months. This paper presents the instrument design analysis, atmospheric optical depth retrievals, and experimental results.
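
    The Langley calibration mentioned above can be sketched in its standard form (an assumption here, since the abstract does not give details): the signal follows V = V0 · exp(-τ·m), so a straight-line fit of ln(V) against airmass m yields the optical depth τ from the slope and the extraterrestrial constant V0 from the intercept. The values below are synthetic.

```python
import math

def langley_fit(airmass, signal):
    """Least-squares line through (m, ln V); returns (tau, V0)."""
    ys = [math.log(v) for v in signal]
    n = len(airmass)
    mean_x = sum(airmass) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(airmass, ys))
             / sum((x - mean_x) ** 2 for x in airmass))
    intercept = mean_y - slope * mean_x
    return -slope, math.exp(intercept)  # tau = -slope, V0 from intercept

# Synthetic Langley run with tau = 0.25 and V0 = 1000 (arbitrary units):
m = [1.5, 2.0, 3.0, 4.0, 5.0]
v = [1000.0 * math.exp(-0.25 * mi) for mi in m]
tau, v0 = langley_fit(m, v)  # recovers tau = 0.25, v0 = 1000
```

    With noise-free synthetic data the fit is exact; in practice the Langley run is restricted to stable, clear mornings, which is one reason autonomous all-weather operation matters for this instrument.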

  13. Rocket measurement of auroral partial parallel distribution functions

    NASA Astrophysics Data System (ADS)

    Lin, C.-A.

    1980-01-01

    The auroral partial parallel distribution functions are obtained by using the observed energy spectra of electrons. The experiment package was launched by a Nike-Tomahawk rocket from Poker Flat, Alaska over a bright auroral band and covered an altitude range of up to 180 km. Calculated partial distribution functions are presented with emphasis on their slopes, and the implications of the slopes are discussed. It should be noted that the slope of the partial parallel distribution function obtained from one energy spectrum will be changed by superposing another energy spectrum on it.

  14. Effect of Group-III precursors on unintentional gallium incorporation during epitaxial growth of InAlN layers by metalorganic chemical vapor deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jeomoh, E-mail: jkim610@gatech.edu; Ji, Mi-Hee; Detchprohm, Theeradetch

    2015-09-28

    Unintentional incorporation of gallium (Ga) in InAlN layers grown with different molar flow rates of Group-III precursors by metalorganic chemical vapor deposition has been experimentally investigated. The Ga mole fraction in the InAl(Ga)N layer increased significantly with the trimethylindium (TMIn) flow rate, while the trimethylaluminum flow rate controls the Al mole fraction. The evaporation of metallic Ga from a liquid-phase eutectic system, formed between In pyrolized from the injected TMIn and pre-deposited metallic Ga, was responsible for the Ga auto-incorporation into the InAl(Ga)N layer. Theoretical calculations of the equilibrium vapor pressure of liquid-phase Ga and the effective partial pressure of the Group-III precursors, based on the growth parameters used in this study, confirm the influence of the Group-III precursors on Ga auto-incorporation. More Ga atoms can be evaporated from the liquid-phase Ga on the surrounding surfaces in the growth chamber, and significant Ga auto-incorporation can then occur, because the equilibrium vapor pressure of Ga is comparable to the effective partial pressure of the input Group-III precursors during growth of the InAl(Ga)N layer.

  15. Autologous or Reduced-Intensity Conditioning Allogeneic Hematopoietic Cell Transplantation for Chemotherapy-Sensitive Mantle-Cell Lymphoma: Analysis of Transplantation Timing and Modality

    PubMed Central

    Fenske, Timothy S.; Zhang, Mei-Jie; Carreras, Jeanette; Ayala, Ernesto; Burns, Linda J.; Cashen, Amanda; Costa, Luciano J.; Freytes, César O.; Gale, Robert P.; Hamadani, Mehdi; Holmberg, Leona A.; Inwards, David J.; Lazarus, Hillard M.; Maziarz, Richard T.; Munker, Reinhold; Perales, Miguel-Angel; Rizzieri, David A.; Schouten, Harry C.; Smith, Sonali M.; Waller, Edmund K.; Wirk, Baldeep M.; Laport, Ginna G.; Maloney, David G.; Montoto, Silvia; Hari, Parameswaran N.

    2014-01-01

    Purpose To examine the outcomes of patients with chemotherapy-sensitive mantle-cell lymphoma (MCL) following a first hematopoietic stem-cell transplantation (HCT), comparing outcomes with autologous (auto) versus reduced-intensity conditioning allogeneic (RIC allo) HCT and with transplantation applied at different times in the disease course. Patients and Methods In all, 519 patients who received transplantations between 1996 and 2007 and were reported to the Center for International Blood and Marrow Transplant Research were analyzed. The early transplantation cohort was defined as those patients in first partial or complete remission with no more than two lines of chemotherapy. The late transplantation cohort was defined as all the remaining patients. Results Auto-HCT and RIC allo-HCT resulted in similar overall survival from transplantation for both the early (at 5 years: 61% auto-HCT v 62% RIC allo-HCT; P = .951) and late cohorts (at 5 years: 44% auto-HCT v 31% RIC allo-HCT; P = .202). In both early and late transplantation cohorts, progression/relapse was lower and nonrelapse mortality was higher in the allo-HCT group. Overall survival and progression-free survival were highest in patients who underwent auto-HCT in first complete response. Multivariate analysis of survival from diagnosis identified a survival benefit favoring early HCT for both auto-HCT and RIC allo-HCT. Conclusion For patients with chemotherapy-sensitive MCL, the optimal timing for HCT is early in the disease course. Outcomes are particularly favorable for patients undergoing auto-HCT in first complete remission. For those unable to achieve complete remission after two lines of chemotherapy or those with relapsed disease, either auto-HCT or RIC allo-HCT may be effective, although the chance for long-term remission and survival is lower. PMID:24344210

  16. Autologous or reduced-intensity conditioning allogeneic hematopoietic cell transplantation for chemotherapy-sensitive mantle-cell lymphoma: analysis of transplantation timing and modality.

    PubMed

    Fenske, Timothy S; Zhang, Mei-Jie; Carreras, Jeanette; Ayala, Ernesto; Burns, Linda J; Cashen, Amanda; Costa, Luciano J; Freytes, César O; Gale, Robert P; Hamadani, Mehdi; Holmberg, Leona A; Inwards, David J; Lazarus, Hillard M; Maziarz, Richard T; Munker, Reinhold; Perales, Miguel-Angel; Rizzieri, David A; Schouten, Harry C; Smith, Sonali M; Waller, Edmund K; Wirk, Baldeep M; Laport, Ginna G; Maloney, David G; Montoto, Silvia; Hari, Parameswaran N

    2014-02-01

    To examine the outcomes of patients with chemotherapy-sensitive mantle-cell lymphoma (MCL) following a first hematopoietic stem-cell transplantation (HCT), comparing outcomes with autologous (auto) versus reduced-intensity conditioning allogeneic (RIC allo) HCT and with transplantation applied at different times in the disease course. In all, 519 patients who received transplantations between 1996 and 2007 and were reported to the Center for International Blood and Marrow Transplant Research were analyzed. The early transplantation cohort was defined as those patients in first partial or complete remission with no more than two lines of chemotherapy. The late transplantation cohort was defined as all the remaining patients. Auto-HCT and RIC allo-HCT resulted in similar overall survival from transplantation for both the early (at 5 years: 61% auto-HCT v 62% RIC allo-HCT; P = .951) and late cohorts (at 5 years: 44% auto-HCT v 31% RIC allo-HCT; P = .202). In both early and late transplantation cohorts, progression/relapse was lower and nonrelapse mortality was higher in the allo-HCT group. Overall survival and progression-free survival were highest in patients who underwent auto-HCT in first complete response. Multivariate analysis of survival from diagnosis identified a survival benefit favoring early HCT for both auto-HCT and RIC allo-HCT. For patients with chemotherapy-sensitive MCL, the optimal timing for HCT is early in the disease course. Outcomes are particularly favorable for patients undergoing auto-HCT in first complete remission. For those unable to achieve complete remission after two lines of chemotherapy or those with relapsed disease, either auto-HCT or RIC allo-HCT may be effective, although the chance for long-term remission and survival is lower.

  17. Design and Calibration of a X-Ray Millibeam

    DTIC Science & Technology

    2005-12-01

    Absorbed dose calibration factors were developed for use in Fricke dosimetry, parallel-plate ionization chambers, lithium fluoride (LiF) thermoluminescent dosimetry (TLD), and EBT GafChromic film to characterize the spatial distribution and accuracy of the doses produced by the Faxitron.

  18. Polarization Imaging Apparatus with Auto-Calibration

    NASA Technical Reports Server (NTRS)

    Zou, Yingyin Kevin (Inventor); Zhao, Hongzhi (Inventor); Chen, Qiushui (Inventor)

    2013-01-01

    A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5 deg, a second variable phase retarder with its optical axis aligned at 45 deg, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller, and a computer. The two variable phase retarders were controlled independently by a computer through a controller unit which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure was incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I_0, I_1, I_2 and I_3, of the sample was captured by the imaging sensor when the phase retardations of the VPRs were set at (0,0), (pi,0), (pi,pi) and (pi/2,pi), respectively. Then the four Stokes components of a Stokes image, S_0, S_1, S_2 and S_3, were calculated using the four intensity images.
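
    The final step, recovering the four Stokes components from the four intensity images, amounts to inverting a linear instrument model per pixel. The exact mapping depends on the VPR retardations and polarizer angle, which this sketch does not reproduce; a hypothetical invertible 4×4 instrument matrix A stands in for it, so that I = A·S and S = A⁻¹·I.

```python
import numpy as np

# Assumed instrument matrix, one row per retardation setting.
# These values are illustrative, NOT the patent's actual mapping.
A = np.array([
    [1.0,  1.0, 0.0, 0.0],
    [1.0, -1.0, 0.0, 0.0],
    [1.0,  0.0, 1.0, 0.0],
    [1.0,  0.0, 0.0, 1.0],
])

def stokes_from_intensities(i_meas):
    """Invert the instrument model per pixel: S = A^{-1} I."""
    return np.linalg.solve(A, i_meas)

s_true = np.array([1.0, 0.3, -0.2, 0.1])  # a sample Stokes vector
i_meas = A @ s_true                        # the four captured intensities
s_rec = stokes_from_intensities(i_meas)    # recovers s_true
```

    The auto-calibration described in the record effectively refines A (alignment angles and half-wave voltages) so that this inversion stays accurate.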

  19. Polarization imaging apparatus with auto-calibration

    DOEpatents

    Zou, Yingyin Kevin; Zhao, Hongzhi; Chen, Qiushui

    2013-08-20

    A polarization imaging apparatus measures the Stokes image of a sample. The apparatus consists of an optical lens set, a first variable phase retarder (VPR) with its optical axis aligned at 22.5°, a second variable phase retarder with its optical axis aligned at 45°, a linear polarizer, an imaging sensor for sensing the intensity images of the sample, a controller, and a computer. The two variable phase retarders were controlled independently by a computer through a controller unit which generates a sequence of voltages to control the phase retardations of the first and second variable phase retarders. An auto-calibration procedure was incorporated into the polarization imaging apparatus to correct the misalignment of the first and second VPRs, as well as the half-wave voltage of the VPRs. A set of four intensity images, I_0, I_1, I_2 and I_3, of the sample was captured by the imaging sensor when the phase retardations of the VPRs were set at (0,0), (π,0), (π,π) and (π/2,π), respectively. Then the four Stokes components of a Stokes image, S_0, S_1, S_2 and S_3, were calculated using the four intensity images.

  20. TH-CD-201-03: A Real-Time Method to Simultaneously Measure Linear Energy Transfer and Dose for Proton Therapy Using Organic Scintillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsanea, F; Therriault-Proulx, F; Sawakuchi, G

    Purpose: The light generated in organic scintillators depends on both the radiation dose and the linear energy transfer (LET). The LET dependence leads to an under-response of the detector in the Bragg peak of proton beams. This phenomenon, called ionization quenching, must be corrected to obtain accurate dose measurements of proton beams. This work exploits the ionization quenching phenomenon to provide a method of measuring LET and auto-correcting quenching. Methods: We simultaneously exposed four different organic scintillators (BCF-12, PMMA, PVT, and LSD; 1 mm in diameter) and a plane-parallel ionization chamber in passively scattered proton beams to doses between 32 and 43 cGy and fluence-averaged LET values from 0.47 to 1.26 keV/µm. The LET values for each irradiation condition were determined using a validated Monte Carlo model of the beam line. We determined the quenching parameter in the Birks equation for scintillation in BCF-12 for dose measurements. One set of irradiation conditions was used to correlate the scintillation response ratio to the LET values and plot a calibration curve of scintillation response ratio versus LET. Irradiation conditions independent of the calibration ones were used to validate this method. Comparisons to the expected values were made on the basis of both dose and LET. Results: Among all the scintillators investigated, the ratio of PMMA to BCF-12 provided the best correlation to LET values and was used as the LET calibration curve. The expected LET values in the validation set were within 2%±6%, which resulted in a dose accuracy of 1.5%±5.8% for the range of LET values investigated in this work. Conclusion: We have demonstrated the feasibility of using the ratio between the light output of two organic scintillators to simultaneously measure the LET and dose of therapeutic proton beams. Further studies are needed to verify the response at higher LET values.
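
    The two-scintillator ratio method can be sketched as a two-step lookup: first invert a calibration curve of light-output ratio versus LET, then apply a Birks-type quenching correction to the raw reading. The calibration ratios and the quenching parameter KB below are made-up illustrative numbers, not the fitted values from this work; only the LET range comes from the abstract.

```python
import numpy as np

cal_let = np.array([0.47, 0.70, 1.00, 1.26])    # keV/um (range from the abstract)
cal_ratio = np.array([0.95, 0.90, 0.84, 0.78])  # synthetic PMMA/BCF-12 ratios

KB = 0.09  # assumed Birks-type quenching parameter, um/keV (illustrative)

def let_from_ratio(ratio):
    # The calibration ratio decreases with LET, so reverse both arrays to
    # give np.interp the ascending x-axis it requires.
    return float(np.interp(ratio, cal_ratio[::-1], cal_let[::-1]))

def quench_corrected_dose(raw_signal, let):
    # Assumed Birks-type model: measured = true / (1 + KB * LET),
    # so the correction multiplies the raw signal by (1 + KB * LET).
    return raw_signal * (1.0 + KB * let)

let = let_from_ratio(0.84)               # lands on the 1.00 keV/um point
dose = quench_corrected_dose(32.0, let)
```

    The design choice worth noting is that the ratio of two detectors with different quenching behavior isolates LET from dose, so a single exposure yields both quantities.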

  1. The method of parallel-hierarchical transformation for rapid recognition of dynamic images using GPGPU technology

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid; Yarovyi, Andrii; Kokriatskaya, Nataliya; Nakonechna, Svitlana; Abramenko, Ludmila; Ławicki, Tomasz; Popiel, Piotr; Yesmakhanova, Laura

    2016-09-01

    The paper presents a method of parallel-hierarchical transformations for rapid recognition of dynamic images using GPU technology. Direct parallel-hierarchical transformations are implemented on a cluster CPU- and GPU-oriented hardware platform. Mathematical models for training the parallel-hierarchical (PH) network for the transformation are developed, as well as a training method of the PH network for recognition of dynamic images. This research is most topical for problems of organizing high-performance computation on very large arrays of information, designed to implement multi-stage sensing and processing as well as compaction and recognition of data in informational structures and computer devices. The method's advantages include high performance through the use of recent advances in parallelization, the ability to work with images of very large dimensions, easy scaling when the number of nodes in the cluster changes, and automatic scanning of the local network to detect compute nodes.

  2. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatically stitching video in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.

  3. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2011-12-01

    This paper concerns the problem of automatically stitching video in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
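
    The registration step in the pipeline above, estimating a homography once feature matching (SURF in the paper) has produced point correspondences, can be sketched with the standard direct linear transform (DLT). Feature matching and blending are not reproduced here; the correspondences below are synthetic, generated from a known ground-truth homography.

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: solve A h = 0 for the 3x3 homography from >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null-space vector (last right singular vector) holds h.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]  # fix the overall projective scale

def apply_homography(h, pt):
    p = h @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Synthetic ground truth and four corner correspondences:
h_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-4, 2e-4, 1.0]])
src = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
dst = [tuple(apply_homography(h_true, p)) for p in src]
h_est = estimate_homography(src, dst)  # recovers h_true up to scale
```

    In a real stitching system the correspondences are noisy, so the DLT is typically wrapped in RANSAC; that robustification step is omitted here.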

  4. Domain-Invariant Partial-Least-Squares Regression.

    PubMed

    Nikzad-Langerodi, Ramin; Zellinger, Werner; Lughofer, Edwin; Saminger-Platz, Susanne

    2018-05-11

    Multivariate calibration models often fail to extrapolate beyond the calibration samples because of changes associated with the instrumental response, environmental condition, or sample matrix. Most of the current methods used to adapt a source calibration model to a target domain exclusively apply to calibration transfer between similar analytical devices, while generic methods for calibration-model adaptation are largely missing. To fill this gap, we here introduce domain-invariant partial-least-squares (di-PLS) regression, which extends ordinary PLS by a domain regularizer in order to align the source and target distributions in the latent-variable space. We show that a domain-invariant weight vector can be derived in closed form, which allows the integration of (partially) labeled data from the source and target domains as well as entirely unlabeled data from the latter. We test our approach on a simulated data set where the aim is to desensitize a source calibration model to an unknown interfering agent in the target domain (i.e., unsupervised model adaptation). In addition, we demonstrate unsupervised, semisupervised, and supervised model adaptation by di-PLS on two real-world near-infrared (NIR) spectroscopic data sets.

  5. The calibration of plane parallel ionisation chambers for the measurement of absorbed dose in electron beams of low to medium energies. Part 2: The PTW/MARKUS chamber.

    PubMed

    Cross, P; Freeman, N

    1997-06-01

    The purpose of the Part 2 study of calibration methods for plane-parallel ionisation chambers was to determine the feasibility of calibrating the MARKUS chamber in beams other than the standard AAPM TG39 reference beams of 60Co and a high-energy electron beam (E0 >= 15 MeV). A previous study of the NACP chamber had demonstrated an acceptable level of accuracy, with a corresponding spread of -0.5% to +0.8%, for its calibration in non-standard situations (medium- to low-energy electron and photon beams). For non-standard situations the spread in ND,MARKUS values was found to be +/-2.5%. The results suggest that user calibrations of the MARKUS chamber in non-standard situations are associated with more uncertainty than is the case for the NACP chamber.

  6. A Novel Therapy for Chronic Sleep-Onset Insomnia: A Retrospective, Nonrandomized Controlled Study of Auto-Adjusting, Dual-Level, Positive Airway Pressure Technology.

    PubMed

    Krakow, Barry; Ulibarri, Victor A; McIver, Natalia D; Nadorff, Michael R

    2016-09-29

    Evidence indicates that behavioral or drug therapy may not target underlying pathophysiologic mechanisms for chronic insomnia, possibly due to previously unrecognized high rates (30%-90%) of sleep apnea in chronic insomnia patients. Although treatment studies with positive airway pressure (PAP) demonstrate decreased severity of chronic sleep maintenance insomnia in patients with co-occurring sleep apnea, sleep-onset insomnia has not shown similar results. We hypothesized advanced PAP technology would be associated with decreased sleep-onset insomnia severity in a sample of predominantly psychiatric patients with comorbid sleep apnea. We reviewed charts of 74 severe sleep-onset insomnia patients seen from March 2011 to August 2015, all meeting American Academy of Sleep Medicine Work Group criteria for a chronic insomnia disorder and all affirming behavioral and psychological origins for insomnia (averaging 10 of 18 indicators/patient), as well as averaging 2 or more psychiatric symptoms or conditions: depression (65.2%), anxiety (41.9%), traumatic exposure (35.1%), claustrophobia (29.7%), panic attacks (28.4%), and posttraumatic stress disorder (20.3%). All patients failed continuous or bilevel PAP and were manually titrated with auto-adjusting PAP modes (auto-bilevel and adaptive-servo ventilation). At 1-year follow-up, patients were compared through nonrandom assignment on the basis of a PAP compliance metric of > 20 h/wk (56 PAP users) versus < 20 h/wk (18 partial PAP users). PAP users showed significantly greater decreases in global insomnia severity (Hedges' g = 1.72) and sleep-onset insomnia (g = 2.07) compared to partial users (g = 1.04 and 0.91, respectively). Both global and sleep-onset insomnia severity decreased below moderate levels in PAP users compared to partial users whose outcomes persisted at moderately severe levels. 
In a nonrandomized controlled retrospective study, advanced PAP technology (both auto-bilevel and adaptive servo-ventilation) was associated with large decreases in insomnia severity for sleep-onset insomnia patients who strongly believed psychological factors caused their sleeplessness. PAP treatment of sleep-onset insomnia merits further investigation. © Copyright 2016 Physicians Postgraduate Press, Inc.

  7. Note: A simple image processing based fiducial auto-alignment method for sample registration.

    PubMed

    Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne

    2015-08-01

    A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.
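
    The core of the fiducial-location step, measuring the center point and diameter of a circular fiducial, can be sketched with plain image moments: the center is the centroid of the foreground pixels, and the diameter follows from the pixel area via A = π(d/2)². A synthetic binary disk stands in for a camera frame; this is not the paper's MATLAB/LabVIEW implementation.

```python
import numpy as np

def locate_fiducial(mask):
    """Centroid and area-equivalent diameter of a binary circular fiducial."""
    ys, xs = np.nonzero(mask)
    center = (xs.mean(), ys.mean())            # centroid of foreground pixels
    diameter = 2.0 * np.sqrt(len(xs) / np.pi)  # from area = pi * (d/2)^2
    return center, diameter

# Synthetic 200x200 frame containing a disk of radius 30 centered at (120, 80):
yy, xx = np.mgrid[0:200, 0:200]
mask = (xx - 120.0) ** 2 + (yy - 80.0) ** 2 <= 30.0 ** 2
center, diameter = locate_fiducial(mask)  # ~ (120, 80) and ~ 60 pixels
```

    The measured diameter, compared against the known physical fiducial size, is what enables the distance self-calibration mentioned in the abstract.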

  8. Responses of sun-induced chlorophyll fluorescence to biological and environmental variations measured with a versatile Fluorescence Auto-Measurement Equipment (FAME)

    NASA Astrophysics Data System (ADS)

    Gu, L.

    2017-12-01

    In this study, we examine responses of sun-induced chlorophyll fluorescence to biological and environmental variations measured with a versatile Fluorescence Auto-Measurement Equipment (FAME). FAME was developed to automatically and continuously measure the chlorophyll fluorescence (F) of a leaf, plant, or canopy in both laboratory and field environments, excited by either an artificial light source or sunlight. FAME is controlled by a datalogger and allows simultaneous measurements of environmental variables complementary to the F signals. A built-in communication system allows FAME to be remotely monitored and its data downloaded. Radiance and irradiance calibrations can be done online. FAME has been applied in a variety of environments, allowing an investigation of biological and environmental controls on F emission.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jornet, N; Carrasco de Fez, P; Jordi, O

    Purpose: To evaluate the accuracy of total scatter factor (Sc,p) determination for small fields using a commercial plastic scintillator detector (PSD). The manufacturer's spectral discrimination method for subtracting Cerenkov light from the signal is discussed. Methods: Sc,p for field sizes ranging from 0.5 to 10 cm was measured using a PSD Exradin (Standard Imaging) connected to a two-channel electrometer that measures the signals in two different spectral regions to subtract the Cerenkov signal from the PSD signal. A PinPoint ionisation chamber 31006 (PTW) and a non-shielded semiconductor detector EFD (Scanditronix) were used for comparison. Measurements were performed for a 6 MV X-ray beam. The Sc,p values were measured at 10 cm depth in water for SSD = 100 cm and normalized to a 10×10 cm² field size at the isocenter. All detectors were placed with their symmetry axis parallel to the beam axis. We followed the manufacturer's recommended calibration methodology to subtract the Cerenkov contribution to the signal, as well as a modified method using smaller field sizes. The Sc,p values calculated with the two calibration methodologies were compared. Results: Sc,p measured with the semiconductor and PinPoint detectors agree within 1.5% for field sizes between 10×10 and 1×1 cm². Sc,p measured with the PSD using the manufacturer's calibration methodology was systematically 4% higher than that measured with the semiconductor detector for field sizes smaller than 5×5 cm². By using a modified calibration methodology for small fields and keeping the manufacturer's calibration methodology for fields larger than 5×5 cm², Sc,p matched the semiconductor results within 2% for field sizes larger than 1.5 cm. Conclusion: The calibration methodology proposed by the manufacturer is not appropriate for dose measurements in small fields. The calibration parameters are not independent of the incident radiation spectrum for this PSD.
    This work was partially financed by grant 2012 of the Barcelona board of the AECC.

  10. An FPGA Noise Resistant Digital Temperature Sensor with Auto Calibration

    DTIC Science & Technology

    2012-03-01

    Excerpt: Two digital temperature sensor placement algorithms for an FPGA are compared: (a) grid placement and (b) optimal placement. Grid placement lays sensors out in a grid over the FPGA; while this works reasonably well, it requires many sensors, some of which are unnecessary. The motivation is integrated circuits' sensitivity to temperature.

  11. Digital tomosynthesis mammography using a parallel maximum-likelihood reconstruction method

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Zhang, Juemin; Moore, Richard; Rafferty, Elizabeth; Kopans, Daniel; Meleis, Waleed; Kaeli, David

    2004-05-01

    A parallel reconstruction method, based on an iterative maximum-likelihood (ML) algorithm, is developed to provide fast reconstruction for digital tomosynthesis mammography. Tomosynthesis mammography acquires 11 low-dose projections of a breast by moving an x-ray tube over a 50° angular range. In the parallel reconstruction, each projection is divided into multiple segments along the chest-to-nipple direction. Using the 11 projections, segments located at the same distance from the chest wall are combined to compute a partial reconstruction of the total breast volume. Each partial reconstruction forms a thin slab angled toward the x-ray source at the 0° projection angle. The reconstruction of the total breast volume is obtained by merging the partial reconstructions. The overlap between neighboring partial reconstructions and neighboring projection segments is used to compensate for the incomplete data at the boundaries of the partial reconstructions. A serial execution of the reconstruction is compared to a parallel implementation using clinical data. The serial code was run on a PC with a single Pentium IV 2.2 GHz CPU. The parallel implementation was developed using MPI and run on a 64-node Linux cluster with 800 MHz Itanium CPUs. The serial reconstruction for a medium-sized breast (5 cm thickness, 11 cm chest-to-nipple distance) takes 115 minutes, while the parallel implementation takes only 3.5 minutes. For a larger breast, the serial implementation takes 187 minutes and the parallel implementation 6.5 minutes. No significant differences were observed between the reconstructions produced by the serial and parallel implementations.
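
    The reported timings imply a substantial but sub-linear speedup; a quick check (assuming all 64 cluster nodes were used for each case, which the abstract does not state explicitly):

```python
# Speedup and parallel efficiency implied by the reported wall-clock times.
# Times are taken from the abstract above; the 64-node count is the stated
# cluster size, so per-case node usage is an assumption.

def speedup(serial_min: float, parallel_min: float) -> float:
    """Ratio of serial to parallel run time."""
    return serial_min / parallel_min

def efficiency(s: float, nodes: int) -> float:
    """Achieved fraction of ideal linear speedup."""
    return s / nodes

s_medium = speedup(115, 3.5)               # ~32.9x for the medium-sized breast
s_large = speedup(187, 6.5)                # ~28.8x for the larger breast
print(round(efficiency(s_medium, 64), 2))  # ~0.51 of ideal on 64 nodes
```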

  12. Multivariate methods on the excitation emission matrix fluorescence spectroscopic data of diesel-kerosene mixtures: a comparative study.

    PubMed

    Divya, O; Mishra, Ashok K

    2007-05-29

    Quantitative determination of the kerosene fraction present in diesel has been carried out using excitation-emission matrix fluorescence (EEMF) together with parallel factor analysis (PARAFAC) and N-way partial least squares regression (N-PLS). EEMF is a simple, sensitive and nondestructive method suitable for the analysis of multifluorophoric mixtures. Calibration models consisting of varying compositions of diesel and kerosene were constructed and validated using the leave-one-out cross-validation method. The accuracy of each model was evaluated through the root mean square error of prediction (RMSEP) for the PARAFAC, N-PLS and unfold-PLS methods. N-PLS outperformed PARAFAC and unfold-PLS, giving the lowest RMSEP values.
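
    The validation loop described above (leave-one-out cross-validation scored by RMSEP) can be sketched generically; the ordinary least-squares calibration line and the toy data below are illustrative stand-ins for the PARAFAC/N-PLS models:

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def loo_rmsep(X, y):
    """Leave-one-out CV: refit with each sample held out, then predict it."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    design = lambda Z: np.c_[Z, np.ones(len(Z))]   # slope + intercept
    preds = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(design(X[mask]), y[mask], rcond=None)
        preds.append(design(X[i:i + 1]) @ beta)
    return rmsep(y, np.concatenate(preds))

# Toy calibration: kerosene fraction vs. a single fluorescence intensity.
X = np.array([[0.0], [0.1], [0.2], [0.3], [0.4], [0.5]])
y = 2.0 * X.ravel() + 1.0       # perfectly linear, so LOO RMSEP ~ 0
print(loo_rmsep(X, y))
```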

  13. Binocular optical axis parallelism detection precision analysis based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Ying, Jiaju; Liu, Bingqi

    2018-02-01

    Starting from the working principle of the digital calibration instrument for the optical axis parallelism of binocular photoelectric instruments, the factors affecting system precision are analyzed for each component of the instrument, and a precision analysis model is established. Based on the error distribution, the Monte Carlo method is used to analyze the relationship between the comprehensive error and the variation of the center coordinate of the circular target image. The method can further guide error allocation, prioritize control of the factors with the greatest influence on the comprehensive error, and improve the measurement accuracy of the optical axis parallelism digital calibration instrument.
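
    The Monte Carlo step can be sketched as follows: draw each error source from its assumed distribution, sum the resulting shifts of the target-image center, and estimate the spread of the total. The error sources and their standard deviations below are illustrative assumptions, not values from the paper:

```python
import math
import random
import statistics

# Assumed (illustrative) component error standard deviations, in mm.
SIGMA = {"axis_alignment": 0.020, "focal_length": 0.010, "detector_grid": 0.015}

def center_error_mc(n=50_000, seed=0):
    """Monte Carlo estimate of the std. dev. of the target-image center shift."""
    rng = random.Random(seed)
    shifts = [sum(rng.gauss(0.0, s) for s in SIGMA.values()) for _ in range(n)]
    return statistics.stdev(shifts)

# For independent Gaussian sources the analytic answer is the root-sum-square,
# so the simulation should agree with it closely.
rss = math.sqrt(sum(s * s for s in SIGMA.values()))
print(round(center_error_mc(), 3), round(rss, 3))
```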

  14. A Hybrid MPI/OpenMP Approach for Parallel Groundwater Model Calibration on Multicore Computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan

    2010-01-01

    Groundwater model calibration is becoming increasingly computationally time intensive. We describe a hybrid MPI/OpenMP approach that exploits two levels of parallelism in software and hardware to reduce calibration time on multicore computers with minimal parallelization effort. First, HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for a uranium transport model with over a hundred species involved in nearly a hundred reactions, and for a field-scale coupled flow and transport model. In the first application, a single parallelizable loop is identified that consumes over 97% of the total computational time. With a few lines of OpenMP compiler directives inserted into the code, the computational time is reduced about ten-fold on a compute node with 16 cores. The performance is further improved by selectively parallelizing a few more loops. For the field-scale application, parallelizable loops in 15 of the 174 subroutines in HGC5 are identified as taking more than 99% of the execution time. By adding the preconditioned conjugate gradient solver and BiCGSTAB, and using a coloring scheme to separate the elements, nodes, and boundary sides, the subroutines for finite element assembly, soil property update, and boundary condition application are parallelized, resulting in a speedup of about 10 on a 16-core compute node. The Levenberg-Marquardt (LM) algorithm is added to HGC5 with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, using as many compute nodes as there are adjustable parameters (when the forward difference is used for the Jacobian approximation), or twice that number (if the central difference is used), reduces the calibration time from days and weeks to a few hours for the two applications. This approach can be extended to global optimization schemes and Monte Carlo analyses where thousands of compute nodes can be efficiently utilized.
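
    The node counts follow from how a finite-difference Jacobian is built: each perturbed model run is independent, so forward differences need one extra run per parameter and central differences need two. A hedged sketch (threads standing in for MPI ranks, and a toy residual function standing in for the groundwater model):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def fd_jacobian(model, p, h=1e-6, central=False, workers=4):
    """Finite-difference Jacobian of model(p); each column's perturbed run is
    independent, so forward mode costs len(p) runs and central mode 2*len(p)."""
    p = np.asarray(p, float)

    def run(i, sign):
        q = p.copy()
        q[i] += sign * h
        return np.asarray(model(q), float)

    with ThreadPoolExecutor(workers) as ex:   # stand-in for MPI task farming
        if central:
            plus = list(ex.map(lambda i: run(i, +1.0), range(len(p))))
            minus = list(ex.map(lambda i: run(i, -1.0), range(len(p))))
            cols = [(a - b) / (2.0 * h) for a, b in zip(plus, minus)]
        else:
            base = np.asarray(model(p), float)
            cols = [(r - base) / h
                    for r in ex.map(lambda i: run(i, +1.0), range(len(p)))]
    return np.column_stack(cols)

# Toy "model": residuals r(p) = [p0^2, p0*p1]; Jacobian at (1, 2) is [[2, 0], [2, 1]].
J = fd_jacobian(lambda p: np.array([p[0] ** 2, p[0] * p[1]]), [1.0, 2.0], central=True)
print(np.round(J, 4))
```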

  15. Understanding Rasch Measurement: Partial Credit Model and Pivot Anchoring.

    ERIC Educational Resources Information Center

    Bode, Rita K.

    2001-01-01

    Describes the Rasch measurement partial credit model, what it is, how it differs from other Rasch models, and when and how to use it. Also describes the calibration of instruments with increasingly complex items. Explains pivot anchoring and illustrates its use and describes the effect of pivot anchoring on step calibrations, item hierarchy, and…

  16. Auto-Gopher-II: an autonomous wireline rotary-hammer ultrasonic drill

    NASA Astrophysics Data System (ADS)

    Badescu, Mircea; Lee, Hyeong Jae; Sherrit, Stewart; Bao, Xiaoqi; Bar-Cohen, Yoseph; Jackson, Shannon; Chesin, Jacob; Zacny, Kris; Paulsen, Gale L.; Mellerowicz, Bolek; Kim, Daniel

    2017-04-01

    Developing technologies that would enable future NASA exploration missions to penetrate deeper into the subsurface of planetary bodies for sample collection is of great importance. Performing these tasks with minimal mass/volume systems and low energy consumption is a further requirement imposed on such technologies. A deep drill, called Auto-Gopher II, is currently being developed as a joint effort between JPL's NDEAA laboratory and Honeybee Robotics. The Auto-Gopher II is a wireline rotary-hammer drill that combines formation breaking by hammering using an ultrasonic actuator with cuttings removal by a rotating fluted auger bit. The hammering mechanism is based on the Ultrasonic/Sonic Drill/Corer (USDC) mechanism that has been developed as an adaptable tool for many drilling and coring applications. The USDC uses an intermediate free-flying mass to transform high-frequency vibrations of a piezoelectric transducer horn tip into sonic hammering of the drill bit. The USDC concept was used in a previous task to develop an Ultrasonic/Sonic Ice Gopher and was then integrated into a rotary-hammer device to develop the Auto-Gopher-I. The lessons learned from these developments are being integrated into the development of the Auto-Gopher-II, an autonomous deep wireline drill with integrated cuttings and sample management and drive electronics. Subsystems of the wireline drill are being developed in parallel at JPL and Honeybee Robotics. This paper presents the development of the piezoelectric actuator, the cuttings removal and retention flutes, and the drive electronics.

  17. Highly parameterized model calibration with cloud computing: an example of regional flow model calibration in northeast Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Hayley, Kevin; Schumacher, J.; MacMillan, G. J.; Boutin, L. C.

    2014-05-01

    Expanding groundwater datasets collected by automated sensors, and improved groundwater databases, have caused a rapid increase in calibration data available for groundwater modeling projects. Improved methods of subsurface characterization have increased the need for model complexity to represent geological and hydrogeological interpretations. The larger calibration datasets and the need for meaningful predictive uncertainty analysis have both increased the degree of parameterization necessary during model calibration. Due to these competing demands, modern groundwater modeling efforts require a massive degree of parallelization in order to remain computationally tractable. A methodology for the calibration of highly parameterized, computationally expensive models using the Amazon EC2 cloud computing service is presented. The calibration of a regional-scale model of groundwater flow in Alberta, Canada, is provided as an example. The model covers a 30,865-km2 domain and includes 28 hydrostratigraphic units. Aquifer properties were calibrated to more than 1,500 static hydraulic head measurements and 10 years of measurements during industrial groundwater use. Three regionally extensive aquifers were parameterized with spatially variable hydraulic conductivity fields, as was the areal recharge boundary condition, leading to 450 adjustable parameters in total. The PEST-based model calibration was parallelized on up to 250 computing nodes located on Amazon's EC2 servers.

  18. 29 CFR 779.320 - Partial list of establishments whose sales or service may be recognized as retail.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Antique shops. Auto courts. Automobile dealers' establishments. Automobile laundries. Automobile repair shops. Barber shops. Beauty shops. Bicycle shops. Billiard parlors. Book stores. Bowling alleys. Butcher shops. Cafeterias. Cemeteries. China, glassware stores. Cigar stores. Clothing stores. Coal yards...

  19. 29 CFR 779.320 - Partial list of establishments whose sales or service may be recognized as retail.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... Antique shops. Auto courts. Automobile dealers' establishments. Automobile laundries. Automobile repair shops. Barber shops. Beauty shops. Bicycle shops. Billiard parlors. Book stores. Bowling alleys. Butcher shops. Cafeterias. Cemeteries. China, glassware stores. Cigar stores. Clothing stores. Coal yards...

  20. 29 CFR 779.320 - Partial list of establishments whose sales or service may be recognized as retail.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Antique shops. Auto courts. Automobile dealers' establishments. Automobile laundries. Automobile repair shops. Barber shops. Beauty shops. Bicycle shops. Billiard parlors. Book stores. Bowling alleys. Butcher shops. Cafeterias. Cemeteries. China, glassware stores. Cigar stores. Clothing stores. Coal yards...

  1. 29 CFR 779.320 - Partial list of establishments whose sales or service may be recognized as retail.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Antique shops. Auto courts. Automobile dealers' establishments. Automobile laundries. Automobile repair shops. Barber shops. Beauty shops. Bicycle shops. Billiard parlors. Book stores. Bowling alleys. Butcher shops. Cafeterias. Cemeteries. China, glassware stores. Cigar stores. Clothing stores. Coal yards...

  2. 29 CFR 779.320 - Partial list of establishments whose sales or service may be recognized as retail.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... Antique shops. Auto courts. Automobile dealers' establishments. Automobile laundries. Automobile repair shops. Barber shops. Beauty shops. Bicycle shops. Billiard parlors. Book stores. Bowling alleys. Butcher shops. Cafeterias. Cemeteries. China, glassware stores. Cigar stores. Clothing stores. Coal yards...

  3. Calibration-free in vivo transverse blood flowmetry based on cross correlation of slow-time profiles from photoacoustic microscopy

    PubMed Central

    Zhou, Yong; Liang, Jinyang; Maslov, Konstantin I.; Wang, Lihong V.

    2013-01-01

    We propose a cross-correlation-based method to measure blood flow velocity by using photoacoustic microscopy. Unlike in previous auto-correlation-based methods, the measured flow velocity here is independent of particle size. Thus, an absolute flow velocity can be obtained without calibration. We first measured the flow velocity ex vivo, using defibrinated bovine blood. Then, flow velocities in vessels with different structures in a mouse ear were quantified in vivo. We further measured the flow variation in the same vessel and at a vessel bifurcation. All the experimental results indicate that our method can be used to accurately quantify blood velocity in vivo. PMID:24081077
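
    The core cross-correlation idea can be sketched generically: two slow-time profiles recorded a known distance apart along the vessel are time-shifted copies of each other, the lag that maximizes their cross-correlation gives the transit time, and velocity is separation over transit time. The signal shapes and numbers below are illustrative, not the paper's data:

```python
import numpy as np

def flow_velocity(sig_a, sig_b, dt, separation):
    """Velocity from the cross-correlation lag between two profiles
    recorded `separation` apart, sampled every `dt` seconds."""
    sig_a = sig_a - np.mean(sig_a)
    sig_b = sig_b - np.mean(sig_b)
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)   # samples by which sig_b trails sig_a
    return separation / (lag * dt)

# Toy example: a Gaussian pulse arriving 20 samples later at the second position.
t = np.arange(200)
pulse = np.exp(-0.5 * ((t - 80) / 5.0) ** 2)
delayed = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)
v = flow_velocity(pulse, delayed, dt=1e-3, separation=2e-3)
print(v)   # ~0.1 m/s: 2 mm traversed in 20 samples of 1 ms
```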

  4. Considerations for automated machine learning in clinical metabolic profiling: Altered homocysteine plasma concentration associated with metformin exposure.

    PubMed

    Orlenko, Alena; Moore, Jason H; Orzechowski, Patryk; Olson, Randal S; Cairns, Junmei; Caraballo, Pedro J; Weinshilboum, Richard M; Wang, Liewei; Breitenstein, Matthew K

    2018-01-01

    With the maturation of metabolomics science and the proliferation of biobanks, clinical metabolic profiling is an increasingly opportunistic frontier for advancing translational clinical research. Automated machine learning (AutoML) approaches provide an exciting opportunity to guide feature selection in agnostic metabolic profiling endeavors, where potentially thousands of independent data points must be evaluated. In previous research, AutoML using high-dimensional data of varying types has been demonstrably robust, outperforming traditional approaches; however, considerations for application in clinical metabolic profiling remain to be evaluated, particularly the robustness of AutoML in identifying and adjusting for common clinical confounders. In this study, we present a focused case study of AutoML considerations for using the Tree-based Pipeline Optimization Tool (TPOT) in metabolic profiling of exposure to metformin in a biobank cohort. First, we propose a tandem rank-accuracy measure to guide agnostic feature selection and corresponding threshold determination in clinical metabolic profiling endeavors. Second, while AutoML with default parameters can lack sensitivity to low-effect confounding clinical covariates, we demonstrate residual training and adjustment of metabolite features as an easily applicable approach to ensure that AutoML adjusts for potential confounding characteristics. Finally, we present increased homocysteine with long-term exposure to metformin as a potentially novel, non-replicated metabolite association suggested by TPOT; an association not identified in parallel clinical metabolic profiling endeavors. While warranting independent replication, our tandem rank-accuracy measure suggests homocysteine to be the metabolite feature with the largest effect, and a corresponding priority for further translational clinical research. Residual training and adjustment for a potential confounding effect by BMI only slightly modified the suggested association. Increased homocysteine is thought to be associated with vitamin B12 deficiency; evaluation for potential clinical relevance is suggested. While considerations for clinical metabolic profiling are recommended, including adjustment approaches for clinical confounders, AutoML presents an exciting tool to enhance clinical metabolic profiling and advance translational research endeavors.
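
    The residual-adjustment idea can be sketched as follows: regress each metabolite feature on a candidate confounder and hand the residuals, rather than the raw feature, to the downstream model, removing the confounder's linear contribution. The variable names and synthetic data are illustrative, not the study's:

```python
import numpy as np

def residualize(feature, confounder):
    """Residuals of an OLS regression of `feature` on `confounder` (+ intercept);
    the result is uncorrelated with the confounder."""
    X = np.c_[confounder, np.ones_like(confounder)]
    beta, *_ = np.linalg.lstsq(X, feature, rcond=None)
    return feature - X @ beta

rng = np.random.default_rng(0)
bmi = rng.normal(27.0, 4.0, 300)                   # hypothetical confounder
metabolite = 0.3 * bmi + rng.normal(0.0, 1.0, 300)  # confounded synthetic feature
adj = residualize(metabolite, bmi)
print(abs(np.corrcoef(adj, bmi)[0, 1]) < 1e-8)      # True: linear confounding removed
```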

  5. Time Parallel Solution of Linear Partial Differential Equations on the Intel Touchstone Delta Supercomputer

    NASA Technical Reports Server (NTRS)

    Toomarian, N.; Fijany, A.; Barhen, J.

    1993-01-01

    Evolutionary partial differential equations are usually solved by discretization in time and space, applying a marching-in-time procedure to data and algorithms that are potentially parallelized in the spatial domain.
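
    The conventional march-in-time scheme the abstract refers to can be sketched for the 1-D heat equation: the time loop is sequential, while each step's spatial update is independent per point (the part that parallelizes). Sizes and coefficients are illustrative:

```python
import numpy as np

def march(u0, alpha, dx, dt, steps):
    """Explicit Euler marching for u_t = alpha * u_xx with fixed end values."""
    u = np.asarray(u0, float).copy()
    r = alpha * dt / dx ** 2           # scheme is stable for r <= 0.5
    for _ in range(steps):             # sequential marching in time
        # vectorized interior update: the spatially parallel part
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

u0 = np.zeros(101)
u0[50] = 1.0                           # initial heat spike, ends held at zero
u = march(u0, alpha=1.0, dx=1.0, dt=0.4, steps=200)
print(u.max())                         # peak decays as heat diffuses outward
```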

  6. Certain (-)-epigallocatechin-3-gallate (EGCG) auto-oxidation products (EAOPs) retain the cytotoxic activities of EGCG.

    PubMed

    Wei, Yaqing; Chen, Pingping; Ling, Tiejun; Wang, Yijun; Dong, Ruixia; Zhang, Chen; Zhang, Longjie; Han, Manman; Wang, Dongxu; Wan, Xiaochun; Zhang, Jinsong

    2016-08-01

    (-)-Epigallocatechin-3-gallate (EGCG) from green tea has an anti-cancer effect. The cytotoxic actions of EGCG are associated with its auto-oxidation, leading to the production of hydrogen peroxide and the formation of numerous EGCG auto-oxidation products (EAOPs), whose structures and bioactivities remain largely unclear. In the present study, we compared several fundamental properties of EGCG and EAOPs, the latter prepared by dissolving EGCG at 5 mg/mL in 200 mM phosphate-buffered saline (pH 8.0, 37°C) under normal oxygen partial pressure for different periods of time. Despite the complete disappearance of EGCG after 4 h of auto-oxidation, the 4-h EAOPs gained an enhanced capacity to deplete cysteine thiol groups, and retained the cytotoxic effects of EGCG as well as the capacity to produce hydrogen peroxide and inhibit thioredoxin reductase, a putative target for cancer prevention and treatment. The results indicate that certain EAOPs possess cytotoxic activities equivalent to EGCG, while simultaneously exhibiting an enhanced capacity for cysteine depletion. These results imply that EGCG and extracellularly formed EAOPs act in concert to exert the cytotoxic effects previously ascribed to EGCG alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Algorithm for automatic analysis of electro-oculographic data

    PubMed Central

    2013-01-01

    Background Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. Methods The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. Results The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. Conclusion The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a stand-alone measurement. PMID:24160372

  8. Algorithm for automatic analysis of electro-oculographic data.

    PubMed

    Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti

    2013-10-25

    Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a stand-alone measurement.
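
    The auto-calibration idea, deriving amplitude thresholds from the recorded signal itself, can be sketched with a robust threshold on the signal derivative. The threshold factor and the toy trace are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def detect_events(eog, k=6.0):
    """Flag samples whose derivative exceeds a data-derived threshold
    (k times a robust sigma estimated from the median absolute deviation)."""
    d = np.diff(eog)
    mad = np.median(np.abs(d - np.median(d)))
    thresh = k * 1.4826 * mad          # MAD -> sigma for Gaussian noise
    return np.flatnonzero(np.abs(d) > thresh), thresh

# Toy trace: baseline noise with one abrupt step standing in for a saccade.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 500)
trace[250:] += 40.0                    # step at sample 250
events, _ = detect_events(trace)
print(events)                          # includes index 249, the step in the derivative
```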

  9. Adsorptive Desulfurization of JP-8 Fuel Using Ag+/Silica Based Adsorbents at Room Temperature

    DTIC Science & Technology

    2012-09-01

    Excerpt: Adsorptive desulfurization is promising because it is accomplished at ambient temperature and pressure. Erkey and co-workers used carbon aerogels (CAs) as adsorbents for sulfur removal. (Abbreviations from the report: ATR, auto-thermal reforming; BET, Brunauer-Emmett-Teller theory; BT, benzothiophene; CAs, carbon aerogels; CPOX, catalytic partial oxidation.)

  10. A Structure-Toxicity Study of Aβ42 Reveals a New Anti-Parallel Aggregation Pathway

    PubMed Central

    Vignaud, Hélène; Bobo, Claude; Lascu, Ioan; Sörgjerd, Karin Margareta; Zako, Tamotsu; Maeda, Mizuo; Salin, Benedicte; Lecomte, Sophie; Cullin, Christophe

    2013-01-01

    Amyloid beta (Aβ) peptides produced by APP cleavage are central to the pathology of Alzheimer's disease. Despite widespread interest in this issue, the relationship between the auto-assembly and toxicity of these peptides remains controversial. One intriguing feature stems from their capacity to form anti-parallel β-sheet oligomeric intermediates that can be converted into a parallel topology to allow the formation of protofibrillar and fibrillar Aβ. Here, we present a novel approach to determining the molecular aspects of Aβ assembly that are responsible for its in vivo toxicity. We selected Aβ mutants with varying intracellular toxicities. In vitro, only toxic Aβ (including wild-type Aβ42) formed urea-resistant oligomers. These oligomers were able to assemble into fibrils that are rich in anti-parallel β-sheet structures. Our results support the existence of a new pathway that depends on the folding capacity of Aβ. PMID:24244667

  11. Computerized Analysis of Digital Photographs for Evaluation of Tooth Movement.

    PubMed

    Toodehzaeim, Mohammad Hossein; Karandish, Maryam; Karandish, Mohammad Nabi

    2015-03-01

    Various methods have been introduced for evaluating tooth movement in orthodontics; the challenge is to adopt the most accurate and most beneficial method for patients. This study was designed to introduce analysis of digital photographs with AutoCAD software as a method to evaluate tooth movement, and to assess the reliability of this method. Eighteen patients were evaluated. Three intraoral digital images of the buccal view were captured from each patient within a half-hour interval. All photos were imported into AutoCAD 2011 and calibrated, and the distance between the canine and molar hooks was measured. The data were analyzed using the intraclass correlation coefficient. The photographs were found to have a high reliability coefficient (P > 0.05). The introduced method is an accurate, efficient and reliable way to evaluate tooth movement.
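
    The calibration step behind such photographic measurement is a simple scale fix: a reference feature of known physical length visible in the photo sets the mm-per-pixel factor, which then converts any measured pixel distance to millimetres. The coordinates and the reference length below are illustrative:

```python
import math

def scale_factor(ref_px_a, ref_px_b, ref_len_mm):
    """mm per pixel, from a reference feature of known physical length."""
    return ref_len_mm / math.dist(ref_px_a, ref_px_b)

def measure(p_a, p_b, mm_per_px):
    """Physical distance between two pixel coordinates."""
    return math.dist(p_a, p_b) * mm_per_px

# A hypothetical 10 mm reference imaged as 200 px calibrates the photo:
s = scale_factor((100, 50), (300, 50), 10.0)   # 0.05 mm/px
print(measure((120, 400), (520, 700), s))      # a 500 px span -> 25.0 mm
```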

  12. Electrical Conductivity of Partially Molten Peridotite Analogue Under Shear: Supporting Evidence to the Partial Melting Hypothesis for the Oceanic Plate Motion

    NASA Astrophysics Data System (ADS)

    Manthilake, G.; Matsuzaki, T.; Yoshino, T.; Yamazaki, D.; Yoneda, A.; Ito, E.; Katsura, T.

    2008-12-01

    So far, two hypotheses have been proposed to explain the softening of the oceanic asthenosphere that allows smooth motion of the oceanic lithosphere: partial melting and hydrolytic weakening. Although the hydrolytic weakening hypothesis has recently been popular, Yoshino et al. [2006] suggested that it cannot explain the high and anisotropic conductivity at the top of the asthenosphere near the East Pacific Rise observed by Evans et al. [2005]. In order to explain the conductivity anisotropy of over one order of magnitude with the partial melting hypothesis, we measured the conductivity of a partially molten peridotite analogue under shear. The samples were mixtures of forsterite and chemically simplified basalt, pre-synthesized in a piston-cylinder apparatus at 1600 K and 2 GPa to obtain textural equilibrium and then formed into disks 3 mm in diameter and 1 mm thick. Conductivity measurements were likewise carried out at 1600 K and 2 GPa, in a cubic-anvil apparatus with an additional uniaxial piston. Each sample was sandwiched between two alumina pistons whose tops were cut at a 45-degree slope to generate shear. The shear strain rates were calibrated using a Mo strain marker in separate runs. The lower alumina piston was pushed at constant speed by a tungsten carbide piston embedded in the bottom anvil. Conductivity was measured simultaneously in the directions normal and parallel to the shear direction. We mainly studied a sample with 1.6 volume percent basaltic component, at shear strain rates of 0, 1.2×10⁻⁶ and 5.2×10⁻⁶ /s. The sample without shear showed no conductivity anisotropy. In contrast, the sheared samples showed one order of magnitude higher conductivity in the direction parallel to the shear than normal to it. After the total strain reached 0.3, the magnitude of the anisotropy became almost constant for both strain rates; the magnitude is thus independent of the strain rate. This study demonstrates that the anisotropy at the top of the asthenosphere can be explained by a partially molten asthenosphere sheared by plate motion.

  13. Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh

    Optimizing applications simultaneously for energy and performance is a complex problem. High-performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality, and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing, so their performance- and energy-efficient implementation on modern multicore and many-core platforms is an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations that improve performance and reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
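
    At its core, auto-tuning searches a configuration space (here: layout, unroll factor, loop schedule) against a measured cost. A minimal sketch with an exhaustive sweep over a toy space and a synthetic cost function; real tuners such as OpenTuner use ensembles of search techniques over far larger spaces, and nothing below reflects actual Tilera measurements:

```python
import itertools

def autotune(cost, space):
    """Exhaustive sweep: evaluate every configuration, keep the cheapest."""
    best_cfg, best_cost = None, float("inf")
    keys = list(space)
    for values in itertools.product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        c = cost(cfg)
        if c < best_cost:
            best_cfg, best_cost = cfg, c
    return best_cfg, best_cost

# Hypothetical tuning space and a synthetic cost with a known optimum.
space = {"layout": ["row", "blocked"], "unroll": [1, 2, 4, 8],
         "sched": ["static", "dynamic"]}
cost = lambda c: ((c["layout"] != "blocked")
                  + abs(c["unroll"] - 4) / 8
                  + 0.5 * (c["sched"] != "dynamic"))
print(autotune(cost, space)[0])   # blocked layout, unroll 4, dynamic schedule
```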

  14. Development and Application of a Process-based River System Model at a Continental Scale

    NASA Astrophysics Data System (ADS)

    Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.

    2014-12-01

    Existing global- and continental-scale river models, designed mainly for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at much finer resolution. These models are therefore not suitable for producing streamflow forecasts at fine spatial resolution or water accounts at sub-catchment level, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, built on a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation and storage routing that influence streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and the associated floodplain fluxes and stores. An auto-calibration tool built into the modelling system automatically calibrates the model for large river systems using the Shuffled Complex Evolution optimiser and user-defined objective functions; this tool makes the model computationally efficient and practical for large-basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km2. Calibration and validation results show highly satisfactory performance, and the model has been operationalised in BoM for producing the fluxes and stores required for national water accounting. This paper introduces the newly developed river system model, describing its conceptual hydrological framework, the methods used to represent the different hydrological processes, and the results and evaluation of model performance. The operational implementation of the model for water accounting is also discussed.

  15. Analysis on detection accuracy of binocular photoelectric instrument optical axis parallelism digital calibration instrument

    NASA Astrophysics Data System (ADS)

    Ying, Jia-ju; Yin, Jian-ling; Wu, Dong-sheng; Liu, Jie; Chen, Yu-dan

    2017-11-01

    Low-light-level night vision devices and thermal infrared imaging binocular photoelectric instruments are widely used. Maladjustment of the parallelism of a binocular instrument's ocular axes causes the observer symptoms such as dizziness and nausea when the instrument is used for a long time. A digital calibration instrument has been developed for detecting the parallelism of the ocular axes of binocular photoelectric equipment, allowing the optical axis deviation to be measured quantitatively. As a testing instrument, its precision must be much higher than that of the instruments under test. This paper analyzes the factors that influence detection accuracy. Factors at each step of the testing process affect the precision of the detecting instrument; they can be divided into two categories: those that directly affect the position of the reticle image, and those that affect the calculation of the center of the reticle image. The synthesized error is calculated, and the errors are then distributed reasonably to ensure the accuracy of the calibration instrument.

  16. The effect of mixed dopants on the stability of Fricke gel dosimeters

    NASA Astrophysics Data System (ADS)

    Penev, K.; Mequanint, K.

    2013-06-01

    Auto-oxidation and fast diffusion in Fricke gels are major drawbacks to widespread application of these gels in 3D dosimetry. Aiming to limit both processes, we used mixed dopants: the ferric-specific ligand xylenol orange with a ferrous-specific ligand (1,10-phenanthroline) and/or a bi-functional cross-linking agent (glyoxal). Markedly improved auto-oxidation stability was observed in the xylenol orange and phenanthroline doped gel, at the expense of increased background absorbance and faster diffusion. Addition of glyoxal limited the diffusion rate and led to partial bleaching of the gel. It is conceivable that these two new compositions may find useful practical application.

  17. Use of partial least squares regression for the multivariate calibration of hazardous air pollutants in open-path FT-IR spectrometry

    NASA Astrophysics Data System (ADS)

    Hart, Brian K.; Griffiths, Peter R.

    1998-06-01

    Partial least squares (PLS) regression has been evaluated as a robust calibration technique for over 100 hazardous air pollutants (HAPs) measured by open-path Fourier transform infrared (OP/FT-IR) spectrometry. PLS has the advantage over the currently recommended calibration method, classical least squares (CLS), in that it can use the whole usable spectrum (700-1300 cm-1, 2000-2150 cm-1, and 2400-3000 cm-1) and detect several analytes simultaneously. Up to one hundred HAPs synthetically added to OP/FT-IR backgrounds have been simultaneously calibrated and detected using PLS. PLS also has the advantage of requiring less preprocessing of spectra than CLS calibration schemes, allowing PLS to provide user-independent real-time analysis of OP/FT-IR spectra.
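To make the technique concrete, here is a minimal numpy-only PLS1 (NIPALS) sketch calibrated on synthetic spectra with two overlapping absorption bands. The band positions, noise level and component count are invented for illustration and are unrelated to the actual HAP data set.

```python
import numpy as np

def pls1(X, y, n_comp):
    """Minimal PLS1 (NIPALS), returning regression coefficients for centred X."""
    X = X - X.mean(0)
    y = y - y.mean()
    W, P, Q = [], [], []
    Xk, yk = X.copy(), y.copy()
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)           # weight vector
        t = Xk @ w                       # scores
        p = Xk.T @ t / (t @ t)           # X loadings
        q = (yk @ t) / (t @ t)           # y loading
        Xk = Xk - np.outer(t, p)         # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(Q))

rng = np.random.default_rng(1)
wn = np.linspace(700, 1300, 120)                    # wavenumber axis (cm-1)
band = lambda c, w: np.exp(-((wn - c) / w) ** 2)    # Gaussian absorption band
c1, c2 = rng.uniform(0, 1, (2, 60))                 # analyte concentrations
X = np.outer(c1, band(900, 40)) + np.outer(c2, band(950, 60))
X += 0.01 * rng.standard_normal(X.shape)            # measurement noise
B = pls1(X, c1, n_comp=2)
pred = (X - X.mean(0)) @ B + c1.mean()              # predicted concentrations
```

Because the latent variables are built from covariance with the target, the second analyte's overlapping band does not bias the prediction, which is the property that lets PLS handle many HAPs at once.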

  18. Distributed parameter system coupled ARMA expansion identification and adaptive parallel IIR filtering - A unified problem statement. [Auto Regressive Moving-Average

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Balas, M. J.

    1980-01-01

    A novel interconnection of distributed parameter system (DPS) identification and adaptive filtering is presented, which culminates in a common statement of coupled autoregressive, moving-average expansion or parallel infinite impulse response configuration adaptive parameterization. The common restricted complexity filter objectives are seen as similar to the reduced-order requirements of the DPS expansion description. The interconnection presents the possibility of an exchange of problem formulations and solution approaches not yet easily addressed in the common finite dimensional lumped-parameter system context. It is concluded that the shared problems raised are nevertheless many and difficult.

  19. Experimental determination of pCo perturbation factors for plane-parallel chambers

    NASA Astrophysics Data System (ADS)

    Kapsch, R. P.; Bruggmoser, G.; Christ, G.; Dohm, O. S.; Hartmann, G. H.; Schüle, E.

    2007-12-01

    For plane-parallel chambers used in electron dosimetry, modern dosimetry protocols recommend a cross-calibration against a calibrated cylindrical chamber. The rationale for this is the unacceptably large (up to 3-4%) chamber-to-chamber variation of the perturbation factor (p_wall)Co that has been reported for plane-parallel chambers of a given type. Some recent publications have shown that this is no longer the case for modern plane-parallel chambers. The aims of the present study are to obtain reliable information about the variation of the perturbation factors for modern types of plane-parallel chambers and, if this variation is found to be acceptably small, to determine type-specific mean values for these perturbation factors which can be used for absorbed dose measurements in electron beams using plane-parallel chambers. In an extensive multi-center study, the individual perturbation factors pCo (usually assumed to be equal to (p_wall)Co) for a total of 35 plane-parallel chambers of the Roos type, 15 chambers of the Markus type and 12 chambers of the Advanced Markus type were determined. From a total of 188 cross-calibration measurements, variations of the pCo values for different chambers of the same type of at most 1.0%, 0.9% and 0.6% were found for the Roos, Markus and Advanced Markus types, respectively. The mean pCo values obtained from all measurements are 1.0198 for the Roos, 1.0175 for the Markus and 1.0155 for the Advanced Markus chambers; the relative experimental standard deviation of the individual pCo values is less than 0.24% for all chamber types, and the relative standard uncertainty of the mean pCo values is 1.1%.

  20. Using multiple calibration sets to improve the quantitative accuracy of partial least squares (PLS) regression on open-path fourier transform infrared (OP/FT-IR) spectra of ammonia over wide concentration ranges

    USDA-ARS?s Scientific Manuscript database

    A technique of using multiple calibration sets in partial least squares regression (PLS) was proposed to improve the quantitative determination of ammonia from open-path Fourier transform infrared spectra. The spectra were measured near animal farms, and the path-integrated concentration of ammonia...

  1. High-Dose Chemotherapy and Autologous Stem Cell Transplantation in Children with High-Risk or Recurrent Bone and Soft Tissue Sarcomas

    PubMed Central

    2016-01-01

    Despite increasing evidence that high-dose chemotherapy and autologous stem cell transplantation (HDCT/auto-SCT) might improve the survival of patients with high-risk or recurrent solid tumors, therapy effectiveness for bone and soft tissue sarcoma treatment remains unclear. This study retrospectively investigated the feasibility and effectiveness of HDCT/auto-SCT for high-risk or recurrent bone and soft tissue sarcoma. A total of 28 patients (18 high-risk and 10 recurrent) underwent single or tandem HDCT/auto-SCT between October 2004 and September 2014. During follow-up of a median 15.3 months, 18 patients exhibited disease progression and 2 died of treatment-related toxicities (1 veno-occlusive disease and 1 sepsis). Overall, 8 patients remained alive and progression-free. The 3-year overall survival (OS) and event-free survival (EFS) rates for all 28 patients were 28.7% and 26.3%, respectively. In the subgroup analysis, OS and EFS rates were higher in patients with complete or partial remission prior to HDCT/auto-SCT than in those with worse responses (OS, 39.1% vs. 0.0%, P = 0.002; EFS, 36.8% vs. 0.0%, P < 0.001). Therefore, careful selection of patients who can benefit from HDCT/auto-SCT and maximal effort to reduce tumor burden prior to treatment will be important to achieve favorable outcomes in patients with high-risk or recurrent bone and soft tissue sarcomas. PMID:27366002

  2. Cross index for improving cloning selectivity by partially filling in 5'-extensions of DNA produced by type II restriction endonucleases.

    PubMed Central

    Korch, C

    1987-01-01

    A cross index is presented for using the improved selectivity offered by the Hung and Wensink (Nucl. Acids Res. 12, 1863-1874, 1984) method of partially filling in 5'-extensions produced by type II restriction endonucleases. After this treatment, DNA fragments which normally cannot be ligated to one another, can be joined providing that complementary cohesive ends have been generated. The uses of this technique, which include the prevention of DNA fragments (both vector and insert) auto-annealing, are discussed. PMID:3033600

  3. Learning in Neural Networks: VLSI Implementation Strategies

    NASA Technical Reports Server (NTRS)

    Duong, Tuan Anh

    1995-01-01

    Fully-parallel hardware neural network implementations may be applied to high-speed recognition, classification, and mapping tasks in areas such as vision, or can be used as low-cost self-contained units for tasks such as error detection in mechanical systems (e.g. autos). Learning is required not only to satisfy application requirements, but also to overcome hardware-imposed limitations such as reduced dynamic range of connections.

  4. Resin Characterization

    DTIC Science & Technology

    2015-06-01

    environmental test chamber attachment to control temperature and disposable parallel plates. The experiment can be stopped when the sample...is auto-stopping when its torque limit is reached, and to prevent too high an extent of cure that could make removal of the disposable plates from...separated by a 0.025-mm-thick Teflon spacer (International Crystal Labs) or pressed with potassium bromide (KBr) powder into pellets. The salt plate

  5. A Fully Integrated Sensor SoC with Digital Calibration Hardware and Wireless Transceiver at 2.4 GHz

    PubMed Central

    Kim, Dong-Sun; Jang, Sung-Joon; Hwang, Tae-Ho

    2013-01-01

    A single-chip sensor system-on-a-chip (SoC) that implements a 2.4 GHz radio, a complete digital baseband physical layer (PHY), a 10-bit sigma-delta analog-to-digital converter and dedicated sensor calibration hardware for industrial sensing systems has been proposed and integrated in a 0.18-μm CMOS technology. The transceiver's building blocks include a low-noise amplifier, mixer, channel filter, receiver signal-strength indicator, frequency synthesizer, voltage-controlled oscillator, and power amplifier. The digital building blocks comprise offset quadrature phase-shift keying (OQPSK) modulation and demodulation, carrier frequency offset compensation, auto-gain control, digital MAC functions, sensor calibration hardware and an embedded 8-bit microcontroller. The digital MAC functions support cyclic redundancy check (CRC), inter-symbol timing check, MAC frame control, and automatic retransmission. The embedded sensor signal processing block consists of a calibration coefficient calculator, a sensing-data calibration mapper and a sigma-delta analog-to-digital converter with a digital decimation filter. The sensitivity of the overall receiver and the error vector magnitude (EVM) of the overall transmitter are −99 dBm and 18.14%, respectively. The proposed calibration scheme reduces errors by about 45.4% compared with the improved progressive polynomial calibration (PPC) method, and the maximum current consumption of the SoC is 16 mA. PMID:23698271

  6. Computerized Analysis of Digital Photographs for Evaluation of Tooth Movement

    PubMed Central

    Toodehzaeim, Mohammad Hossein; Karandish, Maryam; Karandish, Mohammad Nabi

    2015-01-01

    Objectives: Various methods have been introduced for evaluation of tooth movement in orthodontics. The challenge is to adopt the most accurate and most beneficial method for patients. This study was designed to introduce the analysis of digital photographs with AutoCAD software as a method to evaluate tooth movement and to assess the reliability of this method. Materials and Methods: Eighteen patients were evaluated in this study. Three intraoral digital images of the buccal view were captured from each patient at half-hour intervals. All photographs were imported into AutoCAD 2011 and calibrated, and the distances between the canine and molar hooks were measured. The data were analyzed using the intraclass correlation coefficient. Results: The photographs were found to have a high reliability coefficient (P > 0.05). Conclusion: The introduced method is an accurate, efficient and reliable method for evaluation of tooth movement. PMID:26622272
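The photograph-based measurement reduces to converting pixel distances to physical units once a reference of known size has been identified in the calibrated image. A minimal sketch, with the reference size and all coordinates invented for illustration:

```python
import numpy as np

# Hypothetical calibration: a reference feature of known width (say 3.0 mm)
# spans 60 px in the photograph, fixing the image scale.
mm_per_px = 3.0 / 60.0

# Invented pixel coordinates of the two hooks in one photograph.
canine_hook = np.array([210.0, 145.0])
molar_hook = np.array([498.0, 151.0])

# Euclidean pixel distance converted to millimetres.
dist_mm = mm_per_px * np.linalg.norm(molar_hook - canine_hook)
```

Repeating the measurement on the three photographs per patient and comparing the values is what the intraclass correlation coefficient in the study quantifies.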

  7. Partial Overhaul and Initial Parallel Optimization of KINETICS, a Coupled Dynamics and Chemistry Atmosphere Model

    NASA Technical Reports Server (NTRS)

    Nguyen, Howard; Willacy, Karen; Allen, Mark

    2012-01-01

    KINETICS is a coupled dynamics and chemistry atmosphere model that is data intensive and computationally demanding. The potential performance gain from using a supercomputer motivates the adaptation from a serial version to a parallelized one. Although the initial parallelization had been done, bottlenecks caused by an abundance of communication calls between processors led to an unfavorable drop in performance. Before starting on the parallel optimization process, a partial overhaul was required because a large emphasis was placed on streamlining the code for user convenience and revising the program to accommodate the new supercomputers at Caltech and JPL. After the first round of optimizations, the partial runtime was reduced by a factor of 23; however, performance gains are dependent on the size of the data, the number of processors requested, and the computer used.

  8. Using the cloud to speed-up calibration of watershed-scale hydrologic models (Invited)

    NASA Astrophysics Data System (ADS)

    Goodall, J. L.; Ercan, M. B.; Castronova, A. M.; Humphrey, M.; Beekwilder, N.; Steele, J.; Kim, I.

    2013-12-01

    This research focuses on using the cloud to address computational challenges associated with hydrologic modeling. One example is calibration of a watershed-scale hydrologic model, which can take days of execution time on typical computers. While parallel algorithms for model calibration exist and some researchers have used multi-core computers or clusters to run these algorithms, these solutions do not fully address the challenge because (i) calibration can still be too time consuming even on multicore personal computers and (ii) few in the community have the time and expertise needed to manage a compute cluster. Given this, another option for addressing this challenge that we are exploring through this work is the use of the cloud for speeding up calibration of watershed-scale hydrologic models. The cloud used in this capacity provides a means for renting a specific number and type of machines for only the time needed to perform a calibration model run. The cloud allows one to precisely balance the duration of the calibration with the financial costs so that, if the budget allows, the calibration can be performed more quickly by renting more machines. Focusing specifically on the SWAT hydrologic model and a parallel version of the DDS calibration algorithm, we show significant speed-up across a range of watershed sizes using up to 256 cores to perform a model calibration. The tool provides a simple web-based user interface and the ability to monitor job submission and progress during calibration. Finally, this talk concludes with initial work to leverage the cloud for other tasks associated with hydrologic modeling, including tasks related to preparing inputs for constructing place-based hydrologic models.
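For readers unfamiliar with the DDS search that gets parallelised here, the sketch below follows the published Dynamically Dimensioned Search recipe (Tolson and Shoemaker, 2007) but replaces a real SWAT run with an invented sum-of-squares objective, so the numbers are purely illustrative.

```python
import numpy as np

def dds(obj, lo, hi, iters=400, r=0.2, seed=0):
    """Minimal Dynamically Dimensioned Search: greedy accept, and the number
    of perturbed dimensions shrinks as the iteration budget is used up."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi)
    fx = obj(x)
    for i in range(1, iters + 1):
        p = 1.0 - np.log(i) / np.log(iters)   # probability of perturbing a dim
        mask = rng.random(x.size) < p
        if not mask.any():
            mask[rng.integers(x.size)] = True  # always perturb at least one
        cand = x.copy()
        cand[mask] += r * (hi - lo)[mask] * rng.standard_normal(mask.sum())
        cand = np.clip(cand, lo, hi)           # simple bound handling
        fc = obj(cand)
        if fc < fx:                            # greedy acceptance
            x, fx = cand, fc
    return x, fx

# Toy calibration: squared error between trial parameters and a known target,
# standing in for a watershed model's error statistic.
target = np.array([0.3, 2.0, -1.0])
sse = lambda theta: float(np.sum((theta - target) ** 2))
best, err = dds(sse, lo=[-5, -5, -5], hi=[5, 5, 5])
```

The cloud version in the talk parallelises the expensive `obj` evaluations (full model runs) across rented machines; the search logic itself stays this simple.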

  9. Improving the accuracy of ionization chamber dosimetry in small megavoltage x-ray fields

    NASA Astrophysics Data System (ADS)

    McNiven, Andrea L.

    The dosimetry of small x-ray fields is difficult, but important, in many radiation therapy delivery methods. The accuracy of ion chambers for small-field applications, however, is limited by the relatively large size of the chamber with respect to the field size, leading to partial volume effects, lateral electronic disequilibrium and calibration difficulties. The goal of this dissertation was to investigate the use of ionization chambers for dosimetry in small megavoltage photon beams, with the aim of improving clinical dose measurements in stereotactic radiotherapy and helical tomotherapy. A new method for the direct determination of the sensitive volume of small-volume ion chambers using micro computed tomography (μCT) was investigated using four nominally identical small-volume (0.56 cm3) cylindrical ion chambers. Agreement between their measured relative volumes and ionization measurements (within 2%) demonstrated the feasibility of volume determination through μCT. Cavity-gas calibration coefficients were also determined, demonstrating the promise of accurate ion chamber calibration based partially on μCT. The accuracy of relative dose factor measurements in 6 MV stereotactic x-ray fields (5 to 40 mm diameter) was investigated using a set of prototype plane-parallel ionization chambers (diameters of 2, 4, 10 and 20 mm). Chamber- and field-size-specific correction factors (CSFQ), which account for perturbation of the secondary electron fluence, were calculated using Monte Carlo simulation methods (BEAM/EGSnrc simulations). These correction factors (e.g., CSFQ = 1.76 for the 2 mm chamber in a 5 mm field) allow for accurate relative dose factor (RDF) measurement when applied to ionization readings under conditions of electronic disequilibrium. With respect to the dosimetry of helical tomotherapy, a novel application of the ion chambers was developed to characterize the fan beam size and effective dose rate. 
Characterization was based on an adaptation of the computed tomography dose index (CTDI), a concept normally used in diagnostic radiology. This involved experimental determination of the fan beam thickness using the ion chambers to acquire fan beam profiles and extrapolating to a 'zero-size' detector. In conclusion, improvements have been made in the accuracy of small-field dosimetry measurements in stereotactic radiotherapy and helical tomotherapy. This was accomplished through the introduction of an original technique involving micro-CT imaging for sensitive volume determination and, potentially, ion chamber calibration coefficients; the use of appropriate Monte Carlo derived correction factors for RDF measurement; and the exploitation of the partial volume effect for helical tomotherapy fan beam dosimetry. With improved dosimetry for a wide range of challenging small x-ray field situations, it is expected that patient radiation safety will be maintained and that clinical trials will adopt calibration protocols specialized for modern radiotherapy with small fields or beamlets. Keywords: radiation therapy, ionization chambers, small field dosimetry, stereotactic radiotherapy, helical tomotherapy, micro-CT.

  10. A Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) Quantitative Analysis Method Based on the Auto-Selection of an Internal Reference Line and Optimized Estimation of Plasma Temperature.

    PubMed

    Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong

    2018-01-01

    The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and estimation of plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of internal reference line and the optimized estimation of plasma temperature is proposed. The internal reference line of each species is automatically selected from analytical lines by a programmable procedure through easily accessible parameters. Furthermore, the self-absorption effect of the internal reference line is considered during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature based on the calculation results from the Boltzmann plot. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of Cr, Ni, and Fe calculated concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrated an improvement compared with the classical CF-LIBS method and the promising potential of in situ and real-time application.
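The PSO step can be illustrated in isolation. The sketch below is a generic particle swarm optimiser applied to a stand-in misfit function with a known optimum at 9,500 K; the real method would substitute the Boltzmann-plot consistency criterion as the objective, which is not reproduced here.

```python
import numpy as np

def pso(obj, lo, hi, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser for a scalar objective on a box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n, lo.size))          # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest, pval = x.copy(), np.array([obj(p) for p in x])
    g = pbest[pval.argmin()]                       # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([obj(p) for p in x])
        better = val < pval                        # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g, pval.min()

# Stand-in misfit: quadratic bowl with its minimum at T = 9500 K.
misfit = lambda T: (T[0] - 9500.0) ** 2 / 1e6
g, best_val = pso(misfit, lo=[5000.0], hi=[15000.0])
```

The appeal of PSO in this setting is that it needs only objective evaluations, so any self-absorption-corrected Boltzmann-plot statistic can be dropped in without derivatives.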

  11. Highly accelerated cardiac cine parallel MRI using low-rank matrix completion and partial separability model

    NASA Astrophysics Data System (ADS)

    Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie

    2016-05-01

    This paper presents a new approach to highly accelerated dynamic parallel MRI using low-rank matrix completion and the partial separability (PS) model. In data acquisition, k-space data is moderately randomly undersampled at the central k-space navigator locations, but highly undersampled at the outer k-space for each temporal frame. In reconstruction, the navigator data is first recovered from the undersampled data using structured low-rank matrix completion. After all the unacquired navigator data is estimated, the partial separability model is used to obtain partial k-t data. Then a parallel imaging method is used to reconstruct the entire dynamic image series from the highly undersampled data. The proposed method has been shown to achieve high-quality reconstructions with reduction factors up to 31 and a temporal resolution of 29 ms, where the conventional PS method fails.
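The low-rank completion step can be illustrated with a generic alternating-projection sketch on a synthetic matrix (random data with a known rank and a 60% sampling mask, none of it MRI data; the actual method completes a structured navigator matrix rather than this plain one).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 4)) @ rng.standard_normal((4, 40))  # rank-4 truth
mask = rng.random(A.shape) < 0.6                                 # sampled entries

X = np.where(mask, A, 0.0)                  # start from zero-filled data
for _ in range(500):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :4] * s[:4]) @ Vt[:4]         # project onto rank-4 matrices
    X[mask] = A[mask]                       # re-impose the observed entries

rel_err = np.linalg.norm(X - A) / np.linalg.norm(A)
```

Alternating between the rank constraint and data consistency is the same mechanism, at toy scale, that lets the missing navigator samples be filled in before the PS model and parallel imaging take over.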

  12. Rapid quantification of casein in skim milk using Fourier transform infrared spectroscopy, enzymatic perturbation, and multiway partial least squares regression: Monitoring chymosin at work.

    PubMed

    Baum, A; Hansen, P W; Nørgaard, L; Sørensen, John; Mikkelsen, J D

    2016-08-01

    In this study, we introduce enzymatic perturbation combined with Fourier transform infrared (FTIR) spectroscopy as a concept for quantifying casein in subcritically heated skim milk using chemometric multiway analysis. Chymosin is a protease that specifically cleaves caseins. As a result of hydrolysis, all casein proteins clot to form a creamy precipitate, while whey proteins remain in the supernatant. We monitored the cheese-clotting reaction in real time using FTIR and analyzed the resulting evolution profiles to establish calibration models using parallel factor analysis and multiway partial least squares regression. Because we observed casein-specific kinetic changes, the retrieved models were independent of the chemical background matrix and were therefore robust against possible covariance effects. We tested the robustness of the models by spiking the milk solutions with whey, calcium, and cream. This method can be used at different stages in the dairy production chain to ensure the quality of the delivered milk. In particular, the cheese-making industry can benefit from such methods to optimize production control. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  13. KINKFOLD—an AutoLISP program for construction of geological cross-sections using borehole image data

    NASA Astrophysics Data System (ADS)

    Özkaya, Sait Ismail

    2002-04-01

    KINKFOLD is an AutoLISP program designed to construct geological cross-sections from borehole image or dip meter logs. The program uses the kink-fold method for cross-section construction. Beds are folded around hinge lines as angle bisectors so that bedding thickness remains unchanged. KINKFOLD may be used to model a wide variety of parallel fold structures, including overturned and faulted folds, and folds truncated by unconformities. The program accepts data from vertical or inclined boreholes. The KINKFOLD program cannot be used to model fault drag, growth folds, inversion structures or disharmonic folds where the bed thickness changes either because of deformation or deposition. Faulted structures and similar folds can be modelled by KINKFOLD by omitting dip measurements within fault drag zones and near axial planes of similar folds.

  14. Flight Investigation of Effects of Selected Operating Conditions on the Bending and Torsional Moments Encountered by a Helicopter Rotor Blade

    NASA Technical Reports Server (NTRS)

    Ludi, LeRoy H.

    1961-01-01

    Flight tests have been conducted with a single-rotor helicopter to determine the effects of partial-power descents with forward speed, high-speed level turns, pull-outs from autorotation, and high-forward-speed high-rotor-speed autorotation on the flapwise bending and torsional moments of the rotor blade. One blade of the helicopter was equipped at 14 percent and 40 percent of the blade radius with strain gages calibrated to measure moments rather than stresses. The results indicate that the maximum moments encountered in partial-power descents with forward speed tend to be generally reduced from the maximum moments encountered during partial-power descents at zero forward speed. High-speed level turns and pull-outs from autorotation caused retreating-blade stall, which produced torsional moments at the 14-percent-radius station (values up to 2,400 inch-pounds) that were as large as those encountered during previous investigations of retreating-blade stall (values up to 2,500 inch-pounds). High-forward-speed high-rotor-speed autorotation produced flapwise bending moments (values up to 7,200 inch-pounds) at the 40-percent-radius station which were as large as the flapwise bending moments (values up to 7,800 inch-pounds) at the 14-percent-radius station encountered during partial-power vertical descents. The results of the present investigation (tip-speed ratios up to 0.325 and an unaccelerated level-flight mean lift coefficient of about 0.6), in combination with the related results, indicate that partial-power descents at zero forward speed produce the largest rotor-blade vibratory moments. However, inasmuch as these large moments occur only during 1 percent of the cycles, and 88 percent of the cycles in partial-power descents are at moment values less than 70 percent of these maximum values, other conditions, such as high-speed flight where large moments are combined with large percentages of time spent, must not be neglected in any rotor-blade service-life assessment.

  15. Acceleration of atmospheric Cherenkov telescope signal processing to real-time speed with the Auto-Pipe design system

    NASA Astrophysics Data System (ADS)

    Tyson, Eric J.; Buckley, James; Franklin, Mark A.; Chamberlain, Roger D.

    2008-10-01

    The imaging atmospheric Cherenkov technique for high-energy gamma-ray astronomy is emerging as an important new technique for studying the high-energy universe. Current experiments have data rates of ≈20 TB/year and duty cycles of about 10%. In the future, more sensitive experiments may produce up to 1000 TB/year. The data analysis task for these experiments requires keeping up with this data rate in close to real time. Such data analysis is a classic example of a streaming application with very high performance requirements. This class of application often benefits greatly from non-traditional approaches to computation, including special-purpose hardware (FPGAs and ASICs) or sophisticated parallel processing techniques. However, designing, debugging, and deploying to these architectures is difficult, and thus they are not widely used by the astrophysics community. This paper presents the Auto-Pipe design toolset, which has been developed to address many of the difficulties in taking advantage of complex streaming computer architectures for such applications. Auto-Pipe incorporates a high-level coordination language, functional and performance simulation tools, and the ability to deploy applications to sophisticated architectures. Using the Auto-Pipe toolset, we have implemented the front-end portion of an imaging Cherenkov data analysis application, suitable for real-time or offline analysis. The application operates on data from the VERITAS experiment and shows how Auto-Pipe can greatly ease performance optimization and application deployment on a wide variety of platforms. We demonstrate a performance improvement over a traditional software approach of 32x using an FPGA solution and 3.6x using a multiprocessor-based solution.

  16. Arc-Free High-Power dc Switch

    NASA Technical Reports Server (NTRS)

    Miller, W. N.; Gray, O. E.

    1982-01-01

    Hybrid switch allows high-power direct current to be turned on and off without arcing or erosion. Switch consists of bank of transistors in parallel with mechanical contacts. Transistor bank makes and breaks switched circuit; contacts carry current only during steady-state "on" condition. Designed for Space Shuttle orbiter, hybrid switch can be used also in high-power control circuits in aircraft, electric autos, industrial furnaces, and solar-cell arrays.

  17. Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring.

    PubMed

    Song, Kai-Tai; Tai, Jen-Chao

    2006-10-01

    Pan-tilt-zoom (PTZ) cameras have been widely used in recent years for monitoring and surveillance applications. These cameras provide flexible view selection as well as a wider observation range. This makes them suitable for vision-based traffic monitoring and enforcement systems. To employ PTZ cameras for image measurement applications, one first needs to calibrate the camera to obtain meaningful results. For instance, the accuracy of estimating vehicle speed depends on the accuracy of camera calibration and that of vehicle tracking results. This paper presents a novel calibration method for a PTZ camera overlooking a traffic scene. The proposed approach requires no manual operation to select the positions of special features. It automatically uses a set of parallel lane markings and the lane width to compute the camera parameters, namely, focal length, tilt angle, and pan angle. Image processing procedures have been developed for automatically finding parallel lane markings. Interesting experimental results are presented to validate the robustness and accuracy of the proposed method.
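The lane-marking step amounts to intersecting the image lines of the parallel markings at their vanishing point, from which the paper then derives focal length, tilt angle and pan angle (that derivation is not reproduced here). A sketch with invented endpoint coordinates:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product of points)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(lines):
    """Least-squares intersection of homogeneous lines l . x = 0: the point is
    the right singular vector of the line stack with smallest singular value."""
    L = np.asarray(lines, float)
    x = np.linalg.svd(L)[2][-1]
    return x[:2] / x[2]           # back to inhomogeneous pixel coordinates

# Two synthetic lane markings, drawn so that they converge at (320, 80);
# all coordinates are made up for the example.
l1 = line_through((100, 480), (265, 180))
l2 = line_through((540, 480), (375, 180))
vp = vanishing_point([l1, l2])
```

With more than two detected markings the same SVD gives a least-squares vanishing point, which is why automatic lane detection makes the calibration robust.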

  18. Analysis of the Laser Calibration System for the CMS HCAL at CERN's Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Lebolo, Luis

    2005-11-01

    The European Organization for Nuclear Research's (CERN) Large Hadron Collider uses the Compact Muon Solenoid (CMS) detector to measure collision products from proton-proton interactions. CMS uses a hadron calorimeter (HCAL) to measure the energy and position of quarks and gluons by reconstructing their hadronic decay products. An essential component of the detector is the calibration system, which was evaluated in terms of its misalignment, linearity, and resolution. To analyze the data, the authors created scripts in ROOT 5.02/00 and C++. The authors also used Mathematica 5.1 to perform complex mathematics and AutoCAD 2006 to produce optical ray traces. The misalignment of the optical components was found to be satisfactory; the Hybrid Photodiodes (HPDs) were confirmed to be linear; the constant, noise and stochastic contributions to the resolution were analyzed; and the quantum efficiency of most HPDs was determined to be approximately 40%. With a better understanding of the laser calibration system, one can further understand and improve the HCAL.

  19. Calibrators measurement system for headlamp tester of motor vehicle base on machine vision

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Zhang, Fan; Xu, Xi-ping; Zheng, Zhe

    2014-09-01

    With the development of photoelectric detection technology, machine vision is finding wider use in industry. This paper mainly introduces a calibrator measuring system for motor-vehicle headlamp testers, the core of which is a CCD image sampling system. It presents the measuring principle for the optical axis angle and light intensity, and proves the linear relationship between the calibrator's facula illumination and the image-plane illumination, providing an important specification of the CCD imaging system. Image processing in MATLAB yields the flare's geometric midpoint and average gray level. Fitting these statistics by the method of least squares gives a regression equation relating illumination to gray level. The error of the experimental results is analyzed, and the combined standard uncertainty and the uncertainty sources for the optical axis angle are given. The average measuring accuracy of the optical axis angle is controlled to within 40''. The whole testing process relies on digital means rather than operator judgment, offering higher accuracy, better repeatability, and greater stability than comparable measuring systems.
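
    The least-squares step described above, fitting a linear regression between illumination and mean gray level and then inverting it, can be sketched as follows. The sample values and the linear model are illustrative assumptions, not data from the paper.

```python
import numpy as np

# Hypothetical calibration samples: illuminance (lux) vs. mean gray level
illuminance = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
gray_level = np.array([31.0, 59.0, 92.0, 118.0, 151.0])

# Least-squares fit of the linear regression gray = a * E + b
a, b = np.polyfit(illuminance, gray_level, deg=1)

def estimate_illuminance(gray):
    """Invert the regression to estimate illuminance from a gray level."""
    return (gray - b) / a

print(a, b, estimate_illuminance(100.0))
```

Once the regression is validated, only gray levels need to be measured at test time; the fitted coefficients carry the photometric calibration.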

  20. A Data-driven Approach for Forecasting Next-day River Discharge

    NASA Astrophysics Data System (ADS)

    Sharif, H. O.; Billah, K. S.

    2017-12-01

    This study focuses on evaluating the performance of the Soil and Water Assessment Tool (SWAT) eco-hydrological model, a simple Auto-Regressive with eXogenous input (ARX) model, and a gene expression programming (GEP)-based model in one-day-ahead forecasting of discharge of a subtropical basin (the upper Kentucky River Basin). The three models were calibrated with daily flow at a US Geological Survey (USGS) stream gauging station not affected by flow regulation for the period 2002-2005. The calibrated models were then validated at the same gauging station, as well as at another USGS gauge 88 km downstream, for the period 2008-2010. The results suggest that the simple models outperform the sophisticated hydrological model, with GEP having the advantage of generating functional relationships that allow scientific investigation of the complex nonlinear interrelationships among input variables. Unlike SWAT, the GEP and, to some extent, ARX models are less sensitive to the length of the calibration time series and do not require a spin-up period.
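
    An ARX model of the kind evaluated above reduces to an ordinary least-squares fit once the regressors are laid out. Below is a minimal one-day-ahead ARX(1,1) sketch on synthetic data; the model order, the rainfall input, and all numbers are assumptions for illustration, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily series: discharge with persistence, driven by rainfall
n = 400
rain = rng.gamma(shape=2.0, scale=1.5, size=n)
q = np.zeros(n)
for t in range(1, n):
    q[t] = 0.7 * q[t - 1] + 0.5 * rain[t - 1] + 0.1 * rng.standard_normal()

# ARX(1,1): q[t] ~ a*q[t-1] + b*rain[t-1] + c, fitted by least squares
X = np.column_stack([q[:-1], rain[:-1], np.ones(n - 1)])
y = q[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-day-ahead forecast from the most recent day's state
forecast = X[-1] @ coef
print(coef, forecast)
```

Because the forecast uses only yesterday's observed state, no spin-up period is needed, which matches the behavior noted in the abstract.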

  1. Impact of automatic calibration techniques on HMD life cycle costs and sustainable performance

    NASA Astrophysics Data System (ADS)

    Speck, Richard P.; Herz, Norman E., Jr.

    2000-06-01

    Automatic test and calibration has become a valuable feature in many consumer products--ranging from antilock braking systems to auto-tune TVs. This paper discusses HMDs (Helmet Mounted Displays) and how similar techniques can reduce life cycle costs and increase sustainable performance if they are integrated into a program early enough. Optical ATE (Automatic Test Equipment) is already zeroing distortion in the HMDs and thereby making binocular displays a practical reality. A suitcase sized, field portable optical ATE unit could re-zero these errors in the Ready Room to cancel the effects of aging, minor damage and component replacement. Planning on this would yield large savings through relaxed component specifications and reduced logistic costs. Yet, the sustained performance would far exceed that attained with fixed calibration strategies. Major tactical benefits can come from reducing display errors, particularly in information fusion modules and virtual `beyond visual range' operations. Some versions of the ATE described are in production and examples of high resolution optical test data will be discussed.

  2. Comparison of the IAEA TRS-398 and AAPM TG-51 absorbed dose to water protocols in the dosimetry of high-energy photon and electron beams

    NASA Astrophysics Data System (ADS)

    Saiful Huq, M.; Andreo, Pedro; Song, Haijun

    2001-11-01

    The International Atomic Energy Agency (IAEA TRS-398) and the American Association of Physicists in Medicine (AAPM TG-51) have published new protocols for the calibration of radiotherapy beams. These protocols are based on the use of an ionization chamber calibrated in terms of absorbed dose to water in a standards laboratory's reference quality beam. This paper compares the recommendations of the two protocols in two ways: (i) by analysing in detail the differences in the basic data included in the two protocols for photon and electron beam dosimetry and (ii) by performing measurements in clinical photon and electron beams and determining the absorbed dose to water following the recommendations of the two protocols. Measurements were made with two Farmer-type ionization chambers and three plane-parallel ionization chamber types in 6, 18 and 25 MV photon beams and 6, 8, 10, 12, 15 and 18 MeV electron beams. The Farmer-type chambers used were NE 2571 and PTW 30001, and the plane-parallel chambers were a Scanditronix-Wellhöfer NACP and Roos, and a PTW Markus chamber. For photon beams, the measured ratios TG-51/TRS-398 of absorbed dose to water Dw ranged between 0.997 and 1.001, with a mean value of 0.999. The ratios for the beam quality correction factors kQ were found to agree to within about +/-0.2% despite significant differences in the method of beam quality specification for photon beams and in the basic data entering into kQ. For electron beams, dose measurements were made using direct ND,w calibrations of cylindrical and plane-parallel chambers in a 60Co gamma-ray beam, as well as cross-calibrations of plane-parallel chambers in a high-energy electron beam. 
For the direct ND,w calibrations the ratios TG-51/TRS-398 of absorbed dose to water Dw were found to lie between 0.994 and 1.018 depending upon the chamber and electron beam energy used, with mean values of 0.996, 1.006, and 1.017, respectively, for the cylindrical, well-guarded and not well-guarded plane-parallel chambers. The Dw ratios measured for the cross-calibration procedures varied between 0.993 and 0.997. The largest discrepancies for electron beams between the two protocols arise from the use of different data for the perturbation correction factors pwall and pdis of cylindrical and plane-parallel chambers, all in 60Co. A detailed analysis of the reasons for the discrepancies is made which includes comparing the formalisms, correction factors and the quantities in the two protocols.

  3. Real time high frequency monitoring of water quality in river streams using a UV-visible spectrometer: interest, limits and consequences for monitoring strategies

    NASA Astrophysics Data System (ADS)

    Faucheux, Mikaël; Fovet, Ophélie; Gruau, Gérard; Jaffrézic, Anne; Petitjean, Patrice; Gascuel-Odoux, Chantal; Ruiz, Laurent

    2013-04-01

    Stream water chemistry is highly variable in space and time, so high-frequency water quality measurement methods are likely to lead to conceptual advances in the hydrological sciences. Sub-daily data on water quality improve the characterization of pollutant sources and pathways during flood events as well as over long-term periods [1]. However, real-time, high-frequency monitoring devices need to be properly calibrated and validated in real streams. This study analyses data from in situ monitoring of stream water quality. During two hydrological years (2010-11, 2011-12), a submersible UV-visible spectrometer (Scan Spectrolyser) was used for surface water quality measurement at the outlet of a headwater catchment located at Kervidy-Naizin, Western France (AgrHys long-term hydrological observatory, http://www.inra.fr/ore_agrhys/). The spectrometer is reagentless and equipped with an auto-cleaning system. It allows real-time, in situ and high-frequency (20 min) measurements and uses a multi-wavelength spectrum (200-750 nm) for simultaneous measurement of nitrate, dissolved organic carbon (DOC) and total suspended solids (TSS). A global calibration based on a PLS (Partial Least Squares) regression is provided by the manufacturer as the default configuration of the UV-visible spectrometer. We carried out a local calibration of the spectrometer based on nitrate and DOC concentrations analysed in the laboratory from daily manual sampling and sub-daily automatic sampling of flood events. TSS results are compared with 15 min turbidity records from a continuous turbidimeter (Ponsel). The results show a good correlation between laboratory data and spectrometer data both during base flow periods and flood events. However, the local calibration gives better results than the global one. Nutrient flux estimates based on the high-frequency time series and on various lower-frequency time series (daily to monthly) are compared to discuss the implications for environmental monitoring strategies. 
Such monitoring methods can therefore be valuable for designing the monitoring strategy of an environmental observatory, providing dense time series likely to reveal patterns or trends through appropriate approaches such as spectral analysis [2]. 1. Wade, A.J. et al., HESS Discuss., 2012. 9(5), p.6458-6506. 2. Aubert, A. et al., submitted to EGU 2013-4745 vol. 15.

  4. Intraoperative Near-infrared Imaging for Parathyroid Gland Identification by Auto-fluorescence: A Feasibility Study.

    PubMed

    De Leeuw, Frederic; Breuskin, Ingrid; Abbaci, Muriel; Casiraghi, Odile; Mirghani, Haïtham; Ben Lakhdar, Aïcha; Laplace-Builhé, Corinne; Hartl, Dana

    2016-09-01

    Parathyroid glands (PGs) can be particularly hard to distinguish from surrounding tissue and thus can be damaged or removed during thyroidectomy. Postoperative hypoparathyroidism is the most common complication after thyroidectomy. Very recently, it has been found that the parathyroid tissue shows near-infrared (NIR) auto-fluorescence which could be used for intraoperative detection, without any use of contrast agents. The work described here presents a histological validation ex vivo of the NIR imaging procedure and evaluates intraoperative PG detection by NIR auto-fluorescence using for the first time to our knowledge a commercially available clinical NIR imaging device. Ex vivo study on resected operative specimens combined with a prospective in vivo study of consecutive patients who underwent total or partial thyroid, or parathyroid surgery at a comprehensive cancer center. During surgery, any tissue suspected to be a potential PG by the surgeon was imaged with the Fluobeam 800 (®) system. NIR imaging was compared to conventional histology (ex vivo) and/or visual identification by the surgeon (in vivo). We have validated NIR auto-fluorescence with an ex vivo study including 28 specimens. Sensitivity and specificity were 94.1 and 80 %, respectively. Intraoperative NIR imaging was performed in 35 patients and 81 parathyroids were identified. In 80/81 cases, the fluorescence signal was subjectively obvious on real-time visualization. We determined that PG fluorescence is 2.93 ± 1.59 times greater than thyroid fluorescence in vivo. Real-time NIR imaging based on parathyroid auto-fluorescence is fast, safe, and non-invasive and shows very encouraging results, for intraoperative parathyroid identification.

  5. Long-term dynamic and pseudo-state modeling of complete partial nitrification process at high nitrogen loading rates in a sequential batch reactor (SBR).

    PubMed

    Soliman, Moomen; Eldyasti, Ahmed

    2017-06-01

    Recently, partial nitrification has been adopted widely, either for the nitrite shunt process or as an intermediate nitrite-generation step for the Anammox process. However, partial nitrification has been hindered by the complexity of maintaining stable nitrite accumulation at high nitrogen loading rates (NLRs), which affects the feasibility of the process for high-nitrogen-content wastewater. Thus, the operational data of a lab-scale SBR performing complete partial nitrification as the first step of the nitrite shunt process at NLRs of 0.3-1.2 kg/(m³·d) have been used to calibrate and validate a process model developed in BioWin® in order to describe the long-term dynamic behavior of the SBR. Moreover, an identifiability analysis step has been introduced into the calibration protocol to eliminate the need for respirometric analysis in SBR models. The calibrated model was able to accurately predict the daily effluent ammonia, nitrate, nitrite and alkalinity concentrations and pH under all the different operational conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. A new predictive multi-zone model for HCCI engine combustion

    DOE PAGES

    Bissoli, Mattia; Frassoldati, Alessio; Cuoci, Alberto; ...

    2016-06-30

    This work introduces a new predictive multi-zone model for the description of combustion in Homogeneous Charge Compression Ignition (HCCI) engines. The model exploits the existing OpenSMOKE++ computational suite to handle detailed kinetic mechanisms, providing reliable predictions of the in-cylinder auto-ignition processes. All the elements with a significant impact on combustion performance and emissions, such as turbulence, heat and mass exchanges, crevices, residual burned gases, and thermal and feed stratification, are taken into account. Compared to other computational approaches, this model improves the description of mixture stratification phenomena by coupling a wall heat transfer model derived from CFD applications with a proper turbulence model. Furthermore, the calibration of this multi-zone model requires only three parameters, which can be derived from a non-reactive CFD simulation: these adaptive variables depend only on the engine geometry and remain fixed across a wide range of operating conditions, allowing the prediction of auto-ignition, pressure traces and pollutants. This computational framework enables the use of detailed kinetic mechanisms, as well as Rate of Production Analysis (RoPA) and Sensitivity Analysis (SA), to investigate the complex chemistry involved in auto-ignition and pollutant formation. In the final sections of the paper, these capabilities are demonstrated through comparison with experimental data.

  7. Weighted partial least squares based on the error and variance of the recovery rate in calibration set.

    PubMed

    Yu, Shaohui; Xiao, Xue; Ding, Hong; Xu, Ge; Li, Haixia; Liu, Jing

    2017-08-05

    Quantitative analysis is very difficult for the excitation-emission fluorescence spectroscopy of multi-component mixtures whose fluorescence peaks overlap severely. As an effective method for quantitative analysis, partial least squares can extract latent variables from both the independent and the dependent variables, so it can model multiple correlations between variables. However, several factors usually affect the prediction results of partial least squares, such as noise and the distribution and number of samples in the calibration set. This work focuses on the calibration-set problems mentioned above. Firstly, the outliers in the calibration set are removed by leave-one-out cross-validation. Then, according to two different prediction requirements, the EWPLS method and the VWPLS method are proposed. The independent and dependent variables are weighted in the EWPLS method by the maximum error of the recovery rate and in the VWPLS method by the maximum variance of the recovery rate. Three organic substances with severely overlapping excitation-emission fluorescence spectra are selected for the experiments. The step adjustment parameter, the iteration number and the sample amount in the calibration set are discussed. The results show that the EWPLS and VWPLS methods are superior to the PLS method, especially in the case of small calibration sets. Copyright © 2017 Elsevier B.V. All rights reserved.
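
    The sample-weighting idea described above can be sketched generically: scale each centered calibration sample by the square root of its weight, then run standard PLS1 (NIPALS). The uniform weights and synthetic data below merely stand in for the paper's recovery-rate-based EWPLS/VWPLS weighting rules, which are not reproduced here.

```python
import numpy as np

def weighted_pls1(X, y, w, n_components=2):
    """Sample-weighted PLS1 via NIPALS: scale centered rows by sqrt(w)."""
    w = np.asarray(w, float)
    mx, my = np.average(X, axis=0, weights=w), np.average(y, weights=w)
    s = np.sqrt(w)[:, None]
    Xd, yd = (X - mx) * s, (y - my) * s.ravel()
    Ws, Ps, bs = [], [], []
    for _ in range(n_components):
        wv = Xd.T @ yd
        wv /= np.linalg.norm(wv)          # loading weight
        t = Xd @ wv                        # score
        p = Xd.T @ t / (t @ t)             # X loading
        b = (yd @ t) / (t @ t)             # inner regression coefficient
        Xd = Xd - np.outer(t, p)           # deflate
        yd = yd - b * t
        Ws.append(wv); Ps.append(p); bs.append(b)
    W, P = np.array(Ws).T, np.array(Ps).T
    beta = W @ np.linalg.solve(P.T @ W, np.array(bs))
    return beta, my - mx @ beta            # slope vector and intercept

# Synthetic linear "mixture" calibration set, uniform weights
rng = np.random.default_rng(1)
X = rng.random((40, 6))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0, 1.5]) + 0.01 * rng.standard_normal(40)
beta, b0 = weighted_pls1(X, y, np.ones(40), n_components=6)
pred = X @ beta + b0
print(np.abs(pred - y).max())
```

Down-weighting samples with poor recovery rates, as EWPLS/VWPLS do, simply replaces the uniform weight vector here.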

  8. Online Calibration of Polytomous Items Under the Generalized Partial Credit Model

    PubMed Central

    Zheng, Yi

    2016-01-01

    Online calibration is a technology-enhanced architecture for item calibration in computerized adaptive tests (CATs). Many CATs are administered continuously over a long term and rely on large item banks. To ensure test validity, these item banks need to be frequently replenished with new items, and these new items need to be pretested before being used operationally. Online calibration dynamically embeds pretest items in operational tests and calibrates their parameters as response data are gradually obtained through the continuous test administration. This study extends existing formulas, procedures, and algorithms for dichotomous item response theory models to the generalized partial credit model, a popular model for items scored in more than two categories. A simulation study was conducted to investigate the developed algorithms and procedures under a variety of conditions, including two estimation algorithms, three pretest item selection methods, three seeding locations, two numbers of score categories, and three calibration sample sizes. Results demonstrated acceptable estimation accuracy of the two estimation algorithms in some of the simulated conditions. A variety of findings were also revealed for the interaction effects of the included factors, and corresponding recommendations were made. PMID:29881063
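
    The generalized partial credit model underlying this study defines category-response probabilities from a discrimination parameter a and step parameters b_j; a minimal sketch with invented item parameters:

```python
import math

def gpcm_probs(theta, a, b):
    """Generalized partial credit model: P(X=k | theta) for k = 0..len(b).

    a is the item discrimination; b lists the step parameters b_1..b_m.
    """
    # cumulative logits z_k = sum_{j<=k} a*(theta - b_j), with z_0 = 0
    z = [0.0]
    for bj in b:
        z.append(z[-1] + a * (theta - bj))
    ez = [math.exp(v) for v in z]
    total = sum(ez)
    return [v / total for v in ez]

# Hypothetical 4-category item (3 steps) and an examinee at theta = 0.5
probs = gpcm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.0])
print(probs)
```

Online calibration estimates a and the b_j by maximizing the likelihood of such probabilities over the accumulating pretest responses.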

  9. Evaluating Otto the Auto: Does Engagement in an Interactive Website Improve Young Children's Transportation Safety?

    PubMed

    Schwebel, David C; Johnston, Anna; Shen, Jiabin; Li, Peng

    2017-07-19

    Transportation-related injuries are a leading cause of pediatric death, and effective interventions are limited. Otto the Auto is a website offering engaging, interactive activities. We evaluated Otto among a sample of sixty-nine 4- and 5-year-old children, who participated in a randomized parallel group design study. Following baseline evaluation, children engaged with either Otto or a control website for 2 weeks and then were re-evaluated. Children who used Otto failed to show increases in transportation safety knowledge or behavior compared to the control group, although there was a dosage effect whereby children who engaged in the website more with parents gained safer behavior patterns. We conclude Otto may have some efficacy when engaged by children with their parents, but continued efforts to develop and refine engaging, effective, theory-driven strategies to teach children transportation safety, including via internet, should be pursued.

  10. Evaluating Otto the Auto: Does Engagement in an Interactive Website Improve Young Children’s Transportation Safety?

    PubMed Central

    Johnston, Anna; Shen, Jiabin; Li, Peng

    2017-01-01

    Transportation-related injuries are a leading cause of pediatric death, and effective interventions are limited. Otto the Auto is a website offering engaging, interactive activities. We evaluated Otto among a sample of sixty-nine 4- and 5-year-old children, who participated in a randomized parallel group design study. Following baseline evaluation, children engaged with either Otto or a control website for 2 weeks and then were re-evaluated. Children who used Otto failed to show increases in transportation safety knowledge or behavior compared to the control group, although there was a dosage effect whereby children who engaged in the website more with parents gained safer behavior patterns. We conclude Otto may have some efficacy when engaged by children with their parents, but continued efforts to develop and refine engaging, effective, theory-driven strategies to teach children transportation safety, including via internet, should be pursued. PMID:28753920

  11. SU-E-J-208: Fast and Accurate Auto-Segmentation of Abdominal Organs at Risk for Online Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, V; Wang, Y; Romero, A

    2014-06-01

    Purpose: Various studies have demonstrated that online adaptive radiotherapy by real-time re-optimization of the treatment plan can improve organs-at-risk (OARs) sparing in the abdominal region. Its clinical implementation, however, requires fast and accurate auto-segmentation of OARs in CT scans acquired just before each treatment fraction. Auto-segmentation is particularly challenging in the abdominal region due to the frequently observed large deformations. We present a clinical validation of a new auto-segmentation method that uses fully automated non-rigid registration for propagating abdominal OAR contours from planning to daily treatment CT scans. Methods: OARs were manually contoured by an expert panel to obtain ground truth contours for repeat CT scans (3 per patient) of 10 patients. For the non-rigid alignment, we used a new non-rigid registration method that estimates the deformation field by optimizing the local normalized correlation coefficient with smoothness regularization. This field was used to propagate planning contours to repeat CTs. To quantify the performance of the auto-segmentation, we compared the propagated and ground truth contours using two widely used metrics: the Dice coefficient (Dc) and the Hausdorff distance (Hd). The proposed method was benchmarked against translation and rigid alignment based auto-segmentation. Results: For all organs, the auto-segmentation performed better than the baseline (translation) with an average processing time of 15 s per fraction CT. The overall improvements ranged from 2% (heart) to 32% (pancreas) in Dc, and 27% (heart) to 62% (spinal cord) in Hd. For liver, kidneys, gall bladder, stomach, spinal cord and heart, Dc above 0.85 was achieved. Duodenum and pancreas were the most challenging organs, both showing relatively larger spreads and medians of 0.79 and 2.1 mm for Dc and Hd, respectively. 
Conclusion: Based on the achieved accuracy and computational time we conclude that the investigated auto-segmentation method overcomes an important hurdle to the clinical implementation of online adaptive radiotherapy. Partial funding for this work was provided by Accuray Incorporated as part of a research collaboration with Erasmus MC Cancer Institute.
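
    The two contour metrics used above can be computed directly from binary masks; a numpy-only sketch (brute-force Hausdorff distance, adequate for small masks):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the voxel sets of two masks."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Tiny illustrative masks (in practice, propagated vs. ground truth contours)
A = np.array([[1, 1], [1, 0]])
B = np.array([[1, 0], [1, 0]])
print(dice(A, B), hausdorff(A, B))
```

Dice rewards volumetric overlap, while Hausdorff penalizes the single worst boundary deviation, which is why both are reported.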

  12. Prototype and Evaluation of AutoHelp: A Case-based, Web-accessible Help Desk System for EOSDIS

    NASA Technical Reports Server (NTRS)

    Mitchell, Christine M.; Thurman, David A.

    1999-01-01

    AutoHelp is a case-based, Web-accessible help desk for users of the EOSDIS. It uses a combination of advanced computer and Web technologies, knowledge-based systems tools, and cognitive engineering to offload the current, person-intensive help desk facilities at the DAACs. As a case-based system, AutoHelp starts with an organized database of previous help requests (questions and answers) indexed by a hierarchical category structure that facilitates recognition by persons seeking assistance. As an initial proof-of-concept demonstration, a month of email help requests to the Goddard DAAC were analyzed and partially organized into help request cases. These cases were then categorized to create a preliminary case indexing system, or category structure. This category structure allows potential users to identify or recognize categories of questions, responses, and sample cases similar to their needs. Year one of this research project focused on the development of a technology demonstration. User assistance 'cases' are stored in an Oracle database in a combination of tables linking prototypical questions with responses and detailed examples from the email help requests analyzed to date. When a potential user accesses the AutoHelp system, a Web server provides a Java applet that displays the category structure of the help case base organized by the needs of previous users. When the user identifies or requests a particular type of assistance, the applet uses Java database connectivity (JDBC) software to access the database and extract the relevant cases. The demonstration will include an on-line presentation of how AutoHelp is currently structured. We will show how a user might request assistance via the Web interface and how the AutoHelp case base provides assistance. The presentation will describe the DAAC data collection, case definition, and organization to date, as well as the AutoHelp architecture. 
It will conclude with the year 2 proposal to more fully develop the case base, the user interface (including the category structure), interface with the current DAAC Help System, the development of tools to add new cases, and user testing and evaluation at (perhaps) the Goddard DAAC.

  13. Neural networks for calibration tomography

    NASA Technical Reports Server (NTRS)

    Decker, Arthur

    1993-01-01

    Artificial neural networks are suitable for performing pattern-to-pattern calibrations. These calibrations are potentially useful for facilities operations in aeronautics, the control of optical alignment, and the like. Computed tomography is compared with neural net calibration tomography for estimating density from its x-ray transform. X-ray transforms are measured, for example, in diffuse-illumination, holographic interferometry of fluids. Computed tomography and neural net calibration tomography are shown to have comparable performance for a 10 degree viewing cone and 29 interferograms within that cone. The system of tomography discussed is proposed as a relevant test of neural networks and other parallel processors intended for using flow visualization data.

  14. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python

    USDA-ARS?s Scientific Manuscript database

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  15. Dynamic calibration of fast-response probes in low-pressure shock tubes

    NASA Astrophysics Data System (ADS)

    Persico, G.; Gaetani, P.; Guardone, A.

    2005-09-01

    Shock tube flows resulting from the incomplete burst of the diaphragm are investigated in connection with the dynamic calibration of fast-response pressure probes. As a result of the partial opening of the diaphragm, pressure disturbances are observed past the shock wave and the measured total pressure profile deviates from the envisaged step signal required by the calibration process. Pressure oscillations are generated as the initially normal shock wave diffracts from the diaphragm's orifice and reflects on the shock tube walls, with the lowest local frequency roughly equal to the ratio of the sound speed in the perturbed region to the shock tube diameter. The energy integral of the perturbations decreases with increasing distance from the diaphragm, as the diffracted leading shock and downwind reflections coalesce into a single normal shock. A procedure is proposed to calibrate fast-response pressure probes downwind of a partially opened shock tube diaphragm.
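
    The lowest local disturbance frequency quoted above, roughly the sound speed in the perturbed region divided by the shock tube diameter, is straightforward to estimate; the numbers below are illustrative assumptions, not values from the paper.

```python
# Lowest local disturbance frequency ~ a / D: perturbed-region sound speed
# over shock tube diameter. Both values are assumed for illustration.
a = 400.0   # sound speed in the perturbed region, m/s (assumed)
D = 0.08    # shock tube diameter, m (assumed)
f = a / D   # lowest disturbance frequency, Hz
print(f)
```

A probe intended for dynamic calibration in such a tube would need usable bandwidth well above this frequency to resolve the post-shock oscillations.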

  16. Adaptive and accelerated tracking-learning-detection

    NASA Astrophysics Data System (ADS)

    Guo, Pengyu; Li, Xin; Ding, Shaowen; Tian, Zunhua; Zhang, Xiaohu

    2013-08-01

    This paper introduces an improved online long-term visual tracking algorithm, named adaptive and accelerated TLD (AA-TLD), based on the novel Tracking-Learning-Detection (TLD) framework. The improvement focuses on two aspects. One is adaptation, which frees the algorithm from pre-defined scanning grids by generating the scale space online. The other is efficiency, which uses not only algorithm-level acceleration, such as scale prediction that employs an auto-regression and moving average (ARMA) model to learn the object motion and lessen the detector's search range, and a fixed number of positive and negative samples that ensures a constant retrieval time, but also CPU and GPU parallel technology to achieve hardware acceleration. In addition, to obtain a better effect, some details of TLD are redesigned: results are integrated using a weight that combines the normalized correlation coefficient and the scale size, and the distance-metric thresholds are adjusted online. A contrastive experiment on success rate, center location error and execution time is carried out to show a performance and efficiency upgrade over the state-of-the-art TLD, using partial TLD datasets and Shenzhou IX return capsule image sequences. The algorithm can be used in the field of video surveillance to meet the need for real-time video tracking.

  17. Multivariate analysis of organic acids in fermented food from reversed-phase high-performance liquid chromatography data.

    PubMed

    Mortera, Pablo; Zuljan, Federico A; Magni, Christian; Bortolato, Santiago A; Alarcón, Sergio H

    2018-02-01

    Multivariate calibration coupled to RP-HPLC with diode array detection (HPLC-DAD) was applied to the identification and quantitative evaluation of short-chain organic acids (malic, oxalic, formic, lactic, acetic, citric, pyruvic, succinic, tartaric, propionic and α-cetoglutaric) in fermented food. The goal of the present study was to achieve successful resolution of a system in the combined presence of strongly coeluting peaks, of distortions in the time sensors among chromatograms, and of unexpected compounds not included in the calibration step. Second-order HPLC-DAD data matrices were obtained in a short time (10 min) on a C18 column with a chromatographic system operating in isocratic mode (the mobile phase was 20 mmol L-1 phosphate buffer at pH 2.20) at a flow rate of 1.0 mL min-1 at room temperature. Parallel factor analysis (PARAFAC) and unfolded partial least-squares combined with residual bilinearization (U-PLS/RBL) were the second-order calibration algorithms selected for data processing. The analytical performance was good, with limits of detection (LODs) for the acids ranging from 0.15 to 10.0 mmol L-1 in the validation samples. The improved method was applied to the analysis of several dairy products (yoghurt, cultured milk and cheese) and wine. The method was shown to be an effective means of determining and following acid contents in fermented food, characterized by reproducibility and a simple, high-resolution and rapid procedure without derivatization of the analytes. Copyright © 2017 Elsevier B.V. All rights reserved.
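
    PARAFAC, one of the two second-order algorithms used here, is an alternating-least-squares CP decomposition of the three-way data array; a minimal rank-1 sketch on a synthetic sample × emission × excitation tensor (all profiles invented for illustration):

```python
import numpy as np

def parafac_rank1(T, n_iter=50):
    """Rank-1 CP decomposition of a 3-way tensor by alternating least squares."""
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    b, c = rng.random(J), rng.random(K)
    for _ in range(n_iter):
        # each factor solves a least-squares problem with the others fixed
        a = T.reshape(I, J * K) @ np.kron(b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', T, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', T, a, b) / ((a @ a) * (b @ b))
    return a, b, c

# Synthetic rank-1 data: concentrations x emission x excitation profiles
conc = np.array([0.5, 1.0, 2.0, 4.0])
em = np.exp(-0.5 * (np.arange(20) - 8.0) ** 2 / 9.0)
ex = np.exp(-0.5 * (np.arange(15) - 6.0) ** 2 / 4.0)
T = np.einsum('i,j,k->ijk', conc, em, ex)

a, b, c = parafac_rank1(T)
rel_err = np.linalg.norm(T - np.einsum('i,j,k->ijk', a, b, c)) / np.linalg.norm(T)
print(rel_err)
```

In real second-order calibration one rank-R component per analyte is fitted; the recovered sample-mode factor is then proportional to the analyte concentrations, which is what makes quantitation possible despite coelution.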

  18. Solving Partial Differential Equations in a data-driven multiprocessor environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudiot, J.L.; Lin, C.M.; Hosseiniyar, M.

    1988-12-31

    Partial differential equations can be found in a host of engineering and scientific problems. The emergence of new parallel architectures has spurred research in the definition of parallel PDE solvers. Concurrently, highly programmable systems such as data-flow architectures have been proposed for the exploitation of large-scale parallelism. The implementation of some partial differential equation solvers (such as the Jacobi method) on a tagged-token data-flow graph is demonstrated here. Asynchronous methods (chaotic relaxation) are studied, and new scheduling approaches (the Token No-Labeling scheme) are introduced in order to support the implementation of the asynchronous methods in a data-driven environment. New high-level data-flow language program constructs are introduced in order to handle chaotic operations. Finally, the performance of the program graphs is demonstrated by a deterministic simulation of a message-passing data-flow multiprocessor. An analysis of the overhead in the data-flow graphs is undertaken to demonstrate the limits of parallel operations in data-flow PDE program graphs.
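
    The Jacobi method mentioned above is naturally parallel: each sweep computes every interior grid point from the previous iterate only, so all updates are independent, which is what makes it attractive for data-flow execution. A minimal sketch for the 2-D Laplace equation with fixed boundaries:

```python
import numpy as np

def jacobi_laplace(u, n_iter=2000):
    """Jacobi iteration for Laplace's equation with Dirichlet boundaries.

    Every interior point is updated from the *previous* grid, so the
    updates within a sweep are independent and could run in parallel.
    """
    u = u.copy()
    for _ in range(n_iter):
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:])
        u = new
    return u

# Square plate: top edge held at 1.0, the other edges at 0.0
grid = np.zeros((20, 20))
grid[0, :] = 1.0
sol = jacobi_laplace(grid)
print(sol[10, 10])
```

Chaotic relaxation drops the sweep barrier entirely, letting points update with whatever neighbor values are currently available, which is the asynchronous variant studied in the paper.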

  19. Space radiation studies

    NASA Technical Reports Server (NTRS)

    Gregory, J. C.

    1986-01-01

    Instrument design and data analysis expertise was provided in support of several space radiation monitoring programs. The Verification of Flight Instrumentation (VFI) program at NASA included both the Active Radiation Detector (ARD) and the Nuclear Radiation Monitor (NRM). Design, partial fabrication, calibration and partial data analysis capability were provided to the ARD program, as well as detector head design and fabrication, software development and partial data analysis capability to the NRM program. The ARD flew on Spacelab-1 in 1983, performed flawlessly and was returned to MSFC after flight with unchanged calibration factors. The NRM, flown on Spacelab-2 in 1985, also performed without fault, not only recording the ambient gamma ray background on the Spacelab, but also recording radiation events of astrophysical significance.

  20. Multiple independent origins of auto-pollination in tropical orchids (Bulbophyllum) in light of the hypothesis of selfing as an evolutionary dead end.

    PubMed

    Gamisch, Alexander; Fischer, Gunter Alexander; Comes, Hans Peter

    2015-09-16

    The transition from outcrossing to selfing has long been portrayed as an 'evolutionary dead end' because, first, reversals are unlikely and, second, selfing lineages suffer from higher rates of extinction owing to a reduced potential for adaptation and the accumulation of deleterious mutations. We tested these two predictions in a clade of Madagascan Bulbophyllum orchids (30 spp.), including eight species where auto-pollinating morphs (i.e., selfers, without a 'rostellum') co-exist with their pollinator-dependent conspecifics (i.e., outcrossers, possessing a rostellum). Specifically, we addressed this issue on the basis of a time-calibrated phylogeny by means of ancestral character reconstructions and within the state-dependent evolution framework of BiSSE (Binary State Speciation and Extinction), which allowed jointly estimating rates of transition, speciation, and extinction between outcrossing and selfing. The eight species capable of selfing occurred in scattered positions across the phylogeny, with two likely originating in the Pliocene (ca. 4.4-3.1 Ma), one in the Early Pleistocene (ca. 2.4 Ma), and five since the mid-Pleistocene (ca. ≤ 1.3 Ma). We infer that this scattered phylogenetic distribution of selfing is best described by models including up to eight independent outcrossing-to-selfing transitions and very low rates of speciation (and either moderate or zero rates of extinction) associated with selfing. The frequent and irreversible outcrossing-to-selfing transitions in Madagascan Bulbophyllum are clearly congruent with the first prediction of the dead end hypothesis. The inability of our study to conclusively reject or support the likewise predicted higher extinction rate in selfing lineages might be explained by a combination of methodological limitations (low statistical power of our BiSSE approach to reliably estimate extinction in small-sized trees) and evolutionary processes (insufficient time elapsed for selfers to go extinct). 
We suggest that, in these tropical orchids, a simple genetic basis of selfing (via loss of the 'rostellum') is needed to explain the strikingly recurrent transitions to selfing, perhaps reflecting rapid response to parallel and novel selective environments over Late Quaternary (≤ 1.3 Ma) time scales.

  1. Modulation transfer function of partial gating detector by liquid crystal auto-controlling light intensity

    NASA Astrophysics Data System (ADS)

    Yang, Xusan; Tang, Yuanhe; Liu, Kai; Liu, Hanchen; Gao, Haiyang; Li, Qing; Zhang, Ruixia; Ye, Na; Liang, Yuan; Zhao, Gaoxiang

    2008-12-01

    Based on the electro-optical properties of liquid crystal, we have designed a novel partial gating detector. The liquid crystal changes its own transmission according to the incident light intensity. Every pixel of the image is modulated in real time by the liquid crystal, so that strong light is attenuated while low light passes through the detector normally. Partial gating of strong light (>10⁵ lx) can thus be achieved by this detector. The modulation transfer function (MTF) equations of the main optical sub-systems (the liquid crystal panel, the linear fiber panel and the CCD array detector) are calculated in this paper. From the relevant dimensions, the MTF of the whole system is fitted, giving MTF = 0.518 at the Nyquist frequency.
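    For incoherently cascaded subsystems, the system MTF at each spatial frequency is the product of the subsystem MTFs, which is how the combined Nyquist value is obtained from the panel, fiber and CCD contributions. A small sketch; the per-subsystem values below are invented for illustration (only the combined figure of 0.518 appears in the abstract):

    ```python
    def system_mtf(subsystem_mtfs):
        """Cascade rule: incoherent subsystem MTFs multiply at a given
        spatial frequency."""
        total = 1.0
        for m in subsystem_mtfs:
            total *= m
        return total

    # Hypothetical subsystem values at the Nyquist frequency.
    mtf_lc, mtf_fiber, mtf_ccd = 0.85, 0.95, 0.64
    print(system_mtf([mtf_lc, mtf_fiber, mtf_ccd]))
    ```

    The weakest subsystem dominates the product, which is why the sampling-limited CCD term usually sets the system value near Nyquist.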

  2. Mass separation of deuterium and helium with conventional quadrupole mass spectrometer by using varied ionization energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Yaowei; Hu, Jiansheng, E-mail: hujs@ipp.ac.cn; Wan, Zhao

    2016-03-15

    Deuterium pressure in deuterium-helium mixture gas is successfully measured by a common quadrupole mass spectrometer (model: RGA200) with a resolution of ∼0.5 atomic mass unit (AMU), by using varied ionization energy together with newly developed software and a dedicated calibration for the RGA200. The new software is developed in MATLAB with new functions: electron energy (EE) scanning, deuterium partial pressure measurement, and automatic data saving. The RGA200 with the new software is calibrated in pure deuterium and pure helium over 1.0 × 10⁻⁶–5.0 × 10⁻² Pa, and the relation between pressure and the ion current at AMU 4 under EE = 25 eV and EE = 70 eV is obtained. From the calibration results and RGA200 scans with varied ionization energy in deuterium-helium mixture gas, both the deuterium partial pressure (P_D2) and the helium partial pressure (P_He) can be obtained. The results show that the deuterium partial pressure can be measured if P_D2 > 10⁻⁶ Pa (limited by the ultimate pressure of the calibration vessel), that the helium pressure can be measured only if P_He/P_D2 > 0.45, and that the measurement error is evaluated as 15%. This method was successfully employed in the EAST 2015 summer campaign to monitor deuterium outgassing/desorption during helium discharge cleaning.
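    The varied-ionization-energy scheme works because D2 and He contribute to the AMU-4 ion current with different sensitivities at the two electron energies, so two current readings give a 2×2 linear system for the two partial pressures. A sketch with invented sensitivity coefficients and currents (the real values come from the RGA200 calibration described above):

    ```python
    import numpy as np

    # Hypothetical sensitivities (A/Pa) of the AMU-4 ion current to each gas.
    S = np.array([[2.0e-6, 1.0e-8],   # EE = 25 eV: D2 dominates, He barely ionized
                  [3.0e-6, 1.5e-6]])  # EE = 70 eV: both gases contribute

    # Invented ion currents (A); they correspond to P_D2 = 2e-3 Pa,
    # P_He = 5e-4 Pa for the sensitivities above.
    i_meas = np.array([4.005e-9, 6.75e-9])

    p_d2, p_he = np.linalg.solve(S, i_meas)  # partial pressures (Pa)
    ```

    The P_He/P_D2 > 0.45 limit quoted in the abstract reflects how ill-conditioned this system becomes when the helium contribution to the measured currents is small.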

  3. Mathematical Model and Calibration Experiment of a Large Measurement Range Flexible Joints 6-UPUR Six-Axis Force Sensor

    PubMed Central

    Zhao, Yanzhi; Zhang, Caifeng; Zhang, Dan; Shi, Zhongpan; Zhao, Tieshi

    2016-01-01

    Nowadays, improving the accuracy and enlarging the measuring range of six-axis force sensors for wider applications in aircraft landing, rocket thrust, and spacecraft docking testing experiments has become an urgent objective. However, it is still difficult to achieve high accuracy and a large measuring range with traditional parallel six-axis force sensors due to the influence of the gap and friction of the joints. Therefore, to overcome these limitations, this paper proposes a 6-Universal-Prismatic-Universal-Revolute (UPUR) joints parallel mechanism with flexible joints to develop a large measurement range six-axis force sensor. The structural characteristics of the sensor are analyzed in comparison with a traditional parallel sensor based on the Stewart platform. The force transfer relation of the sensor is deduced, and the force Jacobian matrix is obtained using screw theory in two cases: the ideal state, and the state in which the flexibility of each flexible joint is considered. The prototype and loading calibration system are designed and developed. The K value method and the least squares method are used to process the experimental data, and class I and class II linearity errors are obtained. The experimental results show that the calibration error of the K value method is more than 13.4%, while the calibration error of the least squares method is 2.67%. The experimental results prove the feasibility of the sensor and the correctness of the theoretical analysis, which are expected to be adopted in practical applications. PMID:27529244
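    The least squares calibration step can be sketched as recovering the 6×6 matrix that maps the six sensor outputs to the applied wrench from a set of loading trials. The dimensions and data below are synthetic stand-ins, not the paper's measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated calibration experiment: F = C_true @ v for known applied loads.
    C_true = rng.normal(size=(6, 6))   # hypothetical true calibration matrix
    V = rng.normal(size=(6, 50))       # 50 loading trials, 6 bridge outputs each
    F = C_true @ V                     # corresponding applied wrenches (noiseless)

    # Least-squares estimate: solve V.T @ C.T = F.T for C.
    C_est, *_ = np.linalg.lstsq(V.T, F.T, rcond=None)
    C_est = C_est.T
    ```

    With noisy measurements the same call returns the minimum-residual estimate, which is presumably why least squares outperforms the single-point K value method in the reported calibration errors.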

  4. Fixed-pressure CPAP versus auto-adjusting CPAP: comparison of efficacy on blood pressure in obstructive sleep apnoea, a randomised clinical trial.

    PubMed

    Pépin, J L; Tamisier, R; Baguet, J P; Lepaulle, B; Arbib, F; Arnol, N; Timsit, J F; Lévy, P

    2016-08-01

    Millions of individuals with obstructive sleep apnoea (OSA) are treated by CPAP aimed at reducing blood pressure (BP) and thus cardiovascular risk. However, evidence is scarce concerning the impact of different CPAP modalities on BP evolution. This double-blind, randomised clinical trial of parallel groups of patients with OSA indicated for CPAP treatment compared the efficacy of fixed-pressure CPAP (FP-CPAP) with auto-adjusting CPAP (AutoCPAP) in reducing BP. The primary endpoint was the change in office systolic BP after 4 months. Secondary endpoints included 24 h BP measurements. In total, 322 patients were randomised to FP-CPAP (n=161) or AutoCPAP (n=161). The mean apnoea+hypopnoea index (AHI) was 43/h (SD, 21); mean age was 57 (SD, 11), 70% were male; mean body mass index was 31.3 kg/m(2) (SD, 6.6) and median device use was 5.1 h/night. In the intention-to-treat analysis, office systolic blood pressure decreased by 2.2 mm Hg (95% CI -5.8 to 1.4) and 0.4 mm Hg (-4.3 to 3.4) in the FP-CPAP and AutoCPAP groups, respectively (group difference: -1.3 mm Hg (95% CI -4.1 to 1.5); p=0.37, adjusted for baseline BP values). 24 h diastolic BP (DBP) decreased by 1.7 mm Hg (95% CI -3.9 to 0.5) and 0.5 mm Hg (95% CI -2.3 to 1.3) in the FP-CPAP and AutoCPAP groups, respectively (group difference: -1.4 mm Hg (95% CI -2.7 to -0.01); p=0.048, adjusted for baseline BP values). The result was negative regarding the primary outcome of office BP, while FP-CPAP was more effective in reducing 24 h DBP (a secondary outcome). NCT01090297. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  5. A revolution in preventing fatal craniovertebral junction injuries: lessons learned from the Head and Neck Support device in professional auto racing.

    PubMed

    Kaul, Anand; Abbas, Ahmed; Smith, Gabriel; Manjila, Sunil; Pace, Jonathan; Steinmetz, Michael

    2016-12-01

    Fatal craniovertebral junction (CVJ) injuries were the most common cause of death in high-speed motor sports prior to 2001. Following the death of a mutual friend and race car driver, Patrick Jacquemart (1946-1981), biomechanical engineer Dr. Robert Hubbard, along with race car driver and brother-in-law Jim Downing, developed the concept for the Head and Neck Support (HANS) device to prevent flexion-distraction injuries during high-velocity impact. Biomechanical testing showed that neck shear and loading forces experienced during collisions were 3 times the required amount for a catastrophic injury. Crash sled testing with and without the HANS device elucidated reductions in neck tension, neck compression, head acceleration, and chest acceleration experienced by dummies during high-energy crashes. Simultaneously, motor sports accidents such as Dale Earnhardt Sr.'s fatal crash in 2001 galvanized public opinion in favor of serious safety reform. Analysis of Earnhardt's accident demonstrated that his car's velocity parallel to the barrier was more than 150 miles per hour (mph), with deceleration upon impact of roughly 43 mph in a total of 0.08 seconds. After careful review, several major racing series such as the National Association for Stock Car Auto Racing (NASCAR) and Championship Auto Racing Team (CART) made major changes to ensure the safety of drivers at the turn of the 21st century. Since the rule requiring the HANS device in professional auto racing series was put in place, there has not been a single reported case of a fatal CVJ injury.

  6. Complexity of parallel implementation of domain decomposition techniques for elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gropp, W.D.; Keyes, D.E.

    1988-03-01

    The authors discuss the parallel implementation of preconditioned conjugate gradient (PCG)-based domain decomposition techniques for self-adjoint elliptic partial differential equations in two dimensions on several architectures. The complexity of these methods is described on a variety of message-passing parallel computers as a function of the size of the problem, number of processors and relative communication speeds of the processors. They show that communication startups are very important, and that even the small amount of global communication in these methods can significantly reduce the performance of many message-passing architectures.
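    The importance of communication startups can be seen in the standard linear cost model for message passing, in which every message pays a fixed latency regardless of its length. A sketch with hypothetical machine parameters (the startup and per-word times below are invented for illustration):

    ```python
    def comm_time(n_words, t_startup, t_word):
        """Linear message-passing cost model: fixed startup latency plus a
        per-word bandwidth term."""
        return t_startup + n_words * t_word

    # Hypothetical parameters (seconds): a 100 us startup dwarfs the cost
    # of a short boundary exchange in a domain decomposition iteration.
    t_s, t_w = 100e-6, 0.1e-6
    short = comm_time(64, t_s, t_w)      # small subdomain interface exchange
    long = comm_time(65536, t_s, t_w)    # bulk transfer, bandwidth-dominated
    print(short, long)
    ```

    For the short message the startup term is over 90% of the total, which matches the paper's observation that even small amounts of global communication can dominate on startup-heavy architectures.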

  7. Nondestructive evaluation of soluble solid content in strawberry by near infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Guo, Zhiming; Huang, Wenqian; Chen, Liping; Wang, Xiu; Peng, Yankun

    This paper demonstrates the feasibility of using near infrared (NIR) spectroscopy combined with synergy interval partial least squares (siPLS) algorithms as a rapid nondestructive method to estimate the soluble solid content (SSC) in strawberry. Spectral preprocessing methods were selected by cross-validation during model calibration. The partial least squares (PLS) algorithm was used to build the regression model. The performance of the final model was evaluated according to the root mean square error of calibration (RMSEC) and correlation coefficient (R2c) in the calibration set, and tested by the root mean square error of prediction (RMSEP) and correlation coefficient (R2p) in the prediction set. The optimal siPLS model was obtained after first-derivative spectral preprocessing. The measurement results of the best model were as follows: RMSEC = 0.2259, R2c = 0.9590 in the calibration set; and RMSEP = 0.2892, R2p = 0.9390 in the prediction set. This work demonstrates that NIR spectroscopy and siPLS with efficient spectral preprocessing are a useful tool for nondestructive evaluation of SSC in strawberry.
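    The PLS calibration and its RMSEC/RMSEP evaluation can be sketched with a minimal NIPALS PLS1 implementation on synthetic "spectra". All data, dimensions and the component count below are invented, and the siPLS interval-selection step is omitted:

    ```python
    import numpy as np

    def pls1_fit(X, y, n_comp):
        """Minimal NIPALS PLS1: returns (b, b0) so that y ~ X @ b + b0."""
        Xc, yc = X - X.mean(0), y - y.mean()
        W, P, Q = [], [], []
        for _ in range(n_comp):
            w = Xc.T @ yc                      # weight vector from covariance
            w /= np.linalg.norm(w)
            t = Xc @ w                         # scores
            p = Xc.T @ t / (t @ t)             # X loadings
            q = (yc @ t) / (t @ t)             # y loading
            Xc, yc = Xc - np.outer(t, p), yc - q * t   # deflation
            W.append(w); P.append(p); Q.append(q)
        W, P = np.array(W).T, np.array(P).T
        b = W @ np.linalg.solve(P.T @ W, np.array(Q))
        return b, y.mean() - X.mean(0) @ b

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 40))                          # synthetic spectra
    y = 2.0 * X[:, 10] - 1.0 * X[:, 25] \
        + rng.normal(scale=0.05, size=300)                  # synthetic SSC

    b, b0 = pls1_fit(X[:240], y[:240], n_comp=8)
    rmsec = np.sqrt(np.mean((X[:240] @ b + b0 - y[:240]) ** 2))
    rmsep = np.sqrt(np.mean((X[240:] @ b + b0 - y[240:]) ** 2))
    ```

    RMSEC is computed on the calibration samples and RMSEP on held-out prediction samples, mirroring the split used in the abstract.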

  8. International Space Station Columbus Payload SoLACES Degradation Assessment

    NASA Technical Reports Server (NTRS)

    Hartman, William A.; Schmidl, William D.; Mikatarian, Ron; Soares, Carlos; Schmidtke, Gerhard; Erhardt, Christian

    2016-01-01

    SOLAR is a European Space Agency (ESA) payload deployed on the International Space Station (ISS) and located on the Columbus Laboratory. It is located on the Columbus External Payload Facility in a zenith location. The objective of the SOLAR payload is to study the Sun. The SOLAR payload consists of three instruments that allow for measurement of virtually the entire electromagnetic spectrum (17 nm to 2900 nm). The three payload instruments are SOVIM (SOlar Variable and Irradiance Monitor), SOLSPEC (SOLar SPECtral Irradiance measurements), and SolACES (SOLar Auto-Calibrating Extreme UV/UV Spectrophotometers).

  9. Development of one-shot aspheric measurement system with a Shack-Hartmann sensor.

    PubMed

    Furukawa, Yasunori; Takaie, Yuichi; Maeda, Yoshiki; Ohsaki, Yumiko; Takeuchi, Seiji; Hasegawa, Masanobu

    2016-10-10

    We present a measurement system for a rotationally symmetric aspheric surface that is designed for accurate and high-volume measurements. The system uses the Shack-Hartmann sensor and is capable of measuring aspheres with a maximum diameter of 90 mm in one shot. In our system, a reference surface, made with the same aspheric parameter as the test surface, is prepared. The test surface is recovered as the deviation from the reference surface using a figure-error reconstruction algorithm with a ray coordinate and angle variant table. In addition, we developed a method to calibrate the rotationally symmetric system error. These techniques produce stable measurements and high accuracy. For high-throughput measurements, a single measurement scheme and auto alignment are implemented; they produce a 4.5 min measurement time, including calibration and alignment. In this paper, we introduce the principle and calibration method of our system. We also demonstrate that our system achieved an accuracy better than 5.8 nm RMS and a repeatability of 0.75 nm RMS by comparing our system's aspheric measurement results with those of a probe measurement machine.

  10. Predicting ambient aerosol thermal-optical reflectance measurements from infrared spectra: elemental carbon

    NASA Astrophysics Data System (ADS)

    Dillner, A. M.; Takahama, S.

    2015-10-01

    Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation influencing climate and visibility and it adversely affects human health. The EC measured by thermal methods such as thermal-optical reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier transform infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive and nondestructive to the PTFE filter samples which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one developed from a uniform distribution of samples across the EC mass range (Uniform EC) and one developed from a uniform distribution of Low EC mass samples (EC < 2.4 μg, Low Uniform EC). A hybrid approach which applies the Low EC calibration to Low EC samples and the Uniform EC calibration to all other samples is used to produce predictions for Low EC samples that have mean error on par with parallel TOR EC samples in the same mass range and an estimate of the minimum detection limit (MDL) that is on par with the TOR EC MDL.
For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR as indicated by high coefficient of determination (R²; 0.96), no bias (0.00 μg m⁻³, a concentration value based on the nominal IMPROVE sample volume of 32.8 m³), low error (0.03 μg m⁻³) and reasonable normalized error (21 %). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).
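    The hybrid approach can be sketched as a simple switching rule between the two calibrations. The 2.4 μg threshold is from the text, while the switching criterion and the stand-in models below are assumptions made for illustration:

    ```python
    import numpy as np

    LOW_EC_THRESHOLD = 2.4  # ug, the Low EC cutoff quoted in the text

    def hybrid_predict(spectrum, low_model, uniform_model):
        """Apply the Low EC calibration when the uniform model predicts a low
        mass, otherwise keep the uniform-model prediction. This is one
        plausible reading of the hybrid scheme; the switching rule is an
        assumption."""
        ec_uniform = uniform_model(spectrum)
        if ec_uniform < LOW_EC_THRESHOLD:
            return low_model(spectrum)
        return ec_uniform

    # Toy stand-in calibrations: linear responses to total absorbance
    # (purely hypothetical, not fitted PLS models).
    low_model = lambda s: 0.9 * np.sum(s)
    uniform_model = lambda s: 1.0 * np.sum(s)
    ```

    In the paper both models are PLS regressions on the FT-IR spectra; the point of the sketch is only the two-regime dispatch.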

  11. Predicting ambient aerosol Thermal Optical Reflectance (TOR) measurements from infrared spectra: elemental carbon

    NASA Astrophysics Data System (ADS)

    Dillner, A. M.; Takahama, S.

    2015-06-01

    Elemental carbon (EC) is an important constituent of atmospheric particulate matter because it absorbs solar radiation influencing climate and visibility and it adversely affects human health. The EC measured by thermal methods such as Thermal-Optical Reflectance (TOR) is operationally defined as the carbon that volatilizes from quartz filter samples at elevated temperatures in the presence of oxygen. Here, methods are presented to accurately predict TOR EC using Fourier Transform Infrared (FT-IR) absorbance spectra from atmospheric particulate matter collected on polytetrafluoroethylene (PTFE or Teflon) filters. This method is similar to the procedure tested and developed for OC in prior work (Dillner and Takahama, 2015). Transmittance FT-IR analysis is rapid, inexpensive, and non-destructive to the PTFE filter samples which are routinely collected for mass and elemental analysis in monitoring networks. FT-IR absorbance spectra are obtained from 794 filter samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011. Partial least squares regression is used to calibrate sample FT-IR absorbance spectra to collocated TOR EC measurements. The FT-IR spectra are divided into calibration and test sets. Two calibrations are developed: one developed from a uniform distribution of samples across the EC mass range (Uniform EC) and one developed from a uniform distribution of low EC mass samples (EC < 2.4 μg, Low Uniform EC). A hybrid approach which applies the low EC calibration to low EC samples and the Uniform EC calibration to all other samples is used to produce predictions for low EC samples that have mean error on par with parallel TOR EC samples in the same mass range and an estimate of the minimum detection limit (MDL) that is on par with the TOR EC MDL.
For all samples, this hybrid approach leads to precise and accurate TOR EC predictions by FT-IR as indicated by high coefficient of determination (R²; 0.96), no bias (0.00 μg m⁻³, a concentration value based on the nominal IMPROVE sample volume of 32.8 m³), low error (0.03 μg m⁻³) and reasonable normalized error (21 %). These performance metrics can be achieved with various degrees of spectral pretreatment (e.g., including or excluding substrate contributions to the absorbances) and are comparable in precision and accuracy to collocated TOR measurements. Only the normalized error is higher for the FT-IR EC measurements than for collocated TOR. FT-IR spectra are also divided into calibration and test sets by the ratios OC/EC and ammonium/EC to determine the impact of OC and ammonium on EC prediction. We conclude that FT-IR analysis with partial least squares regression is a robust method for accurately predicting TOR EC in IMPROVE network samples, providing complementary information to TOR OC predictions (Dillner and Takahama, 2015) and the organic functional group composition and organic matter (OM) estimated previously from the same set of sample spectra (Ruthenburg et al., 2014).

  12. SU-E-T-677: Reproducibility of Production of Ionization Chambers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kukolowicz, P; Bulski, W; Ulkowski, P

    Purpose: To compare the reproducibility of the production of several cylindrical and plane-parallel chambers popular in Poland in terms of the calibration coefficient. Methods: The investigation was performed for PTW30013 (20 chambers), 30001 (10 chambers) and FC65-G (17 chambers) cylindrical chambers and for PPC05 (14 chambers) and Roos 34001 (8 chambers) plane-parallel chambers. The calibration factors were measured at the same accredited secondary standard laboratory in terms of dose to water. All the measurements were carried out at the same laboratory, by the same staff, in accordance with the same IAEA recommendations. All the chambers were calibrated in a Co-60 beam. Reproducibility was described in terms of the mean value, its standard deviation and the ratio of the maximum and minimum values of the calibration factors for each set of chambers separately. The combined uncertainty budget (1 SD) of the calibration factor, calculated according to IAEA-TECDOC-1585, was 0.25%. Results: The calibration coefficients for the PTW30013, 30001 and FC65-G chambers were 5.36±0.03, 5.28±0.06 and 4.79±0.015 nC/Gy respectively, and for the PPC05 and Roos chambers were 59±2 and 8.3±0.1 nC/Gy respectively. The maximum/minimum ratios of the calibration factors for the PTW30013, 30001, FC65-G, PPC05 and Roos chambers were 1.03, 1.03, 1.01, 1.14 and 1.03 respectively. Conclusion: The production of all ion chambers was very reproducible except for the Markus-type PPC05, for which a maximum/minimum ratio of calibration coefficients of 1.14 was obtained.

  13. Calibration of two passive air samplers for monitoring phthalates and brominated flame-retardants in indoor air.

    PubMed

    Saini, Amandeep; Okeme, Joseph O; Goosey, Emma; Diamond, Miriam L

    2015-10-01

    Two passive air samplers (PAS), polyurethane foam (PUF) disks and Sorbent Impregnated PUF (SIP) disks, were characterized for uptake of phthalates and brominated flame-retardants (BFRs) indoors using fully and partially sheltered housings. Based on calibration against an active low-volume air sampler for gas- and particle-phase compounds, we recommend generic sampling rates of 3.5±0.9 and 1.0±0.4 m(3)/day for partially and fully sheltered housing, respectively, which applies to gas-phase phthalates and BFRs as well as particle-phase DEHP (the latter for the partially sheltered PAS). For phthalates, partially sheltered SIPs are recommended. Further, we recommend the use of partially sheltered PAS indoors and a deployment period of one month. The sampling rate for the partially sheltered PUF and SIP of 3.5±0.9 m(3)/day is indistinguishable from that reported for fully sheltered PAS deployed outdoors, indicating the role of the housing outdoors to minimize the effect of variable wind velocities on chemical uptake, versus the partially sheltered PAS deployed indoors to maximize chemical uptake where air flow rates are low. Copyright © 2015. Published by Elsevier Ltd.

  14. DOVIS 2.0: An Efficient and Easy to Use Parallel Virtual Screening Tool Based on AutoDock 4.0

    DTIC Science & Technology

    2008-09-08

    under the GNU General Public License. Background: Molecular docking is a computational method that predicts how a ligand interacts with a receptor...Hence, it is an important tool in studying receptor-ligand interactions and plays an essential role in drug design. Particularly, molecular docking has...libraries from OpenBabel and set up a molecular data structure as a C++ object in our program. This makes handling of molecular structures (e.g., atoms

  15. The Confidence-Accuracy Relationship in Diagnostic Assessment: The Case of the Potential Difference in Parallel Electric Circuits

    ERIC Educational Resources Information Center

    Saglam, Murat

    2015-01-01

    This study explored the relationship between accuracy of and confidence in performance of 114 prospective primary school teachers in answering diagnostic questions on potential difference in parallel electric circuits. The participants were required to indicate their confidence in their answers for each question. Bias and calibration indices were…

  16. A Divergence Statistics Extension to VTK for Performance Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    This report follows the series of previous documents ([PT08, BPRT09b, PT09, BPT09, PT10, PB13]), where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
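    One common divergence statistic of the kind described is the Kullback-Leibler divergence between the observed and theoretical distributions. Whether this is the exact measure the VTK engine computes is not stated here, so treat it as an illustrative choice:

    ```python
    import numpy as np

    def kl_divergence(p_obs, p_model, eps=1e-12):
        """Kullback-Leibler divergence D(p_obs || p_model) between an observed
        empirical distribution and a theoretical one over shared bins; eps
        guards against log(0)."""
        p = np.asarray(p_obs, float)
        q = np.asarray(p_model, float)
        p = p / p.sum()
        q = q / q.sum()
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    # Discrepancy of a biased die from the uniform ("ideal") distribution.
    observed = np.array([10, 10, 10, 10, 10, 50])
    uniform = np.ones(6)
    print(kl_divergence(observed, uniform))
    ```

    The divergence is zero exactly when the two distributions agree and grows as the observed counts drift from the model, which is the distance-like behavior the report describes.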

  17. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clegg, Samuel M; Barefield, James E; Wiens, Roger C

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly-metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
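    The PCA step amounts to projecting mean-centered spectra onto their leading singular vectors, after which samples with different dominant emission lines separate in score space. A toy sketch with two synthetic "rock types" (all data invented, and SIMCA's per-class modeling omitted):

    ```python
    import numpy as np

    def pca_scores(spectra, n_comp=2):
        """Project mean-centered spectra onto their leading principal
        components via the SVD."""
        Xc = spectra - spectra.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:n_comp].T

    rng = np.random.default_rng(2)
    # Two hypothetical rock types with strong lines at different channels.
    a = rng.normal(0, 0.1, (10, 50)); a[:, 5] += 5.0   # type A: line at channel 5
    b = rng.normal(0, 0.1, (10, 50)); b[:, 30] += 5.0  # type B: line at channel 30
    scores = pca_scores(np.vstack([a, b]), n_comp=1)
    ```

    The first principal component aligns with the direction separating the two line channels, so the two groups land on opposite sides of the PC1 axis.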

  18. Autocalibrating motion-corrected wave-encoding for highly accelerated free-breathing abdominal MRI.

    PubMed

    Chen, Feiyu; Zhang, Tao; Cheng, Joseph Y; Shi, Xinwei; Pauly, John M; Vasanawala, Shreyas S

    2017-11-01

    To develop a motion-robust wave-encoding technique for highly accelerated free-breathing abdominal MRI. A comprehensive 3D wave-encoding-based method was developed to enable fast free-breathing abdominal imaging: (a) auto-calibration for wave-encoding was designed to avoid extra scan for coil sensitivity measurement; (b) intrinsic butterfly navigators were used to track respiratory motion; (c) variable-density sampling was included to enable compressed sensing; (d) golden-angle radial-Cartesian hybrid view-ordering was incorporated to improve motion robustness; and (e) localized rigid motion correction was combined with parallel imaging compressed sensing reconstruction to reconstruct the highly accelerated wave-encoded datasets. The proposed method was tested on six subjects and image quality was compared with standard accelerated Cartesian acquisition both with and without respiratory triggering. Inverse gradient entropy and normalized gradient squared metrics were calculated, testing whether image quality was improved using paired t-tests. For respiratory-triggered scans, wave-encoding significantly reduced residual aliasing and blurring compared with standard Cartesian acquisition (metrics suggesting P < 0.05). For non-respiratory-triggered scans, the proposed method yielded significantly better motion correction compared with standard motion-corrected Cartesian acquisition (metrics suggesting P < 0.01). The proposed methods can reduce motion artifacts and improve overall image quality of highly accelerated free-breathing abdominal MRI. Magn Reson Med 78:1757-1766, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  19. An in-line spectrophotometer on a centrifugal microfluidic platform for real-time protein determination and calibration.

    PubMed

    Ding, Zhaoxiong; Zhang, Dongying; Wang, Guanghui; Tang, Minghui; Dong, Yumin; Zhang, Yixin; Ho, Ho-Pui; Zhang, Xuping

    2016-09-21

    In this paper, an in-line, low-cost, miniature and portable spectrophotometric detection system is presented and used for fast protein determination and calibration in centrifugal microfluidics. Our portable detection system is configured with paired emitter and detector diodes (PEDD), where the light beam between the two LEDs is collimated with enhanced system tolerance. For the first time, a physical model of PEDD is clearly presented: the pair can be modelled as a photosensitive RC oscillator. A portable centrifugal microfluidic system that contains a wireless port in real-time communication with a smartphone has been built to show that PEDD is an effective strategy for conducting rapid protein bioassays with detection performance comparable to that of a UV-vis spectrophotometer. The choice of centrifugal microfluidics offers the unique benefit of highly parallel fluidic actuation at high accuracy with no need for a pump, as inertial forces are present within the entire spinning disc and accurately controlled by varying the spinning speed. As a demonstration experiment, we conducted the Bradford assay for bovine serum albumin (BSA) concentration calibration from 0 to 2 mg mL(-1). Moreover, a novel centrifugal disc with a spiral microchannel is proposed for automatic distribution and metering of the sample to all the parallel reactions at one time. The reported lab-on-a-disc scheme with PEDD detection may offer a solution for high-throughput assays, such as protein density calibration, drug screening and drug solubility measurement, that require the handling of a large number of reactions in parallel.
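    The calibration itself reduces to fitting a line through the absorbance readings of the BSA standards and inverting it for unknowns. The absorbance values below are invented for illustration; the real Bradford response is only approximately linear over 0-2 mg/mL:

    ```python
    import numpy as np

    # Hypothetical Bradford-assay readings for BSA standards (0-2 mg/mL).
    conc = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])             # mg/mL
    absorbance = np.array([0.02, 0.15, 0.27, 0.53, 0.78, 1.03])  # invented

    # Linear calibration curve: absorbance = slope * conc + intercept.
    slope, intercept = np.polyfit(conc, absorbance, 1)

    def concentration(a):
        """Invert the calibration line for an unknown sample's absorbance."""
        return (a - intercept) / slope
    ```

    A reading of one of the standards should map back to its known concentration, which is the self-consistency check a real-time calibration routine on the disc could perform.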

  20. Auto correlation analysis of coda waves from local earthquakes for detecting temporal changes in shallow subsurface structures - The 2011 Tohoku-Oki, Japan, earthquake -

    NASA Astrophysics Data System (ADS)

    Nakahara, H.

    2013-12-01

    For monitoring temporal changes in subsurface structures, I propose to use auto-correlation functions of coda waves from local earthquakes recorded at surface receivers, which probably contain more body waves than surface waves. Because the use of coda waves requires earthquakes, the time resolution for monitoring decreases; at regions with high seismicity, however, it may still be possible to monitor subsurface structures with sufficient time resolution. Studying the 2011 Tohoku-Oki (Mw 9.0), Japan, earthquake, for which velocity changes have already been reported in previous studies, I try to validate the method. KiK-net stations in northern Honshu are used in the analysis. For each moderate earthquake, normalized auto-correlation functions of surface records are stacked over time windows in the S-wave coda. Aligning the stacked normalized auto-correlation functions with time, I search for changes in the arrival times of phases. Phases at lag times of less than 1 s are studied because changes at shallow depths are the focus. Based on the stretching method, temporal variations in the arrival times are measured at the stations. Clear phase delays are found to be associated with the mainshock and to gradually recover with time. The phase delays are on the order of 10% on average, with a maximum of about 50% at some stations. For validation, a deconvolution analysis using surface and subsurface records at the same stations is conducted. The results show that the phase delays from the deconvolution analysis are slightly smaller than those from the auto-correlation analysis, which implies that the phases on the auto-correlations are caused by larger velocity changes at shallower depths. The auto-correlation analysis appears to have an accuracy of about several percent, which is much coarser than that of methods using earthquake doublets or borehole array data, so it may only be applicable for detecting larger changes.
In spite of these disadvantages, this analysis is still attractive because it can be applied to many records on the surface in regions where no boreholes are available. Acknowledgements: Seismograms recorded by KiK-net, managed by the National Research Institute for Earth Science and Disaster Prevention (NIED), were used in this study. This study was partially supported by the JST J-RAPID program and JSPS KAKENHI Grant Numbers 24540449 and 23540449.
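The stretching method used above to measure arrival-time changes can be sketched in a few lines: resample the post-event auto-correlation trace over a grid of stretch factors and keep the one maximizing correlation with a reference trace. This is a minimal illustration on synthetic traces, assuming a plain grid search rather than the author's exact implementation:

```python
import numpy as np

def stretching_dvv(ref, cur, t, eps_grid, t_max):
    """Grid-search stretching method: resample the current trace onto a time
    axis stretched by (1 + eps) and keep the eps maximizing correlation with
    the reference over lag times t <= t_max. The recovered eps is the
    fractional travel-time delay (dv/v = -eps)."""
    win = t <= t_max
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        stretched = np.interp(t * (1.0 + eps), t, cur)
        cc = np.corrcoef(ref[win], stretched[win])[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return best_eps, best_cc

# Synthetic auto-correlation functions: the "post-event" trace has all its
# phases delayed by 10%, as reported on average after the mainshock.
t = np.linspace(0.0, 1.0, 2001)                  # lag times up to 1 s
ref = np.exp(-3.0 * t) * np.cos(2.0 * np.pi * 10.0 * t)
delay = 0.10
cur = np.exp(-3.0 * t / (1 + delay)) * np.cos(2.0 * np.pi * 10.0 * t / (1 + delay))
eps, cc = stretching_dvv(ref, cur, t, np.linspace(0.0, 0.2, 201), t_max=0.8)
```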

  1. Asteroids as Calibration Standards in the Thermal Infrared -- Applications and Results from ISO

    NASA Astrophysics Data System (ADS)

    Müller, T. G.; Lagerros, J. S. V.

    Asteroids have been used extensively as calibration sources for ISO. We summarise the asteroid observational parameters in the thermal infrared and explain the important modelling aspects. Ten selected asteroids were extensively used for the absolute photometric calibration of ISOPHOT in the far-IR. Additionally, the point-like and bright asteroids turned out to be of great interest for many technical tests and calibration aspects. They have been used for testing the calibration for SWS and LWS, the validation of relative spectral response functions of different bands, for colour correction and filter leak tests. Currently, there is a strong emphasis on ISO cross-calibration, where the asteroids contribute in many fields. Well known asteroids have also been seen serendipitously in the CAM Parallel Mode and the PHT Serendipity Mode, allowing for validation and improvement of the photometric calibration of these special observing modes.

  2. Sol-gel auto-combustion synthesis and properties of Co2Z-type hexagonal ferrite ultrafine powders

    NASA Astrophysics Data System (ADS)

    Liu, Junliang; Yang, Min; Wang, Shengyun; Lv, Jingqing; Li, Yuqing; Zhang, Ming

    2018-05-01

    Z-type hexagonal ferrite ultrafine powders with chemical formulations of (BaxSr1-x)3Co2Fe24O41 (x varied from 0.0 to 1.0) have been synthesized by a sol-gel auto-combustion technique. The average particle sizes of the synthesized powders ranged from 2 to 5 μm. The partial substitution of Ba2+ by Sr2+ led to shrinkage of the crystal lattice and resulted in changes in the magnetic sub-lattices, which tailored the static and dynamic magnetic properties of the as-synthesized powders. As the substitution ratio of Ba2+ by Sr2+ increased, the saturation magnetization of the synthesized powders increased almost monotonically from 43.3 to 56.1 emu/g, while the real part of the permeability approached a relatively high value of about 2.2 owing to the balance between the saturation magnetization and the magnetic anisotropy field.

  3. Probabilistic density function method for nonlinear dynamical systems driven by colored noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barajas-Solano, David A.; Tartakovsky, Alexandre M.

    2016-05-01

    We present a probability density function (PDF) method for a system of nonlinear stochastic ordinary differential equations driven by colored noise. The method provides an integro-differential equation for the temporal evolution of the joint PDF of the system's state, which we close by means of a modified Large-Eddy-Diffusivity-type closure. Additionally, we introduce the generalized local linearization (LL) approximation for deriving a computable PDF equation in the form of a second-order partial differential equation (PDE). We demonstrate that the proposed closure and localization accurately describe the dynamics of the PDF in phase space for systems driven by noise with arbitrary auto-correlation time. We apply the proposed PDF method to the analysis of a set of Kramers equations driven by exponentially auto-correlated Gaussian colored noise to study the dynamics and stability of a power grid.
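A minimal sketch of the setting studied here: exponentially auto-correlated Gaussian (Ornstein-Uhlenbeck) noise driving a Kramers-type bistable oscillator. Parameters are illustrative only; the PDF equation itself is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_noise(n, dt, tau, sigma):
    """Exact update for an Ornstein-Uhlenbeck process: stationary Gaussian
    noise with autocovariance sigma^2 * exp(-|t| / tau)."""
    a = np.exp(-dt / tau)
    b = sigma * np.sqrt(1.0 - a * a)
    xi = np.empty(n)
    xi[0] = sigma * rng.standard_normal()     # start in the stationary state
    for k in range(1, n):
        xi[k] = a * xi[k - 1] + b * rng.standard_normal()
    return xi

dt, tau, sigma = 0.01, 0.5, 1.0
xi = ou_noise(200_000, dt, tau, sigma)

# Drive a Kramers-type bistable oscillator x'' = -gamma*x' + x - x^3 + xi(t)
gamma, x, v = 0.5, 1.0, 0.0
traj = np.empty(xi.size)
for k in range(xi.size):
    v += dt * (-gamma * v + x - x ** 3 + xi[k])
    x += dt * v
    traj[k] = x
```

The empirical autocorrelation of `xi` decays by a factor of e over one correlation time `tau`, which is the property the "arbitrary auto-correlation time" claim refers to.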

  4. Fuel cell serves as oxygen level detector

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Monitoring the oxygen level in the air is accomplished by a fuel cell detector whose voltage output is proportional to the partial pressure of oxygen in the sampled gas. The relationship between output voltage and partial pressure of oxygen can be calibrated.

  5. Characteristics of AZ31 Mg alloy joint using automatic TIG welding

    NASA Astrophysics Data System (ADS)

    Liu, Hong-tao; Zhou, Ji-xue; Zhao, Dong-qing; Liu, Yun-teng; Wu, Jian-hua; Yang, Yuan-sheng; Ma, Bai-chang; Zhuang, Hai-hua

    2017-01-01

    The automatic tungsten-inert gas welding (ATIGW) of AZ31 Mg alloys was performed using a six-axis robot. The evolution of the microstructure and texture of the AZ31 auto-welded joints was studied by optical microscopy, scanning electron microscopy, energy-dispersive X-ray spectroscopy, and electron backscatter diffraction. The ATIGW process resulted in coarse recrystallized grains in the heat affected zone (HAZ) and epitaxial growth of columnar grains in the fusion zone (FZ). Substantial changes of texture between the base material (BM) and the FZ were detected. The {0002} basal plane in the BM was largely parallel to the sheet rolling plane, whereas the c-axis of the crystal lattice in the FZ inclined approximately 25° with respect to the welding direction. The maximum pole density increased from 9.45 in the BM to 12.9 in the FZ. The microhardness distribution, tensile properties, and fracture features of the AZ31 auto-welded joints were also investigated.

  6. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism, but this parallelism can be difficult to exploit, particularly on complex problems. One approach to extracting this parallelism is the use of special-purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions; all local parallelism can be extracted by this approach. Second, even though the grids constructed may lack a regular global structure, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  7. NOTE: Calibration of low-energy electron beams from a mobile linear accelerator with plane-parallel chambers using both TG-51 and TG-21 protocols

    NASA Astrophysics Data System (ADS)

    Beddar, A. S.; Tailor, R. C.

    2004-04-01

    A new approach to intraoperative radiation therapy led to the development of mobile linear electron accelerators that provide lower electron energy beams than the conventional accelerators commonly encountered in radiotherapy. Such mobile electron accelerators produce electron beams with nominal energies of 4, 6, 9 and 12 MeV. This work compares the absorbed dose output calibrations using both the AAPM TG-51 and TG-21 dose calibration protocols for two types of ion chambers: a plane-parallel (PP) ionization chamber and a cylindrical ionization chamber. Our results indicate that the use of a 'Markus' PP chamber causes a 2-3% overestimation in dose output determination if accredited dosimetry calibration laboratory based chamber factors (N_D,w(60Co), N_x) are used. However, if the ionization chamber factors are derived using a cross-comparison at a high-energy electron beam, then good agreement is obtained (within 1%) with a calibrated cylindrical chamber over the entire energy range down to 4 MeV. Furthermore, even though TG-51 does not recommend using cylindrical chambers at the low energies, our results show that the cylindrical chamber agrees well with the PP chamber not only at 6 MeV but also down to 4 MeV electron beams.
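For reference, the core TG-51 dose equation that both chambers' calibrations feed into can be sketched as below. The photon-beam form is shown for brevity (electron beams, as in this study, add gradient and k'_R50-type factors on top of it), and all numbers are illustrative placeholders rather than the study's measurements:

```python
def tg51_photon_dose(M_raw, P_ion, P_tp, P_elec, P_pol, k_Q, N_Dw_Co60):
    """AAPM TG-51 absorbed dose to water, photon-beam form:
    D_w = M * k_Q * N_D,w(60Co), where the fully corrected electrometer
    reading is M = M_raw * P_ion * P_TP * P_elec * P_pol."""
    M = M_raw * P_ion * P_tp * P_elec * P_pol
    return M * k_Q * N_Dw_Co60

# Illustrative placeholder values, not measured data from this work:
dose_cGy = tg51_photon_dose(M_raw=20.0,      # raw reading, nC
                            P_ion=1.005, P_tp=1.02, P_elec=1.0, P_pol=1.002,
                            k_Q=0.975,
                            N_Dw_Co60=5.0)   # calibration factor, cGy/nC
```

The 2-3% overestimation reported for the Markus chamber corresponds directly to an error in the product k_Q * N_D,w entering this expression.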

  8. Design of k-Space Channel Combination Kernels and Integration with Parallel Imaging

    PubMed Central

    Beatty, Philip J.; Chang, Shaorong; Holmes, James H.; Wang, Kang; Brau, Anja C. S.; Reeder, Scott B.; Brittain, Jean H.

    2014-01-01

    Purpose: In this work, a new method is described for producing local k-space channel combination kernels using a small amount of low-resolution multichannel calibration data. Additionally, this work describes how these channel combination kernels can be combined with local k-space unaliasing kernels produced by the calibration phase of parallel imaging methods such as GRAPPA, PARS and ARC. Methods: Experiments were conducted to evaluate both the image quality and computational efficiency of the proposed method compared to a channel-by-channel parallel imaging approach with image-space sum-of-squares channel combination. Results: Results indicate comparable image quality overall, with some very minor differences seen in reduced field-of-view imaging. It was demonstrated that this method enables a speed-up in computation time on the order of 3-16X for 32-channel data sets. Conclusion: The proposed method enables high-quality channel combination to occur earlier in the reconstruction pipeline, reducing computational and memory requirements for image reconstruction. PMID:23943602
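The calibration step that fits channel-combination weights from low-resolution multichannel data reduces to a least-squares solve. A toy sketch with a 1x1 "kernel" on synthetic coil data follows (illustrative only; the paper's kernels span a local k-space neighborhood):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data: 4 coil channels, each a fixed complex
# weighting of the same underlying k-space signal, plus noise.
n_cal, n_coils = 256, 4
true_w = rng.standard_normal(n_coils) + 1j * rng.standard_normal(n_coils)
s = rng.standard_normal(n_cal) + 1j * rng.standard_normal(n_cal)  # "object" k-space
cal = np.outer(s, true_w)
cal += 0.01 * (rng.standard_normal(cal.shape) + 1j * rng.standard_normal(cal.shape))

# Least-squares channel-combination weights mapping multichannel calibration
# data onto the combined target signal -- the normal-equation step a k-space
# combination-kernel calibration performs (here with a single-point kernel).
w_hat, *_ = np.linalg.lstsq(cal, s, rcond=None)
combined = cal @ w_hat
```

Note that the weights themselves are not unique when the coil data are nearly rank-one; what matters (and what the least-squares fit guarantees) is that the combined output reproduces the target signal.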

  9. Partial fourier and parallel MR image reconstruction with integrated gradient nonlinearity correction.

    PubMed

    Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A

    2016-06-01

    To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image domain based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that the integrated GNL correction reduces the image blurring introduced by the conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc. © 2015 Wiley Periodicals, Inc.

  10. Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)

    DTIC Science & Technology

    2016-09-17

    test machine. Experimental data is reduced and finite element simulations are conducted in parallel with the test based on experimental strain conditions. Optimization methods...be used directly in finite element simulations of more complex geometries. Keywords: Axial/torsional experimentation • Plasticity • Constitutive model

  11. Efficient Implementation of Multigrid Solvers on Message-Passing Parallel Systems

    NASA Technical Reports Server (NTRS)

    Lou, John

    1994-01-01

    We discuss our implementation strategies for finite-difference multigrid partial differential equation (PDE) solvers on message-passing systems. Our target parallel architectures are Intel parallel computers: the Delta and the Paragon systems.
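The multigrid building block being parallelized can be sketched serially as a two-grid correction cycle for the 1D Poisson equation (a standard textbook sketch, not the authors' code):

```python
import numpy as np

def weighted_jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Damped Jacobi smoother for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        left = np.r_[0.0, u[:-1]]
        right = np.r_[u[1:], 0.0]
        u = (1.0 - w) * u + w * 0.5 * (left + right + h * h * f)
    return u

def residual(u, f, h):
    left = np.r_[0.0, u[:-1]]
    right = np.r_[u[1:], 0.0]
    return f - (2.0 * u - left - right) / (h * h)

def two_grid_cycle(u, f, h, sweeps=3):
    """Pre-smooth, restrict the residual, solve the coarse problem exactly,
    prolong the correction, post-smooth."""
    u = weighted_jacobi(u, f, h, sweeps)
    r = residual(u, f, h)
    rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])    # full weighting
    nc, hc = rc.size, 2.0 * h
    A = (np.diag(2.0 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
         - np.diag(np.ones(nc - 1), -1)) / (hc * hc)
    ec = np.linalg.solve(A, rc)                            # exact coarse solve
    e = np.zeros(u.size)                                   # linear prolongation
    e[1::2] = ec
    e[0::2] = 0.5 * (np.r_[0.0, ec] + np.r_[ec, 0.0])
    return weighted_jacobi(u + e, f, h, sweeps)

n = 63                                     # interior points; coarse grid nests
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x)         # exact solution u = sin(pi*x)
u = np.zeros(n)
for _ in range(2):
    u = two_grid_cycle(u, f, h)
```

On a message-passing system, the smoothing and residual steps partition naturally by grid blocks, with only nearest-neighbor halo exchange required.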

  12. Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture. Part 1: Non-reactive physical mass transfer across the wetted wall column: Original Research Article: Hierarchical calibration and validation for modeling bench-scale solvent-based carbon capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao; Xu, Zhijie; Lai, Canhai

    A hierarchical model calibration and validation is proposed for quantifying the confidence level of mass transfer prediction using a computational fluid dynamics (CFD) model, where solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to parallel bench-scale experimental data. Two unit problems with increasing levels of complexity are proposed to break down the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems that separate the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e. CO2 mass transfer across a falling ethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Bench-scale parallel experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, namely Henry's constant and gas diffusivity, are calibrated to produce the posterior distributions, which will be used as the input for the second unit problem to address the chemical absorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.

  13. Modeling Photo-multiplier Gain and Regenerating Pulse Height Data for Application Development

    NASA Astrophysics Data System (ADS)

    Aspinall, Michael D.; Jones, Ashley R.

    2018-01-01

    Systems that adopt organic scintillation detector arrays often require a calibration process prior to the intended measurement campaign to correct for significant performance variances between detectors within the array. These differences exist because of the low manufacturing tolerances associated with photo-multiplier tube technology and environmental influences. Differences in detector response can be corrected for by adjusting the supplied photo-multiplier tube voltage to control its gain, exploiting the effect that this has on the pulse height spectra from a gamma-only calibration source with a defined photo-peak. Automated methods that analyze these spectra and adjust the photo-multiplier tube bias accordingly are emerging for hardware that integrates acquisition electronics and high voltage control. However, development of such algorithms requires access to the hardware, multiple detectors and a calibration source for prolonged periods, all with associated constraints and risks. In this work, we report on a software function and related models developed to rescale and regenerate pulse height data acquired from a single scintillation detector. Such a function can be used to generate significant and varied pulse height data for integration-testing algorithms that automatically response-match multiple detectors using pulse height spectrum analysis. Furthermore, a function of this sort removes the dependence on multiple detectors, digital analyzers and a calibration source. Results show a good match between the real and regenerated pulse height data. The function has also been used successfully to develop auto-calibration algorithms.
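The rescaling idea, scale every event's pulse height by a gain ratio and re-histogram, can be sketched as follows (hypothetical function and synthetic spectrum, not the reported software):

```python
import numpy as np

def rescale_pulse_heights(pulse_heights, gain_ratio, bins):
    """Regenerate a pulse-height spectrum as if acquired at a different PMT
    gain: multiply every event's pulse height by gain_ratio, then re-bin."""
    return np.histogram(np.asarray(pulse_heights) * gain_ratio, bins=bins)[0]

rng = np.random.default_rng(2)
# Synthetic photo-peak-like feature from a gamma-only calibration source
events = rng.normal(loc=1.0, scale=0.05, size=50_000)
bins = np.linspace(0.0, 3.0, 301)
spec_lo = rescale_pulse_heights(events, 1.0, bins)   # nominal gain
spec_hi = rescale_pulse_heights(events, 2.0, bins)   # doubled gain
```

Doubling the gain ratio shifts the photo-peak to twice the pulse height, which is exactly the behavior an auto-calibration algorithm must detect and undo.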

  14. Iterative algorithms for large sparse linear systems on parallel computers

    NASA Technical Reports Server (NTRS)

    Adams, L. M.

    1982-01-01

    Algorithms are developed for assembling in parallel the sparse systems of linear equations that result from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.

  15. Kepler AutoRegressive Planet Search (KARPS)

    NASA Astrophysics Data System (ADS)

    Caceres, Gabriel

    2018-01-01

    One of the main obstacles in detecting faint planetary transits is the intrinsic stellar variability of the host star. The Kepler AutoRegressive Planet Search (KARPS) project implements statistical methodology associated with autoregressive processes (in particular, ARIMA and ARFIMA) to model stellar lightcurves in order to improve exoplanet transit detection. We also develop a novel Transit Comb Filter (TCF) applied to the AR residuals which provides a periodogram analogous to the standard Box-fitting Least Squares (BLS) periodogram. We train a random forest classifier on known Kepler Objects of Interest (KOIs) using select features from different stages of this analysis, and then use ROC curves to define and calibrate the criteria to recover the KOI planet candidates with high fidelity. These statistical methods are detailed in a contributed poster (Feigelson et al., this meeting). These procedures are applied to the full DR25 dataset of NASA's Kepler mission. Using the classification criteria, a vast majority of known KOIs are recovered and dozens of new KARPS Candidate Planets (KCPs) are discovered, including ultra-short period exoplanets. The KCPs will be briefly presented and discussed.
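The autoregressive stage can be illustrated by reducing the ARIMA modeling to AR(1): fit the coefficient by Yule-Walker and take residuals, on which a transit search such as the TCF periodogram would then operate (synthetic lightcurve, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy lightcurve: AR(1) stellar variability plus a box-shaped transit train.
n = 4000
phi_true, sigma = 0.9, 1.0
noise = np.empty(n)
noise[0] = sigma * rng.standard_normal() / np.sqrt(1 - phi_true ** 2)
for k in range(1, n):
    noise[k] = phi_true * noise[k - 1] + sigma * rng.standard_normal()
transit = np.where(np.arange(n) % 200 < 5, -3.0, 0.0)   # depth 3, period 200
flux = noise + transit
flux = flux - flux.mean()

# Yule-Walker estimate of the AR(1) coefficient, then the residual series.
# This stands in for the (much richer) ARIMA/ARFIMA stage of KARPS.
phi_hat = float(np.sum(flux[1:] * flux[:-1]) / np.sum(flux[:-1] ** 2))
resid = flux[1:] - phi_hat * flux[:-1]
```

Whitening compresses the smooth stellar variability, so the box transit survives in the residuals as a spike train at the transit period, which is the structure the comb filter is designed to pick up.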

  16. Cool Flames in Propane-Oxygen Premixtures at Low and Intermediate Temperatures at Reduced-Gravity

    NASA Technical Reports Server (NTRS)

    Pearlman, Howard; Foster, Michael; Karabacak, Devrez

    2003-01-01

    The Cool Flame Experiment aims to address the role of diffusive transport in the structure and stability of gas-phase, non-isothermal, hydrocarbon oxidation reactions, cool flames and auto-ignition fronts in an unstirred, static reactor. These reactions cannot be studied on Earth, where natural convection due to self-heating during the course of slow reaction dominates diffusive transport and produces spatio-temporal variations in the thermal, and thus species concentration, profiles. On Earth, reactions with associated Rayleigh numbers (Ra) less than the critical Ra for onset of convection (Ra_cr ≈ 600) cannot be achieved in laboratory-scale vessels for conditions representative of nearly all low-temperature reactions. In fact, Ra at 1g ranges from 10^4 to 10^5 (or larger), while at reduced gravity these values can be reduced by two to six orders of magnitude (below Ra_cr), depending on the reduced-gravity test facility. Currently, laboratory (1g) and NASA's KC-135 reduced-gravity aircraft studies are being conducted in parallel with the development of a detailed chemical kinetic model that includes thermal and species diffusion. Select experiments have also been conducted at partial gravity (Martian, 0.3g) aboard the KC-135 aircraft. This paper discusses these preliminary results for propane-oxygen premixtures in the low to intermediate temperature range (310-350 °C) at reduced gravity.

  17. An HF coaxial bridge for measuring impedance ratios up to 1 MHz

    NASA Astrophysics Data System (ADS)

    Kucera, J.; Sedlacek, R.; Bohacek, J.

    2012-08-01

    A four-terminal pair coaxial ac bridge developed for calibrating both resistance and capacitance ratios and working in the frequency range from 100 kHz up to 1 MHz is described. A reference inductive voltage divider (IVD) makes it possible to calibrate ratios 1:1 and 10:1 with an uncertainty of a few parts in 10^5. The IVD is calibrated by means of a series-parallel capacitance device (SPCD). Use of the same ac bridge, with minimal changes, for calibrating the SPCD, the IVD and unknown impedances simplifies the whole calibration process. The bridge balance conditions are fulfilled with simple capacitance and resistance decades and by injecting a voltage supplied from an auxiliary direct digital synthesizer. Bridge performance was checked on the basis of resistance ratio and capacitance ratio measurements.

  18. A parallel calibration utility for WRF-Hydro on high performance computers

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, C.; Kotamarthi, V. R.

    2017-12-01

    A successful modeling of complex hydrological processes comprises establishing an integrated hydrological model which simulates the hydrological processes in each water regime, calibrating and validating the model performance based on observation data, and estimating the uncertainties from different sources, especially those associated with parameters. Such a model system requires large computing resources and often has to be run on High Performance Computers (HPC). The recently developed WRF-Hydro modeling system provides a significant advancement in the capability to simulate regional water cycles more completely. The WRF-Hydro model has a large range of parameters, such as those in the input table files (GENPARM.TBL, SOILPARM.TBL and CHANPARM.TBL) and several distributed scaling factors such as OVROUGHRTFAC. These parameters affect the behavior and outputs of the model and thus may need to be calibrated against observations in order to obtain good modeling performance. A parameter calibration tool for automated calibration and uncertainty estimation of the WRF-Hydro model can therefore provide significant convenience for the modeling community. In this study, we developed a customized tool based on the parallel version of the model-independent parameter estimation and uncertainty analysis tool PEST, enabling it to run on HPC systems under the PBS and SLURM workload managers and job schedulers. We also developed a series of PEST input file templates specifically for WRF-Hydro model calibration and uncertainty analysis. Here we present a flood case study that occurred in April 2013 over the Midwest. The sensitivities and uncertainties are analyzed using the customized PEST tool we developed.
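A PEST-style calibration loop, farm candidate parameter sets out to parallel workers and keep the best by RMSE against observations, can be sketched as follows (a toy exponential-recession model stands in for a WRF-Hydro run; all names and values are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

# Toy "observations": runoff recession obs = a * exp(-b * t) with a=2, b=0.7
t_obs = np.linspace(0.0, 5.0, 50)
obs = 2.0 * np.exp(-0.7 * t_obs)

def rmse_for(params):
    """Run the (stand-in) model for one parameter set and score it."""
    a, b = params
    sim = a * np.exp(-b * t_obs)
    return float(np.sqrt(np.mean((sim - obs) ** 2))), params

# Parallel sweep over candidate parameter sets, as a PEST-like manager would
# dispatch model runs across HPC workers.
candidates = [(a, b) for a in np.linspace(1.0, 3.0, 21)
                     for b in np.linspace(0.1, 1.5, 15)]
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(rmse_for, candidates))
best_rmse, best_params = min(results, key=lambda r: r[0])
```

Real PEST replaces the brute-force sweep with gradient-based Gauss-Levenberg-Marquardt updates, but the dispatch-evaluate-rank structure is the same.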

  19. A fast combination calibration of foreground and background for pipelined ADCs

    NASA Astrophysics Data System (ADS)

    Kexu, Sun; Lenian, He

    2012-06-01

    This paper describes a fast digital calibration scheme for pipelined analog-to-digital converters (ADCs). The proposed method corrects the nonlinearity caused by finite opamp gain and capacitor mismatch in multiplying digital-to-analog converters (MDACs). The calibration technique combines the advantages of both foreground and background calibration schemes. In this combination calibration algorithm, a novel parallel background calibration with signal-shifted correlation is proposed, and its calibration cycle is very short. The details of this technique are described using the example of a 14-bit 100 Msample/s pipelined ADC. The high convergence speed of this background calibration is achieved by three means. First, a modified 1.5-bit stage is proposed in order to allow the injection of a large pseudo-random dither without missing codes. Second, before the signal is correlated, it is shifted according to the input signal so that the correlation error converges quickly. Finally, the front pipeline stages are calibrated simultaneously rather than stage by stage to reduce the calibration tracking constants. Simulation results confirm that the combination calibration has a fast startup process and a short background calibration cycle of 2 × 2^21 conversions.
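The correlation-based background calibration rests on a simple identity: the output's correlation with the known pseudo-random dither isolates the stage gain, because the input signal averages out against the dither. A sketch under an idealized gain-error-only stage model (not the paper's 1.5-bit stage):

```python
import numpy as np

rng = np.random.default_rng(3)

# Idealized stage: an unknown gain error multiplies both the input signal
# and the injected +/-1 pseudo-random dither.
n = 1 << 16
signal = rng.uniform(-1.0, 1.0, n)        # stage input, uncorrelated with dither
pn = rng.choice([-1.0, 1.0], n)           # pseudo-random dither sequence
delta = 0.25                              # dither amplitude
true_gain = 0.98                          # actual interstage gain (ideal: 1.0)
out = true_gain * (signal + delta * pn)

# Correlating the output with the known PN sequence averages the signal term
# to ~0 and leaves delta * gain; divide by delta to estimate the gain.
gain_est = float(np.mean(out * pn) / delta)
corrected = out / gain_est
```

The estimate's variance shrinks as 1/n, which is why the paper's shifting and simultaneous-stage tricks matter: they cut the number of conversions needed for the correlation to converge.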

  20. Method and apparatus for obtaining stack traceback data for multiple computing nodes of a massively parallel computer system

    DOEpatents

    Gooding, Thomas Michael; McCarthy, Patrick Joseph

    2010-03-02

    A data collector for a massively parallel computer system obtains call-return stack traceback data for multiple nodes by retrieving partial call-return stack traceback data from each node, grouping the nodes in subsets according to the partial traceback data, and obtaining further call-return stack traceback data from a representative node or nodes of each subset. Preferably, the partial data is a respective instruction address from each node, nodes having identical instruction addresses being grouped together in the same subset. Preferably, a single node of each subset is chosen and full stack traceback data is retrieved from the call-return stack within the chosen node.
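The grouping step of the claimed method is straightforward to sketch (hypothetical node ids and addresses):

```python
from collections import defaultdict

def group_nodes_by_address(partial_tracebacks):
    """Group node ids by their partial traceback datum (one instruction
    address per node); full stack data would then be pulled only from one
    representative per group."""
    groups = defaultdict(list)
    for node_id, addr in partial_tracebacks.items():
        groups[addr].append(node_id)
    return {addr: sorted(nodes) for addr, nodes in groups.items()}

# Hypothetical partial data: node id -> instruction address
partial = {0: 0x4008F0, 1: 0x4008F0, 2: 0x401A2C, 3: 0x4008F0, 4: 0x401A2C}
groups = group_nodes_by_address(partial)
representatives = {addr: nodes[0] for addr, nodes in groups.items()}
```

On a machine with tens of thousands of nodes, this reduces the expensive full-traceback retrievals from one per node to one per distinct address.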

  1. Acidity measurement of iron ore powders using laser-induced breakdown spectroscopy with partial least squares regression.

    PubMed

    Hao, Z Q; Li, C M; Shen, M; Yang, X Y; Li, K H; Guo, L B; Li, X Y; Lu, Y F; Zeng, X Y

    2015-03-23

    Laser-induced breakdown spectroscopy (LIBS) with partial least squares regression (PLSR) has been applied to measuring the acidity of iron ore, which can be defined by the concentrations of oxides: CaO, MgO, Al₂O₃, and SiO₂. With the conventional internal standard calibration, it is difficult to establish the calibration curves of CaO, MgO, Al₂O₃, and SiO₂ in iron ore due to the serious matrix effects. PLSR is effective to address this problem due to its excellent performance in compensating the matrix effects. In this work, fifty samples were used to construct the PLSR calibration models for the above-mentioned oxides. These calibration models were validated by the 10-fold cross-validation method with the minimum root-mean-square errors (RMSE). Another ten samples were used as a test set. The acidities were calculated according to the estimated concentrations of CaO, MgO, Al₂O₃, and SiO₂ using the PLSR models. The average relative error (ARE) and RMSE of the acidity achieved 3.65% and 0.0048, respectively, for the test samples.
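The PLSR core can be sketched with a minimal NIPALS PLS1 fit on synthetic "spectra" (illustrative only; the study's models also involve 10-fold cross-validation and real LIBS data):

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1: returns regression coefficients B plus the means
    needed for centering, so that y ≈ (X - x_mean) @ B + y_mean."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk                      # weight vector from covariance
        w = w / np.linalg.norm(w)
        t = Xk @ w                         # scores
        tt = t @ t
        p = Xk.T @ t / tt                  # X loadings
        qk = (yk @ t) / tt                 # y loading
        Xk = Xk - np.outer(t, p)           # deflate
        yk = yk - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, x_mean, y_mean

rng = np.random.default_rng(5)
# Synthetic "spectra": 50 samples x 20 channels driven by 3 latent oxide
# concentrations, with an acidity-like target linear in them.
conc = rng.uniform(0.0, 1.0, (50, 3))
pure = rng.standard_normal((3, 20))
X = conc @ pure + 0.01 * rng.standard_normal((50, 20))
y = conc @ np.array([1.0, -0.5, 2.0])

B, x_mean, y_mean = pls1_fit(X, y, n_comp=3)
y_hat = (X - x_mean) @ B + y_mean
```

Because the latent components absorb the correlated channel responses, the fit stays stable where a channel-by-channel internal-standard calibration would suffer from matrix effects.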

  2. Discrimination of healthy and osteoarthritic articular cartilages by Fourier transform infrared imaging and partial least squares-discriminant analysis

    PubMed Central

    Zhang, Xue-Xi; Yin, Jian-Hua; Mao, Zhi-Hua; Xia, Yang

    2015-01-01

    Fourier transform infrared imaging (FTIRI) combined with chemometric algorithms has strong potential to obtain complex chemical information from biological tissues. FTIRI and partial least squares-discriminant analysis (PLS-DA) were used to differentiate healthy and osteoarthritic (OA) cartilages for the first time. A PLS model was built on a calibration matrix of spectra randomly selected from the FTIRI spectral datasets of healthy and lesioned cartilage. Leave-one-out cross-validation was performed in the PLS model, and the fitting coefficient between actual and predicted categorical values of the calibration matrix reached 0.95. In the calibration and prediction matrices, the percentages of successfully identified healthy and lesioned cartilage spectra were 100% and 90.24%, respectively. These results demonstrated that FTIRI combined with PLS-DA could provide a promising approach for the categorical identification of healthy and OA cartilage specimens. PMID:26057029

  3. Discrimination of healthy and osteoarthritic articular cartilages by Fourier transform infrared imaging and partial least squares-discriminant analysis.

    PubMed

    Zhang, Xue-Xi; Yin, Jian-Hua; Mao, Zhi-Hua; Xia, Yang

    2015-06-01

    Fourier transform infrared imaging (FTIRI) combined with chemometric algorithms has strong potential to obtain complex chemical information from biological tissues. FTIRI and partial least squares-discriminant analysis (PLS-DA) were used to differentiate healthy and osteoarthritic (OA) cartilages for the first time. A PLS model was built on a calibration matrix of spectra randomly selected from the FTIRI spectral datasets of healthy and lesioned cartilage. Leave-one-out cross-validation was performed in the PLS model, and the fitting coefficient between actual and predicted categorical values of the calibration matrix reached 0.95. In the calibration and prediction matrices, the percentages of successfully identified healthy and lesioned cartilage spectra were 100% and 90.24%, respectively. These results demonstrated that FTIRI combined with PLS-DA could provide a promising approach for the categorical identification of healthy and OA cartilage specimens.

  4. Calibrated complex impedance of CHO cells and E. coli bacteria at GHz frequencies using scanning microwave microscopy

    NASA Astrophysics Data System (ADS)

    Tuca, Silviu-Sorin; Badino, Giorgio; Gramse, Georg; Brinciotti, Enrico; Kasper, Manuel; Oh, Yoo Jin; Zhu, Rong; Rankl, Christian; Hinterdorfer, Peter; Kienberger, Ferry

    2016-04-01

    The application of scanning microwave microscopy (SMM) to extract calibrated electrical properties of cells and bacteria in air is presented. From the S11 images, after calibration, complex impedance and admittance images of Chinese hamster ovary (CHO) cells and E. coli bacteria deposited on a silicon substrate have been obtained. The broadband capabilities of SMM have been used to characterize the bio-samples between 2 GHz and 20 GHz. The resulting calibrated cell and bacteria admittances at 19 GHz were Y_cell = 185 μS + j285 μS and Y_bacteria = 3 μS + j20 μS, respectively. A combined circuitry-3D finite element method EMPro model has been developed and used to investigate the frequency response of the complex impedance and admittance of the SMM setup. Based on a proposed parallel resistance-capacitance model, the equivalent conductance and parallel capacitance of the cells and bacteria were obtained from the SMM images. The influence of humidity and frequency on the cell conductance was studied experimentally. To compare the cell conductance with bulk water properties, we measured the imaginary part of the bulk water loss with a dielectric probe kit in the same frequency range, finding a high level of agreement.
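Mapping a measured admittance onto the parallel resistance-capacitance model is a one-line conversion; using the CHO-cell value quoted in the abstract:

```python
import math

def parallel_rc_from_admittance(G_uS, B_uS, f_GHz):
    """Map a measured complex admittance Y = G + jB onto a parallel RC model:
    R = 1/G and C = B / (2*pi*f)."""
    G = G_uS * 1e-6          # conductance, S
    B = B_uS * 1e-6          # susceptance, S
    omega = 2.0 * math.pi * f_GHz * 1e9
    return 1.0 / G, B / omega

# Calibrated CHO-cell admittance at 19 GHz from the abstract:
# Y_cell = 185 uS + j285 uS
R_cell, C_cell = parallel_rc_from_admittance(185.0, 285.0, 19.0)
```

This gives an equivalent resistance of about 5.4 kΩ and a parallel capacitance of a few femtofarads for the cell at 19 GHz.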

  5. Membrane Introduction Mass Spectrometry Combined with an Orthogonal Partial-Least Squares Calibration Model for Mixture Analysis.

    PubMed

    Li, Min; Zhang, Lu; Yao, Xiaolong; Jiang, Xingyu

    2017-01-01

    The emerging membrane introduction mass spectrometry (MIMS) technique has been successfully used to detect benzene, toluene, ethylbenzene and xylene (BTEX), but overlapped spectra have hindered its further application to the analysis of mixtures. Multivariate calibration, an efficient method for analyzing mixtures, has been widely applied. In this paper, we compared univariate and multivariate analyses for quantification of the individual components of mixture samples. The results showed that univariate analysis produces poor models, with regression coefficients of 0.912, 0.867, 0.440 and 0.351 for BTEX, respectively. For multivariate analysis, a comparison with the partial least squares (PLS) model shows that orthogonal partial least squares (OPLS) regression exhibits the best performance, with regression coefficients of 0.995, 0.999, 0.980 and 0.976, favorable calibration parameters (RMSEC and RMSECV) and a favorable validation parameter (RMSEP). Furthermore, OPLS exhibits a good recovery of 73.86-122.20% and a repeatability relative standard deviation (RSD) of 1.14-4.87%. Thus, MIMS coupled with OPLS regression provides an optimal approach for quantitative BTEX mixture analysis in monitoring and predicting water pollution.
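Why multivariate calibration succeeds where univariate peak heights fail on overlapped spectra can be illustrated with a simple linear-unmixing sketch (ordinary least squares on synthetic, heavily overlapping component spectra; this is not the OPLS model used in the paper):

```python
import numpy as np

# Two overlapping component "spectra" (hypothetical Gaussian peaks)
channels = np.linspace(0, 10, 200)
s1 = np.exp(-(channels - 4.0) ** 2)   # component A
s2 = np.exp(-(channels - 5.0) ** 2)   # component B, strongly overlapping A
S = np.column_stack([s1, s2])

# A mixture measured at concentrations (2.0, 3.0): no single channel is
# pure, so a univariate peak height is biased, but the full-spectrum
# least-squares fit recovers both concentrations.
mixture = S @ np.array([2.0, 3.0])
conc, *_ = np.linalg.lstsq(S, mixture, rcond=None)
print(conc)
```

Real calibrations (PLS/OPLS) additionally handle noise and unmodeled interferents, but the core idea of exploiting the whole spectrum is the same.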

  6. A comparison of scope for growth (SFG) and dynamic energy budget (DEB) models applied to the blue mussel ( Mytilus edulis)

    NASA Astrophysics Data System (ADS)

    Filgueira, Ramón; Rosland, Rune; Grant, Jon

    2011-11-01

    Growth of Mytilus edulis was simulated using individual-based models following both Scope For Growth (SFG) and Dynamic Energy Budget (DEB) approaches. These models were parameterized using independent studies and calibrated for each dataset by adjusting the half-saturation coefficient of the food ingestion function term, XK, a common parameter in both approaches related to feeding behavior. Auto-calibration was carried out using an optimization tool, which provides an objective way of tuning the model. Both approaches yielded similar performance, suggesting that although the basis for constructing the models is different, both can successfully reproduce M. edulis growth. The good performance of both models in different environments, achieved by adjusting a single parameter, XK, highlights the potential of these models for (1) producing prospective analyses of mussel growth and (2) investigating mussel feeding response in different ecosystems. Finally, we emphasize that the convergence of two different modeling approaches via calibration of XK indicates the importance of feeding behavior and local trophic conditions for bivalve growth performance. Consequently, further investigations should be conducted to explore the relationship of XK to environmental variables and/or the sophistication of the functional response to food availability, with the final objective of creating a general model that can be applied to different ecosystems without the need for calibration.
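The auto-calibration step described above, tuning the single half-saturation coefficient XK against observations with an optimization tool, can be sketched as follows. The Holling type-II ingestion function and the grid search are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ingestion(X, XK, Imax=1.0):
    """Saturating functional response: ingestion rises with food density X
    and is half-saturated at X = XK (the parameter being calibrated)."""
    return Imax * X / (XK + X)

def calibrate_XK(food, observed, candidates):
    """Auto-calibration sketch: pick the XK minimizing the sum of squared
    errors between modeled and observed ingestion."""
    errors = [np.sum((ingestion(food, xk) - observed) ** 2) for xk in candidates]
    return candidates[int(np.argmin(errors))]

# Synthetic "dataset" generated with a true half-saturation XK = 2.0
food = np.linspace(0.1, 10, 50)
observed = ingestion(food, XK=2.0)
grid = np.linspace(0.5, 5.0, 451)
best = calibrate_XK(food, observed, grid)
print(best)
```

A real optimization tool would use a gradient-free or least-squares solver rather than a grid, but the objective (misfit as a function of XK alone) is the same.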

  7. A 2D MTF approach to evaluate and guide dynamic imaging developments.

    PubMed

    Chao, Tzu-Cheng; Chung, Hsiao-Wen; Hoge, W Scott; Madore, Bruno

    2010-02-01

    As the number and complexity of partially sampled dynamic imaging methods continue to increase, reliable strategies to evaluate performance may prove most useful. In the present work, an analytical framework to evaluate given reconstruction methods is presented. A perturbation algorithm allows the proposed evaluation scheme to perform robustly without requiring knowledge about the inner workings of the method being evaluated. A main output of the evaluation process consists of a two-dimensional modulation transfer function, an easy-to-interpret visual rendering of a method's ability to capture all combinations of spatial and temporal frequencies. Approaches to evaluate noise properties and artifact content at all spatial and temporal frequencies are also proposed. One fully sampled phantom and three fully sampled cardiac cine datasets were subsampled (R = 4 and 8) and reconstructed with the different methods tested here. A hybrid method, which combines the main advantageous features observed in our assessments, was proposed and tested in a cardiac cine application, with acceleration factors of 3.5 and 6.3 (skip factors of 4 and 8, respectively). This approach combines features from methods such as k-t sensitivity encoding, unaliasing by Fourier encoding the overlaps in the temporal dimension-sensitivity encoding, generalized autocalibrating partially parallel acquisition, sensitivity profiles from an array of coils for encoding and reconstruction in parallel, self, hybrid referencing with unaliasing by Fourier encoding the overlaps in the temporal dimension and generalized autocalibrating partially parallel acquisition, and generalized autocalibrating partially parallel acquisition-enhanced sensitivity maps for sensitivity encoding reconstructions.

  8. Noninvasive pulse contour analysis for determination of cardiac output in patients with chronic heart failure.

    PubMed

    Roth, Sebastian; Fox, Henrik; Fuchs, Uwe; Schulz, Uwe; Costard-Jäckle, Angelika; Gummert, Jan F; Horstkotte, Dieter; Oldenburg, Olaf; Bitter, Thomas

    2018-05-01

    Determination of cardiac output (CO) is essential in the diagnosis and management of heart failure (HF). The gold standard for obtaining CO is invasive assessment via thermodilution (TD). Noninvasive pulse contour analysis (NPCA) has been proposed as a new method of CO determination, but it had not yet been validated in HF; that validation was performed in the present study. Patients with chronic stable HF and reduced left ventricular ejection fraction (LVEF ≤ 45%; HF-REF) underwent right heart catheterization including TD. NPCA using the CNAP Monitor (V5.2.14, CNSystems Medizintechnik AG) was performed simultaneously. Three standardized TD measurements were compared with simultaneous auto-calibrated NPCA CO measurements. In total, 84 consecutive HF-REF patients were enrolled prospectively in this study. In 4 patients (5%), TD was not successful, and for 22 patients (26%, 18 with left ventricular assist device), no NPCA signal could be obtained. For the remaining 58 patients, Bland-Altman analysis revealed a mean bias of +1.92 L/min (limits of agreement ± 2.28 L/min, percentage error 47.4%) for CO. With decreasing cardiac index, as determined by the gold standard of TD, there was an increasing gap between CO values obtained by TD and NPCA (r = -0.75, p < 0.001), resulting in a systematic overestimation of CO in more severe HF. TD-derived cardiac index (CI) classified 52 patients (90%) as having a reduced CI (< 2.5 L/min/m2), while NPCA documented a reduced CI in only 18 patients (31%). In HF-REF patients, auto-calibrated NPCA systematically overestimates CO as cardiac function decreases. Therefore, to date, NPCA cannot be recommended in this cohort.
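The Bland-Altman statistics reported above (mean bias, limits of agreement, percentage error) are straightforward to compute. A minimal sketch on synthetic CO pairs that mimic the reported overestimation; the numbers are illustrative, not the study data:

```python
import numpy as np

def bland_altman(reference, test):
    """Bland-Altman agreement statistics: mean bias, limits of
    agreement (1.96 SD of the differences), and percentage error."""
    diff = test - reference
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    pct_error = 100 * loa / reference.mean()
    return bias, loa, pct_error

# Synthetic CO pairs (L/min): the comparator reads systematically high
rng = np.random.default_rng(1)
td = rng.uniform(2.5, 6.0, 58)              # thermodilution reference
npca = td + 1.9 + rng.normal(0, 1.0, 58)    # overestimating comparator
bias, loa, pct = bland_altman(td, npca)
print(round(bias, 2), round(loa, 2), round(pct, 1))
```

A positive mean bias with wide limits of agreement, as in the study, indicates systematic overestimation with poor interchangeability of the two methods.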

  9. Auto-DR and Pre-cooling of Buildings at Tri-City Corporate Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, Rongxin; Xu, Peng; Kiliccote, Sila

    2008-11-01

    Over the past several years, Lawrence Berkeley National Laboratory (LBNL) has conducted field tests of different pre-cooling strategies in commercial buildings within California. The test results indicated that pre-cooling strategies were effective in reducing electric demand in these buildings during peak periods. This project studied how to optimize pre-cooling strategies for eleven buildings in the Tri-City Corporate Center, San Bernardino, California, with the assistance of a building energy simulation tool, the Demand Response Quick Assessment Tool (DRQAT), developed by LBNL's Demand Response Research Center and funded by the California Energy Commission's Public Interest Energy Research (PIER) Program. From the simulation results for these eleven buildings, optimal pre-cooling and temperature reset strategies were developed. The study shows that after refining and calibrating initial models with measured data, the accuracy of the models can be greatly improved and the models can be used to predict load reductions for automated demand response (Auto-DR) events. This study summarizes experience with the procedure for developing and calibrating building models in DRQAT. In order to confirm the actual effect of the demand response strategies, the simulation results were compared to the field test data. The results indicated that the optimal demand response strategies worked well for all buildings in the Tri-City Corporate Center. This study also compares DRQAT with other building energy simulation tools (eQUEST and BEST). The comparison indicates that eQUEST and BEST underestimate the actual demand shed of the pre-cooling strategies due to a flaw in DOE2's simulation engine for treating wall thermal mass. DRQAT is a more accurate tool for predicting the thermal mass effects of DR events.

  10. A comparison of five standard methods for evaluating image intensity uniformity in partially parallel imaging MRI

    PubMed Central

    Goerner, Frank L.; Duong, Timothy; Stafford, R. Jason; Clarke, Geoffrey D.

    2013-01-01

    Purpose: To investigate the utility of five different standard measurement methods for determining image uniformity for partially parallel imaging (PPI) acquisitions in terms of consistency across a variety of pulse sequences and reconstruction strategies. Methods: Images were produced with a phantom using a 12-channel head matrix coil in a 3T MRI system (TIM TRIO, Siemens Medical Solutions, Erlangen, Germany). Images produced using echo-planar, fast spin echo, gradient echo, and balanced steady state free precession pulse sequences were evaluated. Two different PPI reconstruction methods were investigated, generalized autocalibrating partially parallel acquisition algorithm (GRAPPA) and modified sensitivity-encoding (mSENSE) with acceleration factors (R) of 2, 3, and 4. Additionally images were acquired with conventional, two-dimensional Fourier imaging methods (R = 1). Five measurement methods of uniformity, recommended by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) were considered. The methods investigated were (1) an ACR method and a (2) NEMA method for calculating the peak deviation nonuniformity, (3) a modification of a NEMA method used to produce a gray scale uniformity map, (4) determining the normalized absolute average deviation uniformity, and (5) a NEMA method that focused on 17 areas of the image to measure uniformity. Changes in uniformity as a function of reconstruction method at the same R-value were also investigated. Two-way analysis of variance (ANOVA) was used to determine whether R-value or reconstruction method had a greater influence on signal intensity uniformity measurements for partially parallel MRI. Results: Two of the methods studied had consistently negative slopes when signal intensity uniformity was plotted against R-value. 
The results obtained comparing mSENSE against GRAPPA found no consistent difference between the two with regard to signal intensity uniformity. The results of the two-way ANOVA suggest that R-value and pulse sequence type have the largest influences on uniformity, while the PPI reconstruction method has relatively little effect. Conclusions: Two of the methods of measuring signal intensity uniformity, described by the NEMA MRI standards, consistently indicated a decrease in uniformity with an increase in R-value. The other methods investigated did not demonstrate consistent results for evaluating signal uniformity in MR images obtained by partially parallel methods. However, because the spatial distribution of noise affects uniformity, it is recommended that additional uniformity quality metrics be investigated for partially parallel MR images. PMID:23927345
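One of the uniformity measures discussed, peak-deviation non-uniformity, reduces to a one-line formula over a region of interest. A minimal sketch, assuming the commonly used ACR/NEMA form U = 100·(1 − (Smax − Smin)/(Smax + Smin)); ROI selection and filtering steps from the standards are omitted:

```python
import numpy as np

def peak_deviation_uniformity(roi):
    """Peak-deviation uniformity in percent: 100 means perfectly flat,
    lower values mean larger peak-to-peak intensity variation."""
    smax, smin = roi.max(), roi.min()
    return 100.0 * (1.0 - (smax - smin) / (smax + smin))

# A perfectly flat region scores 100%; a shaded gradient scores lower
flat = np.full((32, 32), 500.0)
shaded = np.linspace(400, 600, 32 * 32).reshape(32, 32)
print(peak_deviation_uniformity(flat), peak_deviation_uniformity(shaded))
```

Because the metric depends only on the extreme pixel values, it is sensitive to the noise amplification that grows with R-value, consistent with the negative slopes reported above.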

  11. Calibration for single multi-mode fiber digital scanning microscopy imaging system

    NASA Astrophysics Data System (ADS)

    Yin, Zhe; Liu, Guodong; Liu, Bingguo; Gan, Yu; Zhuang, Zhitao; Chen, Fengdong

    2015-11-01

    Single multimode fiber (MMF) digital scanning imaging systems are a development trend in modern endoscopy. We concentrate on the calibration method for such an imaging system. The calibration comprises two processes: forming scanning focused spots and calibrating the coupling factors, which vary with position. An adaptive parallel coordinate (APC) algorithm is adopted to form the focused spots at the MMF output. Compared with other algorithms, APC has several merits: high speed, a small amount of computation, and no iterations. The ratio of the optical power captured by the MMF to the intensity of the focused spot is called the coupling factor. We set up a calibration experimental system to form the scanning focused spots and calculate the coupling factors for different object positions. The experimental results show that the coupling factor is higher at the center than at the edge.

  12. An Exercise on Calibration: DRIFTS Study of Binary Mixtures of Calcite and Dolomite with Partially Overlapping Spectral Features

    ERIC Educational Resources Information Center

    De Lorenzi Pezzolo, Alessandra

    2013-01-01

    Unlike most spectroscopic calibrations that are based on the study of well-separated features ascribable to the different components, this laboratory experience is especially designed to exploit spectral features that are nearly overlapping. The investigated system consists of a binary mixture of two commonly occurring minerals, calcite and…

  13. Automatic first-arrival picking based on extended super-virtual interferometry with quality control procedure

    NASA Astrophysics Data System (ADS)

    An, Shengpei; Hu, Tianyue; Liu, Yimou; Peng, Gengxin; Liang, Xianghao

    2017-12-01

    Static correction is a crucial step in seismic data processing for onshore plays, which frequently have complex near-surface conditions. The effectiveness of the static correction depends on an accurate determination of first-arrival traveltimes. However, it is difficult to accurately auto-pick the first arrivals in data with low signal-to-noise ratios (SNR), especially data measured in areas with a complex near-surface. The technique of super-virtual interferometry (SVI) has the potential to enhance the SNR of first arrivals. In this paper, we develop an extended SVI with (1) the application of reverse correlation to improve the capability of SNR enhancement at near offsets, and (2) the use of a multi-domain method to partially overcome the limitation of the current method when insufficient source-receiver combinations are available. Compared to standard SVI, the SNR enhancement of the extended SVI can be up to 40%. In addition, we propose a quality control procedure based on the statistical characteristics of multichannel recordings of first arrivals. It can auto-correct mispicks, which might be spurious events generated by the SVI. This procedure is robust, highly automatic, and able to accommodate large datasets in batches. Finally, we develop an automatic first-arrival picking method that combines the extended SVI and the quality control procedure. Both synthetic and field data examples demonstrate that the proposed method is able to accurately auto-pick first arrivals in seismic traces with low SNR. The quality of the stacked seismic sections obtained with this method is much better than that obtained with an auto-picking method commonly employed by commercial software.
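Conventional first-arrival auto-picking, the kind of method the extended SVI is designed to improve upon, is often based on a short-term/long-term average (STA/LTA) energy ratio. The following toy picker is a generic illustration of that baseline, not the paper's SVI-based method; window lengths and threshold are assumptions:

```python
import numpy as np

def sta_lta_pick(trace, dt, sta_win=0.05, lta_win=0.5, threshold=3.0):
    """Return the first time (s) where the ratio of short-term (forward)
    to long-term (trailing) average energy crosses the threshold."""
    ns = max(1, int(sta_win / dt))
    nl = max(1, int(lta_win / dt))
    energy = trace ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    for i in range(nl, len(trace) - ns):
        sta = (csum[i + ns] - csum[i]) / ns      # energy just ahead of i
        lta = (csum[i] - csum[i - nl]) / nl      # background energy behind i
        if lta > 0 and sta / lta > threshold:
            return i * dt
    return None

# Synthetic noisy trace with a 20 Hz arrival at t = 1.0 s
rng = np.random.default_rng(2)
dt = 0.002
t = np.arange(0, 2, dt)
trace = 0.05 * rng.normal(size=t.size)
trace[t >= 1.0] += np.sin(2 * np.pi * 20 * (t[t >= 1.0] - 1.0))
pick = sta_lta_pick(trace, dt)
print(pick)
```

At low SNR such energy-ratio pickers produce the mispicks that motivate SNR enhancement (SVI) and statistical quality control in the paper.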

  14. An analysis of short haul air passenger demand, volume 2

    NASA Technical Reports Server (NTRS)

    Blumer, T. P.; Swan, W. M.

    1978-01-01

    Several demand models for short haul air travel are proposed and calibrated on pooled data. The models are designed to predict demand and analyze some of the motivating phenomena behind demand generation. In particular, an attempt is made to include the effects of competing modes and of alternate destinations. The results support three conclusions: (1) the auto mode is the air mode's major competitor; (2) trip time is an overriding factor in intermodal competition, with air fare at its present level appearing unimportant to the typical short haul air traveler; and (3) distance appears to underlie several demand-generating phenomena and therefore must be considered very carefully in any intercity demand model; it may be the cause of the wide range of fare elasticities reported by researchers over the past 15 years. A behavioral demand model is proposed and calibrated. It combines the travel generating effects of income and population, the effects of modal split, the sensitivity of travel to price and time, and the effect of alternative destinations satisfying the trip purpose.

  15. New approach in the treatment of data from an acid-base potentiometric titration. I. Monocomponent systems of monofunctional acids and bases.

    PubMed

    Maslarska, Vania; Tencheva, Jasmina; Budevsky, Omortag

    2003-01-01

    Based on precise analysis of the acid-base equilibrium, a new approach in the treatment of experimental data from a potentiometric titration is proposed. A new general formula giving explicitly the relation V=f([H(+)]) is derived, valid for every acid-base titration, which includes mono- and polyfunctional protolytes and their mixtures. The present study is the first practical application of this formula for the simplest case, the analysis of one monofunctional protolyte. The collected mV data during the titration are converted into pH-values by means of an auto pH-calibration procedure, thus avoiding preliminary preparation of the measuring system. The mentioned pH-calibration method is applicable also in water-organic mixtures and allows the quantitative determination of sparingly soluble substances (particularly pharmaceuticals). The treatment of the data is performed by means of ready-to-use software products, which makes the proposed approach accessible for a wide range of applications.
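The conversion of collected mV data into pH values can be illustrated with a generic two-point Nernstian electrode calibration. This is a textbook sketch under ideal-electrode assumptions, not the authors' auto pH-calibration procedure, and the buffer readings are illustrative:

```python
def electrode_calibration(e1, ph1, e2, ph2):
    """Two-point pH calibration: solve E = E0 - slope*pH for the
    electrode offset E0 (mV) and slope (mV per pH unit) from two buffers."""
    slope = (e1 - e2) / (ph2 - ph1)
    e0 = e1 + slope * ph1
    return e0, slope

def mv_to_ph(e, e0, slope):
    """Convert a measured electrode potential (mV) to pH."""
    return (e0 - e) / slope

# Ideal Nernstian electrode at 25 C: slope ~ 59.16 mV per pH unit
e0, slope = electrode_calibration(177.5, 4.01, -177.5, 10.01)
print(round(slope, 2), round(mv_to_ph(0.0, e0, slope), 2))
```

The paper's procedure performs this calibration automatically during the titration itself, avoiding a separate buffer-calibration step.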

  16. HESP: Instrument control, calibration and pipeline development

    NASA Astrophysics Data System (ADS)

    Anantha, Ch.; Roy, Jayashree; Mahesh, P. K.; Parihar, P. S.; Sangal, A. K.; Sriram, S.; Anand, M. N.; Anupama, G. C.; Giridhar, S.; Prabhu, T. P.; Sivarani, T.; Sundararajan, M. S.

    Hanle Echelle SPectrograph (HESP) is a fibre-fed, high resolution (R = 30,000 and 60,000) spectrograph being developed for the 2 m HCT telescope at IAO, Hanle. The major components of the instrument are (a) the Cassegrain unit and (b) the spectrometer itself. An instrument control system is being developed that interacts with a guiding unit at the Cassegrain interface and handles the spectrograph functions. On-axis auto-guiding using the spill-over angular ring around the input pinhole is also being developed. The stellar light from the Cassegrain unit is taken to the spectrograph through an optical fiber, which is being characterized for spectral transmission, focal ratio degradation and scrambling properties. The design of the thermal enclosure and thermal control for the spectrograph housing is presented. A data pipeline for the entire echelle spectral reduction is being developed. We also plan to implement an instrument physical-model-based calibration in the main data pipeline and in the maintenance and quality control operations.

  17. Self-calibrated correlation imaging with k-space variant correlation functions.

    PubMed

    Li, Yu; Edalati, Masoud; Du, Xingfu; Wang, Hui; Cao, Jie J

    2018-03-01

    Correlation imaging is a previously developed high-speed MRI framework that converts parallel imaging reconstruction into the estimate of correlation functions. The presented work aims to demonstrate this framework can provide a speed gain over parallel imaging by estimating k-space variant correlation functions. Because of Fourier encoding with gradients, outer k-space data contain higher spatial-frequency image components arising primarily from tissue boundaries. As a result of tissue-boundary sparsity in the human anatomy, neighboring k-space data correlation varies from the central to the outer k-space. By estimating k-space variant correlation functions with an iterative self-calibration method, correlation imaging can benefit from neighboring k-space data correlation associated with both coil sensitivity encoding and tissue-boundary sparsity, thereby providing a speed gain over parallel imaging that relies only on coil sensitivity encoding. This new approach is investigated in brain imaging and free-breathing neonatal cardiac imaging. Correlation imaging performs better than existing parallel imaging techniques in simulated brain imaging acceleration experiments. The higher speed enables real-time data acquisition for neonatal cardiac imaging in which physiological motion is fast and non-periodic. With k-space variant correlation functions, correlation imaging gives a higher speed than parallel imaging and offers the potential to image physiological motion in real-time. Magn Reson Med 79:1483-1494, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  18. The potential of mid- and near-infrared diffuse reflectance spectroscopy for determining major- and trace-element concentrations in soils from a geochemical survey of North America

    USGS Publications Warehouse

    Reeves, J. B.; Smith, D.B.

    2009-01-01

    In 2004, soils were collected at 220 sites along two transects across the USA and Canada as a pilot study for a planned soil geochemical survey of North America (North American Soil Geochemical Landscapes Project). The objective of the current study was to examine the potential of diffuse reflectance (DR) Fourier Transform (FT) mid-infrared (mid-IR) and near-infrared (NIRS) spectroscopy to reduce the need for conventional analysis for the determination of major and trace elements in such continental-scale surveys. Soil samples (n = 720) were collected from two transects (east-west across the USA, and north-south from Manitoba, Canada to El Paso, Texas (USA), n = 453 and 267, respectively). The samples came from 19 USA states and the province of Manitoba in Canada. They represented 31 types of land use (e.g., national forest, rangeland, etc.), and 123 different land covers (e.g., soybeans, oak forest, etc.). The samples represented a combination of depth-based sampling (0-5 cm) and horizon-based sampling (O, A and C horizons) with 123 different depths identified. The set was very diverse with few samples similar in land use, land cover, etc. All samples were analyzed by conventional means for the near-total concentration of 49 analytes (Ctotal, Ccarbonate and Corganic, and 46 major and trace elements). Spectra of dried, ground samples were obtained with a Digilab FTS-7000 FT spectrometer in the mid- (4000-400 cm-1) and near-infrared (10,000-4000 cm-1) regions at 4 cm-1 resolution (64 co-added scans per spectrum) using a Pike AutoDIFF DR autosampler. Partial least squares calibrations were developed using: (1) all samples as a calibration set; (2) samples evenly divided into calibration and validation sets based on spectral diversity; and (3) samples divided to have matching analyte concentrations in calibration and validation sets. 
In general, results supported the conclusion that neither mid-IR nor NIRS would be particularly useful in reducing the need for conventional analysis of soils from this continental-scale geochemical survey. The extreme sample diversity, likely caused by the widely varied parent material, land use at the site of collection (e.g., grazing, recreation, agriculture, etc.), and climate resulted in poor calibrations even for Ctotal, Corganic and Ccarbonate. The results indicated potential for mid-IR and NIRS to differentiate soils containing high concentrations (>100 mg/kg) of some metals (e.g., Co, Cr, Ni) from low-level samples (<50 mg/kg). However, because of the small number of high-level samples, it is possible that differentiation was based on factors other than metal concentration. Results for Mg and Sr were good, but results for other metals examined were fair to poor, at best. In essence, it appears that the great variation in chemical and physical properties seen in soils from this continental-scale survey resulted in each sample being virtually unique. Thus, suitable spectroscopic calibrations were generally not possible.

  19. The mechanisms of hydrothermal deconstruction of lignocellulose: New insights from thermal–analytical and complementary studies

    PubMed Central

    Ibbett, Roger; Gaddipati, Sanyasi; Davies, Scott; Hill, Sandra; Tucker, Greg

    2011-01-01

    Differential Scanning Calorimetry, Dynamic Mechanical Thermal Analysis, gravimetric and chemical techniques have been used to study hydrothermal reactions of straw biomass. Exothermic degradation initiates above 195 °C, due to breakdown of the xylose ring from hemicellulose, which may be similar to reactions occurring during the early stage pyrolysis of dry biomass, though activated at lower temperature through water mediation. The temperature and magnitude of the exotherm reduce with increasing acid concentration, suggesting a reduction in activation energy and a change in the balance of reaction pathways. The presence of xylan oligomers in auto-catalytic hydrolysates is believed to be due to a low rate constant rather than a specific reaction mechanism. The loss of the lignin glass transition indicates that the lignin phase is reorganised under high temperature auto-catalytic conditions, but remains partially intact under lower temperature acid-catalytic conditions. This shows that lignin degradation reactions are activated thermally but are not effectively catalysed by aqueous acid. PMID:21763128

  20. Structure of Dimeric and Tetrameric Complexes of the BAR Domain Protein PICK1 Determined by Small-Angle X-Ray Scattering.

    PubMed

    Karlsen, Morten L; Thorsen, Thor S; Johner, Niklaus; Ammendrup-Johnsen, Ina; Erlendsson, Simon; Tian, Xinsheng; Simonsen, Jens B; Høiberg-Nielsen, Rasmus; Christensen, Nikolaj M; Khelashvili, George; Streicher, Werner; Teilum, Kaare; Vestergaard, Bente; Weinstein, Harel; Gether, Ulrik; Arleth, Lise; Madsen, Kenneth L

    2015-07-07

    PICK1 is a neuronal scaffolding protein containing a PDZ domain and an auto-inhibited BAR domain. BAR domains are membrane-sculpting protein modules generating membrane curvature and promoting membrane fission. Previous data suggest that BAR domains are organized in lattice-like arrangements when stabilizing membranes but little is known about structural organization of BAR domains in solution. Through a small-angle X-ray scattering (SAXS) analysis, we determine the structure of dimeric and tetrameric complexes of PICK1 in solution. SAXS and biochemical data reveal a strong propensity of PICK1 to form higher-order structures, and SAXS analysis suggests an offset, parallel mode of BAR-BAR oligomerization. Furthermore, unlike accessory domains in other BAR domain proteins, the positioning of the PDZ domains is flexible, enabling PICK1 to perform long-range, dynamic scaffolding of membrane-associated proteins. Together with functional data, these structural findings are compatible with a model in which oligomerization governs auto-inhibition of BAR domain function. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Exploring Alternative Characteristic Curve Approaches to Linking Parameter Estimates from the Generalized Partial Credit Model.

    ERIC Educational Resources Information Center

    Roberts, James S.; Bao, Han; Huang, Chun-Wei; Gagne, Phill

    Characteristic curve approaches for linking parameters from the generalized partial credit model were examined for cases in which common (anchor) items are calibrated separately in two groups. Three of these approaches are simple extensions of the test characteristic curve (TCC), item characteristic curve (ICC), and operating characteristic curve…

  2. A universal heliostat control system

    NASA Astrophysics Data System (ADS)

    Gross, Fabian; Geiger, Mark; Buck, Reiner

    2017-06-01

    This paper describes the development of a universal heliostat control system as part of the AutoR project [1]. The system can control multiple receivers and heliostat types in a single application. The system offers support for multiple operators on different machines and is designed to be as adaptive as possible. Thus, the system can be used for different heliostat field setups with only minor adaptations of the system's source code. This is achieved by extensive usage of modern programming techniques like reflection and dependency injection. Furthermore, the system features co-simulation of a ray tracer, a reference PID-controller implementation for open volumetric receivers and methods for heliostat calibration and monitoring.
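The reference PID controller mentioned for open volumetric receivers can be illustrated with a textbook discrete PID loop driving a toy first-order plant; the gains, plant model, and setpoint are illustrative assumptions, not the AutoR implementation:

```python
class PID:
    """Minimal textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt           # accumulate integral term
        deriv = (err - self.prev_err) / self.dt  # backward-difference derivative
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a toy first-order "receiver temperature" toward a setpoint of 800
pid = PID(kp=0.5, ki=0.1, kd=0.05, dt=1.0)
temp = 20.0
for _ in range(200):
    u = pid.step(800.0, temp)
    temp += 0.1 * u        # simple proportional plant response
print(round(temp, 1))
```

In the real system the plant would be the receiver's thermal dynamics and the controller output an air-flow or aim-point command; the control law itself is unchanged.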

  3. Anti-D auto-immunization in a patient with weak D type 4.0.

    PubMed

    Ouchari, M; Chakroun, T; Abdelkefi, S; Romdhane, H; Houissa, B; Jemni Yacoub, S

    2014-03-01

    We report the case of a 56-year-old patient with blood group O+C-c+E-e+K-, followed for a myelodysplastic syndrome and treated with regular pheno-identical, compatible red blood cell (RBC) transfusions since December 2007. In June 2009, a positive crossmatch was found with two O+C-c+E-e+K- RBC units. A positive antibody screening with a positive autocontrol was detected, and an anti-D was identified in the patient's serum. The direct antiglobulin test (DAT) was positive (IgG), and elution identified an anti-D. The following assumptions were then made: it could be a partial D phenotype with an anti-D alloantibody, or an RH:1 phenotype with an anti-D autoantibody. Molecular analysis by multiplex PCR and sequencing revealed a weak D type 4.0 phenotype. In October 2009, after three months of RH:-1 RBC transfusion, the antibody screening and DAT (IgG) remained positive, and an eluate made from the patient's erythrocytes contained an anti-D. All these findings confirmed the autoimmune nature of the anti-D. This case report illustrates the importance of a well-conducted immunohematological laboratory work-up in order to distinguish between auto- and allo-immune anti-D in RH:1 polytransfused patients. This distinction is of great importance for transfusion support. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  4. Thermally determining flow and/or heat load distribution in parallel paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chainer, Timothy J.; Iyengar, Madhusudan K.; Parida, Pritish R.

    A method including obtaining calibration data for at least one sub-component in a heat transfer assembly, wherein the calibration data comprises at least one indication of coolant flow rate through the sub-component for a given surface temperature delta of the sub-component and a given heat load into said sub-component, determining a measured heat load into the sub-component, determining a measured surface temperature delta of the sub-component, and determining a coolant flow distribution in a first flow path comprising the sub-component from the calibration data according to the measured heat load and the measured surface temperature delta of the sub-component.

  5. Thermally determining flow and/or heat load distribution in parallel paths

    DOEpatents

    Chainer, Timothy J.; Iyengar, Madhusudan K.; Parida, Pritish R.

    2016-12-13

    A method including obtaining calibration data for at least one sub-component in a heat transfer assembly, wherein the calibration data comprises at least one indication of coolant flow rate through the sub-component for a given surface temperature delta of the sub-component and a given heat load into said sub-component, determining a measured heat load into the sub-component, determining a measured surface temperature delta of the sub-component, and determining a coolant flow distribution in a first flow path comprising the sub-component from the calibration data according to the measured heat load and the measured surface temperature delta of the sub-component.
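The claimed method, inferring coolant flow in a parallel path from calibration data at the measured heat load and surface temperature delta, amounts to a table lookup with interpolation. A minimal sketch with a hypothetical single-heat-load calibration table (all numbers are illustrative):

```python
import numpy as np

# Hypothetical calibration table for one sub-component at a fixed heat
# load: surface temperature delta (K) versus coolant flow rate (L/min).
# A smaller delta corresponds to a higher flow.
delta_t_cal = np.array([4.0, 6.0, 8.0, 12.0, 20.0])   # K, ascending
flow_cal = np.array([2.0, 1.3, 1.0, 0.7, 0.4])        # L/min

def flow_from_measurement(delta_t_measured):
    """Infer the coolant flow in this parallel path by linearly
    interpolating the calibration data at the measured delta."""
    return np.interp(delta_t_measured, delta_t_cal, flow_cal)

print(flow_from_measurement(7.0))   # between the 6 K and 8 K points
```

The full method repeats this per sub-component (indexed also by measured heat load) to build up the flow distribution across the parallel paths.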

  6. Landsat-7 Enhanced Thematic Mapper plus radiometric calibration

    USGS Publications Warehouse

    Markham, B.L.; Boncyk, Wayne C.; Helder, D.L.; Barker, J.L.

    1997-01-01

    Landsat-7 is currently being built and tested for launch in 1998. The Enhanced Thematic Mapper Plus (ETM+) sensor for Landsat-7, a derivative of the highly successful Thematic Mapper (TM) sensors on Landsats 4 and 5, and the Landsat-7 ground system are being built to provide enhanced radiometric calibration performance. In addition, regular vicarious calibration campaigns are being planned to provide additional information for calibration of the ETM+ instrument. The primary upgrades to the instrument include the addition of two solar calibrators: the full aperture solar calibrator, a deployable diffuser, and the partial aperture solar calibrator, a passive device that allows the ETM+ to image the sun. The ground processing incorporates for the first time an off-line facility, the Image Assessment System (IAS), to perform calibration, evaluation and analysis. Within the IAS, processing capabilities include radiometric artifact characterization and correction, radiometric calibration from the multiple calibrator sources, inclusion of results from vicarious calibration and statistical trending of calibration data to improve calibration estimation. The Landsat Product Generation System, the portion of the ground system responsible for producing calibrated products, will incorporate the radiometric artifact correction algorithms and will use the calibration information generated by the IAS. This calibration information will also be supplied to ground processing systems throughout the world.

  7. Microhardness and lattice parameter calibrations of the oxygen solid solutions of unalloyed alpha-titanium and Ti-6Al-2Sn-4Zr-2Mo

    NASA Technical Reports Server (NTRS)

    Wiedemann, K. E.; Shenoy, R. N.; Unnam, J.

    1987-01-01

    Standards were prepared for calibrating microanalyses of dissolved oxygen in unalloyed alpha-Ti and Ti-6Al-2Sn-4Zr-2Mo. Foils of both of these materials were homogenized for 120 hours in vacuum at 871 C following short exposures to the ambient atmosphere at 854 C that had partially oxidized the foils. The variation of Knoop microhardness with oxygen content was calibrated for both materials using 15-g and 5-g indentor loads. The unit-cell lattice parameters were calibrated for the unalloyed alpha-Ti. Example analyses demonstrate the usefulness of these calibrations and support an explanation of an anomaly in the lattice parameter variation. The results of the calibrations have been tabulated and summarized using predictive equations.

  8. Chemometrics-assisted simultaneous determination of cobalt(II) and chromium(III) with flow-injection chemiluminescence method

    NASA Astrophysics Data System (ADS)

    Li, Baoxin; Wang, Dongmei; Lv, Jiagen; Zhang, Zhujun

    2006-09-01

    In this paper, a flow-injection chemiluminescence (CL) system is proposed for the simultaneous determination of Co(II) and Cr(III) with partial least squares calibration. This method is based on the fact that both Co(II) and Cr(III) catalyze the luminol-H₂O₂ CL reaction, and that their catalytic activities differ significantly under the same reaction conditions. The CL intensity of Co(II) and Cr(III) was measured and recorded at different pH values of the reaction medium, and the obtained data were processed by the chemometric approach of partial least squares. The experimental calibration set was composed of nine sample solutions using an orthogonal calibration design for two-component mixtures. The calibration curves were linear over the concentration ranges of 8 × 10⁻¹⁰ to 2 × 10⁻⁷ and 4 × 10⁻⁹ to 2 × 10⁻⁶ g/ml for Co(II) and Cr(III), respectively. The proposed method offers the potential advantages of high sensitivity, simplicity and rapidity for Co(II) and Cr(III) determination, and was successfully applied to the simultaneous determination of both analytes in real water samples.

  9. Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for the solution of the problem can be completely decoupled.
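
    One way such decoupling can work, sketched here for a 1-D heat equation with implicit Euler (an illustrative assumption, not necessarily the paper's algorithm): diagonalizing the iteration matrix once lets any time level be evaluated directly, so all levels can be computed independently, i.e. in parallel.

```python
import numpy as np

# 1-D heat equation u_t = u_xx, implicit Euler: (I - dt*A) u^{n+1} = u^n,
# where A is the standard second-difference operator.
N, dt, steps = 32, 1e-3, 50
h = 1.0 / (N - 1)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
M = np.eye(N) - dt * A
u0 = np.sin(np.pi * np.linspace(0.0, 1.0, N))

# Seemingly serial: each step needs the previous one.
u_serial = u0.copy()
for _ in range(steps):
    u_serial = np.linalg.solve(M, u_serial)

# Decoupled: M = V diag(w) V^T (M is symmetric), so u^n = V diag(w**-n) V^T u0,
# and every time level n can be evaluated independently of the others.
w, V = np.linalg.eigh(M)
u_parallel = V @ ((V.T @ u0) / w**steps)
```
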

  10. Task-shifting of CD4 T cell count monitoring by the touchscreen-based Muse™ Auto CD4/CD4% single-platform system for CD4 T cell numeration: Implication for decentralization in resource-constrained settings.

    PubMed

    Kouabosso, André; Mossoro-Kpinde, Christian Diamant; Bouassa, Ralph-Sydney Mboumba; Longo, Jean De Dieu; Mbeko Simaleko, Marcel; Grésenguet, Gérard; Bélec, Laurent

    2018-04-01

    The accuracy of CD4 T cell monitoring by the recently developed flow cytometry-based CD4 T cell counting Muse™ Auto CD4/CD4% Assay analyzer (EMD Millipore Corporation, Merck Life Sciences, KGaA, Darmstadt, Germany) was evaluated in trained lay providers against laboratory technicians. After 2 days of training on the Muse™ Auto CD4/CD4% analyzer, EDTA-blood samples from 6 HIV-positive and 4 HIV-negative individuals were used for CD4 T cell counting, in triplicate and in parallel, by 12 trained lay providers as compared with 10 laboratory technicians. The mean CD4 T cell count in absolute number was 829 ± 380 cells/μl by lay providers and 794 ± 409 cells/μl by technicians (P > 0.05), and in percentage 36.2 ± 14.8% CD4 by lay providers and 36.1 ± 15.0% CD4 by laboratory technicians (P > 0.05). The unweighted linear regression and Passing-Bablok regression analyses on CD4 T cell results expressed in absolute counts revealed moderate correlation between the CD4 T cell counts obtained by lay providers and laboratory technicians. The mean absolute bias, measured by Bland-Altman analysis, between CD4 T cells/μl obtained by lay providers and laboratory technicians was -3.41 cells/μl. The intra-assay coefficient of variation (CV) of the Muse™ Auto CD4/CD4% in absolute number was 10.1% for lay providers and 8.5% for laboratory technicians (P > 0.05), and in percentage 5.5% for lay providers and 4.4% for laboratory technicians (P > 0.05). The inter-assay CV in absolute number was 13.4% for lay providers and 10.3% for laboratory technicians (P > 0.05), and in percentage 7.8% for lay providers and 6.9% for laboratory technicians (P > 0.05). The study demonstrates the feasibility of CD4 T cell counting by trained lay providers using the alternative flow cytometer Muse™ Auto CD4/CD4% analyzer, and therefore the practical possibility of decentralizing CD4 T cell counting to community health centers. Copyright © 2018. Published by Elsevier B.V.
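
    The agreement statistics used in this record are standard; a minimal sketch of the Bland-Altman mean bias and the intra-assay coefficient of variation (the numbers in any example call are illustrative, not the study's data):

```python
import numpy as np

def bland_altman_bias(a, b):
    """Mean bias and 95% limits of agreement between two raters' counts."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def intra_assay_cv(replicates):
    """Coefficient of variation (%) of replicate measurements of one sample."""
    r = np.asarray(replicates, float)
    return 100.0 * r.std(ddof=1) / r.mean()
```
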

  11. Detection of Tetracycline in Milk using NIR Spectroscopy and Partial Least Squares

    NASA Astrophysics Data System (ADS)

    Wu, Nan; Xu, Chenshan; Yang, Renjie; Ji, Xinning; Liu, Xinyuan; Yang, Fan; Zeng, Ming

    2018-02-01

    The feasibility of measuring tetracycline in milk was investigated using near-infrared (NIR) spectroscopy combined with the partial least squares (PLS) method. NIR transmittance spectra of 40 pure milk samples and 40 tetracycline-adulterated milk samples with different concentrations (from 0.005 to 40 mg/L) were obtained. The pure milk and tetracycline-adulterated milk samples were assigned to their categories with 100% accuracy in the calibration set, and a correct classification rate of 96.3% was obtained in the prediction set. For the quantitation of tetracycline in adulterated milk, the root mean square errors for the calibration and prediction models were 0.61 mg/L and 4.22 mg/L, respectively. The PLS model fitted the calibration set well; however, its predictive ability was limited, especially for samples with low tetracycline concentrations. Overall, this approach can be considered a promising tool for the discrimination of tetracycline-adulterated milk, as a supplement to high-performance liquid chromatography.

  12. Direct determination of danofloxacin and flumequine in milk by use of fluorescence spectrometry in combination with partial least-squares calibration.

    PubMed

    Murillo Pulgarín, J A; Alañón Molina, A; Boras, N

    2013-03-20

    A new method for the simultaneous determination of danofloxacin and flumequine in milk samples was developed by using the nonlinear variable-angle synchronous fluorescence technique to acquire data and a partial least-squares chemometric algorithm to process them. A calibration set of standard samples was designed by combining a factorial design with two levels per factor and a central star design. Whey was used as the third component of the calibration matrix. In order to assess the goodness of the proposed method, a prediction set of 11 synthetic samples was analyzed, obtaining recovery percentages between 96.1% and 104.0%. Limits of detection, calculated by means of a new criterion, were 0.90 and 12.4 ng mL(-1) for danofloxacin and flumequine, respectively. Finally, the simultaneous determination of both fluoroquinolones in milk samples containing the analytes was successfully carried out, obtaining average recovery percentages of 99.3 ± 4.4 for danofloxacin and 100.7 ± 4.4 for flumequine.

  13. Laparoscopic calibrated total vs partial fundoplication following Heller myotomy for oesophageal achalasia

    PubMed Central

    Martino, Natale Di; Brillantino, Antonio; Monaco, Luigi; Marano, Luigi; Schettino, Michele; Porfidia, Raffaele; Izzo, Giuseppe; Cosenza, Angelo

    2011-01-01

    AIM: To compare the mid-term outcomes of laparoscopic calibrated Nissen-Rossetti fundoplication with Dor fundoplication performed after Heller myotomy for oesophageal achalasia. METHODS: Fifty-six patients (26 men, 30 women; mean age 42.8 ± 14.7 years) presenting for minimally invasive surgery for oesophageal achalasia were enrolled. All patients underwent laparoscopic Heller myotomy, followed by a 180° anterior partial fundoplication in 30 cases (group 1) and a calibrated Nissen-Rossetti fundoplication in 26 (group 2). Intraoperative endoscopy and manometry were used to calibrate the myotomy and fundoplication. A 6-mo follow-up period with symptomatic evaluation and barium swallow was undertaken. One and two years after surgery, the patients underwent symptom questionnaires, endoscopy, oesophageal manometry and 24-h oesophago-gastric pH monitoring. RESULTS: At the 2-year follow-up, no significant difference in the median symptom score was observed between the 2 groups (P = 0.66; Mann-Whitney U-test). The median percentage of time with oesophageal pH < 4 was significantly higher in the Dor group than in the Nissen-Rossetti group (2, range 0.8-10, vs 0.35, range 0-2) (P < 0.0001; Mann-Whitney U-test). CONCLUSION: Laparoscopic Dor and calibrated Nissen-Rossetti fundoplication achieved similar results in the resolution of dysphagia. Nissen-Rossetti fundoplication seems to be more effective in suppressing oesophageal acid exposure. PMID:21876635

  14. Modulated heat pulse propagation and partial transport barriers in chaotic magnetic fields

    DOE PAGES

    del-Castillo-Negrete, Diego; Blazevski, Daniel

    2016-04-01

    Direct numerical simulations of the time-dependent parallel heat transport equation modeling heat pulses driven by power modulation in 3-dimensional chaotic magnetic fields are presented. The numerical method is based on the Fourier formulation of a Lagrangian-Green's function method that provides an accurate and efficient technique for the solution of the parallel heat transport equation in the presence of harmonic power modulation. The numerical results presented provide conclusive evidence that, even in the absence of magnetic flux surfaces, chaotic magnetic field configurations with intermediate levels of stochasticity exhibit transport barriers to modulated heat pulse propagation. In particular, high-order islands and remnants of destroyed flux surfaces (Cantori) act as partial barriers that slow down or even stop the propagation of heat waves at places where the magnetic field connection length exhibits a strong gradient. The key parameter is $\gamma=\sqrt{\omega/2\chi_\parallel}$, which determines the length scale, $1/\gamma$, of the heat wave penetration along the magnetic field line. For large perturbation frequencies, $\omega \gg 1$, or small parallel thermal conductivities, $\chi_\parallel \ll 1$, parallel heat transport is strongly damped and the magnetic field partial barriers act as robust barriers where the heat wave amplitude vanishes and its phase speed slows down to a halt. On the other hand, in the limit of small $\gamma$, parallel heat transport is largely unimpeded, global transport is observed, and the radial amplitude and phase speed of the heat wave remain finite. Results on modulated heat pulse propagation in fully stochastic fields and across magnetic islands are also presented. In qualitative agreement with recent experiments in LHD and DIII-D, it is shown that the elliptic (O) and hyperbolic (X) points of magnetic islands have a direct impact on the spatio-temporal dependence of the amplitude and the time delay of modulated heat pulses.
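
    The reported scaling of $\gamma$ and its interpretation as an inverse penetration length follow from the standard damped-wave solution of a modulated parallel heat-conduction equation; a sketch:

```latex
% Parallel heat conduction with harmonic power modulation at s = 0:
%   \partial_t T = \chi_\parallel\,\partial_s^2 T, \qquad T(0,t) = T_0 e^{i\omega t}.
% Seeking T(s,t) = T_0 e^{i\omega t - k s} gives i\omega = \chi_\parallel k^2, i.e.
%   k = (1+i)\sqrt{\omega/2\chi_\parallel} \equiv (1+i)\gamma,
% so the heat wave decays and slows as
T(s,t) = T_0\, e^{-\gamma s}\, e^{i(\omega t - \gamma s)},
\qquad \gamma = \sqrt{\frac{\omega}{2\chi_\parallel}} .
```

    Large $\omega$ or small $\chi_\parallel$ increases $\gamma$ and shortens the penetration length $1/\gamma$, consistent with the strongly damped regime described above.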

  15. Non-orthogonal tool/flange and robot/world calibration.

    PubMed

    Ernst, Floris; Richter, Lars; Matthäus, Lars; Martens, Volker; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2012-12-01

    For many robot-assisted medical applications, it is necessary to accurately compute the relation between the robot's coordinate system and the coordinate system of a localisation or tracking device. Today, this is typically carried out using hand-eye calibration methods like those proposed by Tsai/Lenz or Daniilidis. We present a new method for simultaneous tool/flange and robot/world calibration by estimating a solution to the matrix equation AX = YB. It is computed using a least-squares approach. Because real robots and localisation devices are all afflicted by errors, our approach allows for non-orthogonal matrices, partially compensating for imperfect calibration of the robot or localisation device. We also introduce a new method where full robot/world and partial tool/flange calibration is possible by using localisation devices providing fewer than six degrees of freedom (DOFs). The methods are evaluated on simulation data and on real-world measurements from optical and magnetic tracking devices, volumetric ultrasound providing 3-DOF data, and a surface laser scanning device. We compare our methods with two classical approaches: the method by Tsai/Lenz and the method by Daniilidis. In all experiments, the new algorithms outperform the classical methods in terms of translational accuracy by up to 80% and perform similarly in terms of rotational accuracy. Additionally, the methods are shown to be stable: the number of calibration stations used has far less influence on calibration quality than for the classical methods. Our work shows that the new method can be used for estimating the relationship between the robot's and the localisation device's coordinate systems. The new method can also be used for deficient systems providing only 3-DOF data, and it can be employed in real-time scenarios because of its speed. Copyright © 2012 John Wiley & Sons, Ltd.
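
    A non-orthogonality-tolerant AX = YB solver can be sketched as a homogeneous linear least-squares problem via Kronecker products and an SVD nullspace. This is a generic formulation written from the problem statement, not the paper's exact algorithm:

```python
import numpy as np

def calibrate_ax_yb(As, Bs):
    """Least-squares solution of A_i X = Y B_i for 4x4 matrices X, Y.

    With column-major vec, vec(A X) = (I4 kron A) vec(X) and
    vec(Y B) = (B^T kron I4) vec(Y), so each measurement contributes 16 linear
    equations in the 32 unknowns (vec X, vec Y).  The nullspace vector of the
    stacked system is found by SVD; X and Y are not forced to be orthogonal.
    """
    rows = [np.hstack([np.kron(np.eye(4), A), -np.kron(B.T, np.eye(4))])
            for A, B in zip(As, Bs)]
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    v = Vt[-1]                          # right singular vector of smallest sigma
    X = v[:16].reshape(4, 4, order="F")
    Y = v[16:].reshape(4, 4, order="F")
    return X / X[3, 3], Y / Y[3, 3]     # fix the overall scale ambiguity

# Synthetic check: build poses consistent with known X, Y and recover them.
rng = np.random.default_rng(1)
def random_transform():
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = Q, rng.normal(size=3)
    return T

X_true, Y_true = random_transform(), random_transform()
As = [random_transform() for _ in range(8)]
Bs = [np.linalg.inv(Y_true) @ A @ X_true for A in As]
X_est, Y_est = calibrate_ax_yb(As, Bs)
```

    Because the unknowns enter as free 16-vectors, the estimate can absorb small non-orthogonal distortions of the measured poses, which is the property the abstract highlights.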

  16. Loss of migration and urbanization in birds: a case study of the blackbird (Turdus merula).

    PubMed

    Møller, Anders Pape; Jokimäki, Jukka; Skorka, Piotr; Tryjanowski, Piotr

    2014-07-01

    Many organisms have invaded urban habitats, although the underlying factors initially promoting urbanization remain poorly understood. Partial migration may facilitate urbanization because such populations benefit from surplus food in urban environments during winter, and hence enjoy reduced fitness costs of migratory deaths. We tested this hypothesis in the European blackbird Turdus merula, which has been urbanized since the 19th century, by compiling information on timing of urbanization, migratory status, and population density for 99 cities across the continent. Timing of urbanization was spatially auto-correlated at scales up to 600 km. Analyses of timing of urbanization revealed that urbanization occurred earlier in partially migratory and resident populations than in migratory populations of blackbirds. Independently, this effect was most pronounced in the range of the distribution that currently has the highest population density, suggesting that urbanization facilitated population growth. These findings are consistent with the hypothesis that timing of urbanization is facilitated by partial migration, resulting in subsequent residency and population growth.

  17. Characterization and Simulation of a New Design Parallel-Plate Ionization Chamber for CT Dosimetry at Calibration Laboratories

    NASA Astrophysics Data System (ADS)

    Perini, Ana P.; Neves, Lucio P.; Maia, Ana F.; Caldas, Linda V. E.

    2013-12-01

    In this work, a new extended-length parallel-plate ionization chamber was tested in the standard radiation qualities for computed tomography, established according to the half-value layers defined in the IEC 61267 standard, at the Calibration Laboratory of the Instituto de Pesquisas Energéticas e Nucleares (IPEN). The experimental characterization followed the IEC 61674 standard recommendations. The experimental results obtained with the ionization chamber studied in this work were compared to those obtained with a commercial pencil ionization chamber, showing good agreement. Using the PENELOPE Monte Carlo code, simulations were undertaken to evaluate the influence of the cables, insulator, PMMA body, collecting electrode, guard ring and screws, as well as of different materials and geometrical arrangements, on the energy deposited in the sensitive volume of the ionization chamber. The maximum influence observed was 13.3%, for the collecting electrode, and with regard to the use of different materials and designs, the substitutions showed that the original project presented the most suitable configuration. The experimental and simulated results obtained in this work show that this ionization chamber has appropriate characteristics for use at calibration laboratories, for dosimetry in standard computed tomography and diagnostic radiology quality beams.

  18. The simple procedure for the fluxgate magnetometers calibration

    NASA Astrophysics Data System (ADS)

    Marusenkov, Andriy

    2014-05-01

    Fluxgate magnetometers are widely used in geophysical investigations, including geomagnetic field monitoring at the global network of geomagnetic observatories as well as electromagnetic sounding of the Earth's crust conductivity. For these tasks the magnetometers have to be calibrated with an appropriate level of accuracy. As a particular case, ways to satisfy the recent requirements on the scaling and orientation errors of 1-second INTERMAGNET magnetometers are considered in this work. The goal of the present study was to choose a simple and reliable calibration method for estimating the scale factors and angular errors of three-axis magnetometers in the field. There are a large number of scalar calibration methods that use a free rotation of the sensor in the calibration field, followed by complicated data processing procedures for the numerical solution of high-order equation sets. The chosen approach also exploits the Earth's magnetic field as a calibrating signal but, in contrast to other methods, the sensor is oriented in particular positions with respect to the total field vector instead of being freely rotated. This allows very simple and straightforward linear computation formulas to be used and, as a result, more reliable estimates of the calibrated parameters to be achieved. The scale factors are estimated by sequentially aligning each component of the sensor in two positions: parallel and anti-parallel to the Earth's magnetic field vector. The non-orthogonality angle between each pair of components is estimated after sequentially aligning the components at angles of ±45 and ±135 degrees of arc with respect to the total field vector. Thanks to this four-position approach, the estimates of the non-orthogonality angles are invariant to the zero offsets and to non-linearity of the transfer functions of the components.
    Experimental verification of the proposed method by means of a coil calibration system shows that the achieved accuracy (<0.04% for scale factors and 0.03 degrees of arc for angle errors) is sufficient for many applications, and in particular for satisfying the INTERMAGNET requirements for 1-second instruments.
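
    The parallel/anti-parallel step reduces to two linear equations per component; a minimal sketch (symbols and units are generic, not the author's notation):

```python
def component_scale_and_offset(reading_parallel, reading_antiparallel, field_total):
    """Scale factor and zero offset of one fluxgate component from two
    orientations of its axis along the Earth's field vector F:
      parallel:      r_p =  S*F + b
      anti-parallel: r_a = -S*F + b
    """
    scale = (reading_parallel - reading_antiparallel) / (2.0 * field_total)
    offset = (reading_parallel + reading_antiparallel) / 2.0
    return scale, offset
```

    Differencing the two readings cancels the offset and isolates the scale factor, while averaging cancels the scale term and isolates the offset, which is why the two estimates do not contaminate each other.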

  19. Technical Note: exploring the limit for the conversion of energy-subtracted CT number to electron density for high-atomic-number materials.

    PubMed

    Saito, Masatoshi; Tsukihara, Masayoshi

    2014-07-01

    For accurate tissue inhomogeneity correction in radiotherapy treatment planning, the authors had previously proposed a novel conversion of the energy-subtracted CT number to an electron density (ΔHU-ρe conversion), which provides a single linear relationship between ΔHU and ρe over a wide ρe range. The purpose of this study is to address the limitations of the conversion method with respect to atomic number (Z) by elucidating the role of partial photon interactions in the ΔHU-ρe conversion process. The authors performed numerical analyses of the ΔHU-ρe conversion for 105 human body tissues, as listed in ICRU Report 46, and for elementary substances with Z = 1-40. Total and partial attenuation coefficients for these materials were calculated using the XCOM photon cross-section database. The effective x-ray energies used to calculate the attenuation were chosen to imitate a dual-source CT scanner operated at 80-140 kV/Sn under well-calibrated and poorly calibrated conditions. The accuracy of the resultant calibrated electron density, [Formula: see text], for the ICRU-46 body tissues fully satisfied the IPEM-81 tolerance levels in radiotherapy treatment planning. If a criterion of [Formula: see text]/ρe - 1 within ±2% is assumed, the predicted upper limit of Z applicable to the ΔHU-ρe conversion under the well-calibrated condition is Z = 27. In the case of the poorly calibrated condition, the upper limit of Z is approximately 16. The deviation from ΔHU-ρe linearity for higher-Z substances is mainly caused by the anomalous variation of the photoelectric-absorption component. Compensation among the three partial components of the photon interactions provides sufficient linearity of the ΔHU-ρe conversion for it to be applicable to most human tissues, even for poorly conditioned scans in which there is a large variation of effective x-ray energies owing to beam-hardening effects arising from the mismatch between the sizes of the object and the calibration phantom.
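
    The practical content of the single linear ΔHU-ρe relationship is that calibration reduces to fitting one straight line. A noise-free illustrative sketch (the slope, intercept, and data points below are invented, not the paper's values):

```python
import numpy as np

# Hypothetical, noise-free calibration pairs: energy-subtracted CT number
# (delta-HU) vs known electron density relative to water.  A real calibration
# would use scanned tissue substitutes; these numbers are illustrative only.
true_slope, true_intercept = 1.24e-3, 1.00
delta_hu = np.array([-40.0, 0.0, 55.0, 120.0, 210.0])
rho_e = true_slope * delta_hu + true_intercept

# Fit the single straight line that converts delta-HU to electron density.
slope, intercept = np.polyfit(delta_hu, rho_e, 1)

def electron_density(dhu):
    return slope * dhu + intercept

# Relative deviation, the accuracy criterion used in the abstract.
rel_dev = electron_density(delta_hu) / rho_e - 1.0
```
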

  20. Calibration-free optical chemical sensors

    DOEpatents

    DeGrandpre, Michael D.

    2006-04-11

    An apparatus and method for taking absorbance-based chemical measurements are described. In a specific embodiment, an indicator-based pCO2 (partial pressure of CO2) sensor displays sensor-to-sensor reproducibility and measurement stability. These qualities are achieved by: 1) renewing the sensing solution, 2) allowing the sensing solution to reach equilibrium with the analyte, and 3) calculating the response from a ratio of the indicator solution absorbances which are determined relative to a blank solution. Careful solution preparation, wavelength calibration, and stray light rejection also contribute to this calibration-free system. Three pCO2 sensors were calibrated and each had response curves which were essentially identical within the uncertainty of the calibration. Long-term laboratory and field studies showed the response had no drift over extended periods (months). The theoretical response, determined from thermodynamic characterization of the indicator solution, also predicted the observed calibration-free performance.
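
    The blank-referenced absorbance ratio of step 3 can be sketched directly from the Beer-Lambert law; the assignment of wavelengths to the indicator's base and acid forms is an assumption for illustration:

```python
import math

def absorbance(intensity, blank_intensity):
    """Beer-Lambert absorbance, measured relative to a blank solution."""
    return -math.log10(intensity / blank_intensity)

def indicator_ratio(i_base, i_acid, i_blank):
    """Ratio of indicator absorbances at the base-form and acid-form
    wavelengths; taking a ratio cancels path length and total indicator
    concentration, which underpins the calibration-free response."""
    return absorbance(i_base, i_blank) / absorbance(i_acid, i_blank)
```
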

  1. Terminal Area Procedures for Paired Runways

    NASA Technical Reports Server (NTRS)

    Lozito, Sandra; Verma, Savita Arora

    2011-01-01

    Parallel runway operations have been found to increase capacity within the National Airspace System, but poor visibility conditions reduce the use of these operations. The NextGen and SESAR programs have identified the capacity benefits from increased use of closely-spaced parallel runways. Previous research examined the concepts and procedures related to parallel runways; however, there has been no investigation of the procedures associated with the strategic and tactical pairing of aircraft for these operations. This simulation study developed and examined the pilot and controller procedures and information requirements for creating aircraft pairs for parallel runway operations. The goal was to achieve aircraft pairing with a temporal separation of 15 s (±10 s error) at a coupling point about 12 nmi from the runway threshold. Two variables were explored for the pilot participants: two levels of flight deck automation (current-day flight deck automation and auto speed control future automation), as well as two flight deck displays that assisted in pilot conformance monitoring. The controllers were also provided with automation to help create and maintain aircraft pairs. Results show the operations in this study were acceptable and safe. Subjective workload, when using the pairing procedures and tools, was generally low for both controllers and pilots, and situation awareness was typically moderate to high. Pilot workload was influenced by display type and automation condition. Further research on pairing and off-nominal conditions is required; however, this investigation identified promising findings about the feasibility of closely-spaced parallel runway operations.

  2. UNFOLD-SENSE: a parallel MRI method with self-calibration and artifact suppression.

    PubMed

    Madore, Bruno

    2004-08-01

    This work aims at improving the performance of parallel imaging by using it with our "unaliasing by Fourier-encoding the overlaps in the temporal dimension" (UNFOLD) temporal strategy. A self-calibration method called "self, hybrid referencing with UNFOLD and GRAPPA" (SHRUG) is presented. SHRUG combines the UNFOLD-based sensitivity mapping strategy introduced in the TSENSE method by Kellman et al. (5), with the strategy introduced in the GRAPPA method by Griswold et al. (10). SHRUG merges the two approaches to alleviate their respective limitations, and provides fast self-calibration at any given acceleration factor. UNFOLD-SENSE further includes an UNFOLD artifact suppression scheme to significantly suppress artifacts and amplified noise produced by parallel imaging. This suppression scheme, which was published previously (4), is related to another method that was presented independently as part of TSENSE. While the two are equivalent at accelerations ≤ 2.0, the present approach is shown here to be significantly superior at accelerations > 2.0, with up to double the artifact suppression at high accelerations. Furthermore, a slight modification of Cartesian SENSE is introduced, which allows departures from purely Cartesian sampling grids. This technique, termed variable-density SENSE (vdSENSE), allows the variable-density data required by SHRUG to be reconstructed with the simplicity and fast processing of Cartesian SENSE. UNFOLD-SENSE is given by the combination of SHRUG for sensitivity mapping, vdSENSE for reconstruction, and UNFOLD for artifact/amplified noise suppression. The method was implemented, with online reconstruction, on both an SSFP and a myocardium-perfusion sequence. The results from six patients scanned with UNFOLD-SENSE are presented.

  3. Autologous/Allogeneic Hematopoietic Cell Transplantation versus Tandem Autologous Transplantation for Multiple Myeloma: Comparison of Long-Term Postrelapse Survival.

    PubMed

    Htut, Myo; D'Souza, Anita; Krishnan, Amrita; Bruno, Benedetto; Zhang, Mei-Jie; Fei, Mingwei; Diaz, Miguel Angel; Copelan, Edward; Ganguly, Siddhartha; Hamadani, Mehdi; Kharfan-Dabaja, Mohamed; Lazarus, Hillard; Lee, Cindy; Meehan, Kenneth; Nishihori, Taiga; Saad, Ayman; Seo, Sachiko; Ramanathan, Muthalagu; Usmani, Saad Z; Gasparetto, Christina; Mark, Tomer M; Nieto, Yago; Hari, Parameswaran

    2018-03-01

    We compared postrelapse overall survival (OS) after autologous/allogeneic (auto/allo) versus tandem autologous (auto/auto) hematopoietic cell transplantation (HCT) in patients with multiple myeloma (MM). Postrelapse survival of patients receiving an auto/auto or auto/allo HCT for MM and prospectively reported to the Center for International Blood and Marrow Transplant Research between 2000 and 2010 was analyzed. Relapse occurred in 404 patients (72.4%) in the auto/auto group and in 178 patients (67.4%) in the auto/allo group after a median follow-up of 8.5 years. Relapse occurred before 6 months after a second HCT in 46% of the auto/allo patients, compared with 26% of the auto/auto patients. The 6-year postrelapse survival was better in the auto/allo group compared with the auto/auto group (44% versus 35%; P = .05). Mortality due to MM was 69% (n = 101) in the auto/allo group and 83% (n = 229) in the auto/auto group. In multivariate analysis, both cohorts had a similar risk of death in the first year after relapse (hazard ratio [HR], .72; P = .12); however, for time points beyond 12 months after relapse, overall survival was superior in the auto/allo cohort (HR for death in auto/auto = 1.55; P = .005). Other factors associated with superior survival were enrollment in a clinical trial for HCT, male sex, and use of novel agents at induction before HCT. Our findings show superior survival after relapse in auto/allo HCT recipients compared with auto/auto HCT recipients. This likely reflects a better response to salvage therapy, such as immunomodulatory drugs, potentiated by a donor-derived immunologic milieu. Further augmentation of the post-allo-HCT immune system with new immunotherapies, such as monoclonal antibodies, checkpoint inhibitors, and others, merits investigation. Copyright © 2017 The American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.

  4. Self-organized dynamical complexity in human wakefulness and sleep: different critical brain-activity feedback for conscious and unconscious states.

    PubMed

    Allegrini, Paolo; Paradisi, Paolo; Menicucci, Danilo; Laurino, Marco; Piarulli, Andrea; Gemignani, Angelo

    2015-09-01

    Criticality reportedly describes brain dynamics. The main critical feature is the presence of scale-free neural avalanches, whose auto-organization is determined by a critical branching ratio of neural-excitation spreading. Other features, directly associated with second-order phase transitions, are: (i) scale-free-network topology of functional connectivity, stemming from suprathreshold pairwise correlations, superimposable, in waking brain activity, with that of ferromagnets at the Curie temperature; (ii) temporal long-range memory associated with renewal intermittency driven by abrupt fluctuations in the order parameters, detectable in the human brain via spatially distributed phase or amplitude changes in EEG activity. Herein we study intermittent events, extracted from 29 night EEG recordings, including presleep wakefulness and all phases of sleep, where different levels of mentation and consciousness are present. We show that while critical avalanching is unchanged, at least qualitatively, intermittency and functional connectivity, present during conscious phases (wakefulness and REM sleep), break down during both shallow and deep non-REM sleep. We provide a theory for fragmentation-induced intermittency breakdown and suggest that the main difference between conscious and unconscious states resides in the backwards causation, namely in the constraints that the emerging properties at the large scale impose on the lower scales. In particular, while in conscious states this backwards causation induces a critical slowing down, preserving spatiotemporal correlations, in dreamless sleep we see a self-organized maintenance of modules working in parallel. Critical avalanches are still present, and establish transient auto-organization, whose enhanced fluctuations are able to trigger sleep-protecting mechanisms that reinstate parallel activity.
The plausible role of critical avalanches in dreamless sleep is to provide a rapid recovery of consciousness, if stimuli are highly arousing.

  5. Self-organized dynamical complexity in human wakefulness and sleep: Different critical brain-activity feedback for conscious and unconscious states

    NASA Astrophysics Data System (ADS)

    Allegrini, Paolo; Paradisi, Paolo; Menicucci, Danilo; Laurino, Marco; Piarulli, Andrea; Gemignani, Angelo

    2015-09-01

Criticality reportedly describes brain dynamics. The main critical feature is the presence of scale-free neural avalanches, whose self-organization is governed by a critical branching ratio of neural-excitation spreading. Other features, directly associated with second-order phase transitions, are: (i) a scale-free network topology of functional connectivity, stemming from suprathreshold pairwise correlations, superimposable, in waking brain activity, on that of ferromagnets at the Curie temperature; (ii) temporal long-range memory associated with renewal intermittency driven by abrupt fluctuations in the order parameters, detectable in the human brain via spatially distributed phase or amplitude changes in EEG activity. Herein we study intermittent events extracted from 29 night EEG recordings, including presleep wakefulness and all phases of sleep, in which different levels of mentation and consciousness are present. We show that while critical avalanching is unchanged, at least qualitatively, the intermittency and functional connectivity present during conscious phases (wakefulness and REM sleep) break down during both shallow and deep non-REM sleep. We provide a theory for fragmentation-induced intermittency breakdown and suggest that the main difference between conscious and unconscious states resides in backwards causation, namely in the constraints that the emerging properties at large scales impose on the lower scales. In particular, while in conscious states this backwards causation induces a critical slowing down, preserving spatiotemporal correlations, in dreamless sleep we see a self-organized maintenance of modules working in parallel. Critical avalanches are still present and establish transient self-organization, whose enhanced fluctuations are able to trigger sleep-protecting mechanisms that reinstate parallel activity. The plausible role of critical avalanches in dreamless sleep is to provide a rapid recovery of consciousness if stimuli are highly arousing.

  6. Auto-Context Convolutional Neural Network (Auto-Net) for Brain Extraction in Magnetic Resonance Imaging.

    PubMed

    Mohseni Salehi, Seyed Sadegh; Erdogmus, Deniz; Gholipour, Ali

    2017-11-01

    Brain extraction or whole brain segmentation is an important first step in many of the neuroimage analysis pipelines. The accuracy and the robustness of brain extraction, therefore, are crucial for the accuracy of the entire brain analysis process. The state-of-the-art brain extraction techniques rely heavily on the accuracy of alignment or registration between brain atlases and query brain anatomy, and/or make assumptions about the image geometry, and therefore have limited success when these assumptions do not hold or image registration fails. With the aim of designing an accurate, learning-based, geometry-independent, and registration-free brain extraction tool, in this paper, we present a technique based on an auto-context convolutional neural network (CNN), in which intrinsic local and global image features are learned through 2-D patches of different window sizes. We consider two different architectures: 1) a voxelwise approach based on three parallel 2-D convolutional pathways for three different directions (axial, coronal, and sagittal) that implicitly learn 3-D image information without the need for computationally expensive 3-D convolutions and 2) a fully convolutional network based on the U-net architecture. Posterior probability maps generated by the networks are used iteratively as context information along with the original image patches to learn the local shape and connectedness of the brain to extract it from non-brain tissue. The brain extraction results we have obtained from our CNNs are superior to the recently reported results in the literature on two publicly available benchmark data sets, namely, LPBA40 and OASIS, in which we obtained the Dice overlap coefficients of 97.73% and 97.62%, respectively. Significant improvement was achieved via our auto-context algorithm. 
Furthermore, we evaluated the performance of our algorithm in the challenging problem of extracting arbitrarily oriented fetal brains in reconstructed fetal brain magnetic resonance imaging (MRI) data sets. In this application, our voxelwise auto-context CNN performed much better than the other methods (Dice coefficient: 95.97%), where the other methods performed poorly due to the non-standard orientation and geometry of the fetal brain in MRI. Through training, our method can provide accurate brain extraction in challenging applications. This, in turn, may reduce the problems associated with image registration in segmentation tasks.

  7. Double-excitation fluorescence spectral imaging: eliminating tissue auto-fluorescence from in vivo PPIX measurements

    NASA Astrophysics Data System (ADS)

    Torosean, Sason; Flynn, Brendan; Samkoe, Kimberley S.; Davis, Scott C.; Gunn, Jason; Axelsson, Johan; Pogue, Brian W.

    2012-02-01

An ultrasound-coupled, handheld-probe-based optical fluorescence molecular tomography (FMT) system has been in development for the purpose of quantifying the production of Protoporphyrin IX (PPIX) in aminolevulinic acid (ALA)-treated basal cell carcinoma (BCC) in vivo. The design couples fiber-based spectral sampling of PPIX fluorescence emission with a high-frequency ultrasound imaging system, allowing regionally localized fluorescence intensities to be quantified [1]. The optical data are obtained by sequential excitation of the tissue with a 633 nm laser at four source locations, with parallel detection at each of five interspersed detection locations. This method of acquisition permits fluorescence detection at both superficial and deep locations in the ultrasound field. The optical boundary data, the tissue layers segmented from the ultrasound image, and diffusion theory are used to estimate the fluorescence in the tissue layers. To improve the recovery of the PPIX fluorescence signal, eliminating tissue auto-fluorescence is of great importance. Here the approach was to utilize measurements which straddled the steep Q-band excitation peak of PPIX, via the integration of an additional laser source exciting at 637 nm, a wavelength with a two-fold lower PPIX excitation value than 633 nm. The auto-fluorescence spectrum acquired with the 637 nm laser is then used to spectrally decouple the fluorescence data and produce an accurate fluorescence emission signal, because the two wavelengths have very similar auto-fluorescence but substantially different PPIX excitation levels. The accuracy of this method, using a single source-detector pair setup, is verified through animal tumor model experiments, and the result is compared to different methods of fluorescence signal recovery.
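The dual-excitation decoupling described above can be sketched as a per-channel linear unmixing: each measurement is modeled as a PPIX term plus a shared auto-fluorescence term, with the 637 nm excitation efficiency roughly half that at 633 nm. This is a minimal illustration of the idea, not the paper's implementation; the efficiency values and function name are assumptions.

```python
def recover_ppix(f_633, f_637, eps_633=2.0, eps_637=1.0):
    """Two-excitation unmixing for one spectral channel.

    Model (sketch): f = eps * c + af at each excitation wavelength,
    where c is the PPIX amplitude and af the shared auto-fluorescence.
    eps_633/eps_637 = 2 reflects the ~two-fold excitation difference.
    """
    c = (f_633 - f_637) / (eps_633 - eps_637)  # PPIX amplitude
    af = f_633 - eps_633 * c                   # residual auto-fluorescence
    return c, af
```

Applied channel by channel across the emission spectrum, this removes the auto-fluorescence background precisely because it is (approximately) identical at the two excitation wavelengths.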

  8. Some fast elliptic solvers on parallel architectures and their complexities

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Y.

    1989-01-01

The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. In this paper, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.
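The even/odd elimination pattern at the heart of cyclic reduction can be shown in its scalar form (one equation per block) for a tridiagonal system of size n = 2^k − 1; BCR proper replaces the scalar coefficients with matrix blocks, but the reduction and back-substitution structure is the same. A sketch, not tied to the paper's parallel mapping:

```python
def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system (sub-diag a, diag b, super-diag c, rhs d)
    of size n = 2**k - 1 by scalar cyclic reduction. a[0] = c[-1] = 0."""
    n = len(b)
    a, b, c, d = list(a), list(b), list(c), list(d)
    strides, s = [], 1
    while 2 * s <= n:                          # forward elimination levels
        for i in range(2 * s - 1, n, 2 * s):   # fold neighbours into eq. i
            al = a[i] / b[i - s]
            be = c[i] / b[i + s]
            d[i] -= al * d[i - s] + be * d[i + s]
            b[i] -= al * c[i - s] + be * a[i + s]
            a[i] = -al * a[i - s]
            c[i] = -be * c[i + s]
        strides.append(s)
        s *= 2
    x = [None] * n
    mid = n // 2
    x[mid] = d[mid] / b[mid]                   # single remaining unknown
    for s in reversed(strides):                # back substitution
        for i in range(s - 1, n, 2 * s):
            if x[i] is None:
                left = x[i - s] if i - s >= 0 else 0.0
                right = x[i + s] if i + s < n else 0.0
                x[i] = (d[i] - a[i] * left - c[i] * right) / b[i]
    return x
```

All equations within one stride level are independent, which is exactly what the paper's parallelization exploits; the price is that the number of independent equations halves at each level.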

  9. Some fast elliptic solvers on parallel architectures and their complexities

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

The discretization of separable elliptic partial differential equations leads to linear systems with special block tridiagonal matrices. Several methods are known to solve these systems, the most general of which is the Block Cyclic Reduction (BCR) algorithm, which handles equations with nonconstant coefficients. A method was recently proposed to parallelize and vectorize BCR. Here, the mapping of BCR on distributed memory architectures is discussed, and its complexity is compared with that of other approaches, including the Alternating-Direction method. A fast parallel solver is also described, based on an explicit formula for the solution, which has parallel computational complexity lower than that of parallel BCR.

  10. Effect of pressure on the Raman-active modes of zircon (ZrSiO4): a first-principles study

    NASA Astrophysics Data System (ADS)

    Sheremetyeva, Natalya; Cherniak, Daniele J.; Watson, E. Bruce; Meunier, Vincent

    2018-02-01

Density-functional theory (DFT) was employed in a first-principles study of the effects of pressure on the Raman-active modes of zircon (ZrSiO4), using both the generalized gradient and local density approximations (GGA and LDA, respectively). Beginning with the equilibrium structure at zero pressure, we conducted a calibration of the effect of pressure in a manner procedurally similar to an experimental calibration. For pressures between 0 and 7 GPa, we find excellent qualitative agreement of frequency-pressure slopes ∂ω/∂P calculated from GGA DFT with results of previous experimental studies. In addition, we were able to rationalize the ω vs. P behavior based on details of the vibrational modes and their atomic displacements. Most of the ∂ω/∂P slopes are positive as expected, but the symmetry of the zircon lattice also results in two negative slopes for modes that involve slight shearing and rigid rotation of SiO4 tetrahedra. Overall, LDA yields absolute values of the frequencies of the Raman-active modes in good agreement with experimental values, while GGA reproduces the shift in frequency with pressure especially well.

  11. Performance characteristics of an automated gas chromatograph-ion trap mass spectrometer system used for the 1995 Southern Oxidants Study field investigation in Nashville, Tennessee

    NASA Astrophysics Data System (ADS)

    Daughtrey, E. Hunter; Adams, Jeffrey R.; Oliver, Karen D.; Kronmiller, Keith G.; McClenny, William A.

    1998-09-01

    A trailer-deployed automated gas chromatograph-mass spectrometer (autoGC-MS) system capable of making continuous hourly measurements was used to determine volatile organic compounds (VOCs) in ambient air at New Hendersonville, Tennessee, and Research Triangle Park, North Carolina, in 1995. The system configuration, including the autoGC-MS, trailer and transfer line, siting, and sampling plan and schedule, is described. The autoGC-MS system employs a pair of matched sorbent traps to allow simultaneous sampling and desorption. Desorption is followed by Stirling engine cryofocusing and subsequent GC separation and mass spectral identification and quantification. Quality control measurements described include evaluating precision and accuracy of replicate analyses of independently supplied audit and round-robin canisters and determining the completeness of the data sets taken in Tennessee. Data quality objectives for precision (±10%) and accuracy (±20%) of 10- to 20-ppbv audit canisters and a completeness of >75% data capture were met. Quality assurance measures used in reviewing the data set include retention time stability, calibration checks, frequency distribution checks, and checks of the mass spectra. Special procedures and tests were used to minimize sorbent trap artifacts, to verify the quality of a standard prepared in our laboratory, and to prove the integrity of the insulated, heated transfer line. A rigorous determination of total system blank concentration levels using humidified scientific air spiked with ozone allowed estimation of method detection limits, ranging from 0.01 to 1.0 ppb C, for most of the 100 target compounds, which were a composite list of the target compounds for the Photochemical Assessment Monitoring Station network, those for Environmental Protection Agency method TO-14, and selected oxygenated VOCs.
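The method-detection-limit estimation from replicate blank analyses mentioned above is conventionally done EPA-style: the standard deviation of replicate low-level measurements multiplied by the one-sided 99% Student-t quantile. A minimal sketch, assuming that convention (the abstract does not spell out the exact procedure used; the hard-coded t-value is for seven replicates):

```python
import statistics

def method_detection_limit(replicates, t_99=3.143):
    """EPA-style MDL estimate: MDL = t(n-1, 0.99) * s.

    replicates: measured concentrations of n replicate low-level spikes
    or blanks; t_99 defaults to the value for n = 7 (6 degrees of freedom).
    """
    return t_99 * statistics.stdev(replicates)
```

For the system described here, the replicates would be analyses of humidified, ozone-spiked scientific air, giving per-compound MDLs in the reported 0.01 to 1.0 ppb C range.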

  12. Standardized Ki67 Diagnostics Using Automated Scoring--Clinical Validation in the GeparTrio Breast Cancer Study.

    PubMed

    Klauschen, Frederick; Wienert, Stephan; Schmitt, Wolfgang D; Loibl, Sibylle; Gerber, Bernd; Blohmer, Jens-Uwe; Huober, Jens; Rüdiger, Thomas; Erbstößer, Erhard; Mehta, Keyur; Lederer, Bianca; Dietel, Manfred; Denkert, Carsten; von Minckwitz, Gunter

    2015-08-15

Scoring proliferation through Ki67 immunohistochemistry is an important component in predicting therapy response to chemotherapy in patients with breast cancer. However, recent studies have cast doubt on the reliability of "visual" Ki67 scoring in the multicenter setting, particularly in the lower, yet clinically important, proliferation range. Therefore, an accurate and standardized Ki67 scoring is pivotal both in routine diagnostics and in larger multicenter studies. We validated a novel fully automated Ki67 scoring approach that relies on only minimal a priori knowledge of cell properties and requires no training data for calibration. We applied our approach to 1,082 breast cancer samples from the neoadjuvant GeparTrio trial and compared the performance of automated and manual Ki67 scoring. The three groups of autoKi67, as defined by low (≤ 15%), medium (15.1%-35%), and high (>35%) automated scores, showed pCR rates of 5.8%, 16.9%, and 29.5%, respectively. AutoKi67 was significantly linked to prognosis, with overall and progression-free survival P values P(OS) < 0.0001 and P(PFS) < 0.0002, compared with P(OS) < 0.0005 and P(PFS) < 0.0001 for manual Ki67 scoring. Moreover, automated Ki67 scoring was an independent prognosticator in the multivariate analysis, with P(OS) = 0.002, P(PFS) = 0.009 (autoKi67) versus P(OS) = 0.007, P(PFS) = 0.004 (manual Ki67). The computer-assisted Ki67 scoring approach presented here offers a standardized means of tumor cell proliferation assessment in breast cancer that correlated with clinical endpoints and is deployable in routine diagnostics. It may thus help to solve recently reported reliability concerns in Ki67 diagnostics. ©2014 American Association for Cancer Research.

  13. A 64-channel ultra-low power system-on-chip for local field and action potentials recording

    NASA Astrophysics Data System (ADS)

    Rodríguez-Pérez, Alberto; Delgado-Restituto, Manuel; Darie, Angela; Soto-Sánchez, Cristina; Fernández-Jover, Eduardo; Rodríguez-Vázquez, Ángel

    2015-06-01

This paper reports an integrated 64-channel neural recording sensor. Neural signals are acquired, filtered, digitized and compressed in the channels. Additionally, each channel implements an auto-calibration mechanism which configures the transfer characteristics of the recording site. The system has two transmission modes: in one, the information captured by the channels is sent as uncompressed raw data; in the other, feature vectors extracted from the detected neural spikes are released. Data streams coming from the channels are serialized by an embedded digital processor. Experimental results, including in vivo measurements, show that the power consumption of the complete system is lower than 330 μW.

  14. Total anthocyanin content determination in intact açaí (Euterpe oleracea Mart.) and palmitero-juçara (Euterpe edulis Mart.) fruit using near infrared spectroscopy (NIR) and multivariate calibration.

    PubMed

    Inácio, Maria Raquel Cavalcanti; de Lima, Kássio Michell Gomes; Lopes, Valquiria Garcia; Pessoa, José Dalton Cruz; de Almeida Teixeira, Gustavo Henrique

    2013-02-15

The aim of this study was to evaluate the potential of near-infrared reflectance spectroscopy (NIR) and multivariate calibration as a rapid method to determine anthocyanin content in intact fruit (açaí and palmitero-juçara). Several multivariate calibration techniques, including partial least squares (PLS), interval partial least squares, genetic algorithm, successive projections algorithm, and net analyte signal, were compared and validated by establishing figures of merit. Suitable results were obtained with the PLS model (four latent variables and 5-point smoothing), with a detection limit of 6.2 g kg⁻¹, limit of quantification of 20.7 g kg⁻¹, accuracy estimated as root mean square error of prediction of 4.8 g kg⁻¹, mean selectivity of 0.79 g kg⁻¹, sensitivity of 5.04×10⁻³ g kg⁻¹, precision of 27.8 g kg⁻¹, and signal-to-noise ratio of 1.04×10⁻³ g kg⁻¹. These results suggest NIR spectroscopy and multivariate calibration can be effectively used to determine anthocyanin content in intact açaí and palmitero-juçara fruit. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. The effect of perfluorocarbon vapour on the measurement of respiratory tidal volume during partial liquid ventilation.

    PubMed

    Davies, M W; Dunster, K R

    2000-08-01

During partial liquid ventilation, perfluorocarbon vapour is present in the exhaled gases. The volumes of these gases are measured by pneumotachometers. Error in measuring tidal volumes will give erroneous measurement of lung compliance during partial liquid ventilation. We aimed to compare measured tidal volumes with and without perfluorocarbon vapour, using tidal volumes suitable for use in neonates. Tidal volumes were produced with a 100 ml calibration syringe from 20 to 100 ml and with a calibrated Harvard rodent ventilator from 2.5 to 20 ml. Control tidal volumes were drawn from a humidifier chamber containing water vapour, and the PFC tidal volumes were drawn from a humidifier chamber containing water and perfluorocarbon (FC-77) vapour. Tidal volumes were measured by a fixed-orifice, target, differential-pressure flowmeter (VenTrak) or a hot-wire anemometer (Bear Cub) placed between the calibration syringe or ventilator and the humidifier chamber. All tidal volumes measured with perfluorocarbon vapour were increased compared with control (ANOVA p < 0.001 and post hoc t-test p < 0.0001). Measured tidal volume increased by 7 to 16% with the fixed-orifice type flowmeter, and by 35 to 41% with the hot-wire type. In conclusion, perfluorocarbon vapour flowing through pneumotachometers gives falsely high tidal volume measurements. Calculation of lung compliance must take into account the effect of perfluorocarbon vapour on the measurement of tidal volume.
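The correction the conclusion calls for can be sketched as follows: deflate the displayed tidal volume by the vapour-induced over-reading fraction before computing compliance. The function name is illustrative, and the over-reading fractions would come from the ranges reported above (0.07 to 0.16 for the fixed-orifice meter, 0.35 to 0.41 for the hot-wire meter).

```python
def corrected_compliance(v_measured, delta_p, overread_fraction):
    """Lung compliance from a tidal volume read through PFC vapour.

    v_measured: displayed tidal volume (mL), inflated by overread_fraction;
    delta_p: inflation pressure change (cmH2O).
    Returns compliance in mL/cmH2O based on the deflated (true) volume.
    """
    v_true = v_measured / (1.0 + overread_fraction)
    return v_true / delta_p
```

For example, a hot-wire reading of 11.6 mL with a 16% over-read corresponds to a true volume of 10 mL, so an uncorrected compliance estimate would be 16% too high.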

  16. Solution of partial differential equations on vector and parallel computers

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.; Voigt, R. G.

    1985-01-01

    The present status of numerical methods for partial differential equations on vector and parallel computers was reviewed. The relevant aspects of these computers are discussed and a brief review of their development is included, with particular attention paid to those characteristics that influence algorithm selection. Both direct and iterative methods are given for elliptic equations as well as explicit and implicit methods for initial boundary value problems. The intent is to point out attractive methods as well as areas where this class of computer architecture cannot be fully utilized because of either hardware restrictions or the lack of adequate algorithms. Application areas utilizing these computers are briefly discussed.

  17. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
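The two assignment phases described in the claim can be sketched in a 1-D setting. This is an illustration of the object-to-portion tagging followed by per-portion population, not the patented implementation: the per-processor loops run sequentially here, the grid is one-dimensional, and all names are made up.

```python
def populate_grid(objects, n, grid_min, grid_max):
    """Two-phase grid population sketch (1-D).

    objects: list of (name, lo, hi) spatial extents;
    n: number of processors / equal grid portions over [grid_min, grid_max).
    """
    width = (grid_max - grid_min) / n
    # Phase 1: objects are divided into n sets; each "processor" p tags
    # the grid portion(s) that at least partially bound each of its objects.
    tagged = []                                # (portion_index, name) pairs
    for p in range(n):                         # stands in for n parallel workers
        for name, lo, hi in objects[p::n]:
            first = min(n - 1, max(0, int((lo - grid_min) // width)))
            last = min(n - 1, max(0, int((hi - grid_min) // width)))
            for portion in range(first, last + 1):
                tagged.append((portion, name))
    # Phase 2: each processor populates its own distinct grid portion
    # with the objects previously tagged as bounded by it.
    grid = [[] for _ in range(n)]
    for p in range(n):
        grid[p] = sorted(name for portion, name in tagged if portion == p)
    return grid
```

Note that an object straddling a portion boundary is tagged into, and hence populated in, both portions, matching the "at least partially bounded" language of the claim.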

  18. Radiometric calibration of optical microscopy and microspectroscopy apparata over a broad spectral range using a special thin-film luminescence standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valenta, J., E-mail: jan.valenta@mff.cuni.cz; Greben, M.

    2015-04-15

Application capabilities of optical microscopes and microspectroscopes can be considerably enhanced by a proper calibration of their spectral sensitivity. We propose and demonstrate a method of relative and absolute calibration of a microspectroscope over an extraordinarily broad spectral range covered by two (parallel) detection branches in the visible and near-infrared spectral regions. The key point of the absolute calibration of the relative spectral sensitivity is the application of a standard sample formed by a thin layer of Si nanocrystals with stable and efficient photoluminescence. The spectral PL quantum yield and the PL spatial distribution of the standard sample must be characterized by separate experiments. The absolutely calibrated microspectroscope enables characterization of the spectral photon emittance of a studied object, or even its luminescence quantum yield (QY) if additional knowledge about the spatial distribution of emission and about the excitance is available. Capabilities of the calibrated microspectroscope are demonstrated by measuring the external QY of electroluminescence from a standard poly-Si solar cell and of photoluminescence of Er-doped Si nanocrystals.

  19. Study for reducing lung dose of upper thoracic esophageal cancer radiotherapy by auto-planning: volumetric-modulated arc therapy vs intensity-modulated radiation therapy.

    PubMed

    Chen, Hua; Wang, Hao; Gu, Hengle; Shao, Yan; Cai, Xuwei; Fu, Xiaolong; Xu, Zhiyong

    2017-10-27

This study aimed to investigate the dosimetric differences and lung sparing between volumetric-modulated arc therapy (VMAT) and intensity-modulated radiation therapy (IMRT) in preoperative radiotherapy of T3N0M0 upper thoracic esophageal cancer planned by auto-planning (AP). Sixteen patient cases diagnosed with T3N0M0 upper thoracic esophageal cancer and scheduled for preoperative radiotherapy were retrospectively studied, and 3 plans were generated for each patient: a full-arc VMAT AP plan with double arcs (fVMAT), a partial-arc VMAT AP plan with 6 partial arcs (pVMAT), and a conventional IMRT AP plan (aIMRT). A simultaneous integrated boost with 2 levels was planned in all patients. Target coverage, organ-at-risk sparing, and treatment parameters including monitor units and treatment time (TT) were evaluated. The Wilcoxon signed-rank test was used to check for significant differences (p < 0.05) between datasets. The VMAT plans (pVMAT and fVMAT) significantly reduced the total lung volume treated above 20 Gy (V20), 25 Gy (V25), 30 Gy (V30), 35 Gy (V35), and 40 Gy (V40), without increasing the values of V10, V13, and V15. For the V5 of the total lung, pVMAT was similar to aIMRT, and it was better than fVMAT. Both pVMAT and fVMAT improved the target dose coverage and significantly decreased the maximum dose to the spinal cord, the monitor units, and the TT. No significant difference was observed with respect to V10 and V15 of the body. The VMAT AP plan was a good option for treating T3N0M0 upper thoracic esophageal cancer, especially the partial-arc VMAT AP plan. It had the potential to effectively reduce lung dose in a shorter TT and with superior target coverage and dose homogeneity. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  20. Thickness measurement of nontransparent free films by double-side white-light interferometry: Calibration and experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poilane, C.; Sandoz, P.; Departement d'Optique PM Duffieux, Institut FEMTO-ST, UMR CNRS 6174, Universite de Franche-Comte, 25030 Besancon, Cedex

    2006-05-15

A double-side optical profilometer based on white-light interferometry was developed for thickness measurement of nontransparent films. The profile of the sample is measured simultaneously on both sides of the film. The resulting data allow the computation of the roughness, the flatness and the parallelism of the sides of the film, and the average thickness of the film. The key point is the apparatus calibration, i.e., the accurate determination of the distance between the reference mirrors of the complementary interferometers. Specific samples were processed for that calibration. The system is adaptable to various thickness scales as long as calibration can be made accurately. A thickness accuracy better than 30 nm for films thinner than 200 μm is reported with the experimental material used. In this article, we present the principle of the method as well as the calibration methodology. Limitations and accuracy of the method are discussed. Experimental results are presented.

  1. SU-E-J-45: The Correlation Between CBCT Flat Panel Misalignment and 3D Image Guidance Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenton, O; Valdes, G; Yin, L

Purpose: To simulate the impact of CBCT flat panel misalignment on image quality and on the calculated correction vectors in 3D image-guided proton therapy, and to determine whether these calibration errors can be caught in our QA process. Methods: The X-ray source and detector geometrical calibration (flexmap) file of the CBCT system in the AdaPTinsight software (IBA proton therapy) was edited to induce known changes in the rotational and translational calibrations of the imaging panel. Translations of up to ±10 mm in the x, y and z directions (see supplemental) and rotational errors of up to ±3° were induced. The calibration files were then used to reconstruct the CBCT image of a pancreatic patient and a CatPhan phantom. Correction vectors were calculated for the patient using the software's auto match system and compared to baseline values. The CatPhan CBCT images were used for quantitative evaluation of image quality for each type of induced error. Results: Translations of 1 to 3 mm in the x and y calibration resulted in corresponding correction vector errors of equal magnitude. Similar 10 mm shifts were seen in the y-direction; however, in the x-direction, the image quality was too degraded for a match. These translational errors can be identified through differences in isocenter from orthogonal kV images taken during routine QA. Errors in the z-direction had no effect on the correction vector or image quality. Rotations of the imaging panel calibration resulted in corresponding correction vector rotations of the patient images. These rotations also resulted in degraded image quality, which can be identified through quantitative image quality metrics. Conclusion: Misalignment of CBCT geometry can lead to incorrect translational and rotational patient correction vectors. These errors can be identified through QA of the imaging isocenter as compared to orthogonal images, combined with monitoring of CBCT image quality.

  2. Dynamic grid refinement for partial differential equations on parallel computers

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.

  3. Gradient-based model calibration with proxy-model assistance

    NASA Astrophysics Data System (ADS)

    Burrows, Wesley; Doherty, John

    2016-02-01

Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
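The division of labour described above (cheap proxy for derivatives, expensive model only for evaluating residuals and testing upgrades) can be sketched in one parameter. The toy "expensive" model, the proxy with a deliberate 2% scale error, and the damping-free Gauss-Newton update are all illustrative assumptions, not PEST's actual implementation.

```python
def calibrate(model, proxy, p0, obs, iters=6, h=1e-4):
    """Gauss-Newton sketch: derivative from the cheap proxy,
    misfit always evaluated on the expensive model."""
    p = p0
    for _ in range(iters):
        r = obs - model(p)                       # expensive-model residual
        j = (proxy(p + h) - proxy(p)) / h        # proxy-based derivative
        if j == 0.0:
            break
        p_trial = p + r / j                      # parameter upgrade
        if abs(obs - model(p_trial)) < abs(r):   # test upgrade on real model
            p = p_trial
        else:                                    # upgrade rejected: stop
            break
    return p

def expensive(p):
    """Stand-in for the slow 'complex' model."""
    return p * p

def cheap(p):
    """Proxy: right shape, deliberate 2% scale error."""
    return 1.02 * p * p
```

Because the update always uses the real model's residual, the 2% derivative error in the proxy only slows convergence; it does not bias the calibrated parameter value.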

  4. Influence of length and diameter of implants associated with distal extension removable partial dentures.

    PubMed

    Verri, Fellippo Ramos; Pellizzer, Eduardo Piza; Rocha, Eduardo Passos; Pereira, João Antônio

    2007-09-01

The aim of this study was to evaluate the influence of the length and diameter of an implant incorporated under the saddle of a distal-extension removable partial denture, acting as support. Six hemi-mandibular models were made with the presence of the left inferior cuspid and first bicuspid, with the following differences: model A, without removable partial denture; model B, removable partial denture only; model C, removable partial denture and implant of 3.75 x x mm; model D, removable partial denture and implant of 3.75 x x3 mm; model E, removable partial denture and implant of 5 x x mm; and model F, removable partial denture and implant of 5 x x3 mm. These models were designed with the aid of AutoCAD 2000 (Autodesk, Inc., San Rafael, CA) and processed for finite element analysis with ANSYS 5.4 (Swanson Analysis Systems, Houston, PA). The applied loads were 50 N, vertical, on each cuspid point. It was noted that the presence of the removable partial denture overloaded the supporting tooth and other structures. The introduction of the implant reduced tensions, mainly at the extremities of the edentulous ridge. Both increasing length and increasing diameter tended to reduce tensions. Increasing the length of the implant had a great influence on the decrease of displacement and von Mises tension values. Increasing the diameter of the implant had a great influence on the decrease of von Mises tension values, but did not influence the displacement values. According to the results of this study, it is advisable to use the longest and widest implant possible when associating an implant with a distal-extension removable partial denture.

  5. INVITED TOPICAL REVIEW: Parallel magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Larkman, David J.; Nunes, Rita G.

    2007-04-01

    Parallel imaging has been the single biggest innovation in magnetic resonance imaging in the last decade. The use of multiple receiver coils to augment the time-consuming Fourier encoding has reduced acquisition times significantly. This increase in speed comes at a time when other approaches to acquisition time reduction were reaching engineering and human limits. A brief summary of spatial encoding in MRI is followed by an introduction to the problem parallel imaging is designed to solve. There are a large number of parallel reconstruction algorithms; this article reviews a cross-section, SENSE, SMASH, g-SMASH and GRAPPA, selected to demonstrate the different approaches. Theoretical (the g-factor) and practical (coil design) limits to acquisition speed are reviewed. The practical implementation of parallel imaging is also discussed, in particular coil calibration. How to recognize potential failure modes and their associated artefacts is shown. Well-established applications including angiography, cardiac imaging and applications using echo planar imaging are reviewed, and we discuss what makes a good application for parallel imaging. Finally, active research areas where parallel imaging is being used to improve data quality by repairing artefacted images are also reviewed.

  6. Simultaneous spectrophotometric determination of valsartan and hydrochlorothiazide by H-point standard addition method and partial least squares regression.

    PubMed

    Lakshmi, Karunanidhi Santhana; Lakshmi, Sivasubramanian

    2011-03-01

    Simultaneous determination of valsartan and hydrochlorothiazide by the H-point standard additions method (HPSAM) and partial least squares (PLS) calibration is described. Absorbances at a pair of wavelengths, 216 and 228 nm, were monitored while standard solutions of valsartan were added. Results of applying HPSAM showed that valsartan and hydrochlorothiazide can be determined simultaneously at concentration ratios varying from 20:1 to 1:15 in a mixed sample. The proposed PLS method requires neither chemical separation nor graphical spectral procedures for quantitative resolution of mixtures containing the title compounds. The calibration model was based on absorption spectra in the 200-350 nm range for 25 different mixtures of valsartan and hydrochlorothiazide. Calibration matrices contained 0.5-3 μg mL-1 of both valsartan and hydrochlorothiazide. The standard error of prediction (SEP) was 0.020 μg mL-1 for valsartan and 0.038 μg mL-1 for hydrochlorothiazide. Both proposed methods were successfully applied to the determination of valsartan and hydrochlorothiazide in several synthetic and real matrix samples.
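    As a rough illustration of this kind of two-analyte PLS calibration, the sketch below fits a two-component PLS model to synthetic spectra. The Gaussian bands, noise level, and use of scikit-learn are illustrative assumptions, not the paper's data or software; only the 200-350 nm range, the 25 mixtures, and the 0.5-3 μg/mL design are taken from the abstract.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    wavelengths = np.linspace(200, 350, 151)            # nm range from the abstract
    band_a = np.exp(-((wavelengths - 216) / 10) ** 2)   # hypothetical analyte A band
    band_b = np.exp(-((wavelengths - 270) / 15) ** 2)   # hypothetical analyte B band

    # 25 calibration mixtures with 0.5-3 ug/mL of each analyte, as in the design
    conc = rng.uniform(0.5, 3.0, size=(25, 2))
    spectra = conc @ np.vstack([band_a, band_b]) + rng.normal(0, 0.01, (25, 151))

    pls = PLSRegression(n_components=2).fit(spectra, conc)

    # Predict an unseen mixture from its spectrum alone
    test_spec = np.array([[1.5, 2.0]]) @ np.vstack([band_a, band_b])
    pred = pls.predict(test_spec)
    print(np.round(pred, 2))   # close to [[1.5, 2.0]]
    ```

    The point of PLS here, as in the paper, is that overlapping bands need no chemical separation: the latent variables resolve the two analytes directly from the full spectrum.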

  7. An automatic DI-flux at the Livingston Island geomagnetic observatory, Antarctica: requirements and lessons learned

    NASA Astrophysics Data System (ADS)

    Marsal, Santiago; José Curto, Juan; Torta, Joan Miquel; Gonsette, Alexandre; Favà, Vicent; Rasson, Jean; Ibañez, Miquel; Cid, Òscar

    2017-07-01

    The DI-flux, consisting of a fluxgate magnetometer coupled with a theodolite, is used for the absolute manual measurement of the magnetic field angles in most ground-based observatories worldwide. Commercial solutions for an automated DI-flux have recently been developed by the Royal Meteorological Institute of Belgium (RMI), and are practically restricted to the AutoDIF and its variant, the GyroDIF. In this article, we analyze the pros and cons of both instruments in terms of their suitability for installation at the partially manned geomagnetic observatory of Livingston Island (LIV), Antarctica. We conclude that the GyroDIF, though less accurate and more power-demanding, is more suitable than the AutoDIF for harsh conditions because it requires simpler infrastructure. Power constraints at the Spanish Antarctic Station Juan Carlos I (ASJI) during the unmanned season require an energy-efficient design of the thermally regulated box housing the instrument, as well as thorough power management. Our experiences can benefit the geomagnetic community, which often faces similar challenges.

  8. Experimental study on heat transfer enhancement of laminar ferrofluid flow in horizontal tube partially filled porous media under fixed parallel magnet bars

    NASA Astrophysics Data System (ADS)

    Sheikhnejad, Yahya; Hosseini, Reza; Saffar Avval, Majid

    2017-02-01

    In this study, steady-state laminar ferroconvection through a circular horizontal tube partially filled with porous media under constant heat flux is experimentally investigated. Transverse magnetic fields were applied to the ferrofluid flow by two fixed parallel magnet bars positioned at a certain distance from the beginning of the test section. The results show promising enhancement in heat transfer from the partially filled porous media and from the magnetic field: up to 2.2-fold and 1.4-fold increases in the heat transfer coefficient, respectively. It was found that the simultaneous presence of both porous media and magnetic field can improve heat transfer by up to 2.4-fold; porous media plays the major role in this configuration. Application of the magnetic field and porous media also incurs higher pressure loss along the pipe, to which, again, the porous media contribution is higher than that of the magnetic field.

  9. Interface Provides Standard-Bus Communication

    NASA Technical Reports Server (NTRS)

    Culliton, William G.

    1995-01-01

    Microprocessor-controlled interface (IEEE-488/LVABI) incorporates service-request and direct-memory-access features. Is circuit card enabling digital communication between system called "laser auto-covariance buffer interface" (LVABI) and compatible personal computer via general-purpose interface bus (GPIB) conforming to Institute of Electrical and Electronics Engineers (IEEE) Standard 488. Interface serves as second interface enabling first interface to exploit advantages of GPIB, via utility software written specifically for GPIB. Advantages include compatibility with multitasking and support of communication among multiple computers. Basic concept also applied in designing interfaces for circuits other than LVABI for unidirectional or bidirectional handling of parallel data up to 16 bits wide.

  10. Parallel, Real-Time and Pipeline Data Reduction for the ROVER Sub-mm Heterodyne Polarimeter on the JCMT with ACSIS and ORAC-DR

    NASA Astrophysics Data System (ADS)

    Leech, J.; Dewitt, S.; Jenness, T.; Greaves, J.; Lightfoot, J. F.

    2005-12-01

    ROVER is a rotating waveplate polarimeter for use with (sub)mm heterodyne instruments, particularly the 16-element focal plane Heterodyne Array Receiver HARP (Smit 2003), due for commissioning on the JCMT in 2004. The ROVER/HARP back-end will be a digital auto-correlation spectrometer, known as ACSIS, designed specifically for the demanding data volumes from the HARP array receiver. ACSIS is being developed by DRAO, Penticton and UKATC. This paper describes the data reduction of ROVER polarimetry data, both in real time by ACSIS-DR and through the ORAC-DR data reduction pipeline.

  11. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Padé and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems, and thus offers the potential for increased time parallelism in time-dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
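    The partial-fraction idea can be seen in a toy example: a rational function of a matrix splits into independent shifted linear solves, which is where the parallelism comes from. The rational function below is chosen for simplicity, not taken from the paper's Padé or Chebyshev approximants.

    ```python
    import numpy as np

    # r(z) = 1/((z-2)(z-3)) = 1/(z-3) - 1/(z-2): each partial-fraction term
    # becomes an independent shifted solve, so the terms can run in parallel.
    A = np.array([[5.0, 1.0], [0.0, 7.0]])
    v = np.array([1.0, 1.0])
    I = np.eye(2)

    # Direct evaluation of r(A) v via the product form
    direct = np.linalg.solve((A - 2 * I) @ (A - 3 * I), v)

    # Partial-fraction evaluation: two independent solves (parallelizable)
    term1 = np.linalg.solve(A - 3 * I, v)
    term2 = np.linalg.solve(A - 2 * I, v)
    pf = term1 - term2

    print(np.allclose(direct, pf))  # True
    ```

    In the paper's setting, the rational function approximates the matrix exponential, and each pole contributes one such solve per time step.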

  12. Implementation of DFT application on ternary optical computer

    NASA Astrophysics Data System (ADS)

    Junjie, Peng; Youyi, Fu; Xiaofeng, Zhang; Shuai, Kong; Xinyu, Wei

    2018-03-01

    Owing to its characteristics of a huge number of data bits and low energy consumption, optical computing may be used in applications such as the DFT, which require a large amount of computation and can be implemented in parallel. Accordingly, DFT implementation methods in full parallel as well as in partial parallel are presented. Based on the resources of a ternary optical computer (TOC), extensive experiments were carried out. Experimental results show that the proposed schemes are correct and feasible. They provide a foundation for further exploration of applications on the TOC that need a large amount of calculation and can be processed in parallel.
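    For reference, a naive DFT makes the parallelism explicit: each output bin is an independent inner product, so all bins can be computed fully in parallel, or split into groups for partial parallelism. This standard-library sketch is illustrative only and unrelated to the TOC hardware.

    ```python
    import cmath
    import math

    def dft(x):
        """Naive DFT; each output bin X[k] is an independent sum, so the
        k-loop is trivially parallelizable (full parallel) or splittable
        into groups of bins (partial parallel)."""
        N = len(x)
        return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
                for k in range(N)]

    # A constant signal concentrates all energy in bin 0
    X = dft([1.0, 1.0, 1.0, 1.0])
    print(round(abs(X[0]), 6), round(abs(X[1]), 6))  # 4.0 0.0
    ```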

  13. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case-study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An OpenMP implementation on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
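    The kernel such accelerators speed up is, at heart, an explicit 2D stencil update. The NumPy sketch below shows a generic diffusion-style step (not the authors' cardiac action potential model); in a C version, the corresponding double loop over grid points is what an OpenACC `#pragma acc parallel loop` would offload to the GPU.

    ```python
    import numpy as np

    def step(u, D=0.1, dt=0.1, dx=1.0):
        """One explicit finite-difference update of a 2D diffusion-like PDE
        with periodic boundaries. Every grid point is updated independently,
        which is exactly the data parallelism OpenACC/OpenCL/OpenMP exploit."""
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx ** 2
        return u + dt * D * lap

    u = np.zeros((64, 64))
    u[32, 32] = 1.0                 # initial stimulus
    for _ in range(100):
        u = step(u)
    print(u.sum())                  # total is conserved (periodic boundaries): 1.0
    ```

    A cardiac action potential model replaces the Laplacian-only right-hand side with nonlinear reaction terms, but the per-point independence, and hence the parallelization strategy, is the same.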

  14. Calibration of mass spectrometric peptide mass fingerprint data without specific external or internal calibrants

    PubMed Central

    Wolski, Witold E; Lalowski, Maciej; Jungblut, Peter; Reinert, Knut

    2005-01-01

    Background Peptide Mass Fingerprinting (PMF) is a widely used mass spectrometry (MS) method for the analysis of proteins and peptides. It relies on the comparison between experimentally determined and theoretical mass spectra. The PMF process requires calibration, usually performed with external or internal calibrants of known molecular masses. Results We have introduced two novel MS calibration methods. The first method utilises the local similarity of peptide maps generated after separation of complex protein samples by two-dimensional gel electrophoresis. It computes a multiple peak-list alignment of the data set using a modified Minimum Spanning Tree (MST) algorithm. The second method exploits the idea that hundreds of MS samples are measured in parallel on one sample support. It improves the calibration coefficients by applying a two-dimensional Thin Plate Splines (TPS) smoothing algorithm. We studied the novel calibration methods utilising data generated by three different MALDI-TOF-MS instruments. We demonstrate that a PMF data set can be calibrated without resorting to external calibrants or relying on widely occurring internal calibrants. The methods developed here were implemented in R as part of the BioConductor package mscalib. Conclusion The MST calibration algorithm is well suited to calibrate MS spectra of protein samples resulting from two-dimensional gel electrophoretic separation. The TPS-based calibration algorithm might be used to correct systematic mass measurement errors observed for large MS sample supports. As compared to other methods, our combined MS spectra calibration strategy increases the peptide/protein identification rate by an additional 5-15%. PMID:16102175

  15. A Comparison of Item Selection Techniques and Exposure Control Mechanisms in CATs Using the Generalized Partial Credit Model.

    ERIC Educational Resources Information Center

    Pastor, Dena A.; Dodd, Barbara G.; Chang, Hua-Hua

    2002-01-01

    Studied the impact of using five different exposure control algorithms in two sizes of item pool calibrated using the generalized partial credit model. Simulation results show that the a-stratified design, in comparison to a no-exposure control condition, could be used to reduce item exposure and overlap and increase pool use, while degrading…
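    For readers unfamiliar with the model behind these simulations, the generalized partial credit model's category probabilities are straightforward to compute; the item parameters below are hypothetical, chosen only to illustrate the formula.

    ```python
    import math

    def gpcm_probs(theta, a, b):
        """Generalized partial credit model: probability of each score
        category k = 0..m for ability theta, discrimination a, and step
        difficulties b[0..m-1] (hypothetical item parameters)."""
        # cumulative logits: psi_0 = 0, psi_k = sum_{j<=k} a * (theta - b_j)
        psi, s = [0.0], 0.0
        for bj in b:
            s += a * (theta - bj)
            psi.append(s)
        z = [math.exp(v) for v in psi]
        total = sum(z)
        return [zi / total for zi in z]

    # A 4-category item with symmetric steps around theta = 0
    p = gpcm_probs(theta=0.0, a=1.0, b=[-1.0, 0.0, 1.0])
    print(round(sum(p), 6))  # category probabilities sum to 1.0
    ```

    In a CAT, these probabilities feed both item information (for selection) and the likelihood used to update the ability estimate.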

  16. Using Dalton's Law of Partial Pressures to Determine the Vapor Pressure of a Volatile Liquid

    ERIC Educational Resources Information Center

    Hilgeman, Fred R.; Bertrand, Gary; Wilson, Brent

    2007-01-01

    This experiment, designed for a general chemistry laboratory, illustrates the use of Dalton's law of partial pressures to determine the vapor pressure of a volatile liquid. A predetermined volume of air is injected into a calibrated tube filled with a liquid whose vapor pressure is to be measured. The volume of the liquid displaced is greater than…
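    A minimal sketch of the calculation behind the experiment, with made-up numbers: the injected air and the liquid's vapor share the bubble, so Dalton's law gives the vapor's partial pressure from the volume increase, assuming the bubble stays at atmospheric pressure and the air behaves ideally.

    ```python
    def vapor_pressure(p_atm_torr, v_air_ml, v_total_ml):
        """Dalton's law: the vapor's mole (and volume) fraction of the bubble
        is (V_total - V_air) / V_total, so
        P_vap = P_total * (V_total - V_air) / V_total."""
        return p_atm_torr * (v_total_ml - v_air_ml) / v_total_ml

    # e.g. 10.0 mL of injected air grows to 12.5 mL over the liquid at 760 Torr
    print(vapor_pressure(760.0, 10.0, 12.5))  # 152.0 Torr
    ```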

  17. Introduction to total- and partial-pressure measurements in vacuum systems

    NASA Technical Reports Server (NTRS)

    Outlaw, R. A.; Kern, F. A.

    1989-01-01

    An introduction to the fundamentals of total- and partial-pressure measurement in the vacuum regime (760 to 10^-16 Torr) is presented. The instruments most often used in scientific fields requiring vacuum measurement are discussed, with special emphasis on ionization-type gauges and quadrupole mass spectrometers. Some attention is also given to potential errors in measurement as well as calibration techniques.

  18. Operator assistant to support deep space network link monitor and control

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Desai, Rajiv; Martinez, Elmain

    1992-01-01

    Preparing the Deep Space Network (DSN) stations to support spacecraft missions (referred to as pre-cal, for pre-calibration) is currently an operator- and time-intensive activity. Operators are responsible for sending and monitoring several hundred operator directives, messages, and warnings. Operator directives are used to configure and calibrate the various subsystems (antenna, receiver, etc.) necessary to establish a spacecraft link. Messages and warnings are issued by the subsystems upon completion of an operation, changes of status, or an anomalous condition. Some portions of pre-cal are logically parallel. Significant time savings could be realized if the existing Link Monitor and Control system (LMC) could support the operator in exploiting the parallelism inherent in pre-cal activities. Currently, operators may work on the individual subsystems in parallel; however, the burden of monitoring these parallel operations resides solely with the operator. Messages, warnings, and directives are all presented as they are received, without being correlated to the event that triggered them. Pre-cal is essentially an overhead activity. During pre-cal, no mission is supported, and no other activity can be performed using the equipment in the link. Therefore, it is highly desirable to reduce pre-cal time as much as possible. One approach to do this, as well as to increase efficiency and reduce errors, is the LMC Operator Assistant (OA). The LMC OA prototype demonstrates an architecture which can be used in concert with the existing LMC to exploit parallelism in pre-cal operations while providing the operators with a true monitoring capability, situational awareness, and positive control. This paper presents an overview of the LMC OA architecture and the results from initial prototyping and test activities.

  19. Parallelization of a hydrological model using the message passing interface

    USGS Publications Warehouse

    Wu, Yiping; Li, Tiejian; Sun, Liqun; Chen, Ji

    2013-01-01

    With the increasing knowledge about natural processes, hydrological models such as the Soil and Water Assessment Tool (SWAT) are becoming larger and more complex, with increasing computation time. Additionally, other procedures such as model calibration, which may require thousands of model iterations, can increase running time and thus further reduce rapid modeling and analysis. Using the widely-applied SWAT as an example, this study demonstrates how to parallelize a serial hydrological model in a Windows® environment using a parallel programming technology, the Message Passing Interface (MPI). With a case study, we derived the optimal values for the two parameters (the number of processes and the corresponding percentage of work to be distributed to the master process) of the parallel SWAT (P-SWAT) on an ordinary personal computer and a workstation. Our study indicates that model execution time can be reduced by 42%–70% (or a speedup of 1.74–3.36) using multiple processes (two to five) with a proper task-distribution scheme (between the master and slave processes). Although the computation time cost becomes lower with an increasing number of processes (from two to five), this enhancement becomes less due to the accompanying increase in demand for message-passing procedures between the master and all slave processes. Our case study demonstrates that P-SWAT with a five-process run may reach the maximum speedup, and the performance can be quite stable (fairly independent of project size). Overall, P-SWAT can help reduce the computation time substantially for an individual model run, manual and automatic calibration procedures, and optimization of best management practices. In particular, the parallelization method we used and the scheme for deriving the optimal parameters in this study can be valuable and easily applied to other hydrological or environmental models.
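    The reported speedup range follows directly from the run-time reductions; a one-line check, assuming speedup = 1 / (1 - fractional reduction):

    ```python
    def speedup(reduction):
        """Convert a fractional run-time reduction into a speedup factor.
        A 42%-70% reduction corresponds to roughly 1.7x-3.3x, consistent
        with the 1.74-3.36 range reported for P-SWAT (the small difference
        is presumably rounding in the reported percentages)."""
        return 1.0 / (1.0 - reduction)

    print(round(speedup(0.42), 2), round(speedup(0.70), 2))  # 1.72 3.33
    ```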

  20. Data Quality Verification at STScI - Automated Assessment and Your Data

    NASA Astrophysics Data System (ADS)

    Dempsey, R.; Swade, D.; Scott, J.; Hamilton, F.; Holm, A.

    1996-12-01

    As satellite-based observatories improve their ability to deliver wider varieties and more complex types of scientific data, so too does the process of analyzing and reducing these data. It becomes correspondingly imperative that Guest Observers or Archival Researchers have access to an accurate, consistent, and easily understandable summary of the quality of their data. Previously, at the STScI, an astronomer would display and examine the quality and scientific usefulness of every single observation obtained with HST. Recently, this process has undergone a major reorganization at the Institute. A major part of the new process is that the majority of data are assessed automatically with little or no human intervention. As part of routine processing in the OSS--PODPS Unified System (OPUS), the Observatory Monitoring System (OMS) observation logs, the science processing trailer file (also known as the TRL file), and the science data headers are inspected by an automated tool, AUTO_DQ. AUTO_DQ then determines whether any anomalous events occurred during the observation, or during processing and calibration of the data, that affect the procedural quality of the data. The results are placed directly into the Procedural Data Quality (PDQ) file as a string of predefined data quality keywords and comments. These in turn are used by the Contact Scientist (CS) to check the scientific usefulness of the observations. In this manner, the telemetry stream is checked for known problems such as losses of lock, re-centerings, or degraded guiding, for example, while missing data or calibration errors are also easily flagged. If the problem is serious, the data are then queued for manual inspection by an astronomer. The success of every target acquisition is verified manually. If serious failures are confirmed, the PI and the scheduling staff are notified so that options concerning rescheduling the observations can be explored.

  1. An auto-locked diode laser system for precision metrology

    NASA Astrophysics Data System (ADS)

    Beica, H. C.; Carew, A.; Vorozcovs, A.; Dowling, P.; Pouliot, A.; Barron, B.; Kumarakrishnan, A.

    2017-05-01

    We present a unique external cavity diode laser system that can be auto-locked with reference to atomic and molecular spectra. The vacuum-sealed laser head design uses an interchangeable base-plate comprised of a laser diode and optical elements that can be selected for desired wavelength ranges. The feedback light to the laser diode is provided by a narrow-band interference filter, which can be tuned from outside the laser cavity to fine-adjust the output wavelength in vacuum. To stabilize the laser frequency, the digital laser controller relies either on a pattern-matching algorithm stored in memory, or on first- or third-derivative feedback. We have used the laser systems to perform spectroscopic studies in rubidium at 780 nm, and in iodine at 633 nm. The linewidth of the 780-nm laser system was measured to be ˜500 kHz, and we present Allan deviation measurements of the beat note and the lock stability. Furthermore, we show that the laser system can be the basis for a new class of lidar transmitters in which a temperature-stabilized fiber-Bragg grating is used to generate frequency references for on-line points of the transmitter. We show that the fiber-Bragg grating spectra can be calibrated with reference to atomic transitions.

  2. Method and apparatus for calibrating a linear variable differential transformer

    DOEpatents

    Pokrywka, Robert J [North Huntingdon, PA

    2005-01-18

    A calibration apparatus for calibrating a linear variable differential transformer (LVDT) having an armature positioned in an LVDT armature orifice and able to move along an axis of movement. The calibration apparatus includes a heating mechanism with an internal chamber, a temperature measuring mechanism for measuring the temperature of the LVDT, a fixture mechanism with an internal chamber for at least partially accepting the LVDT and for securing the LVDT within the heating mechanism internal chamber, a moving mechanism for moving the armature, a position measurement mechanism for measuring the position of the armature, and an output voltage measurement mechanism. A method for calibrating an LVDT, including the steps of: powering the LVDT; heating the LVDT to a desired temperature; measuring the position of the armature with respect to the armature orifice; and measuring the output voltage of the LVDT.

  3. Newton-like methods for Navier-Stokes solution

    NASA Astrophysics Data System (ADS)

    Qin, N.; Xu, X.; Richards, B. E.

    1992-12-01

    The paper reports on Newton-like methods called SFDN-alpha-GMRES and SQN-alpha-GMRES methods that have been devised and proven as powerful schemes for large nonlinear problems typical of viscous compressible Navier-Stokes solutions. They can be applied using a partially converged solution from a conventional explicit or approximate implicit method. Developments have included the efficient parallelization of the schemes on a distributed memory parallel computer. The methods are illustrated using a RISC workstation and a transputer parallel system respectively to solve a hypersonic vortical flow.

  4. Time-gated flow cytometry: an ultra-high selectivity method to recover ultra-rare-event μ-targets in high-background biosamples

    NASA Astrophysics Data System (ADS)

    Jin, Dayong; Piper, James A.; Leif, Robert C.; Yang, Sean; Ferrari, Belinda C.; Yuan, Jingli; Wang, Guilan; Vallarino, Lidia M.; Williams, John W.

    2009-03-01

    A fundamental problem for rare-event cell analysis is auto-fluorescence from nontarget particles and cells. Time-gated flow cytometry is based on the temporal-domain discrimination of long-lifetime (>1 μs) luminescence-stained cells and can render invisible all nontarget cells and particles. We aim to further evaluate the technique, focusing on detection of ultra-rare-event 5-μm calibration beads in environmental water samples containing dirt. Europium-labeled 5-μm calibration beads with improved luminescence homogeneity and reduced aggregation were evaluated using the prototype UV LED excited time-gated luminescence (TGL) flow cytometer (FCM). A BD FACSAria flow cytometer was used to accurately sort a very low number of beads (<100 events), which were then spiked into concentrated samples of environmental water. The use of europium-labeled beads permitted the demonstration of specific detection rates of 100%+/-30% and 91%+/-3% with 10 and 100 target beads, respectively, that were mixed with over one million nontarget autofluorescent background particles. Under the same conditions, a conventional FCM was unable to recover rare-event fluorescein isothiocyanate (FITC) calibration beads. Preliminary results on Giardia detection are also reported. We have demonstrated the scientific value of lanthanide-complex biolabels in flow cytometry. This approach may augment the current method that uses multifluorescence-channel flow cytometry gating.

  5. An Auto-Calibrating Knee Flexion-Extension Axis Estimator Using Principal Component Analysis with Inertial Sensors.

    PubMed

    McGrath, Timothy; Fineman, Richard; Stirling, Leia

    2018-06-08

    Inertial measurement units (IMUs) have been demonstrated to reliably measure human joint angles, an essential quantity in the study of biomechanics. However, most previous literature proposed IMU-based joint angle measurement systems that required manual alignment or prescribed calibration motions. This paper presents a simple, physically-intuitive method for IMU-based measurement of the knee flexion/extension angle in gait without requiring alignment or discrete calibration, based on computationally-efficient and easy-to-implement Principal Component Analysis (PCA). The method is compared against an optical motion capture knee flexion/extension angle modeled through OpenSim. The method is evaluated using both measured and simulated IMU data in an observational study (n = 15) with an absolute root-mean-square error (RMSE) of 9.24° and a zero-mean RMSE of 3.49°. Variation in error across subjects was found, revealed by a larger subject population than previous literature considers. Finally, the paper presents an explanatory model of RMSE as a function of IMU mounting location. The observational data suggest that the RMSE of the method is a function of thigh IMU perturbation and axis estimation quality. However, the effect size for these parameters is small in comparison to potential gains from improved IMU orientation estimations. Results also highlight the need to set relevant datums from which to interpret joint angles for both truth references and estimated data.
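    The core idea, estimating the dominant rotation axis by PCA of angular-velocity samples, can be sketched on synthetic gyroscope data. This is an illustration of the principle only, not the authors' pipeline: the axis, noise level, and sample count below are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_axis = np.array([0.0, 1.0, 0.0])          # assumed flexion/extension axis
    rates = rng.normal(0, 2.0, 500)                # angular speeds during "gait"
    gyro = rates[:, None] * true_axis + rng.normal(0, 0.05, (500, 3))

    # PCA: the first principal component of the angular-velocity samples is
    # the direction of greatest rotation, i.e. the flexion/extension axis.
    centered = gyro - gyro.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    axis *= np.sign(axis[1])                       # PCA leaves a sign ambiguity

    print(np.round(axis, 2))                       # close to [0, 1, 0]
    ```

    During walking the knee rotates predominantly about one axis, which is why this single principal component captures it without any alignment or calibration motion.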

  6. Tandem Autologous versus Single Autologous Transplantation Followed by Allogeneic Hematopoietic Cell Transplantation for Patients with Multiple Myeloma: Results from the Blood and Marrow Transplant Clinical Trials Network (BMT CTN) 0102 Trial

    PubMed Central

    Krishnan, Amrita; Pasquini, Marcelo C.; Logan, Brent; Stadtmauer, Edward A.; Vesole, David H.; Alyea, Edwin; Antin, Joseph H.; Comenzo, Raymond; Goodman, Stacey; Hari, Parameswaran; Laport, Ginna; Qazilbash, Muzaffar H.; Rowley, Scott; Sahebi, Firoozeh; Somlo, George; Vogl, Dan T.; Weisdorf, Daniel; Ewell, Marian; Wu, Juan; Geller, Nancy L.; Horowitz, Mary M.; Giralt, Sergio; Maloney, David G.

    2012-01-01

    Background Autologous hematopoietic cell transplantation (HCT) improves survival in patients with multiple myeloma, but disease progression remains a challenge. Allogeneic HCT (alloHCT) has the potential to reduce disease progression through graft-versus-myeloma effects. The aim of the BMT CTN 0102 trial was to compare outcomes of autologous HCT (autoHCT) followed by alloHCT with non-myeloablative conditioning (auto-allo) to tandem autoHCT (auto-auto) in patients with standard-risk myeloma. Patients in the auto-auto arm were randomized to one year of thalidomide and dexamethasone (Thal-Dex) maintenance therapy or observation (Obs). Methods Patients with multiple myeloma within 10 months from initiation of induction therapy were classified as standard-risk (SRD) or high-risk (HRD) disease based on cytogenetics and beta-2-microglobulin levels. Assignment to auto-allo HCT was based on availability of an HLA-matched sibling donor. The primary endpoint was three-year progression-free survival (PFS) according to intent-to-treat analysis. Results 710 patients were enrolled and completed a minimum of 3 years of follow-up. Among 625 SRD patients, 189 and 436 were assigned to auto-allo and auto-auto, respectively. Seventeen percent (33/189) of SRD patients in the auto-allo arm and 16% (70/436) in the auto-auto arm did not receive a second transplant. Thal-Dex was not completed in 77% (168/217) of assigned patients. PFS and overall survival (OS) did not differ between the Thal-Dex (49%, 80%) and Obs (41%, 81%) cohorts, and these two arms were pooled for analysis. Three-year PFS was 43% and 46% (p=0.671) and three-year OS was 77% and 80% (p=0.191) with auto-allo and auto-auto, respectively. Corresponding progression/relapse rates were 46% and 50% (p=0.402); treatment-related mortality rates were 11% and 4% (p<0.001), respectively. Auto-allo patients with chronic graft-versus-host disease had a decreased risk of relapse. The most common grade 3 to 5 adverse event was hyperbilirubinemia (21/189) in the auto-allo arm and peripheral neuropathy (52/436) in the auto-auto arm. Among 85 HRD patients (37 auto-allo), three-year PFS was 40% and 33% (p=0.743) and three-year OS was 59% and 67% (p=0.460) with auto-allo and auto-auto, respectively. Conclusion Thal-Dex maintenance was associated with poor compliance and did not improve PFS or OS. At three years there was no improvement in PFS or OS with auto-allo compared to auto-auto transplantation in patients with standard-risk myeloma. Decisions to proceed with alloHCT after an autoHCT in patients with standard-risk myeloma should take into consideration the results of the current trial. Future investigation of alloHCT in myeloma should focus on minimizing TRM and maximizing graft-versus-myeloma effects. This trial was registered at Clinicaltrials.gov (NCT00075829) and was funded by the National Heart, Lung and Blood Institute and the National Cancer Institute. PMID:21962393

  7. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression.

    PubMed

    Delwiche, Stephen R; Reeves, James B

    2010-01-01

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or to accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocess functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of an over-reliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R²) rather than a term based on residual error. The graphical method has application to the evaluation of other preprocess functions and various types of spectroscopy data.
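
    As a minimal sketch of the pretreatments described above (the window length and polynomial order here are illustrative, not the study's 5-to-25-point settings), SciPy's `savgol_filter` implements the same quadratic-polynomial smoothing and derivative filters:

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic "spectrum": a Gaussian absorption band sitting on a sloped baseline.
x = np.linspace(0.0, 1.0, 200)
spectrum = np.exp(-((x - 0.5) ** 2) / 0.005) + 2.0 * x + 1.0

# Quadratic-polynomial Savitzky-Golay filters with an 11-point symmetric window:
smoothed = savgol_filter(spectrum, window_length=11, polyorder=2)         # smoothing only
deriv1 = savgol_filter(spectrum, window_length=11, polyorder=2, deriv=1)  # suppresses constant offsets
deriv2 = savgol_filter(spectrum, window_length=11, polyorder=2, deriv=2)  # suppresses linear baselines
```

    Because a quadratic local fit reproduces a linear baseline exactly, the second-derivative filter maps any offset-plus-slope background to numerical zero, which is what makes it attractive (and, per the study, potentially misleading) as a pretreatment before PLS.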

  8. New technique for calibrating hydrocarbon gas flowmeters

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Puster, R. L.

    1984-01-01

    A technique for measuring calibration correction factors for hydrocarbon mass flowmeters is described. It is based on the Nernst theorem for matching the partial pressure of oxygen in the combustion products of the test hydrocarbon, burned in oxygen-enriched air, with that in normal air. It is applied to a widely used type of commercial thermal mass flowmeter for a number of hydrocarbons. The calibration correction factors measured using this technique are in good agreement with the values obtained by other independent procedures. The technique is successfully applied to the measurement of differences as low as one percent of the effective hydrocarbon content of the natural gas test samples.

  9. Phobos/Harp post launch support

    NASA Technical Reports Server (NTRS)

    Nagy, Andrew

    1993-01-01

    The activity under this grant concentrated on: (1) post-launch calibration of the HARP instrument; and (2) analysis and interpretation of the data from the HARP and other related instruments. The HARP was taken by scientists and engineers from the Hungarian Central Research Institute for Physics (CRIP) to NASA/MSFC for calibration in their plasma chamber, with partial support from this grant. This electron and ion calibration of the HARP helped in transforming measured currents to actual flux values. The analysis and interpretation of the data, carried out jointly with our Russian and Hungarian colleagues, led to a number of journal publications and presentations at scientific meetings.

  10. DVS-SOFTWARE: An Effective Tool for Applying Highly Parallelized Hardware To Computational Geophysics

    NASA Astrophysics Data System (ADS)

    Herrera, I.; Herrera, G. S.

    2015-12-01

    Most geophysical systems are macroscopic physical systems. The behavior of such systems is predicted by means of computational models whose basic building blocks are partial differential equations (PDEs) [1]. Due to the enormous size of the discretized versions of such PDEs, it is necessary to apply highly parallelized supercomputers. For these, at present, the most efficient software is based on non-overlapping domain decomposition methods (DDM). However, a limiting feature of the present state-of-the-art techniques is the kind of discretizations used in them. Recently, I. Herrera and co-workers, using 'non-overlapping discretizations', have produced the DVS-Software, which overcomes this limitation [2]. The DVS-Software can be applied to a great variety of geophysical problems and achieves very high parallel efficiencies (90% or so [3]). It is therefore very suitable for effectively exploiting the most advanced parallel supercomputers available at present. In a parallel talk at this AGU Fall Meeting, Graciela Herrera Z. will present how this software is being applied to advance MODFLOW. Key words: Parallel Software for Geophysics, High Performance Computing, HPC, Parallel Computing, Domain Decomposition Methods (DDM). REFERENCES: [1] Herrera, Ismael and George F. Pinder, Mathematical Modelling in Science and Engineering: An Axiomatic Approach, John Wiley, 243p., 2012. [2] Herrera, I., de la Cruz, L.M. and Rosas-Medina, A., "Non-Overlapping Discretization Methods for Partial Differential Equations", Numerical Methods for Partial Differential Equations, 30: 1427-1454, 2014, DOI 10.1002/num.21852. (Open source) [3] Herrera, I. and Contreras, Iván, "An Innovative Tool for Effectively Applying Highly Parallelized Software To Problems of Elasticity", Geofísica Internacional, 2015 (in press).

  11. Nebo: An efficient, parallel, and portable domain-specific language for numerically solving partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Earl, Christopher; Might, Matthew; Bagusetty, Abhishek

    This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.

  12. Nebo: An efficient, parallel, and portable domain-specific language for numerically solving partial differential equations

    DOE PAGES

    Earl, Christopher; Might, Matthew; Bagusetty, Abhishek; ...

    2016-01-26

    This study presents Nebo, a declarative domain-specific language embedded in C++ for discretizing partial differential equations for transport phenomena on multiple architectures. Application programmers use Nebo to write code that appears sequential but can be run in parallel, without editing the code. Currently Nebo supports single-thread execution, multi-thread execution, and many-core (GPU-based) execution. With single-thread execution, Nebo performs on par with code written by domain experts. With multi-thread execution, Nebo can linearly scale (with roughly 90% efficiency) up to 12 cores, compared to its single-thread execution. Moreover, Nebo’s many-core execution can be over 140x faster than its single-thread execution.

  13. Sample classification for improved performance of PLS models applied to the quality control of deep-frying oils of different botanic origins analyzed using ATR-FTIR spectroscopy.

    PubMed

    Kuligowski, Julia; Carrión, David; Quintás, Guillermo; Garrigues, Salvador; de la Guardia, Miguel

    2011-01-01

    The selection of an appropriate calibration set is a critical step in multivariate method development. In this work, the effect of using different calibration sets, based on a previous classification of unknown samples, on the partial least squares (PLS) regression model performance has been discussed. As an example, attenuated total reflection (ATR) mid-infrared spectra of deep-fried vegetable oil samples from three botanical origins (olive, sunflower, and corn oil), with increasing polymerized triacylglyceride (PTG) content induced by a deep-frying process, were employed. The use of a one-class-classifier partial least squares-discriminant analysis (PLS-DA) and a rooted binary directed acyclic graph tree provided accurate oil classification. Oil samples fried without foodstuff could be classified correctly, independent of their PTG content. However, class separation of oil samples fried with foodstuff was less evident. The combined use of double-cross model validation with permutation testing was used to validate the obtained PLS-DA classification models, confirming the results. To discuss the usefulness of selecting an appropriate PLS calibration set, the PTG content was determined by calculating a PLS model based on the previously selected classes. In comparison to a PLS model calculated using a pooled calibration set containing samples from all classes, the root mean square error of prediction could be improved significantly using PLS models based on the calibration sets selected by PLS-DA, with errors ranging between 1.06 and 2.91% (w/w).

  14. Submerged flow bridge scour under clear water conditions

    DOT National Transportation Integrated Search

    2012-09-01

    Prediction of pressure flow (vertical contraction) scour underneath a partially or fully submerged bridge superstructure in an extreme flood event is crucial for bridge safety. An experimentally and numerically calibrated formulation is developed...

  15. Online measurement of urea concentration in spent dialysate during hemodialysis.

    PubMed

    Olesberg, Jonathon T; Arnold, Mark A; Flanigan, Michael J

    2004-01-01

    We describe online optical measurements of urea in the effluent dialysate line during regular hemodialysis treatment of several patients. Monitoring urea removal can provide valuable information about dialysis efficiency. Spectral measurements were performed with a Fourier-transform infrared spectrometer equipped with a flow-through cell. Spectra were recorded across the 5000-4000 cm⁻¹ (2.0-2.5 µm) wavelength range at 1-min intervals. Savitzky-Golay filtering was used to remove baseline variations attributable to the temperature dependence of the water absorption spectrum. Urea concentrations were extracted from the filtered spectra by use of partial least-squares regression and the net analyte signal of urea. Urea concentrations predicted by partial least-squares regression matched concentrations obtained from standard chemical assays with a root mean square error of 0.30 mmol/L (0.84 mg/dL urea nitrogen) over an observed concentration range of 0-11 mmol/L. The root mean square error obtained with the net analyte signal of urea was 0.43 mmol/L with a calibration based only on a set of pure-component spectra. The error decreased to 0.23 mmol/L when a slope and offset correction were used. Urea concentrations can be continuously monitored during hemodialysis by near-infrared spectroscopy. Calibrations based on the net analyte signal of urea are particularly appealing because they do not require a training step, as do statistical multivariate calibration procedures such as partial least-squares regression.
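
    The net-analyte-signal calibration mentioned above (prediction from pure-component spectra alone, with no training step) can be sketched in a few lines of NumPy; the spectra and concentrations below are synthetic assumptions, not the paper's dialysate data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_wav = 100

# Hypothetical pure-component spectra: the analyte (urea) and two interferents.
s_urea = rng.random(n_wav)
S_other = rng.random((n_wav, 2))  # columns are interferent spectra

# Net analyte signal: the part of the urea spectrum orthogonal to the
# subspace spanned by the interferents, nas = (I - P) s_urea.
P = S_other @ np.linalg.pinv(S_other)  # projector onto the interferent subspace
nas = s_urea - P @ s_urea

# A noiseless mixture spectrum with known concentrations:
c_true = 7.5  # mmol/L, illustrative
mixture = c_true * s_urea + S_other @ np.array([3.0, 1.2])

# Because nas is orthogonal to every interferent spectrum, projecting the
# mixture onto nas isolates the analyte:
c_hat = float(nas @ mixture / (nas @ s_urea))
```

    In the noiseless case `c_hat` recovers `c_true` exactly; with real spectra the same projection trades some sensitivity (the norm of `nas` shrinks as interferents overlap the analyte band) for zero first-order interference.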

  16. Exploring performance and energy tradeoffs for irregular applications: A case study on the Tilera many-core architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panyala, Ajay; Chavarría-Miranda, Daniel; Manzano, Joseph B.

    High performance, parallel applications with irregular data accesses are becoming a critical workload class for modern systems. In particular, the execution of such workloads on emerging many-core systems is expected to be a significant component of applications in data mining, machine learning, scientific computing and graph analytics. However, power and energy constraints limit the capabilities of individual cores, memory hierarchy and on-chip interconnect of such systems, thus leading to architectural and software trade-offs that must be understood in the context of the intended application's behavior. Irregular applications are notoriously hard to optimize given their data-dependent access patterns, lack of structured locality and complex data structures and code patterns. We have ported two irregular applications, graph community detection using the Louvain method (Grappolo) and high-performance conjugate gradient (HPCCG), to the Tilera many-core system and have conducted a detailed study of platform-independent and platform-specific optimizations that improve their performance as well as reduce their overall energy consumption. To conduct this study, we employ an auto-tuning based approach that explores the optimization design space along three dimensions: memory layout schemes, GCC compiler flag choices and OpenMP loop scheduling options. We leverage MIT's OpenTuner auto-tuning framework to explore and recommend energy-optimal choices for different combinations of parameters. We then conduct an in-depth architectural characterization to understand the memory behavior of the selected workloads. Finally, we perform a correlation study to demonstrate the interplay between the hardware behavior and application characteristics. Using auto-tuning, we demonstrate whole-node energy savings and performance improvements of up to 49.6% and 60% relative to a baseline instantiation, and up to 31% and 45.4% relative to manually optimized variants.

  17. SU-C-BRA-03: An Automated and Quick Contour Error Detection for Auto Segmentation in Online Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, J; Ates, O; Li, X

    Purpose: To develop a tool that can quickly and automatically assess the quality of contours generated by auto-segmentation during online adaptive replanning. Methods: Due to the strict time requirement of online replanning and the lack of 'ground truth' contours in daily images, our method starts by assessing image registration accuracy, focusing on the surface of the organ in question. Several metrics tightly related to registration accuracy, including Jacobian maps, contour shell deformation, and voxel-based root mean square (RMS) analysis, were computed. To identify correct contours, additional metrics and an adaptive decision tree are introduced. As a proof of principle, tests were performed with CT sets, planned and daily CTs acquired using a CT-on-rails during routine CT-guided RT delivery for 20 prostate cancer patients. The contours generated on daily CTs using an auto-segmentation tool (ADMIRE, Elekta, MIM) based on deformable image registration of the planning CT and daily CT were tested. Results: The deformed contours of 20 patients, with a total of 60 structures, were manually checked as baselines. The incorrect rate of the total contours was 49%. To evaluate the quality of local deformation, the Jacobian determinant (1.047±0.045) on the contours was analyzed. In an analysis of the deformed rectum contour shell, a higher contour-error detection rate (0.41) was obtained compared to 0.32 with the manual check. All automated detections took less than 5 seconds. Conclusion: The proposed method can effectively detect contour errors at both micro and macro scales by evaluating multiple deformable registration metrics in a parallel computing process. Future work will focus on improving practicability and optimizing the calculation algorithms and metric selection.
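
    The Jacobian-map metric reported above (values near 1 indicate a locally volume-preserving deformation) can be sketched for a 2-D displacement field; the uniform-expansion field below is a synthetic check, not patient registration data:

```python
import numpy as np

# Hypothetical 2-D displacement field on a unit-spaced grid:
# a uniform 10% expansion about the grid center.
ny, nx = 64, 64
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
uy = 0.1 * (yy - ny / 2.0)
ux = 0.1 * (xx - nx / 2.0)

# Deformation phi(x) = x + u(x); Jacobian entries via finite differences.
duy_dy, duy_dx = np.gradient(uy)   # d(uy)/dy, d(uy)/dx
dux_dy, dux_dx = np.gradient(ux)   # d(ux)/dy, d(ux)/dx
jac_det = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx

# For a uniform 10% expansion, det J = 1.1 * 1.1 = 1.21 everywhere.
```

    Evaluating `jac_det` only on voxels near the organ surface, as the abstract describes, gives a cheap per-contour registration-quality score: values far from 1 (or negative, indicating folding) flag suspect auto-segmented contours.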

  18. Tools for model-building with cryo-EM maps

    DOE PAGES

    Terwilliger, Thomas Charles

    2018-01-01

    There are new tools available to you in Phenix for interpreting cryo-EM maps. You can automatically sharpen (or blur) a map with phenix.auto_sharpen and you can segment a map with phenix.segment_and_split_map. If you have overlapping partial models for a map, you can merge them with phenix.combine_models. If you have a protein-RNA complex and protein chains have been accidentally built in the RNA region, you can try to remove them with phenix.remove_poor_fragments. You can put these together and automatically sharpen, segment and build a map with phenix.map_to_model.

  19. Tools for model-building with cryo-EM maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terwilliger, Thomas Charles

    There are new tools available to you in Phenix for interpreting cryo-EM maps. You can automatically sharpen (or blur) a map with phenix.auto_sharpen and you can segment a map with phenix.segment_and_split_map. If you have overlapping partial models for a map, you can merge them with phenix.combine_models. If you have a protein-RNA complex and protein chains have been accidentally built in the RNA region, you can try to remove them with phenix.remove_poor_fragments. You can put these together and automatically sharpen, segment and build a map with phenix.map_to_model.

  20. Status of the calibration and alignment framework at the Belle II experiment

    NASA Astrophysics Data System (ADS)

    Dossett, D.; Sevior, M.; Ritter, M.; Kuhr, T.; Bilka, T.; Yaschenko, S.; Belle Software Group, II

    2017-10-01

    The Belle II detector at the SuperKEKB e+e- collider plans to take first collision data in 2018. The monetary and CPU time costs associated with storing and processing the data mean that it is crucial for the detector components at Belle II to be calibrated quickly and accurately. A fast and accurate calibration system would allow the high level trigger to increase the efficiency of event selection, and can give users analysis-quality reconstruction promptly. A flexible framework to automate the fast production of calibration constants is being developed in the Belle II Analysis Software Framework (basf2). Detector experts only need to create two components from C++ base classes in order to use the automation system. The first collects data from Belle II event data files and outputs much smaller files to pass to the second component. This runs the main calibration algorithm to produce calibration constants ready for upload into the conditions database. A Python framework coordinates the input files, the order of processing, and the submission of jobs. Splitting the operation into collection and algorithm processing stages allows the framework to optionally parallelize the collection stage on a batch system.
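
    The split into a collection stage and an algorithm stage can be illustrated with a toy map-reduce sketch (the function names are illustrative only; they are not the basf2 C++ base classes):

```python
# Each "collector" reduces one event file to a small summary; summaries are
# merged, and the "algorithm" turns the merged summary into a calibration
# constant ready for upload to a conditions database.

def collect(event_file):
    # e.g. accumulate the sum and count of some measured offset
    return {"sum": sum(event_file), "n": len(event_file)}

def merge(summaries):
    summaries = list(summaries)
    return {"sum": sum(s["sum"] for s in summaries),
            "n": sum(s["n"] for s in summaries)}

def run_algorithm(summary):
    return summary["sum"] / summary["n"]  # the constant: a mean offset

# The collection stage is embarrassingly parallel across input files,
# which is what makes it a natural candidate for a batch system.
files = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
constant = run_algorithm(merge(map(collect, files)))
```

    Because each collector's output is small and associative under `merge`, the expensive first stage can be fanned out across batch jobs while the final algorithm stage stays serial and cheap.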

  1. Ability of calibration phantom to reduce the interscan variability in electron beam computed tomography.

    PubMed

    Budoff, Matthew J; Mao, Songshou; Lu, Bin; Takasu, Junichiro; Child, Janis; Carson, Sivi; Fisher, Hans

    2002-01-01

    To test the hypothesis that a calibration phantom would reduce interpatient and interscan variability in coronary artery calcium (CAC) studies. We scanned 144 patients twice with or without the calibration phantom, then scanned 93 patients with a single calcific lesion twice and, finally, scanned a cork heart with calcific foci. There were no linear correlations in computed tomography Hounsfield unit (CT HU) values or CT HU interscan variation between the blood pool and phantom plugs at any slice level in the patient groups (p > 0.05). The CT HU interscan variation in the phantom plugs (2.11 HU) was less than that of the blood pool (3.47 HU; p < 0.05) and the CAC lesion (20.39; p < 0.001). Comparing images with and without a calibration phantom, there was a significant decrease in CT HU as well as an increase in noise and peak values in the patient studies and the cork phantom study. The CT HU attenuation variations of the interpatient and interscan blood pool, calibration phantom plug, and cork coronary arteries were not parallel. Therefore, the ability to adjust the CT HU variation of calcific lesions with a calibration phantom is problematic and may worsen the problem.

  2. Status of calibration and data evaluation of AMSR on board ADEOS-II

    NASA Astrophysics Data System (ADS)

    Imaoka, Keiji; Fujimoto, Yasuhiro; Kachi, Misako; Takeshima, Toshiaki; Igarashi, Tamotsu; Kawanishi, Toneo; Shibata, Akira

    2004-02-01

    The Advanced Microwave Scanning Radiometer (AMSR) is the multi-frequency, passive microwave radiometer on board the Advanced Earth Observing Satellite-II (ADEOS-II), currently called Midori-II. The instrument has eight frequency channels with dual polarization (except the 50-GHz band), covering frequencies between 6.925 and 89.0 GHz. Measurement at the 50-GHz channels is a first for this kind of conically scanning microwave radiometer. The basic concept of the instrument, including the hardware configuration and calibration method, is almost the same as that of AMSR for EOS (AMSR-E), the modified version of AMSR. Its swath width of 1,600 km is wider than that of AMSR-E. In parallel with the calibration and data evaluation of the AMSR-E instrument, almost identical calibration activities have been carried out for the AMSR instrument. After finishing the initial checkout phase, the instrument has been continuously obtaining data on a global basis. Time series of radiometer sensitivities and automatic gain control telemetry indicate stable instrument performance. For the radiometric calibration, we are now trying to apply the same procedure that is being used for AMSR-E. This paper provides an overview of the instrument characteristics, instrument status, and preliminary results of the calibration and data evaluation activities.

  3. Support for Online Calibration in the ALICE HLT Framework

    NASA Astrophysics Data System (ADS)

    Krzewicki, Mikolaj; Rohr, David; Zampolli, Chiara; Wiechula, Jens; Gorbunov, Sergey; Chauvin, Alex; Vorobyev, Ivan; Weber, Steffen; Schweda, Kai; Shahoyan, Ruben; Lindenstruth, Volker; ALICE Collaboration

    2017-10-01

    The ALICE detector employs subdetectors sensitive to environmental conditions such as pressure and temperature, e.g. the time projection chamber (TPC). A precise reconstruction of particle trajectories requires precise calibration of these detectors. Performing the calibration in real time in the HLT improves the online reconstruction and potentially renders certain offline calibration steps obsolete, speeding up offline physics analysis. For LHC Run 3, starting in 2020 when data reduction will rely on reconstructed data, online calibration becomes a necessity. In order to run the calibration online, the HLT now supports the processing of tasks that typically run offline. These tasks run massively in parallel on all HLT compute nodes and their output is gathered and merged periodically. The calibration results are both stored offline for later use and fed back into the HLT chain via a feedback loop in order to apply calibration information to the online track reconstruction. Online calibration and the feedback loop are subject to certain time constraints in order to provide up-to-date calibration information, and they must not interfere with ALICE data taking. Our approach of running these tasks in asynchronous processes enables us to separate them from normal data taking in a way that makes them failure-resilient. We performed a first test of online TPC drift time calibration under real conditions during the heavy-ion run in December 2015. We present an analysis and conclusions of this first test, new improvements and developments based on it, as well as our current scheme to commission this for production use.

  4. Clinical assessment of auto-positive end-expiratory pressure by diaphragmatic electrical activity during pressure support and neurally adjusted ventilatory assist.

    PubMed

    Bellani, Giacomo; Coppadoro, Andrea; Patroniti, Nicolò; Turella, Marta; Arrigoni Marocco, Stefano; Grasselli, Giacomo; Mauri, Tommaso; Pesenti, Antonio

    2014-09-01

    Auto-positive end-expiratory pressure (auto-PEEP) may substantially increase the inspiratory effort during assisted mechanical ventilation. The purpose of this study was to assess whether the electrical activity of the diaphragm (EAdi) signal can be reliably used to estimate auto-PEEP in patients undergoing pressure support ventilation and neurally adjusted ventilatory assist (NAVA), and whether NAVA was beneficial in comparison with pressure support ventilation in patients affected by auto-PEEP. In 10 patients with a clinical suspicion of auto-PEEP, the authors simultaneously recorded EAdi, airway pressure, esophageal pressure, and flow during pressure support and NAVA, while external PEEP was increased from 2 to 14 cm H2O. Tracings were analyzed to measure apparent "dynamic" auto-PEEP (the decrease in esophageal pressure needed to generate inspiratory flow), auto-EAdi (the EAdi value at the onset of inspiratory flow), and IDEAdi (the inspiratory delay between the onset of EAdi and the inspiratory flow). The pressure necessary to overcome auto-PEEP, auto-EAdi, and IDEAdi were significantly lower in NAVA than in pressure support ventilation and decreased with increasing external PEEP, although the effect of external PEEP was less pronounced in NAVA. Both auto-EAdi and IDEAdi were tightly correlated with auto-PEEP (r = 0.94 and r = 0.75, respectively). In the presence of auto-PEEP at lower external PEEP levels, NAVA was characterized by a characteristic shape of the airway pressure. In patients with auto-PEEP, NAVA, compared with pressure support ventilation, led to a decrease in the pressure necessary to overcome auto-PEEP, which could be reliably monitored by the electrical activity of the diaphragm before inspiratory flow onset (auto-EAdi).

  5. Application of coordinate transform on ball plate calibration

    NASA Astrophysics Data System (ADS)

    Wei, Hengzheng; Wang, Weinong; Ren, Guoying; Pei, Limei

    2015-02-01

    For the ball plate calibration method using a coordinate measuring machine (CMM) equipped with a laser interferometer, it is essential to adjust the ball plate parallel to the direction of the laser beam, which is very time-consuming. To solve this problem, a method based on the coordinate transformation between the machine system and the object system is presented. With the coordinates of the ball plate's fixed points measured in both the object system and the machine system, the transformation matrix between the coordinate systems is calculated. The laser interferometer measurement error due to the placement of the ball plate can be corrected with this transformation matrix. Experimental results indicate that this method is consistent with the handy adjustment method. It avoids the complexity of ball plate adjustment and can also be applied to ball beam calibration.
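
    A common way to compute such a transformation from matched fixed-point coordinates is the SVD-based least-squares (Kabsch) fit sketched below; the function name and test values are illustrative, not the paper's exact procedure:

```python
import numpy as np

def fit_rigid_transform(obj_pts, mach_pts):
    """Least-squares R, t such that mach ~= obj @ R.T + t (Kabsch via SVD)."""
    co, cm = obj_pts.mean(axis=0), mach_pts.mean(axis=0)
    H = (obj_pts - co).T @ (mach_pts - cm)    # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ co
    return R, t

# Check: recover a known 5-degree rotation about z plus a translation.
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -3.0, 0.5])
obj = np.random.default_rng(0).random((6, 3)) * 100.0  # fixed points, object frame
mach = obj @ R_true.T + t_true                         # same points, machine frame
R_est, t_est = fit_rigid_transform(obj, mach)
```

    Once R and t are known, interferometer measurements taken along the machine axes can be mapped into the ball-plate frame numerically, removing the need to align the plate physically with the beam.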

  6. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase I is complete for the development of a Computational Fluid Dynamics parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  7. Research in Parallel Algorithms and Software for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  8. ATLAS Tile calorimeter calibration and monitoring systems

    NASA Astrophysics Data System (ADS)

    Chomont, Arthur; ATLAS Collaboration

    2017-11-01

    The ATLAS Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of the ATLAS experiment and provides important information for reconstruction of hadrons, jets, hadronic decays of tau leptons and missing transverse energy. This sampling calorimeter uses steel plates as absorber and scintillating tiles as active medium. The light produced by the passage of charged particles is transmitted by wavelength shifting fibres to photomultiplier tubes (PMTs), located on the outside of the calorimeter. The readout is segmented into about 5000 cells (longitudinally and transversally), each of them being read out by two PMTs in parallel. To calibrate and monitor the stability and performance of each part of the readout chain during the data taking, a set of calibration systems is used. The TileCal calibration system comprises cesium radioactive sources, Laser and charge injection elements, and allows for monitoring and equalization of the calorimeter response at each stage of the signal production, from scintillation light to digitization. Based on LHC Run 1 experience, several calibration systems were improved for Run 2. The lessons learned, the modifications, and the current LHC Run 2 performance are discussed.

  9. Myocarditis in auto-immune or auto-inflammatory diseases.

    PubMed

    Comarmond, Cloé; Cacoub, Patrice

    2017-08-01

    Myocarditis is a major cause of heart disease in young patients and a common precursor of heart failure due to dilated cardiomyopathy. Some auto-immune and/or auto-inflammatory diseases may be accompanied by myocarditis, such as sarcoidosis, Behçet's disease, eosinophilic granulomatosis with polyangiitis, myositis, and systemic lupus erythematosus. However, data concerning myocarditis in such auto-immune and/or auto-inflammatory diseases are sparse. New therapeutic strategies should better target the modulation of the immune system, depending on the phase of the disease and the type of underlying auto-immune and/or auto-inflammatory disease. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Poster — Thur Eve — 10: Partial kV CBCT, complete kV CBCT and EPID in breast treatment: a dose comparison study for skin, breasts, heart and lungs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roussin, E; Archambault, L K; Wierzbicki, W

    The advantages of kilovoltage cone beam CT (kV CBCT) imaging over an electronic portal imaging device (EPID), such as accurate 3D anatomy, soft tissue visualization, fast rigid registration and enhanced precision in patient positioning, have led to its increasing use in clinics. The benefits of this imaging technique come at the cost of increasing the dose to healthy surrounding organs. Our center has moved toward the use of daily partial-rotation kV CBCT to restrict the dose to healthy tissues. This study aims to better quantify radiation doses from different image-guidance techniques, such as tangential EPID and complete and partial kV CBCT, for breast treatments. Cross-calibrated ionization chambers and kV-calibrated Gafchromic films were used to measure the dose to the heart, lungs, breasts and skin. It was found that performing partial kV CBCT decreases the heart dose by about 36%, the lung dose by 31%, the contralateral breast dose by 41% and the ipsilateral breast dose by 43% when compared to a full-rotation CBCT. The skin dose measured for a full-rotation CBCT was about 0.8 cGy for the contralateral breast and about 0.3 cGy for the ipsilateral breast. The study is still ongoing, and results on skin doses for partial-rotation kV CBCT as well as for tangential EPID images are upcoming.

  11. Oxygen-rich Mira variables: Near-infrared luminosity calibrations. Populations and period-luminosity relations

    NASA Technical Reports Server (NTRS)

    Alvarez, R.; Mennessier, M.-O.; Barthes, D.; Luri, X.; Mattei, J. A.

    1997-01-01

    Hipparcos astrometric and kinematical data of oxygen-rich Mira variables are used to calibrate absolute near-infrared magnitudes and kinematic parameters. Three distinct classes of stars with different kinematics and scale heights were identified. The two most significant groups present characteristics close to those usually assigned to extended/thick disk-halo populations and old disk populations, respectively, and thus they may differ by their metallicity abundance. Two parallel period-luminosity relations are found, one for each population. The shift between these relations is interpreted as the consequence of the effects of metallicity abundance on the luminosity.

  12. Parallel-In-Time For Moving Meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Manteuffel, T. A.; Southworth, B.

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.
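
    XBraid itself implements multigrid-reduction-in-time, but a much simpler cousin, the parareal iteration, conveys the core idea of the abstract above: a cheap sequential coarse sweep corrects expensive fine solves that can run concurrently across time slices. The sketch below is a hypothetical minimal illustration (not XBraid's algorithm), applied to a single decaying mode as a stand-in for a diffusion problem.

```python
import numpy as np

def euler(y, ta, tb, n):
    """Explicit Euler for dy/dt = -y (a single Fourier mode of diffusion)."""
    dt = (tb - ta) / n
    for _ in range(n):
        y = y + dt * (-y)
    return y

def parareal(y0, t0, t1, n_slices, coarse, fine, n_iter):
    """Parareal: sequential coarse sweeps correct concurrent fine solves."""
    ts = np.linspace(t0, t1, n_slices + 1)
    U = [y0]                                   # initial guess: coarse sweep
    for i in range(n_slices):
        U.append(coarse(U[-1], ts[i], ts[i + 1]))
    for _ in range(n_iter):
        # The fine solves below are independent -- this is the parallel part
        F = [fine(U[i], ts[i], ts[i + 1]) for i in range(n_slices)]
        G_old = [coarse(U[i], ts[i], ts[i + 1]) for i in range(n_slices)]
        U_new = [y0]
        for i in range(n_slices):              # cheap sequential correction
            g_new = coarse(U_new[-1], ts[i], ts[i + 1])
            U_new.append(g_new + F[i] - G_old[i])
        U = U_new
    return U

coarse = lambda y, ta, tb: euler(y, ta, tb, 1)    # one step per slice
fine = lambda y, ta, tb: euler(y, ta, tb, 100)    # many steps per slice
U = parareal(1.0, 0.0, 2.0, n_slices=8, coarse=coarse, fine=fine, n_iter=4)
```

    After n_slices iterations parareal reproduces the sequential fine solution exactly; in practice far fewer iterations suffice, which is where the temporal speedup comes from.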

  13. 40 CFR 63.10010 - What are my monitoring, installation, operation, and maintenance requirements?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... that emissions are controlled with a common control device or series of control devices, are discharged... parallel control devices or multiple series of control devices are discharged to the atmosphere through... quality control activities (including, as applicable, calibration checks and required zero and span...

  14. 40 CFR 63.10010 - What are my monitoring, installation, operation, and maintenance requirements?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... that emissions are controlled with a common control device or series of control devices, are discharged... parallel control devices or multiple series of control devices are discharged to the atmosphere through... quality control activities (including, as applicable, calibration checks and required zero and span...

  15. Holographic Associative Memory Employing Phase Conjugation

    NASA Astrophysics Data System (ADS)

    Soffer, B. H.; Marom, E.; Owechko, Y.; Dunning, G.

    1986-12-01

    The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions.1 Various associative processors have been proposed that use electronic or optical means. Optical schemes,2-7 in particular those based on holographic principles,8 are well suited to associative processing because of their high parallelism and information throughput. Previous workers8 demonstrated that holographically stored images can be recalled by using relatively complicated reference images but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.

  16. Parallels between control PDE's (Partial Differential Equations) and systems of ODE's (Ordinary Differential Equations)

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Villarreal, Ramiro

    1987-01-01

    System theorists understand that the same mathematical objects which determine controllability for nonlinear control systems of ordinary differential equations (ODEs) also determine hypoellipticity for linear partial differential equations (PDEs). Moreover, almost any study of ODE systems begins with linear systems. It is remarkable that Hormander's paper on hypoellipticity of second order linear PDEs starts with equations due to Kolmogorov, which are shown to be analogous to the linear PDEs. Eigenvalue placement by state feedback for a controllable linear system can be paralleled for a Kolmogorov equation if an appropriate type of feedback is introduced. Results concerning transformations of nonlinear systems to linear systems are similar to results for transforming a linear PDE to a Kolmogorov equation.

  17. Lessons from tobacco control for advocates of healthy transport.

    PubMed

    Mindell, J

    2001-06-01

    Many parallels can be drawn between cigarettes and motor vehicles, smoking and car driving, and the tobacco and the auto/oil industries. Those promoting healthy and sustainable transport policies can learn lessons from tobacco control activities over the past 50 years. Evidence-based legislation is more effective than negotiated voluntary agreements between industry and government. Media advocacy is crucial to reframe the issues to allow changes in national policies that facilitate healthier choices. Worthwhile public health policies seen as a threat by multinational companies will be opposed by them but active national and international networks of healthcare professionals, voluntary organizations, charities and their supporters can match the political power of these industries.

  18. Improvement of the repeatability of parallel transmission at 7T using interleaved acquisition in the calibration scan.

    PubMed

    Kameda, Hiroyuki; Kudo, Kohsuke; Matsuda, Tsuyoshi; Harada, Taisuke; Iwadate, Yuji; Uwano, Ikuko; Yamashita, Fumio; Yoshioka, Kunihiro; Sasaki, Makoto; Shirato, Hiroki

    2017-12-04

    Respiration-induced phase shift affects B0/B1+ mapping repeatability in parallel transmission (pTx) calibration for 7T brain MRI, but is improved by breath-holding (BH). However, BH cannot be applied during long scans. To examine whether interleaved acquisition during calibration scanning could improve pTx repeatability and image homogeneity. Prospective. Nine healthy subjects. 7T MRI with a two-channel RF transmission system was used. Calibration scanning for B0/B1+ mapping was performed under sequential acquisition/free-breathing (Seq-FB), Seq-BH, and interleaved acquisition/FB (Int-FB) conditions. The B0 map was calculated with two echo times, and the B1+ map was obtained using the Bloch-Siegert method. Actual flip-angle imaging (AFI) and gradient echo (GRE) imaging were performed using pTx and quadrature-Tx (qTx). All scans were acquired in five sessions. Repeatability was evaluated using intersession standard deviation (SD) or coefficient of variance (CV), and in-plane homogeneity was evaluated using in-plane CV. A paired t-test with Bonferroni correction for multiple comparisons was used. The intersession CV/SDs for the B0/B1+ maps were significantly smaller in Int-FB than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The intersession CVs for the AFI and GRE images were also significantly smaller in Int-FB, Seq-BH, and qTx than in Seq-FB (Bonferroni-corrected P < 0.05 for all). The in-plane CVs for the AFI and GRE images in Seq-FB, Int-FB, and Seq-BH were significantly smaller than in qTx (Bonferroni-corrected P < 0.01 for all). Using interleaved acquisition during calibration scans of pTx for 7T brain MRI improved the repeatability of B0/B1+ mapping, AFI, and GRE images, without BH. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017. © 2017 International Society for Magnetic Resonance in Medicine.

  19. Quantifying the Climate-Scale Accuracy of Satellite Cloud Retrievals

    NASA Astrophysics Data System (ADS)

    Roberts, Y.; Wielicki, B. A.; Sun-Mack, S.; Minnis, P.; Liang, L.; Di Girolamo, L.

    2014-12-01

    Instrument calibration and cloud retrieval algorithms have been developed to minimize retrieval errors on small scales. However, measurement uncertainties and assumptions within retrieval algorithms at the pixel level may alias into decadal-scale trends of cloud properties. We therefore first quantify how instrument calibration changes could alias into cloud property trends. For a perfect observing system the climate trend accuracy is limited only by the natural variability of the climate variable. Alternatively, for an actual observing system, the climate trend accuracy is additionally limited by the measurement uncertainty. Drifts in calibration over time may therefore be disguised as a true climate trend. We impose absolute calibration changes to MODIS spectral reflectance used as input to the CERES Cloud Property Retrieval System (CPRS) and run the modified MODIS reflectance through the CPRS to determine the sensitivity of cloud properties to calibration changes. We then use these changes to determine the impact of instrument calibration changes on trend uncertainty in reflected solar cloud properties. Secondly, we quantify how much cloud retrieval algorithm assumptions alias into cloud optical retrieval trends by starting with the largest of these biases: the plane-parallel assumption in cloud optical thickness (τC) retrievals. First, we collect liquid water cloud fields obtained from Multi-angle Imaging Spectroradiometer (MISR) measurements to construct realistic probability distribution functions (PDFs) of 3D cloud anisotropy (a measure of the degree to which clouds depart from plane-parallel) for different ISCCP cloud types. Next, we will conduct a theoretical study with dynamically simulated cloud fields and a 3D radiative transfer model to determine the relationship between 3D cloud anisotropy and 3D τC bias for each cloud type. Combining these results provides distributions of 3D τC bias by cloud type.
Finally, we will estimate the change in frequency of occurrence of cloud types between two decades and will have the information needed to calculate the total change in 3D optical thickness bias between two decades. If we uncover aliases in this study, the results will motivate the development and rigorous testing of climate specific cloud retrieval algorithms.

  20. Accelerated Fast Spin-Echo Magnetic Resonance Imaging of the Heart Using a Self-Calibrated Split-Echo Approach

    PubMed Central

    Klix, Sabrina; Hezel, Fabian; Fuchs, Katharina; Ruff, Jan; Dieringer, Matthias A.; Niendorf, Thoralf

    2014-01-01

    Purpose Design, validation and application of an accelerated fast spin-echo (FSE) variant that uses a split-echo approach for self-calibrated parallel imaging. Methods For self-calibrated, split-echo FSE (SCSE-FSE), extra displacement gradients were incorporated into FSE to decompose odd and even echo groups which were independently phase encoded to derive coil sensitivity maps, and to generate undersampled data (reduction factor up to R = 3). Reference and undersampled data were acquired simultaneously. SENSE reconstruction was employed. Results The feasibility of SCSE-FSE was demonstrated in phantom studies. Point spread function performance of SCSE-FSE was found to be competitive with traditional FSE variants. The immunity of SCSE-FSE for motion induced mis-registration between reference and undersampled data was shown using a dynamic left ventricular model and cardiac imaging. The applicability of black blood prepared SCSE-FSE for cardiac imaging was demonstrated in healthy volunteers including accelerated multi-slice per breath-hold imaging and accelerated high spatial resolution imaging. Conclusion SCSE-FSE obviates the need of external reference scans for SENSE reconstructed parallel imaging with FSE. SCSE-FSE reduces the risk for mis-registration between reference scans and accelerated acquisitions. SCSE-FSE is feasible for imaging of the heart and of large cardiac vessels but also meets the needs of brain, abdominal and liver imaging. PMID:24728341
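
    The SENSE reconstruction that SCSE-FSE feeds with its self-calibrated coil maps reduces, for Cartesian R-fold undersampling, to a small per-pixel least-squares "unfold". The toy 1D, real-valued sketch below is illustrative only (the real problem is 2D, complex-valued and usually regularized); all names and data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, R = 64, 4, 2                       # pixels, coils, acceleration factor
img = rng.random(N)                      # toy 1D object
sens = rng.random((C, N)) + 0.1          # coil sensitivity maps (from calibration)

# R = 2 undersampling folds pixel i onto pixel i + N//2 in every coil image
folded = np.stack([(sens[c] * img)[:N // 2] + (sens[c] * img)[N // 2:]
                   for c in range(C)])

# SENSE unfold: for each aliased pixel pair, solve the C x R system S @ rho = y
recon = np.zeros(N)
for i in range(N // 2):
    S = sens[:, [i, i + N // 2]]                       # encoding matrix
    rho = np.linalg.lstsq(S, folded[:, i], rcond=None)[0]
    recon[i], recon[i + N // 2] = rho
```

    With noiseless data and full-rank sensitivities the unfold is exact, which is why the quality of the calibration maps dominates SENSE image quality.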

  1. Parallel Element Agglomeration Algebraic Multigrid and Upscaling Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barker, Andrew T.; Benson, Thomas R.; Lee, Chak Shing

    ParELAG is a parallel C++ library for numerical upscaling of finite element discretizations and element-based algebraic multigrid solvers. It provides optimal complexity algorithms to build multilevel hierarchies and solvers that can be used for solving a wide class of partial differential equations (elliptic, hyperbolic, saddle point problems) on general unstructured meshes. Additionally, a novel multilevel solver for saddle point problems with divergence constraint is implemented.

  2. Automatic Vagus Nerve Stimulation Triggered by Ictal Tachycardia: Clinical Outcomes and Device Performance--The U.S. E-37 Trial.

    PubMed

    Fisher, Robert S; Afra, Pegah; Macken, Micheal; Minecan, Daniela N; Bagić, Anto; Benbadis, Selim R; Helmers, Sandra L; Sinha, Saurabh R; Slater, Jeremy; Treiman, David; Begnaud, Jason; Raman, Pradheep; Najimipour, Bita

    2016-02-01

    The Automatic Stimulation Mode (AutoStim) feature of the Model 106 Vagus Nerve Stimulation (VNS) Therapy System stimulates the left vagus nerve on detecting tachycardia. This study evaluates the performance and safety of the AutoStim feature during a 3-5-day Epilepsy Monitoring Unit (EMU) stay and long-term clinical outcomes of the device stimulating in all modes. The E-37 protocol (NCT01846741) was a prospective, unblinded, U.S. multisite study of the AspireSR® in subjects with drug-resistant partial onset seizures and history of ictal tachycardia. VNS Normal and Magnet Mode stimulation were present at all times except during the EMU stay. Outpatient visits at 3, 6, and 12 months tracked seizure frequency, severity, quality of life, and adverse events. Twenty implanted subjects (ages 21-69) experienced 89 seizures in the EMU. 28/38 (73.7%) of complex partial and secondarily generalized seizures exhibited ≥20% increase in heart rate change. 31/89 (34.8%) of seizures were treated by Automatic Stimulation on detection; 19/31 (61.3%) seizures ended during the stimulation with a median time from stimulation onset to seizure end of 35 sec. Mean duty cycle at six months increased from 11% to 16%. At 12 months, quality of life and seizure severity scores improved, and the responder rate was 50%. Common adverse events were dysphonia (n = 7), convulsion (n = 6), and oropharyngeal pain (n = 3). The Model 106 performed as intended in the study population, was well tolerated and associated with clinical improvement from baseline. The study design did not allow determination of which factors were responsible for improvements. © 2015 The Authors. Neuromodulation: Technology at the Neural Interface published by Wiley Periodicals, Inc. on behalf of International Neuromodulation Society.

  3. Calibration of ultra-high frequency (UHF) partial discharge sensors using FDTD method

    NASA Astrophysics Data System (ADS)

    Ishak, Asnor Mazuan; Ishak, Mohd Taufiq

    2018-02-01

    Ultra-high frequency (UHF) partial discharge sensors are widely used for condition monitoring and defect location in the insulation systems of high voltage equipment. Designing sensors for specific applications often requires an iterative process of manufacturing, testing and mechanical modification. This paper demonstrates the use of the finite-difference time-domain (FDTD) technique as a tool to predict the frequency response of UHF PD sensors. Using this approach, the design process can be simplified and parametric studies can be conducted in order to assess the influence of component dimensions and material properties on the sensor response. The modeling approach is validated using a gigahertz transverse electromagnetic (GTEM) calibration system. The use of a transient excitation source is particularly suitable for modeling using FDTD, which is able to simulate the step response output voltage of the sensor, from which the frequency response is obtained using the same post-processing applied to the physical measurement.
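
    The workflow described above (excite with a transient source, record the sensor's time-domain output, then obtain the frequency response by post-processing) can be sketched with a minimal 1D free-space FDTD loop. This is a generic Yee-scheme illustration under simplifying assumptions, not the paper's sensor model, which would be 3D with material properties and absorbing boundaries.

```python
import numpy as np

c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
dz = 1e-3                      # 1 mm cells
dt = dz / (2 * c0)             # Courant-stable time step
nz, nt = 400, 2000
Ex, Hy = np.zeros(nz), np.zeros(nz)
probe = np.zeros(nt)           # stand-in for the sensor output voltage

for n in range(nt):
    Hy[:-1] += (Ex[1:] - Ex[:-1]) * dt / (dz * mu0)   # update H from curl E
    Ex[1:] += (Hy[1:] - Hy[:-1]) * dt / (dz * eps0)   # update E from curl H
    Ex[50] += np.exp(-((n - 30) / 10.0) ** 2)         # transient Gaussian source
    probe[n] = Ex[300]         # record the time-domain response at the "sensor"

# Post-processing: frequency response from the transient record
freq = np.fft.rfftfreq(nt, dt)
response = np.abs(np.fft.rfft(probe))
```

    The same FFT post-processing applied to a measured step response yields a directly comparable spectrum, which is the basis of the validation against the GTEM cell.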

  4. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    PubMed

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
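
    BCC-PLS embeds the baseline constraint inside the PLS weight selection itself; the conventional sequential alternative it improves on, explicit low-order polynomial detrending followed by PLS regression, can be sketched as below. The NIPALS PLS1 routine and the synthetic spectra are illustrative stand-ins, not the authors' algorithm or data.

```python
import numpy as np

def pls1(X, y, n_comp):
    """Minimal NIPALS PLS1; returns regression vector B and intercept b0."""
    Xm, ym = X.mean(0), y.mean()
    Xc, yc = X - Xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)      # weight vector
        t = Xc @ w                     # score
        tt = t @ t
        p = Xc.T @ t / tt              # loading
        W.append(w); P.append(p); q.append(yc @ t / tt)
        Xc = Xc - np.outer(t, p)       # deflate
        yc = yc - q[-1] * t
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, ym - Xm @ B

# Synthetic spectra: one analyte band plus random linear baselines
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
conc = rng.uniform(0.5, 2.0, 50)
band = np.exp(-((x - 0.5) ** 2) / 0.002)
X = conc[:, None] * band + rng.uniform(-1, 1, (50, 2)) @ np.vstack([np.ones(200), x])

# Step 1: remove a fitted first-order polynomial baseline from each spectrum
A = np.vstack([np.ones(200), x]).T
X_corr = X - (A @ np.linalg.lstsq(A, X.T, rcond=None)[0]).T

# Step 2: conventional PLS on the corrected spectra
B, b0 = pls1(X_corr, conc, n_comp=1)
pred = X_corr @ B + b0
```

    The weakness of this two-step approach, the uncertainty of the separate baseline fit, is precisely what motivates folding the correction into the PLS weights as BCC-PLS does.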

  5. Model Calibration in Watershed Hydrology

    NASA Technical Reports Server (NTRS)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.
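
    The calibration loop described above (adjust parameters until simulated behavior matches the observed response over a historical period) can be sketched on a toy one-parameter "watershed": a linear reservoir. Everything here is illustrative, not one of the chapter's models; a brute-force grid search stands in for the manual and automatic strategies the chapter reviews.

```python
import numpy as np

def linear_reservoir(P, k, S0=0.0):
    """Toy watershed model: storage S fills with rainfall P and drains as Q = k*S."""
    S, Q = S0, np.empty_like(P)
    for t, p in enumerate(P):
        S = S + p
        Q[t] = k * S
        S = S - Q[t]
    return Q

rng = np.random.default_rng(3)
P = rng.exponential(2.0, size=200)          # synthetic rainfall forcing
Q_obs = linear_reservoir(P, k=0.35)         # "observed" streamflow response

# Automatic calibration: pick the k that minimizes the sum of squared errors
ks = np.linspace(0.01, 0.99, 981)
errs = [np.sum((linear_reservoir(P, k) - Q_obs) ** 2) for k in ks]
k_hat = ks[int(np.argmin(errs))]
```

    Real calibration replaces the grid search with the single- and multi-objective optimizers the chapter discusses, and the recession constant k with dozens of parameters, which is where dimensionality reduction, regularization and parallel computing become necessary.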

  6. Determining insulation condition of 110kV instrument transformers. Linking PD measurement results from both gas chromatography and electrical method

    NASA Astrophysics Data System (ADS)

    Dan, C.; Morar, R.

    2017-05-01

    Working methods for on-site testing of insulation: gas chromatography (using the TFGA-P200 chromatograph) and electrical measurement of partial discharge levels using the MPD600 digital detection, recording, analysis and partial discharge acquisition system. First, between 2000-2015, chromatographic analyses of the electrical insulating media were performed on 102 current transformers (110kV) and 38 voltage transformers (110kV), all in operation in 110/20kV substations. Then, electrical measurements of partial discharge inside instrument transformers were made on site (power substations), starting in 2009 and collecting data over a 7-year period until 2015, according to the provisions of standard EN 61869-1:2007 "Instrument transformers. General requirements", applying its type A partial discharge test procedure and using the rated 110kV distribution grid voltage as the test voltage. Given the results of the two parallel measurements, containing the amount of the gas specific to this type of failure (H2) and the quantitative partial discharge level, a clear dependence was expected between the quantity of partial discharges and the type and amount of gases dissolved in the oil of equipment affected by this type of defect. For the population of instrument transformers subjected to the two parallel measurements, the dependency between QIEC (apparent charge) and H2 (the amount of hydrogen dissolved in their insulating medium) represents a finite assemblage situated between two limits developed on an empirical basis.

  7. Landsat-7 ETM+ On-Orbit Reflective-Band Radiometric Stability and Absolute Calibration

    NASA Technical Reports Server (NTRS)

    Markham, Brian L.; Thome, Kurtis J.; Barsi, Julia A.; Kaita, Ed; Helder, Dennis L.; Barker, John L.

    2003-01-01

    The Landsat-7 spacecraft carries the Enhanced Thematic Mapper Plus (ETM+) instrument. This instrument images the Earth land surface in eight parts of the electromagnetic spectrum, termed spectral bands. These spectral images are used to monitor changes in the land surface, so a consistent relationship, i.e., calibration, between the image data and the Earth surface brightness is required. The ETM+ has several on-board calibration devices that are used to monitor this calibration. The best on-board calibration source employs a flat white painted reference panel and has indicated changes of between 0.5% and 2% per year in the ETM+ response, depending on the spectral band. However, most of these changes are believed to be caused by changes in the reference panel, as opposed to changes in the instrument's sensitivity. This belief is based partially on on-orbit calibrations using instrumented ground sites and observations of "invariant sites", hyper-arid sites of the Sahara and Arabia. Changes determined from these data sets are 0.1%-0.6% per year. Tests and comparisons to other sensors also indicate that the uncertainty of the calibration is at the 5% level.

  8. Balancing exploration, uncertainty and computational demands in many objective reservoir optimization

    NASA Astrophysics Data System (ADS)

    Zatarain Salazar, Jazmin; Reed, Patrick M.; Quinn, Julianne D.; Giuliani, Matteo; Castelletti, Andrea

    2017-11-01

    Reservoir operations are central to our ability to manage river basin systems serving conflicting multi-sectoral demands under increasingly uncertain futures. These challenges motivate the need for new solution strategies capable of effectively and efficiently discovering the multi-sectoral tradeoffs that are inherent to alternative reservoir operation policies. Evolutionary many-objective direct policy search (EMODPS) is gaining importance in this context due to its capability of addressing multiple objectives and its flexibility in incorporating multiple sources of uncertainties. This simulation-optimization framework has high potential for addressing the complexities of water resources management, and it can benefit from current advances in parallel computing and meta-heuristics. This study contributes a diagnostic assessment of state-of-the-art parallel strategies for the auto-adaptive Borg Multi Objective Evolutionary Algorithm (MOEA) to support EMODPS. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple sectoral demands from hydropower production, urban water supply, recreation and environmental flows need to be balanced. Using EMODPS with different parallel configurations of the Borg MOEA, we optimize operating policies over different size ensembles of synthetic streamflows and evaporation rates. As we increase the ensemble size, we increase the statistical fidelity of our objective function evaluations at the cost of higher computational demands. This study demonstrates how to overcome the mathematical and computational barriers associated with capturing uncertainties in stochastic multiobjective reservoir control optimization, where parallel algorithmic search serves to reduce the wall-clock time in discovering high quality representations of key operational tradeoffs. Our results show that emerging self-adaptive parallelization schemes exploiting cooperative search populations are crucial. 
Such strategies provide a promising new set of tools for effectively balancing exploration, uncertainty, and computational demands when using EMODPS.

  9. 1001 Ways to run AutoDock Vina for virtual screening

    NASA Astrophysics Data System (ADS)

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D.

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.

  10. 1001 Ways to run AutoDock Vina for virtual screening.

    PubMed

    Jaghoori, Mohammad Mahdi; Bleijlevens, Boris; Olabarriaga, Silvia D

    2016-03-01

    Large-scale computing technologies have enabled high-throughput virtual screening involving thousands to millions of drug candidates. It is not trivial, however, for biochemical scientists to evaluate the technical alternatives and their implications for running such large experiments. Besides experience with the molecular docking tool itself, the scientist needs to learn how to run it on high-performance computing (HPC) infrastructures, and understand the impact of the choices made. Here, we review such considerations for a specific tool, AutoDock Vina, and use experimental data to illustrate the following points: (1) an additional level of parallelization increases virtual screening throughput on a multi-core machine; (2) capturing of the random seed is not enough (though necessary) for reproducibility on heterogeneous distributed computing systems; (3) the overall time spent on the screening of a ligand library can be improved by analysis of factors affecting execution time per ligand, including number of active torsions, heavy atoms and exhaustiveness. We also illustrate differences among four common HPC infrastructures: grid, Hadoop, small cluster and multi-core (virtual machine on the cloud). Our analysis shows that these platforms are suitable for screening experiments of different sizes. These considerations can guide scientists when choosing the best computing platform and set-up for their future large virtual screening experiments.
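
    Point (1) above, adding a level of parallelization across ligands on a multi-core machine, and point (2), capturing the random seed for reproducibility, can be sketched as follows. `dock_one` is a hypothetical stand-in for a single Vina invocation; since real docking runs are subprocess-bound, a thread pool is enough to keep all cores busy.

```python
from concurrent.futures import ThreadPoolExecutor
import random

def dock_one(job):
    """Hypothetical stand-in for one Vina run, e.g.
    subprocess.run(["vina", "--ligand", ligand, "--seed", str(seed), ...])."""
    ligand, seed = job
    rng = random.Random(f"{ligand}:{seed}")           # mock, deterministic result
    return ligand, round(-rng.uniform(4.0, 12.0), 2)  # mock affinity (kcal/mol)

ligands = [f"ligand_{i:04d}.pdbqt" for i in range(16)]  # made-up filenames
jobs = [(lig, 42) for lig in ligands]                   # explicit seed per job
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(dock_one, jobs))
```

    On heterogeneous distributed systems the abstract's caveat applies: a fixed seed alone does not guarantee identical results across differing binaries and hardware, so per-job seeds should be recorded alongside the platform description.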

  11. Advanced autostereoscopic display for G-7 pilot project

    NASA Astrophysics Data System (ADS)

    Hattori, Tomohiko; Ishigaki, Takeo; Shimamoto, Kazuhiro; Sawaki, Akiko; Ishiguchi, Tsuneo; Kobayashi, Hiromi

    1999-05-01

    An advanced auto-stereoscopic display is described that permits the observation of a stereo pair by several persons simultaneously without the use of special glasses or any kind of head tracking devices worn by the viewers. The system is composed of a right eye system, a left eye system and a sophisticated head tracking system. In each eye system, a transparent-type color liquid crystal imaging plate is used with a special back light unit. The back light unit consists of a monochrome 2D display and a large format convex lens. The unit directs the light to each viewer's correct eye only. The right eye perspective system is combined with the left eye perspective system by a half mirror in order to function as a time-parallel stereoscopic system. The viewer's IR image is taken through and focused by the large format convex lens and fed back to the back light as a modulated binary half-face image. The auto-stereoscopic display employs the TTL method for accurate head tracking. The system was operated as a stereoscopic TV phone between the Duke University Department of Telemedicine and the Nagoya University School of Medicine Department of Radiology using a high-speed digital GIBN line. The applications are also described in this paper.

  12. Polycyclic aromatic hydrocarbons and the unidentified infrared emission bands - Auto exhaust along the Milky Way

    NASA Technical Reports Server (NTRS)

    Allamandola, L. J.; Tielens, A. G. G. M.; Barker, J. R.

    1985-01-01

    The unidentified infrared emission features (UIR bands) are attributed to a collection of partially hydrogenated, positively charged polycyclic aromatic hydrocarbons (PAHs). This assignment is based on a spectroscopic analysis of the UIR bands. Comparison of the observed interstellar 6.2 and 7.7-micron bands with the laboratory measured Raman spectrum of a collection of carbon-based particulates (auto exhaust) shows a very good agreement, supporting this identification. The infrared emission is due to relaxation from highly vibrationally and electronically excited states. The excitation is probably caused by UV photon absorption. The infrared fluorescence of one particular, highly vibrationally excited PAH (chrysene) is modeled. In this analysis the species is treated as a molecule rather than bulk material and the non-thermodynamic equilibrium nature of the emission is fully taken into account. From a comparison of the observed ratio of the 3.3 to 11.3-micron UIR bands with the model calculations, the average number of carbon atoms per molecule is estimated to be about 20. The abundance of interstellar PAHs is calculated to be about 2 × 10^-7 with respect to hydrogen.

  13. Continuous glucose monitoring in subcutaneous tissue using factory-calibrated sensors: a pilot study.

    PubMed

    Hoss, Udo; Jeddi, Iman; Schulz, Mark; Budiman, Erwin; Bhogal, Claire; McGarraugh, Geoffrey

    2010-08-01

    Commercial continuous subcutaneous glucose monitors require in vivo calibration using capillary blood glucose tests. Feasibility of factory calibration, i.e., sensor batch characterization in vitro with no further need for in vivo calibration, requires a predictable and stable in vivo sensor sensitivity and limited inter- and intra-subject variation of the ratio of interstitial to blood glucose concentration. Twelve volunteers wore two FreeStyle Navigator (Abbott Diabetes Care, Alameda, CA) continuous glucose monitoring systems for 5 days in parallel for two consecutive sensor wears (four sensors per subject, 48 sensors total). Sensors from a prototype sensor lot with a low variability in glucose sensitivity were used for the study. Median sensor sensitivity values based on capillary blood glucose were calculated per sensor and compared for inter- and intra-subject variation. Mean absolute relative difference (MARD) calculation and error grid analysis were performed using a single calibration factor for all sensors to simulate factory calibration and compared to standard fingerstick calibration. Sensor sensitivity variation in vitro was 4.6%, which increased to 8.3% in vivo (P < 0.0001). Analysis of variance revealed no significant inter-subject differences in sensor sensitivity (P = 0.134). Applying a single universal calibration factor retrospectively to all sensors resulted in a MARD of 10.4% and 88.1% of values in Clarke Error Grid Zone A, compared to a MARD of 10.9% and 86% of values in Error Grid Zone A for fingerstick calibration. Factory calibration of sensors for continuous subcutaneous glucose monitoring is feasible with similar accuracy to standard fingerstick calibration. Additional data are required to confirm this result in subjects with diabetes.
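
    The accuracy metric used above, mean absolute relative difference (MARD), together with the idea of a single batch-wide "factory" calibration factor, can be sketched as follows. The signal values and the factor are made up for illustration; only the formula is standard.

```python
import numpy as np

def mard(sensor, reference):
    """Mean absolute relative difference, in percent."""
    sensor, reference = np.asarray(sensor, float), np.asarray(reference, float)
    return 100.0 * np.mean(np.abs(sensor - reference) / reference)

raw = np.array([1.10, 2.05, 2.90, 4.20])       # raw sensor signal (nA), made up
ref = np.array([100.0, 180.0, 260.0, 380.0])   # reference glucose (mg/dL), made up
factor = 90.0        # one calibration factor for the whole batch (mg/dL per nA)
err = mard(factor * raw, ref)
```

    Factory calibration is feasible exactly when one such factor, characterized in vitro per sensor lot, keeps this error comparable to that of per-subject fingerstick calibration.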

  14. Experimental cross-correlation nitrogen Q-branch CARS thermometry in a spark ignition engine

    NASA Astrophysics Data System (ADS)

    Lockett, R. D.; Ball, D.; Robertson, G. N.

    2013-07-01

    A purely experimental technique was employed to derive temperatures from nitrogen Q-branch Coherent Anti-Stokes Raman Scattering (CARS) spectra, obtained in a high pressure, high temperature environment (spark ignition Otto engine). This was in order to obviate any errors arising from deficiencies in the spectral scaling laws which are commonly used to represent nitrogen Q-branch CARS spectra at high pressure. The spectra obtained in the engine were compared with spectra obtained in a calibrated high pressure, high temperature cell, using direct cross-correlation in place of the minimisation of sums of squares of residuals. The technique is demonstrated through the measurement of air temperature as a function of crankshaft angle inside the cylinder of a motored single-cylinder Ricardo E6 research engine, followed by the measurement of fuel-air mixture temperatures obtained during the compression stroke in a knocking Ricardo E6 engine. A standard CARS programme (SANDIA's CARSFIT) was employed to calibrate the altered non-resonant background contribution to the CARS spectra that was caused by the alteration to the mole fraction of nitrogen in the unburned fuel-air mixture. The compression temperature profiles were extrapolated in order to predict the auto-ignition temperatures.

  15. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    PubMed

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others; selection of those wavelengths that contribute useful information; and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects, and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies of the interactions among these three aspects, which could shed light on what role each aspect plays in the calibration and how to combine the various methods of each aspect to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC), and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); and four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR).
The comparative study indicates that, in general, pre-processing of spectral data can play a significant role in the calibration while wavelength selection plays a marginal role and the combination of certain pre-processing, wavelength selection, and nonlinear regression methods can achieve superior performance over traditional linear regression-based calibration.
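As a rough sketch of the PLS calibration step common to the methods compared above, the following implements PLS1 via the NIPALS algorithm on synthetic low-rank "spectra". This is an illustrative stand-in only: the paper's pre-processing methods (OSC, EMSC, OPLEC), wavelength selection, and benchmark data sets are not reproduced here.

```python
import numpy as np

# Minimal PLS1 calibration via NIPALS: returns the regression vector
# and intercept mapping raw X to predicted y.
def pls1_nipals(X, y, n_components):
    X = np.asarray(X, float)
    y = np.asarray(y, float).ravel()
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)       # weight vector
        t = Xc @ w                   # scores
        tt = t @ t
        p = Xc.T @ t / tt            # X loadings
        q = (yc @ t) / tt            # y loading
        Xc -= np.outer(t, p)         # deflate X
        yc -= q * t                  # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    coef = W @ np.linalg.solve(P.T @ W, Q)   # regression vector
    return coef, y_mean - x_mean @ coef

# Synthetic rank-2 "spectra" whose response lies in the score space,
# so a 2-component model fits (nearly) exactly. Not real NIR data.
rng = np.random.default_rng(0)
scores = rng.normal(size=(40, 2))
X = scores @ rng.normal(size=(2, 20))
y = scores @ np.array([1.5, -0.8])
coef, b0 = pls1_nipals(X, y, n_components=2)
print(np.allclose(X @ coef + b0, y))  # → True
```

In practice the number of components would be chosen by cross-validation, and a pre-processing step such as those compared in the paper would be applied to X first.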

  16. Comparisons of fully automated syphilis tests with conventional VDRL and FTA-ABS tests.

    PubMed

    Choi, Seung Jun; Park, Yongjung; Lee, Eun Young; Kim, Sinyoung; Kim, Hyon-Suk

    2013-06-01

    Serologic tests are widely used for the diagnosis of syphilis. However, conventional methods require well-trained technicians to produce reliable results. We compared automated nontreponemal and treponemal tests with conventional methods. The HiSens Auto Rapid Plasma Reagin (AutoRPR) and Treponema Pallidum particle agglutination (AutoTPPA) tests, which utilize latex turbidimetric immunoassay, were assessed. A total of 504 sera were assayed by AutoRPR, AutoTPPA, conventional VDRL and FTA-ABS. Among them, 250 samples were also tested by conventional TPPA. The concordance rate between the results of VDRL and AutoRPR was 67.5%, and 164 discrepant cases were all VDRL reactive but AutoRPR negative. In the 164 cases, 133 showed FTA-ABS reactivity. Medical records of 106 among the 133 cases were reviewed, and 82 among 106 specimens were found to be collected from patients already treated for syphilis. The concordance rate between the results of AutoTPPA and FTA-ABS was 97.8%. The results of conventional TPPA and AutoTPPA for 250 samples were concordant in 241 cases (96.4%). AutoRPR showed higher specificity than that of VDRL, while VDRL demonstrated higher sensitivity than that of AutoRPR regardless of whether the patients had been already treated for syphilis or not. Both FTA-ABS and AutoTPPA showed high sensitivities and specificities greater than 98.0%. Automated RPR and TPPA tests could be alternatives to conventional syphilis tests, and AutoRPR would be particularly suitable in treatment monitoring, since results by AutoRPR in cases after treatment became negative more rapidly than by VDRL. Copyright © 2013. Published by Elsevier Inc.

  17. Quality assessment of gasoline using comprehensive two-dimensional gas chromatography combined with unfolded partial least squares: A reliable approach for the detection of gasoline adulteration.

    PubMed

    Parastar, Hadi; Mostafapour, Sara; Azimi, Gholamhasan

    2016-01-01

Comprehensive two-dimensional gas chromatography with flame ionization detection, combined with unfolded partial least squares, is proposed as a simple, fast and reliable method to assess the quality of gasoline and to detect its potential adulterants. The data for the calibration set are first baseline corrected using a two-dimensional asymmetric least squares algorithm. The number of significant partial least squares components used to build the model is determined by the minimum root-mean-square error of leave-one-out cross-validation; four components were selected. In this regard, blends of gasoline with kerosene, white spirit and paint thinner as frequently used adulterants are used to make calibration samples. Appropriate statistical parameters, a regression coefficient of 0.996-0.998, a root-mean-square error of prediction of 0.005-0.010 and a relative error of prediction of 1.54-3.82% for the calibration set, show the reliability of the developed method. In addition, the developed method is externally validated with three samples in the validation set (with a relative error of prediction below 10.0%). Finally, to test the applicability of the proposed strategy for the analysis of real samples, five real gasoline samples collected from gas stations are used; the gasoline proportions were in the range of 70-85%. Also, the relative standard deviations were below 8.5% for the different samples in the prediction set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Criterion-Referenced Test Items for Auto Body.

    ERIC Educational Resources Information Center

    Tannehill, Dana, Ed.

    This test item bank on auto body repair contains criterion-referenced test questions based upon competencies found in the Missouri Auto Body Competency Profile. Some test items are keyed for multiple competencies. The tests cover the following 26 competency areas in the auto body curriculum: auto body careers; measuring and mixing; tools and…

  19. Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions

    NASA Astrophysics Data System (ADS)

    Buddala, Santhoshi Snigdha

Since the industrial revolution, fossil fuels like petroleum, coal, oil, natural gas and other non-renewable energy sources have been used as the primary energy source. The consumption of fossil fuels releases various harmful gases into the atmosphere as byproducts, which are hazardous in nature and tend to deplete the protective atmospheric layers and affect the overall environmental balance. Fossil fuels are also finite resources of energy, and their rapid depletion has prompted the need to investigate alternate sources of energy, called renewable energy. One such promising source of renewable energy is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. While retaining the structural simplicity of the parallel architecture, a theoretical small signal model of the solar cell is proposed and used to analyze the variations in the module parameters when subjected to partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter and evaluating the performance of the architecture in terms of efficiency by comparing it with the traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.

  20. A Plug-and-Play Human-Centered Virtual TEDS Architecture for the Web of Things.

    PubMed

    Hernández-Rojas, Dixys L; Fernández-Caramés, Tiago M; Fraga-Lamas, Paula; Escudero, Carlos J

    2018-06-27

    This article presents a Virtual Transducer Electronic Data Sheet (VTEDS)-based framework for the development of intelligent sensor nodes with plug-and-play capabilities in order to contribute to the evolution of the Internet of Things (IoT) toward the Web of Things (WoT). It makes use of new lightweight protocols that allow sensors to self-describe, auto-calibrate, and auto-register. Such protocols enable the development of novel IoT solutions while guaranteeing low latency, low power consumption, and the required Quality of Service (QoS). Thanks to the developed human-centered tools, it is possible to configure and modify dynamically IoT device firmware, managing the active transducers and their communication protocols in an easy and intuitive way, without requiring any prior programming knowledge. In order to evaluate the performance of the system, it was tested when using Bluetooth Low Energy (BLE) and Ethernet-based smart sensors in different scenarios. Specifically, user experience was quantified empirically (i.e., how fast the system shows collected data to a user was measured). The obtained results show that the proposed VTED architecture is very fast, with some smart sensors (located in Europe) able to self-register and self-configure in a remote cloud (in South America) in less than 3 s and to display data to remote users in less than 2 s.

  1. An engineered design of a diffractive mask for high precision astrometry [Modeling a diffractive mask that calibrates optical distortions]

    DOE PAGES

    Dennison, Kaitlin; Ammons, S. Mark; Garrel, Vincent; ...

    2016-06-26

AutoCAD, Zemax Optic Studio 15, and Interactive Data Language (IDL) with the Proper Library are used to computationally model and test a diffractive mask (DiM) suitable for use in the Gemini Multi-Conjugate Adaptive Optics System (GeMS) on the Gemini South Telescope. Systematic errors in telescope imagery are produced when light travels through the adaptive optics system of the telescope. The DiM is a transparent, flat optic with a pattern of minuscule dots lithographically applied to it. It is added ahead of the adaptive optics system in the telescope in order to produce diffraction spots that encode the systematic errors in the optics after it. Once these errors are encoded, they can be corrected for. The DiM will allow for more accurate measurements in astrometry and thus improve exoplanet detection. Furthermore, the mechanics and physical attributes of the DiM are modeled in AutoCAD. Zemax models the ray propagation of point sources of light through the telescope. IDL and Proper simulate the wavefront and image results of the telescope. Aberrations are added to the Zemax and IDL models to test how the diffraction spots from the DiM change in the final images. Based on the Zemax and IDL results, the diffraction spots are able to encode the systematic aberrations.

  2. Partial pressure analysis in space testing

    NASA Technical Reports Server (NTRS)

    Tilford, Charles R.

    1994-01-01

    For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or Partial Pressure Analyzers (PPA's) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibrations procedures are described that can detect and/or correct other errors.

  3. Homology modeling and metabolism prediction of human carboxylesterase-2 using docking analyses by GriDock: a parallelized tool based on AutoDock 4.0

    NASA Astrophysics Data System (ADS)

    Vistoli, Giulio; Pedretti, Alessandro; Mazzolari, Angelica; Testa, Bernard

    2010-09-01

Metabolic problems lead to numerous failures during clinical trials, and much effort is now devoted to developing in silico models predicting metabolic stability and metabolites. Such models are well known for cytochromes P450 and some transferases, whereas less has been done to predict the activity of human hydrolases. The present study was undertaken to develop a computational approach able to predict the hydrolysis of novel esters by human carboxylesterase hCES2. The study involved first a homology modeling of the hCES2 protein based on the model of hCES1, since the two proteins share a high degree of homology (≅73%). A set of 40 known substrates of hCES2 was taken from the literature; the ligands were docked in both their neutral and ionized forms using GriDock, a parallel tool based on the AutoDock 4.0 engine which can perform efficient and easy virtual screening analyses of large molecular databases exploiting multi-core architectures. Useful statistical models (e.g., r² = 0.91 for substrates in their unprotonated state) were calculated by correlating experimental pKm values with the distance between the carbon atom of the substrate's ester group and the hydroxy function of Ser228. Additional parameters in the equations accounted for hydrophobic and electrostatic interactions between substrates and contributing residues. The negatively charged residues in the hCES2 cavity explained the preference of the enzyme for neutral substrates and, more generally, suggested that ligands which interact too strongly by ionic bonds (e.g., ACE inhibitors) cannot be good CES2 substrates because they are trapped in the cavity in unproductive modes and behave as inhibitors. The effects of protonation on substrate recognition and the contrasting behavior of substrates and products were finally investigated by MD simulations of some CES2 complexes.

  4. The Effect of Ethanol Addition to Gasoline on Low- and Intermediate-Temperature Heat Release under Boosted Conditions in Kinetically Controlled Engines

    NASA Astrophysics Data System (ADS)

    Vuilleumier, David Malcolm

The detailed study of chemical kinetics in engines has become required to further advance engine efficiency while simultaneously lowering engine emissions. This push for higher efficiency engines is not caused by a lack of oil, but by efforts to reduce anthropogenic carbon dioxide emissions, which cause global warming. To operate more efficiently while reducing traditional pollutant emissions, modern internal combustion piston engines are forced to operate in regimes in which combustion is no longer fully transport limited, and instead is at least partially governed by the chemical kinetics of combusting mixtures. Kinetically controlled combustion allows the operation of piston engines at high compression ratios, with partially premixed dilute charges; these operating conditions simultaneously provide high thermodynamic efficiency and low pollutant formation. The investigations presented in this dissertation study the effect of ethanol addition on the low-temperature chemistry of gasoline-type fuels in engines. These investigations are carried out both in a simplified, fundamental engine experiment, named Homogeneous Charge Compression Ignition, and in more applied engine systems, named Gasoline Compression Ignition engines and Partial Fuel Stratification engines. These experimental investigations, and the accompanying modeling work, show that ethanol is an effective scavenger of radicals at low temperatures, and this inhibits the low-temperature pathways of gasoline oxidation. Further, the investigations measure the sensitivity of gasoline auto-ignition to system pressure at conditions that are relevant to modern engines. It is shown that at pressures above 40 bar and temperatures below 850 Kelvin, gasoline begins to exhibit Low-Temperature Heat Release. However, the addition of 20% ethanol raises the pressure requirement to 60 bar, while the temperature requirement remains unchanged. 
These findings have major implications for a range of modern engines. Low-Temperature Heat Release significantly enhances the auto-ignition process, which limits the conditions under which advanced combustion strategies may operate. As these advanced combustion strategies are required to meet emissions and fuel-economy regulations, the findings of this dissertation may benefit and be incorporated into future engine design toolkits, such as detailed chemical kinetic mechanisms.

  5. Fast determination of total ginsenosides content in ginseng powder by near infrared reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Chen, Hua-cai; Chen, Xing-dan; Lu, Yong-jun; Cao, Zhi-qiang

    2006-01-01

Near infrared (NIR) reflectance spectroscopy was used to develop a fast determination method for total ginsenosides in ginseng (Panax ginseng) powder. The spectra were analyzed with the multiplicative signal correction (MSC) correlation method. The spectral regions best correlated with the total ginsenosides content were 1660 nm~1880 nm and 2230 nm~2380 nm. NIR calibration models for ginsenosides were built with multiple linear regression (MLR), principal component regression (PCR) and partial least squares (PLS) regression, respectively. The results showed that the calibration model built with PLS combined with MSC over the optimal spectral region was the best one. The correlation coefficient and the root mean square error of calibration (RMSEC) of the best model were 0.98 and 0.15%, respectively. The optimal spectral region for calibration was 1204 nm~2014 nm. The results suggested that using NIR to rapidly determine the total ginsenosides content in ginseng powder is feasible.

  6. Accuracy and Calibration of High Explosive Thermodynamic Equations of State

    NASA Astrophysics Data System (ADS)

    Baker, Ernest L.; Capellos, Christos; Stiel, Leonard I.; Pincay, Jack

    2010-10-01

    The Jones-Wilkins-Lee-Baker (JWLB) equation of state (EOS) was developed to more accurately describe overdriven detonation while maintaining an accurate description of high explosive products expansion work output. The increased mathematical complexity of the JWLB high explosive equations of state provides increased accuracy for practical problems of interest. Increased numbers of parameters are often justified based on improved physics descriptions but can also mean increased calibration complexity. A generalized extent of aluminum reaction Jones-Wilkins-Lee (JWL)-based EOS was developed in order to more accurately describe the observed behavior of aluminized explosives detonation products expansion. A calibration method was developed to describe the unreacted, partially reacted, and completely reacted explosive using nonlinear optimization. A reasonable calibration of a generalized extent of aluminum reaction JWLB EOS as a function of aluminum reaction fraction has not yet been achieved due to the increased mathematical complexity of the JWLB form.

  7. Multi-objective Calibration of DHSVM Based on Hydrologic Key Elements in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Liu, L.; Xu, Y. P.

    2017-12-01

In physically based distributed hydrological models, a large number of parameters, representing the spatial heterogeneity of the watershed and the various processes of the hydrologic cycle, are involved. Because the Distributed Hydrology Soil Vegetation Model (DHSVM) lacks a calibration module, this study developed a multi-objective calibration module using the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (ɛ-NSGAII), based on parallel computing on a Linux cluster (ɛP-DHSVM). Two hydrologic key elements (runoff and evapotranspiration) are used as objectives in the multi-objective calibration of the model. MODIS evapotranspiration obtained by SEBAL is adopted to fill the gap left by the lack of evapotranspiration observations. The results show that good runoff simulation performance in single-objective calibration does not ensure good simulation performance for other hydrologic key elements. The self-developed ɛP-DHSVM model makes multi-objective calibration more efficient and effective; running speed increases by a factor of 20-30. In addition, runoff and evapotranspiration are simulated well simultaneously by ɛP-DHSVM, with superior efficiency coefficients (NS of 0.74 for runoff and 0.79 for evapotranspiration; PBIAS of -10.5% and -8.6%, respectively).
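The Nash-Sutcliffe (NS) efficiency quoted for runoff and evapotranspiration above has a simple closed form, sketched below; the observed and simulated series are hypothetical, not Jinhua River Basin data:

```python
import numpy as np

# Nash-Sutcliffe efficiency: 1 minus the ratio of model error variance
# to the variance of the observations. 1.0 is a perfect fit.
def nse(observed, simulated):
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical daily runoff series (mm), not data from the study.
obs = np.array([3.1, 4.0, 6.5, 9.2, 7.8, 5.0, 4.2])
sim = np.array([3.0, 4.3, 6.1, 9.0, 8.2, 5.4, 4.0])
print(round(nse(obs, sim), 3))  # → 0.978
```

In a multi-objective calibration such as the one described, one NSE value per hydrologic element (runoff, evapotranspiration) would be maximized jointly by the genetic algorithm.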

  8. Influence of local calibration on the quality of online wet weather discharge monitoring: feedback from five international case studies.

    PubMed

    Caradot, Nicolas; Sonnenberg, Hauke; Rouault, Pascale; Gruber, Günter; Hofer, Thomas; Torres, Andres; Pesci, Maria; Bertrand-Krajewski, Jean-Luc

    2015-01-01

This paper reports on experiences gathered from five online monitoring campaigns in the sewer systems of Berlin (Germany), Graz (Austria), Lyon (France) and Bogota (Colombia) using ultraviolet-visible (UV-VIS) spectrometers and turbidimeters. Online probes are useful for the measurement of highly dynamic processes, e.g. combined sewer overflows (CSO), storm events, and river impacts. The influence of local calibration on the quality of online chemical oxygen demand (COD) measurements of wet weather discharges has been assessed. Results underline the need to establish local calibration functions for both UV-VIS spectrometers and turbidimeters. It is suggested that practitioners locally calibrate their probes using at least 15-20 samples. However, these samples should be collected over several events and cover most of the natural variability of the measured concentration. For this reason, the use of automatic peristaltic samplers in parallel to online monitoring is recommended, with short representative sampling campaigns during wet weather discharges. Using reliable calibration functions, COD loads of CSO and storm events can be estimated with a relative uncertainty of approximately 20%. If no local calibration is established, concentrations and loads are estimated with a high error rate, questioning the reliability and meaning of the online measurement. Similar results have been obtained for total suspended solids measurements.
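In its simplest form, the recommended local calibration reduces to a least-squares fit of laboratory COD analyses against the probe's raw equivalent readings over 15-20 paired samples. A minimal sketch under that assumption follows; all numbers are hypothetical placeholders, not data from the five campaigns:

```python
import numpy as np

# Hypothetical paired observations: probe equivalent reading vs.
# laboratory COD (mg/L). 15 samples, spread over the working range.
probe = np.array([30, 55, 80, 120, 160, 210, 260, 300, 350, 400,
                  450, 500, 560, 620, 700], float)
lab_cod = 1.18 * probe + 12 + np.array([5, -8, 3, -4, 9, -6, 2, -10,
                                        7, -3, 4, -9, 6, -2, 5], float)

# Local calibration function: fit lab COD = a * probe + b.
a, b = np.polyfit(probe, lab_cod, 1)

def calibrated_cod(raw):
    """Apply the locally fitted calibration function to a raw reading."""
    return a * raw + b

print(round(float(calibrated_cod(250.0)), 1))
```

As the paper stresses, the value of such a fit depends on the paired samples spanning several events and most of the natural concentration variability.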

  9. Pockels-effect cell for gas-flow simulation

    NASA Astrophysics Data System (ADS)

    Weimer, D.

    1982-05-01

    A Pockels effect cell using a 75 cu cm DK*P crystal was developed and used as a gas flow simulator. Index of refraction gradients were produced in the cell by the fringing fields of parallel plate electrodes. Calibration curves for the device were obtained for index of refraction gradients in excess of .00025 m.

  10. Cryogenic liquid-level detector

    NASA Technical Reports Server (NTRS)

    Hamlet, J.

    1978-01-01

Detector is designed for quick assembly, fast response, and good performance under vibratory stress. Its basic parallel-plate open configuration can be adapted to any length and allows its calibration scale factor to be predicted accurately. When compared with discrete level sensors, the continuous reading sensor was found to be superior if there is sloshing, boiling, or other disturbance.

  11. Scheduling for Locality in Shared-Memory Multiprocessors

    DTIC Science & Technology

    1993-05-01

Doctoral dissertation, submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy. The thesis examines the effect of architecture on parallel program performance, explains the implications of this trend on popular parallel programming models, and proposes system software for decomposition and scheduling algorithms. Subject terms: shared-memory multiprocessors; architecture trends; loop scheduling.

  12. Power/Performance Trade-offs of Small Batched LU Based Solvers on GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Villa, Oreste; Fatica, Massimiliano; Gawande, Nitin A.

In this paper we propose and analyze a set of batched linear solvers for small matrices on Graphics Processing Units (GPUs), evaluating the various alternatives depending on the size of the systems to solve. We discuss three different solutions that operate with different levels of parallelization and GPU features. The first, exploiting the CUBLAS library, manages matrices of size up to 32x32 and employs Warp-level (one matrix, one Warp) parallelism and shared memory. The second works at Thread-block-level parallelism (one matrix, one Thread-block), still exploiting shared memory but managing matrices up to 76x76. The third is Thread-level parallel (one matrix, one thread) and can reach sizes up to 128x128, but it does not exploit shared memory and relies only on the high memory bandwidth of the GPU. The first and second solutions support only partial pivoting; the third easily supports partial and full pivoting, making it attractive for problems that require greater numerical stability. We analyze the trade-offs in terms of performance and power consumption as a function of the size of the linear systems that are simultaneously solved. We execute the three implementations on a Tesla M2090 (Fermi) and on a Tesla K20 (Kepler).
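The partial pivoting discussed above is the standard stabilization step inside each small LU factorization. A plain CPU sketch of one batched solve follows; the Warp/Thread-block/thread mappings and CUBLAS calls from the paper are GPU-specific and are not reproduced here.

```python
import numpy as np

# Solve A x = b by Gaussian elimination with partial pivoting,
# the numerical scheme each GPU solver in the paper applies per matrix.
def lu_solve_pp(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))  # partial pivot row
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                 # multiplier
            A[i, k + 1:] -= m * A[k, k + 1:]
            b[i] -= m * b[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):                # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# A "batch" of independent small systems, solved sequentially here;
# on a GPU each would map to a Warp, a Thread-block, or a single thread.
rng = np.random.default_rng(1)
batch_A = rng.normal(size=(8, 16, 16)) + 16 * np.eye(16)
batch_b = rng.normal(size=(8, 16))
batch_x = np.array([lu_solve_pp(A, b) for A, b in zip(batch_A, batch_b)])
print(np.allclose(np.einsum('bij,bj->bi', batch_A, batch_x), batch_b))  # → True
```

Full pivoting, which the paper's third solver also supports, would additionally search columns for the pivot and permute the solution vector accordingly.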

  13. Histogram analysis for smartphone-based rapid hematocrit determination

    PubMed Central

    Jalal, Uddin M.; Kim, Sang C.; Shim, Joon S.

    2017-01-01

A novel and rapid analysis technique using histograms has been proposed for the colorimetric quantification of blood hematocrit. A smartphone-based “Histogram” app for the detection of hematocrit has been developed, integrating the smartphone's embedded camera with a microfluidic chip via a custom-made optical platform. The developed histogram analysis is effective for automatic detection of the sample channel, including auto-calibration, and can analyze single-channel as well as multi-channel images. Furthermore, the analysis method is advantageous for the quantification of blood hematocrit under both equal and varying optical conditions. The rapid determination of blood hematocrit carries a wealth of information regarding physiological disorders, and the use of such reproducible, cost-effective, and standard techniques may effectively help with the diagnosis and prevention of a number of human diseases. PMID:28717569
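One way to read the histogram-based quantification described above is as locating the dominant intensity mode of the channel region in the image. The sketch below illustrates that idea on a synthetic image; it is an assumption-laden stand-in, not the app's actual algorithm:

```python
import numpy as np

# Locate the dominant mode of the pixel-intensity histogram and return
# the mean intensity of the pixels in that mode. In a colorimetric
# assay this mean would be mapped to the analyte value via calibration.
def dominant_mode_mean(pixels, bins=32):
    counts, edges = np.histogram(pixels, bins=bins, range=(0, 255))
    k = int(np.argmax(counts))
    lo, hi = edges[k], edges[k + 1]
    sel = pixels[(pixels >= lo) & (pixels <= hi)]
    return float(sel.mean())

# Synthetic image: a bright background and a darker sample channel
# clustered around intensity 90. Entirely made-up test data.
rng = np.random.default_rng(2)
background = rng.uniform(200, 255, size=5000)
channel = rng.normal(90, 4, size=20000).clip(0, 255)
image = np.concatenate([background, channel])
print(abs(dominant_mode_mean(image) - 90) < 8)  # → True
```

The real app additionally detects the channel geometry and auto-calibrates against reference regions, which this sketch does not attempt.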

  14. Associative Memory In A Phase Conjugate Resonator Cavity Utilizing A Hologram

    NASA Astrophysics Data System (ADS)

    Owechko, Y.; Marom, E.; Soffer, B. H.; Dunning, G.

    1987-01-01

    The principle of information retrieval by association has been suggested as a basis for parallel computing and as the process by which human memory functions.1 Various associative processors have been proposed that use electronic or optical means. Optical schemes,2-7 in particular, those based on holographic principles,3,6,7 are well suited to associative processing because of their high parallelism and information throughput. Previous workers8 demonstrated that holographically stored images can be recalled by using relatively complicated reference images but did not utilize nonlinear feedback to reduce the large cross talk that results when multiple objects are stored and a partial or distorted input is used for retrieval. These earlier approaches were limited in their ability to reconstruct the output object faithfully from a partial input.

  15. High Performance Input/Output for Parallel Computer Systems

    NASA Technical Reports Server (NTRS)

    Ligon, W. B.

    1996-01-01

The goal of our project is to study the I/O characteristics of parallel applications used in Earth science data processing systems such as Regional Data Centers (RDCs) or EOSDIS. Our approach is to study the runtime behavior of typical programs and the effect of key parameters of the I/O subsystem, both under simulation and with direct experimentation on parallel systems. Our three-year activity has focused on two items: developing a test bed that facilitates experimentation with parallel I/O, and studying representative programs from the Earth science data processing application domain. The Parallel Virtual File System (PVFS) has been developed for use on a number of platforms including the Tiger Parallel Architecture Workbench (TPAW) simulator, the Intel Paragon, a cluster of DEC Alpha workstations, and the Beowulf system (at CESDIS). PVFS provides considerable flexibility in configuring I/O in a UNIX-like environment. Access to key performance parameters facilitates experimentation. We have studied several key applications from levels 1, 2, and 3 of the typical RDC processing scenario, including instrument calibration and navigation, image classification, and numerical modeling codes. We have also considered large-scale scientific database codes used to organize image data.

  16. Arabidopsis TNL-WRKY domain receptor RRS1 contributes to temperature-conditioned RPS4 auto-immunity

    PubMed Central

    Heidrich, Katharina; Tsuda, Kenichi; Blanvillain-Baufumé, Servane; Wirthmueller, Lennart; Bautor, Jaqueline; Parker, Jane E.

    2013-01-01

    In plant effector-triggered immunity (ETI), intracellular nucleotide binding-leucine rich repeat (NLR) receptors are activated by specific pathogen effectors. The Arabidopsis TIR (Toll-Interleukin-1 receptor domain)-NLR (denoted TNL) gene pair, RPS4 and RRS1, confers resistance to Pseudomonas syringae pv tomato (Pst) strain DC3000 expressing the Type III-secreted effector, AvrRps4. Nuclear accumulation of AvrRps4, RPS4, and the TNL resistance regulator EDS1 is necessary for ETI. RRS1 possesses a C-terminal “WRKY” transcription factor DNA binding domain suggesting that important RPS4/RRS1 recognition and/or resistance signaling events occur at the nuclear chromatin. In Arabidopsis accession Ws-0, the RPS4Ws/RRS1Ws allelic pair governs resistance to Pst/AvrRps4 accompanied by host programmed cell death (pcd). In accession Col-0, RPS4Col/RRS1Col effectively limits Pst/AvrRps4 growth without pcd. Constitutive expression of HA-StrepII tagged RPS4Col (in a 35S:RPS4-HS line) confers temperature-conditioned EDS1-dependent auto-immunity. Here we show that a high (28°C, non-permissive) to moderate (19°C, permissive) temperature shift of 35S:RPS4-HS plants can be used to follow defense-related transcriptional dynamics without a pathogen effector trigger. By comparing responses of 35S:RPS4-HS with 35S:RPS4-HS rrs1-11 and 35S:RPS4-HS eds1-2 mutants, we establish that RPS4Col auto-immunity depends entirely on EDS1 and partially on RRS1Col. Examination of gene expression microarray data over 24 h after temperature shift reveals a mainly quantitative RRS1Col contribution to up- or down-regulation of a small subset of RPS4Col-reprogrammed, EDS1-dependent genes. We find significant over-representation of WRKY transcription factor binding W-box cis-elements within the promoters of these genes. Our data show that RRS1Col contributes to temperature-conditioned RPS4Col auto-immunity and are consistent with activated RPS4Col engaging RRS1Col for resistance signaling. PMID:24146667

  17. Human lipodystrophies: genetic and acquired diseases of adipose tissue

    PubMed Central

    Capeau, Jacqueline; Magré, Jocelyne; Caron-Debarle, Martine; Lagathu, Claire; Antoine, Bénédicte; Béréziat, Véronique; Lascols, Olivier; Bastard, Jean-Philippe; Vigouroux, Corinne

    2010-01-01

    Human lipodystrophies represent a heterogeneous group of diseases characterized by generalized or partial fat loss, with fat hypertrophy in other depots when partial. Insulin resistance, dyslipidemia and diabetes are generally associated, leading to early complications. Genetic forms are uncommon: recessive generalized congenital lipodystrophies result in most cases from mutations in the genes encoding seipin or 1-acyl-glycerol-3-phosphate-acyltransferase 2 (AGPAT2). Dominant partial familial lipodystrophies result from mutations in genes encoding the nuclear protein lamin A/C or the adipose transcription factor PPARγ. Importantly, lamin A/C mutations are also responsible for metabolic laminopathies, resembling the metabolic syndrome, and for progeria, a syndrome of premature aging. A number of lipodystrophic patients remain undiagnosed at the genetic level. Acquired lipodystrophy can be generalized, resembling congenital forms, or partial, as in the Barraquer-Simons syndrome, with loss of fat in the upper part of the body contrasting with accumulation in the lower part. Although their aetiology is generally unknown, these forms can be associated with signs of auto-immunity. The most common forms of lipodystrophy are iatrogenic. In human immunodeficiency virus-infected patients, some first-generation antiretroviral drugs were strongly associated with peripheral lipoatrophy and metabolic alterations. Partial lipodystrophy also characterizes patients with endogenous or exogenous long-term corticoid excess. Treatment of fat redistribution can sometimes benefit from plastic surgery. Lipid and glucose alterations are difficult to control, leading to early occurrence of diabetic, cardiovascular and hepatic complications. PMID:20551664

  18. Photometry with FORS

    NASA Astrophysics Data System (ADS)

    Freudling, W.; Møller, P.; Patat, F.; Moehler, S.; Romaniello, M.; Jehin, E.; O'Brien, K.; Izzo, C.; Pompei, E.

    Photometric calibration observations are routinely carried out with all ESO imaging cameras on every clear night. The nightly zeropoints derived from these observations are accurate to about 10%. Recently, we have started the FORS Absolute Photometry Project (FAP) to investigate if and how percent-level absolute photometric accuracy can be achieved with FORS1, and how such photometric calibration can be offered to observers. We found that there are significant differences between the sky-flats and the true photometric response of the instrument, which partially depend on the rotator angle. A second-order correction to the sky-flat significantly improves the relative photometry within the field. We demonstrate the feasibility of percent-level photometry and describe the calibrations necessary to achieve that level of accuracy.

  19. A heterogeneous computing accelerated SCE-UA global optimization method using OpenMP, OpenCL, CUDA, and OpenACC.

    PubMed

    Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Liang, Ke; Hong, Yang

    2017-10-01

    The shuffled complex evolution optimization developed at the University of Arizona (SCE-UA) has been successfully applied to various kinds of scientific and engineering optimization applications, such as hydrological model parameter calibration, for many years. The algorithm possesses good global optimality, convergence stability and robustness. However, benchmark and real-world applications reveal the poor computational efficiency of the SCE-UA. This research aims at the parallelization and acceleration of the SCE-UA method based on powerful heterogeneous computing technology. The parallel SCE-UA was implemented on an Intel Xeon multi-core CPU (using OpenMP and OpenCL) and an NVIDIA Tesla many-core GPU (using OpenCL, CUDA, and OpenACC). The serial and parallel SCE-UA were tested on the Griewank benchmark function. Comparison results indicate that the parallel SCE-UA significantly improves computational efficiency compared to the original serial version. The OpenCL implementation obtains the best overall acceleration results, albeit with the most complex source code. The parallel SCE-UA has bright prospects for application in real-world problems.
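    The parallelization opportunity the abstract describes can be sketched in a few lines: the SCE-UA inner loop is dominated by many independent objective-function evaluations, which is exactly what maps onto OpenMP threads or GPU blocks. The sketch below (illustrative Python, not the authors' implementation) evaluates the Griewank benchmark mentioned in the abstract across a whole population in parallel; the population size and thread count are arbitrary.

```python
# Illustrative sketch of population-parallel objective evaluation,
# the step SCE-UA spends most of its time on.
import math
import random
from concurrent.futures import ThreadPoolExecutor

def griewank(x):
    """Griewank benchmark: global minimum f(0,...,0) = 0."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return 1.0 + s - p

def evaluate_population(pop, workers=4):
    """Fitness of each individual; evaluations are independent,
    so they can be dispatched to threads (or GPU blocks) freely."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(griewank, pop))

random.seed(0)
population = [[random.uniform(-600, 600) for _ in range(10)]
              for _ in range(32)]
fitness = evaluate_population(population)
best = min(fitness)
```

    In a native OpenMP or CUDA port, `evaluate_population` is the loop that becomes a `parallel for` or a kernel launch; the shuffling and complex-evolution steps stay serial.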

  20. A proposed standard method for polarimetric calibration and calibration verification

    NASA Astrophysics Data System (ADS)

    Persons, Christopher M.; Jones, Michael W.; Farlow, Craig A.; Morell, L. Denise; Gulley, Michael G.; Spradley, Kevin D.

    2007-09-01

    Accurate calibration of polarimetric sensors is critical to reducing and analyzing phenomenology data, producing uniform polarimetric imagery for deployable sensors, and ensuring predictable performance of polarimetric algorithms. It is desirable to develop a standard calibration method, including verification reporting, in order to increase credibility with customers and foster communication and understanding within the polarimetric community. This paper seeks to facilitate discussions within the community on arriving at such standards. Both the calibration and verification methods presented here are performed easily with common polarimetric equipment, and are applicable to visible and infrared systems with either partial Stokes or full Stokes sensitivity. The calibration procedure has been used on infrared and visible polarimetric imagers over a six year period, and resulting imagery has been presented previously at conferences and workshops. The proposed calibration method involves the familiar calculation of the polarimetric data reduction matrix by measuring the polarimeter's response to a set of input Stokes vectors. With this method, however, linear combinations of Stokes vectors are used to generate highly accurate input states. This allows the direct measurement of all system effects, in contrast with fitting modeled calibration parameters to measured data. This direct measurement of the data reduction matrix allows higher order effects that are difficult to model to be discovered and corrected for in calibration. This paper begins with a detailed tutorial on the proposed calibration and verification reporting methods. Example results are then presented for a LWIR rotating half-wave retarder polarimeter.
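    The data-reduction idea in this abstract (measure the polarimeter's response to known input Stokes vectors, then invert) can be illustrated with the textbook partial-Stokes case. This is a generic sketch, not the paper's procedure: a linear analyzer at angle θ measures m(θ) = ½(S0 + S1·cos2θ + S2·sin2θ), and four readings at 0°/45°/90°/135° invert this directly.

```python
# Textbook partial-Stokes data reduction sketch (not the paper's method).
import math

def analyzer_reading(stokes, theta_deg):
    """Forward model: intensity behind a linear analyzer at theta_deg."""
    s0, s1, s2 = stokes
    t = math.radians(2 * theta_deg)
    return 0.5 * (s0 + s1 * math.cos(t) + s2 * math.sin(t))

def reduce_stokes(m0, m45, m90, m135):
    """Recover (S0, S1, S2) from the four analyzer readings."""
    return (m0 + m90, m0 - m90, m45 - m135)

# Round trip: a 30-degree linearly polarized beam of unit intensity.
true = (1.0, math.cos(math.radians(60)), math.sin(math.radians(60)))
readings = [analyzer_reading(true, a) for a in (0, 45, 90, 135)]
recovered = reduce_stokes(*readings)
```

    The paper's contribution is essentially a more careful version of this inversion: generating highly accurate input states so the full data reduction matrix, including higher-order effects, is measured rather than modeled.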

  1. The first international workshop on "Advancement of POLarimetric Observations: calibration and improved aerosol retrievals": APOLO-2017

    NASA Astrophysics Data System (ADS)

    Dubovik, Oleg; Li, Zhengqiang; Mishchenko, Michael I.

    2018-06-01

    The international workshop on "Advancement of POLarimetric Observations: calibration and improved aerosol retrievals-2017" (APOLO-2017) took place in Hefei, China on 24-27 October 2017. This was the inaugural meeting of a planned series of workshops on satellite polarimetry aimed at addressing the rapidly growing interest of the scientific community in polarimetric remote-sensing observations from space. The workshop was held at the Anhui Institute of Optics and Fine Mechanics, Hefei, widely known for 15 years of experience in the development of research polarimetry sensors and for building several orbital polarimeters in parallel.

  2. Spectral X-Ray Diffraction using a 6 Megapixel Photon Counting Array Detector.

    PubMed

    Muir, Ryan D; Pogranichniy, Nicholas R; Muir, J Lewis; Sullivan, Shane Z; Battaile, Kevin P; Mulichak, Anne M; Toth, Scott J; Keefe, Lisa J; Simpson, Garth J

    2015-03-12

    Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to perform separation of dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.
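    The separation step can be sketched with a deliberately simplified stand-in for the paper's statistical model: assume per-pixel calibration yields a 2×2 response matrix G mapping the two monochromatic photon counts to the two measured signals, so separation is a per-pixel 2×2 solve. The matrix values below are hypothetical.

```python
# Hedged sketch of dual-energy separation as a per-pixel 2x2 solve
# (a simplification of the paper's calibrated statistical model).
def separate_pixel(signals, G):
    (a, b), (c, d) = G
    sA, sB = signals
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("degenerate pixel response")
    n1 = (sA * d - sB * b) / det        # counts in energy band 1
    n2 = (a * sB - c * sA) / det        # counts in energy band 2
    return n1, n2

def separate_image(signal_pairs, gains):
    """Apply the per-pixel solve across the whole detector array."""
    return [separate_pixel(s, G) for s, G in zip(signal_pairs, gains)]

# One pixel: hypothetical calibrated response, true counts (100, 40).
G = [[1.0, 0.7], [0.3, 1.0]]
truth = (100.0, 40.0)
signals = (G[0][0] * truth[0] + G[0][1] * truth[1],
           G[1][0] * truth[0] + G[1][1] * truth[1])
n1, n2 = separate_pixel(signals, G)
```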

  3. Spectral x-ray diffraction using a 6 megapixel photon counting array detector

    NASA Astrophysics Data System (ADS)

    Muir, Ryan D.; Pogranichniy, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.

    2015-03-01

    Pixel-array detectors allow single-photon counting to be performed on a massively parallel scale, with several million counting circuits and detectors in the array. Because the number of photoelectrons produced at the detector surface depends on the photon energy, these detectors offer the possibility of spectral imaging. In this work, a statistical model of the instrument response is used to calibrate the detector on a per-pixel basis. In turn, the calibrated sensor was used to perform separation of dual-energy diffraction measurements into two monochromatic images. Target applications include multi-wavelength diffraction to aid in protein structure determination and X-ray diffraction imaging.

  4. Total Pancreatectomy and Islet Auto-Transplantation in Children for Chronic Pancreatitis. Indication, Surgical Techniques, Post Operative Management and Long-Term Outcomes

    PubMed Central

    Chinnakotla, Srinath; Bellin, Melena D.; Schwarzenberg, Sarah J.; Radosevich, David M.; Cook, Marie; Dunn, Ty B.; Beilman, Gregory J.; Freeman, Martin L.; Balamurugan, A.N.; Wilhelm, Josh; Bland, Barbara; Jimenez-Vega, Jose M; Hering, Bernhard J.; Vickers, Selwyn M.; Pruett, Timothy L.; Sutherland, David E.R.

    2014-01-01

    Objective: Describe the surgical technique, complications and long-term outcomes of total pancreatectomy and islet auto-transplantation (TP-IAT) in a large series of pediatric patients. Summary Background Data: Surgical management of childhood pancreatitis is not clear; partial resection or drainage procedures often provide transient pain relief, but long-term recurrence is common due to the diffuse involvement of the pancreas. Total pancreatectomy (TP) removes the source of the pain, while islet auto-transplantation (IAT) potentially can prevent or minimize TP-related diabetes. Methods: Retrospective review of 75 children undergoing TP-IAT for chronic pancreatitis who had failed medical, endoscopic or surgical treatment between 1989 and 2012. Results: Pancreatitis pain and the severity of pain statistically improved in 90% of patients after TP-IAT (p < 0.001). The relief from narcotics was sustained. Of the 75 patients undergoing TP-IAT, 31 (41.3%) achieved insulin independence. Younger age (p=0.032), lack of prior Puestow procedure (p=0.018), lower body surface area (p=0.048), higher IEQ per kg body weight (p=0.001) and higher total IEQ (100,000) (p=0.004) were associated with insulin independence. By multivariate analysis, 3 factors were associated with insulin independence after TP-IAT: (1) male gender, (2) lower body surface area and (3) higher total IEQ per kilogram body weight. Total IEQ (100,000) was the single factor most strongly associated with insulin independence (OR = 2.62; p < 0.001). Conclusions: TP-IAT provides sustained pain relief and improved quality of life. β-cell function is dependent on islet yield. TP-IAT is an effective therapy for children with painful pancreatitis who fail medical and/or endoscopic management. PMID:24509206

  5. Robust scoring functions for protein-ligand interactions with quantum chemical charge models.

    PubMed

    Wang, Jui-Chih; Lin, Jung-Hsin; Chen, Chung-Ming; Perryman, Alex L; Olson, Arthur J

    2011-10-24

    Ordinary least-squares (OLS) regression has been used widely for constructing scoring functions for protein-ligand interactions. However, OLS is very sensitive to the existence of outliers, and models constructed using it are easily affected by the outliers or even by the choice of the data set. On the other hand, determination of atomic charges is regarded as of central importance, because the electrostatic interaction is known to be a key contributing factor for biomolecular association. In the development of the AutoDock4 scoring function, only OLS was conducted, and the simple Gasteiger method was adopted. It is therefore of considerable interest to see whether more rigorous charge models could improve the statistical performance of the AutoDock4 scoring function. In this study, we have employed two well-established quantum chemical approaches, namely the restrained electrostatic potential (RESP) and the Austin-model 1-bond charge correction (AM1-BCC) methods, to obtain atomic partial charges, and we have compared how different charge models affect the performance of AutoDock4 scoring functions. In combination with robust regression analysis and outlier exclusion, our new protein-ligand free energy regression model with AM1-BCC charges for ligands and Amber99SB charges for proteins achieves the lowest root-mean-squared error of 1.637 kcal/mol for the training set of 147 complexes and 2.176 kcal/mol for the external test set of 1427 complexes. The assessment of binding pose prediction with the 100 external decoy sets indicates a very high success rate of 87% with the criterion of predicted root-mean-squared deviation of less than 2 Å. The success rates and statistical performance of our robust scoring functions are only weakly class-dependent (hydrophobic, hydrophilic, or mixed).
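    The advantage of robust regression over OLS that this abstract relies on can be demonstrated in miniature. The sketch below (illustrative only, not the authors' fitting code) fits a one-parameter linear model by iteratively reweighted least squares with Huber weights, which down-weights the gross outlier that drags the OLS estimate; the data are synthetic.

```python
# Robust (Huber/IRLS) vs ordinary least-squares fit of y = slope * x.
def ols_slope(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def huber_slope(xs, ys, delta=1.0, iters=50):
    slope = ols_slope(xs, ys)              # start from the OLS estimate
    for _ in range(iters):
        w = []
        for x, y in zip(xs, ys):
            r = abs(y - slope * x)
            w.append(1.0 if r <= delta else delta / r)   # Huber weight
        slope = (sum(wi * x * y for wi, x, y in zip(w, xs, ys)) /
                 sum(wi * x * x for wi, x in zip(w, xs)))
    return slope

xs = [1, 2, 3, 4, 5, 6]
ys = [2.0, 4.1, 5.9, 8.0, 10.1, 40.0]      # last point is a gross outlier
slope_ols = ols_slope(xs, ys)              # pulled far above 2 by the outlier
slope_robust = huber_slope(xs, ys)         # stays near the true slope ~2
```

    The same principle, applied to a multivariate free-energy model with outlier exclusion, is what makes the reported scoring functions "robust."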

  6. Automotive Mechanics as Applied to Auto Body; Auto Body Repair and Refinishing 3: 9037.02.

    ERIC Educational Resources Information Center

    Dade County Public Schools, Miami, FL.

    This is a course in which the student will receive the general information, technical knowledge, basic skills, attitudes, and values required for job entry level as an auto body repair helper. Course content includes general and specific goals, orientation, instruction in service tools and bench skills, and auto mechanics as applied to auto body.…

  7. The performance of two automatic servo-ventilation devices in the treatment of central sleep apnea.

    PubMed

    Javaheri, Shahrokh; Goetting, Mark G; Khayat, Rami; Wylie, Paul E; Goodwin, James L; Parthasarathy, Sairam

    2011-12-01

    This study was conducted to evaluate the therapeutic performance of a new auto Servo Ventilation device (Philips Respironics autoSV Advanced) for the treatment of complex central sleep apnea (CompSA). The features of autoSV Advanced include an automatic expiratory pressure (EPAP) adjustment, an advanced algorithm for distinguishing open versus obstructed airway apnea, a modified auto backup rate which is proportional to subject's baseline breathing rate, and a variable inspiratory support. Our primary aim was to compare the performance of the advanced servo-ventilator (BiPAP autoSV Advanced) with conventional servo-ventilator (BiPAP autoSV) in treating central sleep apnea (CSA). A prospective, multicenter, randomized, controlled trial. Five sleep laboratories in the United States. Thirty-seven participants were included. All subjects had full night polysomnography (PSG) followed by a second night continuous positive airway pressure (CPAP) titration. All had a central apnea index ≥ 5 per hour of sleep on CPAP. Subjects were randomly assigned to 2 full-night PSGs while treated with either the previously marketed autoSV, or the new autoSV Advanced device. The 2 randomized sleep studies were blindly scored centrally. Across the 4 nights (PSG, CPAP, autoSV, and autoSV Advanced), the mean ± 1 SD apnea hypopnea indices were 53 ± 23, 35 ± 20, 10 ± 10, and 6 ± 6, respectively; indices for CSA were 16 ± 19, 19 ± 18, 3 ± 4, and 0.6 ± 1. AutoSV Advanced was more effective than other modes in correcting sleep related breathing disorders. BiPAP autoSV Advanced was more effective than conventional BiPAP autoSV in the treatment of sleep disordered breathing in patients with CSA.

  8. MEASUREMENT OF THE INTENSITY OF THE PROTON BEAM OF THE HARVARD UNIVERSITY SYNCHROCYCLOTRON FOR ENERGY-SPECTRAL MEASUREMENTS OF NUCLEAR SECONDARIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santoro, R.T.; Peelle, R.W.

    1964-03-01

    Two thin helium-filled parallel-plate ionization chambers were designed for use in continuously monitoring the 160-MeV proton beam of the Harvard University Synchrocyclotron over an intensity range from 10^5 to 10^10 protons/sec. The ionization chambers were calibrated by two independent methods. In four calibrations the charge collected in the ionization chambers was compared with that deposited in a Faraday cup which followed the ionization chambers in the proton beam. In a second method, a calibration was made by individually counting beam protons with a pair of thin scintillation detectors. The ionization chamber response was found to be flat within 2% over a five-decade range of beam intensity. Comparison of the Faraday-cup calibrations with that from proton counting shows agreement to within 5%, which is considered satisfactory. The experimental results were also in agreement, within estimated errors, with the ionization chamber response calculated using an accepted value of the average energy loss per ion pair for helium. A slow shift in the calibrations with time is ascribed to a gradual contamination of the helium of the chambers by air leakage.
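    The "calculated chamber response" mentioned at the end of the abstract is a back-of-envelope computation: collected charge = (energy deposited in the gas / W) × e, where W is the mean energy per ion pair. The sketch below uses W ≈ 41 eV for helium as an approximate literature value, and the energy deposited per proton is a made-up illustrative number, not the experiment's.

```python
# Back-of-envelope ionization-chamber response from the W-value.
E_CHARGE = 1.602e-19          # coulombs per elementary charge

def charge_per_proton(energy_deposited_eV, w_eV=41.0):
    """Charge collected per beam proton crossing the chamber gas."""
    ion_pairs = energy_deposited_eV / w_eV
    return ion_pairs * E_CHARGE

def beam_current(protons_per_sec, energy_deposited_eV, w_eV=41.0):
    """Ionization current (amps) for a given beam intensity."""
    return protons_per_sec * charge_per_proton(energy_deposited_eV, w_eV)

# e.g. 50 keV deposited per proton in a thin chamber (hypothetical)
q = charge_per_proton(50e3)
i_low = beam_current(1e5, 50e3)     # bottom of the monitored range
i_high = beam_current(1e10, 50e3)   # top of the monitored range
```

    Because the response is linear in beam intensity, the predicted current spans exactly the five decades of the monitored range, matching the flat-response finding.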

  9. Calibration methodology application of kerma area product meters in situ: Preliminary results

    NASA Astrophysics Data System (ADS)

    Costa, N. A.; Potiens, M. P. A.

    2014-11-01

    The kerma-area product (KAP) is a useful quantity for establishing reference levels in conventional X-ray examinations. It can be obtained from measurements carried out with a KAP meter, a plane-parallel transmission ionization chamber mounted on the X-ray system. A KAP meter can be calibrated in the laboratory or in situ, where it is used. It is important to use a reference KAP meter in order to obtain reliable patient dose values. The Patient Dose Calibrator (PDC) is a new instrument from Radcal that measures KAP. It was manufactured following the IEC 60580 recommendations, an international standard for KAP meters. This study aimed to calibrate KAP meters using the PDC in situ. Previous studies and the quality control program of the PDC have shown that it performs well in characterization tests of ionization-chamber dosimeters and that it has low energy dependence. Three types of KAP meters were calibrated on four different diagnostic X-ray units. The voltages used in the first two calibrations were 50 kV, 70 kV, 100 kV and 120 kV; the other two used 50 kV, 70 kV and 90 kV, owing to equipment limitations. The field sizes used for the calibration were 10 cm, 20 cm and 30 cm. The calibrations were done in three different cities in order to analyze the reproducibility of the PDC. The results gave the calibration coefficient for each KAP meter and showed that the PDC can be used as a reference instrument to calibrate clinical KAP meters.
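    The in-situ comparison described above reduces to a simple computation: the calibration coefficient of a clinical KAP meter is the reference (PDC) reading divided by the meter's own reading, determined per beam quality (tube voltage) and field size. The sketch below uses hypothetical readings, not the study's data.

```python
# Sketch of per-beam-quality KAP calibration coefficients
# (illustrative numbers only).
def calibration_coefficient(reference_kap, meter_kap):
    return reference_kap / meter_kap

def calibrate_over_qualities(readings):
    """readings: {(kV, field_cm): (reference_kap, meter_kap)} in Gy*cm^2."""
    return {q: calibration_coefficient(ref, meas)
            for q, (ref, meas) in readings.items()}

readings = {
    (70, 10): (1.52, 1.41),     # hypothetical PDC vs clinical-meter pairs
    (70, 20): (6.10, 5.70),
    (100, 10): (2.08, 1.90),
}
coeffs = calibrate_over_qualities(readings)
```

    Reproducibility across sites then amounts to repeating this table with the same reference instrument and checking that the coefficients agree.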

  10. Visualization and Tracking of Parallel CFD Simulations

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi; Kremenetsky, Mark

    1995-01-01

    We describe a system for interactive visualization and tracking of a 3-D unsteady computational fluid dynamics (CFD) simulation on a parallel computer. CM/AVS, a distributed, parallel implementation of a visualization environment (AVS), runs on the CM-5 parallel supercomputer. A CFD solver is run as a CM/AVS module on the CM-5. Data communication between the solver, other parallel visualization modules, and a graphics workstation, which is running AVS, is handled by CM/AVS. Partitioning of the visualization task, between the CM-5 and the workstation, can be done interactively in the visual programming environment provided by AVS. Flow solver parameters can also be altered by programmable interactive widgets. This system partially removes the requirement of storing large solution files at frequent time steps, a characteristic of the traditional 'simulate → store → visualize' post-processing approach.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cernoch, Antonin; Soubusta, Jan; Celechovska, Lucie

    We report on experimental implementation of the optimal universal asymmetric 1->2 quantum cloning machine for qubits encoded into polarization states of single photons. Our linear-optical machine performs asymmetric cloning by partially symmetrizing the input polarization state of signal photon and a blank copy idler photon prepared in a maximally mixed state. We show that the employed method of measurement of mean clone fidelities exhibits strong resilience to imperfect calibration of the relative efficiencies of single-photon detectors used in the experiment. Reliable characterization of the quantum cloner is thus possible even when precise detector calibration is difficult to achieve.

  12. New NIR Calibration Models Speed Biomass Composition and Reactivity Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-01

    Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. This highlight describes NREL's work to use near-infrared (NIR) spectroscopy and partial least squares multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. This highlight is being developed for the September 2015 Alliance S&T Board meeting.
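    The core of an NIR calibration model of the kind described here is partial least squares regression of a property of interest on spectra. As a minimal illustration (not NREL's models, which use many latent variables and validated preprocessing), the sketch below implements a single-component PLS fit, one NIPALS step, on synthetic "spectra."

```python
# Minimal single-latent-variable PLS (one NIPALS step), illustrative only.
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pls1_fit(X, y):
    """Fit y on X with one PLS component; returns the model tuple."""
    n, p = len(X), len(X[0])
    xmean = [sum(row[j] for row in X) / n for j in range(p)]
    ymean = sum(y) / n
    Xc = [[row[j] - xmean[j] for j in range(p)] for row in X]
    yc = [yi - ymean for yi in y]
    # weight vector w ~ X'y (normalized), scores t = Xc w
    w = [dot([row[j] for row in Xc], yc) for j in range(p)]
    norm = math.sqrt(dot(w, w))
    w = [wj / norm for wj in w]
    t = [dot(row, w) for row in Xc]
    b = dot(yc, t) / dot(t, t)          # regress y on the scores
    return xmean, ymean, w, b

def pls1_predict(model, x):
    xmean, ymean, w, b = model
    t = dot([xi - mi for xi, mi in zip(x, xmean)], w)
    return ymean + b * t

# Synthetic "spectra": y depends on one latent direction of X.
X = [[1.0, 2.0, 0.5], [2.0, 4.1, 1.0], [3.0, 5.9, 1.6], [4.0, 8.0, 2.1]]
y = [10.0, 20.0, 30.0, 40.0]
model = pls1_fit(X, y)
pred = pls1_predict(model, [2.5, 5.0, 1.3])
```

    A production model stacks several such components and is validated against wet-chemistry reference values, which is what makes the rapid spectroscopic prediction trustworthy.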

  13. Observation of Bs-Bsbar Oscillations Using Partially Reconstructed Hadronic Bs Decays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miles, Jeffrey Robert

    2008-02-01

    This thesis describes the contribution of partially reconstructed hadronic decays to the world's first observation of Bs-Bsbar oscillations. The analysis is a core member of a suite of closely related studies whose combined time-dependent measurement of the Bs-Bsbar oscillation frequency Δm_s is of historic significance. Using a data sample of 1 fb^-1 of p-pbar collisions at √s = 1.96 TeV collected with the CDF-II detector at the Fermilab Tevatron, they find signals of 3150 partially reconstructed hadronic Bs decays from the combined decay channels Bs → Ds*- π+ and Bs → Ds- ρ+ with Ds- → φπ-. These events are analyzed in parallel with 2000 fully reconstructed Bs → Ds- π+ (Ds- → φπ-) decays. The treatment of the data is developed in stages of progressive complexity, using high-statistics samples of hadronic B0 and B+ decays to study the attributes of partially reconstructed events. The analysis characterizes the data in mass and proper decay time, noting the potential of the partially reconstructed decays for precise measurement of B branching fractions and lifetimes, but consistently focusing on the effectiveness of the model for the oscillation measurement. They efficiently incorporate the measured quantities of each decay into a maximum likelihood fitting framework, from which they extract amplitude scans and a direct measurement of the oscillation frequency. The features of the amplitude scans are consistent with expected behavior, supporting the correctness of the calibrations for proper-time uncertainty and flavor-tagging dilution. The likelihood allows for the smooth combination of this analysis with results from other data samples, including 3500 fully reconstructed hadronic Bs events and 61,500 partially reconstructed semileptonic Bs events. The individual analyses show compelling evidence for Bs-Bsbar oscillations, and the combination yields a clear signal. The probability that random fluctuations could produce a comparable signature is 8 × 10^-8, which exceeds the 5-standard-deviation threshold of significance for observation. The discovery threshold would not be achieved without inclusion of the partially reconstructed hadronic decays. They measure Δm_s = 17.77 ± 0.10 (stat) ± 0.07 (syst) ps^-1 and extract |V_td/V_ts| = 0.2060 ± 0.0007 (exp) +0.0081/-0.0060 (theory), consistent with the Standard Model expectation.

  14. Electric currents and voltage drops along auroral field lines

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1983-01-01

    An assessment is presented of the current state of knowledge concerning Birkeland currents and the parallel electric field, with discussions focusing on the Birkeland primary region 1 sheets, the region 2 sheets which parallel them and appear to close in the partial ring current, the cusp currents (which may be correlated with the interplanetary B(y) component), and the Harang filament. The energy required by the parallel electric field and the associated particle acceleration processes appears to be derived from the Birkeland currents, for which evidence is adduced from particles, inverted V spectra, rising ion beams and expanded loss cones. Conics may on the other hand signify acceleration by electrostatic ion cyclotron waves associated with beams accelerated by the parallel electric field.

  15. Cross-Calibration between ASTER and MODIS Visible to Near-Infrared Bands for Improvement of ASTER Radiometric Calibration

    PubMed Central

    Tsuchida, Satoshi; Thome, Kurtis

    2017-01-01

    Radiometric cross-calibration between the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and the Terra-Moderate Resolution Imaging Spectroradiometer (MODIS) has been partially used to derive the ASTER radiometric calibration coefficient (RCC) curve as a function of date on visible to near-infrared bands. However, cross-calibration is not sufficiently accurate, since the effects of the differences in the sensors' spectral and spatial responses are not fully mitigated. The present study attempts to evaluate radiometric consistency across the two sensors using an improved cross-calibration algorithm that addresses the spectral and spatial effects and derives cross-calibration-based RCCs, which increases the ASTER calibration accuracy. Overall, radiances measured with ASTER bands 1 and 2 are on average 3.9% and 3.6% greater than the ones measured on the same scene with their MODIS counterparts, and ASTER band 3N (nadir) is 0.6% smaller than its MODIS counterpart in current radiance/reflectance products. The percentage root mean squared errors (%RMSEs) between the radiances of the two sensors are 3.7, 4.2, and 2.3 for ASTER bands 1, 2, and 3N, respectively, which are slightly greater or smaller than the required ASTER radiometric calibration accuracy (4%). The uncertainty of the cross-calibration is analyzed by elaborating the error budget table to evaluate the International System of Units (SI) traceability of the results. The use of the derived RCCs will allow further reduction of errors in ASTER radiometric calibration and subsequently improve interoperability across sensors for synergistic applications. PMID:28777329
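    The %RMSE figure of merit quoted above is straightforward to compute: the root-mean-squared difference between paired radiances from the two sensors, expressed as a percentage of the reference mean. The sketch below uses synthetic numbers, not ASTER/MODIS data.

```python
# Percent RMSE between paired radiances from two sensors (synthetic data).
import math

def percent_rmse(reference, test):
    n = len(reference)
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    return 100.0 * math.sqrt(mse) / (sum(reference) / n)

modis_like = [80.0, 95.0, 110.0, 120.0]     # reference radiances
aster_like = [83.0, 99.0, 114.0, 125.0]     # counterpart scene, ~4% brighter
err = percent_rmse(modis_like, aster_like)  # comparable to the ~4% spec
```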

  16. Modeling of Arylamide Helix Mimetics in the p53 Peptide Binding Site of hDM2 Suggests Parallel and Anti-Parallel Conformations Are Both Stable

    PubMed Central

    Fuller, Jonathan C.; Jackson, Richard M.; Edwards, Thomas A.; Wilson, Andrew J.; Shirts, Michael R.

    2012-01-01

    The design of novel α-helix mimetic inhibitors of protein-protein interactions is of interest to pharmaceuticals and chemical genetics researchers as these inhibitors provide a chemical scaffold presenting side chains in the same geometry as an α-helix. This conformational arrangement allows the design of high affinity inhibitors mimicking known peptide sequences binding specific protein substrates. We show that GAFF and AutoDock potentials do not properly capture the conformational preferences of α-helix mimetics based on arylamide oligomers and identify alternate parameters matching solution NMR data and suitable for molecular dynamics simulation of arylamide compounds. Results from both docking and molecular dynamics simulations are consistent with the arylamides binding in the p53 peptide binding pocket. Simulations of arylamides in the p53 binding pocket of hDM2 are consistent with binding, exhibiting similar structural dynamics in the pocket as simulations of known hDM2 binders Nutlin-2 and a benzodiazepinedione compound. Arylamide conformations converge towards the same region of the binding pocket on the 20 ns time scale, and most, though not all dihedrals in the binding pocket are well sampled on this timescale. We show that there are two putative classes of binding modes for arylamide compounds supported equally by the modeling evidence. In the first, the arylamide compound lies parallel to the observed p53 helix. In the second class, not previously identified or proposed, the arylamide compound lies anti-parallel to the p53 helix. PMID:22916232

  17. Simultaneous determination of potassium guaiacolsulfonate, guaifenesin, diphenhydramine HCl and carbetapentane citrate in syrups by using HPLC-DAD coupled with partial least squares multivariate calibration.

    PubMed

    Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika

    2011-02-15

    A simple and rapid analytical procedure was proposed for the determination of chromatographic peaks by means of partial least squares multivariate calibration (PLS) of high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified with the analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCl (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, the peak area does not need to be directly measured and predictions are more accurate. Though the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of the analytes could be obtained with HPLC-DAD coupled with PLS calibration. The method was tested by analyzing a synthetic mixture of PG, GU, DP and CP, with a classical HPLC method used for comparison. The proposed methods were applied to syrup samples containing the four drugs and the results obtained were statistically compared with each other. Finally, the main advantages of the HPLC-PLS method over the classical HPLC method were emphasized: a simple mobile phase, shorter analysis time, and no need for an internal standard or gradient elution. Copyright © 2010 Elsevier B.V. All rights reserved.
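    The PLS calibration step described in this record can be sketched concretely. The following is a minimal NIPALS-style PLS1 implementation run on synthetic overlapped-profile data; it is an illustrative sketch, not the authors' code, and every name and parameter in it is invented for demonstration.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Fit a PLS1 model (NIPALS) relating measured profiles X
    (samples x channels) to concentrations y (samples,)."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc                     # weight vector
        w /= np.linalg.norm(w)
        t = Xc @ w                        # scores
        tt = t @ t
        p = Xc.T @ t / tt                 # X loadings
        qa = yc @ t / tt                  # y loading
        Xc = Xc - np.outer(t, p)          # deflate X
        yc = yc - qa * t                  # deflate y
        W.append(w); P.append(p); q.append(qa)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)   # regression vector
    return b, x_mean, y_mean

def pls1_predict(X, b, x_mean, y_mean):
    return (X - x_mean) @ b + y_mean
```

    Trained on mixtures of known composition, such a model predicts each analyte's concentration directly from the full chromatographic/spectral profile with no explicit peak-area measurement, which is the advantage claimed over the classical HPLC approach.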

  18. Detection of triterpene acids distribution in loquat (Eriobotrya japonica) leaf using hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Shi, Jiyong; Chen, Wu; Zou, Xiaobo; Xu, Yiwei; Huang, Xiaowei; Zhu, Yaodi; Shen, Tingting

    2018-01-01

    Hyperspectral images (431-962 nm) and partial least squares (PLS) regression were used to detect the distribution of triterpene acids within loquat (Eriobotrya japonica) leaves. Seventy-two fresh loquat leaves from young, mature and old groups were collected for hyperspectral imaging, and the triterpene acid content of the leaves was analyzed using high performance liquid chromatography (HPLC). The spectral data from the leaf hyperspectral images and the triterpene acid contents were then used to build calibration models. After spectral pre-processing and wavelength selection, an optimal calibration model (Rp = 0.8473, RMSEP = 2.61 mg/g) for predicting triterpene acids was obtained by synergy interval partial least squares (siPLS). Finally, the spectrum of each pixel in a leaf's hyperspectral image was extracted and fed into the optimal calibration model to predict that pixel's triterpene acid content, yielding a distribution map of triterpene acid content. The map shows that triterpene acids accumulate mainly in the mesophyll regions near the main veins, and that the triterpene acid concentration of the young group is lower than that of the mature and old groups. This study showed that hyperspectral imaging is suitable for determining the distribution of active constituents in medicinal herbs in a rapid and non-invasive manner.

  19. Airborne hygrometer calibration inter-comparison against a metrological water vapour standard

    NASA Astrophysics Data System (ADS)

    Smorgon, Denis; Boese, Norbert; Ebert, Volker

    2014-05-01

    Water vapour is the most important atmospheric greenhouse gas and provides a major feedback to warming and other changes in the climate system. Knowledge of the distribution of water vapour and its climate-induced changes is especially important in the upper troposphere and lower stratosphere (UT/LS), where water vapour plays a critical role in the atmospheric radiative balance, cirrus cloud formation, and photochemistry. However, our understanding of water in the UT/LS is limited by significant uncertainties in current UT/LS water measurements. One of the most comprehensive inter-comparison campaigns for airborne hygrometers, termed AQUAVIT (AV1) [1], took place in 2007 at the AIDA chamber at the Karlsruhe Institute of Technology (KIT) in Germany. AV1 was a well-defined, refereed, blind inter-comparison of 22 airborne field instruments from 17 international research groups. One major metrological deficit of AV1, however, was that no traceable reference instrument participated in the inter-comparison experiments and that the calibration procedures of the participating instruments were not monitored or interrogated. Consequently, a follow-up inter-comparison was organized in April 2013, which for the first time also provides a traceable link to the international humidity scale. This AQUAVIT2 (AV2) campaign (details at http://www.imk-aaf.kit.edu/aquavit/index.php/Main_Page) was again located at KIT/AIDA and organised by an international organizing committee including KIT, PTB, FZJ and others. AV2 is divided into two parallel comparisons: 1) AV2-A uses the AIDA chamber for a simultaneous comparison of all instruments (including sampling and in-situ instruments) over a broad range of conditions characteristic of the UT/LS; 2) AV2-B, which this paper reports on, is a sequential comparison of selected hygrometers and (when possible) their reference calibration infrastructures by means of a chilled-mirror hygrometer traced back to the primary national humidity standard of PTB and a validated two-pressure generator acting as a highly stable and reproducible source of water vapour. The aim of AV2-B was to perform an absolute, metrological comparison of the field instruments/calibration infrastructures against the metrological humidity scale, and to collect essential information about the methods and procedures used by the atmospheric community for instrument calibration and validation, in order to investigate, e.g., the necessity of and possible comparability advantage offered by a standardized calibration procedure. The work will give an overview of the concept of the AV2-B inter-comparison and the various general measurement and calibration principles, and discuss the outcome and consequences of the comparison effort. The AQUAVIT effort is linked to the EMRP project METEOMET (ENV07) and partially supported by the EMRP and ENV07. The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union. [1] H. Saathoff, C. Schiller, V. Ebert, D. W. Fahey, R.-S. Gao, O. Möhler, and the AQUAVIT team, The AQUAVIT formal intercomparison of atmospheric water measurement methods, 5th General Assembly of the European Geosciences Union, 13-18 April 2008, Vienna, Austria. Keywords: humidity, water vapour, inter-comparison, airborne instruments.

  20. On the p(dis) correction factor for cylindrical chambers.

    PubMed

    Andreo, Pedro

    2010-03-07

    The authors of a recent paper (Wang and Rogers 2009 Phys. Med. Biol. 54 1609) have used the Monte Carlo method to simulate the 'classical' experiment made more than 30 years ago by Johansson et al (1978 National and International Standardization of Radiation Dosimetry (Atlanta 1977) vol 2 (Vienna: IAEA) pp 243-70) on the displacement (or replacement) perturbation correction factor p(dis) for cylindrical chambers in 60Co and high-energy photon beams. They conclude that an 'unreasonable normalization at dmax' of the ionization chamber response led to incorrect results, and for the IAEA TRS-398 Code of Practice, which uses ratios of those results, 'the difference in the correction factors can lead to a beam calibration deviation of more than 0.5% for Farmer-like chambers'. The present work critically examines and questions some of the claims and generalized conclusions of the paper. It is demonstrated that for real, commercial Farmer-like chambers, the possible deviations in absorbed dose would be much smaller (typically 0.13%) than those stated by Wang and Rogers, making the impact of their proposed values negligible on practical high-energy photon dosimetry. Differences of the order of 0.4% would only appear at the upper extreme of the energies potentially available for clinical use (around 25 MV) and, because lower energies are more frequently used, the number of radiotherapy photon beams for which the deviations would be larger than, say, 0.2% is extremely small. This work also raises concerns on the proposed value of p(dis) for Farmer chambers at the reference quality of 60Co in relation to their impact on electron beam dosimetry, both for direct dose determination using these chambers and for the cross-calibration of plane-parallel chambers. The proposed increase of about 1% in p(dis) (compared with TRS-398) would lower the kQ factors and therefore Dw in electron beams by the same amount. This would yield a severe discrepancy with the current good agreement between electron dosimetry based on an electron cross-calibrated plane-parallel chamber (against a Farmer) or on a directly 60Co calibrated plane-parallel chamber, which is not likely to be in error by 1%. It is suggested that the influence of the 60Co source spectrum used in the simulations may not be negligible for calculations aimed at an uncertainty level of 0.1%.

  1. LETTER TO THE EDITOR: On the pdis correction factor for cylindrical chambers

    NASA Astrophysics Data System (ADS)

    Andreo, Pedro

    2010-03-01

    The authors of a recent paper (Wang and Rogers 2009 Phys. Med. Biol. 54 1609) have used the Monte Carlo method to simulate the 'classical' experiment made more than 30 years ago by Johansson et al (1978 National and International Standardization of Radiation Dosimetry (Atlanta 1977) vol 2 (Vienna: IAEA) pp 243-70) on the displacement (or replacement) perturbation correction factor pdis for cylindrical chambers in 60Co and high-energy photon beams. They conclude that an 'unreasonable normalization at dmax' of the ionization chambers response led to incorrect results, and for the IAEA TRS-398 Code of Practice, which uses ratios of those results, 'the difference in the correction factors can lead to a beam calibration deviation of more than 0.5% for Farmer-like chambers'. The present work critically examines and questions some of the claims and generalized conclusions of the paper. It is demonstrated that for real, commercial Farmer-like chambers, the possible deviations in absorbed dose would be much smaller (typically 0.13%) than those stated by Wang and Rogers, making the impact of their proposed values negligible on practical high-energy photon dosimetry. Differences of the order of 0.4% would only appear at the upper extreme of the energies potentially available for clinical use (around 25 MV) and, because lower energies are more frequently used, the number of radiotherapy photon beams for which the deviations would be larger than say 0.2% is extremely small. This work also raises concerns on the proposed value of pdis for Farmer chambers at the reference quality of 60Co in relation to their impact on electron beam dosimetry, both for direct dose determination using these chambers and for the cross-calibration of plane-parallel chambers. The proposed increase of about 1% in pdis (compared with TRS-398) would lower the kQ factors and therefore Dw in electron beams by the same amount. 
This would yield a severe discrepancy with the current good agreement between electron dosimetry based on an electron cross-calibrated plane-parallel chamber (against a Farmer) or on a directly 60Co calibrated plane-parallel chamber, which is not likely to be in error by 1%. It is suggested that the influence of the 60Co source spectrum used in the simulations may not be negligible for calculations aimed at an uncertainty level of 0.1%.

  2. Comparative study on ATR-FTIR calibration models for monitoring solution concentration in cooling crystallization

    NASA Astrophysics Data System (ADS)

    Zhang, Fangkun; Liu, Tao; Wang, Xue Z.; Liu, Jingxiang; Jiang, Xiaobin

    2017-02-01

    In this paper, calibration model building based on ATR-FTIR spectroscopy is investigated for in-situ measurement of the solution concentration during a cooling crystallization process, with the cooling crystallization of L-glutamic acid (LGA) studied as a case. It was found that using metastable zone (MSZ) data for model calibration can guarantee the prediction accuracy for monitoring the operating window of cooling crystallization, compared to the traditional practice of using undersaturated zone (USZ) spectra for model building. Calibration experiments were performed for LGA solutions at different concentrations. Four candidate calibration models were established from different zone data for comparison, by applying a multivariate partial least-squares (PLS) regression algorithm to the collected spectra together with the corresponding temperature values. Experiments under different process conditions, including changes of solution concentration and operating temperature, were conducted. The results indicate that using the MSZ spectra for model calibration gives more accurate prediction of the solution concentration during the crystallization process, while maintaining accuracy under changes of the operating temperature. The primary cause of prediction error was identified as the spectral nonlinearity between USZ and MSZ in in-situ measurement. In addition, an LGA cooling crystallization experiment was performed to verify the sensitivity of these calibration models for monitoring the crystal growth process.

  3. A Hydrodynamic Characteristic of a Dual Fluidized Bed Gasification

    NASA Astrophysics Data System (ADS)

    Sung, Yeon Kyung; Song, Jae Hun; Bang, Byung Ryeul; Yu, Tae U.; Lee, Uen Do

    A cold-model dual fluidized bed (DFB) reactor, consisting of two parallel interconnected bubbling and fast fluidized beds, was designed for developing an auto-thermal biomass gasifier. The combustor of this system burns the residual char from the gasification process and provides heat to the gasifier by circulating the solid inventory. To find optimal mixing and circulation of the heavy solid inventory and the light biomass and char materials, we investigated two types of DFB reactor that differ in the configuration of the distributor and in the outlet location of the solid inventory and char materials in the gasifier. To determine appropriate operating conditions, we measured the minimum fluidization velocity, solid circulation rate, axial solid holdup, and gas bypassing between the lower loop seal and the gasifier.

  4. Design of an auto change mechanism and intelligent gripper for the space station

    NASA Technical Reports Server (NTRS)

    Dehoff, Paul H.; Naik, Dipak P.

    1989-01-01

    Robot gripping of objects in space is inherently demanding and dangerous, and nowhere is this more clearly reflected than in the design of the robot gripper. An object that escapes the gripper in a micro-g environment is launched, not dropped. To prevent this, the gripper must have sensors and signal processing to determine that the object is properly grasped (e.g., grip points and gripping forces) and, if not, to provide information to the robot so that closed-loop corrections can be made. The sensors and sensor strategies employed in the NASA/GSFC Split-Rail Parallel Gripper are described. Objectives and requirements are given, followed by the design of the sensor suite, sensor fusion techniques, and supporting algorithms.

  5. Fault Tolerant Parallel Implementations of Iterative Algorithms for Optimal Control Problems

    DTIC Science & Technology

    1988-01-21

    ... steps, but did not discuss any specific parallel implementation. Gajski [5] improved upon this result by performing the SIMD computation in... N = p^2, our approach reduces to that of [5], except that Gajski presents the coefficient computation and partial solution phases as a single... the SIMD algorithm presented by Gajski [5] can be most efficiently mapped to a unidirectional ring network with broadcasting capability. Based

  6. Effects of atmospheric parameters on radon measurements using alpha-track detectors.

    PubMed

    Zhao, C; Zhuo, W; Fan, D; Yi, Y; Chen, B

    2014-02-01

    The calibration factors of alpha-track radon detectors (ATDs) are essential for the accurate determination of indoor radon concentrations. In this paper, the effects of atmospheric parameters on the calibration factors were studied theoretically and partially verified experimentally. Based on atmospheric thermodynamics and the detection characteristics of allyl diglycol carbonate (CR-39), the calibration factors for 5 types of ATDs were calculated through Monte Carlo simulations under different atmospheric conditions. Simulation results showed that the calibration factor increased by up to 31% with a decrease of air pressure by 35.5 kPa (equivalent to an altitude increase of 3500 m), and increased by up to 12% with a temperature increase from 5 °C to 35 °C, but it was hardly affected by the relative humidity unless water-vapor condensation occurred inside the detectors. Furthermore, the magnitude of these effects was found to depend on the dimensions of the ATD. This indicates that variations of the calibration factor with air pressure and temperature should be considered for accurate radon measurement with a large-dimension ATD, and that water-vapor condensation inside the detector should be avoided in field measurements.

  7. Surrogate matrix and surrogate analyte approaches for definitive quantitation of endogenous biomolecules.

    PubMed

    Jones, Barry R; Schultz, Gary A; Eckstein, James A; Ackermann, Bradley L

    2012-10-01

    Quantitation of biomarkers by LC-MS/MS is complicated by the presence of endogenous analytes. This challenge is most commonly overcome by calibration using an authentic standard spiked into a surrogate matrix devoid of the target analyte. A second approach involves use of a stable-isotope-labeled standard as a surrogate analyte to allow calibration in the actual biological matrix. For both methods, parallelism between calibration standards and the target analyte in biological matrix must be demonstrated in order to ensure accurate quantitation. In this communication, the surrogate matrix and surrogate analyte approaches are compared for the analysis of five amino acids in human plasma: alanine, valine, methionine, leucine and isoleucine. In addition, methodology based on standard addition is introduced, which enables a robust examination of parallelism in both surrogate analyte and surrogate matrix methods prior to formal validation. Results from additional assays are presented to introduce the standard-addition methodology and to highlight the strengths and weaknesses of each approach. For the analysis of amino acids in human plasma, comparable precision and accuracy were obtained by the surrogate matrix and surrogate analyte methods. Both assays were well within tolerances prescribed by regulatory guidance for validation of xenobiotic assays. When stable-isotope-labeled standards are readily available, the surrogate analyte approach allows for facile method development. By comparison, the surrogate matrix method requires greater up-front method development; however, this deficit is offset by the long-term advantage of simplified sample analysis.
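    The standard-addition methodology mentioned in this record is simple to state concretely. Below is a generic textbook sketch (not the authors' validated assay; the numbers are illustrative): known amounts of analyte are spiked into aliquots of the actual matrix, the response is regressed on the added concentration, and the endogenous level is recovered from the intercept-to-slope ratio.

```python
import numpy as np

def standard_addition_conc(added, response):
    """Estimate the endogenous analyte concentration by the method of
    standard additions. With response = m * (c_endogenous + added),
    the fitted line response = m*added + b gives c_endogenous = b/m."""
    m, b = np.polyfit(np.asarray(added, float),
                      np.asarray(response, float), 1)
    return b / m
```

    A linear, parallel response between spiked standard and endogenous analyte is assumed throughout, which is exactly the parallelism the paper argues should be examined before formal validation.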

  8. Fluorescent quantification of terazosin hydrochloride content in human plasma and tablets using second-order calibration based on both parallel factor analysis and alternating penalty trilinear decomposition.

    PubMed

    Zou, Hong-Yan; Wu, Hai-Long; OuYang, Li-Qun; Zhang, Yan; Nie, Jin-Fang; Fu, Hai-Yan; Yu, Ru-Qin

    2009-09-14

    Two second-order calibration methods, based on parallel factor analysis (PARAFAC) and the alternating penalty trilinear decomposition (APTLD) method, have been utilized for the direct determination of terazosin hydrochloride (THD) in human plasma samples, coupled with excitation-emission matrix fluorescence spectroscopy. The two algorithms, combined with standard addition procedures, were also applied to the determination of terazosin hydrochloride in tablets, and the results were validated by high-performance liquid chromatography with fluorescence detection. These second-order calibrations all adequately exploited the second-order advantage. For human plasma samples, the average recoveries by the PARAFAC and APTLD algorithms with a factor number of 2 (N=2) were 100.4+/-2.7% and 99.2+/-2.4%, respectively. The accuracy of the two algorithms was also evaluated through elliptical joint confidence region (EJCR) tests and t-tests. Both algorithms gave accurate results, with the performance of APTLD slightly better than that of PARAFAC. Figures of merit, such as sensitivity (SEN), selectivity (SEL) and limit of detection (LOD), were also calculated to compare the performances of the two strategies. For tablets, the average concentrations of THD were 63.5 and 63.2 ng mL(-1) by the PARAFAC and APTLD algorithms, respectively. The accuracy was evaluated by t-test and both algorithms again gave accurate results.
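    For readers unfamiliar with PARAFAC, the trilinear model that both algorithms fit can be illustrated with a minimal alternating-least-squares decomposition of a three-way data array (e.g. samples x excitation x emission). This is a bare-bones sketch on synthetic data, with no non-negativity constraints and none of APTLD's penalty terms; it is not the published implementation.

```python
import numpy as np

def parafac_als(X, rank, n_iter=500, seed=0):
    """Rank-`rank` CP/PARAFAC decomposition of a 3-way array X
    (I x J x K) by alternating least squares, so that
    X[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.uniform(0.5, 1.5, (I, rank))
    B = rng.uniform(0.5, 1.5, (J, rank))
    C = rng.uniform(0.5, 1.5, (K, rank))
    X0 = X.reshape(I, J * K)                     # mode-1 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(J, I * K)  # mode-2 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, I * J)  # mode-3 unfolding
    # Khatri-Rao (column-wise Kronecker) product of two factor matrices
    kr = lambda U, V: (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])
    for _ in range(n_iter):                      # alternating LS updates
        A = X0 @ np.linalg.pinv(kr(B, C).T)
        B = X1 @ np.linalg.pinv(kr(A, C).T)
        C = X2 @ np.linalg.pinv(kr(A, B).T)
    return A, B, C
```

    In second-order calibration, one mode indexes the samples; the "second-order advantage" is that a component's sample-mode loadings track its analyte's relative concentration even in the presence of uncalibrated interferents.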

  9. A rotation-translation invariant molecular descriptor of partial charges and its use in ligand-based virtual screening

    PubMed Central

    2014-01-01

    Background Measures of similarity for chemical molecules have been developed since the dawn of chemoinformatics. Molecular similarity has been measured by a variety of methods including molecular-descriptor-based similarity, common molecular fragments, graph matching and 3D methods such as shape matching. Similarity measures are widespread in practice and have proven to be useful in drug discovery. Because of our interest in electrostatics and high-throughput ligand-based virtual screening, we sought to exploit the information contained in the atomic coordinates and partial charges of a molecule. Results A new molecular descriptor based on partial charges is proposed. It uses the autocorrelation function and linear binning to encode all atoms of a molecule into two rotation-translation invariant vectors. Combined with a scoring function, the descriptor allows a database of compounds to be rank-ordered against a query molecule. The proposed implementation is called ACPC (AutoCorrelation of Partial Charges) and is released as open source. Extensive retrospective ligand-based virtual screening experiments were performed and compared with other methods in order to validate the method and associated protocol. Conclusions While it is a simple method, it performed remarkably well in experiments. At an average speed of 1649 molecules per second, it reached an average median area under the curve of 0.81 on 40 different targets, validating the proposed protocol and implementation. PMID:24887178
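    The core construction, an autocorrelation of partial charges over interatomic distance, accumulated with linear binning so that the result is invariant to rotation and translation, can be sketched as follows. This is an illustrative reimplementation of the idea with invented parameters (bin width, distance cutoff); the released ACPC code encodes a molecule into two vectors rather than the single vector used here.

```python
import numpy as np

def charge_autocorrelation(coords, charges, dx=0.01, max_dist=20.0):
    """Encode a molecule (coords: n x 3, charges: n) as the
    autocorrelation of partial charges over interatomic distance.
    Linear binning splits each charge product between the two nearest
    distance bins, making the descriptor continuous in the distances
    and hence robust to small numerical perturbations."""
    ac = np.zeros(int(max_dist / dx) + 2)
    n = len(charges)
    for i in range(n):
        for j in range(i, n):
            d = np.linalg.norm(coords[i] - coords[j])
            q = charges[i] * charges[j]
            b = d / dx
            lo = int(b)
            w = b - lo
            ac[lo] += q * (1.0 - w)   # weight to lower bin
            ac[lo + 1] += q * w       # remainder to upper bin
    return ac
```

    Because only pairwise distances enter, the vector is unchanged by any rigid motion of the molecule, which is what makes alignment-free database ranking possible.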

  10. A low-cost and portable realization on fringe projection three-dimensional measurement

    NASA Astrophysics Data System (ADS)

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2015-12-01

    Fringe projection three-dimensional measurement is widely applied across a range of industries. Traditional fringe projection systems have the disadvantages of high cost, large size, and complicated calibration requirements. In this paper we present a low-cost and portable realization of three-dimensional measurement with a Pico projector. It has the advantages of low cost, compact physical size, and flexible configuration. In the proposed fringe projection system, there is no restriction on the relative alignment of the camera and projector in terms of parallelism or perpendicularity during installation. Moreover, a plane-based calibration method is adopted that avoids demanding calibration hardware such as an additional gauge block or a precise linear z stage. The error sources of the proposed system are also analyzed. The experimental results demonstrate the feasibility of the proposed low-cost and portable fringe projection system.

  11. Misalignments calibration in small-animal PET scanners based on rotating planar detectors and parallel-beam geometry.

    PubMed

    Abella, M; Vicente, E; Rodríguez-Ruano, A; España, S; Lage, E; Desco, M; Udias, J M; Vaquero, J J

    2012-11-21

    Technological advances have improved the assembly process of PET detectors, resulting in quite small mechanical tolerances. However, in high-spatial-resolution systems, even submillimetric misalignments of the detectors may lead to a notable degradation of image resolution and artifacts. Therefore, the exact characterization of misalignments is critical for optimum reconstruction quality in such systems. This subject has been widely studied for CT and SPECT scanners based on cone beam geometry, but this is not the case for PET tomographs based on rotating planar detectors. The purpose of this work is to analyze misalignment effects in these systems and to propose a robust and easy-to-implement protocol for geometric characterization. The result of the proposed calibration method, which requires no more than a simple calibration phantom, can then be used to generate a correct 3D-sinogram from the acquired list mode data.

  12. Coupled electromechanical model of the heart: Parallel finite element formulation.

    PubMed

    Lafortune, Pierre; Arís, Ruth; Vázquez, Mariano; Houzeaux, Guillaume

    2012-01-01

    In this paper, a highly parallel coupled electromechanical model of the heart is presented and assessed. The parallel-coupled model is thoroughly discussed, with scalability proven up to hundreds of cores. This work focuses on the mechanical part, including the constitutive model (proposing some modifications to pre-existent models), the numerical scheme and the coupling strategy. The model is next assessed through two examples. First, the simulation of a small piece of cardiac tissue is used to introduce the main features of the coupled model and calibrate its parameters against experimental evidence. Then, a more realistic problem is solved using those parameters, with a mesh of the Oxford ventricular rabbit model. The results of both examples demonstrate the capability of the model to run efficiently in hundreds of processors and to reproduce some basic characteristic of cardiac deformation.

  13. Photoreceptor layer map using spectral-domain optical coherence tomography.

    PubMed

    Lee, Ji Eun; Lim, Dae Won; Bae, Han Yong; Park, Hyun Jin

    2009-12-01

    To develop a novel method for analysis of the photoreceptor layer map (PLM) generated using spectral-domain optical coherence tomography (OCT). OCT scans were obtained from 20 eyes, 10 with macular holes (MH) and 10 with central serous chorioretinopathy (CSC), using the Macular Cube (512 x 128) protocol of the Cirrus HD-OCT (Carl Zeiss). The scanned data were processed using the device's embedded advanced-visualization tools. A partial-thickness OCT fundus image of the photoreceptor layer was generated by setting the region of interest to a 50-microm-thick layer parallel and adjacent to the retinal pigment epithelium. The resulting image depicted the photoreceptor layer as a map of its reflectivity in OCT. The PLM was compared with fundus photography, auto-fluorescence, tomography, and the retinal thickness map. The signal from the photoreceptor layer of every OCT scan in each case was rendered as a single PLM image in the manner of a fundus photograph. In PLM images, detachment of the sensory retina is depicted as a hypo-reflective area, which represents the base of the MH and the serous detachment in CSC. Relative hypo-reflectivity, which was also noted at closed MH and at recently reattached retina in CSC, was associated with reduced signal from the junction between the inner and outer segments of the photoreceptors in OCT images. Using the PLM, changes in the area of detachment and in the reflectivity of the photoreceptor layer could be efficiently monitored. The photoreceptor layer can thus be analyzed as a map using spectral-domain OCT. In the treatment of both MH and CSC, the PLM may provide new pathological information about the photoreceptor layer to expand our understanding of these diseases.

  14. Calibrating the pixel-level Kepler imaging data with a causal data-driven model

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Foreman-Mackey, Daniel; Hogg, David W.; Schölkopf, Bernhard

    2015-01-01

    In general, astronomical observations are affected by several kinds of noise, each with its own causal source: photon noise, stochastic source variability, and residuals coming from imperfect calibration of the detector or telescope. In particular, the precision of NASA Kepler photometry for exoplanet science—the most precise photometric measurements of stars ever made—appears to be limited by unknown or untracked variations in spacecraft pointing and temperature, and unmodeled stellar variability. Here we present the Causal Pixel Model (CPM) for Kepler data, a data-driven model intended to capture variability but preserve transit signals. The CPM works at the pixel level (not the photometric measurement level); it can capture more fine-grained information about the variation of the spacecraft than is available in the pixel-summed aperture photometry. The basic idea is that CPM predicts each target pixel value from a large number of pixels of other stars that share the instrument variabilities but contain no information on possible transits at the target star. In addition, we use the target star's future and past (auto-regression). By appropriately separating the data into training and test sets, we ensure that information about any transit will be perfectly isolated from the fitting of the model. The method has four hyper-parameters (the number of predictor stars, the auto-regressive window size, and two L2-regularization amplitudes for the model components), which we set by cross-validation. We determine a generic set of hyper-parameters that works well on most of the stars with 11≤V≤12 mag and apply the method to a corresponding set of target stars with known planet transits. We find that we can consistently outperform (for the purposes of exoplanet detection) the Kepler Pre-search Data Conditioning (PDC) method for exoplanet discovery, often improving the SNR by a factor of two. 
While we have not yet exhaustively tested the method at other magnitudes, we expect that it should be generally applicable, with positive consequences for subsequent exoplanet detection or stellar variability (in which case we must exclude the autoregressive part to preserve intrinsic variability).
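    At its core, the CPM described above is L2-regularized linear prediction with a strict train/test split, so that information about a transit in the held-out epochs cannot leak into the fit. Below is a minimal sketch on synthetic light curves; the data, regularization value and split are invented for illustration, and the real model additionally includes auto-regressive terms and cross-validated hyper-parameters.

```python
import numpy as np

def cpm_fit_predict(target, predictors, train, test, l2=0.1):
    """Fit the target pixel's training-epoch values as a ridge-regularized
    linear combination of predictor pixels (from other stars sharing the
    spacecraft systematics), then predict the target on held-out epochs.
    The residual target - prediction retains signals unique to the
    target star, such as transits."""
    A, y = predictors[train], target[train]
    w = np.linalg.solve(A.T @ A + l2 * np.eye(A.shape[1]), A.T @ y)
    return predictors[test] @ w
```

    Because the predictors carry the shared spacecraft systematics but no information about the target star's own transits, subtracting the prediction removes the systematics while preserving any transit signal.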

  15. Development of a pre-concentration system and auto-analyzer for dissolved methane, ethane, propane, and butane concentration measurements with a GC-FID

    NASA Astrophysics Data System (ADS)

    Chepigin, A.; Leonte, M.; Colombo, F.; Kessler, J. D.

    2014-12-01

    Dissolved methane, ethane, propane, and butane concentrations in natural waters are traditionally measured using a headspace equilibration technique and gas chromatograph with flame ionization detector (GC-FID). While a relatively simple technique, headspace equilibration suffers from slow equilibration times and loss of sensitivity due to concentration dilution with the pure gas headspace. Here we present a newly developed pre-concentration system and auto-analyzer for use with a GC-FID. This system decreases the time required for each analysis by eliminating the headspace equilibration time, increases the sensitivity and precision with a rapid pre-concentration step, and minimizes operator time with an autoanalyzer. In this method, samples are collected from Niskin bottles in newly developed 1 L plastic sample bags rather than glass vials. Immediately following sample collection, the sample bags are placed in an incubator and individually connected to a multiport sampling valve. Water is pumped automatically from the desired sample bag through a small (6.5 mL) Liqui-Cel® membrane contactor where the dissolved gas is vacuum extracted and directly flushed into the GC sample loop. The gases of interest are preferentially extracted with the Liqui-Cel and thus a natural pre-concentration effect is obtained. Daily method calibration is achieved in the field with a five-point calibration curve that is created by analyzing gas standard-spiked water stored in 5 L gas-impermeable bags. Our system has been shown to substantially pre-concentrate the dissolved gases of interest and produce a highly linear response of peak areas to dissolved gas concentration. The system retains the high accuracy, precision, and wide range of measurable concentrations of the headspace equilibration method while simultaneously increasing the sensitivity due to the pre-concentration step. 
The time and labor involved in the headspace equilibration method are eliminated and replaced with the immediate and automatic analysis of up to 13 sequential samples. The elapsed time between sample collection and analysis is reduced from approximately 12 hrs to < 10 min, enabling dynamic and highly resolved sampling plans.
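The five-point calibration described above amounts to a linear fit of peak area against standard concentration, inverted to quantify unknowns. A minimal sketch, with purely illustrative standard concentrations and peak areas (not the authors' data):

```python
import numpy as np

# Hypothetical five-point calibration: dissolved methane standards (nmol/L)
# versus GC-FID peak area (arbitrary units); all values are illustrative.
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])            # standard concentrations
area = np.array([12.0, 518.0, 1031.0, 2046.0, 4102.0])   # measured peak areas

# Linear least-squares fit: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

# Coefficient of determination, to check linearity of the response
pred = slope * conc + intercept
r2 = 1.0 - np.sum((area - pred) ** 2) / np.sum((area - np.mean(area)) ** 2)

# Invert the fit to quantify an unknown sample from its peak area
unknown_area = 1540.0
unknown_conc = (unknown_area - intercept) / slope
print(f"slope={slope:.2f}, R^2={r2:.5f}, unknown ~ {unknown_conc:.1f} nmol/L")
```

A highly linear response, as reported, corresponds to R² very close to 1 over the calibrated range.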

  16. Enhanced Axial Resolution of Wide-Field Two-Photon Excitation Microscopy by Line Scanning Using a Digital Micromirror Device.

    PubMed

    Park, Jong Kang; Rowlands, Christopher J; So, Peter T C

    2017-01-01

    Temporal focusing multiphoton microscopy is a technique for performing highly parallelized multiphoton microscopy while still maintaining depth discrimination. While the conventional wide-field configuration for temporal focusing suffers from sub-optimal axial resolution, line scanning temporal focusing, implemented here using a digital micromirror device (DMD), can provide substantial improvement. The DMD-based line scanning temporal focusing technique dynamically trades off the degree of parallelization, and hence imaging speed, for axial resolution, allowing performance parameters to be adapted to the experimental requirements. We demonstrate this new instrument in calibration specimens and in biological specimens, including a mouse kidney slice.

  17. Enhanced Axial Resolution of Wide-Field Two-Photon Excitation Microscopy by Line Scanning Using a Digital Micromirror Device

    PubMed Central

    Park, Jong Kang; Rowlands, Christopher J.; So, Peter T. C.

    2017-01-01

    Temporal focusing multiphoton microscopy is a technique for performing highly parallelized multiphoton microscopy while still maintaining depth discrimination. While the conventional wide-field configuration for temporal focusing suffers from sub-optimal axial resolution, line scanning temporal focusing, implemented here using a digital micromirror device (DMD), can provide substantial improvement. The DMD-based line scanning temporal focusing technique dynamically trades off the degree of parallelization, and hence imaging speed, for axial resolution, allowing performance parameters to be adapted to the experimental requirements. We demonstrate this new instrument in calibration specimens and in biological specimens, including a mouse kidney slice. PMID:29387484

  18. Sequential monitoring and stability of ex vivo-expanded autologous and non-autologous regulatory T cells following infusion in non-human primates

    PubMed Central

    Zhang, H.; Guo, H.; Lu, L.; Zahorchak, A. F.; Wiseman, R. W.; Raimondi, G.; Cooper, D. K. C.; Ezzelarab, M. B.; Thomson, A. W.

    2016-01-01

    Ex vivo-expanded cynomolgus monkey CD4+CD25+CD127− regulatory T cells (Treg) maintained Foxp3 demethylation status at the Treg-Specific Demethylation Region (TSDR), and potently suppressed T cell proliferation through 3 rounds of expansion. When CFSE- or VPD450-labeled autologous (auto) and non-autologous (non-auto) expanded Treg were infused into monkeys, the number of labeled auto-Treg in peripheral blood declined rapidly during the first week, but persisted at low levels in both normal and anti-thymocyte globulin plus rapamycin-treated (immunosuppressed; IS) animals for at least 3 weeks. By contrast, MHC-mismatched non-auto-Treg could not be detected in normal monkey blood or in blood of two out of the three IS monkeys by day 6 post-infusion. They were also more difficult to detect than auto-Treg in peripheral lymphoid tissue. Both auto- and non-auto-Treg maintained Ki67 expression early after infusion. Sequential monitoring revealed that adoptively-transferred auto-Treg maintained similarly high levels of Foxp3 and CD25 and low CD127 compared with endogenous Treg, although Foxp3 staining diminished over time in these non-transplanted recipients. Thus, infused ex vivo-expanded auto-Treg persist longer than MHC-mismatched non-auto-Treg in blood of non-human primates and can be detected in secondary lymphoid tissue. Host lymphodepletion and rapamycin administration did not consistently prolong the persistence of non-auto-Treg in these sites. PMID:25783759

  19. High-speed mixture fraction and temperature imaging of pulsed, turbulent fuel jets auto-igniting in high-temperature, vitiated co-flows

    NASA Astrophysics Data System (ADS)

    Papageorge, Michael J.; Arndt, Christoph; Fuest, Frederik; Meier, Wolfgang; Sutton, Jeffrey A.

    2014-07-01

In this manuscript, we describe an experimental approach to simultaneously measure high-speed image sequences of the mixture fraction and temperature fields during pulsed, turbulent fuel injection into a high-temperature, co-flowing, vitiated oxidizer stream. The quantitative mixture fraction and temperature measurements are determined from 10-kHz-rate planar Rayleigh scattering and a robust data processing methodology which is accurate from fuel injection to the onset of auto-ignition. In addition, the data processing is shown to yield accurate temperature measurements following ignition, allowing observation of the initial evolution of the "burning" temperature field. High-speed OH* chemiluminescence (CL) was used to determine the spatial location of the initial auto-ignition kernel. In order to ensure that the ignition kernel formed inside the Rayleigh scattering laser light sheet, OH* CL was observed in two viewing planes, one near-parallel and one perpendicular to the laser sheet. The high-speed laser measurements are enabled through the use of the unique high-energy pulse burst laser system which generates long-duration bursts of ultra-high pulse energies at 532 nm (>1 J) suitable for planar Rayleigh scattering imaging. A particular focus of this study was to characterize the fidelity of the measurements in terms of both precision and accuracy, accounting for facility operating and boundary conditions and the measured signal-to-noise ratio (SNR). The mixture fraction and temperature fields deduced from the high-speed planar Rayleigh scattering measurements exhibited SNR values greater than 100 at temperatures exceeding 1,300 K. The accuracy of the measurements was determined by comparing the current mixture fraction results to those of "cold", isothermal, non-reacting jets. All profiles, when properly normalized, exhibited self-similarity and collapsed upon one another.
Finally, example mixture fraction, temperature, and OH* emission sequences are presented for a variety of fuel and vitiated oxidizer combinations. For all cases considered, auto-ignition occurred at the periphery of the fuel jet, under very "lean" conditions, where the local mixture fraction was less than the stoichiometric mixture fraction (ξ < ξ_s). Furthermore, the ignition kernel formed in regions of low scalar dissipation rate, which agrees with previous results from direct numerical simulations.
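Rayleigh thermometry of the kind used here relies on the signal being proportional to gas number density times an effective scattering cross section, so at constant pressure the temperature follows from a signal ratio against a reference condition. A minimal sketch, assuming a known composition-dependent cross-section ratio; the numbers are illustrative, not from the cited experiment:

```python
# Ideal-gas, constant-pressure Rayleigh thermometry sketch:
# T = T_ref * (S_ref / S) * (sigma_eff / sigma_ref).
T_ref = 294.0   # K, reference (room-air) temperature
S_ref = 1.00    # normalized Rayleigh signal in the reference gas

def rayleigh_temperature(signal, sigma_ratio=1.0):
    """Convert a measured, normalized Rayleigh signal to temperature (K)."""
    return T_ref * (S_ref / signal) * sigma_ratio

# A signal at ~22% of the reference level implies a hot region near 1340 K
print(rayleigh_temperature(0.22))
```

In practice the cross-section ratio varies with local mixture fraction, which is why the quantitative processing in the paper is non-trivial; this sketch only shows the governing proportionality.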

  20. Calibration strategies for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Gaug, Markus; Berge, David; Daniel, Michael; Doro, Michele; Förster, Andreas; Hofmann, Werner; Maccarone, Maria C.; Parsons, Dan; de los Reyes Lopez, Raquel; van Eldik, Christopher

    2014-08-01

The Central Calibration Facilities work package of the Cherenkov Telescope Array (CTA) observatory for very-high-energy gamma-ray astronomy defines the overall calibration strategy of the array, develops dedicated hardware and software for the overall array calibration, and coordinates the calibration efforts of the different telescopes. The latter include LED-based light pulsers and various methods and instruments to achieve a calibration of the overall optical throughput. At the array level, methods for inter-telescope calibration and for the absolute calibration of the entire observatory are being developed. Additionally, the atmosphere above the telescopes, used as a calorimeter, will be monitored constantly with state-of-the-art instruments to obtain a full molecular and aerosol profile up to the stratosphere. The aim is to provide a maximum uncertainty of 10% on the reconstructed energy scale, obtained through several independent methods. Different types of LIDAR in combination with all-sky cameras will provide the observatory with an online, intelligent scheduling system which, if the sky is partially covered by clouds, gives preference to sources observable under good atmospheric conditions. Wide-field optical telescopes and Raman LIDARs will provide online information about the height-resolved atmospheric extinction throughout the field of view of the cameras, allowing for the correction of the reconstructed energy of each gamma-ray event. The aim is to maximize the duty cycle of the observatory, in terms of usable data, while reducing the dead time introduced by calibration activities to an absolute minimum.

  1. A rapid tool for determination of titanium dioxide content in white chickpea samples.

    PubMed

    Sezer, Banu; Bilge, Gonca; Berkkan, Aysel; Tamer, Ugur; Hakki Boyaci, Ismail

    2018-02-01

Titanium dioxide (TiO2) is a widely used additive in foods. However, there is an ongoing debate in the scientific community about health concerns over TiO2. The main goal of this study is to determine TiO2 content by using laser-induced breakdown spectroscopy (LIBS). To this end, different amounts of TiO2 were added to white chickpeas and analyzed using LIBS. A univariate calibration curve was obtained by following the Ti emission at 390.11 nm, and a partial least squares (PLS) calibration model was obtained by evaluating the whole spectra. The results showed that the Ti calibration curve at 390.11 nm provides successful determination of the Ti level, with an R2 of 0.985 and a limit of detection (LOD) of 33.9 ppm, while PLS gave an R2 of 0.989 and an LOD of 60.9 ppm. Furthermore, commercial white chickpea samples were used to validate the method; the validation R2 values for univariate calibration and PLS were 0.989 and 0.951, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
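A univariate LOD of the kind reported is commonly estimated from the calibration line itself, e.g. as 3.3 times the residual standard deviation divided by the slope. A sketch with hypothetical Ti-emission intensities (not the paper's data):

```python
import numpy as np

# Illustrative univariate calibration: Ti emission intensity at 390.11 nm
# versus added TiO2 (ppm). All values are hypothetical.
conc = np.array([0.0, 100.0, 250.0, 500.0, 1000.0])        # ppm TiO2
intensity = np.array([3.0, 215.0, 524.0, 1048.0, 2091.0])  # a.u.

slope, intercept = np.polyfit(conc, intensity, 1)
resid = intensity - (slope * conc + intercept)
s_y = np.sqrt(np.sum(resid ** 2) / (len(conc) - 2))  # residual std deviation

# Common LOD estimate from a calibration line: 3.3 * s_y / slope
lod = 3.3 * s_y / slope
print(f"slope={slope:.3f} a.u./ppm, LOD ~= {lod:.1f} ppm")
```

Other LOD conventions (e.g. 3·σ_blank/slope) exist; which one the authors used is not stated in the abstract.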

  2. Economic evaluation of epinephrine auto-injectors for peanut allergy.

    PubMed

    Shaker, Marcus; Bean, Katherine; Verdi, Marylee

    2017-08-01

    Three commercial epinephrine auto-injectors were available in the United States in the summer of 2016: EpiPen, Adrenaclick, and epinephrine injection, USP auto-injector. To describe the variation in pharmacy costs among epinephrine auto-injector devices in New England and evaluate the additional expense associated with incremental auto-injector costs. Decision analysis software was used to evaluate costs of the most and least expensive epinephrine auto-injector devices for children with peanut allergy. To evaluate regional variation in epinephrine auto-injector costs, a random sample of New England national and corporate pharmacies was compared with a convenience sample of pharmacies from 10 Canadian provinces. Assuming prescriptions written for 2 double epinephrine packs each year (home and school), the mean costs of food allergy over the 20-year model horizon totaled $58,667 (95% confidence interval [CI] $57,745-$59,588) when EpiPen was prescribed and $45,588 (95% CI $44,873-$46,304) when epinephrine injection, USP auto-injector was prescribed. No effectiveness differences were evident between groups, with 17.19 (95% CI 17.11-17.27) quality-adjusted life years accruing for each subject. The incremental cost per episode of anaphylaxis treated with epinephrine over the model horizon was $12,576 for EpiPen vs epinephrine injection, USP auto-injector. EpiPen costs were lowest at Canadian pharmacies ($96, 95% CI $85-$107). There was price consistency between corporate and independent pharmacies throughout New England by device brand, with the epinephrine injection, USP auto-injector being the most affordable device. Cost differences among epinephrine auto-injectors were significant. More expensive auto-injector brands did not appear to provide incremental benefit. Copyright © 2017 American College of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
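The decision model above compares lifetime costs of prescribing strategies over a fixed horizon. A minimal sketch of the mechanics only, a discounted fixed-horizon cost comparison between two device prices; the prices and discount rate below are hypothetical placeholders, not the study's inputs, and the sketch omits the probabilistic anaphylaxis-episode structure of the published model:

```python
# Two double epinephrine packs per year over a 20-year horizon,
# discounted at a fixed annual rate. All inputs are hypothetical.
def total_discounted_cost(price_per_pack, packs_per_year=2, years=20, rate=0.03):
    """Sum yearly pack costs, discounted at a fixed annual rate."""
    return sum(price_per_pack * packs_per_year / (1 + rate) ** t
               for t in range(years))

cost_brand = total_discounted_cost(600.0)    # hypothetical branded price
cost_generic = total_discounted_cost(350.0)  # hypothetical generic price
incremental = cost_brand - cost_generic
print(f"incremental cost over 20 y: ${incremental:,.0f}")
```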

  3. Unintentional Epinephrine Auto-injector Injuries: A National Poison Center Observational Study.

    PubMed

    Anshien, Marco; Rose, S Rutherfoord; Wills, Brandon K

    2016-11-24

Epinephrine is the only first-line therapeutic agent used to treat life-threatening anaphylaxis. Epinephrine auto-injectors are commonly carried by patients at risk for anaphylaxis, and reported cases of unintentional auto-injector injury have increased over the last decade. Modifications of existing designs and release of a new style of auto-injector are intended to reduce epinephrine auto-injector misuse. The aim of the study was to characterize reported cases of unintentional epinephrine auto-injector exposures from 2013 to 2014 and compare demographics, auto-injector model, and anatomical site of such exposures. The American Association of Poison Control Centers' National Poison Data System was searched from January 1, 2013, to December 31, 2014, for cases of unintentional epinephrine auto-injector exposures. Anatomical site data were obtained from all cases reported to the Virginia Poison Center and a participating regional poison center for Auvi-Q cases. A total of 6806 cases of unintentional epinephrine auto-injector exposures were reported to US Poison Centers in 2013 and 2014. Of these cases, 3933 occurred with EpiPen, 2829 with EpiPen Jr, 44 with Auvi-Q, and none with Adrenaclick. The most common site of unintentional injection for traditional epinephrine auto-injectors was the digit or thumb, with 58% of cases for EpiPen and 39% of cases for EpiPen Jr. With Auvi-Q, the most common site was the leg (78% of cases). The number of unintentional epinephrine auto-injector cases reported to American Poison Centers in 2013-2014 has increased compared with previous data. Most EpiPen exposures were in the digits, whereas Auvi-Q exposures were most frequently in the leg. Because of the limitations of Poison Center data, more research is needed to identify the incidence of unintentional exposures and the effectiveness of epinephrine auto-injector redesign.

  4. Tokamak-independent software analysis suite for multi-spectral line-polarization MSE diagnostics

    DOE PAGES

    Scott, S. D.; Mumgaard, R. T.

    2016-07-20

A tokamak-independent analysis suite has been developed to process data from Motional Stark Effect (mse) diagnostics. The software supports multi-spectral line-polarization mse diagnostics which simultaneously measure emission at the mse σ and π lines as well as at two "background" wavelengths that are displaced from the mse spectrum by a few nanometers. This analysis accurately estimates the amplitude of partially polarized background light at the σ and π wavelengths even in situations where the background light changes rapidly in time and space, a distinct improvement over traditional "time-interpolation" background estimation. The signal amplitude at many frequencies is computed using a numerical-beat algorithm which allows the retardance of the mse photo-elastic modulators (pem's) to be monitored during routine operation. It also allows the use of summed intensities at multiple frequencies in the calculation of polarization direction, which increases the effective signal strength and reduces sensitivity to pem retardance drift. The software allows the polarization angles to be corrected for calibration drift using a system that illuminates the mse diagnostic with polarized light at four known polarization angles within ten seconds of a plasma discharge. The software suite is modular, parallelized, and portable to other facilities.
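Extracting a signal amplitude at a known modulation frequency, the generic operation underlying demodulation of PEM harmonics, can be sketched by correlating the digitized signal against quadrature references. This is not the mse suite's numerical-beat algorithm, just the standard idea it builds on; frequencies and amplitudes below are illustrative:

```python
import numpy as np

# Digitized detector signal containing two carrier components.
fs = 100_000.0                      # sample rate (Hz)
t = np.arange(0, 0.1, 1.0 / fs)     # 0.1 s record
f1, f2 = 20_000.0, 23_000.0         # two PEM-like carrier frequencies (Hz)
signal = 0.8 * np.sin(2 * np.pi * f1 * t) + 0.3 * np.sin(2 * np.pi * f2 * t)

def amplitude_at(sig, freq):
    """Amplitude of the component at `freq` via quadrature correlation."""
    i = 2.0 * np.mean(sig * np.sin(2 * np.pi * freq * t))
    q = 2.0 * np.mean(sig * np.cos(2 * np.pi * freq * t))
    return np.hypot(i, q)

print(amplitude_at(signal, f1), amplitude_at(signal, f2))
```

With an integer number of cycles in the record, each carrier's amplitude is recovered independently, which is what makes combining intensities at multiple frequencies possible.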

  5. Tokamak-independent software analysis suite for multi-spectral line-polarization MSE diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott, S. D.; Mumgaard, R. T.

A tokamak-independent analysis suite has been developed to process data from Motional Stark Effect (mse) diagnostics. The software supports multi-spectral line-polarization mse diagnostics which simultaneously measure emission at the mse σ and π lines as well as at two "background" wavelengths that are displaced from the mse spectrum by a few nanometers. This analysis accurately estimates the amplitude of partially polarized background light at the σ and π wavelengths even in situations where the background light changes rapidly in time and space, a distinct improvement over traditional "time-interpolation" background estimation. The signal amplitude at many frequencies is computed using a numerical-beat algorithm which allows the retardance of the mse photo-elastic modulators (pem's) to be monitored during routine operation. It also allows the use of summed intensities at multiple frequencies in the calculation of polarization direction, which increases the effective signal strength and reduces sensitivity to pem retardance drift. The software allows the polarization angles to be corrected for calibration drift using a system that illuminates the mse diagnostic with polarized light at four known polarization angles within ten seconds of a plasma discharge. The software suite is modular, parallelized, and portable to other facilities.

  6. Research on remote sensing image pixel attribute data acquisition method in AutoCAD

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui

    2013-07-01

Remote sensing images are widely used in AutoCAD, but AutoCAD lacks remote sensing image processing functions. In this paper, ObjectARX was used as the secondary development tool, combined with the Image Engine SDK, to realize remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for remote sensing image processing algorithms in the AutoCAD environment.

  7. Long-term stability of radiotherapy dosimeters calibrated at the Polish Secondary Standard Dosimetry Laboratory.

    PubMed

    Ulkowski, Piotr; Bulski, Wojciech; Chełmiński, Krzysztof

    2015-10-01

Unidos 10001, Unidos E (10008/10009) and Dose 1 electrometers from 14 radiotherapy centres were calibrated 3-4 times over a long period of time, together with Farmer-type (PTW 30001, 30013, Nuclear Enterprises 2571 and Scanditronix-Wellhofer FC65G) cylindrical ionization chambers and plane-parallel chambers (PTW Markus 23343 and Scanditronix-Wellhofer PPC05). On the basis of this long period of repeated establishment of calibration coefficients for the same electrometers and ionization chambers, the accuracy of the electrometers and the long-term stability of the ionization chambers were examined. All measurements were carried out at the same laboratory, by the same staff, according to the same IAEA recommendations. Good accuracy and long-term stability of the dosimeters used in Polish radiotherapy centres were observed: within 0.1% for electrometers and 0.2% for chambers with electrometers. Furthermore, these values were not observed to vary over time. The observations support the view that requiring calibration of dosimeters more often than every 2 years is not justified. Copyright © 2015 Elsevier Ltd. All rights reserved.
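The stability figure quoted above is essentially the relative spread of repeated calibration coefficients for the same instrument. A sketch of that check, with illustrative coefficient values (not the laboratory's records):

```python
# Relative spread of repeated calibration coefficients (e.g. N_D,w in Gy/C)
# for one chamber-electrometer set, calibrated four times over several years.
# The coefficient values below are illustrative only.
coeffs = [5.412e7, 5.407e7, 5.415e7, 5.409e7]

mean_c = sum(coeffs) / len(coeffs)
max_dev_percent = max(abs(c - mean_c) / mean_c for c in coeffs) * 100.0
print(f"max deviation from mean: {max_dev_percent:.3f}%")
```

A maximum deviation below 0.1-0.2%, as in this example, is consistent with the stability the authors report.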

  8. Monogenic Auto-inflammatory Syndromes: A Review of the Literature.

    PubMed

    Azizi, Gholamreza; Khadem Azarian, Shahin; Nazeri, Sepideh; Mosayebian, Ali; Ghiasy, Saleh; Sadri, Ghazal; Mohebi, Ali; Khan Nazer, Nikoo Hossein; Afraei, Sanaz; Mirshafiey, Abbas

    2016-12-01

Auto-inflammatory syndromes are a new group of distinct heritable disorders characterized by episodes of seemingly unprovoked inflammation (most commonly in skin, joints, gut, and eye), the absence of a high titer of auto-antibodies or auto-reactive T cells, and an inborn error of innate immunity. A narrative literature review was carried out of studies related to auto-inflammatory syndromes to discuss their pathogenesis and clinical manifestations. This review showed that the main monogenic auto-inflammatory syndromes are familial Mediterranean fever (FMF), mevalonate kinase deficiency (MKD), Blau syndrome, TNF receptor-associated periodic syndrome (TRAPS), cryopyrin-associated periodic syndrome (CAPS), and pyogenic arthritis with pyoderma gangrenosum and acne (PAPA). The data suggest that correct diagnosis and treatment of monogenic auto-inflammatory diseases rely on physicians' awareness. Understanding of the underlying pathogenic mechanisms of auto-inflammatory syndromes, and especially the fact that these disorders are mediated by IL-1 secretion by monocytes and macrophages, has therefore facilitated significant progress in patient management.

  9. Error modeling and sensitivity analysis of a parallel robot with SCARA(selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used in high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain to a single link. Because such an error model fails to reflect the error characteristics of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on the model is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including those of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and sensitivity indices defined in a statistical sense, a sensitivity analysis is carried out. Atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have the greatest impact on the accuracy of the moving platform are identified, and sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also identified. By taking into account error factors that are generally neglected in existing modeling methods, the proposed method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.
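The core of such a sensitivity analysis is mapping small geometric errors to end-effector pose errors. A toy finite-difference version on a planar 2R serial arm, a stand-in for illustration only, not the paper's parallel mechanism; all dimensions are made up:

```python
import numpy as np

def forward(lengths, q):
    """Forward kinematics of a planar 2R arm: end-point (x, y)."""
    l1, l2 = lengths
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

nominal = np.array([0.35, 0.30])   # link lengths (m), illustrative
q = np.array([0.4, 0.9])           # joint angles (rad)
eps = 1e-6

# Finite-difference sensitivity of the end point to each link-length error
for i in range(2):
    dl = np.zeros(2)
    dl[i] = eps
    sens = (forward(nominal + dl, q) - forward(nominal, q)) / eps
    print(f"d(pose)/d(l{i + 1}) ~ {sens}")
```

Sweeping the joint angles over the workspace and plotting these sensitivities is what produces the "atlases" the abstract describes.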

  10. Advancing computational methods for calibration of the Soil and Water Assessment Tool (SWAT): Application for modeling climate change impacts on water resources in the Upper Neuse Watershed of North Carolina

    NASA Astrophysics Data System (ADS)

    Ercan, Mehmet Bulent

Watershed-scale hydrologic models are used for a variety of applications, from flood prediction, to drought analysis, to water quality assessments. A particular challenge in applying these models is calibration of the model parameters, many of which are difficult to measure at the watershed scale. A primary goal of this dissertation is to contribute new computational methods and tools for calibration of watershed-scale hydrologic models, and the Soil and Water Assessment Tool (SWAT) model in particular. SWAT is a physically based, watershed-scale hydrologic model developed to predict the impact of land management practices on water quality and quantity. The dissertation follows a manuscript format, meaning it comprises three separate but interrelated research studies. The first two research studies focus on SWAT model calibration, and the third presents an application of the new calibration methods and tools to study climate change impacts on water resources in the Upper Neuse Watershed of North Carolina using SWAT. The objective of the first two studies is to overcome computational challenges associated with calibration of SWAT models. The first study evaluates a parallel SWAT calibration tool built using the Windows Azure cloud environment and a parallel version of the Dynamically Dimensioned Search (DDS) calibration method modified to run in Azure. The calibration tool was tested for six model scenarios constructed using three watersheds of increasing size (the Eno, Upper Neuse, and Neuse) for both a 2 year and a 10 year simulation duration. Leveraging the cloud as an on-demand computing resource allowed for significantly reduced calibration time: calibration of the Neuse watershed went from taking 207 hours on a personal computer to only 3.4 hours using 256 cores in the Azure cloud.
The second study aims at increasing SWAT model calibration efficiency by creating an open source, multi-objective calibration tool using the Non-Dominated Sorting Genetic Algorithm II (NSGA-II). This tool was demonstrated through an application for the Upper Neuse Watershed in North Carolina, USA. The objective functions used for the calibration were Nash-Sutcliffe (E) and Percent Bias (PB), and the objective sites were the Flat, Little, and Eno watershed outlets. The results show that the use of multi-objective calibration algorithms for SWAT calibration improved model performance especially in terms of minimizing PB compared to the single objective model calibration. The third study builds upon the first two studies by leveraging the new calibration methods and tools to study future climate impacts on the Upper Neuse watershed. Statistically downscaled outputs from eight Global Circulation Models (GCMs) were used for both low and high emission scenarios to drive a well calibrated SWAT model of the Upper Neuse watershed. The objective of the study was to understand the potential hydrologic response of the watershed, which serves as a public water supply for the growing Research Triangle Park region of North Carolina, under projected climate change scenarios. The future climate change scenarios, in general, indicate an increase in precipitation and temperature for the watershed in coming decades. The SWAT simulations using the future climate scenarios, in general, suggest an increase in soil water and water yield, and a decrease in evapotranspiration within the Upper Neuse watershed. 
In summary, this dissertation advances the field of watershed-scale hydrologic modeling by (i) providing some of the first work to apply cloud computing for the computationally demanding task of model calibration; (ii) providing a new, open source library that can be used by SWAT modelers to perform multi-objective calibration of their models; and (iii) advancing understanding of climate change impacts on water resources for an important watershed in the Research Triangle Park region of North Carolina. The third study leveraged the methodological advances presented in the first two studies. Therefore, the dissertation contains three independent but interrelated studies that collectively advance the field of watershed-scale hydrologic modeling and analysis.
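The two calibration objectives named in the second study, Nash-Sutcliffe efficiency (E) and Percent Bias (PB), have standard closed forms. A minimal sketch with illustrative daily-flow values:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, <=0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def pbias(obs, sim):
    """Percent Bias: positive means the model underestimates on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [10.0, 14.0, 30.0, 22.0, 12.0]   # observed daily flow (m^3/s), illustrative
sim = [11.0, 13.0, 27.0, 24.0, 11.0]   # simulated flow
print(nse(obs, sim), pbias(obs, sim))
```

A multi-objective algorithm such as NSGA-II searches for parameter sets that trade off these two objectives (at each gauged outlet) rather than optimizing either alone.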

  11. Discordance between net analyte signal theory and practical multivariate calibration.

    PubMed

    Brown, Christopher D

    2004-08-01

    Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
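The two calibration families being contrasted can be sketched on synthetic, noise-free mixture spectra: classical least squares (CLS) models spectra as concentration-weighted pure-component spectra, while inverse least squares (ILS) regresses concentration directly on the measured spectrum. In this idealized noise-free case both recover the true concentration exactly; the divergent behavior the abstract discusses appears only once measurement error is present. All data below are randomly generated, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.random((2, 50))    # pure-component spectra (2 analytes, 50 channels)
C = rng.random((20, 2))    # training concentrations
X = C @ S                  # noise-free training spectra

# CLS: estimate pure spectra from (C, X), then solve S_hat^T c = x
S_hat = np.linalg.lstsq(C, X, rcond=None)[0]
x_new = (C[:1] @ S)        # "unknown" spectrum (same as training sample 0)
c_cls = np.linalg.lstsq(S_hat.T, x_new.T, rcond=None)[0].ravel()

# ILS: regress analyte-1 concentration directly on the spectra
b = np.linalg.lstsq(X, C[:, 0], rcond=None)[0]
c_ils = x_new @ b

print(c_cls[0], C[0, 0], c_ils[0])
```

With noise added to X, the two estimators no longer agree, which is the regime in which their differing prediction objectives, and the relation to net analyte signal, matter.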

  12. Adjustment method for embedded metrology engine in an EM773 series microcontroller.

    PubMed

    Blazinšek, Iztok; Kotnik, Bojan; Chowdhury, Amor; Kačič, Zdravko

    2015-09-01

This paper presents the problems of implementation and adjustment (calibration) of a metrology engine embedded in NXP's EM773 series microcontroller. The metrology engine is used in a smart metering application to collect data about energy utilization and is controlled with metrology engine adjustment (calibration) parameters. The aim of this research is to develop a method that enables operators to find and verify the optimum parameters ensuring the best possible accuracy. Properly adjusted (calibrated) metrology engines can then be used as a basis for a variety of products used in smart and intelligent environments. This paper focuses on the problems encountered in the development, partial automation, implementation and verification of this method. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application in our augmented reality visualization system for laparoscopic surgery.
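The TRE figures quoted above are a mean ± standard deviation of per-target Euclidean distances between registered and reference positions. A sketch of that computation with illustrative coordinates (not the study's measurements):

```python
import numpy as np

# Known reference target positions (mm) and the same targets as located
# through the calibrated/registered system; offsets are illustrative.
reference = np.array([[10.0, 5.0, 2.0],
                      [12.0, 7.5, 1.0],
                      [ 8.0, 6.0, 3.0],
                      [11.0, 4.0, 2.5]])
measured = reference + np.array([[ 0.9,  0.4, -0.3],
                                 [-0.6,  0.8,  0.5],
                                 [ 0.7, -0.5,  0.6],
                                 [-0.4,  0.6, -0.9]])

# Per-target Euclidean error, summarized as mean +/- std
tre = np.linalg.norm(measured - reference, axis=1)
print(f"TRE = {tre.mean():.2f} +/- {tre.std():.2f} mm")
```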

  14. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application in our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  15. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images, each capturing the target pattern in its entirety, to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and using the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for our augmented reality visualization system for laparoscopic surgery.
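
    The target registration error used above as the accuracy measure is, in essence, the mean distance between corresponding points after registration. A minimal NumPy sketch of that computation, with entirely hypothetical coordinates (the paper's targets and tracking setup are not reproduced here):

```python
import numpy as np

def target_registration_error(predicted, measured):
    """Mean and std of Euclidean distances (mm) between predicted and measured targets."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    d = np.linalg.norm(predicted - measured, axis=1)
    return d.mean(), d.std()

# Hypothetical example: 4 target points, each off by 1 mm along one axis
pred = [[0, 0, 1], [10, 0, 0], [0, 10, 0], [10, 10, 0]]
meas = [[0, 0, 0], [11, 0, 0], [0, 9, 0], [10, 10, 1]]
mean_tre, std_tre = target_registration_error(pred, meas)
print(mean_tre)  # each point is 1 mm off, so mean TRE = 1.0
```

A reported value like "1.18 ± 0.35 mm" corresponds to the mean and standard deviation of such per-point distances over the validation targets.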

  16. Preflight and in-flight calibration plan for ASTER

    USGS Publications Warehouse

    Ono, A.; Sakuma, F.; Arai, K.; Yamaguchi, Y.; Fujisada, H.; Slater, P.N.; Thome, K.J.; Palluconi, Frank Don; Kieffer, H.H.

    1996-01-01

    Preflight and in-flight radiometric calibration plans are described for the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), a multispectral optical imager of high spatial resolution. It is designed for the remote sensing of land surfaces and clouds from orbit and is expected to be launched in 1998 on NASA's EOS AM-1 spacecraft. ASTER acquires images in three separate spectral regions, the visible and near-infrared (VNIR), the shortwave infrared (SWIR), and the thermal infrared (TIR), with three imaging radiometer subsystems. The absolute radiometric accuracy is required to be better than 4% for VNIR and SWIR radiance measurements and 1 to 3 K, depending on the temperature region within 200 to 370 K, for TIR temperature measurements. A reference beam is introduced at the entrance pupil of each imaging radiometer to provide the in-flight calibration. Thus, the ASTER instrument includes internal onboard calibration units that comprise incandescent lamps for the VNIR and SWIR and a blackbody radiator for the TIR as reference sources. The calibration reliability of the VNIR and SWIR is enhanced by a dual system of onboard calibration units as well as by high-stability halogen lamps. A ground calibration system of spectral radiances traceable to fixed-point blackbodies is used for the preflight VNIR and SWIR calibration. Because of the possibility of nonuniform contamination effects on the partial-aperture onboard calibration, it is desirable to check their results against other methods. Reflectance- and radiance-based vicarious methods have been developed for this purpose. These, and methods involving in-flight cross-calibration with other sensors, are also described.

  17. Feedback circuit design of an auto-gating power supply for low-light-level image intensifier

    NASA Astrophysics Data System (ADS)

    Yang, Ye; Yan, Bo; Zhi, Qiang; Ni, Xiao-bing; Li, Jun-guo; Wang, Yu; Yao, Ze

    2015-11-01

    This paper introduces the basic principle of an auto-gating power supply that uses a hybrid automatic brightness control scheme. Based on an analysis of the image intensifier's particular requirements for an auto-gating power supply, a feedback circuit of the auto-gating power supply is analyzed, and the cause of screen flicker after the auto-gating power supply is assembled with the image intensifier is identified. A feedback circuit is designed that shortens the response time of the auto-gating power supply and mitigates the slight screen flicker that the human eye can perceive under high illumination.

  18. Plasma Generator Using Spiral Conductors

    NASA Technical Reports Server (NTRS)

    Szatkowski, George N. (Inventor); Dudley, Kenneth L. (Inventor); Ticatch, Larry A. (Inventor); Smith, Laura J. (Inventor); Koppen, Sandra V. (Inventor); Nguyen, Truong X. (Inventor); Ely, Jay J. (Inventor)

    2016-01-01

    A plasma generator includes a pair of identical spiraled electrical conductors separated by dielectric material. Both spiraled conductors have inductance and capacitance wherein, in the presence of a time-varying electromagnetic field, the spiraled conductors resonate to generate a harmonic electromagnetic field response. The spiraled conductors lie in parallel planes and partially overlap one another in a direction perpendicular to the parallel planes. The geometric centers of the spiraled conductors define endpoints of a line that is non-perpendicular with respect to the parallel planes. A voltage source coupled across the spiraled conductors applies a voltage sufficient to generate a plasma in at least a portion of the dielectric material.

  19. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    PubMed

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
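
    Horn's parallel analysis, as implemented by the SPSS/SAS programs described above, retains components whose observed eigenvalues exceed those of comparable random data. A minimal NumPy sketch of the idea (the synthetic data and the 95th-percentile criterion here are illustrative assumptions, not O'Connor's code):

```python
import numpy as np

def parallel_analysis(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis: keep components whose observed correlation-matrix
    eigenvalue exceeds the chosen percentile of eigenvalues from random normal
    data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    thresh = np.percentile(rand, percentile, axis=0)
    return int(np.sum(obs > thresh))

# Hypothetical data: 6 variables driven by 2 latent factors plus noise
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal((2, 500))
noise = 0.3 * rng.standard_normal((6, 500))
X = np.vstack([f1, f1, f1, f2, f2, f2]).T + noise.T
print(parallel_analysis(X))  # expect 2 retained components
```

Unlike the eigenvalues-greater-than-one rule, the cutoff here adapts to sample size and variable count, which is why statisticians recommend it.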

  20. 47 CFR 80.307 - Compulsory use of radiotelegraph auto alarm.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Compulsory use of radiotelegraph auto alarm. 80.307 Section 80.307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL... Safety Watches § 80.307 Compulsory use of radiotelegraph auto alarm. The radiotelegraph auto alarm...

  1. 47 CFR 80.307 - Compulsory use of radiotelegraph auto alarm.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Compulsory use of radiotelegraph auto alarm. 80.307 Section 80.307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL... Safety Watches § 80.307 Compulsory use of radiotelegraph auto alarm. The radiotelegraph auto alarm...

  2. 47 CFR 80.307 - Compulsory use of radiotelegraph auto alarm.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Compulsory use of radiotelegraph auto alarm. 80.307 Section 80.307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL... Safety Watches § 80.307 Compulsory use of radiotelegraph auto alarm. The radiotelegraph auto alarm...

  3. 47 CFR 80.307 - Compulsory use of radiotelegraph auto alarm.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Compulsory use of radiotelegraph auto alarm. 80.307 Section 80.307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL... Safety Watches § 80.307 Compulsory use of radiotelegraph auto alarm. The radiotelegraph auto alarm...

  4. 47 CFR 80.307 - Compulsory use of radiotelegraph auto alarm.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Compulsory use of radiotelegraph auto alarm. 80.307 Section 80.307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL... Safety Watches § 80.307 Compulsory use of radiotelegraph auto alarm. The radiotelegraph auto alarm...

  5. Doppler Imaging with FUSE: The Partially Eclipsing Binary VW Cep

    NASA Technical Reports Server (NTRS)

    Sonneborn, George (Technical Monitor); Brickhouse, Nancy

    2003-01-01

    This report covers the FUSE Guest Observer program. This project involves the study of emission line profiles for the partially eclipsing, rapidly rotating binary system VW Cep. Active regions on the surface of the star(s) produce observable line shifts as the stars move with respect to the observer. By studying the time-dependence of the line profile changes and centroid shifts, one can determine the location of the activity. FUSE spectra were obtained by the P.I. on 27 Sept 2002, and data reduction is in progress. Since we are interested in line profile analysis, we are now investigating the wavelength scale calibration in some detail. We have also obtained and are analyzing Chandra data in order to compare the X-ray velocities with the FUV velocities. A complementary project comparing X-ray and far ultraviolet (FUV) emission for the similar system 44i Boo is also underway. Postdoctoral fellow Ronnie Hoogerwerf has joined the investigation team and will perform the data analysis once the calibration is optimized.

  6. Determination of elemental composition of shale rocks by laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Sanghapi, Hervé K.; Jain, Jinesh; Bol'shakov, Alexander; Lopano, Christina; McIntyre, Dustin; Russo, Richard

    2016-08-01

    In this study, laser-induced breakdown spectroscopy (LIBS) is used for elemental characterization of outcrop samples from the Marcellus Shale. Powdered samples were pressed to form pellets and used for LIBS analysis. Partial least squares regression (PLS-R) and univariate calibration curves were used for quantification of analytes. The matrix effect is substantially reduced using the partial least squares calibration method. Predicted results with LIBS are compared to ICP-OES results for Si, Al, Ti, Mg, and Ca; for C, the results are compared to those obtained by a carbon analyzer. Relative errors of the LIBS measurements are in the range of 1.7 to 12.6%. The limits of detection (LODs) obtained for Si, Al, Ti, Mg and Ca are 60.9, 33.0, 15.6, 4.2 and 0.03 ppm, respectively. An LOD of 0.4 wt.% was obtained for carbon. This study shows that the LIBS method can provide rapid analysis of shale samples and can potentially benefit depleted-gas-shale carbon storage research.
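
    The univariate calibration curves and limits of detection reported above follow the usual pattern: fit line intensity against known concentration, then apply a 3-sigma criterion to the fit. A minimal sketch with invented numbers (not the paper's data):

```python
import numpy as np

# Hypothetical univariate LIBS calibration: line intensity vs. known concentration
conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])          # ppm (assumed standards)
intensity = np.array([2.1, 52.0, 101.8, 202.2, 401.9])  # arbitrary units

slope, intercept = np.polyfit(conc, intensity, 1)
residuals = intensity - (slope * conc + intercept)
s_res = residuals.std(ddof=2)  # residual standard deviation of the fit

# A common 3-sigma definition of the limit of detection
lod = 3 * s_res / slope

# Predict an unknown from its measured intensity
unknown_intensity = 150.0
predicted_conc = (unknown_intensity - intercept) / slope
print(predicted_conc, lod)
```

Multivariate PLS calibration replaces the single intensity with the full spectrum, which is how the matrix effect gets suppressed.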

  7. Determination of main fruits in adulterated nectars by ATR-FTIR spectroscopy combined with multivariate calibration and variable selection methods.

    PubMed

    Miaw, Carolina Sheng Whei; Assis, Camila; Silva, Alessandro Rangel Carolino Sales; Cunha, Maria Luísa; Sena, Marcelo Martins; de Souza, Scheilla Vitorino Carvalho

    2018-07-15

    Grape, orange, peach and passion fruit nectars were formulated and adulterated by dilution with syrup, apple and cashew juices at 10 levels for each adulterant. Attenuated total reflectance Fourier transform mid infrared (ATR-FTIR) spectra were obtained. Partial least squares (PLS) multivariate calibration models allied to different variable selection methods, such as interval partial least squares (iPLS), ordered predictors selection (OPS) and genetic algorithm (GA), were used to quantify the main fruits. PLS improved by iPLS-OPS variable selection showed the highest predictive capacity to quantify the main fruit contents. The selected variables in the final models varied from 72 to 100; the root mean square errors of prediction were estimated from 0.5 to 2.6%; the correlation coefficients of prediction ranged from 0.948 to 0.990; and, the mean relative errors of prediction varied from 3.0 to 6.7%. All of the developed models were validated. Copyright © 2018 Elsevier Ltd. All rights reserved.
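
    The PLS calibration underlying models like these can be sketched with a minimal NIPALS implementation. The "spectra" below are synthetic stand-ins, and no variable selection (iPLS, OPS or GA) is attempted, so this is only the core regression step, not the authors' pipeline:

```python
import numpy as np

def pls1_nipals(X, y, n_comp):
    """Minimal PLS1 via NIPALS: regression coefficients mapping centered X to centered y."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, P, Q = [], [], []
    Xr, yr = X.copy(), y.copy()
    for _ in range(n_comp):
        w = Xr.T @ yr                 # weight vector from X-y covariance
        w /= np.linalg.norm(w)
        t = Xr @ w                    # scores
        p = Xr.T @ t / (t @ t)        # X loadings
        q = (yr @ t) / (t @ t)        # y loading
        Xr = Xr - np.outer(t, p)      # deflate
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q

# Hypothetical noiseless "spectra": 2 latent components across 20 wavelengths
rng = np.random.default_rng(0)
T = rng.standard_normal((60, 2))
X = T @ rng.standard_normal((2, 20))
y = T @ np.array([1.5, -0.7])
b = pls1_nipals(X, y, 2)
pred = (X - X.mean(axis=0)) @ b + y.mean()
print(np.allclose(pred, y, atol=1e-6))  # a rank-2 model is recovered exactly
```

With real spectra one would choose the number of components by cross-validation and report RMSEP, as the abstract does.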

  8. Enhancing Price Response Programs through Auto-DR: California's 2007 Implementation Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiliccote, Sila; Wikler, Greg; Chiu, Albert

    2007-12-18

    This paper describes automated demand response (Auto-DR) activities, an innovative effort in California to ensure that DR programs produce effective and sustainable impacts. Through the application of automation and communication technologies coupled with well-designed incentives and DR programs such as Critical Peak Pricing (CPP) and Demand Bidding (DBP), Auto-DR is opening up the opportunity for many different types of buildings to effectively participate in DR programs. We present the results of Auto-DR implementation efforts by the three California investor-owned utilities for the summer of 2007. The presentation emphasizes Pacific Gas and Electric Company's (PG&E) Auto-DR efforts, which represent the largest in the state. PG&E's goal was to recruit, install, test and operate 15 megawatts of Auto-DR system capability. We describe the unique delivery approaches, including optimizing the utility incentive structures designed to foster an Auto-DR service provider community. We also show how PG&E's Critical Peak Pricing (CPP) and Demand Bidding (DBP) options were called and executed under the automation platform. Finally, we show the results of the Auto-DR systems installed and operational during 2007, which surpassed PG&E's Auto-DR goals. Auto-DR is being implemented by a multi-disciplinary team including the California Investor Owned Utilities (IOUs), energy consultants, energy management control system vendors, the Lawrence Berkeley National Laboratory (LBNL), and the California Energy Commission (CEC).

  9. Recent progresses of neural network unsupervised learning: I. Independent component analyses generalizing PCA

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.

    1999-03-01

    The early vision principle of redundancy reduction of 10^8 sensor excitations is understandable from a computer vision viewpoint toward sparse edge maps. It was only recently derived using a truly unsupervised learning paradigm of artificial neural networks (ANN). In fact, the biological vision, Hubel-Wiesel edge maps, is reproduced by seeking the underlying independent component analyses (ICA) among 10^2 image samples, maximizing the ANN output entropy via ∂H(V)/∂[W] = ∂[W]/∂t. When a pair of newborn eyes or ears meets the bustling and hustling world without supervision, they seek ICA by comparing two sensory measurements (x1(t), x2(t))^T ≡ X(t). Assuming a linear and instantaneous mixture model of the external world, X(t) = [A]S(t), where both the mixing matrix [A] ≡ [a1, a2] of ICA vectors and the source percentages (s1(t), s2(t))^T ≡ S(t) are unknown, we seek the independent sources with [W][A] ≈ [I], where the approximate sign indicates that higher-order statistics (HOS) may not be trivial. Without a teacher, the ANN weight matrix [W] ≡ [w1, w2] adjusts the outputs V(t) = tanh([W]X(t)) ≈ [W]X(t) until no desired outputs remain except the (Gaussian) 'garbage' (neither YES '1' nor NO '-1', but in the linear maybe range, 'origin 0') defined by the Gaussian covariance G = [I] = [W][A] at the fixed point ∂E/∂wi = 0, which results in an exact Toeplitz matrix inversion under a stationary covariance assumption. We generalize AR by a nonlinear output vi(t+1) = tanh(wi^T X(t)) within E = <[x(t+1) - vi(t+1)]^2>, and the gradient descent ∂E/∂wi = -∂wi/∂t. Further generalization is possible because a specific image or speech signal has a specific histogram whose gray-scale statistics depart from those of a Gaussian random variable and can be measured by the fourth-order cumulant, the kurtosis K(vi) = <vi^4> - 3<vi^2>^2 (K ≥ 0, super-Gaussian, for speech; K ≤ 0, sub-Gaussian, for images). Thus, the stationary value at ∂K/∂wi = ±4 ∂wi/∂t can de-mix unknown mixtures of noisy images/speeches without a teacher. This stationary statistic may be implemented in parallel using the factorized pdf code ρ(v1, v2) = ρ(v1)ρ(v2), which occurs at a maximum-entropy algorithm improved by the natural gradient of Amari. Real-world applications are given in Part II (Wavelet Appl. VI, SPIE Proc. Vol. 3723), such as remote-sensing subpixel composition, speech segmentation by means of ICA de-hyphenation, and cable TV bandwidth enhancement by simultaneously mixing sport and movie entertainment events.
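
    The kurtosis-based de-mixing idea in this abstract (whiten the mixtures, then find the direction that extremizes |kurtosis|) can be illustrated for two sources. Everything below, including the mixing matrix, sources and grid search, is an illustrative assumption rather than the paper's algorithm:

```python
import numpy as np

def kurtosis(v):
    """Fourth-order cumulant K(v) = <v^4> - 3<v^2>^2 (zero for a Gaussian)."""
    v = v - v.mean()
    return np.mean(v**4) - 3 * np.mean(v**2)**2

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
s1 = np.sign(np.sin(2 * np.pi * 13 * t))  # sub-Gaussian square wave (K < 0)
s2 = rng.laplace(size=t.size)             # super-Gaussian source (K > 0)
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # unknown mixing matrix
X = A @ np.vstack([s1, s2])

# Whiten the mixtures: decorrelate and normalize to unit variance
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = E @ np.diag(d**-0.5) @ E.T @ Xc

# After whitening, de-mixing reduces to a rotation; grid-search the angle
angles = np.linspace(0, np.pi, 360, endpoint=False)
best = max(angles, key=lambda a: abs(kurtosis(np.cos(a) * Z[0] + np.sin(a) * Z[1])))
v = np.cos(best) * Z[0] + np.sin(best) * Z[1]
corr = max(abs(np.corrcoef(v, s1)[0, 1]), abs(np.corrcoef(v, s2)[0, 1]))
print(corr > 0.95)  # the extracted component matches one true source
```

Gradient-based ICA (e.g. with Amari's natural gradient) replaces this brute-force angle search, but the stationary point it seeks is the same kurtosis extremum.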

  10. Female Infertility and Serum Auto-antibodies: a Systematic Review.

    PubMed

    Deroux, Alban; Dumestre-Perard, Chantal; Dunand-Faure, Camille; Bouillet, Laurence; Hoffmann, Pascale

    2017-08-01

    On average, 10% of infertile couples have unexplained infertility. Auto-immune disease (systemic lupus erythematosus, anti-phospholipid syndrome) accounts for part of these cases. In the last 20 years, non-specific auto-immunity, defined as positivity for auto-antibodies in a blood sample without the clinical or biological criteria of a defined disease, has been evoked in a subpopulation of infertile women. A systematic review was performed (PUBMED) using the MeSH search terms "infertility" and "auto-immunity" or "reproductive technique" or "assisted reproduction" or "in vitro fertilization" and "auto-immunity." We retained clinical and physiopathological studies applicable to clinicians jointly managing infertility and serum auto-antibodies in women. Thyroid auto-immunity, which affects thyroid function, can be a cause of infertility; even in euthyroid women, the presence of anti-thyroperoxidase and/or anti-thyroglobulin antibodies is related to infertility. The presence of anti-phospholipid (APL) and/or anti-nuclear (ANA) antibodies seems to be more frequent in the population of infertile women, and serum auto-antibodies are associated with early ovarian failure, itself responsible for fertility disorders. However, there are few publications on this topic. The assay methods, as well as the clinical criteria for unexplained infertility, deserve to be standardized to allow a precise answer to the question of the role of serum auto-antibodies in these women. The direct pathogenic role of this auto-immunity is unknown, but immunomodulatory therapies, prescribed on a case-by-case basis, could favor pregnancy even in cases of unexplained primary or secondary infertility.

  11. Effects of Granular Control on Customers’ Perspective and Behavior with Automated Demand Response Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schetrit, Oren; Kim, Joyce; Yin, Rongxin

    2014-08-01

    Automated demand response (Auto-DR) is expected to close the loop between buildings and the grid by providing machine-to-machine communications to curtail loads without the need for human intervention. Hence, it can offer more reliable and repeatable demand response results to the grid than the manual approach and make demand response participation a hassle-free experience for customers. However, many building operators misunderstand Auto-DR and are afraid of losing control over their building operation. To ease the transition from manual to Auto-DR, we designed and implemented granular control of Auto-DR systems so that building operators could modify or opt out of individual load-shed strategies whenever they wanted. This paper reports the research findings from this effort demonstrated through a field study in large commercial buildings located in New York City. We focused on (1) understanding how providing granular control affects building operators' perspective on Auto-DR, and (2) evaluating the usefulness of granular control by examining their interaction with the Auto-DR user interface during test events. Through trend log analysis, interviews, and surveys, we found that: (1) the opt-out capability during Auto-DR events can remove the feeling of being forced into load curtailments and increase operators' willingness to adopt Auto-DR; (2) being able to modify individual load-shed strategies allows flexible Auto-DR participation that meets the building's changing operational requirements; (3) a clear display of automation strategies helps building operators easily identify how Auto-DR is functioning and can build trust in Auto-DR systems.

  12. Administration of the adrenaline auto-injector at the nursery/kindergarten/school in Western Japan.

    PubMed

    Korematsu, Seigo; Fujitaka, Michiko; Ogata, Mika; Zaitsu, Masafumi; Motomura, Chikako; Kuzume, Kazuyo; Toku, Yuchiro; Ikeda, Masanori; Odajima, Hiroshi

    2017-01-01

    In view of the increasing prevalence of food allergies, there has been an associated increase in the frequency of situations requiring an emergency response for anaphylaxis at home, childcare facilities and educational institutions. To clarify the situation regarding adrenaline auto-injector administration in the nursery/kindergarten/school, we carried out a questionnaire survey of pediatric physicians in Western Japan. In 2015, self-reported questionnaires were mailed to 421 physicians who are members of the West Japan Research Society Pediatric Clinical Allergy and Shikoku Research Society Pediatric Clinical Allergy. The response rate was 44% (185 physicians), of whom 160 had a prescription registration for the adrenaline auto-injector. In the past year, 1,330 patients were prescribed the adrenaline auto-injector; 83 patients (6% of those prescribed) actually administered it, of whom 14 (17%) self-administered it. "Guardians" at the nursery/kindergarten and elementary school were found to have administered the adrenaline auto-injector most often. Among 117 adrenaline auto-injector prescription-registered physicians, 79% had experienced non-administration of the adrenaline auto-injector at the nursery/kindergarten/school when anaphylaxis occurred. The most frequent reason cited for not administering the adrenaline auto-injector was "hesitation about the timing of administration." If the adrenaline auto-injector is administered only after the guardian arrives at the nursery/kindergarten/school, treatment of anaphylaxis, in which symptoms develop within minutes, may be delayed. Education and cooperation among physicians and nursery/kindergarten/school staff will reduce the number of children suffering unfortunate outcomes due to anaphylaxis.

  13. Unsteady stokes flow of dusty fluid between two parallel plates through porous medium in the presence of magnetic field

    NASA Astrophysics Data System (ADS)

    Sasikala, R.; Govindarajan, A.; Gayathri, R.

    2018-04-01

    This paper focuses on the unsteady flow of a dusty fluid between two parallel plates through a porous medium in the presence of a magnetic field, with constant suction at the upper plate and constant injection at the lower plate. The partial differential equations governing the flow are solved by a similarity transformation. The velocities of the fluid and the dust particles decrease as the Hartmann number increases.

  14. Hybrid Parallelization of Adaptive MHD-Kinetic Module in Multi-Scale Fluid-Kinetic Simulation Suite

    DOE PAGES

    Borovikov, Sergey; Heerikhuisen, Jacob; Pogorelov, Nikolai

    2013-04-01

    The Multi-Scale Fluid-Kinetic Simulation Suite has a computational tool set for solving partially ionized flows. In this paper we focus on recent developments of the kinetic module which solves the Boltzmann equation using the Monte-Carlo method. The module has been recently redesigned to utilize intra-node hybrid parallelization. We describe in detail the redesign process, implementation issues, and modifications made to the code. Finally, we conduct a performance analysis.
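
    The intra-node parallelization pattern described, splitting Monte-Carlo particle work across workers and reducing the partial results, can be sketched in miniature. This is not the MS-FLUKSS code; the exponential free-path kernel, seeds and worker count below are illustrative assumptions:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def mc_chunk(seed, n):
    """One worker: sample n exponential free paths (unit mean) and return partial sums."""
    rng = np.random.default_rng(seed)
    paths = rng.exponential(scale=1.0, size=n)
    return paths.sum(), n

def parallel_mean_free_path(total=400_000, workers=4):
    """Split the particle ensemble across a worker pool, then reduce the partial sums."""
    per = total // workers
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = list(ex.map(lambda s: mc_chunk(s, per), range(workers)))
    return sum(s for s, _ in results) / sum(n for _, n in results)

est = parallel_mean_free_path()
print(abs(est - 1.0) < 0.01)  # the sample mean converges to the unit mean free path
```

Per-worker seeds make the run reproducible regardless of scheduling, the same concern a hybrid MPI-plus-threads Monte-Carlo module must handle at scale.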

  15. V-TECS Guide for Auto Body Repair.

    ERIC Educational Resources Information Center

    Gregory, Margaret R.; Benson, Robert T.

    This curriculum guide consists of materials for teaching a course in auto body repair. Addressed in the individual units of the guide are the following topics: the nature and scope of auto body repair; safety; tools; auto body construction; simple metal straightening; welding; painting and refinishing; refinishing complete lacquer; refinishing…

  16. Conversations with AutoTutor Help Students Learn

    ERIC Educational Resources Information Center

    Graesser, Arthur C.

    2016-01-01

    AutoTutor helps students learn by holding a conversation in natural language. AutoTutor is adaptive to the learners' actions, verbal contributions, and in some systems their emotions. Many of AutoTutor's conversation patterns simulate human tutoring, but other patterns implement ideal pedagogies that open the door to computer tutors eclipsing…

  17. The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations

    PubMed Central

    Mitchell, William F.

    1998-01-01

    Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given. PMID:28009355
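
    The core idea of a refinement-tree partition, cutting the leaf ordering induced by the tree traversal into pieces of near-equal weight, can be sketched in a few lines. The tree below is hypothetical and uniform leaf weights are assumed, a deliberate simplification of Mitchell's algorithm:

```python
def leaves(tree, node):
    """Leaves of the refinement tree in traversal order (children = refinements)."""
    kids = tree.get(node, [])
    if not kids:
        return [node]
    out = []
    for k in kids:
        out.extend(leaves(tree, k))
    return out

def refinement_tree_partition(tree, root, n_parts):
    """Cut the ordered leaf sequence into n_parts contiguous pieces of near-equal size.
    The traversal order plays the role of a space-filling-curve-like ordering,
    so each piece stays geometrically compact on the adaptive grid."""
    order = leaves(tree, root)
    per = len(order) / n_parts
    return [order[round(i * per):round((i + 1) * per)] for i in range(n_parts)]

# Hypothetical adaptively refined grid: root refined into 4 cells, one refined again
tree = {"r": ["a", "b", "c", "d"], "b": ["b1", "b2", "b3", "b4"]}
parts = refinement_tree_partition(tree, "r", 2)
print(parts)  # 7 leaves split into two contiguous groups
```

Repartitioning after each adaptive refinement step is then just a matter of re-running the cut on the updated tree, which is what makes the approach attractive for dynamic load balancing.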

  18. The Refinement-Tree Partition for Parallel Solution of Partial Differential Equations.

    PubMed

    Mitchell, William F

    1998-01-01

    Dynamic load balancing is considered in the context of adaptive multilevel methods for partial differential equations on distributed memory multiprocessors. An approach that periodically repartitions the grid is taken. The important properties of a partitioning algorithm are presented and discussed in this context. A partitioning algorithm based on the refinement tree of the adaptive grid is presented and analyzed in terms of these properties. Theoretical and numerical results are given.

  19. Identification and quantification of carbamate pesticides in dried lime tree flowers by means of excitation-emission molecular fluorescence and parallel factor analysis when quenching effect exists.

    PubMed

    Rubio, L; Ortiz, M C; Sarabia, L A

    2014-04-11

    A non-separative, fast and inexpensive spectrofluorimetric method based on the second order calibration of excitation-emission fluorescence matrices (EEMs) was proposed for the determination of carbaryl, carbendazim and 1-naphthol in dried lime tree flowers. The trilinearity property of three-way data was used to handle the intrinsic fluorescence of lime flowers and the difference in the fluorescence intensity of each analyte. It also made it possible to identify each analyte unequivocally. Trilinearity of the data tensor guarantees the uniqueness of the solution obtained through parallel factor analysis (PARAFAC), so the factors of the decomposition match up with the analytes. In addition, an experimental procedure was proposed to identify, with three-way data, the quenching effect produced by the fluorophores of the lime flowers. This procedure also enabled the selection of the adequate dilution of the lime flowers extract to minimize the quenching effect so that the three analytes can be quantified. Finally, the analytes were determined using the standard addition method for a calibration whose standards were chosen with a D-optimal design. The three analytes were unequivocally identified by the correlation between the pure spectra and the PARAFAC excitation and emission spectral loadings. The trueness was established by the accuracy line "calculated concentration versus added concentration" in all cases. Better decision limit values (CCα), at x0 = 0 with the probability of a false positive fixed at 0.05, were obtained for the calibration performed in pure solvent: 2.97 μg L(-1) for 1-naphthol, 3.74 μg L(-1) for carbaryl and 23.25 μg L(-1) for carbendazim. The CCα values for the second calibration carried out in matrix were 1.61, 4.34 and 51.75 μg L(-1), respectively, while the values obtained considering only the pure samples as the calibration set were 2.65, 8.61 and 28.7 μg L(-1), respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
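
    The standard addition method used above for quantification extrapolates a calibration line, fitted to the sample spiked with known amounts of analyte, back to zero signal. A minimal sketch with invented numbers (not the paper's data):

```python
import numpy as np

# Hypothetical standard-addition data for one analyte: fluorescence signal
# measured on the sample extract spiked with known added concentrations
added = np.array([0.0, 5.0, 10.0, 15.0, 20.0])       # µg/L added
signal = np.array([41.0, 61.5, 80.9, 101.2, 120.8])  # arbitrary units

slope, intercept = np.polyfit(added, signal, 1)
# Extrapolating the fitted line to zero signal gives the concentration
# already present in the sample: x0 = intercept / slope
x0 = intercept / slope
print(x0)
```

Because the calibration is built inside the sample matrix itself, matrix effects such as quenching act equally on standards and unknown, which is the point of standard addition.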

  20. RF Reference Switch for Spaceflight Radiometer Calibration

    NASA Technical Reports Server (NTRS)

    Knuble, Joseph

    2013-01-01

    The goal of this technology is to provide improved calibration and measurement sensitivity to the Soil Moisture Active Passive Mission (SMAP) radiometer. While RF switches have been used in the past to calibrate microwave radiometers, the switch used on SMAP employs several techniques uniquely tailored to the instrument requirements and passive remote-sensing in general to improve radiometer performance. Measurement error and sensitivity are improved by employing techniques to reduce thermal gradients within the device, reduce insertion loss during antenna observations, increase insertion loss temporal stability, and increase rejection of radar and RFI (radio-frequency interference) signals during calibration. The two legs of the single-pole double-throw reference switch employ three PIN diodes per leg in a parallel-shunt configuration to minimize insertion loss and increase stability while exceeding rejection requirements at 1,413 MHz. The high-speed packaged diodes are selected to minimize junction capacitance and resistance while ensuring the parallel devices have very similar I-V curves. Switch rejection is improved by adding high-impedance quarter-wave tapers before and after the diodes, along with replacing the ground via of one diode per leg with an open-circuit stub. Errors due to thermal gradients in the switch are reduced by embedding the 50-ohm reference load within the switch, along with using a 0.25-in. (approximately 0.6-cm) aluminum prebacked substrate. Previous spaceflight microwave radiometers did not embed the reference load and thermocouple directly within the calibration switch. In doing so, the SMAP switch reduces error caused by thermal gradients between the load and switch. Thermal issues are further reduced by moving the custom, high-speed regulated driver circuit to a physically separate PWB (printed wiring board). 
Regarding RF performance, previous spaceflight reference switches have not employed high-impedance tapers to improve rejection. The use of open-circuit stubs instead of a via to provide an improved RF short is unique to this design. The stubs are easily tunable to provide high rejection at specific frequencies while maintaining very low insertion loss in-band.
