General squark flavour mixing: constraints, phenomenology and benchmarks
De Causmaecker, Karen; Fuks, Benjamin; Herrmann, Bjorn; ...
2015-11-19
Here, we present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
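The abstract above centres on an MCMC scan of flavour-violating squark parameters. As a rough illustration of the technique only (not the authors' actual likelihood, priors, observables, or parameter set), the sketch below runs a random-walk Metropolis-Hastings chain over two hypothetical flavour-violating deltas pulled towards zero by mock Gaussian constraints.

```python
# Minimal Metropolis-Hastings sketch of an MCMC parameter scan.  The two
# "delta" parameters and the Gaussian pseudo-constraints are illustrative
# placeholders, not the paper's observables.
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta):
    """Toy likelihood: two flavour-violating deltas constrained with
    different (made-up) precisions."""
    delta_23, delta_13 = theta
    return -0.5 * ((delta_23 / 0.05) ** 2 + (delta_13 / 0.01) ** 2)

def in_prior(theta):
    return np.all(np.abs(theta) < 0.5)   # flat prior box

theta = np.array([0.1, 0.0])
chain, logl = [], log_likelihood(theta)
for _ in range(20000):
    proposal = theta + rng.normal(scale=0.02, size=2)    # random-walk step
    if in_prior(proposal):
        logl_new = log_likelihood(proposal)
        if np.log(rng.uniform()) < logl_new - logl:      # Metropolis accept/reject
            theta, logl = proposal, logl_new
    chain.append(theta.copy())

chain = np.array(chain[5000:])                           # drop burn-in
print("posterior mean:", chain.mean(axis=0))
print("posterior std: ", chain.std(axis=0))
```

Favoured parameter combinations would then be read off from the density of chain points, which is the role the full scan plays in the study above.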
Colliders as a simultaneous probe of supersymmetric dark matter and Terascale cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barenboim, Gabriela; Lykken, Joseph D.
2006-08-01
Terascale supersymmetry has the potential to provide a natural explanation of the dominant dark matter component of the standard ΛCDM cosmology. However, once we impose the constraints on minimal supersymmetry parameters from current particle physics data, a satisfactory dark matter abundance is no longer prima facie natural. This Neutralino Tuning Problem could be a hint of nonstandard cosmology during and/or after the Terascale era. To quantify this possibility, we introduce an alternative cosmological benchmark based upon a simple model of quintessential inflation. This benchmark has no free parameters, so for a given supersymmetry model it allows an unambiguous prediction of the dark matter relic density. As an example, we scan over the parameter space of the CMSSM, comparing the neutralino relic density predictions with the bounds from WMAP. We find that the WMAP-allowed regions of the CMSSM are an order of magnitude larger if we use the alternative cosmological benchmark, as opposed to ΛCDM. Initial results from the CERN Large Hadron Collider will distinguish between the two allowed regions.
Colliders as a simultaneous probe of supersymmetric dark matter and Terascale cosmology
NASA Astrophysics Data System (ADS)
Barenboim, Gabriela; Lykken, Joseph D.
2006-12-01
Terascale supersymmetry has the potential to provide a natural explanation of the dominant dark matter component of the standard ΛCDM cosmology. However, once we impose the constraints on minimal supersymmetry parameters from current particle physics data, a satisfactory dark matter abundance is no longer prima facie natural. This Neutralino Tuning Problem could be a hint of nonstandard cosmology during and/or after the Terascale era. To quantify this possibility, we introduce an alternative cosmological benchmark based upon a simple model of quintessential inflation. This benchmark has no free parameters, so for a given supersymmetry model it allows an unambiguous prediction of the dark matter relic density. As an example, we scan over the parameter space of the CMSSM, comparing the neutralino relic density predictions with the bounds from WMAP. We find that the WMAP-allowed regions of the CMSSM are an order of magnitude larger if we use the alternative cosmological benchmark, as opposed to ΛCDM. Initial results from the CERN Large Hadron Collider will distinguish between the two allowed regions.
Realistic simplified gaugino-higgsino models in the MSSM
NASA Astrophysics Data System (ADS)
Fuks, Benjamin; Klasen, Michael; Schmiemann, Saskia; Sunder, Marthijn
2018-03-01
We present simplified MSSM models for light neutralinos and charginos with realistic mass spectra and realistic gaugino-higgsino mixing that can be used in experimental searches at the LHC. The formerly used naive approach of defining mass spectra and mixing matrix elements manually and independently of each other does not yield genuine MSSM benchmarks. We suggest the use of less simplified, but realistic, MSSM models whose mass spectra and mixing matrix elements are the result of a proper matrix diagonalisation. We propose a novel strategy targeting the design of such benchmark scenarios, accounting for user-defined constraints in terms of masses and particle mixing. We apply it to the higgsino case and implement a scan in the four relevant underlying parameters {μ, tan β, M1, M2} for a given set of light neutralino and chargino masses. We define a measure for the quality of the obtained benchmarks that also includes criteria to assess the higgsino content of the resulting charginos and neutralinos. We finally discuss the distribution of the resulting models in the MSSM parameter space as well as their implications for supersymmetric dark matter phenomenology.
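The scan described above relies on obtaining masses and mixings from a proper diagonalisation of the MSSM mass matrices rather than setting them by hand. The sketch below shows the tree-level step for one illustrative parameter point (the numerical inputs are arbitrary, not one of the paper's benchmarks): the real symmetric neutralino mass matrix is diagonalised with an orthogonal transformation and the chargino mass matrix by singular value decomposition.

```python
# Illustrative tree-level spectrum for a chosen point {mu, tan(beta), M1, M2}:
# neutralino and chargino masses plus the higgsino content of each neutralino.
import numpy as np

mZ, mW, sW2 = 91.19, 80.38, 0.2312          # GeV, GeV, sin^2(theta_W)
sW, cW = np.sqrt(sW2), np.sqrt(1 - sW2)

def spectrum(mu, tan_beta, M1, M2):
    beta = np.arctan(tan_beta)
    sb, cb = np.sin(beta), np.cos(beta)
    # neutralino mass matrix in the (bino, wino, higgsino_d, higgsino_u) basis
    MN = np.array([[M1, 0.0, -cb * sW * mZ,  sb * sW * mZ],
                   [0.0, M2,  cb * cW * mZ, -sb * cW * mZ],
                   [-cb * sW * mZ,  cb * cW * mZ, 0.0, -mu],
                   [ sb * sW * mZ, -sb * cW * mZ, -mu, 0.0]])
    vals, vecs = np.linalg.eigh(MN)          # real symmetric -> orthogonal mixing
    order = np.argsort(np.abs(vals))
    masses = np.abs(vals[order])             # physical masses up to a sign
    higgsino_frac = vecs[2, order] ** 2 + vecs[3, order] ** 2
    # chargino mass matrix, diagonalised via its singular values
    X = np.array([[M2, np.sqrt(2) * sb * mW],
                  [np.sqrt(2) * cb * mW, mu]])
    chargino_masses = np.sort(np.linalg.svd(X, compute_uv=False))
    return masses, higgsino_frac, chargino_masses

m_neut, h_frac, m_char = spectrum(mu=150.0, tan_beta=10.0, M1=600.0, M2=650.0)
print("neutralino masses [GeV]:", np.round(m_neut, 1))
print("higgsino content       :", np.round(h_frac, 2))
print("chargino masses  [GeV] :", np.round(m_char, 1))
```

A scan of the kind described above would invert this: vary {μ, tan β, M1, M2} until the resulting masses and higgsino fractions satisfy the user-defined targets.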
Exploring theory space with Monte Carlo reweighting
Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...
2014-10-13
Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
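As a schematic of the event-reweighting idea (with one-dimensional stand-in "matrix elements", not any particular physics model), the snippet below re-uses a single sample generated under a benchmark hypothesis to estimate observables at other coupling values through per-event weights given by the ratio of squared matrix elements.

```python
# Monte Carlo reweighting sketch: weight each benchmark event by
# |M_target|^2 / |M_benchmark|^2 evaluated on its kinematics.
import numpy as np

rng = np.random.default_rng(1)

def me2_benchmark(x):            # stand-in |M|^2 of the generation benchmark
    return np.exp(-x)

def me2_target(x, coupling):     # stand-in |M|^2 of the model being explored
    return np.exp(-x) * (1.0 + coupling * x) ** 2

# "fully simulated" sample, drawn from the benchmark density on x in [0, 5]
x = rng.exponential(scale=1.0, size=200_000)
x = x[x < 5.0]

for coupling in (0.0, 0.2, 0.5):
    w = me2_target(x, coupling) / me2_benchmark(x)   # per-event weight
    mean_x = np.average(x, weights=w)                # any observable can be re-used
    print(f"coupling={coupling:.1f}  <x> = {mean_x:.3f}  "
          f"effective events = {w.sum()**2 / (w**2).sum():.0f}")
```

The effective event count (Kish formula) illustrates the usual caveat: reweighting far from the generation benchmark inflates weight variance and erodes statistical power.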
De Bondt, Timo; Mulkens, Tom; Zanca, Federica; Pyfferoen, Lotte; Casselman, Jan W; Parizel, Paul M
2017-02-01
To benchmark regional standard practice for paediatric cranial CT procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT data were retrospectively collected during a 1-year period in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, number of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference levels (DRL) for all age categories, statistically significant (p-value < 0.001) dose differences among hospitals were observed. The hospital with the lowest dose levels showed the smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization are possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. • Significant differences were observed in the delivered dose between age groups and hospitals. • Using age-adapted scanning protocols gives a nearly linear dose increase with age. • Sharing dose data can be a trigger for hospitals to reduce dose levels.
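A minimal sketch of the stratified dose benchmarking described above, using fabricated CTDIvol values and illustrative reference levels (neither the study's data nor the actual national/international DRLs are reproduced here):

```python
# Stratify CTDIvol by hospital and age group, compare stratum medians against
# a diagnostic reference level (DRL), and flag adult-protocol use in children.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "hospital": rng.choice(["A", "B", "C"], size=n),
    "age_group": rng.choice(["0-1y", "1-5y", "5-10y", ">10y"], size=n),
    "ctdi_vol": rng.gamma(shape=4.0, scale=6.0, size=n),   # mGy, synthetic
    "protocol": rng.choice(["paediatric", "adult"], size=n, p=[0.9, 0.1]),
})

drl = {"0-1y": 24.0, "1-5y": 28.0, "5-10y": 40.0, ">10y": 50.0}  # illustrative DRLs

medians = df.groupby(["hospital", "age_group"])["ctdi_vol"].median().unstack()
print("median CTDIvol [mGy] per hospital and age group:\n", medians.round(1))

exceed = medians.gt(pd.Series(drl))            # strata above the reference level
print("\nstrata above the reference level:\n", exceed)

adult_misuse = (df["protocol"] == "adult").groupby(df["age_group"]).mean()
print("\nfraction of exams using an adult protocol:\n", adult_misuse.round(2))
```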
Singlet-catalyzed electroweak phase transitions in the 100 TeV frontier
NASA Astrophysics Data System (ADS)
Kotwal, Ashutosh V.; Ramsey-Musolf, Michael J.; No, Jose Miguel; Winslow, Peter
2016-08-01
We study the prospects for probing a gauge singlet scalar-driven strong first-order electroweak phase transition with a future proton-proton collider in the 100 TeV range. Singlet-Higgs mixing enables resonantly enhanced di-Higgs production, potentially aiding discovery prospects. We perform Monte Carlo scans of the parameter space to identify regions associated with a strong first-order electroweak phase transition, analyze the corresponding di-Higgs signal, and select a set of benchmark points that span the range of di-Higgs signal strengths. For the bb̄γγ and 4τ final states, we investigate discovery prospects for each benchmark point for the high-luminosity phase of the Large Hadron Collider and for a future pp collider with √s = 50, 100, or 200 TeV. We find that any of these future collider scenarios could significantly extend the reach beyond that of the high-luminosity LHC, and that with √s = 100 TeV (200 TeV) and 30 ab⁻¹, the full region of parameter space favorable to strong first-order electroweak phase transitions is almost fully (fully) discoverable.
Local Neutral Density and Plasma Parameter Measurements in a Hollow Cathode Plume
NASA Technical Reports Server (NTRS)
Jameson, Kristina K.; Goebel, Dan M.; Mikellides, Ioannis; Watkins, Ron M.
2006-01-01
In order to understand the cathode and keeper wear observed during the Extended Life Test (ELT) of the DS1 flight spare NSTAR thruster and to provide benchmarking data for a 2D cathode/cathode-plume model, a basic understanding of the plasma and neutral gas parameters in the cathode orifice and keeper region of the cathode plume must be obtained. The JPL cathode facility is instrumented with an array of Langmuir probe diagnostics along with an optical diagnostic to measure the line intensity of xenon neutrals. In order to make direct comparisons with the present model, a flat plate anode arrangement was installed for these tests. Neutral density is deduced from the scanning probe data of the plasma parameters and the measured xenon line intensity in the optical regime. The Langmuir probes are scanned both axially, out to 7.0 cm downstream of the keeper, and radially to obtain 2D profiles of the plasma parameters. The optical fiber is housed in a collimating stainless steel tube and is scanned across the cathode plume along cuts in front of the keeper with a resolution of 1.5 mm. The radial intensities are unfolded using the Abel inversion technique, which produces radial profiles of the local neutral density. In this paper, detailed measurements of the plasma parameters and the local neutral densities are presented in the cathode/keeper plume region for a 1.5 cm diameter NEXIS cathode at 25 A of discharge current at several different strengths of applied magnetic field.
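The Abel inversion step lends itself to a compact numerical illustration. The sketch below uses the "onion peeling" discretisation, one of several ways to invert chord-integrated data (the paper does not specify its implementation), and checks it against a synthetic axisymmetric emission profile.

```python
# Onion-peeling Abel inversion: recover a radial emission profile from
# chord-integrated intensities of an axisymmetric source.
import numpy as np

n, R = 50, 1.0
edges = np.linspace(0.0, R, n + 1)          # annulus boundaries
r = 0.5 * (edges[:-1] + edges[1:])          # annulus centres
y = edges[:-1]                              # chord impact parameters

# geometry matrix: chord i crosses annulus j (j >= i) with path length A[i, j]
A = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        outer = np.sqrt(max(edges[j + 1] ** 2 - y[i] ** 2, 0.0))
        inner = np.sqrt(max(edges[j] ** 2 - y[i] ** 2, 0.0))
        A[i, j] = 2.0 * (outer - inner)

true_emission = np.exp(-(r / 0.3) ** 2)     # synthetic axisymmetric profile
chord_intensity = A @ true_emission         # what a collimated line of sight sees

recovered = np.linalg.solve(A, chord_intensity)   # upper-triangular system
print("max relative error:",
      np.max(np.abs(recovered - true_emission)) / true_emission.max())
```

In the experiment above, the recovered radial emission profile is then converted to a local neutral density using the probe-derived plasma parameters.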
Singlet-catalyzed electroweak phase transitions in the 100 TeV frontier
Kotwal, Ashutosh V.; Ramsey-Musolf, Michael J.; No, Jose Miguel; ...
2016-08-23
We study the prospects for probing a gauge singlet scalar-driven strong first-order electroweak phase transition with a future proton-proton collider in the 100 TeV range. Singlet-Higgs mixing enables resonantly enhanced di-Higgs production, potentially aiding discovery prospects. We perform Monte Carlo scans of the parameter space to identify regions associated with a strong first-order electroweak phase transition, analyze the corresponding di-Higgs signal, and select a set of benchmark points that span the range of di-Higgs signal strengths. For the bb̄γγ and 4τ final states, we investigate discovery prospects for each benchmark point for the high-luminosity phase of the Large Hadron Collider and for a future pp collider with √s = 50, 100, or 200 TeV. We find that any of these future collider scenarios could significantly extend the reach beyond that of the high-luminosity LHC, and that with √s = 100 TeV (200 TeV) and 30 ab⁻¹, the full region of parameter space favorable to strong first-order electroweak phase transitions is almost fully (fully) discoverable.
Improved Peptide and Protein Torsional Energetics with the OPLS-AA Force Field.
Robertson, Michael J; Tirado-Rives, Julian; Jorgensen, William L
2015-07-14
The development and validation of new peptide dihedral parameters are reported for the OPLS-AA force field. High accuracy quantum chemical methods were used to scan φ, ψ, χ1, and χ2 potential energy surfaces for blocked dipeptides. New Fourier coefficients for the dihedral angle terms of the OPLS-AA force field were fit to these surfaces, utilizing a Boltzmann-weighted error function and systematically examining the effects of weighting temperature. To prevent overfitting to the available data, a minimal number of new residue-specific and peptide-specific torsion terms were developed. Extensive experimental solution-phase and quantum chemical gas-phase benchmarks were used to assess the quality of the new parameters, named OPLS-AA/M, demonstrating significant improvement over previous OPLS-AA force fields. A Boltzmann weighting temperature of 2000 K was determined to be optimal for fitting the new Fourier coefficients for dihedral angle parameters. Conclusions are drawn from the results for best practices for developing new torsion parameters for protein force fields.
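A stripped-down version of the Boltzmann-weighted fitting step described above: synthetic "QM" torsional energies are fit with the standard three-term OPLS Fourier series by weighted least squares at a 2000 K weighting temperature. The target profile and noise are invented; only the procedure mirrors the text.

```python
# Fit OPLS-style Fourier torsion coefficients to a dihedral energy scan using
# a Boltzmann-weighted error function (weighting temperature 2000 K).
import numpy as np

kB_T = 0.0019872041 * 2000.0                 # kcal/mol at 2000 K

phi = np.deg2rad(np.arange(0.0, 360.0, 15.0))        # scan grid
rng = np.random.default_rng(3)
e_qm = (1.4 * (1 + np.cos(phi)) + 0.3 * (1 - np.cos(2 * phi))
        + 0.8 * (1 + np.cos(3 * phi)) + 0.05 * rng.normal(size=phi.size))
e_qm -= e_qm.min()                           # reference to the scan minimum

# design matrix for E(phi) = c0 + V1/2 (1+cos) + V2/2 (1-cos2) + V3/2 (1+cos3)
X = np.column_stack([np.ones_like(phi),
                     0.5 * (1 + np.cos(phi)),
                     0.5 * (1 - np.cos(2 * phi)),
                     0.5 * (1 + np.cos(3 * phi))])

w = np.exp(-e_qm / kB_T)                     # Boltzmann weights favour low-energy points
coeffs, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * X, np.sqrt(w) * e_qm, rcond=None)
print("fitted V1, V2, V3 [kcal/mol]:", np.round(coeffs[1:], 2))
```

Raising or lowering the weighting temperature trades accuracy near minima against accuracy at barriers, which is the trade-off the paper systematically examines.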
NASA Astrophysics Data System (ADS)
Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Lin, Hui; Du, Zhiqiang; Zhang, Yeting; Zhang, Yunsheng
2014-06-01
The filtering of point clouds is a ubiquitous task in the processing of airborne laser scanning (ALS) data; however, such filtering processes are difficult because of the complex configuration of the terrain features. Classical filtering algorithms rely on the careful tuning of parameters to handle various landforms. To address the challenge posed by the bundling of different terrain features into a single dataset and to overcome the sensitivity of the parameters, in this study we propose an adaptive surface filter (ASF) for the classification of ALS point clouds. Based on the principle that the threshold should vary in accordance with the terrain smoothness, the ASF embeds bending energy, which quantitatively depicts the local terrain structure, to adapt the filter threshold automatically. The ASF employs a step factor to control the data pyramid scheme in which the processing window sizes are reduced progressively, and it gradually interpolates thin plate spline surfaces toward the ground with regularization to handle noise. Through the progressive densification strategy, regularization and self-adaptation, both improved performance and resilience to parameter tuning are achieved. When tested against the benchmark datasets provided by the ISPRS, the ASF performs best in comparison with all other filtering methods, yielding an average total error of 2.85% when optimized and 3.67% when using the same parameter set.
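A much-simplified sketch of a progressive thin-plate-spline ground filter may help convey the general mechanism; note that it omits the bending-energy self-adaptation that is the paper's actual contribution, and all data, windows and thresholds below are synthetic.

```python
# Progressive ground filtering sketch: seed with block minima, fit a smoothed
# thin-plate-spline surface, accept points close to it, shrink the window.
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(4)
n = 2000
xy = rng.uniform(0, 100, size=(n, 2))
ground = 0.05 * xy[:, 0] + 2.0 * np.sin(xy[:, 1] / 15.0)          # synthetic terrain
z = ground + rng.normal(scale=0.05, size=n)
objects = rng.random(n) < 0.2                                     # 20% off-terrain points
z[objects] += rng.uniform(2.0, 10.0, size=objects.sum())

is_ground = np.zeros(n, dtype=bool)
for window, threshold in [(25.0, 1.0), (10.0, 0.5), (5.0, 0.3)]:  # shrinking windows
    ix = np.floor(xy / window).astype(int)
    cells = {}                                                    # lowest point per cell
    for i, key in enumerate(map(tuple, ix)):
        if key not in cells or z[i] < z[cells[key]]:
            cells[key] = i
    seeds = np.array(list(cells.values()))
    tps = Rbf(xy[seeds, 0], xy[seeds, 1], z[seeds],
              function="thin_plate", smooth=1.0)                  # regularised TPS
    residual = z - tps(xy[:, 0], xy[:, 1])
    is_ground = residual < threshold

print("recovered ground fraction:", round(is_ground.mean(), 2),
      " true ground fraction:", round((~objects).mean(), 2))
```

The ASF replaces the fixed thresholds used here with values modulated by the local bending energy of the interpolated surface, which is what makes it resilient to parameter tuning.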
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.
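For readers unfamiliar with how such benchmark curves are constructed, the sketch below builds a quasi-static critical load-displacement relation for a DCB specimen from simple beam theory. The NASA benchmarks themselves use corrected beam-theory expressions and specimen-specific properties; the material values here are generic placeholders.

```python
# Quasi-static DCB benchmark curve from simple beam theory:
#   G_I = 12 P^2 a^2 / (E b^2 h^3)  ->  P_crit(a), delta_crit(a) at G_I = G_Ic
import numpy as np

E1 = 139e9            # axial modulus [Pa], placeholder
GIc = 170.0           # mode I fracture toughness [J/m^2], placeholder
b, h = 25e-3, 1.5e-3  # specimen width and arm thickness [m]

a = np.linspace(30e-3, 80e-3, 11)                 # delamination lengths [m]
I = b * h ** 3 / 12.0                             # second moment of one arm
compliance = 2.0 * a ** 3 / (3.0 * E1 * I)        # opening displacement per unit load
P_crit = b * np.sqrt(GIc * E1 * h ** 3 / 12.0) / a
d_crit = P_crit * compliance

for ai, Pi, di in zip(a, P_crit, d_crit):
    print(f"a = {ai*1e3:5.1f} mm   P_crit = {Pi:6.1f} N   delta_crit = {di*1e3:5.2f} mm")
```

A VCCT propagation analysis is then judged by how closely its computed load-displacement path follows such a benchmark curve, which is the comparison described in the abstract.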
Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2011-01-01
The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2011-01-01
The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.
Spangenberg, Elin M F; Keeling, Linda J
2016-02-01
Welfare problems in laboratory mice can be a consequence of an ongoing experiment, or a characteristic of a particular genetic line, but in some cases, such as breeding animals, they are most likely to be a result of the design and management of the home cage. Assessment of the home cage environment is commonly performed using resource-based measures, like access to nesting material. However, animal-based measures (related to the health status and behaviour of the animals) can be used to assess the current welfare of animals regardless of the inputs applied (i.e. the resources or management). The aim of this study was to design a protocol for assessing the welfare of laboratory mice using only animal-based measures. The protocol, to be used as a benchmarking tool, assesses mouse welfare in the home cage and does not contain parameters related to experimental situations. It is based on parameters corresponding to the 12 welfare criteria established by the Welfare Quality® project. Selection of animal-based measures was performed by scanning existing published, web-based and informal protocols, and by choosing parameters that matched these criteria, were feasible in practice and, if possible, were already validated indicators of mouse welfare. The parameters should identify possible animal welfare problems and enable assessment directly in an animal room during cage cleaning procedures, without the need for extra equipment. Thermal comfort behaviours and positive emotional states are areas where more research is needed to find valid, reliable and feasible animal-based measures.
Benchmark Testing of a New 56Fe Evaluation for Criticality Safety Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, Luiz C; Ivanov, E.
2015-01-01
The SAMMY code was used to evaluate resonance parameters of the 56Fe cross section in the resolved resonance energy range of 0–2 MeV using transmission data and capture, elastic, inelastic, and double-differential elastic cross sections. The resonance analysis was performed with SAMMY, which fits R-matrix resonance parameters using the generalized least-squares technique (Bayes' theory). The evaluation yielded a set of resonance parameters that reproduces the experimental data very well, along with a resonance parameter covariance matrix for data uncertainty calculations. Benchmark tests were conducted to assess the performance of the evaluation in benchmark calculations.
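The generalized least-squares (Bayes) update at the heart of such a resonance fit can be illustrated compactly. The toy below updates prior parameters and their covariance for a single Breit-Wigner line against synthetic data; the real R-matrix machinery, resolution broadening and multi-channel fitting in SAMMY are far richer than this.

```python
# Linearised Bayes/GLS update of resonance parameters against noisy data.
import numpy as np

def cross_section(E, pars):
    E0, gamma, sigma0 = pars                          # resonance energy, width, peak
    return sigma0 * (0.5 * gamma) ** 2 / ((E - E0) ** 2 + (0.5 * gamma) ** 2)

E = np.linspace(20.0, 40.0, 60)                       # energy grid (arbitrary units)
true = np.array([30.0, 2.0, 5.0])
rng = np.random.default_rng(8)
data = cross_section(E, true) + rng.normal(scale=0.1, size=E.size)
V = np.diag(np.full(E.size, 0.1 ** 2))                # data covariance

prior = np.array([29.0, 2.5, 4.0])                    # prior parameters
P = np.diag([1.0 ** 2, 0.5 ** 2, 1.0 ** 2])           # prior covariance

# sensitivity matrix G by finite differences around the prior
eps = 1e-4
G = np.column_stack([(cross_section(E, prior + eps * np.eye(3)[i]) -
                      cross_section(E, prior)) / eps for i in range(3)])

residual = data - cross_section(E, prior)
K = P @ G.T @ np.linalg.inv(V + G @ P @ G.T)          # Bayes/GLS gain
posterior = prior + K @ residual
P_post = P - K @ G @ P                                # posterior covariance

print("posterior parameters   :", np.round(posterior, 3))
print("posterior uncertainties:", np.round(np.sqrt(np.diag(P_post)), 3))
```

The posterior covariance P_post plays the role of the resonance parameter covariance matrix mentioned above, from which cross-section uncertainties for benchmark calculations can be propagated.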
Benchmarking Using Basic DBMS Operations
NASA Astrophysics Data System (ADS)
Crolotte, Alain; Ghazal, Ahmad
The TPC-H benchmark proved to be successful in the decision support area. Many commercial database vendors and their related hardware vendors used this benchmark to show the superiority and competitive edge of their products. However, over time, TPC-H became less representative of industry trends as vendors kept tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. This benchmark model is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
Benchmark studies of the gyro-Landau-fluid code and gyro-kinetic codes on kinetic ballooning modes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, T. F.; Lawrence Livermore National Laboratory, Livermore, California 94550; Xu, X. Q.
2016-03-15
A Gyro-Landau-Fluid (GLF) 3+1 model has recently been implemented in the BOUT++ framework, which contains full Finite-Larmor-Radius effects, Landau damping, and toroidal resonance [Ma et al., Phys. Plasmas 22, 055903 (2015)]. A linear global beta scan has been conducted using the JET-like circular equilibria (cbm18 series), showing that the unstable modes are kinetic ballooning modes (KBMs). In this work, we use the GYRO code, a gyrokinetic continuum code widely used for simulation of plasma microturbulence, to benchmark the GLF 3+1 code on KBMs. To verify our code on the KBM case, we first perform a beta scan based on the “Cyclone base case” parameter set. We find that the growth rate is almost the same for the two codes, and that the KBM is further destabilized as beta increases. For the JET-like global circular equilibria, as the modes localize in the peak pressure gradient region, a linear local beta scan using the same set of equilibria has been performed at this position for comparison. With the drift kinetic electron module in the GYRO code, including a small electron-electron collision frequency to damp electron modes, the GYRO-generated mode structures and parity suggest that they are kinetic ballooning modes, and the growth rate is comparable to the GLF results. However, a radial scan of the pedestal for a particular set of cbm18 equilibria, using the GYRO code, shows different trends for the low-n and high-n modes. The low-n modes show that the linear growth rate peaks at the peak pressure gradient position, as in the GLF results. However, for high-n modes, the growth rate of the most unstable mode shifts outward to the bottom of the pedestal, and the real frequency of what were originally KBMs in the ion diamagnetic drift direction steadily approaches and crosses over to the electron diamagnetic drift direction.
A new numerical benchmark of a freshwater lens
NASA Astrophysics Data System (ADS)
Stoeckl, L.; Walther, M.; Graf, T.
2016-04-01
A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.
Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos
2016-01-01
Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present MUSE, a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web-based platform for remote processing of medical images. PMID:26679328
Influence of particle geometry and PEGylation on phagocytosis of particulate carriers.
Mathaes, Roman; Winter, Gerhard; Besheer, Ahmed; Engert, Julia
2014-04-25
Particle geometry of micro- and nanoparticles has been identified as an important design parameter to influence the interaction with cells such as macrophages. A head-to-head comparison of elongated, non-spherical and spherical micro- and nanoparticles with and without PEGylation was carried out to benchmark two phagocytosis-inhibiting techniques. J774.A1 macrophages were incubated with fluorescently labeled PLGA micro- and nanoparticles and analyzed by confocal laser scanning microscopy (CLSM) and flow cytometry (FACS). Particle uptake into macrophages was significantly reduced upon PEGylation or elongated particle geometry. A combination of both, an elongated shape and PEGylation, had the strongest phagocytosis-inhibiting effect for nanoparticles.
Hopkins, Carl
2011-05-01
In architectural acoustics, noise control and environmental noise, there are often steady-state signals for which it is necessary to measure the spatially averaged sound pressure level inside rooms. This requires using fixed microphone positions, mechanical scanning devices, or manual scanning. In comparison with mechanical scanning devices, the human body allows manual scanning to trace out complex geometrical paths in three-dimensional space. To determine the efficacy of manual scanning paths in terms of an equivalent number of uncorrelated samples, an analytical approach is solved numerically. The benchmark used to assess these paths is a minimum of five uncorrelated fixed microphone positions at frequencies above 200 Hz. For paths involving an operator walking across the room, potential problems exist with walking noise and non-uniform scanning speeds. Hence, paths are considered based on a fixed standing position or rotation of the body about a fixed point. In empty rooms, it is shown that a circle, helix, or cylindrical-type path satisfies the benchmark requirement, with the latter two paths being highly efficient at generating large numbers of uncorrelated samples. In furnished rooms where there is limited space for the operator to move, an efficient path comprises three semicircles with 45°-60° separations.
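To make the "equivalent number of uncorrelated samples" notion concrete, the sketch below evaluates it for points sampled along a circular path, using the classic diffuse-field spatial correlation of mean-square pressure, sinc²(kd), and a simple variance-reduction argument. The paper's analytical treatment of continuously moving paths and other geometries goes well beyond this approximation.

```python
# Effective number of uncorrelated samples along a circular scanning path in a
# diffuse field, compared with the benchmark of five fixed microphone positions.
import numpy as np

def effective_samples(points, frequency, c=343.0):
    k = 2 * np.pi * frequency / c
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    rho2 = np.sinc(k * d / np.pi) ** 2          # np.sinc(x) = sin(pi x)/(pi x)
    n = len(points)
    return n ** 2 / rho2.sum()                  # N_eff = N^2 / sum_ij rho^2(d_ij)

n, radius = 72, 0.8                              # dense sampling of a 0.8 m circle
angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
circle = np.column_stack([radius * np.cos(angles),
                          radius * np.sin(angles),
                          np.full(n, 1.5)])      # swept at a 1.5 m height

for f in (200.0, 500.0, 1000.0):
    print(f"{f:6.0f} Hz: ~{effective_samples(circle, f):5.1f} uncorrelated samples "
          f"(benchmark: 5 fixed positions)")
```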
NASA Technical Reports Server (NTRS)
Krause, David L.; Brewer, Ethan J.; Pawlik, Ralph
2013-01-01
This report provides test methodology details and qualitative results for the first structural benchmark creep test of an Advanced Stirling Convertor (ASC) heater head of ASC-E2 design heritage. The test article was recovered from a flight-like Microcast MarM-247 heater head specimen previously used in helium permeability testing. The test article was utilized for benchmark creep test rig preparation, wall thickness and diametral laser scan hardware metrological developments, and induction heater custom coil experiments. In addition, a benchmark creep test was performed, terminated after one week when through-thickness cracks propagated at thermocouple weld locations. Following this, it was used to develop a unique temperature measurement methodology using contact thermocouples, thereby enabling future benchmark testing to be performed without the use of conventional welded thermocouples, proven problematic for the alloy. This report includes an overview of heater head structural benchmark creep testing, the origin of this particular test article, test configuration developments accomplished using the test article, creep predictions for its benchmark creep test, qualitative structural benchmark creep test results, and a short summary.
MIPS bacterial genomes functional annotation benchmark dataset.
Tetko, Igor V; Brauner, Barbara; Dunger-Kaltenbach, Irmtraud; Frishman, Goar; Montrone, Corinna; Fobo, Gisela; Ruepp, Andreas; Antonov, Alexey V; Surmeli, Dimitrij; Mewes, Hans-Werner
2005-05-15
Any development of new methods for automatic functional annotation of proteins according to their sequences requires high-quality data (as a benchmark) as well as tedious preparatory work to generate the sequence parameters required as input data for the machine learning methods. Different program settings and incompatible protocols make a comparison of the analyzed methods difficult. The MIPS Bacterial Functional Annotation Benchmark dataset (MIPS-BFAB) is a new, high-quality resource comprising four bacterial genomes manually annotated according to the MIPS functional catalogue (FunCat). The resource includes precalculated sequence parameters, such as sequence similarity scores, InterPro domain composition and other parameters that can be used to develop and benchmark methods for functional annotation of bacterial protein sequences. These data are provided in XML format and can be used by scientists who are not necessarily experts in genome annotation. BFAB is available at http://mips.gsf.de/proj/bfab
Benchmarking image fusion system design parameters
NASA Astrophysics Data System (ADS)
Howell, Christopher L.
2013-06-01
A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constrained optimization problem, one can effectively look backwards through the image acquisition process, optimizing fused system parameters by minimizing the difference between the modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment, in which human observers were asked to identify a standard set of military targets, are presented and used to demonstrate the effectiveness of the benchmarking process.
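Since NVThermIP is not scriptable here, the toy below replaces it with a synthetic task-difficulty function and shows only the genetic-algorithm skeleton that drives fused-system parameters towards a benchmark difficulty value; the parameter names, ranges and objective are invented for illustration.

```python
# Genetic-algorithm skeleton: evolve fused-system parameters so that a modeled
# task-difficulty measure matches a benchmark value.
import numpy as np

rng = np.random.default_rng(5)
benchmark_difficulty = 2.0                       # value the fused system must match

def task_difficulty(params):
    """Stand-in for the modeled task-difficulty metric of a fused system."""
    blur, noise, gain = params
    return 1.0 + 3.0 * blur ** 2 + 2.0 * noise - 0.8 * np.log1p(gain)

def fitness(params):
    return -abs(task_difficulty(params) - benchmark_difficulty)

lo, hi = np.array([0.0, 0.0, 0.1]), np.array([1.0, 1.0, 10.0])
pop = rng.uniform(lo, hi, size=(60, 3))

for generation in range(80):
    scores = np.array([fitness(p) for p in pop])
    # tournament selection
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # uniform crossover + Gaussian mutation, clipped to the allowed box
    mates = parents[rng.permutation(len(parents))]
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, mates)
    children += rng.normal(scale=0.05, size=pop.shape) * (hi - lo)
    pop = np.clip(children, lo, hi)

best = pop[np.argmax([fitness(p) for p in pop])]
print("best parameters:", np.round(best, 3),
      " modeled difficulty:", round(task_difficulty(best), 3))
```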
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation prediction is presented and demonstrated for a commercial code. The examples are based on finite element models of the Mixed-Mode Bending (MMB) specimen. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, quasi-static benchmark examples were created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Good agreement between the results obtained from the automated propagation analysis and the benchmark results could be achieved by selecting input parameters that had previously been determined during analyses of mode I Double Cantilever Beam and mode II End Notched Flexure specimens. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.
An imaging-based computational model for simulating angiogenesis and tumour oxygenation dynamics
NASA Astrophysics Data System (ADS)
Adhikarla, Vikram; Jeraj, Robert
2016-05-01
Tumour growth, angiogenesis and oxygenation vary substantially among tumours and significantly impact their treatment outcome. Imaging provides a unique means of investigating these tumour-specific characteristics. Here we propose a computational model to simulate tumour-specific oxygenation changes based on molecular imaging data. Tumour oxygenation in the model is reflected by the perfused vessel density. Tumour growth depends on its doubling time (Td) and the imaged proliferation. The perfused vessel density recruitment rate depends on the perfused vessel density around the tumour (sMVDtissue) and the maximum VEGF concentration for complete vessel dysfunctionality (VEGFmax). The model parameters were benchmarked to reproduce the dynamics of tumour oxygenation over its entire lifecycle, which is the most challenging test. Tumour oxygenation dynamics were quantified using the peak pO2 (pO2peak) and the time to peak pO2 (tpeak). Sensitivity of tumour oxygenation to the model parameters was assessed by changing each parameter by 20%. tpeak was found to be more sensitive to the tumour-cell-line-related doubling time (~30%) than to the tissue vasculature density (~10%). On the other hand, pO2peak was found to be similarly influenced by the above tumour- and vasculature-associated parameters (~30-40%). Interestingly, both pO2peak and tpeak were only marginally affected by VEGFmax (~5%). The development of a poorly oxygenated (hypoxic) core with tumour growth increased VEGF accumulation, thus disrupting vessel perfusion and further increasing hypoxia with time. The model, with its benchmarked parameters, is applied to hypoxia imaging data obtained using a [64Cu]Cu-ATSM PET scan of a mouse tumour, and the temporal development of the vasculature and hypoxia maps is shown. The work underscores the importance of using tumour-specific input for analysing tumour evolution. An extended model incorporating therapeutic effects can serve as a powerful tool for analysing tumour response to anti-angiogenic therapies.
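A toy version of the ±20% sensitivity test follows. The surrogate dynamics (logistic tumour growth, vessels recruited towards the surrounding density and disrupted by VEGF) merely stand in for the paper's imaging-driven model; each parameter is perturbed in turn and the response of the two summary outputs, pO2peak and tpeak, is recorded.

```python
# One-at-a-time sensitivity analysis (±20%) of a toy oxygenation model.
import numpy as np

def simulate(doubling_time, vessel_density, vegf_max, t_end=60.0, dt=0.01):
    """Coarse surrogate: tumour grows logistically, perfused vessels are
    recruited towards the surrounding density and disrupted by VEGF."""
    t = np.arange(0.0, t_end, dt)
    tumour, vessels = 1.0, 0.1
    po2 = np.empty_like(t)
    for i in range(t.size):
        tumour += dt * np.log(2.0) / doubling_time * tumour * (1 - tumour / 100.0)
        vegf = tumour / vegf_max
        vessels += dt * (2.0 * (vessel_density - vessels) - vegf * vessels)
        po2[i] = 40.0 * vessels / (vessels + 0.1 * tumour)
    return po2.max(), t[po2.argmax()]

base = dict(doubling_time=5.0, vessel_density=2.0, vegf_max=50.0)
po2_peak0, t_peak0 = simulate(**base)
print(f"baseline: pO2peak = {po2_peak0:.1f}, tpeak = {t_peak0:.1f} d")

for name in base:
    for factor in (0.8, 1.2):                               # the +/-20% perturbation
        p = dict(base, **{name: base[name] * factor})
        po2_peak, t_peak = simulate(**p)
        print(f"{name:15s} x{factor:.1f}: dpO2peak = {100*(po2_peak/po2_peak0-1):+5.1f}%"
              f"   dtpeak = {100*(t_peak/t_peak0-1):+5.1f}%")
```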
Ashenafi, Michael S.; McDonald, Daniel G.; Vanek, Kenneth N.
2015-01-01
Beam scanning data collected on the tomotherapy linear accelerator using the TomoScanner water scanning system is primarily used to verify the golden beam profiles included in all Helical TomoTherapy treatment planning systems (TOMO TPSs). The user is not allowed to modify the beam profiles/parameters for beam modeling within the TOMO TPSs. The authors report the first feasibility study using the Blue Phantom Helix (BPH) as an alternative to the TomoScanner (TS) system. This work establishes a benchmark dataset using BPH for target commissioning and quality assurance (QA), and quantifies systematic uncertainties between TS and BPH. Reproducibility of scanning with BPH was tested by three experienced physicists taking five sets of measurements over a six‐month period. BPH provides several enhancements over TS, including a 3D scanning arm, which is able to acquire necessary beam‐data with one tank setup, a universal chamber mount, and the OmniPro software, which allows online data collection and analysis. Discrepancies between BPH and TS were estimated by acquiring datasets with each tank. In addition, data measured with BPH and TS was compared to the golden TOMO TPS beam data. The total systematic uncertainty, defined as the combination of scanning system and beam modeling uncertainties, was determined through numerical analysis and tabulated. OmniPro was used for all analysis to eliminate uncertainty due to different data processing algorithms. The setup reproducibility of BPH remained within 0.5 mm/0.5%. Comparing BPH, TS, and Golden TPS for PDDs beyond maximum depth, the total systematic uncertainties were within 1.4 mm/2.1%. Between BPH and TPS golden data, maximum differences in the field width and penumbra of in‐plane profiles were within 0.8 and 1.1 mm, respectively. Furthermore, in cross‐plane profiles, the field width differences increased at depth greater than 10 cm up to 2.5 mm, and maximum penumbra uncertainties were 5.6 mm and 4.6 mm from TS scanning system and TPS modeling, respectively. Use of BPH reduced measurement time by 1–2 hrs per session. The BPH has been assessed as an efficient, reproducible, and accurate scanning system capable of providing a reliable benchmark beam data. With this data, a physicist can utilize the BPH in a clinical setting with an understanding of the scan discrepancy that may be encountered while validating the TPS or during routine machine QA. Without the flexibility of modifying the TPS and without a golden beam dataset from the vendor or a TPS model generated from data collected with the BPH, this represents the best solution for current clinical use of the BPH. PACS number: 87.56.Fc
Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja
2015-01-01
The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budzevich, M; Grove, O; Balagurunathan, Y
Purpose: To assess the reproducibility of quantitative structural features using images from the computed tomography thoracic FDA phantom database under different scanning conditions. Methods: Development of quantitative image features to describe lesion shape and size, beyond conventional RECIST measures, is an evolving area of research in need of benchmarking standards. Gavrielides et al. (2010) scanned an FDA-developed thoracic phantom with nodules of various Hounsfield unit (HU) values, shapes and sizes close to vascular structures using several scanners and varying scanning conditions/parameters; these images are in the public domain. We tested six structural features, namely Convexity, Perimeter, Major Axis, Minor Axis, Extent Mean and Eccentricity, to characterize lung nodules. Convexity measures lesion irregularity referenced to a convex surface. Previously, we showed it to have prognostic value in lung adenocarcinoma. The above metrics and RECIST measures were evaluated on three spiculated (8mm/-300HU, 12mm/+30HU and 15mm/+30HU) and two non-spiculated (8mm/+100HU and 10mm/+100HU) nodules (from layout 2) imaged at three different mAs values (25, 100 and 200 mAs) on a Philips scanner (16-slice Mx8000-IDT; 3mm slice thickness). The nodules were segmented semi-automatically using a commercial software tool; the same HU range was used for all nodules. Results: Analysis showed convexity having the lowest maximum coefficient of variation (MCV): 1.1% and 0.6% for spiculated and non-spiculated nodules, respectively, much lower compared to the RECIST Major and Minor axes, whose MCVs were 10.1% and 13.4% for spiculated, and 1.9% and 2.3% for non-spiculated nodules, respectively, across the various mAs. MCVs were consistently larger for spiculated nodules. In general, the dependence of structural features on mAs (noise) was low. Conclusion: The FDA phantom CT database may be used for benchmarking of structural features for various scanners and scanning conditions; we used only a small fraction of the available data. Our feature convexity outperformed other structural features including RECIST measures.
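The reproducibility metric itself is simple to state in code: for each nodule, the coefficient of variation of a feature across the three tube-current settings, and then the maximum over nodules (the MCV quoted above). The feature values below are invented for illustration only.

```python
# Maximum coefficient of variation (MCV) of shape features across repeated
# scans of the same nodules at different mAs settings.
import numpy as np

# rows: nodules, columns: the three tube-current settings (25, 100, 200 mAs)
features = {
    "convexity":  np.array([[0.91, 0.92, 0.91],
                            [0.87, 0.87, 0.88],
                            [0.95, 0.95, 0.95]]),
    "major_axis": np.array([[12.1, 13.4, 11.6],
                            [ 8.3,  8.9,  8.0],
                            [15.2, 16.8, 14.9]]),     # mm
}

for name, values in features.items():
    cv = values.std(axis=1, ddof=1) / values.mean(axis=1) * 100.0   # % per nodule
    print(f"{name:10s} max coefficient of variation: {cv.max():.1f}%")
```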
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard, however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement as well as delamination length versus applied load/displacement relationships from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).
Menze, Bjoern H; Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B; Ayache, Nicholas; Buendia, Patricia; Collins, D Louis; Cordier, Nicolas; Corso, Jason J; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M; Jena, Raj; John, Nigel M; Konukoglu, Ender; Lashkari, Danial; Mariz, José Antonió; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J; Raviv, Tammy Riklin; Reza, Syed M S; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A; Sousa, Nuno; Subbanna, Nagesh K; Szekely, Gabor; Taylor, Thomas J; Thomas, Owen M; Tustison, Nicholas J; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen
2015-10-01
In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B.; Ayache, Nicholas; Buendia, Patricia; Collins, D. Louis; Cordier, Nicolas; Corso, Jason J.; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R.; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M.; Jena, Raj; John, Nigel M.; Konukoglu, Ender; Lashkari, Danial; Mariz, José António; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J.; Raviv, Tammy Riklin; Reza, Syed M. S.; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A.; Sousa, Nuno; Subbanna, Nagesh K.; Szekely, Gabor; Taylor, Thomas J.; Thomas, Owen M.; Tustison, Nicholas J.; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen
2016-01-01
In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource. PMID:25494501
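Two ingredients of the evaluation above, the Dice overlap score and a (flat, non-hierarchical) majority vote for fusing several segmentations, can be sketched in a few lines on synthetic masks; the actual BRATS evaluation operates on multiple tumour sub-regions and a hierarchical vote.

```python
# Dice overlap and simple majority-vote fusion on synthetic binary masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote(segmentations):
    votes = np.mean([s.astype(bool) for s in segmentations], axis=0)
    return votes >= 0.5

rng = np.random.default_rng(6)
truth = np.zeros((64, 64), dtype=bool)
truth[20:44, 18:40] = True                         # synthetic "tumour" region

# three imperfect raters/algorithms: the truth corrupted by random label flips
raters = [np.logical_xor(truth, rng.random(truth.shape) < p) for p in (0.05, 0.08, 0.10)]

for i, r in enumerate(raters, 1):
    print(f"algorithm {i}: Dice = {dice(r, truth):.3f}")
print(f"majority vote: Dice = {dice(majority_vote(raters), truth):.3f}")
```

The fused mask typically scores at least as well as the best individual input, which is the effect the benchmark reports for its hierarchical majority vote.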
Research on Geometric Calibration of Spaceborne Linear Array Whiskbroom Camera
Sheng, Qinghong; Wang, Qi; Xiao, Hui; Wang, Qing
2018-01-01
The geometric calibration of a spaceborne thermal-infrared camera with high spatial resolution and wide coverage can set benchmarks for providing accurate geographical coordinates for the retrieval of land surface temperature. Using linear array whiskbroom Charge-Coupled Device (CCD) arrays to image the Earth makes it possible to obtain thermal-infrared images of large breadth with high spatial resolution. Focusing on the whiskbroom characteristics of equal time intervals and unequal angles, the present study proposes a spaceborne linear-array-scanning imaging geometric model, whilst calibrating temporal system parameters and whiskbroom angle parameters. With the help of the YG-14 (China's first satellite equipped with thermal-infrared cameras of high spatial resolution), imagery over Anyang and Taiyuan is used to conduct an experiment of geometric calibration and a verification test, respectively. Results have shown that the plane positioning accuracy without ground control points (GCPs) is better than 30 pixels and the plane positioning accuracy with GCPs is better than 1 pixel. PMID:29337885
Demb, Joshua; Chu, Philip; Nelson, Thomas; Hall, David; Seibert, Anthony; Lamba, Ramit; Boone, John; Krishnam, Mayil; Cagnon, Christopher; Bostani, Maryam; Gould, Robert; Miglioretti, Diana; Smith-Bindman, Rebecca
2017-06-01
Radiation doses for computed tomography (CT) vary substantially across institutions. To assess the impact of institutional-level audit and collaborative efforts to share best practices on CT radiation doses across 5 University of California (UC) medical centers. In this before/after interventional study, we prospectively collected radiation dose metrics on all diagnostic CT examinations performed between October 1, 2013, and December 31, 2014, at 5 medical centers. Using data from January to March (baseline), we created audit reports detailing the distribution of radiation dose metrics for chest, abdomen, and head CT scans. In April, we shared reports with the medical centers and invited radiology professionals from the centers to a 1.5-day in-person meeting to review reports and share best practices. We calculated changes in mean effective dose 12 weeks before and after the audits and meeting, excluding a 12-week implementation period when medical centers could make changes. We compared proportions of examinations exceeding previously published benchmarks at baseline and following the audit and meeting, and calculated changes in proportion of examinations exceeding benchmarks. Of 158 274 diagnostic CT scans performed in the study period, 29 594 CT scans were performed in the 3 months before and 32 839 CT scans were performed 12 to 24 weeks after the audit and meeting. Reductions in mean effective dose were considerable for chest and abdomen. Mean effective dose for chest CT decreased from 13.2 to 10.7 mSv (18.9% reduction; 95% CI, 18.0%-19.8%). Reductions at individual medical centers ranged from 3.8% to 23.5%. The mean effective dose for abdominal CT decreased from 20.0 to 15.0 mSv (25.0% reduction; 95% CI, 24.3%-25.8%). Reductions at individual medical centers ranged from 10.8% to 34.7%. The number of CT scans that had an effective dose measurement that exceeded benchmarks was reduced considerably by 48% and 54% for chest and abdomen, respectively. After the audit and meeting, head CT doses varied less, although some institutions increased and some decreased mean head CT doses and the proportion above benchmarks. Reviewing institutional doses and sharing dose-optimization best practices resulted in lower radiation doses for chest and abdominal CT and more consistent doses for head CT.
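The headline numbers above are essentially before/after comparisons of mean effective dose. A sketch of that computation on synthetic dose distributions follows, with a bootstrap confidence interval standing in for the study's interval estimate (whose exact construction is not reproduced here).

```python
# Before/after comparison of mean effective dose with a bootstrap 95% CI,
# computed on synthetic (invented) dose distributions.
import numpy as np

rng = np.random.default_rng(7)
before = rng.gamma(shape=4.0, scale=3.3, size=5000)    # ~13 mSv mean, invented
after  = rng.gamma(shape=4.0, scale=2.7, size=5500)    # ~11 mSv mean, invented

def pct_reduction(a, b):
    return 100.0 * (1.0 - b.mean() / a.mean())

point = pct_reduction(before, after)
boot = np.array([pct_reduction(rng.choice(before, before.size),
                               rng.choice(after, after.size))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean effective dose: {before.mean():.1f} -> {after.mean():.1f} mSv "
      f"({point:.1f}% reduction, 95% CI {lo:.1f}-{hi:.1f}%)")
```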
[Benchmarking of university trauma centers in Germany. Research and teaching].
Gebhard, F; Raschke, M; Ruchholtz, S; Meffert, R; Marzi, I; Pohlemann, T; Südkamp, N; Josten, C; Zwipp, H
2011-07-01
Benchmarking is a very popular business process and is meanwhile used in research as well. The aim of the present study is to elucidate key figures of German university trauma departments regarding research and teaching. The data set is based upon the monthly reports provided by the administration of each university. As a result, the study shows that only well-known parameters such as fund-raising and impact factors can be used to benchmark university-based trauma centers. The German federal system does not allow nationwide benchmarking.
The light and heavy Higgs interpretation of the MSSM
Bechtle, Philip; Haber, Howard E.; Heinemeyer, Sven; ...
2017-02-03
We perform a parameter scan of the phenomenological Minimal Supersymmetric Standard Model (pMSSM) with eight parameters, taking into account the experimental Higgs boson results from Run I of the LHC and further low-energy observables. We investigate various MSSM interpretations of the Higgs signal at 125 GeV. First, the light CP-even Higgs boson being the discovered particle: in this case it can impersonate the SM Higgs-like signal either in the decoupling limit, or in the limit of alignment without decoupling. In the latter case, the other states in the Higgs sector can also be light, offering good prospects for upcoming LHC searches and for searches at future colliders. Second, we demonstrate that the heavy CP-even Higgs boson is still a viable candidate to explain the Higgs signal, albeit only in a highly constrained parameter region that will be probed by LHC searches for the CP-odd Higgs boson and the charged Higgs boson in the near future. As a guidance for such searches, we provide new benchmark scenarios that can be employed to maximize the sensitivity of the experimental analysis to this interpretation.
The light and heavy Higgs interpretation of the MSSM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bechtle, Philip; Haber, Howard E.; Heinemeyer, Sven
We perform a parameter scan of the phenomenological Minimal Supersymmetric Standard Model (pMSSM) with eight parameters taking into account the experimental Higgs boson results from Run I of the LHC and further low-energy observables. We investigate various MSSM interpretations of the Higgs signal at 125 GeV. First, we consider the case in which the light CP-even Higgs boson is the discovered particle. In this case it can impersonate the SM Higgs-like signal either in the decoupling limit, or in the limit of alignment without decoupling. In the latter case, the other states in the Higgs sector can also be light, offering good prospects for upcoming LHC searches and for searches at future colliders. Second, we demonstrate that the heavy CP-even Higgs boson is still a viable candidate to explain the Higgs signal, albeit only in a highly constrained parameter region that will be probed by LHC searches for the CP-odd Higgs boson and the charged Higgs boson in the near future. As guidance for such searches we provide new benchmark scenarios that can be employed to maximize the sensitivity of the experimental analysis to this interpretation.
Positively deflected anomaly mediation in the light of the Higgs boson discovery
NASA Astrophysics Data System (ADS)
Okada, Nobuchika; Tran, Hieu Minh
2013-02-01
Anomaly-mediated supersymmetry breaking (AMSB) is a well-known mechanism for flavor-blind transmission of supersymmetry breaking from the hidden sector to the visible sector. However, the pure AMSB scenario suffers from a serious drawback, namely, the tachyonic slepton problem, and needs to be extended. The so-called (positively) deflected AMSB is a simple extension to solve the problem and also provides us with the usual neutralino lightest superpartner as a good candidate for dark matter in the Universe. Motivated by the recent discovery of the Higgs boson at the Large Hadron Collider (LHC) experiments, we perform the parameter scan in the deflected AMSB scenario by taking into account a variety of phenomenological constraints, such as the dark matter relic density and the observed Higgs boson mass around 125-126 GeV. We identify the allowed parameter region and list benchmark mass spectra. We find that in most of the allowed parameter regions, the dark matter neutralino is Higgsino-like and its elastic scattering cross section with nuclei is within the future reach of the direct dark matter search experiments, while (colored) sparticles are quite heavy and their discovery at the LHC is challenging.
Benchmarking biology research organizations using a new, dedicated tool.
van Harten, Willem H; van Bokhorst, Leonard; van Luenen, Henri G A M
2010-02-01
International competition forces fundamental research organizations to assess their relative performance. We present a benchmark tool for scientific research organizations where, contrary to existing models, the group leader is placed in a central position within the organization. We used it in a pilot benchmark study involving six research institutions. Our study shows that data collection and data comparison based on this new tool can be achieved. It proved possible to compare relative performance and organizational characteristics and to generate suggestions for improvement for most participants. However, strict definitions of the parameters used for the benchmark and a thorough insight into the organization of each of the benchmark partners is required to produce comparable data and draw firm conclusions.
Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Ye; Ma, Xiaosong; Liu, Qing Gary
2015-01-01
Parallel application benchmarks are indispensable for evaluating/optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, reconfigure, and often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
Validating Cellular Automata Lava Flow Emplacement Algorithms with Standard Benchmarks
NASA Astrophysics Data System (ADS)
Richardson, J. A.; Connor, L.; Charbonnier, S. J.; Connor, C.; Gallant, E.
2015-12-01
A major existing need in assessing lava flow simulators is a common set of validation benchmark tests. We propose three levels of benchmarks which test model output against increasingly complex standards. First, simulated lava flows should be morphologically identical, given changes in parameter space that should be inconsequential, such as slope direction. Second, lava flows simulated in simple parameter spaces can be tested against analytical solutions or empirical relationships seen in Bingham fluids. For instance, a lava flow simulated on a flat surface should produce a circular outline. Third, lava flows simulated over real world topography can be compared to recent real world lava flows, such as those at Tolbachik, Russia, and Fogo, Cape Verde. Success or failure of emplacement algorithms in these validation benchmarks can be determined using a Bayesian approach, which directly tests the ability of an emplacement algorithm to correctly forecast lava inundation. Here we focus on two posterior metrics, P(A|B) and P(¬A|¬B), which describe the positive and negative predictive value of flow algorithms. This is an improvement on less direct statistics such as model sensitivity and the Jaccard fitness coefficient. We have performed these validation benchmarks on a new, modular lava flow emplacement simulator that we have developed. This simulator, which we call MOLASSES, follows a Cellular Automata (CA) method. The code is developed in several interchangeable modules, which enables quick modification of the distribution algorithm from cell locations to their neighbors. By assessing several different distribution schemes with the benchmark tests, we have improved the performance of MOLASSES to correctly match early stages of the 2012-2013 Tolbachik flow, Kamchatka, Russia, to 80%. We can also evaluate model performance given uncertain input parameters using a Monte Carlo setup. This illuminates sensitivity to model uncertainty.
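The two posterior metrics named above are simply the positive and negative predictive values computed cell by cell over an inundation map. A minimal Python sketch (with toy boolean grids, not MOLASSES output) could look like this:

```python
import numpy as np

def predictive_values(simulated, observed):
    """Positive and negative predictive value of a flow forecast.

    simulated, observed: arrays marking inundated cells (truthy = lava present).
    P(A|B): fraction of cells forecast as inundated that truly were inundated.
    P(notA|notB): fraction of cells forecast as dry that truly stayed dry.
    """
    sim = np.asarray(simulated, dtype=bool)
    obs = np.asarray(observed, dtype=bool)
    ppv = np.logical_and(obs, sim).sum() / sim.sum()
    npv = np.logical_and(~obs, ~sim).sum() / (~sim).sum()
    return ppv, npv

# Toy 4x4 grids (1 = lava present).
sim = np.array([[1, 1, 0, 0],
                [1, 1, 0, 0],
                [0, 1, 1, 0],
                [0, 0, 0, 0]])
obs = np.array([[1, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 1, 1, 1],
                [0, 0, 0, 0]])
print(predictive_values(sim, obs))
```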
Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.
Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J
2016-07-01
Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets and use these to improve their performance. Different authors have performed benchmark analysis on a monthly or yearly basis but their approaches suffer from a time lag between an event, its detection, interpretation and potential actions. The availability of on-line measurement data on many WWTPs should theoretically enable the decrease of the management response time by daily benchmarking. Unfortunately this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performances. In EOS, the energy KPIs take into consideration the pollutant load in order to enable the comparison between different plants. For example, EOS does not analyse the energy consumption but the energy consumption per unit of pollutant load. This approach enables the comparison of performances for plants with different loads or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. By coupling the parameter estimation with an interval based benchmark approach, the authors propose an effective, fast and reproducible way to manage infrequent inlet measurements. Its use enables benchmarking on a daily basis and prepares the ground for further investigation. Copyright © 2016 Elsevier Inc. All rights reserved.
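As a rough illustration of daily benchmarking under sparse laboratory sampling, the sketch below interpolates hypothetical bi-weekly pollutant-load measurements to daily values, attaches a simple ±10% interval to reflect the estimation step, and reports an energy-per-load KPI as an interval. The numbers and the interval width are illustrative assumptions, not the EOS methodology.

```python
import numpy as np

# Hypothetical inputs: daily energy readings (kWh) from on-line sensors and
# sparse laboratory measurements of pollutant load (kg), roughly every 14 days.
days = np.arange(0, 28)
energy_kwh = 1200 + 50 * np.sin(days / 4.0)      # daily on-line measurements
lab_days = np.array([0, 14, 27])                  # infrequent lab sampling days
lab_load_kg = np.array([3100.0, 2900.0, 3300.0])

# Estimate the daily pollutant load by linear interpolation between lab samples,
# with a simple +/- 10% band standing in for the estimation uncertainty.
load_est = np.interp(days, lab_days, lab_load_kg)
load_lo, load_hi = 0.9 * load_est, 1.1 * load_est

# Daily KPI: energy consumption per unit pollutant load, reported as an interval.
kpi = energy_kwh / load_est
kpi_lo = energy_kwh / load_hi
kpi_hi = energy_kwh / load_lo

for d in (0, 7, 14, 21):
    print(f"day {d:2d}: KPI = {kpi[d]:.3f} kWh/kg (interval {kpi_lo[d]:.3f}-{kpi_hi[d]:.3f})")
```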
Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2013-01-01
The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additionally, studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.
Ontology for Semantic Data Integration in the Domain of IT Benchmarking.
Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut
2018-01-01
A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The application of benchmark examples for the assessment of quasi-static delamination propagation capabilities is demonstrated for ANSYS. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation in commercial finite element codes based on the virtual crack closure technique (VCCT). The examples selected are based on two-dimensional finite element models of Double Cantilever Beam (DCB), End-Notched Flexure (ENF), Mixed-Mode Bending (MMB) and Single Leg Bending (SLB) specimens. First, the quasi-static benchmark examples were recreated for each specimen using the current implementation of VCCT in ANSYS. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in the finite element software. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for three-dimensional solid models is required.
Benchmarking Ada tasking on tightly coupled multiprocessor architectures
NASA Technical Reports Server (NTRS)
Collard, Philippe; Goforth, Andre; Marquardt, Matthew
1989-01-01
The development of benchmarks and performance measures for parallel Ada tasking is reported with emphasis on the macroscopic behavior of the benchmark across a set of load parameters. The application chosen for the study was the NASREM model for telerobot control, relevant to many NASA missions. The results of the study demonstrate the potential of parallel Ada in accomplishing the task of developing a control system for a system such as the Flight Telerobotic Servicer using the NASREM framework.
Sánchez, Carolina Ramírez; Taurino, Antonietta; Bozzini, Benedetto
2016-01-01
This paper reports on the quantitative assessment of the oxygen reduction reaction (ORR) electrocatalytic activity of electrodeposited Mn/polypyrrole (PPy) nanocomposites for alkaline aqueous solutions, based on the Rotating Disk Electrode (RDE) method and accompanied by structural characterizations relevant to the establishment of structure-function relationships. The characterization of Mn/PPy films addresses the following: (i) morphology, as assessed by Field-Emission Scanning Electron Microscopy (FE-SEM) and Atomic Force Microscopy (AFM); (ii) local electrical conductivity, as measured by Scanning Probe Microscopy (SPM); and (iii) molecular structure, accessed by Raman Spectroscopy; these data provide the background against which the electrocatalytic activity can be rationalised. For comparison, the properties of Mn/PPy are gauged against those of graphite, PPy, and polycrystalline Pt (poly-Pt). Because the literature lacks accepted protocols for precise catalytic activity measurement at a poly-Pt electrode in alkaline solution using the RDE methodology, we have also worked to obtain an intralaboratory benchmark by identifying some of the time-consuming parameters which drastically affect the reliability and repeatability of the measurement. PMID:28042491
Assessment of prostate cancer detection with a visual-search human model observer
NASA Astrophysics Data System (ADS)
Sen, Anando; Kalantari, Faraz; Gifford, Howard C.
2014-03-01
Early staging of prostate cancer (PC) is a significant challenge, in part because of the small tumor sizes involved. Our long-term goal is to determine realistic diagnostic task performance benchmarks for standard PC imaging with single photon emission computed tomography (SPECT). This paper reports on a localization receiver operating characteristic (LROC) validation study comparing human and model observers. The study made use of a digital anthropomorphic phantom and one-cm tumors within the prostate and pelvic lymph nodes. Uptake values were consistent with data obtained from clinical In-111 ProstaScint scans. The SPECT simulation modeled a parallel-hole imaging geometry with medium-energy collimators. Nonuniform attenuation and distance-dependent detector response were accounted for both in the imaging and the ordered-subset expectation-maximization (OSEM) iterative reconstruction. The observer study made use of 2D slices extracted from reconstructed volumes. All observers were informed about the prostate and nodal locations in an image. Iteration number and the level of postreconstruction smoothing were study parameters. The results show that a visual-search (VS) model observer correlates better with the average detection performance of human observers than does a scanning channelized nonprewhitening (CNPW) model observer.
Resonance Parameter Adjustment Based on Integral Experiments
Sobes, Vladimir; Leal, Luiz; Arbanas, Goran; ...
2016-06-02
Our project seeks to allow coupling of differential and integral data evaluation in a continuous-energy framework and to use the generalized linear least-squares (GLLS) methodology in the TSURFER module of the SCALE code package to update the parameters of a resolved resonance region evaluation. Because the GLLS methodology in TSURFER is identical to the mathematical description of a Bayesian update in SAMMY, the SAMINT code was created to use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Traditionally, SAMMY used differential experimental data to adjust nuclear data parameters. Integral experimental data, such as in the International Criticality Safety Benchmark Experiments Project, remain a tool for validation of completed nuclear data evaluations. SAMINT extracts information from integral benchmarks to aid the nuclear data evaluation process. Later, integral data can be used to resolve any remaining ambiguity between differential data sets, highlight troublesome energy regions, determine key nuclear data parameters for integral benchmark calculations, and improve the nuclear data covariance matrix evaluation. Moreover, SAMINT is not intended to bias nuclear data toward specific integral experiments but should be used to supplement the evaluation of differential experimental data. Using GLLS ensures proper weight is given to the differential data.
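A generic GLLS/Bayesian update consistent with the description above can be written in a few lines of linear algebra. The sketch below is a schematic stand-in, not the TSURFER or SAMINT implementation, and all matrices are synthetic.

```python
import numpy as np

def glls_update(p0, M, S, d, c, V):
    """One generalized linear least-squares (Bayesian) parameter update.

    p0 : prior parameter values (n,)
    M  : prior parameter covariance (n, n)
    S  : sensitivities of integral responses to the parameters (m, n)
    d  : measured integral responses, e.g. benchmark k-eff values (m,)
    c  : calculated responses at p0 (m,)
    V  : covariance of the measured responses (m, m)
    """
    K = S @ M @ S.T + V                 # innovation covariance
    G = M @ S.T @ np.linalg.inv(K)      # gain
    p1 = p0 + G @ (d - c)               # updated parameters
    M1 = M - G @ S @ M                  # updated parameter covariance
    return p1, M1

# Tiny synthetic example: two resonance parameters, one integral benchmark.
p0 = np.array([1.00, 2.00])
M = np.diag([0.04, 0.09])
S = np.array([[0.5, -0.2]])
d, c = np.array([1.005]), np.array([0.990])
V = np.array([[1e-4]])
print(glls_update(p0, M, S, d, c, V))
```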
Significant Scales in Community Structure
NASA Astrophysics Data System (ADS)
Traag, V. A.; Krings, G.; van Dooren, P.
2013-10-01
Many complex networks show signs of modular structure, uncovered by community detection. Although many methods succeed in revealing various partitions, it remains difficult to detect at what scale some partition is significant. This problem shows foremost in multi-resolution methods. We here introduce an efficient method for scanning for resolutions in one such method. Additionally, we introduce the notion of "significance" of a partition, based on subgraph probabilities. Significance is independent of the exact method used, so could also be applied in other methods, and can be interpreted as the gain in encoding a graph by making use of a partition. Using significance, we can determine "good" resolution parameters, which we demonstrate on benchmark networks. Moreover, optimizing significance itself also shows excellent performance. We demonstrate our method on voting data from the European Parliament. Our analysis suggests the European Parliament has become increasingly ideologically divided and that nationality plays no role.
DBH Prediction Using Allometry Described by Bivariate Copula Distribution
NASA Astrophysics Data System (ADS)
Xu, Q.; Hou, Z.; Li, B.; Greenberg, J. A.
2017-12-01
Forest biomass mapping based on single tree detection from airborne laser scanning (ALS) usually depends on an allometric equation that relates diameter at breast height (DBH) with per-tree aboveground biomass. The inability of ALS technology to directly measure DBH leads to the need to predict DBH with other ALS-measured tree-level structural parameters. A copula-based method is proposed in the study to predict DBH with the ALS-measured tree height and crown diameter using a dataset measured in the Lassen National Forest in California. Instead of exploring an explicit mathematical equation that explains the underlying relationship between DBH and other structural parameters, the copula-based prediction method utilizes the dependency between cumulative distributions of these variables, and solves the DBH based on the assumption that, for a single tree, the cumulative probability of each structural parameter is identical. Results show that compared with the benchmark least-squares linear regression and the k-MSN imputation, the copula-based method obtains better DBH accuracy for the Lassen National Forest. To assess the generalization of the proposed method, prediction uncertainty is quantified using bootstrapping techniques that examine the variability of the RMSE of the predicted DBH. We find that the copula distribution is reliable in describing the allometric relationship between tree-level structural parameters, and it contributes to the reduction of prediction uncertainty.
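A stripped-down, single-predictor version of the quantile-matching idea (equal cumulative probabilities for the predictor and the response) might look like the following. The data are synthetic, and the full study fits a bivariate copula rather than matching raw empirical distributions.

```python
import numpy as np

def quantile_match_predict(x_train, y_train, x_new):
    """Predict y for new x by matching empirical cumulative probabilities.

    For each new predictor value (e.g. ALS tree height), find its empirical
    cumulative probability in the training predictor distribution and return
    the response (DBH) quantile at the same probability.
    """
    x_sorted = np.sort(x_train)
    y_sorted = np.sort(y_train)
    n = len(x_sorted)
    # Empirical CDF value of each new observation.
    u = np.searchsorted(x_sorted, x_new, side="right") / n
    u = np.clip(u, 1.0 / n, 1.0)
    # Response quantile at the same cumulative probability.
    return np.quantile(y_sorted, u)

# Synthetic allometry: DBH loosely increasing with height.
rng = np.random.default_rng(0)
height = rng.uniform(5, 40, 500)
dbh = 2.0 * height ** 0.9 + rng.normal(0, 3, 500)
print(quantile_match_predict(height, dbh, np.array([10.0, 20.0, 30.0])))
```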
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
Left-right supersymmetry after the Higgs boson discovery
NASA Astrophysics Data System (ADS)
Frank, Mariana; Ghosh, Dilip Kumar; Huitu, Katri; Rai, Santosh Kumar; Saha, Ipsita; Waltari, Harri
2014-12-01
We perform a thorough analysis of the parameter space of the minimal left-right supersymmetric model in agreement with the LHC data. The model contains left- and right-handed fermionic doublets, two Higgs bidoublets, two Higgs triplet representations, and one singlet, ensuring a charge-conserving vacuum. We impose the condition that the model complies with the experimental constraints on supersymmetric particle masses and on the doubly charged Higgs bosons and require that the parameter space of the model satisfies the LHC data on neutral Higgs signal strengths at 2σ. We choose benchmark scenarios by fixing some basic parameters and scanning over the rest. The lightest supersymmetric particle in our scenarios is always the lightest neutralino. We find that the signals for H → γγ and H → VV* are correlated, while H → bb̄ is anticorrelated with all of the other decay modes, and also that the contribution from singly charged scalars dominates that of the doubly charged scalars in H → γγ and H → Zγ loops, contrary to type II seesaw models. We also illustrate the range of the mass spectrum of the LRSUSY model in light of planned measurements of the branching ratio of H → γγ to the 10% level.
2012-11-02
Scanning Technology (3D LST) and Collaborative Product Lifecycle Management (CPLM) are two technologies that are currently being leveraged by international ... international ship construction organizations to achieve significant cost savings. 3D LST dramatically reduces the time required to scan ship surfaces as ... technology does not meet the accuracy requirements, 0.030" accuracy minimum, for naval shipbuilding. The report delivered to the CSNT shows that if the ...
Simulation-based comprehensive benchmarking of RNA-seq aligners
Baruzzo, Giacomo; Hayer, Katharina E; Kim, Eun Ji; Di Camillo, Barbara; FitzGerald, Garret A; Grant, Gregory R
2018-01-01
Alignment is the first step in most RNA-seq analysis pipelines, and the accuracy of downstream analyses depends heavily on it. Unlike most steps in the pipeline, alignment is particularly amenable to benchmarking with simulated data. We performed a comprehensive benchmarking of 14 common splice-aware aligners for base, read, and exon junction-level accuracy and compared default with optimized parameters. We found that performance varied by genome complexity, and accuracy and popularity were poorly correlated. The most widely cited tool underperforms for most metrics, particularly when using default settings. PMID:27941783
Larue, Ruben T H M; Defraene, Gilles; De Ruysscher, Dirk; Lambin, Philippe; van Elmpt, Wouter
2017-02-01
Quantitative analysis of tumour characteristics based on medical imaging is an emerging field of research. In recent years, quantitative imaging features derived from CT, positron emission tomography and MR scans were shown to be of added value in the prediction of outcome parameters in oncology, in what is called the radiomics field. However, results might be difficult to compare owing to a lack of standardized methodologies to conduct quantitative image analyses. In this review, we aim to present an overview of the current challenges, technical routines and protocols that are involved in quantitative imaging studies. The first issue that should be overcome is the dependency of several features on the scan acquisition and image reconstruction parameters. Adopting consistent methods in the subsequent target segmentation step is equally crucial. To further establish robust quantitative image analyses, standardization or at least calibration of imaging features based on different feature extraction settings is required, especially for texture- and filter-based features. Several open-source and commercial software packages to perform feature extraction are currently available, all with slightly different functionalities, which makes benchmarking quite challenging. The number of imaging features calculated is typically larger than the number of patients studied, which emphasizes the importance of proper feature selection and prediction model-building routines to prevent overfitting. Even though many of these challenges still need to be addressed before quantitative imaging can be brought into daily clinical practice, radiomics is expected to be a critical component for the integration of image-derived information to personalize treatment in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Yibao; Yan Yulong; Nath, Ravinder
2012-08-01
Purpose: To develop a quantitative method for the estimation of kV cone beam computed tomography (kVCBCT) doses in pediatric patients undergoing image-guided radiotherapy. Methods and Materials: Forty-two children were retrospectively analyzed in subgroups of different scanned regions: one group in the head-and-neck and the other group in the pelvis. Critical structures in planning CT images were delineated on an Eclipse treatment planning system before being converted into CT phantoms for Monte Carlo simulations. A benchmarked EGS4 Monte Carlo code was used to calculate three-dimensional dose distributions of kVCBCT scans with full-fan high-quality head or half-fan pelvis protocols predefined by the manufacturer. Based on planning CT images and structures exported in DICOM RT format, occipital-frontal circumferences (OFC) were calculated for head-and-neck patients using DICOMan software. Similarly, hip circumferences (HIP) were acquired for the pelvic group. Correlations between mean organ doses and age, weight, OFC, and HIP values were analyzed with SigmaPlot software suite, where regression performances were analyzed with relative dose differences (RDD) and coefficients of determination (R²). Results: kVCBCT-contributed mean doses to all critical structures decreased monotonically with studied parameters, with a steeper decrease in the pelvis than in the head. Empirical functions have been developed for a dose estimation of the major organs at risk in the head and pelvis, respectively. If evaluated with physical parameters other than age, a mean RDD of up to 7.9% was observed for all the structures in our population of 42 patients. Conclusions: kVCBCT doses are highly correlated with patient size. According to this study, weight can be used as a primary index for dose assessment in both head and pelvis scans, while OFC and HIP may serve as secondary indices for dose estimation in corresponding regions. With the proposed empirical functions, it is possible to perform an individualized quantitative dose assessment of kVCBCT scans.
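As an illustration of fitting an empirical dose-versus-size function and reporting RDD and R², the sketch below uses made-up dose/weight pairs and an assumed exponential-plus-offset form; the study's actual functional forms and coefficients are those it publishes, not these.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (illustrative) data: mean organ dose from a kVCBCT scan versus
# patient weight. Doses decrease monotonically with patient size.
weight_kg = np.array([10, 15, 20, 25, 30, 40, 50, 60], dtype=float)
dose_mgy = np.array([9.5, 8.1, 7.0, 6.2, 5.6, 4.6, 3.9, 3.4])

def empirical_dose(w, a, b, c):
    """Simple assumed form: exponentially decreasing dose plus an offset."""
    return a * np.exp(-b * w) + c

popt, pcov = curve_fit(empirical_dose, weight_kg, dose_mgy, p0=(10.0, 0.03, 1.0))
pred = empirical_dose(weight_kg, *popt)

# Goodness-of-fit metrics mentioned above: relative dose difference and R^2.
rdd = np.abs(pred - dose_mgy) / dose_mgy
ss_res = np.sum((dose_mgy - pred) ** 2)
ss_tot = np.sum((dose_mgy - dose_mgy.mean()) ** 2)
print("fit parameters:", popt)
print("mean RDD: %.1f%%, R^2: %.3f" % (100 * rdd.mean(), 1 - ss_res / ss_tot))
```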
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
In this paper we investigate the effect of tuning the control parameters of the Lozi chaotic map employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm. Three different benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
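For reference, one common parameterization of the Lozi map and its use as a drop-in replacement for the uniform random numbers in the PSO velocity update is sketched below; the map constants and PSO coefficients are typical textbook values, not the tuned settings examined in the paper.

```python
import numpy as np

def lozi_sequence(n, a=1.7, b=0.5, x0=0.1, y0=0.1):
    """Generate n chaotic numbers in [0, 1] from the Lozi map.

    One common parameterization: x_{k+1} = 1 - a*|x_k| + y_k,  y_{k+1} = b*x_k.
    The raw iterates are rescaled to the unit interval so they can stand in
    for uniform pseudo-random numbers in the PSO velocity update.
    """
    xs = np.empty(n)
    x, y = x0, y0
    for k in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        xs[k] = x
    return (xs - xs.min()) / (xs.max() - xs.min())

def pso_velocity(v, x, pbest, gbest, r1, r2, w=0.729, c1=1.49445, c2=1.49445):
    """Standard PSO velocity update with chaotic numbers replacing rand()."""
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# Example: one velocity update of a 3-dimensional particle using two numbers
# drawn from the chaotic sequence.
chaos = lozi_sequence(100)
v = pso_velocity(np.zeros(3), np.array([1.0, 2.0, 3.0]),
                 np.array([0.5, 1.5, 2.5]), np.zeros(3), chaos[-2], chaos[-1])
print(v)
```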
Gaia FGK benchmark stars: Metallicity
NASA Astrophysics Data System (ADS)
Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.
2014-04-01
Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolutions and high signal-to-noise ratios. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133
Machine characterization and benchmark performance prediction
NASA Technical Reports Server (NTRS)
Saavedra-Barrera, Rafael H.
1988-01-01
From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine or to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
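The prediction step reduces to an inner product between the machine characterization (time per operation) and the program characterization (operation counts). A toy sketch, with invented operation categories and timings rather than the FORTRAN construct set used by the analyzer:

```python
import numpy as np

# Hypothetical machine characterization: time per source-language operation (microseconds).
machine = {"flop_add": 0.05, "flop_mul": 0.07, "mem_load": 0.10, "branch": 0.02}

# Hypothetical program characterization: dynamic counts of the same operations
# observed in a benchmark run.
program = {"flop_add": 4.0e8, "flop_mul": 3.5e8, "mem_load": 6.0e8, "branch": 1.0e8}

# Predicted run time is the inner product of the two characterizations.
ops = sorted(machine)
t_us = np.dot([machine[o] for o in ops], [program[o] for o in ops])
print(f"predicted run time: {t_us * 1e-6:.1f} s")
```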
Electroweak supersymmetry in the NMSSM
NASA Astrophysics Data System (ADS)
Cheng, Taoli; Li, Tianjun
2013-07-01
To explain all the available experimental results, we have previously proposed electroweak supersymmetry (EWSUSY), where the squarks and/or gluino are heavy around a few TeVs while the sleptons, sneutrinos, bino, winos, and/or Higgsinos are light within 1 TeV. In the next to minimal supersymmetric Standard Model, we perform systematic χ² analyses of parameter space scans for three EWSUSY scenarios: (I) R-parity conservation and one dark matter candidate, (II) R-parity conservation and multicomponent dark matter, (III) R-parity violation. We obtain a minimal χ²/(degree of freedom) of 10.2/15, 9.6/14, and 9.2/14 respectively for scenarios I, II, and III. Considering the constraints from the LHC neutralino/chargino and slepton searches, we find that the majority of viable parameter space preferred by the muon anomalous magnetic moment has been excluded except for the parameter space with moderate to large tan β (≳ 8). In particular, the most favorable parameter space has relatively large tan β, moderate λ, small μeff, heavy squarks/gluino, and the second lightest CP-even neutral Higgs boson with mass around 125 GeV. In addition, if the left-handed smuon is nearly degenerate with or heavier than the wino, there is no definite bound on the wino mass. Otherwise, the wino with mass up to ~450 GeV has been excluded. Furthermore, we present several benchmark points for scenarios I and II, and briefly discuss the prospects of the EWSUSY searches at the 14 TeV LHC and ILC.
Arithmetic Data Cube as a Data Intensive Benchmark
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabano, Leonid
2003-01-01
Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities to handle large distributed data sets. The ADC stresses all levels of grid memory by producing 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through choice of the tuple parameters.
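To make the view-generation idea concrete, the sketch below enumerates all 2^d group-by views of a tiny synthetic data set; it is a plain-Python illustration of the data cube operator, not the ADC benchmark code.

```python
from itertools import combinations

# Toy data set of d-tuples: each record has d attribute values and a measure.
records = [
    {"a": 1, "b": 2, "c": 1, "measure": 10},
    {"a": 1, "b": 3, "c": 1, "measure": 7},
    {"a": 2, "b": 2, "c": 2, "measure": 5},
]
dims = ["a", "b", "c"]

# The data cube operator produces one group-by view per subset of the d
# dimensions: 2^d views in total (the empty subset is the grand total).
views = {}
for k in range(len(dims) + 1):
    for subset in combinations(dims, k):
        view = {}
        for r in records:
            key = tuple(r[d] for d in subset)
            view[key] = view.get(key, 0) + r["measure"]
        views[subset] = view

print(len(views), "views")          # 8 = 2^3
print(views[("a",)])                # measure totals grouped by attribute a
```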
NASA Astrophysics Data System (ADS)
Kaskhedikar, Apoorva Prakash
According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between the energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Subsequently, significant correlations were identified between EUIs and CBECS variables. Other than floor area, some of the important variables were number of workers, location, number of PCs and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely ENERGY STAR's Portfolio Manager. This tool relies on standard linear regression methods, which are only able to handle continuous variables. The proposed model uses a data mining technique and was found to perform slightly better than Portfolio Manager. The broader impact of the new benchmarking methodology proposed here is that it allows for identifying important categorical variables, and then incorporating them in a local, as against a global, model framework for EUI pertinent to the building type. The ability to identify and rank the important variables is of great importance in the practical implementation of benchmarking tools which rely on query-based building and HVAC variable filters specified by the user.
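A minimal sketch of the variable-ranking step, using scikit-learn's random forest importances on synthetic stand-in data (the variable names and relationships below are invented, not CBECS fields):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical stand-in for CBECS-style records: predictors include floor area,
# number of workers, number of PCs and a climate indicator; the response is
# energy use intensity (EUI).
rng = np.random.default_rng(42)
n = 400
floor_area = rng.uniform(5e3, 5e5, n)
workers = rng.integers(5, 2000, n)
pcs = workers * rng.uniform(0.5, 1.5, n)
hot_climate = rng.integers(0, 2, n)
eui = 40 + 0.02 * workers + 0.01 * pcs + 15 * hot_climate + rng.normal(0, 5, n)

X = np.column_stack([floor_area, workers, pcs, hot_climate])
names = ["floor_area", "workers", "pcs", "hot_climate"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, eui)

# Rank the predictors by impurity-based importance, as a first cut at the
# "most influential parameters" step described above.
for name, imp in sorted(zip(names, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:12s} {imp:.3f}")
```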
NASA Astrophysics Data System (ADS)
Bäumer, C.; Janson, M.; Timmermann, B.; Wulff, J.
2018-04-01
To assess whether apertures should be mounted upstream or downstream of a range shifting block when these field-shaping devices are combined with the pencil-beam scanning delivery technique (PBS). The lateral dose fall-off served as a benchmark parameter. Both options realizing PBS-with-apertures were compared to the uniform scanning mode. We also evaluated the difference regarding the out-of-field dose caused by interactions of protons in beam-shaping devices. The potential benefit of the downstream configuration over the upstream configuration was estimated analytically. Guided by this theoretical evaluation, a mechanical adapter was developed which transforms the upstream configuration provided by the proton machine vendor to a downstream configuration. Transversal dose profiles were calculated with the Monte-Carlo based dose engine of the commercial treatment planning system RayStation 6. Two-dimensional dose planes were measured with an ionization chamber array and a scintillation detector at different depths and compared to the calculation. Additionally, a clinical example for the irradiation of the orbit was compared for both PBS options and a uniform scanning treatment plan. Assuming the same air gap, the lateral dose fall-off at the field edge at a depth of a few centimeters is 20% smaller for the aperture-downstream configuration than for the upstream one. For both options of PBS-with-apertures the dose fall-off is larger than in uniform scanning delivery mode if the minimum accelerator energy is 100 MeV. The RayStation treatment planning system calculated the width of the lateral dose fall-off with an accuracy of typically 0.1 mm–0.3 mm. Although experiments and calculations indicate a ranking of the three delivery options regarding lateral dose fall-off, there seems to be a limited impact on a multi-field treatment plan.
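The benchmark parameter used above, the lateral dose fall-off, can be quantified as the 80%-20% penumbra width of a profile across the field edge. A small sketch with a synthetic edge profile, assuming the conventional 80%/20% levels:

```python
import numpy as np

def lateral_falloff_width(x_mm, dose, hi=0.8, lo=0.2):
    """Width of the lateral dose fall-off (80%-20% penumbra) of a profile.

    x_mm : lateral positions across the field edge (monotonically increasing)
    dose : dose values, normalised so that the in-field plateau is ~1.0 and
           the dose decreases with x across the edge
    """
    d = np.asarray(dose, dtype=float)
    # Interpolate the positions where the falling profile crosses hi and lo.
    x_hi = np.interp(hi, d[::-1], np.asarray(x_mm)[::-1])
    x_lo = np.interp(lo, d[::-1], np.asarray(x_mm)[::-1])
    return x_lo - x_hi

# Synthetic edge profile with a smooth fall-off.
x = np.linspace(-30, 30, 601)
dose = 0.5 * (1 - np.tanh(x / 5.0))
print(f"80%-20% fall-off width: {lateral_falloff_width(x, dose):.1f} mm")
```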
Results of a Multi-Institutional Benchmark Test for Cranial CT/MR Image Registration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulin, Kenneth; Urie, Marcia M., E-mail: murie@qarc.or; Cherlow, Joel M.
2010-08-01
Purpose: Variability in computed tomography/magnetic resonance imaging (CT/MR) cranial image registration was assessed using a benchmark case developed by the Quality Assurance Review Center to credential institutions for participation in Children's Oncology Group Protocol ACNS0221 for treatment of pediatric low-grade glioma. Methods and Materials: Two DICOM image sets, an MR and a CT of the same patient, were provided to each institution. A small target in the posterior occipital lobe was readily visible on two slices of the MR scan and not visible on the CT scan. Each institution registered the two scans using whatever software system and method it ordinarily uses for such a case. The target volume was then contoured on the two MR slices, and the coordinates of the center of the corresponding target in the CT coordinate system were reported. The average of all submissions was used to determine the true center of the target. Results: Results are reported from 51 submissions representing 45 institutions and 11 software systems. The average error in the position of the center of the target was 1.8 mm (1 standard deviation = 2.2 mm). The least variation in position was in the lateral direction. Manual registration gave significantly better results than did automatic registration (p = 0.02). Conclusion: When MR and CT scans of the head are registered with currently available software, there is inherent uncertainty of approximately 2 mm (1 standard deviation), which should be considered when defining planning target volumes and PRVs for organs at risk on registered image sets.
Computational scalability of large size image dissemination
NASA Astrophysics Data System (ADS)
Kooper, Rob; Bajcsy, Peter
2011-01-01
We have investigated the computational scalability of image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective: the image is either larger than the display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (smaller number but larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
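A pyramid for such dissemination can be sketched as repeated factor-of-two downsampling until a tile-sized level is reached; the snippet below uses Pillow and a hypothetical file name, and is not the pipeline benchmarked in the paper.

```python
from PIL import Image

def build_pyramid(path, tile_min=256):
    """Build an image pyramid by repeated factor-of-two downsampling.

    Each level halves both dimensions until the longer side drops below
    tile_min; viewers such as Seadragon stream tiles from such levels.
    """
    levels = []
    img = Image.open(path)
    while max(img.size) >= tile_min:
        levels.append(img)
        w, h = img.size
        img = img.resize((max(1, w // 2), max(1, h // 2)), Image.LANCZOS)
    levels.append(img)
    return levels

# Example usage (hypothetical file name):
# pyramid = build_pyramid("historical_map.jpg")
# for i, level in enumerate(pyramid):
#     level.save(f"level_{i}.jpg", quality=90)
```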
Ultracool dwarf benchmarks with Gaia primaries
NASA Astrophysics Data System (ADS)
Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.
2017-10-01
We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ˜24 000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a false alarm probability <10⁻⁴, based on projected separation and expected constraints on common distance, common proper motion and/or common radial velocity. Within this bulk population, we identify smaller target subsets of rarer systems whose collective properties still span the full parameter space of the population, as well as systems containing primary stars that are good age calibrators. Our simulation analysis leads to a series of recommendations for candidate selection and observational follow-up that could identify ˜500 diverse Gaia benchmarks. As a test of the veracity of our methodology and simulations, our initial search uses UKIRT Infrared Deep Sky Survey and Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, Radial Velocity Experiment, Large sky Area Multi-Object fibre Spectroscopic Telescope and Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.
Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system
NASA Astrophysics Data System (ADS)
Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.
2017-05-01
We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.
Scalable randomized benchmarking of non-Clifford gates
NASA Astrophysics Data System (ADS)
Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay
Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly(n)-sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.
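For orientation, the sketch below fits the standard single-exponential randomized-benchmarking decay A·p^m + B to synthetic survival data and converts the decay parameter to an average fidelity; the procedure described above generalizes this to non-Clifford groups with two independent parameters, which the sketch does not capture.

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Standard randomized-benchmarking decay model: A * p**m + B."""
    return A * p ** m + B

# Synthetic survival probabilities versus sequence length m.
rng = np.random.default_rng(1)
m = np.arange(1, 200, 10)
true_p = 0.985
survival = 0.5 * true_p ** m + 0.5 + rng.normal(0, 0.005, m.size)

(A, p, B), _ = curve_fit(rb_decay, m, survival, p0=(0.5, 0.98, 0.5))

# The depolarizing parameter p maps to an average gate fidelity
# F = p + (1 - p) / d, with d = 2**n the Hilbert-space dimension
# (here n = 1 for illustration).
d = 2
print(f"p = {p:.4f}, average fidelity = {p + (1 - p) / d:.4f}")
```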
Muravyev, Nikita V; Monogarov, Konstantin A; Asachenko, Andrey F; Nechaev, Mikhail S; Ananyev, Ivan V; Fomenkov, Igor V; Kiselev, Vitaly G; Pivkina, Alla N
2016-12-21
Thermal decomposition of a novel promising high-performance explosive dihydroxylammonium 5,5'-bistetrazole-1,1'-diolate (TKX-50) was studied using a number of thermal analysis techniques (thermogravimetry, differential scanning calorimetry, and accelerating rate calorimetry, ARC). To obtain more comprehensive insight into the kinetics and mechanism of TKX-50 decomposition, a variety of complementary thermoanalytical experiments were performed under various conditions. Non-isothermal and isothermal kinetics were obtained at both atmospheric and low (up to 0.3 Torr) pressures. The gas products of thermolysis were detected in situ using IR spectroscopy, and the structure of solid-state decomposition products was determined by X-ray diffraction and scanning electron microscopy. Diammonium 5,5'-bistetrazole-1,1'-diolate (ABTOX) was directly identified to be the most important intermediate of the decomposition process. The important role of bistetrazole diol (BTO) in the mechanism of TKX-50 decomposition was also rationalized by thermolysis experiments with mixtures of TKX-50 and BTO. Several widely used thermoanalytical data processing techniques (Kissinger, isoconversional, formal kinetic approaches, etc.) were independently benchmarked against the ARC data, which are more germane to the real storage and application conditions of energetic materials. Our study revealed that none of the Arrhenius parameters reported before can properly describe the complex two-stage decomposition process of TKX-50. In contrast, we showed the superior performance of the isoconversional methods combined with isothermal measurements, which yielded the most reliable kinetic parameters of TKX-50 thermolysis. In contrast with the existing reports, the thermal stability of TKX-50 was determined in the ARC experiments to be lower than that of hexogen, but close to that of hexanitrohexaazaisowurtzitane (CL-20).
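As a reminder of what a Kissinger analysis does (one of the techniques benchmarked above), the sketch below extracts Arrhenius parameters from peak temperatures at several heating rates; the data are invented, and, as the abstract notes, such single-step parameters cannot describe the two-stage decomposition of TKX-50.

```python
import numpy as np

# Hypothetical DSC peak temperatures Tp (K) at several heating rates beta (K/min).
beta = np.array([2.0, 5.0, 10.0, 20.0])
Tp = np.array([483.0, 492.0, 499.0, 507.0])

# Kissinger analysis: ln(beta / Tp^2) = ln(A*R/Ea) - Ea / (R * Tp),
# so a straight-line fit of ln(beta/Tp^2) against 1/Tp gives Ea from the slope.
R = 8.314  # J/(mol K)
y = np.log(beta / Tp ** 2)
x = 1.0 / Tp
slope, intercept = np.polyfit(x, y, 1)
Ea = -slope * R                   # activation energy, J/mol
A = np.exp(intercept) * Ea / R    # pre-exponential factor, 1/min

print(f"Ea = {Ea / 1000:.0f} kJ/mol, log10(A) = {np.log10(A):.1f} (1/min)")
```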
Mojżeszek, N; Farah, J; Kłodowska, M; Ploc, O; Stolarczyk, L; Waligórski, M P R; Olko, P
2017-02-01
To measure the environmental doses from stray neutrons in the vicinity of a solid slab phantom as a function of beam energy, field size and modulation width, using the proton pencil beam scanning (PBS) technique. Measurements were carried out using two extended range WENDI-II rem-counters and three tissue equivalent proportional counters. Detectors were suitably placed at different distances around the RW3 slab phantom. Beam irradiation parameters were varied to cover the clinical ranges of proton beam energies (100-220 MeV), field sizes ((2×2)-(20×20) cm²) and modulation widths (0-15 cm). For pristine proton peak irradiations, large variations of neutron H*(10)/D were observed with changes in beam energy and field size, while these were less dependent on modulation widths. H*(10)/D for pristine proton pencil beams varied between 0.04 μSv Gy⁻¹ at beam energy 100 MeV and a (2×2) cm² field at 2.25 m distance and 90° angle with respect to the beam axis, and 72.3 μSv Gy⁻¹ at beam energy 200 MeV and a (20×20) cm² field at 1 m distance along the beam axis. The obtained results will be useful in benchmarking Monte Carlo calculations of proton radiotherapy in PBS mode and in estimating the exposure to stray radiation of the patient. Such estimates may be facilitated by the obtained best-fitted simple analytical formulae relating the stray neutron doses at points of interest with beam irradiation parameters. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
A flexibly shaped space-time scan statistic for disease outbreak detection and monitoring.
Takahashi, Kunihiko; Kulldorff, Martin; Tango, Toshiro; Yih, Katherine
2008-04-11
Early detection of disease outbreaks enables public health officials to implement disease control and prevention measures at the earliest possible time. A time periodic geographical disease surveillance system based on a cylindrical space-time scan statistic has been used extensively for disease surveillance along with the SaTScan software. In the purely spatial setting, many different methods have been proposed to detect spatial disease clusters. In particular, some spatial scan statistics are aimed at detecting irregularly shaped clusters which may not be detected by the circular spatial scan statistic. Based on the flexible purely spatial scan statistic, we propose a flexibly shaped space-time scan statistic for early detection of disease outbreaks. The performance of the proposed space-time scan statistic is compared with that of the cylindrical scan statistic using benchmark data. In order to compare their performances, we have developed a space-time power distribution by extending the purely spatial bivariate power distribution. Daily syndromic surveillance data in Massachusetts, USA, are used to illustrate the proposed test statistic. The flexible space-time scan statistic is well suited for detecting and monitoring disease outbreaks in irregularly shaped areas.
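The cylinder evaluation at the heart of both the cylindrical and the flexible scan statistic is a Poisson log-likelihood ratio; a minimal sketch (with made-up observed/expected counts, and without the Monte Carlo replication used to assess significance) is:

```python
import numpy as np

def poisson_llr(c, e, C):
    """Log-likelihood ratio of a space-time cylinder (Poisson model).

    c : observed cases inside the cylinder
    e : expected cases inside the cylinder under the null hypothesis
    C : total observed cases in the study region and period
    Only cylinders with an excess (c > e) are of interest for outbreak detection.
    """
    if c <= e:
        return 0.0
    return c * np.log(c / e) + (C - c) * np.log((C - c) / (C - e))

# Example: evaluate a handful of candidate cylinders and keep the maximum.
candidates = [(12, 6.0), (30, 25.0), (8, 2.5)]   # (observed, expected) pairs
C_total = 200
llrs = [poisson_llr(c, e, C_total) for c, e in candidates]
print("best cluster:", candidates[int(np.argmax(llrs))], "LLR =", max(llrs))
```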
NASA Technical Reports Server (NTRS)
Waszak, Martin R.; Fung, Jimmy
1998-01-01
This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data. Unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the transfer function parameters. An analysis of the behavior of the actuators over time to assess the effects of wear and aerodynamic load by using the transfer function models is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
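A generic version of fitting a low-order actuator transfer function to frequency response data by weighted least squares might look like the sketch below; the model order, weights, and data are assumptions for illustration, not the BACT actuator models.

```python
import numpy as np
from scipy.optimize import least_squares

def actuator_tf(w, wn, zeta, k):
    """Second-order model H(jw) = k*wn^2 / ((jw)^2 + 2*zeta*wn*jw + wn^2)."""
    s = 1j * w
    return k * wn ** 2 / (s ** 2 + 2 * zeta * wn * s + wn ** 2)

def residuals(params, w, h_meas, weight):
    wn, zeta, k = params
    err = actuator_tf(w, wn, zeta, k) - h_meas
    # Stack weighted real and imaginary parts so the solver sees real residuals.
    return weight * np.concatenate([err.real, err.imag])

# Synthetic "measured" frequency response of an actuator (rad/s).
w = np.linspace(1, 300, 120)
h_true = actuator_tf(w, 150.0, 0.6, 1.0)
rng = np.random.default_rng(3)
h_meas = h_true + 0.01 * (rng.normal(size=w.size) + 1j * rng.normal(size=w.size))

weight = np.ones(2 * w.size)          # uniform weighting for this example
fit = least_squares(residuals, x0=[100.0, 0.5, 1.0], args=(w, h_meas, weight))
print("wn, zeta, k =", fit.x)
```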
The Medical Library Association Benchmarking Network: development and implementation.
Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C; Smith, Bernie Todd
2006-04-01
This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program.
The Medical Library Association Benchmarking Network: development and implementation*
Dudden, Rosalind Farnam; Corcoran, Kate; Kaplan, Janice; Magouirk, Jeff; Rand, Debra C.; Smith, Bernie Todd
2006-01-01
Objective: This article explores the development and implementation of the Medical Library Association (MLA) Benchmarking Network from the initial idea and test survey, to the implementation of a national survey in 2002, to the establishment of a continuing program in 2004. Started as a program for hospital libraries, it has expanded to include other nonacademic health sciences libraries. Methods: The activities and timelines of MLA's Benchmarking Network task forces and editorial board from 1998 to 2004 are described. Results: The Benchmarking Network task forces successfully developed an extensive questionnaire with parameters of size and measures of library activity and published a report of the data collected by September 2002. The data were available to all MLA members in the form of aggregate tables. Utilization of Web-based technologies proved feasible for data intake and interactive display. A companion article analyzes and presents some of the data. MLA has continued to develop the Benchmarking Network with the completion of a second survey in 2004. Conclusions: The Benchmarking Network has provided many small libraries with comparative data to present to their administrators. It is a challenge for the future to convince all MLA members to participate in this valuable program. PMID:16636702
NASA Technical Reports Server (NTRS)
Carey, L. D.; Petersen, W. A.; Deierling, W.; Roeder, W. P.
2009-01-01
A new weather radar is being acquired for use in support of America's space program at Cape Canaveral Air Force Station, NASA Kennedy Space Center, and Patrick AFB on the east coast of central Florida. This new radar replaces the modified WSR-74C at Patrick AFB that has been in use since 1984. The new radar is a Radtec TDR 43-250, which has Doppler and dual polarization capability. A new fixed scan strategy was designed to best support the space program. The fixed scan strategy represents a complex compromise between many competing factors and relies on climatological heights of various temperatures that are important for improved lightning forecasting and evaluation of Lightning Launch Commit Criteria (LCC), which are the weather rules to avoid lightning strikes to in-flight rockets. The 0°C to -20°C layer is vital since most generation of electric charge occurs within it, so it is critical in evaluating Lightning LCC and in forecasting lightning; these are two of the most important duties of the 45th Weather Squadron (45 WS). While the fixed scan strategy that covers most of the climatological variation of the 0°C to -20°C levels with high resolution ensures that these critical temperatures are well covered most of the time, it also means that on any particular day the radar spends precious time scanning at angles covering less important heights. The goal of this project is to develop a user-friendly Interactive Data Language (IDL) computer program that will automatically generate optimized radar scan strategies that adapt to user input of the temperature profile and other important parameters. By using only the required scan angles output by the temperature-profile-adaptive scan strategy program, faster update times for volume scans and/or collection of more samples per gate for better data quality are possible, while maintaining high resolution at the critical temperature levels. The temperature-profile-adaptive technique will also take into account earth curvature and refraction when geo-locating the radar beam (i.e., beam height and arc distance), including non-standard refraction based on the user-input temperature profile. In addition to temperature profile adaptivity, this paper will also summarize the other requirements for this scan strategy program, such as detection of low-level boundaries, detection of anvil clouds, reducing the Cone of Silence, and allowing for times when deep convective clouds will not occur. The adaptive technique will be carefully compared to and benchmarked against the new fixed scan strategy. Specific environmental scenarios in which the adaptive scan strategy is able to optimize and improve coverage and resolution at critical heights, scan time, and/or sample numbers relative to the fixed scan strategy will be presented.
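A minimal sketch of the geometry such a scan-strategy generator has to handle is given below: beam height under the standard 4/3 effective-earth-radius refraction model, plus a greedy selection of elevation angles covering an input 0°C to -20°C altitude layer at a fixed range. The numbers, beamwidth and greedy rule are illustrative assumptions, not the 45 WS program's algorithm (which also handles non-standard refraction, low-level boundaries, anvils and the Cone of Silence).

    import math

    A_E = 4.0 / 3.0 * 6371.0e3   # effective earth radius (m), standard refraction

    def beam_height(r, elev_deg, h_radar=10.0):
        # center-of-beam height (m) at slant range r (m) for elevation elev_deg
        el = math.radians(elev_deg)
        return math.sqrt(r * r + A_E * A_E + 2.0 * r * A_E * math.sin(el)) - A_E + h_radar

    def elevations_covering_layer(r, h_bottom, h_top, beamwidth_deg=1.0):
        # greedy list of elevation angles whose beams tile the layer h_bottom..h_top (m) at range r
        elevs, el = [], 0.1
        while beam_height(r, el) < h_top and el < 45.0:
            if beam_height(r, el + 0.5 * beamwidth_deg) > h_bottom:
                elevs.append(round(el, 2))
            el += beamwidth_deg          # step by one beamwidth for contiguous coverage
        return elevs

    # e.g. cover 4.5-9.5 km (a possible 0 C to -20 C layer) at 60 km range
    print(elevations_covering_layer(60.0e3, 4500.0, 9500.0))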
Constraining axion-like-particles with hard X-ray emission from magnetars
NASA Astrophysics Data System (ADS)
Fortin, Jean-François; Sinha, Kuver
2018-06-01
Axion-like particles (ALPs) produced in the core of a magnetar will convert to photons in the magnetosphere, leading to possible signatures in the hard X-ray band. We perform a detailed calculation of the ALP-to-photon conversion probability in the magnetosphere, recasting the coupled differential equations that describe ALP-photon propagation into a form that is efficient for large scale numerical scans. We show the dependence of the conversion probability on the ALP energy, mass, ALP-photon coupling, magnetar radius, surface magnetic field, and the angle between the magnetic field and direction of propagation. Along the way, we develop an analytic formalism to perform similar calculations in more general n-state oscillation systems. Assuming ALP emission rates from the core that are just subdominant to neutrino emission, we calculate the resulting constraints on the ALP mass versus ALP-photon coupling space, taking SGR 1806-20 as an example. In particular, we take benchmark values for the magnetar radius and core temperature, and constrain the ALP parameter space by the requirement that the luminosity from ALP-to-photon conversion should not exceed the total observed luminosity from the magnetar. The resulting constraints are competitive with constraints from helioscope experiments in the relevant part of ALP parameter space.
LDEF polymeric materials: A summary of Langley characterization
NASA Technical Reports Server (NTRS)
Young, Philip R.; Slemp, Wayne S.; Whitley, Karen S.; Kalil, Carol R.; Siochi, Emilie J.; Shen, James Y.; Chang, A. C.
1995-01-01
The NASA Long Duration Exposure Facility (LDEF) enabled the exposure of a wide variety of materials to the low earth orbit (LEO) environment. This paper provides a summary of research conducted at the Langley Research Center into the response of selected LDEF polymers to this environment. Materials examined include graphite fiber reinforced epoxy, polysulfone, and additional polyimide matrix composites, films of FEP Teflon, Kapton, several experimental high performance polyimides, and films of more traditional polymers such as poly(vinyl toluene) and polystyrene. Exposure duration was either 10 months or 5.8 years. Flight and control specimens were characterized by a number of analytical techniques including ultraviolet-visible and infrared spectroscopy, thermal analysis, scanning electron and scanning tunneling microscopy, x-ray photoelectron spectroscopy, and, in some instances, selected solution property measurements. Characterized effects were found to be primarily surface phenomena. These effects included atomic oxygen-induced erosion of unprotected surfaces and ultraviolet-induced discoloration and changes in selected molecular level parameters. No gross changes in molecular structure or glass transition temperature were noted. The intent of this characterization is to increase our fundamental knowledge of space environmental effects as an aid in developing new and improved polymers for space application. A secondary objective is to develop benchmarks to enhance our methodology for the ground-based simulation of environmental effects so that polymer performance in space can be more reliably predicted.
Parameters of Higher Education Quality Assessment System at Universities
ERIC Educational Resources Information Center
Savickiene, Izabela
2005-01-01
The article analyses the system of institutional quality assessment at universities and lays foundation to its functional, morphological and processual parameters. It also presents the concept of the system and discusses the distribution of systems into groups, defines information, accountability, improvement and benchmarking functions of higher…
Development of a Benchmark Example for Delamination Fatigue Growth Prediction
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2010-01-01
The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging, but further assessment for mixed-mode delamination is required.
Post-LHC7 fine-tuning in the minimal supergravity/CMSSM model with a 125 GeV Higgs boson
NASA Astrophysics Data System (ADS)
Baer, Howard; Barger, Vernon; Huang, Peisi; Mickelson, Dan; Mustafayev, Azar; Tata, Xerxes
2013-02-01
The recent discovery of a 125 GeV Higgs-like resonance at LHC, coupled with the lack of evidence for weak scale supersymmetry (SUSY), has severely constrained SUSY models such as minimal supergravity (mSUGRA)/CMSSM. As LHC probes deeper into SUSY model parameter space, the little hierarchy problem (how to reconcile the Z and Higgs boson mass scale with the scale of SUSY breaking) will become increasingly exacerbated unless a sparticle signal is found. We evaluate two different measures of fine-tuning in the mSUGRA/CMSSM model. The more stringent of these, ΔHS, includes effects that arise from the high-scale origin of the mSUGRA parameters while the second measure, ΔEW, is determined only by weak scale parameters: hence, it is universal to any model with the same particle spectrum and couplings. Our results incorporate the latest constraints from LHC7 sparticle searches, LHCb limits from B_s → μ+μ- and also require a light Higgs scalar with m_h ≈ 123-127 GeV. We present fine-tuning contours in the m_0 vs. m_1/2 plane for several sets of A_0 and tanβ values. We also present results for ΔHS and ΔEW from a scan over the entire viable model parameter space. We find a ΔHS ≳ 10^3, or at best 0.1%, fine-tuning. For the less stringent electroweak fine-tuning, we find ΔEW ≳ 10^2, or at best 1%, fine-tuning. Two benchmark points are presented that have the lowest values of ΔHS and ΔEW. Our results provide a quantitative measure for ascertaining whether or not the remaining mSUGRA/CMSSM model parameter space is excessively fine-tuned and so could provide impetus for considering alternative SUSY models.
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
NASA Astrophysics Data System (ADS)
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper contributes to finding the optimal PID controller parameters using particle swarm optimization (PSO), a genetic algorithm (GA) and the simulated annealing (SA) algorithm. The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller is tuned. Here, two different fitness functions, Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled tank system and a DC motor. Finally, a comparative study has been done between the algorithms based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
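To make the fitness-function idea concrete, the sketch below evaluates the ITAE of a PID loop around a simple first-order plant and minimizes it with a crude random search standing in for PSO/GA/SA; the plant, gain ranges and step settings are illustrative and are not the paper's coupled-tank or DC-motor models.

    import random

    def itae_cost(Kp, Ki, Kd, dt=0.01, T=10.0):
        # Integral of Time-weighted Absolute Error for a unit step on a
        # simple first-order plant G(s) = 1/(2s + 1) under PID control.
        y, integ, prev_err, t, cost = 0.0, 0.0, 1.0, 0.0, 0.0
        while t < T:
            err = 1.0 - y                         # setpoint = 1
            integ += err * dt
            deriv = (err - prev_err) / dt
            u = Kp * err + Ki * integ + Kd * deriv
            y += dt * (u - y) / 2.0               # Euler step of 2*dy/dt + y = u
            cost += t * abs(err) * dt
            prev_err, t = err, t + dt
        return cost

    # crude random search as a stand-in for PSO / GA / SA
    best = min(((itae_cost(*p), p) for p in
                ((random.uniform(0, 10), random.uniform(0, 5), random.uniform(0, 2))
                 for _ in range(500))), key=lambda x: x[0])
    print("best ITAE %.3f with (Kp, Ki, Kd) =" % best[0], best[1])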
Lange, Marcos C; Braga, Gabriel Pereira; Nóvak, Edison M; Harger, Rodrigo; Felippe, Maria Justina Dalla Bernardina; Canever, Mariana; Dall'Asta, Isabella; Rauen, Jordana; Bazan, Rodrigo; Zetola, Viviane
2017-06-01
All 16 KPIs were analyzed, including the percentage of patients admitted to the stroke unit, venous thromboembolism prophylaxis in the first 48 hours after admission, pneumonia and hospital mortality due to stroke, and hospital discharge on antithrombotic therapy in patients without a cardioembolic mechanism. Both centers admitted over 80% of the patients to their stroke units. The rate of venous thromboembolism prophylaxis was > 85%, that of in-hospital pneumonia was < 13%, hospital mortality for stroke was < 15%, and hospital discharge on antithrombotic therapy was > 70%. Our results support using the parameters of all 16 KPIs required by the Ministry of Health of Brazil, with the present results for the two stroke units serving as a reference for future benchmarking.
Bauer, Matthias R; Ibrahim, Tamer M; Vogel, Simon M; Boeckler, Frank M
2013-06-24
The application of molecular benchmarking sets helps to assess the actual performance of virtual screening (VS) workflows. To improve the efficiency of structure-based VS approaches, the selection and optimization of various parameters can be guided by benchmarking. With the DEKOIS 2.0 library, we aim to further extend and complement the collection of publicly available decoy sets. Based on BindingDB bioactivity data, we provide 81 new and structurally diverse benchmark sets for a wide variety of different target classes. To ensure a meaningful selection of ligands, we address several issues that can be found in bioactivity data. We have improved our previously introduced DEKOIS methodology with enhanced physicochemical matching, now including the consideration of molecular charges, as well as a more sophisticated elimination of latent actives in the decoy set (LADS). We evaluate the docking performance of Glide, GOLD, and AutoDock Vina with our data sets and highlight existing challenges for VS tools. All DEKOIS 2.0 benchmark sets will be made accessible at http://www.dekois.com.
NASA Astrophysics Data System (ADS)
Tavakoli, A.; Naeini, H. Moslemi; Roohi, Amir H.; Gollo, M. Hoseinpour; Shahabad, Sh. Imani
2018-01-01
In the 3D laser forming process, developing an appropriate laser scan pattern for producing specimens with high quality and uniformity is critical. This study presents certain principles for developing scan paths. Seven scan path parameters are considered, including: (1) combined linear or curved path; (2) type of combined linear path; (3) order of scan sequences; (4) the position of the start point in each scan; (5) continuous or discontinuous scan path; (6) direction of scan path; and (7) angular arrangement of combined linear scan paths. Based on these path parameters, ten combined linear scan patterns are presented. Numerical simulations show that the continuous hexagonal scan pattern, scanned from the outer to the inner path, is the optimal choice. In addition, the position of the start point and the angular arrangement of the scan paths are observed to be the most influential path parameters. Further experiments show that using four scan sequences, owing to the symmetric conditions they create, enhances the edge height of the bowl-shaped products and their uniformity. Finally, the optimized hexagonal pattern was compared with a similar circular one. For the hexagonal scan path, the distortion and the standard deviation relative to the edge height of the formed specimen are very low, and the edge height increases significantly compared to the circular scan path despite the shorter scan path length. As a result, the four-sequence hexagonal scan pattern is proposed as the optimal perimeter scan path for producing bowl-shaped products.
Proposed biopsy performance benchmarks for MRI based on an audit of a large academic center.
Sedora Román, Neda I; Mehta, Tejas S; Sharpe, Richard E; Slanetz, Priscilla J; Venkataraman, Shambhavi; Fein-Zachary, Valerie; Dialani, Vandana
2018-05-01
Performance benchmarks exist for mammography (MG); however, performance benchmarks for magnetic resonance imaging (MRI) are not yet fully developed. The purpose of our study was to perform an MRI audit based on established MG and screening MRI benchmarks and to review whether these benchmarks can be applied to an MRI practice. An IRB-approved retrospective review of breast MRIs was performed at our center from 1/1/2011 through 12/31/13. For patients with biopsy recommendation, core biopsy and surgical pathology results were reviewed. The data were used to derive mean performance parameter values, including abnormal interpretation rate (AIR), positive predictive value (PPV), cancer detection rate (CDR), percentage of minimal cancers and axillary node negative cancers, and compared with MG and screening MRI benchmarks. MRIs were also divided by screening and diagnostic indications to assess for differences in performance benchmarks amongst these two groups. Of the 2455 MRIs performed over 3 years, 1563 were performed for screening indications and 892 for diagnostic indications. With the exception of PPV2 for screening breast MRIs from 2011 to 2013, PPVs were met for our screening and diagnostic populations when compared to the MRI screening benchmarks established by the Breast Imaging Reporting and Data System (BI-RADS) 5 Atlas®. AIR and CDR were lower for screening indications as compared to diagnostic indications. New MRI screening benchmarks can be used for screening MRI audits while the American College of Radiology (ACR) desirable goals for diagnostic MG can be used for diagnostic MRI audits. Our study corroborates established findings regarding differences in AIR and CDR amongst screening versus diagnostic indications. © 2017 Wiley Periodicals, Inc.
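The audit arithmetic behind these parameters is straightforward; a sketch with wholly hypothetical counts is shown below (definitions follow common screening-audit usage: PPV2 is based on biopsies recommended, PPV3 on biopsies performed).

    def audit_metrics(n_exams, n_abnormal, n_biopsy_recommended, n_biopsies_performed,
                      n_cancers_biopsy_rec, n_cancers_biopsy_done):
        # standard screening-audit ratios; all counts are hypothetical inputs
        return {
            "AIR (%)":  100.0 * n_abnormal / n_exams,                           # abnormal interpretation rate
            "PPV2 (%)": 100.0 * n_cancers_biopsy_rec / n_biopsy_recommended,    # of biopsies recommended
            "PPV3 (%)": 100.0 * n_cancers_biopsy_done / n_biopsies_performed,   # of biopsies performed
            "CDR per 1000": 1000.0 * n_cancers_biopsy_rec / n_exams,            # cancer detection rate
        }

    # purely illustrative numbers, not the study's data
    print(audit_metrics(n_exams=1000, n_abnormal=140, n_biopsy_recommended=110,
                        n_biopsies_performed=100, n_cancers_biopsy_rec=25,
                        n_cancers_biopsy_done=24))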
Suwazono, Yasushi; Dochi, Mirei; Kobayashi, Etsuko; Oishi, Mitsuhiro; Okubo, Yasushi; Tanaka, Kumihiko; Sakata, Kouichi
2008-12-01
The objective of this study was to calculate benchmark durations and lower 95% confidence limits for benchmark durations of working hours associated with subjective fatigue symptoms by applying the benchmark dose approach while adjusting for job-related stress using multiple logistic regression analyses. A self-administered questionnaire was completed by 3,069 male and 412 female daytime workers (age 18-67 years) in a Japanese steel company. The eight dependent variables in the Cumulative Fatigue Symptoms Index were decreased vitality, general fatigue, physical disorders, irritability, decreased willingness to work, anxiety, depressive feelings, and chronic tiredness. Independent variables were daily working hours, four subscales (job demand, job control, interpersonal relationship, and job suitability) of the Brief Job Stress Questionnaire, and other potential covariates. Using significant parameters for working hours and those for other covariates, the benchmark durations of working hours were calculated for the corresponding Index property. Benchmark response was set at 5% or 10%. Assuming a condition of worst job stress, the benchmark duration/lower 95% confidence limit for benchmark duration of working hours per day with a benchmark response of 5% or 10% were 10.0/9.4 or 11.7/10.7 (irritability) and 9.2/8.9 or 10.4/9.8 (chronic tiredness) in men and 8.9/8.4 or 9.8/8.9 (chronic tiredness) in women. The threshold amounts of working hours for fatigue symptoms under the worst job-related stress were very close to the standard daily working hours in Japan. The results strongly suggest that special attention should be paid to employees whose working hours exceed threshold amounts based on individual levels of job-related stress.
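A minimal sketch of the benchmark-duration calculation is given below: with a fitted logistic model for symptom probability versus daily working hours (other covariates folded into the intercept), the benchmark duration is the number of hours at which the extra risk over background reaches the benchmark response. The coefficients here are hypothetical, and the lower-confidence-limit step (profile likelihood or delta method) is omitted.

    import math

    def prob(hours, b0=-3.0, b_hours=0.25):
        # hypothetical fitted logistic model: P(symptom) at a given daily working time
        z = b0 + b_hours * hours
        return 1.0 / (1.0 + math.exp(-z))

    def benchmark_duration(bmr=0.05, h0=0.0, lo=0.0, hi=24.0):
        # hours h* at which extra risk [P(h)-P(h0)]/[1-P(h0)] equals the benchmark response
        p0 = prob(h0)
        for _ in range(60):                      # bisection
            mid = 0.5 * (lo + hi)
            extra = (prob(mid) - p0) / (1.0 - p0)
            lo, hi = (mid, hi) if extra < bmr else (lo, mid)
        return 0.5 * (lo + hi)

    print(benchmark_duration(bmr=0.05), benchmark_duration(bmr=0.10))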
GPI Spectroscopy of the Mass, Age, and Metallicity Benchmark Brown Dwarf HD 4747 B
NASA Astrophysics Data System (ADS)
Crepp, Justin R.; Principe, David A.; Wolff, Schuyler; Giorla Godfrey, Paige A.; Rice, Emily L.; Cieza, Lucas; Pueyo, Laurent; Bechter, Eric B.; Gonzales, Erica J.
2018-02-01
The physical properties of brown dwarf companions found to orbit nearby, solar-type stars can be benchmarked against independent measures of their mass, age, chemical composition, and other parameters, offering insights into the evolution of substellar objects. The TRENDS high-contrast imaging survey has recently discovered a (mass/age/metallicity) benchmark brown dwarf orbiting the nearby (d = 18.69 ± 0.19 pc), G8V/K0V star HD 4747. We have acquired follow-up spectroscopic measurements of HD 4747 B using the Gemini Planet Imager to study its spectral type, effective temperature, surface gravity, and cloud properties. Observations obtained in the H-band and K 1-band recover the companion and reveal that it is near the L/T transition (T1 ± 2). Fitting atmospheric models to the companion spectrum, we find strong evidence for the presence of clouds. However, spectral models cannot satisfactorily fit the complete data set: while the shape of the spectrum can be well-matched in individual filters, a joint fit across the full passband results in discrepancies that are a consequence of the inherent color of the brown dwarf. We also find a 2σ tension in the companion mass, age, and surface gravity when comparing to evolutionary models. These results highlight the importance of using benchmark objects to study “secondary effects” such as metallicity, non-equilibrium chemistry, cloud parameters, electron conduction, non-adiabatic cooling, and other subtleties affecting emergent spectra. As a new L/T transition benchmark, HD 4747 B warrants further investigation into the modeling of cloud physics using higher resolution spectroscopy across a broader range of wavelengths, polarimetric observations, and continued Doppler radial velocity and astrometric monitoring.
Scanning microwave microscopy applied to semiconducting GaAs structures
NASA Astrophysics Data System (ADS)
Buchter, Arne; Hoffmann, Johannes; Delvallée, Alexandra; Brinciotti, Enrico; Hapiuk, Dimitri; Licitra, Christophe; Louarn, Kevin; Arnoult, Alexandre; Almuneau, Guilhem; Piquemal, François; Zeier, Markus; Kienberger, Ferry
2018-02-01
A calibration algorithm based on one-port vector network analyzer (VNA) calibration for scanning microwave microscopes (SMMs) is presented and used to extract quantitative carrier densities from a semiconducting n-doped GaAs multilayer sample. This robust and versatile algorithm is instrument and frequency independent, as we demonstrate by analyzing experimental data from two different, cantilever- and tuning fork-based, microscope setups operating in a wide frequency range up to 27.5 GHz. To benchmark the SMM results, comparison with secondary ion mass spectrometry is undertaken. Furthermore, we show SMM data on a GaAs p-n junction distinguishing p- and n-doped layers.
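For orientation, one-port VNA calibration of the kind referred to above reduces to fitting a three-term (bilinear) error model from three known standards and then inverting it for each measured point. The sketch below uses illustrative open/short/match readings; it is not the authors' algorithm, which additionally handles the SMM-specific tip-sample configuration.

    import numpy as np

    def solve_error_terms(gamma_std, gamma_meas):
        # one-port calibration: fit Gamma_meas = (a*Gamma + b) / (c*Gamma + 1) from three
        # known standards; b is the directivity, -c the source match, a - b*c the tracking
        A = np.array([[g, 1.0, -g * m] for g, m in zip(gamma_std, gamma_meas)], dtype=complex)
        rhs = np.array(gamma_meas, dtype=complex)
        return np.linalg.solve(A, rhs)           # returns (a, b, c)

    def correct(gamma_meas, a, b, c):
        # apply the calibration to a raw measured reflection coefficient
        return (gamma_meas - b) / (a - c * gamma_meas)

    # illustrative raw readings for open (+1), short (-1) and match (0) standards
    stds = [1.0 + 0j, -1.0 + 0j, 0.0 + 0j]
    meas = [0.62 + 0.10j, -0.55 + 0.05j, 0.04 - 0.02j]
    a, b, c = solve_error_terms(stds, meas)
    print(correct(0.10 + 0.30j, a, b, c))        # corrected reflection of an unknown point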
Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F
2016-12-05
Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and to 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA.
Benditz, A.; Drescher, J.; Greimel, F.; Zeman, F.; Grifka, J.; Meißner, W.; Völlner, F.
2016-01-01
Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and to 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA. PMID:27917911
Preparation and benchmarking of ANSL-V cross sections for advanced neutron source reactor studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arwood, J.W.; Ford, W.E. III; Greene, N.M.
1987-01-01
Validity of selected data from the fine-group neutron library was satisfactorily tested in performance parameter calculations for the BAPL-1, TRX-1, and ZEEP-1 thermal lattice benchmarks. BAPL-2 is an H2O-moderated, uranium oxide lattice; TRX-1 is an H2O-moderated, 1.31 weight percent enriched uranium metal lattice; ZEEP-1 is a D2O-moderated, natural uranium lattice. 26 refs., 1 tab.
Portfolio selection and asset pricing under a benchmark approach
NASA Astrophysics Data System (ADS)
Platen, Eckhard
2006-10-01
The paper presents classical and new results on portfolio optimization, as well as the fair pricing concept for derivative pricing under the benchmark approach. The growth optimal portfolio is shown to be a central object in a market model. It links asset pricing and portfolio optimization. The paper argues that the market portfolio is a proxy of the growth optimal portfolio. By choosing the drift of the discounted growth optimal portfolio as parameter process, one obtains a realistic theoretical market dynamics.
Uav Cameras: Overview and Geometric Calibration Benchmark
NASA Astrophysics Data System (ADS)
Cramer, M.; Przybilla, H.-J.; Zurhorst, A.
2017-08-01
Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras as known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep a constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. In such scenarios, close-to-metric UAV cameras may be particularly advantageous. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.
Influence of scanning parameters on the estimation accuracy of control points of B-spline surfaces
NASA Astrophysics Data System (ADS)
Aichinger, Julia; Schwieger, Volker
2018-04-01
This contribution deals with the influence of scanning parameters like scanning distance, incidence angle, surface quality and sampling width on the average estimated standard deviations of the position of control points from B-spline surfaces, which are used to model surfaces from terrestrial laser scanning data. The influence of the scanning parameters is analyzed by Monte Carlo-based variance analysis. The samples were generated for non-correlated and correlated data, using the Latin hypercube and replicated Latin hypercube sampling algorithms, respectively. Finally, the investigations show that the most influential scanning parameter is the distance from the laser scanner to the object. The angle of incidence shows a significant effect for distances of 50 m and longer, while the surface quality contributes only negligible effects. The sampling width has no influence. Optimal scanning parameters correspond to the smallest possible object distance, an angle of incidence close to 0°, and the highest surface quality. The consideration of correlations improves the estimation accuracy and underlines the importance of complete stochastic models for TLS measurements.
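A plain (non-replicated) Latin hypercube design of the kind used to generate such samples can be written in a few lines; the parameter names and ranges below are illustrative, and the correlated and replicated variants used in the study are not shown.

    import numpy as np

    def latin_hypercube(n_samples, bounds, seed=None):
        # one LHS design: each parameter range is split into n_samples strata and each
        # stratum is sampled exactly once; strata are shuffled independently per parameter
        rng = np.random.default_rng(seed)
        samples = np.empty((n_samples, len(bounds)))
        for j, (lo, hi) in enumerate(bounds):
            strata = (rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
            samples[:, j] = lo + strata * (hi - lo)
        return samples

    # illustrative scanning parameters: distance (m), incidence angle (deg), sampling width (mm)
    bounds = [(5.0, 50.0), (0.0, 60.0), (1.0, 10.0)]
    print(latin_hypercube(5, bounds, seed=1))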
Tsimihodimos, Vasilis; Kostapanos, Michael S.; Moulis, Alexandros; Nikas, Nikos; Elisaf, Moses S.
2015-01-01
Objectives: To investigate the effect of benchmarking on the quality of type 2 diabetes (T2DM) care in Greece. Methods: The OPTIMISE (Optimal Type 2 Diabetes Management Including Benchmarking and Standard Treatment) study [ClinicalTrials.gov identifier: NCT00681850] was an international multicenter, prospective cohort study. It included physicians randomized 3:1 to either receive benchmarking for glycated hemoglobin (HbA1c), systolic blood pressure (SBP) and low-density lipoprotein cholesterol (LDL-C) treatment targets (benchmarking group) or not (control group). The proportions of patients achieving the targets of the above-mentioned parameters were compared between groups after 12 months of treatment. Also, the proportions of patients achieving those targets at 12 months were compared with baseline in the benchmarking group. Results: In the Greek region, the OPTIMISE study included 797 adults with T2DM (570 in the benchmarking group). At month 12 the proportion of patients within the predefined targets for SBP and LDL-C was greater in the benchmarking compared with the control group (50.6 versus 35.8%, and 45.3 versus 36.1%, respectively). However, these differences were not statistically significant. No difference between groups was noted in the percentage of patients achieving the predefined target for HbA1c. At month 12 the increase in the percentage of patients achieving all three targets was greater in the benchmarking (5.9–15.0%) than in the control group (2.7–8.1%). In the benchmarking group more patients were on target regarding SBP (50.6% versus 29.8%), LDL-C (45.3% versus 31.3%) and HbA1c (63.8% versus 51.2%) at 12 months compared with baseline (p < 0.001 for all comparisons). Conclusion: Benchmarking may comprise a promising tool for improving the quality of T2DM care. Nevertheless, target achievement rates of each, and of all three, quality indicators were suboptimal, indicating there are still unmet needs in the management of T2DM. PMID:26445642
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujii, K; UCLA School of Medicine, Los Angeles, CA; Bostani, M
Purpose: The aim of this study was to collect CT dose index data from adult head exams to establish benchmarks based on either: (a) values pooled from all head exams or (b) values for specific protocols. One part of this was to investigate differences in scan frequency and CT dose index data for inpatients versus outpatients. Methods: We collected CT dose index data (CTDIvol) from adult head CT examinations performed at our medical facilities from Jan 1st to Dec 31st, 2014. Four of these scanners were used for inpatients, the other five were used for outpatients. All scanners used Tube Current Modulation. We used X-ray dose management software to mine dose index data and evaluate CTDIvol for 15807 inpatients and 4263 outpatients undergoing Routine Brain, Sinus, Facial/Mandible, Temporal Bone, CTA Brain and CTA Brain-Neck protocols, and combined across all protocols. Results: For inpatients, Routine Brain series represented 84% of total scans performed. For outpatients, Sinus scans represented the largest fraction (36%). The CTDIvol (mean ± SD) across all head protocols was 39 ± 30 mGy (min-max: 3.3–540 mGy). The CTDIvol for Routine Brain was 51 ± 6.2 mGy (min-max: 36–84 mGy). The values for Sinus were 24 ± 3.2 mGy (min-max: 13–44 mGy) and for Facial/Mandible were 22 ± 4.3 mGy (min-max: 14–46 mGy). The mean CTDIvol for inpatients and outpatients was similar across protocols with one exception (CTA Brain-Neck). Conclusion: There is substantial dose variation when results from all protocols are pooled together; this is primarily a function of the differences in technical factors of the protocols themselves. When protocols are analyzed separately, there is much less variability. While analyzing pooled data affords some utility, reviewing protocols segregated by clinical indication provides greater opportunity for optimization and establishing useful benchmarks.
Sensitivity Analysis of OECD Benchmark Tests in BISON
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.
Pandis, Petros; Bull, Anthony Mj
2017-11-01
Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. Body segment parameters can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body, used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and 3.1% difference between scanning volume and actual volume. Finally, the maximum mean error for the moment of inertia by scanning a standard-sized homogeneous object was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.
Benchmarking Brain-Computer Interfaces Outside the Laboratory: The Cybathlon 2016
Novak, Domen; Sigrist, Roland; Gerig, Nicolas J.; Wyss, Dario; Bauer, René; Götz, Ulrich; Riener, Robert
2018-01-01
This paper presents a new approach to benchmarking brain-computer interfaces (BCIs) outside the lab. A computer game was created that mimics a real-world application of assistive BCIs, with the main outcome metric being the time needed to complete the game. This approach was used at the Cybathlon 2016, a competition for people with disabilities who use assistive technology to achieve tasks. The paper summarizes the technical challenges of BCIs, describes the design of the benchmarking game, then describes the rules for acceptable hardware, software and inclusion of human pilots in the BCI competition at the Cybathlon. The 11 participating teams, their approaches, and their results at the Cybathlon are presented. Though the benchmarking procedure has some limitations (for instance, we were unable to identify any factors that clearly contribute to BCI performance), it can be successfully used to analyze BCI performance in realistic, less structured conditions. In the future, the parameters of the benchmarking game could be modified to better mimic different applications (e.g., the need to use some commands more frequently than others). Furthermore, the Cybathlon has the potential to showcase such devices to the general public. PMID:29375294
An approach to radiation safety department benchmarking in academic and medical facilities.
Harvey, Richard P
2015-02-01
Based on anecdotal evidence and networking with colleagues at other facilities, it has become evident that some radiation safety departments are not adequately staffed and radiation safety professionals need to increase their staffing levels. Discussions with management regarding radiation safety department staffing often lead to similar conclusions. Management acknowledges the Radiation Safety Officer (RSO) or Director of Radiation Safety's concern but asks the RSO to provide benchmarking and justification for additional full-time equivalents (FTEs). The RSO must determine a method to benchmark and justify additional staffing needs while struggling to maintain a safe and compliant radiation safety program. Benchmarking and justification are extremely important tools that are commonly used to demonstrate the need for increased staffing in other disciplines, and they can be used by radiation safety professionals as well. Parameters that most RSOs would expect to be positive predictors of radiation safety staff size generally are such predictors, and they can be emphasized in benchmarking and justification report summaries. Facilities with large radiation safety departments tend to have large numbers of authorized users, be broad-scope programs, be subject to increased controls regulations, have large clinical operations, have significant numbers of academic radiation-producing machines, and have laser safety responsibilities.
NASA Astrophysics Data System (ADS)
Braunmueller, F.; Tran, T. M.; Vuillemin, Q.; Alberti, S.; Genoud, J.; Hogge, J.-Ph.; Tran, M. Q.
2015-06-01
A new gyrotron simulation code for simulating the beam-wave interaction using a monomode time-dependent self-consistent model is presented. The new code TWANG-PIC is derived from the trajectory-based code TWANG by describing the electron motion in a gyro-averaged one-dimensional Particle-In-Cell (PIC) approach. In comparison to common PIC-codes, it is distinguished by its computation speed, which makes its use in parameter scans and in experiment interpretation possible. A benchmark of the new code is presented as well as a comparative study between the two codes. This study shows that the inclusion of a time-dependence in the electron equations, as it is the case in the PIC-approach, is mandatory for simulating any kind of non-stationary oscillations in gyrotrons. Finally, the new code is compared with experimental results and some implications of the violated model assumptions in the TWANG code are disclosed for a gyrotron experiment in which non-stationary regimes have been observed and for a critical case that is of interest in high power gyrotron development.
"First-principles" kinetic Monte Carlo simulations revisited: CO oxidation over RuO2 (110).
Hess, Franziska; Farkas, Attila; Seitsonen, Ari P; Over, Herbert
2012-03-15
First principles-based kinetic Monte Carlo (kMC) simulations are performed for the CO oxidation on RuO2(110) under steady-state reaction conditions. The simulations include a set of elementary reaction steps with activation energies taken from three different ab initio density functional theory studies. Critical comparison of the simulation results reveals that already small variations in the activation energies lead to distinctly different reaction scenarios on the surface, even to the point where the dominating elementary reaction step is substituted by another one. For a critical assessment of the chosen energy parameters, it is not sufficient to compare kMC simulations only to experimental turnover frequency (TOF) as a function of the reactant feed ratio. More appropriate benchmarks for kMC simulations are the actual distribution of reactants on the catalyst's surface during steady-state reaction, as determined by in situ infrared spectroscopy and in situ scanning tunneling microscopy, and the temperature dependence of TOF in the form of Arrhenius plots. Copyright © 2012 Wiley Periodicals, Inc.
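The rejection-free kMC step underlying such simulations is sketched below in generic form: an event is chosen with probability proportional to its rate and the clock advances by an exponentially distributed increment. The rate constants are invented placeholders, and a real lattice kMC would additionally track site occupations and site-dependent event lists.

    import math, random

    def kmc_step(rates, t):
        # one rejection-free kMC step: pick an event proportional to its rate,
        # then advance time by an exponentially distributed increment
        ktot = sum(rates.values())
        r = random.random() * ktot
        acc = 0.0
        for event, k in rates.items():
            acc += k
            if r <= acc:
                chosen = event
                break
        dt = -math.log(1.0 - random.random()) / ktot
        return chosen, t + dt

    # illustrative rate constants (s^-1) for a CO-oxidation-like event list
    rates = {"CO adsorption": 5.0e2, "CO desorption": 1.0e1,
             "O2 dissociative adsorption": 2.0e2, "CO+O recombination": 3.0e1}
    t, counts = 0.0, {e: 0 for e in rates}
    for _ in range(10000):
        event, t = kmc_step(rates, t)
        counts[event] += 1
    print(t, counts)   # a real lattice kMC would update site occupations per event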
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braunmueller, F., E-mail: falk.braunmueller@epfl.ch; Tran, T. M.; Alberti, S.
A new gyrotron simulation code for simulating the beam-wave interaction using a monomode time-dependent self-consistent model is presented. The new code TWANG-PIC is derived from the trajectory-based code TWANG by describing the electron motion in a gyro-averaged one-dimensional Particle-In-Cell (PIC) approach. In comparison to common PIC-codes, it is distinguished by its computation speed, which makes its use in parameter scans and in experiment interpretation possible. A benchmark of the new code is presented as well as a comparative study between the two codes. This study shows that the inclusion of a time-dependence in the electron equations, as it is the case in the PIC-approach, is mandatory for simulating any kind of non-stationary oscillations in gyrotrons. Finally, the new code is compared with experimental results and some implications of the violated model assumptions in the TWANG code are disclosed for a gyrotron experiment in which non-stationary regimes have been observed and for a critical case that is of interest in high power gyrotron development.
Parameter regimes for a single sequential quantum repeater
NASA Astrophysics Data System (ADS)
Rozpędek, F.; Goodenough, K.; Ribeiro, J.; Kalb, N.; Caprara Vivoli, V.; Reiserer, A.; Hanson, R.; Wehner, S.; Elkouss, D.
2018-07-01
Quantum key distribution allows for the generation of a secret key between distant parties connected by a quantum channel such as optical fibre or free space. Unfortunately, the rate of generation of a secret key by direct transmission is fundamentally limited by the distance. This limit can be overcome by the implementation of so-called quantum repeaters. Here, we assess the performance of a specific but very natural setup called a single sequential repeater for quantum key distribution. We offer a fine-grained assessment of the repeater by introducing a series of benchmarks. The benchmarks, which should be surpassed to claim a working repeater, are based on finite-energy considerations, thermal noise and the losses in the setup. In order to boost the performance of the studied repeaters we introduce two methods. The first one corresponds to the concept of a cut-off, which reduces the effect of decoherence during the storage of a quantum state by introducing a maximum storage time. Secondly, we supplement the standard classical post-processing with an advantage distillation procedure. Using these methods, we find realistic parameters for which it is possible to achieve rates greater than each of the benchmarks, guiding the way towards implementing quantum repeaters.
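One family of benchmarks for claiming a working repeater compares the achieved secret-key rate with what direct transmission could ever deliver. The sketch below evaluates the repeaterless secret-key capacity -log2(1-η) of a pure-loss channel (the PLOB bound), assuming fibre with 0.2 dB/km loss; the paper's own benchmarks are finer-grained (finite energy, thermal noise, setup losses), so this is only the coarsest comparison.

    import math

    def transmissivity(distance_km, loss_db_per_km=0.2):
        # end-to-end transmissivity of optical fibre
        return 10 ** (-loss_db_per_km * distance_km / 10.0)

    def repeaterless_capacity(eta):
        # secret-key capacity of the pure-loss channel (PLOB bound), bits per channel use
        return -math.log2(1.0 - eta) if eta < 1.0 else float("inf")

    def beats_benchmark(achieved_rate, distance_km):
        # does a repeater's rate (bits per channel use) exceed direct transmission?
        return achieved_rate > repeaterless_capacity(transmissivity(distance_km))

    for d in (50, 100, 200):
        print(d, "km: capacity %.2e" % repeaterless_capacity(transmissivity(d)),
              "| beaten by a 1e-4 rate?", beats_benchmark(1e-4, d))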
Mean velocity and turbulence measurements in a 90 deg curved duct with thin inlet boundary layer
NASA Technical Reports Server (NTRS)
Crawford, R. A.; Peters, C. E.; Steinhoff, J.; Hornkohl, J. O.; Nourinejad, J.; Ramachandran, K.
1985-01-01
The experimental database established by this investigation of the flow in a large rectangular turning duct is of benchmark quality. The experimental Reynolds numbers, Dean numbers and boundary layer characteristics are significantly different from previous benchmark curved-duct experimental parameters. This investigation extends the experimental database to higher Reynolds number and thinner entrance boundary layers. The 5% to 10% thick boundary layers, based on duct half-width, result in a large region of near-potential flow in the duct core surrounded by developing boundary layers with large crossflows. The turbulent entrance boundary layer case at Re_d = 328,000 provides an incompressible flowfield which approaches real turbine blade cascade characteristics. The results of this investigation provide a challenging benchmark database for computational fluid dynamics code development.
NASA Astrophysics Data System (ADS)
Moon, Hongsik
What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared on performance using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the changing computer hardware platforms in order to provide fast, accurate and efficient solutions to large, complex electromagnetic problems. The research in this dissertation proves that the performance of parallel code is intimately related to the configuration of the computer hardware and can be maximized for different hardware platforms. To benchmark and optimize the performance of parallel CEM software, a variety of large, complex projects are created and executed on a variety of computer platforms. The computer platforms used in this research are detailed in this dissertation. The projects run as benchmarks are also described in detail and results are presented. The parameters that affect parallel CEM software on High Performance Computing Clusters (HPCC) are investigated. This research demonstrates methods to maximize the performance of parallel CEM software code.
A Simplified Approach for the Rapid Generation of Transient Heat-Shield Environments
NASA Technical Reports Server (NTRS)
Wurster, Kathryn E.; Zoby, E. Vincent; Mills, Janelle C.; Kamhawi, Hilmi
2007-01-01
A simplified approach has been developed whereby transient entry heating environments are reliably predicted based upon a limited set of benchmark radiative and convective solutions. Heating, pressure and shear-stress levels, non-dimensionalized by an appropriate parameter at each benchmark condition are applied throughout the entry profile. This approach was shown to be valid based on the observation that the fully catalytic, laminar distributions examined were relatively insensitive to altitude as well as velocity throughout the regime of significant heating. In order to establish a best prediction by which to judge the results that can be obtained using a very limited benchmark set, predictions based on a series of benchmark cases along a trajectory are used. Solutions which rely only on the limited benchmark set, ideally in the neighborhood of peak heating, are compared against the resultant transient heating rates and total heat loads from the best prediction. Predictions based on using two or fewer benchmark cases at or near the trajectory peak heating condition, yielded results to within 5-10 percent of the best predictions. Thus, the method provides transient heating environments over the heat-shield face with sufficient resolution and accuracy for thermal protection system design and also offers a significant capability to perform rapid trade studies such as the effect of different trajectories, atmospheres, or trim angle of attack, on convective and radiative heating rates and loads, pressure, and shear-stress levels.
Bound on largest r ≲ 0.1 from sub-Planckian excursions of inflaton
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Arindam; Mazumdar, Anupam, E-mail: arindam@hri.res.in, E-mail: a.mazumdar@lancaster.ac.uk
2015-01-01
In this paper we will discuss the range of large tensor to scalar ratio, r, obtainable from a sub-Planckian excursion of a single, slow roll driven inflaton field. In order to obtain a large r for such a scenario one has to depart from a monotonic evolution of the slow roll parameters in such a way that one still satisfies all the current constraints of Planck, such as the scalar amplitude, the tilt in the scalar power spectrum, running and running of the tilt close to the pivot scale. Since the slow roll parameters evolve non-monotonically, we will also consider the evolution of the power spectrum on the smallest scales, i.e. at P_s(k ∼ 10^16 Mpc^-1) ≲ 10^-2, to make sure that the amplitude does not become too large. All these constraints tend to keep the tensor to scalar ratio, r ≲ 0.1. We scan three different kinds of potential for supersymmetric flat directions and obtain the benchmark points which satisfy all the constraints. We also show that it is possible to go beyond r ≳ 0.1 provided we relax the upper bound on the power spectrum on the smallest scales.
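The tension the abstract alludes to is captured by two standard slow-roll relations (textbook results, not specific to the potentials scanned here): the tensor-to-scalar ratio is set by the first slow-roll parameter, and the Lyth bound ties r to the field excursion,

    r = 16\,\epsilon, \qquad
    \frac{\Delta\phi}{M_{\rm Pl}} \simeq \int_0^{N_*} \sqrt{\frac{r(N)}{8}}\, dN .

A roughly constant r ≈ 0.1 over N_* ≈ 50-60 e-folds would therefore give Δφ ≈ 6 M_Pl; keeping Δφ sub-Planckian while still reaching r close to 0.1 requires r (equivalently ε) to be large only briefly near the pivot scale, which is exactly the non-monotonic evolution of the slow roll parameters described above.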
Benchmark solutions for the galactic heavy-ion transport equations with energy and spatial coupling
NASA Technical Reports Server (NTRS)
Ganapol, Barry D.; Townsend, Lawrence W.; Lamkin, Stanley L.; Wilson, John W.
1991-01-01
Nontrivial benchmark solutions are developed for the galactic heavy ion transport equations in the straight-ahead approximation with energy and spatial coupling. Analytical representations of the ion fluxes are obtained for a variety of sources with the assumption that the nuclear interaction parameters are energy independent. The method utilizes an analytical Laplace transform inversion to yield a closed form representation that is computationally efficient. The flux profiles are then used to predict ion dose profiles, which are important for shield design studies.
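For context, the coupled equations being benchmarked are of the straight-ahead, continuous-slowing-down form, written here in a generic notation (normalization conventions vary between papers):

    \left[ \frac{\partial}{\partial x} - \frac{\partial}{\partial E}\, S_j(E) + \sigma_j \right] \phi_j(x,E)
    \;=\; \sum_{k>j} \sigma_{jk}\, \phi_k(x,E),

where φ_j is the flux of ion species j, S_j its stopping power, σ_j its total nuclear absorption cross section and σ_jk the fragmentation cross section producing species j from k. The benchmark's assumption of energy-independent nuclear parameters means σ_j and σ_jk are treated as constants, which is what makes the closed-form Laplace transform inversion tractable.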
Benchmark tests of JENDL-3.2 for thermal and fast reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takano, Hideki; Akie, Hiroshi; Kikuchi, Yasuyuki
1994-12-31
Benchmark calculations for a variety of thermal and fast reactors have been performed by using the newly evaluated JENDL-3 Version-2 (JENDL-3.2) file. In the thermal reactor calculations for the uranium and plutonium fueled cores of TRX and TCA, the k_eff and lattice parameters were well predicted. The fast reactor calculations for ZPPR-9 and FCA assemblies showed that the k_eff, the reactivity worths of Doppler, sodium void and control rod, and the reaction rate distributions were in very good agreement with the experiments.
BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.
Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R
2015-02-20
Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
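The shared structure of these benchmark problems, calibrating kinetic parameters of a dynamic model against time-course data, is sketched below on a toy two-state ODE system with a standard least-squares solver; the model, data and log-parameter transform are illustrative and far smaller than any of the six BioPreDyn-bench problems.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def model(t, x, k1, k2):
        # toy two-state kinetic model: S -> P (rate k1*S), P degradation (rate k2*P)
        s, p = x
        return [-k1 * s, k1 * s - k2 * p]

    t_obs = np.linspace(0.0, 10.0, 20)
    k_true = (0.8, 0.3)
    x0 = [1.0, 0.0]
    sol = solve_ivp(model, (0.0, 10.0), x0, t_eval=t_obs, args=k_true)
    data = sol.y + 0.02 * np.random.randn(*sol.y.shape)   # synthetic noisy observations

    def residuals(log_k):
        k1, k2 = np.exp(log_k)                            # log-parameters keep rates positive
        sim = solve_ivp(model, (0.0, 10.0), x0, t_eval=t_obs, args=(k1, k2))
        return (sim.y - data).ravel()

    fit = least_squares(residuals, x0=np.log([0.1, 0.1]))
    print(np.exp(fit.x))                                  # estimated (k1, k2)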
Verstraelen, Toon; Van Speybroeck, Veronique; Waroquier, Michel
2009-07-28
An extensive benchmark of the electronegativity equalization method (EEM) and the split charge equilibration (SQE) model on a very diverse set of organic molecules is presented. These models efficiently compute atomic partial charges and are used in the development of polarizable force fields. The predicted partial charges that depend on empirical parameters are calibrated to reproduce results from quantum mechanical calculations. Recently, SQE was presented as an extension of the EEM to obtain the correct size dependence of the molecular polarizability. In this work, 12 parametrization protocols are applied to each model and the optimal parameters are benchmarked systematically. The training data for the empirical parameters comprise MP2/Aug-CC-pVDZ calculations on 500 organic molecules containing the elements H, C, N, O, F, S, Cl, and Br. These molecules have been selected by an ingenious and autonomous protocol from an initial set of almost 500,000 small organic molecules. It is clear that the SQE model outperforms the EEM in all benchmark assessments. When using Hirshfeld-I charges for the calibration, the SQE model optimally reproduces the molecular electrostatic potential from the ab initio calculations. Applications on chain molecules, i.e., alkanes, alkenes, and alpha alanine helices, confirm that the EEM gives rise to a divergent behavior for the polarizability, while the SQE model shows the correct trends. We conclude that the SQE model is an essential component of a polarizable force field, showing several advantages over the original EEM.
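For context, the EEM working equations amount to a single linear solve: the charges minimize a quadratic energy built from per-atom electronegativity and hardness parameters plus Coulomb terms, under a total-charge constraint. The sketch below uses invented parameters and geometry (atomic units) and does not include the split-charge (SQE) extension evaluated in the paper.

    import numpy as np

    def eem_charges(chi, eta, coords, total_charge=0.0):
        # solve the EEM linear system: minimize sum_i (chi_i q_i + 0.5 eta_i q_i^2)
        # + sum_{i<j} q_i q_j / r_ij subject to sum_i q_i = Q (atomic units)
        n = len(chi)
        A = np.zeros((n + 1, n + 1))
        b = np.zeros(n + 1)
        for i in range(n):
            A[i, i] = eta[i]
            for j in range(n):
                if i != j:
                    A[i, j] = 1.0 / np.linalg.norm(np.asarray(coords[i]) - np.asarray(coords[j]))
            A[i, n] = 1.0          # Lagrange multiplier column enforcing charge conservation
            A[n, i] = 1.0
            b[i] = -chi[i]
        b[n] = total_charge
        return np.linalg.solve(A, b)[:n]

    # illustrative 3-atom example; parameters and bond lengths are purely hypothetical
    chi = [0.30, 0.20, 0.20]
    eta = [1.20, 0.90, 0.90]
    coords = [(0.0, 0.0, 0.0), (1.8, 0.0, 0.0), (-1.8, 0.0, 0.0)]
    print(eem_charges(chi, eta, coords))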
Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data
2014-01-01
Background The rapid evolution in high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. Results In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition for a correctly mapped read taking into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of the CuReSim simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole genome sequencing of small genomes with Ion Torrent data for which such a comparison has not yet been established. Conclusions A benchmark procedure to compare HTS data mappers is introduced with a new definition for the mapping correctness as well as tools to generate simulated reads and evaluate mapping quality. The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the technology used. This benchmark procedure can be used to evaluate existing or in-development mappers as well as to optimize parameters of a chosen mapper for any application and any sequencing platform. PMID:24708189
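A minimal sketch of an extended "correctly mapped" check in the spirit of the definition described above follows; the tolerance values are illustrative assumptions, not the thresholds used by CuReSimEval.

    # Sketch: a read counts as correctly mapped only if start AND end positions
    # and the edit-operation counts agree with the simulated truth within tolerances.
    def correctly_mapped(expected, reported, pos_tol=5, indel_tol=2, subst_tol=5):
        """expected/reported: dicts with keys start, end, indels, substitutions."""
        return (abs(expected["start"] - reported["start"]) <= pos_tol
                and abs(expected["end"] - reported["end"]) <= pos_tol
                and abs(expected["indels"] - reported["indels"]) <= indel_tol
                and abs(expected["substitutions"] - reported["substitutions"]) <= subst_tol)

    truth   = {"start": 10032, "end": 10231, "indels": 1, "substitutions": 3}
    mapping = {"start": 10030, "end": 10229, "indels": 1, "substitutions": 4}
    print(correctly_mapped(truth, mapping))   # True with these tolerances

Checking the end position and the edit operations matters for Ion Torrent data in particular, where homopolymer indel errors can shift alignments without moving the reported start.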
SparseBeads data: benchmarking sparsity-regularized computed tomography
NASA Astrophysics Data System (ADS)
Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.
2017-12-01
Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity but does not cover CT; empirical results, however, suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, numbers of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of the number of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level as an aid in planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
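A minimal sketch of the gradient-sparsity measure referred to above is given below, assuming a simple forward-difference gradient and an arbitrary small threshold (the threshold choice and the toy phantom are illustrative assumptions).

    # Sketch: gradient sparsity of a 2D image = count/fraction of pixels whose
    # finite-difference gradient magnitude exceeds a small threshold.
    import numpy as np

    def gradient_sparsity(img, threshold=1e-3):
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        mag = np.hypot(gx, gy)
        nonzero = np.count_nonzero(mag > threshold)
        return nonzero, nonzero / img.size

    phantom = np.zeros((128, 128))
    phantom[32:96, 32:96] = 1.0          # a single bead-like region as a stand-in
    print(gradient_sparsity(phantom))

The reported near-linear relation then says, roughly, that the higher this count for a sample class, the more projections must be acquired before TV-regularized reconstruction is satisfactory.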
Access-in-turn test architecture for low-power test application
NASA Astrophysics Data System (ADS)
Wang, Weizheng; Wang, JinCheng; Wang, Zengyun; Xiang, Lingyun
2017-03-01
This paper presents a novel access-in-turn test architecture (AIT-TA) for testing very large scale integrated (VLSI) designs. In the proposed scheme, each scan cell in a chain receives test data from the shift-in line in turn while pushing its test response to the shift-out line. This largely solves the power problem of the conventional scan architecture and significantly suppresses the switching activity during shift and capture operations with acceptable hardware overhead. Thus, it can help to implement the test at much higher operating frequencies, resulting in shorter test application times. The proposed test approach enhances the architecture of conventional scan flip-flops and is backward compatible with existing test pattern generation and simulation techniques. Experimental results obtained for some larger ISCAS'89 and ITC'99 benchmark circuits illustrate the effectiveness of the proposed low-power test application scheme.
NASA Astrophysics Data System (ADS)
Basler, P.; Mühlleitner, M.; Wittbrodt, J.
2018-03-01
We investigate the strength of the electroweak phase transition (EWPT) within the CP-violating 2-Higgs-Doublet Model (C2HDM). The 2HDM is a simple and well-studied model, which can feature CP violation at tree level in its extended scalar sector. This makes it, in contrast to the Standard Model (SM), a promising candidate for explaining the baryon asymmetry of the universe through electroweak baryogenesis. We apply a renormalisation scheme which allows efficient scans of the C2HDM parameter space by using the loop-corrected masses and mixing matrix as input parameters. This procedure enables us to investigate the possibility of a strong first-order EWPT required for baryogenesis and study its phenomenological implications for the LHC. As in the CP-conserving (real) 2HDM (R2HDM), we find that a strong EWPT favours mass gaps between the non-SM-like Higgs bosons. These lead to prominent final states consisting of gauge+Higgs bosons or pairs of Higgs bosons. In contrast to the R2HDM, the CP-mixing of the C2HDM also favours approximately mass-degenerate spectra with dominant decays into SM particles. The requirement of a strong EWPT further allows us to distinguish the C2HDM from the R2HDM using the signal strengths of the SM-like Higgs boson. We additionally find that a strong EWPT requires an enhancement of the SM-like trilinear Higgs coupling at next-to-leading order (NLO) by up to a factor of 2.4 compared to the NLO SM coupling, establishing another link between cosmology and collider phenomenology. We provide several C2HDM benchmark scenarios compatible with a strong EWPT and all experimental and theoretical constraints. We include the dominant branching ratios of the non-SM-like Higgs bosons as well as the Higgs pair production cross section of the SM-like Higgs boson for every benchmark point. The pair production cross sections can be substantially enhanced compared to the SM and could be observable at the high-luminosity LHC, allowing access to the trilinear Higgs couplings.
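For reference, the criterion commonly imposed for a "strong" first-order EWPT (avoidance of baryon-number washout) is, in the usual convention,

    \xi_c \;\equiv\; \frac{v_c}{T_c} \;\gtrsim\; 1 ,

where v_c is the vacuum expectation value of the Higgs field at the critical temperature T_c. The abstract does not quote the exact form used in the paper, so this should be read as the standard benchmark condition rather than the authors' precise definition.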
Perturbed Yukawa textures in the minimal seesaw model
NASA Astrophysics Data System (ADS)
Rink, Thomas; Schmitz, Kai
2017-03-01
We revisit the minimal seesaw model, i.e., the type-I seesaw mechanism involving only two right-handed neutrinos. This model represents an important minimal benchmark scenario for future experimental updates on neutrino oscillations. It features four real parameters that cannot be fixed by the current data: two CP-violating phases, δ and σ, as well as one complex parameter, z, that is experimentally inaccessible at low energies. The parameter z controls the structure of the neutrino Yukawa matrix at high energies, which is why it may be regarded as a label or index for all UV completions of the minimal seesaw model. The fact that z encompasses only two real degrees of freedom allows us to systematically scan the minimal seesaw model over all of its possible UV completions. In doing so, we address the following question: suppose δ and σ should be measured at particular values in the future, to what extent is one then still able to realize approximate textures in the neutrino Yukawa matrix? Our analysis, thus, generalizes previous studies of the minimal seesaw model based on the assumption of exact texture zeros. In particular, our study allows us to assess the theoretical uncertainty inherent to the common texture ansatz. One of our main results is that a normal light-neutrino mass hierarchy is, in fact, still consistent with a two-zero Yukawa texture, provided that the two texture zeros receive corrections at the level of O(10%). While our numerical results pertain to the minimal seesaw model only, our general procedure appears to be applicable to other neutrino mass models as well.
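The complex parameter z is naturally understood through the Casas-Ibarra parametrization of the seesaw relation; in one common convention (the abstract does not spell out the parametrization, so the exact form below is an assumption),

    m_\nu \;\simeq\; -\, m_D\, M_R^{-1}\, m_D^{T}, \qquad
    m_D \;=\; i\, U \sqrt{m_\nu^{\mathrm{diag}}}\; R^{T}(z)\, \sqrt{M_R}, \qquad
    R(z) \;=\; \begin{pmatrix} 0 & \cos z & \sin z \\ 0 & -\sin z & \cos z \end{pmatrix}
    \quad \text{(normal ordering, up to sign conventions)},

where U is the PMNS matrix and M_R the heavy Majorana mass matrix. With only two right-handed neutrinos, R is fixed by the single complex angle z, which is exactly why z indexes the possible high-energy Yukawa structures.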
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Grace L.; Department of Health Services Research, The University of Texas MD Anderson Cancer Center, Houston, Texas; Jiang, Jing
Purpose: High-quality treatment for intact cervical cancer requires external radiation therapy, brachytherapy, and chemotherapy, carefully sequenced and completed without delays. We sought to determine how frequently current treatment meets quality benchmarks and whether new technologies have influenced patterns of care. Methods and Materials: By searching diagnosis and procedure claims in MarketScan, an employment-based health care claims database, we identified 1508 patients with nonmetastatic, intact cervical cancer treated from 1999 to 2011, who were <65 years of age and received >10 fractions of radiation. Treatments received were identified using procedure codes and compared with 3 quality benchmarks: receipt of brachytherapy, receipt of chemotherapy, and radiation treatment duration not exceeding 63 days. The Cochran-Armitage test was used to evaluate temporal trends. Results: Seventy-eight percent of patients (n=1182) received brachytherapy, with brachytherapy receipt stable over time (Cochran-Armitage P_trend=.15). Among patients who received brachytherapy, 66% had high–dose rate and 34% had low–dose rate treatment, although use of high–dose rate brachytherapy steadily increased to 75% by 2011 (P_trend<.001). Eighteen percent of patients (n=278) received intensity modulated radiation therapy (IMRT), and IMRT receipt increased to 37% by 2011 (P_trend<.001). Only 2.5% of patients (n=38) received IMRT in the setting of brachytherapy omission. Overall, 79% of patients (n=1185) received chemotherapy, and chemotherapy receipt increased to 84% by 2011 (P_trend<.001). Median radiation treatment duration was 56 days (interquartile range, 47-65 days); however, duration exceeded 63 days in 36% of patients (n=543). Although 98% of patients received at least 1 benchmark treatment, only 44% received treatment that met all 3 benchmarks. With more stringent indicators (brachytherapy, ≥4 chemotherapy cycles, and duration not exceeding 56 days), only 25% of patients received treatment that met all benchmarks. Conclusion: In this cohort, most cervical cancer patients received treatment that did not comply with all 3 benchmarks for quality treatment. In contrast to increasing receipt of newer radiation technologies, there was little improvement in receipt of essential treatment benchmarks.
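A minimal sketch of the per-patient benchmark check described above is given below; the field names are hypothetical placeholders for variables derived from the claims data, not MarketScan procedure codes.

    # Sketch: evaluate the three quality benchmarks for one patient record.
    def meets_benchmarks(patient, max_duration_days=63):
        checks = {
            "brachytherapy": patient["received_brachytherapy"],
            "chemotherapy": patient["received_chemotherapy"],
            "duration": patient["radiation_duration_days"] <= max_duration_days,
        }
        return checks, all(checks.values())

    example = {"received_brachytherapy": True, "received_chemotherapy": True,
               "radiation_duration_days": 58}
    print(meets_benchmarks(example))

Tightening max_duration_days to 56 and adding a cycle-count threshold for chemotherapy reproduces the "more stringent indicators" variant of the analysis.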
A comparison of methods using optical coherence tomography to detect demineralized regions in teeth
Sowa, Michael G.; Popescu, Dan P.; Friesen, Jeri R.; Hewko, Mark D.; Choo-Smith, Lin-P’ing
2013-01-01
Optical coherence tomography (OCT) is a three-dimensional optical imaging technique that can be used to identify areas of early caries formation in dental enamel. The OCT signal at 850 nm back-reflected from sound enamel is attenuated more strongly than the signal back-reflected from demineralized regions. To quantify this observation, the OCT signal as a function of depth into the enamel (also known as the A-scan intensity), the histogram of the A-scan intensities, and three summary parameters derived from the A-scan are defined and their diagnostic potential compared. A total of 754 OCT A-scans were analyzed. The three summary parameters derived from the A-scans, namely the OCT attenuation coefficient as well as the mean and standard deviation of the lognormal fit to the histogram of the A-scan ensemble, show statistically significant differences (p < 0.01) when comparing parameters from sound enamel and caries. Furthermore, these parameters show only a modest correlation. Based on the area under the curve (AUC) of the receiver operating characteristic (ROC) plot, the OCT attenuation coefficient shows higher discriminatory capacity (AUC = 0.98) compared to the parameters derived from the lognormal fit to the histogram of the A-scan. However, direct analysis of the A-scans or the histogram of A-scan intensities using linear support vector machine classification shows diagnostic discrimination (AUC = 0.96) comparable to that achieved using the attenuation coefficient. These findings suggest that direct analysis of the A-scan, its intensity histogram, or the attenuation coefficient derived from the descending slope of the OCT A-scan all have high capacity to discriminate between regions of caries and sound enamel. PMID:22052833
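As a rough sketch of the attenuation-coefficient estimate (an assumption-level illustration: single-exponential decay of the descending A-scan slope, synthetic data, and a round-trip factor of 2 assumed):

    # Sketch: OCT attenuation coefficient from the descending slope of an A-scan,
    # via a linear fit of log-intensity versus depth (synthetic toy data).
    import numpy as np

    rng = np.random.default_rng(1)
    depth = np.linspace(0.0, 0.5, 200)            # mm
    mu_true = 6.0                                  # 1/mm, assumed value
    a_scan = np.exp(-2.0 * mu_true * depth) * (1.0 + 0.05 * rng.normal(size=depth.size))

    slope, intercept = np.polyfit(depth, np.log(a_scan), 1)
    mu_est = -slope / 2.0                          # divide out the round-trip factor
    print(f"estimated attenuation coefficient: {mu_est:.2f} 1/mm")

The classification results above compare exactly this kind of scalar summary against feeding the full A-scan (or its intensity histogram) into a linear support vector machine.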
Chiu, Su-Chin; Cheng, Cheng-Chieh; Chang, Hing-Chiu; Chung, Hsiao-Wen; Chiu, Hui-Chu; Liu, Yi-Jui; Hsu, Hsian-He; Juan, Chun-Jung
2016-04-01
To verify whether quantification of parotid perfusion is affected by fat signals on non-fat-saturated (NFS) dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and whether the influence of fat is reduced with fat saturation (FS). This study consisted of three parts. First, a retrospective study analyzed DCE-MRI data previously acquired on different patients using NFS (n = 18) or FS (n = 18) scans. Second, a phantom study simulated the signal enhancements in the presence of gadolinium contrast agent at six concentrations and three fat contents. Finally, a prospective study recruited nine healthy volunteers to investigate the influence of fat suppression on perfusion quantification on the same subjects. Parotid perfusion parameters were derived from NFS and FS DCE-MRI data using both pharmacokinetic model analysis and semiquantitative parametric analysis. T tests and linear regression analysis were used for statistical analysis with correction for multiple comparisons. NFS scans showed lower amplitude-related parameters, including parameter A, peak enhancement (PE), and slope than FS scans in the patients (all with P < 0.0167). The relative signal enhancement in the phantoms was proportional to the dose of contrast agent and was lower in NFS scans than in FS scans. The volunteer study showed lower parameter A (6.75 ± 2.38 a.u.), PE (42.12% ± 14.87%), and slope (1.43% ± 0.54% s⁻¹) in NFS scans as compared to 17.63 ± 8.56 a.u., 104.22% ± 25.15%, and 9.68% ± 1.67% s⁻¹, respectively, in FS scans (all with P < 0.005). These amplitude-related parameters were negatively associated with the fat content in NFS scans only (all with P < 0.05). On NFS DCE-MRI, quantification of parotid perfusion is adversely affected by the presence of fat signals for all amplitude-related parameters. The influence could be reduced on FS scans.
TU-F-12A-05: Sensitivity of Textural Features to 3D Vs. 4D FDG-PET/CT Imaging in NSCLC Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Nyflot, M; Bowen, S
2014-06-15
Purpose: Neighborhood Gray-level difference matrices (NGLDM) based texture parameters extracted from conventional (3D) 18F-FDG PET scans in patients with NSCLC have been previously shown to associate with response to chemoradiation and poorer patient outcome. However, the change in these parameters when utilizing respiratory-correlated (4D) FDG-PET scans has not yet been characterized for NSCLC. The objective of this study was to assess the extent to which NGLDM-based texture parameters on 4D PET images vary with reference to values derived from 3D scans in NSCLC. Methods: Eight patients with newly diagnosed NSCLC treated with concomitant chemoradiotherapy were included in this study. 4D PET scans were reconstructed with OSEM-IR in 5 respiratory phase-binned images, and the corresponding CT data of each phase were employed for attenuation correction. NGLDM-based texture features, consisting of coarseness, contrast, busyness, complexity and strength, were evaluated for gross tumor volumes defined on 3D/4D PET scans by radiation oncologists. Variations of the obtained texture parameters over the respiratory cycle were examined with respect to values extracted from 3D scans. Results: Differences between texture parameters derived from 4D scans at different respiratory phases and those extracted from 3D scans ranged from −30% to 13% for coarseness, −12% to 40% for contrast, −5% to 50% for busyness, −7% to 38% for complexity, and −43% to 20% for strength. Furthermore, no evident correlations were observed between respiratory phase and 4D scan texture parameters. Conclusion: Results of the current study showed that NGLDM-based texture parameters varied considerably based on the choice of 3D PET or 4D PET reconstruction of NSCLC patient images, indicating that standardized image acquisition and analysis protocols need to be established for clinical studies, especially multicenter clinical trials, intending to validate prognostic values of texture features for NSCLC.
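For orientation, a simplified 2D sketch of the NGLDM coarseness feature is shown below; clinical implementations operate on resampled, gray-level-quantized 3D PET volumes and differ in neighborhood definition and normalization, so this is an illustration of the idea only.

    # Simplified 2D NGLDM coarseness (illustrative sketch, not a clinical implementation).
    import numpy as np

    def ngldm_coarseness(img, levels=8, eps=1e-12):
        q = np.digitize(img, np.linspace(img.min(), img.max(), levels + 1)[1:-1])
        pad = np.pad(q.astype(float), 1, mode="edge")
        h, w = q.shape
        s = np.zeros(levels)        # accumulated gray-level differences per level
        counts = np.zeros(levels)   # occurrences per level
        for y in range(h):
            for x in range(w):
                neigh = pad[y:y + 3, x:x + 3]
                avg = (neigh.sum() - q[y, x]) / 8.0     # 8-neighbour average
                s[q[y, x]] += abs(q[y, x] - avg)
                counts[q[y, x]] += 1
        p = counts / counts.sum()
        return 1.0 / (eps + np.sum(p * s))

    rng = np.random.default_rng(2)
    roi = rng.random((32, 32))      # stand-in for a tumor region of interest
    print(ngldm_coarseness(roi))

Because such features depend on voxel statistics within the gross tumor volume, it is plausible that phase binning, noise level and reconstruction settings shift them, which is what the reported 3D-versus-4D differences quantify.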
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsi, W; Lee, T; Schultz, T
Purpose: To evaluate the accuracy of a two-dimensional optical dosimeter in measuring lateral profiles for spots and scanned fields of proton pencil beams. Methods: A digital camera with a color image sensor was utilized to image proton-induced scintillations on Gadolinium-oxysulfide phosphor reflected by a stainless-steel mirror. Intensities of the three colors were summed for each pixel with proper spatial-resolution calibration. To benchmark this dosimeter, the field size and penumbra for 100 mm square fields of single-energy pencil-scan protons were measured and compared between this optical dosimeter and an ionization-chamber profiler. Sigma widths of proton spots in air were measured and compared between this dosimeter and a commercial optical dosimeter. Clinical proton beams with ranges between 80 mm and 300 mm at the CDH proton center were used for this benchmark. Results: Pixel resolutions vary by 1.5% between the two perpendicular axes. For a pencil-scan field with 302 mm range, measured field sizes and penumbras between the two detection systems agreed to 0.5 mm and 0.3 mm, respectively. Sigma widths agree to 0.3 mm between the two optical dosimeters for a proton spot with 158 mm range, having widths of 5.76 mm and 5.92 mm for the X and Y axes, respectively. Similar agreement was obtained for other beam ranges. This dosimeter was successfully utilized in mapping the shapes and sizes of proton spots at the technical acceptance of the McLaren proton therapy system. Snow-flake spots seen on images indicated that the image sensor had pixels damaged by radiation. Minor variations in intensity between the different colors were observed. Conclusions: The accuracy of our dosimeter was in good agreement with other established devices in measuring lateral profiles of pencil-scan fields and proton spots. A precise docking mechanism for the camera was designed to keep the optical path aligned while replacing a damaged image sensor. Causes of the minor variations between emitted color lights will be investigated.
Assessing and benchmarking multiphoton microscopes for biologists
Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F.
2017-01-01
Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to its superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can be used either within a multiphoton facility or by a prospective purchaser to benchmark performance. This can assist both in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. PMID:24974026
Benditz, Achim; Greimel, Felix; Auer, Patrick; Zeman, Florian; Göttermann, Antje; Grifka, Joachim; Meissner, Winfried; von Kunow, Frederik
2016-01-01
Background The number of total hip replacement surgeries has steadily increased over recent years. Reduction in postoperative pain increases patient satisfaction and enables better mobilization. Thus, pain management needs to be continuously improved. Problems are often caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. Methods All patients included in the study had undergone total hip arthroplasty (THA). Outcome parameters were analyzed 24 hours after surgery by means of the questionnaires from the German-wide project “Quality Improvement in Postoperative Pain Management” (QUIPS). A pain nurse interviewed patients and continuously assessed outcome quality parameters. A multidisciplinary team of anesthetists, orthopedic surgeons, and nurses implemented a regular procedure of data analysis and internal benchmarking. The health care team was informed of all results and suggested improvements. Every staff member involved in pain management participated in educational lessons, and a special pain nurse was trained in each ward. Results From 2014 to 2015, 367 patients were included. The mean maximal pain score 24 hours after surgery was 4.0 (±3.0) on an 11-point numeric rating scale, and patient satisfaction was 9.0 (±1.2). Over time, the maximum pain score decreased (mean 3.0, ±2.0), whereas patient satisfaction significantly increased (mean 9.8, ±0.4; p<0.05). Among 49 anonymized hospitals, our clinic remained in first place in terms of lowest maximum pain and highest patient satisfaction over the period. Conclusion Results were already acceptable when benchmarking of the standardized pain management concept began. But regular benchmarking, implementation of feedback mechanisms, and staff education made the pain management concept even more successful. Multidisciplinary teamwork and flexibility in adapting processes seem to be highly important for successful pain management. PMID:28031727
Benditz, Achim; Greimel, Felix; Auer, Patrick; Zeman, Florian; Göttermann, Antje; Grifka, Joachim; Meissner, Winfried; von Kunow, Frederik
2016-01-01
The number of total hip replacement surgeries has steadily increased over recent years. Reduction in postoperative pain increases patient satisfaction and enables better mobilization. Thus, pain management needs to be continuously improved. Problems are often caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total hip arthroplasty (THA). Outcome parameters were analyzed 24 hours after surgery by means of the questionnaires from the German-wide project "Quality Improvement in Postoperative Pain Management" (QUIPS). A pain nurse interviewed patients and continuously assessed outcome quality parameters. A multidisciplinary team of anesthetists, orthopedic surgeons, and nurses implemented a regular procedure of data analysis and internal benchmarking. The health care team was informed of all results and suggested improvements. Every staff member involved in pain management participated in educational lessons, and a special pain nurse was trained in each ward. From 2014 to 2015, 367 patients were included. The mean maximal pain score 24 hours after surgery was 4.0 (±3.0) on an 11-point numeric rating scale, and patient satisfaction was 9.0 (±1.2). Over time, the maximum pain score decreased (mean 3.0, ±2.0), whereas patient satisfaction significantly increased (mean 9.8, ±0.4; p <0.05). Among 49 anonymized hospitals, our clinic remained in first place in terms of lowest maximum pain and highest patient satisfaction over the period. Results were already acceptable when benchmarking of the standardized pain management concept began. But regular benchmarking, implementation of feedback mechanisms, and staff education made the pain management concept even more successful. Multidisciplinary teamwork and flexibility in adapting processes seem to be highly important for successful pain management.
Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2011-01-01
An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreement could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.
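For context, mixed-mode failure indices of this kind are often evaluated with a criterion such as Benzeggagh-Kenane (B-K); whether this exact criterion was used for the graphite/epoxy material here is an assumption, since the abstract does not name it. In that form,

    G_c \;=\; G_{Ic} + \left(G_{IIc}-G_{Ic}\right)\left(\frac{G_{II}+G_{III}}{G_T}\right)^{\eta},
    \qquad f \;=\; \frac{G_T}{G_c}, \qquad G_T = G_I + G_{II} + G_{III},

with delamination onset predicted when the failure index f reaches unity.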
Sczyrba, Alexander; Hofmann, Peter; Belmann, Peter; Koslicki, David; Janssen, Stefan; Dröge, Johannes; Gregor, Ivan; Majda, Stephan; Fiedler, Jessika; Dahms, Eik; Bremges, Andreas; Fritz, Adrian; Garrido-Oter, Ruben; Jørgensen, Tue Sparholt; Shapiro, Nicole; Blood, Philip D.; Gurevich, Alexey; Bai, Yang; Turaev, Dmitrij; DeMaere, Matthew Z.; Chikhi, Rayan; Nagarajan, Niranjan; Quince, Christopher; Meyer, Fernando; Balvočiūtė, Monika; Hansen, Lars Hestbjerg; Sørensen, Søren J.; Chia, Burton K. H.; Denis, Bertrand; Froula, Jeff L.; Wang, Zhong; Egan, Robert; Kang, Dongwan Don; Cook, Jeffrey J.; Deltel, Charles; Beckstette, Michael; Lemaitre, Claire; Peterlongo, Pierre; Rizk, Guillaume; Lavenier, Dominique; Wu, Yu-Wei; Singer, Steven W.; Jain, Chirag; Strous, Marc; Klingenberg, Heiner; Meinicke, Peter; Barton, Michael; Lingner, Thomas; Lin, Hsin-Hung; Liao, Yu-Chieh; Silva, Genivaldo Gueiros Z.; Cuevas, Daniel A.; Edwards, Robert A.; Saha, Surya; Piro, Vitor C.; Renard, Bernhard Y.; Pop, Mihai; Klenk, Hans-Peter; Göker, Markus; Kyrpides, Nikos C.; Woyke, Tanja; Vorholt, Julia A.; Schulze-Lefert, Paul; Rubin, Edward M.; Darling, Aaron E.; Rattei, Thomas; McHardy, Alice C.
2018-01-01
In metagenome analysis, computational methods for assembly, taxonomic profiling and binning are key components facilitating downstream biological data interpretation. However, a lack of consensus about benchmarking datasets and evaluation metrics complicates proper performance assessment. The Critical Assessment of Metagenome Interpretation (CAMI) challenge has engaged the global developer community to benchmark their programs on datasets of unprecedented complexity and realism. Benchmark metagenomes were generated from ~700 newly sequenced microorganisms and ~600 novel viruses and plasmids, including genomes with varying degrees of relatedness to each other and to publicly available ones and representing common experimental setups. Across all datasets, assembly and genome binning programs performed well for species represented by individual genomes, while performance was substantially affected by the presence of related strains. Taxonomic profiling and binning programs were proficient at high taxonomic ranks, with a notable performance decrease below the family level. Parameter settings substantially impacted performances, underscoring the importance of program reproducibility. While highlighting current challenges in computational metagenomics, the CAMI results provide a roadmap for software selection to answer specific research questions. PMID:28967888
NASA Astrophysics Data System (ADS)
Gillette, V. H.; Patiño, N. E.; Granada, J. R.; Mayer, R. E.
1989-08-01
Using a synthetic incoherent scattering function which describes the interaction of neutrons with molecular gases we provide analytical expressions for zero- and first-order scattering kernels, σ0(E0 → E), σ1(E0 → E), and the total cross section σ0(E0). Based on these quantities, we have performed calculations of thermalization parameters and transport coefficients for H2O, D2O, C6H6 and (CH2)n at room temperature. Comparison of such values with available experimental data and other calculations is satisfactory. We also generated nuclear data libraries for H2O with 47 thermal groups at 300 K and performed some benchmark calculations (235U, 239Pu, PWR cell and typical APWR cell); the resulting reactivities are compared with experimental data and ENDF/B-IV calculations.
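In the usual convention (stated here as an assumption, since the abstract does not define the quantities), the kernels are Legendre moments of the double-differential scattering cross section and the zero-order kernel integrates to the energy-dependent scattering cross section:

    \sigma_l(E_0 \to E) \;=\; 2\pi \int_{-1}^{1} \sigma(E_0 \to E, \mu)\, P_l(\mu)\, \mathrm{d}\mu,
    \qquad
    \sigma_0(E_0) \;=\; \int_0^{\infty} \sigma_0(E_0 \to E)\, \mathrm{d}E .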
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2016-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a large-sample approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions
Nearing, Grey S.; Mocko, David M.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Xia, Youlong
2018-01-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a “large-sample” approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances. PMID:29697706
Benchmarking NLDAS-2 Soil Moisture and Evapotranspiration to Separate Uncertainty Contributions.
Nearing, Grey S; Mocko, David M; Peters-Lidard, Christa D; Kumar, Sujay V; Xia, Youlong
2016-03-01
Model benchmarking allows us to separate uncertainty in model predictions caused by model inputs from uncertainty due to model structural error. We extend this method with a "large-sample" approach (using data from multiple field sites) to measure prediction uncertainty caused by errors in (i) forcing data, (ii) model parameters, and (iii) model structure, and use it to compare the efficiency of soil moisture state and evapotranspiration flux predictions made by the four land surface models in the North American Land Data Assimilation System Phase 2 (NLDAS-2). Parameters dominated uncertainty in soil moisture estimates and forcing data dominated uncertainty in evapotranspiration estimates; however, the models themselves used only a fraction of the information available to them. This means that there is significant potential to improve all three components of the NLDAS-2 system. In particular, continued work toward refining the parameter maps and look-up tables, the forcing data measurement and processing, and also the land surface models themselves, has potential to result in improved estimates of surface mass and energy balances.
Slob, Wout
2017-04-01
A general theory on effect size for continuous data predicts a relationship between maximum response and within-group variation of biological parameters, which is empirically confirmed by results from dose-response analyses of 27 different biological parameters. The theory shows how effect sizes observed in distinct biological parameters can be compared and provides a basis for a generic definition of small, intermediate and large effects. While the theory is useful for experimental science in general, it has specific consequences for risk assessment: it settles the current debate on the appropriate metric for the benchmark response (BMR) in continuous data. The theory shows that scaling the BMR, expressed as a percent change in means, to the maximum response (in the way specified) automatically takes "natural variability" into account. Thus, the theory supports the underlying rationale of the BMR of 1 SD. For various reasons, it is, however, recommended to use a BMR in terms of a percent change that is scaled to the maximum response and/or within-group variation (averaged over studies), as a single harmonized approach.
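For readers outside the benchmark-dose field, the two metrics being contrasted can be written schematically as follows (a simplified statement, assuming an increasing dose-response in the mean μ(d) and within-group standard deviation σ):

    \text{percent-change BMR:}\quad
    \left|\frac{\mu(\mathrm{BMD})-\mu(0)}{\mu(0)}\right| \;=\; \mathrm{BMR}_{\%},
    \qquad
    \text{1 SD BMR:}\quad
    \left|\mu(\mathrm{BMD})-\mu(0)\right| \;=\; 1\cdot\sigma ,

where the BMD is the dose at which the chosen response level is reached. The theory's point is that scaling the percent-change version to the maximum response reproduces the variability-awareness that motivates the 1 SD version.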
NASA Astrophysics Data System (ADS)
Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.
2016-01-01
Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to provide significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, previously examined for uniform scanning proton beams, needs to be evaluated for spot scanning; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters cannot be defined consistently across proton therapy applications, since the impact of the parameter settings depends on the proton irradiation technique. We therefore conclude that customization parameters must be set with reference to the optimized parameters of the corresponding irradiation technique in order to render them useful for achieving artifact-free MC simulation for use in computational experiments and clinical treatments.
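One of the comparison quantities, the proton range extracted from a PDD curve, can be sketched as below; the "R80" (depth of the 80% distal dose) definition and the toy Bragg-like curve are illustrative assumptions, not the paper's exact range metric.

    # Sketch: distal range R80 from a simulated percentage depth dose curve,
    # by linear interpolation on the distal fall-off.
    import numpy as np

    def r80(depth_mm, pdd):
        pdd = 100.0 * np.asarray(pdd) / np.max(pdd)
        i_max = int(np.argmax(pdd))
        distal_d, distal_p = depth_mm[i_max:], pdd[i_max:]
        # interpolate depth as a function of dose on the monotonically falling distal edge
        return float(np.interp(80.0, distal_p[::-1], distal_d[::-1]))

    depth = np.linspace(0, 200, 401)                                       # mm
    pdd = np.exp(-((depth - 160.0) / 25.0) ** 2) + 0.3 * depth / 200.0     # toy Bragg-like curve
    print(f"R80 = {r80(depth, pdd):.1f} mm")

Comparing such range values and point-by-point dose deviations against the FLUKA reference, as a function of step size and cut-off settings, is what drives the parameter optimization described above.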
Automatic Keyword Extraction from Individual Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.
2010-05-03
This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
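A minimal sketch of the stopword-delimited candidate extraction with degree/frequency word scoring described above follows; the stop-word list is heavily truncated and the phrase splitting ignores punctuation, so this is an illustration of the idea rather than the paper's implementation.

    # Sketch: candidate phrases split at stop words; each word scored by
    # degree/frequency; phrase score = sum of member word scores.
    import re
    from collections import defaultdict

    STOPWORDS = {"a", "an", "and", "are", "as", "at", "be", "by", "for", "from",
                 "in", "is", "it", "of", "on", "or", "over", "that", "the", "to", "with"}

    def extract_keywords(text):
        words = re.findall(r"[a-zA-Z][a-zA-Z\-]*", text.lower())
        phrases, current = [], []
        for w in words:
            if w in STOPWORDS:
                if current:
                    phrases.append(tuple(current))
                    current = []
            else:
                current.append(w)
        if current:
            phrases.append(tuple(current))
        freq, degree = defaultdict(int), defaultdict(int)
        for phrase in phrases:
            for w in phrase:
                freq[w] += 1
                degree[w] += len(phrase) - 1   # co-occurrences within the phrase
        word_score = {w: (degree[w] + freq[w]) / freq[w] for w in freq}
        scored = {" ".join(p): sum(word_score[w] for w in p) for p in set(phrases)}
        return sorted(scored.items(), key=lambda kv: -kv[1])

    print(extract_keywords("Compatibility of systems of linear constraints "
                           "over the set of natural numbers")[:3])

Because the scoring needs only the document itself plus a stop-word list, the method stays domain-independent, which is the property the evaluation on the benchmark corpus of technical abstracts is testing.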
Length of stay benchmarking in the Australian private hospital sector.
Hanning, Brian W T
2007-02-01
Length of stay (LOS) benchmarking is a means of comparing hospital efficiency. Analysis of private cases in private facilities using Australian Institute of Health and Welfare (AIHW) data shows interstate variation in same-day (SD) cases and overnight average LOS (ONALOS) on an Australian Refined Diagnosis Related Groups version 4 (ARDRGv4) standardised basis. ARDRGv4 standardised analysis from 1998-99 to 2003-04 shows a steady increase in private sector SD cases (approximately 1.4% per annum) and a decrease in ONALOS (approximately 4.3% per annum). Overall, the data show significant variation in LOS parameters between private hospitals.
Isaacs, Eric B.; Wolverton, Chris
2018-06-22
Constructed to satisfy 17 known exact constraints for a semilocal density functional, the strongly constrained and appropriately normed (SCAN) meta-generalized-gradient-approximation functional has shown early promise for accurately describing the electronic structure of molecules and solids. One open question is how well SCAN predicts the formation energy, a key quantity for describing the thermodynamic stability of solid-state compounds. To answer this question, we perform an extensive benchmark of SCAN by computing the formation energies for a diverse group of nearly 1000 crystalline compounds for which experimental values are known. Due to an enhanced exchange interaction in the covalent bonding regime, SCAN substantially decreases the formation energy errors for strongly bound compounds, by approximately 50% to 110 meV/atom, as compared to the generalized gradient approximation of Perdew, Burke, and Ernzerhof (PBE). However, for intermetallic compounds, SCAN performs moderately worse than PBE with an increase in formation energy error of approximately 20%, stemming from SCAN's distinct behavior in the weak bonding regime. The formation energy errors can be further reduced via elemental chemical potential fitting. We find that SCAN leads to significantly more accurate predicted crystal volumes, moderately enhanced magnetism, and mildly improved band gaps as compared to PBE. Altogether, SCAN represents a significant improvement in accurately describing the thermodynamics of strongly bound compounds.
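The quantity being benchmarked is the formation energy per atom, which in its simplest form is the compound total energy referenced to the elemental phases (a sketch with hypothetical numbers; real workflows add spin states, corrections and careful reference-state choices):

    # Sketch: formation energy per atom from DFT total energies (values hypothetical).
    def formation_energy_per_atom(e_compound, composition, elemental_energies):
        """e_compound: total energy (eV) of the compound cell;
        composition: {element: count in the cell};
        elemental_energies: {element: reference energy per atom (eV)}."""
        n_atoms = sum(composition.values())
        e_ref = sum(n * elemental_energies[el] for el, n in composition.items())
        return (e_compound - e_ref) / n_atoms

    # e.g. a hypothetical AB2 cell
    print(formation_energy_per_atom(-21.4, {"A": 1, "B": 2},
                                    {"A": -3.7, "B": -8.1}))   # -> -0.5 eV/atom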
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isaacs, Eric B.; Wolverton, Chris
Constructed to satisfy 17 known exact constraints for a semilocal density functional, the strongly constrained and appropriately normed (SCAN) meta-generalized-gradient-approximation functional has shown early promise for accurately describing the electronic structure of molecules and solids. One open question is how well SCAN predicts the formation energy, a key quantity for describing the thermodynamic stability of solid-state compounds. To answer this question, we perform an extensive benchmark of SCAN by computing the formation energies for a diverse group of nearly 1000 crystalline compounds for which experimental values are known. Due to an enhanced exchange interaction in the covalent bonding regime, SCAN substantially decreases the formation energy errors for strongly bound compounds, by approximately 50% to 110 meV/atom, as compared to the generalized gradient approximation of Perdew, Burke, and Ernzerhof (PBE). However, for intermetallic compounds, SCAN performs moderately worse than PBE with an increase in formation energy error of approximately 20%, stemming from SCAN's distinct behavior in the weak bonding regime. The formation energy errors can be further reduced via elemental chemical potential fitting. We find that SCAN leads to significantly more accurate predicted crystal volumes, moderately enhanced magnetism, and mildly improved band gaps as compared to PBE. Altogether, SCAN represents a significant improvement in accurately describing the thermodynamics of strongly bound compounds.
Benchmarking Memory Performance with the Data Cube Operator
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Shabanov, Leonid V.
2004-01-01
Data movement across a computer memory hierarchy and across computational grids is known to be a limiting factor for applications processing large data sets. We use the Data Cube Operator on an Arithmetic Data Set, called ADC, to benchmark the capabilities of computers and of computational grids to handle large distributed data sets. We present a prototype implementation of a parallel algorithm for computation of the operator. The algorithm follows a known approach for computing views from the smallest parent. The ADC stresses all levels of grid memory and storage by producing some of the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of integers. We control the data intensity of the ADC by selecting the tuple parameters, the sizes of the views, and the number of realized views. Benchmarking results of memory performance of a number of computer architectures and of a small computational grid are presented.
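The 2^d view count comes from grouping the d-attribute tuples by every subset of attributes; a minimal sketch (a naive enumeration with a simple count as the measure, not the smallest-parent algorithm used in the benchmark) is shown below.

    # Sketch: all 2^d group-by views of a small tuple set.
    from itertools import combinations
    from collections import defaultdict

    def data_cube(tuples, dims):
        """Return {view (tuple of dim names): {group key: count}} for all 2^d views."""
        views = {}
        for r in range(len(dims) + 1):
            for view in combinations(dims, r):
                groups = defaultdict(int)
                for row in tuples:
                    key = tuple(row[d] for d in view)
                    groups[key] += 1           # the measure here is a simple count
                views[view] = dict(groups)
        return views

    rows = [{"a": 1, "b": 2, "c": 1}, {"a": 1, "b": 3, "c": 1}, {"a": 2, "b": 2, "c": 1}]
    cube = data_cube(rows, ("a", "b", "c"))
    print(len(cube), cube[("a",)])   # 8 views; {(1,): 2, (2,): 1}

Computing views from their smallest already-materialized parent, as the benchmark algorithm does, avoids rescanning the full tuple set for every view and is what makes the memory hierarchy the dominant cost.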
Benchmarking study of corporate research management and planning practices
NASA Astrophysics Data System (ADS)
McIrvine, Edward C.
1992-05-01
During 1983-84, Xerox Corporation was undergoing a change in corporate style through a process of training and altered behavior known as Leadership Through Quality. One tenet of Leadership Through Quality was benchmarking, a procedure whereby all units of the corporation were asked to compare their operation with the outside world. As a part of the first wave of benchmark studies, Xerox Corporate Research Group studied the processes of research management, technology transfer, and research planning in twelve American and Japanese companies. The approach taken was to separate `research yield' and `research productivity' (as defined by Richard Foster) and to seek information about how these companies sought to achieve high- quality results in these two parameters. The most significant findings include the influence of company culture, two different possible research missions (an innovation resource and an information resource), and the importance of systematic personal interaction between sources and targets of technology transfer.
Benchmarks for single-phase flow in fractured porous media
NASA Astrophysics Data System (ADS)
Flemisch, Bernd; Berre, Inga; Boon, Wietse; Fumagalli, Alessio; Schwenck, Nicolas; Scotti, Anna; Stefansson, Ivar; Tatomir, Alexandru
2018-01-01
This paper presents several test cases intended to be benchmarks for numerical schemes for single-phase fluid flow in fractured porous media. A number of solution strategies are compared, including a vertex and two cell-centred finite volume methods, a non-conforming embedded discrete fracture model, a primal and a dual extended finite element formulation, and a mortar discrete fracture model. The proposed benchmarks test the schemes by increasing the difficulties in terms of network geometry, e.g. intersecting fractures, and physical parameters, e.g. low and high fracture-matrix permeability ratio as well as heterogeneous fracture permeabilities. For each problem, the results presented are the number of unknowns, the approximation errors in the porous matrix and in the fractures with respect to a reference solution, and the sparsity and condition number of the discretized linear system. All data and meshes used in this study are publicly available for further comparisons.
A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.
Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas
2014-01-01
The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
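Two common ways of enforcing bound constraints on candidate solutions, which would keep reported best solutions feasible, are sketched below (an illustration under the assumption of simple box constraints; the CEC'05 algorithms each used their own repair or penalty strategies).

    # Sketch: clamping to the boundary vs. reflecting back into the box.
    import numpy as np

    def clamp(x, lower, upper):
        return np.clip(x, lower, upper)

    def reflect(x, lower, upper):
        span = upper - lower
        y = np.mod(x - lower, 2.0 * span)
        return lower + np.where(y > span, 2.0 * span - y, y)

    x = np.array([-120.0, 30.0, 150.0])
    lo, hi = np.full(3, -100.0), np.full(3, 100.0)
    print(clamp(x, lo, hi))    # [-100.  30. 100.]
    print(reflect(x, lo, hi))  # [ -80.  30.  50.]

The article's point is precisely that when neither repair nor rejection is applied to the final reported solutions, comparisons on these benchmark functions can be distorted by infeasible "best" candidates.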
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Tze Yee
Purpose: For postimplant dosimetric assessment, computed tomography (CT) is commonly used to identify prostate brachytherapy seeds, at the expense of accurate anatomical contouring. Magnetic resonance imaging (MRI) is superior to CT for anatomical delineation, but identification of the negative-contrast seeds is challenging. Positive-contrast MRI markers were proposed to replace spacers to assist seed localization on MRI images. Visualization of these markers under varying scan parameters was investigated. Methods: To simulate a clinical scenario, a prostate phantom was implanted with 66 markers and 86 seeds, and imaged on a 3.0T MRI scanner using a 3D fast radiofrequency-spoiled gradient recalled echo acquisition with various combinations of scan parameters. Scan parameters, including flip angle, number of excitations, bandwidth, field-of-view, slice thickness, and encoding steps were systematically varied to study their effects on signal, noise, scan time, image resolution, and artifacts. Results: The effects of pulse sequence parameter selection on the marker signal strength and image noise were characterized. The authors also examined the tradeoff between signal-to-noise ratio, scan time, and image artifacts, such as the wraparound artifact, susceptibility artifact, chemical shift artifact, and partial volume averaging artifact. Given reasonable scan time and manageable artifacts, the authors recommended scan parameter combinations that can provide robust visualization of the MRI markers. Conclusions: The recommended MRI pulse sequence protocol allows for consistent visualization of the markers to assist seed localization, potentially enabling MRI-only prostate postimplant dosimetry.
NASA Astrophysics Data System (ADS)
Hanssen, R. F.
2017-12-01
In traditional geodesy, one is interested in determining the coordinates, or the change in coordinates, of predefined benchmarks. These benchmarks are clearly identifiable and are especially established to be representative of the signal of interest. This holds, e.g., for leveling benchmarks, for triangulation/trilateration benchmarks, and for GNSS benchmarks. The desired coordinates are not identical to the basic measurements, and need to be estimated using robust estimation procedures, where the stochastic nature of the measurements is taken into account. For InSAR, however, the 'benchmarks' are not predefined. In fact, usually we do not know where an effective benchmark is located, even though we can determine its dynamic behavior pretty well. This poses several significant problems. First, we cannot describe the quality of the measurements, unless we already know the dynamic behavior of the benchmark. Second, if we don't know the quality of the measurements, we cannot compute the quality of the estimated parameters. Third, rather harsh assumptions need to be made to produce a result. These (usually implicit) assumptions differ between processing operators and the used software, and are severely affected by the amount of available data. Fourth, the 'relative' nature of the final estimates is usually not explicitly stated, which is particularly problematic for non-expert users. Finally, whereas conventional geodesy applies rigorous testing to check for measurement or model errors, this is hardly ever done in InSAR-geodesy. Together, these problems make it practically impossible to provide a precise, reliable, repeatable, and 'universal' InSAR product or service. Here we evaluate the requirements and challenges to move towards InSAR as a geodetically-proof product. In particular this involves the explicit inclusion of contextual information, as well as InSAR procedures, standards and a technical protocol, supported by the International Association of Geodesy and the international scientific community.
2015-09-20
battle force ships. The changes affect a small number of ship classes designated as (very) small combatants or logistics and support ships. Specifically... accurate, and fast method of helping shipbuilders and manufacturers design, redesign, modify, and salvage ships. However, only a handful of several... ship construction to become a fleet of 306 battle force ships over the next 30 years. It is critical that the Navy capture the full benefits of new
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiu, Su-Chin; Cheng, Cheng-Chieh; Chang, Hing-Chiu
Purpose: To verify whether quantification of parotid perfusion is affected by fat signals on non-fat-saturated (NFS) dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and whether the influence of fat is reduced with fat saturation (FS). Methods: This study consisted of three parts. First, a retrospective study analyzed DCE-MRI data previously acquired on different patients using NFS (n = 18) or FS (n = 18) scans. Second, a phantom study simulated the signal enhancements in the presence of gadolinium contrast agent at six concentrations and three fat contents. Finally, a prospective study recruited nine healthy volunteers to investigate the influence of fat suppression on perfusion quantification on the same subjects. Parotid perfusion parameters were derived from NFS and FS DCE-MRI data using both pharmacokinetic model analysis and semiquantitative parametric analysis. T tests and linear regression analysis were used for statistical analysis with correction for multiple comparisons. Results: NFS scans showed lower amplitude-related parameters, including parameter A, peak enhancement (PE), and slope than FS scans in the patients (all with P < 0.0167). The relative signal enhancement in the phantoms was proportional to the dose of contrast agent and was lower in NFS scans than in FS scans. The volunteer study showed lower parameter A (6.75 ± 2.38 a.u.), PE (42.12% ± 14.87%), and slope (1.43% ± 0.54% s⁻¹) in NFS scans as compared to 17.63 ± 8.56 a.u., 104.22% ± 25.15%, and 9.68% ± 1.67% s⁻¹, respectively, in FS scans (all with P < 0.005). These amplitude-related parameters were negatively associated with the fat content in NFS scans only (all with P < 0.05). Conclusions: On NFS DCE-MRI, quantification of parotid perfusion is adversely affected by the presence of fat signals for all amplitude-related parameters. The influence could be reduced on FS scans.
n+235U resonance parameters and neutron multiplicities in the energy region below 100 eV
NASA Astrophysics Data System (ADS)
Pigni, Marco T.; Capote, Roberto; Trkov, Andrej; Pronyaev, Vladimir G.
2017-09-01
In August 2016, following the recent effort within the Collaborative International Evaluated Library Organization (CIELO) pilot project to improve the neutron cross sections of 235U, Oak Ridge National Laboratory (ORNL) collaborated with the International Atomic Energy Agency (IAEA) to release a resonance parameter evaluation. This evaluation restores the performance of the evaluated cross sections for the thermal- and above-thermal-solution benchmarks on the basis of newly evaluated thermal neutron constants (TNCs) and thermal prompt fission neutron spectra (PFNS). Performed with support from the US Nuclear Criticality Safety Program (NCSP) in an effort to provide the highest fidelity general purpose nuclear database for nuclear criticality applications, the resonance parameter evaluation was submitted as an ENDF-compatible file to be part of the next release of the ENDF/B-VIII.0 nuclear data library. The resonance parameter evaluation methodology used the Reich-Moore approximation of the R-matrix formalism implemented in the code SAMMY to fit the available time-of-flight (TOF) measured data for the thermal induced cross section of n+235U up to 100 eV. While maintaining reasonably good agreement with the experimental data, the validation analysis focused on restoring the benchmark performance for 235U solutions by combining changes to the resonance parameters and to the prompt resonance ν̄ below 100 eV.
Cotter, Meghan M.; Whyms, Brian J.; Kelly, Michael P.; Doherty, Benjamin M.; Gentry, Lindell R.; Bersu, Edward T.; Vorperian, Houri K.
2015-01-01
The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared to corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. PMID:25810349
Cotter, Meghan M; Whyms, Brian J; Kelly, Michael P; Doherty, Benjamin M; Gentry, Lindell R; Bersu, Edward T; Vorperian, Houri K
2015-08-01
The hyoid bone anchors and supports the vocal tract. Its complex shape is best studied in three dimensions, but it is difficult to capture on computed tomography (CT) images and three-dimensional volume renderings. The goal of this study was to determine the optimal CT scanning and rendering parameters to accurately measure the growth and developmental anatomy of the hyoid and to determine whether it is feasible and necessary to use these parameters in the measurement of hyoids from in vivo CT scans. Direct linear and volumetric measurements of skeletonized hyoid bone specimens were compared with corresponding CT images to determine the most accurate scanning parameters and three-dimensional rendering techniques. A pilot study was undertaken using in vivo scans from a retrospective CT database to determine feasibility of quantifying hyoid growth. Scanning parameters and rendering technique affected accuracy of measurements. Most linear CT measurements were within 10% of direct measurements; however, volume was overestimated when CT scans were acquired with a slice thickness greater than 1.25 mm. Slice-by-slice thresholding of hyoid images decreased volume overestimation. The pilot study revealed that the linear measurements tested correlate with age. A fine-tuned rendering approach applied to small slice thickness CT scans produces the most accurate measurements of hyoid bones. However, linear measurements can be accurately assessed from in vivo CT scans at a larger slice thickness. Such findings imply that investigation into the growth and development of the hyoid bone, and the vocal tract as a whole, can now be performed using these techniques. © 2015 Wiley Periodicals, Inc.
Landsat-5 bumper-mode geometric correction
Storey, James C.; Choate, Michael J.
2004-01-01
The Landsat-5 Thematic Mapper (TM) scan mirror was switched from its primary operating mode to a backup mode in early 2002 in order to overcome internal synchronization problems arising from long-term wear of the scan mirror mechanism. The backup bumper mode of operation removes the constraints on scan start and stop angles enforced in the primary scan angle monitor operating mode, requiring additional geometric calibration effort to monitor the active scan angles. It also eliminates scan timing telemetry used to correct the TM scan geometry. These differences require changes to the geometric correction algorithms used to process TM data. A mathematical model of the scan mirror's behavior when operating in bumper mode was developed. This model includes a set of key timing parameters that characterize the time-varying behavior of the scan mirror bumpers. To simplify the implementation of the bumper-mode model, the bumper timing parameters were recast in terms of the calibration and telemetry data items used to process normal TM imagery. The resulting geometric performance, evaluated over 18 months of bumper-mode operations, though slightly reduced from that achievable in the primary operating mode, is still within the Landsat specifications when the data are processed with the most up-to-date calibration parameters.
NASA Astrophysics Data System (ADS)
Mendoza, Sergio; Rothenberger, Michael; Hake, Alison; Fathy, Hosam
2016-03-01
This article presents a framework for optimizing the thermal cycle to estimate a battery cell's entropy coefficient at 20% state of charge (SOC). Our goal is to maximize Fisher identifiability: a measure of the accuracy with which a parameter can be estimated. Existing protocols in the literature for estimating entropy coefficients demand excessive laboratory time. Identifiability optimization makes it possible to achieve comparable accuracy levels in a fraction of the time. This article demonstrates this result for a set of lithium iron phosphate (LFP) cells. We conduct a 24-h experiment to obtain benchmark measurements of their entropy coefficients. We optimize a thermal cycle to maximize parameter identifiability for these cells. This optimization proceeds with respect to the coefficients of a Fourier discretization of this thermal cycle. Finally, we compare the estimated parameters using (i) the benchmark test, (ii) the optimized protocol, and (iii) a 15-h test from the literature (by Forgez et al.). The results are encouraging for two reasons. First, they confirm the simulation-based prediction that the optimized experiment can produce accurate parameter estimates in 2 h, compared to 15-24. Second, the optimized experiment also estimates a thermal time constant representing the effects of thermal capacitance and convection heat transfer.
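The identifiability idea can be made concrete with a toy calculation: for a scalar parameter θ entering a measurement model with additive Gaussian noise of variance σ², the Fisher information is the summed squared sensitivity of the model output to θ divided by σ², and the thermal cycle is chosen to maximize it. The lumped thermal model, noise level, and three-term Fourier parametrization below are illustrative assumptions, not the cell model used in the article.

```python
import numpy as np
from scipy.optimize import minimize

sigma = 0.05          # assumed measurement noise std (K)
dt, T = 1.0, 7200.0   # 2-h experiment, 1-s sampling
t = np.arange(0.0, T, dt)

def cell_temperature(entropy_coeff, current):
    """Toy lumped thermal model: dT/dt = (a*I**2 + entropy_coeff*I - h*T) / C."""
    a, h, C = 0.01, 0.02, 50.0
    temp = np.zeros_like(t)
    for k in range(1, t.size):
        dT = (a * current[k-1]**2 + entropy_coeff * current[k-1] - h * temp[k-1]) / C
        temp[k] = temp[k-1] + dt * dT
    return temp

def neg_fisher_info(fourier_coeffs, theta=0.3):
    """Negative Fisher information of the entropy coefficient for a current
    profile parametrized by a few Fourier sine coefficients."""
    w = 2.0 * np.pi / T
    current = sum(c * np.sin((i + 1) * w * t) for i, c in enumerate(fourier_coeffs))
    eps = 1e-4
    dy = (cell_temperature(theta + eps, current)
          - cell_temperature(theta - eps, current)) / (2 * eps)   # output sensitivity
    return -np.sum(dy**2) / sigma**2

res = minimize(neg_fisher_info, x0=np.array([1.0, 0.5, 0.2]),
               bounds=[(-5.0, 5.0)] * 3, method="L-BFGS-B",
               options={"maxiter": 20})
print("optimized Fourier coefficients:", res.x)
```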
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinilla, Maria Isabel
This report seeks to study and benchmark code predictions against experimental data; determine parameters to match MCNP-simulated detector response functions to experimental stilbene measurements; add stilbene processing capabilities to DRiFT; and improve NEUANCE detector array modeling and analysis using new MCNP6 and DRiFT features.
Accuracy of parameter estimates for closely spaced optical targets using multiple detectors
NASA Astrophysics Data System (ADS)
Dunn, K. P.
1981-10-01
In order to obtain the cross-scan position of an optical target, more than one scanning detector is used. As expected, the cross-scan position estimation performance degrades when two nearby optical targets interfere with each other. Theoretical bounds on the two-dimensional parameter estimation performance for two closely spaced optical targets are found. Two particular classes of scanning detector arrays, namely, the crow's foot and the brickwall (or mosaic) patterns, are considered.
Lazaris, Charalampos; Kelly, Stephen; Ntziachristos, Panagiotis; Aifantis, Iannis; Tsirigos, Aristotelis
2017-01-05
Chromatin conformation capture techniques have evolved rapidly over the last few years and have provided new insights into genome organization at an unprecedented resolution. Analysis of Hi-C data is complex and computationally intensive, involving multiple tasks and requiring robust quality assessment. This has led to the development of several tools and methods for processing Hi-C data. However, most of the existing tools do not cover all aspects of the analysis and offer only a few quality assessment options. Additionally, the availability of a multitude of tools leaves scientists wondering how these tools and associated parameters can be optimally used, and how potential discrepancies can be interpreted and resolved. Most importantly, investigators need to be assured that slight changes in parameters and/or methods do not affect the conclusions of their studies. To address these issues (compare, explore and reproduce), we introduce HiC-bench, a configurable computational platform for comprehensive and reproducible analysis of Hi-C sequencing data. HiC-bench performs all common Hi-C analysis tasks, such as alignment, filtering, contact matrix generation and normalization, identification of topological domains, scoring and annotation of specific interactions using both published tools and our own. We have also embedded various tasks that perform quality assessment and visualization. HiC-bench is implemented as a data flow platform with an emphasis on analysis reproducibility. Additionally, the user can readily perform parameter exploration and comparison of different tools in a combinatorial manner that takes into account all desired parameter settings in each pipeline task. This unique feature facilitates the design and execution of complex benchmark studies that may involve combinations of multiple tool/parameter choices in each step of the analysis. To demonstrate the usefulness of our platform, we performed a comprehensive benchmark of existing and new TAD callers exploring different matrix correction methods, parameter settings and sequencing depths. Users can extend our pipeline by adding more tools as they become available. HiC-bench is an easy-to-use and extensible platform for comprehensive analysis of Hi-C datasets. We expect that it will facilitate current analyses and help scientists formulate and test new hypotheses in the field of three-dimensional genome organization.
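The combinatorial parameter exploration described above amounts to enumerating the Cartesian product of tool and parameter choices, one pipeline branch per combination; the task names and values in this sketch are hypothetical and are not HiC-bench's actual configuration keys.

```python
from itertools import product

# hypothetical pipeline choices per task (not actual HiC-bench settings)
choices = {
    "aligner":           ["bowtie2", "bwa"],
    "matrix_correction": ["ICE", "naive"],
    "tad_caller":        ["caller_A", "caller_B", "caller_C"],
    "resolution_kb":     [40, 100],
}

def expand_branches(choices):
    """Enumerate every combination of tool/parameter settings (one branch each)."""
    keys = list(choices)
    for values in product(*(choices[k] for k in keys)):
        yield dict(zip(keys, values))

for i, branch in enumerate(expand_branches(choices), 1):
    print(f"branch {i:02d}: {branch}")
```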
Construction of Polarimetric Radar-Based Reference Rain Maps for the Iowa Flood Studies Campaign
NASA Technical Reports Server (NTRS)
Petersen, Walter; Wolff, David; Krajewski, Witek; Gatlin, Patrick
2015-01-01
The Global Precipitation Measurement (GPM) Mission Iowa Flood Studies (IFloodS) campaign was conducted in central and northeastern Iowa during the months of April-June, 2013. Specific science objectives for IFloodS included quantification of uncertainties in satellite and ground-based estimates of precipitation, 4-D characterization of precipitation physical processes and associated parameters (e.g., size distributions, water contents, types, structure etc.), assessment of the impact of precipitation estimation uncertainty and physical processes on hydrologic predictive skill, and refinement of field observations and data analysis approaches as they pertain to future GPM integrated hydrologic validation and related field studies. In addition to field campaign archival of raw and processed satellite data (including precipitation products), key ground-based platforms such as the NASA NPOL S-band and D3R Ka/Ku-band dual-polarimetric radars, University of Iowa X-band dual-polarimetric radars, a large network of paired rain gauge platforms, and a large network of 2D Video and Parsivel disdrometers were deployed. In something of a canonical approach, the radar (NPOL in particular), gauge and disdrometer observational assets were deployed to create a consistent high-quality distributed (time and space sampling) radar-based ground "reference" rainfall dataset, with known uncertainties, that could be used for assessing the satellite-based precipitation products at a range of space/time scales. Subsequently, the impact of uncertainties in the satellite products could be evaluated relative to the ground-benchmark in coupled weather, land-surface and distributed hydrologic modeling frameworks as related to flood prediction. Relative to establishing the ground-based "benchmark", numerous avenues were pursued in the making and verification of IFloodS "reference" dual-polarimetric radar-based rain maps, and this study documents the process and results as they pertain specifically to efforts using the NPOL radar dataset. The initial portions of the "process" involved dual-polarimetric quality control procedures which employed standard phase and correlation-based approaches to removal of clutter and non-meteorological echo. Calculation of a scale-adaptive KDP was accomplished using the method of Wang and Chandrasekar (2009; J. Atmos. Oceanic Tech.). A dual-polarimetric blockage algorithm based on Lang et al. (2009; J. Atmos. Oceanic Tech.) was then implemented to correct radar reflectivity and differential reflectivity at low elevation angles. Next, hydrometeor identification algorithms were run to identify liquid and ice hydrometeors. After the quality control and data preparation steps were completed several different dual-polarimetric rain estimation algorithms were employed to estimate rainfall rates using rainfall scans collected approximately every two to three minutes throughout the campaign. These algorithms included a polarimetrically-tuned Z-R algorithm that adjusts for drop oscillations (via Bringi et al., 2004, J. Atmos. Oceanic Tech.), and several different hybrid polarimetric variable approaches, including one that made use of parameters tuned to IFloodS 2D Video Disdrometer measurements. Finally, a hybrid scan algorithm was designed to merge the rain rate estimates from multiple low level elevation angle scans (where blockages could not be appropriately corrected) in order to create individual low-level rain maps. 
Individual rain maps at each time step were subsequently accumulated over multiple time scales for comparison to gauge network data. The comparison results and overall error character depended strongly on rain event type, polarimetric estimator applied, and range from the radar. We will present the outcome of these comparisons and their impact on constructing composited "reference" rainfall maps at select time and space scales.
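As a schematic of the rain-map step, the sketch below converts reflectivity to rain rate with a generic Z-R power law and accumulates it for comparison against a gauge total; the Marshall-Palmer-style coefficients and the gauge value are placeholders for illustration, not the polarimetrically tuned estimators used in the study.

```python
import numpy as np

def zr_rain_rate(dbz, a=200.0, b=1.6):
    """Rain rate R (mm/h) from reflectivity (dBZ) via Z = a * R**b (illustrative coefficients)."""
    z_linear = 10.0 ** (dbz / 10.0)           # reflectivity in mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

# hypothetical 2-minute radar scans over one hour at a gauge location
dbz_series = np.array([28, 30, 33, 35, 34, 32, 30, 27, 25, 22] * 3, dtype=float)
dt_hours = 2.0 / 60.0
radar_accum = np.sum(zr_rain_rate(dbz_series) * dt_hours)    # mm

gauge_accum = 4.8                                            # hypothetical gauge total (mm)
bias = 100.0 * (radar_accum - gauge_accum) / gauge_accum
print(f"radar {radar_accum:.2f} mm vs gauge {gauge_accum:.2f} mm ({bias:+.1f}% bias)")
```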
NASA Technical Reports Server (NTRS)
Colver, Gerald M.; Greene, Nathanael; Shoemaker, David; Xu, Hua
2003-01-01
The Electric Particulate Suspension (EPS) is a combustion ignition system being developed at Iowa State University for evaluating quenching effects of powders in microgravity (quenching distance, ignition energy, flammability limits). Because of the high cloud uniformity possible and its simplicity, the EPS method has potential for "benchmark" design of quenching flames that would provide NASA and the scientific community with a new fire standard. Microgravity is expected to increase suspension uniformity even further and extend combustion testing to higher concentrations (rich fuel limit) than is possible at normal gravity. Two new combustion parameters are being investigated with this new method: (1) the particle velocity distribution and (2) particle-oxidant slip velocity. Both walls and (inert) particles can be tested as quenching media. The EPS method supports combustion modeling by providing accurate measurement of flame-quenching distance as a parameter in laminar flame theory as it closely relates to characteristic flame thickness and flame structure. Because of its design simplicity, EPS is suitable for testing on the International Space Station (ISS). Laser scans showing stratification effects at 1-g have been studied for different materials: aluminum, glass, and copper. PTV/PIV and a leak hole sampling rig give the particle velocity distribution, with the particle slip velocity evaluated using LDA. Sample quenching and ignition energy curves are given for aluminum powder. Testing is planned for the KC-135 and NASA's two-second drop tower. Only 1-g ground-based data have been reported to date.
An Automated and Intelligent Medical Decision Support System for Brain MRI Scans Classification.
Siddiqui, Muhammad Faisal; Reza, Ahmed Wasif; Kanesan, Jeevan
2015-01-01
A wide interest has been observed in medical health care applications that interpret neuroimaging scans by machine learning systems. This research proposes an intelligent, automatic, accurate, and robust classification technique to classify the human brain magnetic resonance image (MRI) as normal or abnormal, to reduce human error in identifying diseases in brain MRIs. In this study, fast discrete wavelet transform (DWT), principal component analysis (PCA), and least squares support vector machine (LS-SVM) are used as basic components. Firstly, fast DWT is employed to extract the salient features of brain MRI, followed by PCA, which reduces the dimensions of the features. These reduced feature vectors also shrink the memory storage consumption by 99.5%. Finally, an advanced classification technique based on LS-SVM is applied to brain MR image classification using reduced features. For improving the efficiency, LS-SVM is used with a non-linear radial basis function (RBF) kernel. The proposed algorithm intelligently determines the optimized values of the hyper-parameters of the RBF kernel and also applies k-fold stratified cross-validation to enhance the generalization of the system. The method was tested on benchmark datasets of T1-weighted and T2-weighted scans from 340 patients. From the analysis of experimental results and performance comparisons, it is observed that the proposed medical decision support system outperforms all other modern classifiers and achieves a 100% accuracy rate (specificity/sensitivity 100%/100%). Furthermore, in terms of computation time, the proposed technique is significantly faster than the recent well-known methods, and it improves the efficiency by 71%, 3%, and 4% in the feature extraction, feature reduction, and classification stages, respectively. These results indicate that the proposed well-trained machine learning system has the potential to make accurate predictions about brain abnormalities from individual subjects; therefore, it can be used as a significant tool in clinical practice.
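A minimal sketch of the feature-extraction and classification chain is given below, assuming PyWavelets and scikit-learn; a standard RBF-kernel SVM stands in for the LS-SVM, and synthetic textures stand in for the T1/T2 brain slices, so the numbers it prints are not comparable to the reported accuracies.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

def dwt_features(image, wavelet="haar", level=2):
    """Approximation coefficients of a 2-D DWT flattened into a feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    return coeffs[0].ravel()

def synth_scan(abnormal):
    """Synthetic 64x64 stand-in for a brain slice; 'abnormal' gets a bright blob."""
    img = rng.normal(0.0, 1.0, (64, 64))
    if abnormal:
        img[20:40, 20:40] += 2.0
    return img

X = np.array([dwt_features(synth_scan(lbl)) for lbl in (0, 1) * 100])
y = np.array([0, 1] * 100)

# PCA for dimensionality reduction, RBF-kernel SVM as an LS-SVM stand-in
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, y,
                         cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
print("stratified 5-fold accuracy:", scores.mean())
```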
Experimental benchmarking of a Monte Carlo dose simulation code for pediatric CT
NASA Astrophysics Data System (ADS)
Li, Xiang; Samei, Ehsan; Yoshizumi, Terry; Colsher, James G.; Jones, Robert P.; Frush, Donald P.
2007-03-01
In recent years, there has been a desire to reduce CT radiation dose to children because of their susceptibility and prolonged risk for cancer induction. Concerns arise, however, as to the impact of dose reduction on image quality and thus potentially on diagnostic accuracy. To study the dose and image quality relationship, we are developing a simulation code to calculate organ dose in pediatric CT patients. To benchmark this code, a cylindrical phantom was built to represent a pediatric torso, which allows measurements of dose distributions from its center to its periphery. Dose distributions for axial CT scans were measured on a 64-slice multidetector CT (MDCT) scanner (GE Healthcare, Chalfont St. Giles, UK). The same measurements were simulated using a Monte Carlo code (PENELOPE, Universitat de Barcelona) with the applicable CT geometry including bowtie filter. The deviations between simulated and measured dose values were generally within 5%. To our knowledge, this work is one of the first attempts to compare measured radial dose distributions on a cylindrical phantom with Monte Carlo simulated results. It provides a simple and effective method for benchmarking organ dose simulation codes and demonstrates the potential of Monte Carlo simulation for investigating the relationship between dose and image quality for pediatric CT patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faillace, E.R.; Cheng, J.J.; Yu, C.
A series of benchmarking runs were conducted so that results obtained with the RESRAD code could be compared against those obtained with six pathway analysis models used to determine the radiation dose to an individual living on a radiologically contaminated site. The RESRAD computer code was benchmarked against five other computer codes - GENII-S, GENII, DECOM, PRESTO-EPA-CPG, and PATHRAE-EPA - and the uncodified methodology presented in the NUREG/CR-5512 report. Estimated doses for the external gamma pathway; the dust inhalation pathway; and the soil, food, and water ingestion pathways were calculated for each methodology by matching, to the extent possible, input parameters such as occupancy, shielding, and consumption factors.
NASA Astrophysics Data System (ADS)
Park, E.; Jeong, J.
2017-12-01
Precise estimation of groundwater fluctuation is studied by considering delayed recharge flux (DRF) and unsaturated zone drainage (UZD). Both DRF and UZD are due to gravitational flow impeded in the unsaturated zone, which may non-negligibly affect groundwater level changes. In the validation, a previous model that does not consider unsaturated flow serves as the benchmark, with the actual groundwater level and precipitation data divided into three periods based on climatic conditions. The estimation capability of the new model is superior to that of the benchmarked model, as indicated by the significantly improved representation of the groundwater level with physically interpretable model parameters.
Continuous quality improvement for the clinical decision unit.
Mace, Sharon E
2004-01-01
Clinical decision units (CDUs) are a relatively new and growing area of medicine in which patients undergo rapid evaluation and treatment. Continuous quality improvement (CQI) is important for the establishment and functioning of CDUs. CQI in CDUs has many advantages: better CDU functioning, fulfillment of Joint Commission on Accreditation of Healthcare Organizations mandates, greater efficiency/productivity, increased job satisfaction, better performance improvement, data availability, and benchmarking. Key elements include a database with volume indicators, operational policies, clinical practice protocols (diagnosis specific/condition specific), monitors, benchmarks, and clinical pathways. Examples of these important parameters are given. The CQI process should be individualized for each CDU and hospital.
(U) Analytic First and Second Derivatives of the Uncollided Leakage for a Homogeneous Sphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favorite, Jeffrey A.
2017-04-26
The second-order adjoint sensitivity analysis methodology (2nd-ASAM), developed and applied by Cacuci, has been used to derive second derivatives of a response with respect to input parameters for uncollided particles in an inhomogeneous transport problem. In this memo, we present an analytic benchmark for verifying the derivatives of the 2nd-ASAM. The problem is a homogeneous sphere, and the response is the uncollided total leakage. This memo does not repeat the formulas given in Ref. 2. We are preparing a journal article that will include the derivation of Ref. 2 and the benchmark of this memo.
Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique
NASA Astrophysics Data System (ADS)
Kalinovsky, A.; Liauchuk, V.; Tarasau, A.
2017-05-01
In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of Deep Learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. The algorithms, which are based on Deep Convolutional Networks, were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and direct detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.
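The slice-wise detection strategy can be pictured as follows: a 2D model is applied to each axial slice and the per-slice masks are stacked into a 3D lesion mask. In the sketch below, a crude intensity threshold is only a placeholder for the convolutional network, and the CT volume is synthetic.

```python
import numpy as np

def predict_slice(slice_2d):
    """Placeholder for a 2-D semantic-segmentation network; returns a lesion
    probability map. A crude intensity threshold stands in for the CNN here."""
    return (slice_2d > slice_2d.mean() + 3.0 * slice_2d.std()).astype(float)

def slicewise_lesion_detection(ct_volume, prob_threshold=0.5, min_voxels=30):
    """Run the 2-D model slice by slice and stack the masks into a 3-D lesion mask."""
    masks = np.stack([predict_slice(ct_volume[k]) > prob_threshold
                      for k in range(ct_volume.shape[0])])
    return masks if masks.sum() >= min_voxels else np.zeros_like(masks)

# toy CT volume: 40 slices of 128x128 with one bright synthetic lesion
rng = np.random.default_rng(1)
volume = rng.normal(-700.0, 80.0, (40, 128, 128))
volume[18:22, 60:70, 60:70] += 600.0
print("lesion voxels detected:", int(slicewise_lesion_detection(volume).sum()))
```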
2016-04-30
manufacturing is also commonly referred to as 3D printing. AM differs radically from the currently dominant manufacturing methodologies. Most current ... referred to as 3D printing. In the automotive industry, Ford Motor Co. uses 3D printing in several areas, including the tooling used to create production ... four months and cost $500,000 to build, while a 3D-printed manifold prototype costs $3,000 to build over four days. Additive Manufacturing in the ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ranjbar, V. H.; Méot, F.; Bai, M.
Depolarization response for a system of two orthogonal snakes at irrational tunes is studied in depth using lattice independent spin integration. Particularly, we consider the effect of overlapping spin resonances in this system, to understand the impact of phase, tune, relative location and threshold strengths of the spin resonances. Furthermore, these results are benchmarked and compared to two dimensional direct tracking results for the RHIC e-lens lattice and the standard lattice. We then consider the effect of longitudinal motion via chromatic scans using direct six dimensional lattice tracking.
NASA Astrophysics Data System (ADS)
Sillay, Karl; Schomberg, Dominic; Hinchman, Angelica; Kumbier, Lauren; Ross, Chris; Kubota, Ken; Brodsky, Ethan; Miranpuri, Gurwattan
2012-04-01
Convection-enhanced delivery (CED) is an advanced infusion technique used to deliver therapeutic agents into the brain. CED has shown promise in recent clinical trials. Independent verification of published parameters is warranted, with benchmark testing in applicable models such as gel phantoms, ex vivo tissue and in vivo non-human animal models, to effectively inform planned and future clinical therapies. In the current study, specific performance characteristics of two CED infusion catheter systems, such as backflow, infusion cloud morphology, volume of distribution (mm³) versus the infused volume (mm³) (Vd/Vi) ratios, rate of infusion (µl min⁻¹) and pressure (mmHg), were examined to verify published performance standards for the ERG valve-tip (VT) catheter. We tested the hypothesis that the ERG VT catheter, operated with a steady 1 µl min⁻¹ infusion protocol, performs comparably to the newly FDA-approved MRI Interventions Smart Flow (SF) catheter with the UCSF infusion protocol in an agarose gel model. In the gel phantom models, no significant difference was found in performance parameters between the VT and SF catheters. We report, for the first time, such benchmark characteristics in CED between these two otherwise similar single-end port VT with stylet and end-port non-stylet infusion systems. Results of the current study in agarose gel models suggest that the performance of the VT catheter is comparable to the SF catheter and warrants further investigation as a tool in the armamentarium of CED techniques for eventual clinical use and application.
Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients.
Mayer, Markus A; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P
2010-11-08
Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis.
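A greatly simplified version of a gradient-plus-smoothness energy minimization is sketched below, tracing a single layer boundary across A-scans by dynamic programming; the actual algorithm of the paper is more elaborate, so this is only meant to illustrate the idea on a toy B-scan.

```python
import numpy as np

def segment_boundary(image, smooth_weight=2.0):
    """Trace one layer boundary across A-scans by minimizing
    E = -(vertical gradient) + smooth_weight * |z_i - z_{i-1}|
    with dynamic programming (simplified gradient + smoothness energy)."""
    grad = np.diff(image, axis=0)                     # favors dark-to-bright edges
    n_z, n_x = grad.shape
    cost = np.full((n_z, n_x), np.inf)
    cost[:, 0] = -grad[:, 0]
    back = np.zeros((n_z, n_x), dtype=int)
    for x in range(1, n_x):
        for z in range(n_z):
            prev = cost[:, x - 1] + smooth_weight * np.abs(np.arange(n_z) - z)
            back[z, x] = int(np.argmin(prev))
            cost[z, x] = prev[back[z, x]] - grad[z, x]
    # backtrack the minimum-energy path
    path = np.zeros(n_x, dtype=int)
    path[-1] = int(np.argmin(cost[:, -1]))
    for x in range(n_x - 1, 0, -1):
        path[x - 1] = back[path[x], x]
    return path

# toy B-scan: dark background with a bright band whose top edge drifts slowly
img = np.zeros((80, 120))
top = (30 + 5 * np.sin(np.linspace(0, 3, 120))).astype(int)
for x, z in enumerate(top):
    img[z:z + 12, x] = 1.0
print(segment_boundary(img)[:10])
```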
Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients
Mayer, Markus A.; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.
2010-01-01
Automated measurements of the retinal nerve fiber layer thickness on circular OCT B-Scans provide physicians additional parameters for glaucoma diagnosis. We propose a novel retinal nerve fiber layer segmentation algorithm for frequency domain data that can be applied on scans from both normal healthy subjects, as well as glaucoma patients, using the same set of parameters. In addition, the algorithm remains almost unaffected by image quality. The main part of the segmentation process is based on the minimization of an energy function consisting of gradient and local smoothing terms. A quantitative evaluation comparing the automated segmentation results to manually corrected segmentations from three reviewers is performed. A total of 72 scans from glaucoma patients and 132 scans from normal subjects, all from different persons, composed the database for the evaluation of the segmentation algorithm. A mean absolute error per A-Scan of 2.9 µm was achieved on glaucomatous eyes, and 3.6 µm on healthy eyes. The mean absolute segmentation error over all A-Scans lies below 10 µm on 95.1% of the images. Thus our approach provides a reliable tool for extracting diagnostic relevant parameters from OCT B-Scans for glaucoma diagnosis. PMID:21258556
Sung, Wonmo; Park, Jong In; Kim, Jung-in; Carlson, Joel; Ye, Sung-Joon
2017-01-01
This study investigated the potential of a newly proposed scattering foil free (SFF) electron beam scanning technique for the treatment of skin cancer on the irregular patient surfaces using Monte Carlo (MC) simulation. After benchmarking of the MC simulations, we removed the scattering foil to generate SFF electron beams. Cylindrical and spherical phantoms with 1 cm boluses were generated and the target volume was defined from the surface to 5 mm depth. The SFF scanning technique with 6 MeV electrons was simulated using those phantoms. For comparison, volumetric modulated arc therapy (VMAT) plans were also generated with two full arcs and 6 MV photon beams. When the scanning resolution resulted in a larger separation between beams than the field size, the plan qualities were worsened. In the cylindrical phantom with a radius of 10 cm, the conformity indices, homogeneity indices and body mean doses of the SFF plans (scanning resolution = 1°) vs. VMAT plans were 1.04 vs. 1.54, 1.10 vs. 1.12 and 5 Gy vs. 14 Gy, respectively. Those of the spherical phantom were 1.04 vs. 1.83, 1.08 vs. 1.09 and 7 Gy vs. 26 Gy, respectively. The proposed SFF plans showed superior dose distributions compared to the VMAT plans. PMID:28493940
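For readers unfamiliar with the reported indices, the sketch below computes a conformity index and a homogeneity index from a dose grid using generic definitions (treated volume over target volume, and D2%/D98% inside the target); the exact definitions used in the study may differ, and the dose grid here is synthetic.

```python
import numpy as np

def conformity_homogeneity(dose, target_mask, prescription):
    """Generic plan-quality indices from a 3-D dose grid (hedged definitions):
    CI = volume receiving the prescription dose / target volume,
    HI = D2% / D98% inside the target."""
    treated = dose >= prescription
    ci = treated.sum() / target_mask.sum()
    target_dose = dose[target_mask]
    d2, d98 = np.percentile(target_dose, [98, 2])
    return ci, d2 / d98

# toy spherical-shell target in a 60^3 grid with a slightly hot, spilled dose
z, y, x = np.ogrid[:60, :60, :60]
r = np.sqrt((x - 30) ** 2 + (y - 30) ** 2 + (z - 30) ** 2)
target = (r >= 25) & (r <= 30)                       # shell target (hypothetical)
dose = 60.0 * np.where(r <= 31, 1.08 - 0.002 * r, 0.1)   # Gy, hypothetical
print(conformity_homogeneity(dose, target, prescription=60.0))
```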
Sung, Wonmo; Park, Jong In; Kim, Jung-In; Carlson, Joel; Ye, Sung-Joon; Park, Jong Min
2017-01-01
This study investigated the potential of a newly proposed scattering foil free (SFF) electron beam scanning technique for the treatment of skin cancer on the irregular patient surfaces using Monte Carlo (MC) simulation. After benchmarking of the MC simulations, we removed the scattering foil to generate SFF electron beams. Cylindrical and spherical phantoms with 1 cm boluses were generated and the target volume was defined from the surface to 5 mm depth. The SFF scanning technique with 6 MeV electrons was simulated using those phantoms. For comparison, volumetric modulated arc therapy (VMAT) plans were also generated with two full arcs and 6 MV photon beams. When the scanning resolution resulted in a larger separation between beams than the field size, the plan qualities were worsened. In the cylindrical phantom with a radius of 10 cm, the conformity indices, homogeneity indices and body mean doses of the SFF plans (scanning resolution = 1°) vs. VMAT plans were 1.04 vs. 1.54, 1.10 vs. 1.12 and 5 Gy vs. 14 Gy, respectively. Those of the spherical phantom were 1.04 vs. 1.83, 1.08 vs. 1.09 and 7 Gy vs. 26 Gy, respectively. The proposed SFF plans showed superior dose distributions compared to the VMAT plans.
Revisiting the PLUMBER Experiments from a Process-Diagnostics Perspective
NASA Astrophysics Data System (ADS)
Nearing, G. S.; Ruddell, B. L.; Clark, M. P.; Nijssen, B.; Peters-Lidard, C. D.
2017-12-01
The PLUMBER benchmarking experiments [1] showed that some of the most sophisticated land models (CABLE, CH-TESSEL, COLA-SSiB, ISBA-SURFEX, JULES, Mosaic, Noah, ORCHIDEE) were outperformed - in simulations of half-hourly surface energy fluxes - by instantaneous, out-of-sample, and globally-stationary regressions with no state memory. One criticism of PLUMBER is that the benchmarking methodology was not derived formally, so that applying a similar methodology with different performance metrics can result in qualitatively different results. Another common criticism of model intercomparison projects in general is that they offer little insight into process-level deficiencies in the models, and therefore are of marginal value for helping to improve the models. We address both of these issues by proposing a formal benchmarking methodology that also yields a formal and quantitative method for process-level diagnostics. We apply this to the PLUMBER experiments to show that (1) the PLUMBER conclusions were generally correct - the models use only a fraction of the information available to them from met forcing data (<50% by our analysis), and (2) all of the land models investigated by PLUMBER have similar process-level error structures, and therefore together do not represent a meaningful sample of structural or epistemic uncertainty. We conclude by suggesting two ways to improve the experimental design of model intercomparison and/or model benchmarking studies like PLUMBER. First, PLUMBER did not report model parameter values, and it is necessary to know these values to separate parameter uncertainty from structural uncertainty. This is a first order requirement if we want to use intercomparison studies to provide feedback to model development. Second, technical documentation of land models is inadequate. Future model intercomparison projects should begin with a collaborative effort by model developers to document specific differences between model structures. This could be done in a reproducible way using a unified, process-flexible system like SUMMA [2]. [1] Best, M.J. et al. (2015) 'The plumbing of land surface models: benchmarking model performance', J. Hydrometeor. [2] Clark, M.P. et al. (2015) 'A unified approach for process-based hydrologic modeling: 1. Modeling concept', Water Resour. Res.
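One way to picture the "fraction of available information used" statement is sketched below: the mutual information between model output and observations is compared with that of an out-of-sample regression trained only on the met forcing. The binned mutual-information estimate, toy data, and linear benchmark are simplifying assumptions and not the exact PLUMBER or Nearing et al. metric.

```python
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.linear_model import LinearRegression

def binned_mi(a, b, bins=20):
    """Histogram-based mutual information (nats) between two series."""
    a_d = np.digitize(a, np.histogram_bin_edges(a, bins))
    b_d = np.digitize(b, np.histogram_bin_edges(b, bins))
    return mutual_info_score(a_d, b_d)

rng = np.random.default_rng(0)
n = 5000
forcing = rng.normal(size=(n, 3))                        # stand-in met forcing
flux_obs = 2.0 * forcing[:, 0] + np.sin(forcing[:, 1]) + 0.3 * rng.normal(size=n)

# 'land model' stand-in that exploits only part of the forcing information
flux_model = 1.5 * forcing[:, 0] + 0.5 * rng.normal(size=n)

# out-of-sample regression benchmark trained on the forcing alone
half = n // 2
bench = LinearRegression().fit(forcing[:half], flux_obs[:half])
flux_bench = bench.predict(forcing[half:])

mi_model = binned_mi(flux_model[half:], flux_obs[half:])
mi_bench = binned_mi(flux_bench, flux_obs[half:])
print(f"fraction of benchmark-available information used: {mi_model / mi_bench:.2f}")
```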
NASA Astrophysics Data System (ADS)
Moorthy, Inian
Spectroscopic observational data for vegetated environments, have been coupled with 3D physically-based radiative transfer models for retrievals of biochemical and biophysical indicators of vegetation health and condition. With the recent introduction of Terrestrial Laser Scanning (TLS) units, there now exists a means of rapidly measuring intricate structural details of vegetation canopies, which can also serve as input into 3D radiative transfer models. In this investigation, Intelligent Laser Ranging and Imaging System (ILRIS-3D) data was acquired of individual tree crowns in laboratory, and field-based experiments. The ILRIS-3D uses the Time-Of-Flight (TOF) principle to measure the distances of objects based on the time interval between laser pulse exitance and return, upon reflection from an object. At the laboratory-level, this exploratory study demonstrated and validated innovative approaches for retrieving crown-level estimates of Leaf Area Index (LAI) (r2 = 0.98, rmse = 0.26m2/m2), a critical biophysical parameter for vegetation monitoring and modeling. These methods were implemented and expanded in field experiments conducted in olive (Olea europaea L.) orchards in Cordoba, Spain, where ILRIS-3D observations for 24 structurally-variable trees were made. Robust methodologies were developed to characterize diagnostic architectural parameters, such as tree height (r2 = 0.97, rmse = 0.21m), crown width (r 2 = 0.98, rmse = 0.12m), crown height (r2 = 0.81, rmse = 0.11m), crown volume (r2 = 0.99, rmse = 2.6m3), and LAI (r2 = 0.76, rmse = 0.27m2/ m2). These parameters were subsequently used as direct inputs into the Forest LIGHT (FLIGHT) 3D ray tracing model for characterization of the spectral behavior of the olive crowns. Comparisons between FLIGHT-simulated spectra and measured data showed small differences in the visible (< 3%) and near infrared (< 10%) spectral ranges. These differences between model simulations and measurements were significantly correlated to TLS-derived tree crown complexity metrics. The specific implications of internal crown complexity on estimating leaf chlorophyll concentration, a pertinent physiological health indicator, is highlighted. This research demonstrates that TLS systems can potentially be the new observational tool and benchmark for precise characterization of vegetation architecture for synergy with 3D radiative transfer models for improved operational management of agricultural crops.
A realistic intersecting D6-brane model after the first LHC run
NASA Astrophysics Data System (ADS)
Li, Tianjun; Nanopoulos, D. V.; Raza, Shabbar; Wang, Xiao-Chuan
2014-08-01
With the Higgs boson mass around 125 GeV and the LHC supersymmetry search constraints, we revisit a three-family Pati-Salam model from intersecting D6-branes in Type IIA string theory on the T 6/(ℤ2 × ℤ2) orientifold which has a realistic phenomenology. We systematically scan the parameter space for μ < 0 and μ > 0, and find that the gravitino mass is generically heavier than about 2 TeV for both cases due to the Higgs mass low bound 123 GeV. In particular, we identify a region of parameter space with the electroweak fine-tuning as small as Δ EW ~ 24-32 (3-4%). In the viable parameter space which is consistent with all the current constraints, the mass ranges for gluino, the first two-generation squarks and sleptons are respectively [3, 18] TeV, [3, 16] TeV, and [2, 7] TeV. For the third-generation sfermions, the light stop satisfying 5 σ WMAP bounds via neutralino-stop coannihilation has mass from 0.5 to 1.2 TeV, and the light stau can be as light as 800 GeV. We also show various coannihilation and resonance scenarios through which the observed dark matter relic density is achieved. Interestingly, the certain portions of parameter space has excellent t- b- τ and b- τ Yukawa coupling unification. Three regions of parameter space are highlighted as well where the dominant component of the lightest neutralino is a bino, wino or higgsino. We discuss various scenarios in which such solutions may avoid recent astrophysical bounds in case if they satisfy or above observed relic density bounds. Prospects of finding higgsino-like neutralino in direct and indirect searches are also studied. And we display six tables of benchmark points depicting various interesting features of our model. Note that the lightest neutralino can be heavy up to 2.8 TeV, and there exists a natural region of parameter space from low-energy fine-tuning definition with heavy gluino and first two-generation squarks/sleptons, we point out that the 33 TeV and 100 TeV proton-proton colliders are indeed needed to probe our D-brane model.
Parametrization and Benchmark of Long-Range Corrected DFTB2 for Organic Molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vuong, Van Quan; Akkarapattiakal Kuriappan, Jissy; Kubillus, Maximilian
In this paper, we present the parametrization and benchmark of long-range corrected second-order density functional tight binding (DFTB), LC-DFTB2, for organic and biological molecules. The LC-DFTB2 model not only improves fundamental orbital energy gaps but also ameliorates the DFT self-interaction error and overpolarization problem, and further improves charge-transfer excited states significantly. Electronic parameters for the construction of the DFTB2 Hamiltonian as well as repulsive potentials were optimized for molecules containing C, H, N, and O chemical elements. We use a semiautomatic parametrization scheme based on a genetic algorithm. With the new parameters, LC-DFTB2 describes geometries and vibrational frequencies of organic molecules about as well as third-order DFTB3/3OB, the de facto standard parametrization based on a GGA functional. Finally, LC-DFTB2 also performs well for atomization and reaction energies, although slightly less satisfactorily than DFTB3/3OB.
Parametrization and Benchmark of Long-Range Corrected DFTB2 for Organic Molecules
Vuong, Van Quan; Akkarapattiakal Kuriappan, Jissy; Kubillus, Maximilian; ...
2017-12-12
In this paper, we present the parametrization and benchmark of long-range corrected second-order density functional tight binding (DFTB), LC-DFTB2, for organic and biological molecules. The LC-DFTB2 model not only improves fundamental orbital energy gaps but also ameliorates the DFT self-interaction error and overpolarization problem, and further improves charge-transfer excited states significantly. Electronic parameters for the construction of the DFTB2 Hamiltonian as well as repulsive potentials were optimized for molecules containing C, H, N, and O chemical elements. We use a semiautomatic parametrization scheme based on a genetic algorithm. With the new parameters, LC-DFTB2 describes geometries and vibrational frequencies of organic molecules about as well as third-order DFTB3/3OB, the de facto standard parametrization based on a GGA functional. Finally, LC-DFTB2 also performs well for atomization and reaction energies, although slightly less satisfactorily than DFTB3/3OB.
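The genetic-algorithm parametrization step can be illustrated with a bare-bones example that fits a two-parameter exponential repulsive pair potential to reference energies; the potential form, reference data, and GA settings are toy assumptions and bear no relation to the actual LC-DFTB2 repulsive splines or fitting targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# reference data: 'target' repulsive energies at several bond distances (hypothetical)
r = np.linspace(1.0, 2.5, 15)
target = 8.0 * np.exp(-2.2 * r)

def repulsive(params, r):
    """Simple exponential repulsive pair potential, E(r) = A * exp(-b * r)."""
    A, b = params
    return A * np.exp(-b * r)

def fitness(params):
    return -np.mean((repulsive(params, r) - target) ** 2)       # higher is better

def genetic_algorithm(pop_size=40, generations=100, mut_sigma=0.1):
    pop = rng.uniform([0.1, 0.1], [20.0, 5.0], size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # keep the best half
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        children = children + rng.normal(0.0, mut_sigma, children.shape)  # mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(p) for p in pop])]

print("fitted (A, b):", genetic_algorithm())
```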
The MCUCN simulation code for ultracold neutron physics
NASA Astrophysics Data System (ADS)
Zsigmond, G.
2018-02-01
Ultracold neutrons (UCN) have very low kinetic energies of 0-300 neV and can therefore be stored in specific material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool in probing fundamental symmetries of nature (for instance charge-parity violation by neutron electric dipole moment experiments) and in contributing important parameters to Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are under construction at new and planned UCN sources around the world. MC simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in benchmarking of analysis codes, or as part of the analysis. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.
Energy benchmarking in wastewater treatment plants: the importance of site operation and layout.
Belloir, C; Stanford, C; Soares, A
2015-01-01
Energy benchmarking is a powerful tool in the optimization of wastewater treatment plants (WWTPs) in helping to reduce costs and greenhouse gas emissions. Traditionally, energy benchmarking methods focused solely on reporting electricity consumption; however, recent developments in this area have led to the inclusion of other types of energy, including electrical, manual, chemical and mechanical consumptions that can be expressed in kWh/m3. In this study, two full-scale WWTPs were benchmarked; both incorporated preliminary, secondary (oxidation ditch) and tertiary treatment processes, while Site 1 also had an additional primary treatment step. The results indicated that Site 1 required 2.32 kWh/m3 against 0.98 kWh/m3 for Site 2. Aeration presented the highest energy consumption for both sites, with 2.08 kWh/m3 required for Site 1 and 0.91 kWh/m3 for Site 2. The mechanical energy represented the second biggest consumption for Site 1 (9%, 0.212 kWh/m3) and chemical input was significant in Site 2 (4.1%, 0.026 kWh/m3). The analysis of the results indicated that Site 2 could be optimized by constructing a primary settling tank that would reduce the biochemical oxygen demand, total suspended solids and NH4 loads to the oxidation ditch by 55%, 75% and 12%, respectively, and at the same time reduce the aeration requirements by 49%. This study demonstrated the effectiveness of the energy benchmarking exercise in identifying the highest energy-consuming assets; nevertheless, it points out the need to develop a holistic overview of the WWTP and the need to include parameters such as effluent quality, site operation and plant layout to allow adequate benchmarking.
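The benchmarking arithmetic itself is straightforward: all energy inputs are expressed in kWh equivalents, summed, and divided by the treated volume. The numbers below are purely illustrative, not those of Site 1 or Site 2.

```python
# Hypothetical daily inputs for one WWTP (illustrative values only):
flow_m3           = 12_000       # treated volume per day (m3)
electricity_kwh   = 11_500       # aeration, pumping, mixing, ...
chemical_kwh_eq   = 450          # chemical inputs expressed as kWh equivalents
manual_kwh_eq     = 120          # manual labour expressed as kWh equivalents
mechanical_kwh_eq = 900

total_kwh = electricity_kwh + chemical_kwh_eq + manual_kwh_eq + mechanical_kwh_eq
benchmark = total_kwh / flow_m3
shares = {
    "electricity": electricity_kwh / total_kwh,
    "chemical":    chemical_kwh_eq / total_kwh,
    "manual":      manual_kwh_eq / total_kwh,
    "mechanical":  mechanical_kwh_eq / total_kwh,
}
print(f"specific energy: {benchmark:.2f} kWh/m3")
print({k: f"{v:.1%}" for k, v in shares.items()})
```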
Toward Establishing a Realistic Benchmark for Airframe Noise Research: Issues and Challenges
NASA Technical Reports Server (NTRS)
Khorrami, Mehdi R.
2010-01-01
The availability of realistic benchmark configurations is essential to enable the validation of current Computational Aeroacoustic (CAA) methodologies and to further the development of new ideas and concepts that will foster the technologies of the next generation of CAA tools. The selection of a real-world configuration, the subsequent design and fabrication of an appropriate model for testing, and the acquisition of the necessarily comprehensive aeroacoustic data base are critical steps that demand great care and attention. In this paper, a brief account of the nose landing-gear configuration, being proposed jointly by NASA and the Gulfstream Aerospace Company as an airframe noise benchmark, is provided. The underlying thought processes and the resulting building block steps that were taken during the development of this benchmark case are given. Resolution of critical, yet conflicting issues is discussed - the desire to maintain geometric fidelity versus model modifications required to accommodate instrumentation; balancing model scale size versus Reynolds number effects; and time, cost, and facility availability versus important parameters like surface finish and installation effects. The decisions taken during the experimental phase of a study can significantly affect the ability of a CAA calculation to reproduce the prevalent flow conditions and associated measurements. For the nose landing gear, the most critical of such issues are highlighted and the compromises made to resolve them are discussed. The results of these compromises will be summarized by examining the positive attributes and shortcomings of this particular benchmark case.
A new numerical benchmark for variably saturated variable-density flow and transport in porous media
NASA Astrophysics Data System (ADS)
Guevara, Carlos; Graf, Thomas
2016-04-01
In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations could lead to potentially unstable situations, in which a dense fluid overlies a less dense fluid. These situations could produce instabilities that appear as dense plume fingers migrating downwards counteracted by vertical upwards flow of freshwater (Simmons et al., Transp. Porous Medium, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times as compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model is validated (Diersch and Kolditz, Adv. Water Resour, 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Medium, 2002) is used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004) coupled with PEST (www.pesthomepage.org) are used to obtain an optimized parameter set capable of adequately representing the data set by Simmons et al., (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.
NASA Astrophysics Data System (ADS)
Kokkoris, M.; Dede, S.; Kantre, K.; Lagoyannis, A.; Ntemou, E.; Paneta, V.; Preketes-Sigalas, K.; Provatas, G.; Vlastou, R.; Bogdanović-Radović, I.; Siketić, Z.; Obajdin, N.
2017-08-01
The evaluated proton differential cross sections suitable for the Elastic Backscattering Spectroscopy (EBS) analysis of natSi and 16O, as obtained from SigmaCalc 2.0, have been benchmarked over a wide energy and angular range at two different accelerator laboratories, namely at N.C.S.R. 'Demokritos', Athens, Greece and at Ruđer Bošković Institute (RBI), Zagreb, Croatia, using a variety of high-purity thick targets of known stoichiometry. The results are presented in graphical and tabular forms, while the observed discrepancies, as well as, the limits in accuracy of the benchmarking procedure, along with target related effects, are thoroughly discussed and analysed. In the case of oxygen the agreement between simulated and experimental spectra was generally good, while for silicon serious discrepancies were observed above Ep,lab = 2.5 MeV, suggesting that a further tuning of the appropriate nuclear model parameters in the evaluated differential cross-section datasets is required.
Voss, Clifford I.; Simmons, Craig T.; Robinson, Neville I.
2010-01-01
This benchmark for three-dimensional (3D) numerical simulators of variable-density groundwater flow and solute or energy transport consists of matching simulation results with the semi-analytical solution for the transition from one steady-state convective mode to another in a porous box. Previous experimental and analytical studies of natural convective flow in an inclined porous layer have shown that there are a variety of convective modes possible depending on system parameters, geometry and inclination. In particular, there is a well-defined transition from the helicoidal mode consisting of downslope longitudinal rolls superimposed upon an upslope unicellular roll to a mode consisting of purely an upslope unicellular roll. Three-dimensional benchmarks for variable-density simulators are currently (2009) lacking and comparison of simulation results with this transition locus provides an unambiguous means to test the ability of such simulators to represent steady-state unstable 3D variable-density physics.
Liu, Hui; Li, Yingzi; Zhang, Yingxu; Chen, Yifu; Song, Zihang; Wang, Zhenyu; Zhang, Suoxin; Qian, Jianqiang
2018-01-01
Proportional-integral-derivative (PID) parameters play a vital role in the imaging process of an atomic force microscope (AFM). Traditional parameter tuning methods require a lot of manpower, and it is difficult to set PID parameters in unattended working environments. In this manuscript, an intelligent tuning method of PID parameters based on iterative learning control is proposed to self-adjust PID parameters of the AFM according to the sample topography. This method learns the topography by repeated line scanning until convergence before normal scanning, collecting the PID controller output and the tracking error, which are then used to calculate the proper PID parameters. Subsequently, the appropriate PID parameters are obtained by a fitting method and then applied to the normal scanning process. The feasibility of the method is demonstrated by the convergence analysis. Simulations and experimental results indicate that the proposed method can intelligently tune PID parameters of the AFM for imaging different topographies and thus achieve good tracking performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
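A greatly simplified stand-in for the repeated-line-scan tuning loop is sketched below: a toy first-order z-actuator under PI control rescans the same topography line, and the gains are enlarged between repetitions while the RMS tracking error keeps improving. The plant model, gain-update rule, and numbers are assumptions for illustration, not the iterative learning law of the paper.

```python
import numpy as np

def scan_line(topo, kp, ki, dt=1e-4, tau=5e-3):
    """One simulated line scan: a first-order z-actuator driven by a PI controller
    tracking the sample topography; returns the tracking error per sample."""
    z, integ = 0.0, 0.0
    err = np.zeros_like(topo)
    for k, h in enumerate(topo):
        e = h - z
        integ += e * dt
        u = kp * e + ki * integ
        z += dt * (u - z) / tau          # toy actuator dynamics
        err[k] = e
    return err

def iterative_tuning(topo, kp=0.5, ki=50.0, gain_step=1.3, max_iters=15, tol=1e-3):
    """Repeat line scans and enlarge the gains while the RMS error keeps dropping;
    returns the last gains that still improved tracking."""
    best = (kp, ki, np.sqrt(np.mean(scan_line(topo, kp, ki) ** 2)))
    for _ in range(max_iters):
        kp, ki = kp * gain_step, ki * gain_step
        rms = np.sqrt(np.mean(scan_line(topo, kp, ki) ** 2))
        if best[2] - rms < tol:          # no meaningful improvement: keep previous gains
            break
        best = (kp, ki, rms)
    return best

x = np.linspace(0, 4 * np.pi, 2000)
topography = 5.0 + np.sin(x) + 0.3 * np.sign(np.sin(3 * x))   # toy sample profile
print(iterative_tuning(topography))
```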
Indoor Modelling Benchmark for 3D Geometry Extraction
NASA Astrophysics Data System (ADS)
Thomson, C.; Boehm, J.
2014-06-01
A combination of faster, cheaper and more accurate hardware, more sophisticated software, and greater industry acceptance have all laid the foundations for an increased desire for accurate 3D parametric models of buildings. Pointclouds are the data source of choice currently with static terrestrial laser scanning the predominant tool for large, dense volume measurement. The current importance of pointclouds as the primary source of real world representation is endorsed by CAD software vendor acquisitions of pointcloud engines in 2011. Both the capture and modelling of indoor environments require great effort in time by the operator (and therefore cost). Automation is seen as a way to aid this by reducing the workload of the user and some commercial packages have appeared that provide automation to some degree. In the data capture phase, advances in indoor mobile mapping systems are speeding up the process, albeit currently with a reduction in accuracy. As a result this paper presents freely accessible pointcloud datasets of two typical areas of a building each captured with two different capture methods and each with an accurate wholly manually created model. These datasets are provided as a benchmark for the research community to gauge the performance and improvements of various techniques for indoor geometry extraction. With this in mind, non-proprietary, interoperable formats are provided such as E57 for the scans and IFC for the reference model. The datasets can be found at: http://indoor-bench.github.io/indoor-bench.
TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer
NASA Astrophysics Data System (ADS)
Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.
2017-07-01
Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1994-01-01
A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10⁹ ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.
A Benchmark Problem for Development of Autonomous Structural Modal Identification
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan
1996-01-01
This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.
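For reference, a minimal (non-autonomous) Eigensystem Realization Algorithm on a single free-decay channel is sketched below: Hankel matrices are formed from the response, truncated by SVD, and the eigenvalues of the realized state matrix yield natural frequencies and damping ratios. The Hankel dimensions and the toy two-mode signal are arbitrary choices for illustration.

```python
import numpy as np

def era_modes(y, dt, n_modes, rows=60):
    """Minimal Eigensystem Realization Algorithm on a single free-decay/impulse
    response; returns natural frequencies (Hz) and damping ratios."""
    cols = len(y) - rows - 1
    H0 = np.array([y[i:i + cols] for i in range(rows)])          # block Hankel matrix
    H1 = np.array([y[i + 1:i + 1 + cols] for i in range(rows)])  # time-shifted Hankel
    U, s, Vt = np.linalg.svd(H0, full_matrices=False)
    n = 2 * n_modes                                              # 2 states per mode
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s[:n]))
    A = S_inv_sqrt @ U[:, :n].T @ H1 @ Vt[:n].T @ S_inv_sqrt     # realized state matrix
    z = np.linalg.eigvals(A)
    lam = np.log(z) / dt                                         # continuous-time poles
    freqs = np.abs(lam.imag) / (2.0 * np.pi)
    damping = -lam.real / np.abs(lam)
    keep = lam.imag > 0                                          # one per conjugate pair
    order = np.argsort(freqs[keep])
    return freqs[keep][order], damping[keep][order]

# toy free-decay signal: two damped modes near 3 Hz and 8 Hz
dt = 0.01
t = np.arange(0, 6, dt)
y = (np.exp(-0.3 * t) * np.sin(2 * np.pi * 3 * t)
     + 0.6 * np.exp(-0.8 * t) * np.sin(2 * np.pi * 8 * t))
print(era_modes(y, dt, n_modes=2))
```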
Rao, Harsha L; Addepalli, Uday K; Yadav, Ravi K; Senthil, Sirisha; Choudhari, Nikhil S; Garudadri, Chandra S
2014-03-01
To evaluate the effect of scan quality on the diagnostic accuracies of optic nerve head (ONH), retinal nerve fiber layer (RNFL), and ganglion cell complex (GCC) parameters of spectral-domain optical coherence tomography (SD OCT) in glaucoma. Cross-sectional study. Two hundred fifty-two eyes of 183 control subjects (mean deviation [MD]: -1.84 dB) and 207 eyes of 159 glaucoma patients (MD: -7.31 dB) underwent ONH, RNFL, and GCC scanning with SD OCT. Scan quality of SD OCT images was based on signal strength index (SSI) values. Influence of SSI on diagnostic accuracy of SD OCT was evaluated by receiver operating characteristic (ROC) regression. Diagnostic accuracies of all SD OCT parameters were better when the SSI values were higher. This effect was statistically significant (P < .05) for ONH and RNFL but not for GCC parameters. In mild glaucoma (MD of -5 dB), area under ROC curve (AUC) for rim area, average RNFL thickness, and average GCC thickness parameters improved from 0.651, 0.678, and 0.726, respectively, at an SSI value of 30 to 0.873, 0.962, and 0.886, respectively, at an SSI of 70. AUCs of the same parameters in advanced glaucoma (MD of -15 dB) improved from 0.747, 0.890, and 0.873, respectively, at an SSI value of 30 to 0.922, 0.994, and 0.959, respectively, at an SSI of 70. Diagnostic accuracies of SD OCT parameters in glaucoma were significantly influenced by the scan quality even when the SSI values were within the manufacturer-recommended limits. These results should be considered while interpreting the SD OCT scans for glaucoma. Copyright © 2014 Elsevier Inc. All rights reserved.
Research on IoT-based water environment benchmark data acquisition management
NASA Astrophysics Data System (ADS)
Yan, Bai; Xue, Bai; Ling, Lin; Jin, Huang; Ren, Liu
2017-11-01
Over more than 30 years of reform and opening up, China's economy has developed at full speed, but this rapid growth is constrained by resource exhaustion and environmental pollution. Green, sustainable development has become a common goal of all humans. As part of environmental resources, water resources face problems such as pollution and shortage, which hinder sustainable development. The top priority in water resources protection and research is to manage the basic data on water resources, which form the cornerstone and scientific foundation of water environment management. Based on studies of the aquatic organisms in the Yangtze River Basin, the Yellow River Basin, the Liaohe River Basin and the 5 lake areas, this paper puts forward an IoT-based water environment benchmark data management platform that converts measured parameters into electrical signals via chemical probe identification and then sends the benchmark test data of the water environment to node servers. The management platform will provide data and theoretical support for environmental chemistry, toxicology, ecology, etc., promote research in the environmental sciences, lay a solid foundation for comprehensive and systematic research on China's regional environmental characteristics, biotoxicity effects and environment criteria, and provide objective data for compiling water environment benchmark standards.
Benchmarking a Soil Moisture Data Assimilation System for Agricultural Drought Monitoring
NASA Technical Reports Server (NTRS)
Hun, Eunjin; Crow, Wade T.; Holmes, Thomas; Bolten, John
2014-01-01
Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this need, this paper evaluates an LDAS for agricultural drought monitoring by benchmarking individual components of the system (i.e., a satellite soil moisture retrieval algorithm, a soil water balance model and a sequential data assimilation filter) against a series of linear models which perform the same function (i.e., have the same basic input-output structure) as the full system component. Benchmarking is based on the calculation of the lagged rank cross-correlation between the normalized difference vegetation index (NDVI) and soil moisture estimates acquired for various components of the system. Lagged soil moisture-NDVI correlations obtained using individual LDAS components versus their linear analogs reveal the degree to which non-linearities and/or complexities contained within each component actually contribute to the performance of the LDAS system as a whole. Here, a particular system based on surface soil moisture retrievals from the Land Parameter Retrieval Model (LPRM), a two-layer Palmer soil water balance model and an Ensemble Kalman filter (EnKF) is benchmarked. Results suggest significant room for improvement in each component of the system.
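A minimal sketch of the lagged rank cross-correlation metric described above is given below, assuming two aligned time series with soil moisture leading NDVI; the series, lag range, and the synthetic three-step delay are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.stats import spearmanr

def lagged_rank_correlation(soil_moisture, ndvi, max_lag=8):
    """Spearman rank correlation between soil moisture and NDVI shifted
    forward by 0..max_lag steps (i.e. soil moisture leading NDVI)."""
    rho = {}
    for lag in range(max_lag + 1):
        sm = soil_moisture[:-lag] if lag else soil_moisture
        nd = ndvi[lag:]
        rho[lag], _ = spearmanr(sm, nd)
    return rho

# illustrative synthetic series: NDVI responds to soil moisture with a 3-step delay
rng = np.random.default_rng(1)
sm = rng.normal(size=200)
ndvi = np.concatenate([np.zeros(3), sm[:-3]]) + 0.5 * rng.normal(size=200)
print(lagged_rank_correlation(sm, ndvi))
```

The lag at which the correlation peaks indicates how strongly (and how quickly) a given soil moisture estimate anticipates the vegetation response.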
Evaluation of the Pool Critical Assembly Benchmark with Explicitly-Modeled Geometry using MCNP6
Kulesza, Joel A.; Martz, Roger Lee
2017-03-01
Although the Oak Ridge National Laboratory (ORNL) Pool Critical Assembly (PCA) pressure vessel wall benchmark facility (PVWBF) is one of the most widely used benchmarks for qualifying light water reactor (LWR) radiation transport methods and data, no benchmark calculation of it using MCNP6 with explicitly modeled core geometry exists. As such, this paper provides results for such an analysis. First, a criticality calculation is used to construct the fixed source term. Next, ADVANTG-generated variance reduction parameters are used within the final MCNP6 fixed source calculations. These calculations provide unadjusted dosimetry results using three sets of dosimetry reaction cross sections of varying ages (those packaged with MCNP6, from the IRDF-2002 multi-group library, and from the ACE-formatted IRDFF v1.05 library). These results are then compared to two different sets of measured reaction rates. The comparison agrees in an overall sense within 2% and on a specific reaction- and dosimetry-location basis within 5%. Except for the neptunium dosimetry, the individual foil raw calculation-to-experiment comparisons usually agree within 10% but are typically greater than unity. Finally, in the course of developing these calculations, geometry that has previously not been completely specified is provided herein for the convenience of future analysts.
[Variables determining the amount of care for very preterm neonates: the concept of medical stance].
Burguet, A; Menget, A; Chary-Tardy, A-C; Savajols, E; Abed, N; Thiriez, G
2014-02-01
To compare the amount of medical intervention on very preterm neonates (24-31 weeks of gestation) in two French university tertiary care centers, one of which is involved in a Neonatal Developmental Care program. A secondary objective is to assess whether this difference in medical intervention can be linked to a difference in mortality and morbidity rates. We prospectively included all live-born very preterm neonates free of lethal malformations delivered in these two centers between 2006 and 2010. These inclusion criteria were met by 1286 patients, for whom we compared the rates of five selected medical interventions: birth by caesarean section, intubation in the delivery room, surfactant therapy, pharmacological treatment of patent ductus arteriosus, and red blood cell transfusion. The rates of the five medical interventions were systematically lower in the center involved in Neonatal Developmental Care. There was no significant difference between the two centers in survival at discharge without severe cerebral ultrasound scan abnormalities. There were, however, significantly higher rates of bronchopulmonary dysplasia and nosocomial sepsis and longer hospital stays when the patients were not involved in a Neonatal Developmental Care program. This benchmarking study shows that in France, in the first decade of the 21st century, there are as many ways to handle very preterm neonates as there are centers in which they are born. This brings to light the concept of medical stance, which is the general care approach prior to the treatment itself. This medical stance creates the overall framework for the staff's decision-making regarding neonate care. The different parameters structuring medical stance are discussed. Moreover, this study raises the problematic issue of the aftermath of benchmarking studies when the conclusion is that morbidity increases where the care approach leads to more interventions. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, S; Fujimoto, R
Purpose: The purpose was to demonstrate a developed acceleration technique for dose optimization and to investigate its applicability to the optimization process in a treatment planning system (TPS) for proton therapy. Methods: In the developed technique, the dose matrix is divided into two parts, main and halo, based on beam sizes. The boundary between the two parts varies with the beam energy and water-equivalent depth, using the beam size as a single threshold parameter. The optimization is executed with two levels of iteration. In the inner loop, doses from the main part are updated, whereas doses from the halo part remain constant. In the outer loop, the doses from the halo part are recalculated. We implemented this technique in the optimization process of the TPS and investigated, in benchmarks, the dependence of the speedup effect on the target volume and the applicability to worst-case optimization (WCO). Results: We created irradiation plans for various cubic targets and measured the optimization time while varying the target volume. The speedup effect improved as the target volume increased, and the calculation speed increased by a factor of six for a 1000 cm³ target. An IMPT plan for the RTOG benchmark phantom was created in consideration of ±3.5% range uncertainties using the WCO. Beams were irradiated at 0, 45, and 315 degrees. The target's prescribed dose and the OAR's Dmax were set to 3 Gy and 1.5 Gy, respectively. Using the developed technique, the calculation speed increased by a factor of 1.5. Meanwhile, no significant difference in the calculated DVHs was found before and after incorporating the technique into the WCO. Conclusion: The developed technique could be adapted to the TPS's optimization. The technique was particularly effective for large target cases.
Optical Quality of High-Power Laser Beams in Lenses
2008-10-31
M² ≈ 1 after the third collimating lens. This low-power limit has been successfully benchmarked against the ZEMAX optical design code [11] (ZEMAX Development Corporation, http://www.zemax.com). Table 1: Thermal and optical parameters for BK7 and UV-grade fused silica.
CT dose reduction in children.
Vock, Peter
2005-11-01
Worldwide, the number of CT studies in children and the radiation exposure from CT are increasing. The same energy dose has a greater biological impact in children than in adults, and scan parameters have to be adapted to the smaller diameter of the juvenile body. Based on seven rules, a practical approach to paediatric CT is shown: justification and patient preparation are important steps before scanning, and they differ from the preparation of adult patients. The subsequent choice of scan parameters aims at obtaining the minimal signal-to-noise ratio and volume coverage needed in a specific medical situation. Exposure can be divided into two aspects: the CT dose index, determining energy deposition per rotation, and the dose-length product (DLP), determining the volume dose. DLP closely parallels the effective dose, the best parameter of the biological impact. Modern scanners offer dose modulation to locally minimise exposure while maintaining image quality. Beyond the selection of the physical parameters, the dose can be kept low by scanning the minimal length of the body and by avoiding any non-qualified repeated scanning of parts of the body. Following these rules, paediatric CT examinations of good quality can be obtained at a reasonable cost in radiation exposure.
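The dose bookkeeping mentioned above (DLP as the product of the volume CT dose index and scan length, with the effective dose approximated from DLP) can be sketched as follows; the conversion coefficient k is age- and body-region-dependent, and the value used here is only a placeholder assumption.

```python
def ct_dose_summary(ctdi_vol_mGy, scan_length_cm, k_mSv_per_mGy_cm):
    """Volume-dose bookkeeping for a CT scan.

    DLP (mGy*cm) = CTDIvol (mGy) * scan length (cm); the effective dose (mSv)
    is approximated as k * DLP, where k depends on body region and patient age.
    """
    dlp = ctdi_vol_mGy * scan_length_cm
    effective_dose = k_mSv_per_mGy_cm * dlp
    return dlp, effective_dose

# illustrative numbers only -- k must be taken from age/region-specific tables
dlp, e = ct_dose_summary(ctdi_vol_mGy=3.0, scan_length_cm=15.0, k_mSv_per_mGy_cm=0.02)
print(f"DLP = {dlp:.1f} mGy*cm, effective dose ~ {e:.2f} mSv")
```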
On the consistency among different approaches for nuclear track scanning and data processing
NASA Astrophysics Data System (ADS)
Inozemtsev, K. O.; Kushin, V. V.; Kodaira, S.; Shurshakov, V. A.
2018-04-01
The article describes various approaches to space radiation track measurement using a CR-39™ detector (Tastrak). The results of comparing different methods for track scanning and data processing are presented. Basic algorithms for the determination of track parameters are described. Each approach involves an individual set of measured track parameters. For two of the sets, scanning in the plane of the detector surface is sufficient (2D measurement), while the third set requires scanning in an additional projection (3D measurement). An experimental comparison of the considered techniques was made using accelerated Ar, Fe and Kr heavy ions.
Dieringer, Matthias A.; Deimling, Michael; Santoro, Davide; Wuerfel, Jens; Madai, Vince I.; Sobesky, Jan; von Knobelsdorff-Brenkenhoff, Florian; Schulz-Menger, Jeanette; Niendorf, Thoralf
2014-01-01
Introduction: Visual but subjective reading of longitudinal relaxation time (T1) weighted magnetic resonance images is commonly used for the detection of brain pathologies. For this non-quantitative measure, diagnostic quality depends on hardware configuration, imaging parameters, radio frequency transmission field (B1+) uniformity, as well as observer experience. Parametric quantification of the tissue T1 relaxation parameter offsets the propensity for these effects, but is typically time consuming. For this reason, this study examines the feasibility of rapid 2D T1 quantification using a variable flip angles (VFA) approach at magnetic field strengths of 1.5 Tesla, 3 Tesla, and 7 Tesla. These efforts include validation in phantom experiments and application for brain T1 mapping. Methods: T1 quantification included simulations of the Bloch equations to correct for slice profile imperfections, and a correction for B1+. Fast gradient echo acquisitions were conducted using three adjusted flip angles for the proposed T1 quantification approach that was benchmarked against slice profile uncorrected 2D VFA and an inversion-recovery spin-echo based reference method. Brain T1 mapping was performed in six healthy subjects, one multiple sclerosis patient, and one stroke patient. Results: Phantom experiments showed a mean T1 estimation error of (-63±1.5)% for slice profile uncorrected 2D VFA and (0.2±1.4)% for the proposed approach compared to the reference method. Scan time for single slice T1 mapping including B1+ mapping could be reduced to 5 seconds using an in-plane resolution of (2×2) mm², which equals a scan time reduction of more than 99% compared to the reference method. Conclusion: Our results demonstrate that rapid 2D T1 quantification using a variable flip angle approach is feasible at 1.5T/3T/7T. It represents a valuable alternative for rapid T1 mapping due to the gain in speed versus conventional approaches. This progress may serve to enhance the capabilities of parametric MR based lesion detection and brain tissue characterization. PMID:24621588
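For orientation, the sketch below shows the textbook variable-flip-angle (DESPOT1) linearization that such rapid T1 mapping builds on; it omits the slice-profile and B1+ corrections described in the study, and the TR, flip angles, and synthetic T1 value are assumptions.

```python
import numpy as np

def spgr_signal(t1_ms, tr_ms, flip_deg, m0=1.0):
    """Steady-state spoiled gradient-echo signal for given T1, TR and flip angle."""
    e1 = np.exp(-tr_ms / t1_ms)
    a = np.deg2rad(flip_deg)
    return m0 * np.sin(a) * (1.0 - e1) / (1.0 - e1 * np.cos(a))

def vfa_t1(signals, flip_deg, tr_ms):
    """Variable-flip-angle T1 fit: S/sin(a) = E1 * S/tan(a) + M0 (1 - E1)."""
    a = np.deg2rad(np.asarray(flip_deg, float))
    s = np.asarray(signals, float)
    y, x = s / np.sin(a), s / np.tan(a)
    e1, _ = np.polyfit(x, y, 1)        # slope of the linearised signal equation is E1
    return -tr_ms / np.log(e1)

flips = [3.0, 10.0, 17.0]              # three flip angles, as in a three-point VFA protocol
tr = 5.0                               # ms, illustrative
sig = [spgr_signal(1000.0, tr, f) for f in flips]
print(f"recovered T1 = {vfa_t1(sig, flips, tr):.1f} ms")   # ~1000 ms
```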
4D Near Real-Time Environmental Monitoring Using Highly Temporal LiDAR
NASA Astrophysics Data System (ADS)
Höfle, Bernhard; Canli, Ekrem; Schmitz, Evelyn; Crommelinck, Sophie; Hoffmeister, Dirk; Glade, Thomas
2016-04-01
The last decade has witnessed extensive application of 3D environmental monitoring with LiDAR technology, also referred to as laser scanning. Although several automatic methods have been developed to extract environmental parameters from LiDAR point clouds, little research has focused on highly multitemporal, near real-time LiDAR (4D-LiDAR) for environmental monitoring. 4D-LiDAR has large potential for landscape objects with high and varying rates of change (e.g. plant growth) and for phenomena with sudden, unpredictable changes (e.g. geomorphological processes). In this presentation we will report on the most recent findings of the research projects 4DEMON (http://uni-heidelberg.de/4demon) and NoeSLIDE (https://geomorph.univie.ac.at/forschung/projekte/aktuell/noeslide/). The method development in both projects is based on two real-world use cases: i) surface parameter derivation for agricultural crops (e.g. crop height) and ii) change detection of landslides. Both projects exploit the "full history" contained in the LiDAR point cloud time series. One crucial initial step of 4D-LiDAR analysis is the co-registration over time, 3D-georeferencing and time-dependent quality assessment of the LiDAR point cloud time series. Due to the large number of datasets (e.g. one full LiDAR scan per day), the procedure needs to be performed fully automatically. Furthermore, the online near real-time 4D monitoring system requires triggers that can detect removal or movement of the tie reflectors (used for co-registration) or of the scanner itself. This guarantees long-term data acquisition with high quality. We will present results from a georeferencing experiment for 4D-LiDAR monitoring, which benchmarks co-registration, 3D-georeferencing and fully automatic detection of events (e.g. removal/movement of reflectors or scanner). Second, we will show empirical findings from an ongoing permanent LiDAR observation of a landslide (Gresten, Austria) and an agricultural maize crop stand (Heidelberg, Germany). This research demonstrates the potential and also the limitations of fully automated, near real-time 4D-LiDAR monitoring in the geosciences.
Dieringer, Matthias A; Deimling, Michael; Santoro, Davide; Wuerfel, Jens; Madai, Vince I; Sobesky, Jan; von Knobelsdorff-Brenkenhoff, Florian; Schulz-Menger, Jeanette; Niendorf, Thoralf
2014-01-01
Visual but subjective reading of longitudinal relaxation time (T1) weighted magnetic resonance images is commonly used for the detection of brain pathologies. For this non-quantitative measure, diagnostic quality depends on hardware configuration, imaging parameters, radio frequency transmission field (B1+) uniformity, as well as observer experience. Parametric quantification of the tissue T1 relaxation parameter offsets the propensity for these effects, but is typically time consuming. For this reason, this study examines the feasibility of rapid 2D T1 quantification using a variable flip angles (VFA) approach at magnetic field strengths of 1.5 Tesla, 3 Tesla, and 7 Tesla. These efforts include validation in phantom experiments and application for brain T1 mapping. T1 quantification included simulations of the Bloch equations to correct for slice profile imperfections, and a correction for B1+. Fast gradient echo acquisitions were conducted using three adjusted flip angles for the proposed T1 quantification approach that was benchmarked against slice profile uncorrected 2D VFA and an inversion-recovery spin-echo based reference method. Brain T1 mapping was performed in six healthy subjects, one multiple sclerosis patient, and one stroke patient. Phantom experiments showed a mean T1 estimation error of (-63±1.5)% for slice profile uncorrected 2D VFA and (0.2±1.4)% for the proposed approach compared to the reference method. Scan time for single slice T1 mapping including B1+ mapping could be reduced to 5 seconds using an in-plane resolution of (2×2) mm2, which equals a scan time reduction of more than 99% compared to the reference method. Our results demonstrate that rapid 2D T1 quantification using a variable flip angle approach is feasible at 1.5T/3T/7T. It represents a valuable alternative for rapid T1 mapping due to the gain in speed versus conventional approaches. This progress may serve to enhance the capabilities of parametric MR based lesion detection and brain tissue characterization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mollerach, R.; Leszczynski, F.; Fink, J.
2006-07-01
In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which has been progressing slowly during the last ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of calculation methods and models (cell, supercell and reactor) was recently carried out, covering cell, supercell (control rod) and core calculations. As a validation of the new models, some benchmark comparisons were made against Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes against MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)
2011-01-01
Background A clinical study was conducted to determine the intra and inter-rater reliability of digital scanning and the neutral suspension casting technique to measure six foot parameters. The neutral suspension casting technique is a commonly utilised method for obtaining a negative impression of the foot prior to orthotic fabrication. Digital scanning offers an alternative to the traditional plaster of Paris techniques. Methods Twenty one healthy participants volunteered to take part in the study. Six casts and six digital scans were obtained from each participant by two raters of differing clinical experience. The foot parameters chosen for investigation were cast length (mm), forefoot width (mm), rearfoot width (mm), medial arch height (mm), lateral arch height (mm) and forefoot to rearfoot alignment (degrees). Intraclass correlation coefficients (ICC) with 95% confidence intervals (CI) were calculated to determine the intra and inter-rater reliability. Measurement error was assessed through the calculation of the standard error of the measurement (SEM) and smallest real difference (SRD). Results ICC values for all foot parameters using digital scanning ranged between 0.81-0.99 for both intra and inter-rater reliability. For neutral suspension casting technique inter-rater reliability values ranged from 0.57-0.99 and intra-rater reliability values ranging from 0.36-0.99 for rater 1 and 0.49-0.99 for rater 2. Conclusions The findings of this study indicate that digital scanning is a reliable technique, irrespective of clinical experience, with reduced measurement variability in all foot parameters investigated when compared to neutral suspension casting. PMID:21375757
Information-Theoretic Benchmarking of Land Surface Models
NASA Astrophysics Data System (ADS)
Nearing, Grey; Mocko, David; Kumar, Sujay; Peters-Lidard, Christa; Xia, Youlong
2016-04-01
Benchmarking is a type of model evaluation that compares model performance against a baseline metric that is derived, typically, from a different existing model. Statistical benchmarking was used to qualitatively show that land surface models do not fully utilize information in boundary conditions [1] several years before Gong et al. [2] discovered the particular type of benchmark that makes it possible to *quantify* the amount of information lost by an incorrect or imperfect model structure. This theoretical development laid the foundation for a formal theory of model benchmarking [3]. We here extend that theory to separate uncertainty contributions from the three major components of dynamical systems models [4]: model structures, model parameters, and boundary conditions, the last of which describe the time-dependent details of each prediction scenario. The key to this new development is the use of large-sample [5] data sets that span multiple soil types, climates, and biomes, which allows us to segregate uncertainty due to parameters from the two other sources. The benefit of this approach for uncertainty quantification and segregation is that it does not rely on Bayesian priors (although it is strictly coherent with Bayes' theorem and with probability theory), and therefore the partitioning of uncertainty into different components is *not* dependent on any a priori assumptions. We apply this methodology to assess the information use efficiency of the four land surface models that comprise the North American Land Data Assimilation System (Noah, Mosaic, SAC-SMA, and VIC). Specifically, we looked at the ability of these models to estimate soil moisture and latent heat fluxes. We found that in the case of soil moisture, about 25% of the net information loss was from boundary conditions, around 45% was from model parameters, and 30-40% was from the model structures. In the case of latent heat flux, boundary conditions contributed about 50% of net uncertainty, and model structures contributed about 40%. There was relatively little difference between the different models. 1. G. Abramowitz, R. Leuning, M. Clark, A. Pitman, Evaluating the performance of land surface models. Journal of Climate 21, (2008). 2. W. Gong, H. V. Gupta, D. Yang, K. Sricharan, A. O. Hero, Estimating Epistemic & Aleatory Uncertainties During Hydrologic Modeling: An Information Theoretic Approach. Water Resources Research 49, 2253-2273 (2013). 3. G. S. Nearing, H. V. Gupta, The quantity and quality of information in hydrologic models. Water Resources Research 51, 524-538 (2015). 4. H. V. Gupta, G. S. Nearing, Using models and data to learn: A systems theoretic perspective on the future of hydrological science. Water Resources Research 50(6), 5351-5359 (2014). 5. H. V. Gupta et al., Large-sample hydrology: a need to balance depth with breadth. Hydrology and Earth System Sciences Discussions 10, 9147-9189 (2013).
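One crude way to operationalize this kind of information-theoretic benchmarking is sketched below: the mutual information between forcing and observations bounds what any model driven by that forcing could extract, and the mutual information between model output and observations measures what a given model actually extracted. This is a heavy simplification of the cited framework (it does not partition uncertainty across structure, parameters, and boundary conditions), and the estimator, variable names, and synthetic data are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def information_use(forcing, model_output, observed):
    """Crude benchmark: information available in the forcing about the
    observations vs. information actually captured by the model output."""
    available = mutual_info_regression(forcing.reshape(-1, 1), observed)[0]   # nats
    used = mutual_info_regression(model_output.reshape(-1, 1), observed)[0]   # nats
    return available, used, max(available - used, 0.0)                        # net loss

# synthetic illustration: the model uses the forcing, but imperfectly
rng = np.random.default_rng(2)
forcing = rng.normal(size=2000)
observed = np.tanh(forcing) + 0.1 * rng.normal(size=2000)
model_output = 0.8 * forcing + 0.3 * rng.normal(size=2000)
print(information_use(forcing, model_output, observed))
```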
Optical benchmarking of security document readers for automated border control
NASA Astrophysics Data System (ADS)
Valentín, Kristián; Wild, Peter; Štolc, Svorad; Daubner, Franz; Clabian, Markus
2016-10-01
Authentication and optical verification of travel documents upon crossing borders is of utmost importance for national security. Understanding the workflow and the different approaches to ICAO 9303 travel document scanning in passport readers, as well as highlighting normalization issues and designing new methods to achieve better harmonization across inspection devices, are key steps for the development of more effective and efficient next-generation passport inspection. This paper presents a survey of state-of-the-art document inspection systems, showcasing results of a document reader challenge investigating 9 devices with regard to their optical characteristics.
Polarization response of RHIC electron lens lattices
Ranjbar, V. H.; Méot, F.; Bai, M.; ...
2016-10-10
Depolarization response for a system of two orthogonal snakes at irrational tunes is studied in depth using lattice-independent spin integration. In particular, we consider the effect of overlapping spin resonances in this system, to understand the impact of the phase, tune, relative location and threshold strengths of the spin resonances. Furthermore, these results are benchmarked and compared to two-dimensional direct tracking results for the RHIC e-lens lattice and the standard lattice. We then consider the effect of longitudinal motion via chromatic scans using direct six-dimensional lattice tracking.
Predictive Trip Detection for Nuclear Power Plants
NASA Astrophysics Data System (ADS)
Rankin, Drew J.; Jiang, Jin
2016-08-01
This paper investigates the use of a Kalman filter (KF) to predict, within the shutdown system (SDS) of a nuclear power plant (NPP), whether safety parameter measurements have reached a trip set-point. In addition, least squares (LS) estimation compensates for prediction error due to system-model mismatch. The motivation behind predictive shutdown is to reduce the amount of time between the occurrence of a fault or failure and the time of trip detection, referred to as the time-to-trip. These reductions in time-to-trip can ultimately lead to increases in safety and productivity margins. The proposed predictive SDS differs from conventional SDSs in that it compares point-predictions of the measurements, rather than sensor measurements, against trip set-points. The predictive SDS is validated through simulation and experiments for the steam generator water level safety parameter. Performance of the proposed predictive SDS is compared against a benchmark conventional SDS with respect to time-to-trip. In addition, this paper analyzes prediction uncertainty, as well as the conditions under which it is possible to achieve a reduced time-to-trip. Simulation results demonstrate that on average the predictive SDS reduces time-to-trip by an amount of time equal to the length of the prediction horizon and that the distribution of times-to-trip is approximately Gaussian. Experimental results reveal that a reduced time-to-trip can be achieved in a real-world system with unknown system-model mismatch and that the predictive SDS can be implemented with a scan time of under 100 ms. Thus, this paper is a proof of concept for KF/LS-based predictive trip detection.
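A minimal sketch of the point-prediction idea follows: a Kalman filter state is propagated over the prediction horizon and a trip is flagged if the predicted safety parameter crosses the set-point. It omits the least-squares compensation for system-model mismatch described in the paper, and the level dynamics, set-point, and horizon are invented for illustration.

```python
import numpy as np

def kf_predict_trip(x, P, F, Q, setpoint, horizon):
    """Propagate the Kalman filter state forward over the prediction horizon
    (no measurement updates) and flag a trip if any predicted safety-parameter
    value crosses the set-point."""
    for _ in range(horizon):
        x = F @ x                 # state point-prediction
        P = F @ P @ F.T + Q       # prediction covariance (tracked for uncertainty)
        if x[0] <= setpoint:      # e.g. low steam-generator water level trip
            return True, x, P
    return False, x, P

# illustrative constant-trend level model: state = [level, level_rate]
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 1e-4 * np.eye(2)
x0 = np.array([2.0, -0.05])       # current filtered level and its trend
P0 = 1e-2 * np.eye(2)
print(kf_predict_trip(x0, P0, F, Q, setpoint=1.8, horizon=50))
```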
NASA Astrophysics Data System (ADS)
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
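As a point of reference for the kind of quantities involved, the sketch below computes plain local, gradient-based logarithmic sensitivities of a competitive Langmuir adsorption coverage. It is not the authors' non-parametric, correlation-aware methodology, and the equilibrium constants and partial pressures are made-up values.

```python
import numpy as np

def coverage_A(kA, kB, pA=0.5, pB=0.5):
    """Competitive Langmuir adsorption: coverage of species A on the surface."""
    return kA * pA / (1.0 + kA * pA + kB * pB)

def log_sensitivity(f, params, i, rel_step=1e-4):
    """Central-difference sensitivity of f with respect to ln(params[i])."""
    up, dn = list(params), list(params)
    up[i] *= 1.0 + rel_step
    dn[i] *= 1.0 - rel_step
    return (f(*up) - f(*dn)) / (2.0 * rel_step)

k = [2.0, 1.0]                     # illustrative equilibrium constants K_A, K_B
s = [log_sensitivity(coverage_A, k, i) for i in range(2)]
print(dict(zip(["dtheta_A/dlnK_A", "dtheta_A/dlnK_B"], s)))
```

When the parameters are correlated, as in the study, such indices would additionally have to be evaluated along (or conditioned on) the correlation structure rather than one parameter at a time.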
Dynamical sensitivity control of a single-spin quantum sensor.
Lazariev, Andrii; Arroyo-Camejo, Silvia; Rahane, Ganesh; Kavatamane, Vinaya Kumar; Balasubramanian, Gopalakrishnan
2017-07-26
The Nitrogen-Vacancy (NV) defect in diamond is a unique quantum system that offers precision sensing of nanoscale physical quantities at room temperature beyond the current state of the art. The benchmark parameters for nanoscale magnetometry applications are sensitivity, spectral resolution, and dynamic range. Under realistic conditions, NV sensors controlled by conventional sensing schemes suffer from limitations of these parameters. Here we experimentally show a new method called dynamical sensitivity control (DYSCO) that boosts the benchmark parameters and thus extends the practical applicability of the NV spin for nanoscale sensing. In contrast to conventional dynamical decoupling schemes, where π pulse trains toggle the spin precession abruptly, the DYSCO method allows for a smooth, analog modulation of the quantum probe's sensitivity. Our method decouples frequency selectivity and spectral resolution unconstrained over the bandwidth (1.85 MHz to 392 Hz in our experiments). Using DYSCO we demonstrate high-accuracy NV magnetometry without |2π| ambiguities, an enhancement of the dynamic range by a factor of 4 × 10³, and interrogation times exceeding 2 ms in off-the-shelf diamond. In a broader perspective, the DYSCO method provides a handle on the inherent dynamics of quantum systems, offering decisive advantages for NV centre based applications, notably in quantum information and single-molecule NMR/MRI.
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for a binary outcome was developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the Hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, constructing the likelihood function under the null hypothesis is an alternative, indirect way to identify the potential cluster, and the test statistic is the extreme value of this likelihood function. As in Kulldorff's methods, we adopt a Monte Carlo test for significance. Both methods are applied to detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. A simulation on independent benchmark data indicates that the test statistic based on the Hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise, Kulldorff's statistics are superior.
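A simplified sketch of a hypergeometric scan with a Monte Carlo significance test is shown below. The score used here (the negative null log-likelihood of each candidate window's case count) and the toy data are assumptions for illustration; they do not reproduce the authors' exact likelihood construction or Kulldorff's statistic.

```python
import numpy as np
from scipy.stats import hypergeom

def scan_statistic(case, windows):
    """For each candidate window, score how unlikely the observed case count is
    under random labelling of cases to locations (hypergeometric null); the
    statistic is the most extreme score over all windows."""
    N, C = len(case), int(case.sum())
    scores = []
    for w in windows:                       # w = boolean mask of locations in the window
        n, c = int(w.sum()), int(case[w].sum())
        scores.append(-hypergeom.logpmf(c, N, C, n))
    return max(scores)

def monte_carlo_pvalue(case, windows, n_sim=999, seed=3):
    """Significance by permuting case labels over locations."""
    rng = np.random.default_rng(seed)
    observed = scan_statistic(case, windows)
    null = [scan_statistic(rng.permutation(case), windows) for _ in range(n_sim)]
    return (1 + sum(s >= observed for s in null)) / (n_sim + 1)

# illustrative toy data: 100 locations with a cluster of cases in the first few
case = np.zeros(100, dtype=int)
case[:8] = 1
case[50:53] = 1
windows = [np.arange(100) < k for k in (5, 10, 20, 40)]   # nested candidate windows
print(monte_carlo_pvalue(case, windows))
```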
Effect of parameters on picosecond laser ablation of Cr12MoV cold work mold steel
NASA Astrophysics Data System (ADS)
Wu, Baoye; Liu, Peng; Zhang, Fei; Duan, Jun; Wang, Xizhao; Zeng, Xiaoyan
2018-01-01
Cr12MoV cold work mold steel, a difficult-to-machine material, is widely used in the mold and die industry. A picosecond pulse Nd:YVO4 laser at 1064 nm was used to conduct the study. The effects of operating parameters (i.e., laser fluence, scanning speed, hatched space and number of scans) on the ablation depth and quality of Cr12MoV were studied at a repetition rate of 20 MHz. The experimental results reveal that all four parameters affect the ablation depth significantly, whereas the surface roughness depends mainly on laser fluence or scanning speed and only secondarily on hatched space or number of scans. For laser fluence and scanning speed, three distinct surface morphologies were observed, transitioning from flat (Ra < 1.40 μm) to bumpy (Ra = 1.40-2.40 μm) and eventually to rough (Ra > 2.40 μm). For hatched space and number of scans, however, there is only a small bumpy and rough zone, or even no rough zone. Mechanisms including heat accumulation, plasma shielding and combustion reaction effects are proposed based on the ablation depth and processing morphology. By appropriate management of the laser fluence and scanning speed, a high ablation depth with low surface roughness can be obtained at small hatched space and a high number of scans.
Analysis of an infinite array of rectangular microstrip patches with idealized probe feeds
NASA Technical Reports Server (NTRS)
Pozar, D. M.; Schaubert, D. H.
1984-01-01
A solution is presented to the problem of an infinite array of microstrip patches fed by idealized current probes. The input reflection coefficient is calculated versus scan angle in an arbitrary scan plane, and the effects of substrate parameters and grid spacing are considered. It is pointed out that even when a Galerkin method is used the impedance matrix is not symmetric due to phasing through a unit cell, as required for scanning. The mechanism by which scan blindness can occur is discussed. Measurement results are presented for the reflection coefficient magnitude variation with angle for E-plane, H-plane, and D-plane scans, for various substrate parameters. Measured results from waveguide simulators are also presented, and the scan blindness phenomenon is observed and discussed in terms of forced surface waves and a modified grating lobe diagram.
Optimising μCT imaging of the middle and inner cat ear.
Seifert, H; Röher, U; Staszyk, C; Angrisani, N; Dziuba, D; Meyer-Lindenberg, A
2012-04-01
This study's aim was to determine the optimal scan parameters for imaging the middle and inner ear of the cat with micro-computed tomography (μCT). In addition, the study set out to assess whether adequate image quality can be obtained to use μCT in diagnostics and research on cat ears. For optimisation, μCT imaging of two cat skull preparations was performed using 36 different scanning protocols. The μCT scans were evaluated by four experienced experts with regard to image quality and detail detectability. By compiling a ranking of the results, the best possible scan parameters could be determined. From a third cat's skull, a μCT scan using these optimised scan parameters and a comparative clinical CT scan were acquired. Afterwards, histological specimens of the ears were produced and compared to the μCT images. The comparison shows that the osseous structures are depicted in detail. Although soft tissues cannot be differentiated, the osseous structures serve as a valuable spatial reference for the relevant nerves and muscles. Clinical CT can depict many anatomical structures that can also be seen on μCT images, but these appear much less sharp and detailed than with μCT. © 2011 Blackwell Verlag GmbH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Fontaine, M; Bradshaw, T; Kubicek, L
2014-06-15
Purpose: Regions of poor perfusion within tumors may be associated with higher hypoxic levels. This study aimed to test this hypothesis by comparing measurements of hypoxia from Cu-ATSM PET to vasculature kinetic parameters from DCE-CT kinetic analysis. Methods: Ten canine patients with sinonasal tumors received one Cu-ATSM PET/CT scan and three DCE-CT scans prior to treatment. Cu-ATSM PET/CT and DCE-CT scans were registered and resampled to matching voxel dimensions. Kinetic analysis was performed on the DCE-CT scans and, for each patient, the resulting kinetic parameter values from the three DCE-CT scans were averaged together. Cu-ATSM SUVs were spatially correlated (r_spatial) on a voxel-to-voxel basis against the following DCE-CT kinetic parameters: transit time (t_1), blood flow (F), vasculature fraction (v_1), and permeability (PS). In addition, whole-tumor comparisons were performed by correlating (r_ROI) the mean Cu-ATSM SUV (SUV_mean) with median kinetic parameter values. Results: The spatial correlations (r_spatial) were poor and ranged from -0.04 to 0.21 for all kinetic parameters. These low spatial correlations may be due to high variability in the DCE-CT kinetic parameter voxel values between scans. In our hypothesis, t_1 was expected to have a positive correlation, while F was expected to have a negative correlation to hypoxia. However, in whole-tumor analysis the opposite was found for both t_1 (r_ROI = -0.25) and F (r_ROI = 0.56). PS and v_1 may depict angiogenic responses to hypoxia, and both showed positive correlations to the Cu-ATSM SUV (r_ROI = 0.41 for PS and r_ROI = 0.57 for v_1). Conclusion: Low spatial correlations were found between Cu-ATSM uptake and DCE-CT vasculature parameters, implying that poor perfusion is not associated with higher hypoxic regions. Across patients, the most hypoxic tumors tended to have higher blood flow values, which is contrary to our initial hypothesis. Funding: R01 CA136927.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, Isabella B.; /Norfolk State U. /SLAC, SSRL
2006-01-04
X-ray fluorescence is being used to recover the text of an ancient Greek copy of Archimedes' work. The copy of Archimedes' text was erased with a weak acid and written over to make a prayer book in the Middle Ages. The ancient parchment, made of goat skin, carries some of Archimedes' most valuable writings. The ink in the text contains iron, which will fluoresce under x-ray radiation. My research project deals with the scanning and imaging process. The palimpsest is mounted on a stage that moves in a raster pattern. As the beam hits the parchment, a germanium detector detects the iron atoms and discriminates against other elements. Since the computer scans in both the forward and backward directions, it is imperative that each row of data lines up exactly on top of the next row. There are several parameters to consider when scanning the parchment: speed, count time, shutter time, x-number of points, and acceleration. Formulas were made to relate these parameters together. During the actual beam time of this project, the scanning was very slow going; it took 30 hours to scan half of a page. Using the formulas, the scientists doubled the distance and speed to scan the parchment faster; however, the grey-scale data were not lined up properly, causing the images to look blurred. My project was to find out why doubling the parameters caused blurred images, and to fix the problem if it is fixable.
Liu, Jianjun; Song, Rui; Cui, Mengmeng
2014-01-01
A novel approach to simulating hydromechanical coupling in pore-scale models of porous media is presented in this paper. Parameters of the sandstone samples, such as the stress-strain curve, Poisson's ratio, and permeability under different pore pressures and confining pressures, are measured at laboratory scale. A micro-CT scanner is employed to scan the samples for three-dimensional images, as input to construct the model. Accordingly, four physical models possessing the same pore and rock matrix characteristics as the natural sandstones are developed. Based on the micro-CT images, three-dimensional finite element models of both the rock matrix and the pore space are established on the MIMICS and ICEM software platform. The Navier-Stokes equation and an elastic constitutive equation are used as the mathematical model for the simulation. A hydromechanical coupling analysis in a pore-scale finite element model of porous media is performed with the ANSYS and CFX software. In this way, the permeability of the sandstone samples under different pore pressures and confining pressures is predicted. The simulation results agree well with the benchmark data. By reproducing the stress state underground, the accuracy of pore-scale permeability prediction for porous rock is improved. Consequently, the effects of pore pressure and confining pressure on permeability are revealed from the microscopic view.
Huang, Chao-Tsung; Wang, Yu-Wen; Huang, Li-Ren; Chin, Jui; Chen, Liang-Gee
2017-02-01
Digital refocusing involves a tradeoff between complexity and quality when using sparsely sampled light fields for low-storage applications. In this paper, we propose a fast, physically correct refocusing algorithm to address this issue in a twofold way. First, view interpolation is adopted to provide photorealistic quality at infocus-defocus hybrid boundaries. To address its conventionally high complexity, we devised a fast line-scan method specifically for refocusing, and its 1D kernel can be 30× faster than the benchmark View Synthesis Reference Software (VSRS)-1D-Fast. Second, we propose a block-based multi-rate processing flow for accelerating purely infocused or defocused regions, and a further 3-34× speedup can be achieved for high-resolution images. All candidate blocks of variable sizes can interpolate different numbers of rendered views and perform refocusing in different subsampled layers. To avoid visible aliasing and block artifacts, we determine these parameters and the simulated aperture filter through a localized filter response analysis using defocus blur statistics. The final quadtree block partitions are then optimized in terms of computation time. Extensive experimental results are provided to show the superior refocusing quality and fast computation speed. In particular, the run time is comparable with that of conventional single-image blurring, which causes serious boundary artifacts.
NASA Astrophysics Data System (ADS)
Magro, G.; Molinelli, S.; Mairani, A.; Mirandola, A.; Panizza, D.; Russo, S.; Ferrari, A.; Valvo, F.; Fossati, P.; Ciocca, M.
2015-09-01
This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo® TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus® chamber. An EBT3® film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different lateral spread modeling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases but very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.
Magro, G; Molinelli, S; Mairani, A; Mirandola, A; Panizza, D; Russo, S; Ferrari, A; Valvo, F; Fossati, P; Ciocca, M
2015-09-07
This study was performed to evaluate the accuracy of a commercial treatment planning system (TPS) in optimising proton pencil beam dose distributions for small targets of different sizes (5-30 mm side) located at increasing depths in water. The TPS analytical algorithm was benchmarked against experimental data and the FLUKA Monte Carlo (MC) code, previously validated for the selected beam-line. We tested the Siemens syngo(®) TPS plan optimisation module for water cubes, fixing the configurable parameters at clinical standards, with homogeneous target coverage to a 2 Gy (RBE) dose prescription as the unique goal. Plans were delivered and the dose at each volume centre was measured in water with a calibrated PTW Advanced Markus(®) chamber. An EBT3(®) film was also positioned at the phantom entrance window for the acquisition of 2D dose maps. Discrepancies between TPS-calculated and MC-simulated values were mainly due to the different lateral spread modeling and were found to be related to the field-to-spot size ratio. The accuracy of the TPS proved to be clinically acceptable in all cases but very small and shallow volumes. In this context, the use of MC to validate TPS results proved to be a reliable procedure for pre-treatment plan verification.
NASA Astrophysics Data System (ADS)
Pei, Youbin; Xiang, Nong; Shen, Wei; Hu, Youjun; Todo, Y.; Zhou, Deng; Huang, Juan
2018-05-01
Kinetic-MagnetoHydroDynamic (MHD) hybrid simulations are carried out to study fast ion driven toroidal Alfvén eigenmodes (TAEs) on the Experimental Advanced Superconducting Tokamak (EAST). The first part of this article presents the linear benchmark between two kinetic-MHD codes, namely MEGA and M3D-K, based on a realistic EAST equilibrium. Parameter scans show that the frequency and the growth rate of the TAE given by the two codes agree with each other. The second part of this article discusses the resonance interaction between the TAE and fast ions simulated by the MEGA code. The results show that the TAE exchanges energy with the co-current passing particles with the parallel velocity |v∥| ≈ V_A0/3 or |v∥| ≈ V_A0/5, where V_A0 is the Alfvén speed on the magnetic axis. The TAE destabilized by the counter-current passing ions is also analyzed and found to have a much smaller growth rate than the co-current ions driven TAE. One of the reasons for this is found to be that the overlapping region of the TAE spatial location and the counter-current ion orbits is narrow, and thus the wave-particle energy exchange is not efficient.
Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR) PARM tape user's guide
NASA Technical Reports Server (NTRS)
Han, D.; Gloersen, P.; Kim, S. T.; Fu, C. C.; Cebula, R. P.; Macmillan, D.
1992-01-01
The Scanning Multichannel Microwave Radiometer (SMMR) instrument, onboard the Nimbus-7 spacecraft, collected data from Oct. 1978 until Jun. 1986. The data were processed to physical parameter level products. Geophysical parameters retrieved include the following: sea-surface temperatures, sea-surface windspeed, total column water vapor, and sea-ice parameters. These products are stored on PARM-LO, PARM-SS, and PARM-30 tapes. The geophysical parameter retrieval algorithms and the quality of these products are described for the period between Nov. 1978 and Oct 1985. Additionally, data formats and data availability are included.
Benchmark Calibration Tests Completed for Stirling Convertor Heater Head Life Assessment
NASA Technical Reports Server (NTRS)
Krause, David L.; Halford, Gary R.; Bowman, Randy R.
2005-01-01
A major phase of benchmark testing has been completed at the NASA Glenn Research Center (http://www.nasa.gov/glenn/), where a critical component of the Stirling Radioisotope Generator (SRG) is undergoing extensive experimentation to aid the development of an analytical life-prediction methodology. Two special-purpose test rigs subjected SRG heater-head pressure-vessel test articles to accelerated creep conditions, using the standard design temperatures to stay within the wall material's operating creep-response regime, but increasing wall stresses up to 7 times over the design point. This resulted in well-controlled "ballooning" of the heater-head hot end. The test plan was developed to provide critical input to analytical parameters in a reasonable period of time.
An Effect Size Measure for Raju's Differential Functioning for Items and Tests
ERIC Educational Resources Information Center
Wright, Keith D.; Oshima, T. C.
2015-01-01
This study established an effect size measure for differential functioning for items and tests' noncompensatory differential item functioning (NCDIF). The Mantel-Haenszel parameter served as the benchmark for developing NCDIF's effect size measure for reporting moderate and large differential item functioning in test items. The effect size of…
Kohonen Self-Organizing Feature Maps as a Means to Benchmark College and University Websites
ERIC Educational Resources Information Center
Cooper, Cameron; Burns, Andrew
2007-01-01
Websites for colleges and universities have become the primary means for students to obtain information in the college search process. Consequently, institutions of higher education should target their websites toward prospective and current students' needs, interests, and tastes. Numerous parameters must be determined in creating a school website…
An Approach for Assessing Delamination Propagation Capabilities in Commercial Finite Element Codes
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2007-01-01
An approach for assessing the delamination propagation capabilities in commercial finite element codes is presented and demonstrated for one code. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. Good agreement between the load-displacement relationship obtained from the propagation analysis results and the benchmark results could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as may be expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.
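For context, mixed-mode delamination propagation in such analyses is commonly judged by comparing the total energy release rate against a mixed-mode fracture toughness; the sketch below uses the Benzeggagh-Kenane (B-K) criterion with generic property values, which are assumptions and not the inputs or exact failure criterion of the referenced study.

```python
def bk_failure_index(GI, GII, GIc, GIIc, eta):
    """Mixed-mode delamination failure index using the Benzeggagh-Kenane (B-K)
    criterion: propagation is predicted when GT / Gc >= 1."""
    GT = GI + GII                                   # total energy release rate
    mode_mixity = GII / GT if GT > 0.0 else 0.0     # shear fraction GII / GT
    Gc = GIc + (GIIc - GIc) * mode_mixity ** eta    # mixed-mode toughness
    return GT / Gc

# generic, illustrative toughness values only (energies in kJ/m^2)
print(bk_failure_index(GI=0.3, GII=0.2, GIc=0.5, GIIc=1.5, eta=2.0))
```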
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2008-01-01
An approach for assessing the delamination propagation simulation capabilities in commercial finite element codes is presented and demonstrated. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. The load-displacement relationship and the total strain energy obtained from the propagation analysis results and the benchmark results were compared and good agreements could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as was expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.
Fission barriers at the end of the chart of the nuclides
NASA Astrophysics Data System (ADS)
Möller, Peter; Sierk, Arnold J.; Ichikawa, Takatoshi; Iwamoto, Akira; Mumpower, Matthew
2015-02-01
We present calculated fission-barrier heights for 5239 nuclides for all nuclei between the proton and neutron drip lines with 171 ≤ A ≤ 330. The barriers are calculated in the macroscopic-microscopic finite-range liquid-drop model with a 2002 set of macroscopic-model parameters. The saddle-point energies are determined from potential-energy surfaces based on more than 5 000 000 different shapes, defined by five deformation parameters in the three-quadratic-surface shape parametrization: elongation, neck diameter, left-fragment spheroidal deformation, right-fragment spheroidal deformation, and nascent-fragment mass asymmetry. The energy of the ground state is determined by calculating the lowest-energy configuration in both the Nilsson perturbed-spheroid (ɛ) and the spherical-harmonic (β) parametrizations, including axially asymmetric deformations. The lower of the two results (correcting for zero-point motion) is defined as the ground-state energy. The effect of axial asymmetry on the inner barrier peak is calculated in the (ɛ,γ) parametrization. We have earlier benchmarked our calculated barrier heights to experimentally extracted barrier parameters and found average agreement to about 1 MeV for known data across the nuclear chart. Here we do additional benchmarks and investigate the qualitative and, when possible, quantitative agreement and/or consistency with data on β-delayed fission, isotope generation along prompt-neutron-capture chains in nuclear-weapons tests, and superheavy-element stability. These studies all indicate that the model is realistic at considerable distances in Z and N from the region of nuclei where its parameters were determined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Möller, Peter; Sierk, Arnold J.; Ichikawa, Takatoshi
We present calculated fission-barrier heights for 5239 nuclides for all nuclei between the proton and neutron drip lines with 171 ≤ A ≤ 330. The barriers are calculated in the macroscopic-microscopic finite-range liquid-drop model (FRLDM) with a 2002 set of macroscopic-model parameters. The saddle-point energies are determined from potential-energy surfaces based on more than five million different shapes, defined by five deformation parameters in the three-quadratic-surface shape parametrization: elongation, neck diameter, left-fragment spheroidal deformation, right-fragment spheroidal deformation, and nascent-fragment mass asymmetry. The energy of the ground state is determined by calculating the lowest-energy configuration in both the Nilsson perturbed-spheroid (ϵ) and the spherical-harmonic (β) parametrizations, including axially asymmetric deformations. The lower of the two results (correcting for zero-point motion) is defined as the ground-state energy. The effect of axial asymmetry on the inner barrier peak is calculated in the (ϵ,γ) parametrization. We have earlier benchmarked our calculated barrier heights to experimentally extracted barrier parameters and found average agreement to about one MeV for known data across the nuclear chart. Here we do additional benchmarks and investigate the qualitative and, when possible, quantitative agreement and/or consistency with data on β-delayed fission, isotope generation along prompt-neutron-capture chains in nuclear-weapons tests, and superheavy-element stability. In addition, these studies all indicate that the model is realistic at considerable distances in Z and N from the region of nuclei where its parameters were determined.
Learned Compact Local Feature Descriptor for Tls-Based Geodetic Monitoring of Natural Outdoor Scenes
NASA Astrophysics Data System (ADS)
Gojcic, Z.; Zhou, C.; Wieser, A.
2018-05-01
The advantages of terrestrial laser scanning (TLS) for geodetic monitoring of man-made and natural objects are not yet fully exploited. Herein we address one of the open challenges by proposing feature-based methods for the identification of corresponding points in point clouds of two or more epochs. We propose a learned compact feature descriptor tailored to point clouds of natural outdoor scenes obtained using TLS. We evaluate our method both on a benchmark data set and on a specially acquired outdoor dataset resembling a simplified monitoring scenario, where we successfully estimate 3D displacement vectors of a rock that was displaced between the scans. We show that the proposed descriptor has the capacity to generalize to unseen data and achieves state-of-the-art performance while being time-efficient at the matching step due to the low dimension of the descriptor.
MUSiC - Model-independent search for deviations from Standard Model predictions in CMS
NASA Astrophysics Data System (ADS)
Pieta, Holger
2010-02-01
We present an approach for a model-independent search in CMS. By systematically scanning the data for deviations from the Standard Model Monte Carlo expectations, such an analysis can help to understand the detector and tune event generators. By minimizing the theoretical bias, the analysis is furthermore sensitive to a wide range of models for new physics, including the countless models not yet thought of. After sorting the events into classes defined by their particle content (leptons, photons, jets and missing transverse energy), a minimally prejudiced scan is performed on a number of distributions. Advanced statistical methods are used to determine the significance of the deviating regions, rigorously taking systematic uncertainties into account. A number of benchmark scenarios, including common models of new physics and possible detector effects, have been used to gauge the power of such a method.
Roller-transducer scanning of wooden pallet parts for defect detection
Mohammed F. Kabir; Daniel L. Schmoldt; Mark E. Schafer
2001-01-01
Ultrasonic scanning experiments were conducted on two species of pallet deckboards using rolling transducers in a pitch-catch arrangement. Sound and unsound knots, cross grain, bark pockets, holes, splits, decay, and wane were characterized using several ultrasound parameters. Almost all parameters displayed sensitivity to defects distinctly from clear wood regions...
Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) Instrument Improvements
NASA Technical Reports Server (NTRS)
Dunagan, Stephen E.; Redemann, Jens; Chang, Cecilia; Dahlgren, Robert; Fahey, Lauren; Flynn, Connor; Johnson, Roy; Kacenelenbogen, Meloe; Leblanc, Samuel; Liss, Jordan;
2017-01-01
The Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR) combines airborne sun tracking and sky scanning with grating spectroscopy to improve knowledge of atmospheric constituents and their links to air pollution and climate. Hyper-spectral measurements of direct-beam solar irradiance provide retrievals of gas constituents, aerosol optical depth, and aerosol and thin cloud optical properties. Sky radiance measurements in the principal and almucantar planes enhance retrievals of aerosol absorption, aerosol type, and size mode distribution. Zenith radiance measurements are used to retrieve cloud properties and phase, which in turn are used to quantify the radiative transfer below cloud layers. These airborne measurements tighten the closure between satellite and ground-based measurements. In contrast to the Ames Airborne Tracking Sunphotometer (AATS-14) predecessor instrument, new technologies for each subsystem have been incorporated into 4STAR. In particular, 4STAR utilizes a modular sun-tracking/sky-scanning optical head with fiber optic signal transmission to rack-mounted spectrometers, permitting miniaturization of the external optical head, and spectrometer/detector configurations that may be tailored for specific scientific objectives. This paper discusses technical challenges relating to compact optical collector design, radiometric dynamic range and stability, and broad spectral coverage at high resolution. Test results benchmarking the performance of the instrument against the AATS-14 standard and emerging science requirements are presented.
Automatic Clustering Using FSDE-Forced Strategy Differential Evolution
NASA Astrophysics Data System (ADS)
Yasid, A.
2018-01-01
Clustering analysis is important in data mining for unsupervised data, because no adequate prior knowledge is available. One of the important tasks is defining the number of clusters without user involvement, which is known as automatic clustering. This study aims to determine the number of clusters automatically using forced strategy differential evolution (AC-FSDE). Two mutation parameters, namely a constant parameter and a variable parameter, are employed to boost differential evolution performance. Four well-known benchmark datasets were used to evaluate the algorithm. Moreover, the result is compared with other state-of-the-art automatic clustering methods. The experimental results show that AC-FSDE is better than, or competitive with, other existing automatic clustering algorithms.
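The sketch below illustrates the general idea of automatic clustering with differential evolution: each candidate solution encodes activation genes plus candidate cluster centres, and the silhouette score serves as the fitness. A plain DE/rand/1 scheme with a fixed mutation factor stands in for the paper's forced-strategy variant with constant and variable mutation parameters; the dataset and all settings are illustrative.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

def decode(ind, k_max, dim):
    """Split an individual into activation genes and the active cluster centres."""
    act = ind[:k_max]
    centers = ind[k_max:].reshape(k_max, dim)
    mask = act > 0.5
    if mask.sum() < 2:                       # force at least two active clusters
        mask[np.argsort(act)[-2:]] = True
    return centers[mask]

def fitness(ind, X, k_max):
    centers = decode(ind, k_max, X.shape[1])
    labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
    if len(np.unique(labels)) < 2:
        return -1.0
    return silhouette_score(X, labels)

def de_auto_cluster(X, k_max=10, pop=20, gens=40, F=0.5, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    lo = np.r_[np.zeros(k_max), np.tile(X.min(0), k_max)]
    hi = np.r_[np.ones(k_max), np.tile(X.max(0), k_max)]
    P = rng.uniform(lo, hi, size=(pop, lo.size))
    fit = np.array([fitness(p, X, k_max) for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            trial = np.where(rng.random(lo.size) < CR,
                             np.clip(a + F * (b - c), lo, hi), P[i])
            f = fitness(trial, X, k_max)
            if f > fit[i]:                   # greedy replacement
                P[i], fit[i] = trial, f
    return decode(P[np.argmax(fit)], k_max, dim)

X, _ = make_blobs(n_samples=300, centers=4, random_state=1)
print("estimated number of clusters:", len(de_auto_cluster(X)))
```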
A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian are obtained from the statistical information of the best individuals by a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher-dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA.
A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization
Xu, Qingyang; Zhang, Chengjin; Zhang, Li
2014-01-01
Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. The Gaussian probability model is used to model the solution distribution. The parameters of the Gaussian are obtained from the statistical information of the best individuals by a fast learning rule. The fast learning rule is used to enhance the efficiency of the algorithm, and an elitism strategy is used to maintain the convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results indicate the capability of FEGEDA, especially in the higher-dimensional problems, and FEGEDA exhibits better performance than some other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID and GA. PMID:24892059
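The following minimal sketch shows an elitist Gaussian EDA loop of the kind described above on a sphere benchmark: sample a population from a Gaussian, keep the best-so-far individual, and refit the Gaussian to the selected elites. The plain maximum-likelihood update is a generic stand-in for the paper's fast learning rule, and all settings are illustrative.

```python
import numpy as np

def sphere(x):
    """Classic continuous benchmark: minimum 0 at the origin."""
    return np.sum(x ** 2, axis=-1)

def gaussian_eda(obj, dim=10, pop=100, elite_frac=0.3, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    mean, std = rng.uniform(-5, 5, dim), np.full(dim, 2.0)
    best_x, best_f = None, np.inf
    for _ in range(gens):
        X = rng.normal(mean, std, size=(pop, dim))   # sample from the current model
        if best_x is not None:
            X[0] = best_x                            # elitism: reinject best-so-far
        f = obj(X)
        order = np.argsort(f)
        if f[order[0]] < best_f:
            best_x, best_f = X[order[0]].copy(), f[order[0]]
        elite = X[order[: int(elite_frac * pop)]]
        mean = elite.mean(axis=0)                    # refit Gaussian to the elites
        std = elite.std(axis=0) + 1e-12
    return best_x, best_f

x, fval = gaussian_eda(sphere)
print("best objective:", fval)
```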
Stanford, Robert E
2004-05-01
This paper uses a non-parametric frontier model and adaptations of the concepts of cross-efficiency and peer-appraisal to develop a formal methodology for benchmarking provider performance in the treatment of Acute Myocardial Infarction (AMI). Parameters used in the benchmarking process are the rates of proper recognition of indications of six standard treatment processes for AMI; the decision making units (DMUs) to be compared are the Medicare eligible hospitals of a particular state; the analysis produces an ordinal ranking of individual hospital performance scores. The cross-efficiency/peer-appraisal calculation process is constructed to accommodate DMUs that experience no patients in some of the treatment categories. While continuing to rate highly the performances of DMUs which are efficient in the Pareto-optimal sense, our model produces individual DMU performance scores that correlate significantly with good overall performance, as determined by a comparison of the sums of the individual DMU recognition rates for the six standard treatment processes. The methodology is applied to data collected from 107 state Medicare hospitals.
Campanella, Gabriele; Rajanna, Arjun R; Corsale, Lorraine; Schüffler, Peter J; Yagi, Yukako; Fuchs, Thomas J
2018-04-01
Pathology is on the verge of a profound change from an analog and qualitative to a digital and quantitative discipline. This change is mostly driven by the high-throughput scanning of microscope slides in modern pathology departments, reaching tens of thousands of digital slides per month. The resulting vast digital archives form the basis of clinical use in digital pathology and allow large-scale machine learning in computational pathology. One of the most crucial bottlenecks of high-throughput scanning is quality control (QC). Currently, digital slides are screened manually to detect out-of-focus regions, to compensate for the limitations of scanner software. We present a solution to this problem by introducing a benchmark dataset for blur detection and an in-depth comparison of state-of-the-art sharpness descriptors and their prediction performance within a random forest framework. Furthermore, we show that convolutional neural networks, like residual networks, can be used to train blur detectors from scratch. We thoroughly evaluate the accuracy of feature-based and deep-learning-based approaches for sharpness classification (99.74% accuracy) and regression (MSE 0.004) and additionally compare them to domain experts in a comprehensive human perception study. Our pipeline outputs spatial heatmaps that quantify and localize blurred areas on a slide. Finally, we tested the proposed framework in the clinical setting and demonstrate superior performance over the state-of-the-art QC pipeline comprising commercial software and human expert inspection, reducing the error rate from 17% to 4.7%.
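A minimal sketch of the feature-based branch of such a QC pipeline follows: a variance-of-Laplacian sharpness descriptor (plus a gradient-energy term) computed on synthetic sharp and Gaussian-blurred tiles and classified with a random forest. The descriptors, data and settings are illustrative, not those used in the paper.

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter
from sklearn.ensemble import RandomForestClassifier

def sharpness_features(tile):
    """Two simple focus descriptors: Laplacian variance and mean gradient energy."""
    return np.array([laplace(tile).var(), np.mean(np.gradient(tile)[0] ** 2)])

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    tile = rng.random((64, 64))                  # synthetic "tissue" tile
    blurred = rng.random() < 0.5
    img = gaussian_filter(tile, sigma=2.0) if blurred else tile
    X.append(sharpness_features(img))
    y.append(int(blurred))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```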
Non-LTE aluminium abundances in late-type stars
NASA Astrophysics Data System (ADS)
Nordlander, T.; Lind, K.
2017-11-01
Aims: Aluminium plays a key role in studies of the chemical enrichment of the Galaxy and of globular clusters. However, strong deviations from LTE (non-LTE) are known to significantly affect the inferred abundances in giant and metal-poor stars. Methods: We present non-local thermodynamic equilibrium (NLTE) modeling of aluminium using recent and accurate atomic data, in particular utilizing new transition rates for collisions with hydrogen atoms, without the need for any astrophysically calibrated parameters. For the first time, we perform 3D NLTE modeling of aluminium lines in the solar spectrum. We also compute and make available extensive grids of abundance corrections for lines in the optical and near-infrared using one-dimensional model atmospheres, and apply grids of precomputed departure coefficients to direct line synthesis for a set of benchmark stars with accurately known stellar parameters. Results: Our 3D NLTE modeling of the solar spectrum reproduces observed center-to-limb variations in the solar spectrum of the 7835 Å line as well as the mid-infrared photospheric emission line at 12.33 μm. We infer a 3D NLTE solar photospheric abundance of A(Al) = 6.43 ± 0.03, in exact agreement with the meteoritic abundance. We find that abundance corrections vary rapidly with stellar parameters; for the 3961 Å resonance line, corrections are positive and may be as large as +1 dex, while corrections for subordinate lines generally have positive sign for warm stars but negative for cool stars. Our modeling reproduces the observed line profiles of benchmark K-giants, and we find abundance corrections as large as -0.3 dex for Arcturus. Our analyses of four metal-poor benchmark stars yield consistent abundances between the 3961 Å resonance line and lines in the UV, optical and near-infrared regions. Finally, we discuss implications for the galactic chemical evolution of aluminium.
Optimization of dose and image quality in adult and pediatric computed tomography scans
NASA Astrophysics Data System (ADS)
Chang, Kwo-Ping; Hsu, Tzu-Kun; Lin, Wei-Ting; Hsu, Wen-Lin
2017-11-01
An exploration to maximize CT image quality and reduce radiation dose was conducted while controlling for multiple factors. The kVp, mAs and iterative reconstruction (IR) affect the CT image quality and the radiation dose absorbed. The optimal protocols (kVp, mAs, IR) are derived from a figure of merit (FOM) based on CT image quality (CNR) and the CT dose index (CTDIvol). CT image quality metrics such as CT number accuracy, SNR, the CNR of low-contrast materials and line-pair resolution were also analyzed as auxiliary assessments. CT protocols were carried out with an ACR accreditation phantom and a five-year-old pediatric head phantom. The threshold values of the adult CT scan parameters, 100 kVp and 150 mAs, were determined from the CT number test and the line pairs in ACR phantom module 1 and module 4, respectively. The findings of this study suggest that the optimal scanning parameters for adults be set at 100 kVp and 150-250 mAs. However, for improved low-contrast resolution, 120 kVp and 150-250 mAs are optimal. Optimal settings for pediatric head CT scans were 80 kVp/50 mAs for the maxillary sinus and brain stem, and 80 kVp/300 mAs for the temporal bone. SNR is reliable neither as an independent image parameter nor as the metric for determining optimal CT scan parameters. The iterative reconstruction (IR) approach is strongly recommended for both adult and pediatric CT scanning as it markedly improves image quality without affecting radiation dose.
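The sketch below assumes the common definition of the figure of merit, FOM = CNR^2 / CTDIvol, and compares two hypothetical protocols; the CNR inputs and dose values are made up for illustration and are not taken from the study.

```python
# Hedged sketch: FOM = CNR^2 / CTDIvol, a dose-normalised image-quality metric.
def cnr(roi_signal, roi_background, noise_sd):
    return abs(roi_signal - roi_background) / noise_sd

def figure_of_merit(cnr_value, ctdi_vol_mgy):
    return cnr_value ** 2 / ctdi_vol_mgy

protocols = {                                   # (CNR, CTDIvol in mGy), hypothetical
    "100 kVp / 200 mAs": (cnr(55.0, 40.0, 4.0), 12.0),
    "120 kVp / 200 mAs": (cnr(58.0, 40.0, 3.5), 18.0),
}
for name, (c, dose) in protocols.items():
    print(f"{name}: FOM = {figure_of_merit(c, dose):.2f}")
```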
NASA Astrophysics Data System (ADS)
Rodriguez, Tony F.; Cushman, David A.
2003-06-01
With the growing commercialization of watermarking techniques in various application scenarios, it has become increasingly important to quantify the performance of watermarking products. The quantification of the relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans and methodologies to ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion on the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating product performance if they are used efficiently and frequently during the design process. In this paper we describe how to leverage specific design-of-experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi loss function is proposed for an application, and orthogonal arrays are used to isolate optimal levels for a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
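A minimal sketch of a nominal-the-best Taguchi loss, L(y) = k(y - target)^2, applied to hypothetical watermark-detection scores is shown below; the constant k, the target and the scores are illustrative only, not values from the paper.

```python
def taguchi_loss(y, target, k=1.0):
    """Quadratic quality loss: zero at the target, growing with deviation."""
    return k * (y - target) ** 2

scores = [0.92, 0.88, 0.97, 0.80]   # detection rates from four hypothetical test runs
target = 1.0                         # ideal detection rate
avg_loss = sum(taguchi_loss(s, target) for s in scores) / len(scores)
print("average quality loss:", round(avg_loss, 4))
```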
Experimental benchmark of kinetic simulations of capacitively coupled plasmas in molecular gases
NASA Astrophysics Data System (ADS)
Donkó, Z.; Derzsi, A.; Korolov, I.; Hartmann, P.; Brandt, S.; Schulze, J.; Berger, B.; Koepke, M.; Bruneau, B.; Johnson, E.; Lafleur, T.; Booth, J.-P.; Gibson, A. R.; O'Connell, D.; Gans, T.
2018-01-01
We discuss the origin of uncertainties in the results of numerical simulations of low-temperature plasma sources, focusing on capacitively coupled plasmas. These sources can be operated in various gases/gas mixtures, over a wide domain of excitation frequency, voltage, and gas pressure. At low pressures, the non-equilibrium character of the charged particle transport prevails and particle-based simulations become the primary tools for their numerical description. The particle-in-cell method, complemented with Monte Carlo type description of collision processes, is a well-established approach for this purpose. Codes based on this technique have been developed by several authors/groups, and have been benchmarked with each other in some cases. Such benchmarking demonstrates the correctness of the codes, but the underlying physical model remains unvalidated. This is a key point, as this model should ideally account for all important plasma chemical reactions as well as for the plasma-surface interaction via including specific surface reaction coefficients (electron yields, sticking coefficients, etc). In order to test the models rigorously, comparison with experimental ‘benchmark data’ is necessary. Examples will be given regarding the studies of electron power absorption modes in O2, and CF4-Ar discharges, as well as on the effect of modifications of the parameters of certain elementary processes on the computed discharge characteristics in O2 capacitively coupled plasmas.
Can online benchmarking increase rates of thrombolysis? Data from the Austrian stroke unit registry.
Ferrari, Julia; Seyfang, Leonhard; Lang, Wilfried
2013-09-01
Despite its widespread availability and known safety and efficacy, intravenous thrombolysis is still underused. We aimed to identify whether nationwide quality projects, such as the stroke registry in Austria, together with online benchmarking and predefined target values, can increase rates of thrombolysis. Therefore, we assessed 6,394 out of 48,462 patients with ischemic stroke from the Austrian stroke registry (study period from March 2003 to December 2011) who had undergone thrombolysis treatment. We defined lower-level and target values as quality parameters and evaluated whether or not these parameters could be achieved in the past years. We were able to show that rates of thrombolysis in Austria increased from 4.9% in 2003 to 18.3% in 2011. In a multivariate regression model, the main impact seen was the increase over the years [the OR ranges from 0.47 (95% CI 0.32-0.68) in 2003 to 2.51 (95% CI 2.20-2.87) in 2011]. The predefined lower and target levels of thrombolysis were achieved at the majority of participating centers: in 2011 the lower value of 5% was achieved at all stroke units, and the target value of 15% was observed at 21 of 34 stroke units. We conclude that online benchmarking and the concept of defining target values as a tool for nationwide acute stroke care appeared to result in an increase in the rate of thrombolysis over the last few years, while the variability between the stroke units has not yet been reduced.
Benchmarking in Thoracic Surgery. Third Edition.
Freixinet Gilart, Jorge; Varela Simó, Gonzalo; Rodríguez Suárez, Pedro; Embún Flor, Raúl; Rivas de Andrés, Juan José; de la Torre Bravos, Mercedes; Molins López-Rodó, Laureano; Pac Ferrer, Joaquín; Izquierdo Elena, José Miguel; Baschwitz, Benno; López de Castro, Pedro E; Fibla Alfara, Juan José; Hernando Trancho, Florentino; Carvajal Carrasco, Ángel; Canalís Arrayás, Emili; Salvatierra Velázquez, Ángel; Canela Cardona, Mercedes; Torres Lanzas, Juan; Moreno Mata, Nicolás
2016-04-01
Benchmarking entails the continuous comparison of efficacy and quality among products and activities, with the primary objective of achieving excellence. The aim was to analyze the results of benchmarking performed in 2013 on clinical practices undertaken in 2012 in 17 Spanish thoracic surgery units. Study data were obtained from the basic minimum data set for hospitalization, registered in 2012. Data from hospital discharge reports were submitted by the participating groups, but staff from the corresponding departments did not intervene in data collection. Study cases all involved hospital discharges recorded in the participating sites. Episodes included were respiratory surgery (Major Diagnostic Category 04, Surgery) and those of the thoracic surgery unit. Cases were labelled using codes from the International Classification of Diseases, 9th revision, Clinical Modification. The refined diagnosis-related groups classification was used to evaluate differences in severity and complexity of cases. General parameters (number of cases, mean stay, complications, readmissions, mortality, and activity) varied widely among the participating groups. Specific interventions (lobectomy, pneumonectomy, atypical resections, and treatment of pneumothorax) also varied widely. As in previous editions, practices among participating groups varied considerably. Some areas for improvement emerge: admission processes need to be standardized to avoid urgent admissions and to improve pre-operative care; hospital discharges should be streamlined and discharge reports improved by including all procedures and complications. Some units have parameters which deviate excessively from the norm, and these sites need to review their processes in depth. Coding of diagnoses and comorbidities is another area where improvement is needed.
Dynamic 68Ga-DOTATOC PET/CT and static image in NET patients. Correlation of parameters during PRRT.
Van Binnebeek, Sofie; Koole, Michel; Terwinghe, Christelle; Baete, Kristof; Vanbilloen, Bert; Haustermans, Karine; Clement, Paul M; Bogaerts, Kris; Verbruggen, Alfons; Nackaerts, Kris; Van Cutsem, Eric; Verslype, Chris; Mottaghy, Felix M; Deroose, Christophe M
2016-06-28
The aim was to investigate the relationship between the dynamic parameter (Ki) and static image-derived parameters of 68Ga-DOTATOC-PET, to determine which static parameter best reflects underlying somatostatin-receptor-expression (SSR) levels in neuroendocrine tumours (NETs). Twenty patients with metastasized NETs underwent a dynamic and a static 68Ga-DOTATOC-PET before PRRT and at 7 and 40 weeks after the first administration of 90Y-DOTATOC (in total 4 cycles were planned); 175 lesions were defined and analyzed on the dynamic as well as the static scans. Quantitative analysis was performed using the software PMOD. One to five target lesions per patient were chosen and delineated manually on the baseline dynamic scan and, further, on the corresponding static 68Ga-DOTATOC-PET and on the dynamic and static 68Ga-DOTATOC-PET at the other time points; SUVmax and SUVmean of the lesions were assessed on the other six scans. The input function was retrieved from the abdominal aorta on the images. Ki was then calculated using the Patlak plot. Finally, 5 reference regions for normalization of SUVtumour were delineated on the static scans, resulting in 5 ratios (SUVratio). SUVmax and SUVmean of the tumoural lesions on the dynamic 68Ga-DOTATOC-PET had a very strong correlation with the corresponding parameters in the static scan (R²: 0.94 and 0.95, respectively). SUVmax, SUVmean and Ki of the lesions showed a good linear correlation; the SUVratios correlated poorly with Ki. A significantly better correlation was noticed between Ki and SUVtumour (max and mean) (p < 0.0001). As the dynamic parameter Ki correlates best with the absolute SUVtumour, SUVtumour best reflects underlying SSR levels in NETs.
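The sketch below illustrates the Patlak graphical analysis used to obtain Ki: the tissue-to-plasma ratio is regressed against the normalized integral of the plasma input, and the slope of the late linear portion is Ki. The time-activity curves are synthetic, not patient data from the study.

```python
import numpy as np

t = np.linspace(1, 45, 30)                            # minutes after injection
Cp = 100.0 * np.exp(-0.15 * t) + 5.0                  # synthetic plasma input (a.u.)
Ki_true, V0 = 0.04, 0.3
int_Cp = np.concatenate(([0.0],
                         np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))))  # trapezoid integral
Ct = Ki_true * int_Cp + V0 * Cp                       # irreversible-uptake tissue curve

x = int_Cp / Cp                                       # Patlak abscissa
y = Ct / Cp                                           # Patlak ordinate
Ki_fit, intercept = np.polyfit(x[10:], y[10:], 1)     # fit the late, linear part
print(f"fitted Ki = {Ki_fit:.4f} (true {Ki_true})")
```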
Study on the parameters of the scanning system for the 300 keV electron accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leo, K. W.; Chulan, R. M., E-mail: leo@nm.gov.my; Hashim, S. A.
2016-01-22
This paper describes the method used to identify the magnetic coil parameters of the scanning system. This locally designed low-energy electron accelerator, with a present energy of 140 keV, will be upgraded to 300 keV. In this accelerator, a scanning system is required to deflect the energetic electron beam across a titanium foil in the vertical and horizontal directions. The excitation current of the magnetic coil is determined by the energy of the electron beam. Therefore, the magnetic coil parameters must be identified to ensure the matching of the beam energy and the excitation coil current. As a result, the essential parameters of the effective lengths for the X-axis and Y-axis have been found to be 0.1198 m and 0.1134 m, and the required excitation coil currents, which depend on the electron beam energy, have been identified.
Computational study of some fluoroquinolones: Structural, spectral and docking investigations
NASA Astrophysics Data System (ADS)
Sayin, Koray; Karakaş, Duran; Kariper, Sultan Erkan; Sayin, Tuba Alagöz
2018-03-01
Quantum chemical calculations are performed on norfloxacin, tosufloxacin and levofloxacin. The most stable structure of each molecule is determined from thermodynamic parameters. The best level of theory for the calculations is then determined by a benchmark analysis; the M062X/6-31+G(d) level is used in the calculations. IR, UV-VIS and NMR spectra are calculated and examined in detail. Some quantum chemical parameters are calculated and a trend in activity is suggested. Additionally, molecular docking calculations are performed between the related compounds and a protein (ID: 2J9N).
Key aspects of cost effective collector and solar field design
NASA Astrophysics Data System (ADS)
von Reeken, Finn; Nicodemo, Dario; Keck, Thomas; Weinrebe, Gerhard; Balz, Markus
2016-05-01
A study has been performed in which different key parameters influencing solar field cost are varied. Using the levelised cost of energy as the figure of merit, it is shown that parameters like the GoToStow wind speed, heliostat stiffness or tower height should be adapted to the respective site conditions from an economic point of view. The benchmark site Redstone (Northern Cape Province, South Africa) has been compared to an alternative site close to Phoenix (AZ, USA) regarding site conditions and their effect on cost-effective collector and solar field design.
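A simplified levelised-cost-of-energy figure of merit of the kind used in such trade-off studies is sketched below; the capital and operating costs, discount rate, lifetime and annual yield are placeholders rather than the Redstone or Phoenix values from the study.

```python
def crf(rate, years):
    """Capital recovery factor for annualising the investment."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, opex_per_year, annual_energy_mwh, rate=0.07, years=25):
    """Levelised cost of energy in $/MWh under constant annual yield."""
    return (capex * crf(rate, years) + opex_per_year) / annual_energy_mwh

base = lcoe(capex=600e6, opex_per_year=12e6, annual_energy_mwh=480_000)
stiffer_heliostats = lcoe(capex=620e6, opex_per_year=11e6, annual_energy_mwh=492_000)
print(f"baseline LCOE: {base:.1f} $/MWh, design variant: {stiffer_heliostats:.1f} $/MWh")
```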
The mass storage testing laboratory at GSFC
NASA Technical Reports Server (NTRS)
Venkataraman, Ravi; Williams, Joel; Michaud, David; Gu, Heng; Kalluri, Atri; Hariharan, P. C.; Kobler, Ben; Behnke, Jeanne; Peavey, Bernard
1998-01-01
Industry-wide benchmarks exist for measuring the performance of processors (SPECmarks), and of database systems (Transaction Processing Council). Despite storage having become the dominant item in computing and IT (Information Technology) budgets, no such common benchmark is available in the mass storage field. Vendors and consultants provide services and tools for capacity planning and sizing, but these do not account for the complete set of metrics needed in today's archives. The availability of automated tape libraries, high-capacity RAID systems, and high- bandwidth interconnectivity between processor and peripherals has led to demands for services which traditional file systems cannot provide. File Storage and Management Systems (FSMS), which began to be marketed in the late 80's, have helped to some extent with large tape libraries, but their use has introduced additional parameters affecting performance. The aim of the Mass Storage Test Laboratory (MSTL) at Goddard Space Flight Center is to develop a test suite that includes not only a comprehensive check list to document a mass storage environment but also benchmark code. Benchmark code is being tested which will provide measurements for both baseline systems, i.e. applications interacting with peripherals through the operating system services, and for combinations involving an FSMS. The benchmarks are written in C, and are easily portable. They are initially being aimed at the UNIX Open Systems world. Measurements are being made using a Sun Ultra 170 Sparc with 256MB memory running Solaris 2.5.1 with the following configuration: 4mm tape stacker on SCSI 2 Fast/Wide; 4GB disk device on SCSI 2 Fast/Wide; and Sony Petaserve on Fast/Wide differential SCSI 2.
Electric load shape benchmarking for small- and medium-sized commercial buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Xuan; Hong, Tianzhen; Chen, Yixing
Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as the energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, lacking consideration of the fluctuation of energy use behavior over time, which reveals time-of-use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data using over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques were generic and flexible for future datasets of other building types and in other utility territories.
Electric load shape benchmarking for small- and medium-sized commercial buildings
Luo, Xuan; Hong, Tianzhen; Chen, Yixing; ...
2017-07-28
Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as the energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, lacking consideration of the fluctuation of energy use behavior over time, which reveals time-of-use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data using over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques were generic and flexible for future datasets of other building types and in other utility territories.
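The sketch below illustrates the core of load-shape benchmarking: peak-normalise 24-hour profiles so that shape rather than magnitude drives the comparison, cluster them, and assign a new building to its peer group. The synthetic profiles and the choice of k-means are illustrative stand-ins, not the study's exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
hours = np.arange(24)
office = 0.3 + 0.7 * np.exp(-((hours - 13) / 4.0) ** 2)     # daytime-peaking shape
retail = 0.4 + 0.6 * np.exp(-((hours - 18) / 3.0) ** 2)     # evening-peaking shape
profiles = np.vstack([office + 0.05 * rng.standard_normal(24) for _ in range(100)] +
                     [retail + 0.05 * rng.standard_normal(24) for _ in range(100)])
shapes = profiles / profiles.max(axis=1, keepdims=True)      # normalise out magnitude

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(shapes)
new_building = office + 0.05 * rng.standard_normal(24)
peer_group = km.predict((new_building / new_building.max()).reshape(1, -1))[0]
print("assigned peer group:", peer_group)
```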
ABSTRACT: Total Petroleum hydrocarbons (TPH) as a lumped parameter can be easily and rapidly measured or monitored. Despite interpretational problems, it has become an accepted regulatory benchmark used widely to evaluate the extent of petroleum product contamination. Three cu...
Auditing Organizational Security
2017-01-01
Managing organizational security is no different from managing any other of the command’s missions. Establish your policies, goals and risk...parameters; implement, train, measure and benchmark them. And then audit, audit, audit. Today, more than ever, Organizational Security is an essential...not be regarded as independent or standing alone. Cybersecurity is an indispensable element of organizational security, which is the subject of
NASA Astrophysics Data System (ADS)
Rees, Sian; Dobre, George
2014-01-01
When using scanning laser ophthalmoscopy to produce images of the eye fundus, maximum permissible exposure (MPE) limits must be considered. These limits are set out in international standards such as the American National Standards Institute's ANSI Z136.1 Safe Use of Lasers (USA) and BS EN 60825-1: 1994 (UK) and the corresponding Euro norms, but these documents do not explicitly consider the case of scanned beams. Our study aims to show how MPE values can be calculated for the specific case of retinal scanning by taking into account an array of parameters, such as wavelength, exposure duration, type of scanning, line rate and field size, and how each set of initial parameters results in MPE values that correspond to thermal or photochemical damage to the retina.
Farr, J B; Dessy, F; De Wilde, O; Bietzer, O; Schönenberg, D
2013-07-01
The purpose of this investigation was to compare and contrast the measured fundamental properties of two new types of modulated proton scanning systems. This provides a basis for clinical expectations based on the scanned beam quality and a benchmark for computational models. Because the relatively small beam and fast scanning made the characterization challenging, a secondary purpose was to develop and apply new approaches where necessary. The following performance aspects of the proton scanning systems were investigated: beamlet alignment, static in-air beamlet size and shape, scanned in-air penumbra, scanned fluence map accuracy, geometric alignment of the scanning system to isocenter, maximum field size, lateral and longitudinal field uniformity of a 1 l cubic uniform field, output stability over time, gantry angle invariance, monitoring system linearity, and reproducibility. A range of detectors was used: film, ionization chambers, lateral multielement and longitudinal multilayer ionization chambers, and a scintillation screen combined with a digital video camera. Characterization of the scanned fluence maps was performed with a software analysis tool. The resulting measurements and analysis indicated that the two types of delivery systems performed within specification for the aspects investigated. Significant differences were observed between the two types of scanning systems, with one type exhibiting a smaller spot size and associated penumbra than the other. The difference is smallest at maximum energy and increases as the energy decreases. Additionally, the large-spot system showed an increase in dose precision to a static target with layer rescanning, whereas the small-spot system did not. The measured results from the two types of modulated scanning system were consistent with their designs under the conditions tested. The most significant difference between the types of system was their proton spot size and associated resolution, governed by the magnetic optics and vacuum length. The need for and benefit of multielement detectors and high-resolution sensors were also shown. The use of a fluence map analysis software tool was particularly effective in characterizing the dynamic proton energy-layer scanning.
Sub-pixel analysis to support graphic security after scanning at low resolution
NASA Astrophysics Data System (ADS)
Haas, Bertrand; Cordery, Robert; Gou, Hongmei; Decker, Steve
2006-02-01
Whether in the domain of audio, video or finance, our world tends to become increasingly digital. However, for diverse reasons, the transition from analog to digital is often much extended in time, and proceeds by long steps (and sometimes never completes). One such step is the conversion of information on analog media to digital information. We focus in this paper on the conversion (scanning) of printed documents to digital images. Analog media have the advantage over digital channels that they can harbor much imperceptible information that can be used for fraud detection and forensic purposes. But this secondary information usually fails to be retrieved during the conversion step. This is particularly relevant since the Check-21 act (Check Clearing for the 21st Century act) became effective in 2004 and allows images of checks to be handled by banks as usual paper checks. We use here this situation of check scanning as our primary benchmark for graphic security features after scanning. We will first present a quick review of the most common graphic security features currently found on checks, with their specific purpose, qualities and disadvantages, and we demonstrate their poor survivability after scanning in the average scanning conditions expected from the Check-21 Act. We will then present a novel method of measurement of distances between and rotations of line elements in a scanned image: Based on an appropriate print model, we refine direct measurements to an accuracy beyond the size of a scanning pixel, so we can then determine expected distances, periodicity, sharpness and print quality of known characters, symbols and other graphic elements in a document image. Finally we will apply our method to fraud detection of documents after gray-scale scanning at 300dpi resolution. We show in particular that alterations on legitimate checks or copies of checks can be successfully detected by measuring with sub-pixel accuracy the irregularities inherently introduced by the illegitimate process.
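As a generic illustration of sub-pixel measurement (not the paper's print-model-based refinement), the sketch below locates the centre of a blurred printed line to a fraction of a pixel by parabolic interpolation around the sampled maximum; the profile, pixel grid and line centre are synthetic.

```python
import numpy as np

def subpixel_peak(profile):
    """Refine the peak location by fitting a parabola through three samples."""
    i = int(np.argmax(profile))
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    return i + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # vertex of the fitted parabola

x = np.arange(0, 20, 1.0)                              # pixel grid of a scanned row
true_center = 9.37                                     # line centre between two pixels
profile = np.exp(-((x - true_center) / 1.5) ** 2)      # blurred printed line profile
print(f"estimated centre: {subpixel_peak(profile):.2f} px (true {true_center})")
```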
Improved artificial bee colony algorithm for vehicle routing problem with time windows
Yan, Qianqian; Zhang, Mengjie; Yang, Yunong
2017-01-01
This paper investigates a well-known complex combinatorial problem known as the vehicle routing problem with time windows (VRPTW). Unlike the standard vehicle routing problem, each customer in the VRPTW is served within a given time constraint. This paper solves the VRPTW using an improved artificial bee colony (IABC) algorithm. The performance of this algorithm is improved by a local optimization based on a crossover operation and a scanning strategy. Finally, the effectiveness of the IABC is evaluated on some well-known benchmarks. The results demonstrate the power of IABC algorithm in solving the VRPTW. PMID:28961252
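A minimal building block of any VRPTW solver, including the IABC, is the time-window feasibility check of a candidate route; a sketch under illustrative coordinates, windows and service times follows (not a standard benchmark instance).

```python
import math

customers = {                      # id: (x, y, earliest, latest, service_time)
    0: (0, 0, 0, 100, 0),          # depot
    1: (2, 3, 5, 20, 2),
    2: (5, 1, 10, 30, 2),
    3: (1, 6, 25, 50, 2),
}

def travel(a, b):
    (xa, ya), (xb, yb) = customers[a][:2], customers[b][:2]
    return math.hypot(xa - xb, ya - yb)

def route_feasible(route, speed=1.0):
    """Return True if every customer on the route is reached before its window closes."""
    t = 0.0
    for prev, cur in zip(route, route[1:]):
        t += travel(prev, cur) / speed
        earliest, latest, service = customers[cur][2:]
        if t > latest:                  # arrived after the window closed
            return False
        t = max(t, earliest) + service  # wait if early, then serve
    return True

print(route_feasible([0, 1, 2, 3, 0]))
```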
Benchmarking contactless acquisition sensor reproducibility for latent fingerprint trace evidence
NASA Astrophysics Data System (ADS)
Hildebrandt, Mario; Dittmann, Jana
2015-03-01
Optical, nanometer-range, contactless, non-destructive sensor devices are promising acquisition techniques in crime scene trace forensics, e.g. for digitizing latent fingerprint traces. Before new approaches are introduced in crime investigations, innovations need to be positively tested and quality ensured. In this paper we investigate sensor reproducibility by studying different scans from four sensors: two chromatic white light sensors (CWL600/CWL1mm), one confocal laser scanning microscope, and one NIR/VIS/UV reflection spectrometer. First, we perform intra-sensor reproducibility testing for the CWL600 with a privacy-conforming test set of artificial-sweat printed, computer-generated fingerprints. We use 24 different fingerprint patterns as original samples (printing samples/templates) for printing with artificial sweat (physical trace samples) and their acquisition with contactless sensors, resulting in 96 sensor images, called scan or acquired samples. The second test set, for inter-sensor reproducibility assessment, consists of the first three patterns from the first test set, acquired in two consecutive scans using each device. We suggest using a simple feature set in the spatial and frequency domains, known from signal processing, and test its suitability with six different classifiers classifying scan data into small differences (reproducible) and large differences (non-reproducible). Furthermore, we suggest comparing the classification results with biometric verification scores (calculated with NBIS, with a threshold of 40) as a biometric reproducibility score. The Bagging classifier is in nearly all cases the most reliable classifier in our experiments, and the results are also confirmed by the biometric matching rates.
Sin, Wai Jack; Nai, Mui Ling Sharon; Wei, Jun
2017-01-01
As one of the powder bed fusion additive manufacturing technologies, electron beam melting (EBM) is gaining more and more attention due to its near-net-shape production capability with low residual stress and good mechanical properties. These characteristics also allow EBM-built parts to be used as produced, without post-processing. However, the as-built rough surface has a detrimental influence on the mechanical properties of metallic alloys. Understanding the effects of the processing parameters on the part's surface roughness therefore becomes critical. This paper has focused on varying the processing parameters of two types of contouring scanning strategies in EBM, namely multispot and non-multispot. The results suggest that the beam current and speed function are the most significant processing parameters for the non-multispot contouring scanning strategy, while for the multispot contouring scanning strategy the number of spots, spot time, and spot overlap have greater effects than focus offset and beam current. Improved surface roughness has been obtained with both contouring scanning strategies. Furthermore, the non-multispot contouring scanning strategy gives a lower surface roughness value and poorer geometrical accuracy than the multispot counterpart under the optimized conditions. These findings could be used as a guideline for selecting the contouring type used for specific industrial parts that are built using EBM. PMID:28937638
Benchmark studies of induced radioactivity produced in LHC materials, Part II: Remanent dose rates.
Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H
2005-01-01
A new method to estimate remanent dose rates, to be used with the Monte Carlo code FLUKA, was benchmarked against measurements from an experiment that was performed at the CERN-EU high-energy reference field facility. An extensive collection of samples of different materials were placed downstream of, and laterally to, a copper target, intercepting a positively charged mixed hadron beam with a momentum of 120 GeV c⁻¹. Emphasis was put on the reduction of uncertainties by taking measures such as careful monitoring of the irradiation parameters, using different instruments to measure dose rates, adopting detailed elemental analyses of the irradiated materials and making detailed simulations of the irradiation experiment. The measured and calculated dose rates are in good agreement.
The fast and accurate 3D-face scanning technology based on laser triangle sensors
NASA Astrophysics Data System (ADS)
Wang, Jinjiang; Chang, Tianyu; Ge, Baozhen; Tian, Qingguo; Chen, Yang; Kong, Bin
2013-08-01
A laser triangulation scanning method and the structure of a 3D face measurement system are introduced. In the presented system, a line laser source was selected as the optical probe so that one full line can be scanned at a time. A CCD image sensor was used to capture the image of the laser line modulated by the human face. The system parameters were obtained by calibration. The lens parameters of the imaging part were calibrated with a machine vision method, and the triangulation structure parameters were calibrated with finely spaced parallel wires. The CCD imaging part and the line laser indicator were mounted on a linear motor carriage, which scans the laser line from the top of the head to the neck. Because the nose protrudes and the eyes are recessed, one CCD image sensor cannot obtain a complete image of the laser line. In this system, two CCD image sensors were therefore placed symmetrically on either side of the laser indicator; in effect, this structure comprises two laser triangulation measurement units. As a further design feature, three laser indicators were arranged to reduce the scanning time, since it is difficult for a person to remain still for a long time. The 3D data are calculated after scanning; further data processing includes 3D coordinate refinement, mesh generation, and surface rendering. Experiments show that this system has a simple structure, high scanning speed and good accuracy. The scanning range covers the whole head of an adult, and the typical resolution is 0.5 mm.
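A simplified planar triangulation relation of the kind underlying such a scanner is sketched below: with the laser ray and the camera ray making angles alpha and beta with the baseline, the depth is b·tan(alpha)·tan(beta)/(tan(alpha)+tan(beta)). The baseline and angles are hypothetical; in practice beta would come from the calibrated camera model and the pixel position of the imaged laser line.

```python
import math

def triangulate_depth(baseline_mm, alpha_deg, beta_deg):
    """Perpendicular distance of the illuminated point from the laser-camera baseline."""
    ta = math.tan(math.radians(alpha_deg))   # laser ray angle w.r.t. the baseline
    tb = math.tan(math.radians(beta_deg))    # camera ray angle w.r.t. the baseline
    return baseline_mm * ta * tb / (ta + tb)

print(f"depth = {triangulate_depth(150.0, 60.0, 55.0):.1f} mm")
```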
Using string invariants for prediction searching for optimal parameters
NASA Astrophysics Data System (ADS)
Bundzel, Marek; Kasanický, Tomáš; Pinčák, Richard
2016-02-01
We have developed a novel prediction method based on string invariants. The method does not require learning but a small set of parameters must be set to achieve optimal performance. We have implemented an evolutionary algorithm for the parametric optimization. We have tested the performance of the method on artificial and real world data and compared the performance to statistical methods and to a number of artificial intelligence methods. We have used data and the results of a prediction competition as a benchmark. The results show that the method performs well in single step prediction but the method's performance for multiple step prediction needs to be improved. The method works well for a wide range of parameters.
Comparative study of image contrast in scanning electron microscope and helium ion microscope.
O'Connell, R; Chen, Y; Zhang, H; Zhou, Y; Fox, D; Maguire, P; Wang, J J; Rodenburg, C
2017-12-01
Images of Ga+-implanted amorphous silicon layers in a (110) n-type silicon substrate have been collected by a range of detectors in a scanning electron microscope and a helium ion microscope. The effects of the implantation dose and imaging parameters (beam energy, dwell time, etc.) on the image contrast were investigated. We demonstrate a similar relationship for both the helium ion microscope Everhart-Thornley and scanning electron microscope Inlens detectors between the contrast of the images and the Ga+ density and imaging parameters. These results also show that dynamic charging effects have a significant impact on the quantification of the helium ion microscope and scanning electron microscope contrast.
Fission barriers at the end of the chart of the nuclides
Möller, Peter; Sierk, Arnold J.; Ichikawa, Takatoshi; ...
2015-02-12
We present calculated fission-barrier heights for 5239 nuclides for all nuclei between the proton and neutron drip lines with 171 ≤ A ≤ 330. The barriers are calculated in the macroscopic-microscopic finite-range liquid-drop model (FRLDM) with a 2002 set of macroscopic-model parameters. The saddle-point energies are determined from potential-energy surfaces based on more than five million different shapes, defined by five deformation parameters in the three-quadratic-surface shape parametrization: elongation, neck diameter, left-fragment spheroidal deformation, right-fragment spheroidal deformation, and nascent-fragment mass asymmetry. The energy of the ground state is determined by calculating the lowest-energy configuration in both the Nilsson perturbed-spheroid (ϵ) and the spherical-harmonic (β) parametrizations, including axially asymmetric deformations. The lower of the two results (correcting for zero-point motion) is defined as the ground-state energy. The effect of axial asymmetry on the inner barrier peak is calculated in the (ϵ,γ) parametrization. We have earlier benchmarked our calculated barrier heights to experimentally extracted barrier parameters and found average agreement to about one MeV for known data across the nuclear chart. Here we do additional benchmarks and investigate the qualitative and, when possible, quantitative agreement and/or consistency with data on β-delayed fission, isotope generation along prompt-neutron-capture chains in nuclear-weapons tests, and superheavy-element stability. In addition, these studies all indicate that the model is realistic at considerable distances in Z and N from the region of nuclei where its parameters were determined.
NASA Astrophysics Data System (ADS)
Yoon, Ilsang; Weinberg, Martin D.; Katz, Neal
2011-06-01
We introduce a new galaxy image decomposition tool, GALPHAT (GALaxy PHotometric ATtributes), which is a front-end application of the Bayesian Inference Engine (BIE), a parallel Markov chain Monte Carlo package, to provide full posterior probability distributions and reliable confidence intervals for all model parameters. The BIE relies on GALPHAT to compute the likelihood function. GALPHAT generates scale-free cumulative image tables for the desired model family with precise error control. Interpolation of this table yields accurate pixellated images with any centre, scale and inclination angle. GALPHAT then rotates the image by position angle using a Fourier shift theorem, yielding high-speed, accurate likelihood computation. We benchmark this approach using an ensemble of simulated Sérsic model galaxies over a wide range of observational conditions: the signal-to-noise ratio S/N, the ratio of galaxy size to the point spread function (PSF) and the image size, and errors in the assumed PSF; and a range of structural parameters: the half-light radius re and the Sérsic index n. We characterize the strength of parameter covariance in the Sérsic model, which increases with S/N and n, and the results strongly motivate the need for the full posterior probability distribution in galaxy morphology analyses and later inferences. The test results for simulated galaxies successfully demonstrate that, with a careful choice of Markov chain Monte Carlo algorithms and fast model image generation, GALPHAT is a powerful analysis tool for reliably inferring morphological parameters from a large ensemble of galaxies over a wide range of different observational conditions.
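The sketch below generates the kind of pixellated Sérsic model image that such a fit compares against the data, using the common b_n ≈ 2n − 1/3 approximation; the parameter values are illustrative, and the sketch omits PSF convolution and the Markov chain Monte Carlo machinery.

```python
import numpy as np

def sersic_image(shape, x0, y0, Ie, re, n, q=1.0, pa_deg=0.0):
    """Pixellated Sersic profile with centre, half-light radius, index, axis ratio, PA."""
    y, x = np.indices(shape, dtype=float)
    pa = np.radians(pa_deg)
    dx, dy = x - x0, y - y0
    xr = dx * np.cos(pa) + dy * np.sin(pa)          # rotate into the galaxy frame
    yr = -dx * np.sin(pa) + dy * np.cos(pa)
    r = np.sqrt(xr ** 2 + (yr / q) ** 2)            # elliptical radius
    b_n = 2.0 * n - 1.0 / 3.0                       # Ciotti approximation, valid for n > ~0.4
    return Ie * np.exp(-b_n * ((r / re) ** (1.0 / n) - 1.0))

img = sersic_image((128, 128), x0=64, y0=64, Ie=10.0, re=12.0, n=2.5, q=0.7, pa_deg=30.0)
print("total model flux:", round(float(img.sum()), 1))
```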
NASA Astrophysics Data System (ADS)
Hausmann, Michael; Doelle, Juergen; Arnold, Armin; Stepanow, Boris; Wickert, Burkhard; Boscher, Jeannine; Popescu, Paul C.; Cremer, Christoph
1992-07-01
Laser fluorescence activated slit-scan flow cytometry offers an approach to a fast, quantitative characterization of chromosomes due to morphological features. It can be applied for screening of chromosomal abnormalities. We give a preliminary report on the development of the Heidelberg slit-scan flow cytometer. Time-resolved measurement of the fluorescence intensity along the chromosome axis can be registered simultaneously for two parameters when the chromosome axis can be registered simultaneously for two parameters when the chromosome passes perpendicularly through a narrowly focused laser beam combined by a detection slit in the image plane. So far automated data analysis has been performed off-line on a PC. In its final performance, the Heidelberg slit-scan flow cytometer will achieve on-line data analysis that allows an electro-acoustical sorting of chromosomes of interest. Interest is high in the agriculture field to study chromosome aberrations that influence the size of litters in pig (Sus scrofa domestica) breeding. Slit-scan measurements have been performed to characterize chromosomes of pigs; we present results for chromosome 1 and a translocation chromosome 6/15.
Automated Guided-Wave Scanning Developed to Characterize Materials and Detect Defects
NASA Technical Reports Server (NTRS)
Martin, Richard E.; Gyekenyeski, Andrew L.; Roth, Don J.
2004-01-01
The Nondestructive Evaluation (NDE) Group of the Optical Instrumentation Technology Branch at the NASA Glenn Research Center has developed a scanning system that uses guided waves to characterize materials and detect defects. The technique uses two ultrasonic transducers to interrogate the condition of a material. The sending transducer introduces an ultrasonic pulse at a point on the surface of the specimen, and the receiving transducer detects the signal after it has passed through the material. The aim of the method is to correlate certain parameters in both the time and frequency domains of the detected waveform to characteristics of the material between the two transducers. The scanning system is shown. The waveform parameters of interest include the attenuation due to internal damping, waveform shape parameters, and frequency shifts due to material changes. For the most part, guided waves are used to gauge the damage state and defect growth of materials subjected to various mechanical or environmental loads. The technique has been applied to polymer matrix composites, ceramic matrix composites, and metal matrix composites as well as metallic alloys. Historically, guided wave analysis has been a point-by-point, manual technique with waveforms collected at discrete locations and postprocessed. Data collection and analysis of this type limits the amount of detail that can be obtained. Also, the manual movement of the sensors is prone to user error and is time consuming. The development of an automated guided-wave scanning system has allowed the method to be applied to a wide variety of materials in a consistent, repeatable manner. Experimental studies have been conducted to determine the repeatability of the system as well as compare the results obtained using more traditional NDE methods. The following screen capture shows guided-wave scan results for a ceramic matrix composite plate, including images for each of nine calculated parameters. The system can display up to 18 different wave parameters. Multiple scans of the test specimen demonstrated excellent repeatability in the measurement of all the guided-wave parameters, far exceeding the traditional point-by-point technique. In addition, the scan was able to detect a subsurface defect that was confirmed using flash thermography. This technology is being further refined to provide a more robust and efficient software environment. Future hardware upgrades will allow for multiple receiving transducers and the ability to scan more complex surfaces. This work supports composite materials development and testing under the Ultra-Efficient Engine Technology (UEET) Project, but it will also be applied to other material systems under development for a wide range of applications.
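Two of the waveform parameters such a system can map at each scan point, attenuation relative to a reference signal and the spectral centroid, are sketched below on a synthetic tone burst; the signals, sampling rate and parameter choices are illustrative rather than the system's actual processing.

```python
import numpy as np

fs = 10e6                                        # sampling rate, Hz (assumed)
t = np.arange(0, 200e-6, 1 / fs)
reference = np.sin(2 * np.pi * 500e3 * t) * np.exp(-((t - 50e-6) / 20e-6) ** 2)
received = 0.4 * np.sin(2 * np.pi * 450e3 * t) * np.exp(-((t - 60e-6) / 25e-6) ** 2)

# Time-domain parameter: peak amplitude ratio expressed in dB.
attenuation_db = 20 * np.log10(np.abs(received).max() / np.abs(reference).max())

# Frequency-domain parameter: spectral centroid of the received waveform.
spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(len(received), 1 / fs)
centroid_khz = (freqs * spectrum).sum() / spectrum.sum() / 1e3

print(f"attenuation: {attenuation_db:.1f} dB, spectral centroid: {centroid_khz:.0f} kHz")
```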
NASA Astrophysics Data System (ADS)
Liu, Yang; Zhang, Jian; Pang, Zhicong; Wu, Weihui
2018-04-01
Selective laser melting (SLM) provides a feasible way to directly manufacture complex thin-walled parts; however, the energy input during the SLM process, which derives from the laser power, scanning speed, layer thickness, scan spacing, etc., has a great influence on the quality of the thin wall. The aim of this work is to relate the thin wall's characteristics (responses), namely track width, surface roughness and hardness, to the process parameters considered in this research (laser power, scanning speed and layer thickness) and to find the optimal manufacturing conditions. Design of experiments (DoE) with a central composite design was used to achieve better manufacturing quality. Mathematical models derived from the statistical analysis were used to establish the relationships between the process parameters and the responses. The effects of the process parameters on each response were also determined. A numerical optimization was then performed to find the process settings at which the quality features are at their desired values. Based on this study, the relationship between the process parameters and the SLM-built thin-walled structure was revealed, and the corresponding optimal process parameters can be used to manufacture thin-walled parts with high quality.
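The kind of response-surface analysis described above can be sketched in a few lines: a quadratic polynomial in laser power, scanning speed and layer thickness is fitted to a measured response (e.g., track width), and the fitted surface is then searched for a parameter set that meets a target. The design points, responses and target value below are invented for illustration and are not taken from the study.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical design points: laser power (W), scan speed (mm/s), layer thickness (mm)
X = np.array([
    [150, 400, 0.02], [150, 800, 0.02], [200, 400, 0.02], [200, 800, 0.02],
    [150, 400, 0.04], [150, 800, 0.04], [200, 400, 0.04], [200, 800, 0.04],
    [175, 600, 0.03], [175, 600, 0.03],          # centre points
])
# Hypothetical measured track widths (mm) for those runs
y = np.array([0.21, 0.16, 0.26, 0.19, 0.23, 0.17, 0.28, 0.21, 0.22, 0.22])

# Second-order (quadratic) response surface, as in a central composite design analysis
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)

# Predict the response at a candidate parameter set and do a crude grid search
candidate = np.array([[180, 500, 0.03]])
print("predicted width:", model.predict(candidate)[0])

grid = np.array([[p, v, t] for p in np.linspace(150, 200, 11)
                           for v in np.linspace(400, 800, 11)
                           for t in (0.02, 0.03, 0.04)])
best = grid[np.argmin(np.abs(model.predict(grid) - 0.20))]  # target width 0.20 mm
print("parameters closest to target:", best)
```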
Gradient Evolution-based Support Vector Machine Algorithm for Classification
NASA Astrophysics Data System (ADS)
Zulvia, Ferani E.; Kuo, R. J.
2018-03-01
This paper proposes a classification algorithm based on a support vector machine (SVM) and the gradient evolution (GE) algorithm. The SVM algorithm has been widely used in classification; however, its results are significantly influenced by its parameters. Therefore, this paper aims to propose an improved SVM algorithm which can find the best SVM parameters automatically. The proposed algorithm employs a GE algorithm to automatically determine the SVM parameters. The GE algorithm takes the role of a global optimizer in finding the best parameters, which are then used by the SVM algorithm. The proposed GE-SVM algorithm is verified using some benchmark datasets and compared with other metaheuristic-based SVM algorithms. The experimental results show that the proposed GE-SVM algorithm obtains better results than the other algorithms tested in this paper.
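As a rough illustration of letting a population-based optimizer choose the SVM parameters, the sketch below runs a simple mutation-and-selection search over C and gamma of an RBF-kernel SVM. It is not the gradient evolution operator of the paper, just a generic evolutionary loop under the same premise, using a standard toy dataset.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(log_c, log_gamma):
    # Cross-validated accuracy of an RBF SVM with the candidate parameters
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=5).mean()

# Initial population of (log10 C, log10 gamma) pairs
pop = rng.uniform(low=[-2, -4], high=[3, 1], size=(12, 2))
for generation in range(20):
    scores = np.array([fitness(c, g) for c, g in pop])
    parents = pop[np.argsort(scores)[-6:]]                           # keep the best half
    children = parents + rng.normal(scale=0.3, size=parents.shape)   # mutate
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c, g) for c, g in pop])]
print("best C = %.3g, gamma = %.3g" % (10.0 ** best[0], 10.0 ** best[1]))
```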
Construction of optimal 3-node plate bending triangles by templates
NASA Astrophysics Data System (ADS)
Felippa, C. A.; Militello, C.
A finite element template is a parametrized algebraic form that reduces to specific finite elements by setting numerical values to the free parameters. The present study concerns Kirchhoff Plate-Bending Triangles (KPT) with 3 nodes and 9 degrees of freedom. A 37-parameter template is constructed using the Assumed Natural Deviatoric Strain (ANDES). Specialization of this template includes well known elements such as DKT and HCT. The question addressed here is: can these parameters be selected to produce high performance elements? The study is carried out by staged application of constraints on the free parameters. The first stage produces element families satisfying invariance and aspect ratio insensitivity conditions. Application of energy balance constraints produces specific elements. The performance of such elements in benchmark tests is presently under study.
Towards accurate modeling of noncovalent interactions for protein rigidity analysis.
Fox, Naomi; Streinu, Ileana
2013-01-01
Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions. We showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu.
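The B-cubed measure used above to compare two cluster decompositions can be written compactly: for every item one looks at the overlap between its cluster in decomposition A and its cluster in decomposition B, and averages per-item precision and recall. The sketch below, with made-up cluster labels, only illustrates this averaging; it is not KINARI's benchmarking code.

```python
from collections import defaultdict

def bcubed(clusters_a, clusters_b):
    """B-cubed precision, recall and F for two clusterings.

    clusters_a, clusters_b : dicts mapping item -> cluster label,
    defined over the same set of items (A treated as the system, B as the reference).
    """
    # Invert the label maps so we can look up the members of each cluster
    members_a, members_b = defaultdict(set), defaultdict(set)
    for item, lab in clusters_a.items():
        members_a[lab].add(item)
    for item, lab in clusters_b.items():
        members_b[lab].add(item)

    precision = recall = 0.0
    for item in clusters_a:
        ca = members_a[clusters_a[item]]   # items sharing this item's cluster in A
        cb = members_b[clusters_b[item]]   # ... and in B
        overlap = len(ca & cb)
        precision += overlap / len(ca)
        recall += overlap / len(cb)
    n = len(clusters_a)
    p, r = precision / n, recall / n
    return p, r, 2 * p * r / (p + r)

# Toy example: two rigid-cluster decompositions of six atoms
a = {"A1": 0, "A2": 0, "A3": 0, "A4": 1, "A5": 1, "A6": 2}
b = {"A1": 0, "A2": 0, "A3": 1, "A4": 1, "A5": 1, "A6": 1}
print(bcubed(a, b))
```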
Towards accurate modeling of noncovalent interactions for protein rigidity analysis
2013-01-01
Background Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. Results To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. Conclusion To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions. We showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu. PMID:24564209
NASA Astrophysics Data System (ADS)
Viereck, R. A.; Azeem, S. I.
2017-12-01
One of the goals of the National Space Weather Action Plan is to establish extreme event benchmarks. These benchmarks are estimates of environmental parameters that impact technologies and systems during extreme space weather events. Quantitative assessment of the anticipated conditions during these extreme space weather events will enable operators and users of affected technologies to develop plans for mitigating space weather risks and improve preparedness. The ionosphere is one of the most important regions of space because so many applications either depend on ionospheric space weather for their operation (HF communication, over-the-horizon radars) or can be deleteriously affected by ionospheric conditions (e.g., GNSS navigation and timing, UHF satellite communications, synthetic aperture radar, HF communications). Since the processes that influence the ionosphere vary over time scales from seconds to years, it continues to be a challenge to adequately predict its behavior in many circumstances. Estimates with large uncertainties, in excess of 100%, may result in operators of impacted technologies over- or under-preparing for such events. The goal of the next phase of the benchmarking activity is to reduce these uncertainties. In this presentation, we will focus on the sources of uncertainty in the ionospheric response to extreme geomagnetic storms. We will then discuss the research efforts required to better understand the underlying processes of ionospheric variability and how the uncertainties in the ionospheric response to extreme space weather could be reduced and the estimates improved.
Nonlinear viscoplasticity in ASPECT: benchmarking and applications to subduction
NASA Astrophysics Data System (ADS)
Glerum, Anne; Thieulot, Cedric; Fraters, Menno; Blom, Constantijn; Spakman, Wim
2018-03-01
ASPECT (Advanced Solver for Problems in Earth's ConvecTion) is a massively parallel finite element code originally designed for modeling thermal convection in the mantle with a Newtonian rheology. The code is characterized by modern numerical methods, high-performance parallelism and extensibility. This last characteristic is illustrated in this work: we have extended the use of ASPECT from global thermal convection modeling to upper-mantle-scale applications of subduction. Subduction modeling generally requires the tracking of multiple materials with different properties and with nonlinear viscous and viscoplastic rheologies. To this end, we implemented a frictional plasticity criterion that is combined with a viscous diffusion and dislocation creep rheology. Because ASPECT uses compositional fields to represent different materials, all material parameters are made dependent on a user-specified number of fields. The goal of this paper is primarily to describe and verify our implementations of complex, multi-material rheology by reproducing the results of four well-known two-dimensional benchmarks: the indentor benchmark, the brick experiment, the sandbox experiment and the slab detachment benchmark. Furthermore, we aim to provide hands-on examples for prospective users by demonstrating the use of multi-material viscoplasticity with three-dimensional, thermomechanical models of oceanic subduction, putting ASPECT on the map as a community code for high-resolution, nonlinear rheology subduction modeling.
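For readers unfamiliar with how a diffusion/dislocation creep rheology is combined with a frictional (Drucker-Prager-type) plasticity criterion, the sketch below shows one common recipe: harmonically average the two creep viscosities and cap the result with an effective "plastic" viscosity derived from the yield stress. The flow-law constants are placeholders, and the exact formulation in ASPECT differs in detail; this is a generic illustration, not the code's implementation.

```python
import numpy as np

R = 8.314  # gas constant, J/mol/K

def creep_viscosity(A, n, E, V, strain_rate, P, T):
    # General form for diffusion (n = 1) and dislocation (n > 1) creep
    return 0.5 * A ** (-1.0 / n) * strain_rate ** ((1.0 - n) / n) \
           * np.exp((E + P * V) / (n * R * T))

def effective_viscosity(strain_rate, P, T,
                        cohesion=20e6, friction_angle=np.deg2rad(30)):
    # Placeholder flow-law constants, order-of-magnitude only
    eta_diff = creep_viscosity(1e-9, 1.0, 3.0e5, 4e-6, strain_rate, P, T)
    eta_disl = creep_viscosity(1e-15, 3.5, 5.3e5, 1.4e-5, strain_rate, P, T)
    eta_creep = 1.0 / (1.0 / eta_diff + 1.0 / eta_disl)   # harmonic average

    # Drucker-Prager yield stress and the corresponding "plastic" viscosity
    yield_stress = cohesion * np.cos(friction_angle) + P * np.sin(friction_angle)
    eta_plastic = yield_stress / (2.0 * strain_rate)

    return min(eta_creep, eta_plastic)                    # plasticity caps creep

print("%.3e Pa s" % effective_viscosity(strain_rate=1e-15, P=1e9, T=1200.0))
```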
Pasler, Marlies; Kaas, Jochem; Perik, Thijs; Geuze, Job; Dreindl, Ralf; Künzler, Thomas; Wittkamper, Frits; Georg, Dietmar
2015-12-01
To systematically evaluate machine-specific quality assurance (QA) for volumetric modulated arc therapy (VMAT) based on log files by applying a dynamic benchmark plan. A VMAT benchmark plan was created and tested on 18 Elekta linacs (13 MLCi or MLCi2, 5 Agility) at 4 different institutions. Linac log files were analyzed and a delivery robustness index was introduced. For dosimetric measurements an ionization chamber array was used. Relative dose deviations were assessed by the mean gamma for each control point and compared to the log file evaluation. Fourteen linacs delivered the VMAT benchmark plan, while 4 linacs failed by consistently terminating the delivery. The mean leaf error (±1 SD) was 0.3±0.2 mm for all linacs. Large MLC maximum errors of up to 6.5 mm were observed at reversal positions. The delivery robustness index accounting for MLC position correction (0.8-1.0) correlated with delivery time (80-128 s) and depended on dose rate performance. Dosimetric evaluation indicated in general accurate plan reproducibility, with γ(mean)(±1 SD)=0.4±0.2 for 1 mm/1%. However, single-control-point analysis revealed larger deviations, which corresponded well to the log file analysis. The designed benchmark plan helped identify linac-related malfunctions in dynamic mode for VMAT. Log files serve as an important additional QA measure to understand and visualize dynamic linac parameters. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Bioelectrochemical Systems Workshop:Standardized Analyses, Design Benchmarks, and Reporting
2012-01-01
related to the exoelectrogenic biofilm activity, and to investigate whether the community structure is a function of design and operational parameters...where should biofilm samples be collected? The most prevalent methods of community characterization in BES studies have entailed phylogenetic ...of function associated with this genetic marker, and in methods that involve polymerase chain reaction (PCR) amplification the quantitative
A Benchmark Study on Error Assessment and Quality Control of CCS Reads Derived from the PacBio RS
Jiao, Xiaoli; Zheng, Xin; Ma, Liang; Kutty, Geetha; Gogineni, Emile; Sun, Qiang; Sherman, Brad T.; Hu, Xiaojun; Jones, Kristine; Raley, Castle; Tran, Bao; Munroe, David J.; Stephens, Robert; Liang, Dun; Imamichi, Tomozumi; Kovacs, Joseph A.; Lempicki, Richard A.; Huang, Da Wei
2013-01-01
PacBio RS, a newly emerging third-generation DNA sequencing platform, is based on a real-time, single-molecule, nano-nitch sequencing technology that can generate very long reads (up to 20-kb) in contrast to the shorter reads produced by the first and second generation sequencing technologies. As a new platform, it is important to assess the sequencing error rate, as well as the quality control (QC) parameters associated with the PacBio sequence data. In this study, a mixture of 10 prior known, closely related DNA amplicons were sequenced using the PacBio RS sequencing platform. After aligning Circular Consensus Sequence (CCS) reads derived from the above sequencing experiment to the known reference sequences, we found that the median error rate was 2.5% without read QC, and improved to 1.3% with an SVM based multi-parameter QC method. In addition, a De Novo assembly was used as a downstream application to evaluate the effects of different QC approaches. This benchmark study indicates that even though CCS reads are post error-corrected it is still necessary to perform appropriate QC on CCS reads in order to produce successful downstream bioinformatics analytical results. PMID:24179701
A Benchmark Study on Error Assessment and Quality Control of CCS Reads Derived from the PacBio RS.
Jiao, Xiaoli; Zheng, Xin; Ma, Liang; Kutty, Geetha; Gogineni, Emile; Sun, Qiang; Sherman, Brad T; Hu, Xiaojun; Jones, Kristine; Raley, Castle; Tran, Bao; Munroe, David J; Stephens, Robert; Liang, Dun; Imamichi, Tomozumi; Kovacs, Joseph A; Lempicki, Richard A; Huang, Da Wei
2013-07-31
PacBio RS, a newly emerging third-generation DNA sequencing platform, is based on a real-time, single-molecule, nano-nitch sequencing technology that can generate very long reads (up to 20-kb) in contrast to the shorter reads produced by the first and second generation sequencing technologies. As a new platform, it is important to assess the sequencing error rate, as well as the quality control (QC) parameters associated with the PacBio sequence data. In this study, a mixture of 10 prior known, closely related DNA amplicons were sequenced using the PacBio RS sequencing platform. After aligning Circular Consensus Sequence (CCS) reads derived from the above sequencing experiment to the known reference sequences, we found that the median error rate was 2.5% without read QC, and improved to 1.3% with an SVM based multi-parameter QC method. In addition, a De Novo assembly was used as a downstream application to evaluate the effects of different QC approaches. This benchmark study indicates that even though CCS reads are post error-corrected it is still necessary to perform appropriate QC on CCS reads in order to produce successful downstream bioinformatics analytical results.
2017-01-01
Computational scientists have designed many useful algorithms by exploring a biological process or imitating natural evolution. These algorithms can be used to solve engineering optimization problems. Inspired by changes in the state of matter, we propose a novel optimization algorithm called the differential cloud particles evolution algorithm based on a data-driven mechanism (CPDD). In the proposed algorithm, the optimization process is divided into two stages, namely, a fluid stage and a solid stage. The algorithm carries out a strategy of integrating global exploration with local exploitation in the fluid stage, while local exploitation is carried out mainly in the solid stage. The quality of the solution and the efficiency of the search are influenced greatly by the control parameters. Therefore, the data-driven mechanism is designed for obtaining better control parameters to ensure good performance on numerical benchmark problems. In order to verify the effectiveness of CPDD, numerical experiments are carried out on all the CEC2014 contest benchmark functions. Finally, two application problems of artificial neural networks are examined. The experimental results show that CPDD is competitive with respect to eight other state-of-the-art intelligent optimization algorithms. PMID:28761438
Omori, Satoshi; Kitao, Akio
2013-06-01
We propose a fast clustering and reranking method, CyClus, for protein-protein docking decoys. This method enables comprehensive clustering of whole decoy sets generated by rigid-body docking, using a cylindrical approximation of the protein-protein interface and hierarchical clustering procedures. We demonstrate the clustering and reranking of 54,000 decoy structures generated by ZDOCK for each complex within a few minutes. After parameter tuning for the test set in ZDOCK benchmark 2.0 with the ZDOCK and ZRANK scoring functions, blind tests for the incremental data in ZDOCK benchmarks 3.0 and 4.0 were conducted. CyClus successfully generated smaller subsets of decoys containing near-native decoys. For example, the number of decoys required to create subsets containing near-native decoys with 80% probability was reduced to 22%-50% of the number required in the original ZDOCK. Although specific ZDOCK and ZRANK results were demonstrated, the CyClus algorithm was designed to be more general and can be applied to a wide range of decoys and scoring functions by adjusting just two parameters, p and T. CyClus results were also compared to those from ClusPro. Copyright © 2013 Wiley Periodicals, Inc.
Show me the data: advances in multi-model benchmarking, assimilation, and forecasting
NASA Astrophysics Data System (ADS)
Dietze, M.; Raiho, A.; Fer, I.; Cowdery, E.; Kooper, R.; Kelly, R.; Shiklomanov, A. N.; Desai, A. R.; Simkins, J.; Gardella, A.; Serbin, S.
2016-12-01
Researchers want their data to inform carbon cycle predictions, but there are considerable bottlenecks between data collection and the use of data to calibrate and validate earth system models and inform predictions. This talk highlights recent advancements in the PEcAn project aimed at making it easier for individual researchers to confront models with their own data: (1) The development of an easily extensible site-scale benchmarking system aimed at ensuring that models capture process rather than just reproducing pattern; (2) Efficient emulator-based Bayesian parameter data assimilation to constrain model parameters; (3) A novel, generalized approach to ensemble data assimilation to estimate carbon pools and fluxes and quantify process error; (4) Automated processing and downscaling of CMIP climate scenarios to support forecasts that include driver uncertainty; (5) A large expansion in the number of models supported, with new tools for conducting multi-model and multi-site analyses; and (6) A network-based architecture that allows analyses to be shared with model developers and other collaborators. Application of these methods is illustrated with data across a wide range of time scales, from eddy-covariance to forest inventories to tree rings to paleoecological pollen proxies.
A study on the use of Gumbel approximation with the Bernoulli spatial scan statistic.
Read, S; Bath, P A; Willett, P; Maheswaran, R
2013-08-30
The Bernoulli version of the spatial scan statistic is a well established method of detecting localised spatial clusters in binary labelled point data, a typical application being the epidemiological case-control study. A recent study suggests the inferential accuracy of several versions of the spatial scan statistic (principally the Poisson version) can be improved, at little computational cost, by using the Gumbel distribution, a method now available in SaTScan(TM) (www.satscan.org). We study in detail the effect of this technique when applied to the Bernoulli version and demonstrate that it is highly effective, albeit with some increase in false alarm rates at certain significance thresholds. We explain how this increase is due to the discrete nature of the Bernoulli spatial scan statistic and demonstrate that it can affect even small p-values. Despite this, we argue that the Gumbel method is actually preferable for very small p-values. Furthermore, we extend previous research by running benchmark trials on 12 000 synthetic datasets, thus demonstrating that the overall detection capability of the Bernoulli version (i.e. ratio of power to false alarm rate) is not noticeably affected by the use of the Gumbel method. We also provide an example application of the Gumbel method using data on hospital admissions for chronic obstructive pulmonary disease. Copyright © 2013 John Wiley & Sons, Ltd.
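The Gumbel technique referred to above amounts to fitting a Gumbel distribution to the maximum scan statistics obtained from Monte Carlo replications and reading the p-value of the observed statistic from the fitted tail rather than from the empirical rank. A generic sketch of this idea (not SaTScan's implementation, and with a stand-in for the actual scan statistic) is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in for the maximum log-likelihood-ratio scan statistic of one replication;
# in practice this would come from re-running the Bernoulli scan on permuted labels.
def simulated_max_statistic():
    return np.max(rng.chisquare(df=1, size=200)) / 2.0

n_replications = 999
null_maxima = np.array([simulated_max_statistic() for _ in range(n_replications)])
observed = 9.2   # hypothetical observed maximum statistic

# Standard Monte Carlo p-value based on the empirical rank
p_rank = (1 + np.sum(null_maxima >= observed)) / (n_replications + 1)

# Gumbel-based p-value: fit location/scale to the replicated maxima, use the tail
loc, scale = stats.gumbel_r.fit(null_maxima)
p_gumbel = stats.gumbel_r.sf(observed, loc=loc, scale=scale)

print("rank-based p = %.4f, Gumbel-based p = %.2e" % (p_rank, p_gumbel))
```

The advantage noted in the abstract shows up for very small p-values, where the rank-based estimate is limited by the number of replications while the fitted tail can resolve values far below 1/(n_replications + 1).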
3D face recognition based on multiple keypoint descriptors and sparse representation.
Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei
2014-01-01
Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm.
Debes, Anders J; Aggarwal, Rajesh; Balasundaram, Indran; Jacobsen, Morten B J
2012-06-01
Surgical training programs are now including simulators as training tools for teaching laparoscopic surgery. The aim of this study was to develop a standardized, graduated, and evidence-based curriculum for the newly developed D-box (D-box Medical, Lier, Norway) for training basic laparoscopic skills. Eighteen interns with no laparoscopic experience completed a training program on the D-box consisting of 8 sessions of 5 tasks with assessment on a sixth task. Performance was measured by the use of 3-dimensional electromagnetic tracking of hand movements, path length, and time taken. Ten experienced surgeons (>100 laparoscopic surgeries, median 250) were recruited for establishing benchmark criteria. Significant learning curves were obtained for all construct valid parameters for tasks 4 (P < .005) and 5 (P < .005) and reached plateau levels between the fifth and sixth session. Within the 8 sessions of this study, between 50% and 89% of the interns reached benchmark criteria on tasks 4 and 5. Benchmark criteria and an evidence-based curriculum have been developed for the D-box. The curriculum is aimed at training and assessing surgical novices in basic laparoscopic skills. Copyright © 2012 Elsevier Inc. All rights reserved.
TRACE/PARCS analysis of the OECD/NEA Oskarshamn-2 BWR stability benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kozlowski, T.; Downar, T.; Xu, Y.
2012-07-01
On February 25, 1999, the Oskarshamn-2 NPP experienced a stability event which culminated in diverging power oscillations with a decay ratio of about 1.4. The event was successfully modeled by the TRACE/PARCS coupled code system, and further analysis of the event is described in this paper. The results show very good agreement with the plant data, capturing the entire behavior of the transient including the onset of instability, growth of the oscillations (decay ratio) and oscillation frequency. This provides confidence in the prediction of other parameters which are not available from the plant records. The event provides coupled code validation for a challenging BWR stability event, which involves the accurate simulation of neutron kinetics (NK), thermal-hydraulics (TH), and TH/NK coupling. The success of this work has demonstrated the ability of the 3-D coupled systems code TRACE/PARCS to capture the complex behavior of BWR stability events. The problem was released as an international OECD/NEA benchmark, and it is the first benchmark based on measured plant data for a stability event with a DR greater than one. Interested participants are invited to contact the authors for more information. (authors)
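The decay ratio quoted for the event is the ratio of successive oscillation amplitudes of the power signal; a value above 1 means the oscillation grows. A minimal way to estimate it from a sampled signal, assuming a reasonably clean dominant oscillation, is sketched below; plant analyses typically use more robust autocorrelation- or model-based estimators.

```python
import numpy as np
from scipy.signal import find_peaks

def decay_ratio(signal):
    """Estimate the decay ratio from successive peak amplitudes of an oscillation."""
    baseline = np.mean(signal)
    peaks, _ = find_peaks(signal - baseline)
    amplitudes = (signal - baseline)[peaks]
    if len(amplitudes) < 2:
        raise ValueError("need at least two oscillation peaks")
    # Average ratio of each peak amplitude to the previous one
    return np.mean(amplitudes[1:] / amplitudes[:-1])

# Synthetic growing oscillation with a decay ratio of ~1.4 per cycle at 0.5 Hz
t = np.arange(0, 20, 0.02)
f = 0.5
power = np.exp(np.log(1.4) * f * t) * np.sin(2 * np.pi * f * t)
print("estimated decay ratio: %.2f" % decay_ratio(power))
```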
Technologies of polytechnic education in global benchmark higher education institutions
NASA Astrophysics Data System (ADS)
Kurushina, V. A.; Kurushina, E. V.; Zemenkova, M. Y.
2018-05-01
The Russian polytechnic education system is going through a sequence of transformations that started with the introduction of bachelor's and master's degrees in higher education in place of the previous “specialist” qualification. The next stage of reform in Russian polytechnic education should bring growth in the quality of the teaching and learning experience, which can be achieved by accumulating the best educational practices of world-class universities using the benchmarking method. This paper gives an overview of some major distinctive features of a foreign benchmark higher education institution and a Russian university of polytechnic profile. The parameters that allowed the authors to select the foreign institution for comparison include the scope of the educational profile, industrial specialization, connections with the leading regional corporations, size of the city and number of students. When considering the possibilities of using relevant higher education practices of the world level, the authors emphasize the importance of the formation of a new mentality of an engineer, the role of computer technologies in engineering education, the provision of licensed software for the educational process which exceeds the level of a regional Russian university, and successful staff technologies (e.g., inviting “guest” lecturers or having 2-3 lecturers per course).
Benchmark fragment-based 1H, 13C, 15N and 17O chemical shift predictions in molecular crystals†
Hartman, Joshua D.; Kudla, Ryan A.; Day, Graeme M.; Mueller, Leonard J.; Beran, Gregory J. O.
2016-01-01
The performance of fragment-based ab initio 1H, 13C, 15N and 17O chemical shift predictions is assessed against experimental NMR chemical shift data in four benchmark sets of molecular crystals. Employing a variety of commonly used density functionals (PBE0, B3LYP, TPSSh, OPBE, PBE, TPSS), we explore the relative performance of cluster, two-body fragment, and combined cluster/fragment models. The hybrid density functionals (PBE0, B3LYP and TPSSh) generally out-perform their generalized gradient approximation (GGA)-based counterparts. 1H, 13C, 15N, and 17O isotropic chemical shifts can be predicted with root-mean-square errors of 0.3, 1.5, 4.2, and 9.8 ppm, respectively, using a computationally inexpensive electrostatically embedded two-body PBE0 fragment model. Oxygen chemical shieldings prove particularly sensitive to local many-body effects, and using a combined cluster/fragment model instead of the simple two-body fragment model decreases the root-mean-square errors to 7.6 ppm. These fragment-based model errors compare favorably with GIPAW PBE ones of 0.4, 2.2, 5.4, and 7.2 ppm for the same 1H, 13C, 15N, and 17O test sets. Using these benchmark calculations, a set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts are provided and their robustness assessed using statistical cross-validation. We demonstrate the utility of these approaches and the reported scaling parameters on applications to 9-tert-butyl anthracene, several histidine co-crystals, benzoic acid and the C-nitrosoarene SnCl2(CH3)2(NODMA)2. PMID:27431490
Hartman, Joshua D; Kudla, Ryan A; Day, Graeme M; Mueller, Leonard J; Beran, Gregory J O
2016-08-21
The performance of fragment-based ab initio(1)H, (13)C, (15)N and (17)O chemical shift predictions is assessed against experimental NMR chemical shift data in four benchmark sets of molecular crystals. Employing a variety of commonly used density functionals (PBE0, B3LYP, TPSSh, OPBE, PBE, TPSS), we explore the relative performance of cluster, two-body fragment, and combined cluster/fragment models. The hybrid density functionals (PBE0, B3LYP and TPSSh) generally out-perform their generalized gradient approximation (GGA)-based counterparts. (1)H, (13)C, (15)N, and (17)O isotropic chemical shifts can be predicted with root-mean-square errors of 0.3, 1.5, 4.2, and 9.8 ppm, respectively, using a computationally inexpensive electrostatically embedded two-body PBE0 fragment model. Oxygen chemical shieldings prove particularly sensitive to local many-body effects, and using a combined cluster/fragment model instead of the simple two-body fragment model decreases the root-mean-square errors to 7.6 ppm. These fragment-based model errors compare favorably with GIPAW PBE ones of 0.4, 2.2, 5.4, and 7.2 ppm for the same (1)H, (13)C, (15)N, and (17)O test sets. Using these benchmark calculations, a set of recommended linear regression parameters for mapping between calculated chemical shieldings and observed chemical shifts are provided and their robustness assessed using statistical cross-validation. We demonstrate the utility of these approaches and the reported scaling parameters on applications to 9-tert-butyl anthracene, several histidine co-crystals, benzoic acid and the C-nitrosoarene SnCl2(CH3)2(NODMA)2.
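The linear regression referred to in both records maps computed isotropic shieldings σ to observed shifts δ, typically as δ ≈ a·σ + b with the slope ideally close to -1. The sketch below, using invented numbers rather than the benchmark data, shows the fit and a leave-one-out cross-validation of the resulting RMS error, analogous in spirit to the statistical cross-validation mentioned in the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical calculated isotropic shieldings (ppm) and experimental shifts (ppm)
sigma_calc = np.array([[31.2], [29.8], [27.5], [26.1], [24.9], [23.4], [21.8]])
delta_exp = np.array([0.9, 2.1, 4.6, 5.8, 7.1, 8.4, 10.0])

# delta ~ a * sigma + b; the slope a is expected to be close to -1
reg = LinearRegression().fit(sigma_calc, delta_exp)
print("slope a = %.3f, intercept b = %.2f ppm" % (reg.coef_[0], reg.intercept_))

# Leave-one-out cross-validated RMS error as a robustness check on the mapping
pred = cross_val_predict(LinearRegression(), sigma_calc, delta_exp, cv=LeaveOneOut())
rmse = np.sqrt(np.mean((pred - delta_exp) ** 2))
print("LOO RMS error = %.2f ppm" % rmse)
```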
Wegrzyn, Julien; Roux, Jean-Paul; Loriau, Charlotte; Bonin, Nicolas; Pibarot, Vincent
2018-02-22
With a cementless femoral stem in total hip arthroplasty (THA), optimal filling of the proximal femoral metaphyseal volume (PFMV) and restoration of the extramedullary proximal femoral (PF) parameters (i.e., femoral offset (FO), neck length (FNL), and head height (FHH)) constitute key goals for optimal hip biomechanics, functional outcome, and THA survivorship. However, a mismatch between the PF anatomy and the implant geometry of the most widely implanted non-modular cementless femoral stem has been demonstrated in almost 30% of cases in a computed tomography (CT) study. Therefore, this anatomic study aimed to evaluate the relationship between the intra- and extramedullary PF parameters using three-dimensional CT reconstructions. One hundred fifty-one CT scans of healthy adult hips were obtained from 151 male Caucasian patients (mean age = 66 ± 11 years) undergoing lower-limb CT arteriography. Three-dimensional PF reconstructions and parameter measurements were performed using a corrected PF coronal plane, defined by the femoral neck and diaphyseal canal longitudinal axes, to avoid the influence of PF helitorsion and femoral neck version on the extramedullary PF parameters. Independently of the femoral neck-shaft angle, the PFMV was significantly and positively correlated with the FO, FNL, and FHH (r = 0.407 to 0.420; p < 0.0001). This study emphasized that three-dimensional PF geometry measurement in the corrected coronal plane of the femoral neck can be useful to determine and optimize the design of a non-modular cementless femoral stem. In particular, a continuous homothetic size progression of the intra- and extramedullary PF parameters should be achieved to ensure stem fixation and restore anatomic hip biomechanics.
Multipinhole SPECT helical scan parameters and imaging volume
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang
Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half, and the angular step size is about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of the RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.
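The reference step sizes described above follow from the Nyquist criterion: the axial and angular sampling intervals should not exceed half the expected spatial resolution, with the angular step evaluated as an arc length at the edge of the field of view. A back-of-the-envelope calculation under assumed numbers, not the authors' exact procedure, looks like this:

```python
import math

def helical_step_sizes(resolution_mm, fov_radius_mm):
    """Nyquist-based reference steps for a step-and-shoot helical SPECT scan."""
    axial_step_mm = resolution_mm / 2.0                 # sample at half the resolution
    # Angular step whose arc length at the edge of the FOV equals the axial step
    angular_step_deg = math.degrees(axial_step_mm / fov_radius_mm)
    return axial_step_mm, angular_step_deg

# Assumed system: ~2.5 mm reconstructed resolution, 30 mm transverse FOV radius
axial, angular = helical_step_sizes(resolution_mm=2.5, fov_radius_mm=30.0)
print("axial step ~ %.2f mm, angular step ~ %.1f degrees" % (axial, angular))
# The paper's finding suggests the optimum lay near half this axial step
# and roughly twice this angular step.
```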
Analysis of uncertainties in Monte Carlo simulated organ dose for chest CT
NASA Astrophysics Data System (ADS)
Muryn, John S.; Morgan, Ashraf G.; Segars, W. P.; Liptak, Chris L.; Dong, Frank F.; Primak, Andrew N.; Li, Xiang
2015-03-01
In Monte Carlo simulation of organ dose for a chest CT scan, many input parameters are required (e.g., half-value layer of the x-ray energy spectrum, effective beam width, and anatomical coverage of the scan). The input parameter values are provided by the manufacturer, measured experimentally, or determined based on typical clinical practices. The goal of this study was to assess the uncertainties in Monte Carlo simulated organ dose as a result of using input parameter values that deviate from the truth (clinical reality). Organ dose from a chest CT scan was simulated for a standard-size female phantom using a set of reference input parameter values (treated as the truth). To emulate the situation in which the input parameter values used by the researcher may deviate from the truth, additional simulations were performed in which errors were purposefully introduced into the input parameter values, the effects of which on organ dose per CTDIvol were analyzed. Our study showed that when errors in half value layer were within ± 0.5 mm Al, the errors in organ dose per CTDIvol were less than 6%. Errors in effective beam width of up to 3 mm had negligible effect (< 2.5%) on organ dose. In contrast, when the assumed anatomical center of the patient deviated from the true anatomical center by 5 cm, organ dose errors of up to 20% were introduced. Lastly, when the assumed extra scan length was longer by 4 cm than the true value, dose errors of up to 160% were found. The results answer the important question: to what level of accuracy each input parameter needs to be determined in order to obtain accurate organ dose results.
A Scanning Quantum Cryogenic Atom Microscope
NASA Astrophysics Data System (ADS)
Lev, Benjamin
Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity, high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room to cryogenic temperatures with unprecedented DC-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (2 μm), or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly one hundred points with an effective field sensitivity of 600 pT/√Hz per point during the same time as a point-by-point scanner would take to measure these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly two orders of magnitude improvement in magnetic flux sensitivity (down to 10⁻⁶ Φ0/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are for the first time carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns, using samples that may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge transport images at temperatures from room temperature to 4 K in unconventional superconductors and topologically nontrivial materials.
Scanning Quantum Cryogenic Atom Microscope
NASA Astrophysics Data System (ADS)
Yang, Fan; Kollár, Alicia J.; Taylor, Stephen F.; Turner, Richard W.; Lev, Benjamin L.
2017-03-01
Microscopic imaging of local magnetic fields provides a window into the organizing principles of complex and technologically relevant condensed-matter materials. However, a wide variety of intriguing strongly correlated and topologically nontrivial materials exhibit poorly understood phenomena outside the detection capability of state-of-the-art high-sensitivity high-resolution scanning probe magnetometers. We introduce a quantum-noise-limited scanning probe magnetometer that can operate from room-to-cryogenic temperatures with unprecedented dc-field sensitivity and micron-scale resolution. The Scanning Quantum Cryogenic Atom Microscope (SQCRAMscope) employs a magnetically levitated atomic Bose-Einstein condensate (BEC), thereby providing immunity to conductive and blackbody radiative heating. The SQCRAMscope has a field sensitivity of 1.4 nT per resolution-limited point (approximately 2 μm) or 6 nT/√Hz per point at its duty cycle. Compared to point-by-point sensors, the long length of the BEC provides a naturally parallel measurement, allowing one to measure nearly 100 points with an effective field sensitivity of 600 pT/√Hz for each point during the same time as a point-by-point scanner measures these points sequentially. Moreover, it has a noise floor of 300 pT and provides nearly 2 orders of magnitude improvement in magnetic flux sensitivity (down to 10⁻⁶ Φ0/√Hz) over previous atomic probe magnetometers capable of scanning near samples. These capabilities are carefully benchmarked by imaging magnetic fields arising from microfabricated wire patterns in a system where samples may be scanned, cryogenically cooled, and easily exchanged. We anticipate the SQCRAMscope will provide charge-transport images at temperatures from room temperature to 4 K in unconventional superconductors and topologically nontrivial materials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui; Sumner, Tyler S.
2016-04-17
An advanced system analysis tool, SAM, is being developed for fast-running, improved-fidelity, whole-plant transient analyses at Argonne National Laboratory under DOE-NE's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents the benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE's Advanced Reactor Technology (ART) program. The code predictions of the major primary coolant system parameters are compared with the test results. Additionally, the SAS4A/SASSYS-1 code simulation results are also included for a code-to-code comparison.
NASA Astrophysics Data System (ADS)
Liang, Gui-Yun; Wei, Hui-Gang; Yuan, Da-Wei; Wang, Fei-Lu; Peng, Ji-Min; Zhong, Jia-Yong; Zhu, Xiao-Long; Schmidt, Mike; Zschornack, Günter; Ma, Xin-Wen; Zhao, Gang
2018-01-01
Spectra are fundamental observational data for astronomical research, but their interpretation depends strongly on theoretical models with many fundamental parameters obtained from theoretical calculations. Different models give different insights for understanding a specific object. Hence, laboratory benchmarks for these theoretical models become necessary. An electron beam ion trap is an ideal facility for spectroscopic benchmarks because its electron density and temperature conditions are similar to those of astrophysical plasmas in stellar coronae, supernova remnants and so on. In this paper, we describe the performance of a small electron beam ion trap/source facility installed at the National Astronomical Observatories, Chinese Academy of Sciences. We present some preliminary experimental results on X-ray emission, ion production, the ionization process of trapped ions, as well as the effects of charge exchange on the ionization.
Radiochemical analyses of surface water from U.S. Geological Survey hydrologic bench-mark stations
Janzer, V.J.; Saindon, L.G.
1972-01-01
The U.S. Geological Survey's program for collecting and analyzing surface-water samples for radiochemical constituents at hydrologic bench-mark stations is described. Analytical methods used during the study are described briefly and data obtained from 55 of the network stations in the United States during the period from 1967 to 1971 are given in tabular form. Concentration values are reported for dissolved uranium, radium, gross alpha and gross beta radioactivity. Values are also given for suspended gross alpha radioactivity in terms of natural uranium. Suspended gross beta radioactivity is expressed both as the equilibrium mixture of strontium-90/yttrium-90 and as cesium-137. Other physical parameters reported which describe the samples include the concentrations of dissolved and suspended solids, the water temperature and stream discharge at the time of the sample collection.
Analysis of energy-based algorithms for RNA secondary structure prediction
2012-01-01
Background RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. Results We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Conclusions Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. 
with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets. PMID:22296803
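The bootstrap percentile method mentioned in this abstract (and in the duplicate record below) can be reproduced in a few lines: resample the per-RNA F-measures with replacement, recompute the mean for each resample, and take the 2.5th/97.5th percentiles as the confidence interval. The numbers below are synthetic placeholders, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for per-RNA F-measure accuracies of one prediction algorithm
f_measures = rng.beta(a=7, b=3, size=2000)   # ~2000 RNAs, mean around 0.7

def bootstrap_percentile_ci(values, n_boot=10000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    means = np.array([
        rng.choice(values, size=values.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo, hi = bootstrap_percentile_ci(f_measures)
print("mean F = %.3f, 95%% CI = [%.3f, %.3f]" % (f_measures.mean(), lo, hi))
# With ~2000 RNAs the interval is narrow (on the order of +/- 1%), consistent with
# the paper's point that large datasets give reliable average-accuracy estimates.
```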
Analysis of energy-based algorithms for RNA secondary structure prediction.
Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H
2012-02-01
RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.
Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint
NASA Astrophysics Data System (ADS)
Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.
2017-09-01
To obtain full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans is still a challenging problem. In this work, we propose an automatic, marker-free method for fast, coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: voxelization of the point cloud, approximation of planar patches, matching of corresponding patches, and estimation of the transformation parameters. In the voxelization step, the point cloud of each scan is organized in a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel by an approximated plane function and select those patches resembling planar surfaces. Afterwards, to match corresponding patches, a RANSAC-based strategy is applied: among all the planar patches of a scan, we randomly select a set of three planar patches in order to build a coordinate frame via their normal vectors and their intersection points, and the transformation parameters between scans are calculated from these two coordinate frames. The set of planar patches whose transformation parameters yield the largest number of coplanar patches is identified as the optimal candidate for estimating the correct transformation parameters. The experimental results using TLS datasets of different scenes reveal that our proposed method is both effective and efficient for the coarse registration task. In particular, for fast orientation between scans, our proposed method achieves a registration error of less than around 2 degrees on the testing datasets and is much more efficient than the classical baseline methods.
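The core of the RANSAC step described above is estimating a rigid transform from a triplet of corresponding planar patches: the three normal vectors define the rotation, and the planes' intersection points give the translation. The fragment below shows only the rotation part via an SVD (Kabsch-style) alignment of normals, under the assumption that the correspondences are already known; it is a simplification for illustration, not the authors' full pipeline.

```python
import numpy as np

def rotation_from_plane_normals(normals_src, normals_dst):
    """Best-fit rotation aligning three (or more) unit normals of matched planes.

    normals_src, normals_dst : (N, 3) arrays of corresponding plane normals.
    """
    H = normals_src.T @ normals_dst
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    return Vt.T @ D @ U.T

# Toy check: rotate three orthogonal plane normals by a known rotation and recover it
def rot_z(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0,          0,         1]])

src = np.eye(3)                      # normals of three mutually orthogonal planes
R_true = rot_z(np.deg2rad(25))
dst = (R_true @ src.T).T
R_est = rotation_from_plane_normals(src, dst)
print(np.allclose(R_est, R_true))    # True
```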
Goetzel, Ron Z; D'Arco, Malinda; Thomas, Jordana; Wang, Degang; Tabrizi, Maryam J; Roemer, Enid Chung; Prasad, Aishwarya; Yarborough, Charles M
2015-09-01
To determine the prevalence and incidence of low back pain (LBP) among workers in the aerospace and defense industry and in a specific company. Claims and demographic data from the Truven Health MarketScan normative database, representing more than 1 million workers, were drawn from a group of 18 US benchmark companies and compared with one particular company, Lockheed Martin Corporation. The prevalence of LBP in the MarketScan normative group was 15.6% in the final study year (2012), whereas the incidence of new cases was 7.2% and 7.3% in 2011 and 2012, respectively. Compared with the normative group, the company's prevalence and incidence rates were lower. Women and older workers were more likely to experience LBP than men and younger workers. The analysis was used to inform the company's leadership about the health burden of the condition and to evaluate alternative treatment options to prevent the incidence and reduce the prevalence of clinical back pain among workers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunner, D.; LaBombard, B.; Ochoukov, R.
2013-03-15
A new Retarding Field Analyzer (RFA) head has been created for the outer-midplane scanning probe system on the Alcator C-Mod tokamak. The new probe head contains back-to-back retarding field analyzers aligned with the local magnetic field. One faces 'upstream' into the field-aligned plasma flow and the other faces 'downstream' away from the flow. The RFA was created primarily to benchmark ion temperature measurements of an ion sensitive probe; it may also be used to interrogate electrons. However, its construction is robust enough to be used to measure ion and electron temperatures up to the last-closed flux surface in C-Mod. An RFA probe of identical design has been attached to the side of a limiter to explore direct changes to the boundary plasma due to lower hybrid heating and current drive. Design of the high heat flux (>100 MW/m²) handling probe and initial results are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, L; Huang, S; Kang, M
Purpose: Eclipse proton Monte Carlo AcurosPT 13.7 was commissioned and experimentally validated for an IBA dedicated PBS nozzle in water. Topas 1.3 was used to isolate the cause of differences in output and penumbra between simulation and experiment. Methods: The spot profiles were measured in air at five locations using Lynx. A PTW-34070 Bragg peak chamber (Freiburg, Germany) was used to collect the relative integral Bragg peak for 15 proton energies from 100 MeV to 225 MeV. The phase space parameters (σx, σθ, ρxθ), the number of protons per MU, the energy spread and the calculated mean energy provided by AcurosPT were identically implemented in Topas. The absolute dose, profiles and field size factors measured using ionization chamber arrays were compared with both AcurosPT and Topas. Results: The beam spot size, σx, and the angular spread, σθ, in air were both energy-dependent: in particular, the spot size in air at isocentre ranged from 2.8 to 5.3 mm, and the angular spread ranged from 2.7 mrad to 6 mrad. The number of protons per MU increased from ∼9E7 at 100 MeV to ∼1.5E8 at 225 MeV. Both AcurosPT and Topas agree with experiment within a 2 mm penumbra difference or 3% dose difference for scenarios including central-axis depth dose and profiles at two depths in multi-spot square fields, from 40 to 200 mm, for all the investigated single-energy and multi-energy beams, indicating a clinically acceptable source model and radiation transport algorithm in water. Conclusion: By comparing measured data and Topas simulations using the same source model, AcurosPT 13.7 was validated in water within a 2 mm penumbra difference or 3% dose difference. Benchmarks against an independent Monte Carlo code are recommended to study the agreement in output, field size factors and penumbra. This project is partially supported by the Varian grant under the master agreement between University of Pennsylvania and Varian.
Geometry characteristics modeling and process optimization in coaxial laser inside wire cladding
NASA Astrophysics Data System (ADS)
Shi, Jianjun; Zhu, Ping; Fu, Geyan; Shi, Shihong
2018-05-01
The coaxial laser inside-wire cladding method is very promising as it has very high efficiency and a consistent interaction between the laser and the wire. In this paper, the energy and mass conservation laws and a regression algorithm are used together to establish mathematical models of the relationship between the layer geometry characteristics (width, height and cross-section area) and the process parameters (laser power, scanning velocity and wire feeding speed). Over the selected parameter ranges, the values predicted by the models are compared with the experimentally measured results; minor errors exist, but the predictions follow the same trends. From the models, it is seen that the width of the cladding layer is proportional to both the laser power and the wire feeding speed, while it first increases and then decreases with increasing scanning velocity. The height of the cladding layer is proportional to the scanning velocity and feeding speed and inversely proportional to the laser power. The cross-section area increases with increasing feeding speed and decreasing scanning velocity. Using the mathematical models, the geometry characteristics of the cladding layer can be predicted from known process parameters. Conversely, the process parameters can be calculated from targeted geometry characteristics. The models are also suitable for the multi-layer forming process. Using the optimized process parameters calculated from the models, a 45 mm-high thin-wall part is formed with smooth side surfaces.
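A minimal regression sketch in the spirit of the geometry models above is given below. A power-law form for the layer width is assumed purely for illustration; the paper's models, which combine the conservation laws with regression, are not reproduced here.

```python
# Illustrative fit of layer width W as a function of laser power P, scanning
# velocity v and wire feeding speed f, assuming W = c * P^a * v^b * f^g and
# fitting by linear least squares in log space. Data values are invented.
import numpy as np

def fit_power_law(P, v, f, W):
    X = np.column_stack([np.ones_like(P), np.log(P), np.log(v), np.log(f)])
    coef, *_ = np.linalg.lstsq(X, np.log(W), rcond=None)
    log_c, a, b, g = coef
    return np.exp(log_c), a, b, g

def predict_width(c, a, b, g, P, v, f):
    return c * P**a * v**b * f**g

# Made-up process data: power [W], scan velocity [mm/s], feed speed [mm/s], width [mm].
P = np.array([800., 900., 1000., 1100.])
v = np.array([4., 5., 6., 7.])
f = np.array([10., 12., 14., 16.])
W = np.array([2.1, 2.2, 2.3, 2.4])
c, a, b, g = fit_power_law(P, v, f, W)
print(predict_width(c, a, b, g, 950., 5.5, 13.))
```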
Descriptor Fingerprints and Their Application to White Wine Clustering and Discrimination.
NASA Astrophysics Data System (ADS)
Bangov, I. P.; Moskovkina, M.; Stojanov, B. P.
2018-03-01
This study continues the attempt to use statistical processing for large-scale analytical data. A group of 3898 white wines, each with 11 analytical laboratory benchmarks, was analyzed by a fingerprint similarity search in order to group them into separate clusters. A characterization of the wine quality in each individual cluster was carried out according to the individual laboratory parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oterkus, Selda; Madenci, Erdogan, E-mail: madenci@email.arizona.edu; Agwai, Abigail
This study presents the derivation of ordinary state-based peridynamic heat conduction equation based on the Lagrangian formalism. The peridynamic heat conduction parameters are related to those of the classical theory. An explicit time stepping scheme is adopted for numerical solution of various benchmark problems with known solutions. It paves the way for applying the peridynamic theory to other physical fields such as neutronic diffusion and electrical potential distribution.
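As an illustration of the explicit time-stepping idea, the sketch below advances a simple one-dimensional bond-based peridynamic heat conduction equation with forward Euler. The kernel form and the microconductivity calibration are assumptions for illustration; the paper derives an ordinary state-based formulation, which is not reproduced here.

```python
# Illustrative 1D explicit scheme for a bond-based peridynamic heat conduction
# equation of the form rho*c*dT_i/dt = sum_j kappa*(T_j - T_i)/|xi| * V_j.
# The kernel and the value of kappa (microconductivity) are assumptions; a real
# calibration matches kappa to the classical conductivity.
import numpy as np

def peridynamic_heat_1d(T0, dx, dt, steps, horizon, kappa, rho_c):
    T = T0.copy()
    n = len(T)
    m = int(round(horizon / dx))               # neighbours within the horizon
    V = dx                                     # nodal "volume" in 1D
    for _ in range(steps):
        dT = np.zeros_like(T)
        for i in range(n):
            for j in range(max(0, i - m), min(n, i + m + 1)):
                if j == i:
                    continue
                xi = abs(j - i) * dx
                dT[i] += kappa * (T[j] - T[i]) / xi * V
        T += dt * dT / rho_c                   # forward-Euler (explicit) update
    return T

T0 = np.zeros(101); T0[45:56] = 1.0            # initial hot spot
T = peridynamic_heat_1d(T0, dx=0.01, dt=1e-5, steps=200,
                        horizon=0.03, kappa=1.0, rho_c=1.0)
```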
Stellar Parameters, Chemical composition and Models of chemical evolution
NASA Astrophysics Data System (ADS)
Mishenina, T.; Pignatari, M.; Côté, B.; Thielemann, F.-K.; Soubiran, C.; Basak, N.; Gorbaneva, T.; Korotin, S. A.; Kovtyukh, V. V.; Wehmeyer, B.; Bisterzo, S.; Travaglio, C.; Gibson, B. K.; Jordan, C.; Paul, A.; Ritter, C.; Herwig, F.
2018-04-01
We present an in-depth study of metal-poor stars, based on high-resolution spectra combined with newly released astrometric data from Gaia, with special attention to observational uncertainties. The results are compared to those of other studies, including Gaia benchmark stars. Chemical evolution models are discussed, highlighting a few puzzles that still affect our understanding of stellar nucleosynthesis and of the evolution of our Galaxy.
Strömgren survey for asteroseismology and galactic archaeology: Let the saga begin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casagrande, L.; Dotter, A.; Milone, A. P.
2014-06-01
Asteroseismology has the capability of precisely determining stellar properties that would otherwise be inaccessible, such as radii, masses, and thus ages of stars. When coupling this information with classical determinations of stellar parameters, such as metallicities, effective temperatures, and angular diameters, powerful new diagnostics for Galactic studies can be obtained. The ongoing Strömgren survey for Asteroseismology and Galactic Archaeology has the goal of transforming the Kepler field into a new benchmark for Galactic studies, similar to the solar neighborhood. Here we present the first results from a stripe centered at a Galactic longitude of 74° and covering latitudes from about 8° to 20°, which includes almost 1000 K giants with seismic information and the benchmark open cluster NGC 6819. We describe the coupling of classical and seismic parameters, the accuracy as well as the caveats of the derived effective temperatures, metallicities, distances, surface gravities, masses, and radii. Confidence in the achieved precision is corroborated by the detection of the first and secondary clumps in a population of field stars with a ratio of 2 to 1 and by the negligible scatter in the seismic distances among NGC 6819 member stars. An assessment of the reliability of stellar parameters in the Kepler Input Catalog is also performed, and the impact of our results for population studies in the Milky Way is discussed, along with the importance of an all-sky Strömgren survey.
Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V
2009-01-01
The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1, when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predictions, considering the ASM1 bio-kinetic parameters and influent fractions as input uncertainties while the Effluent Quality Index (EQI) and the Operating Cost Index (OCI) are focused on as model outputs. The resulting Monte Carlo simulations are presented using descriptive statistics indicating the degree of uncertainty in the predicted EQI and OCI. Next, the Standard Regression Coefficients (SRC) method is used for sensitivity analysis to identify which input parameters influence the uncertainty in the EQI predictions the most. The results show that control strategies including an ammonium (S(NH)) controller reduce uncertainty in both overall pollution removal and effluent total Kjeldahl nitrogen. Also, control strategies with an external carbon source reduce the effluent nitrate (S(NO)) uncertainty while increasing both their economic cost and its variability as a trade-off. Finally, the maximum specific autotrophic growth rate (μ_A) causes most of the variance in the effluent for all the evaluated control strategies. The influence of denitrification-related parameters, e.g. η_g (anoxic growth rate correction factor) and η_h (anoxic hydrolysis rate correction factor), becomes less important when a S(NO) controller manipulating an external carbon source addition is implemented.
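The SRC step described above can be sketched as a linear regression on the Monte Carlo sample with the slopes rescaled by the input and output standard deviations; the sketch below uses generic parameter names and synthetic data rather than the BSM1/ASM1 setup.

```python
# Standard Regression Coefficients from a Monte Carlo sample: fit y ~ X by
# least squares and scale each slope by sigma_x / sigma_y. Data are synthetic.
import numpy as np

def src(X, y):
    """X: (n_samples, n_params) Monte Carlo inputs; y: model output (e.g. EQI)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:] * X.std(axis=0) / y.std()   # standardized regression coefficients

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # e.g. samples of three kinetic parameters
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500)
print(src(X, y))                                # roughly [0.96, 0.24, 0.0]
```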
ff14ipq: A Self-Consistent Force Field for Condensed-Phase Simulations of Proteins
2015-01-01
We present the ff14ipq force field, implementing the previously published IPolQ charge set for simulations of complete proteins. Minor modifications to the charge derivation scheme and to the van der Waals interactions between polar atoms are introduced. Torsion parameters are developed through a generational learning approach, based on gas-phase MP2/cc-pVTZ single-point energies computed for structures optimized by the force field itself rather than by the quantum benchmark. In this manner, we sacrifice information about the true quantum minima in order to ensure that the force field maintains optimal agreement with the MP2/cc-pVTZ benchmark for the ensembles it will actually produce in simulations. A means of making the gas-phase torsion parameters compatible with solution-phase IPolQ charges is presented. The ff14ipq model is an alternative to ff99SB and other Amber force fields for protein simulations in programs that accommodate pair-specific Lennard–Jones combining rules. The force field gives strong performance on α-helical and β-sheet oligopeptides as well as globular proteins over microsecond time scale simulations, although it has not yet been tested in conjunction with lipid and nucleic acid models. We show how our choices in parameter development influence the resulting force field and how other choices that may have appeared reasonable would actually have led to poorer results. The tools we developed may also aid in the development of future fixed-charge and even polarizable biomolecular force fields. PMID:25328495
NASA Astrophysics Data System (ADS)
Wilusz, D. C.; Maxwell, R. M.; Buda, A. R.; Ball, W. P.; Harman, C. J.
2016-12-01
The catchment transit-time distribution (TTD) is the time-varying, probabilistic distribution of water travel times through a watershed. The TTD is increasingly recognized as a useful descriptor of a catchment's flow and transport processes. However, TTDs are temporally complex and cannot be observed directly at watershed scale. Estimates of TTDs depend on available environmental tracers (such as stable water isotopes) and an assumed model whose parameters can be inverted from tracer data. All tracers have limitations though, such as (typically) short periods of observation or non-conservative behavior. As a result, models that faithfully simulate tracer observations may nonetheless yield TTD estimates with significant errors at certain times and water ages, conditioned on the tracer data available and the model structure. Recent advances have shown that time-varying catchment TTDs can be parsimoniously modeled by the lumped parameter rank StorAge Selection (rSAS) model, in which an rSAS function relates the distribution of water ages in outflows to the composition of age-ranked water in storage. Like other TTD models, rSAS is calibrated and evaluated against environmental tracer data, and the relative influence of tracer-dependent and model-dependent error on its TTD estimates is poorly understood. The purpose of this study is to benchmark the ability of different rSAS formulations to simulate TTDs in a complex, synthetic watershed where the lumped model can be calibrated and directly compared to a virtually "true" TTD. This experimental design allows for isolation of model-dependent error from tracer-dependent error. The integrated hydrologic model ParFlow with SLIM-FAST particle tracking code is used to simulate the watershed and its true TTD. To add field intelligence, the ParFlow model is populated with over forty years of hydrometric and physiographic data from the WE-38 subwatershed of the USDA's Mahantango Creek experimental catchment in PA, USA. The results are intended to give practical insight into tradeoffs between rSAS model structure and skill, and define a new performance benchmark to which other transit time models can be compared.
NASA Astrophysics Data System (ADS)
junfeng, Li; zhengying, Wei
2017-11-01
Process optimization and microstructure characterization of Ti6Al4V manufactured by selective laser melting (SLM) were investigated in this article. The relative density of samples fabricated by SLM is influenced by the main process parameters, including laser power, scan speed and hatch distance. The volume energy density (VED) was defined to account for the combined effect of the main process parameters on the relative density. The results showed that the relative density changed with VED and that the optimized process interval is 55-60 J/mm³. Furthermore, comparing laser power, scan speed and hatch distance by the Taguchi method, it was found that scan speed had the greatest effect on the relative density. Comparing the cross-sectional microstructures of the specimens at different scanning speeds, it was found that the microstructures had similar characteristics: all consisted of needle-like martensite distributed in the β matrix, but with increasing scanning speed the microstructure became finer, while lower scan speeds led to coarsening of the microstructure.
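For reference, the volume energy density used in such SLM studies is commonly defined from the main process parameters as shown below; including the layer thickness t is the usual convention and is assumed here, since the abstract does not spell out the definition.

```latex
\[
  \mathrm{VED} \;=\; \frac{P}{v\,h\,t}
  \qquad \left[\frac{\mathrm{J}}{\mathrm{mm}^{3}}\right],
\]
where $P$ is the laser power, $v$ the scan speed, $h$ the hatch distance and
$t$ the layer thickness.
```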
NASA Astrophysics Data System (ADS)
Stopyra, Wojciech; Kurzac, Jarosław; Gruber, Konrad; Kurzynowski, Tomasz; Chlebus, Edward
2016-12-01
SLM technology allows the production of fully functional objects from metal and ceramic powders, with a density of more than 99.9%. The quality of items manufactured by SLM is affected by more than 100 parameters, which can be divided into fixed and variable. Fixed parameters are those whose values are defined before the process and maintained within an appropriate range during the process, e.g. the chemical composition and morphology of the powder, the oxygen level in the working chamber, and the heating temperature of the substrate plate. In SLM technology, five parameters are variable, and an optimal set of them allows parts to be produced without defects (pores, cracks) and at an acceptable speed. These parameters are: laser power, distance between points, time of exposure, distance between lines and layer thickness. To develop optimal parameters, thin-wall or single-track experiments are performed; to select the best sets, these are narrowed to three parameters: laser power, exposure time and distance between points. In this paper, the effect of laser power on the penetration depth and geometry of a scanned single track is shown. In this experiment, a titanium (grade 2) substrate plate was used and scanned by a fibre laser of 1064 nm wavelength. For each track, the width, height and penetration depth of the laser beam were measured.
Hetzel, Juergen; Boeckeler, Michael; Horger, Marius; Ehab, Ahmed; Kloth, Christopher; Wagner, Robert; Freitag, Lutz; Slebos, Dirk-Jan; Lewis, Richard Alexander; Haentschel, Maik
2017-01-01
Lung volume reduction (LVR) improves breathing mechanics by reducing hyperinflation. Lobar selection usually focuses on choosing the most destroyed emphysematous lobes as seen on an inspiratory CT scan. However, it has never been shown to what extent these densitometric CT parameters predict the least deflation of an individual lobe during expiration. The addition of expiratory CT analysis allows measurement of the extent of lobar air trapping and could therefore provide additional functional information for the choice of potential treatment targets. To determine lobar vital capacity/lobar total capacity (LVC/LTC) as a functional parameter for lobar air trapping using inspiratory and expiratory CT scans. To compare lobar selection by LVC/LTC with the established morphological CT density parameters. 36 patients referred for endoscopic LVR were studied. LVC/LTC, defined as the delta volume over the maximum volume of a lobe, was calculated using inspiratory and expiratory CT scans. The CT morphological parameters of mean lung density (MLD), low attenuation volume (LAV), and the 15th percentile of Hounsfield units (15%P) were determined on an inspiratory CT scan for each lobe. We compared and correlated LVC/LTC with MLD, LAV, and 15%P. There was a weak correlation between the functional parameter LVC/LTC and all inspiratory densitometric parameters. Target lobe selection using the lowest lobar deflation (lowest LVC/LTC) correlated with target lobe selection based on the lowest MLD in 18 patients (50.0%), with the highest LAV in 13 patients (36.1%), and with the lowest 15%P in 12 patients (33.3%). CT-based measurement of deflation (LVC/LTC) as a functional parameter correlates weakly with all densitometric CT parameters on a lobar level. Therefore, morphological criteria based on inspiratory CT densitometry only partially reflect the deflation of particular lung lobes, and may be of limited value as a sole predictor for target lobe selection in LVR.
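The lobar deflation metric described above reduces to a simple ratio per lobe; the sketch below uses hypothetical lobar volumes and selects the target lobe with the lowest LVC/LTC (least deflation, i.e. most air trapping).

```python
# LVC/LTC per lobe = (inspiratory lobar volume - expiratory lobar volume) /
# inspiratory lobar volume. Volumes below are hypothetical example values in mL.
def lvc_ltc(v_insp, v_exp):
    return (v_insp - v_exp) / v_insp

lobes = {"RUL": (1200, 1100), "RLL": (1500, 1000), "LUL": (1300, 1250)}
ratios = {lobe: lvc_ltc(vi, ve) for lobe, (vi, ve) in lobes.items()}
target = min(ratios, key=ratios.get)      # lowest LVC/LTC = least deflation
print(ratios, target)
```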
Construction and physical parameters of multiscan whole-body scanner (in Czech)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silar, J.; Smidova, M.; Vacek, J.
The construction of a commercial whole-body scanner, which permits scanning in the form of a photographic picture, and the distribution in the human body of the activity of gamma emitters having an energy of up to 1.3 MeV, at relatively short intervals, are described. The results are presented of the measurement of the physical parameters affecting the scanning possibilities of a Model No. 602 Multiscan, produced by Cyclotron Corporation. The resulting radiometric parameters are listed. The results of the measurement show that the device can be used in whole-body scanning of the distribution of the activity of gamma emitters applied in routine procedures, such as 100 μCi of ⁸⁵Sr, with a position resolution of 25 to 50 mm in a tissue layer at a height of up to 100 mm above the Multiscan table. (INIS)
Aziz, Farooq; Bano, Khizra; Siddique, Ahmad Hassan; Bajwa, Sadia Zafar; Nazir, Aalia; Munawar, Anam; Shaheen, Ayesha; Saeed, Madiha; Afzal, Muhammad; Iqbal, M Zubair; Wu, Aiguo; Khan, Waheed S
2018-01-09
We report a novel strategy for the fabrication of lecithin-coated gold nanoflowers (GNFs) via a single-step design for CT imaging applications. Field-emission electron microscopy confirmed the flower-like morphology of the as-synthesized nanostructures. Furthermore, they show an absorption peak in the near-infrared (NIR) region at λmax = 690 nm. Different concentrations of GNFs were tested as a contrast agent in CT scans at a tube voltage of 135 kV and a tube current of 350 mA. These results were compared with the same amount of iodine at the same CT scan parameters. The results of the in vitro CT study show that GNFs have good contrast enhancement properties, whereas the in vivo CT study in rabbits shows that GNFs enhance the CT image clearly at 135 kV compared to iodine. Cytotoxicity was studied, and the blood profile showed a minor increase in white blood cells and haemoglobin, and a decrease in red blood cells and platelets.
Wallwiener, Markus; Brucker, Sara Y; Wallwiener, Diethelm
2012-06-01
This review summarizes the rationale for the creation of breast centres and discusses the studies conducted in Germany to obtain proof of principle for a voluntary, external benchmarking programme and proof of concept for third-party dual certification of breast centres and their mandatory quality management systems to the German Cancer Society (DKG) and German Society of Senology (DGS) Requirements of Breast Centres and ISO 9001 or similar. In addition, we report the most recent data on benchmarking and certification of breast centres in Germany. Review and summary of pertinent publications. Literature searches to identify additional relevant studies. Updates from the DKG/DGS programmes. Improvements in surrogate parameters as represented by structural and process quality indicators suggest that outcome quality is improving. The voluntary benchmarking programme has gained wide acceptance among DKG/DGS-certified breast centres. This is evidenced by early results from one of the largest studies in multidisciplinary cancer services research, initiated by the DKG and DGS to implement certified breast centres. The goal of establishing a nationwide network of certified breast centres in Germany can be considered largely achieved. Nonetheless the network still needs to be improved, and there is potential for optimization along the chain of care from mammography screening, interventional diagnosis and treatment through to follow-up. Specialization, guideline-concordant procedures as well as certification and recertification of breast centres remain essential to achieve further improvements in quality of breast cancer care and to stabilize and enhance the nationwide provision of high-quality breast cancer care.
Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall
2014-01-01
Background: Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Methods: Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non–critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. Results: In all, 76 hospitals have uploaded at least 12 months of data for non–critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. Conclusions: This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. PMID:24876426
Maynard, Greg; Schnipper, Jeffrey Lawrence; Messler, Jordan; Ramos, Pedro; Kulasa, Kristen; Nolan, Ann; Rogers, Kendall
2014-07-01
Insulin is a top source of adverse drug events in the hospital, and glycemic control is a focus of improvement efforts across the country. Yet, the majority of hospitals have no data to gauge their performance on glycemic control, hypoglycemia rates, or hypoglycemic management. Current tools to outsource glucometrics reports are limited in availability or function. Society of Hospital Medicine (SHM) faculty designed and implemented a web-based data and reporting center that calculates glucometrics on blood glucose data files securely uploaded by users. Unit labels, care type (critical care, non-critical care), and unit type (eg, medical, surgical, mixed, pediatrics) are defined on upload allowing for robust, flexible reporting. Reports for any date range, care type, unit type, or any combination of units are available on demand for review or downloading into a variety of file formats. Four reports with supporting graphics depict glycemic control, hypoglycemia, and hypoglycemia management by patient day or patient stay. Benchmarking and performance ranking reports are generated periodically for all hospitals in the database. In all, 76 hospitals have uploaded at least 12 months of data for non-critical care areas and 67 sites have uploaded critical care data. Critical care benchmarking reveals wide variability in performance. Some hospitals achieve top quartile performance in both glycemic control and hypoglycemia parameters. This new web-based glucometrics data and reporting tool allows hospitals to track their performance with a flexible reporting system, and provides them with external benchmarking. Tools like this help to establish standardized glucometrics and performance standards. © 2014 Diabetes Technology Society.
New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)
NASA Astrophysics Data System (ADS)
Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.
2017-09-01
Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing, that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a `direct' measurement found by adjustment of the original ENDF format file.
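The uncertainty propagation that such sensitivity tools perform is, at its core, the standard sandwich rule; the sketch below illustrates it with an invented three-group sensitivity vector and covariance matrix.

```python
# Sandwich-rule propagation of nuclear data uncertainty to k_eff:
# var(k_eff) = S^T C S, with S the sensitivity vector and C the relative
# covariance matrix. The 3-group numbers are invented for illustration only.
import numpy as np

S = np.array([0.12, 0.30, 0.05])                  # dk/k per unit relative data change
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 9.0, 0.5],
              [0.0, 0.5, 1.0]]) * 1e-4            # relative covariance matrix
var_k = S @ C @ S
print(f"propagated k_eff uncertainty: {np.sqrt(var_k):.4%}")
```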
NASA Astrophysics Data System (ADS)
Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.
2008-12-01
Over the last few decades, rapid improvement of computer capabilities has allowed impact cratering to be modeled with increasing complexity and realism, and has paved the way for a new era of numerical modeling of the impact process, including full, three-dimensional (3D) simulations. When properly benchmarked and validated against observation, computer models offer a powerful tool for understanding the mechanics of impact crater formation. This work presents results from the first phase of a project to benchmark and validate shock codes. A variety of 2D and 3D codes were used in this study, from commercial products like AUTODYN, to codes developed within the scientific community like SOVA, SPH, ZEUS-MP, iSALE, and codes developed at U.S. National Laboratories like CTH, SAGE/RAGE, and ALE3D. Benchmark calculations of shock wave propagation in aluminum-on-aluminum impacts were performed to examine the agreement between codes for simple idealized problems. The benchmark simulations show that variability in code results is to be expected due to differences in the underlying solution algorithm of each code, artificial stability parameters, spatial and temporal resolution, and material models. Overall, the inter-code variability in peak shock pressure as a function of distance is around 10 to 20%. In general, if the impactor is resolved by at least 20 cells across its radius, the underestimation of peak shock pressure due to spatial resolution is less than 10%. In addition to the benchmark tests, three validation tests were performed to examine the ability of the codes to reproduce the time evolution of crater radius and depth observed in vertical laboratory impacts in water and two well-characterized aluminum alloys. Results from these calculations are in good agreement with experiments. There appears to be a general tendency of shock physics codes to underestimate the radius of the forming crater. Overall, the discrepancy between the model and experiment results is between 10 and 20%, similar to the inter-code variability.
NASA Astrophysics Data System (ADS)
Ryan, D. A.; Liu, Y. Q.; Li, L.; Kirk, A.; Dunne, M.; Dudson, B.; Piovesan, P.; Suttrop, W.; Willensdorfer, M.; the ASDEX Upgrade Team; the EUROfusion MST1 Team
2017-02-01
Edge localised modes (ELMs) are a repetitive MHD instability, which may be mitigated or suppressed by the application of resonant magnetic perturbations (RMPs). In tokamaks which have an upper and lower set of RMP coils, the applied spectrum of the RMPs can be tuned for optimal ELM control by introducing a toroidal phase difference ΔΦ between the upper and lower rows. The magnitude of the outermost resonant component of the RMP field |b_res^1| (other proposed criteria are discussed herein) has been shown experimentally to correlate with mitigated ELM frequency, and to be controllable by ΔΦ (Kirk et al 2013 Plasma Phys. Control. Fusion 53 043007). This suggests that ELM mitigation may be optimised by choosing ΔΦ = ΔΦ_opt, such that |b_res^1| is maximised. However, it is currently impractical to compute ΔΦ_opt in advance of experiments. This motivates the present computational study of the dependence of the optimal coil phase difference ΔΦ_opt on the global plasma parameters β_N and q_95, in order to produce a simple parametrisation of ΔΦ_opt. In this work, a set of tokamak equilibria spanning a wide range of (β_N, q_95) is produced, based on a reference equilibrium from an ASDEX Upgrade experiment. The MARS-F code (Liu et al 2000 Phys. Plasmas 7 3681) is then used to compute ΔΦ_opt across this equilibrium set for toroidal mode numbers n = 1-4, both for the vacuum field and including the plasma response. The computational scan finds that for fixed plasma boundary shape, rotation profiles and toroidal mode number n, ΔΦ_opt is a smoothly varying function of (β_N, q_95). A 2D quadratic function in (β_N, q_95) is used to parametrise ΔΦ_opt, such that for given (β_N, q_95) and n, an estimate of ΔΦ_opt may be made without requiring a plasma response computation. To quantify the uncertainty of the parametrisation relative to a plasma response computation, ΔΦ_opt is also computed using MARS-F for a set of benchmarking points. Each benchmarking point consists of a distinct free-boundary equilibrium reconstructed from an ASDEX Upgrade RMP experiment, and a set of experimental kinetic profiles and coil currents. Comparing the MARS-F predictions of ΔΦ_opt for these benchmarking points to the predictions of the 2D quadratic shows that, relative to a plasma response computation with MARS-F, the 2D quadratic is accurate to 26.5° for n = 1 and 20.6° for n = 2. Potential sources of uncertainty are assessed.
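The 2D quadratic parametrisation described above amounts to an ordinary least-squares fit in the six monomials of (β_N, q_95); the sketch below uses invented sample points in place of the MARS-F scan results.

```python
# Fit dPhi_opt(beta_N, q95) = c0 + c1*b + c2*q + c3*b^2 + c4*q^2 + c5*b*q
# by linear least squares. The sample points are invented for illustration.
import numpy as np

def fit_quadratic(beta_n, q95, dphi_opt):
    X = np.column_stack([np.ones_like(beta_n), beta_n, q95,
                         beta_n**2, q95**2, beta_n * q95])
    coef, *_ = np.linalg.lstsq(X, dphi_opt, rcond=None)
    return coef

def predict(coef, b, q):
    return coef @ np.array([1.0, b, q, b**2, q**2, b * q])

beta_n = np.array([1.5, 2.0, 2.5, 3.0, 2.0, 2.5])
q95    = np.array([3.5, 3.5, 4.0, 4.5, 5.0, 5.5])
dphi   = np.array([60., 75., 90., 110., 95., 120.])   # degrees, illustrative
coef = fit_quadratic(beta_n, q95, dphi)
print(predict(coef, 2.2, 4.2))
```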
Albert, Ryan J; McLaughlin, Christine; Falatko, Debra
2014-10-15
Fish hold effluent and the effluent produced from the cleaning of fish holds may contain organic material resulting from the degradation of seafood and cleaning products (e.g., soaps and detergents). This effluent is often discharged by vessels into near shore waters and, therefore, could have the potential to contribute to water pollution in bays and estuaries. We characterized effluent from commercial fishing vessels with holds containing refrigerated seawater, ice slurry, or chipped ice. Concentrations of trace heavy metals, wet chemistry parameters, and nutrients in effluent were compared to screening benchmarks to determine if there is a reasonable potential for effluent discharge to contribute to nonattainment of water quality standards. Most analytes (67%) exceeded their benchmark concentration and, therefore, may have the potential to pose risk to human health or the environment if discharges are in significant quantities or there are many vessels discharging in the same areas. Published by Elsevier Ltd.
ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms
NASA Astrophysics Data System (ADS)
Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François
2015-10-01
Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-Hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.
Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen
2017-02-21
To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluation of prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor and then yield the predicted results for the submitted query samples. All the aforementioned tedious jobs can be done automatically by the computer. Moreover, a multiprocessing technique was adopted to enhance the computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be run directly on Windows, Linux, and Unix.
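The five-step workflow described above can be illustrated with a generic machine-learning pipeline; the sketch below uses scikit-learn and a toy k-mer feature extractor purely for illustration, and is not the Pse-Analysis API, whose actual functions and options should be taken from its own documentation.

```python
# Generic sketch of the five-step workflow (NOT the Pse-Analysis API):
# feature extraction, parameter selection, training, cross validation, evaluation.
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

def kmer_features(seq, k=2, alphabet="ACGT"):
    """(1) toy feature extraction: k-mer counts of a DNA sequence."""
    kmers = [a + b for a in alphabet for b in alphabet]
    return np.array([sum(seq[i:i + k] == m for i in range(len(seq) - k + 1))
                     for m in kmers], dtype=float)

def build_predictor(benchmark_seqs, labels):
    X = np.array([kmer_features(s) for s in benchmark_seqs])
    y = np.array(labels)
    # (2) optimal parameter selection and (3) model training
    search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=3)
    search.fit(X, y)
    # (4) cross validation of the selected model, (5) quality evaluation
    cv_acc = cross_val_score(search.best_estimator_, X, y, cv=3).mean()
    return search.best_estimator_, cv_acc

model, acc = build_predictor(["ACGTACGTACGT", "TTTTACGTTTTT", "ACACACACACAC",
                              "GGGGTTTTCCCC", "ACGTACGTTTTT", "CCCCGGGGAAAA"],
                             [1, 0, 1, 0, 1, 0])
print(acc, model.predict([kmer_features("ACGTACGTACGA")]))
```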
Thermodynamic analyses of a biomass-coal co-gasification power generation system.
Yan, Linbo; Yue, Guangxi; He, Boshu
2016-04-01
A novel chemical looping power generation system is presented based on the biomass-coal co-gasification with steam. The effects of different key operation parameters including biomass mass fraction (Rb), steam to carbon mole ratio (Rsc), gasification temperature (Tg) and iron to fuel mole ratio (Rif) on the system performances like energy efficiency (ηe), total energy efficiency (ηte), exergy efficiency (ηex), total exergy efficiency (ηtex) and carbon capture rate (ηcc) are analyzed. A benchmark condition is set, under which ηte, ηtex and ηcc are found to be 39.9%, 37.6% and 96.0%, respectively. Furthermore, detailed energy Sankey diagram and exergy Grassmann diagram are drawn for the entire system operating under the benchmark condition. The energy and exergy efficiencies of the units composing the system are also predicted. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nema, Vijay; Pal, Sudhir Kumar
2013-01-01
This study was conducted to find the best-suited freely available software for modelling of proteins, using a few sample proteins. The proteins used ranged from small to large in size, with available crystal structures for the purpose of benchmarking. Key tools such as Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-v2, and Modweb were used for the comparison and model generation. The benchmarking process was carried out for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to identify the most suitable software. The parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. This comparative study showed that Phyre2 and Swiss-Model produce good models of both small and large proteins compared to the other screened software. The other software was also good but often not as efficient in providing full-length and properly folded structures.
Simulated annealing with probabilistic analysis for solving traveling salesman problems
NASA Astrophysics Data System (ADS)
Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Simulated Annealing (SA) is a widely used meta-heuristic inspired by the annealing process of recrystallization of metals; the efficiency of SA is therefore highly affected by the annealing schedule. In this paper, we present an empirical study to provide a comparable annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA, and we therefore propose the best-found annealing schedule based on the Post Hoc test. SA was tested on seven selected benchmark problems of symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically alongside benchmark solutions and a simple analysis to validate the quality of the solutions. Computational results show that the proposed annealing schedule provides good-quality solutions.
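A compact simulated-annealing implementation for symmetric TSP, with a 2-opt neighbourhood and a geometric cooling schedule, is sketched below; the schedule parameters are illustrative defaults, not the schedule recommended by the study.

```python
# Simulated annealing for symmetric TSP: 2-opt moves, Metropolis acceptance,
# geometric cooling. T0, alpha and the iteration counts are illustrative defaults.
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def anneal_tsp(dist, T0=100.0, alpha=0.95, iters_per_T=200, T_min=1e-3, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n)); rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    T, cur_len = T0, best_len
    while T > T_min:
        for _ in range(iters_per_T):
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
            cand_len = tour_length(cand, dist)
            if cand_len < cur_len or rng.random() < math.exp((cur_len - cand_len) / T):
                tour, cur_len = cand, cand_len
                if cur_len < best_len:
                    best, best_len = tour[:], cur_len
        T *= alpha                                                 # geometric cooling
    return best, best_len

# Tiny random symmetric instance as a smoke test.
rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(20)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(anneal_tsp(dist)[1])
```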
Modified reactive tabu search for the symmetric traveling salesman problems
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Hong, Pei-Yee; Ramli, Razamin; Khalid, Ruzelan
2013-09-01
Reactive tabu search (RTS) is an improved version of tabu search (TS) that dynamically adjusts the tabu list size based on how the search is performing. RTS thus avoids a disadvantage of TS, namely the tuning of the tabu list size parameter. In this paper, we propose a modified RTS approach for solving symmetric traveling salesman problems (TSP). The tabu list size of the proposed algorithm depends on the number of iterations in which the solutions do not override the aspiration level, in order to achieve a good balance between diversification and intensification. The proposed algorithm was tested on seven chosen benchmark problems of symmetric TSP. The performance of the proposed algorithm is compared with that of TS using empirical testing, benchmark solutions and a simple probabilistic analysis in order to validate the quality of the solutions. The computational results and comparisons show that the proposed algorithm provides better-quality solutions than TS.
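A simplified reactive tabu search for symmetric TSP is sketched below, with swap moves, an adaptive tabu tenure and an aspiration criterion; the adaptation rule shown (grow the tenure when previously visited tours recur, shrink it after a long run without repetitions) follows classic RTS and is not the paper's exact modification.

```python
# Simplified reactive tabu search for symmetric TSP (illustrative, not the
# paper's modified rule): sampled swap neighbourhood, adaptive tabu tenure,
# aspiration on the best-known tour length.
import random

def tour_length(tour, dist):
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def reactive_tabu_tsp(dist, iters=2000, samples=50, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n)); rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    tenure, tabu, visited, last_rep = 5, {}, set(), 0
    for it in range(iters):
        move, cand, cand_len = None, None, float("inf")
        for _ in range(samples):                         # sampled swap neighbourhood
            i, j = sorted(rng.sample(range(n), 2))
            t = tour[:]; t[i], t[j] = t[j], t[i]
            L = tour_length(t, dist)
            is_tabu = tabu.get((i, j), -1) >= it
            if (not is_tabu or L < best_len) and L < cand_len:   # aspiration override
                move, cand, cand_len = (i, j), t, L
        if cand is None:
            continue
        tour = cand
        tabu[move] = it + tenure                         # forbid reversing this swap
        key = tuple(tour)
        if key in visited:                               # repetition -> diversify
            tenure = min(tenure + 2, n)
            last_rep = it
        elif it - last_rep > 50:                         # long escape -> intensify
            tenure = max(1, tenure - 1)
        visited.add(key)
        if cand_len < best_len:
            best, best_len = tour[:], cand_len
    return best, best_len
```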
Basin-scale estimates of oceanic primary production by remote sensing - The North Atlantic
NASA Technical Reports Server (NTRS)
Platt, Trevor; Caverhill, Carla; Sathyendranath, Shubha
1991-01-01
The monthly averaged CZCS data for 1979 are used to estimate annual primary production at ocean basin scales in the North Atlantic. The principal supplementary data used were 873 vertical profiles of chlorophyll and 248 sets of parameters derived from photosynthesis-light experiments. Four different procedures were tested for calculation of primary production. The spectral model with nonuniform biomass was considered as the benchmark for comparison against the other three models. The less complete models gave results that differed by as much as 50 percent from the benchmark. Vertically uniform models tended to underestimate primary production by about 20 percent compared to the nonuniform models. At horizontal scale, the differences between spectral and nonspectral models were negligible. The linear correlation between biomass and estimated production was poor outside the tropics, suggesting caution against the indiscriminate use of biomass as a proxy variable for primary production.
Benchmarking reference services: step by step.
Buchanan, H S; Marshall, J G
1996-01-01
This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.
Dose responses in a normoxic polymethacrylic acid gel dosimeter using optimal CT scanning parameters
NASA Astrophysics Data System (ADS)
Cho, K. H.; Cho, S. J.; Lee, S.; Lee, S. H.; Min, C. K.; Kim, Y. H.; Moon, S. K.; Kim, E. S.; Chang, A. R.; Kwon, S. I.
2012-05-01
The dosimetric characteristics of normoxic polymethacrylic acid gels are investigated using optimal CT scanning parameters, and the possibility of their clinical application is also considered. The effects of the CT scanning parameters (tube voltage, tube current, scan time, slice thickness, field of view, and reconstruction algorithm) are experimentally investigated to determine the optimal parameters for minimizing the amount of noise in images obtained using the normoxic polymethacrylic acid gel. In addition, the dose sensitivity, dose response, accuracy, and reproducibility of the gel are evaluated. CT images are obtained using a head phantom fabricated for clinical applications. In addition, IMRT treatment planning is performed using a Tomotherapy radiation treatment planning system. A program for analyzing the results was produced using Visual C. A comparison between the treatment planning and the CT images of the irradiated gels is performed. The dose sensitivity is found to be 2.41 ± 0.04 H Gy⁻¹. The accuracies of the dose evaluation at doses of 2 Gy and 4 Gy are 3.0% and 2.6%, respectively, and their reproducibilities are 2.0% and 2.1%, respectively. In the comparison of the gel and the Tomotherapy planning, the pass rate of the γ-index, based on reference values of a 3% dose error and a 3 mm DTA, is 93.7%.
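The γ-index evaluation mentioned above combines a dose-difference and a distance-to-agreement criterion; a minimal one-dimensional sketch with the 3%/3 mm criteria and globally normalised dose is given below (real evaluations are 2D/3D).

```python
# 1D gamma index: gamma(x_ref) = min over evaluated points of
# sqrt((dx/DTA)^2 + (dD/(dd*D_max))^2); pass if gamma <= 1. Profiles are toys.
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    d_norm = d_ref.max()
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        g2 = ((x_eval - xr) / dta) ** 2 + ((d_eval - dr) / (dd * d_norm)) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.array(gammas)

x = np.linspace(0, 50, 101)                       # position [mm]
planned  = np.exp(-((x - 25) / 10) ** 2) * 2.0    # toy dose profiles [Gy]
measured = np.exp(-((x - 25.5) / 10) ** 2) * 2.04
g = gamma_1d(x, planned, x, measured)
print(f"pass rate: {np.mean(g <= 1.0):.1%}")
```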
Testaverde, Lorenzo; Perrone, Anna; Caporali, Laura; Ermini, Antonella; Izzo, Luciano; D'Angeli, Ilaria; Impara, Luca; Mazza, Dario; Izzo, Paolo; Marini, Mario
2011-06-01
To compare Computed Tomography (CT) and Magnetic Resonance (MR) features and their diagnostic potential in the assessment of Synovial Chondromatosis (SC) of the Temporo-Mandibular Joint (TMJ). Eight patients with symptoms and signs compatible with dysfunctional disorders of the TMJ underwent CT and MR scans. We considered the following parameters: soft tissue involvement (disk included), osteostructural alterations of the joints, loose bodies and intra-articular fluid. These parameters were evaluated separately by two radiologists in a double-blinded fashion and then, after agreement, a definitive assessment of the parameters was given. CT and MR findings were compared. Histopathological results showed metaplastic synovia in all patients and therefore confirmed the diagnosis of SC. MR performed better than CT in the evaluation of all parameters except the osteostructural alterations of the joints, which were estimated with more accuracy by CT. CT is excellent for defining the bony surfaces of the articular joints and inflammatory tissue, but it fails to detect loose bodies when these are not yet calcified. MR is therefore the gold standard when SC is suspected, since it can visualize loose bodies at an early stage and also evaluate the disk condition and any extra-articular tissue involvement. The use of T2-weighted images and contrast medium allows intra-articular fluid to be identified, its extent estimated, and discriminated from synovial tissue. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
Chen, Yuntian; Zhang, Yan; Femius Koenderink, A
2017-09-04
We study semi-analytically the light emission and absorption properties of arbitrary stratified photonic structures with embedded two-dimensional magnetoelectric point scattering lattices, as used in recent plasmon-enhanced LEDs and solar cells. By employing dyadic Green's function for the layered structure in combination with the Ewald lattice summation to deal with the particle lattice, we develop an efficient method to study the coupling between planar 2D scattering lattices of plasmonic, or metamaterial point particles, coupled to layered structures. Using the 'array scanning method' we deal with localized sources. Firstly, we apply our method to light emission enhancement of dipole emitters in slab waveguides, mediated by plasmonic lattices. We benchmark the array scanning method against a reciprocity-based approach to find that the calculated radiative rate enhancement in k-space below the light cone shows excellent agreement. Secondly, we apply our method to study absorption-enhancement in thin-film solar cells mediated by periodic Ag nanoparticle arrays. Lastly, we study the emission distribution in k-space of a coupled waveguide-lattice system. In particular, we explore the dark mode excitation on the plasmonic lattice using the so-called array scanning method. Our method could be useful for simulating a broad range of complex nanophotonic structures, i.e., metasurfaces, plasmon-enhanced light emitting systems and photovoltaics.
Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope.
Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T C
2015-10-01
Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.
Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope
NASA Astrophysics Data System (ADS)
Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T. C.
2015-10-01
Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.
Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.
2014-01-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P
2014-07-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation
Zhang, Lin; Ding, Zhixuan; Li, Hongyu; Shen, Ying; Lu, Jianwei
2014-01-01
Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to deal with the 3D face recognition problem by making use of multiple keypoint descriptors (MKD) and the sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted at first and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require the pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions and expressions. Its superiority over the other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases, Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm. PMID:24940876
The Gaia-ESO Survey Astrophysical Calibration
NASA Astrophysics Data System (ADS)
Pancino, E.; Gaia-ESO Survey Consortium
2016-05-01
The Gaia-ESO Survey is a wide field spectroscopic survey recently started with the FLAMES@VLT in Cerro Paranal, Chile. It will produce radial velocities more accurate than Gaia's for faint stars (down to V ≃ 18), and astrophysical parameters and abundances for approximately 100 000 stars, belonging to all Galactic populations. 300 nights were assigned in 5 years (with the last year subject to approval after a detailed report). In particular, to connect with other ongoing and planned spectroscopic surveys, a detailed calibration program — for the astrophysical parameters derivation — is planned, including well known clusters, Gaia benchmark stars, and special equatorial calibration fields designed for wide field/multifiber spectrographs.
NASA Technical Reports Server (NTRS)
Liu, Zhong; Heo, Gil
2015-01-01
Data quality (DQ) has many attributes or facets (e.g., errors, biases, systematic differences, uncertainties, benchmarks, false trends, false alarm ratio, etc.). The sources of DQ issues can be complicated (measurements, environmental conditions, surface types, algorithms, etc.) and difficult to identify, especially for multi-sensor and multi-satellite products with bias correction (TMPA, IMERG, etc.). Open questions include how to obtain DQ information quickly and easily, especially quantified information in a region of interest (ROI), beyond existing parameters (random error), the literature, and do-it-yourself analysis, and how to apply this knowledge in research and applications. Here, we focus on online systems for the integration of products and parameters, visualization and analysis, as well as the investigation and extraction of DQ information.
NASA Astrophysics Data System (ADS)
Mohamed, Najihah; Lutfi Amri Ramli, Ahmad; Majid, Ahmad Abd; Piah, Abd Rahni Mt
2017-09-01
A metaheuristic algorithm called Harmony Search (HS) is widely applied to parameter optimization in many areas. HS is a derivative-free real-parameter optimization algorithm that draws its inspiration from the musical improvisation process of searching for a perfect state of harmony. In this paper we propose a Modified Harmony Search (MHS) for solving optimization problems, which employs concepts from the genetic algorithm and particle swarm optimization for generating new solution vectors, enhancing the performance of the HS algorithm. The performances of MHS and HS are investigated on ten benchmark optimization problems in order to compare the efficiency of MHS in terms of final accuracy, convergence speed and robustness.
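For context, the basic Harmony Search improvisation step (harmony memory consideration, pitch adjustment, random selection) is sketched below on a simple sphere benchmark; the GA- and PSO-inspired modifications of MHS are not reproduced here.

```python
# Basic Harmony Search on a continuous sphere benchmark: HMCR controls memory
# consideration, PAR controls pitch adjustment, bw is the adjustment bandwidth.
import random

def harmony_search(f, dim, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000, seed=0):
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    cost = [f(x) for x in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                   # memory consideration
                v = hm[rng.randrange(hms)][d]
                if rng.random() < par:                # pitch adjustment
                    v += rng.uniform(-bw, bw) * (hi - lo)
            else:                                     # random selection
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        fn = f(new)
        worst = max(range(hms), key=lambda i: cost[i])
        if fn < cost[worst]:                          # replace the worst harmony
            hm[worst], cost[worst] = new, fn
    best = min(range(hms), key=lambda i: cost[i])
    return hm[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
print(harmony_search(sphere, dim=5, lo=-5.0, hi=5.0)[1])
```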
Small-amplitude acoustics in bulk granular media
NASA Astrophysics Data System (ADS)
Henann, David L.; Valenza, John J., II; Johnson, David L.; Kamrin, Ken
2013-10-01
We propose and validate a three-dimensional continuum modeling approach that predicts small-amplitude acoustic behavior of dense-packed granular media. The model is obtained through a joint experimental and finite-element study focused on the benchmark example of a vibrated container of grains. Using a three-parameter linear viscoelastic constitutive relation, our continuum model is shown to quantitatively predict the effective mass spectra in this geometry, even as geometric parameters for the environment are varied. Further, the model's predictions for the surface displacement field are validated mode-by-mode against experiment. A primary observation is the importance of the boundary condition between grains and the quasirigid walls.
NASA Technical Reports Server (NTRS)
Deryder, L. J.; Chiger, H. D.; Deryder, D. D.; Detweiler, K. N.; Dupree, R. L.; Gillespie, V. P.; Hall, J. B.; Heck, M. L.; Herrick, D. C.; Katzberg, S. J.
1989-01-01
The results of a NASA in-house team effort to develop a concept definition for a Commercially Developed Space Facility (CDSF) are presented. Science mission utilization definition scenarios are documented, the conceptual configuration is defined and its system performance parameters are qualified, benchmark operational scenarios are developed, space shuttle interface descriptions are provided, and development schedule activity is assessed with respect to the establishment of a proposed launch date.
NASA Astrophysics Data System (ADS)
Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom
2018-05-01
Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case, recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated on simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly fewer parameters than alternative approaches.
Resistive switching near electrode interfaces: Estimations by a current model
NASA Astrophysics Data System (ADS)
Schroeder, Herbert; Zurhelle, Alexander; Stemmer, Stefanie; Marchewka, Astrid; Waser, Rainer
2013-02-01
The growing resistive switching database is accompanied by many detailed mechanisms which often are pure hypotheses. Some of these suggested models can be verified by checking their predictions against the benchmarks of future memory cells. The valence change memory model assumes that the different resistances in the ON and OFF states are produced by changing the defect density profiles in a sheet near one working electrode during switching. The resulting READ current densities in the ON and OFF states were calculated using an appropriate simulation model with variation of several important defect and material parameters of the metal/insulator (oxide)/metal thin film stack, such as the defect density and changes of its profile in density and thickness, the height of the interface barrier, the dielectric permittivity, and the applied voltage. The results were compared to the benchmarks, and memory windows for the varied parameters can be defined: the required ON-state READ current density of 10^5 A/cm^2 can only be achieved for barriers smaller than 0.7 eV and defect densities larger than 3 × 10^20 cm^-3. The required current ratio of at least 10 between the ON and OFF states requires a defect density reduction of approximately an order of magnitude in a sheet several nanometers thick near the working electrode.
Parallel Ada benchmarks for the SVMS
NASA Technical Reports Server (NTRS)
Collard, Philippe E.
1990-01-01
The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through the tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures as well as to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.
Brandenburg, Marcus; Hahn, Gerd J
2018-06-01
Process industries typically involve complex manufacturing operations and thus require adequate decision support for aggregate production planning (APP). The need for powerful and efficient approaches to solve complex APP problems persists. Problem-specific solution approaches are advantageous compared to standardized approaches that are designed to provide basic decision support for a broad range of planning problems but inadequate to optimize under consideration of specific settings. This in turn calls for methods to compare different approaches regarding their computational performance and solution quality. In this paper, we present a benchmarking problem for APP in the chemical process industry. The presented problem focuses on (i) sustainable operations planning involving multiple alternative production modes/routings with specific production-related carbon emission and the social dimension of varying operating rates and (ii) integrated campaign planning with production mix/volume on the operational level. The mutual trade-offs between economic, environmental and social factors can be considered as externalized factors (production-related carbon emission and overtime working hours) as well as internalized ones (resulting costs). We provide data for all problem parameters in addition to a detailed verbal problem statement. We refer to Hahn and Brandenburg [1] for a first numerical analysis based on and for future research perspectives arising from this benchmarking problem.
Benchmarking variable-density flow in saturated and unsaturated porous media
NASA Astrophysics Data System (ADS)
Guevara Morel, Carlos Roberto; Cremer, Clemens; Graf, Thomas
2015-04-01
In natural environments, fluid density and viscosity can be affected by spatial and temporal variations of solute concentration and/or temperature. These variations can occur, for example, due to salt water intrusion in coastal aquifers, leachate infiltration from waste disposal sites and upconing of saline water from deep aquifers. As a consequence, potentially unstable situations may exist in which a dense fluid overlies a less dense fluid. This situation can produce instabilities that manifest as dense plume fingers that move vertically downwards, counterbalanced by vertical upwards flow of the less dense fluid. The resulting free convection increases solute transport rates over large distances and times relative to constant-density flow. Therefore, the understanding of free convection is relevant for the protection of freshwater aquifer systems. The results from a laboratory experiment of saturated and unsaturated variable-density flow and solute transport (Simmons et al., Transp. Porous Medium, 2002) are used as the physical basis to define a mathematical benchmark. The HydroGeoSphere code coupled with PEST is used to estimate the optimal parameter set capable of reproducing the physical model. A grid convergence analysis (in space and time) is also undertaken in order to obtain adequate spatial and temporal discretizations. The new mathematical benchmark is useful for model comparison and testing of variable-density, variably saturated flow in porous media.
NASA Astrophysics Data System (ADS)
Yang, Yong-fa; Li, Qi
2014-12-01
In the practical application of terahertz reflection-mode confocal scanning microscopy, the size of the detector pinhole is an important factor that determines the spatial resolution of the microscopic system. However, the use of a physical pinhole brings some inconvenience to the experiment, and its adjustment error has a great influence on the experimental result. By reasonably selecting the parameter of a matrix-detector virtual pinhole (VPH), the physical pinhole can be approximated efficiently. With this approach, the difficulty of experimental calibration is reduced significantly. In this article, an imaging scheme for terahertz reflection-mode confocal scanning microscopy based on the matrix-detector VPH is put forward. The influence of the detector pinhole size on the axial resolution of confocal scanning microscopy is simulated and analyzed. Then, the VPH parameter that yields the best axial imaging performance is determined by simulation.
Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants.
Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna
2016-06-27
This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated.
Verification of Geometric Model-Based Plant Phenotyping Methods for Studies of Xerophytic Plants
Drapikowski, Paweł; Kazimierczak-Grygiel, Ewa; Korecki, Dominik; Wiland-Szymańska, Justyna
2016-01-01
This paper presents the results of verification of certain non-contact measurement methods of plant scanning to estimate morphological parameters such as length, width, area, volume of leaves and/or stems on the basis of computer models. The best results in reproducing the shape of scanned objects up to 50 cm in height were obtained with the structured-light DAVID Laserscanner. The optimal triangle mesh resolution for scanned surfaces was determined with the measurement error taken into account. The research suggests that measuring morphological parameters from computer models can supplement or even replace phenotyping with classic methods. Calculating precise values of area and volume makes determination of the S/V (surface/volume) ratio for cacti and other succulents possible, whereas for classic methods the result is an approximation only. In addition, the possibility of scanning and measuring plant species which differ in morphology was investigated. PMID:27355949
Highly Enriched Uranium Metal Cylinders Surrounded by Various Reflector Materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernard Jones; J. Blair Briggs; Leland Monteirth
A series of experiments was performed at Los Alamos Scientific Laboratory in 1958 to determine critical masses of cylinders of Oralloy (Oy) reflected by a number of materials. The experiments were all performed on the Comet Universal Critical Assembly Machine, and consisted of discs of highly enriched uranium (93.3 wt.% 235U) reflected by half-inch and one-inch-thick cylindrical shells of various reflector materials. The experiments were performed by members of Group N-2, particularly K. W. Gallup, G. E. Hansen, H. C. Paxton, and R. H. White. This experiment was intended to ascertain critical masses for criticality safety purposes, as well as to compare neutron transport cross sections to those obtained from danger coefficient measurements with the Topsy Oralloy-Tuballoy reflected and Godiva unreflected critical assemblies. The reflector materials examined in this series of experiments are as follows: magnesium, titanium, aluminum, graphite, mild steel, nickel, copper, cobalt, molybdenum, natural uranium, tungsten, beryllium, aluminum oxide, molybdenum carbide, and polythene (polyethylene). Also included are two special configurations of composite beryllium and iron reflectors. Analyses were performed in which the uncertainty associated with six different parameters was evaluated; namely, extrapolation to the uranium critical mass, uranium density, 235U enrichment, reflector density, reflector thickness, and reflector impurities. In addition to the idealizations made by the experimenters (removal of the platen and diaphragm), two simplifications were also made to the benchmark models that resulted in a small bias and additional uncertainty. First, since impurities in core and reflector materials are only estimated, they are not included in the benchmark models. Secondly, the room, support structure, and other possible surrounding equipment were not included in the model. Bias values that result from these two simplifications were determined, and the associated uncertainty in the bias values was included in the overall uncertainty in benchmark keff values. Bias values were very small, ranging from 0.0004 Δk low to 0.0007 Δk low. Overall uncertainties range from ± 0.0018 to ± 0.0030. Major contributors to the overall uncertainty include the uncertainty in the extrapolation to the uranium critical mass and the uranium density. Results are summarized in Figure 1 (Experimental, Benchmark-Model, and MCNP/KENO Calculated Results). The 32 configurations described and evaluated under ICSBEP Identifier HEU-MET-FAST-084 are judged to be acceptable for use as criticality safety benchmark experiments and should be valuable integral benchmarks for nuclear data testing of the various reflector materials. Details of the benchmark models, uncertainty analyses, and final results are given in this paper.
Ultra-fast nonlinear optical properties and photophysical mechanism of a novel pyrene derivative
NASA Astrophysics Data System (ADS)
Zhang, Youwei; Yang, Junyi; Xiao, Zhengguo; Song, Yinglin
2016-10-01
The third-order nonlinear optical properties of 1-(pyrene-1-yl)-3-(3-methylthiophene) acrylic ketone, named PMTAK, were investigated using the Z-scan technique. The light sources for the picosecond (ps) and femtosecond (fs) Z-scans were a mode-locked Nd:YAG laser (21 ps, 532 nm, 10 Hz) and an Yb:KGW based fiber laser (190 fs, 515 nm, 532 nm, 20 Hz), respectively. In both cases, reverse saturable absorption (RSA) is observed. The dynamics of the sample's optical nonlinearity are discussed via femtosecond time-resolved pump-probe measurements with a phase object at 515 nm. We attribute the excited-state population of the molecules to two-photon absorption (TPA). A five-level theoretical model is used to analyze the nonlinear optical mechanism. Combined with the results of the picosecond Z-scan experiment, a set of nonlinear optical parameters is extracted; the femtosecond Z-scan experiment is then used to confirm these parameters. The parameters reveal a pronounced excited-state nonlinearity. The results show that the sample has good optical nonlinearity, which indicates potential applications in the field of nonlinear optics.
A Flexile and High Precision Calibration Method for Binocular Structured Light Scanning System
Yuan, Jianying; Wang, Qiong; Li, Bailin
2014-01-01
3D (three-dimensional) structured light scanning systems are widely used in the fields of reverse engineering, quality inspection, and so forth. Camera calibration is the key to scanning precision. Currently, a finely machined 2D (two-dimensional) or 3D calibration reference object is usually required for high calibration precision, which is difficult to handle and costly. In this paper, a novel calibration method is proposed that uses a scale bar and some artificial coded targets placed randomly in the measuring volume. The principle of the proposed method is based on hierarchical self-calibration and bundle adjustment. We obtain initial intrinsic parameters from images. Initial extrinsic parameters in projective space are estimated with the factorization method and then upgraded to Euclidean space using the orthogonality of the rotation matrix and the rank-3 constraint of the absolute quadric. Finally, all camera parameters are refined through bundle adjustment. Real experiments show that the proposed method is robust and has the same precision level as results obtained with a delicate artificial reference object, while the hardware cost is very low compared with the current calibration methods used in 3D structured light scanning systems. PMID:25202736
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem.
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen
2016-01-01
Simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameters' setting is a key factor for its performance, but it is also a tedious work. To simplify parameters setting, we present a list-based simulated annealing (LBSA) algorithm to solve traveling salesman problem (TSP). LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in list is used by Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust on a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
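A compact sketch of the list-based cooling idea follows: the Metropolis test always uses the current maximum temperature in the list, and an accepted uphill move feeds an implied temperature back into the list. The 2-opt neighborhood, list length, and initialization probability are illustrative assumptions, and the update rule follows the abstract's description rather than the authors' exact implementation.

```python
import numpy as np

def tour_len(tour, dist):
    return dist[tour, np.roll(tour, -1)].sum()

def lbsa_tsp(dist, list_len=120, iters=20000, p0=0.1, seed=0):
    """List-based SA sketch for the TSP: the Metropolis test uses the maximum
    temperature in the list; an accepted uphill move replaces that maximum
    with the temperature implied by the acceptance."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    tour = rng.permutation(n)
    cur = tour_len(tour, dist)
    # Initialise the temperature list from the costs of random 2-opt moves,
    # scaled so that an uphill move of that size is accepted with probability p0.
    temps = []
    for _ in range(list_len):
        i, j = sorted(rng.choice(n, 2, replace=False))
        cand = tour.copy(); cand[i:j + 1] = cand[i:j + 1][::-1]
        temps.append(max(abs(tour_len(cand, dist) - cur), 1e-12) / -np.log(p0))
    temps.sort()
    best, best_len = tour.copy(), cur
    for _ in range(iters):
        t_max = temps[-1]
        i, j = sorted(rng.choice(n, 2, replace=False))
        cand = tour.copy(); cand[i:j + 1] = cand[i:j + 1][::-1]   # 2-opt move
        delta = tour_len(cand, dist) - cur
        r = rng.random()
        if delta <= 0:
            tour, cur = cand, cur + delta
        elif r < np.exp(-delta / t_max):
            temps[-1] = -delta / np.log(r)     # adapt the temperature list
            temps.sort()
            tour, cur = cand, cur + delta
        if cur < best_len:
            best, best_len = tour.copy(), cur
    return best, best_len

# Random Euclidean instance with 50 cities.
rng = np.random.default_rng(1)
pts = rng.random((50, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(lbsa_tsp(dist)[1])
```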
NASA Astrophysics Data System (ADS)
Zenkour, A. M.
2018-05-01
The thermal buckling of carbon nanotubes embedded in a visco-Pasternak medium is investigated. Eringen's nonlocal elasticity theory, in conjunction with the first-order Donnell shell theory, is used for this purpose. The surrounding medium is modeled as a three-parameter viscoelastic foundation, combining the Winkler-Pasternak model with a viscous damping coefficient. The governing equilibrium equations are obtained and solved for carbon nanotubes subjected to different thermal and mechanical loads. The effects of the nonlocal parameter, the radius and length of the nanotube, and the three foundation parameters on the thermal buckling of the nanotube are studied. Sample critical buckling loads are reported and graphically illustrated to check the validity of the present results and to provide benchmarks for future comparisons.
Hexagonal boron nitride and water interaction parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Yanbin; Aluru, Narayana R., E-mail: aluru@illinois.edu; Wagner, Lucas K.
2016-04-28
The study of hexagonal boron nitride (hBN) in microfluidic and nanofluidic applications at the atomic level requires accurate force field parameters to describe the water-hBN interaction. In this work, we begin with benchmark quality first principles quantum Monte Carlo calculations on the interaction energy between water and hBN, which are used to validate random phase approximation (RPA) calculations. We then proceed with RPA to derive force field parameters, which are used to simulate water contact angle on bulk hBN, attaining a value within the experimental uncertainties. This paper demonstrates that end-to-end multiscale modeling, starting at detailed many-body quantum mechanics and ending with macroscopic properties, with the approximations controlled along the way, is feasible for these systems.
SU-E-J-261: Statistical Analysis and Chaotic Dynamics of Respiratory Signal of Patients in BodyFix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalski, D; Huq, M; Bednarz, G
Purpose: To quantify the respiratory signal of patients in BodyFix undergoing 4DCT scans with and without the immobilization cover. Methods: 20 pairs of respiratory tracks recorded with the RPM system during 4DCT scans were analyzed. Descriptive statistics were applied to selected parameters of the exhale-inhale decomposition. Standardized signals were used with the delay method to build orbits in embedded space. Nonlinear behavior was tested with surrogate data. Sample entropy (SE), Lempel-Ziv complexity (LZC) and the largest Lyapunov exponents (LLE) were compared. Results: Statistical tests show a difference between scans for inspiration time and its variability, which is bigger for scans without the cover. The same holds for the variability of the end of exhalation and inhalation. Other parameters fail to show a difference. For both scans the respiratory signals show determinism and nonlinear stationarity. Statistical tests on surrogate data reveal their nonlinearity. The LLEs show the signals' chaotic nature and its correlation with the breathing period and its embedding delay time. SE, LZC and LLE measure respiratory signal complexity. Nonlinear characteristics do not differ between scans. Conclusion: Contrary to expectation, the cover applied to patients in BodyFix appears to have a limited effect on the signal parameters. Analysis based on trajectories of delay vectors shows the respiratory system's nonlinear character and its sensitive dependence on initial conditions. Reproducibility of the respiratory signal can be evaluated with measures of signal complexity and its predictability window. A longer respiratory period is conducive to signal reproducibility, as shown by these gauges. Statistical independence of the exhale and inhale times is also supported by the magnitude of the LLE. The nonlinear parameters seem more appropriate for gauging respiratory signal complexity given its deterministic chaotic nature. This contrasts with measures based on harmonic analysis, which are blind to nonlinear features. The dynamics of breathing, so crucial for 4D-based clinical technologies, can be better controlled if a nonlinear-based methodology, which reflects the respiration characteristics, is applied. Funding provided by Varian Medical Systems via an Investigator Initiated Research Project.
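Of the complexity measures named above, sample entropy is the most self-contained to reproduce. The sketch below is a compact approximation of SampEn(m, r) with the common choices m = 2 and r = 0.2 times the signal standard deviation; the synthetic sinusoidal trace merely stands in for an RPM respiratory track.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy SampEn(m, r): -log of the ratio of template matches of
    length m+1 to matches of length m (Chebyshev distance, self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def count_matches(mm):
        # embed the signal into overlapping templates of length mm
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        n = len(emb)
        return (np.sum(d <= r) - n) / 2          # exclude self-matches, count pairs once
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# Illustrative use on a synthetic breathing-like trace (not RPM data).
t = np.linspace(0, 60, 1500)
signal = np.sin(2 * np.pi * t / 4) + 0.1 * np.random.default_rng(1).standard_normal(t.size)
print(sample_entropy(signal))
```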
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of the Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and Monte Carlo methods. Results: The difference between the simulation results of the discrete ordinates method and those of the Monte Carlo method was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). Simulations with quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study. The single-thread computation time of the deterministic simulation with quadrature set 8 and the first order of the Legendre polynomial expansions was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.
NASA Astrophysics Data System (ADS)
May, J. C.; Rowley, C. D.; Meyer, H.
2017-12-01
The Naval Research Laboratory (NRL) Ocean Surface Flux System (NFLUX) is an end-to-end data processing and assimilation system used to provide near-real-time satellite-based surface heat flux fields over the global ocean. The first component of NFLUX produces near-real-time swath-level estimates of surface state parameters and downwelling radiative fluxes. The focus here is on the satellite swath-level state parameter retrievals, namely surface air temperature, surface specific humidity, and surface scalar wind speed over the ocean. Swath-level state parameter retrievals are produced from satellite sensor data records (SDRs) from four passive microwave sensors onboard 10 platforms: the Special Sensor Microwave Imager/Sounder (SSMIS) sensor onboard the DMSP F16, F17, and F18 platforms; the Advanced Microwave Sounding Unit-A (AMSU-A) sensor onboard the NOAA-15, NOAA-18, NOAA-19, Metop-A, and Metop-B platforms; the Advanced Technology Microwave Sounder (ATMS) sensor onboard the S-NPP platform; and the Advanced Microwave Scanning Radiometer 2 (AMSR2) sensor onboard the GCOM-W1 platform. The satellite SDRs are translated into state parameter estimates using multiple polynomial regression algorithms. The coefficients of the algorithms are obtained using a bootstrapping technique with all available brightness temperature channels for a given sensor, in addition to an SST field. For each retrieved parameter and each sensor-platform combination, unique algorithms are developed for ascending and descending orbits, as well as for clear versus cloudy conditions. Each of the sensors produces surface air temperature and surface specific humidity retrievals. The SSMIS and AMSR2 sensors also produce surface scalar wind speed retrievals. Improvement is seen in the SSMIS retrievals when separate algorithms are used for the even and odd scans, with the odd scans performing better than the even scans. Currently, NFLUX treats all SSMIS scans as even scans. Additional improvement in all of the surface retrievals comes from using a 3-hourly SST field, as opposed to a daily SST field.
GLISTR: Glioma Image Segmentation and Registration
Pohl, Kilian M.; Bilello, Michel; Cirillo, Luigi; Biros, George; Melhem, Elias R.; Davatzikos, Christos
2015-01-01
We present a generative approach for simultaneously registering a probabilistic atlas of a healthy population to brain magnetic resonance (MR) scans showing glioma and segmenting the scans into tumor as well as healthy tissue labels. The proposed method is based on the expectation maximization (EM) algorithm and incorporates a glioma growth model for atlas seeding, a process which modifies the original atlas into one with tumor and edema adapted to best match a given set of patient images. The modified atlas is registered into the patient space and utilized for estimating the posterior probabilities of various tissue labels. EM iteratively refines the estimates of the posterior probabilities of tissue labels, the deformation field and the tumor growth model parameters. Hence, in addition to segmentation, the proposed method results in atlas registration and a low-dimensional description of the patient scans through estimation of the tumor model parameters. We validate the method by automatically segmenting 10 MR scans and comparing the results to those produced by clinical experts and two state-of-the-art methods. The resulting segmentations of tumor and edema outperform the results of the reference methods, and achieve an accuracy similar to that of a second human rater. We additionally apply the method to 122 patient scans and report the estimated tumor model parameters and their relations with segmentation and registration results. Based on the results from this patient population, we construct a statistical atlas of the glioma by inverting the estimated deformation fields to warp the tumor segmentations of the patient scans into a common space. PMID:22907965
Uncertainty Quantification Techniques of SCALE/TSUNAMI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Mueller, Don
2011-01-01
The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as keff, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for gaps in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; MacEachren, Alan M
2008-01-01
Background Kulldorff's spatial scan statistic and its software implementation – SaTScan – are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. Results We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. Conclusion The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. Method We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit. PMID:18992163
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; Maceachren, Alan M
2008-11-07
Kulldorff's spatial scan statistic and its software implementation - SaTScan - are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit.
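In its simplest reading, the reliability score described above reduces to the fraction of the fifty runs in which a given county falls inside a statistically significant cluster. The sketch below follows that reading only; it does not reproduce the high-/low-risk homogeneity distinction or the Visual Inquiry Toolkit, and the run-by-county input arrays are hypothetical.

```python
import numpy as np

def reliability_scores(membership, p_values, alpha=0.05):
    """membership: (n_runs, n_counties) array of cluster ids (-1 = not in a cluster).
    p_values: (n_runs, n_counties) array giving the p-value of the cluster containing
    each county (np.nan where the county is in no cluster).
    Returns, per county, the fraction of runs in which it lies inside a
    statistically significant cluster -- a simple reliability score."""
    in_sig_cluster = (membership >= 0) & (np.nan_to_num(p_values, nan=1.0) < alpha)
    return in_sig_cluster.mean(axis=0)

# Hypothetical example: 50 SaTScan runs over 3,000 counties.
rng = np.random.default_rng(0)
membership = rng.integers(-1, 5, size=(50, 3000))
p_values = rng.uniform(0, 0.2, size=(50, 3000))
scores = reliability_scores(membership, p_values)
stable = np.where(scores > 0.9)[0]   # counties clustered in >90% of runs
print(len(stable))
```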
A community resource benchmarking predictions of peptide binding to MHC-I molecules.
Peters, Bjoern; Bui, Huynh-Hoa; Frankild, Sune; Nielson, Morten; Lundegaard, Claus; Kostem, Emrah; Basch, Derek; Lamberth, Kasper; Harndahl, Mikkel; Fleri, Ward; Wilson, Stephen S; Sidney, John; Lund, Ole; Buus, Soren; Sette, Alessandro
2006-06-09
Recognition of peptides bound to major histocompatibility complex (MHC) class I molecules by T lymphocytes is an essential part of immune surveillance. Each MHC allele has a characteristic peptide binding preference, which can be captured in prediction algorithms, allowing for the rapid scan of entire pathogen proteomes for peptide likely to bind MHC. Here we make public a large set of 48,828 quantitative peptide-binding affinity measurements relating to 48 different mouse, human, macaque, and chimpanzee MHC class I alleles. We use this data to establish a set of benchmark predictions with one neural network method and two matrix-based prediction methods extensively utilized in our groups. In general, the neural network outperforms the matrix-based predictions mainly due to its ability to generalize even on a small amount of data. We also retrieved predictions from tools publicly available on the internet. While differences in the data used to generate these predictions hamper direct comparisons, we do conclude that tools based on combinatorial peptide libraries perform remarkably well. The transparent prediction evaluation on this dataset provides tool developers with a benchmark for comparison of newly developed prediction methods. In addition, to generate and evaluate our own prediction methods, we have established an easily extensible web-based prediction framework that allows automated side-by-side comparisons of prediction methods implemented by experts. This is an advance over the current practice of tool developers having to generate reference predictions themselves, which can lead to underestimating the performance of prediction methods they are not as familiar with as their own. The overall goal of this effort is to provide a transparent prediction evaluation allowing bioinformaticians to identify promising features of prediction methods and providing guidance to immunologists regarding the reliability of prediction tools.
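A matrix-based predictor of the kind benchmarked here scores each 9-mer by summing position-specific contributions and then scans a protein for high-scoring candidates. The sketch below illustrates that scanning step only; the random scoring matrix and the example protein are placeholders, not an allele-specific matrix trained on the published affinity data.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AA)}

def scan_proteome(sequence, pssm, top_n=5):
    """Score every 9-mer of `sequence` with a position-specific scoring matrix
    (shape 9 x 20, rows = peptide positions, columns = amino acids) and return
    the top-scoring candidate binders."""
    k = pssm.shape[0]
    scores = []
    for i in range(len(sequence) - k + 1):
        pep = sequence[i:i + k]
        s = sum(pssm[pos, AA_INDEX[aa]] for pos, aa in enumerate(pep))
        scores.append((s, i, pep))
    return sorted(scores, reverse=True)[:top_n]

# Illustrative random matrix standing in for a trained allele-specific matrix.
rng = np.random.default_rng(0)
pssm = rng.normal(size=(9, 20))
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"
for score, pos, pep in scan_proteome(protein, pssm):
    print(f"{pep} at {pos}: {score:.2f}")
```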
NASA Astrophysics Data System (ADS)
de Freitas, Maria Camila Pruper; Figueiredo Neto, Antonio Martins; Giampaoli, Viviane; da Conceição Quintaneiro Aubin, Elisete; de Araújo Lima Barbosa, Milena Maria; Damasceno, Nágila Raquel Teixeira
2016-04-01
The great atherogenic potential of oxidized low-density lipoprotein has been widely described in the literature. The objective of this study was to investigate whether the state of oxidized low-density lipoprotein in human plasma measured by the Z-scan technique has an association with different cardiometabolic biomarkers. Total cholesterol, high-density lipoprotein cholesterol, triacylglycerols, apolipoprotein A-I and apolipoprotein B, paraoxonase-1, and glucose were analyzed using standard commercial kits, and low-density lipoprotein cholesterol was estimated using the Friedewald equation. A sandwich enzyme-linked immunosorbent assay was used to detect electronegative low-density lipoprotein. Low-density lipoprotein and high-density lipoprotein sizes were determined by Lipoprint® system. The Z-scan technique was used to measure the non-linear optical response of low-density lipoprotein solution. Principal component analysis and correlations were used respectively to resize the data from the sample and test association between the θ parameter, measured with the Z-scan technique, and the principal component. A total of 63 individuals, from both sexes, with mean age 52 years (±11), being overweight and having high levels of total cholesterol and low levels of high-density lipoprotein cholesterol, were enrolled in this study. A positive correlation between the θ parameter and more anti-atherogenic pattern for cardiometabolic biomarkers together with a negative correlation for an atherogenic pattern was found. Regarding the parameters related with an atherogenic low-density lipoprotein profile, the θ parameter was negatively correlated with a more atherogenic pattern. By using Z-scan measurements, we were able to find an association between oxidized low-density lipoprotein state and multiple cardiometabolic biomarkers in samples from individuals with different cardiovascular risk factors.
Temperature profile retrievals with extended Kalman-Bucy filters
NASA Technical Reports Server (NTRS)
Ledsham, W. H.; Staelin, D. H.
1979-01-01
The Extended Kalman-Bucy Filter is a powerful technique for estimating non-stationary random parameters in situations where the received signal is a noisy non-linear function of those parameters. A practical causal filter for retrieving atmospheric temperature profiles from radiances observed at a single scan angle by the Scanning Microwave Spectrometer (SCAMS) carried on the Nimbus 6 satellite typically shows approximately a 10-30% reduction in rms error about the mean at almost all levels below 70 mb when compared with a regression inversion.
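The retrieval scheme rests on the standard extended Kalman filter predict/update cycle, linearizing the radiative forward model around the current profile estimate. The sketch below shows one such cycle with a toy linear forward model standing in for the SCAMS weighting functions; all matrices and dimensions are illustrative assumptions.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.
    x, P : prior state estimate (e.g. temperature profile) and covariance
    z    : new measurement (e.g. observed brightness temperatures)
    f, F : state transition function and its Jacobian
    h, H : measurement function and its Jacobian (linearized forward model)
    Q, R : process and measurement noise covariances"""
    # Predict
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # Update (linearize h around the predicted state)
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Toy stand-in: 5-level temperature profile, 3 radiance channels, linear forward model.
rng = np.random.default_rng(0)
A = rng.uniform(0, 1, size=(3, 5)); A /= A.sum(axis=1, keepdims=True)   # weighting functions
f = lambda x: x                      # persistence between scans
F = lambda x: np.eye(5)
h = lambda x: A @ x
H = lambda x: A
x, P = np.full(5, 250.0), np.eye(5) * 25.0
z = A @ np.array([220, 235, 250, 265, 280.0]) + rng.normal(0, 0.5, 3)
x, P = ekf_step(x, P, z, f, F, h, H, Q=np.eye(5) * 1.0, R=np.eye(3) * 0.25)
print(x)
```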
Lidar - ND Halo Scanning Doppler, Boardman - Raw Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leo, Laura
2017-10-23
The University of Notre Dame (ND) scanning lidar dataset used for the WFIP2 Campaign is provided. The raw dataset contains the radial velocity and backscatter measurements along with the beam location and other lidar parameters in the header.
Krůček, Martin; Vrška, Tomáš; Král, Kamil
2017-01-01
Terrestrial laser scanning is a powerful technology for capturing the three-dimensional structure of forests with a high level of detail and accuracy. Over the last decade, many algorithms have been developed to extract various tree parameters from terrestrial laser scanning data. Here we present 3D Forest, an open-source non-platform-specific software application with an easy-to-use graphical user interface with the compilation of algorithms focused on the forest environment and extraction of tree parameters. The current version (0.42) extracts important parameters of forest structure from the terrestrial laser scanning data, such as stem positions (X, Y, Z), tree heights, diameters at breast height (DBH), as well as more advanced parameters such as tree planar projections, stem profiles or detailed crown parameters including convex and concave crown surface and volume. Moreover, 3D Forest provides quantitative measures of between-crown interactions and their real arrangement in 3D space. 3D Forest also includes an original algorithm of automatic tree segmentation and crown segmentation. Comparison with field data measurements showed no significant difference in measuring DBH or tree height using 3D Forest, although for DBH only the Randomized Hough Transform algorithm proved to be sufficiently resistant to noise and provided results comparable to traditional field measurements. PMID:28472167
Electron beams scanning: A novel method
NASA Astrophysics Data System (ADS)
Askarbioki, M.; Zarandi, M. B.; Khakshournia, S.; Shirmardi, S. P.; Sharifian, M.
2018-06-01
In this research, a spatial electron beam scanning method is reported. There are various methods for ion and electron beam scanning. The best known of these is wire scanning, wherein the parameters of the beam are measured by one or more conductive wires. This article suggests a novel method for e-beam scanning that avoids the errors of conventional wire scanning. In this method, techniques of atomic physics are applied so that a knife edge plays the role of the scanner and the wires act as detectors. The 2D e-beam profile is readily determined once the positions of the scanner and detectors are specified.
Limitations of Community College Benchmarking and Benchmarks
ERIC Educational Resources Information Center
Bers, Trudy H.
2006-01-01
This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.
Bizios, Dimitrios; Heijl, Anders; Hougaard, Jesper Leth; Bengtsson, Boel
2010-02-01
To compare the performance of two machine learning classifiers (MLCs), artificial neural networks (ANNs) and support vector machines (SVMs), with input based on retinal nerve fibre layer thickness (RNFLT) measurements by optical coherence tomography (OCT), on the diagnosis of glaucoma, and to assess the effects of different input parameters. We analysed Stratus OCT data from 90 healthy persons and 62 glaucoma patients. Performance of MLCs was compared using conventional OCT RNFLT parameters plus novel parameters such as minimum RNFLT values, 10th and 90th percentiles of measured RNFLT, and transformations of A-scan measurements. For each input parameter and MLC, the area under the receiver operating characteristic curve (AROC) was calculated. There were no statistically significant differences between ANNs and SVMs. The best AROCs for both ANN (0.982, 95%CI: 0.966-0.999) and SVM (0.989, 95% CI: 0.979-1.0) were based on input of transformed A-scan measurements. Our SVM trained on this input performed better than ANNs or SVMs trained on any of the single RNFLT parameters (p < or = 0.038). The performance of ANNs and SVMs trained on minimum thickness values and the 10th and 90th percentiles were at least as good as ANNs and SVMs with input based on the conventional RNFLT parameters. No differences between ANN and SVM were observed in this study. Both MLCs performed very well, with similar diagnostic performance. Input parameters have a larger impact on diagnostic performance than the type of machine classifier. Our results suggest that parameters based on transformed A-scan thickness measurements of the RNFL processed by machine classifiers can improve OCT-based glaucoma diagnosis.
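The core comparison, cross-validated AROC of an SVM versus an ANN on RNFLT-derived inputs, can be outlined with scikit-learn. The feature matrix below is synthetic and merely mimics the 90 healthy / 62 glaucoma group sizes; it is not the Stratus OCT data, and the model hyperparameters are illustrative defaults.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder "RNFLT" feature matrix: 90 healthy + 62 glaucoma subjects,
# 32 thickness-derived inputs (e.g. transformed A-scan measurements).
X = np.vstack([rng.normal(100, 10, size=(90, 32)),
               rng.normal(85, 12, size=(62, 32))])
y = np.r_[np.zeros(90), np.ones(62)]

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(16,),
                                                         max_iter=2000, random_state=0)),
}
for name, model in models.items():
    # Cross-validated class-1 probabilities, then area under the ROC curve (AROC).
    proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
    print(name, "AROC =", round(roc_auc_score(y, proba), 3))
```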
Adaptive Local Realignment of Protein Sequences.
DeBlasio, Dan; Kececioglu, John
2018-06-11
While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully-chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.
Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of PSO algorithm depends greatly on the appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO's parameters used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting inertia weight which is named Flexible Exponential Inertia Weight (FEIW) strategy because according to each problem we can construct an increasing or decreasing inertia weight strategy with suitable parameters selection. The efficacy and efficiency of PSO algorithm with FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. Also FEIW is compared with best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate.
A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm
Shamsi, Mousa; Sedaaghi, Mohammad Hossein
2016-01-01
Particle swarm optimization (PSO) is an evolutionary computing method based on intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of PSO algorithm depends greatly on the appropriate parameter selection strategies for fine tuning its parameters. Inertia weight (IW) is one of PSO’s parameters used to bring about a balance between the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting inertia weight which is named Flexible Exponential Inertia Weight (FEIW) strategy because according to each problem we can construct an increasing or decreasing inertia weight strategy with suitable parameters selection. The efficacy and efficiency of PSO algorithm with FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. Also FEIW is compared with best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis prove that FEIW improves the search performance in terms of solution quality as well as convergence rate. PMID:27560945
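Since the abstract does not give the exact FEIW formula, the sketch below uses a generic global-best PSO with an exponential inertia-weight schedule w(t) = w_end + (w_start - w_end)·exp(-alpha·t/T) as an illustrative stand-in; the schedule parameters and the Rastrigin test function are assumptions, not values from the paper.

```python
import numpy as np

def pso(f, lb, ub, n_particles=30, iters=500,
        w_start=0.9, w_end=0.4, alpha=3.0, c1=2.0, c2=2.0, seed=0):
    """Global-best PSO with an exponential inertia-weight schedule
    w(t) = w_end + (w_start - w_end) * exp(-alpha * t / iters)  (illustrative FEIW-style form)."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for t in range(iters):
        w = w_end + (w_start - w_end) * np.exp(-alpha * t / iters)   # inertia weight
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Rastrigin benchmark in 10 dimensions.
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
best, fbest = pso(rastrigin, np.full(10, -5.12), np.full(10, 5.12))
print(fbest)
```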
Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.
Yuan, Haidong
2016-10-14
Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in the quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimations on two-dimensional systems, and an order of O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.
RERTR-12 Post-irradiation Examination Summary Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rice, Francine; Williams, Walter; Robinson, Adam
2015-02-01
The following report contains the results and conclusions for the post-irradiation examinations performed on RERTR-12 Insertion 2 experiment plates. These exams include eddy-current testing to measure oxide growth; neutron radiography for evaluating the condition of the fuel prior to sectioning and determination of fuel relocation and geometry changes; gamma scanning to provide relative measurements for burnup and indication of fuel- and fission-product relocation; profilometry to measure dimensional changes of the fuel plate; analytical chemistry to benchmark the physics burnup calculations; metallography to examine the microstructural changes in the fuel, interlayer and cladding; and microhardness testing to determine the material-property changes of the fuel and cladding.
Electronic structure probed with positronium: Theoretical viewpoint
NASA Astrophysics Data System (ADS)
Kuriplach, Jan; Barbiellini, Bernardo
2018-05-01
We inspect carefully how the positronium can be used to study the electronic structure of materials. Recent combined experimental and computational study [A.C.L. Jones et al., Phys. Rev. Lett. 117, 216402 (2016)] has shown that the positronium affinity can be used to benchmark the exchange-correlation approximations in copper. Here we investigate whether an improvement can be achieved by increasing the numerical precision of calculations and by employing the strongly constrained and appropriately normed (SCAN) scheme, and extend the study to other selected systems like aluminum and high entropy alloys. From the methodological viewpoint, the computations of the positronium affinity are further refined and an alternative way of determining the electron chemical potential using charged supercells is examined.
Pieniazek, Facundo; Messina, Valeria
2016-11-01
In this study, the effect of freeze drying on the microstructure, texture, and tenderness of Semitendinosus and Gluteus Medius bovine muscles was analyzed by applying Scanning Electron Microscopy combined with image analysis. Samples were analyzed by Scanning Electron Microscopy at different magnifications (250, 500, and 1,000×). Texture parameters were analyzed with a texture analyzer and by image analysis, and tenderness was measured by Warner-Bratzler shear force. Significant differences (p < 0.05) were obtained for image and instrumental texture features. A linear trend with a linear correlation was applied to the instrumental and image features. Image texture features calculated from the Gray Level Co-occurrence Matrix (homogeneity, contrast, entropy, correlation and energy) at 1,000× in both muscles had high correlations with instrumental features (chewiness, hardness, cohesiveness, and springiness). Tenderness showed a positive correlation in both muscles with image features (energy and homogeneity). Combining Scanning Electron Microscopy with image analysis can be a useful tool to analyze quality parameters in meat. SCANNING 38:727-734, 2016. © 2016 Wiley Periodicals, Inc.
Entropy-Based Registration of Point Clouds Using Terrestrial Laser Scanning and Smartphone GPS.
Chen, Maolin; Wang, Siying; Wang, Mingwei; Wan, Youchuan; He, Peipei
2017-01-20
Automatic registration of terrestrial laser scanning point clouds is a crucial but unresolved topic that is of great interest in many domains. This study combines a terrestrial laser scanner with a smartphone for the coarse registration of leveled point clouds with small roll and pitch angles and height differences, which is a novel sensor combination mode for terrestrial laser scanning. The approximate distance between two neighboring scan positions is first calculated from the smartphone GPS coordinates. Then, 2D distribution entropy is used to measure the distribution coherence between the two scans and to search for the optimal initial transformation parameters. To this end, we propose a method called Iterative Minimum Entropy (IME) to correct the initial transformation parameters based on two criteria: the difference between the average and minimum entropy, and the deviation from the minimum entropy to the expected entropy. Finally, the presented method is evaluated using two data sets that contain tens of millions of points from panoramic and non-panoramic, vegetation-dominated and building-dominated cases, and it achieves high accuracy and efficiency.
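To make the entropy-based coarse registration idea concrete, the sketch below grids the merged horizontal point distribution and searches yaw and planar offsets around a GPS prior for the transform giving minimum Shannon entropy; the cell size, search ranges and toy data are assumptions, and the paper's IME correction criteria are not reproduced.

```python
import numpy as np

def grid_entropy(points_xy, cell=0.5):
    """Shannon entropy of the 2D occupancy histogram of a merged point set."""
    ij = np.floor(points_xy / cell).astype(int)
    _, counts = np.unique(ij, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def coarse_register(scan_a, scan_b, gps_offset, yaw_grid, xy_grid):
    """Search yaw and planar translation around the GPS prior for the transform
    that minimises the entropy of the merged clouds (a proxy for distribution
    coherence; the paper's IME refinement is not shown)."""
    best = (np.inf, None)
    for yaw in yaw_grid:
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])
        for dx in xy_grid:
            for dy in xy_grid:
                t = gps_offset + np.array([dx, dy])
                merged = np.vstack([scan_a, scan_b @ R.T + t])
                h = grid_entropy(merged)
                if h < best[0]:
                    best = (h, (yaw, t))
    return best

# Toy example: scan_b is scan_a seen from a station offset by (10, 5) and yawed by 0.2 rad.
rng = np.random.default_rng(1)
scan_a = rng.uniform(0.0, 20.0, (2000, 2))
yaw_true, t_true = 0.2, np.array([10.0, 5.0])
Rtrue = np.array([[np.cos(yaw_true), -np.sin(yaw_true)],
                  [np.sin(yaw_true),  np.cos(yaw_true)]])
scan_b = (scan_a - t_true) @ Rtrue       # scan_b expressed in its own frame
ent, (yaw, t) = coarse_register(scan_a, scan_b, gps_offset=np.array([9.0, 6.0]),
                                yaw_grid=np.linspace(0.0, 0.4, 21),
                                xy_grid=np.linspace(-2.0, 2.0, 9))
print("estimated yaw:", yaw, "translation:", t)
```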
Characterization of Titanium Oxide Layers Formation Produced by Nanosecond Laser Coloration
NASA Astrophysics Data System (ADS)
Brihmat-Hamadi, F.; Amara, E. H.; Kellou, H.
2017-06-01
A laser marking technique is used to produce colors on titanium while scanning a metallic sample under normal atmospheric conditions. To proceed with different operating conditions related to the laser beam, the parameters of a Q-switched diode-pumped Nd:YAG (λ = 532 nm) laser, with a pulse duration of τ = 5 ns, are varied. The aim of the present study is to determine the influence on the resulting mark quality of the operating parameters (i.e., pulse frequency, beam scanning speed, and pumping intensity) and, furthermore, of their combinations, such as the accumulated fluence and the overlapping rate of laser impacts. From the experimental results, it is noted that the accumulated fluence and the scanning speed are the most influential operating parameters during laser marking, since they have a strong effect on the surface roughness and reflectance, and on the occurrence of several oxide phases such as TiO, Ti2O3, and TiO2 (γ-phase, anatase, and rutile).
Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson; ...
2018-06-14
Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
NASA Astrophysics Data System (ADS)
Ito, Akihiko; Nishina, Kazuya; Reyer, Christopher P. O.; François, Louis; Henrot, Alexandra-Jane; Munhoven, Guy; Jacquemin, Ingrid; Tian, Hanqin; Yang, Jia; Pan, Shufen; Morfopoulos, Catherine; Betts, Richard; Hickler, Thomas; Steinkamp, Jörg; Ostberg, Sebastian; Schaphoff, Sibyll; Ciais, Philippe; Chang, Jinfeng; Rafique, Rashid; Zeng, Ning; Zhao, Fang
2017-08-01
Simulating vegetation photosynthetic productivity (or gross primary production, GPP) is a critical feature of the biome models used for impact assessments of climate change. We conducted a benchmarking of global GPP simulated by eight biome models participating in the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2a) with four meteorological forcing datasets (30 simulations), using independent GPP estimates and recent satellite data of solar-induced chlorophyll fluorescence as a proxy of GPP. The simulated global terrestrial GPP ranged from 98 to 141 Pg C yr-1 (1981-2000 mean); considerable inter-model and inter-data differences were found. Major features of spatial distribution and seasonal change of GPP were captured by each model, showing good agreement with the benchmarking data. All simulations showed incremental trends of annual GPP, seasonal-cycle amplitude, radiation-use efficiency, and water-use efficiency, mainly caused by the CO2 fertilization effect. The incremental slopes were higher than those obtained by remote sensing studies, but comparable with those by recent atmospheric observation. Apparent differences were found in the relationship between GPP and incoming solar radiation, for which forcing data differed considerably. The simulated GPP trends co-varied with a vegetation structural parameter, leaf area index, at model-dependent strengths, implying the importance of constraining canopy properties. In terms of extreme events, GPP anomalies associated with a historical El Niño event and large volcanic eruption were not consistently simulated in the model experiments due to deficiencies in both forcing data and parameterized environmental responsiveness. Although the benchmarking demonstrated the overall advancement of contemporary biome models, further refinements are required, for example, for solar radiation data and vegetation canopy schemes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arthur, Jennifer; Bahran, Rian; Hutchinson, Jesson
Historically, radiation transport codes have treated fission emissions as uncorrelated. In reality, the particles emitted by both spontaneous and induced fissions are correlated in time, energy, angle, and multiplicity. This work validates the performance of various current Monte Carlo codes that take into account the underlying correlated physics of fission neutrons, specifically neutron multiplicity distributions. The performance of 4 Monte Carlo codes - MCNP®6.2, MCNP®6.2/FREYA, MCNP®6.2/CGMF, and PoliMi - was assessed using neutron multiplicity benchmark experiments. In addition, MCNP®6.2 simulations were run using JEFF-3.2 and JENDL-4.0, rather than ENDF/B-VII.1, data for 239Pu and 240Pu. The sensitive benchmark parameters that in this work represent the performance of each correlated fission multiplicity Monte Carlo code include the singles rate, the doubles rate, leakage multiplication, and Feynman histograms. Although it is difficult to determine which radiation transport code shows the best overall performance in simulating subcritical neutron multiplication inference benchmark measurements, it is clear that correlations exist between the underlying nuclear data utilized by (or generated by) the various codes and the correlated neutron observables of interest. This could prove useful in nuclear data validation and evaluation applications, in which a particular moment of the neutron multiplicity distribution is of more interest than the other moments. It is also quite clear that, because transport is handled by MCNP®6.2 in 3 of the 4 codes, with the 4th code (PoliMi) being based on an older version of MCNP®, the differences in correlated neutron observables of interest are most likely due to the treatment of fission event generation in each of the different codes, as opposed to the radiation transport.
Particle swarm optimization with recombination and dynamic linkage discovery.
Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung
2007-12-01
In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.
Reduction in radiation doses from paediatric CT scans in Great Britain.
Lee, Choonsik; Pearce, Mark S; Salotti, Jane A; Harbron, Richard W; Little, Mark P; McHugh, Kieran; Chapple, Claire-Louise; Berrington de Gonzalez, Amy
2016-01-01
Although CT scans provide great medical benefits, concerns have been raised about the magnitude of possible associated cancer risk, particularly in children, who are more sensitive to radiation than adults. Unnecessarily high doses can also be delivered to children during CT examinations if the scan parameters are not adjusted for patient age and size. We conducted the first survey to directly assess the trends in CT scan parameters and doses for paediatric CT scans performed in Great Britain between 1978 and 2008. We retrieved 1073 CT film sets from 36 hospitals. The patients were 0-19 years old, and CT scans were conducted between 1978 and 2008. We extracted scan parameters from each film including tube current-time product [milliampere seconds (mAs)], tube potential [peak kilovoltage (kVp)] and manufacturer and model of the CT scanner. We estimated the mean mAs for head and trunk (chest and abdomen/pelvis) scans, according to patient age (0-4, 5-9, 10-14 and 15-19 years) and scan year (<1990, 1990-1994, 1995-1999 and ≥2000), and then derived the volumetric CT dose index and estimated organ doses. For head CT scans, mean mAs decreased by about 47% on average from before 1990 to after 2000, with the decrease starting around 1990. The mean mAs for head CTs did not vary with age before 1990, whereas slightly lower mAs values were used for younger patients after 1990. Similar declines in mAs were observed for trunk CTs: a 46% decline on average from before 1990 to after 2000. Although mean mAs for trunk CTs did not vary with age before 1990, the value varied markedly by age after 2000, from 63 mAs for ages 0-4 years to 315 mAs for those aged >15 years. No material changes in kVp were found. The estimated brain-absorbed dose from head CT scans decreased from 62 mGy before 1990 to approximately 30 mGy after 2000. For chest CT scans, the lung dose to children aged 0-4 years decreased from 28 mGy before 1990 to 4 mGy after 2000. We found that mAs for head and trunk CTs was approximately halved starting around 1990, and age-specific mAs was generally used for paediatric scans after this date. These changes will have substantially reduced the radiation exposure to children from CT scans in Great Britain. The study shows that mAs and major organ doses for paediatric CT scans in Great Britain began to decrease around 1990.
Reduction in radiation doses from paediatric CT scans in Great Britain
Pearce, Mark S; Salotti, Jane A; Harbron, Richard W; Little, Mark P; McHugh, Kieran; Chapple, Claire-Louise; Berrington de Gonzalez, Amy
2016-01-01
Objective: Although CT scans provide great medical benefits, concerns have been raised about the magnitude of possible associated cancer risk, particularly in children, who are more sensitive to radiation than adults. Unnecessarily high doses can also be delivered to children during CT examinations if the scan parameters are not adjusted for patient age and size. We conducted the first survey to directly assess the trends in CT scan parameters and doses for paediatric CT scans performed in Great Britain between 1978 and 2008. Methods: We retrieved 1073 CT film sets from 36 hospitals. The patients were 0–19 years old, and CT scans were conducted between 1978 and 2008. We extracted scan parameters from each film including tube current–time product [milliampere seconds (mAs)], tube potential [peak kilovoltage (kVp)] and manufacturer and model of the CT scanner. We estimated the mean mAs for head and trunk (chest and abdomen/pelvis) scans, according to patient age (0–4, 5–9, 10–14 and 15–19 years) and scan year (<1990, 1990–1994, 1995–1999 and ≥2000), and then derived the volumetric CT dose index and estimated organ doses. Results: For head CT scans, mean mAs decreased by about 47% on average from before 1990 to after 2000, with the decrease starting around 1990. The mean mAs for head CTs did not vary with age before 1990, whereas slightly lower mAs values were used for younger patients after 1990. Similar declines in mAs were observed for trunk CTs: a 46% decline on average from before 1990 to after 2000. Although mean mAs for trunk CTs did not vary with age before 1990, the value varied markedly by age after 2000, from 63 mAs for ages 0–4 years to 315 mAs for those aged >15 years. No material changes in kVp were found. The estimated brain-absorbed dose from head CT scans decreased from 62 mGy before 1990 to approximately 30 mGy after 2000. For chest CT scans, the lung dose to children aged 0–4 years decreased from 28 mGy before 1990 to 4 mGy after 2000. Conclusion: We found that mAs for head and trunk CTs was approximately halved starting around 1990, and age-specific mAs was generally used for paediatric scans after this date. These changes will have substantially reduced the radiation exposure to children from CT scans in Great Britain. Advances in knowledge: The study shows that mAs and major organ doses for paediatric CT scans in Great Britain began to decrease around 1990. PMID:26864156
Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J
2013-04-21
Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of a spiral CT scan: scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model specific to spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' option in the srcxyznrc.mortran and dosxyznrc.mortran files of the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator, for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. The acquired 2D and 3D dose distributions were then analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate spiral CT scanning accurately in a single simulation run. It also produced a dose distribution equivalent to that of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating spiral CT scan dose in the BEAMnrc/EGSnrc system.
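The core geometric relation, advancing the isocenter along the table axis in proportion to the gantry angle, can be sketched as follows; the single-slice convention feed-per-rotation = pitch × slice thickness is an assumption for illustration (a multi-slice scanner would use the total collimated beam width), and none of this reproduces the actual DOSXYZnrc Mortran implementation.

```python
import numpy as np

def spiral_source_positions(n_rotations=4, samples_per_rot=360,
                            pitch=1.0, slice_thickness_mm=5.0,
                            start_angle_rad=0.0, direction=+1):
    """Source angle and isocenter z-offset for a spiral CT acquisition.
    Table feed per rotation is modelled as pitch * slice_thickness
    (single-slice convention); the z-offset grows linearly with the
    accumulated gantry angle regardless of rotation direction."""
    n = n_rotations * samples_per_rot
    angles = start_angle_rad + direction * np.linspace(0.0, 2 * np.pi * n_rotations, n)
    feed_per_rot = pitch * slice_thickness_mm
    z = feed_per_rot * (angles - start_angle_rad) / (2 * np.pi * direction)
    return angles, z

angles, z = spiral_source_positions()
print(z[0], z[-1])   # table travel from 0 to pitch * thickness * n_rotations (mm)
```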
Benchmarking specialty hospitals, a scoping review on theory and practice.
Wind, A; van Harten, W H
2017-04-04
Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was merely described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.
TOPSIS based parametric optimization of laser micro-drilling of TBC coated nickel based superalloy
NASA Astrophysics Data System (ADS)
Parthiban, K.; Duraiselvam, Muthukannan; Manivannan, R.
2018-06-01
The technique for order of preference by similarity to ideal solution (TOPSIS) approach was used for optimizing the process parameters of laser micro-drilling of nickel superalloy C263 with a Thermal Barrier Coating (TBC). Plasma spraying was used to deposit the TBC, and a picosecond Nd:YAG pulsed laser was used to drill the specimens. Drilling angle, laser scan speed and number of passes were considered as input parameters. Based on the machining conditions, a Taguchi L8 orthogonal array was used for conducting the experimental runs. The surface roughness and surface crack density (SCD) were considered as the output measures. The surface roughness was measured using a 3D White Light Interferometer (WLI) and the crack density was measured using a Scanning Electron Microscope (SEM). The optimized result achieved from this approach suggests reduced surface roughness and surface crack density. The holes drilled at an inclination angle of 45°, a laser scan speed of 3 mm/s and 400 passes were found to be optimal. From the Analysis of Variance (ANOVA), inclination angle and number of passes were identified as the major influencing parameters. The optimized parameter combination exhibited a 19% improvement in surface finish and a 12% reduction in SCD.
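A generic TOPSIS ranking of the kind applied here treats each experimental run as an alternative scored on the two cost criteria (surface roughness and SCD); the sketch below shows the standard normalisation, ideal/anti-ideal distances and closeness coefficient with illustrative numbers and equal weights, which are assumptions rather than the study's data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix : (n_alternatives, n_criteria) decision matrix
    weights: criterion weights summing to 1
    benefit: boolean per criterion (True = larger is better)"""
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalisation
    v = m * weights                                  # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # closeness: 1 = closest to ideal

# Illustrative L8-style runs: columns = [surface roughness (um), SCD];
# both are cost criteria, so benefit=False for each.
runs = np.array([[1.8, 0.12], [1.5, 0.10], [2.1, 0.15], [1.6, 0.09]])
score = topsis(runs, weights=np.array([0.5, 0.5]), benefit=np.array([False, False]))
print("best run:", np.argmax(score) + 1, score)
```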
Detterbeck, Andreas; Hofmeister, Michael; Hofmann, Elisabeth; Haddad, Daniel; Weber, Daniel; Hölzing, Astrid; Zabler, Simon; Schmid, Matthias; Hiller, Karl-Heinz; Jakob, Peter; Engel, Jens; Hiller, Jochen; Hirschfelder, Ursula
2016-07-01
To examine the relative usefulness and suitability of magnetic resonance imaging (MRI) in daily clinical practice as compared to various technologies of computed tomography (CT) in addressing questions of orthodontic interest. Three blinded raters evaluated 2D slices and 3D reconstructions created from scans of two pig heads. Five imaging modalities were used, including three CT technologies (multislice CT [MSCT], cone-beam CT [CBCT], and industrial µCT) and two MRI protocols with different scan durations. Defined orthodontic parameters were rated one by one on the 2D slices and the 3D reconstructions, followed by final overall ratings for each modality. A mixed linear model was used for statistical analysis. Based on the 2D slices, the parameter of visualizing tooth-germ topography did not yield any significantly different ratings for MRI versus any of the CT scans. While some ratings for the other parameters did involve significant differences, how these should be interpreted depends greatly on the relevance of each parameter. Based on the 3D reconstructions, the only significant difference between technologies was noted for the parameter of visualizing root-surface morphology. Based on the final overall ratings, the imaging performance of the standard MRI protocol was noninferior to the performance of the three CT technologies. On comparing the imaging performance of MRI and CT scans, it becomes clear that MRI has a huge potential for applications in daily clinical practice. Given its additional benefits of a good contrast ratio and complete absence of ionizing radiation, further studies are needed to explore this clinical potential in greater detail.
Wang, Chunhao; Subashi, Ergys; Yin, Fang-Fang; Chang, Zheng
2016-01-01
Purpose: To develop a dynamic fractal signature dissimilarity (FSD) method as a novel image texture analysis technique for the quantification of tumor heterogeneity information for better therapeutic response assessment with dynamic contrast-enhanced (DCE)-MRI. Methods: A small animal antiangiogenesis drug treatment experiment was used to demonstrate the proposed method. Sixteen LS-174T implanted mice were randomly assigned into treatment and control groups (n = 8/group). All mice received bevacizumab (treatment) or saline (control) three times in two weeks, and one pretreatment and two post-treatment DCE-MRI scans were performed. In the proposed dynamic FSD method, a dynamic FSD curve was generated to characterize the heterogeneity evolution during the contrast agent uptake, and the area under FSD curve (AUCFSD) and the maximum enhancement (MEFSD) were selected as representative parameters. As for comparison, the pharmacokinetic parameter Ktrans map and area under MR intensity enhancement curve AUCMR map were calculated. Besides the tumor’s mean value and coefficient of variation, the kurtosis, skewness, and classic Rényi dimensions d1 and d2 of Ktrans and AUCMR maps were evaluated for heterogeneity assessment for comparison. For post-treatment scans, the Mann–Whitney U-test was used to assess the differences of the investigated parameters between treatment/control groups. The support vector machine (SVM) was applied to classify treatment/control groups using the investigated parameters at each post-treatment scan day. Results: The tumor mean Ktrans and its heterogeneity measurements d1 and d2 values showed significant differences between treatment/control groups in the second post-treatment scan. In contrast, the relative values (in reference to the pretreatment value) of AUCFSD and MEFSD in both post-treatment scans showed significant differences between treatment/control groups. When using AUCFSD and MEFSD as SVM input for treatment/control classification, the achieved accuracies were 93.8% and 93.8% at first and second post-treatment scan days, respectively. In comparison, the classification accuracies using d1 and d2 of Ktrans map were 87.5% and 100% at first and second post-treatment scan days, respectively. Conclusions: As quantitative metrics of tumor contrast agent uptake heterogeneity, the selected parameters from the dynamic FSD method accurately captured the therapeutic response in the experiment. The potential application of the proposed method is promising, and its addition to the existing DCE-MRI techniques could improve DCE-MRI performance in early assessment of treatment response. PMID:26936718
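As an indication of how the curve-level summaries AUCFSD and MEFSD could feed a classifier, the hedged sketch below computes the two features from per-animal FSD-versus-time curves and runs a leave-one-out SVM; the curves here are synthetic stand-ins, and the fractal signature computation itself (the core of the FSD method) is not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

def curve_features(fsd_curve, t):
    """Summarise one dynamic FSD curve (heterogeneity vs. time during contrast
    uptake) by its area under the curve and maximum enhancement, both taken
    relative to the baseline (first time point)."""
    rel = fsd_curve - fsd_curve[0]
    auc = np.sum(0.5 * (rel[1:] + rel[:-1]) * np.diff(t))   # AUC_FSD (trapezoid rule)
    me = rel.max()                                          # ME_FSD
    return auc, me

# Hypothetical data: one FSD curve per mouse (rows), sampled at times t.
rng = np.random.default_rng(0)
t = np.linspace(0, 300, 40)                      # seconds after injection
control = 1.0 + 0.8 * (1 - np.exp(-t / 60)) + 0.05 * rng.standard_normal((8, t.size))
treated = 1.0 + 0.4 * (1 - np.exp(-t / 60)) + 0.05 * rng.standard_normal((8, t.size))
X = np.array([curve_features(c, t) for c in np.vstack([control, treated])])
y = np.array([0] * 8 + [1] * 8)                  # 0 = control, 1 = treated

acc = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut()).mean()
print("leave-one-out accuracy:", acc)
```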
Ellis, Judith
2006-07-01
The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles, to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. The Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent, with the value of the tool kit, or the support that clinical practice benchmarking requires to be effective, not always recognized or provided by National Health Service managers, who are absorbed with the use of quantitative benchmarking approaches and the measurability of comparative performance data. This review of published benchmarking literature was based on an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature, moving to benchmarking activity in health services, and including not only published examples of benchmarking approaches and models but also consideration of web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also, in the main, descriptive in its support of the effectiveness of benchmarking activity, and although this does not seem to have restricted its popularity in quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.
Test suite for image-based motion estimation of the brain and tongue
NASA Astrophysics Data System (ADS)
Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.
2017-03-01
Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an "image synthesis" test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that inconsistent motion can yield "ghost" shear strains, which are a function of slice acquisition viability as opposed to a true physical deformation.
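As a toy illustration of the tag-synthesis step (not the FE-driven pipeline of the study), the sketch below generates a SPAMM-style grid-tagged image of a rigidly rotated disc by sampling the reference tag pattern at back-rotated coordinates; the tag period, object shape and rotation values are assumptions, and slice-profile and acquisition-inconsistency effects are omitted.

```python
import numpy as np

def tagged_image(n=128, tag_period_px=8.0, rotation_deg=0.0):
    """Synthetic 2D SPAMM-style tagged image of a rigidly rotated object.
    Tags are laid down in the reference configuration, so the image at a later
    time samples the tag pattern at the back-rotated coordinates. This is only
    a toy stand-in for the FE-driven tag synthesis described in the abstract."""
    x = np.arange(n) - n / 2
    X, Y = np.meshgrid(x, x)
    th = np.deg2rad(rotation_deg)
    Xr = np.cos(th) * X + np.sin(th) * Y          # inverse (back) rotation
    Yr = -np.sin(th) * X + np.cos(th) * Y
    k = 2 * np.pi / tag_period_px
    pattern = 0.25 * (1 + np.cos(k * Xr)) * (1 + np.cos(k * Yr))  # grid tags
    mask = X**2 + Y**2 <= (0.4 * n)**2            # circular "organ"
    return pattern * mask

frames = [tagged_image(rotation_deg=a) for a in (0, 2, 4, 6)]  # mild rotation series
print(frames[0].shape)
```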
NASA Astrophysics Data System (ADS)
Kez, V.; Liu, F.; Consalvi, J. L.; Ströhle, J.; Epple, B.
2016-03-01
Oxy-fuel combustion is a promising CO2 capture technology for combustion systems. This process is characterized by much higher CO2 concentrations in the combustion system compared to conventional air-fuel combustion. To accurately predict the enhanced thermal radiation in oxy-fuel combustion, it is essential to take into account the non-gray nature of gas radiation. In this study, radiation heat transfer in a 3D model gas turbine combustor under two test cases at 20 atm total pressure was calculated with various non-gray gas radiation models, including the statistical narrow-band (SNB) model, the statistical narrow-band correlated-k (SNBCK) model, the wide-band correlated-k (WBCK) model, the full-spectrum correlated-k (FSCK) model, and several weighted-sum-of-gray-gases (WSGG) models. Calculations with SNB, SNBCK, and FSCK were conducted using the updated EM2C SNB model parameters. Results of the SNB model are considered as the benchmark solution to evaluate the accuracy of the other models considered. Results of SNBCK and FSCK are in good agreement with the benchmark solution. The WBCK model is less accurate than SNBCK or FSCK. Considering the three formulations of the WBCK model, the multiple-gases formulation is the best choice regarding accuracy and computational cost. The WSGG model with the parameters of Bordbar et al. (2014) [20] is the most accurate of the three investigated WSGG models. Use of the gray WSGG formulation leads to significant deviations from the benchmark data and should not be applied to predict radiation heat transfer in oxy-fuel combustion systems. A best practice is suggested for incorporating state-of-the-art gas radiation models to achieve high accuracy of radiation heat transfer calculations at minimal increase in computational cost in CFD simulations of oxy-fuel combustion systems for pressure path lengths up to about 10 bar m.
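For reference, a WSGG model evaluates the total emissivity as eps = sum_i a_i(T) * (1 - exp(-kappa_i * p * L)); the sketch below implements that sum with placeholder gray-gas coefficients (not the fitted parameters of Bordbar et al. or of any other published model).

```python
import numpy as np

def wsgg_emissivity(T, pL, kappa, b):
    """Total emissivity from a weighted-sum-of-gray-gases model:
        eps = sum_i a_i(T) * (1 - exp(-kappa_i * pL))
    with temperature-dependent weights a_i(T) given by polynomials in T.
    kappa : gray-gas absorption coefficients [1/(bar m)]
    b     : polynomial coefficients, b[i, j] multiplies (T/1000)**j
    pL    : pressure path length [bar m]
    The numbers below are placeholders for illustration only."""
    Tr = T / 1000.0
    a = np.array([np.polyval(bi[::-1], Tr) for bi in b])   # weights a_i(T)
    return float(np.sum(a * (1.0 - np.exp(-kappa * pL))))

kappa = np.array([0.2, 2.0, 20.0])                         # placeholder gray gases
b = np.array([[0.30, 0.05], [0.25, -0.02], [0.10, 0.01]])  # placeholder weight polynomials
print(wsgg_emissivity(T=1500.0, pL=2.0, kappa=kappa, b=b))
```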
Test Suite for Image-Based Motion Estimation of the Brain and Tongue
Ramsey, Jordan; Prince, Jerry L.; Gomez, Arnold D.
2017-01-01
Noninvasive analysis of motion has important uses as qualitative markers for organ function and to validate biomechanical computer simulations relative to experimental observations. Tagged MRI is considered the gold standard for noninvasive tissue motion estimation in the heart, and this has inspired multiple studies focusing on other organs, including the brain under mild acceleration and the tongue during speech. As with other motion estimation approaches, using tagged MRI to measure 3D motion includes several preprocessing steps that affect the quality and accuracy of estimation. Benchmarks, or test suites, are datasets of known geometries and displacements that act as tools to tune tracking parameters or to compare different motion estimation approaches. Because motion estimation was originally developed to study the heart, existing test suites focus on cardiac motion. However, many fundamental differences exist between the heart and other organs, such that parameter tuning (or other optimization) with respect to a cardiac database may not be appropriate. Therefore, the objective of this research was to design and construct motion benchmarks by adopting an “image synthesis” test suite to study brain deformation due to mild rotational accelerations, and a benchmark to model motion of the tongue during speech. To obtain a realistic representation of mechanical behavior, kinematics were obtained from finite-element (FE) models. These results were combined with an approximation of the acquisition process of tagged MRI (including tag generation, slice thickness, and inconsistent motion repetition). To demonstrate an application of the presented methodology, the effect of motion inconsistency on synthetic measurements of head-brain rotation and deformation was evaluated. The results indicated that acquisition inconsistency is roughly proportional to head rotation estimation error. Furthermore, when evaluating non-rigid deformation, the results suggest that inconsistent motion can yield “ghost” shear strains, which are a function of slice acquisition viability as opposed to a true physical deformation. PMID:28781414
NASA Astrophysics Data System (ADS)
Feller, D. F.
1993-07-01
This collection of benchmark timings represents a snapshot of the hardware and software capabilities available for ab initio quantum chemical calculations at Pacific Northwest Laboratory's Molecular Science Research Center in late 1992 and early 1993. The 'snapshot' nature of these results should not be underestimated, because of the speed with which both hardware and software are changing. Even during the brief period of this study, we were presented with newer, faster versions of several of the codes. However, the deadline for completing this edition of the benchmarks precluded updating all the relevant entries in the tables. As will be discussed below, a similar situation occurred with the hardware. The timing data included in this report are subject to all the normal failures, omissions, and errors that accompany any human activity. In an attempt to mimic the manner in which calculations are typically performed, we have run the calculations with the maximum number of defaults provided by each program and a near minimum amount of memory. This approach may not produce the fastest performance that a particular code can deliver. It is not known to what extent improved timings could be obtained for each code by varying the run parameters. If sufficient interest exists, it might be possible to compile a second list of timing data corresponding to the fastest observed performance from each application, using an unrestricted set of input parameters. Improvements in I/O might have been possible by fine tuning the Unix kernel, but we resisted the temptation to make changes to the operating system. Due to the large number of possible variations in levels of operating system, compilers, speed of disks and memory, versions of applications, etc., readers of this report may not be able to exactly reproduce the times indicated. Copies of the output files from individual runs are available if questions arise about a particular set of timings.
Nakamura, Manami; Makabe, Takeshi; Tezuka, Hideomi; Miura, Takahiro; Umemura, Takuma; Sugimori, Hiroyuki; Sakata, Motomichi
2013-04-01
The purpose of this study was to optimize scan parameters for evaluation of carotid plaque characteristics by k-space trajectory (radial scan method), using a custom-made carotid plaque phantom. The phantom was composed of simulated sternocleidomastoid muscle and four types of carotid plaque. The effect of chemical shift artifact was compared using T1 weighted images (T1WI) of the phantom obtained with and without fat suppression, and using two types of k-space trajectory (the radial scan method and the Cartesian method). The ratio of signal intensity of simulated sternocleidomastoid muscle to the signal intensity of hematoma, blood (including heparin), lard, and mayonnaise was compared among various repetition times (TR) using T1WI and T2 weighted imaging (T2WI). In terms of chemical shift artifacts, image quality was improved using fat suppression for both the radial scan and Cartesian methods. In terms of signal ratio, the highest values were obtained for the radial scan method with TR of 500 ms for T1WI, and TR of 3000 ms for T2WI. For evaluation of carotid plaque characteristics using the radial scan method, chemical shift artifacts were reduced with fat suppression. Signal ratio was improved by optimizing the TR settings for T1WI and T2WI. These results suggest the potential for using magnetic resonance imaging for detailed evaluation of carotid plaque.
Inversion of scattered radiance horizon profiles for gaseous concentrations and aerosol parameters
NASA Technical Reports Server (NTRS)
Malchow, H. L.; Whitney, C. K.
1977-01-01
Techniques have been developed and used to invert limb scan measurements for vertical profiles of atmospheric state parameters. The parameters which can be found are concentrations of Rayleigh scatterers, ozone, NO2, and aerosols, and aerosol physical properties including a Junge size-distribution parameter and the real and imaginary parts of the index of refraction.
Using video-oriented instructions to speed up sequence comparison.
Wozniak, A
1997-04-01
This document presents an implementation of the well-known Smith-Waterman algorithm for the comparison of protein and nucleic acid sequences, using specialized video instructions. These instructions, SIMD-like in their design, make parallelization of the algorithm possible at the instruction level. Benchmarks on an ULTRA SPARC running at 167 MHz show a speed-up factor of two compared to the same algorithm implemented with integer instructions on the same machine. Performance reaches over 18 million matrix cells per second on a single processor, giving, to our knowledge, the fastest implementation of the Smith-Waterman algorithm on a workstation. The accelerated procedure was introduced in LASSAP (a LArge Scale Sequence compArison Package developed at INRIA), which handles parallelism at a higher level. On a SUN Enterprise 6000 server with 12 processors, a speed of nearly 200 million matrix cells per second has been obtained. A sequence of length 300 amino acids is scanned against SWISSPROT R33 (18,531,385 residues) in 29 s. This procedure is not restricted to databank scanning; it applies to all cases handled by LASSAP (intra- and inter-bank comparisons, Z-score computation, etc.).
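For reference, the recurrence that the SIMD video instructions accelerate is the standard Smith-Waterman local-alignment score; the scalar sketch below uses a simple linear gap penalty (the paper's exact scoring scheme is not specified in the abstract) and scores one cell at a time, which is precisely the work that vectorization parallelizes.

```python
import numpy as np

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment score between sequences a and b using the
    Smith-Waterman recurrence with a linear gap penalty."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,
                          H[i - 1, j - 1] + s,   # match/mismatch
                          H[i - 1, j] + gap,     # gap in b
                          H[i, j - 1] + gap)     # gap in a
            best = max(best, H[i, j])
    return best

print(smith_waterman_score("HEAGAWGHEE", "PAWHEAE"))
```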
NASA Astrophysics Data System (ADS)
Unger, Jakob; Sun, Tianchen; Chen, Yi-Ling; Phipps, Jennifer E.; Bold, Richard J.; Darrow, Morgan A.; Ma, Kwan-Liu; Marcu, Laura
2018-01-01
An important step in establishing the diagnostic potential of emerging optical imaging techniques is accurate registration between imaging data and the corresponding tissue histopathology typically used as the gold standard in clinical diagnostics. We present a method to precisely register data acquired with a point-scanning spectroscopic imaging technique from fresh surgical tissue specimen blocks with corresponding histological sections. Using a visible aiming beam to augment point-scanning multispectral time-resolved fluorescence spectroscopy on video images, we evaluate two different markers for the registration with histology: fiducial markers created using a 405-nm CW laser, and the tissue block's outer shape characteristics. We compare the registration performance of benchmark methods using either the fiducial markers or the outer shape characteristics alone with that of a hybrid method using both feature types. The hybrid method was found to perform best, reaching an average error of 0.78±0.67 mm. This method provides a robust framework to validate the diagnostic abilities of optical fiber-based techniques and, furthermore, enables the application of supervised machine learning techniques to automate tissue characterization.
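For the fiducial-marker part of such a registration, a least-squares rigid fit (Kabsch/Procrustes) between matched marker positions is a standard building block; the sketch below shows only that generic step with mock coordinates, and the paper's hybrid scoring of the block's outer shape is not reproduced.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares 2D rigid transform (rotation + translation) mapping
    matched fiducial points src -> dst (Kabsch/Procrustes)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Mock fiducial marks (mm) on the imaging data and on the histology section.
src = np.array([[0.0, 0.0], [8.0, 1.0], [2.0, 9.0], [9.0, 8.0]])
theta = np.deg2rad(12.0)
Rtrue = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = src @ Rtrue.T + np.array([3.0, -1.5]) \
      + 0.05 * np.random.default_rng(0).standard_normal(src.shape)
R, t = rigid_fit(src, dst)
err = np.linalg.norm(src @ R.T + t - dst, axis=1).mean()
print("mean fiducial registration error [mm]:", err)
```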
NASA Astrophysics Data System (ADS)
Bean, Glenn E.; Witkin, David B.; McLouth, Tait D.; Zaldivar, Rafael J.
2018-02-01
Research on the selective laser melting (SLM) method of laser powder bed fusion additive manufacturing (AM) has shown that the surface and internal quality of AM parts is directly related to machine settings such as laser energy density, scanning strategies, and atmosphere. To optimize laser parameters for improved component quality, the energy density is typically controlled via laser power, scanning rate, and scanning strategy, but it can also be controlled by changing the spot size via a laser focal-plane shift. The present work conducted by The Aerospace Corporation was initiated after observing inconsistent build quality of parts printed using OEM-installed settings. Initial builds of Inconel 718 witness geometries using OEM laser parameters were evaluated for surface roughness, density, and porosity while varying energy density via laser focus shift. Based on these results, hardware and laser parameter adjustments were made in order to improve build quality and consistency. Tensile testing was also conducted to investigate the effect of build-plate location and laser settings on SLM 718. This work has provided insight into the limitations of OEM parameters compared with optimized parameters towards the goal of manufacturing aerospace-grade parts, and has led to the development of a methodology for laser parameter tuning that can be applied to other alloy systems. Additionally, evidence was found that for 718, which derives its strength from post-manufacturing heat treatment, tensile testing may not be sensitive to defects that would reduce component performance. Ongoing research is being conducted towards identifying appropriate testing and analysis methods for screening and quality assurance.
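One commonly used way to summarize the laser settings mentioned above is the volumetric energy density E = P/(v*h*t); the helper below computes it with illustrative (not OEM) parameter values, and note that a focal-plane shift changes the spot size, which this volumetric metric alone does not capture.

```python
def volumetric_energy_density(power_w, scan_speed_mm_s,
                              hatch_spacing_mm, layer_thickness_mm):
    """Commonly used SLM process metric E = P / (v * h * t) in J/mm^3.
    Shifting the laser focal plane changes the spot size and therefore the
    areal intensity at fixed power, which this metric does not reflect."""
    return power_w / (scan_speed_mm_s * hatch_spacing_mm * layer_thickness_mm)

# Illustrative Inconel 718 style settings (not the OEM values from the study).
print(volumetric_energy_density(285, 960, 0.11, 0.04), "J/mm^3")
```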
Yasaka, Koichiro; Akai, Hiroyuki; Mackin, Dennis; Court, Laurence; Moros, Eduardo; Ohtomo, Kuni; Kiryu, Shigeru
2017-05-01
Quantitative computed tomography (CT) texture analyses of images with and without filtration are gaining attention as a way to capture the heterogeneity of tumors. The aim of this study was to investigate how quantitative texture parameters based on image filtering vary among different CT scanners, using a phantom developed for radiomics studies. A phantom, consisting of 10 different cartridges with various textures, was scanned under 6 different scanning protocols using four CT scanners from four different vendors. CT texture analyses were performed for both unfiltered images and filtered images (using a Laplacian of Gaussian spatial band-pass filter) featuring fine, medium, and coarse textures. Forty-five regions of interest were placed for each cartridge (x) in a specific scan image set (y), and the average of the texture values (T(x,y)) was calculated. The interquartile range (IQR) of T(x,y) among the 6 scans was calculated for a specific cartridge (IQR(x)), while the IQR of T(x,y) among the 10 cartridges was calculated for a specific scan (IQR(y)), and the median IQR(y) was then calculated over the 6 scans (as the control IQR, IQRc). The median of their quotient (IQR(x)/IQRc) among the 10 cartridges was defined as the variability index (VI). The VI was relatively small for the mean in unfiltered images (0.011) and for standard deviation (0.020-0.044) and entropy (0.040-0.044) in filtered images. Skewness and kurtosis in filtered images featuring medium and coarse textures were relatively variable across different CT scanners, with VIs of 0.638-0.692 and 0.430-0.437, respectively. Quantitative CT texture parameters thus range from robust to highly variable among different scanners, and the behavior of these parameters should be taken into consideration.
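The VI definition given in the abstract can be written down directly; the sketch below assumes a 10x6 matrix of per-cartridge, per-protocol mean feature values, with synthetic numbers chosen so that cartridge differences dominate scanner noise.

```python
import numpy as np

def variability_index(T):
    """Variability index for one texture feature.
    T[x, y] is the mean feature value for cartridge x under scan protocol y
    (here 10 cartridges x 6 protocols, as in the phantom study)."""
    iqr = lambda v, axis: np.percentile(v, 75, axis=axis) - np.percentile(v, 25, axis=axis)
    iqr_x = iqr(T, axis=1)          # spread across scan protocols, per cartridge
    iqr_y = iqr(T, axis=0)          # spread across cartridges, per scan protocol
    iqr_c = np.median(iqr_y)        # control IQR (IQRc)
    return np.median(iqr_x / iqr_c)

rng = np.random.default_rng(0)
cartridge_effect = np.linspace(1.0, 10.0, 10)[:, None]   # real texture differences
scanner_effect = 0.1 * rng.standard_normal((10, 6))      # scanner-to-scanner noise
print(variability_index(cartridge_effect + scanner_effect))  # small VI = robust feature
```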
Kook, Michael S; Cho, Hyun-soo; Seong, Mincheol; Choi, Jaewan
2005-11-01
To evaluate the ability of scanning laser polarimetry parameters and a novel deviation map algorithm to discriminate between healthy and early glaucomatous eyes with localized visual field (VF) defects confined to one hemifield. Prospective case-control study. Seventy glaucomatous eyes with localized VF defects and 66 normal controls. A Humphrey field analyzer 24-2 full-threshold test and scanning laser polarimetry with variable corneal compensation were used. We assessed the sensitivity and specificity of scanning laser polarimetry parameters, sensitivity and cutoff values for scanning laser polarimetry deviation map algorithms at different specificity values (80%, 90%, and 95%) in the detection of glaucoma, and correlations between the algorithms of scanning laser polarimetry and of the pattern deviation derived from Humphrey field analyzer testing. There were significant differences between the glaucoma group and normal subjects in the mean parametric values of the temporal, superior, nasal, inferior, temporal (TSNIT) average, superior average, inferior average, and TSNIT standard deviation (SD) (P<0.05). The sensitivity and specificity of each scanning laser polarimetry variable were as follows: TSNIT, 44.3% (95% confidence interval [CI], 39.8%-49.8%) and 100% (95.4%-100%); superior average, 30% (25.5%-34.5%) and 97% (93.5%-100%); inferior average, 45.7% (42.2%-49.2%) and 100% (95.8%-100%); and TSNIT SD, 30% (25.9%-34.1%) and 97% (93.2%-100%), respectively (when abnormal was defined as P<0.05). Based on nerve fiber indicator cutoff values of ≥30 and ≥51 to indicate glaucoma, sensitivities were 54.3% (50.1%-58.5%) and 10% (6.4%-13.6%), and specificities were 97% (93.2%-100%) and 100% (95.8%-100%), respectively. The range of areas under the receiver operating characteristic curves using the scanning laser polarimetry deviation map algorithm was 0.790 to 0.879. Overall sensitivities combining each probability scale and severity score at 80%, 90%, and 95% specificities were 90.0% (95% CI, 86.4%-93.6%), 71.4% (67.4%-75.4%), and 60.0% (56.2%-63.8%), respectively. There was a statistically significant correlation between the scanning laser polarimetry severity score and the VF severity score (R2 = 0.360, P<0.001). Scanning laser polarimetry parameters may not be sufficiently sensitive to detect glaucomatous patients with localized VF damage. Our algorithm using the scanning laser polarimetry deviation map may enhance the understanding of scanning laser polarimetry printouts in terms of the locality, deviation size, and severity of localized retinal nerve fiber layer defects in eyes with localized VF loss.
Adaptive firefly algorithm: parameter analysis and its application.
Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin
2014-01-01
As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate parameter selection and adaptation strategies in a modified firefly algorithm - the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa, including (1) a distance-based light absorption coefficient; (2) a gray coefficient enhancing fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated on widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to a real-world problem - protein tertiary structure prediction - the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native structure, using noise-free constraints and constraints with 10% Gaussian white noise, respectively.
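For orientation, the baseline firefly update that AdaFa builds on is sketched below with fixed beta0, gamma and alpha; the adaptive absorption coefficient, gray coefficient and dynamic randomization schedules that define AdaFa are not reproduced, and the bounds and test function are assumptions.

```python
import numpy as np

def firefly(f, dim=5, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Basic firefly algorithm for minimisation. Each firefly moves toward every
    brighter one with attractiveness beta0*exp(-gamma*r^2) plus a random walk
    scaled by alpha."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    fx = np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fx[j] < fx[i]:                       # j is brighter (lower cost)
                    r2 = np.sum((x[i] - x[j])**2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    fx[i] = f(x[i])
    k = np.argmin(fx)
    return x[k], fx[k]

best, best_f = firefly(lambda v: np.sum(v**2))
print(best_f)
```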
Adaptive Firefly Algorithm: Parameter Analysis and its Application
Shen, Hong-Bin
2014-01-01
As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate parameter selection and adaptation strategies in a modified firefly algorithm - the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa, including (1) a distance-based light absorption coefficient; (2) a gray coefficient enhancing fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated on widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to a real-world problem - protein tertiary structure prediction - the results demonstrated that the improved variants can rebuild the tertiary structure with an average root mean square deviation of less than 0.4 Å and 1.5 Å from the native structure, using noise-free constraints and constraints with 10% Gaussian white noise, respectively. PMID:25397812
Benchmarking of Touschek Beam Lifetime Calculations for the Advanced Photon Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, A.; Yang, B.
2017-06-25
Particle loss from Touschek scattering is one of the most significant issues faced by present and future synchrotron light source storage rings. For example, the predicted, Touschek-dominated beam lifetime for the Advanced Photon Source (APS) Upgrade lattice in 48-bunch, 200-mA timing mode is only ~2 h. In order to understand the reliability of the predicted lifetime, a series of measurements with various beam parameters was performed on the present APS storage ring. This paper first describes the entire beam lifetime measurement process, then compares the measured lifetime with the one calculated by applying the measured beam parameters. The results show very good agreement.
Determination of the Electrochemical Area of Screen-Printed Electrochemical Sensing Platforms.
García-Miranda Ferrari, Alejandro; Foster, Christopher W; Kelly, Peter J; Brownson, Dale A C; Banks, Craig E
2018-06-08
Screen-printed electrochemical sensing platforms, due to their economies of scale and high reproducibility, provide a useful approach to translate laboratory-based electrochemistry into the field. An important factor when utilising screen-printed electrodes (SPEs) is the determination of their real electrochemical surface area, which allows for the benchmarking of these SPEs and is an important parameter in quality control. In this paper, we consider the use of cyclic voltammetry and chronocoulometry for the determination of the real electrochemical area of screen-printed electrochemical sensing platforms, highlighting to experimentalists the various parameters that need to be diligently considered and controlled in order to obtain useful measurements of the real electroactive area.
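One common cyclic-voltammetry route to the electroactive area (not necessarily the exact protocol of this paper) is to record peak currents of a reversible redox probe at several scan rates and fit the Randles-Sevcik equation; the sketch below does that with mock data and assumed probe properties.

```python
import numpy as np

# Randles-Sevcik for a reversible couple at 25 C:
#   i_p = 2.69e5 * n**1.5 * A * sqrt(D) * C * sqrt(v)
# with i_p [A], A [cm^2], D [cm^2/s], C [mol/cm^3], v [V/s].
# Fitting i_p against sqrt(v) over several scan rates gives a slope from which
# the real electroactive area A follows. All numbers here are illustrative.

n = 1                       # electrons transferred
D = 6.5e-6                  # diffusion coefficient of the probe, cm^2/s (assumed)
C = 1.0e-6                  # bulk concentration, mol/cm^3 (1 mM, assumed)

v = np.array([0.01, 0.025, 0.05, 0.1, 0.2])        # scan rates, V/s
ip = np.array([4.1, 6.4, 9.2, 12.9, 18.4]) * 1e-6  # peak currents, A (mock data)

slope = np.polyfit(np.sqrt(v), ip, 1)[0]
area = slope / (2.69e5 * n**1.5 * np.sqrt(D) * C)
print(f"electroactive area ~ {area:.3f} cm^2")
```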
A Self Adaptive Differential Evolution Algorithm for Global Optimization
NASA Astrophysics Data System (ADS)
Kumar, Pravesh; Pant, Millie
This paper presents a new Differential Evolution algorithm based on the hybridization of adaptive control parameters and trigonometric mutation. First, we propose a self-adaptive DE named ADE, in which the control parameters F and Cr are not fixed at constant values but are chosen iteratively. The proposed algorithm is further modified by applying trigonometric mutation, and the corresponding algorithm is named ATDE. The performance of ATDE is evaluated on a set of 8 benchmark functions and the results are compared with the classical DE algorithm in terms of average fitness function value, number of function evaluations, convergence time and success rate. The numerical results show the competence of the proposed algorithm.
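A compact sketch of the two ingredients named above (iteration-varying F and Cr, and trigonometric mutation in the sense of Fan and Lampinen) is given below; the abstract does not specify the actual adaptation rule, so the linear schedules and the trigonometric-mutation probability pt are placeholders.

```python
import numpy as np

def atde(f, dim=10, pop_size=40, iters=300, pt=0.1, seed=0):
    """DE sketch with iteration-varying F, Cr and (with probability pt) the
    trigonometric mutation of Fan & Lampinen. The linear F/Cr schedules are
    placeholders for the paper's adaptive rule."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for t in range(iters):
        F = 0.9 - 0.5 * t / iters          # placeholder schedules
        Cr = 0.5 + 0.3 * t / iters
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([k for k in range(pop_size) if k != i], 3, replace=False)
            if rng.random() < pt:          # trigonometric mutation
                w = np.abs(fit[[r1, r2, r3]])
                w = w / w.sum() if w.sum() > 0 else np.full(3, 1 / 3)
                v = (pop[r1] + pop[r2] + pop[r3]) / 3 \
                    + (w[1] - w[0]) * (pop[r1] - pop[r2]) \
                    + (w[2] - w[1]) * (pop[r2] - pop[r3]) \
                    + (w[0] - w[2]) * (pop[r3] - pop[r1])
            else:                          # DE/rand/1 mutation
                v = pop[r1] + F * (pop[r2] - pop[r3])
            mask = rng.random(dim) < Cr    # binomial crossover
            mask[rng.integers(dim)] = True
            trial = np.where(mask, v, pop[i])
            ft = f(trial)
            if ft <= fit[i]:               # greedy selection
                pop[i], fit[i] = trial, ft
    k = np.argmin(fit)
    return pop[k], fit[k]

best, best_f = atde(lambda x: np.sum(x**2))
print(best_f)
```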
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Pickalov, Valery; Nagella, Srikanth; Pasca, Edoardo; Withers, Philip J.
2018-01-01
In the field of computerized tomographic imaging, many novel reconstruction techniques are routinely tested using simplistic numerical phantoms, e.g. the well-known Shepp-Logan phantom. These phantoms cannot sufficiently cover the broad spectrum of applications in CT imaging where, for instance, smooth or piecewise-smooth 3D objects are common. TomoPhantom provides quick access to an external library of modular analytical 2D/3D phantoms with temporal extensions. In TomoPhantom, quite complex phantoms can be built using additive combinations of geometrical objects, such as Gaussians, parabolas, cones, ellipses and rectangles, as well as volumetric extensions of them. The newly designed phantoms are better suited for benchmarking and testing of different image processing techniques. Specifically, tomographic reconstruction algorithms which employ 2D and 3D scanning geometries can be rigorously analyzed using the software. TomoPhantom also provides the capability of obtaining analytical tomographic projections, which further extends the applicability of the software towards more realistic testing that is free from the "inverse crime". All core modules of the package are written in the C language with OpenMP, and wrappers for Python and MATLAB are provided to enable easy access. Due to the C-based multi-threaded implementation, volumetric phantoms of high spatial resolution can be obtained with computational efficiency.
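The additive-object idea can be illustrated without the library itself; the sketch below sums a few Gaussians and an ellipse on a grid to build a smooth-plus-piecewise-constant 2D phantom (this deliberately does not call the TomoPhantom API, and all object parameters are made up).

```python
import numpy as np

def gaussian2d(X, Y, x0, y0, sx, sy, amp):
    """Smooth additive object."""
    return amp * np.exp(-((X - x0)**2 / (2 * sx**2) + (Y - y0)**2 / (2 * sy**2)))

def ellipse2d(X, Y, x0, y0, a, b, amp):
    """Piecewise-constant additive object."""
    return amp * (((X - x0) / a)**2 + ((Y - y0) / b)**2 <= 1.0)

# Build a phantom by adding objects on a grid, in the spirit of the modular
# analytical phantoms described above.
n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
phantom = (ellipse2d(X, Y, 0.0, 0.0, 0.8, 0.6, 1.0)
           + gaussian2d(X, Y, -0.3, 0.1, 0.15, 0.1, 0.5)
           + gaussian2d(X, Y, 0.35, -0.2, 0.08, 0.2, -0.3))
print(phantom.shape, phantom.min(), phantom.max())
```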
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gongzhang, R.; Xiao, B.; Lardner, T.
2014-02-18
This paper presents a robust frequency-diversity-based algorithm for clutter reduction in ultrasonic A-scan waveforms. The performance of conventional spectral-temporal techniques such as Split Spectrum Processing (SSP) is highly dependent on parameter selection, especially when the signal-to-noise ratio (SNR) is low. Although spatial beamforming offers noise reduction with less sensitivity to parameter variation, phased array techniques are not always available. The proposed algorithm first selects an ascending series of frequency bands. For each selected band, a signal is reconstructed in which a defect is taken to be present only where all frequency components have a uniform sign. Combining all reconstructed signals through averaging gives a probability profile of potential defect positions. To facilitate data collection and validate the proposed algorithm, Full Matrix Capture was applied to austenitic steel and high nickel alloy (HNA) samples with 5 MHz transducer arrays. When processing A-scan signals with unrefined parameters, the proposed algorithm enhances the SNR by 20 dB for both samples, and consequently defects are more visible in B-scan images created from the large number of A-scan traces. Importantly, the proposed algorithm is considered robust, whereas SSP is shown to fail on the austenitic steel data and achieves a smaller SNR enhancement on the HNA data.
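The abstract only outlines the procedure, so the following is a minimal sketch of one plausible reading of it: the A-scan is decomposed into a few narrow ascending frequency bands, samples where all band-limited signals agree in sign are kept, and the retained reconstructions are averaged into a defect-likelihood profile. The filter design (4th-order Butterworth via SciPy), band edges and sampling rate are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def sign_uniform_clutter_reduction(ascan, fs, bands):
    """Average band-limited reconstructions, keeping only sign-consistent samples.

    ascan : 1-D A-scan waveform
    fs    : sampling frequency in Hz
    bands : list of (f_low, f_high) tuples in Hz, in ascending order
    """
    filtered = []
    for f_lo, f_hi in bands:
        b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
        filtered.append(filtfilt(b, a, ascan))
    filtered = np.array(filtered)  # shape: (n_bands, n_samples)

    # A sample is kept only where every band-limited signal has the same sign
    uniform = np.all(filtered > 0, axis=0) | np.all(filtered < 0, axis=0)

    # Averaging the masked reconstructions gives a profile of likely defect positions
    return np.mean(filtered, axis=0) * uniform

# Illustrative usage on a synthetic 5 MHz pulse buried in noise
fs = 50e6
t = np.arange(0, 20e-6, 1 / fs)
echo = np.exp(-((t - 8e-6) * 2e6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
noisy = echo + 0.5 * np.random.default_rng(0).standard_normal(t.size)
profile = sign_uniform_clutter_reduction(noisy, fs,
                                         [(3e6, 4e6), (4e6, 5e6), (5e6, 6e6), (6e6, 7e6)])
```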
Design Optimization of a Hybrid Electric Vehicle Powertrain
NASA Astrophysics Data System (ADS)
Mangun, Firdause; Idres, Moumen; Abdullah, Kassim
2017-03-01
This paper presents an optimization study of a hybrid electric vehicle (HEV) powertrain using the Genetic Algorithm (GA) method. It focuses on optimizing the parameters of the powertrain components, including supercapacitors, to obtain maximum fuel economy. Vehicle modelling is based on a Quasi-Static-Simulation (QSS) backward-facing approach. A combined city (FTP-75)-highway (HWFET) drive cycle is utilized for the design process. Seeking a global optimum, the GA was executed with different initial settings to obtain sets of optimal parameters. Starting from a benchmark HEV, the optimization results in a smaller engine (2 l instead of 3 l) and a larger battery (15.66 kWh instead of 2.01 kWh). This leads to a reduction of 38.3% in fuel consumption and 30.5% in equivalent fuel consumption. The optimized parameters are also compared with actual values for HEVs on the market.
Physical and numerical studies of a fracture system model
NASA Astrophysics Data System (ADS)
Piggott, Andrew R.; Elsworth, Derek
1989-03-01
Physical and numerical studies of transient flow in a model of discretely fractured rock are presented. The physical model is a thermal analogue to fractured media flow, consisting of idealized disc-shaped fractures. The numerical model is used to predict the behavior of the physical model. The use of different insulating materials to encase the physical model allows the effects of differing leakage magnitudes to be examined. A procedure for determining appropriate leakage parameters is documented. These parameters are used in forward analysis to predict the thermal response of the physical model. Knowledge of the leakage parameters and of the temporal variation of boundary conditions is shown to be essential to an accurate prediction. Favorable agreement between numerical and physical results is illustrated. The physical model provides a data source for the benchmarking of alternative numerical algorithms.
Determination and correction of persistent biases in quantum annealers
Perdomo-Ortiz, Alejandro; O’Gorman, Bryan; Fluegemann, Joseph; Biswas, Rupak; Smelyanskiy, Vadim N.
2016-01-01
Calibration of quantum computers is essential to the effective utilisation of their quantum resources. Specifically, the performance of quantum annealers is likely to be significantly impaired by noise in their programmable parameters, effectively a misspecification of the computational problem to be solved, often resulting in spurious suboptimal solutions. We developed a strategy to determine and correct persistent, systematic biases between the actual values of the programmable parameters and their user-specified values. We applied the recalibration strategy to two D-Wave Two quantum annealers, one at NASA Ames Research Center in Moffett Field, California, and another at D-Wave Systems in Burnaby, Canada. We show that the recalibration procedure not only reduces the magnitudes of the biases in the programmable parameters but also enhances the performance of the device on a set of random benchmark instances. PMID:26783120
NASA Technical Reports Server (NTRS)
Bell, Michael A.
1999-01-01
Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.
Strain mapping in TEM using precession electron diffraction
Taheri, Mitra Lenore; Leff, Asher Calvin
2017-02-14
A sample material is scanned with a transmission electron microscope (TEM) over multiple steps having a predetermined size at a predetermined angle. Each scan at a predetermined step and angle is compared to a template, wherein the template is generated from parameters of the material and the scanning. The data is then analyzed using local mis-orientation mapping and/or Nye's tensor analysis to provide information about local strain states.
3D scanning electron microscopy applied to surface characterization of fluorosed dental enamel.
Limandri, Silvina; Galván Josa, Víctor; Valentinuzzi, María Cecilia; Chena, María Emilia; Castellano, Gustavo
2016-05-01
The enamel surfaces of fluorotic teeth were studied by scanning electron stereomicroscopy. Different whitening treatments were applied to 25 pieces to remove stains caused by fluorosis, and their surfaces were characterized by stereomicroscopy in order to obtain functional and amplitude parameters. The topographic features resulting from each treatment were determined through these parameters. The results obtained show that the 3D reconstruction achieved from the SEM stereo pairs is a valuable potential alternative for the surface characterization of this kind of sample. Copyright © 2016 Elsevier Ltd. All rights reserved.
The bulk composition of Titan's atmosphere.
NASA Technical Reports Server (NTRS)
Trafton, L.
1972-01-01
Consideration of the physical constraints for Titan's atmosphere leads to a model which describes the bulk composition of the atmosphere in terms of observable parameters. Intermediate-resolution photometric scans of both Saturn and Titan, including scans of the Q branch of Titan's methane band, constrain these parameters in such a way that the model indicates the presence of another important atmospheric gas, namely, another bulk constituent or a significant thermal opacity. Further progress in determining the composition and state of Titan's atmosphere requires additional observations to eliminate present ambiguities. For this purpose, particular observational targets are suggested.
Evaluation of a semiautomated lung mass calculation technique for internal dosimetry applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busse, Nathan; Erwin, William; Pan, Tinsu
2013-12-15
Purpose: The authors sought to evaluate a simple, semiautomated lung mass estimation method using computed tomography (CT) scans obtained using a variety of acquisition techniques and reconstruction parameters for mass correction of medical internal radiation dose-based internal radionuclide radiation absorbed dose estimates. Methods: CT scans of 27 patients with lung cancer undergoing stereotactic body radiation therapy treatment planning with PET/CT were analyzed retrospectively. For each patient, free-breathing (FB) and respiratory-gated 4DCT scans were acquired. The 4DCT scans were sorted into ten respiratory phases, representing one complete respiratory cycle. An average CT reconstruction was derived from the ten-phase reconstructions. Mid-expiration breath-hold CT scans were acquired in the same session for many patients. Deep inspiration breath-hold diagnostic CT scans of many of the patients were obtained from different scanning sessions at similar time points to evaluate the effect of contrast administration and maximum inspiration breath-hold. Lung mass estimates were obtained using all CT scan types, and intercomparisons made to assess lung mass variation according to scan type. Lung mass estimates using the FB CT scans from PET/CT examinations of another group of ten male and ten female patients who were 21–30 years old and did not have lung disease were calculated and compared with reference lung mass values. To evaluate the effect of varying CT acquisition and reconstruction parameters on lung mass estimation, an anthropomorphic chest phantom was scanned and reconstructed with different CT parameters. CT images of the lungs were segmented using the OsiriX MD software program with a seed point of about −850 HU and an interval of 1000. Lung volume, and mean lung, tissue, and air HUs were recorded for each scan. Lung mass was calculated by assuming each voxel was a linear combination of only air and tissue. The specific gravity of the lung volume was calculated using the formula (lung HU − air HU)/(tissue HU − air HU), and mass = specific gravity × total volume × 1.04 g/cm³. Results: The range of calculated lung masses was 0.51–1.29 kg. The average male and female lung masses during FB CT were 0.80 and 0.71 kg, respectively. The calculated lung mass varied across the respiratory cycle but changed to a lesser degree than did lung volume measurements (7.3% versus 15.4%). Lung masses calculated using deep inspiration breath-hold and average CT were significantly larger (p < 0.05) than were some masses calculated using respiratory-phase and FB CT. Increased voxel size and smooth reconstruction kernels led to high lung mass estimates owing to partial volume effects. Conclusions: Organ mass correction is an important component of patient-specific internal radionuclide dosimetry. Lung mass calculation necessitates scan-based density correction to account for volume changes owing to respiration. The range of lung masses in the authors’ patient population represents lung doses for the same absorbed energy differing from 25% below to 64% above the dose found using reference phantom organ masses. With proper management of acquisition parameters and selection of FB or mid-expiration breath-hold scans, lung mass estimates with about 10% population precision may be achieved.
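As a concrete illustration of the density-correction formula quoted above, the sketch below computes a lung mass from the mean HU values of segmented lung, soft tissue and air regions. Only the formula itself comes from the abstract; the numerical values in the example call are invented for illustration.

```python
def lung_mass_kg(lung_hu_mean, tissue_hu_mean, air_hu_mean, lung_volume_cm3,
                 tissue_density_g_per_cm3=1.04):
    """Lung mass from the air/tissue mixture model described in the abstract.

    Each voxel is assumed to be a linear combination of air and soft tissue, so the
    specific gravity of the segmented lung is (lung HU - air HU) / (tissue HU - air HU)
    and mass = specific gravity * total volume * 1.04 g/cm^3.
    """
    specific_gravity = (lung_hu_mean - air_hu_mean) / (tissue_hu_mean - air_hu_mean)
    mass_g = specific_gravity * lung_volume_cm3 * tissue_density_g_per_cm3
    return mass_g / 1000.0

# Hypothetical example values (not from the paper): mean lung -750 HU, tissue 40 HU,
# air -1000 HU, segmented lung volume 4500 cm^3
print(lung_mass_kg(-750.0, 40.0, -1000.0, 4500.0))   # ~1.1 kg
```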
Primary Multi-frequency Data Analyze in Electrical Impedance Scanning.
Liu, Ruigang; Dong, Xiuzhen; Fu, Feng; Shi, Xuetao; You, Fusheng; Ji, Zhenyu
2005-01-01
This paper derives the Cole-Cole arc equation in admittance form from the traditional Cole-Cole equation in impedance form. Compared to the latter, the former is better suited to electrical impedance scanning, which uses a lower frequency region. When using our own electrical impedance scanning device at 50-5000 Hz, the measured data are spread along the arc of the admittance form, whereas they cluster near the direct-current resistance point on the arc of the impedance form. The four parameters of the admittance form can be evaluated by the least-squares method. The frequency at which the imaginary part of the admittance reaches its maximum can be calculated from the Cole-Cole parameters. In conclusion, the Cole-Cole arc in admittance form is more effective for multi-frequency data analysis in the lower frequency region used in EIS.
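For reference, one commonly used form of the Cole-Cole impedance arc, and the admittance arc obtained by inverting it, are written below. The exponent convention (α versus 1−α) and symbol names vary between authors and are not specified in the abstract, so this is only an indicative sketch of the four-parameter admittance arc (G_0, G_∞, τ_Y, α), not necessarily the authors' exact parameterization.

```latex
% Cole-Cole arc in impedance form (one common convention):
Z(\omega) = R_\infty + \frac{R_0 - R_\infty}{1 + (j\omega\tau_Z)^{\alpha}}
% Inverting Z gives an arc of the same type in the admittance plane, with
% G_0 = 1/R_0, \quad G_\infty = 1/R_\infty, \quad \tau_Y = \tau_Z\,(R_\infty/R_0)^{1/\alpha}:
Y(\omega) = G_\infty + \frac{G_0 - G_\infty}{1 + (j\omega\tau_Y)^{\alpha}}
```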
Accurate Nanoscale Crystallography in Real-Space Using Scanning Transmission Electron Microscopy.
Dycus, J Houston; Harris, Joshua S; Sang, Xiahan; Fancher, Chris M; Findlay, Scott D; Oni, Adedapo A; Chan, Tsung-Ta E; Koch, Carl C; Jones, Jacob L; Allen, Leslie J; Irving, Douglas L; LeBeau, James M
2015-08-01
Here, we report reproducible and accurate measurement of crystallographic parameters using scanning transmission electron microscopy. This is made possible by removing drift and residual scan distortion. We demonstrate real-space lattice parameter measurements with <0.1% error for complex-layered chalcogenides Bi2Te3, Bi2Se3, and a Bi2Te2.7Se0.3 nanostructured alloy. Pairing the technique with atomic resolution spectroscopy, we connect local structure with chemistry and bonding. Combining these results with density functional theory, we show that the incorporation of Se into Bi2Te3 causes charge redistribution that anomalously increases the van der Waals gap between building blocks of the layered structure. The results show that atomic resolution imaging with electrons can accurately and robustly quantify crystallography at the nanoscale.
Automatic tool alignment in a backscatter X-ray scanning system
Garretson, Justin; Hobart, Clinton G.; Gladwell, Thomas S.; Monda, Mark J.
2015-11-17
Technologies pertaining to backscatter x-ray scanning systems are described herein. The backscatter x-ray scanning system includes an x-ray source, which directs collimated x-rays along a plurality of output vectors towards a target. A detector detects diffusely reflected x-rays subsequent to respective collimated x-rays impacting the target, and outputs signals indicative of parameters of the detected x-rays. An image processing system generates an x-ray image based upon parameters of the detected x-rays, wherein each pixel in the image corresponds to a respective output vector. A user selects a particular portion of the image, and a medical device is positioned such that its directional axis is coincident with the output vector corresponding to at least one pixel in the portion of the image.
Automatic tool alignment in a backscatter x-ray scanning system
Garretson, Justin; Hobart, Clinton G.; Gladwell, Thomas S.; Monda, Mark J.
2015-06-16
Technologies pertaining to backscatter x-ray scanning systems are described herein. The backscatter x-ray scanning system includes an x-ray source, which directs collimated x-rays along a plurality of output vectors towards a target. A detector detects diffusely reflected x-rays subsequent to respective collimated x-rays impacting the target, and outputs signals indicative of parameters of the detected x-rays. An image processing system generates an x-ray image based upon parameters of the detected x-rays, wherein each pixel in the image corresponds to a respective output vector. A user selects a particular portion of the image, and a tool is positioned such that its directional axis is coincident with the output vector corresponding to at least one pixel in the portion of the image.
Robust automatic measurement of 3D scanned models for the human body fat estimation.
Giachetti, Andrea; Lovato, Christian; Piscitelli, Francesco; Milanese, Chiara; Zancanaro, Carlo
2015-03-01
In this paper, we present an automatic tool for estimating geometrical parameters from 3-D human scans, independently of pose and robustly against topological noise. It is based on an automatic segmentation of body parts exploiting curve-skeleton processing and ad hoc heuristics able to remove problems due to different acquisition poses and body types. The software is able to locate the body trunk and limbs, detect their directions, and compute parameters such as volumes, areas, girths, and lengths. Experimental results demonstrate that measurements provided by our system on 3-D body scans of normal and overweight subjects acquired in different poses are highly correlated with the body fat estimates obtained on the same subjects with dual-energy X-ray absorptiometry (DXA) scanning. In particular, maximal lengths and girths, which do not require precise localization of anatomical landmarks, show a good correlation (up to 96%) with body fat and trunk fat. Regression models based on our automatic measurements can be used to predict body fat values reasonably well.
NASA Astrophysics Data System (ADS)
Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu
2018-06-01
A finite element model of the powder layer in selective laser melting (SLM) that accounts for volume shrinkage during the powder-to-dense transition is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted, and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is effective and offers better accuracy for the temperature distribution and for the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating and cooling rates increase with increasing scan speed at constant laser power, and likewise increase with increasing laser power at constant scan speed. The simulation and experimental results reveal that linear energy density is not always reliable as a design parameter in SLM.
de Oliveira, Marcus Vinicius Linhares; Santos, António Carvalho; Paulo, Graciano; Campos, Paulo Sergio Flores; Santos, Joana
2017-06-01
The purpose of this study was to apply a newly developed free software program, at low cost and with minimal time, to evaluate the quality of dental and maxillofacial cone-beam computed tomography (CBCT) images. A polymethyl methacrylate (PMMA) phantom, CQP-IFBA, was scanned in 3 CBCT units with 7 protocols. A macro program was developed, using the free software ImageJ, to automatically evaluate the image quality parameters. The image quality evaluation was based on 8 parameters: uniformity, the signal-to-noise ratio (SNR), noise, the contrast-to-noise ratio (CNR), spatial resolution, the artifact index, geometric accuracy, and low-contrast resolution. The image uniformity and noise depended on the protocol that was applied. Regarding the CNR, high-density structures were more sensitive to the effect of scanning parameters. There were no significant differences between SNR and CNR in centered and peripheral objects. The geometric accuracy assessment showed that all the distance measurements were lower than the real values. Low-contrast resolution was influenced by the scanning parameters, and the 1-mm rod present in the phantom was not depicted in any of the 3 CBCT units. Smaller voxel sizes presented higher spatial resolution. There were no significant differences among the protocols regarding artifact presence. This software package provided a fast, low-cost, and feasible method for the evaluation of image quality parameters in CBCT.
Scan path entropy and arrow plots: capturing scanning behavior of multiple observers
Hooge, Ignace; Camps, Guido
2013-01-01
Designers of visual communication material want their material to attract and retain attention. In marketing research, heat maps, dwell time, and time to first AOI hit are often used as evaluation parameters. Here we present two additional measures: (1) “scan path entropy” to quantify gaze guidance and (2) the “arrow plot” to visualize the average scan path. Both are based on string representations of scan paths. The latter also incorporates transition matrices and the time required for 50% of the observers to first hit AOIs (T50). The new measures were tested in an eye tracking study (48 observers, 39 advertisements). Scan path entropy is a sensible measure for gaze guidance, and the new visualization method reveals aspects of the average scan path and gives a better indication of the order in which global scanning takes place. PMID:24399993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Tsao, C.L.
1996-06-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. The report also updates benchmark values where appropriate, adds new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
Scan blindness in infinite phased arrays of printed dipoles
NASA Technical Reports Server (NTRS)
Pozar, D. M.; Schaubert, D. H.
1984-01-01
A comprehensive study of infinite phased arrays of printed dipole antennas is presented, with emphasis on the scan blindness phenomenon. A rigorous and efficient moment method procedure is used to calculate the array impedance versus scan angle. Data are presented for the input reflection coefficient for various element spacings and substrate parameters. A simple theory, based on coupling from Floquet modes to surface wave modes on the substrate, is shown to predict the occurrence of scan blindness. Measurements from a waveguide simulator of a blindness condition confirm the theory.
Shen, Jiajian; Tryggestad, Erik; Younkin, James E; Keole, Sameer R; Furutani, Keith M; Kang, Yixiu; Herman, Michael G; Bues, Martin
2017-10-01
To accurately model the beam delivery time (BDT) for a synchrotron-based proton spot scanning system using experimentally determined beam parameters. A model to simulate the proton spot delivery sequences was constructed, and BDT was calculated by summing the times for layer switches, spot switches, and spot delivery. Test plans were designed to isolate and quantify the relevant beam parameters in the operation cycle of the proton beam therapy delivery system. These parameters included the layer switch time, magnet preparation and verification time, average beam scanning speeds in the x- and y-directions, proton spill rate, and maximum charge and maximum extraction time for each spill. The experimentally determined parameters, as well as the nominal values initially provided by the vendor, served as inputs to the model to predict BDTs for 602 clinical proton beam deliveries. The calculated BDTs (T_BDT) were compared with the BDTs recorded in the treatment delivery log files (T_Log): ∆t = T_Log - T_BDT. The experimentally determined average layer switch time for all 97 energies was 1.91 s (ranging from 1.9 to 2.0 s for beam energies from 71.3 to 228.8 MeV), the average magnet preparation and verification time was 1.93 ms, the average scanning speeds were 5.9 m/s in the x-direction and 19.3 m/s in the y-direction, the proton spill rate was 8.7 MU/s, and the maximum proton charge available for one acceleration was 2.0 ± 0.4 nC. Some of the measured parameters differed from the nominal values provided by the vendor. The calculated BDTs using experimentally determined parameters matched the recorded BDTs of the 602 beam deliveries (∆t = -0.49 ± 1.44 s), which were significantly more accurate than BDTs calculated using nominal timing parameters (∆t = -7.48 ± 6.97 s). An accurate model for BDT prediction was achieved by using the experimentally determined proton beam therapy delivery parameters, which may be useful in modeling the interplay effect and patient throughput. The model may provide guidance on how to effectively reduce BDT and may be used to identify deteriorating machine performance. © 2017 American Association of Physicists in Medicine.
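The abstract describes BDT as the sum of layer-switch, spot-switch and spot-delivery times; the sketch below shows a deliberately simplified version of such a bookkeeping model using the average timing values quoted above as defaults. It ignores the per-spill maximum charge and extraction-time limits mentioned in the abstract, and the example field (spot positions and MU values) is invented, so this is only an illustration of the accounting, not the authors' validated model.

```python
def beam_delivery_time(layers,
                       layer_switch_s=1.91,    # average layer switch time from the abstract
                       spot_switch_s=1.93e-3,  # magnet preparation/verification time
                       speed_x_m_s=5.9,        # scanning speed in x
                       speed_y_m_s=19.3,       # scanning speed in y
                       spill_rate_mu_s=8.7):   # proton spill rate in MU/s
    """Simplified BDT = layer switches + spot switches + scan travel + spot delivery.

    `layers` is a list of layers; each layer is a list of (x_m, y_m, mu) spots.
    Spill recharging and per-spill charge limits are ignored for simplicity.
    """
    total = 0.0
    for layer in layers:
        total += layer_switch_s
        prev = None
        for x, y, mu in layer:
            travel = 0.0
            if prev is not None:
                total += spot_switch_s
                travel = max(abs(x - prev[0]) / speed_x_m_s,
                             abs(y - prev[1]) / speed_y_m_s)
            total += travel + mu / spill_rate_mu_s
            prev = (x, y)
    return total

# Invented two-layer field: a 10 x 10 grid of spots on 5 mm spacing, 0.02 MU each
layer = [(0.005 * i, 0.005 * j, 0.02) for i in range(10) for j in range(10)]
print(f"estimated BDT: {beam_delivery_time([layer, layer]):.1f} s")
```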
Real-case benchmark for flow and tracer transport in the fractured rock
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hokr, M.; Shao, H.; Gardner, W. P.
The paper is intended to define a benchmark problem related to groundwater flow and natural tracer transport, using observations of discharge and isotopic tracers in fractured, crystalline rock. Three numerical simulators, Flow123d, OpenGeoSys, and PFLOTRAN, are compared. The data utilized in the project were collected in a water-supply tunnel in granite of the Jizera Mountains, Bedrichov, Czech Republic. The problem configuration combines subdomains of different dimensions, a 3D continuum for hard-rock blocks or matrix and 2D features for fractures or fault zones, together with realistic boundary conditions for tunnel-controlled drainage. Steady-state and transient flow and a pulse-injection tracer transport problem are solved. The results confirm mostly consistent behavior of the codes. Flow123d and OpenGeoSys, both of which implement 3D–2D coupling, differ by several percent in most cases, which is attributable to, e.g., effects of the placement of discrete unknowns in the mesh. Some of the PFLOTRAN results differ more, which can be explained by effects of the dispersion tensor evaluation scheme and of numerical diffusion. This phenomenon can become stronger with fracture/matrix coupling and with large contrasts in parameter magnitudes. Although the study was not aimed at inverse solution, the models were fit to the measured data approximately, demonstrating the intended real-case relevance of the benchmark.
Real-case benchmark for flow and tracer transport in the fractured rock
Hokr, M.; Shao, H.; Gardner, W. P.; ...
2016-09-19
The paper is intended to define a benchmark problem related to groundwater flow and natural tracer transport, using observations of discharge and isotopic tracers in fractured, crystalline rock. Three numerical simulators, Flow123d, OpenGeoSys, and PFLOTRAN, are compared. The data utilized in the project were collected in a water-supply tunnel in granite of the Jizera Mountains, Bedrichov, Czech Republic. The problem configuration combines subdomains of different dimensions, a 3D continuum for hard-rock blocks or matrix and 2D features for fractures or fault zones, together with realistic boundary conditions for tunnel-controlled drainage. Steady-state and transient flow and a pulse-injection tracer transport problem are solved. The results confirm mostly consistent behavior of the codes. Flow123d and OpenGeoSys, both of which implement 3D–2D coupling, differ by several percent in most cases, which is attributable to, e.g., effects of the placement of discrete unknowns in the mesh. Some of the PFLOTRAN results differ more, which can be explained by effects of the dispersion tensor evaluation scheme and of numerical diffusion. This phenomenon can become stronger with fracture/matrix coupling and with large contrasts in parameter magnitudes. Although the study was not aimed at inverse solution, the models were fit to the measured data approximately, demonstrating the intended real-case relevance of the benchmark.
Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction
Laehnemann, David; Borkhardt, Arndt
2016-01-01
Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and the data types for which these hold, providing guidance on which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159
Adsorption structures and energetics of molecules on metal surfaces: Bridging experiment and theory
NASA Astrophysics Data System (ADS)
Maurer, Reinhard J.; Ruiz, Victor G.; Camarillo-Cisneros, Javier; Liu, Wei; Ferri, Nicola; Reuter, Karsten; Tkatchenko, Alexandre
2016-05-01
Adsorption geometry and stability of organic molecules on surfaces are key parameters that determine the observable properties and functions of hybrid inorganic/organic systems (HIOSs). Despite many recent advances in precise experimental characterization and improvements in first-principles electronic structure methods, reliable databases of structures and energetics for large adsorbed molecules are largely lacking. In this review, we present such a database for a range of molecules adsorbed on metal single-crystal surfaces. The systems we analyze include noble-gas atoms, conjugated aromatic molecules, carbon nanostructures, and heteroaromatic compounds adsorbed on five different metal surfaces. The overall objective is to establish a diverse benchmark dataset that enables an assessment of current and future electronic structure methods, and motivates further experimental studies that provide ever more reliable data. Specifically, the benchmark structures and energetics from experiment are here compared with the recently developed van der Waals (vdW) inclusive density-functional theory (DFT) method, DFT + vdWsurf. In comparison to 23 adsorption heights and 17 adsorption energies from experiment, we find a mean average deviation of 0.06 Å and 0.16 eV, respectively. This confirms the DFT + vdWsurf method as an accurate and efficient approach to treat HIOSs. A detailed discussion identifies remaining challenges to be addressed in future development of electronic structure methods, for which the benchmark database presented here may serve as an important reference.
Benchmarking in emergency health systems.
Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg
2002-12-01
This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.
Scanning ion-conductance and atomic force microscope with specialized sphere-shaped nanopipettes
NASA Astrophysics Data System (ADS)
Zhukov, M. V.; Sapozhnikov, I. D.; Golubok, A. O.; Chubinskiy-Nadezhdin, V. I.; Komissarenko, F. E.; Lukashenko, S. Y.
2017-11-01
A scanning ion-conductance microscope was designed on the basis of the scanning probe microscope NanoTutor. The optimal parameters for nanopipette fabrication were determined from scanning electron microscopy diagnostics and from current-distance I(Z) and current-voltage characteristics. A comparison of images of test objects, including biological samples, was carried out in the optical microscopy, atomic force microscopy and scanning ion-conductance microscopy modes. Sphere-shaped nanopipette probes were developed and tested to increase pipette stability, reduce invasiveness and improve the image quality of atomic force microscopy in tapping mode. The efficiency of the sphere-shaped nanopipettes is shown.
Finite element analysis of flexible, rotating blades
NASA Technical Reports Server (NTRS)
Mcgee, Oliver G.
1987-01-01
A reference guide for applying the finite element method to approximate the static and dynamic behavior of flexible, rotating blades is given. Important parameters such as twist, sweep, camber, co-planar shell elements, centrifugal loads, and inertia properties are studied. Comparisons are made between NASTRAN elements through published benchmark tests. The main purpose is to summarize blade modeling strategies and to document the capabilities and limitations (for flexible, rotating blades) of various NASTRAN elements.
Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection (PREPRINT)
2011-01-01
Minimum Error Bounded Efficient ℓ1 Tracker with Occlusion Detection. Xue Mei, Haibin Ling, Yi Wu, Erik Blasch, Li Bai. ... The proposed BPR-L1 tracker is tested on several challenging benchmark sequences involving challenges such as occlusion and illumination changes. In all ... point method depends on the value of the regularization parameter λ. In the experiments, we found that the total number of PCG iterations is a few hundred. ...
NASA Technical Reports Server (NTRS)
Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)
1993-01-01
A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
Bayram, Jamil D; Zuabi, Shawki; Subbarao, Italo
2011-06-01
Hospital surge capacity in multiple casualty events (MCE) is the core of hospital medical response, and an integral part of the total medical capacity of the community affected. To date, however, there has been no consensus regarding the definition or quantification of hospital surge capacity. The first objective of this study was to quantitatively benchmark the various components of hospital surge capacity pertaining to the care of critically and moderately injured patients in trauma-related MCE. The second objective was to illustrate the applications of those quantitative parameters in local, regional, national, and international disaster planning; in the distribution of patients to various hospitals by prehospital medical services; and in the decision-making process for ambulance diversion. A 2-step approach was adopted in the methodology of this study. First, an extensive literature search was performed, followed by mathematical modeling. Quantitative studies on hospital surge capacity for trauma injuries were used as the framework for our model. The North Atlantic Treaty Organization triage categories (T1-T4) were used in the modeling process for simplicity purposes. Hospital Acute Care Surge Capacity (HACSC) was defined as the maximum number of critical (T1) and moderate (T2) casualties a hospital can adequately care for per hour, after recruiting all possible additional medical assets. HACSC was modeled to be equal to the number of emergency department beds (#EDB), divided by the emergency department time (EDT); HACSC = #EDB/EDT. In trauma-related MCE, the EDT was quantitatively benchmarked to be 2.5 (hours). Because most of the critical and moderate casualties arrive at hospitals within a 6-hour period requiring admission (by definition), the hospital bed surge capacity must match the HACSC at 6 hours to ensure coordinated care, and it was mathematically benchmarked to be 18% of the staffed hospital bed capacity. Defining and quantitatively benchmarking the different components of hospital surge capacity is vital to hospital preparedness in MCE. Prospective studies of our mathematical model are needed to verify its applicability, generalizability, and validity.
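To illustrate how the benchmarked quantities combine, the snippet below works through the HACSC formula for a hypothetical hospital; the bed counts are invented for illustration, while the 2.5-hour emergency department time and the 18% bed surge figure are the benchmarks quoted in the abstract.

```python
def hospital_surge_capacity(ed_beds, staffed_beds, ed_time_h=2.5, bed_surge_fraction=0.18):
    """Hospital Acute Care Surge Capacity (HACSC) and bed surge capacity.

    HACSC = #EDB / EDT (T1+T2 casualties per hour), with EDT benchmarked at 2.5 h;
    bed surge capacity is benchmarked at 18% of staffed hospital beds.
    """
    hacsc_per_hour = ed_beds / ed_time_h
    casualties_over_6_h = hacsc_per_hour * 6.0
    bed_surge_capacity = bed_surge_fraction * staffed_beds
    return hacsc_per_hour, casualties_over_6_h, bed_surge_capacity

# Hypothetical hospital: 30 ED beds, 400 staffed beds
per_hour, over_6h, surge_beds = hospital_surge_capacity(30, 400)
# With these numbers the 6-hour HACSC (72 casualties) matches the 18% bed surge
# capacity (72 beds), which is exactly the matching condition the model requires.
print(per_hour, over_6h, surge_beds)   # 12.0, 72.0, 72.0
```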
Kuperman, Roman G; Checkai, Ronald T; Simini, Michael; Phillips, Carlton T; Kolakowski, Jan E; Lanno, Roman
2013-11-01
The authors investigated individual toxicities of 2,4,6-trinitrotoluene (TNT) and hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) to the potworm Enchytraeus crypticus using the enchytraeid reproduction test. Studies were designed to generate ecotoxicological benchmarks that can be used for developing ecological soil-screening levels for ecological risk assessments of contaminated soils and to identify and characterize the predominant soil physicochemical parameters that can affect the toxicities of TNT and RDX to E. crypticus. Soils, which had a wide range of physicochemical parameters, included Teller sandy loam, Sassafras sandy loam, Richfield clay loam, Kirkland clay loam, and Webster clay loam. Analyses of quantitative relationships between the toxicological benchmarks for TNT and soil property measurements identified soil organic matter content as the dominant property mitigating TNT toxicity for juvenile production by E. crypticus in freshly amended soil. Both the clay and organic matter contents of the soil modulated reproduction toxicity of TNT that was weathered and aged in soil for 3 mo. Toxicity of RDX for E. crypticus was greater in the coarse-textured sandy loam soils compared with the fine-textured clay loam soils. The present studies revealed alterations in toxicity to E. crypticus after weathering and aging TNT in soil, and these alterations were soil- and endpoint-specific. © 2013 SETAC.
An Improved Evolutionary Programming with Voting and Elitist Dispersal Scheme
NASA Astrophysics Data System (ADS)
Maity, Sayan; Gunjan, Kumar; Das, Swagatam
Although initially conceived for evolving finite state machines, Evolutionary Programming (EP), in its present form, is largely used as a powerful real-parameter optimizer. For function optimization, EP mainly relies on its mutation operators. Over the past few years several mutation operators have been proposed to improve the performance of EP on a wide variety of numerical benchmarks. However, unlike real-coded GAs, there has been no fitness-induced bias in parent selection for mutation in EP; that is, the i-th population member is selected deterministically for mutation to create the i-th offspring in each generation. In this article we present an improved EP variant called Evolutionary Programming with Voting and Elitist Dispersal (EPVE). The scheme encompasses a voting process which not only gives importance to the best solutions but also considers solutions that are converging fast. The Elitist Dispersal Scheme maintains elitism by keeping promising solutions intact while perturbing the remaining solutions so that they escape local minima. Applying these two techniques allows the algorithm to explore regions that have not been explored so far and may contain optima. Comparison with recent and best-known versions of EP over 25 benchmark functions from the CEC (Congress on Evolutionary Computation) 2005 test suite for real-parameter optimization reflects the superiority of the new scheme in terms of final accuracy, speed, and robustness.
Ellis, D W; Srigley, J
2016-01-01
Key quality parameters in diagnostic pathology include timeliness, accuracy, completeness, conformance with current agreed standards, consistency and clarity in communication. In this review, we argue that with worldwide developments in eHealth and big data, generally, there are two further, often overlooked, parameters if our reports are to be fit for purpose. Firstly, population-level studies have clearly demonstrated the value of providing timely structured reporting data in standardised electronic format as part of system-wide quality improvement programmes. Moreover, when combined with multiple health data sources through eHealth and data linkage, structured pathology reports become central to population-level quality monitoring, benchmarking, interventions and benefit analyses in public health management. Secondly, population-level studies, particularly for benchmarking, require a single agreed international and evidence-based standard to ensure interoperability and comparability. This has been taken for granted in tumour classification and staging for many years, yet international standardisation of cancer datasets is only now underway through the International Collaboration on Cancer Reporting (ICCR). In this review, we present evidence supporting the role of structured pathology reporting in quality improvement for both clinical care and population-level health management. Although this review of available evidence largely relates to structured reporting of cancer, it is clear that the same principles can be applied throughout anatomical pathology generally, as they are elsewhere in the health system.
Text, photo, and line extraction in scanned documents
NASA Astrophysics Data System (ADS)
Erkilinc, M. Sezer; Jaber, Mustafa; Saber, Eli; Bauer, Peter; Depalov, Dejan
2012-07-01
We propose a page layout analysis algorithm to classify a scanned document into different regions such as text, photo, or strong lines. The proposed scheme consists of five modules. The first module performs several image preprocessing techniques such as image scaling, filtering, color space conversion, and gamma correction to enhance the scanned image quality and reduce the computation time in later stages. Text detection is applied in the second module, wherein wavelet transform and run-length encoding are employed to generate and validate text regions, respectively. The third module uses a Markov random field based block-wise segmentation that employs a basis vector projection technique with maximum a posteriori probability optimization to detect photo regions. In the fourth module, methods for edge detection, edge linking, line-segment fitting, and Hough transform are utilized to detect strong edges and lines. In the last module, the resultant text, photo, and edge maps are combined to generate a page layout map using K-Means clustering. The proposed algorithm has been tested on several hundred documents that contain simple and complex page layout structures and contents such as articles, magazines, business cards, dictionaries, and newsletters, and compared against state-of-the-art page-segmentation techniques with benchmark performance. The results indicate that our methodology achieves an average of ~89% classification accuracy in text, photo, and background regions.
Benchmarking the Performance of Mobile Laser Scanning Systems Using a Permanent Test Field
Kaartinen, Harri; Hyyppä, Juha; Kukko, Antero; Jaakkola, Anttoni; Hyyppä, Hannu
2012-01-01
The performance of various mobile laser scanning systems was tested on an established urban test field. The test was connected to the European Spatial Data Research (EuroSDR) project “Mobile Mapping—Road Environment Mapping Using Mobile Laser Scanning”. Several commercial and research systems collected laser point cloud data on the same test field. The system comparisons focused on planimetric and elevation errors using a filtered digital elevation model, poles, and building corners as the reference objects. The results revealed the high quality of the point clouds generated by all of the tested systems under good GNSS conditions. With all professional systems properly calibrated, the elevation accuracy was better than 3.5 cm up to a range of 35 m. The best system achieved a planimetric accuracy of 2.5 cm over a range of 45 m. The planimetric errors increased as a function of range, but moderately so if the system was properly calibrated. The main focus on mobile laser scanning development in the near future should be on the improvement of the trajectory solution, especially under non-ideal conditions, using both improvements in hardware and software. Test fields are relatively easy to implement in built environments and they are feasible for verifying and comparing the performance of different systems and also for improving system calibration to achieve optimum quality.
Dowdell, S; Grassberger, C; Sharp, G C; Paganetti, H
2013-06-21
Relative motion between a tumor and a scanning proton beam results in a degradation of the dose distribution (interplay effect). This study investigates the relationship between beam scanning parameters and the interplay effect, with the goal of finding parameters that minimize interplay. 4D Monte Carlo simulations of pencil beam scanning proton therapy treatments were performed using the 4DCT geometry of five lung cancer patients of varying tumor size (50.4-167.1 cc) and motion amplitude (2.9-30.1 mm). Treatments were planned assuming delivery in 35 × 2.5 Gy(RBE) fractions. The spot size, time to change the beam energy (τes), time required for magnet settling (τss), initial breathing phase, spot spacing, scanning direction, scanning speed, beam current and patient breathing period were varied for each of the five patients. Simulations were performed for a single fraction and an approximation of conventional fractionation. For the patients considered, the interplay effect could not be predicted using the superior-inferior motion amplitude alone. Larger spot sizes (σ ~ 9-16 mm) were less susceptible to interplay, giving an equivalent uniform dose (EUD) of 99.0 ± 4.4% (1 standard deviation) in a single fraction compared to 86.1 ± 13.1% for smaller spots (σ ~ 2-4 mm). The smaller spot sizes gave EUD values as low as 65.3% of the prescription dose in a single fraction. Reducing the spot spacing improved the target dose homogeneity. The initial breathing phase can have a significant effect on the interplay, particularly for shorter delivery times. No clear benefit was evident when scanning either parallel or perpendicular to the predominant axis of motion. Longer breathing periods decreased the EUD. In general, longer delivery times led to lower interplay effects. Conventional fractionation showed significant improvement in terms of interplay, giving an EUD of at least 84.7% and 100.0% of the prescription dose for the smaller and larger spot sizes, respectively. The interplay effect is highly patient specific, depending on the motion amplitude, tumor location and the delivery parameters. Large degradations of the dose distribution in a single fraction were observed, but improved significantly using conventional fractionation.
Dowdell, S; Grassberger, C; Sharp, G C; Paganetti, H
2013-01-01
Relative motion between a tumor and a scanning proton beam results in a degradation of the dose distribution (interplay effect). This study investigates the relationship between beam scanning parameters and the interplay effect, with the goal of finding parameters that minimize interplay. 4D Monte Carlo simulations of pencil beam scanning proton therapy treatments were performed using the 4DCT geometry of 5 lung cancer patients of varying tumor size (50.4–167.1cc) and motion amplitude (2.9–30.1mm). Treatments were planned assuming delivery in 35×2.5Gy(RBE) fractions. The spot size, time to change the beam energy (τes), time required for magnet settling (τss), initial breathing phase, spot spacing, scanning direction, scanning speed, beam current and patient breathing period were varied for each of the 5 patients. Simulations were performed for a single fraction and an approximation of conventional fractionation. For the patients considered, the interplay effect could not be predicted using the superior-inferior (SI) motion amplitude alone. Larger spot sizes (σ ~9–16mm) were less susceptible to interplay, giving an equivalent uniform dose (EUD) of 99.0±4.4% (1 standard deviation) in a single fraction compared to 86.1±13.1% for smaller spots (σ ~2–4mm). The smaller spot sizes gave EUD values as low as 65.3% of the prescription dose in a single fraction. Reducing the spot spacing improved the target dose homogeneity. The initial breathing phase can have a significant effect on the interplay, particularly for shorter delivery times. No clear benefit was evident when scanning either parallel or perpendicular to the predominant axis of motion. Longer breathing periods decreased the EUD. In general, longer delivery times led to lower interplay effects. Conventional fractionation showed significant improvement in terms of interplay, giving an EUD of at least 84.7% and 100.0% of the prescription dose for the smaller and larger spot sizes, respectively. The interplay effect is highly patient specific, depending on the motion amplitude, tumor location and the delivery parameters. Large degradations of the dose distribution in a single fraction were observed, but improved significantly using conventional fractionation. PMID:23689035
Benchmarking and Performance Measurement.
ERIC Educational Resources Information Center
Town, J. Stephen
This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.
2015-05-01
This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce data sets for uncertainty quantification benchmarks.
Peters, Marloes J M; Wierts, Roel; Jutten, Elisabeth M C; Halders, Servé G E A; Willems, Paul C P H; Brans, Boudewijn
2015-11-01
A complication after spinal fusion surgery is pseudarthrosis, but its radiological diagnosis is of limited value. (18)F-fluoride PET with its ability to assess bone metabolism activity could be of value. The goal of this study was to assess the clinical feasibility of calculating the static standardized uptake value (SUV) from a short dynamic scan without the use of blood sampling, thereby obtaining all dynamic and static parameters in a scan of only 30 min. This approach was tested on a retrospective patient population with persisting pain after spinal fusion surgery. In 16 patients, SUVs (SUV max, SUV mean) and kinetic parameters (K 1, k 2, k 3, v b, K i,NLR, K 1/k 2, k 3/(k 2 + k 3), K i,patlak) were derived from static and dynamic PET/CT scans of operated and control regions of the spine, after intravenous administration of 156-214 MBq (18)F-fluoride. Parameter differences between control and operated regions, as well as between pseudarthrosis and fused segments were evaluated. SUVmean at 30 and 60 min was calculated from kinetic parameters obtained from the dynamic data set (SUV mean,2TCM). Agreement between measured and calculated SUVs was evaluated through Bland-Altman plots. Overall, statistically significant differences between control and operated regions were observed for SUV max, SUV mean, K i,NLR, K i,patlak, K 1/k 2 and k 3/(k 2 + k 3). Diagnostic CT showed pseudarthrosis in 6/16 patients, while in 10/16 patients, segments were fused. Of all parameters, only those regarding the incorporation of bone [K i,NLR, K i,patlak, k 3/(k 2 + k 3)] differed statistically significant in the intervertebral disc space between the pseudarthrosis and fused patients group. The mean values of the patient-specific blood clearance rate [Formula: see text] differed statistically significant between the pseudarthrosis and the fusion group, with a p value of 0.011. This may correspond with the lack of statistical significance of the SUV values between pseudarthrosis and fused patients. Bland-Altman plots show that calculated SUV mean,2TCM values corresponded well with the measured SUV mean values. This study shows the feasibility of a 30-min dynamic (18)F-fluoride PET/CT scanning and this may provide dynamic parameters clinically relevant to the diagnosis of pseudarthrosis.
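For context on the kinetic parameters listed above: in the standard two-tissue compartment model used for (18)F-fluoride, the net influx (incorporation) rate combines the individual rate constants as shown below, and Patlak analysis estimates the same quantity graphically. This is textbook background rather than a formula quoted from the paper.

```latex
% Net influx rate in the two-tissue compartment model:
K_i = \frac{K_1\, k_3}{k_2 + k_3}
% where K_1/k_2 is related to the distribution volume of the free (unbound) compartment
% and k_3/(k_2 + k_3) is the fraction of delivered tracer that becomes incorporated.
```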
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W., II
1993-01-01
One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
NASA Technical Reports Server (NTRS)
Waszak, Martin R.
1998-01-01
This report describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind tunnel model for active control design and analysis applications. The model is formed by combining the equations of motion for the BACT wind tunnel model with actuator models and a model of wind tunnel turbulence. The primary focus of this report is the development of the equations of motion from first principles by using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated by making use of parameters obtained from both experiment and analysis. Comparisons between experimental and analytical data obtained from the numerical model show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind tunnel model. The equations of motion developed herein have been used to aid in the design and analysis of a number of flutter suppression controllers that have been successfully implemented.
[QUIPS: quality improvement in postoperative pain management].
Meissner, Winfried
2011-01-01
Despite the availability of high-quality guidelines and advanced pain management techniques acute postoperative pain management is still far from being satisfactory. The QUIPS (Quality Improvement in Postoperative Pain Management) project aims to improve treatment quality by means of standardised data acquisition, analysis of quality and process indicators, and feedback and benchmarking. During a pilot phase funded by the German Ministry of Health (BMG), a total of 12,389 data sets were collected from six participating hospitals. Outcome improved in four of the six hospitals. Process indicators, such as routine pain documentation, were only poorly correlated with outcomes. To date, more than 130 German hospitals use QUIPS as a routine quality management tool. An EC-funded parallel project disseminates the concept internationally. QUIPS demonstrates that patient-reported outcomes in postoperative pain management can be benchmarked in routine clinical practice. Quality improvement initiatives should use outcome instead of structural and process parameters. The concept is transferable to other fields of medicine. Copyright © 2011. Published by Elsevier GmbH.
NASA Astrophysics Data System (ADS)
Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen
2011-01-01
We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
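For orientation, the greedy seed-expansion step the abstract describes can be sketched with an LFM-style local fitness k_in/(k_in + k_out)^alpha. The analytic computation of resolution levels, which is the paper's actual contribution, is not reproduced here, and the fitness form, the alpha default and the function names are assumptions.

```python
import networkx as nx

def lfm_fitness(G, community, alpha=1.0):
    """LFM-style fitness of a node set: k_in / (k_in + k_out)^alpha."""
    k_in = 2 * G.subgraph(community).number_of_edges()
    k_out = sum(1 for u in community for v in G.neighbors(u) if v not in community)
    return k_in / (k_in + k_out) ** alpha if (k_in + k_out) > 0 else 0.0

def grow_natural_community(G, seed, alpha=1.0):
    """Greedily expand the natural community of a seed node until no
    neighbouring node increases the local fitness."""
    community = {seed}
    improved = True
    while improved:
        improved = False
        frontier = {v for u in community for v in G.neighbors(u)} - community
        base = lfm_fitness(G, community, alpha)
        best, best_gain = None, 0.0
        for v in frontier:
            gain = lfm_fitness(G, community | {v}, alpha) - base
            if gain > best_gain:
                best, best_gain = v, gain
        if best is not None:
            community.add(best)
            improved = True
    return community
```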
Validation of tungsten cross sections in the neutron energy region up to 100 keV
NASA Astrophysics Data System (ADS)
Pigni, Marco T.; Žerovnik, Gašper; Leal, Luiz C.; Trkov, Andrej
2017-09-01
Following a series of recent cross section evaluations on tungsten isotopes performed at Oak Ridge National Laboratory (ORNL), this paper presents the validation work carried out to test the performance of the evaluated cross sections based on lead-slowing-down (LSD) benchmarks conducted in Grenoble. ORNL completed the resonance parameter evaluation of four tungsten isotopes - 182,183,184,186W - in August 2014 and submitted it as an ENDF-compatible file to be part of the next release of the ENDF/B-VIII.0 nuclear data library. The evaluations were performed with support from the US Nuclear Criticality Safety Program in an effort to provide improved tungsten cross section and covariance data for criticality safety sensitivity analyses. The validation analysis based on the LSD benchmarks showed an improved agreement with the experimental response when the ORNL tungsten evaluations were included in the ENDF/B-VII.1 library. Comparisons with the results obtained with the JEFF-3.2 nuclear data library are also discussed.
Numerical Investigations of the Benchmark Supercritical Wing in Transonic Flow
NASA Technical Reports Server (NTRS)
Chwalowski, Pawel; Heeg, Jennifer; Biedron, Robert T.
2017-01-01
This paper builds on the computational aeroelastic results published previously and generated in support of the second Aeroelastic Prediction Workshop for the NASA Benchmark Supercritical Wing (BSCW) configuration. The computational results are obtained using FUN3D, an unstructured grid Reynolds-Averaged Navier-Stokes solver developed at the NASA Langley Research Center. The analysis results show the effects of the temporal and spatial resolution, the coupling scheme between the flow and the structural solvers, and the initial excitation conditions on the numerical flutter onset. Depending on the free stream condition and the angle of attack, the above parameters do affect the flutter onset. Two conditions are analyzed: Mach 0.74 with an angle of attack of 0 degrees and Mach 0.85 with an angle of attack of 5 degrees. The results are presented in the form of the damping values computed from the wing pitch angle response as a function of the dynamic pressure or in the form of dynamic pressure as a function of the Mach number.
NASA Astrophysics Data System (ADS)
Nagy, Julia; Eilert, Tobias; Michaelis, Jens
2018-03-01
Modern hybrid structural analysis methods have opened new possibilities to analyze and resolve flexible protein complexes where conventional crystallographic methods have reached their limits. Here, the Fast-Nano-Positioning System (Fast-NPS), a Bayesian parameter estimation-based analysis method and software, is an interesting method since it allows for the localization of unknown fluorescent dye molecules attached to macromolecular complexes based on single-molecule Förster resonance energy transfer (smFRET) measurements. However, the precision, accuracy, and reliability of structural models derived from results based on such complex calculation schemes are oftentimes difficult to evaluate. Therefore, we present two proof-of-principle benchmark studies where we use smFRET data to localize supposedly unknown positions on a DNA as well as on a protein-nucleic acid complex. Since we use complexes where structural information is available, we can compare Fast-NPS localization to the existing structural data. In particular, we compare different dye models and discuss how both accuracy and precision can be optimized.
Acrylamide content in French fries prepared in households: A pilot study in Spanish homes.
Mesias, Marta; Delgado-Andrade, Cristina; Holgado, Francisca; Morales, Francisco J
2018-09-15
An observational cross-sectional pilot study in 73 Spanish households was conducted to evaluate the impact of consumer practices on the formation of acrylamide during the preparation of French fries from fresh potatoes applying one-stage frying. 45.2% of samples presented acrylamide concentrations above the benchmark level for French fries (500 µg/kg). 6.9% of samples exceeded 2000 µg/kg and the 95th percentile was 2028 µg/kg. The median and average values were significantly higher than those reported by EFSA for this food category, suggesting that the total exposure to acrylamide by the population could be underestimated. In this randomised scenario of cooking practices, the content of reducing sugar and asparagine did not explain the acrylamide levels. However, the chromatic parameter a* of the fried potato was a powerful tool to classify the samples according to the acrylamide benchmark level regardless of the agronomical characteristics of the potato or the consumer practices. Copyright © 2018 Elsevier Ltd. All rights reserved.
Nema, Vijay; Pal, Sudhir Kumar
2013-01-01
Aim: This study was conducted to find the best-suited freely available software for protein modelling, using a few sample proteins. The proteins used ranged from small to large in size and had available crystal structures for the purpose of benchmarking. Key players such as Phyre2, Swiss-Model, CPHmodels-3.0, Homer, (PS)2, (PS)2-V2 and Modweb were used for the comparison and model generation. Results: The benchmarking process was carried out for four proteins, Icl, InhA, and KatG of Mycobacterium tuberculosis and RpoB of Thermus thermophilus, to identify the most suitable software. Parameters compared during the analysis gave relatively better values for Phyre2 and Swiss-Model. Conclusion: This comparative study showed that Phyre2 and Swiss-Model make good models of both small and large proteins compared with the other screened software. The other software packages were also good but were often less efficient in providing full-length and properly folded structures. PMID:24023424
Spectroscopic study of the benchmark Mn+-H2 complex.
Dryza, Viktoras; Poad, Berwyck L J; Bieske, Evan J
2009-05-28
We have recorded the rotationally resolved infrared spectrum of the weakly bound Mn+-H2 complex in the H-H stretch region (4022-4078 cm(-1)) by monitoring Mn+ photodissociation products. The band center of the Mn+-H2 H-H stretch transition is shifted by -111.8 cm(-1) from the transition of the free H2 molecule. The spectroscopic data suggest that the Mn+-H2 complex consists of a slightly perturbed H2 molecule attached to the Mn+ ion in a T-shaped configuration with a vibrationally averaged intermolecular separation of 2.73 Å. Together with the measured Mn+...H2 binding energy of 7.9 kJ/mol (Weis, P.; et al. J. Phys. Chem. A 1997, 101, 2809.), the spectroscopic parameters establish Mn+-H2 as the most thoroughly characterized transition-metal cation-dihydrogen complex and a benchmark for calibrating quantum chemical calculations on noncovalent systems involving open d-shell configurations. Such systems are of possible importance for hydrogen storage applications.
Supply network configuration—A benchmarking problem
NASA Astrophysics Data System (ADS)
Brandenburg, Marcus
2018-03-01
Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.
Musungu, Sisule F.
2006-01-01
The impact of intellectual property protection in the pharmaceutical sector on developing countries has been a central issue in the fierce debate during the past 10 years in a number of international fora, particularly the World Trade Organization (WTO) and WHO. The debate centres on whether the intellectual property system is: (1) providing sufficient incentives for research and development into medicines for diseases that disproportionately affect developing countries; and (2) restricting access to existing medicines for these countries. The Doha Declaration was adopted at WTO in 2001 and the Commission on Intellectual Property, Innovation and Public Health was established at WHO in 2004, but their respective contributions to tackling intellectual property-related challenges are disputed. Objective parameters are needed to measure whether a particular series of actions, events, decisions or processes contribute to progress in this area. This article proposes six possible benchmarks for intellectual property-related challenges with regard to the development of medicines and ensuring access to medicines in developing countries. PMID:16710545
Sintered Cathodes for All-Solid-State Structural Lithium-Ion Batteries
NASA Technical Reports Server (NTRS)
Huddleston, William; Dynys, Frederick; Sehirlioglu, Alp
2017-01-01
All-solid-state structural lithium ion batteries serve as both structural load-bearing components and as electrical energy storage devices to achieve system level weight savings in aerospace and other transportation applications. This multifunctional design goal is critical for the realization of next generation hybrid or all-electric propulsion systems. Additionally, transitioning to solid state technology improves upon battery safety from previous volatile architectures. This research established baseline solid state processing conditions and performance benchmarks for intercalation-type layered oxide materials for multifunctional application. Under consideration were lithium cobalt oxide and lithium nickel manganese cobalt oxide. Pertinent characteristics such as electrical conductivity, strength, chemical stability, and microstructure were characterized for future application in all-solid-state structural battery cathodes. The study includes characterization by XRD, ICP, SEM, ring-on-ring mechanical testing, and electrical impedance spectroscopy to elucidate optimal processing parameters, material characteristics, and multifunctional performance benchmarks. These findings provide initial conditions for implementing existing cathode materials in load bearing applications.
Allowing for Slow Evolution of Background Plasma in the 3D FDTD Plasma, Sheath, and Antenna Model
NASA Astrophysics Data System (ADS)
Smithe, David; Jenkins, Thomas; King, Jake
2015-11-01
We are working to include a slow-time evolution capability for the previously static background plasma parameters in the 3D finite-difference time-domain (FDTD) plasma and sheath model used to model ICRF antennas in fusion plasmas. A key aspect of this is SOL-density time evolution driven by ponderomotive rarefaction from the strong fields in the vicinity of the antenna. We demonstrate and benchmark a Scalar Ponderomotive Potential method, based on local field amplitudes, which is included in the 3D simulation, and present a more advanced Tensor Ponderomotive Potential approach, which we hope to employ in the future and which should improve the physical fidelity in the highly anisotropic environment of the SOL. Finally, we demonstrate and benchmark slow-time (non-linear) evolution of the RF sheath, and include realistic collisional effects from the neutral gas. Support from US DOE Grants DE-FC02-08ER54953 and DE-FG02-09ER55006.
Dugas, Martin; Eckholt, Markus; Bunzemeier, Holger
2008-01-01
Background: Monitoring of hospital information system (HIS) usage can provide insights into best practices within a hospital and help to assess time trends. In terms of effort and cost of benchmarking, figures derived automatically from the routine HIS are preferable to manual methods like surveys, in particular for repeated analysis. Methods: Due to their relevance for quality management and efficient resource utilization, we focused on time-to-completion of discharge letters (assessed by CT-plots) and usage of patient scheduling. We analyzed these parameters monthly during one year at a major university hospital in Germany. Results: We found several distinct patterns of discharge letter documentation indicating a large heterogeneity of HIS usage between different specialties (completeness 51-99%, delays 0-90 days). Overall usage of scheduling increased during the observation period by 62%, but again showed a considerable variation between departments. Conclusion: Regular monitoring of HIS key figures can contribute to a continuous HIS improvement process. PMID:18423046
The KMAT: Benchmarking Knowledge Management.
ERIC Educational Resources Information Center
de Jager, Martha
Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…
NASA Astrophysics Data System (ADS)
Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.
2013-04-01
Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan—scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
Improving estimation of kinetic parameters in dynamic force spectroscopy using cluster analysis
NASA Astrophysics Data System (ADS)
Yen, Chi-Fu; Sivasankar, Sanjeevi
2018-03-01
Dynamic Force Spectroscopy (DFS) is a widely used technique to characterize the dissociation kinetics and interaction energy landscape of receptor-ligand complexes with single-molecule resolution. In an Atomic Force Microscope (AFM)-based DFS experiment, receptor-ligand complexes, sandwiched between an AFM tip and substrate, are ruptured at different stress rates by varying the speed at which the AFM tip and substrate are pulled away from each other. The rupture events are grouped according to their pulling speeds, and the mean force and loading rate of each group are calculated. These data are subsequently fit to established models, and energy landscape parameters such as the intrinsic off-rate (koff) and the width of the potential energy barrier (xβ) are extracted. However, due to large uncertainties in determining the mean forces and loading rates of the groups, errors in the estimated koff and xβ can be substantial. Here, we demonstrate that the accuracy of fitted parameters in a DFS experiment can be dramatically improved by sorting rupture events into groups using cluster analysis instead of sorting them according to their pulling speeds. We test different clustering algorithms, including Gaussian mixture, logistic regression, and K-means clustering, under conditions that closely mimic DFS experiments. Using Monte Carlo simulations, we benchmark the performance of these clustering algorithms over a wide range of koff and xβ, under different levels of thermal noise, and as a function of both the number of unbinding events and the number of pulling speeds. Our results demonstrate that cluster analysis, particularly K-means clustering, is very effective in improving the accuracy of parameter estimation, particularly when the number of unbinding events is limited and the events are not well separated into distinct groups. Cluster analysis is easy to implement, and our performance benchmarks serve as a guide in choosing an appropriate method for DFS data analysis.
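A minimal sketch of the clustering idea, assuming K-means on (log loading rate, force) pairs and a Bell-Evans relation between most-probable rupture force and loading rate as the "established model"; the cluster count, thermal energy constant and initial guesses are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import curve_fit

KBT = 4.114  # thermal energy at room temperature, pN*nm

def bell_evans(log_r, koff, x_beta):
    """Most-probable rupture force vs. ln(loading rate), Bell-Evans model."""
    r = np.exp(log_r)
    return (KBT / x_beta) * np.log(r * x_beta / (koff * KBT))

def fit_dfs(forces_pN, rates_pN_per_s, n_clusters=6):
    """Cluster rupture events in (log loading rate, force) space with
    K-means, then fit the cluster means to the Bell-Evans model."""
    X = np.column_stack([np.log(rates_pN_per_s), forces_pN])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    log_r = np.array([X[labels == k, 0].mean() for k in range(n_clusters)])
    f_mean = np.array([X[labels == k, 1].mean() for k in range(n_clusters)])
    (koff, x_beta), _ = curve_fit(bell_evans, log_r, f_mean, p0=[1.0, 0.5])
    return koff, x_beta
```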
Overview and benchmark analysis of fuel cell parameters estimation for energy management purposes
NASA Astrophysics Data System (ADS)
Kandidayeni, M.; Macias, A.; Amamou, A. A.; Boulon, L.; Kelouwani, S.; Chaoui, H.
2018-03-01
Proton exchange membrane fuel cells (PEMFCs) have become the center of attention for energy conversion in many areas, such as the automotive industry, where they confront highly dynamic operating conditions that cause their characteristics to vary. In order to ensure appropriate modeling of PEMFCs, accurate parameter estimation is in demand. However, parameter estimation of PEMFC models is highly challenging due to their multivariate, nonlinear, and complex essence. This paper comprehensively reviews PEMFC model parameter estimation methods with a specific view to online identification algorithms, which are considered as the basis of global energy management strategy design, to estimate the linear and nonlinear parameters of a PEMFC model in real time. In this respect, different PEMFC models with different categories and purposes are discussed first. Subsequently, a thorough investigation of PEMFC parameter estimation methods in the literature is conducted in terms of applicability. Three potential algorithms for online applications, Recursive Least Squares (RLS), the Kalman filter, and the extended Kalman filter (EKF), which have escaped attention in previous works, are then utilized to identify the parameters of two well-known semi-empirical models in the literature, those of Squadrito et al. and Amphlett et al. Ultimately, the achieved results and future challenges are discussed.
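As an illustration of the online-identification idea, a generic recursive least squares (RLS) estimator with a forgetting factor is sketched below and applied to a simplified polarization model V = E0 - b·ln(i) - R·i that is linear in its parameters. This simplified model and the sample data are assumptions for the sketch, not the Squadrito or Amphlett formulations used in the paper.

```python
import numpy as np

class RLS:
    """Recursive least squares with a forgetting factor lam."""
    def __init__(self, n_params, lam=0.98, delta=1e3):
        self.theta = np.zeros(n_params)      # parameter estimate
        self.P = np.eye(n_params) * delta    # covariance-like matrix
        self.lam = lam

    def update(self, phi, y):
        """One measurement update with regressor phi and observation y."""
        phi = np.asarray(phi, dtype=float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Hypothetical use: V = E0 - b*ln(i) - R*i, linear in [E0, b, R] for
# measured current density i (A/cm^2) and cell voltage V (V).
rls = RLS(3)
for i, v in [(0.10, 0.82), (0.25, 0.76), (0.40, 0.72)]:   # made-up samples
    E0, b, R = rls.update([1.0, -np.log(i), -i], v)
```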
Messerli, Michael; Dewes, Patricia; Scholtz, Jan-Erik; Arendt, Christophe; Wildermuth, Simon; Vogl, Thomas J; Bauer, Ralf W
2018-05-01
To investigate the impact of an adaptive detector collimation on the dose parameters and the accuracy of scan length adaptation at prospectively ECG-triggered sequential cardiac CT with a wide-detector third-generation dual-source CT. Ideal scan lengths for human hearts were retrospectively derived from 103 triple-rule-out examinations. These measures were entered into the new scanner operated in prospectively ECG-triggered sequential cardiac scan mode with three different detector settings: (1) adaptive collimation, (2) fixed 64 × 0.6-mm collimation, and (3) fixed 96 × 0.6-mm collimation. Differences in effective scan length, deviation from the ideal scan length and dose parameters (CTDIvol, DLP) were documented. The ideal cardiac scan length could be matched by the adaptive collimation in every case, while the mean scanned length was longer by 15.4% with the fixed 64 × 0.6-mm and by 27.2% with the fixed 96 × 0.6-mm collimation. While the DLP was almost identical between the adaptive and the 64 × 0.6-mm collimation (83 vs. 89 mGycm at 120 kV), it was 62.7% higher with the 96 × 0.6-mm collimation (135 mGycm), p < 0.001. The adaptive detector collimation for prospectively ECG-triggered sequential acquisition allows the scan length to be adjusted as accurately as is otherwise only achievable with a spiral acquisition. This technique keeps patient exposure low where patient dose would increase significantly with the traditional step-and-shoot mode. • Adaptive detector collimation allows keeping patient exposure low in cardiac CT. • With novel detectors the desired scan length can be accurately matched. • Differences in detector settings may cause up to 62.7% excess dose.
Distribution and avoidance of debris on epoxy resin during UV ns-laser scanning processes
NASA Astrophysics Data System (ADS)
Veltrup, Markus; Lukasczyk, Thomas; Ihde, Jörg; Mayer, Bernd
2018-05-01
In this paper the distribution of debris generated by a nanosecond UV laser (248 nm) on epoxy resin and the prevention of the corresponding re-deposition effects by parameter selection for a ns-laser scanning process were investigated. In order to understand the mechanisms behind the debris generation, in-situ particle measurements were performed during laser treatment. These measurements enabled the determination of the ablation threshold of the epoxy resin as well as the particle density and size distribution in relation to the applied laser parameters. The experiments showed that it is possible to reduce debris on the surface with an adapted selection of pulse overlap with respect to laser fluence. A theoretical model for the parameter selection was developed and tested. Based on this model, the correct choice of laser parameters with reduced laser fluence resulted in a surface without any re-deposited micro-particles.
NASA Astrophysics Data System (ADS)
Teixidor, D.; Ferrer, I.; Ciurana, J.
2012-04-01
This paper reports the characterization of the laser machining (milling) process to manufacture micro-channels, in order to understand the influence of process parameters on the final features. Selection of process operational parameters is highly critical for successful laser micromachining. A set of designed experiments is carried out in a pulsed Nd:YAG laser system using AISI H13 hardened tool steel as the work material. Several micro-channels have been manufactured as micro-mold cavities, varying parameters such as scanning speed (SS), pulse intensity (PI) and pulse frequency (PF). Results are obtained by evaluating the dimensions and the surface finish of the micro-channels. The dimensions and shape of the micro-channels produced with the laser micro-milling process exhibit variations. In general, the use of low scanning speeds increases the quality of the feature in terms of both surface finish and dimensional accuracy.
Beyond Gaussians: a study of single spot modeling for scanning proton dose calculation
Li, Yupeng; Zhu, Ronald X.; Sahoo, Narayan; Anand, Aman; Zhang, Xiaodong
2013-01-01
Active spot scanning proton therapy is becoming increasingly adopted by proton therapy centers worldwide. Unlike passive-scattering proton therapy, active spot scanning proton therapy, especially intensity-modulated proton therapy, requires proper modeling of each scanning spot to ensure accurate computation of the total dose distribution contributed from a large number of spots. During commissioning of the spot scanning gantry at the Proton Therapy Center in Houston, it was observed that the long-range scattering protons in a medium may have been inadequately modeled for high-energy beams by a commercial treatment planning system, which could lead to incorrect prediction of field-size effects on dose output. In the present study, we developed a pencil-beam algorithm for scanning-proton dose calculation by focusing on properly modeling individual scanning spots. All modeling parameters required by the pencil-beam algorithm can be generated based solely on a few sets of measured data. We demonstrated that low-dose halos in single-spot profiles in the medium could be adequately modeled with the addition of a modified Cauchy-Lorentz distribution function to a double-Gaussian function. The field-size effects were accurately computed at all depths and field sizes for all energies, and good dose accuracy was also achieved for patient dose verification. The implementation of the proposed pencil beam algorithm also enabled us to study the importance of different modeling components and parameters at various beam energies. The results of this study may be helpful in improving dose calculation accuracy and simplifying beam commissioning and treatment planning processes for spot scanning proton therapy. PMID:22297324
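A hedged sketch of the single-spot lateral model the abstract describes: two Gaussians plus a heavy-tailed term standing in for the "modified Cauchy-Lorentz distribution function", whose exact form is not given in the abstract. All weights and widths are illustrative fit parameters, not commissioning values.

```python
import numpy as np

def spot_profile(r, w1, sigma1, w2, sigma2, w3, gamma):
    """Radial single-spot fluence model at one depth: a double Gaussian
    plus a heavy-tailed Lorentz-like term for the low-dose halo.
    Each component is normalized as a 2D radial distribution."""
    g1 = w1 * np.exp(-r**2 / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
    g2 = w2 * np.exp(-r**2 / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
    halo = w3 * gamma / (2 * np.pi * (r**2 + gamma**2) ** 1.5)
    return g1 + g2 + halo
```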
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.
1991-01-01
A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification: all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
NASA Astrophysics Data System (ADS)
Grafen, M.; Delbeck, S.; Busch, H.; Heise, H. M.; Ostendorf, A.
2018-02-01
Mid-infrared spectroscopy hyphenated with micro-dialysis is an excellent method for monitoring metabolic blood parameters as it enables the concurrent, reagent-free and precise measurement of multiple clinically relevant substances such as glucose, lactate and urea in micro-dialysates of blood or interstitial fluid. For a marketable implementation, quantum cascade lasers (QCLs) seem to represent a favourable technology due to their high degree of miniaturization and potentially low production costs. In this work, an external-cavity (EC) QCL-based spectrometer and two Fourier-transform infrared (FTIR) spectrometers were benchmarked with regard to the precision, accuracy and long-term stability needed for the monitoring of critically ill patients. For the tests, ternary aqueous solutions of glucose, lactate and mannitol (the latter for dialysis recovery determination) were measured in custom-made flow-through transmission cells of different pathlengths and analyzed by Partial Least Squares calibration models. It was revealed that the wavenumber tuning speed of the QCL had a severe impact on the EC mirror trajectory, due to the matching of the digital-to-analog-converter step frequency with the mechanical resonance frequency of the mirror actuation. By selecting an appropriate tuning speed, the mirror oscillations acted as a hardware smoothing filter for the significant intensity variations caused by mode hopping. Besides the tuning speed, the effects of averaging over multiple spectra and software smoothing parameters (Savitzky-Golay filters and FT-smoothing) were investigated. The final settings led to a performance of the QCL system that was comparable with a research FTIR spectrometer and even surpassed the performance of a small FTIR mini-spectrometer.
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...
DOT National Transportation Integrated Search
2013-02-01
The scanning skills of a vehicle operator represent a key parameter for hazard perception and effective vehicle operation. Overriding one's sight distance, or not looking far enough ahead down the roadway, may not leave a motorcycle rider e...
Mancia, Claire; Loustaud-Ratti, Véronique; Carrier, Paul; Naudet, Florian; Bellissant, Eric; Labrousse, François; Pichon, Nicolas
2015-08-01
One of the main selection criteria of the quality of a liver graft is the degree of steatosis, which will determine the success of the transplantation. The aim of this study was to evaluate the ability of FibroScan and its related methods, Controlled Attenuation Parameter and Liver Stiffness, to objectively assess steatosis and fibrosis in livers from brain-dead donors to be potentially used for transplantation. Over a period of 10 months, 23 consecutive brain-dead donors screened for liver procurement underwent a FibroScan and a liver biopsy. The different predictive models of liver retrievability, using liver biopsy as the gold standard, led to the following areas under the receiver operating characteristic curve: 76.6% (95% confidence intervals [95% CIs], 48.2%-100%) when based solely on controlled attenuation parameter, 75.0% (95% CIs, 34.3%-100%) when based solely on liver stiffness, and 96.7% (95% CIs, 88.7%-100%) when based on combined indices. Our study suggests that a preoperative selection of brain-dead donors based on a combination of both Controlled Attenuation Parameter and Liver Stiffness obtained with FibroScan could result in a good preoperative prediction of the histological status and degree of steatosis of a potential liver graft.
NASA Astrophysics Data System (ADS)
Silva, C. E. R.; Alvarenga, A. V.; Costa-Felix, R. P. B.
2011-02-01
Ultrasound is often used as a Non-Destructive Testing (NDT) technique to analyze components and structures to detect internal and surface flaws. To guarantee reliable measurements, it is necessary to calibrate instruments and properly assess related uncertainties. An important device of an ultrasonic instrument system is its probe, whose characterization should be performed according to EN 12668-2. Concerning the beam profile of immersion probes, the parameters to be assessed are beam divergence, focal distance, width, and zone length. Such parameters are determined by scanning a reflector or a hydrophone throughout the transducer beam. Within the present work, a methodology developed at Inmetro's Laboratory of Ultrasound to evaluate relevant beam parameters is presented, based on a hydrophone scan. A water bath and a positioning system to move the hydrophone were used to perform the scan. The studied probes were excited by a signal generator, and the waterborne signals were detected by the hydrophone and acquired using an oscilloscope. A user-friendly virtual instrument was developed in LabVIEW to automate the system. The initial tests were performed using 1 and 2.25 MHz unfocused ultrasonic probes (Ø 1.27 cm), and results were consistent with the manufacturer's specifications. Moreover, expanded uncertainties were lower than 6% for all parameters under consideration.
Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu
2015-02-12
The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to limited signal-to-noise ratio (SNR) of PET measurements and high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate log-likelihood function with respect to kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least square (WNLS) method was employed. The proposed multi-tracer DPIR (MTDPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimations.
List-Based Simulated Annealing Algorithm for Traveling Salesman Problem
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun
2016-01-01
The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP problems. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms. PMID:27034650
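A simplified sketch of the list-based idea for the TSP, assuming 2-opt neighbourhood moves: the temperature list is seeded from random moves via an initial acceptance probability p0, the current maximum is used in the Metropolis test, and accepted uphill moves refresh the list. The per-move list update here is a simplification of the paper's scheme, and list_len, iters and p0 are illustrative values.

```python
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def lbsa_tsp(dist, list_len=120, iters=20000, p0=0.1):
    """List-based simulated annealing sketch for the TSP (dist is a full
    distance matrix). Returns the best tour found and its length."""
    n = len(dist)
    tour = list(range(n)); random.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)

    # Seed the temperature list from the cost spread of random 2-opt moves.
    temps = []
    for _ in range(list_len):
        i, j = sorted(random.sample(range(n), 2))
        cand = tour[:i] + tour[i:j][::-1] + tour[j:]
        temps.append(abs(tour_length(cand, dist) - best_len) / -math.log(p0) + 1e-9)
    temps.sort(reverse=True)

    cur_len = best_len
    for _ in range(iters):
        t_max = temps[0]                        # largest temperature in the list
        i, j = sorted(random.sample(range(n), 2))
        cand = tour[:i] + tour[i:j][::-1] + tour[j:]
        cand_len = tour_length(cand, dist)
        delta = cand_len - cur_len
        if delta <= 0 or random.random() < math.exp(-delta / t_max):
            if delta > 0:                       # adapt the list on accepted uphill moves
                temps[0] = -delta / math.log(random.random() + 1e-12)
                temps.sort(reverse=True)
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
    return best, best_len
```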
Genetic algorithms for the application of Activated Sludge Model No. 1.
Kim, S; Lee, H; Kim, J; Kim, C; Ko, J; Woo, H; Kim, S
2002-01-01
The genetic algorithm (GA) has been integrated into the IWA ASM No. 1 to calibrate important stoichiometric and kinetic parameters. The evolutionary feature of the GA was used to configure the multiple local optima as well as the global optimum. The objective function of the optimization was designed to minimize the difference between estimated and measured effluent concentrations of the activated sludge system. Both steady-state and dynamic data of the simulation benchmark were used for calibration using the denitrification layout. Depending upon the confidence intervals and objective functions, the proposed method provided distributions of the parameter space. Field data were collected and applied to validate the calibration capacity of the GA. Dynamic calibration was suggested to capture periodic variations of inflow concentrations. Also, in order to verify the proposed method in a real wastewater treatment plant, measured data sets for substrate concentrations were obtained from the Haeundae wastewater treatment plant and used to estimate parameters in the dynamic system. The simulation results with calibrated parameters matched the observed effluent COD concentrations well.
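A minimal real-coded GA calibration loop of the kind described above; simulate_effluent is a stand-in for an ASM1 simulation returning effluent concentrations, and the operators, population size and bounds handling are illustrative rather than the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(params, measured, simulate_effluent):
    """Sum of squared errors between simulated and measured effluent
    concentrations; simulate_effluent is a placeholder for an ASM1 run."""
    return float(np.sum((simulate_effluent(params) - measured) ** 2))

def calibrate_ga(bounds, measured, simulate_effluent,
                 pop_size=40, generations=100, mut_sigma=0.05):
    """Real-coded GA: tournament selection, blend crossover, Gaussian
    mutation, with elitism. Returns the best parameter vector found."""
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fit = np.array([objective(p, measured, simulate_effluent) for p in pop])
        new_pop = [pop[fit.argmin()]]                       # keep the best (elitism)
        while len(new_pop) < pop_size:
            i1, i2 = rng.integers(pop_size, size=2)
            j1, j2 = rng.integers(pop_size, size=2)
            a = pop[i1] if fit[i1] < fit[i2] else pop[i2]   # tournament selection
            b = pop[j1] if fit[j1] < fit[j2] else pop[j2]
            w = rng.uniform(size=len(bounds))
            child = w * a + (1 - w) * b                     # blend crossover
            child += rng.normal(0.0, mut_sigma * (hi - lo)) # Gaussian mutation
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fit = np.array([objective(p, measured, simulate_effluent) for p in pop])
    return pop[fit.argmin()]
```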
Software electron counting for low-dose scanning transmission electron microscopy.
Mittelberger, Andreas; Kramberger, Christian; Meyer, Jannik C
2018-05-01
The performance of the detector is of key importance for low-dose imaging in transmission electron microscopy, and counting every single electron can be considered as the ultimate goal. In scanning transmission electron microscopy, low-dose imaging can be realized by very fast scanning, however, this also introduces artifacts and a loss of resolution in the scan direction. We have developed a software approach to correct for artifacts introduced by fast scans, making use of a scintillator and photomultiplier response that extends over several pixels. The parameters for this correction can be directly extracted from the raw image. Finally, the images can be converted into electron counts. This approach enables low-dose imaging in the scanning transmission electron microscope via high scan speeds while retaining the image quality of artifact-free slower scans. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Ultrasonic inspection and deployment apparatus
Michaels, Jennifer E.; Michaels, Thomas E.; Mech, Jr., Stephen J.
1984-01-01
An ultrasonic inspection apparatus for the inspection of metal structures, especially installed pipes. The apparatus combines a specimen inspection element, an acoustical velocity sensing element, and a surface profiling element, all in one scanning head. A scanning head bellows contains a volume of oil above the pipe surface, serving as acoustical couplant between the scanning head and the pipe. The scanning head is mounted on a scanning truck which is mobile around a circular track surrounding the pipe. The scanning truck has sufficient motors, gears, and position encoders to allow the scanning head six degrees of motion freedom. A computer system continually monitors acoustical velocity, and uses that parameter to process surface profiling and inspection data. The profiling data is used to automatically control scanning head position and alignment and to define a coordinate system used to identify and interpret inspection data. The apparatus is suitable for highly automated, remote application in hostile environments, particularly high temperature and radiation areas.
Three-dimensional biofilm structure quantification.
Beyenal, Haluk; Donovan, Conrad; Lewandowski, Zbigniew; Harkin, Gary
2004-12-01
Quantitative parameters describing biofilm physical structure have been extracted from three-dimensional confocal laser scanning microscopy images and used to compare biofilm structures, monitor biofilm development, and quantify environmental factors affecting biofilm structure. Researchers have previously used biovolume, volume to surface ratio, roughness coefficient, and mean and maximum thicknesses to compare biofilm structures. The selection of these parameters is dependent on the availability of software to perform calculations. We believe it is necessary to develop more comprehensive parameters to describe heterogeneous biofilm morphology in three dimensions. This research presents parameters describing three-dimensional biofilm heterogeneity, size, and morphology of biomass calculated from confocal laser scanning microscopy images. This study extends previous work which extracted quantitative parameters regarding morphological features from two-dimensional biofilm images to three-dimensional biofilm images. We describe two types of parameters: (1) textural parameters showing microscale heterogeneity of biofilms and (2) volumetric parameters describing size and morphology of biomass. The three-dimensional features presented are average (ADD) and maximum diffusion distances (MDD), fractal dimension, average run lengths (in X, Y and Z directions), aspect ratio, textural entropy, energy and homogeneity. We discuss the meaning of each parameter and present the calculations in detail. The developed algorithms, including automatic thresholding, are implemented in software as MATLAB programs which will be available at site prior to publication of the paper.
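Two of the simpler quantities mentioned above can be sketched directly from a confocal stack. The definitions below (gray-level Shannon entropy and mean run length along X of a thresholded stack) are plausible readings of "textural entropy" and "average run length", not the authors' MATLAB implementations.

```python
import numpy as np

def textural_entropy(stack, bins=256):
    """Shannon entropy (bits) of the gray-level distribution of a 3D stack."""
    hist, _ = np.histogram(stack, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def average_run_length_x(binary_stack):
    """Mean length of consecutive biomass voxels along the X direction of a
    thresholded (0/1) confocal stack with axes ordered (Z, Y, X)."""
    runs = []
    for z in range(binary_stack.shape[0]):
        for y in range(binary_stack.shape[1]):
            row = binary_stack[z, y, :].astype(np.int8)
            edges = np.diff(np.concatenate(([0], row, [0])))
            starts = np.where(edges == 1)[0]
            ends = np.where(edges == -1)[0]
            runs.extend(ends - starts)
    return float(np.mean(runs)) if runs else 0.0
```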
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...
NASA Astrophysics Data System (ADS)
Fuchs, Sven; Balling, Niels; Förster, Andrea
2016-04-01
Numerical temperature models generated for geodynamic studies as well as for geothermal energy solutions heavily depend on rock thermal properties. Best practice for the determination of those parameters is the measurement of rock samples in the laboratory. Given the necessity to enlarge databases of subsurface rock parameters beyond drill-core measurements, an approach for the indirect determination of these parameters is developed, for rocks as well as for geological formations. We present new and universally applicable prediction equations for thermal conductivity, thermal diffusivity and specific heat capacity in sedimentary rocks derived from data provided by standard geophysical well logs. The approach is based on a data set of synthetic sedimentary rocks (clastic rocks, carbonates and evaporites) composed of mineral assemblages with variable contents of 15 major rock-forming minerals and porosities varying between 0 and 30%. Petrophysical properties are assigned to both the rock-forming minerals and the pore-filling fluids. Using multivariate statistics, relationships were then explored between each thermal property and well-logged petrophysical parameters (density, sonic interval transit time, hydrogen index, volume fraction of shale and photoelectric absorption index) on a regression subset of the data (70% of data) (Fuchs et al., 2015). Prediction quality was quantified on the remaining test subset (30% of data). The combination of three to five well-log parameters results in prediction uncertainties on the order of <15% for thermal conductivity and thermal diffusivity, and of <10% for specific heat capacity. Comparison of predicted and benchmark laboratory thermal conductivity from deep boreholes of the Norwegian-Danish Basin, the North German Basin, and the Molasse Basin results in 3 to 5% larger uncertainties with regard to the test data set. With regard to temperature models, the use of calculated TC borehole profiles approximates measured temperature logs with an error of <3°C along a 4 km deep profile. A benchmark comparison for thermal diffusivity and specific heat capacity is pending. Fuchs, Sven; Balling, Niels; Förster, Andrea (2015): Calculation of thermal conductivity, thermal diffusivity and specific heat capacity of sedimentary rocks using petrophysical well logs, Geophysical Journal International 203, 1977-2000, doi: 10.1093/gji/ggv403
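The prediction equations themselves are not reproduced in the abstract, but the underlying multivariate-regression workflow can be sketched as follows. The log suite named in the comments and the ordinary least-squares form are assumptions, and the fitted coefficients are whatever the training data give, not the published equations.

```python
import numpy as np

def fit_tc_regression(logs, tc_lab):
    """Fit a multivariate linear model TC ~ b0 + b·logs on a training
    (regression) subset. The columns of `logs` might be density, sonic
    interval transit time, hydrogen index, shale volume fraction and
    photoelectric absorption index; tc_lab are laboratory values."""
    X = np.column_stack([np.ones(len(logs)), logs])
    coeff, *_ = np.linalg.lstsq(X, tc_lab, rcond=None)
    return coeff

def predict_tc(coeff, logs):
    """Apply the fitted coefficients to new well-log samples."""
    X = np.column_stack([np.ones(len(logs)), logs])
    return X @ coeff
```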
NASA Astrophysics Data System (ADS)
Douša, Jan; Dick, Galina; Kačmařík, Michal; Václavovic, Pavel; Pottiaux, Eric; Zus, Florian; Brenot, Hugues; Moeller, Gregor; Hinterberger, Fabian; Pacione, Rosa; Stuerze, Andrea; Eben, Kryštof; Teferle, Norman; Ding, Wenwu; Morel, Laurent; Kaplon, Jan; Hordyniec, Pavel; Rohm, Witold
2017-04-01
The COST Action ES1206 GNSS4SWEC addresses new exploitations of the synergy between developments in the GNSS and meteorological communities. Working Group 1 (Advanced GNSS processing techniques) deals with implementing and assessing new methods for GNSS tropospheric monitoring and precise positioning, exploiting all modern GNSS constellations, signals, products, etc. Besides other goals, WG1 coordinates the development of advanced tropospheric products in support of numerical and non-numerical weather nowcasting. These are ultra-fast and high-resolution tropospheric products available in real time or in a sub-hourly fashion, and parameters in support of monitoring the anisotropy of the troposphere, e.g. horizontal gradients and tropospheric slant path delays. This talk gives an overview of WG1 activities and, particularly, achievements in two activities, the Benchmark and Real-time demonstration campaigns. For the Benchmark campaign, a complex data set of GNSS observations and various meteorological data was collected for a two-month period in 2013 (May-June), which included severe weather events in central Europe. An initial processing of data sets from GNSS and numerical weather models (NWM) provided independently estimated reference parameters - ZTDs and tropospheric horizontal gradients. The comparison of horizontal tropospheric gradients from GNSS and NWM data demonstrated a very good agreement among independent solutions with negligible biases and an accuracy of about 0.5 mm. Visual comparisons of maps of zenith wet delays and tropospheric horizontal gradients showed very promising results for future exploitations of advanced GNSS tropospheric products in meteorological applications such as severe weather event monitoring and weather nowcasting. The Benchmark data set is also used for an extensive validation of line-of-sight tropospheric Slant Total Delays (STD) from GNSS, NWM ray-tracing and Water Vapour Radiometer (WVR) solutions. Seven institutions delivered their STDs estimated based on GNSS observations processed using different software and strategies. STDs from NWM ray-tracing came from three institutions using four different NWM models. Results show generally a very good mutual agreement among all solutions from all techniques. The influence of adding uncleaned GNSS post-fit residuals, i.e. residuals that still contain non-tropospheric systematic effects such as multipath, to the estimated STDs will be presented. The Real-time demonstration campaign aims at enhancing and assessing ultra-fast GNSS tropospheric products for severe weather and NWM nowcasting. Results are shown from real-time demonstrations as well as from offline production simulating real time using the Benchmark campaign data.
NASA Astrophysics Data System (ADS)
Green, Jonathan; Schmitz, Oliver; Severn, Greg; van Ruremonde, Lars; Winters, Victoria
2017-10-01
The MARIA device at the UW-Madison is used primarily to investigate the dynamics and fueling of neutral particles in helicon discharges. A new systematic method is in development to measure key plasma and neutral particle parameters by spectroscopic methods. The setup relies on spectroscopic line ratios for investigating basic plasma parameters and extrapolation to other states using a collisional radiative model. Active pumping using an Nd:YAG-pumped dye laser is used to benchmark and correct the underlying atomic data for the collisional radiative model. First results show a matching linear dependence of both electron density and laser-induced fluorescence on the magnetic field above 500 G. This linear dependence agrees with the helicon dispersion relation and implies that MARIA can reliably sustain the helicon mode and support future measurements. This work was funded by the NSF CAREER award PHY-1455210.
NASA Astrophysics Data System (ADS)
Galliano, Frédéric
2018-05-01
This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.
NASA Astrophysics Data System (ADS)
Liu, Bing; Sun, Li Guo
2018-06-01
This paper takes the Nanjing-Hangzhou high-speed overbridge, a self-anchored suspension bridge, as the research target, aiming to identify the dynamic characteristic parameters of the bridge by using the peak-picking method to analyze the velocity response data under ambient excitation collected by 7 vibration pickup sensors set on the bridge deck. ABAQUS is used to set up a three-dimensional finite element model of the full bridge, and the finite element model of the suspension bridge is updated based on the identified modal parameters and the suspender forces measured by the PDV100 laser vibrometer. The study shows that the modal parameters can be well identified by analyzing the bridge vibration velocities collected at the 7 survey points. The identified modal parameters and measured suspender forces can be used as the basis for updating the finite element model of the suspension bridge. The updated model faithfully reflects the structural physical features and can also serve as the benchmark model for long-term health monitoring and condition assessment of the bridge.
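A minimal peak-picking sketch of the kind referred to above, assuming velocity records from several deck sensors: Welch power spectra are averaged across channels and prominent peaks are taken as candidate natural frequencies. The sampling rate, segment length and prominence threshold are illustrative.

```python
import numpy as np
from scipy.signal import welch, find_peaks

def pick_modal_frequencies(velocity, fs, nperseg=4096, prominence=0.1):
    """Average the Welch power spectra of one or more velocity channels
    (rows of `velocity`) and return the frequencies of prominent peaks
    as candidate natural frequencies (Hz)."""
    channels = np.atleast_2d(velocity)
    psd_sum = None
    for channel in channels:
        f, pxx = welch(channel, fs=fs, nperseg=nperseg)
        psd_sum = pxx if psd_sum is None else psd_sum + pxx
    psd = psd_sum / len(channels)
    peaks, _ = find_peaks(psd / psd.max(), prominence=prominence)
    return f[peaks]
```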
[Features of binding of proflavine to DNA at different DNA-ligand concentration ratios].
Berezniak, E G; Gladkovskaia, N A; Khrebtova, A S; Dukhopel'nikov, E V; Zinchenko, A V
2009-01-01
The binding of proflavine to calf thymus DNA has been studied using the methods of differential scanning calorimetry and spectrophotometry. It was shown that proflavine can interact with DNA by at least three binding modes. At high DNA-ligand concentration ratios (P/D), proflavine intercalates into both GC- and AT-sites, with a preference for GC-rich sequences. At low P/D ratios, proflavine interacts with DNA by the external binding mode. From the spectrophotometric concentration dependences, the parameters of proflavine-DNA complexation were calculated. Thermodynamic parameters of DNA melting were calculated from the differential scanning calorimetry data.
Multichannel scanning radiometer for remote sensing cloud physical parameters
NASA Technical Reports Server (NTRS)
Curran, R. J.; Kyle, H. L.; Blaine, L. R.; Smith, J.; Clem, T. D.
1981-01-01
A multichannel scanning radiometer developed for remote observation of cloud physical properties is described. Consisting of six channels in the near infrared and one channel in the thermal infrared, the instrument can observe cloud physical parameters such as optical thickness, thermodynamic phase, cloud top altitude, and cloud top temperature. Measurement accuracy is quantified through flight tests on the NASA CV-990 and the NASA WB-57F, and is found to be limited by the harsh environment of the aircraft at flight altitude. The electronics, data system, and calibration of the instrument are also discussed.
Ara, Mirian; Ferreras, Antonio; Pajarin, Ana B; Calvo, Pilar; Figus, Michele; Frezzotti, Paolo
2015-01-01
To assess the intrasession repeatability and intersession reproducibility of peripapillary retinal nerve fiber layer (RNFL) thickness parameters measured by scanning laser polarimetry (SLP) with enhanced corneal compensation (ECC) in healthy and glaucomatous eyes. One randomly selected eye of 82 healthy individuals and 60 glaucoma subjects was evaluated. Three scans were acquired during the first visit to evaluate intravisit repeatability. A different operator obtained two additional scans within 2 months after the first session to determine intervisit reproducibility. The intraclass correlation coefficient (ICC), coefficient of variation (COV), and test-retest variability (TRT) were calculated for all SLP parameters in both groups. ICCs ranged from 0.920 to 0.982 for intravisit measurements and from 0.910 to 0.978 for intervisit measurements. The temporal-superior-nasal-inferior-temporal (TSNIT) average was the highest (0.967 and 0.946) in normal eyes, while nerve fiber indicator (NFI; 0.982) and inferior average (0.978) yielded the best ICC in glaucomatous eyes for intravisit and intervisit measurements, respectively. All COVs were under 10% in both groups, except NFI. TSNIT average had the lowest COV (2.43%) in either type of measurement. Intervisit TRT ranged from 6.48 to 12.84. The reproducibility of peripapillary RNFL measurements obtained with SLP-ECC was excellent, indicating that SLP-ECC is sufficiently accurate for monitoring glaucoma progression.
Barnes, Samuel R; Ng, Thomas S C; Montagne, Axel; Law, Meng; Zlokovic, Berislav V; Jacobs, Russell E
2016-05-01
To determine optimal parameters for acquisition and processing of dynamic contrast-enhanced MRI (DCE-MRI) to detect small changes in near-normal, low blood-brain barrier (BBB) permeability. Using a contrast-to-noise ratio metric (K-CNR) for Ktrans precision and accuracy, the effects of kinetic model selection, scan duration, temporal resolution, signal drift, and length of baseline on the estimation of low permeability values were evaluated with simulations. The Patlak model was shown to give the highest K-CNR at low Ktrans. The Ktrans transition point, above which other models yielded superior results, was highly dependent on scan duration and tissue extravascular extracellular volume fraction (ve). The highest K-CNR for low Ktrans was obtained when Patlak model analysis was combined with long scan times (10-30 min), modest temporal resolution (<60 s/image), and long baseline scans (1-4 min). Signal drift as low as 3% was shown to affect the accuracy of Ktrans estimation with Patlak analysis. DCE acquisition and modeling parameters are interdependent and should be optimized together for the tissue being imaged. Appropriately optimized protocols can detect even the subtlest changes in BBB integrity and may be used to probe the earliest changes in neurodegenerative diseases such as Alzheimer's disease and multiple sclerosis. © 2015 Wiley Periodicals, Inc.
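For reference, the Patlak model favoured at low Ktrans is linear in its two parameters and can be fitted directly. The sketch below assumes negligible back-flux and uses a trapezoidal integral of the plasma input; it makes no claim to match the study's simulation pipeline.

```python
import numpy as np

def patlak_fit(t, c_tissue, c_plasma):
    """Linear Patlak fit C_t(t) = Ktrans * integral(C_p) + v_p * C_p(t),
    appropriate when back-flux is negligible (low-permeability regime).
    t, c_tissue and c_plasma are 1D arrays on the same time grid."""
    int_cp = np.concatenate(([0.0], np.cumsum(
        np.diff(t) * 0.5 * (c_plasma[1:] + c_plasma[:-1]))))  # trapezoidal integral
    A = np.column_stack([int_cp, c_plasma])
    (ktrans, vp), *_ = np.linalg.lstsq(A, c_tissue, rcond=None)
    return ktrans, vp
```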
[Results of therapy of children with amblyopia by scanning stimulating laser].
Chentsova, O B; Magaramova, M D; Grechanyĭ, M P
1997-01-01
A new, effective method for the treatment of amblyopia was used in 113 children: stimulation with the ophthalmological SLSO-208A scanning laser, applied by two techniques differing in transmission coefficient and scanning pattern. Good results were attained; the best outcomes occurred when laser exposure was combined with traditional amblyopia therapy and in patients with central fixation. The results were assessed by the main parameters of visual function and by the stability of the effect.
Effect of Fourier transform on the streaming in quantum lattice gas algorithms
NASA Astrophysics Data System (ADS)
Oganesov, Armen; Vahala, George; Vahala, Linda; Soe, Min
2018-04-01
All our previous quantum lattice gas algorithms for nonlinear physics have approximated the kinetic energy operator by streaming sequences to neighboring lattice sites. Here, the kinetic energy can be treated to all orders by Fourier transforming the kinetic energy operator, interlaced with Dirac-based unitary collision operators. Benchmarking against exact solutions for the 1D nonlinear Schrödinger equation shows an extended range of parameters (soliton speeds and amplitudes) over the Dirac-based near-lattice-site streaming quantum algorithm.
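As a point of reference for the benchmark described above, the snippet below integrates the 1D focusing nonlinear Schrödinger equation with a standard Strang split-step scheme in which the kinetic term is applied exactly in Fourier space, and compares the result against the exact bright-soliton solution. It only illustrates the spectral treatment of the kinetic energy; it is not the paper's interleaved Dirac-based unitary collision algorithm, and the grid size, time step, and soliton parameters are arbitrary choices.

```python
import numpy as np

# i psi_t + 0.5 psi_xx + |psi|^2 psi = 0, solved by Strang split-step Fourier.
L, N = 100.0, 1024
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
dt, steps = 0.001, 5000
a, c = 1.0, 0.5                                    # soliton amplitude and speed

def soliton(x, t, a, c):
    """Exact bright soliton of the focusing 1D NLS above."""
    return a / np.cosh(a * (x - c * t)) * np.exp(1j * (c * x + 0.5 * (a**2 - c**2) * t))

psi = soliton(x, 0.0, a, c)
half_kinetic = np.exp(-0.5j * k**2 * (dt / 2))     # exact kinetic propagator over dt/2
for _ in range(steps):
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))
    psi *= np.exp(1j * np.abs(psi)**2 * dt)        # full nonlinear step
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))

exact = soliton(x, steps * dt, a, c)
print("max |error| vs exact soliton:", np.max(np.abs(psi - exact)))
```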
Approximating the Basset force by optimizing the method of van Hinsberg et al.
NASA Astrophysics Data System (ADS)
Casas, G.; Ferrer, A.; Oñate, E.
2018-01-01
In this work we put the method proposed by van Hinsberg et al. [29] to the test, highlighting its accuracy and efficiency in a sequence of benchmarks of increasing complexity. Furthermore, we explore the possibility of systematizing the way in which the method's free parameters are determined by generalizing the optimization problem that was considered originally. Finally, we provide a list of worked-out values, ready for implementation in large-scale particle-laden flow simulations.
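To make the idea concrete, the sketch below fits a short sum of exponentials to the tail of the Basset kernel t^(-1/2), which is the step that makes a recursive, memory-light update of the history force possible. The window length, number of exponentials, initial guess, and relative-error objective here are illustrative placeholders, not the generalized optimization problem solved in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Approximate the Basset kernel K(t) = t**-0.5 on t >= t_win by a sum of m exponentials
# a_i * exp(-t / tau_i); each exponential can then be advanced recursively in time.
m, t_win, t_max = 4, 1.0, 1e3
t = np.geomspace(t_win, t_max, 400)

def residual(p):
    a, tau = p[:m], np.exp(p[m:])                  # positive time scales via log-parameters
    approx = (a[:, None] * np.exp(-t[None, :] / tau[:, None])).sum(axis=0)
    return (approx - t**-0.5) / t**-0.5            # relative error over the fitting window

p0 = np.concatenate([np.full(m, 0.5), np.log(np.geomspace(t_win, t_max, m))])
fit = least_squares(residual, p0)
print("max relative error on window:", np.max(np.abs(fit.fun)))
```

In a full scheme of this type, the recent part of the history integral (t < t_win) is integrated directly, while the exponential tail is updated recursively at each time step.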
A new inertia weight control strategy for particle swarm optimization
NASA Astrophysics Data System (ADS)
Zhu, Xianming; Wang, Hongbo
2018-04-01
Particle Swarm Optimization (PSO) is a member of the family of swarm intelligence algorithms and is inspired by the behavior of bird flocks. The inertia weight is one of the most important parameters of PSO, as it balances the algorithm's exploration and exploitation. This paper proposes a new inertia weight control strategy, and PSO with this new strategy is tested on four benchmark functions. The results show that the new strategy gives PSO better performance.
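For orientation, the sketch below is a bare-bones PSO in which the inertia weight enters the velocity update as w(t)·v. The paper's new control strategy is not reproduced; a simple linearly decreasing schedule stands in as a placeholder to show where any such strategy plugs in, and the benchmark functions (sphere, Rastrigin), swarm size, and coefficients are generic choices rather than the paper's test setup.

```python
import numpy as np

def pso(f, dim=10, n_particles=30, iters=500, w_start=0.9, w_end=0.4,
        c1=2.0, c2=2.0, bounds=(-5.12, 5.12), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        w = w_start - (w_start - w_end) * t / iters        # placeholder inertia weight schedule
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

sphere = lambda z: np.sum(z**2)
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
for name, f in [("sphere", sphere), ("rastrigin", rastrigin)]:
    print(name, pso(f)[1])
```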
Gabran, S R I; Saad, J H; Salama, M M A; Mansour, R R
2009-01-01
This paper demonstrates the electromagnetic modeling and simulation of an implanted Medtronic deep brain stimulation (DBS) electrode using the finite difference time domain (FDTD) method. The model is developed using Empire XCcel and represents the electrode surrounded by brain tissue, assumed to be a homogeneous and isotropic medium. The model is created to study the parameters influencing the electric field distribution within the tissue in order to provide reference and benchmarking data for DBS and intra-cortical electrode development.
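The model itself was built in the commercial Empire XCcel solver, but the core of any FDTD simulation is the leapfrog Yee update of the electric and magnetic fields. The 1D sketch below shows that update for a lossy, homogeneous dielectric; the permittivity and conductivity are placeholder "tissue-like" numbers, not the material data or geometry used for the DBS electrode model.

```python
import numpy as np

c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
eps_r, sigma = 50.0, 0.5            # hypothetical tissue-like permittivity and conductivity (S/m)
nx, dx = 400, 1e-3
dt = 0.5 * dx / c0                  # below the 1D Courant limit

ez = np.zeros(nx)
hy = np.zeros(nx - 1)
# Lossy-medium update coefficients for the E field.
ca = (1 - sigma * dt / (2 * eps0 * eps_r)) / (1 + sigma * dt / (2 * eps0 * eps_r))
cb = (dt / (eps0 * eps_r * dx)) / (1 + sigma * dt / (2 * eps0 * eps_r))

for n in range(1000):
    hy += dt / (mu0 * dx) * (ez[1:] - ez[:-1])               # update H from curl E
    ez[1:-1] = ca * ez[1:-1] + cb * (hy[1:] - hy[:-1])       # update E from curl H
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)           # soft Gaussian source

print("peak |Ez| on the grid:", np.abs(ez).max())
```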
Joe, Paula S; Ito, Yasushi; Shih, Alan M; Oestenstad, Riedar K; Lungu, Claudiu T
2012-01-01
This study was designed to determine whether three-dimensional (3D) laser scanning techniques could be used to collect accurate anthropometric measurements compared with traditional methods. An alternative 3D method would allow quick collection of data that could be used to change the parameters used for facepiece design, improving fit and protection for a wider variety of faces. In our study, 10 facial dimensions were collected using both the traditional calipers-and-tape method and a Konica-Minolta Vivid9i laser scanner. Scans were combined using RapidForm XOR software to create a single complete facial geometry of each subject as a triangulated surface with an associated texture image, from which measurements were obtained. A paired t-test was performed on subject means for each measurement by method. Nine subjects were used in this study: five males (one African-American and four Caucasian) and four females, displaying a range of facial dimensions. Five measurements showed significant differences (p<0.05), most of which were accounted for by subject movement or were amended by modifications to the scanning technique. Laser scanning measurements showed high precision and accuracy when compared with traditional methods. The significant differences found correspond to very small changes in the measurements and are unlikely to represent a practical difference. The laser scanning technique demonstrated reliable and quick anthropometric data collection for use in future projects on redesigning respirators.
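The statistical comparison described above is a straightforward paired t-test on per-subject means. The sketch below shows that test for a single facial dimension using scipy, with made-up caliper and laser-scan values standing in for the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 9
caliper = rng.normal(120.0, 8.0, n_subjects)           # hypothetical facial dimension in mm
laser = caliper + rng.normal(0.3, 0.8, n_subjects)     # same subjects, small noisy offset

# Paired t-test: does the laser-scan method differ systematically from calipers?
t_stat, p_value = stats.ttest_rel(caliper, laser)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}, "
      f"mean difference = {np.mean(laser - caliper):.2f} mm")
```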