Sample records for photon benchmark problems

  1. Benchmarking comparison and validation of MCNP photon interaction data

    NASA Astrophysics Data System (ADS)

    Colling, Bethany; Kodeli, I.; Lilley, S.; Packer, L. W.

    2017-09-01

    The objective of the research was to test available photoatomic data libraries for fusion relevant applications, comparing against experimental and computational neutronics benchmarks. Photon flux and heating were compared using the photon interaction data libraries (mcplib 04p, 05t, 84p and 12p). Suitable benchmark experiments (iron and water) were selected from the SINBAD database and analysed to compare experimental values with MCNP calculations using mcplib 04p, 84p and 12p. In both the computational and experimental comparisons, the majority of results with the 04p, 84p and 12p photon data libraries were within 1σ of the mean MCNP statistical uncertainty. Larger differences were observed when comparing computational results with the 05t test photon library. The Doppler broadening sampling bug in MCNP-5 is shown to be corrected for fusion relevant problems through use of the 84p photon data library. The recommended libraries for fusion neutronics are 84p (or 04p) with MCNP6 and 84p if using MCNP-5.
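
    The "within 1σ" criterion used above can be made concrete with a small helper. This is a minimal sketch with hypothetical tally values standing in for the SINBAD/MCNP results; none of the numbers below come from the study:

    ```python
    # Sketch of a library-to-library tally comparison of the kind described
    # above. All numbers are hypothetical placeholders, not SINBAD/MCNP data.

    def within_one_sigma(calc, calc_rel_err, ref, ref_rel_err=0.0):
        """True if two tallies agree within 1 sigma (combined in quadrature)."""
        sigma = ((calc * calc_rel_err) ** 2 + (ref * ref_rel_err) ** 2) ** 0.5
        return abs(calc - ref) <= sigma

    # Photon flux tallies (arbitrary units) from two data libraries.
    flux_84p, rel_err_84p = 3.42e-5, 0.02   # hypothetical MCNP tally, 2% rel. err.
    flux_04p, rel_err_04p = 3.47e-5, 0.02

    print(within_one_sigma(flux_84p, rel_err_84p, flux_04p, rel_err_04p))
    ```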

  2. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G.

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident that Shift can provide reference results for CASL benchmarking.

  3. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

    Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
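
    The simplified shield treatment being assessed (exponential attenuation plus a photon buildup factor applied to an unscattered point kernel) can be sketched as follows. The Berger buildup coefficients and source parameters are illustrative placeholders, not values from the paper:

    ```python
    import math

    def shielded_point_kernel_dose(S, E_mu, mu, t, r, a=1.0, b=0.05):
        """
        Uncollided-plus-buildup point-kernel dose rate from an isotropic point
        source behind a slab shield -- the simplified treatment discussed above.
        S    : source strength (photons/s)
        mu   : shield attenuation coefficient (1/cm)
        t    : shield thickness (cm);  r : source-detector distance (cm)
        E_mu : flux-to-dose conversion factor (hypothetical units)
        a, b : Berger buildup coefficients (placeholder values)
        """
        mfp = mu * t                                  # shield thickness in mean free paths
        buildup = 1.0 + a * mfp * math.exp(b * mfp)   # Berger buildup-factor form
        flux = S * math.exp(-mfp) / (4.0 * math.pi * r**2)
        return E_mu * buildup * flux

    print(shielded_point_kernel_dose(S=1e10, E_mu=1.8e-6, mu=0.15, t=20.0, r=500.0))
    ```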

  4. Modification and benchmarking of MCNP for low-energy tungsten spectra.

    PubMed

    Mercier, J R; Kopp, D T; McDavid, W D; Dove, S B; Lancaster, J L; Tucker, D M

    2000-12-01

    The MCNP Monte Carlo radiation transport code was modified for diagnostic medical physics applications. In particular, the modified code was thoroughly benchmarked for the production of polychromatic tungsten x-ray spectra in the 30-150 kV range. Validating the modified code for coupled electron-photon transport with benchmark spectra was supplemented with independent electron-only and photon-only transport benchmarks. Major revisions to the code included the proper treatment of characteristic K x-ray production and scoring, new impact ionization cross sections, and new bremsstrahlung cross sections. Minor revisions included updated photon cross sections, electron-electron bremsstrahlung production, and K x-ray yield. The modified MCNP code is benchmarked to electron backscatter factors, x-ray spectra production, and primary and scatter photon transport.
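
    For orientation, a polychromatic tungsten spectrum of the kind being benchmarked can be roughed out with Kramers' thin-target approximation plus exponential filtration. This is a textbook stand-in, not the modified MCNP physics; it omits the characteristic K lines the paper treats carefully, and the filtration and attenuation values are assumptions:

    ```python
    import numpy as np

    # Kramers' thin-target bremsstrahlung approximation with exponential
    # filtration -- a crude stand-in for the benchmarked spectra. Characteristic
    # K x-rays are omitted, and the attenuation model is a toy placeholder.

    Z_W = 74                                  # tungsten atomic number
    kVp = 100.0                               # tube potential (kV)
    E = np.linspace(10.0, kVp, 250)           # photon energies (keV)
    unfiltered = Z_W * (kVp - E) / E          # Kramers: I(E) proportional to Z (E0 - E)/E
    mu_al = 0.05 * (30.0 / E) ** 3            # toy energy dependence of mu (1/mm)
    filtered = unfiltered * np.exp(-mu_al * 2.5)   # 2.5 mm Al inherent filtration
    filtered /= filtered.sum()                # normalized fluence spectrum
    print(f"mean energy ~ {np.sum(E * filtered):.1f} keV")
    ```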

  5. TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer

    NASA Astrophysics Data System (ADS)

    Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.

    2017-07-01

    Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.
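
    The core Monte Carlo machinery exercised by such a benchmark can be illustrated with a minimal photon random walk through a uniform slab. The sketch below assumes isotropic scattering for brevity, whereas the benchmark prescribes anisotropic dust phase functions:

    ```python
    import random, math

    def slab_escape_fraction(tau, albedo, n=200_000, seed=1):
        """Monte Carlo fraction of normally incident photons that exit the far
        face of a uniform slab of total optical depth `tau`. Scattering is taken
        isotropic here for brevity (dust phase functions are anisotropic)."""
        rng = random.Random(seed)
        transmitted = 0
        for _ in range(n):
            z, mu = 0.0, 1.0                          # optical depth, direction cosine
            while True:
                z += mu * (-math.log(rng.random()))   # sample distance to interaction
                if z >= tau:                          # exited the far face
                    transmitted += 1
                    break
                if z < 0.0:                           # escaped the illuminated face
                    break
                if rng.random() > albedo:             # absorbed
                    break
                mu = 2.0 * rng.random() - 1.0         # isotropic re-emission direction
        return transmitted / n

    for tau in (0.1, 1.0, 10.0):
        print(tau, slab_escape_fraction(tau, albedo=0.6))
    ```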

  6. A study of the radiative transfer equation using a spherical harmonics-nodal collocation method

    NASA Astrophysics Data System (ADS)

    Capilla, M. T.; Talavera, C. F.; Ginestar, D.; Verdú, G.

    2017-03-01

    Optical tomography has found many medical applications that require knowing how photons interact with different tissues. The majority of photon transport simulations are done using the diffusion approximation, but this approximation has limited validity when the optical properties of the different tissues present large gradients, when structures near the photon source are studied or when anisotropic scattering has to be taken into account. As an alternative to the diffusion model, the PL equations for the radiative transfer problem are studied. These equations are discretized in a rectangular mesh using a nodal collocation method. The performance of this model is studied by solving different 1D and 2D benchmark problems of light propagation in tissue having media with isotropic and anisotropic scattering.
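
    As a point of reference for the diffusion approximation that the PL model improves upon, a minimal 1D steady-state diffusion solve looks like the following. The optical coefficients are generic illustrative values, not tissue data from the paper:

    ```python
    import numpy as np

    # Minimal 1D diffusion-approximation solve, -D phi'' + mu_a phi = S,
    # the baseline model the PL equations above are meant to improve upon.

    mu_a, mu_s_prime = 0.1, 10.0              # absorption, reduced scattering (1/cm)
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))     # diffusion coefficient
    L, n = 4.0, 401                           # slab width (cm), grid points
    h = L / (n - 1)
    S = np.zeros(n); S[n // 2] = 1.0 / h      # unit source at the centre

    A = np.zeros((n, n))
    for i in range(1, n - 1):                 # interior finite-difference stencil
        A[i, i - 1] = A[i, i + 1] = -D / h**2
        A[i, i] = 2.0 * D / h**2 + mu_a
    A[0, 0] = A[-1, -1] = 1.0                 # phi = 0 on the boundary
    phi = np.linalg.solve(A, S)
    print(phi[n // 2], phi[3 * n // 4])
    ```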

  7. AN ASSESSMENT OF MCNP WEIGHT WINDOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. S. HENDRICKS; C. N. CULBERTSON

    2000-01-01

    The weight window variance reduction method in the general-purpose Monte Carlo N-Particle radiation transport code MCNP™ has recently been rewritten. In particular, it is now possible to generate weight window importance functions on a superimposed mesh, eliminating the need to subdivide geometries for variance reduction purposes. Our assessment addresses the following questions: (1) Does the new MCNP4C treatment utilize weight windows as well as the former MCNP4B treatment? (2) Does the new MCNP4C weight window generator generate importance functions as well as MCNP4B? (3) How do superimposed mesh weight windows compare to cell-based weight windows? (4) What are the shortcomings of the new MCNP4C weight window generator? Our assessment was carried out with five neutron and photon shielding problems chosen for their demanding variance reduction requirements. The problems were an oil well logging problem, the Oak Ridge fusion shielding benchmark problem, a photon skyshine problem, an air-over-ground problem, and a sample problem for variance reduction.
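
    The weight window game itself is compact enough to sketch. The following follows the standard split/roulette outline; the survival-weight convention is one common simplified choice, not necessarily MCNP's exact rule:

    ```python
    import random

    def apply_weight_window(weight, w_low, w_up, rng=random):
        """Split/roulette a particle against a weight window, returning the list
        of surviving weights. Mirrors the standard rules in outline; the
        survival-weight convention is a simplified common choice."""
        if weight > w_up:                         # too heavy: split into m tracks
            m = min(int(weight / w_up) + 1, 10)
            return [weight / m] * m
        if weight < w_low:                        # too light: Russian roulette
            w_survive = 0.5 * (w_low + w_up)      # survivors restored into window
            if rng.random() < weight / w_survive: # survival preserves expected weight
                return [w_survive]
            return []                             # killed
        return [weight]                           # inside the window: unchanged

    print(apply_weight_window(5.0, 0.5, 2.0))
    print(apply_weight_window(0.1, 0.5, 2.0))
    ```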

  8. Numerical benchmarking of a Coarse-Mesh Transport (COMET) Method for medical physics applications

    NASA Astrophysics Data System (ADS)

    Blackburn, Megan Satterfield

    2009-12-01

    Radiation therapy has become a very important method for treating cancer patients. Thus, it is extremely important to accurately determine the location of energy deposition during these treatments, maximizing dose to the tumor region and minimizing it to healthy tissue. A Coarse-Mesh Transport Method (COMET) has been developed at the Georgia Institute of Technology in the Computational Reactor and Medical Physics Group and has been used very successfully with neutron transport to analyze whole-core criticality. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed-source problems. For each unique local problem that exists, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local problems. This method has now been extended to the transport of photons and electrons for use in medical physics problems to determine energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for testing in order to evaluate the COMET code and determine its strengths and weaknesses for these medical physics applications. For response function calculations, Legendre polynomial expansions are necessary in space and in the polar and azimuthal angles. An initial sensitivity study was done to determine the best orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Each of these benchmarks was decomposed into 1 cm × 1 cm and 0.5 cm × 0.5 cm coarse meshes. Three more clinically relevant problems were developed from patient CT scans. These benchmarks modeled a lung patient, a prostate patient, and a beam re-entry situation. As before, the problems were divided into 1 cm × 1 cm, 0.5 cm × 0.5 cm, and 0.25 cm × 0.25 cm coarse mesh cases. Multiple beam energies were also tested for each case. The COMET solutions for each case were compared to a reference solution obtained from pure Monte Carlo calculations with EGSnrc. When comparing the COMET results to the reference cases, a pattern of differences appeared in each phantom case. It was found that better results were obtained for lower energy incident photon beams as well as for larger mesh sizes. Possible changes may need to be made to the expansion orders used for energy and angle to better model high-energy secondary electrons. Heterogeneity also did not pose a problem for the COMET methodology: heterogeneous results were found in an amount of time comparable to the homogeneous water phantom. The COMET results were typically found in minutes to hours of computational time, whereas the reference cases typically required hundreds or thousands of hours. A second sensitivity study was also performed on a more stringent problem and with smaller coarse meshes. Previously, the same expansion order was used for each incident photon beam energy so better comparisons could be made. From this second study, it was found that it is optimal to have different expansion orders based on the incident beam energy. Recommendations for future work with this method include more testing on higher expansion orders or possible code modification to better handle secondary electrons. The method also needs to handle more clinically relevant beam descriptions with an energy and angular distribution associated with them.
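
    The linear-superposition idea at the heart of COMET can be illustrated with a toy 1D response-matrix sweep, where each coarse cell is reduced to precomputed reflection/transmission responses and interface partial currents are iterated to convergence. The response values below are illustrative placeholders, not COMET response functions:

    ```python
    # Toy 1D illustration of the response-function idea: each coarse cell is
    # characterized by precomputed (reflection, transmission) responses, and
    # the global solution is assembled by iterating on interface partial
    # currents. Numbers are illustrative, not COMET data.

    def sweep(cells, j_in_left, iters=500):
        n = len(cells)
        jr = [0.0] * (n + 1)   # rightward partial currents at interfaces 0..n
        jl = [0.0] * (n + 1)   # leftward partial currents (vacuum at right)
        jr[0] = j_in_left      # fixed incident current on the left face
        for _ in range(iters):
            for i, (r, t) in enumerate(cells):        # response of cell i
                jr[i + 1] = t * jr[i] + r * jl[i + 1]
                jl[i] = r * jr[i] + t * jl[i + 1]
        return jr, jl

    cells = [(0.2, 0.7)] * 4          # (reflection, transmission) per coarse cell
    jr, jl = sweep(cells, j_in_left=1.0)
    print("transmitted:", jr[-1], "reflected:", jl[0])
    ```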

  9. A Proposed Benchmark Problem for Scatter Calculations in Radiographic Modelling

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Bellon, C.; Schumm, A.; Tabary, J.; Duvauchelle, Ph.

    2009-03-01

    Code validation is a permanent concern in computer modelling, and has been addressed repeatedly in eddy current and ultrasonic modelling. A good benchmark problem is sufficiently simple to be handled by various codes without strong requirements on geometry representation capabilities; focuses on a few or even a single aspect of the problem at hand, to facilitate interpretation and to avoid compound errors compensating one another; yields a quantitative result; and is experimentally accessible. In this paper we attempt to address code validation for one aspect of radiographic modelling, the prediction of scattered radiation. Many NDT applications cannot neglect scattered radiation, and the scatter calculation is thus important for faithfully simulating the inspection situation. Our benchmark problem covers the wall thickness range of 10 to 50 mm for single-wall inspections, with energies ranging from 100 to 500 keV in the first stage, and up to 1 MeV with wall thicknesses up to 70 mm in the extended stage. A simple plate geometry is sufficient for this purpose, and the scatter data are compared at the photon level, without a film model, which allows for comparisons with reference codes like MCNP. We compare results of three Monte Carlo codes (McRay, Sindbad and Moderato) as well as an analytical first-order scattering code (VXI), and confront them with results obtained with MCNP. The comparison with an analytical scatter model provides insight into the application domain where this kind of approach can successfully replace Monte Carlo calculations.

  10. Neutralino pair production at the photon-photon collider for the τ̃-coannihilation scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonmez, Nasuf, E-mail: nsonmez@cern.ch

    Supersymmetry (SUSY) is a theory which gives an explanation for the strong and electroweak interactions from the grand unification scale down to the weak scale. The search for supersymmetric particles continues at full speed at the LHC, so far without success. The main tasks at the ILC are complementing the LHC results and searching for new physics. In this study, neutralino pair production via photon-photon collision is studied for the τ̃-coannihilation scenario in the context of the MSSM at the ILC. In the calculation, all the possible one-loop diagrams are taken into account for the photon-photon interaction. We present the production cross section and distributions of various observables for the lightest and next-to-lightest neutralino pairs for benchmark models which were specifically proposed in the light of LHC8 data analysis; employing these benchmark models for neutralino pair production could show the potential of the ILC concerning dark matter searches in supersymmetry.

  11. Photonic reservoir computing: a new approach to optical information processing

    NASA Astrophysics Data System (ADS)

    Vandoorne, Kristof; Fiers, Martin; Verstraeten, David; Schrauwen, Benjamin; Dambre, Joni; Bienstman, Peter

    2010-06-01

    Despite ever increasing computational power, recognition and classification problems remain challenging to solve. Recently, advances have been made by the introduction of the new concept of reservoir computing. This is a methodology coming from the field of machine learning and neural networks that has been successfully used in several pattern classification problems, like speech and image recognition. Thus far, most implementations have been in software, limiting their speed and power efficiency. Photonics could be an excellent platform for a hardware implementation of this concept because of its inherent parallelism and unique nonlinear behaviour. Moreover, a photonic implementation offers the promise of massively parallel information processing with low power and high speed. We propose using a network of coupled Semiconductor Optical Amplifiers (SOA) and show in simulation that it could be used as a reservoir by comparing it to conventional software implementations using a benchmark speech recognition task. In spite of the differences with classical reservoir models, the performance of our photonic reservoir is comparable to that of conventional implementations and sometimes slightly better. As our implementation uses coherent light for information processing, we find that phase tuning is crucial to obtain high performance. In parallel we investigate the use of a network of photonic crystal cavities. The coupled mode theory (CMT) is used to investigate these resonators. A new framework is designed to model networks of resonators and SOAs. The same network topologies are used, but feedback is added to control the internal dynamics of the system. By adjusting the readout weights of the network in a controlled manner, we can generate arbitrary periodic patterns.
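
    A conventional software reservoir of the kind the photonic implementation is benchmarked against is a small echo state network with a trained linear readout. The sketch below uses a toy delay task in place of the speech benchmark; all sizes and parameters are arbitrary:

    ```python
    import numpy as np

    # Software echo-state-network baseline of the kind photonic reservoirs are
    # compared against. Task and sizes are toy stand-ins, not the paper's setup.

    rng = np.random.default_rng(0)
    n_res, n_in = 100, 1
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.normal(0, 1, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

    def run_reservoir(u):
        x = np.zeros(n_res); states = []
        for u_t in u:                                 # leaky-free tanh reservoir
            x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Train a linear readout (ridge regression) to recall the input 10 steps back.
    u = rng.uniform(-1, 1, 500)
    X, y = run_reservoir(u)[10:], u[:-10]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
    print("train MSE:", np.mean((X @ W_out - y) ** 2))
    ```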

  12. Analysis of Photonic Networks for a Chip Multiprocessor Using Scientific Applications

    DTIC Science & Technology

    2009-05-01

    Analysis of Photonic Networks for a Chip Multiprocessor Using Scientific Applications Gilbert Hendry†, Shoaib Kamil‡, Aleksandr Biberman†, Johnnie... electronic networks-on-chip warrants investigating real application traces on functionally comparable photonic and electronic network designs. We... network can achieve 75× improvement in energy efficiency for synthetic benchmarks and up to 37× improvement for real scientific applications

  13. DPM, a fast, accurate Monte Carlo code optimized for photon and electron radiotherapy treatment planning dose calculations

    NASA Astrophysics Data System (ADS)

    Sempau, Josep; Wilderman, Scott J.; Bielajew, Alex F.

    2000-08-01

    A new Monte Carlo (MC) algorithm, the 'dose planning method' (DPM), and its associated computer program for simulating the transport of electrons and photons in radiotherapy class problems employing primary electron beams, is presented. DPM is intended to be a high-accuracy MC alternative to the current generation of treatment planning codes which rely on analytical algorithms based on an approximate solution of the photon/electron Boltzmann transport equation. For primary electron beams, DPM is capable of computing 3D dose distributions (in 1 mm³ voxels) which agree to within 1% in dose maximum with widely used and exhaustively benchmarked general-purpose public-domain MC codes in only a fraction of the CPU time. A representative problem, the simulation of 1 million 10 MeV electrons impinging upon a water phantom of 128³ voxels of 1 mm on a side, can be performed by DPM in roughly 3 min on a modern desktop workstation. DPM achieves this performance by employing transport mechanics and electron multiple scattering distribution functions which have been derived to permit long transport steps (of the order of 5 mm) which can cross heterogeneity boundaries. The underlying algorithm is a 'mixed' class simulation scheme, with differential cross sections for hard inelastic collisions and bremsstrahlung events described in an approximate manner to simplify their sampling. The continuous energy loss approximation is employed for energy losses below some predefined thresholds, and photon transport (including Compton, photoelectric absorption and pair production) is simulated in an analogue manner. The δ-scattering method (Woodcock tracking) is adopted to minimize the computational costs of transporting photons across voxels.
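
    The δ-scattering (Woodcock) step mentioned above is simple to sketch: sample flight lengths against a majorant cross section and accept real collisions with probability μ(x)/μ_max, so photons never need to stop at voxel boundaries. The geometry and coefficients below are illustrative, not DPM's:

    ```python
    import random, math

    def woodcock_flight(position, mu_of, mu_max, rng=random):
        """One photon flight using delta-scattering (Woodcock) tracking: sample
        with the majorant cross section mu_max, then accept a real collision
        with probability mu(x)/mu_max. `mu_of` maps position to the local
        attenuation coefficient (illustrative 1D geometry)."""
        while True:
            position += -math.log(rng.random()) / mu_max   # flight in majorant medium
            if rng.random() < mu_of(position) / mu_max:    # real collision?
                return position                            # else virtual: keep flying

    # Two-region toy geometry: water-like then lung-like attenuation (1/cm).
    mu_of = lambda x: 0.07 if x < 5.0 else 0.02
    mu_max = 0.07                                          # majorant over the geometry
    print([round(woodcock_flight(0.0, mu_of, mu_max), 2) for _ in range(5)])
    ```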

  14. A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger

    2017-09-01

    Radiation protection and shielding studies are often based on the extensive use of 3D Monte Carlo neutron and photon transport simulations. The ITER organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', but for the Monte Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and considering the important neutron flux attenuation, ranging from 10¹⁴ down to 10⁸ n·cm⁻²·s⁻¹. Such code-to-code comparison provides independent validation of the Monte Carlo simulations, improving the confidence in neutronic results.

  15. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.

  16. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
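
    The spinodal decomposition physics underlying the first problem can be exercised with a minimal semi-implicit spectral Cahn-Hilliard step. The parameters below are generic and do not follow the CHiMaD/NIST benchmark specification:

    ```python
    import numpy as np

    # Minimal semi-implicit spectral Cahn-Hilliard step for spinodal
    # decomposition. Parameters are generic illustrations, not the benchmark's.

    N, dx, dt, kappa, M = 128, 1.0, 0.1, 1.0, 1.0
    rng = np.random.default_rng(0)
    c = 0.5 + 0.01 * rng.standard_normal((N, N))      # composition inside the spinodal

    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    k2 = k[:, None] ** 2 + k[None, :] ** 2            # |k|^2 on the grid

    for _ in range(2000):
        mu = c**3 - c                                 # df/dc of f = (c^2 - 1)^2 / 4
        c_hat = np.fft.fft2(c)
        mu_hat = np.fft.fft2(mu)
        # stiff 4th-order term treated implicitly, chemistry explicitly
        c_hat = (c_hat - dt * M * k2 * mu_hat) / (1 + dt * M * kappa * k2**2)
        c = np.fft.ifft2(c_hat).real

    print(c.min(), c.max())                           # phases approach c = -1 and +1
    ```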

  17. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Finlay, J; Mesina, C

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6 – 15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with larger errors (up to 13%) observed in the buildup regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.

  18. Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set

    NASA Astrophysics Data System (ADS)

    Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.

    2017-05-01

    A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
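
    The governing equations TUNA-RP discretizes can be sketched in one dimension with a bare-bones Lax-Friedrichs step. This flat-bottom toy omits bathymetry and the wet-dry moving boundary algorithm that the real model implements:

    ```python
    import numpy as np

    # Bare-bones 1D shallow water step (Lax-Friedrichs), the PDE family TUNA-RP
    # solves in 2D. Flat bottom, no wet-dry treatment; values are illustrative.

    g, N, dx = 9.81, 200, 50.0                     # gravity, cells, spacing (m)
    h = np.full(N, 10.0); h[:20] += 1.0            # still depth with an initial hump
    hu = np.zeros(N)                               # momentum h*u

    def flux(h, hu):
        u = hu / h
        return np.array([hu, hu * u + 0.5 * g * h**2])

    for _ in range(200):
        dt = 0.4 * dx / np.max(np.abs(hu / h) + np.sqrt(g * h))   # CFL condition
        U = np.array([h, hu])
        F = flux(h, hu)
        # Lax-Friedrichs update on interior cells
        U[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) - dt / (2 * dx) * (F[:, 2:] - F[:, :-2])
        U[:, 0], U[:, -1] = U[:, 1], U[:, -2]      # simple outflow boundaries
        h, hu = U

    print("max surface elevation:", h.max() - 10.0)
    ```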

  19. MARC calculations for the second WIPP structural benchmark problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, H.S.

    1981-05-01

    This report describes calculations made with the MARC structural finite element code for the second WIPP structural benchmark problem. Specific aspects of problem implementation such as element choice, slip line modeling, creep law implementation, and thermal-mechanical coupling are discussed in detail. Also included are the computational results specified in the benchmark problem formulation.

  20. Neuromorphic photonic networks using silicon photonic weight banks.

    PubMed

    Tait, Alexander N; de Lima, Thomas Ferreira; Zhou, Ellen; Wu, Allie X; Nahmias, Mitchell A; Shastri, Bhavin J; Prucnal, Paul R

    2017-08-07

    Photonic systems for high-performance information processing have attracted renewed interest. Neuromorphic silicon photonics has the potential to integrate processing functions that vastly exceed the capabilities of electronics. We report first observations of a recurrent silicon photonic neural network, in which connections are configured by microring weight banks. A mathematical isomorphism between the silicon photonic circuit and a continuous neural network model is demonstrated through dynamical bifurcation analysis. Exploiting this isomorphism, a simulated 24-node silicon photonic neural network is programmed using a "neural compiler" to solve a differential system emulation task. A 294-fold acceleration against a conventional benchmark is predicted. We also propose and derive a power consumption analysis for modulator-class neurons that, as opposed to laser-class neurons, are compatible with silicon photonic platforms. At increased scale, neuromorphic silicon photonics could access new regimes of ultrafast information processing for radio, control, and scientific computing.

  1. Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.

    PubMed

    Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A

    2004-02-07

    The expanding clinical use of low-energy photon emitting ¹²⁵I and ¹⁰³Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from the EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a cylinder 30 cm in diameter and 20 cm in length. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst ±5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) of PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately ±2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.

  2. Benchmark problems and solutions

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1995-01-01

    The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA). The deciding factor in limiting the number of categories to six was the amount of effort needed to solve these problems. For reference purposes, the benchmark problems are provided here. They are followed by the exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.

  3. Statistical inference with quantum measurements: methodologies for nitrogen vacancy centers in diamond

    NASA Astrophysics Data System (ADS)

    Hincks, Ian; Granade, Christopher; Cory, David G.

    2018-01-01

    The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
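
    A grid-based version of the Bayes estimator promoted above fits in a few lines. The model below (per-shot counts Poisson-distributed with a rate interpolating between bright- and dark-state rates plus background) follows the spirit of the abstract, but all rates and counts are made-up illustrations:

    ```python
    import numpy as np
    from scipy.stats import poisson

    # Grid-based Bayes estimator for a "biased coin obstructed by Poisson rates"
    # model in the spirit of the abstract. Rates and counts are hypothetical.

    alpha_dark, alpha_bright, bg = 30.0, 80.0, 2.0   # photons/shot: dim, bright, background
    p_grid = np.linspace(0, 1, 1001)                 # coin bias = P(bright spin state)
    rate = p_grid * alpha_bright + (1 - p_grid) * alpha_dark + bg

    counts = np.array([61, 72, 55, 68, 59])          # hypothetical per-shot counts
    log_post = np.zeros_like(p_grid)                 # flat prior on p
    for c in counts:
        log_post += poisson.logpmf(c, rate)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    p_bayes = np.sum(p_grid * post)                  # posterior-mean (Bayes) estimate
    p_std = np.sqrt(np.sum((p_grid - p_bayes) ** 2 * post))
    print(f"p = {p_bayes:.3f} +/- {p_std:.3f}")
    ```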

  4. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  5. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    NASA Astrophysics Data System (ADS)

    Burns, Kimberly Ann

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of coupled Monte Carlo-deterministic methods for the simulation of neutron-induced photons for high-resolution gamma-ray spectroscopy applications. RAdiation Detection Scenario Analysis Toolbox (RADSAT), a code which couples deterministic and Monte Carlo transport to perform radiation detection scenario analysis in three dimensions [1], was used as the building block for the methods derived in this work. RADSAT was capable of performing coupled deterministic-Monte Carlo simulations for gamma-only and neutron-only problems. The purpose of this work was to develop the methodology necessary to perform coupled neutron-photon calculations and add this capability to RADSAT. Performing coupled neutron-photon calculations requires four main steps: the deterministic neutron transport calculation, the neutron-induced photon spectrum calculation, the deterministic photon transport calculation, and the Monte Carlo detector response calculation. The necessary requirements for each of these steps were determined. A major challenge in utilizing multigroup deterministic transport methods for neutron-photon problems was maintaining the discrete neutron-induced photon signatures throughout the simulation. Existing coupled neutron-photon cross-section libraries and the methods used to produce neutron-induced photons were unsuitable for high-resolution gamma-ray spectroscopy applications. Central to this work was the development of a method for generating multigroup neutron-photon cross-sections in a way that separates the discrete and continuum photon emissions so the neutron-induced photon signatures were preserved. The RADSAT-NG cross-section library was developed as a specialized multigroup neutron-photon cross-section set for the simulation of high-resolution gamma-ray spectroscopy applications. The methodology and cross sections were tested using code-to-code comparison with MCNP5 [2] and NJOY [3]. A simple benchmark geometry was used for all cases compared with MCNP. The geometry consists of a cubical sample with a ²⁵²Cf neutron source on one side and a HPGe gamma-ray spectrometer on the opposing side. Different materials were examined in the cubical sample: polyethylene (C₂H₄), P, N, O, and Fe. The cross sections for each of the materials were compared to cross sections collapsed using NJOY. Comparisons of the volume-averaged neutron flux within the sample, volume-averaged photon flux within the detector, and high-purity gamma-ray spectrometer response (only for polyethylene) were completed using RADSAT and MCNP. The code-to-code comparisons show promising results for the coupled Monte Carlo-deterministic method.
The RADSAT-NG cross-section production method showed good agreement with NJOY for all materials considered, although some additional work is needed in the resonance region and in the first and last energy bins. Some cross section discrepancies existed in the lowest and highest energy bins, but the overall shape and magnitude of the two methods agreed. For the volume-averaged photon flux within the detector, typically the five most intense lines agree to within approximately 5% of the MCNP-calculated flux for all of the materials considered. The agreement in the code-to-code comparison cases demonstrates a proof-of-concept of the method for use in RADSAT for coupled neutron-photon problems in high-resolution gamma-ray spectroscopy applications. One of the primary motivators for using the coupled method over the pure Monte Carlo method is the potential for significantly lower computational times. For the code-to-code comparison cases, the run times for RADSAT were approximately 25 to 500 times shorter than for MCNP, as shown in Table 1. This assumed a 40 mCi ²⁵²Cf neutron source and 600 seconds of "real-world" measurement time. The only variance reduction technique implemented in the MCNP calculation was forward biasing of the source toward the sample target. Improved MCNP runtimes could be achieved with the addition of more advanced variance reduction techniques.
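
    The basic operation behind building a collapsed library such as RADSAT-NG is a flux-weighted group collapse, which preserves reaction rates under the chosen weighting spectrum. A minimal sketch with placeholder fine-group data (not evaluated nuclear data):

    ```python
    import numpy as np

    # Flux-weighted group collapse, the basic operation behind building a
    # multigroup library. Fine-group values are random placeholders.

    fine_sigma = np.array([4.1, 3.8, 3.5, 3.0, 2.6, 2.2])   # barns, 6 fine groups
    fine_flux  = np.array([0.5, 0.9, 1.4, 1.2, 0.8, 0.3])   # weighting spectrum
    coarse_map = [(0, 3), (3, 6)]                           # two coarse groups

    coarse_sigma = [
        np.sum(fine_sigma[a:b] * fine_flux[a:b]) / np.sum(fine_flux[a:b])
        for a, b in coarse_map
    ]
    print(coarse_sigma)   # reaction-rate-preserving coarse-group cross sections
    ```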

  6. Adiabatic Quantum Computing via the Rydberg Blockade

    NASA Astrophysics Data System (ADS)

    Keating, Tyler; Goyal, Krittika; Deutsch, Ivan

    2012-06-01

    We study an architecture for implementing adiabatic quantum computation with trapped neutral atoms. Ground state atoms are dressed by laser fields in a manner conditional on the Rydberg blockade mechanism, thereby providing the requisite entangling interactions. As a benchmark we study the performance of a Quadratic Unconstrained Binary Optimization (QUBO) problem whose solution is found in the ground state spin configuration of an Ising-like model. We model a realistic architecture, including the effects of magnetic level structure, with qubits encoded into the clock states of ¹³³Cs, effective B-fields implemented through microwaves and light shifts, and atom-atom coupling achieved by excitation to a high-lying Rydberg level. Including the fundamental effects of photon scattering we find a high fidelity for the two-qubit implementation.
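
    The classical reference for such a benchmark is the brute-force QUBO ground state, which is feasible at the few-qubit scale studied here. The Q matrix below is an arbitrary example, not the paper's problem instance:

    ```python
    import itertools
    import numpy as np

    # QUBO ground-state search by exhaustive enumeration -- the classical
    # reference for a small adiabatic implementation. Q is an arbitrary example.

    Q = np.array([[-1.0, 2.0, 0.0],
                  [ 0.0, -1.0, 2.0],
                  [ 0.0, 0.0, -1.0]])   # minimize x^T Q x over x in {0,1}^n

    best = min(itertools.product([0, 1], repeat=3),
               key=lambda x: np.array(x) @ Q @ np.array(x))
    print("ground state:", best)

    # The same problem maps to an Ising model via x = (1 + s)/2 with spins
    # s = +/-1, which is the form realized by the Rydberg-dressed interactions.
    ```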

  7. Optimization in Radiation Therapy: Applications in Brachytherapy and Intensity Modulated Radiation Therapy

    NASA Astrophysics Data System (ADS)

    McGeachy, Philip David

    Over 50% of cancer patients require radiation therapy (RT). RT is an optimization problem requiring maximization of the radiation damage to the tumor while minimizing the harm to the healthy tissues. This dissertation focuses on two main RT optimization problems: 1) brachytherapy and 2) intensity modulated radiation therapy (IMRT). The brachytherapy research involved solving a non-convex optimization problem by creating an open-source genetic algorithm optimizer to determine the optimal radioactive seed distribution for a given set of patient volumes and constraints, both dosimetric- and implant-based. The optimizer was tested for a set of 45 prostate brachytherapy patients. While all solutions met the clinical standards, they also benchmarked favorably with those generated by a standard commercial solver. Compared to its compatriot, the salient features of the generated solutions were: slightly reduced prostate coverage, lower dose to the urethra and rectum, and a smaller number of needles required for an implant. Historically, IMRT requires modulation of fluence while keeping the photon beam energy fixed. The IMRT-related investigation in this thesis aimed at broadening the solution space by varying photon energy. The problem therefore involved simultaneous optimization of photon beamlet energy and fluence, denoted by XMRT. Formulating the problem as convex, linear programming was applied to obtain solutions for optimal energy-dependent fluences, while achieving all clinical objectives and constraints imposed. Dosimetric advantages of XMRT over single-energy IMRT in the improved sparing of organs at risk (OARs) was demonstrated in simplified phantom studies. The XMRT algorithm was improved to include clinical dose-volume constraints and clinical studies for prostate and head and neck cancer patients were investigated. Compared to IMRT, XMRT provided improved dosimetric benefit in the prostate case, particularly within intermediate- to low-dose regions (≤ 40 Gy) for OARs. For head and neck cases, XMRT solutions showed no significant disadvantage or advantage over IMRT. The deliverability concerns for the fluence maps generated from XMRT were addressed by incorporating smoothing constraints during the optimization and through successful generation of treatment machine files. Further research is needed to explore the full potential of the XMRT approach to RT.
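
    The convex formulation described for XMRT can be miniaturized to a two-voxel linear program. The dose-influence numbers below are invented for illustration; the real problem has many voxels and beamlets plus dose-volume constraints:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Tiny LP in the XMRT spirit: choose energy-dependent beamlet fluences
    # minimizing OAR dose subject to target coverage. All numbers are invented.

    # columns: (beamlet1@6MV, beamlet1@18MV, beamlet2@6MV, beamlet2@18MV)
    D_target = np.array([[0.7, 0.9, 0.6, 0.8]])     # dose per unit fluence, target voxel
    D_oar    = np.array([[0.4, 0.3, 0.5, 0.35]])    # dose per unit fluence, OAR voxel

    res = linprog(c=D_oar.ravel(),                  # minimize OAR dose
                  A_ub=-D_target, b_ub=[-60.0],     # target voxel receives >= 60 Gy
                  bounds=[(0, None)] * 4)           # nonnegative fluences
    print(res.x, "OAR dose:", res.fun)
    ```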

  8. Benchmark dataset for undirected and Mixed Capacitated Arc Routing Problems under Time restrictions with Intermediate Facilities.

    PubMed

    Willemse, Elias J; Joubert, Johan W

    2016-09-01

    In this article we present benchmark datasets for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities (MCARPTIF). The problem is a generalisation of the Capacitated Arc Routing Problem (CARP), and closely represents waste collection routing. Four different test sets are presented, each consisting of multiple instance files, which can be used to benchmark different solution approaches for the MCARPTIF. An in-depth description of the datasets can be found in "Constructive heuristics for the Mixed Capacity Arc Routing Problem under Time Restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [2] and "Splitting procedures for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, in press) [4]. The datasets are publicly available from "Library of benchmark test sets for variants of the Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [3].

  9. Holographic photon production in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Iatrakis, Ioannis; Kiritsis, Elias; Shen, Chun; Yang, Di-Lun

    2017-04-01

    The thermal-photon emission from strongly coupled gauge theories at finite temperature is calculated using holographic models for QCD in the Veneziano limit (V-QCD). The emission rates are then embedded in hydrodynamic simulations combined with prompt photons from hard scattering and the thermal photons from hadron gas to analyze the spectra and anisotropic flow of direct photons at RHIC and LHC. The results from different sources responsible for the thermal photons in QGP including the weakly coupled QGP (wQGP) from perturbative calculations, strongly coupled N = 4 super Yang-Mills (SYM) plasma (as a benchmark for reference), and Gubser's phenomenological holographic model are then compared. It is found that the direct-photon spectra are enhanced in the strongly coupled scenario compared with the ones in the wQGP, especially at high momenta. Moreover, both the elliptic flow and triangular flow of direct photons are amplified at high momenta for V-QCD and the SYM plasma. The results are further compared with experimental observations.

  10. Benchmark gamma-ray skyshine experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nason, R.R.; Shultis, J.K.; Faw, R.E.

    1982-01-01

    A benchmark gamma-ray skyshine experiment is described in which ⁶⁰Co sources were either collimated into an upward 150-deg conical beam or shielded vertically by two different thicknesses of concrete. A NaI(Tl) spectrometer and a high-pressure ion chamber were used to measure, respectively, the energy spectrum and the 4π exposure rate of the air-reflected gamma photons up to 700 m from the source. Analyses of the data and comparisons to DOT discrete ordinates calculations are presented.

  11. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approximately 10³ cores. Profiling of the benchmark problems indicates that the most substantial computational time is being spent on particle-particle force calculations, drag force calculations and interpolating between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments to the code as well as a preliminary indicator of where to best focus performance optimizations.

  12. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vongehr, Sascha, E-mail: vongehr@usc.edu

    There are increasing suggestions for computer simulations of quantum statistics which try to violate Bell-type inequalities via classical, common-cause correlations. The Clauser–Horne–Shimony–Holt (CHSH) inequality is very robust. However, we argue that with the Einstein–Podolsky–Rosen setup, the CHSH is inferior to the Bell inequality, although and because the latter must assume anti-correlation of entangled photon singlet states. We simulate how often quantum behavior violates both inequalities, depending on the number of photons. Violating Bell 99% of the time is argued to be an ideal benchmark. We present hidden variables that violate the Bell and CHSH inequalities with 50% probability, and ones which violate Bell 85% of the time when missing 13% anti-correlation. We discuss how to present the quantum correlations to a wide audience and conclude that, when defending against claims of hidden classicality, one should demand numerical simulations and insist on anti-correlation and the full amount of Bell violation. -- Highlights: •The widely assumed superiority of the CHSH fails in the EPR problem. •We simulate the behavior of Bell-type inequalities depending on the number of photons. •The core of Bell's theorem in the EPR setup is introduced in a simple way understandable to a wide audience. •We present hidden variables that violate both inequalities with 50% probability. •Algorithms have been supplied in the form of Mathematica programs.
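
    The quantum target such simulations must reproduce is easy to state: for singlet correlations E(a, b) = -cos(a - b), the optimal CHSH combination reaches Tsirelson's bound of 2√2, beyond the classical bound of 2. A quick check:

    ```python
    import math

    # Quantum CHSH value for the singlet state, E(a, b) = -cos(a - b), versus
    # the classical bound of 2 -- the quantity hidden-variable simulations of
    # the kind described above attempt to reproduce.

    E = lambda a, b: -math.cos(a - b)
    a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
    print(abs(S))   # 2*sqrt(2) ~ 2.828 > 2 (Tsirelson's bound)
    ```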

  14. Benchmarking the MCNP Monte Carlo code with a photon skyshine experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsher, R.H.; Hsu, Hsiao Hua; Harvey, W.F.

    1993-07-01

    The MCNP Monte Carlo transport code is used by the Los Alamos National Laboratory Health and Safety Division for a broad spectrum of radiation shielding calculations. One such application involves the determination of skyshine dose for a variety of photon sources. To verify the accuracy of the code, it was benchmarked with the Kansas State Univ. (KSU) photon skyshine experiment of 1977. The KSU experiment for the unshielded source geometry was simulated in great detail to include the contribution of groundshine, in-silo photon scatter, and the effect of spectral degradation in the source capsule. The standard deviation of the KSU experimental data was stated to be 7%, while the statistical uncertainty of the simulation was kept at or under 1%. The results of the simulation agreed closely with the experimental data, generally to within 6%. At distances of under 100 m from the silo, the modeling of the in-silo scatter was crucial to achieving close agreement with the experiment. Specifically, scatter off the top layer of the source cask accounted for approximately 12% of the dose at 50 m. At distances greater than 300 m, using the ⁶⁰Co line spectrum led to a dose overresponse as great as 19% at 700 m. It was necessary to use the actual source spectrum, which includes a Compton tail from photon collisions in the source capsule, to achieve close agreement with the experimental data. These results highlight the importance of using Monte Carlo transport techniques to account for the nonideal features of even simple experiments.

  15. Benchmarking Strategies for Measuring the Quality of Healthcare: Problems and Prospects

    PubMed Central

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed. PMID:22666140

  17. Within-Group Effect-Size Benchmarks for Problem-Solving Therapy for Depression in Adults

    ERIC Educational Resources Information Center

    Rubin, Allen; Yu, Miao

    2017-01-01

    This article provides benchmark data on within-group effect sizes from published randomized clinical trials that supported the efficacy of problem-solving therapy (PST) for depression among adults. Benchmarks are broken down by type of depression (major or minor), type of outcome measure (interview or self-report scale), whether PST was provided…

  18. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.
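
    The integrator comparison described above can be illustrated on a much simpler problem. The sketch below contrasts explicit and implicit Euler stepping on a 1-D periodic diffusion equation; it is a toy stand-in, not the CHiMaD/NIST benchmark formulation.

      # Toy integrator comparison (assumption: 1-D heat equation, periodic BCs).
      import numpy as np

      N, L, D = 64, 1.0, 1.0
      dx = L / N
      x = np.linspace(0.0, L, N, endpoint=False)
      u0 = np.sin(2*np.pi*x)                      # initial condition
      k = 2*np.pi*np.fft.fftfreq(N, d=dx)         # spectral wavenumbers

      def step_explicit(u, dt):
          lap = (np.roll(u, -1) - 2*u + np.roll(u, 1)) / dx**2
          return u + dt*D*lap                     # forward Euler: stable only for dt <= dx^2/(2D)

      def step_implicit(u, dt):
          # backward Euler solved in Fourier space: unconditionally stable
          return np.real(np.fft.ifft(np.fft.fft(u) / (1.0 + dt*D*k**2)))

      dt, T = 1e-4, 0.01                          # dt chosen inside the explicit stability bound
      for name, step in [("explicit", step_explicit), ("implicit", step_implicit)]:
          u = u0.copy()
          for _ in range(int(T/dt)):
              u = step(u, dt)
          exact = np.exp(-D*(2*np.pi)**2*T) * u0  # analytic decay of the sine mode
          print(name, "max error:", np.max(np.abs(u - exact)))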

  19. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

    We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  20. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  1. Carrier-envelope phase control over pathway interference in strong-field dissociation of H2+.

    PubMed

    Kling, Nora G; Betsch, K J; Zohrabi, M; Zeng, S; Anis, F; Ablikim, U; Jochim, Bethany; Wang, Z; Kübel, M; Kling, M F; Carnes, K D; Esry, B D; Ben-Itzhak, I

    2013-10-18

    The dissociation of an H2+ molecular-ion beam by linearly polarized, carrier-envelope-phase-tagged 5 fs pulses at 4×10¹⁴ W/cm² with a central wavelength of 730 nm was studied using a coincidence 3D momentum imaging technique. Carrier-envelope-phase-dependent asymmetries in the emission direction of H+ fragments relative to the laser polarization were observed. These asymmetries are caused by interference of odd and even photon number pathways, where net zero-photon and one-photon interference predominantly contributes at H+ + H kinetic energy releases of 0.2-0.45 eV, and net two-photon and one-photon interference contributes at 1.65-1.9 eV. These measurements of the benchmark H2+ molecule offer the distinct advantage that they can be quantitatively compared with ab initio theory to confirm our understanding of strong-field coherent control via the carrier-envelope phase.

  2. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  3. Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, C. K. W. (Editor); Hardin, J. C. (Editor)

    1997-01-01

    The proceedings of the Second Computational Aeroacoustics (CAA) Workshop on Benchmark Problems held at Florida State University are the subject of this report. For this workshop, problems arising in typical industrial applications of CAA were chosen. Comparisons between numerical solutions and exact solutions are presented where possible.

  4. A proposed benchmark problem for cargo nuclear threat monitoring

    NASA Astrophysics Data System (ADS)

    Wesley Holmes, Thomas; Calderon, Adan; Peeples, Cody R.; Gardner, Robin P.

    2011-10-01

    There is currently a great deal of technical and political effort focused on reducing the risk of potential attacks on the United States involving radiological dispersal devices or nuclear weapons. This paper proposes a benchmark problem for gamma-ray and X-ray cargo monitoring with results calculated using MCNP5, v1.51. The primary goal is to provide a benchmark problem that will allow researchers in this area to evaluate Monte Carlo models for both speed and accuracy in both forward and inverse calculational codes and approaches for nuclear security applications. A previous benchmark problem was developed by one of the authors (RPG) for two similar oil well logging problems (Gardner and Verghese, 1991, [1]). One of those benchmarks has recently been used by at least two researchers in the nuclear threat area to evaluate the speed and accuracy of Monte Carlo codes combined with variance reduction techniques. This apparent need has prompted us to design this benchmark problem specifically for the nuclear threat researcher. This benchmark consists of a conceptual design and preliminary calculational results using gamma-ray interactions on a system containing three thicknesses of three different shielding materials. A point source is placed inside the three materials: lead, aluminum, and plywood. The first two materials are in right circular cylindrical form while the third is a cube. The entire system rests on a sufficiently thick lead base so as to reduce undesired scattering events. The configuration was arranged in such a manner that as a gamma ray moves from the source outward, it first passes through the lead circular cylinder, then the aluminum circular cylinder, and finally the wooden cube before reaching the detector. A 2 in.×4 in.×16 in. box-style NaI(Tl) detector was placed 1 m from the point source located in the center, with the 4 in.×16 in. side facing the system. The two sources used in the benchmark are 137Cs and 235U.
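
    For orientation, a point-kernel estimate of the uncollided flux through three nested shields can be written in a few lines. The attenuation coefficients and thicknesses below are assumed illustrative values for 662 keV (137Cs) photons, not the benchmark specification.

      # Hedged point-kernel sketch (not the MCNP5 benchmark itself): uncollided
      # photon flux from a point source through three nested shield layers.
      # Linear attenuation coefficients are approximate textbook values at
      # 662 keV and are assumptions for illustration only.
      import math

      mu = {"lead": 1.25, "aluminum": 0.20, "wood": 0.06}           # 1/cm (approx.)
      thickness_cm = {"lead": 2.0, "aluminum": 5.0, "wood": 10.0}   # hypothetical

      r_cm = 100.0      # detector at 1 m from the point source
      S = 1.0e6         # source strength, photons/s (hypothetical)

      optical_depth = sum(mu[m]*thickness_cm[m] for m in mu)
      flux_uncollided = S * math.exp(-optical_depth) / (4.0*math.pi*r_cm**2)
      print(f"total mu*t = {optical_depth:.2f}, uncollided flux = {flux_uncollided:.3e} /cm^2/s")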

  5. Community-based benchmarking improves spike rate inference from two-photon calcium imaging data.

    PubMed

    Berens, Philipp; Freeman, Jeremy; Deneux, Thomas; Chenkov, Nikolay; McColgan, Thomas; Speiser, Artur; Macke, Jakob H; Turaga, Srinivas C; Mineault, Patrick; Rupprecht, Peter; Gerhard, Stephan; Friedrich, Rainer W; Friedrich, Johannes; Paninski, Liam; Pachitariu, Marius; Harris, Kenneth D; Bolte, Ben; Machado, Timothy A; Ringach, Dario; Stone, Jasmine; Rogerson, Luke E; Sofroniew, Nicolas J; Reimer, Jacob; Froudarakis, Emmanouil; Euler, Thomas; Román Rosón, Miroslav; Theis, Lucas; Tolias, Andreas S; Bethge, Matthias

    2018-05-01

    In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.
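
    Many of the submitted algorithms build on a generative model of the calcium dynamics. The sketch below shows the simplest such baseline, assuming an AR(1) calcium model and scoring by correlation as in the challenge; it is illustrative only and far below the performance of the winning entries.

      # Minimal baseline sketch (assumption: AR(1) calcium model). Spikes are
      # recovered by inverting c_t = gamma*c_{t-1} + s_t and clipping negatives.
      import numpy as np

      rng = np.random.default_rng(0)
      T, gamma, noise = 2000, 0.95, 0.1
      spikes = (rng.random(T) < 0.02).astype(float)        # ground-truth spike train

      calcium = np.zeros(T)
      for t in range(1, T):
          calcium[t] = gamma*calcium[t-1] + spikes[t]
      trace = calcium + noise*rng.standard_normal(T)       # noisy fluorescence proxy

      est = np.clip(trace[1:] - gamma*trace[:-1], 0.0, None)  # naive deconvolution
      corr = np.corrcoef(est, spikes[1:])[0, 1]
      print(f"correlation with ground truth: {corr:.2f}")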

  6. Mathematical simulations of photon interactions using Monte Carlo analysis to evaluate the uncertainty associated with in vivo K X-ray fluorescence measurements of stable lead in bone

    NASA Astrophysics Data System (ADS)

    Lodwick, Camille J.

    This research utilized Monte Carlo N-Particle version 4C (MCNP4C) to simulate K X-ray fluorescent (K XRF) measurements of stable lead in bone. Simulations were performed to investigate the effects that overlying tissue thickness, bone-calcium content, and shape of the calibration standard have on detector response in XRF measurements at the human tibia. Additional simulations of a knee phantom considered uncertainty associated with rotation about the patella during XRF measurements. Simulations tallied the distribution of energy deposited in a high-purity germanium detector originating from collimated 88 keV 109Cd photons in backscatter geometry. Benchmark measurements were performed on simple and anthropometric XRF calibration phantoms of the human leg and knee developed at the University of Cincinnati with materials proven to exhibit radiological characteristics equivalent to human tissue and bone. Initial benchmark comparisons revealed that MCNP4C limits coherent scatter of photons to six inverse angstroms of momentum transfer and a Modified MCNP4C was developed to circumvent the limitation. Subsequent benchmark measurements demonstrated that Modified MCNP4C adequately models photon interactions associated with in vivo K XRF of lead in bone. Further simulations of a simple leg geometry possessing tissue thicknesses from 0 to 10 mm revealed increasing overlying tissue thickness from 5 to 10 mm reduced predicted lead concentrations an average 1.15% per 1 mm increase in tissue thickness (p < 0.0001). An anthropometric leg phantom was mathematically defined in MCNP to more accurately reflect the human form. A simulated one percent increase in calcium content (by mass) of the anthropometric leg phantom's cortical bone demonstrated to significantly reduce the K XRF normalized ratio by 4.5% (p < 0.0001). Comparison of the simple and anthropometric calibration phantoms also suggested that cylindrical calibration standards can underestimate lead content of a human leg up to 4%. The patellar bone structure in which the fluorescent photons originate was found to vary dramatically with measurement angle. The relative contribution of lead signal from the patella declined from 65% to 27% when rotated 30°. However, rotation of the source-detector about the patella from 0 to 45° demonstrated no significant effect on the net K XRF response at the knee.

  7. Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.

    PubMed

    Lennartsson, Jan; Lindberg, Carl

    2015-01-01

    To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification for this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to solving the one-period Markowitz mean-variance problem continuously under the corresponding Black-Scholes market. Further, the solution to the optimization problem can be represented in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
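
    The one-period Markowitz problem invoked above has a familiar closed form; notation here is assumed (not taken from the paper): \mu the mean-return vector, r the risk-free rate, \Sigma the covariance matrix, \gamma the risk-aversion parameter:

      \max_{w}\; w^{\top}(\mu - r\mathbf{1}) - \frac{\gamma}{2}\, w^{\top}\Sigma\, w
      \quad\Longrightarrow\quad
      w^{*} = \frac{1}{\gamma}\,\Sigma^{-1}(\mu - r\mathbf{1}).

    The paper's claim is that the continuous-time, benchmark-relative problem reduces to applying this one-period solution consecutively under the corresponding Black-Scholes market.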

  8. Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D. (Editor)

    2000-01-01

    The proceedings of the Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems cosponsored by the Ohio Aerospace Institute and the NASA Glenn Research Center are the subject of this report. Fan noise was the chosen theme for this workshop, with representative problems encompassing four of the six benchmark problem categories. The other two categories were related to jet noise and cavity noise. For the first time in this series of workshops, the computational results for the cavity noise problem were compared to experimental data. All the other problems had exact solutions, which are included in this report. The Workshop included a panel discussion by representatives of industry. The participants gave their views on the status of applying computational aeroacoustics to solve practical industry-related problems and on the issues that need to be addressed to make CAA a robust design tool.

  9. One-dimensional photonic crystal optical limiter.

    PubMed

    Soon, Boon Yi; Haus, Joseph; Scalora, Michael; Sibilia, Concita

    2003-08-25

    We explore a new passive optical limiter design based on transverse modulation instability in a one-dimensional photonic crystal (PC) made of χ(3) materials. The performance of PC optical limiters strongly depends on the choice of materials and geometry, and it improves as the duration of the incident pulse is extended. PC optical limiter performance is compared with that of a device made from homogeneous material. We identify three criteria for benchmarking the PC optical limiter. We also include a discussion of the advantages and disadvantages of PC optical limiters for real-world applications.

  10. Fiber optic label-free biophotonic diagnostic tool for cardiovascular disease

    NASA Astrophysics Data System (ADS)

    Rius, Cristina; Ackermann, Tobias N.; Dorado, Beatriz; Muñoz-Berbel, Xavier; Andrés, Vicente; Llobera, Andreu

    2015-06-01

    A label-free compact method for performing photonic characterization of "healthy" versus "diseased" arteries has been developed. It permits the detection of atherosclerotic lesions in living mouse arteries. Using this prototype, we observed that the spectral response (photonic fingerprint, PIN) obtained from aortas of wild-type mice differs from the response of ApoE-KO mice fed a high-fat diet (an atheroprone mouse model). Benchmarking of the results against the gold standard was performed by staining the aortas with Oil-Red-O to visualize atherosclerotic plaques.

  11. PMLB: a large benchmark suite for machine learning evaluation and comparison.

    PubMed

    Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H

    2017-01-01

    The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
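
    A hedged usage sketch of the accompanying pmlb Python package (fetch_data is its documented entry point; the dataset name and model choice below are illustrative, not from the paper):

      # Fetch one PMLB dataset and run a quick cross-validated baseline.
      from pmlb import fetch_data
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      X, y = fetch_data("mushroom", return_X_y=True)   # one of the suite's classification sets
      scores = cross_val_score(RandomForestClassifier(n_estimators=100), X, y, cv=5)
      print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))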

  12. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.

  13. Monte Carlo simulations and benchmark measurements on the response of TE(TE) and Mg(Ar) ionization chambers in photon, electron and neutron beams

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Huang, Tseng-Te; Liu, Yuan-Hao; Chen, Wei-Lin; Chen, Yen-Fu; Wu, Shu-Wei; Nievaart, Sander; Jiang, Shiang-Huei

    2015-06-01

    The paired ionization chambers (ICs) technique is commonly employed to determine neutron and photon doses in radiology or radiotherapy neutron beams, where the neutron dose depends strongly on the accuracy of the accompanying high-energy photon dose. During the dose derivation, it is an important issue to evaluate the photon and electron response functions of two commercially available ionization chambers, denoted as TE(TE) and Mg(Ar), used in our reactor-based epithermal neutron beam. Nowadays, most perturbation corrections for accurate dose determination and many treatment planning systems are based on the Monte Carlo technique. We used the general-purpose Monte Carlo codes MCNP5, EGSnrc, FLUKA and GEANT4 for benchmark verifications among them and against carefully measured values for a precise estimation of chamber current from the absorbed dose rate of the cavity gas. Also, energy-dependent response functions of the two chambers were calculated in a parallel beam with mono-energies from 20 keV to 20 MeV for photons and electrons by using optimal simple spherical and detailed IC models. The measurements were performed in well-defined (a) primary M-80, M-100, M-120 and M-150 X-ray calibration fields, (b) a primary 60Co calibration beam, (c) 6 MV and 10 MV photon and (d) 6 MeV and 18 MeV electron LINAC beams in hospital, and (e) a BNCT clinical trials neutron beam. For the TE(TE) chamber, all codes were almost identical over the whole photon energy range. For the Mg(Ar) chamber, MCNP5 showed lower response than the other codes for photon energies below 0.1 MeV and similar response above 0.2 MeV (agreement within 5% in the simple spherical model). With increasing electron energy, the response difference between MCNP5 and the other codes became larger in both chambers. Compared with the measured currents, MCNP5 differed from the measurement data by less than 5% for the 60Co, 6 MV, 10 MV, 6 MeV and 18 MeV LINAC beams, but for the Mg(Ar) chamber the deviations reached 7.8-16.5% for X-ray beams below 120 kVp. In this study we were especially interested in BNCT doses, where the low-energy photon contribution is small enough to ignore; the MCNP model is recognized as the most suitable to simulate the broadly distributed photon-electron and neutron energy responses of the paired ICs. Also, MCNP provides the best prediction of BNCT source adjustment via the detector's neutron and photon responses.
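
    The paired-chamber dose separation underlying this work reduces to a small linear system. The sketch below uses hypothetical sensitivity coefficients; real values come from response-function calculations such as those described above.

      # Sketch of paired-IC dose separation. Each chamber reading is modeled as
      #   Q = k_gamma * D_gamma + k_n * D_n,
      # giving a 2x2 linear system for the photon and neutron doses.
      # All coefficients and readings below are hypothetical.
      import numpy as np

      # rows: TE(TE) and Mg(Ar); columns: sensitivity to (photon, neutron) dose
      K = np.array([[1.00, 0.95],    # TE(TE): responds to both components
                    [1.00, 0.05]])   # Mg(Ar): mostly photon-sensitive
      Q = np.array([2.10, 1.15])     # measured chamber signals (arbitrary units)

      D_gamma, D_n = np.linalg.solve(K, Q)
      print(f"photon dose = {D_gamma:.3f}, neutron dose = {D_n:.3f}")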

  14. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

    There were no standards for the validation and verification of tsunami numerical models before the 2004 Indian Ocean tsunami. Even so, a number of numerical models had been used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held on February 9-10, 2015 at Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models for tsunami currents. Three of the benchmark problems were: current measurements of the 2011 Japan tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316) developed by NCTR. The modeling results are compared with the required benchmark data, showing good agreement, and the results are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe).

  15. A suite of benchmark and challenge problems for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark; Fu, Pengcheng; McClure, Mark

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study represented U.S. national laboratories, universities, and industry, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research: stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners. We present the suite of benchmark and challenge problems developed for the GTO-CCS, providing problem descriptions and sample solutions.

  16. ORANGE: a Monte Carlo dose engine for radiotherapy.

    PubMed

    van der Zee, W; Hogenbirk, A; van der Marck, S C

    2005-02-21

    This study presents data for the verification of ORANGE, a fast MCNP-based dose engine for radiotherapy treatment planning. In order to verify the new algorithm, it has been benchmarked against DOSXYZ and against measurements. For the benchmarking, calculations were first done using the ICCR-XIII benchmark. Next, calculations were done with DOSXYZ and ORANGE in five different phantoms (one homogeneous, two with bone-equivalent inserts and two with lung-equivalent inserts). The calculations were done with two mono-energetic photon beams (2 MeV and 6 MeV) and two mono-energetic electron beams (10 MeV and 20 MeV). Comparison of the calculated data (from DOSXYZ and ORANGE) against measurements was possible for a realistic 10 MV photon beam and a realistic 15 MeV electron beam in a homogeneous phantom only. For the comparison of the calculated dose distributions against each other and against measurements, the concept of the confidence limit (CL) has been used. This concept reduces the difference between two data sets to a single number, which gives the deviation for 90% of the dose distributions. Using this concept, it was found that ORANGE was always within the statistical bandwidth of DOSXYZ and the measurements. The ICCR-XIII benchmark showed that ORANGE is seven times faster than DOSXYZ, a result comparable with other accelerated Monte Carlo dose systems when no variance reduction is used. As shown for XVMC, using variance reduction techniques has the potential for further acceleration. Using modern computer hardware, this brings the total calculation time for a dose distribution with 1.5% (statistical) accuracy within the clinical range (less than 10 min). This means that ORANGE is a candidate dose engine for radiotherapy treatment planning.
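
    A minimal sketch of the confidence-limit idea, assuming the common definition CL = |mean deviation| + 1.5 SD from the treatment-planning QA literature (the paper may use a different multiplier for its 90% criterion); the dose values are hypothetical:

      # Reduce the difference between two dose data sets to one number (CL).
      import numpy as np

      dose_a = np.array([1.00, 0.98, 0.95, 0.90, 0.80])   # e.g., ORANGE (hypothetical)
      dose_b = np.array([1.01, 0.97, 0.96, 0.88, 0.81])   # e.g., DOSXYZ (hypothetical)

      dev = 100.0 * (dose_a - dose_b) / dose_b            # percent deviations
      cl = abs(dev.mean()) + 1.5*dev.std(ddof=1)          # assumed CL definition
      print(f"confidence limit: {cl:.2f}%")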

  17. Photonic simulation of entanglement growth and engineering after a spin chain quench.

    PubMed

    Pitsios, Ioannis; Banchi, Leonardo; Rab, Adil S; Bentivegna, Marco; Caprara, Debora; Crespi, Andrea; Spagnolo, Nicolò; Bose, Sougato; Mataloni, Paolo; Osellame, Roberto; Sciarrino, Fabio

    2017-11-17

    The time evolution of quantum many-body systems is one of the most important processes for benchmarking quantum simulators. The most curious feature of such dynamics is the growth of quantum entanglement to an amount proportional to the system size (volume law) even when interactions are local. This phenomenon has great ramifications for fundamental aspects, while its optimisation clearly has an impact on technology (e.g., for on-chip quantum networking). Here we use an integrated photonic chip with a circuit-based approach to simulate the dynamics of a spin chain and maximise the entanglement generation. The resulting entanglement is certified by constructing a second chip, which measures the entanglement between multiple distant pairs of simulated spins, as well as the block entanglement entropy. This is the first photonic simulation and optimisation of the extensive growth of entanglement in a spin chain, and opens up the use of photonic circuits for optimising quantum devices.

  18. Imaging CF3I conical intersection and photodissociation dynamics by ultrafast electron diffraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Jie

    Conical intersections play a critical role in excited state dynamics of polyatomic molecules, as they govern the reaction pathways of many nonadiabatic processes. However, ultrafast probes have lacked sufficient spatial resolution to image wavepacket trajectories through these intersections directly. Here we present the simultaneous experimental characterization of one-photon and two-photon excitation channels in isolated CF3I molecules using ultrafast gas phase electron diffraction. In the two-photon channel, we have mapped out the real space trajectories of a coherent nuclear wavepacket, which bifurcates onto two potential energy surfaces when passing through a conical intersection. In the one-photon channel, we have resolved excitation of both the umbrella and the breathing vibrational modes of the CF3 fragment in multiple nuclear dimensions. These findings benchmark and validate ab initio nonadiabatic dynamics calculations.

  19. Quantum storage of a photonic polarization qubit in a solid.

    PubMed

    Gündoğan, Mustafa; Ledingham, Patrick M; Almasi, Attaallah; Cristiani, Matteo; de Riedmatten, Hugues

    2012-05-11

    We report on the quantum storage and retrieval of photonic polarization quantum bits onto and out of a solid state storage device. The qubits are implemented with weak coherent states at the single photon level, and are stored for a predetermined time of 500 ns in a praseodymium-doped crystal with a storage and retrieval efficiency of 10%, using the atomic frequency comb scheme. We characterize the storage by using quantum state tomography, and find that the average conditional fidelity of the retrieved qubits exceeds 95% for a mean photon number μ=0.4. This is significantly higher than a classical benchmark, taking into account the Poissonian statistics and finite memory efficiency, which proves that our crystal functions as a quantum storage device for polarization qubits. These results extend the storage capabilities of solid state quantum light-matter interfaces to polarization encoding, which is widely used in quantum information science.

  20. Verification and benchmark testing of the NUFT computer code

    NASA Astrophysics Data System (ADS)

    Lee, K. H.; Nitao, J. J.; Kulshrestha, A.

    1993-10-01

    This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be Cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.

  1. Sensitivity Analysis of OECD Benchmark Tests in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON fuels performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
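
    The correlation part of such a study is straightforward to reproduce on synthetic stand-in data (the actual study used 300 Dakota samples of 17 BISON inputs and 24 responses; everything below is illustrative):

      # Pearson and Spearman sensitivity measures on synthetic samples.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      n = 300
      x1 = rng.normal(size=n)          # e.g., a fuel property
      x2 = rng.normal(size=n)          # e.g., a manufacturing tolerance
      y = 2.0*x1 + 0.3*x2**3 + 0.1*rng.normal(size=n)   # a response, e.g., centerline temperature

      for name, x in [("x1", x1), ("x2", x2)]:
          pearson = stats.pearsonr(x, y)[0]     # linear association
          spearman = stats.spearmanr(x, y)[0]   # monotonic (rank) association
          print(f"{name}: Pearson = {pearson:+.2f}, Spearman = {spearman:+.2f}")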

  2. Analyzing the BBOB results by means of benchmarking concepts.

    PubMed

    Mersmann, O; Preuss, M; Trautmann, H; Bischl, B; Weihs, C

    2015-01-01

    We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the "best" one? The second is: which algorithm should I use for my real-world problem? The two are connected, and neither is easy to answer. We present a theoretical framework for designing and analyzing the raw data of such benchmark experiments. This represents a first step in answering the aforementioned questions. The 2009 and 2010 BBOB benchmark results are analyzed by means of this framework and we derive insight regarding the answers to the two questions. Furthermore, we discuss how to properly aggregate rankings from algorithm evaluations on individual problems into a consensus, its theoretical background and which common pitfalls should be avoided. Finally, we address the grouping of test problems into sets with similar optimizer rankings and investigate whether these are reflected by already proposed test problem characteristics, finding that this is not always the case.
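
    One simple way to aggregate per-problem rankings into a consensus, of the kind whose pitfalls the paper analyzes, is a Borda count; the algorithm names and rankings below are made up for illustration:

      # Borda-count consensus ranking over per-problem rankings (best first).
      from collections import defaultdict

      rankings = [
          ["CMA-ES", "BFGS", "NelderMead"],
          ["CMA-ES", "NelderMead", "BFGS"],
          ["BFGS", "CMA-ES", "NelderMead"],
      ]

      scores = defaultdict(int)
      for ranking in rankings:
          for pos, algo in enumerate(ranking):
              scores[algo] += len(ranking) - 1 - pos   # best position earns most points

      consensus = sorted(scores, key=scores.get, reverse=True)
      print("consensus ranking:", consensus, dict(scores))

    Borda is only one naive choice; as the paper discusses, the aggregation rule itself can change the consensus, which is exactly why its theoretical background matters.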

  3. Performance of today’s dual energy CT and future multi energy CT in virtual non-contrast imaging and in iodine quantification: A simulation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan

    2015-07-15

    Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today’s DECT approaches were found for the application of virtual non-contrast and iodine imaging. Future MECT with realistic photon counting detectors currently can only perform comparably to dual source DECT at 100 kV/Sn 140 kV. Dual source concepts with photon counting detectors could be a solution to this problem, promising a better performance.
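
    Image-based material decomposition of the sort simulated here amounts to a per-pixel linear inversion. In the sketch below the basis-attenuation matrix and pixel values are hypothetical placeholders, not values from the study:

      # Per-pixel two-material decomposition by least squares.
      import numpy as np

      # rows: low/high-kV images; columns: (water, iodine) basis attenuation
      A = np.array([[1.00, 30.0],
                    [0.95, 12.0]])

      low_kv  = np.array([1.30, 1.02, 1.60])   # measured pixel values, low-energy image
      high_kv = np.array([1.07, 0.97, 1.19])   # same pixels, high-energy image

      pixels = np.stack([low_kv, high_kv])                 # shape (2, n_pixels)
      coeffs, *_ = np.linalg.lstsq(A, pixels, rcond=None)  # shape (2, n_pixels)
      water, iodine = coeffs
      print("water component:", np.round(water, 3))
      print("iodine component (arb.):", np.round(iodine, 4))

    The water row is what a virtual non-contrast image displays, and the iodine row is the quantification target; noise propagation through this inversion is what separates the acquisition techniques compared above.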

  4. Least-Squares Spectral Element Solutions to the CAA Workshop Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Lin, Wen H.; Chan, Daniel C.

    1997-01-01

    This paper presents computed results for some of the CAA benchmark problems obtained with the acoustic solver developed at the Rocketdyne CFD Technology Center under the corporate agreement between Boeing North American, Inc. and NASA for the Aerospace Industry Technology Program. The calculations serve as benchmark testing of the functionality, accuracy, and performance of the solver. Results of these computations demonstrate that the solver is capable of solving the propagation of aeroacoustic signals. Testing on sound generation and on more realistic problems is now being pursued for industrial applications of this solver. Numerical calculations were performed for the second problem of Category 1 of the current workshop problems, an acoustic pulse scattered from a rigid circular cylinder, and for two of the first CAA workshop problems, i.e., the first problem of Category 1, the propagation of a linear wave, and the first problem of Category 4, an acoustic pulse reflected from a rigid wall in a uniform flow of Mach 0.5. The aim of including the last two problems in this workshop is to test the effectiveness of some boundary conditions set up in the solver. Numerical results for the last two benchmark problems have been compared with their corresponding exact solutions, and the agreement is excellent. This demonstrates the high fidelity of the solver in handling wave propagation problems and makes the method quite attractive for developing a computational acoustic solver for calculating aero/hydrodynamic noise in violent flow environments.

  5. New prompt fission gamma-ray spectral data from 239Pu(nth, f) in response to a high priority request from OECD Nuclear Energy Agency

    NASA Astrophysics Data System (ADS)

    Gatera, Angélique; Belgya, Tamás; Geerts, Wouter; Göök, Alf; Hambsch, Franz-Josef; Lebois, Matthieu; Maróti, Boglárka; Oberstedt, Stephan; Oberstedt, Andreas; Postelt, Frederik; Qi, Liqiang; Szentmiklósi, Laszló; Vidali, Marzio; Zeiser, Fabio

    2017-09-01

    Benchmark reactor calculations have revealed an underestimation of γ-heating following fission by up to 28%. To improve the modelling of new nuclear reactors, the OECD/NEA initiated a nuclear data High Priority Request List (HPRL) entry for the major isotopes (235U, 239Pu). In response to that HPRL entry, we executed a dedicated measurement program on prompt fission γ-rays employing state-of-the-art lanthanum bromide (LaBr3) detectors with superior timing and good energy resolution. Our new results from 252Cf(sf), 235U(nth,f) and 241Pu(nth,f) provide prompt fission γ-ray spectral characteristics: the average number of photons per fission, the average total energy per fission, and the mean photon energy, all within 2% uncertainty. We present preliminary results on 239Pu(nth,f), recently measured at the Budapest Neutron Centre and supported by the CHANDA Trans-national Access Activity, and discuss our different published results in comparison to the historical data and what they say about the discrepancy observed in the benchmark calculations.

  6. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  7. Memory-Intensive Benchmarks: IRAM vs. Cache-Based Machines

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Gaeke, Brian R.; Husbands, Parry; Li, Xiaoye S.; Oliker, Leonid; Yelick, Katherine A.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The increasing gap between processor and memory performance has led to new architectural models for memory-intensive applications. In this paper, we explore the performance of a set of memory-intensive benchmarks and use them to compare the performance of conventional cache-based microprocessors to a mixed logic and DRAM processor called VIRAM. The benchmarks are based on problem statements, rather than specific implementations, and in each case we explore the fundamental hardware requirements of the problem, as well as alternative algorithms and data structures that can help expose fine-grained parallelism or simplify memory access patterns. The benchmarks are characterized by their memory access patterns, their basic control structures, and the ratio of computation to memory operations.

  8. Retrieving transient conformational molecular structure information from inner-shell photoionization of laser-aligned molecules

    PubMed Central

    Wang, Xu; Le, Anh-Thu; Yu, Chao; Lucchese, R. R.; Lin, C. D.

    2016-01-01

    We discuss a scheme to retrieve transient conformational molecular structure information using photoelectron angular distributions (PADs) that have been averaged over partial alignments of isolated molecules. The photoelectron is pulled out from a localized inner-shell molecular orbital by an X-ray photon. We show that a transient change in the atomic positions from their equilibrium will lead to a sensitive change in the alignment-averaged PADs, which can be measured and used to retrieve the former. Exploiting the experimental convenience of changing the photon polarization direction, we show that it is advantageous to use PADs obtained from multiple photon polarization directions. A simple single-scattering model is proposed and benchmarked to describe the photoionization process and to perform the retrieval using a multiple-parameter fitting method. PMID:27025410

  9. Retrieving transient conformational molecular structure information from inner-shell photoionization of laser-aligned molecules

    NASA Astrophysics Data System (ADS)

    Wang, Xu; Le, Anh-Thu; Yu, Chao; Lucchese, R. R.; Lin, C. D.

    2016-03-01

    We discuss a scheme to retrieve transient conformational molecular structure information using photoelectron angular distributions (PADs) that have been averaged over partial alignments of isolated molecules. The photoelectron is pulled out from a localized inner-shell molecular orbital by an X-ray photon. We show that a transient change in the atomic positions from their equilibrium will lead to a sensitive change in the alignment-averaged PADs, which can be measured and used to retrieve the former. Exploiting the experimental convenience of changing the photon polarization direction, we show that it is advantageous to use PADs obtained from multiple photon polarization directions. A simple single-scattering model is proposed and benchmarked to describe the photoionization process and to perform the retrieval using a multiple-parameter fitting method.

  10. Evanescent Properties of Optical Diffraction from 2-Dimensional Hexagonal Photonic Crystals and Their Sensor Applications.

    PubMed

    Liao, Yu-Yang; Chen, Yung-Tsan; Chen, Chien-Chun; Huang, Jian-Jang

    2018-04-03

    The sensitivity of traditional diffraction grating sensors is limited by the spatial resolution of the measurement setup. Thus, a large space is required to improve sensor performance. Here, we demonstrate a compact hexagonal photonic crystal (PhC) optical sensor with high sensitivity. PhCs are able to diffract optical beams to various angles in azimuthal space. The critical wavelength at which a diffracted order satisfies phase matching or becomes evanescent was used to benchmark the refractive index of a target analyte applied to the PhC sensor. Using a glucose solution as an example, our sensor demonstrated very high sensitivity and a low limit of detection. This shows that the diffraction mechanism of hexagonal photonic crystals can be used for sensors when compact size is a concern.
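
    The sensing principle can be summarized with the grating equation. Assuming normal incidence and a period \Lambda along the relevant direction of the hexagonal lattice (a simplification of the paper's azimuthal treatment), the m-th diffracted order in an analyte of refractive index n_a obeys

      n_a \sin\theta_m = \frac{m\lambda}{\Lambda}
      \quad\Longrightarrow\quad
      \lambda_c = \frac{n_a\,\Lambda}{|m|},

    so the critical wavelength \lambda_c at which the order becomes evanescent tracks n_a linearly; this cutoff is what the sensor reads out.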

  11. Search for Production of Resonant States in the Photon-Jet Mass Distribution Using pp Collisions at √s = 7 TeV Collected by the ATLAS Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    2012-05-22

    This Letter describes a model-independent search for the production of new resonant states in photon + jet events in 2.11 fb⁻¹ of proton-proton collisions at √s = 7 TeV. We compare the photon + jet mass distribution to a background model derived from data and find consistency with the background-only hypothesis. Given the lack of evidence for a signal, we set 95% credibility level limits on generic Gaussian-shaped signals and on a benchmark excited-quark (q*) model, excluding 2 TeV Gaussian resonances with cross section times branching fraction times acceptance times efficiency near 5 fb and excluding q* masses below 2.46 TeV, respectively.

  12. Signifying quantum benchmarks for qubit teleportation and secure quantum communication using Einstein-Podolsky-Rosen steering inequalities

    NASA Astrophysics Data System (ADS)

    Reid, M. D.

    2013-12-01

    The demonstration of quantum teleportation of a photonic qubit from Alice to Bob usually relies on data conditioned on detection at Bob's location. I show that Bohm's Einstein-Podolsky-Rosen (EPR) paradox can be used to verify that the quantum benchmark for qubit teleportation has been reached, without postselection. This is possible for scenarios insensitive to losses at the generation station, and with efficiencies of ηB>1/3 for the teleportation process. The benchmark is obtained if it is shown that Bob can “steer” Alice's record of the qubit as stored by Charlie. EPR steering inequalities involving m measurement settings can also be used to confirm quantum teleportation, for efficiencies ηB>1/m, if one assumes trusted detectors for Charlie and Alice. Using proofs of monogamy, I show that two-setting EPR steering inequalities can signify secure teleportation of the qubit state.

  13. Radiation Detection Computational Benchmark Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for compilation. This report describes the details of the selected benchmarks and results from various transport codes.

  14. Model Prediction Results for 2007 Ultrasonic Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Kim, Hak-Joon; Song, Sung-Jin

    2008-02-01

    The World Federation of NDE Centers (WFNDEC) has addressed two types of problems for the 2007 ultrasonic benchmark: prediction of side-drilled hole responses with 45° and 60° refracted shear waves, and the effects of surface curvature on the ultrasonic responses of flat-bottomed holes. To solve this year's ultrasonic benchmark problems, we applied multi-Gaussian beam models for the calculation of ultrasonic beam fields, and the Kirchhoff approximation and the separation of variables method for the calculation of far-field scattering amplitudes of flat-bottomed holes and side-drilled holes, respectively. In this paper, we present comparisons of model predictions to experiments for side-drilled holes and discuss the effect of interface curvature on ultrasonic responses by comparing peak-to-peak amplitudes of flat-bottomed hole responses for different sizes and interface curvatures.

  15. A Benchmarking Initiative for Reactive Transport Modeling Applied to Subsurface Environmental Applications

    NASA Astrophysics Data System (ADS)

    Steefel, C. I.

    2015-12-01

    Over the last 20 years, we have seen the evolution of multicomponent reactive transport modeling and the expanding range and increasing complexity of subsurface environmental applications it is being used to address. Reactive transport modeling is being asked to provide accurate assessments of engineering performance and risk for important issues with far-reaching consequences. As a result, the complexity and detail of subsurface processes, properties, and conditions that can be simulated have significantly expanded. Closed-form solutions are necessary and useful, but limited to situations that are far simpler than typical applications that combine many physical and chemical processes, in many cases in coupled form. In the absence of closed-form and yet realistic solutions for complex applications, numerical benchmark problems with an accepted set of results will be indispensable to qualifying codes for various environmental applications. The intent of this benchmarking exercise, now underway for more than five years, is to develop and publish a set of well-described benchmark problems that can be used to demonstrate simulator conformance with norms established by the subsurface science and engineering community. The objective is not to verify this or that specific code (the reactive transport codes play a supporting role in this regard) but rather to use the codes to verify that a common solution of the problem can be achieved. Thus, the objective of each of the manuscripts is to present an environmentally relevant benchmark problem that tests the conceptual model capabilities, numerical implementation, process coupling, and accuracy. The benchmark problems developed to date include 1) microbially-mediated reactions, 2) isotopes, 3) multi-component diffusion, 4) uranium fate and transport, 5) metal mobility in mining affected systems, and 6) waste repositories and related aspects.

  16. Making Benchmark Testing Work

    ERIC Educational Resources Information Center

    Herman, Joan L.; Baker, Eva L.

    2005-01-01

    Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…

  17. Ab Initio Optimized Effective Potentials for Real Molecules in Optical Cavities: Photon Contributions to the Molecular Ground State

    PubMed Central

    2018-01-01

    We introduce a simple scheme to efficiently compute photon exchange-correlation contributions due to the coupling to transversal photons as formulated in the newly developed quantum-electrodynamical density-functional theory (QEDFT). Our construction employs the optimized effective potential (OEP) approach by means of the Sternheimer equation to avoid the explicit calculation of unoccupied states. We demonstrate the efficiency of the scheme by applying it to an exactly solvable GaAs quantum ring model system, a single azulene molecule, and chains of sodium dimers, all located in optical cavities and described in full real space. While the first example is a two-dimensional system and allows us to benchmark the employed approximations, the latter two examples demonstrate that the correlated electron-photon interaction appreciably distorts the ground-state electronic structure of a real molecule. By using this scheme, we not only construct typical electronic observables, such as the electronic ground-state density, but also illustrate how photon observables, such as the photon number, and mixed electron-photon observables, for example, electron–photon correlation functions, become accessible in a density-functional theory (DFT) framework. This work constitutes the first three-dimensional ab initio calculation within the new QEDFT formalism and thus opens up a new computational route for the ab initio study of correlated electron–photon systems in quantum cavities. PMID:29594185
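
    Schematically, the Sternheimer approach replaces the sum over unoccupied states with linear-response equations for first-order orbital shifts; the following is a generic textbook form with assumed notation, not the paper's exact working equations:

      (\hat{H}_{\mathrm{KS}} - \varepsilon_i \pm \omega)\,|\delta\psi_i^{\pm}\rangle
        = -\hat{P}_c\,\delta\hat{v}\,|\psi_i\rangle,
      \qquad
      \hat{P}_c = \hat{1} - \sum_{j\in\mathrm{occ}} |\psi_j\rangle\langle\psi_j|,

    where the projector \hat{P}_c removes the occupied subspace, so only the occupied orbitals \psi_i with energies \varepsilon_i are ever needed to evaluate the response at frequency \omega.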

  18. Generalizable open source urban water portfolio simulation framework demonstrated using a multi-objective risk-based planning benchmark problem.

    NASA Astrophysics Data System (ADS)

    Trindade, B. C.; Reed, P. M.

    2017-12-01

    The growing access to and reduced cost of computing power in recent years have promoted the rapid development and application of multi-objective water supply portfolio planning. As this trend continues, there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e. making more efficient and coordinated use of restrictions, water transfers and financial hedging combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for the planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of these reliability-driven actions (e.g., restrictions, water transfers and infrastructure pathways) against their inherent financial risks. Several traits of this problem make it ideal for a benchmark problem, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front caused by the step-wise nature of the decision-making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by one stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.

  19. Benchmarking: A Process for Improvement.

    ERIC Educational Resources Information Center

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  20. Solution of the neutronics code dynamic benchmark by finite element method

    NASA Astrophysics Data System (ADS)

    Avvakumov, A. V.; Vabishchevich, P. N.; Vasilev, A. O.; Strizhov, V. F.

    2016-10-01

    The objective is to analyze the dynamic benchmark developed by Atomic Energy Research for the verification of best-estimate neutronics codes. The benchmark scenario includes asymmetrical ejection of a control rod in a water-type hexagonal reactor at hot zero power. A simple Doppler feedback mechanism assuming adiabatic fuel temperature heating is proposed. The finite element method on triangular calculation grids is used to solve the three-dimensional neutron kinetics problem. The software has been developed using the engineering and scientific calculation library FEniCS. The matrix spectral problem is solved using the scalable and flexible toolkit SLEPc. The solution accuracy of the dynamic benchmark is analyzed by refining the calculation grid and varying the degree of the finite elements.
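
    The matrix spectral problem mentioned here is a generalized eigenvalue problem for the multiplication factor. As a rough illustration of what such a solve looks like, here is a one-group, finite-difference SciPy sketch with made-up cross sections; it is an assumed analogue, not the FEniCS/SLEPc implementation used in the paper:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigs

        # Illustrative one-group data (hypothetical values)
        n, h = 200, 0.5                      # mesh cells, cell width (cm)
        D, siga, nusigf = 1.2, 0.05, 0.06    # diffusion coeff., absorption, nu*fission

        # Loss operator A = -D d2/dx2 + siga with zero-flux boundaries
        A = sp.diags([(2*D/h**2 + siga)*np.ones(n),
                      (-D/h**2)*np.ones(n - 1),
                      (-D/h**2)*np.ones(n - 1)], [0, -1, 1], format='csc')
        F = nusigf*sp.identity(n, format='csc')   # fission production operator

        # F*phi = k*A*phi: the largest eigenvalue is k-effective
        vals, vecs = eigs(F, k=1, M=A, which='LM')
        print("k_eff ~", vals[0].real)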

  1. A Methodology for Benchmarking Relational Database Machines,

    DTIC Science & Technology

    1984-01-01

    user benchmarks is to compare the multiple users to the best-case performance... The data for each query classification coll... and the performance... called a benchmark. The term benchmark originates from the markers used by surveyors in establishing common reference points for their measure... formatted databases. In order to further simplify the problem, we restrict our study to those DBMs which support the relational model. A survey...

  2. Creation of problem-dependent Doppler-broadened cross sections in the KENO Monte Carlo code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, Shane W. D.; Celik, Cihangir; Maldonado, G. Ivan

    2015-11-06

    In this paper, we introduce a quick method for improving the accuracy of Monte Carlo simulations by generating one- and two-dimensional cross sections at a user-defined temperature before performing transport calculations. A finite difference method is used to Doppler-broaden cross sections to the desired temperature, and unit-base interpolation is done to generate the probability distributions for double differential two-dimensional thermal moderator cross sections at any arbitrary user-defined temperature. The accuracy of these methods is tested using a variety of contrived problems. In addition, various benchmarks at elevated temperatures are modeled, and results are compared with benchmark results. Lastly, the problem-dependent cross sections are observed to produce eigenvalue estimates that are closer to the benchmark results than those without the problem-dependent cross sections.
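
    The practical payoff of such problem-dependent broadening is that cross sections need not be pre-tabulated at every temperature of interest. A common simplification, shown here only as an illustrative sketch and not the finite-difference scheme of the paper, is to interpolate pointwise cross sections between two bounding library temperatures linearly in sqrt(T):

        import numpy as np

        def sigma_at_T(sigma_T0, sigma_T1, T0, T1, T):
            """Pointwise cross-section interpolation, linear in sqrt(T).

            sigma_T0, sigma_T1: cross sections tabulated at temperatures T0, T1 (K).
            All numerical values here are hypothetical illustrations."""
            w = (np.sqrt(T) - np.sqrt(T0)) / (np.sqrt(T1) - np.sqrt(T0))
            return (1.0 - w)*np.asarray(sigma_T0) + w*np.asarray(sigma_T1)

        # e.g., estimate a capture cross section at 450 K from 300 K and 600 K data
        print(sigma_at_T(10.2, 9.1, 300.0, 600.0, 450.0))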

  3. Hybrid photodetector for single-molecule spectroscopy and microscopy

    PubMed Central

    Michalet, X.; Cheng, Adrian; Antelman, Joshua; Suyama, Motohiro; Arisaka, Katsushi; Weiss, Shimon

    2011-01-01

    We report benchmark tests of a new single-photon counting detector based on a GaAsP photocathode and an electron-bombarded avalanche photodiode developed by Hamamatsu Photonics. We compare its performance with those of standard Geiger-mode avalanche photodiodes. We show its advantages for fluorescence correlation spectroscopy (FCS), due to the absence of after-pulsing, and for fluorescence lifetime measurements, due to its excellent time resolution. Its large sensitive area also greatly simplifies setup alignment. Because its spectral sensitivity is similar to that of recently introduced CMOS SPADs, this new detector could become a valuable tool for single-molecule fluorescence measurements, as well as for many other applications. PMID:21822361

  4. Spectroscopic signatures of localization with interacting photons in superconducting qubits

    NASA Astrophysics Data System (ADS)

    Roushan, P.; Neill, C.; Tangpanitanon, J.; Bastidas, V. M.; Megrant, A.; Barends, R.; Chen, Y.; Chen, Z.; Chiaro, B.; Dunsworth, A.; Fowler, A.; Foxen, B.; Giustina, M.; Jeffrey, E.; Kelly, J.; Lucero, E.; Mutus, J.; Neeley, M.; Quintana, C.; Sank, D.; Vainsencher, A.; Wenner, J.; White, T.; Neven, H.; Angelakis, D. G.; Martinis, J.

    2017-12-01

    Quantized eigenenergies and their associated wave functions provide extensive information for predicting the physics of quantum many-body systems. Using a chain of nine superconducting qubits, we implement a technique for resolving the energy levels of interacting photons. We benchmark this method by capturing the main features of the intricate energy spectrum predicted for two-dimensional electrons in a magnetic field—the Hofstadter butterfly. We introduce disorder to study the statistics of the energy levels of the system as it undergoes the transition from a thermalized to a localized phase. Our work introduces a many-body spectroscopy technique to study quantum phases of matter.

  5. Retrieving transient conformational molecular structure information from inner-shell photoionization of laser-aligned molecules

    DOE PAGES

    Wang, Xu; Le, Anh -Thu; Yu, Chao; ...

    2016-03-30

    We discuss a scheme to retrieve transient conformational molecular structure information using photoelectron angular distributions (PADs) that have averaged over partial alignments of isolated molecules. The photoelectron is pulled out from a localized inner-shell molecular orbital by an X-ray photon. We show that a transient change in the atomic positions from their equilibrium will lead to a sensitive change in the alignment-averaged PADs, which can be measured and used to retrieve the former. Exploiting the experimental convenience of changing the photon polarization direction, we show that it is advantageous to use PADs obtained from multiple photon polarization directions. Lastly, a simple single-scattering model is proposed and benchmarked to describe the photoionization process and to do the retrieval using a multiple-parameter fitting method.

  6. Absolute dose calibration of an X-ray system and dead time investigations of photon-counting techniques

    NASA Astrophysics Data System (ADS)

    Carpentieri, C.; Schwarz, C.; Ludwig, J.; Ashfaq, A.; Fiederle, M.

    2002-07-01

    High precision concerning the dose calibration of X-ray sources is required when counting and integrating methods are compared. The dose calibration for a dental X-ray tube was carried out with dedicated dose calibration equipment (a dosimeter) as a function of exposure time and rate. Results were compared with a benchmark spectrum and agree within ±1.5%. Dead time investigations with the Medipix1 photon-counting chip (PCC) have been performed by rate variations. Two different types of dead time, paralysable and non-paralysable, are discussed. The dead time depends on the settings of the front-end electronics and is a function of signal height, which might lead to systematic errors in such systems. Dead time losses in excess of 30% have been found for the PCC at 200 kHz absorbed photons per pixel.
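
    The two dead-time models referred to here have standard closed forms relating the true rate n to the measured rate m. A minimal sketch follows; the dead-time value is an assumed illustration, not a Medipix1 datum:

        import numpy as np

        def m_nonparalyzable(n, tau):
            return n / (1.0 + n*tau)     # events arriving during tau are simply lost

        def m_paralyzable(n, tau):
            return n * np.exp(-n*tau)    # events during tau also extend the dead period

        n, tau = 200e3, 2e-6             # true rate (1/s), assumed dead time (s)
        for label, f in [("non-paralyzable", m_nonparalyzable),
                         ("paralyzable", m_paralyzable)]:
            print(label, "loss:", f"{100*(1 - f(n, tau)/n):.1f}%")
        # with these assumed numbers the losses land near the ~30% scale reported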

  7. Particle flow oriented electromagnetic calorimeter optimization for the circular electron positron collider

    NASA Astrophysics Data System (ADS)

    Zhao, H.; Fu, C.; Yu, D.; Wang, Z.; Hu, T.; Ruan, M.

    2018-03-01

    The design and optimization of the Electromagnetic Calorimeter (ECAL) are crucial for the Circular Electron Positron Collider (CEPC) project, a proposed future Higgs/Z factory. Following the reference design of the International Large Detector (ILD), a set of silicon-tungsten sampling ECAL geometries are implemented in the Geant4 simulation, whose performance is then scanned using the Arbor algorithm. The photon energy response for different ECAL longitudinal structures is analyzed, and the separation performance between nearby photon showers for different ECAL transverse cell sizes is investigated and parametrized. The overall performance is characterized by a set of physics benchmarks, including ννH events where the Higgs boson decays into a pair of photons (EM objects) or gluons (jets), and Z→τ+τ- events. Based on these results, we propose an optimized ECAL geometry for the CEPC project.

  8. Photophysics of GaN single-photon emitters in the visible spectral range

    NASA Astrophysics Data System (ADS)

    Berhane, Amanuel M.; Jeong, Kwang-Yong; Bradac, Carlo; Walsh, Michael; Englund, Dirk; Toth, Milos; Aharonovich, Igor

    2018-04-01

    In this work, we present a detailed photophysical analysis of recently discovered, optically stable single-photon emitters (SPEs) in gallium nitride (GaN). Temperature-resolved photoluminescence measurements reveal that the emission lines at 4 K are three orders of magnitude broader than the transform-limited width expected from excited-state lifetime measurements. The broadening is ascribed to ultrafast spectral diffusion. The photophysical study of several emitters at room temperature (RT) reveals an average brightness of (427 ± 215) kcounts/s. Finally, polarization measurements from 14 emitters are used to determine the visibility as well as the dipole orientation of the defect systems within the GaN crystal. Our results underpin some of the fundamental properties of SPEs in GaN both at cryogenic temperatures and at RT, and define the benchmark for future work in GaN-based single-photon technologies.
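
    The comparison between the measured 4 K linewidth and the transform limit follows from the lifetime alone: a purely lifetime-limited transition of lifetime tau has a FWHM of 1/(2*pi*tau). A quick sketch with an assumed nanosecond-scale lifetime (illustrative, not a value from the paper):

        import math

        tau = 3e-9                           # assumed excited-state lifetime (s)
        dnu = 1.0/(2.0*math.pi*tau)          # transform-limited FWHM (Hz)
        print(f"transform limit ~ {dnu/1e6:.0f} MHz")
        print(f"1000x broader   ~ {1e3*dnu/1e9:.0f} GHz")  # spectral-diffusion scale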

  9. Benchmarking a Visual-Basic based multi-component one-dimensional reactive transport modeling tool

    NASA Astrophysics Data System (ADS)

    Torlapati, Jagadish; Prabhakar Clement, T.

    2013-01-01

    We present the details of a comprehensive numerical modeling tool, RT1D, which can be used for simulating biochemical and geochemical reactive transport problems. The code can be run within the standard Microsoft EXCEL Visual Basic platform, and it does not require any additional software tools. The code can be easily adapted by others for simulating different types of laboratory-scale reactive transport experiments. We illustrate the capabilities of the tool by solving five benchmark problems with varying levels of reaction complexity. These literature-derived benchmarks are used to highlight the versatility of the code for solving a variety of practical reactive transport problems. The benchmarks are described in detail to provide a comprehensive database, which can be used by model developers to test other numerical codes. The VBA code presented in the study is a practical tool that can be used by laboratory researchers for analyzing both batch and column datasets within an EXCEL platform.

  10. Introduction to the IWA task group on biofilm modeling.

    PubMed

    Noguera, D R; Morgenroth, E

    2004-01-01

    An International Water Association (IWA) Task Group on Biofilm Modeling was created with the purpose of comparatively evaluating different biofilm modeling approaches. The task group developed three benchmark problems for this comparison, and used a diversity of modeling techniques that included analytical, pseudo-analytical, and numerical solutions to the biofilm problems. Models in one, two, and three dimensional domains were also compared. The first benchmark problem (BM1) described a monospecies biofilm growing in a completely mixed reactor environment and had the purpose of comparing the ability of the models to predict substrate fluxes and concentrations for a biofilm system of fixed total biomass and fixed biomass density. The second problem (BM2) represented a situation in which substrate mass transport by convection was influenced by the hydrodynamic conditions of the liquid in contact with the biofilm. The third problem (BM3) was designed to compare the ability of the models to simulate multispecies and multisubstrate biofilms. These three benchmark problems allowed identification of the specific advantages and disadvantages of each modeling approach. A detailed presentation of the comparative analyses for each problem is provided elsewhere in these proceedings.
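
    For BM1-type problems, the substrate flux into the biofilm is the quantity every modeling approach must reproduce. With first-order kinetics (a simplification of BM1's Monod kinetics, with illustrative parameter values) the flux even has a closed form against which a numerical boundary-value solve can be checked:

        import numpy as np
        from scipy.integrate import solve_bvp

        # Assumed illustrative parameters (not the BM1 data set)
        D, k1, L, Sb = 2e-9, 0.5, 200e-6, 1.0   # m^2/s, 1/s, m, g/m^3

        def rhs(z, y):                 # y[0] = S, y[1] = dS/dz
            return np.vstack([y[1], (k1/D)*y[0]])

        def bc(ya, yb):                # no flux at substratum, bulk value at interface
            return np.array([ya[1], yb[0] - Sb])

        z = np.linspace(0.0, L, 50)
        guess = np.vstack([Sb*np.ones_like(z), np.zeros_like(z)])
        sol = solve_bvp(rhs, bc, z, guess)

        J_num = D*sol.sol(L)[1]                   # numerical flux at the interface
        phi = L*np.sqrt(k1/D)                     # Thiele modulus
        J_ana = Sb*np.sqrt(k1*D)*np.tanh(phi)     # first-order analytical flux
        print(J_num, J_ana)                       # the two should agree closely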

  11. Benchmark Problems for Space Mission Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Folta, David C.; Burns, Richard

    2003-01-01

    To provide a high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested for space mission formation flying. The problems cover formation flying in low-altitude, near-circular Earth orbit; high-altitude, highly elliptical Earth orbits; and large-amplitude Lissajous trajectories about collinear libration points of the Sun-Earth/Moon system. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions that are of interest to various agencies.

  12. Simplified Numerical Analysis of ECT Probe - Eddy Current Benchmark Problem 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sikora, R.; Chady, T.; Gratkowski, S.

    2005-04-09

    In this paper a third eddy current benchmark problem is considered. The objective of the benchmark is to determine the optimal operating frequency and size of the pancake coil designated for testing tubes made of Inconel. This is achieved by maximizing the change in the impedance of the coil due to a flaw. Approximation functions of the probe (coil) characteristic were developed and used in order to reduce the number of required calculations. This results in a significant speed-up of the optimization process. An optimal testing frequency and size of the probe were obtained as the final result of the calculation.

  13. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  14. Direct-Photon Spectra and Anisotropic Flow in Heavy Ion Collisions from Holography

    NASA Astrophysics Data System (ADS)

    Iatrakis, Ioannis; Kiritsis, Elias; Shen, Chun; Yang, Di-Lun

    2017-03-01

    The thermal-photon emission from strongly coupled gauge theories at finite temperature is calculated by using holographic models for QCD in the Veneziano limit (V-QCD). These emission rates are then embedded in hydrodynamic simulations combined with prompt photons from hard scattering and the thermal photons from hadron gas to analyze the spectra and anisotropic flow of direct photons at RHIC and LHC. The results from different sources responsible for the thermal photons in the quark gluon plasma (QGP) including the weakly coupled QGP (wQGP) from perturbative calculations, strongly coupled N = 4 super Yang-Mills (SYM) plasma (as a benchmark for reference), and Gubser's phenomenological model mimicking the strongly coupled QGP (sQGP) are then compared. It is found that the direct-photon spectra are enhanced in the strongly coupled scenario compared with the ones in the wQGP, especially at intermediate and high momenta, which improve the agreements with data. Moreover, by using IP-glassma initial states, both the elliptic flow and triangular flow of direct photons are amplified at high momenta (pT > 2.5 GeV) for V-QCD, while they are suppressed at low momenta compared to wQGP. The distinct results in holography stem from the blue-shift of emission rates in strong coupling. In addition, the spectra and flow in small collision systems were evaluated for future comparisons. It is found that thermal photons from the deconfined phase are substantial to reconcile the spectra and flow at high momenta.

  15. The PAC-MAN model: Benchmark case for linear acoustics in computational physics

    NASA Astrophysics Data System (ADS)

    Ziegelwanger, Harald; Reiter, Paul

    2017-10-01

    Benchmark cases in the field of computational physics, on the one hand, have to contain a certain complexity to test numerical edge cases and, on the other hand, require the existence of an analytical solution, because an analytical solution allows the exact quantification of the accuracy of a numerical simulation method. This dilemma causes a need for analytical sound field formulations of complex acoustic problems. A well known example for such a benchmark case for harmonic linear acoustics is the "Cat's Eye model", which describes the three-dimensional sound field radiated from a sphere with a missing octant analytically. In this paper, a benchmark case for two-dimensional (2D) harmonic linear acoustic problems, viz., the "PAC-MAN model", is proposed. The PAC-MAN model describes the radiated and scattered sound field around an infinitely long cylinder with a cut out sector of variable angular width. While the analytical calculation of the 2D sound field allows different angular cut-out widths and arbitrarily positioned line sources, the computational cost associated with the solution of this problem is similar to a 1D problem because of a modal formulation of the sound field in the PAC-MAN model.
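
    In the limiting case of a vanishing cut-out, the PAC-MAN geometry reduces to the classical scattering of a plane wave by an infinite rigid cylinder, whose modal sum is a handy sanity check for any implementation of such 2D sound fields. A sketch of that classical textbook limit (not the paper's cut-out solution):

        import numpy as np
        from scipy.special import hankel1, jvp, h1vp

        def p_scattered(k, a, r, theta, nmax=40):
            """Scattered pressure for a plane wave hitting a rigid cylinder
            of radius a (classical modal solution, nmax terms in the sum)."""
            theta = np.asarray(theta, dtype=float)
            p = np.zeros_like(theta, dtype=complex)
            for n in range(nmax + 1):
                eps = 1.0 if n == 0 else 2.0
                An = -eps*(1j**n)*jvp(n, k*a)/h1vp(n, k*a)   # rigid wall: dp/dr = 0
                p += An*hankel1(n, k*r)*np.cos(n*theta)
            return p

        print(abs(p_scattered(k=5.0, a=1.0, r=3.0, theta=np.pi/4)))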

  16. Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D. (Editor)

    2004-01-01

    This publication contains the proceedings of the Fourth Computational Aeroacoustics (CAA) Workshop on Benchmark Problems. In this workshop, as in previous workshops, the problems were devised to gauge the technological advancement of computational techniques to calculate all aspects of sound generation and propagation in air directly from the fundamental governing equations. A variety of benchmark problems have been previously solved ranging from simple geometries with idealized acoustic conditions to test the accuracy and effectiveness of computational algorithms and numerical boundary conditions; to sound radiation from a duct; to gust interaction with a cascade of airfoils; to the sound generated by a separating, turbulent viscous flow. By solving these and similar problems, workshop participants have shown the technical progress from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The fourth CAA workshop emphasized the application of CAA methods to the solution of realistic problems. The workshop was held at the Ohio Aerospace Institute in Cleveland, Ohio, on October 20 to 22, 2003. At that time, workshop participants presented their solutions to problems in one or more of five categories. Their solutions are presented in this proceedings along with the comparisons of their solutions to the benchmark solutions or experimental data. The five categories for the benchmark problems were as follows: Category 1: Basic Methods. The numerical computation of sound is affected by, among other issues, the choice of grid used and by the boundary conditions. Category 2: Complex Geometry. The ability to compute the sound in the presence of complex geometric surfaces is important in practical applications of CAA. Category 3: Sound Generation by Interacting With a Gust. The practical application of CAA for computing noise generated by turbomachinery involves the modeling of the noise source mechanism as a vortical gust interacting with an airfoil. Category 4: Sound Transmission and Radiation. Category 5: Sound Generation in Viscous Problems. Sound is generated under certain conditions by a viscous flow as the flow passes an object or a cavity.

  17. Implementing Cognitive Strategy Instruction across the School: The Benchmark Manual for Teachers.

    ERIC Educational Resources Information Center

    Gaskins, Irene; Elliot, Thorne

    Improving reading instruction has been the primary focus at the Benchmark School in Media, Pennsylvania. This book describes the various phases of Benchmark's development of a program to create strategic learners, thinkers, and problem solvers across the curriculum. The goal is to provide teachers and administrators with a handbook that can be…

  18. Adaptive unified continuum FEM modeling of a 3D FSI benchmark problem.

    PubMed

    Jansson, Johan; Degirmenci, Niyazi Cem; Hoffman, Johan

    2017-09-01

    In this paper, we address a 3D fluid-structure interaction benchmark problem that represents important characteristics of biomedical modeling. We present a goal-oriented adaptive finite element methodology for incompressible fluid-structure interaction based on a streamline diffusion-type stabilization of the balance equations for mass and momentum for the entire continuum in the domain, which is implemented in the Unicorn/FEniCS software framework. A phase marker function and its corresponding transport equation are introduced to select the constitutive law, where the mesh tracks the discontinuous fluid-structure interface. This results in a unified simulation method for fluids and structures. We present detailed results for the benchmark problem compared with experiments, together with a mesh convergence study.

  19. Unstructured Adaptive Meshes: Bad for Your Memory?

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob

    2003-01-01

    This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.

  20. Dynamic vehicle routing with time windows in theory and practice.

    PubMed

    Yang, Zhiwei; van Osta, Jan-Paul; van Veen, Barry; van Krevelen, Rick; van Klaveren, Richard; Stam, Andries; Kok, Joost; Bäck, Thomas; Emmerich, Michael

    2017-01-01

    The vehicle routing problem is a classical combinatorial optimization problem. This work is about a variant of the vehicle routing problem with dynamically changing orders and time windows. In real-world applications the demands often change during operation time: new orders occur and others are canceled. In this case new schedules need to be generated on-the-fly. Online optimization algorithms for dynamic vehicle routing address this problem, but so far they do not consider time windows. Moreover, to match the scenarios found in real-world problems, adaptations of benchmarks are required. In this paper, a practical problem is modeled based on the procedure of daily routing of a delivery company. New orders by customers are introduced dynamically during the working day and need to be integrated into the schedule. A multiple ant colony algorithm combined with powerful local search procedures is proposed to solve the dynamic vehicle routing problem with time windows. The performance is tested on a new benchmark based on simulations of a working day. The problems are taken from Solomon's benchmarks, but a certain percentage of the orders are only revealed to the algorithm during operation time. Different versions of the MACS algorithm are tested and a high-performing variant is identified. Finally, the algorithm is tested in situ: in a field study, the algorithm schedules a fleet of cars for a surveillance company. We compare the performance of the algorithm to that of the procedure used by the company and summarize insights gained from the implementation of the real-world study. The results show that the multiple ant colony algorithm achieves much better solutions on the academic benchmark problem and can also be integrated into a real-world environment.
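
    When a new order arrives mid-shift, the schedule must absorb it on-the-fly. A common baseline for this step, shown as an illustrative sketch (the paper's MACS algorithm is considerably more sophisticated), is cheapest feasible insertion under time-window checks; all data structures here are hypothetical illustrations:

        def cheapest_feasible_insertion(route, new, dist, service, tw):
            """Insert order `new` into `route` (a depot-to-depot node list) at the
            cheapest position that keeps every time window feasible.
            dist[i][j]: travel time; service[i]: service time; tw[i] = (open, close)."""
            def feasible(r):
                t = 0.0
                for i, j in zip(r, r[1:]):
                    t = max(t + service[i] + dist[i][j], tw[j][0])  # wait if early
                    if t > tw[j][1]:                                # late: infeasible
                        return False
                return True

            best = None
            for pos in range(1, len(route)):
                cand = route[:pos] + [new] + route[pos:]
                if feasible(cand):
                    extra = (dist[route[pos - 1]][new] + dist[new][route[pos]]
                             - dist[route[pos - 1]][route[pos]])
                    if best is None or extra < best[0]:
                        best = (extra, cand)
            return best[1] if best else None   # None: order needs another vehicle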

  1. Direct measurement of the biphoton Wigner function through two-photon interference

    PubMed Central

    Douce, T.; Eckstein, A.; Walborn, S. P.; Khoury, A. Z.; Ducci, S.; Keller, A.; Coudreau, T.; Milman, P.

    2013-01-01

    The Hong-Ou-Mandel (HOM) experiment was a benchmark in quantum optics, evidencing the non-classical nature of photon pairs, later generalized to quantum systems with either bosonic or fermionic statistics. We show that a simple modification of the well-known and widely used HOM experiment provides the direct measurement of the Wigner function. We apply our results to one of the most reliable quantum systems, consisting of biphotons generated by parametric down-conversion. A consequence of our results is that a negative value of the Wigner function is a sufficient condition for non-Gaussian entanglement between two photons. In the general case, the Wigner function provides all the required information to infer entanglement using well-known necessary and sufficient criteria. The present work offers a new vision of the HOM experiment that further develops its possibilities to realize fundamental tests of quantum mechanics using simple optical set-ups. PMID:24346262

  2. Spatial correlations in driven-dissipative photonic lattices

    NASA Astrophysics Data System (ADS)

    Biondi, Matteo; Lienhard, Saskia; Blatter, Gianni; Türeci, Hakan E.; Schmidt, Sebastian

    2017-12-01

    We study the nonequilibrium steady state of interacting photons in cavity arrays as described by the driven-dissipative Bose–Hubbard and spin-1/2 XY models. For this purpose, we develop a self-consistent expansion in the inverse coordination number of the array (∼ 1/z) to solve the Lindblad master equation of these systems beyond the mean-field approximation. Our formalism is compared and benchmarked with exact numerical methods for small systems based on an exact diagonalization of the Liouvillian and a recently developed corner-space renormalization technique. We then apply this method to obtain insights beyond mean-field in two particular settings: (i) we show that the gas–liquid transition in the driven-dissipative Bose–Hubbard model is characterized by large density fluctuations and bunched photon statistics; (ii) we study the antibunching–bunching transition of the nearest-neighbor correlator in the driven-dissipative spin-1/2 XY model and provide a simple explanation of this phenomenon.
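
    For small systems, the exact Liouvillian steady state used for benchmarking here is easy to reproduce with standard tools. A single driven-dissipative Kerr site in QuTiP serves as a mean-field-style stand-in (all parameters are assumed illustrations, and this is not the paper's 1/z expansion):

        import numpy as np
        from qutip import destroy, steadystate, expect

        N = 20                                    # Fock-space cutoff
        a = destroy(N)
        delta, U, F, kappa = 0.0, 1.0, 0.4, 0.2   # detuning, interaction, drive, loss

        H = -delta*a.dag()*a + 0.5*U*a.dag()*a.dag()*a*a + F*(a + a.dag())
        rho = steadystate(H, [np.sqrt(kappa)*a])  # Lindblad steady state

        n = expect(a.dag()*a, rho)
        g2 = expect(a.dag()*a.dag()*a*a, rho)/n**2  # g2 < 1: antibunched photons
        print(n, g2)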

  3. Hybrid methods for witnessing entanglement in a microscopic-macroscopic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spagnolo, Nicolo; Vitelli, Chiara

    2011-09-15

    We propose a hybrid approach to the experimental assessment of the genuine quantum features of a general system consisting of microscopic and macroscopic parts. We infer entanglement by combining dichotomic measurements on a bidimensional system and phase-space inference through the Wigner distribution associated with the macroscopic component of the state. As a benchmark, we investigate the feasibility of our proposal in a bipartite-entangled state composed of a single-photon and a multiphoton field. Our analysis shows that, under ideal conditions, maximal violation of a Clauser-Horne-Shimony-Holt-based inequality is achievable regardless of the number of photons in the macroscopic part of the state. The difficulty in observing entanglement when losses and detection inefficiency are included can be overcome by using a hybrid entanglement witness that allows efficient correction for losses in the few-photon regime.

  4. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the "real" measure for a decision-support benchmark is the arithmetic mean.
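
    The practical difference between the two means is easy to see on a toy set of per-query times, where the geometric mean damps a single slow outlier that the arithmetic mean reports in full:

        import numpy as np

        times = np.array([1.0, 1.0, 1.0, 100.0])   # hypothetical per-query seconds
        arith = times.mean()                        # 25.75 s: dominated by the outlier
        geom = np.exp(np.log(times).mean())         # ~3.16 s: outlier is damped
        print(arith, geom)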

  5. Constraining axion-like-particles with hard X-ray emission from magnetars

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-François; Sinha, Kuver

    2018-06-01

    Axion-like particles (ALPs) produced in the core of a magnetar will convert to photons in the magnetosphere, leading to possible signatures in the hard X-ray band. We perform a detailed calculation of the ALP-to-photon conversion probability in the magnetosphere, recasting the coupled differential equations that describe ALP-photon propagation into a form that is efficient for large scale numerical scans. We show the dependence of the conversion probability on the ALP energy, mass, ALP-photon coupling, magnetar radius, surface magnetic field, and the angle between the magnetic field and direction of propagation. Along the way, we develop an analytic formalism to perform similar calculations in more general n-state oscillation systems. Assuming ALP emission rates from the core that are just subdominant to neutrino emission, we calculate the resulting constraints on the ALP mass versus ALP-photon coupling space, taking SGR 1806-20 as an example. In particular, we take benchmark values for the magnetar radius and core temperature, and constrain the ALP parameter space by the requirement that the luminosity from ALP-to-photon conversion should not exceed the total observed luminosity from the magnetar. The resulting constraints are competitive with constraints from helioscope experiments in the relevant part of ALP parameter space.
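
    In the simplest limit of a homogeneous transverse field over a path of length L, dropping the plasma and vacuum-birefringence terms that the paper's full magnetosphere treatment includes, the two-state conversion probability reduces to a one-liner. A sketch in natural units, with all inputs assumed:

        import numpy as np

        def p_alp_to_photon(g, B, L, m_a, omega):
            """Homogeneous-field ALP->photon conversion probability
            (natural units, so g*B*L and q*L are dimensionless; simplified
            two-state mixing, not the paper's full n-state formalism)."""
            q = m_a**2/(2.0*omega)                        # momentum mismatch
            x = q*L/2.0
            return (g*B*L/2.0)**2 * np.sinc(x/np.pi)**2   # np.sinc(t) = sin(pi t)/(pi t)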

  6. Piping benchmark problems. Volume 1. Dynamic analysis uniform support motion response spectrum method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bezler, P.; Hartzman, M.; Reich, M.

    1980-08-01

    A set of benchmark problems and solutions has been developed for verifying the adequacy of computer programs used for dynamic analysis and design of nuclear piping systems by the Response Spectrum Method. The problems range from simple to complex configurations which are assumed to experience linear elastic behavior. The dynamic loading is represented by uniform support motion, assumed to be induced by seismic excitation in three spatial directions. The solutions consist of frequencies, participation factors, nodal displacement components and internal force and moment components. Solutions to associated anchor point motion static problems are not included.

  7. Localization and Poincaré catastrophe in the problem of a photon scattering on a pair of Rayleigh particles

    NASA Astrophysics Data System (ADS)

    Maksimenko, V. V.; Zagaynov, V. A.; Agranovski, I. E.

    2013-11-01

    It is shown that complexities in the problem of elastic scattering of a photon on a pair of Rayleigh particles (two small metallic spheres) are similar to the complexities of the classic problem of three bodies in celestial mechanics. In the latter problem, as is well known, the phase trajectory of a system becomes a nonanalytical function of its variables. In our problem, the trajectory of a virtual photon at some frequency can be thought of as the well-known Antoine set (Antoine's necklace), a chain with interlaced sections having zero topological dimension and fractal structure. Such a virtual "zero-dimensional" photon can be localized between the particles of the pair. The topology suppresses the photon's exit to the real world, whose dimension is equal to or greater than unity. The physical reason for this type of photon localization is related to the "mechanical rigidity" of interlaced sections of the photon trajectory due to a singularity of energy density along these sections. Within the approximations used in this paper, the effect is possible if the frequency of the incident radiation is equal to double the frequency of the dipole surface plasmon in an isolated particle, which is the only characteristic frequency in the problem. This condition and the transformation of the photon trajectory to the zero-dimensional Antoine set are reminiscent of some of the simplest variants of Poincaré's catastrophe in the dynamics of nonintegrable systems. The influence of the localization on elastic light scattering by the pair is investigated.

  8. Higher Education Ranking and Leagues Tables: Lessons Learned from Benchmarking

    ERIC Educational Resources Information Center

    Proulx, Roland

    2007-01-01

    The paper intends to contribute to the debate on ranking and league tables by adopting a critical approach to ranking methodologies from the point of view of a university benchmarking exercise. The absence of a strict benchmarking exercise in the ranking process has been, in the opinion of the author, one of the major problems encountered in the…

  9. Testing of the ABBN-RF multigroup data library in photon transport calculations

    NASA Astrophysics Data System (ADS)

    Koscheev, Vladimir; Lomakov, Gleb; Manturov, Gennady; Tsiboulia, Anatoly

    2017-09-01

    Gamma radiation is produced in both nuclear fuel and shielding materials. Photon interaction data are known with adequate accuracy, but secondary gamma-ray production data are known much less well. The purpose of this work is to study secondary gamma-ray production data from neutron-induced reactions in iron and lead by using the MCNP code and modern nuclear data libraries such as ROSFOND, ENDF/B-7.1, JEFF-3.2 and JENDL-4.0. Results of calculations show that all of these nuclear data libraries have different photon production data for neutron-induced reactions and have poor agreement with the evaluated benchmark experiment. The ABBN-RF multigroup cross-section library is based on the ROSFOND data. It is presented in two forms of micro cross sections: ABBN and MATXS formats. Comparison of group-wise calculations using both ABBN and MATXS data to point-wise calculations with the ROSFOND library shows a good agreement. The calculated-to-experimental (C/E) discrepancies for the neutron spectra are within the limits of the experimental errors; for the photon spectrum they fall outside the experimental errors. Results of calculations using group-wise and point-wise representations of cross sections show a good agreement for both photon and neutron spectra.

  10. Benchmarking of Touschek Beam Lifetime Calculations for the Advanced Photon Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, A.; Yang, B.

    2017-06-25

    Particle loss from Touschek scattering is one of the most significant issues faced by present and future synchrotron light source storage rings. For example, the predicted, Touschek-dominated beam lifetime for the Advanced Photon Source (APS) Upgrade lattice in 48-bunch, 200-mA timing mode is only ~2 h. In order to understand the reliability of the predicted lifetime, a series of measurements with various beam parameters was performed on the present APS storage ring. This paper first describes the entire process of beam lifetime measurement, then compares the measured lifetime with the one calculated by applying the measured beam parameters. The results show very good agreement.

  11. Neutron and photon shielding benchmark calculations by MCNP on the LR-0 experimental facility.

    PubMed

    Hordósy, G

    2005-01-01

    In the framework of the REDOS project, the space-energy distribution of the neutron and photon flux has been calculated over the pressure vessel simulator thickness of the LR-0 experimental reactor, Rez, Czech Republic. The results calculated by the Monte Carlo code MCNP4C are compared with the measurements performed at the Nuclear Research Institute, Rez. The spectra have been measured at the barrel, and in front of, inside and behind the pressure vessel in different configurations. The neutron measurements were performed in the energy range 0.1-10 MeV. This work was done within the 5th Framework Programme of the European Community (1998-2002).

  12. Verification of cardiac mechanics software: benchmark problems and solutions for testing active and passive material behaviour.

    PubMed

    Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A

    2015-12-08

    Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences of approximately 0.5% in higher-resolution solutions, and consistent results between linear, quadratic and cubic finite elements as well as different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be effectively used in verification of future cardiac mechanics software.

  13. Sustainable aggregate production planning in the chemical process industry - A benchmark problem and dataset.

    PubMed

    Brandenburg, Marcus; Hahn, Gerd J

    2018-06-01

    Process industries typically involve complex manufacturing operations and thus require adequate decision support for aggregate production planning (APP). The need for powerful and efficient approaches to solve complex APP problems persists. Problem-specific solution approaches are advantageous compared to standardized approaches that are designed to provide basic decision support for a broad range of planning problems but are inadequate to optimize under consideration of specific settings. This in turn calls for methods to compare different approaches regarding their computational performance and solution quality. In this paper, we present a benchmarking problem for APP in the chemical process industry. The presented problem focuses on (i) sustainable operations planning involving multiple alternative production modes/routings with specific production-related carbon emissions and the social dimension of varying operating rates, and (ii) integrated campaign planning with production mix/volume on the operational level. The mutual trade-offs between economic, environmental and social factors can be considered as externalized factors (production-related carbon emissions and overtime working hours) as well as internalized ones (resulting costs). We provide data for all problem parameters in addition to a detailed verbal problem statement. We refer to Hahn and Brandenburg [1] for a first numerical analysis based on this benchmarking problem and for the future research perspectives arising from it.

  14. High-Accuracy Finite Element Method: Benchmark Calculations

    NASA Astrophysics Data System (ADS)

    Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel

    2018-02-01

    We describe a new high-accuracy finite element scheme with simplex elements for solving the elliptic boundary-value problems and show its efficiency on benchmark solutions of the Helmholtz equation for the triangle membrane and hypercube.
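
    For the hypercube case, the exact Dirichlet eigenvalues are available in closed form, which is what makes it a convenient accuracy benchmark. A short generator of reference values (the straightforward textbook formula, not code from the paper):

        import itertools
        import math

        def dirichlet_eigenvalues(dim, nmax):
            """Exact eigenvalues of -Laplace on the unit hypercube with zero
            boundary values: lambda = pi^2*(n1^2 + ... + nd^2), n_i >= 1."""
            return sorted(math.pi**2*sum(n*n for n in idx)
                          for idx in itertools.product(range(1, nmax + 1), repeat=dim))

        print(dirichlet_eigenvalues(3, 3)[:5])   # reference targets for a 3D run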

  15. Benchmark test of transport calculations of gold and nickel activation with implications for neutron kerma at Hiroshima.

    PubMed

    Hoshi, M; Hiraoka, M; Hayakawa, N; Sawada, S; Munaka, M; Kuramoto, A; Oka, T; Iwatani, K; Shizuma, K; Hasai, H

    1992-11-01

    A benchmark test of the Monte Carlo neutron and photon transport code system (MCNP) was performed using a 252Cf fission neutron source to validate the use of the code for the energy spectrum analyses of Hiroshima atomic bomb neutrons. Nuclear data libraries used in the Monte Carlo neutron and photon transport code calculation were ENDF/B-III, ENDF/B-IV, LASL-SUB, and ENDL-73. The neutron moderators used were granite (the main component of which is SiO2, with a small fraction of hydrogen), Newlight [polyethylene with 3.7% boron (natural)], ammonium chloride (NH4Cl), and water (H2O). Each moderator was 65 cm thick. The neutron detectors were gold and nickel foils, which were used to detect thermal and epithermal neutrons (4.9 eV) and fast neutrons (> 0.5 MeV), respectively. Measured activity data from neutron-irradiated gold and nickel foils in these moderators decreased to about 1/1,000th or 1/10,000th, which correspond to about 1,500 m ground distance from the hypocenter in Hiroshima. For both gold and nickel detectors, the measured activities and the calculated values agreed within 10%. The slopes of the depth-yield relations in each moderator, except granite, were similar for neutrons detected by the gold and nickel foils. From the results of these studies, the Monte Carlo neutron and photon transport code was verified to be accurate enough for use with the elements hydrogen, carbon, nitrogen, oxygen, silicon, chlorine, and cadmium, and for the incident 252Cf fission spectrum neutrons.

  16. Performance of Multi-chaotic PSO on a shifted benchmark functions set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan

    2015-03-10

    In this paper the performance of the Multi-chaotic PSO algorithm is investigated using two shifted benchmark functions. The purpose of shifted benchmark functions is to simulate time-variant real-world problems. The results of the chaotic PSO are compared with the canonical version of the algorithm. It is concluded that using the multi-chaotic approach can lead to better results in the optimization of shifted functions.
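
    The chaotic ingredient is simply the random-number source: the r1, r2 draws in the PSO velocity update are taken from an iterated chaotic map rather than a uniform RNG. A minimal single-map sketch (the paper's multi-chaotic variant switches among several maps; the constants here are conventional PSO choices, not the paper's settings):

        import numpy as np

        def logistic_stream(x0, n):
            """Chaotic sequence in (0, 1) from the logistic map x <- 4x(1-x)."""
            out, x = np.empty(n), x0
            for i in range(n):
                x = 4.0*x*(1.0 - x)
                out[i] = x
            return out

        def pso_step(x, v, pbest, gbest, chaos, w=0.7298, c1=1.49618, c2=1.49618):
            """One velocity/position update with chaos-driven r1, r2.
            chaos: array of at least 2*x.size chaotic draws."""
            d = x.size
            r1, r2 = chaos[:d], chaos[d:2*d]
            v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
            return x + v, v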

  17. Computational techniques in gamma-ray skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    George, D.L.

    1988-12-01

    Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model is presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data and a more accurate buildup approximation. The resulting code, SILOGP, computes the response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute the response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead.
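
    The buildup-corrected attenuation step at the heart of such single-scatter and point-kernel treatments is compact. A sketch with a Berger-form buildup factor (the a, b fit constants and source data are illustrative placeholders, not coefficients from SKY, SILOGP, or WALLGP):

        import math

        def point_kernel_response(S, mu, d, a=1.0, b=0.05):
            """Photon flux at distance d (cm) from a point source of strength S,
            attenuated over mu*d mean free paths and corrected with a Berger
            buildup factor B = 1 + a*(mu*d)*exp(b*mu*d)."""
            mfp = mu*d
            B = 1.0 + a*mfp*math.exp(b*mfp)
            return S*B*math.exp(-mfp)/(4.0*math.pi*d*d)

        print(point_kernel_response(S=1e10, mu=0.06, d=300.0))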

  18. Benchmarking image fusion system design parameters

    NASA Astrophysics Data System (ADS)

    Howell, Christopher L.

    2013-06-01

    A clear and absolute method for discriminating between image fusion algorithm performances is presented. This method can effectively be used to assist in the design and modeling of image fusion systems. Specifically, it is postulated that quantifying human task performance using image fusion should be benchmarked to whether the fusion algorithm, at a minimum, retained the performance benefit achievable by each independent spectral band being fused. The established benchmark would then clearly represent the threshold that a fusion system should surpass to be considered beneficial to a particular task. A genetic algorithm is employed to characterize the fused system parameters using a Matlab® implementation of NVThermIP as the objective function. By setting the problem up as a mixed-integer constraint optimization problem, one can effectively look backwards through the image acquisition process: optimizing fused system parameters by minimizing the difference between modeled task difficulty measure and the benchmark task difficulty measure. The results of an identification perception experiment are presented, where human observers were asked to identify a standard set of military targets, and used to demonstrate the effectiveness of the benchmarking process.

  19. Bin packing problem solution through a deterministic weighted finite automaton

    NASA Astrophysics Data System (ADS)

    Zavala-Díaz, J. C.; Pérez-Ortega, J.; Martínez-Rebollar, A.; Almanza-Ortega, N. N.; Hidalgo-Reyes, M.

    2016-06-01

    In this article, the solution of the one-dimensional bin packing problem with a deterministic weighted finite automaton is presented. The construction of the automaton and its application to three different instances, one with synthetic data and two benchmarks, are presented: N1C1W1_A.BPP, belonging to data set Set_1, and BPP13.BPP, belonging to hard28. The optimal solution of the synthetic instance is obtained. In the first benchmark the solution obtained uses one container more than the ideal number of containers, and in the second benchmark the solution uses two containers more than the ideal solution (approximately 2.5%). The runtime in all three cases was less than one second.
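
    As a point of reference for such results, the classic first-fit-decreasing heuristic (not the automaton construction of the paper) solves the same problem approximately in a few lines and is the kind of baseline the container counts above are judged against:

        def first_fit_decreasing(items, capacity):
            """Place each item, largest first, into the first bin with room."""
            bins = []
            for size in sorted(items, reverse=True):
                for b in bins:
                    if sum(b) + size <= capacity:
                        b.append(size)
                        break
                else:
                    bins.append([size])
            return bins

        print(len(first_fit_decreasing([4, 8, 1, 4, 2, 1], 10)))  # -> 2 bins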

  20. Towards unbiased benchmarking of evolutionary and hybrid algorithms for real-valued optimisation

    NASA Astrophysics Data System (ADS)

    MacNish, Cara

    2007-12-01

    Randomised population-based algorithms, such as evolutionary, genetic and swarm-based algorithms, and their hybrids with traditional search techniques, have proven successful and robust on many difficult real-valued optimisation problems. This success, along with the readily applicable nature of these techniques, has led to an explosion in the number of algorithms and variants proposed. In order for the field to advance it is necessary to carry out effective comparative evaluations of these algorithms, and thereby better identify and understand those properties that lead to better performance. This paper discusses the difficulties of providing benchmarking of evolutionary and allied algorithms that is both meaningful and logistically viable. To be meaningful the benchmarking test must give a fair comparison that is free, as far as possible, from biases that favour one style of algorithm over another. To be logistically viable it must overcome the need for pairwise comparison between all the proposed algorithms. To address the first problem, we begin by attempting to identify the biases that are inherent in commonly used benchmarking functions. We then describe a suite of test problems, generated recursively as self-similar or fractal landscapes, designed to overcome these biases. For the second, we describe a server that uses web services to allow researchers to 'plug in' their algorithms, running on their local machines, to a central benchmarking repository.

  1. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus the development efforts on the most critical areas. The first two exercises also allow for the removal of user-related modeling errors and prepare the core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark.

  2. SeSBench - An initiative to benchmark reactive transport models for environmental subsurface processes

    NASA Astrophysics Data System (ADS)

    Jacques, Diederik

    2017-04-01

    As soil functions are governed by a multitude of interacting hydrological, geochemical and biological processes, simulation tools coupling mathematical models for interacting processes are needed. Coupled reactive transport models are a typical example of such coupled tools, mainly focusing on hydrological and geochemical coupling (see e.g. Steefel et al., 2015). The mathematical and numerical complexity of both the tool itself and of the specific conceptual model can increase rapidly. Therefore, numerical verification of this type of model is a prerequisite for guaranteeing reliability and confidence and for qualifying simulation tools and approaches for any further model application. In 2011, a first SeSBench (Subsurface Environmental Simulation Benchmarking) workshop was held in Berkeley (USA), followed by four others. The objective is to benchmark subsurface environmental simulation models and methods, with a current focus on reactive transport processes. The final outcome was a special issue in Computational Geosciences (2015, issue 3 - Reactive transport benchmarks for subsurface environmental simulation) with a collection of 11 benchmarks. Benchmarks, proposed by the participants of the workshops, should be relevant for environmental or geo-engineering applications; the latter were mostly related to radioactive waste disposal issues - excluding benchmarks defined for pure mathematical reasons. Another important feature is the tiered approach within a benchmark, with the definition of a single principal problem and different subproblems. The latter typically benchmark individual or simplified processes (e.g. inert solute transport, simplified geochemical conceptual model) or geometries (e.g. batch or one-dimensional, homogeneous). Finally, three codes should be involved in a benchmark. The SeSBench initiative contributes to confidence building for applying reactive transport codes. Furthermore, it illustrates the use of this type of model for different environmental and geo-engineering applications. SeSBench will organize new workshops to add new benchmarks in a new special issue. Steefel, C. I., et al. (2015). "Reactive transport codes for subsurface environmental simulation." Computational Geosciences 19: 445-478.

  3. Revisiting Yasinsky and Henry's benchmark using modern nodal codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.; Becker, M.W.

    1995-12-31

    The numerical experiments analyzed by Yasinsky and Henry are quite trivial by comparison with today's standards because they used the finite difference code WIGLE for their benchmark. Also, this problem is a simple slab (one-dimensional) case with no feedback mechanisms. This research attempts to obtain STAR (Ref. 2) and NEM (Ref. 3) code results in order to produce a more modern kinetics benchmark with results comparable to WIGLE's.

  4. Benchmarks for target tracking

    NASA Astrophysics Data System (ADS)

    Dunham, Darin T.; West, Philip D.

    2011-09-01

    The term benchmark originates from the chiseled horizontal marks that surveyors made, into which an angle-iron could be placed to bracket ("bench") a leveling rod, thus ensuring that the leveling rod can be repositioned in exactly the same place in the future. A benchmark in computer terms is the result of running a computer program, or a set of programs, in order to assess the relative performance of an object by running a number of standard tests and trials against it. This paper will discuss the history of simulation benchmarks that are being used by multiple branches of the military and agencies of the US government. These benchmarks range from missile defense applications to chemical biological situations. Typically, a benchmark is used with Monte Carlo runs in order to tease out how algorithms deal with variability and the range of possible inputs. We will also describe problems that can be solved by a benchmark.

  5. Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven; Knittel, Jeremy; Shoan, Wendy; Kim, Youngkwang; Conway, Claire; Conway, Darrel J.

    2017-01-01

    This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.

  6. Benchmarking the Collocation Stand-Alone Library and Toolkit (CSALT)

    NASA Technical Reports Server (NTRS)

    Hughes, Steven; Knittel, Jeremy; Shoan, Wendy (Compiler); Kim, Youngkwang; Conway, Claire (Compiler); Conway, Darrel

    2017-01-01

    This paper describes the processes and results of Verification and Validation (V&V) efforts for the Collocation Stand Alone Library and Toolkit (CSALT). We describe the test program and environments, the tools used for independent test data, and comparison results. The V&V effort employs classical problems with known analytic solutions, solutions from other available software tools, and comparisons to benchmarking data available in the public literature. Presenting all test results is beyond the scope of a single paper. Here we present high-level test results for a broad range of problems, and detailed comparisons for selected problems.

  7. PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality

    NASA Astrophysics Data System (ADS)

    Hammond, G. E.; Frederick, J. M.

    2016-12-01

    In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification establishes whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process, with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations of Oberkampf and Trucano (2007), which describe four essential elements of high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
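
    A minimal sketch of the verification pattern described above, assuming an invented 1-D diffusion test; the domain size, coefficient, and tolerance are illustrative values, not taken from the PFLOTRAN QA suite:

    ```python
    # Compare an explicit finite-difference solution of 1-D diffusion
    # against the closed-form erfc solution and assert a tolerance,
    # mimicking the structure of a code-verification QA test.
    import numpy as np
    from scipy.special import erfc

    D = 1.0e-9                      # diffusion coefficient (m^2/s), illustrative
    L, nx = 1.0e-2, 101             # 1 cm domain, 101 nodes
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D            # satisfies the explicit stability limit
    t_end = 2.0e3                   # seconds

    x = np.linspace(0.0, L, nx)
    C = np.zeros(nx)
    C[0] = 1.0                      # fixed-concentration inlet, zero initial condition
    t = 0.0
    while t < t_end:
        C[1:-1] += D * dt / dx**2 * (C[2:] - 2.0 * C[1:-1] + C[:-2])
        t += dt

    C_exact = erfc(x / (2.0 * np.sqrt(D * t)))   # semi-infinite-domain solution
    rms = np.sqrt(np.mean((C - C_exact) ** 2))
    print(f"RMS error vs analytical solution: {rms:.2e}")
    assert rms < 1.0e-2, "verification test failed"
    ```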

  8. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem, and predominantly traditional optimization theory has been applied, with the cross-sectional area of each member optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has previously been applied to compliant mechanism design. The technique combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation (genetic algorithms and differential evolution) to optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, a multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.

  9. Benchmarking FEniCS for mantle convection simulations

    NASA Astrophysics Data System (ADS)

    Vynnytska, L.; Rognes, M. E.; Clark, S. R.

    2013-01-01

    This paper evaluates the usability of the FEniCS Project for mantle convection simulations by numerical comparison to three established benchmarks. The benchmark problems all concern convection processes in an incompressible fluid induced by temperature or composition variations, and cover three cases: (i) steady-state convection with depth- and temperature-dependent viscosity, (ii) time-dependent convection with constant viscosity and internal heating, and (iii) a Rayleigh-Taylor instability. These problems are modeled by the Stokes equations for the fluid and advection-diffusion equations for the temperature and composition. The FEniCS Project provides a novel platform for the automated solution of differential equations by finite element methods. In particular, it offers a significant flexibility with regard to modeling and numerical discretization choices; we have here used a discontinuous Galerkin method for the numerical solution of the advection-diffusion equations. Our numerical results are in agreement with the benchmarks, and demonstrate the applicability of both the discontinuous Galerkin method and FEniCS for such applications.

  10. The effect of real and virtual photons in the di-lepton channel at the LHC

    NASA Astrophysics Data System (ADS)

    Accomando, Elena; Fiaschi, Juri; Hautmann, Francesco; Moretti, Stefano; Shepherd-Themistocleous, Claire H.

    2017-07-01

    We present a study of di-lepton production at the CERN Large Hadron Collider with a particular focus on the contribution resulting from both real and virtual photons in the initial state. We discuss the region of phase space in which the invariant mass of the lepton pair is of the order of several TeV, where searches for new physics phenomena yielding a di-lepton signature are presently carried out. We study both the yield and associated uncertainties for all possible topologies in photon-induced di-lepton production and compare these with what is expected in the standard Drell-Yan channel, where quark-antiquark pairs are responsible for the production of lepton pairs. We analyse the impact of these QED contributions on the expected Standard Model background and on searches for new physics. In this latter case, we use the production of an extra heavy Z‧-boson predicted by the Sequential Standard Model (SSM) as a benchmark process.

  11. NAS Grid Benchmarks: A Tool for Grid Space Exploration

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.

  12. NAS Parallel Benchmarks. 2.4

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.

  13. Benchmark solutions for the galactic ion transport equations: Energy and spatially dependent problems

    NASA Technical Reports Server (NTRS)

    Ganapol, Barry D.; Townsend, Lawrence W.; Wilson, John W.

    1989-01-01

    Nontrivial benchmark solutions are developed for the galactic ion transport (GIT) equations in the straight-ahead approximation. These equations are used to predict potential radiation hazards in the upper atmosphere and in space. Two levels of difficulty are considered: (1) energy independent, and (2) spatially independent. The analysis emphasizes analytical methods never before applied to the GIT equations. Most of the representations derived have been numerically implemented and compared to more approximate calculations. Accurate ion fluxes are obtained (3 to 5 digits) for nontrivial sources. For monoenergetic beams, both accurate doses and fluxes are found. The benchmarks presented are useful in assessing the accuracy of transport algorithms designed to accommodate more complex radiation protection problems. In addition, these solutions can provide fast and accurate assessments of relatively simple shield configurations.
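
    For orientation, the straight-ahead, continuous-slowing-down form of the GIT equations is commonly written as below; the notation here is assumed for illustration rather than quoted from the paper:

    ```latex
    % phi_j(x,E): flux of ion species j at depth x and energy E
    % \tilde S_j: stopping power; \sigma_j: total interaction cross section
    % \sigma_{jk}: cross section for producing species j from species k
    \left[ \frac{\partial}{\partial x}
         - \frac{\partial}{\partial E}\, \tilde S_j(E)
         + \sigma_j(E) \right] \phi_j(x,E)
      = \sum_{k > j} \sigma_{jk}(E)\, \phi_k(x,E)
    ```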

  14. A novel discrete PSO algorithm for solving job shop scheduling problem to minimize makespan

    NASA Astrophysics Data System (ADS)

    Rameshkumar, K.; Rajendran, C.

    2018-02-01

    In this work, a discrete version of the PSO algorithm is proposed to minimize the makespan of a job shop. A novel schedule builder is utilized to generate active schedules. The discrete PSO is tested using well-known benchmark problems available in the literature. The solutions produced by the proposed algorithm are compared with the best-known solutions published in the literature, as well as with a hybrid particle swarm algorithm and a variable neighborhood search PSO algorithm. The solution construction methodology adopted in this study is found to be effective in producing good-quality solutions for the various benchmark job-shop scheduling problems. A sketch of the schedule evaluation step common to such algorithms is given below.
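
    Any metaheuristic for the job shop, discrete PSO included, iterates over a decoder that turns a candidate solution into a schedule and returns its makespan. The sketch below is a generic semi-active-schedule decoder for an operation-based permutation encoding, written for illustration; it is not the authors' schedule builder, and the three-job instance is invented:

    ```python
    # Semi-active schedule decoder for the job-shop problem: the i-th
    # occurrence of job j in the sequence denotes job j's i-th operation.
    from collections import defaultdict

    # each job is an ordered list of (machine, processing_time) operations
    jobs = [
        [(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3), (0, 1)],
    ]

    def makespan(op_sequence, jobs):
        next_op = defaultdict(int)       # next operation index per job
        job_ready = defaultdict(float)   # finish time of each job's last op
        mach_ready = defaultdict(float)  # finish time of each machine's last op
        for j in op_sequence:
            machine, ptime = jobs[j][next_op[j]]
            start = max(job_ready[j], mach_ready[machine])
            job_ready[j] = mach_ready[machine] = start + ptime
            next_op[j] += 1
        return max(job_ready.values())

    # a particle's position, decoded to an operation sequence, is scored as:
    print(makespan([0, 1, 2, 0, 1, 2, 0, 1, 2], jobs))  # -> 11.0
    ```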

  15. Finite Element Modeling of the World Federation's Second MFL Benchmark Problem

    NASA Astrophysics Data System (ADS)

    Zeng, Zhiwei; Tian, Yong; Udpa, Satish; Udpa, Lalita

    2004-02-01

    This paper presents results obtained by simulating the second magnetic flux leakage benchmark problem proposed by the World Federation of NDE Centers. The geometry consists of notches machined on the internal and external surfaces of a rotating steel pipe that is placed between two yokes that are part of a magnetic circuit energized by an electromagnet. The model calculates the radial component of the leaked field at specific positions. The nonlinear material property of the ferromagnetic pipe is taken into account in simulating the problem. The velocity effect caused by the rotation of the pipe is, however, ignored for reasons of simplicity.

  16. Clear, Complete, and Justified Problem Formulations for Aquatic Life Benchmark Values: Specifying the Dimensions

    EPA Science Inventory

    Nations that develop water quality benchmark values have relied primarily on standard data and methods. However, experience with chemicals such as Se, ammonia, and tributyltin has shown that standard methods do not adequately address some taxa, modes of exposure and effects. Deve...

  17. CLEAR, COMPLETE, AND JUSTIFIED PROBLEM FORMULATIONS FOR AQUATIC LIFE BENCHMARK VALUES: SPECIFYING THE DIMENSIONS

    EPA Science Inventory

    Nations that develop water quality benchmark values have relied primarily on standard data and methods. However, experience with chemicals such as Se, ammonia, and tributyltin has shown that standard methods do not adequately address some taxa, modes of exposure and effects. Deve...

  18. Benchmark Problems of the Geothermal Technologies Office Code Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.

    A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas others were designed for different applications but can simulate processes similar to those in EGS; solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research: stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.

  19. BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.

    PubMed

    Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R

    2015-02-20

    Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/.

  20. Benchmark Problems Used to Assess Computational Aeroacoustics Codes

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Envia, Edmane

    2005-01-01

    The field of computational aeroacoustics (CAA) encompasses numerical techniques for calculating all aspects of sound generation and propagation in air directly from fundamental governing equations. Aeroacoustic problems typically involve flow-generated noise, with and without the presence of a solid surface, and the propagation of the sound to a receiver far away from the noise source. It is a challenge to obtain accurate numerical solutions to these problems. The NASA Glenn Research Center has been at the forefront in developing and promoting the development of CAA techniques and methodologies for computing the noise generated by aircraft propulsion systems. To assess the technological advancement of CAA, Glenn, in cooperation with the Ohio Aerospace Institute and the AeroAcoustics Research Consortium, organized and hosted the Fourth CAA Workshop on Benchmark Problems. Participants from industry and academia from both the United States and abroad joined to present and discuss solutions to benchmark problems. These demonstrated technical progress ranging from the basic challenges to accurate CAA calculations to the solution of CAA problems of increasing complexity and difficulty. The results are documented in the proceedings of the workshop. Problems were solved in five categories. In three of the five categories, exact solutions were available for comparison with CAA results. A fourth category of problems representing sound generation from either a single airfoil or a blade row interacting with a gust (i.e., problems relevant to fan noise) had approximate analytical or completely numerical solutions. The fifth category of problems involved sound generation in a viscous flow. In this case, the CAA results were compared with experimental data.

  1. Production and dosimetry of simultaneous therapeutic photons and electrons beam by linear accelerator: A Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Khledi, Navid; Arbabi, Azim; Sardari, Dariush; Mohammadi, Mohammad; Ameri, Ahmad

    2015-02-01

    Depending on the location and depth of a tumor, electron or photon beams may be used for treatment. Electron beams have some advantages over photon beams for the treatment of shallow tumors, sparing the normal tissues beyond the tumor. Photon beams, on the other hand, are used for the treatment of deep targets. Both of these beams have some limitations, for example the dependence of the penumbra on depth, and the lack of lateral equilibrium for small electron beam fields. First, we simulated the conventional head configuration of the Varian 2300 for 16 MeV electrons, and the results were validated by benchmarking the Percent Depth Dose (PDD) and profile of the simulation against measurement. In the next step, a perforated lead (Pb) sheet of 1 mm thickness was placed at the top of the applicator holder tray. This layer produces bremsstrahlung x-rays, while a portion of the electrons passes through the holes; as a result, we have a simultaneous mixed electron and photon beam. To make the irradiation field uniform, a layer of steel was placed after the Pb layer. The simulation was performed for 10×10 and 4×4 cm2 field sizes. This study showed the advantages of mixing the electron and photon beams: a reduction of the pure electron beam's penumbra dependence on depth, especially for small fields, and a decrease in the dramatic changes of the PDD curve with irradiation field size.

  2. Clomp

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gylenhaal, J.; Bronevetsky, G.

    2007-05-25

    CLOMP is the C version of the Livermore OpenMP benchmark, developed to measure OpenMP overheads and other performance impacts due to threading (such as NUMA memory layouts, memory contention, and cache effects) in order to influence future system design. Current best-in-class implementations of OpenMP have overheads at least ten times larger than is required by many of our applications for effective use of OpenMP. This benchmark shows the significant negative performance impact of these relatively large overheads and of other thread effects. The CLOMP benchmark is highly configurable, allowing a variety of problem sizes and threading effects to be studied, and it carefully checks its results to catch many common threading errors. This benchmark is expected to be included as part of the Sequoia Benchmark suite for the Sequoia procurement.

  3. Building Bridges Between Geoscience and Data Science through Benchmark Data Sets

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Ebert-Uphoff, I.; Demir, I.; Gel, Y.; Hill, M. C.; Karpatne, A.; Güereque, M.; Kumar, V.; Cabral, E.; Smyth, P.

    2017-12-01

    The changing nature of observational field data demands richer and more meaningful collaboration between data scientists and geoscientists. Thus, among other efforts, the Working Group on Case Studies of the NSF-funded RCN on Intelligent Systems Research To Support Geosciences (IS-GEO) is developing a framework to strengthen such collaborations through the creation of benchmark datasets. Benchmark datasets provide an interface between disciplines without requiring extensive background knowledge. The goals are to create (1) a means for two-way communication between geoscience and data science researchers; (2) new collaborations, which may lead to new approaches for data analysis in the geosciences; and (3) a public, permanent repository of complex data sets, representative of geoscience problems, useful to coordinate efforts in research and education. The group identified 10 key elements and characteristics of ideal benchmarks: (1) high impact - a problem with high potential impact; (2) active research area - a group of geoscientists should be eager to continue working on the topic; (3) challenge - the problem should be challenging for data scientists; (4) data science generality and versatility - it should stimulate the development of new general and versatile data science methods; (5) rich information content - ideally, the data set provides stimulus for analysis at many different levels; (6) hierarchical problem statement - a hierarchy of suggested analysis tasks, from relatively straightforward to open-ended; (7) means for evaluating success - data scientists and geoscientists need means to evaluate whether the algorithms are successful and achieve the intended purpose; (8) quick start guide - an introduction for data scientists on how to easily read the data to enable rapid initial data exploration; (9) geoscience context - a summary for data scientists of the specific data collection process, instruments used, any pre-processing, and the science questions to be answered; and (10) citability - a suitable identifier to facilitate tracking the use of the benchmark later on, e.g. allowing search engines to find all research papers using it. A first sample benchmark, developed in collaboration with the Jet Propulsion Laboratory (JPL), deals with the automatic analysis of imaging spectrometer data to detect significant methane sources in the atmosphere.

  4. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy; a sketch of this mapping is given below. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
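
    A toy version of the mapping referred to above, with invented graph data: a maximum independent set of G is exactly a maximum clique of the complement of G, so a single clique solver can serve several problems. The greedy routine is a simple stand-in for the paper's evolutionary solver:

    ```python
    # Map maximum independent set onto maximum clique via the complement
    # graph, then search for a clique (greedy stand-in for IEA-PTS).
    import itertools

    def complement(n, edges):
        all_pairs = set(itertools.combinations(range(n), 2))
        return all_pairs - {tuple(sorted(e)) for e in edges}

    def greedy_clique(n, edges):
        adj = {v: set() for v in range(n)}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        clique = []
        # add highest-degree vertices that stay adjacent to the whole clique
        for v in sorted(range(n), key=lambda v: -len(adj[v])):
            if all(v in adj[u] for u in clique):
                clique.append(v)
        return clique

    # maximum independent set of a 5-cycle via a clique in its complement
    n, cycle = 5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(greedy_clique(n, complement(n, cycle)))  # e.g. [0, 2], size 2
    ```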

  5. Benchmarking in national health service procurement in Scotland.

    PubMed

    Walker, Scott; Masson, Ron; Telford, Ronnie; White, David

    2007-11-01

    The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.

  6. Microbially Mediated Kinetic Sulfur Isotope Fractionation: Reactive Transport Modeling Benchmark

    NASA Astrophysics Data System (ADS)

    Wanner, C.; Druhan, J. L.; Cheng, Y.; Amos, R. T.; Steefel, C. I.; Ajo Franklin, J. B.

    2014-12-01

    Microbially mediated sulfate reduction is a ubiquitous process in many subsurface systems. Isotopic fractionation is characteristic of this anaerobic process, since sulfate-reducing bacteria (SRB) favor the reduction of the lighter sulfate isotopologue (³²SO₄²⁻) over the heavier isotopologue (³⁴SO₄²⁻). Detection of isotopic shifts has been utilized as a proxy for the onset of sulfate reduction in subsurface systems such as oil reservoirs and aquifers undergoing uranium bioremediation. Reactive transport modeling (RTM) of kinetic sulfur isotope fractionation has been applied to field and laboratory studies. These RTM approaches employ different mathematical formulations in the representation of kinetic sulfur isotope fractionation. In order to test the various formulations, we propose a benchmark problem set for the simulation of kinetic sulfur isotope fractionation during microbially mediated sulfate reduction. The benchmark problem set is comprised of four problem levels and is based on a recent laboratory column experimental study of sulfur isotope fractionation. Pertinent processes impacting sulfur isotopic composition, such as microbial sulfate reduction and dispersion, are included in the problem set. To date, the participating RTM codes are CRUNCHTOPE, TOUGHREACT, MIN3P and THE GEOCHEMIST'S WORKBENCH. Preliminary results from the various codes show reasonable agreement for the problem levels simulating sulfur isotope fractionation in 1D.
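
    A closed-form reference point often used when checking such formulations is the Rayleigh model for kinetic fractionation in a closed system; the notation below is assumed for illustration and is not taken from the benchmark specification:

    ```latex
    % R/R_0: evolution of the 34S/32S ratio of the residual sulfate
    % f: fraction of sulfate remaining; alpha: kinetic fractionation factor
    % epsilon = (alpha - 1) * 1000 (per mil)
    \frac{R}{R_0} = f^{\,\alpha - 1},
    \qquad
    \delta^{34}\mathrm{S} \approx \delta^{34}\mathrm{S}_0 + \varepsilon \ln f
    ```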

  7. NAS Parallel Benchmark Results 11-96. 1.0

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These results represent the best results that have been reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D. These results have been submitted by Applied Parallel Research (APR) and Portland Group Inc. (PGI). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.

  8. Particle swarm optimization with recombination and dynamic linkage discovery.

    PubMed

    Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung

    2007-12-01

    In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.
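
    For readers unfamiliar with the baseline, the sketch below is a canonical global-best PSO (without the paper's linkage discovery or recombination operator) run on the Rastrigin function, a standard benchmark family of the kind used in such studies; all parameter values are conventional choices, not the paper's settings:

    ```python
    # Canonical global-best particle swarm optimization on Rastrigin.
    import numpy as np

    rng = np.random.default_rng(0)

    def rastrigin(x):
        return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

    n_particles, dim, iters = 30, 10, 2000
    w, c1, c2 = 0.72, 1.49, 1.49          # common inertia/acceleration settings

    x = rng.uniform(-5.12, 5.12, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), rastrigin(x)
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, -5.12, 5.12)
        f = rastrigin(x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()

    print("best Rastrigin value found:", pbest_f.min())
    ```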

  9. Benchmarking and Threshold Standards in Higher Education. Staff and Educational Development Series.

    ERIC Educational Resources Information Center

    Smith, Helen, Ed.; Armstrong, Michael, Ed.; Brown, Sally, Ed.

    This book explores the issues involved in developing standards in higher education, examining the practical issues involved in benchmarking and offering a critical analysis of the problems associated with this developmental tool. The book focuses primarily on experience in the United Kingdom (UK), but looks also at international activity in this…

  10. Improving Federal Education Programs through an Integrated Performance and Benchmarking System.

    ERIC Educational Resources Information Center

    Department of Education, Washington, DC. Office of the Under Secretary.

    This document highlights the problems with current federal education program data collection activities and lists several factors that make movement toward a possible solution, then discusses the vision for the Integrated Performance and Benchmarking System (IPBS), a vision of an Internet-based system for harvesting information from states about…

  11. A Critical Thinking Benchmark for a Department of Agricultural Education and Studies

    ERIC Educational Resources Information Center

    Perry, Dustin K.; Retallick, Michael S.; Paulsen, Thomas H.

    2014-01-01

    Due to an ever changing world where technology seemingly provides endless answers, today's higher education students must master a new skill set reflecting an emphasis on critical thinking, problem solving, and communications. The purpose of this study was to establish a departmental benchmark for critical thinking abilities of students majoring…

  12. Benchmarking NNWSI flow and transport codes: COVE 1 results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.

  13. Augmented neural networks and problem structure-based heuristics for the bin-packing problem

    NASA Astrophysics Data System (ADS)

    Kasap, Nihat; Agarwal, Anurag

    2012-08-01

    In this article, we report on a research project in which we applied the augmented-neural-networks (AugNN) approach to solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority-rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP instances, in which subproblems are solved using a combination of the AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems to which such problem structure-based heuristics can be applied. We empirically show the effectiveness of the AugNN and decomposition approaches on many benchmark problems in the literature. Of the 1210 benchmark problems tested, 917 were solved to optimality; the average gap between the obtained solution and the upper bound for all the problems was reduced to under 0.66%, and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
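
    As a reminder of what a priority-rule heuristic for the BPP looks like, here is first-fit decreasing with the trivial capacity lower bound used to report a gap; the instance is invented, and this is an illustration rather than the AugNN priority rule itself:

    ```python
    # First-fit decreasing for bin packing, plus the ceil(total/capacity)
    # lower bound used to compute an optimality gap.
    import math

    def first_fit_decreasing(items, capacity):
        bins = []                              # remaining capacity per open bin
        for item in sorted(items, reverse=True):
            for i, free in enumerate(bins):
                if item <= free:
                    bins[i] -= item
                    break
            else:
                bins.append(capacity - item)   # open a new bin
        return len(bins)

    items, capacity = [7, 5, 5, 4, 4, 3, 2, 2], 10
    used = first_fit_decreasing(items, capacity)
    lower = math.ceil(sum(items) / capacity)
    print(f"bins used: {used}, lower bound: {lower}, gap: {(used - lower) / lower:.1%}")
    ```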

  14. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search

    PubMed Central

    2017-01-01

    Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been shown that BBA is competitive compared to other binary heuristic algorithms. Since the update process for velocity in the algorithm is consistent with BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numerical results obtained in the benchmark function experiments show that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparison with several other heuristic algorithms on zero-one knapsack problems also verifies that the proposed algorithm is better able to avoid local minima. PMID:28634487

  15. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search.

    PubMed

    Huang, Xingwang; Zeng, Xuewen; Han, Rui

    2017-01-01

    Binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been shown that BBA is competitive compared to other binary heuristic algorithms. Since the update process for velocity in the algorithm is consistent with BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems have been employed. The numerical results obtained in the benchmark function experiments show that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparison with several other heuristic algorithms on zero-one knapsack problems also verifies that the proposed algorithm is better able to avoid local minima.
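
    The binary encoding machinery these algorithms share can be sketched in a few lines: a sigmoid transfer function turns a real-valued velocity into a bit-selection probability, evaluated here on an invented zero-one knapsack instance. This shows the shared mechanism only, not the IBBA itself:

    ```python
    # Sigmoid transfer function for binary metaheuristics, scored on a
    # 0-1 knapsack; infeasible selections are simply given zero fitness.
    import numpy as np

    rng = np.random.default_rng(1)
    values  = np.array([10, 5, 15, 7, 6, 18, 3])
    weights = np.array([ 2, 3,  5, 7, 1,  4, 1])
    capacity = 15

    def fitness(bits):
        return values @ bits if weights @ bits <= capacity else 0

    velocity = rng.normal(size=values.size)    # one agent's real-valued velocity
    prob = 1.0 / (1.0 + np.exp(-velocity))     # sigmoid transfer
    bits = (rng.random(values.size) < prob).astype(int)
    print(bits, fitness(bits))
    ```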

  16. Implementation and verification of global optimization benchmark problems

    NASA Astrophysics Data System (ADS)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, as well as interval estimates of the function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification showed that literature sources contain mistakes in the descriptions of some benchmarks. The library and the test suite are available for download and can be used freely.
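
    One of the checks such a suite can automate is easy to reproduce in miniature: confirm that a benchmark's documented global minimizer really is a stationary point. The sketch below does this with central differences on Rosenbrock, used here only as a stand-in for the suite's 150 benchmarks (the suite itself is C++ and uses interval arithmetic):

    ```python
    # Verify a benchmark's documented minimizer by checking that the
    # numerically estimated gradient vanishes there.
    import numpy as np

    def rosenbrock(x):
        return sum(100.0 * (x[i + 1] - x[i]**2)**2 + (1 - x[i])**2
                   for i in range(len(x) - 1))

    def num_grad(f, x, h=1e-6):
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)   # central difference
        return g

    x_star = np.ones(4)                  # documented global minimizer
    g = num_grad(rosenbrock, x_star)
    assert np.allclose(g, 0.0, atol=1e-4), "benchmark description is inconsistent"
    print("gradient at claimed optimum:", g)
    ```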

  17. Benchmarking optimization software with COPS 3.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolan, E. D.; More, J. J.; Munson, T. S.

    2004-05-24

    The authors describe version 3.0 of the COPS set of nonlinearly constrained optimization problems. They have added new problems, as well as streamlined and improved most of the problems. They also provide a comparison of the FILTER, KNITRO, LOQO, MINOS, and SNOPT solvers on these problems.

  18. Hybrid discrete ordinates and characteristics method for solving the linear Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Yi, Ce

    As the capabilities of computer hardware and software increase rapidly, deterministic methods for solving the linear Boltzmann equation (LBE) have attracted attention for computational applications in both the nuclear engineering and medical physics fields. Among the various deterministic methods, the discrete ordinates method (SN) and the method of characteristics (MOC) are two of the most widely used. The SN method is the traditional approach to solving the LBE, valued for its stability and efficiency, while the MOC has advantages in treating complicated geometries. However, in 3-D problems requiring a dense discretization grid in phase space (i.e., a large number of spatial meshes, directions, or energy groups), both methods can suffer from the need for large amounts of memory and computation time. In our study, we developed a new hybrid algorithm by combining the two methods into one code, TITAN. The hybrid approach is specifically designed for application to problems containing low-scattering regions. A new serial 3-D time-independent transport code has been developed. Under the hybrid approach, the preferred method can be applied in different regions (blocks) within the same problem model. Since the characteristics method is numerically more efficient in low-scattering media, the hybrid approach uses a block-oriented characteristics solver in low-scattering regions and a block-oriented SN solver in the remainder of the physical model. In the TITAN code, a physical problem model is divided into a number of coarse meshes (blocks) in Cartesian geometry. Either the characteristics solver or the SN solver can be chosen to solve the LBE within a coarse mesh. A coarse mesh can be filled with fine meshes or characteristic rays depending on the solver assigned to the coarse mesh. Furthermore, with its object-oriented programming paradigm and layered code structure, TITAN allows different individual spatial meshing schemes and angular quadrature sets for each coarse mesh. Two quadrature types (level-symmetric and Legendre-Chebyshev quadrature), along with ordinate splitting techniques (rectangular splitting and PN-TN splitting), are implemented. In the SN solver, we apply a memory-efficient 'front-line' style paradigm to handle the fine-mesh interface fluxes. In the characteristics solver, we have developed a novel 'backward' ray-tracing approach, in which a bi-linear interpolation procedure is used on the incoming boundaries of a coarse mesh. A CPU-efficient scattering kernel is shared by both solvers within the source iteration scheme. Angular and spatial projection techniques are developed to transfer the angular fluxes on the interfaces of coarse meshes with different discretization grids. The performance of the hybrid algorithm is tested on a number of benchmark problems in both the nuclear engineering and medical physics fields, among them the Kobayashi benchmark problems and a computed tomography (CT) device model. We also developed an extra sweep procedure with a fictitious quadrature technique to calculate angular fluxes along directions of interest. The technique is applied in a single photon emission computed tomography (SPECT) phantom model to simulate SPECT projection images. The accuracy and efficiency of the TITAN code are demonstrated in these benchmarks, along with its scalability. A modified version of the characteristics solver is integrated in the PENTRAN code and tested within the parallel engine of PENTRAN. The limitations of the hybrid algorithm are also studied.
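
    For context, the discrete ordinates discretization referred to above replaces the angular variable with a quadrature set; the one-group, isotropic-scattering form is shown below with notation assumed for illustration (not quoted from TITAN's documentation):

    ```latex
    % psi_m: angular flux along quadrature direction Omega_m (weight w_m)
    % sigma_t, sigma_s: total and scattering cross sections; q_m: source
    \hat{\Omega}_m \cdot \nabla \psi_m(\mathbf{r})
      + \sigma_t(\mathbf{r})\, \psi_m(\mathbf{r})
      = \frac{\sigma_s(\mathbf{r})}{4\pi} \sum_{n=1}^{M} w_n\, \psi_n(\mathbf{r})
      + q_m(\mathbf{r}),
    \qquad m = 1, \dots, M
    ```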

  19. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, L.M.; Hochstedler, R.D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
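
    The first of the listed techniques is easy to illustrate. In a Monte Carlo inner loop, a table lookup such as locating an energy-grid interval runs millions of times, so replacing a linear scan with binary search pays off; the grid values below are invented for the sketch:

    ```python
    # Replace a linear interval search with an O(log n) binary search.
    import bisect

    energy_grid = [0.01, 0.1, 0.5, 1.0, 5.0, 10.0]   # MeV, ascending

    def find_interval_linear(grid, e):
        for i in range(len(grid) - 1):
            if grid[i] <= e < grid[i + 1]:
                return i
        raise ValueError("energy outside grid")

    def find_interval_binary(grid, e):
        i = bisect.bisect_right(grid, e) - 1
        if 0 <= i < len(grid) - 1:
            return i
        raise ValueError("energy outside grid")

    # both locate the interval [0.5, 1.0) for a 0.7 MeV photon
    assert find_interval_linear(energy_grid, 0.7) == find_interval_binary(energy_grid, 0.7) == 2
    ```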

  20. Validation and Comparison of 2D and 3D Codes for Nearshore Motion of Long Waves Using Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey

    2016-04-01

    Tsunamis are huge waves with long wave periods and wavelengths that can cause great devastation and loss of life when they strike a coast. Interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes that analyze tsunami propagation and inundation patterns, FLOW 3D and NAMI DANCE, are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite-difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach; the experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) annual meeting in Portland, USA; it is a field dataset recording the 2011 Japan tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT-Japan Joint Call and Istanbul Metropolitan Municipality are all acknowledged.
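
    The depth-averaged model solved by codes of the NAMI DANCE type is the nonlinear shallow water system; its 1-D form (friction and Coriolis terms omitted, notation assumed here rather than taken from the paper) reads:

    ```latex
    % eta: free-surface displacement; h: still-water depth
    % u: depth-averaged velocity; g: gravitational acceleration
    \frac{\partial \eta}{\partial t}
      + \frac{\partial}{\partial x}\bigl[(h + \eta)\, u\bigr] = 0,
    \qquad
    \frac{\partial u}{\partial t}
      + u\, \frac{\partial u}{\partial x}
      + g\, \frac{\partial \eta}{\partial x} = 0
    ```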

  1. Stimulated photon emission and two-photon Raman scattering in a coupled-cavity QED system

    PubMed Central

    Li, C.; Song, Z.

    2016-01-01

    We study the scattering problem of photons and polaritons in a one-dimensional coupled-cavity system. Analytical approximate analysis and numerical simulation show that a photon can stimulate photon emission from a polariton through polariton-photon collisions. This observation opens the possibility of a photon-stimulated transition from an insulating to a radiative phase in a coupled-cavity QED system. Conversely, we also find that a polariton can be generated by a two-photon Raman scattering process. This paves the way towards single-photon storage with the aid of atom-cavity interaction. PMID:26877252

  2. Test One to Test Many: A Unified Approach to Quantum Benchmarks

    NASA Astrophysics Data System (ADS)

    Bai, Ge; Chiribella, Giulio

    2018-04-01

    Quantum benchmarks are routinely used to validate the experimental demonstration of quantum information protocols. Many relevant protocols, however, involve an infinite set of input states, of which only a finite subset can be used to test the quality of the implementation. This is a problem, because the benchmark for the finitely many states used in the test can be higher than the original benchmark calculated for infinitely many states. This situation arises in the teleportation and storage of coherent states, for which the benchmark of 50% fidelity is commonly used in experiments, although finite sets of coherent states normally lead to higher benchmarks. Here, we show that the average fidelity over all coherent states can be indirectly probed with a single setup, requiring only two-mode squeezing, a 50-50 beam splitter, and homodyne detection. Our setup enables a rigorous experimental validation of quantum teleportation, storage, amplification, attenuation, and purification of noisy coherent states. More generally, we prove that every quantum benchmark can be tested by preparing a single entangled state and measuring a single observable.

  3. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
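
    The expected-cost criterion is simple to estimate empirically. The toy sketch below draws bootstrap samples from a synthetic distribution of solver outcomes and picks the number of calls n minimizing E[min of n runs] + c*n; the outcome distribution and the cost per call are invented for illustration, not the paper's data:

    ```python
    # Estimate the optimal number of solver calls under a cost-per-call.
    import numpy as np

    rng = np.random.default_rng(42)
    outcomes = rng.lognormal(mean=0.0, sigma=0.7, size=10_000)  # solver results
    cost_per_call = 0.05

    def expected_cost(n, samples, c, trials=2_000):
        draws = rng.choice(samples, size=(trials, n))            # bootstrap resample
        return draws.min(axis=1).mean() + c * n

    costs = {n: expected_cost(n, outcomes, cost_per_call) for n in range(1, 60)}
    n_star = min(costs, key=costs.get)
    print(f"optimal number of calls: {n_star}, expected cost: {costs[n_star]:.3f}")
    ```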

  4. Benchmarking, benchmarks, or best practices? Applying quality improvement principles to decrease surgical turnaround time.

    PubMed

    Mitchell, L

    1996-01-01

    The processes of benchmarking, benchmark data comparative analysis, and study of best practices are distinctly different. The study of best practices is explained with an example based on the Arthur Andersen & Co. 1992 "Study of Best Practices in Ambulatory Surgery". The results of a national best practices study in ambulatory surgery were used to provide our quality improvement team with the goal of improving the turnaround time between surgical cases. The team used a seven-step quality improvement problem-solving process to improve the surgical turnaround time. The national benchmark for turnaround times between surgical cases in 1992 was 13.5 minutes. The initial turnaround time at St. Joseph's Medical Center was 19.9 minutes. After the team implemented solutions, the time was reduced to an average of 16.3 minutes, an 18% improvement. Cost-benefit analysis showed a potential enhanced revenue of approximately $300,000, or a potential savings of $10,119. Applying quality improvement principles to benchmarking, benchmarks, or best practices can improve process performance. Understanding which form of benchmarking the institution wishes to embark on will help focus a team and use appropriate resources. Communicating with professional organizations that have experience in benchmarking will save time and money and help achieve the desired results.

  5. Helicopters on the asymmetric battlefield: challenges for photonics

    NASA Astrophysics Data System (ADS)

    Heikell, Johnny

    2007-10-01

    The problem set of battlefield helicopters and related photonics in asymmetric scenarios is addressed, with emphasis on survivability and electronic warfare. The problem set is identified starting from an operational perspective, asking how different the asymmetric battlefield is from the traditional Cold War scenario, and by identifying relevant characteristics of battlefield helicopters. Based on this information, requirements for photonics are deduced. It is concluded that the shift to asymmetric conflicts brings evolutionary, but not revolutionary, challenges for photonics, mostly so for the laser community. The main causes of the evolutionary drive are shortened engagement ranges, increased threat from ballistic and CBRE weapons, stringent ROEs, and assassination operations.

  6. Microwave-based medical diagnosis using particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Modiri, Arezoo

    This dissertation proposes and investigates a novel architecture intended for microwave-based medical diagnosis (MBMD). It also proposes novel modifications of the particle swarm optimization (PSO) algorithm for achieving enhanced convergence performance. MBMD has been investigated through a variety of innovative techniques in the literature since the 1990s and has shown significant promise in the early detection of some specific health threats. In comparison to X-ray- and gamma-ray-based diagnostic tools, MBMD does not expose patients to ionizing radiation, and due to the maturity of microwave technology, it lends itself to miniaturization of the supporting systems. This modality has been shown to be effective in detecting breast malignancy, and hence this study focuses on that application. A novel radiator device and detection technique are proposed and investigated in this dissertation. As expected, hardware design and implementation are of paramount importance in such a study, and a good deal of research, analysis, and evaluation has been carried out in this regard, reported in the ensuing chapters of this dissertation. An important element of any detection system is the algorithm used for extracting signatures. Here, the strong intrinsic potential of swarm-intelligence-based algorithms in solving complicated electromagnetic problems is brought to bear. This task is accomplished by addressing both mathematical and electromagnetic problems. These problems are called benchmark problems throughout this dissertation, since they have known answers. After evaluating the performance of the algorithm on the chosen benchmark problems, the algorithm is applied to the MBMD tumor detection problem. The chosen benchmark problems have already been tackled by solution techniques other than PSO, the results of which can be found in the literature. However, owing to the complexity and randomness inherent in electromagnetic benchmark problems, the literature has tended to resort to oversimplification in order to arrive at tractable solutions when analytical techniques are used. Here, an attempt has been made to avoid such oversimplification when using the proposed swarm-based optimization algorithms.
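    For reference, a minimal canonical PSO on a stand-in benchmark function is sketched below. The dissertation's modified variants differ from this; all parameter values here are illustrative defaults, and the sphere function is a hypothetical benchmark with a known optimum.

```python
import random

def pso(objective, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Canonical PSO: each particle is pulled toward its personal best
    and the swarm's global best, with inertia weight w."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_val = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))  # clamp to bounds
            val = objective(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = xs[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = xs[i][:], val
    return gbest, gbest_val

# Sphere function as a stand-in benchmark problem with known optimum 0.
best, val = pso(lambda x: sum(xi * xi for xi in x), dim=5)
print(val)
```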

  7. On the possibility of 'real-time' Monte Carlo calculations for the estimation of absorbed dose in radioimmunotherapy.

    PubMed

    Johnson, T K; Vessella, R L

    1989-07-01

    Dosimetry calculations for monoclonal antibodies (MABs) are made difficult because the radioactivity is targeted at a nonstandard volume in a nonstandard geometry, precluding straightforward application of the MIRD formalism. The MABDOS software addresses this shortcoming through interactive placement of a spherical perturbation into the Standard Man geometry for each tumor focus. S tables are calculated by a Monte Carlo simulation of photon transport for each organ system (including tumor) that localizes activity. Performance benchmarks are reported that measure the time required to simulate 60,000 photons for each penetrating radiation in the spectra of 99mTc and 131I, using the kidney as the source organ. The results indicate that calculation times are probably prohibitive on current microcomputer platforms; minicomputers and supercomputers offer a realistic platform for MABDOS patient dosimetry estimates.
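    To give a flavor of what such a performance benchmark measures, here is a deliberately crude photon-history timing loop: exponential free paths in a 1-D slab with a coin flip between absorption and scattering at each interaction. This is not the MABDOS transport model, and the attenuation coefficient, absorption fraction, and slab thickness are invented for illustration.

```python
import math
import random
import time

def simulate_photons(n_photons, mu_total=0.15, absorb_frac=0.3, thickness=30.0):
    """Crude 1-D photon-history loop: sample exponential free paths,
    end each history on absorption or escape from the slab."""
    absorbed = 0
    for _ in range(n_photons):
        depth = 0.0
        while True:
            depth += -math.log(random.random()) / mu_total  # free path (cm)
            if depth > thickness:        # photon leaves the slab
                break
            if random.random() < absorb_frac:
                absorbed += 1            # history ends in absorption
                break
    return absorbed

t0 = time.perf_counter()
simulate_photons(60_000)                 # 60,000 histories, as in the abstract
print(f"{time.perf_counter() - t0:.3f} s for 60,000 photon histories")
```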

  8. MCNP4A: Features and philosophy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, J.S.

    This paper describes MCNP, states its philosophy, introduces a number of new features becoming available with version MCNP4A, and answers a number of questions asked by participants in the workshop. MCNP is a general-purpose three-dimensional neutron, photon and electron transport code. Its philosophy is "Quality, Value and New Features." Quality is exemplified by new software quality assurance practices and a program of benchmarking against experiments. Value includes a strong emphasis on documentation and code portability. New features are the third priority. MCNP4A is now available at Los Alamos. New features in MCNP4A include enhanced statistical analysis, distributed processor multitasking, new photon libraries, ENDF/B-VI capabilities, X-Windows graphics, dynamic memory allocation, expanded criticality output, periodic boundaries, plotting of particle tracks via SABRINA, and many other improvements. 23 refs.

  9. Brachytherapy dosimetry of 125I and 103Pd sources using an updated cross section library for the MCNP Monte Carlo transport code.

    PubMed

    Bohm, Tim D; DeLuca, Paul M; DeWerd, Larry A

    2003-04-01

    Permanent implantation of low-energy (20-40 keV) photon-emitting radioactive seeds to treat prostate cancer is an important treatment option for patients. In order to produce accurate implant brachytherapy treatment plans, the dosimetry of a single source must be well characterized. Monte Carlo based transport calculations can be used for source characterization, but they must have up-to-date cross section libraries to produce accurate dosimetry results. This work benchmarks the MCNP code and its photon cross section library for low-energy photon brachytherapy applications. In particular, we calculate the emitted photon spectrum, air kerma, depth dose in water, and radial dose function for both 125I- and 103Pd-based seeds and compare to other published results. Our results show that MCNP's cross section library differs from recent data primarily in the photoelectric cross section at low energies and for low atomic number materials. In water, differences as large as 10% in the photoelectric cross section and 6% in the total cross section occur at 125I and 103Pd photon energies. This leads to differences in the dose rate constant of 3% and 5%, and differences as large as 18% and 20% in the radial dose function, for the 125I- and 103Pd-based seeds, respectively. Using a partially updated photon library, our calculations of the dose rate constant and radial dose function agree with other published results. Further, the updated photon library allows us to verify air kerma and depth-dose-in-water calculations performed using MCNP's perturbation feature to simulate updated cross sections. We conclude that in order to use MCNP most effectively for low-energy photon brachytherapy applications, its cross section library must be updated. Following this update, the MCNP code system will be a very effective tool for low-energy photon brachytherapy dosimetry.
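    As a reminder of how the radial dose function quoted above is defined, the sketch below applies the point-source TG-43 relation g(r) = [D(r)/D(r0)] (r/r0)^2 to depth-dose values, dividing out the inverse-square geometric fall-off. The dose values here are hypothetical and purely illustrative, not measured seed data.

```python
def radial_dose_function(radii, dose_rates, r0=1.0):
    """Point-source TG-43 radial dose function:
    g(r) = [D(r) / D(r0)] * (r / r0)**2.
    `radii` in cm; `dose_rates` in any consistent units."""
    d0 = None
    for r, d in zip(radii, dose_rates):
        if abs(r - r0) < 1e-9:
            d0 = d                      # dose rate at the reference radius
    if d0 is None:
        raise ValueError("reference radius r0 must be among the radii")
    return [(d / d0) * (r / r0) ** 2 for r, d in zip(radii, dose_rates)]

# Hypothetical depth-dose values in water (illustrative only).
radii = [0.5, 1.0, 2.0, 3.0, 5.0]
dose = [4.2, 1.0, 0.21, 0.062, 0.0066]
print(radial_dose_function(radii, dose))
```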

  10. Multi-photon ionization of atoms in intense short-wavelength radiation fields

    NASA Astrophysics Data System (ADS)

    Meyer, Michael

    2015-05-01

    The unprecedented characteristics of XUV and X-ray Free Electron Lasers (FELs) have stimulated numerous investigations focusing on the detailed understanding of fundamental photon-matter interactions in atoms and molecules. In particular, the high intensities (up to 10^16 W/cm^2) give rise to non-linear phenomena in the short-wavelength regime. The basic phenomenology involves the production of highly charged ions via electron emission, to which both sequential and direct multi-photon absorption processes contribute. The detailed investigation of the role and relative weight of these processes under different conditions (wavelength, pulse duration, intensity) is the key element for a comprehensive understanding of the ionization dynamics. Here, the results of recent investigations are presented, performed at the FELs in Hamburg (FLASH) and Trieste (FERMI) on atomic systems with electronic structures of increasing complexity (Ar, Ne and Xe). Electron spectroscopy is mainly used to obtain quantitative information about the relevance of the various multi-photon ionization processes. For Ar, a variety of processes, including above-threshold ionization (ATI) from the 3p and 3s valence shells, direct 2p two-photon ionization, and resonant 2p-4p two-photon excitations, were observed, and their roles were quantitatively determined by comparing the experimental ionization yields to ab initio calculations of the cross sections for the multi-photon processes. Using Ar as a benchmark to establish the reliability of the combined experimental and theoretical approach, the more complex and intriguing case of Xe was studied. In particular, the analysis of two-photon ATI from the Xe 4d shell reveals new insight into the character of the 4d giant resonance that was unresolved in the linear one-photon regime. Finally, the influence of intense XUV radiation on the relaxation dynamics of the Ne 2s-3p resonance was investigated by angle-resolved electron spectroscopy, especially by observing the intensity-dependent variation of the angular distribution patterns for the sequential ionization process.

  11. There is no one-size-fits-all product for InSAR; on the inclusion of contextual information for geodetically-proof InSAR data products

    NASA Astrophysics Data System (ADS)

    Hanssen, R. F.

    2017-12-01

    In traditional geodesy, one is interested in determining the coordinates, or the change in coordinates, of predefined benchmarks. These benchmarks are clearly identifiable and are established specifically to be representative of the signal of interest. This holds, e.g., for leveling benchmarks, for triangulation/trilateration benchmarks, and for GNSS benchmarks. The desired coordinates are not identical to the basic measurements and need to be estimated using robust estimation procedures in which the stochastic nature of the measurements is taken into account. For InSAR, however, the 'benchmarks' are not predefined. In fact, we usually do not know where an effective benchmark is located, even though we can determine its dynamic behavior quite well. This poses several significant problems. First, we cannot describe the quality of the measurements unless we already know the dynamic behavior of the benchmark. Second, if we do not know the quality of the measurements, we cannot compute the quality of the estimated parameters. Third, rather harsh assumptions need to be made to produce a result; these (usually implicit) assumptions differ between processing operators and the software used, and they are severely affected by the amount of available data. Fourth, the 'relative' nature of the final estimates is usually not explicitly stated, which is particularly problematic for non-expert users. Finally, whereas conventional geodesy applies rigorous testing to check for measurement or model errors, this is hardly ever done in InSAR geodesy. These problems make it all but impossible to provide a precise, reliable, repeatable, and 'universal' InSAR product or service. Here we evaluate the requirements and challenges involved in moving towards InSAR as a geodetically-proof product. In particular, this involves the explicit inclusion of contextual information, as well as InSAR procedures, standards and a technical protocol, supported by the International Association of Geodesy and the international scientific community.
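    The contrast with classical geodesy can be made concrete with the simplest possible adjustment, a weighted mean, where the quality of the estimated parameter follows directly from the stated quality of the measurements; this is exactly what cannot be written down for InSAR when the measurement stochastics are unknown. The sketch and its measurement values are illustrative only.

```python
def weighted_mean(values, sigmas):
    """Classical geodetic adjustment in its simplest form: the weighted
    mean of repeated measurements and the standard deviation of the
    estimate, propagated from the per-measurement sigmas."""
    weights = [1.0 / s**2 for s in sigmas]
    est = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    sigma_est = (1.0 / sum(weights)) ** 0.5   # quality of the estimate
    return est, sigma_est

# Three hypothetical leveling-style height-change measurements (mm),
# each with a known standard deviation.
print(weighted_mean([3.1, 2.8, 3.4], sigmas=[0.5, 0.7, 0.6]))
```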

  12. Search for dark matter in association with a Higgs boson decaying to two photons at √s = 13 TeV with the ATLAS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaboud, M.; Aad, G.; Abbott, B.

    A search for dark matter in association with a Higgs boson decaying to two photons is presented. This study is based on data collected with the ATLAS detector, corresponding to an integrated luminosity of 36.1 fb^-1 of proton-proton collisions at the LHC at a center-of-mass energy of 13 TeV in 2015 and 2016. No significant excess over the expected background is observed. Upper limits at 95% confidence level are set on the visible cross section for beyond the Standard Model physics processes, and on the production cross section times branching fraction of the Standard Model Higgs boson decaying into two photons in association with missing transverse momentum, in three different benchmark models. Finally, limits at 95% confidence level are also set on the observed signal in two-dimensional mass planes. Additionally, the results are interpreted in terms of 90% confidence-level limits on the dark-matter–nucleon scattering cross section, as a function of the dark-matter particle mass, for a spin-independent scenario.

  13. Search for dark matter in association with a Higgs boson decaying to two photons at √s = 13 TeV with the ATLAS detector

    NASA Astrophysics Data System (ADS)

    Aaboud, M.; Aad, G.; Abbott, B.; ...; ATLAS Collaboration

    2017-12-01

    A search for dark matter in association with a Higgs boson decaying to two photons is presented. This study is based on data collected with the ATLAS detector, corresponding to an integrated luminosity of 36.1 fb^-1 of proton-proton collisions at the LHC at a center-of-mass energy of 13 TeV in 2015 and 2016. No significant excess over the expected background is observed. Upper limits at 95% confidence level are set on the visible cross section for beyond the Standard Model physics processes, and on the production cross section times branching fraction of the Standard Model Higgs boson decaying into two photons in association with missing transverse momentum, in three different benchmark models. Limits at 95% confidence level are also set on the observed signal in two-dimensional mass planes. Additionally, the results are interpreted in terms of 90% confidence-level limits on the dark-matter–nucleon scattering cross section, as a function of the dark-matter particle mass, for a spin-independent scenario.

  14. Search for dark matter in association with a Higgs boson decaying to two photons at √s = 13 TeV with the ATLAS detector

    DOE PAGES

    Aaboud, M.; Aad, G.; Abbott, B.; ...

    2017-12-08

    A search for dark matter in association with a Higgs boson decaying to two photons is presented. This study is based on data collected with the ATLAS detector, corresponding to an integrated luminosity of 36.1 fb^-1 of proton-proton collisions at the LHC at a center-of-mass energy of 13 TeV in 2015 and 2016. No significant excess over the expected background is observed. Upper limits at 95% confidence level are set on the visible cross section for beyond the Standard Model physics processes, and on the production cross section times branching fraction of the Standard Model Higgs boson decaying into two photons in association with missing transverse momentum, in three different benchmark models. Finally, limits at 95% confidence level are also set on the observed signal in two-dimensional mass planes. Additionally, the results are interpreted in terms of 90% confidence-level limits on the dark-matter–nucleon scattering cross section, as a function of the dark-matter particle mass, for a spin-independent scenario.

  15. Poster — Thur Eve — 61: A new framework for MPERT plan optimization using MC-DAO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, M; Lloyd, S AM; Townson, R

    2014-08-15

    This work combines the inverse planning technique known as Direct Aperture Optimization (DAO) with Intensity Modulated Radiation Therapy (IMRT) and combined electron and photon therapy plans. In particular, determining the conditions under which Modulated Photon/Electron Radiation Therapy (MPERT) produces better dose conformality and sparing of organs at risk than traditional IMRT plans is central to the project. Presented here are the materials and methods used to generate and manipulate the DAO procedure, including the introduction of a powerful Java-based toolkit, the Aperture-based Monte Carlo (MC) MPERT Optimizer (AMMO), which serves as a framework for optimization and provides streamlined access to underlying particle transport packages. Comparison of the toolkit's dose calculations to those produced by the Eclipse TPS and the demonstration of a preliminary optimization are presented as first benchmarks. Excellent agreement is shown between the Eclipse TPS and AMMO for a 6 MV photon field. The results of a simple optimization show that the optimization framework functions, although significant research remains to characterize appropriate constraints.
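    As a toy stand-in for the weight-optimization half of DAO (real DAO also optimizes MLC leaf positions, which this sketch omits), the code below runs projected gradient descent on a least-squares dose objective with nonnegative aperture weights. The dose-deposition matrix and target values are hypothetical.

```python
def optimize_aperture_weights(D, target, iters=500, lr=0.1):
    """Projected gradient descent on || D w - target ||^2 with w >= 0.
    D[v][a] is the dose delivered to voxel v per unit weight of
    aperture a; `target` is the prescribed dose per voxel."""
    n_vox, n_ap = len(D), len(D[0])
    w = [1.0] * n_ap
    for _ in range(iters):
        resid = [sum(D[v][a] * w[a] for a in range(n_ap)) - target[v]
                 for v in range(n_vox)]
        for a in range(n_ap):
            grad = 2.0 * sum(resid[v] * D[v][a] for v in range(n_vox))
            w[a] = max(0.0, w[a] - lr * grad)   # project onto w >= 0
    return w

# Tiny hypothetical example: 3 voxels, 2 apertures.
D = [[1.0, 0.2], [0.5, 0.8], [0.1, 1.0]]
print(optimize_aperture_weights(D, target=[1.0, 1.0, 1.0]))
```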

  16. How Much Debt Is Too Much? Defining Benchmarks for Manageable Student Debt

    ERIC Educational Resources Information Center

    Baum, Sandy; Schwartz, Saul

    2006-01-01

    Many discussions of student loan repayment focus on those students for whom repayment is a problem and conclude that the reliance on debt to finance postsecondary education is excessive. However, from both a pragmatic perspective and a logical perspective, a more appropriate approach is to develop different benchmarks for students in different…

  17. A novel hybrid meta-heuristic technique applied to the well-known benchmark optimization problems

    NASA Astrophysics Data System (ADS)

    Abtahi, Amir-Reza; Bijari, Afsane

    2017-03-01

    In this paper, a hybrid meta-heuristic algorithm based on the imperialistic competition algorithm (ICA), harmony search (HS), and simulated annealing (SA) is presented. The body of the proposed hybrid algorithm is based on ICA. The hybrid algorithm inherits the harmony-creation process of HS to improve the exploitation phase of ICA, and it uses SA to balance the exploration and exploitation phases. The proposed hybrid algorithm is compared with several meta-heuristic methods, including the genetic algorithm (GA), HS, and ICA, on several well-known benchmark instances. Comprehensive experiments and statistical analysis on standard benchmark functions certify the superiority of the proposed method over the other algorithms. The efficacy of the proposed hybrid algorithm is promising, and it can be applied to several real-life engineering and management problems.
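
    The abstract names the three ingredients of the hybrid (ICA population search, HS harmony creation, SA acceptance) without giving pseudocode. The sketch below is a loose, hypothetical illustration of how such a combination can be wired together, not the paper's algorithm: coordinates of a new candidate are drawn from a population memory (HS-style), the candidate is pulled toward the current best solution (ICA-style assimilation), and the move is accepted by a simulated-annealing rule. The sphere function and all parameter values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Benchmark objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return float(np.sum(x * x))

def hybrid_search(f, dim=10, pop_size=30, iters=2000,
                  hmcr=0.9, par=0.3, t0=1.0, cooling=0.999):
    # Population of candidate solutions ("countries" in ICA terminology).
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    best = pop[cost.argmin()].copy()
    t = t0
    for _ in range(iters):
        i = rng.integers(pop_size)
        # HS-style harmony creation: each coordinate is drawn from the
        # population memory (prob. hmcr) and optionally pitch-adjusted (par).
        new = pop[rng.integers(pop_size, size=dim), np.arange(dim)].copy()
        mask = rng.random(dim) > hmcr
        new[mask] = rng.uniform(-5.0, 5.0, size=mask.sum())
        adj = rng.random(dim) < par
        new[adj] += rng.normal(0.0, 0.1, size=adj.sum())
        # ICA-style assimilation: move the candidate toward the current best.
        new += rng.random(dim) * (best - new) * 0.5
        # SA-style acceptance balances exploration and exploitation.
        d = f(new) - cost[i]
        if d < 0 or rng.random() < np.exp(-d / max(t, 1e-12)):
            pop[i], cost[i] = new, f(new)
            if cost[i] < f(best):
                best = pop[i].copy()
        t *= cooling
    return best, f(best)

best, val = hybrid_search(sphere)
print(f"best objective found: {val:.3e}")
```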

  18. A Bayesian approach to traffic light detection and mapping

    NASA Astrophysics Data System (ADS)

    Hosseinyalamdary, Siavash; Yilmaz, Alper

    2017-03-01

    Automatic traffic light detection and mapping is an open research problem. Traffic lights vary in color, shape, geolocation, activation pattern, and installation, which complicates their automated detection. In addition, the image of a traffic light may be noisy, overexposed, underexposed, or occluded. In order to address this problem, we propose a Bayesian inference framework to detect and map traffic lights. In addition to the spatio-temporal consistency constraint, traffic light characteristics such as color, shape and height are shown to further improve the accuracy of the proposed approach. The proposed approach has been evaluated on two benchmark datasets and has been shown to outperform earlier studies. The results show that the precision and recall rates for the KITTI benchmark are 95.78% and 92.95%, respectively, and the precision and recall rates for the LARA benchmark are 98.66% and 94.65%, respectively.
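
    The per-frame fusion of a prior map with noisy detections can be reduced to a recursive Bayesian update; the sketch below is a deliberately stripped-down illustration of that step, with hypothetical likelihood values, and omits the spatio-temporal and appearance modeling of the paper.

```python
def bayes_update(prior, p_obs_given_light, p_obs_given_no_light):
    """One recursive Bayesian update of the probability that a traffic
    light exists at a mapped location, given one detection event."""
    num = p_obs_given_light * prior
    den = num + p_obs_given_no_light * (1.0 - prior)
    return num / den

p = 0.5                         # prior that a light exists at this location
for _ in range(3):              # three consecutive frames with a detection
    # Hypothetical detector characteristics: fires in 90% of frames when a
    # light is present, and in 20% of frames when it is not.
    p = bayes_update(p, 0.9, 0.2)
print(f"posterior after 3 frames: {p:.3f}")   # ~0.989
```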

  19. Summary of the Tandem Cylinder Solutions from the Benchmark Problems for Airframe Noise Computations-I Workshop

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2011-01-01

    Fifteen submissions in the tandem cylinders category of the First Workshop on Benchmark Problems for Airframe Noise Computations are summarized. Although the geometry is relatively simple, the problem involves complex physics. Researchers employed various block-structured, overset, unstructured and embedded Cartesian grid techniques and considerable computational resources to simulate the flow. The solutions are compared against each other and against experimental data from two facilities. Overall, the simulations captured the gross features of the flow, but resolving all the details that would be necessary to compute the noise remains challenging. In particular, it was unclear how best to simulate the effects of the experimental transition strip and the associated high-Reynolds-number effects. Furthermore, capturing the spanwise variation proved difficult.

  20. Novel probabilistic neuroclassifier

    NASA Astrophysics Data System (ADS)

    Hong, Jiang; Serpen, Gursel

    2003-09-01

    A novel probabilistic potential function neural network classifier algorithm to deal with classes which are multi-modally distributed and formed from sets of disjoint pattern clusters is proposed in this paper. The proposed classifier has a number of desirable properties which distinguish it from other neural network classifiers. A complete description of the algorithm in terms of its architecture and the pseudocode is presented. Simulation analysis of the newly proposed neuro-classifier algorithm on a set of benchmark problems is presented. Benchmark problems tested include IRIS, Sonar, Vowel Recognition, Two-Spiral, Wisconsin Breast Cancer, Cleveland Heart Disease and Thyroid Gland Disease. Simulation results indicate that the proposed neuro-classifier performs consistently better for a subset of problems for which other neural classifiers perform relatively poorly.
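
    The abstract gives no algorithmic detail here, but the probabilistic potential-function family it belongs to (Parzen/PNN-style classifiers) is easy to sketch: every training point radiates a Gaussian "potential," and a pattern is assigned to the class with the largest averaged potential, which handles multi-modal classes naturally. The synthetic data and bandwidth below are hypothetical, not the paper's algorithm or benchmarks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-class problem where class 0 is multi-modal: it consists of
# two disjoint clusters, the situation this family of classifiers targets.
c0 = np.vstack([rng.normal([-3, -3], 0.5, (50, 2)),
                rng.normal([3, 3], 0.5, (50, 2))])
c1 = rng.normal([0, 0], 0.5, (100, 2))
X = np.vstack([c0, c1])
y = np.array([0] * 100 + [1] * 100)

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Potential-function (Parzen/PNN-style) classifier: each training
    point contributes a Gaussian 'potential'; predict the class whose
    averaged potential at x is largest."""
    scores = []
    for c in np.unique(y_train):
        pts = X_train[y_train == c]
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
    return int(np.argmax(scores))

print(pnn_predict(X, y, np.array([-2.8, -3.1])))  # -> 0 (first cluster)
print(pnn_predict(X, y, np.array([0.1, -0.2])))   # -> 1
```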

  1. Quantum control of molecular fragmentation in strong laser fields

    NASA Astrophysics Data System (ADS)

    Zohrabi, Mohammad

    Present advances in laser technology allow the production of ultrashort (≲5 fs, approaching a single cycle at 800 nm), intense tabletop laser pulses. At these high intensities, laser-matter interactions cannot be described with perturbation theory since multiphoton processes are involved. This is in contrast to photodissociation by the absorption of a single photon, which is well described by perturbation theory. For example, at high intensities (≲5x10^13 W/cm^2) the fragmentation of molecular hydrogen ions has been observed via the absorption of three or more photons. In another example, an intriguing dissociation mechanism has been observed in which molecular hydrogen ions seem to fragment while apparently absorbing no photons. This is actually a two-photon process, photoabsorption followed by stimulated emission, resulting in low-energy fragments. We are interested in exploring these kinds of multiphoton processes. Our research group has studied the dynamics and control of fragmentation induced by strong laser fields in a variety of molecular targets. The main goal is to provide a basic understanding of fragmentation mechanisms and possible control schemes for benchmark systems such as H2+. This knowledge is further extended to more complex systems such as the polyatomic benchmark H3+ and other molecules. In this dissertation, we report research based on two types of experiments. In the first part, we describe laser-induced fragmentation of molecular ion-beam targets. In the second part, we discuss the formation of highly-excited neutral fragments from hydrogen molecules using ultrashort laser pulses. In carrying out these experiments, we have also extended experimental techniques beyond their previous capabilities. We have performed several experiments to advance our understanding of laser-induced fragmentation of molecular-ion beams. For instance, we explored vibrationally resolved spectra of O2+ dissociation at various wavelengths. We observed a vibrational suppression effect in the dissociation spectra due to the small magnitude of the dipole transition moment, which depends on the photon energy, a phenomenon known as a Cooper minimum. By changing the laser wavelength, the Cooper minima shift, a fact that was used to identify the dissociation pathways. In another project, we studied the carrier-envelope phase (CEP) dependence of highly-excited fragments from hydrogen molecules. General CEP theory predicts a CEP dependence in the total dissociation yield due to the interference of dissociation pathways differing by an even net number of photons, and our measurements are consistent with this prediction. Moreover, we were able to extract the difference in the net number of photons involved in the interfering pathways by using a Fourier analysis. In terms of our experimental method, we have implemented a pump-probe style technique on a thin molecular ion-beam target and explored the feasibility of such experiments. The results presented in this work should lead to a better understanding of the dynamics and control of molecular fragmentation induced by intense laser fields.

  2. Radiation shielding quality assurance

    NASA Astrophysics Data System (ADS)

    Um, Dallsun

    For radiation shielding quality assurance, the validity and reliability of the neutron transport code MCNP, now one of the most widely used radiation shielding analysis codes, were checked against a number of benchmark experiments. As a practical example, the following work was performed in this thesis. An integral neutron transport experiment to measure the effect of neutron streaming in iron and void was performed with the Dog-Legged Void Assembly at Knolls Atomic Power Laboratory in 1991. Neutron flux was measured at six different locations with methane detectors and a BF3 detector. The main purpose of the measurements was to provide a benchmark against which various neutron transport calculation tools could be compared. Those data were used to verify the Monte Carlo Neutron & Photon Transport Code, MCNP, with a model of the assembly. Experimental and calculated results were compared in two ways: as neutron fluxes integrated over the energy range from 10 keV to 2 MeV, and as neutron spectra as a function of energy. Both sets of results agree within the statistical error of +/-20%. MCNP results were also compared with those of TORT, a three-dimensional discrete ordinates code developed by Oak Ridge National Laboratory. The MCNP results are superior to the TORT results at all detector locations except one. This shows that MCNP is a very powerful tool for the analysis of neutron transport through iron and air, and that it can further serve as a powerful tool for radiation shielding analysis in general. As an application of the analysis of variance (ANOVA) to neutron and gamma transport problems, uncertainties in the calculated values of the criticality constant k were evaluated with an ANOVA on the statistical data.

  3. High-resolution Self-Organizing Maps for advanced visualization and dimension reduction.

    PubMed

    Saraswati, Ayu; Nguyen, Van Tuc; Hagenbuchner, Markus; Tsoi, Ah Chung

    2018-05-04

    Kohonen's Self-Organizing feature Map (SOM) provides an effective way to project high-dimensional input features onto a low-dimensional display space while preserving the topological relationships among the input features. Recent advances in algorithms that take advantage of modern computing hardware introduced the concept of high-resolution SOMs (HRSOMs). This paper investigates the capabilities and applicability of the HRSOM as a visualization tool for cluster analysis and its suitability to serve as a pre-processor in ensemble learning models. The evaluation is conducted on a number of established benchmarks and real-world learning problems, namely, the policeman benchmark, two web spam detection problems, a network intrusion detection problem, and a malware detection problem. It is found that the visualization resulting from an HRSOM provides new insights concerning these learning problems. It is furthermore shown empirically that broad benefits can be expected from the use of HRSOMs in both clustering and classification problems.
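
    For background, the sketch below is a minimal textbook Kohonen SOM training loop (not the authors' implementation); an HRSOM differs mainly in using a much larger map relative to the number of training samples, which is what makes the hardware-accelerated algorithms mentioned above necessary.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_som(data, rows=20, cols=20, iters=5000, lr0=0.5, sigma0=5.0):
    """Minimal Kohonen SOM: online training with a Gaussian neighborhood
    that shrinks over time. A 'high-resolution' SOM simply uses a much
    larger map (rows x cols) relative to the number of input samples."""
    dim = data.shape[1]
    w = rng.random((rows, cols, dim))
    # Precompute grid coordinates for the neighborhood function.
    gy, gx = np.mgrid[0:rows, 0:cols]
    for t in range(iters):
        frac = t / iters
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 1e-3
        x = data[rng.integers(len(data))]
        # Best-matching unit (BMU): node whose weights are closest to x.
        d2 = np.sum((w - x) ** 2, axis=2)
        by, bx = np.unravel_index(d2.argmin(), d2.shape)
        # Update all nodes, weighted by grid distance to the BMU.
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        w += lr * h[:, :, None] * (x - w)
    return w

data = rng.random((500, 3))   # e.g. colors in RGB space
som = train_som(data)         # som[i, j] is the prototype at map node (i, j)
```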

  4. I/O-Efficient Scientific Computation Using TPIE

    NASA Technical Reports Server (NTRS)

    Vengroff, Darren Erik; Vitter, Jeffrey Scott

    1996-01-01

    In recent years, input/output (I/O)-efficient algorithms for a wide variety of problems have appeared in the literature. However, systems specifically designed to assist programmers in implementing such algorithms have remained scarce. TPIE is a system designed to support I/O-efficient paradigms for problems from a variety of domains, including computational geometry, graph algorithms, and scientific computation. The TPIE interface frees programmers from having to deal not only with explicit read and write calls, but also the complex memory management that must be performed for I/O-efficient computation. In this paper we discuss applications of TPIE to problems in scientific computation. We discuss algorithmic issues underlying the design and implementation of the relevant components of TPIE and present performance results of programs written to solve a series of benchmark problems using our current TPIE prototype. Some of the benchmarks we present are based on the NAS parallel benchmarks while others are of our own creation. We demonstrate that the central processing unit (CPU) overhead required to manage I/O is small and that even with just a single disk, the I/O overhead of I/O-efficient computation ranges from negligible to the same order of magnitude as CPU time. We conjecture that if we use a number of disks in parallel this overhead can be all but eliminated.

  5. Hospital-affiliated practices reduce 'red ink'.

    PubMed

    Bohlmann, R C

    1998-01-01

    Many complain that hospital-group practice affiliations are a failed model and should be abandoned. The author argues for a less rash approach, saying the goal should be to understand the problems precisely, then fix them. Benchmarking is a good place to start. The article outlines the basic definition and ground rules of benchmarking and explains what resources help accomplish the task.

  6. Interaction-free measurement as quantum channel discrimination

    NASA Astrophysics Data System (ADS)

    Zhou, You; Yung, Man-Hong

    2017-12-01

    Interaction-free measurement is a quantum process where, in the ideal situation, an object can be detected as if no interaction took place with the probing photon. Here we show that the problem of interaction-free measurement can be regarded as a problem of quantum-channel discrimination. In particular, we look for the optimal photonic states that minimize the detection error and the photon loss in detecting the presence or absence of the object, which is taken to be semitransparent; the number of interrogation cycles is assumed to be finite. Furthermore, we also investigate the possibility of minimizing the detection error through the use of entangled photons, which is essentially a setting of quantum illumination. However, our results indicate that entanglement does not exhibit a clear advantage; the same performance can be achieved with unentangled photonic states.

  7. Two-photon absorption cross sections within equation-of-motion coupled-cluster formalism using resolution-of-the-identity and Cholesky decomposition representations: Theory, implementation, and benchmarks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nanda, Kaushik D.; Krylov, Anna I.

    The equation-of-motion coupled-cluster (EOM-CC) methods provide a robust description of electronically excited states and their properties. Here, we present a formalism for two-photon absorption (2PA) cross sections for equation-of-motion CC wave functions for excitation energies with single and double substitutions (EOM-EE-CCSD). Rather than the response theory formulation, we employ the expectation-value approach which is commonly used within the EOM-CC, configuration interaction, and algebraic diagrammatic construction frameworks. In addition to the canonical implementation, we also exploit resolution-of-the-identity (RI) and Cholesky decomposition (CD) for the electron-repulsion integrals to reduce memory requirements and to increase parallel efficiency. The new methods are benchmarked against the CCSD and CC3 response theories for several small molecules. We found that the expectation-value 2PA cross sections are within 5% of the quadratic response CCSD values. The RI and CD approximations lead to small errors relative to the canonical implementation (less than 4%) while affording computational savings. RI/CD successfully address the well-known issue of large basis set requirements for 2PA cross-section calculations. The capabilities of the new code are illustrated by calculations of the 2PA cross sections for model chromophores of the photoactive yellow and green fluorescent proteins.

  8. Benchmarking of a treatment planning system for spot scanning proton therapy: Comparison and analysis of robustness to setup errors of photon IMRT and proton SFUD treatment plans of base of skull meningioma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.

    Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.

  9. SU-E-T-22: A Deterministic Solver of the Boltzmann-Fokker-Planck Equation for Dose Calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, X; Gao, H; Paganetti, H

    2015-06-15

    Purpose: The Boltzmann-Fokker-Planck equation (BFPE) accurately models the migration of photons/charged particles in tissues. While the Monte Carlo (MC) method is popular for solving the BFPE in a statistical manner, we aim to develop a deterministic BFPE solver based on various state-of-the-art numerical acceleration techniques for rapid and accurate dose calculation. Methods: Our BFPE solver is based on a structured grid that is maximally parallelizable, with discretization in energy, angle and space, and its cross section coefficients are derived or directly imported from the Geant4 database. The physical processes that are taken into account are Compton scattering, photoelectric effect, and pair production for photons, and elastic scattering, ionization and bremsstrahlung for charged particles. While the spatial discretization is based on the diamond scheme, the angular discretization synergizes the finite element method (FEM) and spherical harmonics (SH). Thus, SH is used to globally expand the scattering kernel and FEM is used to locally discretize the angular sphere. As a result, this hybrid method (FEM-SH) is both accurate in dealing with forward-peaked scattering via FEM, and efficient for multi-energy-group computation via SH. In addition, FEM-SH enables the analytical integration in the energy variable of the delta scattering kernel for elastic scattering, with reduced truncation error compared to the numerical integration of the classic SH-based multi-energy-group method. Results: The accuracy of the proposed BFPE solver was benchmarked against Geant4 for photon dose calculation. In particular, FEM-SH had improved accuracy compared to FEM alone, while both were within 2% of the results obtained with Geant4. Conclusion: A deterministic solver of the Boltzmann-Fokker-Planck equation is developed for dose calculation, and benchmarked against Geant4. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).

  10. Student reactions to problem-based learning in photonics technician education

    NASA Astrophysics Data System (ADS)

    Massa, Nicholas M.; Donnelly, Judith; Hanes, Fenna

    2014-07-01

    Problem-based learning (PBL) is an instructional approach in which students learn problem-solving and teamwork skills by collaboratively solving complex real-world problems. Research shows that PBL improves student knowledge and retention, motivation, problem-solving skills, and the ability to skillfully apply knowledge in new and novel situations. One of the challenges faced by students accustomed to traditional didactic methods, however, is acclimating to the PBL process in which problem parameters are often ill-defined and ambiguous, often leading to frustration and disengagement with the learning process. To address this problem, the New England Board of Higher Education (NEBHE), funded by the National Science Foundation Advanced Technological Education (NSF-ATE) program, has created and field tested a comprehensive series of industry-based multimedia PBL "Challenges" designed to scaffold the development of students' problem solving and critical thinking skills. In this paper, we present the results of a pilot study conducted to examine student reactions to the PBL Challenges in photonics technician education. During the fall 2012 semester, students (n=12) in two associate degree level photonics courses engaged in PBL using the PBL Challenges. Qualitative and quantitative methods were used to assess student motivation, self-efficacy, critical thinking, metacognitive self-regulation, and peer learning using selected scales from the Motivated Strategies for Learning Questionnaire (MSLQ). Results showed positive gains in all variables. Follow-up focus group interviews yielded positive themes supporting the effectiveness of PBL in developing the knowledge, skills and attitudes of photonics technicians.

  11. Continuous energy adjoint transport for photons in PHITS

    NASA Astrophysics Data System (ADS)

    Malins, Alex; Machida, Masahiko; Niita, Koji

    2017-09-01

    Adjoint Monte Carlo can be an efficient algorithm for solving photon transport problems where the size of the tally is relatively small compared to the source. Such problems are typical in environmental radioactivity calculations, where natural or fallout radionuclides spread over a large area contribute to the air dose rate at a particular location. Moreover, photon transport with a continuous energy representation is vital for accurately calculating radiation protection quantities. Here we describe the incorporation of an adjoint Monte Carlo capability for continuous energy photon transport into the Particle and Heavy Ion Transport code System (PHITS). An adjoint cross section library for photon interactions was developed based on the JENDL-4.0 library, by adding cross sections for adjoint incoherent scattering and pair production. PHITS reads in the library and implements the adjoint transport algorithm by Hoogenboom. Adjoint pseudo-photons are spawned within the forward tally volume and transported through space. Currently, pseudo-photons can undergo coherent and incoherent scattering within the PHITS adjoint function. Photoelectric absorption is treated implicitly. The calculation result is recovered from the pseudo-photon flux calculated over the true source volume. A new adjoint tally function facilitates this conversion. This paper gives an overview of the new function and discusses potential future developments.
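
    The efficiency argument in the opening sentences can be caricatured in one dimension. In a purely absorbing medium the transport kernel is symmetric, so reciprocity applies directly: launching pseudo-particles from the small tally cell and scoring over the large source gives the same expectation as the forward calculation, but almost every history scores. This toy is only an illustration of the adjoint idea; it is not the PHITS algorithm, its cross-section treatment, or its tally conversion.

```python
import numpy as np

rng = np.random.default_rng(3)

SIG = 1.0                  # total (absorption) cross section, 1/cm
SRC = (0.0, 10.0)          # large distributed source region
DET = (4.95, 5.05)         # small tally (detector) cell
N = 200_000

def absorbed_in(region, n, birth_lo, birth_hi):
    """Track n particles in an infinite, purely absorbing 1D medium:
    birth point uniform in [birth_lo, birth_hi], direction +/-1,
    absorption after an exponentially distributed path length.
    Returns the fraction absorbed inside `region`."""
    x0 = rng.uniform(birth_lo, birth_hi, n)
    mu = rng.choice([-1.0, 1.0], n)
    s = rng.exponential(1.0 / SIG, n)
    xa = x0 + mu * s
    return np.mean((xa >= region[0]) & (xa <= region[1]))

# Forward: particles born across the whole source; few are absorbed in DET.
fwd = absorbed_in(DET, N, *SRC)

# Adjoint-style: launch pseudo-particles from the small detector cell and
# score over the large source region; rescale by the volume ratio (the
# kernel here is symmetric, so reciprocity applies directly).
vol_det = DET[1] - DET[0]
vol_src = SRC[1] - SRC[0]
adj = absorbed_in(SRC, N, *DET) * vol_det / vol_src

print(f"forward estimate: {fwd:.3e}")
print(f"adjoint estimate: {adj:.3e}")  # same expectation, but nearly every
                                       # adjoint history produces a score
```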

  12. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during the development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes, including ATHENA and the PENCIL code. MUSIC is able both to reproduce the behaviour of established and widely used codes and to match results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  13. Accurate ω-ψ Spectral Solution of the Singular Driven Cavity Problem

    NASA Astrophysics Data System (ADS)

    Auteri, F.; Quartapelle, L.; Vigevano, L.

    2002-08-01

    This article provides accurate spectral solutions of the driven cavity problem, calculated in the vorticity-stream function representation without smoothing the corner singularities—a prima facie impossible task. As in a recent benchmark spectral calculation by primitive variables of Botella and Peyret, closed-form contributions of the singular solution for both zero and finite Reynolds numbers are subtracted from the unknown of the problem tackled here numerically in biharmonic form. The method employed is based on a split approach to the vorticity and stream function equations, a Galerkin-Legendre approximation of the problem for the perturbation, and an evaluation of the nonlinear terms by Gauss-Legendre numerical integration. Results computed for Re=0, 100, and 1000 compare well with the benchmark steady solutions provided by the aforementioned collocation-Chebyshev projection method. The validity of the proposed singularity subtraction scheme for computing time-dependent solutions is also established.
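
    The subtraction idea is easy to demonstrate in one dimension with a deliberately simplified analogue: approximating a function with a known square-root singularity directly by Chebyshev polynomials converges slowly, while fitting only the smooth remainder after subtracting the singular part in closed form restores rapid convergence. The function and degrees below are illustrative choices, not the cavity problem itself.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Target with a known corner-type singularity at x = -1:
#   f(x) = sqrt(1 + x) + smooth part.
f_smooth = lambda x: np.cos(3 * x)
f_sing = lambda x: np.sqrt(1.0 + x)
f = lambda x: f_sing(x) + f_smooth(x)

x = np.cos(np.pi * np.arange(200) / 199)   # Chebyshev-type sample points
xt = np.linspace(-1.0, 1.0, 5001)          # dense test grid

for deg in (8, 16, 32):
    # Direct fit: the singularity ruins spectral convergence.
    direct = C.chebfit(x, f(x), deg)
    e_direct = np.max(np.abs(C.chebval(xt, direct) - f(xt)))
    # Subtraction: fit only the smooth remainder, then add the singular
    # part back in closed form.
    subtr = C.chebfit(x, f(x) - f_sing(x), deg)
    e_subtr = np.max(np.abs(C.chebval(xt, subtr) + f_sing(xt) - f(xt)))
    print(f"deg {deg:3d}: direct {e_direct:.1e}   subtracted {e_subtr:.1e}")
```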

  14. Benchmarking the D-Wave Two

    NASA Astrophysics Data System (ADS)

    Job, Joshua; Wang, Zhihui; Rønnow, Troels; Troyer, Matthias; Lidar, Daniel

    2014-03-01

    We report on experimental work benchmarking the performance of the D-Wave Two programmable annealer on its native Ising problem, and a comparison to available classical algorithms. In this talk we will focus on the comparison with an algorithm originally proposed and implemented by Alex Selby. This algorithm uses dynamic programming to repeatedly optimize over randomly selected maximal induced trees of the problem graph starting from a random initial state. If one is looking for a quantum advantage over classical algorithms, one should compare to classical algorithms which are designed and optimized to maximally take advantage of the structure of the type of problem one is using for the comparison. In that light, this classical algorithm should serve as a good gauge for any potential quantum speedup for the D-Wave Two.
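
    The key subroutine of the Selby-style classical baseline, exact optimization over a tree subgraph, is compact enough to sketch. On a tree, the Ising ground state is found exactly by leaf-to-root dynamic programming; the full algorithm repeats this over randomly selected maximal induced trees of the hardware graph. The sketch below covers only the single-tree step, with hypothetical couplings and fields.

```python
import numpy as np

def tree_ising_ground_state(n, edges, h, root=0):
    """Exact ground state of H = sum J_ij s_i s_j + sum h_i s_i on a tree,
    by leaf-to-root dynamic programming. `edges` maps (i, j) -> J_ij and
    must form a tree on n nodes."""
    adj = {i: [] for i in range(n)}
    for (i, j), J in edges.items():
        adj[i].append((j, J))
        adj[j].append((i, J))
    # Parent pointers and a parent-before-child processing order (DFS).
    parent, order, seen, stack = {root: (None, 0.0)}, [], {root}, [root]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, J in adj[u]:
            if v not in seen:
                seen.add(v)
                parent[v] = (u, J)
                stack.append(v)
    # m[u][si] = min energy of u's subtree given parent spin s (index si).
    m = np.zeros((n, 2))
    choice = np.zeros((n, 2), dtype=int)
    for u in reversed(order):
        pu, J = parent[u]
        for si, s in enumerate((-1, +1)):        # parent spin
            best = np.inf
            for ti, t in enumerate((-1, +1)):    # spin of u
                e = h[u] * t + (J * s * t if pu is not None else 0.0)
                e += sum(m[v][ti] for v, _ in adj[u] if v != pu)
                if e < best:
                    best, choice[u][si] = e, ti
            m[u][si] = best
    # Back-track the optimal spins from the root down.
    spins = np.zeros(n, dtype=int)
    for u in order:
        pu, _ = parent[u]
        si = 0 if pu is None else (spins[pu] + 1) // 2
        spins[u] = 2 * choice[u][si] - 1
    return spins, float(m[root][0])

# Example: a 4-node star with uniform couplings and small fields.
spins, energy = tree_ising_ground_state(
    4, {(0, 1): 1.0, (0, 2): 1.0, (0, 3): 1.0}, h=[0.1, -0.2, 0.3, 0.0])
print(spins, energy)   # a ground state with energy -3.0
```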

  15. (U) Analytic First and Second Derivatives of the Uncollided Leakage for a Homogeneous Sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favorite, Jeffrey A.

    2017-04-26

    The second-order adjoint sensitivity analysis methodology (2nd-ASAM), developed by Cacuci, has been applied to derive second derivatives of a response with respect to input parameters for uncollided particles in an inhomogeneous transport problem. In this memo, we present an analytic benchmark for verifying the derivatives of the 2nd-ASAM. The problem is a homogeneous sphere, and the response is the uncollided total leakage. This memo does not repeat the formulas given in Ref. 2. We are preparing a journal article that will include the derivation of Ref. 2 and the benchmark of this memo.

  16. A Study of Fixed-Order Mixed Norm Designs for a Benchmark Problem in Structural Control

    NASA Technical Reports Server (NTRS)

    Whorton, Mark S.; Calise, Anthony J.; Hsu, C. C.

    1998-01-01

    This study investigates the use of H2, mu-synthesis, and mixed H2/mu methods to construct full-order controllers and optimized controllers of fixed dimensions. The benchmark problem definition is first extended to include uncertainty within the controller bandwidth in the form of parametric uncertainty representative of uncertainty in the natural frequencies of the design model. The sensitivity of the H2 design to unmodelled dynamics and parametric uncertainty is evaluated for a range of controller levels of authority. Next, mu-synthesis methods are applied to design full-order compensators that are robust to both unmodelled dynamics and to parametric uncertainty. Finally, a set of mixed H2/mu compensators are designed which are optimized for a fixed compensator dimension. These mixed norm designs recover the H2 design performance levels while providing the same levels of robust stability as the mu designs. It is shown that designing with the mixed norm approach permits higher levels of controller authority for which the H2 designs are destabilizing. The benchmark problem is that of an active tendon system. The controller designs are all based on the use of acceleration feedback.

  17. Integrated Sensing Processor, Phase 2

    DTIC Science & Technology

    2005-12-01

    performance analysis for several baseline classifiers including neural nets, linear classifiers, and kNN classifiers. Use of CCDR as a preprocessing step... below the level of the benchmark non-linear classifier for this problem (kNN). Furthermore, the CCDR-preconditioned kNN achieved a 10% improvement over... the benchmark kNN without CCDR. Finally, we found an important connection between intrinsic dimension estimation via entropic graphs and the optimal

  18. Numerical Boundary Conditions for Computational Aeroacoustics Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Kurbatskii, Konstantin A.; Fang, Jun

    1997-01-01

    Category 1, Problems 1 and 2, Category 2, Problem 2, and Category 3, Problem 2 are solved computationally using the Dispersion-Relation-Preserving (DRP) scheme. All these problems are governed by the linearized Euler equations. The resolution requirements of the DRP scheme for maintaining low numerical dispersion and dissipation as well as accurate wave speeds in solving the linearized Euler equations are now well understood. As long as 8 or more mesh points per wavelength is employed in the numerical computation, high quality results are assured. For the first three categories of benchmark problems, therefore, the real challenge is to develop high quality numerical boundary conditions. For Category 1, Problems 1 and 2, it is the curved wall boundary conditions. For Category 2, Problem 2, it is the internal radiation boundary conditions inside the duct. For Category 3, Problem 2, they are the inflow and outflow boundary conditions upstream and downstream of the blade row. These are the foci of the present investigation. Special nonhomogeneous radiation boundary conditions that generate the incoming disturbances and at the same time allow the outgoing reflected or scattered acoustic disturbances to leave the computation domain without significant reflection are developed. Numerical results based on these boundary conditions are provided.

  19. Supply network configuration—A benchmarking problem

    NASA Astrophysics Data System (ADS)

    Brandenburg, Marcus

    2018-03-01

    Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.

  20. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    NASA Astrophysics Data System (ADS)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, the restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original editions and the benchmarked methods, and are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.

  1. The rotating movement of three immiscible fluids - A benchmark problem

    USGS Publications Warehouse

    Bakker, M.; Oude Essink, G.H.P.; Langevin, C.D.

    2004-01-01

    A benchmark problem involving the rotating movement of three immiscible fluids is proposed for verifying the density-dependent flow component of groundwater flow codes. The problem consists of a two-dimensional strip in the vertical plane filled with three fluids of different densities separated by interfaces. Initially, the interfaces between the fluids make a 45° angle with the horizontal. Over time, the fluids rotate to the stable position whereby the interfaces are horizontal; all flow is caused by density differences. Two cases of the problem are presented, one resulting in a symmetric flow field and one resulting in an asymmetric flow field. An exact analytical solution for the initial flow field is presented by application of the vortex theory and complex variables. Numerical results are obtained using three variable-density groundwater flow codes (SWI, MOCDENS3D, and SEAWAT). Initial horizontal velocities of the interfaces, as simulated by the three codes, compare well with the exact solution. The three codes are used to simulate the positions of the interfaces at two times; the three codes produce nearly identical results. The agreement between the results is evidence that the specific rotational behavior predicted by the models is correct. It also shows that the proposed problem may be used to benchmark variable-density codes. It is concluded that the three models can be used to model accurately the movement of interfaces between immiscible fluids, and have little or no numerical dispersion.

  2. A new numerical benchmark for variably saturated variable-density flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Guevara, Carlos; Graf, Thomas

    2016-04-01

    In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations can lead to potentially unstable situations, in which a dense fluid overlies a less dense fluid. Such situations can produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upwards flow of freshwater (Simmons et al., Transp. Porous Media, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model can be validated (Diersch and Kolditz, Adv. Water Resour., 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Media, 2002) are used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004) coupled with PEST (www.pesthomepage.org) is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.

  3. Using GTO-Velo to Facilitate Communication and Sharing of Simulation Results in Support of the Geothermal Technologies Office Code Comparison Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Signe K.; Purohit, Sumit; Boyd, Lauren W.

    The Geothermal Technologies Office Code Comparison Study (GTO-CCS) aims to support the DOE Geothermal Technologies Office in organizing and executing a model comparison activity. This project is directed at testing, diagnosing differences, and demonstrating modeling capabilities of a worldwide collection of numerical simulators for evaluating geothermal technologies. Teams of researchers are collaborating in this code comparison effort, and it is important to be able to share results in a forum where technical discussions can easily take place without requiring teams to travel to a common location. Pacific Northwest National Laboratory has developed an open-source, flexible framework called Velo that provides a knowledge management infrastructure and tools to support modeling and simulation for a variety of types of projects in a number of scientific domains. GTO-Velo is a customized version of the Velo Framework that is being used as the collaborative tool in support of the GTO-CCS project. Velo is designed around a novel integration of a collaborative Web-based environment and a scalable enterprise Content Management System (CMS). The underlying framework provides a flexible and unstructured data storage system that allows for easy upload of files that can be in any format. Data files are organized in hierarchical folders and each folder and each file has a corresponding wiki page for metadata. The user interacts with Velo through a Web-browser-based wiki technology, providing the benefit of familiarity and ease of use. High-level folders have been defined in GTO-Velo for the benchmark problem descriptions, descriptions of simulator/code capabilities, a project notebook, and folders for participating teams. Each team has a subfolder with write access limited only to the team members, where they can upload their simulation results. The GTO-CCS participants are charged with defining the benchmark problems for the study, and as each GTO-CCS benchmark problem is defined, the problem creator can provide a description using a template on the metadata page corresponding to the benchmark problem folder. Project documents, references and videos of the weekly online meetings are shared via GTO-Velo. A results comparison tool allows users to plot their uploaded simulation results on the fly, along with those of other teams, to facilitate weekly discussions of the benchmark problem results being generated by the teams. GTO-Velo is an invaluable tool providing the project coordinators and team members with a framework for collaboration among geographically dispersed organizations.

  4. Search for Chameleon Particles Using a Photon-Regeneration Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, A. S.; Wester, W.; Baumbaugh, A.

    2009-01-23

    We report the first results from the GammeV search for chameleon particles, which may be created via photon-photon interactions within a strong magnetic field. Chameleons are hypothesized scalar fields that could explain the dark energy problem. We implement a novel technique to create and trap the reflective particles within a jar and to detect them later via their afterglow as they slowly convert back into photons. These measurements provide the first experimental constraints on the couplings of chameleons to photons.

  5. SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis Smith; James Knudsen

    As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion system operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using the Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation, or in this case the failure, of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and of how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to obtain the correct results. While conventional methods may be used to evaluate a dynamic system like that in the benchmark, the level of effort required may preclude their use on real-world problems.
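
    For independent, identical thruster assemblies, the N-out-of-M gates mentioned above reduce to a binomial computation. The sketch below uses hypothetical failure probabilities (not the benchmark's data) and assumes, purely for illustration, that an assembly needs its power unit and both ion engines to function.

```python
from math import comb

def k_out_of_n_failure(n, k_required, p_fail):
    """Probability that fewer than k_required of n independent, identical
    assemblies survive, given per-assembly failure probability p_fail."""
    p_ok = 1.0 - p_fail
    p_success = sum(comb(n, k) * p_ok**k * p_fail**(n - k)
                    for k in range(k_required, n + 1))
    return 1.0 - p_success

# Hypothetical per-component failure probabilities for one mission phase.
p_ppu, p_eng = 0.02, 0.03
# Illustrative series assumption: an assembly works only if its propulsion
# power unit and both of its ion engines work.
p_asm = 1.0 - (1.0 - p_ppu) * (1.0 - p_eng) ** 2

# Five thruster assemblies; suppose this phase needs at least 3 of them.
print(f"P(phase failure) = {k_out_of_n_failure(5, 3, p_asm):.2e}")
```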

  6. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  7. Algorithm and Architecture Independent Benchmarking with SEAK

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  8. Photon scattering from a system of multilevel quantum emitters. II. Application to emitters coupled to a one-dimensional waveguide

    NASA Astrophysics Data System (ADS)

    Das, Sumanta; Elfving, Vincent E.; Reiter, Florentin; Sørensen, Anders S.

    2018-04-01

    In a preceding paper we introduced a formalism to study the scattering of low-intensity fields from a system of multilevel emitters embedded in a three-dimensional (3D) dielectric medium. Here we show how this photon-scattering relation can be used to analyze the scattering of single photons and weak coherent states from any generic multilevel quantum emitter coupled to a one-dimensional (1D) waveguide. The reduction of the photon-scattering relation to 1D waveguides provides a direct solution of the scattering problem involving low-intensity fields in the waveguide QED regime. To show how our formalism works, we consider examples of multilevel emitters and evaluate the transmitted and reflected field amplitudes. Furthermore, we extend our study to include the dynamical response of the emitters for scattering of a weak coherent photon pulse. As our photon-scattering relation is based on the Heisenberg picture, it is quite useful for problems involving photodetection in the waveguide architecture. We show this by considering a specific problem of state generation by photodetection in a multilevel emitter, where our formalism exhibits its full potential. Since the considered emitters are generic, the 1D results apply to a plethora of physical systems such as atoms, ions, quantum dots, superconducting qubits, and nitrogen-vacancy centers coupled to a 1D waveguide or transmission line.
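
    In the simplest special case covered by such formalisms, a lossless two-level emitter symmetrically coupled to a 1D waveguide, the single-photon scattering amplitudes reduce to standard textbook expressions (up to convention-dependent phases). The sketch below evaluates them and checks flux conservation; it is a minimal special case, not the multilevel formalism of the paper.

```python
import numpy as np

def t_r(delta, gamma_1d):
    """Single-photon transmission/reflection amplitudes for a lossless
    two-level emitter symmetrically coupled to a 1D waveguide (standard
    result; delta = photon detuning from the emitter transition)."""
    t = delta / (delta + 1j * gamma_1d / 2.0)
    r = t - 1.0    # = -i*(gamma/2) / (delta + i*gamma/2)
    return t, r

delta = np.linspace(-5.0, 5.0, 11)
t, r = t_r(delta, gamma_1d=1.0)
# On resonance (delta = 0) the photon is perfectly reflected: |t|^2 = 0.
print(np.round(np.abs(t)**2 + np.abs(r)**2, 12))  # flux conservation -> 1
```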

  9. Benchmarking Problems Used in Second Year Level Organic Chemistry Instruction

    ERIC Educational Resources Information Center

    Raker, Jeffrey R.; Towns, Marcy H.

    2010-01-01

    Investigations of the problem types used in college-level general chemistry examinations have been reported in this Journal and were first reported in the "Journal of Chemical Education" in 1924. This study extends the findings from general chemistry to the problems of four college-level organic chemistry courses. Three problem…

  10. Performance Comparison of NAMI DANCE and FLOW-3D® Models in Tsunami Propagation, Inundation and Currents using NTHMP Benchmark Problems

    NASA Astrophysics Data System (ADS)

    Velioglu Sogut, Deniz; Yalciner, Ahmet Cevdet

    2018-06-01

    Field observations provide valuable data regarding nearshore tsunami impact, yet only in inundation areas where tsunami waves have already flooded. Therefore, tsunami modeling is essential to understand tsunami behavior and prepare for tsunami inundation. It is necessary that all numerical models used in tsunami emergency planning be subject to benchmark tests for validation and verification. This study focuses on two numerical codes, NAMI DANCE and FLOW-3D®, for validation and performance comparison. NAMI DANCE is an in-house tsunami numerical model developed by the Ocean Engineering Research Center of Middle East Technical University, Turkey and the Laboratory of Special Research Bureau for Automation of Marine Research, Russia. FLOW-3D® is a general purpose computational fluid dynamics software package, which was developed by scientists who pioneered the design of the Volume-of-Fluid technique. The codes are validated and their performances are compared via analytical, experimental and field benchmark problems, which are documented in the "Proceedings and Results of the 2011 National Tsunami Hazard Mitigation Program (NTHMP) Model Benchmarking Workshop" and the "Proceedings and Results of the NTHMP 2015 Tsunami Current Modeling Workshop". The variations between the numerical solutions of these two models are evaluated through statistical error analysis.
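
    A statistical error analysis of model output against a benchmark record boils down to error measures between time series. The sketch below computes a few common ones (RMSE, range-normalized RMSE, maximum absolute error, bias); these are generic illustrative metrics, not necessarily the exact ones adopted in the NTHMP workshop protocols.

```python
import numpy as np

def error_metrics(model, reference):
    """Simple error measures for comparing a simulated time series
    against a benchmark record."""
    model, reference = np.asarray(model), np.asarray(reference)
    resid = model - reference
    rmse = np.sqrt(np.mean(resid**2))
    span = reference.max() - reference.min()
    return {
        "rmse": rmse,
        "nrmse": rmse / span,                  # normalized by data range
        "max_abs_err": np.max(np.abs(resid)),
        "bias": np.mean(resid),
    }

t = np.linspace(0.0, 10.0, 500)
reference = np.sin(t)                          # stand-in benchmark record
model = np.sin(t) + 0.05 * np.random.default_rng(4).normal(size=t.size)
print(error_metrics(model, reference))
```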

  11. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    NASA Astrophysics Data System (ADS)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced and damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely based on simulation and/or experimental measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem, which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the non-linear element in the problem with a priori knowledge about its position.

  12. Benchmark results in the 2D lattice Thirring model with a chemical potential

    NASA Astrophysics Data System (ADS)

    Ayyar, Venkitesh; Chandrasekharan, Shailesh; Rantaharju, Jarno

    2018-03-01

    We study the two-dimensional lattice Thirring model in the presence of a fermion chemical potential. Our model is asymptotically free and contains massive fermions that mimic a baryon and light bosons that mimic pions. Hence, it is a useful toy model for QCD, especially since it, too, suffers from a sign problem in the auxiliary field formulation in the presence of a fermion chemical potential. In this work, we formulate the model in both the world line and fermion-bag representations and show that the sign problem can be completely eliminated with open boundary conditions when the fermions are massless. Hence, we are able to accurately compute a variety of interesting quantities in the model, and these results could provide benchmarks for other methods that are being developed to solve the sign problem in QCD.

  13. Demonstration of a tool for automatic learning and re-use of knowledge in the activated sludge process.

    PubMed

    Comas, J; Rodríguez-Roda, I; Poch, M; Gernaey, K V; Rosen, C; Jeppsson, U

    2006-01-01

    Wastewater treatment plant operators encounter complex operational problems related to the activated sludge process and usually respond to these by applying their own intuition and by taking advantage of what they have learnt from past experiences of similar problems. However, previous process experiences are not easy to integrate in numerical control, and new tools must be developed to enable re-use of plant operating experience. The aim of this paper is to investigate the usefulness of a case-based reasoning (CBR) approach to apply learning and re-use of knowledge gained during past incidents to confront actual complex problems through the IWA/COST Benchmark protocol. A case study shows that the proposed CBR system achieves a significant improvement of the benchmark plant performance when facing a high-flow event disturbance.

  14. ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics (CAA)

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C. (Editor); Ristorcelli, J. Ray (Editor); Tam, Christopher K. W. (Editor)

    1995-01-01

    The proceedings of the Benchmark Problems in Computational Aeroacoustics Workshop held at NASA Langley Research Center are the subject of this report. The purpose of the Workshop was to assess the utility of a number of numerical schemes in the context of the unusual requirements of aeroacoustical calculations. The schemes were assessed from the viewpoint of dispersion and dissipation, issues important to long-time integration and long-distance propagation in aeroacoustics. Also investigated was the effect of implementing different boundary conditions. The Workshop included a forum in which practical engineering problems related to computational aeroacoustics were discussed. This discussion took the form of a dialogue between an industrial panel and the workshop participants and was an effort to suggest the direction of evolution of this field in the context of current engineering needs.

  15. Measurement problem and local hidden variables with entangled photons

    NASA Astrophysics Data System (ADS)

    Muchowski, Eugen

    2017-12-01

    It is shown that there is no remote action with polarization measurements of photons in singlet state. A model is presented introducing a hidden parameter which determines the polarizer output. This model is able to explain the polarization measurement results with entangled photons. It is not ruled out by Bell's Theorem.

  16. Photonic Design: From Fundamental Solar Cell Physics to Computational Inverse Design

    NASA Astrophysics Data System (ADS)

    Miller, Owen Dennis

    Photonic innovation is becoming ever more important in the modern world. Optical systems are dominating shorter and shorter communications distances, LEDs are rapidly emerging for a variety of applications, and solar cells show potential to be a mainstream technology in the energy space. The need for novel, energy-efficient photonic and optoelectronic devices will only increase. This work unites fundamental physics and a novel computational inverse design approach towards such innovation. The first half of the dissertation is devoted to the physics of high-efficiency solar cells. As solar cells approach fundamental efficiency limits, their internal physics transforms. Photonic considerations, instead of electronic ones, are the key to reaching the highest voltages and efficiencies. Proper photon management led to Alta Devices' recent dramatic increase of the solar cell efficiency record to 28.3%. Moreover, approaching the Shockley-Queisser limit for any solar cell technology will require light extraction to become a part of all future designs. The second half of the dissertation introduces inverse design as a new computational paradigm in photonics. An assortment of techniques (FDTD, FEM, etc.) has enabled quick and accurate simulation of the "forward problem" of finding fields for a given geometry. However, scientists and engineers are typically more interested in the inverse problem: for a desired functionality, what geometry is needed? Answering this question breaks from the emphasis on the forward problem and forges a new path in computational photonics. The framework of shape calculus enables one to quickly find superior, non-intuitive designs. Novel designs for optical cloaking and sub-wavelength solar cell applications are presented.

  17. Localization control of few-photon states in parity-symmetric ‘photonic molecules’ under balanced pumping

    NASA Astrophysics Data System (ADS)

    Bentley, C. D. B.; Celestino, A.; Yacomotti, A. M.; El-Ganainy, R.; Eisfeld, A.

    2018-06-01

    We theoretically investigate the problem of localization control of few-photon states in driven-dissipative parity-symmetric photonic molecules. Photonic molecules are multi-cavity photonic systems. We show that a quantum feedback loop can utilize the information of the spontaneously-emitted photons from each cavity to induce asymmetric photon population in the cavities, while maintaining a balanced pump that respects parity symmetry. To better understand the system’s behavior, we characterize the degree of asymmetry as a function of the coupling between the two optical cavities. Contrary to intuitive expectations, we find that in some regimes the coupling can enhance the population asymmetry. We also show that these results are robust against experimental imperfections and limitations such as detection efficiency.

  18. Fisher information theory for parameter estimation in single molecule microscopy: tutorial

    PubMed Central

    Chao, Jerry; Ward, E. Sally; Ober, Raimund J.

    2016-01-01

    Estimation of a parameter of interest from image data represents a task that is commonly carried out in single molecule microscopy data analysis. The determination of the positional coordinates of a molecule from its image, for example, forms the basis of standard applications such as single molecule tracking and localization-based superresolution image reconstruction. Assuming that the estimator used recovers, on average, the true value of the parameter, its accuracy, or standard deviation, is then at best equal to the square root of the Cramér-Rao lower bound. The Cramér-Rao lower bound can therefore be used as a benchmark in the evaluation of the accuracy of an estimator. Additionally, as its value can be computed and assessed for different experimental settings, it is useful as an experimental design tool. This tutorial demonstrates a mathematical framework that has been specifically developed to calculate the Cramér-Rao lower bound for estimation problems in single molecule microscopy and, more broadly, fluorescence microscopy. The material includes a presentation of the photon detection process that underlies all image data, various image data models that describe images acquired with different detector types, and Fisher information expressions that are necessary for the calculation of the lower bound. Throughout the tutorial, examples involving concrete estimation problems are used to illustrate the effects of various factors on the accuracy of parameter estimation, and more generally, to demonstrate the flexibility of the mathematical framework. PMID:27409706
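
    For a concrete flavor of such a calculation, the sketch below evaluates the Fisher information and the resulting lower bound for the textbook case of 1D localization of a Gaussian PSF under Poisson photon statistics; the pixel size and photon budget are assumed values, and the tutorial's own expressions are more general.

        # CRLB for localizing an emitter imaged with a Gaussian PSF under
        # shot-noise-limited (Poisson) statistics -- a standard special case.
        import numpy as np

        N, sigma, pix = 1000.0, 1.3, 1.0   # photons, PSF width, pixel size (arb. units)
        edges = np.arange(-10.0, 10.0 + pix, pix)
        centers = 0.5 * (edges[:-1] + edges[1:])

        def mu(x0):
            """Expected photon count per (1D) pixel for emitter position x0."""
            g = np.exp(-(centers - x0) ** 2 / (2 * sigma**2))
            return N * g / g.sum()

        # Fisher information I(x0) = sum_k mu_k'(x0)^2 / mu_k(x0), via finite differences.
        h = 1e-4
        dmu = (mu(h) - mu(-h)) / (2 * h)
        info = np.sum(dmu**2 / mu(0.0))
        print("CRLB std dev:", 1.0 / np.sqrt(info))   # close to sigma/sqrt(N)
        print("sigma/sqrt(N):", sigma / np.sqrt(N))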

  19. A Benchmark Problem for Development of Autonomous Structural Modal Identification

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S.; Woodard, Stanley E.; Juang, Jer-Nan

    1996-01-01

    This paper summarizes modal identification results obtained using an autonomous version of the Eigensystem Realization Algorithm on a dynamically complex, laboratory structure. The benchmark problem uses 48 of 768 free-decay responses measured in a complete modal survey test. The true modal parameters of the structure are well known from two previous, independent investigations. Without user involvement, the autonomous data analysis identified 24 to 33 structural modes with good to excellent accuracy in 62 seconds of CPU time (on a DEC Alpha 4000 computer). The modal identification technique described in the paper is the baseline algorithm for NASA's Autonomous Dynamics Determination (ADD) experiment scheduled to fly on International Space Station assembly flights in 1997-1999.
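
    A bare-bones, single-channel version of the ERA computation described above might look as follows; the flight algorithm adds mode condensation and accuracy indicators well beyond this sketch, and the test signal here is synthetic.

        # Minimal Eigensystem Realization Algorithm on free-decay samples y[k].
        import numpy as np

        def era(y, dt, nrows=60, ncols=60, order=4):
            """Identify frequencies (Hz) and damping ratios from a free decay."""
            H0 = np.array([[y[i + j] for j in range(ncols)] for i in range(nrows)])
            H1 = np.array([[y[i + j + 1] for j in range(ncols)] for i in range(nrows)])
            U, s, Vt = np.linalg.svd(H0)
            U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
            Sinv = np.diag(1.0 / np.sqrt(s))
            A = Sinv @ U.T @ H1 @ Vt.T @ Sinv          # reduced state matrix
            lam = np.linalg.eigvals(A)
            s_ct = np.log(lam) / dt                    # continuous-time poles
            freq = np.abs(s_ct) / (2 * np.pi)
            damp = -s_ct.real / np.abs(s_ct)
            return freq, damp

        # Synthetic two-mode decay as a check (assumed test signal).
        dt = 0.01
        t = np.arange(0, 10, dt)
        y = np.exp(-0.2 * t) * np.cos(2 * np.pi * 1.5 * t) \
          + 0.5 * np.exp(-0.5 * t) * np.cos(2 * np.pi * 4.0 * t)
        print(era(y, dt))   # recovers ~1.5 Hz and ~4.0 Hz with small damping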

  20. Developing a benchmark for emotional analysis of music

    PubMed Central

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has rapidly expanded over the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the ‘Emotion in Music’ task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature-sets work best for dynamic MER. PMID:28282400

  1. Developing a benchmark for emotional analysis of music.

    PubMed

    Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad

    2017-01-01

    The music emotion recognition (MER) field has rapidly expanded over the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, a MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature-sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent-neural-network-based approaches combined with large feature-sets work best for dynamic MER.

  2. A large-scale benchmark of gene prioritization methods.

    PubMed

    Guala, Dimitri; Sonnhammer, Erik L L

    2017-04-21

    In order to maximize the use of results from high-throughput experimental studies, e.g. GWAS, for identification and diagnostics of new disease-associated genes, it is important to have properly analyzed and benchmarked gene prioritization tools. While prospective benchmarks are underpowered to provide statistically significant results in their attempt to differentiate the performance of gene prioritization tools, a strategy for retrospective benchmarking has been missing, and new tools usually only provide internal validations. The Gene Ontology (GO) contains genes clustered around annotation terms. This intrinsic property of GO can be utilized in the construction of robust benchmarks that are objective with respect to the problem domain. We demonstrate how this can be achieved for network-based gene prioritization tools, utilizing the FunCoup network. We use cross-validation and a set of appropriate performance measures to compare state-of-the-art gene prioritization algorithms: three based on network diffusion (NetRank and two implementations of Random Walk with Restart), and MaxLink, which utilizes the network neighborhood. Our benchmark suite provides a systematic and objective way to compare the multitude of available and future gene prioritization tools, enabling researchers to select the best gene prioritization tool for the task at hand, and helping to guide the development of more accurate methods.

  3. A hybrid heuristic for the multiple choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd

    2013-08-01

    In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem where items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve in part because of its choice constraints. Many real applications lead to very large scale multiple choice multidimensional knapsack problems that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and reduced problems that are obtained by fixing some variables of the problem. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard instances generated randomly. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.
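
    The core LP-relaxation-and-fixing loop can be caricatured in a few lines. The sketch below (toy data, scipy's linprog, and a single greedy fixing pass) omits the article's cuts, reformulation, and iterative bound updates, so it is a simplification of the scheme rather than the authors' heuristic.

        # Rough LP-relaxation-and-fix heuristic for the multiple-choice
        # multidimensional knapsack problem (made-up toy numbers).
        import numpy as np
        from scipy.optimize import linprog

        v = np.array([[10.0, 7.0, 3.0],              # value of item j in class i
                      [ 8.0, 6.0, 2.0]])
        w = np.array([[[5, 4], [3, 2], [1, 1]],      # weights per resource (2 resources)
                      [[4, 5], [2, 3], [1, 1]]], dtype=float)
        cap = np.array([7.0, 7.0])
        ncls, nitm, nres = v.shape[0], v.shape[1], cap.size

        # LP relaxation: maximize sum(v*x), one item per class, resource limits, 0<=x<=1.
        c = -v.ravel()
        A_eq = np.zeros((ncls, ncls * nitm))
        for i in range(ncls):
            A_eq[i, i * nitm:(i + 1) * nitm] = 1.0
        A_ub = w.reshape(ncls * nitm, nres).T
        res = linprog(c, A_ub=A_ub, b_ub=cap, A_eq=A_eq, b_eq=np.ones(ncls),
                      bounds=(0, 1), method="highs")
        x = res.x.reshape(ncls, nitm)

        # "Fixing" step: per class, take the item with the largest LP value that
        # still fits, scanning classes by how decisive their LP row is.
        load, choice = np.zeros(nres), {}
        for i in sorted(range(ncls), key=lambda i: -x[i].max()):
            for j in np.argsort(-x[i]):
                if np.all(load + w[i, j] <= cap):
                    choice[i], load = j, load + w[i, j]
                    break
        print("LP bound:", -res.fun, " heuristic choice:", choice,
              " value:", sum(v[i, j] for i, j in choice.items()))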

  4. Comptonization in Ultra-Strong Magnetic Fields: Numerical Solution to the Radiative Transfer Problem

    NASA Technical Reports Server (NTRS)

    Ceccobello, C.; Farinelli, R.; Titarchuk, L.

    2014-01-01

    We consider the radiative transfer problem in a plane-parallel slab of thermal electrons in the presence of an ultra-strong magnetic field (B ≳ B_c ≈ 4.4 × 10^13 G). Under these conditions, the magnetic field behaves like a birefringent medium for the propagating photons, and the electromagnetic radiation is split into two polarization modes, ordinary and extraordinary, that have different cross-sections. When the optical depth of the slab is large, the ordinary-mode photons are strongly Comptonized and the photon field is dominated by an isotropic component. Aims. The radiative transfer problem in strong magnetic fields presents many mathematical issues, and analytical or numerical solutions can be obtained only under certain approximations. We investigate this problem both from the analytical and numerical point of view, provide a test of the previous analytical estimates, and extend these results with numerical techniques. Methods. We consider here the case of low-temperature black-body photons propagating in a sub-relativistic temperature plasma, which allows us to deal with a semi-Fokker-Planck approximation of the radiative transfer equation. The problem can then be treated with the variable separation method, and we use a numerical technique to find solutions to the eigenvalue problem in the case of a singular kernel of the space operator. The singularity of the space kernel is the result of the strong angular dependence of the electron cross-section in the presence of a strong magnetic field. Results. We provide the numerical solution obtained for the eigenvalues and eigenfunctions of the space operator, and the emerging Comptonization spectrum of the ordinary-mode photons for any eigenvalue of the space equation and for energies significantly less than the cyclotron energy, which is on the order of MeV for the magnetic field intensity considered here. Conclusions. We derived the specific intensity of the ordinary photons under the approximation of large angle and large optical depth. These assumptions allow the equation to be treated using a diffusion-like approximation.

  5. A dynamic fault tree model of a propulsion system

    NASA Technical Reports Server (NTRS)

    Xu, Hong; Dugan, Joanne Bechta; Meshkat, Leila

    2006-01-01

    We present a dynamic fault tree model of the benchmark propulsion system, and solve it using Galileo. Dynamic fault trees (DFT) extend traditional static fault trees with special gates to model spares and other sequence dependencies. Galileo solves DFT models using a judicious combination of automatically generated Markov and Binary Decision Diagram models. Galileo easily handles the complexities exhibited by the benchmark problem. In particular, Galileo is designed to model phased mission systems.

  6. Global-local methodologies and their application to nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1989-01-01

    An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.

  7. A community detection algorithm using network topologies and rule-based hierarchical arc-merging strategies

    PubMed Central

    2017-01-01

    The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) examination, meaning the ability to examine mitigating resolution limit problems using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate the correctness of a ground-truth community, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance efficiency, and that HAM-identified and ground-truth communities were comparable in terms of social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100

  8. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    PubMed Central

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm for QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585

  9. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem.

    PubMed

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm for QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them.
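
    The tabu-search ingredient that replaces mutation can be sketched on its own. Below is a minimal swap-neighborhood tabu search for a random toy QAP instance; the tenure and iteration budget are assumptions, and the BBO wrapper around it is omitted.

        # Minimal tabu-search core for the QAP (swap neighborhood, fixed tenure).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 8
        F = rng.integers(0, 10, (n, n)); F = (F + F.T) // 2   # toy flow matrix
        D = rng.integers(0, 10, (n, n)); D = (D + D.T) // 2   # toy distance matrix

        def cost(p):
            return int((F * D[np.ix_(p, p)]).sum())

        p = rng.permutation(n)
        best_p, best_c = p.copy(), cost(p)
        tabu = {}                        # (i, j) -> iteration until which swap is tabu
        for it in range(500):
            move, move_c = None, None
            for i in range(n - 1):
                for j in range(i + 1, n):
                    q = p.copy(); q[i], q[j] = q[j], q[i]
                    cq = cost(q)
                    is_tabu = tabu.get((i, j), -1) >= it
                    # Aspiration: a tabu move is allowed if it beats the best.
                    if (not is_tabu or cq < best_c) and (move_c is None or cq < move_c):
                        move, move_c = (i, j), cq
            i, j = move
            p[i], p[j] = p[j], p[i]
            tabu[(i, j)] = it + 10       # tenure of 10 iterations (assumed)
            if move_c < best_c:
                best_p, best_c = p.copy(), move_c
        print(best_c, best_p)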

  10. Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.

    NASA Astrophysics Data System (ADS)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of seven proposed benchmarks. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements: This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  11. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGES

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  12. Finite element method analysis of band gap and transmission of two-dimensional metallic photonic crystals at terahertz frequencies.

    PubMed

    Degirmenci, Elif; Landais, Pascal

    2013-10-20

    Photonic band gap and transmission characteristics of 2D metallic photonic crystals at THz frequencies have been investigated using the finite element method (FEM). Photonic crystals composed of metallic rods in air, in square and triangular lattice arrangements, are considered for transverse electric and transverse magnetic polarizations. The modes and band gap characteristics of the metallic photonic crystal structure are investigated by solving the eigenvalue problem over a unit cell of the lattice using periodic boundary conditions. A photonic band gap diagram of a dielectric photonic crystal in a square lattice array is also considered and compared with well-known plane wave expansion results, verifying our FEM approach. The photonic band gap designs for both dielectric and metallic photonic crystals are consistent with previous studies obtained by different methods. A perfect match is obtained between the photonic band gap diagrams and the transmission spectra of the corresponding lattice structures.
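
    For reference, the plane wave expansion method mentioned above condenses to a small generalized eigenproblem. The sketch below computes TM-polarized bands of a square lattice of dielectric rods; the rod parameters are the classic alumina-rod values assumed for illustration, and the paper's FEM formulation is not reproduced here.

        # Plane-wave expansion for TM bands of a 2D square lattice of rods.
        import numpy as np
        from scipy.linalg import eigh
        from scipy.special import j1

        a, r, eps_rod, eps_bg, NG = 1.0, 0.2, 8.9, 1.0, 5
        g = 2 * np.pi / a
        G = np.array([(g * m, g * n) for m in range(-NG, NG + 1)
                                      for n in range(-NG, NG + 1)])
        f = np.pi * r**2 / a**2                     # rod filling fraction

        def eps_ft(dG):
            """Fourier coefficient of eps(r) for a circular rod."""
            q = np.hypot(*dG)
            if q < 1e-12:
                return eps_bg + f * (eps_rod - eps_bg)
            return (eps_rod - eps_bg) * 2 * f * j1(q * r) / (q * r)

        E = np.array([[eps_ft(Gi - Gj) for Gj in G] for Gi in G])

        def tm_bands(k, nbands=4):
            """Solve |k+G|^2 a_G = (w/c)^2 sum_G' eps(G-G') a_G' and return
            normalized frequencies w*a/(2*pi*c)."""
            A = np.diag(np.sum((G + k) ** 2, axis=1))
            w2 = eigh(A, E, eigvals_only=True)[:nbands]
            return np.sqrt(np.abs(w2)) * a / (2 * np.pi)

        for label, k in [("Gamma", (0, 0)), ("X", (np.pi / a, 0)),
                         ("M", (np.pi / a, np.pi / a))]:
            print(label, tm_bands(np.array(k)))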

  13. Loophole to the universal photon spectrum in electromagnetic cascades and application to the cosmological lithium problem.

    PubMed

    Poulin, Vivian; Serpico, Pasquale Dario

    2015-03-06

    The standard theory of electromagnetic cascades onto a photon background predicts a quasiuniversal shape for the resulting nonthermal photon spectrum. This has been applied to very disparate fields, including nonthermal big bang nucleosynthesis (BBN). However, once the energy of the injected photons falls below the pair-production threshold the spectral shape is much harder, a fact that has been overlooked in past literature. This loophole may have important phenomenological consequences, since it generically alters the BBN bounds on nonthermal relics; for instance, it allows us to reopen the possibility of purely electromagnetic solutions to the so-called "cosmological lithium problem," which were thought to be excluded by other cosmological constraints. We show this with a proof-of-principle example and a simple particle physics model, compared with previous literature.

  14. Photon scattering from a system of multilevel quantum emitters. I. Formalism

    NASA Astrophysics Data System (ADS)

    Das, Sumanta; Elfving, Vincent E.; Reiter, Florentin; Sørensen, Anders S.

    2018-04-01

    We introduce a formalism to solve the problem of photon scattering from a system of multilevel quantum emitters. Our approach provides a direct solution of the scattering dynamics. As such the formalism gives the scattered fields' amplitudes in the limit of a weak incident intensity. Our formalism is equipped to treat both multiemitter and multilevel emitter systems, and is applicable to a plethora of photon-scattering problems, including conditional state preparation by photodetection. In this paper, we develop the general formalism for an arbitrary geometry. In the following paper (part II) S. Das et al. [Phys. Rev. A 97, 043838 (2018), 10.1103/PhysRevA.97.043838], we reduce the general photon-scattering formalism to a form that is applicable to one-dimensional waveguides and show its applicability by considering explicit examples with various emitter configurations.

  15. A scalable multi-photon coincidence detector based on superconducting nanowires.

    PubMed

    Zhu, Di; Zhao, Qing-Yuan; Choi, Hyeongrak; Lu, Tsung-Ju; Dane, Andrew E; Englund, Dirk; Berggren, Karl K

    2018-06-04

    Coincidence detection of single photons is crucial in numerous quantum technologies and usually requires multiple time-resolved single-photon detectors. However, the electronic readout becomes a major challenge when the measurement basis scales to large numbers of spatial modes. Here, we address this problem by introducing a two-terminal coincidence detector that enables scalable readout of an array of detector segments based on superconducting nanowire microstrip transmission line. Exploiting timing logic, we demonstrate a sixteen-element detector that resolves all 136 possible single-photon and two-photon coincidence events. We further explore the pulse shapes of the detector output and resolve up to four-photon events in a four-element device, giving the detector photon-number-resolving capability. This new detector architecture and operating scheme will be particularly useful for multi-photon coincidence detection in large-scale photonic integrated circuits.

  16. Multiple-Event, Single-Photon Counting Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

    The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method for photon count registration. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time can't be too short, this will lead to very low dynamic range and make the sensor useful merely for very low flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency will substantially ruin any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register as many as one million (or more) photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting in environments ranging from ultra-low light through high flux. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.

  17. Validation of optimization strategies using the linear structured production chains

    NASA Astrophysics Data System (ADS)

    Kusiak, Jan; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz

    2017-06-01

    Different optimization strategies applied to a sequence of several stages of production chains were validated in this paper. Two benchmark problems described by ordinary differential equations (ODEs) were considered: a water tank and a passive CR-RC filter were used as exemplary objects described by first- and second-order differential equations, respectively. The optimization problems considered in this work serve as validators of the strategies elaborated by the authors. However, the main goal of the research is the selection of the best strategy for the optimization of two real metallurgical processes, which will be investigated in ongoing projects. The first problem will be the oxidizing roasting process of zinc sulphide concentrate, where the sulphur from the input concentrate should be eliminated and a minimal concentration of sulphide sulphur in the roasted products has to be achieved. The second problem will be the lead refining process, consisting of three stages: roasting to the oxide, oxide reduction to metal, and oxidizing refining. The strategies that appear most effective in the considered benchmark problems will be candidates for the optimization of the industrial processes mentioned above.
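
    In the same spirit, a first-order water-tank benchmark reduces to a one-parameter optimization over an ODE simulation. The sketch below is illustrative only; the tank model, cost function, and all numbers are invented rather than taken from the paper.

        # Water-tank toy benchmark: choose a constant inflow that drives the
        # level to a setpoint, judged by an integrated squared error.
        import numpy as np
        from scipy.integrate import solve_ivp, trapezoid
        from scipy.optimize import minimize_scalar

        A_tank, c_out, h_set = 2.0, 0.8, 1.5   # area, outflow coeff., target level

        def level(q_in):
            """Integrate dh/dt = (q_in - c*sqrt(h))/A from an empty tank."""
            rhs = lambda t, h: (q_in - c_out * np.sqrt(np.maximum(h, 0.0))) / A_tank
            sol = solve_ivp(rhs, (0.0, 50.0), [0.0], max_step=0.1)
            return sol.t, sol.y[0]

        def objective(q_in):
            t, h = level(q_in)
            return trapezoid((h - h_set) ** 2, t)

        res = minimize_scalar(objective, bounds=(0.0, 5.0), method="bounded")
        print("optimal inflow:", res.x)
        # The steady-state matching inflow is c*sqrt(h_set); the ISE optimum
        # sits slightly above it because of the start-up transient.
        print("c*sqrt(h_set) =", c_out * np.sqrt(h_set))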

  18. A heuristic approach to handle capacitated facility location problem evaluated using clustering internal evaluation

    NASA Astrophysics Data System (ADS)

    Sutanto, G. R.; Kim, S.; Kim, D.; Sutanto, H.

    2018-03-01

    One of the problems in dealing with the capacitated facility location problem (CFLP) arises from the mismatch between the capacities of the facilities and the number of customers that need to be served. A facility with small capacity may result in uncovered customers. These customers need to be re-allocated to another facility that still has available capacity. Therefore, an approach is proposed to handle the CFLP by using the k-means clustering algorithm for customer allocation; whether customer re-allocation is needed is then decided by the overall average distance between customers and the facilities. This new approach is benchmarked against the existing approach by Liao and Guo, which also uses the k-means clustering algorithm as a base idea to decide facility locations and customer allocation. Both approaches are evaluated using three clustering evaluation methods covering connectedness, compactness, and separation factors.
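
    A compact rendering of the clustering-then-reallocation idea is given below (plain NumPy k-means plus a nearest-first spill-over assignment). The coordinates, capacities, and number of facilities are invented for the demo and are not the paper's data.

        # k-means siting plus capacity-aware re-allocation.
        import numpy as np

        rng = np.random.default_rng(1)
        customers = rng.uniform(0, 10, (40, 2))
        k, capacity = 4, 12                  # facilities and per-facility capacity

        # Plain k-means for candidate facility sites.
        sites = customers[rng.choice(len(customers), k, replace=False)]
        for _ in range(50):
            d = np.linalg.norm(customers[:, None] - sites[None], axis=2)
            label = d.argmin(axis=1)
            sites = np.array([customers[label == i].mean(axis=0) if np.any(label == i)
                              else sites[i] for i in range(k)])

        # Re-allocate overflow: serve customers nearest-first, spilling to the
        # next closest facility that still has room.
        load = np.zeros(k, dtype=int)
        assign = np.full(len(customers), -1)
        d = np.linalg.norm(customers[:, None] - sites[None], axis=2)
        for c in np.argsort(d.min(axis=1)):            # closest customers first
            for fac in np.argsort(d[c]):
                if load[fac] < capacity:
                    assign[c], load[fac] = fac, load[fac] + 1
                    break
        print("facility loads:", load)
        print("avg distance:", d[np.arange(len(customers)), assign].mean())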

  19. Integrating CFD, CAA, and Experiments Towards Benchmark Datasets for Airframe Noise Problems

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan M.; Yamamoto, Kazuomi

    2012-01-01

    Airframe noise corresponds to the acoustic radiation due to turbulent flow in the vicinity of airframe components such as high-lift devices and landing gears. The combination of geometric complexity, high Reynolds number turbulence, multiple regions of separation, and a strong coupling with adjacent physical components makes the problem of airframe noise highly challenging. Since 2010, the American Institute of Aeronautics and Astronautics has organized an ongoing series of workshops devoted to Benchmark Problems for Airframe Noise Computations (BANC). The BANC workshops are aimed at enabling a systematic progress in the understanding and high-fidelity predictions of airframe noise via collaborative investigations that integrate state of the art computational fluid dynamics, computational aeroacoustics, and in depth, holistic, and multifacility measurements targeting a selected set of canonical yet realistic configurations. This paper provides a brief summary of the BANC effort, including its technical objectives, strategy, and selective outcomes thus far.

  20. Simulated annealing with probabilistic analysis for solving traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

    Simulated Annealing (SA) is a widely used meta-heuristic inspired by the annealing process of recrystallization of metals; the efficiency of SA is therefore highly affected by the annealing schedule. In this paper, we present an empirical study to provide a comparable annealing schedule for solving symmetric traveling salesman problems (TSP). A randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA, and we thus propose the best found annealing schedule based on a post hoc test. SA was tested on seven selected benchmark problems of the symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically alongside benchmark solutions, with a simple analysis to validate the quality of solutions. Computational results show that the proposed annealing schedule provides good-quality solutions.
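
    The following sketch shows a standard SA loop for the symmetric TSP with geometric cooling; the schedule constants are placeholders rather than the tuned values proposed in the paper, and the instance is random.

        # Compact simulated annealing for symmetric TSP with geometric cooling.
        import numpy as np

        rng = np.random.default_rng(2)
        pts = rng.uniform(0, 100, (30, 2))
        D = np.linalg.norm(pts[:, None] - pts[None], axis=2)

        def tour_len(tour):
            return D[tour, np.roll(tour, -1)].sum()

        tour = rng.permutation(len(pts))
        best, best_len = tour.copy(), tour_len(tour)
        T, alpha = 100.0, 0.995              # initial temperature, cooling rate (assumed)
        for _ in range(20000):
            i, j = sorted(rng.choice(len(pts), 2, replace=False))
            cand = tour.copy()
            cand[i:j + 1] = cand[i:j + 1][::-1]    # 2-opt style segment reversal
            delta = tour_len(cand) - tour_len(tour)
            if delta < 0 or rng.random() < np.exp(-delta / T):
                tour = cand
                if tour_len(tour) < best_len:
                    best, best_len = tour.copy(), tour_len(tour)
            T *= alpha
        print("best tour length:", round(best_len, 1))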

  1. Modified reactive tabu search for the symmetric traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Hong, Pei-Yee; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

    Reactive tabu search (RTS) is an improvement of tabu search (TS) that dynamically adjusts the tabu list size based on how the search is performing, thereby avoiding a known drawback of TS: the tuning of the tabu list size. In this paper, we propose a modified RTS approach for solving symmetric traveling salesman problems (TSP). The tabu list size of the proposed algorithm depends on the number of iterations for which the solutions do not override the aspiration level, so as to achieve a good balance between diversification and intensification. The proposed algorithm was tested on seven chosen benchmark problems of the symmetric TSP. The performance of the proposed algorithm is compared with that of TS by using empirical testing, benchmark solutions, and a simple probabilistic analysis in order to validate the quality of solutions. The computational results and comparisons show that the proposed algorithm provides better-quality solutions than TS.
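
    The reactive ingredient can be isolated as a tiny tenure-update rule in the spirit of the description above; the growth/shrink factors and bounds below are assumptions, not the paper's calibrated values.

        # Skeleton of a reactive tenure rule: the tabu tenure grows while no
        # move overrides the aspiration level and shrinks once one does.
        def update_tenure(tenure, improved, lo=5, hi=50):
            if improved:                        # aspiration level overridden
                return max(lo, int(tenure * 0.8))
            return min(hi, int(tenure * 1.2) + 1)

        tenure = 10
        for improved in [False, False, False, True, False, True]:
            tenure = update_tenure(tenure, improved)
            print(tenure)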

  2. Toward Automated Benchmarking of Atomistic Force Fields: Neat Liquid Densities and Static Dielectric Constants from the ThermoML Data Archive.

    PubMed

    Beauchamp, Kyle A; Behr, Julie M; Rustenburg, Ariën S; Bayly, Christopher I; Kroenlein, Kenneth; Chodera, John D

    2015-10-08

    Atomistic molecular simulations are a powerful way to make quantitative predictions, but the accuracy of these predictions depends entirely on the quality of the force field employed. Although experimental measurements of fundamental physical properties offer a straightforward approach for evaluating force field quality, the bulk of this information has been tied up in formats that are not machine-readable. Compiling benchmark data sets of physical properties from non-machine-readable sources requires substantial human effort and is prone to the accumulation of human errors, hindering the development of reproducible benchmarks of force-field accuracy. Here, we examine the feasibility of benchmarking atomistic force fields against the NIST ThermoML data archive of physicochemical measurements, which aggregates thousands of experimental measurements in a portable, machine-readable, self-annotating IUPAC-standard format. As a proof of concept, we present a detailed benchmark of the generalized Amber small-molecule force field (GAFF) using the AM1-BCC charge model against experimental measurements (specifically, bulk liquid densities and static dielectric constants at ambient pressure) automatically extracted from the archive and discuss the extent of data available for use in larger scale (or continuously performed) benchmarks. The results of even this limited initial benchmark highlight a general problem with fixed-charge force fields in the representation low-dielectric environments, such as those seen in binding cavities or biological membranes.

  3. Time-Bin-Encoded Boson Sampling with a Single-Photon Device.

    PubMed

    He, Yu; Ding, X; Su, Z-E; Huang, H-L; Qin, J; Wang, C; Unsleber, S; Chen, C; Wang, H; He, Y-M; Wang, X-L; Zhang, W-J; Chen, S-J; Schneider, C; Kamp, M; You, L-X; Wang, Z; Höfling, S; Lu, Chao-Yang; Pan, Jian-Wei

    2017-05-12

    Boson sampling is a problem strongly believed to be intractable for classical computers, but can be naturally solved on a specialized photonic quantum simulator. Here, we implement the first time-bin-encoded boson sampling using a highly indistinguishable (∼94%) single-photon source based on a single quantum-dot-micropillar device. The protocol requires only one single-photon source, two detectors, and a loop-based interferometer for an arbitrary number of photons. The single-photon pulse train is time-bin encoded and deterministically injected into an electrically programmable multimode network. The observed three- and four-photon boson sampling rates are 18.8 and 0.2 Hz, respectively, which are more than 100 times faster than previous experiments based on parametric down-conversion.
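
    The classical hardness referred to above enters through matrix permanents: the probability of a given output pattern is proportional to the squared modulus of the permanent of a submatrix of the interferometer unitary. A minimal sketch using Ryser's O(2^n n) formula, with a random matrix standing in for that submatrix, shows where the exponential cost comes from.

        # Matrix permanent via Ryser's formula (exponential in n by design).
        import numpy as np
        from itertools import combinations

        def permanent(A):
            n = A.shape[0]
            total = 0.0
            for k in range(1, n + 1):
                for S in combinations(range(n), k):
                    total += (-1) ** k * np.prod(A[:, list(S)].sum(axis=1))
            return (-1) ** n * total

        # Random unitary as a stand-in for the relevant interferometer submatrix.
        A = np.linalg.qr(np.random.randn(4, 4) + 1j * np.random.randn(4, 4))[0]
        print("output probability ~ |Perm|^2 =", abs(permanent(A)) ** 2)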

  4. Photocatalytic water splitting: Materials design and high-throughput screening of molecular compositions

    NASA Astrophysics Data System (ADS)

    Khnayzer, Rony S.

    Due to the expected increase in energy demand in the near future, the development of new catalytic molecular compositions and materials capable of directly converting water, with the aid of solar photons, into hydrogen becomes imperative. Hydrogen is a combustible fuel and a precious high-energy feedstock chemical. However, for the water-splitting reaction to proceed efficiently and economically enough for large-scale application, efficient light-absorbing sensitizers and water splitting catalysts are required. To study the kinetics of the water reduction reaction, we have used titania (TiO2) nanoparticles as a robust scaffold to photochemically grow platinum (Pt) nanoparticles from a unique surface-anchored molecular precursor Pt(dcbpy)Cl2 [dcbpy = 4,4'-dicarboxylic acid-2,2'-bipyridine]. The hybrid Pt/TiO2 nanomaterials obtained were shown to be a superior water reduction catalyst (WRC) in aqueous suspensions when compared with the benchmark platinized TiO2. In addition, a cobalt phosphate (CoPi) water oxidation catalyst (WOC) was photochemically assembled on the surface of TiO2, and its structure and mechanism of activity showed resemblance to the established electrochemically grown CoPi material. Both the WRC and WOC described above possessed near-unity Faradaic efficiency for hydrogen and oxygen production, respectively, and were fully characterized by electron microscopy, x-ray absorption spectroscopy, electrochemistry and photochemistry. While there are established materials and molecules that are able to drive water splitting catalysis, some of these efficient semiconductors, including titanium dioxide (TiO2) and tungsten trioxide (WO3), are only able to absorb high-energy (ultraviolet or blue) photons. This high-energy light represents merely a fraction of the solar spectrum that strikes the earth, and the energy content of the remaining photons is simply wasted. A strategy to mitigate this problem has been developed over the years in our laboratory. Briefly, photons of low energy are converted into higher-energy light using a process termed photon upconversion. Using this technique, low-energy photons supplied by the sun can be converted into light of appropriate energy to trigger electronic transitions in high-energy-absorbing photoactive materials without any chemical modification of the latter. We have shown that this technology is capable of upconverting visible sunlight to sensitize wide-bandgap semiconductors such as WO3, subsequently extending the photoaction of these materials to cover a larger portion of the solar spectrum. Besides the engineering of different compositions that serve as either sensitizers or catalysts in these solar energy conversion schemes, we have designed an apparatus for parallel high-throughput screening of these photocatalytic compositions. This combinatorial approach to solar fuels photocatalysis has already led to unprecedented fundamental understanding of the generation of hydrogen gas from pure water. The activity of a series of new Ru(II) sensitizers along with Co(II) molecular WRCs was optimized under visible light excitation utilizing different experimental conditions. The multi-step mechanism of activity of selected compositions was further elucidated by pump-probe transient absorption spectroscopy.

  5. INL Results for Phases I and III of the OECD/NEA MHTGR-350 Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Javier Ortensi; Sonat Sen

    2013-09-01

    The Idaho National Laboratory (INL) Very High Temperature Reactor (VHTR) Technology Development Office (TDO) Methods Core Simulation group led the construction of the Organization for Economic Cooperation and Development (OECD) Modular High Temperature Reactor (MHTGR) 350 MW benchmark for comparing and evaluating prismatic VHTR analysis codes. The benchmark is sponsored by the OECD's Nuclear Energy Agency (NEA), and the project will yield a set of reference steady-state, transient, and lattice depletion problems that can be used by the Department of Energy (DOE), the Nuclear Regulatory Commission (NRC), and vendors to assess their code suites. The Methods group is responsible for defining the benchmark specifications, leading the data collection and comparison activities, and chairing the annual technical workshops. This report summarizes the latest INL results for Phase I (steady state) and Phase III (lattice depletion) of the benchmark. The INSTANT, Pronghorn and RattleSnake codes were used for the standalone core neutronics modeling of Exercise 1, and the results obtained from these codes are compared in Section 4. Exercise 2 of Phase I requires the standalone steady-state thermal fluids modeling of the MHTGR-350 design, and the results for the systems code RELAP5-3D are discussed in Section 5. The coupled neutronics and thermal fluids steady-state solution for Exercise 3 is reported in Section 6, utilizing the newly developed Parallel and Highly Innovative Simulation for INL Code System (PHISICS)/RELAP5-3D code suite. Finally, the lattice depletion models and results obtained for Phase III are compared in Section 7. The MHTGR-350 benchmark proved to be a challenging set of simulation problems to model accurately, and even with the simplifications introduced in the benchmark specification this activity is an important step in the code-to-code verification of modern prismatic VHTR codes. A final OECD/NEA comparison report will compare the Phase I and III results of all other international participants in 2014, while the remaining Phase II transient case results will be reported in 2015.

  6. Spherical Harmonic Solutions to the 3D Kobayashi Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, P.N.; Chang, B.; Hanebutte, U.R.

    1999-12-29

    Spherical harmonic solutions of order 5, 9 and 21 on spatial grids containing up to 3.3 million cells are presented for the Kobayashi benchmark suite. This suite of three problems with simple geometry of a pure absorber with a large void region was proposed by Professor Kobayashi at an OECD/NEA meeting in 1996. Each of the three problems contains a source, a void and a shield region. Problem 1 can best be described as a box-in-a-box problem, where a source region is surrounded by a square void region which itself is embedded in a square shield region. Problems 2 and 3 represent a shield with a void duct, problem 2 having a straight duct and problem 3 a dog-leg-shaped duct. A pure absorber and a 50% scattering case are considered for each of the three problems. The solutions have been obtained with Ardra, a scalable, parallel neutron transport code developed at Lawrence Livermore National Laboratory (LLNL). The Ardra code takes advantage of a two-level parallelization strategy, which combines message passing between processing nodes and thread-based parallelism amongst processors on each node. All calculations were performed on the IBM ASCI Blue-Pacific computer at LLNL.

  7. Time resolved 3D momentum imaging of ultrafast dynamics by coherent VUV-XUV radiation

    DOE PAGES

    Sturm, F. P.; Wright, T. W.; Ray, D.; ...

    2016-06-14

    Here we present a new experimental setup for measuring the ultrafast nuclear and electron dynamics of molecules after photo-excitation and ionization. We combine a high-flux femtosecond vacuum ultraviolet (VUV) and extreme ultraviolet (XUV) source with an internally cold molecular beam and a 3D momentum imaging particle spectrometer to measure electrons and ions in coincidence. We describe a variety of tools developed to perform pump-probe studies in the VUV-XUV spectrum and to modify and characterize the photon beam. First benchmark experiments are presented to demonstrate the capabilities of the system.

  8. Benchmark and Framework for Encouraging Research on Multi-Threaded Testing Tools

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Stoller, Scott D.; Ur, Shmuel

    2003-01-01

    A problem that has been gaining prominence in testing is that of looking for intermittent bugs. Multi-threaded code is becoming very common, mostly on the server side. As there is no silver-bullet solution, research focuses on a variety of partial solutions. In this paper (invited by PADTAD 2003) we outline a proposed project to facilitate research. The project goals are as follows. The first goal is to create a benchmark that can be used to evaluate different solutions. The benchmark, apart from containing programs with documented bugs, will include other artifacts, such as traces, that are useful for evaluating some of the technologies. The second goal is to create a set of tools with open APIs that can be used to check ideas without building a large system. For example, an instrumentor will be available that could be used to test temporal noise-making heuristics. The third goal is to create a focus for the research in this area around which a community of people, who try to solve similar problems with different techniques, could congregate.

  9. Global-local methodologies and their application to nonlinear analysis. [for structural postbuckling study

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1986-01-01

    An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.

  10. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J; Dossa, D; Gokhale, M

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only to GPU-accelerated runs. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7DBE Xeon Dual Socket Blackford Server Motherboard; 2 Intel Xeon Dual-Core 2.66 GHz processors; 1 GB DDR2 PC2-5300 RAM (2 x 512); 80GB Hard Drive (Seagate SATA II Barracuda). The Fusion board is presently capable of 4X in a PCIe slot. The image resampling benchmark was run on a dual Xeon workstation with an NVIDIA graphics card (see Chapter 5 for full specification). An XtremeData Opteron+FPGA was used for the language classification application. We observed that these benchmarks are not uniformly I/O intensive. The only benchmark that showed greater than 50% of the time in I/O was the graph algorithm when it accessed data files over NFS. When local disk was used, the graph benchmark spent at most 40% of its time in I/O. The other benchmarks were CPU dominated. The image resampling benchmark and language classification showed order-of-magnitude speedups over software by using co-processor technology to offload the CPU-intensive kernels. Our experiments to date suggest that emerging hardware technologies offer significant benefit to boosting the performance of data-intensive algorithms. Using GPU and FPGA co-processors, we were able to improve performance by more than an order of magnitude on the benchmark algorithms, eliminating the processor bottleneck of CPU-bound tasks. Experiments with a prototype solid-state nonvolatile memory available today show 10X better throughput on random reads than disk, with a 2X speedup on a graph processing benchmark when compared to the use of local SATA disk.

  12. Search for Long-Lived Neutral Particles in Final States with Delayed Photon and Missing Transverse Energy from Proton-Proton Collisions Using the CMS detector

    NASA Astrophysics Data System (ADS)

    Tambe Ebai, Norbert

    We perform a search for long-lived neutral particles in final states with delayed photons and large missing transverse energy produced in LHC proton-proton collisions at a center-of-mass energy of √s = 8 TeV. Capitalizing on the excellent timing resolution of the CMS Electromagnetic Calorimeter, the search uses photon time measurements made by the Electromagnetic Calorimeter as the main search quantity. We found a single event, consistent with our background expectations from the Standard Model, and set a model-independent upper limit of 4.37 on the number of signal events. We also interpret our results in the context of the SPS8 benchmark GMSB model and show that neutralinos with mean lifetime τ(χ̃₁⁰) ≤ 45 ns and mass m(χ̃₁⁰) ≤ 300 GeV/c², or equivalently an effective supersymmetry-breaking energy scale Λ ≤ 220 TeV, are ruled out at the 95% CLs confidence level. The exclusion limit on the product of the production cross-section and the branching ratio for the neutralino decay to a photon and a gravitino, σ(χ̃₁⁰ → γ + G̃) × BR, for different lifetimes and masses is derived. Our results confirm for the first time that the CMS Electromagnetic Calorimeter provides good sensitivity to searches for long-lived neutral particles with lifetimes up to 40 ns and masses up to 300 GeV/c² using only timing measurements.

  13. Design of Unstructured Adaptive (UA) NAS Parallel Benchmark Featuring Irregular, Dynamic Memory Accesses

    NASA Technical Reports Server (NTRS)

    Feng, Hui-Yu; VanderWijngaart, Rob; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We describe the design of a new method for the measurement of the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. The method involves the solution of a stylized heat transfer problem on an unstructured, adaptive grid. A Spectral Element Method (SEM) with an adaptive, nonconforming mesh is selected to discretize the transport equation. The relatively high order of the SEM lowers the fraction of wall clock time spent on inter-processor communication, which eases the load balancing task and allows us to concentrate on the memory accesses. The benchmark is designed to be three-dimensional. Parallelization and load balance issues of a reference implementation will be described in detail in future reports.

  14. PID controller tuning using metaheuristic optimization algorithms for benchmark problems

    NASA Astrophysics Data System (ADS)

    Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.

    2017-11-01

    This paper contributes to finding optimal PID controller parameters using particle swarm optimization (PSO), the genetic algorithm (GA), and the simulated annealing (SA) algorithm. The algorithms were developed through simulation of a chemical process and an electrical system, and the PID controller is tuned. Here, two different fitness functions, Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied in PSO, GA and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study has been done with the different algorithms based on best cost, number of iterations and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
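
    A toy version of such a tuning loop is sketched below: global-best PSO minimizing an ITAE cost over an Euler-simulated step response. The plant is an assumed second-order stand-in, not the paper's coupled-tank or DC motor model, and the swarm constants are generic defaults.

        # PSO tuning of (Kp, Ki, Kd) against an ITAE fitness.
        import numpy as np

        rng = np.random.default_rng(3)
        dt, T = 0.002, 4.0
        t = np.arange(0.0, T, dt)

        def itae(gains):
            Kp, Ki, Kd = gains
            y, dy, integ, prev_e, J = 0.0, 0.0, 0.0, 1.0, 0.0
            for tk in t:
                e = 1.0 - y                      # unit step setpoint
                integ += e * dt
                u = Kp * e + Ki * integ + Kd * (e - prev_e) / dt
                prev_e = e
                ddy = u - 3.0 * dy - 2.0 * y     # plant: 1/(s^2 + 3s + 2)
                dy += ddy * dt
                y += dy * dt
                J += tk * abs(e) * dt            # ITAE
            return J

        # Global-best PSO over the gain box [0, 50]^3.
        n, w, c1, c2 = 20, 0.7, 1.5, 1.5
        pos = rng.uniform(0, 50, (n, 3)); vel = np.zeros((n, 3))
        pbest = pos.copy(); pbest_val = np.array([itae(p) for p in pos])
        g = pbest[pbest_val.argmin()].copy()
        for _ in range(30):
            r1, r2 = rng.random((n, 3)), rng.random((n, 3))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
            pos = np.clip(pos + vel, 0, 50)
            vals = np.array([itae(p) for p in pos])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            g = pbest[pbest_val.argmin()].copy()
        print("tuned (Kp, Ki, Kd):", np.round(g, 2), " ITAE:", round(itae(g), 4))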

  15. Monte Carlo studies on photon interactions in radiobiological experiments

    PubMed Central

    Shahmohammadi Beni, Mehrdad; Krstic, D.; Nikezic, D.

    2018-01-01

    X-ray and γ-ray photons have been widely used for studying the radiobiological effects of ionizing radiation. Photons are indirectly ionizing radiation, so they must set in motion electrons (which are directly ionizing radiation) to perform the ionizations. When the photon dose decreases below a certain limit, the number of electrons set in motion becomes so small that not all cells in an “exposed” cell population receive at least one electron hit. When some cells in a cell population are not hit by directly ionizing radiation (in other words, not irradiated), there will be a rescue effect between the irradiated and non-irradiated cells, and the resultant radiobiological effect observed for the “exposed” cell population will be different. In the present paper, the mechanisms underlying photon interactions in radiobiological experiments were studied using our developed NRUphoton computer code, which was benchmarked against the MCNP5 code by comparing the photon dose delivered to the cell layer underneath the water medium. The following conclusions were reached: (1) The interaction fractions decreased in the following order: 16O > 12C > 14N > 1H. Bulges in the interaction fractions (versus water-medium thickness) were observed, which reflected changes in the energies of the propagating photons due to traversal of different amounts of water medium, as well as changes in the energy-dependent photon interaction cross-sections. (2) Photoelectric interaction and incoherent scattering dominated for lower-energy (10 keV) and higher-energy (100 keV and 1 MeV) incident photons, respectively. (3) The fractions of electron ejection from different nuclei were mainly governed by the photoelectric-effect cross-sections, and the fractions from the 1s subshell were the largest. (4) The penetration fractions in general decreased with increasing medium thickness and increased with increasing incident photon energy, the latter being explained by the corresponding reduction in interaction cross-sections. (5) The areas under the angular-distribution curves of photons exiting the medium layer and subsequently undergoing interactions within the cell layer became smaller for larger incident photon energies. (6) The number of cells suffering at least one electron hit increased with the administered dose. For larger incident photon energies, the number of cells suffering at least one electron hit became smaller, which was attributed to the reduction in the photon interaction cross-section. These results highlight the importance of the administered dose in radiobiological experiments. In particular, the threshold administered doses at which all cells in the exposed cell array suffer at least one electron hit might provide hints for explaining the intriguing observation that radiation-induced cancers can be statistically detected only above a threshold value of ~100 mSv, and thus for reconciling controversies over the linear no-threshold model. PMID:29561871
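
    Conclusion (2) above reflects the relative sizes of the partial cross-sections. As a schematic of how a photon Monte Carlo code selects the interaction type at a collision site, the type can be drawn in proportion to those partial cross-sections; the numerical values below are illustrative placeholders, not NRUphoton or MCNP5 data.

    ```python
    import numpy as np

    def sample_interaction(partial_xs, rng):
        # Discrete sampling of the interaction type with probability
        # proportional to each partial cross-section at the current energy.
        names = list(partial_xs)
        p = np.array([partial_xs[k] for k in names], dtype=float)
        p /= p.sum()                       # normalize to interaction probabilities
        return rng.choice(names, p=p)

    rng = np.random.default_rng(5)
    # illustrative partial cross-sections near 100 keV in water (arbitrary units)
    xs = {"photoelectric": 0.03, "incoherent": 0.90, "coherent": 0.07}
    print(sample_interaction(xs, rng))
    ```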

  16. Proton Magnetic Form Factor from Existing Elastic e-p Cross Section Data

    NASA Astrophysics Data System (ADS)

    Ou, Longwu; Christy, Eric; Gilad, Shalev; Keppel, Cynthia; Schmookler, Barak; Wojtsekhowski, Bogdan

    2015-04-01

    The proton magnetic form factor GMp, in addition to being an important benchmark for all cross section measurements in hadron physics, provides critical information on proton structure. Extraction of GMp from e-p cross section data is complicated by two-photon exchange (TPE) effects, for which available calculations still have large theoretical uncertainties. Studies of TPE contributions to e-p scattering have observed no nonlinear effects in Rosenbluth separations. Recent theoretical investigations show that the TPE correction goes to 0 as ɛ approaches 1, where ɛ is the virtual photon polarization parameter. In this talk, existing e-p elastic cross section data are reanalyzed by extrapolating the reduced cross section to ɛ = 1. Existing polarization transfer data, which are expected to be relatively immune to TPE effects, are used to produce a ratio of electric and magnetic form factors. The extrapolated reduced cross section and polarization transfer ratio are then used to calculate GEp and GMp at different Q² values.
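
    For context, the extrapolation rests on the standard Rosenbluth decomposition of the reduced cross section (a textbook relation, not a result specific to this talk):

    ```latex
    % Reduced elastic e-p cross section, with \tau = Q^2/(4M^2) and
    % \varepsilon the virtual-photon polarization parameter:
    \sigma_R(Q^2,\varepsilon) \;=\; \tau\, G_{Mp}^2(Q^2) \;+\; \varepsilon\, G_{Ep}^2(Q^2)
    ```

    Since the TPE correction is expected to vanish as ɛ → 1, the extrapolated σ_R at ɛ = 1 combined with the polarization-transfer ratio G_Ep/G_Mp determines both form factors.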

  17. An estimate for the thermal photon rate from lattice QCD

    NASA Astrophysics Data System (ADS)

    Brandt, Bastian B.; Francis, Anthony; Harris, Tim; Meyer, Harvey B.; Steinberg, Aman

    2018-03-01

    We estimate the production rate of photons by the quark-gluon plasma in lattice QCD. We propose a new correlation function which provides better control over the systematic uncertainty in estimating the photon production rate at photon momenta in the range πT/2 to 2πT. The relevant Euclidean vector current correlation functions are computed with Nf = 2 Wilson clover fermions in the chirally-symmetric phase. In order to estimate the photon rate, an ill-posed problem for the vector-channel spectral function must be regularized. We use both a direct model for the spectral function and a model-independent estimate from the Backus-Gilbert method to give an estimate for the photon rate.
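
    The ill-posed problem referred to is the inversion of the standard spectral representation of the Euclidean correlator, which in a common convention reads:

    ```latex
    % Euclidean correlator G(\tau) in terms of the spectral function \rho(\omega)
    % at temperature T (kernel conventions vary between papers):
    G(\tau) \;=\; \int_0^\infty \frac{\mathrm{d}\omega}{2\pi}\, \rho(\omega)\,
    \frac{\cosh\!\left[\omega\left(\tau - \tfrac{1}{2T}\right)\right]}{\sinh\!\left(\tfrac{\omega}{2T}\right)}
    ```

    Reconstructing ρ(ω) from G(τ) known at a finite number of Euclidean times is the step that the direct spectral model and the Backus-Gilbert method regularize in complementary ways.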

  18. On the development of a comprehensive MC simulation model for the Gamma Knife Perfexion radiosurgery unit

    NASA Astrophysics Data System (ADS)

    Pappas, E. P.; Moutsatsos, A.; Pantelis, E.; Zoros, E.; Georgiou, E.; Torrens, M.; Karaiskos, P.

    2016-02-01

    This work presents a comprehensive Monte Carlo (MC) simulation model for the Gamma Knife Perfexion (PFX) radiosurgery unit. Model-based dosimetry calculations were benchmarked, in terms of relative dose profiles (RDPs) and output factors (OFs), against corresponding EBT2 measurements. To reduce the rather prolonged computational time associated with the comprehensive PFX model MC simulations, two approximations were explored and evaluated on the grounds of dosimetric accuracy. The first consists of directional biasing of the 60Co photon emission, while the second refers to the implementation of simplified source geometric models. The effect of the dose-scoring volume dimensions on OF calculation accuracy was also explored. RDP calculations for the comprehensive PFX model were found to be in agreement with corresponding EBT2 measurements. Output factors of 0.819  ±  0.004 and 0.8941  ±  0.0013 were calculated for the 4 mm and 8 mm collimators, respectively, which agree, within uncertainties, with corresponding EBT2 measurements and published experimental data. Volume averaging was found to affect OF results by more than 0.3% for scoring volume radii greater than 0.5 mm and 1.4 mm for the 4 mm and 8 mm collimators, respectively. Directional biasing of photon emission resulted in a time efficiency gain factor of up to 210 with respect to isotropic photon emission. Although no considerable effect on relative dose profiles was detected, directional biasing led to OF overestimations which were more pronounced for the 4 mm collimator and increased with decreasing emission cone half-angle, reaching up to 6% for a 5° angle. Implementation of simplified source models revealed that omitting the sources’ stainless steel capsule significantly affects both OF results and relative dose profiles, while the aluminum-based bushing did not exhibit a considerable dosimetric effect. In conclusion, the results of this work suggest that any PFX simulation model should be benchmarked in terms of both RDP and OF results.
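
    The directional biasing evaluated here can be illustrated with a standard source-biasing sketch: photons are emitted only into a cone about the beam axis and carry the corresponding solid-angle fraction as a statistical weight. This is a generic sketch, not the paper's implementation; the function name and the 5° angle below are illustrative.

    ```python
    import numpy as np

    def sample_biased_direction(theta0_deg, rng):
        # Emit only into a cone of half-angle theta0 about +z: sample the
        # polar cosine uniformly on [cos(theta0), 1] and weight the photon
        # by the fraction of the full sphere the cone subtends, so the
        # biased game remains statistically equivalent to isotropic emission.
        mu_min = np.cos(np.radians(theta0_deg))
        mu = rng.uniform(mu_min, 1.0)
        phi = rng.uniform(0.0, 2.0 * np.pi)
        s = np.sqrt(1.0 - mu * mu)
        direction = np.array([s * np.cos(phi), s * np.sin(phi), mu])
        weight = (1.0 - mu_min) / 2.0       # cone solid angle / full sphere
        return direction, weight

    rng = np.random.default_rng(2)
    d, w = sample_biased_direction(5.0, rng)   # 5 degree cone, as in the study
    ```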

  19. Collision-kerma conversion between dose-to-tissue and dose-to-water by photon energy-fluence corrections in low-energy brachytherapy

    NASA Astrophysics Data System (ADS)

    Giménez-Alventosa, Vicent; Antunes, Paula C. G.; Vijande, Javier; Ballester, Facundo; Pérez-Calatayud, José; Andreo, Pedro

    2017-01-01

    The AAPM TG-43 brachytherapy dosimetry formalism, introduced in 1995, has become a standard for brachytherapy dosimetry worldwide; it implicitly assumes that charged-particle equilibrium (CPE) exists for the determination of absorbed dose to water at different locations, except in the vicinity of the source capsule. Subsequent dosimetry developments, based on Monte Carlo calculations or analytical solutions of transport equations, do not rely on the CPE assumption and determine directly the dose to different tissues. When relating dose to tissue and dose to water, or vice versa, it is usually assumed that the photon fluences in water and in tissue are practically identical, so that the absorbed doses in the two media can be related by their ratio of mass energy-absorption coefficients. In this work, an efficient way to correlate absorbed dose to water and absorbed dose to tissue in brachytherapy calculations at clinically relevant distances for low-energy photon-emitting seeds is proposed. A correction is introduced that is based on the ratio of the water-to-tissue photon energy-fluences. State-of-the-art Monte Carlo calculations are used to score the photon fluence, differential in energy, in water and in various human tissues (muscle, adipose and bone), in all cases including a realistic modelling of low-energy brachytherapy sources in order to benchmark the proposed formalism. The energy-fluence based corrections given in this work are able to correlate absorbed dose to tissue and absorbed dose to water with an accuracy better than 0.5% in the most critical cases (e.g. bone tissue).
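
    Schematically, the proposed conversion replaces the usual identical-fluence assumption with an explicit energy-fluence weighting of the mass energy-absorption coefficients; a hedged sketch of the underlying collision-kerma ratio (our notation, not the paper's) is:

    ```latex
    % Collision-kerma based dose conversion with explicit energy fluences
    % \Psi(E) in tissue and water:
    \frac{D_{\mathrm{tis}}}{D_{\mathrm{w}}} \;\approx\;
    \frac{\displaystyle\int \Psi_{\mathrm{tis}}(E)\,\big[\mu_{en}(E)/\rho\big]_{\mathrm{tis}}\,\mathrm{d}E}
         {\displaystyle\int \Psi_{\mathrm{w}}(E)\,\big[\mu_{en}(E)/\rho\big]_{\mathrm{w}}\,\mathrm{d}E}
    ```

    Setting Ψ_tis(E) = Ψ_w(E) recovers conventional practice, leaving only the ratio of spectrum-averaged mass energy-absorption coefficients; the correction proposed here restores the water-to-tissue energy-fluence ratio that this assumption discards.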

  20. Collision-kerma conversion between dose-to-tissue and dose-to-water by photon energy-fluence corrections in low-energy brachytherapy.

    PubMed

    Giménez-Alventosa, Vicent; Antunes, Paula C G; Vijande, Javier; Ballester, Facundo; Pérez-Calatayud, José; Andreo, Pedro

    2017-01-07

    The AAPM TG-43 brachytherapy dosimetry formalism, introduced in 1995, has become a standard for brachytherapy dosimetry worldwide; it implicitly assumes that charged-particle equilibrium (CPE) exists for the determination of absorbed dose to water at different locations, except in the vicinity of the source capsule. Subsequent dosimetry developments, based on Monte Carlo calculations or analytical solutions of transport equations, do not rely on the CPE assumption and determine directly the dose to different tissues. When relating dose to tissue and dose to water, or vice versa, it is usually assumed that the photon fluences in water and in tissue are practically identical, so that the absorbed doses in the two media can be related by their ratio of mass energy-absorption coefficients. In this work, an efficient way to correlate absorbed dose to water and absorbed dose to tissue in brachytherapy calculations at clinically relevant distances for low-energy photon-emitting seeds is proposed. A correction is introduced that is based on the ratio of the water-to-tissue photon energy-fluences. State-of-the-art Monte Carlo calculations are used to score the photon fluence, differential in energy, in water and in various human tissues (muscle, adipose and bone), in all cases including a realistic modelling of low-energy brachytherapy sources in order to benchmark the proposed formalism. The energy-fluence based corrections given in this work are able to correlate absorbed dose to tissue and absorbed dose to water with an accuracy better than 0.5% in the most critical cases (e.g. bone tissue).

  1. Using Toyota's A3 Thinking for Analyzing MBA Business Cases

    ERIC Educational Resources Information Center

    Anderson, Joe S.; Morgan, James N.; Williams, Susan K.

    2011-01-01

    A3 Thinking is fundamental to Toyota's benchmark management philosophy and to their lean production system. It is used to solve problems, gain agreement, mentor team members, and lead organizational improvements. A structured problem-solving approach, A3 Thinking builds improvement opportunities through experience. We used "The Toyota…

  2. Applicability domains for classification problems: benchmarking of distance to models for AMES mutagenicity set

    EPA Science Inventory

    For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric that defines the similarity between the training set molecules and the test set compound ...

  3. Implementing a benchmarking and feedback concept decreases postoperative pain after total knee arthroplasty: A prospective study including 256 patients.

    PubMed

    Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F

    2016-12-05

    Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analyses and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and increasing patient satisfaction after TKA.

  4. Implementing a benchmarking and feedback concept decreases postoperative pain after total knee arthroplasty: A prospective study including 256 patients

    PubMed Central

    Benditz, A.; Drescher, J.; Greimel, F.; Zeman, F.; Grifka, J.; Meißner, W.; Völlner, F.

    2016-01-01

    Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analyses and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and increasing patient satisfaction after TKA. PMID:27917911

  5. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  6. Counting the Photons: Determining the Absolute Storage Capacity of Persistent Phosphors

    PubMed Central

    Rodríguez Burbano, Diana C.; Capobianco, John A.

    2017-01-01

    The performance of a persistent phosphor is often determined by comparing luminance decay curves, expressed in cd/m². However, these photometric units do not enable a straightforward, objective comparison between different phosphors in terms of the total number of emitted photons, as these units are dependent on the emission spectrum of the phosphor. This may lead to incorrect conclusions regarding the storage capacity of the phosphor. An alternative and convenient technique of characterizing the performance of a phosphor was developed on the basis of the absolute storage capacity of phosphors. In this technique, the phosphor is incorporated in a transparent polymer and the measured afterglow is converted into an absolute number of emitted photons, effectively quantifying the amount of energy that can be stored in the material. This method was applied to the benchmark phosphor SrAl2O4:Eu,Dy and to the nano-sized phosphor CaS:Eu. The results indicated that only a fraction of the Eu ions (around 1.6% in the case of SrAl2O4:Eu,Dy) participated in the energy storage process, which is in line with earlier reports based on X-ray absorption spectroscopy. These findings imply that there is still a significant margin for improving the storage capacity of persistent phosphors. PMID:28773228

  7. Why do I need to know this? Optics/photonics problem-based learning in the math classroom

    NASA Astrophysics Data System (ADS)

    Donnelly, Matthew J.; Donnelly, Judith F.; Donnelly, Stephanie

    2017-08-01

    A common complaint of engineering managers is that new employees at all levels, technician through engineer, tend to have rote calculation ability but are unable to think critically and use structured problem solving techniques to apply mathematical concepts. Further, they often have poor written and oral communication skills and difficulty working in teams. Ironically, a common question of high school mathematics students is "Why do I need to know this?" In this paper we describe a project using optics/photonics and Problem Based Learning (PBL) to address these issues in a high school calculus classroom.

  8. Application of the gravity search algorithm to multi-reservoir operation optimization

    NASA Astrophysics Data System (ADS)

    Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.

    2016-12-01

    Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy in solving benchmark functions, a single-reservoir problem, and a four-reservoir operation optimization problem. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) on three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem, for which the global solution equals 1.213. The GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in a smaller number of function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate the superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
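
    For readers unfamiliar with the GSA, its update rules can be sketched in a few lines. The following is a simplified sketch on a toy sphere function: the Kbest schedule of the full algorithm is omitted and all constants (g0, alpha, population size) are illustrative.

    ```python
    import numpy as np

    def sphere(x):
        return np.sum(x * x, axis=1)     # toy benchmark objective

    rng = np.random.default_rng(3)
    n, dim, iters, g0, alpha = 30, 5, 200, 100.0, 20.0
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros_like(x)

    for t in range(iters):
        fit = sphere(x)
        best, worst = fit.min(), fit.max()
        m = (worst - fit) / (worst - best + 1e-12)   # in [0, 1]; best agent = 1
        m = m / (m.sum() + 1e-12)                    # normalized masses
        g = g0 * np.exp(-alpha * t / iters)          # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n):
            diff = x - x[i]
            dist = np.linalg.norm(diff, axis=1)[:, None] + 1e-12
            # randomly weighted attraction toward every other mass
            acc[i] = np.sum(rng.random((n, 1)) * g * m[:, None] * diff / dist, axis=0)
        v = rng.random((n, dim)) * v + acc           # inertia is randomly damped
        x = x + v

    print("best objective in final population:", sphere(x).min())
    ```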

  9. An optical solution for the traveling salesman problem.

    PubMed

    Haist, Tobias; Osten, Wolfgang

    2007-08-06

    We introduce an optical method based on white light interferometry in order to solve the well-known NP-complete traveling salesman problem. To our knowledge it is the first time that a method for the reduction of non-polynomial time to quadratic time has been proposed. We will show that this achievement is limited by the number of photons available for solving the problem. It turns out that this number of photons is proportional to N^N for a traveling salesman problem with N cities, so that for large numbers of cities the method is in practice limited by the signal-to-noise ratio. The proposed method is meant purely as a gedankenexperiment.

  10. Photon migration in non-scattering tissue and the effects on image reconstruction

    NASA Astrophysics Data System (ADS)

    Dehghani, H.; Delpy, D. T.; Arridge, S. R.

    1999-12-01

    Photon propagation in tissue can be calculated using the relationship described by the transport equation. For scattering tissue this relationship is often simplified and expressed in terms of the diffusion approximation. This approximation, however, is not valid for non-scattering regions, for example cerebrospinal fluid (CSF) below the skull. This study looks at the effects of a thin clear layer in a simple model representing the head and examines its effect on image reconstruction. Specifically, boundary photon intensities (total number of photons exiting at a point on the boundary due to a source input at another point on the boundary) are calculated using the transport equation and compared with data calculated using the diffusion approximation for both non-scattering and scattering regions. The effect of non-scattering regions on the calculated boundary photon intensities is presented together with the advantages and restrictions of the transport code used. Reconstructed images are then presented where the forward problem is solved using the transport equation for a simple two-dimensional system containing a non-scattering ring and the inverse problem is solved using the diffusion approximation to the transport equation.
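
    For reference, the diffusion approximation discussed here has the standard time-domain form, valid only where scattering dominates (μs′ ≫ μa) and therefore not in clear regions such as the CSF:

    ```latex
    % Time-domain diffusion approximation for the photon fluence rate \Phi:
    \frac{1}{c}\frac{\partial \Phi(\mathbf{r},t)}{\partial t}
      - \nabla \cdot \big[\kappa(\mathbf{r})\,\nabla \Phi(\mathbf{r},t)\big]
      + \mu_a(\mathbf{r})\,\Phi(\mathbf{r},t) = q_0(\mathbf{r},t),
    \qquad \kappa = \frac{1}{3\,(\mu_a + \mu_s')}
    ```

    The breakdown of this equation where μs′ is close to zero is precisely why the clear-layer model in this study requires the transport equation for the forward problem.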

  11. Numerical Prediction of Signal for Magnetic Flux Leakage Benchmark Task

    NASA Astrophysics Data System (ADS)

    Lunin, V.; Alexeevsky, D.

    2003-03-01

    Numerical results predicted by a finite-element-method-based code are presented. The nonlinear, time-dependent magnetic benchmark problem proposed by the World Federation of Nondestructive Evaluation Centers involves numerical prediction of the normal (radial) component of the leaked field in the vicinity of two practically rectangular notches machined on a rotating steel pipe (with a known nonlinear magnetic characteristic). One notch is located on the external surface of the pipe and the other on the internal surface; both are oriented axially.

  12. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  13. Air-kerma strength determination of a miniature x-ray source for brachytherapy applications

    NASA Astrophysics Data System (ADS)

    Davis, Stephen D.

    A miniature x-ray source has been developed by Xoft Inc. for high dose-rate brachytherapy treatments. The source is contained in a 5.4 mm diameter water-cooling catheter. The source voltage can be adjusted from 40 kV to 50 kV and the beam current is adjustable up to 300 μA. Electrons are accelerated toward a tungsten-coated anode to produce a lightly-filtered bremsstrahlung photon spectrum. The sources were initially used for early-stage breast cancer treatment using a balloon applicator. More recently, Xoft Inc. has developed vaginal and surface applicators. The miniature x-ray sources have been characterized using a modification of the American Association of Physicists in Medicine Task Group No. 43 formalism normally used for radioactive brachytherapy sources. Primary measurements of air kerma were performed using free-air ionization chambers at the University of Wisconsin (UW) and the National Institute of Standards and Technology (NIST). The measurements at UW were used to calibrate a well-type ionization chamber for clinical verification of source strength. Accurate knowledge of the emitted photon spectrum was necessary to calculate the corrections required to determine air-kerma strength, defined in vacuo. Theoretical predictions of the photon spectrum were calculated using three separate Monte Carlo codes: MCNP5, EGSnrc, and PENELOPE. Each code used different implementations of the underlying radiological physics. Benchmark studies were performed to investigate these differences in detail. The most important variation among the codes was found to be the calculation of fluorescence photon production following electron-induced vacancies in the L shell of tungsten atoms. The low-energy tungsten L-shell fluorescence photons have little clinical significance at the treatment distance, but could have a large impact on air-kerma measurements. Calculated photon spectra were compared to spectra measured with high-purity germanium spectroscopy systems at both UW and NIST. The effects of escaped germanium fluorescence photons and Compton-scattered photons were taken into account for the UW measurements. The photon spectrum calculated using the PENELOPE Monte Carlo code had the best agreement with the spectrum measured at NIST. Corrections were applied to the free-air chamber measurements to arrive at an air-kerma strength determination for the miniature x-ray sources.
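
    For context, the quantity being realized here is the TG-43 air-kerma strength, conventionally defined from the in-vacuo air-kerma rate at a transverse-axis distance d:

    ```latex
    % TG-43 air-kerma strength: air-kerma rate in vacuo at distance d on the
    % transverse axis, scaled by d^2; the energy cut-off \delta excludes very
    % low-energy contaminant photons:
    S_K \;=\; \dot{K}_{\delta}(d)\, d^{2}
    ```

    The low-energy cut-off is why the tungsten L-shell fluorescence component, negligible at the treatment distance, still matters for the air-kerma strength determination.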

  14. A benchmark for subduction zone modeling

    NASA Astrophysics Data System (ADS)

    van Keken, P.; King, S.; Peacock, S.

    2003-04-01

    Our understanding of subduction zones hinges critically on the ability to discern their thermal structure and dynamics. Computational modeling has become an essential complementary approach to observational and experimental studies. The accurate modeling of subduction zones is challenging due to the unique geometry, complicated rheological description and influence of fluid and melt formation. The complicated physics causes problems for the accurate numerical solution of the governing equations. As a consequence it is essential for the subduction zone community to be able to evaluate the abilities and limitations of various modeling approaches. The participants of a workshop on the modeling of subduction zones, held at the University of Michigan at Ann Arbor, MI, USA in 2002, formulated a number of case studies to be developed into a benchmark similar to previous mantle convection benchmarks (Blankenbach et al., 1989; Busse et al., 1991; Van Keken et al., 1997). Our initial benchmark focuses on the dynamics of the mantle wedge and investigates three different rheologies: constant viscosity, diffusion creep, and dislocation creep. In addition we investigate the ability of codes to accurately model dynamic pressure and advection-dominated flows. Proceedings of the workshop and the formulation of the benchmark are available at www.geo.lsa.umich.edu/~keken/subduction02.html. We strongly encourage interested research groups to participate in this benchmark. At Nice 2003 we will provide an update and a first set of benchmark results. Interested researchers are encouraged to contact one of the authors for further details.

  15. Validation of the BUGJEFF311.BOLIB, BUGENDF70.BOLIB and BUGLE-B7 broad-group libraries on the PCA-Replica (H2O/Fe) neutron shielding benchmark experiment

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela

    2016-03-01

    The PCA-Replica 12/13 (H2O/Fe) neutron shielding benchmark experiment was analysed using the TORT-3.2 3D SN code. PCA-Replica reproduces a PWR ex-core radial geometry with alternate layers of water and steel, including a pressure vessel simulator. Three broad-group coupled neutron/photon working cross-section libraries in FIDO-ANISN format with the same energy group structure (47 n + 20 γ) but based on different nuclear data were used in turn: the ENEA BUGJEFF311.BOLIB (JEFF-3.1.1) and BUGENDF70.BOLIB (ENDF/B-VII.0) libraries and the ORNL BUGLE-B7 (ENDF/B-VII.0) library. Dosimeter cross-sections derived from the IAEA IRDF-2002 dosimetry file were employed. The calculated reaction rates for the Rh-103(n,n')Rh-103m, In-115(n,n')In-115m and S-32(n,p)P-32 threshold activation dosimeters and the calculated neutron spectra are compared with the corresponding experimental results.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Timothy P.; Martz, Roger L.; Kiedrowski, Brian C.

    New unstructured mesh capabilities in MCNP6 (developmental version during summer 2012) show potential for conducting multi-physics analyses by coupling MCNP to a finite element solver such as Abaqus/CAE [2]. Before these new capabilities can be utilized, the ability of MCNP to accurately estimate eigenvalues and pin powers using an unstructured mesh must first be verified. Previous work to verify the unstructured mesh capabilities in MCNP was accomplished using the Godiva sphere [1], and this work attempts to build on that. To accomplish this, a criticality benchmark and a fuel assembly benchmark were used for calculations in MCNP using both the Constructive Solid Geometry (CSG) native to MCNP and the unstructured mesh geometry generated using Abaqus/CAE. The Big Ten criticality benchmark [3] was modeled due to its geometry being similar to that of a reactor fuel pin. The C5G7 3-D Mixed Oxide (MOX) Fuel Assembly Benchmark [4] was modeled to test the unstructured mesh capabilities on a reactor-type problem.

  17. The Suite for Embedded Applications and Kernels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-05-10

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed SEAK, a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions and (b) to facilitate rigorous, objective, end-user evaluation of those solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user blackbox evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  18. The ab-initio density matrix renormalization group in practice.

    PubMed

    Olivares-Amaya, Roberto; Hu, Weifeng; Nakatani, Naoki; Sharma, Sandeep; Yang, Jun; Chan, Garnet Kin-Lic

    2015-01-21

    The ab-initio density matrix renormalization group (DMRG) is a tool that can be applied to a wide variety of interesting problems in quantum chemistry. Here, we examine the density matrix renormalization group from the vantage point of the quantum chemistry user. What kinds of problems is the DMRG well-suited to? What are the largest systems that can be treated at practical cost? What sort of accuracies can be obtained, and how do we reason about the computational difficulty in different molecules? By examining a diverse benchmark set of molecules: π-electron systems, benchmark main-group and transition metal dimers, and the Mn-oxo-salen and Fe-porphine organometallic compounds, we provide some answers to these questions, and show how the density matrix renormalization group is used in practice.

  19. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture

    NASA Astrophysics Data System (ADS)

    Meng, Chunfang

    2017-03-01

    We present Defmod, an open source (linear) finite element code that enables us to efficiently model crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for (quasi-)static problems and an explicit solver for dynamic problems. The fault constraint is implemented via Lagrange multipliers. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static and dynamic states. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.

  20. Gaussian Boson Sampling.

    PubMed

    Hamilton, Craig S; Kruse, Regina; Sansoni, Linda; Barkhofen, Sonja; Silberhorn, Christine; Jex, Igor

    2017-10-27

    Boson sampling has emerged as a tool to explore the advantages of quantum over classical computers as it does not require universal control over the quantum system, which favors current photonic experimental platforms. Here, we introduce Gaussian Boson sampling, a classically hard-to-solve problem that uses squeezed states as a nonclassical resource. We relate the probability to measure specific photon patterns from a general Gaussian state in the Fock basis to a matrix function called the Hafnian, which answers the last remaining question of sampling from Gaussian states. Based on this result, we design Gaussian Boson sampling, a #P hard problem, using squeezed states. This demonstrates that Boson sampling from Gaussian states is possible, with significant advantages in the photon generation probability, compared to existing protocols.

  1. Encoding color information for visual tracking: Algorithms and benchmark.

    PubMed

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.

  2. Performance evaluation of different types of particle representation procedures of Particle Swarm Optimization in Job-shop Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Izah Anuar, Nurul; Saptari, Adi

    2016-02-01

    This paper addresses the types of particle representation (encoding) procedures in a population-based stochastic optimization technique for solving scheduling problems known from the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) for solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP; this mapping is an important step, allowing each particle in PSO to represent a schedule in JSP. Three procedures, namely Operation and Particle Position Sequence (OPPS), random keys representation and the random-key encoding scheme, are used in this study. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, with the objective of minimizing the makespan, using MATLAB software. Based on the experimental results, OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
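
    The random-key style of encoding can be sketched compactly: a continuous particle position is decoded into an operation sequence by ranking its components. A minimal sketch follows, assuming a job-shop instance where each job has the same number of operations; the function and variable names are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def decode_random_keys(position, n_jobs, ops_per_job):
        # Rank the continuous keys; the k-th smallest key dispatches the
        # operation whose slot holds that key.
        order = np.argsort(position)
        # template lists each job once per operation, e.g. [0,0,1,1,2,2]
        template = np.repeat(np.arange(n_jobs), ops_per_job)
        return template[order]     # job sequence fed to the schedule builder

    rng = np.random.default_rng(4)
    pos = rng.random(6)            # one particle: 3 jobs x 2 operations
    print(decode_random_keys(pos, n_jobs=3, ops_per_job=2))
    ```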

  3. Principles for Developing Benchmark Criteria for Staff Training in Responsible Gambling.

    PubMed

    Oehler, Stefan; Banzer, Raphaela; Gruenerbl, Agnes; Malischnig, Doris; Griffiths, Mark D; Haring, Christian

    2017-03-01

    One approach to minimizing the negative consequences of excessive gambling is staff training to reduce the rate of development of new cases of harm or disorder among customers. The primary goal of the present study was to assess suitable benchmark criteria for the training of gambling employees at casinos and lottery retailers. The study utilised the Delphi Method, a survey with one qualitative and two quantitative phases. A total of 21 invited international experts in the responsible gambling field participated in all three phases. A total of 75 performance indicators were outlined and assigned to six categories: (1) criteria of content, (2) modelling, (3) qualification of trainer, (4) framework conditions, (5) sustainability and (6) statistical indicators. Nine of the 75 indicators were rated as very important by 90 % or more of the experts. Unanimous support was given to indicators such as (1) comprehensibility and (2) concrete action-guidance for dealing with problem gamblers. Additionally, the study examined the implementation of benchmarking, when it should be conducted, and who should be responsible. Results indicated that benchmarking should be conducted regularly, every 1-2 years, and that one institution should be clearly defined and made primarily responsible for benchmarking. The results of the present study provide the basis for developing benchmark criteria for staff training in responsible gambling.

  4. Hierarchical Artificial Bee Colony Algorithm for RFID Network Planning Optimization

    PubMed Central

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species can be aggregated by the subpopulations from lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness. PMID:24592200

  5. Hierarchical artificial bee colony algorithm for RFID network planning optimization.

    PubMed

    Ma, Lianbo; Chen, Hanning; Hu, Kunyuan; Zhu, Yunlong

    2014-01-01

    This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization, called HABC, to tackle the radio frequency identification network planning (RNP) problem. In the proposed multilevel model, the higher-level species can be aggregated by the subpopulations from lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, which can be constructed into a complete solution for the upper level. At the same time, the comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 10 benchmark optimization problems. The results demonstrate that the proposed HABC obtains remarkable performance on most chosen benchmark functions when compared to several successful swarm intelligence and evolutionary algorithms. Then HABC is used for solving the real-world RNP problem on two instances with different scales. Simulation results show that the proposed algorithm is superior for solving RNP, in terms of optimization accuracy and computation robustness.

  6. Performance evaluation of firefly algorithm with variation in sorting for non-linear benchmark problems

    NASA Astrophysics Data System (ADS)

    Umbarkar, A. J.; Balande, U. T.; Seth, P. D.

    2017-06-01

    The fields of nature-inspired computing and optimization techniques have evolved to solve difficult optimization problems in diverse areas of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), fireflies are ranked using a sorting algorithm; the original FA was proposed with bubble sort. In this paper, quicksort replaces bubble sort to decrease the time complexity of FA. The dataset used comprises unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quicksort are compared with respect to best, worst and mean values, standard deviation, number of comparisons and execution time. The experimental results show that FA using quicksort requires fewer comparisons but more execution time. Increasing the number of fireflies helps the algorithm converge to the optimal solution, whereas varying the dimension showed that the algorithm performs better at lower dimensions than at higher ones.
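
    The quantity being compared can be reproduced in a few lines: ranking a population of fireflies by brightness with bubble sort versus quicksort while counting comparisons. A minimal sketch follows (comparison counts only; it does not reproduce the execution-time effect reported above).

    ```python
    import random

    def bubble_sort(a):
        # Classic bubble sort with an explicit comparison counter.
        a, comps = a[:], 0
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                comps += 1
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a, comps

    def quick_sort(a):
        # Simple quicksort; each partition compares every element to the pivot.
        comps = 0
        def qs(a):
            nonlocal comps
            if len(a) <= 1:
                return a
            pivot, rest = a[0], a[1:]
            comps += len(rest)
            return (qs([x for x in rest if x < pivot]) + [pivot]
                    + qs([x for x in rest if x >= pivot]))
        return qs(a), comps

    brightness = [random.random() for _ in range(100)]   # one firefly per value
    _, c_bubble = bubble_sort(brightness)
    _, c_quick = quick_sort(brightness)
    print(f"bubble sort: {c_bubble} comparisons, quicksort: {c_quick} comparisons")
    ```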

  7. The Paucity Problem: Where Have All the Space Reactor Experiments Gone?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bess, John D.; Marshall, Margaret A.

    2016-10-01

    The Handbooks of the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPhEP) together contain a plethora of documented and evaluated experiments essential to the validation of nuclear data, neutronics codes, and models of various nuclear systems. Unfortunately, only a minute selection of handbook data (twelve evaluations) comes from actual experimental facilities and mockups designed specifically for space nuclear research. There is a paucity problem: the multitude of space nuclear experimental activities performed in the past several decades have yet to be recovered and made available in such detail that the international community could benefit from these valuable historical research efforts. Those experiments represent extensive investments in infrastructure, expertise, and cost, and constitute significantly valuable resources of data supporting past, present, and future research activities. The ICSBEP and IRPhEP were established to identify and verify comprehensive sets of benchmark data; evaluate the data, including quantification of biases and uncertainties; compile the data and calculations in a standardized format; and formally document the effort into a single source of verified benchmark data.

  8. Rigorous-two-Steps scheme of TRIPOLI-4® Monte Carlo code validation for shutdown dose rate calculation

    NASA Astrophysics Data System (ADS)

    Jaboulay, Jean-Charles; Brun, Emeric; Hugot, François-Xavier; Huynh, Tan-Dat; Malouch, Fadhel; Mancusi, Davide; Tsilanizara, Aime

    2017-09-01

    After fission or fusion reactor shutdown, the activated structure emits decay photons. For maintenance operations, the radiation dose map must be established in the reactor building. Several calculation schemes have been developed to calculate the shutdown dose rate. These schemes are widely used in fusion applications, most notably for the ITER tokamak. This paper presents the rigorous-two-steps scheme implemented at CEA. It is based on the TRIPOLI-4® Monte Carlo code and the inventory code MENDEL. The ITER shutdown dose rate benchmark has been carried out, and the results are in good agreement with those of the other participants.

  9. Itinerant Microwave Photon Detector

    NASA Astrophysics Data System (ADS)

    Royer, Baptiste; Grimsmo, Arne L.; Choquette-Poitevin, Alexandre; Blais, Alexandre

    2018-05-01

    The realization of a high-efficiency microwave single photon detector is a long-standing problem in the field of microwave quantum optics. Here, we propose a quantum nondemolition, high-efficiency photon detector that can readily be implemented in present state-of-the-art circuit quantum electrodynamics. This scheme works in a continuous fashion, gaining information about the photon arrival time as well as about its presence. The key insight that allows us to circumvent the usual limitations imposed by measurement backaction is the use of long-lived dark states in a small ensemble of inhomogeneous artificial atoms to increase the interaction time between the photon and the measurement device. Using realistic system parameters, we show that large detection fidelities are possible.

  10. Using a knowledge-based planning solution to select patients for proton therapy.

    PubMed

    Delaney, Alexander R; Dahele, Max; Tol, Jim P; Kuijper, Ingrid T; Slotman, Ben J; Verbakel, Wilko F A R

    2017-08-01

    Patient selection for proton therapy by comparing proton/photon treatment plans is time-consuming and prone to bias. RapidPlan™, a knowledge-based planning solution, uses plan libraries to model and predict organ-at-risk (OAR) dose-volume histograms (DVHs). We investigated whether RapidPlan, utilizing an algorithm based only on photon beam characteristics, could generate proton DVH predictions and whether these could correctly identify patients for proton therapy. Model PROT and Model PHOT comprised 30 head-and-neck cancer proton and photon plans, respectively. Proton and photon knowledge-based plans (KBPs) were made for ten evaluation patients. DVH prediction accuracy was analyzed by comparing predicted versus achieved mean OAR doses. KBPs and manual plans were compared using salivary gland and swallowing muscle mean doses. For illustration, patients were selected for protons if the predicted Model PHOT mean dose minus the predicted Model PROT mean dose (ΔPrediction) for combined OARs was ≥6 Gy, and this rule was benchmarked using achieved KBP doses. The R² between achieved and predicted Model PROT/Model PHOT mean dose was 0.95/0.98. Generally, achieved mean doses for Model PHOT/Model PROT KBPs were respectively lower/higher than predicted. Comparing Model PROT/Model PHOT KBPs with manual plans, salivary and swallowing mean doses increased/decreased by <2 Gy on average. ΔPrediction ≥ 6 Gy correctly selected 4 of 5 patients for protons. Knowledge-based DVH predictions can provide efficient, patient-specific selection for protons. A proton-specific RapidPlan solution could improve results. Copyright © 2017 Elsevier B.V. All rights reserved.
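
    The illustrative selection rule reduces to a one-line threshold test; a hedged sketch follows (the dose values are hypothetical, and the real workflow aggregates predicted mean doses over the combined OARs).

    ```python
    def select_for_protons(pred_photon_mean_gy, pred_proton_mean_gy, threshold_gy=6.0):
        # Refer a patient for protons when the predicted photon-minus-proton
        # mean OAR dose gap meets the threshold.
        delta_prediction = pred_photon_mean_gy - pred_proton_mean_gy
        return delta_prediction >= threshold_gy

    print(select_for_protons(32.1, 24.8))   # True: gap of 7.3 Gy exceeds 6 Gy
    ```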

  11. Structuring β-Ga2O3 photonic crystal photocatalyst for efficient degradation of organic pollutants.

    PubMed

    Li, Xiaofang; Zhen, Xiuzheng; Meng, Sugang; Xian, Jiangjun; Shao, Yu; Fu, Xianzhi; Li, Danzhen

    2013-09-03

    Coupling photocatalysts with a photonic crystal structure exploits the unique property of photonic crystals of confining, controlling, and manipulating incident photons. This combination enhances the light absorption in photocatalysts and thus greatly improves their photocatalytic performance. In this study, Ga2O3 photonic crystals with well-arranged skeleton structures were prepared via a dip-coating infiltration method. The positions of the electronic band absorption of the Ga2O3 photonic crystals could be made to lie on the red edge, on the blue edge, or away from the edge of their photonic band gaps by changing the pore sizes of the samples. In particular, the electronic band absorption of the Ga2O3 photonic crystal with a pore size of 135 nm was enhanced more than that of the other samples by locating it on the red edge of its photonic band gap, which was confirmed by the higher instantaneous photocurrent and photocatalytic activity for the degradation of various organic pollutants under ultraviolet light irradiation. Furthermore, the degradation mechanism over Ga2O3 photonic crystals is discussed. The design of Ga2O3 photonic crystals presents a prospective application of photonic crystals in photocatalysis, addressing light-harvesting and quantum-efficiency problems by manipulating photons through the photonic crystal structure.

  12. Photon Limited Images and Their Restoration

    DTIC Science & Technology

    1976-03-01

    arises from noise inherent in the detected image data. In the first part of this report a model is developed which can be used to mathematically and statistically describe an image detected at low light levels. This model serves to clarify some basic properties of photon noise, and provides a basis for the analysis of image restoration. In the second part the problem of linear least-square restoration of imagery limited by photon noise is

  13. Study on photon transport problem based on the platform of molecular optical simulation environment.

    PubMed

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

    As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have received extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, the simplified spherical harmonics method (SPn), and physical measurement to verify the performance of our method in terms of both accuracy and efficiency.

  14. Study on Photon Transport Problem Based on the Platform of Molecular Optical Simulation Environment

    PubMed Central

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

    As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since physical experiments are usually complicated and expensive, research methods based on simulation platforms have received extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, the simplified spherical harmonics method (SPn), and physical measurement to verify the performance of our method in terms of both accuracy and efficiency. PMID:20445737

  15. Experimental demonstration of a BDCZ quantum repeater node.

    PubMed

    Yuan, Zhen-Sheng; Chen, Yu-Ao; Zhao, Bo; Chen, Shuai; Schmiedmayer, Jörg; Pan, Jian-Wei

    2008-08-28

    Quantum communication is a method that offers efficient and secure ways for the exchange of information in a network. Large-scale quantum communication (of the order of 100 km) has been achieved; however, serious problems occur beyond this distance scale, mainly due to inevitable photon loss in the transmission channel. Quantum communication eventually fails when the probability of a dark count in the photon detectors becomes comparable to the probability that a photon is correctly detected. To overcome this problem, Briegel, Dür, Cirac and Zoller (BDCZ) introduced the concept of quantum repeaters, combining entanglement swapping and quantum memory to efficiently extend the achievable distances. Although entanglement swapping has been experimentally demonstrated, the implementation of BDCZ quantum repeaters has proved challenging owing to the difficulty of integrating a quantum memory. Here we realize entanglement swapping with storage and retrieval of light, a building block of the BDCZ quantum repeater. We follow a scheme that incorporates the strategy of BDCZ with atomic quantum memories. Two atomic ensembles, each originally entangled with a single emitted photon, are projected into an entangled state by performing a joint Bell state measurement on the two single photons after they have passed through a 300-m fibre-based communication channel. The entanglement is stored in the atomic ensembles and later verified by converting the atomic excitations into photons. Our method is intrinsically phase insensitive and establishes the essential element needed to realize quantum repeaters with stationary atomic qubits as quantum memories and flying photonic qubits as quantum messengers.

  16. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. Take cone beam CT (CBCT) projection simulation as an example: a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package, gMMC, on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by this new path-by-path simulation scheme, in which all the computation is spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
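
    The abstract does not disclose gMMC's path weight or proposal distribution, so the toy sketch below only illustrates the core Metropolis-Hastings accept/reject step over whole paths; path_weight and propose are hypothetical stand-ins for the physics-governed quantities.

        import math, random

        def path_weight(path):
            # Hypothetical stand-in for the probability of a detector-terminated
            # photon path (a product of attenuation and interaction factors in a
            # real transport code).
            return math.exp(-sum(abs(v) for v in path))

        def propose(path):
            # Symmetric proposal: perturb one interaction point of the current path.
            new = list(path)
            new[random.randrange(len(new))] += random.gauss(0.0, 0.1)
            return new

        def sample_paths(n_samples, dim=4):
            path = [0.0] * dim
            chain = []
            for _ in range(n_samples):
                cand = propose(path)
                # Symmetric proposal, so the Metropolis-Hastings acceptance
                # probability reduces to a ratio of path weights.
                if random.random() < min(1.0, path_weight(cand) / path_weight(path)):
                    path = cand
                chain.append(list(path))   # every retained path scores at the detector
            return chain

    Because every sampled path ends at the detector by construction, no work is spent on photons that miss it, which is the source of the reported speed-up.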

  17. Optimization in optical systems revisited: Beyond genetic algorithms

    NASA Astrophysics Data System (ADS)

    Gagnon, Denis; Dumont, Joey; Dubé, Louis

    2013-05-01

    Designing integrated photonic devices such as waveguides, beam-splitters and beam-shapers often requires optimization of a cost function over a large solution space. Metaheuristics - algorithms based on empirical rules for exploring the solution space - are specifically tailored to those problems. One of the most widely used metaheuristics is the standard genetic algorithm (SGA), based on the evolution of a population of candidate solutions. However, the stochastic nature of the SGA sometimes prevents access to the optimal solution. Our goal is to show that a parallel tabu search (PTS) algorithm is more suited to optimization problems in general, and to photonics in particular. PTS is based on several search processes using a pool of diversified initial solutions. To assess the performance of both algorithms (SGA and PTS), we consider an integrated photonics design problem, the generation of arbitrary beam profiles using a two-dimensional waveguide-based dielectric structure. The authors acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
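
    As a rough illustration of the PTS idea (a pool of diversified starting solutions, each driving its own tabu search), here is a toy sketch on bit strings. The objective, tabu tenure, and pool size are arbitrary assumptions, and a real implementation would run the searches concurrently and exchange information between them.

        import random

        def tabu_search(objective, n_bits, start, iters=200, tenure=7):
            """One tabu search process over bit strings (flip-one neighbourhood)."""
            current = list(start)
            best, best_val = list(current), objective(current)
            tabu = {}                         # bit index -> iteration it becomes free
            for it in range(iters):
                moves = []
                for i in range(n_bits):
                    neigh = list(current)
                    neigh[i] ^= 1
                    val = objective(neigh)
                    # aspiration: a tabu move is allowed if it beats the global best
                    if tabu.get(i, -1) < it or val < best_val:
                        moves.append((val, i, neigh))
                val, i, current = min(moves)
                tabu[i] = it + tenure
                if val < best_val:
                    best, best_val = list(current), val
            return best, best_val

        def parallel_tabu_search(objective, n_bits, pool=8):
            # "Parallel" here means a pool of diversified starts, run sequentially.
            starts = ([random.randint(0, 1) for _ in range(n_bits)] for _ in range(pool))
            return min((tabu_search(objective, n_bits, s) for s in starts),
                       key=lambda r: r[1])

        # Toy objective: Hamming distance to a hidden target pattern.
        target = [1, 0] * 8
        print(parallel_tabu_search(lambda b: sum(x != t for x, t in zip(b, target)), 16))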

  18. Evaluating Biology Achievement Scores in an ICT Integrated PBL Environment

    ERIC Educational Resources Information Center

    Osman, Kamisah; Kaur, Simranjeet Judge

    2014-01-01

    Students' achievement in Biology is often looked up as a benchmark to evaluate the mode of teaching and learning in higher education. Problem-based learning (PBL) is an approach that focuses on students' solving a problem through collaborative groups. There were eighty samples involved in this study. The samples were divided into three groups: ICT…

  19. Wilderness visitor management practices: a benchmark and an assessment of progress

    Treesearch

    Alan E. Watson

    1989-01-01

    In the short time that wilderness visitor management practices have been monitored, some obvious trends have developed. The managing agencies, however, have appeared to provide different solutions to similar problems. In the early years, these problems revolved around concern about overuse of the resource and crowded conditions. Some of those concerns exist today, but...

  20. A multiagent evolutionary algorithm for constraint satisfaction problems.

    PubMed

    Liu, Jing; Zhong, Weicai; Jiao, Licheng

    2006-02-01

    With the intrinsic properties of constraint satisfaction problems (CSPs) in mind, we divide CSPs into two types, namely, permutation CSPs and nonpermutation CSPs. According to their characteristics, several behaviors are designed for agents by making use of the ability of agents to sense and act on the environment. These behaviors are controlled by means of evolution, so that the multiagent evolutionary algorithm for constraint satisfaction problems (MAEA-CSPs) results. To overcome the disadvantages of the general encoding methods, the minimum conflict encoding is also proposed. Theoretical analyses show that MAEA-CSPs has a linear space complexity and converges to the global optimum. The first part of the experiments uses 250 benchmark binary CSPs and 79 graph coloring problems from the DIMACS challenge to test the performance of MAEA-CSPs for nonpermutation CSPs. MAEA-CSPs is compared with six well-defined algorithms and the effect of the parameters is analyzed systematically. The second part of the experiments uses a classical CSP, n-queen problems, and a more practical case, job-shop scheduling problems (JSPs), to test the performance of MAEA-CSPs for permutation CSPs. The scalability of MAEA-CSPs along n for n-queen problems is studied with great care. The results show that MAEA-CSPs achieves good performance when n increases from 10^4 to 10^7, and has a linear time complexity. Even for 10^7-queen problems, MAEA-CSPs finds the solutions in only 150 seconds. For JSPs, 59 benchmark problems are used, and good performance is also obtained.
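
    The minimum conflict encoding is specific to the paper, but the classic min-conflicts heuristic that it echoes is easy to sketch for the n-queens case mentioned above. This toy version is not MAEA-CSPs and will not scale to 10^7 queens.

        import random

        def min_conflicts_nqueens(n, max_steps=100000):
            """Min-conflicts local search for n-queens (one queen per column)."""
            rows = [random.randrange(n) for _ in range(n)]

            def conflicts(col, row):
                return sum(1 for c in range(n) if c != col and
                           (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

            for _ in range(max_steps):
                bad = [c for c in range(n) if conflicts(c, rows[c]) > 0]
                if not bad:
                    return rows              # solved
                col = random.choice(bad)
                # move the queen to a minimally conflicted row in its column
                rows[col] = min(range(n), key=lambda r: conflicts(col, r))
            return None                      # give up

        print(min_conflicts_nqueens(64))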

  1. Better Medicare Cost Report data are needed to help hospitals benchmark costs and performance.

    PubMed

    Magnus, S A; Smith, D G

    2000-01-01

    To evaluate costs and achieve cost control in the face of new technology and demands for efficiency from both managed care and governmental payers, hospitals need to benchmark their costs against those of other comparable hospitals. Since they typically use Medicare Cost Report (MCR) data for this purpose, a variety of cost accounting problems with the MCR may hamper hospitals' understanding of their relative costs and performance. Managers and researchers alike need to investigate the validity, accuracy, and timeliness of the MCR's cost accounting data.

  2. Toward Scalable Boson Sampling with Photon Loss

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Li, Wei; Jiang, Xiao; He, Y.-M.; Li, Y.-H.; Ding, X.; Chen, M.-C.; Qin, J.; Peng, C.-Z.; Schneider, C.; Kamp, M.; Zhang, W.-J.; Li, H.; You, L.-X.; Wang, Z.; Dowling, J. P.; Höfling, S.; Lu, Chao-Yang; Pan, Jian-Wei

    2018-06-01

    Boson sampling is a well-defined task that is strongly believed to be intractable for classical computers, but can be efficiently solved by a specific quantum simulator. However, an outstanding problem for large-scale experimental boson sampling is the scalability. Here we report an experiment on boson sampling with photon loss, and demonstrate that boson sampling with a few photons lost can increase the sampling rate. Our experiment uses a quantum-dot-micropillar single-photon source demultiplexed into up to seven input ports of a 16×16 mode ultralow-loss photonic circuit, and we detect three-, four- and fivefold coincidence counts. We implement and validate lossy boson sampling with one and two photons lost, and obtain sampling rates of 187, 13.6, and 0.78 kHz for five-, six-, and seven-photon boson sampling with two photons lost, which is 9.4, 13.9, and 18.0 times faster than the standard boson sampling, respectively. Our experiment shows an approach to significantly enhance the sampling rate of multiphoton boson sampling.

  3. Toward Scalable Boson Sampling with Photon Loss.

    PubMed

    Wang, Hui; Li, Wei; Jiang, Xiao; He, Y-M; Li, Y-H; Ding, X; Chen, M-C; Qin, J; Peng, C-Z; Schneider, C; Kamp, M; Zhang, W-J; Li, H; You, L-X; Wang, Z; Dowling, J P; Höfling, S; Lu, Chao-Yang; Pan, Jian-Wei

    2018-06-08

    Boson sampling is a well-defined task that is strongly believed to be intractable for classical computers, but can be efficiently solved by a specific quantum simulator. However, an outstanding problem for large-scale experimental boson sampling is the scalability. Here we report an experiment on boson sampling with photon loss, and demonstrate that boson sampling with a few photons lost can increase the sampling rate. Our experiment uses a quantum-dot-micropillar single-photon source demultiplexed into up to seven input ports of a 16×16 mode ultralow-loss photonic circuit, and we detect three-, four- and fivefold coincidence counts. We implement and validate lossy boson sampling with one and two photons lost, and obtain sampling rates of 187, 13.6, and 0.78 kHz for five-, six-, and seven-photon boson sampling with two photons lost, which is 9.4, 13.9, and 18.0 times faster than the standard boson sampling, respectively. Our experiment shows an approach to significantly enhance the sampling rate of multiphoton boson sampling.

  4. Computational Efficiency of the Simplex Embedding Method in Convex Nondifferentiable Optimization

    NASA Astrophysics Data System (ADS)

    Kolosnitsyn, A. V.

    2018-02-01

    The simplex embedding method for solving convex nondifferentiable optimization problems is considered. Modifications of this method, based on a shift of the cutting plane intended to cut off the maximum number of simplex vertices, are described. These modifications speed up the solution of the problem. A numerical comparison of the efficiency of the proposed modifications, based on the numerical solution of benchmark convex nondifferentiable optimization problems, is presented.

  5. Listening to the occupants: a Web-based indoor environmental quality survey.

    PubMed

    Zagreus, Leah; Huizenga, Charlie; Arens, Edward; Lehrer, David

    2004-01-01

    Building occupants are a rich source of information about indoor environmental quality and its effect on comfort and productivity. The Center for the Built Environment has developed a Web-based survey and accompanying online reporting tools to quickly and inexpensively gather, process and present this information. The core questions assess occupant satisfaction with the following IEQ areas: office layout, office furnishings, thermal comfort, indoor air quality, lighting, acoustics, and building cleanliness and maintenance. The survey can be used to assess the performance of a building, identify areas needing improvement, and provide useful feedback to designers and operators about specific aspects of building design features and operating strategies. The survey has been extensively tested and refined and has been conducted in more than 70 buildings, creating a rapidly growing database of standardized survey data that is used for benchmarking. We present three case studies that demonstrate different applications of the survey: a pre/post analysis of occupants moving to a new building, a survey used in conjunction with physical measurements to determine how environmental factors affect occupants' perceived comfort and productivity levels, and a benchmarking example of using the survey to establish how new buildings are meeting a client's design objectives. In addition to its use in benchmarking a building's performance against other buildings, the CBE survey can be used as a diagnostic tool to identify specific problems and their sources. Whenever a respondent indicates dissatisfaction with an aspect of building performance, a branching page follows with more detailed questions about the nature of the problem. This systematically collected information provides a good resource for solving indoor environmental problems in the building. By repeating the survey after a problem has been corrected it is also possible to assess the effectiveness of the solution.

  6. A suite of exercises for verifying dynamic earthquake rupture codes

    USGS Publications Warehouse

    Harris, Ruth A.; Barall, Michael; Aagaard, Brad T.; Ma, Shuo; Roten, Daniel; Olsen, Kim B.; Duan, Benchun; Liu, Dunyu; Luo, Bin; Bai, Kangchen; Ampuero, Jean-Paul; Kaneko, Yoshihiro; Gabriel, Alice-Agnes; Duru, Kenneth; Ulrich, Thomas; Wollherr, Stephanie; Shi, Zheqiang; Dunham, Eric; Bydlon, Sam; Zhang, Zhenguo; Chen, Xiaofei; Somala, Surendra N.; Pelties, Christian; Tago, Josue; Cruz-Atienza, Victor Manuel; Kozdon, Jeremy; Daub, Eric; Aslam, Khurram; Kase, Yuko; Withers, Kyle; Dalguer, Luis

    2018-01-01

    We describe a set of benchmark exercises that are designed to test if computer codes that simulate dynamic earthquake rupture are working as intended. These types of computer codes are often used to understand how earthquakes operate, and they produce simulation results that include earthquake size, amounts of fault slip, and the patterns of ground shaking and crustal deformation. The benchmark exercises examine a range of features that scientists incorporate in their dynamic earthquake rupture simulations. These include implementations of simple or complex fault geometry, off‐fault rock response to an earthquake, stress conditions, and a variety of formulations for fault friction. Many of the benchmarks were designed to investigate scientific problems at the forefronts of earthquake physics and strong ground motions research. The exercises are freely available on our website for use by the scientific community.

  7. Brain-Inspired Photonic Signal Processor for Generating Periodic Patterns and Emulating Chaotic Systems

    NASA Astrophysics Data System (ADS)

    Antonik, Piotr; Haelterman, Marc; Massar, Serge

    2017-05-01

    Reservoir computing is a bioinspired computing paradigm for processing time-dependent signals. Its hardware implementations have received much attention because of their simplicity and remarkable performance on a series of benchmark tasks. In previous experiments, the output was uncoupled from the system and, in most cases, simply computed off-line on a postprocessing computer. However, numerical investigations have shown that feeding the output back into the reservoir opens the possibility of long-horizon time-series forecasting. Here, we present a photonic reservoir computer with output feedback, and we demonstrate its capacity to generate periodic time series and to emulate chaotic systems. We study in detail the effect of experimental noise on system performance. In the case of chaotic systems, we introduce several metrics, based on standard signal-processing techniques, to evaluate the quality of the emulation. Our work significantly enlarges the range of tasks that can be solved by hardware reservoir computers and, therefore, the range of applications they could potentially tackle. It also raises interesting questions in nonlinear dynamics and chaos theory.
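
    A minimal digital echo-state-network sketch of the output-feedback idea: train the linear readout with teacher forcing on a periodic target, then close the loop and let the reservoir generate the signal autonomously. All sizes and scalings below are assumptions; the actual system is an opto-electronic reservoir, not this numerical model.

        import numpy as np

        rng = np.random.default_rng(0)
        N, T_train, T_free = 200, 1000, 300        # reservoir size, training/free-run lengths
        W = rng.normal(0.0, 1.0, (N, N))
        W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius below 1
        w_fb = rng.uniform(-1.0, 1.0, N)           # output-to-reservoir feedback weights

        teacher = np.sin(0.2 * np.arange(T_train + 1))   # periodic target signal
        X, x = np.zeros((T_train, N)), np.zeros(N)
        for t in range(T_train):                   # teacher forcing: feed the true output back
            x = np.tanh(W @ x + w_fb * teacher[t])
            X[t] = x
        w_out = np.linalg.lstsq(X, teacher[1:], rcond=None)[0]  # linear readout (ridge omitted)

        y, generated = teacher[T_train], []        # free run: feed the *predicted* output back
        for _ in range(T_free):
            x = np.tanh(W @ x + w_fb * y)
            y = x @ w_out
            generated.append(y)

    As the abstract notes, experimental noise enters this loop through the fed-back output, which is why its effect on the generated series needs careful study.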

  8. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnis Judzis

    2002-10-01

    This document details the progress to date on the OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE -- A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING contract for the quarter from July 2002 through September 2002. Even though the optimization portion of the testing program is still pending, accomplishments include the following: (1) Smith International agreed to participate in the DOE Mud Hammer program. (2) Smith International chromed collars for upcoming benchmark tests at TerraTek, now scheduled for 4Q 2002. (3) ConocoPhillips had a field trial of the Smith fluid hammer offshore Vietnam. The hammer functioned properly, though the well encountered hole conditions and reaming problems; ConocoPhillips plans another field trial as a result. (4) DOE/NETL extended the contract for the fluid hammer program to allow Novatek to 'optimize' their much delayed tool into 2003 and to allow Smith International to add 'benchmarking' tests in light of SDS Digger Tools' current financial inability to participate. (5) ConocoPhillips joined the Industry Advisors for the mud hammer program. (6) TerraTek acknowledges Smith International, BP America, PDVSA, and ConocoPhillips for cost-sharing the Smith benchmarking tests, allowing extension of the contract to complete the optimizations.

  9. Theoretical and experimental investigations of efficient light coupling with spatially varied all dielectric striped waveguides

    NASA Astrophysics Data System (ADS)

    Yilmaz, Y. A.; Tandogan, S. E.; Hayran, Z.; Giden, I. H.; Turduev, M.; Kurt, H.

    2017-07-01

    Integrated photonic systems require efficient, compact, and broadband solutions for strong light coupling into and out of optical waveguides. The present work investigates the problem of efficiently transferring optical power between optical waveguides whose input/output terminals have different widths. We propose a practical and feasible concept for designing an optical coupler by introducing a gradual index modulation into the coupler section. The index profile of the coupler section is modulated with a Gaussian function with the help of striped waveguides. The effective medium theory is used to replace the original spatially varying index profile with dielectric stripes of finite length/width having a constant effective refractive index. 2D and 3D finite-difference time-domain analyses are used to investigate the sampling effect of the designed optical coupler and to determine the parameters that play a crucial role in enhancing the optical power transfer. Compared with conventional benchmark adiabatic and butt couplers, the coupling efficiency of the designed striped waveguide coupler increases from approximately 30% to 95% over a wide frequency interval. In addition, to realize a realistic optical coupler appropriate for integrated photonic applications, the proposed structure is numerically designed on a silicon-on-insulator wafer. The implemented SOI-platform-based optical coupler operates in the telecom wavelength regime (λ = 1.55 μm), and the dimensions of the striped coupler are 9.77 μm (transverse to the propagation direction) and 7.69 μm (along the propagation direction), with the unit distance fixed at 465 nm. Finally, to demonstrate the operating design principle, microwave experiments were conducted: a spot size conversion ratio as high as 7.1:1 was measured, and a coupling efficiency over 60% in the frequency range of 5.0-16.0 GHz was demonstrated.
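
    One way to read the design recipe: choose a target Gaussian effective-index profile across the coupler, then invert an effective-medium mixing rule to get the silicon fill factor of each stripe period. The sketch below assumes the zeroth-order TE rule n_eff^2 = f*n_hi^2 + (1 - f)*n_lo^2 and illustrative material indices; the paper's exact modulation function and parameters may differ.

        import numpy as np

        n_hi, n_lo = 3.48, 1.44        # silicon core and oxide cladding (illustrative)
        period = 0.465                 # stripe pitch in micrometres (from the text)
        width = 9.77                   # transverse extent of the coupler section
        sigma = 2.0                    # assumed width of the Gaussian index profile

        # Target: Gaussian effective-index profile peaking at n_hi in the centre.
        y = np.arange(-width / 2, width / 2, period)
        n_target = n_lo + (n_hi - n_lo) * np.exp(-y**2 / (2.0 * sigma**2))

        # Invert the zeroth-order effective-medium rule for the fill factor f.
        fill = (n_target**2 - n_lo**2) / (n_hi**2 - n_lo**2)
        stripe_widths = fill * period  # physical silicon stripe width in each period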

  10. Benchmarking the Performance of Exchange-Correlation Functionals for Predicting Two-Photon Absorption Strengths.

    PubMed

    Beerepoot, Maarten T P; Alam, Md Mehboob; Bednarska, Joanna; Bartkowiak, Wojciech; Ruud, Kenneth; Zaleśny, Robert

    2018-06-15

    The present work investigates the performance of exchange-correlation functionals in the prediction of two-photon absorption (2PA) strengths. For this purpose, we considered six common functionals used for studying 2PA processes and tested these on six organoboron chelates. The set consisted of two semilocal (PBE and BLYP), two hybrid (B3LYP and PBE0), and two range-separated (LC-BLYP and CAM-B3LYP) functionals. The RI-CC2 method was chosen as a reference level and was found to give results consistent with the experimental data that are available for three of the molecules considered. Of the six exchange-correlation functionals studied, only the range-separated functionals predict an ordering of the 2PA strengths that is consistent with experimental and RI-CC2 results. Even though the range-separated functionals predict correct relative trends, the absolute values for the 2PA strengths are underestimated by a factor of 2-6 for the molecules considered. An in-depth analysis, on the basis of the derived generalized few-state model expression for the 2PA strength for a coupled-cluster wave function, reveals that the problem with these functionals can be linked to underestimated excited-state dipole moments and, to a lesser extent, overestimated excitation energies. The semilocal and hybrid functionals exhibit less predictable errors and a variation in the 2PA strengths in disagreement with the reference results. The semilocal and hybrid functionals show smaller average errors than the range-separated functionals, but our analysis reveals that this is due to fortuitous error cancellation between excitation energies and the transition dipole moments. Our results constitute a warning against using currently available exchange-correlation functionals in the prediction of 2PA strengths and highlight the need for functionals that correctly describe the electron density of excited electronic states.
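
    For a single dominant final state, the generalized few-state model referred to above reduces to a two-state expression whose schematic form (prefactors and orientational-averaging conventions vary between derivations and are omitted here) already explains the observed error pattern:

        \delta^{\mathrm{2PA}}_{0f} \;\propto\;
        \frac{ |\boldsymbol{\mu}_{0f}|^{2} \, |\Delta\boldsymbol{\mu}_{0f}|^{2} \,
               \bigl( 2\cos^{2}\theta + 1 \bigr) }{ E_{0f}^{2} }

    Here \mu_{0f} is the transition dipole moment, \Delta\mu_{0f} the difference between the excited- and ground-state dipole moments, \theta the angle between the two vectors, and E_{0f} the excitation energy. Underestimating the excited-state dipole moment (and hence \Delta\mu_{0f}) lowers \delta quadratically, while overestimating E_{0f} lowers it further, which is exactly the combination the authors identify for the range-separated functionals.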

  11. Robust visual tracking via multiple discriminative models with object proposals

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanqiang; Bi, Duyan; Zha, Yufei; Li, Huanyu; Ku, Tao; Wu, Min; Ding, Wenshan; Fan, Zunlin

    2018-04-01

    Model drift is an important cause of tracking failure. In this paper, multiple discriminative models with object proposals are used to improve model discrimination and relieve this problem. First, changes in target location and scale are captured by a large number of high-quality object proposals, which are represented by deep convolutional features for target semantics. Then, by sharing a feature map obtained from a pre-trained network, ROI pooling is exploited to warp object proposals of various sizes into vectors of the same length, from which a discriminative model can conveniently be learned. Lastly, these historical snapshot vectors are used to train models with different lifetimes. Based on an entropy decision mechanism, a model corrupted by drift can be corrected by selecting the best discriminative model, which improves the robustness of the tracker significantly. We extensively evaluate our tracker on two popular benchmarks, the OTB 2013 benchmark and the UAV20L benchmark. On both benchmarks, our tracker achieves the best precision and success rate compared with state-of-the-art trackers.

  12. Scalable Effective Approaches for Quadratic Assignment Problems Based on Conic Optimization and Applications

    DTIC Science & Technology

    2012-02-09

    …benchmark problem we contacted Bertrand LeCun, who in their project CHOC (2005-2008) had applied their parallel B&B framework BOB++ to the RLT1…

  13. Alternative industrial carbon emissions benchmark based on input-output analysis

    NASA Astrophysics Data System (ADS)

    Han, Mengyao; Ji, Xi

    2016-12-01

    Some problems exist in current carbon emissions benchmark setting systems. The primary considerations for industrial carbon emissions standards relate mainly to direct carbon emissions (power-related emissions), and only a portion of indirect emissions is considered in current carbon emissions accounting processes. This practice is insufficient and may cause some double counting due to mixed emission sources. To better integrate and quantify direct and indirect carbon emissions, an embodied industrial carbon emissions benchmark setting method is proposed to guide the establishment of carbon emissions benchmarks based on input-output analysis. This method links direct carbon emissions with inter-industrial economic exchanges and systematically quantifies the carbon emissions embodied in total product delivery chains. The purpose of this study is to design a practical new set of embodied intensity-based benchmarks for both direct and indirect carbon emissions. Beijing, at the first level of carbon emissions trading pilot schemes in China, plays a significant role in the establishment of these schemes and is chosen as an example in this study. The newly proposed method relates emissions directly to each responsibility in a practical way through the measurement of complex production and supply chains, and reduces carbon emissions at their original sources. The method is expected to be developed further under uncertain internal and external contexts, and to be generalized to guide the establishment of industrial benchmarks for carbon emissions trading schemes in China and other countries.
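
    The embodied-intensity calculation underlying such a benchmark is standard environmentally extended input-output analysis: direct emission intensities are propagated through the Leontief inverse so that every benchmark reflects the full supply chain. A toy sketch with made-up numbers (not Beijing data):

        import numpy as np

        # Toy three-sector economy: technical coefficients a_ij = input of i per unit output of j.
        A = np.array([[0.10, 0.30, 0.05],
                      [0.20, 0.05, 0.10],
                      [0.05, 0.10, 0.15]])
        d = np.array([2.0, 0.5, 1.0])     # direct emission intensity (tCO2 per unit output)

        # Embodied intensity: direct emissions propagated through all supply chains,
        #   epsilon = d (I - A)^(-1)   (the Leontief inverse).
        epsilon = d @ np.linalg.inv(np.eye(3) - A)

        # Emissions embodied in a final-demand bundle y:
        y = np.array([1.0, 2.0, 0.5])
        print(epsilon, epsilon @ y)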

  14. Instruction-matrix-based genetic programming.

    PubMed

    Li, Gang; Wang, Jin Feng; Lee, Kin Hong; Leung, Kwong-Sak

    2008-08-01

    In genetic programming (GP), evolving tree nodes separately would reduce the huge solution space. However, tree nodes are highly interdependent with respect to their fitness. In this paper, we propose a new GP framework, namely, instruction-matrix (IM)-based GP (IMGP), to handle their interactions. IMGP maintains an IM to evolve tree nodes and subtrees separately. IMGP extracts program trees from an IM and updates the IM with the information of the extracted program trees. As the IM actually keeps most of the information of the schemata of GP and evolves the schemata directly, IMGP is effective and efficient. Our experimental results on benchmark problems verify that IMGP outperforms canonical GP in terms of both solution quality and the number of program evaluations, and that it is also better than several related GP algorithms. IMGP can also be used to evolve programs for classification problems. The classifiers obtained have higher classification accuracies than four other GP classification algorithms on four benchmark classification problems. The testing errors are also comparable to or better than those obtained with well-known classifiers. Furthermore, an extended version, called condition matrix for rule learning, has been used successfully to handle multiclass classification problems.

  15. Helmholtz and parabolic equation solutions to a benchmark problem in ocean acoustics.

    PubMed

    Larsson, Elisabeth; Abrahamsson, Leif

    2003-05-01

    The Helmholtz equation (HE) describes wave propagation in applications such as acoustics and electromagnetics. For realistic problems, solving the HE is often too expensive. Instead, approximations like the parabolic wave equation (PE) are used. For low-frequency shallow-water environments, one persistent problem is to assess the accuracy of the PE model. In this work, a recently developed HE solver that can handle a smoothly varying bathymetry, variable material properties, and layered materials, is used for an investigation of the errors in PE solutions. In the HE solver, a preconditioned Krylov subspace method is applied to the discretized equations. The preconditioner combines domain decomposition and fast transform techniques. A benchmark problem with upslope-downslope propagation over a penetrable lossy seamount is solved. The numerical experiments show that, for the same bathymetry, a soft and slow bottom gives very similar HE and PE solutions, whereas the PE model is far from accurate for a hard and fast bottom. A first attempt to estimate the error is made by computing the relative deviation from the energy balance for the PE solution. This measure gives an indication of the magnitude of the error, but cannot be used as a strict error bound.

  16. Semi-quantum Dialogue Based on Single Photons

    NASA Astrophysics Data System (ADS)

    Ye, Tian-Yu; Ye, Chong-Qiang

    2018-02-01

    In this paper, we propose two semi-quantum dialogue (SQD) protocols using single photons as the quantum carriers, where one requires the classical party to possess measurement capability and the other does not. In the first SQD protocol, security against active attacks from an outside Eve is guaranteed by the complete robustness of present semi-quantum key distribution (SQKD) protocols, the classical one-time pad encryption, the classical party's randomization operation, and the decoy photon technique. The information leakage problem of the first SQD protocol is overcome by the classical party's classical-basis measurements on the single photons carrying messages, which let him share their initial states with the quantum party. In the second SQD protocol, security against active attacks from Eve is guaranteed by the classical party's randomization operation, the complete robustness of the present SQKD protocol, and the classical one-time pad encryption. The information leakage problem of the second SQD protocol is overcome by the quantum party's classical-basis measurements on each two adjacent single photons carrying messages, which let her share their initial states with the classical party. Compared with traditional information-leakage-resistant QD protocols, the advantage of the proposed SQD protocols is that they require only one party to have quantum capabilities. Compared with the existing SQD protocol, their advantage is that they employ only single photons, rather than two-photon entangled states, as the quantum carriers. The proposed SQD protocols can be implemented with present quantum technologies.

  17. Three-dimensional ordered particulate structures: Method to retrieve characteristics from photonic band gap data

    NASA Astrophysics Data System (ADS)

    Miskevich, Alexander A.; Loiko, Valery A.

    2015-01-01

    A method to retrieve characteristics of ordered particulate structures, such as photonic crystals, is proposed. It is based on the solution of the inverse problem using data on the photonic band gap (PBG). The quasicrystalline approximation (QCA) of the theory of multiple scattering of waves and the transfer matrix method (TMM) are used. Retrieval of the refractive index of particles is demonstrated. Refractive indices of the artificial opal particles are estimated using the published experimental data.

  18. Rotating Modulation Imager for the Orphan Source Search Problem

    DTIC Science & Technology

    2008-01-01

    black mask. If the photon hits an open element it is transmitted and the function M(x) = 1. If the photon hits a closed mask element it is not...photon enters the top mask pair in the third slit, but passes through the second slit on the bottom mask. With a single black mask this is physically...modulation efficiency changes as a function of mask thickness for both tungsten and lead masks. The black line shows how the field of view changes with

  19. Study of silicon strip waveguides with diffraction gratings and photonic crystals tuned to a wavelength of 1.5 µm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barabanenkov, M. Yu., E-mail: barab@iptm.ru; Vyatkin, A. F.; Volkov, V. T.

    2015-12-15

    Single-mode submicrometer-thick strip waveguides on silicon-on-insulator substrates, fabricated by silicon planar technology, are considered. To solve the problem of 1.5-µm wavelength radiation input-output and its frequency filtering, strip diffraction gratings and two-dimensional photonic crystals are integrated into the waveguides. The reflection and transmission spectra of the gratings and photonic crystals are calculated. The waveguide-mode attenuation coefficient for a polycrystalline silicon waveguide is estimated experimentally.

  20. Quantum-classical transition of photon-Carnot engine induced by quantum decoherence

    NASA Astrophysics Data System (ADS)

    Quan, H. T.; Zhang, P.; Sun, C. P.

    2006-03-01

    We study the physical implementation of the photon-Carnot engine (PCE) based on the cavity quantum electrodynamics system [M. O. Scully, M. Suhail Zubairy, G. S. Agarwal, and H. Walther, Science 299, 862 (2003)]. Here we analyze two decoherence mechanisms for more practical realizations of the PCE: the dissipation of the photon field and the pure dephasing of the input atoms. We find that (i) the PCE can work well to some extent even in the presence of cavity loss (photon dissipation), and (ii) short-time atomic dephasing, which can destroy the PCE, is a fatal problem to be overcome.

  1. Control of photon storage time using phase locking.

    PubMed

    Ham, Byoung S

    2010-01-18

    A photon echo storage-time extension protocol is presented that uses a phase locking method in a three-level backward propagation scheme, where phase locking serves as a conditional stopper of the rephasing process in conventional two-pulse photon echoes. The backward propagation scheme solves the critical problems of extremely low retrieval efficiency and of spontaneous emission noise caused by the π rephasing pulse in photon-echo-based quantum memories. The physics of the storage-time extension lies in the imminent population transfer from the excited state to an auxiliary spin state by a phase locking control pulse. We numerically demonstrate that the storage time is lengthened by the spin dephasing time.

  2. Single-photon decision maker

    NASA Astrophysics Data System (ADS)

    Naruse, Makoto; Berthel, Martin; Drezet, Aurélien; Huant, Serge; Aono, Masashi; Hori, Hirokazu; Kim, Song-Ju

    2015-08-01

    Decision making is critical in our daily lives and for society in general, and is finding ever more practical applications in information and communication technologies. Here, we demonstrate experimentally that single photons can be used to make decisions in uncertain, dynamically changing environments. Using a nitrogen-vacancy center in a nanodiamond as a single-photon source, we demonstrate this decision-making capability by solving the multi-armed bandit problem. The capability is directly and immediately associated with single-photon detection in the proposed architecture, leading to adequate and adaptive autonomous decision making. This study makes it possible to create systems that benefit from the quantum nature of light to perform practical and vital intelligent functions.
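
    A toy rendition of the decision principle, loosely inspired by the described architecture: the detection port of a single photon at a polarizing element selects the bandit arm, and the polarization angle is steered by the rewards. The reward probabilities and the update rule below are assumptions for illustration only.

        import math, random

        p_win = [0.7, 0.4]        # hidden reward probabilities of the two arms
        theta = math.pi / 4       # polarizer angle; cos^2(theta) = P(choose arm 0)
        lr = 0.02                 # assumed learning rate

        for trial in range(2000):
            # "photon detection": port H -> arm 0, port V -> arm 1
            arm = 0 if random.random() < math.cos(theta) ** 2 else 1
            reward = random.random() < p_win[arm]
            # steer the polarization toward the rewarded arm, away otherwise
            step = lr if reward else -lr
            theta += -step if arm == 0 else step
            theta = min(max(theta, 0.05), math.pi / 2 - 0.05)   # keep both arms reachable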

  3. Innately Split Model for Job-shop Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Ikeda, Kokolo; Kobayashi, Sigenobu

    The Job-shop Scheduling Problem (JSP) is one of the most difficult benchmark problems. GA approaches often fail to find the global optimum because of the deceptive UV-structure of JSPs. In this paper, we introduce a novel GA framework, the Innately Split Model (ISM), which prevents the UV-phenomenon, and discuss its strengths in particular. We then analyze the structure of JSPs with the help of the UV-structure hypothesis, and finally show ISM's excellent performance on JSPs.

  4. BIGHORN Computational Fluid Dynamics Theory, Methodology, and Code Verification & Validation Benchmark Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Yidong; Andrs, David; Martineau, Richard Charles

    This document presents the theoretical background for a hybrid finite-element / finite-volume fluid flow solver, namely BIGHORN, based on the Multiphysics Object Oriented Simulation Environment (MOOSE) computational framework developed at the Idaho National Laboratory (INL). An overview of the numerical methods used in BIGHORN is given, followed by a presentation of the formulation details. The document begins with the governing equations for compressible fluid flow and an outline of the requisite constitutive relations. A second-order finite volume method used for solving compressible fluid flow problems is presented next, along with a Pressure-Corrected Implicit Continuous-fluid Eulerian (PCICE) formulation for time integration. The multi-fluid formulation is still under development, but BIGHORN has been designed to handle multi-fluid problems. Owing to the flexibility of the underlying MOOSE framework, BIGHORN is quite extensible and can accommodate both multi-species and multi-phase formulations. This document also presents a suite of verification & validation benchmark test problems for BIGHORN. The intent of this suite is to provide baseline comparison data that demonstrate the performance of the BIGHORN solution methods on problems varying in complexity from laminar to turbulent flows. Wherever possible, some form of solution verification has been attempted to identify sensitivities in the solution methods and to suggest best practices when using BIGHORN.

  5. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.; Kornreich, D.E.

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation, as modified in the second year renewal application, includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) the single medium searchlight problem (SLP) and (b) the two-adjacent half-space SLP. Task 2 on three-dimensional neutron transport covers (a) a point source in arbitrary geometry, (b) the single medium SLP, and (c) the two-adjacent half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  6. Assessment of the monitoring and evaluation system for integrated community case management (ICCM) in Ethiopia: a comparison against global benchmark indicators.

    PubMed

    Mamo, Dereje; Hazel, Elizabeth; Lemma, Israel; Guenther, Tanya; Bekele, Abeba; Demeke, Berhanu

    2014-10-01

    Program managers require feasible, timely, reliable, and valid measures of iCCM implementation to identify problems and assess progress. The global iCCM Task Force developed benchmark indicators to guide implementers in developing or improving monitoring and evaluation (M&E) systems. This study assesses Ethiopia's iCCM M&E system by determining the availability and feasibility of the iCCM benchmark indicators. We conducted a desk review of iCCM policy documents, monitoring tools, survey reports, and other relevant documents, and key informant interviews with government and implementing partners involved in iCCM scale-up and M&E. Currently, Ethiopia collects data to inform most (70% [33/47]) iCCM benchmark indicators, and modest extra effort could boost this to 83% (39/47). Eight (17%) are not available given the current system. Most benchmark indicators that track coordination and policy, human resources, service delivery and referral, supervision, and quality assurance are available through the routine monitoring systems or periodic surveys. Indicators for supply chain management are less available due to limited consumption data and a weak link with treatment data. Little information is available on iCCM costs. Benchmark indicators can detail the status of iCCM implementation; however, some indicators may not fit country priorities, and others may be difficult to collect. The government of Ethiopia and partners should review and prioritize the benchmark indicators to determine which should be included in the routine M&E system, especially since iCCM data are being reviewed for addition to the HMIS. Moreover, the Health Extension Workers' reporting burden can be minimized by an integrated reporting approach.

  7. Benchmarking in pathology: development of an activity-based costing model.

    PubMed

    Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John

    2012-12-01

    Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. The development of methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping, have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at any and all of the 'cost per test' and 'cost per Benchmarking Complexity Unit' level, 'discipline/department' (sub-specialty) level, or overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test-mix and diagnostic complexity between laboratories. Use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.

  8. Characteristic of EBT-XD and EBT3 radiochromic film dosimetry for photon and proton beams

    NASA Astrophysics Data System (ADS)

    Khachonkham, Suphalak; Dreindl, Ralf; Heilemann, Gerd; Lechner, Wolfgang; Fuchs, Hermann; Palmans, Hugo; Georg, Dietmar; Kuess, Peter

    2018-03-01

    Recently, a new type of radiochromic film, the EBT-XD film, has been introduced for high dose radiotherapy. The EBT-XD film contains the same structure as the EBT3 film but has a slightly different composition and a thinner active layer. This study benchmarks the EBT-XD against EBT3 film for 6 MV and 10 MV photon beams, as well as for 97.4 MeV and 148.2 MeV proton beams and 15-100 kV x-rays. Dosimetric and film reading characteristics, such as post irradiation darkening, film orientation effect, lateral response artifact (LRA), film sensitivity, energy and beam quality dependency were investigated. Furthermore, quenching effects in the Bragg peak were investigated for a single proton beam energy for both film types, in addition measurements were performed in a spread-out Bragg peak. EBT-XD films showed the same characteristic on film darkening as EBT3. The effects between portrait and landscape orientation were reduced by 3.1% (in pixel value) for EBT-XD compared to EBT3 at a dose of 2000 cGy. The LRA is reduced for EBT-XD films for all investigated dose ranges. The sensitivity of EBT-XD films is superior to EBT3 for doses higher than 500 cGy. In addition, EBT-XD showed a similar dosimetric response for photon and proton irradiation with low energy and beam quality dependency. A quenching effect of 10% was found for both film types. The slight decrease in the thickness of the active layer and different composition configuration of EBT-XD resulted in a reduced film orientation effect and LRA, as well as a sensitivity increase in high-dose regions for both photon and proton beams. Overall, the EBT-XD film improved regarding film reading characteristics and showed advantages in the high-dose region for photon and proton beams.

  9. Characteristic of EBT-XD and EBT3 radiochromic film dosimetry for photon and proton beams.

    PubMed

    Khachonkham, Suphalak; Dreindl, Ralf; Heilemann, Gerd; Lechner, Wolfgang; Fuchs, Hermann; Palmans, Hugo; Georg, Dietmar; Kuess, Peter

    2018-03-15

    Recently, a new type of radiochromic film, the EBT-XD film, has been introduced for high dose radiotherapy. The EBT-XD film contains the same structure as the EBT3 film but has a slightly different composition and a thinner active layer. This study benchmarks the EBT-XD against EBT3 film for 6 MV and 10 MV photon beams, as well as for 97.4 MeV and 148.2 MeV proton beams and 15-100 kV x-rays. Dosimetric and film reading characteristics, such as post irradiation darkening, film orientation effect, lateral response artifact (LRA), film sensitivity, energy and beam quality dependency were investigated. Furthermore, quenching effects in the Bragg peak were investigated for a single proton beam energy for both film types, in addition measurements were performed in a spread-out Bragg peak. EBT-XD films showed the same characteristic on film darkening as EBT3. The effects between portrait and landscape orientation were reduced by 3.1% (in pixel value) for EBT-XD compared to EBT3 at a dose of 2000 cGy. The LRA is reduced for EBT-XD films for all investigated dose ranges. The sensitivity of EBT-XD films is superior to EBT3 for doses higher than 500 cGy. In addition, EBT-XD showed a similar dosimetric response for photon and proton irradiation with low energy and beam quality dependency. A quenching effect of 10% was found for both film types. The slight decrease in the thickness of the active layer and different composition configuration of EBT-XD resulted in a reduced film orientation effect and LRA, as well as a sensitivity increase in high-dose regions for both photon and proton beams. Overall, the EBT-XD film improved regarding film reading characteristics and showed advantages in the high-dose region for photon and proton beams.

  10. Postfabrication Phase Error Correction of Silicon Photonic Circuits by Single Femtosecond Laser Pulses

    DOE PAGES

    Bachman, Daniel; Chen, Zhijiang; Wang, Christopher; ...

    2016-11-29

    Phase errors caused by fabrication variations in silicon photonic integrated circuits are an important problem, which negatively impacts device yield and performance. This study reports our recent progress in the development of a method for permanent, postfabrication phase error correction of silicon photonic circuits based on femtosecond laser irradiation. Using a beam shaping technique, we achieve a 14-fold enhancement in the phase tuning resolution of the method with a Gaussian-shaped beam compared to a top-hat beam. The large improvement in the tuning resolution makes the femtosecond laser method potentially useful for very fine phase trimming of silicon photonic circuits. Finally, we also show that femtosecond laser pulses can directly modify silicon photonic devices through a SiO2 cladding layer, making it the only permanent post-fabrication method that can tune silicon photonic circuits protected by an oxide cladding.

  11. Suppressing spectral diffusion of emitted photons with optical pulses

    DOE PAGES

    Fotso, H. F.; Feiguin, A. E.; Awschalom, D. D.; ...

    2016-01-22

    In many quantum architectures the solid-state qubits, such as quantum dots or color centers, are interfaced via emitted photons. However, the frequency of photons emitted by solid-state systems exhibits slow uncontrollable fluctuations over time (spectral diffusion), creating a serious problem for implementation of photon-mediated protocols. Here we show that a sequence of optical pulses applied to the solid-state emitter can stabilize the emission line at the desired frequency. We demonstrate the efficiency, robustness, and feasibility of the method analytically and numerically. Taking the nitrogen-vacancy center in diamond as an example, we show that only several pulses, with a width of 1 ns and separated by a few ns (which is not difficult to achieve), can suppress spectral diffusion. As a result, our method provides a simple and robust way to greatly improve the efficiency of photon-mediated entanglement and/or coupling to photonic cavities for solid-state qubits.

  12. A Benchmark Experiment for Photoionized Plasma Emission from Accretion-Powered X-ray Sources

    NASA Astrophysics Data System (ADS)

    Loisel, G.; Bailey, J.; Nagayama, T.; Hansen, S.; Rochau, G.; Liedahl, D.; Fontes, C.; Kallman, T.; Mancini, R.

    2017-10-01

    Accretion-powered emission from X-ray binaries or black-hole accretion in Active Galactic Nuclei is a powerful diagnostic of their behavior and structure. Interpretation of the x-ray emission from these objects requires a spectral synthesis model for photoionized plasma. Models must predict the photoionized charge state distribution, the photon emission processes, and the influence of radiation transport on the observed emission. At the Z facility, we have simultaneously measured emission and absorption from a photoionized silicon plasma suitable for benchmarking photoionization and spectrum formation models, with ±5% reproducibility and E/ΔE > 2500 spectral resolution. Plasma density, temperature, and charge state distribution are determined with absorption spectroscopy. Self-emission measured at adjustable column densities tests radiation transport effects. Observation of 14 transitions in He-like silicon will help in understanding population mechanisms in a photoionized plasma. The first observation of radiative recombination continuum in a photoionized plasma will be presented. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.

  13. Organic Dots Based on AIEgens for Two-Photon Fluorescence Bioimaging.

    PubMed

    Lou, Xiaoding; Zhao, Zujin; Tang, Ben Zhong

    2016-12-01

    Two-photon fluorescence imaging is a powerful bioanalytical technique owing to its high photostability, low photodamage, and high spatiotemporal resolution. Recently, fluorescent organic dots comprising organic emissive cores in a polymeric matrix have emerged as promising contrast reagents for two-photon fluorescence imaging, owing to their numerous merits: high and tunable fluorescence, good biocompatibility, strong photobleaching resistance, and versatile surface functionality. The emissive core is crucial for organic dots to attain high brightness, but many conventional chromophores suffer severe fluorescence quenching when they form aggregates. To solve this problem, fluorogens featuring aggregation-induced emission (AIE), which fluoresce strongly in aggregates, are ideal candidates for fluorescent organic dots. In addition, the two-photon absorption of the dots can be readily improved simply by increasing the loading content of the AIE fluorogen (AIEgen). Hence, organic dots based on AIEgens have exhibited excellent performance in two-photon fluorescence in vitro cellular imaging and in vivo visualization of the vascular architecture of mouse skin, muscle, brain, and skull bone. In view of the rapid advances in this important research field, we here highlight representative fluorescent organic dots with an emissive core of AIEgen aggregates and discuss their great potential in bioimaging applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, Kimberly A.

    2009-08-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.

  15. Application of a Java-based, univel geometry, neutral particle Monte Carlo code to the searchlight problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles A. Wemple; Joshua J. Cogliati

    2005-04-01

    A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous energy cross section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to other neutron MC codes written in C and FORTRAN.
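
    The four-tap, shift-register-sequence generator mentioned here is a generalized feedback shift register (GFSR), in which each new word is the XOR of four lagged words. A sketch with Ziff's commonly used lags, which the paper does not confirm for its Java implementation:

        import random

        class FourTapGFSR:
            """Four-tap XOR shift-register-sequence generator. The lags follow
            Ziff's R(471, 1586, 6988, 9689); the code described above may use
            different taps."""
            LAGS = (471, 1586, 6988, 9689)

            def __init__(self, seed=12345):
                boot = random.Random(seed)            # bootstrap the state table
                self.size = max(self.LAGS)
                self.state = [boot.getrandbits(32) for _ in range(self.size)]
                self.i = 0

            def next_u32(self):
                a, b, c, d = (self.state[(self.i - lag) % self.size] for lag in self.LAGS)
                x = a ^ b ^ c ^ d                     # read all taps before writing
                self.state[self.i % self.size] = x
                self.i += 1
                return x

            def random(self):
                return self.next_u32() / 2.0**32      # uniform float in [0, 1)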

  16. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on a randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, so it is more prone to converging on suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance to allow rapid convergence. The performance of SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and solution quality for both real-world problems and benchmark functions compared with the original algorithm and other recent variants of SGSA. PMID:26552032
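
    A compact sketch of the described mechanism: a GSA whose leader set is chosen at random and shrinks over the iterations, instead of being the k fittest agents. The constants, schedules, and bounds are assumptions, not the paper's settings.

        import numpy as np

        def sl_gsa(objective, dim, n_agents=30, iters=200, g0=100.0, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(-5.0, 5.0, (n_agents, dim))
            V = np.zeros_like(X)
            for t in range(iters):
                fit = np.apply_along_axis(objective, 1, X)
                best, worst = fit.min(), fit.max()
                m = (worst - fit) / (worst - best + 1e-12)     # best agent -> mass 1
                M = m / (m.sum() + 1e-12)
                G = g0 * np.exp(-20.0 * t / iters)             # decaying gravitational constant
                k = max(1, int(n_agents * (1.0 - t / iters)))  # leader set shrinks over time
                leaders = rng.choice(n_agents, size=k, replace=False)  # stochastic, not elite
                acc = np.zeros_like(X)
                for j in leaders:
                    diff = X[j] - X
                    dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12
                    acc += rng.random((n_agents, 1)) * G * M[j] * diff / dist
                V = rng.random((n_agents, 1)) * V + acc
                X = X + V
            fit = np.apply_along_axis(objective, 1, X)
            return X[fit.argmin()], fit.min()

        # Example on the sphere benchmark function:
        print(sl_gsa(lambda x: float(np.sum(x * x)), dim=10)[1])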

  17. A comparative study of upwind and MacCormack schemes for CAA benchmark problems

    NASA Technical Reports Server (NTRS)

    Viswanathan, K.; Sankar, L. N.

    1995-01-01

    In this study, upwind schemes and MacCormack schemes are evaluated as to their suitability for aeroacoustic applications. The governing equations are cast in a curvilinear coordinate system and discretized using finite volume concepts. A flux splitting procedure is used for the upwind schemes, where the signals crossing the cell faces are grouped into two categories: signals that bring information from outside into the cell, and signals that leave the cell. These signals may be computed in several ways, with the desired spatial and temporal accuracy achieved by choosing appropriate interpolating polynomials. The classical MacCormack schemes employed here are fourth order accurate in time and space. Results for categories 1, 4, and 6 of the workshop's benchmark problems are presented. Comparisons are also made with the exact solutions, where available. The main conclusions of this study are finally presented.
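
    The flux-splitting idea, taking each signal from the side it propagates from, is easiest to see in the first-order upwind scheme for linear advection; the workshop schemes are higher-order curvilinear finite-volume versions of the same principle.

        import numpy as np

        # First-order upwind for u_t + a u_x = 0 on a periodic grid.
        a, L, nx, cfl = 1.0, 1.0, 200, 0.8
        dx = L / nx
        dt = cfl * dx / abs(a)
        x = np.linspace(0.0, L, nx, endpoint=False)
        u = np.exp(-200.0 * (x - 0.3) ** 2)       # Gaussian pulse initial condition

        for _ in range(100):
            if a > 0:                             # information travels left to right
                u -= a * dt / dx * (u - np.roll(u, 1))
            else:                                 # information travels right to left
                u -= a * dt / dx * (np.roll(u, -1) - u)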

  18. FY16 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Shemon, E. R.; Smith, M. A.

    2016-09-30

    The goal of the NEAMS neutronics effort is to develop a neutronics toolkit for use on sodium-cooled fast reactors (SFRs) which can be extended to other reactor types. The neutronics toolkit includes the high-fidelity deterministic neutron transport code PROTEUS and many supporting tools such as the cross section generation code MC2-3, a cross section library generation code, alternative cross section generation tools, mesh generation and conversion utilities, and an automated regression test tool. The FY16 effort for NEAMS neutronics focused on supporting the release of the SHARP toolkit and existing and new users, continuing to develop PROTEUS functions necessary for performance improvement as well as the SHARP release, verifying PROTEUS against available existing benchmark problems, and developing new benchmark problems as needed. The FY16 research effort was focused on further updates of PROTEUS-SN and PROTEUS-MOCEX and cross section generation capabilities as needed.

  19. Heuristic methods for the single machine scheduling problem with different ready times and a common due date

    NASA Astrophysics Data System (ADS)

    Birgin, Ernesto G.; Ronconi, Débora P.

    2012-10-01

    The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
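
    As a concrete illustration of the objective these heuristics minimize, the sketch below evaluates one job sequence under stated simplifying assumptions: jobs start as early as possible, and inserted idle time, which a full method would optimize, is ignored. The function name and argument layout are hypothetical.

        def weighted_earliness_tardiness(sequence, ready, proc, alpha, beta, due):
            t, total = 0.0, 0.0
            for j in sequence:
                t = max(t, ready[j]) + proc[j]             # completion time of job j
                total += alpha[j] * max(due - t, 0.0)      # earliness penalty
                total += beta[j] * max(t - due, 0.0)       # tardiness penalty
            return total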

  20. Maturing CCD Photon-Counting Technology for Space Flight

    NASA Technical Reports Server (NTRS)

    Mallik, Udayan; Lyon, Richard; Petrone, Peter; McElwain, Michael; Benford, Dominic; Clampin, Mark; Hicks, Brian

    2015-01-01

    This paper discusses charge blooming and starlight saturation - two potential technical problems - when using an Electron Multiplying Charge Coupled Device (EMCCD) type detector in a high-contrast instrument for imaging exoplanets. These problems especially affect interferometric-type coronagraphs - coronagraphs that do not use a mask to physically block starlight in the science channel of the instrument. These problems are presented using images taken with a commercial Princeton Instruments EMCCD camera in the Goddard Space Flight Center's (GSFC) Interferometric Coronagraph facility. In addition, this paper discusses techniques to overcome such problems. This paper also discusses the development and architecture of a Field Programmable Gate Array and Digital-to-Analog Converter based shaped clock controller for a photon-counting EMCCD camera. The discussion contained here will inform high-contrast imaging groups in their work with EMCCD detectors.

  1. Analytic Approximations to the Free Boundary and Multi-dimensional Problems in Financial Derivatives Pricing

    NASA Astrophysics Data System (ADS)

    Lau, Chun Sing

    This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread options. Since the final formula is in closed form, all the hedging parameters can also be derived in closed form. Numerical examples demonstrate that the pricing and hedging errors are in general less than 1% relative to the benchmark prices obtained by numerical integration or Monte Carlo simulation. By exploiting an explicit relationship between the option price and the underlying probability distribution, we further derive an approximate distribution function for the general basket-spread variable. It can be used to approximate the transition probability distribution of any linear combination of correlated GBMs. Finally, an implicit perturbation is applied to reduce the pricing errors by factors of up to 100. When compared against the existing methods, the basket-spread option formula coupled with the implicit perturbation turns out to be one of the most robust and accurate approximation methods.
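
    As context for the basket-spread generalization, the Kirk (1995) two-asset formula that the thesis extends can be sketched as follows (European call on the spread F1 - F2 with strike K, written on forward prices). This is the standard textbook form, not the thesis's generalized formula.

        from math import exp, log, sqrt
        from statistics import NormalDist

        def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r):
            N = NormalDist().cdf
            w = F2 / (F2 + K)                        # weight of the short leg
            sig = sqrt(sigma1**2 - 2*rho*sigma1*sigma2*w + (sigma2*w)**2)
            d1 = (log(F1 / (F2 + K)) + 0.5 * sig**2 * T) / (sig * sqrt(T))
            d2 = d1 - sig * sqrt(T)
            return exp(-r * T) * (F1 * N(d1) - (F2 + K) * N(d2))

    Because the formula is closed-form, the Greeks follow by direct differentiation, which is what makes the closed-form basket-spread extension attractive for hedging.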

  2. Gluon and ghost correlation functions of 2-color QCD at finite density

    NASA Astrophysics Data System (ADS)

    Hajizadeh, Ouraman; Boz, Tamer; Maas, Axel; Skullerud, Jon-Ivar

    2018-03-01

    2-color QCD, i.e. QCD with the gauge group SU(2), is the simplest non-Abelian gauge theory without a sign problem at finite quark density. Therefore its study on the lattice is a benchmark for other non-perturbative approaches at finite density. To provide such benchmarks we determine the minimal-Landau-gauge 2-point and 3-gluon correlation functions of the gauge sector and the running gauge coupling at finite density. We observe no significant effects, except for some low-momentum screening of the gluons at and above the supposed high-density phase transition.

  3. Classical and modern control strategies for the deployment, reconfiguration, and station-keeping of the National Aeronautics and Space Administration (NASA) Benchmark Tetrahedron Constellation

    NASA Astrophysics Data System (ADS)

    Capo-Lugo, Pedro A.

    Formation flying consists of multiple spacecraft orbiting in a required configuration about a planet or through space. The National Aeronautics and Space Administration (NASA) Benchmark Tetrahedron Constellation is one of the proposed constellations to be launched in the year 2009 and provides the motivation for this investigation. The problem that will be researched here consists of three stages. The first stage contains the deployment of the satellites; the second stage is the reconfiguration process to transfer the satellites through different specific sizes of the NASA benchmark problem; and the third stage is the station-keeping procedure for the tetrahedron constellation. Every stage contains different control schemes and transfer procedures to obtain/maintain the proposed tetrahedron constellation. In the first stage, the deployment procedure will depend on a combination of two techniques in which impulsive maneuvers and a digital controller are used to deploy the satellites and to maintain the tetrahedron constellation at the following apogee point. The second stage, which corresponds to the reconfiguration procedure, shows a different control scheme in which intelligent control systems are implemented to perform this procedure. In this research work, intelligent systems will eliminate the use of complex mathematical models and will reduce the computational time to perform different maneuvers. Finally, the station-keeping process, which is the third stage of this research problem, will be implemented with a two-level hierarchical control scheme to maintain the separation distance constraints of the NASA Benchmark Tetrahedron Constellation. For this station-keeping procedure, the system of equations defining the dynamics of a pair of satellites is transformed to take into account the perturbation due to the oblateness of the Earth and the disturbances due to solar pressure. The control procedures used in this research will be transformed from a continuous control system to a digital control system, which simplifies implementation on the computer onboard the satellite. In addition, this research will include an introductory chapter on attitude dynamics that can be used to maintain the orientation of the satellites, and an adaptive intelligent control scheme will be proposed to maintain the desired orientation of the spacecraft. In conclusion, a solution for the dynamics of the NASA Benchmark Tetrahedron Constellation will be presented in this research work. The main contribution of this work is the use of discrete control schemes, impulsive maneuvers, and intelligent control schemes that reduce computational time so that these control schemes can be easily implemented in the computer onboard the satellite. These contributions are explained through the deployment, reconfiguration, and station-keeping process of the proposed NASA Benchmark Tetrahedron Constellation.

  4. Search for Dark Gauge Bosons Decaying into Displaced Lepton-Jets in Proton-Proton Collisions at √S = 13 TeV with the Atlas Detector

    NASA Astrophysics Data System (ADS)

    Diamond, Miriam

    The dark photon (A'), the gauge boson carrier of a hypothetical new force, has been proposed in a wide range of Beyond the Standard Model (BSM) theories, and could serve as our window to an entire dark sector. A massive A' could decay back to the Standard Model (SM) with a significant branching fraction, through kinetic mixing with the SM photon. If this A' can be produced from decays of a dark scalar that mixes with the SM Higgs boson, collider searches involving leptonic final states provide promising discovery prospects with rich phenomenology. This work presents the results of a search for dark photons in the mass range 0.2 ≤ mA' ≤ 10 GeV decaying into collimated jets of light leptons and mesons, so-called "lepton-jets". It employs 3.57 fb-1 of data from proton-proton collisions at a centre-of-mass energy of √s = 13 TeV, collected during 2015 with the ATLAS detector at the LHC. No deviations from SM expectations are observed. Limits on benchmark models predicting Higgs boson decays to A's are derived as a function of the A' lifetime; limits are also established in the parameter space of mA' vs. kinetic mixing parameter epsilon. These extend the limits obtained in a similar search previously performed during Run 1 of the LHC, to include dark photon masses 2 ≤ mA' ≤ 10 GeV and to cover higher epsilon values for 0.2 ≤ mA' ≤ 2 GeV, and are complementary to various other ATLAS A' searches. As data-taking continues at the LHC, the reach of lepton-jet analyses will continue to expand in model coverage and in parameter space.

  5. Quantum routing of single optical photons with a superconducting flux qubit

    NASA Astrophysics Data System (ADS)

    Xia, Keyu; Jelezko, Fedor; Twamley, Jason

    2018-05-01

    Interconnecting optical photons with superconducting circuits is a challenging problem but essential for building long-range superconducting quantum networks. We propose a hybrid quantum interface between the microwave and optical domains where the propagation of a single-photon pulse along a nanowaveguide is controlled in a coherent way by tuning the electromagnetically induced transparency window with the quantum state of a flux qubit mediated by the spin in a nanodiamond. The qubit can route a single-photon pulse using the nanodiamond into a quantum superposition of paths without the aid of an optical cavity—simplifying the setup. By preparing the flux qubit in a superposition state our cavityless scheme creates a hybrid state-path entanglement between a flying single optical photon and a static superconducting qubit.

  6. Quantum-enhanced multiparameter estimation in multiarm interferometers

    PubMed Central

    Ciampini, Mario A.; Spagnolo, Nicolò; Vitelli, Chiara; Pezzè, Luca; Smerzi, Augusto; Sciarrino, Fabio

    2016-01-01

    Quantum metrology is the state-of-the-art measurement technology. It uses quantum resources to enhance the sensitivity of phase estimation over that achievable by classical physics. While single parameter estimation theory has been widely investigated, much less is known about the simultaneous estimation of multiple phases, which finds key applications in imaging and sensing. In this manuscript we provide conditions of useful particle (qudit) entanglement for multiphase estimation and adapt them to multiarm Mach-Zehnder interferometry. We theoretically discuss benchmark multimode Fock states containing useful qudit entanglement and overcoming the sensitivity of separable qudit states in three and four arm Mach-Zehnder-like interferometers - currently within the reach of integrated photonics technology. PMID:27381743

  7. An introduction to mass cytometry: fundamentals and applications.

    PubMed

    Tanner, Scott D; Baranov, Vladimir I; Ornatsky, Olga I; Bandura, Dmitry R; George, Thaddeus C

    2013-05-01

    Mass cytometry addresses the analytical challenges of polychromatic flow cytometry by using metal atoms as tags rather than fluorophores and atomic mass spectrometry as the detector rather than photon optics. The many available enriched stable isotopes of the transition elements can provide up to 100 distinguishable reporting tags, which can be measured simultaneously because of the essential independence of detection provided by the mass spectrometer. We discuss the adaptation of traditional inductively coupled plasma mass spectrometry to cytometry applications. We focus on the generation of cytometry-compatible data and on approaches to unsupervised multivariate clustering analysis. Finally, we provide a high-level review of some recent benchmark reports that highlight the potential for massively multi-parameter mass cytometry.

  8. SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output

    PubMed Central

    Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.

    2011-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297

  9. SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output†

    PubMed Central

    Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.

    2013-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136

  10. Searching for long-lived particles: A compact detector for exotics at LHCb

    NASA Astrophysics Data System (ADS)

    Gligorov, Vladimir V.; Knapen, Simon; Papucci, Michele; Robinson, Dean J.

    2018-01-01

    We advocate for the construction of a new detector element at the LHCb experiment, designed to search for displaced decays of beyond Standard Model long-lived particles, taking advantage of a large shielded space in the LHCb cavern that is expected to soon become available. We discuss the general features and putative capabilities of such an experiment, as well as its various advantages and complementarities with respect to the existing LHC experiments and proposals such as SHiP and MATHUSLA. For two well-motivated beyond Standard Model benchmark scenarios—Higgs decay to dark photons and B meson decays via a Higgs mixing portal—the reach either complements or exceeds that predicted for other LHC experiments.

  11. Efficient numerical method for analyzing optical bistability in photonic crystal microcavities.

    PubMed

    Yuan, Lijun; Lu, Ya Yan

    2013-05-20

    Nonlinear optical effects can be enhanced by photonic crystal microcavities and be used to develop practical ultra-compact optical devices with low power requirements. The finite-difference time-domain method is the standard numerical method for simulating nonlinear optical devices, but it has limitations in terms of accuracy and efficiency. In this paper, a rigorous and efficient frequency-domain numerical method is developed for analyzing nonlinear optical devices where the nonlinear effect is concentrated in the microcavities. The method replaces the linear problem outside the microcavities by a rigorous and numerically computed boundary condition, then solves the nonlinear problem iteratively in a small region around the microcavities. Convergence of the iterative method is much easier to achieve since the size of the problem is significantly reduced. The method is presented for a specific two-dimensional photonic crystal waveguide-cavity system with a Kerr nonlinearity, using numerical methods that can take advantage of the geometric features of the structure. The method is able to calculate multiple solutions exhibiting the optical bistability phenomenon in the strongly nonlinear regime.

  12. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    NASA Astrophysics Data System (ADS)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.
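
    The underlying dynamics are simple enough to state compactly. A minimal sketch of the phosphorus state equation, assuming the commonly used parameter values b = 0.42 and q = 2 and an illustrative lognormal natural inflow (the exact stochastic inflow parameters vary across studies):

        import numpy as np

        def lake_trajectory(policy, b=0.42, q=2.0, rng=None):
            # X[t+1] = X[t] + a[t] + X[t]**q / (1 + X[t]**q) - b*X[t] + inflow[t]
            rng = rng or np.random.default_rng()
            X = np.zeros(len(policy) + 1)
            for t, a in enumerate(policy):
                inflow = rng.lognormal(mean=np.log(0.02), sigma=0.1)  # assumed
                X[t + 1] = X[t] + a + X[t]**q / (1 + X[t]**q) - b * X[t] + inflow
            return X

    The self-recycling term X**q / (1 + X**q) is what creates the nonlinear, potentially irreversible threshold: once X grows large, recycling can outpace the natural removal rate b.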

  13. BODYFIT-1FE: a computer code for three-dimensional steady-state/transient single-phase rod-bundle thermal-hydraulic analysis. Draft report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, B.C.J.; Sha, W.T.; Doria, M.L.

    1980-11-01

    The governing equations, i.e., conservation equations for mass, momentum, and energy, are solved as a boundary-value problem in space and an initial-value problem in time. The BODYFIT-1FE code uses the technique of boundary-fitted coordinate systems, where all the physical boundaries are transformed to be coincident with constant coordinate lines in the transformed space. By using this technique, one can prescribe boundary conditions accurately without interpolation. The transformed governing equations in terms of the boundary-fitted coordinates are then solved using an implicit cell-by-cell procedure with a choice of either central or upwind convective derivatives. It is a true benchmark rod-bundle code without invoking any assumptions in the case of laminar flow. However, for turbulent flow, some empiricism must be employed due to the closure problem of turbulence modeling. The detailed velocity and temperature distributions calculated from the code can be used to benchmark and calibrate empirical coefficients employed in subchannel codes and porous-medium analyses.

  14. Fuzzy Kernel k-Medoids algorithm for anomaly detection problems

    NASA Astrophysics Data System (ADS)

    Rustam, Z.; Talita, A. S.

    2017-07-01

    An Intrusion Detection System (IDS) is an essential part of security systems, strengthening the security of information systems. An IDS can be used to detect abuse by intruders who try to get into the network system in order to access and utilize the available data sources in the system. There are two approaches to IDS: Misuse Detection and Anomaly Detection (behavior-based intrusion detection). Fuzzy clustering-based methods have been widely used to solve Anomaly Detection problems. Besides using the fuzzy membership concept to assign an object to a cluster, other approaches, such as combining fuzzy and possibilistic memberships or feature-weighted methods, are also used. We propose Fuzzy Kernel k-Medoids, which combines fuzzy and possibilistic memberships, as a powerful method to solve the anomaly detection problem, since in numerical experiments it is able to classify IDS benchmark data into five different classes simultaneously. We classify the KDDCup'99 IDS benchmark data set into five different classes simultaneously; the best performance was achieved using 30% of the training data, with clustering accuracy reaching 90.28%.
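
    A minimal sketch of the family of methods involved, assuming an RBF kernel and the usual fuzzifier m > 1; the exact update rules and the possibilistic term of the proposed method are not reproduced here.

        import numpy as np

        def fuzzy_kernel_kmedoids(X, c=5, m=2.0, gamma=0.5, iters=20, rng=None):
            rng = rng or np.random.default_rng()
            n = len(X)
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            K = np.exp(-gamma * d2)                          # RBF Gram matrix
            kxx = np.diag(K)
            # Squared distances in the kernel-induced feature space.
            D = kxx[:, None] - 2 * K + kxx[None, :] + 1e-12
            medoids = rng.choice(n, size=c, replace=False)
            for _ in range(iters):
                inv = D[:, medoids] ** (-1.0 / (m - 1.0))
                u = inv / inv.sum(axis=1, keepdims=True)     # fuzzy memberships
                for j in range(c):                           # re-pick each medoid
                    medoids[j] = int(np.argmin((u[:, j] ** m) @ D))
            return medoids, u

    Restricting cluster centres to data points (medoids) is what lets the whole update run on the Gram matrix alone, with no explicit feature-space coordinates.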

  15. Optimally stopped variational quantum algorithms

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for quadratic unconstrained binary optimization (QUBO) problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of VQA and even improve its scaling properties.

  16. An Integrated Development Environment for Adiabatic Quantum Programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Bennink, Ryan S

    2014-01-01

    Adiabatic quantum computing is a promising route to the computational power afforded by quantum information processing. The recent availability of adiabatic hardware raises the question of how well quantum programs perform. Benchmarking behavior is challenging since the multiple steps to synthesize an adiabatic quantum program are highly tunable. We present an adiabatic quantum programming environment called JADE that provides control over all the steps taken during program development. JADE captures the workflow needed to rigorously benchmark performance while also allowing a variety of problem types, programming techniques, and processor configurations. We have also integrated JADE with a quantum simulation engine that enables program profiling using numerical calculation. The computational engine supports plug-ins for simulation methodologies tailored to various metrics and computing resources. We present the design, integration, and deployment of JADE and discuss its use for benchmarking adiabatic quantum programs.

  17. Benchmarks for single-phase flow in fractured porous media

    NASA Astrophysics Data System (ADS)

    Flemisch, Bernd; Berre, Inga; Boon, Wietse; Fumagalli, Alessio; Schwenck, Nicolas; Scotti, Anna; Stefansson, Ivar; Tatomir, Alexandru

    2018-01-01

    This paper presents several test cases intended to be benchmarks for numerical schemes for single-phase fluid flow in fractured porous media. A number of solution strategies are compared, including a vertex and two cell-centred finite volume methods, a non-conforming embedded discrete fracture model, a primal and a dual extended finite element formulation, and a mortar discrete fracture model. The proposed benchmarks test the schemes by increasing the difficulties in terms of network geometry, e.g. intersecting fractures, and physical parameters, e.g. low and high fracture-matrix permeability ratio as well as heterogeneous fracture permeabilities. For each problem, the results presented are the number of unknowns, the approximation errors in the porous matrix and in the fractures with respect to a reference solution, and the sparsity and condition number of the discretized linear system. All data and meshes used in this study are publicly available for further comparisons.

  18. BIOREL: the benchmark resource to estimate the relevance of the gene networks.

    PubMed

    Antonov, Alexey V; Mewes, Hans W

    2006-02-06

    The progress of high-throughput methodologies in functional genomics has led to the development of statistical procedures to infer gene networks from various types of high-throughput data. However, due to the lack of common standards, the biological significance of the results of the different studies is hard to compare. To overcome this problem we propose a benchmark procedure and have developed a web resource (BIOREL), which is useful for estimating the biological relevance of any genetic network by integrating different sources of biological information. The associations of each gene from the network are classified as biologically relevant or not. The proportion of genes in the network classified as "relevant" is used as the overall network relevance score. Employing synthetic data we demonstrated that such a score ranks networks fairly with respect to their relevance level. Using BIOREL as the benchmark resource we compared the quality of experimental and theoretically predicted protein interaction data.

  19. A note on bound constraints handling for the IEEE CEC'05 benchmark function suite.

    PubMed

    Liao, Tianjun; Molina, Daniel; de Oca, Marco A Montes; Stützle, Thomas

    2014-01-01

    The benchmark functions and some of the algorithms proposed for the special session on real parameter optimization of the 2005 IEEE Congress on Evolutionary Computation (CEC'05) have played and still play an important role in the assessment of the state of the art in continuous optimization. In this article, we show that if bound constraints are not enforced for the final reported solutions, state-of-the-art algorithms produce infeasible best candidate solutions for the majority of functions of the IEEE CEC'05 benchmark function suite. This occurs even though the optima of the CEC'05 functions are within the specified bounds. This phenomenon has important implications on algorithm comparisons, and therefore on algorithm designs. This article's goal is to draw the attention of the community to the fact that some authors might have drawn wrong conclusions from experiments using the CEC'05 problems.
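
    The fix the authors argue for is mechanically trivial; a sketch of the two operations a benchmarking harness needs, assuming box constraints given as arrays:

        import numpy as np

        def is_feasible(x, lower, upper):
            # True only if every coordinate respects the search domain.
            return bool(np.all(x >= lower) and np.all(x <= upper))

        def enforce_bounds(x, lower, upper):
            # Clamp a candidate before reporting it as a final solution.
            return np.clip(x, lower, upper)

    The subtlety the note highlights is not the clamping itself but that reported best solutions must pass is_feasible before any cross-algorithm comparison is meaningful.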

  20. Numerical methods for the inverse problem of density functional theory

    DOE PAGES

    Jensen, Daniel S.; Wasserman, Adam

    2017-07-17

    Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.

  2. Self-growing neural network architecture using crisp and fuzzy entropy

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.

    1992-01-01

    The paper briefly describes the self-growing neural network algorithm, CID2, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. The results of a real-life recognition problem of distinguishing defects in a glass ribbon and of a benchmark problem of differentiating two spirals are shown and discussed.

  3. Information Based Numerical Practice.

    DTIC Science & Technology

    1987-02-01

    characterization by comparative computational studies of various benchmark problems; see, e.g., [MacNeal, Harder (1985)] and [Robinson, Blackham (1981)]. ... For nonadaptive methods, the simplest example studied in detail in the literature is the problem of the optimal quadrature formula. ... For such formulae and the functional-analytic prerequisites for the study of optimal formulae, we refer to the large monograph (808 pp.) of [Sobolev (1974)].

  4. A stable partitioned FSI algorithm for incompressible flow and deforming beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L., E-mail: lil19@rpi.edu; Henshaw, W.D., E-mail: henshw@rpi.edu; Banks, J.W., E-mail: banksj3@rpi.edu

    2016-05-01

    An added-mass partitioned (AMP) algorithm is described for solving fluid–structure interaction (FSI) problems coupling incompressible flows with thin elastic structures undergoing finite deformations. The new AMP scheme is fully second-order accurate and stable, without sub-time-step iterations, even for very light structures when added-mass effects are strong. The fluid, governed by the incompressible Navier–Stokes equations, is solved in velocity-pressure form using a fractional-step method; large deformations are treated with a mixed Eulerian-Lagrangian approach on deforming composite grids. The motion of the thin structure is governed by a generalized Euler–Bernoulli beam model, and these equations are solved in a Lagrangian frame using two approaches, one based on finite differences and the other on finite elements. The key AMP interface condition is a generalized Robin (mixed) condition on the fluid pressure. This condition, which is derived at a continuous level, has no adjustable parameters and is applied at the discrete level to couple the partitioned domain solvers. Special treatment of the AMP condition is required to couple the finite-element beam solver with the finite-difference-based fluid solver, and two coupling approaches are described. A normal-mode stability analysis is performed for a linearized model problem involving a beam separating two fluid domains, and it is shown that the AMP scheme is stable independent of the ratio of the mass of the fluid to that of the structure. A traditional partitioned (TP) scheme using a Dirichlet–Neumann coupling for the same model problem is shown to be unconditionally unstable if the added mass of the fluid is too large. A series of benchmark problems of increasing complexity are considered to illustrate the behavior of the AMP algorithm, and to compare the behavior with that of the TP scheme. The results of all these benchmark problems verify the stability and accuracy of the AMP scheme. Results for one benchmark problem modeling blood flow in a deforming artery are also compared with corresponding results available in the literature.

  5. Validation and Performance Comparison of Numerical Codes for Tsunami Inundation

    NASA Astrophysics Data System (ADS)

    Velioglu, D.; Kian, R.; Yalciner, A. C.; Zaytsev, A.

    2015-12-01

    In inundation zones, tsunami motion turns from wave motion to flow of water. Modelling of this phenomenon is a complex problem since there are many parameters affecting the tsunami flow. In this respect, the performance of numerical codes that analyze tsunami inundation patterns becomes important. The computation of water surface elevation is not sufficient for proper analysis of tsunami behaviour in shallow water zones and on land, and hence for the development of mitigation strategies. Velocity and velocity patterns are also crucial parameters and have to be computed at the highest accuracy. There are numerous numerical codes available for simulating tsunami inundation. In this study, the FLOW 3D and NAMI DANCE codes are selected for validation and performance comparison. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations, and is used specifically for flood problems. NAMI DANCE uses a finite difference computational method to solve linear and nonlinear forms of the shallow water equations (NSWE) in long wave problems, specifically tsunamis. In this study, these codes are validated and their performances are compared using two benchmark problems which were discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA. One of the problems is an experiment of a single long-period wave propagating up a piecewise linear slope and onto a small-scale model of the town of Seaside, Oregon. The other benchmark problem is an experiment of a single solitary wave propagating up a triangular-shaped shelf with an island feature located at the offshore point of the shelf. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. All results are presented with discussions and comparisons. The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement No 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe).

  6. Classification and assessment tools for structural motif discovery algorithms.

    PubMed

    Badr, Ghada; Al-Turaiki, Isra; Mathkour, Hassan

    2013-01-01

    Motif discovery is the problem of finding recurring patterns in biological data. Patterns can be sequential, mainly when discovered in DNA sequences. They can also be structural (e.g. when discovering RNA motifs). Finding common structural patterns helps to gain a better understanding of the mechanism of action (e.g. post-transcriptional regulation). Unlike DNA motifs, which are sequentially conserved, RNA motifs exhibit conservation in structure, which may be common even if the sequences are different. Over the past few years, hundreds of algorithms have been developed to solve the sequential motif discovery problem, while less work has been done for the structural case. In this paper, we survey, classify, and compare different algorithms that solve the structural motif discovery problem, where the underlying sequences may be different. We highlight their strengths and weaknesses. We start by proposing a benchmark dataset and a measurement tool that can be used to evaluate different motif discovery approaches. Then, we proceed by proposing our experimental setup. Finally, results are obtained using the proposed benchmark to compare available tools. To the best of our knowledge, this is the first attempt to compare tools solely designed for structural motif discovery. Results show that the accuracy of discovered motifs is relatively low. The results also suggest a complementary behavior among tools where some tools perform well on simple structures, while other tools are better for complex structures. We have classified and evaluated the performance of available structural motif discovery tools. In addition, we have proposed a benchmark dataset with tools that can be used to evaluate newly developed tools.

  7. Comparison of the CENTRM resonance processor to the NITAWL resonance processor in SCALE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollenbach, D.F.; Petrie, L.M.

    1998-01-01

    This report compares the NITAWL and CENTRM resonance processors in the SCALE code system. The cases examined consist of the International OECD/NEA Criticality Working Group Benchmark 20 problem. These cases represent fuel pellets partially dissolved in a borated solution. The assumptions inherent in the Nordheim Integral Treatment, used in NITAWL, are not valid for these problems. CENTRM resolves this limitation by explicitly calculating a problem-dependent point flux from point cross sections, which is then used to create group cross sections.

  8. An Assembled Nanocomplex for Improving both Therapeutic Efficiency and Treatment Depth in Photodynamic Therapy.

    PubMed

    Cao, Hongqian; Wang, Lei; Yang, Yang; Li, Juan; Qi, Yanfei; Li, Yue; Li, Ying; Wang, Hao; Li, Junbai

    2018-06-25

    Photodynamic therapy (PDT) shows unique selectivity and irreversible destruction toward treated tissues or cells, but still has several problems in clinical practice. One is limited therapeutic efficiency, which is attributed to hypoxia at tumor sites. Another is limited treatment depth, because traditional photosensitizers are excited by short-wavelength light (<700 nm). An assembled nanocomplex system composed of an oxygen donor, a two-photon absorption (TPA) species, and a photosensitizer (PS) was synthesized to address both problems. The photosensitizer is excited indirectly by a two-photon laser through an intraparticle FRET mechanism, improving treatment depth. The oxygen donor, hemoglobin, can supply extra oxygen to the tumor location through a targeting effect for enhanced PDT efficiency. The mechanism and PDT effect were verified through both in vitro and in vivo experiments. This simple system is promising for advancing two-photon PDT toward clinical applications.

  9. Generation of quantum entangled states in nonlinear plasmonic structures and metamaterials (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Poddubny, Alexander N.; Sukhorukov, Andrey A.

    2015-09-01

    The practical development of quantum plasmonic circuits incorporating non-classical interference [1] and sources of entangled states calls for a versatile quantum theoretical framework which can fully describe the generation and detection of entangled photons and plasmons. However, the majority of presently used theoretical approaches are limited to toy models that assume lossless, nondispersive elements or include just a few resonant modes. Here, we present a rigorous Green function approach describing entangled photon-plasmon state generation through spontaneous wave mixing in realistic metal-dielectric nanostructures. Our approach is based on the local Huttner-Barnett quantization scheme [2], which enables problem formulation in terms of a Hermitian Hamiltonian where the losses and dispersion are fully encoded in the electromagnetic Green functions. Hence, the problem can be addressed by the standard quantum mechanical perturbation theory, overcoming mathematical difficulties associated with other quantization schemes. We derive explicit expressions with clear physical meaning for the spatially dependent two-photon detection probability, single-photon detection probability and single-photon density matrix. In the limiting case of low-loss nondispersive waveguides our approach reproduces the previous results [3,4]. Importantly, our technique is far more general and can quantitatively describe generation and detection of spatially-entangled photons in arbitrary metal-dielectric structures taking into account actual losses and dispersion. This is essential to perform the design and optimization of plasmonic structures for generation and control of quantum entangled states. [1] J.S. Fakonas, H. Lee, Y.A. Kelaita and H.A. Atwater, Nature Photonics 8, 317 (2014) [2] W. Vogel and D.-G. Welsch, Quantum Optics, Wiley (2006). [3] D.A. Antonosyan, A.S. Solntsev and A.A. Sukhorukov, Phys. Rev. A 90 043845 (2014) [4] L.-G. Helt, J.E. Sipe and M.J. Steel, arXiv: 1407.4219

  10. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    NASA Astrophysics Data System (ADS)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.

  11. Benchmarking B-Cell Epitope Prediction with Quantitative Dose-Response Data on Antipeptide Antibodies: Towards Novel Pharmaceutical Product Development

    PubMed Central

    Caoili, Salvador Eugenio C.

    2014-01-01

    B-cell epitope prediction can enable novel pharmaceutical product development. However, a mechanistically framed consensus has yet to emerge on benchmarking such prediction, thus presenting an opportunity to establish standards of practice that circumvent epistemic inconsistencies of casting the epitope prediction task as a binary-classification problem. As an alternative to conventional dichotomous qualitative benchmark data, quantitative dose-response data on antibody-mediated biological effects are more meaningful from an information-theoretic perspective in the sense that such effects may be expressed as probabilities (e.g., of functional inhibition by antibody) for which the Shannon information entropy (SIE) can be evaluated as a measure of informativeness. Accordingly, half-maximal biological effects (e.g., at median inhibitory concentrations of antibody) correspond to maximally informative data while undetectable and maximal biological effects correspond to minimally informative data. This applies to benchmarking B-cell epitope prediction for the design of peptide-based immunogens that elicit antipeptide antibodies with functionally relevant cross-reactivity. Presently, the Immune Epitope Database (IEDB) contains relatively few quantitative dose-response data on such cross-reactivity. Only a small fraction of these IEDB data is maximally informative, and many more of them are minimally informative (i.e., with zero SIE). Nevertheless, the numerous qualitative data in IEDB suggest how to overcome the paucity of informative benchmark data. PMID:24949474
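
    The informativeness measure itself is the binary Shannon entropy; a sketch, treating the biological effect as a probability p (e.g., fractional inhibition at a given antibody concentration):

        from math import log2

        def shannon_entropy(p, eps=1e-12):
            # Maximal (1 bit) at half-maximal effect p = 0.5;
            # zero as p -> 0 (undetectable) or p -> 1 (maximal effect).
            p = min(max(p, eps), 1.0 - eps)
            return -(p * log2(p) + (1.0 - p) * log2(1.0 - p))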

  12. Generalized parametric down conversion, many particle interferometry, and Bell's theorem

    NASA Technical Reports Server (NTRS)

    Choi, Hyung Sup

    1992-01-01

    A new field of multi-particle interferometry is introduced using nonlinear optical spontaneous parametric down conversion (SPDC) of a photon into more than two photons. A study of SPDC using a realistic multi-mode Hamiltonian shows that operation is possible at least in the low-conversion-rate limit. The down-converted field exhibits many stronger nonclassical phenomena than the usual two-photon parametric down conversion. An application of the multi-particle interferometry to a recently proposed many-particle Bell's theorem on the Einstein-Podolsky-Rosen problem is given.

  13. Nuclear Photonics

    NASA Astrophysics Data System (ADS)

    Nedorezov, V. G.; Savel'ev-Trofimov, A. B.

    2017-12-01

    A review of works performed at the Institute for Nuclear Research of the Russian Academy of Sciences and at the International Laser Center of the Moscow State University in the context of the new research area called "nuclear photonics" is presented. Nuclear photonics is based on creation of the new-generation gamma-ray sources which make it possible to solve a number of fundamental and applied problems, including research of low-energy photonuclear reactions, namely, investigation of collective excitations of nuclei near the threshold (pygmy resonances); nuclear safety assurance; production of low-energy positron beams; and phase-contrast X-ray imaging.

  14. Realistic loophole-free Bell test with atom-photon entanglement

    NASA Astrophysics Data System (ADS)

    Teo, C.; Araújo, M.; Quintino, M. T.; Minář, J.; Cavalcanti, D.; Scarani, V.; Terra Cunha, M.; França Santos, M.

    2013-07-01

    The establishment of nonlocal correlations, guaranteed through the violation of a Bell inequality, is not only important from a fundamental point of view but constitutes the basis for device-independent quantum information technologies. Although several nonlocality tests have been conducted so far, all of them suffered from either locality or detection loopholes. Among the proposals for overcoming these problems are the use of atom-photon entanglement and hybrid photonic measurements (for example, photodetection and homodyning). Recent studies have suggested that the use of atom-photon entanglement can lead to Bell inequality violations with moderate transmission and detection efficiencies. Here we combine these ideas and propose an experimental setup realizing a simple atom-photon entangled state that can be used to obtain nonlocality when considering realistic experimental parameters including detection efficiencies and losses due to required propagation distances.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachman, Daniel; Chen, Zhijiang; Wang, Christopher

    Phase errors caused by fabrication variations in silicon photonic integrated circuits are an important problem, which negatively impacts device yield and performance. This study reports our recent progress in the development of a method for permanent, postfabrication phase error correction of silicon photonic circuits based on femtosecond laser irradiation. Using a beam shaping technique, we achieve a 14-fold enhancement in the phase tuning resolution of the method with a Gaussian-shaped beam compared to a top-hat beam. The large improvement in the tuning resolution makes the femtosecond laser method potentially useful for very fine phase trimming of silicon photonic circuits. Finally, we also show that femtosecond laser pulses can directly modify silicon photonic devices through a SiO2 cladding layer, making it the only permanent post-fabrication method that can tune silicon photonic circuits protected by an oxide cladding.

  16. Comptonization of X-rays by low-temperature electrons. [photon wavelength redistribution in cosmic sources

    NASA Technical Reports Server (NTRS)

    Illarionov, A.; Kallman, T.; Mccray, R.; Ross, R.

    1979-01-01

    A method is described for calculating the spectrum that results from the Compton scattering of a monochromatic source of X-rays by low-temperature electrons, both for initial-value relaxation problems and for steady-state spatial diffusion problems. The method gives an exact solution of the initial-value problem for evolution of the spectrum in an infinite homogeneous medium if Klein-Nishina corrections to the Thomson cross section are neglected. This, together with approximate solutions for problems in which Klein-Nishina corrections are significant and/or spatial diffusion occurs, shows spectral structure near the original photon wavelength that may be used to infer physical conditions in cosmic X-ray sources. Explicit results, shown for examples of time relaxation in an infinite medium and spatial diffusion through a uniform sphere, are compared with results obtained by Monte Carlo calculations and by solving the appropriate Fokker-Planck equation.

  17. HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN

    EPA Science Inventory

    While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...

  18. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is one to two orders of magnitude faster than the HFS solver.
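
    The figure of merit can be sketched from empirical solver outputs: pick an acceptance threshold and charge a fixed cost per solver call in the same units as the objective. This is a simplified reading of the expected-cost measure, with hypothetical names, not the paper's exact estimator.

        import numpy as np

        def expected_cost(samples, threshold, cost_per_call):
            # 'Repeat until the returned value <= threshold': expected accepted
            # value plus cost_per_call times the expected number of calls (1/p).
            p = np.mean(samples <= threshold)
            if p == 0:
                return np.inf
            return samples[samples <= threshold].mean() + cost_per_call / p

        def best_stopping_threshold(samples, cost_per_call):
            grid = np.unique(samples)
            costs = [expected_cost(samples, t, cost_per_call) for t in grid]
            return grid[int(np.argmin(costs))], float(np.min(costs))

    Minimizing over the threshold balances solution quality against runtime, which is exactly the trade-off a raw time-to-solution or raw best-value metric misses.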

  19. An analysis of MCNP cross-sections and tally methods for low-energy photon emitters.

    PubMed

    Demarco, John J; Wallace, Robert E; Boedeker, Kirsten

    2002-04-21

    Monte Carlo calculations are frequently used to analyse a variety of radiological science applications using low-energy (10-1000 keV) photon sources. This study seeks to create a low-energy benchmark for the MCNP Monte Carlo code by simulating the absolute dose rate in water and the air-kerma rate for monoenergetic point sources with energies between 10 keV and 1 MeV. The analysis compares four cross-section datasets as well as the tally method for collision kerma versus absorbed dose. The total photon attenuation coefficient cross-section for low atomic number elements has changed significantly as cross-section data have changed between 1967 and 1989. Differences of up to 10% are observed in the photoelectric cross-section for water at 30 keV between the standard MCNP cross-section dataset (DLC-200) and the most recent XCOM/NIST tabulation. At 30 keV, the absolute dose rate in water at 1.0 cm from the source increases by 7.8% after replacing the DLC-200 photoelectric cross-sections for water with those from the XCOM/NIST tabulation. The differences in the absolute dose rate are analysed when calculated with either the MCNP absorbed dose tally or the collision kerma tally. Significant differences between the collision kerma tally and the absorbed dose tally can occur when using the DLC-200 attenuation coefficients in conjunction with a modern tabulation of mass energy-absorption coefficients.
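
    The distinction between the two tally types reduces to whether energy transfer or energy deposition is scored. A sketch of the collision-kerma estimate from a fluence tally, valid as a dose estimate only under charged-particle equilibrium (which is when the two tallies should agree); the function name and unit choices are illustrative.

        def collision_kerma_gray(fluence_per_cm2, energy_mev, mu_en_over_rho_cm2_per_g):
            # K = fluence * E * (mu_en/rho); 1 MeV/g = 1.602e-10 Gy.
            return fluence_per_cm2 * energy_mev * mu_en_over_rho_cm2_per_g * 1.602e-10

    This also makes the paper's point concrete: the kerma tally inherits its accuracy from the mass energy-absorption coefficients, while the absorbed-dose tally inherits it from the transport cross sections, so mismatched tabulations show up as tally disagreements.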

  20. Pairwise measures of causal direction in the epidemiology of sleep problems and depression.

    PubMed

    Rosenström, Tom; Jokela, Markus; Puttonen, Sampsa; Hintsanen, Mirka; Pulkki-Råback, Laura; Viikari, Jorma S; Raitakari, Olli T; Keltikangas-Järvinen, Liisa

    2012-01-01

    Depressive mood is often preceded by sleep problems, suggesting that they increase the risk of depression. Sleep problems can also reflect a prodromal symptom of depression, thus temporal precedence alone is insufficient to confirm causality. The authors applied recently introduced statistical causal-discovery algorithms that can estimate causality from cross-sectional samples in order to infer the direction of causality between the two sets of symptoms from a novel perspective. Two common-population samples were used; one from the Young Finns study (690 men and 997 women, average age 37.7 years, range 30-45), and another from the Wisconsin Longitudinal study (3101 men and 3539 women, average age 53.1 years, range 52-55). These included three depression questionnaires (two in the Young Finns data) and two sleep problem questionnaires. Three different causality estimates were constructed for each data set, tested on benchmark data with (practically) known causality, and tested for assumption violations using simulated data. The causality algorithms performed well on the benchmark data and in simulations, and a prediction was drawn for future empirical studies to confirm: for minor depression/dysphoria, sleep problems cause significantly more dysphoria than dysphoria causes sleep problems. The situation may change as depression becomes more severe or as more severe levels of symptoms are evaluated; artefacts arising because severe depression is less well represented than minor depression in population data may also interfere with estimation for depression scales that emphasize severe symptoms. The findings are consistent with other emerging epidemiological and biological evidence.
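
    A hedged sketch of the kind of pairwise measure such cross-sectional causal-discovery algorithms use, here in the spirit of the Hyvärinen-Smith likelihood-ratio measure for linear non-Gaussian models; the paper's actual estimators may differ.

        import numpy as np

        def pairwise_direction(x, y):
            # Standardize, then compare nonlinear cross-correlations; under
            # linear non-Gaussian assumptions, R > 0 suggests x -> y.
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            rho = np.mean(x * y)
            R = rho * np.mean(x * np.tanh(y) - np.tanh(x) * y)
            return "x -> y" if R > 0 else "y -> x"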

  1. Least-squares Legendre spectral element solutions to sound propagation problems.

    PubMed

    Lin, W H

    2001-02-01

    This paper presents a novel algorithm and numerical results for sound wave propagation. The method is based on a least-squares Legendre spectral element approach for spatial discretization and the Crank-Nicolson [Proc. Cambridge Philos. Soc. 43, 50-67 (1947)] and Adams-Bashforth [D. Gottlieb and S. A. Orszag, Numerical Analysis of Spectral Methods: Theory and Applications (CBMS-NSF Monograph, SIAM, 1977)] schemes for temporal discretization to solve the linearized acoustic field equations for sound propagation. Two types of NASA Computational Aeroacoustics (CAA) Workshop benchmark problems [ICASE/LaRC Workshop on Benchmark Problems in Computational Aeroacoustics, edited by J. C. Hardin, J. R. Ristorcelli, and C. K. W. Tam, NASA Conference Publication 3300, 1995a] are considered: a narrow Gaussian sound wave propagating in a one-dimensional space without flow, and the reflection of a two-dimensional acoustic pulse off a rigid wall in the presence of a uniform Mach 0.5 flow in a semi-infinite space. The first problem was used to examine the numerical dispersion and dissipation characteristics of the proposed algorithm. The second problem demonstrates the capability of the algorithm in treating sound propagation in a flow. The computed results were compared with analytical results and with results obtained by other methods. All results computed by the present method are in good agreement with the analytical solutions, and the results for the first problem agree very well with those predicted by other schemes.
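
    For readers unfamiliar with the temporal scheme, here is a minimal Crank-Nicolson advance of the 1-D advection equation. Centered finite differences stand in for the paper's least-squares Legendre spectral elements, so this illustrates only the time integrator, not the authors' spatial method.

```python
import numpy as np

# Crank-Nicolson time stepping for u_t + c u_x = 0 on a periodic grid.
# Centered differences replace the paper's Legendre spectral elements,
# so only the temporal scheme is illustrated here.

N, c, dt, steps = 200, 1.0, 0.005, 100
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.5) ** 2)          # narrow Gaussian pulse

# Periodic centered first-derivative matrix: (u[i+1] - u[i-1]) / (2 dx).
D = (np.roll(np.eye(N), 1, axis=1) - np.roll(np.eye(N), -1, axis=1)) / (2 * dx)
A = np.eye(N) + 0.5 * dt * c * D             # implicit half of the step
B = np.eye(N) - 0.5 * dt * c * D             # explicit half of the step
update = np.linalg.solve(A, B)               # full CN update matrix

norm0 = np.linalg.norm(u)
for _ in range(steps):
    u = update @ u
# CN is non-dissipative for this skew-symmetric operator: the discrete
# 2-norm is preserved, so any error appears as dispersion, not damping.
print(np.linalg.norm(u) / norm0)             # ~1.0
```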

  2. Finite element analysis of wrinkling membranes

    NASA Technical Reports Server (NTRS)

    Miller, R. K.; Hedgepeth, J. M.; Weingarten, V. I.; Das, P.; Kahyai, S.

    1984-01-01

    The development of a nonlinear numerical algorithm for the analysis of stresses and displacements in partly wrinkled flat membranes, and its implementation on the SAP VII finite-element code are described. A comparison of numerical results with exact solutions of two benchmark problems reveals excellent agreement, with good convergence of the required iterative procedure. An exact solution of a problem involving axisymmetric deformations of a partly wrinkled shallow curved membrane is also reported.

  3. Vertical Photon Transport in Cloud Remote Sensing Problems

    NASA Technical Reports Server (NTRS)

    Platnick, S.

    1999-01-01

    Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting, based on the maximum penetration of reflected photons, proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to determine both weightings accurately while avoiding time-consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than the expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple-scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.
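
    A toy version of the "average number of scatterings per layer" weighting can be written in a few lines of Monte Carlo. The slab geometry, optical depth, and single-scatter albedo below are assumed illustrative values, and the 1-D two-direction scattering is a crude stand-in for the adding/doubling treatment the paper actually derives.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D Monte Carlo estimate of one of the paper's weightings: the mean
# number of scatterings that reflected photons experience in each layer of
# a slab. Optical depth, layering, and albedo are assumed values.

tau_total, n_layers, albedo = 8.0, 8, 0.99
edges = np.linspace(0.0, tau_total, n_layers + 1)
counts = np.zeros(n_layers)
n_reflected = 0

for _ in range(20000):
    tau, mu = 0.0, +1.0                        # photon enters top, going down
    hits = np.zeros(n_layers)
    while True:
        tau += mu * rng.exponential()          # free flight in optical depth
        if tau <= 0.0:                         # escaped the top: reflected
            counts += hits
            n_reflected += 1
            break
        if tau >= tau_total:                   # escaped the bottom
            break
        if rng.random() > albedo:              # absorbed
            break
        hits[np.searchsorted(edges, tau) - 1] += 1
        mu = rng.choice([-1.0, +1.0])          # "isotropic" in 1-D

weights = counts / max(n_reflected, 1)         # mean scatterings per layer
print(np.round(weights, 2))                    # heaviest near the top
```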

  4. Improved artificial bee colony algorithm for vehicle routing problem with time windows

    PubMed Central

    Yan, Qianqian; Zhang, Mengjie; Yang, Yunong

    2017-01-01

    This paper investigates a well-known complex combinatorial problem, the vehicle routing problem with time windows (VRPTW). Unlike in the standard vehicle routing problem, each customer in the VRPTW must be served within a given time window. This paper solves the VRPTW using an improved artificial bee colony (IABC) algorithm. The performance of this algorithm is improved by a local optimization based on a crossover operation and a scanning strategy. Finally, the effectiveness of the IABC is evaluated on some well-known benchmarks. The results demonstrate the power of the IABC algorithm in solving the VRPTW. PMID:28961252
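
    Whatever metaheuristic is used, a VRPTW solver spends much of its time testing candidate routes against the time windows. A minimal feasibility check of that kind might look like the following; the data layout is assumed for illustration and is not taken from the paper.

```python
from typing import List, Tuple

def route_feasible(route: List[int],
                   travel: List[List[float]],
                   windows: List[Tuple[float, float]],
                   service: List[float]) -> bool:
    """Check one vehicle route against VRPTW time windows (a sketch).

    Node 0 is the depot. Arriving before a window opens means waiting;
    arriving after it closes makes the route infeasible.
    """
    t, prev = 0.0, 0
    for node in route + [0]:                 # visit customers, return to depot
        t += travel[prev][node]
        open_, close = windows[node]
        if t > close:
            return False                     # missed the window
        t = max(t, open_) + service[node]    # wait if early, then serve
        prev = node
    return True

travel = [[0, 4, 6], [4, 0, 3], [6, 3, 0]]   # symmetric travel times (toy)
windows = [(0, 100), (5, 9), (8, 20)]        # (open, close) per node (toy)
service = [0.0, 1.0, 1.0]
print(route_feasible([1, 2], travel, windows, service))  # True
```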

  5. Driven Boson Sampling.

    PubMed

    Barkhofen, Sonja; Bartley, Tim J; Sansoni, Linda; Kruse, Regina; Hamilton, Craig S; Jex, Igor; Silberhorn, Christine

    2017-01-13

    Sampling the distribution of bosons that have undergone a random unitary evolution is strongly believed to be a computationally hard problem. Key to outperforming classical simulations of this task is to increase both the number of input photons and the size of the network. We propose driven boson sampling, in which photons are input within the network itself, as a means to approach this goal. We show that the mean number of photons entering a boson sampling experiment can exceed one photon per input mode, while maintaining the required complexity, potentially leading to less stringent requirements on the input states for such experiments. When using heralded single-photon sources based on parametric down-conversion, this approach offers an ∼e-fold enhancement in the input state generation rate over scattershot boson sampling, reaching the scaling limit for such sources. This approach also offers a dramatic increase in the signal-to-noise ratio with respect to higher-order photon generation from such probabilistic sources, which removes the need for photon number resolution during the heralding process as the size of the system increases.
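
    The computational hardness referenced here traces back to the matrix permanent: each boson-sampling output probability is proportional to |Perm(A)|^2 for a submatrix A of the interferometer unitary. A brute-force Ryser-formula sketch, usable only for small photon numbers, is shown below.

```python
import itertools
import numpy as np

def permanent(A: np.ndarray):
    """Matrix permanent via Ryser's formula (exponential time, small n only).

    Boson-sampling output probabilities are proportional to |Perm(A_S)|^2
    for submatrices A_S of the interferometer unitary, which is why the
    classical simulation of the task is believed to be hard.
    """
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            rowsums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(rowsums)
    return (-1) ** n * total

rng = np.random.default_rng(7)
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(X)                    # Haar-ish random unitary
print(abs(permanent(U[:3, :3])) ** 2)     # unnormalized output probability
```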

  6. Solar chameleons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brax, Philippe; Zioutas, Konstantin

    2010-08-15

    We analyze the creation of chameleons deep inside the Sun (R ≈ 0.7 R_sun) and their subsequent conversion to photons near the magnetized surface of the Sun. We find that the spectrum of the regenerated photons lies in the soft x-ray region, hence addressing the solar corona problem. Moreover, these back-converted photons originating from chameleons have an intrinsic difference from regenerated photons from axions: their relative polarizations are mutually orthogonal before Compton interacting with the surrounding plasma. Depending on the photon-chameleon coupling, and working in the strong-coupling regime of the chameleons to matter, we find that the induced photon flux, when regenerated resonantly with the surrounding plasma, coincides with the solar flux within the soft x-ray energy range. Moreover, using the soft x-ray solar flux as a prior, we find that with a strong enough photon-chameleon coupling, the chameleons emitted by the Sun could lead to a regenerated photon flux in the CAST magnetic pipes that could be within the reach of CAST with upgraded detector performance. Axion helioscopes thus have the potential to detect and identify particle candidates for the ubiquitous dark energy in the Universe.

  7. A spectral, quasi-cylindrical and dispersion-free Particle-In-Cell algorithm

    DOE PAGES

    Lehe, Remi; Kirchen, Manuel; Andriyash, Igor A.; ...

    2016-02-17

    We propose a spectral Particle-In-Cell (PIC) algorithm based on the combination of a Hankel transform and a Fourier transform. For physical problems that have close-to-cylindrical symmetry, this algorithm can be much faster than full 3D PIC algorithms. In addition, unlike standard finite-difference PIC codes, the proposed algorithm is free of spurious numerical dispersion in vacuum. The algorithm is benchmarked in several situations of interest for laser-plasma interactions. These benchmarks show that it avoids a number of numerical artifacts that would otherwise affect the physics in a standard PIC algorithm, including the zero-order numerical Cherenkov effect.
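
    The radial half of the proposed transform pair can be illustrated with a direct-quadrature order-0 Hankel transform. Production codes use fast quasi-discrete variants, so treat this as a sketch of the definition only, checked against the known Gaussian self-transform pair.

```python
import numpy as np
from scipy.special import j0

# Order-0 Hankel transform by direct quadrature: the radial analogue of the
# Fourier transform used in the r-direction of quasi-cylindrical solvers.
# This trapezoid-style version only sketches the definition; fast
# quasi-discrete Hankel transforms are used in practice.

def hankel0(f_r, r, k):
    """F(k) = integral_0^R f(r) J0(k r) r dr, evaluated on a k grid."""
    w = np.gradient(r)                        # quadrature weights
    return np.array([np.sum(f_r * j0(kk * r) * r * w) for kk in k])

r = np.linspace(0.0, 20.0, 2000)
k = np.linspace(0.0, 5.0, 6)
numeric = hankel0(np.exp(-r**2 / 2.0), r, k)
exact = np.exp(-k**2 / 2.0)                   # known self-transform pair
print(np.max(np.abs(numeric - exact)))        # small quadrature residual
```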

  8. Building America Industrialized Housing Partnership (BAIHP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McIlvaine, Janet; Chandra, Subrato; Barkaszi, Stephen

    This final report summarizes the work conducted by the Building America Industrialized Housing Partnership (www.baihp.org) for the period 9/1/99-6/30/06. BAIHP is led by the Florida Solar Energy Center of the University of Central Florida and focuses on factory-built housing. In partnership with over 50 factory and site builders, work was performed in two main areas: research and technical assistance. In the research area, through site visits to over 75 problem homes, we discovered the prime causes of moisture problems in some manufactured homes, and our industry partners adopted our solutions to nearly eliminate this vexing problem. Through testing conducted in over two dozen housing factories of six factory builders, we documented the value of leak-free duct design and construction, which was embraced by our industry partners and implemented in all the thousands of homes they built. Through laboratory test facilities and measurements in real homes, we documented the merits of 'cool roof' technologies and developed an innovative night-sky radiative cooling concept currently being tested. We patented an energy-efficient condenser fan design, documented energy-efficient home retrofit strategies after hurricane damage, developed improved specifications for federal procurement of future temporary housing, compared the Building America benchmark to the HERS Index and IECC 2006, developed a toolkit for improving the accuracy and speed of benchmark calculations, monitored the field performance of over a dozen prototype homes, and initiated research on the effectiveness of occupancy feedback in reducing household energy use. In the technical assistance area, we provided systems engineering analysis and conducted training, testing, and commissioning that have resulted in over 128,000 factory-built and over 5,000 site-built homes which are saving their owners over $17,000,000 annually in energy bills. These include homes built by Palm Harbor Homes, Fleetwood, Southern Energy Homes, Cavalier, and the manufacturers participating in the Northwest Energy Efficient Manufactured Home program. We worked with over two dozen Habitat for Humanity affiliates and helped them build over 700 Energy Star or near-Energy Star homes. We provided technical assistance to several show homes constructed for the International Builders' Show in Orlando, FL, and assisted with other prototype homes in cold climates that save 40% over the benchmark reference. In the Gainesville, FL area, several of our builders are consistently producing 15 to 30 homes per month in subdivisions that meet the 30% benchmark savings goal. We contributed to the 2006 DOE Joule goals by providing two community case studies meeting the 30% benchmark goal in marine climates.

  9. The Physical Mechanism for Retinal Discrete Dark Noise: Thermal Activation or Cellular Ultraweak Photon Emission?

    PubMed

    Salari, Vahid; Scholkmann, Felix; Bokkon, Istvan; Shahbazi, Farhad; Tuszynski, Jack

    2016-01-01

    For several decades the physical mechanism underlying the discrete dark noise of photoreceptors in the eye has remained highly controversial and poorly understood. It is known that the Arrhenius equation, which is based on the Boltzmann distribution for thermal activation, can model only a part (e.g. half of the activation energy) of the retinal dark noise experimentally observed for vertebrate rod and cone pigments. Using the Hinshelwood distribution instead of the Boltzmann distribution in the Arrhenius equation has been proposed as a solution to the problem. Here, we show that using the Hinshelwood distribution does not solve the problem completely. As the discrete components of the noise are indistinguishable in shape and duration from those produced by real photon-induced photo-isomerization, the retinal discrete dark noise is most likely due to 'internal photons' inside cells and not due to thermal activation of visual pigments. Indeed, all living cells exhibit spontaneous ultraweak photon emission (UPE), mainly in the optical wavelength range, i.e., 350-700 nm. We show here that the retinal discrete dark noise has a rate similar to that of UPE, and therefore the dark noise is most likely due to spontaneous cellular UPE rather than thermal activation.
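
    The Boltzmann-versus-Hinshelwood comparison in the abstract reduces to one formula: when thermal energy can pool from m classical modes, the fraction of pigments with energy above the barrier becomes exp(-x) * sum_{i<m} x^i/i! with x = Ea/kT, instead of the bare Arrhenius factor exp(-x). A sketch with assumed, purely illustrative numbers:

```python
import math

def boltzmann_fraction(Ea, kT):
    """Arrhenius/Boltzmann: fraction of molecules with energy >= Ea."""
    return math.exp(-Ea / kT)

def hinshelwood_fraction(Ea, kT, m):
    """Hinshelwood: the same fraction when energy pools from m classical
    modes: exp(-x) * sum_{i<m} x^i / i!, with x = Ea/kT."""
    x = Ea / kT
    return math.exp(-x) * sum(x**i / math.factorial(i) for i in range(m))

# Illustrative numbers only (assumed, not the paper's fitted values).
kT = 0.0259 * 310.0 / 300.0      # eV at ~310 K
Ea = 1.95                        # eV, assumed activation energy
for m in (1, 10, 45):
    boost = hinshelwood_fraction(Ea, kT, m) / boltzmann_fraction(Ea, kT)
    print(f"m={m:>2}: rate boost over Arrhenius = {boost:.3e}")
```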

  10. Optimal portfolio selection in a Lévy market with uncontrolled cash flow and only risky assets

    NASA Astrophysics Data System (ADS)

    Zeng, Yan; Li, Zhongfei; Wu, Huiling

    2013-03-01

    This article considers an investor who has an exogenous cash flow evolving according to a Lévy process and invests in a financial market consisting of only risky assets, whose prices are governed by exponential Lévy processes. Two continuous-time portfolio selection problems are studied for the investor. One is a benchmark problem, and the other is a mean-variance problem. The first problem is solved by adopting the stochastic dynamic programming approach, and the obtained results are extended to the second problem by employing the duality theory. Closed-form solutions of these two problems are derived. Some existing results are found to be special cases of our results.

  11. AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT

    EPA Science Inventory

    A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. Application of this approach to two benchmark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...

  13. Real-case benchmark for flow and tracer transport in the fractured rock

    DOE PAGES

    Hokr, M.; Shao, H.; Gardner, W. P.; ...

    2016-09-19

    The paper is intended to define a benchmark problem for groundwater flow and natural tracer transport using observations of discharge and isotopic tracers in fractured, crystalline rock. Three numerical simulators, Flow123d, OpenGeoSys, and PFLOTRAN, are compared. The data used in the project were collected in a water-supply tunnel in granite of the Jizera Mountains, Bedrichov, Czech Republic. The problem configuration combines subdomains of different dimensions, a 3D continuum for the hard-rock blocks or matrix and 2D features for fractures or fault zones, together with realistic boundary conditions for tunnel-controlled drainage. Steady-state and transient flow and a pulse-injection tracer transport problem are solved. The results confirm mostly consistent behavior of the codes. Flow123d and OpenGeoSys, both of which implement 3D–2D coupling, differ by several percent in most cases, which is consistent with, e.g., effects of the placement of discrete unknowns in the mesh. Some of the PFLOTRAN results differ more, which can be explained by effects of the dispersion-tensor evaluation scheme and of numerical diffusion; these effects can grow stronger with fracture/matrix coupling and with large contrasts in parameter magnitudes. Although the study was not aimed at inverse modeling, the models were fit to the measured data approximately, demonstrating the intended real-case relevance of the benchmark.

  14. Non-Markovian dynamics of a qubit due to single-photon scattering in a waveguide

    NASA Astrophysics Data System (ADS)

    Fang, Yao-Lung L.; Ciccarello, Francesco; Baranger, Harold U.

    2018-04-01

    We investigate the open dynamics of a qubit due to scattering of a single photon in an infinite or semi-infinite waveguide. Through an exact solution of the time-dependent multi-photon scattering problem, we find the qubit's dynamical map. Tools of open quantum systems theory then allow us to show the general features of this map, find the corresponding non-Lindbladian master equation, and assess its non-Markovian nature in a rigorous way. The qubit dynamics has distinctive features that, in particular, do not occur in emission processes. Two fundamental sources of non-Markovianity are present: the finite width of the photon wavepacket and the time delay for propagation between the qubit and the end of the semi-infinite waveguide.
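
    A rigorous non-Markovianity assessment of the kind mentioned typically uses the Breuer-Laine-Piilo (BLP) criterion: any revival of the trace distance between two evolving states signals memory. The sketch below applies it to the textbook damped Jaynes-Cummings qubit, a standard exactly solvable stand-in, not the waveguide map of the paper.

```python
import numpy as np

# BLP non-Markovianity check on the textbook damped Jaynes-Cummings qubit
# (Lorentzian bath): trace-distance revivals between two states indicate
# memory. Parameters are illustrative "strong coupling" values.

lam, gamma = 1.0, 5.0                          # bath width, coupling
d = np.sqrt(complex(lam**2 - 2.0 * gamma * lam))

def G(t):                                      # excited-state amplitude decay
    return np.exp(-lam * t / 2.0) * (np.cosh(d * t / 2.0)
                                     + (lam / d) * np.sinh(d * t / 2.0))

def rho(t, r0):                                # evolved qubit density matrix
    g = G(t)
    p = abs(g) ** 2 * r0[1, 1]
    return np.array([[1.0 - p, g * r0[0, 1]],
                     [np.conj(g) * r0[1, 0], p]])

def trace_distance(a, b):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

r_plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)     # |+><+|
r_minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)  # |-><-|
ts = np.linspace(0.0, 6.0, 400)
D = [trace_distance(rho(t, r_plus), rho(t, r_minus)) for t in ts]
print("trace-distance revivals (non-Markovian):", bool(np.any(np.diff(D) > 1e-6)))
```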

  15. Model-Based Detection of Radioactive Contraband for Harbor Defense Incorporating Compton Scattering Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J V; Chambers, D H; Breitfeller, E F

    2010-03-02

    The detection of radioactive contraband is a critical problem in maintaining national security for any country. Photon emissions from threat materials challenge both detection and measurement technologies, especially when concealed by various types of shielding, which complicates the transport physics significantly. This problem becomes especially important when ships are intercepted by U.S. Coast Guard harbor patrols searching for contraband. The development of a sequential model-based processor that captures both the underlying transport physics of gamma-ray emissions, including Compton scattering, and the measurement of photon energies offers a physics-based approach to attack this challenging problem. The inclusion of a basic radionuclide representation of absorbed/scattered photons at a given energy, along with interarrival times, is used to extract the physics information available from the noisy measurements of the portable radiation detection systems used to interdict contraband. It is shown that this physics representation can incorporate scattering physics, leading to an 'extended' model-based structure that can be used to develop an effective sequential detection technique. The resulting model-based processor is shown to perform quite well based on data obtained from a controlled experiment.

  16. Search for resonances in diphoton events at √s = 13 TeV with the ATLAS detector

    NASA Astrophysics Data System (ADS)

    Aaboud, M.; Aad, G.; Abbott, B.; Abdallah, J.; Abdinov, O.; ... (ATLAS Collaboration)

    2016-09-01

    Searches for new resonances decaying into two photons in the ATLAS experiment at the CERN Large Hadron Collider are described. The analysis is based on proton-proton collision data corresponding to an integrated luminosity of 3.2 fb-1 at √{s}=13 TeV recorded in 2015. Two searches are performed, one targeted at a spin-2 particle of mass larger than 500 GeV, using Randall-Sundrum graviton states as a benchmark model, and one optimized for a spin-0 particle of mass larger than 200 GeV. Varying both the mass and the decay width, the most significant deviation from the background-only hypothesis is observed at a diphoton invariant mass around 750 GeV with local significances of 3.8 and 3.9 standard deviations in the searches optimized for a spin-2 and spin-0 particle, respectively. The global significances are estimated to be 2.1 standard deviations for both analyses. The consistency between the data collected at 13 TeV and 8 TeV is also evaluated. Limits on the production cross section times branching ratio to two photons for the two resonance types are reported.

  17. Atoms and molecules in cavities, from weak to strong coupling in quantum-electrodynamics (QED) chemistry

    PubMed Central

    Flick, Johannes; Ruggenthaler, Michael; Appel, Heiko

    2017-01-01

    In this work, we provide an overview of how well-established concepts in the fields of quantum chemistry and material sciences have to be adapted when the quantum nature of light becomes important in correlated matter–photon problems. We analyze model systems in optical cavities, where the matter–photon interaction is considered from the weak- to the strong-coupling limit and for individual photon modes as well as for the multimode case. We identify fundamental changes in Born–Oppenheimer surfaces, spectroscopic quantities, conical intersections, and efficiency for quantum control. We conclude by applying our recently developed quantum-electrodynamical density-functional theory to spontaneous emission and show how a straightforward approximation accurately describes the correlated electron–photon dynamics. This work paves the way to describe matter–photon interactions from first principles and addresses the emergence of new states of matter in chemistry and material science. PMID:28275094

  18. Conditions for entangled photon emission from (111)B site-controlled pyramidal quantum dots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juska, G., E-mail: gediminas.juska@tyndall.ie; Murray, E.; Dimastrodonato, V.

    A study of highly symmetric site-controlled pyramidal In{sub 0.25}Ga{sub 0.75}As quantum dots (QDs) is presented. It is shown that polarization-entangled photons can also be obtained from pyramidal QDs of designs different from the one already reported in Juska et al. [Nat. Photonics 7, 527 (2013)]. Moreover, some of the limitations on achieving a higher density of entangled photon emitters are addressed. Among these issues are (1) a remaining small fine-structure splitting and (2) an effective QD charging under non-resonant excitation conditions, which strongly reduces the number of useful biexciton-exciton recombination events. A possible solution to the charging problem is investigated by exploiting a dual-wavelength excitation technique, which allows gradual tuning of the QD charge from strongly negative to positive and, eventually, efficient detection of entangled photons from QDs that would otherwise be ineffective under single-wavelength (non-resonant) excitation.

  19. γ-ray telescopes using conversions to e+e- pairs: event generators, angular resolution and polarimetry

    NASA Astrophysics Data System (ADS)

    Gros, P.; Bernard, D.

    2017-02-01

    We benchmark various available event generators in Geant4 and EGS5 in the light of ongoing projects for high angular-resolution pair-conversion telescopes at low energy. We compare the distributions of key kinematic variables extracted from the geometry of the three final-state particles. We validate and use as reference an exact generator using the full 5D differential cross-section of the conversion process. We focus in particular on the effect of the unmeasured recoiling nucleus on the angular resolution. We show that for high-resolution trackers, the choice of the generator affects the estimated resolution of the telescope. We also show that the currently available generators are unable to accurately describe a linearly polarised photon source.

  20. Searching for long-lived particles: A compact detector for exotics at LHCb

    DOE PAGES

    Gligorov, Vladimir V.; Knapen, Simon; Papucci, Michele; ...

    2018-01-31

    We advocate for the construction of a new detector element at the LHCb experiment, designed to search for displaced decays of beyond Standard Model long-lived particles, taking advantage of a large shielded space in the LHCb cavern that is expected to soon become available. We discuss the general features and putative capabilities of such an experiment, as well as its various advantages and complementarities with respect to the existing LHC experiments and proposals such as SHiP and MATHUSLA. For two well-motivated beyond Standard Model benchmark scenarios—Higgs decay to dark photons and B meson decays via a Higgs mixing portal—the reach either complements or exceeds that predicted for other LHC experiments.

  1. Optical modulation in silicon-vanadium dioxide photonic structures

    NASA Astrophysics Data System (ADS)

    Miller, Kevin J.; Hallman, Kent A.; Haglund, Richard F.; Weiss, Sharon M.

    2017-08-01

    All-optical modulators are likely to play an important role in future chip-scale information processing systems. In this work, through simulations, we investigate the potential of a recently reported vanadium dioxide (VO2) embedded silicon waveguide structure for ultrafast all-optical signal modulation. With a VO2 length of only 200 nm, finite-difference time-domain simulations suggest broadband (200 nm) operation with a modulation depth greater than 12 dB and an insertion loss of less than 3 dB. Predicted performance metrics, including modulation speed, modulation depth, optical bandwidth, insertion loss, device footprint, and energy consumption of the proposed Si-VO2 all-optical modulator are benchmarked against those of current state-of-the-art all-optical modulators with in-plane optical excitation.
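
    As a plausibility check on such figures of merit, modulation depth and insertion loss follow directly from the on/off transmissions; a minimal sketch with illustrative transmission values (not taken from the paper):

```python
import math

def modulation_depth_db(t_on, t_off):
    """Modulation depth in dB between the high- and low-transmission states."""
    return 10.0 * math.log10(t_on / t_off)

def insertion_loss_db(t_on):
    """Insertion loss in dB of the transparent (on) state."""
    return -10.0 * math.log10(t_on)

# Hypothetical transmissions for a VO2-embedded waveguide section: 50%
# with VO2 insulating (on), 3% with VO2 metallic (off); not paper values.
t_on, t_off = 0.50, 0.03
print(f"modulation depth: {modulation_depth_db(t_on, t_off):.1f} dB")  # ~12.2
print(f"insertion loss:   {insertion_loss_db(t_on):.1f} dB")           # ~3.0
```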

  2. The integral line-beam method for gamma skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shultis, J.K.; Faw, R.E.; Bassett, M.S.

    1991-03-01

    This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided, and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method of calculation are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
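
    For a shielded source, the correction described here is multiplicative per source energy group; a minimal sketch, assuming a Berger-form buildup factor with placeholder coefficients (not the fitted values from the paper):

```python
import math

def buildup_berger(mu_d, a=1.0, b=0.05):
    """Berger-form point-isotropic buildup factor
    B(mu*d) = 1 + a*(mu*d)*exp(b*mu*d); a and b here are placeholder
    coefficients, not fitted data."""
    return 1.0 + a * mu_d * math.exp(b * mu_d)

def shielded_emission_rate(s0, mu, d):
    """Photons/s emerging from a slab shield of thickness d (cm) with
    attenuation coefficient mu (1/cm), treating scattered photons as if
    unchanged in energy and direction, as in the simplified shield
    treatment described above."""
    mu_d = mu * d
    return s0 * buildup_berger(mu_d) * math.exp(-mu_d)

# Example: a 1e10 photon/s source behind 10 cm of shield with mu = 0.15/cm.
print(f"{shielded_emission_rate(1e10, 0.15, 10.0):.3e} photons/s")
```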

  3. New Multi-group Transport Neutronics (PHISICS) Capabilities for RELAP5-3D and its Application to Phase I of the OECD/NEA MHTGR-350 MW Benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom; Cristian Rabiti; Andrea Alfonsi

    2012-10-01

    PHISICS is a neutronics code system currently under development at the Idaho National Laboratory (INL). Its goal is to provide state-of-the-art simulation capability to reactor designers. The PHISICS modules currently under development are a nodal and semi-structured transport core solver (INSTANT), a depletion module (MRTAU), and a cross-section interpolation module (MIXER). The INSTANT module is the most developed of the three; its basic functionality is ready to use, but the code is still under continuous development to extend its capabilities. This paper reports on the effort of coupling the nodal kinetics code package PHISICS (INSTANT/MRTAU/MIXER) to the thermal-hydraulics system code RELAP5-3D, to enable full core and system modeling. This makes it possible to model coupled (thermal-hydraulics and neutronics) problems with more options for 3D neutron kinetics than the existing diffusion theory neutron kinetics module in RELAP5-3D (NESTLE) offers. In the second part of the paper, an overview of the OECD/NEA MHTGR-350 MW benchmark is given. This benchmark has been approved by the OECD and is based on the General Atomics 350 MW Modular High Temperature Gas Reactor (MHTGR) design. The benchmark includes coupled neutronics thermal-hydraulics exercises that require more capabilities than RELAP5-3D with NESTLE offers. Therefore, the MHTGR benchmark makes extensive use of the new PHISICS/RELAP5-3D coupling capabilities. The paper presents the preliminary results of the three steady-state exercises specified in Phase I of the benchmark using PHISICS/RELAP5-3D.

  4. Potential Deep Seated Landslide Mapping from Various Temporal Data - Benchmark, Aerial Photo, and SAR

    NASA Astrophysics Data System (ADS)

    Wang, Kuo-Lung; Lin, Jun-Tin; Lee, Yi-Hsuan; Lin, Meei-Ling; Chen, Chao-Wei; Liao, Ray-Tang; Chi, Chung-Chi; Lin, Hsi-Hung

    2016-04-01

    Landslides only become hazards when development takes place in areas of high landslide potential. This study attempts to map deep-seated landslides before their initiation. A study area in central Taiwan was selected; its geological setting is distinctive, consisting of slate whose bedding strikes mainly northeast and dips 30-75 degrees to the southeast. Several deep-seated landslides on slopes parallel to this bedding were triggered by rainfall events. Benchmark data from 2002-2009 are used in this study; the benchmarks were measured along Highway No. 14B, which was constructed along the mountain ridge line. Taiwan is located between oceanic plates and a continental plate, and most GPS stations and benchmarks on the island record rising mountain elevations; the same trend is observed in this area. However, some benchmarks located within landslide areas show below-average and even negative elevation change. Aerial photos from 1979 to 2007 were used for orthophoto generation; the changes of land use over the 30-year period are obvious, and enlargement of the river channel is also observed in this area. Both the benchmarks and the aerial photos indicate landslide potential in this area, but the extent of the landslides is difficult to define from these data alone. SAR data were therefore adopted. DInSAR and SBAS analyses were applied to ALOS/PALSAR data from 2006 to 2010. The DInSAR analysis shows that landslides can be mapped, but the error is difficult to reduce; it arises from several conditions such as vegetation, clouds, and atmospheric vapor. To overcome this problem, the time-series SBAS analysis was used. The SBAS results for this area show that large deep-seated landslides are readily mapped and the accuracy of the vertical displacement is reasonable.

  5. Astatine-211 imaging by a Compton camera for targeted radiotherapy.

    PubMed

    Nagao, Yuto; Yamaguchi, Mitsutaka; Watanabe, Shigeki; Ishioka, Noriko S; Kawachi, Naoki; Watabe, Hiroshi

    2018-05-24

    Astatine-211 is a promising radionuclide for targeted radiotherapy. Imaging the distribution of targeted radiotherapeutic agents in a patient's body is required to optimize treatment strategies. We proposed imaging 211At via its high-energy photons to overcome some problems in conventional planar or single-photon emission computed tomography imaging. We performed an imaging experiment on a point-like 211At source using a Compton camera, and demonstrated the capability of imaging 211At with the high-energy photons for the first time. Copyright © 2018 Elsevier Ltd. All rights reserved.
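
    The cone geometry underlying such imaging follows from the Compton formula; a small sketch (a generic Compton-camera relation with illustrative energies, not code from the paper):

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e_deposit_kev, e_total_kev):
    """Compton-cone half-angle (radians) from the energy deposited in the
    scatterer and the total photon energy, via
    cos(theta) = 1 - me*c^2 * (1/E' - 1/E), with E' = E - E_deposit."""
    e_scattered = e_total_kev - e_deposit_kev
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e_scattered - 1.0 / e_total_kev)
    return math.acos(cos_theta)

# Illustrative numbers: a 570 keV photon (the line associated with the
# 211Po daughter of 211At) depositing 150 keV in the first scatterer.
print(math.degrees(compton_cone_angle(150.0, 570.0)))  # ~47 degrees
```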

  6. Streak camera imaging of single photons at telecom wavelength

    NASA Astrophysics Data System (ADS)

    Allgaier, Markus; Ansari, Vahid; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Donohue, John Matthew; Czerniuk, Thomas; Aßmann, Marc; Bayer, Manfred; Brecht, Benjamin; Silberhorn, Christine

    2018-01-01

    Streak cameras are powerful tools for temporal characterization of ultrafast light pulses, even at the single-photon level. However, the low signal-to-noise ratio in the infrared range prevents measurements on weak light sources in the telecom regime. We present an approach to circumvent this problem, utilizing an up-conversion process in periodically poled lithium niobate waveguides. We convert single photons from a parametric down-conversion source in order to reach the point of maximum detection efficiency of commercially available streak cameras. We explore phase-matching configurations to apply the up-conversion scheme in real-world applications.

  7. Fabrication of 3D polymer photonic crystals for near-IR applications

    NASA Astrophysics Data System (ADS)

    Yao, Peng; Qiu, Liang; Shi, Shouyuan; Schneider, Garrett J.; Prather, Dennis W.; Sharkawy, Ahmed; Kelmelis, Eric

    2008-02-01

    Photonic crystals [1, 2] have stirred enormous research interest and have become a growing enterprise over the last 15 years. Generally, PhCs consist of periodic structures whose periodicity is comparable to the wavelength the PhCs are designed to modulate. If the material and periodic pattern are properly selected, PhCs can serve many applications based on their unique properties, including photonic band gaps (PBG) [3], self-collimation [4], and the superprism effect [5]. Strictly speaking, PhCs need to possess periodicity in three dimensions to maximize their advantageous capabilities; however, much current research is based on scaled two-dimensional PhCs, mainly owing to the difficulty of fabricating such three-dimensional PhCs. Many approaches have been explored for the fabrication of 3D photonic crystals, including layer-by-layer surface micromachining [6], glancing-angle deposition [7], the 3D micro-sculpture method [8], self-assembly [9], and lithographic methods [10-12]. Among them, lithographic methods have become increasingly accepted owing to low cost and precise control over the photonic crystal structure. The three most developed lithographic methods are X-ray lithography [10], holographic lithography [11], and two-photon polymerization [12]. Although significant progress has been made in developing these lithography-based technologies, each still suffers from significant disadvantages: X-ray lithography relies on an expensive radiation source, holographic lithography lacks the flexibility to create engineered defects, and multi-photon polymerization is not suitable for parallel fabrication. In our previous work, we developed a multi-layer photolithography process [13, 14] based on multiple resist applications and enhanced absorption upon exposure. Using a negative lift-off resist (LOR) and a 254 nm DUV source, we demonstrated fabrication of 3D arbitrary structures with feature sizes of several microns. However, a severe intermixing problem occurred as we reduced the lattice constant for near-IR applications. In this work, we address this problem by employing SU8. The exposure is vertically confined by using a mismatched 220 nm DUV source, and the intermixing problem is eliminated owing to more densely crosslinked resist molecules. Using this method, we have demonstrated a 3D "woodpile" structure with a 1.55 μm lattice constant and a 2 mm by 2 mm pattern area.

  8. Data reading with the aid of one-photon and two-photon luminescence in three-dimensional optical memory devices based on photochromic materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akimov, Denis A; Zheltikov, Aleksei M; Koroteev, Nikolai I

    1998-06-30

    The problem of nondestructive reading of the data stored in the interior of a photochromic sample was analysed. A comparison was made of the feasibility of reading based on one-photon and two-photon luminescence. A model was proposed for the processes of reading the data stored in photochromic molecules with the aid of one-photon and two-photon luminescence. In addition to photochromic transitions, account was taken of the transfer of populations between optically coupled transitions in molecules under the action of the exciting radiation. This model provided a satisfactory description of the kinetics of decay of the coloured form of bulk samples of spiropyran and made it possible to determine experimentally the quantum yield of the reverse photoreaction as well as the two-photon absorption cross section of the coloured form. Measurements were made of the characteristic erasure times of the data stored in a photochromic medium under one-photon and two-photon luminescence reading conditions. It was found that the use of two-photon luminescence made it possible to enhance considerably the contrast and localisation of the optical data reading scheme in three-dimensional optical memory devices. The experimental results were used to estimate the two-photon absorption cross section of the coloured form of a sample of indoline spiropyran in a polymethyl methacrylate matrix. (Laser applications and other topics in quantum electronics)

  9. Optical Properties and Wave Propagation in Semiconductor-Based Two-Dimensional Photonic Crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agio, Mario

    2002-12-31

    This work is a theoretical investigation of the physical properties of semiconductor-based two-dimensional photonic crystals, in particular systems embedded in planar dielectric waveguides (GaAs/AlGaAs, GaInAsP/InP heterostructures, and self-standing membranes) or based on macro-porous silicon. The photonic band structure of photonic crystals and photonic-crystal slabs is numerically computed and the associated light-line problem is discussed, which points to the issue of intrinsic out-of-plane diffraction losses for the photonic bands lying above the light line. The photonic states are then classified by the group theory formalism: each mode is related to an irreducible representation of the corresponding small point group. The optical properties are investigated by means of the scattering matrix method, which numerically implements a variable-angle-reflectance experiment; comparison with experiments is also provided. The analysis of surface reflectance proves the existence of selection rules for coupling an external wave to a certain photonic mode. Such rules can be directly derived from symmetry considerations. Lastly, the control of wave propagation in weak-index-contrast photonic-crystal slabs is tackled in view of designing building blocks for photonic integrated circuits. The proposed designs are found to comply with the major requirements of low-loss propagation and high, single-mode transmission. These notions are then collected to model a photonic-crystal combiner for an integrated multi-wavelength-source laser.

  10. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Data reading with the aid of one-photon and two-photon luminescence in three-dimensional optical memory devices based on photochromic materials

    NASA Astrophysics Data System (ADS)

    Akimov, Denis A.; Zheltikov, Aleksei M.; Koroteev, Nikolai I.; Magnitskiy, Sergey A.; Naumov, A. N.; Sidorov-Biryukov, D. A.; Sokolyuk, N. T.; Fedotov, Andrei B.

    1998-06-01

    The problem of nondestructive reading of the data stored in the interior of a photochromic sample was analysed. A comparison was made of the feasibility of reading based on one-photon and two-photon luminescence. A model was proposed for the processes of reading the data stored in photochromic molecules with the aid of one-photon and two-photon luminescence. In addition to photochromic transitions, account was taken of the transfer of populations between optically coupled transitions in molecules under the action of the exciting radiation. This model provided a satisfactory description of the kinetics of decay of the coloured form of bulk samples of spiropyran and made it possible to determine experimentally the quantum yield of the reverse photoreaction as well as the two-photon absorption cross section of the coloured form. Measurements were made of the characteristic erasure times of the data stored in a photochromic medium under one-photon and two-photon luminescence reading conditions. It was found that the use of two-photon luminescence made it possible to enhance considerably the contrast and localisation of the optical data reading scheme in three-dimensional optical memory devices. The experimental results were used to estimate the two-photon absorption cross section of the coloured form of a sample of indoline spiropyran in a polymethyl methacrylate matrix.

  11. Exploring New Physics Frontiers Through Numerical Relativity.

    PubMed

    Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Sperhake, Ulrich

    2015-01-01

    The demand to obtain answers to highly complex problems within strong-field gravity has been met with significant progress in the numerical solution of Einstein's equations - along with some spectacular results - in various setups. We review techniques for solving Einstein's equations in generic spacetimes, focusing on fully nonlinear evolutions but also on how to benchmark those results with perturbative approaches. The results address problems in high-energy physics, holography, mathematical physics, fundamental physics, astrophysics and cosmology.

  12. Differential Cloud Particles Evolution Algorithm Based on Data-Driven Mechanism for Applications of ANN

    PubMed Central

    2017-01-01

    Computational scientists have designed many useful algorithms by exploring biological processes or imitating natural evolution. These algorithms can be used to solve engineering optimization problems. Inspired by changes in the state of matter, we propose a novel optimization algorithm called the differential cloud particles evolution algorithm based on a data-driven mechanism (CPDD). In the proposed algorithm, the optimization process is divided into two stages, namely, a fluid stage and a solid stage. The algorithm carries out a strategy of integrating global exploration with local exploitation in the fluid stage, while local exploitation is carried out mainly in the solid stage. The quality of the solution and the efficiency of the search are influenced greatly by the control parameters. Therefore, the data-driven mechanism is designed to obtain better control parameters and so ensure good performance on numerical benchmark problems. In order to verify the effectiveness of CPDD, numerical experiments are carried out on all the CEC2014 contest benchmark functions. Finally, two application problems of artificial neural networks are examined. The experimental results show that CPDD is competitive with respect to eight other state-of-the-art intelligent optimization algorithms. PMID:28761438

  13. A new effective operator for the hybrid algorithm for solving global optimisation problems

    NASA Astrophysics Data System (ADS)

    Duc, Le Anh; Li, Kenli; Nguyen, Tien Trong; Yen, Vu Minh; Truong, Tung Khac

    2018-04-01

    Hybrid algorithms have recently been used to solve complex single-objective optimisation problems. The ultimate goal is to find an optimised global solution by using these algorithms. Based on the existing algorithms (HP_CRO, PSO, RCCRO), this study proposes a new hybrid algorithm called MPC (Mean-PSO-CRO), which utilises a new Mean-Search Operator. By employing this new operator, the proposed algorithm improves the search ability in areas of the solution space that the operators of previous algorithms do not explore. Specifically, the Mean-Search Operator helps find better solutions in comparison with other algorithms. Moreover, the authors propose two parameters, one for balancing local and global search and one for balancing the various types of local search. In addition, three versions of this operator, which use different constraints, are introduced. The experimental results on 23 benchmark functions used in previous works show that our framework can find better optimal or close-to-optimal solutions with faster convergence speed for most of the benchmark functions, especially the high-dimensional functions. Thus, the proposed algorithm is more effective in solving single-objective optimisation problems than the other existing algorithms.
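
    The abstract does not define the operator itself; one plausible reading is a move toward the centroid of the current best solutions. A toy sketch of that idea (the function and all parameter choices below are hypothetical illustrations, not the paper's MPC operator):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_search_candidate(population, fitness, k=5, sigma=0.1):
    """Illustrative mean-based move: perturb the centroid of the k best
    individuals. A guess at the flavour of a 'Mean-Search Operator'; the
    actual operator is specified in the paper, not in this abstract."""
    best = population[np.argsort(fitness)[:k]]   # k lowest-cost rows
    centroid = best.mean(axis=0)                 # mean of the elite set
    return centroid + rng.normal(0.0, sigma, size=centroid.shape)

# Crude loop minimising the sphere function with this operator alone.
pop = rng.uniform(-5.0, 5.0, size=(30, 10))
for _ in range(300):
    fit = (pop ** 2).sum(axis=1)
    cand = mean_search_candidate(pop, fit)
    worst = int(np.argmax(fit))
    if (cand ** 2).sum() < fit[worst]:
        pop[worst] = cand                        # replace the worst if improved
print((pop ** 2).sum(axis=1).min())              # far below the initial values
```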

  14. A Diagnostic Assessment of Evolutionary Multiobjective Optimization for Water Resources Systems

    NASA Astrophysics Data System (ADS)

    Reed, P.; Hadka, D.; Herman, J.; Kasprzyk, J.; Kollat, J.

    2012-04-01

    This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability, and controllability of ten benchmark MOEAs for a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM), and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging problem properties including (1) many-objective formulations with 4 or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.

  15. A hybridizable discontinuous Galerkin method for modeling fluid-structure interaction

    NASA Astrophysics Data System (ADS)

    Sheldon, Jason P.; Miller, Scott T.; Pitt, Jonathan S.

    2016-12-01

    This work presents a novel application of the hybridizable discontinuous Galerkin (HDG) finite element method to the multi-physics simulation of coupled fluid-structure interaction (FSI) problems. Recent applications of the HDG method have primarily been for single-physics problems including both solids and fluids, which are necessary building blocks for FSI modeling. Utilizing these established models, HDG formulations for linear elastostatics, a nonlinear elastodynamic model, and arbitrary Lagrangian-Eulerian Navier-Stokes are derived. The elasticity formulations are written in a Lagrangian reference frame, with the nonlinear formulation restricted to hyperelastic materials. With these individual solid and fluid formulations, the remaining challenge in FSI modeling is coupling together their disparate mathematics on the fluid-solid interface. This coupling is presented, along with the resultant HDG FSI formulation. Verification of the component models, through the method of manufactured solutions, is performed and each model is shown to converge at the expected rate. The individual components, along with the complete FSI model, are then compared to the benchmark problems proposed by Turek and Hron [1]. The solutions from the HDG formulation presented in this work trend towards the benchmark as the spatial polynomial order and the temporal order of integration are increased.
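
    The expected-rate verification mentioned here is commonly summarized by an observed order computed from two mesh levels; a minimal sketch assuming the manufactured-solution error behaves like e ≈ C·h^p, with made-up error values:

```python
import math

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    """Observed convergence order p from errors on two mesh sizes,
    assuming e ~ C * h**p."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Illustrative errors on meshes of size h and h/2 (not values from the paper):
print(observed_order(4.0e-3, 5.1e-4, 0.1, 0.05))  # ~2.97, i.e. ~3rd order
```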

  16. A hybridizable discontinuous Galerkin method for modeling fluid–structure interaction

    DOE PAGES

    Sheldon, Jason P.; Miller, Scott T.; Pitt, Jonathan S.

    2016-08-31

    This study presents a novel application of the hybridizable discontinuous Galerkin (HDG) finite element method to the multi-physics simulation of coupled fluid–structure interaction (FSI) problems. Recent applications of the HDG method have primarily been for single-physics problems including both solids and fluids, which are necessary building blocks for FSI modeling. Utilizing these established models, HDG formulations for linear elastostatics, a nonlinear elastodynamic model, and arbitrary Lagrangian–Eulerian Navier–Stokes are derived. The elasticity formulations are written in a Lagrangian reference frame, with the nonlinear formulation restricted to hyperelastic materials. With these individual solid and fluid formulations, the remaining challenge in FSI modeling is coupling together their disparate mathematics on the fluid–solid interface. This coupling is presented, along with the resultant HDG FSI formulation. Verification of the component models, through the method of manufactured solutions, is performed and each model is shown to converge at the expected rate. The individual components, along with the complete FSI model, are then compared to the benchmark problems proposed by Turek and Hron [1]. The solutions from the HDG formulation presented in this work trend towards the benchmark as the spatial polynomial order and the temporal order of integration are increased.

  17. GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise Paul

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary. 09/2016: Tables 6 and 8 updated. AGR-2 input data added.

  18. Generation IV benchmarking of TRISO fuel performance models under accident conditions: Modeling input data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collin, Blaise P.

    2014-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from "Case 5" of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. "Case 5" of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to "effects of the numerical calculation method rather than the physical model" [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary.

  19. Present Status and Extensions of the Monte Carlo Performance Benchmark

    NASA Astrophysics Data System (ADS)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters built from common compute nodes. However, on true supercomputers the speedup of parallel calculations continues to increase up to large numbers of processor cores. More experience is needed with calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited to further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems: for evaluating fission source convergence in a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and for studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.
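
    The quoted history count follows from the 1/√N scaling of Monte Carlo statistical uncertainty; a small sketch extrapolating from a hypothetical pilot run:

```python
def histories_needed(rel_err_pilot, n_pilot, rel_err_target):
    """Histories needed for a target relative error, using the 1/sqrt(N)
    scaling of Monte Carlo statistical uncertainty."""
    return n_pilot * (rel_err_pilot / rel_err_target) ** 2

# Hypothetical pilot run: 1e9 histories give a 10% error in a small fuel
# zone; scaling to 1% suggests ~1e11 histories, consistent with the
# "about 100 billion" figure quoted above.
print(f"{histories_needed(0.10, 1e9, 0.01):.2e}")  # 1.00e+11
```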

  20. Solving the Traveling Salesman's Problem Using the African Buffalo Optimization.

    PubMed

    Odili, Julius Beneoluchi; Mohmad Kahar, Mohd Nizam

    2016-01-01

    This paper proposes the African Buffalo Optimization (ABO), a new metaheuristic algorithm derived from careful observation of the African buffalo, a species of wild cattle, in the African forests and savannahs. This animal displays uncommon intelligence, strategic organizational skills, and exceptional navigational ingenuity in its traversal of the African landscape in search of food. The African Buffalo Optimization builds a mathematical model from the behavior of this animal and uses the model to solve 33 symmetric benchmark Traveling Salesman Problem instances and six difficult asymmetric instances from the TSPLIB. This study shows that buffalos are able to ensure excellent exploration and exploitation of the search space through regular communication, cooperation, and good memory of their previous personal exploits, as well as tapping the herd's collective exploits. The results obtained by using the ABO to solve these TSP cases were benchmarked against the results obtained with other popular algorithms. The results obtained using the African Buffalo Optimization algorithm are very competitive.
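
    Any TSP metaheuristic of this kind needs a tour-length objective; a minimal sketch of the TSPLIB-style EUC_2D evaluation on a tiny made-up instance (not one of the paper's cases):

```python
import math

def tour_length(tour, coords):
    """Closed-tour length over cities given as (x, y) pairs, rounding each
    leg to the nearest integer as in TSPLIB EUC_2D instances."""
    total = 0
    for i, city in enumerate(tour):
        nxt = tour[(i + 1) % len(tour)]                 # wrap to close the tour
        total += int(math.dist(coords[city], coords[nxt]) + 0.5)
    return total

# Tiny made-up instance, not one of the TSPLIB cases from the paper.
coords = [(0, 0), (0, 3), (4, 3), (4, 0)]
print(tour_length([0, 1, 2, 3], coords))  # rectangle perimeter = 14
```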

  1. Applying Quantum Monte Carlo to the Electronic Structure Problem

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2016-06-01

    Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).

  2. Benchmarking for Bayesian Reinforcement Learning

    PubMed Central

    Ernst, Damien; Couëtoux, Adrien

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment, using some prior knowledge accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. This paper addresses that problem and provides a new BRL comparison methodology along with the corresponding open-source library. In this methodology, a comparison criterion is defined that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions. In order to enable the comparison of non-anytime algorithms, the methodology also includes a detailed analysis of the computation time requirement of each algorithm. The library is released with all source code and documentation: it includes three test problems, each with two different prior distributions, and seven state-of-the-art RL algorithms. Finally, the library is illustrated by comparing all the available algorithms, and the results are discussed. PMID:27304891

  3. Two-fluid dusty shocks: simple benchmarking problems and applications to protoplanetary discs

    NASA Astrophysics Data System (ADS)

    Lehmann, Andrew; Wardle, Mark

    2018-05-01

    The key role that dust plays in the interstellar medium has motivated the development of numerical codes designed to study the coupled evolution of dust and gas in systems such as turbulent molecular clouds and protoplanetary discs. Drift between dust and gas has proven to be important as well as numerically challenging. We provide simple benchmarking problems for dusty gas codes by numerically solving the two-fluid dust-gas equations for steady, plane-parallel shock waves. The two distinct shock solutions to these equations allow a numerical code to test different forms of drag between the two fluids, the strength of that drag and the dust to gas ratio. We also provide an astrophysical application of J-type dust-gas shocks to studying the structure of accretion shocks on to protoplanetary discs. We find that two-fluid effects are most important for grains larger than 1 μm, and that the peak dust temperature within an accretion shock provides a signature of the dust-to-gas ratio of the infalling material.
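
    As a minimal numerical companion to such benchmarks, the drag coupling alone can be exercised on a relaxation problem; a toy sketch with linear drag and a momentum-conserving back-reaction (illustrative values, not the paper's shock solutions):

```python
def relax_velocities(vg0, vd0, eps, t_stop, dt=1e-3, n=5000):
    """Toy dust-gas velocity relaxation under linear drag:
        dv_d/dt =  (v_g - v_d) / t_stop
        dv_g/dt = -eps * (v_g - v_d) / t_stop
    with eps the dust-to-gas density ratio. A sketch in the spirit of the
    drag benchmarking described above, not the full shock solution."""
    vg, vd = vg0, vd0
    for _ in range(n):
        dv = vg - vd
        vd += dt * dv / t_stop           # drag accelerates dust toward gas
        vg -= dt * eps * dv / t_stop     # momentum-conserving back-reaction
    return vg, vd

# Post-shock gas at 1.0, dust still streaming at 2.0, dust-to-gas ratio 0.01:
# both relax to the momentum-weighted velocity (1.0 + 0.01 * 2.0) / 1.01.
print(relax_velocities(1.0, 2.0, 0.01, t_stop=0.1))
```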

  4. Benchmarking for Bayesian Reinforcement Learning.

    PubMed

    Castronovo, Michael; Ernst, Damien; Couëtoux, Adrien; Fonteneau, Raphael

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment, using some prior knowledge accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. This paper addresses that problem and provides a new BRL comparison methodology along with the corresponding open-source library. In this methodology, a comparison criterion is defined that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions. In order to enable the comparison of non-anytime algorithms, the methodology also includes a detailed analysis of the computation time requirement of each algorithm. The library is released with all source code and documentation: it includes three test problems, each with two different prior distributions, and seven state-of-the-art RL algorithms. Finally, the library is illustrated by comparing all the available algorithms, and the results are discussed.

  5. Cove benchmark calculations using SAGUARO and FEMTRAN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eaton, R.R.; Martinez, M.J.

    1986-10-01

    Three small-scale, time-dependent benchmarking calculations have been made using the finite element codes SAGUARO, to determine hydraulic head and water velocity profiles, and FEMTRAN, to predict the solute transport. Sand and hard-rock porous materials were used. Time scales for the problems, which ranged from tens of hours to thousands of years, posed no particular difficulty for the two codes. Studies were performed to determine the effects of computational mesh, boundary conditions, velocity formulation, and SAGUARO/FEMTRAN code coupling on water and solute transport. Results showed that mesh refinement improved mass conservation. Varying the drain-tile size in COVE 1N had a weak effect on the rate at which the tile field drained. Excellent agreement with published COVE 1N data was obtained for the hydrological field, and reasonable agreement for the solute-concentration predictions. The question remains whether these types of calculations can be carried out on repository-scale problems using material characteristic curves representing tuff with fractures.

  6. Solving the Traveling Salesman's Problem Using the African Buffalo Optimization

    PubMed Central

    Odili, Julius Beneoluchi; Mohmad Kahar, Mohd Nizam

    2016-01-01

    This paper proposes the African Buffalo Optimization (ABO), a new metaheuristic algorithm derived from careful observation of the African buffalo, a species of wild cattle, in the African forests and savannahs. This animal displays uncommon intelligence, strategic organizational skills, and exceptional navigational ingenuity in its traversal of the African landscape in search of food. The African Buffalo Optimization builds a mathematical model from the behavior of this animal and uses the model to solve 33 symmetric benchmark Traveling Salesman Problem instances and six difficult asymmetric instances from the TSPLIB. This study shows that buffalos are able to ensure excellent exploration and exploitation of the search space through regular communication, cooperation, and good memory of their previous personal exploits, as well as tapping the herd's collective exploits. The results obtained by using the ABO to solve these TSP cases were benchmarked against the results obtained with other popular algorithms. The results obtained using the African Buffalo Optimization algorithm are very competitive. PMID:26880872

  7. Photon-limited Sensing and Surveillance

    DTIC Science & Technology

    2015-01-29

    considerable time delay). More specifically, there were four main outcomes from this work: • Improved understanding of the fundamental limitations of...that we design novel cameras for photon-limited settings based on the principles of CS. Most prior theoretical results in compressed sensing and related inverse problems apply to idealized settings where the noise is i.i.d., and do not account for signal-dependent noise and physical sensing

  8. Analytical solutions for benchmarking cold regions subsurface water flow and energy transport models: one-dimensional soil thaw with conduction and advection

    USGS Publications Warehouse

    Kurylyk, Barret L.; McKenzie, Jeffrey M; MacQuarrie, Kerry T. B.; Voss, Clifford I.

    2014-01-01

    Numerous cold regions water flow and energy transport models have emerged in recent years. Dissimilarities often exist in their mathematical formulations and/or numerical solution techniques, but few analytical solutions exist for benchmarking flow and energy transport models that include pore water phase change. This paper presents a detailed derivation of the Lunardini solution, an approximate analytical solution for predicting soil thawing subject to conduction, advection, and phase change. Fifteen thawing scenarios are examined by considering differences in porosity, surface temperature, Darcy velocity, and initial temperature. The accuracy of the Lunardini solution is shown to be proportional to the Stefan number. The analytical solution results obtained for soil thawing scenarios with water flow and advection are compared to those obtained from the finite element model SUTRA. Three problems, two involving the Lunardini solution and one involving the classic Neumann solution, are recommended as standard benchmarks for future model development and testing.
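
    The quantities in play can be illustrated with the classic one-phase Neumann solution cited here as a benchmark; a small sketch with illustrative soil parameters (not values from the paper):

```python
import math

def stefan_number(c_vol, d_temp, latent):
    """Stefan number St = c*dT/L, with volumetric heat capacity c
    [J/m^3/K], surface-to-fusion temperature difference dT [K], and
    volumetric latent heat L [J/m^3]."""
    return c_vol * d_temp / latent

def neumann_lambda(st):
    """Root of lambda*exp(lambda^2)*erf(lambda) = St/sqrt(pi), the
    transcendental equation of the classic one-phase Neumann solution
    (conduction only, no advection), solved by bisection."""
    f = lambda x: x * math.exp(x * x) * math.erf(x) - st / math.sqrt(math.pi)
    lo, hi = 1e-9, 5.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

def thaw_depth(lam, alpha, t):
    """Thaw front position X(t) = 2*lambda*sqrt(alpha*t)."""
    return 2.0 * lam * math.sqrt(alpha * t)

# Illustrative values only: St = 0.2, thermal diffusivity 1e-7 m^2/s,
# elapsed time 30 days.
lam = neumann_lambda(0.2)
print(lam, thaw_depth(lam, 1e-7, 30 * 86400))  # ~0.31 m thaw depth
```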

  9. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    PubMed

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. ©2015 American Association for Cancer Research.

  10. Perspective: Recommendations for benchmarking pre-clinical studies of nanomedicines

    PubMed Central

    Dawidczyk, Charlene M.; Russell, Luisa M.; Searson, Peter C.

    2015-01-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small molecule drug therapy for cancer, and to achieve both therapeutic and diagnostic functions in the same platform. Pre-clinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of pre-clinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of pre-clinical trials and propose a protocol for benchmarking that we recommend be included in in vivo pre-clinical studies of drug delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies. PMID:26249177

  11. Exact solutions for the collaborative pickup and delivery problem.

    PubMed

    Gansterer, Margaretha; Hartl, Richard F; Salzmann, Philipp E H

    2018-01-01

    In this study we investigate the decision problem of a central authority in pickup and delivery carrier collaborations. Customer requests are to be redistributed among participants such that the total cost is minimized. We formulate the problem as a multi-depot traveling salesman problem with pickups and deliveries. We apply three well-established exact solution approaches and compare their performance in terms of computational time. To avoid unrealistic solutions with unevenly distributed workload, we extend the problem by introducing minimum workload constraints. Our computational results show that, while for the original problem Benders decomposition is the method of choice, for the newly formulated problem this method is clearly dominated by the proposed column generation approach. The obtained results can be used as benchmarks for decentralized mechanisms in collaborative pickup and delivery problems.

  12. The PNC-CAT insertion device beamline at the Advanced Photon Source

    NASA Astrophysics Data System (ADS)

    Heald, S. M.; Stern, E. A.; Brown, F. C.; Kim, K. H.; Barg, B.; Crozier, E. D.

    1996-09-01

    The PNC-CAT is a consortium of Pacific Northwest institutions formed to instrument a sector (number 20) at the Advanced Photon Source (APS). Research is planned in a variety of areas, with an emphasis on environmentally based problems. The insertion device beamline is based on the APS undulator A and will be optimized for producing microbeams as well as for applications requiring energy scanning capabilities. This paper describes the basic layout and some special features of the beamline. Two experimental stations are planned: one general purpose and one dedicated to MBE and surface science problems. Both tapered capillaries and Kirkpatrick-Baez optics will be used for producing microbeams, and a large optical bench is planned for the main station to allow for easy accommodation of new optics developments. Design calculations and initial capillary tests indicate that flux densities exceeding 10¹¹ photons/sec/mm² should be achievable. All major components are under construction or in procurement, and initial testing is planned for late 1996.

  13. Robotic Automation of In Vivo Two-Photon Targeted Whole-Cell Patch-Clamp Electrophysiology.

    PubMed

    Annecchino, Luca A; Morris, Alexander R; Copeland, Caroline S; Agabi, Oshiorenoya E; Chadderton, Paul; Schultz, Simon R

    2017-08-30

    Whole-cell patch-clamp electrophysiological recording is a powerful technique for studying cellular function. While in vivo patch-clamp recording has recently benefited from automation, it is normally performed "blind," meaning that throughput for sampling some genetically or morphologically defined cell types is unacceptably low. One solution to this problem is to use two-photon microscopy to target fluorescently labeled neurons. Combining this with robotic automation is difficult, however, as micropipette penetration induces tissue deformation, moving target cells from their initial location. Here we describe a platform for automated two-photon targeted patch-clamp recording, which solves this problem by making use of a closed loop visual servo algorithm. Our system keeps the target cell in focus while iteratively adjusting the pipette approach trajectory to compensate for tissue motion. We demonstrate platform validation with patch-clamp recordings from a variety of cells in the mouse neocortex and cerebellum. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Reconstructing high-dimensional two-photon entangled states via compressive sensing

    PubMed Central

    Tonolini, Francesco; Chan, Susan; Agnew, Megan; Lindsay, Alan; Leach, Jonathan

    2014-01-01

    Accurately establishing the state of large-scale quantum systems is an important tool in quantum information science; however, the large number of unknown parameters hinders the rapid characterisation of such states, and reconstruction procedures can become prohibitively time-consuming. Compressive sensing, a procedure for solving inverse problems by incorporating prior knowledge about the form of the solution, provides an attractive alternative to the problem of high-dimensional quantum state characterisation. Using a modified version of compressive sensing that incorporates the principles of singular value thresholding, we reconstruct the density matrix of a high-dimensional two-photon entangled system. The dimension of each photon is equal to d = 17, corresponding to a system of 83521 unknown real parameters. Accurate reconstruction is achieved with approximately 2500 measurements, only 3% of the total number of unknown parameters in the state. The algorithm we develop is fast, computationally inexpensive, and applicable to a wide range of quantum states, thus demonstrating compressive sensing as an effective technique for measuring the state of large-scale quantum systems. PMID:25306850
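
    The singular-value-thresholding ingredient mentioned here has a compact core step; a minimal sketch on a toy low-rank matrix (the dimension and noise level are illustrative, and the paper's full procedure involves more than this one step):

```python
import numpy as np

def svt_step(m, tau):
    """Singular value thresholding: soft-threshold the singular values by
    tau and rebuild the matrix, promoting low rank. This is the core
    proximal step of SVT-style reconstruction; the full algorithm in the
    paper also enforces data consistency and physicality constraints."""
    u, s, vh = np.linalg.svd(m, full_matrices=False)
    return (u * np.maximum(s - tau, 0.0)) @ vh

# Noisy rank-1 'density matrix' for a d=4 toy system (the paper works at
# d = 17 per photon; these numbers are illustrative only).
rng = np.random.default_rng(1)
psi = rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi) + 0.02 * rng.normal(size=(4, 4))
print(np.linalg.matrix_rank(svt_step(rho, tau=0.1)))  # low rank, typically 1
```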

  15. Intraparticle FRET for Enhanced Efficiency of Two-Photon Activated Photodynamic Therapy.

    PubMed

    Cao, Hongqian; Yang, Yang; Qi, Yanfei; Li, Yue; Sun, Bingbing; Li, Ying; Cui, Wei; Li, Juan; Li, Junbai

    2018-06-01

    Photodynamic therapy (PDT) still faces two main problems in cancer therapy. One is how to improve PDT efficiency against the hypoxic environment of tumors. The other is how to overcome the limits of short-wavelength light and so increase PDT treatment depth. In this work, an intraparticle fluorescence resonance energy transfer (FRET) platform is designed to address these problems together. The nanoparticles are doped with multiple components, such as catalase, two-photon dyes, and traditional photosensitizers, using a simple "one-pot" green method. On the one hand, catalase can catalyze intracellular H2O2 into O2 and promote PDT efficiency. On the other hand, photosensitizers can be excited indirectly by two-photon lasers through an intraparticle FRET mechanism, which results in deeper tissue penetration for PDT. These properties are verified through material-induced cytotoxicity in light and in dark and through an in vivo blood-vessel blocking experiment. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm

    NASA Astrophysics Data System (ADS)

    Selig, Marco; Enßlin, Torsten A.

    2015-02-01

    The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin² observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74
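
    As a rough guide to what such an inference problem looks like, the sketch below writes down a generic maximum a posteriori objective of the kind the abstract describes: a Poisson likelihood for the counts, a correlated prior on the diffuse flux, and a sparsity-promoting prior on the point-like flux. The notation (R for the instrument response, S for the diffuse covariance, β for the sparsity weight) is ours, not the paper's exact functional.

```latex
% Generic MAP objective for splitting a photon-count image y into a
% diffuse component s and a point-like component u (illustrative notation):
\hat{s},\hat{u} \;=\; \arg\max_{s,u}\;
  \underbrace{\sum_i \big[\, y_i \log \lambda_i - \lambda_i \,\big]}_{\text{Poisson log-likelihood}}
  \;-\; \underbrace{\tfrac{1}{2}\, s^{\top} S^{-1} s}_{\text{correlated diffuse prior}}
  \;-\; \underbrace{\beta \sum_i |u_i|}_{\text{sparse point-source prior}},
\qquad \lambda \;=\; R\,(s + u).
```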

  17. Distribution of electromagnetic field and group velocities in two-dimensional periodic systems with dissipative metallic components

    NASA Astrophysics Data System (ADS)

    Kuzmiak, Vladimir; Maradudin, Alexei A.

    1998-09-01

    We study the distribution of the electromagnetic field of the eigenmodes and corresponding group velocities associated with the photonic band structures of two-dimensional periodic systems consisting of an array of infinitely long parallel metallic rods whose intersections with a perpendicular plane form a simple square lattice. We consider both nondissipative and lossy metallic components characterized by a complex frequency-dependent dielectric function. Our analysis is based on the calculation of the complex photonic band structure obtained by using a modified plane-wave method that transforms the problem of solving Maxwell's equations into the problem of diagonalizing an equivalent non-Hermitian matrix. In order to investigate the nature and the symmetry properties of the eigenvectors, which significantly affect the optical properties of the photonic lattices, we evaluate the associated field distribution at the high symmetry points and along high symmetry directions in the two-dimensional first Brillouin zone of the periodic system. By considering both lossless and lossy metallic rods we study the effect of damping on the spatial distribution of the eigenvectors. Then we use the Hellmann-Feynman theorem and the eigenvectors and eigenfrequencies obtained from a photonic band-structure calculation based on a standard plane-wave approach applied to the nondissipative system to calculate the components of the group velocities associated with individual bands as functions of the wave vector in the first Brillouin zone. From the group velocity of each eigenmode the flow of energy is examined. The results obtained indicate a strong directional dependence of the group velocity, and confirm the experimental observation that a photonic crystal is a potentially efficient tool in controlling photon propagation.
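
    The essence of the plane-wave method, and of finite-difference group velocities, can be shown in one dimension. The sketch below is a simplified 1D stand-in for the paper's 2D metallic-rod calculation: it builds the η(G−G′)(k+G)(k+G′) eigenproblem for a lossless layered dielectric and differentiates the lowest band numerically to get v_g = dω/dk. All parameters are illustrative.

```python
# Toy 1D plane-wave band-structure sketch for a layered dielectric.
import numpy as np

a = 1.0                      # lattice period
N = 512                      # real-space samples per period
G_max = 10                   # plane waves: G = 2*pi*n/a, |n| <= G_max

x = np.linspace(0, a, N, endpoint=False)
eps = np.where(x < 0.4 * a, 12.0, 1.0)       # two lossless dielectric layers

# Fourier coefficients of 1/eps(x); eta(n) is the coefficient for G = 2*pi*n/a
inv_eps_hat = np.fft.fft(1.0 / eps) / N
def eta(n):
    return inv_eps_hat[n % N]

ns = np.arange(-G_max, G_max + 1)
def bands(k):
    """Normalized eigenfrequencies omega*a/(2*pi*c) at Bloch wavevector k."""
    kG = k + 2 * np.pi * ns / a
    M = np.array([[eta(m - n) * kG[i] * kG[j]
                   for j, n in enumerate(ns)] for i, m in enumerate(ns)])
    w2 = np.sort(np.linalg.eigvals(M).real)   # (omega/c)^2; real up to roundoff
    return np.sqrt(np.abs(w2)) * a / (2 * np.pi)

# group velocity of the lowest band by a centered finite difference in k
k0, dk = 0.3 * np.pi / a, 1e-4
vg = (bands(k0 + dk)[0] - bands(k0 - dk)[0]) / (2 * dk) * 2 * np.pi  # units of c (a = 1)
print(f"lowest band at k0: {bands(k0)[0]:.4f} (omega a / 2 pi c), v_g ~ {vg:.3f} c")
```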

  18. New analytical solutions to the two-phase water faucet problem

    DOE PAGES

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-06-17

    Here, the one-dimensional water faucet problem is one of the classical benchmark problems originally proposed by Ransom to study the two-fluid two-phase flow model. With certain simplifications, such as a massless gas phase and no wall or interfacial friction, analytical solutions had been previously obtained for the transient liquid velocity and void fraction distribution. The water faucet problem and its analytical solutions have been widely used for the purposes of code assessment, benchmarking and numerical verification. In our previous study, Ransom's solutions were used for the mesh convergence study of a high-resolution spatial discretization scheme. It was found that, at the steady state, the anticipated second-order spatial accuracy could not be achieved when compared to the existing Ransom's analytical solutions. A further investigation showed that the existing analytical solutions do not actually satisfy the commonly used two-fluid single-pressure two-phase flow equations. In this work, we present a new set of analytical solutions of the water faucet problem at the steady state, considering the effect of the gas phase density on the pressure distribution. This new set of analytical solutions is used for mesh convergence studies, from which the anticipated second order of accuracy is achieved for the second-order spatial discretization scheme. In addition, extended Ransom's transient solutions for the gas phase velocity and pressure are derived, with the assumption of decoupled liquid and gas pressures. Numerical verifications of the extended Ransom's solutions are also presented.
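
    For reference, Ransom's classical transient solution (under the original massless-gas, frictionless assumptions discussed above) is short enough to state in code. The sketch below uses the commonly cited setup (12 m column, 10 m/s inlet velocity, 0.2 inlet void fraction) and is meant for verification plots, not as the paper's new steady-state solutions.

```python
# Ransom's transient water-faucet solution: below the acceleration front
# the liquid is in free fall from the inlet; above it, the initial column
# accelerates uniformly. Void fraction follows from liquid continuity.
import numpy as np

g = 9.81          # gravity, m/s^2
v0 = 10.0         # liquid inlet velocity, m/s
alpha0 = 0.2      # inlet void fraction

def faucet(z, t):
    """Liquid velocity and void fraction at depth z (m) and time t (s)."""
    z = np.asarray(z, dtype=float)
    front = v0 * t + 0.5 * g * t**2          # position of the acceleration front
    v_acc = np.sqrt(v0**2 + 2.0 * g * z)     # free-fall profile below the front
    v = np.where(z <= front, v_acc, v0 + g * t)
    alpha = np.where(z <= front, 1.0 - (1.0 - alpha0) * v0 / v_acc, alpha0)
    return v, alpha

z = np.linspace(0.0, 12.0, 7)
v, alpha = faucet(z, t=0.5)
for zi, vi, ai in zip(z, v, alpha):
    print(f"z = {zi:5.1f} m   v = {vi:6.3f} m/s   alpha = {ai:.4f}")
```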

  19. Enhancing Solar Cell Efficiency Using Photon Upconversion Materials

    PubMed Central

    Shang, Yunfei; Hao, Shuwei; Yang, Chunhui; Chen, Guanying

    2015-01-01

    Photovoltaic cells are able to convert sunlight into electricity, tapping the most abundant and cleanest energy source available to cover our energy needs. However, the efficiency of current photovoltaics is significantly impeded by the transmission loss of sub-band-gap photons. Photon upconversion is a promising route to circumvent this problem by converting these transmitted sub-band-gap photons into above-band-gap light, where solar cells typically have high quantum efficiency. Here, we summarize recent progress on various types of efficient upconversion materials as well as their outstanding uses in a series of solar cells, including silicon solar cells (crystalline and amorphous), gallium arsenide (GaAs) solar cells, dye-sensitized solar cells, and other types of solar cells. The challenges and prospects of upconversion materials for photovoltaic applications are also discussed. PMID:28347095

  20. Photon-efficient super-resolution laser radar

    NASA Astrophysics Data System (ADS)

    Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.

    2017-08-01

    The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.
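
    A toy version of the greedy branch of such a framework is sketched below: orthogonal matching pursuit applied to a sparse scene blurred by a known Gaussian kernel. Everything here (kernel width, noise level, helper names) is illustrative, not the authors' algorithm.

```python
# Greedy sparse pursuit (orthogonal matching pursuit) for deblurring a
# sparse profile observed through a known forward kernel.
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 3                          # signal length, number of true spikes

# forward model: convolution with a Gaussian blur kernel, as a matrix
t = np.arange(n)
A = np.stack([np.exp(-0.5 * ((t - j) / 4.0) ** 2) for j in range(n)], axis=1)

x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1, 3, size=k)
y = A @ x_true + 0.01 * rng.normal(size=n)   # blurred, noisy observation

def omp(A, y, k):
    """Pick k columns greedily, re-fitting by least squares each round."""
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, y, k)
print("true spikes:", np.flatnonzero(x_true))
print("recovered  :", np.flatnonzero(x_hat))
```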

  1. Hybrid Toffoli gate on photons and quantum spins

    PubMed Central

    Luo, Ming-Xing; Ma, Song-Ya; Chen, Xiu-Bo; Wang, Xiaojun

    2015-01-01

    Quantum computation offers potential advantages in solving a number of interesting and difficult problems. Several controlled logic gates, the elemental building blocks of a quantum computer, have been realized with various physical systems. A general technique was recently proposed that significantly reduces the realization complexity of multiple-control logic gates by harnessing multi-level information carriers. We present implementations of a key quantum circuit: the three-qubit Toffoli gate. By exploring the optical selection rules of one-sided optical microcavities, a Toffoli gate may be realized on all combinations of photon and quantum spins in the QD-cavity. The three general controlled-NOT gates are involved using an auxiliary photon with two degrees of freedom. Our results show that photons and quantum spins may be used alternatively in quantum information processing. PMID:26568078
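
    The target unitary itself is easy to write down and sanity-check. The sketch below builds the 8×8 CCNOT permutation matrix and prints its truth table; this is a numerical aside, unrelated to the cavity-QED implementation in the paper.

```python
# Build the 3-qubit Toffoli (CCNOT) unitary from its action: flip the
# target qubit iff both control qubits are 1. Verify unitarity and print
# the truth table over the computational basis |abc> (a = MSB).
import numpy as np

T = np.eye(8)
T[[6, 7]] = T[[7, 6]]            # swap |110> and |111>

assert np.allclose(T @ T.conj().T, np.eye(8))   # unitary (a permutation matrix)
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            ket = np.zeros(8)
            ket[(a << 2) | (b << 1) | c] = 1.0
            out = int(np.flatnonzero(T @ ket)[0])
            print(f"|{a}{b}{c}> -> |{out >> 2 & 1}{out >> 1 & 1}{out & 1}>")
```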

  3. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  4. Pairwise Measures of Causal Direction in the Epidemiology of Sleep Problems and Depression

    PubMed Central

    Rosenström, Tom; Jokela, Markus; Puttonen, Sampsa; Hintsanen, Mirka; Pulkki-Råback, Laura; Viikari, Jorma S.; Raitakari, Olli T.; Keltikangas-Järvinen, Liisa

    2012-01-01

    Depressive mood is often preceded by sleep problems, suggesting that they increase the risk of depression. Sleep problems can also reflect a prodromal symptom of depression, so temporal precedence alone is insufficient to confirm causality. The authors applied recently introduced statistical causal-discovery algorithms that can estimate causality from cross-sectional samples in order to infer the direction of causality between the two sets of symptoms from a novel perspective. Two common-population samples were used: one from the Young Finns study (690 men and 997 women, average age 37.7 years, range 30–45), and another from the Wisconsin Longitudinal study (3101 men and 3539 women, average age 53.1 years, range 52–55). These included three depression questionnaires (two in the Young Finns data) and two sleep problem questionnaires. Three different causality estimates were constructed for each data set, tested in benchmark data with a (practically) known causality, and tested for assumption violations using simulated data. The causality algorithms performed well in the benchmark data and simulations, and a prediction was drawn for future empirical studies to confirm: for minor depression/dysphoria, sleep problems cause significantly more dysphoria than dysphoria causes sleep problems. The situation may change as depression becomes more severe, or as more severe levels of symptoms are evaluated; also, artefacts due to severe depression being less well represented in the population data than minor depression may interfere with the estimation for depression scales that emphasize severe symptoms. The findings are consistent with other emerging epidemiological and biological evidence. PMID:23226400
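
    For the curious reader, a hedged sketch of one pairwise direction statistic usable for such cross-sectional estimation follows, based on the nonlinear-correlation likelihood-ratio idea of Hyvärinen and Smith; whether this is the exact estimator used in the paper is not established here, and the statistic assumes non-Gaussian data.

```python
# Pairwise causal-direction statistic: for standardized non-Gaussian x, y,
# R = rho * (E[x tanh(y)] - E[tanh(x) y]); R > 0 suggests x -> y.
import numpy as np

def pairwise_direction(x, y):
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    rho = np.mean(x * y)
    return rho * (np.mean(x * np.tanh(y)) - np.mean(np.tanh(x) * y))

rng = np.random.default_rng(2)
n = 100_000
x = rng.laplace(size=n)              # non-Gaussian cause
y = 0.7 * x + rng.laplace(size=n)    # linear effect plus non-Gaussian noise
print("R(x,y) =", pairwise_direction(x, y))   # expected positive: x -> y
print("R(y,x) =", pairwise_direction(y, x))   # expected negative
```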

  5. Photonic ADC: overcoming the bottleneck of electronic jitter.

    PubMed

    Khilo, Anatol; Spector, Steven J; Grein, Matthew E; Nejadmalayeri, Amir H; Holzwarth, Charles W; Sander, Michelle Y; Dahlem, Marcus S; Peng, Michael Y; Geis, Michael W; DiLello, Nicole A; Yoon, Jung U; Motamedi, Ali; Orcutt, Jason S; Wang, Jade P; Sorace-Agaskar, Cheryl M; Popović, Miloš A; Sun, Jie; Zhou, Gui-Rong; Byun, Hyunil; Chen, Jian; Hoyt, Judy L; Smith, Henry I; Ram, Rajeev J; Perrott, Michael; Lyszczarz, Theodore M; Ippen, Erich P; Kärtner, Franz X

    2012-02-13

    Accurate conversion of wideband multi-GHz analog signals into the digital domain has long been a target of analog-to-digital converter (ADC) developers, driven by applications in radar systems, software radio, medical imaging, and communication systems. Aperture jitter has been a major bottleneck on the way towards higher speeds and better accuracy. Photonic ADCs, which perform sampling using ultra-stable optical pulse trains generated by mode-locked lasers, have been investigated for many years as a promising approach to overcome the jitter problem and bring ADC performance to new levels. This work demonstrates that the photonic approach can deliver on its promise by digitizing a 41 GHz signal with 7.0 effective bits using a photonic ADC built from discrete components. This accuracy corresponds to a timing jitter of 15 fs - a 4-5 times improvement over the performance of the best electronic ADCs which exist today. On the way towards an integrated photonic ADC, a silicon photonic chip with core photonic components was fabricated and used to digitize a 10 GHz signal with 3.5 effective bits. In these experiments, two wavelength channels were implemented, providing the overall sampling rate of 2.1 GSa/s. To show that photonic ADCs with larger channel counts are possible, a dual 20-channel silicon filter bank has been demonstrated.
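
    As a consistency check on the quoted numbers: the standard aperture-jitter bound is SNR = −20 log₁₀(2π f_in t_j), with ENOB = (SNR − 1.76)/6.02. At 41 GHz and 15 fs this gives roughly 7.7 bits, so the reported 7.0 effective bits sits just below the jitter-limited ceiling, with other noise sources accounting for the gap. A quick sketch:

```python
# Jitter-limited SNR and effective number of bits (ENOB) for a sampler.
import math

def jitter_enob(f_in_hz, jitter_s):
    snr_db = -20.0 * math.log10(2.0 * math.pi * f_in_hz * jitter_s)
    return snr_db, (snr_db - 1.76) / 6.02

for f, tj in [(41e9, 15e-15), (10e9, 15e-15)]:
    snr, enob = jitter_enob(f, tj)
    print(f"f_in = {f/1e9:.0f} GHz, t_j = {tj*1e15:.0f} fs -> "
          f"SNR = {snr:.1f} dB, ENOB ~ {enob:.1f} bits")
```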

  6. Information theoretic approach for assessing image fidelity in photon-counting arrays.

    PubMed

    Narravula, Srikanth R; Hayat, Majeed M; Javidi, Bahram

    2010-02-01

    The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source-image's entropy, yielding a fidelity metric that is between zero and unity, which respectively corresponds to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases with the reduction in photon-number uncertainty. As an application to the theory, an image-classification problem is considered showing a congruous relationship between the fidelity metric and classifier's performance.
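
    A histogram-based sketch of such a normalized mutual information metric is shown below: I(source; photon-counted)/H(source), which is 0 for total information loss and 1 for full preservation. This is an illustrative empirical estimator only; the paper derives the metric in a Markov random field setting rather than from histograms.

```python
# Normalized mutual information between a source image and a Poisson
# photon-counted observation of it, estimated from a joint histogram.
import numpy as np

def nmi_fidelity(src, pc, bins=16):
    joint, _, _ = np.histogram2d(src.ravel(), pc.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    mi = np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    return mi / hx

rng = np.random.default_rng(3)
src = rng.random((64, 64))                      # toy source image in [0, 1]
pc = rng.poisson(5.0 * src).astype(float)       # photon-counted observation
print(f"fidelity ~ {nmi_fidelity(src, pc):.3f}")
```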

  7. Cavity Born-Oppenheimer Approximation for Correlated Electron-Nuclear-Photon Systems.

    PubMed

    Flick, Johannes; Appel, Heiko; Ruggenthaler, Michael; Rubio, Angel

    2017-04-11

    In this work, we illustrate the recently introduced concept of the cavity Born-Oppenheimer approximation [Flick et al., PNAS 2017, 10.1073/pnas.1615509114] for correlated electron-nuclear-photon problems in detail. We demonstrate how an expansion in terms of conditional electronic and photon-nuclear wave functions accurately describes eigenstates of strongly correlated light-matter systems. For a GaAs quantum ring model in resonance with a photon mode we highlight how the ground-state electronic potential-energy surface changes the usual harmonic potential of the free photon mode to a dressed mode with a double-well structure. This change is accompanied by a splitting of the electronic ground-state density. For a model where the photon mode is in resonance with a vibrational transition, we observe in the excited-state electronic potential-energy surface a splitting from a single minimum to a double minimum. Furthermore, for a time-dependent setup, we show how the dynamics in correlated light-matter systems can be understood in terms of population transfer between potential energy surfaces. This work at the interface of quantum chemistry and quantum optics paves the way for the full ab initio description of matter-photon systems.

  8. Frequency-tuned microwave photon counter based on a superconductive quantum interferometer

    NASA Astrophysics Data System (ADS)

    Shnyrkov, V. I.; Yangcao, Wu; Soroka, A. A.; Turutanov, O. G.; Lyakhno, V. Yu.

    2018-03-01

    Various types of single-photon counters operating in infrared, ultraviolet, and optical wavelength ranges are successfully used to study electromagnetic fields, analyze radiation sources, and solve problems in quantum informatics. However, their operating principles become ineffective at millimeter band, S-band, and ultra-high frequency bands of wavelengths due to the decrease in quantum energy by 4-5 orders of magnitude. Josephson circuits with discrete Hamiltonians and qubits are a good foundation for the construction of single-photon counters at these frequencies. This paper presents a frequency-tuned microwave photon counter based on a single-junction superconducting quantum interferometer and flux qutrit. The control pulse converts the interferometer into a two-level system for resonance absorption of photons. Decay of the photon-induced excited state changes the magnetic flux in the interferometer, which is measured by a SQUID magnetometer. Schemes for recording the magnetic flux using a DC SQUID or ideal parametric detector, based on a qutrit with high-frequency excitation, are discussed. It is shown that the counter consisting of an interferometer with a Josephson junction and a parametric detector demonstrates high performance and is capable of detecting single photons in a microwave band.

  9. Ada issues in implementing ART-Ada

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel

    1990-01-01

    Due to the Ada mandate of a number of government agencies, interest in deploying expert systems in Ada has increased. Recently, several Ada-based expert system tools have been developed. According to a recent benchmark report, these tools do not perform as well as similar tools written in C. While poorly implemented Ada compilers contribute to the poor benchmark result, some fundamental problems of the Ada language itself have been uncovered. Here, the authors describe Ada language issues encountered during the deployment of ART-Ada, an expert system tool for Ada deployment. ART-Ada is being used to implement several prototype expert systems for the Space Station Freedom and the U.S. Air Force.

  10. Adaptive Neuron Model: An architecture for the rapid learning of nonlinear topological transformations

    NASA Technical Reports Server (NTRS)

    Tawel, Raoul (Inventor)

    1994-01-01

    A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three degree of freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.

  11. Research on IoT-based water environment benchmark data acquisition management

    NASA Astrophysics Data System (ADS)

    Yan, Bai; Xue, Bai; Ling, Lin; Jin, Huang; Ren, Liu

    2017-11-01

    Over more than 30 years of reform and opening up, China's economy has developed at full speed. However, this rapid growth is constrained by resource exhaustion and environmental pollution, and green, sustainable development has become a common goal for all humanity. As part of environmental resources, water resources face problems such as pollution and shortage that hinder sustainable development. The top priority in water resources protection and research is to manage the basic data on water resources, which form the cornerstone and scientific foundation of water environment management. By studying the aquatic organisms in the Yangtze River Basin, the Yellow River Basin, the Liaohe River Basin and the 5 lake areas, this paper puts forward an IoT-based water environment benchmark data management platform that transforms measured parameters into electric signals by means of chemical probe identification and then sends the benchmark test data of the water environment to node servers. The management platform will provide data and theoretical support for environmental chemistry, toxicology, ecology, etc., promote research in the environmental sciences, lay a solid foundation for comprehensive and systematic research on China's regional environmental characteristics, biotoxicity effects and environmental criteria, and provide objective data for compiling water environment benchmark standards.

  12. Experimental instruction in photonics for high school students: approaches to managing problems faced

    NASA Astrophysics Data System (ADS)

    Choong, Zhengyang

    2017-08-01

    Student research projects are increasingly common at the K-12 level. However, students often face difficulties in the course of their school research projects such as setting realistic timelines and expectations, handling problems stemming from a lack of self-confidence, as well as being sufficiently disciplined for sustained communication and experimentation. In this work, we explore manifestations of these problems in the context of a photonics project, characterising the spectrum of the breakdown flash from Silicon Avalanche Photodiodes. We report on the process of planning and building the setup, data collection, analysis and troubleshooting, as well as the technical and human problems at each step. Approaches that were found to be helpful in managing the aforementioned problems are discussed, including an attention to detail during experimental work, as well as communicating in a forthcoming manner. The former allowed for clearer planning and the setting of quantifiable proximal goals; the latter helped in motivating discipline, and also helped in the understanding of research as an iterative learning process without a clear definition of success or failure.

  13. Efficient heralding of O-band passively spatial-multiplexed photons for noise-tolerant quantum key distribution.

    PubMed

    Liu, Mao Tong; Lim, Han Chuen

    2014-09-22

    When implementing O-band quantum key distribution on optical fiber transmission lines carrying C-band data traffic, noise photons that arise from spontaneous Raman scattering or insufficient filtering of the classical data channels could cause the quantum bit-error rate to exceed the security threshold. In this case, a photon heralding scheme may be used to reject the uncorrelated noise photons in order to restore the quantum bit-error rate to a low level. However, the secure key rate would suffer unless one uses a heralded photon source with sufficiently high heralding rate and heralding efficiency. In this work we demonstrate a heralded photon source that has a heralding efficiency that is as high as 74.5%. One disadvantage of a typical heralded photon source is that the long deadtime of the heralding detector results in a significant drop in the heralding rate. To counter this problem, we propose a passively spatial-multiplexed configuration at the heralding arm. Using two heralding detectors in this configuration, we obtain an increase in the heralding rate by 37% and a corresponding increase in the heralded photon detection rate by 16%. We transmit the O-band photons over 10 km of noisy optical fiber to observe the relation between quantum bit-error rate and noise-degraded second-order correlation function of the transmitted photons. The effects of afterpulsing when we shorten the deadtime of the heralding detectors are also observed and discussed.
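
    To see why splitting heralds across two detectors helps, consider a toy non-paralyzable dead-time model: a detector presented with rate r registers r/(1 + r·τ) counts per second. Halving the rate each detector sees loses less flux to dead time. The numbers below are placeholders, not the experiment's actual rates.

```python
# Toy dead-time model for passively spatial-multiplexed heralding detectors.
def registered_rate(true_rate_hz, deadtime_s, n_detectors=1):
    per_det = true_rate_hz / n_detectors                # 50/50 splitting
    return n_detectors * per_det / (1.0 + per_det * deadtime_s)

R, tau = 2e6, 10e-6          # 2 MHz herald flux, 10 us detector dead time
one = registered_rate(R, tau, 1)
two = registered_rate(R, tau, 2)
print(f"1 detector : {one/1e3:8.1f} kcps")
print(f"2 detectors: {two/1e3:8.1f} kcps  (+{100*(two/one - 1):.0f}%)")
```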

  14. An Examination of Higher-Order Treatments of Boundary Conditions in Split-Step Fourier Parabolic Equation Models

    DTIC Science & Technology

    2015-06-01

    The method provides improved agreement with a benchmark solution at longer ranges. Subject terms: parabolic equation, Monterey-Miami Parabolic Equation. The parabolic-equation (PE) treatment of the elliptic Helmholtz wave equation dates back to the mid-1940s, when Leontovich and Fock introduced the PE method for the problem of radio-wave propagation… The Monterey-Miami Parabolic Equation (MMPE) model was developed in the mid-1990s and since then has…

  15. A formative evaluation of CU-SeeMe

    NASA Astrophysics Data System (ADS)

    Bibeau, Michael

    1995-02-01

    CU-SeeMe is a video conferencing software package that was designed and programmed at Cornell University. The program works with the TCP/IP network protocol and allows two or more parties to conduct a real-time video conference with full audio support. In this paper we evaluate CU-SeeMe through the process of Formative Evaluation. We first perform a Critical Review of the software using a subset of the Smith and Mosier Guidelines for Human-Computer Interaction. Next, we empirically review the software interface through a series of benchmark tests that are derived directly from a set of scenarios. The scenarios attempt to model real-world situations that might be encountered by an individual in the target user class. Designing benchmark tasks becomes a natural and straightforward process when they are derived from the scenario set. Empirical measures are taken for each task, including completion times and error counts. These measures are accompanied by critical incident analysis [2, 7, 13], which serves to identify problems with the interface and the cognitive roots of those problems. The critical incidents reported by participants are accompanied by explanations of what caused the problem and why; this helps in the process of formulating solutions for observed usability problems. All the testing results are combined in the Appendix in an illustrated partial redesign of the CU-SeeMe interface.

  16. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations, as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of codes. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10^-6. The small amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to travel from one end of the nozzle to the other).

  17. A set-covering based heuristic algorithm for the periodic vehicle routing problem.

    PubMed

    Cacchiani, V; Hemmelmayr, V C; Tricoire, F

    2014-01-30

    We present a hybrid optimization algorithm for mixed-integer linear programming, embedding both heuristic and exact components. In order to validate it we use the periodic vehicle routing problem (PVRP) as a case study. This problem consists of determining a set of minimum cost routes for each day of a given planning horizon, with the constraints that each customer must be visited a required number of times (chosen among a set of valid day combinations), must receive every time the required quantity of product, and that the number of routes per day (each respecting the capacity of the vehicle) does not exceed the total number of available vehicles. This is a generalization of the well-known vehicle routing problem (VRP). Our algorithm is based on the linear programming (LP) relaxation of a set-covering-like integer linear programming formulation of the problem, with additional constraints. The LP-relaxation is solved by column generation, where columns are generated heuristically by an iterated local search algorithm. The whole solution method takes advantage of the LP-solution and applies techniques of fixing and releasing of the columns as a local search, making use of a tabu list to avoid cycling. We show the results of the proposed algorithm on benchmark instances from the literature and compare them to the state-of-the-art algorithms, showing the effectiveness of our approach in producing good quality solutions. In addition, we report the results on realistic instances of the PVRP introduced in Pacheco et al. (2011)  [24] and on benchmark instances of the periodic traveling salesman problem (PTSP), showing the efficacy of the proposed algorithm on these as well. Finally, we report the new best known solutions found for all the tested problems.
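
    To make the set-covering backbone concrete, the toy sketch below solves the LP relaxation of a four-customer, four-route covering instance with scipy; in the paper, the route columns are generated heuristically by iterated local search rather than enumerated up front.

```python
# LP relaxation of a tiny set-covering instance (toy data, scipy solver).
import numpy as np
from scipy.optimize import linprog

# rows = customers, columns = candidate routes; a[i, j] = 1 if route j
# visits customer i; c[j] = route cost
a = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1]])
c = np.array([4.0, 3.0, 5.0, 2.0])

# minimize c@x  s.t.  a@x >= 1 (every customer covered), 0 <= x <= 1
res = linprog(c, A_ub=-a, b_ub=-np.ones(4), bounds=[(0, 1)] * 4)
print("LP value:", res.fun, " route usage:", res.x.round(3))
```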

  19. Visible-to-visible four-photon ultrahigh resolution microscopic imaging with 730-nm diode laser excited nanocrystals.

    PubMed

    Wang, Baoju; Zhan, Qiuqiang; Zhao, Yuxiang; Wu, Ruitao; Liu, Jing; He, Sailing

    2016-01-25

    Further development of multiphoton microscopic imaging is confronted with a number of limitations, including high cost, high complexity and relatively low spatial resolution due to the long excitation wavelength. To overcome these problems, for the first time, we propose visible-to-visible four-photon ultrahigh resolution microscopic imaging using a common cost-effective 730-nm laser diode to excite the prepared Nd(3+)-sensitized upconversion nanoparticles (Nd(3+)-UCNPs). An ordinary multiphoton scanning microscope system was built using a visible CW diode laser, and a lateral imaging resolution as high as 161 nm was achieved via the four-photon upconversion process. The demonstrated large saturation excitation power of Nd(3+)-UCNPs makes the approach more practical and facilitates four-photon imaging in applications. A sample with fine structure was imaged to demonstrate the advantages of visible-to-visible four-photon ultrahigh resolution microscopic imaging with 730-nm diode laser excited nanocrystals. Given the unique properties of UCNPs, the proposed visible-to-visible four-photon imaging is highly promising and attractive in the field of multiphoton imaging.

  20. Transparency of an instantaneously created electron-positron-photon plasma

    NASA Astrophysics Data System (ADS)

    Bégué, D.; Vereshchagin, G. V.

    2014-03-01

    The problem of the expansion of a relativistic plasma generated when a large amount of energy is released in a small volume has been considered by many authors. We use the analytical solution of Bisnovatyi-Kogan and Murzina for the spherically symmetric relativistic expansion. The light curves and the spectra from transparency of an electron-positron-photon plasma are obtained. We compare our results with the work of Goodman.

  1. Coherent beam control through inhomogeneous media in multi-photon microscopy

    NASA Astrophysics Data System (ADS)

    Paudel, Hari Prasad

    Multi-photon fluorescence microscopy has become a primary tool for high-resolution deep tissue imaging because of its sensitivity to ballistic excitation photons in comparison to scattered excitation photons. The imaging depth of multi-photon microscopes in tissue imaging is limited primarily by background fluorescence that is generated by scattered light due to the random fluctuations in refractive index inside the media, and by reduced intensity in the ballistic focal volume due to aberrations within the tissue and at its interface. We built two multi-photon adaptive optics (AO) correction systems, one for combating scattering and aberration problems, and another for compensating interface aberrations. For scattering correction a MEMS segmented deformable mirror (SDM) was inserted at a plane conjugate to the objective back-pupil plane. The SDM can pre-compensate for light scattering by coherent combination of the scattered light to make an apparent focus even at depths where negligible ballistic light remains (i.e. the ballistic limit). This problem was approached by investigating the spatial and temporal focusing characteristics of a broad-band light source through strongly scattering media. A new model was developed for coherent focus enhancement through or inside strongly scattering media based on the initial speckle contrast. A layer of fluorescent beads under a mouse skull was imaged using an iterative coherent beam control method in the prototype two-photon microscope to demonstrate the technique. We also adapted an AO correction system to an existing three-photon microscope in a collaborator's lab at Cornell University. In the second AO correction approach a continuous deformable mirror (CDM) is placed at a plane conjugate to the plane of an interface aberration. We demonstrated that this "Conjugate AO" technique yields a large field-of-view (FOV) advantage in comparison to Pupil AO. Further, we showed that the extended FOV in conjugate AO is maintained over a relatively large axial misalignment of the conjugate planes of the CDM and the aberrating interface. This dissertation advances the field of microscopy by providing new models and techniques for imaging deeply within strongly scattering tissue, and by describing new adaptive optics approaches to extending the imaging FOV in the presence of sample aberrations.

  2. Modeling of photon migration in the human lung using a finite volume solver

    NASA Astrophysics Data System (ADS)

    Sikorski, Zbigniew; Furmanczyk, Michal; Przekwas, Andrzej J.

    2006-02-01

    The application of the frequency domain and steady-state diffusive optical spectroscopy (DOS) and steady-state near infrared spectroscopy (NIRS) to the diagnosis of human lung injury challenges many elements of these techniques. These include the DOS/NIRS instrument performance and accurate models of light transport in heterogeneous thorax tissue. The thorax tissue not only consists of different media (e.g. chest wall with ribs, lungs) but its optical properties also vary with time due to respiration and changes in thorax geometry with contusion (e.g. pneumothorax or hemothorax). This paper presents a finite volume solver developed to model photon migration in the diffusion approximation in heterogeneous complex 3D tissues. The code applies boundary conditions that account for Fresnel reflections. We propose an effective diffusion coefficient for the void volumes (pneumothorax) based on the assumption of the Lambertian diffusion of photons entering the pleural cavity and accounting for the local pleural cavity thickness. The code has been validated using the MCML Monte Carlo code as a benchmark. The code environment enables a semi-automatic preparation of 3D computational geometry from medical images and its rapid automatic meshing. We present the application of the code to analysis/optimization of the hybrid DOS/NIRS/ultrasound technique in which ultrasound provides data on the localization of thorax tissue boundaries. The code's efficiency (a complex 3D computation takes 1 second) enables its use to quantitatively relate the detected light signal to the absorption and reduced scattering coefficients that are indicators of the pulmonary physiologic state (hemoglobin concentration and oxygenation).
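
    A compact 1D finite-volume caricature of such a diffusion solver is sketched below, with harmonic-mean face diffusivities and simple Dirichlet boundaries standing in for the paper's 3D geometry and Fresnel boundary conditions; the optical coefficients are made up for illustration.

```python
# 1D finite-volume discretization of the photon diffusion equation
#   -d/dx( D dphi/dx ) + mu_a * phi = S,  with D = 1 / (3 (mu_a + mu_s')).
import numpy as np

L, n = 4.0, 200                     # slab thickness (cm), control volumes
dx = L / n
mu_a = np.full(n, 0.05)             # absorption (1/cm)
mu_s_prime = np.full(n, 10.0)       # reduced scattering (1/cm)
mu_s_prime[n // 2:] = 2.0           # a second "tissue" layer
D = 1.0 / (3.0 * (mu_a + mu_s_prime))
S = np.zeros(n)
S[5] = 1.0 / dx                     # narrow source near the surface

# assemble the tridiagonal FV system A @ phi = S
A = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            Dface = 2 * D[i] * D[j] / (D[i] + D[j])   # harmonic mean at the face
            A[i, i] += Dface / dx**2
            A[i, j] -= Dface / dx**2
        else:
            A[i, i] += 2 * D[i] / dx**2               # phi = 0 at the boundary
    A[i, i] += mu_a[i]

phi = np.linalg.solve(A, S)
print("fluence at depth 1, 2, 3 cm:", phi[[round(1/dx), round(2/dx), round(3/dx)]])
```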

  3. Experimental high gradient testing of a 17.1 GHz photonic band-gap accelerator structure

    DOE PAGES

    Munroe, Brian J.; Zhang, JieXi; Xu, Haoran; ...

    2016-03-29

    In this paper, we report the design, fabrication, and high gradient testing of a 17.1 GHz photonic band-gap (PBG) accelerator structure. Photonic band-gap (PBG) structures are promising candidates for electron accelerators capable of high-gradient operation because they have the inherent damping of high order modes required to avoid beam breakup instabilities. The 17.1 GHz PBG structure tested was a single cell structure composed of a triangular array of round copper rods of radius 1.45 mm spaced by 8.05 mm. The test assembly consisted of the test PBG cell located between conventional (pillbox) input and output cells, with input power of up to 4 MW from a klystron supplied via a TM01 mode launcher. Breakdown at high gradient was observed by diagnostics including reflected power, downstream and upstream current monitors and visible light emission. The testing procedure was first benchmarked with a conventional disc-loaded waveguide structure, which reached a gradient of 87 MV/m at a breakdown probability of 1.19 × 10⁻¹ per pulse per meter. The PBG structure was tested with 100 ns pulses at gradient levels of less than 90 MV/m in order to limit the surface temperature rise to 120 K. The PBG structure reached up to 89 MV/m at a breakdown probability of 1.09 × 10⁻¹ per pulse per meter. These test results show that a PBG structure can simultaneously operate at high gradients and low breakdown probability, while also providing wakefield damping.

  4. Simulation of the Mg(Ar) ionization chamber currents by different Monte Carlo codes in benchmark gamma fields

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Chun; Liu, Yuan-Hao; Nievaart, Sander; Chen, Yen-Fu; Wu, Shu-Wei; Chou, Wen-Tsae; Jiang, Shiang-Huei

    2011-10-01

    High energy photon (over 10 MeV) and neutron beams adopted in radiobiology and radiotherapy always produce mixed neutron/gamma-ray fields. Mg(Ar) ionization chambers are commonly applied to determine the gamma-ray dose because of their neutron-insensitive characteristics. Nowadays, many perturbation corrections for accurate dose estimation and many treatment planning systems are based on the Monte Carlo technique. The Monte Carlo codes EGSnrc, FLUKA, GEANT4, MCNP5, and MCNPX were used to evaluate energy-dependent response functions of the Exradin M2 Mg(Ar) ionization chamber to a parallel photon beam with mono-energies from 20 keV to 20 MeV. For the sake of validation, measurements were carefully performed in well-defined (a) primary M-100 X-ray calibration field, (b) primary 60Co calibration beam, (c) 6-MV, and (d) 10-MV therapeutic beams in hospital. In the energy region below 100 keV, MCNP5 and MCNPX both had lower responses than the other codes. For energies above 1 MeV, the MCNP ITS-mode greatly resembled the other three codes and the differences were within 5%. Compared to the measured currents, MCNP5 and MCNPX using the ITS-mode showed excellent agreement for the 60Co and 10-MV beams, but in the X-ray energy region the deviations reached 17%. This work gives better insight into the performance of different Monte Carlo codes in photon-electron transport calculation. For mixed-field dosimetry applications such as BNCT, MCNP with the ITS-mode is recognized as the most suitable tool by this work.

  5. Efficient modeling of photonic crystals with local Hermite polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boucher, C. R.; Li, Zehao; Albrecht, J. D.

    2014-04-21

    Developing compact algorithms for accurate electrodynamic calculations with minimal computational cost is an active area of research given the increasing complexity in the design of electromagnetic composite structures such as photonic crystals, metamaterials, optical interconnects, and on-chip routing. We show that electric and magnetic (EM) fields can be calculated using scalar Hermite interpolation polynomials as the numerical basis functions without having to invoke edge-based vector finite elements to suppress spurious solutions or to satisfy boundary conditions. This approach offers several fundamental advantages as evidenced through band structure solutions for periodic systems and through waveguide analysis. Compared with reciprocal space (plane-wave expansion) methods for periodic systems, advantages are shown in computational costs, the ability to capture spatial complexity in the dielectric distributions, the demonstration of numerical convergence with scaling, and variational eigenfunctions free of numerical artifacts that arise from mixed-order real space basis sets or the inherent aberrations from transforming reciprocal space solutions of finite expansions. The photonic band structure of a simple crystal is used as a benchmark comparison and the ability to capture the effects of spatially complex dielectric distributions is treated using a complex pattern with highly irregular features that would stress spatial transform limits. This general method is applicable to a broad class of physical systems, e.g., to semiconducting lasers which require simultaneous modeling of transitions in quantum wells or dots together with EM cavity calculations, to modeling plasmonic structures in the presence of EM field emissions, and to on-chip propagation within monolithic integrated circuits.
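
    For readers unfamiliar with the basis in question: the 1D cubic Hermite shape functions on a reference interval t ∈ [0, 1] interpolate both nodal values and nodal derivatives, and higher-dimensional scalar bases can be built from products of such polynomials. The 1D form below is the standard textbook one; the paper's actual elements are multidimensional.

```latex
% Standard cubic Hermite interpolation basis on t in [0, 1]
% (value/derivative degrees of freedom at each endpoint):
h_{00}(t) = 2t^3 - 3t^2 + 1, \qquad h_{10}(t) = t^3 - 2t^2 + t,
h_{01}(t) = -2t^3 + 3t^2, \qquad\;\; h_{11}(t) = t^3 - t^2,
\quad\text{so}\quad
f(t) \approx f_0\,h_{00}(t) + f'_0\,h_{10}(t) + f_1\,h_{01}(t) + f'_1\,h_{11}(t).
```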

  6. The Problem of Boys' Literacy Underachievement: Raising Some Questions

    ERIC Educational Resources Information Center

    Watson, Anne; Kehler, Michael; Martino, Wayne

    2010-01-01

    Boys' literacy underachievement continues to garner significant attention and has been identified by journalists, educational policymakers, and scholars in the field as the cause for much concern. It has been established that boys perform less well than girls on literacy benchmark or standardized tests. According to the National Assessment of…

  7. The Tyranny of Age.

    ERIC Educational Resources Information Center

    Start, K. B.

    This paper challenges the belief that the only way the Australian educational system can work is by grouping students by chronological age. Chronological age developed over time as the benchmark for school readiness, despite readiness problems created by individual differences among students of the same age. The organization of the classroom,…

  8. Report on the 1999 ONR Shallow-Water Reverberation Focus Workshop

    DTIC Science & Technology

    1999-12-31

    Pseudo Spectral models. • Develop reverberation and scattering benchmarks accepted by the scientific community. (The ASA penetrable wedge problem has… Paul C. Hines, W. Cary Risley, and Martin P. O'Connor, "A Wide-Band Sonar for underwater acoustics measurements in shallow water," in Oceans…

  9. Evolving the Role of Campus Security

    ERIC Educational Resources Information Center

    May, Vern

    2008-01-01

    One of the problems security professionals face is that there are few benchmarks to quantify the effectiveness of proactive security initiatives. This hurts them when seeking funding support and also when trying to ensure community buy-in outside of crisis situations. The reactive nature of many institutions makes it difficult to move forward with…

  10. The Problem with "Proficient"

    ERIC Educational Resources Information Center

    Harvey, James

    2018-01-01

    New research sheds light on the claim that U.S. students' achievement lags behind that of students worldwide. This research reveals a paradox: While large numbers of U.S. students who take the National Assessment of Educational Progress (NAEP) fail to meet its Proficient benchmarks in reading and math, when students' results on NAEP are…

  11. Ground truth and benchmarks for performance evaluation

    NASA Astrophysics Data System (ADS)

    Takeuchi, Ayako; Shneier, Michael; Hong, Tsai Hong; Chang, Tommy; Scrapper, Christopher; Cheok, Geraldine S.

    2003-09-01

    Progress in algorithm development and transfer of results to practical applications such as military robotics requires the setup of standard tasks and of standard qualitative and quantitative measurements for performance evaluation and validation. Although the evaluation and validation of algorithms have been discussed for over a decade, the research community still faces a lack of well-defined and standardized methodology. The fundamental problems include a lack of quantifiable measures of performance, a lack of data from state-of-the-art sensors in calibrated real-world environments, and a lack of facilities for conducting realistic experiments. In this research, we propose three methods for creating ground truth databases and benchmarks using multiple sensors. The databases and benchmarks will provide researchers with high quality data from suites of sensors operating in complex environments representing real problems of great relevance to the development of autonomous driving systems. At NIST, we have prototyped a High Mobility Multi-purpose Wheeled Vehicle (HMMWV) system with a suite of sensors including a Riegl ladar, GDRS ladar, stereo CCD, several color cameras, Global Positioning System (GPS), Inertial Navigation System (INS), pan/tilt encoders, and odometry. All sensors are calibrated with respect to each other in space and time. This allows a database of features and terrain elevation to be built. Ground truth for each sensor can then be extracted from the database. The main goal of this research is to provide ground truth databases for researchers and engineers to evaluate algorithms for effectiveness, efficiency, reliability, and robustness, thus advancing the development of algorithms.

  12. All-Optical Implementation of the Ant Colony Optimization Algorithm

    PubMed Central

    Hu, Wenchao; Wu, Kan; Shum, Perry Ping; Zheludev, Nikolay I.; Soci, Cesare

    2016-01-01

    We report an all-optical implementation of the optimization algorithm for the famous “ant colony” problem. Ant colonies progressively optimize the pathway to food discovered by one of the ants by marking the discovered route with volatile chemicals (pheromones) secreted on the way back from the food deposit. Mathematically this is an important example of a graph optimization problem with dynamically changing parameters. Using an optical network with nonlinear waveguides to represent the graph and a feedback loop, we experimentally show that photons traveling through the network behave like ants that dynamically modify the environment to find the shortest pathway to any chosen point in the graph. This proof-of-principle demonstration illustrates how transient nonlinearity in the optical system can be exploited to tackle complex optimization problems directly, on the hardware level, which may be used for self-routing of optical signals in transparent communication networks and energy flow in photonic systems. PMID:27222098
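
    For readers who want the algorithmic skeleton that the optical network implements physically, a minimal software ant-colony sketch on a toy graph follows: pheromone is reinforced in proportion to 1/path-length and evaporates each round. It is illustrative only, not a model of the experiment.

```python
# Minimal ant-colony optimization for the shortest source-target path.
import random

# graph as {node: {neighbor: edge_length}}
G = {0: {1: 1.0, 2: 5.0}, 1: {2: 1.0, 3: 4.0}, 2: {3: 1.0}, 3: {}}
tau = {(u, v): 1.0 for u in G for v in G[u]}         # pheromone per edge

def walk(src, dst):
    path, node = [src], src
    while node != dst:
        nbrs = [v for v in G[node] if v not in path]     # avoid revisits
        if not nbrs:
            return None                                  # dead end
        w = [tau[(node, v)] / G[node][v] for v in nbrs]  # pheromone / length
        node = random.choices(nbrs, weights=w)[0]
        path.append(node)
    return path

random.seed(4)
for _ in range(200):
    p = walk(0, 3)
    if p is None:
        continue
    length = sum(G[u][v] for u, v in zip(p, p[1:]))
    for u, v in zip(p, p[1:]):
        tau[(u, v)] += 1.0 / length                  # reinforce short paths
    for e in tau:
        tau[e] *= 0.98                               # pheromone evaporation

print("pheromone trails:", {e: round(t, 2) for e, t in tau.items()})
```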

  13. Viscous flow computations using a second-order upwind differencing scheme

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.

    1988-01-01

    In the present computations of a wide range of fluid flow problems using the Navier-Stokes equations in primitive variables, a mixed second-order upwind scheme approximates the convective terms of the transport equations, and the scheme's accuracy is verified for convection-dominated, high-Reynolds-number flow problems. An adaptive dissipation scheme is used as a monotonic mechanism for capturing supersonic shock flows. Many benchmark fluid flow problems, compressible and incompressible, laminar and turbulent, over a wide range of Mach and Reynolds numbers, are studied to verify the accuracy and robustness of this numerical method.
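
    As a minimal illustration of second-order upwind differencing for a convective term, the sketch below advects a pulse with the classical Beam-Warming scheme (a fully upwind-biased, second-order method); the paper's mixed scheme and adaptive dissipation are more elaborate, and the parameters here are illustrative.

```python
# Second-order upwind (Beam-Warming) scheme for 1D linear advection
# u_t + a u_x = 0 with a > 0, stable for Courant numbers 0 <= c <= 2.
import numpy as np

n, a, cfl = 400, 1.0, 0.4
dx = 1.0 / n
dt = cfl * dx / a
c = a * dt / dx                          # Courant number
x = (np.arange(n) + 0.5) * dx
u = np.exp(-200 * (x - 0.3) ** 2)        # smooth initial pulse

for _ in range(round(0.4 / dt)):         # advect the pulse a distance of 0.4
    um1, um2 = np.roll(u, 1), np.roll(u, 2)   # periodic upwind neighbors
    u = (u - 0.5 * c * (3 * u - 4 * um1 + um2)
           + 0.5 * c ** 2 * (u - 2 * um1 + um2))

print("peak at x =", round(float(x[np.argmax(u)]), 3),
      "(exact 0.7), amplitude", round(float(u.max()), 3))
```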

  14. New optical museum at Saint-Petersburg for education and training

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. N.; Stafeef, S. K.; Tomilin, M. G.

    2009-06-01

    Nowadays the educational challenge in teaching optics and photonics is to attract the young generation to the wonderful and magical world of light, optical science, technology and systems. The main issue is to explain that, over the last several hundred years, optics has provided the clearest world view for humanity. In fact, optics itself is a multidisciplinary complex of independent scientific directions and, moreover, has always been a generator of new fields of knowledge. Furthermore, optics and photonics are the fields within which some of the most fundamental problems of today's world are to be resolved. It is essential to encourage students to pursue an optics and photonics education as a physical alternative to acquiring solely computer knowledge. The main obstacles are the poor connection between optical education programs and real optical research, the disintegration of the different branches of optical science, the demographic situation, problems with teaching mathematics and physics at schools, and the collision between traditional educational methods and the mentality of the new generation. In Russia, the Saint-Petersburg State University of Information Technologies, Mechanics and Optics offers a partial solution to these problems: organizing a real place for interactive optical science in the form of a new museum of optics, intended for education and training, appears to be the most effective way. This was the main reason for establishing such a museum in Saint-Petersburg at the end of 2008.

  15. Split exponential track length estimator for Monte-Carlo simulations of small-animal radiation therapy

    NASA Astrophysics Data System (ADS)

    Smekens, F.; Létang, J. M.; Noblet, C.; Chiavassa, S.; Delpon, G.; Freud, N.; Rit, S.; Sarrut, D.

    2014-12-01

    We propose the split exponential track length estimator (seTLE), a new kerma-based method combining the exponential variant of the TLE and a splitting strategy to speed up Monte Carlo (MC) dose computation for low energy photon beams. The splitting strategy is applied to both the primary and the secondary emitted photons, triggered by either the MC events generator for primaries or the photon interactions generator for secondaries. Split photons are replaced by virtual particles for fast dose calculation using the exponential TLE. Virtual particles are propagated by ray-tracing in voxelized volumes and by conventional MC navigation elsewhere. Hence, the contribution of volumes such as collimators, treatment couch and holding devices can be taken into account in the dose calculation. We evaluated and analysed the seTLE method for two realistic small animal radiotherapy treatment plans. The effect of the kerma approximation, i.e. the complete deactivation of electron transport, was investigated. The efficiency of seTLE against splitting multiplicities was also studied. A benchmark with analog MC and TLE was carried out in terms of dose convergence and efficiency. The results showed that the deactivation of electrons impacts the dose at the water/bone interface in high dose regions. The maximum and mean dose differences normalized to the dose at the isocenter were 14% and 2%, respectively. Optimal splitting multiplicities were found to be around 300. In all situations, discrepancies in integral dose were below 0.5% and 99.8% of the voxels fulfilled a 1%/0.3 mm gamma index criterion. Efficiency gains of seTLE varied from 3.2 × 10⁵ to 7.7 × 10⁵ compared to analog MC and from 13 to 15 compared to conventional TLE. In conclusion, seTLE provides results similar to the TLE while increasing the efficiency by a factor between 13 and 15, which makes it particularly well-suited to typical small animal radiation therapy applications.
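
    The kerma-style scoring idea behind the TLE family can be shown in a few lines: walk a ray through voxels, score the expected energy deposition in each, and attenuate the particle weight exponentially. The sketch below is a bare-bones 1D caricature with made-up coefficients, not the seTLE implementation.

```python
# Expected (exponential) track-length-style scoring along one ray through
# a row of voxels: each voxel scores the expected interaction energy of
# the exponentially attenuated ray, then the weight is attenuated.
import numpy as np

dx = 0.1                                  # voxel size (cm)
mu = np.array([0.2, 0.2, 0.8, 0.8, 0.2])  # attenuation coeff. per voxel (1/cm)
mu_en = 0.7 * mu                          # energy-absorption coeff. (1/cm)

w = 1.0                                   # photon statistical weight
kerma = np.zeros_like(mu)
for i in range(len(mu)):
    # expected fraction interacting in voxel i, times the absorbed share
    kerma[i] = w * (mu_en[i] / mu[i]) * (1.0 - np.exp(-mu[i] * dx))
    w *= np.exp(-mu[i] * dx)              # attenuate through the voxel

print("relative kerma per voxel:", kerma.round(4), " exit weight:", round(w, 4))
```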

  16. Quantum Dot Photonics

    NASA Astrophysics Data System (ADS)

    Kinnischtzke, Laura A.

    We report on several experiments using single excitons confined to single semiconductor quantum dots (QDs). Electric and magnetic fields have previously been used as experimental knobs to understand and control individual excitons in single quantum dots. In the first two experiments, with strain-based InAs QDs, we realize new ways of electric field control by changing materials and device geometry. A standard Schottky diode heterostructure is demonstrated with graphene as the Schottky gate material, and its performance is benchmarked against a diode with a standard gate material, semi-transparent nickel-chromium (NiCr). This change of materials increases the photon collection rate by eliminating absorption in the metallic NiCr layer. A second set of experiments investigates the electric field response of QDs as a possible metrology source. A linear voltage potential drop in a plane near the QDs is used to describe how the spatially varying voltage profile is also imparted on the QDs. We demonstrate a procedure to map this voltage profile as a preliminary route towards a full quantum sensor array. Lastly, InAs QDs are explored as potential spin-photon interfaces. We describe how a magnetic field is used to realize a reversible exchange of information between light and matter, including a discussion of the polarization dependence of the photoluminescence and how it can be linked to the spin of a resident electron or hole. We present evidence of this in two wavelength regimes for InAs quantum dots, and discuss how an external magnetic field informs the spin physics of these two-level systems. This thesis concludes with the discovery of a new class of quantum dots. As-yet unidentified defect states in single-layer tungsten diselenide (WSe2) are shown to host quantum light emission. We explore the spatial extent of electron confinement and tentatively identify a radiative lifetime of 1 ns for these single-photon emitters.

  17. Design of an Evolutionary Approach for Intrusion Detection

    PubMed Central

    2013-01-01

    A novel evolutionary approach is proposed for effective intrusion detection based on benchmark datasets. The approach generates a pool of noninferior individual solutions and ensembles thereof, and the resulting ensembles can be used to detect intrusions accurately. For the intrusion detection problem, multiple conflicting objectives are considered simultaneously, such as the detection rate of each attack class, error rate, accuracy, and diversity, yielding solutions and ensembles with optimized trade-offs among these objectives. A three-phase approach is proposed. In the first phase, a Pareto front of noninferior individual solutions is approximated using a simple chromosome design. In the second phase, the solution set is refined, taking solution interaction into account, to determine effective ensemble solutions; this yields an improved Pareto front of ensemble solutions over that of the individual solutions, and the ensembles on this front gave improved detection results on the benchmark datasets. In the third phase, a combination method such as majority voting is used to fuse the predictions of the individual solutions into the ensemble prediction. Benchmark datasets, namely the KDD Cup 1999 and ISCX 2012 datasets, are used to demonstrate and validate the performance of the proposed approach for intrusion detection. The approach discovers individual solutions and ensembles with good support and detection rates from the benchmark datasets (in comparison with well-known ensemble methods such as bagging and boosting). In addition, it is a generalized classification approach, applicable to any field with multiple conflicting objectives whose dataset can be represented as labelled instances over a set of features. PMID:24376390
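
    As a hedged illustration of the two core ingredients named above, the sketch below filters a solution pool down to its Pareto front of noninferior solutions and fuses member predictions by majority voting; the paper's actual evolutionary operators and chromosome design are not reproduced here.

        import numpy as np

        def pareto_front(objectives):
            """Indices of noninferior (non-dominated) rows.

            objectives: (n_solutions, n_objectives), oriented so that larger
            is better (negate objectives such as error rate before calling)."""
            obj = np.asarray(objectives, dtype=float)
            keep = []
            for i in range(len(obj)):
                dominated = any(
                    np.all(obj[j] >= obj[i]) and np.any(obj[j] > obj[i])
                    for j in range(len(obj)) if j != i)
                if not dominated:
                    keep.append(i)
            return keep

        def majority_vote(member_predictions):
            """Fuse (n_members, n_samples) integer class predictions."""
            preds = np.asarray(member_predictions, dtype=int)
            return np.apply_along_axis(
                lambda col: np.bincount(col).argmax(), 0, preds)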

  18. Photon sieve telescope

    NASA Astrophysics Data System (ADS)

    Andersen, Geoff; Tullson, Drew

    2006-06-01

    In designing next-generation, ultra-large (>20 m) apertures for space, many current concepts involve compactable, curved membrane reflectors. Here we present the idea of using a flat diffractive element that requires no out-of-plane deformation and so is much simpler to deploy. The primary is a photon sieve - a diffractive element consisting of a large number of precisely positioned holes distributed according to an underlying Fresnel Zone Plate (FZP) geometry. The advantage of the photon sieve over the FZP is that all the regions are connected, so the membrane substrate under simple tension can avoid buckling. Also, the hole distribution can be varied to generate any conic or apodization for specialized telescope requirements such as exo-solar planet detection. We have designed and tested numerous photon sieves as telescope primaries. Some of these have over 10 million holes in a 0.1 m diameter aperture and all of them give diffraction-limited imaging. While photon sieves are diffractive elements and thus suffer from dispersion, we will present two successful solutions to this problem.
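
    A minimal sketch of how such a sieve can be laid out, assuming the standard Fresnel zone-plate radii and the often-quoted hole-diameter to zone-width ratio of about 1.53 (both are illustrative assumptions here, not design data from the authors):

        import numpy as np

        def zone_boundaries(n_zones, lam, f):
            """FZP zone boundaries r_n = sqrt(n*lam*f + (n*lam/2)**2), r_0 = 0."""
            n = np.arange(0, n_zones + 1)
            return np.sqrt(n * lam * f + (n * lam / 2.0) ** 2)

        def sieve_holes(n_zones, lam, f, fill=0.9, d_over_w=1.53):
            """Rings of pinholes (x, y, diameter) on the transparent zones."""
            r = zone_boundaries(n_zones, lam, f)
            holes = []
            for m in range(1, n_zones + 1, 2):       # odd = transparent zones
                r_mid = 0.5 * (r[m - 1] + r[m])       # zone centre radius
                w = r[m] - r[m - 1]                   # local zone width
                d = d_over_w * w                      # hole diameter
                n_holes = max(1, int(fill * 2 * np.pi * r_mid / d))
                for k in range(n_holes):
                    phi = 2 * np.pi * k / n_holes
                    holes.append((r_mid * np.cos(phi), r_mid * np.sin(phi), d))
            return holes

        # e.g. a visible-light sieve: 100 zones, 632.8 nm, f = 1 m
        holes = sieve_holes(100, 632.8e-9, 1.0)
        print(len(holes), holes[0])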

  19. Quantum Biometrics with Retinal Photon Counting

    NASA Astrophysics Data System (ADS)

    Loulakis, M.; Blatsios, G.; Vrettou, C. S.; Kominis, I. K.

    2017-10-01

    It is known that the eye's scotopic photodetectors, rhodopsin molecules, and their associated phototransduction mechanism leading to light perception, are efficient single-photon counters. We here use the photon-counting principles of human rod vision to propose a secure quantum biometric identification based on the quantum-statistical properties of retinal photon detection. The photon path along the human eye until its detection by rod cells is modeled as a filter having a specific transmission coefficient. Precisely determining its value from the photodetection statistics registered by the conscious observer is a quantum parameter estimation problem that leads to a quantum secure identification method. The probabilities for false-positive and false-negative identification of this biometric technique can readily approach 10^-10 and 10^-4, respectively. The security of the biometric method can be further quantified by the physics of quantum measurements. An impostor must be able to perform quantum thermometry and quantum magnetometry with energy resolution better than 10^-9 ℏ, in order to foil the device by noninvasively monitoring the biometric activity of a user.
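
    A toy model of the estimation step might look as follows, assuming perception requires at least a threshold number of detected photons per flash and that each photon independently survives the eye's filter with probability alpha; the threshold value and the grid search are illustrative assumptions, not the authors' protocol.

        import numpy as np
        from scipy.stats import binom

        def p_seen(alpha, n_photons, threshold=6):
            """Probability that a flash of n_photons is perceived, assuming
            perception needs >= threshold detections and each photon passes
            the eye's filter independently with probability alpha."""
            return binom.sf(threshold - 1, n_photons, alpha)

        def estimate_alpha(responses, n_photons, threshold=6,
                           grid=np.linspace(0.01, 0.5, 491)):
            """Maximum-likelihood alpha from binary seen/not-seen responses."""
            k, m = np.sum(responses), len(responses)
            p = p_seen(grid, n_photons, threshold)
            loglik = k * np.log(p) + (m - k) * np.log1p(-p)
            return grid[np.argmax(loglik)]

        # simulated user with true alpha = 0.2, flashes of 30 photons
        rng = np.random.default_rng(1)
        responses = rng.random(200) < p_seen(0.2, 30)
        print(estimate_alpha(responses, 30))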

  20. Stratification of unresponsive patients by an independently validated index of brain complexity

    PubMed Central

    Casarotto, Silvia; Comanducci, Angela; Rosanova, Mario; Sarasso, Simone; Fecchio, Matteo; Napolitani, Martino; Pigorini, Andrea; Casali, Adenauer G.; Trimarchi, Pietro D.; Boly, Melanie; Gosseries, Olivia; Bodart, Olivier; Curto, Francesco; Landi, Cristina; Mariotti, Maurizio; Devalle, Guya; Laureys, Steven; Tononi, Giulio

    2016-01-01

    Objective Validating objective, brain‐based indices of consciousness in behaviorally unresponsive patients represents a challenge due to the impossibility of obtaining independent evidence through subjective reports. Here we address this problem by first validating a promising metric of consciousness—the Perturbational Complexity Index (PCI)—in a benchmark population who could confirm the presence or absence of consciousness through subjective reports, and then applying the same index to patients with disorders of consciousness (DOCs). Methods The benchmark population encompassed 150 healthy controls and communicative brain‐injured subjects in various states of conscious wakefulness, disconnected consciousness, and unconsciousness. Receiver operating characteristic curve analysis was performed to define an optimal cutoff for discriminating between the conscious and unconscious conditions. This cutoff was then applied to a cohort of noncommunicative DOC patients (38 in a minimally conscious state [MCS] and 43 in a vegetative state [VS]). Results We found an empirical cutoff that discriminated with 100% sensitivity and specificity between the conscious and the unconscious conditions in the benchmark population. This cutoff resulted in a sensitivity of 94.7% in detecting MCS and allowed the identification of a number of unresponsive VS patients (9 of 43) with high values of PCI, overlapping with the distribution of the benchmark conscious condition. Interpretation Given its high sensitivity and specificity in the benchmark and MCS population, PCI offers a reliable, independently validated stratification of unresponsive patients that has important physiopathological and therapeutic implications. In particular, the high‐PCI subgroup of VS patients may retain a capacity for consciousness that is not expressed in behavior. Ann Neurol 2016;80:718–729 PMID:27717082

  1. Entanglement evaluation of non-Gaussian states generated by photon subtraction from squeezed states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitagawa, Akira; Takeoka, Masahiro; Sasaki, Masahide

    2006-04-15

    We consider the problem of evaluating the entanglement of non-Gaussian mixed states generated by photon subtraction from entangled squeezed states. The entanglement measures we use are the negativity and the logarithmic negativity, which have the advantage of being computable with linear algebra packages even for high-dimensional quantum systems. We numerically evaluate these measures for the non-Gaussian mixed states generated by photon subtraction with on/off photon detectors. The results are compared with the behavior of certain operational measures, namely the teleportation fidelity and the mutual information in the dense coding scheme. It is found that all of these results are mutually consistent, in the sense that whenever an enhancement is seen in terms of the operational measures, the negativity and the logarithmic negativity are also enhanced.
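
    Since both measures reduce to the trace norm of the partially transposed density matrix, they are indeed straightforward to evaluate numerically. A minimal sketch (the two-qubit Bell-state example is illustrative; the paper works with truncated two-mode states):

        import numpy as np

        def log_negativity(rho, dim_a, dim_b):
            """Negativity N and logarithmic negativity E_N of a bipartite state.

            rho: (dim_a*dim_b, dim_a*dim_b) density matrix."""
            r = rho.reshape(dim_a, dim_b, dim_a, dim_b)
            rho_pt = r.transpose(0, 3, 2, 1).reshape(dim_a * dim_b,
                                                     dim_a * dim_b)
            eig = np.linalg.eigvalsh(rho_pt)   # partial transpose is Hermitian
            trace_norm = np.sum(np.abs(eig))
            return (trace_norm - 1) / 2, np.log2(trace_norm)

        # example: two-qubit Bell state -> N = 0.5, E_N = 1
        psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
        rho = np.outer(psi, psi.conj())
        print(log_negativity(rho, 2, 2))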

  2. Transmission-line model to design matching stage for light coupling into two-dimensional photonic crystals.

    PubMed

    Miri, Mehdi; Khavasi, Amin; Mehrany, Khashayar; Rashidian, Bizhan

    2010-01-15

    The transmission-line analogy of the planar electromagnetic reflection problem is exploited to obtain a transmission-line model that can be used to design effective, robust, and wideband interference-based matching stages. The proposed model, based on a new definition of a scalar impedance, is obtained from the reflection coefficient of the zeroth-order diffracted plane wave outside the photonic crystal. It is shown to be accurate for in-band applications, where the normalized frequency is low enough to ensure that the zeroth-order diffracted plane wave is the most important factor in determining the overall reflection. The frequency limitation of the proposed approach is explored, highly dispersive photonic crystals are considered, and wideband matching stages based on binomial impedance transformers are designed to work within the first two photonic bands.
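
    For reference, a binomial (maximally flat) multisection quarter-wave transformer of the kind mentioned above can be synthesized from the textbook recursion ln(Z_{k+1}/Z_k) = 2^-N C(N,k) ln(Z_L/Z_0); the sketch below applies that standard formula and is not taken from the paper.

        from math import comb, exp, log

        def binomial_transformer(z0, zl, n_sections):
            """Section impedances of an N-section binomial quarter-wave
            transformer matching a line z0 to a load zl."""
            z = [z0]
            for k in range(n_sections):
                z.append(z[-1] * exp(2.0 ** (-n_sections)
                                     * comb(n_sections, k) * log(zl / z0)))
            return z[1:]   # the N intermediate section impedances

        # match a normalized load of 4.0 to a line of 1.0 with three sections
        print(binomial_transformer(1.0, 4.0, 3))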

  3. One-Dimensional Photonic Crystal Superprisms

    NASA Technical Reports Server (NTRS)

    Ting, David

    2005-01-01

    Theoretical calculations indicate that it should be possible for one-dimensional (1D) photonic crystals (see figure) to exhibit giant dispersions known as the superprism effect. Previously, three-dimensional (3D) photonic crystal superprisms have demonstrated strong wavelength dispersion - about 500 times that of conventional prisms and diffraction gratings. Unlike diffraction gratings, superprisms do not exhibit zero-order transmission or higher-order diffraction, thereby eliminating cross-talk problems. However, the fabrication of these 3D photonic crystals requires complex electron-beam substrate patterning and multilayer thin-film sputtering processes. The proposed 1D superprism is much simpler in structural complexity and, therefore, easier to design and fabricate. Like their 3D counterparts, the 1D superprisms can exhibit giant dispersions over small spectral bands that can be tailored by judicious structure design and tuned by varying incident beam direction. Potential applications include miniature gas-sensing devices.

  4. Broad Redshifted Line as a Signature of Outflow

    NASA Astrophysics Data System (ADS)

    Titarchuk, Lev; Kazanas, Demos; Becker, Peter A.

    2003-11-01

    We formulate and solve the diffusion problem of line photon propagation in a bulk outflow from a compact object (black hole or neutron star) using a generic assumption regarding the distribution of line photons within the outflow. Thomson scattering of the line photons within the expanding flow leads to a decrease of their energy which is of first order in v/c, where v is the outflow velocity and c is the speed of light. We demonstrate that the emergent line profile is closely related to the time distribution of photons diffusing through the flow (the light curve) and consists of a broad redshifted feature. We analyze the line profiles for the general case of the outflow density distribution. We emphasize that the redshifted lines are intrinsic properties of the powerful outflows that are thought to exist in many compact objects.
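
    The first-order energy decrease mentioned above can be made explicit. Under the simple reading that each Thomson scattering in the flow shifts the photon energy by a fraction of order v/c (an interpretive sketch, not the authors' derivation), the cumulative redshift after N scatterings is

        \[
          \frac{\langle \Delta E \rangle}{E} \;\simeq\; -\,\frac{v}{c},
          \qquad
          E_N \;\approx\; E_0 \left(1 - \frac{v}{c}\right)^{\!N}
              \;\approx\; E_0\, e^{-N v / c},
        \]

    so the emergent line profile maps the distribution of scattering numbers N, i.e. the diffusion-time distribution that also sets the light curve.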

  5. Broad Red-Shifted Lines as a Signature of Outflow

    NASA Astrophysics Data System (ADS)

    Kazanas, Demosthenes; Titarchuk, Lev; Becker, Peter A.

    2004-07-01

    We formulate and solve the diffusion problem of line photon propagation in a bulk outflow from a compact object (black hole or neutron star) using a generic assumption regarding the distribution of line photons within the outflow. Thomson scattering of the line photons within the expanding flow leads to a decrease of their energy which is of first order in v/c, where v is the outflow velocity and c is the speed of light. We demonstrate that the emergent line profile is closely related to the time distribution of photons diffusing through the flow (the light curve) and consists of a broad redshifted feature. We analyze the line profiles for the general case of the outflow density distribution. We emphasize that the redshifted lines are intrinsic properties of the powerful outflows that are thought to exist in many compact objects.

  6. Broad Red-Shifted Lines as a Signature of Outflows

    NASA Astrophysics Data System (ADS)

    Titarchuk, Lev; Kazanas, Demos; Becker, Peter A.

    2006-02-01

    We formulate and solve the diffusion problem of line photon propagation in a bulk outflow from a compact object (black hole or neutron star) using a generic assumption regarding the distribution of line photons within the outflow. Thomson scattering of the line photons within the expanding flow leads to a decrease of their energy which is of first order in v/c, where v is the outflow velocity and c is the speed of light. We demonstrate that the emergent line profile is closely related to the time distribution of photons diffusing through the flow (the light curve) and consists of a broad redshifted feature. We analyze the line profiles for the general case of the outflow density distribution. We emphasize that the redshifted lines are intrinsic properties of the powerful outflows that are thought to exist in many compact objects.

  7. Imaging photonic crystals using hemispherical digital condensers and phase-recovery techniques.

    PubMed

    Alotaibi, Maged; Skinner-Ramos, Sueli; Farooq, Hira; Alharbi, Nouf; Alghasham, Hawra; Grave de Peralta, Luis

    2018-05-10

    We describe experiments where Fourier ptychographic microscopy (FPM) and dual-space microscopy (DSM) are implemented for imaging photonic crystals using a hemispherical digital condenser (HDC). Phase-recovery imaging simulations show that both techniques should be able to image photonic crystals with a period below the Rayleigh resolution limit. However, after processing the experimental images using both phase-recovery algorithms, we found that DSM can, but FPM cannot, image periodic structures with a period below the diffraction limit. We studied the origin of this apparent contradiction between simulations and experiments, and we concluded that the occurrence of unwanted reflections in the HDC is the source of the apparent failure of FPM. We thereafter solved the problem of reflections by using a single-directional illumination source and showed that FPM can image photonic crystals with a period below the Rayleigh resolution limit.

  8. Embodiment of Learning in Electro-Optical Signal Processors

    NASA Astrophysics Data System (ADS)

    Hermans, Michiel; Antonik, Piotr; Haelterman, Marc; Massar, Serge

    2016-09-01

    Delay-coupled electro-optical systems have received much attention for their dynamical properties and their potential use in signal processing. In particular, it has recently been demonstrated, using the artificial intelligence algorithm known as reservoir computing, that photonic implementations of such systems solve complex tasks such as speech recognition. Here, we show how the backpropagation algorithm can be physically implemented on the same electro-optical delay-coupled architecture used for computation with only minor changes to the original design. We find that, compared to when the backpropagation algorithm is not used, the error rate of the resulting computing device, evaluated on three benchmark tasks, decreases considerably. This demonstrates that electro-optical analog computers can embody a large part of their own training process, allowing them to be applied to new, more difficult tasks.
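
    The software baseline underlying such photonic reservoir computers is the echo-state network: a fixed random recurrent layer with only the linear readout trained. A minimal sketch follows (the task, sizes, and spectral radius are illustrative choices, and the ridge-trained readout shown here stands in for, rather than reproduces, the paper's physical backpropagation):

        import numpy as np

        rng = np.random.default_rng(0)
        N, T = 200, 1000                      # reservoir size, sequence length
        u = rng.uniform(-1, 1, T)             # input signal
        y_target = np.roll(u, 3)              # toy task: recall input 3 steps back

        W_in = rng.uniform(-0.5, 0.5, N)
        W = rng.normal(0.0, 1.0, (N, N))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state condition

        x, states = np.zeros(N), np.empty((T, N))
        for t in range(T):
            x = np.tanh(W @ x + W_in * u[t])  # recurrent update
            states[t] = x

        # train only the linear readout, by ridge regression
        lam = 1e-6
        W_out = np.linalg.solve(states.T @ states + lam * np.eye(N),
                                states.T @ y_target)
        print(np.mean((states @ W_out - y_target) ** 2))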

  9. Enhancements to the MCNP6 background source

    DOE PAGES

    McMath, Garrett E.; McKinney, Gregg W.

    2015-10-19

    The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term, as well as data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.

  10. Embodiment of Learning in Electro-Optical Signal Processors.

    PubMed

    Hermans, Michiel; Antonik, Piotr; Haelterman, Marc; Massar, Serge

    2016-09-16

    Delay-coupled electro-optical systems have received much attention for their dynamical properties and their potential use in signal processing. In particular, it has recently been demonstrated, using the artificial intelligence algorithm known as reservoir computing, that photonic implementations of such systems solve complex tasks such as speech recognition. Here, we show how the backpropagation algorithm can be physically implemented on the same electro-optical delay-coupled architecture used for computation with only minor changes to the original design. We find that, compared to when the backpropagation algorithm is not used, the error rate of the resulting computing device, evaluated on three benchmark tasks, decreases considerably. This demonstrates that electro-optical analog computers can embody a large part of their own training process, allowing them to be applied to new, more difficult tasks.

  11. Energy resolved actinometry for simultaneous measurement of atomic oxygen densities and local mean electron energies in radio-frequency driven plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greb, Arthur, E-mail: ag941@york.ac.uk; Niemi, Kari; O'Connell, Deborah

    2014-12-08

    A diagnostic method for the simultaneous determination of atomic oxygen densities and mean electron energies is demonstrated for an atmospheric pressure radio-frequency plasma jet. The proposed method is based on phase resolved optical emission measurements of the direct and dissociative electron-impact excitation dynamics of three distinct emission lines, namely Ar 750.4 nm, O 777.4 nm, and O 844.6 nm. The energy dependence of these lines serves as the basis for analysis by taking two line ratios into account. Within this framework, the method is highly adaptable with regard to pressure and gas composition. Results are benchmarked against independent numerical simulations and two-photon absorption laser-induced fluorescence experiments.

  12. Exact solution for spin precession in the radiationless relativistic Kepler problem

    NASA Astrophysics Data System (ADS)

    Mane, S. R.

    2014-11-01

    There is interest in circulating beams of polarized particles in all-electric storage rings to search for nonzero permanent electric dipole moments of subatomic particles. To this end, it is helpful to derive exact analytical solutions of the spin precession in idealized models, both for pedagogical reasons and to serve as benchmark tests for analysis and design of experiments. This paper derives exact solutions for the spin precession in the relativistic Kepler problem. Some counterintuitive properties of the solutions are pointed out.

  13. Implementation experiences of NASTRAN on CDC CYBER 74 SCOPE 3.4 operating system

    NASA Technical Reports Server (NTRS)

    Go, J. C.; Hill, R. G.

    1973-01-01

    The implementation of the NASTRAN system on the CDC CYBER 74 SCOPE 3.4 Operating System is described. The flexibility of the NASTRAN system made it possible to accomplish the change with no major problems. Benchmark and test problems of various sizes, with run times ranging from two hours down to less than one minute of CP time, were run on the CDC CYBER SCOPE 3.3, Univac EXEC-8, and CDC CYBER SCOPE 3.4. The NASTRAN installation deck is provided.

  14. Pattern-set generation algorithm for the one-dimensional multiple stock sizes cutting stock problem

    NASA Astrophysics Data System (ADS)

    Cui, Yaodong; Cui, Yi-Ping; Zhao, Zhigang

    2015-09-01

    A pattern-set generation algorithm (PSG) for the one-dimensional multiple stock sizes cutting stock problem (1DMSSCSP) is presented. The solution process contains two stages. In the first stage, the PSG solves the residual problems repeatedly to generate the patterns in the pattern set, where each residual problem is solved by the column-generation approach, and each pattern is generated by solving a single large object placement problem. In the second stage, the integer linear programming model of the 1DMSSCSP is solved using a commercial solver, where only the patterns in the pattern set are considered. Computational results on benchmark instances indicate that the PSG outperforms existing heuristic algorithms and rivals the exact algorithm in solution quality.
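
    The pattern-generation step inside column generation is, at its core, an unbounded knapsack: given the dual prices of the current restricted master problem, find the cutting pattern that maximizes total dual value within one stock length. A hedged sketch of that subproblem (the names and the dynamic program are illustrative; the paper's single large object placement problem is richer):

        def best_pattern(stock_len, piece_lens, duals):
            """Knapsack subproblem of column generation: the pattern that
            maximizes the sum of dual prices within stock_len."""
            best = [0.0] * (stock_len + 1)
            choice = [None] * (stock_len + 1)
            for cap in range(1, stock_len + 1):
                for i, (l, d) in enumerate(zip(piece_lens, duals)):
                    if l <= cap and best[cap - l] + d > best[cap]:
                        best[cap] = best[cap - l] + d
                        choice[cap] = i
            pattern = [0] * len(piece_lens)
            cap = stock_len
            while choice[cap] is not None:   # backtrack the chosen pieces
                i = choice[cap]
                pattern[i] += 1
                cap -= piece_lens[i]
            return pattern, best[stock_len]

        # a new pattern enters the master problem when its reduced cost
        # (stock cost minus best dual value) is negative
        print(best_pattern(10, [3, 4, 5], [1.0, 1.4, 1.8]))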

  15. Photonic integrated transmitter and receiver for NG-PON2

    NASA Astrophysics Data System (ADS)

    Tavares, Ana; Lopes, Ana; Rodrigues, Cláudio; Mãocheia, Paulo; Mendes, Tiago; Brandão, Simão; Rodrigues, Francisco; Ferreira, Ricardo; Teixeira, António

    2014-08-01

    In this paper the authors present a monolithic Photonic Integrated Circuit that includes a transmitter and a receiver for NG-PON2. With this layout it is possible to build an OLT and, by redesigning some filters, also an ONU. This technology reduces the losses in the transmitter and in the receiver, increasing the power budget, and also reduces the number of OEO conversions, which have been a major problem that operators want to overcome.

  16. COST 288: Nanoscale and Ultrafast Photonics. Action Identification Data

    DTIC Science & Technology

    2008-08-01

    A wideband light source is suggested to avoid the problem of using a Si emitter. Transmission properties of a symmetrical modulator structure...Britain to discuss science, technology and the view forward for ultrafast photonics. The commitment of the Japanese was impressive to all...fibre to...on the multi-wavelength amplification properties of GaInNAs quantum wells and quantum dots for broad-band SOAs. (WG1, approved by MC) Year 3

  17. Experimental two-dimensional quantum walk on a photonic chip.

    PubMed

    Tang, Hao; Lin, Xiao-Feng; Feng, Zhen; Chen, Jing-Yuan; Gao, Jun; Sun, Ke; Wang, Chao-Yue; Lai, Peng-Cheng; Xu, Xiao-Yun; Wang, Yao; Qiao, Lu-Feng; Yang, Ai-Lin; Jin, Xian-Min

    2018-05-01

    Quantum walks, by virtue of coherent superposition and quantum interference, have exponential superiority over their classical counterpart in applications such as quantum searching and quantum simulation. The quantum-enhanced power is closely related to the state space of the quantum walk, which can be expanded by enlarging the photon number and/or the dimensions of the evolution network; the former is considerably challenging due to the probabilistic generation of single photons and multiplicative loss. We demonstrate a two-dimensional continuous-time quantum walk by using the external geometry of photonic waveguide arrays, rather than the internal degrees of freedom of photons. Using femtosecond laser direct writing, we construct a large-scale three-dimensional structure that forms a two-dimensional lattice with up to 49 × 49 nodes on a photonic chip. We demonstrate spatial two-dimensional quantum walks using heralded single photons and single-photon-level imaging. We analyze the quantum transport properties via the ballistic evolution pattern and the variance profile, which agree well with simulation results. We further reveal a transient behavior that is unique to quantum walks in more than one dimension. An architecture that allows a quantum walk to evolve freely in all directions and at large scale, combined with defect and disorder control, may bring powerful and versatile quantum walk machines to bear on classically intractable problems.
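
    A continuous-time quantum walk of this kind is easy to reproduce numerically: take the lattice adjacency matrix as the Hamiltonian and evolve a localized state under exp(-iHt). The sketch below (a small 21 × 21 lattice, not the chip's 49 × 49) exhibits the ballistic variance growth the experiment observes:

        import numpy as np
        from scipy.linalg import expm

        n = 21                                      # lattice is n x n
        N = n * n
        H = np.zeros((N, N))                        # tight-binding Hamiltonian
        for i in range(n):
            for j in range(n):
                s = i * n + j
                if i + 1 < n:
                    H[s, s + n] = H[s + n, s] = 1.0  # vertical neighbour
                if j + 1 < n:
                    H[s, s + 1] = H[s + 1, s] = 1.0  # horizontal neighbour

        psi0 = np.zeros(N, complex)
        psi0[(n // 2) * n + n // 2] = 1.0            # walker starts at the centre
        xs = np.arange(N) // n                       # row coordinate of each node
        for t in (1.0, 2.0, 3.0):
            psi = expm(-1j * H * t) @ psi0
            p = np.abs(psi) ** 2
            var = np.sum(p * xs ** 2) - np.sum(p * xs) ** 2
            print(t, var)       # variance grows ~ t^2: ballistic spreading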

  18. Minimum Copies of Schrödinger’s Cat State in the Multi-Photon System

    PubMed Central

    Lu, Yiping; Zhao, Qing

    2016-01-01

    Multi-photon entanglement has been successfully studied by many theoretical and experimental groups. However, as the number of entangled photons increases, some problems are encountered, such as the exponential increase of the time necessary to prepare the same number of copies of entangled states in experiment. In this paper, a new scheme is proposed based on Lagrange multipliers and feedback, which cuts down the required number of copies of the Schrödinger’s cat state in multi-photon experiments, realized with some noise in actual measurements, while keeping the standard deviation of the error in fidelity unchanged. It reduces the measuring time of the eight-photon Schrödinger’s cat state by about five percent compared with the scheme used in the usual planning of actual measurements, while guaranteeing the same low error in fidelity. In addition, we applied the same approach to the simulation of ten-photon entanglement, and found that it reduces in principle the required number of copies of the Schrödinger’s cat state by about twenty-two percent compared with the conventionally used uniform-distribution scheme; moreover, the optimized distribution of copies of the ten-photon Schrödinger’s cat state gives a better fidelity estimate than the uniform distribution for the same number of copies. PMID:27576585
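
    In the simplest reading of the Lagrange-multiplier idea (an interpretive sketch, not the paper's scheme), minimizing the total variance sum_i sigma_i^2 / n_i over copy numbers n_i with a fixed total budget gives a Neyman-style allocation with n_i proportional to sigma_i:

        import numpy as np

        def allocate_copies(sigmas, total_copies):
            """Minimize sum_i sigma_i**2 / n_i subject to sum_i n_i = total:
            the Lagrange condition gives n_i proportional to sigma_i;
            round while preserving the total."""
            sigmas = np.asarray(sigmas, dtype=float)
            raw = total_copies * sigmas / sigmas.sum()
            n = np.floor(raw).astype(int)
            # hand the remainder to the largest fractional parts
            for i in np.argsort(raw - n)[::-1][: total_copies - n.sum()]:
                n[i] += 1
            return n

        print(allocate_copies([0.5, 1.0, 2.0, 1.5], 100))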

  19. Assessing 1D Atmospheric Solar Radiative Transfer Models: Interpretation and Handling of Unresolved Clouds.

    NASA Astrophysics Data System (ADS)

    Barker, H. W.; Stephens, G. L.; Partain, P. T.; Bergman, J. W.; Bonnel, B.; Campana, K.; Clothiaux, E. E.; Clough, S.; Cusack, S.; Delamere, J.; Edwards, J.; Evans, K. F.; Fouquart, Y.; Freidenreich, S.; Galin, V.; Hou, Y.; Kato, S.; Li, J.; Mlawer, E.; Morcrette, J.-J.; O'Hirok, W.; Räisänen, P.; Ramaswamy, V.; Ritter, B.; Rozanov, E.; Schlesinger, M.; Shibata, K.; Sporyshev, P.; Sun, Z.; Wendisch, M.; Wood, N.; Yang, F.

    2003-08-01

    The primary purpose of this study is to assess the performance of 1D solar radiative transfer codes that are used currently both for research and in weather and climate models. Emphasis is on the interpretation and handling of unresolved clouds. Answers are sought to the following questions: (i) How well do 1D solar codes interpret and handle columns of information pertaining to partly cloudy atmospheres? (ii) Regardless of the adequacy of their assumptions about unresolved clouds, do 1D solar codes perform as intended? One clear-sky and two plane-parallel, homogeneous (PPH) overcast cloud cases serve to elucidate 1D model differences due to varying treatments of gaseous transmittances, cloud optical properties, and basic radiative transfer. The remaining four cases involve 3D distributions of cloud water and water vapor as simulated by cloud-resolving models. Results for 25 1D codes, which included two line-by-line (LBL) models (clear and overcast only) and four 3D Monte Carlo (MC) photon transport algorithms, were submitted by 22 groups. Benchmark, domain-averaged irradiance profiles were computed by the MC codes. For the clear and overcast cases, all MC estimates of top-of-atmosphere albedo, atmospheric absorptance, and surface absorptance agree with one of the LBL codes to within ±2%. Most 1D codes underestimate atmospheric absorptance by typically 15-25 W m^-2 at overhead sun for the standard tropical atmosphere, regardless of clouds. Depending on their assumptions about unresolved clouds, the 1D codes were partitioned into four genres: (i) horizontal variability, (ii) exact overlap of PPH clouds, (iii) maximum/random overlap of PPH clouds, and (iv) random overlap of PPH clouds. A single MC code was used to establish conditional benchmarks applicable to each genre, and all MC codes were used to establish the full 3D benchmarks. There is a tendency for 1D codes to cluster near their respective conditional benchmarks, though intragenre variances typically exceed those for the clear and overcast cases. The majority of 1D codes fall into the extreme category of maximum/random overlap of PPH clouds and thus generally disagree with the full 3D benchmark values. Given the fairly limited scope of these tests and the inability of any one code to perform extremely well for all cases, it appears that a paradigm shift is due in the modeling of 1D solar fluxes for cloudy atmospheres.
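
    The genre partitioning above hinges on how layer cloud fractions are combined into a total cloud cover. A hedged sketch of the two most common schemes follows (the maximum/random formula is the widely used Geleyn-Hollingsworth form; treat it as an illustrative assumption rather than the paper's specification):

        import numpy as np

        def total_cloud_cover(c, mode="max-random"):
            """Total cloud fraction seen from above for layer fractions c[0..K-1].

            'random':      C = 1 - prod(1 - c_k)
            'max-random':  adjacent cloudy layers maximally overlapped,
                           separated cloud blocks randomly overlapped."""
            c = np.asarray(c, dtype=float)
            if mode == "random":
                return 1.0 - np.prod(1.0 - c)
            clear = 1.0 - c[0]
            for k in range(1, len(c)):
                denom = 1.0 - c[k - 1]
                if denom <= 0.0:
                    return 1.0           # a fully overcast layer closes the sky
                clear *= (1.0 - max(c[k - 1], c[k])) / denom
            return 1.0 - clear

        print(total_cloud_cover([0.2, 0.5, 0.3], "random"),
              total_cloud_cover([0.2, 0.5, 0.3], "max-random"))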

  20. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    PubMed Central

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295
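
    A minimal sketch of such an iterated greedy loop for the discrete DBAP (the destruction size, acceptance rule, and data layout are illustrative assumptions, not the paper's exact design):

        import random

        def total_service_time(schedule, arrival, handling):
            """Sum of (departure - arrival) over all ships; schedule maps each
            berth to an ordered ship list, handling[s][b] is the handling time."""
            total = 0.0
            for berth, ships in schedule.items():
                t = 0.0
                for s in ships:
                    t = max(t, arrival[s]) + handling[s][berth]
                    total += t - arrival[s]
            return total

        def iterated_greedy(ships, berths, arrival, handling, iters=500, k=2):
            best = {b: [] for b in berths}
            for s in ships:                        # arbitrary initial assignment
                best[random.choice(berths)].append(s)
            best_cost = total_service_time(best, arrival, handling)
            for _ in range(iters):
                cand = {b: lst[:] for b, lst in best.items()}
                removed = random.sample(ships, k)  # destruction phase
                for b in cand:
                    cand[b] = [s for s in cand[b] if s not in removed]
                for s in removed:                  # greedy reconstruction phase
                    trials = []
                    for b in cand:
                        for pos in range(len(cand[b]) + 1):
                            cand[b].insert(pos, s)
                            trials.append((total_service_time(cand, arrival,
                                                              handling), b, pos))
                            cand[b].pop(pos)
                    _, b_best, pos_best = min(trials, key=lambda t: t[0])
                    cand[b_best].insert(pos_best, s)
                cost = total_service_time(cand, arrival, handling)
                if cost < best_cost:               # accept only improvements
                    best, best_cost = cand, cost
            return best, best_cost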
