Sample records for wimsd5 deterministic multigroup

  1. Progress on China nuclear data processing code system

    NASA Astrophysics Data System (ADS)

    Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu

    2017-09-01

    China is developing the nuclear data processing code Ruler, which produces multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the full energy range, generating Doppler-broadened cross sections at given temperatures, producing effective self-shielded cross sections in the unresolved resonance range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. Ruler is written in Fortran-90 and has been tested on 32-bit computers under the Windows-XP and Linux operating systems. Ruler was verified by comparison with results from the NJOY99 [3] processing code and validated using the WIMSD5B code.
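    The central operation such a processing code performs is the flux-weighted collapse of pointwise data to group constants, sigma_g = int(sigma(E) phi(E) dE) / int(phi(E) dE). Below is a minimal sketch of that step in Python; the function name and the assumption of a shared energy grid are illustrative, not Ruler's actual interface.

      import numpy as np

      def collapse_multigroup(energy, sigma, flux, group_bounds):
          """Flux-weighted collapse of a pointwise cross section:
          sigma_g = int(sigma*flux dE) / int(flux dE) over each group."""
          sigma_g = np.empty(len(group_bounds) - 1)
          for g in range(len(group_bounds) - 1):
              lo, hi = group_bounds[g], group_bounds[g + 1]
              mask = (energy >= lo) & (energy <= hi)
              num = np.trapz(sigma[mask] * flux[mask], energy[mask])
              den = np.trapz(flux[mask], energy[mask])
              sigma_g[g] = num / den
          return sigma_g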

  2. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deen, J.R.; Woodruff, W.L.; Leal, L.E.

    1995-01-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh-water-moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code are compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  3. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold that determines the persistence or extinction of the disease. Using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model satisfies certain conditions, then the disease prevails: the infective class persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, the infective class vanishes and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. For this stochastic version, we carry out a detailed analysis of the asymptotic behavior of the model. Regarding the value of ℛ0, when the stochastic system satisfies certain conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
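    To make the threshold role of ℛ0 concrete, the sketch below integrates a hypothetical two-group SIR-type system with a vaccinated fraction and computes ℛ0 as the spectral radius of the next-generation matrix F V^-1. All parameters and the model structure are illustrative stand-ins, not the paper's MSIR system.

      import numpy as np
      from scipy.integrate import solve_ivp

      beta = np.array([[0.30, 0.05],    # illustrative transmission rates: group j infects group i
                       [0.05, 0.20]])
      gamma = np.array([0.10, 0.10])    # recovery rates
      mu, p = 0.02, 0.3                 # birth/death rate, vaccinated fraction

      def rhs(t, y):
          S, I = y[:2], y[2:]
          newinf = S * (beta @ I)
          dS = mu * (1 - p) - newinf - mu * S
          dI = newinf - (gamma + mu) * I
          return np.concatenate([dS, dI])

      # R0 = spectral radius of F V^-1 at the disease-free equilibrium S0 = 1 - p
      S0 = (1 - p) * np.ones(2)
      F = S0[:, None] * beta
      V = np.diag(gamma + mu)
      R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))

      sol = solve_ivp(rhs, (0, 400), [0.69, 0.69, 0.01, 0.01])
      print(f"R0 = {R0:.2f}; final infective fractions: {sol.y[2:, -1]}")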

  4. Multigroup cross section library for GFR2400

    NASA Astrophysics Data System (ADS)

    Čerba, Štefan; Vrban, Branislav; Lüley, Jakub; Haščík, Ján; Nečas, Vladimír

    2017-09-01

    In this paper the development and optimization of the SBJ_E71 multigroup cross-section library for GFR2400 applications is discussed. A cross-section processing scheme merging Monte Carlo and deterministic codes was developed. Several fine and coarse group structures and two weighting-flux options were analysed through 18 benchmark experiments, selected from the ICSBEP handbook on the basis of similarity assessments. The performance of the collapsed version of the SBJ_E71 library was compared with MCNP5 continuous-energy calculations using ENDF/B-VII.1 and with the Korean KAFAX-E70 library. The comparison was based on integral parameters of calculations performed on homogeneous full-core models.

  5. MT71x: Multi-Temperature Library Based on ENDF/B-VII.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conlin, Jeremy Lloyd; Parsons, Donald Kent; Gray, Mark Girard

    The Nuclear Data Team has released a multitemperature transport library, MT71x, based upon ENDF/B-VII.1 with a few modifications as well as additional evaluations, for a total of 427 isotope tables. The library was processed using NJOY2012.39 at 23 temperatures. MT71x consists of two sub-libraries: MT71xMG for multigroup energy representation data and MT71xCE for continuous-energy representation data. These sub-libraries are suitable for deterministic transport and Monte Carlo transport applications, respectively. The SZAs used are the same for the two sub-libraries; that is, the same SZA can be used for both. This makes comparisons between the two libraries, and between deterministic and Monte Carlo codes, straightforward. Both the multigroup-energy and continuous-energy libraries were verified with our checking codes checkmg and checkace (multigroup and continuous energy, respectively), then with an expanded suite of tests for additional verification, and finally validated using an extensive suite of critical benchmark models. We feel that this library is suitable for all calculations and is particularly useful for calculations sensitive to temperature effects.

  6. Automated variance reduction for MCNP using deterministic methods.

    PubMed

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
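    The essence of the scheme is to center each cell's weight window on the reciprocal of the adjoint (importance) function, normalized so that source particles are born near the middle of their window. Below is a CADIS-style sketch of that prescription; it is illustrative and not the exact MCNP5/PARTISN implementation.

      import numpy as np

      def weight_windows_from_adjoint(adjoint_flux, source_cell, ratio=5.0):
          """Mesh-based weight-window bounds from an adjoint flux.
          'ratio' is the upper/lower window span; the normalization places
          a unit-weight source particle at its window center."""
          phi = np.asarray(adjoint_flux, dtype=float)
          w_low = (phi[source_cell] / phi) * 2.0 / (1.0 + ratio)
          w_high = ratio * w_low
          return w_low, w_high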

  7. Validation of the WIMSD4M cross-section generation code with benchmark results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leal, L.C.; Deen, J.R.; Woodruff, W.L.

    1995-02-01

    The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code are compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly enriched, heavy-water-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.

  8. Multigroup SIR epidemic model with stochastic perturbation

    NASA Astrophysics Data System (ADS)

    Ji, Chunyan; Jiang, Daqing; Shi, Ningzhong

    2011-05-01

    In this paper, we discuss a multigroup SIR model with stochastic perturbation. We deduce the global asymptotic stability of the disease-free equilibrium when R0≤1, which means the disease will die out. On the other hand, when R0>1, we show that the disease will prevail, measured through the time average of the difference between the solution and the endemic equilibrium of the deterministic model. Furthermore, we prove the system is persistent in the mean, which also reflects that the disease will prevail. The key to our analysis is choosing appropriate Lyapunov functions. Finally, we illustrate the dynamic behavior of the model with n=2 and its approximations via a range of numerical experiments.
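    A single-group analogue of such a stochastic perturbation is easy to integrate with the Euler-Maruyama scheme; here white noise multiplies the transmission term. Parameters are illustrative, not the paper's n-group system, and positivity is not enforced in this toy version.

      import numpy as np

      rng = np.random.default_rng(1)
      beta, gamma, sigma = 0.4, 0.1, 0.05   # transmission, recovery, noise intensity
      S, I = 0.99, 0.01
      dt, T = 0.01, 200.0
      for _ in range(int(T / dt)):
          dW = rng.normal(0.0, np.sqrt(dt))              # Brownian increment
          flow = beta * S * I
          dS = -flow * dt - sigma * S * I * dW
          dI = (flow - gamma * I) * dt + sigma * S * I * dW
          S, I = S + dS, I + dI
      print(S, I)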

  9. Evaluation of the DRAGON code for VHTR design analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taiwo, T. A.; Kim, T. K.; Nuclear Engineering Division

    2006-01-12

    This letter report summarizes three activities that were undertaken in FY 2005 to gather information on the DRAGON code and to perform limited evaluations of the code performance when used in the analysis of Very High Temperature Reactor (VHTR) designs. These activities include: (1) use of the code to model the fuel elements of the helium-cooled and liquid-salt-cooled VHTR designs, with results compared to those from another deterministic lattice code (WIMS8) and a Monte Carlo code (MCNP); (2) a preliminary assessment of the nuclear data library currently used with the code and of libraries provided by the IAEA WIMS-D4 Library Update Project (WLUP); (3) a DRAGON workshop held to discuss the code capabilities for modeling the VHTR.

  10. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    NASA Astrophysics Data System (ADS)

    Burns, Kimberly Ann

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of the elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of coupled Monte Carlo-deterministic methods for the simulation of neutron-induced photons for high-resolution gamma-ray spectroscopy applications. The RAdiation Detection Scenario Analysis Toolbox (RADSAT), a code which couples deterministic and Monte Carlo transport to perform radiation detection scenario analysis in three dimensions [1], was used as the building block for the methods derived in this work. RADSAT was capable of performing coupled deterministic-Monte Carlo simulations for gamma-only and neutron-only problems; the purpose of this work was to develop the methodology necessary to perform coupled neutron-photon calculations and add this capability to RADSAT. Performing coupled neutron-photon calculations requires four main steps: the deterministic neutron transport calculation, the neutron-induced photon spectrum calculation, the deterministic photon transport calculation, and the Monte Carlo detector response calculation. The necessary requirements for each of these steps were determined. A major challenge in utilizing multigroup deterministic transport methods for neutron-photon problems is maintaining the discrete neutron-induced photon signatures throughout the simulation. Existing coupled neutron-photon cross-section libraries and the methods used to produce neutron-induced photons were unsuitable for high-resolution gamma-ray spectroscopy applications. Central to this work was the development of a method for generating multigroup neutron-photon cross sections in a way that separates the discrete and continuum photon emissions so that the neutron-induced photon signatures are preserved. The RADSAT-NG cross-section library was developed as a specialized multigroup neutron-photon cross-section set for the simulation of high-resolution gamma-ray spectroscopy applications. The methodology and cross sections were tested by code-to-code comparison with MCNP5 [2] and NJOY [3]. A simple benchmark geometry was used for all cases compared with MCNP: a cubical sample with a 252Cf neutron source on one side and an HPGe gamma-ray spectrometer on the opposing side. Different materials were examined in the cubical sample: polyethylene (C2H4), P, N, O, and Fe. The cross sections for each of the materials were compared to cross sections collapsed using NJOY. Comparisons of the volume-averaged neutron flux within the sample, the volume-averaged photon flux within the detector, and the high-purity gamma-ray spectrometer response (only for polyethylene) were completed using RADSAT and MCNP. The code-to-code comparisons show promising results for the coupled Monte Carlo-deterministic method. The RADSAT-NG cross-section production method showed good agreement with NJOY for all materials considered, although some additional work is needed in the resonance region and in the first and last energy bins. Some cross-section discrepancies existed in the lowest and highest energy bins, but the overall shape and magnitude from the two methods agreed. For the volume-averaged photon flux within the detector, the five most intense lines typically agree to within approximately 5% of the MCNP-calculated flux for all of the materials considered. The agreement in the code-to-code comparison cases demonstrates a proof of concept of the method for use in RADSAT for coupled neutron-photon problems in high-resolution gamma-ray spectroscopy applications. One of the primary motivators for using the coupled method over a pure Monte Carlo method is the potential for significantly lower computational times. For the code-to-code comparison cases, the run times for RADSAT were approximately 25-500 times shorter than for MCNP, as shown in Table 1, assuming a 40 mCi 252Cf neutron source and 600 seconds of "real-world" measurement time. The only variance reduction technique implemented in the MCNP calculation was forward biasing of the source toward the sample target; improved MCNP runtimes could be achieved with more advanced variance reduction techniques.
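    One way to picture the separation described here is a production record that keeps the continuum multigroup matrix and the discrete lines side by side, so the line energies never get smeared into broad photon groups. The layout below is illustrative only, not the actual RADSAT-NG format.

      import numpy as np
      from dataclasses import dataclass

      @dataclass
      class PhotonProduction:
          """Neutron-induced photon production for one nuclide, with the
          continuum and discrete parts stored separately."""
          continuum: np.ndarray      # (n_neutron_groups, n_photon_groups)
          line_energies: np.ndarray  # discrete photon energies, MeV
          line_yields: np.ndarray    # (n_neutron_groups, n_lines)

          def photon_source(self, group_flux):
              # continuum emission per photon group, and rate per discrete line
              return group_flux @ self.continuum, group_flux @ self.line_yields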

  11. Improvement of Modeling HTGR Neutron Physics by Uncertainty Analysis with the Use of Cross-Section Covariance Information

    NASA Astrophysics Data System (ADS)

    Boyarinov, V. F.; Grol, A. V.; Fomichenko, P. A.; Ternovykh, M. Yu

    2017-01-01

    This work is aimed at improving HTGR neutron physics design calculations through uncertainty analysis with the use of cross-section covariance information. Methodology and codes were developed for preparing multigroup libraries of covariance information for individual isotopes from the basic 44-group covariance library of the SCALE-6 code system. A 69-group library of covariance information in a special format was generated for the main isotopes and elements typical of high-temperature gas-cooled reactors (HTGR). This library can be used to estimate nuclear-data-related uncertainties in HTGR neutron physics analysis with design codes. As an example, one-group cross-section uncertainties for fission and capture reactions were calculated for the main isotopes of the MHTGR-350 benchmark, as well as uncertainties of the multiplication factor (k∞) for the MHTGR-350 fuel compact cell model and fuel block model. These uncertainties were estimated with the developed technology using the WIMS-D code and modules of the SCALE-6 code system, namely TSUNAMI, KENO-VI and SAMS. The eight most important isotope reactions for the MHTGR-350 benchmark were identified, namely: 10B(capt), 238U(n,γ), ν5, 235U(n,γ), 238U(el), natC(el), 235U(fiss)-235U(n,γ), 235U(fiss).
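    The propagation step such tools perform reduces, for each response, to the first-order "sandwich rule" u^2 = S^T C S. A minimal sketch, assuming a sensitivity vector S (e.g., from TSUNAMI) and a relative covariance matrix C on the same group structure:

      import numpy as np

      def sandwich_uncertainty(S, C):
          """Relative variance of a response from the sandwich rule
          u^2 = S^T C S; take the square root for the relative standard
          deviation."""
          S = np.asarray(S, dtype=float)
          return float(S @ np.asarray(C, dtype=float) @ S)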

  12. Mixed Legendre moments and discrete scattering cross sections for anisotropy representation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calloo, A.; Vidal, J. F.; Le Tellier, R.

    2012-07-01

    This paper deals with the resolution of the integro-differential form of the Boltzmann transport equation for neutron transport in nuclear reactors. In multigroup theory, deterministic codes use transfer cross sections expanded on Legendre polynomials. This modelling leads to negative values of the transfer cross section for certain scattering angles, and hence the multigroup scattering source term is wrongly computed. The first part compares the convergence of 'Legendre-expanded' cross sections with respect to the expansion order used with the method of characteristics (MOC) for Pressurised Water Reactor (PWR) type cells. Furthermore, the cross section is expanded using piecewise-constant functions, which better model the multigroup transfer cross section and prevent the occurrence of any negative values. The second part focuses on the method of solving the transport equation with the above-mentioned piecewise-constant cross sections for lattice calculations for PWR cells. This expansion thereby constitutes a 'reference' method against which the conventional Legendre expansion can be compared, to determine its pertinence when applied to reactor physics calculations. (authors)
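    The negativity problem is easy to reproduce: truncating the Legendre expansion of a forward-peaked transfer kernel at low order produces negative values at some angles. A small demonstration with a made-up kernel (not the paper's data):

      import numpy as np
      from numpy.polynomial import legendre

      mu = np.linspace(-1.0, 1.0, 2001)
      kernel = np.exp(-40.0 * (1.0 - mu))      # hypothetical forward-peaked kernel

      order = 3                                # P3, a common lattice-code truncation
      # reconstruction k(mu) ~ sum_l (2l+1)/2 * k_l * P_l(mu), with k_l = int P_l k dmu
      coeffs = [(2 * l + 1) / 2.0
                * np.trapz(legendre.legval(mu, np.eye(l + 1)[l]) * kernel, mu)
                for l in range(order + 1)]
      reconstructed = legendre.legval(mu, coeffs)
      print("min of P3-reconstructed kernel:", reconstructed.min())  # typically < 0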

  13. Improved Convergence Rate of Multi-Group Scattering Moment Tallies for Monte Carlo Neutron Transport Codes

    NASA Astrophysics Data System (ADS)

    Nelson, Adam

    Multi-group scattering moment matrices are critical to the solution of the multi-group form of the neutron transport equation, as they describe the change in direction and energy of neutrons. These matrices, however, are difficult to calculate correctly from the measured nuclear data with both deterministic and stochastic methods. Calculating these parameters with deterministic methods requires a set of assumptions that do not hold true in all conditions. The quantities can be calculated accurately with stochastic methods; however, doing so is computationally expensive due to the poor efficiency of tallying scattering moment matrices. This work presents an improved method of obtaining multi-group scattering moment matrices from a Monte Carlo neutron transport code. The improved tallying method is based on recognizing that all of the outgoing particle information is known a priori and can be exploited to increase the tallying efficiency (and therefore reduce the uncertainty) of the stochastically integrated tallies. In this scheme, the complete outgoing probability distribution is tallied, supplying every element of the scattering moment matrices with its share of data. In addition to reducing the uncertainty, this method allows for the use of a track-length estimation process, potentially offering further improvement in tallying efficiency. To produce the needed distributions, however, the probability functions themselves must be integrated over the outgoing energy and scattering angle dimensions. This integration is too costly to perform during the Monte Carlo simulation itself and therefore must be performed in advance by a pre-processing code. The new method increases the information obtained from tally events and therefore has a significantly higher efficiency than currently used techniques. The improved method has been implemented in a code system containing a new pre-processor code, NDPP, and a Monte Carlo neutron transport code, OpenMC. The method is then tested in a pin-cell problem and a larger problem designed to accentuate the importance of scattering moment matrices. These tests show that accuracy was retained while the figure of merit for generating scattering moment matrices and fission energy spectra was significantly improved.

  14. SU-G-TeP1-15: Toward a Novel GPU Accelerated Deterministic Solution to the Linear Boltzmann Transport Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, R; Fallone, B; Cross Cancer Institute, Edmonton, AB

    Purpose: To develop a Graphics Processing Unit (GPU) accelerated deterministic solution to the Linear Boltzmann Transport Equation (LBTE) for accurate dose calculations in radiotherapy (RT). A deterministic solution yields the potential for major speed improvements due to the sparse matrix-vector and vector-vector multiplications and would thus be of benefit to RT. Methods: In order to leverage the massively parallel architecture of GPUs, the first-order LBTE was reformulated as a second-order self-adjoint equation using the Least Squares Finite Element Method (LSFEM). This produces a symmetric positive-definite matrix which is efficiently solved using a parallelized conjugate gradient (CG) solver. The LSFEM formalism is applied in space, discrete ordinates is applied in angle, and the multigroup method is applied in energy. The final linear system of equations produced is tightly coupled in space and angle. Our code, written in CUDA-C, was benchmarked on an Nvidia GeForce TITAN-X GPU against an Intel i7-6700K CPU. A spatial mesh of 30,950 tetrahedral elements was used with an S4 angular approximation. Results: To avoid repeating a full, computationally intensive finite-element matrix assembly at each multigroup energy, a novel mapping algorithm was developed which minimizes the operations required at each energy. Additionally, a parallelized memory mapping for the Kronecker product between the sparse spatial and angular matrices, including Dirichlet boundary conditions, was created. Atomicity is preserved by graph-coloring overlapping nodes into separate kernel launches. The one-time mapping calculations for matrix assembly, Kronecker product, and boundary-condition application took 452±1 ms on the GPU. Matrix assembly for 16 energy groups took 556±3 s on the CPU and 358±2 ms on the GPU using the mappings developed. The CG solver took 93±1 s on the CPU and 468±2 ms on the GPU. Conclusion: Three computationally intensive subroutines in deterministically solving the LBTE have been formulated on the GPU, resulting in a two-orders-of-magnitude speedup. Funding support from Natural Sciences and Engineering Research Council and Alberta Innovates Health Solutions. Dr. Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license the Alberta bi-planar linac MR for commercialization).
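    The Krylov iteration at the heart of the solver is standard conjugate gradient; the GPU version parallelizes exactly its sparse matrix-vector and vector-vector products. A serial sketch on a toy symmetric positive-definite system:

      import numpy as np
      import scipy.sparse as sp

      def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
          """Unpreconditioned CG for a symmetric positive-definite system."""
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      n = 100                                   # toy SPD system: 1D Laplacian
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
      x = conjugate_gradient(A, np.ones(n))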

  15. Fuel burnup analysis for IRIS reactor using MCNPX and WIMS-D5 codes

    NASA Astrophysics Data System (ADS)

    Amin, E. A.; Bashter, I. I.; Hassan, Nabil M.; Mustafa, S. S.

    2017-02-01

    The International Reactor Innovative and Secure (IRIS) reactor is a compact power reactor with special design features, including an Integral Fuel Burnable Absorber (IFBA). The core is heterogeneous both axially and radially. This work provides a full-core burnup analysis for the IRIS reactor using the MCNPX and WIMS-D5 codes. Criticality calculations, radial and axial power distributions, and the nuclear peaking factor at different stages of burnup were studied. Effective multiplication factor values for the core were estimated by coupling the MCNPX code with the WIMS-D5 code and compared with SAS2H/KENO-V values at different stages of burnup. The two calculation codes show good agreement and correlation. The radial and axial powers for the full core were also compared with published SAS2H/KENO-V results (at the beginning and end of reactor operation); the behavior of both the radial and axial power distributions is quite similar to the published SAS2H/KENO-V data. The peaking factor values estimated in the present work are close to the values calculated with the SAS2H/KENO-V code.

  16. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous-energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe two new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems: simple_ace.pl and simple_ace_mg.pl.

  17. Travelling Wave Solutions in Multigroup Age-Structured Epidemic Models

    NASA Astrophysics Data System (ADS)

    Ducrot, Arnaut; Magal, Pierre; Ruan, Shigui

    2010-01-01

    Age-structured epidemic models have been used to describe either the age of individuals or the age of infection of certain diseases and to determine how these characteristics affect the outcomes and consequences of epidemiological processes. Most results on age-structured epidemic models focus on the existence, uniqueness, and convergence to disease equilibria of solutions. In this paper we investigate the existence of travelling wave solutions in a deterministic age-structured model describing the circulation of a disease within a population of multiple groups. Individuals of each group move by a random walk, modelled by classical Fickian diffusion, and are classified into two subclasses, susceptible and infective. A susceptible individual in a given group can be criss-cross infected by direct contact with infective individuals of possibly any group, and this transmission process can depend upon the disease age of infected individuals. The goal of this paper is to provide sufficient conditions that ensure the existence of travelling wave solutions for the age-structured epidemic model. The case of two population groups is investigated numerically, with application to the criss-cross transmission of feline immunodeficiency virus (FIV) and some sexually transmitted diseases.

  18. Validation of updated neutronic calculation models proposed for Atucha-II PHWR. Part I: Benchmark comparisons of WIMS-D5 and DRAGON cell and control rod parameters with MCNP5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mollerach, R.; Leszczynski, F.; Fink, J.

    2006-07-01

    In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the previous ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. In the reactor physics area, a revision and update of calculation methods and models (cell, supercell and reactor) was recently carried out, covering cell, supercell (control rod) and core calculations. To validate the new models, benchmark comparisons were made against Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes against MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)

  19. A Comparison of Monte Carlo and Deterministic Solvers for keff and Sensitivity Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haeck, Wim; Parsons, Donald Kent; White, Morgan Curtis

    Verification and validation of our solutions for calculating the neutron reactivity of nuclear materials is a key issue for many applications, including criticality safety, research reactors, power reactors, and nuclear security. Neutronics codes solve variations of the Boltzmann transport equation. The two main variants are Monte Carlo and deterministic solutions, e.g. the MCNP [1] and PARTISN [2] codes, respectively. There have been many studies over the decades that examined the accuracy of such solvers, and the general conclusion is that when the problems are well posed, either solver can produce accurate results. However, the devil is always in the details. The current study examines the issue of self-shielding and the stress it puts on deterministic solvers. Most Monte Carlo neutronics codes use continuous-energy descriptions of the neutron interaction data that are not subject to this effect. The issue of self-shielding arises from the discretisation of the data used by deterministic solutions. Multigroup data used in these solvers are the average cross section and scattering parameters over an energy range. Resonances in cross sections can change the likelihood of interaction by one to three orders of magnitude over a small energy range. Self-shielding is the numerical effect by which the average cross section in groups with strong resonances can be strongly affected as neutrons within the material are preferentially absorbed or scattered out of the resonance energies. This affects both the average cross section and the scattering matrix.
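    The self-shielding effect itself is easy to demonstrate numerically: weighting a resonance with the narrow-resonance flux ~ 1/(sigma(E) + sigma0) depresses the group average as the background cross section per atom, sigma0, decreases. An illustrative single-resonance example with made-up numbers:

      import numpy as np

      E = np.linspace(5.0, 8.0, 200001)                          # eV
      sigma = 10.0 + 5000.0 / (1.0 + ((E - 6.67) / 0.02) ** 2)   # barns, Lorentzian resonance

      dilute = np.trapz(sigma, E) / (E[-1] - E[0])               # flat-flux (infinite-dilution) average
      for sigma0 in (1e4, 1e2, 1e1):
          w = 1.0 / (sigma + sigma0)                             # narrow-resonance flux weight
          shielded = np.trapz(sigma * w, E) / np.trapz(w, E)
          print(f"sigma0 = {sigma0:7.0f} b: sigma_g = {shielded:7.2f} b (dilute {dilute:.2f} b)")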

  20. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features in SCALE 6.2.

  1. AMPX: a modular code system for generating coupled multigroup neutron-gamma libraries from ENDF/B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, N.M.; Lucius, J.L.; Petrie, L.M.

    1976-03-01

    AMPX is a modular system for producing coupled multigroup neutron-gamma cross-section sets. Basic neutron and gamma cross-section data for AMPX are obtained from ENDF/B libraries. Most commonly used operations required to generate and collapse multigroup cross-section sets are provided in the system. AMPX is flexibly dimensioned; neutron group structures, gamma group structures, and the expansion orders used to represent anisotropic processes are all arbitrary and limited only by available computer core and budget. The basic processes provided will (1) generate multigroup neutron cross sections; (2) generate multigroup gamma cross sections; (3) generate gamma yields for gamma-producing neutron interactions; (4) combine neutron cross sections, gamma cross sections, and gamma yields into final 'coupled sets'; (5) perform one-dimensional discrete ordinates transport or diffusion theory calculations for neutrons and gammas and, on option, collapse the cross sections to a broad-group structure, using the one-dimensional results as weighting functions; (6) plot cross sections, on option, to facilitate the 'evaluation' of a particular multigroup set of data; (7) update and maintain multigroup cross-section libraries in such a manner as to make it not only easy to combine new data with previously processed data but also to do it in a single pass on the computer; and (8) output multigroup cross sections in convenient formats for other codes. (auth)

  2. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu

    2011-07-01

    The Graphics Processing Unit (GPU), originally developed for real-time, high-definition 3D graphics in computer games, now provides substantial computing power for scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution of the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU-accelerated simulation of one-energy-group, time-independent, deterministic discrete-ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with vacuum boundary conditions. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort, and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip with no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
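    In one dimension, the sweep kernel being accelerated is a diamond-difference march across cells for each discrete ordinate, wrapped in a source iteration on the scattering term. A one-group slab sketch of that scheme, a simplified analogue of Sweep3D rather than its actual code:

      import numpy as np

      def source_iteration_1d(sigma_t, sigma_s, q, dx, mu, wgt, tol=1e-8):
          """One-group 1D slab Sn: diamond-difference sweeps plus source
          iteration, vacuum boundaries. Quadrature weights sum to 2."""
          n = len(sigma_t)
          phi = np.zeros(n)
          for _ in range(1000):
              src = 0.5 * (sigma_s * phi + q)            # isotropic source per ordinate
              phi_new = np.zeros(n)
              for mu_m, w_m in zip(mu, wgt):
                  psi_in = 0.0                           # vacuum incoming flux
                  cells = range(n) if mu_m > 0 else range(n - 1, -1, -1)
                  for i in cells:
                      a = 2.0 * abs(mu_m) / dx
                      psi_c = (src[i] + a * psi_in) / (sigma_t[i] + a)
                      psi_in = 2.0 * psi_c - psi_in      # diamond-difference closure
                      phi_new[i] += w_m * psi_c
              converged = np.max(np.abs(phi_new - phi)) < tol
              phi = phi_new
              if converged:
                  break
          return phi

      mu, wgt = np.polynomial.legendre.leggauss(4)       # S4 quadrature
      n = 50
      phi = source_iteration_1d(np.full(n, 1.0), np.full(n, 0.5),
                                np.full(n, 1.0), 0.2, mu, wgt)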

  3. Deterministic methods for multi-control fuel loading optimization

    NASA Astrophysics Data System (ADS)

    Rahman, Fariz B. Abdul

    We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power-peaking constraint. The optimality conditions are derived for a multi-dimensional, multi-group optimal control problem via the calculus of variations. Because the Hamiltonian is linear in the control, the optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton-step formulation to obtain the optimal control. We are able to satisfy the power-peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power-peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.

  4. Production and testing of the ENEA-Bologna VITJEFF32.BOLIB (JEFF-3.2) multi-group (199 n + 42 γ) cross section library in AMPX format for nuclear fission applications

    NASA Astrophysics Data System (ADS)

    Pescarini, Massimo; Orsi, Roberto; Frisoni, Manuela

    2017-09-01

    The ENEA-Bologna Nuclear Data Group produced the VITJEFF32.BOLIB multi-group coupled neutron/photon (199 n + 42 γ) cross-section library in AMPX format, based on the OECD-NEA Data Bank JEFF-3.2 evaluated nuclear data library. VITJEFF32.BOLIB was conceived for nuclear fission applications as the European counterpart of the similar ORNL VITAMIN-B7 library (ENDF/B-VII.0 data). VITJEFF32.BOLIB has the same neutron and photon energy group structure as the former ORNL VITAMIN-B6 reference library (ENDF/B-VI.3 data) and was produced using similar data-processing methodologies, based on the LANL NJOY-2012.53 nuclear data processing system for the generation of the nuclide cross-section data files in GENDF format. The ENEA-Bologna 2007 Revision of the ORNL SCAMPI nuclear data processing system was then used for the conversion into the AMPX format. VITJEFF32.BOLIB contains processed cross-section data files for 190 nuclides, obtained through the Bondarenko (f-factor) method for the treatment of neutron resonance self-shielding and temperature effects. Collapsed working libraries of self-shielded cross sections in FIDO-ANISN format, used by the deterministic transport codes of the ORNL DOORS system, can be generated from VITJEFF32.BOLIB with the cited SCAMPI version. This paper describes the methodology and specifications of the data processing performed and presents some results of the VITJEFF32.BOLIB validation.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haghighat, A.; Sjoden, G.E.; Wagner, J.C.

    In the past 10 yr, the Penn State Transport Theory Group (PSTTG) has concentrated its efforts on developing accurate and efficient particle transport codes to address increasing needs for efficient and accurate simulation of nuclear systems. The PSTTG's efforts have primarily focused on shielding applications that are generally treated using multigroup, multidimensional, discrete ordinates (Sn) deterministic and/or statistical Monte Carlo methods. The difficulty with the existing public codes is that they require significant (impractical) computation time for simulation of complex three-dimensional (3-D) problems. For the Sn codes, the large memory requirements are handled through the use of scratch files (i.e., read-from and write-to-disk), which significantly increases the necessary execution time. Further, the lack of flexible features and/or utilities for preparing input and processing output makes these codes difficult to use. The Monte Carlo method becomes impractical because variance reduction (VR) methods have to be used, and normally the determination of the necessary parameters for the VR methods is very difficult and time consuming for a complex 3-D problem. For the deterministic method, the authors have developed the 3-D parallel PENTRAN (Parallel Environment Neutral-particle TRANsport) code system that, in addition to a parallel 3-D Sn solver, includes pre- and postprocessing utilities. PENTRAN provides for full phase-space decomposition, memory partitioning, and parallel input/output to provide the capability of solving large problems in a relatively short time. Besides having a modular parallel structure, PENTRAN has several unique new formulations and features that are necessary for achieving high parallel performance. For the Monte Carlo method, the major difficulty currently facing most users is the selection of an effective VR method and its associated parameters. For complex problems, this process is generally very time consuming and may be complicated by the possibility of biasing the results. In an attempt to eliminate this problem, the authors have developed the A3MCNP (automated adjoint accelerated MCNP) code, which automatically prepares parameters for source and transport biasing within a weight-window VR approach based on the Sn adjoint function. A3MCNP prepares the necessary input files for performing multigroup, 3-D adjoint Sn calculations using TORT.

  6. Depletion Calculations Based on Perturbations. Application to the Study of a Rep-Like Assembly at Beginning of Cycle with TRIPOLI-4®.

    NASA Astrophysics Data System (ADS)

    Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh

    2014-06-01

    For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, to simulate the neutron transport, with deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid these repetitive and time-expensive Monte Carlo simulations and to replace them with perturbation calculations: the successive burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. We first present this method and provide details on the perturbative technique used, namely correlated sampling. We then discuss the implementation of this method in the TRIPOLI-4® code, as well as the calculation scheme able to bring a significant speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like (PWR-like) assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.
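    Correlated sampling reuses one set of random histories for both the nominal and the perturbed system, re-weighting each tally by the likelihood ratio of the two cross-section sets, so the difference between the estimates has low variance. A minimal transmission example for a purely absorbing slab; this is illustrative and far simpler than the TRIPOLI-4 implementation.

      import numpy as np

      rng = np.random.default_rng(0)

      def transmission_correlated(sigma0, sigma1, L, n_hist=100_000):
          """Transmission through an absorbing slab of thickness L, estimated
          with histories sampled at sigma0; the sigma1 tally re-weights each
          transmitted history by exp(-(sigma1 - sigma0) * L)."""
          t0 = t1 = 0.0
          for _ in range(n_hist):
              if rng.exponential(1.0 / sigma0) > L:        # history transmitted
                  t0 += 1.0
                  t1 += np.exp(-(sigma1 - sigma0) * L)     # likelihood-ratio weight
          return t0 / n_hist, t1 / n_hist

      print(transmission_correlated(1.0, 1.1, 2.0))  # vs exp(-2)=0.135, exp(-2.2)=0.111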

  7. Release of Continuous Representation for S(α,β) ACE Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conlin, Jeremy Lloyd; Parsons, Donald Kent

    2014-03-20

    For low-energy neutrons, the default free-gas model for scattering cross sections is not always appropriate. Molecular effects or crystalline-structure effects can affect the neutron scattering cross sections. These effects are included in the S(α,β) thermal neutron scattering data and are tabulated in File 7 of the ENDF6 format files. S stands for scattering; α is a momentum-transfer variable and β is an energy-transfer variable. The S(α,β) cross sections can include coherent elastic scattering (no energy change for the neutron, but specific scattering angles), incoherent elastic scattering (no energy change for the neutron, but continuous scattering angles), and inelastic scattering (energy change for the neutron, and change in angle as well). Every S(α,β) material has inelastic scattering and may have either coherent or incoherent elastic scattering (but not both). Coherent elastic scattering cross sections have distinctive jagged-looking Bragg edges, whereas the other cross sections are much smoother. The evaluated files from the NNDC are processed locally in the THERMR module of NJOY. Data can be produced either for continuous-energy Monte Carlo codes (using ACER) or embedded in multi-group cross sections for deterministic (or even multi-group Monte Carlo) codes (using GROUPR). Currently, the S(α,β) files available for MCNP use discrete energy changes for inelastic scattering. That is, the scattered neutrons can only be emitted at specific energies, rather than across a continuous spectrum of energies. The discrete energies are chosen to preserve the average secondary neutron energy, i.e., in an integral sense, but the discrete treatment does not preserve any differential quantities in energy or angle.

  8. Importance of resonance interference effects in multigroup self-shielding calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stachowski, R.E.; Protsik, R.

    1995-12-31

    The impact of resonance interference on multigroup neutron cross sections is significant for the major isotopes in the fuel, indicating the importance of resonance interference in the computation of gadolinia burnout and plutonium buildup. The self-shielding factor method combined with the resonance interference factor (RIF) method effectively eliminates shortcomings in multigroup resonance calculations.

  9. Procedure to Generate the MPACT Multigroup Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog

    The CASL neutronics simulator MPACT is under development for coupled neutronics/thermal-hydraulics simulation of light water reactors. This document reviews the current procedure used to generate the MPACT multigroup library. Detailed methodologies and procedures are included for further discussion aimed at improving the MPACT multigroup library.

  10. Monte Carlo-based validation of neutronic methodology for EBR-II analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liaw, J.R.; Finck, P.J.

    1993-01-01

    The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory and is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC2-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.

  11. MC2-3: Multigroup Cross Section Generation Code for Fast Reactor Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Changho; Yang, Won Sik

    This paper presents the methods and performance of the MC2-3 code, a multigroup cross-section generation code for fast reactor analysis, developed to improve the resonance self-shielding and spectrum calculation methods of MC2-2 and to simplify the current multistep schemes for generating region-dependent broad-group cross sections. Using the basic neutron data from ENDF/B data files, MC2-3 solves the consistent P1 multigroup transport equation to determine the fundamental-mode spectra for use in generating multigroup neutron cross sections. A homogeneous medium or a heterogeneous slab or cylindrical unit-cell problem is solved at the ultrafine (2082) or hyperfine (~400,000) group level. In the resolved resonance range, pointwise cross sections are reconstructed with Doppler broadening at specified temperatures. The pointwise cross sections are used directly in the hyperfine group calculation, whereas for the ultrafine group calculation, self-shielded cross sections are prepared by numerical integration of the pointwise cross sections based upon the narrow resonance approximation. For both the hyperfine and ultrafine group calculations, unresolved resonances are self-shielded using the analytic resonance integral method. The ultrafine group calculation can also be performed for a two-dimensional whole-core problem to generate region-dependent broad-group cross sections. Verification tests have been performed using benchmark problems for various fast critical experiments, including Los Alamos National Laboratory critical assemblies; Zero-Power Reactor, Zero-Power Physics Reactor, and Bundesamt für Strahlenschutz experiments; the Monju start-up core; and the Advanced Burner Test Reactor. Verification and validation results with ENDF/B-VII.0 data indicated that eigenvalues from MC2-3/DIF3D agreed with Monte Carlo N-Particle 5 (MCNP5) or VIM Monte Carlo solutions within 200 pcm, and regionwise one-group fluxes were in good agreement with Monte Carlo solutions.

  12. CEPXS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-19

    CEPXS is a multigroup-Legendre cross-section generating code. The cross sections produced by CEPXS enable coupled electron-photon transport calculations to be performed with multigroup radiation transport codes, e.g. MITS and SCEPTRE. CEPXS generates multigroup-Legendre cross sections for photons, electrons and positrons over the energy range from 100 MeV down to 1.0 keV. The continuous slowing-down approximation is used for those electron interactions that result in small energy losses. The extended transport correction is applied to the forward-peaked elastic scattering cross section for electrons. A standard multigroup-Legendre treatment is used for the other coupled electron-photon cross sections. CEPXS extracts electron cross-section information from the DATAPAC data set and photon cross-section information from Biggs-Lighthill data. The model used for ionization/relaxation in CEPXS is essentially the same as that employed in ITS.

  13. Social comparison and perceived breach of psychological contract: their effects on burnout in a multigroup analysis.

    PubMed

    Cantisano, Gabriela Topa; Domínguez, J Francisco Morales; García, J Luis Caeiro

    2007-05-01

    This study focuses on the mediator role of social comparison in the relationship between perceived breach of psychological contract and burnout. A previous model showing the hypothesized effects of perceived breach on burnout, both direct and mediated, is proposed. The final model reached an optimal fit to the data and was confirmed through multigroup analysis using a sample of Spanish teachers (N = 401) belonging to preprimary, primary, and secondary schools. Multigroup analyses showed that the model fit all groups adequately.

  14. Parallel iterative methods: applications in neutronics and fluid mechanics

    NASA Astrophysics Data System (ADS)

    Qaddouri, Abdessamad

    In this thesis, parallel computing is applied first to neutronics and then to fluid mechanics. In each of these applications, iterative methods are used to solve the system of algebraic equations resulting from the discretization of the equations of the physical problem. In the neutronics problem, the computation of the collision probability (CP) matrices and a multigroup iterative scheme using an inverse power method are parallelized. In the fluid mechanics problem, a finite element code using a preconditioned GMRES-type iterative algorithm is parallelized. This thesis is presented as six articles followed by a conclusion. The first five articles deal with the neutronics applications and trace the evolution of our work in this field: first a parallel computation of the CP matrices and a parallel multigroup algorithm tested on a one-dimensional problem (article 1), then two parallel algorithms, one multiregion and one multigroup, tested on two-dimensional problems (articles 2-3). These first two stages are followed by the application of two acceleration techniques, neutron rebalancing and residual minimization, to the two parallel algorithms (article 4). Finally, the multigroup algorithm and the parallel computation of the CP matrices are implemented in the production code DRAGON, where the tests are more realistic and can be three-dimensional (article 5). The sixth article (article 6), devoted to the fluid mechanics application, deals with the parallelization of a finite element code, FES, in which the graph partitioner METIS and the PSPARSLIB library are used.
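    The fluid-mechanics kernel described, a preconditioned GMRES solve of a sparse nonsymmetric finite-element system, can be sketched in a few lines. Here an incomplete-LU factorization stands in for the parallel preconditioners provided by PSPARSLIB, and the matrix is a toy stand-in for the finite-element system.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import gmres, spilu, LinearOperator

      n = 200                                   # toy nonsymmetric sparse system
      A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      ilu = spilu(A)                            # ILU preconditioner
      M = LinearOperator((n, n), matvec=ilu.solve)

      x, info = gmres(A, b, M=M)
      print("converged" if info == 0 else f"gmres info = {info}")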

  15. Taking into account the impact of attrition on the assessment of response shift and true change: a multigroup structural equation modeling approach.

    PubMed

    Verdam, Mathilde G E; Oort, Frans J; van der Linden, Yvette M; Sprangers, Mirjam A G

    2015-03-01

    Missing data due to attrition present a challenge for the assessment and interpretation of change and response shift in HRQL outcomes. The objective was to handle such missingness and to assess response shift and 'true change' with an attrition-based multigroup structural equation modeling (SEM) approach. Functional limitations and health impairments were measured in 1,157 cancer patients, who were treated with palliative radiotherapy for painful bone metastases, before treatment [time (T) 0], every week after treatment (T1 through T12), and then monthly for up to 2 years (T13 through T24). To handle missing data due to attrition, the SEM procedure was extended to a multigroup approach in which we distinguished three groups: short survival (3-5 measurements), medium survival (6-12 measurements), and long survival (>12 measurements). Attrition after the third, sixth, and 13th measurement occasions was 11, 24, and 41%, respectively. Results show that patterns of change in functional limitations and health impairments differ between patients with short, medium, or long survival. Moreover, three response-shift effects were detected: recalibration of 'pain' and 'sickness' and reprioritization of 'physical functioning.' If response-shift effects had not been taken into account, functional limitations and health impairments would generally have been underestimated across measurements. The multigroup SEM approach enables the analysis of data from patients with different patterns of missing data due to attrition. This approach allows not only for the detection of response shift and the assessment of true change across measurements, but also for the detection of differences in response shift and true change across groups of patients with different attrition rates.

  16. WWER-1000 core and reflector parameters investigation in the LR-0 reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaritsky, S. M.; Alekseev, N. I.; Bolshagin, S. N.

    2006-07-01

    Measurements and calculations carried out in the core and reflector of the WWER-1000 mock-up are discussed: (1) the determination of the pin-to-pin power distribution in the core by means of gamma-scanning of fuel pins and pin-to-pin calculations with the Monte Carlo code MCU-REA and the diffusion codes MOBY-DICK (with WIMS-D4 cell constants preparation) and RADAR; (2) fast neutron spectra measurements by the proton recoil method inside the experimental channel in the core and inside the channel in the baffle, with corresponding calculations in the P3S8 approximation of the discrete ordinates method using the DORT code and the BUGLE-96 library; (3) neutron spectra evaluations (adjustment) in the same channels in the energy region 0.5 eV-18 MeV, based on activation and solid-state track detector measurements. (authors)

  17. Application of the discrete generalized multigroup method to ultra-fine energy mesh in infinite medium calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, N. A.; Forget, B.

    2012-07-01

    The Discrete Generalized Multigroup (DGM) method uses discrete Legendre orthogonal polynomials to expand the energy dependence of the multigroup neutron transport equation. This allows a solution on a fine energy mesh to be approximated for a cost comparable to a solution on a coarse energy mesh. The DGM method is applied to an ultra-fine energy mesh (14,767 groups) to avoid using self-shielding methodologies without introducing the cost usually associated with such energy discretization. Results show DGM to converge to the reference ultra-fine solution after a small number of recondensation steps for multiple infinite medium compositions. (authors)
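
    A minimal sketch of the expansion idea behind DGM: build a discrete orthogonal polynomial basis over the fine groups inside a coarse group, project the fine-group spectrum onto a few low-order moments, and reconstruct. The QR-based basis construction and the toy 50-group spectrum below are assumptions for illustration, not the paper's implementation:

    ```python
    import numpy as np

    def discrete_orthonormal_basis(n_fine, order):
        # Discrete orthogonal polynomials over the fine groups of one coarse group,
        # built by QR-orthogonalizing a Vandermonde matrix (columns 1, x, x^2, ...).
        x = np.linspace(-1.0, 1.0, n_fine)
        vander = np.vander(x, order + 1, increasing=True)
        q, _ = np.linalg.qr(vander)
        return q                                   # (n_fine, order + 1), orthonormal

    # Hypothetical fine-group spectrum within one coarse group (50 fine groups).
    fine_flux = 1.0 / (1.0 + np.linspace(0.0, 10.0, 50))

    basis = discrete_orthonormal_basis(50, order=3)
    moments = basis.T @ fine_flux                  # project onto low-order moments
    reconstructed = basis @ moments                # truncated reconstruction
    print("max reconstruction error:", np.max(np.abs(reconstructed - fine_flux)))
    ```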

  18. Comparison of a 3-D multi-group SN particle transport code with Monte Carlo for intracavitary brachytherapy of the cervix uteri.

    PubMed

    Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas

    2009-12-03

    A patient dose distribution was calculated by a 3D multi-group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group SN particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks-exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.

  19. Sex Differences in Latent Cognitive Abilities Ages 5 to 17: Evidence from the Differential Ability Scales--Second Edition

    ERIC Educational Resources Information Center

    Keith, Timothy Z.; Reynolds, Matthew R.; Roberts, Lisa G.; Winter, Amanda L.; Austin, Cynthia A.

    2011-01-01

    Sex differences in the latent general and broad cognitive abilities underlying the Differential Ability Scales, Second Edition were investigated for children and youth ages 5 through 17. Multi-group mean and covariance structural equation modeling was used to investigate sex differences in latent cognitive abilities as well as changes in these…

  20. Testing Specific Hypotheses Concerning Latent Group Differences in Multi-group Covariance Structure Analysis with Structured Means.

    ERIC Educational Resources Information Center

    Dolan, Conor V.; Molenaar, Peter C. M.

    1994-01-01

    In multigroup covariance structure analysis with structured means, the traditional latent selection model is formulated as a special case of phenotypic selection. Illustrations with real and simulated data demonstrate how one can test specific hypotheses concerning selection on latent variables. (SLD)

  1. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, S. Z.; Ouisloumen, M.; Ougouag, A. M.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

  2. Familial Correlates of Overt and Relational Aggression between Young Adolescent Siblings

    ERIC Educational Resources Information Center

    Yu, Jeong Jin; Gamble, Wendy C.

    2008-01-01

    Multi-group confirmatory factor analysis and multi-group structural equation modeling were used to test correlates of overt and relational aggression between young adolescent siblings across four groups (i.e., male/male, male/female, female/male, and female/female sibling pairs), using 433 predominately European American families. Similar patterns…

  3. Testing Measurement Invariance in the Target Rotated Multigroup Exploratory Factor Model

    ERIC Educational Resources Information Center

    Dolan, Conor V.; Oort, Frans J.; Stoel, Reinoud D.; Wicherts, Jelte M.

    2009-01-01

    We propose a method to investigate measurement invariance in the multigroup exploratory factor model, subject to target rotation. We consider both oblique and orthogonal target rotation. This method has clear advantages over other approaches, such as the use of congruence measures. We demonstrate that the model can be implemented readily in the…

  4. Construction, classification and parametrization of complex Hadamard matrices

    NASA Astrophysics Data System (ADS)

    Szöllősi, Ferenc

    To improve the design of nuclear systems, high-fidelity neutron fluxes are required. Leadership-class machines provide platforms on which very large problems can be solved. Computing such fluxes efficiently requires numerical methods with good convergence properties and algorithms that can scale to hundreds of thousands of cores. Many 3-D deterministic transport codes are decomposable in space and angle only, limiting them to tens of thousands of cores. Most codes rely on methods such as Gauss-Seidel for fixed source problems and power iteration for eigenvalue problems, which can be slow to converge for challenging problems like those with highly scattering materials or high dominance ratios. Three methods have been added to the 3-D SN transport code Denovo that are designed to improve convergence and enable the full use of cutting-edge computers. The first is a multigroup Krylov solver that converges more quickly than Gauss-Seidel and parallelizes the code in energy such that Denovo can use hundreds of thousands of cores effectively. The second is Rayleigh quotient iteration (RQI), an old method applied in a new context. This eigenvalue solver finds the dominant eigenvalue in a mathematically optimal way and should converge in fewer iterations than power iteration. RQI creates energy-block-dense equations that the new Krylov solver treats efficiently. However, RQI can have convergence problems because it creates poorly conditioned systems. This can be overcome with preconditioning. The third method is a multigrid-in-energy preconditioner. The preconditioner takes advantage of the new energy decomposition because the grids are in energy rather than space or angle. The preconditioner greatly reduces iteration count for many problem types and scales well in energy. It also allows RQI to be successful for problems it could not solve otherwise. The methods added to Denovo accomplish the goals of this work. They converge in fewer iterations than traditional methods and enable the use of hundreds of thousands of cores. Each method can be used individually, with the multigroup Krylov solver and multigrid-in-energy preconditioner being particularly successful on their own. The largest benefit, though, comes from using these methods in concert.
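
    As a toy illustration of why RQI is attractive as an eigenvalue solver, the sketch below compares power iteration with Rayleigh quotient iteration on a small dense symmetric matrix, a hypothetical stand-in for the transport eigenproblem (RQI converges to the eigenpair nearest its evolving shift). The shifted solve in each RQI step produces exactly the poorly conditioned systems the abstract says preconditioning must handle:

    ```python
    import numpy as np

    def power_iteration(a, iters=500):
        # Converges linearly, at a rate set by the dominance ratio.
        x = np.ones(a.shape[0])
        for _ in range(iters):
            x = a @ x
            x /= np.linalg.norm(x)
        return x @ a @ x

    def rayleigh_quotient_iteration(a, iters=8):
        # Converges cubically to the eigenpair nearest the evolving shift sigma.
        n = a.shape[0]
        x = np.ones(n) / np.sqrt(n)
        sigma = x @ a @ x
        for _ in range(iters):
            # This shifted system is nearly singular close to convergence --
            # the ill-conditioning that motivates the preconditioned Krylov solver.
            x = np.linalg.solve(a - sigma * np.eye(n), x)
            x /= np.linalg.norm(x)
            sigma = x @ a @ x
        return sigma

    rng = np.random.default_rng(0)
    b = rng.standard_normal((50, 50))
    a = b + b.T                                    # symmetric test matrix
    print(power_iteration(a), rayleigh_quotient_iteration(a))
    ```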

  5. Use of the ETA-1 reactor for the validation of the multi-group APOLLO2-MORET 5 code and the Monte Carlo continuous energy MORET 5 code

    NASA Astrophysics Data System (ADS)

    Leclaire, N.; Cochet, B.; Le Dauphin, F. X.; Haeck, W.; Jacquet, O.

    2014-06-01

    The present paper aims at providing experimental validation for the use of the MORET 5 code for advanced reactor concepts involving thorium and heavy water. It therefore constitutes an opportunity to test and improve the thermal-scattering data of heavy water and also to test the recent implementation of probability tables in the MORET 5 code.

  6. A lumped parameter method of characteristics approach and multigroup kernels applied to the subgroup self-shielding calculation in MPACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane G.; Liu, Yuxuan; Collins, Benjamin S.

    An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code MPACT is currently using the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels into the MOC solvers in MPACT have reduced runtime by roughly 2×. Applying the same concepts for self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a and (2) a two-dimensional quarter-core slice known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Furthermore, given these performance benefits, these approaches have been adopted as the default in MPACT.

  7. A lumped parameter method of characteristics approach and multigroup kernels applied to the subgroup self-shielding calculation in MPACT

    DOE PAGES

    Stimpson, Shane G.; Liu, Yuxuan; Collins, Benjamin S.; ...

    2017-07-17

    An essential component of the neutron transport solver is the resonance self-shielding calculation used to determine equivalence cross sections. The neutron transport code MPACT is currently using the subgroup self-shielding method, in which the method of characteristics (MOC) is used to solve purely absorbing fixed-source problems. Recent efforts incorporating multigroup kernels into the MOC solvers in MPACT have reduced runtime by roughly 2×. Applying the same concepts for self-shielding and developing a novel lumped parameter approach to MOC, substantial improvements have also been made to the self-shielding computational efficiency without sacrificing any accuracy. These new multigroup and lumped parameter capabilities have been demonstrated on two test cases: (1) a single lattice with quarter symmetry known as VERA (Virtual Environment for Reactor Applications) Progression Problem 2a and (2) a two-dimensional quarter-core slice known as Problem 5a-2D. From these cases, self-shielding computational time was reduced by roughly 3–4×, with a corresponding 15–20% increase in overall memory burden. An azimuthal angle sensitivity study also shows that only half as many angles are needed, yielding an additional speedup of 2×. In total, the improvements yield roughly a 7–8× speedup. Furthermore, given these performance benefits, these approaches have been adopted as the default in MPACT.

  8. ANALYSIS OF THE MOMENTS METHOD EXPERIMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kloster, R.L.

    1959-09-01

    Monte Carlo calculations show the effects of a plane water-air boundary on both fast neutron and gamma dose rates. A multigroup diffusion theory calculation for a reactor source shows the effects of a plane water-air boundary on the thermal neutron dose rate. The results of the Monte Carlo and multigroup calculations are compared with experimental values. The predicted boundary effect for fast neutrons of 7.3% agrees within 16% with the measured effect of 6.3%. The gamma detector did not measure a boundary effect because it lacked sensitivity at low energies. However, the effect predicted for gamma rays, of 5 to 10%, is as large as that for neutrons. An estimate of the boundary effect for thermal neutrons from a PoBe source is obtained from the results of multigroup diffusion theory calculations for a reactor source. The calculated boundary effect agrees within 13% with the measured values. (auth)

  9. SU-F-T-111: Investigation of the Attila Deterministic Solver as a Supplement to Monte Carlo for Calculating Out-Of-Field Radiotherapy Dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mille, M; Lee, C; Failla, G

    Purpose: To use the Attila deterministic solver as a supplement to Monte Carlo for calculating out-of-field organ dose in support of epidemiological studies looking at the risks of second cancers. Supplemental dosimetry tools are needed to speed up dose calculations for studies involving large-scale patient cohorts. Methods: Attila is a multi-group discrete ordinates code which can solve the 3D photon-electron coupled linear Boltzmann radiation transport equation on a finite-element mesh. Dose is computed by multiplying the calculated particle flux in each mesh element by a medium-specific energy deposition cross-section. The out-of-field dosimetry capability of Attila is investigated by comparing average organ dose to that which is calculated by Monte Carlo simulation. The test scenario consists of a 6 MV external beam treatment of a female patient with a tumor in the left breast. The patient is simulated by a whole-body adult reference female computational phantom. Monte Carlo simulations were performed using MCNP6 and XVMC. Attila can export a tetrahedral mesh for MCNP6, allowing for a direct comparison between the two codes. The Attila and Monte Carlo methods were also compared in terms of calculation speed and complexity of simulation setup. A key prerequisite for this work was the modeling of a Varian Clinac 2100 linear accelerator. Results: The solid mesh of the torso part of the adult female phantom for the Attila calculation was prepared using the CAD software SpaceClaim. Preliminary calculations suggest that Attila is a user-friendly software tool which shows great promise for our intended application. Computational performance is related to the number of tetrahedral elements included in the Attila calculation. Conclusion: Attila is being explored as a supplement to the conventional Monte Carlo radiation transport approach for performing retrospective patient dosimetry. The goal is for the dosimetry to be sufficiently accurate for use in retrospective epidemiological investigations.
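
    The abstract states that dose is obtained by multiplying the flux in each mesh element by a medium-specific energy deposition cross-section. A minimal sketch of that final step only (array names and shapes are hypothetical, not Attila's API):

    ```python
    import numpy as np

    def element_dose(flux, kappa, medium_of):
        # flux: (groups, elements) multigroup flux per mesh element
        # kappa: (media, groups) energy-deposition cross sections per medium
        # medium_of: (elements,) medium index of each mesh element
        return np.einsum("ge,ge->e", flux, kappa[medium_of].T)

    # Toy usage: 3 energy groups, 4 mesh elements, 2 media.
    flux = np.random.default_rng(0).random((3, 4))
    kappa = np.array([[0.10, 0.20, 0.30],
                      [0.05, 0.10, 0.15]])
    print(element_dose(flux, kappa, np.array([0, 0, 1, 1])))
    ```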

  10. Testing for Two-Way Interactions in the Multigroup Common Factor Model

    ERIC Educational Resources Information Center

    van Smeden, Maarten; Hessen, David J.

    2013-01-01

    In this article, a 2-way multigroup common factor model (MG-CFM) is presented. The MG-CFM can be used to estimate interaction effects between 2 grouping variables on 1 or more hypothesized latent variables. For testing the significance of such interactions, a likelihood ratio test is presented. In a simulation study, the robustness of the…

  11. Using Multi-Group Confirmatory Factor Analysis to Evaluate Cross-Cultural Research: Identifying and Understanding Non-Invariance

    ERIC Educational Resources Information Center

    Brown, Gavin T. L.; Harris, Lois R.; O'Quin, Chrissie; Lane, Kenneth E.

    2017-01-01

    Multi-group confirmatory factor analysis (MGCFA) allows researchers to determine whether a research inventory elicits similar response patterns across samples. If statistical equivalence in responding is found, then scale score comparisons become possible and samples can be said to be from the same population. This paper illustrates the use of…

  12. The Problem of Convergence and Commitment in Multigroup Evaluation Planning.

    ERIC Educational Resources Information Center

    Hausken, Chester A.

    This paper outlines a model for multigroup evaluation planning in a rural-education setting wherein the commitment to the structure necessary to evaluate a program is needed on the part of a research and development laboratory, the state departments of education, county supervisors, and the rural schools. To bridge the gap between basic research,…

  13. Exploring Student, Family, and School Predictors of Self-Determination Using NLTS2 Data

    ERIC Educational Resources Information Center

    Shogren, Karrie A.; Garnier Villarreal, Mauricio; Dowsett, Chantelle; Little, Todd D.

    2016-01-01

    This study conducted secondary analysis of data from the National Longitudinal Transition Study-2 (NLTS2) to examine the degree to which student, family, and school constructs predicted self-determination outcomes. Multi-group structural equation modeling was used to examine predictive relationships between 5 student, 4 family, and 7 school…

  14. Exploring Student, Family, and School Predictors of Self-Determination Using NLTS2 Data

    ERIC Educational Resources Information Center

    Shogren, Karrie A.; Garnier Villarreal, Mauricio; Dowsett, Chantelle; Little, Todd D.

    2016-01-01

    This study conducted secondary analysis of data from the National Longitudinal Transition Study-2 (NLTS2) to examine the degree to which student, family, and school constructs predicted self-determination outcomes. Multi-group structural equation modeling was used to examine predictive relationships between 5 student, 4 family, and 7 school…

  15. SCALE Code System 6.2.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.

  16. Multi-Population Invariance with Dichotomous Measures: Combining Multi-Group and MIMIC Methodologies in Evaluating the General Aptitude Test in the Arabic Language

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.; Tsaousis, Ioannis; Al-harbi, Khaleel A.

    2015-01-01

    The purpose of the present study was to extend the model of measurement invariance by simultaneously estimating invariance across multiple populations in the dichotomous instrument case using multi-group confirmatory factor analytic and multiple indicator multiple causes (MIMIC) methodologies. Using the Arabic version of the General Aptitude Test…

  17. DMM: A MULTIGROUP, MULTIREGION ONE-SPACE-DIMENSIONAL COMPUTER PROGRAM USING NEUTRON DIFFUSION THEORY. PART II. DMM PROGRAM DESCRIPTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kavanagh, D.L.; Antchagno, M.J.; Egawa, E.K.

    1960-12-31

    Operating instructions are presented for DMM, a Remington Rand 1103A program using one-space-dimensional multigroup diffusion theory to calculate the reactivity or critical conditions and flux distribution of a multiregion reactor. Complete descriptions of the routines and problem input and output specifications are also included. (D.L.C.)
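
    As an illustration of the kind of calculation a one-dimensional multigroup diffusion program like DMM performs, here is a minimal two-group slab diffusion eigenvalue solver using finite differences and power iteration. The group constants are invented for the example and do not come from the report:

    ```python
    import numpy as np

    # Invented two-group constants for a bare 100 cm slab (illustration only).
    d = [1.4, 0.4]                  # diffusion coefficients, cm
    sig_rem = [0.025, 0.10]         # group-1 removal / group-2 absorption, 1/cm
    nu_sigf = [0.006, 0.15]         # fission production cross sections, 1/cm
    sig_s12 = 0.017                 # group 1 -> 2 downscatter, 1/cm
    width, n = 100.0, 200
    h = width / n

    def diffusion_matrix(dg, sig):
        # Three-point finite difference of -D d2/dx2 + sig with zero-flux edges.
        main = np.full(n, 2.0 * dg / h**2 + sig)
        off = np.full(n - 1, -dg / h**2)
        return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    a1 = diffusion_matrix(d[0], sig_rem[0])
    a2 = diffusion_matrix(d[1], sig_rem[1])
    phi1, phi2, k = np.ones(n), np.ones(n), 1.0
    for _ in range(200):            # power iteration on the fission source
        source = nu_sigf[0] * phi1 + nu_sigf[1] * phi2
        phi1 = np.linalg.solve(a1, source / k)
        phi2 = np.linalg.solve(a2, sig_s12 * phi1)
        k *= np.sum(nu_sigf[0] * phi1 + nu_sigf[1] * phi2) / np.sum(source)
    print("k_eff ~", round(k, 5))
    ```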

  18. COMPLETE DETERMINATION OF POLARIZATION FOR A HIGH-ENERGY DEUTERON BEAM (thesis)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Button, J

    1959-05-01

    The P1 multigroup code was written for the IBM-704 in order to determine the accuracy of the few-group diffusion scheme with various imposed conditions, and also to provide an alternate computational method when this scheme fails to be sufficiently accurate. The code solves for the spatially dependent multigroup flux, taking into account such nuclear phenomena as the slowing down of neutrons resulting from elastic and inelastic scattering, the removal of neutrons resulting from epithermal capture and fission resonances, and the regeneration of fast neutrons resulting from fissioning, which may occur in any of as many as 80 fast multigroups or in the one thermal group. The code will accept as input a physical description of the reactor (that is: slab, cylindrical, or spherical geometry; number of points and regions; composition description; group-dependent boundary conditions; transverse buckling; and mesh sizes) and a prepared library of nuclear properties of all the isotopes in each composition. The code will produce as output multigroup fluxes, currents, and isotopic slowing-down densities, in addition to pointwise and regionwise few-group macroscopic cross sections. (auth)

  19. Psychometric Evaluation of the 6-item Version of the Multigroup Ethnic Identity Measure with East Asian Adolescents in Canada

    PubMed Central

    Homma, Yuko; Zumbo, Bruno D.; Saewyc, Elizabeth M.; Wong, Sabrina T.

    2016-01-01

    We examined the psychometric properties of scores on a 6-item version of the Multigroup Ethnic Identity Measure (MEIM) among East Asian adolescents in Canada. A series of confirmatory factor analyses (CFA) was conducted for 4,190 East Asians who completed a provincial survey of students in grades 7 to 12. The MEIM measured highly correlated dimensions of ethnic identity (exploration and commitment). Further, multi-group CFA indicated that the scale measured the same constructs on the same metric across three age groups and across four groups with varying degrees of exposure to Canadian and East Asian cultures. The findings suggest the short version of the MEIM can be used to compare levels of ethnic identity across different age or acculturation groups. PMID:27833471

  20. Gray and multigroup radiation transport through 3D binary stochastic media with different sphere radii distributions

    DOE PAGES

    Olson, Gordon Lee

    2016-12-06

    Here, gray and multigroup radiation is transported through 3D media consisting of spheres randomly placed in a uniform background. Comparisons are made between using constant radii spheres and three different distributions of sphere radii. Because of the computational cost of 3D calculations, only the lowest angle order, n=1, is tested. If the mean chord length is held constant, using different radii distributions makes little difference. This is true for both gray and multigroup solutions. 3D transport solutions are compared to 2D and 1D solutions with the same mean chord lengths. 2D disk and 3D sphere media give solutions that are nearly identical while 1D slab solutions are fundamentally different.
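
    The control variable in the comparison above is the mean chord length of the sphere phase. For a convex body the mean chord is 4V/S, i.e. 4r/3 for a sphere of radius r, and for a polydisperse mix of radii it is (4/3)<r^3>/<r^2> (a standard convex-body result; the rescaling scheme below is an assumption for illustration, not the paper's procedure):

    ```python
    import numpy as np

    def mean_chord(radii):
        # Sphere-phase mean chord for a polydisperse mix: (4/3) <r^3> / <r^2>.
        return (4.0 / 3.0) * np.mean(radii**3) / np.mean(radii**2)

    rng = np.random.default_rng(7)
    distributions = {
        "constant": np.full(100_000, 1.0),
        "uniform": rng.uniform(0.5, 1.5, 100_000),
        "exponential": rng.exponential(1.0, 100_000),
    }
    target = mean_chord(distributions["constant"])   # 4/3 for unit spheres
    for name, radii in distributions.items():
        scaled = radii * target / mean_chord(radii)  # mean chord scales linearly in r
        print(f"{name:12s} mean chord after rescaling: {mean_chord(scaled):.4f}")
    ```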

  1. Gray and multigroup radiation transport through 3D binary stochastic media with different sphere radii distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, Gordon Lee

    Here, gray and multigroup radiation is transported through 3D media consisting of spheres randomly placed in a uniform background. Comparisons are made between using constant radii spheres and three different distributions of sphere radii. Because of the computational cost of 3D calculations, only the lowest angle order, n=1, is tested. If the mean chord length is held constant, using different radii distributions makes little difference. This is true for both gray and multigroup solutions. 3D transport solutions are compared to 2D and 1D solutions with the same mean chord lengths. 2D disk and 3D sphere media give solutions that are nearly identical while 1D slab solutions are fundamentally different.

  2. Longitudinal multigroup invariance analysis of the satisfaction with food-related life scale in university students.

    PubMed

    Schnettler, Berta; Miranda, Horacio; Miranda-Zapata, Edgardo; Salinas-Oñate, Natalia; Grunert, Klaus G; Lobos, Germán; Sepúlveda, José; Orellana, Ligia; Hueche, Clementina; Bonilla, Héctor

    2017-06-01

    This study examined longitudinal measurement invariance in the Satisfaction with Food-related Life (SWFL) scale using follow-up data from university students. We examined this measure of the SWFL in different groups of students, separated by various characteristics. Through non-probabilistic longitudinal sampling, 114 university students (65.8% female, mean age: 22.5) completed the SWFL questionnaire three times, over intervals of approximately one year. Confirmatory factor analysis was used to examine longitudinal measurement invariance. Two types of analysis were conducted: first, a longitudinal invariance by time, and second, a multigroup longitudinal invariance by sex, age, socio-economic status and place of residence during the study period. Results showed that the 3-item version of the SWFL exhibited strong longitudinal invariance (equal factor loadings and equal indicator intercepts). Longitudinal multigroup invariance analysis also showed that the 3-item version of the SWFL displays strong invariance by socio-economic status and place of residence during the study period over time. Nevertheless, it was only possible to demonstrate equivalence of the longitudinal factor structure among students of both sexes, and among those older and younger than 22 years. Generally, these findings suggest that the SWFL scale has satisfactory psychometric properties for longitudinal measurement invariance in university students with similar characteristics as the students that participated in this research. It is also possible to suggest that satisfaction with food-related life is associated with sex and age. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. The Power to Detect Sex Differences in IQ Test Scores Using Multi-Group Covariance and Means Structure Analyses

    ERIC Educational Resources Information Center

    Molenaar, Dylan; Dolan, Conor V.; Wicherts, Jelle M.

    2009-01-01

    Research into sex differences in general intelligence, g, has resulted in two opposite views. In the first view, a g-difference is nonexistent, while in the second view, g is associated with a male advantage. Past research using Multi-Group Covariance and Mean Structure Analysis (MG-CMSA) found no sex difference in g. This failure raised the…

  4. NASA-Lewis experiences with multigroup cross sections and shielding calculations

    NASA Technical Reports Server (NTRS)

    Lahti, G. P.

    1972-01-01

    The nuclear reactor shield analysis procedures employed at NASA-Lewis are described. Emphasis is placed on the generation, use, and testing of multigroup cross section data. Although coupled neutron and gamma ray cross section sets are useful in two dimensional Sn transport calculations, much insight has been gained from examination of uncoupled calculations. These have led to experimental and analytic studies of areas deemed to be of first order importance to reactor shield calculations. A discussion is given of problems encountered in using multigroup cross sections in the resolved resonance energy range. The addition to ENDF files of calculated and/or measured neutron-energy-dependent capture gamma ray spectra for shielding calculations is questioned for the resonance region. Anomalies inherent in two dimensional Sn transport calculations which may overwhelm any cross section discrepancies are illustrated.

  5. Scoping analysis of the Advanced Test Reactor using SN2ND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolters, E.; Smith, M.

    2012-07-26

    A detailed set of calculations was carried out for the Advanced Test Reactor (ATR) using the SN2ND solver of the UNIC code, which is part of the SHARP multi-physics code being developed under the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program in DOE-NE. The primary motivation of this work is to assess whether high fidelity deterministic transport codes can tackle coupled dynamics simulations of the ATR. The successful use of such codes in a coupled dynamics simulation can impact what experiments are performed and what power levels are permitted during those experiments at the ATR. The advantages of the SN2ND solver over comparable neutronics tools are its superior parallel performance and demonstrated accuracy on large scale homogeneous and heterogeneous reactor geometries. However, it should be noted that virtually no effort from this project was spent constructing a proper cross section generation methodology for the ATR usable in the SN2ND solver. While attempts were made to use cross section data derived from SCALE, only a minimal number of compositional cross section sets were generated, to be consistent with the reference Monte Carlo input specification. The accuracy of any deterministic transport solver is impacted by such an approach and clearly it causes substantial errors in this work. The reasoning behind this decision is justified given the overall funding dedicated to the task (two months) and the real focus of the work: can modern deterministic tools actually treat complex facilities like the ATR with heterogeneous geometry modeling? SN2ND has been demonstrated to solve problems with upwards of one trillion degrees of freedom, which translates to tens of millions of finite elements, hundreds of angles, and hundreds of energy groups, resulting in a very high-fidelity model of the system unachievable by most deterministic transport codes today. A space-angle convergence study was conducted to determine the meshing and angular cubature requirements for the ATR, and also to demonstrate the feasibility of performing this analysis with a deterministic transport code capable of modeling heterogeneous geometries. The work performed indicates that a minimum of 260,000 linear finite elements combined with a L3T11 cubature (96 angles on the sphere) is required for both eigenvalue and flux convergence of the ATR. A critical finding was that the fuel meat and water channels must each be meshed with at least 3 'radial zones' for accurate flux convergence. A small number of 3D calculations were also performed to show axial mesh and eigenvalue convergence for a full core problem. Finally, a brief analysis was performed with different cross section sets generated from DRAGON and SCALE, and the findings show that more effort will be required to improve the multigroup cross section generation process. The total number of degrees of freedom for a converged 27 group, 2D ATR problem is approximately 340 million. This number increases to approximately 25 billion for a 3D ATR problem. This scoping study shows that both 2D and 3D calculations are well within the capabilities of the current SN2ND solver, given the availability of a large-scale computing center such as BlueGene/P. However, dynamics calculations are not realistic without the implementation of improvements in the solver.

  6. Separating "Rotators" from "Nonrotators" in the Mental Rotations Test: A Multigroup Latent Class Analysis

    ERIC Educational Resources Information Center

    Geiser, Christian; Lehmann, Wolfgang; Eid, Michael

    2006-01-01

    Items of mental rotation tests can not only be solved by mental rotation but also by other solution strategies. A multigroup latent class analysis of 24 items of the Mental Rotations Test (MRT) was conducted in a sample of 1,695 German pupils and students to find out how many solution strategies can be identified for the items of this test. The…

  7. A multigroup radiation diffusion test problem: Comparison of code results with analytic solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shestakov, A I; Harte, J A; Bolstad, J H

    2006-12-21

    We consider a 1D, slab-symmetric test problem for the multigroup radiation diffusion and matter energy balance equations. The test simulates diffusion of energy from a hot central region. Opacities vary with the cube of the frequency and radiation emission is given by a Wien spectrum. We compare results from two LLNL codes, Raptor and Lasnex, with tabular data that define the analytic solution.
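
    For context, a generic form of the equations named in the abstract is given below; the paper's exact normalization and boundary conditions may differ:

    ```latex
    % Generic 1D multigroup radiation diffusion + matter energy balance;
    % the paper's exact normalization may differ.
    \begin{align}
      \frac{\partial E_g}{\partial t}
        &= \frac{\partial}{\partial x}\!\left(D_g\,\frac{\partial E_g}{\partial x}\right)
         + c\,\kappa_g\bigl(B_g(T) - E_g\bigr),\\[2pt]
      C_v\,\frac{\partial T}{\partial t}
        &= -\sum_g c\,\kappa_g\bigl(B_g(T) - E_g\bigr),
    \end{align}
    % E_g: group radiation energy density; B_g(T): Wien emission integrated over
    % group g; \kappa_g: group opacity derived from \kappa_\nu \propto \nu^3.
    ```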

  8. Acuros CTS: A fast, linear Boltzmann transport equation solver for computed tomography scatter - Part I: Core algorithms and validation.

    PubMed

    Maslowski, Alexander; Wang, Adam; Sun, Mingshan; Wareing, Todd; Davis, Ian; Star-Lack, Josh

    2018-05-01

    To describe Acuros® CTS, a new software tool for rapidly and accurately estimating scatter in x-ray projection images by deterministically solving the linear Boltzmann transport equation (LBTE). The LBTE describes the behavior of particles as they interact with an object across spatial, energy, and directional (propagation) domains. Acuros CTS deterministically solves the LBTE by modeling photon transport associated with an x-ray projection in three main steps: (a) Ray tracing photons from the x-ray source into the object where they experience their first scattering event and form scattering sources. (b) Propagating photons from their first scattering sources across the object in all directions to form second scattering sources, then repeating this process until all high-order scattering sources are computed using the source iteration method. (c) Ray-tracing photons from scattering sources within the object to the detector, accounting for the detector's energy and anti-scatter grid responses. To make this process computationally tractable, a combination of analytical and discrete methods is applied. The three domains are discretized using the Linear Discontinuous Finite Elements, Multigroup, and Discrete Ordinates methods, respectively, which confer the ability to maintain the accuracy of a continuous solution. Furthermore, through the implementation in CUDA, we sought to exploit the parallel computing capabilities of graphics processing units (GPUs) to achieve the speeds required for clinical utilization. Acuros CTS was validated against Geant4 Monte Carlo simulations using two digital phantoms: (a) a water phantom containing lung, air, and bone inserts (WLAB phantom) and (b) a pelvis phantom derived from a clinical CT dataset. For these studies, we modeled the TrueBeam® (Varian Medical Systems, Palo Alto, CA) kV imaging system with a source energy of 125 kVp. The imager comprised a 600 μm-thick Cesium Iodide (CsI) scintillator and a 10:1 one-dimensional anti-scatter grid. For the WLAB studies, the full-fan geometry without a bowtie filter was used (with and without the anti-scatter grid). For the pelvis phantom studies, a half-fan geometry with bowtie was used (with the anti-scatter grid). Scattered and primary photon fluences and energies deposited in the detector were recorded. The Acuros CTS and Monte Carlo results demonstrated excellent agreement. For the WLAB studies, the average percent difference between the Monte Carlo- and Acuros-generated scattered photon fluences at the face of the detector was -0.7%. After including the detector response, the average percent differences between the Monte Carlo- and Acuros-generated scatter fractions (SF) were -0.1% without the grid and 0.6% with the grid. For the digital pelvis simulation, the Monte Carlo- and Acuros-generated SFs agreed to within 0.1% on average, despite the scatter-to-primary ratios (SPRs) being as high as 5.5. The Acuros CTS computation time for each scatter image was ~1 s using a single GPU. Acuros CTS enables a fast and accurate calculation of scatter images by deterministically solving the LBTE thus offering a computationally attractive alternative to Monte Carlo methods. Part II describes the application of Acuros CTS to scatter correction of CBCT scans on the TrueBeam system. © 2018 American Association of Physicists in Medicine.
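
    Step (b) above is classical source iteration: each application of a one-collision transport operator generates the next scattering order, and the orders are summed until they are negligible. A minimal linear-algebra sketch of that idea, with a dense random matrix as a hypothetical stand-in for the discretized operator:

    ```python
    import numpy as np

    def source_iteration(first_scatter_src, k_op, tol=1e-10, max_orders=500):
        # Sum the Neumann series of scattering orders: total = sum_j k_op^j @ src.
        total = first_scatter_src.copy()
        order = first_scatter_src.copy()
        for _ in range(max_orders):
            order = k_op @ order                   # next-higher scattering order
            total += order
            if np.linalg.norm(order) < tol * np.linalg.norm(total):
                break
        return total

    rng = np.random.default_rng(1)
    k_op = rng.random((40, 40))
    k_op *= 0.7 / np.max(np.abs(np.linalg.eigvals(k_op)))   # make the series converge
    src = rng.random(40)
    # The summed series equals the direct solve of (I - K) total = src.
    print(np.allclose(source_iteration(src, k_op),
                      np.linalg.solve(np.eye(40) - k_op, src)))
    ```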

  9. A hybrid multigroup neutron-pattern model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogosbekyan, L.R.; Lysov, D.A.

    In this paper, we use a general approach to construct a multigroup hybrid model for the neutron pattern. The equations are given together with a reasonably economical and simple iterative method of solving them. The algorithm can be used to calculate the pattern and its functionals, as well as to correct the constants from experimental data and to adapt the constant support to engineering programs by reference to precision ones.

  10. Comparing Indirect Effects in Different Groups in Single-Group and Multi-Group Structural Equation Models

    PubMed Central

    Ryu, Ehri; Cheong, Jeewon

    2017-01-01

    In this article, we evaluated the performance of statistical methods in single-group and multi-group analysis approaches for testing group difference in indirect effects and for testing simple indirect effects in each group. We also investigated whether the performance of the methods in the single-group approach was affected when the assumption of equal variance was not satisfied. The assumption was critical for the performance of the two methods in the single-group analysis: the method using a product term for testing the group difference in a single path coefficient, and the Wald test for testing the group difference in the indirect effect. Bootstrap confidence intervals in the single-group approach and all methods in the multi-group approach were not affected by the violation of the assumption. We compared the performance of the methods and provided recommendations. PMID:28553248
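
    A minimal sketch of the bootstrap approach the article recommends, reduced here to an observed-variable mediation model (x -> m -> y) rather than a full SEM; the data-generating values and group structure are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def indirect_effect(x, m, y):
        # a*b from two OLS fits: a from x -> m, b from y ~ 1 + x + m.
        a = np.polyfit(x, m, 1)[0]
        design = np.column_stack([np.ones_like(x), x, m])
        b = np.linalg.lstsq(design, y, rcond=None)[0][2]
        return a * b

    def bootstrap_group_diff(g1, g2, n_boot=2000):
        # Percentile CI for the difference in indirect effects between groups;
        # cases are resampled jointly within each group.
        n1, n2 = len(g1[0]), len(g2[0])
        diffs = np.empty(n_boot)
        for i in range(n_boot):
            i1 = rng.integers(0, n1, n1)
            i2 = rng.integers(0, n2, n2)
            diffs[i] = (indirect_effect(*(c[i1] for c in g1))
                        - indirect_effect(*(c[i2] for c in g2)))
        return np.percentile(diffs, [2.5, 97.5])

    def make_group(n, a):
        # Toy data: group-specific x -> m path a, common m -> y path 0.5.
        x = rng.standard_normal(n)
        m = a * x + rng.standard_normal(n)
        y = 0.5 * m + 0.2 * x + rng.standard_normal(n)
        return x, m, y

    print(bootstrap_group_diff(make_group(300, 0.8), make_group(300, 0.2)))
    ```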

  11. A Multigroup Method for the Calculation of Neutron Fluence with a Source Term

    NASA Technical Reports Server (NTRS)

    Heinbockel, J. H.; Clowdsley, M. S.

    1998-01-01

    Current research on the grant involves the development of a multigroup method for the calculation of low energy evaporation neutron fluences associated with the Boltzmann equation. This research will enable one to predict radiation exposure under a variety of circumstances. Knowledge of radiation exposure in a free-space environment is a necessity for space travel, high altitude space planes and satellite design. This is because certain radiation environments can cause damage to biological and electronic systems, involving both short term and long term effects. By having a priori knowledge of the environment, one can use prediction techniques to estimate radiation damage to such systems. Appropriate shielding can be designed to protect both humans and electronic systems that are exposed to a known radiation environment. This is the goal of the current research efforts involving the multi-group method and the Green's function approach.

  12. Deterministic Seismic Hazard Assessment of Center-East IRAN (55.5-58.5˚ E, 29-31˚ N)

    NASA Astrophysics Data System (ADS)

    Askari, M.; Neyestani, B.

    2009-04-01

    Deterministic seismic hazard assessment has been performed for Center-East Iran (55.5-58.5˚E, 29-31˚N), covering Kerman and adjacent regions within 100 km. A catalogue of earthquakes in the region, including historical and instrumental events, was compiled. A total of 25 potential seismic source zones were delineated as area sources for seismic hazard assessment based on geological, seismological and geophysical information. The minimum distance from each seismic source to the site (Kerman) and the maximum magnitude of each source were then determined. Finally, using the Abrahamson and Litehiser (1989) attenuation relationship, the maximum acceleration is estimated to be 0.38g, associated with movement on a blind fault whose maximum source magnitude is Ms = 5.5.

  13. Hybrid Monte Carlo/deterministic methods for radiation shielding problems

    NASA Astrophysics Data System (ADS)

    Becker, Troy L.

    For the past few decades, the most common type of deep-penetration (shielding) problem simulated using Monte Carlo methods has been the source-detector problem, in which a response is calculated at a single location in space. Traditionally, the nonanalog Monte Carlo methods used to solve these problems have required significant user input to generate and sufficiently optimize the biasing parameters necessary to obtain a statistically reliable solution. It has been demonstrated that this laborious task can be replaced by automated processes that rely on a deterministic adjoint solution to set the biasing parameters---the so-called hybrid methods. The increase in computational power over recent years has also led to interest in obtaining the solution in a region of space much larger than a point detector. In this thesis, we propose two methods for solving problems ranging from source-detector problems to more global calculations---weight windows and the Transform approach. These techniques employ some of the same biasing elements that have been used previously; however, the fundamental difference is that here the biasing techniques are used as elements of a comprehensive tool set to distribute Monte Carlo particles in a user-specified way. The weight window achieves the user-specified Monte Carlo particle distribution by imposing a particular weight window on the system, without altering the particle physics. The Transform approach introduces a transform into the neutron transport equation, which results in a complete modification of the particle physics to produce the user-specified Monte Carlo distribution. These methods are tested in a three-dimensional multigroup Monte Carlo code. For a basic shielding problem and a more realistic one, these methods adequately solved source-detector problems and more global calculations. Furthermore, they confirmed that theoretical Monte Carlo particle distributions correspond to the simulated ones, implying that these methods can be used to achieve user-specified Monte Carlo distributions. Overall, the Transform approach performed more efficiently than the weight window methods, but it performed much more efficiently for source-detector problems than for global problems.
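
    A minimal sketch of the standard weight-window mechanics referred to above: particles above the window are split, particles below it play Russian roulette, and total weight is preserved exactly under splitting and in expectation under roulette. The window bounds here are hypothetical inputs; in the hybrid methods they would be derived from a deterministic adjoint solution:

    ```python
    import random

    def apply_weight_window(weight, w_low, w_high):
        # Returns the list of post-window particle weights for one particle.
        if weight > w_high:                        # split heavy particles
            n = int(weight / w_high) + 1
            return [weight / n] * n                # weight conserved exactly
        if weight < w_low:                         # roulette light particles
            if random.random() < weight / w_low:   # survive with probability w/w_low
                return [w_low]                     # weight conserved in expectation
            return []
        return [weight]

    random.seed(0)
    for w in (5.0, 0.9, 0.01):
        print(w, "->", apply_weight_window(w, w_low=0.25, w_high=2.0))
    ```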

  14. Structural Relationships among Variables Affecting Elementary School Students' Career Preparation Behavior: Using a Multi-Group Structural Equation Approach

    ERIC Educational Resources Information Center

    Park, Sun Hee; Jun, JuSung

    2017-01-01

    The purpose of this study was to analyze the structural relationships between parent support, career decision self-efficacy, career maturity, and career preparation behavior for elementary school students (5th and 6th grade) in Korea and to examine if there are gender differences. A total of 609 students of 7 elementary schools in Seoul, Korea was…

  15. Honor Thy Parents: An Ethnic Multigroup Analysis of Filial Responsibility, Health Perceptions, and Caregiving Decisions.

    PubMed

    Santoro, Maya S; Van Liew, Charles; Holloway, Breanna; McKinnon, Symone; Little, Timothy; Cronan, Terry A

    2016-08-01

    The present study explores patterns of parity and disparity in the effect of filial responsibility on health-related evaluations and caregiving decisions. Participants who identified as White, Black, Hispanic, or Asian/Pacific Islander read a vignette about an older man needing medical care. They were asked to imagine that they were the man's son and answer questions regarding their likelihood of hiring a health care advocate (HCA) for services related to the father's care. A multigroup (ethnicity) path analysis was performed, and an intercept invariant multigroup model fit the data best. Direct and indirect effect estimation showed that filial responsibility mediated the relationship between both the perceived severity of the father's medical condition and the perceived need for medical assistance and the likelihood of hiring an HCA only for White and Hispanic participants, albeit differently. The findings demonstrate that culture and ethnicity affect health evaluations and caregiving decision making. © The Author(s) 2015.

  16. Necessary but Insufficient

    PubMed Central

    2017-01-01

    Cross-national data production in social science research has increased dramatically in recent decades. Assessing the comparability of data is necessary before drawing substantive conclusions that are based on cross-national data. Researchers assessing data comparability typically use either quantitative methods such as multigroup confirmatory factor analysis or qualitative methods such as online probing. Because both methods have complementary strengths and weaknesses, this study applies both multigroup confirmatory factor analysis and online probing in a mixed-methods approach to assess the comparability of constructive patriotism and nationalism, two important concepts in the study of national identity. Previous measurement invariance tests failed to achieve scalar measurement invariance, which prohibits a cross-national comparison of latent means (Davidov 2009). The arrival of the 2013 ISSP Module on National Identity has encouraged a reassessment of both constructs and a push to understand why scalar invariance cannot be achieved. Using the example of constructive patriotism and nationalism, this study demonstrates how the combination of multigroup confirmatory factor analysis and online probing can uncover and explain issues related to cross-national comparability. PMID:28579643

  17. A stable 1D multigroup high-order low-order method

    DOE PAGES

    Yee, Ben Chung; Wollaber, Allan Benton; Haut, Terry Scot; ...

    2016-07-13

    The high-order low-order (HOLO) method is a recently developed moment-based acceleration scheme for solving time-dependent thermal radiative transfer problems, and has been shown to exhibit orders-of-magnitude speedups over traditional time-stepping schemes. However, a linear stability analysis by Haut et al. ('A linear stability analysis of the multigroup High-Order Low-Order (HOLO) method,' Proceedings of the Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method, Nashville, TN, April 19-23, 2015, American Nuclear Society) revealed that the current formulation of the multigroup HOLO method was unstable in certain parameter regions. Since then, we have replaced the intensity-weighted opacity in the first angular moment equation of the low-order (LO) system with the Rosseland opacity. This results in a modified HOLO method (HOLO-R) that is significantly more stable.

  18. Full core analysis of IRIS reactor by using MCNPX.

    PubMed

    Amin, E A; Bashter, I I; Hassan, Nabil M; Mustafa, S S

    2016-07-01

    This paper describes a neutronic analysis of the fresh-fuelled IRIS (International Reactor Innovative and Secure) reactor with the MCNPX code. The analysis included criticality calculations, radial and axial power distributions, the nuclear peaking factor, and the axial offset percent at the beginning of the fuel cycle. The effective multiplication factor obtained by the MCNPX code is compared with previous calculations by the HELIOS/NESTLE, CASMO/SIMULATE, modified CORD-2 nodal, and SAS2H/KENO-V code systems. It is found that the k-eff value obtained by MCNPX is closest to the CORD-2 value. The radial and axial powers are compared with other published results obtained with the SAS2H/KENO-V code. Moreover, the WIMS-D5 code is used to study the effect of enriched boron in the form of ZrB2 on the effective multiplication factor (k-eff) of the fuel pin. In this part of the calculation, k-eff is calculated at different concentrations of Boron-10 in mg/cm at different stages of burnup of the unit cell. The results of this part are compared with published results obtained with the HELIOS code. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Gray and multigroup radiation transport models for two-dimensional binary stochastic media using effective opacities

    DOE PAGES

    Olson, Gordon L.

    2015-09-24

    One-dimensional models for the transport of radiation through binary stochastic media do not work in multi-dimensions. In addition, authors have attempted to modify or extend the 1D models to work in multidimensions without success. Analytic one-dimensional models are successful in 1D only when assuming greatly simplified physics. State-of-the-art theories for stochastic media radiation transport do not address multi-dimensions and temperature-dependent physics coefficients. Here, the concept of effective opacities and effective heat capacities is found to well represent the ensemble averaged transport solutions in cases with gray or multigroup temperature-dependent opacities and constant or temperature-dependent heat capacities. In every case analyzed here, effective physics coefficients fit the transport solutions over a useful range of parameter space. The transport equation is solved with the spherical harmonics method with angle orders of n=1 and 5. Although the details depend on what order of solution is used, the general results are similar, independent of angular order.

  20. Gray and multigroup radiation transport models for two-dimensional binary stochastic media using effective opacities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, Gordon L.

    One-dimensional models for the transport of radiation through binary stochastic media do not work in multi-dimensions. In addition, authors have attempted to modify or extend the 1D models to work in multidimensions without success. Analytic one-dimensional models are successful in 1D only when assuming greatly simplified physics. State-of-the-art theories for stochastic media radiation transport do not address multi-dimensions and temperature-dependent physics coefficients. Here, the concept of effective opacities and effective heat capacities is found to well represent the ensemble averaged transport solutions in cases with gray or multigroup temperature-dependent opacities and constant or temperature-dependent heat capacities. In every case analyzed here, effective physics coefficients fit the transport solutions over a useful range of parameter space. The transport equation is solved with the spherical harmonics method with angle orders of n=1 and 5. Although the details depend on what order of solution is used, the general results are similar, independent of angular order.

  1. Impact of refining the assessment of dietary exposure to cadmium in the European adult population.

    PubMed

    Ferrari, Pietro; Arcella, Davide; Heraud, Fanny; Cappé, Stefano; Fabiansson, Stefan

    2013-01-01

    Exposure assessment constitutes an important step in any risk assessment of potentially harmful substances present in food. The European Food Safety Authority (EFSA) first assessed dietary exposure to cadmium in Europe using a deterministic framework, resulting in mean values of exposure in the range of health-based guidance values. Since then, the characterisation of foods has been refined to better match occurrence and consumption data, and a new strategy to handle left-censoring in occurrence data was devised. A probabilistic assessment was performed and compared with deterministic estimates, using occurrence values at the European level and consumption data from 14 national dietary surveys. Mean estimates in the probabilistic assessment ranged from 1.38 (95% CI = 1.35-1.44) to 2.08 (1.99-2.23) µg kg⁻¹ bodyweight (bw) week⁻¹ across the different surveys, which were less than 10% lower than deterministic (middle bound) mean values that ranged from 1.50 to 2.20 µg kg⁻¹ bw week⁻¹. Probabilistic 95th percentile estimates of dietary exposure ranged from 2.65 (2.57-2.72) to 4.99 (4.62-5.38) µg kg⁻¹ bw week⁻¹, which were, with the exception of one survey, between 3% and 17% higher than middle-bound deterministic estimates. Overall, the proportion of subjects exceeding the tolerable weekly intake of 2.5 µg kg⁻¹ bw ranged from 14.8% (13.6-16.0%) to 31.2% (29.7-32.5%) according to the probabilistic assessment. The results of this work indicate that mean values of dietary exposure to cadmium in the European population were of similar magnitude using deterministic or probabilistic assessments. For higher exposure levels, probabilistic estimates were almost consistently larger than deterministic counterparts, thus reflecting the impact of using the full distribution of occurrence values to determine exposure levels. It is considered prudent to use probabilistic methodology should exposure estimates be close to or exceeding health-based guidance values.
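
    A minimal sketch of the probabilistic (Monte Carlo) exposure calculation described above, with invented lognormal and normal inputs in place of the survey consumption and occurrence distributions used by EFSA:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    # Invented input distributions; a real assessment draws from survey
    # consumption data and measured occurrence distributions per food category.
    consumption = rng.lognormal(np.log(0.5), 0.4, n)    # kg food per day
    occurrence = rng.lognormal(np.log(20.0), 0.6, n)    # ug Cd per kg food
    bodyweight = np.clip(rng.normal(70.0, 12.0, n), 40.0, None)  # kg bw

    weekly_intake = consumption * occurrence * 7.0 / bodyweight  # ug/kg bw/week
    twi = 2.5                                            # tolerable weekly intake
    print(f"mean = {weekly_intake.mean():.2f}, "
          f"P95 = {np.percentile(weekly_intake, 95):.2f}, "
          f"share above TWI = {np.mean(weekly_intake > twi):.1%}")
    ```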

  2. Validation of Yoon's Critical Thinking Disposition Instrument.

    PubMed

    Shin, Hyunsook; Park, Chang Gi; Kim, Hyojin

    2015-12-01

    The lack of reliable and valid evaluation tools targeting Korean nursing students' critical thinking (CT) abilities has been reported as one of the barriers to instructing and evaluating students in undergraduate programs. Yoon's Critical Thinking Disposition (YCTD) instrument was developed for Korean nursing students, but few studies have assessed its validity. This study aimed to validate the YCTD. Specifically, the YCTD was assessed to identify its cross-sectional and longitudinal measurement invariance. This was a validation study in which a cross-sectional and longitudinal (prenursing and postnursing practicum) survey was used to validate the YCTD using 345 nursing students at three universities in Seoul, Korea. The participants' CT abilities were assessed using the YCTD before and after completing an established pediatric nursing practicum. The validity of the YCTD was estimated and then group invariance test using multigroup confirmatory factor analysis was performed to confirm the measurement compatibility of multigroups. A test of the seven-factor model showed that the YCTD demonstrated good construct validity. Multigroup confirmatory factor analysis findings for the measurement invariance suggested that this model structure demonstrated strong invariance between groups (i.e., configural, factor loading, and intercept combined) but weak invariance within a group (i.e., configural and factor loading combined). In general, traditional methods for assessing instrument validity have been less than thorough. In this study, multigroup confirmatory factor analysis using cross-sectional and longitudinal measurement data allowed validation of the YCTD. This study concluded that the YCTD can be used for evaluating Korean nursing students' CT abilities. Copyright © 2015. Published by Elsevier B.V.

  3. Multi-group measurement invariance of the multiple sclerosis walking scale-12?

    PubMed

    Motl, Robert W; Mullen, Sean; McAuley, Edward

    2012-03-01

    One primary assumption underlying the interpretation of composite multiple sclerosis walking scale-12 (MSWS-12) scores across levels of disability status is multi-group measurement invariance. This assumption was tested in the present study between samples that differed in self-reported disability status. Participants (n = 867) completed a battery of questionnaires that included the MSWS-12 and patient-determined disease step (PDDS) scale. The multi-group invariance was tested between samples that had PDDS scores of ≤2 (i.e., no mobility limitation; n = 470) and PDDS scores ≥3 (i.e., onset of mobility limitation; n = 397) using Mplus 6.0. The omnibus test of equal covariance matrices indicated that the MSWS-12 was not invariant between the two samples that differed in disability status. The source of non-invariance occurred with the initial equivalence test of the factor structure itself. We provide evidence that questions the unambiguous interpretation of scores from the MSWS-12 as a measure of walking impairment between samples of persons with multiple sclerosis who differ in disability status.

  4. SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, T; Finlay, J; Mesina, C

    Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6 - 15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and the output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with larger errors (up to 13%) observed in the buildup regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.
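
    The commissioning comparison described above amounts to tabulating percent differences between calculated and measured curves against a tolerance. A toy sketch with made-up PDD values (not the paper's data), normalized to 10 cm depth as in the abstract:

        import numpy as np

        depth_cm = np.array([1.5, 5.0, 10.0, 20.0, 30.0])
        pdd_meas = np.array([152.0, 129.0, 100.0, 63.0, 40.0])  # measured (hypothetical)
        pdd_calc = np.array([150.5, 128.0, 100.0, 63.8, 40.6])  # calculated (hypothetical)

        pct_diff = 100.0 * (pdd_calc - pdd_meas) / pdd_meas
        for d, p in zip(depth_cm, pct_diff):
            flag = "" if abs(p) <= 2.0 else "  <-- outside 2% tolerance"
            print(f"depth {d:5.1f} cm: {p:+.2f}%{flag}")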

  5. Continuous-energy eigenvalue sensitivity coefficient calculations in TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, C. M.; Rearden, B. T.

    2013-07-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several test problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and a low memory footprint, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations. (authors)
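
    For intuition, the quantity being computed is the relative eigenvalue sensitivity S = (dk/k)/(dΣ/Σ). A brute-force central-difference sketch on a toy one-group model (adjoint-based methods such as CLUTCH obtain the same coefficient without rerunning the calculation for every nuclide and reaction):

        # Toy one-group infinite-medium model: k_inf = nu_sigma_f / sigma_a.
        def k_inf(nu_sigma_f, sigma_a):
            return nu_sigma_f / sigma_a

        nu_sigma_f, sigma_a = 0.0126, 0.0120  # hypothetical macroscopic data (1/cm)

        eps = 1e-3
        k0 = k_inf(nu_sigma_f, sigma_a)
        k_plus = k_inf(nu_sigma_f, sigma_a * (1 + eps))
        k_minus = k_inf(nu_sigma_f, sigma_a * (1 - eps))
        S = (k_plus - k_minus) / (2 * eps) / k0
        print(f"S_k,sigma_a ~ {S:.3f}  (exactly -1 for this toy model)")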

  6. A new multigroup method for cross-sections that vary rapidly in energy

    DOE PAGES

    Haut, Terry Scot; Ahrens, Cory D.; Jonko, Alexandra; ...

    2016-11-04

    Here, we present a numerical method for solving the time-independent thermal radiative transfer (TRT) equation or the neutron transport (NT) equation when the opacity (cross-section) varies rapidly in frequency (energy) on the microscale ε; ε corresponds to the characteristic spacing between absorption lines or resonances, and is much smaller than the macroscopic frequency (energy) variation of interest. The approach is based on a rigorous homogenization of the TRT/NT equation in the frequency (energy) variable. Discretization of the homogenized TRT/NT equation results in a multigroup-type system, and can therefore be solved by standard methods.

  7. New Methodologies for Generation of Multigroup Cross Sections for Shielding Applications

    NASA Astrophysics Data System (ADS)

    Arzu Alpan, F.; Haghighat, Alireza

    2003-06-01

    Coupled neutron and gamma multigroup (broad-group) libraries used for Light Water Reactor shielding and dosimetry commonly include 47-neutron and 20-gamma groups. These libraries are derived from the 199-neutron, 42-gamma fine-group VITAMIN-B6 library. In this paper, we introduce modifications to the generation procedure of the broad-group libraries. Among these modifications, we show that the fine-group structure and collapsing technique have the largest impact. We demonstrate that a more refined fine-group library and the bi-linear adjoint weighting collapsing technique can improve the accuracy of transport calculation results.
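
    The operation at the center of any such library generation is collapsing fine-group data to broad groups with a weighting spectrum. A sketch with hypothetical numbers, using plain flux weighting (the bi-linear technique in the paper additionally weights with the adjoint flux):

        import numpy as np

        sigma_fine = np.array([2.1, 1.8, 1.5, 1.2, 0.9, 0.7])  # 6 fine groups (barns)
        flux_fine = np.array([0.5, 1.0, 1.6, 1.4, 0.8, 0.3])   # weighting spectrum

        def collapse(sigma, flux, edges):
            """Flux-weighted collapse: sigma_G = sum(sigma_g phi_g) / sum(phi_g)."""
            out = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                out.append((sigma[lo:hi] * flux[lo:hi]).sum() / flux[lo:hi].sum())
            return np.array(out)

        # Collapse 6 fine groups into 2 broad groups of 3 fine groups each.
        print(collapse(sigma_fine, flux_fine, [0, 3, 6]))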

  8. Development of a SCALE Tool for Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several criticality safety problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and low memory requirements, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations.

  9. Measurement invariance via multigroup SEM: Issues and solutions with chi-square-difference tests.

    PubMed

    Yuan, Ke-Hai; Chan, Wai

    2016-09-01

    Multigroup structural equation modeling (SEM) plays a key role in studying measurement invariance and in group comparison. When population covariance matrices are deemed not equal across groups, the next step to substantiate measurement invariance is to see whether the sample covariance matrices in all the groups can be adequately fitted by the same factor model, called configural invariance. After configural invariance is established, cross-group equalities of factor loadings, error variances, and factor variances-covariances are then examined in sequence. With mean structures, cross-group equalities of intercepts and factor means are also examined. The established rule is that if the statistic at the current model is not significant at the level of .05, one then moves on to testing the next more restricted model using a chi-square-difference statistic. This article argues that such an established rule is unable to control either Type I or Type II errors. Analysis, an example, and Monte Carlo results show why and how chi-square-difference tests are easily misused. The fundamental issue is that chi-square-difference tests are developed under the assumption that the base model is sufficiently close to the population, and a nonsignificant chi-square statistic tells little about how good the model is. To overcome this issue, this article further proposes that null hypothesis testing in multigroup SEM be replaced by equivalence testing, which allows researchers to effectively control the size of misspecification before moving on to testing a more restricted model. R code is also provided to facilitate the applications of equivalence testing for multigroup SEM.
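
    The chi-square-difference statistic at issue is easy to state. A sketch with hypothetical fit statistics for a base model and a nested, more restricted model (the article's point is that a nonsignificant p-value here says little when the base model is itself only approximately correct):

        from scipy.stats import chi2

        chi2_base, df_base = 312.4, 164              # hypothetical base-model fit
        chi2_restricted, df_restricted = 331.0, 176  # hypothetical restricted-model fit

        delta_chi2 = chi2_restricted - chi2_base
        delta_df = df_restricted - df_base
        p = chi2.sf(delta_chi2, delta_df)  # upper-tail probability
        print(f"delta chi2 = {delta_chi2:.1f} on {delta_df} df, p = {p:.3f}")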

  10. FY15 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Shemon, E. R.; Smith, M. A.

    2015-09-30

    This report summarizes the current status of NEAMS activities in FY2015. The tasks this year are (1) to improve solution methods for steady-state and transient conditions, (2) to add features and improve user friendliness to increase the usability and applicability of the code, (3) to improve and verify the multigroup cross section generation scheme, (4) to perform verification and validation tests of the code using SFRs and thermal reactor cores, and (5) to support early users of PROTEUS and update the user manuals.

  11. Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models

    DTIC Science & Technology

    2002-03-01

    such as the weighted sum method, the weighted product method, and the Analytic Hierarchy Process (AHP). This research focuses on only weighted sum... different groups. They can be termed deterministic, stochastic, or fuzzy multi-objective decision methods if they are classified according to the... weighted product model (WPM), and analytic hierarchy process (AHP). His method attempts to identify the most important criteria weight and the most

  12. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1975-10-01

    The computer code block VENTURE, designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or alternatively simple P1) in up to three-dimensional geometry, is described. A variety of types of problems may be solved: the usual eigenvalue problem, a direct criticality search on the buckling, on a reciprocal velocity absorber (prompt mode), or on nuclide concentrations, or an indirect criticality search on nuclide concentrations, or on dimensions. First-order perturbation analysis capability is available at the macroscopic cross section level.
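
    The eigenvalue problem solved by codes of this family can be illustrated in a few lines. A one-group, 1-D slab sketch with hypothetical constants, finite-differenced and solved by power iteration (VENTURE itself handles multigroup, multidimensional problems and the criticality searches listed above):

        import numpy as np

        # -D phi'' + Sigma_a phi = (1/k) nuSigma_f phi,  phi = 0 at the boundaries.
        D, sig_a, nu_sig_f = 1.0, 0.02, 0.025  # hypothetical constants (cm, 1/cm)
        L, n = 100.0, 200                      # slab width (cm), interior nodes
        h = L / (n + 1)

        A = np.zeros((n, n))                   # tridiagonal loss operator
        for i in range(n):
            A[i, i] = 2 * D / h**2 + sig_a
            if i > 0:
                A[i, i - 1] = -D / h**2
            if i < n - 1:
                A[i, i + 1] = -D / h**2

        phi, k = np.ones(n), 1.0
        for _ in range(200):                   # power iteration
            phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
            k *= phi_new.sum() / phi.sum()     # eigenvalue from fission-source ratio
            phi = phi_new / np.linalg.norm(phi_new)
        print(f"k_eff ~ {k:.5f}")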

  13. The Group-Level Consequences of Sexual Conflict in Multigroup Populations

    PubMed Central

    Eldakar, Omar Tonsi; Gallup, Andrew C.

    2011-01-01

    In typical sexual conflict scenarios, males best equipped to exploit females are favored locally over more prudent males, despite reducing female fitness. However, local advantage is not the only relevant form of selection. In multigroup populations, groups with less sexual conflict will contribute more offspring to the next generation than higher conflict groups, countering the local advantage of harmful males. Here, we varied male aggression within- and between-groups in a laboratory population of water striders and measured resulting differences in local population growth over a period of three weeks. The overall pool fitness (i.e., adults produced) of less aggressive pools exceeded that of high aggression pools by a factor of three, with the high aggression pools essentially experiencing no population growth over the course of the study. When comparing the fitness of individuals across groups, aggression appeared to be under stabilizing selection in the multigroup population. The use of contextual analysis revealed that overall stabilizing selection was a product of selection favoring aggression within groups, but selecting against it at the group level. Therefore, this report provides further evidence to show that what evolves in the total population is not merely an extension of within-group dynamics. PMID:22039491

  14. Multi-group acculturation orientations in a changing context: Palestinian Christian Arab adolescents in Israel after the lost decade.

    PubMed

    Munayer, Salim J; Horenczyk, Gabriel

    2014-10-01

    Grounded in a contextual approach to acculturation of minorities, this study examines changes in acculturation orientations among Palestinian Christian Arab adolescents in Israel following the "lost decade of Arab-Jewish coexistence." Multi-group acculturation orientations among 237 respondents were assessed vis-à-vis two majorities--Muslim Arabs and Israeli Jews--and compared to 1998 data. Separation was the strongest endorsed orientation towards both majority groups. Comparisons with the 1998 data also show a weakening of the Integration attitude towards Israeli Jews, and also distancing from Muslim Arabs. For the examination of the "Westernisation" hypothesis, multi-dimensional scaling (MDS) analyses of perceptions of Self and group values clearly showed that, after 10 years, Palestinian Christian Arabs perceive Israeli Jewish culture as less close to Western culture, and that Self and the Christian Arab group have become much closer, suggesting an increasing identification of Palestinian Christian Arab adolescents with their ethnoreligious culture. We discuss the value of a multi-group, multi-method, and multi-wave approach to the examination of the role of the political context in acculturation processes.

  15. Geospatial Data Fusion and Multigroup Decision Support for Surface Water Quality Management

    NASA Astrophysics Data System (ADS)

    Sun, A. Y.; Osidele, O.; Green, R. T.; Xie, H.

    2010-12-01

    Social networking and social media have gained significant popularity and brought fundamental changes to many facets of our everyday life. With the ever-increasing adoption of GPS-enabled gadgets and technology, location-based content is likely to play a central role in social networking sites. While location-based content is not new to the geoscience community, where geographic information systems (GIS) are extensively used, the delivery of useful geospatial data to targeted user groups for decision support is new. Decision makers and modelers ought to make more effective use of the new web-based tools to expand the scope of environmental awareness education, public outreach, and stakeholder interaction. Environmental decision processes are often rife with uncertainty and controversy, requiring integration of multiple sources of information and compromises between diverse interests. Fusing of multisource, multiscale environmental data for multigroup decision support is a challenging task. Toward this goal, a multigroup decision support platform should strive to achieve transparency, impartiality, and timely synthesis of information. The latter criterion often constitutes a major technical bottleneck to traditional GIS-based media, featuring large file or image sizes and requiring special processing before web deployment. Many tools and design patterns have appeared in recent years to ease the situation somewhat. In this project, we explore the use of Web 2.0 technologies for “pushing” location-based content to multigroups involved in surface water quality management and decision making. In particular, our granular bottom-up approach facilitates effective delivery of information to most relevant user groups. Our location-based content includes in-situ and remotely sensed data disseminated by NASA and other national and local agencies. Our project is demonstrated for managing the total maximum daily load (TMDL) program in the Arroyo Colorado coastal river basin in Texas. The overall design focuses on assigning spatial information to decision support elements and on efficiently using Web 2.0 technologies to relay scientific information to the nonscientific community. We conclude that (i) social networking, if appropriately used, has great potential for mitigating difficulty associated with multigroup decision making; (ii) all potential stakeholder groups should be involved in creating a useful decision support system; and (iii) environmental decision support systems should be considered a must-have, instead of an optional component of TMDL decision support projects. Acknowledgment: This project was supported by NASA grant NNX09AR63G.

  16. Development of the V4.2m5 and V5.0m0 Multigroup Cross Section Libraries for MPACT for PWR and BWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kang Seog; Clarno, Kevin T.; Gentry, Cole

    2017-03-01

    The MPACT neutronics module of the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator is a 3-D whole core transport code being developed for the CASL toolset, Virtual Environment for Reactor Analysis (VERA). Key characteristics of the MPACT code include (1) a subgroup method for resonance self-shielding and (2) a whole-core transport solver with a 2-D/1-D synthesis method. The MPACT code requires a cross section library to support all of its core simulation capabilities; this library is the component with the greatest influence on simulation accuracy.

  17. Analysis of the multigroup model for muon tomography based threat detection

    NASA Astrophysics Data System (ADS)

    Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.

    2014-02-01

    We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.

  18. Asymptotic, multigroup flux reconstruction and consistent discontinuity factors

    DOE PAGES

    Trahan, Travis J.; Larsen, Edward W.

    2015-05-12

    Recent theoretical work has led to an asymptotically derived expression for reconstructing the neutron flux from lattice functions and multigroup diffusion solutions. The leading-order asymptotic term is the standard expression for flux reconstruction, i.e., it is the product of a shape function, obtained through a lattice calculation, and the multigroup diffusion solution. The first-order asymptotic correction term is significant only where the gradient of the diffusion solution is not small. Inclusion of this first-order correction term can significantly improve the accuracy of the reconstructed flux. One may define discontinuity factors (DFs) to make certain angular moments of the reconstructed flux continuous across interfaces between assemblies in 1-D. Indeed, the standard assembly discontinuity factors make the zeroth moment (scalar flux) of the reconstructed flux continuous. The inclusion of the correction term in the flux reconstruction provides an additional degree of freedom that can be used to make two angular moments of the reconstructed flux continuous across interfaces by using current DFs in addition to flux DFs. Thus, numerical results demonstrate that using flux and current DFs together can be more accurate than using only flux DFs, and that making the second angular moment continuous can be more accurate than making the zeroth moment continuous.
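
    A sketch of the reconstruction idea, with made-up lattice functions (f0, f1 and all shapes below are illustrative, not the paper's notation): the standard reconstruction multiplies the diffusion solution by a lattice shape function, and the first-order term adds a correction proportional to the diffusion gradient.

        import numpy as np

        x = np.linspace(0.0, 20.0, 401)            # fine spatial mesh (cm)
        phi_diff = np.cos(np.pi * (x - 10) / 40)   # hypothetical diffusion solution
        dphi = np.gradient(phi_diff, x)            # its spatial gradient

        pitch = 1.26                               # hypothetical lattice pitch (cm)
        f0 = 1.0 + 0.08 * np.cos(2 * np.pi * x / pitch)  # shape function
        f1 = 0.02 * np.sin(2 * np.pi * x / pitch)        # first-order correction shape

        phi_standard = f0 * phi_diff               # leading-order reconstruction
        phi_corrected = phi_standard + f1 * dphi   # adds the gradient term

        # The correction matters most where the diffusion gradient is largest.
        i = int(np.argmax(np.abs(dphi)))
        print(phi_standard[i], phi_corrected[i])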

  19. Calculation evaluation of multiplying properties of LWR with thorium fuel

    NASA Astrophysics Data System (ADS)

    Shamanin, I. V.; Grachev, V. M.; Knyshev, V. V.; Bedenko, S. V.; Novikova, N. G.

    2017-01-01

    This work presents the results of a computational study of the multiplying properties of a unit cell and an LWR fuel assembly containing fuel pellets from a high-temperature gas-cooled thorium reactor. The calculations indicate that thorium can be used effectively in an LWR; in this case the amount of fissile isotope required is 2.45 times smaller than for the standard LWR loading. The research and numerical experiments were carried out using the verified MCU5 code, modern libraries of evaluated nuclear data, and multigroup approximations.

  20. A NUMERICAL ALGORITHM FOR MODELING MULTIGROUP NEUTRINO-RADIATION HYDRODYNAMICS IN TWO SPATIAL DIMENSIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swesty, F. Douglas; Myra, Eric S.

    It is now generally agreed that multidimensional, multigroup, neutrino-radiation hydrodynamics (RHD) is an indispensable element of any realistic model of stellar-core collapse, core-collapse supernovae, and proto-neutron star instabilities. We have developed a new, two-dimensional, multigroup algorithm that can model neutrino-RHD flows in core-collapse supernovae. Our algorithm uses an approach similar to the ZEUS family of algorithms, originally developed by Stone and Norman. However, this completely new implementation extends that previous work in three significant ways: first, we incorporate multispecies, multigroup RHD in a flux-limited-diffusion approximation. Our approach is capable of modeling pair-coupled neutrino-RHD, and includes effects of Pauli blocking in the collision integrals. Blocking gives rise to nonlinearities in the discretized radiation-transport equations, which we evolve implicitly in time. We employ parallelized Newton-Krylov methods to obtain a solution of these nonlinear, implicit equations. Our second major extension to the ZEUS algorithm is the inclusion of an electron conservation equation that describes the evolution of electron-number density in the hydrodynamic flow. This permits calculating deleptonization of a stellar core. Our third extension modifies the hydrodynamics algorithm to accommodate realistic, complex equations of state, including those having nonconvex behavior. In this paper, we present a description of our complete algorithm, giving sufficient details to allow others to implement, reproduce, and extend our work. Finite-differencing details are presented in appendices. We also discuss implementation of this algorithm on state-of-the-art, parallel-computing architectures. Finally, we present results of verification tests that demonstrate the numerical accuracy of this algorithm on diverse hydrodynamic, gravitational, radiation-transport, and RHD sample problems. We believe our methods to be of general use in a variety of model settings where radiation transport or RHD is important. Extension of this work to three spatial dimensions is straightforward.
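
    The implicit update at the heart of such a scheme is a large nonlinear solve. A toy illustration of a Jacobian-free Newton-Krylov step (scipy.optimize.newton_krylov) on a 1-D nonlinear diffusion equation, standing in for the much larger pair-coupled radiation system the paper evolves implicitly:

        import numpy as np
        from scipy.optimize import newton_krylov

        # Backward-Euler step of a toy nonlinear diffusion equation u_t = (u^3 u_x)_x.
        n, dx, dt = 100, 1.0, 0.5
        u_old = 1.0 + np.exp(-((np.arange(n) - n / 2) ** 2) / 50.0)

        def residual(u):
            d = u ** 3                                       # nonlinear diffusivity
            flux = 0.5 * (d[1:] + d[:-1]) * np.diff(u) / dx  # face-centered fluxes
            div = np.zeros_like(u)
            div[1:-1] = np.diff(flux) / dx
            return u - u_old - dt * div                      # zero at the solution

        u_new = newton_krylov(residual, u_old, f_tol=1e-8)
        print(u_new.max())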

  1. Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.; Hinnov, Linda A.

    2010-08-01

    Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 10⁴-10⁶ years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by “stochastic events,” which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.

  2. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    NASA Astrophysics Data System (ADS)

    Soltanzadeh, I.; Azadi, M.; Vakili, G. A.

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited-area models (WRF, MM5 and HRM), with WRF used in five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days, using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attribute diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
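
    Once the member weights and kernel width have been trained, using the BMA mixture is a one-liner per forecast. A sketch with entirely hypothetical numbers: the deterministic-style forecast is the weighted ensemble mean, and event probabilities come from the mixture of kernels.

        import numpy as np
        from scipy.stats import norm

        members = np.array([14.2, 15.1, 13.8, 14.9, 15.6, 14.4, 13.9])  # 7-member 2-m T (C)
        weights = np.array([0.25, 0.10, 0.15, 0.20, 0.05, 0.15, 0.10])  # trained BMA weights
        sigma = 1.8                                                     # kernel std dev (C)

        mean = weights @ members                       # deterministic-style forecast
        p_above_16 = weights @ norm.sf(16.0, loc=members, scale=sigma)
        print(f"BMA mean: {mean:.2f} C, P(T > 16 C) = {p_above_16:.2f}")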

  3. Deterministic Squeezed States with Collective Measurements and Feedback.

    PubMed

    Cox, Kevin C; Greve, Graham P; Weiner, Joshua M; Thompson, James K

    2016-03-04

    We demonstrate the creation of entangled, spin-squeezed states using a collective, or joint, measurement and real-time feedback. The pseudospin state of an ensemble of N=5×10^{4} laser-cooled ^{87}Rb atoms is deterministically driven to a specified population state with angular resolution that is a factor of 5.5(8) [7.4(6) dB] in variance below the standard quantum limit for unentangled atoms, comparable to the best enhancements using only unitary evolution. Without feedback, conditioning on the outcome of the joint premeasurement, we directly observe up to 59(8) times [17.7(6) dB] improvement in quantum phase variance relative to the standard quantum limit for N=4×10^{5} atoms. This is one of the largest reported entanglement enhancements to date in any system.

  4. Deterministic generation of multiparticle entanglement by quantum Zeno dynamics.

    PubMed

    Barontini, Giovanni; Hohmann, Leander; Haas, Florian; Estève, Jérôme; Reichel, Jakob

    2015-09-18

    Multiparticle entangled quantum states, a key resource in quantum-enhanced metrology and computing, are usually generated by coherent operations exclusively. However, unusual forms of quantum dynamics can be obtained when environment coupling is used as part of the state generation. In this work, we used quantum Zeno dynamics (QZD), based on nondestructive measurement with an optical microcavity, to deterministically generate different multiparticle entangled states in an ensemble of 36 qubit atoms in less than 5 microseconds. We characterized the resulting states by performing quantum tomography, yielding a time-resolved account of the entanglement generation. In addition, we studied the dependence of quantum states on measurement strength and quantified the depth of entanglement. Our results show that QZD is a versatile tool for fast and deterministic entanglement generation in quantum engineering applications.

  5. The psychometric properties of the 5-item gratitude questionnaire in Chinese adolescents.

    PubMed

    Zeng, Y; Ling, Y; Huebner, E S; He, Y; Lei, X

    2017-05-01

    WHAT IS KNOWN ON THE SUBJECT?: The GQ-6 is one of the most widely used self-report questionnaires to evaluate the level of gratitude among adults. The GQ-5 appears suitable for adolescents. WHAT THIS PAPER ADDS TO EXISTING KNOWLEDGE?: We developed a Chinese version of the GQ-5 and examined evidence for its reliability and validity. Results demonstrated adequate reliability and validity, indicating that it is appropriate for the assessment of gratitude in Chinese adolescents. In addition, Chinese early adolescent females reported higher gratitude than adolescent males. WHAT ARE THE IMPLICATIONS FOR PRACTICE?: Screening adolescents who have lower levels of gratitude through the GQ-5 could help identify students who may benefit from empirically validated interventions to promote higher levels of gratitude in an effort to promote positive psychosocial and academic outcomes. Background This study was conducted to evaluate the psychometric properties of the Chinese version of the 5-item Gratitude Questionnaire (GQ-5). Method The sample consisted of 2093 middle school students (46.8% males) in mainland China. Confirmatory factor analysis and multigroup confirmatory factor analysis were performed to examine the factor structure and the measurement equivalence across gender. The convergent validity, Cronbach's α and mean interitem correlations of the GQ-5 were also evaluated. Results The results provided evidence of internal consistency reliability through a Cronbach's α of 0.812 and a mean interitem correlation of 0.463 for the total sample. The results also supported a one-dimensional factor structure. In addition, convergent validity was assessed by statistically significant positive correlations between the GQ-5 and the two subscales of the Children's Hope Scale (CHS) and the Brief Multidimensional Students' Life Satisfaction Scale (BMSLSS) total score. Finally, multigroup confirmatory factor analysis also demonstrated measurement equivalence across gender. Subsequent analyses of latent means revealed gender differences between early adolescent male and female students. Conclusions The Chinese version of the GQ-5 appears to be a reliable and valid measure of gratitude among Chinese early adolescents. Early adolescent female students reported higher gratitude than early adolescent male students.
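
    The two reliability statistics reported above are straightforward to compute. A sketch on simulated 5-item Likert data (the data-generating model is illustrative only):

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_subjects, n_items) array of item scores."""
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        def mean_interitem_r(items):
            r = np.corrcoef(items, rowvar=False)
            return r[np.triu_indices_from(r, k=1)].mean()

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(200, 1))  # one common factor
        items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 5))), 1, 5)

        print(f"alpha = {cronbach_alpha(items):.3f}, mean r = {mean_interitem_r(items):.3f}")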

  6. Factorial invariance of child self-report across age subgroups: a confirmatory factor analysis of ages 5 to 16 years utilizing the PedsQL 4.0 Generic Core Scales.

    PubMed

    Limbers, Christine A; Newman, Daniel A; Varni, James W

    2008-01-01

    The utilization of health-related quality of life (HRQOL) measurement in an effort to improve pediatric health and well-being and determine the value of health care services has grown dramatically over the past decade. The paradigm shift toward patient-reported outcomes (PROs) in clinical trials has provided the opportunity to emphasize the value and essential need for pediatric patient self-report. In order for HRQOL/PRO comparisons to be meaningful for subgroup analyses, it is essential to demonstrate factorial invariance. This study examined age subgroup factorial invariance of child self-report for ages 5 to 16 years on more than 8,500 children utilizing the PedsQL 4.0 Generic Core Scales. Multigroup Confirmatory Factor Analysis (MGCFA) was performed specifying a five-factor model. Two multigroup structural equation models, one with constrained parameters and the other with unconstrained parameters, were proposed to compare the factor loadings across the age subgroups. Metric invariance (i.e., equal factor loadings) across the age subgroups was demonstrated based on stability of the Comparative Fit Index between the two models, and several additional indices of practical fit including the Root Mean Squared Error of Approximation, the Non-Normed Fit Index, and the Parsimony Normed Fit Index. The findings support an equivalent five-factor structure across the age subgroups. Based on these data, it can be concluded that children across the age subgroups in this study interpreted items on the PedsQL 4.0 Generic Core Scales in a similar manner regardless of their age.

  7. PROTEUS-SN User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shemon, Emily R.; Smith, Micheal A.; Lee, Changho

    2016-02-16

    PROTEUS-SN is a three-dimensional, highly scalable, high-fidelity neutron transport code developed at Argonne National Laboratory. The code is applicable to all spectrum reactor transport calculations, particularly those in which a high degree of fidelity is needed either to represent spatial detail or to resolve solution gradients. PROTEUS-SN solves the second order formulation of the transport equation using the continuous Galerkin finite element method in space, the discrete ordinates approximation in angle, and the multigroup approximation in energy. PROTEUS-SN’s parallel methodology permits the efficient decomposition of the problem by both space and angle, permitting large problems to run efficiently on hundreds of thousands of cores. PROTEUS-SN can also be used in serial or on smaller compute clusters (10’s to 100’s of cores) for smaller homogenized problems, although it is generally more computationally expensive than traditional homogenized methodology codes. PROTEUS-SN has been used to model partially homogenized systems, where regions of interest are represented explicitly and other regions are homogenized to reduce the problem size and required computational resources. PROTEUS-SN solves forward and adjoint eigenvalue problems and permits both neutron upscattering and downscattering. An adiabatic kinetics option has recently been included for performing simple time-dependent calculations in addition to standard steady state calculations. PROTEUS-SN handles void and reflective boundary conditions. Multigroup cross sections can be generated externally using the MC2-3 fast reactor multigroup cross section generation code or internally using the cross section application programming interface (API), which can treat the subgroup or resonance table libraries. PROTEUS-SN is written in Fortran 90 and also includes C preprocessor definitions. The code links against the PETSc, METIS, HDF5, and MPICH libraries. It optionally links against the MOAB library and is a part of the SHARP multi-physics suite for coupled multi-physics analysis of nuclear reactors. This user manual describes how to set up a neutron transport simulation with the PROTEUS-SN code. A companion methodology manual describes the theory and algorithms within PROTEUS-SN.

  8. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case where they were equal. The failed region patterns between models are similar; however, there are differences that arise due to stress reduced from element elimination that cause probabilistic failed regions to continue to rise after no deterministic failed region would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regards to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and is more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to force distribution differences.

  9. Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code

    NASA Astrophysics Data System (ADS)

    Wemple, Charles; Zwermann, Winfried

    2017-09-01

    Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
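
    The sampling step of such an approach can be sketched in a few lines: draw correlated relative perturbation factors from a multigroup covariance matrix and apply them to the nominal cross sections; each sample would then drive one lattice calculation. The 3-group covariance below is hypothetical, standing in for the SCALE 6.1 covariance data mentioned above.

        import numpy as np

        rel_cov = np.array([[0.0004, 0.0002, 0.0000],
                            [0.0002, 0.0009, 0.0003],
                            [0.0000, 0.0003, 0.0016]])  # relative covariance (hypothetical)
        sigma_nominal = np.array([1.20, 0.80, 0.30])    # nominal group cross sections

        rng = np.random.default_rng(1)
        factors = rng.multivariate_normal(np.ones(3), rel_cov, size=500)
        samples = factors * sigma_nominal               # 500 perturbed libraries

        # The spread of any output across the 500 runs is the propagated uncertainty;
        # here we just confirm the input relative standard deviations are reproduced.
        print(samples.std(axis=0) / sigma_nominal)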

  10. An Analysis of Coherent Digital Receivers in the Presence of Colored Noise Interference.

    DTIC Science & Technology

    1985-06-01

    6.4 Pe for Deterministic Jammers, JSR = 0.01 ... 6.5 Pe for Deterministic Jammers, JSR = 0.1 ... where h_p(t) and h_h(t) are the particular and homogeneous solutions, respectively, to a differential equation derived from the Fredholm I... yields D(s²)c(s) = N(s²) (3.4). Multiplication by s corresponds to differentiation with respect to t in the time domain. So, Eq. (3.4) becomes D(p²)K

  11. Nuclear Data Uncertainty Propagation in Depletion Calculations Using Cross Section Uncertainties in One-group or Multi-group

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Díez, C.J., E-mail: cj.diez@upm.es; Cabellos, O.; Instituto de Fusión Nuclear, Universidad Politécnica de Madrid, 28006 Madrid

    Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One proposed approach was the Hybrid Method, where uncertainties in nuclear data are propagated only through the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary, and hence their collapsed one-group uncertainties. This approach has been applied successfully to several advanced reactor systems, such as EFIT (an ADS-like reactor) and ESFR (a sodium fast reactor), to assess uncertainties in the isotopic composition. However, a comparison with using multi-group energy structures was not carried out, and has to be performed in order to analyse the limitations of using one-group uncertainties.

  12. Nuclear Data Uncertainty Propagation in Depletion Calculations Using Cross Section Uncertainties in One-group or Multi-group

    NASA Astrophysics Data System (ADS)

    Díez, C. J.; Cabellos, O.; Martínez, J. S.

    2015-01-01

    Several approaches have been developed in recent decades to tackle nuclear data uncertainty propagation problems in burn-up calculations. One proposed approach was the Hybrid Method, where uncertainties in nuclear data are propagated only through the depletion part of a burn-up problem. Because only depletion is addressed, only one-group cross sections are necessary, and hence their collapsed one-group uncertainties. This approach has been applied successfully to several advanced reactor systems, such as EFIT (an ADS-like reactor) and ESFR (a sodium fast reactor), to assess uncertainties in the isotopic composition. However, a comparison with using multi-group energy structures was not carried out, and has to be performed in order to analyse the limitations of using one-group uncertainties.
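
    The collapse of multigroup uncertainties to one group discussed here is an application of the sandwich rule. A sketch with hypothetical three-group data: the one-group relative variance is s^T C s, where s holds the relative sensitivities of the collapsed cross section to each group.

        import numpy as np

        phi = np.array([0.2, 0.5, 0.3])            # hypothetical weighting flux
        sigma_g = np.array([1.5, 1.0, 0.4])        # multigroup cross sections
        C_rel = np.array([[0.0009, 0.0004, 0.0000],
                          [0.0004, 0.0016, 0.0006],
                          [0.0000, 0.0006, 0.0025]])  # relative covariance (hypothetical)

        sigma_1g = phi @ sigma_g / phi.sum()       # flux-weighted one-group value
        s = (phi * sigma_g) / (phi @ sigma_g)      # relative sensitivities (sum to 1)
        rel_var = s @ C_rel @ s                    # sandwich rule
        print(f"one-group sigma = {sigma_1g:.3f}, rel. std dev = {np.sqrt(rel_var):.2%}")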

  13. A multi-group firefly algorithm for numerical optimization

    NASA Astrophysics Data System (ADS)

    Tong, Nan; Fu, Qiang; Zhong, Caiming; Wang, Pengjun

    2017-08-01

    To address the premature convergence of the firefly algorithm (FA), this paper analyzes the evolution mechanism of the algorithm and proposes an improved firefly algorithm based on a modified evolution model and a multi-group learning mechanism (IMGFA). The firefly colony is divided into several subgroups with different model parameters. Within each subgroup, the optimal firefly is responsible for leading the other fireflies in the early global evolution and for establishing mutual information exchange among the fireflies. Each firefly then performs a local search by following brighter fireflies among its neighbors. At the same time, a learning mechanism among the best fireflies of the various subgroups exchanges information and helps the population reach global optimization goals more effectively. Experimental results verify the effectiveness of the proposed algorithm.
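
    A toy version of the firefly update with a subgroup structure, minimizing the sphere function. This is a generic FA sketch to show the mechanics of attraction and subgroup-restricted learning; it is not the IMGFA algorithm itself, and all parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        dim, n_fireflies, n_groups, iters = 5, 24, 3, 200
        beta0, gamma, alpha = 1.0, 1.0, 0.05

        def f(x):  # objective to minimize
            return (x ** 2).sum(axis=-1)

        x = rng.uniform(-5, 5, size=(n_fireflies, dim))
        for _ in range(iters):
            fit = f(x)
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    # move i toward any brighter firefly j in the same subgroup
                    if fit[j] < fit[i] and i % n_groups == j % n_groups:
                        r2 = ((x[i] - x[j]) ** 2).sum()
                        beta = beta0 * np.exp(-gamma * r2)
                        x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
            # inter-group learning: drift slightly toward the current global best
            x += 0.01 * (x[f(x).argmin()] - x)
        print(f(x).min())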

  14. Factorial invariance of child self-report across healthy and chronic health condition groups: a confirmatory factor analysis utilizing the PedsQL™ 4.0 Generic Core Scales.

    PubMed

    Limbers, Christine A; Newman, Daniel A; Varni, James W

    2008-07-01

    The objective of the present study was to examine the factorial invariance of the PedsQL 4.0 Generic Core Scales for child self-report across 11,433 children ages 5-18 with chronic health conditions and healthy children. Multigroup Confirmatory Factor Analysis was performed specifying a five-factor model. Two multigroup structural equation models, one with constrained parameters and the other with unconstrained parameters, were proposed in order to compare the factor loadings across children with chronic health conditions and healthy children. Metric invariance (i.e., equal factor loadings) was demonstrated based on stability of the Comparative Fit Index (CFI) between the two models, and several additional indices of practical fit including the root mean squared error of approximation, the Non-normed Fit Index, and the Parsimony Normed Fit Index. The findings support an equivalent five-factor structure on the PedsQL 4.0 Generic Core Scales across healthy and chronic health condition groups. These findings suggest that when differences are found across chronic health condition and healthy groups when utilizing the PedsQL, these differences are more likely real differences in self-perceived health-related quality of life, rather than differences in interpretation of the PedsQL items as a function of health status.

  15. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    NASA Astrophysics Data System (ADS)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut answers to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of a mountain river in southern Poland, the Raba River.
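
    One standard piece of evidence for deterministic chaos is a positive largest Lyapunov exponent estimated from a delay embedding. A crude Rosenstein-style sketch on the logistic map (used here as a stand-in for a discharge series; embedding parameters and window sizes are illustrative):

        import numpy as np

        x = np.empty(3000)
        x[0] = 0.4
        for i in range(1, x.size):
            x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])  # chaotic logistic map

        m, tau = 3, 1                                 # embedding dimension and delay
        N = x.size - (m - 1) * tau
        emb = np.column_stack([x[i * tau : i * tau + N] for i in range(m)])

        horizon, logs = 10, []
        for i in range(0, N - horizon, 10):
            dists = np.linalg.norm(emb - emb[i], axis=1)
            dists[max(0, i - 5) : i + 6] = np.inf     # exclude temporal neighbors
            j = int(np.argmin(dists[: N - horizon]))  # nearest neighbor on the attractor
            if np.isfinite(dists[j]) and dists[j] > 0:
                d1 = np.linalg.norm(emb[i + horizon] - emb[j + horizon])
                logs.append(np.log(d1 / dists[j]))
        lyap = np.mean(logs) / horizon
        print(f"largest Lyapunov exponent ~ {lyap:.2f} (positive suggests chaos)")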

  16. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W

    2015-01-01

    Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).

  17. Analytical three-dimensional neutron transport benchmarks for verification of nuclear engineering codes. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganapol, B.D.; Kornreich, D.E.

    Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation, as modified in the second year renewal application, includes the following three primary tasks. Task 1 on two-dimensional neutron transport is divided into (a) the single medium searchlight problem (SLP) and (b) the two-adjacent-half-space SLP. Task 2 on three-dimensional neutron transport covers (a) a point source in arbitrary geometry, (b) the single medium SLP, and (c) the two-adjacent-half-space SLP. Task 3 on code verification includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to include multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.

  18. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, S.A.

    1996-02-01

    The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero-variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and the selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of "real" particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a "black box". There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases.

  19. A constant stress-drop model for producing broadband synthetic seismograms: Comparison with the next generation attenuation relations

    USGS Publications Warehouse

    Frankel, A.

    2009-01-01

    Broadband (0.1-20 Hz) synthetic seismograms for finite-fault sources were produced for a model where stress drop is constant with seismic moment to see if they can match the magnitude dependence and distance decay of response spectral amplitudes found in the Next Generation Attenuation (NGA) relations recently developed from strong-motion data of crustal earthquakes in tectonically active regions. The broadband synthetics were constructed for earthquakes of M 5.5, 6.5, and 7.5 by combining deterministic synthetics for plane-layered models at low frequencies with stochastic synthetics at high frequencies. The stochastic portion used a source model where the Brune stress drop of 100 bars is constant with seismic moment. The deterministic synthetics were calculated using an average slip velocity, and hence, dynamic stress drop, on the fault that is uniform with magnitude. One novel aspect of this procedure is that the transition frequency between the deterministic and stochastic portions varied with magnitude, so that the transition frequency is inversely related to the rise time of slip on the fault. The spectral accelerations at 0.2, 1.0, and 3.0 sec periods from the synthetics generally agreed with those from the set of NGA relations for M 5.5-7.5 for distances of 2-100 km. At distances of 100-200 km some of the NGA relations for 0.2 sec spectral acceleration were substantially larger than the values of the synthetics for M 7.5 and M 6.5 earthquakes because these relations do not have a term accounting for Q. At 3 and 5 sec periods, the synthetics for M 7.5 earthquakes generally had larger spectral accelerations than the NGA relations, although there was large scatter in the results from the synthetics. The synthetics showed a sag in response spectra at close-in distances for M 5.5 between 0.3 and 0.7 sec that is not predicted from the NGA relations.

  20. A systematic review of health promotion intervention studies in the police force: study characteristics, intervention design and impacts on health.

    PubMed

    MacMillan, Freya; Karamacoska, Diana; El Masri, Aymen; McBride, Kate A; Steiner, Genevieve Z; Cook, Amelia; Kolt, Gregory S; Klupp, Nerida; George, Emma S

    2017-12-01

    To systematically review studies of health promotion interventions in the police force. Four databases were searched for articles reporting on pre-post single-group and multigroup studies in police officers and trainees. Data were extracted and bias assessed to evaluate study characteristics, intervention design and the impact of interventions on health. Database searching identified 25 articles reporting on 21 studies relevant to the aims of this review. Few studies (n=3) were of long duration (≥6 months). Nine of 21 studies evaluated structured physical activity and/or diet programmes only, 5 studies used education and behaviour change support-only interventions, 5 combined structured programmes with education and behaviour change support, and 2 studies used computer prompts to minimise sedentary behaviour. A wide array of lifestyle behaviour and health outcomes was measured, with 11/13 multigroup and 8/8 single-group studies reporting beneficial impacts on outcomes. High risk of bias was evident across most studies. In those with the lowest risk of bias (n=2), a large effect on blood pressure and small effects on diet, sleep quality, stress and tobacco use were reported. Health promotion interventions can impact beneficially on the health of the police force, particularly blood pressure, diet, sleep, stress and tobacco use. Limited reporting made comparison of findings challenging. Combined structured programmes with education and behaviour change support, and programmes including peer support, resulted in the most impact on health-related outcomes.

  1. Evaluation Seismicity west of block-lut for Deterministic Seismic Hazard Assessment of Shahdad ,Iran

    NASA Astrophysics Data System (ADS)

    Ney, B.; Askari, M.

    2009-04-01

    Deterministic seismic hazard assessment has been carried out for the city of Shahdad, Iran, and four maps (Kerman, Bam, Nakhil Ab, Allah Abad) have been prepared to indicate the deterministic estimate of peak ground acceleration (PGA) in this area. The assessment was performed for a region of eastern Iran (Shahdad) based on the available geological, seismological and geophysical information, and a seismic zoning map of the region has been constructed. First, a seismotectonic map of the study region within a radius of 100 km was prepared using geological maps, the distribution of historical and instrumental earthquake data, and focal mechanism solutions; it was used as the base map for delineation of potential seismic sources. After that, the minimum distance from each seismic source to the site (Shahdad) and the maximum magnitude of each source were determined. According to the results, the peak ground acceleration at Shahdad, estimated using the Fukushima & Tanaka (1990) attenuation relationship, is 0.58 g; this value corresponds to movement of the Nayband fault, at a distance of 2.4 km from the site, with a maximum magnitude of Ms = 7.5.

  2. Analyzing average and conditional effects with multigroup multilevel structural equation models

    PubMed Central

    Mayer, Axel; Nagengast, Benjamin; Fletcher, John; Steyer, Rolf

    2014-01-01

    Conventionally, multilevel analysis of covariance (ML-ANCOVA) has been the recommended approach for analyzing treatment effects in quasi-experimental multilevel designs with treatment application at the cluster-level. In this paper, we introduce the generalized ML-ANCOVA with linear effect functions that identifies average and conditional treatment effects in the presence of treatment-covariate interactions. We show how the generalized ML-ANCOVA model can be estimated with multigroup multilevel structural equation models that offer considerable advantages compared to traditional ML-ANCOVA. The proposed model takes into account measurement error in the covariates, sampling error in contextual covariates, treatment-covariate interactions, and stochastic predictors. We illustrate the implementation of ML-ANCOVA with an example from educational effectiveness research where we estimate average and conditional effects of early transition to secondary schooling on reading comprehension. PMID:24795668
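
    The effect-function idea at the heart of the generalized ML-ANCOVA can be written compactly. In generic notation (ours, not necessarily the paper's), with treatment indicator X and covariates Z:

        E(Y \mid X, Z) \;=\; g_0(Z) + g_1(Z)\,X,
        \qquad
        \text{ATE} \;=\; E\!\left[g_1(Z)\right]

    Here g_1(z) is the conditional treatment effect given Z = z; with treatment-covariate interactions, g_1 genuinely varies over Z, and the multigroup multilevel SEM estimates g_0 and g_1 while accounting for measurement error in the covariates.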

  3. An Automatic Detection System of Lung Nodule Based on Multi-Group Patch-Based Deep Learning Network.

    PubMed

    Jiang, Hongyang; Ma, He; Qian, Wei; Gao, Mengdi; Li, Yan

    2017-07-14

    High-efficiency lung nodule detection contributes greatly to the risk assessment of lung cancer. Quickly locating the exact positions of lung nodules is a significant and challenging task. Extensive work has been done by researchers in this domain for approximately two decades. However, previous computer-aided detection (CADe) schemes are mostly intricate and time-consuming, since they may require additional image processing modules, such as computed tomography (CT) image transformation, lung nodule segmentation and feature extraction, to construct a whole CADe system. It is difficult for such schemes to process and analyze the enormous volume of data as the number of medical images continues to increase. Besides, some state-of-the-art deep learning schemes impose strict requirements on the underlying database. This study proposes an effective lung nodule detection scheme based on multi-group patches cut out from the lung images, which are enhanced by the Frangi filter. By combining two groups of images, a four-channel convolutional neural network (CNN) model is designed to learn the knowledge of radiologists for detecting nodules at four levels. This CADe scheme achieves a sensitivity of 80.06% with 4.7 false positives per scan and a sensitivity of 94% with 15.1 false positives per scan. The results demonstrate that the multi-group patch-based learning system is efficient in improving the performance of lung nodule detection and greatly reduces the false positives under a huge amount of image data.

  4. A new multigroup method for cross-sections that vary rapidly in energy

    NASA Astrophysics Data System (ADS)

    Haut, T. S.; Ahrens, C.; Jonko, A.; Lowrie, R.; Till, A.

    2017-01-01

    We present a numerical method for solving the time-independent thermal radiative transfer (TRT) equation or the neutron transport (NT) equation when the opacity (cross-section) varies rapidly in frequency (energy) on the microscale ε; ε corresponds to the characteristic spacing between absorption lines or resonances, and is much smaller than the macroscopic frequency (energy) variation of interest. The approach is based on a rigorous homogenization of the TRT/NT equation in the frequency (energy) variable. Discretization of the homogenized TRT/NT equation results in a multigroup-type system, and can therefore be solved by standard methods. We demonstrate the accuracy and efficiency of the approach on three model problems. First we consider the Elsasser band model with constant temperature and a line spacing ε = 10⁻⁴. Second, we consider a neutron transport application for fast neutrons incident on iron, where the characteristic resonance spacing ε necessitates ≈16,000 energy discretization parameters if Planck-weighted cross sections are used. Third, we consider an atmospheric TRT problem for an opacity corresponding to water vapor over the frequency range 1000-2000 cm⁻¹, where we take 12 homogeneous layers between 1-15 km, and temperature/pressure values in each layer from the standard US atmosphere. For all three problems, we demonstrate that we can achieve between 0.1 and 1 percent relative error in the solution, with several orders of magnitude fewer parameters than a standard multigroup formulation using Planck-weighted (source-weighted) opacities for comparable accuracy.
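
    For reference, the standard multigroup formulation that the homogenized system is measured against defines each group opacity by source (Planck) weighting; in generic notation, for group g spanning [ν_g, ν_{g+1}]:

        \sigma_g \;=\; \frac{\displaystyle\int_{\nu_g}^{\nu_{g+1}} B(\nu, T)\,\sigma(\nu)\,\mathrm{d}\nu}
                            {\displaystyle\int_{\nu_g}^{\nu_{g+1}} B(\nu, T)\,\mathrm{d}\nu}

    where B(ν, T) is the Planck function. Resolving an opacity that oscillates on the scale ε with such averages is what forces the very large group counts quoted above; the homogenization approach avoids this.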

  5. Jitter and phase noise of ADPLL due to PSN with deterministic frequency

    NASA Astrophysics Data System (ADS)

    Deng, Xiaoying; Yang, Jun; Wu, Jianhui

    2011-09-01

    In this article, the jitter and phase noise of an all-digital phase-locked loop (ADPLL) due to power supply noise (PSN) with deterministic frequency are analysed. The analysis leads to the conclusion that jitter and phase noise depend heavily on the noise frequency. Compared with jitter, phase noise is much less affected by deterministic PSN. Our method is used to study a CMOS ADPLL designed and simulated in the SMIC 0.13 µm standard CMOS process. A comparison between the results obtained by our method and those obtained by simulation and measurement confirms the accuracy of the predicted model. When the digitally controlled oscillator was corrupted by PSN of 100 mV pk-pk, the measured jitter was 33.9 ps at fG = 192 MHz and 148.5 ps at fG = 40 MHz. The measured phase noise, however, was essentially identical except for two impulses appearing at 192 and 40 MHz, respectively.

  6. Efficient solution of the simplified PN equations

    DOE PAGES

    Hamilton, Steven P.; Evans, Thomas M.

    2014-12-23

    We show new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
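
    For orientation, the baseline the other solvers are measured against is classic power iteration on the k-eigenvalue form A·φ = (1/k)·F·φ of the discretized problem. A minimal dense-matrix sketch in Python (the names A and F are illustrative stand-ins for the SPN loss and fission operators; production codes replace the direct solve with preconditioned Krylov or multigrid methods):

        import numpy as np

        def k_power_iteration(A, F, tol=1e-8, max_iter=10000):
            """Power iteration for the k-eigenvalue problem A phi = (1/k) F phi.

            A: loss (streaming + removal) matrix, F: fission source matrix.
            Returns the dominant k and the corresponding flux vector.
            """
            phi = np.ones(A.shape[0])
            k = 1.0
            for _ in range(max_iter):
                # In production codes this solve is preconditioned Krylov/multigrid.
                phi_new = np.linalg.solve(A, F @ phi / k)
                k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
                phi_new /= np.linalg.norm(phi_new)
                if abs(k_new - k) < tol:
                    return k_new, phi_new
                k, phi = k_new, phi_new
            return k, phi

    Davidson-type methods replace this one-vector fixed-point map with a projected eigenproblem over a growing subspace, which is what makes the order-of-magnitude speedups reported above plausible.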

  7. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratio is rooted in the probability distributions of resistive and applied stresses. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability-sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.

  8. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    PubMed Central

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Like other biotic groups, arthropod communities can be shaped by deterministic and stochastic processes, and it remains poorly understood what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbance as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and the relative influences of deterministic and stochastic processes in the assembly of aboveground (surface- and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. The structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association among belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333
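
    The null-deviation logic used to infer assembly processes can be sketched compactly: compare observed between-sample beta diversity to its distribution under a randomized (null) community matrix and standardize the difference. A toy numpy illustration (a simple whole-matrix shuffle serves as the null here; published null models are typically more constrained, e.g. fixing row and column totals):

        import numpy as np

        rng = np.random.default_rng(0)

        def bray_curtis(x, y):
            return np.abs(x - y).sum() / (x + y).sum()

        def mean_beta(mat):
            n = mat.shape[0]
            pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
            return np.mean([bray_curtis(mat[i], mat[j]) for i, j in pairs])

        def null_deviation(mat, n_null=999):
            """Standardized deviation of observed beta diversity from a null model."""
            obs = mean_beta(mat)
            nulls = []
            for _ in range(n_null):
                shuffled = mat.flatten()
                rng.shuffle(shuffled)
                nulls.append(mean_beta(shuffled.reshape(mat.shape)))
            nulls = np.asarray(nulls)
            return (obs - nulls.mean()) / nulls.std()

        # Toy community matrix: 6 samples x 10 morphospecies abundance counts.
        community = rng.poisson(3.0, size=(6, 10))
        print(f"beta-diversity null deviation: {null_deviation(community):.2f}")

    Deviations near zero are consistent with stochastic assembly; large positive or negative deviations indicate deterministic structuring.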

  9. Resonance treatment using pin-based pointwise energy slowing-down method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sooyoung, E-mail: csy0321@unist.ac.kr; Lee, Changho, E-mail: clee@anl.gov; Lee, Deokjung, E-mail: deokjung@unist.ac.kr

    A new resonance self-shielding method using a pointwise energy solution has been developed to overcome the drawbacks of the equivalence theory. The equivalence theory uses a crude resonance scattering source approximation, and assumes a spatially constant scattering source distribution inside a fuel pellet. These two assumptions cause a significant error, in that they overestimate the multi-group effective cross sections, especially for ²³⁸U. The new resonance self-shielding method solves pointwise energy slowing-down equations with a sub-divided fuel rod. The method adopts a shadowing effect correction factor and fictitious moderator material to model a realistic pointwise energy solution. The slowing-down solution is used to generate the multi-group cross section. With various light water reactor problems, it was demonstrated that the new resonance self-shielding method significantly improved accuracy in the reactor parameter calculation with no compromise in computation time, compared to the equivalence theory.

  11. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE PAGES

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    2017-12-22

    This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs, with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20-40 CPU cores on the K40 GPU and 60-80 CPU cores on the P100 GPU is achieved. In addition, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.
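
    The dead-particle filtering idea is simple to sketch: in a history-based kernel each GPU thread owns one particle, and threads whose particles are dead idle while their warp-mates finish, so the surviving particles are occasionally compacted into a dense array. A schematic host-side numpy sketch (illustrative only, not the authors' implementation; min_alive_fraction is a made-up tuning knob):

        import numpy as np

        def maybe_compact(states, alive, min_alive_fraction=0.5):
            """Occasional dead-particle filtering for a history-based kernel.

            states: array of per-particle state records; alive: boolean mask.
            Compacting restores full thread utilization, but itself costs
            memory traffic, so it is only done once enough particles have died.
            """
            if alive.mean() < min_alive_fraction:
                states = states[alive]
                alive = np.ones(len(states), dtype=bool)
            return states, alive

    The trade-off is that compaction has its own cost, hence "occasionally": it pays off only once enough histories have terminated.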

  12. ATDM Rover Milestone Report STDA02-1 (FY2017 Q4)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larsen, Matt; Laney, Dan E.

    We have successfully completed the MS-4/Y1 Milestone STDA02-1 for the Rover Project. This document describes the milestone and provides an overview of the technical details and artifacts of the milestone. This milestone is focused on building a GPU-accelerated ray tracing package capable of doing multi-group radiography, both back-lit and with self-emission, as well as serving as a volume rendering plot in VisIt and other VTK-based visualization tools. The long-term goal is a package with in-situ capability, but for this first version integration into VisIt is the primary goal. Milestone Execution Plan: Create an API for the GPU ray tracer that supports multi-group transport (up to hundreds of groups); implement components into one or more of: VTK-m, VisIt, and a new library/package implementation to be hosted on LLNL Bitbucket (initially), before releasing to the wider community.

  13. Risk of DDT residue in maize consumed by infants as complementary diet in southwest Ethiopia.

    PubMed

    Mekonen, Seblework; Lachat, Carl; Ambelu, Argaw; Steurbaut, Walter; Kolsteren, Patrick; Jacxsens, Liesbeth; Wondafrash, Mekitie; Houbraken, Michael; Spanoghe, Pieter

    2015-04-01

    Infants in Ethiopia consume food items such as maize as a complementary diet. However, this may expose infants to toxic contaminants such as DDT. Maize samples were collected from households visited during a consumption survey and from markets in Jimma zone, southwestern Ethiopia. The residues of total DDT and its metabolites were analyzed using the Quick, Easy, Cheap, Effective, Rugged and Safe (QuEChERS) method combined with dispersive solid-phase extraction cleanup (d-SPE). Deterministic and probabilistic methods of analysis were applied to determine the exposure of infant consumers to total DDT. The results of the exposure assessment were compared with the health-based guidance value, in this case the provisional tolerable daily intake (PTDI). All maize samples (n=127) were contaminated by DDT, with a mean concentration of 1.770 mg/kg, far above the maximum residue limit (MRL). The mean and 97.5th percentile (P97.5) estimated daily intakes of total DDT for consumers were 0.011 and 0.309 mg/kg bw/day, respectively, for the deterministic assessment and 0.011 and 0.083 mg/kg bw/day for the probabilistic assessment. For the total infant population (consumers and non-consumers), the 97.5th percentile estimated daily intakes were 0.265 and 0.032 mg/kg bw/day from the deterministic and probabilistic exposure assessments, respectively. Health risk estimation revealed that the mean and 97.5th percentile estimated daily intakes for consumers, and the 97.5th percentile estimated daily intake for the total population, were above the PTDI. Therefore, in Ethiopia, the use of maize as complementary food for infants may pose a health risk due to DDT residue.
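
    The exposure arithmetic underlying both assessments is: estimated daily intake (EDI) = residue concentration × daily consumption / body weight, compared against the PTDI. A hedged sketch of the deterministic and probabilistic variants; all input values and distributions below are illustrative placeholders, not the study's data:

        import numpy as np

        rng = np.random.default_rng(42)

        PTDI = 0.01  # mg/kg bw/day for DDT (illustrative; check current guidance)

        # Deterministic: single point estimates.
        conc = 1.77      # mg DDT / kg maize (mean from the abstract)
        intake = 0.15    # kg maize / day (hypothetical infant consumption)
        bw = 8.0         # kg body weight (hypothetical)
        edi_det = conc * intake / bw
        print(f"deterministic EDI = {edi_det:.3f} mg/kg bw/day (PTDI = {PTDI})")

        # Probabilistic: sample each input from a distribution instead.
        conc_s = rng.lognormal(mean=np.log(1.0), sigma=0.8, size=100_000)
        intake_s = rng.normal(0.12, 0.04, size=100_000).clip(min=0.0)
        bw_s = rng.normal(8.0, 1.5, size=100_000).clip(min=3.0)
        edi = conc_s * intake_s / bw_s
        print(f"probabilistic EDI: mean = {edi.mean():.3f}, "
              f"P97.5 = {np.percentile(edi, 97.5):.3f} mg/kg bw/day")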

  14. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    PubMed

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

    This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) were compared, along with differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied to each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to sensitivity in revealing the DRTT and additional fiber tracts, and with regard to processing time. Two sets of regions-of-interest (ROIs) guided deterministic tractography of the DRTT and the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive than deterministic tracking in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle). Probabilistic tracking took substantially longer than deterministic tracking. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent, but consistently more posterior, to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although the sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ.

  15. Statistical uncertainty analysis applied to the DRAGONv4 code lattice calculations and based on JENDL-4 covariance data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Solis, A.; Demaziere, C.; Ekberg, C.

    2012-07-01

    In this paper, multi-group microscopic cross-section uncertainty is propagated through the DRAGON (Version 4) lattice code, in order to perform uncertainty analysis on k∞ and 2-group homogenized macroscopic cross-section predictions. A statistical methodology is employed for this purpose, where cross-sections of certain isotopes of various elements belonging to the 172-group DRAGLIB library format are considered as normal random variables. This library is based on JENDL-4 data, because JENDL-4 contains the largest number of isotopic covariance matrices among the major nuclear data libraries. The aim is to propagate multi-group nuclide uncertainty by running the DRAGONv4 code 500 times, and to assess the output uncertainty of a test case corresponding to a 17 × 17 PWR fuel assembly segment without poison. The chosen sampling strategy for the current study is Latin hypercube sampling (LHS). The quasi-random LHS allows a much better coverage of the input uncertainties than simple random sampling (SRS), because it densely stratifies across the range of each input probability distribution. Output uncertainty assessment is based on the tolerance limits concept, where the sample formed by the code calculations is inferred to cover 95% of the output population with at least 95% confidence. This analysis is the first attempt to propagate parameter uncertainties of modern multi-group libraries, which are used to feed advanced lattice codes that perform state-of-the-art resonance self-shielding calculations such as DRAGONv4.
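
    Latin hypercube sampling itself is a few lines: split [0, 1] into N equal-probability strata per input, draw one point per stratum, permute the strata independently across inputs, and map through each input's inverse CDF. A minimal numpy/scipy sketch (the mean and standard-deviation vectors are illustrative; the study additionally honours the full covariance matrices, e.g. via a Cholesky factor, which is omitted here):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(7)

        def latin_hypercube(n_samples, n_dims):
            """LHS on the unit hypercube: one draw per equal-probability stratum."""
            u = (rng.random((n_samples, n_dims))
                 + np.arange(n_samples)[:, None]) / n_samples
            for j in range(n_dims):
                rng.shuffle(u[:, j])  # decouple the strata across dimensions
            return u

        # Map to normally distributed group cross sections (values illustrative).
        mu = np.array([1.0, 0.5, 0.2])      # mean multigroup cross sections
        sd = np.array([0.02, 0.01, 0.01])   # standard deviations
        samples = norm.ppf(latin_hypercube(500, mu.size)) * sd + mu
        # Each of the 500 rows would parameterize one DRAGONv4 run.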

  16. Local Multi-Grouped Binary Descriptor With Ring-Based Pooling Configuration and Optimization.

    PubMed

    Gao, Yongqiang; Huang, Weilin; Qiao, Yu

    2015-12-01

    Local binary descriptors are attracting increasing attention due to their great advantages in computational speed, which enable real-time performance in numerous image/vision applications. Various methods have been proposed to learn data-dependent binary descriptors. However, most existing binary descriptors aim overly at computational simplicity at the expense of significant information loss, which causes ambiguity in similarity measurement using the Hamming distance. In this paper, considering that multiple features might share complementary information, we present a novel local binary descriptor, referred to as the ring-based multi-grouped descriptor (RMGD), to successfully bridge the performance gap between current binary and floating-point descriptors. Our contributions are twofold. First, we introduce a new pooling configuration based on spatial ring-region sampling, allowing for binary tests on the full set of pairwise regions with different shapes, scales, and distances. This leads to a more meaningful description than existing methods, which normally apply a limited set of pooling configurations. An extended AdaBoost is then proposed for efficient bit selection by emphasizing high variance and low correlation, achieving a highly compact representation. Second, the RMGD is computed from multiple image properties from which binary strings are extracted. We cast multi-grouped feature integration as a rankSVM or sparse support vector machine learning problem, so that different features can compensate strongly for each other, which is the key to discriminativeness and robustness. The performance of the RMGD was evaluated on a number of publicly available benchmarks, where the RMGD significantly outperforms state-of-the-art binary descriptors.

  17. Parallel computation of multigroup reactivity coefficient using iterative method

    NASA Astrophysics Data System (ADS)

    Susmikanti, Mike; Dewayatna, Winter

    2013-09-01

    One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. An FPM target is a stainless steel tube containing layers of high-enriched uranium and is irradiated to obtain fission products; the fission material is widely used in kit form in nuclear medicine. Irradiating FPM tubes in the reactor core can disturb core performance, one such disturbance arising from changes in flux or reactivity. A method is therefore needed for calculating the safety margin over the ongoing configuration changes during the life of the reactor, which makes faster code an absolute necessity. The neutron safety margin for the research reactor can be reassessed without modification of the reactivity calculation, which is an advantage of using the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions and uranium contents. This model involves complex computation, and several parallel algorithms with iterative methods have been developed for the solution of large sparse matrix systems. The red-black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and the reactivity coefficient. In this research, a code for reactivity calculation was developed as part of the safety analysis, using parallel processing. The calculation can be done more quickly and efficiently by exploiting parallel processing on a multicore computer. The code was applied to the calculation of safety limits for irradiated FPM targets with increasing uranium content.
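
    The red-black (checkerboard) Gauss-Seidel splitting parallelizes because cells of one color depend only on cells of the other color, so each half-sweep can update all of its cells simultaneously. A minimal sketch for a 2-D Poisson-like diffusion stencil (illustrative; a real multigroup diffusion solver couples several such group equations and wraps them in an outer power iteration, as sketched earlier in this section):

        import numpy as np

        def red_black_gauss_seidel(phi, source, h, n_sweeps=100):
            """Red-black Gauss-Seidel for -laplacian(phi) = source, Dirichlet edges.

            Cells of one color depend only on cells of the other color, so
            each half-sweep may update all of its cells in parallel.
            """
            i, j = np.indices(phi.shape)
            interior = np.zeros(phi.shape, dtype=bool)
            interior[1:-1, 1:-1] = True
            for _ in range(n_sweeps):
                for color in (0, 1):  # 0 = red, 1 = black
                    mask = ((i + j) % 2 == color) & interior
                    phi[mask] = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                                        + np.roll(phi, 1, 1) + np.roll(phi, -1, 1)
                                        + h * h * source)[mask]
            return phi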

  18. Deterministic quantum dense coding networks

    NASA Astrophysics Data System (ADS)

    Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal

    2018-07-01

    We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters, as well as the four-qubit Dicke states, can provide a quantum advantage in deterministic dense coding. Interestingly, however, numerical simulations in the three-qubit scenario reveal that the percentage of deterministically dense-codeable states is higher among states from the GHZ class than among states from the W class.

  19. Comparison of probabilistic and deterministic fiber tracking of cranial nerves.

    PubMed

    Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H

    2017-09-01

    OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided with this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, which were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). The false-positive error of the last obtained depiction was also significantly lower in probabilistic than in deterministic tracking (p < 0.001). The HCP data yielded significantly better results in terms of the Dice coefficient in probabilistic tracking (p < 0.001, Mann-Whitney U-test) and in deterministic tracking (p = 0.02). The false-positive errors were smaller in HCP data in deterministic tracking (p < 0.001) and showed a strong trend toward significance in probabilistic tracking (p = 0.06). In the clinical cases, the probabilistic method visualized 7 of 10 attempted CNs accurately, compared with 3 correct depictions with deterministic tracking. CONCLUSIONS High angular resolution DTI scans are preferable for the DTI-based depiction of the cranial nerves. Probabilistic tracking with a gradual PICo threshold increase is more effective for this task than the previously described deterministic tracking with a gradual FA threshold increase and might represent a method that is useful for depicting cranial nerves with DTI since it eliminates the erroneous fibers without manual intervention.

  20. Estimates of Dietary Exposure to Bisphenol A (BPA) from Light Metal Packaging using Food Consumption and Packaging usage Data: A Refined Deterministic Approach and a Fully Probabilistic (FACET) Approach

    PubMed Central

    Oldring, P.K.T.; Castle, L.; O'Mahony, C.; Dixon, J.

    2013-01-01

    The FACET tool is a probabilistic model to estimate exposure to chemicals in foodstuffs, originating from flavours, additives and food contact materials. This paper demonstrates the use of the FACET tool to estimate exposure to BPA (bisphenol A) from light metal packaging. For exposure to migrants from food packaging, FACET uses industry-supplied data on the occurrence of substances in the packaging, their concentrations and the construction of the packaging, which were combined with data from a market research organisation and food consumption data supplied by national database managers. To illustrate the principles, UK packaging data were used together with consumption data from the UK National Diet and Nutrition Survey (NDNS) for 19–64 year olds for a refined deterministic verification. The UK data were chosen mainly because the consumption surveys are detailed, data for UK packaging were available at a detailed level and, arguably, the UK population is composed of high consumers of packaged foodstuffs. Exposures were run for each food category that could give rise to BPA from light metal packaging. Consumer loyalty to a particular type of packaging, commonly referred to as packaging loyalty, was set. The BPA extraction levels used for the 15 types of coating chemistries that could release BPA were in the range of 0.00005–0.012 mg dm−2. The estimates of exposure to BPA using FACET for the total diet were 0.0098 (mean) and 0.0466 (97.5th percentile) mg/person/day, corresponding to 0.00013 (mean) and 0.00059 (97.5th percentile) mg kg−1 body weight day−1 for consumers of foods packed in light metal packaging. This is well below the current EFSA (and other recognised bodies) TDI of 0.05 mg kg−1 body weight day−1. These probabilistic estimates were compared with estimates using a refined deterministic approach drawing on the same input data. The results from FACET for the mean, 95th and 97.5th percentile exposures to BPA lay between the lowest and the highest estimates from the refined deterministic calculations. Since this should be the case for a fully probabilistic compared with a deterministic approach, it is concluded that the FACET tool has been verified in this example. A recent EFSA draft opinion on exposure to BPA from different sources showed that canned foods were a major contributor and compared results from various models, including those from FACET. The results from FACET were overall conservative. PMID:24405320

  1. Hardware-software face detection system based on multi-block local binary patterns

    NASA Astrophysics Data System (ADS)

    Acasandrei, Laurentiu; Barriga, Angel

    2015-03-01

    Face detection is an important aspect of biometrics, video surveillance and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a huge amount of computational and memory resources. In this communication, an accelerated implementation of the MB-LBP (multi-block local binary pattern) face detection algorithm targeting low-frequency, low-memory and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP based applications. The IP employs 8 parallel MB-LBP feature evaluator cores, uses a deterministic bandwidth, has a low area profile, and its power consumption is ~95 mW on a Virtex5 XC5VLX50T. The resulting acceleration gain is between 5 and 8 times, while the hardware MB-LBP feature evaluation gain is between 69 and 139 times.
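
    The kernel operation that the hardware accelerator implements is the MB-LBP feature itself: average the intensities over each cell of a 3×3 grid of blocks, compare the eight outer cell means with the central one, and pack the results into an 8-bit code. A plain numpy sketch (cell-ordering conventions vary between papers):

        import numpy as np

        def mb_lbp(img, x, y, cell):
            """Multi-block LBP code for a 3x3 grid of cell-by-cell blocks at (x, y)."""
            means = np.empty((3, 3))
            for r in range(3):
                for c in range(3):
                    block = img[y + r * cell: y + (r + 1) * cell,
                                x + c * cell: x + (c + 1) * cell]
                    means[r, c] = block.mean()
            center = means[1, 1]
            # Clockwise from top-left; ordering conventions differ between papers.
            neighbors = [means[0, 0], means[0, 1], means[0, 2], means[1, 2],
                         means[2, 2], means[2, 1], means[2, 0], means[1, 0]]
            code = 0
            for bit, m in enumerate(neighbors):
                code |= int(m >= center) << bit
            return code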

  2. Use of integral experiments in support to the validation of JEFF-3.2 nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Leclaire, Nicolas; Cochet, Bertrand; Jinaphanh, Alexis; Haeck, Wim

    2017-09-01

    For many years now, IRSN has developed its own Monte Carlo continuous-energy capability, which allows various nuclear data libraries to be tested. To that end, a validation database of 1136 experiments was built from cases used for the validation of the APOLLO2-MORET 5 multigroup route of the CRISTAL V2.0 package. In this paper, the keff values obtained for more than 200 benchmarks using the JEFF-3.1.1 and JEFF-3.2 libraries are compared to benchmark keff values, and the main discrepancies are analyzed with respect to the neutron spectrum. Special attention is paid to benchmarks for which the results changed significantly between the two JEFF-3 versions.

  3. Bidirectional Relationships Between Parenting Processes and Deviance in a Sample of Inner-City African American Youth

    PubMed Central

    Harris, Charlene; Vazsonyi, Alexander T.; Bolland, John M.

    2016-01-01

    The current study assessed bidirectional relationships among supportive parenting (knowledge), negative parenting (permissiveness), and deviance in a sample (N = 5,325) of poor, inner-city African American youth from the Mobile Youth Survey (MYS) over 4 years. Cross-lagged path analysis provided evidence of significant bidirectional paths between parenting processes (knowledge and permissiveness) and deviance over time. Follow-up multigroup tests provided only modest evidence of dissimilar relationships by sex and by developmental period. The findings improve our understanding of developmental changes between parenting behaviors and deviance during adolescence and extend current research on the bidirectionality of parent-child relationships among inner-city African American youth. PMID:28316460

  4. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of QSSA, and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
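
    For concreteness, the canonical example of this reduction is the Michaelis-Menten system E + S ⇌ C → E + P. Setting dC/dt ≈ 0 in the deterministic model gives the standard textbook QSSA:

        C \;\approx\; \frac{E_T\,S}{K_M + S},
        \qquad
        \frac{\mathrm{d}S}{\mathrm{d}t} \;=\; -\,\frac{k_2\,E_T\,S}{K_M + S},
        \qquad
        K_M \;=\; \frac{k_{-1} + k_2}{k_1}

    The heuristic stochastic QSSA then reuses the non-elementary rate k₂E_T S/(K_M + S) directly as a reaction propensity; the paper's criterion addresses when that substitution can be trusted.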

  5. Racial-Ethnic Identity and Adjustment in Canadian Indigenous Adolescents

    ERIC Educational Resources Information Center

    Gfellner, Barbara M.; Armstrong, Helen D.

    2013-01-01

    This study supported associations between three theoretically driven conceptualizations of racial and ethnic identity (REI; Multigroup Ethnic Identity Measure; Multidimensional Racial Identity Measure; Bicultural Identity Measure) and with adaptive functioning among Canadian indigenous adolescents in middle school to high school. Age differences…

  6. The Multigroup Multilevel Categorical Latent Growth Curve Models

    ERIC Educational Resources Information Center

    Hung, Lai-Fa

    2010-01-01

    Longitudinal data describe developmental patterns and enable predictions of individual changes beyond sampled time points. Major methodological issues in longitudinal data include modeling random effects, subject effects, growth curve parameters, and autoregressive residuals. This study embedded the longitudinal model within a multigroup…

  7. Determining the bias and variance of a deterministic finger-tracking algorithm.

    PubMed

    Morash, Valerie S; van der Velden, Bas H M

    2016-06-01

    Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward: multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof of concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78% of one-finger video frames and 97.55% of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σx = 0.16 cm, σy = 0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided in which corrections are made using the bias and variance estimates.
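
    The evaluation reduces to simple summary statistics over frames coded by both the algorithm and the human reference: the mean per-axis difference estimates the bias, and its dispersion estimates the variance. A sketch with hypothetical numbers:

        import numpy as np

        # Per-frame fingertip estimates (cm), matched between methods (hypothetical).
        tracker = np.array([[10.1, 5.2], [11.3, 5.0], [12.0, 4.8]])
        human   = np.array([[10.0, 5.1], [11.2, 5.0], [11.9, 4.7]])

        diff = tracker - human
        bias = diff.mean(axis=0)           # systematic offset per axis (x, y)
        spread = diff.std(axis=0, ddof=1)  # dispersion relative to the reference
        print(f"bias x={bias[0]:.2f} cm, y={bias[1]:.2f} cm; "
              f"sd x={spread[0]:.2f} cm, y={spread[1]:.2f} cm")

        # Corrected estimates: subtract the estimated bias from tracker output.
        corrected = tracker - bias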

  8. Convergence studies of deterministic methods for LWR explicit reflector methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canepa, S.; Hursin, M.; Ferroukhi, H.

    2013-07-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially constituting one of the main sources of error for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor, CASMO-5, is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements.

  9. Deterministic ion beam material adding technology for high-precision optical surfaces.

    PubMed

    Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin

    2013-02-20

    Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as its limited capability for correcting mid-to-high spatial frequency surface errors and its low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve these problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct pit defects on the surface and greatly improve the machining efficiency of the figuring process. Verification experiments were carried out on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect was figured using IBA. Through two iterations within only 47.5 min, this highly steep pit was effectively corrected, and the surface error improved from the original 24.69 nm root mean square (RMS) to a final 3.68 nm RMS. A second experiment was then carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors; the final results indicate that surface accuracy and surface quality can be improved simultaneously.

  10. The Multigroup Ethnic Identity Measure-Revised: Measurement invariance across racial and ethnic groups

    PubMed Central

    Brown, Susan D.; Unger Hu, Kirsten A.; Mevi, Ashley A.; Hedderson, Monique M.; Shan, Jun; Quesenberry, Charles P.; Ferrara, Assiamira

    2014-01-01

    The Multigroup Ethnic Identity Measure-Revised (MEIM-R), a brief instrument assessing affiliation with one’s ethnic group, is a promising advance in the ethnic identity literature. However, equivalency of its measurement properties across specific racial and ethnic groups should be confirmed before using it in diverse samples. We examined a) the psychometric properties of the MEIM-R including factor structure, measurement invariance, and internal consistency reliability, and b) levels of and differences in ethnic identity across multiple racial and ethnic groups and subgroups. Asian (n = 630), Black/African American (n = 58), Hispanic (n = 240), multiethnic (n = 160), and White (n = 375) women completed the MEIM-R as part of the “Gestational diabetes’ Effect on Moms” diabetes prevention trial in the Kaiser Permanente Northern California health care setting (N = 1,463; M age 32.5 years, SD = 4.9). Multiple-groups confirmatory factor analyses provided provisional evidence of measurement invariance, i.e., an equal, correlated two-factor structure, equal factor loadings, and equal item intercepts across racial and ethnic groups. Latent factor means for the two MEIM-R subscales, exploration and commitment, differed across groups; effect sizes ranging from small to large generally supported the notion of ethnic identity as more salient among people of color. Pending replication, good psychometric properties in this large and diverse sample of women support the future use of the MEIM-R. Preliminary evidence of measurement invariance suggests that the MEIM-R could be used to measure and compare ethnic identity across multiple racial and ethnic groups. PMID:24188656

  11. Reliability generalization of the Multigroup Ethnic Identity Measure-Revised (MEIM-R).

    PubMed

    Herrington, Hayley M; Smith, Timothy B; Feinauer, Erika; Griner, Derek

    2016-10-01

    [Correction Notice: An Erratum for this article was reported in Vol 63(5) of Journal of Counseling Psychology (see record 2016-33161-001). The name of author Erika Feinauer was misspelled as Erika Feinhauer. All versions of this article have been corrected.] Individuals' strength of ethnic identity has been linked with multiple positive indicators, including academic achievement and overall psychological well-being. The measure researchers use most often to assess ethnic identity, the Multigroup Ethnic Identity Measure (MEIM), underwent substantial revision in 2007. To inform scholars investigating ethnic identity, we performed a reliability generalization analysis on data from the revised version (MEIM-R) and compared it with data from the original MEIM. Random-effects weighted models evaluated internal consistency coefficients (Cronbach's alpha). Reliability coefficients for the MEIM-R averaged α = .88 across 37 samples, a statistically significant increase over the average of α = .84 for the MEIM across 75 studies. Reliability coefficients for the MEIM-R did not differ across study and participant characteristics such as sample gender and ethnic composition. However, consistently lower reliability coefficients averaging α = .81 were found among participants with low levels of education, suggesting that greater attention to data reliability is warranted when evaluating the ethnic identity of individuals such as middle-school students. Future research will be needed to ascertain whether data with other measures of aspects of personal identity (e.g., racial identity, gender identity) also differ as a function of participant level of education and associated cognitive or maturation processes.

  12. "Reliability generalization of the Multigroup Ethnic Identity Measure-Revised (MEIM-R)": Correction to Herrington et al. (2016).

    PubMed

    2016-10-01

    Reports an error in "Reliability Generalization of the Multigroup Ethnic Identity Measure-Revised (MEIM-R)" by Hayley M. Herrington, Timothy B. Smith, Erika Feinauer and Derek Griner (Journal of Counseling Psychology, Advanced Online Publication, Mar 17, 2016, np). The name of author Erika Feinauer was misspelled as Erika Feinhauer. All versions of this article have been corrected. (The following abstract of the original article appeared in record 2016-13160-001.) Individuals' strength of ethnic identity has been linked with multiple positive indicators, including academic achievement and overall psychological well-being. The measure researchers use most often to assess ethnic identity, the Multigroup Ethnic Identity Measure (MEIM), underwent substantial revision in 2007. To inform scholars investigating ethnic identity, we performed a reliability generalization analysis on data from the revised version (MEIM-R) and compared it with data from the original MEIM. Random-effects weighted models evaluated internal consistency coefficients (Cronbach's alpha). Reliability coefficients for the MEIM-R averaged α = .88 across 37 samples, a statistically significant increase over the average of α = .84 for the MEIM across 75 studies. Reliability coefficients for the MEIM-R did not differ across study and participant characteristics such as sample gender and ethnic composition. However, consistently lower reliability coefficients averaging α = .81 were found among participants with low levels of education, suggesting that greater attention to data reliability is warranted when evaluating the ethnic identity of individuals such as middle-school students. Future research will be needed to ascertain whether data with other measures of aspects of personal identity (e.g., racial identity, gender identity) also differ as a function of participant level of education and associated cognitive or maturation processes.

  13. Psychometric validation of the Persian nine-item Internet Gaming Disorder Scale – Short Form: Does gender and hours spent online gaming affect the interpretations of item descriptions?

    PubMed Central

    Wu, Tzu-Yi; Lin, Chung-Ying; Årestedt, Kristofer; Griffiths, Mark D.; Broström, Anders; Pakpour, Amir H.

    2017-01-01

    Background and aims The nine-item Internet Gaming Disorder Scale – Short Form (IGDS-SF9) is brief and effective to evaluate Internet Gaming Disorder (IGD) severity. Although its scores show promising psychometric properties, less is known about whether different groups of gamers interpret the items similarly. This study aimed to verify the construct validity of the Persian IGDS-SF9 and examine the scores in relation to gender and hours spent online gaming among 2,363 Iranian adolescents. Methods Confirmatory factor analysis (CFA) and Rasch analysis were used to examine the construct validity of the IGDS-SF9. The effects of gender and time spent online gaming per week were investigated by multigroup CFA and Rasch differential item functioning (DIF). Results The unidimensionality of the IGDS-SF9 was supported in both CFA and Rasch. However, Item 4 (fail to control or cease gaming activities) displayed DIF (DIF contrast = 0.55) slightly over the recommended cutoff in Rasch but was invariant in multigroup CFA across gender. Items 4 (DIF contrast = −0.67) and 9 (jeopardize or lose an important thing because of gaming activity; DIF contrast = 0.61) displayed DIF in Rasch and were non-invariant in multigroup CFA across time spent online gaming. Conclusions Given the Persian IGDS-SF9 was unidimensional, it is concluded that the instrument can be used to assess IGD severity. However, users of the instrument are cautioned concerning the comparisons of the sum scores of the IGDS-SF9 across gender and across adolescents spending different amounts of time online gaming. PMID:28571474

  15. Deterministic Walks with Choice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.

    2014-01-10

    This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
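
    The best-known deterministic analogue of a random walk, and the natural starting point for this line of work, is the rotor-router model: each node cycles through its outgoing directions in a fixed order, sending each arriving token the next way. A toy sketch on an n × n torus (the paper's walks add bounded memory and choice, which this sketch omits):

        def rotor_walk(n, steps, start=(0, 0)):
            """Rotor-router walk on an n x n toroidal grid."""
            directions = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # E, S, W, N
            rotor = {}  # node -> index of the next direction to use
            pos = start
            for _ in range(steps):
                k = rotor.get(pos, 0)
                dy, dx = directions[k]
                rotor[pos] = (k + 1) % 4  # advance this node's rotor
                pos = ((pos[0] + dy) % n, (pos[1] + dx) % n)
            return pos

        print(rotor_walk(5, 100))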

  16. Further development of the talent development environment questionnaire for sport.

    PubMed

    Li, Chunxiao; Wang, Chee Keng John; Pyun, Do Young; Martindale, Russell

    2015-01-01

    Given the significance of monitoring the critical environmental factors that facilitate athlete performance, this two-phase research aimed to validate and refine the revised talent development environment questionnaire (TDEQ). The TDEQ is a multidimensional self-report scale that assesses talented athletes' environmental experiences. Study 1 (the first phase) involved the examination of the revised TDEQ through an exploratory factor analysis (n = 363). This exploratory investigation identified a 28-item five-factor structure (i.e., TDEQ-5) with adequate internal consistency. Study 2 (the second phase) examined the factorial structure of the TDEQ-5, including convergent validity, discriminant validity, and group invariance (i.e., gender and sports type). The second phase was carried out with 496 talented athletes through the application of confirmatory factor analyses and multigroup invariance tests. The results supported the convergent validity, discriminant validity, and group invariance of the TDEQ-5. In conclusion, the TDEQ-5 with 25 items appears to be a reliable and valid scale for use in talent development environments.

  17. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V. [Oak Ridge, TN]; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan [Ithaca, NY]; Merkulov, Vladimir I. [Knoxville, TN]; Doktycz, Mitchel J. [Knoxville, TN]; Lowndes, Douglas H. [Knoxville, TN]; Simpson, Michael L. [Knoxville, TN]

    2011-05-17

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.

  18. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds at latencies of about 150 ms after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.

  19. A deterministic particle method for one-dimensional reaction-diffusion equations

    NASA Technical Reports Server (NTRS)

    Mascagni, Michael

    1995-01-01

    We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider explicit and implicit time-stepping methods for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next, we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
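
    A sketch of the time discretizations named in the abstract (the right-hand side F below is a generic stand-in, not the paper's particle velocity field): explicit Euler versus implicit Euler, with the implicit system solved either by Picard (fixed-point) or Newton iteration.

    ```python
    import numpy as np

    def F(x):                        # stand-in nonlinear reaction term
        return x * (1.0 - x)

    def dF(x):                       # its derivative, used by Newton
        return 1.0 - 2.0 * x

    def explicit_euler(x, dt):
        return x + dt * F(x)

    def implicit_euler_picard(x, dt, iters=50):
        y = x.copy()                 # fixed-point iteration for y = x + dt*F(y)
        for _ in range(iters):
            y = x + dt * F(y)
        return y

    def implicit_euler_newton(x, dt, iters=8):
        y = x.copy()                 # Newton on G(y) = y - x - dt*F(y) = 0
        for _ in range(iters):
            y -= (y - x - dt * F(y)) / (1.0 - dt * dF(y))
        return y

    x0 = np.array([0.1, 0.5, 0.9])   # "particle" states
    for step in (explicit_euler, implicit_euler_picard, implicit_euler_newton):
        print(step.__name__, step(x0, dt=0.5))
    ```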

  20. Using Multigroup-Multiphase Latent State-Trait Models to Study Treatment-Induced Changes in Intra-Individual State Variability: An Application to Smokers' Affect.

    PubMed

    Geiser, Christian; Griffin, Daniel; Shiffman, Saul

    2016-01-01

    Sometimes, researchers are interested in whether an intervention, experimental manipulation, or other treatment causes changes in intra-individual state variability. The authors show how multigroup-multiphase latent state-trait (MG-MP-LST) models can be used to examine treatment effects with regard to both mean differences and differences in state variability. The approach is illustrated based on a randomized controlled trial in which N = 338 smokers were randomly assigned to nicotine replacement therapy (NRT) vs. placebo prior to quitting smoking. We found that, post-quitting, smokers in both the NRT and placebo groups had significantly reduced intra-individual affect state variability with respect to the affect items calm and content relative to the pre-quitting phase. This reduction in state variability did not differ between the NRT and placebo groups, indicating that quitting smoking may lead to a stabilization of individuals' affect states regardless of whether or not individuals receive NRT.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yee, Ben Chung; Wollaber, Allan Benton; Haut, Terry Scot

    The high-order low-order (HOLO) method is a recently developed moment-based acceleration scheme for solving time-dependent thermal radiative transfer problems, and it has been shown to exhibit orders-of-magnitude speedups over traditional time-stepping schemes. However, a linear stability analysis by Haut et al. (Haut, T. S., Lowrie, R. B., Park, H., Rauenzahn, R. M., Wollaber, A. B. (2015). A linear stability analysis of the multigroup High-Order Low-Order (HOLO) method. In Proceedings of the Joint International Conference on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method, Nashville, TN, April 19-23, 2015. American Nuclear Society.) revealed that the current formulation of the multigroup HOLO method was unstable in certain parameter regions. Since then, we have replaced the intensity-weighted opacity in the first angular moment equation of the low-order (LO) system with the Rosseland opacity. This results in a modified HOLO method (HOLO-R) that is significantly more stable.

  3. AMPX-77: A modular code system for generating coupled multigroup neutron-gamma cross-section libraries from ENDF/B-IV and/or ENDF/B-V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, N.M.; Ford, W.E. III; Petrie, L.M.

    AMPX-77 is a modular system of computer programs that pertain to nuclear analyses, with a primary emphasis on tasks associated with the production and use of multigroup cross sections. All basic cross-section data are to be input in the formats used by the Evaluated Nuclear Data Files (ENDF/B), and output can be obtained in a variety of formats, including its own internal and very general formats, along with a variety of other useful formats used by major transport, diffusion theory, and Monte Carlo codes. Processing is provided for both neutron and gamma-ray data. The present release contains codes all written in the FORTRAN-77 dialect of FORTRAN and will process ENDF/B-V and earlier evaluations, though major modules are being upgraded in order to process ENDF/B-VI and will be released when a complete collection of usable routines is available.

  4. Multi-group Fokker-Planck proton transport in MCNP™

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, K.J.

    1997-11-01

    MCNP has been enhanced to perform proton transport using a multigroup Fokker-Planck (MGFP) algorithm, with primary emphasis on proton radiography simulations. The new method solves the Fokker-Planck approximation to the Boltzmann transport equation for the small-angle multiple-scattering portion of proton transport. Energy loss is accounted for by applying a group-averaged stopping power over each transport step. Large-angle scatter and non-inelastic events are treated as extinction. Comparisons with the more rigorous LAHET code show agreement to within a few per cent for the total transmitted currents. The angular distributions through copper and low-Z compounds show good agreement between LAHET and MGFP, with the MGFP method being slightly less forward-peaked and without the large-angle tails apparent in the LAHET simulation. Suitability of this method for proton radiography simulations is shown for a simple problem of a hole in a copper slab. LAHET and MGFP calculations of position, angle and energy through more complex objects are presented.
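
    To make the energy-loss treatment concrete, here is a minimal sketch (made-up group boundaries and stopping powers, not MCNP data or code): the proton's energy is degraded over each step by the stopping power of whichever group currently contains it.

    ```python
    import numpy as np

    group_edges = np.array([250.0, 200.0, 150.0, 100.0, 50.0])  # MeV, descending
    group_sp = np.array([4.0, 5.0, 7.0, 11.0])                  # MeV/cm per group

    def energy_after_step(E, ds):
        """Apply the group-averaged stopping power over a step of length ds."""
        g = np.searchsorted(-group_edges, -E) - 1   # group containing E
        g = int(np.clip(g, 0, len(group_sp) - 1))
        return max(E - group_sp[g] * ds, 0.0)

    E = 230.0                       # MeV
    for _ in range(5):              # five 2-cm steps
        E = energy_after_step(E, ds=2.0)
        print(f"E = {E:6.1f} MeV")
    ```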

  5. An Improved Scheduling Algorithm for Data Transmission in Ultrasonic Phased Arrays with Multi-Group Ultrasonic Sensors

    PubMed Central

    Tang, Wenming; Liu, Guixiong; Li, Yuzhong; Tan, Daji

    2017-01-01

    High data-transmission efficiency is a key requirement for an ultrasonic phased array with multi-group ultrasonic sensors. Here, a novel FIFO scheduling algorithm is proposed that improves data-transmission efficiency in hardware. The algorithm uses FIFOs to cache the ultrasonic scanning data obtained from the sensors and outputs the data in a bandwidth-sharing manner; on this basis, an optimal length ratio among all the FIFOs is derived, allowing read operations to be switched among the FIFOs without waiting for time slots. The algorithm therefore enhances the utilization of the read-bandwidth resources and achieves higher efficiency than traditional scheduling algorithms. The reliability and validity of the algorithm are substantiated by its implementation in field-programmable gate array (FPGA) technology, and the bandwidth utilization ratio and the real-time performance of the ultrasonic phased array are enhanced. PMID:29035345
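
    A toy model of the sizing idea (hypothetical integer fill rates; this is not the paper's FPGA design): when the shared reader splits its bandwidth among the FIFOs in proportion to their fill rates, the peak occupancies, and hence the required depths, settle into the same ratio as the rates, so reads switch between FIFOs without stalling.

    ```python
    # Discrete-time simulation: sensor group i writes fill_rates[i] words per
    # tick into its FIFO; the reader drains each FIFO at the same rate
    # (bandwidth sharing). Peak occupancy approximates the required depth.

    def simulate(fill_rates, ticks=1000):
        fifos = [0] * len(fill_rates)
        peak = [0] * len(fill_rates)
        for _ in range(ticks):
            for i, r in enumerate(fill_rates):
                fifos[i] += r                    # sensors write
                peak[i] = max(peak[i], fifos[i])
            for i, r in enumerate(fill_rates):
                fifos[i] -= min(fifos[i], r)     # reader drains, never waits
        return peak

    print(simulate([3, 2, 1]))   # peak depths come out in the ratio 3:2:1
    ```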

  6. Development of a flood early warning system and communication with end-users: the Vipava/Vipacco case study in the KULTURisk FP7 project

    NASA Astrophysics Data System (ADS)

    Grossi, Giovanna; Caronna, Paolo; Ranzi, Roberto

    2014-05-01

    Within the framework of risk communication, the goal of an early warning system is to support the interaction between technicians and authorities (and subsequently the population) as a prevention measure. The methodology proposed in the KULTURisk FP7 project aimed to build a closer collaboration between these actors, with a view to promoting pro-active actions to mitigate the effects of flood hazards. The transnational (Slovenia/Italy) Soča/Isonzo case study focused on this concept of cooperation between stakeholders and hydrological forecasters. The DIMOSHONG_VIP hydrological model was calibrated for the Vipava/Vipacco River (650 km2), a tributary of the Soča/Isonzo River, on the basis of flood events that occurred between 1998 and 2012. The European Centre for Medium-Range Weather Forecasts (ECMWF) provided the past meteorological forecasts, both deterministic (1 forecast) and probabilistic (51 ensemble members). The resolution of the ECMWF grid is currently about 15 km (Deterministic, DET) and 30 km (Ensemble Prediction System, EPS). A verification was conducted to validate the flood-forecast outputs of the DIMOSHONG_VIP+ECMWF early warning system. Basic descriptive statistics, such as event probability, probability of a forecast occurrence, and frequency bias, were determined. Performance measures were calculated, such as the hit rate (probability of detection) and the false alarm rate (probability of false detection). Relative Operating Characteristic (ROC) curves were generated for both deterministic and probabilistic forecasts. These analyses showed a good performance of the early warning system, given the small size of the sample. Particular attention was paid to the design of the flood-forecasting output charts, involving and surveying stakeholders (Alto Adriatico River Basin Authority), hydrology specialists in the field, and lay users. Graph types for both forecasted precipitation and discharge were defined. Three risk thresholds were identified ("attention", "pre-alarm" or "alert", and "alarm"), with an icon-style representation suitable for communication to civil-protection stakeholders or the public. Aiming to show probabilistic representations in a user-friendly way, we opted for visualizing the single deterministic forecasted hydrograph together with the 5%, 25%, 50%, 75% and 95% percentile bands of the Hydrological Ensemble Prediction System (HEPS). HEPS is generally used for 3-5-day hydrological forecasts, for which the error due to incorrect initial data is comparable to the error due to the lower resolution with respect to the deterministic forecast. In short-term forecasting (12-48 hours) the HEPS members naturally show a similar tendency; in this case, given its higher resolution, the deterministic forecast is expected to be more effective. Plotting the different forecasts in the same chart allows the use of model outputs from 4-5 days to a few hours before a potential flood event. This framework was built to help a stakeholder, such as a mayor or a civil-protection authority, in flood control and management operations, and was designed to be included in a wider decision-support system.
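
    A sketch of the verification measures named above, computed from a 2x2 contingency table of forecast/observation pairs (the synthetic data below stand in for the study's event counts, which are not reproduced):

    ```python
    import numpy as np

    def verification_scores(forecast, observed):
        f, o = np.asarray(forecast, bool), np.asarray(observed, bool)
        hits = np.sum(f & o)
        misses = np.sum(~f & o)
        false_alarms = np.sum(f & ~o)
        corr_negatives = np.sum(~f & ~o)
        return {
            "event probability": (hits + misses) / o.size,
            "forecast probability": (hits + false_alarms) / o.size,
            "frequency bias": (hits + false_alarms) / (hits + misses),
            "hit rate (POD)": hits / (hits + misses),
            "false alarm rate (POFD)": false_alarms / (false_alarms + corr_negatives),
        }

    rng = np.random.default_rng(0)
    obs = rng.random(200) < 0.15              # observed flood events
    fcst = obs ^ (rng.random(200) < 0.10)     # imperfect warnings
    for name, value in verification_scores(fcst, obs).items():
        print(f"{name:26s} {value:.3f}")
    ```

    Sweeping a probability threshold over the 51 ensemble members and plotting the resulting (false alarm rate, hit rate) pairs traces the ROC curve used in the study.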

  7. An Adjoint State Method for Three-Dimensional Transmission Traveltime Tomography Using First-Arrivals

    DTIC Science & Technology

    2006-01-30

    detail next. 3.2 Fast Sweeping Method for Equation (1): The fast sweeping method originated in Boue and Dupuis [5]; its first PDE formulation was in ... Geophysics, 50:903-923, 1985. [5] M. Boue and P. Dupuis. Markov chain approximations for deterministic control problems with affine dynamics and ...

  8. Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System

    ERIC Educational Resources Information Center

    Maiti, Alakes; Samanta, G. P.

    2005-01-01

    This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…

  9. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic prediction highlight the relationships between prediction accuracy and the potential factors influencing it, prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software for simulating deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. The Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus the genetic factors impacting it, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/ . ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open-source software, hosted online as a freely available web-based resource with an intuitive graphical user interface.
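
    As an example of the kind of formula ShinyGPAS visualizes, here is a sketch of the Daetwyler et al. deterministic expression for expected genomic prediction accuracy, r = sqrt(N h^2 / (N h^2 + Me)), with N the training-population size, h^2 the heritability, and Me the number of independent chromosome segments (parameter values below are illustrative, not from the paper):

    ```python
    import numpy as np

    def daetwyler_accuracy(n_train, h2, m_e):
        """Expected accuracy of genomic prediction (Daetwyler et al. formula)."""
        return np.sqrt(n_train * h2 / (n_train * h2 + m_e))

    for n in (1_000, 5_000, 20_000):
        print(f"N = {n:6d}  r = {daetwyler_accuracy(n, h2=0.4, m_e=5_000):.3f}")
    ```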

  10. Using Structural Equation Modeling To Fit Models Incorporating Principal Components.

    ERIC Educational Resources Information Center

    Dolan, Conor; Bechger, Timo; Molenaar, Peter

    1999-01-01

    Considers models incorporating principal components from the perspectives of structural-equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…

  11. Group Counseling for African American Elementary Students: An Exploratory Study

    ERIC Educational Resources Information Center

    Steen, Sam

    2009-01-01

    This article describes a group counseling intervention promoting academic achievement and ethnic identity development for 20 fifth-grade African American elementary students. The Multigroup Ethnic Identity Measure (MEIM) scores of students participating in the treatment group improved significantly over those in the control group. Implications…

  12. A Multigroup Investigation of Latent Cognitive Abilities and Reading Achievement Relations

    ERIC Educational Resources Information Center

    Hajovsky, Daniel; Reynolds, Matthew R.; Floyd, Randy G.; Turek, Joshua J.; Keith, Timothy Z.

    2014-01-01

    The structural relations between the Cattell-Horn-Carroll abilities and reading achievement outcome variables across child and adolescent development were examined in the "Kaufman Assessment Battery for Children, Second Edition", and the "Kaufman Test of Educational Achievement, Second Edition", co-normed sample. We estimated…

  13. Toward the Style of the Community Change Educator.

    ERIC Educational Resources Information Center

    Franklin, Richard

    Variations and implications of change agents' patterns or styles of interaction with client systems (individuals, groups, or multigroups) are discussed. Five styles are defined: (1) the instructor, who imparts information to clients and interacts only with his agency; (2) the paterfamilias, who exercises personal, paternalistic influence and…

  14. Estimates of dietary exposure to bisphenol A (BPA) from light metal packaging using food consumption and packaging usage data: a refined deterministic approach and a fully probabilistic (FACET) approach.

    PubMed

    Oldring, P K T; Castle, L; O'Mahony, C; Dixon, J

    2014-01-01

    The FACET tool is a probabilistic model to estimate exposure to chemicals in foodstuffs, originating from flavours, additives and food-contact materials. This paper demonstrates the use of the FACET tool to estimate exposure to BPA (bisphenol A) from light metal packaging. For exposure to migrants from food packaging, FACET uses industry-supplied data on the occurrence of substances in the packaging, their concentrations and the construction of the packaging, combined with data from a market-research organisation and food-consumption data supplied by national database managers. To illustrate the principles, UK packaging data were used together with consumption data from the UK National Diet and Nutrition Survey (NDNS) for 19-64 year-olds for a refined deterministic verification. The UK data were chosen mainly because the consumption surveys are detailed, data for UK packaging were available at a detailed level and, arguably, the UK population is composed of high consumers of packaged foodstuffs. Exposures were run for each food category that could give rise to BPA from light metal packaging. Consumer loyalty to a particular type of packaging, commonly referred to as packaging loyalty, was set. The BPA extraction levels used for the 15 types of coating chemistries that could release BPA were in the range 0.00005-0.012 mg dm⁻². The estimates of exposure to BPA using FACET for the total diet were 0.0098 (mean) and 0.0466 (97.5th percentile) mg/person/day, corresponding to 0.00013 (mean) and 0.00059 (97.5th percentile) mg kg⁻¹ body weight day⁻¹ for consumers of foods packed in light metal packaging. This is well below the current TDI of 0.05 mg kg⁻¹ body weight day⁻¹ set by EFSA and other recognised bodies. These probabilistic estimates were compared with estimates obtained using a refined deterministic approach drawing on the same input data. The results from FACET for the mean, 95th and 97.5th percentile exposures to BPA lay between the lowest and the highest estimates from the refined deterministic calculations. Since this is exactly what should happen when a fully probabilistic approach is compared with a deterministic one, it is concluded that the FACET tool has been verified in this example. A recent EFSA draft opinion on exposure to BPA from different sources showed that canned foods were a major contributor and compared results from various models, including those from FACET. The results from FACET were overall conservative.
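
    A minimal sketch of the probabilistic logic (all distributions and values below are illustrative stand-ins, not FACET's industry-supplied inputs): each simulated consumer is loyal to one coating chemistry, and exposure is the product of the coating's extraction level and the contacted packaging area, summarized as mean and 97.5th-percentile intakes.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_consumers = 10_000

    # Loyalty: each consumer keeps one (hypothetical) coating chemistry, with
    # extraction levels spanning the 0.00005-0.012 mg/dm^2 range quoted above.
    extraction = rng.choice([0.00005, 0.001, 0.012],
                            size=n_consumers, p=[0.6, 0.3, 0.1])   # mg/dm^2
    contact_area = rng.lognormal(np.log(4.0), 0.5, n_consumers)    # dm^2/day
    body_weight = rng.normal(70.0, 12.0, n_consumers).clip(40.0)   # kg

    dose = extraction * contact_area                               # mg/person/day
    print(f"mean   {dose.mean():.5f} mg/person/day")
    print(f"P97.5  {np.percentile(dose, 97.5):.5f} mg/person/day")
    print(f"mean   {(dose / body_weight).mean():.6f} mg/kg bw/day")
    ```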

  15. Economic impact of nutritional grouping in dairy herds.

    PubMed

    Kalantari, A S; Armentano, L E; Shaver, R D; Cabrera, V E

    2016-02-01

    This article evaluates the estimated economic impact of nutritional grouping in commercial dairy herds using a stochastic Monte Carlo simulation model. The model was initialized with separate data sets obtained from 5 commercial dairy herds. These herds were selected to explore the effect of herd size, structure, and characteristics on the economics and efficiency of nutrient usage under different nutritional grouping strategies. The simulated status of each cow was updated daily, together with the nutrient requirements of net energy for lactation (NEL) and metabolizable protein (MP). The amount of energy consumed directly affected body weight (BW) and body condition score (BCS) changes. Moreover, to control the range of observed BCS in the model, constraints on the lower (2.0) and upper (4.5) bounds of BCS were set. Each month, a clustering method was used to regroup the cows homogeneously according to their nutrient concentration requirements. The average NEL concentration of the group and a level of MP (average MP, average MP+0.5SD, or average MP+1SD) were used to formulate the group diet. The calculated gain in income over feed cost (IOFC, $/cow per year) from having >1 nutritional group ranged across the herds from $33 to $58, with an average of $39 for 2 groups and $46 for 3 groups, when each group was fed at its average NEL concentration and average MP+1SD concentration. The improved IOFC was explained by increased milk sales and lower feed costs. Higher milk sales were a result of fewer cows having a milk loss associated with low BCS in multi-group scenarios. Lower feed costs in multi-group scenarios were mainly due to less rumen-undegradable protein consumption. The percentage of total NEL consumed that was captured in milk for >1 nutritional group was slightly lower than that for 1 nutritional group, owing to a better distribution of energy throughout the lactation and more energy retained in body tissue, which resulted in a better herd BCS distribution. The percentage of fed N captured in milk increased with >1 group and was the most important factor in the improved economic efficiency of grouping strategies.
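
    A sketch of the monthly regrouping step (synthetic requirements; the paper's herd data and diet formulation are not reproduced): cluster cows on their required NEL and MP concentrations, then feed each group its average NEL and its average-plus-1SD MP, as in the best-performing scenario above.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    req = np.column_stack([rng.normal(1.55, 0.12, 300),   # NEL, Mcal/kg DM
                           rng.normal(98.0, 12.0, 300)])  # MP, g/kg DM

    def kmeans(x, k, iters=25):
        z = (x - x.mean(0)) / x.std(0)                 # standardize both traits
        centers = z[rng.choice(len(z), k, replace=False)]
        for _ in range(iters):
            d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(1)                      # nearest group centre
            for j in range(k):                         # update non-empty clusters
                if np.any(labels == j):
                    centers[j] = z[labels == j].mean(0)
        return labels

    labels = kmeans(req, k=3)
    for j in range(3):
        g = req[labels == j]
        nel, mp = g[:, 0].mean(), g[:, 1].mean() + g[:, 1].std()
        print(f"group {j}: {len(g):3d} cows  diet NEL = {nel:.2f}  MP = {mp:.1f}")
    ```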

  16. Deterministic direct reprogramming of somatic cells to pluripotency.

    PubMed

    Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H

    2013-10-03

    Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency-promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near-100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex, which potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for dissecting the molecular dynamics that lead to the establishment of pluripotency, at unprecedented flexibility and resolution.

  17. Accessing the dark exciton spin in deterministic quantum-dot microlenses

    NASA Astrophysics Data System (ADS)

    Heindel, Tobias; Thoma, Alexander; Schwartz, Ido; Schmidgall, Emma R.; Gantz, Liron; Cogan, Dan; Strauß, Max; Schnauber, Peter; Gschrey, Manuel; Schulze, Jan-Hindrik; Strittmatter, Andre; Rodt, Sven; Gershoni, David; Reitzenstein, Stephan

    2017-12-01

    The dark exciton state in semiconductor quantum dots (QDs) constitutes a long-lived solid-state qubit which has the potential to play an important role in implementations of solid-state-based quantum information architectures. In this work, we exploit deterministically fabricated QD microlenses, which promise enhanced photon extraction, to optically prepare and read out the dark exciton spin and observe its coherent precession. Optical access to the dark exciton is provided via spin-blockaded metastable biexciton states acting as heralding states, which are identified by deploying polarization-sensitive spectroscopy as well as time-resolved photon cross-correlation experiments. Our experiments reveal a spin-precession period of the dark exciton of (0.82 ± 0.01) ns, corresponding to a fine-structure splitting of (5.0 ± 0.7) μeV between its eigenstates |↑⇑ ± ↓⇓⟩. By exploiting microlenses deterministically fabricated above pre-selected QDs, our work demonstrates the possibility of scaling up implementations of quantum information processing schemes using the QD-confined dark exciton spin qubit, such as the generation of photonic cluster states or the realization of a solid-state-based quantum memory.
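
    As a quick consistency check of the quoted numbers (Planck constant from CODATA; the period is the value reported above), the fine-structure splitting follows from the precession period via dE = h/T:

    ```python
    h = 4.135667696e-15        # Planck constant in eV*s
    T = 0.82e-9                # observed spin-precession period in s
    print(f"dE = {h / T * 1e6:.2f} micro-eV")   # ~5.0 ueV, as reported
    ```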

  18. Effects of magnetometer calibration and maneuvers on accuracies of magnetometer-only attitude-and-rate determination

    NASA Technical Reports Server (NTRS)

    Challa, M.; Natanson, G.

    1998-01-01

    Two different algorithms - a deterministic magnetic-field-only algorithm and a Kalman filter for gyroless spacecraft - are used to estimate the attitude and rates of the Rossi X-Ray Timing Explorer (RXTE) using only measurements from a three-axis magnetometer. The performance of these algorithms is examined using in-flight data from various scenarios. In particular, significant enhancements in accuracy are observed when the telemetered magnetometer data are accurately calibrated using a recently developed calibration algorithm. Interesting features observed in these studies of the inertially pointing RXTE include a remarkable sensitivity of the filter to the numerical values of the noise parameters and relatively long convergence time spans. By analogy, the accuracy of the deterministic scheme is noticeably lower as a result of the reduced rates of change of the body-fixed geomagnetic field. Preliminary results show per-axis filter attitude accuracies ranging between 0.1 and 0.5 deg and rate accuracies between 0.001 deg/sec and 0.005 deg/sec, whereas the deterministic method needs more sophisticated techniques for smoothing the time derivatives of the measured geomagnetic field to clearly distinguish both attitude and rate solutions from the numerical noise. Also included is a new theoretical development in the deterministic algorithm: the transformation of a transcendental equation in the original theory into an 8th-order polynomial equation. It is shown that this 8th-order polynomial reduces to quadratic equations in the two limiting cases (infinitely high wheel momentum and constant rates) discussed in previous publications.

  19. Deterministic quantum state transfer and remote entanglement using microwave photons.

    PubMed

    Kurpiers, P; Magnard, P; Walter, T; Royer, B; Pechal, M; Heinsoo, J; Salathé, Y; Akin, A; Storz, S; Besse, J-C; Gasparinetti, S; Blais, A; Wallraff, A

    2018-06-01

    Sharing information coherently between nodes of a quantum network is fundamental to distributed quantum information processing. In this scheme, the computation is divided into subroutines and performed on several smaller quantum registers that are connected by classical and quantum channels [1]. A direct quantum channel, which connects nodes deterministically rather than probabilistically, achieves larger entanglement rates between nodes and is advantageous for distributed fault-tolerant quantum computation [2]. Here we implement deterministic state-transfer and entanglement protocols between two superconducting qubits fabricated on separate chips. Superconducting circuits [3] constitute a universal quantum node [4] capable of sending, receiving, storing and processing quantum information [5-8]. Our implementation is based on an all-microwave cavity-assisted Raman process [9], which entangles or transfers the qubit state of a transmon-type artificial atom [10] with a time-symmetric itinerant single photon. We transfer qubit states by absorbing these itinerant photons at the receiving node, with a probability of 98.1 ± 0.1 per cent, achieving a transfer-process fidelity of 80.02 ± 0.07 per cent for a protocol duration of only 180 nanoseconds. We also prepare remote entanglement on demand with a fidelity as high as 78.9 ± 0.1 per cent at a rate of 50 kilohertz. Our results are in excellent agreement with numerical simulations based on a master-equation description of the system. This deterministic protocol has the potential to be used for quantum computing distributed across different nodes of a cryogenic network.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4-based "localized Monte Carlo" (LMC) method that isolates MC dose calculations to only those volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations "downstream" of localized heterogeneities. Hybrid dose calculations were performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%-15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ~4-7 compared with a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
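
    A toy one-dimensional sketch of the handoff logic (not the Geant4/LMC implementation; geometry and coefficients are invented): fluence is attenuated analytically through homogeneous voxels, sampled into discrete histories at the boundary of a flagged heterogeneous region, transported there by Monte Carlo, and converted back to fluence at the exit interface. In this purely absorbing toy, the hybrid result should match the all-deterministic reference, which checks the boundary matching.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    mu = np.array([0.2, 0.2, 0.2, 0.9, 0.4, 0.7, 0.2, 0.2])  # 1/cm, 1-cm voxels
    hetero = np.array([0, 0, 0, 1, 1, 1, 0, 0], dtype=bool)  # flagged region

    phi, n_hist, i = 1.0, 100_000, 0
    while i < len(mu):
        if not hetero[i]:
            phi *= np.exp(-mu[i])              # deterministic attenuation
            i += 1
        else:
            j = i
            while j < len(mu) and hetero[j]:
                j += 1                         # extent of the flagged region
            tau = mu[i:j].sum()                # its total optical depth
            # sample boundary fluence into histories, transport them by MC
            survived = np.mean(rng.exponential(1.0, n_hist) > tau)
            phi *= survived                    # back to fluence at the exit
            i = j

    print("hybrid transmission:   ", round(float(phi), 4))
    print("all-deterministic ref.:", round(float(np.exp(-mu.sum())), 4))
    ```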

  1. Predictors of Satisfaction in Geographically Close and Long-Distance Relationships

    ERIC Educational Resources Information Center

    Lee, Ji-yeon; Pistole, M. Carole

    2012-01-01

    In this study, the authors examined geographically close (GCRs) and long-distance (LDRs) romantic relationship satisfaction as explained by insecure attachment, self-disclosure, gossip, and idealization. After college student participants (N = 536) completed a Web survey, structural equation modeling (SEM) multigroup analysis revealed that the GCR…

  2. Treatment Effects for Adolescent Struggling Readers: An Application of Moderated Mediation

    ERIC Educational Resources Information Center

    Roberts, Greg; Fletcher, Jack M.; Stuebing, Karla K.; Barth, Amy E.; Vaughn, Sharon

    2013-01-01

    This study used multigroup structural equations to evaluate the possibility that a theory-driven, evidence-based, yearlong reading program for sixth-grade struggling readers moderates the interrelationships among elements of the simple model of reading (i.e., listening comprehension, word reading, and reading comprehension; Hoover & Gough,…

  3. Toward a Model of Strategies and Summary Writing Performance

    ERIC Educational Resources Information Center

    Yang, Hui-Chun

    2014-01-01

    This study explores the construct of a summarization test task by means of single-group and multigroup structural equation modeling (SEM). It examines the interrelationships between strategy use and performance, drawing on data from 298 Taiwanese undergraduates' summary essays and their self-reported strategy use. Single-group SEM analyses…

  4. An Investigation of Measurement Invariance across Genders on the Overexcitability Questionnaire-Two

    ERIC Educational Resources Information Center

    Warne, Russell T.

    2011-01-01

    The Overexcitability Questionnaire-Two (OEQII) is a quantitative instrument for assessing overexcitabilities as they are described in Dabrowski's theory of positive disintegration. This article uses multigroup confirmatory factor analysis to examine the measurement invariance of OEQII scores across genders. Results indicate that raw OEQII scores…

  5. A REACTOR DESIGN PARAMETER STUDY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, A.H.; LaVerne, M.E.; Burtnette, C.S.

    1954-06-25

    Multigroup calculations were performed on reflector-moderated systems to establish some of the nuclear characteristics of various reflector geometries and materials. C, ⁷Li, ⁷LiOD, and NaOD moderators were used with NaF-UF₄ fuel. The results are tabulated for 57 moderator and dimensional variations. (D.E.B.)

  6. Teachers' Engagement at Work: An International Validation Study

    ERIC Educational Resources Information Center

    Klassen, Robert M.; Aldhafri, Said; Mansfield, Caroline F.; Purwanto, Edy; Siu, Angela F. Y.; Wong, Marina W.; Woods-McConney, Amanda

    2012-01-01

    This study explored the validity of the Utrecht Work Engagement Scale in a sample of 853 practicing teachers from Australia, Canada, China (Hong Kong), Indonesia, and Oman. The authors used multigroup confirmatory factor analysis to test the factor structure and measurement invariance across settings, after which they examined the relationships…

  7. Dimensions of Cultural Differences: Pancultural, ETIC/EMIC, and Ecological Approaches

    ERIC Educational Resources Information Center

    Stankov, Lazar; Lee, Jihyun

    2009-01-01

    We investigated the factorial structure of four major domains in social psychology (personality traits, social attitudes, values, and social norms) with an emphasis on cross-cultural differences. Three distinctive approaches--pancultural, multigroup, and multilevel--were applied to the data based on 22 measures that were collected from 2029…

  8. An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models

    ERIC Educational Resources Information Center

    Prindle, John J.; McArdle, John J.

    2012-01-01

    This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…

  9. The Structure of Cognitive Abilities in Youths with Manic Symptoms: A Factorial Invariance Study

    ERIC Educational Resources Information Center

    Beaujean, A. Alexander; Freeman, Megan Joseph; Youngstrom, Eric; Carlson, Gabrielle

    2012-01-01

    This study compared the structure of cognitive ability (specifically, verbal/crystallized ["Gc"] and visual-spatial ability ["Gv"]), as measured in the Wechsler Intelligence Scale for Children, in youth with manic symptoms with a nationally representative group of similarly aged youth. Multigroup confirmatory factor analysis…

  10. Ethnic Identity and Career Development among First-Year College Students

    ERIC Educational Resources Information Center

    Duffy, Ryan D.; Klingaman, Elizabeth A.

    2009-01-01

    The current study explored the relation of ethnic identity achievement and career development progress among a sample of 2,432 first-year college students who completed the Career Decision Profile and Phinney's Multigroup Ethnic Identity Measure. Among students of color, correlational analyses revealed a series of statistically significant, but…

  11. A Cross-Cultural Evaluation of Ethnic Identity Exploration and Commitment

    ERIC Educational Resources Information Center

    Mills, Sarah D.; Murray, Kate E.

    2017-01-01

    We evaluated the unique contribution of the two subscales of the Multigroup Ethnic Identity Measure-Revised (MEIM-R), Exploration and Commitment, to mental and behavioral health outcomes among non-Hispanic White, ethnic minority, and mixed-race college students. Monoracial ethnic minority and mixed-race students reported higher Exploration scores…

  12. The past, present and future of cyber-physical systems: a focus on models.

    PubMed

    Lee, Edward A

    2015-02-26

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.

  14. PROBABILISTIC SAFETY ASSESSMENT OF OPERATIONAL ACCIDENTS AT THE WASTE ISOLATION PILOT PLANT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rucker, D.F.

    2000-09-01

    This report presents a probabilistic safety assessment of radioactive doses as consequences of accident scenarios, to complement the deterministic assessment presented in the Waste Isolation Pilot Plant (WIPP) Safety Analysis Report (SAR). The International Commission on Radiological Protection (ICRP) recommends that both assessments be conducted to ensure that "an adequate level of safety has been achieved and that no major contributors to risk are overlooked" (ICRP 1993). To that end, the probabilistic assessment for the WIPP accident scenarios addresses the wide range of assumptions, e.g. the range of values representing the radioactive source of an accident, that could possibly have been overlooked by the SAR. Routine releases of radionuclides from the WIPP repository to the environment during waste emplacement operations are expected to be essentially zero. In contrast, potential accidental releases from postulated accident scenarios during waste handling and emplacement could be substantial, which necessitates radiological air monitoring and confinement barriers (DOE 1999). The WIPP SAR calculated doses from accidental releases to the on-site (at 100 m from the source) and off-site (at the Exclusive Use Boundary and Site Boundary) public by a deterministic approach. This approach, as demonstrated in the SAR, uses single-point values of key parameters to assess the 50-year, whole-body committed effective dose equivalent (CEDE). The basic assumptions used in the SAR to formulate the CEDE are retained for this report's probabilistic assessment. However, for the probabilistic assessment, single-point parameter values were replaced with probability density functions (PDFs) and were sampled over an expected range. Monte Carlo simulations were run, in which 10,000 iterations were performed by randomly selecting one value for each parameter and calculating the dose. Statistical information was then derived from the batch of 10,000 iterations, including the 5th, 50th, and 95th percentile doses and the sensitivity of each assumption to the calculated doses. As one would intuitively expect, the doses from the probabilistic assessment for most scenarios were found to be much lower than those from the deterministic assessment. The lower doses of the probabilistic assessment can be attributed to a "smearing" of values from the high and low ends of the PDF spectra of the various input parameters. The analysis also found a potential weakness in the deterministic analysis used in the SAR: a detail of drum loading was not taken into consideration. Waste emplacement operations thus far have handled drums from each shipment as a single unit, i.e. drums from each shipment are kept together. Shipments typically come from a single waste stream, and therefore the curie loading of each drum can be considered nearly identical to that of its neighbor. Calculations show that if large numbers of drums are used in the accident scenario assessment, e.g. 28 drums in the waste hoist failure scenario (CH5), then the probabilistic dose assessment calculations will diverge from the deterministically determined doses. As currently calculated, the deterministic dose assessment assumes one drum loaded to the maximum allowable (80 PE-Ci), with the remainder at 10% of the maximum. The effective average drum curie content is therefore lower in the deterministic assessment than in the probabilistic assessment for a large number of drums. EEG recommends that the WIPP SAR calculations be revisited and updated to include a probabilistic safety assessment.
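
    A minimal sketch of the Monte Carlo scheme described above (the input distributions are illustrative stand-ins, not the report's PDFs): sample each parameter from its PDF, compute a dose per iteration, and read off the 5th, 50th, and 95th percentiles from the 10,000-iteration batch.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_iter = 10_000

    # Hypothetical inputs for a drum-handling accident scenario:
    drum_loading = rng.triangular(1.0, 8.0, 80.0, n_iter)    # PE-Ci per drum
    release_frac = rng.uniform(1e-4, 1e-2, n_iter)           # airborne fraction
    dose_factor = rng.lognormal(np.log(5e-3), 0.6, n_iter)   # rem per PE-Ci

    dose = drum_loading * release_frac * dose_factor         # 50-yr CEDE, rem
    for p in (5, 50, 95):
        print(f"P{p:02d}: {np.percentile(dose, p):.4f} rem")
    ```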

  15. 10 CFR 50.48 - Fire protection.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) of this section such as— (i) Administrative controls and personnel requirements for fire prevention... reactor coolant inventory, pressure control, and decay heat removal capability (i.e., feed-and-bleed) for... performed in accordance with Section 2.7.3.5 is not required to support deterministic approach calculations...

  16. 10 CFR 50.48 - Fire protection.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) of this section such as— (i) Administrative controls and personnel requirements for fire prevention... reactor coolant inventory, pressure control, and decay heat removal capability (i.e., feed-and-bleed) for... performed in accordance with Section 2.7.3.5 is not required to support deterministic approach calculations...

  17. Deterministic and robust generation of single photons from a single quantum dot with 99.5% indistinguishability using adiabatic rapid passage.

    PubMed

    Wei, Yu-Jia; He, Yu-Ming; Chen, Ming-Cheng; Hu, Yi-Nan; He, Yu; Wu, Dian; Schneider, Christian; Kamp, Martin; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei

    2014-11-12

    Single photons are attractive candidates of quantum bits (qubits) for quantum computation and are the best messengers in quantum networks. Future scalable, fault-tolerant photonic quantum technologies demand both stringently high levels of photon indistinguishability and generation efficiency. Here, we demonstrate deterministic and robust generation of pulsed resonance fluorescence single photons from a single semiconductor quantum dot using adiabatic rapid passage, a method robust against fluctuation of driving pulse area and dipole moments of solid-state emitters. The emitted photons are background-free, have a vanishing two-photon emission probability of 0.3% and a raw (corrected) two-photon Hong-Ou-Mandel interference visibility of 97.9% (99.5%), reaching a precision that places single photons at the threshold for fault-tolerant surface-code quantum computing. This single-photon source can be readily scaled up to multiphoton entanglement and used for quantum metrology, boson sampling, and linear optical quantum computing.

  18. Symmetry breaking in the opinion dynamics of a multi-group project organization

    NASA Astrophysics Data System (ADS)

    Zhu, Zhen-Tao; Zhou, Jing; Li, Ping; Chen, Xing-Guang

    2012-10-01

    A bounded-confidence model of opinion dynamics in multi-group projects is presented in which each group's opinion evolution is driven by two types of forces: (i) the group's cohesive force, which tends to restore the opinion towards its initial status because of the group's company culture; and (ii) nonlinear coupling forces with other groups, which attempt to bring opinions closer together owing to collaboration willingness. Bifurcation analysis for the case of a two-group project shows a cusp catastrophe phenomenon and three distinctive evolutionary regimes in the opinion dynamics: a deadlock regime, a convergence regime, and a bifurcation regime. The critical value of the initial discord between the two groups is derived to discriminate which regime the opinion evolution belongs to. In the case of a three-group project with a symmetric social network, both bifurcation analysis and simulation results demonstrate that if each pair has a high initial discord, then instead of symmetrically converging to consensus as the coupling scale increases, as expected from Gabbay's result (Physica A 378 (2007) 125, Fig. 5), the project organization (PO) may split into two distinct clusters because of a symmetry-breaking phenomenon caused by pitchfork bifurcations. This suggests that, apart from divergence in participants' interests, nonlinear interaction alone can make conflict inevitable in the PO. The effects of two asymmetry-level parameters are tested to explore ways of inducing a dominant opinion in the whole PO. It is found that the strong influence imposed by a leader group with firm faith on flexible and open-minded follower groups can promote the formation of a positive dominant opinion in the PO.
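
    A sketch of the two competing forces for a two-group project (generic coefficients and coupling kernel, not the paper's exact equations): a cohesive force pulls each group's opinion back to its initial value, while a nonlinear coupling, which fades as the discord grows, pulls the two opinions together. Small initial discord lands in the convergence regime; large initial discord leaves the coupling too weak to overcome cohesion, producing deadlock.

    ```python
    import numpy as np

    def simulate(x0, kappa=1.0, c=0.8, sigma=0.5, dt=0.01, steps=5000):
        x = np.array(x0, float)
        x_init = x.copy()
        for _ in range(steps):
            d = x[::-1] - x                    # discord with the other group
            coupling = c * d * np.exp(-d**2 / (2 * sigma**2))
            x += dt * (-kappa * (x - x_init) + coupling)
        return x

    print("small discord -> convergence:", simulate([0.1, -0.1]))
    print("large discord -> deadlock:   ", simulate([1.5, -1.5]))
    ```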

  19. How Much Do Adolescents Cybergossip? Scale Development and Validation in Spain and Colombia.

    PubMed

    Romera, Eva M; Herrera-López, Mauricio; Casas, José A; Ortega Ruiz, Rosario; Del Rey, Rosario

    2018-01-01

    Cybergossip is the act of two or more people making evaluative comments via digital devices about somebody who is not present. This cyberbehavior affects the social group in which it occurs and can either promote or hinder peer relationships. Scientific studies that assess the nature of this emerging and interactive behavior in the virtual world are limited. Some research on traditional gossip has identified it as an inherent and defining element of indirect relational aggression. This paper adopts and argues for a wider definition of gossip that includes positive comments and motivations. This work also suggests that cybergossip has to be measured independently from traditional gossip because of key differences when it occurs through ICT. This paper presents the Colombian and Spanish validation of the Cybergossip Questionnaire for Adolescents (CGQ-A), involving 3,747 high school students (M = 13.98 years old, SD = 1.69; 48.5% male), of whom 1,931 were Colombian and 1,816 were Spanish. Test models derived from item response theory, confirmatory factor analysis, content validation, and multi-group analysis were run on the full sample and on subsamples for each country and both genders. The optimal fit and psychometric properties obtained confirm the robustness and suitability of a one-dimensional structure for the cybergossip instrument. The multi-group analysis shows that the cybergossip construct is understood similarly in both countries and by girls and boys. The composite reliability supports the convergent and divergent validity of the scale. Descriptive results show that Colombian adolescents gossip less than their Spanish counterparts and that boys and girls use cybergossip to the same extent. In conclusion, this study confirms the relationship between cybergossip and cyberbullying, but it also supports a focus on positive cybergossip in psychoeducational interventions to build positive virtual relationships and prevent risky cyberbehaviors.

  20. Psychometric evaluation of Persian Nomophobia Questionnaire: Differential item functioning and measurement invariance across gender.

    PubMed

    Lin, Chung-Ying; Griffiths, Mark D; Pakpour, Amir H

    2018-03-01

    Background and aims Research examining problematic mobile phone use has increased markedly over the past 5 years and has been related to "no mobile phone phobia" (so-called nomophobia). The 20-item Nomophobia Questionnaire (NMP-Q) is the only instrument that assesses nomophobia with an underlying theoretical structure and robust psychometric testing. This study aimed to confirm the construct validity of the Persian NMP-Q using Rasch and confirmatory factor analysis (CFA) models. Methods After ensuring linguistic validity, Rasch models were used to examine the unidimensionality of each Persian NMP-Q factor among 3,216 Iranian adolescents, and CFAs were used to confirm its four-factor structure. Differential item functioning (DIF) and multigroup CFA were used to examine whether males and females interpreted the NMP-Q similarly, in both item content and NMP-Q structure. Results Each factor was unidimensional according to the Rasch findings, and the four-factor structure was supported by CFA. Two items did not quite fit the Rasch models (Item 14: "I would be nervous because I could not know if someone had tried to get a hold of me;" Item 9: "If I could not check my smartphone for a while, I would feel a desire to check it"). No DIF items were found across gender, and measurement invariance was supported in the multigroup CFA across gender. Conclusions Given the satisfactory psychometric properties, it is concluded that the Persian NMP-Q can be used to assess nomophobia among adolescents. Moreover, NMP-Q users may compare its scores between genders in the knowledge that there are no score differences contributed by different understandings of the NMP-Q items.

  1. Fractal-Based Image Compression

    DTIC Science & Technology

    1990-01-01

    Indexed fragment (two-column extraction residue; only partially recoverable): the report credits the Ziv-Lempel-Welch (ZLW) compression algorithm [4], acknowledges Roger Boss and Bill ..., and its sections include "2.5 Collage Theorem", "3.2 Deterministic Algorithm for IFS Attractor" (for fast image compression, with the minimum number of maps), and "5 Summary"; reference [5] is J. Ziv and A. Lempel, "Compression of Individual Sequences via Variable-Rate Coding."

  2. Robust Planning for Effects-Based Operations

    DTIC Science & Technology

    2006-06-01

    Indexed fragment (table-of-contents residue): section headings include "Robust Optimization Literature", "Protecting Against ...", "Deterministic EBO Model Formulation", "Deterministic EBO Model Example and Performance", "Greedy Algorithm", "Conclusions on Robust EBO Model Performance", and "Greedy Algorithm versus EBO Models".

  3. PROCEEDINGS OF THE SYMPOSIUM ON SYSTEM THEORY, NEW YORK, N. Y. APRIL 20, 21, 22 1965. VOLUME XV.

    DTIC Science & Technology

    The papers presented at the symposium may be grouped as follows: (1) What is system theory; (2) Representations of systems; (3) System dynamics; (4) Non-deterministic systems; (5) Optimal systems; and (6) Applications of system theory.

  4. Deterministic and stochastic CTMC models from Zika disease transmission

    NASA Astrophysics Data System (ADS)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes, including Aedes aegypti. Pregnant women infected with the Zika virus are at risk of having a fetus or infant with a congenital defect suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches: a deterministic model and a continuous-time Markov chain (CTMC) stochastic model. The basic reproduction ratio is derived from the deterministic model, while the CTMC stochastic model yields estimates of the probabilities of extinction and of outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for both the deterministic and stochastic models.
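
    A sketch contrasting the two approaches on a minimal SIR-type caricature (direct transmission for brevity; the paper's host-vector structure and parameters are not reproduced): the deterministic model always produces an outbreak whose final size is fixed by R0, whereas the CTMC model, simulated here with the Gillespie algorithm, goes extinct with probability approximately 1/R0 when starting from a single infective.

    ```python
    import numpy as np

    beta, gamma, N = 0.3, 0.1, 1000        # R0 = beta / gamma = 3

    def deterministic_attack_rate(I0=1, days=400, dt=0.05):
        S, I = N - I0, I0
        for _ in range(int(days / dt)):    # forward Euler on the SIR ODEs
            new_inf = beta * S * I / N
            S, I = S - dt * new_inf, I + dt * (new_inf - gamma * I)
        return 1.0 - S / N                 # fraction ever infected

    def ctmc_goes_extinct(rng, I0=1):
        S, I = N - I0, I0
        while 0 < I < 0.1 * N:             # stop at extinction or clear outbreak
            rate_inf, rate_rec = beta * S * I / N, gamma * I
            if rng.random() < rate_inf / (rate_inf + rate_rec):
                S, I = S - 1, I + 1        # infection event
            else:
                I -= 1                     # recovery event
        return I == 0

    rng = np.random.default_rng(3)
    p_ext = np.mean([ctmc_goes_extinct(rng) for _ in range(2000)])
    print(f"deterministic attack rate: {deterministic_attack_rate():.2f}")
    print(f"CTMC extinction prob: {p_ext:.2f}  (branching approx 1/R0 = 0.33)")
    ```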

  5. Distinguishing between stochasticity and determinism: Examples from cell cycle duration variability.

    PubMed

    Pearl Mizrahi, Sivan; Sandler, Oded; Lande-Diner, Laura; Balaban, Nathalie Q; Simon, Itamar

    2016-01-01

    We describe a recent approach for distinguishing between stochastic and deterministic sources of variability, focusing on the mammalian cell cycle. Variability between cells is often attributed to stochastic noise, although it may be generated by deterministic components. Interestingly, lineage information can be used to distinguish between variability and determinism. Analysis of correlations within a lineage of the mammalian cell cycle duration revealed its deterministic nature. Here, we discuss the sources of such variability and the possibility that the underlying deterministic process is due to the circadian clock. Finally, we discuss the "kicked cell cycle" model and its implication on the study of the cell cycle in healthy and cancerous tissues.
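
    A sketch of the lineage idea (synthetic data, not the paper's measurements): if a hidden deterministic variable, such as a circadian phase, is transmitted through the lineage, the cycle durations of related cells stay correlated, which a purely stochastic-noise model of variability would not produce.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 4000
    phase = rng.uniform(0, 2 * np.pi, n)          # hidden inherited phase

    def duration(inherited_phase):
        # cycle duration = baseline + deterministic modulation + private noise
        return 20 + 3 * np.sin(inherited_phase) + rng.normal(0, 1.0, n)

    sis1, sis2 = duration(phase), duration(phase)                # sister pairs
    cous1, cous2 = duration(phase + 0.3), duration(phase + 0.3)  # cousin pairs

    print("sister correlation:", round(np.corrcoef(sis1, sis2)[0, 1], 2))
    print("cousin correlation:", round(np.corrcoef(cous1, cous2)[0, 1], 2))
    ```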

  6. Emotional Intelligence and Negative Feelings: A Gender Specific Moderated Mediation Model

    ERIC Educational Resources Information Center

    Karakus, Mehmet

    2013-01-01

    This study aims to clarify the effect of emotional intelligence (EI) on negative feelings (stress, anxiety, burnout and depression) in a gender specific model. Four hundred and twenty-five primary school teachers (326 males, 99 females) completed the measures of EI, stress, anxiety, burnout and depression. The multi-group analysis was performed…

  7. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, Kimberly A.

    2009-08-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples.

  8. Cultural Validation of the Maslach Burnout Inventory for Korean Students

    ERIC Educational Resources Information Center

    Shin, Hyojung; Puig, Ana; Lee, Jayoung; Lee, Ji Hee; Lee, Sang Min

    2011-01-01

    The purpose of this study was to examine the factorial validity of the MBI-SS in Korean students. Specifically, we investigated whether the original three-factor structure of the MBI-SS was appropriate for use with Korean students. In addition, by running multi-group structural equation model analyses with factorial invariance tests simultaneously…

  9. Measuring Teaching Quality and Student Engagement in South Korea and The Netherlands

    ERIC Educational Resources Information Center

    van de Grift, Wim J. C. M.; Chun, Seyeoung; Maulana, Ridwan; Lee, Okhwa; Helms-Lorenz, Michelle

    2017-01-01

    Six observation scales for measuring the skills of teachers and 1 scale for measuring student engagement, assessed in South Korea and The Netherlands, are sufficiently reliable and offer sufficient predictive value for student engagement. A multigroup confirmatory factor analysis shows that the factor loadings and intercepts of the scales are the…

  10. Systems of Goals, Attitudes, and Self-Related Beliefs in Second-Language-Learning Motivation

    ERIC Educational Resources Information Center

    Kormos, Judit; Kiddle, Thom; Csizer, Kata

    2011-01-01

    In the present study, we surveyed the English language-learning motivations of 518 secondary school students, university students, and young adult learners in the capital of Chile, Santiago. We applied multi-group structural-equation modeling to analyze how language-learning goals, attitudes, self-related beliefs, and parental encouragement…

  11. Ethnic Identity and Civic Attitudes in Latino and Caucasian Youth

    ERIC Educational Resources Information Center

    Anglin, Ashley Elizabeth; Johnson-Pynn, Julie S.; Johnson, Laura Renee

    2012-01-01

    Understanding youth's perceptions of their civic skills is important for enriching the lives of youth as well as society. This study explored the relationship between civic attitudes, leadership skills, and ethnic identity in Northwest Georgia schools using two measures, the Civic Attitudes and Skills Questionnaire (CASQ) and the Multigroup Ethnic…

  12. A Person-Centered Perspective on Multidimensional Perfectionism in Canadian and Chinese University Students: A Multigroup Latent Profile Analysis

    ERIC Educational Resources Information Center

    Smith, Martin M.; Saklofske, Donald H.; Yan, Gonggu; Sherry, Simon B.

    2016-01-01

    This study investigated the generalizability of the tripartite model of perfectionism across Canadian and Chinese university students. Using latent profile analysis and indicators of perfectionistic strivings, perfectionistic concerns, and neuroticism in both groups, the authors derived a 3-profile solution: adaptive perfectionists, maladaptive…

  13. Sex Differences in Adults' Motivation to Achieve

    ERIC Educational Resources Information Center

    van der Sluis, Sophie; Vinkhuyzen, Anna A. E.; Boomsma, Dorret I.; Posthuma, Danielle

    2010-01-01

    Achievement motivation is considered a prerequisite for success in academic as well as non-academic settings. We studied sex differences in academic and general achievement motivation in an adult sample of 338 men and 497 women (ages 18-70 years). Multi-group covariance and means structure analysis (MG-CMSA) for ordered categorical data was used…

  14. The Importance of Multi-Group Validity Evidence in Gifted and Talented Identification and Research

    ERIC Educational Resources Information Center

    Peters, Scott J.

    2011-01-01

    Practitioners and researchers often review the validity evidence of an instrument before using it for student assessment or in the practice of diagnosing or identifying children with exceptionalities. However, few test manuals present data on instrument measurement equivalence/ invariance or differential item functioning. This information is…

  15. Student Conceptions of Assessment by Level of Schooling: Further Evidence for Ecological Rationality in Belief Systems

    ERIC Educational Resources Information Center

    Brown, Gavin; Harris, Lois

    2012-01-01

    Student beliefs about assessment may vary according to the level of schooling. The "Students Conceptions of Assessment" version 6 (SCoA-VI) inventory elicits attitudes towards four beliefs (assessment: improves teaching and learning, measures external factors, has affective impact/benefit, is irrelevant). Using multi-group confirmatory…

  16. College Students' Achievement Goal Orientation and Motivational Regulations in Physical Activity Classes: A Test of Gender Invariance

    ERIC Educational Resources Information Center

    Su, Xiaoxia; McBride, Ron E.; Xiang, Ping

    2015-01-01

    The current study examined the measurement invariance across 361 male and female college students' 2 × 2 achievement goal orientation and motivational regulations. Participants completed questionnaires assessing their achievement goals and motivational regulations. Multigroup CFA analyses showed that male and female students' scores were fully…

  17. Assessing and Promoting Resilience: An Additional Tool to Address the Increasing Number of College Students with Psychological Problems

    ERIC Educational Resources Information Center

    Hartley, Michael T.

    2012-01-01

    This study examined the assessment of resilience in undergraduate college students. Multigroup comparisons of the Connor-Davidson Resilience Scale (CD-RISC; Connor & Davidson, 2003) were performed on general population students and students recruited from campus mental health offices offering college counseling, psychiatric-support, and…

  18. Power and Precision in Confirmatory Factor Analytic Tests of Measurement Invariance

    ERIC Educational Resources Information Center

    Meade, Adam W.; Bauer, Daniel J.

    2007-01-01

    This study investigates the effects of sample size, factor overdetermination, and communality on the precision of factor loading estimates and the power of the likelihood ratio test of factorial invariance in multigroup confirmatory factor analysis. Although sample sizes are typically thought to be the primary determinant of precision and power,…

  19. Testing Structural Invariance of the Achievement Goal Questionnaire in American, Chinese, and Dutch College Students

    ERIC Educational Resources Information Center

    Sun, Huaping; Hernandez, Diley

    2012-01-01

    This study investigates the structural invariance of the Achievement Goal Questionnaire (AGQ) in American, Chinese, and Dutch college students. Using confirmatory factor analyses (CFA), the authors found evidence for the four-factor structure of achievement goals in all three samples. Subsequent multigroup CFAs supported structural invariance of…

  20. Assessing Measurement Invariance of the Children's Depression Inventory in Chinese and Italian Primary School Student Samples

    ERIC Educational Resources Information Center

    Wu, Wenfeng; Lu, Yongbiao; Tan, Furong; Yao, Shuqiao; Steca, Patrizia; Abela, John R. Z.; Hankin, Benjamin L.

    2012-01-01

    This study tested the measurement invariance of Children's Depression Inventory (CDI) and compared its factorial variance/covariance and latent means among Chinese and Italian children. Multigroup confirmatory factor analysis of the original five factors identified by Kovacs revealed that full measurement invariance did not hold. Further analysis…

  1. Moderators of Youth Exercise Intention and Behavior

    ERIC Educational Resources Information Center

    Ellis, Rebecca; Kosma, Maria; Symons Downs, Danielle

    2013-01-01

    This study tested moderators of the theory of planned behavior (TPB) based on geographical region, gender, race, and income among adolescents in an exercise context using multigroup path analyses. Participants were eighth- and ninth-grade students from Louisiana (LA; N = 448, M[subscript age] = 14.37 years) and Pennsylvania (PA; N = 681,…

  2. Teachers' Collective Efficacy, Job Satisfaction, and Job Stress in Cross-Cultural Context

    ERIC Educational Resources Information Center

    Klassen, Robert M.; Usher, Ellen L.; Bong, Mimi

    2010-01-01

    This study examines how teachers' collective efficacy (TCE), job stress, and the cultural dimension of collectivism are associated with job satisfaction for 500 teachers from Canada, Korea (South Korea or Republic of Korea), and the United States. Multigroup path analysis revealed that TCE predicted job satisfaction across settings. Job stress was…

  3. California Psychological Inventory Dominance Scale Measurement Equivalence: General Population Normative and Indian, U.K., and U.S. Managerial Samples

    ERIC Educational Resources Information Center

    Kulas, John T.; Thompson, Richard C.; Anderson, Michael G.

    2011-01-01

    The California Psychological Inventory's Dominance scale was investigated for inconsistencies in item-trait associations across four samples (one American normative and three culturally dissociated manager groupings). The Kim, Cohen, and Park procedure was used, enabling simultaneous multigroup comparison in addition to the traditional…

  4. Effects of an Early Numeracy Intervention on Struggling Kindergarteners' Mathematics Performance

    ERIC Educational Resources Information Center

    Bryant, Brian R.; Bryant, Diane Pedrotty; Roberts, Greg; Fall, Anna-Maria

    2016-01-01

    The purpose of this study was to investigate the effects of an early numeracy intervention delivered by kindergarten teachers to students identified as having mathematics difficulties. A multigroup growth-modeling design with random assignment to intervention condition was employed. Thirty-two teachers were randomly assigned to the treatment or…

  5. Measurement Invariance of the "Servant Leadership Questionnaire" across K-12 Principal Gender

    ERIC Educational Resources Information Center

    Xu, Lihua; Stewart, Trae; Haber-Curran, Paige

    2015-01-01

    Measurement invariance of the five-factor "Servant Leadership Questionnaire" between female and male K-12 principals was tested using multi-group confirmatory factor analysis. A sample of 956 principals (56.9% were females and 43.1% were males) was analysed in this study. The hierarchical multi-step measurement invariance test supported…

  6. Multi-Group Invariance of the Conceptions of Assessment Scale among University Faculty and Students

    ERIC Educational Resources Information Center

    DiLoreto, Melanie Anne

    2013-01-01

    Conceptions are contextual. In the realm of education, conceptions of various constituent groups are often shaped over a period of a number of years during which time these groups have participated in educational endeavors. Specifically, conceptions of assessment are influenced by beliefs, actions, attitudes, understandings, and past experiences.…

  7. Happy Spouses, Happy Parents? Family Relationships among Finnish and Dutch Dual Earners

    ERIC Educational Resources Information Center

    Malinen, Kaisa; Kinnunen, Ulla; Tolvanen, Asko; Ronka, Anna; Wierda-Boer, Hilde; Gerris, Jan

    2010-01-01

    In this study links between spousal and parent-child relationships among Finnish (n = 157 couples) and Dutch (n = 276 couples) dual earners with young children were examined using paired questionnaire data. Variable-oriented analyses (structural equation modeling with a multigroup procedure) supported the spillover hypothesis, as higher levels of…

  8. Queensland Teachers' Conceptions of Assessment: The Impact of Policy Priorities on Teacher Attitudes

    ERIC Educational Resources Information Center

    Brown, Gavin T. L.; Lake, Robert; Matters, Gabrielle

    2011-01-01

    The conceptions Queensland teachers have about assessment purposes were surveyed in 2003 with an abridged version of the Teacher Conceptions of Assessment Inventory. Multi-group analysis found that a model with four factors, somewhat different in structure to previous studies, was statistically different between Queensland primary and (lower)…

  9. Motivation and Engagement in Jamaica: Testing a Multidimensional Framework among Students in an Emerging Regional Context

    ERIC Educational Resources Information Center

    Martin, Andrew J.; Martin, Tamica G.; Evans, Paul

    2018-01-01

    This study explored motivation and engagement among 585 Jamaican middle and high school students. Motivation and engagement were assessed via students' responses to the Motivation and Engagement Scale. Confirmatory factor analysis (CFA) found satisfactory fit, and by most measures, multigroup CFA demonstrated comparable factor structure for males…

  10. Social support, sense of community in school, and self-efficacy as resources during early adolescence: an integrative model.

    PubMed

    Vieno, Alessio; Santinello, Massimo; Pastore, Massimiliano; Perkins, Douglas D

    2007-03-01

    Influences of different sources of social support (from parents and friends), school sense of community, and self-efficacy on psychosocial well-being (as measured by self-reported life satisfaction and psychological symptoms) in early adolescence were investigated in an integrative model. The model was tested using structural equation modeling. Multi-group comparisons were used to estimate differences between sex and age groups. The survey sample was composed of 7,097 students in Northern Italy (51.4% male) divided into three age cohorts (equivalent to 6th, 8th, and 10th grades with median ages of 11, 13, and 15). Findings obtained using SEM were consistent with self-efficacy and school sense of community mediating effects of social support on psychosocial adjustment. The multi-group comparison indicates a need for more complex developmental models and more research on how changing forms of support interact with each other as their effects also change during this important stage of life. Implications for primary prevention and cross-cultural comparisons are discussed.

  11. Multigroup Radiation-Hydrodynamics with a High-Order, Low-Order Method

    DOE PAGES

    Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron; ...

    2016-12-09

    Recent efforts at Los Alamos National Laboratory to develop a moment-based, scale-bridging [or high-order (HO)-low-order (LO)] algorithm for solving a large variety of transport (kinetic) systems have shown promising results. A part of our ongoing effort is incorporating this methodology into the framework of the Eulerian Applications Project to achieve algorithmic acceleration of radiation-hydrodynamics simulations in production software. By starting from the thermal radiative transfer equations with a simple material-motion correction, we derive a discretely consistent energy balance equation (LO equation). We demonstrate that the corresponding LO system for the Monte Carlo HO solver is closely related to the original LO system without material-motion corrections. We test the implementation on a radiative shock problem and show consistency between the energy densities and temperatures in the HO and LO solutions as well as agreement with the semianalytic solution. We also test the approach on a more challenging two-dimensional problem and demonstrate accuracy enhancements and algorithmic speedups. This paper extends a recent conference paper by including multigroup effects.

  12. The complexity of personality: advantages of a genetically sensitive multi-group design.

    PubMed

    Hahn, Elisabeth; Spinath, Frank M; Siedler, Thomas; Wagner, Gert G; Schupp, Jürgen; Kandler, Christian

    2012-03-01

    Findings from many behavioral genetic studies utilizing the classical twin design suggest that genetic and non-shared environmental effects play a significant role in human personality traits. This study focuses on the methodological advantages of extending the sampling frame to include multiple dyads of relatives. We investigated the sensitivity of heritability estimates to the inclusion of sibling pairs, mother-child pairs and grandparent-grandchild pairs from the German Socio-Economic Panel Study in addition to a classical German twin sample consisting of monozygotic and dizygotic twins. The resulting dataset contained 1,308 pairs, including 202 monozygotic and 147 dizygotic twin pairs, along with 419 sibling pairs, 438 mother-child dyads, and 102 grandparent-child dyads. This genetically sensitive multi-group design allowed the simultaneous testing of additive and non-additive genetic, common and specific environmental effects, including cultural transmission and twin-specific environmental influences. Using manifest and latent modeling of phenotypes (i.e., controlling for measurement error), we compare results from the extended sample with those from the twin sample alone and discuss implications for future research.

  13. Multigroup Propensity Score Approach to Evaluating an Effectiveness Trial of the New Beginnings Program.

    PubMed

    Tein, Jenn-Yun; Mazza, Gina L; Gunn, Heather J; Kim, Hanjoe; Stuart, Elizabeth A; Sandler, Irwin N; Wolchik, Sharlene A

    2018-06-01

    We used a multigroup propensity score approach to evaluate a randomized effectiveness trial of the New Beginnings Program (NBP), an intervention targeting divorced or separated families. Two features of effectiveness trials, high nonattendance rates and inclusion of an active control, make program effects harder to detect. To estimate program effects based on actual intervention participation, we created a synthetic inactive control comprised of nonattenders and assessed the impact of attending the NBP or active control relative to no intervention (inactive control). We estimated propensity scores using generalized boosted models and applied inverse probability of treatment weighting for the comparisons. Relative to the inactive control, NBP strengthened parenting quality as well as reduced child exposure to interparental conflict, parent psychological distress, and child internalizing problems. Some effects were moderated by parent gender, parent ethnicity, or child age. On the other hand, the effects of active versus inactive control were minimal for parenting and in the unexpected direction for child internalizing problems. Findings from the propensity score approach complement and enhance the interpretation of findings from the intention-to-treat approach.
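
    The sketch below illustrates the weighting scheme on synthetic data, with scikit-learn's gradient boosting standing in for the generalized boosted models used in the study; the covariates, group labels, and outcome are all hypothetical.

```python
# Three-group propensity scores via gradient boosting, then IPTW weights.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 900
X = rng.normal(size=(n, 4))              # baseline covariates (hypothetical)
group = rng.integers(0, 3, size=n)       # 0 = inactive, 1 = active, 2 = NBP
y = rng.normal(size=n)                   # placeholder outcome

probs = GradientBoostingClassifier().fit(X, group).predict_proba(X)
w = 1.0 / probs[np.arange(n), group]     # inverse probability of received condition
w /= w.mean()                            # stabilize the weights

# weighted comparison, e.g. NBP (2) vs. synthetic inactive control (0)
effect = (np.average(y[group == 2], weights=w[group == 2])
          - np.average(y[group == 0], weights=w[group == 0]))
print(f"IPTW-weighted mean difference: {effect:.3f}")
```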

  14. Multi-Group Formulation of the Temperature-Dependent Resonance Scattering Model and its Impact on Reactor Core Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghrayeb, Shadi Z.; Ougouag, Abderrafi M.; Ouisloumen, Mohamed

    2014-01-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. It incorporates the neutron up-scattering effects, stemming from lattice atoms' thermal motion, and accounts for them within the resulting effective nuclear cross-section data. The effects pertain essentially to resonant scattering off of heavy nuclei. The formulation, implemented into a standalone code, produces effective nuclear scattering data that are then supplied directly into the DRAGON lattice physics code, where the effects on Doppler reactivity and neutron flux are demonstrated. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. The results show a change of up to -10% in the Doppler temperature feedback coefficients for UOX and MOX LWR fuels compared to the corresponding values derived using the traditional asymptotic elastic scattering kernel. This paper also summarizes the work done on this topic to date.

  15. Disentangling Mechanisms That Mediate the Balance Between Stochastic and Deterministic Processes in Microbial Succession

    DOE PAGES

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...

    2015-03-17

    Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about the mechanisms shifting their relative importance. To better understand the underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.

  16. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. The simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  17. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. The simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
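
    As a rough illustration of cascading a deterministic model with a two-point probability model, the sketch below uses a Rosenblueth-style point estimate: the deterministic model is evaluated at mu +/- sigma for each of the three uncertain variables, and the 2^3 outputs estimate the mean and the coefficient of variation. The water-table function is a hypothetical stand-in, not the two-layer Owens Valley model.

```python
# Rosenblueth-style two-point estimate cascaded with a deterministic model.
import itertools
import numpy as np

def water_table_change(K, S, Q):
    # hypothetical deterministic response to conductivity K, storage S, source Q
    return Q / (K * S + 1e-9)

mu = np.array([10.0, 0.10, 2.0])       # means of K, S, Q
sigma = np.array([2.0, 0.02, 0.5])     # standard deviations of K, S, Q

outputs = np.array([water_table_change(*(mu + np.array(signs) * sigma))
                    for signs in itertools.product((-1.0, 1.0), repeat=3)])
mean = outputs.mean()
print(f"mean = {mean:.3f}, coefficient of variation = {outputs.std() / mean:.3f}")
```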

  18. Statistics of Delta v magnitude for a trajectory correction maneuver containing deterministic and random components

    NASA Technical Reports Server (NTRS)

    Bollman, W. E.; Chadwick, C.

    1982-01-01

    A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviation using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for a single TCM consisting of both a deterministic and a random component, and they provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
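
    A Monte Carlo sketch in the spirit of the approach described above: the TCM is modeled as a fixed deterministic vector plus an isotropic Gaussian random component, and percentiles of the resulting |Delta v| distribution are tabulated. The numbers are illustrative, not mission values.

```python
# Percentiles of |Delta v| for a deterministic-plus-random TCM.
import numpy as np

rng = np.random.default_rng(0)
dv_det = np.array([2.0, 0.0, 0.0])           # deterministic component, m/s
sigma = 0.5                                  # per-axis random std dev, m/s

mag = np.linalg.norm(dv_det + sigma * rng.standard_normal((100_000, 3)), axis=1)
for q in (50, 90, 99):
    print(f"{q}th percentile of |Delta v|: {np.percentile(mag, q):.3f} m/s")
```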

  19. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics; they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of fractal processes in the wavelet domain. The method has been validated on simulated signals and on real signals of economic and biological origin. The real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems and for uncovering interesting patterns present in time series.
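
    A strongly simplified sketch of the separation idea: soft wavelet thresholding recovers a band-limited deterministic component from a noisy series. The paper's estimator is Bayesian and assumes fractional Brownian motion noise; plain universal-threshold shrinkage over white noise stands in for that machinery here, and all signal parameters are made up.

```python
# Separate a band-limited signal from noise by wavelet soft thresholding.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
deterministic = np.sin(2 * np.pi * 5 * t)            # band-limited component
series = deterministic + 0.8 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(series, "db8", level=6)
# universal threshold estimated from the finest-scale coefficients
sigma_hat = np.median(np.abs(coeffs[-1])) / 0.6745
thr = sigma_hat * np.sqrt(2 * np.log(series.size))
shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
estimate = pywt.waverec(shrunk, "db8")

print("RMSE of recovered deterministic component:",
      np.sqrt(np.mean((estimate - deterministic) ** 2)))
```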

  20. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183
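
    To ground the idea of deterministic inference of selection, the sketch below fits a selection coefficient to a noisy allele-frequency trajectory using the plain deterministic haploid recursion x' = x(1+s)/(1+sx); the paper's delay-deterministic correction is not reproduced, and the data are simulated.

```python
# Least-squares fit of a selection coefficient to a frequency trajectory.
import numpy as np
from scipy.optimize import minimize_scalar

def trajectory(x0, s, n_gen):
    x = np.empty(n_gen)
    x[0] = x0
    for t in range(1, n_gen):
        x[t] = x[t - 1] * (1 + s) / (1 + s * x[t - 1])
    return x

rng = np.random.default_rng(0)
obs = trajectory(0.05, 0.12, 40) + rng.normal(0, 0.02, 40)  # noisy observations

fit = minimize_scalar(lambda s: np.sum((trajectory(0.05, s, 40) - obs) ** 2),
                      bounds=(-0.5, 0.5), method="bounded")
print(f"inferred selection coefficient: {fit.x:.3f} (true value 0.12)")
```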

  1. A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments

    NASA Astrophysics Data System (ADS)

    Gokhale, Sharad; Khare, Mukesh

    Several deterministic-based air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by the extremes and by the sustained average concentration of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is one technique that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at one of the traffic intersections, Income Tax Office (ITO), in Delhi, where the traffic is heterogeneous in nature, consisting of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers (scooters, motorcycles, etc.), and the meteorology is 'tropical'. The model combines the general finite line source model (GFLSM) as its deterministic component, and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d = 0.91) in the 10-95 percentile range. A regulatory compliance measure is also developed to estimate the probability of hourly CO concentrations exceeding the National Ambient Air Quality Standards (NAAQS) of India.
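
    The statistical half of such a hybrid can be sketched as follows: a log-logistic distribution (scipy's fisk) is fitted to hourly CO concentrations and the exceedance probability against a standard is read off the survival function. The data and the threshold value are illustrative assumptions.

```python
# Fit a log-logistic (Fisk) distribution to CO data and estimate exceedance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
co = stats.fisk.rvs(c=3.0, scale=2.0, size=2000, random_state=rng)  # mock hourly CO

c, loc, scale = stats.fisk.fit(co, floc=0)   # location pinned at zero
threshold = 4.0                              # hypothetical hourly standard, mg/m^3
p_exceed = stats.fisk.sf(threshold, c, loc=loc, scale=scale)
print(f"P(hourly CO > {threshold} mg/m^3) = {p_exceed:.3f}")
```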

  2. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse-resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on the MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine-resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate-resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden while providing accurate deterministic estimates and reliable probabilistic assessments.
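
    A minimal analog-ensemble sketch, under the assumption that the method amounts to nearest-neighbor matching of coarse-model states: the observations paired with the k most similar historical model states form an ensemble whose mean and spread give deterministic and probabilistic estimates. All data below are synthetic.

```python
# Analog ensemble via nearest neighbors over historical coarse-model states.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
hist_model = rng.normal(size=(5000, 3))          # historical coarse predictors
hist_obs = hist_model[:, 0] * 2 + rng.normal(scale=0.5, size=5000)  # site obs

nn = NearestNeighbors(n_neighbors=20).fit(hist_model)
new_forecast = rng.normal(size=(1, 3))           # today's coarse-model state
_, idx = nn.kneighbors(new_forecast)
ensemble = hist_obs[idx[0]]                      # matched observations

print(f"analog-ensemble mean = {ensemble.mean():.2f}, "
      f"spread (std) = {ensemble.std():.2f}")
```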

  3. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.

  4. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution, because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive ground truth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in a 50% improvement in resolution and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing.
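
    A frequency-domain sketch of deterministic deconvolution: the recorded trace spectrum is divided by the spectrum of the measured source wavelet, with water-level regularization keeping the division stable. The wavelet and trace are synthetic stand-ins for an air-acquired GPR wavelet, and the water-level value is an assumption.

```python
# Deterministic deconvolution by spectral division with a water level.
import numpy as np

def deterministic_deconv(trace, wavelet, water_level=0.05):
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    # clamp small wavelet spectral amplitudes to avoid noise blow-up
    mag = np.maximum(np.abs(W), water_level * np.abs(W).max())
    return np.fft.irfft(T / (mag * np.exp(1j * np.angle(W))), n)

# synthetic test: three reflectors convolved with a known wavelet
i = np.arange(64)
wavelet = np.exp(-0.5 * ((i - 16) / 3.0) ** 2) * np.cos(i)
reflectivity = np.zeros(512)
reflectivity[[100, 180, 300]] = [1.0, -0.6, 0.4]
trace = np.convolve(reflectivity, wavelet)[:512]

sharpened = deterministic_deconv(trace, wavelet)
print("recovered reflector positions:",
      np.sort(np.argsort(np.abs(sharpened))[-3:]))   # expect ~100, 180, 300
```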

  5. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    PubMed

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

    We investigate the impact of the Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. A regime with exactly three attractors only appears when the patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are then only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds, as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.
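
    A sketch of a deterministic two-patch model in this spirit, assuming a standard cubic Allee growth term dN_i/dt = r N_i (N_i/A_i - 1)(1 - N_i/K) + D(N_j - N_i) with distinct Allee thresholds; parameter values are illustrative, not the paper's.

```python
# Two-patch deterministic model with Allee effects and linear dispersal.
import numpy as np
from scipy.integrate import solve_ivp

r, K, D = 1.0, 1.0, 0.05
A = np.array([0.2, 0.35])               # distinct Allee thresholds per patch

def rhs(t, N):
    growth = r * N * (N / A - 1.0) * (1.0 - N / K)
    dispersal = D * (N[::-1] - N)       # exchange between the two patches
    return growth + dispersal

# patch 1 established near carrying capacity, patch 2 nearly empty
sol = solve_ivp(rhs, (0.0, 200.0), [0.9, 0.01])
print("final densities:", sol.y[:, -1])  # invasion vs. extinction outcome
```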

  6. An improved random walk algorithm for the implicit Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keady, Kendra P., E-mail: keadyk@lanl.gov; Cleveland, Mathew A.

    In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in “fully-gray” form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2–4 compared to standard RW, and a factor of ∼3–6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.

  7. A PC-based magnetometer-only attitude and rate determination system for gyroless spacecraft

    NASA Technical Reports Server (NTRS)

    Challa, M.; Natanson, G.; Deutschmann, J.; Galal, K.

    1995-01-01

    This paper describes a prototype PC-based system that uses measurements from a three-axis magnetometer (TAM) to estimate the state (three-axis attitude and rates) of a spacecraft given no a priori information other than the mass properties. The system uses two algorithms that estimate the spacecraft's state - a deterministic magnetic-field only algorithm and a Kalman filter for gyroless spacecraft. The algorithms are combined by invoking the deterministic algorithm to generate the spacecraft state at epoch using a small batch of data and then using this deterministic epoch solution as the initial condition for the Kalman filter during the production run. System input comprises processed data that includes TAM and reference magnetic field data. Additional information, such as control system data and measurements from line-of-sight sensors, can be input to the system if available. Test results are presented using in-flight data from two three-axis stabilized spacecraft: Solar, Anomalous, and Magnetospheric Particle Explorer (SAMPEX) (gyroless, Sun-pointing) and Earth Radiation Budget Satellite (ERBS) (gyro-based, Earth-pointing). The results show that, using as little as 700 s of data, the system is capable of accuracies of 1.5 deg in attitude and 0.01 deg/s in rates; i.e., within SAMPEX mission requirements.

  8. Deterministic delivery of remote entanglement on a quantum network.

    PubMed

    Humphreys, Peter C; Kalb, Norbert; Morits, Jaco P J; Schouten, Raymond N; Vermeulen, Raymond F L; Twitchen, Daniel J; Markham, Matthew; Hanson, Ronald

    2018-06-01

    Large-scale quantum networks promise to enable secure communication, distributed quantum computing, enhanced sensing and fundamental tests of quantum mechanics through the distribution of entanglement across nodes [1-7]. Moving beyond current two-node networks [8-13] requires the rate of entanglement generation between nodes to exceed the decoherence (loss) rate of the entanglement. If this criterion is met, intrinsically probabilistic entangling protocols can be used to provide deterministic remote entanglement at pre-specified times. Here we demonstrate this using diamond spin qubit nodes separated by two metres. We realize a fully heralded single-photon entanglement protocol that achieves entangling rates of up to 39 hertz, three orders of magnitude higher than previously demonstrated two-photon protocols on this platform [14]. At the same time, we suppress the decoherence rate of remote-entangled states to five hertz through dynamical decoupling. By combining these results with efficient charge-state control and mitigation of spectral diffusion, we deterministically deliver a fresh remote state with an average entanglement fidelity of more than 0.5 at every clock cycle of about 100 milliseconds without any pre- or post-selection. These results demonstrate a key building block for extended quantum networks and open the door to entanglement distribution across multiple remote nodes.

  9. Seismic stability of the survey areas of potential sites for the deep geological repository of the spent nuclear fuel

    NASA Astrophysics Data System (ADS)

    Kaláb, Zdeněk; Šílený, Jan; Lednická, Markéta

    2017-07-01

    This paper deals with the seismic stability of the survey areas of potential sites for the deep geological repository of spent nuclear fuel in the Czech Republic. The basic source of data for historical earthquakes up to 1990 was the seismic website [1-]. The most intense earthquake described in the historical period occurred on September 15, 1590 in the Niederroesterreich region (Austria); its reported intensity is Io = 8-9. The source of the contemporary seismic data for the period from 1991 to the end of 2014 was the website [11]. Based on the databases and literature review, it may be stated that since 1900 no earthquake exceeding magnitude 5.1 has originated in the territory of the Czech Republic. In order to evaluate seismicity and to assess the impact of seismic effects at the depths of a hypothetical deep geological repository over the next time period, the neo-deterministic method was selected as an extension of the probabilistic method. Each of the seven survey areas was assessed by neo-deterministic evaluation of the seismic wave-field excited by selected individual events and by determining the maximum loading. Results of the seismological database studies and the neo-deterministic analysis of the Čihadlo locality are presented.

  10. The progression of the entropy of a five dimensional psychotherapeutic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badalamenti, A.F.; Langs, R.J.

    This paper presents a study of the deterministic and stochastic behavior of the entropy of a 5-dimensional, 2400-state system across each of six psychotherapeutic sessions. The growth of entropy was found to be logarithmic in each session. The stochastic behavior of a moving 600-second estimator of entropy revealed a Box-Jenkins model of type (1,1,0) - that is, the first difference of the entropy series was first-order autoregressive, or prior-state sensitive. In addition, the patient and therapist entropy series exhibited no significant cross-correlation across lags of -300 to +300 seconds, yet all such pairs of series exhibited high coherency past the frequency of .06 (on a full range of 0 to .5). Furthermore, all the patients and therapists were attracted to a geometric center of mass in 5-dimensional space which was different from the geometric center of the region where the system lived. The process significance of the findings and the relationship between the deterministic and stochastic results are discussed. The paper is then placed in the broader context of our efforts to provide new and meaningful quantitative dimensions and mathematical models to psychotherapy research.
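
    The Box-Jenkins step can be sketched as follows: an ARIMA(1,1,0) model is fitted to an entropy series whose first difference is first-order autoregressive. The series below is simulated with a known AR coefficient (0.6, an arbitrary choice), not derived from the clinical data.

```python
# Fit an ARIMA(1,1,0) model to a simulated entropy series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
eps = rng.normal(scale=0.05, size=600)
dH = np.zeros(600)
for t in range(1, 600):                 # diff(H) is AR(1) with coefficient 0.6
    dH[t] = 0.6 * dH[t - 1] + eps[t]
entropy = 1.0 + np.cumsum(dH)           # integrated, prior-state-sensitive series

fit = ARIMA(entropy, order=(1, 1, 0)).fit()
print(fit.params)                       # [ar.L1, sigma2]; ar.L1 should be near 0.6
```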

  11. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than those stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
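
    A sketch of the spectral idea on a synthetic graph: for SIS-type spreading the epidemic threshold scales as 1/lambda_max of the adjacency matrix, and because adding nonnegative random links can only raise lambda_max, the deterministic part alone brackets the threshold from above. The graph size and probabilities are arbitrary assumptions.

```python
# Bracketing an SIS epidemic threshold using only the deterministic links.
import numpy as np

rng = np.random.default_rng(0)
n = 100
A_det = np.triu((rng.random((n, n)) < 0.05).astype(float), 1)
A_det = A_det + A_det.T                     # symmetric deterministic links
p_rand = 0.01                               # probability of each random link

lam_det = np.linalg.eigvalsh(A_det)[-1]     # largest eigenvalue, fixed links only
A_exp = np.clip(A_det + p_rand, 0.0, 1.0)   # expected adjacency incl. random links
np.fill_diagonal(A_exp, 0.0)
lam_exp = np.linalg.eigvalsh(A_exp)[-1]

print(f"threshold bracket: {1/lam_exp:.3f} (with random links) "
      f"to {1/lam_det:.3f} (deterministic only)")
```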

  12. Experimental demonstration on the deterministic quantum key distribution based on entangled photons.

    PubMed

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-02-10

    As an important resource, entanglement light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified "Ping-Pong" (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entanglement light sources in real-life fiber-based quantum communications.

  13. Experimental demonstration on the deterministic quantum key distribution based on entangled photons

    PubMed Central

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-01-01

    As an important resource, entanglement light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified “Ping-Pong” (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entanglement light sources in real-life fiber-based quantum communications. PMID:26860582

  14. An Experimental Study of the Effect of Streamwise Vortices on Unsteady Turbulent Boundary-Layer Separation

    DTIC Science & Technology

    1988-12-09

    [Garbled table-of-contents/OCR fragment. Recoverable content: the report covers measurement of second-order statistics, measurement of triple products, and uncertainty analysis; although the deterministic fluctuations were 25 times larger than the mean fluctuations, there were no significant variations in the mean statistics; the three velocity components were calculated from the input signals, and individual phase ensembles were collected for the appropriate statistics.]

  15. Characterization of normality of chaotic systems including prediction and detection of anomalies

    NASA Astrophysics Data System (ADS)

    Engler, Joseph John

    Accurate prediction and control pervade domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic, or random models. It has been shown that such systems exhibit deterministic chaos. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear random or quasi-periodic. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions, manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems over longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short-term predictions, given that the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short-term prediction. Detection of normality in deterministically chaotic systems is critical to understanding the system sufficiently to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allow for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detecting anomalous behavior, and more accurately predicting future states of the system. Additionally, the ability to detect subtle system state changes is discussed. The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer-term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
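
    One step in this direction can be made concrete: the largest Lyapunov exponent of a chaotic map can be estimated from the divergence rate of two initially close trajectories. The sketch uses the logistic map x' = 4x(1-x), whose exponent is ln 2, rather than any of the dissertation's applications.

```python
# Largest Lyapunov exponent of the logistic map from trajectory divergence.
import numpy as np

def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10          # two trajectories with a tiny initial offset
log_sep = []
for _ in range(50):
    x, y = logistic(x), logistic(y)
    log_sep.append(np.log(abs(y - x)))

steps = np.arange(20)            # fit only the initial exponential-growth phase
lam = np.polyfit(steps, np.array(log_sep[:20]), 1)[0]
print(f"estimated exponent: {lam:.3f} (theory: ln 2 = {np.log(2):.3f})")
```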

  16. Adolescent Student Burnout Inventory in Mainland China: Measurement Invariance across Gender and Educational Track

    ERIC Educational Resources Information Center

    Li, Bi; Wu, Yan; Wen, Zhonglin; Wang, Mengcheng

    2014-01-01

    This article assessed the measurement in variance of the Adolescent Student Burnout Inventory (ASBI) across gender and educational track, and investigated the main and interaction effects of gender and educational track on the facets of student burnout with a sample consisting of 2,216 adolescent students from China. Multigroup confirmatory factor…

  17. First among Others? Cohen's "d" vs. Alternative Standardized Mean Group Difference Measures

    ERIC Educational Resources Information Center

    Cahan, Sorel; Gamliel, Eyal

    2011-01-01

    Standardized effect size measures typically employed in behavioral and social sciences research in the multi-group case (e.g., η², f²) evaluate between-group variability in terms of either total or within-group variability, such as variance or standard deviation--that is, measures of dispersion about the mean. In…

  18. The Moderating Effect of Machiavellianism on the Relationships between Bullying, Peer Acceptance, and School Adjustment in Adolescents

    ERIC Educational Resources Information Center

    Wei, Hsi-Sheng; Chen, Ji-Kang

    2012-01-01

    This study examined the moderating effect of Machiavellianism on the relationships between bullying, peer acceptance, and school adjustment (rule-following behavior and academic performance) among 216 middle school 7th-graders in Taipei, Taiwan. The participants were divided into two groups according to their Machiavellianism. Multi-group path…

  19. The Korean Diasporic Experience: Measuring Ethnic Identity in the United States and China.

    ERIC Educational Resources Information Center

    Lee, Richard M.; Falbo, Toni; Doh, Hyun Sim; Park, Seong Yeon

    2001-01-01

    Korean undergraduates living in the United States and in China were administered the Multigroup Ethnic Identity Measure to assess their ethnic identity. Korean Americans had higher scores on ethnic identity and were more likely to be classified as bicultural, indicating that they were able to retain their cultural heritage while incorporating…

  20. Cyberbullying and Cybervictimization within a Cross-Cultural Context: A Study of Canadian and Tanzanian Adolescents

    ERIC Educational Resources Information Center

    Shapka, Jennifer D.; Onditi, Hezron Z.; Collie, Rebecca J.; Lapidot-Lefler, Noam

    2018-01-01

    This study explored cyberbullying and cybervictimization (CBCV) for adolescents aged 11-15 from Tanzania (N = 426) and Canada (N = 592). Measurement invariance and model invariance were found for CBCV. In addition, multigroup structural equation modeling was used to explore several variables: age, gender, average hours online each day, accessing…

  1. Employee Participation in Non-Mandatory Professional Development--The Role of Core Proactive Motivation Processes

    ERIC Educational Resources Information Center

    Sankey, Kim S.; Machin, M. Anthony

    2014-01-01

    With a focus on the self-initiated efforts of employees, this study examined a model of core proactive motivation processes for participation in non-mandatory professional development (PD) within a proactive motivation framework using the Self-Determination Theory perspective. A multi-group SEM analysis conducted across 439 academic and general…

  2. Promoting Reading Attitudes of Girls and Boys: A New Challenge for Educational Policy? Multi-Group Analyses across Four European Countries

    ERIC Educational Resources Information Center

    Nonte, Sonja; Hartwich, Lea; Willems, Ariane S.

    2018-01-01

    Background: Numerous studies have investigated the relationships between various student, home and contextual factors and reading achievement. However, the relationship between such factors and reading attitudes has been investigated far less, despite the fact that theoretical frameworks of large-scale assessments and school effectiveness research…

  3. Are All Minority Women Equally Buffered from Negative Body Image? Intra-Ethnic Moderators of the Buffering Hypothesis

    ERIC Educational Resources Information Center

    Sabik, Natalie J.; Cole, Elizabeth R.; Ward, L. Monique

    2010-01-01

    Body dissatisfaction is normative among European American women, and involvement with predominant culture or linking self-worth to weight may intensify the association between body dissatisfaction and drive for thinness for women of color. Our study investigated whether orientation to other ethnic groups (Multigroup Ethnic Identity Measure) and…

  4. Exploring the Full-Information Bifactor Model in Vertical Scaling with Construct Shift

    ERIC Educational Resources Information Center

    Li, Ying; Lissitz, Robert W.

    2012-01-01

    To address the lack of attention to construct shift in item response theory (IRT) vertical scaling, a multigroup, bifactor model was proposed to model the common dimension for all grades and the grade-specific dimensions. Bifactor model estimation accuracy was evaluated through a simulation study with manipulated factors of percentage of common…

  5. The Chinese Family Assessment Instrument (C-FAI): Hierarchical Confirmatory Factor Analyses and Factorial Invariance

    ERIC Educational Resources Information Center

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2010-01-01

    Objective: This paper examines the dimensionality and factorial invariance of the Chinese Family Assessment Instrument (C-FAI) using multigroup confirmatory factor analyses (MCFAs). Method: A total of 3,649 students responded to the C-FAI in a community survey. Results: Results showed that there are five dimensions of the C-FAI (communication,…

  6. The Effects of Cognitive Style on Edmodo Users' Behaviour: A Structural Equation Modeling-Based Multi-Group Analysis

    ERIC Educational Resources Information Center

    Ursavas, Omer Faruk; Reisoglu, Ilknur

    2017-01-01

    Purpose: The purpose of this paper is to explore the validity of extended technology acceptance model (TAM) in explaining pre-service teachers' Edmodo acceptance and the variation of variables related to TAM among pre-service teachers having different cognitive styles. Design/methodology/approach: Structural equation modeling approach was used to…

  7. Measurement Invariance of Early Development Instrument (EDI) Domain Scores across Gender and ESL Status

    ERIC Educational Resources Information Center

    Mousavi, Amin; Krishnan, Vijaya

    2016-01-01

    The Early Development Instrument (EDI) is a widely used teacher rating tool to assess kindergartners' developmental outcomes in Canada and a number of other countries. This paper examines the measurement invariance of EDI domains across ESL status and gender by means of multi-group confirmatory factor analysis. The results suggest evidence of…

  8. An Exploration of the Effects of Skin Tone on African American Life Experiences.

    ERIC Educational Resources Information Center

    Breland, Alfiee M.; Collins, Wanda; Damico, Karen Lowenstein; Steward, Robbie; King, Jennifer

    This study surveys African Americans to assess perceptions of and life experiences with the issue of skin tone. Thirty-seven African American adults agreed to complete a survey packet and participate in a semi-structured focus group discussion. Participants completed the Rosenberg Self-Esteem Scale, the Multigroup Ethnic Identity Measure, the Skin…

  9. A Cross-Cultural Validation of the Learning-Related Boredom Scale (LRBS) with Canadian and Chinese College Students

    ERIC Educational Resources Information Center

    Tze, Virginia M. C.; Klassen, Robert M.; Daniels, Lia M.; Li, Johnson C.-H.; Zhang, Xiao

    2013-01-01

    This study evaluated the psychometric properties of the Learning-Related Boredom Scale (LRBS) from the Academic Emotions Questionnaire (AEQ; Pekrun, Goetz, & Perry, 2005; Pekrun, Goetz, Titz, & Perry, 2002) in a sample of 405 university students from Canada and China. Multigroup confirmatory factor analysis was used to test the factor…

  10. Using Confirmatory Factor Analysis and the Rasch Model to Assess Measurement Invariance in a High Stakes Reading Assessment

    ERIC Educational Resources Information Center

    Randall, Jennifer; Engelhard, George, Jr.

    2010-01-01

    The psychometric properties and multigroup measurement invariance of scores across subgroups, items, and persons on the "Reading for Meaning" items from the Georgia Criterion Referenced Competency Test (CRCT) were assessed in a sample of 778 seventh-grade students. Specifically, we sought to determine the extent to which score-based…

  11. Integrative review of indigenous arthropod natural enemies of the invasive brown marmorated stink bug in North America and Europe

    USDA-ARS?s Scientific Manuscript database

    Since the establishment of the brown marmorated stink bug, Halyomorpha halys Stål (Hemiptera: Pentatomidae) in North America and Europe, there has been a large, multi-group effort to characterize the composition and impact of the indigenous community of arthropod natural enemies attacking this invas...

  12. Using Multigroup Confirmatory Factor Analysis to Test Measurement Invariance in Raters: A Clinical Skills Examination Application

    ERIC Educational Resources Information Center

    Kahraman, Nilufer; Brown, Crystal B.

    2015-01-01

    Psychometric models based on structural equation modeling framework are commonly used in many multiple-choice test settings to assess measurement invariance of test items across examinee subpopulations. The premise of the current article is that they may also be useful in the context of performance assessment tests to test measurement invariance…

  13. Attitude toward Science Teaching of Spanish and Turkish In-Service Elementary Teachers: Multi-Group Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Korur, Fikret; Vargas, Rocío Vargas; Torres Serrano, Noemí

    2016-01-01

    Elementary school teachers' having a positive attitude toward science teaching might encourage students to develop positive attitudes toward science learning. This cross-cultural study aimed to validate the seven-factor structure of the Dimensions of Attitude toward Science (DAS) scale by applying it in two countries. Moreover, it aimed to…

  14. Moderation of Cognitive-Achievement Relations for Children with Specific Learning Disabilities: A Multi-Group Latent Variable Analysis Using CHC Theory

    ERIC Educational Resources Information Center

    Niileksela, Christopher R.

    2012-01-01

    Recent advances in the understanding of the relations between cognitive abilities and academic skills have helped shape a better understanding of which cognitive processes may underlie different types of SLD (Flanagan, Fiorello, & Ortiz, 2010). Similarities and differences in cognitive-achievement relations for children with and without SLDs…

  15. Testing for Measurement and Structural Equivalence in Large-Scale Cross-Cultural Studies: Addressing the Issue of Nonequivalence

    ERIC Educational Resources Information Center

    Byrne, Barbara M.; van de Vijver, Fons J. R.

    2010-01-01

    A critical assumption in cross-cultural comparative research is that the instrument measures the same construct(s) in exactly the same way across all groups (i.e., the instrument is measurement and structurally equivalent). Structural equation modeling (SEM) procedures are commonly used in testing these assumptions of multigroup equivalence.…

  16. A Comparative Study of the Effects of Cultural Differences on the Adoption of Mobile Learning

    ERIC Educational Resources Information Center

    Arpaci, Ibrahim

    2015-01-01

    The objective of this paper is to understand the impact of cultural differences on mobile learning adoption through identifying key adoption characteristics in Canada and Turkey, which have markedly different cultural backgrounds. A multi-group analysis was employed to test the hypothesised relationships based on the data collected by means of…

  17. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model.
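
    As a toy illustration of purely deterministic selection inference (not the authors' delay-deterministic correction), the sketch below propagates a haploid variant whose frequency follows x' = x(1+s)/(1+sx) per generation and recovers s from a noisy trajectory by least-squares grid search. All data, rates, and noise levels are simulated assumptions.

      import numpy as np

      def trajectory(x0, s, T):
          # Deterministic haploid selection: x' = x*(1+s) / (1 + s*x).
          xs = [x0]
          for _ in range(T):
              x = xs[-1]
              xs.append(x * (1 + s) / (1 + s * x))
          return np.array(xs)

      rng = np.random.default_rng(0)
      obs = trajectory(0.05, 0.10, 40) + rng.normal(0, 0.02, 41)

      # Grid-search the selection coefficient by least squares.
      grid = np.linspace(0.0, 0.3, 301)
      sse = [np.sum((trajectory(0.05, s, 40) - obs) ** 2) for s in grid]
      print(f"inferred s = {grid[int(np.argmin(sse))]:.3f}  (true 0.10)")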

  18. Controllability of Deterministic Networks with the Identical Degree Sequence

    PubMed Central

    Ma, Xiujuan; Zhao, Haixing; Wang, Binghong

    2015-01-01

    Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on the network's topology. Liu, Barabási, and colleagues speculated that the degree distribution is one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analysed the effect of the degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analysed the controllability of two deterministic networks ((1,3)-flower and (2,2)-flower) in detail by exact controllability theory and give accurate results for the minimum number of driver nodes for the two networks. In simulations, we compare the controllability of (x,y)-flower networks. Our results show that the family of (x,y)-flower networks have the same degree sequence, but their controllability is totally different. Thus the degree distribution by itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks. PMID:26020920
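
    Exact controllability theory computes the minimum number of driver nodes from eigenvalue multiplicities: for an undirected, unweighted network with adjacency matrix A on N nodes, N_D = max over eigenvalues λ of (N − rank(λI − A)). A small numpy sketch under those assumptions follows; the star-graph example is mine, not from the paper.

      import numpy as np

      def min_driver_nodes(A, tol=1e-8):
          # Exact controllability for an undirected, unweighted network:
          # N_D = max over eigenvalues lam of (N - rank(lam*I - A)).
          N = A.shape[0]
          eigvals = np.linalg.eigvalsh(A)       # A is symmetric
          distinct = []                         # collapse repeated eigenvalues
          for lam in eigvals:
              if not distinct or abs(lam - distinct[-1]) > tol:
                  distinct.append(lam)
          best = 1
          for lam in distinct:
              mult = N - np.linalg.matrix_rank(lam * np.eye(N) - A, tol=tol)
              best = max(best, mult)
          return best

      # Toy example: a star on 5 nodes (center = node 0).
      A = np.zeros((5, 5))
      A[0, 1:] = A[1:, 0] = 1
      print(min_driver_nodes(A))   # leaves share eigenvalue 0 -> N_D = 3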

  19. Inverse kinematic problem for a random gradient medium in geometric optics approximation

    NASA Astrophysics Data System (ADS)

    Petersen, N. V.

    1990-03-01

    Scattering at random inhomogeneities in a gradient medium results in systematic deviations of the rays and travel times of refracted body waves from those corresponding to the deterministic velocity component. The character of the difference depends on the parameters of the deterministic and random velocity components. However, at great distances from the source, independently of the velocity parameters (weakly or strongly inhomogeneous medium), the most probable depth of the ray turning point is smaller than that corresponding to the deterministic velocity component, the most probable travel times also being lower. The relative uncertainty in the deterministic velocity component, derived from the mean travel times using methods developed for laterally homogeneous media (for instance, the Herglotz-Wiechert method), is systematic in character, but does not exceed the contrast of the velocity inhomogeneities in magnitude. The gradient of the deterministic velocity component has a significant effect on the travel-time fluctuations. The variance at great distances from the source is mainly controlled by shallow inhomogeneities. The travel-time fluctuations are studied only for weakly inhomogeneous media.

  20. Quasi-Static Probabilistic Structural Analyses Process and Criteria

    NASA Technical Reports Server (NTRS)

    Goldberg, B.; Verderaime, V.

    1999-01-01

    Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout the prevailing deterministic stress processes leads to a safety factor in statistical format which, integrated into the safety index, provides a safety-factor and first-order reliability relationship. The embedded safety factor in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with traditional safety factor methods and NASA Std. 5001 criteria.
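
    For normally distributed resistance R and stress S, the first-order safety index referred to above is β = (μ_R − μ_S)/√(σ_R² + σ_S²), with failure probability Φ(−β), while the central safety factor is μ_R/μ_S. A minimal sketch of that relationship, with made-up values rather than any NASA data:

      from math import sqrt
      from statistics import NormalDist

      def safety_index(mu_R, sigma_R, mu_S, sigma_S):
          # First-order safety index for normal resistance R and stress S.
          return (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)

      mu_R, sigma_R = 60.0, 4.0     # hypothetical strength, ksi
      mu_S, sigma_S = 40.0, 5.0     # hypothetical applied stress, ksi
      beta = safety_index(mu_R, sigma_R, mu_S, sigma_S)
      pf = NormalDist().cdf(-beta)  # probability of failure, P(R - S < 0)
      print(f"central safety factor = {mu_R / mu_S:.2f}")
      print(f"safety index beta = {beta:.2f}, P_failure = {pf:.2e}")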

  1. Effect of Uncertainty on Deterministic Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2012-01-01

    Active runway scheduling involves scheduling departures for takeoff and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve (FCFS) scheme. In particular, the sequence from the deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison covers both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
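
    A hedged sketch of the FCFS baseline described above: operations keep their ready-time order, and each is delayed until a single fixed separation from the previous operation is met. Real schedulers use pairwise, aircraft-type-dependent separations; the times below are hypothetical.

      def fcfs_schedule(ready_times, separation=60.0):
          # First-come-first-serve runway schedule: keep the arrival order
          # and delay each operation until the required separation from
          # the previously scheduled one is met. Times in seconds.
          scheduled = []
          prev = float("-inf")
          for r in sorted(ready_times):
              t = max(r, prev + separation)
              scheduled.append(t)
              prev = t
          return scheduled

      ready = [0, 20, 45, 200, 210]          # hypothetical ready times
      sched = fcfs_schedule(ready)
      delays = [t - r for r, t in zip(sorted(ready), sched)]
      print("scheduled:", sched)             # [0, 60, 120, 200, 260]
      print("total delay:", sum(delays))     # a system-delay metric

    The deterministic scheduler in the paper instead optimizes the sequence; freezing that optimized sequence and re-timing it under uncertainty is what the study compares against this baseline.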

  2. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  3. Using Probabilistic Information in Solving Resource Allocation Problems for a Decentralized Firm

    DTIC Science & Technology

    1978-09-01

    deterministic equivalent form of HIQ's problem (5) by an approach similar to the one used in stochastic programming with simple recourse. See Ziemba [38] or, in… (1964). 38. Ziemba, W.T., "Stochastic Programs with Simple Recourse," Technical Report 72-15, Stanford University, Department of Operations Research

  4. Efficient room-temperature source of polarized single photons

    DOEpatents

    Lukishova, Svetlana G.; Boyd, Robert W.; Stroud, Carlos R.

    2007-08-07

    An efficient technique for producing deterministically polarized single photons uses liquid-crystal hosts, of either monomeric or oligomeric/polymeric form, to preferentially align the single emitters for maximum excitation efficiency. Deterministic molecular alignment also provides deterministically polarized output photons. Planar-aligned cholesteric liquid crystal hosts serve as 1-D photonic-band-gap microcavities, tunable to the emitter fluorescence band, to increase source efficiency, and liquid crystal technology is used to prevent emitter bleaching. Emitters comprise soluble dyes, inorganic nanocrystals, or trivalent rare-earth chelates.

  5. Improvements to Busquet's Non LTE algorithm in NRL's Hydro code

    NASA Astrophysics Data System (ADS)

    Klapisch, M.; Colombant, D.

    1996-11-01

    Implementation of the non-LTE model RADIOM (M. Busquet, Phys. Fluids B, 5, 4191 (1993)) in NRL's RAD2D hydro code in conservative form was reported previously (M. Klapisch et al., Bull. Am. Phys. Soc., 40, 1806 (1995)). While the results were satisfactory, the algorithm was slow and did not always converge. We describe here modifications that address the latter two shortcomings. The new method is quicker and more stable than the original. It also gives information about the validity of the fitting. It turns out that the number and distribution of groups in the multigroup diffusion opacity tables - a basis for the computation of radiation effects on the ionization balance in RADIOM - has a large influence on the robustness of the algorithm. These modifications give insight into the algorithm and allow one to check that the obtained average charge state is the true average. In addition, code optimization greatly reduced computing time: the ratio of non-LTE to LTE computing times is now between 1.5 and 2.

  6. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2011-08-23

    Methods, manufactures, machines, and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus includes a substrate and a nanoreplicant structure coupled to a surface of the substrate.

  7. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915

  8. Stochasticity and determinism in models of hematopoiesis.

    PubMed

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.

  9. On Transform Domain Communication Systems under Spectrum Sensing Mismatch: A Deterministic Analysis.

    PubMed

    Jin, Chuanxue; Hu, Su; Huang, Yixuan; Luo, Qu; Huang, Dan; Li, Yi; Gao, Yuan; Cheng, Shaochi

    2017-07-08

    Towards the era of the mobile Internet and the Internet of Things (IoT), numerous sensors and devices are being introduced and interconnected. To support such an amount of data traffic, traditional wireless communication technologies face challenges both in terms of the increasing shortage of spectrum resources and massive multiple access. The transform-domain communication system (TDCS) is considered an alternative multiple access system, aimed mainly at 5G and the mobile IoT. However, previous studies of TDCS assume that the transceiver has global spectrum information, without considering spectrum sensing mismatch (SSM). In this paper, we present a deterministic analysis of TDCS systems under arbitrary given spectrum sensing scenarios, in particular the influence of the SSM pattern on signal-to-noise ratio (SNR) performance. Simulation results show that an arbitrary SSM pattern can lead to inferior bit error rate (BER) performance.

  10. Tapered fiber coupling of single photons emitted by a deterministically positioned single nitrogen vacancy center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebermeister, Lars, E-mail: lars.liebermeister@physik.uni-muenchen.de; Petersen, Fabian; Münchow, Asmus v.

    2014-01-20

    A diamond nano-crystal hosting a single nitrogen vacancy (NV) center is optically selected with a confocal scanning microscope and positioned deterministically onto the subwavelength-diameter waist of a tapered optical fiber (TOF) with the help of an atomic force microscope. Based on this nano-manipulation technique, we experimentally demonstrate the evanescent coupling of single fluorescence photons emitted by a single NV-center to the guided mode of the TOF. By comparing photon count rates of the fiber-guided and the free-space modes and with the help of numerical finite-difference time domain simulations, we determine a lower and upper bound for the coupling efficiency of (9.5 ± 0.6)% and (10.4 ± 0.7)%, respectively. Our results are a promising starting point for future integration of single photon sources into photonic quantum networks and applications in quantum information science.

  11. On Transform Domain Communication Systems under Spectrum Sensing Mismatch: A Deterministic Analysis

    PubMed Central

    Jin, Chuanxue; Hu, Su; Huang, Yixuan; Luo, Qu; Huang, Dan; Li, Yi; Cheng, Shaochi

    2017-01-01

    Towards the era of the mobile Internet and the Internet of Things (IoT), numerous sensors and devices are being introduced and interconnected. To support such an amount of data traffic, traditional wireless communication technologies face challenges both in terms of the increasing shortage of spectrum resources and massive multiple access. The transform-domain communication system (TDCS) is considered an alternative multiple access system, aimed mainly at 5G and the mobile IoT. However, previous studies of TDCS assume that the transceiver has global spectrum information, without considering spectrum sensing mismatch (SSM). In this paper, we present a deterministic analysis of TDCS systems under arbitrary given spectrum sensing scenarios, in particular the influence of the SSM pattern on signal-to-noise ratio (SNR) performance. Simulation results show that an arbitrary SSM pattern can lead to inferior bit error rate (BER) performance. PMID:28698477

  12. Encoded physics knowledge in checking codes for nuclear cross section libraries at Los Alamos

    NASA Astrophysics Data System (ADS)

    Parsons, D. Kent

    2017-09-01

    Checking procedures for processed nuclear data at Los Alamos are described. Both continuous-energy and multigroup nuclear data are verified by locally developed checking codes, which use basic physics knowledge and common-sense rules. A list of nuclear data problems that have been identified with the help of these checking codes is also given.

  13. Validation of the Social and Emotional Health Survey for Five Sociocultural Groups: Multigroup Invariance and Latent Mean Analyses

    ERIC Educational Resources Information Center

    You, Sukkyung; Furlong, Michael; Felix, Erika; O'Malley, Meagan

    2015-01-01

    Social-emotional health influences youth developmental trajectories and there is growing interest among educators to measure the social-emotional health of the students they serve. This study replicated the psychometric characteristics of the Social Emotional Health Survey (SEHS) with a diverse sample of high school students (Grades 9-12; N =…

  14. Reactor Statics Module, RS-9: Multigroup Diffusion Program Using an Exponential Acceleration Technique.

    ERIC Educational Resources Information Center

    Macek, Victor C.

    The nine Reactor Statics Modules are designed to introduce students to the use of numerical methods and digital computers for calculation of neutron flux distributions in space and energy which are needed to calculate criticality, power distribution, and fuel burnup for both slow neutron and fast neutron fission reactors. The last module, RS-9,…

  15. Motivation and Engagement in the "Asian Century": A Comparison of Chinese Students in Australia, Hong Kong, and Mainland China

    ERIC Educational Resources Information Center

    Martin, A. J.; Yu, Kai; Hau, Kit-Tai

    2014-01-01

    The present study investigated multidimensional motivation and engagement among Chinese middle school students in Australia (N = 273), Hong Kong (N = 528), and Mainland China (N = 2106; randomly selected N = 528). Findings showed that a multidimensional model of motivation and engagement fit very well for all three groups. Multi-group invariance…

  16. Item Response Theory with Covariates (IRT-C): Assessing Item Recovery and Differential Item Functioning for the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Tay, Louis; Huang, Qiming; Vermunt, Jeroen K.

    2016-01-01

    In large-scale testing, the use of multigroup approaches is limited for assessing differential item functioning (DIF) across multiple variables as DIF is examined for each variable separately. In contrast, the item response theory with covariate (IRT-C) procedure can be used to examine DIF across multiple variables (covariates) simultaneously. To…

  17. Multigroup Confirmatory Factor Analysis of U.S. and Italian Children's Performance on the PASS Theory of Intelligence as Measured by the Cognitive Assessment System

    ERIC Educational Resources Information Center

    Naglieri, Jack A.; Taddei, Stefano; Williams, Kevin M.

    2013-01-01

    This study examined Italian and U.S. children's performance on the English and Italian versions, respectively, of the Cognitive Assessment System (CAS; Naglieri & Conway, 2009; Naglieri & Das, 1997), a test based on a neurocognitive theory of intelligence entitled PASS (Planning, Attention, Simultaneous, and Successive; Naglieri & Das,…

  18. Explaining the Intention to Use Technology among Pre-Service Teachers: A Multi-Group Analysis of the Unified Theory of Acceptance and Use of Technology

    ERIC Educational Resources Information Center

    Teo, Timothy; Noyes, Jan

    2014-01-01

    Pre-service teachers' self-reported intentions to use information technology were studied. Two hundred and sixty-four participants completed a survey questionnaire measuring their responses to four constructs (performance expectancy, effort expectancy, social influence and facilitating conditions) derived from the Unified Theory of Acceptance and…

  19. Multidimensional Self-Concept Structure for Preadolescents with Mild Intellectual Disabilities: A Hybrid Multigroup-MIMC Approach to Factorial Invariance and Latent Mean Differences

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Tracey, Danielle K.; Craven, Rhonda G.

    2006-01-01

    Confirmatory factor analysis of responses by 211 preadolescents (M age = 10.25 years, SD = 1.48) with mild intellectual disabilities (MIDs) to the individually administered Self Description Questionnaire I-Individual Administration (SDQI-IA) counters widely cited claims that these children cannot differentiate multiple self-concept factors. Results…

  20. Shielding analyses: the rabbit vs the turtle?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broadhead, B.L.

    1996-12-31

    This paper compares solutions using Monte Carlo and discrete-ordinates methods applied to two actual shielding situations in order to make some general observations concerning the efficiency and advantages/disadvantages of the two approaches. The discrete-ordinates solutions are performed using two-dimensional geometries, while the Monte Carlo approaches utilize three-dimensional geometries with both multigroup and point cross-section data.

  1. Parenting and Neighborhood Predictors of Youth Problem Behaviors within Hispanic Families: The Moderating Role of Family Structure

    ERIC Educational Resources Information Center

    Gayles, Jochebed G.; Coatsworth, J. Douglas; Pantin, Hilda M.; Szapocznik, Jose

    2009-01-01

    This study examined the influence of family and neighborhood contexts on Hispanic youth problem behavior. The effects of parents' perceptions of neighborhood context and parenting practices on problem behavior were examined in 167 one-parent (n = 75) and two-parent (n = 92) families. Results from multigroup path analyses showed significant main…

  2. The Structural and Predictive Properties of the Psychopathy Checklist-Revised in Canadian Aboriginal and Non-Aboriginal Offenders

    ERIC Educational Resources Information Center

    Olver, Mark E.; Neumann, Craig S.; Wong, Stephen C. P.; Hare, Robert D.

    2013-01-01

    We examined the structural and predictive properties of the Psychopathy Checklist-Revised (PCL-R) in large samples of Canadian male Aboriginal and non-Aboriginal offenders. The PCL-R ratings were part of a risk assessment for criminal recidivism, with a mean follow-up of 26 months postrelease. Using multigroup confirmatory factor analysis, we were…

  3. Towards a Four-Dimensional Model of Burnout: A Multigroup Factor-Analytic Study Including Depersonalization and Cynicism

    ERIC Educational Resources Information Center

    Salanova, Marisa; Llorens, Susana; Garcia-Renedo, Monica; Burriel, Raul; Breso, Edgar; Schaufeli, Wilmar B.

    2005-01-01

    This article investigated whether cynicism and depersonalization are two different dimensions of burnout or whether they may be collapsed into one construct of mental distance. Using confirmatory factor analyses in two samples of teachers (n = 483) and blue-collar workers (n = 474), a superior fit was found for the four-factor model that contained…

  4. Gender Differences in the Developmental Cascade from Harsh Parenting to Educational Attainment: An Evolutionary Perspective

    ERIC Educational Resources Information Center

    Hentges, Rochelle F.; Wang, Ming-Te

    2018-01-01

    This study utilized life history theory to test a developmental cascade model linking harsh parenting to low educational attainment. Multigroup models were examined to test for potential gender differences. The sample consisted of 1,482 adolescents followed up for 9 years starting in seventh grade (M age = 12.74). Results supported…

  5. Predicting Achievement: Confidence vs Self-Efficacy, Anxiety, and Self-Concept in Confucian and European Countries

    ERIC Educational Resources Information Center

    Morony, Suzanne; Kleitman, Sabina; Lee, Yim Ping; Stankov, Lazar

    2013-01-01

    This study investigates the structure and cross-cultural (in)variance of mathematical self-beliefs in relation to mathematics achievement in two world regions: Confucian Asia (Singapore, South Korea, Hong Kong and Taiwan) and Europe (Denmark, The Netherlands, Finland, Serbia and Latvia). This is done both pan-culturally and at a multigroup-level,…

  6. Ethnic Identity, Academic Achievement, and Global Self-Concept in Four Groups of Academically Talented Adolescents

    ERIC Educational Resources Information Center

    Worrell, Frank C.

    2007-01-01

    In this study, academically talented African American (n = 28), Asian American (n = 171), Hispanic (n = 28), and White (n = 92) middle and high school students are compared on ethnic identity (EI) and other group orientation (OGO) attitudes as measured by the Multigroup Ethnic Identity Measure. The contributions of these variables to self-esteem…

  7. The Affect and Arousal Scales: Psychometric Properties of the Dutch Version and Multigroup Confirmatory Factor Analyses

    ERIC Educational Resources Information Center

    De Bolle, Marleen; De Fruyt, Filip; Decuyper, Mieke

    2010-01-01

    Psychometric properties of the Dutch version of the Affect and Arousal Scales (AFARS) were inspected in a combined clinical and population sample (N = 1,215). The validity of the tripartite structure and the relations between Negative Affect, Positive Affect, and Physiological Hyperarousal (PH) were investigated for boys and girls, younger (8-11…

  8. Knowledge Transfer or Social Competence? A Comparison of German and Canadian Adolescent Students on Their Socio-Motivational Relationships in School

    ERIC Educational Resources Information Center

    Hoferichter, Frances; Raufelder, Diana; Eid, Michael; Bukowski, William M.

    2014-01-01

    This cross-national study investigates the perceived impact of students' relationships with teachers and peers on scholastic motivation in a total sample of 1477 seventh- and eighth-grade German (N = 1088) and Canadian (N = 389) secondary school students. By applying Multigroup Confirmatory Latent Class Analysis in Mplus we confirmed…

  9. Achievement Goal Questionnaire: Psychometric Properties and Gender Invariance in a Sample of Chinese University Students

    ERIC Educational Resources Information Center

    Xiao, Jing; Bai, Yu; He, Yini; McWhinnie, Chad M.; Ling, Yu; Smith, Hannah; Huebner, E. Scott

    2016-01-01

    The aim of this study was to test the gender invariance of the Chinese version of the Achievement Goal Questionnaire (AGQ-C) utilizing a sample of 1,115 Chinese university students. Multi-group confirmatory factor analysis supported the configural, metric, and scalar invariance of the AGQ-C across genders. Analyses also revealed that the latent…

  10. A Cross-Cultural Test of the Work-Family Interface in 48 Countries

    ERIC Educational Resources Information Center

    Jeffrey Hill, E.; Yang, Chongming; Hawkins, Alan J.; Ferris, Maria

    2004-01-01

    This study tests a cross-cultural model of the work-family interface. Using multigroup structural equation modeling with IBM survey responses from 48 countries (N = 25,380), results show that the same work-family interface model that fits the data globally also fits the data in a four-group model composed of culturally related groups of countries,…

  11. A Multigroup, Longitudinal Study of Truant Youths, Marijuana Use, Depression, and STD-Associated Sexual Risk Behavior

    ERIC Educational Resources Information Center

    Dembo, Richard; M. Krupa, Julie; Wareham, Jennifer; Schmeidler, James; DiClemente, Ralph J.

    2017-01-01

    Truant youths are likely to engage in a number of problem behaviors, including sexual risky behaviors. Previous research involving non-truant youths has found sexual risk behaviors to be related to marijuana use and depression, with differential effects for male and female youths. Using data collected in a National Institute on Drug Abuse…

  12. Trends in Racial and Ethnic Disparities in Infant Mortality Rates in the United States, 1989–2006

    PubMed Central

    Rossen, Lauren M.; Schoendorf, Kenneth C.

    2014-01-01

    Objectives. We sought to measure overall disparities in pregnancy outcome, incorporating data from the many race and ethnic groups that compose the US population, to improve understanding of how disparities may have changed over time. Methods. We used Birth Cohort Linked Birth–Infant Death Data Files from US Vital Statistics from 1989–1990 and 2005–2006 to examine multigroup indices of racial and ethnic disparities in the overall infant mortality rate (IMR), preterm birth rate, and gestational age–specific IMRs. We calculated selected absolute and relative multigroup disparity metrics weighting subgroups equally and by population size. Results. Overall IMR decreased on the absolute scale, but increased on the population-weighted relative scale. Disparities in the preterm birth rate decreased on both the absolute and relative scales, and across equally weighted and population-weighted indices. Disparities in preterm IMR increased on both the absolute and relative scales. Conclusions. Infant mortality is a common bellwether of general and maternal and child health. Despite significant decreases in disparities in the preterm birth rate, relative disparities in overall and preterm IMRs increased significantly over the past 20 years. PMID:24028239
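
    Multigroup disparity indices of this kind summarize how far group-specific rates sit from the overall rate, weighting groups either equally or by population share. The helper below is an illustrative absolute/relative index pair in that spirit, not the exact metrics of the study; all rates and shares are invented.

      def multigroup_disparity(rates, weights=None):
          # rates   : per-group infant mortality rates (per 1,000 births)
          # weights : population shares; equal weighting if omitted.
          n = len(rates)
          w = weights if weights is not None else [1.0 / n] * n
          ref = sum(wi * ri for wi, ri in zip(w, rates))   # overall rate
          absolute = sum(wi * abs(ri - ref) for wi, ri in zip(w, rates))
          relative = absolute / ref
          return ref, absolute, relative

      # Hypothetical IMRs per 1,000 live births for four groups:
      rates = [4.5, 5.6, 11.3, 8.1]
      shares = [0.55, 0.20, 0.15, 0.10]
      print(multigroup_disparity(rates))          # groups weighted equally
      print(multigroup_disparity(rates, shares))  # population-weighted

    Because the absolute and relative versions can move in opposite directions as the overall rate falls, conclusions like those above depend on which scale and weighting is chosen.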

  13. An Improved Neutron Transport Algorithm for Space Radiation

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Clowdsley, Martha S.; Wilson, John W.

    2000-01-01

    A low-energy neutron transport algorithm for use in space radiation protection is developed. The algorithm is based upon a multigroup analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. This analysis is accomplished by solving a realistic but simplified neutron transport test problem. The test problem is analyzed by using numerical and analytical procedures to obtain an accurate solution within specified error bounds. Results from the test problem are then used for determining mean values associated with rescattering terms that are associated with a multigroup solution of the straight-ahead Boltzmann equation. The algorithm is then coupled to the Langley HZETRN code through the evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for a water and an aluminum-water shield-target configuration is then compared with LAHET and MCNPX Monte Carlo code calculations for the same shield-target configuration. The algorithm developed showed a great improvement in results over the unmodified HZETRN solution. In addition, a two-directional solution of the evaporation source showed even further improvement of the fluence near the front of the water target where diffusion from the front surface is important.
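
    In the straight-ahead, downscatter-only multigroup picture, each group fluence obeys dφ_g/dx = −Σ_t,g φ_g + Σ over g'<g of Σ_s,g'→g φ_g', which can be marched in depth. The sketch below illustrates only that structure, not the HZETRN algorithm or its mean-value rescattering terms; all cross sections are placeholders.

      import numpy as np

      # Straight-ahead, downscatter-only multigroup attenuation through a
      # slab, marched in depth with forward Euler. Cross sections (1/cm)
      # are illustrative placeholders, not evaluated nuclear data.
      G, nx, dx = 3, 4000, 0.01                 # groups, steps, step (cm)
      Sig_t = np.array([0.20, 0.35, 0.60])      # total cross sections
      Sig_s = np.zeros((G, G))                  # Sig_s[g_from, g_to]
      Sig_s[0, 1], Sig_s[0, 2], Sig_s[1, 2] = 0.05, 0.01, 0.08

      phi = np.array([1.0, 0.0, 0.0])           # unit source in group 1
      for _ in range(nx):
          scatter_in = phi @ Sig_s              # sum_g' phi_g' Sig_{g'->g}
          phi = phi + dx * (-Sig_t * phi + scatter_in)
      print("fluence spectrum at 40 cm:", phi)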

  14. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data is now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation which is distributed with the code, to assist in developing the input for the Input Processor.
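
    Multigroup diffusion codes of this kind solve an eigenvalue problem by fission-source (power) iteration. The sketch below shows the idea for a one-dimensional, two-group slab with zero-flux boundaries, iterating entirely in memory in the spirit of the in-core option above; it is not VENTURE's algorithm, and every cross section is an invented illustrative number.

      import numpy as np

      N, L = 50, 100.0                 # mesh cells, slab width (cm)
      h = L / (N + 1)
      D = [1.4, 0.4]                   # diffusion coefficients
      Sig_r = [0.025, 0.10]            # removal cross sections
      Sig_s12 = 0.018                  # group 1 -> 2 downscatter
      nuSig_f = [0.008, 0.15]          # nu * fission cross sections
      chi = [1.0, 0.0]                 # all fission neutrons born fast

      def diffusion_matrix(Dg, Sr):
          # Finite-difference operator -D d2/dx2 + Sigma_r on the mesh.
          main = np.full(N, 2.0 * Dg / h**2 + Sr)
          off = np.full(N - 1, -Dg / h**2)
          return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

      A1 = diffusion_matrix(D[0], Sig_r[0])
      A2 = diffusion_matrix(D[1], Sig_r[1])

      phi1 = np.ones(N); phi2 = np.ones(N); k = 1.0
      for _ in range(200):             # power (fission-source) iteration
          S = nuSig_f[0] * phi1 + nuSig_f[1] * phi2
          phi1 = np.linalg.solve(A1, chi[0] * S / k)
          phi2 = np.linalg.solve(A2, Sig_s12 * phi1)   # slowing-down source
          S_new = nuSig_f[0] * phi1 + nuSig_f[1] * phi2
          k *= S_new.sum() / S.sum()
      print(f"k_eff ~ {k:.4f}")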

  15. Validation of the European Cyberbullying Intervention Project Questionnaire for Colombian Adolescents.

    PubMed

    Herrera-López, Mauricio; Casas, José A; Romera, Eva M; Ortega-Ruiz, Rosario; Del Rey, Rosario

    2017-02-01

    Cyberbullying is the act of using unjustified aggression to harm or harass via digital devices. Currently regarded as a widespread problem, the phenomenon has attracted growing research interest in different measures of cyberbullying and the similarities and differences across countries and cultures. This article presents the Colombian validation of the European Cyberbullying Intervention Project Questionnaire (ECIPQ) involving 3,830 high school students (M = 13.9 years old, standard deviation = 1.61; 48.9 percent male), of which 1,931 were Colombian and 1,899 Spanish. Confirmatory factor analysis (CFA), content validation, and multigroup analysis were performed with each of the sample subgroups. The optimal fits and psychometric properties obtained confirm the robustness and suitability of the assessment instrument to jointly measure cyber-aggression and cyber-victimization. The results corroborated the theoretical construct and the two-dimensional and universal nature of cyberbullying. The multigroup analysis showed that cyberbullying dynamics are similar in both countries. The comparative analyses of prevalence revealed that Colombian students are less involved in cyberbullying. The results indicate the suitability of the instrument and the advantages of using such a tool to evaluate and guide psychoeducational interventions aimed at preventing cyberbullying in countries where few studies have been performed.

  16. Testing of the ABBN-RF multigroup data library in photon transport calculations

    NASA Astrophysics Data System (ADS)

    Koscheev, Vladimir; Lomakov, Gleb; Manturov, Gennady; Tsiboulia, Anatoly

    2017-09-01

    Gamma radiation is produced in both nuclear fuel and shielding materials. Photon interactions are known with appropriate accuracy, but secondary gamma ray production is known much less well. The purpose of this work is to study secondary gamma ray production data from neutron-induced reactions in iron and lead by using the MCNP code and modern nuclear data libraries such as ROSFOND, ENDF/B-7.1, JEFF-3.2, and JENDL-4.0. Results of calculations show that all of these nuclear data libraries have different photon production data for neutron-induced reactions and agree poorly with the evaluated benchmark experiment. The ABBN-RF multigroup cross-section library is based on the ROSFOND data. It is presented in two forms of micro cross sections: ABBN and MATXS formats. Comparison of group-wise calculations using both ABBN and MATXS data to point-wise calculations with the ROSFOND library shows good agreement. The discrepancies between calculated and experimental (C/E) results for neutron spectra are within the experimental errors; for the photon spectrum they are outside the experimental errors. Results of calculations using group-wise and point-wise representations of cross sections show good agreement for both photon and neutron spectra.

  17. Development of ENDF/B-IV multigroup neutron cross-section libraries for the LEOPARD and LASER codes. Technical report on Phase 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenquin, U.P.; Stewart, K.B.; Heeb, C.M.

    1975-07-01

    The principal aim of this neutron cross-section research is to provide the utility industry with a 'standard nuclear data base' that will perform satisfactorily when used for analysis of thermal power reactor systems. EPRI is coordinating its activities with those of the Cross Section Evaluation Working Group (CSEWG), responsible for the development of the Evaluated Nuclear Data File-B (ENDF/B) library, in order to improve the performance of the ENDF/B library in thermal reactors and other applications of interest to the utility industry. Battelle-Northwest (BNW) was commissioned to process the ENDF/B Version-4 data files into a group-constant form for use in the LASER and LEOPARD neutronics codes. Performance information on the library should provide the necessary feedback for improving the next version of the library, and a consistent data base is expected to be useful in intercomparing the versions of the LASER and LEOPARD codes presently being used by different utility groups. This report describes the BNW multi-group libraries and the procedures followed in their preparation and testing. (GRA)

  18. Assessing the Implicit Theory of Willpower for Strenuous Mental Activities Scale: Multigroup, across-gender, and cross-cultural measurement invariance and convergent and divergent validity.

    PubMed

    Napolitano, Christopher M; Job, Veronika

    2018-05-21

    Why do some people struggle with self-control (colloquially called willpower) whereas others are able to sustain it during challenging circumstances? Recent research showed that a person's implicit theories of willpower (whether they think self-control capacity is a limited or nonlimited resource) predict sustained self-control on laboratory tasks and on goal-related outcomes in everyday life. The present research tests the Implicit Theory of Willpower for Strenuous Mental Activities (ITW-M) Scale for measurement invariance across samples and gender within each culture, and across two cultural contexts (the U.S. and Switzerland/Germany). Across a series of multigroup confirmatory factor analyses, we found support for the measurement invariance of the ITW-M scale across samples within and across the two cultures, as well as across men and women. Further, the analyses showed expected patterns of convergent (with life satisfaction and trait self-control) and discriminant validity (with implicit theory of intelligence). These results provide guidelines for future research and clinical practice using the ITW-M scale for the investigation of latent group differences, for example, between genders or cultures.

  19. Using kaizen to improve employee well-being: Results from two organizational intervention studies.

    PubMed

    von Thiele Schwarz, Ulrica; Nielsen, Karina M; Stenfors-Hayes, Terese; Hasson, Henna

    2017-08-01

    Participatory intervention approaches that are embedded in existing organizational structures may improve the efficiency and effectiveness of organizational interventions, but concrete tools are lacking. In the present article, we use a realist evaluation approach to explore the role of kaizen, a lean tool for participatory continuous improvement, in improving employee well-being in two cluster-randomized, controlled participatory intervention studies. Case 1 is from the Danish Postal Service, where kaizen boards were used to implement action plans. The results of multi-group structural equation modeling showed that kaizen served as a mechanism that increased the level of awareness of and capacity to manage psychosocial issues, which, in turn, predicted increased job satisfaction and mental health. Case 2 is from a regional hospital in Sweden that integrated occupational health processes with a pre-existing kaizen system. Multi-group structural equation modeling revealed that, in the intervention group, kaizen work predicted better integration of organizational and employee objectives after 12 months, which, in turn, predicted increased job satisfaction and decreased discomfort at 24 months. The findings suggest that participatory and structured problem-solving approaches that are familiar and visual to employees can facilitate organizational interventions.

  20. Coupled neutron--gamma multigroup--multitable cross sections for 29 materials pertinent to nuclear weapons effect calculations generated by LASL/TD Division

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sandmeier, H.A.; Hansen, G.E.; Seamon, R.E.

    This report lists 42-group, coupled, neutron-gamma cross sections for H, D, T, ³He, ⁴He, ⁶Li, ⁷Li, Be, ¹⁰B, ¹¹B, C, N, O, Na, Mg, Al, Si, Cl, A, K, Ca, Fe, Cu, W, Pb, ²³⁵U, ²³⁸U, ²³⁹Pu, and ²⁴⁰Pu. Most of these materials are used in nuclear-weapons-effects calculations, where the elements for air, ground, and sea water are needed. Further, lists are given of cross sections for materials used in nuclear weapons vulnerability calculations, such as the elements of high explosives, as well as materials that will undergo fusion and fission. Most of the common reactor materials are also listed. The 42 coupled neutron-gamma groups are split into 30 neutron groups (17 MeV through 1.39 × 10⁻⁴ eV) and 12 gamma groups (10 MeV through 0.01 MeV). Data sources and averaging schemes used for the development of these multigroup parameters are given. (119 tables) (auth)

  1. Gender and Acceptance of E-Learning: A Multi-Group Analysis Based on a Structural Equation Model among College Students in Chile and Spain.

    PubMed

    Ramírez-Correa, Patricio E; Arenas-Gaitán, Jorge; Rondán-Cataluña, F Javier

    2015-01-01

    The scope of this study was to evaluate whether the adoption of e-learning in two universities, and in particular the relationship between the perception of external control and perceived ease of use, differs by gender. The study was carried out with participating students in two different universities, one in Chile and one in Spain. The Technology Acceptance Model was used as the theoretical framework for the study. A multi-group analysis method in partial least squares was employed to relate differences between groups. The four main conclusions of the study are: (1) a version of the Technology Acceptance Model has been successfully used to explain the process of adoption of e-learning at the undergraduate level of study; (2) a strong and significant relationship is found between perception of external control and perceived ease of use of the e-learning platform; (3) a significant relationship is found between perceived enjoyment and perceived ease of use, and between results demonstrability and perceived usefulness; (4) the study indicates a few statistically significant differences between males and females when adopting an e-learning platform, according to the tested model.

  2. Using kaizen to improve employee well-being: Results from two organizational intervention studies

    PubMed Central

    von Thiele Schwarz, Ulrica; Nielsen, Karina M; Stenfors-Hayes, Terese; Hasson, Henna

    2016-01-01

    Participatory intervention approaches that are embedded in existing organizational structures may improve the efficiency and effectiveness of organizational interventions, but concrete tools are lacking. In the present article, we use a realist evaluation approach to explore the role of kaizen, a lean tool for participatory continuous improvement, in improving employee well-being in two cluster-randomized, controlled participatory intervention studies. Case 1 is from the Danish Postal Service, where kaizen boards were used to implement action plans. The results of multi-group structural equation modeling showed that kaizen served as a mechanism that increased the level of awareness of and capacity to manage psychosocial issues, which, in turn, predicted increased job satisfaction and mental health. Case 2 is from a regional hospital in Sweden that integrated occupational health processes with a pre-existing kaizen system. Multi-group structural equation modeling revealed that, in the intervention group, kaizen work predicted better integration of organizational and employee objectives after 12 months, which, in turn, predicted increased job satisfaction and decreased discomfort at 24 months. The findings suggest that participatory and structured problem-solving approaches that are familiar and visual to employees can facilitate organizational interventions. PMID:28736455

  3. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    PubMed

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

    In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, which cannot always be explored with wet-lab experiments. To serve their purpose, computational models should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented with efficient algorithms requiring the shortest possible execution time, to avoid excessively lengthening the time between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also lies in their dynamics, which are often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundances fluctuate by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new, efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast, moderate, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is greater than the fast-reaction waiting time but smaller than the slow-reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crossed the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and that of the software tool.
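
    To make the partitioning idea concrete, here is a minimal Python sketch of one hybrid step: fast reactions advance deterministically while slow reactions fire via the First Reaction Method. The toy rates, the simple Euler integrator, and the omission of the moderate class and its hysteresis switching are all simplifying assumptions; this is not the MoBioS implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def first_reaction_step(x, slow_reactions):
        # Gillespie First Reaction Method over the slow reactions only.
        taus = [rng.exponential(1.0 / a(x)) if a(x) > 0 else np.inf
                for a, _ in slow_reactions]
        j = int(np.argmin(taus))
        return taus[j], slow_reactions[j][1]   # waiting time, state change

    def hybrid_step(x, dt, fast_rates, slow_reactions):
        x = x + dt * fast_rates(x)             # fast: deterministic rate equations
        tau, change = first_reaction_step(x, slow_reactions)
        if tau < dt:                           # slow: reaction fires within this step
            x = x + change
        return x

    # Toy system: fast first-order decay plus a rare transcription burst.
    fast = lambda x: np.array([-0.5 * x[0]])
    slow = [(lambda x: 0.1, np.array([25.0]))]
    x = np.array([40.0])
    for _ in range(100):
        x = hybrid_step(x, 0.1, fast, slow)
    print(x)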

  4. Pro Free Will Priming Enhances “Risk-Taking” Behavior in the Iowa Gambling Task, but Not in the Balloon Analogue Risk Task: Two Independent Priming Studies

    PubMed Central

    Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine

    2016-01-01

    Studies indicated that people behave less responsibly after exposure to information containing deterministic statements as compared to free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach, as well as attention to gains and/or inattention to losses. In addition, we highlight the necessity to consider both pro free will and deterministic priming conditions in future studies. Importantly, our and previous results indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum. PMID:27018854

  5. Pro Free Will Priming Enhances "Risk-Taking" Behavior in the Iowa Gambling Task, but Not in the Balloon Analogue Risk Task: Two Independent Priming Studies.

    PubMed

    Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine

    2016-01-01

    Studies indicated that people behave less responsibly after exposure to information containing deterministic statements as compared to free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach, as well as attention to gains and/or inattention to losses. In addition, we highlight the necessity to consider both pro free will and deterministic priming conditions in future studies. Importantly, our and previous results indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum.

  6. Identification of gene regulation models from single-cell data

    NASA Astrophysics Data System (ADS)

    Weber, Lisa; Raymond, William; Munsky, Brian

    2018-09-01

    In quantitative analyses of biological processes, one may use many different scales of models (e.g. spatial or non-spatial, deterministic or stochastic, time-varying or at steady-state) or many different approaches to match models to experimental data (e.g. model fitting or parameter uncertainty/sloppiness quantification with different experiment designs). These different analyses can lead to surprisingly different results, even when applied to the same data and the same model. We use a simplified gene regulation model to illustrate many of these concerns, especially for ODE analyses of deterministic processes, chemical master equation and finite state projection analyses of heterogeneous processes, and stochastic simulations. For each analysis, we employ MATLAB and PYTHON software to consider a time-dependent input signal (e.g. a kinase nuclear translocation) and several model hypotheses, along with simulated single-cell data. We illustrate different approaches (e.g. deterministic and stochastic) to identify the mechanisms and parameters of the same model from the same simulated data. For each approach, we explore how uncertainty in parameter space varies with respect to the chosen analysis approach or specific experiment design. We conclude with a discussion of how our simulated results relate to the integration of experimental and computational investigations to explore signal-activated gene expression models in yeast (Neuert et al 2013 Science 339 584–7) and human cells (Senecal et al 2014 Cell Rep. 8 75–83).
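
    As a concrete illustration of one of the analyses named above, the following Python sketch builds a finite state projection (FSP) of the chemical master equation for a one-species birth-death gene expression model. The rates, truncation size, and time point are invented for illustration; this is not the authors' code.

    import numpy as np
    from scipy.linalg import expm

    k, g, N = 5.0, 1.0, 60            # production, degradation, truncation size
    A = np.zeros((N + 1, N + 1))      # CME generator over states n = 0..N
    for n in range(N + 1):
        A[n, n] -= k                  # birth propensity leaves state n...
        if n < N:
            A[n + 1, n] += k          # ...and lands in n+1 while inside the projection
        if n > 0:
            A[n - 1, n] += g * n      # degradation n -> n-1
            A[n, n] -= g * n

    p0 = np.zeros(N + 1); p0[0] = 1.0
    p_t = expm(2.0 * A) @ p0          # probability vector at t = 2
    # Probability leaked past the truncation bounds the FSP error.
    print("FSP truncation error bound:", 1.0 - p_t.sum())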

  7. Panel summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutjahr, A.L.; Kincaid, C.T.; Mercer, J.W.

    1987-04-01

    The objective of this report is to summarize the various modeling approaches that were used to simulate solute transport in a variably saturated medium. In particular, the technical strengths and weaknesses of each approach are discussed, and conclusions and recommendations for future studies are made. Five models are considered: (1) one-dimensional analytical and semianalytical solutions of the classical deterministic convection-dispersion equation (van Genuchten, Parker, and Kool, this report); (2) one-dimensional simulation using a continuous-time Markov process (Knighton and Wagenet, this report); (3) one-dimensional simulation using the time domain method and the frequency domain method (Duffy and Al-Hassan, this report); (4) a one-dimensional numerical approach that combines a solution of the classical deterministic convection-dispersion equation with a chemical equilibrium speciation model (Cederberg, this report); and (5) a three-dimensional numerical solution of the classical deterministic convection-dispersion equation (Huyakorn, Jones, Parker, Wadsworth, and White, this report). As part of the discussion, the input data and modeling results are summarized. The models were used in a data analysis mode, as opposed to a predictive mode. Thus, the following discussion will concentrate on the data analysis aspects of model use. Also, all the approaches were similar in that they were based on a convection-dispersion model of solute transport. Each discussion addresses the modeling approaches in the order listed above.
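
    For reference, the classical one-dimensional convection-dispersion equation on which these approaches rest can be written in a generic form (with C the solute concentration, D the dispersion coefficient, and v the pore-water velocity; retardation and reaction terms are omitted here) as

        \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2} - v \frac{\partial C}{\partial x}.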

  8. Fumonisin B1 Toxicity in Grower-Finisher Pigs: A Comparative Analysis of Genetically Engineered Bt Corn and non-Bt Corn by Using Quantitative Dietary Exposure Assessment Modeling

    PubMed Central

    Delgado, James E.; Wolt, Jeffrey D.

    2011-01-01

    In this study, we investigate the long-term exposure (20 weeks) to fumonisin B1 (FB1) in grower-finisher pigs by conducting a quantitative exposure assessment (QEA). Our analytical approach involved both deterministic and semi-stochastic modeling for dietary comparative analyses of FB1 exposures originating from genetically engineered Bacillus thuringiensis (Bt) corn, conventional non-Bt corn, and distiller’s dried grains with solubles (DDGS) derived from Bt and/or non-Bt corn. Results from both deterministic and semi-stochastic modeling demonstrated a distinct difference in FB1 toxicity in feed between Bt corn and non-Bt corn. Semi-stochastic results predicted the lowest FB1 exposure for Bt grain, with a mean of 1.5 mg FB1/kg diet, and the highest FB1 exposure for a diet consisting of non-Bt grain and non-Bt DDGS, with a mean of 7.87 mg FB1/kg diet; the chronic toxicological incipient level of concern is 1.0 mg of FB1/kg of diet. Deterministic results closely mirrored but tended to slightly underpredict the mean result of the semi-stochastic analysis. This novel comparative QEA model reveals that diet scenarios where the source of grain is derived from Bt corn present less potential to induce FB1 toxicity than diets containing non-Bt corn. PMID:21909298

  9. Structural invariance of multiple intelligences, based on the level of execution.

    PubMed

    Almeida, Leandro S; Prieto, María Dolores; Ferreira, Arístides; Ferrando, Mercedes; Ferrandiz, Carmen; Bermejo, Rosario; Hernández, Daniel

    2011-11-01

    The independence of multiple intelligences (MI) in Gardner's theory has been debated since its conception. This article examines whether the one-factor structure of the MI theory tested in previous studies is invariant for low- and high-ability students. Two hundred ninety-four children (aged 5 to 7) participated in this study. A set of Gardner's Multiple Intelligence assessment tasks based on the Spectrum Project was used. To analyze the invariance of a general dimension of intelligence, the different models of behaviour were studied in samples of participants with different performance on the Spectrum Project tasks, using Multi-Group Confirmatory Factor Analysis (MGCFA). Results suggest an absence of structural invariance in Gardner's tasks. Exploratory analyses suggest a three-factor structure for individuals with higher performance levels and a two-factor structure for individuals with lower performance levels.

  10. Ion implantation for deterministic single atom devices

    NASA Astrophysics Data System (ADS)

    Pacheco, J. L.; Singh, M.; Perry, D. L.; Wendt, J. R.; Ten Eyck, G.; Manginell, R. P.; Pluym, T.; Luhman, D. R.; Lilly, M. P.; Carroll, M. S.; Bielejec, E.

    2017-12-01

    We demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.

  11. Counterfactual Quantum Deterministic Key Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng; Wang, Jian; Tang, Chao-Jing

    2013-01-01

    We propose a new counterfactual quantum cryptography protocol for distributing a deterministic key. By adding a controlled blocking operation module to the original protocol [T.G. Noh, Phys. Rev. Lett. 103 (2009) 230501], the correlation between the polarizations of the two parties, Alice and Bob, is extended; therefore, one can distribute both deterministic keys and random ones using our protocol. We also give a simple proof of the security of our protocol using the technique we previously applied to the original protocol. Most importantly, our analysis produces a bound tighter than the existing ones.

  12. Ion implantation for deterministic single atom devices

    DOE PAGES

    Pacheco, J. L.; Singh, M.; Perry, D. L.; ...

    2017-12-04

    Here, we demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.

  13. Deterministic quantum splitter based on time-reversed Hong-Ou-Mandel interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jun; Lee, Kim Fook; Kumar, Prem

    2007-09-15

    By utilizing a fiber-based indistinguishable photon-pair source in the 1.55 μm telecommunications band [J. Chen et al., Opt. Lett. 31, 2798 (2006)], we present the first, to the best of our knowledge, deterministic quantum splitter based on the principle of time-reversed Hong-Ou-Mandel quantum interference. The deterministically separated identical photons' indistinguishability is then verified by using a conventional Hong-Ou-Mandel quantum interference, which exhibits a near-unity dip visibility of 94 ± 1%, making this quantum splitter useful for various quantum information processing applications.

  14. Changing contributions of stochastic and deterministic processes in community assembly over a successional gradient.

    PubMed

    Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis

    2018-01-01

    Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more niche-driven dynamics in later successional stages. Grazing reduces predictability in both successional trends and species-level dynamics, especially in plant functional groups that are not well adapted to disturbance. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.

  15. Wind power application research on the fusion of the determination and ensemble prediction

    NASA Astrophysics Data System (ADS)

    Lan, Shi; Lina, Xu; Yuzhu, Hao

    2017-07-01

    The fused wind-speed product for the wind farm is designed using ensemble-prediction wind speed products from the European Centre for Medium-Range Weather Forecasts (ECMWF) and professional numerical model products on wind power based on Mesoscale Model 5 (MM5) and the Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. The single-valued forecast is formed by calculating different ensemble statistics of the Bayesian probabilistic forecast, which represents the uncertainty of the ECMWF ensemble prediction. An autoregressive integrated moving average (ARIMA) model is used to improve the time resolution of the single-valued forecast, and, based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, the optimal wind speed forecasting curve and its confidence interval are provided. The results show that the fused forecast clearly improves accuracy relative to the existing numerical forecasting products. Compared with the 0-24 h existing deterministic forecast in the validation period, the mean absolute error (MAE) is decreased by 24.3 % and the correlation coefficient (R) is increased by 12.5 %. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7 % and R is increased by 14.5 %. Additionally, the MAE did not increase with forecast lead time.
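
    A minimal Python sketch of the fusion idea (collapse the ensemble to a single-valued forecast, then post-process it with an ARIMA model) is given below. The synthetic ensemble, the plain ensemble mean standing in for the Bayesian ensemble statistics, and the ARIMA order are all assumptions for illustration, not the paper's implementation.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(1)
    members = rng.gamma(4.0, 2.0, size=(51, 24))   # 51 members x 24 forecast steps

    # Single-valued forecast from an ensemble statistic (here simply the mean).
    single = pd.Series(members.mean(axis=0))

    # Post-process the single-valued series with an assumed ARIMA(1,0,1) model.
    fit = ARIMA(single, order=(1, 0, 1)).fit()
    smoothed = fit.predict(start=0, end=len(single) - 1)
    print(smoothed.head())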

  16. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
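
    A small Python sketch of the deterministic idea, replacing the random Poisson gaps with a sine-modulated average gap, follows. The gap equation and the bisection on the scale parameter are illustrative assumptions rather than the published algorithm.

    import numpy as np

    def deterministic_gap_schedule(grid_size, n_samples):
        """Sorted grid indices with sine-modulated gaps (assumed form:
        gap_i ~ 1 + L*sin(pi/2 * i/n), with L tuned to span the grid)."""
        def schedule(L):
            idx, pos = [], 0.0
            for i in range(n_samples):
                idx.append(int(round(pos)))
                pos += 1.0 + L * np.sin(0.5 * np.pi * i / n_samples)
            return np.array(idx)
        # Bisect the gap scale L so the last sample lands near the grid end.
        lo, hi = 0.0, float(grid_size)
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if schedule(mid)[-1] > grid_size - 1:
                hi = mid
            else:
                lo = mid
        # Rounding can merge neighboring points; np.unique keeps the schedule valid.
        return np.unique(np.clip(schedule(lo), 0, grid_size - 1))

    print(deterministic_gap_schedule(128, 32))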

  17. A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce

    2008-01-01

    Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.
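
    The following Python sketch shows the flavor of such a probabilistic analysis next to its deterministic counterpart: a planned mission cadence is perturbed by launch failures handled with a simple contingency rule. The probabilities, delays, and the rule itself are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(42)
    P_SUCCESS, RETRY_DELAY, CADENCE = 0.95, 12, 6   # months (assumed values)

    def campaign_duration(n_missions, trials=100_000):
        durations = np.zeros(trials)
        for t in range(trials):
            clock = 0
            for _ in range(n_missions):
                clock += CADENCE
                if rng.random() > P_SUCCESS:        # failure: contingency re-flight
                    clock += RETRY_DELAY
            durations[t] = clock
        return durations

    d = campaign_duration(10)
    print(f"deterministic plan: {10 * CADENCE} months")
    print(f"probabilistic: mean {d.mean():.1f}, 95th pct {np.percentile(d, 95):.0f} months")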

  18. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
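
    For orientation, the classical first-order safety index for a resistance R and a load S with means μ and standard deviations σ (a textbook form, stated here for context rather than quoted from the report) is

        \beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}, \qquad P_f = \Phi(-\beta),

    where Φ is the standard normal distribution function; the proposed method folds the accumulative and propagation uncertainty-error terms into this expression and solves for a factor used in place of the conventional safety factor.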

  19. Apparatus for fixing latency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, David R; Bartholomew, David B; Moon, Justin

    2009-09-08

    An apparatus for fixing computational latency within a deterministic region on a network comprises a network interface modem, a high priority module and at least one deterministic peripheral device. The network interface modem is in communication with the network. The high priority module is in communication with the network interface modem. The at least one deterministic peripheral device is connected to the high priority module. The high priority module comprises a packet assembler/disassembler, and hardware for performing at least one operation. Also disclosed is an apparatus for executing at least one instruction on a downhole device within a deterministic region, the apparatus comprising a control device, a downhole network, and a downhole device. The control device is near the surface of a downhole tool string. The downhole network is integrated into the tool string. The downhole device is in communication with the downhole network.

  20. Stochastic Petri Net extension of a yeast cell cycle model.

    PubMed

    Mura, Ivan; Csikász-Nagy, Attila

    2008-10-21

    This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic one and the experimental results available from literature. The SPN model catches the behavior of the wild type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be found with the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.
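
    For context, the largely automatable ODE-to-SPN translation rests on standard propensity conversions; for a bimolecular reaction A + B → C with deterministic rate constant k in a volume V (a generic recipe, not taken from the paper),

        c = \frac{k}{N_A V}, \qquad a(\mathbf{x}) = c\, x_A x_B,

    where x_A and x_B are molecule counts, c is the stochastic rate constant, and a is the firing propensity of the corresponding SPN transition.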

  1. Effect of sample volume on metastable zone width and induction time

    NASA Astrophysics Data System (ADS)

    Kubota, Noriaki

    2012-04-01

    The metastable zone width (MSZW) and the induction time measured for a large sample (say, >0.1 L) are reproducible and deterministic, while for a small sample (say, <1 mL) these values are irreproducible and stochastic. Such behaviors of the MSZW and induction time were discussed theoretically with both stochastic and deterministic models. Equations for the distributions of the stochastic MSZW and induction time were derived. The average values of the stochastic MSZW and induction time both decreased with an increase in sample volume, while the deterministic MSZW and induction time remained unchanged. Such different behaviors with variation in sample volume were explained in terms of the detection sensitivity of crystallization events. The average values of MSZW and induction time in the stochastic model were compared with the deterministic MSZW and induction time, respectively. Literature data reported for aqueous paracetamol solution were explained theoretically with the presented models.
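
    The volume dependence discussed above follows the classical single-nucleation result (stated generically here, not quoted from the paper): for a stationary nucleation rate J per unit volume in a sample of volume V, the induction time distribution is

        P(t) = 1 - \exp(-J V t),

    so the mean stochastic induction time scales as 1/(JV) and shrinks as the sample volume grows, whereas a purely deterministic detection-limit description is volume-independent.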

  2. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-07

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  3. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-14

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  4. Realistic Simulation for Body Area and Body-To-Body Networks

    PubMed Central

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele

    2016-01-01

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537

  5. Realistic Simulation for Body Area and Body-To-Body Networks.

    PubMed

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D'Errico, Raffaele

    2016-04-20

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices.
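
    The LOS/NLOS link modeling mentioned in both records is commonly expressed with a log-distance path loss law (a generic form; the paper's exact parameterization may differ):

        PL(d) = PL(d_0) + 10\, n \log_{10}\!\left(\frac{d}{d_0}\right) + X_\sigma,

    where the exponent n takes distinct values for LOS and NLOS links, d is the (possibly time-varying) transmitter-receiver distance supplied by the biomechanical model, and X_σ is a shadowing term.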

  6. Forecasting risk along a river basin using a probabilistic and deterministic model for environmental risk assessment of effluents through ecotoxicological evaluation and GIS.

    PubMed

    Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente

    2009-12-20

    This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole effluent toxicity testing and water flows along a specific river basin. It is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from 5 waste water treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements and precise distance measurements using Geographical Information Systems (GIS). The model was implemented into a spreadsheet and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic model, performed each month during one calendar year, showed a significant seasonal variation of risk while revealing that September represents the worst-case scenario with values up to 950 Risk Units. This classifies the entire area of study for the month of September as "sublethal significant risk for standard species". The probabilistic approach using Monte Carlo analysis was performed on 7 different forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approximations reveal the presence of certain substances, which might be causing sublethal effects in the aquatic species present in the Henares River.

  7. The Solar System Large Planets' Influence on a New Maunder Minimum

    NASA Astrophysics Data System (ADS)

    Yndestad, Harald; Solheim, Jan-Erik

    2016-04-01

    In the 1890s, G. Spörer and E. W. Maunder (1890) reported that solar activity stopped for a period of 70 years, from 1645 to 1715. A later reconstruction of solar activity confirms the grand minima Maunder (1640-1720), Spörer (1390-1550), and Wolf (1270-1340), and the minima Oort (1010-1070) and Dalton (1785-1810), since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with less irradiation from the Sun and cold climate periods on Earth. The identification of three grand Maunder-type periods and two Dalton-type periods over a thousand years indicates that sooner or later a new Maunder- or Dalton-type period will bring a colder climate on Earth. The causes of these minimum periods are not well understood. Any forecast of a new Maunder-type period rests on the properties of solar variability: if the solar variability has a deterministic element, we can better estimate a new Maunder grand minimum, whereas a purely random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source as a first cause. If this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611, and a solar barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis, used to identify stationary periods, coincidence periods, and their phase relations. The results show that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus, and Neptune as the first cause. A deterministic model of TSI and sunspot variability confirms the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder-type sunspot minimum period from about 2018 to 2055. The deterministic model of the TSI ACRIM data series from 1700 computes a new Maunder-type grand minimum period from 2015 to 2071. A model of the longer TSI ACRIM data series from 1000 computes a new Dalton- to Maunder-type minimum irradiation period from 2047 to 2068.

  8. The NJOY Nuclear Data Processing System, Version 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macfarlane, Robert; Muir, Douglas W.; Boicourt, R. M.

    The NJOY Nuclear Data Processing System, version 2016, is a comprehensive computer code package for producing pointwise and multigroup cross sections and related quantities from evaluated nuclear data in the ENDF-4 through ENDF-6 legacy card-image formats. NJOY works with evaluated files for incident neutrons, photons, and charged particles, producing libraries for a wide variety of particle transport and reactor analysis codes.

  9. A Latent Variable Investigation of the Phonological Awareness Literacy Screening-Kindergarten Assessment: Construct Identification and Multigroup Comparisons between Spanish-Speaking English-Language Learners (ELLs) and Non-ELL Students

    ERIC Educational Resources Information Center

    Huang, Francis L.; Konold, Timothy R.

    2014-01-01

    Psychometric properties of the Phonological Awareness Literacy Screening for Kindergarten (PALS-K) instrument were investigated in a sample of 2844 first-time public school kindergarteners. PALS-K is a widely used English literacy screening assessment. Exploratory factor analysis revealed a theoretically defensible measurement structure that was…

  10. The Relationship between Receptive and Expressive Subskills of Academic L2 Proficiency in Nonnative Speakers of English: A Multigroup Approach

    ERIC Educational Resources Information Center

    Pae, Hye K.; Greenberg, Daphne

    2014-01-01

    The purpose of this study was to examine the relationship between receptive and expressive language skills characterized by the performance of nonnative speakers (NNSs) of English in the academic context. Test scores of 585 adult NNSs were selected from Form 2 of the Pearson Test of English Academic's field-test database. A correlated…

  11. Examination of Measurement Invariance across Culture and Gender on the RCMAS-2 Short Form among Singapore and U.S. Adolescents

    ERIC Educational Resources Information Center

    Lowe, Patricia A.; Ang, Rebecca P.

    2016-01-01

    Tests of measurement invariance were conducted across culture and gender on the Revised Children's Manifest Anxiety Scale-Second Edition (RCMAS-2) Short Form in a sample of 1,003 Singapore and U.S. adolescents. The results of multi-group confirmatory factor analyses across culture and gender supported at least partial measurement invariance. ANOVA…

  12. The Theory of Planned Behavior within the Stages of the Transtheoretical Model: Latent Structural Modeling of Stage-Specific Prediction Patterns in Physical Activity

    ERIC Educational Resources Information Center

    Lippke, Sonia; Nigg, Claudio R.; Maddock, Jay E.

    2007-01-01

    This is the first study to test whether the stages of change of the transtheoretical model are qualitatively different through exploring discontinuity patterns in theory of planned behavior (TPB) variables using latent multigroup structural equation modeling (MSEM) with AMOS. Discontinuity patterns in terms of latent means and prediction patterns…

  13. Ideas for Planning Your Instructional Materials Center. Administration; Conference and Independent Study; Listening and Viewing; Materials Production; Reading, Research and Borrowing; Storage and Maintenance.

    ERIC Educational Resources Information Center

    Massachusetts School Building Assistance Commission, Boston.

    This report suggests that the instructional materials center be flexible for multigroup activities, expansible for future physical growth, and central to the instructional program. Area specifications are given for the following areas: materials research, small groups, cataloging and processing materials, and listening and speaking, and for a dark…

  14. The Relationship between Adolescents' News Media Use and Civic Engagement: The Indirect Effect of Interpersonal Communication with Parents

    ERIC Educational Resources Information Center

    Boyd, Michelle J.; Zaff, Jonathan F.; Phelps, Erin; Weiner, Michelle B.; Lerner, Richard M.

    2011-01-01

    Using data from the 4-H Study of Positive Youth Development, a longitudinal study involving U.S. adolescents, multi-group structural equation modeling (SEM) was used to evaluate whether news media use is predictive of a set of civic indicators (civic duty, civic efficacy, neighborhood social connection, and civic participation) for youth in Grades…

  15. Do Items that Measure Self-Perceived Physical Appearance Function Differentially across Gender Groups? An Application of the MACS Model

    ERIC Educational Resources Information Center

    Gonzalez-Roma, Vicente; Tomas, Ines; Ferreres, Doris; Hernandez, Ana

    2005-01-01

    The aims of this study were to investigate whether the 6 items of the Physical Appearance Scale (Marsh, Richards, Johnson, Roche, & Tremayne, 1994) show differential item functioning (DIF) across gender groups of adolescents, and to show how this can be done using the multigroup mean and covariance structure (MG-MACS) analysis model. Two samples…

  16. Perfectionism among Chinese Gifted and Nongifted Students in Hong Kong: The Use of the Revised Almost Perfect Scale

    ERIC Educational Resources Information Center

    Chan, David W.

    2011-01-01

    This study investigated the structure of perfectionism based on the Almost Perfect Scale-Revised with a sample of 320 gifted students aged 7 to 12 and a sample of 882 nongifted students of similar ages in Hong Kong. Multigroup confirmatory factor analyses across the two student groups supported a common three-dimensional model that included…

  17. Testing women's propensities to leave their abusive husbands using structural equation modeling.

    PubMed

    Choi, Myunghan; Belyea, Michael; Phillips, Linda R; Insel, Kathleen; Min, Sung-Kil

    2009-01-01

    Many Korean women are just beginning to recognize that what they considered to be normal treatment is actually domestic violence. Many are becoming more intolerant of the abuse and more likely to desire to leave an abusive relationship. The aim of this study was to test, using the framework of sociostructural and psychological-relational power (PRP), a model of Korean women's propensities to leave their abusive husbands. Multigroup structural equation modeling was used to test relationships between variables chosen from the sociostructural power and PRP to explain intolerance to abuse. Married Korean women (n = 184) who self-identified as being abused physically, psychologically, sexually, or financially participated in the study. The multigroup analysis revealed that the relationship of abuse and Hwa-Byung (a culture-bound syndrome that denotes Korean women's anger) with intolerance was supported for women with low education (defined as having an education of high school or less: ≤12 years); also for this group, particularly among the younger women, high power was related to high levels of reported abuse and abuse intolerance. For women in the high-education group (education beyond high school: ≥13 years), high power was related to abuse, Hwa-Byung, and abuse intolerance; age did not influence power. Overall, the multigroup model adequately fitted the sample data (χ² = 92.057, degrees of freedom = 50, p = .000; normed fit index = .926, comparative fit index = .964, root mean square error of approximation = .068, Hoelter's critical number = 152), demonstrating that education is a crucial moderator of Korean women's attitude toward the unacceptability of abuse and propensity to terminate the marriage. This study found support for a model of abuse intolerance using the framework of sociostructural power and PRP, primarily for the low-education group. Hwa-Byung was a mediating factor that contributed to intolerance to abuse in women with low education. This study highlights the importance of understanding the cultural assumptions that guide Korean women's beliefs and behaviors about abuse intolerance, suggesting that effective intervention programs should be specific to age and education, including a focus on resource availability, which could clarify the variations in Korean women's responses to abuse intolerance.

  18. Reliability and Validity of Assessing User Satisfaction With Web-Based Health Interventions

    PubMed Central

    Lehr, Dirk; Reis, Dorota; Vis, Christiaan; Riper, Heleen; Berking, Matthias; Ebert, David Daniel

    2016-01-01

    Background The perspective of users should be taken into account in the evaluation of Web-based health interventions. Assessing the users’ satisfaction with the intervention they receive could enhance the evidence for the intervention effects. Thus, there is a need for valid and reliable measures to assess satisfaction with Web-based health interventions. Objective The objective of this study was to analyze the reliability, factorial structure, and construct validity of the Client Satisfaction Questionnaire adapted to Internet-based interventions (CSQ-I). Methods The psychometric quality of the CSQ-I was analyzed in user samples from 2 separate randomized controlled trials evaluating Web-based health interventions, one from a depression prevention intervention (sample 1, N=174) and the other from a stress management intervention (sample 2, N=111). First, the underlying measurement model of the CSQ-I was analyzed to determine the internal consistency. The factorial structure of the scale and the measurement invariance across groups were tested by multigroup confirmatory factor analyses. Additionally, the construct validity of the scale was examined by comparing satisfaction scores with the primary clinical outcome. Results Multigroup confirmatory analyses on the scale yielded a one-factorial structure with a good fit (root-mean-square error of approximation =.09, comparative fit index =.96, standardized root-mean-square residual =.05) that showed partial strong invariance across the 2 samples. The scale showed very good reliability, indicated by McDonald omegas of .95 in sample 1 and .93 in sample 2. Significant correlations with change in depressive symptoms (r=−.35, P<.001) and perceived stress (r=−.48, P<.001) demonstrated the construct validity of the scale. Conclusions The proven internal consistency, factorial structure, and construct validity of the CSQ-I indicate a good overall psychometric quality of the measure to assess the user’s general satisfaction with Web-based interventions for depression and stress management. Multigroup analyses indicate its robustness across different samples. Thus, the CSQ-I seems to be a suitable measure to consider the user’s perspective in the overall evaluation of Web-based health interventions. PMID:27582341

  19. Reliability and Validity of Assessing User Satisfaction With Web-Based Health Interventions.

    PubMed

    Boß, Leif; Lehr, Dirk; Reis, Dorota; Vis, Christiaan; Riper, Heleen; Berking, Matthias; Ebert, David Daniel

    2016-08-31

    The perspective of users should be taken into account in the evaluation of Web-based health interventions. Assessing the users' satisfaction with the intervention they receive could enhance the evidence for the intervention effects. Thus, there is a need for valid and reliable measures to assess satisfaction with Web-based health interventions. The objective of this study was to analyze the reliability, factorial structure, and construct validity of the Client Satisfaction Questionnaire adapted to Internet-based interventions (CSQ-I). The psychometric quality of the CSQ-I was analyzed in user samples from 2 separate randomized controlled trials evaluating Web-based health interventions, one from a depression prevention intervention (sample 1, N=174) and the other from a stress management intervention (sample 2, N=111). First, the underlying measurement model of the CSQ-I was analyzed to determine the internal consistency. The factorial structure of the scale and the measurement invariance across groups were tested by multigroup confirmatory factor analyses. Additionally, the construct validity of the scale was examined by comparing satisfaction scores with the primary clinical outcome. Multigroup confirmatory analyses on the scale yielded a one-factorial structure with a good fit (root-mean-square error of approximation =.09, comparative fit index =.96, standardized root-mean-square residual =.05) that showed partial strong invariance across the 2 samples. The scale showed very good reliability, indicated by McDonald omegas of .95 in sample 1 and .93 in sample 2. Significant correlations with change in depressive symptoms (r=-.35, P<.001) and perceived stress (r=-.48, P<.001) demonstrated the construct validity of the scale. The proven internal consistency, factorial structure, and construct validity of the CSQ-I indicate a good overall psychometric quality of the measure to assess the user's general satisfaction with Web-based interventions for depression and stress management. Multigroup analyses indicate its robustness across different samples. Thus, the CSQ-I seems to be a suitable measure to consider the user's perspective in the overall evaluation of Web-based health interventions.

  20. How Much Do Adolescents Cybergossip? Scale Development and Validation in Spain and Colombia

    PubMed Central

    Romera, Eva M.; Herrera-López, Mauricio; Casas, José A.; Ortega Ruiz, Rosario; Del Rey, Rosario

    2018-01-01

    Cybergossip is the act of two or more people making evaluative comments via digital devices about somebody who is not present. This cyberbehavior affects the social group in which it occurs and can either promote or hinder peer relationships. Scientific studies that assess the nature of this emerging and interactive behavior in the virtual world are limited. Some research on traditional gossip has identified it as an inherent and defining element of indirect relational aggression. This paper adopts and argues for a wider definition of gossip that includes positive comments and motivations. This work also suggests that cybergossip has to be measured independently from traditional gossip due to key differences when it occurs through ICT. This paper presents the Colombian and Spanish validation of the Cybergossip Questionnaire for Adolescents (CGQ-A), involving 3,747 high school students (M = 13.98 years old, SD = 1.69; 48.5% male), of which 1,931 were Colombian and 1,816 were Spanish. Test models derived from item response theory, confirmatory factor analysis, content validation, and multi-group analysis were run on the full sample and subsamples for each country and both genders. The obtained optimal fit and psychometric properties confirm the robustness and suitability of a one-dimensional structure for the cybergossip instrument. The multi-group analysis shows that the cybergossip construct is understood similarly in both countries and between girls and boys. The composite reliability ratifies convergent and divergent validity of the scale. Descriptive results show that Colombian adolescents gossip less than their Spanish counterparts and that boys and girls use cybergossip to the same extent. In conclusion, this study confirms the relationship between cybergossip and cyberbullying, but it also supports a focus on positive cybergossip in psychoeducational interventions to build positive virtual relationships and prevent risky cyberbehaviors. PMID:29483887

  1. Factorial invariance of pediatric patient self-reported fatigue across age and gender: a multigroup confirmatory factor analysis approach utilizing the PedsQL™ Multidimensional Fatigue Scale.

    PubMed

    Varni, James W; Beaujean, A Alexander; Limbers, Christine A

    2013-11-01

    In order to compare multidimensional fatigue research findings across age and gender subpopulations, it is important to demonstrate measurement invariance, that is, that the items from an instrument have equivalent meaning across the groups studied. This study examined the factorial invariance of the 18-item PedsQL™ Multidimensional Fatigue Scale items across age and gender and tested a bifactor model. Multigroup confirmatory factor analysis (MG-CFA) was performed specifying a three-factor model across three age groups (5-7, 8-12, and 13-18 years) and gender. MG-CFA models were proposed in order to compare the factor structure, metric, scalar, and error variance across age groups and gender. The analyses were based on 837 children and adolescents recruited from general pediatric clinics, subspecialty clinics, and hospitals in which children were being seen for well-child checks, mild acute illness, or chronic illness care. A bifactor model of the items with one general factor influencing all the items and three domain-specific factors representing the General, Sleep/Rest, and Cognitive Fatigue domains fit the data better than oblique factor models. Based on the multiple measures of model fit, configural, metric, and scalar invariance were found for almost all items across the age and gender groups, as was invariance in the factor covariances. The PedsQL™ Multidimensional Fatigue Scale demonstrated strict factorial invariance for child and adolescent self-report across gender and strong factorial invariance across age subpopulations. The findings support an equivalent three-factor structure across the age and gender groups studied. Based on these data, it can be concluded that pediatric patients across the groups interpreted the items in a similar manner regardless of their age or gender, supporting the multidimensional factor structure interpretation of the PedsQL™ Multidimensional Fatigue Scale.

  2. Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and material removal rates extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.

  3. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  4. Deterministic and efficient quantum cryptography based on Bell's theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Zengbing; Pan Jianwei; Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, 69120 Heidelberg

    2006-05-15

    We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology.

  5. Heart rate variability as determinism with jump stochastic parameters.

    PubMed

    Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M

    2013-08-01

    We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump-process noise term as a model which can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
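
    A minimal numerical sketch of such a map is given below; the map form, parameter values, and jump statistics are illustrative assumptions, not the authors' fitted model.

      import numpy as np

      rng = np.random.default_rng(0)

      def circle_map(theta, omega, k=0.6):
          # One step of the standard circle map (mod 1).
          return (theta + omega - (k / (2 * np.pi)) * np.sin(2 * np.pi * theta)) % 1.0

      # Stochastic parameter with persistence: omega stays fixed until a rare
      # jump redraws it, moving the trajectory between members of a
      # one-parameter family of deterministic maps.
      n_steps, jump_prob = 5000, 0.01
      theta, omega = 0.1, 0.35
      trace = np.empty(n_steps)
      for i in range(n_steps):
          if rng.random() < jump_prob:        # occasional, stochastically timed jump
              omega = rng.normal(0.35, 0.05)  # redraw the parameter
          theta = circle_map(theta, omega)
          trace[i] = theta                    # analogue of a normalized RR interval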

  6. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  7. Stochastic assembly in a subtropical forest chronosequence: evidence from contrasting changes of species, phylogenetic and functional dissimilarity over succession.

    PubMed

    Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping

    2016-09-07

    Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.

  8. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications.

    PubMed

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-10

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  9. Discrete-State Stochastic Models of Calcium-Regulated Calcium Influx and Subspace Dynamics Are Not Well-Approximated by ODEs That Neglect Concentration Fluctuations

    PubMed Central

    Weinberg, Seth H.; Smith, Gregory D.

    2012-01-01

    Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10^-17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume is unrealistically large and/or the kinetics of the calcium binding are sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597
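
    The effect described here can be reproduced in a few lines of code; the sketch below compares a Gillespie simulation of a toy calcium-regulated channel against its naive mean-field fixed point. All rate constants are assumed for illustration and are not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy stochastic model: channel state (open/closed) coupled to a small
      # subspace holding a discrete number n of Ca ions. Opening rate grows
      # with n, which is the feedback that breaks the mean-field picture.
      k_open0, k_ca, k_close = 0.2, 0.05, 2.0   # channel kinetics (1/ms), assumed
      j_in, k_out = 40.0, 4.0                   # influx when open, per-ion efflux

      def ssa_mean_n(t_end=500.0):
          # Gillespie simulation; returns the time-averaged ion count.
          t, is_open, n, acc = 0.0, 0, 0, 0.0
          while t < t_end:
              rates = [(k_open0 + k_ca * n) * (1 - is_open),  # channel opens
                       k_close * is_open,                     # channel closes
                       j_in * is_open,                        # an ion enters
                       k_out * n]                             # an ion leaves
              total = sum(rates)
              dt = rng.exponential(1.0 / total)
              acc += n * min(dt, t_end - t)
              t += dt
              r = rng.random() * total
              if r < rates[0]:               is_open = 1
              elif r < rates[0] + rates[1]:  is_open = 0
              elif r < sum(rates[:3]):       n += 1
              else:                          n -= 1
          return acc / t_end

      # Mean-field fixed point (ignores the open-state/ion-count correlation):
      #   p_open = (k_open0 + k_ca*n) / (k_open0 + k_ca*n + k_close)
      #   n      = j_in * p_open / k_out
      n = 0.0
      for _ in range(100):
          p = (k_open0 + k_ca * n) / (k_open0 + k_ca * n + k_close)
          n = j_in * p / k_out
      print(ssa_mean_n(), n)   # the two can differ noticeably when n is small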

  10. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead–based applications

    PubMed Central

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-01-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead–encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin–biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules. PMID:28393911

  11. Mixing Single Scattering Properties in Vector Radiative Transfer for Deterministic and Stochastic Solutions

    NASA Astrophysics Data System (ADS)

    Mukherjee, L.; Zhai, P.; Hu, Y.; Winker, D. M.

    2016-12-01

    Among the primary factors that determine the polarized radiation field of a turbid medium are the single scattering properties of the medium. When multiple types of scatterers are present, their single scattering properties need to be properly mixed in order to find solutions to the vector radiative transfer (VRT) theory. VRT solvers can be divided into two types: deterministic and stochastic. A deterministic solver can only accept one set of single scattering properties in its smallest discretized spatial volume; when the medium contains more than one kind of scatterer, their single scattering properties are averaged and then used as input. A stochastic solver, by contrast, can work with different kinds of scatterers explicitly. In this work, two different mixing schemes are studied using the Successive Order of Scattering (SOS) method and the Monte Carlo (MC) method: one scheme is used for the deterministic solver and the other for the stochastic Monte Carlo method. It is found that the solutions from the two VRT solvers using the two different mixing schemes agree with each other extremely well. This confirms the equivalence of the two mixing schemes and also provides a benchmark for the VRT solution for the medium studied.
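
    The two mixing schemes can be stated compactly in code; the sketch below uses two assumed scatterer types with scalar phase functions (the paper works with full phase matrices).

      import numpy as np

      rng = np.random.default_rng(2)

      # Two scatterer types with assumed scattering coefficients and scalar
      # phase functions tabulated on a common angular grid.
      mu = np.linspace(-1.0, 1.0, 181)             # cos(scattering angle)
      beta = np.array([0.8, 0.3])                  # scattering coefficients, 1/km
      p1 = 0.75 * (1.0 + mu**2)                    # Rayleigh-like
      p2 = np.ones_like(mu)                        # isotropic

      # Scheme for a deterministic solver: one effective medium whose phase
      # function is the scattering-coefficient-weighted average.
      p_mix = (beta[0] * p1 + beta[1] * p2) / beta.sum()

      # Scheme for a stochastic (Monte Carlo) solver: at each collision pick
      # which scatterer type was hit, with probability proportional to beta,
      # and use that type's own phase function.
      which = rng.choice(2, size=200000, p=beta / beta.sum())
      frac1 = (which == 0).mean()                  # fraction of collisions on type 1
      p_mc = frac1 * p1 + (1.0 - frac1) * p2       # MC estimate of the mixture

      # Both schemes share the same expected scattering law, which is why the
      # SOS and Monte Carlo solutions agree.
      print(np.max(np.abs(p_mc - p_mix)))          # small, shrinks with samples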

  12. Multi-Year Revenue and Expenditure Forecasting for Small Municipal Governments.

    DTIC Science & Technology

    1981-03-01

    Keywords: management audit; econometric revenue forecast; gap and impact analysis; deterministic expenditure forecast; municipal forecasting; municipal budget. The report presents a multi-year revenue and expenditure forecasting model for the City of Monterey, California; the Monterey model includes an econometric revenue forecast as well as forecasts based on expert judgment and trend analysis.

  13. Deterministic models for traffic jams

    NASA Astrophysics Data System (ADS)

    Nagel, Kai; Herrmann, Hans J.

    1993-10-01

    We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.
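
    A minimal deterministic cellular-automaton sketch in the spirit of the integer-valued model is given below; the ring geometry, parameter values, and update rules are illustrative assumptions.

      import numpy as np

      # Deterministic traffic automaton on a ring: accelerate by 1 up to v_max,
      # then brake to the gap ahead. There is no random braking term, so the
      # dynamics are fully deterministic after the initial placement.
      L, n_cars, v_max, steps = 100, 30, 5, 200
      pos = np.sort(np.random.default_rng(3).choice(L, n_cars, replace=False))
      vel = np.zeros(n_cars, dtype=int)

      for _ in range(steps):
          gap = (np.roll(pos, -1) - pos - 1) % L   # empty cells to the next car
          vel = np.minimum(vel + 1, v_max)         # accelerate
          vel = np.minimum(vel, gap)               # brake to avoid collision
          pos = (pos + vel) % L                    # move

      # The mean velocity distinguishes the free-flow (low-density) phase from
      # the jammed (high-density) phase.
      print(vel.mean())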

  14. Temperature feedback of TRIGA MARK-II fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Usang, M. D., E-mail: mark-dennis@nuclearmalaysia.gov.my; Minhat, M. S.; Rabir, M. H.

    2016-01-22

    We study the amount of temperature feedback on reactivity for the three types of TRIGA fuel, i.e., ST8, ST12, and LEU fuel, used in the TRIGA MARK II reactor at the Malaysia Nuclear Agency. We employ WIMSD-5B for the calculation of k_inf for a single TRIGA fuel element surrounded by water. Typical calculations of TRIGA fuel reactivity are usually limited to ST8 fuel, but in this paper our investigation extends to ST12 and LEU fuel. We look at the k_inf of our model at various fuel temperatures and calculate the amount of reactivity removed. In one instance, the water temperature is kept at a room temperature of 300 K to simulate a sudden reactivity increase from startup. In another instance, we simulate a sudden temperature increase during normal operation, where the water temperature is approximately 320 K, while observing k_inf at various fuel temperatures. For accidents, two cases are simulated: in the first case the water temperature is 370 K, and the other is without any water. We observe that the higher uranium-content fuels, such as ST12 and LEU, have a much smaller contribution to the reactivity in comparison to the often-studied ST8 fuel. In fact, the negative reactivity coefficient for LEU fuel at high temperature in water is only slightly larger than the negative reactivity coefficient for ST8 fuel in void. The performance of ST8 fuel in terms of negative reactivity coefficient is cut almost by half when it is in void. These results are essential for the safety evaluation of the reactor and should be carefully considered when choices of fuel for core reconfiguration are made.
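
    The reactivity arithmetic implied by two such k_inf values is simple; the sketch below uses made-up k_inf numbers purely to show the calculation, not WIMSD-5B results.

      # Reactivity removed by a fuel-temperature rise, from two k_inf values
      # (the numbers here are illustrative, not WIMSD-5B output):
      k_cold, k_hot = 1.40, 1.38            # k_inf at 300 K and at elevated fuel T
      rho = lambda k: (k - 1.0) / k         # reactivity in dk/k
      delta_rho = rho(k_hot) - rho(k_cold)  # negative: feedback removes reactivity
      print(f"{delta_rho * 1e5:.0f} pcm")   # about -1035 pcm for these values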

  15. Temperature feedback of TRIGA MARK-II fuel

    NASA Astrophysics Data System (ADS)

    Usang, M. D.; Minhat, M. S.; Rabir, M. H.; M. Rawi M., Z.

    2016-01-01

    We study the amount of temperature feedback on reactivity for the three types of TRIGA fuel, i.e., ST8, ST12, and LEU fuel, used in the TRIGA MARK II reactor at the Malaysia Nuclear Agency. We employ WIMSD-5B for the calculation of k_inf for a single TRIGA fuel element surrounded by water. Typical calculations of TRIGA fuel reactivity are usually limited to ST8 fuel, but in this paper our investigation extends to ST12 and LEU fuel. We look at the k_inf of our model at various fuel temperatures and calculate the amount of reactivity removed. In one instance, the water temperature is kept at a room temperature of 300 K to simulate a sudden reactivity increase from startup. In another instance, we simulate a sudden temperature increase during normal operation, where the water temperature is approximately 320 K, while observing k_inf at various fuel temperatures. For accidents, two cases are simulated: in the first case the water temperature is 370 K, and the other is without any water. We observe that the higher uranium-content fuels, such as ST12 and LEU, have a much smaller contribution to the reactivity in comparison to the often-studied ST8 fuel. In fact, the negative reactivity coefficient for LEU fuel at high temperature in water is only slightly larger than the negative reactivity coefficient for ST8 fuel in void. The performance of ST8 fuel in terms of negative reactivity coefficient is cut almost by half when it is in void. These results are essential for the safety evaluation of the reactor and should be carefully considered when choices of fuel for core reconfiguration are made.

  16. Soil pH mediates the balance between stochastic and deterministic assembly of bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Binu M.; Stegen, James C.; Kim, Mincheol

    Little is known about the factors affecting the relative influence of the stochastic and deterministic processes that govern the assembly of microbial communities in successional soils. Here, we conducted a meta-analysis of bacterial communities using six different successional-soil data sets, scattered across different regions, with different pH conditions in early and late successional soils. We found that soil pH was the best predictor of bacterial community assembly and of the relative importance of stochastic and deterministic processes along successional soils. Extreme acidic or alkaline pH conditions lead to the assembly of phylogenetically more clustered bacterial communities through deterministic processes, whereas pH conditions close to neutral lead to phylogenetically less clustered bacterial communities with more stochasticity. We suggest that the influence of pH, rather than successional age, is the main driving force producing trends in the phylogenetic assembly of bacteria, and that pH also influences the relative balance of stochastic and deterministic processes along successional soils. Given that pH had a much stronger association with community assembly than did successional age, we evaluated whether the inferred influence of pH was maintained when studying globally distributed samples collected without regard for successional age. This dataset confirmed the strong influence of pH, suggesting that the influence of soil pH on community assembly processes occurs globally. Extreme pH conditions likely exert more stringent limits on survival and fitness, imposing strong selective pressures through ecological and evolutionary time. Taken together, these findings suggest that the degree to which stochastic vs. deterministic processes shape soil bacterial community assembly is a consequence of soil pH rather than successional age.

  17. The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments

    NASA Astrophysics Data System (ADS)

    Chen, Fajing; Jiao, Meiyan; Chen, Jing

    2013-04-01

    Public weather services are trending toward providing users with probabilistic weather forecasts, in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecast (BPF), a new statistical method for probabilistic forecasting, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. Bayes' theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
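
    For the normal-linear special case mentioned above, the Bayesian update has a closed form; the sketch below shows it with assumed climatological and likelihood parameters, which in practice would be estimated from the historical record of forecasts and observations.

      import numpy as np

      # Normal-linear BPF sketch. Prior (climatology) for the predictand W,
      # and a linear-Gaussian likelihood for the deterministic forecast X
      # given W. All parameter values are assumptions for illustration.
      M, S = 2.0, 4.0              # prior mean and std of observed temperature
      a, b, sigma = 0.5, 0.9, 2.0  # likelihood: X | W=w ~ N(a + b*w, sigma^2)

      def posterior(x):
          # Posterior N(mean, var) for W given a deterministic forecast x
          # (standard conjugate Gaussian update).
          prec = 1.0 / S**2 + b**2 / sigma**2
          mean = (M / S**2 + b * (x - a) / sigma**2) / prec
          return mean, 1.0 / prec

      mean, var = posterior(x=-3.0)   # turn one control forecast into a pdf
      print(mean, np.sqrt(var))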

  18. Deterministic Factors Overwhelm Stochastic Environmental Fluctuations as Drivers of Jellyfish Outbreaks.

    PubMed

    Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick

    2015-01-01

    Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data on jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010 we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general, yet unappreciated mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future.

  19. Cognitive Diagnostic Analysis Using Hierarchically Structured Skills

    ERIC Educational Resources Information Center

    Su, Yu-Lan

    2013-01-01

    This dissertation proposes two modified cognitive diagnostic models (CDMs), the deterministic, inputs, noisy, "and" gate with hierarchy (DINA-H) model and the deterministic, inputs, noisy, "or" gate with hierarchy (DINO-H) model. Both models incorporate the hierarchical structures of the cognitive skills in the model estimation…

  20. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
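
    For reference, one analysis step of the standard stochastic EnKF, whose mean-field limit is at issue here, looks as follows; the dimensions, operators, and noise levels are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(4)

      def enkf_update(X, y, H, R):
          # One stochastic-EnKF analysis step with perturbed observations.
          # X: (d, N) forecast ensemble; y: (m,) observation;
          # H: (m, d) observation operator; R: (m, m) obs-error covariance.
          d, N = X.shape
          A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
          P = A @ A.T / (N - 1)                        # sample covariance
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
          Y = y[:, None] + rng.multivariate_normal(
              np.zeros(len(y)), R, size=N).T           # perturbed observations
          return X + K @ (Y - H @ X)

      # Toy use: 2-dimensional state, one observed component.
      X = rng.normal(size=(2, 50))
      Xa = enkf_update(X, y=np.array([1.0]), H=np.array([[1.0, 0.0]]),
                       R=np.array([[0.25]]))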

  1. Active temporal multiplexing of indistinguishable heralded single photons

    PubMed Central

    Xiong, C.; Zhang, X.; Liu, Z.; Collins, M. J.; Mahendra, A.; Helt, L. G.; Steel, M. J.; Choi, D. -Y.; Chae, C. J.; Leong, P. H. W.; Eggleton, B. J.

    2016-01-01

    It is a fundamental challenge in quantum optics to deterministically generate indistinguishable single photons through non-deterministic nonlinear optical processes, due to the intrinsic coupling of single- and multi-photon-generation probabilities in these processes. Actively multiplexing photons generated in many temporal modes can decouple these probabilities, but key issues are to minimize resource requirements to allow scalability, and to ensure indistinguishability of the generated photons. Here we demonstrate the multiplexing of photons from four temporal modes solely using fibre-integrated optics and off-the-shelf electronic components. We show a 100% enhancement to the single-photon output probability without introducing additional multi-photon noise. Photon indistinguishability is confirmed by a fourfold Hong–Ou–Mandel quantum interference with a 91±16% visibility after subtracting multi-photon noise due to high pump power. Our demonstration paves the way for scalable multiplexing of many non-deterministic photon sources to a single near-deterministic source, which will be of benefit to future quantum photonic technologies. PMID:26996317

  2. Recent progress in the assembly of nanodevices and van der Waals heterostructures by deterministic placement of 2D materials.

    PubMed

    Frisenda, Riccardo; Navarro-Moratalla, Efrén; Gant, Patricia; Pérez De Lara, David; Jarillo-Herrero, Pablo; Gorbachev, Roman V; Castellanos-Gomez, Andres

    2018-01-02

    Designer heterostructures can now be assembled layer-by-layer with unmatched precision thanks to the recently developed deterministic placement methods to transfer two-dimensional (2D) materials. This possibility constitutes the birth of a very active research field on the so-called van der Waals heterostructures. Moreover, these deterministic placement methods also open the door to fabricate complex devices, which would be otherwise very difficult to achieve by conventional bottom-up nanofabrication approaches, and to fabricate fully-encapsulated devices with exquisite electronic properties. The integration of 2D materials with existing technologies such as photonic and superconducting waveguides and fiber optics is another exciting possibility. Here, we review the state-of-the-art of the deterministic placement methods, describing and comparing the different alternative methods available in the literature, and we illustrate their potential to fabricate van der Waals heterostructures, to integrate 2D materials into complex devices and to fabricate artificial bilayer structures where the layers present a user-defined rotational twisting angle.

  3. First-order reliability application and verification methods for semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-11-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.

  4. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    PubMed

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in the case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault-detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
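
    A generic just-in-time learning loop of this kind can be sketched briefly; the scheme below (k-nearest-neighbour local linear regression plus a residual threshold) is an illustrative stand-in, not the paper's JITL-DD algorithm.

      import numpy as np

      def jitl_predict(x_query, X_hist, y_hist, k=20):
          # Just-in-time learning: on each query, fit a local linear model
          # to the k nearest historical samples (local model structure).
          dist = np.linalg.norm(X_hist - x_query, axis=1)
          idx = np.argsort(dist)[:k]                      # local neighborhood
          Xk = np.hstack([X_hist[idx], np.ones((k, 1))])  # bias column
          theta, *_ = np.linalg.lstsq(Xk, y_hist[idx], rcond=None)
          return np.append(x_query, 1.0) @ theta

      # Fault detection: flag a sample when its residual exceeds a threshold
      # calibrated on fault-free data (threshold here is crude and assumed).
      rng = np.random.default_rng(5)
      X_hist = rng.normal(size=(1000, 3))
      y_hist = np.sin(X_hist[:, 0]) + 0.1 * rng.normal(size=1000)
      x_new, y_new = rng.normal(size=3), 5.0              # anomalous reading
      residual = abs(y_new - jitl_predict(x_new, X_hist, y_hist))
      print(residual > 3 * 0.1)                           # fault flag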

  5. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  6. Short-range solar radiation forecasts over Sweden

    NASA Astrophysics Data System (ADS)

    Landelius, Tomas; Lindskog, Magnus; Körnich, Heiner; Andersson, Sandra

    2018-04-01

    In this article, the performance of short-range solar radiation forecasts from the global deterministic and ensemble models of the European Centre for Medium-Range Weather Forecasts (ECMWF) is compared with an ensemble of the regional mesoscale model HARMONIE-AROME used by the national meteorological services in Sweden, Norway and Finland. Note, however, that only the control members and the ensemble means are included in the comparison. The models' resolutions differ considerably: 18 km for the ECMWF ensemble, 9 km for the ECMWF deterministic model, and 2.5 km for the HARMONIE-AROME ensemble. The models share the same radiation code. It turns out that they all systematically underestimate the Direct Normal Irradiance (DNI) for clear-sky conditions. Except for this shortcoming, the HARMONIE-AROME ensemble model shows the best agreement with the distribution of observed Global Horizontal Irradiance (GHI) and DNI values. During mid-day the HARMONIE-AROME ensemble mean performs best. The control member of the HARMONIE-AROME ensemble also scores better than the global deterministic ECMWF model. This is an interesting result, since mesoscale models have so far not shown good results when compared to the ECMWF models. Three days with clear, mixed and cloudy skies are used to illustrate the possible added value of a probabilistic forecast. It is shown that in these cases the mesoscale ensemble could provide decision support to a grid operator in terms of forecasts of both the amount of solar power and its probabilities.

  7. A Deep Penetration Problem Calculation Using AETIUS:An Easy Modeling Discrete Ordinates Transport Code UsIng Unstructured Tetrahedral Mesh, Shared Memory Parallel

    NASA Astrophysics Data System (ADS)

    KIM, Jong Woon; LEE, Young-Ouk

    2017-09-01

    As computing power improves, computer codes that use a deterministic method may seem less useful than those using the Monte Carlo method. In addition, users do not like to think about space, angle, and energy discretization for deterministic codes. However, a deterministic method is still powerful in that we can obtain a solution for the flux throughout the problem domain, particularly when particles can barely penetrate, such as in a deep penetration problem with small detection volumes. Recently, a new state-of-the-art discrete-ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capability to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete-ordinates code that uses an unstructured tetrahedral mesh, like ATTILA. For pre- and post-processing, Gmsh is used to generate an unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.

  8. Neutron skyshine from intense 14-MeV neutron source facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, T.; Hayashi, K.; Takahashi, A.

    1985-07-01

    The dose distribution and the spectrum variation of neutrons due to the skyshine effect have been measured with a high-efficiency rem counter, a multisphere spectrometer, and an NE-213 scintillator in the environment surrounding an intense 14-MeV neutron source facility. The dose distribution and the energy spectra of neutrons around the facility used as a skyshine source have also been measured to enable an absolute evaluation of the skyshine effect. The skyshine effect was analyzed by two multigroup Monte Carlo codes, NIMSAC and MMCR-2, by two discrete ordinates S_n codes, ANISN and DOT3.5, and by the shield structure design code for skyshine, SKYSHINE-II. The calculated results show good agreement with the measured results in absolute values. These experimental results should be useful as benchmark data for skyshine analysis and for the shielding design of fusion facilities.

  9. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad-hoc, rule of thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is the potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch structure, especially with the consideration of renewables, 2) to develop a market settlement scheme of proposed dynamic reserve policies such that the market efficiency is improved, 3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.

  10. Multi-Group Covariance and Mean Structure Modeling of the Relationship between the WAIS-III Common Factors and Sex and Educational Attainment in Spain

    ERIC Educational Resources Information Center

    Dolan, Conor V.; Colom, Roberto; Abad, Francisco J.; Wicherts, Jelte M.; Hessen, David J.; van de Sluis, Sophie

    2006-01-01

    We investigated sex effects and the effects of educational attainment (EA) on the covariance structure of the WAIS-III in a subsample of the Spanish standardization data. We fitted both first order common factor models and second order common factor models. The latter include general intelligence ("g") as a second order common factor.…

  11. Ethnic Variables and Negative Life Events as Predictors of Depressive Symptoms and Suicidal Behaviors in Latino College Students: On the Centrality of "Receptivo a los Demás"

    ERIC Educational Resources Information Center

    Chang, Edward C.; Yu, Elizabeth A.; Yu, Tina; Kahle, Emma R.; Hernandez, Viviana; Kim, Jean M.; Jeglic, Elizabeth L.; Hirsch, Jameson K.

    2016-01-01

    In the present study, we examined ethnic variables (viz., multigroup ethnic identity and other group orientation) along with negative life events as predictors of depressive symptoms and suicidal behaviors in a sample of 156 (38 male and 118 female) Latino college students. Results of conducting hierarchical regression analyses indicated that the…

  12. Differences in Intention to Use Educational RSS Feeds between Lebanese and British Students: A Multi-Group Analysis Based on the Technology Acceptance Model

    ERIC Educational Resources Information Center

    Tarhini, Ali; Scott, Michael James; Sharma, Sujeet Kumar; Abbasi, Muhammad Sharif

    2015-01-01

    Really Simple Syndication (RSS) offers a means for university students to receive timely updates from virtual learning environments. However, despite its utility, only 21% of home students surveyed at a university in Lebanon claim to have ever used the technology. To investigate whether national culture could be an influence on intention to use…

  13. Short-Term Effect of Two Semi-Occluded Vocal Tract Training Programs on the Vocal Quality of Future Occupational Voice Users: "Resonant Voice Training Using Nasal Consonants" versus "Straw Phonation"

    ERIC Educational Resources Information Center

    Meerschman, Iris; Van Lierde, Kristiane; Peeters, Karen; Meersman, Eline; Claeys, Sofie; D'haeseleer, Evelien

    2017-01-01

    Purpose: The purpose of this study was to determine the short-term effect of 2 semi-occluded vocal tract training programs, "resonant voice training using nasal consonants" versus "straw phonation," on the vocal quality of vocally healthy future occupational voice users. Method: A multigroup pretest-posttest randomized control…

  14. Assessing the Intention to Use Technology among Pre-Service Teachers in Singapore and Malaysia: A Multigroup Invariance Analysis of the Technology Acceptance Model (TAM)

    ERIC Educational Resources Information Center

    Teo, Timothy; Lee, Chwee Beng; Chai, Ching Sing; Wong, Su Luan

    2009-01-01

    This study assesses the pre-service teachers' self-reported future intentions to use technology in Singapore and Malaysia. A survey was employed to validate items from past research. Using the Technology Acceptance Model (TAM) as a research framework, 495 pre-service teachers from Singapore and Malaysia responded to an 11-item questionnaires…

  15. Factorial Invariance and Latent Mean Differences of Scores on the Achievement Goal Tendencies Questionnaire across Gender and Age in a Sample of Spanish Students

    ERIC Educational Resources Information Center

    Ingles, Candido J.; Marzo, Juan C.; Castejon, Juan L.; Nunez, Jose Carlos; Valle, Antonio; Garcia-Fernandez, Jose M.; Delgado, Beatriz

    2011-01-01

    This study examined the factorial invariance and latent mean differences of scores on the Spanish version of the "Achievement Goal Tendencies Questionnaire" (AGTQ) across gender and age groups in 2022 Spanish students (51.1% boys) in grades 7 through 10. The equality of factor structures was compared using multi-group confirmatory factor…

  16. Gender and Acceptance of E-Learning: A Multi-Group Analysis Based on a Structural Equation Model among College Students in Chile and Spain

    PubMed Central

    2015-01-01

    The scope of this study was to evaluate whether the adoption of e-learning in two universities, and in particular, the relationship between the perception of external control and perceived ease of use, is different because of gender differences. The study was carried out with participating students in two different universities, one in Chile and one in Spain. The Technology Acceptance Model was used as a theoretical framework for the study. A multi-group analysis method in partial least squares was employed to test differences between groups. The four main conclusions of the study are: (1) a version of the Technology Acceptance Model has been successfully used to explain the process of adoption of e-learning at an undergraduate level of study; (2) the finding of a strong and significant relationship between perception of external control and perception of ease of use of the e-learning platform; (3) a significant relationship between perceived enjoyment and perceived ease of use and between results demonstrability and perceived usefulness is found; (4) the study indicates a few statistically significant differences between males and females when adopting an e-learning platform, according to the tested model. PMID:26465895

  17. VENTURE/PC manual: A multidimensional multigroup neutron diffusion code system. Version 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, A.; Huria, H.C.; Cho, K.W.

    1991-12-01

    VENTURE/PC is a recompilation of part of the Oak Ridge BOLD VENTURE code system, which will operate on an IBM PC or compatible computer. Neutron diffusion theory solutions are obtained for multidimensional, multigroup problems. This manual contains information associated with operating the code system. The purpose of the various modules used in the code system, and the input for these modules, are discussed. The PC code structure is also given. Version 2 included several enhancements not given in the original version of the code. In particular, flux iterations can be done in core rather than by reading and writing to disk, for problems which allow sufficient memory for such in-core iterations. This speeds up the iteration process. Version 3 does not include any of the special processors used in the previous versions. These special processors utilized formatted input for various elements of the code system. All such input data are now entered through the Input Processor, which produces standard interface files for the various modules in the code system. In addition, a Standard Interface File Handbook is included in the documentation distributed with the code, to assist in developing the input for the Input Processor.

  18. The Status of Multi-Dimensional Core-Collapse Supernova Models

    NASA Astrophysics Data System (ADS)

    Müller, B.

    2016-09-01

    Models of neutrino-driven core-collapse supernova explosions have matured considerably in recent years. Explosions of low-mass progenitors can routinely be simulated in 1D, 2D, and 3D. Nucleosynthesis calculations indicate that these supernovae could be contributors of some lighter neutron-rich elements beyond iron. The explosion mechanism of more massive stars remains under investigation, although first 3D models of neutrino-driven explosions employing multi-group neutrino transport have become available. Together with earlier 2D models and more simplified 3D simulations, these have elucidated the interplay between neutrino heating and hydrodynamic instabilities in the post-shock region that is essential for shock revival. However, some physical ingredients may still need to be added/improved before simulations can robustly explain supernova explosions over a wide range of progenitors. Solutions recently suggested in the literature include uncertainties in the neutrino rates, rotation, and seed perturbations from convective shell burning. We review the implications of 3D simulations of shell burning in supernova progenitors for the 'perturbations-aided neutrino-driven mechanism,' whose efficacy is illustrated by the first successful multi-group neutrino hydrodynamics simulation of an 18 solar mass progenitor with 3D initial conditions. We conclude with speculations about the impact of 3D effects on the structure of massive stars through convective boundary mixing.

  19. Self-esteem Is Mostly Stable Across Young Adulthood: Evidence from Latent STARTS Models.

    PubMed

    Wagner, Jenny; Lüdtke, Oliver; Trautwein, Ulrich

    2016-08-01

    How stable is self-esteem? This long-standing debate has led to different conclusions across different areas of psychology. Longitudinal data and up-to-date statistical models have recently indicated that self-esteem has stable and autoregressive trait-like components and state-like components. We applied latent STARTS models with the goal of replicating previous findings in a longitudinal sample of young adults (N = 4,532; M_age = 19.60, SD = 0.85; 55% female). In addition, we applied multigroup models to extend previous findings on different patterns of stability for men versus women and for people with high versus low levels of depressive symptoms. We found evidence for the general pattern of a major proportion of stable and autoregressive trait variance and a smaller yet substantial amount of state variance in self-esteem across 10 years. Furthermore, multigroup models suggested substantial differences in the variance components: Females showed more state variability than males. Individuals with higher levels of depressive symptoms showed more state and less autoregressive trait variance in self-esteem. Results are discussed with respect to the ongoing trait-state debate and possible implications of the group differences that we found in the stability of self-esteem. © 2015 Wiley Periodicals, Inc.

  20. TIME-DEPENDENT MULTI-GROUP MULTI-DIMENSIONAL RELATIVISTIC RADIATIVE TRANSFER CODE BASED ON SPHERICAL HARMONIC DISCRETE ORDINATE METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tominaga, Nozomu; Shibata, Sanshiro; Blinnikov, Sergei I., E-mail: tominaga@konan-u.ac.jp, E-mail: sshibata@post.kek.jp, E-mail: Sergei.Blinnikov@itep.ru

    We develop a time-dependent, multi-group, multi-dimensional relativistic radiative transfer code, which is required to numerically investigate radiation from relativistic fluids that are involved in, e.g., gamma-ray bursts and active galactic nuclei. The code is based on the spherical harmonic discrete ordinate method (SHDOM), which evaluates a source function including anisotropic scattering in spherical harmonics and implicitly solves the static radiative transfer equation with ray tracing in discrete ordinates. We implement treatments of time dependence, multi-frequency bins, Lorentz transformation, and elastic Thomson and inelastic Compton scattering in the publicly available SHDOM code. Our code adopts a mixed-frame approach; the source function is evaluated in the comoving frame, whereas the radiative transfer equation is solved in the laboratory frame. This implementation is validated using various test problems and comparisons with the results from a relativistic Monte Carlo code. These validations confirm that the code correctly calculates the intensity and its evolution in the computational domain. The code enables us to obtain an Eddington tensor that relates the first and third moments of intensity (energy density and radiation pressure) and is frequently used as a closure relation in radiation hydrodynamics calculations.

  1. A multigroup confirmatory factor analysis of the Patient Health Questionnaire-9 among English- and Spanish-speaking Latinas.

    PubMed

    Merz, Erin L; Malcarne, Vanessa L; Roesch, Scott C; Riley, Natasha; Sadler, Georgia Robins

    2011-07-01

    Depression is a significant problem for ethnic minorities that remains understudied partly due to a lack of strong measures with established psychometric properties. One screening tool, the Patient Health Questionnaire-9 (PHQ-9), which was developed for use in primary care has also gained popularity in research settings. The reliability and validity of the PHQ-9 has been well established among predominantly Caucasian samples, in addition to many minority groups. However, there is little evidence regarding its utility among Hispanic Americans, a large and growing cultural group in the United States. In this study, we investigated the reliability and structural validity of the PHQ-9 in Hispanic American women. A community sample of 479 Latina women from southern California completed the PHQ-9 in their preferred language of English or Spanish. Cronbach's alphas suggested that there was good internal consistency for both the English- and Spanish-language versions. Structural validity was investigated using multigroup confirmatory factor analysis. Results support a similar one-factor structure with equivalent response patterns and variances among English- and Spanish-speaking Latinas. These results suggest that the PHQ-9 can be used with confidence in both English and Spanish versions to screen Latinas for depression.

  2. LOCAL POPULATION CHANGE AND VARIATIONS IN RACIAL INTEGRATION IN THE UNITED STATES, 2000-2010.

    PubMed

    Bellman, Benjamin; Spielman, Seth E; Franklin, Rachel S

    2018-03-01

    While population growth has been consistently tied to decreasing racial segregation at the metropolitan level in the United States, little work has been done to relate small-scale changes in population size to integration. We address this question through a novel technique that tracks population changes by race and ethnicity for comparable geographies in both 2000 and 2010. Using the Theil Index, we analyze the fifty most populous Metropolitan Statistical Areas in 2010 for changes in multigroup segregation. We classify local areas by their net population change between 2000 and 2010 using a novel unit of analysis based on aggregating census blocks. We find strong evidence that growing parts of rapidly growing metropolitan areas of the United States are crucial to understanding regional differences in segregation that have emerged in past decades. Multigroup segregation declined the most in growing parts of growing metropolitan areas. Comparatively, growing parts of shrinking or stagnant metropolitan areas were less diverse and had smaller declines in segregation. We also find that local areas with shrinking populations had disproportionately high minority representation in 2000 before population loss took place. We conclude that the regional context of population growth or decline has important consequences for the residential mixing of racial groups.
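
    The multigroup Theil index used in such analyses can be computed directly from area-by-group population counts; the sketch below uses made-up counts for three local areas and three groups.

      import numpy as np

      def theil_h(counts):
          # Multigroup Theil information index H.
          # counts: (n_units, n_groups) population counts per local area.
          t = counts.sum(axis=1)                    # unit totals
          T = t.sum()
          pi = counts.sum(axis=0) / T               # region-wide group shares
          E = -np.sum(pi * np.log(pi))              # region-wide entropy
          with np.errstate(divide="ignore", invalid="ignore"):
              p = counts / t[:, None]               # within-unit shares
              Ej = -np.sum(np.where(p > 0, p * np.log(p), 0.0), axis=1)
          return np.sum(t * (E - Ej)) / (T * E)     # 0 = even, 1 = segregated

      # Toy example with three areas and three groups (illustrative numbers):
      counts = np.array([[900,  50,  50],
                         [100, 800, 100],
                         [300, 300, 400]])
      print(theil_h(counts))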

  3. Factor structure and reliability of the childhood trauma questionnaire and prevalence estimates of trauma for male and female street youth.

    PubMed

    Forde, David R; Baron, Stephen W; Scher, Christine D; Stein, Murray B

    2012-01-01

    This study examines the psychometric properties of the Childhood Trauma Questionnaire short form (CTQ-SF) with street youth who have run away or been expelled from their homes (N = 397). Internal reliability coefficients for the five clinical scales ranged from .65 to .95. Confirmatory Factor Analysis (CFA) was used to test the five-factor structure of the scales yielding acceptable fit for the total sample. Additional multigroup analyses were performed to consider items by gender. Results provided only evidence of weak factorial invariance. Constrained models showed invariance in configuration, factor loadings, and factor covariances but failed for equality of intercepts. Mean trauma scores for street youth tended to fall in the moderate to severe range on all abuse/neglect clinical scales. Females reported higher levels of abuse and neglect. Prevalence of child maltreatment of individual forms was very high with 98% of street youth reporting one or more forms; 27.4% of males and 48.9% of females reported all five forms. Results of this study support the viability of the CTQ-SF for screening maltreatment in a highly vulnerable street population. Caution is recommended when comparing prevalence estimates for male and female street youth given the failure of the strong factorial multigroup model.

  4. A Multigroup Confirmatory Factor Analysis of the Patient Health Questionnaire-9 among English- and Spanish-speaking Latinas

    PubMed Central

    Merz, Erin L.; Malcarne, Vanessa L.; Roesch, Scott C.; Riley, Natasha; Sadler, Georgia Robins

    2014-01-01

    Depression is a significant problem for ethnic minorities that remains understudied partly due to a lack of strong measures with established psychometric properties. One screening tool, the Patient Health Questionnaire-9 (PHQ-9), which was developed for use in primary care has also gained popularity in research settings. The reliability and validity of the PHQ-9 has been well established among predominantly Caucasian samples, in addition to many minority groups. However, there is little evidence regarding its utility among Hispanic Americans, a large and growing cultural group in the United States. In this study, we investigated the reliability and structural validity of the PHQ-9 in Hispanic American women. A community sample of 479 Latina women from southern California completed the PHQ-9 in their preferred language of English or Spanish. Cronbach’s alphas suggested that there was good internal consistency for both the English- and Spanish-language versions. Structural validity was investigated using multigroup confirmatory factor analysis (CFA). Results support a similar one-factor structure with equivalent response patterns and variances among English- and Spanish-speaking Latinas. These results suggest that the PHQ-9 can be used with confidence in both English and Spanish versions to screen Latinas for depression. PMID:21787063

  5. Recent developments in multidimensional transport methods for the APOLLO 2 lattice code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zmijarevic, I.; Sanchez, R.

    1995-12-31

    A usual method of preparing homogenized cross sections for reactor coarse-mesh calculations is based on a two-dimensional multigroup transport treatment of an assembly, together with an appropriate leakage model and a reaction-rate-preserving homogenization technique. The current generation of assembly spectrum codes based on collision probability methods is capable of treating complex geometries (i.e., irregular meshes of arbitrary shape), thus avoiding the modeling error that was introduced in codes with traditional tracking routines. The power and architecture of current computers allow the treatment of spatial domains comprising several mutually interacting assemblies using a fine multigroup structure and retaining all geometric details of interest. Increasing safety requirements demand detailed two- and three-dimensional calculations for very heterogeneous problems such as control rod positioning, broken Pyrex rods, irregular compacting of mixed-oxide (MOX) pellets at an MOX-UO2 interface, and many others. An effort has been made to include accurate multidimensional transport methods in the APOLLO 2 lattice code. These include the extension to three-dimensional axially symmetric geometries of the general-geometry collision probability module TDT and the development of new two- and three-dimensional characteristics methods for regular Cartesian meshes. In this paper we discuss the main features of the recently developed multidimensional methods that are currently being tested.

  6. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. For more complex models, such as the multi-strain dynamics describing the virus-host interaction in dengue fever, even recently developed parameter estimation techniques, like maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and the coexistence of multiple attractors.

  7. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.

  8. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
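
    As a rough illustration of the random packet-loss setting (not the paper's controller design), here is a minimal sketch of sampled-data consensus for single-integrator agents on a ring, with i.i.d. Bernoulli losses applied independently to each communication link; all constants are illustrative:

        import numpy as np

        rng = np.random.default_rng(1)
        n, h, p_loss, steps = 5, 0.1, 0.3, 400   # agents, sampling interval, loss prob.

        # Ring communication topology (adjacency matrix)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0

        x = 10.0 * rng.normal(size=n)            # initial agent states
        for _ in range(steps):
            delivered = (rng.random((n, n)) > p_loss) * A   # Bernoulli link losses
            u = np.array([delivered[i] @ (x - x[i]) for i in range(n)])
            x = x + h * u                        # sampled-data consensus update
        print("state spread after run:", x.max() - x.min())

    With the loss probability and sampling interval chosen here the spread shrinks toward zero; the paper's criteria characterize exactly when such a combination still guarantees consensus.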

  9. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

    For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. It is therefore important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments, such as the 1977 solar minimum galactic cosmic ray environment, are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.

  10. Extraction of angle deterministic signals in the presence of stationary speed fluctuations with cyclostationary blind source separation

    NASA Astrophysics Data System (ADS)

    Delvecchio, S.; Antoni, J.

    2012-02-01

    This paper addresses the use of a cyclostationary blind source separation algorithm (namely RRCR) to extract angle-deterministic signals from rotating machines in the presence of stationary speed fluctuations. This means that only phase fluctuations occurring while the machine runs in steady-state conditions are considered; run-up or run-down speed variations are not taken into account. The machine is also assumed to run in idle conditions, so non-stationary phenomena due to the load are not considered. It is theoretically established that under such operating conditions the deterministic (periodic) signal in the angle domain becomes cyclostationary at first and second orders in the time domain. This fact justifies the use of the RRCR algorithm, which is able to extract the angle-deterministic signal directly from the time domain without performing any kind of interpolation. This is particularly valuable when angular resampling fails because of uncontrolled speed fluctuations. The capability of the proposed approach is verified by means of simulated and actual vibration signals captured on a pneumatic screwdriver handle. In this particular case not only can the angle-deterministic part be extracted, but the main sources of excitation (i.e., motor shaft imbalance, epicycloidal gear meshing and air pressure forces) affecting the user's hand during operation can also be separated.

  11. Tag-mediated cooperation with non-deterministic genotype-phenotype mapping

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Chen, Shu

    2016-01-01

    Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model of the spatial prisoner's dilemma. By our definition, similarity between genotypic tags does not directly imply similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure, which can explain the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying classical tag-based models to the analysis of empirical phenomena if the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.

  12. Heavy rain prediction using deterministic and probabilistic models - the flash flood cases of 11-13 October 2005 in Catalonia (NE Spain)

    NASA Astrophysics Data System (ADS)

    Barrera, A.; Altava-Ortiz, V.; Llasat, M. C.; Barnolas, M.

    2007-09-01

    Between 11 and 13 October 2005, several flash floods occurred along the coast of Catalonia (NE Spain) due to a significant heavy rainfall event. Maximum rainfall reached values of up to 250 mm in 24 h, and the total amount recorded during the event in some places was close to 350 mm. Barcelona city was also in the affected area, where high rainfall intensities were registered, but only a few small floods occurred, thanks to the city's efficient urban drainage system. Two forecasting methods have been applied in order to evaluate their predictive capability for extreme events: the deterministic MM5 model and a probabilistic model based on the analog method. The MM5 simulation allows an accurate analysis of the main meteorological features at high spatial resolution (2 km), such as the formation of convergence lines over the region that partially explain the location of the maximum precipitation during the event. The analog technique, on the other hand, shows good agreement between the highest probability values and the actual affected areas, although a larger rainfall database would be needed to improve the results. The comparison between the observed precipitation and both QPF (quantitative precipitation forecast) methods shows that the analog technique tends to underestimate the rainfall values while the MM5 simulation tends to overestimate them.
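
    The analog method described above can be sketched in a few lines: given an archive of past predictor fields and their observed rainfall, the forecast is built from the rainfall of the k most similar past situations. A minimal sketch with synthetic data (the archive, the Euclidean distance measure, and the 50 mm threshold are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(2)
        archive_fields = rng.normal(size=(5000, 40))    # past predictor patterns
        archive_rain = rng.gamma(2.0, 5.0, size=5000)   # observed 24-h rainfall (mm)

        def analog_forecast(target_field, k=30):
            """Probabilistic QPF from the k most similar historical situations."""
            dist = np.linalg.norm(archive_fields - target_field, axis=1)
            analogs = archive_rain[np.argsort(dist)[:k]]
            return {"mean_mm": analogs.mean(),
                    "p90_mm": np.percentile(analogs, 90),
                    "prob_heavy": (analogs > 50.0).mean()}   # P(rain > 50 mm)

        print(analog_forecast(rng.normal(size=40)))

    The abstract's remark that a larger rainfall database would improve results corresponds directly to enlarging the archive from which the analogs are drawn.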

  13. Distributed Market-Based Algorithms for Multi-Agent Planning with Shared Resources

    DTIC Science & Technology

    2013-02-01

    [Fragmentary DTIC record: only table-of-contents lines and scattered excerpts survive.] The report develops distributed market-based algorithms for multi-agent planning with shared resources (Chapter 2: Distributed Market-Based Multi-Agent Planning; 2.1 Problem Formulation), reports gains over a deterministic planner on a "test set" of scenarios with changing economies, and, for a supply chain management problem, models demand as a sequence of Bernoulli coin flips (Section 4.2.1).

  14. Prediction of the Arctic Oscillation in Boreal Winter by Dynamical Seasonal Forecasting Systems

    NASA Technical Reports Server (NTRS)

    Kang, Daehyun; Lee, Myong-In; Im, Jungho; Kim, Daehyun; Kim, Hye-Mi; Kang, Hyun-Suk; Schubert, Siegfried D.; Arribas, Alberto; MacLachlan, Craig

    2013-01-01

    This study assesses the prediction skill of the boreal winter Arctic Oscillation (AO) in three state-of-the-art dynamical ensemble prediction systems (EPSs): the UKMO GloSea4, the NCEP CFSv2, and the NASA GEOS-5. Long-term reforecasts made with the EPSs are used to evaluate representations of the AO and to examine skill scores for deterministic and probabilistic forecasts of the AO index. The reforecasts reproduce the observed changes in the large-scale patterns of Northern Hemisphere surface temperature, upper-level wind, and precipitation according to the AO phase. Results demonstrate that all EPSs have better prediction skill than persistence for lead times of up to 3 months, suggesting great potential for skillful prediction of the AO and the associated climate anomalies on seasonal time scales. It is also found that both the deterministic and the probabilistic forecast skill for the AO in the recent period (1997-2010) is higher than in the earlier period (1983-1996).
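
    Deterministic skill for an index forecast of this kind is often measured by the correlation between the forecast and observed index, with persistence (the last observed value) as the baseline. A minimal sketch with synthetic series (the AR(1) memory and the 0.6 forecast correlation of the mock model are arbitrary illustrations, not the paper's scores):

        import numpy as np

        rng = np.random.default_rng(3)
        months = 28 * 4                      # e.g. 28 reforecast winters, illustrative

        # Synthetic "observed" AO index with mild month-to-month memory (AR(1))
        obs = np.zeros(months)
        for t in range(1, months):
            obs[t] = 0.3 * obs[t - 1] + rng.normal()

        # Mock ensemble-mean forecast correlated with the observations
        fcst = 0.6 * obs + 0.8 * rng.normal(size=months)
        persistence = np.roll(obs, 1)        # last observed value as the forecast

        def skill(f, o):
            return np.corrcoef(f, o)[0, 1]   # anomaly correlation

        print("model skill      :", round(skill(fcst[1:], obs[1:]), 2))
        print("persistence skill:", round(skill(persistence[1:], obs[1:]), 2))

    A dynamical system "beats persistence" when the first number exceeds the second, which is the comparison the abstract reports for all three EPSs.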

  15. Experimental realization of real-time feedback-control of single-atom arrays

    NASA Astrophysics Data System (ADS)

    Kim, Hyosub; Lee, Woojun; Ahn, Jaewook

    2016-05-01

    Deterministic loading of neutral atoms at particular locations has remained a challenging problem. Here we show, in a proof-of-principle experimental demonstration, that such deterministic loading can be achieved by rearrangement of atoms. In the experiment, cold rubidium atoms were trapped by optical tweezers, which are hologram images produced by a liquid-crystal spatial light modulator (LC-SLM). After the initial occupancy was identified, the hologram was actively controlled to rearrange the captured atoms onto unfilled sites. For this, we developed a new flicker-free hologram algorithm that enables holographic atom translation. Our demonstration shows that up to N = 9 atoms were moved simultaneously in the 2D plane, with 2N = 18 movable degrees of freedom and a fidelity of 99% for a single-atom 5-μm translation. We hope that our in situ atom rearrangement will prove useful in scaling up quantum computers. Samsung Science and Technology Foundation [SSTF-BA1301-12].

  16. A Unit on Deterministic Chaos for Student Teachers

    ERIC Educational Resources Information Center

    Stavrou, D.; Assimopoulos, S.; Skordoulis, C.

    2013-01-01

    A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…

  17. A Deterministic Annealing Approach to Clustering AIRS Data

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander

    2012-01-01

    We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric clustering method, the Deterministic Annealing technique.

  18. INTEGRATED PROBABILISTIC AND DETERMINISTIC MODELING TECHNIQUES IN ESTIMATING EXPOSURE TO WATER-BORNE CONTAMINANTS: PART 2 PHARMACOKINETIC MODELING

    EPA Science Inventory

    The Total Exposure Model (TEM) uses deterministic and stochastic methods to estimate the exposure of a person performing daily activities of eating, drinking, showering, and bathing. There were 250 time histories generated, by subject with activities, for the three exposure ro...

  19. Integrability and Chaos: The Classical Uncertainty

    ERIC Educational Resources Information Center

    Masoliver, Jaume; Ros, Ana

    2011-01-01

    In recent years there has been a considerable increase in the publishing of textbooks and monographs covering what was formerly known as random or irregular deterministic motion, now referred to as deterministic chaos. There is still substantial interest in a matter that is included in many graduate and even undergraduate courses on classical…

  20. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    NASA Astrophysics Data System (ADS)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

    In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used to solve the deterministic and stochastic cases of the Phi-4 equation and the nonlinear Foam Drainage equation. The effect of randomness in the input on the stability of the stochastic process solution is also studied.
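
    For reference, the method's usual starting ansatz, as given in the literature on the Riccati-Bernoulli sub-ODE technique (the symbols follow that literature, not necessarily this paper's notation), is the sub-ODE satisfied by the traveling-wave reduction u(ξ):

        \frac{du}{d\xi} = a\,u^{2-m} + b\,u + c\,u^{m}

    This reduces to a Riccati equation when m = 0 (u' = a u^2 + b u + c) and to a Bernoulli equation when a = 0 (u' = b u + c u^m); solutions of the target PDE are then assembled from the closed-form solutions of this sub-ODE.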

  1. Contemporary Genetics for Gender Researchers: Not Your Grandma's Genetics Anymore

    ERIC Educational Resources Information Center

    Salk, Rachel H.; Hyde, Janet S.

    2012-01-01

    Over the past century, much of genetics was deterministic, and feminist researchers framed justified criticisms of genetics research. However, over the past two decades, genetics research has evolved remarkably and has moved far from earlier deterministic approaches. Our article provides a brief primer on modern genetics, emphasizing contemporary…

  2. Technological Utopia, Dystopia and Ambivalence: Teaching with Social Media at a South African University

    ERIC Educational Resources Information Center

    Rambe, Patient; Nel, Liezel

    2015-01-01

    The discourse of social media adoption in higher education has often been funnelled through utopian and dystopian perspectives, which are polarised but determinist theorisations of human engagement with educational technologies. Consequently, these determinist approaches have obscured a broadened grasp of the situated, socially constructed nature…

  3. New Criterion and Tool for Caltrans Seismic Hazard Characterization

    NASA Astrophysics Data System (ADS)

    Shantz, T.; Merriam, M.; Turner, L.; Chiou, B.; Liu, X.

    2008-12-01

    Caltrans recently adopted new procedures for the development of response spectra for structure design. These procedures incorporate both deterministic and probabilistic criteria. The Next Generation Attenuation (NGA) models (2008) are used for deterministic assessment (using a revised late-Quaternary age fault database), and the USGS 2008 5%-in-50-year hazard maps are used for probabilistic assessment. A minimum deterministic spectrum based on a M6.5 earthquake at 12 km is also included. These spectra are enveloped and the largest values used. A new, publicly available web-based tool will be used to calculate the design spectrum. The tool is built on a Windows-Apache-MySQL-PHP (WAMP) platform and integrates Google Maps for increased flexibility in the tool's use. Links to Caltrans data, such as pre-construction logs of test borings, assist in the estimation of Vs30 values used in the new procedures. Basin effects based on new models developed for the CFM, for the San Francisco Bay area by the USGS, and by Thurber (2008) are also incorporated. It is anticipated that additional layers, such as CGS Seismic Hazard Zone maps, will be added in the future. Application of the new criterion will result in expected higher levels of ground motion at many bridges west of the Coast Ranges. In eastern California, use of the NGA relationships for strike-slip faulting (the dominant sense of motion in California) will often result in slightly lower expected values for bridges. The expected result is a more realistic prediction of ground motions at bridges, in keeping with the motions developed for other large-scale and important structures. The tool is based on a simplified fault map of California, so it will not be used for more detailed evaluations such as surface rupture determination. Announcements regarding tool availability (expected in early 2009) are at http://www.dot.ca.gov/research/index.htm

  4. Measurements of angular flux on surface of Li/sub 2/O slab assemblies and their analysis by a direct integration transport code ''BERMUDA''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maekawa, H.; Oyama, Y.

    1983-09-01

    Angle-dependent neutron leakage spectra above 0.5 MeV from Li{sub 2}O slab assemblies were measured accurately by the time-of-flight method. The measured angles were 0°, 12.2°, 24.9°, 41.8° and 66.8°. The Li{sub 2}O assemblies were 31.4 cm in equivalent radius and 5.06, 20.24 and 40.48 cm in thickness. The data were analyzed with the new transport code ''BERMUDA-2DN'', which solves the time-independent transport equation for two-dimensional, cylindrical, multi-regional geometry using the direct integration method in a multigroup model. The group transfer kernels are obtained accurately from the double-differential cross section data without using a Legendre expansion. The results were compared on an absolute scale. While some local discrepancies exist, the calculated spectra agree well with the experimental ones as a whole. The BERMUDA code was thus demonstrated to be useful for fusion neutronics and shielding analyses.

  5. Deterministic Tectonic Origin Tsunami Hazard Analysis for the Eastern Mediterranean and its Connected Seas

    NASA Astrophysics Data System (ADS)

    Necmioglu, O.; Meral Ozel, N.

    2014-12-01

    Accurate earthquake source parameters are essential for any tsunami hazard assessment and mitigation, including early warning systems. The complex tectonic setting makes a priori accurate assumptions of earthquake source parameters difficult, and characterization of the faulting type is a challenge. Information on tsunamigenic sources is of crucial importance in the Eastern Mediterranean and its connected seas, especially considering the short arrival times and the lack of offshore sea-level measurements. In addition, the scientific community has had to abandon the paradigm of a ''maximum earthquake'' predictable from simple tectonic parameters (Ruff and Kanamori, 1980) in the wake of the 2004 Sumatra event (Okal, 2010), and one of the lessons learnt from the 2011 Tohoku event was that tsunami hazard maps may need to be prepared for infrequent gigantic earthquakes as well as for more frequent smaller-sized earthquakes (Satake, 2011). We have initiated an extensive modeling study to perform a deterministic tsunami hazard analysis for the Eastern Mediterranean and its connected seas. Characteristic earthquake source parameters (strike, dip, rake, depth, Mwmax) in each 0.5° x 0.5° bin for 0-40 km depth (310 bins in total) and for 40-100 km depth (92 bins in total) in the Eastern Mediterranean, Aegean and Black Sea region (30°N-48°N and 22°E-44°E) have been assigned by harmonizing the available databases and previous studies. These parameters have been used as input for the deterministic tsunami hazard modeling. Nested tsunami simulations of 6 h duration with coarse (2 arc-min) and medium (1 arc-min) grid resolutions have been run at the EC-JRC premises for the Black Sea and the Eastern and Central Mediterranean (30°N-41.5°N and 8°E-37°E) for each defined source, using the shallow-water finite-difference SWAN code (Mader, 2004), over the magnitude range 6.5 - Mwmax defined for each bin with a Mw increment of 0.1. Results show that not only earthquakes resembling the well-known historical events such as AD 365 or AD 1303 in the Hellenic Arc, but also earthquakes of lower magnitude, contribute to the tsunami hazard in the study area.

  6. Deterministic chaos in an ytterbium-doped mode-locked fiber laser

    NASA Astrophysics Data System (ADS)

    Mélo, Lucas B. A.; Palacios, Guillermo F. R.; Carelli, Pedro V.; Acioli, Lúcio H.; Rios Leite, José R.; de Miranda, Marcio H. G.

    2018-05-01

    We experimentally study the nonlinear dynamics of a femtosecond ytterbium-doped mode-locked fiber laser. With the laser operating in the pulsed regime, a route to chaos is presented, starting from stable mode-locking and passing through period-two, period-four, chaotic and period-three regimes. Return maps and bifurcation diagrams were extracted from time series for each regime. Analysis of the time series with the laser operating in the quasi-mode-locked regime reveals deterministic chaos described by a one-dimensional Rössler map. A positive Lyapunov exponent λ = 0.14 confirms the deterministic chaos of the system. We suggest an explanation for the observed map by relating it to gain saturation and intra-cavity loss.

  7. The viability of ADVANTG deterministic method for synthetic radiography generation

    NASA Astrophysics Data System (ADS)

    Bingham, Andrew; Lee, Hyoung K.

    2018-07-01

    Fast simulation techniques for generating synthetic radiographic images at high resolution are helpful when new radiation imaging systems are designed. However, the standard stochastic approach requires lengthy run times, with poorer statistics at higher resolution. The viability of a deterministic approach to synthetic radiography image generation was therefore investigated, with the aim of quantifying the computational time savings over the stochastic method. ADVANTG was compared to MCNP in multiple scenarios, including a small radiography system prototype, to simulate high-resolution radiography images. By using the ADVANTG deterministic code to simulate radiography images, the computational time was found to decrease by a factor of 10 to 13 compared to the MCNP stochastic approach while retaining image quality.

  8. A Comparison of Responses on the Attitudes toward Women Scale and Attitudes toward Feminism Scale: Is There a Difference between College-Age and Later-Life Adults with the Original Norms?

    ERIC Educational Resources Information Center

    Byrne, Zinta S.; Felker, Sydney; Vacha-Haase, Tammi; Rickard, Kathryn M.

    2011-01-01

    Responses from college-age students and those 50 years and older were compared using the Attitudes Toward Women Scale and the Attitudes Toward Feminism Scale. Results from a multigroup confirmatory factor analysis showed groups differed on each scale, suggesting unidimensional scales no longer represent attitudes toward women or feminism.…

  9. Covariance Applications in Criticality Safety, Light Water Reactor Analysis, and Spent Fuel Characterization

    DOE PAGES

    Williams, M. L.; Wiarda, D.; Ilas, G.; ...

    2014-06-15

    Recently, we processed a new covariance data library based on ENDF/B-VII.1 for the SCALE nuclear analysis code system. The multigroup covariance data are discussed here, along with testing and application results for critical benchmark experiments. Moreover, the cross section covariance library, along with covariances for fission product yields and decay data, is used to compute uncertainties in the decay heat produced by a burned reactor fuel assembly.

  10. Examining the Role of Inclusive STEM Schools in the College and Career Readiness of Students in the United States: A Multi-Group Analysis on the Outcome of Student Achievement

    ERIC Educational Resources Information Center

    Erdogan, Niyazi; Stuessy, Carol

    2016-01-01

    The most prominent option for finding a solution to the shortage of workers with STEM knowledge has been identified as specialized STEM schools by policymakers in the United States. The current perception of specialized STEM schools can be described as a unique environment that includes advanced curriculum, expert teachers, and opportunities for…

  11. A DETERMINISTIC GEOMETRIC REPRESENTATION OF TEMPORAL RAINFALL: SENSITIVITY ANALYSIS FOR A STORM IN BOSTON. (R824780)

    EPA Science Inventory

    In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...

  12. Seed availability constrains plant species sorting along a soil fertility gradient

    Treesearch

    Bryan L. Foster; Erin J. Questad; Cathy D. Collins; Cheryl A. Murphy; Timothy L. Dickson; Val H. Smith

    2011-01-01

    1. Spatial variation in species composition within and among communities may be caused by deterministic, niche-based species sorting in response to underlying environmental heterogeneity as well as by stochastic factors such as dispersal limitation and variable species pools. An important goal in ecology is to reconcile deterministic and stochastic perspectives of...

  13. The Role of Probability and Intentionality in Preschoolers' Causal Generalizations

    ERIC Educational Resources Information Center

    Sobel, David M.; Sommerville, Jessica A.; Travers, Lea V.; Blumenthal, Emily J.; Stoddard, Emily

    2009-01-01

    Three experiments examined whether preschoolers recognize that the causal properties of objects generalize to new members of the same set given either deterministic or probabilistic data. Experiment 1 found that 3- and 4-year-olds were able to make such a generalization given deterministic data but were at chance when they observed probabilistic…

  14. Service-Oriented Architecture (SOA) Instantiation within a Hard Real-Time, Deterministic Combat System Environment

    ERIC Educational Resources Information Center

    Moreland, James D., Jr

    2013-01-01

    This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…

  15. Earlier initialization of highly active antiretroviral therapy is associated with long-term survival and is cost-effective: findings from a deterministic model of a 10-year Ugandan cohort.

    PubMed

    Mills, Fergal P; Ford, Nathan; Nachega, Jean B; Bansback, Nicholas; Nosyk, Bohdan; Yaya, Sanni; Mills, Edward J

    2012-11-01

    Raising the guideline threshold for initiating antiretroviral therapy in resource-limited settings to CD4 T-cell counts of 350 cells per microliter raises concerns about feasibility and cost. We examined the costs of this shift using almost 10 years of data from Uganda. We projected the total costs of earlier initiation of combined antiretroviral therapy, including inpatient and outpatient services, antiretroviral treatment, and treatment for a limited set of HIV-related opportunistic diseases, together with benefits expressed in years of life saved, over 5- and 30-year time horizons, using a deterministic economic model to compute the incremental cost-effectiveness ratio (ICER), expressed in cost per year of life saved (YLS). Discounting both costs and benefits at 3% annually, the ICER was $695/YLS in the 5-year analysis and $769/YLS in the 30-year analysis. The results were most sensitive to program cost and the discount rate applied, and less sensitive to opportunistic infection treatment costs or the relative risk reduction from earlier initiation. When program costs were varied from 25% to 125% of the base value, the ICER at the lower bound decreased to $491/YLS at 5 years and $574/YLS at 30 years; at the upper bound, it increased to $899/YLS at 5 years and $964/YLS at 30 years. The budget impact of adoption, assuming the same level of program penetration in the community, is $261,651,942 over 5 years and $872,685,561 over 30 years. Our model showed that earlier initiation of combined antiretroviral therapy in Uganda is associated with improved long-term survival and is highly cost-effective, as defined by WHO-CHOICE.
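
    The ICER arithmetic underlying figures like $695/YLS is straightforward: discount the incremental costs and the incremental years of life saved, then take their ratio. A minimal sketch (all inputs are invented placeholders, not the study's Ugandan cost data):

        def discounted(stream, rate=0.03):
            """Present value of a yearly stream discounted at `rate`."""
            return sum(v / (1.0 + rate) ** t for t, v in enumerate(stream))

        years = 5
        # Hypothetical yearly costs (USD) and life-years per patient, both strategies
        cost_early, cost_late = [900.0] * years, [650.0] * years
        ly_early, ly_late = [0.95] * years, [0.78] * years

        delta_cost = discounted(cost_early) - discounted(cost_late)
        delta_yls = discounted(ly_early) - discounted(ly_late)
        print(f"ICER = {delta_cost / delta_yls:.0f} USD per year of life saved")

    The abstract's sensitivity analysis corresponds to rerunning this calculation while scaling the cost streams (25% to 125%) and changing the discount rate.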

  16. CPT-based probabilistic and deterministic assessment of in situ seismic soil liquefaction potential

    USGS Publications Warehouse

    Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Der Kiureghian, A.; Cetin, K.O.

    2006-01-01

    This paper presents a complete methodology for both probabilistic and deterministic assessment of seismic soil liquefaction triggering potential based on the cone penetration test (CPT). A comprehensive worldwide set of CPT-based liquefaction field case histories was compiled and back-analyzed, and the data were then used to develop probabilistic triggering correlations. Issues investigated in this study include improved normalization of CPT resistance measurements for the influence of effective overburden stress, and adjustment of CPT tip resistance for the potential influence of "thin" liquefiable layers. The effects of soil type and soil character (i.e., the "fines" adjustment) in the new correlations are based on a combination of CPT tip and sleeve resistance. To quantify probability for performance-based engineering applications, Bayesian "regression" methods were used, and the uncertainties of all variables comprising both the seismic demand and the liquefaction resistance were estimated and included in the analysis. The resulting correlations were developed in a Bayesian framework and are presented in both probabilistic and deterministic formats, and the results are compared to previous probabilistic and deterministic correlations. © 2006 ASCE.
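
    The overburden-stress normalization discussed above is commonly written as follows in the CPT liquefaction literature (a sketch of the standard expression; the notation and the fitted exponent are generic, not necessarily this paper's calibrated values):

        q_{c,1} = C_q \, q_c, \qquad C_q = \left( \frac{P_a}{\sigma'_v} \right)^{c}

    Here q_c is the measured tip resistance, P_a is atmospheric pressure, σ'_v is the vertical effective stress, and c is a normalization exponent fitted to the case-history data.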

  17. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    NASA Astrophysics Data System (ADS)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    Distribution system networks today face the challenge of meeting increased load demand from the industrial, commercial and residential sectors. The load pattern depends strongly on consumer behavior and on temporal factors such as the season of the year, the day of the week, or the time of day. Deterministic radial distribution load flow studies take the load as constant, but load varies continually and with a high degree of uncertainty, so there is a need to model a probable realistic load. Monte-Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and a deterministic radial load flow is solved for each sample. The probabilistic solution is then reconstructed from the deterministic results obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding its impact on unbalanced radial distribution load flow; and comparing the voltage profile and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
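
    The procedure described above amounts to wrapping a deterministic load flow in a Monte-Carlo loop. A minimal sketch, in which radial_load_flow is a hypothetical stand-in for a real backward/forward-sweep solver and all load statistics are illustrative:

        import numpy as np

        rng = np.random.default_rng(4)

        def radial_load_flow(p_kw, q_kvar):
            """Hypothetical stand-in for a deterministic backward/forward sweep;
            returns (minimum bus voltage in p.u., total feeder losses in kW)."""
            s = np.hypot(p_kw, q_kvar).sum()     # total apparent load, toy surrogate
            return 1.0 - 2e-5 * s, 1e-4 * s ** 1.5

        n_bus = 32
        p_mean, p_std = np.full(n_bus, 100.0), np.full(n_bus, 15.0)   # kW
        q_mean, q_std = np.full(n_bus, 60.0), np.full(n_bus, 9.0)     # kvar

        v_min, losses = [], []
        for _ in range(2000):                    # Monte-Carlo trials
            v, l = radial_load_flow(rng.normal(p_mean, p_std),
                                    rng.normal(q_mean, q_std))
            v_min.append(v)
            losses.append(l)

        print(f"V_min : {np.mean(v_min):.4f} +/- {np.std(v_min):.4f} p.u.")
        print(f"losses: {np.mean(losses):.1f} +/- {np.std(losses):.1f} kW")

    The probabilistic voltage profile and loss statistics the paper reports are exactly these per-sample deterministic results summarized over the trials.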

  18. Unsteady Flows in a Single-Stage Transonic Axial-Flow Fan Stator Row. Ph.D. Thesis - Iowa State Univ.

    NASA Technical Reports Server (NTRS)

    Hathaway, Michael D.

    1986-01-01

    Measurements of the unsteady velocity field within the stator row of a transonic axial-flow fan were acquired using a laser anemometer. Measurements were obtained on axisymmetric surfaces located at 10 and 50 percent span from the shroud, with the fan operating at maximum efficiency at design speed. The ensemble-average and variance of the measured velocities are used to identify rotor-wake-generated (deterministic) unsteadiness and turbulence, respectively. Correlations of both deterministic and turbulent velocity fluctuations provide information on the characteristics of unsteady interactions within the stator row. These correlations are derived from the Navier-Stokes equation in a manner similar to deriving the Reynolds stress terms, whereby various averaging operators are used to average the aperiodic, deterministic, and turbulent velocity fluctuations which are known to be present in multistage turbomachines. The correlations of deterministic and turbulent velocity fluctuations throughout the axial fan stator row are presented. In particular, amplification and attenuation of both types of unsteadiness are shown to occur within the stator blade passage.
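
    The averaging described above is conventionally summarized by the triple decomposition of an instantaneous velocity signal (a standard form; the notation here is generic, not necessarily the thesis's):

        u(t) = \bar{u} + \tilde{u}(t) + u'(t)

    where \bar{u} is the time-mean velocity, \tilde{u}(t) is the deterministic, rotor-locked periodic fluctuation recovered by ensemble averaging, and u'(t) is the residual turbulent fluctuation estimated from the ensemble variance; correlations such as \overline{\tilde{u}_i \tilde{u}_j} and \overline{u'_i u'_j} then enter the averaged Navier-Stokes equations in the same way as Reynolds stresses.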

  19. Precision production: enabling deterministic throughput for precision aspheres with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  20. Down to the roughness scale assessment of piston-ring/liner contacts

    NASA Astrophysics Data System (ADS)

    Checo, H. M.; Jaramillo, A.; Ausas, R. F.; Jai, M.; Buscaglia, G. C.

    2017-02-01

    The effects of surface roughness in hydrodynamic bearings have been accounted for through several approaches, the most widely used being averaging or stochastic techniques. With these, the surface is not treated “as it is” but by means of an assumed probability distribution for the roughness. The so-called direct, deterministic, or measured-surface simulations solve the lubrication problem with realistic surfaces down to the roughness scale, which leads to expensive computational problems. Most researchers have tackled this problem by considering non-moving surfaces and neglecting the ring dynamics to reduce the computational burden. What is proposed here is to perform a fully deterministic simulation both in space and in time, so that the actual movement of the surfaces and the ring dynamics are taken into account. This simulation is much more complex than previous ones, as it is intrinsically transient. The feasibility of these fully deterministic simulations is illustrated in two cases: simulation of liner surfaces with diverse finishes (honed and coated bores) under constant piston velocity and ring load, and simulation under real engine conditions.

  1. Discrete-Time Deterministic Q-Learning: A Novel Convergence Analysis.

    PubMed

    Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo

    2017-05-01

    In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated for all states and controls, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, and the convergence criterion on the learning rates required by traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties of the undiscounted case of the deterministic Q-learning algorithm are developed first; then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, to facilitate the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
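
    The distinguishing feature described above, updating the Q function over the whole state and control space in each iteration rather than along a single trajectory, can be sketched for a toy deterministic system with known dynamics (the chain dynamics, cost, and discount factor are illustrative; the paper's neural-network approximation is omitted):

        import numpy as np

        n_states, n_actions, gamma = 20, 3, 0.9

        def step(s, a):
            """Known deterministic dynamics: actions 0/1/2 move left/stay/right."""
            return min(max(s + a - 1, 0), n_states - 1)

        def cost(s, a):
            """Quadratic state cost toward the goal state plus a small control cost."""
            return 0.01 * (s - (n_states - 1)) ** 2 + 0.01 * abs(a - 1)

        Q = np.zeros((n_states, n_actions))
        for _ in range(500):
            Q_new = np.empty_like(Q)
            for s in range(n_states):          # sweep the whole state space...
                for a in range(n_actions):     # ...and the whole control space
                    Q_new[s, a] = cost(s, a) + gamma * Q[step(s, a)].min()
            if np.abs(Q_new - Q).max() < 1e-9:
                break
            Q = Q_new

        print("greedy action per state:", Q.argmin(axis=1))

    Because the sweep touches every state-action pair, each iteration is a full contraction step, which is what makes the convergence analysis tractable without per-state learning rates.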

  2. Spatial scaling patterns and functional redundancies in a changing boreal lake landscape

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Uden, Daniel R.; Johnson, Richard K.

    2015-01-01

    Global transformations extend beyond local habitats; therefore, larger-scale approaches are needed to assess community-level responses and resilience to unfolding environmental changes. Using long-term data (1996–2011), we evaluated spatial patterns and functional redundancies in the littoral invertebrate communities of 85 Swedish lakes, with the objective of assessing their potential resilience to environmental change at regional scales (that is, spatial resilience). Multivariate spatial modeling was used to differentiate groups of invertebrate species exhibiting spatial patterns in composition and abundance (that is, deterministic species) from those lacking spatial patterns (that is, stochastic species). We then determined the functional feeding attributes of the deterministic and stochastic invertebrate species, to infer resilience. Between one and three distinct spatial patterns in invertebrate composition and abundance were identified in approximately one-third of the species; the remainder were stochastic. We observed substantial differences in metrics between deterministic and stochastic species. Functional richness and diversity decreased over time in the deterministic group, suggesting a loss of resilience in regional invertebrate communities. However, taxon richness and redundancy increased monotonically in the stochastic group, indicating the capacity of regional invertebrate communities to adapt to change. Our results suggest that a refined picture of spatial resilience emerges if patterns of both the deterministic and stochastic species are accounted for. Spatially extensive monitoring may help increase our mechanistic understanding of community-level responses and resilience to regional environmental change, insights that are critical for developing management and conservation agendas in this current period of rapid environmental transformation.

  3. Stochastic Adaptive Particle Beam Tracker Using Meer Filter Feedback.

    DTIC Science & Technology

    1986-12-01

    [Fragmentary DTIC record: only scattered excerpts survive.] ...breakthrough required in controlling the beam location. In 1983, Zicker [27] conducted a feasibility study of a simple proportional gain controller... Zicker synthesized his stochastic controller designs from a deterministic optimal LQ controller assuming full state feedback. An LQ controller is a... "Merge" Method... 2.5 Simplifying the Meer Filter... Zicker ran a performance analysis on the Meer filter and found the Meer filter virtually insensitive to...

  4. Deterministic Intracellular Modeling

    DTIC Science & Technology

    2003-03-01

    [Fragmentary DTIC record: only scattered excerpts and reference-list residue survive.] ...eukaryotes encompass all plants, animals, fungi and protists [6:71]. Structures in this class are more defined; for example, cells in this class possess a... affect cells. 5.3 Recommendations: Further research into the construction and evaluation of intracellular models would benefit Air Force toxicology studies... References include: ...manual220/indexE.html; 16. MathWorks, "The Benefits of MATLAB," http://www.mathworks.com/products/matlab/description1.jsp; 17. Mendes...

  5. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation

    PubMed Central

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother’s old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother’s old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington’s genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation. PMID:26761487

  6. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation.

    PubMed

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother's old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother's old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington's genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation.

  7. Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data

    NASA Astrophysics Data System (ADS)

    Larkin, Steven Paul

    Standard crustal seismic modeling obtains deterministic velocity models which ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full (2-D) wavefield to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicates that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P to S conversions. Surface waves do not play a significant role in this coda. Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical PmP energy. Possibly related to this, inconsistencies in published velocity models are rectified by hypothesizing the existence of large, elongate, high-velocity bodies at the base of the crust, aligned with, and of similar scale as, the basins and ranges at the surface. This structure would result in an anisotropic lower crust.
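
    A common way to build such combined models is to add a band-limited Gaussian random perturbation, generated by spectral (FFT) filtering of white noise, to a smooth deterministic background. A minimal sketch (the von Karman-like spectrum exponent, correlation lengths, and 5% RMS level are illustrative assumptions, not the dissertation's calibrated statistics):

        import numpy as np

        rng = np.random.default_rng(5)
        nz, nx, dx = 256, 512, 20.0                  # grid size and spacing (m)

        # Deterministic background: velocity increasing linearly with depth
        z = np.arange(nz) * dx
        v_det = np.repeat((4000.0 + 1.2 * z)[:, None], nx, axis=1)   # m/s

        # Stochastic part: white noise shaped by a von Karman-like power spectrum
        kz = np.fft.fftfreq(nz, dx)[:, None]
        kx = np.fft.fftfreq(nx, dx)[None, :]
        az, ax = 150.0, 600.0                        # correlation lengths (m)
        spectrum = (1.0 + (kz * az) ** 2 + (kx * ax) ** 2) ** -1.0
        noise = np.fft.fft2(rng.normal(size=(nz, nx)))
        field = np.real(np.fft.ifft2(noise * np.sqrt(spectrum)))
        field *= 0.05 * v_det.mean() / field.std()   # scale to ~5% RMS perturbation

        v_combined = v_det + field
        print("mean velocity:", v_combined.mean(), "m/s, std:", field.std())

    Elongated bodies like those hypothesized at the base of the crust correspond to choosing a horizontal correlation length much larger than the vertical one, as in the sketch.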

  8. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Wilson, Paul P. H.

    In fusion energy systems (FES), neutrons born from the burning plasma activate system components. The photon dose rate after shutdown from the resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) × 10^4 relative to analog. This work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.

  9. Transmutation approximations for the application of hybrid Monte Carlo/deterministic neutron transport to shutdown dose rate analysis

    DOE PAGES

    Biondo, Elliott D.; Wilson, Paul P. H.

    2017-05-08

    In fusion energy systems (FES), neutrons born from the burning plasma activate system components. The photon dose rate after shutdown from the resulting radionuclides must be quantified. This shutdown dose rate (SDR) is calculated by coupling neutron transport, activation analysis, and photon transport. The size, complexity, and attenuating configuration of FES motivate the use of hybrid Monte Carlo (MC)/deterministic neutron transport. The Multi-Step Consistent Adjoint Driven Importance Sampling (MS-CADIS) method can be used to optimize MC neutron transport for coupled multiphysics problems, including SDR analysis, using deterministic estimates of adjoint flux distributions. When used for SDR analysis, MS-CADIS requires the formulation of an adjoint neutron source that approximates the transmutation process. In this work, transmutation approximations are used to derive a solution for this adjoint neutron source. It is shown that these approximations are reasonably met for typical FES neutron spectra and materials over a range of irradiation scenarios. When these approximations are met, the Groupwise Transmutation (GT)-CADIS method, proposed here, can be used effectively. GT-CADIS is an implementation of the MS-CADIS method for SDR analysis that uses a series of single-energy-group irradiations to calculate the adjoint neutron source. For a simple SDR problem, GT-CADIS provides speedups of 200 ± 100 relative to global variance reduction with the Forward-Weighted (FW)-CADIS method and (9 ± 5) × 10^4 relative to analog. This work shows that GT-CADIS is broadly applicable to FES problems and will significantly reduce the computational resources necessary for SDR analysis.

  10. A multi-group and preemptable scheduling of cloud resource based on HTCondor

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaowei; Zou, Jiaheng; Cheng, Yaodong; Shi, Jingyan

    2017-10-01

    Owing to the flexibility, easy control, and varied system environments offered by virtual machines, more and more fields, including high energy physics, use virtualization technology to construct distributed systems from virtual resources. This paper introduces a method used in high energy physics that supports multiple resource groups and preemptable cloud resource scheduling, combining virtual machines with HTCondor (a batch system). It makes resource control more flexible and efficient and makes resource scheduling independent of job scheduling. First, resources belong to different experiment groups, and user groups map to resource groups (identified with experiment groups) either one-to-one or many-to-one. To keep this grouping simple to manage, we designed a permission-control component that ensures each resource group receives only suitable jobs. Second, to elastically allocate resources to the appropriate resource group, resources must be scheduled much like jobs, so this paper designs a cloud resource scheduler that maintains a resource queue and allocates an appropriate amount of virtual resources to the requesting resource group. Third, because resources can be occupied for a long time, they sometimes need to be preempted. This paper adds a preemption function to the resource scheduler that implements resource preemption based on group priority. The preemption is soft: when virtual resources are preempted, jobs are not killed but are held and rematched later. This is implemented with the help of HTCondor by storing the held job information in the scheduler, releasing the job to idle status, and performing a second match. At IHEP (Institute of High Energy Physics), we have built a batch system based on HTCondor with a virtual resource pool based on OpenStack, and this paper presents cases from the JUNO and LHAASO experiments. The results indicate that the multi-group and preemptable resource scheduling efficiently supports multiple groups and soft preemption. Additionally, the permission-control component has been used in the local computing cluster, supporting the JUNO, CMS and LHAASO experiments, and its scope will be expanded to more experiments in the first half of the year, including DYW and BES. This is evidence that the permission control is efficient.
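
    The group-priority soft-preemption policy described above reduces to a small decision rule: a request from a higher-priority group preempts a slot of the lowest-priority running group when the pool is full, and the preempted slot is held for rematching rather than killed. A toy sketch of that rule (plain Python, not HTCondor's actual configuration or API; group names and priorities are illustrative):

        from dataclasses import dataclass, field

        @dataclass
        class Pool:
            capacity: int
            running: list = field(default_factory=list)  # (group, priority) in use
            held: list = field(default_factory=list)     # preempted, to rematch

            def request(self, group: str, priority: int) -> bool:
                """Grant a slot, soft-preempting the lowest-priority group if full."""
                if len(self.running) < self.capacity:
                    self.running.append((group, priority))
                    return True
                victim = min(self.running, key=lambda slot: slot[1])
                if victim[1] < priority:             # preempt only lower priority
                    self.running.remove(victim)
                    self.held.append(victim)         # hold and rematch, do not kill
                    self.running.append((group, priority))
                    return True
                return False                         # wait until a slot frees up

        pool = Pool(capacity=2)
        pool.request("JUNO", 10)
        pool.request("CMS", 10)
        print(pool.request("LHAASO", 20))   # True: one priority-10 slot is held
        print(pool.held)

    In the real system the "held" state is realized through HTCondor's job hold/release mechanism, so preempted work re-enters matchmaking instead of being lost.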

  11. Aspen succession in the Intermountain West: A deterministic model

    Treesearch

    Dale L. Bartos; Frederick R. Ward; George S. Innis

    1983-01-01

    A deterministic model of succession in aspen forests was developed using existing data and intuition. The degree of uncertainty, which was determined by allowing the parameter values to vary at random within limits, was larger than desired. This report presents results of an analysis of model sensitivity to changes in parameter values. These results have indicated...

  12. Using stochastic models to incorporate spatial and temporal variability [Exercise 14]

    Treesearch

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...

  13. Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System

    ERIC Educational Resources Information Center

    Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.

    2016-01-01

    Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…

  14. Guidelines 13 and 14—Prediction uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire

    2005-01-01

    An advantage of using optimization for model development and calibration is that optimization provides methods for evaluating and quantifying prediction uncertainty. Both deterministic and statistical methods can be used. Guideline 13 discusses using regression and post-audits, which we classify as deterministic methods. Guideline 14 discusses inferential statistics and Monte Carlo methods, which we classify as statistical methods.

  15. Deterministic switching of hierarchy during wrinkling in quasi-planar bilayers

    DOE PAGES

    Saha, Sourabh K.; Culpepper, Martin L.

    2016-04-25

    Emergence of hierarchy during compression of quasi-planar bilayers is preceded by a mode-locked state during which the quasi-planar form persists. Transition to hierarchy is determined entirely by geometrically observable parameters. This results in a universal transition phase diagram that enables one to deterministically tune hierarchy even with limited knowledge about material properties.

  16. Stochastic and deterministic models for agricultural production networks.

    PubMed

    Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D

    2007-07-01

    An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.
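
    The relationship between a stochastic network model and its deterministic counterpart for sample-path averages can be made concrete with a toy birth-death process: Gillespie sample paths fluctuate around the solution of the corresponding mean-field ODE. The Python sketch below uses invented rates and is not the paper's production-network model.

      import numpy as np

      rng = np.random.default_rng(1)
      b, d = 2.0, 0.02       # constant birth rate, per-capita death rate

      def gillespie(x0=20, t_end=50.0):
          """One sample path of the birth-death process (Gillespie SSA)."""
          t, x = 0.0, x0
          while True:
              birth, death = b, d * x
              total = birth + death
              wait = rng.exponential(1.0 / total)
              if t + wait > t_end:
                  return x                    # state at time t_end
              t += wait
              x += 1 if rng.random() < birth / total else -1

      def ode_mean(x0=20, t_end=50.0, dt=0.01):
          """Approximate deterministic model for the average:
          dx/dt = b - d*x, integrated with the Euler method."""
          x = float(x0)
          for _ in range(int(t_end / dt)):
              x += dt * (b - d * x)
          return x

      samples = [gillespie() for _ in range(200)]
      print("mean of 200 sample paths:", np.mean(samples))   # ~ 70.6
      print("deterministic average:   ", ode_mean())         # ~ 70.6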

  17. Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System

    ERIC Educational Resources Information Center

    Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.

    2015-01-01

    Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…

  18. Probabilistic direct counterfactual quantum communication

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng

    2017-02-01

    It is striking that the quantum Zeno effect can be used to launch a direct counterfactual communication between two spatially separated parties, Alice and Bob. So far, existing protocols of this type only provide a deterministic counterfactual communication service. However, this counterfactuality comes at a price. Firstly, the transmission time is much longer than that of a classical transmission. Secondly, the chained-cycle structure makes such protocols more sensitive to channel noise. Here, we extend the idea of counterfactual communication and present a probabilistic-counterfactual quantum communication protocol, which is proved to have advantages over the deterministic ones. Moreover, the presented protocol could evolve to a deterministic one solely by adjusting the parameters of the beam splitters. Project supported by the National Natural Science Foundation of China (Grant No. 61300203).

  19. Positive dwell time algorithm with minimum equal extra material removal in deterministic optical surfacing technology.

    PubMed

    Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun

    2017-11-10

    In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numerical control machines is crucial for guaranteeing a high convergence ratio for the optical surface error. It is necessary to consider machine dynamics limitations in numerical dwell time algorithms. In this paper, these constraints on the dwell time distribution are analyzed, and a model of equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
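
    Stripped to its core, dwell-time computation is a nonnegativity-constrained linear inverse problem: the tool influence function, sampled into a removal matrix, times the dwell-time vector must match the measured surface error. The Python sketch below shows only that constrained step, using an invented one-dimensional influence function; the paper's algorithm additionally finds the minimum uniform extra removal, which this sketch merely imitates by trying a few constant offsets.

      import numpy as np
      from scipy.optimize import nnls

      # Invented 1-D example: surface error e (nm) at m points and a
      # Gaussian tool influence function sampled into matrix A, where
      # A[i, j] is the removal rate at point i when dwelling at point j.
      m = 60
      x = np.linspace(-1.0, 1.0, m)
      A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))
      e = 50.0 + 30.0 * np.sin(3 * x)        # measured error, all positive

      # Raising the target by a constant c (uniform extra removal) makes a
      # nonnegative dwell solution feasible; the cited algorithm seeks the
      # minimum such c, while here we simply compare a few values.
      for c in (0.0, 10.0, 20.0):
          t, resid = nnls(A, e + c)          # dwell times constrained >= 0
          print(f"offset {c:4.1f} nm: residual {resid:.3e}, "
                f"total dwell {t.sum():.1f}")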

  20. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    PubMed

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow have been presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization methods, respectively, have been applied for the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate, and potassium dichromate have been determined by means of the presented methods and FIA responses that are available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
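
    The deterministic route in such work reduces to curve fitting: model the detector response as a function of the diffusion coefficient and let a Levenberg-Marquardt optimizer match it to the recorded peak. The Python sketch below substitutes a simple one-dimensional advection-dispersion solution for the full FIA model; geometry, velocity, and noise level are invented.

      import numpy as np
      from scipy.optimize import curve_fit

      L, u = 0.5, 0.01        # channel length (m) and mean velocity (m/s)

      def response(t, log10_D, amp):
          """1-D advection-dispersion impulse response at distance L.
          D is log-parameterized so the unconstrained LM steps cannot
          drive it negative."""
          D = 10.0 ** log10_D
          return (amp / np.sqrt(4 * np.pi * D * t)
                  * np.exp(-(L - u * t) ** 2 / (4 * D * t)))

      # Synthetic "measured" trace with noise; true D = 5e-4 m^2/s.
      t = np.linspace(1.0, 120.0, 400)
      rng = np.random.default_rng(0)
      y = response(t, np.log10(5e-4), 1.0) + rng.normal(0, 0.02, t.size)

      # With no bounds given, curve_fit uses Levenberg-Marquardt.
      (log10_D, amp), _ = curve_fit(response, t, y, p0=[-4.0, 0.5])
      print(f"fitted D = {10.0 ** log10_D:.2e} m^2/s")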

  1. Integrated deterministic and probabilistic safety analysis for safety assessment of nuclear power plants

    DOE PAGES

    Di Maio, Francesco; Zio, Enrico; Smith, Curtis; ...

    2015-07-06

    The present special issue contains an overview of the research in the field of Integrated Deterministic and Probabilistic Safety Assessment (IDPSA) of Nuclear Power Plants (NPPs). Traditionally, safety regulation for NPP design and operation has been based on Deterministic Safety Assessment (DSA) methods to verify criteria that assure plant safety in a number of postulated Design Basis Accident (DBA) scenarios. Referring to such criteria, it is also possible to identify those plant Structures, Systems, and Components (SSCs) and activities that are most important for safety within those postulated scenarios. Then, the design, operation, and maintenance of these “safety-related” SSCs and activities are controlled through regulatory requirements and supported by Probabilistic Safety Assessment (PSA).

  2. The POPOP4 library and codes for preparing secondary gamma-ray production cross sections

    NASA Technical Reports Server (NTRS)

    Ford, W. E., III

    1972-01-01

    The POPOP4 code for converting secondary gamma ray yield data to multigroup secondary gamma ray production cross sections and the POPOP4 library of secondary gamma ray yield data are described. Recent results of the testing of uranium and iron data sets from the POPOP4 library are given. The data sets were tested by comparing calculated secondary gamma ray pulse height spectra measured at the ORNL TSR-II reactor.

  3. School belongingness and mental health functioning across the primary-secondary transition in a mainstream sample: multi-group cross-lagged analyses.

    PubMed

    Vaz, Sharmila; Falkmer, Marita; Parsons, Richard; Passmore, Anne Elizabeth; Parkin, Timothy; Falkmer, Torbjörn

    2014-01-01

    The relationship between school belongingness and mental health functioning before and after the primary-secondary school transition has not been previously investigated in students with and without disabilities. This study used a prospective longitudinal design to test the bi-directional relationships between these constructs, by surveying 266 students with and without disabilities and their parents, 6-months before and after the transition to secondary school. Cross-lagged multi-group analyses found student perception of belongingness in the final year of primary school to contribute to change in their mental health functioning a year later. The beneficial longitudinal effects of school belongingness on subsequent mental health functioning were evident in all student subgroups; even after accounting for prior mental health scores and the cross-time stability in mental health functioning and school belongingness scores. Findings of the current study substantiate the role of school contextual influences on early adolescent mental health functioning. They highlight the importance for primary and secondary schools to assess students' school belongingness and mental health functioning and transfer these records as part of the transition process, so that appropriate scaffolds are in place to support those in need. Longer term longitudinal studies are needed to increase the understanding of the temporal sequencing between school belongingness and mental health functioning of all mainstream students.

  4. Improvements in health-related quality of life, cardio-metabolic health, and fitness in postmenopausal women after a supervised, multicomponent, adapted exercise program in a suited health promotion intervention: a multigroup study.

    PubMed

    Godoy-Izquierdo, Débora; Guevara, Nicolás Mendoza Ladrón de; Toral, Mercedes Vélez; Galván, Carlos de Teresa; Ballesteros, Alberto Salamanca; García, Juan F Godoy

    2017-08-01

    This study explored the multidimensional outcomes that resulted from the adherence to regular exercise among previously sedentary postmenopausal women. The exercise was managed through a supervised, multicomponent, adapted approximately 20-week program in a suited health promotion intervention. A multigroup, mixed-design study with between-group (intervention, sedentary, and active women) and within-subject measures (baseline, postintervention, and 3- and 12-month follow-ups) was conducted using intention-to-treat methodology. The Cervantes Scale assessed health-related quality of life (HRQoL), and several indicators of cardio-metabolic status and fitness were also assessed. After the intervention, the participants experienced positive changes in short and long-term physical and mental health, with significant enhancements in several HRQoL dimensions, particularly mental well-being and menopause-related health and subdomains. Improvements were maintained or continued (eg, mental well-being) throughout the period, leading up to the 12-month follow-up. These outcomes were accompanied by significant improvements in cardio-metabolic status and fitness, including weight, body mass index, cardio-respiratory fitness, and flexibility. Our findings parallel previous empirical evidence showing the benefits associated with regular exercise, and add evidence to the association of positive outcomes in HRQoL with improvements in cardio-metabolic health and fitness status after the adoption of an active lifestyle.

  5. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van der Holst, B.; Toth, G.; Sokolov, I. V.

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  6. Measurement equivalence: A non-technical primer on categorical multi-group confirmatory factor analysis in school psychology.

    PubMed

    Pendergast, Laura L; von der Embse, Nathaniel; Kilgus, Stephen P; Eklund, Katie R

    2017-02-01

    Evidence-based interventions (EBIs) have become a central component of school psychology research and practice, but EBIs are dependent upon the availability and use of evidence-based assessments (EBAs) with diverse student populations. Multi-group confirmatory factor analysis (MG-CFA) is an analytical tool that can be used to examine the validity and measurement equivalence/invariance of scores across diverse groups. The objective of this article is to provide a conceptual and procedural overview of categorical MG-CFA, as well as an illustrated example based on data from the Social and Academic Behavior Risk Screener (SABRS) - a tool designed for use in school-based interventions. This article serves as a non-technical primer on the topic of MG-CFA with ordinal (rating scale) data and does so through the framework of examining equivalence of measures used for EBIs within multi-tiered models - an understudied topic. To go along with the illustrated example, we have provided supplementary files that include sample data, Mplus input code, and an annotated guide for understanding the input code (http://dx.doi.org/10.1016/j.jsp.2016.11.002). Data needed to reproduce analyses in this article are available as supplemental materials (online only) in the Appendix of this article. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  7. Cross-cultural examination of measurement invariance of the Beck Depression Inventory-II.

    PubMed

    Dere, Jessica; Watters, Carolyn A; Yu, Stephanie Chee-Min; Bagby, R Michael; Ryder, Andrew G; Harkness, Kate L

    2015-03-01

    Given substantial rates of major depressive disorder among college and university students, as well as the growing cultural diversity on many campuses, establishing the cross-cultural validity of relevant assessment tools is important. In the current investigation, we examined the Beck Depression Inventory-Second Edition (BDI-II; Beck, Steer, & Brown, 1996) among Chinese-heritage (n = 933) and European-heritage (n = 933) undergraduates in North America. The investigation integrated 3 distinct lines of inquiry: (a) the literature on cultural variation in depressive symptom reporting between people of Chinese and Western heritage; (b) recent developments regarding the factor structure of the BDI-II; and (c) the application of advanced statistical techniques to the issue of cross-cultural measurement invariance. A bifactor model was found to represent the optimal factor structure of the BDI-II. Multigroup confirmatory factor analysis showed that the BDI-II had strong measurement invariance across both culture and gender. In group comparisons with latent and observed variables, Chinese-heritage students scored higher than European-heritage students on cognitive symptoms of depression. This finding deviates from the commonly held view that those of Chinese heritage somatize depression. These findings hold implications for the study and use of the BDI-II, highlight the value of advanced statistical techniques such as multigroup confirmatory factor analysis, and offer methodological lessons for cross-cultural psychopathology research more broadly. 2015 APA, all rights reserved

  8. Ethnic Identity and Regional Differences in Mental Health in a National Sample of African American Young Adults.

    PubMed

    Williams, Monnica T; Duque, Gerardo; Wetterneck, Chad T; Chapman, L Kevin; DeLapp, Ryan C T

    2018-04-01

    Prior research has found that a strong positive ethnic identity is a protective factor against anxiety and depression in African Americans. In this study, ethnic identity is examined in a geographically representative sample of African American young adults (n = 242), using the Multigroup Ethnic Identity Measure (MEIM) (Phinney in J Adolescent Res 7:156-76, 15). The two-factor structure of the measure (Roberts et al. in J Early Adolescence 19:301-22, 1) was analyzed using a structural equation model and displayed an acceptable fit only when multiple error terms were correlated. A multigroup confirmatory factor analysis revealed measurement equivalence of the two-factor structure between African Americans from Southern and non-Southern regions of the USA. We found that significantly higher levels of ethnic identity were present among African American in the South compared to other regions, and region significantly predicted total ethnic identity scores in a linear regression, even when controlling for gender, age, urbanicity, and years of education. Furthermore, among African Americans, living in the South was significantly correlated with less help-seeking for diagnosed depression, anxiety, and/or obsessive-compulsive disorder, where help-seeking was defined as obtaining a diagnosis by a professional. The role of ethnic identity and social support are discussed in the context of African American mental health.

  9. School Belongingness and Mental Health Functioning across the Primary-Secondary Transition in a Mainstream Sample: Multi-Group Cross-Lagged Analyses

    PubMed Central

    Vaz, Sharmila; Falkmer, Marita; Parsons, Richard; Passmore, Anne Elizabeth; Parkin, Timothy; Falkmer, Torbjörn

    2014-01-01

    The relationship between school belongingness and mental health functioning before and after the primary-secondary school transition has not been previously investigated in students with and without disabilities. This study used a prospective longitudinal design to test the bi-directional relationships between these constructs, by surveying 266 students with and without disabilities and their parents, 6-months before and after the transition to secondary school. Cross-lagged multi-group analyses found student perception of belongingness in the final year of primary school to contribute to change in their mental health functioning a year later. The beneficial longitudinal effects of school belongingness on subsequent mental health functioning were evident in all student subgroups; even after accounting for prior mental health scores and the cross-time stability in mental health functioning and school belongingness scores. Findings of the current study substantiate the role of school contextual influences on early adolescent mental health functioning. They highlight the importance for primary and secondary schools to assess students’ school belongingness and mental health functioning and transfer these records as part of the transition process, so that appropriate scaffolds are in place to support those in need. Longer term longitudinal studies are needed to increase the understanding of the temporal sequencing between school belongingness and mental health functioning of all mainstream students. PMID:24967580

  10. Multi-Group Reductions of LTE Air Plasma Radiative Transfer in Cylindrical Geometries

    NASA Technical Reports Server (NTRS)

    Scoggins, James; Magin, Thierry Edouard Bertran; Wray, Alan; Mansour, Nagi N.

    2013-01-01

    Air plasma radiation in Local Thermodynamic Equilibrium (LTE) within cylindrical geometries is studied with an application towards modeling the radiative transfer inside arc-constrictors, a central component of constricted-arc arc jets. A detailed database of spectral absorption coefficients for LTE air is formulated using the NEQAIR code developed at NASA Ames Research Center. The database stores calculated absorption coefficients for 1,051,755 wavelengths between 0.04 µm and 200 µm over a wide temperature (500 K to 15,000 K) and pressure (0.1 atm to 10.0 atm) range. The multi-group method for spectral reduction is studied by generating a range of reductions, including pure binning and banding reductions, from the detailed absorption coefficient database. The accuracy of each reduction is compared to line-by-line calculations for cylindrical temperature profiles resembling typical profiles found in arc-constrictors. It is found that a reduction of only 1000 groups is sufficient to accurately model the LTE air radiation over a large temperature and pressure range. In addition to the reduction comparison, the cylindrical-slab formulation is compared with the finite-volume method for the numerical integration of the radiative flux inside cylinders with varying length. It is determined that cylindrical slabs can be used to accurately model most arc-constrictors due to their high length-to-radius ratios.
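
    The essence of a multi-group reduction is replacing a line-by-line absorption spectrum with a few group-averaged coefficients, for instance Planck means weighted by the blackbody function. The Python below sketches pure binning on an invented spectrum; the NEQAIR-based database and the banding strategies studied in the paper are far more detailed.

      import numpy as np

      h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

      def planck(lam, T):
          """Blackbody spectral radiance vs. wavelength (SI units)."""
          return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

      # Invented line-by-line spectrum: smooth background plus two "lines".
      lam = np.linspace(0.2e-6, 20e-6, 200_000)
      kappa = (0.1
               + 5.0 * np.exp(-((lam - 1e-6) / 5e-9) ** 2)
               + 8.0 * np.exp(-((lam - 4e-6) / 2e-8) ** 2))

      def planck_mean_groups(lam, kappa, T, n_groups):
          """Planck-mean absorption coefficient in each wavelength group
          (uniform grid, so plain weighted sums suffice)."""
          edges = np.linspace(lam[0], lam[-1], n_groups + 1)
          means = []
          for lo, hi in zip(edges[:-1], edges[1:]):
              sel = (lam >= lo) & (lam <= hi)
              w = planck(lam[sel], T)
              means.append((kappa[sel] * w).sum() / w.sum())
          return np.array(means)

      print(planck_mean_groups(lam, kappa, T=8000.0, n_groups=8))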

  11. Moderating effect of gender on the prospective relation of physical activity with psychosocial outcomes and asthma control in adolescents: a longitudinal study.

    PubMed

    Tiggelman, Dana; van de Ven, Monique O M; van Schayck, Onno C P; Engels, Rutger C M E

    2014-12-01

    Adolescents with asthma experience more psychosocial and physiological problems compared to their healthy peers. Physical activity (PA) might decrease these problems. This study was the first observational longitudinal study to examine whether habitual PA could predict changes in psychosocial outcomes (i.e., symptoms of anxiety and depression, quality of life [QOL] and stress) and asthma control over time in adolescents with asthma and whether gender moderated these relationships. Adolescents with asthma (N = 253; aged 10-14 years at baseline) were visited at home in the spring/summer of 2012 and 2013. They completed questionnaires assessing their habitual PA, symptoms of anxiety and depression, QOL, perceived stress and asthma control. Path analyses using Mplus were conducted to examine longitudinal relationships among habitual PA, psychosocial outcomes and asthma control (controlled for body mass index, age and gender). Using multi-group analyses, we examined whether gender moderated these relationships. Path analyses in the total group showed that habitual PA did not predict changes in psychosocial outcomes or asthma control over time. Multi-group analyses showed that gender moderated the relation of habitual PA with anxiety and depression. Habitual PA only significantly predicted a decrease in anxiety and depression over time for girls but not for boys. Increasing habitual PA in girls with asthma might decrease their symptoms of anxiety and depression.

  12. Racial/Ethnic Differences Moderate Associations of Coping Strategies and Posttraumatic Stress Disorder Symptom Clusters among Women Experiencing Partner Violence: A Multigroup Path Analysis

    PubMed Central

    Weiss, Nicole H.; Johnson, Clinesha D.; Contractor, Ateka; Peasant, Courtney; Swan, Suzanne C.; Sullivan, Tami P.

    2017-01-01

    Background Past research underscores the key role of coping strategies in the development, maintenance, and exacerbation of posttraumatic stress disorder (PTSD) symptoms. The goal of the current study was to extend existing literature by examining whether race/ethnicity moderates the relations among coping strategies (social support, problem-solving, avoidance) and PTSD symptom clusters (intrusion, avoidance, numbing, arousal). Methods Participants were 369 community women (134 African Americans, 131 Latinas, 104 Whites) who reported bidirectional aggression with a current male partner. Multigroup path analysis was utilized to test the moderating role of race/ethnicity in a model linking coping strategies to PTSD symptom clusters. Results The strength and direction of relations among coping strategies and PTSD symptom clusters varied as a function of race/ethnicity. Greater social support coping was related to more arousal symptoms for Latinas and Whites. Greater problem-solving coping was related to fewer arousal symptoms for Latinas. Greater avoidance coping was related to more symptoms across many of the PTSD clusters for African Americans, Latinas, and Whites, however, these relations were strongest for African Americans. Conclusion Results provide support for the moderating role of race/ethnicity in the relations among coping strategies and PTSD symptom clusters, and highlight potential targets for culturally-informed PTSD treatments. PMID:27575609

  13. Analysis of sensitive questions across cultures: an application of multigroup item randomized response theory to sexual attitudes and behavior.

    PubMed

    de Jong, Martijn G; Pieters, Rik; Stremersch, Stefan

    2012-09-01

    Answers to sensitive questions are prone to social desirability bias. If not properly addressed, the validity of the research can be suspect. This article presents multigroup item randomized response theory (MIRRT) to measure self-reported sensitive topics across cultures. The method was specifically developed to reduce social desirability bias by making an a priori change in the design of the survey. The change involves the use of a randomization device (e.g., a die) that preserves participants' privacy at the item level. In cases where multiple items measure a higher level theoretical construct, the researcher could still make inferences at the individual level. The method can correct for under- and overreporting, even if both occur in a sample of individuals or across nations. We present and illustrate MIRRT in a nontechnical manner, provide WinBugs software code so that researchers can directly implement it, and present 2 cross-national studies in which it was applied. The first study compared nonstudent samples from 2 countries (total n = 927) on permissive sexual attitudes and risky sexual behavior and related these to individual-level characteristics such as the Big Five personality traits. The second study compared nonstudent samples from 17 countries (total n = 6,195) on risky sexual behavior and related these to individual-level characteristics, such as gender and age, and to country-level characteristics, such as sex ratio.
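
    The arithmetic that lets a randomizing device protect privacy while still permitting inference is easy to show for the classical forced-response design, a simpler relative of the item randomized response models used in the paper. In the invented configuration below, a die decides whether the respondent answers truthfully or gives a forced answer, and the observed "yes" rate is inverted to recover the true prevalence.

      import numpy as np

      rng = np.random.default_rng(42)

      # Die outcomes: 1-4 answer truthfully, 5 forced "yes", 6 forced "no".
      p_truth, p_yes = 4 / 6, 1 / 6
      true_pi = 0.30                # true prevalence of the sensitive trait
      n = 10_000

      die = rng.random(n)
      honest = rng.random(n) < true_pi
      answers = np.where(die < p_truth, honest,            # truthful
                np.where(die < p_truth + p_yes, True,      # forced yes
                         False))                           # forced no

      # E[answer] = p_truth * pi + p_yes, so invert for the estimate.
      pi_hat = (answers.mean() - p_yes) / p_truth
      print(f"estimated prevalence {pi_hat:.3f} (true {true_pi})")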

  14. Neo-Deterministic Seismic Hazard Assessment at Watts Bar Nuclear Power Plant Site, Tennessee, USA

    NASA Astrophysics Data System (ADS)

    Brandmayr, E.; Cameron, C.; Vaccari, F.; Fasan, M.; Romanelli, F.; Magrin, A.; Vlahovic, G.

    2017-12-01

    Watts Bar Nuclear Power Plant (WBNPP) is located within the Eastern Tennessee Seismic Zone (ETSZ), the second most naturally active seismic zone in the US east of the Rocky Mountains. The largest instrumental earthquakes in the ETSZ are M 4.6, although paleoseismic evidence supports events of M≥6.5. Events are mainly strike-slip and occur on steeply dipping planes at an average depth of 13 km. In this work, we apply the neo-deterministic seismic hazard assessment to estimate the potential seismic input at the plant site, which has been recently targeted by the Nuclear Regulatory Commission for a seismic hazard reevaluation. First, we perform a parametric test on some seismic source characteristics (i.e. distance, depth, strike, dip and rake) using a one-dimensional regional bedrock model to define the most conservative scenario earthquakes. Then, for the selected scenario earthquakes, the estimate of the ground motion input at WBNPP is refined using a two-dimensional local structural model (based on the plant's operator documentation) with topography, thus looking for site amplification and different possible rupture processes at the source. WBNNP features a safe shutdown earthquake (SSE) design with PGA of 0.18 g and maximum spectral amplification (SA, 5% damped) of 0.46 g (at periods between 0.15 and 0.5 s). Our results suggest that, although for most of the considered scenarios the PGA is relatively low, SSE values can be reached and exceeded in the case of the most conservative scenario earthquakes.

  15. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    DOE PAGES

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...

    2015-02-18

    Here we review developments in the understanding of cycle to cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes andmore » thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low dimensional deterministic structure, which implies some degree of predictability and potential for real time control. These deterministic effects are typically more pronounced near critical stability limits (e.g. near tipping points associated with ignition or flame propagation) such during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. In conclusion, taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.« less

  16. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    PubMed

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  17. Efficient Algorithms for Handling Nondeterministic Automata

    NASA Astrophysics Data System (ADS)

    Vojnar, Tomáš

    Finite (word, tree, or omega) automata play an important role in different areas of computer science, including, for instance, formal verification. Often, deterministic automata are used for which traditional algorithms for important operations such as minimisation and inclusion checking are available. However, the use of deterministic automata implies a need to determinise nondeterministic automata that often arise during various computations even when the computations start with deterministic automata. Unfortunately, determinisation is a very expensive step since deterministic automata may be exponentially bigger than the original nondeterministic automata. That is why it appears advantageous to avoid determinisation and work directly with nondeterministic automata. This, however, brings a need to be able to implement operations traditionally done on deterministic automata on nondeterministic automata instead. In particular, this is the case of inclusion checking and minimisation (or rather reduction of the size of automata). In the talk, we review several recently proposed techniques for inclusion checking on nondeterministic finite word and tree automata as well as Büchi automata. These techniques are based on using the so called antichains, possibly combined with a use of suitable simulation relations (and, in the case of Büchi automata, the so called Ramsey-based or rank-based approaches). Further, we discuss techniques for reducing the size of nondeterministic word and tree automata using quotienting based on the recently proposed notion of mediated equivalences. The talk is based on several common works with Parosh Aziz Abdulla, Ahmed Bouajjani, Yu-Fang Chen, Peter Habermehl, Lisa Kaati, Richard Mayr, Tayssir Touili, Lorenzo Clemente, Lukáš Holík, and Chih-Duo Hong.
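
    The antichain idea can be illustrated on the simplest such problem, universality of a nondeterministic finite automaton: explore macro-states of the subset construction, reject as soon as a reachable macro-state contains no accepting state, and prune any macro-state that is a superset of one already explored, since a counterexample reachable from the superset would also be reachable from the subset. The Python below is a minimal sketch, not one of the optimized algorithms from the cited works.

      def post(S, a, delta):
          """Successor macro-state of S on letter a."""
          out = set()
          for q in S:
              out |= delta.get((q, a), set())
          return frozenset(out)

      def is_universal(alphabet, delta, init, final):
          """Antichain-based universality test: does the NFA accept
          every word?  Only subset-minimal macro-states are explored."""
          start = frozenset(init)
          if not (start & final):
              return False                  # the empty word is rejected
          antichain, frontier = [start], [start]
          while frontier:
              S = frontier.pop()
              for a in alphabet:
                  T = post(S, a, delta)
                  if not (T & final):
                      return False          # some word is rejected
                  if any(M <= T for M in antichain):
                      continue              # subsumed by a smaller set
                  antichain = [M for M in antichain if not T <= M]
                  antichain.append(T)
                  frontier.append(T)
          return True

      # Smoke test: one final, initial state with self-loops on {a, b}.
      delta = {("q0", "a"): {"q0"}, ("q0", "b"): {"q0"}}
      print(is_universal("ab", delta, init={"q0"}, final={"q0"}))  # True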

  18. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes: Deterministic assembly of hyporheic microbiomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Emily B.; Crump, Alex R.; Resch, Charles T.

    2017-03-28

    Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.

  19. Theory and applications of a deterministic approximation to the coalescent model

    PubMed Central

    Jewett, Ethan M.; Rosenberg, Noah A.

    2014-01-01

    Under the coalescent model, the random number n_t of lineages ancestral to a sample is nearly deterministic as a function of time when n_t is moderate to large in value, and it is well approximated by its expectation E[n_t]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[n_t] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation n_t ≈ E[n_t] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[n_t] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation n_t ≈ E[n_t] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
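
    The flavor of such approximations can be shown directly. In coalescent time units, the expectation approximately obeys the mean-field ODE dn/dt = -n(n-1)/2, whose closed-form solution is n(t) = n0 / (n0 - (n0 - 1) e^(-t/2)). The Python below checks that solution against direct integration for an invented sample size; it illustrates the general idea rather than any specific formula from the paper.

      import numpy as np

      def n_closed(t, n0):
          """Closed-form solution of dn/dt = -n(n-1)/2."""
          return n0 / (n0 - (n0 - 1) * np.exp(-t / 2))

      def n_euler(t_end, n0, dt=1e-4):
          """Euler integration of the same mean-field ODE, as a check."""
          n = float(n0)
          for _ in range(int(t_end / dt)):
              n -= dt * n * (n - 1) / 2
          return n

      for t in (0.01, 0.1, 1.0):
          print(f"t = {t:4}: closed form {n_closed(t, 100):8.3f}, "
                f"Euler {n_euler(t, 100):8.3f}")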

  20. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    PubMed Central

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages—which provide a larger spatiotemporal scale relative to within stage analyses—revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended—and experimentally testable—conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems. PMID:25733885

  1. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexities remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
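
    Ordinal Optimization's central claim, that order is far more robust to evaluation noise than value, is easy to demonstrate numerically: rank designs with a crude noisy evaluation, keep a small selected set, and check how often it intersects the true top designs. The Python below uses invented performance values and noise levels.

      import numpy as np

      rng = np.random.default_rng(7)
      N = 1000                                    # sampled designs
      true_perf = rng.standard_normal(N)          # unknown true values
      noisy = true_perf + rng.normal(0, 2.0, N)   # cheap, very noisy model

      g = s = 50                                  # good-enough / selected sets
      good = set(np.argsort(true_perf)[-g:])      # true top designs
      selected = set(np.argsort(noisy)[-s:])      # top by noisy ordinal rank

      # Goal softening: we only ask that the sets intersect, not that the
      # single best design is found.  Even with noise twice the spread of
      # the signal, the overlap is almost surely nonempty.
      print("alignment between good and selected:", len(good & selected))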

  2. Deterministic Execution of Ptides Programs

    DTIC Science & Technology

    2013-05-15

    …at a time no later than 30 + 1 + 5 = 36. Assume the maximum clock synchronization error is … Therefore, the AddSubtract adder must delay processing the… the synchronization of the platform real-time clock to its peers in other system platforms. The portions of PtidyOS code that implement access to the… interesting opportunities for future research. References: [1] Y. Zhao, E. A. Lee, and J. Liu, “A programming model for time-synchronized distributed real…

  3. Composing Data and Process Descriptions in the Design of Software Systems.

    DTIC Science & Technology

    1988-05-01

    …accompanying ’data’ specification. So, for example, the bank account of Section 2.2.3 became ACC = open?d → ACC_init(d); ACC_A = payin?p → ACC_credit(A,p) | wdraw?w → ACC_withdraw(A,w) | bal!balance(A) → ACC_A | close → STOP, where A has abstract type Account, with operators (that is, side-effect free functions)… 3.5 Non-deterministic merge; 4.1 Specification of a ticket machine system.

  4. Modeling the Combined Effects of Deterministic and Statistical Structure for Optimization of Regional Monitoring

    DTIC Science & Technology

    2014-06-30

    …stations in Eurasia. This is accomplished by synthesizing seismograms using a radiative transport technique to predict the high-frequency coda (>5 Hz)…

  5. Proceedings of the Expert Systems Workshop Held in Pacific Grove, California on 16-18 April 1986

    DTIC Science & Technology

    1986-04-18

    …are distributed and parallel. (* – features unimplemented at present; scheduled for phase 2.) Table 1-1: Key design characteristics of ABE… data structuring techniques and a semi-deterministic scheduler. A program for the DF framework consists of a number of independent processing modules…

  6. A stochastic model for correlated protein motions

    NASA Astrophysics Data System (ADS)

    Karain, Wael I.; Qaraeen, Nael I.; Ajarmah, Basem

    2006-06-01

    A one-dimensional Langevin-type stochastic difference equation is used to find the deterministic and Gaussian contributions of time series representing the projections of a Bovine Pancreatic Trypsin Inhibitor (BPTI) protein molecular dynamics simulation along different eigenvector directions determined using principal component analysis. The deterministic part shows a distinct nonlinear behavior only for eigenvectors contributing significantly to the collective protein motion.
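
    Separating a series into deterministic and Gaussian parts in this fashion amounts to estimating the drift term of a Langevin equation from data, for example by conditional averaging of increments. The Python below does this for a synthetic Ornstein-Uhlenbeck series; it shows the generic procedure, not the authors' exact analysis of the BPTI projections.

      import numpy as np

      rng = np.random.default_rng(3)
      dt, n = 0.01, 200_000
      theta, sigma = 1.5, 0.8      # relaxation rate, noise amplitude

      # Synthetic series: x_{k+1} = x_k - theta*x_k*dt + sigma*sqrt(dt)*xi_k
      x = np.empty(n)
      x[0] = 0.0
      xi = rng.standard_normal(n - 1)
      for k in range(n - 1):
          x[k + 1] = x[k] - theta * x[k] * dt + sigma * np.sqrt(dt) * xi[k]

      # Drift estimate: D1(x) ~ E[x_{k+1} - x_k | x_k = x] / dt, binned in x.
      bins = np.linspace(-1.5, 1.5, 31)
      centers = 0.5 * (bins[:-1] + bins[1:])
      idx = np.digitize(x[:-1], bins) - 1
      inc = (x[1:] - x[:-1]) / dt
      drift = np.array([inc[idx == i].mean() if np.any(idx == i) else np.nan
                        for i in range(len(centers))])

      # For an OU process the deterministic part should be close to -theta*x.
      print(np.c_[centers[::6], drift[::6], -theta * centers[::6]])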

  7. Values in Science: Making Sense of Biology Doctoral Students' Critical Examination of a Deterministic Claim in a Media Article

    ERIC Educational Resources Information Center

    Raveendran, Aswathy; Chunawala, Sugra

    2015-01-01

    Several educators have emphasized that students need to understand science as a human endeavor that is not value free. In the exploratory study reported here, we investigated how doctoral students of biology understand the intersection of values and science in the context of genetic determinism. Deterministic research claims have been critiqued…

  8. The dual reading of general conditionals: The influence of abstract versus concrete contexts.

    PubMed

    Wang, Moyun; Yao, Xinyun

    2018-04-01

    A current main issue concerning conditionals is whether the meaning of general conditionals (e.g., If a card is red, then it is round) is deterministic (exceptionless) or probabilistic (exception-tolerating). In order to resolve the issue, two experiments examined the influence of conditional contexts (with vs. without frequency information about truth table cases) on the reading of general conditionals. Experiment 1 examined the direct reading of general conditionals in a possibility judgment task. Experiment 2 examined the indirect reading of general conditionals in a truth judgment task. It was found that both the direct and the indirect reading of general conditionals exhibited the duality: a predominant deterministic semantic reading of conditionals without frequency information, and a predominant probabilistic pragmatic reading of conditionals with frequency information. The context of general conditionals determined their predominant reading. There were obvious individual differences in reading general conditionals with frequency information. The meaning of general conditionals is relative, depending on conditional contexts. The reading of general conditionals is flexible and complex, such that no simple deterministic or probabilistic account can explain it. The present findings go beyond the extant deterministic and probabilistic accounts of conditionals.

  9. Habitat connectivity and in-stream vegetation control temporal variability of benthic invertebrate communities.

    PubMed

    Huttunen, K-L; Mykrä, H; Oksanen, J; Astorga, A; Paavola, R; Muotka, T

    2017-05-03

    One of the key challenges to understanding patterns of β diversity is to disentangle deterministic patterns from stochastic ones. Stochastic processes may mask the influence of deterministic factors on community dynamics, hindering identification of the mechanisms causing variation in community composition. We studied temporal β diversity (among-year dissimilarity) of macroinvertebrate communities in near-pristine boreal streams across 14 years. To assess whether the observed β diversity deviates from that expected by chance, and to identify processes (deterministic vs. stochastic) through which different explanatory factors affect community variability, we used a null model approach. We observed that at the majority of sites temporal β diversity was low indicating high community stability. When stochastic variation was unaccounted for, connectivity was the only variable explaining temporal β diversity, with weakly connected sites exhibiting higher community variability through time. After accounting for stochastic effects, connectivity lost importance, suggesting that it was related to temporal β diversity via random colonization processes. Instead, β diversity was best explained by in-stream vegetation, community variability decreasing with increasing bryophyte cover. These results highlight the potential of stochastic factors to dampen the influence of deterministic processes, affecting our ability to understand and predict changes in biological communities through time.

  10. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological improvements…
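
    At toy scale, the variability side of this machinery is just Monte Carlo propagation of parameter distributions through a deterministic kinetic model. The Python below uses an invented one-compartment model with lognormal clearance and volume, far simpler than a real PBPK model, to show the pattern.

      import numpy as np

      rng = np.random.default_rng(11)
      n = 5_000                    # sampled "individuals"

      # Invented population variability: lognormal clearance and volume.
      CL = rng.lognormal(np.log(5.0), 0.3, n)    # clearance, L/h
      V = rng.lognormal(np.log(40.0), 0.2, n)    # volume, L
      dose = 100.0                               # mg, IV bolus

      # Deterministic model per draw: C(t) = (dose / V) * exp(-(CL/V) * t)
      t = 6.0                                    # hours after dosing
      conc = dose / V * np.exp(-CL / V * t)

      print(f"median C(6 h): {np.median(conc):.3f} mg/L")
      print("90% interval: ", np.percentile(conc, [5, 95]).round(3))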

  11. Deterministic generation of remote entanglement with active quantum feedback

    DOE PAGES

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...

    2015-12-10

    We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  12. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
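
    Deterministic deconvolution with a measured source wavelet is commonly implemented as spectral division, with a water level guarding against division by near-zero amplitudes. The Python below demonstrates that standard scheme on a synthetic trace; the wavelet, sampling, and water-level fraction are invented, and the paper's actual processing may differ.

      import numpy as np

      rng = np.random.default_rng(5)
      n, dt = 512, 1e-10                 # samples, 0.1 ns sampling

      # Synthetic source wavelet (Ricker, 400 MHz), as if measured in air.
      t = (np.arange(n) - 40) * dt
      arg = (np.pi * 400e6 * t) ** 2
      wavelet = (1 - 2 * arg) * np.exp(-arg)

      # Reflectivity with two interfaces, convolved into a "field" trace.
      refl = np.zeros(n)
      refl[150], refl[190] = 1.0, -0.6
      trace = np.convolve(refl, wavelet)[:n] + rng.normal(0, 0.01, n)

      # Spectral division with a water level on the wavelet spectrum.
      W, T = np.fft.rfft(wavelet), np.fft.rfft(trace)
      water = 0.01 * np.abs(W).max()
      refl_est = np.fft.irfft(T * np.conj(W)
                              / np.maximum(np.abs(W) ** 2, water ** 2), n)

      print("largest recovered spikes at samples:",
            sorted(np.argsort(np.abs(refl_est))[-2:]))   # expect 150, 190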

  13. An overview of engineering concepts and current design algorithms for probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Duffy, S. F.; Hu, J.; Hopkins, D. A.

    1995-01-01

    The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion regarding safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored, and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials), the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI). This includes a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
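
    The closed form referred to is the two-parameter Weibull model for brittle failure, P_f(sigma) = 1 - exp(-(sigma/sigma_0)^m), and a Monte Carlo simulation should reproduce it. The Python below checks that with invented parameters, mirroring how numerical probabilistic methods are benchmarked against analytic cases.

      import numpy as np

      rng = np.random.default_rng(2)
      m, sigma0 = 10.0, 300.0        # Weibull modulus and scale (MPa)

      def pf_closed(stress):
          """Closed-form two-parameter Weibull failure probability."""
          return 1.0 - np.exp(-(stress / sigma0) ** m)

      # Monte Carlo: draw component strengths from the same distribution
      # and count how many fall below the applied stress.
      strengths = sigma0 * rng.weibull(m, size=200_000)
      for stress in (200.0, 280.0, 330.0):
          mc = np.mean(strengths < stress)
          print(f"{stress:5.0f} MPa: closed form {pf_closed(stress):.4f}, "
                f"Monte Carlo {mc:.4f}")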

  14. Implementation speed of deterministic population passages compared to that of Rabi pulses

    NASA Astrophysics Data System (ADS)

    Chen, Jingwei; Wei, L. F.

    2015-02-01

    The fast Rabi π-pulse technique has been widely applied to various coherent quantum manipulations, although it requires precise design of the pulse areas. Relaxing this requirement, various rapid adiabatic passage (RAP) approaches have been utilized instead to implement population passages deterministically. However, the usual RAP protocol cannot be implemented desirably fast, as the relevant adiabatic condition must be robustly satisfied throughout the passage. Here, we propose a modified shortcut-to-adiabaticity (STA) technique to significantly accelerate the desired deterministic population passages. This transitionless technique goes beyond the usual rotating wave approximation (RWA) made in recent STA protocols, and thus can be applied to deliver fast quantum evolutions in which the relevant counter-rotating effects cannot be neglected. The proposal is demonstrated specifically with driven two- and three-level systems. Numerical results show that with the present STA technique beyond the RWA, the usual Stark-chirped RAPs and stimulated Raman adiabatic passages can be significantly sped up; the deterministic population passages can be implemented as fast as the widely used Rabi π pulses, yet remain insensitive to the applied pulse areas.
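    The pulse-area sensitivity that motivates these passage techniques is easy to quantify in the resonant RWA two-level case, where the excited-state population after a pulse of area A is sin²(A/2). A short check with illustrative numbers:

        import numpy as np

        def excited_population(area):
            # Resonant two-level system under the RWA: P_e = sin^2(area / 2)
            return np.sin(0.5 * area) ** 2

        print(excited_population(np.pi))        # 1.0: a perfect pi pulse
        print(excited_population(1.1 * np.pi))  # ~0.976: a 10% area error already costs ~2.4%

    Adiabatic passages trade away this area sensitivity at the cost of speed; the STA construction described above aims to recover π-pulse speed without the sensitivity.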

  15. Shielding Calculations on Waste Packages - The Limits and Possibilities of different Calculation Methods by the example of homogeneous and inhomogeneous Waste Packages

    NASA Astrophysics Data System (ADS)

    Adams, Mike; Smalian, Silva

    2017-09-01

    For nuclear waste packages, the expected dose rates and the nuclide inventory are calculated in advance. Depending on the packaging of the nuclear waste, deterministic programs such as MicroShield® provide a range of results for each type of package. Stochastic programs such as the Monte-Carlo N-Particle Transport Code System (MCNP®), on the other hand, provide reliable results for complex geometries; however, this type of program requires a fully trained operator, and the calculations are time-consuming. The problem is to choose an appropriate program for a specific geometry. We therefore compared the results of deterministic programs such as MicroShield® with those of stochastic programs such as MCNP®. These comparisons enable us to make a statement about the applicability of the various programs to chosen types of containers. In conclusion, we found that for thin-walled geometries, deterministic programs such as MicroShield® are well suited to calculating the dose rate. For cylindrical containers with inner shielding, however, deterministic programs reach their limits. Furthermore, we investigate the effect of an inhomogeneous material and activity distribution on the results. These calculations are still ongoing; results will be presented in the final abstract.
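    A deterministic point-kernel estimate of the kind such programs perform can be sketched in a few lines. All numbers and the crude linear buildup factor below are illustrative assumptions, not values from the study.

        import numpy as np

        # Point-kernel flux behind a slab shield:
        # phi = S / (4 pi r^2) * B(mu t) * exp(-mu t)
        S = 3.7e10     # source strength (photons/s); hypothetical
        r = 100.0      # source-to-detector distance (cm)
        mu = 0.5       # linear attenuation coefficient (1/cm); hypothetical
        t = 5.0        # shield thickness (cm)

        mfp = mu * t               # shield thickness in mean free paths
        B = 1.0 + mfp              # crude linear buildup approximation
        phi = S / (4.0 * np.pi * r ** 2) * B * np.exp(-mfp)
        print(phi)                 # photons / (cm^2 s)

    Geometries with inner shielding break the single-ray assumption behind such kernels, which is where the abstract reports deterministic codes reaching their limits.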

  16. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms that find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee finding the actual Pareto front rather than an approximation of it. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted in which, following the general methodology, a ripple-spreading algorithm is designed to calculate the complete and exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
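    For a finite set of candidate solutions, the exact Pareto front can always be obtained by exhaustive pairwise dominance filtering; the paper's contribution is to reach the same exact front far more efficiently via k-best single-objective searches. A minimal reference filter (minimization), for comparison only:

        def dominates(q, p):
            # q dominates p if q is no worse in every objective and better in at least one
            return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

        def pareto_front(points):
            return [p for p in points if not any(dominates(q, p) for q in points)]

        pts = [(3, 5), (1, 9), (4, 4), (2, 6), (5, 3), (3, 7)]
        print(pareto_front(pts))   # (3, 7) is dominated by (3, 5) and is filtered out

    The filter is O(n²) in the number of candidates, which is exactly why constructive approaches like the ripple-spreading algorithm matter for large discrete problems.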

  17. Moving beyond the cost-loss ratio: economic assessment of streamflow forecasts for a risk-averse decision maker

    NASA Astrophysics Data System (ADS)

    Matte, Simon; Boucher, Marie-Amélie; Boucher, Vincent; Fortier Filion, Thomas-Charles

    2017-06-01

    A large effort has been made over the past 10 years to promote the operational use of probabilistic or ensemble streamflow forecasts. Numerous studies have shown that ensemble forecasts are of higher quality than deterministic ones. Many studies also conclude that decisions based on ensemble rather than deterministic forecasts lead to better decisions in the context of flood mitigation. Hence, it is believed that ensemble forecasts possess a greater economic and social value for both decision makers and the general population. However, the vast majority of, if not all, existing hydro-economic studies rely on a cost-loss ratio framework that assumes a risk-neutral decision maker. To overcome this important flaw, this study borrows from economics and evaluates the economic value of early warning flood systems using the well-known Constant Absolute Risk Aversion (CARA) utility function, which explicitly accounts for the level of risk aversion of the decision maker. This new framework allows for the full exploitation of the information related to a forecast's uncertainty, making it especially suited for the economic assessment of ensemble or probabilistic forecasts. Rather than comparing deterministic and ensemble forecasts, this study focuses on comparing different types of ensemble forecasts. There are multiple ways of assessing and representing forecast uncertainty. Consequently, there exist many different means of building an ensemble forecasting system for future streamflow. One such possibility is to dress deterministic forecasts using the statistics of past forecast errors. Such dressing methods are popular among operational agencies because of their simplicity and intuitiveness. Another approach is the use of ensemble meteorological forecasts for precipitation and temperature, which are then provided as inputs to one or more hydrological models. In this study, three concurrent ensemble streamflow forecasting systems are compared: simple statistically dressed deterministic forecasts, forecasts based on meteorological ensembles, and a variant of the latter that also includes an estimation of state variable uncertainty. This comparison takes place for the Montmorency River, a small flood-prone watershed in southern central Quebec, Canada. The assessment of forecasts is performed for lead times of 1 to 5 days, both in terms of forecast quality (relative to the corresponding record of observations) and in terms of economic value, using the new proposed framework based on the CARA utility function. It is found that the economic value of a forecast for a risk-averse decision maker is closely linked to the forecast reliability in predicting the upper tail of the streamflow distribution. Hence, post-processing forecasts to avoid over-forecasting could help improve both the quality and the value of forecasts.
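    The CARA utility at the core of the proposed framework has a convenient closed form, and the certainty equivalent of an ensemble of outcomes follows directly from it. A minimal sketch; the risk-aversion coefficient and the outcome values are illustrative, not from the study.

        import numpy as np

        def cara_utility(x, a):
            # u(x) = (1 - exp(-a x)) / a; larger a means stronger risk aversion
            return (1.0 - np.exp(-a * x)) / a

        def certainty_equivalent(outcomes, a):
            # CE solves u(CE) = E[u(X)], giving CE = -ln(E[exp(-a X)]) / a
            return -np.log(np.mean(np.exp(-a * np.asarray(outcomes)))) / a

        tight = [9.0, 10.0, 11.0]   # two hypothetical outcome ensembles
        wide = [5.0, 10.0, 15.0]    # same mean, larger spread
        print(certainty_equivalent(tight, 0.3))  # ~9.90
        print(certainty_equivalent(wide, 0.3))   # ~7.86: risk aversion penalizes spread

    Because the certainty equivalent uses the whole distribution rather than a single cost-loss threshold, it rewards ensembles that are reliable in the upper tail, which is exactly the behaviour the study reports.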

  18. Nanoscale lateral displacement arrays for the separation of exosomes and colloids down to 20 nm

    NASA Astrophysics Data System (ADS)

    Austin, Robert; Wunsch, Benjamin; Smith, Joshua; Gifford, Stacey; Wang, Chao; Brink, Markus; Bruce, Robert; Stolovitzky, Gustavo; Astier, Yann

    Deterministic lateral displacement (DLD) pillar arrays are an efficient technology to sort, separate and enrich micrometre-scale particles, including parasites, bacteria, blood cells and circulating tumour cells in blood. However, this technology has not been translated to the true nanoscale, where it could function on biocolloids, such as exosomes. Exosomes, a key target of liquid biopsies, are secreted by cells and contain nucleic acid and protein information about their originating tissue. One challenge in the study of exosome biology is to sort exosomes by size and surface markers. We use manufacturable silicon processes to produce nanoscale DLD (nano-DLD) arrays of uniform gap sizes ranging from 25 to 235 nm. We show that at low Péclet (Pe) numbers, at which diffusion and deterministic displacement compete, nano-DLD arrays separate particles between 20 and 110 nm by size with sharp resolution. Further, we demonstrate the size-based displacement of exosomes, and so open up the potential for on-chip sorting and quantification of these important biocolloids.
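    For context, the critical diameter separating "zigzag" from "displaced" trajectories in a DLD array is often estimated with Davis's empirical correlation, D_c ≈ 1.4 g ε^0.48, where g is the gap and ε the row-shift fraction. A small sketch; the correlation was fitted to micrometre-scale arrays, and at nanoscale gaps the low-Pe diffusion discussed in the abstract blurs this deterministic cutoff.

        def dld_critical_diameter(gap, row_shift_fraction):
            # Davis's empirical correlation: D_c ~ 1.4 * g * eps^0.48
            return 1.4 * gap * row_shift_fraction ** 0.48

        print(dld_critical_diameter(25.0, 0.1))   # ~11.6 (same units as the gap)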

  19. Post-processing method for wind speed ensemble forecast using wind speed and direction

    NASA Astrophysics Data System (ADS)

    Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin

    2017-04-01

    Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, like wind speed forecasting, most of the predictive information is contained in one variable in the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method to estimate the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations, based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85 % of the sites, the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor compared to using wind speed only. On average, the improvements were about 5 %, mainly for moderate to strong wind situations. For weak winds, adding wind direction had a more or less neutral impact.
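    Schematically, the BMA predictive distribution is a weighted mixture of kernels, one per ensemble member, each centred on a conditional mean that here would be regressed on both wind speed and direction. A minimal Gaussian-kernel sketch; the study's flexible regression step is not reproduced, and the means, weights, and spread below are placeholders (operational BMA for wind speed often uses non-Gaussian kernels, since speed is nonnegative).

        import numpy as np
        from scipy.stats import norm

        def bma_predictive_cdf(y, member_means, weights, sigma):
            # Mixture CDF: sum_k w_k * Phi((y - mu_k) / sigma)
            comps = norm.cdf(np.atleast_1d(y)[:, None],
                             loc=np.asarray(member_means)[None, :], scale=sigma)
            return comps @ np.asarray(weights)

        mu = np.array([6.2, 7.5, 8.1])   # bias-corrected member means (m/s); placeholders
        w = np.array([0.5, 0.3, 0.2])    # BMA weights (sum to 1); placeholders
        print(bma_predictive_cdf([8.0], mu, w, sigma=1.2))  # P(wind speed <= 8 m/s)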

  20. Integrated probabilistic risk assessment for nanoparticles: the case of nanosilica in food.

    PubMed

    Jacobs, Rianne; van der Voet, Hilko; Ter Braak, Cajo J F

    Insight into the risks of nanotechnology and the use of nanoparticles is an essential condition for the social acceptance and safe use of nanotechnology. One of the problems facing the risk assessment of nanoparticles is the lack of data, resulting in uncertainty in the risk assessment. We attempt to quantify some of this uncertainty by expanding a previous deterministic study on nanosilica (5-200 nm) in food into a fully integrated probabilistic risk assessment. We use the integrated probabilistic risk assessment method, in which statistical distributions and bootstrap methods are used to quantify uncertainty and variability in the risk assessment. Due to the large amount of uncertainty present, this probabilistic method, which separates variability from uncertainty, contributed to a more understandable risk assessment. We found that quantifying the uncertainties did not increase the perceived risk relative to the outcome of the deterministic study. We pinpointed the particular aspects of the hazard characterization that contributed most to the total uncertainty in the risk assessment, suggesting that further research would benefit most from obtaining more reliable data on those aspects.
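    The separation of variability from uncertainty in such integrated probabilistic methods is typically implemented as a two-dimensional Monte Carlo: an outer bootstrap loop propagates data uncertainty, while an inner loop samples inter-individual variability. A minimal sketch with hypothetical exposure data and a hypothetical hazard limit:

        import numpy as np

        rng = np.random.default_rng(1)
        intake = rng.lognormal(0.0, 0.5, size=200)   # hypothetical intake data
        limit = 2.5                                  # hypothetical hazard limit

        n_unc, n_var = 500, 2000
        frac_exceed = np.empty(n_unc)
        for i in range(n_unc):                       # outer loop: uncertainty (bootstrap)
            boot = rng.choice(intake, size=intake.size, replace=True)
            mu, sd = np.log(boot).mean(), np.log(boot).std()
            pop = rng.lognormal(mu, sd, size=n_var)  # inner loop: variability
            frac_exceed[i] = np.mean(pop > limit)

        lo, hi = np.percentile(frac_exceed, [2.5, 97.5])
        print(lo, hi)   # uncertainty interval on the fraction of the population at risk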
