Test Problem: Tilted Rayleigh-Taylor for 2-D Mixing Studies
Andrews, Malcolm J.; Livescu, Daniel; Youngs, David L.
2012-08-14
reasonable quality photographic data. The photographs in Figure 2 also reveal the appearance of a boundary layer at the left and right walls; this boundary layer has not been included in the test problem, as preliminary calculations suggested it had a negligible effect on plume penetration and RT mixing. The significance of this test problem is that, unlike planar RT experiments such as the Rocket-Rig (Youngs, 1984), the Linear Electric Motor (LEM; Dimonte, 1990), or the Water Tunnel (Andrews, 1992), the Tilted-Rig is a unique two-dimensional RT mixing experiment that has experimental data and now (in this TP) Direct Numerical Simulation (DNS) data from Livescu and Wei. The availability of DNS data for the Tilted-Rig has made this TP viable, as it provides detailed results for comparison purposes. The purpose of the test problem is to provide 3D simulation results, validated by comparison with experiment, which can be used for the development and validation of 2D RANS models. When such models are applied to 2D flows, various physics issues are raised, such as double counting, combined buoyancy and shear, and 2-D strain, which have not yet been adequately addressed. The current objective of the test problem is to compare key results needed for RANS model validation, obtained from high-Reynolds-number DNS, high-resolution ILES, or LES with explicit sub-grid-scale models. The experiment is incompressible and so is directly suitable for algorithms designed for incompressible flows (e.g. pressure-correction algorithms with multi-grid); however, we have extended the TP so that compressible algorithms, run at low Mach number, may also be used if careful consideration is given to initial pressure fields. Thus, this TP serves as a useful tool for incompressible and compressible simulation codes, and for mathematical models.
In the remainder of this TP we provide a detailed specification; the next section provides the underlying assumptions for the TP, fluids, geometry details
A 2-D Test Problem for CFD Modeling Heat Transfer in Spent Fuel Transfer Cask Neutron Shields
Zigh, Ghani; Solis, Jorge; Fort, James A.
2011-01-14
well as the tradeoff between steady state and transient solutions. Solutions are compared for two commercial CFD codes, FLUENT and STAR-CCM+. The results can be used to provide input to the CFD Best Practices for this application. Following the study results for the 2-D test problem, a comparison of simulation results is provided for a high Rayleigh number experiment with a large annular gap. Because the geometry of this validation is significantly different from that of the neutron shield, and due to the critical nature of this application, the argument is made for new experiments at representative scales
On 2D bisection method for double eigenvalue problems
Ji, X.
1996-06-01
The two-dimensional bisection method presented in (SIAM J. Matrix Anal. Appl. 13(4), 1085 (1992)) is efficient for solving a class of double eigenvalue problems. This paper further extends the 2D bisection method to full matrix cases and analyses its stability. As in the single-parameter case, the 2D bisection method is very stable for tridiagonal matrix triples satisfying the symmetric-definite condition. Since double eigenvalue problems arise from two-parameter boundary value problems, an estimate of the discretization error in the eigenpairs is also given. Some numerical examples are included. 42 refs., 1 tab.
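The single-parameter building block that the 2D method extends can be illustrated as follows: for a symmetric tridiagonal matrix, a Sturm-sequence recurrence counts the eigenvalues below a trial value, and bisection on that count isolates any eigenvalue. This is a minimal sketch of the classical one-parameter technique, not the two-parameter algorithm of the paper; function names are illustrative.

```python
def count_eigs_below(d, e, x):
    """Sturm-sequence count: number of eigenvalues of the symmetric
    tridiagonal matrix (diagonal d, off-diagonal e) that are < x."""
    count = 0
    q = 1.0
    for i in range(len(d)):
        off = e[i - 1] ** 2 if i > 0 else 0.0
        # guard against exact zero pivots with a tiny denominator
        q = d[i] - x - (off / q if q != 0.0 else off / 1e-300)
        if q < 0.0:
            count += 1
    return count

def bisect_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """k-th smallest eigenvalue (1-based) in [lo, hi] by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_eigs_below(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For the 2x2 matrix [[2, -1], [-1, 2]] the exact eigenvalues are 1 and 3, which the bisection recovers from the counts alone.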
Validation and testing of the VAM2D computer code
Kool, J.B.; Wu, Y.S.
1991-10-01
This document describes two modeling studies conducted by HydroGeoLogic, Inc. for the US NRC under contract no. NRC-04089-090, entitled "Validation and Testing of the VAM2D Computer Code." VAM2D is a two-dimensional, variably saturated flow and transport code, with applications for performance assessment of nuclear waste disposal. The computer code itself is documented in a separate NUREG document (NUREG/CR-5352, 1989). The studies presented in this report involve application of the VAM2D code to two diverse subsurface modeling problems. The first involves modeling of infiltration and redistribution of water and solutes in an initially dry, heterogeneous field soil. This application involves detailed modeling over a relatively short, 9-month time period. The second problem pertains to the application of VAM2D to the modeling of a waste disposal facility in a fractured clay, over much larger space and time scales and with particular emphasis on the applicability and reliability of using an equivalent porous medium approach for simulating flow and transport in fractured geologic media. Reflecting the separate and distinct nature of the two problems studied, this report is organized in two separate parts. 61 refs., 31 figs., 9 tabs.
2-D or not 2-D, that is the question: A Northern California test
Mayeda, K; Malagnini, L; Phillips, W S; Walter, W R; Dreger, D
2005-06-06
Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. The complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal ''apples-to-apples'' test of 1-D and 2-D path assumptions on direct waves and their coda. Using the same station and event distribution, we compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) Applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ≈0.7 Hz, however for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only 1-D coda correction is available it is still preferable over 2
A linear analytical boundary element method (BEM) for 2D homogeneous potential problems
NASA Astrophysics Data System (ADS)
Friedrich, Jürgen
2002-06-01
The solution of potential problems is not only fundamental for geosciences, but also an essential part of related subjects like electro- and fluid-mechanics. In all fields, solution algorithms are needed that should be as accurate as possible, robust, simple to program, easy to use, fast, and small in computer memory. An ideal technique for fulfilling these criteria is the boundary element method (BEM), which applies Green's identities to transform volume integrals into boundary integrals. This work describes a linear analytical BEM for 2D homogeneous potential problems that is more robust and precise than numerical methods because it avoids numerical schemes and coordinate transformations. After deriving the solution algorithm, the introduced approach is tested against different benchmarks. Finally, the resulting method was incorporated into an existing software program described previously in this journal by the same author.
Parallel algorithms for 2-D cylindrical transport equations of Eigenvalue problem
Wei, J.; Yang, S.
2013-07-01
In this paper, aimed at the neutron transport eigenvalue problem in 2-D cylindrical geometry on unstructured grids, a discrete scheme combining Sn discrete ordinates with a discontinuous finite element method is built, and parallel computation for the scheme is realized on MPI systems. Numerical experiments indicate that the designed parallel algorithm can reach perfect speedup and has good practicality and scalability. (authors)
Use of adaptive walls in 2D tests
NASA Technical Reports Server (NTRS)
Archambaud, J. P.; Chevallier, J. P.
1984-01-01
A new method for computing the wall effects gives precise answers to some questions arising in adaptive wall concept applications: the length of adapted regions, fairings with the upstream and downstream regions, the effects of residual misadjustments, and reference conditions. The accelerated convergence of the iterative process and the development of an efficient technology used in the CERT T2 wind tunnel give the required test conditions in a single run. Samples taken from CAST 7 tests demonstrate the efficiency of the whole process in obtaining significant results, with consideration given to extension to the three-dimensional case.
Analytical solution of boundary integral equations for 2-D steady linear wave problems
NASA Astrophysics Data System (ADS)
Chuang, J. M.
2005-10-01
Based on the Fourier transform, the analytical solution of boundary integral equations formulated for the complex velocity of a 2-D steady linear surface flow is derived. It has been found that before the radiation condition is imposed, free waves appear both far upstream and downstream. In order to cancel the free waves in far upstream regions, the eigensolution of a specific eigenvalue, which satisfies the homogeneous boundary integral equation, is found and superposed to the analytical solution. An example, a submerged vortex, is used to demonstrate the derived analytical solution. Furthermore, an analytical approach to imposing the radiation condition in the numerical solution of boundary integral equations for 2-D steady linear wave problems is proposed.
Structure-approximating inverse protein folding problem in the 2D HP model.
Gupta, Arvind; Manuch, Ján; Stacho, Ladislav
2005-12-01
The inverse protein folding problem is that of designing an amino acid sequence which has a particular native protein fold. This problem arises in drug design where a particular structure is necessary to ensure proper protein-protein interactions. In this paper, we show that in the 2D HP model of Dill it is possible to solve this problem for a broad class of structures. These structures can be used to closely approximate any given structure. One of the most important properties of a good protein (in drug design) is its stability--the aptitude not to fold simultaneously into other structures. We show that for a number of basic structures, our sequences have a unique fold. PMID:16379538
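For readers unfamiliar with the 2D HP model of Dill referenced above, a conformation is a self-avoiding walk on the square lattice and its energy is minus the number of hydrophobic (H-H) contacts between residues that are lattice neighbours but not chain neighbours. A minimal scoring sketch (function and variable names are illustrative, not from the paper):

```python
def hp_energy(sequence, path):
    """Energy of a 2D HP-model conformation: -1 per H-H pair that are
    lattice neighbours but not adjacent along the chain.
    sequence: string over 'H'/'P'; path: list of (x, y) lattice sites."""
    assert len(sequence) == len(path)
    assert len(set(path)) == len(path), "conformation must be self-avoiding"
    pos = {p: i for i, p in enumerate(path)}
    energy = 0
    for i, (x, y) in enumerate(path):
        if sequence[i] != 'H':
            continue
        # checking only +x and +y neighbours counts each contact once
        for nb in ((x + 1, y), (x, y + 1)):
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy
```

On the 2x2 square fold of "HHHH", only the first and last residues form a non-bonded contact, giving energy -1; stability in the sense used here would mean no other conformation of the sequence achieves an energy this low.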
NASA Astrophysics Data System (ADS)
Tucciarelli, T.
2012-12-01
A new methodology for the solution of irrotational 2D flow problems in domains with strongly unstructured meshes is presented. A fractional time step procedure is applied to the original governing equations, solving consecutively a convective prediction system and a diffusive corrective system. The nonlinear components of the problem are concentrated in the prediction step, while the correction step leads to the solution of a linear system of the order of the number of computational cells. A MArching in Space and Time (MAST) approach is applied for the solution of the convective prediction step. The major advantages of the model, as well as its ability to maintain solution monotonicity even on strongly irregular meshes, are briefly described. The algorithm is applied to the solution of the diffusive shallow water equations in a simple domain.
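The prediction-correction split described above can be illustrated in one dimension: each time step first advances a convective "prediction" and then applies a diffusive "correction" to the predicted field. This is a hedged toy sketch of the general fractional-step idea (first-order upwind plus explicit diffusion on a periodic domain), not the MAST algorithm itself.

```python
def fractional_step(u, dt, dx, vel, nu, steps):
    """1-D fractional time-step sketch: convective prediction (upwind,
    assumes vel >= 0) followed by diffusive correction (explicit
    Laplacian), on a periodic domain.  Stability assumes
    vel*dt/dx <= 1 and nu*dt/dx**2 <= 0.5."""
    n = len(u)
    for _ in range(steps):
        # convective prediction step (first-order upwind)
        star = [u[i] - vel * dt / dx * (u[i] - u[i - 1]) for i in range(n)]
        # diffusive correction step (explicit, linear in the unknowns)
        u = [star[i] + nu * dt / dx ** 2 *
             (star[(i + 1) % n] - 2 * star[i] + star[i - 1])
             for i in range(n)]
    return u
```

Under the stated stability limits the split scheme is conservative and monotone: the total mass is preserved and an initially non-negative profile stays non-negative, mirroring the monotonicity property emphasized in the abstract.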
Coupling finite and boundary element methods for 2-D elasticity problems
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Raju, I. S.; Sistla, R.
1993-01-01
A finite element-boundary element (FE-BE) coupling method for two-dimensional elasticity problems is developed based on a weighted residual variational method in which a portion of the domain of interest is modeled by FEs and the remainder of the region by BEs. The performance of the FE-BE coupling method is demonstrated via applications to a simple 'patch test' problem and three crack problems. The method passed the patch tests for various modeling configurations and yielded accurate strain energy release rates for the crack problems studied.
Differential Sensitivity Theory applied to the MESA2D code for multi-material problems
Henninger, R.J.; Maudlin, P.J.; Harstad, E.N.
1996-05-01
The technique called Differential Sensitivity Theory (DST) is extended to the multi-component system of equations solved by the MESA2D hydrocode. DST uses adjoint techniques to determine exact sensitivity derivatives, i.e., if R is a calculation result of interest (response R) and αi is a calculation input (parameter αi), then ∂R/∂αi is defined as the sensitivity. The advantage of using DST is that for an n-parameter problem all n sensitivities can be obtained by integrating the solutions from only two calculations, a MESA calculation and its corresponding adjoint calculation using an Adjoint Continuum Mechanics (ACM) code. Previous papers have described application of the technique to one-dimensional, single-material problems. This work presents the derivation and solution of the additional adjoint equations for the purpose of computing sensitivities for two-dimensional, multi-component problems. As an example, results for a multi-material flyer plate impact problem featuring an oblique impact are given. © 1996 American Institute of Physics.
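The economy that DST exploits — all n sensitivities from a single forward calculation plus a single adjoint calculation — can be illustrated on a steady linear model problem R = c·x with A(α)x = b, where ∂R/∂αi = -λ·(∂A/∂αi)x and the adjoint vector solves Aᵀλ = c. This toy example is only an illustration of the adjoint principle, not the MESA2D/ACM formulation; all names are hypothetical.

```python
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def adjoint_sensitivities(alphas, b, c, build_A, dA_dalpha):
    """All dR/dalpha_i for R = c.x with A(alpha) x = b, from ONE forward
    solve and ONE adjoint solve: dR/dalpha_i = -lambda.(dA/dalpha_i).x,
    where A^T lambda = c (b assumed independent of alpha)."""
    A = build_A(alphas)
    x = solve2(A, b)                                  # forward solve
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    lam = solve2(At, c)                               # adjoint solve
    sens = []
    for i in range(len(alphas)):
        dA = dA_dalpha(alphas, i)
        dAx = [dA[0][0] * x[0] + dA[0][1] * x[1],
               dA[1][0] * x[0] + dA[1][1] * x[1]]
        sens.append(-(lam[0] * dAx[0] + lam[1] * dAx[1]))
    return sens
```

With A(α) = [[α0, 1], [1, α1]], b = (1, 0), c = (1, 1), the response is R = (α1 - 1)/(α0·α1 - 1), so at α = (3, 2) the exact sensitivities are ∂R/∂α0 = -0.08 and ∂R/∂α1 = 0.08, matching the adjoint result with only two linear solves regardless of how many parameters A carries.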
Differential sensitivity theory applied to the MESA2D code for multi-material problems
Henninger, R.J.; Maudlin, P.J.; Harstad, E.N.
1995-09-01
The technique called Differential Sensitivity Theory (DST) is extended to the multi-component system of equations solved by the MESA2D hydrocode. DST uses adjoint techniques to determine exact sensitivity derivatives, i.e., if R is a calculation result of interest (response R) and αi is a calculation input (parameter αi), then ∂R/∂αi is defined as the sensitivity. The advantage of using DST is that for an n-parameter problem all n sensitivities can be obtained by integrating the solutions from only two calculations, a MESA calculation and its corresponding adjoint calculation using an Adjoint Continuum Mechanics (ACM) code. Previous papers have described application of the technique to one-dimensional, single-material problems. This work presents the derivation and solution of the additional adjoint equations for the purpose of computing sensitivities for two-dimensional, multi-component problems. As an example, results for a multi-material flyer plate impact problem featuring an oblique impact are given.
Fluctuating Pressure Data from 2-D Nozzle Cold Flow Tests (Dual Bell)
NASA Technical Reports Server (NTRS)
Nesman, Tomas E.
2001-01-01
Rocket engine nozzle performance changes as a vehicle climbs through the atmosphere. An altitude compensating nozzle (ACN) is intended to improve on a fixed-geometry bell nozzle, which performs at optimum at only one trajectory point. In addition to nozzle performance, nozzle transient loads are an important consideration. Any nozzle experiences large transient loads when shocks pass through the nozzle at start and shutdown. Additional transient loads will occur at transitional flow conditions. The objectives of cold-flow nozzle testing at MSFC are CFD benchmark/calibration and unsteady flow/sideloads. Initial testing was performed with 2-D inserts in the 14-inch transonic wind tunnel. The 2-D data were recently reviewed in preparation for 3-D testing in the nozzle test facility. This presentation shows fluctuating pressure data and some observations from 2-D dual-bell nozzle cold-flow tests.
NASA Astrophysics Data System (ADS)
Stone, James M.; Norman, Michael L.
1992-06-01
A detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows including a self-consistent treatment of the effects of magnetic fields and radiation transfer is presented. Attention is given to the hydrodynamic (HD) algorithms which form the foundation for the more complex MHD and radiation HD algorithms. The effect of self-gravity on the flow dynamics is accounted for by an iterative solution of the sparse-banded matrix resulting from discretizing the Poisson equation in multidimensions. The results of an extensive series of HD test problems are presented. A detailed description of the MHD algorithms in ZEUS-2D is presented. A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-constrained transport method provides for the accurate evolution of all modes of MHD wave families.
A 2D inverse problem of predicting boiling heat transfer in a long fin
NASA Astrophysics Data System (ADS)
Orzechowski, Tadeusz
2015-12-01
A method for the determination of local values of the heat transfer coefficient on non-isothermal surfaces was analyzed using the example of a long smooth-surfaced fin made of aluminium. On the basis of the experimental data, two cases were taken into consideration: a one-dimensional model for Bi < 0.1 and a two-dimensional model for thicker elements. In the case when the drop in temperature across the thickness could be omitted, the local values of the rejected heat fluxes were calculated from the integral of the equation describing the temperature distribution on the fin. The corresponding boiling curve was plotted on the basis of the temperature gradient distribution as a function of superheat. For thicker specimens, where Bi > 0.1, the problem was modelled using a 2-D heat conduction equation, for which the boundary conditions were posed on the surface observed with a thermovision camera. The ill-conditioned inverse problem was solved using a method of heat polynomials, which required validation.
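In the thin-fin regime (Bi < 0.1) described above, the inversion amounts to reading the local heat transfer coefficient off the measured temperature profile through a 1-D fin balance. A hedged sketch, assuming one-sided heat rejection so that k·t·T'' = h·(T - Tsat) per unit area, with illustrative property values (not the paper's data or formulation):

```python
def local_h(x, T, t_sat, k, thickness):
    """Recover the local heat transfer coefficient along a thin fin
    (Bi < 0.1) from a measured temperature profile by inverting the
    assumed 1-D fin balance  k*thickness*T'' = h*(T - t_sat),
    using central differences; interior points only.
    x: equally spaced positions [m]; T: temperatures [same units as t_sat]."""
    dx = x[1] - x[0]
    hs = []
    for i in range(1, len(x) - 1):
        d2T = (T[i + 1] - 2 * T[i] + T[i - 1]) / dx ** 2
        hs.append(k * thickness * d2T / (T[i] - t_sat))
    return hs
```

As a consistency check, a synthetic profile T(x) = Tsat + cosh(m·x) with m² = h/(k·t) should return the constant h used to build it, up to the O(dx²) truncation error of the central difference.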
Fung, Jimmy; Masser, Thomas; Morgan, Nathaniel R.
2012-06-25
The Sedov test is classically defined as a point blast problem. The Sedov problem has led us to advances in algorithms and in their understanding. Vorticity generation can be physical or numerical. Both play a role in Sedov calculations. The RAGE code (Eulerian) resolves the shock well, but produces vorticity. The source definition matters. For the FLAG code (Lagrange), CCH is superior to SGH by avoiding spurious vorticity generation. FLAG SGH currently has a number of options that improve results over traditional settings. Vorticity production, not shock capture, has driven the Sedov work. We are pursuing treatments with respect to the hydro discretization as well as to artificial viscosity.
Design and true Reynolds number 2-D testing of an advanced technology airfoil
NASA Technical Reports Server (NTRS)
Reaser, J. S.; Hallissy, J. B.; Campbell, R. L.
1983-01-01
A NASA-industry program has been conducted to determine the accuracy of available 2-D airfoil analysis procedures over a wide range of Reynolds numbers. The program also served to develop and demonstrate effective wind tunnel model designs for use in a cryogenic environment. A Lockheed design, CRYO 12X, supercritical, shockfree airfoil was configured using a continuous curvature analytical definition of the ordinates. Test results show a very close ordinate tolerance was necessary to realize the intended pressure distribution. Correlation of test with Korn-Garabedian 2-D analysis pressure data were generally good. GRUMFOIL analysis with a sidewall correction gave a better correlation.
Moran, B
2007-08-08
We present analytic solutions to two test problems that can be used to check the hydrodynamic implementation in computer codes designed to calculate the propagation of shocks in spherically convergent geometry. Our analysis is restricted to fluid materials with constant bulk modulus. In the first problem we present the exact initial acceleration and pressure gradient at the outer surface of a sphere subjected to an exponentially decaying pressure of the form P(t) = P₀e^(-at). We show that finely-zoned hydro-code simulations are in good agreement with our analytic solution. In the second problem we discuss the implosions of incompressible spherical fluid shells and we present the radial pressure profile across the shell thickness. We also discuss a semi-analytic solution to the time-evolution of a nearly spherical shell with arbitrary but small initial 3-dimensional (3-D) perturbations on its inner and outer surfaces.
An analytical approach to estimate the number of small scatterers in 2D inverse scattering problems
NASA Astrophysics Data System (ADS)
Fazli, Roohallah; Nakhkash, Mansor
2012-07-01
This paper presents an analytical method to estimate the location and number of actual small targets in 2D inverse scattering problems. This method is motivated by the exact maximum likelihood estimation of signal parameters in white Gaussian noise for the linear data model. In the first stage, the method uses the MUSIC algorithm to acquire all possible target locations, and in the next stage it employs an analytical formula that works as a spatial filter to determine which target locations are associated with the actual ones. The ability of the method is examined for both the Born and multiple-scattering cases and for the cases of well-resolved and non-resolved targets. Many numerical simulations using both coincident and non-coincident arrays demonstrate that the proposed method can detect the number of actual targets even in the case of very noisy data and when the targets are closely located. Using the experimental microwave data sets, we further show that this method is successful in specifying the number of small inclusions.
2-D Path Corrections for Local and Regional Coda Waves: A Test of Transportability
Mayeda, K M; Malagnini, L; Phillips, W S; Walter, W R; Dreger, D S; Morasca, P
2005-07-13
Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. [2003] has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. We will compare performance of 1-D versus 2-D path corrections in a variety of regions. First, the complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal ''apples-to-apples'' test of 1-D and 2-D path assumptions on direct waves and their coda. Next, we will compare results for the Italian Alps using high frequency data from the University of Genoa. For Northern California, we used the same station and event distribution and compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) Applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ≈0.7 Hz, however for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter
An F2D analysis of the Flow Instability Test (FIT) experiment
Suo-Anttila, A.
1993-10-01
The F2D code was used to analyze the Flow-Instability-Test (FIT) experiments at Brookhaven National Laboratory. A one-dimensional analysis of the experiment indicated that at the higher temperature levels the element should be unstable. The experimental data corroborated this theory. The two-dimensional simulation behaved in a manner that was very similar to the experimentally measured behavior. In conclusion, the FIT experimental analysis yields partial code validation of F2D, and it also validates the methodology used in analyzing thermal flow stability.
Evaluation of a [13C]-Dextromethorphan Breath Test to Assess CYP2D6 Phenotype
Leeder, J. Steven; Pearce, Robin E.; Gaedigk, Andrea; Modak, Anil; Rosen, David I.
2016-01-01
A [13C]-dextromethorphan ([13C]-DM) breath test was evaluated to assess its feasibility as a rapid, phenotyping assay for CYP2D6 activity. [13C]-DM (0.5 mg/kg) was administered orally with water or potassium bicarbonate-sodium bicarbonate to 30 adult Caucasian volunteers (n = 1 each): CYP2D6 poor metabolizers (2 null alleles; PM-0) and extensive metabolizers with 1 (EM-1) or 2 functional alleles (EM-2). CYP2D6 phenotype was determined by 13CO2 enrichment measured by infrared spectrometry (delta-over-baseline [DOB] value) in expired breath samples collected before and up to 240 minutes after [13C]-DM ingestion and by 4-hour urinary metabolite ratio. The PM-0 group was readily distinguishable from either EM group by both the breath test and urinary metabolite ratio. Using a single point determination of phenotype at 40 minutes and defining PMs as subjects with a DOB ≤ 0.5, the sensitivity of the method was 100%; specificity was 95% with 95% accuracy and resulted in the misclassification of 1 EM-1 individual as a PM. Modification of the initial protocol (timing of potassium bicarbonate-sodium bicarbonate administration relative to dose) yielded comparable results, but there was a tendency toward increased DOB values. Although further development is required, these studies suggest that the [13C]-DM breath test offers promise as a rapid, minimally invasive phenotyping assay for CYP2D6 activity. PMID:18728242
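The single-point decision rule reported above (classify as PM when the 40-minute DOB value is at or below 0.5) and the resulting sensitivity/specificity can be sketched directly; the cohort below is made-up for illustration and is not the study's data.

```python
def classify_pm(dob_40min, threshold=0.5):
    """Classify a subject as CYP2D6 poor metabolizer (PM) when the
    40-minute delta-over-baseline (DOB) value is at or below threshold."""
    return dob_40min <= threshold

def sensitivity_specificity(results):
    """Sensitivity and specificity of the DOB rule.
    results: list of (dob_40min, is_true_pm) pairs, where is_true_pm is
    the genotype-based reference classification."""
    tp = sum(1 for d, pm in results if pm and classify_pm(d))
    fn = sum(1 for d, pm in results if pm and not classify_pm(d))
    tn = sum(1 for d, pm in results if not pm and not classify_pm(d))
    fp = sum(1 for d, pm in results if not pm and classify_pm(d))
    return tp / (tp + fn), tn / (tn + fp)
```

In the hypothetical cohort used in the check below, one extensive metabolizer falls under the threshold, reproducing the kind of single misclassification (reduced specificity at perfect sensitivity) the abstract describes.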
NASA Astrophysics Data System (ADS)
Bouclier, R.; Elguedj, T.; Combescure, A.
2013-11-01
This work deals with the development of 2D solid shell non-uniform rational B-spline elements. We address a static problem, which can be solved with a 2D model, involving a thin slender structure under small perturbations. The plane stress, plane strain, or axisymmetric assumption can be made. Projection and reduced integration techniques are considered to deal with the locking phenomenon. The projection approach leads to the implementation of two strategies insensitive to locking: the first strategy is based on a 1D projection of the mean strain across the thickness; the second strategy undertakes to project all the strains onto a suitably chosen 2D space. Conversely, the reduced integration approach based on Gauss points is less expensive, but only alleviates locking and is limited to quadratic approximations. The performance of the various 2D elements developed is assessed through several numerical examples. Simple extensions of these techniques to 3D are finally performed.
Criminality and the 2D:4D ratio: testing the prenatal androgen hypothesis.
Ellis, Lee; Hoskin, Anthony W
2015-03-01
A decade-old theory hypothesizes that brain exposure to androgens promotes involvement in criminal behavior. General support for this hypothesis has been provided by studies of postpubertal circulating levels of testosterone, at least among males. However, the theory also predicts that for both genders, prenatal androgens will be positively correlated with persistent offending, an idea for which no evidence currently exists. The present study used an indirect measure of prenatal androgen exposure, the relative length of the second and fourth fingers of the right hand (r2D:4D), to test the hypothesis that elevated prenatal androgens promote criminal tendencies later in life for males and females. Questionnaires were administered to 2,059 college students in Malaysia and 1,291 college students in the United States. Respondents reported their r2D:4D relative finger lengths along with their involvement in 13 categories of delinquent and criminal acts. Statistically significant correlations between the commission of most types of offenses and r2D:4D ratios were found for males and females, even after controlling for age. It is concluded that high exposure to androgens during prenatal development contributes to most forms of offending following the onset of puberty. PMID:24013770
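Correlations "after controlling for age," as reported above, are in the simplest implementation first-order partial correlations. The abstract does not specify the exact statistical procedure used, so the following is only a generic sketch with made-up numbers, not the study's analysis.

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

Here x would be the r2D:4D ratio, y an offending score, and z age; the partial correlation removes the linear contribution of z from both variables before correlating them.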
Exact ground state for the four-electron problem in a 2D finite honeycomb lattice
NASA Astrophysics Data System (ADS)
Trencsényi, Réka; Glukhov, Konstantin; Gulácsi, Zsolt
2014-07-01
Working in a subspace with dimensionality much smaller than the dimension of the full Hilbert space, we deduce exact four-particle ground states in 2D samples containing hexagonal repeat units and described by Hubbard-type models. The procedure first identifies a small subspace of the full Hilbert space in which the ground state is placed, then deduces the ground state by exact diagonalization within this subspace. The small subspace is obtained by the repeated application of the Hamiltonian on a carefully chosen starting wave vector describing the most interacting particle configuration, and on the wave vectors resulting from this application, until the obtained system of equations closes in itself. The procedure, which can be applied in principle at fixed but arbitrary system size and number of particles, is interesting in its own right since it provides exact information for the numerical approximation techniques which use a similar strategy but apply a non-complete basis. The diagonalization inside the small subspace provides an incomplete image of the low-lying part of the excitation spectrum, but yields the exact ground state. Once the exact ground state is obtained, its properties can be easily analysed. The ground state is always found to be a singlet state whose energy, interestingly, saturates in the large-interaction limit. The unapproximated results show that the emergence probabilities of different particle configurations in the ground state present 'Zittern' (trembling) characteristics which are absent in 2D square Hubbard systems. Consequently, the manifestation of the local Coulomb repulsion in 2D square and honeycomb types of systems presents differences, which can be a real source of the differences in the many-body behaviour.
NASA Astrophysics Data System (ADS)
Tanaka, Satoyuki; Suzuki, Hirotaka; Sadamoto, Shota; Sannomaru, Shogo; Yu, Tiantang; Bui, Tinh Quoc
2016-08-01
Two-dimensional (2D) in-plane mixed-mode fracture mechanics problems are analyzed employing an efficient meshfree Galerkin method based on stabilized conforming nodal integration (SCNI). In this setting, the reproducing kernel function is taken as the meshfree interpolant, while the SCNI is employed for numerical integration of the stiffness matrix in the Galerkin formulation. The strain components are smoothed and stabilized employing the Gauss divergence theorem. The path-independent integral (J-integral) is solved based on the nodal integration by summing the smoothed physical quantities and the segments of the contour integrals. In addition, mixed-mode stress intensity factors (SIFs) are extracted from the J-integral by decomposing the displacement and stress fields into symmetric and antisymmetric parts. The advantages and features of the present formulation and discretization in evaluation of the J-integral of in-plane 2D fracture problems are demonstrated through several representative numerical examples. The mixed-mode SIFs are evaluated and compared with reference solutions. The obtained results reveal high accuracy and good performance of the proposed meshfree method in the analysis of 2D fracture problems.
An ant colony optimisation algorithm for the 2D and 3D hydrophobic polar protein folding problem
Shmygelska, Alena; Hoos, Holger H
2005-01-01
Background The protein folding problem is a fundamental problem in computational molecular biology and biochemical physics. Various optimisation methods have been applied to formulations of the ab-initio folding problem that are based on reduced models of protein structure, including Monte Carlo methods, Evolutionary Algorithms, Tabu Search and hybrid approaches. In our work, we have introduced an ant colony optimisation (ACO) algorithm to address the non-deterministic polynomial-time hard (NP-hard) combinatorial problem of predicting a protein's conformation from its amino acid sequence under a widely studied, conceptually simple model – the 2-dimensional (2D) and 3-dimensional (3D) hydrophobic-polar (HP) model. Results We present an improvement of our previous ACO algorithm for the 2D HP model and its extension to the 3D HP model. We show that this new algorithm, dubbed ACO-HPPFP-3, performs better than previous state-of-the-art algorithms on sequences whose native conformations do not contain structural nuclei (parts of the native fold that predominantly consist of local interactions) at the ends, but rather in the middle of the sequence, and that it generally finds a more diverse set of native conformations. Conclusions The application of ACO to this bioinformatics problem compares favourably with specialised, state-of-the-art methods for the 2D and 3D HP protein folding problem; our empirical results indicate that our rather simple ACO algorithm scales worse with sequence length but usually finds a more diverse ensemble of native states. Therefore the development of ACO algorithms for more complex and realistic models of protein structure holds significant promise. PMID:15710037
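The objective the ACO algorithm optimizes can be made concrete with a minimal sketch of the 2D HP-model energy (an illustration only, not the authors' ACO-HPPFP-3 code): each pair of H residues that are lattice neighbours but not adjacent along the chain contributes -1.

```python
def hp_energy(sequence, path):
    """Energy of a 2D HP-model conformation: -1 for every pair of H residues
    that are lattice neighbours but not adjacent in the chain.
    `sequence` is a string over {H, P}; `path` is a list of (x, y) lattice
    sites, one per residue, forming a self-avoiding walk."""
    assert len(sequence) == len(path)
    assert len(set(path)) == len(path), "conformation must be self-avoiding"
    occupied = {site: i for i, site in enumerate(path)}
    energy = 0
    for i, (x, y) in enumerate(path):
        if sequence[i] != 'H':
            continue
        for nb in ((x + 1, y), (x, y + 1)):   # right/up only: each pair counted once
            j = occupied.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy
```

For the square conformation of the sequence HHHH there is exactly one non-bonded H-H contact, so the energy is -1.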
An investigation of DTNS2D for use as an incompressible turbulence modelling test-bed
NASA Technical Reports Server (NTRS)
Steffen, Christopher J., Jr.
1992-01-01
This paper documents an investigation of a two-dimensional, incompressible Navier-Stokes solver for use as a test-bed for turbulence modelling. DTNS2D is the code under consideration for use at the Center for Modelling of Turbulence and Transition (CMOTT). This code was created by Gorski at the David Taylor Research Center and incorporates the pseudo-compressibility method. Two laminar benchmark flows are used to measure the performance and implementation of the method. The classical solution of the Blasius boundary layer is used for validating the flat plate flow, while experimental data are incorporated in the validation of backward facing step flow. Velocity profiles, convergence histories, and reattachment lengths are used to quantify these calculations. The organization and adaptability of the code are also examined in light of its role as a numerical test-bed.
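The Blasius solution used for the flat-plate validation can be reproduced independently with a standard shooting method (a sketch, not the DTNS2D solver): integrate f''' + 0.5 f f'' = 0 with f(0) = f'(0) = 0 and bisect on the wall curvature f''(0) until f'(η → ∞) = 1; the accepted value is f''(0) ≈ 0.332.

```python
import numpy as np

def blasius_fpp0(eta_max=10.0, n=2000):
    """Estimate f''(0) for the Blasius equation f''' + 0.5 f f'' = 0
    with f(0) = f'(0) = 0 and f'(inf) = 1, by shooting plus bisection."""
    def fprime_at_infinity(s):
        h = eta_max / n
        y = np.array([0.0, 0.0, s])            # state: [f, f', f'']
        def rhs(y):
            return np.array([y[1], y[2], -0.5 * y[0] * y[2]])
        for _ in range(n):                     # classical RK4 march in eta
            k1 = rhs(y)
            k2 = rhs(y + 0.5 * h * k1)
            k3 = rhs(y + 0.5 * h * k2)
            k4 = rhs(y + h * k3)
            y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        return y[1]                            # f' at the far edge
    lo, hi = 0.1, 1.0                          # bracket for the wall shear f''(0)
    for _ in range(60):                        # bisect: f'(inf) grows with f''(0)
        mid = 0.5 * (lo + hi)
        if fprime_at_infinity(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```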
Dong, Jianping
2014-03-15
The 2D space-fractional Schrödinger equation in the time-independent and time-dependent cases for the scattering problems in the fractional quantum mechanics is studied. We define the Green's functions for the two cases and give their mathematical expressions in infinite series form and in terms of some special functions. The asymptotic formulas of the Green's functions are also given and applied to obtain the approximate wave functions for the fractional quantum scattering problems. These results contain those of the standard (integer-order) quantum mechanics as special cases, and can be applied to study complex quantum systems.
OECD/MCCI 2-D Core Concrete Interaction (CCI) tests : final report February 28, 2006.
Farmer, M. T.; Lomperski, S.; Kilsdonk, D. J.; Aeschlimann, R. W.; Basu, S.
2011-05-23
reactor material database for dry cavity conditions is solely one-dimensional. Although the MACE Scoping Test was carried out with a two-dimensional concrete cavity, the interaction was flooded soon after ablation was initiated to investigate debris coolability. Moreover, due to the scoping nature of this test, the apparatus was minimally instrumented and therefore the results are of limited value from the code validation viewpoint. Aside from the MACE program, the COTELS test series also investigated 2-D CCI under flooded cavity conditions. However, the input power density for these tests was quite high relative to the prototypic case. Finally, the BETA test series provided valuable data on 2-D core concrete interaction under dry cavity conditions, but these tests focused on investigating the interaction of the metallic (steel) phase with concrete. Due to these limitations, there is significant uncertainty in the partition of energy dissipated for the ablation of concrete in the lateral and axial directions under dry cavity conditions for the case of a core oxide melt. Accurate knowledge of this 'power split' is important in the evaluation of the consequences of an ex-vessel severe accident; e.g., lateral erosion can undermine containment structures, while axial erosion can penetrate the basemat, leading to ground contamination and/or possible containment bypass. As a result of this uncertainty, there are still substantial differences among computer codes in the prediction of 2-D cavity erosion behavior under both wet and dry cavity conditions. In light of the above issues, the OECD-sponsored Melt Coolability and Concrete Interaction (MCCI) program was initiated at Argonne National Laboratory. 
The project conducted reactor materials experiments and associated analysis to achieve the following technical objectives: (1) resolve the ex-vessel debris coolability issue through a program that focused on providing both confirmatory evidence and test data for the coolability
A quasi-spectral method for Cauchy problem of 2-D Laplace equation on an annulus
NASA Astrophysics Data System (ADS)
Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei
2005-01-01
Real numbers are usually represented in the computer as floating-point numbers with a finite number of hexadecimal digits. Accordingly, numerical analysis often suffers from rounding errors, which particularly deteriorate the precision of numerical solutions of inverse and ill-posed problems. We attempt to use multi-precision arithmetic to reduce the effects of rounding error. The use of the multi-precision arithmetic system is by the courtesy of Dr Fujiwara of Kyoto University. In this paper we try to show the effectiveness of multi-precision arithmetic by taking two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic works well in the resolution of those numerical solutions, when combined with the high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
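Python's standard decimal module can illustrate the effect the authors exploit (a toy example, not Dr Fujiwara's multi-precision system): the smaller root of x² - bx + 1 = 0 evaluated by the textbook formula is destroyed by catastrophic cancellation at low precision but recovered at high precision.

```python
from decimal import Decimal, getcontext

def small_root(b, digits):
    """Smaller root of x^2 - b x + 1 = 0 via the textbook formula
    (b - sqrt(b^2 - 4)) / 2, evaluated with `digits` decimal digits.
    The subtraction cancels catastrophically when b is large."""
    getcontext().prec = digits
    b = Decimal(b)
    return (b - (b * b - 4).sqrt()) / 2

bad = small_root(10**8, 8)    # every significant digit cancels away
good = small_root(10**8, 50)  # close to the true root 1/b + 1/b^3 + ...
```

With 8 digits, b² - 4 rounds back to b², the square root returns b exactly, and the computed root collapses to 0; with 50 digits the result is correct to roughly 40 digits.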
Sweetser, John David
2013-10-01
This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. 54 predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in Appendices B.1 and B.2.
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
NASA Astrophysics Data System (ADS)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-01
We propose a numerical solution of the reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good reconstructions of vector fields.
National Proficiency Testing Result of CYP2D6*10 Genotyping for Adjuvant Tamoxifen Therapy in China.
Lin, Guigao; Zhang, Kuo; Yi, Lang; Han, Yanxi; Xie, Jiehong; Li, Jinming
2016-01-01
Tamoxifen has been successfully used for treating breast cancer and preventing cancer recurrence. Cytochrome P450 2D6 (CYP2D6) plays a key role in the process of metabolizing tamoxifen to its active moiety, endoxifen. Patients with variants of the CYP2D6 gene may not receive the full benefit of tamoxifen treatment. The CYP2D6*10 variant (the most common variant in Asians) was analyzed to optimize the prescription of tamoxifen in China. To ensure referring clinicians have accurate information for genotype-guided tamoxifen treatment, the Chinese National Center for Clinical Laboratories (NCCL) organized a national proficiency testing (PT) scheme to evaluate the performance of laboratories providing CYP2D6*10 genotyping. Ten genomic DNA samples with CYP2D6 wild-type or CYP2D6*10 variants were validated by PCR-sequencing and sent to 28 participant laboratories. The genotyping results and pharmacogenomic test reports were submitted and evaluated by NCCL experts. Additional information regarding the number of samples tested, the accreditation/certification status, and detecting technology was also requested. Thirty-one data sets were received, with a corresponding analytical sensitivity of 98.2% (548/558 challenges; 95% confidence interval: 96.7-99.1%) and an analytical specificity of 99.0% (675/682; 95% confidence interval: 97.9-99.5%). Overall, 25/28 participants correctly identified CYP2D6*10 status in 10 samples; however, two laboratories made serious genotyping errors. Most of the essential information was included in the 20 submitted CYP2D6*10 test reports. The majority of Chinese laboratories are reliable for detecting the CYP2D6*10 variant; however, several issues revealed in this study underline the importance of PT schemes in continued external assessment and provision of guidelines. PMID:27603206
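The headline figures can be recomputed directly from the counts quoted above. The interval method NCCL used is not stated, so a Wilson score interval is assumed here; it lands close to the quoted 96.7-99.1% for sensitivity.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

sens = 548 / 558          # analytical sensitivity from the PT challenges
spec = 675 / 682          # analytical specificity
lo, hi = wilson_ci(548, 558)
```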
A multiple-scale Pascal polynomial for 2D Stokes and inverse Cauchy-Stokes problems
NASA Astrophysics Data System (ADS)
Liu, Chein-Shan; Young, D. L.
2016-05-01
The polynomial expansion method is a useful tool for solving both the direct and inverse Stokes problems; together with the pointwise collocation technique, it makes it easy to derive the algebraic equations satisfying the Stokes differential equations and the specified boundary conditions. In this paper we propose two novel numerical algorithms, based on a third-first order system and a third-third order system, to solve the direct and the inverse Cauchy problems in Stokes flows by developing a multiple-scale Pascal polynomial method, in which the scales are determined a priori by the collocation points. Assessing the performance through numerical experiments, we find that the multiple-scale Pascal polynomial expansion method (MSPEM) is accurate and stable against large noise.
A 2D forward and inverse code for streaming potential problems
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Jardani, A.; Revil, A.
2013-12-01
The self-potential method corresponds to the passive measurement of the electrical field in response to the occurrence of natural sources of current in the ground. One of these sources corresponds to the streaming current associated with the flow of the groundwater. We can therefore apply the self-potential method to recover non-intrusively some information regarding the groundwater flow. We first solve the forward problem, starting with the solution of the groundwater flow problem, then computing the source current density, and finally solving a Poisson equation for the electrical potential. We use the finite-element method to solve the relevant partial differential equations. In order to reduce the number of (petrophysical) model parameters required to solve the forward problem, we introduced an effective charge density tensor of the pore water, which can be determined directly from the permeability tensor for neutral pore waters. The second aspect of our work concerns the inversion of the self-potential data using Tikhonov regularization with smoothness and weighting depth constraints. This approach accounts for the distribution of the electrical resistivity, which can be independently and approximately determined from electrical resistivity tomography. A numerical code, SP2DINV, has been implemented in Matlab to perform both the forward and inverse modeling. Three synthetic case studies are discussed.
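The final step of the forward chain, a Poisson solve for the electrical potential given a source term, can be sketched with a minimal 5-point finite-difference solver (the authors use finite elements and a heterogeneous resistivity; this constant-coefficient, dense-matrix version is purely illustrative).

```python
import numpy as np

def solve_poisson_dirichlet(f, n):
    """Solve -laplacian(u) = f on the unit square with u = 0 on the boundary,
    using the 5-point finite-difference stencil and a dense linear solve.
    `f` is a function f(x, y); the interior grid is n x n."""
    h = 1.0 / (n + 1)
    idx = lambda i, j: i * n + j
    A = np.zeros((n * n, n * n))
    b = np.zeros(n * n)
    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            A[k, k] = 4.0                     # stencil centre
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[k, idx(ii, jj)] = -1.0  # interior neighbours
            b[k] = h * h * f((i + 1) * h, (j + 1) * h)
    return np.linalg.solve(A, b).reshape(n, n)

# manufactured check: with f = 2*pi^2*sin(pi x)*sin(pi y),
# the exact solution is u = sin(pi x)*sin(pi y)
u = solve_poisson_dirichlet(
    lambda x, y: 2 * np.pi**2 * np.sin(np.pi * x) * np.sin(np.pi * y), 20)
```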
Algebraic rings of integers and some 2D lattice problems in physics
NASA Astrophysics Data System (ADS)
Nanxian, Chen; Zhaodou, Chen; Shaojun, Liu; Yanan, Shen; Xijin, Ge
1996-09-01
This paper develops the Möbius inversion formula for the Gaussian integers and Eisenstein's integers, and gives two applications. The first application is to the two-dimensional arithmetic Fourier transform (AFT), which is suitable for parallel processing. The second application is to two-dimensional inverse lattice problems, and is illustrated with the recovery of interatomic potentials from the cohesive energy for monolayer graphite. The paper demonstrates the potential application in the physical science of integral domains other than the standard integers.
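The inverse-lattice machinery can be demonstrated in one dimension, where the classical Möbius inversion pair E(x) = Σₙ φ(nx) and φ(x) = Σₙ μ(n) E(nx) recovers a pair potential from a cohesive-energy sum (a toy of the number-theoretic idea, not the monolayer-graphite calculation; the decaying potential and cutoffs below are assumptions).

```python
from math import exp

def mobius(n):
    """Moebius function mu(n) via trial-division factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # squared prime factor => mu = 0
            result = -result
        d += 1
    if n > 1:
        result = -result          # one remaining prime factor
    return result

def cohesive(phi, x, nmax=60):
    """Lattice sum over neighbour shells: E(x) = sum_n phi(n x)."""
    return sum(phi(n * x) for n in range(1, nmax + 1))

def invert(E, x, nmax=60):
    """Moebius inversion: phi(x) = sum_n mu(n) E(n x)."""
    return sum(mobius(n) * E(n * x) for n in range(1, nmax + 1))

phi = lambda r: exp(-2.0 * r)     # toy, rapidly decaying pair potential
E = lambda x: cohesive(phi, x)
recovered = invert(E, 1.0)        # should reproduce phi(1.0)
```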
NASA Astrophysics Data System (ADS)
Mo, Yike; Greenhalgh, Stewart A.; Robertsson, Johan O. A.; Karaman, Hakki
2015-05-01
Lateral velocity variations and low velocity near-surface layers can produce strong scattered and guided waves which interfere with reflections and lead to severe imaging problems in seismic exploration. In order to investigate these specific problems by laboratory seismic modelling, a simple 2D ultrasonic model facility has been recently assembled within the Wave Propagation Lab at ETH Zurich. The simulated geological structures are constructed from 2 mm thick metal and plastic sheets, cut and bonded together. The experiments entail the use of a piezoelectric source driven by a pulse amplifier at ultrasonic frequencies to generate Lamb waves in the plate, which are detected by piezoelectric receivers and recorded digitally on a National Instruments recording system, under LabVIEW software control. The 2D models employed were constructed in-house in full recognition of the similitude relations. The first heterogeneous model features a flat uniform low velocity near-surface layer and deeper dipping and flat interfaces separating different materials. The second model is comparable but also incorporates two rectangular shaped inserts, one of low velocity, the other of high velocity. The third model is identical to the second other than it has an irregular low velocity surface layer of variable thickness. Reflection as well as transmission experiments (crosshole & vertical seismic profiling) were performed on each model. The two dominant Lamb waves recorded are the fundamental symmetric mode (non-dispersive) and the fundamental antisymmetric (flexural) dispersive mode, the latter normally being absent when the source transducer is located on a model edge but dominant when it is on the flat planar surface of the plate. Experimental group and phase velocity dispersion curves were determined and plotted for both modes in a uniform aluminium plate. For the reflection seismic data, various processing techniques were applied, up to pre-stack Kirchhoff migration. The
NASA Astrophysics Data System (ADS)
Stone, James M.; Norman, Michael L.
1992-06-01
In this, the second of a series of three papers, we continue a detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows in astrophysics including a self-consistent treatment of the effects of magnetic fields and radiation transfer. In this paper, we give a detailed description of the magnetohydrodynamical (MHD) algorithms in ZEUS-2D. The recently developed constrained transport (CT) algorithm is implemented for the numerical evolution of the components of the magnetic field for MHD simulations. This formalism guarantees the numerically evolved field components will satisfy the divergence-free constraint at all times. We find, however, that the method used to compute the electromotive forces must be chosen carefully to propagate accurately all modes of MHD wave families (in particular shear Alfvén waves). A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-CT method provides for the accurate evolution of all modes of MHD wave families.
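The divergence-free property that constrained transport guarantees is easy to verify on a 2D periodic staggered grid (a schematic sketch, not the ZEUS-2D MOC-CT implementation): whatever corner EMF Ez is used, the face-centred divergence of B is unchanged to machine precision.

```python
import numpy as np

def ct_update(Bx, By, Ez, dt, dx, dy):
    """One constrained-transport step on a periodic staggered grid.
    Bx lives on x-faces, By on y-faces, and the EMF Ez on cell corners;
    dB/dt = -curl(E) is discretised so the face divergence is conserved."""
    Bx_new = Bx - dt * (np.roll(Ez, -1, axis=1) - Ez) / dy
    By_new = By + dt * (np.roll(Ez, -1, axis=0) - Ez) / dx
    return Bx_new, By_new

def discrete_div(Bx, By, dx, dy):
    """Face-centred divergence of B in each cell (periodic)."""
    return ((np.roll(Bx, -1, axis=0) - Bx) / dx
            + (np.roll(By, -1, axis=1) - By) / dy)

# demo: initialise B from a vector potential Az at corners, so div B = 0 exactly
rng = np.random.default_rng(1)
Az = rng.standard_normal((16, 16))
dx = dy = 0.5
Bx = (np.roll(Az, -1, axis=1) - Az) / dy
By = -(np.roll(Az, -1, axis=0) - Az) / dx
Bx2, By2 = ct_update(Bx, By, rng.standard_normal((16, 16)), dt=0.1, dx=dx, dy=dy)
```

The update adds equal and opposite EMF contributions to the four faces of every cell, so the discrete divergence each cell sees cannot change.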
Veijola, Timo; Råback, Peter
2007-01-01
We present a straightforward method to solve gas damping problems for perforated structures in two dimensions (2D) utilising a Perforation Profile Reynolds (PPR) solver. The PPR equation is an extended Reynolds equation that includes additional terms modelling the leakage flow through the perforations, and variable diffusivity and compressibility profiles. The solution method consists of two phases: 1) determination of the specific admittance profile and relative diffusivity (and relative compressibility) profiles due to the perforation, and 2) solution of the PPR equation with a FEM solver in 2D. Rarefied gas corrections in the slip-flow region are also included. Analytic profiles for circular and square holes with slip conditions are presented in the paper. To verify the method, square perforated dampers with 16–64 holes were simulated with a three-dimensional (3D) Navier-Stokes solver, a homogenised extended Reynolds solver, and a 2D PPR solver. Cases for both translational (normal to the surfaces) and torsional motion were simulated. The presented method extends the region of accurate simulation of perforated structures to cases where the homogenisation method is inaccurate and the full 3D Navier-Stokes simulation is too time-consuming.
NASA Astrophysics Data System (ADS)
Moustafa, Salli; Févotte, François; Lathuilière, Bruno; Plagne, Laurent
2014-06-01
The past few years have been marked by a noticeable increase in the interest in 3D whole-core heterogeneous deterministic neutron transport solvers for reference calculations. Due to the extremely large problem sizes tackled by such solvers, they need to use adapted numerical methods and need to be efficiently implemented to take advantage of the full computing power of modern systems. As for numerical methods, one possible approach consists in iterating over resolutions of 2D and 1D MOC problems by taking advantage of prismatic geometries. The MICADO solver, developed at EDF R&D, is a parallel implementation of such a method in distributed and shared memory systems. However it is currently unable to use SIMD vectorization to leverage the full computing power of modern CPUs. In this paper, we describe our first effort to support vectorization in MICADO, typically targeting Intel SSE CPUs. Both the 2D and 1D algorithms are vectorized, allowing for high expected speedups for the whole spatial solver. We present benchmark computations, which show nearly optimal speedups for our vectorized implementation on the TAKEDA case.
A proposed experimental test to distinguish waves from 2-D turbulence
NASA Technical Reports Server (NTRS)
Dewan, E. M.
1986-01-01
A theory of buoyancy range turbulence that leads to a unique scale, K_B, that allows one to differentiate between waves and turbulence for the special case of theta = 0 (i.e., horizontally propagating waves) is discussed. The theory does not seem to lead to a practical empirical distinction for the general situation. This is due to the fact that, as theta is increased, one has the ever-increasing presence of BRT for longer wavelengths. The fact that the numerical values of epsilon prime are not yet available compounds the difficulty. In addition, it does not appear possible to encompass true 2-D turbulence in the theory. We are thus driven to a test which circumvents all these difficulties. A proposed test is based on the idea that waves are coherent and propagate, while in turbulence we have the opposite situation. In particular, the test is suggested by the following quotation from MULLER (1984), on the nature of such turbulence: 'The turbulence in each horizontal plane is independent from the turbulence in the other planes.' If this statement were to be taken literally, it would imply that the temporal coherence between horizontal speeds, separated only in altitude, would be zero. Any vertical separation would be forced to take into account the effects of viscosity; that is to say, a specific finite vertical separation would be needed to destroy coherence. In order to estimate this distance, L, one can use L = C(nu/S)^(1/2), where nu is the kinematic viscosity, S is the shear scale, and C is a constant of order unity.
NASA Astrophysics Data System (ADS)
Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey
2016-04-01
Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve 2D depth-averaged linear and nonlinear forms of shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA. It is a field dataset, recording the Japan 2011 tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and benchmark data. The differences between 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT
Safgren, Stephanie L.; Suman, Vera J.; Kosel, Matthew L.; Gilbert, Judith A; Buhrow, Sarah A.; Black, John L.; Northfelt, Donald W.; Modak, Anil S.; Rosen, David; Ingle, James N.; Ames, Matthew M.; Reid, Joel M.; Goetz, Matthew P.
2015-01-01
Background In tamoxifen-treated patients, breast cancer recurrence differs according to CYP2D6 genotype and endoxifen steady state concentrations (Endx Css). The 13C-dextromethorphan breath test (DM-BT), labeled with 13C at the O-CH3 moiety, measures CYP2D6 enzyme activity. We sought to examine the ability of the DM-BT to identify known CYP2D6 genotypic poor metabolizers and examine the correlation between DM-BT and Endx Css. Methods DM-BT and tamoxifen pharmacokinetics were obtained at baseline (b), 3 months (3m) and 6 months (6m) following tamoxifen initiation. Potent CYP2D6 inhibitors were prohibited. The correlation between bDM-BT with CYP2D6 genotype and Endx Css was determined. The association between bDM-BT (where values ≤ 0.9 are an indicator of poor in vivo CYP2D6 metabolism) and Endx Css (using values ≤ 11.2 known to be associated with poorer recurrence free survival) was explored. Results 91 patients were enrolled and 77 were eligible. CYP2D6 genotype was positively correlated with b, 3m and 6m DM-BT (r ranging from 0.457 to 0.60, p < 0.001). Both CYP2D6 genotype (r = 0.47; 0.56, p < .0001) and bDM-BT (r = 0.60; 0.54; p < .001) were associated with 3m and 6m Endx Css respectively. Seven of 9 patients (78%) with low (≤ 11.2 nM) 3m Endx Css also had low DM-BT (≤ 0.9), including 2/2 CYP2D6 PM/PM and 5/5 IM/PM. In contrast, 1 of 48 patients (2%) with a low DM-BT had Endx Css > 11.2 nM. Conclusions In patients not taking potent CYP2D6 inhibitors, DM-BT was associated with CYP2D6 genotype and 3m and 6m Endx Css but did not provide better discrimination of Endx Css compared to CYP2D6 genotype alone. Further studies are needed to identify additional factors which alter Endx Css. PMID:25714002
NASA Astrophysics Data System (ADS)
Cockmartin, Lesley; Marshall, Nicholas W.; Van Ongeval, Chantal; Aerts, Gwen; Stalmans, Davina; Zanca, Federica; Shaheen, Eman; De Keyzer, Frederik; Dance, David R.; Young, Kenneth C.; Bosmans, Hilde
2015-05-01
This paper introduces a hybrid method for performing detection studies in projection image based modalities, based on image acquisitions of target objects and patients. The method was used to compare 2D mammography and digital breast tomosynthesis (DBT) in terms of the detection performance of spherical densities and microcalcifications. The method starts with the acquisition of spheres of different glandular equivalent densities and microcalcifications of different sizes immersed in a homogeneous breast tissue simulating medium. These target objects are then segmented and the subsequent templates are fused in projection images of patients and processed or reconstructed. This results in hybrid images with true mammographic anatomy and clinically relevant target objects, ready for use in observer studies. The detection study of spherical densities used 108 normal and 178 hybrid 2D and DBT images; 156 normal and 321 hybrid images were used for the microcalcifications. Seven observers scored the presence/absence of the spheres/microcalcifications in a square region via a 5-point confidence rating scale. Detection performance in 2D and DBT was compared via ROC analysis with sub-analyses for the density of the spheres, microcalcification size, breast thickness and z-position. The study was performed on a Siemens Inspiration tomosynthesis system using patient acquisitions with an average age of 58 years and an average breast thickness of 53 mm providing mean glandular doses of 1.06 mGy (2D) and 2.39 mGy (DBT). Study results showed that breast tomosynthesis (AUC = 0.973) outperformed 2D (AUC = 0.831) for the detection of spheres (p < 0.0001) and this applied for all spherical densities and breast thicknesses. By way of contrast, DBT was worse than 2D for microcalcification detection (AUC2D = 0.974, AUCDBT = 0.838, p < 0.0001), with significant differences found for all sizes (150-354 µm), for breast thicknesses above 40 mm and for heights
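The ROC analysis behind the quoted AUC values can be reproduced nonparametrically from 5-point rating data: the AUC equals the Mann-Whitney probability that a randomly chosen signal-present image outscores a signal-absent one, with ties counted as 1/2 (a sketch of the statistic, not the software used in the study).

```python
def rating_auc(signal_scores, noise_scores):
    """Nonparametric AUC (Mann-Whitney statistic) from confidence ratings:
    the probability that a randomly chosen signal-present image is rated
    higher than a signal-absent one, counting ties as 1/2."""
    wins = 0.0
    for s in signal_scores:
        for n in noise_scores:
            if s > n:
                wins += 1.0
            elif s == n:
                wins += 0.5
    return wins / (len(signal_scores) * len(noise_scores))
```

An AUC of 0.5 corresponds to guessing; 1.0 corresponds to perfect separation of the two rating distributions.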
Simulation and Analysis of Converging Shock Wave Test Problems
Ramsey, Scott D.; Shashkov, Mikhail J.
2012-06-21
Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axi-symmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem, and minimally straining the general credibility of associated analysis and conclusions.
2D-Raman-THz spectroscopy: A sensitive test of polarizable water models
NASA Astrophysics Data System (ADS)
Hamm, Peter
2014-11-01
In a recent paper, the experimental 2D-Raman-THz response of liquid water at ambient conditions has been presented [J. Savolainen, S. Ahmed, and P. Hamm, Proc. Natl. Acad. Sci. U. S. A. 110, 20402 (2013)]. Here, all-atom molecular dynamics simulations are performed with the goal to reproduce the experimental results. To that end, the molecular response functions are calculated in a first step, and are then convoluted with the laser pulses in order to enable a direct comparison with the experimental results. The molecular dynamics simulations are performed with several different water models: TIP4P/2005, SWM4-NDP, and TL4P. As polarizability is essential to describe the 2D-Raman-THz response, the TIP4P/2005 water molecules are amended with either an isotropic or an anisotropic polarizability a posteriori after the molecular dynamics simulation. In contrast, SWM4-NDP and TL4P are intrinsically polarizable, and hence the 2D-Raman-THz response can be calculated in a self-consistent way, using the same force field as during the molecular dynamics simulation. It is found that the 2D-Raman-THz response depends extremely sensitively on details of the water model, and in particular on details of the description of polarizability. Despite the limited time resolution of the experiment, it could easily distinguish between various water models. Albeit not perfect, the overall best agreement with the experimental data is obtained for the TL4P water model.
OECD 2-D Core Concrete Interaction (CCI) tests : CCI-2 test plan, Rev. 0 January 31, 2004.
Farmer, M. T.; Kilsdonk, D. J.; Lomperski, S.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, the Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of two long-term 2-D Core-Concrete Interaction (CCI) experiments designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. The first of these two tests, CCI-1, was conducted on December 19, 2003. This test investigated the interaction of a fully oxidized 400 kg PWR core melt, initially containing 8 wt % calcined siliceous concrete, with a specially designed two
HT2DINV: A 2D forward and inverse code for steady-state and transient hydraulic tomography problems
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.
2015-12-01
Hydraulic tomography is a technique used to characterize the spatial heterogeneities of storativity and transmissivity fields. The responses of an aquifer to a source of hydraulic stimulations are used to recover the features of the estimated fields using inverse techniques. We developed a 2D free-source Matlab package for performing hydraulic tomography analysis in steady-state and transient regimes. The package uses the finite element method to solve the groundwater flow equation for simple or complex geometries, accounting for the anisotropy of the material properties. The inverse problem is based on implementing the geostatistical quasi-linear approach of Kitanidis combined with the adjoint-state method to compute the required sensitivity matrices. For underdetermined inverse problems, the adjoint-state method provides a faster and more accurate approach for the evaluation of sensitivity matrices compared with the finite-difference method. Our methodology is organized in a way that permits the end-user to activate parallel computing in order to reduce the computational burden. Three case studies are investigated, demonstrating the robustness and efficiency of our approach for inverting hydraulic parameters.
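The abstract above contrasts adjoint-state and finite-difference computation of sensitivity matrices. As an illustration only, here is a minimal Python sketch of the finite-difference baseline on a toy 1-D steady-state flow model; the model, names, and values are hypothetical stand-ins, not the package's 2-D finite-element solver.

```python
import numpy as np

# Toy 1-D steady-state flow: d/dx(T dh/dx) = 0 with fixed heads at both ends,
# discretized on n interior nodes. T holds the n+1 interface transmissivities.
def forward(T, n=20, h_left=10.0, h_right=0.0):
    """Solve for the interior heads h given transmissivities T."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        tl, tr = T[i], T[i + 1]
        A[i, i] = -(tl + tr)
        if i > 0:
            A[i, i - 1] = tl
        else:
            b[i] -= tl * h_left   # left boundary head moved to the RHS
        if i < n - 1:
            A[i, i + 1] = tr
        else:
            b[i] -= tr * h_right  # right boundary head moved to the RHS
    return np.linalg.solve(A, b)

def sensitivity_fd(T, eps=1e-6):
    """Jacobian dh/dT by one-sided finite differences: one extra forward
    solve per parameter, which is the cost the adjoint-state method avoids."""
    h0 = forward(T)
    J = np.zeros((h0.size, T.size))
    for j in range(T.size):
        Tp = T.copy()
        Tp[j] += eps
        J[:, j] = (forward(Tp) - h0) / eps
    return J

T = np.full(21, 2.0)          # uniform medium
J = sensitivity_fd(T)
print(J.shape)                # one column per transmissivity parameter
```

For a model with m parameters, this baseline costs m + 1 forward solves per Jacobian; the adjoint-state approach replaces that with one forward and one adjoint solve per observation, which is why it scales better for underdetermined problems.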
Comparison between 2D and 3D Numerical Modelling of a hot forging simulative test
Croin, M.; Ghiotti, A.; Bruschi, S.
2007-04-07
The paper presents the comparative analysis between 2D and 3D modelling of a simulative experiment, performed in a laboratory environment, in which operating conditions approximate hot forging of a turbine aerofoil section. The plane strain deformation was chosen as an ideal case to analyze the process because of the thickness variations in the final section and the consequent distributions of contact pressure and sliding velocity at the interface, which are close to the conditions of the real industrial process. In order to compare the performances of the 2D and 3D approaches, two different analyses were performed and compared with the experiments in terms of loads and temperature peaks at the interface between the dies and the workpiece.
Fabrication and Testing of Low Cost 2D Carbon-Carbon Nozzle Extensions at NASA/MSFC
NASA Technical Reports Server (NTRS)
Greene, Sandra Elam; Shigley, John K.; George, Russ; Roberts, Robert
2015-01-01
Subscale liquid engine tests were conducted at NASA/MSFC using a 1.2 Klbf engine with liquid oxygen (LOX) and gaseous hydrogen. Testing was performed for main-stage durations ranging from 10 to 160 seconds at a chamber pressure of 550 psia and a mixture ratio of 5.7. Operating the engine in this manner demonstrated a new and affordable test capability for evaluating subscale nozzles by exposing them to long duration tests. A series of 2D C-C nozzle extensions were manufactured, oxidation protection applied and then tested on a liquid engine test facility at NASA/MSFC. The C-C nozzle extensions had oxidation protection applied using three very distinct methods with a wide range of costs and process times: SiC via Polymer Impregnation & Pyrolysis (PIP), Air Plasma Spray (APS) and Melt Infiltration. The tested extensions were about 6" long with an exit plane ID of about 6.6". The test results, material properties and performance of the 2D C-C extensions and attachment features will be discussed.
Liu, T.; Deptuch, G.; Hoff, J.; Jindariani, S.; Joshi, S.; Olsen, J.; Tran, N.; Trimpl, M.
2015-02-01
An associative memory-based track finding approach has been proposed for a Level 1 tracking trigger to cope with increasing luminosities at the LHC. The associative memory uses a massively parallel architecture to tackle the intrinsically complex combinatorics of track finding algorithms, thus avoiding the typical power law dependence of execution time on occupancy and solving the pattern recognition in times roughly proportional to the number of hits. This is of crucial importance given the large occupancies typical of hadronic collisions. The design of an associative memory system capable of dealing with the complexity of HL-LHC collisions and with the short latency required by Level 1 triggering poses significant, as yet unsolved, technical challenges. For this reason, an aggressive R&D program has been launched at Fermilab to advance state-of-the-art associative memory technology, the so-called VIPRAM (Vertically Integrated Pattern Recognition Associative Memory) project. The VIPRAM leverages emerging 3D vertical integration technology to build faster and denser Associative Memory devices. The first step is to implement in conventional VLSI the associative memory building blocks that can be used in 3D stacking; in other words, the building blocks are laid out as if they were part of a 3D design. In this paper, we report on the first successful implementation of a 2D VIPRAM demonstrator chip (protoVIPRAM00). The results show that these building blocks are ready for 3D stacking.
Ignition problems in scramjet testing
Mitani, Tohru
1995-05-01
Ignition of H₂ in heated air containing H₂O, radicals, and dust was investigated for scramjet testing. Using a reduced kinetic model for H₂−O₂ systems, the effects of H₂O and radicals in nozzles are discussed in relation to engine testing with vitiation heaters. Analysis using linearized rate equations suggested that the addition of O atoms was 1.5 times more effective than the addition of H atoms for ignition. This result can be applied to the problem of premature ignition caused by residual radicals and to plasma-jet igniters. Thermal and chemical effects of dust, inevitable in storage air heaters, were studied next. The effects of the heat capacity and size of dust were expressed in terms of an exponential integral function. It was found that radical termination on the surface of dust produces an effect equivalent to heat loss. Inhibition of ignition by dust may result if the mass fraction of dust reaches 10⁻³.
Implementation of a system to life test 2-D laser arrays
NASA Astrophysics Data System (ADS)
Faltus, Thomas H.; Bicket, Daniel J.
1992-02-01
Multi-emitter laser devices, stacked to form 2-dimensional arrays, have been shown to effectively pump Nd:YAG slabs in solid state laser systems. Using these arrays as substitutes for flashlamps provides the potential for increased reliability of laser systems. However, to quantify this reliability improvement, laser arrays must be life tested. To ensure that the life test data accurately describes the array lifetimes, the life test system must possess the following characteristics: adequate control of operating stresses, to ensure that the test results apply to true use-conditions; continuous monitoring and recording of array health, to capture unpredictable variations in array performance; in-situ parameter measurement, to measure array performance without inducing handling damage; and extensive safety interlocks, to protect personnel from laser hazards. This paper describes an array life test system possessing these characteristics. It describes the system hardware, operating and test software, and the methodology behind the system's use. We demonstrate the system's performance by life testing 2-dimensional laser arrays having previously documented front facet anomalies. Disadvantages as well as advantages of design decisions are discussed.
Altitude testing of the 2D V/STOL ADEN demonstrator on an F404 engine
NASA Technical Reports Server (NTRS)
Blozy, J. T.
1985-01-01
The Augmented Deflector Exhaust Nozzle (ADEN) exhaust system was tested in the PSL-3 altitude chamber at the NASA Lewis Research Center in order to evaluate aerodynamic performance, cooling-system effectiveness, and mechanical operation at flight-type conditions. The ADEN, a flight-weight, two-dimensional, thrust-vectoring nozzle, was successfully tested on the F404 engine using a remote engine control system for automatic or manual setting of the throat-area control and available fan air for the nozzle internal cooling system. Throughout the test, the ADEN performed with no adverse effects on the engine or augmentor operation.
Altitude testing of a flight weight, self-cooled, 2D thrust vectoring exhaust nozzle
NASA Technical Reports Server (NTRS)
Wooten, W. H.; Blozy, J. T.; Speir, D. W.; Lottig, R. A.
1984-01-01
The Augmented Deflector Exhaust Nozzle (ADEN) was tested in PSL-3 at NASA-Lewis Research Center using an F404 engine. The ADEN is a flight weight Single Expansion Ramp Nozzle with thrust vectoring, an internal cooling system utilizing the available engine fan flow, and a variable area throat controlled by the engine control system. Test conditions included dry and max A/B operation at nozzle pressure ratios from 2.0 to 15.0. High nozzle pressure loading was simulated to verify structural integrity at near maximum design pressure. Nozzle settings covered the full range in throat area and + or - 15 deg deflection angle. Test results demonstrated expected aerodynamic performance, cooling system effectiveness, control system stability, and mechanical integrity.
Critical Heat Flux Experiments on the Reactor Vessel Wall Using 2-D Slice Test Section
Jeong, Yong Hoon; Chang, Soon Heung; Baek, Won-Pil
2005-11-15
The critical heat flux (CHF) on the reactor vessel outer wall was measured using a two-dimensional slice test section. The radius and the channel area of the test section were 2.5 m and 10 cm x 15 cm, respectively. The flow channel area and the heater width were smaller than those of the ULPU experiments, but the radius was greater than that of the ULPU. CHF data were acquired under inlet subcoolings of 2 to 25 °C and mass fluxes of 0 to 300 kg/m²·s. The measured CHF value was generally slightly lower than that of the ULPU. The difference possibly comes from the difference in the test section material and thickness. However, the general trend of the CHF with mass flux was similar to that of the ULPU. The experimental CHF data were compared with the values predicted by the SULTAN correlation. The SULTAN correlation predicted this study's data well only for mass fluxes higher than 200 kg/m²·s and for exit qualities lower than 0.05. A local-condition-based correlation was developed, and it showed good prediction capability for broad quality (-0.01 to 0.5) and mass flux (<300 kg/m²·s) conditions with a root-mean-square error of 2.4%. The CHF increased with trisodium phosphate-added water.
Surrogate Guderley Test Problem Definition
Ramsey, Scott D.; Shashkov, Mikhail J.
2012-07-06
The surrogate Guderley problem (SGP) is a 'spherical shock tube' (or 'spherical driven implosion') designed to ease the notoriously subtle initialization of the true Guderley problem, while still maintaining a high degree of fidelity. In this problem (similar to the Guderley problem), an infinitely strong shock wave forms and converges in one-dimensional (1D) cylindrical or spherical symmetry through a polytropic gas with arbitrary adiabatic index γ, uniform density ρ₀, zero velocity, and negligible pre-shock pressure and specific internal energy (SIE). This shock proceeds to focus on the point or axis of symmetry at r = 0 (resulting in ostensibly infinite pressure, velocity, etc.) and reflect back out into the incoming perturbed gas.
Medical Tests for Prostate Problems
... to be related to urine blockage, the health care provider may recommend tests that measure bladder pressure and urine flow rate. ... pain, chills, or fever—should call their health care provider immediately. [ Top ] How soon will prostate test results be available? Results for simple medical tests ...
Najjar, F M; Solberg, J; White, D
2008-04-17
A verification test suite has been assessed with primary focus on low-Reynolds-number flow of liquid metals. This is representative of the interface between the armature and rail in gun applications. The computational multiphysics framework ALE3D is used. The main objective of the current study is to provide guidance and gain confidence in the results obtained with ALE3D. A verification test suite based on 2-D cases is proposed, in which the lid-driven cavity and Couette flow are investigated. The hydro and thermal fields are assumed to be steady and laminar in nature. Results are compared with analytical solutions and previously published data. Mesh resolution studies are performed along with various models for the equation of state.
NASA Astrophysics Data System (ADS)
Pérez-Corona, M.; García, J. A.; Taller, G.; Polgár, D.; Bustos, E.; Plank, Z.
2016-02-01
The purpose of geophysical electrical surveys is to determine the subsurface resistivity distribution by making measurements on the ground surface. From these measurements, the true resistivity of the subsurface can be estimated. The ground resistivity is related to various geological parameters, such as the mineral and fluid content, porosity and degree of water saturation in the rock. Electrical resistivity surveys have been used for many decades in hydrogeological, mining and geotechnical investigations. More recently, they have been used for environmental surveys. To obtain a more accurate subsurface model than is possible with a simple 1-D model, a more complex model must be used. In a 2-D model, the resistivity values are allowed to vary in one horizontal direction (usually referred to as the x direction) but are assumed to be constant in the other horizontal (the y) direction. A more realistic model would be a fully 3-D model where the resistivity values are allowed to change in all three directions. In this research, a simulation of the cone penetration test and 2D imaging resistivity are used as tools to simulate the distribution of hydrocarbons in soil.
Problem-Solving Test: Pyrosequencing
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2013-01-01
Terms to be familiar with before you start to solve the test: Maxam-Gilbert sequencing, Sanger sequencing, gel electrophoresis, DNA synthesis reaction, polymerase chain reaction, template, primer, DNA polymerase, deoxyribonucleoside triphosphates, orthophosphate, pyrophosphate, nucleoside monophosphates, luminescence, acid anhydride bond,…
Inverse Problem in Nondestructive Testing Using Arrayed Eddy Current Sensors
Zaoui, Abdelhalim; Menana, Hocine; Feliachi, Mouloud; Berthiau, Gérard
2010-01-01
A fast crack profile reconstitution model in nondestructive testing is developed using an arrayed eddy current sensor. The inverse problem is based on an iterative solving of the direct problem using genetic algorithms. In the direct problem, assuming a current excitation, the incident field produced by all the coils of the arrayed sensor is obtained by the translation and superposition of the 2D axisymmetric finite element results obtained for one coil; the impedance variation of each coil, due to the crack, is obtained by the reciprocity principle involving the dyadic Green’s function. For the inverse problem, the surface of the crack is subdivided into rectangular cells, and the objective function is expressed only in terms of the depth of each cell. The evaluation of the dyadic Green’s function matrix is made independently of the iterative procedure, making the inversion very fast. PMID:22163680
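As a hedged illustration of the inversion loop described above (not the authors' code), the following Python sketch subdivides the crack surface into cells whose depths are the unknowns, and a simple genetic algorithm minimizes the misfit between "measured" and modeled sensor signals. The linear forward model is a hypothetical stand-in for the Green's-function-based impedance computation.

```python
import random

N_CELLS = 6  # crack surface subdivided into N_CELLS rectangular cells

def forward(depths):
    # Placeholder forward model: each coil's signal responds linearly to
    # nearby cell depths, with influence decaying with coil-cell distance.
    return [sum(d / (1 + abs(i - j)) for j, d in enumerate(depths))
            for i in range(N_CELLS)]

def misfit(depths, measured):
    # Objective function expressed only in terms of the per-cell depths.
    return sum((m - s) ** 2 for m, s in zip(forward(depths), measured))

def genetic_inversion(measured, pop_size=60, gens=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(N_CELLS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda d: misfit(d, measured))
        elite = pop[: pop_size // 4]                # selection (elitism)
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, N_CELLS)         # one-point crossover
            child = a[:cut] + b[cut:]
            k = rng.randrange(N_CELLS)              # Gaussian mutation
            child[k] = min(1.0, max(0.0, child[k] + rng.gauss(0, 0.05)))
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda d: misfit(d, measured))

true_depths = [0.1, 0.4, 0.8, 0.8, 0.4, 0.1]        # synthetic "crack"
measured = forward(true_depths)
best = genetic_inversion(measured)
print(misfit(best, measured) < misfit([0.5] * N_CELLS, measured))
```

The key efficiency point made in the abstract carries over to this sketch: anything independent of the iterate (here the influence coefficients; in the paper, the dyadic Green's function matrix) is evaluated once outside the loop, so each GA generation only costs cheap forward evaluations.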
Photoluminescence and the gallium problem for highest-mobility GaAs/AlGaAs-based 2d electron gases
NASA Astrophysics Data System (ADS)
Schläpfer, F.; Dietsche, W.; Reichl, C.; Faelt, S.; Wegscheider, W.
2016-05-01
The quest for extremely high mobilities of 2d electron gases in MBE-grown heterostructures is hampered by the available purity of the starting materials, particularly of the gallium. Here we compare the role of different Ga lots having nominally the highest possible quality on the mobility and the photoluminescence (PL) of modulation-doped single-interface structures and find significant differences. A weak exciton PL reveals that the purity of the Ga is insufficient; no high mobility can be reached with such a lot with reasonable effort. On the other hand, a strong exciton PL indicates a high initial Ga purity, allowing mobilities of 15 million cm²/Vs (single interface) or 28 million cm²/Vs (doped quantum wells) to be reached in our MBE systems. We discuss possible origins of the inconsistent Ga quality. Furthermore, we compare samples grown in different MBE systems over a period of several years and find that mobility and PL are correlated if similar structures and growth procedures are used.
NASA Astrophysics Data System (ADS)
Kh., Lotfy
2012-06-01
In the present paper, we introduce the coupled theory (CD), Lord-Schulman (LS) theory, and Green-Lindsay (GL) theory to study the influences of a magnetic field and rotation on a two-dimensional problem of fibre-reinforced thermoelasticity. The material is a homogeneous isotropic elastic half-space. The method applied here is to use normal mode analysis to solve a thermal shock problem. Some particular cases are also discussed in the context of the problem. Deformation of a body depends on the nature of the force applied as well as the type of boundary conditions. Numerical results for the temperature, displacement, and thermal stress components are given and illustrated graphically in the absence and the presence of the magnetic field and rotation.
Fevotte, F.; Lathuiliere, B.
2013-07-01
The large increase in computing power over the past few years now makes it possible to consider developing 3D full-core heterogeneous deterministic neutron transport solvers for reference calculations. Among all approaches presented in the literature, the method first introduced in [1] seems very promising. It consists in iterating over resolutions of 2D and 1D MOC problems by taking advantage of prismatic geometries without introducing approximations of a low-order operator such as diffusion. However, before developing a solver with all industrial options at EDF, several points needed to be clarified. In this work, we first prove the convergence of this iterative process, under some assumptions. We then present our high-performance, parallel implementation of this algorithm in the MICADO solver. Benchmarking the solver against the Takeda case shows that the 2D-1D coupling algorithm does not seem to affect the spatial convergence order of the MOC solver. As for performance issues, our study shows that even though the data distribution is suited to the 2D solver part, the efficiency of the 1D part is sufficient to ensure a good parallel efficiency of the global algorithm. After this study, the main remaining implementation difficulty is the memory requirement of a vector used for initialization. An efficient acceleration operator will also need to be developed. (authors)
Farmer, M. T.; Kilsdonk, D. J.; Lomperski, S.; Aeschliman, R. W.; Basu, S.
2011-05-23
experiments to address remaining uncertainties related to long-term two-dimensional molten core-concrete interaction. In particular, for both wet and dry cavity conditions, there is uncertainty in evaluating the lateral vs. axial power split during a core-concrete interaction due to a lack of experimental data. As a result, there are differences in the 2-D cavity erosion predicted by codes such as MELCOR, WECHSL, and COSACO. The first step towards generating this data is to produce a test plan for review by the Project Review Group (PRG). The purpose of this document is to provide this plan.
NASA Astrophysics Data System (ADS)
Leblond, Jean-Baptiste; Frelat, Joël
2014-03-01
It is experimentally well-known that a crack loaded in mode I+III propagates through formation of discrete fracture facets inclined at a certain tilt angle on the original crack plane, depending on the ratio of the mode III to mode I initial stress intensity factors. Pollard et al. (1982) have proposed to calculate this angle by considering the tractions on all possible future infinitesimal facets and assuming shear tractions to be zero on that which will actually develop. In this paper we consider the opposite case of well-developed facets; the stress field near the lateral fronts of such facets becomes independent of the initial crack and essentially 2D in a plane perpendicular to the main direction of crack propagation. To determine this stress field, we solve the model 2D problem of an infinite plate containing an infinite periodic array of cracks inclined at some angle on a straight line, and loaded through uniform stresses at infinity. This is done first analytically, for small values of this angle, by combining Muskhelishvili's (1953) formalism and a first-order perturbation procedure. The formulae found for the 2D stress intensity factors are then extended in an approximate way to larger angles by using another reference solution, and finally assessed through comparison with some finite element results. To finally illustrate the possible future application of these formulae to the prediction of the stationary tilt angle, we introduce the tentative assumption that the 2D mode II stress intensity factor is zero on the lateral fronts of the facets. An approximate formula providing the tilt angle as a function of the ratio of the mode III to mode I stress intensity factors of the initial crack is deduced from there. This formula, which slightly depends on the type of loading imposed, predicts somewhat smaller angles than that of Pollard et al. (1982).
NASA Technical Reports Server (NTRS)
Costiner, Sorin; Taasan, Shlomo
1994-01-01
This paper presents multigrid (MG) techniques for nonlinear eigenvalue problems (EP) and emphasizes an MG algorithm for a nonlinear Schrödinger EP. The algorithm overcomes the difficulties of such problems by combining the following techniques: an MG projection coupled with backrotations for separation of solutions and treatment of difficulties related to clusters of close and equal eigenvalues; MG subspace continuation techniques for treatment of the nonlinearity; and an MG simultaneous treatment of the eigenvectors together with the nonlinearity and the global constraints. The simultaneous MG techniques reduce the large number of self-consistent iterations to only a few, or even one, MG simultaneous iteration, and keep the solutions in a neighborhood where the algorithm converges fast.
Techniques utilized in the simulated altitude testing of a 2D-CD vectoring and reversing nozzle
NASA Technical Reports Server (NTRS)
Block, H. Bruce; Bryant, Lively; Dicus, John H.; Moore, Allan S.; Burns, Maureen E.; Solomon, Robert F.; Sheer, Irving
1988-01-01
Simulated altitude testing of a two-dimensional, convergent-divergent, thrust vectoring and reversing exhaust nozzle was accomplished. An important objective of this test was to develop test hardware and techniques to properly operate a vectoring and reversing nozzle within the confines of an altitude test facility. This report presents detailed information on the major test support systems utilized, the operational performance of the systems and the problems encountered, and test equipment improvements recommended for future tests. The most challenging support systems included the multi-axis thrust measurement system, vectored and reverse exhaust gas collection systems, and infrared temperature measurement systems used to evaluate and monitor the nozzle. The feasibility of testing a vectoring and reversing nozzle of this type in an altitude chamber was successfully demonstrated. Supporting systems performed as required. During reverser operation, engine exhaust gases were successfully captured and turned downstream. However, a small amount of exhaust gas spilled out the collector ducts' inlet openings when the reverser was opened more than 60 percent. The spillage did not affect engine or nozzle performance. The three infrared systems which viewed the nozzle through the exhaust collection system worked remarkably well considering the harsh environment.
Leighty, Katherine A; Menzel, Charles R; Fragaszy, Dorothy M
2008-09-01
Object recognition research is typically conducted using 2D stimuli in lieu of 3D objects. This study investigated the amount and complexity of knowledge gained from 2D stimuli in adult chimpanzees (Pan troglodytes) and young children (aged 3 and 4 years) using a titrated series of cross-dimensional search tasks. Results indicate that 3-year-old children utilize a response rule guided by local features to solve cross-dimensional tasks. Four-year-old toddlers and adult chimpanzees use information about object form and compositional structure from a 2D image to guide their search in three dimensions. Findings have specific implications to research conducted in object recognition/perception and broad relevance to all areas of research and daily living that incorporate 2D displays. PMID:18801134
The MINPACK-2 test problem collection
Averick, B.M.; Carter, R.G.; Xue, Guo-Liang; More, J.J.
1992-06-01
Optimization software has often been developed without any specific application in mind. This generic approach has worked well in many cases, but as we seek the solution of larger and more complex optimization problems on high-performance computers, the development of optimization software should take into account specific optimization problems that arise in a wide range of applications. This observation was the motivation for the development of the MINPACK-2 test problem collection. Each of the problems in this collection comes from a real application and is representative of other commonly encountered problems. There are problems from such diverse fields as fluid dynamics, medicine, elasticity, combustion, molecular conformation, nondestructive testing, chemical kinetics, lubrication, and superconductivity.
NASA Astrophysics Data System (ADS)
Lotfy, Kh.; Othman, Mohamed I. A.
2014-01-01
In the present paper, the coupled theory, Lord-Shulman theory, and Green-Lindsay theory are introduced to study the influence of a magnetic field on the 2-D problem of a fiber-reinforced thermoelastic material. These theories are also applied to study the influence of reinforcement on the total deformation of an infinite space weakened by a finite linear opening Mode-I crack. The material is a homogeneous, isotropic elastic half-space. The crack is subjected to a prescribed temperature and stress distribution. Normal mode analysis is used to solve the problem of a Mode-I crack. Numerical results for the temperature, the displacement, and the thermal stress components are given and illustrated graphically in the absence and the presence of the magnetic field. A comparison between the three theories is also made for different depths.
Rua, Francesco; Sadeghi, Sheila J; Castrignanò, Silvia; Valetti, Francesca; Gilardi, Gianfranco
2015-10-01
This work reports for the first time the direct electron transfer of the Canis familiaris cytochrome P450 2D15 on glassy carbon electrodes to provide an analytical tool as an alternative to P450 animal testing in the drug discovery process. Cytochrome P450 2D15, which corresponds to the human homologue P450 2D6, was recombinantly expressed in Escherichia coli and entrapped on glassy carbon electrodes (GC) either with the cationic polymer polydiallyldimethylammonium chloride (PDDA) or in the presence of gold nanoparticles (AuNPs). Reversible electrochemical signals of P450 2D15 were observed with calculated midpoint potentials (E1/2) of −191 ± 5 and −233 ± 4 mV vs. Ag/AgCl for GC/PDDA/2D15 and GC/AuNPs/2D15, respectively. These experiments were then followed by the electro-catalytic activity of the immobilized enzyme in the presence of metoprolol. The latter drug is a beta-blocker used for the treatment of hypertension and is a specific marker of human P450 2D6 activity. Electrocatalysis data showed that only in the presence of AuNPs was the expected α-hydroxy-metoprolol product present, as shown by HPLC. The successful immobilization of the electroactive C. familiaris cytochrome P450 2D15 on electrode surfaces addresses the ever increasing demand for alternative in vitro methods for a more detailed study of animal P450 enzymes' metabolism, reducing the number of animals sacrificed in preclinical tests. PMID:26092534
NASA Technical Reports Server (NTRS)
Miller, Franklin; Bagdanove, Paul; Blake, Peter; Canavan, Ed; Cofie, Emmanuel; Crane, J. Allen; Dominquez, Kareny; Hagopian, John; Johnston, John; Madison, Tim; Miller, Dave; Oaks, Darrell; Williams, Pat; Young, Dan; Zukowski, Barbara; Zukowski, Tim
2007-01-01
The James Webb Space Telescope Integrated Science Instrument Module (ISIM) is being designed and developed at the Goddard Space Flight Center. The ISIM Thermal Distortion Testing (ITDT) program was started with the primary objective of validating the ISIM mechanical design process. The ITDT effort seeks to establish confidence and demonstrate the ability to predict thermal distortion in composite structures at cryogenic temperatures using solid element models. This program's goal is to better ensure that ISIM meets all the mechanical and structural requirements by using test results to verify or improve structural modeling techniques. The first step to accomplish the ITDT objectives was to design, and then construct, solid element models of a series of 2-D test assemblies that represent critical building blocks of the ISIM structure. Second, the actual test assemblies, consisting of composite tubes and Invar end fittings, were fabricated and tested for thermal distortion. This paper presents the development of the GSFC Cryo Distortion Measurement Facility (CDMF) to meet the requirements of the ISIM 2-D test assemblies and other future ISIM testing needs. The CDMF provides efficient cooling with both a single- and a two-stage cryo-cooler. Temperature uniformity of the test assemblies during thermal transients and at steady state is accomplished by using sapphire windows for all of the optical ports on the radiation shields and by using thermal straps to cool the test assemblies. Numerical thermal models of the test assemblies were used to predict the temperature uniformity of the parts during cooldown and at steady state. Results of these models are compared to actual temperature data from the tests. Temperature sensors with a 0.25 K precision were used to ensure that test assembly gradients did not exceed 2 K laterally and 4 K axially. The thermal distortions of two assemblies were measured during six thermal cycles from 320 K to 35 K using laser interferometers. The standard
Demidenko, Eugene
2011-01-01
An analytic solution of the potential distribution on a 2D homogeneous disk for electrical impedance tomography under the complete electrode model is expressed via an infinite system of linear equations. For the shunt electrode model with two electrodes, our solution coincides with the previously derived solution expressed via an elliptic integral (Pidcock et al. 1995). The Dirichlet-to-Neumann map is derived for statistical estimation via nonlinear least squares. The solution is validated in phantom experiments and applied for breast contact impedance estimation in vivo. Statistical hypothesis testing is used to test whether the contact impedances are the same across electrodes or all equal zero. Our solution can be especially useful as a rapid real-time test for bad surface contact in a clinical setting. PMID:21799240
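The nonlinear least squares estimation step mentioned in the abstract can be sketched on synthetic data; the affine voltage model V = R·I + z and every numerical value below are invented for illustration and are not the paper's complete electrode model.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative least-squares estimation of a bulk resistance R and a
# contact impedance z from synthetic voltage data. The affine model
# V = R*I + z and all values are invented, not the complete electrode model.
rng = np.random.default_rng(0)
I = np.linspace(0.5, 2.0, 20)              # injected currents (assumed units)
R_true, z_true = 10.0, 3.0                 # ground truth for the synthetic data
V_obs = R_true * I + z_true + rng.normal(0.0, 0.05, I.size)

def residuals(params):
    R, z = params
    return R * I + z - V_obs               # model minus observation

fit = least_squares(residuals, x0=[1.0, 0.0])
R_hat, z_hat = fit.x                       # recovered estimates
```

The same residual-function pattern extends to the paper's actual forward model once the Dirichlet-to-Neumann map is available.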
Testing Times: Problems Arising from Misdiagnosis.
ERIC Educational Resources Information Center
Vialle, Wilma; Konza, Deslea
1997-01-01
Three case studies illustrate problems in the identification of gifted students when tests are not used appropriately. The paper concludes that testing must occur within the context of intensive observations of and discussions with the child and family. The importance of all teachers receiving training in gifted education is stressed. (DB)
Transport Test Problems for Hybrid Methods Development
Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.; McDonald, Benjamin S.
2011-12-28
This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.
Rogojerov, Marin; Keresztury, Gábor; Kamenova-Nacheva, Mariana; Sundius, Tom
2012-12-01
A new analytical approach for improving the precision in determination of vibrational transition moment directions of low symmetry molecules (lacking orthogonal axes) is discussed in this paper. The target molecules are partially uniaxially oriented in nematic liquid crystalline solvent and are studied by IR absorption spectroscopy using polarized light. The fundamental problem addressed is that IR linear dichroism measurements of low symmetry molecules alone cannot provide sufficient information on molecular orientation and transition moment directions. It is shown that computational prediction of these quantities can supply relevant complementary data, helping to reveal the hidden information content and achieve a more meaningful and more precise interpretation of the measured dichroic ratios. The combined experimental and theoretical/computational method proposed by us recently for determination of the average orientation of molecules with C(s) symmetry has now been replaced by a more precise analytical approach. The new method introduced and discussed in full detail here uses a mathematically evaluated angle between two vibrational transition moment vectors as a reference. The discussion also deals with error analysis and estimation of uncertainties of the orientational parameters. The proposed procedure has been tested in an analysis of the infrared linear dichroism (IR-LD) spectra of 1-D- and 2-D-naphthalene complemented with DFT calculations using the scaled quantum mechanical force field (SQM FF) method. PMID:22981590
Problem-Solving Test: Tryptophan Operon Mutants
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2010-01-01
This paper presents a problem-solving test that deals with the regulation of the "trp" operon of "Escherichia coli." Two mutants of this operon are described: in mutant A, the operator region of the operon carries a point mutation so that it is unable to carry out its function; mutant B expresses a "trp" repressor protein unable to bind…
Knowledge dimensions in hypothesis test problems
NASA Astrophysics Data System (ADS)
Krishnan, Saras; Idris, Noraini
2012-05-01
The reformation in statistics education over the past two decades has predominantly shifted the focus of statistical teaching and learning from procedural understanding to conceptual understanding. The emphasis of procedural understanding is on formulas and calculation procedures. Meanwhile, conceptual understanding emphasizes students knowing why they are using a particular formula or executing a specific procedure. In addition, the Revised Bloom's Taxonomy offers a two-dimensional framework to describe learning objectives, comprising the six revised cognition levels of the original Bloom's taxonomy and four knowledge dimensions. Depending on the level of complexity, the four knowledge dimensions essentially distinguish basic understanding from more connected understanding. This study identifies the factual, procedural, and conceptual knowledge dimensions in hypothesis test problems. Hypothesis testing, an important tool for making inferences about a population from sample information, is taught in many introductory statistics courses. However, researchers find that students in these courses still have difficulty understanding the underlying concepts of hypothesis testing. Past studies also show that even though students can perform the hypothesis testing procedure, they may not understand the rationale for executing these steps or know how to apply them in novel contexts. Besides knowing the procedural steps in conducting a hypothesis test, students must have fundamental statistical knowledge and a deep understanding of the underlying inferential concepts, such as the sampling distribution and the central limit theorem. By identifying the knowledge dimensions of hypothesis test problems in this study, suitable instructional and assessment strategies can be developed in future to enhance students' learning of hypothesis testing as a valuable inferential tool.
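The procedural steps the abstract distinguishes from conceptual understanding can be made concrete with a minimal one-sample t-test computed by hand; the sample values and null mean below are invented for illustration.

```python
import math

# Hand-computed one-sample t-test: the kind of procedural routine the
# abstract contrasts with conceptual understanding. Data are invented.
sample = [5.1, 4.9, 5.3, 5.2, 4.8, 5.0, 5.4, 5.1]
mu0 = 5.0                                   # hypothesized population mean
n = len(sample)
mean = sum(sample) / n                      # sample mean
var = sum((x - mean) ** 2 for x in sample) / (n - 1)   # sample variance
t_stat = (mean - mu0) / math.sqrt(var / n)  # t statistic with n-1 df
print(n - 1, round(t_stat, 3))              # → 7 1.414
```

Knowing these steps is the procedural dimension; explaining why the statistic follows a t distribution with n−1 degrees of freedom under the null is the conceptual one.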
Motor operated valves problems tests and simulations
Pinier, D.; Haas, J.L.
1996-12-01
An analysis of the two refusals of operation of the EAS recirculation shutoff valves enabled two distinct problems to be identified on the motorized valves. First, the calculation methods for the operating torques of valves in use in the power plants are not conservative enough, which results in the misadjustment of the torque limiters installed on their motorizations. The second problem concerns the pressure locking phenomenon: a number of valves may entrap a pressure exceeding the in-line pressure between the disks, which may cause jamming of the valve. To settle the first problem, EDF has determined the friction coefficients and the efficiency of the valve and its actuator through general and specific tests and models, and defined a new calculation method. To solve the second problem, EDF has identified the valves whose technology enables pressure to be entrapped (the tests and numerical simulations carried out in the Research and Development Division confirm the possibility of a "boiler" effect), determined the necessary modifications, and developed and tested anti-boiler-effect systems.
NASA Astrophysics Data System (ADS)
Biondi, Marco; Guarnieri, Daniela; Yu, Hui; Belli, Valentina; Netti, Paolo Antonio
2013-02-01
A big challenge in tumor targeting by nanoparticles (NPs), taking advantage of the enhanced permeability and retention effect, is the fabrication of small size devices for enhanced tumor penetration, which is considered fundamental to improve chemotherapy efficacy. The purposes of this study are (i) to engineer the formulation of doxorubicin-loaded poly(d,l-lactic-co-glycolic acid) (PLGA)-block-poly(ethylene glycol) (PEG) NPs to obtain <100 nm devices and (ii) to translate standard 2D cytotoxicity studies to 3D collagen systems in which an initial step gradient of the NPs is present. Doxorubicin release can be prolonged for days to weeks depending on the NP formulation and the pH of the release medium. Sub-100 nm NPs are effectively internalized by HeLa cells in 2D and are less cytotoxic than free doxorubicin. In 3D, <100 nm NPs are significantly more toxic than larger ones towards HeLa cells, and the cell death rate is affected by the contributions of drug release and device transport through collagen. Thus, the reduction of NP size is a fundamental feature from both a technological and a biological point of view and must be properly engineered to optimize the tumor response to the NPs.
Farmer, M. T.; Lomperski, S.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, The Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of two long-term 2-D Core-Concrete Interaction (CCI) experiments designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. This data report provides thermal hydraulic test results from the CCI-1 experiment, which was conducted on December 19, 2003. Test specifications for CCI-1 are provided in Table 1-1. This experiment investigated the interaction of a fully oxidized 400 kg
Farmer, M. T.; Lomperski, S.; Kilsdonk, D. J.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, The Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of a third long-term 2-D Core-Concrete Interaction (CCI) experiment designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. This data report provides thermal hydraulic test results from the CCI-3 experiment, which was conducted on September 22, 2005. Test specifications for CCI-3 are provided in Table 1-1. This experiment investigated the interaction of a fully oxidized 375
Farmer, M. T.; Lomperski, S.; Kilsdonk, D. J.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, The Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of two long-term 2-D Core-Concrete Interaction (CCI) experiments designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. This data report provides thermal hydraulic test results from the CCI-2 experiment, which was conducted on August 24, 2004. Test specifications for CCI-2 are provided in Table 1-1. This experiment investigated the interaction of a fully oxidized 400 kg
Llop, Jordi; Gil, Emilio; Llorens, Jordi; Miranda-Fuentes, Antonio; Gallart, Montserrat
2016-01-01
Canopy characterization is essential for pesticide dosage adjustment according to vegetation volume and density. It is especially important for fresh exportable vegetables like greenhouse tomatoes. These plants are thin and tall and are planted in pairs, which makes their characterization with electronic methods difficult. Therefore, the accuracy of a terrestrial 2D LiDAR sensor was evaluated for determining canopy parameters related to volume and density, and useful correlations between manual and electronic parameters were established for leaf area estimation. Experiments were performed in three commercial tomato greenhouses with a paired plantation system. In the electronic characterization, a LiDAR sensor scanned the plant pairs from both sides. The canopy height, canopy width, canopy volume, and leaf area were obtained. From these, other important parameters were calculated, such as the tree row volume, leaf wall area, leaf area index, and leaf area density. Manual measurements were found to overestimate the parameters compared with the LiDAR sensor. The canopy volume estimated with the scanner was found to be reliable for estimating the canopy height, volume, and density. Moreover, the LiDAR scanner could assess the high variability in canopy density along rows and hence is an important tool for generating canopy maps. PMID:27608025
The linear separability problem: some testing methods.
Elizondo, D
2006-03-01
The notion of linear separability is used widely in machine learning research. Learning algorithms that use this concept include neural networks (the single layer perceptron and the recursive deterministic perceptron) and kernel machines (support vector machines). This paper presents an overview of several methods for testing linear separability between two classes. The methods are divided into four groups: those based on linear programming, those based on computational geometry, one based on neural networks, and one based on quadratic programming. The Fisher linear discriminant method is also presented. A section on the quantification of the complexity of classification problems is included. PMID:16566462
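The first group of methods (linear programming) can be sketched as a feasibility problem: two classes are linearly separable iff weights (w, b) satisfying y_i(w·x_i + b) ≥ 1 exist. The toy data and the use of SciPy's linprog below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Linear-programming separability test (a sketch, not the paper's code):
# feasibility of y_i * (w . x_i + b) >= 1 for unknowns (w1, w2, b).
X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 3.0], [4.0, 2.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])        # invented class labels

# Stack constraints as A_ub @ (w1, w2, b) <= b_ub.
A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
b_ub = -np.ones(len(X))
res = linprog(c=[0.0, 0.0, 0.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3)    # zero objective: pure feasibility
print(res.success)                          # True -> separating hyperplane exists
```

If the LP is infeasible, no separating hyperplane exists; the margin constant 1 is arbitrary, since any feasible (w, b) can be rescaled.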
Word Problems: Where Test Bias Creeps In.
ERIC Educational Resources Information Center
Chipman, Susan F.
The problem of sex bias in mathematics word problems is discussed, with references to the appropriate literature. Word problems are assessed via cognitive science analysis of word problem solving. It has been suggested that five basic semantic relations are adequate to classify nearly all story problems, namely, change, combine, compare, vary, and…
Radix, P.; Leonard, M.; Papantoniou, C.; Roman, G.; Saouter, E.; Gallotti-Schmitt, S.; Thiebaud, H.; Vasseur, P.
1999-10-01
The Daphnia magna 21-d test may be required by European authorities as a criterion for the assessment of aquatic chronic toxicity for the notification of new substances. However, this test has several drawbacks: it is labor-intensive, relatively expensive, and requires the breeding of test organisms. The Brachionus calyciflorus 2-d test and the Microtox chronic 22-h test do not suffer from these disadvantages and could be used as substitutes for the Daphnia 21-d test for screening assays. During this study, the toxicity of 25 chemicals was measured using both the Microtox chronic toxicity and B. calyciflorus 2-d tests, and the no-observed-effect concentrations (NOECs) were compared to those of the D. magna 21-d test. The Brachionus test was slightly less sensitive than the Daphnia test, but the correlation between the two tests was relatively good (r² = 0.54). The B. calyciflorus 2-d test, and to a lesser extent the Microtox chronic 22-h test, were able to predict the chronic toxicity values of the Daphnia 21-d test. They constitute promising cost-effective tools for chronic toxicity screening.
Li, Yan; Zhu, Zhuo R; Ou, Bao C; Wang, Ya Q; Tan, Zhou B; Deng, Chang M; Gao, Yi Y; Tang, Ming; So, Ji H; Mu, Yang L; Zhang, Lan Q
2015-02-15
Major depressive disorder is one of the most prevalent and life-threatening forms of mental illness. Traditional antidepressants often take several weeks, even months, to produce clinical effects. However, recent clinical studies have shown that ketamine, an N-methyl-D-aspartate (NMDA) receptor antagonist, exerts rapid antidepressant effects within 2 h that are long-lasting. The aim of the present study was to investigate whether the dopaminergic system is involved in the rapid antidepressant effects of ketamine. The acute administration of ketamine (20 mg/kg) significantly reduced the immobility time in the forced swim test. MK-801 (0.1 mg/kg), the more selective NMDA antagonist, also exerted rapid antidepressant-like effects. In contrast, fluoxetine (10 mg/kg) did not significantly reduce the immobility time in the forced swim test 30 min after administration. Notably, pretreatment with haloperidol (0.15 mg/kg, a nonselective dopamine D2/D3 antagonist), but not SCH23390 (0.04 and 0.1 mg/kg, a selective dopamine D1 receptor antagonist), significantly prevented the effects of ketamine or MK-801. Moreover, the administration of a sub-effective dose of ketamine (10 mg/kg) in combination with pramipexole (0.3 mg/kg, a dopamine D2/D3 receptor agonist) exerted antidepressant-like effects compared with each drug alone. In conclusion, our results indicate that dopamine D2/D3 receptors, but not D1 receptors, are involved in the rapid antidepressant-like effects of ketamine. PMID:25449845
Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.
2013-01-01
Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, shot-receiver geometry, frequency content, and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.
A class of ejecta transport test problems
Hammerberg, James E; Buttler, William T; Oro, David M; Rousculp, Christopher L; Morris, Christopher; Mariam, Fesseha G
2011-01-31
Hydro code implementations of ejecta dynamics at shocked interfaces presume a source distribution function of particulate masses and velocities, f_0(m, v; t). Some of the properties of this source distribution function have been determined from extensive Taylor and supported-wave experiments on shock-loaded Sn interfaces of varying surface and subsurface morphology. Such experiments measure the mass moment of f_0 under vacuum conditions assuming weak particle-particle interaction and, usually, fully inelastic capture by piezo-electric diagnostic probes. Recently, planar Sn experiments in He, Ar, and Kr gas atmospheres have been carried out to provide transport data both for machined surfaces and for coated surfaces. A hydro code model of ejecta transport usually specifies a criterion for the instantaneous temporal appearance of ejecta with source distribution f_0(m, v; t_0). Under the further assumption of separability, f_0(m, v; t_0) = f_1(m) f_2(v), the motion of particles under the influence of gas dynamic forces is calculated. For the situation of non-interacting particulates interacting with a gas via drag forces, with the assumption of separability and simplified approximations to the Reynolds number dependence of the drag coefficient, the dynamical equation for the time evolution of the distribution function, f(r, v, m; t), can be resolved as a one-dimensional integral which can be compared to a direct hydro simulation as a test problem. Such solutions can also be used for preliminary analysis of experimental data. We report solutions for several shape-dependent drag coefficients and analyze the results of recent planar dsh experiments in Ar and Xe.
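A toy counterpart to the drag-transport solutions described above: a single non-interacting particle decelerating in gas under a simplified linear drag law, integrated with explicit Euler and checked against the exact exponential relaxation. All numerical values are invented, and the Reynolds-number-dependent drag coefficient of the actual test problems is deliberately simplified to a constant relaxation time.

```python
import math

# Toy single-particle drag relaxation, dv/dt = -(v - v_gas) / tau,
# a deliberately simplified stand-in for the Reynolds-number-dependent
# drag in the ejecta transport test problems. All values are invented.
v0, v_gas, tau = 2000.0, 0.0, 1e-4   # initial speed (m/s), gas speed, relax time (s)
dt, t_end = 1e-6, 5e-4               # Euler step and end time (s)

v, t = v0, 0.0
while t < t_end:
    v += -(v - v_gas) / tau * dt     # explicit Euler step
    t += dt

# Exact solution of the linear drag law for comparison.
v_exact = v_gas + (v0 - v_gas) * math.exp(-t_end / tau)
print(abs(v - v_exact) / v_exact < 0.05)    # Euler stays within 5% here
```

The analytic reference solution plays the same role here as the one-dimensional integral does for the full test problems: an independent check on the direct numerical simulation.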
Testing Developmental Pathways to Antisocial Personality Problems
ERIC Educational Resources Information Center
Diamantopoulou, Sofia; Verhulst, Frank C.; van der Ende, Jan
2010-01-01
This study examined the development of antisocial personality problems (APP) in young adulthood from disruptive behaviors and internalizing problems in childhood and adolescence. Parent ratings of 507 children's (aged 6-8 years) symptoms of attention deficit hyperactivity disorder, oppositional defiant disorder, and anxiety, were linked to…
Opdam, F L; Modak, A S; Gelderblom, H; Guchelaar, H J
2015-06-01
In a previous study, we found that the CYP2D6 phenotype determined by (13)C-dextromethorphan breath test (DM-BT) might be used to predict tamoxifen treatment outcome in breast cancer patients in the adjuvant setting. However, large variation in the delta-over-baseline (DOB) values was observed in the extensive metabolizer predicted phenotype group based on single point measures. In the present work we aimed to analyze the variability of phenotype results and determine reproducibility to further characterize the clinical utility of DM-BT by introducing multiple breath sampling instead of single breath sampling and by administration of a fixed dose of (13)C-DM. PMID:25891764
Hoffman, E.L.; Ammerman, D.J.
1993-08-01
A series of tests investigating dynamic pulse buckling of a cylindrical shell under axial impact is compared to several finite element simulations of the event. The purpose of the study is to compare the performance of the various analysis codes and element types with respect to a problem which is applicable to radioactive material transport packages, and ultimately to develop a benchmark problem to qualify finite element analysis codes for the transport package design industry.
Problems and Issues in Translating International Educational Achievement Tests
ERIC Educational Resources Information Center
Arffman, Inga
2013-01-01
The article reviews research and findings on problems and issues faced when translating international academic achievement tests. The purpose is to draw attention to the problems, to help to develop the procedures followed when translating the tests, and to provide suggestions for further research. The problems concentrate on the following: the…
ERIC Educational Resources Information Center
Hill, Kennedy T.; Horton, Margaret W.
Educational solutions to the problem of test anxiety were explored. Test anxiety has a debilitating effect on performance which increases over the school years. The solution is, first, to measure test anxiety so that the extent of it, as well as the effectiveness of programs designed to alleviate it, can be measured. The seven-item Comfort Index,…
NASA Astrophysics Data System (ADS)
Gnanvo, Kondo; Bai, Xinzhan; Gu, Chao; Liyanage, Nilanga; Nelyubin, Vladimir; Zhao, Yuxiang
2016-02-01
A large-area and light-weight gas electron multiplier (GEM) detector was built at the University of Virginia as a prototype for the detector R&D program of the future Electron Ion Collider. The prototype has a trapezoidal geometry designed as a generic sector module in a disk layer configuration of a forward tracker in collider detectors. It is based on light-weight material and narrow support frames in order to minimize multiple scattering and dead-to-sensitive area ratio. The chamber has a novel type of two dimensional (2D) stereo-angle readout board with U-V strips that provides (r,φ) position information in the cylindrical coordinate system of a collider environment. The prototype was tested at the Fermilab Test Beam Facility in October 2013 and the analysis of the test beam data demonstrates an excellent response uniformity of the large area chamber with an efficiency higher than 95%. An angular resolution of 60 μrad in the azimuthal direction and a position resolution better than 550 μm in the radial direction were achieved with the U-V strip readout board. The results are discussed in this paper.
Errors in Standardized Tests: A Systemic Problem.
ERIC Educational Resources Information Center
Rhoades, Kathleen; Madaus, George
The nature and extent of human error in educational testing over the past 25 years were studied. In contrast to the random measurement error expected in all tests, the presence of human error is unexpected and brings unknown, often harmful, consequences for students and their schools. Using data from a variety of sources, researchers found 103…
Test problem construction for single-objective bilevel optimization.
Sinha, Ankur; Malo, Pekka; Deb, Kalyanmoy
2014-01-01
In this paper, we propose a procedure for designing controlled test problems for single-objective bilevel optimization. The construction procedure is flexible and allows its user to control the different complexities that are to be included in the test problems independently of each other. In addition to properties that control the difficulty in convergence, the procedure also allows the user to introduce difficulties caused by interaction of the two levels. As a companion to the test problem construction framework, the paper presents a standard test suite of 12 problems, which includes eight unconstrained and four constrained problems. Most of the problems are scalable in terms of variables and constraints. To provide baseline results, we have solved the proposed test problems using a nested bilevel evolutionary algorithm. The results can be used for comparison, while evaluating the performance of any other bilevel optimization algorithm. The code related to the paper may be accessed from the website http://bilevel.org . PMID:24364674
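As a toy illustration of the nested solution strategy the paper uses for its baseline results (not one of its 12 test problems), the sketch below solves a hypothetical bilevel problem by brute-force grid search at both levels; the problem, grids, and function names are our own assumptions.

```python
import numpy as np

# Hypothetical toy bilevel problem (not from the paper's test suite):
#   upper level: minimize F(x) = (x - 1)^2 + y*(x)^2
#   lower level: y*(x) = argmin_y (y - x)^2   (analytically, y*(x) = x)
YS = np.linspace(-2.0, 2.0, 401)  # follower's decision grid, step 0.01

def lower_level(x: float) -> float:
    # Nested inner solve: grid search for the follower's best response.
    return float(YS[np.argmin((YS - x) ** 2)])

def solve_bilevel() -> float:
    # Outer solve: evaluate the leader's objective at the follower's optimum.
    xs = np.linspace(-2.0, 2.0, 401)
    F = [(x - 1.0) ** 2 + lower_level(x) ** 2 for x in xs]
    return float(xs[int(np.argmin(np.array(F)))])
```

Substituting y*(x) = x gives F(x) = (x - 1)^2 + x^2, minimized at x = 0.5, which the nested search recovers; the nested structure is also why bilevel solvers are expensive, since every upper-level evaluation triggers a full lower-level optimization.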
Problem-solving test: Tryptophan operon mutants.
Szeberényi, József
2010-09-01
Terms to be familiar with before you start to solve the test: tryptophan, operon, operator, repressor, inducer, corepressor, promoter, RNA polymerase, chromosome-polysome complex, regulatory gene, cis-acting element, trans-acting element, plasmid, transformation. PMID:21567855
ESTL: Innovative Solutions to Tribology Test Problems
NASA Astrophysics Data System (ADS)
Roberts, E.; Eiden, M.
2004-08-01
For over 30 years, ESTL, through the financial and technical support of ESA and ESTEC, has provided a unique service to the European space industry by ensuring the reliability of the moving parts of spacecraft mechanisms through the application of sound tribology. ESTL's activities range from fundamental measurements of adhesion, friction and wear of material couples to the full qualification and life-testing of primary spacecraft mechanisms. In all cases, test work is carried out under conditions that simulate the thermal and vacuum conditions that prevail in the space environment. Occasionally there have arisen specific measurement requirements which have proved challenging under the constraints imposed by thermal- vacuum conditions. How ESTL has met some of its more demanding test requirements, and the significance of the test results obtained, is the subject of this paper.
NASA Astrophysics Data System (ADS)
Ekberg, Peter; Stiblert, Lars; Mattsson, Lars
2014-05-01
The manufacturing of flat panel displays requires a number of photomasks for the placement of pixel patterns and supporting transistor arrays. For large area photomasks, dedicated ultra-precision writers have been developed for the production of these chromium patterns on glass or quartz plates. The dimensional tolerances in X and Y for absolute pattern placement on these plates, with areas measured in square meters, are in the range of 200-300 nm (3σ). To verify these photomasks, 2D ultra-precision coordinate measurement machines are used having even tighter tolerance requirements. This paper will present how the world standard metrology tool used for verifying large masks, the Micronic Mydata MMS15000, is calibrated without any other references than the wavelength of the interferometers in an extremely well-controlled temperature environment. This process is called self-calibration and is the only way to calibrate the metrology tool, as no square-meter-sized large area 2D traceable artifact is available. The only parameter that cannot be found using self-calibration is the absolute length scale. To make the MMS15000 traceable, a 1D reference rod, calibrated at a national metrology lab, is used. The reference plates used in the calibration of the MMS15000 may have sizes up to 1 m2 and a weight of 50 kg. Therefore, standard methods for self-calibration on a small scale with exact placements cannot be used in the large area case. A new, more general method had to be developed for the purpose of calibrating the MMS15000. Using this method, it is possible to calibrate the measurement tool down to an uncertainty level of <90 nm (3σ) over an area of (0.8 × 0.8) m2. The method used, which is based on the concept of iteration, does not introduce any more noise than the random noise introduced by the measurements, resulting in the lowest possible noise level that can be achieved by any self-calibration method.
ERIC Educational Resources Information Center
Veldkamp, Bernard P.; Verschoor, Angela J.; Eggen, Theo J. H. M.
2010-01-01
Overexposure and underexposure of items in the bank are serious problems in operational computerized adaptive testing (CAT) systems. These exposure problems might result in item compromise, or point at a waste of investments. The exposure control problem can be viewed as a test assembly problem with multiple objectives. Information in the test has…
NASA Astrophysics Data System (ADS)
Morgan, J. P.; de Monserrat, A.; Hall, R.; Taramon, J. M.; Perez-Gussinye, M.
2015-12-01
This work focuses on improving current 2D numerical approaches to modeling the boundary conditions associated with computing accurate deformation and melting associated with continental rifting. Recent models primarily use far-field boundary conditions that have been used for decades with little assessment of their effects on asthenospheric flow beneath the rifting region. All are clearly extremely oversimplified — Huismans and Buiter assume there is no vertical flow into the rifting region, with the asthenosphere flowing uniformly into the rifting region from the sides beneath lithosphere moving in the opposing direction, Armitage et al. and van Wijk use divergent velocities on the upper boundary to impose break-up within a Cartesian box, while other studies generally assume there is uniform horizontal flow away from the center of rifting, with uniform vertical flow replenishing the material pulled out of the sides of the computational region. All are likely to significantly shape the pattern of asthenospheric flow beneath the stretching lithosphere that is associated with pressure-release melting and rift volcanism. Thus while ALL may lead to similar predictions of the effects of crustal stretching and thinning, NONE may lead to accurate determination of the asthenospheric flow and melting associated with lithospheric stretching and breakup. Here we discuss a suite of numerical experiments that compare these choices to likely more realistic boundary condition choices like the analytical solution for flow associated with two diverging plates stretching over a finite-width region, and a high-resolution 2-D region embedded within a cylindrical annulus 'whole mantle cross-section' at 5% extra numerical problem size. Our initial results imply that the choice of far-field boundary conditions does indeed significantly influence predicted melting distributions and melt volumes associated with continental breakup. For calculations including asthenospheric melting
Testing general relativity: Progress, problems, and prospects
NASA Technical Reports Server (NTRS)
Shapiro, I. I.
1971-01-01
The results from ground-based experimental testing are presented. Prospects for improving these experiments are discussed. Radar echo time delays, perihelion advance and solar oblateness, time variation of the gravitational constant, and radio wave deflection are considered. Ground-based and spacecraft techniques are compared on an accuracy vs. cost basis.
Problem-Solving Test: Southwestern Blotting
ERIC Educational Resources Information Center
Szeberényi, József
2014-01-01
Terms to be familiar with before you start to solve the test: Southern blotting, Western blotting, restriction endonucleases, agarose gel electrophoresis, nitrocellulose filter, molecular hybridization, polyacrylamide gel electrophoresis, proto-oncogene, c-abl, Src-homology domains, tyrosine protein kinase, nuclear localization signal, cDNA,…
Problem-Solving Test: Restriction Endonuclease Mapping
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2011-01-01
The term "restriction endonuclease mapping" covers a number of related techniques used to identify specific restriction enzyme recognition sites on small DNA molecules. A method for restriction endonuclease mapping of a 1,000-basepair (bp)-long DNA molecule is described in the fictitious experiment of this test. The most important fact needed to…
American History's Problem with Standardized Testing
ERIC Educational Resources Information Center
McCoog, Ian J.
2005-01-01
This article looks at current research concerning how students best learn the discipline of history, commentaries both in favor of and against standardized testing, and basic philosophical beliefs about the discipline. It explains methods of how to incorporate differentiated lessons and performance based assessments to NCLB standards and…
Crash test for the Copenhagen problem.
Nagler, Jan
2004-06-01
The Copenhagen problem is a simple model in celestial mechanics. It serves to investigate the behavior of a small body under the gravitational influence of two equally heavy primary bodies. We present a partition of orbits into classes of various kinds of regular motion, chaotic motion, escape and crash. Collisions of the small body onto one of the primaries turn out to be unexpectedly frequent, and their probability displays a scale-free dependence on the size of the primaries. The analysis reveals a high degree of complexity so that long term prediction may become a formidable task. Moreover, we link the results to chaotic scattering theory and the theory of leaking Hamiltonian systems. PMID:15244719
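A minimal sketch of the model behind this study, under our own simplifying choices (rotating-frame units, a fixed-step RK4 integrator, and an assumed finite primary radius): the Copenhagen problem is the planar circular restricted three-body problem with equal primary masses (mu = 1/2), and a "crash" is registered when the small body enters a primary's radius.

```python
import numpy as np

MU = 0.5  # equal primaries sit at (-MU, 0) and (1 - MU, 0) in the rotating frame

def acceleration(state):
    # Rotating-frame equations of motion of the circular restricted three-body
    # problem: centrifugal + Coriolis terms plus gravity from both primaries.
    x, y, vx, vy = state
    r1 = np.hypot(x + MU, y)
    r2 = np.hypot(x - (1.0 - MU), y)
    ax = x + 2.0 * vy - (1.0 - MU) * (x + MU) / r1**3 - MU * (x - (1.0 - MU)) / r2**3
    ay = y - 2.0 * vx - (1.0 - MU) * y / r1**3 - MU * y / r2**3
    return np.array([vx, vy, ax, ay])

def crashes(state0, radius=0.05, dt=1e-3, t_max=20.0):
    # Classify an orbit as a crash if it enters a primary's (assumed) radius.
    s = np.asarray(state0, dtype=float)
    for _ in range(int(t_max / dt)):
        k1 = acceleration(s)
        k2 = acceleration(s + 0.5 * dt * k1)
        k3 = acceleration(s + 0.5 * dt * k2)
        k4 = acceleration(s + dt * k3)
        s = s + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        if min(np.hypot(s[0] + MU, s[1]), np.hypot(s[0] - (1.0 - MU), s[1])) < radius:
            return True
    return False
```

A body released at rest just outside a primary falls straight in, while the origin, midway between the equal primaries, is an equilibrium of the rotating-frame field; sweeping `crashes` over a grid of initial conditions is one way to build the kind of orbit-class partition the abstract describes.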
Group Testing: Four Student Solutions to a Classic Optimization Problem
ERIC Educational Resources Information Center
Teague, Daniel
2006-01-01
This article describes several creative solutions developed by calculus and modeling students to the classic optimization problem of testing in groups to find a small number of individuals who test positive in a large population.
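The classic optimization problem the students tackled can be sketched in a few lines (Dorfman pooling; the function names here are ours): with prevalence p, pooling k samples costs one test for the pool plus k retests whenever the pool is positive, so the expected number of tests per person is E(k) = 1/k + 1 - (1 - p)^k, to be minimized over k.

```python
# Minimal sketch of Dorfman group testing (function names are illustrative).
def expected_tests_per_person(k: int, p: float) -> float:
    # 1/k for the pooled test, plus k individual retests with
    # probability 1 - (1 - p)^k that the pool contains a positive.
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def best_group_size(p: float, k_max: int = 100) -> int:
    # Brute-force search over candidate pool sizes.
    return min(range(2, k_max + 1), key=lambda k: expected_tests_per_person(k, p))
```

For a 1% prevalence the optimal pool size works out to 11, cutting the expected load to roughly a fifth of a test per person.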
Energy Science and Technology Software Center (ESTSC)
2005-07-01
Aniso2d is a two-dimensional seismic forward modeling code. The earth is parameterized by an x-z plane in which the seismic properties can have monoclinic symmetry with the x-z plane as the symmetry plane. The program uses a user-defined time-domain wavelet to produce synthetic seismograms anywhere within the two-dimensional medium.
Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems
NASA Technical Reports Server (NTRS)
Balling, R. J.; Wilkinson, C. A.
1997-01-01
A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
Greg Flach, Frank Smith
2011-12-31
Mesh2d is a Fortran90 program designed to generate two-dimensional structured grids of the form [x(i),y(i,j)] where [x,y] are grid coordinates identified by indices (i,j). The x(i) coordinates alone can be used to specify a one-dimensional grid. Because the x-coordinates vary only with the i index, a two-dimensional grid is composed in part of straight vertical lines. However, the nominally horizontal y(i,j0) coordinates along index i are permitted to undulate or otherwise vary. Mesh2d also assigns an integer material type to each grid cell, mtyp(i,j), in a user-specified manner. The complete grid is specified through three separate input files defining the x(i), y(i,j), and mtyp(i,j) variations.
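The grid structure described above can be sketched in a few lines (a hypothetical NumPy analogue, not the Fortran90 program itself): x depends only on the index i, so vertical grid lines stay straight, while the nominally horizontal lines y(i, j) are free to undulate with i, and each cell carries an integer material type.

```python
import numpy as np

# Illustrative Mesh2d-style structured grid [x(i), y(i,j)] with cell types.
ni, nj = 5, 4
x = np.linspace(0.0, 4.0, ni)          # x(i): straight vertical grid lines
y = np.empty((ni, nj))
for i in range(ni):
    # y(i, j): nominally horizontal lines allowed to vary along i
    y[i, :] = np.linspace(0.0, 3.0, nj) + 0.2 * np.sin(x[i])
# mtyp(i, j): user-assigned integer material type, one per grid cell
mtyp = np.zeros((ni - 1, nj - 1), dtype=int)
mtyp[:, (nj - 1) // 2:] = 1            # e.g. upper cells are a second material
```

Because x carries no j dependence, the x array alone already specifies a valid one-dimensional grid, matching the behavior the abstract describes.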
NASA Astrophysics Data System (ADS)
Lotsch, Bettina V.
2015-07-01
Graphene's legacy has become an integral part of today's condensed matter science and has equipped a whole generation of scientists with an armory of concepts and techniques that open up new perspectives for the postgraphene area. In particular, the judicious combination of 2D building blocks into vertical heterostructures has recently been identified as a promising route to rationally engineer complex multilayer systems and artificial solids with intriguing properties. The present review highlights recent developments in the rapidly emerging field of 2D nanoarchitectonics from a materials chemistry perspective, with a focus on the types of heterostructures available, their assembly strategies, and their emerging properties. This overview is intended to bridge the gap between two major—yet largely disjunct—developments in 2D heterostructures, which are firmly rooted in solid-state chemistry or physics. Although the underlying types of heterostructures differ with respect to their dimensions, layer alignment, and interfacial quality, there is common ground, and future synergies between the various assembly strategies are to be expected.
Group Work Tests for Context-Rich Problems
ERIC Educational Resources Information Center
Meyer, Chris
2016-01-01
The group work test is an assessment strategy that promotes higher-order thinking skills for solving context-rich problems. With this format, teachers are able to pose challenging, nuanced questions on a test, while providing the support weaker students need to get started and show their understanding. The test begins with a group discussion…
Development of a Test of Experimental Problem-Solving Skills.
ERIC Educational Resources Information Center
Ross, John A.; Maynes, Florence J.
1983-01-01
Multiple-choice tests were constructed for seven problem-solving skills using learning hierarchies based on expert-novice differences and refined in three phases of field testing. Includes test reliabilities (sufficient for making judgments of group performance but insufficient in single-administration for individual assessment), validity, and…
Problems and Alternatives in Testing Mexican American Students.
ERIC Educational Resources Information Center
Cervantes, Robert A.
The problems of standardized tests with regard to Mexican American students, particularly "ethnic validity", are reviewed. Inadequate norm group representation, cultural bias, and language bias are purported by the author to be the most common faults of standardized tests. Suggested is the elimination of standardized testing as a principal means…
Problems in Testing the Intonation of Advanced Foreign Learners.
ERIC Educational Resources Information Center
Mendelsohn, David
1978-01-01
It is argued that knowledge about the testing of intonation in English as a foreign language is inadequate; the major problems are outlined and tentative suggestions are given. The basic problem is that the traditional foreign language teacher's conception of intonation is limited. A three-part definition of intonation is favored, with suggestions…
Some Current Problems in Simulator Design, Testing and Use.
ERIC Educational Resources Information Center
Caro, Paul W.
Concerned with the general problem of the effectiveness of simulator training, this report reflects information developed during the conduct of aircraft simulator training research projects sponsored by the Air Force, Army, Navy, and Coast Guard. Problems are identified related to simulator design, testing, and use, all of which impact upon…
Invitational Conference on Testing Problems (New York, October 29, 1966).
ERIC Educational Resources Information Center
Educational Testing Service, Princeton, NJ.
The 1966 Invitational Conference on Testing Problems dealt with the innovations of the new age of flexibility and the problems of evaluating and preparing for them. Papers presented in Session I, Innovation and Evaluation, were: (1) "Innovation and Evaluation: In Whose Hands?" by Nils Y. Wessell; (2) "The Discovery and Development of Educational…
A description of the test problems in the TRAC-P standard test matrix
Steinke, R.G.
1996-03-01
This report describes 15 different test problems in the TRAC-P (Transient Reactor Analysis Code) standard test matrix of 42 test-problem calculations. Their TRACIN input-data files are listed in Appendix A. The description of each test problem includes the nature of what the test problem models and evaluates, the principal models of TRAC-P that the test problem serves to verify or validate, and the TRAC-P features and options that are being involved in its calculation. The test-problem calculations will determine the effect that changes made to a TRAC-P version have on the results. This will help the developers assess the acceptance of those changes to TRAC-P.
Brittle damage models in DYNA2D
Faux, D.R.
1997-09-01
DYNA2D is an explicit Lagrangian finite element code used to model dynamic events where stress wave interactions influence the overall response of the system. DYNA2D is often used to model penetration problems involving ductile-to-ductile impacts; however, with the advent of the use of ceramics in the armor-anti-armor community and the need to model damage to laser optics components, good brittle damage models are now needed in DYNA2D. This report will detail the implementation of four brittle damage models in DYNA2D, three scalar damage models and one tensor damage model. These new brittle damage models are then used to predict experimental results from three distinctly different glass damage problems.
Cattaneo, Cristina; Cantatore, Angela; Ciaffi, Romina; Gibelli, Daniele; Cigada, Alfredo; De Angelis, Danilo; Sala, Remo
2012-01-01
Identification from video surveillance systems is frequently requested in forensic practice. The "3D-2D" comparison has proven to be reliable in assessing identification but still requires standardization; this study concerns the validation of the 3D-2D profile comparison. The 3D models of the faces of five individuals were compared with photographs from the same subjects as well as from another 45 individuals. The difference in area and distance between maxima (glabella, tip of nose, fore point of upper and lower lips, pogonion) and minima points (selion, subnasale, stomion, suprapogonion) were measured. The highest difference in area between the 3D model and the 2D image was between 43 and 133 mm² in the five matches, always greater than 157 mm² in mismatches; the mean distance between the points was greater than 1.96 mm in mismatches, <1.9 mm in five matches (p < 0.05). These results indicate that this difference in areas may point toward a manner of distinguishing "correct" from "incorrect" matches. PMID:22074112
ERIC Educational Resources Information Center
Charalambous, Charalambos; Kyriakides, Leonidas; Philippou, George
2003-01-01
The study reported in this paper is an attempt to develop a comprehensive model of measuring problem solving and posing (PSP) skills based on Marshall's schema theory (ST). A battery of tests on PSP skills was administered to 5th and 6th grade Cypriot students (n=2519). The Rasch model was used and a scale was created for the battery of tests and…
Flat flame olympics: test problem a. Final report
Coffee, T.P.
1982-10-01
This report discusses a test problem for a computer program for numerically solving the equations governing a laminar, premixed, one-dimensional flame. The problem was proposed by GAMM (Committee for Numerical Methods in Fluid Mechanics), and has been solved for presentation at a workshop at the Technical University, Aachen, Germany, 12-14 Oct. 1981. The test problem is an unsteady propagating flame with one-step chemistry and Lewis number different from unity. A code developed for steady state problems with elementary chemistry was modified to use the simplified transport and chemistry of the test problem and to follow the details of the transient solution. The problem is solved for six cases. The cases differ in the Lewis number chosen and the activation energy of the single reaction. The initial conditions used are the steady state solutions predicted by the simplified analytic method of asymptotic analysis. In most cases, the numerical solutions rapidly converge, and the steady state solutions are similar to the asymptotic solutions. However, in one case, with activation energy and Lewis number equal to two, the solution does not converge. Instead, large oscillations in the flame speed and the profiles occur.
Group Work Tests for Context-Rich Problems
NASA Astrophysics Data System (ADS)
Meyer, Chris
2016-05-01
The group work test is an assessment strategy that promotes higher-order thinking skills for solving context-rich problems. With this format, teachers are able to pose challenging, nuanced questions on a test, while providing the support weaker students need to get started and show their understanding. The test begins with a group discussion phase, when students are given a "number-free" version of the problem. This phase allows students to digest the story-like problem, explore solution ideas, and alleviate some test anxiety. After 10-15 minutes of discussion, students inform the instructor of their readiness for the individual part of the test. What follows next is a pedagogical phase change from lively group discussion to quiet individual work. The group work test is a natural continuation of the group work in our daily physics classes and helps reinforce the importance of collaboration. This method has met with success at York Mills Collegiate Institute, in Toronto, Ontario, where it has been used consistently for unit tests and the final exam of the grade 12 university preparation physics course.
A one-loop test for construction of 4D N = 4 SYM from 2D SYM via fuzzy-sphere geometry
NASA Astrophysics Data System (ADS)
Matsuura, So; Sugino, Fumihiko
2016-04-01
As a perturbative check of the construction of 4D N=4 supersymmetric Yang-Mills theory (SYM) from mass-deformed N=(8,8) SYM on the 2D lattice, the one-loop effective action for scalar kinetic terms is computed in N=4 U(k) SYM on R^2 × (fuzzy S^2), which is obtained by expanding 2D N=(8,8) U(N) SYM with mass deformation around its fuzzy-sphere classical solution. The radius of the fuzzy sphere is proportional to the inverse of the mass. We consider two successive limits: (1) decompactify the fuzzy sphere to a noncommutative (Moyal) plane and (2) turn off the noncommutativity of the Moyal plane. It is straightforward at the classical level to obtain the ordinary N=4 SYM on R^4 in the limits, while it is nontrivial at the quantum level. The one-loop effective action for the SU(k) sector of the gauge group U(k) coincides with that of the ordinary 4D N=4 SYM in the above limits. Although a "noncommutative anomaly" appears in the overall U(1) sector of the U(k) gauge group, this can be expected to be a gauge artifact not affecting gauge-invariant observables.
Nyström, Monica E; Terris, Darcey D; Sparring, Vibeke; Tolf, Sara; Brown, Claire R
2012-01-01
Our objective was to test whether the Structured Problem and Success Inventory (SPI) instrument could capture mental representations of organizational and work-related problems as described by individuals working in health care organizations and to test whether these representations varied according to organizational position. A convenience sample (n = 56) of middle managers (n = 20), lower-level managers (n = 20), and staff (n = 16) from health care organizations in Stockholm (Sweden) attending organizational development courses during 2003-2004 was recruited. Participants used the SPI to describe the 3 most pressing organizational and work-related problems. Data were systematically reviewed to identify problem categories and themes. One hundred sixty-four problems were described, clustered into 13 problem categories. Generally, middle managers focused on organizational factors and managerial responsibilities, whereas lower-level managers and staff focused on operational issues and what others did or ought to do. Furthermore, we observed similarities and variation in perceptions and their association with respondents' position within an organization. Our results support the need for further evaluation of the SPI as a promising tool for health care organizations. Collecting structured inventories of organizational and work-related problems from multiple perspectives may assist in the development of shared understandings of organizational challenges and lead to more effective and efficient processes of solution planning and implementation. PMID:22453820
The measurand problem in infrared breath alcohol testing
NASA Astrophysics Data System (ADS)
Vosk, Ted
2012-02-01
Measurements are made to determine the value of a quantity known as a measurand. The measurand is not always the quantity subject to measurement, however. Often, a distinct quantity will be measured and related to the measurand through a measurement function. When the identities of the measurand and the quantity actually measured are not well defined or distinguished, it can lead to the misinterpretation of results. This is referred to as the measurand problem. The measurand problem can present significant difficulties when the law and not science determines the measurand. This arises when the law requires that a particular quantity be measured. Legal definitions are seldom as rigorous or complete as those utilized in science. Thus, legally defined measurands often fall prey to the measurand problem. An example is the measurement of breath alcohol concentration by infrared spectroscopy. All 50 states authorize such tests but the measurand differs by jurisdiction. This leads to misinterpretation of results in both the forensic and legal communities due to the measurand problem with the consequence that the innocent are convicted and guilty set free. Correct interpretation of breath test results requires that the measurand be properly understood and accounted for. I set forth the varying measurands defined by law, the impact these differing measurands have on the interpretation of breath test results and how the measurand problem can be avoided in the measurement of breath alcohol concentration.
2-d Finite Element Code Postprocessor
Energy Science and Technology Software Center (ESTSC)
1996-07-15
ORION is an interactive program that serves as a postprocessor for the analysis programs NIKE2D, DYNA2D, TOPAZ2D, and CHEMICAL TOPAZ2D. ORION reads binary plot files generated by the two-dimensional finite element codes currently used by the Methods Development Group at LLNL. Contour and color fringe plots of a large number of quantities may be displayed on meshes consisting of triangular and quadrilateral elements. ORION can compute strain measures, interface pressures along slide lines, reaction forces along constrained boundaries, and momentum. ORION has been applied to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.
MAZE96. Generates 2D Input for DYNA NIKE & TOPAZ
Sanford, L.; Hallquist, J.O.
1992-02-24
MAZE is an interactive program that serves as an input and two-dimensional mesh generator for DYNA2D, NIKE2D, TOPAZ2D, and CHEMICAL TOPAZ2D. MAZE also generates a basic template for ISLAND input. MAZE has been applied to the generation of input data to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.
Internal Photoemission Spectroscopy of 2-D Materials
NASA Astrophysics Data System (ADS)
Nguyen, Nhan; Li, Mingda; Vishwanath, Suresh; Yan, Rusen; Xiao, Shudong; Xing, Huili; Cheng, Guangjun; Hight Walker, Angela; Zhang, Qin
Recent research has shown the great benefits of using 2-D materials in the tunnel field-effect transistor (TFET), which is considered a promising candidate for the beyond-CMOS technology. The on-state current of TFET can be enhanced by engineering the band alignment of different 2D-2D or 2D-3D heterostructures. Here we present the internal photoemission spectroscopy (IPE) approach to determine the band alignments of various 2-D materials, in particular SnSe2 and WSe2, which have been proposed for new TFET designs. The metal-oxide-2-D semiconductor test structures are fabricated and characterized by IPE, where the band offsets from the 2-D semiconductor to the oxide conduction band minimum are determined by the threshold of the cube root of IPE yields as a function of photon energy. In particular, we find that SnSe2 has a larger electron affinity than most semiconductors and can be combined with other semiconductors to form near broken-gap heterojunctions with low barrier heights which can produce a higher on-state current. The details of data analysis of IPE and the results from Raman spectroscopy and spectroscopic ellipsometry measurements will also be presented and discussed.
Discuss the testing problems of ultraviolet irradiance meters
NASA Astrophysics Data System (ADS)
Ye, Jun'an; Lin, Fangsheng
2014-09-01
Ultraviolet irradiance meters are widely used to test ultraviolet irradiance intensity in many areas, such as medical treatment, epidemic prevention, energy conservation and environmental protection, computers, manufacturing, electronics, material ageing, and photoelectric applications. The accuracy of the displayed value therefore directly affects sterility control and treatment in hospitals, the prevention capability of the CDC, and the control accuracy of curing and aging in manufacturing industry. Because the readings of ultraviolet irradiance meters are prone to drift, each meter needs to be recalibrated after a period of use to ensure accuracy. By comparison with standard ultraviolet irradiance meters, which are traceable to national benchmarks, we can acquire the correction factor that keeps the instrument working accurately and giving accurate measured data. This leads to an important question: what kind of testing device is more accurate and reliable? This article introduces the testing method and the problems of the current testing device for ultraviolet irradiance meters. To solve these problems, we have developed a new three-dimensional automatic testing device. We introduce the structure and working principle of this system and compare the advantages and disadvantages of the two devices. In addition, we analyse the errors in the testing of ultraviolet irradiance meters.
Qualification tests and electrical measurements: Practice and problems
NASA Technical Reports Server (NTRS)
Smokler, M. I.
1983-01-01
As part of the Flat-Plate Solar Array Project, 138 different module designs were subjected to qualification tests, and electrical measurements were performed on well over a thousand modules representing more than 150 designs. From this experience, conclusions are drawn regarding results and problems, with discussion of the need for change or improvement. The qualification test sequence included application of environmental and electrical stresses to the module. With few exceptions, the tests have revealed defects necessitating module design or process changes. However, the continued need for these tests may be questioned on technical and logistical grounds. Technically, the current test sequence does not cover all design characteristics, does not include all field conditions, and is not known to represent the desired 30-year module life. Logistically, the tests are time-consuming and costly, and there is a lack of fully qualified independent test organizations. Alternatives to the current test program include simplification based on design specification and site environment, and/or the use of warranties or other commercial practices.
ERIC Educational Resources Information Center
Marchis, Iuliana
2009-01-01
The results of Romanian pupils on the international mathematics tests PISA and TIMSS are below average. These poor results have many explanations. In this article we compare the mathematics problems given on these international tests with those given on national tests in Romania.
Testing problem-solving capacities: differences between individual testing and social group setting.
Krasheninnikova, Anastasia; Schneider, Jutta M
2014-09-01
Testing animals individually in problem-solving tasks limits distractions of the subjects during the test, so that they can fully concentrate on the problem. However, such individual performance may not indicate the problem-solving capacity that is commonly employed in the wild, where individuals are faced with novel problems in their social groups and the presence of a conspecific influences an individual's behaviour. To assess the validity of data gathered from parrots when tested individually, we compared performance on patterned-string tasks between parrots tested singly and parrots tested in a social context. We tested two captive groups of orange-winged amazons (Amazona amazonica) with several patterned-string tasks. Despite the differences in the testing environment (singly vs. social context), parrots from both groups performed similarly. However, we found that the willingness to participate in the tasks was significantly higher for the individuals tested in a social context. The study provides further evidence for the crucial influence of social context on an individual's response to a challenging situation such as a problem-solving test. PMID:24668582
Reproducibility problems with the AMPLICOR PCR Chlamydia trachomatis test.
Peterson, E M; Darrow, V; Blanding, J; Aarnaes, S; de la Maza, L M
1997-01-01
In an attempt to use an expanded "gold standard" in an evaluation of an antigen detection test for Chlamydia trachomatis, the AMPLICOR (Roche Diagnostics Systems, Inc., Branchburg, N.J.) PCR Chlamydia trachomatis test and culture were used with 591 sets of cervical specimens. Of the 591 specimens assayed, 35 were retested due to either an equivocal result by the PCR (19 samples) or a discrepancy between the results of culture, PCR, and the antigen detection method. During the repeat testing of the samples with equivocal and discrepant results, all but one interpretation change was due to the PCR result. In addition, upon repeat testing the PCR assay value measured in optical density units varied widely for 13 of these specimens. These 13 specimens were then tested in triplicate by the manufacturer with primers to the chlamydia plasmid and in duplicate with primers to the major outer membrane protein. Only 3 of the 13 specimens gave the same interpretation with these five replicates. In summary, reproducibility problems with the AMPLICOR test should be considered before it is incorporated as part of routine testing or used as an expanded gold standard for chlamydia testing. PMID:9157161
NASA Astrophysics Data System (ADS)
Wang, Jin; Ma, Jianyong; Zhou, Changhe
2014-11-01
A 3×3 highly divergent 2D grating with a period of 3.842 μm at a wavelength of 850 nm under normal incidence is designed and fabricated in this paper. This highly divergent 2D grating is designed using vector theory. Rigorous Coupled Wave Analysis (RCWA) in association with simulated annealing (SA) is adopted to calculate and optimize the grating, and its properties are also investigated by RCWA. The diffraction angles are more than 10 degrees across the whole wavelength band, larger than those of traditional 2D gratings. In addition, the small period of the grating increases the difficulty of fabrication, so we fabricate the 2D gratings by direct laser writing (DLW) instead of traditional manufacturing methods. ICP etching is then used to obtain the highly divergent 2D grating.
Mason, W.E.
1983-03-01
TACO2D and TACO3D are a set of finite element codes for the solution of nonlinear, two-dimensional (TACO2D) and three-dimensional (TACO3D) heat transfer problems. They perform linear and nonlinear analyses of both transient and steady-state heat transfer problems and can handle time- or temperature-dependent material properties. Materials may be either isotropic or orthotropic. A variety of time- and temperature-dependent boundary conditions and loadings are available, including temperature, flux, convection, radiation, and internal heat generation.
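As an illustration of the class of problem such codes solve (this sketch is not TACO itself, and uses finite differences rather than finite elements), a minimal steady-state 2-D heat conduction calculation with fixed-temperature boundaries can be written as a Jacobi iteration of the discrete Laplace equation:

```python
import numpy as np

# Steady-state 2-D conduction on a unit square, Dirichlet boundaries:
# top edge held at 100, the other three edges at 0. Jacobi iteration of
# the 5-point finite-difference Laplacian until the update stalls.
n = 21
T = np.zeros((n, n))
T[0, :] = 100.0  # hot top edge

for _ in range(5000):
    T_new = T.copy()
    # Each interior node relaxes to the average of its four neighbours.
    T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:])
    if np.max(np.abs(T_new - T)) < 1e-6:
        T = T_new
        break
    T = T_new

# By the maximum principle, interior temperatures lie strictly between
# the boundary extremes; the centre of this configuration tends to 25.
print(round(T[n // 2, n // 2], 2))
```

A production code like TACO additionally handles unstructured meshes, nonlinear material properties, and the radiation and convection boundary conditions listed above.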
NASA Astrophysics Data System (ADS)
Slanger, T. G.; Cosby, P. C.; Huestis, D. L.
2003-04-01
N(^2D) is an important species in the nighttime ionosphere, as its reaction with O_2 is a principal source of NO. Its modeled concentration peaks near 200 km, at approximately 4 × 10^5 cm^-3. Nightglow emission in the optically forbidden lines at 519.8 and 520.0 nm is quite weak, a consequence of the combination of an extremely long radiative lifetime, about 10^5 s, and quenching by O atoms, O_2, and N_2. The radiative lifetime is known only from theory, and various calculations lead to a range of possible values for the intensity ratio R = I(519.8)/I(520.0) of 1.5-2.5. On the observational side, Hernandez and Turtle [1969] determined a range of R = 1.3-1.9 in the nightglow, and Sivjee et al. [1981] reported a variable ratio in aurorae, between 1.2 and 1.6. From sky spectra obtained at the Keck II telescope on Mauna Kea, we have accumulated eighty-five 30-60 minute data sets, from March and October 2000 and April 2001, over 13 nights of astronomical observations. We find R to have a quite precise value of 1.760 ± 0.012 (2-σ). There is no difference between the three data sets in terms of the extracted ratio, which therefore seems to be independent of external conditions. At the same time, determination of the O(^1D - ^3P) doublet intensity ratio, I(630.0)/I(636.4), gives a value of 3.03 ± 0.01, the statistical expectation. G. Hernandez and J. P. Turtle, Planet. Space Sci. 17, 675, 1969. G. G. Sivjee, C. S. Deehr, and K. Henricksen, J. Geophys. Res. 86, 1581, 1981.
Test-state approach to the quantum search problem
Sehrawat, Arun; Nguyen, Le Huy; Englert, Berthold-Georg
2011-05-15
The search for 'a quantum needle in a quantum haystack' is a metaphor for the problem of finding out which one of a permissible set of unitary mappings - the oracles - is implemented by a given black box. Grover's algorithm solves this problem with quadratic speedup as compared with the analogous search for 'a classical needle in a classical haystack'. Since the outcome of Grover's algorithm is probabilistic - it gives the correct answer with high probability, not with certainty - the answer requires verification. For this purpose we introduce specific test states, one for each oracle. These test states can also be used to realize 'a classical search for the quantum needle' which is deterministic - it always gives a definite answer after a finite number of steps - and 3.41 times as fast as the purely classical search. Since the test-state search and Grover's algorithm look for the same quantum needle, the average number of oracle queries of the test-state search is the classical benchmark for Grover's algorithm.
Extended 2D generalized dilaton gravity theories
NASA Astrophysics Data System (ADS)
de Mello, R. O.
2008-09-01
We show that an anomaly-free description of matter in (1+1) dimensions requires a deformation of the 2D relativity principle, which introduces a non-trivial centre in the 2D Poincaré algebra. Then we work out the reduced phase space of the anomaly-free 2D relativistic particle, in order to show that it lives in a noncommutative 2D Minkowski space. Moreover, we build a Gaussian wave packet to show that a Planck length is well defined in two dimensions. In order to provide a gravitational interpretation for this noncommutativity, we propose to extend the usual 2D generalized dilaton gravity models by a specific Maxwell component, which gauges the extra symmetry associated with the centre of the 2D Poincaré algebra. In addition, we show that this extension is a high energy correction to the unextended dilaton theories that can affect the topology of spacetime. Further, we couple a test particle to the general extended dilaton models with the purpose of showing that they predict a noncommutativity in curved spacetime, which is locally described by a Moyal star product in the low energy limit. We also conjecture a probable generalization of this result, which provides strong evidence that the noncommutativity is described by a certain star product which is not of the Moyal type at high energies. Finally, we prove that the extended dilaton theories can be formulated as Poisson Sigma models based on a nonlinear deformation of the extended Poincaré algebra.
Application of successive test feature classifier to dynamic recognition problems
NASA Astrophysics Data System (ADS)
Sakata, Yukinobu; Kaneko, Shun'ichi; Tanaka, Takayuki
2005-12-01
A novel successive learning algorithm is proposed for efficiently handling sequentially provided training data, based on the Test Feature Classifier (TFC), which is non-parametric and effective even for small data sets. We previously proposed the TFC, which utilizes prime test features (PTFs), combinations of feature subsets, to obtain excellent performance. The TFC has the following characteristics: non-parametric learning and no misclassification of training data. Its effectiveness has been confirmed in several real-world applications. However, the TFC must be reconstructed whenever any subset of the data changes. In successive learning, after recognition of a set of unknown objects, they are fed into the classifier in order to obtain a modified classifier. We propose an efficient algorithm for reconstruction of PTFs, formalized for the cases of addition and deletion of training data. In a verification experiment, the successive learning algorithm saved about 70% of the total computational cost in comparison with batch learning. We applied the proposed successive TFC to dynamic recognition problems, in which the characteristics of the training data change over time, and examined its behaviour in fundamental experiments. Compared with the Support Vector Machine (SVM), which is well established both algorithmically and in practical applications, the successive TFC showed high performance.
NASA Astrophysics Data System (ADS)
Antinori, Samuele; Falchieri, Davide; Gabrielli, Alessandro; Gandolfi, Enzo
2004-09-01
CARLOSv3 is the third version of a chip that plays a significant role in the data acquisition chain of the Inner Tracking System of the A Large Ion Collider Experiment (ALICE). It has been designed and realized with a 0.25 μm CMOS 3-metal rad-hard digital library. The chip elaborates and compresses, by means of a bi-dimensional compressor, data belonging to a so-called event. The compressor looks for cross-shaped clusters within the whole data set coming from the silicon detector. To test the chip, a specific PCB has been designed; it contains the connectors for probing the ASIC with a pattern generator and a logic state analyzer. The chip is inserted on the PCB using a ZIF socket, which allows testing of the 35 packaged samples out of the total number of bare chips we have from the foundry. The test phase has shown that 32 of the 35 chips under test work well. It is planned to redesign a new version of the chip with extra features and to submit the final version of CARLOS once the full DAQ chain has been completely tested both in Bologna and at CERN.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Education and the Workforce.
H.R. 2846, a bill to prohibit spending Federal education funds on national testing without explicit and specific legislation was referred to the Committee on Education and the Workforce of the U.S. House of Representatives. The Committee, having reviewed the bill, reports favorably on it in this document, proposes some amendments, and recommends…
Review of measurement and testing problems. [of aircraft emissions
NASA Technical Reports Server (NTRS)
1976-01-01
Good instrumentation was required to obtain reliable and repeatable baseline data. Problems encountered in developing such a total system were: (1) accurate airflow measurement, (2) precise fuel flow measurement, and (3) frequent malfunctions of the instrumentation used for pollutant measurement. Span gas quality had a significant effect on emissions test results. The Spindt method was used in the piston aircraft emissions program; it provided a comparative computational procedure for fuel/air ratio based on measured emissions concentrations.
A class of self-similar hydrodynamics test problems
Ramsey, Scott D; Brown, Lowell S; Nelson, Eric M; Alme, Marv L
2010-12-08
We consider self-similar solutions to the gas dynamics equations. One such solution - a spherical geometry Gaussian density profile - has been analyzed in the existing literature, and a connection between it, a linear velocity profile, and a uniform specific internal energy profile has been identified. In this work, we assume the linear velocity profile to construct an entire class of self-similar solutions in both cylindrical and spherical geometry, of which the Gaussian form is one possible member. After completing the derivation, we present some results in the context of a test problem for compressible flow codes.
Scaling in the 2D SU(3) × SU(3) spin model as a test of a new coding method for SU(3) matrices
NASA Astrophysics Data System (ADS)
Bunk, B.; Sommer, R.
1985-02-01
We present a Monte Carlo measurement of the magnetic susceptibility in the SU(3) × SU(3) spin model in two dimensions. Asymptotic scaling is verified on a 20 × 20 lattice. This laboratory is then used to test a new method for coding SU(3) variables in one 60-bit word of computer memory. In this approach, real numbers are truncated to fit into a 5-bit representation.
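The packing idea can be illustrated with a sketch; the actual SU(3) parametrization and bit layout used in the paper are not reproduced here, only the generic scheme of truncating reals in [-1, 1] to 5-bit integers, several of which then fit side by side in a 60-bit word (twelve at 5 bits apiece):

```python
def pack(values):
    """Quantize each value in [-1, 1] to 5 bits and pack into one integer."""
    word = 0
    for v in values:
        q = min(31, int(round((v + 1.0) * 31 / 2.0)))  # map to 0..31
        word = (word << 5) | q
    return word

def unpack(word, count):
    """Recover the quantized approximations in original order."""
    out = []
    for _ in range(count):
        q = word & 0b11111
        out.append(q * 2.0 / 31 - 1.0)
        word >>= 5
    return out[::-1]

vals = [0.0, 0.5, -0.5, 1.0, -1.0, 0.25]
word = pack(vals)
approx = unpack(word, len(vals))
# Round-trip error is bounded by half a quantization step, about 1/31.
```

The trade-off is exactly the one the abstract tests: a large memory saving against a quantization error that must be shown not to spoil physical observables such as the susceptibility.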
49 CFR 40.205 - How are drug test problems corrected?
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false How are drug test problems corrected? 40.205... WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests § 40.205 How are drug test problems...), you must try to correct the problem promptly, if doing so is practicable. You may conduct...
49 CFR 40.205 - How are drug test problems corrected?
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 1 2011-10-01 2011-10-01 false How are drug test problems corrected? 40.205... WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests § 40.205 How are drug test problems...), you must try to correct the problem promptly, if doing so is practicable. You may conduct...
Ji, Yuan; Skierka, Jennifer M; Blommel, Joseph H; Moore, Brenda E; VanCuyk, Douglas L; Bruflat, Jamie K; Peterson, Lisa M; Veldhuizen, Tamra L; Fadra, Numrah; Peterson, Sandra E; Lagerstedt, Susan A; Train, Laura J; Baudhuin, Linnea M; Klee, Eric W; Ferber, Matthew J; Bielinski, Suzette J; Caraballo, Pedro J; Weinshilboum, Richard M; Black, John L
2016-05-01
Significant barriers, such as lack of professional guidelines, specialized training for interpretation of pharmacogenomics (PGx) data, and insufficient evidence to support clinical utility, prevent preemptive PGx testing from being widely clinically implemented. The current study, as a pilot project for the Right Drug, Right Dose, Right Time-Using Genomic Data to Individualize Treatment Protocol, was designed to evaluate the impact of preemptive PGx and to optimize the workflow in the clinic setting. We used an 84-gene next-generation sequencing panel that included SLCO1B1, CYP2C19, CYP2C9, and VKORC1, together with a custom-designed CYP2D6 testing cascade, to genotype the 1013 subjects in laboratories certified under the Clinical Laboratory Improvement Amendments (CLIA). Actionable PGx variants were placed in patients' electronic medical records, where integrated clinical decision support rules alert providers when a relevant medication is ordered. The fraction of this cohort carrying actionable PGx variant(s) in individual genes ranged from 30% (SLCO1B1) to 79% (CYP2D6). When considering all five genes together, 99% of the subjects carried an actionable PGx variant(s) in at least one gene. Our study provides evidence in favor of preemptive PGx testing by identifying the risk of a variant being present in the population we studied. PMID:26947514
Phillips, Lawrence M; Hachamovitch, Rory; Berman, Daniel S; Iskandrian, Ami E; Min, James K; Picard, Michael H; Kwong, Raymond Y; Friedrich, Matthias G; Scherrer-Crosbie, Marielle; Hayes, Sean W; Sharir, Tali; Gosselin, Gilbert; Mazzanti, Marco; Senior, Roxy; Beanlands, Rob; Smanio, Paola; Goyal, Abhi; Al-Mallah, Mouaz; Reynolds, Harmony; Stone, Gregg W; Maron, David J; Shaw, Leslee J
2013-12-01
There is a preponderance of evidence that, in the setting of an acute coronary syndrome, an invasive approach using coronary revascularization has a morbidity and mortality benefit. However, recent stable ischemic heart disease (SIHD) randomized clinical trials testing whether the addition of coronary revascularization to guideline-directed medical therapy (GDMT) reduces death or major cardiovascular events have been negative. Based on the evidence from these trials, the primary role of GDMT as a front line medical management approach has been clearly defined in the recent SIHD clinical practice guideline; the role of prompt revascularization is less precisely defined. Based on data from observational studies, it has been hypothesized that there is a level of ischemia above which a revascularization strategy might result in benefit regarding cardiovascular events. However, eligibility for recent negative trials in SIHD has mandated at most minimal standards for ischemia. An ongoing randomized trial evaluating the effectiveness of randomization of patients to coronary angiography and revascularization as compared to no coronary angiography and GDMT in patients with moderate-severe ischemia will formally test this hypothesis. The current review will highlight the available evidence including a review of the published and ongoing SIHD trials. PMID:23963599
Predicting non-square 2D dice probabilities
NASA Astrophysics Data System (ADS)
Pender, G. A. T.; Uhrin, M.
2014-07-01
The prediction of the final state probabilities of a general cuboid randomly thrown onto a surface is a problem that naturally arises in the minds of men and women familiar with regular cubic dice and the basic concepts of probability. Indeed, it was considered by Newton in 1664 (Newton 1967 The Mathematical Papers of Isaac Newton vol I (Cambridge: Cambridge University Press) pp 60-1). In this paper we make progress on the 2D problem (which can be realized in 3D by considering a long cuboid, or alternatively a rectangular cross-sectioned dreidel). For the two-dimensional case we suggest that the ratio of the probabilities of landing on each of the two sides is given by $\frac{\sqrt{k^2+l^2}-k}{\sqrt{k^2+l^2}-l}\,\frac{\arctan(l/k)}{\arctan(k/l)}$, where k and l are the lengths of the two sides. We test this theory both experimentally and computationally, and find good agreement between theory, experiment, and computation. Our theory is known, from its derivation, to be an approximation for particularly bouncy or 'grippy' surfaces where the die rolls through many revolutions before settling. On real surfaces we would expect (and we observe) that the true probability ratio for a 2D die is somewhat closer to unity than predicted by our theory. This problem may also have wider relevance in the testing of physics engines.
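The suggested ratio is straightforward to evaluate numerically; the function name below is ours, and the formula is taken directly from the abstract:

```python
import math

# Suggested 2-D probability ratio for a rectangular cross-section die
# with side lengths k and l, as stated in the abstract:
#   (sqrt(k^2+l^2) - k) / (sqrt(k^2+l^2) - l) * arctan(l/k) / arctan(k/l)
def side_ratio(k, l):
    h = math.hypot(k, l)  # half-diagonal factor sqrt(k^2 + l^2)
    return (h - k) / (h - l) * math.atan2(l, k) / math.atan2(k, l)

# Sanity checks: a square cross-section (k = l) must give even odds,
# and swapping the sides must invert the ratio.
print(side_ratio(1.0, 1.0))  # 1.0
print(side_ratio(1.0, 2.0) * side_ratio(2.0, 1.0))  # 1.0
```

Both symmetries follow directly from the form of the expression, so they make a quick correctness check for any implementation of it.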
Energy Science and Technology Software Center (ESTSC)
2004-08-01
AnisWave2D is a 2D finite-difference code for simulating seismic wave propagation in fully anisotropic materials. The code is implemented to run in parallel over multiple processors and is fully portable. A mesh-refinement algorithm allows the grid spacing to be tailored to the velocity model, avoiding the over-sampling of high-velocity materials that usually occurs in fixed-grid schemes.
Leak testing of cryogenic components — problems and solutions
NASA Astrophysics Data System (ADS)
Srivastava, S. P.; Pandarkar, S. P.; Unni, T. G.; Sinha, A. K.; Mahajan, K.; Suthar, R. L.
2008-05-01
moderator pot was driving the MSLD out of range. Since it was very difficult to locate the leak by the tracer-probe method, another technique was sought to solve the problem of leak location. Finally, it was possible to locate the leak by observing the change in the helium background reading of the MSLD during masking and unmasking of the welded joints. This paper describes, in general, the design and leak-testing aspects of the cryogenic components of the Cold Neutron Source and, in particular, the problems and solutions in leak testing of the transfer lines and moderator pot.
Feautrier, D.; Smith, D.L.
1992-03-01
This report describes the development and testing of a deuterium gas target intended for use at a low-energy accelerator facility to produce neutrons for basic research and various nuclear applications. The principal source reaction is H-2(d,n)He-3, which produces a nearly mono-energetic group of neutrons. However, a lower-energy continuum neutron spectrum is produced by the H-2(d;n,p)H-2 reaction and also by deuterons that strike various components in the target assembly. The present target is designed to achieve the following objectives: (1) minimize unwanted background neutron production from the target assembly, (2) provide a relatively low level of residual long-term activity within the target components, (3) have the capacity to dissipate up to 150 watts of beam power with good target longevity, and (4) possess a relatively modest target mass in order to minimize neutron scattering from the target components. The basic physical principles that have to be considered in designing an accelerator target are discussed, and the major engineering features of this particular target design are outlined. The results of initial performance tests on this target are documented and some conclusions concerning the viability of the target design are presented.
Ultrasonic 2D matrix PVDF transducer
NASA Astrophysics Data System (ADS)
Ptchelintsev, A.; Maev, R. Gr.
2000-05-01
During the past decade a substantial amount of work has been done in the area of ultrasonic imaging technology using 2D arrays. The main problems arising for two-dimensional matrix transducers at megahertz frequencies are the small size and huge count of the elements, high electrical impedance, low sensitivity, poor SNR, and slow data acquisition rates. The major technological difficulty remains the high density of the interconnect. Numerous approaches have been suggested to solve these problems. In the present work, a 24×24-element matrix (24 transmit + 24 receive) and a switching board were developed. The transducer consists of two 52 μm PVDF layers, each representing a linear array of 24 elements, placed one on top of the other. The electrodes in these two layers are perpendicular and form a grid of 0.5×0.5 mm pitch. The layers are bonded together, with the ground electrode being monolithic and located between the layers. The matrix is backed from the rear surface with an epoxy composition. During emission, a linear element from the emitting layer generates a longitudinal wave pulse propagating inside the test object. Reflected pulses are picked up by the receiving layer. During one transmit-receive cycle, one transmit element and one receive element are selected by the corresponding multiplexers; these crossed elements emulate a small element formed by their intersection. The present design offers the following advantages: it minimizes the number of active channels and the density of the interconnect; reduces the electrical impedance of the elements, improving electrical matching; enables the transmit-receive mode; provides good bandwidth and time resolution owing to the efficient backing; and significantly reduces the complexity of the electronics. The matrix cannot be used for beam steering or focusing; as a consequence of this inability to focus, the penetration depth is also limited by diffraction.
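The crossed-element addressing can be sketched schematically; the names below are illustrative, not from the paper, but the counts follow directly from the design described above:

```python
# Crossed-element addressing: 24 transmit rows and 24 receive columns
# (48 physical channels in total) emulate a 24 x 24 = 576 element
# aperture, one row/column intersection per transmit-receive cycle.
N = 24

def scan_order():
    """Yield (transmit_row, receive_column) pairs covering the aperture."""
    for tx in range(N):
        for rx in range(N):
            yield tx, rx

cycles = list(scan_order())
channels = N + N  # physical channels actually wired
print(len(cycles), channels)
```

This is the design choice the abstract emphasizes: interconnect density and electronics complexity scale with N + N channels, while the emulated aperture scales with N × N intersections, at the cost of one acquisition cycle per emulated element.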
NASA Astrophysics Data System (ADS)
Jones, Alan G.; Afonso, Juan Carlos; Fullea, Javier; Salajegheh, Farshad
2014-02-01
Modeling the continental lithosphere's physical properties, especially its depth extent, must be done within a self-consistent petrological-geophysical framework; modeling using only one or two data types may easily lead to inconsistencies and erroneous interpretations. Using the LitMod approach for hypothesis testing and first-order modeling, we show how assumptions made about crustal information and the probable compositions of the lithospheric and sub-lithospheric mantle affect particular observables, especially surface topographic elevation. The critical crustal parameter is density, with an imprecision of 50 kg m^-3 leading to ca. 600 m error in topography. The next key parameter is crustal thickness, and uncertainties in its definition lead to ca. 4 km uncertainty in the LAB depth for every 1 km of variation in Moho depth. Possible errors in the other assumed crustal parameters introduce a few kilometers of uncertainty in the depth to the LAB. We use Ireland as a natural laboratory to demonstrate the approach. From first-order arguments and given reasonable assumptions, a topographic elevation in the range of 50-100 m, which is the average across Ireland, requires that the lithosphere-asthenosphere boundary (LAB) beneath most of Ireland must lie in the range 90-115 km. A somewhat shallower (to 85 km) LAB is permitted, but the crust must be thinned (< 29 km) to compensate. The observations, especially topography, are inconsistent with suggestions, based on interpretation of S-to-P receiver functions, that the LAB thins from 85 km in southern Ireland to 55 km in central northern Ireland over a distance of < 150 km. Such a thin lithosphere would result in over 1000 m of uplift, and such rapid thinning by 30 km over less than 150 km would yield significant north-south variations in topographic elevation, Bouguer anomaly, and geoid height, none of which are observed. Even juxtaposing the most extreme probable depleted composition for the lithospheric mantle
Mechanical modeling of porous oxide fuel pellet A Test Problem
Nukala, Phani K; Barai, Pallab; Simunovic, Srdjan; Ott, Larry J
2009-10-01
A poro-elasto-plastic material model has been developed to capture the response of oxide fuels inside the nuclear reactors under operating conditions. Behavior of the oxide fuel and variation in void volume fraction under mechanical loading as predicted by the developed model has been reported in this article. The significant effect of void volume fraction on the overall stress distribution of the fuel pellet has also been described. An important oxide fuel issue that can have significant impact on the fuel performance is the mechanical response of oxide fuel pellet and clad system. Specifically, modeling the thermo-mechanical response of the fuel pellet in terms of its thermal expansion, mechanical deformation, swelling due to void formation and evolution, and the eventual contact of the fuel with the clad is of significant interest in understanding the fuel-clad mechanical interaction (FCMI). These phenomena are nonlinear and coupled since reduction in the fuel-clad gap affects thermal conductivity of the gap, which in turn affects temperature distribution within the fuel and the material properties of the fuel. Consequently, in order to accurately capture fuel-clad gap closure, we need to account for fuel swelling due to generation, retention, and evolution of fission gas in addition to the usual thermal expansion and mechanical deformation. Both fuel chemistry and microstructure also have a significant effect on the nucleation and growth of fission gas bubbles. Fuel-clad gap closure leading to eventual contact of the fuel with the clad introduces significant stresses in the clad, which makes thermo-mechanical response of the clad even more relevant. The overall aim of this test problem is to incorporate the above features in order to accurately capture fuel-clad mechanical interaction. Because of the complex nature of the problem, a series of test problems with increasing multi-physics coupling features, modeling accuracy, and complexity are defined with the
CYP2D7 Sequence Variation Interferes with TaqMan CYP2D6*15 and *35 Genotyping
Riffel, Amanda K.; Dehghani, Mehdi; Hartshorne, Toinette; Floyd, Kristen C.; Leeder, J. Steven; Rosenblatt, Kevin P.; Gaedigk, Andrea
2016-01-01
TaqMan™ genotyping assays are widely used to genotype CYP2D6, which encodes a major drug metabolizing enzyme. Assay design for CYP2D6 can be challenging owing to the presence of two pseudogenes, CYP2D7 and CYP2D8, structural and copy number variation and numerous single nucleotide polymorphisms (SNPs) some of which reflect the wild-type sequence of the CYP2D7 pseudogene. The aim of this study was to identify the mechanism causing false-positive CYP2D6*15 calls and remediate those by redesigning and validating alternative TaqMan genotype assays. Among 13,866 DNA samples genotyped by the CompanionDx® lab on the OpenArray platform, 70 samples were identified as heterozygotes for 137Tins, the key SNP of CYP2D6*15. However, only 15 samples were confirmed when tested with the Luminex xTAG CYP2D6 Kit and sequencing of CYP2D6-specific long range (XL)-PCR products. Genotype and gene resequencing of CYP2D6 and CYP2D7-specific XL-PCR products revealed a CC>GT dinucleotide SNP in exon 1 of CYP2D7 that reverts the sequence to CYP2D6 and allows a TaqMan assay PCR primer to bind. Because CYP2D7 also carries a Tins, a false-positive mutation signal is generated. This CYP2D7 SNP was also responsible for generating false-positive signals for rs769258 (CYP2D6*35) which is also located in exon 1. Although alternative CYP2D6*15 and *35 assays resolved the issue, we discovered a novel CYP2D6*15 subvariant in one sample that carries additional SNPs preventing detection with the alternate assay. The frequency of CYP2D6*15 was 0.1% in this ethnically diverse U.S. population sample. In addition, we also discovered linkage between the CYP2D7 CC>GT dinucleotide SNP and the 77G>A (rs28371696) SNP of CYP2D6*43. The frequency of this tentatively functional allele was 0.2%. Taken together, these findings emphasize that regardless of how careful genotyping assays are designed and evaluated before being commercially marketed, rare or unknown SNPs underneath primer and/or probe regions can impact
CYP2D7 Sequence Variation Interferes with TaqMan CYP2D6 (*) 15 and (*) 35 Genotyping.
Riffel, Amanda K; Dehghani, Mehdi; Hartshorne, Toinette; Floyd, Kristen C; Leeder, J Steven; Rosenblatt, Kevin P; Gaedigk, Andrea
2015-01-01
TaqMan™ genotyping assays are widely used to genotype CYP2D6, which encodes a major drug metabolizing enzyme. Assay design for CYP2D6 can be challenging owing to the presence of two pseudogenes, CYP2D7 and CYP2D8, structural and copy number variation and numerous single nucleotide polymorphisms (SNPs) some of which reflect the wild-type sequence of the CYP2D7 pseudogene. The aim of this study was to identify the mechanism causing false-positive CYP2D6 (*) 15 calls and remediate those by redesigning and validating alternative TaqMan genotype assays. Among 13,866 DNA samples genotyped by the CompanionDx® lab on the OpenArray platform, 70 samples were identified as heterozygotes for 137Tins, the key SNP of CYP2D6 (*) 15. However, only 15 samples were confirmed when tested with the Luminex xTAG CYP2D6 Kit and sequencing of CYP2D6-specific long range (XL)-PCR products. Genotype and gene resequencing of CYP2D6 and CYP2D7-specific XL-PCR products revealed a CC>GT dinucleotide SNP in exon 1 of CYP2D7 that reverts the sequence to CYP2D6 and allows a TaqMan assay PCR primer to bind. Because CYP2D7 also carries a Tins, a false-positive mutation signal is generated. This CYP2D7 SNP was also responsible for generating false-positive signals for rs769258 (CYP2D6 (*) 35) which is also located in exon 1. Although alternative CYP2D6 (*) 15 and (*) 35 assays resolved the issue, we discovered a novel CYP2D6 (*) 15 subvariant in one sample that carries additional SNPs preventing detection with the alternate assay. The frequency of CYP2D6 (*) 15 was 0.1% in this ethnically diverse U.S. population sample. In addition, we also discovered linkage between the CYP2D7 CC>GT dinucleotide SNP and the 77G>A (rs28371696) SNP of CYP2D6 (*) 43. The frequency of this tentatively functional allele was 0.2%. Taken together, these findings emphasize that regardless of how careful genotyping assays are designed and evaluated before being commercially marketed, rare or unknown SNPs underneath primer
49 CFR 40.271 - How are alcohol testing problems corrected?
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false How are alcohol testing problems corrected? 40.271... WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Alcohol Testing § 40.271 How are alcohol testing problems corrected? (a) As a BAT or STT, you have the responsibility of trying to complete successfully...
49 CFR 40.267 - What problems always cause an alcohol test to be cancelled?
Code of Federal Regulations, 2014 CFR
2014-10-01
... 49 Transportation 1 2014-10-01 2014-10-01 false What problems always cause an alcohol test to be cancelled? 40.267 Section 40.267 Transportation Office of the Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Alcohol Testing § 40.267 What problems always cause an alcohol test to...
49 CFR 40.271 - How are alcohol testing problems corrected?
Code of Federal Regulations, 2012 CFR
2012-10-01
... 49 Transportation 1 2012-10-01 2012-10-01 false How are alcohol testing problems corrected? 40.271... WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Alcohol Testing § 40.271 How are alcohol testing... alcohol test for each employee. (1) If, during or shortly after the testing process, you become aware...
49 CFR 40.271 - How are alcohol testing problems corrected?
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 1 2013-10-01 2013-10-01 false How are alcohol testing problems corrected? 40.271... WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Alcohol Testing § 40.271 How are alcohol testing... alcohol test for each employee. (1) If, during or shortly after the testing process, you become aware...
49 CFR 40.271 - How are alcohol testing problems corrected?
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 1 2011-10-01 2011-10-01 false How are alcohol testing problems corrected? 40.271... WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Alcohol Testing § 40.271 How are alcohol testing... alcohol test for each employee. (1) If, during or shortly after the testing process, you become aware...
Testing problem solving in turkey vultures (Cathartes aura) using the string-pulling test.
Ellison, Anne Margaret; Watson, Jane; Demers, Eric
2015-01-01
To examine problem solving in turkey vultures (Cathartes aura), six captive vultures were presented with a string-pulling task, which involved drawing a string up to access food. This test has been used to assess cognition in many bird species. A small piece of meat suspended by a string was attached to a perch. Two birds solved the problem without apparent trial-and-error learning; a third bird solved the problem after observing a successful bird, suggesting that this individual learned from the other vulture. The remaining birds failed to complete the task. The successful birds significantly reduced the time needed to solve the task between early and late trials, suggesting that they had learned to solve the problem and improved their technique. The successful vultures solved the problem in a novel way: they pulled the string through their beak with their tongue, and may have gathered the string in their crop until the food was in reach. In contrast, ravens, parrots and finches use a stepwise process; they pull the string up, tuck it under foot, and reach down to pull up another length. As scavengers, turkey vultures use their beak for tearing and ripping at carcasses, but possess large, flat, webbed feet that are ill-suited to pulling or grasping. The ability to solve this problem and the novel approach used by the turkey vultures in this study may be a result of the unique evolutionary pressures imposed on this scavenging species. PMID:25015133
Chimpanzee Problem-Solving: A Test for Comprehension.
ERIC Educational Resources Information Center
Premack, David; Woodruff, Guy
1978-01-01
Investigates a chimpanzee's capacity to recognize representations of problems and solutions, as well as its ability to perceive the relationship between each type of problem and its appropriate solutions using televised programs and photographic solutions. (HM)
NASA Astrophysics Data System (ADS)
Mayor, Louise
2016-05-01
Graphene might be the most famous example, but there are other 2D materials and compounds too. Louise Mayor explains how these atomically thin sheets can be layered together to create flexible “van der Waals heterostructures”, which could lead to a range of novel applications.
Material behavior and materials problems in TFTR (Tokamak Fusion Test Reactor)
Dylla, H.F.; Ulrickson, M.A.; Owens, D.K.; Heifetz, D.B.; Mills, B.E.; Pontau, A.E.; Wampler, W.R.; Doyle, B.L.; Lee, S.R.; Watson, R.D.; Croessmann, C.D.
1988-05-01
This paper reviews the experience with first-wall materials over a 20-month period of operation spanning 1985--1987. Experience with the axisymmetric inner wall limiter, constructed of graphite tiles, will be described, including the conditioning procedures needed for impurity and particle control of high power (≤20 MW) neutral injection experiments. The thermal effects in disruptions have been quantified and no significant damage to the bumper limiter has occurred as a result of disruptions. Carbon and metal impurity redeposition effects have been quantified through surface analysis of wall samples. Estimates of the tritium retention in the graphite limiter tiles and redeposited carbon films have been made based on analysis of deuterium retention in removed graphite tiles and wall samples. New limiter structures have been designed using a 2D carbon/carbon (C/C) composite material for RF antenna protection. Laboratory tests of the important thermal, mechanical and vacuum properties of C/C materials will be described. Finally, the last series of experiments in TFTR with in-situ Zr/Al surface pumps will be described. Problems with Zr/Al embrittlement have led to the removal of the getter material from the in-torus environment. 53 refs., 8 figs., 3 tabs.
Report of the 1988 2-D Intercomparison Workshop, chapter 3
NASA Technical Reports Server (NTRS)
Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar
1989-01-01
Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially-high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.
49 CFR 40.199 - What problems always cause a drug test to be cancelled?
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false What problems always cause a drug test to be cancelled? 40.199 Section 40.199 Transportation Office of the Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests § 40.199 What problems...
Energy Science and Technology Software Center (ESTSC)
2001-01-31
This software reduces the data from the two-dimensional kSA MOS program (k-Space Associates, Ann Arbor, MI). Initial MOS data is recorded without headers in 38 columns, with one row of data per acquisition per laser beam tracked. The final MOS 2d data file is reduced, graphed, and saved in a tab-delimited column format with headers that can be plotted in any graphing software.
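The reduction step described above — headerless whitespace columns in, tab-delimited columns with a header row out — can be sketched as follows. The column names and values here are illustrative placeholders, not the actual 38-column kSA MOS schema:

```python
import io
import numpy as np

# Hypothetical reduction step: read headerless, whitespace-delimited rows
# (one row per acquisition per tracked laser beam) and rewrite them as a
# tab-delimited file with a header row. Column names are made up for the
# sketch; the real kSA MOS format has 38 columns.
raw = io.StringIO("0.0 1.10 0.50\n1.0 1.12 0.48\n2.0 1.15 0.47\n")
data = np.loadtxt(raw)                    # shape: (rows, columns)

header = "time_s\tcurvature\tintensity"   # tab-delimited header row
out = io.StringIO()
np.savetxt(out, data, delimiter="\t", header=header, comments="", fmt="%.3f")
# out now holds the header line followed by 3 tab-delimited data rows
```

The `comments=""` argument stops `np.savetxt` from prefixing the header with `# `, so downstream graphing software sees a plain column-name row.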
Tomosynthesis imaging with 2D scanning trajectories
NASA Astrophysics Data System (ADS)
Khare, Kedar; Claus, Bernhard E. H.; Eberhard, Jeffrey W.
2011-03-01
Tomosynthesis imaging in chest radiography provides volumetric information with the potential for improved diagnostic value when compared to the standard AP or LAT projections. In this paper we explore the image quality benefits of 2D scanning trajectories when coupled with advanced image reconstruction approaches. It is intuitively clear that 2D trajectories provide projection data that is more complete in terms of Radon space filling, when compared with conventional tomosynthesis using a linearly scanned source. Incorporating this additional information for obtaining improved image quality is, however, not a straightforward problem. The typical tomosynthesis reconstruction algorithms are based on direct inversion methods e.g. Filtered Backprojection (FBP) or iterative algorithms that are variants of the Algebraic Reconstruction Technique (ART). The FBP approach is fast and provides high frequency details in the image but at the same time introduces streaking artifacts degrading the image quality. The iterative methods can reduce the image artifacts by using image priors but suffer from a slow convergence rate, thereby producing images lacking high frequency details. In this paper we propose using a fast converging optimal gradient iterative scheme that has advantages of both the FBP and iterative methods in that it produces images with high frequency details while reducing the image artifacts. We show that using favorable 2D scanning trajectories along with the proposed reconstruction method has the advantage of providing improved depth information for structures such as the spine and potentially producing images with more isotropic resolution.
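The abstract does not specify the exact iteration, but a generic "optimal gradient" scheme in the sense of Nesterov, applied to the least-squares reconstruction problem min_x ||Ax − b||², can be sketched as below (step sizes and image priors are simplifications, not the paper's algorithm):

```python
import numpy as np

def nesterov_lsq(A, b, n_iter=500):
    """Accelerated (Nesterov) gradient descent for min_x 0.5 * ||A x - b||^2.

    Illustrative stand-in for a fast-converging optimal gradient scheme;
    the paper's reconstruction also involves priors and projection geometry
    that are omitted here.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = y - (A.T @ (A @ y - b)) / L            # gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Tiny sanity check on a well-posed 2x2 system
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = nesterov_lsq(A, b)   # converges close to the exact solution [2, 3]
```

The momentum extrapolation is what distinguishes this from plain gradient descent and gives the faster convergence rate alluded to in the abstract.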
MAGNUM-2D computer code: user's guide
England, R.L.; Kline, N.W.; Ekblad, K.J.; Baca, R.G.
1985-01-01
Information relevant to the general use of the MAGNUM-2D computer code is presented. This computer code was developed for the purpose of modeling (i.e., simulating) the thermal and hydraulic conditions in the vicinity of a waste package emplaced in a deep geologic repository. The MAGNUM-2D computer code computes (1) the temperature field surrounding the waste package as a function of the heat generation rate of the nuclear waste and thermal properties of the basalt and (2) the hydraulic head distribution and associated groundwater flow fields as a function of the temperature gradients and hydraulic properties of the basalt. MAGNUM-2D is a two-dimensional numerical model for transient or steady-state analysis of coupled heat transfer and groundwater flow in a fractured porous medium. The governing equations consist of a set of coupled, quasi-linear partial differential equations that are solved using a Galerkin finite-element technique. A Newton-Raphson algorithm is embedded in the Galerkin functional to formulate the problem in terms of the incremental changes in the dependent variables. Both triangular and quadrilateral finite elements are used to represent the continuum portions of the spatial domain. Line elements may be used to represent discrete conduits. 18 refs., 4 figs., 1 tab.
Interparticle Attraction in 2D Complex Plasmas
NASA Astrophysics Data System (ADS)
Kompaneets, Roman; Morfill, Gregor E.; Ivlev, Alexei V.
2016-03-01
Complex (dusty) plasmas allow experimental studies of various physical processes occurring in classical liquids and solids by directly observing individual microparticles. A major problem is that the interaction between microparticles is generally not molecularlike. In this Letter, we propose how to achieve a molecularlike interaction potential in laboratory 2D complex plasmas. We argue that this principal aim can be achieved by using relatively small microparticles and properly adjusting discharge parameters. If experimentally confirmed, this will make it possible to employ complex plasmas as a model system with an interaction potential resembling that of conventional liquids.
A scalable 2-D parallel sparse solver
Kothari, S.C.; Mitra, S.
1995-12-01
Scalability beyond a small number of processors, typically 32 or less, is known to be a problem for existing parallel general sparse (PGS) direct solvers. This paper presents a PGS direct solver for general sparse linear systems on distributed memory machines. The algorithm is based on the well-known sequential sparse algorithm Y12M. To achieve efficient parallelization, a 2-D scattered decomposition of the sparse matrix is used. The proposed algorithm is more scalable than existing parallel sparse direct solvers. Its scalability is evaluated on a 256 processor nCUBE2s machine using Boeing/Harwell benchmark matrices.
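The 2-D scattered (cyclic) decomposition mentioned above assigns matrix entries to a two-dimensional grid of processes so that work stays balanced even as factorization proceeds. A minimal sketch of the ownership map (the real solver additionally handles fill-in and pivoting, which are omitted here):

```python
def owner(i, j, pr, pc):
    """2-D scattered (cyclic) decomposition: entry (i, j) of the matrix is
    owned by process (i mod pr, j mod pc) on a pr-by-pc process grid.
    Illustrative only; fill-in and pivoting in a real sparse direct solver
    are not modeled."""
    return (i % pr, j % pc)

# Distribute an 8x8 matrix over a 2x2 process grid and count entries
# per process: the cyclic map yields a perfectly balanced assignment.
counts = {}
for i in range(8):
    for j in range(8):
        p = owner(i, j, 2, 2)
        counts[p] = counts.get(p, 0) + 1
```

Because consecutive rows and columns land on different processes, the active submatrix during elimination remains spread across the whole grid, which is the property that drives the scalability claim.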
VAM2D: Variably saturated analysis model in two dimensions
Huyakorn, P.S.; Kool, J.B.; Wu, Y.S.
1991-10-01
This report documents a two-dimensional finite element model, VAM2D, developed to simulate water flow and solute transport in variably saturated porous media. Both flow and transport simulation can be handled concurrently or sequentially. The formulation of the governing equations and the numerical procedures used in the code are presented. The flow equation is approximated using the Galerkin finite element method. Nonlinear soil moisture characteristics and atmospheric boundary conditions (e.g., infiltration, evaporation and seepage face), are treated using Picard and Newton-Raphson iterations. Hysteresis effects and anisotropy in the unsaturated hydraulic conductivity can be taken into account if needed. The contaminant transport simulation can account for advection, hydrodynamic dispersion, linear equilibrium sorption, and first-order degradation. Transport of a single component or a multi-component decay chain can be handled. The transport equation is approximated using an upstream weighted residual method. Several test problems are presented to verify the code and demonstrate its utility. These problems range from simple one-dimensional to complex two-dimensional and axisymmetric problems. This document has been produced as a user's manual. It contains detailed information on the code structure along with instructions for input data preparation and sample input and printed output for selected test problems. Also included are instructions for job set up and restarting procedures. 44 refs., 54 figs., 24 tabs.
Nanoimprint lithography: 2D or not 2D? A review
NASA Astrophysics Data System (ADS)
Schift, Helmut
2015-11-01
Nanoimprint lithography (NIL) is more than a planar high-end technology for the patterning of wafer-like substrates. It is essentially a 3D process, because it replicates various stamp topographies by 3D displacement of material and takes advantage of the bending of stamps while the mold cavities are filled. But at the same time, it keeps all assets of a 2D technique being able to pattern thin masking layers like in photon- and electron-based traditional lithography. This review reports about 20 years of development of replication techniques at Paul Scherrer Institut, with a focus on 3D aspects of molding, which enable NIL to stay 2D, but at the same time enable 3D applications which are "more than Moore." As an example, the manufacturing of a demonstrator for backlighting applications based on thermally activated selective topography equilibration will be presented. This technique allows generating almost arbitrary sloped, convex and concave profiles in the same polymer film with dimensions in micro- and nanometer scale.
Observed-Score Equating as a Test Assembly Problem.
ERIC Educational Resources Information Center
van der Linden, Wim J.; Luecht, Richard M.
1998-01-01
Derives a set of linear conditions of item-response functions that guarantees identical observed-score distributions on two test forms. The conditions can be added as constraints to a linear programming model for test assembly. An example illustrates the use of the model for an item pool from the Law School Admissions Test (LSAT). (SLD)
Numerical Evaluation of 2D Ground States
NASA Astrophysics Data System (ADS)
Kolkovska, Natalia
2016-02-01
A ground state is defined as the positive radial solution of the multidimensional nonlinear problem
Some Reliability Problems in a Criterion-Referenced Test.
ERIC Educational Resources Information Center
Roudabush, Glenn E.; Green, Donald Ross
This paper describes the development of a criterion-referenced test. The Prescriptive Mathematics Inventory (PMI) was developed to measure 400 stated behavioral objectives. The test consists of three overlapping levels with the objectives chosen to cover 90 to 95 per cent of the mathematics curriculum nominally taught in grades 4 through 8. Each…
An empirical coverage test for the g-sample problem
Orlowski, L.A.; Grundy, W.D.; Mielke, P.W., Jr.
1991-01-01
A nonparametric g-sample empirical coverage test has recently been developed for univariate continuous data. It is based upon the empirical coverages which are spacings of multiple random samples. The test is capable of detecting any distributional differences which may exist among the parent populations, without additional assumptions beyond randomness and continuity. The test can be effective with the limited and/or unequal sample sizes most often encountered in geologic studies. A computer program for implementing this procedure, G-SECT 1, is available. © 1991 International Association for Mathematical Geology.
Dominant 2D magnetic turbulence in the solar wind
NASA Technical Reports Server (NTRS)
Bieber, John W.; Wanner, Wolfgang; Matthaeus, William H.
1995-01-01
There have been recent suggestions that solar wind magnetic turbulence may be a composite of slab geometry (wavevectors aligned with the mean magnetic field) and 2D geometry (wavevectors perpendicular to the mean field). We report results of two new tests of this hypothesis using Helios measurements of inertial-range magnetic spectra in the solar wind. The first test is based upon a characteristic difference between perpendicular and parallel reduced power spectra which is expected for the 2D component but not for the slab component. The second test examines the dependence of power spectrum density upon the magnetic field angle (i.e., the angle between the mean magnetic field and the radial direction), a relationship which is expected to be in opposite directions for the slab and 2D components. Both tests support the presence of a dominant (approximately 85 percent by energy) 2D component in solar wind magnetic turbulence.
Dominant 2D magnetic turbulence in the solar wind
Bieber, John W.; Wanner, Wolfgang; Matthaeus, William H.
1996-07-20
There have been recent suggestions that solar wind magnetic turbulence may be a composite of slab geometry (wavevectors aligned with the mean magnetic field) and 2D geometry (wavevectors perpendicular to the mean field). We report results of two new tests of this hypothesis using Helios measurements of mid-inertial range magnetic spectra in the solar wind. The first test is based upon a characteristic difference between reduced magnetic power spectra in the two different directions perpendicular to the mean field. Such a difference is expected for 2D geometry but not for slab geometry. The second test examines the dependence of power spectrum density upon the magnetic field angle (i.e., the angle between the mean magnetic field and the radial direction), a relationship which is expected to be in opposite directions for the slab and 2D components. Both tests support the presence of a dominant (≈85% by energy) 2D component in solar wind magnetic turbulence.
ERIC Educational Resources Information Center
van Gog, Tamara; Kester, Liesbeth; Dirkx, Kim; Hoogerheide, Vincent; Boerboom, Joris; Verkoeijen, Peter P. J. L.
2015-01-01
Four experiments investigated whether the testing effect also applies to the acquisition of problem-solving skills from worked examples. Experiment 1 (n = 120) showed no beneficial effects of testing consisting of "isomorphic" problem solving or "example recall" on final test performance, which consisted of isomorphic problem…
The 2D large deformation analysis using Daubechies wavelet
NASA Astrophysics Data System (ADS)
Liu, Yanan; Qin, Fei; Liu, Yinghua; Cen, Zhangzhi
2010-01-01
In this paper, Daubechies (DB) wavelet is used for the solution of 2D large deformation problems. Because the DB wavelet scaling functions are directly used as basis functions, no meshes are needed in the function approximation. Using the DB wavelet, the solution formulations based on the total Lagrangian approach for two-dimensional large deformation problems are established. Due to the lack of Kronecker delta properties in wavelet scaling functions, Lagrange multipliers are used for the imposition of boundary conditions. Numerical examples of 2D large deformation problems illustrate that this method is effective and stable.
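When the basis lacks the Kronecker-delta property, a boundary value cannot be imposed by simply fixing a nodal coefficient; instead the constraint is appended via Lagrange multipliers, yielding a saddle-point (KKT) system. A toy sketch (the matrices below are illustrative stand-ins, not the paper's large-deformation operators):

```python
import numpy as np

# Essential boundary condition by Lagrange multipliers: minimize the
# discrete energy 0.5*u.K.u - f.u subject to B u = g, by solving the
# saddle-point system  [[K, B^T], [B, 0]] [u; lam] = [f; g].
# K, f, B, g are small illustrative stand-ins.
K = np.array([[4.0, -1.0], [-1.0, 3.0]])   # "stiffness" matrix (SPD)
f = np.array([1.0, 2.0])                   # load vector
B = np.array([[1.0, 0.0]])                 # constraint: first dof fixed ...
g = np.array([0.0])                        # ... to the value 0

n, m = K.shape[0], B.shape[0]
kkt = np.block([[K, B.T], [B, np.zeros((m, m))]])
rhs = np.concatenate([f, g])
sol = np.linalg.solve(kkt, rhs)
u, lam = sol[:n], sol[n:]                  # u satisfies B u = g exactly
```

The multiplier `lam` can be read as the reaction force needed to hold the constrained degree of freedom at its prescribed value.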
Usability Testing Finds Problems for Novice Users of Pediatric Portals
Britto, Maria T.; Jimison, Holly B.; Munafo, Jennifer Knopf; Wissman, Jennifer; Rogers, Michelle L.; Hersh, William
2009-01-01
Objective Patient portals may improve pediatric chronic disease outcomes, but few have been rigorously evaluated for usability by parents. Using scenario-based testing with think-aloud protocols, we evaluated the usability of portals for parents of children with cystic fibrosis, diabetes or arthritis. Design Sixteen parents used a prototype and test data to complete 14 tasks followed by a validated satisfaction questionnaire. Three iterations of the prototype were used. Measurements During the usability testing, we measured the time it took participants to complete or give up on each task. Sessions were videotaped and content-analyzed for common themes. Following testing, participants completed the Computer Usability Satisfaction Questionnaire which measured their opinions on the efficiency of the system, its ease of use, and the likability of the system interface. A 7-point Likert scale was used, with seven indicating the highest possible satisfaction. Results Mean task completion times ranged from 73 (± 61) seconds to locate a document to 431 (± 286) seconds to graph laboratory results. Tasks such as graphing, location of data, requesting access, and data interpretation were challenging. Satisfaction was greatest for interface pleasantness (5.9 ± 0.7) and likeability (5.8 ± 0.6) and lowest for error messages (2.3 ± 1.2) and clarity of information (4.2 ± 1.4). Overall mean satisfaction scores improved between iteration one and three. Conclusions Despite parental involvement and prior heuristic testing, scenario-based testing demonstrated difficulties in navigation, medical language complexity, error recovery, and provider-based organizational schema. While such usability testing can be expensive, the current study demonstrates that it can assist in making healthcare system interfaces for laypersons more user-friendly and potentially more functional for patients and their families. PMID:19567793
Crash test for the Copenhagen problem with oblateness
NASA Astrophysics Data System (ADS)
Zotos, Euaggelos E.
2015-05-01
The case of the planar circular restricted three-body problem where one of the two primaries is an oblate spheroid is investigated. We conduct a thorough numerical analysis on the phase space mixing by classifying initial conditions of orbits and distinguishing between three types of motion: (i) bounded, (ii) escape and (iii) collisional. The presented outcomes reveal the high complexity of this dynamical system. Furthermore, our numerical analysis shows a strong dependence of the properties of the considered escape basins with the total orbital energy, with a remarkable presence of fractal basin boundaries along all the escape regimes. Interpreting the collisional motion as leaking in the phase space we related our results to both chaotic scattering and the theory of leaking Hamiltonian systems. We also determined the escape and collisional basins and computed the corresponding escape/crash times. The highly fractal basin boundaries observed are related with high sensitivity to initial conditions thus implying an uncertainty between escape solutions which evolve to different regions of the phase space. We hope our contribution to be useful for a further understanding of the escape and crash mechanism of orbits in this version of the restricted three-body problem.
Problem-Solving Test: Real-Time Polymerase Chain Reaction
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2009-01-01
Terms to be familiar with before you start to solve the test: polymerase chain reaction, DNA amplification, electrophoresis, breast cancer, "HER2" gene, genomic DNA, "in vitro" DNA synthesis, template, primer, Taq polymerase, 5′→3′ elongation activity, 5′→3′ exonuclease activity, deoxyribonucleoside…
Problem-Solving Test: Submitochondrial Localization of Proteins
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2011-01-01
Mitochondria are surrounded by two membranes (outer and inner mitochondrial membrane) that separate two mitochondrial compartments (intermembrane space and matrix). Hundreds of proteins are distributed among these submitochondrial components. A simple biochemical/immunological procedure is described in this test to determine the localization of…
Problem-Solving Test: The Mechanism of Protein Synthesis
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2009-01-01
Terms to be familiar with before you start to solve the test: protein synthesis, ribosomes, amino acids, peptides, peptide bond, polypeptide chain, N- and C-terminus, hemoglobin, α- and β-globin chains, radioactive labeling, [³H]- and [¹⁴C]leucine, cytosol, differential centrifugation, density…
Language Testing in the Military: Problems, Politics and Progress
ERIC Educational Resources Information Center
Green, Rita; Wall, Dianne
2005-01-01
There appears to be little literature available -- either descriptive or research-related -- on language testing in the military. This form of specific purposes assessment affects both military personnel and civilians working within the military structure in terms of posting, promotion and remuneration, and it could be argued that it has serious…
[Problems of lung function testing in the laboratory].
Tojo, Naoko
2006-08-01
Spirometry is indispensable for the screening test of general respiratory function, and measurements of lung volume and diffusing capacity play an important role in the assessment of disease severity, functional disability, disease activity and response to treatment. Pulmonary function testing requires cooperation between the subjects and the examiner, and the results obtained depend on technical as well as personal factors. In order to diminish the variability of results and improve measurement accuracy, the Japan Respiratory Society published the first guidelines on the standardization of spirometry and diffusing capacity for both technical and clinical staff in 2004. It is therefore essential to distribute the guidelines to both laboratory personnel and general physicians. Furthermore, training workshops are mandatory to improve their understanding of the basics of lung function testing. Recently, there has been increasing interest in noninvasive methods of lung function testing without requiring the patient's cooperation during spontaneous breathing. Three alternative techniques, i.e. the negative expiratory pressure (NEP) method to detect expiratory flow limitation, impulse oscillation system (IOS) to measure respiratory system resistance (Rrs) and reactance (Xrs), and interruption resistance (Rint) to measure respiratory resistance have been introduced. Further study is required to determine the advantage of these methods. PMID:16989403
Common Problems of Mobile Applications for Foreign Language Testing
ERIC Educational Resources Information Center
Garcia Laborda, Jesus; Magal-Royo, Teresa; Lopez, Jose Luis Gimenez
2011-01-01
As the use of mobile learning applications has become common around the world, new concerns have appeared regarding the classroom, human interaction in software engineering, and ergonomics. New tests of foreign languages for a range of purposes have also become increasingly common. However, studies interrelating language tests…
Differential Validity: A Problem with Tests or Criteria?
ERIC Educational Resources Information Center
Hollmann, Thomas D.
The evidence used in condemning a test as racially biased is usually a validity coefficient for one racial group that is significantly different from that of another racial group. However, both variables in the calculation of a validity coefficient should be examined to determine where the bias lies. A study was conducted to investigate the…
Problem-Solving Test: Expression Cloning of the Erythropoietin Receptor
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2008-01-01
Terms to be familiar with before you start to solve the test: cytokines, cytokine receptors, cDNA library, cDNA synthesis, poly(A)[superscript +] RNA, primer, template, reverse transcriptase, restriction endonucleases, cohesive ends, expression vector, promoter, Shine-Dalgarno sequence, poly(A) signal, DNA helicase, DNA ligase, topoisomerases,…
A new inversion method for (T2, D) 2D NMR logging and fluid typing
NASA Astrophysics Data System (ADS)
Tan, Maojin; Zou, Youlong; Zhou, Cancan
2013-02-01
One-dimensional nuclear magnetic resonance (1D NMR) logging technology has some significant limitations in fluid typing. However, not only can two-dimensional nuclear magnetic resonance (2D NMR) provide accurate porosity parameters, but it can also identify fluids more accurately than 1D NMR. In this paper, based on the relaxation mechanism of (T2, D) 2D NMR in a gradient magnetic field, a hybrid inversion method that combines least-squares-based QR decomposition (LSQR) and truncated singular value decomposition (TSVD) is examined in the 2D NMR inversion of various fluid models. The forward modeling and inversion tests are performed in detail with different acquisition parameters, such as magnetic field gradients (G) and echo spacing (TE) groups. The simulated results are discussed in detail: the influence of the above-mentioned observation parameters on the inversion accuracy is investigated, and the observation parameters in multi-TE activation are optimized. Furthermore, the hybrid inversion can be applied to quantitatively determine fluid saturation. To study the effect of noise level on the hybrid method and the inversion results, numerical simulation experiments are performed at different signal-to-noise ratios (SNRs), and the effect of SNR on fluid typing is discussed and analyzed in detail for three fluid models.
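A minimal numpy sketch of the truncated-SVD (TSVD) half of such a hybrid inversion. The kernel here is a small synthetic Vandermonde matrix, not the paper's (T2, D) relaxation kernel, and the matrix sizes and truncation level k are illustrative assumptions:

```python
import numpy as np

def tsvd_solve(K, y, k):
    """Solve the ill-conditioned linear system K x ~ y by truncated SVD,
    keeping only the k largest singular values to regularize the inversion."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]          # discard small singular values entirely
    return Vt.T @ (s_inv * (U.T @ y))

# Toy forward model: a mildly ill-conditioned kernel and noise-free data.
K = np.vander(np.linspace(0.1, 1.0, 8), 6, increasing=True)
x_true = np.linspace(0.2, 1.2, 6)
y = K @ x_true                       # synthetic "echo" data
x_hat = tsvd_solve(K, y, k=6)        # full rank here; smaller k trades bias for noise stability
```

With noisy data one would lower k (or switch to LSQR with early stopping, the other half of the hybrid) to suppress the amplification of noise by the small singular values.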
Crash test for the restricted three-body problem.
Nagler, Jan
2005-02-01
The restricted three-body problem serves to investigate the chaotic behavior of a small body under the gravitational influence of two heavy primary bodies. We analyze numerically the phase space mixing of bounded motion, escape, and crash in this simple model of (chaotic) celestial mechanics. The presented extensive numerical analysis reveals a high degree of complexity. We extend the recently presented findings for the Copenhagen case of equal main masses to the general case of different primary body masses. Collisions of the small body onto the primaries are comparatively frequent, and their probability displays a scale-free dependence on the size of the primaries as shown for the Copenhagen case. Interpreting the crash as leakage in phase space, the results are related to both chaotic scattering and the theory of leaking Hamiltonian systems. PMID:15783407
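As a sketch of the kind of computation involved, the planar circular restricted three-body problem can be integrated in the rotating frame and a "crash" flagged when the small body enters a primary's radius. The mass ratio, initial state, and primary radius below are illustrative assumptions; conservation of the Jacobi constant serves as a consistency check on the integrator:

```python
import numpy as np

MU = 0.01  # illustrative mass ratio m2/(m1+m2)

def deriv(state, mu=MU):
    """Rotating-frame equations of motion of the planar CRTBP."""
    x, y, vx, vy = state
    r1 = np.hypot(x + mu, y)         # distance to the heavier primary
    r2 = np.hypot(x - 1 + mu, y)     # distance to the lighter primary
    ax = 2*vy + x - (1 - mu)*(x + mu)/r1**3 - mu*(x - 1 + mu)/r2**3
    ay = -2*vx + y - (1 - mu)*y/r1**3 - mu*y/r2**3
    return np.array([vx, vy, ax, ay])

def jacobi(state, mu=MU):
    """Jacobi constant C = x^2 + y^2 + 2(1-mu)/r1 + 2 mu/r2 - v^2."""
    x, y, vx, vy = state
    r1 = np.hypot(x + mu, y)
    r2 = np.hypot(x - 1 + mu, y)
    return x**2 + y**2 + 2*(1 - mu)/r1 + 2*mu/r2 - vx**2 - vy**2

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(state + 0.5*dt*k1)
    k3 = deriv(state + 0.5*dt*k2)
    k4 = deriv(state + dt*k3)
    return state + dt/6*(k1 + 2*k2 + 2*k3 + k4)

# Near-circular orbit about the heavier primary, well away from both singularities.
state = np.array([0.49, 0.0, 0.0, 0.907])
c0 = jacobi(state)
crashed = False
for _ in range(2000):                                  # 2 time units, dt = 1e-3
    state = rk4_step(state, 1e-3)
    if np.hypot(state[0] - 1 + MU, state[1]) < 0.01:   # illustrative primary radius
        crashed = True
        break
drift = abs(jacobi(state) - c0)
```

Sweeping the initial conditions on a grid and recording bounded / escape / crash outcomes reproduces the kind of basin maps the abstract describes.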
Development and Implementation of Radiation-Hydrodynamics Verification Test Problems
Marcath, Matthew J.; Wang, Matthew Y.; Ramsey, Scott D.
2012-08-22
Analytic solutions to the radiation-hydrodynamic equations are useful for verifying any large-scale numerical simulation software that solves the same set of equations. The one-dimensional, spherically symmetric Coggeshall No. 9 and No. 11 analytic solutions, cell-averaged over a uniform grid, have been developed to analyze the corresponding solutions from the Los Alamos National Laboratory Eulerian Applications Project radiation-hydrodynamics code xRAGE. These Coggeshall solutions have been shown to be independent of heat conduction, providing a unique opportunity for comparison with xRAGE solutions with and without the heat conduction module. Solution convergence was analyzed based on radial step size. Since no shocks are involved in either problem and the solutions are smooth, second-order convergence was expected for both cases. The global L1 errors were used to estimate the convergence rates with and without the heat conduction module implemented.
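The convergence-rate estimate described here follows the standard two-grid formula p = log(E_i/E_{i+1}) / log(h_i/h_{i+1}). A small sketch with synthetic second-order errors (the error values are illustrative, not xRAGE data):

```python
import numpy as np

def observed_order(h, err):
    """Estimate the convergence order p from successive (step size, L1 error)
    pairs via p = log(E_i/E_{i+1}) / log(h_i/h_{i+1})."""
    h, err = np.asarray(h, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# Synthetic errors behaving like E = C * h^2, i.e. the second-order
# convergence expected for a smooth, shock-free solution.
h = np.array([0.1, 0.05, 0.025])
err = 3.0 * h**2
p = observed_order(h, err)
```

In practice the estimated order approaches 2 only asymptotically as h shrinks, which is why the study refines over several radial step sizes.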
[Problem-solving in immunohematology: direct compatibility laboratory test].
Mannessier, L; Roubinet, F; Chiaroni, J
2001-12-01
Cross-matching between the serum of a patient and the red blood cells to be transfused is most important for the prevention of hemolytic transfusion reactions in allo-immunized patients or in newborns found positive with the direct antiglobulin test. Cross-matching is a time-consuming and complex laboratory test. In order to obtain valid results, it is necessary to abide by the technical rules detailed in this article. The choice of the blood units to be cross-matched depends on the patient's clinical history and on the specificity of the anti-erythrocyte antibodies present in the serum. The identification and management of the most frequent difficulties encountered with the cross-match technique are also discussed. PMID:11802611
Significance testing of rules in rule-based models of human problem solving
NASA Technical Reports Server (NTRS)
Lewis, C. M.; Hammer, J. M.
1986-01-01
Rule-based models of human problem solving have typically not been tested for statistical significance. Three methods of testing rules - analysis of variance, randomization, and contingency tables - are presented. Advantages and disadvantages of the methods are also described.
NASA Technical Reports Server (NTRS)
Scharton, Terry D.
1995-01-01
The intent of this paper is to make a case for developing and conducting vibration tests which are both realistic and practical (a question of tailoring versus standards). Tests are essential for finding things overlooked in the analyses. The best test is often the most realistic test which can be conducted within the cost and budget constraints. Some standards are essential, but the author believes more in the individual's ingenuity to solve a specific problem than in the application of standards which reduce problems (and technology) to their lowest common denominator. Force limited vibration tests and base-drive modal tests are two examples of realistic, but practical testing approaches. Since both of these approaches are relatively new, a number of interesting research problems exist, and these are emphasized herein.
Proof test of the computer program BUCKY for plasticity problems
NASA Technical Reports Server (NTRS)
Smith, James P.
1994-01-01
A theoretical equation describing the elastic-plastic deformation of a cantilever beam subject to a constant pressure is developed. The theoretical result is compared numerically to the computer program BUCKY for the case of an elastic-perfectly plastic specimen. It is shown that the theoretical and numerical results compare favorably in the plastic range. Comparisons are made to another research code to further validate the BUCKY results. This paper serves as a quality test for the computer program BUCKY developed at NASA Johnson Space Center.
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that the processing of ISAR imaging can be denoted mathematically as a problem of 2D sparse decomposition. Based on CS, we propose a novel measuring strategy for ISAR imaging radar and utilize random sub-sampling in both range and azimuth dimensions, which reduces the amount of sampling data tremendously. In order to handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into 1D via the Kronecker product, which sharply increases the dictionary size and computational cost. In this paper, we introduce the 2D-SL0 algorithm into the reconstruction of imaging. It is proved that 2D-SL0 can achieve equivalent results to other 1D reconstruction methods, but the computational complexity and memory usage are reduced significantly. Moreover, simulation results are presented that demonstrate the effectiveness and feasibility of our method.
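The Kronecker-product reformulation mentioned above rests on the identity vec(A X B^T) = (B ⊗ A) vec(X) for a separable 2D measurement. A quick numpy check with illustrative matrix sizes shows the equivalence, and why the stacked dictionary B ⊗ A grows so quickly:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))   # range-dimension sensing matrix (illustrative)
B = rng.standard_normal((5, 7))   # azimuth-dimension sensing matrix (illustrative)
X = rng.standard_normal((6, 7))   # 2D scene

# 2D form: operate on the scene matrix directly.
Y2d = A @ X @ B.T

# Stacked 1D form: one big Kronecker dictionary acting on vec(X)
# (column-major flattening matches the vec convention).
y1d = np.kron(B, A) @ X.flatten(order="F")
```

The two forms give identical measurements, but the Kronecker dictionary is (4*5) x (6*7) instead of two small factors, which is exactly the memory and compute blow-up that 2D algorithms such as 2D-SL0 avoid.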
ERIC Educational Resources Information Center
Toronto Board of Education (Ontario). Research Dept.
In addition to a review of the Differential Aptitude Tests (DAT), a number of other aptitude tests are examined. They are: (1) Flanagan Aptitude Classification Tests, (2) Holzinger-Crowder Uni-Factor Tests, (3) Employee Aptitude Survey, (4) Revised Minnesota Paper Form Board Test, (5) Minnesota Clerical Test, and (6) Turse Clerical Aptitudes Test.…
Molecular testing in oncology: problems, pitfalls and progress.
O'Brien, Cathal P; Taylor, Sarah E; O'Leary, John J; Finn, Stephen P
2014-03-01
Recent advances in the understanding of the molecular basis of cancer and the development of molecular diagnostics based on this knowledge have done much to progress the fields of oncology and pathology. Technological developments such as Next Generation Sequencing (NGS) and multiplex assays have made feasible the widespread adoption of molecular diagnostics for clinical use. While these developments and advances carry much promise, there are pitfalls to implementing this testing. Choosing appropriate biomarkers is a vital first step for clinical use, and being able to understand the complex relationship between predictive and prognostic biomarkers is a crucial component of this. Testing for standard-of-care biomarkers is not straightforward: one must choose carefully between clinical trial assays, assays that analyse the same biological phenomenon, or surrogate biomarkers. Sample heterogeneity and population-specific differences in assays may skew results and must be controlled for at the assay design stage. At a technical level, NGS has the potential to revolutionise laboratory practice and approaches to cancer treatment. However, use of this technology requires careful planning and implementation if one is to avoid technical and ethical quagmires. Finally, with FDA regulation of companion diagnostics, one may be limited to therapy-specific assays. PMID:24472389
Sex Differences and Self-Reported Attention Problems During Baseline Concussion Testing.
Brooks, Brian L; Iverson, Grant L; Atkins, Joseph E; Zafonte, Ross; Berkner, Paul D
2016-01-01
Amateur athletic programs often use computerized cognitive testing as part of their concussion management programs. There is evidence that athletes with preexisting attention problems will have worse cognitive performance and more symptoms at baseline testing. The purpose of this study was to examine whether attention problems affect assessments differently for male and female athletes. Participants were drawn from a database that included 6,840 adolescents from Maine who completed Immediate Postconcussion Assessment and Cognitive Testing (ImPACT) at baseline (primary outcome measure). The final sample included 249 boys and 100 girls with self-reported attention problems. Each participant was individually matched for sex, age, number of past concussions, and sport to a control participant (249 boys, 100 girls). Boys with attention problems had worse reaction time than boys without attention problems. Girls with attention problems had worse visual-motor speed than girls without attention problems. Boys with attention problems reported more total symptoms, including more cognitive-sensory and sleep-arousal symptoms, compared with boys without attention problems. Girls with attention problems reported more cognitive-sensory, sleep-arousal, and affective symptoms than girls without attention problems. When considering the assessment, management, and outcome from concussions in adolescent athletes, it is important to consider both sex and preinjury attention problems regarding cognitive test results and symptom reporting. PMID:25923339
Assessing corrosion problems in photovoltaic cells via electrochemical stress testing
NASA Technical Reports Server (NTRS)
Shalaby, H.
1985-01-01
A series of accelerated electrochemical experiments to study the degradation properties of polyvinylbutyral-encapsulated silicon solar cells has been carried out. The cells' electrical performance with silk screen-silver and nickel-solder contacts was evaluated. The degradation mechanism was shown to be electrochemical corrosion of the cell contacts; metallization elements migrate into the encapsulating material, which acts as an ionic conducting medium. The corrosion products form a conductive path which results in a gradual loss of the insulation characteristics of the encapsulant. The precipitation of corrosion products in the encapsulant also contributes to its discoloration which in turn leads to a reduction in its transparency and the consequent optical loss. Delamination of the encapsulating layers could be attributed to electrochemical gas evolution reactions. The usefulness of the testing technique in qualitatively establishing a reliability difference between metallizations and antireflection coating types is demonstrated.
Extension and application of the Preissmann slot model to 2D transient mixed flows
NASA Astrophysics Data System (ADS)
Maranzoni, Andrea; Dazzi, Susanna; Aureli, Francesca; Mignosa, Paolo
2015-08-01
This paper presents an extension of the Preissmann slot concept for the modeling of highly transient two-dimensional (2D) mixed flows. The classic conservative formulation of the 2D shallow water equations for free surface flows is adapted by assuming that two fictitious vertical slots, aligned along the two Cartesian plane directions and normally intersecting, are added on the ceiling of each integration element. Accordingly, transitions between free surface and pressurized flow can be handled in a natural and straightforward way by using the same set of governing equations. The opportunity of coupling free surface and pressurized flows is actually useful not only in one-dimensional (1D) problems concerning sewer systems but also for modeling 2D flooding phenomena in which the pressurization of bridges, culverts, or other crossing hydraulic structures can be expected. Numerical simulations are performed by using a shock-capturing MUSCL-Hancock finite volume scheme combined with the FORCE (First-Order Centred) solver for the evaluation of the numerical fluxes. The validation of the mathematical model is accomplished on the basis of both exact solutions of 1D discontinuous initial value problems and reference radial solutions of idealized test cases with cylindrical symmetry. Furthermore, the capability of the model to deal with practical field-scale applications is assessed by simulating the transit of a bore under an arch bridge. Numerical results show that the proposed model is suitable for the prediction of highly transient 2D mixed flows.
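A one-line sketch of the classic slot sizing underlying the Preissmann concept: the fictitious slot width T_s is chosen so that the free-surface gravity wave celerity c = sqrt(g A / T_s) matches the desired pressure-wave speed of the pressurized conduit. The conduit area and target celerity below are illustrative values:

```python
import math

def slot_width(area, celerity, g=9.81):
    """Preissmann slot width T_s such that sqrt(g * area / T_s) equals the
    target pressurized-flow wave celerity (SI units)."""
    return g * area / celerity**2

Ts = slot_width(area=1.0, celerity=100.0)   # 1 m^2 conduit, 100 m/s wave speed
c_back = math.sqrt(9.81 * 1.0 / Ts)         # round-trip consistency check
```

The resulting slot is tiny compared with the conduit, which is what lets the same free-surface equations (here extended to two intersecting slots for the 2D case) mimic pressurized flow.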
An Approach for Addressing the Multiple Testing Problem in Social Policy Impact Evaluations
ERIC Educational Resources Information Center
Schochet, Peter Z.
2009-01-01
In social policy evaluations, the multiple testing problem occurs due to the many hypothesis tests that are typically conducted across multiple outcomes and subgroups, which can lead to spurious impact findings. This article discusses a framework for addressing this problem that balances Types I and II errors. The framework involves specifying…
ERIC Educational Resources Information Center
Hill, Kennedy T.
1983-01-01
Reviews a 20-year program of research on motivation and test performance, concluding that test anxiety and test-taking skill deficits are distorting factors in efforts to test student aptitude, achievement, and competency. (FL)
ERIC Educational Resources Information Center
Keating, Xiaofen Deng
2003-01-01
This paper aims to examine current nationwide youth fitness test programs, address problems embedded in the programs, and possible solutions. The current Fitnessgram, President's Challenge, and YMCA youth fitness test programs were selected to represent nationwide youth fitness test programs. Sponsors of the nationwide youth fitness test programs…
Formal analysis, hardness, and algorithms for extracting internal structure of test-based problems.
Jaśkowski, Wojciech; Krawiec, Krzysztof
2011-01-01
Problems in which some elementary entities interact with each other are common in computational intelligence. This scenario, typical for coevolving artificial life agents, learning strategies for games, and machine learning from examples, can be formalized as a test-based problem and conveniently embedded in the common conceptual framework of coevolution. In test-based problems, candidate solutions are evaluated on a number of test cases (agents, opponents, examples). It has been recently shown that every test of such problem can be regarded as a separate objective, and the whole problem as multi-objective optimization. Research on reducing the number of such objectives while preserving the relations between candidate solutions and tests led to the notions of underlying objectives and internal problem structure, which can be formalized as a coordinate system that spatially arranges candidate solutions and tests. The coordinate system that spans the minimal number of axes determines the so-called dimension of a problem and, being an inherent property of every problem, is of particular interest. In this study, we investigate in-depth the formalism of a coordinate system and its properties, relate them to properties of partially ordered sets, and design an exact algorithm for finding a minimal coordinate system. We also prove that this problem is NP-hard and come up with a heuristic which is superior to the best algorithm proposed so far. Finally, we apply the algorithms to three abstract problems and demonstrate that the dimension of the problem is typically much lower than the number of tests, and for some problems converges to the intrinsic parameter of the problem--its a priori dimension. PMID:21815770
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
A faster method for 3D/2D medical image registration--a simulation study.
Birkfellner, Wolfgang; Wirth, Joachim; Burgstaller, Wolfgang; Baumann, Bernard; Staedele, Harald; Hammer, Beat; Gellrich, Niels Claudius; Jacob, Augustinus Ludwig; Regazzoni, Pietro; Messmer, Peter
2003-08-21
3D/2D patient-to-computed-tomography (CT) registration is a method to determine a transformation that maps two coordinate systems by comparing a projection image rendered from CT to a real projection image. Iterative variation of the CT's position between rendering steps finally leads to exact registration. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 3D/2D registration is the fact that finding a registration includes solving a minimization problem in six degrees of freedom (dof) in motion. This results in considerable time requirements since for each iteration step at least one volume rendering has to be computed. We show that by choosing an appropriate world coordinate system and by applying a 2D/2D registration method in each iteration step, the number of iterations can be grossly reduced from n^6 to n^5. Here, n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations necessary to at least 1/3 of its original value. The method was implemented and extensively tested on simulated x-ray images of a tibia, a pelvis and a skull base. When using one projective image and a discrete full-parameter-space search for solving the optimization problem, average accuracy was found to be 1.0 +/- 0.6 degrees and 4.1 +/- 1.9 mm for a registration in six parameters, and 1.0 +/- 0.7 degrees and 4.2 +/- 1.6 mm when using the 5 + 1 dof method described in this paper. Time requirements were reduced by a factor of 3.1. We conclude that this hardware-independent optimization of 3D/2D registration is a step towards increasing the acceptance of this promising method for a wide number of clinical applications. PMID:12974581
NKG2D ligands as therapeutic targets
Spear, Paul; Wu, Ming-Ru; Sentman, Marie-Louise; Sentman, Charles L.
2013-01-01
The Natural Killer Group 2D (NKG2D) receptor plays an important role in protecting the host from infections and cancer. By recognizing ligands induced on infected or tumor cells, NKG2D modulates lymphocyte activation and promotes immunity to eliminate ligand-expressing cells. Because these ligands are not widely expressed on healthy adult tissue, NKG2D ligands may present a useful target for immunotherapeutic approaches in cancer. Novel therapies targeting NKG2D ligands for the treatment of cancer have shown preclinical success and are poised to enter into clinical trials. In this review, the NKG2D receptor and its ligands are discussed in the context of cancer, infection, and autoimmunity. In addition, therapies targeting NKG2D ligands in cancer are also reviewed. PMID:23833565
Canard configured aircraft with 2-D nozzle
NASA Technical Reports Server (NTRS)
Child, R. D.; Henderson, W. P.
1978-01-01
A closely-coupled canard fighter with vectorable two-dimensional nozzle was designed for enhanced transonic maneuvering. The HiMAT maneuver goal of a sustained 8g turn at a free-stream Mach number of 0.9 and 30,000 feet was the primary design consideration. The aerodynamic design process was initiated with a linear theory optimization minimizing the zero percent suction drag including jet effects and refined with three-dimensional nonlinear potential flow techniques. Allowances were made for mutual interference and viscous effects. The design process to arrive at the resultant configuration is described, and the design of a powered 2-D nozzle model to be tested in the LRC 16-foot Propulsion Wind Tunnel is shown.
2D Electrostatic Actuation of Microshutter Arrays
NASA Technical Reports Server (NTRS)
Burns, Devin E.; Oh, Lance H.; Li, Mary J.; Jones, Justin S.; Kelly, Daniel P.; Zheng, Yun; Kutyrev, Alexander S.; Moseley, Samuel H.
2015-01-01
An electrostatically actuated microshutter array consisting of rotational microshutters (shutters that rotate about a torsion bar) were designed and fabricated through the use of models and experiments. Design iterations focused on minimizing the torsional stiffness of the microshutters, while maintaining their structural integrity. Mechanical and electromechanical test systems were constructed to measure the static and dynamic behavior of the microshutters. The torsional stiffness was reduced by a factor of four over initial designs without sacrificing durability. Analysis of the resonant behavior of the microshutter arrays demonstrates that the first resonant mode is a torsional mode occurring around 3000 Hz. At low vacuum pressures, this resonant mode can be used to significantly reduce the drive voltage necessary for actuation requiring as little as 25V. 2D electrostatic latching and addressing was demonstrated using both a resonant and pulsed addressing scheme.
Beta/gamma test problems for ITS. [Integrated Tiger Series (ITS)
Mei, G.T.
1993-01-01
The Integrated Tiger Series of Coupled Electron/Photon Monte Carlo Transport Codes (ITS 3.0, PC Version) was used at Oak Ridge National Laboratory (ORNL) to compare with and extend the experimental findings of the beta/gamma response of selected health physics instruments. In order to assure that ITS gives correct results, several beta/gamma problems have been tested. ITS was used to simulate these problems numerically, and results for each were compared to the problem's experimental or analytical results. ITS successfully predicted the experimental or analytical results of all tested problems within the statistical uncertainty inherent in the Monte Carlo method.
Some Problems of Computer-Aided Testing and "Interview-Like Tests"
ERIC Educational Resources Information Center
Smoline, D.V.
2008-01-01
Computer-based testing--is an effective teacher's tool, intended to optimize course goals and assessment techniques in a comparatively short time. However, this is accomplished only if we deal with high-quality tests. It is strange, but despite the 100-year history of Testing Theory (see, Anastasi, A., Urbina, S. (1997). Psychological testing.…
49 CFR 40.267 - What problems always cause an alcohol test to be cancelled?
Code of Federal Regulations, 2013 CFR
2013-10-01
In the case of a screening test conducted on a saliva ASD or a breath tube ASD: (1) The STT or BAT reads… As an employer, a BAT, or an STT, you must cancel…
On Regularity Criteria for the 2D Generalized MHD System
NASA Astrophysics Data System (ADS)
Jiang, Zaihong; Wang, Yanan; Zhou, Yong
2016-06-01
This paper deals with the problem of regularity criteria for the 2D generalized MHD system with fractional dissipative terms -Λ^{2α}u for the velocity field and -Λ^{2β}b for the magnetic field, respectively. Various regularity criteria are established to guarantee smoothness of solutions. It turns out that our regularity criteria imply previous global existence results naturally.
Dispersionless 2D Toda hierarchy, Hurwitz numbers and Riemann theorem
NASA Astrophysics Data System (ADS)
Natanzon, Sergey M.
2016-01-01
We describe all formal symmetric solutions of the dispersionless 2D Toda hierarchy. We then use this classification to solve two classical problems: 1) calculating the conformal mapping of an arbitrary simply connected domain onto the standard disk; 2) calculating 2-Hurwitz numbers of genus 0.
2D signature for detection and identification of drugs
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Shen, Jingling; Zhang, Cunlin; Zhou, Qingli; Shi, Yulei
2011-06-01
The method of spectral dynamics analysis (SDA method) is used to obtain the 2D THz signature of drugs. This signature is used for the detection and identification of drugs with similar Fourier spectra from the transmitted THz signal. We discuss the efficiency of the SDA method for the identification of pure methamphetamine (MA), methylenedioxyamphetamine (MDA), 3,4-methylenedioxymethamphetamine (MDMA) and ketamine.
NASA Technical Reports Server (NTRS)
Salmon, R. F.; Imbrogno, S.
1976-01-01
The importance of measuring accurate air and fuel flows as well as the importance of obtaining accurate exhaust pollutant measurements were emphasized. Some of the problems and the corrective actions taken to incorporate fixes and/or modifications were identified.
2d PDE Linear Symmetric Matrix Solver
Energy Science and Technology Software Center (ESTSC)
1983-10-01
ICCG2 (Incomplete Cholesky factorized Conjugate Gradient algorithm for 2d symmetric problems) was developed to solve a linear symmetric matrix system arising from a 9-point discretization of two-dimensional elliptic and parabolic partial differential equations found in plasma physics applications, such as resistive MHD, spatial diffusive transport, and phase space transport (Fokker-Planck equation) problems. These problems share the common feature of being stiff and requiring implicit solution techniques. When these parabolic or elliptic PDEs are discretized with finite-difference or finite-element methods, the resulting matrix system is frequently of block-tridiagonal form. To use ICCG2, the discretization of the two-dimensional partial differential equation and its boundary conditions must result in a block-tridiagonal supermatrix composed of elementary tridiagonal matrices. The incomplete Cholesky conjugate gradient algorithm is used to solve the linear symmetric matrix equation. Loops are arranged to vectorize on the Cray1 with the CFT compiler, wherever possible. Recursive loops, which cannot be vectorized, are written for optimum scalar speed. For matrices lacking symmetry, ILUCG2 should be used. Similar methods in three dimensions are available in ICCG3 and ILUCG3. A general source containing extensions and macros, which must be processed by a pre-compiler to obtain the standard FORTRAN source, is provided along with the standard FORTRAN source because it is believed to be more readable. The pre-compiler is not included, but pre-compilation may be performed by a text editor as described in the UCRL-88746 Preprint.
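A compact sketch of the preconditioned conjugate gradient iteration at the heart of such solvers, applied to a small 2D model problem assembled from tridiagonal blocks via Kronecker products. For brevity, a Jacobi (diagonal) preconditioner stands in for the incomplete Cholesky factorization, and the 5-point Laplacian is an illustrative stand-in for the code's block-tridiagonal supermatrix:

```python
import numpy as np

def pcg(A, b, m_inv, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient for symmetric positive-definite A;
    m_inv applies the preconditioner inverse (here diagonal) to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Block-tridiagonal 2D Poisson (5-point) supermatrix built from 1D tridiagonal blocks.
n = 10
T = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = np.kron(np.eye(n), T) + np.kron(T, np.eye(n))
b = np.ones(n*n)
diag_inv = 1.0 / np.diag(A)             # Jacobi stand-in for incomplete Cholesky
x = pcg(A, b, lambda v: diag_inv * v)
```

A true IC(0) preconditioner, as in ICCG2, keeps the sparsity pattern of A in the Cholesky factor and typically converges in far fewer iterations than the diagonal stand-in used here.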
2d PDE Linear Asymmetric Matrix Solver
Energy Science and Technology Software Center (ESTSC)
1983-10-01
ILUCG2 (Incomplete LU factorized Conjugate Gradient algorithm for 2d problems) was developed to solve a linear asymmetric matrix system arising from a 9-point discretization of two-dimensional elliptic and parabolic partial differential equations found in plasma physics applications, such as plasma diffusion, equilibria, and phase space transport (Fokker-Planck equation) problems. These equations share the common feature of being stiff and requiring implicit solution techniques. When these parabolic or elliptic PDEs are discretized with finite-difference or finite-element methods, the resulting matrix system is frequently of block-tridiagonal form. To use ILUCG2, the discretization of the two-dimensional partial differential equation and its boundary conditions must result in a block-tridiagonal supermatrix composed of elementary tridiagonal matrices. A generalization of the incomplete Cholesky conjugate gradient algorithm is used to solve the matrix equation. Loops are arranged to vectorize on the Cray-1 with the CFT compiler, wherever possible. Recursive loops, which cannot be vectorized, are written for optimum scalar speed. For problems having a symmetric matrix, ICCG2 should be used since it runs up to four times faster and uses approximately 30% less storage. Similar methods in three dimensions are available in ICCG3 and ILUCG3. A general source, containing extensions and macros, which must be processed by a pre-compiler to obtain the standard FORTRAN source, is provided along with the standard FORTRAN source because it is believed to be more readable. The pre-compiler is not included, but pre-compilation may be performed by a text editor as described in the UCRL-88746 Preprint.
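The asymmetric counterpart rests on an incomplete LU factorization. The following sketch (again illustrative NumPy, not the ILUCG2 source, and with hypothetical names) computes an ILU(0) factorization of a nonsymmetric block-tridiagonal convection-diffusion matrix and uses it as a preconditioner in a simple residual-correction iteration; ILUCG2 itself uses a conjugate-gradient-type generalization instead.

```python
import numpy as np

def ilu0(A):
    """ILU(0): in-place LU factorization with zero fill-in; L (unit lower)
    and U are stored together, restricted to A's sparsity pattern."""
    n = A.shape[0]
    LU = A.astype(float).copy()
    for i in range(1, n):
        for k in range(i):
            if LU[i, k] != 0.0:
                LU[i, k] /= LU[k, k]
                for j in range(k + 1, n):
                    if LU[i, j] != 0.0:  # update only entries already present
                        LU[i, j] -= LU[i, k] * LU[k, j]
    return LU

def ilu_solve(A, b, LU, tol=1e-10, maxit=500):
    """Residual-correction iteration x += (LU)^-1 (b - A x) with the
    approximate factors applied by two triangular solves."""
    n = len(b)
    L = np.tril(LU, -1) + np.eye(n)
    U = np.triu(LU)
    x = np.zeros(n)
    for _ in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x += np.linalg.solve(U, np.linalg.solve(L, r))
    return x

# Nonsymmetric block-tridiagonal supermatrix: 2D convection-diffusion on a
# 5x5 grid, upwinded convection (strength p) in one direction.
m, p = 5, 0.5
Tc = (2 + p) * np.eye(m) - (1 + p) * np.eye(m, k=-1) - np.eye(m, k=1)
Td = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
A = np.kron(np.eye(m), Tc) + np.kron(Td, np.eye(m))
b = np.ones(m * m)

x = ilu_solve(A, b, ilu0(A))
print(np.linalg.norm(A @ x - b))  # residual norm, small at convergence
```

For a tridiagonal matrix ILU(0) would coincide with the exact LU factorization; the 2D block structure is what introduces fill-in for the incomplete factorization to discard.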
A Test of the Testing Effect: Acquiring Problem-Solving Skills from Worked Examples
ERIC Educational Resources Information Center
van Gog, Tamara; Kester, Liesbeth
2012-01-01
The "testing effect" refers to the finding that after an initial study opportunity, testing is more effective for long-term retention than restudying. The testing effect seems robust and is a finding from the field of cognitive science that has important implications for education. However, it is unclear whether this effect also applies to the…
Prospects and Problems for a National Test: Some Reflections of a Test Author.
ERIC Educational Resources Information Center
Hogan, Thomas P.
Reflections on the proposal for creation and implementation of a national test are presented from the perspective of a test author. The most readily identified characteristic of the proposed national test is the intensity of debate surrounding it. Another easily identified characteristic is the anticipated effect. While proponents expect higher…
Perspectives for spintronics in 2D materials
NASA Astrophysics Data System (ADS)
Han, Wei
2016-03-01
The past decade has been especially creative for spintronics since the (re)discovery of various two-dimensional (2D) materials. Due to their unusual physical characteristics, 2D materials have provided new platforms to probe the interaction of spin with other electronic degrees of freedom, as well as candidates for novel spintronics applications. This review briefly presents the most important recent and ongoing research on spintronics in 2D materials.
Likelihood Methods for Testing Group Problem Solving Models with Censored Data.
ERIC Educational Resources Information Center
Regal, Ronald R.; Larntz, Kinley
1978-01-01
Models relating individual and group problem solving solution times under the condition of limited time (time limit censoring) are presented. Maximum likelihood estimation of parameters and a goodness of fit test are presented. (Author/JKS)
Testing foreign language impact on engineering students' scientific problem-solving performance
NASA Astrophysics Data System (ADS)
Tatzl, Dietmar; Messnarz, Bernd
2013-12-01
This article investigates the influence of English as the examination language on the solution of physics and science problems by non-native speakers in tertiary engineering education. For that purpose, a statistically significant total number of 96 students in four year groups from freshman to senior level participated in a testing experiment in the Degree Programme of Aviation at the FH JOANNEUM University of Applied Sciences, Graz, Austria. Half of each test group were given a set of 12 physics problems described in German, the other half received the same set of problems described in English. It was the goal to test linguistic reading comprehension necessary for scientific problem solving instead of physics knowledge as such. The results imply that written undergraduate English-medium engineering tests and examinations may not require additional examination time or language-specific aids for students who have reached university-entrance proficiency in English as a foreign language.
An inverse design method for 2D airfoil
NASA Astrophysics Data System (ADS)
Liang, Zhi-Yong; Cui, Peng; Zhang, Gen-Bao
2010-03-01
Computational methods for aerodynamic aircraft design are now more widely applied than before, and airfoil design is an active problem within this field. Most related papers discuss the forward problem, but the inverse method is more useful in practical design. In this paper, the inverse design of a 2D airfoil was investigated using a finite element method based on the variational principle. The simulations showed that the method is well suited to this design task.