Test Problem: Tilted Rayleigh-Taylor for 2-D Mixing Studies
Andrews, Malcolm J.; Livescu, Daniel; Youngs, David L.
2012-08-14
reasonable quality photographic data. The photographs in Figure 2 also reveal the appearance of a boundary layer at the left and right walls; this boundary layer has not been included in the test problem as preliminary calculations suggested it had a negligible effect on plume penetration and RT mixing. The significance of this test problem is that, unlike planar RT experiments such as the Rocket-Rig (Youngs, 1984), Linear Electric Motor - LEM (Dimonte, 1990), or the Water Tunnel (Andrews, 1992), the Tilted-Rig is a unique two-dimensional RT mixing experiment that has experimental data and now (in this TP) Direct Numerical Simulation data from Livescu and Wei. The availability of DNS data for the tilted-rig has made this TP viable as it provides detailed results for comparison purposes. The purpose of the test problem is to provide 3D simulation results, validated by comparison with experiment, which can be used for the development and validation of 2D RANS models. When such models are applied to 2D flows, various physics issues are raised such as double counting, combined buoyancy and shear, and 2-D strain, which have not yet been adequately addressed. The current objective of the test problem is to compare key results, which are needed for RANS model validation, obtained from high-Reynolds number DNS, high-resolution ILES or LES with explicit sub-grid-scale models. The experiment is incompressible and so is directly suitable for algorithms that are designed for incompressible flows (e.g. pressure correction algorithms with multi-grid); however, we have extended the TP so that compressible algorithms, run at low Mach number, may also be used if careful consideration is given to initial pressure fields. Thus, this TP serves as a useful tool for incompressible and compressible simulation codes, and mathematical models. 
In the remainder of this TP we provide a detailed specification; the next section provides the underlying assumptions for the TP, fluids, geometry details
2D and 3D Traveling Salesman Problem
ERIC Educational Resources Information Center
Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt
2011-01-01
When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…
Validation and testing of the VAM2D computer code
Kool, J.B.; Wu, Y.S.
1991-10-01
This document describes two modeling studies conducted by HydroGeoLogic, Inc. for the US NRC under contract no. NRC-04089-090, entitled "Validation and Testing of the VAM2D Computer Code." VAM2D is a two-dimensional, variably saturated flow and transport code, with applications for performance assessment of nuclear waste disposal. The computer code itself is documented in a separate NUREG document (NUREG/CR-5352, 1989). The studies presented in this report involve application of the VAM2D code to two diverse subsurface modeling problems. The first involves modeling of infiltration and redistribution of water and solutes in an initially dry, heterogeneous field soil; this application involves detailed modeling over a relatively short, 9-month time period. The second problem pertains to the application of VAM2D to the modeling of a waste disposal facility in a fractured clay, over much larger space and time scales and with particular emphasis on the applicability and reliability of an equivalent porous medium approach for simulating flow and transport in fractured geologic media. Reflecting the separate and distinct nature of the two problems studied, this report is organized in two separate parts. 61 refs., 31 figs., 9 tabs.
2-D or not 2-D, that is the question: A Northern California test
Mayeda, K; Malagnini, L; Phillips, W S; Walter, W R; Dreger, D
2005-06-06
Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. The complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Using the same station and event distribution, we compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ≈0.7 Hz; however, for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only a 1-D coda correction is available, it is still preferable over 2-D direct-wave methods.
On numerical solving a rigid inclusions problem in 2D elasticity
NASA Astrophysics Data System (ADS)
Rudoy, Evgeny
2017-02-01
A 2D elastic problem for a body containing a set of bulk and thin rigid inclusions of arbitrary shapes is considered. It is assumed that rigid inclusions are bonded into elastic matrix. To state the equilibrium problem, a variational approach is used. The problem is formulated as a problem of minimization of the energy functional over the set of admissible displacements. Moreover, it is equivalent to a variational equality which holds for test functions belonging to the subspace of functions with the prescribed rigid displacement structure on the inclusions. We propose a novel algorithm of solving the equilibrium problem. The algorithm is based on reducing the original problem to a system of the Dirichlet and Neumann problems. A numerical examination is carried out to demonstrate the efficiency of the proposed technique.
Theoretical analysis of the 2D thermal cloaking problem
NASA Astrophysics Data System (ADS)
Alekseev, G. V.; Spivak, Yu E.; Yashchenko, E. N.
2017-01-01
Coefficient inverse problems for the model of heat scattering with variable coefficients arising when developing technologies of design of thermal cloaking devices are considered. By the optimization method, these problems are reduced to respective control problems. The material parameters (radial and azimuthal conductivities) of the inhomogeneous anisotropic medium, filling the thermal cloak, play the role of control. The model of heat scattering acts as a functional restriction. A unique solvability of direct heat scattering problem in the Sobolev space is proved and the new estimates of solutions are established. Using these results, the solvability of control problem is proved and the optimality system is derived. Based on analysis of optimality system, the stability estimates of optimal solutions are established and efficient numerical algorithm of solving thermal cloaking problems is proposed.
Applicability extent of 2-D heat equation for numerical analysis of a multiphysics problem
NASA Astrophysics Data System (ADS)
Khawaja, H.
2017-01-01
This work focuses on thermal problems, solvable using the heat equation. The fundamental question being answered here is: what are the limits of the dimensions that will allow a 3-D thermal problem to be accurately modelled using a 2-D heat equation? The presented work solves the 2-D and 3-D heat equations using the explicit finite difference scheme known as Forward-Time Central-Space (FTCS), in MATLAB®. For this study, a cuboidal domain with a square cross-section is assumed. The boundary conditions are set such that there is a constant temperature at its center and outside its boundaries. The 2-D and 3-D heat equations are marched in time to develop a steady-state temperature profile. The method is tested for stability using the Courant-Friedrichs-Lewy (CFL) criterion. The results are compared by varying the thickness of the 3-D domain. The maximum error is calculated, and recommendations are given on the applicability of the 2-D heat equation.
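The FTCS scheme described above can be sketched in a few lines. This is a minimal illustration in pure Python (illustrative grid size and parameters, not the paper's MATLAB® code), with the time step chosen to satisfy the 2-D stability bound α·Δt/h² ≤ 1/4:

```python
def ftcs_2d(n=21, alpha=1.0, t_center=100.0, t_boundary=0.0, steps=2000):
    """March the 2-D heat equation with the explicit FTCS scheme on an
    n x n unit-square grid, holding the center cell and the boundary at
    fixed temperatures as in the abstract's setup."""
    h = 1.0 / (n - 1)
    dt = 0.2 * h * h / alpha      # CFL: need alpha*dt/h^2 <= 1/4; 0.2 is safe
    r = alpha * dt / (h * h)
    c = n // 2
    T = [[t_boundary] * n for _ in range(n)]
    T[c][c] = t_center
    for _ in range(steps):
        Tn = [row[:] for row in T]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                Tn[i][j] = T[i][j] + r * (T[i + 1][j] + T[i - 1][j]
                                          + T[i][j + 1] + T[i][j - 1]
                                          - 4.0 * T[i][j])
        Tn[c][c] = t_center       # re-impose the fixed center temperature
        T = Tn
    return T
```

Running the same kind of driver on a thin 3-D slab and comparing mid-plane profiles would reproduce the paper's error study.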
2D resistivity method in delineating subsurface problems in urban area
NASA Astrophysics Data System (ADS)
Nordiana, M. M.; Saad, Rosli; Teh Saufia, A. H. A.; Azwin, I. N.; Ali, Nisa'; Hidayah, Noer El
2013-05-01
A 2D resistivity survey was carried out to detect a spreading saturated zone and subsurface problems caused by the presence of an underground river beneath a selected urban area in Selangor, Malaysia. Six 2D resistivity survey lines with a minimum 5 m electrode spacing were executed using a pole-dipole array. Boreholes were drilled at multiple locations in the study area and subsequently used to verify the 2D resistivity results. Interpretation of the 2D resistivity data showed a low resistivity value (< 40 ohm-m), which appears to be a zone fully saturated with sandy silt; this could be a factor in the rising water level, because sandy silt is highly permeable in nature. The borehole records support the 2D resistivity results regarding a saturated zone in the survey area, and there is good correlation between the two.
Use of adaptive walls in 2D tests
NASA Technical Reports Server (NTRS)
Archambaud, J. P.; Chevallier, J. P.
1984-01-01
A new method for computing wall effects gives precise answers to several questions arising in adaptive-wall applications: the length of the adapted regions, fairings with the upstream and downstream regions, the effects of residual misadjustments, and reference conditions. The accelerated convergence of the iterative process and the efficient technology developed in the CERT T2 wind tunnel yield the required test conditions in a single run. Samples taken from CAST 7 tests demonstrate the efficiency of the whole process in obtaining significant results, with consideration of the extension to the three-dimensional case.
Hertz-Mindlin problem for arbitrary oblique 2D loading: General solution by memory diagrams
NASA Astrophysics Data System (ADS)
Aleshin, V.; Van Den Abeele, K.
2012-01-01
In this paper we present a new general solution to the fundamental problem of frictional contact of two elastic spheres, also known as the Hertz-Mindlin (HM) problem. The description of spheres in contact is a central topic in contact mechanics. It has become a foundation of many applications, such as the friction of rough surfaces and the mechanics of granular materials and rocks. Moreover, it serves as a theoretical background in modern nonlinear acoustics and elasticity, e.g. seismology and nondestructive testing. However, despite many efforts, a rigorous analytical solution for the general case when arbitrary normal and tangential forces are present is still missing, mainly because the traction distribution within the contact zone is convoluted and hardly tractable, even under relatively simple external action. Here, accepting a number of traditional limitations such as 2D loading and the existence of a functional dependence between normal and tangential forces, we propose an original way of replacing the complex traction distributions by simple graphical counterparts called memory diagrams, and we formulate a procedure that enables initiating and maintaining these memory diagrams following an arbitrary loading history. For each memory diagram, the solution can be expressed by closed-form analytical formulas that we have derived using known techniques suggested by Mindlin, Deresiewicz, and others. So far, to the best of our knowledge, arbitrary loading histories have been treated only numerically. Implementation of the proposed memory diagram method provides an easy-to-use computer-assisted analytical solution with a high level of generality. Examples and results illustrate the variety and richness of effects that can be encountered in a geometrically simple system of two contacting spheres.
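For orientation, the two classical ingredients of the HM problem can be sketched as follows: the Hertz normal force, and the Mindlin tangential force for the simplest case of monotonic tangential loading at constant normal force (the one branch that needs no memory diagram). This is a hedged illustration using the textbook coefficients, not the authors' general solution; all parameter values are made up.

```python
import math

def hertz_normal_force(delta, R, E_star):
    """Classical Hertz result N = (4/3) * E* * sqrt(R) * delta^{3/2}
    for two spheres with effective radius R and contact modulus E*."""
    return (4.0 / 3.0) * E_star * math.sqrt(R) * delta ** 1.5

def mindlin_tangential_force(delta_t, N, mu, a, G_star):
    """Mindlin partial-slip tangential force for monotonic loading at
    constant normal force N, contact radius a, friction coefficient mu,
    and effective shear modulus G* (textbook coefficients assumed)."""
    x = 1.0 - 16.0 * G_star * a * delta_t / (3.0 * mu * N)
    if x <= 0.0:                  # full sliding: force saturates at mu*N
        return mu * N
    return mu * N * (1.0 - x ** 1.5)
```

The memory-diagram method of the paper generalizes this monotonic branch to arbitrary load histories.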
Efficient finite element modeling of scattering for 2D and 3D problems
NASA Astrophysics Data System (ADS)
Wilcox, Paul D.; Velichko, Alexander
2010-03-01
The scattering of waves by defects is central to ultrasonic NDE and SHM. In general, scattering problems must be modeled using direct numerical methods such as finite elements (FE), which is very computationally demanding. The most efficient approach is to model only the scatterer itself and a minimal region of the surrounding host medium, as previously demonstrated for 2-dimensional (2D) bulk wave scattering problems in isotropic media. An encircling array of monopole and dipole sources is used to inject an arbitrary wavefront onto the scatterer, and the scattered field is monitored by a second encircling array of monitoring points. From these data, the scattered field can be projected out to any point in space. If the incident wave is chosen to be a plane wave incident from a given angle and the scattered field is projected to distant points in the far-field of the scatterer, the far-field scattering matrix (S-matrix) may be obtained, which encodes all the available scattering information. In this paper, the technique is generalized to any elastic wave geometry in both 2D and 3D, where the latter can include guided wave scattering problems. A further refinement enables the technique to be employed with free FE meshes of triangular or tetrahedral elements.
NASA Astrophysics Data System (ADS)
Tucciarelli, T.
2012-12-01
A new methodology for the solution of irrotational 2D flow problems in domains with strongly unstructured meshes is presented. A fractional time step procedure is applied to the original governing equations, solving consecutively a convective prediction system and a diffusive corrective system. The nonlinear components of the problem are concentrated in the prediction step, while the correction step leads to the solution of a linear system of the order of the number of computational cells. A MArching in Space and Time (MAST) approach is applied for the solution of the convective prediction step. The major advantages of the model, as well as its ability to maintain solution monotonicity even on strongly irregular meshes, are briefly described. The algorithm is applied to the solution of the diffusive shallow water equations in a simple domain.
Fluctuating Pressure Data from 2-D Nozzle Cold Flow Tests (Dual Bell)
NASA Technical Reports Server (NTRS)
Nesman, Tomas E.
2001-01-01
A rocket engine nozzle's performance changes as the vehicle climbs through the atmosphere. An altitude-compensating nozzle (ACN) is intended to improve on a fixed-geometry bell nozzle, which performs at optimum at only one trajectory point. In addition to nozzle performance, nozzle transient loads are an important consideration. Any nozzle experiences large transient loads when shocks pass through the nozzle at start and shutdown; additional transient loads occur at transitional flow conditions. The objectives of cold-flow nozzle testing at MSFC are CFD benchmarking/calibration and unsteady flow/sideloads. Initial testing was performed with 2-D inserts in the 14-inch transonic wind tunnel, and the 2-D data were recently reviewed in preparation for 3-D testing in the nozzle test facility. This presentation shows fluctuating pressure data and some observations from the 2-D dual-bell nozzle cold flow tests.
NASA Astrophysics Data System (ADS)
Stone, James M.; Norman, Michael L.
1992-06-01
A detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows including a self-consistent treatment of the effects of magnetic fields and radiation transfer is presented. Attention is given to the hydrodynamic (HD) algorithms which form the foundation for the more complex MHD and radiation HD algorithms. The effect of self-gravity on the flow dynamics is accounted for by an iterative solution of the sparse-banded matrix resulting from discretizing the Poisson equation in multidimensions. The results of an extensive series of HD test problems are presented. A detailed description of the MHD algorithms in ZEUS-2D is presented. A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-constrained transport method provides for the accurate evolution of all modes of MHD wave families.
A 2D inverse problem of predicting boiling heat transfer in a long fin
NASA Astrophysics Data System (ADS)
Orzechowski, Tadeusz
2016-10-01
A method for the determination of local values of the heat transfer coefficient on non-isothermal surfaces was analyzed on the example of a long smooth-surfaced fin made of aluminium. On the basis of the experimental data, two cases were taken into consideration: a one-dimensional model for Bi < 0.1 and a two-dimensional model for thicker elements. In the case when the drop in temperature over the thickness could be neglected, the local values of the rejected heat flux were calculated from the integral of the equation describing the temperature distribution on the fin. The corresponding boiling curve was plotted on the basis of the temperature gradient distribution as a function of superheat. For thicker specimens, where Bi > 0.1, the problem was modelled using a 2-D heat conduction equation, for which the boundary conditions were posed on the surface observed with a thermovision camera. The ill-conditioned inverse problem was solved using the method of heat polynomials, which required validation.
Fung, Jimmy; Masser, Thomas; Morgan, Nathaniel R.
2012-06-25
The Sedov test is classically defined as a point blast problem. The Sedov problem has led us to advances in algorithms and in their understanding. Vorticity generation can be physical or numerical. Both play a role in Sedov calculations. The RAGE code (Eulerian) resolves the shock well, but produces vorticity. The source definition matters. For the FLAG code (Lagrange), CCH is superior to SGH by avoiding spurious vorticity generation. FLAG SGH currently has a number of options that improve results over traditional settings. Vorticity production, not shock capture, has driven the Sedov work. We are pursuing treatments with respect to the hydro discretization as well as to artificial viscosity.
2-D Path Corrections for Local and Regional Coda Waves: A Test of Transportability
Mayeda, K M; Malagnini, L; Phillips, W S; Walter, W R; Dreger, D S; Morasca, P
2005-07-13
Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. [2003] has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. We will compare performance of 1-D versus 2-D path corrections in a variety of regions. First, the complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Next, we will compare results for the Italian Alps using high frequency data from the University of Genoa. For Northern California, we used the same station and event distribution and compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ≈0.7 Hz; however, for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller.
New optimization problems arising in modelling of 2D-crystal lattices
NASA Astrophysics Data System (ADS)
Evtushenko, Yury; Lurie, Sergey; Posypkin, Mikhail
2016-10-01
The paper considers the problem of finding the structure of a fragment of two-dimensional crystal lattice with the minimal energy. Atoms in a lattice reside on parallel lines (layers). The interatomic distances are the same within one layer but can differ for distinct layers. The energy of the piece of material is computed using so-called potential functions. We used Lennard-Jones, Morse and Tersoff potentials. The proposed formulation can serve as a scalable complex non-smooth optimization test. The paper evaluates various optimization techniques for the problem under consideration, compares their performances and draws the conclusion about the best choice of optimization methods for the problem under test. As a result we were able to locate minima meaningful from the physical point of view, e.g. reproducing graphene lattice.
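A minimal sketch of the kind of energy landscape the paper optimizes: a two-layer fragment with fixed in-layer spacing, pair energies summed with the Lennard-Jones potential, and a crude 1-D scan over the inter-layer distance. All parameter values here are illustrative (reduced LJ units), and the real problem optimizes many layer spacings at once, with non-smooth potentials such as Tersoff also in play.

```python
import math

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential, one of the potential functions
    considered in the paper (Morse and Tersoff being the others)."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def fragment_energy(n_per_layer, a, d):
    """Total pair energy of a two-layer fragment: atoms on two parallel
    lines, spacing a within each layer, layers separated by d."""
    atoms = [(i * a, 0.0) for i in range(n_per_layer)] + \
            [(i * a, d) for i in range(n_per_layer)]
    E = 0.0
    for i in range(len(atoms)):
        for j in range(i + 1, len(atoms)):
            dx = atoms[i][0] - atoms[j][0]
            dy = atoms[i][1] - atoms[j][1]
            E += lj(math.hypot(dx, dy))
    return E

# crude 1-D scan over the inter-layer distance d at fixed in-layer spacing
best = min(((fragment_energy(6, 1.12, 0.8 + 0.01 * k), 0.8 + 0.01 * k)
            for k in range(80)), key=lambda t: t[0])
```

Even this toy version shows why the full problem is a useful scalable optimization test: the energy is a sum of strongly nonlinear pair terms with many competing minima.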
Dynamics and Universality of AN Isothermal Combustion Problem in 2D
NASA Astrophysics Data System (ADS)
Qi, Y. W.
In this paper, the Cauchy problem of the system $$u_{1,t} = \Delta u_1 - u_1 u_2^{m}, \quad u_{2,t} = d \Delta u_2 + u_1 u_2^{m}$$ is studied, where x ∈ R^2, m ≥ 1 and d > 0 is the Lewis number. This system models isothermal combustion (see [7]) and auto-catalytic chemical reaction. We show the global existence and regularity of solutions with non-negative initial values having mild decay as |x| → ∞. More importantly, we establish the exact spatio-temporal profiles for such solutions. In particular, we prove that for m = 1, the exact large time behavior of solutions is characterized by a universal, non-Gaussian spatio-temporal profile, with anomalous exponents, due to the fact that the quadratic nonlinearity is critical in 2D. Our approach is a combination of iteration using the Renormalization Group method, which has been developed into a very powerful tool in the study of nonlinear PDEs largely by the pioneering works of Bricmont, Kupiainen and Lin [6] and Bricmont, Kupiainen and Xin [7] (see also [9]), and key estimates using the PDE method.
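An explicit finite-difference sketch of the system for m = 1 (illustrative grid, time step, and periodic boundaries; a toy integrator, not the renormalization-group analysis of the paper). Note that the reaction terms cancel in the sum, so the discrete total of u1 + u2 is conserved exactly, which is a useful sanity check on any such simulation.

```python
def step(u1, u2, d=1.0, dt=0.01, h=1.0):
    """One explicit time step for u1_t = lap(u1) - u1*u2^m,
    u2_t = d*lap(u2) + u1*u2^m with m = 1, on a periodic n x n grid."""
    n = len(u1)
    def lap(u, i, j):
        return (u[(i + 1) % n][j] + u[(i - 1) % n][j]
                + u[i][(j + 1) % n] + u[i][(j - 1) % n]
                - 4.0 * u[i][j]) / (h * h)
    # reaction term u1*u2 (m = 1): consumed by u1, produced by u2
    r = [[u1[i][j] * u2[i][j] for j in range(n)] for i in range(n)]
    v1 = [[u1[i][j] + dt * (lap(u1, i, j) - r[i][j])
           for j in range(n)] for i in range(n)]
    v2 = [[u2[i][j] + dt * (d * lap(u2, i, j) + r[i][j])
           for j in range(n)] for i in range(n)]
    return v1, v2
```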
An analytical approach to estimate the number of small scatterers in 2D inverse scattering problems
NASA Astrophysics Data System (ADS)
Fazli, Roohallah; Nakhkash, Mansor
2012-07-01
This paper presents an analytical method to estimate the location and number of actual small targets in 2D inverse scattering problems. The method is motivated by the exact maximum likelihood estimation of signal parameters in white Gaussian noise for the linear data model. In the first stage, the method uses the MUSIC algorithm to acquire all possible target locations, and in the next stage it employs an analytical formula that works as a spatial filter to determine which target locations are associated with the actual ones. The ability of the method is examined for both the Born and multiple scattering cases and for the cases of well-resolved and non-resolved targets. Many numerical simulations using both coincident and non-coincident arrays demonstrate that the proposed method can detect the number of actual targets even in the case of very noisy data and when the targets are closely located. Using experimental microwave data sets, we further show that this method is successful in specifying the number of small inclusions.
Analytical solutions for some defect problems in 1D hexagonal and 2D octagonal quasicrystals
NASA Astrophysics Data System (ADS)
Wang, X.; Pan, E.
2008-05-01
We study some typical defect problems in one-dimensional (1D) hexagonal and two-dimensional (2D) octagonal quasicrystals. The first part of this investigation addresses in detail a uniformly moving screw dislocation in a 1D hexagonal piezoelectric quasicrystal with point group 6mm. A general solution is derived in terms of two functions $\varphi_1$, $\varphi_2$, which satisfy wave equations, and another harmonic function $\varphi_3$. Elementary expressions for the phonon and phason displacements, strains, stresses, electric potential, electric fields and electric displacements induced by the moving screw dislocation are then arrived at by employing the obtained general solution. The derived solution is verified by comparison with existing solutions. Also obtained in this part of the investigation is the total energy of the moving screw dislocation. The second part of this investigation is devoted to the study of the interaction of a straight dislocation with a semi-infinite crack in an octagonal quasicrystal. Here the crack penetrates through the solid along the period direction and the dislocation line is parallel to the period direction. We first derive a general solution in terms of four analytic functions for the plane strain problem in octagonal quasicrystals by means of differential operator theory and the complex variable method. All the phonon and phason displacements and stresses can be expressed in terms of the four analytic functions. Then we derive the exact solution for a straight dislocation near a semi-infinite crack in an octagonal quasicrystal, and also present the phonon and phason stress intensity factors induced by the straight dislocation and remote loads.
Test problems in radiative transfer calculations
Shestakov, A. I.; Kershaw, D. S.; Zimmerman, G. B.
1989-01-12
Several test problems are presented for evaluating the radiation diffusion equations. For spatial transport schemes, 1-D problems with known analytic solutions are tested on 2-D domains with non-orthogonal meshes. It is shown that a scheme based on the Finite Element Method is insensitive to grid distortions when the diffusion term is dominant. Other test problems deal with Compton scattering, specifically the 1-D Fokker-Planck equation coupled to an equation describing the change in electron temperature. The test problems model the evolution of a Planckian radiation field as it equilibrates with the electrons. In all cases, the numerical results are compared with the analytic ones. 15 refs., 9 figs., 7 tabs.
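As a hedged toy analogue of the matter-radiation equilibration test above: a grey two-temperature relaxation model rather than the paper's Fokker-Planck treatment of the photon spectrum, with all parameter values illustrative. It shares the two qualitative properties the test problems check: conservation of total energy and relaxation of the radiation field toward the Planckian equilibrium.

```python
def equilibrate(T_e=2.0, E_r=0.1, C_v=1.0, kappa=1.0, a_rad=1.0,
                dt=1e-3, steps=20000):
    """Explicit integration of a grey two-temperature toy model:
        dE_r/dt   =  kappa * (a_rad*T_e**4 - E_r)
        C_v*dT_e/dt = -kappa * (a_rad*T_e**4 - E_r)
    Total energy E_r + C_v*T_e is conserved exactly; the radiation
    field relaxes toward the equilibrium E_r = a_rad*T_e**4."""
    for _ in range(steps):
        ex = kappa * (a_rad * T_e ** 4 - E_r) * dt
        E_r += ex
        T_e -= ex / C_v
    return T_e, E_r
```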
NASA Astrophysics Data System (ADS)
Szerszeń, Krzysztof; Zieniuk, Eugeniusz
2016-06-01
The paper presents a strategy for the numerical solving of a parametric integral equation system (PIES) for 2D potential problems without explicit calculation of singular integrals. The values of these integrals are expressed indirectly in terms of easy-to-compute non-singular integrals. The effectiveness of the proposed strategy is investigated on the example of a potential problem modeled by the Laplace equation. The strategy simplifies the structure of the program while maintaining good accuracy of the obtained solutions.
Criminality and the 2D:4D ratio: testing the prenatal androgen hypothesis.
Ellis, Lee; Hoskin, Anthony W
2015-03-01
A decade old theory hypothesizes that brain exposure to androgens promotes involvement in criminal behavior. General support for this hypothesis has been provided by studies of postpubertal circulating levels of testosterone, at least among males. However, the theory also predicts that for both genders, prenatal androgens will be positively correlated with persistent offending, an idea for which no evidence currently exists. The present study used an indirect measure of prenatal androgen exposure-the relative length of the second and fourth fingers of the right hand (r2D:4D)-to test the hypothesis that elevated prenatal androgens promote criminal tendencies later in life for males and females. Questionnaires were administered to 2,059 college students in Malaysia and 1,291 college students in the United States. Respondents reported their r2D:4D relative finger lengths along with involvement in 13 categories of delinquent and criminal acts. Statistically significant correlations between the commission of most types of offenses and r2D:4D ratios were found for males and females even after controlling for age. It is concluded that high exposure to androgens during prenatal development contributes to most forms of offending following the onset of puberty.
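The "controlling for age" step mentioned above is, in its simplest form, a first-order partial correlation. A self-contained sketch (synthetic data with made-up effect sizes, not the study's dataset):

```python
import math
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z
    (here z would be age)."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```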
Tang, Shanzhi; Yu, Shengrui; Han, Qingfu; Li, Ming; Wang, Zhao
2016-09-01
Circular testing is an important tactic for assessing motion accuracy in many fields, especially machine tools and coordinate measuring machines. Both the contact double ball bar and existing non-contact methods suffer from setup errors caused by direct centring of the measuring instrument. To solve this problem, an algorithm for the circular test using function construction based on matrix operations is proposed, which yields not only the radial deviation (F) but also two further evaluation parameters, notably the circular hysteresis (H). Furthermore, an improved optical configuration with a single laser is presented, based on a 2D laser heterodyne interferometer. Compared with the existing non-contact method, it has a purer homogeneity of the laser sources for 2D displacement sensing in advanced metrology. The algorithm and modeling are both illustrated, and an error budget is derived. Finally, to validate them, test experiments on motion paths were implemented on a gantry machining center; the contrast test results support the proposal.
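One common way to remove the centring (setup) error before evaluating the radial deviation F is to fit a least-squares circle to the measured points and evaluate radii about the fitted centre. A sketch of that idea (Kåsa fit in pure Python, offered as an illustration, not the authors' matrix-construction algorithm):

```python
import math

def circle_fit(pts):
    """Kasa least-squares circle fit; returns (cx, cy, radius).
    Solves the normal equations of x^2 + y^2 + D*x + E*y + F = 0."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, y in pts:
        row, rhs = (x, y, 1.0), -(x * x + y * y)
        for i in range(3):
            for j in range(3):
                A[i][j] += row[i] * row[j]
            b[i] += row[i] * rhs
    # 3x3 Gaussian elimination with partial pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    D = [0.0] * 3
    for k in (2, 1, 0):
        D[k] = (b[k] - sum(A[k][c] * D[c] for c in range(k + 1, 3))) / A[k][k]
    cx, cy = -D[0] / 2.0, -D[1] / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - D[2])

def radial_deviation_F(pts):
    """Spread of measured radii about the least-squares centre, which
    removes the setup (centring) error from the evaluation."""
    cx, cy, _ = circle_fit(pts)
    radii = [math.hypot(x - cx, y - cy) for x, y in pts]
    return max(radii) - min(radii)
```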
An investigation of DTNS2D for use as an incompressible turbulence modelling test-bed
NASA Technical Reports Server (NTRS)
Steffen, Christopher J., Jr.
1992-01-01
This paper documents an investigation of a two-dimensional, incompressible Navier-Stokes solver for use as a test-bed for turbulence modelling. DTNS2D is the code under consideration for use at the Center for Modelling of Turbulence and Transition (CMOTT). This code was created by Gorski at the David Taylor Research Center and incorporates the pseudo-compressibility method. Two laminar benchmark flows are used to measure the performance and implementation of the method. The classical Blasius boundary layer solution is used to validate the flat plate flow, while experimental data are incorporated in the validation of the backward-facing step flow. Velocity profiles, convergence histories, and reattachment lengths are used to quantify these calculations. The organization and adaptability of the code are also examined in light of its role as a numerical test-bed.
NASA Astrophysics Data System (ADS)
Tanaka, Satoyuki; Suzuki, Hirotaka; Sadamoto, Shota; Sannomaru, Shogo; Yu, Tiantang; Bui, Tinh Quoc
2016-08-01
Two-dimensional (2D) in-plane mixed-mode fracture mechanics problems are analyzed employing an efficient meshfree Galerkin method based on stabilized conforming nodal integration (SCNI). In this setting, the reproducing kernel function is taken as the meshfree interpolant, while SCNI is employed for numerical integration of the stiffness matrix in the Galerkin formulation. The strain components are smoothed and stabilized employing the Gauss divergence theorem. The path-independent integral (J-integral) is computed based on the nodal integration by summing the smoothed physical quantities and the segments of the contour integrals. In addition, mixed-mode stress intensity factors (SIFs) are extracted from the J-integral by decomposing the displacement and stress fields into symmetric and antisymmetric parts. The advantages and features of the present formulation and discretization in evaluating the J-integral of in-plane 2D fracture problems are demonstrated through several representative numerical examples. The mixed-mode SIFs are evaluated and compared with reference solutions. The obtained results reveal the high accuracy and good performance of the proposed meshfree method in the analysis of 2D fracture problems.
2D Control Problem and TVD-Particle Method for Water Treatment System
NASA Astrophysics Data System (ADS)
Louaked, M.; Saïdi, A.
2011-11-01
This work concerns the study of an optimal control problem related to water pollution. We analyze various questions: existence, uniqueness, control, and the regularized formulation of the initial pointwise control problem. We also propose an implementation of a hybrid numerical scheme associated with a descent algorithm.
An ant colony optimisation algorithm for the 2D and 3D hydrophobic polar protein folding problem
Shmygelska, Alena; Hoos, Holger H
2005-01-01
Background The protein folding problem is a fundamental problem in computational molecular biology and biochemical physics. Various optimisation methods have been applied to formulations of the ab-initio folding problem that are based on reduced models of protein structure, including Monte Carlo methods, Evolutionary Algorithms, Tabu Search and hybrid approaches. In our work, we have introduced an ant colony optimisation (ACO) algorithm to address the non-deterministic polynomial-time hard (NP-hard) combinatorial problem of predicting a protein's conformation from its amino acid sequence under a widely studied, conceptually simple model – the 2-dimensional (2D) and 3-dimensional (3D) hydrophobic-polar (HP) model. Results We present an improvement of our previous ACO algorithm for the 2D HP model and its extension to the 3D HP model. We show that this new algorithm, dubbed ACO-HPPFP-3, performs better than previous state-of-the-art algorithms on sequences whose native conformations do not contain structural nuclei (parts of the native fold that predominantly consist of local interactions) at the ends, but rather in the middle of the sequence, and that it generally finds a more diverse set of native conformations. Conclusions The application of ACO to this bioinformatics problem compares favourably with specialised, state-of-the-art methods for the 2D and 3D HP protein folding problem; our empirical results indicate that our rather simple ACO algorithm scales worse with sequence length but usually finds a more diverse ensemble of native states. Therefore the development of ACO algorithms for more complex and realistic models of protein structure holds significant promise. PMID:15710037
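The HP model referenced in this record has a simple objective function that is easy to state concretely. The following sketch (illustrative only, not the authors' ACO implementation; the function name is ours) scores a 2D HP conformation by counting non-consecutive H-H lattice contacts:

```python
def hp_energy_2d(seq, coords):
    """Energy of a 2D HP-model conformation (illustrative sketch).

    seq    : string of 'H'/'P' residues
    coords : one (x, y) lattice position per residue, assumed to form
             a self-avoiding walk
    Energy is -1 for every pair of H residues that are lattice
    neighbours but not consecutive along the chain.
    """
    pos = {c: i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        if seq[i] != 'H':
            continue
        # Only look in +x and +y so each contact is counted once.
        for nb in ((x + 1, y), (x, y + 1)):
            j = pos.get(nb)
            if j is not None and seq[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy

# U-shaped 4-mer: the two terminal H residues form one H-H contact.
print(hp_energy_2d("HPPH", [(0, 0), (0, 1), (1, 1), (1, 0)]))  # → -1
```

The ACO search itself then explores the space of self-avoiding walks to minimise this energy.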
Dong, Jianping
2014-03-15
The 2D space-fractional Schrödinger equation in the time-independent and time-dependent cases is studied for scattering problems in fractional quantum mechanics. We define the Green's functions for the two cases and give their mathematical expressions in infinite series form and in terms of some special functions. The asymptotic formulas of the Green's functions are also given, and applied to obtain the approximate wave functions for the fractional quantum scattering problems. These results contain those of standard (integer) quantum mechanics as special cases, and can be applied to study complex quantum systems.
National Proficiency Testing Result of CYP2D6*10 Genotyping for Adjuvant Tamoxifen Therapy in China
Lin, Guigao; Zhang, Kuo; Yi, Lang; Han, Yanxi; Xie, Jiehong; Li, Jinming
2016-01-01
Tamoxifen has been successfully used for treating breast cancer and preventing cancer recurrence. Cytochrome P450 2D6 (CYP2D6) plays a key role in the process of metabolizing tamoxifen to its active moiety, endoxifen. Patients with variants of the CYP2D6 gene may not receive the full benefit of tamoxifen treatment. The CYP2D6*10 variant (the most common variant in Asians) was analyzed to optimize the prescription of tamoxifen in China. To ensure referring clinicians have accurate information for genotype-guided tamoxifen treatment, the Chinese National Center for Clinical Laboratories (NCCL) organized a national proficiency testing (PT) to evaluate the performance of laboratories providing CYP2D6*10 genotyping. Ten genomic DNA samples with CYP2D6 wild-type or CYP2D6*10 variants were validated by PCR-sequencing and sent to 28 participant laboratories. The genotyping results and pharmacogenomic test reports were submitted and evaluated by NCCL experts. Additional information regarding the number of samples tested, the accreditation/certification status, and detecting technology was also requested. Thirty-one data sets were received, with a corresponding analytical sensitivity of 98.2% (548/558 challenges; 95% confidence interval: 96.7–99.1%) and an analytic specificity of 96.5% (675/682; 95% confidence interval: 97.9–99.5%). Overall, 25/28 participants correctly identified CYP2D6*10 status in 10 samples; however, two laboratories made serious genotyping errors. Most of the essential information was included in the 20 submitted CYP2D6*10 test reports. The majority of Chinese laboratories are reliable for detecting the CYP2D6*10 variant; however, several issues revealed in this study underline the importance of PT schemes in continued external assessment and provision of guidelines. PMID:27603206
NASA Astrophysics Data System (ADS)
Mo, Yike; Greenhalgh, Stewart A.; Robertsson, Johan O. A.; Karaman, Hakki
2015-05-01
Lateral velocity variations and low velocity near-surface layers can produce strong scattered and guided waves which interfere with reflections and lead to severe imaging problems in seismic exploration. In order to investigate these specific problems by laboratory seismic modelling, a simple 2D ultrasonic model facility has recently been assembled within the Wave Propagation Lab at ETH Zurich. The simulated geological structures are constructed from 2 mm thick metal and plastic sheets, cut and bonded together. The experiments entail the use of a piezoelectric source driven by a pulse amplifier at ultrasonic frequencies to generate Lamb waves in the plate, which are detected by piezoelectric receivers and recorded digitally on a National Instruments recording system, under LabVIEW software control. The 2D models employed were constructed in-house in full recognition of the similitude relations. The first heterogeneous model features a flat uniform low velocity near-surface layer and deeper dipping and flat interfaces separating different materials. The second model is comparable but also incorporates two rectangular shaped inserts, one of low velocity, the other of high velocity. The third model is identical to the second other than having an irregular low velocity surface layer of variable thickness. Reflection as well as transmission experiments (crosshole and vertical seismic profiling) were performed on each model. The two dominant Lamb waves recorded are the fundamental symmetric mode (non-dispersive) and the fundamental antisymmetric (flexural) dispersive mode, the latter normally being absent when the source transducer is located on a model edge but dominant when it is on the flat planar surface of the plate. Experimental group and phase velocity dispersion curves were determined and plotted for both modes in a uniform aluminium plate. For the reflection seismic data, various processing techniques were applied, up to pre-stack Kirchhoff migration. The
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
NASA Astrophysics Data System (ADS)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-01
We propose a numerical solution of the reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good reconstructions of vector fields.
OECD/MCCI 2-D Core Concrete Interaction (CCI) tests : final report February 28, 2006.
Farmer, M. T.; Lomperski, S.; Kilsdonk, D. J.; Aeschlimann, R. W.; Basu, S.
2011-05-23
reactor material database for dry cavity conditions is solely one-dimensional. Although the MACE Scoping Test was carried out with a two-dimensional concrete cavity, the interaction was flooded soon after ablation was initiated to investigate debris coolability. Moreover, due to the scoping nature of this test, the apparatus was minimally instrumented and therefore the results are of limited value from the code validation viewpoint. Aside from the MACE program, the COTELS test series also investigated 2-D CCI under flooded cavity conditions. However, the input power density for these tests was quite high relative to the prototypic case. Finally, the BETA test series provided valuable data on 2-D core concrete interaction under dry cavity conditions, but these tests focused on investigating the interaction of the metallic (steel) phase with concrete. Due to these limitations, there is significant uncertainty in the partition of energy dissipated for the ablation of concrete in the lateral and axial directions under dry cavity conditions for the case of a core oxide melt. Accurate knowledge of this 'power split' is important in the evaluation of the consequences of an ex-vessel severe accident; e.g., lateral erosion can undermine containment structures, while axial erosion can penetrate the basemat, leading to ground contamination and/or possible containment bypass. As a result of this uncertainty, there are still substantial differences among computer codes in the prediction of 2-D cavity erosion behavior under both wet and dry cavity conditions. In light of the above issues, the OECD-sponsored Melt Coolability and Concrete Interaction (MCCI) program was initiated at Argonne National Laboratory. 
The project conducted reactor materials experiments and associated analysis to achieve the following technical objectives: (1) resolve the ex-vessel debris coolability issue through a program that focused on providing both confirmatory evidence and test data for the coolability
Sweetser, John David
2013-10-01
This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. 54 predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in Appendices B.1 and B.2.
NASA Astrophysics Data System (ADS)
Stone, James M.; Norman, Michael L.
1992-06-01
In this, the second of a series of three papers, we continue a detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows in astrophysics including a self-consistent treatment of the effects of magnetic fields and radiation transfer. In this paper, we give a detailed description of the magnetohydrodynamical (MHD) algorithms in ZEUS-2D. The recently developed constrained transport (CT) algorithm is implemented for the numerical evolution of the components of the magnetic field for MHD simulations. This formalism guarantees the numerically evolved field components will satisfy the divergence-free constraint at all times. We find, however, that the method used to compute the electromotive forces must be chosen carefully to propagate accurately all modes of MHD wave families (in particular shear Alfvén waves). A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-CT method provides for the accurate evolution of all modes of MHD wave families.
A multiple-scale Pascal polynomial for 2D Stokes and inverse Cauchy-Stokes problems
NASA Astrophysics Data System (ADS)
Liu, Chein-Shan; Young, D. L.
2016-05-01
The polynomial expansion method is a useful tool for solving both the direct and inverse Stokes problems; combined with the pointwise collocation technique, it makes it easy to derive the algebraic equations satisfying the Stokes differential equations and the specified boundary conditions. In this paper we propose two novel numerical algorithms, based on a third-first order system and a third-third order system, to solve the direct and the inverse Cauchy problems in Stokes flows by developing a multiple-scale Pascal polynomial method, in which the scales are determined a priori by the collocation points. Assessing the performance through numerical experiments, we find that the multiple-scale Pascal polynomial expansion method (MSPEM) is accurate and stable against large noise.
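As background to this record, a Pascal (complete monomial) polynomial basis and the per-column rescaling that motivates the "multiple-scale" idea can be sketched as follows (a simplified illustration with assumed names, not the MSPEM algorithm itself):

```python
def pascal_basis(n):
    """Monomial (Pascal-triangle) basis terms x^i y^j with i + j <= n,
    returned as (i, j) exponent pairs in triangle order."""
    return [(k - j, j) for k in range(n + 1) for j in range(k + 1)]

def collocation_matrix(points, n):
    """Evaluate the basis at the collocation points, then rescale each
    column by its maximum absolute value (a simple stand-in for the
    'multiple scales'), which equilibrates the otherwise badly scaled
    collocation matrix."""
    terms = pascal_basis(n)
    A = [[x ** i * y ** j for (i, j) in terms] for (x, y) in points]
    scales = [max(abs(A[r][c]) for r in range(len(A))) or 1.0
              for c in range(len(terms))]
    A_scaled = [[A[r][c] / scales[c] for c in range(len(terms))]
                for r in range(len(A))]
    return A_scaled, scales

# Three collocation points, quadratic basis (6 terms).
A, s = collocation_matrix([(0.5, 0.5), (1.0, 2.0), (2.0, 1.0)], 2)
```

After rescaling, every column of the collocation matrix has unit maximum magnitude, which is the conditioning benefit the multiple-scale idea targets.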
On the sign problem in 2D lattice super Yang-Mills
NASA Astrophysics Data System (ADS)
Catterall, Simon; Galvez, Richard; Joseph, Anosh; Mehta, Dhagash
2012-01-01
In recent years a new class of supersymmetric lattice theories has been proposed which retains one or more exact supersymmetries at non-zero lattice spacing. Recently there has been some controversy in the literature concerning whether these theories suffer from a sign problem. In this paper we address this issue by conducting simulations of the N = (2,2) and N = (8,8) supersymmetric Yang-Mills theories in two dimensions for the U(N) theories with N = 2, 3, 4, using the new twisted lattice formulations. Our results provide evidence that these theories do not suffer from a sign problem in the continuum limit. These results thus boost confidence that the new lattice formulations can be used successfully to explore non-perturbative aspects of four-dimensional N = 4 supersymmetric Yang-Mills theory.
Validation of the bifurcation diagram in the 2D Ohta–Kawasaki problem
NASA Astrophysics Data System (ADS)
Bouwe van den Berg, Jan; Williams, J. F.
2017-04-01
We develop a rigorous numerical method to compare local minimizers of the Ohta–Kawasaki functional in two dimensions. In particular, we validate the phase diagram identifying regions of parameter space where rolls are favorable, where hexagonally packed spots have lowest energy and finally where the constant mixed state does. More generally, we present a method to rigorously determine such features in problems where optimal domain sizes are not known a priori.
Testing Under Fire: Chicago's Problem.
ERIC Educational Resources Information Center
Byrd, Manford, Jr.
The history and development of city-wide testing programs in Chicago since 1936 are reviewed and placed in context with the impact on testing of Sputnik and the passage of the National Defense Education Act of 1958. Current testing problems include the time lag between events and curricular changes and new test construction, the time lag between…
A Novel Numerical Algorithm of Numerov Type for 2D Quasi-linear Elliptic Boundary Value Problems
NASA Astrophysics Data System (ADS)
Mohanty, R. K.; Kumar, Ravindra
2014-11-01
In this article, using three function evaluations, we discuss a nine-point compact scheme of O(Δy^2 + Δx^4) based on Numerov-type discretization for the solution of 2D quasi-linear elliptic equations with given Dirichlet boundary conditions, where Δy > 0 and Δx > 0 are the grid sizes in the y- and x-directions, respectively. Iterative methods for the diffusion-convection equation are discussed in detail. We use block iterative methods to solve the system of algebraic linear and nonlinear difference equations. Comparative results for some physical problems are given to illustrate the usefulness of the proposed method.
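To illustrate the iterative-solution step described in this record, a minimal sketch of the standard second-order five-point scheme with point Gauss-Seidel sweeps is given below (the paper's scheme is nine-point and higher order; this simplified stand-in only shows the iteration idea, and all names are ours):

```python
def gauss_seidel_poisson(f, u, hx, hy, sweeps=500):
    """Point Gauss-Seidel sweeps for the standard 5-point discretisation
    of -u_xx - u_yy = f on a rectangular grid.  Dirichlet values on the
    boundary rows/columns of u are left untouched; interior values are
    relaxed in place and returned."""
    ny, nx = len(u), len(u[0])
    cx, cy = 1.0 / hx ** 2, 1.0 / hy ** 2
    diag = 2.0 * (cx + cy)
    for _ in range(sweeps):
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                u[i][j] = (f[i][j]
                           + cx * (u[i][j - 1] + u[i][j + 1])
                           + cy * (u[i - 1][j] + u[i + 1][j])) / diag
    return u

# Laplace problem with u = x on the boundary: interior relaxes to u = x.
hx = hy = 0.25
u0 = [[j * hx if i in (0, 4) or j in (0, 4) else 0.0 for j in range(5)]
      for i in range(5)]
f0 = [[0.0] * 5 for _ in range(5)]
sol = gauss_seidel_poisson(f0, u0, hx, hy)
```

Block iterative variants, as used in the paper, relax a whole grid line at a time instead of a single point, which typically accelerates convergence.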
2014-06-11
Test Report for the NRL Ocean Surface Flux (NFLUX) Quality Control and 2D Variational Analysis System. Jackie May; Neil Van de Voorde; QinetiQ North...
A challenge problem for 2D/3D imaging of targets from a volumetric data set in an urban environment
NASA Astrophysics Data System (ADS)
Casteel, Curtis H., Jr.; Gorham, LeRoy A.; Minardi, Michael J.; Scarborough, Steven M.; Naidu, Kiranmai D.; Majumder, Uttam K.
2007-04-01
This paper describes a challenge problem whose scope is the 2D/3D imaging of stationary targets from a volumetric data set of X-band Synthetic Aperture Radar (SAR) data collected in an urban environment. The data for this problem was collected at a scene consisting of numerous civilian vehicles and calibration targets. The radar operated in circular SAR mode and completed 8 circular flight paths around the scene at varying altitudes. Data consist of phase history data, auxiliary data, processing algorithms, processed images, as well as ground truth data. Interest is focused on mitigating the large side lobes in the point spread function. Due to the sparse nature of the elevation aperture, traditional imaging techniques introduce excessive artifacts in the processed images. Further interests include the formation of high-resolution 3D SAR images with single pass data and feature extraction for 3D SAR automatic target recognition applications. The purpose of releasing the Gotcha Volumetric SAR Data Set is to provide the community with X-band SAR data that supports the development of new algorithms for high-resolution 2D/3D imaging.
NASA Astrophysics Data System (ADS)
Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey
2016-04-01
Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving the three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve the 2D depth-averaged linear and nonlinear forms of the shallow water equations (NSWE) for long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley, located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA; it is a field dataset recording the Japan 2011 tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT
NASA Astrophysics Data System (ADS)
Cockmartin, Lesley; Marshall, Nicholas W.; Van Ongeval, Chantal; Aerts, Gwen; Stalmans, Davina; Zanca, Federica; Shaheen, Eman; De Keyzer, Frederik; Dance, David R.; Young, Kenneth C.; Bosmans, Hilde
2015-05-01
This paper introduces a hybrid method for performing detection studies in projection image based modalities, based on image acquisitions of target objects and patients. The method was used to compare 2D mammography and digital breast tomosynthesis (DBT) in terms of the detection performance of spherical densities and microcalcifications. The method starts with the acquisition of spheres of different glandular equivalent densities and microcalcifications of different sizes immersed in a homogeneous breast tissue simulating medium. These target objects are then segmented and the subsequent templates are fused in projection images of patients and processed or reconstructed. This results in hybrid images with true mammographic anatomy and clinically relevant target objects, ready for use in observer studies. The detection study of spherical densities used 108 normal and 178 hybrid 2D and DBT images; 156 normal and 321 hybrid images were used for the microcalcifications. Seven observers scored the presence/absence of the spheres/microcalcifications in a square region via a 5-point confidence rating scale. Detection performance in 2D and DBT was compared via ROC analysis with sub-analyses for the density of the spheres, microcalcification size, breast thickness and z-position. The study was performed on a Siemens Inspiration tomosynthesis system using patient acquisitions with an average age of 58 years and an average breast thickness of 53 mm providing mean glandular doses of 1.06 mGy (2D) and 2.39 mGy (DBT). Study results showed that breast tomosynthesis (AUC = 0.973) outperformed 2D (AUC = 0.831) for the detection of spheres (p < 0.0001) and this applied for all spherical densities and breast thicknesses. By way of contrast, DBT was worse than 2D for microcalcification detection (AUC2D = 0.974, AUCDBT = 0.838, p < 0.0001), with significant differences found for all sizes (150-354 µm), for breast thicknesses above 40 mm and for heights
2D-Raman-THz spectroscopy: A sensitive test of polarizable water models
NASA Astrophysics Data System (ADS)
Hamm, Peter
2014-11-01
In a recent paper, the experimental 2D-Raman-THz response of liquid water at ambient conditions was presented [J. Savolainen, S. Ahmed, and P. Hamm, Proc. Natl. Acad. Sci. U. S. A. 110, 20402 (2013)]. Here, all-atom molecular dynamics simulations are performed with the goal of reproducing the experimental results. To that end, the molecular response functions are calculated in a first step, and are then convoluted with the laser pulses in order to enable a direct comparison with the experimental results. The molecular dynamics simulations are performed with several different water models: TIP4P/2005, SWM4-NDP, and TL4P. As polarizability is essential to describe the 2D-Raman-THz response, the TIP4P/2005 water molecules are amended with either an isotropic or an anisotropic polarizability a posteriori after the molecular dynamics simulation. In contrast, SWM4-NDP and TL4P are intrinsically polarizable, and hence the 2D-Raman-THz response can be calculated in a self-consistent way, using the same force field as during the molecular dynamics simulation. It is found that the 2D-Raman-THz response depends extremely sensitively on details of the water model, and in particular on details of the description of polarizability. Despite the limited time resolution of the experiment, it could easily distinguish between various water models. Albeit not perfect, the overall best agreement with the experimental data is obtained for the TL4P water model.
OECD 2-D Core Concrete Interaction (CCI) tests : CCI-2 test plan, Rev. 0 January 31, 2004.
Farmer, M. T.; Kilsdonk, D. J.; Lomperski, S.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, The Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of two long-term 2-D Core-Concrete Interaction (CCI) experiments designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. The first of these two tests, CCI-1, was conducted on December 19, 2003. This test investigated the interaction of a fully oxidized 400 kg PWR core melt, initially containing 8 wt % calcined siliceous concrete, with a specially designed two
Koneru, Suvarna Vani; Bhavani, Durga S
2015-01-01
A novel approach to the Contact Map Overlap (CMO) problem is proposed using the two-dimensional clusters present in the contact maps. Each protein is represented as a set of the non-trivial clusters of contacts extracted from its contact map. The approach involves finding matching regions between the two contact maps using an approximate 2D-pattern matching algorithm and dynamic programming. These matched pairs of small contact maps are submitted in parallel to a fast heuristic CMO algorithm. The approach facilitates parallelization at this level since all the pairs of contact maps can be submitted to the algorithm in parallel. Then, a merge algorithm is used in order to obtain the overall alignment. As a proof of concept, MSVNS, a heuristic CMO algorithm, is used for global as well as local alignment. The divide and conquer approach is evaluated on two benchmark data sets, those of Skolnick and Ding et al. It is interesting to note that, along with saving time, better overlap is also obtained for certain protein folds.
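For context on the representation used in this record, a binary contact map can be derived from residue coordinates with a distance cutoff. A minimal sketch, assuming an 8 Å cutoff and skipping chain-adjacent pairs (the paper's exact definition may differ):

```python
def contact_map(coords, threshold=8.0):
    """Binary contact map from 3D residue coordinates: residues i, j
    with |i - j| > 1 are 'in contact' when their Euclidean distance is
    below `threshold` (8 Angstrom is a common CA-CA cutoff; the exact
    definition used in the paper may differ)."""
    n = len(coords)
    cm = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 2, n):          # skip chain-adjacent pairs
            d = sum((a - b) ** 2
                    for a, b in zip(coords[i], coords[j])) ** 0.5
            if d < threshold:
                cm[i][j] = cm[j][i] = 1
    return cm

# Four residues on a line, 3 Angstrom apart: (0,2) and (1,3) touch.
cm = contact_map([(0, 0, 0), (3, 0, 0), (6, 0, 0), (9, 0, 0)])
```

The clusters the paper works with are then the connected regions of 1s in this matrix.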
Fabrication and Testing of Low Cost 2D Carbon-Carbon Nozzle Extensions at NASA/MSFC
NASA Technical Reports Server (NTRS)
Greene, Sandra Elam; Shigley, John K.; George, Russ; Roberts, Robert
2015-01-01
Subscale liquid engine tests were conducted at NASA/MSFC using a 1.2 Klbf engine with liquid oxygen (LOX) and gaseous hydrogen. Testing was performed for main-stage durations ranging from 10 to 160 seconds at a chamber pressure of 550 psia and a mixture ratio of 5.7. Operating the engine in this manner demonstrated a new and affordable test capability for evaluating subscale nozzles by exposing them to long duration tests. A series of 2D C-C nozzle extensions were manufactured, oxidation protection applied and then tested on a liquid engine test facility at NASA/MSFC. The C-C nozzle extensions had oxidation protection applied using three very distinct methods with a wide range of costs and process times: SiC via Polymer Impregnation & Pyrolysis (PIP), Air Plasma Spray (APS) and Melt Infiltration. The tested extensions were about 6" long with an exit plane ID of about 6.6". The test results, material properties and performance of the 2D C-C extensions and attachment features will be discussed.
Liu, T.; Deptuch, G.; Hoff, J.; Jindariani, S.; Joshi, S.; Olsen, J.; Tran, N.; Trimpl, M.
2015-02-01
An associative memory-based track finding approach has been proposed for a Level 1 tracking trigger to cope with increasing luminosities at the LHC. The associative memory uses a massively parallel architecture to tackle the intrinsically complex combinatorics of track finding algorithms, thus avoiding the typical power-law dependence of execution time on occupancy and solving the pattern recognition in a time roughly proportional to the number of hits. This is of crucial importance given the large occupancies typical of hadronic collisions. The design of an associative memory system capable of dealing with the complexity of HL-LHC collisions and with the short latency required by Level 1 triggering poses significant, as yet unsolved, technical challenges. For this reason, an aggressive R&D program has been launched at Fermilab to advance state-of-the-art associative memory technology: the so-called VIPRAM (Vertically Integrated Pattern Recognition Associative Memory) project. The VIPRAM leverages emerging 3D vertical integration technology to build faster and denser associative memory devices. The first step is to implement in conventional VLSI the associative memory building blocks that can be used in 3D stacking; in other words, the building blocks are laid out as if they were part of a 3D design. In this paper, we report on the first successful implementation of a 2D VIPRAM demonstrator chip (protoVIPRAM00). The results show that these building blocks are ready for 3D stacking.
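The execution-time argument above (time roughly proportional to the number of hits rather than to hit combinations) can be sketched in software; the pattern bank, addresses, and threshold below are invented for illustration and do not reflect the VIPRAM hardware design:

```python
# Conceptual sketch of associative-memory track finding: each stored pattern
# is a tuple of coarse detector addresses, one per layer. Hits are broadcast
# one at a time; every pattern containing the hit address on that layer marks
# the layer as matched. Work scales with the number of hits, not with the
# number of hit combinations.

from collections import defaultdict

n_layers = 4
patterns = [
    (3, 7, 2, 9),   # pattern 0: coarse addresses on layers 0..3
    (3, 8, 2, 6),   # pattern 1
    (5, 7, 1, 4),   # pattern 2
]

# Index built once from the pattern bank: (layer, address) -> pattern ids.
index = defaultdict(list)
for pid, pat in enumerate(patterns):
    for layer, addr in enumerate(pat):
        index[(layer, addr)].append(pid)

def find_roads(hits, threshold=n_layers):
    """hits: list of (layer, address). Return ids of patterns with at least
    `threshold` matched layers (a 'road' worth passing to a track fitter)."""
    matched = defaultdict(set)
    for layer, addr in hits:                  # one pass over the hits
        for pid in index[(layer, addr)]:
            matched[pid].add(layer)
    return [pid for pid, layers in matched.items() if len(layers) >= threshold]

hits = [(0, 3), (1, 7), (2, 2), (3, 9), (1, 8)]
print(find_roads(hits))  # [0] -- only pattern 0 is matched on all four layers
```

In hardware the per-pattern match logic runs in parallel for every stored pattern, which is what the dictionary lookup stands in for here.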
Analysis of high Reynolds numbers effects on a wind turbine airfoil using 2D wind tunnel test data
NASA Astrophysics Data System (ADS)
Pires, O.; Munduate, X.; Ceyhan, O.; Jacobs, M.; Snel, H.
2016-09-01
The aerodynamic behaviour of a wind turbine airfoil has been measured in a dedicated 2D wind tunnel test at the DNW High Pressure Wind Tunnel in Göttingen (HDG), Germany. The tests have been performed on the DU00W212 airfoil at different Reynolds numbers: 3, 6, 9, 12 and 15 million, and at low Mach numbers (below 0.1). Both clean and tripped conditions of the airfoil have been measured. An analysis of the impact of this wide Reynolds number variation on the aerodynamic characteristics of the airfoil has been performed.
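For readers checking orders of magnitude, the role of the pressurized tunnel can be sketched with a back-of-envelope calculation (illustrative numbers only, not the HDG test conditions):

```python
# Re = rho * U * c / mu. A pressurized tunnel raises the air density rho at
# fixed temperature, while dynamic viscosity mu is nearly pressure-independent,
# so the chord Reynolds number grows at fixed speed -- i.e. at low Mach number.

def reynolds(p_pa, T_k, U, chord, mu=1.8e-5, R=287.05):
    rho = p_pa / (R * T_k)        # ideal-gas density of air
    return rho * U * chord / mu

U, chord = 30.0, 0.5              # 30 m/s on a 0.5 m chord => Mach ~ 0.09
re_1bar = reynolds(1.0e5, 293.0, U, chord)
re_90bar = reynolds(9.0e6, 293.0, U, chord)
print(f"Re at 1 bar:  {re_1bar:.2e}")   # roughly 1e6
print(f"Re at 90 bar: {re_90bar:.2e}")  # 90x larger, at the same low Mach
```

This is why a high-pressure facility can reach rotor-scale Reynolds numbers on a sub-metre chord without compressibility effects.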
Altitude testing of a flight weight, self-cooled, 2D thrust vectoring exhaust nozzle
NASA Technical Reports Server (NTRS)
Wooten, W. H.; Blozy, J. T.; Speir, D. W.; Lottig, R. A.
1984-01-01
The Augmented Deflector Exhaust Nozzle (ADEN) was tested in PSL-3 at NASA-Lewis Research Center using an F404 engine. The ADEN is a flight weight Single Expansion Ramp Nozzle with thrust vectoring, an internal cooling system utilizing the available engine fan flow, and a variable area throat controlled by the engine control system. Test conditions included dry and max A/B operation at nozzle pressure ratios from 2.0 to 15.0. High nozzle pressure loading was simulated to verify structural integrity at near maximum design pressure. Nozzle settings covered the full range in throat area and ±15 deg deflection angle. Test results demonstrated expected aerodynamic performance, cooling system effectiveness, control system stability, and mechanical integrity.
NASA Astrophysics Data System (ADS)
Pires, O.; Munduate, X.; Ceyhan, O.; Jacobs, M.; Madsen, J.; Schepers, J. G.
2016-09-01
2D wind tunnel tests at high Reynolds numbers have been performed within the EU FP7 AVATAR project (Advanced Aerodynamic Tools of lArge Rotors) on the DU00-W-212 airfoil at two different test facilities: the DNW High Pressure Wind Tunnel in Göttingen (HDG) and the LM Wind Power in-house wind tunnel. Both tests were run at two Reynolds numbers: 3 and 6 million. The Mach number and turbulence intensity values are similar in both wind tunnels at the 3 million Reynolds number test, while they are significantly different at 6 million Reynolds number. The paper presents a comparison of the data obtained from the two wind tunnels, showing good repeatability at 3 million Reynolds number and differences at 6 million Reynolds number that are consistent with the different Mach number and turbulence intensity values.
Ignition problems in scramjet testing
Mitani, Tohru
1995-05-01
Ignition of H₂ in heated air containing H₂O, radicals, and dust was investigated for scramjet testing. Using a reduced kinetic model for H₂–O₂ systems, the effects of H₂O and radicals in nozzles are discussed in relation to engine testing with vitiation heaters. Analysis using linearized rate equations suggested that the addition of O atoms was 1.5 times more effective than the addition of H atoms for ignition. This result can be applied to the problem of premature ignition caused by residual radicals and to plasma-jet igniters. Thermal and chemical effects of dust, inevitable in storage air heaters, were studied next. The effects of heat capacity and size of dust were expressed in terms of an exponential integral function. It was found that radical termination on the surface of dust produces an effect equivalent to heat loss. Inhibition of ignition by dust may result if the mass fraction of dust becomes 10⁻³.
NASA Astrophysics Data System (ADS)
Savin, Daniel
by DR. Although DR has long been recognized as an important ISM heating source, our work indicates that important channels have been left out while others have been incorrectly estimated. The data for our halogen chemistry studies were collected using the recently decommissioned heavy ion test storage ring (TSR) at the Max Planck Institute for Nuclear Physics in Heidelberg (MPIK), Germany. Our proposed research addresses NASA’s Strategic Goal 2.4: "Discover how the universe works, explore how it began and evolved, and search for Earth-like planets". More specifically, the expected advances in our knowledge of star formation physics resulting from our proposed work meets NASA’s Objective 2.4.2 "Improve understanding of the many phenomena and processes associated with galaxy, stellar, and planetary system formation and evolution from the earliest epochs to today".
Analytic Grad-Shafranov test criteria and checks of a 1-1/2-D BALDUR code
Seidl, F.G.P.
1986-05-01
As discussed by Shafranov, Solov'ev, and others, two special constraints allow the Grad-Shafranov equation to yield simple analytic solutions. From the simplest solution, formulae are derived for properties of the corresponding toroidally symmetric plasma and for the space profile of poloidal magnetic flux density. These formulae constitute test criteria for code performance once the code is made consistent with the two constraints. Obtaining consistency with the first constraint is straightforward, but with the second it is circumstantial. Moreover, the poloidal flux profile of the analytic solution implies a certain artificial form for the resistivity, which is also derived. These criteria have been used to check a composite code which had been assembled by linking a geometrically generalized 1-D BALDUR transport code with a computationally efficient 2-D equilibrium code. A brief description of the composite code is given as well as of its performance with respect to the Grad-Shafranov test criteria.
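For orientation, the class of analytic solutions referred to can be summarized in a standard textbook form (given here as a reminder; it is not necessarily the specific solution used in the report):

```latex
% Grad--Shafranov equation (axisymmetric ideal MHD equilibrium):
\Delta^{*}\psi \;\equiv\;
  R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial\psi}{\partial R}\right)
  + \frac{\partial^{2}\psi}{\partial Z^{2}}
 \;=\; -\,\mu_{0}R^{2}\,\frac{dp}{d\psi} \;-\; F\,\frac{dF}{d\psi}.
% Solov'ev-type constraints: take \mu_0\,dp/d\psi and F\,dF/d\psi constant,
% so the right-hand side reduces to \Delta^{*}\psi = A R^{2} + B, which is
% linear in \psi and admits the particular solution
\psi(R,Z) \;=\; \frac{A}{8}\,R^{4} \;+\; \frac{B}{2}\,Z^{2},
% to which homogeneous solutions of \Delta^{*}\psi = 0 may be added to shape
% the plasma boundary.
```

With the two source-profile constraints imposed, closed-form flux surfaces of this kind are what furnish exact benchmark criteria for an equilibrium code.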
Costa, Míriam M; Andrade, Hélida M; Bartholomeu, Daniella C; Freitas, Leandro M; Pires, Simone F; Chapeaurouge, Alexander D; Perales, Jonas; Ferreira, André T; Giusta, Mário S; Melo, Maria N; Gazzinelli, Ricardo T
2011-05-06
Identification of novel antigens is essential for developing new diagnostic tests and vaccines. We used DIGE to compare protein expression in amastigote and promastigote forms of Leishmania chagasi. Nine hundred amastigote and promastigote spots were visualized. Five amastigote-specific, 25 promastigote-specific, and 10 proteins shared by the two parasite stages were identified. Furthermore, 41 proteins were identified in the Western blot employing 2-DE and sera from infected dogs. From these proteins, 3 and 38 were reactive with IgM and total IgG, respectively. The proteins recognized by total IgG presented different patterns in terms of their recognition by IgG1 and/or IgG2 isotypes. All the proteins selected by Western blot were mapped for B-cell epitopes. One hundred and eighty peptides were submitted to SPOT synthesis and immunoassay. A total of 25 peptides were shown to be of interest for the serodiagnosis of visceral leishmaniasis. In addition, all proteins identified in this study were mapped for T-cell epitopes using the NetCTL software, and candidates for vaccine development were selected. Therefore, a large-scale screening of the L. chagasi proteome was performed to identify new B- and T-cell epitopes with potential use in developing diagnostic tests and vaccines.
NASA Astrophysics Data System (ADS)
Pérez-Corona, M.; García, J. A.; Taller, G.; Polgár, D.; Bustos, E.; Plank, Z.
2016-02-01
The purpose of geophysical electrical surveys is to determine the subsurface resistivity distribution by making measurements on the ground surface. From these measurements, the true resistivity of the subsurface can be estimated. The ground resistivity is related to various geological parameters, such as the mineral and fluid content, porosity and degree of water saturation in the rock. Electrical resistivity surveys have been used for many decades in hydrogeological, mining and geotechnical investigations. More recently, they have been used for environmental surveys. To obtain a more accurate subsurface model than is possible with a simple 1-D model, a more complex model must be used. In a 2-D model, the resistivity values are allowed to vary in one horizontal direction (usually referred to as the x direction) but are assumed to be constant in the other horizontal (the y) direction. A more realistic model would be a fully 3-D model where the resistivity values are allowed to change in all three directions. In this research, a simulation of the cone penetration test and 2D imaging resistivity are used as tools to simulate the distribution of hydrocarbons in soil.
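The basic quantity such surveys invert can be stated concretely; a minimal sketch with hypothetical readings (the Wenner-array formula is standard, the numbers are invented):

```python
# For a Wenner array with equal electrode spacing a, the apparent resistivity
# is rho_a = 2 * pi * a * (Delta V / I). Over layered or 2-D ground, rho_a
# varies with spacing and position; that variation is the data the 1-D, 2-D or
# 3-D inversion turns into a resistivity model.

import math

def wenner_apparent_resistivity(a_m, delta_v, current_a):
    return 2.0 * math.pi * a_m * delta_v / current_a

# Example: 2 m spacing, 50 mV measured for 10 mA injected.
rho_a = wenner_apparent_resistivity(2.0, 0.050, 0.010)
print(f"{rho_a:.1f} ohm-m")  # ~62.8 ohm-m
```

Repeating such readings at increasing spacings probes progressively deeper, which is why a single sounding yields a 1-D profile and a line of soundings supports the 2-D imaging used here.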
Surrogate Guderley Test Problem Definition
Ramsey, Scott D.; Shashkov, Mikhail J.
2012-07-06
The surrogate Guderley problem (SGP) is a 'spherical shock tube' (or 'spherical driven implosion') designed to ease the notoriously subtle initialization of the true Guderley problem, while still maintaining a high degree of fidelity. In this problem (similar to the Guderley problem), an infinitely strong shock wave forms and converges in one-dimensional (1D) cylindrical or spherical symmetry through a polytropic gas with arbitrary adiabatic index γ, uniform density ρ₀, zero velocity, and negligible pre-shock pressure and specific internal energy (SIE). This shock proceeds to focus on the point or axis of symmetry at r = 0 (resulting in ostensibly infinite pressure, velocity, etc.) and reflect back out into the incoming perturbed gas.
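As a reminder of the kinematics being emulated (standard Guderley theory, not part of the SGP specification itself):

```latex
% The incoming shock radius follows a self-similar power law in time before
% focusing (t < 0):
R_s(t) = A\,(-t)^{\alpha}, \qquad \xi = \frac{r}{R_s(t)},
% where the similarity exponent \alpha(\gamma,\nu) depends on the adiabatic
% index and the geometry and is obtained as a nonlinear eigenvalue of the
% self-similar flow equations; e.g. the commonly quoted value is
% \alpha \approx 0.7172 for a spherical shock in a \gamma = 7/5 gas.
```

The subtlety of initializing the true problem stems from this eigenvalue structure, which is what the driven-implosion surrogate is designed to sidestep.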
Medical Tests for Prostate Problems
... walnut-shaped gland that is part of the male reproductive system. It has two or more lobes, or sections, ... treating problems of the urinary tract and the male reproductive system. Abdominal Ultrasound Ultrasound uses a device, called a ...
Test Reviewing: Problems and Prospects.
ERIC Educational Resources Information Center
Weiss, David J.
The Inter-Association Council on Test Reviewing (IACTR) was established in 1967 as an outcome of a committee established two years earlier by the Division of Evaluation and Measurement of the American Psychological Association. Its purpose is to facilitate the dissemination of information on, and reviews of, tests and other measurement…
Problem-Solving Test: Pyrosequencing
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2013-01-01
Terms to be familiar with before you start to solve the test: Maxam-Gilbert sequencing, Sanger sequencing, gel electrophoresis, DNA synthesis reaction, polymerase chain reaction, template, primer, DNA polymerase, deoxyribonucleoside triphosphates, orthophosphate, pyrophosphate, nucleoside monophosphates, luminescence, acid anhydride bond,…
ERIC Educational Resources Information Center
Leighty, Katherine A.; Menzel, Charles R.; Fragaszy, Dorothy M.
2008-01-01
Object recognition research is typically conducted using 2D stimuli in lieu of 3D objects. This study investigated the amount and complexity of knowledge gained from 2D stimuli in adult chimpanzees ("Pan troglodytes") and young children (aged 3 and 4 years) using a titrated series of cross-dimensional search tasks. Results indicate that 3-year-old…
Inverse problem in nondestructive testing using arrayed eddy current sensors.
Zaoui, Abdelhalim; Menana, Hocine; Feliachi, Mouloud; Berthiau, Gérard
2010-01-01
A fast crack profile reconstruction model for nondestructive testing is developed using an arrayed eddy current sensor. The inverse problem is based on iterative solution of the direct problem using genetic algorithms. In the direct problem, assuming a current excitation, the incident field produced by all the coils of the arrayed sensor is obtained by translation and superposition of the 2D axisymmetric finite element results obtained for one coil; the impedance variation of each coil due to the crack is obtained by the reciprocity principle involving the dyadic Green's function. For the inverse problem, the surface of the crack is subdivided into rectangular cells, and the objective function is expressed only in terms of the depth of each cell. The evaluation of the dyadic Green's function matrix is made independently of the iterative procedure, making the inversion very fast.
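The genetic-algorithm inversion loop described above can be sketched schematically; the stand-in forward model, cell count, and GA parameters below are invented for illustration (the real direct problem uses finite elements and the dyadic Green's function):

```python
# Schematic inversion loop: evolve candidate crack profiles (one depth per
# rectangular cell) to minimize the misfit between "measured" and modeled
# sensor responses. The forward model here is a toy linear operator.

import random
random.seed(1)

N_CELLS = 6
true_depths = [0.0, 0.2, 0.8, 0.9, 0.3, 0.0]   # hypothetical crack profile

def forward(depths):
    # Stand-in forward model: smeared response of an arrayed sensor.
    return [0.5 * depths[k] + 0.25 * (depths[k - 1] if k else 0.0)
            for k in range(N_CELLS)]

measured = forward(true_depths)

def misfit(depths):
    return sum((m - p) ** 2 for m, p in zip(measured, forward(depths)))

def evolve(pop_size=60, generations=200, mut=0.1):
    pop = [[random.random() for _ in range(N_CELLS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misfit)
        survivors = pop[:pop_size // 2]             # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_CELLS)      # one-point crossover
            child = a[:cut] + b[cut:]
            k = random.randrange(N_CELLS)           # mutate one cell depth
            child[k] = min(1.0, max(0.0, child[k] + random.gauss(0, mut)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=misfit)

best = evolve()
print(misfit(best))  # small residual: best approximates the assumed profile
```

The key efficiency point in the paper carries over: anything expensive and iteration-independent (here `measured`; there the Green's function matrix) is computed once, outside the loop.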
Farmer, M. T.; Kilsdonk, D. J.; Lomperski, S.; Aeschliman, R. W.; Basu, S.
2011-05-23
experiments to address remaining uncertainties related to long-term two-dimensional molten core-concrete interaction. In particular, for both wet and dry cavity conditions, there is uncertainty in evaluating the lateral vs. axial power split during a core-concrete interaction due to a lack of experimental data. As a result, there are differences in the 2-D cavity erosion predicted by codes such as MELCOR, WECHSL, and COSACO. The first step towards generating this data is to produce a test plan for review by the Project Review Group (PRG). The purpose of this document is to provide this plan.
Techniques utilized in the simulated altitude testing of a 2D-CD vectoring and reversing nozzle
NASA Technical Reports Server (NTRS)
Block, H. Bruce; Bryant, Lively; Dicus, John H.; Moore, Allan S.; Burns, Maureen E.; Solomon, Robert F.; Sheer, Irving
1988-01-01
Simulated altitude testing of a two-dimensional, convergent-divergent, thrust vectoring and reversing exhaust nozzle was accomplished. An important objective of this test was to develop test hardware and techniques to properly operate a vectoring and reversing nozzle within the confines of an altitude test facility. This report presents detailed information on the major test support systems utilized, the operational performance of the systems and the problems encountered, and test equipment improvements recommended for future tests. The most challenging support systems included the multi-axis thrust measurement system, vectored and reverse exhaust gas collection systems, and infrared temperature measurement systems used to evaluate and monitor the nozzle. The feasibility of testing a vectoring and reversing nozzle of this type in an altitude chamber was successfully demonstrated. Supporting systems performed as required. During reverser operation, engine exhaust gases were successfully captured and turned downstream. However, a small amount of exhaust gas spilled out the collector ducts' inlet openings when the reverser was opened more than 60 percent. The spillage did not affect engine or nozzle performance. The three infrared systems which viewed the nozzle through the exhaust collection system worked remarkably well considering the harsh environment.
NASA Astrophysics Data System (ADS)
Leblond, Jean-Baptiste; Frelat, Joël
2014-03-01
It is experimentally well-known that a crack loaded in mode I+III propagates through formation of discrete fracture facets inclined at a certain tilt angle on the original crack plane, depending on the ratio of the mode III to mode I initial stress intensity factors. Pollard et al. (1982) have proposed to calculate this angle by considering the tractions on all possible future infinitesimal facets and assuming shear tractions to be zero on that which will actually develop. In this paper we consider the opposite case of well-developed facets; the stress field near the lateral fronts of such facets becomes independent of the initial crack and essentially 2D in a plane perpendicular to the main direction of crack propagation. To determine this stress field, we solve the model 2D problem of an infinite plate containing an infinite periodic array of cracks inclined at some angle on a straight line, and loaded through uniform stresses at infinity. This is done first analytically, for small values of this angle, by combining Muskhelishvili's (1953) formalism and a first-order perturbation procedure. The formulae found for the 2D stress intensity factors are then extended in an approximate way to larger angles by using another reference solution, and finally assessed through comparison with some finite element results. To finally illustrate the possible future application of these formulae to the prediction of the stationary tilt angle, we introduce the tentative assumption that the 2D mode II stress intensity factor is zero on the lateral fronts of the facets. An approximate formula providing the tilt angle as a function of the ratio of the mode III to mode I stress intensity factors of the initial crack is deduced from there. This formula, which slightly depends on the type of loading imposed, predicts somewhat smaller angles than that of Pollard et al. (1982).
NASA Technical Reports Server (NTRS)
Costiner, Sorin; Taasan, Shlomo
1994-01-01
This paper presents multigrid (MG) techniques for nonlinear eigenvalue problems (EP) and emphasizes an MG algorithm for a nonlinear Schrödinger EP. The algorithm overcomes the difficulties involved by combining the following techniques: an MG projection coupled with backrotations for separation of solutions and treatment of difficulties related to clusters of close and equal eigenvalues; MG subspace continuation techniques for treatment of the nonlinearity; and an MG simultaneous treatment of the eigenvectors together with the nonlinearity and the global constraints. The simultaneous MG techniques reduce the large number of self-consistent iterations to only a few, or even one, MG simultaneous iteration, and keep the solutions in a neighborhood where the algorithm converges fast.
NASA Astrophysics Data System (ADS)
Humair, F.; Matasci, B.; Carrea, D.; Pedrazzini, A.; Loye, A.; Pedrozzi, G.; Nicolet, P.; Jaboyedoff, M.
2012-04-01
account the results of the experimental testing are performed and compared with the a-priori simulations. 3D simulations were performed using a software package that takes into account the effect of forest cover on the block trajectories (RockyFor 3D) and another that neglects this aspect (Rotomap; geo&soft international). 2D simulation (RocFall; Rocscience) profiles were located along the block paths deduced from the 3D simulations. The preliminary results show that: (1) high-speed movies are promising and allow us to track the blocks using video software; (2) the a-priori simulations tend to overestimate the runout distance, which is certainly due to an underestimation of the obstacles as well as to the breaking of the falling rocks, which is not taken into account in the models; (3) the trajectories deduced from both the a-priori simulations and the real-size experiment highlight the major influence of the channelized slope morphology on rock paths, as blocks tend to follow the flow direction. This indicates that the 2D simulations have to be performed along the line of flow direction.
Rua, Francesco; Sadeghi, Sheila J; Castrignanò, Silvia; Valetti, Francesca; Gilardi, Gianfranco
2015-10-01
This work reports for the first time the direct electron transfer of the Canis familiaris cytochrome P450 2D15 on glassy carbon electrodes, to provide an analytical tool as an alternative to P450 animal testing in the drug discovery process. Cytochrome P450 2D15, which corresponds to the human homologue P450 2D6, was recombinantly expressed in Escherichia coli and entrapped on glassy carbon (GC) electrodes either with the cationic polymer polydiallyldimethylammonium chloride (PDDA) or in the presence of gold nanoparticles (AuNPs). Reversible electrochemical signals of P450 2D15 were observed, with calculated midpoint potentials (E1/2) of −191 ± 5 and −233 ± 4 mV vs. Ag/AgCl for GC/PDDA/2D15 and GC/AuNPs/2D15, respectively. These experiments were then followed by the electro-catalytic activity of the immobilized enzyme in the presence of metoprolol. The latter drug is a beta-blocker used for the treatment of hypertension and is a specific marker of human P450 2D6 activity. Electrocatalysis data showed that only in the presence of AuNPs was the expected α-hydroxy-metoprolol product present, as shown by HPLC. The successful immobilization of the electroactive C. familiaris cytochrome P450 2D15 on electrode surfaces addresses the ever-increasing demand for alternative in vitro methods for a more detailed study of animal P450 enzymes' metabolism, reducing the number of animals sacrificed in preclinical tests.
NASA Technical Reports Server (NTRS)
Miller, Franklin; Bagdanove, Paul; Blake, Peter; Canavan, Ed; Cofie, Emmanuel; Crane, J. Allen; Dominquez, Kareny; Hagopian, John; Johnston, John; Madison, Tim; Miller, Dave; Oaks, Darrell; Williams, Pat; Young, Dan; Zukowski, Barbara; Zukowski, Tim
2007-01-01
The James Webb Space Telescope Integrated Science Instrument Module (ISIM) is being designed and developed at the Goddard Space Flight Center. The ISIM Thermal Distortion Testing (ITDT) program was started with the primary objective to validate the ISIM mechanical design process. The ITDT effort seeks to establish confidence and demonstrate the ability to predict thermal distortion in composite structures at cryogenic temperatures using solid element models. This program's goal is to better ensure that ISIM meets all the mechanical and structural requirements by using test results to verify or improve structural modeling techniques. The first step to accomplish the ITDT objectives was to design, and then construct, solid element models of a series of 2-D test assemblies that represent critical building blocks of the ISIM structure. Second, the actual test assemblies, consisting of composite tubes and invar end fittings, were fabricated and tested for thermal distortion. This paper presents the development of the GSFC Cryo Distortion Measurement Facility (CDMF) to meet the requirements of the ISIM 2-D test assemblies and other future ISIM testing needs. The CDMF provides efficient cooling with both a single- and a two-stage cryo-cooler. Temperature uniformity of the test assemblies during thermal transients and at steady state is accomplished by using sapphire windows for all of the optical ports on the radiation shields and by using thermal straps to cool the test assemblies. Numerical thermal models of the test assemblies were used to predict the temperature uniformity of the parts during cooldown and at steady state. Results of these models are compared to actual temperature data from the tests. Temperature sensors with a 0.25 K precision were used to ensure that test assembly gradients did not exceed 2 K laterally and 4 K axially. The thermal distortions of two assemblies were measured during six thermal cycles from 320 K to 35 K using laser interferometers. The standard
Solving Infeasibility Problems in Computerized Test Assembly.
ERIC Educational Resources Information Center
Timminga, Ellen
1998-01-01
Discusses problems of diagnosing and repairing infeasible linear-programming models in computerized test assembly. Demonstrates that it is possible to localize the causes of infeasibility, although this is not always easy. (SLD)
Transport Test Problems for Hybrid Methods Development
Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.; McDonald, Benjamin S.
2011-12-28
This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.
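The value of the variance reduction such test problems are meant to exercise can be illustrated on a toy deep-penetration estimate (a generic importance-sampling demonstration, not one of the nine ADVANTG cases):

```python
# Estimate the deep-penetration probability P(X > d) = exp(-d) for unit-mean
# exponential path lengths, first by analog sampling and then with an
# importance-sampled (stretched) exponential carrying statistical weights.

import math, random
random.seed(0)

d, n = 8.0, 100_000
exact = math.exp(-d)                       # ~3.35e-4

# Analog Monte Carlo: most histories never reach depth d, so few contribute.
analog = sum(random.expovariate(1.0) > d for _ in range(n)) / n

# Importance sampling from a flatter exponential (rate lam < 1); each
# contributing sample is scored with the weight f(x)/g(x).
lam = 1.0 / d
total = 0.0
for _ in range(n):
    x = random.expovariate(lam)
    if x > d:
        total += math.exp(-x) / (lam * math.exp(-lam * x))
imp = total / n

print(f"exact  {exact:.3e}")
print(f"analog {analog:.3e}")
print(f"biased {imp:.3e}")                 # far lower variance per history
```

Hybrid deterministic/Monte Carlo methods automate exactly this kind of biasing, using a deterministic adjoint solution to choose the sampling distribution; benchmark problems with trusted answers are what keep that machinery honest.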
Franke-Gromberg, Christine; Schüler, Grit; Hermanussen, Michael; Scheffler, Christiane
2010-01-01
The aim of this methodological anthropometric study was to compare direct anthropometry and digital two-dimensional photogrammetry in 18 male and 27 female subjects, aged 24 to 65 years, from Potsdam, Germany. In view of the rising interest in reliable biometric kephalofacial data, we focussed on head and face measurements. Out of 34 classic facial anatomical landmarks, 27 landmarks were investigated both by direct anthropometry and 2D-photogrammetry; 7 landmarks could not be localized by 2D-photogrammetry. Twenty-six kephalofacial distances were analysed both by direct anthropometry and digital 2D-photogrammetry. Kephalofacial distances are on average 7.6% shorter when obtained by direct anthropometry. The difference between the two techniques is particularly evident in total head height (vertex-gnathion) due to the fact that vertex is usually covered by hair and escapes from photogrammetry. Also the distances photographic sellion-gnathion (1.3 cm, i. e. 11.6%) and nasal-gnathion (1.2 cm, i. e. 9.4%) differ by more than one centimetre. Differences below 0.5 cm between the two techniques were found when measuring mucosa-lip-height (2.2%), gonia (3.0%), glabella-stomion (3.9%), and nose height (glabella-subnasal) (4.0%). Only the estimates of forehead width were significantly narrower when obtained by 2D-photogrammetry (-1.4 cm, -13.1%). The methodological differences increased with increasing magnitude of the kephalometric distance. Apart from these limitations, both techniques are similarly valid and may replace each other.
E-2D Advanced Hawkeye Aircraft (E-2D AHE)
2015-12-01
Selected Acquisition Report (SAR) RCS: DD-A&T(Q&A)823-364 E-2D Advanced Hawkeye Aircraft (E-2D AHE) As of FY 2017 President’s Budget Defense...Office Estimate RDT&E - Research, Development, Test, and Evaluation SAR - Selected Acquisition Report SCP - Service Cost Position TBD - To Be Determined
Problem-Solving Test: Tryptophan Operon Mutants
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2010-01-01
This paper presents a problem-solving test that deals with the regulation of the "trp" operon of "Escherichia coli." Two mutants of this operon are described: in mutant A, the operator region of the operon carries a point mutation so that it is unable to carry out its function; mutant B expresses a "trp" repressor protein unable to bind…
Herbild, Louise; Andersen, Stig E; Werge, Thomas; Rasmussen, Henrik B; Jürgens, Gesche
2013-10-01
The effect of pharmacogenetic testing for CYP450 2D6 and 2C19 on treatment costs had not yet been documented. This study used Danish patient registers to calculate healthcare costs of treating patients with diagnoses within the schizophrenic spectrum for 1 year with or without pharmacogenetic testing for polymorphisms in the genes for the CYP2D6 and CYP2C19 enzymes. In a randomized, controlled trial, stratified with respect to metabolizer genotype, 104 patients were assigned to treatment based on pharmacogenetic testing and 103 patients to treatment as usual. Random exclusion of extensive and intermediate metabolizers was used to increase the frequency of extreme metabolizers (poor metabolizers and ultrarapid metabolizers for CYP2D6) to 20% in both groups. Cost differences were analysed at several levels including (i) overall healthcare expenditure, (ii) psychiatric hospital cost, (iii) nonpsychiatric hospital cost, (iv) primary care spending and (v) pharmaceuticals. Statistically significant differences in costs of psychiatric care dependent on metabolizer status were found between intervention groups. Pharmacogenetic testing significantly reduced costs among the extreme metabolizers (poor metabolizers and ultrarapid metabolizers) to 28%. Use of primary care services and pharmaceuticals was also affected by the intervention. This study confirms earlier findings that extreme metabolizers (poor and ultrarapid metabolizers) incur higher costs than similar patients with a normal metabolizer genotype. However, this study shows that these excess costs can be reduced by pharmacogenetic testing. Pharmacogenetic testing for CYP2D6 and CYP2C19 could thus be considered as a means of curtailing high psychiatric treatment costs among extreme metabolizers.
Knowledge dimensions in hypothesis test problems
NASA Astrophysics Data System (ADS)
Krishnan, Saras; Idris, Noraini
2012-05-01
The reformation in statistics education over the past two decades has predominantly shifted the focus of statistical teaching and learning from procedural understanding to conceptual understanding. The emphasis of procedural understanding is on the formulas and calculation procedures. Meanwhile, conceptual understanding emphasizes students knowing why they are using a particular formula or executing a specific procedure. In addition, the Revised Bloom's Taxonomy offers a two-dimensional framework to describe learning objectives, comprising the six revised cognition levels of the original Bloom's taxonomy and four knowledge dimensions. Depending on the level of complexity, the four knowledge dimensions essentially distinguish basic understanding from the more connected understanding. This study identifies the factual, procedural and conceptual knowledge dimensions in hypothesis test problems. Hypothesis testing, being an important tool for making inferences about a population from sample information, is taught in many introductory statistics courses. However, researchers find that students in these courses still have difficulty in understanding the underlying concepts of hypothesis tests. Past studies also show that even though students can perform the hypothesis testing procedure, they may not understand the rationale for executing these steps or know how to apply them in novel contexts. Besides knowing the procedural steps in conducting a hypothesis test, students must have fundamental statistical knowledge and a deep understanding of the underlying inferential concepts such as the sampling distribution and the central limit theorem. By identifying the knowledge dimensions of hypothesis test problems in this study, suitable instructional and assessment strategies can be developed in future to enhance students' learning of hypothesis testing as a valuable inferential tool.
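The procedural/conceptual distinction drawn above can be made concrete with a minimal one-sample z-test (made-up numbers):

```python
# The procedural steps are: compute the statistic, then compare it (via a
# p-value) against a significance level. The concept behind them is the
# sampling distribution of the mean, whose spread sigma/sqrt(n) appears as
# the standard error below.

import math

def z_test(sample_mean, mu0, sigma, n):
    """Two-sided one-sample z-test; returns (z statistic, p-value)."""
    se = sigma / math.sqrt(n)                  # sampling-distribution spread
    z = (sample_mean - mu0) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# H0: mu = 100 vs. H1: mu != 100, known sigma = 15, n = 36, xbar = 106.
z, p = z_test(106.0, 100.0, 15.0, 36)
print(f"z = {z:.2f}, p = {p:.4f}")             # z = 2.40, p ~ 0.016
```

A student with only procedural understanding can reproduce these steps; conceptual understanding is knowing why dividing by sqrt(n) is justified and what the p-value says about the sampling distribution under the null hypothesis.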
Farmer, M. T.; Lomperski, S.; Kilsdonk, D. J.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, The Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of a third long-term 2-D Core-Concrete Interaction (CCI) experiment designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. This data report provides thermal hydraulic test results from the CCI-3 experiment, which was conducted on September 22, 2005. Test specifications for CCI-3 are provided in Table 1-1. This experiment investigated the interaction of a fully oxidized 375
Farmer, M. T.; Lomperski, S.; Kilsdonk, D. J.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, The Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of two long-term 2-D Core-Concrete Interaction (CCI) experiments designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. This data report provides thermal hydraulic test results from the CCI-2 experiment, which was conducted on August 24, 2004. Test specifications for CCI-2 are provided in Table 1-1. This experiment investigated the interaction of a fully oxidized 400 kg
Farmer, M. T.; Lomperski, S.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, the Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of two long-term 2-D Core-Concrete Interaction (CCI) experiments designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. This data report provides thermal hydraulic test results from the CCI-1 experiment, which was conducted on December 19, 2003. Test specifications for CCI-1 are provided in Table 1-1. This experiment investigated the interaction of a fully oxidized 400 kg
Motor operated valves problems tests and simulations
Pinier, D.; Haas, J.L.
1996-12-01
An analysis of the two refusals of operation of the EAS recirculation shutoff valves enabled two distinct problems to be identified on the motorized valves. First, the calculation methods for the operating torques of valves in use in the power plants are not conservative enough, which results in the misadjustment of the torque limiters installed on their motorizations. The second problem concerns the pressure locking phenomenon: a number of valves may entrap a pressure exceeding the in-line pressure between the disks, which may cause jamming of the valve. To settle the first problem, EDF has taken the following approach: determination of the friction coefficients and the efficiency of the valve and its actuator through general and specific tests and models, and definition of a new calculation method. To solve the second problem, EDF has carried out the following operations: identification of the valves whose technology enables the pressure to be entrapped (the tests and numerical simulations carried out in the Research and Development Division confirm the possibility of a "boiler" effect), determination of the necessary modifications, and development and testing of anti-boiler-effect systems.
The Finite Deformation Dynamic Sphere Test Problem
Versino, Daniele; Brock, Jerry Steven
2016-09-02
In this manuscript we describe test cases for the dynamic sphere problem in the presence of finite deformations. The spherical shell under examination is made of a homogeneous, isotropic or transversely isotropic material, and both elastic and elastic-plastic material behaviors are considered. Twenty cases, (a) to (t), are thus defined by combining material types and boundary conditions. The inner surface radius, the outer surface radius, and the material density are kept constant for all the considered test cases; their values are r_i = 10 mm, r_o = 20 mm, and ρ = 1000 kg/m³, respectively.
Llop, Jordi; Gil, Emilio; Llorens, Jordi; Miranda-Fuentes, Antonio; Gallart, Montserrat
2016-09-06
Canopy characterization is essential for pesticide dosage adjustment according to vegetation volume and density. It is especially important for fresh exportable vegetables like greenhouse tomatoes. These plants are thin and tall and are planted in pairs, which makes their characterization with electronic methods difficult. Therefore, the accuracy of a terrestrial 2D LiDAR sensor was evaluated for determining canopy parameters related to volume and density, and useful correlations between manual and electronic parameters were established for leaf area estimation. Experiments were performed in three commercial tomato greenhouses with a paired plantation system. In the electronic characterization, a LiDAR sensor scanned the plant pairs from both sides. The canopy height, canopy width, canopy volume, and leaf area were obtained. From these, other important parameters were calculated, such as the tree row volume, leaf wall area, leaf area index, and leaf area density. Manual measurements were found to overestimate the parameters compared with the LiDAR sensor. The canopy volume estimated with the scanner was found to be reliable for estimating the canopy height, volume, and density. Moreover, the LiDAR scanner could assess the high variability in canopy density along rows and hence is an important tool for generating canopy maps.
The linear separability problem: some testing methods.
Elizondo, D
2006-03-01
The notion of linear separability is used widely in machine learning research. Learning algorithms that use this concept to learn include neural networks (single layer perceptron and recursive deterministic perceptron), and kernel machines (support vector machines). This paper presents an overview of several of the methods for testing linear separability between two classes. The methods are divided into four groups: Those based on linear programming, those based on computational geometry, one based on neural networks, and one based on quadratic programming. The Fisher linear discriminant method is also presented. A section on the quantification of the complexity of classification problems is included.
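The linear-programming group of methods surveyed here reduces to a feasibility check: two classes are linearly separable exactly when some (w, b) satisfies y_i(w·x_i + b) ≥ 1 for every sample. A minimal sketch of that idea using SciPy's linprog follows; the helper name linearly_separable and the AND/XOR toy data are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    """Feasibility LP: does some (w, b) satisfy y_i * (w.x_i + b) >= 1 for all i?

    Returns True if a separating hyperplane exists (hypothetical helper
    illustrating the LP-based group of separability tests).
    """
    n, d = X.shape
    c = np.zeros(d + 1)                 # pure feasibility: objective irrelevant
    # Rewrite y_i*(w.x_i + b) >= 1 as  -y_i*[x_i, 1] @ [w, b] <= -1.
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))  # unbounded variables
    return res.status == 0              # 0 = solved, i.e. a feasible point exists

# AND is linearly separable; XOR famously is not.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
print(linearly_separable(X, np.array([-1, -1, -1, 1])))  # True  (AND)
print(linearly_separable(X, np.array([-1, 1, 1, -1])))   # False (XOR)
```

Because the objective is zero, any feasible point is optimal; infeasibility of the LP is precisely non-separability, which is why no margin maximization (as in SVMs) is needed for the yes/no question.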
Brajša, Karmen; Vujasinović, Ines; Jelić, Dubravko; Trzun, Marija; Zlatar, Ivo; Karminski-Zamola, Grace; Hranjec, Marijana
2016-12-01
Due to the poor clinical predictive power of 2D cell cultures, the standard tool for in vitro assays in the drug discovery process, there is increasing interest in developing 3D in vitro cell cultures, a biologically relevant assay feasible for the development of robust preclinical anti-cancer drug screening platforms. Herein, we tested amidino-substituted benzimidazoles and benzimidazo[1,2-a]quinolines as a small platform for comparison of antitumor activity in 2D and 3D cell culture systems and correlation with structure-activity relationships. The 3D cell culture method was applied to human breast cancer cells (SK-BR-3, MDA-MB-231, T-47D) and pancreatic cancer cells (MIA PaCa-2, PANC-1). Results obtained in the 2D and 3D models were highly comparable, but in some cases we observed significant disagreement, indicating that some prominent compounds may be discarded in the early phase of research because of false positive results. To confirm which of the cell culture systems is more accurate, in vivo profiling is needed.
Radix, P.; Leonard, M.; Papantoniou, C.; Roman, G.; Saouter, E.; Gallotti-Schmitt, S.; Thiebaud, H.; Vasseur, P.
1999-10-01
The Daphnia magna 21-d test may be required by European authorities as a criterion for the assessment of aquatic chronic toxicity for the notification of new substances. However, this test has several drawbacks: it is labor-intensive, relatively expensive, and requires the breeding of test organisms. The Brachionus calyciflorus 2-d test and Microtox chronic 22-h test do not suffer from these disadvantages and could be used as substitutes for the Daphnia 21-d test for screening assays. During this study, the toxicity of 25 chemicals was measured using both the Microtox chronic toxicity and B. calyciflorus 2-d tests, and the no-observed-effect concentrations (NOECs) were compared to those of the D. magna 21-d test. The Brachionus test was slightly less sensitive than the Daphnia test, but the correlation between the two tests was relatively good (r² = 0.54). The B. calyciflorus 2-d test, and to a lesser extent the Microtox chronic 22-h test, were able to predict the chronic toxicity values of the Daphnia 21-d test. They constitute promising cost-effective tools for chronic toxicity screening.
NASA Astrophysics Data System (ADS)
Rizzuti, G.; Gisolf, A.
2017-03-01
We study a reconstruction algorithm for the general inverse scattering problem based on the estimate of not only medium properties, as in more conventional approaches, but also wavefields propagating inside the computational domain. This extended set of unknowns is justified as a way to prevent local minimum stagnation, which is a common issue for standard methods. At each iteration of the algorithm, (i) the model parameters are obtained by solution of a convex problem, formulated from a special bilinear relationship of the data with respect to properties and wavefields (where the wavefield is kept fixed), and (ii) a better estimate of the wavefield is calculated, based on the previously reconstructed properties. The resulting scheme is computationally convenient since step (i) can greatly benefit from parallelization and the wavefield update (ii) requires modeling only in the known background model, which can be sped up considerably by factorization-based direct methods. The inversion method is successfully tested on synthetic elastic datasets.
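The alternating structure of steps (i) and (ii) can be sketched on a toy bilinear model in which data depend on the elementwise product of "properties" m and a "wavefield" w, so that fixing either unknown leaves a linear least-squares problem in the other. This is an illustration of the alternating idea only, not the paper's actual scattering formulation; note the bilinear fit is non-unique (a scaling can be traded between m and w):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
G = rng.standard_normal((n, n))            # fixed "modeling" operator
m_true = rng.uniform(1.0, 2.0, n)          # toy medium properties
w_true = rng.uniform(1.0, 2.0, n)          # toy wavefield
d = G @ (m_true * w_true)                  # data, bilinear in (m, w)

m = np.ones(n)
w = np.ones(n)
for _ in range(50):
    # (i) properties update: with w fixed, d = (G * w) @ m is linear in m
    m, *_ = np.linalg.lstsq(G * w, d, rcond=None)
    # (ii) wavefield update: with m fixed, d = (G * m) @ w is linear in w
    w, *_ = np.linalg.lstsq(G * m, d, rcond=None)

print(np.allclose(G @ (m * w), d))  # True: the data are fit exactly
```

Each half-step is a convex (here, linear least-squares) problem, mirroring how the full scheme avoids optimizing over properties and wavefields jointly.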
Li, Yan; Zhu, Zhuo R; Ou, Bao C; Wang, Ya Q; Tan, Zhou B; Deng, Chang M; Gao, Yi Y; Tang, Ming; So, Ji H; Mu, Yang L; Zhang, Lan Q
2015-02-15
Major depressive disorder is one of the most prevalent and life-threatening forms of mental illness. Traditional antidepressants often take several weeks, even months, to produce clinical effects. However, recent clinical studies have shown that ketamine, an N-methyl-D-aspartate (NMDA) receptor antagonist, exerts rapid antidepressant effects within 2 h that are long-lasting. The aim of the present study was to investigate whether the dopaminergic system is involved in the rapid antidepressant effects of ketamine. The acute administration of ketamine (20 mg/kg) significantly reduced the immobility time in the forced swim test. MK-801 (0.1 mg/kg), a more selective NMDA antagonist, also exerted rapid antidepressant-like effects. In contrast, fluoxetine (10 mg/kg) did not significantly reduce the immobility time in the forced swim test 30 min after administration. Notably, pretreatment with haloperidol (0.15 mg/kg, a nonselective dopamine D2/D3 antagonist), but not SCH23390 (0.04 and 0.1 mg/kg, a selective dopamine D1 receptor antagonist), significantly prevented the effects of ketamine or MK-801. Moreover, the administration of a sub-effective dose of ketamine (10 mg/kg) in combination with pramipexole (0.3 mg/kg, a dopamine D2/D3 receptor agonist) exerted antidepressant-like effects compared with each drug alone. In conclusion, our results indicate that dopamine D2/D3 receptors, but not D1 receptors, are involved in the rapid antidepressant-like effects of ketamine.
Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.
2013-01-01
Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, shot-receiver geometry and the data's frequency and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.
MacBurn's cylinder test problem
Shestakov, Aleksei I.
2016-02-29
This note describes a test problem for MacBurn that illustrates its performance. The source is centered inside a cylinder with an axial-extent-to-radius ratio such that each end receives 1/4 of the thermal energy. The source (fireball) is modeled either as a point or as a disk of finite radius, as described by Marrs et al. For the latter, the disk is divided into 13 equal-area segments, each approximated as a point source, and models a partially occluded fireball. If the source is modeled as a single point, one obtains very nearly the expected deposition, e.g., 1/4 of the flux on each end, and energy is conserved. If the source is modeled as a disk, both conservation and the energy fractions degrade. However, the errors decrease as the ratio of source radius to domain size decreases. Modeling the source as a disk increases run times.
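For an isotropic point source at the cylinder's center (an assumption of this sketch, not a statement from the note), the energy fraction reaching one end cap follows from the solid angle the cap subtends, f = (1 - h/√(h² + R²))/2 for a cap of radius R at axial distance h. Setting f = 1/4 fixes the half-length-to-radius ratio at 1/√3:

```python
import math

def endcap_fraction(h, R):
    """Fraction of an isotropic point source's energy reaching one end cap
    of radius R at axial distance h, from the solid angle it subtends:
    Omega/(4*pi) = (1 - cos(theta))/2 with cos(theta) = h/sqrt(h^2 + R^2)."""
    return 0.5 * (1.0 - h / math.hypot(h, R))

# endcap_fraction(h, R) = 1/4  =>  h/sqrt(h^2 + R^2) = 1/2  =>  h = R/sqrt(3)
R = 1.0
h = R / math.sqrt(3)
print(endcap_fraction(h, R))   # 0.25 on each end (so 1/2 total to both caps)
print(2 * h / R)               # full axial-extent-to-radius ratio, ~1.1547
```

This is consistent with the note's setup: a cylinder whose axial extent is chosen so that each end intercepts exactly a quarter of the emitted energy, with the remaining half striking the lateral wall.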
Zhang, Jun-Yu; Chen, Song-Chang; Chen, Yi-Yao; Li, Shu-Yuan; Zhang, Lan-Lan; Shen, Ying-Hua; Chang, Chun-Xin; Xiang, Yu-Qian; Huang, He-Feng; Xu, Chen-Ming
2017-01-01
X-linked lymphoproliferative disease type 1 (XLP1) is a rare primary immunodeficiency characterized by a clinical triad consisting of severe EBV-induced hemophagocytic lymphohistiocytosis, B-cell lymphoma, and dysgammaglobulinemia. Mutations in the SH2D1A gene have been revealed as the cause of XLP1. In this study, a pregnant woman with a history of recurrent immunodeficiency in her offspring was screened for a pathogenic variant because the proband sample was unavailable. We aimed to clarify the genetic diagnosis and provide prenatal testing for the family. A next-generation sequencing (NGS)-based multigene panel was used in carrier screening of the pregnant woman. Variants of immunodeficiency-related genes were analyzed and prioritized. The candidate variant was verified by Sanger sequencing. The possible influence of the identified variant was evaluated through an RNA assay. Amniocentesis, karyotyping, and Sanger sequencing were performed for prenatal testing. We identified a novel de novo frameshift SH2D1A pathogenic variant (c.251_255delTTTCA) in the pregnant carrier. A peripheral blood RNA assay indicated that the mutant transcript could escape nonsense-mediated mRNA decay (NMD) and might encode a C-terminally truncated protein. Information on the variant led to successful prenatal diagnosis of the fetus. In conclusion, our study clarified the genetic diagnosis and altered disease prevention for a pregnant carrier of XLP1.
Mazella, Anaïs; Albaret, Jean-Michel; Picard, Delphine
2016-01-01
To fill an important gap in the psychometric assessment of children and adolescents with impaired vision, we designed a new battery of haptic tests, called Haptic-2D, for visually impaired and sighted individuals aged five to 18 years. Unlike existing batteries, ours uses only two-dimensional raised materials that participants explore using active touch. It is composed of 11 haptic tests, measuring scanning skills, tactile discrimination skills, spatial comprehension skills, short-term tactile memory, and comprehension of tactile pictures. We administered this battery to 138 participants, half of whom were sighted (n=69), and half visually impaired (blind, n=16; low vision, n=53). Results indicated a significant main effect of age on haptic scores, but no main effect of vision or Age × Vision interaction effect. Reliability of test items was satisfactory (Cronbach's alpha, α=0.51-0.84). Convergent validity was good, as shown by a significant correlation (age partialled out) between total haptic scores and scores on the B101 test (rp=0.51, n=47). Discriminant validity was also satisfactory, as attested by a lower but still significant partial correlation between total haptic scores and the raw score on the verbal WISC (rp=0.43, n=62). Finally, test-retest reliability was good (rs=0.93, n=12; interval of one to two months). This new psychometric tool should prove useful to practitioners working with young people with impaired vision.
Implicit Monte Carlo Radiation Transport Simulations of Four Test Problems
Gentile, N
2007-08-01
Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.
A class of ejecta transport test problems
Hammerberg, James E; Buttler, William T; Oro, David M; Rousculp, Christopher L; Morris, Christopher; Mariam, Fesseha G
2011-01-31
Hydro code implementations of ejecta dynamics at shocked interfaces presume a source distribution function of particulate masses and velocities, f_0(m, v; t). Some of the properties of this source distribution function have been determined from extensive Taylor and supported-wave experiments on shock-loaded Sn interfaces of varying surface and subsurface morphology. Such experiments measure the mass moment of f_0 under vacuum conditions, assuming weak particle-particle interaction and, usually, fully inelastic capture by piezoelectric diagnostic probes. Recently, planar Sn experiments in He, Ar, and Kr gas atmospheres have been carried out to provide transport data both for machined surfaces and for coated surfaces. A hydro code model of ejecta transport usually specifies a criterion for the instantaneous temporal appearance of ejecta with source distribution f_0(m, v; t_0). Under the further assumption of separability, f_0(m, v; t_0) = f_1(m) f_2(v), the motion of particles under the influence of gas dynamic forces is calculated. For the situation of non-interacting particulates interacting with a gas via drag forces, with the assumption of separability and simplified approximations to the Reynolds-number dependence of the drag coefficient, the dynamical equation for the time evolution of the distribution function, f(r, v, m; t), can be resolved as a one-dimensional integral, which can be compared to a direct hydro simulation as a test problem. Such solutions can also be used for preliminary analysis of experimental data. We report solutions for several shape-dependent drag coefficients and analyze the results of recent planar dsh experiments in Ar and Xe.
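A minimal single-particle version of such a drag-transport test problem can be sketched by integrating a quadratic drag law for one sphere decelerating in a quiescent gas and checking against the closed-form solution, in the same spirit as comparing a hydro simulation to the resolved integral. All parameter values and the constant drag coefficient are illustrative assumptions, not taken from the report:

```python
# Toy single-particle drag transport in a quiescent gas: spherical particle,
# constant drag coefficient Cd (illustrative Ar-like gas, Sn-like 1-um particle).
Cd, rho_g, rho_p, r = 0.47, 1.7, 7300.0, 1.0e-6
k = 3.0 * Cd * rho_g / (8.0 * rho_p * r)     # quadratic drag: dv/dt = -k * v**2

v0, dt, n = 1000.0, 1.0e-8, 10000            # initial speed (m/s), step, steps
t_end = n * dt
v = v0
for _ in range(n):                           # forward-Euler integration
    v += -k * v * v * dt

v_exact = v0 / (1.0 + k * v0 * t_end)        # closed form for dv/dt = -k v^2
print(abs(v - v_exact) / v_exact < 1e-2)     # True: numerics match the analytic law
```

In a full test problem the same comparison is carried over the whole distribution f(r, v, m; t) rather than one trajectory, but the per-particle check above is the building block.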
Free-spinning-tunnel Tests of a 1/26th Scale Model of the Douglas XTB2D-1 Airplane
NASA Technical Reports Server (NTRS)
Stone, Ralph W; Berman, Theodore
1946-01-01
A spin-tunnel investigation of a 1/26-scale model of the Douglas XTB2D-1 airplane has been conducted in the Langley 20-foot free-spinning tunnel. The effects of control settings and movements upon the erect- and inverted-spin and recovery characteristics of the model were determined for various loading conditions. Tests were also performed to determine the effects of various tail modifications. The investigation included emergency spin-recovery parachute tests as well as crew-escape and rudder- and elevator-force tests. All tests were performed at an equivalent spin altitude of 20,000 feet. The recovery characteristics of the model in the original design were found to be unsatisfactory. Installation of a large ventral fin, installation of tip fins on the horizontal tail, or installation of a small ventral fin in combination with antispin fillets and a spanwise extension of the horizontal-tail surfaces satisfactorily improved the recovery characteristics of the model.
Hoffman, E.L.; Ammerman, D.J.
1993-08-01
A series of tests investigating dynamic pulse buckling of a cylindrical shell under axial impact is compared to several finite element simulations of the event. The purpose of the study is to compare the performance of the various analysis codes and element types with respect to a problem which is applicable to radioactive material transport packages, and ultimately to develop a benchmark problem to qualify finite element analysis codes for the transport package design industry.
Testing Developmental Pathways to Antisocial Personality Problems
ERIC Educational Resources Information Center
Diamantopoulou, Sofia; Verhulst, Frank C.; van der Ende, Jan
2010-01-01
This study examined the development of antisocial personality problems (APP) in young adulthood from disruptive behaviors and internalizing problems in childhood and adolescence. Parent ratings of 507 children's (aged 6-8 years) symptoms of attention deficit hyperactivity disorder, oppositional defiant disorder, and anxiety, were linked to…
Voracek, Martin; Bagdonas, Albinas; Dressler, Stefan G
2007-09-01
The second-to-fourth digit ratio (2D:4D) is a sexually dimorphic somatic trait and has been proposed as a biomarker for the organizational, i.e., permanent, effects of prenatal testosterone on the human brain. Accordingly, recent research has related 2D:4D to a variety of sex-dependent, hormonally influenced traits and phenotypes. The geographical variation in typical 2D:4D is marked and presently poorly understood. This study presents the first investigation into the 2D:4D ratio in a Baltic country. A contemporary sample of 109 Lithuanian men and women was compared with data from a historical sample of 100 Lithuanian men and women, collected and published in the 1880s and rediscovered only now. The findings included the following lines of evidence: (i) seen in an international perspective, the average 2D:4D in Lithuania is low; (ii) there was a sex difference in 2D:4D in the expected direction in both samples; (iii) a previously adduced hypothesis of an association of lighter eye and hair color with higher, i.e., more feminized, 2D:4D received no support in either sample; and (iv) the average 2D:4D in the contemporary sample was higher than in the historical sample. In view of a hypothesized increase in 2D:4D in modern populations, owing to increased environmental levels of endocrine disruptors such as xenoestrogens, this latter finding appears to be of particular note. However, because finger-length measurement methods differed across the samples, it cannot be safely ruled out that the apparent time trend in Lithuanian 2D:4D is in truth an artifact. The puzzling geographical pattern seen in the 2D:4D ratio and the question of possible time trends therein deserve further investigation.
Problems and Issues in Translating International Educational Achievement Tests
ERIC Educational Resources Information Center
Arffman, Inga
2013-01-01
The article reviews research and findings on problems and issues faced when translating international academic achievement tests. The purpose is to draw attention to the problems, to help to develop the procedures followed when translating the tests, and to provide suggestions for further research. The problems concentrate on the following: the…
NASA Astrophysics Data System (ADS)
Morgan, J. P.; de Monserrat, A.; Hall, R.; Taramon, J. M.; Perez-Gussinye, M.
2015-12-01
This work focuses on improving current 2D numerical approaches to modeling the boundary conditions associated with computing accurate deformation and melting during continental rifting. Recent models primarily use far-field boundary conditions that have been used for decades with little assessment of their effects on asthenospheric flow beneath the rifting region. All are clearly extremely oversimplified: Huismans and Buiter assume there is no vertical flow into the rifting region, with the asthenosphere flowing uniformly into the rifting region from the sides beneath lithosphere moving in the opposing direction; Armitage et al. and van Wijk use divergent velocities on the upper boundary to impose break-up within a Cartesian box; other studies generally assume there is uniform horizontal flow away from the center of rifting, with uniform vertical flow replenishing the material pulled out of the sides of the computational region. All are likely to significantly shape the pattern of asthenospheric flow beneath the stretching lithosphere that is associated with pressure-release melting and rift volcanism. Thus, while ALL may lead to similar predictions of the effects of crustal stretching and thinning, NONE may lead to accurate determination of the asthenospheric flow and melting associated with lithospheric stretching and breakup. Here we discuss a suite of numerical experiments that compare these choices to likely more realistic boundary condition choices, such as the analytical solution for flow associated with two diverging plates stretching over a finite-width region, and a high-resolution 2-D region embedded within a cylindrical-annulus 'whole mantle cross-section' at 5% extra numerical problem size. Our initial results imply that the choice of far-field boundary conditions does indeed significantly influence predicted melting distributions and melt volumes associated with continental breakup. For calculations including asthenospheric melting
Errors in Standardized Tests: A Systemic Problem.
ERIC Educational Resources Information Center
Rhoades, Kathleen; Madaus, George
The nature and extent of human error in educational testing over the past 25 years were studied. In contrast to the random measurement error expected in all tests, the presence of human error is unexpected and brings unknown, often harmful, consequences for students and their schools. Using data from a variety of sources, researchers found 103…
ERIC Educational Resources Information Center
Veldkamp, Bernard P.; Verschoor, Angela J.; Eggen, Theo J. H. M.
2010-01-01
Overexposure and underexposure of items in the bank are serious problems in operational computerized adaptive testing (CAT) systems. These exposure problems might result in item compromise, or point at a waste of investments. The exposure control problem can be viewed as a test assembly problem with multiple objectives. Information in the test has…
Computerized Diagnostic Testing: Problems and Possibilities.
ERIC Educational Resources Information Center
McArthur, David L.
The use of computers to build diagnostic inferences is explored in two contexts. In computerized monitoring of liquid oxygen systems for the space shuttle, diagnoses are exact because they can be derived within a world which is closed. In computerized classroom testing of reading comprehension, programs deliver a constrained form of adaptive…
Problem-Solving Test: Southwestern Blotting
ERIC Educational Resources Information Center
Szeberényi, József
2014-01-01
Terms to be familiar with before you start to solve the test: Southern blotting, Western blotting, restriction endonucleases, agarose gel electrophoresis, nitrocellulose filter, molecular hybridization, polyacrylamide gel electrophoresis, proto-oncogene, c-abl, Src-homology domains, tyrosine protein kinase, nuclear localization signal, cDNA,…
Problem-Solving Test: Restriction Endonuclease Mapping
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2011-01-01
The term "restriction endonuclease mapping" covers a number of related techniques used to identify specific restriction enzyme recognition sites on small DNA molecules. A method for restriction endonuclease mapping of a 1,000-basepair (bp)-long DNA molecule is described in the fictitious experiment of this test. The most important fact needed to…
American History's Problem with Standardized Testing
ERIC Educational Resources Information Center
McCoog, Ian J.
2005-01-01
This article looks at current research concerning how students best learn the discipline of history, commentaries both in favor of and against standardized testing, and basic philosophical beliefs about the discipline. It explains methods of how to incorporate differentiated lessons and performance based assessments to NCLB standards and…
Santangelo, Andrea; Provensi, Gustavo; Costa, Alessia; Blandina, Patrizio; Ricca, Valdo; Crescimanno, Giuseppe; Casarrubea, Maurizio; Passani, M Beatrice
2017-02-01
Markers of histaminergic dysregulation were found in several neuropsychiatric disorders characterized by repetitive behaviours, thoughts and stereotypies. We analysed the effect of acute histamine depletion by means of i.c.v. injections of alpha-fluoromethylhistidine, a blocker of histidine decarboxylase, on the temporal organization of motor sequences of CD1 mice behaviour in the open-field test. An ethogram encompassing 9 behavioural components was employed. Durations and frequencies were only slightly affected by treatments. However, as revealed by multivariate t-pattern analysis, histamine depletion was associated with a striking increase in the number of behavioural patterns. We found 42 patterns of different composition occurring, on average, 520.90 ± 50.23 times per mouse in the histamine depleted (HD) group, whereas controls showed 12 different patterns occurring on average 223.30 ± 20.64 times. Exploratory and grooming behaviours clustered separately, and the increased pattern complexity involved exclusively exploratory patterns. To test the hypothesis of a histamine-dopamine interplay on behavioural pattern phenotype, non-sedative doses of the D2/D3 antagonist sulpiride (12.5-25-50 mg/kg) were additionally administered to different groups of HD mice. Sulpiride counterbalanced the enhancement of exploratory patterns of different composition, but it did not affect the mean number of patterns at any of the doses used. Our results provide new insights into the role of histamine in repetitive behavioural sequences of freely moving mice. Histamine deficiency is correlated with a general enhancement of pattern complexity. This study supports a putative involvement of histamine in the pathophysiology of tics and related disorders.
Crash test for the Copenhagen problem.
Nagler, Jan
2004-06-01
The Copenhagen problem is a simple model in celestial mechanics. It serves to investigate the behavior of a small body under the gravitational influence of two equally heavy primary bodies. We present a partition of orbits into classes of various kinds of regular motion, chaotic motion, escape and crash. Collisions of the small body onto one of the primaries turn out to be unexpectedly frequent, and their probability displays a scale-free dependence on the size of the primaries. The analysis reveals a high degree of complexity so that long term prediction may become a formidable task. Moreover, we link the results to chaotic scattering theory and the theory of leaking Hamiltonian systems.
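The orbit classification described in this abstract can be sketched in a few lines. The following is a toy reconstruction, not the author's code: it integrates the rotating-frame equations of the planar circular restricted three-body problem with equal primary masses (mu = 1/2, the Copenhagen case) and labels a trajectory as a crash onto a primary, an escape, or bounded motion; the primary radius, time horizon, step size, and escape distance are illustrative assumptions.

```python
import math

MU = 0.5  # Copenhagen case: equal primary masses

def accel(x, y, vx, vy):
    # Rotating-frame equations of the planar circular restricted
    # three-body problem; primaries fixed at (-MU, 0) and (1-MU, 0).
    r1 = math.hypot(x + MU, y)
    r2 = math.hypot(x - 1 + MU, y)
    ax = 2*vy + x - (1 - MU)*(x + MU)/r1**3 - MU*(x - 1 + MU)/r2**3
    ay = -2*vx + y - (1 - MU)*y/r1**3 - MU*y/r2**3
    return ax, ay

def classify(x, y, vx, vy, radius=0.05, t_max=20.0, dt=1e-3):
    """Integrate with RK4; return 'crash1', 'crash2', 'escape', or 'bounded'."""
    def deriv(s):
        px, py, pvx, pvy = s
        pax, pay = accel(px, py, pvx, pvy)
        return (pvx, pvy, pax, pay)
    s = (x, y, vx, vy)
    t = 0.0
    while t < t_max:
        k1 = deriv(s)
        k2 = deriv(tuple(si + 0.5*dt*ki for si, ki in zip(s, k1)))
        k3 = deriv(tuple(si + 0.5*dt*ki for si, ki in zip(s, k2)))
        k4 = deriv(tuple(si + dt*ki for si, ki in zip(s, k3)))
        s = tuple(si + dt/6*(a + 2*b + 2*c + d)
                  for si, a, b, c, d in zip(s, k1, k2, k3, k4))
        t += dt
        px, py = s[0], s[1]
        if math.hypot(px + MU, py) < radius:   # collision with primary 1
            return 'crash1'
        if math.hypot(px - 1 + MU, py) < radius:  # collision with primary 2
            return 'crash2'
        if math.hypot(px, py) > 10.0:          # treat as escape
            return 'escape'
    return 'bounded'
```

Sweeping `classify` over a grid of initial conditions yields the kind of partition into crash, escape, and regular-motion basins the abstract describes.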
Group Testing: Four Student Solutions to a Classic Optimization Problem
ERIC Educational Resources Information Center
Teague, Daniel
2006-01-01
This article describes several creative solutions developed by calculus and modeling students to the classic optimization problem of testing in groups to find a small number of individuals who test positive in a large population.
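One classic answer to this optimization problem is Dorfman pooling (a minimal sketch; the students' solutions described in the article are not reproduced here): test pools of size k, retest every member of a positive pool individually, and choose k to minimize the expected number of tests per person.

```python
def expected_tests_per_person(p, k):
    """Dorfman pooling: one pooled test per group of k, plus k
    individual retests if the pool is positive, which happens
    with probability 1 - (1-p)**k for prevalence p."""
    return 1.0/k + 1.0 - (1.0 - p)**k

def optimal_group_size(p, k_max=100):
    """Group size minimizing the expected number of tests per person."""
    return min(range(2, k_max + 1),
               key=lambda k: expected_tests_per_person(p, k))
```

For a prevalence of 1%, the optimal pool size comes out near 11, with roughly 0.2 tests per person, about a five-fold saving over individual testing.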
Clue Insensitivity in Remote Associates Test Problem Solving
ERIC Educational Resources Information Center
Smith, Steven M.; Sifonis, Cynthia M.; Angello, Genna
2012-01-01
Does spreading activation from incidentally encountered hints cause incubation effects? We used Remote Associates Test (RAT) problems to examine effects of incidental clues on impasse resolution. When solution words were seen incidentally 3-sec before initially unsolved problems were retested, more problems were resolved (Experiment 1). When…
New Testing Methods to Assess Technical Problem-Solving Ability.
ERIC Educational Resources Information Center
Hambleton, Ronald K.; And Others
Tests to assess problem-solving ability being provided for the Air Force are described, and some details on the development and validation of these computer-administered diagnostic achievement tests are discussed. Three measurement approaches were employed: (1) sequential problem solving; (2) context-free assessment of fundamental skills and…
NASA Astrophysics Data System (ADS)
Lotsch, Bettina V.
2015-07-01
Graphene's legacy has become an integral part of today's condensed matter science and has equipped a whole generation of scientists with an armory of concepts and techniques that open up new perspectives for the post-graphene era. In particular, the judicious combination of 2D building blocks into vertical heterostructures has recently been identified as a promising route to rationally engineer complex multilayer systems and artificial solids with intriguing properties. The present review highlights recent developments in the rapidly emerging field of 2D nanoarchitectonics from a materials chemistry perspective, with a focus on the types of heterostructures available, their assembly strategies, and their emerging properties. This overview is intended to bridge the gap between two major—yet largely disjunct—developments in 2D heterostructures, which are firmly rooted in solid-state chemistry or physics. Although the underlying types of heterostructures differ with respect to their dimensions, layer alignment, and interfacial quality, there is common ground, and future synergies between the various assembly strategies are to be expected.
Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems
NASA Technical Reports Server (NTRS)
Balling, R. J.; Wilkinson, C. A.
1997-01-01
A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
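The defining property of these synthetic problems, that every variable and the objective equal unity at the optimum, can be illustrated with a toy two-discipline analogue. This is an assumption-laden sketch, not the actual Balling-Wilkinson construction: two coupled state equations are resolved by Gauss-Seidel iteration, and with the design variables at their lower bounds of 1 the states and the objective all equal 1.

```python
def solve_states(x1, x2, tol=1e-12):
    """Gauss-Seidel fixed-point iteration on the (hypothetical)
    coupled state equations  y1 = (x1^2 + y2)/2,  y2 = (x2^2 + y1)/2.
    At x1 = x2 = 1 the solution is y1 = y2 = 1."""
    y1 = y2 = 0.0
    for _ in range(1000):
        y1_new = 0.5*(x1**2 + y2)
        y2_new = 0.5*(x2**2 + y1_new)
        if abs(y1_new - y1) + abs(y2_new - y2) < tol:
            y1, y2 = y1_new, y2_new
            break
        y1, y2 = y1_new, y2_new
    return y1, y2

def objective(x1, x2):
    """Objective with value exactly 1 at the all-ones optimum,
    subject to the design constraints x1 >= 1, x2 >= 1."""
    y1, y2 = solve_states(x1, x2)
    return 0.25*(x1**2 + x2**2 + y1**2 + y2**2)
```

Because the states grow monotonically with the design variables, the constrained minimum sits at x1 = x2 = 1 with objective 1, mimicking the unity-optimum property of the paper's test family.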
Rotation invariance principles in 2D/3D registration
NASA Astrophysics Data System (ADS)
Birkfellner, Wolfgang; Wirth, Joachim; Burgstaller, Wolfgang; Baumann, Bernard; Staedele, Harald; Hammer, Beat; Gellrich, Niels C.; Jacob, Augustinus L.; Regazzoni, Pietro; Messmer, Peter
2003-05-01
2D/3D patient-to-computed tomography (CT) registration is a method to determine a transformation that maps two coordinate systems by comparing a projection image rendered from CT to a real projection image. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 2D/3D registration is the fact that finding a registration involves solving a minimization problem in six degrees of freedom of motion. This results in considerable time expense since for each iteration step at least one volume rendering has to be computed. We show that by choosing an appropriate world coordinate system and by applying a 2D/2D registration method in each iteration step, the number of iterations can be grossly reduced from n^6 to n^5. Here, n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations necessary to at least 1/3 of its original value. The method was implemented and extensively tested on simulated x-ray images of a pelvis. We conclude that this hardware-independent optimization of 2D/3D registration is a step towards increasing the acceptance of this promising method for a wide range of clinical applications.
2D semiconductor optoelectronics
NASA Astrophysics Data System (ADS)
Novoselov, Kostya
The advent of graphene and related 2D materials has recently led to a new technology: heterostructures based on these atomically thin crystals. The paradigm proved itself extremely versatile and led to rapid demonstration of tunnelling diodes with negative differential resistance, tunnelling transistors, photovoltaic devices, etc. By taking the complexity and functionality of such van der Waals heterostructures to the next level we introduce quantum wells engineered with one atomic plane precision. Light emission from such quantum wells, quantum dots and polaritonic effects will be discussed.
Group Work Tests for Context-Rich Problems
ERIC Educational Resources Information Center
Meyer, Chris
2016-01-01
The group work test is an assessment strategy that promotes higher-order thinking skills for solving context-rich problems. With this format, teachers are able to pose challenging, nuanced questions on a test, while providing the support weaker students need to get started and show their understanding. The test begins with a group discussion…
Development of a Test of Experimental Problem-Solving Skills.
ERIC Educational Resources Information Center
Ross, John A.; Maynes, Florence J.
1983-01-01
Multiple-choice tests were constructed for seven problem-solving skills using learning hierarchies based on expert-novice differences and refined in three phases of field testing. Includes test reliabilities (sufficient for making judgments of group performance but insufficient in single-administration for individual assessment), validity, and…
Problems in Testing the Intonation of Advanced Foreign Learners.
ERIC Educational Resources Information Center
Mendelsohn, David
1978-01-01
It is argued that knowledge about the testing of intonation in English as a foreign language is inadequate; the major problems are outlined and tentative suggestions are given. The basic problem is that the traditional foreign language teacher's conception of intonation is limited. A three-part definition of intonation is favored, with suggestions…
Invitational Conference on Testing Problems (New York, November 2, 1968).
ERIC Educational Resources Information Center
Educational Testing Service, Princeton, NJ.
The 1968 Invitational Conference on Testing Problems dealt with educational evaluation and the problems of the socially disadvantaged. Papers presented in Session I, Educational Evaluation--Various Levels and Aspects, were: (1) "The Comparative Field Experiment: An Illustration from High school Biology" by Richard C. Anderson; (2) "Evaluation of…
Some Current Problems in Simulator Design, Testing and Use.
ERIC Educational Resources Information Center
Caro, Paul W.
Concerned with the general problem of the effectiveness of simulator training, this report reflects information developed during the conduct of aircraft simulator training research projects sponsored by the Air Force, Army, Navy, and Coast Guard. Problems are identified related to simulator design, testing, and use, all of which impact upon…
Mammalian Toxicology Testing: Problem Definition Study. Capability Modules.
1981-04-01
Life Systems, Inc., Cleveland, OH. Supporting document for the Mammalian Toxicology Testing: Problem Definition Study, Capability Modules, covering the report period 15 December 1980 to 5 April 1981.
Cattaneo, Cristina; Cantatore, Angela; Ciaffi, Romina; Gibelli, Daniele; Cigada, Alfredo; De Angelis, Danilo; Sala, Remo
2012-01-01
Identification from video surveillance systems is frequently requested in forensic practice. The "3D-2D" comparison has proven to be reliable in assessing identification but still requires standardization; this study concerns the validation of the 3D-2D profile comparison. The 3D models of the faces of five individuals were compared with photographs from the same subjects as well as from another 45 individuals. The differences in area and in distance between maxima points (glabella, tip of nose, fore point of upper and lower lips, pogonion) and minima points (selion, subnasale, stomion, suprapogonion) were measured. The highest difference in area between the 3D model and the 2D image was between 43 and 133 mm^2 in the five matches and always greater than 157 mm^2 in mismatches; the mean distance between the points was greater than 1.96 mm in mismatches and less than 1.9 mm in the five matches (p < 0.05). These results indicate that this difference in areas may point toward a manner of distinguishing "correct" from "incorrect" matches.
Brittle damage models in DYNA2D
Faux, D.R.
1997-09-01
DYNA2D is an explicit Lagrangian finite element code used to model dynamic events where stress wave interactions influence the overall response of the system. DYNA2D is often used to model penetration problems involving ductile-to-ductile impacts; however, with the advent of the use of ceramics in the armor-anti-armor community and the need to model damage to laser optics components, good brittle damage models are now needed in DYNA2D. This report will detail the implementation of four brittle damage models in DYNA2D, three scalar damage models and one tensor damage model. These new brittle damage models are then used to predict experimental results from three distinctly different glass damage problems.
ERIC Educational Resources Information Center
Charalambous, Charalambos; Kyriakides, Leonidas; Philippou, George
2003-01-01
The study reported in this paper is an attempt to develop a comprehensive model of measuring problem solving and posing (PSP) skills based on Marshall's schema theory (ST). A battery of tests on PSP skills was administered to 5th and 6th grade Cypriot students (n=2519). The Rasch model was used and a scale was created for the battery of tests and…
Numerical Testing of Parameterization Schemes for Solving Parameter Estimation Problems
2008-12-01
Velázquez, L.; Argáez, M.; Quintero, C.
In this paper we present the numerical performance of three parameterization approaches, SVD, wavelets, and the combination of wavelet-SVD, for solving automated parameter estimation problems based on the SPSA described in previous reports of this
Ability evaluation by binary tests: Problems, challenges & recent advances
NASA Astrophysics Data System (ADS)
Bashkansky, E.; Turetsky, V.
2016-11-01
Binary tests designed to measure abilities of objects under test (OUTs) are widely used in different fields of measurement theory and practice. The number of test items in such tests is usually very limited. The response to each test item provides only one bit of information per OUT. The problem of correct ability assessment is even more complicated, when the levels of difficulty of the test items are unknown beforehand. This fact makes the search for effective ways of planning and processing the results of such tests highly relevant. In recent years, there has been some progress in this direction, generated by both the development of computational tools and the emergence of new ideas. The latter are associated with the use of so-called “scale invariant item response models”. Together with maximum likelihood estimation (MLE) approach, they helped to solve some problems of engineering and proficiency testing. However, several issues related to the assessment of uncertainties, replications scheduling, the use of placebo, as well as evaluation of multidimensional abilities still present a challenge for researchers. The authors attempt to outline the ways to solve the above problems.
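The maximum likelihood estimation approach mentioned above can be illustrated with the simplest item response model, the Rasch model. This is a generic sketch under that assumption; the authors' scale-invariant models and estimators may differ. Newton-Raphson iteration solves the score equation that the number of correct responses equal the sum of predicted success probabilities.

```python
import math

def rasch_mle(responses, difficulties, iters=50):
    """Newton-Raphson MLE of the ability theta in the Rasch model,
    P(correct on item i) = 1 / (1 + exp(-(theta - b_i))).
    Assumes at least one correct and one incorrect response;
    otherwise the MLE is infinite."""
    theta = 0.0
    for _ in range(iters):
        p = [1.0/(1.0 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(r - pi for r, pi in zip(responses, p))  # score function
        info = sum(pi*(1.0 - pi) for pi in p)              # Fisher information
        theta += grad/info
        if abs(grad) < 1e-10:
            break
    return theta
```

The Fisher information accumulated in `info` also gives the usual uncertainty estimate 1/sqrt(info), which is small precisely when the test has enough items near the examinee's ability, the difficulty the abstract highlights for short binary tests.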
2-d Finite Element Code Postprocessor
Sanford, L. A.; Hallquist, J. O.
1996-07-15
ORION is an interactive program that serves as a postprocessor for the analysis programs NIKE2D, DYNA2D, TOPAZ2D, and CHEMICAL TOPAZ2D. ORION reads binary plot files generated by the two-dimensional finite element codes currently used by the Methods Development Group at LLNL. Contour and color fringe plots of a large number of quantities may be displayed on meshes consisting of triangular and quadrilateral elements. ORION can compute strain measures, interface pressures along slide lines, reaction forces along constrained boundaries, and momentum. ORION has been applied to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.
Development of a test of experimental problem-solving skills
NASA Astrophysics Data System (ADS)
Ross, John A.; Maynes, Florence J.
The emphasis given to experimental problem-solving skills in science curriculum innovation has not been matched by the development of comparable assessment tools. Multiple-choice tests were constructed for seven skills using learning hierarchies based on expert-novice differences. The instruments were refined in three phases of field testing. The reliabilities of the tests are sufficient for making judgments of group performance, but are insufficient in a single administration for individual assessment. Evidence of the validity of the tests is presented and their worth is discussed within the framework of a theory of instruction.
Group Work Tests for Context-Rich Problems
NASA Astrophysics Data System (ADS)
Meyer, Chris
2016-05-01
The group work test is an assessment strategy that promotes higher-order thinking skills for solving context-rich problems. With this format, teachers are able to pose challenging, nuanced questions on a test, while providing the support weaker students need to get started and show their understanding. The test begins with a group discussion phase, when students are given a "number-free" version of the problem. This phase allows students to digest the story-like problem, explore solution ideas, and alleviate some test anxiety. After 10-15 minutes of discussion, students inform the instructor of their readiness for the individual part of the test. What follows next is a pedagogical phase change from lively group discussion to quiet individual work. The group work test is a natural continuation of the group work in our daily physics classes and helps reinforce the importance of collaboration. This method has met with success at York Mills Collegiate Institute, in Toronto, Ontario, where it has been used consistently for unit tests and the final exam of the grade 12 university preparation physics course.
Online Discovery of Search Objectives for Test-Based Problems.
Liskowski, Paweł; Krawiec, Krzysztof
2016-03-08
In test-based problems, commonly approached with competitive coevolutionary algorithms, the fitness of a candidate solution is determined by the outcomes of its interactions with multiple tests. Usually, fitness is a scalar aggregate of interaction outcomes, and as such imposes a complete order on the candidate solutions. However, passing different tests may require unrelated "skills," and candidate solutions may vary with respect to such capabilities. In this study, we provide theoretical evidence that scalar fitness, inherently incapable of capturing such differences, is likely to lead to premature convergence. To mitigate this problem, we propose DISCO, a method that automatically identifies the groups of tests for which the candidate solutions behave similarly and define the above skills. Each such group gives rise to a derived objective, and these objectives together guide the search algorithm in multi-objective fashion. When applied to several well-known test-based problems, the proposed approach significantly outperforms the conventional two-population coevolution. This opens the door to efficient and generic countermeasures to premature convergence for both coevolutionary and evolutionary algorithms applied to problems featuring aggregating fitness functions.
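A much-simplified version of the grouping idea behind DISCO can be sketched as follows: tests whose outcome columns are identical across candidates are merged into one derived objective (the published method clusters heuristically rather than by exact equality, so this is only an illustration of the principle).

```python
from collections import defaultdict

def derived_objectives(interactions):
    """interactions[s][t] = 1 if candidate s passes test t, else 0.
    Tests with identical outcome columns are treated as exercising
    the same underlying 'skill'; each group yields one derived
    objective, the candidate's mean outcome on that group's tests."""
    n_tests = len(interactions[0])
    groups = defaultdict(list)
    for t in range(n_tests):
        column = tuple(row[t] for row in interactions)
        groups[column].append(t)
    objectives = []
    for row in interactions:
        objectives.append([sum(row[t] for t in ts)/len(ts)
                           for ts in groups.values()])
    return objectives
```

The resulting objective vectors can then drive a multi-objective selection scheme instead of the scalar aggregate fitness that the study shows leads to premature convergence.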
A New Powerful Nonparametric Rank Test for Ordered Alternative Problem
Shan, Guogen; Young, Daniel; Kang, Le
2014-01-01
We propose a new nonparametric test for the ordered alternative problem based on the rank difference between two observations from different groups. These groups are assumed to be independent from each other. The exact mean and variance of the test statistic under the null distribution are derived, and its asymptotic distribution is proven to be normal. Furthermore, an extensive power comparison between the new test and other commonly used tests shows that the new test is generally more powerful than others under various conditions, including the same type of distribution, and mixed distributions. A real example from an anti-hypertensive drug trial is provided to illustrate the application of the tests. The new test is therefore recommended for use in practice due to easy calculation and substantial power gain. PMID:25405757
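For context, the classical rank statistic for the ordered alternative, the Jonckheere-Terpstra count of concordant between-group pairs, can be computed as below. Note that the paper's proposed statistic is a different rank-difference construction not reproduced here; this is the standard comparator such power studies measure against.

```python
def jonckheere_statistic(groups):
    """Jonckheere-Terpstra statistic for the ordered alternative
    H1: group 1 <= group 2 <= ... Counts, over all pairs of groups
    i < j, the observation pairs with x_i < x_j (ties count 0.5)."""
    stat = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            for a in groups[i]:
                for b in groups[j]:
                    if a < b:
                        stat += 1.0
                    elif a == b:
                        stat += 0.5
    return stat
```

Under the null the statistic's expectation is half the total number of between-group pairs, so values near the maximum support the ordered alternative.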
Proceedings of the 1968 Invitational Conference on Testing Problems.
ERIC Educational Resources Information Center
Educational Testing Service, Princeton, NJ.
The Invitational Conference on Testing Problems has been a major convocation among the various annual meetings of those who are concerned with educational measurement. The first paper by Richard Anderson deals with the evaluation of a small part of an instructional program and deals with it in an experimental fashion. The presentation by Ethna…
Testing the Cue Dependence of Problem-Solving-Induced Forgetting
ERIC Educational Resources Information Center
Storm, Benjamin C.; Koppel, Rebecca H.
2012-01-01
Thinking and remembering can cause forgetting. In the context of remembering, retrieving one item can cause the forgetting of other items (Anderson, Bjork, & Bjork, 1994). A similar phenomenon has been observed in the context of creative problem solving--attempting to generate a target associate in the Remote Associates Test (RAT) can cause…
Simultaneous Testing of McNemar's Problem for Several Populations
ERIC Educational Resources Information Center
Hamdan, M. A.; And Others
1975-01-01
Four different extensions to McNemar's problem concerning the hypothesis of equal probabilities for the unlike pairs of correlated binary variables are considered, each for testing simultaneous equality of proportions of unlike pairs in c independent populations of correlated binary variables, but each under different assumptions and/or additional…
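The single-population McNemar statistic on which these extensions build uses only the discordant pair counts b and c (pairs where exactly one of the two correlated binary variables is positive); the chi-square form is shown below. Combining such statistics across c independent populations, as the paper's extensions do, is not reproduced here.

```python
def mcnemar_chi2(b, c):
    """McNemar's chi-square for paired binary data (1 degree of
    freedom), testing equal probabilities for the two kinds of
    unlike (discordant) pairs: counts b and c."""
    if b + c == 0:
        raise ValueError("no discordant pairs")
    return (b - c)**2 / (b + c)
```

For example, with b = 15 and c = 5 discordant pairs the statistic is 5.0, which exceeds the 5% critical value of 3.84 for one degree of freedom.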
Generates 2D Input for DYNA NIKE & TOPAZ
Hallquist, J. O.; Sanford, Larry
1996-07-15
MAZE is an interactive program that serves as an input and two-dimensional mesh generator for DYNA2D, NIKE2D, TOPAZ2D, and CHEMICAL TOPAZ2D. MAZE also generates a basic template for ISLAND input. MAZE has been applied to the generation of input data to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.
MAZE96. Generates 2D Input for DYNA NIKE & TOPAZ
Sanford, L.; Hallquist, J.O.
1992-02-24
MAZE is an interactive program that serves as an input and two-dimensional mesh generator for DYNA2D, NIKE2D, TOPAZ2D, and CHEMICAL TOPAZ2D. MAZE also generates a basic template for ISLAND input. MAZE has been applied to the generation of input data to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.
[Ethical problems of hygienic tests in occupational medicine].
At'kov, O Yu; Gorokhova, S G
2016-01-01
The authors discuss bioethical problems arising from the use of genetic tests as a technology of personalized medicine for the prevention and early diagnosis of occupational diseases, connected with the question "Who has a right to know the results of a genetic test?". The analysis covers the principles and legal norms regulating human rights to security of health information, and the causes of anxiety about workers' discrimination due to genetic test results. The authors stress the need to distinguish between discrimination and reasonable restrictions that benefit workers in cases when working conditions can be a health hazard for a person with a genetic predisposition.
Can IRT Solve the Missing Data Problem in Test Equating?
Bolsinova, Maria; Maris, Gunter
2016-01-01
In this paper test equating is considered as a missing data problem. The unobserved responses of the reference population to the new test must be imputed to specify a new cutscore. The proportion of students from the reference population that would have failed the new exam and those having failed the reference exam are made approximately the same. We investigate whether item response theory (IRT) makes it possible to identify the distribution of these missing responses and the distribution of test scores from the observed data without parametric assumptions for the ability distribution. We show that while the score distribution is not fully identifiable, the uncertainty about the score distribution on the new test due to non-identifiability is very small. Moreover, ignoring the non-identifiability issue and assuming a normal distribution for ability may lead to bias in test equating, which we illustrate in simulated and empirical data examples. PMID:26779074
NASA Astrophysics Data System (ADS)
Slanger, T. G.; Cosby, P. C.; Huestis, D. L.
2003-04-01
N(^2D) is an important species in the nighttime ionosphere, as its reaction with O_2 is a principal source of NO. Its modeled concentration peaks near 200 km, at approximately 4 × 10^5 cm^-3. Nightglow emission in the optically forbidden lines at 519.8 and 520.0 nm is quite weak, a consequence of the combination of an extremely long radiative lifetime, about 10^5 sec, and quenching by O-atoms, O_2, and N_2. The radiative lifetime is known only from theory, and various calculations lead to a range of possible values for the intensity ratio R = I(519.8)/I(520.0) of 1.5-2.5. On the observational side, Hernandez and Turtle [1969] determined a range of R = 1.3-1.9 in the nightglow, and Sivjee et al. [1981] reported a variable ratio in aurorae, between 1.2 and 1.6. From sky spectra obtained at the Keck II telescope on Mauna Kea, we have accumulated eighty-five 30-60 minute data sets, from March and October, 2000, and April, 2001, over 13 nights of astronomical observations. We find R to have a quite precise value of 1.760± 0.012 (2-σ). There is no difference between the three data sets in terms of the extracted ratio, which therefore seems to be independent of external conditions. At the same time, determination of the O(^1D - ^3P) doublet intensity ratio, I(630.0)/I(636.4), gives a value of 3.03 ± 0.01, the statistical expectation. G. Hernandez and J. P. Turtle, Planet. Space Sci. 17, 675, 1969. G. G. Sivjee, C. S. Deehr, and K. Henricksen, J. Geophys. Res. 86, 1581, 1981.
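The quoted value 1.760 ± 0.012 (2-σ) has the form of a mean over the 85 per-dataset ratios with a two-standard-error uncertainty. A generic sketch of that reduction is below; the authors' actual averaging and weighting procedure is not specified in the abstract, so the sample values are illustrative.

```python
import math

def ratio_mean_2sigma(ratios):
    """Mean of per-dataset intensity ratios with the 2-sigma
    standard error of the mean (the form in which results like
    R = 1.760 +/- 0.012 are typically quoted)."""
    n = len(ratios)
    mean = sum(ratios)/n
    var = sum((r - mean)**2 for r in ratios)/(n - 1)  # sample variance
    return mean, 2.0*math.sqrt(var/n)
```

With 85 data sets, even per-dataset scatter of a few percent shrinks to the roughly 0.7% uncertainty reported.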
ERIC Educational Resources Information Center
Marchis, Iuliana
2009-01-01
The results of the Romanian pupils on international tests PISA and TIMSS in Mathematics are below the average. These poor results have many explications. In this article we compare the Mathematics problems given on these international tests with those given on national tests in Romania.
Discuss the testing problems of ultraviolet irradiance meters
NASA Astrophysics Data System (ADS)
Ye, Jun'an; Lin, Fangsheng
2014-09-01
Ultraviolet irradiance meters are widely used in many areas such as medical treatment, epidemic prevention, energy conservation and environment protection, computers, manufacture, electronics, ageing of material and photo-electric effect, for testing ultraviolet irradiance intensity. The accuracy of their readings therefore directly affects sterility control and treatment in hospitals, the prevention level of the CDC, and the control accuracy of curing and aging in manufacturing industry. Because the display of an ultraviolet irradiance meter tends to drift, it needs to be recalibrated after being used for a period of time to ensure accuracy. By comparison with standard ultraviolet irradiance meters, which are traceable to national benchmarks, we can acquire the correction factor that ensures the instrument works accurately and gives accurate measured data. This leads to an important question: what kind of testing device is more accurate and reliable? This article introduces the testing method and problems of the current testing device for ultraviolet irradiance meters. To solve these problems, we have developed a new three-dimensional automatic testing device. We introduce the structure and working principle of this system and compare the advantages and disadvantages of the two devices. In addition, we analyse the errors in the testing of ultraviolet irradiance meters.
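In the simplest scheme, the correction factor obtained by comparison against a traceable standard instrument is the averaged ratio of standard to device readings at matched measurement points. This is an illustrative sketch of that idea, not the documented procedure of any particular testing device.

```python
def correction_factor(standard_readings, device_readings):
    """Calibration factor from a side-by-side comparison with a
    reference meter traceable to the national benchmark: the ratio
    of standard to device reading, averaged over the comparison
    points. Multiplying the device's display by this factor
    corrects its drift."""
    ratios = [s/d for s, d in zip(standard_readings, device_readings)]
    return sum(ratios)/len(ratios)
```

A device reading consistently 20% low against the standard, for instance, receives a correction factor of 1.25.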
Improvement of the 2D/1D Method in MPACT Using the Sub-Plane Scheme
Graham, Aaron M; Collins, Benjamin S; Downar, Thomas
2017-01-01
Oak Ridge National Laboratory and the University of Michigan are jointly developing the MPACT code to be the primary neutron transport code for the Virtual Environment for Reactor Applications (VERA). To solve the transport equation, MPACT uses the 2D/1D method, which decomposes the problem into a stack of 2D planes that are then coupled with a 1D axial calculation. MPACT uses the Method of Characteristics for the 2D transport calculations and P3 for the 1D axial calculations, then accelerates the solution using the 3D Coarse Mesh Finite Difference (CMFD) method. Increasing the number of 2D MOC planes will increase the accuracy of the calculation, but will increase the computational burden of the calculations and can cause slow convergence or instability. To prevent these problems while maintaining accuracy, the sub-plane scheme has been implemented in MPACT. This method sub-divides the MOC planes into sub-planes, refining the 1D P3 and 3D CMFD calculations without increasing the number of 2D MOC planes. To test the sub-plane scheme, three of the VERA Progression Problems were selected: Problem 3, a single assembly problem; Problem 4, a 3x3 assembly problem with control rods and pyrex burnable poisons; and Problem 5, a quarter core problem. These three problems demonstrated that the sub-plane scheme can accurately produce intra-plane axial flux profiles that preserve the accuracy of the fine mesh solution. The eigenvalue differences are negligibly small, and differences in 3D power distributions are less than 0.1% for realistic axial meshes. Furthermore, the convergence behavior with the sub-plane scheme compares favorably with the conventional 2D/1D method, and the computational expense is decreased for all calculations due to the reduction in expensive MOC calculations.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Education and the Workforce.
H.R. 2846, a bill to prohibit spending Federal education funds on national testing without explicit and specific legislation was referred to the Committee on Education and the Workforce of the U.S. House of Representatives. The Committee, having reviewed the bill, reports favorably on it in this document, proposes some amendments, and recommends…
A test of the testing effect: acquiring problem-solving skills from worked examples.
van Gog, Tamara; Kester, Liesbeth
2012-01-01
The "testing effect" refers to the finding that after an initial study opportunity, testing is more effective for long-term retention than restudying. The testing effect seems robust and is a finding from the field of cognitive science that has important implications for education. However, it is unclear whether this effect also applies to the acquisition of problem-solving skills, which is important to establish given the key role problem solving plays in, for instance, math and science education. Worked examples are an effective and efficient way of acquiring problem-solving skills. Forty students either only studied worked examples (SSSS) or engaged in testing after studying an example by solving an isomorphic problem (STST). Surprisingly, results showed equal performance in both conditions on an immediate retention test after 5 min, but the SSSS condition outperformed the STST condition on a delayed retention test after 1 week. These findings suggest the testing effect might not apply to acquiring problem-solving skills from worked examples.
DYNA3D Material Model 71 - Solid Element Test Problem
Zywicz, E
2008-01-24
A general phenomenological-based elasto-plastic nonlinear isotropic strain hardening material model was implemented in DYNA3D for use in solid, beam, truss, and shell elements. The constitutive model, Model 71, is based upon conventional J2 plasticity and affords optional temperature and rate dependence (visco-plasticity). The expressions for strain hardening, temperature dependence, and rate dependence allow it to represent a wide variety of material responses. Options to capture temperature changes due to adiabatic heating and thermal straining are incorporated into the constitutive framework as well. The verification problem developed for this constitutive model consists of four uni-axial right cylinders subject to constant true strain-rate boundary conditions. Three of the specimens have different constant strain rates imposed, while the fourth specimen is subjected to several strain-rate jumps. The material parameters developed by Fehlmann (2005) for 21-6-9 Nitronic steel are utilized. As demonstrated below, the finite element (FE) simulations are in excellent agreement with the theoretical responses and indicate that the model functions as desired. Consequently, this problem serves as both a verification problem and a regression test problem for DYNA3D.
Phillips, Lawrence M.; Hachamovitch, Rory; Berman, Daniel S.; Iskandrian, Ami E.; Min, James K.; Picard, Michael H.; Kwong, Raymond Y.; Friedrich, Matthias G.; Scherrer-Crosbie, Marielle; Hayes, Sean W.; Sharir, Tali; Gosselin, Gilbert; Mazzanti, Marco; Senior, Roxy; Beanlands, Rob; Smanio, Paola; Goyal, Abhi; Al-Mallah, Mouaz; Reynolds, Harmony; Stone, Gregg W.; Maron, David J.; Shaw, Leslee J.
2014-01-01
There is a preponderance of evidence that, in the setting of an acute coronary syndrome, an invasive approach using coronary revascularization has a morbidity and mortality benefit. However, recent stable ischemic heart disease (SIHD) randomized clinical trials testing whether the addition of coronary revascularization to guideline-directed medical therapy (GDMT) reduces death or major cardiovascular events have been negative. Based on the evidence from these trials, the primary role of GDMT as a front line medical management approach has been clearly defined in the recent SIHD clinical practice guideline; the role of prompt revascularization is less precisely defined. Based on data from observational studies, it has been hypothesized that there is a level of ischemia above which a revascularization strategy might result in benefit regarding cardiovascular events. However, eligibility for recent negative trials in SIHD has mandated at most minimal standards for ischemia. An ongoing randomized trial evaluating the effectiveness of randomization of patients to coronary angiography and revascularization as compared to no coronary angiography and GDMT in patients with moderate-severe ischemia will formally test this hypothesis. The current review will highlight the available evidence including a review of the published and ongoing SIHD trials. PMID:23963599
A class of self-similar hydrodynamics test problems
Ramsey, Scott D; Brown, Lowell S; Nelson, Eric M; Alme, Marv L
2010-12-08
We consider self-similar solutions to the gas dynamics equations. One such solution - a spherical geometry Gaussian density profile - has been analyzed in the existing literature, and a connection between it, a linear velocity profile, and a uniform specific internal energy profile has been identified. In this work, we assume the linear velocity profile to construct an entire class of self-similar solutions in both cylindrical and spherical geometry, of which the Gaussian form is one possible member. After completing the derivation, we present some results in the context of a test problem for compressible flow codes.
Tests and Problems of the Standard Model in Cosmology
NASA Astrophysics Data System (ADS)
López-Corredoira, Martín
2017-02-01
The main foundations of the standard ΛCDM model of cosmology are that: (1) the redshifts of the galaxies are due to the expansion of the Universe plus peculiar motions; (2) the cosmic microwave background radiation and its anisotropies derive from the high-energy primordial Universe when matter and radiation became decoupled; (3) the abundance pattern of the light elements is explained in terms of primordial nucleosynthesis; and (4) the formation and evolution of galaxies can be explained only in terms of gravitation within an inflation + dark matter + dark energy scenario. Numerous tests have been carried out on these ideas and, although the standard model fits many observations rather well, there are also many data that present apparent caveats yet to be understood within it. In this paper, I offer a review of these tests and problems, as well as some examples of alternative models.
Tests show production logging problems in horizontal gas wells
Branagan, P.; Knight, B.L.; Aslakson, J.; Middlebrook, M.L.
1994-01-10
A study has concluded that production logging tools employed to evaluate multiphase horizontal well production behavior should be carefully screened as to their response characteristics in fully-segregated, two-phase flow. The study, performed at Marathon Oil Co.'s petroleum technology center in Littleton, Colo., indicated that gas in highly deviated well bores segregates rapidly in the presence of water, creating a downhole environment that produces sporadic responses from full bore and diverter spinners as well as density and holdup tools. Gas Research Institute (GRI), as part of its horizontal gas well completion technology program, initiated the full-scale laboratory study to determine the severity and consequences of multiphase flow on tool response from horizontal well production. The paper discusses background of the problem, the test objectives, test facility, experimental procedures, single-phase flow, two-phase flow, and recommendations.
Predicting non-square 2D dice probabilities
NASA Astrophysics Data System (ADS)
Pender, G. A. T.; Uhrin, M.
2014-07-01
The prediction of the final state probabilities of a general cuboid randomly thrown onto a surface is a problem that naturally arises in the minds of men and women familiar with regular cubic dice and the basic concepts of probability. Indeed, it was considered by Newton in 1664 (Newton 1967 The Mathematical Papers of Isaac Newton vol I (Cambridge: Cambridge University Press) pp 60-1). In this paper we make progress on the 2D problem (which can be realized in 3D by considering a long cuboid, or alternatively a rectangular cross-sectioned dreidel). For the two-dimensional case we suggest that the ratio of the probabilities of landing on each of the two sides is given by \frac{\sqrt{k^2+l^2}-k}{\sqrt{k^2+l^2}-l}\,\frac{\arctan(l/k)}{\arctan(k/l)}, where k and l are the lengths of the two sides. We test this theory both experimentally and computationally, and find good agreement between our theory, experimental and computational results. Our theory is known, from its derivation, to be an approximation for particularly bouncy or 'grippy' surfaces where the die rolls through many revolutions before settling. On real surfaces we would expect (and we observe) that the true probability ratio for a 2D die is somewhat closer to unity than predicted by our theory. This problem may also have wider relevance in the testing of physics engines.
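The ratio formula quoted in this abstract is straightforward to evaluate numerically. The sketch below (our own illustration, not code from the paper) checks a basic sanity property: for a square die (k = l) the formula reduces to exactly 1, while an asymmetric die gives a ratio away from 1.

```python
import math

def predicted_ratio(k, l):
    """Predicted ratio of the two side probabilities for a 2D rectangular
    die with side lengths k and l, per the formula quoted above."""
    h = math.hypot(k, l)  # length of the diagonal, sqrt(k^2 + l^2)
    return (h - k) / (h - l) * math.atan2(l, k) / math.atan2(k, l)

# A square die must land on either side with equal probability.
print(predicted_ratio(1.0, 1.0))        # → 1.0
# An asymmetric die gives a ratio different from 1.
print(predicted_ratio(2.0, 1.0) != 1.0)
```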
Gallezot, Jean-Dominique; Zheng, Ming-Qiang; Lim, Keunpoong; Lin, Shu-fei; Labaree, David; Matuskey, David; Huang, Yiyun; Ding, Yu-Shin; Carson, Richard E.; Malison, Robert T.
2014-01-01
11C-(+)-PHNO is an agonist radioligand for imaging dopamine D2 and D3 receptors in the human brain with PET. In this study we evaluated the reproducibility of 11C-(+)-PHNO binding parameters using a within-day design and assessed parametric imaging methods. Methods: Repeated studies were performed in eight subjects, with simultaneous measurement of the arterial input function and plasma free fraction. Two 11C-(+)-PHNO scans on the same subject were separated by 5.4±0.7 h. After evaluating compartment models, 11C-(+)-PHNO volumes of distribution VT and VT/fP and binding potentials BPND, BPP and BPF were quantified using the multilinear analysis MA1, with the cerebellum as reference region. Parametric images of BPND were also computed using SRTM and SRTM2. Results: The test-retest variability of 11C-(+)-PHNO BPND was 9% in D2-rich regions (caudate and putamen). Among D3-rich regions, variability was low in pallidum (6%), but higher in substantia nigra (19%), thalamus (14%) and hypothalamus (21%). No significant mass carry-over effect was observed in D3-rich regions, although a trend in BPND was present in substantia nigra (−14±15%). Due to the relatively fast kinetics, low-noise BPND parametric images were obtained with both SRTM and SRTM2 without spatial smoothing. Conclusion: 11C-(+)-PHNO can be used to compute low-noise parametric images in both D2- and D3-rich regions in humans. PMID:24732151
NASA Astrophysics Data System (ADS)
Jones, Alan G.; Afonso, Juan Carlos; Fullea, Javier; Salajegheh, Farshad
2014-02-01
Modeling the continental lithosphere's physical properties, especially its depth extent, must be done within a self-consistent petrological-geophysical framework; modeling using only one or two data types may easily lead to inconsistencies and erroneous interpretations. Using the LitMod approach for hypothesis testing and first-order modeling, we show how assumptions made about crustal information and the probable compositions of the lithospheric and sub-lithospheric mantle affect particular observables, especially surface topographic elevation. The critical crustal parameter is density, leading to ca. 600 m error in topography for 50 kg m-3 imprecision. The next key parameter is crustal thickness, and uncertainties in its definition lead to ca. 4 km uncertainty in the LAB depth for every 1 km of variation in Moho depth. Possible errors in the other assumed crustal parameters introduce a few kilometers of uncertainty in the depth to the LAB. We use Ireland as a natural laboratory to demonstrate the approach. From first-order arguments and given reasonable assumptions, a topographic elevation in the range of 50-100 m, which is the average across Ireland, requires that the lithosphere-asthenosphere boundary (LAB) beneath most of Ireland must lie in the range 90-115 km. A somewhat shallower (to 85 km) LAB is permitted, but the crust must be thinned (< 29 km) to compensate. The observations, especially topography, are inconsistent with suggestions, based on interpretation of S-to-P receiver functions, that the LAB thins from 85 km in southern Ireland to 55 km in central northern Ireland over a distance of < 150 km. Such a thin lithosphere would result in over 1000 m of uplift, and such rapid thinning by 30 km over less than 150 km would yield significant north-south variations in topographic elevation, Bouguer anomaly, and geoid height, none of which are observed. Even juxtaposing the most extreme probable depleted composition for the lithospheric mantle
Leak testing of cryogenic components — problems and solutions
NASA Astrophysics Data System (ADS)
Srivastava, S. P.; Pandarkar, S. P.; Unni, T. G.; Sinha, A. K.; Mahajan, K.; Suthar, R. L.
2008-05-01
moderator pot was driving the MSLD out of range. Since it was very difficult to locate the leak by the Tracer Probe Method, another technique was sought to solve the problem of leak location. Finally, it was possible to locate the leak by observing the change in the Helium background reading of the MSLD during masking/unmasking of the welded joints. This paper describes, in general, the design and leak testing aspects of the cryogenic components of the Cold Neutron Source and, in particular, the problems and solutions for leak testing of the transfer lines and moderator pot.
Optoelectronics with 2D semiconductors
NASA Astrophysics Data System (ADS)
Mueller, Thomas
2015-03-01
Two-dimensional (2D) atomic crystals, such as graphene and layered transition-metal dichalcogenides, are currently receiving a lot of attention for applications in electronics and optoelectronics. In this talk, I will review our research activities on electrically driven light emission, photovoltaic energy conversion and photodetection in 2D semiconductors. In particular, WSe2 monolayer p-n junctions formed by electrostatic doping using a pair of split gate electrodes, type-II heterojunctions based on MoS2/WSe2 and MoS2/phosphorene van der Waals stacks, 2D multi-junction solar cells, and 3D/2D semiconductor interfaces will be presented. Upon optical illumination, conversion of light into electrical energy occurs in these devices. If an electrical current is driven, efficient electroluminescence is obtained. I will present measurements of the electrical characteristics, the optical properties, and the gate voltage dependence of the device response. In the second part of my talk, I will discuss photoconductivity studies of MoS2 field-effect transistors. We identify photovoltaic and photoconductive effects, which both show strong photoconductive gain. A model will be presented that reproduces our experimental findings, such as the dependence on optical power and gate voltage. We envision that the efficient photon conversion and light emission, combined with the advantages of 2D semiconductors, such as flexibility, high mechanical stability and low costs of production, could lead to new optoelectronic technologies.
CYP2D7 Sequence Variation Interferes with TaqMan CYP2D6*15 and *35 Genotyping
Riffel, Amanda K.; Dehghani, Mehdi; Hartshorne, Toinette; Floyd, Kristen C.; Leeder, J. Steven; Rosenblatt, Kevin P.; Gaedigk, Andrea
2016-01-01
TaqMan™ genotyping assays are widely used to genotype CYP2D6, which encodes a major drug metabolizing enzyme. Assay design for CYP2D6 can be challenging owing to the presence of two pseudogenes, CYP2D7 and CYP2D8, structural and copy number variation and numerous single nucleotide polymorphisms (SNPs) some of which reflect the wild-type sequence of the CYP2D7 pseudogene. The aim of this study was to identify the mechanism causing false-positive CYP2D6*15 calls and remediate those by redesigning and validating alternative TaqMan genotype assays. Among 13,866 DNA samples genotyped by the CompanionDx® lab on the OpenArray platform, 70 samples were identified as heterozygotes for 137Tins, the key SNP of CYP2D6*15. However, only 15 samples were confirmed when tested with the Luminex xTAG CYP2D6 Kit and sequencing of CYP2D6-specific long range (XL)-PCR products. Genotype and gene resequencing of CYP2D6 and CYP2D7-specific XL-PCR products revealed a CC>GT dinucleotide SNP in exon 1 of CYP2D7 that reverts the sequence to CYP2D6 and allows a TaqMan assay PCR primer to bind. Because CYP2D7 also carries a Tins, a false-positive mutation signal is generated. This CYP2D7 SNP was also responsible for generating false-positive signals for rs769258 (CYP2D6*35) which is also located in exon 1. Although alternative CYP2D6*15 and *35 assays resolved the issue, we discovered a novel CYP2D6*15 subvariant in one sample that carries additional SNPs preventing detection with the alternate assay. The frequency of CYP2D6*15 was 0.1% in this ethnically diverse U.S. population sample. In addition, we also discovered linkage between the CYP2D7 CC>GT dinucleotide SNP and the 77G>A (rs28371696) SNP of CYP2D6*43. The frequency of this tentatively functional allele was 0.2%. Taken together, these findings emphasize that regardless of how careful genotyping assays are designed and evaluated before being commercially marketed, rare or unknown SNPs underneath primer and/or probe regions can impact
Efficient p-value estimation in massively parallel testing problems
Kustra, Rafal; Shi, Xiaofei; Murdoch, Duncan J.; Greenwood, Celia M. T.; Rangrej, Jagadish
2008-01-01
We present a new method to efficiently estimate very large numbers of p-values using empirically constructed null distributions of a test statistic. The need to evaluate a very large number of p-values is increasingly common with modern genomic data, and when interaction effects are of interest, the number of tests can easily run into billions. When the asymptotic distribution is not easily available, permutations are typically used to obtain p-values but these can be computationally infeasible in large problems. Our method constructs a prediction model to obtain a first approximation to the p-values and uses Bayesian methods to choose a fraction of these to be refined by permutations. We apply and evaluate our method on the study of association between 2-way interactions of genetic markers and colorectal cancer using the data from the first phase of a large, genome-wide case–control study. The results show enormous computational savings as compared to evaluating a full set of permutations, with little decrease in accuracy. PMID:18304995
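The brute-force baseline this paper improves on (evaluating a full set of permutations per test) can be sketched for a single p-value as follows. This is our own illustration under our own naming, not the authors' code: `permutation_pvalue`, `mean_diff` and the sample data are assumptions for the example.

```python
import random

def permutation_pvalue(stat, x, y, n_perm=1000, seed=0):
    """Empirical two-sided p-value of stat(x, y) under random relabeling.

    The observed statistic is compared against a null distribution built
    by repeatedly shuffling group membership; this per-test cost is what
    becomes infeasible when billions of tests are needed."""
    rng = random.Random(seed)
    observed = abs(stat(x, y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(stat(px, py)) >= observed:
            hits += 1
    # add-one correction keeps the estimate away from an impossible p = 0
    return (hits + 1) / (n_perm + 1)

mean_diff = lambda a, b: sum(a) / len(a) - sum(b) / len(b)
x = [2.9, 3.1, 3.3, 3.0, 3.2]
y = [1.0, 1.2, 0.9, 1.1, 1.0]
print(permutation_pvalue(mean_diff, x, y))
```

For well-separated groups like these, almost no permutation reaches the observed difference, so the estimated p-value is small.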
A Markov chain representation of the multiple testing problem.
Cabras, Stefano
2016-03-16
The problem of multiple hypothesis testing can be represented as a Markov process where a new alternative hypothesis is accepted in accordance with its relative evidence to the currently accepted one. This virtual and not formally observed process provides the most probable set of non null hypotheses given the data; it plays the same role as Markov Chain Monte Carlo in approximating a posterior distribution. To apply this representation and obtain the posterior probabilities over all alternative hypotheses, it is enough to have, for each test, barely defined Bayes Factors, e.g. Bayes Factors obtained up to an unknown constant. Such Bayes Factors may either arise from using default and improper priors or from calibrating p-values with respect to their corresponding Bayes Factor lower bound. Both sources of evidence are used to form a Markov transition kernel on the space of hypotheses. The approach leads to easy interpretable results and involves very simple formulas suitable to analyze large datasets as those arising from gene expression data (microarray or RNA-seq experiments).
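The abstract's core idea, a chain over hypotheses that accepts a proposed alternative according to its evidence relative to the current one, can be illustrated with a Metropolis-style random walk. This is our own sketch of the mechanism, not the paper's algorithm: function names and evidence values are assumptions, and the key point is that evidence known only up to a shared unknown constant still works, because the constant cancels in the ratio.

```python
import random

def hypothesis_chain(evidence, n_steps=20000, seed=0):
    """Random walk over hypotheses: move from the current hypothesis i to
    a uniformly proposed j with probability min(1, evidence[j]/evidence[i]).

    The long-run visit frequencies approximate posterior probabilities
    proportional to the (unnormalized) evidence values."""
    rng = random.Random(seed)
    n = len(evidence)
    state = 0
    visits = [0] * n
    for _ in range(n_steps):
        j = rng.randrange(n)
        # unknown common constants in the evidence cancel in this ratio
        if evidence[j] >= evidence[state] or rng.random() < evidence[j] / evidence[state]:
            state = j
        visits[state] += 1
    return [v / n_steps for v in visits]

# Three hypotheses whose relative evidence is 1 : 2 : 7.
print(hypothesis_chain([1.0, 2.0, 7.0]))
```

With symmetric uniform proposals, the acceptance rule above is exactly Metropolis, so the visit frequencies converge to 0.1, 0.2 and 0.7 for this example.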
Highly crystalline 2D superconductors
NASA Astrophysics Data System (ADS)
Saito, Yu; Nojima, Tsutomu; Iwasa, Yoshihiro
2016-12-01
Recent advances in materials fabrication have enabled the manufacturing of ordered 2D electron systems, such as heterogeneous interfaces, atomic layers grown by molecular beam epitaxy, exfoliated thin flakes and field-effect devices. These 2D electron systems are highly crystalline, and some of them, despite their single-layer thickness, exhibit a sheet resistance more than an order of magnitude lower than that of conventional amorphous or granular thin films. In this Review, we explore recent developments in the field of highly crystalline 2D superconductors and highlight the unprecedented physical properties of these systems. In particular, we explore the quantum metallic state (or possible metallic ground state), the quantum Griffiths phase observed in out-of-plane magnetic fields and the superconducting state maintained in anomalously large in-plane magnetic fields. These phenomena are examined in the context of weakened disorder and/or broken spatial inversion symmetry. We conclude with a discussion of how these unconventional properties make highly crystalline 2D systems promising platforms for the exploration of new quantum physics and high-temperature superconductors.
Sevrin, A.
1993-06-01
After reviewing some aspects of gravity in two dimensions, I show that non-trivial embeddings of sl(2) in a semi-simple (super) Lie algebra give rise to a very large class of extensions of 2D gravity. The induced action is constructed as a gauged WZW model and an exact expression for the effective action is given.
Methods for 2-D and 3-D Endobronchial Ultrasound Image Segmentation.
Zang, Xiaonan; Bascom, Rebecca; Gilbert, Christopher; Toth, Jennifer; Higgins, William
2016-07-01
Endobronchial ultrasound (EBUS) is now commonly used for cancer-staging bronchoscopy. Unfortunately, EBUS is challenging to use and interpreting EBUS video sequences is difficult. Other ultrasound imaging domains, hampered by related difficulties, have benefited from computer-based image-segmentation methods. Yet, so far, no such methods have been proposed for EBUS. We propose image-segmentation methods for 2-D EBUS frames and 3-D EBUS sequences. Our 2-D method adapts the fast-marching level-set process, anisotropic diffusion, and region growing to the problem of segmenting 2-D EBUS frames. Our 3-D method builds upon the 2-D method while also incorporating the geodesic level-set process for segmenting EBUS sequences. Tests with lung-cancer patient data showed that the methods ran fully automatically for nearly 80% of test cases. For the remaining cases, the only user interaction required was the selection of a seed point. When compared to ground-truth segmentations, the 2-D method achieved an overall Dice index of 90.0% ± 4.9%, while the 3-D method achieved an overall Dice index of 83.9% ± 6.0%. In addition, the computation time (2-D, 0.070 s/frame; 3-D, 0.088 s/frame) was two orders of magnitude faster than interactive contour definition. Finally, we demonstrate the potential of the methods for EBUS localization in a multimodal image-guided bronchoscopy system.
Testing problem solving in turkey vultures (Cathartes aura) using the string-pulling test.
Ellison, Anne Margaret; Watson, Jane; Demers, Eric
2015-01-01
To examine problem solving in turkey vultures (Cathartes aura), six captive vultures were presented with a string-pulling task, which involved drawing a string up to access food. This test has been used to assess cognition in many bird species. A small piece of meat suspended by a string was attached to a perch. Two birds solved the problem without apparent trial-and-error learning; a third bird solved the problem after observing a successful bird, suggesting that this individual learned from the other vulture. The remaining birds failed to complete the task. The successful birds significantly reduced the time needed to solve the task from early trials to late trials, suggesting that they had learned to solve the problem and improved their technique. The successful vultures solved the problem in a novel way: they pulled the string through their beak with their tongue, and may have gathered the string in their crop until the food was in reach. In contrast, ravens, parrots and finches use a stepwise process; they pull the string up, tuck it under foot, and reach down to pull up another length. As scavengers, turkey vultures use their beak for tearing and ripping at carcasses, but possess large, flat, webbed feet that are ill-suited to pulling or grasping. The ability to solve this problem and the novel approach used by the turkey vultures in this study may be a result of the unique evolutionary pressures imposed on this scavenging species.
49 CFR 40.267 - What problems always cause an alcohol test to be cancelled?
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 1 2013-10-01 2013-10-01 false What problems always cause an alcohol test to be... TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Alcohol Testing § 40.267 What problems always cause an alcohol test to be cancelled? As an employer, a BAT, or an STT, you must cancel...
49 CFR 40.271 - How are alcohol testing problems corrected?
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 1 2013-10-01 2013-10-01 false How are alcohol testing problems corrected? 40.271... WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Alcohol Testing § 40.271 How are alcohol testing... alcohol test for each employee. (1) If, during or shortly after the testing process, you become aware...
Report of the 1988 2-D Intercomparison Workshop, chapter 3
NASA Technical Reports Server (NTRS)
Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar
1989-01-01
Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially-high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.
Energy Efficiency of D2D Multi-User Cooperation.
Zhang, Zufan; Wang, Lu; Zhang, Jie
2017-03-28
The Device-to-Device (D2D) communication system is an important part of heterogeneous networks. Through the advantage of direct communication, cooperation among multiple D2D users has great potential to improve spectrum efficiency, throughput, and energy efficiency. When cooperating, D2D users expend extra energy to relay data to other D2D users; hence, the remaining energy of D2D users determines the life of the system. This paper proposes a cooperation scheme for multiple D2D users who reuse the orthogonal spectrum and are interested in the same data, aiming to solve the energy problem of D2D users. Considering both the energy availability and the Signal-to-Noise Ratio (SNR) of each D2D user, the Kuhn-Munkres algorithm is introduced in the cooperation scheme to solve the relay selection problem; the cooperation issue is thus transformed into a maximum weighted matching (MWM) problem. In order to enhance energy efficiency without deteriorating Quality of Service (QoS), the link outage probability is derived from the Shannon equation by considering the data rate and delay. The simulation studies the relationships among the number of cooperative users, the length of shared data, the number of data packets, and energy efficiency.
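As a rough illustration of the relay-selection step, the matching problem can be sketched as below. For clarity this toy version brute-forces all assignments rather than running Kuhn-Munkres (which solves the same maximum weighted matching problem in polynomial time), and the energy/SNR weighting is a made-up placeholder, not the paper's actual utility function:

```python
# Toy maximum-weighted-matching sketch for relay selection.
# Brute force stands in for the Kuhn-Munkres (Hungarian) algorithm at this scale.
from itertools import permutations

energy = [0.9, 0.4, 0.7]              # residual battery of candidate relays (made up)
snr = [[12.0, 5.0, 8.0],              # snr[i][j]: link quality, source i -> relay j
       [6.0, 10.0, 7.0],
       [9.0, 4.0, 11.0]]

# Placeholder weight of assigning source i to relay j:
# link SNR scaled by the relay's remaining energy.
weight = [[snr[i][j] * energy[j] for j in range(3)] for i in range(3)]

best = max(permutations(range(3)),
           key=lambda p: sum(weight[i][p[i]] for i in range(3)))
matching = list(enumerate(best))      # [(source, relay), ...]
```

Kuhn-Munkres returns the same optimal assignment without enumerating all n! permutations, which is why it is the standard choice once the cooperation problem is cast as MWM.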
Chimpanzee Problem-Solving: A Test for Comprehension.
ERIC Educational Resources Information Center
Premack, David; Woodruff, Guy
1978-01-01
Investigates a chimpanzee's capacity to recognize representations of problems and solutions, as well as its ability to perceive the relationship between each type of problem and its appropriate solutions using televised programs and photographic solutions. (HM)
Material behavior and materials problems in TFTR (Tokamak Fusion Test Reactor)
Dylla, H.F.; Ulrickson, M.A.; Owens, D.K.; Heifetz, D.B.; Mills, B.E.; Pontau, A.E.; Wampler, W.R.; Doyle, B.L.; Lee, S.R.; Watson, R.D.; Croessmann, C.D.
1988-05-01
This paper reviews the experience with first-wall materials over a 20-month period of operation spanning 1985-1987. Experience with the axisymmetric inner-wall limiter, constructed of graphite tiles, will be described, including the conditioning procedures needed for impurity and particle control in high-power (≤20 MW) neutral injection experiments. The thermal effects of disruptions have been quantified, and no significant damage to the bumper limiter has occurred as a result of disruptions. Carbon and metal impurity redeposition effects have been quantified through surface analysis of wall samples. Estimates of the tritium retention in the graphite limiter tiles and redeposited carbon films have been made based on analysis of deuterium retention in removed graphite tiles and wall samples. New limiter structures have been designed using a 2D carbon/carbon (C/C) composite material for RF antenna protection. Laboratory tests of the important thermal, mechanical and vacuum properties of C/C materials will be described. Finally, the last series of experiments in TFTR with in-situ Zr/Al surface pumps will be described. Problems with Zr/Al embrittlement have led to the removal of the getter material from the in-torus environment. 53 refs., 8 figs., 3 tabs.
Codon Constraints on Closed 2D Shapes,
2014-09-26
[OCR-damaged DTIC report form; recoverable fragments follow.] Authors: Richards, Whitman; Hoffman, Donald D. Abstract (fragment): Codons are simple primitives for describing plane … outlines, if figure and ground are ignored. Later, we will address the problem of indexing identical codon descriptors that have different figure
Numerical Evaluation of 2D Ground States
NASA Astrophysics Data System (ADS)
Kolkovska, Natalia
2016-02-01
A ground state is defined as the positive radial solution of the multidimensional nonlinear problem
Observed-Score Equating as a Test Assembly Problem.
ERIC Educational Resources Information Center
van der Linden, Wim J.; Luecht, Richard M.
1998-01-01
Derives a set of linear conditions of item-response functions that guarantees identical observed-score distributions on two test forms. The conditions can be added as constraints to a linear programming model for test assembly. An example illustrates the use of the model for an item pool from the Law School Admissions Test (LSAT). (SLD)
Giaddui, T; Yu, J; Xiao, Y; Jacobs, P; Manfredi, D; Linnemann, N
2015-06-15
Purpose: 2D-2D kV image guided radiation therapy (IGRT) credentialing evaluation for clinical trial qualification was historically qualitative through submitting screen captures of the fusion process. However, as quantitative DICOM 2D-2D and 2D-3D image registration tools are implemented in clinical practice for better precision, especially in centers that treat patients with protons, better IGRT credentialing techniques are needed. The aim of this work is to establish methodologies for quantitatively reviewing IGRT submissions based on DICOM 2D-2D and 2D-3D image registration and to test the methodologies in reviewing 2D-2D and 2D-3D IGRT submissions for RTOG/NRG Oncology clinical trials qualifications. Methods: DICOM 2D-2D and 2D-3D automated and manual image registration have been tested using the Harmony tool in MIM software. 2D kV orthogonal portal images are fused with the reference digital reconstructed radiographs (DRR) in the 2D-2D registration while the 2D portal images are fused with DICOM planning CT image in the 2D-3D registration. The Harmony tool allows alignment of the two images used in the registration process and also calculates the required shifts. Shifts calculated using MIM are compared with those submitted by institutions for IGRT credentialing. Reported shifts are considered to be acceptable if differences are less than 3mm. Results: Several tests have been performed on the 2D-2D and 2D-3D registration. The results indicated good agreement between submitted and calculated shifts. A workflow for reviewing these IGRT submissions has been developed and will eventually be used to review IGRT submissions. Conclusion: The IROC Philadelphia RTQA center has developed and tested a new workflow for reviewing DICOM 2D-2D and 2D-3D IGRT credentialing submissions made by different cancer clinical centers, especially proton centers. NRG Center for Innovation in Radiation Oncology (CIRO) and IROC RTQA center continue their collaborative efforts to enhance
Protein profiling using two-dimensional difference gel electrophoresis (2-D DIGE).
Feret, Renata; Lilley, Kathryn S
2014-02-03
2-D DIGE relies on pre-electrophoretic labeling of samples with one of three spectrally distinct fluorescent dyes, followed by electrophoresis of all samples in one 2-D gel. The dye-labeled samples are then viewed individually by scanning the gel at different wavelengths, which circumvents problems with gel-to-gel variation and spot matching between gels. Image analysis programs are used to generate volume ratios for each spot, which essentially describe the intensity of a particular spot in each test sample, and thus enable protein abundance level changes to be identified and quantified. This unit describes the 2-D DIGE procedure including sample preparation from various cell types, labeling of proteins, and points to consider in the downstream processing of fluorescently labeled samples.
2D quasiperiodic plasmonic crystals
Bauer, Christina; Kobiela, Georg; Giessen, Harald
2012-01-01
Nanophotonic structures with irregular symmetry, such as quasiperiodic plasmonic crystals, have gained an increasing amount of attention, in particular as potential candidates to enhance the absorption of solar cells in an angular insensitive fashion. To examine the photonic bandstructure of such systems that determines their optical properties, it is necessary to measure and model normal and oblique light interaction with plasmonic crystals. We determine the different propagation vectors and consider the interaction of all possible waveguide modes and particle plasmons in a 2D metallic photonic quasicrystal, in conjunction with the dispersion relations of a slab waveguide. Using a Fano model, we calculate the optical properties for normal and inclined light incidence. Comparing measurements of a quasiperiodic lattice to the modelled spectra for angle of incidence variation in both azimuthal and polar direction of the sample gives excellent agreement and confirms the predictive power of our model. PMID:23209871
NASA Astrophysics Data System (ADS)
Schaibley, John R.; Yu, Hongyi; Clark, Genevieve; Rivera, Pasqual; Ross, Jason S.; Seyler, Kyle L.; Yao, Wang; Xu, Xiaodong
2016-11-01
Semiconductor technology is currently based on the manipulation of electronic charge; however, electrons have additional degrees of freedom, such as spin and valley, that can be used to encode and process information. Over the past several decades, there has been significant progress in manipulating electron spin for semiconductor spintronic devices, motivated by potential spin-based information processing and storage applications. However, experimental progress towards manipulating the valley degree of freedom for potential valleytronic devices has been limited until very recently. We review the latest advances in valleytronics, which have largely been enabled by the isolation of 2D materials (such as graphene and semiconducting transition metal dichalcogenides) that host an easily accessible electronic valley degree of freedom, allowing for dynamic control.
Georgi, Howard; Kats, Yevgeny
2008-09-26
We discuss what can be learned about unparticle physics by studying simple quantum field theories in one space and one time dimension. We argue that the exactly soluble 2D theory of a massless fermion coupled to a massive vector boson, the Sommerfield model, is an interesting analog of a Banks-Zaks model, approaching a free theory at high energies and a scale-invariant theory with nontrivial anomalous dimensions at low energies. We construct a toy standard model coupling to the fermions in the Sommerfield model and study how the transition from unparticle behavior at low energies to free particle behavior at high energies manifests itself in interactions with the toy standard model particles.
On Estimation and Hypothesis Testing Problems for Correlation Coefficients
ERIC Educational Resources Information Center
Kraemer, Helena Chmura
1975-01-01
A selection of statistical problems commonly encountered in psychological or psychiatric research concerning correlation coefficients are re-evaluated in the light of recently developed simplifications in the forms of the distribution theory of the intraclass correlation coefficient, of the product-moment correlation coefficient, and the Spearman…
Dominant 2D magnetic turbulence in the solar wind
NASA Technical Reports Server (NTRS)
Bieber, John W.; Wanner, Wolfgang; Matthaeus, William H.
1995-01-01
There have been recent suggestions that solar wind magnetic turbulence may be a composite of slab geometry (wavevector aligned with the mean magnetic field) and 2D geometry (wavevectors perpendicular to the mean field). We report results of two new tests of this hypothesis using Helios measurements of inertial ranged magnetic spectra in the solar wind. The first test is based upon a characteristic difference between perpendicular and parallel reduced power spectra which is expected for the 2D component but not for the slab component. The second test examines the dependence of power spectrum density upon the magnetic field angle (i.e., the angle between the mean magnetic field and the radial direction), a relationship which is expected to be in opposite directions for the slab and 2D components. Both tests support the presence of a dominant (approximately 85 percent by energy) 2D component in solar wind magnetic turbulence.
Multi-Dimensional, Non-Pyrolyzing Ablation Test Problems
NASA Technical Reports Server (NTRS)
Risch, Tim; Kostyk, Chris
2016-01-01
Non-pyrolyzing carbonaceous materials represent a class of candidate materials for hypersonic vehicle components, providing both structural and thermal protection system capabilities. Two problems relevant to this technology are presented. The first considers the one-dimensional ablation of a carbon material subject to convective heating. The second considers two-dimensional conduction in a rectangular block subject to radiative heating. Surface thermochemistry for both problems includes finite-rate surface kinetics at low temperatures, diffusion-limited ablation at intermediate temperatures, and vaporization at high temperatures. The first problem requires the solution of both the steady-state thermal profile with respect to the ablating surface and the transient thermal history for a one-dimensional ablating planar slab with temperature-dependent material properties. The slab front face is convectively heated and also reradiates to a room-temperature environment. The back face is adiabatic. The steady-state temperature profile and steady-state mass loss rate should be predicted. Time-dependent front- and back-face temperatures, surface recession, and recession rate, along with the final temperature profile, should be predicted for the time-dependent solution. The second problem requires the solution for the transient temperature history of an ablating, two-dimensional rectangular solid with anisotropic, temperature-dependent thermal properties. The front face is radiatively heated, convectively cooled, and also reradiates to a room-temperature environment. The back face and sidewalls are adiabatic. The solution should include the following nine items: the final surface recession profile; the time-dependent temperature history of both the front face and back face, at both the centerline and sidewall; and the time-dependent surface recession and recession rate on the front face at both the centerline and sidewall. The results of the problems from all submitters will be
Quantum coherence selective 2D Raman–2D electronic spectroscopy
Spencer, Austin P.; Hutson, William O.; Harel, Elad
2017-01-01
Electronic and vibrational correlations report on the dynamics and structure of molecular species, yet revealing these correlations experimentally has proved extremely challenging. Here, we demonstrate a method that probes correlations between states within the vibrational and electronic manifold with quantum coherence selectivity. Specifically, we measure a fully coherent four-dimensional spectrum which simultaneously encodes vibrational–vibrational, electronic–vibrational and electronic–electronic interactions. By combining near-impulsive resonant and non-resonant excitation, the desired fifth-order signal of a complex organic molecule in solution is measured free of unwanted lower-order contamination. A critical feature of this method is electronic and vibrational frequency resolution, enabling isolation and assignment of individual quantum coherence pathways. The vibronic structure of the system is then revealed within an otherwise broad and featureless 2D electronic spectrum. This method is suited for studying elusive quantum effects in which electronic transitions strongly couple to phonons and vibrations, such as energy transfer in photosynthetic pigment–protein complexes. PMID:28281541
ERIC Educational Resources Information Center
van Gog, Tamara; Kester, Liesbeth; Dirkx, Kim; Hoogerheide, Vincent; Boerboom, Joris; Verkoeijen, Peter P. J. L.
2015-01-01
Four experiments investigated whether the testing effect also applies to the acquisition of problem-solving skills from worked examples. Experiment 1 (n = 120) showed no beneficial effects of testing consisting of "isomorphic" problem solving or "example recall" on final test performance, which consisted of isomorphic problem…
Transition to turbulence: 2D directed percolation
NASA Astrophysics Data System (ADS)
Chantry, Matthew; Tuckerman, Laurette; Barkley, Dwight
2016-11-01
The transition to turbulence in simple shear flows has been studied for well over a century, yet in the last few years has seen major leaps forward. In pipe flow, this transition shows the hallmarks of (1+1)D directed percolation, a universality class of continuous phase transitions. In spanwise-confined Taylor-Couette flow the same class is found, suggesting the phenomenon is generic to shear flows. However, in plane Couette flow the largest simulations and experiments to date find evidence for a discrete transition. Here we study a planar shear flow, called Waleffe flow, devoid of walls yet showing the fundamentals of the planar transition to turbulence. Working with a quasi-2D yet Navier-Stokes-derived model of this flow, we are able to attack the (2+1)D transition problem. Going beyond the system sizes previously possible, we find all of the required scalings of directed percolation and thus establish planar shear flows in this class.
Language Testing in the Military: Problems, Politics and Progress
ERIC Educational Resources Information Center
Green, Rita; Wall, Dianne
2005-01-01
There appears to be little literature available -- either descriptive or research-related -- on language testing in the military. This form of specific purposes assessment affects both military personnel and civilians working within the military structure in terms of posting, promotion and remuneration, and it could be argued that it has serious…
Common Problems of Mobile Applications for Foreign Language Testing
ERIC Educational Resources Information Center
Garcia Laborda, Jesus; Magal-Royo, Teresa; Lopez, Jose Luis Gimenez
2011-01-01
As the use of mobile learning educational applications has become more common around the world, new concerns have appeared in the classroom, in human interaction in software engineering, and in ergonomics. New tests of foreign languages for a variety of purposes have recently become more and more common. However, studies interrelating language tests…
Problem-Solving Test: Submitochondrial Localization of Proteins
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2011-01-01
Mitochondria are surrounded by two membranes (outer and inner mitochondrial membrane) that separate two mitochondrial compartments (intermembrane space and matrix). Hundreds of proteins are distributed among these submitochondrial components. A simple biochemical/immunological procedure is described in this test to determine the localization of…
Problem-Solving Test: Expression Cloning of the Erythropoietin Receptor
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2008-01-01
Terms to be familiar with before you start to solve the test: cytokines, cytokine receptors, cDNA library, cDNA synthesis, poly(A)[superscript +] RNA, primer, template, reverse transcriptase, restriction endonucleases, cohesive ends, expression vector, promoter, Shine-Dalgarno sequence, poly(A) signal, DNA helicase, DNA ligase, topoisomerases,…
Problem-Solving Test: The Mechanism of Protein Synthesis
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2009-01-01
Terms to be familiar with before you start to solve the test: protein synthesis, ribosomes, amino acids, peptides, peptide bond, polypeptide chain, N- and C-terminus, hemoglobin, α- and β-globin chains, radioactive labeling, [³H]- and [¹⁴C]leucine, cytosol, differential centrifugation, density…
Problem-Solving Test: Real-Time Polymerase Chain Reaction
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2009-01-01
Terms to be familiar with before you start to solve the test: polymerase chain reaction, DNA amplification, electrophoresis, breast cancer, "HER2" gene, genomic DNA, "in vitro" DNA synthesis, template, primer, Taq polymerase, 5′→3′ elongation activity, 5′→3′ exonuclease activity, deoxyribonucleoside…
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be posed mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, reducing the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into 1D via the Kronecker product, which sharply increases the dictionary size and computational cost. In this paper, we instead introduce the 2D-SL0 algorithm for image reconstruction. It is proved that 2D-SL0 achieves results equivalent to other 1D reconstruction methods, while reducing the computational complexity and memory usage significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
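The dictionary blow-up mentioned above comes from the standard vectorization identity vec(A X Bᵀ) = (B ⊗ A) vec(X). A small NumPy sketch (illustrative sizes only, not the paper's actual setup) shows both the equivalence and the size penalty of the 1D conversion:

```python
# For measurement matrices A (range) and B (azimuth) acting on a scene X,
# the 2D model Y = A @ X @ B.T equals the 1D model
# vec(Y) = kron(B, A) @ vec(X), with column-major (Fortran-order) vec.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16))    # range-dimension sub-sampling (illustrative)
B = rng.standard_normal((8, 16))    # azimuth-dimension sub-sampling (illustrative)
X = rng.standard_normal((16, 16))   # scene; sparse in the actual ISAR setting

Y2d = A @ X @ B.T                            # 2D form: stores A and B, 2*8*16 entries
K = np.kron(B, A)                            # 1D form: (8*8) x (16*16) = 64 x 256 entries
y1d = K @ X.flatten(order="F")               # column-major vec(X)

same = np.allclose(Y2d.flatten(order="F"), y1d)
```

Operating directly on the 2D form (as 2D-SL0 does) avoids ever materializing the Kronecker matrix, which is the source of the memory and complexity savings claimed above.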
Development and Implementation of Radiation-Hydrodynamics Verification Test Problems
Marcath, Matthew J.; Wang, Matthew Y.; Ramsey, Scott D.
2012-08-22
Analytic solutions to the radiation-hydrodynamic equations are useful for verifying any large-scale numerical simulation software that solves the same set of equations. The one-dimensional, spherically symmetric Coggeshall No. 9 and No. 11 analytic solutions, cell-averaged over a uniform grid, have been developed to analyze the corresponding solutions from the Los Alamos National Laboratory Eulerian Applications Project radiation-hydrodynamics code xRAGE. These Coggeshall solutions have been shown to be independent of heat conduction, providing a unique opportunity for comparison with xRAGE solutions with and without the heat conduction module. Solution convergence was analyzed based on radial step size. Since no shocks are involved in either problem and the solutions are smooth, second-order convergence was expected for both cases. The global L1 errors were used to estimate the convergence rates with and without the heat conduction module implemented.
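A minimal sketch of the convergence-rate estimate described above: given global L1 errors at two radial step sizes, the observed order follows from a log ratio. The error values here are invented to illustrate second-order behavior, not taken from the xRAGE study:

```python
# Observed convergence order from global L1 errors at two grid resolutions:
#   p = log(E_coarse / E_fine) / log(h_coarse / h_fine)
import math

h_coarse, h_fine = 0.02, 0.01         # radial step sizes (illustrative)
E_coarse, E_fine = 4.0e-4, 1.0e-4     # hypothetical global L1 errors vs. analytic solution

p = math.log(E_coarse / E_fine) / math.log(h_coarse / h_fine)
# halving h quarters the error here, i.e. p = 2: second-order convergence
```

For a smooth, shock-free problem like the Coggeshall solutions, p near 2 confirms the scheme's design order; a lower observed p would flag an implementation or coupling error.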
Crash test for the restricted three-body problem.
Nagler, Jan
2005-02-01
The restricted three-body problem serves to investigate the chaotic behavior of a small body under the gravitational influence of two heavy primary bodies. We analyze numerically the phase space mixing of bounded motion, escape, and crash in this simple model of (chaotic) celestial mechanics. The presented extensive numerical analysis reveals a high degree of complexity. We extend the recently presented findings for the Copenhagen case of equal main masses to the general case of different primary body masses. Collisions of the small body onto the primaries are comparatively frequent, and their probability displays a scale-free dependence on the size of the primaries as shown for the Copenhagen case. Interpreting the crash as leaking in phase space the results are related to both chaotic scattering and the theory of leaking Hamiltonian systems.
Test Problems for Large-Scale Multiobjective and Many-Objective Optimization.
Cheng, Ran; Jin, Yaochu; Olhofer, Markus; Sendhoff, Bernhard
2016-08-26
The interests in multiobjective and many-objective optimization have been rapidly increasing in the evolutionary computation community. However, most studies on multiobjective and many-objective optimization are limited to small-scale problems, despite the fact that many real-world multiobjective and many-objective optimization problems may involve a large number of decision variables. As has been evident in the history of evolutionary optimization, the development of evolutionary algorithms (EAs) for solving a particular type of optimization problems has undergone a co-evolution with the development of test problems. To promote the research on large-scale multiobjective and many-objective optimization, we propose a set of generic test problems based on design principles widely used in the literature of multiobjective and many-objective optimization. In order for the test problems to be able to reflect challenges in real-world applications, we consider mixed separability between decision variables and nonuniform correlation between decision variables and objective functions. To assess the proposed test problems, six representative evolutionary multiobjective and many-objective EAs are tested on the proposed test problems. Our empirical results indicate that although the compared algorithms exhibit slightly different capabilities in dealing with the challenges in the test problems, none of them are able to efficiently solve these optimization problems, calling for the need for developing new EAs dedicated to large-scale multiobjective and many-objective optimization.
Significance testing of rules in rule-based models of human problem solving
NASA Technical Reports Server (NTRS)
Lewis, C. M.; Hammer, J. M.
1986-01-01
Rule-based models of human problem solving have typically not been tested for statistical significance. Three methods of testing rules - analysis of variance, randomization, and contingency tables - are presented. Advantages and disadvantages of the methods are also described.
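As a hedged sketch of the contingency-table method mentioned above, one rule can be tested by tabulating how often its condition held against whether the modeled behavior actually occurred, then applying a 2x2 chi-square test of independence. The counts below are invented for illustration:

```python
# 2x2 chi-square test of independence for a single rule (invented counts):
# rows = rule condition held / did not hold, cols = behavior occurred / did not.
cond_yes_beh_yes, cond_yes_beh_no = 30, 10
cond_no_beh_yes, cond_no_beh_no = 12, 28

n = cond_yes_beh_yes + cond_yes_beh_no + cond_no_beh_yes + cond_no_beh_no
row1 = cond_yes_beh_yes + cond_yes_beh_no
row2 = cond_no_beh_yes + cond_no_beh_no
col1 = cond_yes_beh_yes + cond_no_beh_yes
col2 = cond_yes_beh_no + cond_no_beh_no

observed = [cond_yes_beh_yes, cond_yes_beh_no, cond_no_beh_yes, cond_no_beh_no]
expected = [row1 * col1 / n, row1 * col2 / n, row2 * col1 / n, row2 * col2 / n]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# chi2 above 3.84 (the 5% critical value with 1 degree of freedom) suggests
# the rule's condition is significantly associated with the behavior
significant = chi2 > 3.84
```

This is one of the three methods named above; randomization and ANOVA ask related questions but differ in the assumptions they place on the protocol data.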
Phonon thermal conduction in novel 2D materials
NASA Astrophysics Data System (ADS)
Xu, Xiangfan; Chen, Jie; Li, Baowen
2016-12-01
Recently, there has been increasing interest in phonon thermal transport in low-dimensional materials, due to the crucial importance of dissipating and managing heat in micro- and nano-electronic devices. Significant progress has been achieved for one-dimensional (1D) systems, both theoretically and experimentally. However, the study of heat conduction in two-dimensional (2D) systems is still in its infancy due to the limited availability of 2D materials and the technical challenges of fabricating suspended samples that are suitable for thermal measurements. In this review, we outline different experimental techniques and theoretical approaches for phonon thermal transport in 2D materials, discuss the problems and challenges of phonon thermal transport measurements and provide a comparison between existing experimental data. Special attention will be given to the effects of size, dimensionality, anisotropy and mode contributions in novel 2D systems, including graphene, boron nitride, MoS2, black phosphorous and silicene.
ORION96. 2-d Finite Element Code Postprocessor
Sanford, L.A.; Hallquist, J.O.
1992-02-02
ORION is an interactive program that serves as a postprocessor for the analysis programs NIKE2D, DYNA2D, TOPAZ2D, and CHEMICAL TOPAZ2D. ORION reads binary plot files generated by the two-dimensional finite element codes currently used by the Methods Development Group at LLNL. Contour and color fringe plots of a large number of quantities may be displayed on meshes consisting of triangular and quadrilateral elements. ORION can compute strain measures, interface pressures along slide lines, reaction forces along constrained boundaries, and momentum. ORION has been applied to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.
Proof test of the computer program BUCKY for plasticity problems
NASA Astrophysics Data System (ADS)
Smith, James P.
1994-01-01
A theoretical equation describing the elastic-plastic deformation of a cantilever beam subject to a constant pressure is developed. The theoretical result is compared numerically to the computer program BUCKY for the case of an elastic-perfectly plastic specimen. It is shown that the theoretical and numerical results compare favorably in the plastic range. Comparisons are made to another research code to further validate the BUCKY results. This paper serves as a quality test for the computer program BUCKY developed at NASA Johnson Space Center.
Assessing corrosion problems in photovoltaic cells via electrochemical stress testing
NASA Technical Reports Server (NTRS)
Shalaby, H.
1985-01-01
A series of accelerated electrochemical experiments to study the degradation properties of polyvinylbutyral-encapsulated silicon solar cells has been carried out. The cells' electrical performance with silk screen-silver and nickel-solder contacts was evaluated. The degradation mechanism was shown to be electrochemical corrosion of the cell contacts; metallization elements migrate into the encapsulating material, which acts as an ionic conducting medium. The corrosion products form a conductive path which results in a gradual loss of the insulation characteristics of the encapsulant. The precipitation of corrosion products in the encapsulant also contributes to its discoloration which in turn leads to a reduction in its transparency and the consequent optical loss. Delamination of the encapsulating layers could be attributed to electrochemical gas evolution reactions. The usefulness of the testing technique in qualitatively establishing a reliability difference between metallizations and antireflection coating types is demonstrated.
ERIC Educational Resources Information Center
Needham, Martha Elaine
2010-01-01
This research compares differences between standardized test scores in problem-based learning (PBL) classrooms and a traditional classroom for 6th grade students using a mixed-method, quasi-experimental and qualitative design. The research shows that problem-based learning is as effective as traditional teaching methods on standardized tests. The…
Multicategorical Evaluation of Performance in Clinical Problem-Solving Tests. Final Report.
ERIC Educational Resources Information Center
Wilds, Preston L.; Zachert, Virginia
This project attempted to determine if numerical scoring systems for clinical problem-solving tests could be developed which would measure the effectiveness of different instructional methods in teaching clinical problem-solving skills. The project was to validate and cross-validate the scoring systems by tests of population samples of known…
An Approach for Addressing the Multiple Testing Problem in Social Policy Impact Evaluations
ERIC Educational Resources Information Center
Schochet, Peter Z.
2009-01-01
In social policy evaluations, the multiple testing problem occurs due to the many hypothesis tests that are typically conducted across multiple outcomes and subgroups, which can lead to spurious impact findings. This article discusses a framework for addressing this problem that balances Types I and II errors. The framework involves specifying…
A faster method for 3D/2D medical image registration—a simulation study
NASA Astrophysics Data System (ADS)
Birkfellner, Wolfgang; Wirth, Joachim; Burgstaller, Wolfgang; Baumann, Bernard; Staedele, Harald; Hammer, Beat; Claudius Gellrich, Niels; Jacob, Augustinus Ludwig; Regazzoni, Pietro; Messmer, Peter
2003-08-01
3D/2D patient-to-computed-tomography (CT) registration is a method to determine a transformation that maps two coordinate systems by comparing a projection image rendered from CT to a real projection image. Iterative variation of the CT's position between rendering steps finally leads to exact registration. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 3D/2D registration is that finding a registration requires solving a minimization problem in six degrees of freedom (dof) of motion. This results in considerable time requirements, since at least one volume rendering has to be computed for each iteration step. We show that by choosing an appropriate world coordinate system and by applying a 2D/2D registration method in each iteration step, the number of iterations can be greatly reduced from n^6 to n^5. Here, n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations to about one-third of its original value. The method was implemented and extensively tested on simulated x-ray images of a tibia, a pelvis and a skull base. When using one projective image and a discrete full-parameter-space search to solve the optimization problem, average accuracy was found to be 1.0 ± 0.6° and 4.1 ± 1.9 mm for a registration in six parameters, and 1.0 ± 0.7° and 4.2 ± 1.6 mm when using the 5 + 1 dof method described in this paper. Time requirements were reduced by a factor of 3.1. We conclude that this hardware-independent optimization of 3D/2D registration is a step towards increasing the acceptance of this promising method for a wide number of clinical applications.
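The abstract's n^6 → n^5 complexity argument can be illustrated with a back-of-the-envelope rendering count (a sketch under our own naming; the functions below are illustrative, not from the paper):

```python
def renderings_full_search(n):
    """Exhaustive discrete search over all 6 degrees of freedom:
    n candidate values per dof -> n**6 volume renderings."""
    return n ** 6

def renderings_5plus1(n):
    """5 + 1 dof scheme: search 5 dof discretely and resolve the
    remaining in-plane motion with a fast 2D/2D registration,
    leaving n**5 volume renderings."""
    return n ** 5

# With n = 8 discrete variations per coordinate, the number of costly
# volume renderings drops by a factor of n.
speedup = renderings_full_search(8) / renderings_5plus1(8)
```

The measured factor of 3.1 in the paper is smaller than n because a practical optimizer does not visit the full discrete grid.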
Formal analysis, hardness, and algorithms for extracting internal structure of test-based problems.
Jaśkowski, Wojciech; Krawiec, Krzysztof
2011-01-01
Problems in which some elementary entities interact with each other are common in computational intelligence. This scenario, typical for coevolving artificial life agents, learning strategies for games, and machine learning from examples, can be formalized as a test-based problem and conveniently embedded in the common conceptual framework of coevolution. In test-based problems, candidate solutions are evaluated on a number of test cases (agents, opponents, examples). It has been recently shown that every test of such problem can be regarded as a separate objective, and the whole problem as multi-objective optimization. Research on reducing the number of such objectives while preserving the relations between candidate solutions and tests led to the notions of underlying objectives and internal problem structure, which can be formalized as a coordinate system that spatially arranges candidate solutions and tests. The coordinate system that spans the minimal number of axes determines the so-called dimension of a problem and, being an inherent property of every problem, is of particular interest. In this study, we investigate in-depth the formalism of a coordinate system and its properties, relate them to properties of partially ordered sets, and design an exact algorithm for finding a minimal coordinate system. We also prove that this problem is NP-hard and come up with a heuristic which is superior to the best algorithm proposed so far. Finally, we apply the algorithms to three abstract problems and demonstrate that the dimension of the problem is typically much lower than the number of tests, and for some problems converges to the intrinsic parameter of the problem--its a priori dimension.
ERIC Educational Resources Information Center
Keating, Xiaofen Deng
2003-01-01
This paper aims to examine current nationwide youth fitness test programs, address problems embedded in the programs, and possible solutions. The current Fitnessgram, President's Challenge, and YMCA youth fitness test programs were selected to represent nationwide youth fitness test programs. Sponsors of the nationwide youth fitness test programs…
Reflectometric reading of dry test for vitamin C determinations in juices: problems and difficulties
NASA Astrophysics Data System (ADS)
Chwojnowski, Andrzej; Lukowska, E.; Dudzinski, K.
2003-04-01
The paper presents the analytical problems of ascorbic acid determination in juices using the dry-test technique. Different types of juice require the application of different test types. Separating sediments on semipermeable membranes and separating dark dyes with chromatography-type tests make test reading and ascorbic acid determination possible.
NKG2D ligands as therapeutic targets
Spear, Paul; Wu, Ming-Ru; Sentman, Marie-Louise; Sentman, Charles L.
2013-01-01
The Natural Killer Group 2D (NKG2D) receptor plays an important role in protecting the host from infections and cancer. By recognizing ligands induced on infected or tumor cells, NKG2D modulates lymphocyte activation and promotes immunity to eliminate ligand-expressing cells. Because these ligands are not widely expressed on healthy adult tissue, NKG2D ligands may present a useful target for immunotherapeutic approaches in cancer. Novel therapies targeting NKG2D ligands for the treatment of cancer have shown preclinical success and are poised to enter into clinical trials. In this review, the NKG2D receptor and its ligands are discussed in the context of cancer, infection, and autoimmunity. In addition, therapies targeting NKG2D ligands in cancer are also reviewed. PMID:23833565
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
Medical Physics: Forming and testing solutions to clinical problems.
Tsapaki, Virginia; Bayford, Richard
2015-11-01
According to the European Federation of Organizations for Medical Physics (EFOMP) policy statement No. 13, "The rapid advance in the use of highly sophisticated equipment and procedures in the medical field increasingly depends on information and communication technology. In spite of the fact that the safety and quality of such technology is vigorously tested before it is placed on the market, it often turns out that the safety and quality is not sufficient when used under hospital working conditions. To improve safety and quality for patient and users, additional safeguards and related monitoring, as well as measures to enhance quality, are required. Furthermore a large number of accidents and incidents happen every year in hospitals and as a consequence a number of patients die or are injured. Medical Physicists are well positioned to contribute towards preventing these kinds of events". The newest developments related to this increasingly important medical speciality were presented during the 8th European Conference of Medical Physics 2014 which was held in Athens, 11-13 September 2014 and hosted by the Hellenic Association of Medical Physicists (HAMP) in collaboration with the EFOMP and are summarized in this issue.
2D Electrostatic Actuation of Microshutter Arrays
NASA Technical Reports Server (NTRS)
Burns, Devin E.; Oh, Lance H.; Li, Mary J.; Jones, Justin S.; Kelly, Daniel P.; Zheng, Yun; Kutyrev, Alexander S.; Moseley, Samuel H.
2015-01-01
An electrostatically actuated microshutter array consisting of rotational microshutters (shutters that rotate about a torsion bar) was designed and fabricated through the use of models and experiments. Design iterations focused on minimizing the torsional stiffness of the microshutters while maintaining their structural integrity. Mechanical and electromechanical test systems were constructed to measure the static and dynamic behavior of the microshutters. The torsional stiffness was reduced by a factor of four over initial designs without sacrificing durability. Analysis of the resonant behavior of the microshutter arrays demonstrates that the first resonant mode is a torsional mode occurring around 3000 Hz. At low vacuum pressures, this resonant mode can be used to significantly reduce the drive voltage necessary for actuation, requiring as little as 25 V. 2D electrostatic latching and addressing was demonstrated using both a resonant and a pulsed addressing scheme.
Canard configured aircraft with 2-D nozzle
NASA Technical Reports Server (NTRS)
Child, R. D.; Henderson, W. P.
1978-01-01
A closely-coupled canard fighter with vectorable two-dimensional nozzle was designed for enhanced transonic maneuvering. The HiMAT maneuver goal of a sustained 8g turn at a free-stream Mach number of 0.9 and 30,000 feet was the primary design consideration. The aerodynamic design process was initiated with a linear theory optimization minimizing the zero percent suction drag including jet effects and refined with three-dimensional nonlinear potential flow techniques. Allowances were made for mutual interference and viscous effects. The design process to arrive at the resultant configuration is described, and the design of a powered 2-D nozzle model to be tested in the LRC 16-foot Propulsion Wind Tunnel is shown.
NASA Astrophysics Data System (ADS)
Raskin, Cody; Owen, J. Michael
2016-11-01
We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.
2-D Finite Element Cable and Box IEMP Analysis
Scivner, G.J.; Turner, C.D.
1998-12-17
A 2-D finite element code has been developed for the solution of arbitrary-geometry cable SGEMP and box IEMP problems. The quasi-static electric field equations with radiation-induced charge deposition and radiation-induced conductivity are numerically solved on a triangular mesh. Multiple regions of different dielectric materials and multiple conductors are permitted.
2D signature for detection and identification of drugs
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Shen, Jingling; Zhang, Cunlin; Zhou, Qingli; Shi, Yulei
2011-06-01
The method of spectral dynamics analysis (SDA method) is used for obtaining the 2D THz signature of drugs. This signature is used for the detection and identification of drugs with similar Fourier spectra from the transmitted THz signal. We discuss the efficiency of the SDA method for the identification of pure methamphetamine (MA), methylenedioxyamphetamine (MDA), 3,4-methylenedioxymethamphetamine (MDMA) and ketamine.
Some Problems of Computer-Aided Testing and "Interview-Like Tests"
ERIC Educational Resources Information Center
Smoline, D.V.
2008-01-01
Computer-based testing--is an effective teacher's tool, intended to optimize course goals and assessment techniques in a comparatively short time. However, this is accomplished only if we deal with high-quality tests. It is strange, but despite the 100-year history of Testing Theory (see, Anastasi, A., Urbina, S. (1997). Psychological testing.…
Extreme Growth of Enstrophy on 2D Bounded Domains
NASA Astrophysics Data System (ADS)
Protas, Bartosz; Sliwiak, Adam
2016-11-01
We study the vortex states responsible for the largest instantaneous growth of enstrophy possible in viscous incompressible flow on a 2D bounded domain. The goal is to compare these results with estimates obtained using mathematical analysis. This problem is closely related to analogous questions recently considered in the periodic setting on 1D, 2D and 3D domains. In addition to systematically characterizing the most extreme behavior, these problems are also closely related to the open question of finite-time singularity formation in the 3D Navier-Stokes system. We demonstrate how such extreme vortex states can be found as solutions of constrained variational optimization problems which, in the limit of small enstrophy, reduce to eigenvalue problems. Computational results will be presented for circular and square domains emphasizing the effect of geometric singularities (corners of the domain) on the structure of the extreme vortex states. Supported by an NSERC (Canada) Discovery Grant.
Quantitative 2D liquid-state NMR.
Giraudeau, Patrick
2014-06-01
Two-dimensional (2D) liquid-state NMR has a very high potential to simultaneously determine the absolute concentration of small molecules in complex mixtures, thanks to its capacity to separate overlapping resonances. However, it suffers from two main drawbacks that probably explain its relatively late development. First, the 2D NMR signal is strongly molecule-dependent and site-dependent; second, the long duration of 2D NMR experiments prevents its general use for high-throughput quantitative applications and affects its quantitative performance. Fortunately, the last 10 years has witnessed an increasing number of contributions where quantitative approaches based on 2D NMR were developed and applied to solve real analytical issues. This review aims at presenting these recent efforts to reach a high trueness and precision in quantitative measurements by 2D NMR. After highlighting the interest of 2D NMR for quantitative analysis, the different strategies to determine the absolute concentrations from 2D NMR spectra are described and illustrated by recent applications. The last part of the manuscript concerns the recent development of fast quantitative 2D NMR approaches, aiming at reducing the experiment duration while preserving - or even increasing - the analytical performance. We hope that this comprehensive review will help readers to apprehend the current landscape of quantitative 2D NMR, as well as the perspectives that may arise from it.
A Test of the Testing Effect: Acquiring Problem-Solving Skills from Worked Examples
ERIC Educational Resources Information Center
van Gog, Tamara; Kester, Liesbeth
2012-01-01
The "testing effect" refers to the finding that after an initial study opportunity, testing is more effective for long-term retention than restudying. The testing effect seems robust and is a finding from the field of cognitive science that has important implications for education. However, it is unclear whether this effect also applies…
Quantum process tomography by 2D fluorescence spectroscopy
Pachón, Leonardo A.; Marcus, Andrew H.; Aspuru-Guzik, Alán
2015-06-07
Reconstruction of the dynamics (quantum process tomography) of the single-exciton manifold in energy transfer systems is proposed here on the basis of two-dimensional fluorescence spectroscopy (2D-FS) with phase-modulation. The quantum-process-tomography protocol introduced here benefits from, e.g., the sensitivity enhancement ascribed to 2D-FS. Although the isotropically averaged spectroscopic signals depend on the quantum yield parameter Γ of the doubly excited-exciton manifold, it is shown that the reconstruction of the dynamics is insensitive to this parameter. Applications to foundational and applied problems, as well as further extensions, are discussed.
Testing foreign language impact on engineering students' scientific problem-solving performance
NASA Astrophysics Data System (ADS)
Tatzl, Dietmar; Messnarz, Bernd
2013-12-01
This article investigates the influence of English as the examination language on the solution of physics and science problems by non-native speakers in tertiary engineering education. For that purpose, a statistically significant total number of 96 students in four year groups from freshman to senior level participated in a testing experiment in the Degree Programme of Aviation at the FH JOANNEUM University of Applied Sciences, Graz, Austria. Half of each test group were given a set of 12 physics problems described in German, the other half received the same set of problems described in English. It was the goal to test linguistic reading comprehension necessary for scientific problem solving instead of physics knowledge as such. The results imply that written undergraduate English-medium engineering tests and examinations may not require additional examination time or language-specific aids for students who have reached university-entrance proficiency in English as a foreign language.
Annotated Bibliography of EDGE2D Use
J.D. Strachan and G. Corrigan
2005-06-24
This annotated bibliography is intended to help EDGE2D users, and particularly new users, find existing published literature that has used EDGE2D. Our idea is that a person can find existing studies which may relate to his intended use, as well as gain ideas about other possible applications by scanning the attached tables.
Staring 2-D hadamard transform spectral imager
Gentry, Stephen M.; Wehlburg, Christine M.; Wehlburg, Joseph C.; Smith, Mark W.; Smith, Jody L.
2006-02-07
A staring imaging system inputs a 2D spatial image containing multi-frequency spectral information. This image is encoded in one dimension with a cyclic Hadamard S-matrix. The resulting image is detected with a spatial 2D detector, and a computer applies a Hadamard transform to recover the encoded image.
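For readers unfamiliar with Hadamard multiplexing, the encode/decode cycle behind such an imager can be sketched as follows. This is a generic S-matrix construction from a Sylvester Hadamard matrix (assuming n + 1 is a power of two), not the patented imager's actual encoding:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n Hadamard matrix (n a power of two)
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def s_matrix(n):
    # S-matrix of order n: drop the first row/column of H_{n+1},
    # then map +1 -> 0 (mask closed) and -1 -> 1 (mask open)
    H = hadamard(n + 1)
    return ((1 - H[1:, 1:]) // 2).astype(float)

def s_inverse(S):
    # Closed-form inverse: S^{-1} = 2/(n+1) * (2*S^T - J)
    n = S.shape[0]
    return (2.0 / (n + 1)) * (2.0 * S.T - np.ones((n, n)))

# Encode a 7-channel spectrum as multiplexed sums, then recover it
S = s_matrix(7)
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0])
y = S @ x                     # multiplexed measurements
x_recovered = s_inverse(S) @ y
```

Each measurement sums roughly half of the channels, which is the source of the multiplexing (Fellgett) advantage for detector-noise-limited systems.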
gpuSPHASE-A shared memory caching implementation for 2D SPH using CUDA
NASA Astrophysics Data System (ADS)
Winkler, Daniel; Meister, Michael; Rezavand, Massoud; Rauch, Wolfgang
2017-04-01
Smoothed particle hydrodynamics (SPH) is a meshless Lagrangian method that has been successfully applied to computational fluid dynamics (CFD), solid mechanics and many other multi-physics problems. Using the method to solve transport phenomena in process engineering requires the simulation of several days to weeks of physical time. Given the high computational demand of CFD, such simulations in 3D would require computation times of years, so a reduction to a 2D domain is inevitable. In this paper gpuSPHASE, a new open-source 2D SPH solver implementation for graphics devices, is developed. It is optimized for simulations that must be executed with thousands of frames per second to be computed in reasonable time. A novel caching algorithm for Compute Unified Device Architecture (CUDA) shared memory is proposed and implemented. The software is validated and the performance is evaluated for the well-established dambreak test case.
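To make the SPH machinery concrete, here is a minimal 2D density summation with the standard cubic spline kernel. This is a plain NumPy sketch of the generic method, not gpuSPHASE's CUDA implementation:

```python
import numpy as np

def cubic_spline_2d(r, h):
    # Monaghan cubic spline kernel in 2D; normalization 10 / (7*pi*h^2)
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h * h)
    w = np.where(q <= 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q <= 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def density_summation(pos, mass, h):
    # rho_i = sum_j m_j * W(|x_i - x_j|, h), brute force over all pairs
    diff = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (mass[None, :] * cubic_spline_2d(r, h)).sum(axis=1)
```

A production solver replaces the O(N^2) all-pairs loop with a neighbor search; the shared-memory caching described in the paper accelerates exactly that neighbor-access pattern on the GPU.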
Raybould, Alan
2006-01-01
Environmental risk assessments can provide high confidence of minimal risk by testing theories, "risk hypotheses", that predict the likelihood of unacceptable harmful events. The creation of risk hypotheses and a plan to test them is called problem formulation. Effective problem formulation seeks to maximize the possibility of detecting effects that indicate potential risk; if such effects are not detected, minimal risk is indicated with high confidence. Two important implications are that artificial test conditions can increase confidence, whereas prescriptive data requirements can reduce confidence (increase uncertainty) if they constrain problem formulation. Poor problem formulation can increase environmental risk because it leads to the collection of superfluous data that may delay or prevent the introduction of environmentally beneficial products.
A COMPARISON OF CONFIDENCE INTERVAL PROCEDURES IN CENSORED LIFE TESTING PROBLEMS.
Obtaining a confidence interval for a parameter lambda of an exponential distribution is a frequent occurrence in life testing problems. Oftentimes, the test plan used is one in which all the observations are censored at the same time point. Several approximate confidence interval procedures are…
Problem-Solving Test: RNA and Protein Synthesis in Bacteriophage-Infected "E. coli" Cells
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2008-01-01
The classic experiment presented in this problem-solving test was designed to identify the template molecules of translation by analyzing the synthesis of phage proteins in "Escherichia coli" cells infected with bacteriophage T4. The work described in this test led to one of the most seminal discoveries of early molecular biology: it dealt a…
ELLIPT2D: A Flexible Finite Element Code Written in Python
Pletzer, A.; Mollis, J.C.
2001-03-22
The use of the Python scripting language for scientific applications, and in particular to solve partial differential equations, is explored. It is shown that Python's rich data structures and object-oriented features can be exploited to write programs that are not only significantly more concise than their counterparts written in Fortran, C or C++, but are also numerically efficient. To illustrate this, a two-dimensional finite element code (ELLIPT2D) has been written. ELLIPT2D provides a flexible and easy-to-use framework for solving a large class of second-order elliptic problems. The program allows for structured or unstructured meshes. All functions defining the elliptic operator are user-supplied, and so are the boundary conditions, which can be of Dirichlet, Neumann or Robin type. ELLIPT2D makes extensive use of dictionaries (hash tables) as a way to represent sparse matrices. Other key features of the Python language that have been widely used include operator overloading, error handling, array slicing, and the Tkinter module for building graphical user interfaces. As an example of the utility of ELLIPT2D, a nonlinear solution of the Grad-Shafranov equation is computed using a Newton iterative scheme. A second application focuses on a solution of the toroidal Laplace equation coupled to a magnetohydrodynamic stability code, a problem arising in the context of magnetic fusion research.
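The dictionary-as-sparse-matrix idea the abstract mentions can be sketched in a few lines of plain Python (our illustration of the general technique, not ELLIPT2D's actual class):

```python
class DictSparse:
    """Dictionary-of-keys sparse matrix: maps (row, col) -> value."""

    def __init__(self, shape):
        self.shape = shape
        self.data = {}

    def add(self, i, j, v):
        # accumulate, as when summing element stiffness contributions
        self.data[(i, j)] = self.data.get((i, j), 0.0) + v

    def matvec(self, x):
        # y = A @ x, touching only the stored nonzeros
        y = [0.0] * self.shape[0]
        for (i, j), v in self.data.items():
            y[i] += v * x[j]
        return y

# Assemble a 3 x 3 1D Laplacian stencil and apply it
A = DictSparse((3, 3))
for i in range(3):
    A.add(i, i, 2.0)
for i in range(2):
    A.add(i, i + 1, -1.0)
    A.add(i + 1, i, -1.0)
```

The hash-table representation makes assembly order-independent and keeps storage proportional to the number of nonzeros, at the cost of slower arithmetic than a compressed-row format.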
Use of laboratory and field testing to identify potential production problems in the Troll field
Hartley, R.; Jadid, M.B.
1989-02-01
The areal extent of the oil found in Troll made it clear at a very early stage in the field's appraisal that subsea wells would be required if the oil were developed. Owing to cooling in the subsea flowline, subsea wells can be expected to pose more production chemistry problems than would be expected with conventional platform wells. Consequently, a number of laboratory tests were carried out during the appraisal campaign to identify problems to be expected with scaling, foaming, emulsification, wax deposition, and hydrates. Dehydration and wax deposition tests were also carried out offshore during appraisal-well testing. These tests are described, together with the methods subsequently adopted to minimize future production problems.
Ginsparg, P.
1991-01-01
These are introductory lectures for a general audience that give an overview of the subject of matrix models and their application to random surfaces, 2d gravity, and string theory. They are intentionally 1.5 years out of date.
NASA Astrophysics Data System (ADS)
Dekker, T.; de Zwart, S. T.; Willemsen, O. H.; Hiddink, M. G. H.; IJzerman, W. L.
2006-02-01
A prerequisite for a wide market acceptance of 3D displays is the ability to switch between 3D and full resolution 2D. In this paper we present a robust and cost effective concept for an auto-stereoscopic switchable 2D/3D display. The display is based on an LCD panel, equipped with switchable LC-filled lenticular lenses. We will discuss 3D image quality, with the focus on display uniformity. We show that slanting the lenticulars in combination with a good lens design can minimize non-uniformities in our 20" 2D/3D monitors. Furthermore, we introduce fractional viewing systems as a very robust concept to further improve uniformity in the case slanting the lenticulars and optimizing the lens design are not sufficient. We will discuss measurements and numerical simulations of the key optical characteristics of this display. Finally, we discuss 2D image quality, the switching characteristics and the residual lens effect.
Evaluation of 2D ceramic matrix composites in aeroconvective environments
NASA Technical Reports Server (NTRS)
Riccitiello, Salvatore R.; Love, Wendell L.; Balter-Peterson, Aliza
1992-01-01
An evaluation is conducted of a novel ceramic-matrix composite (CMC) material system for use in the aeroconvective-heating environments encountered by the nose caps and wing leading edges of such aerospace vehicles as the Space Shuttle, during orbit-insertion and reentry from LEO. These CMCs are composed of an SiC matrix that is reinforced with Nicalon, Nextel, or carbon refractory fibers in a 2D architecture. The test program conducted for the 2D CMCs gave attention to their subsurface oxidation.
Chemical Approaches to 2D Materials.
Samorì, Paolo; Palermo, Vincenzo; Feng, Xinliang
2016-08-01
Chemistry plays an ever-increasing role in the production, functionalization, processing and applications of graphene and other 2D materials. This special issue highlights a selection of enlightening chemical approaches to 2D materials, which nicely reflect the breadth of the field and convey the excitement of the individuals involved in it, who are trying to translate graphene and related materials from the laboratory into a real, high-impact technology.
A New 2D-Transport, 1D-Diffusion Approximation of the Boltzmann Transport equation
Larsen, Edward
2013-06-17
The work performed in this project consisted of the derivation, implementation, and testing of a new, computationally advantageous approximation to the 3D Boltzmann transport equation. The solution of the Boltzmann equation is the neutron flux in nuclear reactor cores and shields, but solving this equation is difficult and costly. The new “2D/1D” approximation takes advantage of a special geometric feature of typical 3D reactors to approximate the neutron transport physics in a specific (axial) direction, but not in the other two (radial) directions. The resulting equation is much less expensive to solve computationally, and its solutions are expected to be sufficiently accurate for many practical problems. In this project we formulated the new equation, discretized it using standard methods, developed a stable iteration scheme for solving the equation, implemented the new numerical scheme in the MPACT code, and tested the method on several realistic problems. All the hoped-for features of this new approximation were seen. For large, difficult problems, the resulting 2D/1D solution is highly accurate, and is calculated about 100 times faster than a 3D discrete ordinates simulation.
2D FEM Heat Transfer & E&M Field Code
1992-04-02
TOPAZ and TOPAZ2D are two-dimensional implicit finite element computer codes for heat transfer analysis. TOPAZ2D can also be used to solve electrostatic and magnetostatic problems. The programs solve for the steady-state or transient temperature or electrostatic and magnetostatic potential field on two-dimensional planar or axisymmetric geometries. Material properties may be temperature or potential-dependent and either isotropic or orthotropic. A variety of time and temperature-dependent boundary conditions can be specified including temperature, flux, convection, and radiation. By implementing the user subroutine feature, users can model chemical reaction kinetics and allow for any type of functional representation of boundary conditions and internal heat generation. The programs can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in the material surrounding the enclosure. Additional features include thermal contact resistance across an interface, bulk fluids, phase change, and energy balances.
Jaarsveld, Saskia; Lachmann, Thomas
2017-01-01
This paper discusses the importance of three features of psychometric tests for cognition research: construct definition, problem space, and knowledge domain. Definition of constructs, e.g., intelligence or creativity, forms the theoretical basis for test construction. Problem space, being well or ill-defined, is determined by the cognitive abilities considered to belong to the constructs, e.g., convergent thinking to intelligence, divergent thinking to creativity. Knowledge domain and the possibilities it offers cognition are reflected in test results. We argue that (a) comparing results of tests with different problem spaces is more informative when cognition operates in both tests on an identical knowledge domain, and (b) intertwining of abilities related to both constructs can only be expected in tests developed to instigate such a process. Test features should guarantee that abilities can contribute to self-generated and goal-directed processes bringing forth solutions that are both new and applicable. We propose and discuss a test example that was developed to address these issues. PMID:28220098
A multigroup radiation diffusion test problem: Comparison of code results with analytic solution
Shestakov, A I; Harte, J A; Bolstad, J H; Offner, S R
2006-12-21
We consider a 1D, slab-symmetric test problem for the multigroup radiation diffusion and matter energy balance equations. The test simulates diffusion of energy from a hot central region. Opacities vary with the cube of the frequency and radiation emission is given by a Wien spectrum. We compare results from two LLNL codes, Raptor and Lasnex, with tabular data that define the analytic solution.
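The two ingredients named in the abstract (a Wien emission spectrum and an opacity varying with the cube of frequency) can be sketched minimally as follows; the constants and units here are placeholders, not the test problem's actual values:

```python
import math

def wien_emission(nu, T, c0=1.0):
    """Wien spectrum B(nu) ~ nu^3 exp(-nu/T); the constant h/k is folded
    into the units, so T is expressed in frequency units."""
    return c0 * nu**3 * math.exp(-nu / T)

def opacity(nu, kappa0=1.0, nu0=1.0):
    """Opacity varying with the cube of the frequency, as in the test problem."""
    return kappa0 * (nu / nu0) ** 3
```

A multigroup code would integrate `wien_emission` over each frequency group to obtain the group emission sources; the Wien spectrum above peaks at nu = 3T.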
2014-09-09
computerized tomography, synthetic aperture radar, geophysical prospecting and nondestructive testing. Since the solution of any inverse problem is...domain in order to handle limited aperture data. The main accomplishments during the period of this report were: 1. The derivation of new methods...in nondestructive testing using the theory of transmission eigenvalues. 2. The introduction and investigation of a new class of inverse scattering
Development of a MEMS 2D separations device
NASA Astrophysics Data System (ADS)
Bloschock, Kristen P.; Flyer, Jonathan N.; Schneider, Thomas W.; Hussam, Abul; Van Keuren, Edward R.
2004-12-01
A polymer based biochip for rapid 2D separations of peptides, proteins, and other biomedically relevant molecules was designed and fabricated. Like traditional 2D polyacrylamide gel electrophoresis (2D-PAGE) methods, the device will allow molecules to separate based on isoelectric point (pI) and molecular weight (MW). Our design, however, integrates both an initial capillary isoelectric focusing (cIEF) step followed by capillary electrophoresis (CE) in multiple parallel channels, all on a single microfluidic chip. Not only is the "lab-on-a-chip" design easier to use and less expensive, but the miniaturization of the device produces very rapid separations. Compared to traditional 2D-PAGE, which can take hours to complete, we estimate separation times on the order of seconds. Fluorescence detection will be used in the preliminary stages of testing, but the device also is equipped with integrated electrodes in the electrophoresis channels to perform multiplexed electrochemical detection for quantitative analysis. We will present preliminary results of the chip development and testing.
Rowley-Neale, Samuel J; Fearn, Jamie M; Brownson, Dale A C; Smith, Graham C; Ji, Xiaobo; Banks, Craig E
2016-08-21
Two-dimensional molybdenum disulphide nanosheets (2D-MoS2) have proven to be an effective electrocatalyst, with particular attention being focused on their use towards increasing the efficiency of the reactions associated with hydrogen fuel cells. Whilst the majority of research has focused on the Hydrogen Evolution Reaction (HER), herein we explore the use of 2D-MoS2 as a potential electrocatalyst for the much less researched Oxygen Reduction Reaction (ORR). We stray from literature conventions and perform experiments in 0.1 M H2SO4 acidic electrolyte for the first time, evaluating the electrochemical performance of the ORR with 2D-MoS2 electrically wired/immobilised upon several carbon based electrodes (namely; Boron Doped Diamond (BDD), Edge Plane Pyrolytic Graphite (EPPG), Glassy Carbon (GC) and Screen-Printed Electrodes (SPE)) whilst exploring a range of 2D-MoS2 coverages/masses. Consequently, the findings of this study are highly applicable to real world fuel cell applications. We show that significant improvements in ORR activity can be achieved through the careful selection of the underlying/supporting carbon materials that electrically wire the 2D-MoS2 and utilisation of an optimal mass of 2D-MoS2. The ORR onset is observed to be reduced to ca. +0.10 V for EPPG, GC and SPEs at 2D-MoS2 (1524 ng cm(-2) modification), which is far closer to Pt at +0.46 V compared to bare/unmodified EPPG, GC and SPE counterparts. This report is the first to demonstrate such beneficial electrochemical responses in acidic conditions using a 2D-MoS2 based electrocatalyst material on a carbon-based substrate (SPEs in this case). Investigation of the beneficial reaction mechanism reveals the ORR to occur via a 4 electron process in specific conditions; elsewhere a 2 electron process is observed. This work offers valuable insights for those wishing to design, fabricate and/or electrochemically test 2D-nanosheet materials towards the ORR.
Orthotropic Piezoelectricity in 2D Nanocellulose
NASA Astrophysics Data System (ADS)
García, Y.; Ruiz-Blanco, Yasser B.; Marrero-Ponce, Yovani; Sotomayor-Torres, C. M.
2016-10-01
The control of electromechanical responses within bonding regions is essential to meet frontier challenges in nanotechnologies, such as molecular electronics and biotechnology. Here, we present Iβ-nanocellulose as a potentially new orthotropic 2D piezoelectric crystal. The predicted in-layer piezoelectricity originates in a sui generis hydrogen-bond pattern. Building on this fact, and using a combination of ab initio and ad hoc models, we introduce a description of electrical profiles along chemical bonds. These developments yield a rationale for modelling the extended piezoelectric effect that originates at bond scales. The order of magnitude estimated for the 2D Iβ-nanocellulose piezoelectric response, ~pm V−1, ranks this material at the level of currently used piezoelectric energy generators and new artificial 2D designs. This finding could be crucial for developing alternative materials to drive emerging nanotechnologies.
Orthotropic Piezoelectricity in 2D Nanocellulose
García, Y.; Ruiz-Blanco, Yasser B.; Marrero-Ponce, Yovani; Sotomayor-Torres, C. M.
2016-01-01
The control of electromechanical responses within bonding regions is essential to meet frontier challenges in nanotechnologies, such as molecular electronics and biotechnology. Here, we present Iβ-nanocellulose as a potentially new orthotropic 2D piezoelectric crystal. The predicted in-layer piezoelectricity originates in a sui generis hydrogen-bond pattern. Building on this fact, and using a combination of ab initio and ad hoc models, we introduce a description of electrical profiles along chemical bonds. These developments yield a rationale for modelling the extended piezoelectric effect that originates at bond scales. The order of magnitude estimated for the 2D Iβ-nanocellulose piezoelectric response, ~pm V−1, ranks this material at the level of currently used piezoelectric energy generators and new artificial 2D designs. This finding could be crucial for developing alternative materials to drive emerging nanotechnologies. PMID:27708364
Orthotropic Piezoelectricity in 2D Nanocellulose.
García, Y; Ruiz-Blanco, Yasser B; Marrero-Ponce, Yovani; Sotomayor-Torres, C M
2016-10-06
The control of electromechanical responses within bonding regions is essential to meet frontier challenges in nanotechnologies, such as molecular electronics and biotechnology. Here, we present Iβ-nanocellulose as a potentially new orthotropic 2D piezoelectric crystal. The predicted in-layer piezoelectricity originates in a sui generis hydrogen-bond pattern. Building on this fact, and using a combination of ab initio and ad hoc models, we introduce a description of electrical profiles along chemical bonds. These developments yield a rationale for modelling the extended piezoelectric effect that originates at bond scales. The order of magnitude estimated for the 2D Iβ-nanocellulose piezoelectric response, ~pm V(-1), ranks this material at the level of currently used piezoelectric energy generators and new artificial 2D designs. This finding could be crucial for developing alternative materials to drive emerging nanotechnologies.
2D microwave imaging reflectometer electronics
Spear, A. G.; Domier, C. W.; Hu, X.; Muscatello, C. M.; Ren, X.; Luhmann, N. C.; Tobias, B. J.
2014-11-15
A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program.
Large Area Synthesis of 2D Materials
NASA Astrophysics Data System (ADS)
Vogel, Eric
Transition metal dichalcogenides (TMDs) have generated significant interest for numerous applications including sensors, flexible electronics, heterostructures and optoelectronics due to their interesting, thickness-dependent properties. Despite recent progress, the synthesis of high-quality and highly uniform TMDs on a large scale is still a challenge. In this talk, synthesis routes for WSe2 and MoS2 that achieve monolayer thickness uniformity across large area substrates with electrical properties equivalent to geological crystals will be described. Controlled doping of 2D semiconductors is also critically required. However, methods established for conventional semiconductors, such as ion implantation, are not easily applicable to 2D materials because of their atomically thin structure. Redox-active molecular dopants will be demonstrated which provide large changes in carrier density and workfunction through the choice of dopant, treatment time, and the solution concentration. Finally, several applications of these large-area, uniform 2D materials will be described including heterostructures, biosensors and strain sensors.
2D microwave imaging reflectometer electronics.
Spear, A G; Domier, C W; Hu, X; Muscatello, C M; Ren, X; Tobias, B J; Luhmann, N C
2014-11-01
A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program.
Assessing 2D electrophoretic mobility spectroscopy (2D MOSY) for analytical applications.
Fang, Yuan; Yushmanov, Pavel V; Furó, István
2016-12-08
Electrophoretic displacement of a charged entity phase-modulates the spectrum acquired in electrophoretic NMR experiments, and this modulation can be presented via 2D FT as 2D mobility spectroscopy (MOSY) spectra. We compare, in various mixed solutions, the chemical selectivity provided by 2D MOSY spectra with that provided by 2D diffusion-ordered spectroscopy (DOSY) spectra and demonstrate, under the conditions explored, a superior performance of the former method. 2D MOSY also compares favourably with closely related LC-NMR methods. The shape of 2D MOSY spectra in complex mixtures is strongly modulated by the pH of the sample, a feature that has potential for areas such as drug discovery and metabolomics. Copyright © 2016 The Authors. Magnetic Resonance in Chemistry published by John Wiley & Sons Ltd.
Summary of Documentation for DYNA3D-ParaDyn's Software Quality Assurance Regression Test Problems
Zywicz, Edward
2016-08-18
The Software Quality Assurance (SQA) regression test suite for DYNA3D (Zywicz and Lin, 2015) and ParaDyn (DeGroot, et al., 2015) currently contains approximately 600 problems divided into 21 suites, and is a required component of ParaDyn’s SQA plan (Ferencz and Oliver, 2013). The regression suite allows developers to ensure that software modifications do not unintentionally alter the code response. The entire regression suite is run prior to permanently incorporating any software modification or addition. When code modifications alter test problem results, the specific cause must be determined and fully understood before the software changes and revised test answers can be incorporated. The regression suite is executed on LLNL platforms using a Python script and an associated data file. The user specifies the DYNA3D or ParaDyn executable, number of processors to use, test problems to run, and other options to the script. The data file details how each problem and its answer extraction scripts are executed. For each problem in the regression suite there exists an input deck, an eight-processor partition file, an answer file, and various extraction scripts. These scripts assemble a temporary answer file in a specific format from the simulation results. The temporary and stored answer files are compared to a specific level of numerical precision, and when differences are detected the test problem is flagged as failed. Presently, numerical results are stored and compared to 16 digits. At this accuracy level different processor types, compilers, number of partitions, etc. impact the results to various degrees. Thus, for consistency purposes the regression suite is run with ParaDyn using 8 processors on machines with a specific processor type (currently the Intel Xeon E5530 processor). For non-parallel regression problems, i.e., the two XFEM problems, DYNA3D is used instead. When environments or platforms change, executables using the current source code and the new
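The fixed-precision answer comparison described above can be sketched as follows. The function `answers_match` and its list-based interface are hypothetical illustrations of the idea, not the actual DYNA3D/ParaDyn regression tooling:

```python
def answers_match(computed, stored, digits=16):
    """Compare two sequences of floats to `digits` significant digits,
    flagging a failure when any pair differs at that precision
    (mirrors the suite's stored-answer check at 16 digits)."""
    if len(computed) != len(stored):
        return False
    for a, b in zip(computed, stored):
        # Round both values to the same number of significant digits
        # via scientific-notation formatting, then compare exactly.
        fa = float(f"{a:.{digits - 1}e}")
        fb = float(f"{b:.{digits - 1}e}")
        if fa != fb:
            return False
    return True
```

At 16 digits even compiler or processor changes perturb results, which is why the suite pins a processor type; loosening `digits` trades sensitivity for portability.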
Rosso, C; Furlan, P M; Rolle, L; Fontana, D
1994-09-01
The Authors analyze the scientific community's attitude towards the VSS test. Related psychological problems are examined: the doctor's fear of being accused of voyeurism, the doctor's sense of inadequacy in "invading" the patient's sphere of intimacy, the doctor's relationship with his own erotic fantasies and with pornography, the "castrating" aspects of the hospital environment, and so on.
Testing Foreign Language Impact on Engineering Students' Scientific Problem-Solving Performance
ERIC Educational Resources Information Center
Tatzl, Dietmar; Messnarz, Bernd
2013-01-01
This article investigates the influence of English as the examination language on the solution of physics and science problems by non-native speakers in tertiary engineering education. For that purpose, a statistically significant total number of 96 students in four year groups from freshman to senior level participated in a testing experiment in…
ERIC Educational Resources Information Center
Smith, Mike U.
Both teachers and students alike acknowledge that genetics and genetics problem-solving are extremely difficult to learn and to teach. Therefore, a number of recommendations for teaching college genetics are offered. Although few of these ideas have as yet been tested in controlled experiments, they are supported by research and experience and may…
The Relationship between Test Anxiety, Epistemological Beliefs and Problem Solving among Students
ERIC Educational Resources Information Center
Mehdinezhad, Vali; Bamari, Zeinab
2015-01-01
The purpose of this study was to investigate the test anxiety, epistemological beliefs and problem solving among students. The target population of the current research was all the students of University of Sistan and Baluchestan in the academic year 2013-2014 and the number of the sample was 375. They were selected using a classified and simple…
Problem-solving deficits in alcoholics: evidence from the California Card Sorting Test.
Beatty, W W; Katzung, V M; Nixon, S J; Moreland, V J
1993-11-01
In an attempt to clarify the nature of the problem-solving deficits exhibited by chronic alcoholics, the California Card Sorting Test (CCST) and other measures of abstraction and problem solving were administered to 23 alcoholics and 16 nonalcoholic controls, equated for age, education and vocabulary. On the CCST, the alcoholics exhibited three types of deficits which appeared to be relatively independent. First, the alcoholics generated and identified fewer correct concepts than controls, although they executed concepts normally when cued by the examiner. Second, the alcoholics made more perseverative sorting responses and perseverative verbal explanations for their sorting behavior than did controls. Third, alcoholics provided less complete verbal explanations of the concepts that they correctly generated or identified. The differential importance of these factors on various measures of problem solving may help to explain the varied patterns of inefficient problem solving exhibited by alcoholics.
2D Distributed Sensing Via TDR
2007-11-02
[Presentation residue: only slide fragments survive, indicating a VARTM/RTM experimental setup, a vision of non-contact 2D sensing of a VARTM process via the EM field of a transmission line, TDR response data, and a copyright notice (© 2003 University of Delaware); no coherent abstract is recoverable.]
Inkjet printing of 2D layered materials.
Li, Jiantong; Lemme, Max C; Östling, Mikael
2014-11-10
Inkjet printing of 2D layered materials, such as graphene and MoS2, has attracted great interest for emerging electronics. However, incompatible rheology, low concentration, severe aggregation and toxicity of solvents constitute critical challenges that hamper manufacturing efficiency and product quality. Here, we introduce a simple and general technology concept (distillation-assisted solvent exchange) to efficiently overcome these challenges. By implementing the concept, we have demonstrated excellent jetting performance, ideal printing patterns and a variety of promising applications for inkjet printing of 2D layered materials.
Emotional Intelligence and Problem Solving Strategy: Comparative Study Basedon "Tower of Hanoi" Test
Arefnasab, Zahra; Zare, Hosein; Babamahmoodi, Abdolreza
2012-01-01
Objective: The aim of this study was to compare problem-solving strategies between people with high and low emotional intelligence (EI). Methods: This is a cross-sectional descriptive study. The sample comprised senior BS and BA students between 20 and 30 years old, divided into high and low emotional intelligence groups of 30 subjects each. Data were analyzed with the non-parametric chi-square test for the main dependent variable (problem-solving strategies) and the accessory dependent variables (manner of starting and fulfillment of the test). The independent two-group t-test was used for the other accessory dependent variables (number of errors and total time used for fulfillment of the test). Results: There was a significant difference between the two groups in "number of errors" (t=-3.67, p=0) and "total time used for fulfillment of the test" (t=-6.17, p=0), and there was a significant relation between EI and "problem solving strategies" (χ2=25.71, p<0.01; Cramer's V=0.65, p<0.01) and between EI and "fulfillment of the test" (χ2=20.31, p<0.01; φ=0.58, p<0.01). The relation between EI and "manner of starting the test" was not significant (χ2=1.11, p=0.29). Subjects with high EI used the "insightful" strategy more, and subjects with low EI used the "trial-and-error" strategy more. The first group completed the test more rapidly and with fewer errors than the second group, and was more successful in performing the test. Conclusion: People with high EI solve problems significantly better than people with low EI. PMID:24644484
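The χ² and Cramér's V statistics reported in the abstract above can be computed from a contingency table as in this sketch; the example table is hypothetical, not the study's raw counts:

```python
import math

def chi_square(table):
    """Pearson chi-square for an r x c contingency table given as a list of rows
    (assumes every row and column has a nonzero total)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

def cramers_v(table):
    """Cramer's V effect size, the association measure quoted in the abstract."""
    chi2 = chi_square(table)
    n = sum(sum(row) for row in table)
    k = min(len(table), len(table[0]))
    return math.sqrt(chi2 / (n * (k - 1)))
```

V ranges from 0 (independence) to 1 (perfect association), which is why a value of 0.65 is read as a strong EI-strategy relation.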
[Micronucleus test of human oral buccal epithelium: problems, progress and prospects].
Kalaev, V N; Artiukhov, V G; Nechaeva, M S
2014-01-01
The review analyzes articles by Russian and foreign authors for the period from 2000 to 2012 devoted to the problems of application, analysis and interpretation of the results of the micronucleus test in human buccal epithelium. Nuclear abnormalities found in the cells of the oral mucosa are described. The paper summarizes works devoted to the influence of micronucleus test methods (staining, taking scrapings) on its results. Current views on the factors of different etiology (sex, age, genotype, psycho-physiological characteristics, immune status, diseases of different etiology, man-made pollution, climatic and geographical conditions, ionizing and non-ionizing radiation, chemical compounds (drugs, dietary supplements, androgenic steroids, etc.), dental fillings, occupational exposures, alcohol, use of tobacco blends) inducing nuclear aberrations are summarized as a scheme. The problems and unresolved issues related to the peculiarities of the micronucleus test are noted.
Simulator test to study hot-flow problems related to a gas cooled reactor
NASA Technical Reports Server (NTRS)
Poole, J. W.; Freeman, M. P.; Doak, K. W.; Thorpe, M. L.
1973-01-01
An advance study of materials, fuel injection, and hot flow problems related to the gas core nuclear rocket is reported. The first task was to test a previously constructed induction heated plasma GCNR simulator above 300 kW. A number of tests are reported operating in the range of 300 kW at 10,000 cps. A second simulator was designed but not constructed for cold-hot visualization studies using louvered walls. A third task was a paper investigation of practical uranium feed systems, including a detailed discussion of related problems. The last assignment resulted in two designs for plasma nozzle test devices that could be operated at 200 atm on hydrogen.
NASA Astrophysics Data System (ADS)
Di Fiore, V.; Cavuoto, G.; Tarallo, D.; Punzo, M.; Evangelista, L.
2016-05-01
A joint analysis of down-hole (DH) and multichannel analysis of surface waves (MASW) measurements offers a complete evaluation of shear wave velocity profiles, especially for sites where a strong lateral variability is expected, such as archeological sites. In this complex stratigraphic setting, the high "subsoil anisotropy" (i.e., sharp lithological changes due to the presence of anthropogenic backfill deposits and/or buried man-made structures) implies a different role for DH and MASW tests. This paper discusses some results of a broad experimental program conducted on the Palatine Hill, one of the most ancient areas of the city of Rome (Italy). The experiments were part of a project on seismic microzoning and consisted of 20 MASW and 11 DH tests. The main objective of this study was to examine the difficulties related to the interpretation of the DH and MASW tests and the reliability limits inherent in the application of the noninvasive method in complex stratigraphic settings. As is well known, DH tests provide good determinations of shear wave velocities (Vs) for different lithologies and man-made materials, whereas MASW tests provide average values for the subsoil volume investigated. The data obtained from each method with blind tests were compared and were correlated to site-specific subsurface conditions, including lateral variability. Differences between punctual (DH) and global (MASW) Vs measurements are discussed, quantifying the errors by synthetic comparison and by site response analyses. This study demonstrates that, for archeological sites, VS profiles obtained from the DH and MASW methods differ by more than 15 %. However, the local site effect showed comparable results in terms of natural frequencies, whereas the resolution of the inverted shear wave velocity was influenced by the fundamental mode of propagation.
Active exterior cloaking for the 2D Laplace and Helmholtz equations.
Vasquez, Fernando Guevara; Milton, Graeme W; Onofrei, Daniel
2009-08-14
A new cloaking method is presented for 2D quasistatics and the 2D Helmholtz equation that we speculate extends to other linear wave equations. For 2D quasistatics it is proven how a single active exterior cloaking device can be used to shield an object from surrounding fields, yet produce very small scattered fields. The problem is reduced to finding a polynomial which is close to 1 in a disk and close to 0 in another disk, and such a polynomial is constructed. For the 2D Helmholtz equation it is numerically shown that three exterior cloaking devices placed around the object suffice to hide it.
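One standard way to obtain a polynomial close to 1 in one disk and close to 0 in another is a Bernstein/Hermite-type interpolant; the construction below is offered as an illustration of the idea, not necessarily the exact polynomial constructed in the paper:

```python
from math import comb

def cloak_poly(z, n=8):
    """Degree 2n-1 polynomial with f(0)=1, f(1)=0 and n-1 vanishing
    derivatives at both points, so it stays ~1 near z=0 and ~0 near z=1."""
    return (1 - z) ** n * sum(comb(n - 1 + k, k) * z ** k for k in range(n))
```

It satisfies cloak_poly(z) + cloak_poly(1 - z) = 1, and the flatness near the two interpolation points sharpens as n grows; composed with a scaling and shift of the complex plane, this yields a polynomial nearly 1 on one disk and nearly 0 on another.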
Parallel Stitching of 2D Materials.
Ling, Xi; Lin, Yuxuan; Ma, Qiong; Wang, Ziqiang; Song, Yi; Yu, Lili; Huang, Shengxi; Fang, Wenjing; Zhang, Xu; Hsu, Allen L; Bie, Yaqing; Lee, Yi-Hsien; Zhu, Yimei; Wu, Lijun; Li, Ju; Jarillo-Herrero, Pablo; Dresselhaus, Mildred; Palacios, Tomás; Kong, Jing
2016-03-23
Diverse parallel stitched 2D heterostructures, including metal-semiconductor, semiconductor-semiconductor, and insulator-semiconductor, are synthesized directly through selective "sowing" of aromatic molecules as the seeds in the chemical vapor deposition (CVD) method. The methodology enables the large-scale fabrication of lateral heterostructures, which offers tremendous potential for its application in integrated circuits.
Beckett, Phil
2012-01-01
The technique of two-dimensional (2D) gel electrophoresis is a powerful tool for separating complex mixtures of proteins, but since its inception in the mid 1970s, it acquired the stigma of being a very difficult application to master and was generally used to its best effect by experts. The introduction of commercially available immobilized pH gradients in the early 1990s provided enhanced reproducibility and easier protocols, leading to a pronounced increase in popularity of the technique. However gel-to-gel variation was still difficult to control without the use of technical replicates. In the mid 1990s (at the same time as the birth of "proteomics"), the concept of multiplexing fluorescently labeled proteins for 2D gel separation was realized by Jon Minden's group and has led to the ability to design experiments to virtually eliminate gel-to-gel variation, resulting in biological replicates being used for statistical analysis with the ability to detect very small changes in relative protein abundance. This technology is referred to as 2D difference gel electrophoresis (2D DIGE).
Parallel stitching of 2D materials
Ling, Xi; Wu, Lijun; Lin, Yuxuan; ...
2016-01-27
Diverse parallel stitched 2D heterostructures, including metal–semiconductor, semiconductor–semiconductor, and insulator–semiconductor, are synthesized directly through selective “sowing” of aromatic molecules as the seeds in the chemical vapor deposition (CVD) method. The methodology enables the large-scale fabrication of lateral heterostructures, which offers tremendous potential for its application in integrated circuits.
NASA High-Speed 2D Photogrammetric Measurement System
NASA Technical Reports Server (NTRS)
Dismond, Harriett R.
2012-01-01
The object of this report is to provide users of the NASA high-speed 2D photogrammetric measurement system with procedures required to obtain drop-model trajectory and impact data for full-scale and sub-scale models. This guide focuses on use of the system for vertical drop testing at the NASA Langley Landing and Impact Research (LandIR) Facility.
NASA Astrophysics Data System (ADS)
Rowley-Neale, Samuel J.; Fearn, Jamie M.; Brownson, Dale A. C.; Smith, Graham C.; Ji, Xiaobo; Banks, Craig E.
2016-08-01
Two-dimensional molybdenum disulphide nanosheets (2D-MoS2) have proven to be an effective electrocatalyst, with particular attention being focused on their use towards increasing the efficiency of the reactions associated with hydrogen fuel cells. Whilst the majority of research has focused on the Hydrogen Evolution Reaction (HER), herein we explore the use of 2D-MoS2 as a potential electrocatalyst for the much less researched Oxygen Reduction Reaction (ORR). We stray from literature conventions and perform experiments in 0.1 M H2SO4 acidic electrolyte for the first time, evaluating the electrochemical performance of the ORR with 2D-MoS2 electrically wired/immobilised upon several carbon based electrodes (namely; Boron Doped Diamond (BDD), Edge Plane Pyrolytic Graphite (EPPG), Glassy Carbon (GC) and Screen-Printed Electrodes (SPE)) whilst exploring a range of 2D-MoS2 coverages/masses. Consequently, the findings of this study are highly applicable to real world fuel cell applications. We show that significant improvements in ORR activity can be achieved through the careful selection of the underlying/supporting carbon materials that electrically wire the 2D-MoS2 and utilisation of an optimal mass of 2D-MoS2. The ORR onset is observed to be reduced to ca. +0.10 V for EPPG, GC and SPEs at 2D-MoS2 (1524 ng cm-2 modification), which is far closer to Pt at +0.46 V compared to bare/unmodified EPPG, GC and SPE counterparts. This report is the first to demonstrate such beneficial electrochemical responses in acidic conditions using a 2D-MoS2 based electrocatalyst material on a carbon-based substrate (SPEs in this case). Investigation of the beneficial reaction mechanism reveals the ORR to occur via a 4 electron process in specific conditions; elsewhere a 2 electron process is observed. This work offers valuable insights for those wishing to design, fabricate and/or electrochemically test 2D-nanosheet materials towards the ORR.
Application of 2D Non-Graphene Materials and 2D Oxide Nanostructures for Biosensing Technology
Shavanova, Kateryna; Bakakina, Yulia; Burkova, Inna; Shtepliuk, Ivan; Viter, Roman; Ubelis, Arnolds; Beni, Valerio; Starodub, Nickolaj; Yakimova, Rositsa; Khranovskyy, Volodymyr
2016-01-01
The discovery of graphene and its unique properties has inspired researchers to try to invent other two-dimensional (2D) materials. After considerable research effort, a distinct “beyond graphene” domain has been established, comprising the library of non-graphene 2D materials. It is significant that some 2D non-graphene materials possess solid advantages over their predecessor, such as having a direct band gap, and therefore are highly promising for a number of applications. These applications are not limited to nano- and opto-electronics, but have a strong potential in biosensing technologies, as one example. However, since most of the 2D non-graphene materials have been newly discovered, most of the research efforts are concentrated on material synthesis and the investigation of the properties of the material. Applications of 2D non-graphene materials are still at the embryonic stage, and the integration of 2D non-graphene materials into devices is scarcely reported. However, in recent years, numerous reports have blossomed about 2D material-based biosensors, evidencing the growing potential of 2D non-graphene materials for biosensing applications. This review highlights the recent progress in research on the potential of using 2D non-graphene materials and similar oxide nanostructures for different types of biosensors (optical and electrochemical). A wide range of biological targets, such as glucose, dopamine, cortisol, DNA, IgG, bisphenol, ascorbic acid, cytochrome and estradiol, has been reported to be successfully detected by biosensors with transducers made of 2D non-graphene materials. PMID:26861346
Willenberg, Ina; Meschede, Anna K; Schebb, Nils Helge
2015-04-24
Cyclooxygenase-2 (COX-2) catalyzes the formation of PGH2 from arachidonic acid. PGH2 is further converted to different prostaglandins (PG), such as PGE2, PGD2 and TxB2. In this study a rapid online-SPE-LC-MS method for the simultaneous quantification of PGE2, PGD2 and TxB2 streamlined for COX-2 enzyme assays is presented. Baseline separation of all analytes was achieved in only 7.1 min per sample, including sample preparation by online SPE. The method showed high sensitivity (LODs of 0.65-1.25 fmol on column) and accuracy (89-113%) in protein containing media. Because of online-SPE, no manual sample preparation was required, except for addition of IS solution, allowing to use the approach as rapid read-out in COX-2 activity assays. This was demonstrated by applying the method on three in vitro test systems: a cell-free enzyme assay, an assay using HCA-7 cells constitutively expressing COX-2 and primary human monocytes. In these assays, the potency of three popular drugs celecoxib, indomethacin and dexamethasone was successfully characterized with the new online-LC-MS method. The comparison of the results showed that the inhibitory effects of PG formation strongly depend on the test system. Thus we suggest that the modulation of COX-2 activity of a test compound should be at least characterized in two assay systems. With the online-SPE-LC-MS described in here we present a versatile tool as read-out for these types of assays.
A scanning-mode 2D shear wave imaging (s2D-SWI) system for ultrasound elastography.
Qiu, Weibao; Wang, Congzhi; Li, Yongchuan; Zhou, Juan; Yang, Ge; Xiao, Yang; Feng, Ge; Jin, Qiaofeng; Mu, Peitian; Qian, Ming; Zheng, Hairong
2015-09-01
Ultrasound elastography is widely used for the non-invasive measurement of tissue elasticity properties. Shear wave imaging (SWI) is a quantitative method for assessing tissue stiffness. SWI has been demonstrated to be less operator dependent than quasi-static elastography, and has the ability to acquire quantitative elasticity information, in contrast with acoustic radiation force impulse (ARFI) imaging. However, traditional SWI implementations cannot acquire two-dimensional (2D) quantitative images of the tissue elasticity distribution. This study proposes and evaluates a scanning-mode 2D SWI (s2D-SWI) system. The hardware and image processing algorithms are presented in detail. Programmable devices are used to support flexible control of the system and the image processing algorithms. An analytic-signal-based cross-correlation method and a Radon-transformation-based shear wave speed determination method are proposed, which can be implemented using parallel computation. Imaging of tissue-mimicking phantoms and in vitro and in vivo imaging tests were conducted to demonstrate the performance of the proposed system. The s2D-SWI system represents a new choice for the quantitative mapping of tissue elasticity, and has great potential for implementation in commercial ultrasound scanners.
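The core of shear wave speed estimation is a time-delay measurement between tracking positions, which the abstract says is done by cross-correlation. A minimal sketch of that step in Python follows; it is not the authors' implementation, and the signal shapes, sampling rate, and spacing are invented for illustration:

```python
import numpy as np

def shear_wave_speed(sig_a, sig_b, dx, fs):
    """Estimate shear wave speed from displacement traces at two lateral
    tracking positions separated by dx metres, sampled at fs Hz."""
    # Full cross-correlation; the lag of the peak gives the travel time.
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    dt = lag / fs
    return dx / dt

# Synthetic test: a Gaussian pulse arriving 2 ms later at the second position.
fs = 10_000.0
t = np.arange(0, 0.05, 1 / fs)
pulse = lambda t0: np.exp(-((t - t0) ** 2) / (2 * 0.001 ** 2))
v = shear_wave_speed(pulse(0.010), pulse(0.012), dx=0.004, fs=fs)
# 4 mm travelled in 2 ms -> 2 m/s
```

A production system would track many positions and fit the lag-versus-distance line (the role the Radon transform plays in the paper), rather than using a single pair.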
NASA Astrophysics Data System (ADS)
Bolotina, I.; Bulavinov, A.; Pinchuk, R.; Salchak, Y.
2016-04-01
The paper considers problems in the ultrasonic nondestructive testing of products intended for mechanical engineering. The functional and electronic circuits of an ultrasonic tomograph are presented. The signal-radiation function of the clocked multielement apparatus is described, and the cross-functional flowchart of the prototype ultrasonic tomograph is considered. Near-term development trends in ultrasonic tomography are outlined.
Compatible embedding for 2D shape animation.
Baxter, William V; Barla, Pascal; Anjyo, Ken-Ichi
2009-01-01
We present new algorithms for the compatible embedding of 2D shapes. Such embeddings offer a convenient way to interpolate shapes having complex, detailed features. Compared to existing techniques, our approach requires less user input, and is faster, more robust, and simpler to implement, making it ideal for interactive use in practical applications. Our new approach consists of three parts. First, our boundary matching algorithm locates salient features using the perceptually motivated principles of scale-space and uses these as automatic correspondences to guide an elastic curve matching algorithm. Second, we simplify boundaries while maintaining their parametric correspondence and the embedding of the original shapes. Finally, we extend the mapping to shapes' interiors via a new compatible triangulation algorithm. The combination of our algorithms allows us to demonstrate 2D shape interpolation with instant feedback. The proposed algorithms exhibit a combination of simplicity, speed, and accuracy that has not been achieved in previous work.
Schottky diodes from 2D germanane
NASA Astrophysics Data System (ADS)
Sahoo, Nanda Gopal; Esteves, Richard J.; Punetha, Vinay Deep; Pestov, Dmitry; Arachchige, Indika U.; McLeskey, James T.
2016-07-01
We report on the fabrication and characterization of a Schottky diode made using 2D germanane (hydrogenated germanene). When compared to germanium, the 2D structure has higher electron mobility, an optimal band-gap, and exceptional stability making germanane an outstanding candidate for a variety of opto-electronic devices. One-atom-thick sheets of hydrogenated puckered germanium atoms have been synthesized from a CaGe2 framework via intercalation and characterized by XRD, Raman, and FTIR techniques. The material was then used to fabricate Schottky diodes by suspending the germanane in benzonitrile and drop-casting it onto interdigitated metal electrodes. The devices demonstrate significant rectifying behavior and the outstanding potential of this material.
Extrinsic Cation Selectivity of 2D Membranes
2017-01-01
From a systematic study of the concentration driven diffusion of positive and negative ions across porous 2D membranes of graphene and hexagonal boron nitride (h-BN), we prove their cation selectivity. Using the current–voltage characteristics of graphene and h-BN monolayers separating reservoirs of different salt concentrations, we calculate the reversal potential as a measure of selectivity. We tune the Debye screening length by exchanging the salt concentrations and demonstrate that negative surface charge gives rise to cation selectivity. Surprisingly, h-BN and graphene membranes show similar characteristics, strongly suggesting a common origin of selectivity in aqueous solvents. For the first time, we demonstrate that the cation flux can be increased by using ozone to create additional pores in graphene while maintaining excellent selectivity. We discuss opportunities to exploit our scalable method to use 2D membranes for applications including osmotic power conversion. PMID:28157333
Static & Dynamic Response of 2D Solids
Lin, Jerry
1996-07-15
NIKE2D is an implicit finite-element code for analyzing the finite deformation, static and dynamic response of two-dimensional, axisymmetric, plane strain, and plane stress solids. The code is fully vectorized and available on several computing platforms. A number of material models are incorporated to simulate a wide range of material behavior including elasto-plasticity, anisotropy, creep, thermal effects, and rate dependence. Slideline algorithms model gaps and sliding along material interfaces, including interface friction, penetration and single surface contact. Interactive graphics and rezoning are included for analyses with large mesh distortions. In addition to quasi-Newton and arc-length procedures, adaptive algorithms can be defined to solve the implicit equations using the solution language ISLAND. Each of these capabilities and more make NIKE2D a robust analysis tool.
Explicit 2-D Hydrodynamic FEM Program
Lin, Jerry
1996-08-07
DYNA2D* is a vectorized, explicit, two-dimensional, axisymmetric and plane strain finite element program for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. DYNA2D* contains 13 material models and 9 equations of state (EOS) to cover a wide range of material behavior. The material models implemented in all machine versions are: elastic, orthotropic elastic, kinematic/isotropic elastic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, rubber, high explosive burn, isotropic elastic-plastic, temperature-dependent elastic-plastic. The isotropic and temperature-dependent elastic-plastic models determine only the deviatoric stresses. Pressure is determined by one of 9 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, and tabulated.
Quasiparticle interference in unconventional 2D systems
NASA Astrophysics Data System (ADS)
Chen, Lan; Cheng, Peng; Wu, Kehui
2017-03-01
At present, research of 2D systems mainly focuses on two kinds of materials: graphene-like materials and transition-metal dichalcogenides (TMDs). Both of them host unconventional 2D electronic properties: pseudospin and the associated chirality of electrons in graphene-like materials, and spin-valley-coupled electronic structures in the TMDs. These exotic electronic properties have attracted tremendous interest for possible applications in nanodevices in the future. Investigation on the quasiparticle interference (QPI) in 2D systems is an effective way to uncover these properties. In this review, we will begin with a brief introduction to 2D systems, including their atomic structures and electronic bands. Then, we will discuss the formation of Friedel oscillation due to QPI in constant energy contours of electron bands, and show the basic concept of Fourier-transform scanning tunneling microscopy/spectroscopy (FT-STM/STS), which can resolve Friedel oscillation patterns in real space and consequently obtain the QPI patterns in reciprocal space. In the next two parts, we will summarize some pivotal results in the investigation of QPI in graphene and silicene, in which systems the low-energy quasiparticles are described by the massless Dirac equation. The FT-STM experiments show there are two different interference channels (intervalley and intravalley scattering) and backscattering suppression, which associate with the Dirac cones and the chirality of quasiparticles. The monolayer and bilayer graphene on different substrates (SiC and metal surfaces), and the monolayer and multilayer silicene on a Ag(1 1 1) surface will be addressed. The fifth part will introduce the FT-STM research on QPI in TMDs (monolayer and bilayer of WSe2), which allow us to infer the spin texture of both conduction and valence bands, and present spin-valley coupling by tracking allowed and forbidden scattering channels.
Compact 2-D graphical representation of DNA
NASA Astrophysics Data System (ADS)
Randić, Milan; Vračko, Marjan; Zupan, Jure; Novič, Marjana
2003-05-01
We present a novel 2-D graphical representation for DNA sequences which has an important advantage over the existing graphical representations of DNA in being very compact. It is based on: (1) use of binary labels for the four nucleic acid bases, and (2) use of the 'worm' curve as template on which binary codes are placed. The approach is illustrated on DNA sequences of the first exon of human β-globin and gorilla β-globin.
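The encoding idea above (binary labels per base, placed along a template curve) can be sketched in a few lines. The 2-bit assignment below is hypothetical for illustration; the paper's actual label choices and worm-curve layout may differ:

```python
# Hypothetical 2-bit labels for the four nucleic acid bases.
CODES = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

def encode(seq):
    """Flatten a DNA string into the bit sequence that would be placed
    along the template ('worm') curve, two bits per base."""
    return [bit for base in seq.upper() for bit in CODES[base]]

bits = encode("ACGT")
# -> [0, 0, 0, 1, 1, 0, 1, 1]
```

The compactness claim follows directly: a sequence of n bases becomes 2n bits laid on a space-filling curve, rather than a 2D walk whose extent grows with n.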
2D Metals by Repeated Size Reduction.
Liu, Hanwen; Tang, Hao; Fang, Minghao; Si, Wenjie; Zhang, Qinghua; Huang, Zhaohui; Gu, Lin; Pan, Wei; Yao, Jie; Nan, Cewen; Wu, Hui
2016-10-01
A general and convenient strategy for manufacturing freestanding metal nanolayers is developed on large scale. By the simple process of repeatedly folding and calendering stacked metal sheets followed by chemical etching, free-standing 2D metal (e.g., Ag, Au, Fe, Cu, and Ni) nanosheets are obtained with thicknesses as small as 1 nm and with sizes of the order of several micrometers.
Realistic and efficient 2D crack simulation
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek
2010-04-01
Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and efficient runtime 2D crack/fracture simulation system, applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides an efficient neighbor retrieval mechanism used for mesh element splitting and merging, with the minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are all taken into account to produce the criteria of crack initialization, propagation, and termination, leading to realistic fractal-like rubble/fragment formation. The aforementioned parameters are used as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values of the variables/parameters from prior benchmark data generated by off-line physics-based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted for various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shard generation) with diverse potential in military and civil M&S applications such as training and mission planning.
TOPAZ2D validation status report, August 1990
Davis, B.
1990-08-01
Analytic solutions to two heat transfer problems were used to partially evaluate the performance of TOPAZ2D, an LLNL finite element heat transfer code. The two benchmark analytic solutions were for: a 2D steady-state slab, with constant properties, constant uniform temperature boundary conditions on three sides, and a temperature distribution following a sine function on the fourth side; and a 1D transient non-linear problem, with temperature-dependent conductivity and specific heat (varying such that the thermal diffusivity remained constant), constant heat flux on the front face and adiabatic conditions on the other face. The TOPAZ solution converged to the analytic solution in both the transient and the steady-state problem. The consistent mass matrix type of analysis yielded the best performance for the transient problem in the late-time response, but notable unnatural anomalies were observed in the early-time temperature response at nodal locations near the front face. 5 refs., 22 figs.
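The 2D steady-state slab benchmark has the classic separation-of-variables solution. A minimal sketch, assuming a unit square with zero temperature on three sides and T(x, 1) = sin(pi x) on the fourth (the report's actual dimensions and amplitude are not stated here):

```python
import numpy as np

def slab_temperature(x, y):
    """Analytic steady-state temperature of a unit slab: T = 0 on the sides
    x = 0, x = 1, y = 0, and T(x, 1) = sin(pi x) on the fourth side."""
    return np.sin(np.pi * x) * np.sinh(np.pi * y) / np.sinh(np.pi)

# Spot checks: the boundary conditions are reproduced exactly.
assert abs(slab_temperature(0.5, 1.0) - 1.0) < 1e-12
assert abs(slab_temperature(0.0, 0.5)) < 1e-12
```

A benchmark like TOPAZ2D's would compare nodal temperatures from the finite element solution against this closed form as the mesh is refined.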
Engineering light outcoupling in 2D materials.
Lien, Der-Hsien; Kang, Jeong Seuk; Amani, Matin; Chen, Kevin; Tosun, Mahmut; Wang, Hsin-Ping; Roy, Tania; Eggleston, Michael S; Wu, Ming C; Dubey, Madan; Lee, Si-Chen; He, Jr-Hau; Javey, Ali
2015-02-11
When light is incident on 2D transition metal dichalcogenides (TMDCs), it engages in multiple reflections within underlying substrates, producing interferences that lead to enhancement or attenuation of the incoming and outgoing strength of light. Here, we report a simple method to engineer the light outcoupling in semiconducting TMDCs by modulating their dielectric surroundings. We show that by modulating the thicknesses of underlying substrates and capping layers, the interference caused by substrate can significantly enhance the light absorption and emission of WSe2, resulting in a ∼11 times increase in Raman signal and a ∼30 times increase in the photoluminescence (PL) intensity of WSe2. On the basis of the interference model, we also propose a strategy to control the photonic and optoelectronic properties of thin-layer WSe2. This work demonstrates the utilization of outcoupling engineering in 2D materials and offers a new route toward the realization of novel optoelectronic devices, such as 2D LEDs and solar cells.
Irreversibility-inversions in 2D turbulence
NASA Astrophysics Data System (ADS)
Bragg, Andrew; de Lillo, Filippo; Boffetta, Guido
2016-11-01
We consider a recent theoretical prediction that for inertial particles in 2D turbulence, the nature of the irreversibility of their pair dispersion inverts when the particle inertia exceeds a certain value. In particular, when the particle Stokes number, St , is below a certain value, the forward-in-time (FIT) dispersion should be faster than the backward-in-time (BIT) dispersion, but for St above this value, this should invert so that BIT becomes faster than FIT dispersion. This non-trivial behavior arises because of the competition between two physically distinct irreversibility mechanisms that operate in different regimes of St . In 3D turbulence, both mechanisms act to produce faster BIT than FIT dispersion, but in 2D, the two mechanisms have opposite effects because of the inverse energy cascade in the turbulent velocity field. We supplement the qualitative argument given by Bragg et al. by deriving quantitative predictions of this effect in the short-time dispersion limit. These predictions are then confirmed by results of inertial particle dispersion in a direct numerical simulation of 2D turbulence.
2D superconductivity by ionic gating
NASA Astrophysics Data System (ADS)
Iwasa, Yoshi
2D superconductivity is attracting renewed interest due to the discoveries of new highly crystalline 2D superconductors in the past decade. Superconductivity at oxide interfaces, triggered by LaAlO3/SrTiO3, has become one of the promising routes for the creation of new 2D superconductors. The MBE-grown metallic monolayers, including FeSe, are also offering a new platform for 2D superconductors. In the last two years, a variety of monolayer/bilayer superconductors fabricated by CVD or mechanical exfoliation have appeared. Among these, electric-field-induced superconductivity in the electric double layer transistor (EDLT) is a unique platform for 2D superconductivity, because of its ability to accumulate charge at high density, and also because of its versatility in terms of materials, spanning oxides, organics and layered chalcogenides. In this presentation, the following issues of electric-field-induced superconductivity will be addressed: (1) tunable carrier density, (2) weak pinning, (3) absence of inversion symmetry. (1) Since the sheet carrier density is quasi-continuously tunable from 0 to the order of 1014 cm-2, one is able to establish an electronic phase diagram of superconductivity, which can be compared with that of bulk superconductors. (2) The thickness of the superconducting layer can be estimated as 2 - 10 nm, depending on the material, and is much smaller than the in-plane coherence length. Such a thin but low-resistance normal state results in extremely weak pinning, beyond the dirty boson model of amorphous metallic films. (3) Due to the electric field, the inversion symmetry is inherently broken in EDLTs. This feature appears as an enhancement of the Pauli limit of the upper critical field for in-plane magnetic fields. In transition metal dichalcogenides with substantial spin-orbit interactions, we were able to confirm the stabilization of Cooper pairs due to spin-valley locking. This work has been supported by Grant-in-Aid for Specially
Gold-standard performance for 2D hydrodynamic modeling
NASA Astrophysics Data System (ADS)
Pasternack, G. B.; MacVicar, B. J.
2013-12-01
Two-dimensional, depth-averaged hydrodynamic (2D) models are emerging as an increasingly useful tool for environmental water resources engineering. One of the remaining technical hurdles to the wider adoption and acceptance of 2D modeling is the lack of standards for 2D model performance evaluation when the riverbed undulates, causing lateral flow divergence and convergence. The goal of this study was to establish a gold-standard that quantifies the upper limit of model performance for 2D models of undulating riverbeds when topography is perfectly known and surface roughness is well constrained. A review was conducted of published model performance metrics and the value ranges exhibited by models thus far for each one. Typically predicted velocity differs from observed by 20 to 30 % and the coefficient of determination between the two ranges from 0.5 to 0.8, though there tends to be a bias toward overpredicting low velocity and underpredicting high velocity. To establish a gold standard as to the best performance possible for a 2D model of an undulating bed, two straight, rectangular-walled flume experiments were done with no bed slope and only different bed undulations and water surface slopes. One flume tested model performance in the presence of a porous, homogenous gravel bed with a long flat section, then a linear slope down to a flat pool bottom, and then the same linear slope back up to the flat bed. The other flume had a PVC plastic solid bed with a long flat section followed by a sequence of five identical riffle-pool pairs in close proximity, so it tested model performance given frequent undulations. Detailed water surface elevation and velocity measurements were made for both flumes. Comparing predicted versus observed velocity magnitude for 3 discharges with the gravel-bed flume and 1 discharge for the PVC-bed flume, the coefficient of determination ranged from 0.952 to 0.987 and the slope for the regression line was 0.957 to 1.02. Unsigned velocity
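The performance metrics quoted above (coefficient of determination and regression slope between observed and predicted velocity magnitudes) are straightforward to compute. A minimal sketch with invented sample values standing in for flume measurements; real studies would also report bias and percent error:

```python
import numpy as np

def performance_metrics(observed, predicted):
    """Coefficient of determination (r^2) and slope of the regression of
    predicted on observed velocity magnitude."""
    slope, _intercept = np.polyfit(observed, predicted, 1)
    r = np.corrcoef(observed, predicted)[0, 1]
    return r ** 2, slope

# Invented observed/predicted velocity magnitudes (m/s) for illustration.
obs = np.array([0.2, 0.5, 0.8, 1.1, 1.4])
pred = obs * 0.98 + np.array([0.01, -0.02, 0.0, 0.02, -0.01])
r2, slope = performance_metrics(obs, pred)
# A model near the gold standard would show r2 close to 1 and slope near 1.
```

By the study's numbers, typical 2D models land at r² of 0.5 to 0.8, while the flume gold standard reached 0.952 to 0.987 with regression slopes of 0.957 to 1.02.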
Pareto Joint Inversion of 2D Magnetotelluric and Gravity Data — Towards Practical Applications
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2016-10-01
In this paper, a Pareto inversion based global optimization approach to obtaining joint-inversion results for two types of geophysical data sets is formulated. 2D magnetotelluric and gravity data were used for the tests, but the presented solution is flexible enough to be used for a combination of any two or more target functions, as long as misfits can be calculated and forward problems solved. To minimize the dimensionality of the solution space and to introduce straightforward regularization, the Sharp Boundary Interface (SBI) method was applied. As the main optimization engine, Particle Swarm Optimization (PSO) was used. Synthetic examples based on a real geological model were used to test the proposed approach and show its usefulness in practical applications.
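PSO, the optimization engine named above, is simple to sketch. The version below is a generic textbook PSO, not the authors' implementation, and it scalarizes the two misfits into one equally weighted sum (a true Pareto treatment would track the non-dominated set); the two quadratic "misfits" stand in for the magnetotelluric and gravity terms:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(objective, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box bounds (lo, hi)."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + pull toward personal best + pull toward global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Stand-in misfit terms for the two data sets (purely illustrative).
mt_misfit = lambda m: (m[0] - 1.0) ** 2
grav_misfit = lambda m: (m[1] + 2.0) ** 2
best, fmin = pso_minimize(lambda m: mt_misfit(m) + grav_misfit(m),
                          (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
# best approaches (1, -2), where both misfits vanish
```

In the paper's setting, the model vector m would hold SBI boundary and property parameters, and each misfit evaluation would require a forward MT or gravity solve.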
Fracture surfaces of heterogeneous materials: A 2D solvable model
NASA Astrophysics Data System (ADS)
Katzav, E.; Adda-Bedia, M.; Derrida, B.
2007-05-01
Using an elastostatic description of crack growth based on the Griffith criterion and the principle of local symmetry, we present a stochastic model describing the propagation of a crack tip in a 2D heterogeneous brittle material. The model ensures the stability of straight cracks and allows for the study of the roughening of fracture surfaces. When neglecting the effect of the nonsingular stress, the problem becomes exactly solvable and yields analytic predictions for the power spectrum of the paths. This result suggests an alternative to the conventional power law analysis often used in the analysis of experimental data.
Interplay between Anderson and Stark Localization in 2D Lattices
Kolovsky, A. R.
2008-11-07
This Letter studies the dynamics of a quantum particle in 2D lattices with on-site disorder in the presence of a static field. It is shown that the particle is localized along the field direction, while in the orthogonal direction to the field it shows diffusive dynamics for algebraically large times. For weak disorder an analytical expression for the diffusion coefficient is obtained by mapping the problem to a band random matrix. This expression is confirmed by numerical simulations of the particle's dynamics, which also indicate the existence of a universal equation for the diffusion coefficient, valid for an arbitrary disorder strength.
McDermott, K B; Roediger, H L
1996-03-01
Three experiments examined whether a conceptual implicit memory test (specifically, category instance generation) would exhibit repetition effects similar to those found in free recall. The transfer appropriate processing account of dissociations among memory tests led us to predict that the tests would show parallel effects; this prediction was based upon the theory's assumption that conceptual tests will behave similarly as a function of various independent variables. In Experiment 1, conceptual repetition (i.e., following a target word [e.g., puzzles] with an associate [e.g., jigsaw]) did not enhance priming on the instance generation test relative to the condition of simply presenting the target word once, although this manipulation did affect free recall. In Experiment 2, conceptual repetition was achieved by following a picture with its corresponding word (or vice versa). In this case, there was an effect of conceptual repetition on free recall but no reliable effect on category instance generation or category cued recall. In addition, we obtained a picture superiority effect in free recall but not in category instance generation. In the third experiment, when the same study sequence was used as in Experiment 1, but with instructions that encouraged relational processing, priming on the category instance generation task was enhanced by conceptual repetition. Results demonstrate that conceptual memory tests can be dissociated and present problems for Roediger's (1990) transfer appropriate processing account of dissociations between explicit and implicit tests.
Strength design with 2-d triaxial braid textile composites
Smith, L.V.; Swanson, S.R.
1994-12-31
Textile preforms are currently being considered as a possible means for reducing the cost of advanced fiber composites. This paper presents a methodology for strength design of carbon/epoxy 2-d braid fiber composites under general conditions of biaxial stress loading. A comprehensive investigation into the in-plane strength properties of 2-d braids has been carried out, using tubular specimens of AS4/1895 carbon fiber/epoxy made with the RTM process. The biaxial loadings involved both compression-compression and tension-tension biaxial tests. The results showed that failure under biaxial loading could be based on procedures similar to those developed for laminates, using critical strain values in the axial and braid direction fibers, but with degraded strength properties because of the undulating nature of the fiber paths. A significant loss of strength was observed in the braid directions.
Enhanced automated platform for 2D characterization of RFID communications
NASA Astrophysics Data System (ADS)
Vuza, Dan Tudor; Vlǎdescu, Marian
2016-12-01
The characterization of the quality of communication between an RFID reader and a transponder at all expected positions of the latter on the reader antenna is of primary importance for evaluating the performance of an RFID system. Continuing the line of instruments developed for this purpose by the authors, the present work proposes an enhanced version of a previously introduced automated platform for 2D evaluation. By featuring higher mechanical speed, the new version makes it possible to obtain 2D maps of communication at a resolution that would have been prohibitive, in terms of test duration, with the previous version. The list of measurement procedures that can be executed with the platform has been enlarged with additional ones, such as determining the variation of the magnetic coupling between transponder and antenna across the antenna surface, and using transponder simulators to evaluate the quality of communication.
FPCAS2D user's guide, version 1.0
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.
1994-01-01
The FPCAS2D computer code has been developed for aeroelastic stability analysis of bladed disks such as those in fans, compressors, turbines, propellers, or propfans. The aerodynamic analysis used in this code is based on the unsteady two-dimensional full potential equation which is solved for a cascade of blades. The structural analysis is based on a two degree-of-freedom rigid typical section model for each blade. Detailed explanations of the aerodynamic analysis, the numerical algorithms, and the aeroelastic analysis are not given in this report. This guide can be used to assist in the preparation of the input data required by the FPCAS2D code. A complete description of the input data is provided in this report. In addition, four test cases, including inputs and outputs, are provided.
Content validity of a clinical problem solving test for use in recruitment to the acute specialties.
Crossingham, Gemma; Gale, Thomas; Roberts, Martin; Carr, Alison; Langton, Jeremy; Anderson, Ian
2011-02-01
Clinical problem solving tests (CPSTs) have been shown to be reliable and valid for recruitment to general practice (GP) training programmes. This article presents the results from a Department of Health-funded pilot into the use of a CPST designed for recruitment to the acute specialties (AS). The pilot paper consisted of 99 items from the validated GP question bank and 40 new items aimed specifically at topics of relevance to AS training. The CPST successfully differentiated between applicants. The overall test and the GP section showed high internal reliability, whereas the AS pilot section performed less well. A detailed item analysis revealed that the AS pilot items were, on average, more difficult and of poorer quality than the GP items. Important issues that need to be addressed in the early development phase of a test used for high stakes selection to specialty training programmes are discussed.
Robust non-parametric tests for complex-repeated measures problems in ophthalmology.
Brombin, Chiara; Midena, Edoardo; Salmaso, Luigi
2013-12-01
The NonParametric Combination (NPC) methodology of dependent permutation tests allows the experimenter to tackle many complex multivariate testing problems and represents a convincing and powerful alternative to standard parametric methods. The main advantage of this approach lies in its flexibility in handling any type of variable (categorical and quantitative, with or without missing values) while taking dependencies among those variables into account without the need to model them. NPC methodology can deal with repeated measures, paired data, restricted alternative hypotheses, missing data (completely at random or not), and high-dimensional, small-sample-size data. Hence, NPC methodology can offer a significant contribution to successful research in biomedical studies with several endpoints, since it provides reasonably efficient solutions and clear interpretations of inferential results (Pesarin F. Multivariate permutation tests: with application in biostatistics. Chichester-New York: John Wiley & Sons, 2001; Pesarin F, Salmaso L. Permutation tests for complex data: theory, applications and software. Chichester, UK: John Wiley & Sons, 2010). We focus on non-parametric permutation solutions to two real-case studies in ophthalmology, concerning complex repeated-measures problems. For each data set, different analyses are presented, thus highlighting characteristic aspects of the data structure itself. Our goal is to present different solutions to multivariate complex case studies, guiding researchers/readers to choose, from various possible interpretations of a problem, the one that has the highest flexibility and statistical power under a set of less stringent assumptions. MATLAB code has been implemented to carry out the analyses.
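As an illustration of the combination idea described above, the sketch below implements a max-|T| (Tippett) combining rule in Python. The data, function name, and parameter choices are illustrative stand-ins, not the MATLAB code mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def npc_max_t(x, y, n_perm=500):
    """Global permutation p-value for a multivariate two-sample problem.

    Per-variable t-like statistics are combined with the Tippett (max-|T|)
    rule; permuting whole observation rows preserves the dependence among
    variables, which is the core idea of the NPC approach.
    """
    data = np.vstack([x, y])
    labels = np.r_[np.zeros(len(x), bool), np.ones(len(y), bool)]

    def max_t(lab):
        a, b = data[~lab], data[lab]
        se = np.sqrt(a.var(axis=0, ddof=1) / len(a) +
                     b.var(axis=0, ddof=1) / len(b))
        return np.max(np.abs(a.mean(axis=0) - b.mean(axis=0)) / se)

    obs = max_t(labels)
    exceed = sum(max_t(rng.permutation(labels)) >= obs for _ in range(n_perm))
    return (exceed + 1) / (n_perm + 1)

# Three dependent endpoints, shifted in the second group:
x = rng.normal(0.0, 1.0, size=(30, 3))
y = rng.normal(1.5, 1.0, size=(30, 3))
p_global = npc_max_t(x, y)
```

Because entire rows are permuted, no model for the correlation among endpoints is needed, which is what makes the approach attractive for the repeated-measures designs discussed above.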
Flexible and Scalable Full‐Length CYP2D6 Long Amplicon PacBio Sequencing
Vossen, Rolf H.A.M.; Anvar, Seyed Yahya; Allard, William G.; Guchelaar, Henk‐Jan; White, Stefan J.; den Dunnen, Johan T.; Swen, Jesse J.; van der Straaten, Tahar
2017-01-01
ABSTRACT Cytochrome P450 2D6 (CYP2D6) is among the most important genes involved in drug metabolism. Specific variants are associated with changes in the enzyme's amount and activity. Multiple technologies exist to determine these variants, like the AmpliChip CYP450 test, Taqman qPCR, or Second‐Generation Sequencing; however, sequence homology between cytochrome P450 genes and the pseudogene CYP2D7 impairs reliable CYP2D6 genotyping, and variant phasing cannot be accurately determined using these assays. To circumvent this, we sequenced CYP2D6 using the Pacific Biosciences RSII and obtained high‐quality, full‐length, phased CYP2D6 sequences, enabling accurate variant calling and haplotyping of the entire gene locus including exonic, intronic, and upstream and downstream regions. Unphased diplotypes (Roche AmpliChip CYP450 test) were confirmed for 24 of the 25 samples, including gene duplications. Cases with gene deletions required additional specific assays to resolve. In total, 61 unique variants were detected, including variants that had not previously been associated with specific haplotypes. To further aid genomic analysis using standard reference sequences, we have established an LOVD‐powered CYP2D6 gene‐variant database, and added all reference haplotypes and data reported here. We conclude that our CYP2D6 genotyping approach produces reliable CYP2D6 diplotypes and reveals information about additional variants, including phasing and copy‐number variation. PMID:28044414
Is my lung function really that good? Flow-type spirometer problems that elevate test results.
Townsend, Mary C; Hankinson, John L; Lindesmith, Larry A; Slivka, William A; Stiver, Gregg; Ayres, Gerald T
2004-05-01
Most spirometry errors reduce test results, and it is widely assumed that measurement accuracy is guaranteed by frequent spirometer calibrations or calibration checks. However, zero errors and changes in flow-type spirometer sensors may occur during testing that significantly elevate test results, even though the spirometer was calibrated recently. To draw attention to these often-unrecognized problems, this report presents anomalous spirograms and test results obtained from occupational medicine clinics and hospital pulmonary function laboratories during quality assurance spirogram reviews. The spurious results appear to have been caused by inaccurate zeroing of the flow sensor, or by condensation, mucus deposition, or unstable calibration of various flow-type spirometers. These errors elevated some FVCs to 144 to 204% of predicted and probably caused 40% of 121 middle-aged working men in respirator medical clearance programs to record both FVC and FEV1 > 120% of predicted. Since spirometers report the largest values from a test, these errors must be recognized and deleted to avoid false-negative interpretations. Flow-type spirometer users at all levels, from the technician to the interpreter of test results, should be aware of the potential for and the appearance of these errors in spirograms.
Diagnosing problems with imputation models using the Kolmogorov-Smirnov test: a simulation study
2013-01-01
Background: Multiple imputation (MI) is becoming increasingly popular as a strategy for handling missing data, but there is a scarcity of tools for checking the adequacy of imputation models. The Kolmogorov-Smirnov (KS) test has been identified as a potential diagnostic method for assessing whether the distribution of imputed data deviates substantially from that of the observed data. The aim of this study was to evaluate the performance of the KS test as an imputation diagnostic. Methods: Using simulation, we examined whether the KS test could reliably identify departures from assumptions made in the imputation model. To do this we examined how the p-values from the KS test behaved when skewed and heavy-tailed data were imputed using a normal imputation model. We varied the amount of missing data, the missing data models, and the amount of skewness, and evaluated the performance of the KS test in diagnosing issues with the imputation models under these different scenarios. Results: The KS test was able to flag differences between the observations and imputed values; however, these differences did not always correspond to problems with MI inference for the regression parameter of interest. When there was a strong missing-at-random dependency, the KS p-values were very small regardless of whether or not the MI estimates were biased, so the KS test was not able to discriminate between imputed variables that required further investigation and those that did not. The p-values were also sensitive to sample size and the proportion of missing data, adding to the challenge of interpreting the results from the KS test. Conclusions: Given our study results, it is difficult to establish guidelines or recommendations for using the KS test as a diagnostic tool for MI. The investigation of other imputation diagnostics and their incorporation into statistical software are important areas for future research. PMID:24252653
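The diagnostic scenario studied above is easy to reproduce in miniature. The following sketch, with illustrative sample sizes and distributions of our own choosing and SciPy's `ks_2samp` as the test, imputes skewed data under a misspecified normal model and lets the KS test flag the mismatch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Complete-data "truth": a right-skewed variable; delete 30% of the values
# completely at random, then impute under a (misspecified) normal model,
# mimicking the kind of scenario examined in the simulation study.
x = rng.gamma(shape=2.0, scale=1.0, size=5000)
missing = rng.random(x.size) < 0.3
observed = x[~missing]

# Normal imputation model fitted to the observed values (ignores the skew).
imputed = rng.normal(observed.mean(), observed.std(ddof=1),
                     size=missing.sum())

# The KS test flags the distributional mismatch between observed and imputed.
ks_stat, p_value = stats.ks_2samp(observed, imputed)
```

As the study cautions, a small p-value here signals a distributional difference between observed and imputed values, which need not translate into bias in the downstream analysis.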
Assessment test before the reporting phase of tutorial session in problem-based learning
Bestetti, Reinaldo B; Couto, Lucélio B; Restini, Carolina BA; Faria, Milton; Romão, Gustavo S
2017-01-01
Purpose: In our context, problem-based learning is not used in the preuniversity environment. Consequently, students have a great deal of difficulty adapting to this method, particularly regarding self-study before the reporting phase of a tutorial session. Accordingly, the aim of this study was to assess whether the application of an assessment test (multiple choice questions) before the reporting phase of a tutorial session would improve the academic achievement of students at the preclinical stage of our medical course. Methods: A test consisting of five multiple choice questions, prepared by tutors of the module at hand and related to the problem-solving process of each tutorial session, was applied following the self-study phase and immediately before the reporting phase of all tutorial sessions. The questions were based on the previously established student learning goals. The assessment was applied to all modules from the fifth to the eighth semesters. The final scores achieved by students in the end-of-module tests were compared. Results: Overall, the mean test score was 65.2±0.7% before and 68.0±0.7% after the introduction of an assessment test before the reporting phase (P<0.05). Students in the sixth semester scored 67.6±1.6% compared to 63.9±2.2% when they were in the fifth semester (P<0.05). Students in the seventh semester achieved a similar score to their sixth semester score (64.6±2.6% vs 63.3±2%, respectively, P>0.05). Students in the eighth semester scored 71.8±2.3% compared to 70±2% when they were in the seventh semester (P>0.05). Conclusion: In our medical course, the application of an assessment test (a multiple choice test) before the reporting phase of the problem-based learning tutorial process increases the overall academic achievement of students, especially of those in the sixth semester in comparison with when they were in the fifth semester. PMID:28280404
SNARK09 - a software package for reconstruction of 2D images from 1D projections.
Klukowska, Joanna; Davidi, Ran; Herman, Gabor T
2013-06-01
The problem of reconstruction of slices and volumes from 1D and 2D projections has arisen in a large number of scientific fields (including computerized tomography, electron microscopy, X-ray microscopy, radiology, radio astronomy, and holography). Many different methods (algorithms) have been suggested for its solution. In this paper we present a software package, SNARK09, for reconstruction of 2D images from their 1D projections. In the area of image reconstruction, researchers often wish to compare two or more reconstruction techniques and assess their relative merits. SNARK09 provides a uniform framework to implement algorithms and evaluate their performance. It has been designed to treat both parallel and divergent projection geometries and can either create test data (with or without noise) for use by reconstruction algorithms or use data collected by other software or a physical device. A number of frequently used classical reconstruction algorithms are incorporated. The package provides a means for easy incorporation of new algorithms for their testing, comparison, and evaluation. It comes with tools for statistical analysis of the results and ten worked examples.
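The algebraic family of iterative methods such packages implement can be demonstrated at toy scale. The sketch below, a hypothetical 2x2 example rather than SNARK09 itself, reconstructs an image from its ray sums with Kaczmarz/ART iterations:

```python
import numpy as np

# Toy algebraic reconstruction (ART/Kaczmarz): recover a 2x2 image from its
# row, column, and diagonal sums -- the same algebraic family of iterative
# methods that packages like SNARK09 implement for realistic geometries.
true_img = np.array([1.0, 2.0, 3.0, 4.0])       # pixels (a b / c d), flattened

A = np.array([                                  # each row sums one "ray"
    [1, 1, 0, 0],    # top image row
    [0, 0, 1, 1],    # bottom image row
    [1, 0, 1, 0],    # left column
    [0, 1, 0, 1],    # right column
    [1, 0, 0, 1],    # main diagonal
], dtype=float)
b = A @ true_img                                # simulated projection data

x = np.zeros(4)
for _ in range(200):                            # sweep through the rays
    for ai, bi in zip(A, b):
        x += (bi - ai @ x) / (ai @ ai) * ai     # project onto ray equation
```

The diagonal ray is what makes the system uniquely solvable here; with row and column sums alone, the image would only be determined up to a checkerboard perturbation.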
ERIC Educational Resources Information Center
Wainer, Howard; And Others
Four researchers at the Educational Testing Service describe what they consider some of the most vexing research problems they face. While these problems are not completely statistical, they all have major statistical components. Following the introduction (section 1), in section 2, "Problems with the Simultaneous Estimation of Many True…
Periodically sheared 2D Yukawa systems
Kovács, Anikó Zsuzsa; Hartmann, Peter; Donkó, Zoltán
2015-10-15
We present non-equilibrium molecular dynamics simulation studies on the dynamic (complex) shear viscosity of a 2D Yukawa system. We have identified a non-monotonic frequency dependence of the viscosity at high frequencies and shear rates, an energy absorption maximum (local resonance) at the Einstein frequency of the system at medium shear rates, an enhanced collective wave activity when the excitation is near the plateau frequency of the longitudinal wave dispersion, and the emergence of significant configurational anisotropy at small frequencies and high shear rates.
ENERGY LANDSCAPE OF 2D FLUID FORMS
Jiang, Y.; et al.
2000-04-01
The equilibrium states of 2D non-coarsening fluid foams, which consist of bubbles with fixed areas, correspond to local minima of the total perimeter. (1) The authors find an approximate value of the global minimum, and determine directly from an image how far a foam is from its ground state. (2) For (small) area disorder, small bubbles tend to sort inwards and large bubbles outwards. (3) Topological charges of the same sign repel while charges of opposite sign attract. (4) They discuss boundary conditions and the uniqueness of the pattern for fixed topology.
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.
1991-01-01
The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight path control problems and have attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions, two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and this controller is tested on a six degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validate this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data that closely match simulation results. Flight-test data are also presented.
Remarks on thermalization in 2D CFT
NASA Astrophysics Data System (ADS)
de Boer, Jan; Engelhardt, Dalit
2016-12-01
We revisit certain aspects of thermalization in 2D conformal field theory (CFT). In particular, we consider similarities and differences between the time dependence of correlation functions in various states in rational and non-rational CFTs. We also consider the distinction between global and local thermalization and explain how states obtained by acting with a diffeomorphism on the ground state can appear locally thermal, and we review why the time-dependent expectation value of the energy-momentum tensor is generally a poor diagnostic of global thermalization. Since all 2D CFTs have an infinite set of commuting conserved charges, generic initial states might be expected to give rise to a generalized Gibbs ensemble rather than a pure thermal ensemble at late times. We construct the holographic dual of the generalized Gibbs ensemble and show that, to leading order, it is still described by a Banados-Teitelboim-Zanelli black hole. The extra conserved charges, while rendering c < 1 theories essentially integrable, therefore seem to have little effect on large-c conformal field theories.
Microwave Assisted 2D Materials Exfoliation
NASA Astrophysics Data System (ADS)
Wang, Yanbin
Two-dimensional materials have emerged as extremely important materials with applications ranging from energy and environmental science to electronics and biology. Here we report our discovery of a universal, ultrafast, green, solvo-thermal technology for producing excellent-quality, few-layered nanosheets in the liquid phase from well-known 2D materials such as hexagonal boron nitride (h-BN), graphite, and MoS2. We start by mixing the uniform bulk-layered material with a common organic solvent that matches its surface energy, to reduce the van der Waals attractive interactions between the layers; next, the solutions are heated in a commercial microwave oven to overcome the energy barrier between the bulk and few-layer states. We discovered that the minutes-long rapid exfoliation process is highly temperature dependent, requiring precise thermal management to obtain high-quality inks. We hypothesize a possible mechanism for this solvo-thermal process; our theory confirms the basis of this novel technique for exfoliating high-quality, layered 2D materials by exploiting a hitherto unknown role of the solvent.
Meshfree natural vibration analysis of 2D structures
NASA Astrophysics Data System (ADS)
Kosta, Tomislav; Tsukanov, Igor
2014-02-01
Determination of resonance frequencies and vibration modes of mechanical structures is one of the most important tasks in the product design procedure. The main goal of this paper is to describe a pioneering application of the solution structure method (SSM) to 2D structural natural vibration analysis problems and to investigate the numerical properties of the method. SSM is a meshfree method which enables construction of solutions to engineering problems that satisfy all prescribed boundary conditions exactly. The method can use spatial meshes that do not conform to the shape of the geometric model. Instead of using the grid nodes to enforce boundary conditions, it employs distance fields to the geometric boundaries and combines them with the basis functions and the prescribed boundary conditions at run time. This gives the SSM unprecedented geometric flexibility and completely automates the solution procedure. In the paper we explain the key points of the SSM and investigate the accuracy and convergence of the proposed approach by comparing our results with those obtained using analytical methods or traditional finite element analysis. Although this paper deals with 2D in-plane vibrations, the proposed approach generalizes straightforwardly to modeling the vibrations of 3D structures.
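The core idea of the solution structure method, multiplying a free approximation by a function that vanishes on the boundary so that Dirichlet conditions hold exactly, can be shown on a 1D analogue. The boundary-value problem, basis, and collocation scheme below are illustrative choices, not the paper's 2D vibration solver:

```python
import numpy as np

# Minimal sketch of the solution-structure idea on a 1D analogue: with the
# ansatz u(x) = omega(x) * sum_i c_i x^i, where omega vanishes on the
# boundary, the conditions u(0) = u(1) = 0 hold exactly for ANY coefficients.
# We solve -u'' = 1 on [0,1] by least-squares collocation; the paper's 2D
# solver builds omega from distance fields to the actual geometry.
omega = lambda x: x * (1.0 - x)
n_basis, h = 4, 1e-4
xs = np.linspace(0.05, 0.95, 40)                 # interior collocation points

def u(x, c):
    return omega(x) * sum(ci * x**i for i, ci in enumerate(c))

def lhs_matrix():
    cols = []
    for i in range(n_basis):                     # -(d^2/dx^2)[omega * x^i]
        f = lambda x, i=i: omega(x) * x**i
        cols.append(-(f(xs + h) - 2 * f(xs) + f(xs - h)) / h**2)
    return np.column_stack(cols)

c, *_ = np.linalg.lstsq(lhs_matrix(), np.ones_like(xs), rcond=None)
# exact solution is u(x) = x(1-x)/2, so u(0.5) should be close to 0.125
```

Note that no collocation point sits on the boundary: the boundary conditions are built into the ansatz rather than enforced numerically, which is exactly what decouples the solution from the mesh.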
2D Radiative Transfer in Magnetically Confined Structures
NASA Astrophysics Data System (ADS)
Heinzel, P.; Anzer, U.
2003-01-01
Magnetically confined structures in the solar atmosphere exhibit a large complexity in their shapes and physical conditions. As an example, we show the case of so-called magnetic dips in prominences, which are in magnetohydrostatic equilibrium. For such models we solve the 2D non-LTE multilevel problem for hydrogen with PRD in the Lyman resonance lines. The iterative technique used is based on the MALI approach with a simple diagonal ALO and an SC formal solver. To compute the hydrogen ionization balance, the preconditioned MALI equations are linearized with respect to the atomic level populations and the electron density, and solved iteratively using the Newton-Raphson scheme. Two additional problems are addressed: (i) an adequate iteration method for cases when the column-mass scale is used in one of the two dimensions but varies along the other dimension (which has a geometrical scaling); and (ii) the possibility of using AMR (Adaptive Mesh Refinement) algorithms to account for steep 2D gradients of selected variables (temperature, density, etc.).
2D Hilbert transform for phase retrieval of speckle fields
NASA Astrophysics Data System (ADS)
Gorsky, M. P.; Ryabyi, P. A.; Ivanskyi, D. I.
2016-09-01
The paper presents principal approaches to diagnosing the structure-forming skeleton of a complex optical field. An analysis of optical-field singularity algorithms as a function of intensity discretization and image resolution has been carried out. An optimal approach is chosen that brings the solution of the phase problem of localizing speckle-field singular points much closer. The use of a "window" 2D Hilbert transform for reconstructing the phase distribution of the intensity of a speckle field is proposed. It is shown that the advantage of this approach lies in the invariance of the phase map to a change of the position of the kernel of the transformation, and in the possibility to reconstruct the structure-forming elements of the skeleton of an optical field, including singular points and saddle points. We demonstrate the possibility to reconstruct the equi-phase lines within a narrow confidence interval, and introduce an additional algorithm for solving the phase problem for random 2D intensity distributions.
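A plain (un-windowed) 2D Hilbert transform already illustrates the phase-retrieval principle on a synthetic fringe pattern. SciPy's `hilbert2` and the phase chosen below stand in for the authors' "window" transform and real speckle data:

```python
import numpy as np
from scipy.signal import hilbert2

# Sketch of Hilbert-transform phase retrieval on a synthetic fringe pattern:
# for I = 1 + cos(phi), the 2D analytic signal of (I - mean(I)) has argument
# equal to phi modulo 2*pi.  SciPy's hilbert2 stands in here for the paper's
# "window" 2D Hilbert transform; the phase below is an illustrative choice.
n = 128
u, v = np.meshgrid(np.arange(n) / n, np.arange(n) / n)
phi = 2 * np.pi * (3 * u + 2 * v)         # known phase, integer fringe counts
intensity = 1.0 + np.cos(phi)

analytic = hilbert2(intensity - intensity.mean())
recovered = np.angle(analytic)            # wrapped phase map

# wrapped difference between recovered and true phase should vanish
err = np.angle(np.exp(1j * (recovered - phi)))
```

Real speckle fields lack a single carrier frequency, which is precisely why the paper introduces a sliding "window" kernel rather than the global transform used in this sketch.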
Roach, Victoria A; Fraser, Graham M; Kryklywy, James H; Mitchell, Derek G V; Wilson, Timothy D
2017-03-30
Individuals with an aptitude for interpreting spatial information (high mental rotation ability: HMRA) typically master anatomy with more ease, and more quickly, than those with low mental rotation ability (LMRA). This article explores how visual attention differs with time limits on spatial reasoning tests. Participants were assigned to two groups based on their mental rotation ability scores, and their eye movements were collected during these tests. Analysis of salience during testing revealed similarities between the MRA groups in the untimed condition but significant differences between the groups in the timed one. Question-by-question analyses demonstrate that HMRA individuals were more consistent across the two timing conditions (κ = 0.25) than the LMRA (κ = 0.013). It is clear that the groups respond to time limits differently and that their apprehension of images during spatial problem solving differs significantly. Without time restrictions, salience analysis suggests LMRA individuals attended to similar aspects of the images as HMRA individuals, and their test scores rose concomitantly. Under timed conditions, however, LMRA individuals diverged from HMRA attention patterns, adopting inflexible approaches to visual search and attaining lower test scores. With this in mind, anatomical educators may wish to revisit some evaluations and teaching approaches in their own practice. Although examinations need to evaluate understanding of anatomical relationships, the addition of time limits may induce an unforeseen interaction of spatial reasoning and anatomical knowledge. Anat Sci Educ. © 2017 American Association of Anatomists.
New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems
Brown, Forrest B.
2016-06-17
Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe two new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems: simple_ace.pl and simple_ace_mg.pl.
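The kind of analytic verification these tools support can be sketched in a few lines: a one-group, purely absorbing slab whose transmission has the closed form exp(-sigma_t * L). The tiny sampler below is an illustrative stand-alone example, independent of MCNP and the ACE tools themselves:

```python
import math
import random

# One-group analytic benchmark of the kind these tools support: particles
# streaming through a purely absorbing slab of thickness L with constant
# total cross-section sigma_t are transmitted with probability
# exp(-sigma_t * L); a direct Monte Carlo sampler reproduces this.
random.seed(42)
sigma_t, slab_l, n = 1.0, 2.0, 200_000

# Sample free paths from the exponential distribution and count particles
# whose first collision lies beyond the slab.
transmitted = sum(1 for _ in range(n)
                  if -math.log(1.0 - random.random()) / sigma_t > slab_l)
mc_estimate = transmitted / n
analytic = math.exp(-sigma_t * slab_l)   # ~0.1353
```

Comparing `mc_estimate` to `analytic` within statistical error is exactly the style of verification listed under item (1) above.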
Grid generation for general 2-D regions using hyperbolic equations
NASA Technical Reports Server (NTRS)
Cordova, Jeffrey Q.; Barth, Timothy J.
1988-01-01
A method for applying a hyperbolic grid generation scheme to the construction of meshes in general 2-D regions has been developed. This approach, which follows the theory developed by Steger and Chaussee (1980) and the algorithm outlined by Kinsey and Barth (1984), is based on improving local grid control. This is accomplished by adding an angle control source term to the equations and using a new algorithm for computing the volume source term. These modifications lead to superior methods for fixing the 'local' problems of hyperbolic grid generation, namely, propagation of initial discontinuities and formation of grid shocks (crossing grid lines). More importantly, a method for solving the global problem of constraining the grid with more than one boundary (internal grid generation) has been developed. These algorithms have been implemented in an interactive grid generation program and the results for several geometries are presented and discussed.
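A stripped-down version of such hyperbolic marching can be sketched as follows. The toy scheme below steps each node along the local normal with a prescribed cell area; it is illustrative only, omitting the angle and volume source-term controls and the smoothing that the actual method relies on:

```python
import numpy as np

# Toy hyperbolic grid marching in the spirit of the scheme described above:
# each new layer steps every node along the local normal, with the step
# length chosen so that each cell has a prescribed area V.
def march_layer(x, y, V):
    tx, ty = np.gradient(x), np.gradient(y)      # tangents along the layer
    ds = np.hypot(tx, ty)                        # local arc-length element
    nx, ny = ty / ds, -tx / ds                   # outward normal for a CCW curve
    return x + (V / ds) * nx, y + (V / ds) * ny  # step = cell area / arc length

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x, y = np.cos(theta), np.sin(theta)              # initial boundary: unit circle
for _ in range(5):                               # march five layers outward
    x, y = march_layer(x, y, V=0.01)
# layers remain concentric circles with growing radius
```

Marching from a smooth convex boundary behaves well even without the source terms; the grid shocks and discontinuity propagation discussed above appear for concave or kinked boundaries, which is what the full method's controls address.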
Statistical analysis of quiet stance sway in 2-D.
Bakshi, Avijit; DiZio, Paul; Lackner, James R
2014-04-01
Subjects exposed to a rotating environment that perturbs their postural sway show adaptive changes in their voluntary spatially directed postural motion to restore accurate movement paths, but do not exhibit any obvious learning during passive stance. We have found, however, that a variable known to characterize the degree of stochasticity in quiet stance can also reveal subtle learning phenomena in passive stance. We extended the one-dimensional pinned-polymer model (PPM) of Chow and Collins (Phys Rev E 52(1):909-912, 1995) to two dimensions (2-D) and then evaluated the model's ability to make analytical predictions for 2-D quiet stance. To test the model, we tracked the center of mass and the centers of foot pressure, and compared and contrasted stance sway in the anterior-posterior versus medio-lateral directions before, during, and after exposure to rotation at 10 rpm. Sway of the body during rotation generated Coriolis forces that acted perpendicular to the direction of sway. We found significant adaptive changes in three characteristic features of the mean square displacement (MSD) function: the exponent of the power law defined at short time scales, the proportionality constant of the power law, and the saturation plateau value defined at longer time scales. The exponent of the power law of the MSD at short time scales lies within the bounds predicted by the 2-D PPM. The change in MSD during exposure to rotation also had a power-law exponent in the range predicted by the theoretical model. We discuss the Coriolis force paradigm for studying postural and movement control and the applicability of the 2-D PPM for studying postural adaptation.
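The three MSD features listed above can be computed on a surrogate trajectory. The Ornstein-Uhlenbeck-like bounded walk below is an illustrative stand-in for sway data, not the pinned-polymer model; it reproduces a diffusive short-time power law and a long-time saturation plateau:

```python
import numpy as np

rng = np.random.default_rng(7)

def msd(traj, max_lag):
    """Mean square displacement of a 2-D trajectory over integer lags."""
    return np.array([np.mean(np.sum((traj[k:] - traj[:-k])**2, axis=1))
                     for k in range(1, max_lag + 1)])

# Surrogate 2-D sway: a bounded random walk (Ornstein-Uhlenbeck-like), used
# purely to illustrate the MSD features analyzed in the study.
n, dt, k_spring, sigma = 50_000, 0.01, 1.0, 1.0
traj = np.zeros((n, 2))
for i in range(1, n):
    traj[i] = (traj[i - 1] * (1 - k_spring * dt)
               + sigma * np.sqrt(dt) * rng.standard_normal(2))

lags = np.arange(1, 1001) * dt
m = msd(traj, 1000)

# short-time power-law exponent from a log-log fit (~1 for diffusive motion)
exponent = np.polyfit(np.log(lags[:20]), np.log(m[:20]), 1)[0]
plateau = m[-1]                    # long-time saturation level
```

The exponent, the proportionality constant of the short-time power law, and the plateau are exactly the three features whose adaptive changes the study tracks across rotation exposure.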
Fully automated 2D-3D registration and verification.
Varnavas, Andreas; Carrell, Tom; Penney, Graeme
2015-12-01
Clinical application of 2D-3D registration technology often requires a significant amount of human interaction during initialisation and result verification. This is one of the main barriers to more widespread clinical use of this technology. We propose novel techniques for automated initial pose estimation of the 3D data and verification of the registration result, and show how these techniques can be combined to enable fully automated 2D-3D registration, particularly in the case of a vertebra-based system. The initialisation method is based on preoperative computation of 2D templates over a wide range of 3D poses. These templates are used to apply the Generalised Hough Transform to the intraoperative 2D image, and the sought 3D pose is selected with the combined use of the generated accumulator arrays and a Gradient Difference Similarity Measure. On the verification side, two algorithms are proposed: one using normalised features based on the similarity value, the other based on the pose agreement between multiple vertebra-based registrations. The proposed methods are employed here for CT-to-fluoroscopy registration and are trained and tested with data from 31 clinical procedures comprising 417 low-dose (i.e. low-quality, high-noise) interventional fluoroscopy images. When similarity-value-based verification is used, the fully automated system achieves a 95.73% correct registration rate, whereas a 'no registration' result is produced for the remaining 4.27% of cases (i.e. the incorrect registration rate is 0%). The system also automatically detects input images outside its operating range.
Cuberos-Urbano, Gustavo; Caracuel, Alfonso; Vilar-López, Raquel; Valls-Serrano, Carlos; Bateman, Andrew; Verdejo-García, Antonio
2013-01-01
The "dysexecutive syndrome" is composed of a range of cognitive, emotional, and behavioral deficits that are difficult to evaluate using traditional neuropsychological tests. The Multiple Errands Test (MET) was originally developed to systematize the assessment of the more elusive manifestations of the dysexecutive syndrome. The aims of this study were to examine the reliability of the MET and to investigate the predictive ability of its indices to explain a range of "dysexecutive"-related symptoms in everyday life. Thirty patients with acquired brain injury participated in this study. The MET showed adequate inter-rater reliability and ecological validity. The main performance indices from the MET were able to significantly predict the severity of everyday-life executive problems, with different indices predicting particular manifestations of different components of executive functions.
ERIC Educational Resources Information Center
Almquist, Alan J.; Cronin, John E.
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This guide focuses on the origin of man. Part I, The Biochemical Evidence for Human Evolution, contains four sections: (1) Introduction; (2) Macromolecular Data; (3)…
A selective role of NKG2D in inflammatory and autoimmune diseases.
Guerra, Nadia; Pestal, Kathleen; Juarez, Tiffany; Beck, Jennifer; Tkach, Karen; Wang, Lin; Raulet, David H
2013-12-01
The NKG2D activating receptor has been implicated in numerous autoimmune diseases. We tested the role of NKG2D in models of autoimmunity and inflammation using NKG2D knockout mice and antibody blockade experiments. The severity of experimental autoimmune encephalomyelitis (EAE) was decreased in NKG2D-deficient mice when the disease was induced with a limiting antigen dose, but unchanged with an optimal antigen dose. Surprisingly, however, NKG2D deficiency had no detectable effect in several other models, including two models of type 1 diabetes and a model of intestinal inflammation induced by poly(I:C). NKG2D antibody blockade in normal mice also failed to inhibit disease in the NOD diabetes model or the intestinal inflammation model. Published evidence using NKG2D knockout mice has demonstrated a role for NKG2D in mouse models of atherosclerosis and liver inflammation, as well as in chronic obstructive pulmonary disease. Therefore, our results suggest that NKG2D plays selective roles in inflammatory diseases.
Targeting multiple types of tumors using NKG2D-coated iron oxide nanoparticles
NASA Astrophysics Data System (ADS)
Wu, Ming-Ru; Cook, W. James; Zhang, Tong; Sentman, Charles L.
2014-11-01
Iron oxide nanoparticles (IONPs) hold great potential for cancer therapy. Actively targeting IONPs to tumor cells can further increase therapeutic efficacy and decrease off-target side effects. To target tumor cells, a natural killer (NK) cell activating receptor, NKG2D, was utilized to develop pan-tumor-targeting IONPs. NKG2D ligands are expressed on many tumor types but are not found on most normal tissues under steady-state conditions. The data showed that mouse and human fragment crystallizable (Fc)-fusion NKG2D (Fc-NKG2D)-coated IONPs (NKG2D/NPs) can target multiple NKG2D ligand-positive tumor types in vitro in a dose-dependent manner by magnetic cell sorting. The tumor-targeting effect was robust even at a very low tumor cell to normal cell ratio, and targeting efficiency correlated with the NKG2D ligand expression level on tumor cells. Furthermore, the magnetic separation platform utilized to test NKG2D/NP specificity has the potential to be developed into high-throughput screening strategies to identify ideal fusion proteins or antibodies for targeting IONPs. In conclusion, NKG2D/NPs can be used to target multiple tumor types, and the magnetic separation platform can facilitate the proof-of-concept phase of tumor-targeting IONP development.
Land, Sander; Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E; Hurtado, Daniel E; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P; Sundnes, Joakim; Rice, J Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A
2015-12-08
Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences in higher-resolution solutions at approximately 0.5%, and consistent results between linear, quadratic and cubic finite elements as well as different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be effectively used in verification of future cardiac mechanics software.
Pre-test CFD Calculations for a Bypass Flow Standard Problem
Rich Johnson
2011-11-01
The bypass flow in a prismatic high temperature gas-cooled reactor (HTGR) is the flow that occurs between adjacent graphite blocks. Gaps exist between blocks due to variances in their manufacture and installation and because of the expansion and shrinkage of the blocks from heating and irradiation. Although the temperature of fuel compacts and graphite is sensitive to the presence of bypass flow, there is great uncertainty in the level and effects of the bypass flow. The Next Generation Nuclear Plant (NGNP) program at the Idaho National Laboratory has undertaken to produce experimental data of isothermal bypass flow between three adjacent graphite blocks. These data are intended to provide validation for computational fluid dynamic (CFD) analyses of the bypass flow. Such validation data sets are called Standard Problems in the nuclear safety analysis field. Details of the experimental apparatus as well as several pre-test calculations of the bypass flow are provided. Pre-test calculations are useful in examining the nature of the flow and to see if there are any problems associated with the flow and its measurement. The apparatus is designed to be able to provide three different gap widths in the vertical direction (the direction of the normal coolant flow) and two gap widths in the horizontal direction. It is expected that the vertical bypass flow will range from laminar to transitional to turbulent flow for the different gap widths that will be available.
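The expectation that the vertical bypass flow spans laminar to turbulent regimes for different gap widths can be sketched with a hydraulic-diameter Reynolds-number estimate. All numeric values below (gap widths, block depth, bulk velocity, fluid properties) are illustrative assumptions, not parameters of the NGNP experiment:

```python
# Estimate the flow regime in a narrow rectangular bypass gap from the
# hydraulic-diameter Reynolds number. All numbers are illustrative
# assumptions, not values taken from the experiment described above.

def hydraulic_diameter(width, depth):
    """Dh = 4*A/P for a rectangular cross-section (m)."""
    area = width * depth
    perimeter = 2.0 * (width + depth)
    return 4.0 * area / perimeter

def reynolds(velocity, d_h, nu):
    """Re = U*Dh/nu for kinematic viscosity nu (m^2/s)."""
    return velocity * d_h / nu

def regime(re):
    # Conventional rough thresholds for internal flow.
    if re < 2300:
        return "laminar"
    if re < 4000:
        return "transitional"
    return "turbulent"

if __name__ == "__main__":
    nu_air = 1.5e-5               # kinematic viscosity of air at ~20 C, m^2/s
    depth = 0.3                   # block face depth, m (assumed)
    for gap_mm in (2.0, 6.0, 10.0):   # three vertical gap widths (assumed)
        d_h = hydraulic_diameter(gap_mm / 1000.0, depth)
        re = reynolds(5.0, d_h, nu_air)   # 5 m/s bulk velocity (assumed)
        print(f"gap {gap_mm} mm: Dh={d_h*1000:.2f} mm, Re={re:.0f}, {regime(re)}")
```

With these assumed values the three gaps land in the laminar, transitional, and turbulent regimes respectively, mirroring the range the experiment is designed to cover.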
NASA Astrophysics Data System (ADS)
Lacava, C.; Carrol, L.; Bozzola, A.; Marchetti, R.; Minzioni, P.; Cristiani, I.; Fournier, M.; Bernabe, S.; Gerace, D.; Andreani, L. C.
2016-03-01
We present the characterization of silicon-on-insulator (SOI) photonic-crystal-based 2D grating couplers (2D-GCs) fabricated by CEA-Leti within the FP7 Fabulous project, which is dedicated to the realization of devices and systems for low-cost and high-performance passive optical networks. Different test structures are present on the analyzed samples, including a 2D-GC connected to another 2D-GC by different waveguides (in a Mach-Zehnder-like configuration) and a 2D-GC connected to two separate 2D-GCs, allowing a complete assessment of the different parameters. Measurements were carried out using a tunable laser source operating in the extended telecom bandwidth and a fiber-based polarization-controlling system at the input of the device under test. The measured data yielded an overall fiber-to-fiber loss of 7.5 dB for the structure composed of an input 2D-GC connected to two identical 2D-GCs. This value was obtained at the peak wavelength of the grating, and the 3-dB bandwidth of the 2D-GC was assessed to be 43 nm. Assuming that the waveguide losses are negligible, so as to make a worst-case analysis, the coupling efficiency of a single 2D-GC is found to be -3.75 dB, which constitutes, to the best of our knowledge, the lowest value ever reported for a fully CMOS-compatible 2D-GC. It is worth noting that both obtained values are in good agreement with those expected from numerical simulations performed using full 3D analysis in Lumerical FDTD Solutions.
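The quoted worst-case coupling efficiency follows from a line of dB arithmetic: assuming lossless waveguides, the 7.5 dB fiber-to-fiber loss is shared equally between the two grating couplers in the measured path:

```python
# Worst-case per-coupler efficiency from the fiber-to-fiber measurement:
# with waveguide losses neglected, the total 7.5 dB loss is split equally
# between the two grating couplers in the measured path.

total_loss_db = 7.5
per_gc_db = total_loss_db / 2.0          # 3.75 dB loss per grating coupler
efficiency = 10 ** (-per_gc_db / 10.0)   # linear coupling efficiency

print(f"per-GC loss: {per_gc_db} dB -> efficiency {efficiency:.3f}")
```

So -3.75 dB corresponds to roughly 42% of the light coupled per grating coupler.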
Modeling of Gap Closure in Uranium-Zirconium Alloy Metal Fuel - A Test Problem
Simunovic, Srdjan; Ott, Larry J; Gorti, Sarma B; Nukala, Phani K; Radhakrishnan, Balasubramaniam; Turner, John A
2009-10-01
Uranium-based binary and ternary alloy fuel is a possible candidate for advanced fast spectrum reactors with long refueling intervals and reduced linear heat rating [1]. An important metal fuel issue that can impact fuel performance is fuel-cladding gap closure, along with fuel axial growth. The dimensional change in the fuel during irradiation is due to a superposition of the thermal expansion of the fuel due to heating, volumetric changes due to possible phase transformations that occur during heating, and swelling due to fission gas retention. The volumetric changes due to phase transformation depend both on the thermodynamics of the alloy system and on the kinetics of the phase change reactions that occur at the operating temperature. The nucleation and growth of fission gas bubbles that contribute to fuel swelling is also influenced by the local fuel chemistry and the microstructure. Once the fuel expands and contacts the clad, expansion in the radial direction is constrained by the clad, and the overall deformation of the fuel-clad assembly depends upon the dynamics of the contact problem. The neutronics portion of the problem is also inherently coupled with microstructural evolution in terms of constituent redistribution and phase transformation. Because of the complex nature of the problem, a series of test problems has been defined with increasing complexity, with the objective of capturing the fuel-clad interaction in complex fuels subjected to a wide range of irradiation and temperature conditions.
NASA Astrophysics Data System (ADS)
Payne, Joshua; Taitano, William; Knoll, Dana; Liebs, Chris; Murthy, Karthik; Feltman, Nicolas; Wang, Yijie; McCarthy, Colleen; Cieren, Emanuel
2012-10-01
In order to solve problems such as ion coalescence and slow MHD shocks fully kinetically, we developed a fully implicit 2D energy- and charge-conserving electromagnetic PIC code, PlasmaApp2D. PlasmaApp2D differs from previous implicit PIC implementations in that it will utilize advanced architectures such as GPUs and shared-memory CPU systems, with problems too large to fit into cache. PlasmaApp2D will be a hybrid CPU-GPU code developed primarily to run on the DARWIN cluster at LANL, utilizing four 12-core AMD Opteron CPUs and two NVIDIA Tesla GPUs per node. MPI will be used for cross-node communication, OpenMP will be used for on-node parallelism, and CUDA will be used for the GPUs. Development progress and initial results will be presented.
2D quantum gravity from quantum entanglement.
Gliozzi, F
2011-01-21
In quantum systems with many degrees of freedom the replica method is a useful tool to study the entanglement of arbitrary spatial regions. We apply it in a way that allows the regions themselves to backreact. As a consequence, they become dynamical subsystems whose position, form, and extension are determined by their interaction with the whole system. We analyze, in particular, quantum spin chains described at criticality by a conformal field theory. Its coupling to the Gibbs ensemble of all possible subsystems is relevant and drives the system into a new fixed point, which is argued to be that of 2D quantum gravity coupled to this system. Numerical experiments on the critical Ising model show that the new critical exponents agree with those predicted by the formula of Knizhnik, Polyakov, and Zamolodchikov.
Simulation of Yeast Cooperation in 2D.
Wang, M; Huang, Y; Wu, Z
2016-03-01
The evolution of cooperation has been an active research area in evolutionary biology for decades. An important type of cooperation develops from group selection, when individuals form spatial groups to protect themselves from foreign invasion. In this paper, we study the evolution of cooperation in a mixed population of cooperating and cheating yeast strains in 2D, with the interactions among the yeast cells restricted to their small neighborhoods. We conduct a computer simulation based on a game-theoretic model and show that cooperation is increased when the interactions are spatially restricted, whether the game is of a prisoner's dilemma, snowdrift, or mutual-benefit type. We study the evolution of homogeneous groups of cooperators or cheaters and describe the conditions for them to sustain themselves or expand in an opponent population. We show that under certain spatial restrictions, cooperator groups are able to sustain themselves and expand as group sizes become large, while cheater groups fail to expand and eventually collapse.
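A minimal sketch of the kind of spatially restricted game described above, assuming a prisoner's-dilemma payoff matrix and best-neighbor imitation on a periodic lattice (the payoff values and update rule are illustrative, not those of the paper):

```python
import random

# Spatial prisoner's dilemma on a 2D torus: each cell is a cooperator (1)
# or cheater (0), collects payoff from its four neighbors, then imitates
# the best-scoring cell among itself and its neighbors. Payoffs below are
# illustrative assumptions, not values from the paper.

R, S, T, P = 3.0, 0.0, 4.0, 1.0   # reward, sucker, temptation, punishment

def payoff(me, other):
    if me and other:
        return R
    if me and not other:
        return S
    if not me and other:
        return T
    return P

def neighbors(i, j, n):
    return [((i - 1) % n, j), ((i + 1) % n, j), (i, (j - 1) % n), (i, (j + 1) % n)]

def step(grid):
    n = len(grid)
    score = [[sum(payoff(grid[i][j], grid[a][b]) for a, b in neighbors(i, j, n))
              for j in range(n)] for i in range(n)]
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            best = max(neighbors(i, j, n) + [(i, j)],
                       key=lambda ab: score[ab[0]][ab[1]])
            new[i][j] = grid[best[0]][best[1]]
    return new

if __name__ == "__main__":
    random.seed(0)
    n = 20
    grid = [[1 if random.random() < 0.5 else 0 for _ in range(n)] for _ in range(n)]
    for _ in range(10):
        grid = step(grid)
    frac = sum(map(sum, grid)) / (n * n)
    print(f"cooperator fraction after 10 steps: {frac:.2f}")
```

Varying R, S, T, P turns the same loop into a snowdrift or mutual-benefit game, which is how the paper's comparison across game types can be reproduced in spirit.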
Graphene suspensions for 2D printing
NASA Astrophysics Data System (ADS)
Soots, R. A.; Yakimchuk, E. A.; Nebogatikova, N. A.; Kotin, I. A.; Antonova, I. V.
2016-04-01
It is shown that, by processing a graphite suspension in ethanol or water with ultrasound and centrifuging, it is possible to obtain particles with thicknesses of 1-6 nm and, in the most interesting cases, 1-1.5 nm. Analogous treatment of a graphite suspension in an organic solvent eventually yields thicker particles (up to 6-10 nm) even upon long-term treatment. Using the proposed ink, based on graphene in aqueous ethanol with ethylcellulose and terpineol additives, 2D printing produced thin (~5 nm thick) films with a sheet resistance after annealing of ~30 MΩ/□. With ink based on an aqueous graphene suspension, the sheet resistance was ~5-12 kΩ/□ for 6- to 15-nm-thick layers, with a carrier mobility of ~30-50 cm²/(V s).
Metrology for graphene and 2D materials
NASA Astrophysics Data System (ADS)
Pollard, Andrew J.
2016-09-01
The application of graphene, a one-atom-thick honeycomb lattice of carbon atoms with superlative properties, such as electrical conductivity, thermal conductivity and strength, has already shown that it can be used to benefit metrology itself as a new quantum standard for resistance. However, there are many application areas where graphene and other 2D materials, such as molybdenum disulphide (MoS2) and hexagonal boron nitride (h-BN), may be disruptive, areas such as flexible electronics, nanocomposites, sensing and energy storage. Applying metrology to the area of graphene is now critical to enable the new, emerging global graphene commercial world and bridge the gap between academia and industry. Measurement capabilities and expertise in a wide range of scientific areas are required to address this challenge. The combined and complementary approach of varied characterisation methods for structural, chemical, electrical and other properties will allow the real-world issues of commercialising graphene and other 2D materials to be addressed. Here, examples of metrology challenges that have been overcome through a multi-technique or new approach are discussed. Firstly, the structural characterisation of defects in both graphene and MoS2 via Raman spectroscopy is described, and how nanoscale mapping of vacancy defects in graphene is also possible using tip-enhanced Raman spectroscopy (TERS). Furthermore, the chemical characterisation and removal of polymer residue on chemical vapour deposition (CVD)-grown graphene via secondary ion mass spectrometry (SIMS) is detailed, as well as the chemical characterisation of iron films used to grow large-domain single-layer h-BN through CVD growth, revealing how contamination of the substrate itself plays a role in the resulting h-BN layer. In addition, the role of international standardisation in this area is described, outlining the current work ongoing in both the International Organization for Standardization (ISO) and the
Quantum damped oscillator II: Bateman's Hamiltonian vs. 2D parabolic potential barrier
Chruscinski, Dariusz (e-mail: darch@phys.uni.torun.pl)
2006-04-15
We show that the quantum Bateman system, which arises in the quantization of a damped harmonic oscillator, is equivalent to a quantum problem with a 2D parabolic potential barrier, also known as the 2D inverted isotropic oscillator. It turns out that this system displays a family of complex eigenvalues corresponding to the poles of the analytical continuation of the resolvent operator to the complex energy plane. It is shown that this representation is more suitable than the hyperbolic one used recently by Blasone and Jizba.
Dynamical modeling of sub-grid scales in 2D turbulence
NASA Astrophysics Data System (ADS)
Laval, Jean-Philippe; Dubrulle, Bérengère; Nazarenko, Sergey
2000-08-01
We develop a new numerical method which treats resolved and sub-grid scales as two different fluid components evolving according to their own dynamical equations. These two fluids are nonlinearly interacting and can be transformed one into another when their scale becomes comparable to the grid size. Equations describing the two-fluid dynamics were rigorously derived from the Euler equations [B. Dubrulle, S. Nazarenko, Physica D 110 (1997) 123-138] and they do not involve any adjustable parameters. The main assumption of such a derivation is that the large-scale vortices are so strong that they advect the sub-grid scales as a passive scalar, and the interactions of small scales with small and intermediate scales can be neglected. As a test for our numerical method, we performed numerical simulations of 2D turbulence with a spectral gap, and we found good agreement with analytical results obtained for this case by Nazarenko and Laval [Non-local 2D turbulence and passive scalars in Batchelor's regime, J. Fluid Mech., in press]. We used the two-fluid method to study three typical problems in 2D dynamics of incompressible fluids: decaying turbulence, vortex merger and forced turbulence. The two-fluid simulations, performed at 128² and 256² resolution, were compared with pseudo-spectral simulations using hyperviscosity performed at the same and at much higher resolution. This comparison shows that the performance of the two-fluid method is much better than that of the pseudo-spectral method at the same resolution and comparable computational cost. The most significant improvement is observed in modeling of the small-scale component, so that the effective inertial interval increases by about two decades compared to the high-resolution pseudo-spectral method. Using the two-fluid method, we demonstrated that the k⁻³ tail always exists in the energy spectrum, although its amplitude slowly decreases in decaying turbulence.
NASA Astrophysics Data System (ADS)
Torgoev, Almaz; Havenith, Hans-Balder
2016-07-01
A 2D elasto-dynamic modelling of the pure topographic seismic response is performed for six models with a total length of around 23.0 km. These models are reconstructed from the real topographic settings of the landslide-prone slopes situated in the Mailuu-Suu River Valley, Southern Kyrgyzstan. The main studied parameter is the Arias Intensity (Ia, m/sec), which is applied in the GIS-based Newmark method to regionally map the seismically induced landslide susceptibility. This method maps the Ia values via empirical attenuation laws, and our studies investigate the potential to include topographic input in them. The numerical studies analyse several signals with varying shape and changing central frequency values. All tests demonstrate that the spectral amplification patterns directly affect the amplification of the Ia values. These results allow us to link the 2D distribution of the topographically amplified Ia values with a parameter called smoothed curvature. The amplification values for the low-frequency signals are better correlated with the curvature smoothed over a larger spatial extent, while those for the high-frequency signals are more closely linked to the curvature with a smaller smoothing extent. The best predictions are provided by the curvature smoothed over the extent calculated according to Geli's law. Sample equations predicting the Ia amplification from the smoothed curvature are presented for sinusoid-shaped input signals. These laws cannot be directly implemented in the regional Newmark method, as 3D amplification of the Ia values involves additional complexities not studied here. Nevertheless, our 2D results establish a theoretical framework which can potentially be extended to the 3D domain and therefore represent a robust basis for these future research targets.
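The Arias Intensity used above has a simple closed form, Ia = (π/2g) ∫ a(t)² dt, which can be sketched directly; the sinusoidal test signal below is an illustrative assumption, not one of the study's inputs:

```python
import math

def arias_intensity(accel, dt, g=9.81):
    """Arias Intensity Ia = (pi / (2*g)) * integral of a(t)^2 dt,
    with a in m/s^2 and dt in s; returns Ia in m/s (trapezoidal rule)."""
    sq = [a * a for a in accel]
    integral = sum(0.5 * (sq[k] + sq[k + 1]) * dt for k in range(len(sq) - 1))
    return math.pi / (2.0 * g) * integral

if __name__ == "__main__":
    # Illustrative sinusoid a(t) = A*sin(2*pi*f*t) over full cycles,
    # for which mean(a^2) = A^2/2 and hence Ia ~ pi*A^2*T/(4*g).
    A, f, T = 2.0, 1.0, 10.0
    dt = 1e-3
    accel = [A * math.sin(2 * math.pi * f * k * dt) for k in range(int(T / dt) + 1)]
    ia = arias_intensity(accel, dt)
    print(f"Ia = {ia:.3f} m/s (analytic ~ {math.pi * A**2 * T / (4 * 9.81):.3f})")
```

Amplification of Ia by topography is then just the ratio of this integral computed from the amplified versus the reference acceleration trace.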
NASA Astrophysics Data System (ADS)
Neyamadpour, Ahmad; Taib, Samsudin; Wan Abdullah, W. A. T.
2009-11-01
MATLAB is a high-level matrix/array language with control flow statements and functions. MATLAB has several useful toolboxes to solve complex problems in various fields of science, such as geophysics. In geophysics, the inversion of 2D DC resistivity imaging data is complex due to its non-linearity, especially for high resistivity contrast regions. In this paper, we investigate the applicability of MATLAB to design, train and test a newly developed artificial neural network for inverting 2D DC resistivity imaging data. We used resilient propagation to train the network. The model used to produce synthetic data is a homogeneous medium of 100 Ω m resistivity with an embedded anomalous body of 1000 Ω m. The location of the anomalous body was moved to different positions within the homogeneous model mesh elements. The synthetic data were generated using the finite element forward modeling code RES2DMOD. The network was trained using 21 datasets and tested on another 16 synthetic datasets, as well as on real field data. In the field data acquisition, the cable covered 120 m between the first and the last take-out, with a 3 m x-spacing. Three different electrode spacings were measured, giving a dataset of 330 data points. The interpreted result shows that the trained network was able to invert 2D electrical resistivity imaging data obtained with a Wenner-Schlumberger configuration rapidly and accurately.
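The resilient-propagation (Rprop) training mentioned above adapts a per-weight step size from gradient signs alone. A simplified Rprop- sketch on a 1D quadratic, using the conventional default hyperparameters (not values from the paper):

```python
# Simplified Rprop- update rule: each weight keeps its own step size,
# which grows while the gradient sign is stable and shrinks when it
# flips. Hyperparameters are the conventional Rprop defaults, not
# values from the paper.

ETA_PLUS, ETA_MINUS = 1.2, 0.5
STEP_MIN, STEP_MAX = 1e-6, 50.0

def rprop_step(w, grad, prev_grad, step):
    """One Rprop- update for a single weight; returns (w, step)."""
    if grad * prev_grad > 0:          # same sign: accelerate
        step = min(step * ETA_PLUS, STEP_MAX)
    elif grad * prev_grad < 0:        # sign flip: back off
        step = max(step * ETA_MINUS, STEP_MIN)
    if grad > 0:
        w -= step
    elif grad < 0:
        w += step
    return w, step

if __name__ == "__main__":
    # Minimize f(w) = (w - 3)^2, whose gradient is 2*(w - 3).
    w, step, prev = 0.0, 0.1, 0.0
    for _ in range(60):
        g = 2.0 * (w - 3.0)
        w, step = rprop_step(w, g, prev, step)
        prev = g
    print(f"w converged near {w:.3f}")
```

In the full network the same rule is applied independently to every weight, which is what makes Rprop insensitive to gradient magnitude and well suited to problems like resistivity inversion.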
3D/2D registration and segmentation of scoliotic vertebrae using statistical models.
Benameur, Said; Mignotte, Max; Parent, Stefan; Labelle, Hubert; Skalli, Wafa; de Guise, Jacques
2003-01-01
We propose a new 3D/2D registration method for vertebrae of the scoliotic spine, using two conventional radiographic views (postero-anterior and lateral), and a priori global knowledge of the geometric structure of each vertebra. This geometric knowledge is efficiently captured by a statistical deformable template integrating a set of admissible deformations, expressed by the first modes of variation in Karhunen-Loeve expansion, of the pathological deformations observed on a representative scoliotic vertebra population. The proposed registration method consists of fitting the projections of this deformable template with the preliminary segmented contours of the corresponding vertebra on the two radiographic views. The 3D/2D registration problem is stated as the minimization of a cost function for each vertebra and solved with a gradient descent technique. Registration of the spine is then done vertebra by vertebra. The proposed method efficiently provides accurate 3D reconstruction of each scoliotic vertebra and, consequently, it also provides accurate knowledge of the 3D structure of the whole scoliotic spine. This registration method has been successfully tested on several biplanar radiographic images and validated on 57 scoliotic vertebrae. The validation results reported in this paper demonstrate that the proposed statistical scheme performs better than other conventional 3D reconstruction methods.
3D/2D Model-to-Image Registration for Quantitative Dietary Assessment.
Chen, Hsin-Chen; Jia, Wenyan; Li, Zhaoxin; Sun, Yung-Nien; Sun, Mingui
2012-12-31
Image-based dietary assessment is important for health monitoring and management because it can provide quantitative and objective information, such as food volume, nutrition type, and calorie intake. In this paper, a new framework, 3D/2D model-to-image registration, is presented for estimating food volume from a single-view 2D image containing a reference object (i.e., a circular dining plate). First, the food is segmented from the background image based on Otsu's thresholding and morphological operations. Next, the food volume is obtained from a user-selected, 3D shape model. The position, orientation and scale of the model are optimized by a model-to-image registration process. Then, the circular plate in the image is fitted and its spatial information is used as constraints for solving the registration problem. Our method takes the global contour information of the shape model into account to obtain a reliable food volume estimate. Experimental results using regularly shaped test objects and realistically shaped food models with known volumes both demonstrate the effectiveness of our method.
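The Otsu thresholding step of such a segmentation pipeline can be sketched in pure NumPy; the synthetic bimodal "image" below is an illustrative stand-in for a real plate photograph:

```python
import numpy as np

# Otsu's method picks the gray-level threshold that maximizes the
# between-class variance of the histogram, separating foreground
# (food) from background. Pure NumPy, no image I/O; the toy image
# below is an assumption for illustration.

def otsu_threshold(gray):
    """Return the threshold in [0, 255] maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dark = rng.normal(60, 10, (80, 80))     # background mode
    bright = rng.normal(180, 10, (80, 80))  # object mode
    img = np.clip(np.concatenate([dark, bright]), 0, 255).astype(np.uint8)
    print(f"Otsu threshold: {otsu_threshold(img)}")   # lands between the modes
```

In the paper's pipeline the binary mask produced by this threshold is then cleaned up with morphological operations before the 3D model is registered to it.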
Quantitation of protein in samples prepared for 2-D electrophoresis.
Berkelman, Tom
2008-01-01
The concentration of protein in a sample prepared for two-dimensional (2-D) electrophoretic analysis is usually determined by protein assay. Reasons for this include the following. (1) Protein quantitation ensures that the amount of protein to be separated is appropriate for the gel size and visualization method. (2) Protein quantitation facilitates comparison among similar samples, as image-based analysis is simplified when equivalent quantities of protein have been loaded on the gels to be compared. (3) Quantitation is necessary in cases where the protein sample is labeled with dye before separation (1,2). The labeling chemistry is affected by the dye-to-protein ratio, so it is essential to know the protein concentration before setting up the labeling reaction. A primary consideration when quantitating protein in samples prepared for 2-D electrophoresis is interference by nonprotein substances that may be present in the sample. These samples generally contain chaotropic solubilizing agents, detergents, reductants, buffers or carrier ampholytes, all of which potentially interfere with protein quantitation. The most commonly used protein assays in proteomics research are colorimetric assays in which the presence of protein causes a color change that can be measured spectrophotometrically (3). All protein assays utilize standards, a dilution series of a known concentration of a known protein, to create a standard curve. Two methods will be considered that circumvent some of the problems associated with interfering substances and are well suited for samples prepared for 2-D electrophoresis. The first method (4.1.1) relies on a color change that occurs upon binding of a dye to protein, and the second (4.1.2) relies on binding and reduction of cupric ion (Cu2+) to cuprous ion (Cu+) by proteins.
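The standard-curve procedure described above reduces to fitting absorbance against known concentrations and inverting the fit; all readings below are invented for illustration and do not come from any particular assay kit:

```python
# Standard-curve quantitation: fit absorbance vs. known concentration
# for a dilution series of a standard protein, then invert the fit to
# estimate an unknown sample. All numbers are hypothetical.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

def concentration(absorbance, m, b):
    """Invert the standard curve: x = (y - b) / m."""
    return (absorbance - b) / m

if __name__ == "__main__":
    standards_ug_ml = [0.0, 0.25, 0.5, 1.0, 1.5, 2.0]      # standard dilution series
    absorbance_595 = [0.00, 0.12, 0.25, 0.49, 0.76, 1.01]  # hypothetical readings
    m, b = fit_line(standards_ug_ml, absorbance_595)
    print(f"sample ~ {concentration(0.40, m, b):.2f} ug/mL")
```

Real dye-binding assays are only linear over a limited range, so the unknown's reading should fall within the span of the standards before the inversion is trusted.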
2D to 3D conversion implemented in different hardware
NASA Astrophysics Data System (ADS)
Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli
2015-02-01
Conversion of available 2D data for release as 3D content is a hot topic for providers and for the success of 3D applications in general. It relies entirely on virtual view synthesis of a second view given the original 2D video. Disparity map (DM) estimation is a central task in 3D generation but remains a very difficult problem for rendering novel images precisely. Different approaches to DM reconstruction exist, among them manual and semiautomatic methods that can produce high-quality DMs but are time consuming and computationally expensive. In this paper, several hardware implementations of frameworks for automatic 3D color video generation from real 2D video sequences are proposed. The novel framework includes simultaneous processing of stereo pairs using the following blocks: CIE L*a*b* color space conversion, stereo matching via a pyramidal scheme, color segmentation by k-means on the a*b* color plane, DM estimation using stereo matching between left and right images (or neighboring frames in a video), adaptive post-filtering, and finally anaglyph 3D scene generation. The technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink module on a PC with Windows 7, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode. The processing times, mean Structural Similarity Index Measure (SSIM) and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
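The final anaglyph-generation block of such a framework can be sketched in a few lines: under the common red-cyan convention, the left view supplies the red channel and the right view the green and blue channels (the toy frames below are assumptions):

```python
import numpy as np

# Red-cyan anaglyph generation from a stereo pair: red from the left
# eye, green and blue from the right eye. Toy constant-color arrays
# stand in for real video frames.

def anaglyph(left_rgb, right_rgb):
    """Combine left/right uint8 RGB frames into one red-cyan anaglyph."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]      # red channel from the left eye
    out[..., 1] = right_rgb[..., 1]     # green channel from the right eye
    out[..., 2] = right_rgb[..., 2]     # blue channel from the right eye
    return out

if __name__ == "__main__":
    h, w = 4, 6
    left = np.full((h, w, 3), 200, dtype=np.uint8)
    right = np.full((h, w, 3), 50, dtype=np.uint8)
    print(anaglyph(left, right)[0, 0])   # -> [200  50  50]
```

In the full pipeline the "right" frame is the synthesized second view obtained by warping the original frame with the estimated disparity map.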
A HUPO test sample study reveals common problems in mass spectrometry-based proteomics
Bell, Alexander W.; Deutsch, Eric W.; Au, Catherine E.; Kearney, Robert E.; Beavis, Ron; Sechi, Salvatore; Nilsson, Tommy; Bergeron, John J.M.
2009-01-01
We carried out a test sample study to try to identify errors leading to irreproducibility, including incompleteness of peptide sampling, in LC-MS-based proteomics. We distributed a test sample consisting of an equimolar mix of 20 highly purified recombinant human proteins, to 27 laboratories for identification. Each protein contained one or more unique tryptic peptides of 1250 Da to also test for ion selection and sampling in the mass spectrometer. Of the 27 labs, initially only 7 labs reported all 20 proteins correctly, and only 1 lab reported all the tryptic peptides of 1250 Da. Nevertheless, a subsequent centralized analysis of the raw data revealed that all 20 proteins and most of the 1250 Da peptides had in fact been detected by all 27 labs. The centralized analysis allowed us to determine sources of problems encountered in the study, which include missed identifications (false negatives), environmental contamination, database matching, and curation of protein identifications. Improved search engines and databases are likely to increase the fidelity of mass spectrometry-based proteomics. PMID:19448641
Computing 2D constrained delaunay triangulation using the GPU.
Qi, Meng; Cao, Thanh-Tung; Tan, Tiow-Seng
2013-05-01
We propose the first graphics processing unit (GPU) solution to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight line graph (PSLG) consisting of points and edges. There are many existing CPU algorithms to solve the CDT problem in computational geometry, yet there has been no prior approach to solve this problem efficiently using the parallel computing power of the GPU. For the special case of the CDT problem where the PSLG consists of just points, which is simply the normal Delaunay triangulation (DT) problem, a hybrid approach using the GPU together with the CPU to partially speed up the computation has already been presented in the literature. Our work, on the other hand, accelerates the entire computation on the GPU. Our implementation using the CUDA programming model on NVIDIA GPUs is numerically robust, and runs up to an order of magnitude faster than the best sequential implementations on the CPU. This result is reflected in our experiment with both randomly generated PSLGs and real-world GIS data having millions of points and edges.
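At the heart of any Delaunay algorithm, constrained or not, is the incircle predicate; a floating-point sketch is below (production codes achieve the numerical robustness the paper claims with exact or adaptive-precision arithmetic):

```python
import numpy as np

# Incircle predicate: point d lies strictly inside the circumcircle of
# counter-clockwise triangle (a, b, c) iff the 3x3 determinant below is
# positive. This test drives edge flips in Delaunay triangulation.

def in_circumcircle(a, b, c, d):
    """True if d is strictly inside the circumcircle of CCW triangle abc."""
    rows = []
    for p in (a, b, c):
        dx, dy = p[0] - d[0], p[1] - d[1]
        rows.append([dx, dy, dx * dx + dy * dy])
    return float(np.linalg.det(np.array(rows))) > 0.0

if __name__ == "__main__":
    a, b, c = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)   # CCW unit right triangle
    print(in_circumcircle(a, b, c, (0.5, 0.5)))    # -> True (inside)
    print(in_circumcircle(a, b, c, (2.0, 2.0)))    # -> False (outside)
```

The constrained variant simply skips flips across edges marked as constraints, which is why the same predicate underlies both DT and CDT construction.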
Tay-Sontheimer, Jessica; Shireman, Laura M; Beyer, Richard P; Senn, Taurence; Witten, Daniela; Pearce, Robin E; Gaedigk, Andrea; Fomban, Cletus L Gana; Lutz, Justin D; Isoherranen, Nina; Thummel, Kenneth E; Fiehn, Oliver; Leeder, J Steven; Lin, Yvonne S
2015-01-01
Aim: We sought to discover endogenous urinary biomarkers of human CYP2D6 activity. Patients & methods: Healthy pediatric subjects (n = 189) were phenotyped using dextromethorphan and randomized for candidate biomarker selection and validation. Global urinary metabolomics was performed using liquid chromatography quadrupole time-of-flight mass spectrometry. Candidate biomarkers were tested in adults receiving fluoxetine, a CYP2D6 inhibitor. Results: A biomarker, M1 (m/z 444.3102), was correlated with CYP2D6 activity in both the pediatric training and validation sets. Poor metabolizers had undetectable levels of M1, whereas it was present in subjects with other phenotypes. In adult subjects, a 9.56-fold decrease in M1 abundance was observed during CYP2D6 inhibition. Conclusion: Identification and validation of M1 may provide a noninvasive means of CYP2D6 phenotyping. PMID:25521354
2D Gridded Surface Data Value-Added Product
Tang, Q; Xie, S
2015-08-30
This report describes the Atmospheric Radiation Measurement (ARM) Best Estimate (ARMBE) 2-dimensional (2D) gridded surface data (ARMBE2DGRID) value-added product. Spatial variability is critically important to many scientific studies, especially those that involve processes of great spatial variation at high temporal frequency (e.g., precipitation, clouds, radiation, etc.). High-density ARM sites deployed at the Southern Great Plains (SGP) allow us to observe the spatial patterns of variables of scientific interest. The upcoming megasite at SGP with its enhanced spatial density will facilitate the studies at even finer scales. Currently, however, data are reported only at individual site locations at different time resolutions for different datastreams. It is difficult for users to locate all the data they need, and extra effort is required to synchronize the data. To address these problems, the ARMBE2DGRID value-added product merges key surface measurements at the ARM SGP sites and interpolates the data to a regular 2D grid to facilitate the data application.
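The report does not specify the interpolation scheme here, so the following is only a hedged sketch of one simple way to map scattered site measurements onto a regular 2D grid: inverse-distance weighting (IDW). The function names and parameters are illustrative, not part of ARMBE2DGRID.

```python
def idw(sites, values, xq, yq, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at query point (xq, yq).

    sites  -- list of (x, y) measurement locations
    values -- measured value at each site
    """
    num = den = 0.0
    for (x, y), v in zip(sites, values):
        d2 = (x - xq) ** 2 + (y - yq) ** 2
        if d2 < eps:          # query coincides with a site: return it exactly
            return v
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

def to_grid(sites, values, xs, ys):
    """Evaluate the IDW surface on the regular grid xs x ys (rows by y)."""
    return [[idw(sites, values, x, y) for x in xs] for y in ys]
```

Operational products typically also synchronize the differing time resolutions of the input datastreams before any such spatial interpolation step.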
DNN-state identification of 2D distributed parameter systems
NASA Astrophysics Data System (ADS)
Chairez, I.; Fuentes, R.; Poznyak, A.; Poznyak, T.; Escudero, M.; Viana, L.
2012-02-01
There are many examples in science and engineering which are reduced to a set of partial differential equations (PDEs) through a process of mathematical modelling. Nevertheless, there exist many sources of uncertainty around the aforementioned mathematical representation. Moreover, to find exact solutions of those PDEs is not a trivial task, especially if the PDE is described in two or more dimensions. It is well known that neural networks can approximate a large set of continuous functions defined on a compact set to an arbitrary accuracy. In this article, a strategy based on the differential neural network (DNN) for the non-parametric identification of a mathematical model described by a class of two-dimensional (2D) PDEs is proposed. The adaptive laws for weights ensure the 'practical stability' of the DNN-trajectories to the parabolic 2D-PDE states. To verify the qualitative behaviour of the suggested methodology, a non-parametric modelling problem for a distributed parameter plant is analysed.
Pilot study risk assessment for selected problems at the Nevada Test Site (NTS)
Daniels, J.I.; Anspaugh, L.R.; Bogen, K.T.; Layton, D.W.; Straume, T.; Andricevic, R.; Jacobson, R.L.; Meinhold, A.F.; Holtzman, S.; Morris, S.C.; Hamilton, L.D.
1993-06-01
The Nevada Test Site (NTS) is located in southwestern Nevada, about 105 km (65 mi) northwest of the city of Las Vegas. A series of tests was conducted in the late 1950s and early 1960s at or near the NTS to study issues involving plutonium-bearing devices. These tests resulted in the dispersal of about 5 TBq of ²³⁹,²⁴⁰Pu on the surficial soils at the test locations. Additionally, underground tests of nuclear weapons devices have been conducted at the NTS since late 1962; ground water beneath the NTS has been contaminated with radionuclides produced by these tests. These two important problems have been selected for assessment. Regarding the plutonium contamination, because the residual ²³⁹Pu decays slowly (half-life of 24,110 y), these sites could represent a long-term hazard if they are not remediated and if institutional controls are lost. To investigate the magnitude of the potential health risks for this no-remediation case, three basic exposure scenarios were defined that could bring individuals in contact with ²³⁹,²⁴⁰Pu at the sites: (1) a resident living in a subdivision, (2) a resident farmer, and (3) a worker at a commercial facility, all located at a test site. The predicted cancer risks for the resident farmer were more than a factor of three higher than those for the suburban resident at the median risk level, and about a factor of ten greater than those for the reference worker at a commercial facility. At 100 y from the present, the 5th, 50th, and 95th percentile risks for the resident farmer at the most contaminated site were 4 × 10⁻⁶, 6 × 10⁻⁵, and 5 × 10⁻⁴, respectively. For the assessment of Pu in surface soil, the principal sources of uncertainty in the estimated risks were population mobility, the relationship between indoor and outdoor contaminant levels, and the dose and risk factors for bone, liver, and lung.
Process-oriented tests for validation of baroclinic shallow water models: The lock-exchange problem
NASA Astrophysics Data System (ADS)
Kolar, R. L.; Kibbey, T. C. G.; Szpilka, C. M.; Dresback, K. M.; Tromble, E. M.; Toohey, I. P.; Hoggan, J. L.; Atkinson, J. H.
A first step often taken to validate prognostic baroclinic codes is a series of process-oriented tests, such as those suggested by Haidvogel and Beckmann [Haidvogel, D., Beckmann, A., 1999. Numerical Ocean Circulation Modeling. Imperial College Press, London], among others. One of these tests is the so-called "lock-exchange" test or "dam break" problem, wherein water of different densities is separated by a vertical barrier, which is removed at time zero. Validation against these tests has primarily consisted of comparing the propagation speed of the wave front, as predicted by various theoretical and experimental results, to model output. In addition, inter-model comparisons of the lock-exchange test have been used to validate codes. Herein, we present a high resolution data set, taken from a laboratory-scale model, for direct and quantitative comparison of experimental and numerical results throughout the domain, not just the wave front. Data are captured every 0.2 s using high resolution digital photography, with salt concentration extracted by comparing pixel intensity of the dyed fluid against calibration standards. Two scenarios are discussed in this paper, symmetric and asymmetric mixing, depending on the proportion of dense/light water (17.5 ppt/0.0 ppt) in the experiment; the Boussinesq approximation applies to both. Front speeds, cast in terms of the dimensionless Froude number, show excellent agreement with literature-reported values. Data are also used to quantify the degree of mixing, as measured by the front thickness, which also provides an error band on the front speed. Finally, experimental results are used to validate baroclinic enhancements to the barotropic shallow water ADvanced CIRCulation (ADCIRC) model, including the effect of the vertical mixing scheme on simulation results. Based on salinity data, the model provides an average root-mean-square (rms) error of 3.43 ppt for the symmetric case and 3.74 ppt for the asymmetric case, most of which can
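The Froude-number comparison above rests on the standard gravity-current scaling U = Fr · sqrt(g′H), with reduced gravity g′ = g(ρ_heavy − ρ_light)/ρ_ref. The sketch below applies that scaling with illustrative densities and the classical energy-conserving value Fr = 0.5; none of these numbers come from the experiment described.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def reduced_gravity(rho_heavy, rho_light, rho_ref=None):
    """Reduced gravity g' = g * (rho_heavy - rho_light) / rho_ref."""
    rho_ref = rho_ref if rho_ref is not None else rho_light
    return G * (rho_heavy - rho_light) / rho_ref

def front_speed(rho_heavy, rho_light, depth, froude=0.5):
    """Gravity-current front speed U = Fr * sqrt(g' * H).

    froude=0.5 is the classical value for an energy-conserving
    lock-exchange flow; experiments report values near it.
    """
    return froude * math.sqrt(reduced_gravity(rho_heavy, rho_light) * depth)
```

Comparing measured front positions over time against this U is the quantitative check referred to in the abstract; the front thickness then bounds the uncertainty of that comparison.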
Allison, Scott A; Sweet, Clifford F; Beall, Douglas P; Lewis, Thomas E; Monroe, Thomas
2005-09-01
The PACS implementation process is complicated, requiring a tremendous amount of time, resources, and planning. The Department of Defense (DOD) has significant experience in developing and refining PACS acceptance testing (AT) protocols that assure contract compliance, clinical safety, and functionality. The DOD's AT experience under the initial Medical Diagnostic Imaging Support System contract led to the current Digital Imaging Network-Picture Archiving and Communications Systems (DIN-PACS) contract AT protocol. To identify the most common system and component deficiencies under the current DIN-PACS AT protocol, 14 tri-service sites were evaluated during 1998-2000. Sixteen system deficiency citations with 154 separate types of limitations were noted, with problems involving the workstation, interfaces, and the Radiology Information System comprising more than 50% of the citations. Larger PACS deployments were associated with a higher number of deficiencies. The most commonly cited system deficiencies were among the most expensive components of the PACS.
TOPAZ2D heat transfer code users manual and thermal property data base
Shapiro, A.B.; Edwards, A.L.
1990-05-01
TOPAZ2D is a two dimensional implicit finite element computer code for heat transfer analysis. This user's manual provides information on the structure of a TOPAZ2D input file. Also included is a material thermal property data base. This manual is supplemented with The TOPAZ2D Theoretical Manual and the TOPAZ2D Verification Manual. TOPAZ2D has been implemented on the CRAY, SUN, and VAX computers. TOPAZ2D can be used to solve for the steady state or transient temperature field on two dimensional planar or axisymmetric geometries. Material properties may be temperature dependent and either isotropic or orthotropic. A variety of time and temperature dependent boundary conditions can be specified including temperature, flux, convection, and radiation. Time or temperature dependent internal heat generation can be defined locally by element or globally by material. TOPAZ2D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in material surrounding the enclosure. Additional features include thermally controlled reactive chemical mixtures, thermal contact resistance across an interface, bulk fluid flow, phase change, and energy balances. Thermal stresses can be calculated using the solid mechanics code NIKE2D which reads the temperature state data calculated by TOPAZ2D. A three dimensional version of the code, TOPAZ3D, is available. The material thermal property data base, Chapter 4, included in this manual was originally published in 1969 by Art Edwards for use with his TRUMP finite difference heat transfer code. The format of the data has been altered to be compatible with TOPAZ2D. Bob Bailey is responsible for adding the high explosive thermal property data.
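TOPAZ2D itself is an implicit finite element code, but the equation it solves, transient 2D heat conduction dT/dt = α∇²T, can be illustrated with a much simpler explicit finite-difference step on a uniform grid. This sketch is for orientation only (stable only for dt ≤ h²/(4α)) and is not how TOPAZ2D is implemented.

```python
def heat_step(T, alpha, h, dt):
    """One explicit (FTCS) step of 2D heat conduction on a uniform grid.

    T     -- temperature field as a list of rows
    alpha -- thermal diffusivity; h -- grid spacing; dt -- time step
    Boundary values are held fixed (Dirichlet conditions).
    """
    ny, nx = len(T), len(T[0])
    r = alpha * dt / (h * h)
    Tn = [row[:] for row in T]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            lap = (T[j][i - 1] + T[j][i + 1]
                   + T[j - 1][i] + T[j + 1][i] - 4.0 * T[j][i])
            Tn[j][i] = T[j][i] + r * lap
    return Tn
```

An implicit formulation like TOPAZ2D's removes the time-step stability restriction at the cost of solving a linear system each step.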
Persistence Measures for 2d Soap Froth
NASA Astrophysics Data System (ADS)
Feng, Y.; Ruskin, H. J.; Zhu, B.
Soap froths, as typical disordered cellular structures exhibiting spatial and temporal evolution, have been studied through their distributions and topological properties. Recently, persistence measures, which permit representation of the froth as a two-phase system, have been introduced to study froth dynamics at different length scales. Several aspects of the dynamics may be considered, and cluster persistence has been observed through froth experiment. Using a direct simulation method, we have investigated persistent properties in 2D froth both by monitoring the persistence of survivor cells, a topologically independent measure, and in terms of cluster persistence. It appears that the area fraction behavior for both survivor and cluster persistence is similar for Voronoi froth and uniform froth (with defects). Survivor and cluster persistent fractions are also similar for a uniform froth, particularly when geometries are constrained, but differences observed for the Voronoi case appear to be attributable to the strong topological dependency inherent in cluster persistence. Survivor persistence, on the other hand, depends on the number rather than size and position of remaining bubbles and does not exhibit the characteristic decay to zero.
SEM signal emulation for 2D patterns
NASA Astrophysics Data System (ADS)
Sukhov, Evgenii; Muelders, Thomas; Klostermann, Ulrich; Gao, Weimin; Braylovska, Mariya
2016-03-01
The application of accurate and predictive physical resist simulation is seen as one important use model for fast and efficient exploration of new patterning technology options, especially if fully qualified OPC models are not yet available at an early pre-production stage. The methodology of using top-down CD-SEM metrology to extract 3D resist profile information, such as the critical dimension (CD) at various resist heights, rests on a series of presumptions which may introduce small but systematic CD errors. Ideally, the metrology effects should be carefully minimized during the measurement process, or if possible be taken into account through proper metrology modeling. In this paper we discuss the application of a fast SEM signal emulation describing the SEM image formation. The algorithm is applied to simulated 3D resist profiles and produces emulated SEM image results for 1D and 2D patterns. It allows resist simulation quality to be estimated by comparing CDs extracted from the emulated and from the measured SEM images. Moreover, SEM emulation is applied for resist model calibration to capture subtle error signatures through dose and defocus. Finally, it should be noted that our SEM emulation methodology is based on an approximation of the physical phenomena which take place in real SEM image formation. This approximation allows achieving better speed performance compared to a fully physical model.
Competing coexisting phases in 2D water
NASA Astrophysics Data System (ADS)
Zanotti, Jean-Marc; Judeinstein, Patrick; Dalla-Bernardina, Simona; Creff, Gaëlle; Brubach, Jean-Blaise; Roy, Pascale; Bonetti, Marco; Ollivier, Jacques; Sakellariou, Dimitrios; Bellissent-Funel, Marie-Claire
2016-05-01
The properties of bulk water come from a delicate balance of interactions on length scales encompassing several orders of magnitudes: i) the Hydrogen Bond (HBond) at the molecular scale and ii) the extension of this HBond network up to the macroscopic level. Here, we address the physics of water when the three dimensional extension of the HBond network is frustrated, so that the water molecules are forced to organize in only two dimensions. We account for the large scale fluctuating HBond network by an analytical mean-field percolation model. This approach provides a coherent interpretation of the different events experimentally (calorimetry, neutron, NMR, near and far infra-red spectroscopies) detected in interfacial water at 160, 220 and 250 K. Starting from an amorphous state of water at low temperature, these transitions are respectively interpreted as the onset of creation of transient low density patches of 4-HBonded molecules at 160 K, the percolation of these domains at 220 K and finally the total invasion of the surface by them at 250 K. The source of this surprising behaviour in 2D is the frustration of the natural bulk tetrahedral local geometry and the underlying very significant increase in entropy of the interfacial water molecules.
NASA Astrophysics Data System (ADS)
Cheng, Chingyun; Kangara, Jayampathi; Arakelyan, Ilya; Thomas, John
2016-05-01
We tune the dimensionality of a strongly interacting degenerate ⁶Li Fermi gas from 2D to quasi-2D, by adjusting the radial confinement of pancake-shaped clouds to control the radial chemical potential. In the 2D regime with weak radial confinement, the measured pair binding energies are in agreement with 2D-BCS mean field theory, which predicts dimer pairing energies in the many-body regime. In the quasi-2D regime obtained with increased radial confinement, the measured pairing energy deviates significantly from 2D-BCS theory. In contrast to the pairing energy, the measured radii of the cloud profiles are not fit by 2D-BCS theory in either the 2D or quasi-2D regimes, but are fit in both regimes by a beyond-mean-field polaron model of the free energy. Supported by DOE, ARO, NSF, and AFOSR.
3D-2D registration of cerebral angiograms: a method and evaluation on clinical images.
Mitrovic, Uroš; Špiclin, Žiga; Likar, Boštjan; Pernuš, Franjo
2013-08-01
Endovascular image-guided interventions (EIGI) involve navigation of a catheter through the vasculature followed by application of treatment at the site of anomaly using live 2D projection images for guidance. 3D images acquired prior to EIGI are used to quantify the vascular anomaly and plan the intervention. If fused with the information of live 2D images they can also facilitate navigation and treatment. For this purpose 3D-2D image registration is required. Although several 3D-2D registration methods for EIGI achieve registration accuracy below 1 mm, their clinical application is still limited by insufficient robustness or reliability. In this paper, we propose a 3D-2D registration method based on matching a 3D vasculature model to intensity gradients of live 2D images. To objectively validate 3D-2D registration methods, we acquired a clinical image database of 10 patients undergoing cerebral EIGI and established "gold standard" registrations by aligning fiducial markers in 3D and 2D images. The proposed method had mean registration accuracy below 0.65 mm, which was comparable to tested state-of-the-art methods, and execution time below 1 s. With the highest rate of successful registrations and the highest capture range, the proposed method was the most robust and thus a good candidate for application in EIGI.
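At the heart of any 3D-2D registration is the geometric forward model: a candidate rigid pose maps 3D model points into the imaging frame, and a pinhole projection drops them onto the 2D image plane, where the match against image evidence is scored. The sketch below shows only that forward model, simplified to rotation about one axis; it is illustrative and not the paper's method.

```python
import math

def project(points3d, theta_z, t, focal):
    """Apply a rigid pose (rotation about z by theta_z, translation t),
    then pinhole-project each 3D point: u = f*x/z, v = f*y/z.

    A full registration would rotate about all three axes and search
    over poses maximizing agreement with the live 2D image gradients.
    """
    c, s = math.cos(theta_z), math.sin(theta_z)
    out = []
    for x, y, z in points3d:
        xr = c * x - s * y + t[0]
        yr = s * x + c * y + t[1]
        zr = z + t[2]
        out.append((focal * xr / zr, focal * yr / zr))
    return out
```

Registration accuracy figures like the 0.65 mm above are measured by comparing such projected model points against fiducial-based "gold standard" poses.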
Impact of CYP2D6 polymorphisms on clinical efficacy and tolerability of metoprolol tartrate.
Hamadeh, I S; Langaee, T Y; Dwivedi, R; Garcia, S; Burkley, B M; Skaar, T C; Chapman, A B; Gums, J G; Turner, S T; Gong, Y; Cooper-DeHoff, R M; Johnson, J A
2014-08-01
Metoprolol is a selective β-1 adrenergic receptor blocker that undergoes extensive metabolism by the polymorphic enzyme cytochrome P450 2D6 (CYP2D6). Our objective was to investigate the influence of CYP2D6 polymorphisms on the efficacy and tolerability of metoprolol tartrate. Two hundred and eighty-one participants with uncomplicated hypertension received 50 mg of metoprolol twice daily followed by response-guided titration to 100 mg twice daily. Phenotypes were assigned based on results of CYP2D6 genotyping and copy number variation assays. Clinical response to metoprolol and adverse effect rates were analyzed in relation to CYP2D6 phenotypes using appropriate statistical tests. Heart rate response differed significantly by CYP2D6 phenotype (P < 0.0001), with poor and intermediate metabolizers showing greater reduction. However, blood pressure response and adverse effect rates were not significantly different by CYP2D6 phenotype. Other than a significant difference in heart rate response, CYP2D6 polymorphisms were not determinants of variability in metoprolol response or tolerability.
ERIC Educational Resources Information Center
Erford, Bradley T.; Alsamadi, Silvana C.
2012-01-01
Score reliability and validity of parent responses concerning their 10- to 17-year-old students were analyzed using the Screening Test for Emotional Problems-Parent Report (STEP-P), which assesses a variety of emotional problems classified under the Individuals with Disabilities Education Improvement Act. Score reliability, convergent, and…
2014-11-01
Integrated Cognitive-neuroscience Architectures for Understanding Sensemaking (ICArUS): Phase 2 Challenge Problem Design and Test Specification. The IARPA ICArUS program requires a research problem that poses cognitive challenges for sensemaking.
Code of Federal Regulations, 2013 CFR
2013-10-01
49 CFR § 40.209 (Problems in Drug Tests): specifies what procedural problems do not result in the cancellation of a test and do not require corrective action, including irregularities of which collectors must be aware even if they are not considered problems that will cause a test to be cancelled.
Advecting Procedural Textures for 2D Flow Animation
NASA Technical Reports Server (NTRS)
Kao, David; Pang, Alex; Moran, Pat (Technical Monitor)
2001-01-01
This paper proposes the use of specially generated 3D procedural textures for visualizing steady state 2D flow fields. We use the flow field to advect and animate the texture over time. However, using standard texture advection techniques and arbitrary textures will introduce some undesirable effects such as: (a) expanding texture from a critical source point, (b) streaking patterns from the boundary of the flow field, (c) crowding of advected textures near an attracting spiral or sink, and (d) absence of texture in some regions of the flow. This paper proposes a number of strategies to solve these problems. We demonstrate how the technique works using both synthetic data and computational fluid dynamics data.
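The "standard texture advection" the paper builds on can be sketched as a backward (semi-Lagrangian) step: each pixel traces upstream along the steady flow and samples the texture there with bilinear filtering. This is a minimal sketch of that baseline, not the paper's procedural-texture strategies.

```python
def bilinear(tex, x, y):
    """Sample a list-of-lists texture at real coordinates, clamped to edges."""
    h, w = len(tex), len(tex[0])
    x = min(max(x, 0.0), w - 1.0)
    y = min(max(y, 0.0), h - 1.0)
    i, j = int(x), int(y)
    i2, j2 = min(i + 1, w - 1), min(j + 1, h - 1)
    fx, fy = x - i, y - j
    top = tex[j][i] * (1 - fx) + tex[j][i2] * fx
    bot = tex[j2][i] * (1 - fx) + tex[j2][i2] * fx
    return top * (1 - fy) + bot * fy

def advect(tex, velocity, dt):
    """One backward-advection step for a steady 2D flow.

    velocity(x, y) -> (u, v); each output pixel samples the texture
    at its upstream location (x - dt*u, y - dt*v).
    """
    h, w = len(tex), len(tex[0])
    out = [[0.0] * w for _ in range(h)]
    for j in range(h):
        for i in range(w):
            u, v = velocity(i, j)
            out[j][i] = bilinear(tex, i - dt * u, j - dt * v)
    return out
```

The artifacts listed in the abstract, source-point stretching, boundary streaking, and sink crowding, all arise from repeatedly applying exactly this kind of step to an arbitrary texture.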
Adaptive superplastic forming using NIKE2D with ISLAND
Engelmann, B.E.; Whirley, R.G.; Raboin, P.J.
1992-07-30
Superplastic forming has emerged as an important manufacturing process for producing near-net-shape parts. The design of a superplastic forming process is more difficult than conventional manufacturing operations, and is less amenable to trial and error approaches. This paper describes a superplastic forming process design capability incorporating nonlinear finite element analysis. The material constraints to allow superplastic behavior are integrated into an external constraint equation which is solved concurrently with the nonlinear finite element equations. The implementation of this approach using the ISLAND solution control language with the nonlinear finite element code NIKE2D is discussed in detail. Superplastic forming process design problems with one and two control parameters are presented as examples.
The Anatomy of High-Performance 2D Similarity Calculations
Haque, Imran S.; Pande, Vijay S.
2011-01-01
Similarity measures based on the comparison of dense bit-vectors of two-dimensional chemical features are a dominant method in chemical informatics. For large-scale problems, including compound selection and machine learning, computing the intersection between two dense bit-vectors is the overwhelming bottleneck. We describe efficient implementations of this primitive, as well as example applications, using features of modern CPUs that allow 20-40x performance increases relative to typical code. Specifically, we describe fast methods for population count on modern x86 processors, and cache-efficient matrix traversal and leader clustering algorithms that alleviate memory bandwidth bottlenecks in similarity matrix construction and clustering. The speed of our 2D comparison primitives is within a small factor of that obtained on GPUs, and does not require specialized hardware. PMID:21854053
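The bottleneck primitive described above reduces to population counts of the AND and OR of two fingerprint bit-vectors, from which the Tanimoto (Jaccard) similarity follows. The sketch below models the bit-vectors as Python integers purely to show the arithmetic; the paper's contribution is doing the popcounts with fast x86 instructions, not this scalar form.

```python
def popcount(x: int) -> int:
    """Number of set bits (the operation accelerated by x86 POPCNT)."""
    return bin(x).count("1")

def tanimoto(a: int, b: int) -> float:
    """Tanimoto similarity of two chemical fingerprints held as ints:
    |a AND b| / |a OR b|."""
    union = popcount(a | b)
    return popcount(a & b) / union if union else 1.0
```

In a similarity-matrix construction, this function is evaluated for every pair of compounds, which is why cache-friendly traversal of the fingerprint matrix matters as much as the popcount itself.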
Repression of multiple CYP2D genes in mouse primary hepatocytes with a single siRNA construct.
Elraghy, Omaima; Baldwin, William S
2015-01-01
The Cyp2d subfamily is the second most abundant subfamily of hepatic drug-metabolizing CYPs. In mice, there are nine Cyp2d members that are believed to have redundant catalytic activity. We are testing and optimizing the ability of one short interfering RNA (siRNA) construct to knockdown the expression of multiple mouse Cyp2ds in primary hepatocytes. Expression of Cyp2d10, Cyp2d11, Cyp2d22, and Cyp2d26 was observed in the primary male mouse hepatocytes. Cyp2d9, which is male-specific and growth hormone-dependent, was not expressed in male primary hepatocytes, potentially because of its dependence on pulsatile growth hormone release from the anterior pituitary. Several different siRNAs at different concentrations and with different reagents were used to knockdown Cyp2d expression. siRNA constructs designed to repress only one construct often mildly repressed several Cyp2d isoforms. A construct designed to knockdown every Cyp2d isoform provided the best results, especially when incubated with transfection reagents designed specifically for primary cell culture. Interestingly, a construct designed to knockdown all Cyp2d isoforms, except Cyp2d10, caused a 2.5× increase in Cyp2d10 expression, presumably because of a compensatory response. However, while RNA expression is repressed 24 h after siRNA treatment, associated changes in Cyp2d-mediated metabolism are tenuous. Overall, this study provides data on the expression of murine Cyp2ds in primary cell lines, valuable information on designing siRNAs for silencing multiple murine CYPs, and potential pros and cons of using siRNA as a tool for repressing Cyp2d and estimating Cyp2d's role in murine xenobiotic metabolism.
Is There a Space-Based Technology Solution to Problems with Preclinical Drug Toxicity Testing?
Hammond, Timothy; Allen, Patricia; Birdsall, Holly
2016-07-01
Even the finest state-of-the art preclinical drug testing, usually in primary hepatocytes, remains an imperfect science. Drugs continue to be withdrawn from the market due to unforeseen toxicity, side effects, and drug interactions. The space program may be able to provide a lifeline. Best known for rockets, space shuttles, astronauts and engineering, the space program has also delivered some serious medical science. Optimized suspension culture in NASA's specialized suspension culture devices, known as rotating wall vessels, uniquely maintains Phase I and Phase II drug metabolizing pathways in hepatocytes for weeks in cell culture. Previously prohibitively expensive, new materials and 3D printing techniques have the potential to make the NASA rotating wall vessel available inexpensively on an industrial scale. Here we address the tradeoffs inherent in the rotating wall vessel, limitations of alternative approaches for drug metabolism studies, and the market to be addressed. Better pre-clinical drug testing has the potential to significantly reduce the morbidity and mortality of one of the most common problems in modern medicine: adverse events related to pharmaceuticals.
"Super 2D," Innovative seismic reprocessing: A case history
Conne, D.K.M.; Bolander, A.G.; MacDonald, R.J.; Strelioff, D.M.
1988-01-01
The "Super 2D" processing sequence involves taking a randomly oriented grid of multivintage two-dimensional seismic data and reprocessing to tie the data where required, then interpolating the data set to a regular grid suitable for three-dimensional processing and interpretation. A data set from Alberta, provided by a Canadian oil company, comprises 15 two-dimensional seismic lines collected and processed over a period of 6 years by various contractors. Field conditions, advances in technology, and changing objectives combined to result in a data set that densely sampled a small area, but did not tie in well enough to be interpreted as a whole. The data mistied in time, phase, and frequency, and also had a problem with multiples in the zone of interest that had been only partly attenuated, to varying degrees. Therefore, the first objective of reprocessing was to resolve these problems. The authors' current land data processing sequence, which includes frequency balancing followed by source wavelet designature, F/K multiple attenuation, trim statics, and F-X filtering, as well as close attention to statics and velocity control, resolved all the mistie issues and produced a standardized data volume. This data volume was now suitable for the second stage of this sequence (i.e., interpolating to a regular grid and subsequent three-dimensional processing). The volume was three-dimensionally migrated (finite difference), filtered, and scaled. The full range of three-dimensional display and interpretational options, including loading on an interactive system, are now possible. This, along with standardizing the data set and improving the spatial location of events via three-dimensional migration, are the key results of the "Super 2D" sequence.
2D discrete Fourier transform on sliding windows.
Park, Chun-Su
2015-03-01
Discrete Fourier transform (DFT) is the most widely used method for determining the frequency spectra of digital signals. In this paper, a 2D sliding DFT (2D SDFT) algorithm is proposed for fast implementation of the DFT on 2D sliding windows. The proposed 2D SDFT algorithm directly computes the DFT bins of the current window using the precalculated bins of the previous window. Since the proposed algorithm is designed to accelerate the sliding transform process of a 2D input signal, it can be directly applied to computer vision and image processing applications. The theoretical analysis shows that the computational requirement of the proposed 2D SDFT algorithm is the lowest among existing 2D DFT algorithms. Moreover, the output of the 2D SDFT is mathematically equivalent to that of the traditional DFT at all pixel positions.
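The idea behind sliding-DFT schemes, of which the paper's 2D SDFT applies the same recurrence along rows and columns, is easiest to see in 1D: when the window slides by one sample, each bin of the new window is obtained from the previous window's bin rather than recomputed from scratch. The following is a hedged 1D sketch, not the paper's 2D algorithm.

```python
import cmath

def dft(window):
    """Direct DFT of a window (O(N^2); for reference/verification only)."""
    n = len(window)
    return [sum(window[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def slide(bins, x_old, x_new):
    """Sliding-DFT update: bins of the window shifted by one sample.

    X_new[k] = (X_prev[k] - x_old + x_new) * exp(+2*pi*i*k/N),
    where x_old leaves the window and x_new enters it: O(N) per shift.
    """
    n = len(bins)
    return [(bins[k] - x_old + x_new) * cmath.exp(2j * cmath.pi * k / n)
            for k in range(n)]
```

Each one-sample shift therefore costs O(N) instead of the O(N log N) of a fresh FFT, which is the source of the 2D SDFT's claimed advantage on sliding windows.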
CAST2D: A finite element computer code for casting process modeling
Shapiro, A.B.; Hallquist, J.O.
1991-10-01
CAST2D is a coupled thermal-stress finite element computer code for casting process modeling. This code can be used to predict the final shape and stress state of cast parts. CAST2D couples the heat transfer code TOPAZ2D and solid mechanics code NIKE2D. CAST2D has the following features in addition to all the features contained in the TOPAZ2D and NIKE2D codes: (1) a general purpose thermal-mechanical interface algorithm (i.e., slide line) that calculates the thermal contact resistance across the part-mold interface as a function of interface pressure and gap opening; (2) a new phase change algorithm, the delta function method, that is a robust method for materials undergoing isothermal phase change; (3) a constitutive model that transitions between fluid behavior and solid behavior, and accounts for material volume change on phase change; and (4) a modified plot file data base that allows plotting of thermal variables (e.g., temperature, heat flux) on the deformed geometry. Although the code is specialized for casting modeling, it can be used for other thermal stress problems (e.g., metal forming).
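The isothermal phase change that CAST2D's delta function method targets is numerically awkward because the effective heat capacity contains a Dirac delta at the melt temperature. One common way to see the structure of the problem is the enthalpy formulation sketched below, where temperature is recovered from enthalpy through a piecewise relation with an isothermal plateau. This is an illustrative sketch with hypothetical constant properties, not CAST2D's actual algorithm.

```python
def temperature(h, c, latent, t_melt):
    """Temperature from specific enthalpy for an isothermal phase change.

    Convention: h = 0 at the melt point on the solid side.
    c      -- specific heat (assumed equal for solid and liquid here)
    latent -- latent heat of fusion
    """
    if h <= 0.0:                 # solid branch
        return t_melt + h / c
    if h >= latent:              # liquid branch
        return t_melt + (h - latent) / c
    return t_melt                # mushy zone: temperature is pinned
```

Tracking enthalpy and inverting this relation each step keeps the energy balance exact across the phase front, which is the robustness property the delta function method is designed to achieve.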
MAGNUM2D. Radionuclide Transport Porous Media
Langford, D.W.; Baca, R.G.
1989-03-01
MAGNUM2D was developed to analyze thermally driven fluid motion in the deep basalts below the Pasco Basin at the Westinghouse Hanford Site. It has been used in the Basalt Waste Isolation Project to simulate nonisothermal groundwater flow in a heterogeneous anisotropic medium and heat transport in a water/rock system near a high-level nuclear waste repository. It allows three representations of the hydrogeologic system: an equivalent porous continuum; a system of discrete, unfilled, and interconnecting fractures separated by impervious rock mass; and a low-permeability porous continuum with several discrete, unfilled fractures traversing the medium. The calculations assume local thermodynamic equilibrium between the rock and groundwater, nonisothermal Darcian flow in the continuum portions of the rock, and nonisothermal Poiseuille flow in discrete unfilled fractures. In addition, the code accounts for thermal loading within the elements, zero-normal-gradient and fixed boundary conditions for both temperature and hydraulic head, and independent simulation of temperature and flow. The Q2DGEOM preprocessor was developed to generate, modify, plot, and verify quadratic two-dimensional finite element geometries. The BCGEN preprocessor generates the boundary conditions for head and temperature, and ICGEN generates the initial conditions. The GRIDDER postprocessor interpolates nonregularly spaced nodal flow and temperature data onto a regular rectangular grid. CONTOUR plots and labels contour lines for a function of two variables, and PARAM plots cross sections and time histories for a function of time and one or two spatial variables. NPRINT generates data tables that display the data along horizontal or vertical cross sections. VELPLT differentiates the hydraulic head and buoyancy data and plots the velocity vectors. The PATH postprocessor plots flow paths and computes the corresponding travel times.
NIKE2D96. Static & Dynamic Response of 2D Solids
Raboin, P.; Engelmann, B.; Hallquist, J.O.
1992-01-24
NIKE2D is an implicit finite-element code for analyzing the finite deformation, static and dynamic response of two-dimensional, axisymmetric, plane strain, and plane stress solids. The code is fully vectorized and available on several computing platforms. A number of material models are incorporated to simulate a wide range of material behavior, including elasto-plasticity, anisotropy, creep, thermal effects, and rate dependence. Slideline algorithms model gaps and sliding along material interfaces, including interface friction, penetration, and single-surface contact. Interactive graphics and rezoning are included for analyses with large mesh distortions. In addition to quasi-Newton and arc-length procedures, adaptive algorithms can be defined to solve the implicit equations using the solution language ISLAND. These capabilities make NIKE2D a robust analysis tool.
Efficiency of Pareto joint inversion of 2D geophysical data using global optimization methods
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2016-04-01
Pareto joint inversion of two or more sets of data is a promising new tool of modern geophysical exploration. In the first stage of our investigation we created software that executes forward solvers for two geophysical methods (2D magnetotellurics and gravity), as well as an inversion with the possibility of constraining the solution with seismic data. In the MT forward solver, the Helmholtz equations were solved with the finite element method under Dirichlet boundary conditions; the gravity forward solver was based on Talwani's algorithm. To limit the dimensionality of the solution space we described the model as sets of polygons, using the Sharp Boundary Interface (SBI) approach. The main inversion engine was built on a Particle Swarm Optimization (PSO) algorithm adapted to handle two or more target functions and to reject solutions that are unrealistic or incompatible with the Pareto scheme. Each inversion run generates a single Pareto solution, which can be added to the Pareto front. The PSO inversion engine was parallelized using the OpenMP standard, which allows the code to run on a practically unlimited number of threads at once; the computing time of the inversion process was thereby significantly decreased, and computing efficiency increases with the number of PSO iterations. In this contribution we analyze the efficiency of the created software, taking into consideration details of the global optimization engine used as the main joint minimization engine. Additionally, we study how much computational time can be saved by different methods of parallelization applied to both the forward solvers and the inversion algorithm. All tests were done for 2D magnetotelluric and gravity data based on real geological media. The results show that even on relatively modest mid-range computational infrastructure, the proposed inversion can be applied in practice and used for real-life problems of geophysical inversion and interpretation.
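The Pareto-front bookkeeping described above can be sketched with a generic dominance filter (a minimal illustration under the assumption that all objectives are minimized; this is not the authors' implementation):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated candidate solutions, e.g. the misfit
    tuples (mt_misfit, gravity_misfit) accumulated over inversion runs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Each PSO run would contribute one candidate tuple of target-function values; re-running the filter after every run maintains the growing Pareto front.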
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1994-01-01
A two-dimensional computational code, RPLUS2D, which was developed for the reactive propulsive flows of ramjets and scramjets, was validated for two-dimensional shock-wave/turbulent-boundary-layer interactions. The problem of compression corners at supersonic speeds was solved using the RPLUS2D code. To validate the RPLUS2D code for hypersonic speeds, it was applied to a realistic hypersonic inlet geometry. Both the Baldwin-Lomax and the Chien two-equation turbulence models were used. Computational results showed that the RPLUS2D code compared very well with experimentally obtained data for supersonic compression corner flows, except in the case of large separated flows resulting from the interactions between the shock wave and the turbulent boundary layer. The computational results also compared well with experimental results for a hypersonic NASA P8 inlet case, with the Chien two-equation turbulence model performing better than the Baldwin-Lomax model.
Micro-Expression Recognition based on 2D Gabor Filter and Sparse Representation
NASA Astrophysics Data System (ADS)
Zheng, Hao
2017-01-01
Micro-expression recognition is a challenging problem because the facial movements involved are brief and subtle. This paper proposes a novel method, 2D Gabor filter and Sparse Representation (2DGSR), for micro-expression recognition. In our method, the 2D Gabor filter is used to enhance robustness to variations by increasing the discriminative power of the features, while sparse representation is applied to handle the subtlety of the expressions, casting recognition as a sparse approximation problem. We compare our method to other popular methods on three spontaneous micro-expression recognition databases. The results show that our method outperforms the other methods.
Reconstruction of a 2D seismic wavefield by seismic gradiometry
NASA Astrophysics Data System (ADS)
Maeda, Takuto; Nishida, Kiwamu; Takagi, Ryota; Obara, Kazushige
2016-12-01
We reconstructed a 2D seismic wavefield and obtained its propagation properties by using the seismic gradiometry method together with dense observations of the Hi-net seismograph network in Japan. The seismic gradiometry method estimates the wave amplitude and its spatial derivative coefficients at any location from a discrete station record by using a Taylor series approximation. From the spatial derivatives in horizontal directions, the properties of a propagating wave packet, including the arrival direction, slowness, geometrical spreading, and radiation pattern, can be obtained. In addition, by using spatial derivatives together with free-surface boundary conditions, the 2D vector elastic wavefield can be decomposed into divergence and rotation components. First, as a feasibility test, we performed an analysis with a synthetic seismogram dataset computed by a numerical simulation for a realistic 3D medium and the actual Hi-net station layout. We confirmed that the wave amplitude and its spatial derivatives were reproduced very well for period bands longer than 25 s. Applications to a real large earthquake showed that the amplitude and phase of the wavefield were well reconstructed, along with the slowness vector. The slowness of the reconstructed wavefield showed a clear contrast between body and surface waves, as well as regional non-great-circle-path wave propagation, possibly owing to scattering. Slowness vectors together with divergence and rotation decomposition are expected to be useful for determining constituents of observed wavefields in inhomogeneous media.
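The first-order Taylor-series estimate underlying seismic gradiometry can be sketched as a least-squares fit over nearby stations (an illustrative sketch assuming a scalar amplitude field and known station coordinates; the actual method handles vector wavefields and time series):

```python
import numpy as np

def gradiometry_fit(coords, values, x0):
    """Estimate the wave amplitude and its horizontal spatial derivatives
    at x0 by fitting u(x) ~ u0 + ux*(x - x0) + uy*(y - y0) to nearby
    station amplitudes in a least-squares sense."""
    dx = np.asarray(coords, dtype=float) - np.asarray(x0, dtype=float)
    # Design matrix: constant term plus the two horizontal offsets.
    A = np.column_stack([np.ones(len(dx)), dx[:, 0], dx[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(values, dtype=float), rcond=None)
    return coeffs            # [u0, du/dx, du/dy] evaluated at x0
```

The two derivative coefficients are what yield the arrival direction and slowness of a passing wave packet when combined across time samples.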
Facial biometrics based on 2D vector geometry
NASA Astrophysics Data System (ADS)
Malek, Obaidul; Venetsanopoulos, Anastasios; Androutsos, Dimitrios
2014-05-01
The main challenge of facial biometrics is its robustness and ability to adapt to changes in position orientation, facial expression, and illumination effects. This research addresses the predominant deficiencies in this regard and systematically investigates a facial authentication system in the Euclidean domain. In the proposed method, Euclidean geometry in 2D vector space is constructed for feature extraction and authentication. In particular, each assigned point of the candidates' biometric features is considered to be a 2D geometrical coordinate in the Euclidean vector space. Algebraic shapes of the extracted candidate features are also computed and compared. The proposed authentication method is tested on images from the public "Put Face Database". The performance of the proposed method is evaluated based on Correct Recognition Rate (CRR), False Acceptance Rate (FAR), and False Rejection Rate (FRR). The theoretical foundation of the proposed method, along with the experimental results, is also presented in this paper. The experimental results demonstrate the effectiveness of the proposed method.
The effects of aging on haptic 2D shape recognition.
Overvliet, Krista E; Wagemans, J; Krampe, Ralf T
2013-12-01
We use the image-mediation model (Klatzky & Lederman, 1987) as a framework to investigate potential sources of adult age differences in the haptic recognition of two-dimensional (2D) shapes. This model states that the low-resolution, temporally sequential haptic input is translated into a visual image, which is then reperceived through the visual processors before it is matched against a long-term memory representation and named. In three experiments we tested three groups of 12 older adults (mean age 73.11) and three groups of 12 young adults (mean age 22.80). In Experiment 1 we confirm age-related differences in haptic 2D shape recognition, and we show the typical age × complexity interaction. In Experiment 2 we show that if we facilitate the visual translation process, age differences become smaller, but only with simple shapes and not with the more complex everyday objects. In Experiment 3 we target the last step in the model (matching and naming) for complex stimuli. We found that age differences in exploration time were considerably reduced when this component process was facilitated by providing a category name. We conclude that the image-mediation model can explain adult-age differences in haptic recognition, particularly if the role of working memory in forming the transient visual image is considered. Our findings suggest that sensorimotor skills thought to rely mostly on peripheral processes are critically constrained by age-related changes in central processing capacity in later adulthood.
Scientometric analysis and bibliography of digit ratio (2D:4D) research, 1998-2008.
Voracek, Martin; Loibl, Lisa Mariella
2009-06-01
A scientometric analysis of modern research on the second-to-fourth digit ratio (2D:4D), a widely studied putative marker for prenatal androgen action, is presented. In early 2009, this literature totalled more than 300 publications and, since its initiation in 1998, has grown at a rate slightly faster than linear. Key findings included evidence of publication bias and citation bias, incomplete coverage and outdatedness of existing reviews, and a dearth of meta-analyses in this field. 2D:4D research clusters noticeably in terms of researchers, institutions, countries, and journals involved. Although 2D:4D is an anthropometric trait, most of the research has been conducted at psychology departments, not anthropology departments. However, 2D:4D research has not been predominantly published in core and specialized journals of psychology, but rather in more broadly scoped journals of the behavioral sciences, biomedical social sciences, and neurosciences. Total citation numbers of 2D:4D papers for the most part were not larger than their citation counts within 2D:4D research, indicating that until now, only a few 2D:4D studies have attained broader interest outside this specific field. Comparative citation analyses show that 2D:4D research presently is commensurate in size and importance to evolutionary psychological jealousy research, but has grown faster than the latter field. In contrast, it is much smaller and has spread more slowly than research about the Implicit Association Test. Fifteen conjectures about anticipated trends in 2D:4D research are outlined, appended by a first-time bibliography of the entirety of the published 2D:4D literature.
ELRIS2D: A MATLAB Package for the 2D Inversion of DC Resistivity/IP Data
NASA Astrophysics Data System (ADS)
Akca, Irfan
2016-04-01
ELRIS2D is an open source code written in MATLAB for the two-dimensional inversion of direct current resistivity (DCR) and time domain induced polarization (IP) data. The user interface of the program is designed for functionality and ease of use. All available settings of the program can be reached from the main window. The subsurface is discretized using a hybrid mesh generated by the combination of structured and unstructured meshes, which reduces the computational cost of the whole inversion procedure. The inversion routine is based on the smoothness constrained least squares method. In order to verify the program, responses of two test models and field data sets were inverted. The models inverted from the synthetic data sets are consistent with the original test models in both DC resistivity and IP cases. A field data set acquired in an archaeological site is also used for the verification of outcomes of the program in comparison with the excavation results.
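The smoothness-constrained least-squares step can be sketched as Tikhonov regularization with a first-difference roughness operator (a generic sketch, not ELRIS2D's actual routine; `J` is a linearized forward operator, `lam` a hypothetical regularization weight):

```python
import numpy as np

def smooth_lsq(J, d, lam):
    """Solve min |J m - d|^2 + lam^2 |L m|^2, where L is a first-difference
    roughness operator penalizing jumps between adjacent model cells
    (the last row of this simple L also damps the final cell)."""
    n = J.shape[1]
    L = np.eye(n) - np.eye(n, k=1)
    return np.linalg.solve(J.T @ J + lam**2 * (L.T @ L), J.T @ d)
```

With `lam = 0` the ordinary least-squares solution is recovered; increasing `lam` trades data fit for a smoother resistivity model.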
NASA Technical Reports Server (NTRS)
Thakur, Siddarth; Wright, Jeffrey
2006-01-01
The traditional design and analysis practice for advanced propulsion systems, particularly chemical rocket engines, relies heavily on expensive full-scale prototype development and testing. Over the past decade, use of high-fidelity analysis and design tools such as CFD early in the product development cycle has been identified as one way to alleviate testing costs and to develop these devices better, faster and cheaper. Increased emphasis is being placed on developing and applying CFD models to simulate the flow field environments and performance of advanced propulsion systems. This necessitates the development of next generation computational tools which can be used effectively and reliably in a design environment by non-CFD specialists. A computational tool, called Loci-STREAM is being developed for this purpose. It is a pressure-based, Reynolds-averaged Navier-Stokes (RANS) solver for generalized unstructured grids, which is designed to handle all-speed flows (incompressible to hypersonic) and is particularly suitable for solving multi-species flow in fixed-frame combustion devices. Loci-STREAM integrates proven numerical methods for generalized grids and state-of-the-art physical models in a novel rule-based programming framework called Loci which allows: (a) seamless integration of multidisciplinary physics in a unified manner, and (b) automatic handling of massively parallel computing. The objective of the ongoing work is to develop a robust simulation capability for combustion problems in rocket engines. As an initial step towards validating this capability, a model problem is investigated in the present study which involves a gaseous oxygen/gaseous hydrogen (GO2/GH2) shear coaxial single element injector, for which experimental data are available. The sensitivity of the computed solutions to grid density, grid distribution, different turbulence models, and different near-wall treatments is investigated. A refined grid, which is clustered in the vicinity of
Haug, Tobias; Mann, Wolfgang
2008-01-01
Given the current lack of appropriate assessment tools for measuring deaf children's sign language skills, many test developers have used existing tests of other sign languages as templates to measure the sign language used by deaf people in their country. This article discusses factors that may influence the adaptation of assessment tests from one natural sign language to another. Two tests which have been adapted for several other sign languages are focused upon: the Test for American Sign Language and the British Sign Language Receptive Skills Test. A brief description is given of each test as well as insights from ongoing adaptations of these tests for other sign languages. The problems reported in these adaptations were found to be grounded in linguistic and cultural differences, which need to be considered for future test adaptations. Other reported shortcomings of test adaptation are related to the question of how well psychometric measures transfer from one instrument to another.
Locke, Thomas F; Newcomb, Michael
2004-03-01
The authors tested how adverse childhood experiences (child maltreatment and parent alcohol- and drug-related problems) and adult polydrug use (as a mediator) predict poor parenting in a community sample (237 mothers and 81 fathers). These relationships were framed within several theoretical perspectives, including observational learning, impaired functioning, self-medication, and parentification-pseudomaturity. Structural models revealed that child maltreatment predicted poor parenting practices among mothers. Parent alcohol- and drug-related problems had an indirect detrimental influence on mothers' parenting practices through the mothers' own drug problems. Among fathers, emotional neglect experienced as a child predicted lack of parental warmth and more parental neglect, and sexual abuse experienced as a child predicted a rejecting style of parenting.
ERIC Educational Resources Information Center
Erford, Bradley T.; Butler, Caitlin; Peacock, Elizabeth
2015-01-01
The Screening Test for Emotional Problems-Teacher Version (STEP-T) was designed to identify students aged 7-17 years with wide-ranging emotional disturbances. Coefficients alpha and test-retest reliability were adequate for all subscales except Anxiety. The hypothesized five-factor model fit the data very well and external aspects of validity were…
ERIC Educational Resources Information Center
Educational Testing Service, Princeton, NJ.
Four topics were emphasized during this conference on testing problems: (1) the selection of appropriate score scales for tests; (2) the experimental approach to the measurement of human motivation; (3) trends in public opinion polling since 1948 and their probable effects on predictions of the 1952 election; and (4) techniques for developing…
Testing a Comprehensive Community Problem-Solving Framework for Community Coalitions
ERIC Educational Resources Information Center
Yang, Evelyn; Foster-Fishman, Pennie; Collins, Charles; Ahn, Soyeon
2012-01-01
Community problem solving is believed to help coalitions achieve community changes and subsequent population-level reductions in targeted community health problems. This study empirically examined a community problem solving model used by CADCA, a national coalition training organization, to determine if the model explains how coalitions become…
Generation and Radiation of Acoustic Waves from a 2-D Shear Layer
NASA Technical Reports Server (NTRS)
Agarwal, Anurag; Morris, Philip J.
2000-01-01
A parallel numerical simulation of the radiation of sound from an acoustic source inside a 2-D jet is presented in this paper. This basic benchmark problem is used as a test case for scattering problems that are presently being solved by using the Impedance Mismatch Method (IMM). In this technique, a solid body in the domain is represented by setting the acoustic impedance of each medium, encountered by a wave, to a different value. This impedance discrepancy results in reflected and scattered waves with appropriate amplitudes. The great advantage of the use of this method is that no modifications to a simple Cartesian grid need to be made for complicated geometry bodies. Thus, high order finite difference schemes may be applied simply to all parts of the domain. In the IMM, the total perturbation field is split into incident and scattered fields. The incident pressure is assumed to be known and the equivalent sources for the scattered field are associated with the presence of the scattering body (through the impedance mismatch) and the propagation of the incident field through a non-uniform flow. An earlier version of the technique could only handle uniform flow in the vicinity of the source and at the outflow boundary. Scattering problems in non-uniform mean flow are of great practical importance (for example, scattering from a high lift device in a non-uniform mean flow or the effects of a fuselage boundary layer). The solution to this benchmark problem, which has an acoustic wave propagating through a non-uniform mean flow, serves as a test case for the extensions of the IMM technique.
Subplane-based Control Rod Decusping Techniques for the 2D/1D Method in MPACT
Graham, Aaron M; Collins, Benjamin S; Downar, Thomas
2017-01-01
The MPACT transport code is being jointly developed by Oak Ridge National Laboratory and the University of Michigan to serve as the primary neutron transport code for the Virtual Environment for Reactor Applications Core Simulator. MPACT uses the 2D/1D method to solve the transport equation by decomposing the reactor model into a stack of 2D planes. A fine mesh flux distribution is calculated in each 2D plane using the Method of Characteristics (MOC), then the planes are coupled axially through a 1D NEM-P3 calculation. This iterative calculation is then accelerated using the Coarse Mesh Finite Difference method. One problem that arises frequently when using the 2D/1D method is that of control rod cusping. This occurs when the tip of a control rod falls between the boundaries of an MOC plane, requiring that the rodded and unrodded regions be axially homogenized for the 2D MOC calculations. Performing a volume homogenization does not properly preserve the reaction rates, causing an error known as cusping. The most straightforward way of resolving this problem is by refining the axial mesh, but this can significantly increase the computational expense of the calculation. The other way of resolving the partially inserted rod is through the use of a decusping method. This paper presents new decusping methods implemented in MPACT that can dynamically correct the rod cusping behavior for a variety of problems.
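The naive volume homogenization that produces the cusping error can be written down directly (a one-group illustrative sketch; the decusping methods in the paper replace this with flux-weighted, subplane-based corrections):

```python
def volume_homogenize(sigma_rodded, sigma_unrodded, rod_fraction):
    """Volume-weighted homogenized cross section for a partially rodded
    MOC plane. Because this ignores the flux depression near the rod tip,
    reaction rates are not preserved -- the source of the cusping error."""
    return rod_fraction * sigma_rodded + (1.0 - rod_fraction) * sigma_unrodded
```

As the rod tip sweeps through a plane, the homogenized cross section varies linearly with insertion while the true reaction rate does not, producing the characteristic cusp in reactivity versus rod position.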
Wang, Yuxian; Xie, Yongbing; Sun, Hongqi; Xiao, Jiadong; Cao, Hongbin; Wang, Shaobin
2016-01-15
Two-dimensional reduced graphene oxide (2D rGO) was employed as both a shape-directing medium and support to fabricate 2D γ-MnO2/2D rGO nano-hybrids (MnO2/rGO) via a facile hydrothermal route. For the first time, the 2D/2D hybrid materials were used for catalytic ozonation of 4-nitrophenol. The catalytic efficiency of MnO2/rGO was much higher than either MnO2 or rGO only, and rGO was suggested to play the role for promoting electron transfers. Quenching tests using tert-butanol, p-benzoquinone, and sodium azide suggested that the major radicals responsible for 4-nitrophenol degradation and mineralization are O2(-) and (1)O2, but not ·OH. Reusability tests demonstrated a high stability of the materials in catalytic ozonation with minor Mn leaching below 0.5 ppm. Degradation mechanism, reaction kinetics, reusability and a synergistic effect between catalytic ozonation and coupling peroxymonosulfate (PMS) activation were also discussed.
NASA Astrophysics Data System (ADS)
Bernauer, F.; Hürkamp, K.; Rühm, W.; Tschiersch, J.
2015-08-01
Detailed characterization and classification of precipitation is an important task in atmospheric research. Line scanning 2-D video disdrometer devices are well established for rain observations. The two orthogonal views taken of each hydrometeor passing the sensitive area of the instrument qualify these devices especially for detailed characterization of nonsymmetric solid hydrometeors. However, in case of solid precipitation, problems related to the matching algorithm have to be considered and the user must be aware of the limited spatial resolution when size and shape descriptors are analyzed. Clarifying the potential of 2-D video disdrometers in deriving size, velocity and shape parameters from single recorded pictures is the aim of this work. The need of implementing a matching algorithm suitable for mixed- and solid-phase precipitation is highlighted as an essential step in data evaluation. For this purpose simple reproducible experiments with solid steel spheres and irregularly shaped Styrofoam particles are conducted. Self-consistency of shape parameter measurements is tested in 38 cases of real snowfall. As a result, it was found that reliable size and shape characterization with a relative standard deviation of less than 5 % is only possible for particles larger than 1 mm. For particles between 0.5 and 1.0 mm the relative standard deviation can grow up to 22 % for the volume, 17 % for size parameters and 14 % for shape descriptors. Testing the adapted matching algorithm with a reproducible experiment with Styrofoam particles, a mismatch probability of less than 3 % was found. For shape parameter measurements in case of real solid-phase precipitation, the 2-DVD shows self-consistent behavior.
van Dijk, Jan
2013-01-01
Eradicating disease from livestock populations involves the balancing act of removing sufficient numbers of diseased animals without removing too many healthy individuals in the process. As ever more tests for bovine tuberculosis (BTB) are carried out on the UK cattle herd, and each positive herd test triggers more testing, the question arises whether ‘false positive’ results contribute significantly to the measured BTB prevalence. Here, this question is explored using simple probabilistic models of test behaviour. When the screening test is applied to the average UK herd, the estimated proportion of test-associated false positive new outbreaks is highly sensitive to small fluctuations in screening test specificity. Estimations of this parameter should be updated as a priority. Once outbreaks have been confirmed in screening-test positive herds, the following rounds of intensive testing with more sensitive, albeit less specific, tests are highly likely to remove large numbers of false positive animals from herds. Despite this, it is unlikely that significantly more truly infected animals are removed. BTB test protocols should become based on quantified risk in order to prevent the needless slaughter of large numbers of healthy animals. PMID:23717517
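The kind of simple probabilistic model described can be sketched with Bayes' rule (the prevalence, sensitivity, and specificity values in the test below are illustrative assumptions, not the paper's estimates):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a test-positive animal is truly infected."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)
```

At low prevalence, a small drop in screening-test specificity sharply increases the share of false positives among reactors, which is exactly the sensitivity the abstract highlights.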
Stochastic 2-D Navier-Stokes Equation
Menaldi, J.L.; Sritharan, S.S.
2002-10-01
In this paper we prove the existence and uniqueness of strong solutions for the stochastic Navier-Stokes equation in bounded and unbounded domains. These solutions are stochastic analogs of the classical Lions-Prodi solutions to the deterministic Navier-Stokes equation. Local monotonicity of the nonlinearity is exploited to obtain the solutions in a given probability space and this significantly improves the earlier techniques for obtaining strong solutions, which depended on pathwise solutions to the Navier-Stokes martingale problem where the probability space is also obtained as a part of the solution.
Model dielectric function for 2D semiconductors including substrate screening
Trolle, Mads L.; Pedersen, Thomas G.; Véniard, Valerie
2017-01-01
Dielectric screening of excitons in 2D semiconductors is known to be a highly non-local effect, which in reciprocal space translates to a strong dependence on momentum transfer q. We present an analytical model dielectric function, including the full nonlinear q-dependence, which may be used as an alternative to more numerically taxing ab initio screening functions. By verifying the good agreement between excitonic optical properties calculated using our model dielectric function and those derived from ab initio methods, we demonstrate the versatility of this approach. Our test systems include monolayer hBN, monolayer MoS2, and the surface exciton of a 2 × 1 reconstructed Si(111) surface. Additionally, using our model, we easily take substrate screening effects into account. Hence, we also include a systematic study of the effects of substrate media on the excitonic optical properties of MoS2 and hBN. PMID:28117326
Silicene: silicon conquers the 2D world
NASA Astrophysics Data System (ADS)
Le Lay, Guy; Salomon, Eric; Angot, Thierry
2016-01-01
We live in the digital age based on the silicon chip and driven by Moore's law. Last July, IBM created a surprise by announcing the fabrication of a 7 nm test chip with functional transistors using, instead of just silicon, a silicon-germanium alloy. Will silicon be dethroned?
Sachse, C.; Brockmoeller, J.; Bauer, S.; Roots, I.
1997-02-01
Cytochrome P450 2D6 (CYP2D6) metabolizes many important drugs. CYP2D6 activity ranges from complete deficiency to ultrafast metabolism, depending on at least 16 different known alleles. Their frequencies were determined in 589 unrelated German volunteers and correlated with enzyme activity measured by phenotyping with dextromethorphan or debrisoquine. For genotyping, nested PCR-RFLP tests from a PCR amplificate of the entire CYP2D6 gene were developed. The frequency of the CYP2D6*1 allele coding for extensive metabolizer (EM) phenotype was .364. The alleles coding for slightly (CYP2D6*2) or moderately (*9 and *10) reduced activity (intermediate metabolizer phenotype [IM]) showed frequencies of .324, .018, and .015, respectively. By use of novel PCR tests for discrimination, CYP2D6 gene duplication alleles were found with frequencies of .005 (*1 x 2), .013 (*2 x 2), and .001 (*4 x 2). Frequencies of alleles with complete deficiency (poor metabolizer phenotype [PM]) were .207 (*4), .020 (*3 and *5), .009 (*6), and .001 (*7, *15, and *16). The defective CYP2D6 alleles *8, *11, *12, *13, and *14 were not found. All 41 PMs (7.0%) in this sample were explained by five mutations detected by four PCR-RFLP tests, which may suffice, together with the gene duplication test, for clinical prediction of CYP2D6 capacity. Three novel variants of known CYP2D6 alleles were discovered: *1C (T1957C), *2B (additional C2558T), and *4E (additional C2938T). Analysis of variance showed significant differences in enzymatic activity measured by the dextromethorphan metabolic ratio (MR) between carriers of EM/PM (mean MR = .006) and IM/PM (mean MR = .014) alleles and between carriers of one (mean MR = .009) and two (mean MR = .003) functional alleles. The results of this study provide a solid basis for prediction of CYP2D6 capacity, as required in drug research and routine drug treatment. 35 refs., 4 figs., 5 tabs.
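The genotype-to-phenotype mapping implied by the allele table can be sketched as a lookup (a deliberate simplification for illustration; the allele categories follow the abstract, but the rule for mixed genotypes is an assumption, not the study's classification):

```python
# Allele activity classes as listed in the abstract.
FUNCTIONAL = {"*1"}                                 # normal activity
REDUCED = {"*2", "*9", "*10"}                       # slightly/moderately reduced
DEFECTIVE = {"*3", "*4", "*5", "*6", "*7", "*15", "*16"}   # no activity

def predict_phenotype(allele1, allele2):
    """Crude phenotype call from two CYP2D6 alleles: two defective
    alleles -> poor metabolizer (PM); any reduced-activity or a single
    defective allele -> intermediate (IM); otherwise extensive (EM)."""
    alleles = (allele1, allele2)
    if all(a in DEFECTIVE for a in alleles):
        return "PM"
    if any(a in DEFECTIVE or a in REDUCED for a in alleles):
        return "IM"
    return "EM"
```

A real predictor would also account for the gene duplication alleles (ultrafast metabolism), which this sketch omits.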
"Gold standard" data for evaluation and comparison of 3D/2D registration methods.
Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo
2004-01-01
Evaluation and comparison of registration techniques for image-guided surgery is an important problem that has received little attention in the literature. In this paper we address the challenging problem of generating reliable "gold standard" data for use in evaluating the accuracy of 3D/2D registrations. We have devised a cadaveric lumbar spine phantom with fiducial markers and established highly accurate correspondences between 3D CT and MR images and 18 2D X-ray images. The expected target registration errors for target points on the pedicles are less than 0.26 mm for CT-to-X-ray registration and less than 0.42 mm for MR-to-X-ray registration. As such, the "gold standard" data, which has been made publicly available on the Internet (http://lit.fe.uni-lj.si/Downloads/downloads.asp), is useful for evaluation and comparison of 3D/2D image registration methods.
ERIC Educational Resources Information Center
Educational Testing Service, Princeton, NJ.
The conference focused upon the users of tests in counseling and guidance. The first session centered on multi-factor ability test batteries, with papers on Use of Multi-Factor Aptitude Tests in School Counseling, by Robert D. North; Use of the General Aptitude Test Battery in the Employment Service, by Pauline K. Anderson; Service Tests of…
Residual lens effects in 2D mode of auto-stereoscopic lenticular-based switchable 2D/3D displays
NASA Astrophysics Data System (ADS)
Sluijter, M.; IJzerman, W. L.; de Boer, D. K. G.; de Zwart, S. T.
2006-04-01
We discuss residual lens effects in multi-view switchable auto-stereoscopic lenticular-based 2D/3D displays. With the introduction of a switchable lenticular, it is possible to switch between a 2D mode and a 3D mode. The 2D mode displays conventional content, whereas the 3D mode provides the sensation of depth to the viewer. The uniformity of a display in the 2D mode is quantified by the quality parameter modulation depth. In order to reduce the modulation depth in the 2D mode, birefringent lens plates are investigated analytically and numerically, by ray tracing. We can conclude that the modulation depth in the 2D mode can be substantially decreased by using birefringent lens plates with a perfect index match between lens material and lens plate. Birefringent lens plates do not disturb the 3D performance of a switchable 2D/3D display.
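The modulation-depth quality parameter used above is commonly computed as the Michelson contrast of a luminance profile across the display; a minimal sketch, assuming that common definition (the paper may use a different normalization):

```python
def modulation_depth(luminance):
    """Modulation depth (Michelson contrast) of a sampled luminance profile.

    A perfectly uniform 2D mode gives 0; residual lens effects raise it.
    """
    i_max, i_min = max(luminance), min(luminance)
    return (i_max - i_min) / (i_max + i_min)

# hypothetical luminance samples across a few lenticular pitches
profile = [0.9, 1.0, 0.95, 0.8, 0.85, 1.0]
print(modulation_depth(profile))
```

With an index-matched birefringent lens plate the profile flattens and the computed value approaches zero.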
Lagrangian statistics in laboratory 2D turbulence
NASA Astrophysics Data System (ADS)
Xia, Hua; Francois, Nicolas; Punzmann, Horst; Shats, Michael
2014-05-01
Turbulent mixing in liquids and gases is ubiquitous in nature and industrial flows. Understanding statistical properties of Lagrangian trajectories in turbulence is crucial for a range of problems such as the spreading of plankton in the ocean, transport of pollutants, etc. Oceanic data on trajectories of free-drifting instruments indicate that the trajectory statistics can often be described by a Lagrangian integral scale. Turbulence, however, is a flow state dominated by a hierarchy of scales, and it is not clear which of these scales most affect particle dispersion. Moreover, coherent structures often coexist with turbulence in laboratory experiments [1]. The effect of coherent structures on particle dispersion in turbulent flows is not well understood. Recent progress in scientific imaging and computational power has made it possible to tackle this problem experimentally. In this talk, we report the analysis of higher-order Lagrangian statistics in laboratory two-dimensional turbulence. Our results show that fluid particle dispersion is diffusive and is determined by a single measurable Lagrangian scale related to the forcing scale [2]. Higher-order moments of the particle dispersion show strong self-similarity in fully developed turbulence [3]. Here we introduce a new dispersion law that describes single-particle dispersion during turbulence development [4]. These results offer a new way of predicting dispersion in turbulent flows in which one of the low-energy scales is persistent. They may lead to a better understanding of drifter Lagrangian statistics in regions of the ocean where small-scale coherent eddies are present [5]. References: 1. H. Xia, H. Punzmann, G. Falkovich and M. Shats, Physical Review Letters, 101, 194504 (2008). 2. H. Xia, N. Francois, H. Punzmann, and M. Shats, Nature Communications, 4, 2013 (2013). 3. R. Ferrari, A.J. Manfroi, W.R. Young, Physica D 154, 111 (2001). 4. H. Xia, N. Francois, H. Punzmann and M. Shats, submitted (2014).
Mechanical properties of 2D and 3D braided textile composites
NASA Technical Reports Server (NTRS)
Norman, Timothy L.
1991-01-01
The purpose of this research was to determine the mechanical properties of 2D and 3D braided textile composite materials. Specifically, those designed for tension or shear loading were tested under static loading to failure to investigate the effects of braiding. The overall goal of the work was to provide a structural designer with an idea of how textile composites perform under typical loading conditions. From test results for unnotched tension, it was determined that the 2D is stronger, stiffer, and has higher elongation to failure than the 3D. It was also found that the polyetherether ketone (PEEK) resin system was stronger, stiffer, and had higher elongation at failure than the resin transfer molding (RTM) epoxy. Open hole tension tests showed that PEEK resin is more notch sensitive than RTM epoxy. Of greater significance, it was found that the 3D is less notch sensitive than the 2D. Unnotched compression tests indicated, as did the tension tests, that the 2D is stronger, stiffer, and has higher elongation at failure than the 3D. The most encouraging results were from compression after impact. The 3D braided composite showed a compression after impact failure stress equal to 92 percent of the unimpacted specimen. The 2D braided composite failed at about 67 percent of the unimpacted specimen. Higher damage tolerance is observed in textiles than in conventional composite materials, as the results show, especially for the 3D braided materials.
Differential CYP 2D6 metabolism alters primaquine pharmacokinetics.
Potter, Brittney M J; Xie, Lisa H; Vuong, Chau; Zhang, Jing; Zhang, Ping; Duan, Dehui; Luong, Thu-Lan T; Bandara Herath, H M T; Dhammika Nanayakkara, N P; Tekwani, Babu L; Walker, Larry A; Nolan, Christina K; Sciotti, Richard J; Zottig, Victor E; Smith, Philip L; Paris, Robert M; Read, Lisa T; Li, Qigui; Pybus, Brandon S; Sousa, Jason C; Reichard, Gregory A; Marcsisin, Sean R
2015-04-01
Primaquine (PQ) metabolism by the cytochrome P450 (CYP) 2D family of enzymes is required for antimalarial activity in both humans (2D6) and mice (2D). Human CYP 2D6 is highly polymorphic, and decreased CYP 2D6 enzyme activity has been linked to decreased PQ antimalarial activity. Despite the importance of CYP 2D metabolism in PQ efficacy, the exact role that these enzymes play in PQ metabolism and pharmacokinetics has not been extensively studied in vivo. In this study, a series of PQ pharmacokinetic experiments were conducted in mice with differential CYP 2D metabolism characteristics, including wild-type (WT), CYP 2D knockout (KO), and humanized CYP 2D6 (KO/knock-in [KO/KI]) mice. Plasma and liver pharmacokinetic profiles from a single PQ dose (20 mg/kg of body weight) differed significantly among the strains for PQ and carboxy-PQ. Additionally, due to the suspected role of phenolic metabolites in PQ efficacy, these were probed using reference standards. Levels of phenolic metabolites were highest in mice capable of metabolizing CYP 2D6 substrates (WT and KO/KI 2D6 mice). PQ phenolic metabolites were present in different quantities in the two strains, illustrating species-specific differences in PQ metabolism between the human and mouse enzymes. Taken together, these data further the understanding of PQ pharmacokinetics in the context of differential CYP 2D metabolism and have important implications for PQ administration in humans with different levels of CYP 2D6 enzyme activity.
Mechanical characterization of 2D, 2D stitched, and 3D braided/RTM materials
NASA Technical Reports Server (NTRS)
Deaton, Jerry W.; Kullerd, Susan M.; Portanova, Marc A.
1993-01-01
Braided composite materials have potential for application in aircraft structures. Fuselage frames, floor beams, wing spars, and stiffeners are examples where braided composites could find application if cost effective processing and damage tolerance requirements are met. Another important consideration for braided composites relates to their mechanical properties and how they compare to the properties of composites produced by other textile composite processes being proposed for these applications. Unfortunately, mechanical property data for braided composites do not appear extensively in the literature. Data are presented in this paper on the mechanical characterization of 2D triaxial braid, 2D triaxial braid plus stitching, and 3D (through-the-thickness) braid composite materials. The braided preforms all had the same graphite tow size and the same nominal braid architectures, (+/- 30 deg/0 deg), and were resin transfer molded (RTM) using the same mold for each of two different resin systems. Static data are presented for notched and unnotched tension, notched and unnotched compression, and compression after impact strengths at room temperature. In addition, some static results, after environmental conditioning, are included. Baseline tension and compression fatigue results are also presented, but only for the 3D braided composite material with one of the resin systems.
Hasegawa, Akira; Nishimura, Haruki; Mastuda, Yuko; Kunisato, Yoshihiko; Morimoto, Hiroshi; Adachi, Masaki
This study examined the relationship between trait rumination and the effectiveness of problem solving strategies as assessed by the Means-Ends Problem-Solving Test (MEPS) in a nonclinical population. The present study extended previous studies by using two instructions in the MEPS: the second-person, actual-strategy instructions, which have been utilized in previous studies on rumination, and the third-person, ideal-strategy instructions, which are considered more suitable for assessing the effectiveness of problem solving strategies. We also replicated the association between rumination and each dimension of the Social Problem-Solving Inventory-Revised Short Version (SPSI-R:S). Japanese undergraduate students (N = 223) completed the Beck Depression Inventory-Second Edition, Ruminative Responses Scale (RRS), MEPS, and SPSI-R:S. One half of the sample completed the MEPS with the second-person, actual-strategy instructions; the other participants completed the MEPS with the third-person, ideal-strategy instructions. The results showed that neither the total RRS score nor its subscale scores were significantly correlated with MEPS scores under either of the two instructions. These findings, taken together with previous findings, indicate that in nonclinical populations, trait rumination is not related to the effectiveness of problem solving strategies, but that state rumination while responding to the MEPS deteriorates the quality of strategies. The correlations between RRS and SPSI-R:S scores indicated that trait rumination in general, and its brooding subcomponent in particular, are part of cognitive and behavioral responses that attempt to avoid negative environmental and negative private events. Results also showed that reflection is a part of active problem solving.
Distribution of CYP2D6 alleles and phenotypes in the Brazilian population.
Friedrich, Deise C; Genro, Júlia P; Sortica, Vinicius A; Suarez-Kurtz, Guilherme; de Moraes, Maria Elizabete; Pena, Sergio D J; dos Santos, Andrea K Ribeiro; Romano-Silva, Marco A; Hutz, Mara H
2014-01-01
The CYP2D6 enzyme is one of the most important members of the cytochrome P450 superfamily. This enzyme metabolizes approximately 25% of currently prescribed medications. The CYP2D6 gene presents a high allele heterogeneity that determines great inter-individual variation. The aim of this study was to evaluate the variability of CYP2D6 alleles, genotypes and predicted phenotypes in Brazilians. Eleven single nucleotide polymorphisms and CYP2D6 duplications/multiplications were genotyped by TaqMan assays in 1020 individuals from North, Northeast, South, and Southeast Brazil. Eighteen CYP2D6 alleles were identified in the Brazilian population. The CYP2D6*1 and CYP2D6*2 alleles were the most frequent and widely distributed in different geographical regions of Brazil. The highest number of CYP2D6 alleles observed was six, and the frequency of individuals with more than two copies ranged from 6.3% (in Southern Brazil) to 10.2% (Northern Brazil). The analysis of molecular variance showed that CYP2D6 is homogeneously distributed across different Brazilian regions and most of the differences can be attributed to inter-individual differences. The most frequent predicted metabolic status was EM (83.5%). Overall, 2.5% and 3.7% of Brazilians were PMs and UMs, respectively. Genomic ancestry proportions differ only in the prevalence of intermediate metabolizers. The IM predicted phenotype is associated with a higher proportion of African ancestry and a lower proportion of European ancestry in Brazilians. PM and UM classes did not vary among regions and/or ancestry proportions; therefore, unified CYP2D6 testing guidelines for Brazilians are possible and could potentially help avoid ineffective prescriptions or adverse drug events.
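Genotype-to-phenotype prediction of the kind reported above is often done with an activity-score scheme: each allele is assigned an activity value, the values are summed over all gene copies, and the total is binned into PM/IM/EM/UM classes. A minimal sketch; the activity values and cutoffs below are illustrative placeholders (CPIC-style, not the scoring used in this study):

```python
# Illustrative (hypothetical) per-allele activity values; real tables
# come from curated pharmacogenomic guidelines, not from this sketch.
ACTIVITY = {"*1": 1.0, "*2": 1.0, "*4": 0.0, "*5": 0.0, "*10": 0.25, "*41": 0.5}

def predicted_phenotype(alleles):
    """Classify a CYP2D6 diplotype; duplicated alleles appear multiple times."""
    score = sum(ACTIVITY[a] for a in alleles)
    if score == 0:
        return "PM"   # poor metabolizer
    if score < 1.0:
        return "IM"   # intermediate metabolizer
    if score <= 2.0:
        return "EM"   # extensive (normal) metabolizer
    return "UM"       # ultrarapid metabolizer

print(predicted_phenotype(["*4", "*4"]))        # PM
print(predicted_phenotype(["*1", "*1", "*1"]))  # UM (gene duplication)
```

The duplication/multiplication genotyping described in the abstract feeds such a scheme by determining how many times a functional allele should be counted.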
Presynaptic GluN2D receptors detect glutamate spillover and regulate cerebellar GABA release
Dubois, Christophe J.; Lachamp, Philippe M.; Sun, Lu; Mishina, Masayoshi
2015-01-01
Glutamate directly activates N-methyl-d-aspartate (NMDA) receptors on presynaptic inhibitory interneurons and enhances GABA release, altering the excitatory-inhibitory balance within a neuronal circuit. However, which class of NMDA receptors is involved in the detection of glutamate spillover is not known. GluN2D subunit-containing NMDA receptors are ideal candidates as they exhibit a high affinity for glutamate. We now show that cerebellar stellate cells express both GluN2B and GluN2D NMDA receptor subunits. Genetic deletion of GluN2D subunits prevented a physiologically relevant, stimulation-induced, lasting increase in GABA release from stellate cells [long-term potentiation of inhibitory transmission (I-LTP)]. NMDA receptors are tetramers composed of two GluN1 subunits associated to either two identical subunits (di-heteromeric receptors) or to two different subunits (tri-heteromeric receptors). To determine whether tri-heteromeric GluN2B/2D NMDA receptors mediate I-LTP, we tested the prediction that deletion of GluN2D converts tri-heteromeric GluN2B/2D to di-heteromeric GluN2B NMDA receptors. We find that prolonged stimulation rescued I-LTP in GluN2D knockout mice, and this was abolished by GluN2B receptor blockers that failed to prevent I-LTP in wild-type mice. Therefore, NMDA receptors that contain both GluN2D and GluN2B mediate the induction of I-LTP. Because these receptors are not present in the soma and dendrites, presynaptic tri-heteromeric GluN2B/2D NMDA receptors in inhibitory interneurons are likely to mediate the cross talk between excitatory and inhibitory transmission. PMID:26510761
Gil, Bomi; Hwang, Eo-Jin; Lee, Song; Jang, Jinhee; Jung, So-Lyung; Ahn, Kook-Jin; Kim, Bum-soo
2016-01-01
Introduction: To compare the diagnostic accuracy of contrast-enhanced 3D T1-weighted sampling perfection with application-optimized contrasts using different flip angle evolutions (T1-SPACE), 2D fluid-attenuated inversion recovery (FLAIR) images, and 2D contrast-enhanced T1-weighted images in the detection of leptomeningeal metastasis, without invasive procedures such as CSF tapping. Materials and Methods: Three groups of patients were included retrospectively over 9 months (2013-04-01 to 2013-12-31): group 1, patients with malignant cells on CSF cytology (n = 22); group 2, stroke patients with steno-occlusion of the ICA or MCA (n = 16); and group 3, patients with negative MRI results whose symptoms were dizziness or headache (n = 25). A total of 63 sets of MR images were separately collected and randomly arranged: (1) CE 3D T1-SPACE; (2) 2D FLAIR; and (3) CE T1-GRE, all acquired on a 3-Tesla MR system. A faculty neuroradiologist with 8 years of experience and a second-year trainee in radiology reviewed each MR image set, blinded to the results of CSF cytology, and coded their observations as positive or negative for leptomeningeal metastasis. The CSF cytology result was considered the gold standard. Sensitivity and specificity of each MR image set were calculated. Diagnostic accuracy was compared using a McNemar's test. A Cohen's kappa analysis was performed to assess inter-observer agreement. Results: Diagnostic accuracy did not differ between 3D T1-SPACE and CSF cytology for either rater. However, the accuracy of 2D FLAIR and 2D contrast-enhanced T1-weighted GRE was rated inconsistently by the two raters. The kappa statistics were 0.657 (3D T1-SPACE), 0.420 (2D FLAIR), and 0.160 (2D contrast-enhanced T1-weighted GRE); the 3D T1-SPACE images showed the highest inter-observer agreement between the raters. Conclusions: Compared to 2D FLAIR and 2D contrast-enhanced T1-weighted GRE, contrast-enhanced 3D T1-SPACE showed a better detection rate of
A 2D spring model for the simulation of ultrasonic wave propagation in nonlinear hysteretic media.
Delsanto, P P; Gliozzi, A S; Hirsekorn, M; Nobili, M
2006-07-01
A two-dimensional (2D) approach to the simulation of ultrasonic wave propagation in nonclassical nonlinear (NCNL) media is presented. The approach represents the extension to 2D of a previously proposed one-dimensional (1D) Spring Model, with the inclusion of a PM space treatment of the interstitial regions between grains. The extension to 2D is of great practical relevance for its potential applications in the field of quantitative nondestructive evaluation and material characterization, but it is also useful, from a theoretical point of view, to gain better insight into the interaction mechanisms involved. The model is tested by means of virtual 2D experiments. The expected NCNL behaviors are qualitatively well reproduced.
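The linear skeleton of such a Spring Model is a chain of masses coupled by springs, advanced with an explicit (semi-implicit Euler) time stepper; a toy 1D sketch only, with none of the nonclassical nonlinear (PM space) physics the paper actually adds:

```python
import numpy as np

def simulate_chain(n=200, steps=400, dt=0.05, k=1.0, m=1.0):
    """Semi-implicit Euler time stepping of a 1-D linear spring-mass chain.

    Free ends; an initial displacement pulse at the left end launches a wave.
    """
    u = np.zeros(n)   # displacements
    v = np.zeros(n)   # velocities
    u[0] = 1.0        # initial pulse
    for _ in range(steps):
        f = np.zeros(n)
        f[1:-1] = k * (u[2:] - 2 * u[1:-1] + u[:-2])  # internal spring forces
        f[0] = k * (u[1] - u[0])
        f[-1] = k * (u[-2] - u[-1])
        v += dt * f / m   # update velocities first (symplectic)
        u += dt * v       # then positions
    return u, v

u, v = simulate_chain()
# total energy = kinetic + potential stored in the springs
E = 0.5 * (v ** 2).sum() + 0.5 * (np.diff(u) ** 2).sum()
print(E)
```

The symplectic update keeps the total energy bounded near its initial value (0.5 here), which is why such explicit spring models are convenient for long wave-propagation runs.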
Correspondenceless 3D-2D registration based on expectation conditional maximization
NASA Astrophysics Data System (ADS)
Kang, X.; Taylor, R. H.; Armand, M.; Otake, Y.; Yau, W. P.; Cheung, P. Y. S.; Hu, Y.
2011-03-01
3D-2D registration is a fundamental task in image guided interventions. Due to the physics of X-ray imaging, however, traditional point-based methods meet new challenges: the local point features are indistinguishable, creating difficulties in establishing correspondence between 2D image feature points and 3D model points. In this paper, we propose a novel method to accomplish 3D-2D registration without known correspondences. Given a set of 3D and 2D unmatched points, this is achieved by introducing correspondence probabilities that we model as a mixture model. By casting it into the expectation conditional maximization framework, we can iteratively refine the registration parameters without establishing one-to-one point correspondences. The method has been tested on 100 real X-ray images. The experiments showed that the proposed method accurately estimated the rotations (< 1°) and in-plane (X-Y plane) translations (< 1 mm).
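The correspondence-probability idea can be illustrated in a simplified 2D-2D rigid setting (the paper's problem is 3D-2D with projection, which this sketch does not model): an EM-style loop alternates soft assignments from a Gaussian mixture with a weighted Procrustes update. The fixed mixture variance `sigma2` is an assumed simplification:

```python
import numpy as np

def soft_rigid_register(X, Y, iters=50, sigma2=0.01):
    """EM-style 2D rigid registration without known point correspondences.

    X: (n, 2) model points; Y: (m, 2) observed points in arbitrary order.
    Returns the estimated rotation angle and translation.
    """
    theta, t = 0.0, np.zeros(2)
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        TX = X @ np.array([[c, -s], [s, c]]).T + t
        # E-step: correspondence probabilities (row-shifted to avoid underflow)
        d2 = ((TX[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        P = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / (2 * sigma2))
        P /= P.sum(axis=1, keepdims=True)
        # M-step: Procrustes fit of X onto the expected (soft) matches
        Yhat = P @ Y
        mx, my = X.mean(0), Yhat.mean(0)
        Xc, Yc = X - mx, Yhat - my
        theta = np.arctan2((Xc[:, 0] * Yc[:, 1] - Xc[:, 1] * Yc[:, 0]).sum(),
                           (Xc * Yc).sum())
        c, s = np.cos(theta), np.sin(theta)
        t = my - np.array([[c, -s], [s, c]]) @ mx
    return theta, t

# synthetic check: rotate, translate, and shuffle a small point set
theta_true, t_true = 0.1, np.array([0.05, -0.02])
Rt = np.array([[np.cos(theta_true), -np.sin(theta_true)],
               [np.sin(theta_true),  np.cos(theta_true)]])
X = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0],
              [0.0, 0.5], [0.5, 0.5], [1.0, 0.5]])
Y = (X @ Rt.T + t_true)[[3, 0, 5, 1, 4, 2]]  # correspondences unknown
th, t_est = soft_rigid_register(X, Y)
print(th, t_est)
```

With well-separated points and a small true motion the soft assignments concentrate on the correct matches and the recovered angle and translation approach the true values.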
Rhee, Soo Hyun; Silvern, Louise E.; Haberstick, Brett C.; Hopfer, Christian; Lessem, Jeffrey M.; Hewitt, John K.
2011-01-01
It is often assumed that childhood maltreatment causes conduct problems via an environmentally mediated process. However, the association may be due alternatively to either a nonpassive gene-environment correlation, in which parents react to children’s genetically-influenced conduct problems by maltreating them, or a passive gene-environment correlation, in which parents’ tendency to engage in maltreatment and children’s conduct problems are both influenced by a hereditary vulnerability to antisocial behavior (i.e. genetic mediation). The present study estimated the contribution of these processes to the association between maltreatment and conduct problems. Bivariate behavior genetic analyses were conducted on approximately 1,650 twin and sibling pairs drawn from a large longitudinal study of adolescent health (Add Health). The correlation between maltreatment and conduct problems was small; much of the association between maltreatment and conduct problems was due to a nonpassive gene-environment correlation. Results were more consistent with the hypothesis that parents respond to children’s genetically-influenced conduct problems by maltreating them than the hypothesis that maltreatment causes conduct problems. PMID:20024671
NASA Astrophysics Data System (ADS)
Benjamini, Dan; Basser, Peter J.
2016-10-01
Measuring multidimensional (e.g., 2D) relaxation spectra in NMR and MRI clinical applications is a holy grail of the porous media and biomedical MR communities. The main bottleneck is the inversion of Fredholm integrals of the first kind, an ill-conditioned problem requiring large amounts of data to stabilize a solution. We suggest a novel experimental design and processing framework to accelerate and improve the reconstruction of such 2D spectra that uses a priori information from the 1D projections of spectra, or marginal distributions. These 1D marginal distributions provide powerful constraints when 2D spectra are reconstructed, and their estimation requires an order of magnitude less data than a conventional 2D approach. This marginal distributions constrained optimization (MADCO) methodology is demonstrated here with a polyvinylpyrrolidone-water phantom that has 3 distinct peaks in the 2D D-T1 space. The stability, sensitivity to experimental parameters, and accuracy of this new approach are compared with conventional methods by serially subsampling the full data set. While the conventional, unconstrained approach performed poorly, the new method proved highly accurate and robust, requiring only a fraction of the data. Additionally, synthetic T1-T2 data are presented to explore the effects of noise on the estimations and the performance of the proposed method with a smooth and realistic 2D spectrum. The proposed framework is quite general and can also be used with a variety of 2D MRI experiments (D-T2, T1-T2, D-D, etc.), making these potentially feasible for preclinical and even clinical applications for the first time.
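The ill-conditioned Fredholm inversion the abstract refers to can be seen in a 1D toy version: recovering a relaxation spectrum from multi-exponential decay data by regularized (Tikhonov) least squares. A sketch only; the grids, kernel, and regularization weight are illustrative, and MADCO's marginal-distribution constraints are not implemented here:

```python
import numpy as np

def tikhonov_invert(K, d, lam):
    """Regularized least-squares solution of the discretized Fredholm system K f = d."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ d)

t = np.linspace(0.01, 3.0, 60)           # acquisition times
T2 = np.linspace(0.05, 2.0, 40)          # relaxation-time grid
K = np.exp(-t[:, None] / T2[None, :])    # exponential decay kernel
f_true = np.exp(-0.5 * ((T2 - 0.8) / 0.05) ** 2)  # single peak at T2 = 0.8
d = K @ f_true                           # noiseless synthetic decay data
f_est = tikhonov_invert(K, d, lam=1e-5)
print(T2[np.argmax(f_est)])              # reconstructed peak location
```

Even with noiseless data the recovered spectrum is a broadened version of the true one, because the smooth exponential kernel suppresses the high-order modes needed to resolve a narrow peak; this is the instability that additional constraints (such as 1D marginals in 2D problems) help tame.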
Computational Screening of 2D Materials for Photocatalysis.
Singh, Arunima K; Mathew, Kiran; Zhuang, Houlong L; Hennig, Richard G
2015-03-19
Two-dimensional (2D) materials exhibit a range of extraordinary electronic, optical, and mechanical properties different from their bulk counterparts with potential applications for 2D materials emerging in energy storage and conversion technologies. In this Perspective, we summarize the recent developments in the field of solar water splitting using 2D materials and review a computational screening approach to rapidly and efficiently discover more 2D materials that possess properties suitable for solar water splitting. Computational tools based on density-functional theory can predict the intrinsic properties of potential photocatalyst such as their electronic properties, optical absorbance, and solubility in aqueous solutions. Computational tools enable the exploration of possible routes to enhance the photocatalytic activity of 2D materials by use of mechanical strain, bias potential, doping, and pH. We discuss future research directions and needed method developments for the computational design and optimization of 2D materials for photocatalysis.
ERIC Educational Resources Information Center
Leahy, Wayne; Hanham, José; Sweller, John
2015-01-01
The testing effect occurs when learners who are tested rather than relearning material perform better on a final test than those who relearn. Based on cognitive load theory, it was predicted that the testing effect may not be obtained when the material being learned is high in element interactivity. Three experiments investigated conditions of the…
Solutions for Some Technical Problems in Domain-Referenced Mastery Testing. Final Report.
ERIC Educational Resources Information Center
Huynh, Huynh; Saunders, Joseph C.
A basic technical framework is provided for the design and use of mastery tests. The Mastery Testing Project (MTP) prepared this framework using advanced mathematics supplemented with computer simulation based on real test data collected by the South Carolina Statewide Testing Program. The MTP focused on basic technical issues encountered in using…
Synthetic Covalent and Non-Covalent 2D Materials.
Boott, Charlotte E; Nazemi, Ali; Manners, Ian
2015-11-16
The creation of synthetic 2D materials represents an attractive challenge that is ultimately driven by their prospective uses in, for example, electronics, biomedicine, catalysis, sensing, and as membranes for separation and filtration. This Review illustrates some recent advances in this diverse field with a focus on covalent and non-covalent 2D polymers and frameworks, and self-assembled 2D materials derived from nanoparticles, homopolymers, and block copolymers.
Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy
NASA Astrophysics Data System (ADS)
Naaz, Farah
Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups:
Hoffman, E.L.; Ammerman, D.J.
1995-04-01
A series of tests investigating dynamic pulse buckling of a cylindrical shell under axial impact is compared to several 2D and 3D finite element simulations of the event. The purpose of the work is to investigate the performance of various analysis codes and element types on a problem which is applicable to radioactive material transport packages, and ultimately to develop a benchmark problem to qualify finite element analysis codes for the transport package design industry. During the pulse buckling tests, a buckle formed at each end of the cylinder, and one of the two buckles became unstable and collapsed. Numerical simulations of the test were performed using PRONTO, a Sandia developed transient dynamics analysis code, and ABAQUS/Explicit with both shell and continuum elements. The calculations are compared to the tests with respect to deformed shape and impact load history.
Persistent radiographic cone cuts: a simple test to avoid the frustrating problem.
Shivanandappa, Santosh Gowdru; Mushannavar, Lata Shankarappa; Katti, Girish
2014-01-01
In routine dental radiography, one may encounter numerous radiographic errors, one of which is the partial image or cone cut. Although these cone-cut errors may appear to be simple problems, persistent cone cuts can be frustrating to the dental practitioner. In this study, our main aim was to find and solve the origin of the problem. The study was conducted with two different intraoral X-ray machines with 12-inch position-indicating devices (PIDs) and four No. 2 intraoral films. If the problem originates in the X-ray machine, it can be solved either by tapping the collimator or by repositioning the PID.
Epitaxial 2D SnSe2/ 2D WSe2 van der Waals Heterostructures.
Aretouli, Kleopatra Emmanouil; Tsoutsou, Dimitra; Tsipas, Polychronis; Marquez-Velasco, Jose; Aminalragia Giamini, Sigiava; Kelaidis, Nicolaos; Psycharis, Vassilis; Dimoulas, Athanasios
2016-09-07
van der Waals heterostructures of 2D semiconductor materials can be used to realize a number of (opto)electronic devices, including tunneling field effect transistors (TFETs). It is shown in this work that a high-quality SnSe2/WSe2 vdW heterostructure can be grown by molecular beam epitaxy on AlN(0001)/Si(111) substrates using a Bi2Se3 buffer layer. A valence band offset of 0.8 eV matches the energy gap of SnSe2 in such a way that the VB edge of WSe2 and the CB edge of SnSe2 are lined up, making this materials combination suitable for (nearly) broken-gap TFETs.
Assessment of 25 CYP2D6 alleles found in the Chinese population on propafenone metabolism in vitro.
Su, Ying; Liang, Bing-Qing; Feng, Yan-Lin; Zhan, Yunyun; Gu, Ermin; Chen, Xinxin; Dai, Da-Peng; Hu, Guo-Xin; Cai, Jian-Ping
2016-08-01
Cytochrome P450 enzyme 2D6 (CYP2D6) is an important member of the cytochrome P450 enzyme superfamily, and more than 100 CYP2D6 allelic variants have been reported previously. The aim of this study was to assess the catalytic characteristics of 25 alleles (CYP2D6.1 and 24 CYP2D6 variants) and their effects on the metabolism of propafenone in vitro. The 25 CYP2D6 alleles were expressed in Spodoptera frugiperda (Sf21) insect cells, and each variant was evaluated using propafenone as the substrate. Reactions were performed at 37 °C with 1-100 μmol/L propafenone for 30 min. After termination, the product 5-OH-propafenone was extracted and used for signal collection by ultra-performance liquid chromatography (UPLC). Compared with wild-type CYP2D6.1, the intrinsic clearance (Vmax/Km) values of all variants were significantly altered. Three variants (CYP2D6.87, CYP2D6.90, and CYP2D6.F219S) exhibited markedly increased intrinsic clearance values (129% to 165%), whereas 21 variants exhibited significantly decreased values (16% to 85%) due to increased Km and (or) decreased Vmax values. These results indicated that the majority of the tested alleles had significantly altered catalytic activity towards propafenone hydroxylation in this expression system. Attention should be paid to subjects carrying these rare alleles when they are treated with propafenone.
GetDDM: An open framework for testing optimized Schwarz methods for time-harmonic wave problems
NASA Astrophysics Data System (ADS)
Thierry, B.; Vion, A.; Tournier, S.; El Bouajaji, M.; Colignon, D.; Marsic, N.; Antoine, X.; Geuzaine, C.
2016-06-01
We present an open finite element framework, called GetDDM, for testing optimized Schwarz domain decomposition techniques for time-harmonic wave problems. After a review of Schwarz domain decomposition methods and associated transmission conditions, we discuss the implementation, based on the open source software GetDP and Gmsh. The solver, along with ready-to-use examples for Helmholtz and Maxwell's equations, is freely available online for further testing.
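The domain-decomposition loop that GetDDM organizes can be illustrated on the simplest possible case. The sketch below is a classical alternating Schwarz iteration with Dirichlet transmission conditions on a 1D Poisson problem; GetDDM implements *optimized* (Robin-type) transmission conditions for time-harmonic Helmholtz and Maxwell problems, which this toy example does not attempt. Grid sizes and subdomain bounds are illustrative.

```python
import numpy as np

# Toy alternating Schwarz iteration for -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# with two overlapping subdomains exchanging Dirichlet interface values.

def solve_poisson(n_unknowns, h, left_bc, right_bc):
    """Dense solve of -u'' = 1 on a subinterval with Dirichlet data."""
    A = (np.diag(2.0 * np.ones(n_unknowns))
         - np.diag(np.ones(n_unknowns - 1), 1)
         - np.diag(np.ones(n_unknowns - 1), -1)) / h**2
    b = np.ones(n_unknowns)
    b[0] += left_bc / h**2
    b[-1] += right_bc / h**2
    return np.linalg.solve(A, b)

n = 41                       # global grid points on [0, 1]
h = 1.0 / (n - 1)
u = np.zeros(n)
i1, i2 = 24, 16              # subdomain 1: nodes 0..24, subdomain 2: nodes 16..40

for _ in range(30):          # alternate solves, exchanging interface values
    u[1:i1] = solve_poisson(i1 - 1, h, u[0], u[i1])
    u[i2 + 1:n - 1] = solve_poisson(n - 2 - i2, h, u[i2], u[n - 1])

x = np.arange(n) * h
exact = 0.5 * x * (1.0 - x)                      # exact (and discrete) solution
print(float(np.max(np.abs(u - exact))) < 1e-6)   # → True
```

With overlap, the interface error contracts geometrically per sweep; optimized transmission conditions accelerate exactly this exchange for wave problems.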
FLAG Simulations of the Elasticity Test Problem of Gavrilyuk et al.
Kamm, James R.; Runnels, Scott R.; Canfield, Thomas R.; Carney, Theodore C.
2014-04-23
This report contains a description of the impact problem used to compare hypoelastic and hyperelastic material models, as described by Gavrilyuk, Favrie & Saurel. That description is used to set up hypoelastic simulations in the FLAG hydrocode.
49 CFR 40.205 - How are drug test problems corrected?
Code of Federal Regulations, 2013 CFR
2013-10-01
... corrected? (a) As a collector, you have the responsibility of trying to successfully complete a collection...), you must try to correct the problem promptly, if doing so is practicable. You may conduct...
2D nearly orthogonal mesh generation
NASA Astrophysics Data System (ADS)
Zhang, Yaoxin; Jia, Yafei; Wang, Sam S. Y.
2004-11-01
The Ryskin and Leal (RL) system is the most widely used mesh generation system for orthogonal mapping. However, when this system is used in domains with complex geometry, particularly in those with sharp corners and strong curvatures, serious distortion or overlapping of mesh lines may occur and an acceptable solution may not be possible. In the present study, two methods are proposed to generate nearly orthogonal meshes with smoothness control. In the first method, the original RL system is modified by introducing smoothness control functions, which are formulated through the blending of the conformal mapping and the orthogonal mapping; in the second method, the RL system is modified by introducing contribution factors. A hybrid system of both methods is also developed. The proposed methods are illustrated by several test examples. Applications of these methods in a natural river channel are demonstrated. It is shown that the modified RL systems are capable of producing meshes with an adequate balance between orthogonality and smoothness for complex computational domains, without mesh distortion or overlapping.
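For readers unfamiliar with elliptic mesh generation, the simplest relative of the RL system is plain Laplace smoothing, in which each interior node relaxes toward the average of its neighbors; the RL system and the modifications above add control functions for orthogonality and smoothness that this generic sketch omits. The grid size is illustrative.

```python
import numpy as np

# Laplace smoothing of interior mesh nodes by Jacobi relaxation.
# Boundary nodes stay fixed; interior nodes move to their neighbors' average.

def laplace_smooth(x, y, iterations=200):
    for _ in range(iterations):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y

n = 9
x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n),
                   indexing="ij")
x[4, 4] += 0.3                      # deliberately distort one interior node
x, y = laplace_smooth(x, y)
print(abs(x[4, 4] - 0.5) < 1e-3)    # → True: node relaxes back to uniform
```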
The use of 2D Hilbert transform for phase retrieval of speckle fields
NASA Astrophysics Data System (ADS)
Angelsky, O. V.; Zenkova, C. Yu.; Riabyi, P. A.
2016-12-01
The use of a "window" 2D Hilbert transform for reconstruction of the phase distribution of remote objects is proposed. It is shown that the advantage of this approach consists in the invariance of a phase map to a change of the position of the kernel of transformation and in a possibility to reconstruct the structure-forming elements of the skeleton of an optical field, including singular points and saddle points. We demonstrate the possibility to reconstruct the equi-phase lines within a narrow confidence interval, and introduce a new algorithm for solving the phase problem for random 2D intensity distributions.
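As background, a standard (non-windowed) FFT-based Hilbert transform applied along one axis of a 2D pattern already recovers the phase of a simple fringe field; the authors' "window" 2D Hilbert transform is a refinement suited to speckle that the generic construction below does not reproduce.

```python
import numpy as np

# Analytic signal along one axis via the one-sided-spectrum construction,
# then phase recovery by taking the complex argument.

def analytic_signal(field, axis=-1):
    n = field.shape[axis]
    F = np.fft.fft(field, axis=axis)
    h = np.zeros(n)                 # one-sided spectrum multiplier
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    shape = [1] * field.ndim
    shape[axis] = n
    return np.fft.ifft(F * h.reshape(shape), axis=axis)

# A cosine fringe with an integer number of periods: the recovered (wrapped)
# phase equals the true phase wherever the latter lies below pi.
x = np.linspace(0.0, 4.0 * np.pi, 256, endpoint=False)
pattern = np.tile(np.cos(x), (8, 1))             # 8 identical rows
phase = np.angle(analytic_signal(pattern))
print(bool(np.allclose(phase[0, :64], x[:64])))  # → True
```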
NASA Astrophysics Data System (ADS)
Magarill, L. I.; Entin, M. V.
2016-12-01
The electron absorption and the edge photocurrent of a 2D topological insulator are studied for transitions from edge states to 2D states. Circularly polarized light is found to produce an edge photocurrent whose direction is determined by the light polarization and the edge orientation. The edge-state current is found to exceed the 2D current owing to the topological protection of the edge states.
Test Problems for Reactive Flow HE Model in the ALE3D Code and Limited Sensitivity Study
Gerassimenko, M.
2000-03-01
We document quick-running test problems for a reactive flow model of HE initiation incorporated into ALE3D. A quarter-percent change in projectile velocity changes the outcome from detonation to an HE burn that dies down. We study the sensitivity of calculated HE behavior to several parameters of practical interest when modeling HE initiation with ALE3D.
ERIC Educational Resources Information Center
Hambrick, David Z.; Libarkin, Julie C.; Petcovic, Heather L.; Baker, Kathleen M.; Elkins, Joe; Callahan, Caitlin N.; Turner, Sheldon P.; Rench, Tara A.; LaDue, Nicole D.
2012-01-01
Sources of individual differences in scientific problem solving were investigated. Participants representing a wide range of experience in geology completed tests of visuospatial ability and geological knowledge, and performed a geological bedrock mapping task, in which they attempted to infer the geological structure of an area in the Tobacco…
ERIC Educational Resources Information Center
Masson, J. D.; Dagnan, D.; Evans, J.
2010-01-01
Background: There is a need for validated, standardised tools for the assessment of executive functions in adults with intellectual disabilities (ID). This study examines the validity of a test of planning and problem solving (Tower of London) with adults with ID. Method: Participants completed an adapted version of the Tower of London (ToL) while…
Effect of 22 CYP2D6 variants found in the Chinese population on tolterodine metabolism in vitro.
Wang, Hao; Dai, Da-Peng; Sun, Peng; Xu, Li-Ping; Liang, Bing-Qing; Cai, Jian-Ping; Hu, Guo-Xin
2017-02-25
Cytochrome P450 2D6 (CYP2D6) is an important member of the cytochrome P450 enzyme superfamily. We recently identified 22 novel variants in the Chinese population using PCR and bidirectional sequencing methods. The aim of this study is to characterize the enzymatic activity of these variants and their effects on the metabolism of the antimuscarinic drug tolterodine in vitro. A baculovirus-mediated expression system was used to express wild-type CYP2D6 and 24 variants (CYP2D6*2, CYP2D6*10, and 22 novel CYP2D6 variants) at high levels. The insect microsomes expressing CYP2D6 proteins were incubated with 0.1-50 μM tolterodine at 37 °C for 30 min, and the metabolites were analyzed by a high-performance liquid chromatography-tandem mass spectrometry system. Of the 24 CYP2D6 variants tested, 2 variants (CYP2D6*92 and CYP2D6*96) were found to be catalytically inactive, 4 variants (CYP2D6*94, F164L, F219S and D336N) exhibited markedly increased intrinsic clearance values (Vmax/Km) compared with the wild type (from 66.34 to 99.79%), whereas 4 variants (CYP2D6*10, *93, *95 and E215K) exhibited significantly decreased values (from 49.02 to 98.50%). This is the first report on the metabolism of tolterodine by all of these rare alleles, and these findings suggest that more attention should be paid to subjects carrying these infrequent CYP2D6 alleles when administering tolterodine in the clinic.
Generalization Technique for 2D+SCALE DHE Data Model
NASA Astrophysics Data System (ADS)
Karim, Hairi; Rahman, Alias Abdul; Boguslawski, Pawel
2016-10-01
Different users or applications need different scale models, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised concerning fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension, such as scale and/or time, into a 3D model, but the implementation of the scale dimension faces some problems due to the limitations and availability of data structures and data models. Nowadays, various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting the scale dimension. Generally, the Dual Half-Edge (DHE) data structure was designed to work with any perfect 3D spatial object such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with the scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models for the DHE data structure forms the major discussion of this paper. We strongly believe that some advantages, such as local modification and topological elements (navigation, query and semantic information) in the scale dimension, could be used for future 3D-scale applications.
Testing the effectiveness of problem-based learning with learning-disabled students in biology
NASA Astrophysics Data System (ADS)
Guerrera, Claudia Patrizia
The purpose of the present study was to investigate the effects of problem-based learning (PBL) with learning-disabled (LD) students. Twenty-four students (12 dyads) classified as LD and attending a school for the learning-disabled participated in the study. Students engaged in either a computer-based environment involving BioWorld, a hospital simulation designed to teach biology students problem-solving skills, or a paper-and-pencil version based on the computer program. A hybrid model of learning was adopted whereby students were provided with direct instruction on the digestive system prior to participating in a problem-solving activity. Students worked in dyads and solved three problems involving the digestive system in either a computerized or a paper-and-pencil condition. The experimenter acted as a coach to assist students throughout the problem-solving process. A follow-up study was conducted one month later to measure the long-term learning gains. Quantitative and qualitative methods were used to analyze three types of data: process data, outcome data, and follow-up data. Results from the process data showed that all students engaged in effective collaboration and became more systematic in their problem solving over time. Findings from the outcome and follow-up data showed that students in both treatment conditions made both learning and motivational gains and that these benefits were still evident one month later. Overall, results demonstrated that the computer facilitated students' problem-solving and scientific reasoning skills. Some differences were noted in students' collaboration and the amount of assistance required from the coach in both conditions. Thus, PBL is an effective learning approach with LD students in science, regardless of the type of learning environment. These results have implications for teaching science to LD students, as well as for future designs of educational software for this population.
Can exposure to prenatal sex hormones (2D:4D) predict cognitive reflection?
Bosch-Domènech, Antoni; Brañas-Garza, Pablo; Espín, Antonio M
2014-05-01
The Cognitive Reflection Test (CRT), introduced by Frederick (2005), is designed to measure the tendency to override an intuitive response that is incorrect and to engage in further reflection that leads to the correct response. The consistent sex differences in CRT performance may suggest a role for prenatal sex hormones. A now widely studied putative marker for relative prenatal testosterone is the second-to-fourth digit ratio (2D:4D). This paper tests to what extent 2D:4D, as a proxy for the prenatal ratio of testosterone/estrogens, can predict CRT scores in a sample of 623 students. After controlling for sex, we observe that a lower 2D:4D (reflecting relatively higher exposure to testosterone) is significantly associated with a higher number of correct answers. The result holds for both hands' 2D:4Ds. In addition, the effect appears to be stronger for females than for males. We also control for patience and math proficiency, which are significantly related to performance on the CRT, but the effect of 2D:4D on CRT performance is not reduced with these controls, implying that these variables do not mediate the relationship between digit ratio and CRT.
EFFECTS OF SMOKING ON D2/D3 STRIATAL RECEPTOR AVAILABILITY IN ALCOHOLICS AND SOCIAL DRINKERS
Albrecht, Daniel S.; Kareken, David A.; Yoder, Karmen K.
2013-01-01
Objective: Studies have reported lower striatal D2/D3 receptor availability in both alcoholics and cigarette smokers relative to healthy controls. These substances are commonly co-abused, yet the relationship between comorbid alcohol/tobacco abuse and striatal D2/D3 receptor availability has not been examined. We sought to determine the degree to which dual abuse of alcohol and tobacco is associated with lower D2/D3 receptor availability. Method: Eighty-one subjects (34 nontreatment-seeking alcoholic smokers [NTS-S], 21 social-drinking smokers [SD-S], and 26 social-drinking non-smokers [SD-NS]) received baseline [11C]raclopride scans. D2/D3 binding potential (BPND ≡ Bavail/KD) was estimated for ten anatomically defined striatal regions of interest (ROIs). Results: Significant group effects were detected in bilateral pre-commissural dorsal putamen, bilateral pre-commissural dorsal caudate, and bilateral post-commissural dorsal putamen. Post-hoc testing revealed that, regardless of drinking status, smokers had lower D2/D3 receptor availability than non-smoking controls. Conclusions: Chronic tobacco smokers have lower striatal D2/D3 receptor availability than non-smokers, independent of alcohol use. Additional studies are needed to identify the mechanisms by which chronic tobacco smoking is associated with striatal dopamine receptor availability. PMID:23649848
The use of 2D and 3D information in a perceptual-cognitive judgement task.
Put, Koen; Wagemans, Johan; Spitz, Jochim; Gallardo, Manuel Armenteros; Williams, A Mark; Helsen, Werner F
2014-01-01
We examined whether the use of three-dimensional (3D) simulations in an off-field offside decision-making task is beneficial compared to the more widely available two-dimensional (2D) simulations. Thirty-three assistant referees, who were all involved in professional football, participated in the experiment. They assessed 40 offside situations in both 2D and 3D formats using a counterbalanced design. A distinction was made between offside situations near (i.e., 15 m) and far (i.e., 30 m) from the touchline. Subsequently, a frame recognition task was performed in which assistant referees were asked to indicate which of the five pictures represented the previous video scene. A higher response accuracy score was observed under 3D (80.0%) compared to 2D (75.0%) conditions, in particular for the situations near the touchline (3D: 81.8%; 2D: 72.7%). No differences were reported between 2D and 3D in the frame recognition task. Findings suggest that in highly dynamic and complex situations, the visual system can benefit from the availability of 3D information, especially for relatively fine, metric position judgements. In the memory task, in which a mental abstraction had to be made from a dynamic situation to a static snapshot, 3D stereo disparities do not add anything over and beyond 2D simulations. The specific task demands should be taken into account when considering the most appropriate format for testing and training.
Effect of CYP2D6 genetic polymorphism on the metabolism of citalopram in vitro.
Hu, Xiao-Xia; Yuan, Ling-Jing; Fang, Ping; Mao, Yong-Hui; Zhan, Yun-Yun; Li, Xiang-Yu; Dai, Da-Peng; Cai, Jian-Ping; Hu, Guo-Xin
2016-04-01
Genetic polymorphisms of CYP2D6 significantly influence the efficacy and safety of some drugs, which might cause adverse effects and therapeutic failure. We aimed to investigate the role of CYP2D6 in the metabolism of citalopram and to identify the effect of 24 CYP2D6 allelic variants we found in the Chinese Han population on the metabolism of citalopram in vitro. These CYP2D6 variants, expressed in an insect cell system, were incubated with 10-1000 μM citalopram for 30 min at 37 °C, and the reaction was terminated by immediately cooling to -80 °C. Citalopram and its metabolites were analyzed by high-performance liquid chromatography (HPLC). The intrinsic clearance (Vmax/Km) values of the variants toward citalopram metabolites were significantly altered: 38-129% for demethylcitalopram and 13-138% for citalopram N-oxide when compared with CYP2D6*1. Most of the tested rare alleles exhibited significantly decreased values due to increased Km and/or decreased Vmax values. We conclude that this recombinant system can be used to investigate the enzymes involved in drug metabolism, and these findings suggest that more attention should be paid to subjects carrying these CYP2D6 alleles when administering citalopram in the clinic.
NASA Astrophysics Data System (ADS)
Jeromin, A.; Schaffarczyk, A. P.; Puczylowski, J.; Peinke, J.; Hölling, M.
2014-12-01
For the investigation of atmospheric turbulent flows on small scales, a new anemometer was developed, the so-called 2d-Atmospheric Laser Cantilever Anemometer (2d-ALCA). It performs highly resolved measurements with a spatial resolution in the millimeter range and a temporal resolution in the kHz range, thus detecting very small turbulent structures. The anemometer is a redesign of the successfully operating 2d-LCA for laboratory application. The new device was designed to withstand hostile operating environments (rain and saline, humid air). In February 2012, the 2d-ALCA was used for the first time in a test field. The device was mounted about 53 m above ground level on a lattice tower near the German North Sea coast. Wind speed was measured by the 2d-ALCA at a 10 kHz sampling rate and by cup anemometers at 1 Hz. The instantaneous wind speed ranged from 8 m/s to 19 m/s at an average turbulence level of about 7%. Wind field characteristics were analyzed based on cup anemometer as well as 2d-ALCA data. The combination of both devices allowed the study of atmospheric turbulence over several magnitudes in turbulent scales.
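The "turbulence level of about 7%" quoted above is the turbulence intensity: the standard deviation of the wind speed divided by its mean. A minimal sketch; the samples below are invented, not the 2d-ALCA field data:

```python
import math

# Turbulence intensity: std(u) / mean(u), reported as a percentage.

def turbulence_intensity(samples):
    mean = sum(samples) / len(samples)
    var = sum((u - mean) ** 2 for u in samples) / len(samples)
    return math.sqrt(var) / mean

u = [12.0, 13.5, 11.0, 12.5, 13.0, 11.5, 12.5, 12.0]   # wind speed, m/s
print(round(100.0 * turbulence_intensity(u), 1))       # → 6.1 (percent)
```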
SNP genotyping using TaqMan technology: the CYP2D6*17 assay conundrum.
Gaedigk, Andrea; Freeman, Natalie; Hartshorne, Toinette; Riffel, Amanda K; Irwin, David; Bishop, Jeffrey R; Stein, Mark A; Newcorn, Jeffrey H; Jaime, Lazara Karelia Montané; Cherner, Mariana; Leeder, J Steven
2015-03-19
CYP2D6 contributes to the metabolism of many clinically used drugs and is increasingly tested to individualize drug therapy. The CYP2D6 gene is challenging to genotype due to the highly complex nature of its gene locus. TaqMan technology is widely used in the clinical and research settings for genotype analysis due to assay reliability, low cost, and the availability of commercially available assays. The assay identifying 1023C>T (rs28371706) defining a reduced function (CYP2D6*17) and several nonfunctional alleles, produced a small number of unexpected diplotype calls in three independent sets of samples, i.e. calls suggested the presence of a CYP2D6*4 subvariant containing 1023C>T. Gene resequencing did not reveal any unknown SNPs in the primer or probe binding sites in any of the samples, but all affected samples featured a trio of SNPs on their CYP2D6*4 allele between one of the PCR primer and probe binding sites. While the phenomenon was ultimately overcome by an alternate assay utilizing a PCR primer excluding the SNP trio, the mechanism causing this phenomenon remains elusive. This rare and unexpected event underscores the importance of assay validation in samples representing a variety of genotypes, but also vigilance of assay performance in highly polymorphic genes such as CYP2D6.
Kolkoori, S R; Rahman, M-U; Chinta, P K; Kreutzbruck, M; Rethmeier, M; Prager, J
2013-02-01
Ultrasound propagation in inhomogeneous anisotropic materials is difficult to examine because of the directional dependence of the elastic properties. Simulation tools play an important role in developing advanced, reliable ultrasonic non-destructive testing techniques for the inspection of anisotropic materials, particularly austenitic cladded materials, austenitic welds and dissimilar welds. In this contribution we present an adapted 2D ray tracing model for evaluating ultrasonic wave fields quantitatively in inhomogeneous anisotropic materials. Inhomogeneity in the anisotropic material is represented by discretizing it into several homogeneous layers. According to the ray tracing model, ultrasonic ray paths are traced during energy propagation through the various discretized layers of the material, and at each interface the problem of reflection and transmission is solved. The presented algorithm evaluates the transducer-excited ultrasonic fields accurately by taking into account the directivity of the transducer, the divergence of the ray bundle, the density of rays and phase relations, as well as transmission coefficients. The ray tracing model is able to calculate the ultrasonic wave fields generated by a point source as well as by a finite-dimension transducer. The ray tracing model results are validated quantitatively against results obtained from the 2D Elastodynamic Finite Integration Technique (EFIT) on several configurations generally occurring in the ultrasonic non-destructive testing of anisotropic materials. Finally, a quantitative comparison of the ray tracing model results with experiments on 32 mm thick austenitic weld material and 62 mm thick austenitic cladded material is discussed.
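The step solved at each layer interface can be sketched for the simplest case: Snell's law for the transmitted ray angle between two homogeneous layers. Real anisotropic media require direction-dependent (slowness-surface) velocities, which this isotropic sketch omits; the velocities below are illustrative examples, not material data from the paper.

```python
import math

# Snell's law at a planar interface between two homogeneous isotropic layers:
# sin(theta_t) / v2 = sin(theta_i) / v1.

def transmitted_angle(theta_inc_deg, v1, v2):
    """Transmitted angle in degrees, or None beyond the critical angle."""
    s = math.sin(math.radians(theta_inc_deg)) * v2 / v1
    if abs(s) > 1.0:
        return None                 # total internal reflection
    return math.degrees(math.asin(s))

# Wave incident at 30 deg passing from a 5900 m/s layer into a 5600 m/s layer.
angle = transmitted_angle(30.0, 5900.0, 5600.0)
print(round(angle, 1))   # → 28.3
```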
Projection-slice theorem based 2D-3D registration
NASA Astrophysics Data System (ADS)
van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.
2007-03-01
In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's specific anatomy and the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along the projection axis. It would be very profitable to provide the surgeon or interventionalist with a 3D insight of the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives us a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The Projection-Slice Theorem Based method was shown to be very effective and robust, and provides capture ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
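The theorem the method is built on can be checked numerically in a few lines: the 1D Fourier transform of a parallel projection of an image equals the central slice of the image's 2D Fourier transform perpendicular to the projection direction. The sketch below shows this for a projection along the rows of a random test image (illustrative data, not the phantom or skull data of the paper).

```python
import numpy as np

# Projection-slice theorem: FT(projection) == central slice of the 2D FT.

rng = np.random.default_rng(0)
image = rng.random((64, 64))

projection = image.sum(axis=0)         # project along axis 0
slice_1d = np.fft.fft(projection)      # 1D FT of the projection
central = np.fft.fft2(image)[0, :]     # zero-frequency row of the 2D FT

print(bool(np.allclose(slice_1d, central)))   # → True
```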
Synthesis and characterization of 2D molybdenum carbide (MXene)
Halim, Joseph; Kota, Sankalp; Lukatskaya, Maria R.; ...
2016-02-17
Large scale synthesis and delamination of 2D Mo2CTx (where T is a surface termination group) has been achieved by selectively etching gallium from the recently discovered nanolaminated, ternary transition metal carbide Mo2Ga2C. Different synthesis and delamination routes result in different flake morphologies. The resistivity of free-standing Mo2CTx films increases by an order of magnitude as the temperature is reduced from 300 to 10 K, suggesting semiconductor-like behavior of this MXene, in contrast to Ti3C2Tx which exhibits metallic behavior. At 10 K, the magnetoresistance is positive. Additionally, changes in electronic transport are observed upon annealing of the films. When 2 μm thick films are tested as electrodes in supercapacitors, capacitances as high as 700 F cm-3 in a 1 M sulfuric acid electrolyte and high capacity retention for at least 10,000 cycles at 10 A g-1 are obtained. Free-standing Mo2CTx films, with ≈8 wt% carbon nanotubes, perform well when tested as an electrode material for Li-ions, especially at high rates. In conclusion, at 20C and 131C cycling rates, stable reversible capacities of 250 and 76 mAh g-1, respectively, are achieved for over 1000 cycles.
PLAN2D - A PROGRAM FOR ELASTO-PLASTIC ANALYSIS OF PLANAR FRAMES
NASA Technical Reports Server (NTRS)
Lawrence, C.
1994-01-01
PLAN2D is a FORTRAN computer program for the plastic analysis of planar rigid frame structures. Given a structure and loading pattern as input, PLAN2D calculates the ultimate load that the structure can sustain before collapse. Element moments and plastic hinge rotations are calculated for the ultimate load. The locations of the hinges required for a collapse mechanism to form are also determined. The program proceeds in an iterative series of linear elastic analyses. After each iteration the resulting elastic moments in each member are compared to the reserve plastic moment capacity of that member. The member or members that have moments closest to their reserve capacity determine the minimum load factor and the site where the next hinge is to be inserted. Next, hinges are inserted and the structural stiffness matrix is reformulated. This cycle is repeated until the structure becomes unstable. At this point the ultimate collapse load is calculated by accumulating the minimum load factors from each previous iteration and multiplying them by the original input loads. PLAN2D is based on the program STAN, originally written by Dr. E.L. Wilson at U.C. Berkeley. PLAN2D has several limitations: 1) although PLAN2D will detect unloading of hinges, it does not have the capability to remove hinges; 2) PLAN2D does not allow the user to input different positive and negative moment capacities; and 3) PLAN2D does not consider the interaction between axial load and plastic moment capacity. Axial yielding and buckling are ignored, as is the reduction in moment capacity due to axial load. PLAN2D is written in FORTRAN and is machine independent. It has been tested on an IBM PC and a DEC MicroVAX. The program was developed in 1988.
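The per-iteration step described above can be sketched compactly: given each member's elastic moment per unit load and its reserve plastic moment capacity, the smallest ratio gives the load factor at which the next hinge forms. The member data below are invented for illustration; a real run would recompute the elastic moments after each hinge is inserted.

```python
# One load-factor step of a plastic hinge analysis (illustrative data).

def next_hinge(elastic_moments, reserve_capacities):
    """Return (minimum load factor, index of the member that hinges next)."""
    factors = [mp / abs(m) for m, mp in zip(elastic_moments, reserve_capacities)]
    lam = min(factors)
    return lam, factors.index(lam)

m = [40.0, 25.0, 55.0]       # elastic moment per unit load factor, kN*m
mp = [120.0, 90.0, 110.0]    # reserve plastic moment capacity, kN*m

lam, member = next_hinge(m, mp)
print(lam, member)           # → 2.0 2
```

PLAN2D accumulates these minimum load factors over successive hinge insertions to obtain the ultimate collapse load.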
3D-2D registration of cerebral angiograms based on vessel directions and intensity gradients
NASA Astrophysics Data System (ADS)
Mitrovic, Uroš; Špiclin, Žiga; Štern, Darko; Markelj, Primož; Likar, Boštjan; Miloševic, Zoran; Pernuš, Franjo
2012-02-01
Endovascular treatment of cerebral aneurysms and arteriovenous malformations (AVM) involves navigation of a catheter through the femoral artery and vascular system to the site of pathology. Intra-interventional navigation is done under the guidance of one or at most two two-dimensional (2D) X-ray fluoroscopic images or 2D digital subtracted angiograms (DSA). Due to the projective nature of 2D images, the interventionist needs to mentally reconstruct the position of the catheter in respect to the three-dimensional (3D) patient vasculature, which is not a trivial task. By 3D-2D registration of pre-interventional 3D images like CTA, MRA or 3D-DSA and intra-interventional 2D images, intra-interventional tools such as catheters can be visualized on the 3D model of patient vasculature, allowing easier and faster navigation. Such a navigation may consequently lead to the reduction of total ionizing dose and delivered contrast medium. In the past, development and evaluation of 3D-2D registration methods for endovascular treatments received considerable attention. The main drawback of these methods is that they have to be initialized rather close to the correct position as they mostly have a rather small capture range. In this paper, a novel registration method that has a higher capture range and success rate is proposed. The proposed method and a state-of-the-art method were tested and evaluated on synthetic and clinical 3D-2D image-pairs. The results on both databases indicate that although the proposed method was slightly less accurate, it significantly outperformed the state-of-the-art 3D-2D registration method in terms of robustness measured by capture range and success rate.
Digit ratio (2D:4D), lateral preferences, and performance in fencing.
Voracek, Martin; Reimer, Barbara; Ertl, Clara; Dressler, Stefan G
2006-10-01
The second to fourth digit ratio (2D:4D) is a sexually dimorphic trait (men tend to have lower values than women) and a likely biomarker for the organizational (permanent) effects of prenatal androgens on the human brain and body. Prenatal testosterone, as reflected by 2D:4D, has many extragenital effects, including its relevance for the formation of an efficient cardiovascular system. Previous research, reviewed here, has therefore investigated possible associations of 2D:4D with sport performance. Several studies found more masculinized digit ratio patterns (low 2D:4D values or a negative right-minus-left difference in 2D:4D) to be related to high performance in running, soccer, and skiing. The present research tested this hypothesis in a sample of 54 tournament fencers, predominantly from Austria. For men, negative right-left differences in 2D:4D corresponded significantly to better current as well as highest national fencing rankings, independent of training intensity and fencing experience. The mean 2D:4D values of these fencers were significantly lower and the proportion of left-handers was elevated relative to the local general population. For the right hand, the ratio was somewhat lower in male sabre fencers than in male epée and foil fencers combined and significantly lower in left-handed compared to right-handed fencers. Although nonsignificant due to low statistical power, effect sizes suggested that crossed versus congruent hand-eye and hand-foot preferences might also be related to fencing performance. The present findings add to the evidence that 2D:4D might be a performance indicator for men across a variety of sports.
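The two quantities analyzed in studies like this are simple to compute: the per-hand ratio 2D:4D (index-finger length over ring-finger length) and the right-minus-left difference, whose negative values are read as the more "masculinized" pattern. The lengths below are invented examples, not the fencers' measurements.

```python
# Digit ratio 2D:4D and the right-minus-left difference (illustrative mm values).

def digit_ratio(index_len_mm, ring_len_mm):
    return index_len_mm / ring_len_mm

right = digit_ratio(72.1, 75.9)
left = digit_ratio(71.5, 74.8)
dr_l = right - left            # negative: the "masculinized" direction

print(round(right, 3), round(left, 3), round(dr_l, 3))
```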
Integrating Mobile Multimedia into Textbooks: 2D Barcodes
ERIC Educational Resources Information Center
Uluyol, Celebi; Agca, R. Kagan
2012-01-01
The major goal of this study was to empirically compare text-plus-mobile phone learning using an integrated 2D barcode tag in a printed text with three other conditions described in multimedia learning theory. The method examined in the study involved modifications of the instructional material such that: a 2D barcode was used near the text, the…
Efficient Visible Quasi-2D Perovskite Light-Emitting Diodes.
Byun, Jinwoo; Cho, Himchan; Wolf, Christoph; Jang, Mi; Sadhanala, Aditya; Friend, Richard H; Yang, Hoichang; Lee, Tae-Woo
2016-09-01
Efficient quasi-2D-structure perovskite light-emitting diodes (4.90 cd A^-1) are demonstrated by mixing a 3D-structured perovskite material (methylammonium lead bromide) and a 2D-structured perovskite material (phenylethylammonium lead bromide); the improved efficiency can be ascribed to better film uniformity, enhanced exciton confinement, and reduced trap density.
Adaptation algorithms for 2-D feedforward neural networks.
Kaczorek, T
1995-01-01
The generalized weight adaptation algorithms presented by J.G. Kuschewski et al. (1993) and by S.H. Zak and H.J. Sira-Ramirez (1990) are extended for 2-D madaline and 2-D two-layer feedforward neural nets (FNNs).
Hyun, Eugin; Jin, Young-Seok; Lee, Jong-Hun
2016-01-20
For an automotive pedestrian detection radar system, fast-ramp based 2D range-Doppler Frequency Modulated Continuous Wave (FMCW) radar is effective for distinguishing between moving targets and unwanted clutter. However, when a weak moving target such as a pedestrian exists together with strong clutter, the pedestrian may be masked by the side-lobe of the clutter even though they are notably separated in the Doppler dimension. To prevent this problem, one popular solution is the use of a windowing scheme with a weighting function. However, this method leads to a spread spectrum, so a pedestrian with weak signal power and slow Doppler may also be masked by the main-lobe of the clutter. With a fast-ramp based FMCW radar, if the target is moving, the complex spectrum of the range-Fast Fourier Transform (FFT) changes with a constant phase difference over ramps. In contrast, the clutter exhibits constant phase irrespective of the ramps. Based on this fact, in this paper we propose a pedestrian detection scheme for highly cluttered environments using a coherent phase difference method. By detecting the coherent phase difference from the complex spectrum of the range-FFT, we first extract the range profile of the moving pedestrians. Then, through the Doppler FFT, we obtain the 2D range-Doppler map for only the pedestrian. To test the proposed detection scheme, we have developed a real-time data logging system with a 24 GHz FMCW transceiver. In laboratory tests, we verified that the signal processing results from the proposed method were much better than those expected from the conventional 2D FFT-based detection method.
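The coherent phase difference idea can be sketched with simulated range-FFT spectra: across successive ramps, a moving target's range bin rotates in phase while static clutter keeps a constant phase, so the mean per-ramp phase increment separates them. The spectra below are synthetic illustrations, not radar data, and the detection threshold is an assumption.

```python
import numpy as np

# Coherent phase difference across ramps: moving targets show a nonzero
# per-ramp phase increment in their range-FFT bin; static clutter does not.

n_ramps, n_bins = 64, 128
ramps = np.arange(n_ramps)

spectra = np.zeros((n_ramps, n_bins), dtype=complex)
spectra[:, 30] = 5.0                               # strong static clutter
spectra[:, 45] = 0.5 * np.exp(1j * 0.2 * ramps)    # weak moving pedestrian

# Mean phase increment between consecutive ramps, per range bin.
dphi = np.angle(np.sum(spectra[1:] * np.conj(spectra[:-1]), axis=0))

moving_bins = np.flatnonzero(np.abs(dphi) > 0.05)  # assumed threshold
print(moving_bins)   # → [45]
```

Note how the weak pedestrian bin is found while the much stronger clutter bin is rejected, without any amplitude windowing.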
Installed Transonic 2D Nozzle Nacelle Boattail Drag Study
NASA Technical Reports Server (NTRS)
Malone, Michael B.; Peavey, Charles C.
1999-01-01
The Transonic Nozzle Boattail Drag Study was initiated in 1995 to develop an understanding of how external nozzle transonic aerodynamics affect airplane performance and how strongly those effects depend on nozzle configuration (2D vs. axisymmetric). MDC analyzed the axisymmetric nozzle. Boeing subcontracted Northrop-Grumman to analyze the 2D nozzle. All participants analyzed the AGARD nozzle as a check-out and validation case. Once the codes were checked out and the gridding resolution necessary for modeling the separated flow in this region determined, the analysis moved to the installed wing/body/nacelle/diverter cases. The boattail drag validation case was the AGARD B.4 rectangular nozzle. This test case offered both test data and previous CFD analyses for comparison. Results were obtained for test cases B.4.1 (M=0.6) and B.4.2 (M=0.938) and compared very well with the experimental data. Once the validation was complete, a CFD grid was constructed for the full Ref. H configuration (wing/body/nacelle/diverter) using a combination of patched and overlapped (Chimera) grids. This was done to ensure that the grid topologies and density would be adequate for the full model. The use of overlapped grids allowed the same grids from the full configuration model to be used for the wing/body-alone cases, thus eliminating the risk of grid differences affecting the determination of the installation effects. Once the full configuration model was run and deemed suitable, the nacelle/diverter grids were removed and the wing/body analysis performed. Reference H wing/body results were completed for M=0.9 (a=0.0, 2.0, 4.0, 6.0 and 8.0), M=1.1 (a=4.0 and 6.0) and M=2.4 (a=0.0, 2.0, 4.4, 6.0 and 8.0). Comparisons of the M=0.9 and M=2.4 cases were made with available wind tunnel data, and overall the comparisons were good. The axi-inlet/2D nozzle nacelle was analyzed in isolation. The isolated nacelle data coupled with the wing/body result enabled the interference effects of the
Regulation of ligands for the NKG2D activating receptor
Raulet, David H.; Gasser, Stephan; Gowen, Benjamin G.; Deng, Weiwen; Jung, Heiyoun
2014-01-01
NKG2D is an activating receptor expressed by all NK cells and subsets of T cells. It serves as a major recognition receptor for detection and elimination of transformed and infected cells and participates in the genesis of several inflammatory diseases. The ligands for NKG2D are self-proteins that are induced by pathways that are active in certain pathophysiological states. NKG2D ligands are regulated transcriptionally, at the level of mRNA and protein stability, and by cleavage from the cell surface. In some cases, ligand induction can be attributed to pathways that are activated specifically in cancer cells or infected cells. We review the numerous pathways that have been implicated in the regulation of NKG2D ligands, discuss the pathologic states in which those pathways are likely to act, and attempt to synthesize the findings into general schemes of NKG2D ligand regulation in NK cell responses to cancer and infection. PMID:23298206
2D materials and van der Waals heterostructures.
Novoselov, K S; Mishchenko, A; Carvalho, A; Castro Neto, A H
2016-07-29
The physics of two-dimensional (2D) materials and heterostructures based on such crystals has been developing extremely fast. With these new materials, truly 2D physics has begun to appear (for instance, the absence of long-range order, 2D excitons, commensurate-incommensurate transition, etc.). Novel heterostructure devices--such as tunneling transistors, resonant tunneling diodes, and light-emitting diodes--are also starting to emerge. Composed from individual 2D crystals, such devices use the properties of those materials to create functionalities that are not accessible in other heterostructures. Here we review the properties of novel 2D crystals and examine how their properties are used in new heterostructure devices.
New generation transistor technologies enabled by 2D crystals
NASA Astrophysics Data System (ADS)
Jena, D.
2013-05-01
The discovery of graphene opened the door to 2D crystal materials. The lack of a bandgap in 2D graphene makes it unsuitable for electronic switching transistors in the conventional field-effect sense, though possible techniques exploiting the unique bandstructure and nanostructures are being explored. The transition metal dichalcogenides have 2D crystal semiconductors, which are well-suited for electronic switching. We experimentally demonstrate field effect transistors with current saturation and carrier inversion made from layered 2D crystal semiconductors such as MoS2, WS2, and the related family. We also evaluate the feasibility of such semiconducting 2D crystals for tunneling field effect transistors for low-power digital logic. The article summarizes the current state of new generation transistor technologies either proposed, or demonstrated, with a commentary on the challenges and prospects moving forward.
The Psychostimulant Khat (Catha edulis) Inhibits CYP2D6 Enzyme Activity in Humans.
Bedada, Worku; de Andrés, Fernando; Engidawork, Ephrem; Pohanka, Anton; Beck, Olof; Bertilsson, Leif; Llerena, Adrián; Aklillu, Eleni
2015-12-01
The use of khat (Catha edulis) while on medication may alter treatment outcome. In particular, the influence of khat on the metabolic activities of drug-metabolizing enzymes is not known. We performed a comparative 1-way crossover study to evaluate the effect of khat on cytochrome P450 (CYP)2D6 and CYP3A4 enzyme activity. After 1 week of khat abstinence, baseline CYP2D6 and CYP3A4 metabolic activities were determined in 40 Ethiopian male volunteers using 30 mg dextromethorphan (DM) as a probe drug and then repeated after 1 week of daily use of 400 g fresh khat leaves. Urinary concentrations of cathinone and cathine were determined to monitor the subjects' compliance to the study protocol. Genotyping for CYP2D6*3 and CYP2D6*4 was done. Plasma DM, dextrorphan and 3-methoxymorphinan concentrations were quantified. CYP2D6 and CYP3A4 enzyme activities were assessed by comparing plasma log DM/dextrorphan and log DM/methoxymorphinan metabolic ratio (MR) respectively in the presence and absence of khat. Cytochrome 2D6 MR was significantly increased from baseline by concurrent khat use (paired t test, P = 0.003; geometric mean ratio, 1.38; 95% confidence interval [95% CI], 1.12-1.53). Moreover, the inhibition of CYP2D6 activity by khat was more pronounced in CYP2D6*1/*1 compared with CYP2D6*1/*4 genotypes (P = 0.01). A marginal inhibition of CYP3A4 activity in the presence of khat was observed (P = 0.24). The mean percentage increase of CYP2D6 and CYP3A4 MR from baseline by khat use was 46% (95% CI, 20-72) and 31% (95% CI, 8-54), respectively. This is the first report linking khat use with significant inhibition of CYP2D6 metabolic activity in humans.
Dynamics and Control of a Reduced Order System of the 2-d Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Smaoui, Nejib; Zribi, Mohamed
2014-11-01
The dynamics and control problem of a reduced-order system of the 2-d Navier-Stokes (N-S) equations is analyzed. First, a seventh-order system of nonlinear ordinary differential equations (ODEs) which approximates the dynamical behavior of the 2-d N-S equations is obtained by using the Fourier Galerkin method. We show that the dynamics of this ODE system transforms from periodic solutions to chaotic attractors through a sequence of bifurcations including a period-doubling scenario. Then three Lyapunov-based controllers are designed to either control the system of ODEs to a desired fixed point or to synchronize two ODE systems obtained from the truncation of the 2-d N-S equations under different conditions. Numerical simulations are presented to show the effectiveness of the proposed controllers. This research was supported and funded by the Research Sector, Kuwait University under Grant No. SM02/14.
Estrogen-Induced Cholestasis Leads to Repressed CYP2D6 Expression in CYP2D6-Humanized Mice.
Pan, Xian; Jeong, Hyunyoung
2015-07-01
Cholestasis activates bile acid receptor farnesoid X receptor (FXR) and subsequently enhances hepatic expression of small heterodimer partner (SHP). We previously demonstrated that SHP represses the transactivation of cytochrome P450 2D6 (CYP2D6) promoter by hepatocyte nuclear factor (HNF) 4α. In this study, we investigated the effects of estrogen-induced cholestasis on CYP2D6 expression. Estrogen-induced cholestasis occurs in subjects receiving estrogen for contraception or hormone replacement, or in susceptible women during pregnancy. In CYP2D6-humanized transgenic (Tg-CYP2D6) mice, cholestasis triggered by administration of 17α-ethinylestradiol (EE2) at a high dose led to 2- to 3-fold decreases in CYP2D6 expression. This was accompanied by increased hepatic SHP expression and subsequent decreases in the recruitment of HNF4α to CYP2D6 promoter. Interestingly, estrogen-induced cholestasis also led to increased recruitment of estrogen receptor (ER) α, but not that of FXR, to Shp promoter, suggesting a predominant role of ERα in transcriptional regulation of SHP in estrogen-induced cholestasis. EE2 at a low dose (that does not cause cholestasis) also increased SHP (by ∼ 50%) and decreased CYP2D6 expression (by 1.5-fold) in Tg-CYP2D6 mice, the magnitude of differences being much smaller than that shown in EE2-induced cholestasis. Taken together, our data indicate that EE2-induced cholestasis increases SHP and represses CYP2D6 expression in Tg-CYP2D6 mice in part through ERα transactivation of Shp promoter.
Quasi 2D Materials: Raman Nanometrology and Thermal Management Applications
NASA Astrophysics Data System (ADS)
Shahil, Khan Mohammad Farhan
Quasi-two-dimensional (2D) materials obtained by "graphene-like" exfoliation have attracted tremendous attention. Such materials reveal unique electronic, thermal and optical properties, which can potentially be used in electronics, thermal management and energy conversion. This dissertation research addresses two separate but synergetic problems: (i) preparation and optical characterization of quasi-2D films of the bismuth-telluride (Bi2Te3) family of materials, which demonstrate both thermoelectric and topological insulator properties; and (ii) investigation of thermal properties of composite materials prepared with graphene and few-layer graphene (FLG). The first part of the dissertation reports properties of the exfoliated few-quintuple layers of Bi2Te3, Bi2Se3 and Sb2Te3. Both non-resonant and resonant Raman scattering spectra have been investigated. It was found that the crystal symmetry breaking in few-quintuple films results in the appearance of A1u-symmetry Raman peaks, which are not active in the bulk crystals. The scattering spectra measured under 633-nm wavelength excitation reveal a number of resonant features, which could be used for analysis of the electronic and phonon processes in these materials. The obtained results help to understand the physical mechanisms of Raman scattering in few-quintuple-thick films and can be used for nanometrology of topological insulator films on various substrates. The second part of the dissertation is dedicated to investigation of the properties of composite materials prepared with graphene and FLG. It was found that an optimized mixture of graphene and multilayer graphene---produced by a high-yield, inexpensive liquid-phase-exfoliation technique---can lead to an extremely strong enhancement of the cross-plane thermal conductivity K of the composite. The "laser flash" measurements revealed a record-high enhancement of K by 2300% in the graphene-based polymer at a filler loading fraction f = 10 vol.%. It was
Graph-Based Transform for 2D Piecewise Smooth Signals With Random Discontinuity Locations.
Zhang, Dong; Liang, Jie
2017-04-01
The graph-based block transform recently emerged as an effective tool for compressing special signals such as depth images in 3D videos. However, in existing methods, overheads are required to describe the graph of the block, from which the decoder has to calculate the transform via time-consuming eigendecomposition. To address these problems, in this paper we aim to develop a single graph-based transform for a class of 2D piecewise smooth signals with similar discontinuity patterns. We first consider the deterministic case with a known discontinuity location in each row. We propose a 2D first-order autoregression (2D AR1) model and a 2D graph for this type of signal. We show that the closed-form expression of the inverse of a biased Laplacian matrix of the proposed 2D graph is exactly the covariance matrix of the proposed 2D AR1 model. Therefore, the optimal transform for the signal is given by the eigenvectors of the proposed graph Laplacian. Next, we show that similar results hold in the random case, where the locations of the discontinuities in different rows are randomly distributed within a confined region, and we derive the closed-form expression of the corresponding optimal 2D graph Laplacian. The theory developed in this paper can be used to design both pre-computed transforms and signal-dependent transforms with low complexity. Finally, depth image coding experiments demonstrate that our methods achieve performance similar to the state-of-the-art method, but with much lower complexity.
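A 1-D toy version of the transform family discussed in this abstract (my own illustration, not the paper's 2D AR1/graph construction) makes the key property concrete: if the path-graph edge at the discontinuity is given zero weight, the eigenvectors of the resulting Laplacian compact a piecewise-constant row into just two coefficients.

```python
import numpy as np

n, cut = 8, 4                 # 8-sample row, discontinuity between samples 3 and 4
w = np.ones(n - 1)
w[cut - 1] = 0.0              # break the graph edge at the discontinuity

# Combinatorial Laplacian of the weighted path graph: L = D - W.
deg = np.r_[w, 0.0] + np.r_[0.0, w]
L = np.diag(deg) - np.diag(w, 1) - np.diag(w, -1)

# Eigenvectors of L (sorted by eigenvalue) form the graph-based transform.
evals, evecs = np.linalg.eigh(L)

x = np.r_[np.ones(cut), -np.ones(n - cut)]    # piecewise-constant "edge" signal
coeffs = evecs.T @ x                          # forward transform
```

Because the zero-weight edge disconnects the graph, the Laplacian has a two-dimensional nullspace (signals constant on each side), so x lands entirely in the first two coefficients; the unweighted path graph, whose eigenvectors are DCT-like, would instead smear the edge across many coefficients.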
A novel KMT2D mutation resulting in Kabuki syndrome: A case report
Lu, Jun; Mo, Guiling; Ling, Yaojun; Ji, Lijuan
2016-01-01
Kabuki syndrome (KS) is a rare genetic syndrome characterized by multiple congenital anomalies and varying degrees of mental retardation. Patients with KS often present with facial, skeletal, visceral and dermatoglyphic abnormalities, cardiac anomalies and immunological defects. Mutation of the lysine methyltransferase 2D (KMT2D) gene (formerly known as MLL2) is the primary cause of KS. The present study reported the case of a 4-year-old Chinese girl who presented with atypical KS, including atypical facial features, unclear speech and suspected mental retardation. A diagnosis of KS was confirmed by genetic testing, which revealed a nonsense mutation in exon 16 of KMT2D (c.4485C>A, Tyr1495Ter). To the best of our knowledge, this is a novel mutation that has not been reported previously. The present case underscores the importance of genetic testing in KS diagnosis. PMID:27573763
Damage Assessment and Digital 2D-3D Documentation of Petra Treasury
NASA Astrophysics Data System (ADS)
Bala'awi, Fadi; Alshawabkeh, Yahya; Alawneh, Firas; Masri, Eyed al
The Treasury is the icon monument of the world heritage site of the ancient city of Petra. Unfortunately, this important part of the world's cultural heritage is gradually being diminished by weathering and erosion. This gives rise to the need for a comprehensive study and full documentation of the monument in order to evaluate its status. In this research, a comprehensive approach utilizing 2D-3D documentation of the structure by laser scanning and photogrammetry is carried out in parallel with laboratory analysis and a correlation study of the salt content and the surface weathering forms. In addition, the research extends to evaluate a set of chemical and physical properties of the case-study monument. Studies of stone texture and the spatial distribution of soluble salts were carried out at the monument in order to explain the mechanism of the weathering problem. A series of field investigations and laboratory analyses was then undertaken to study the effects of relative humidity, temperature, and wind, the main factors in the salt damage process. The 3D modelling provides accurate geometric and radiometric properties of the damage shape. In order to support the visual quality of 3D surface details and cracks, a hybrid approach combining data from the laser scanner and the digital imagery was developed. Based on the findings, salt damage appears to be one of the main problems at this monument. Although the total soluble salt content is quite low, salt contamination is present in all tested samples in all seasons, with higher concentrations at deep intervals. The thermodynamic calculations carried out in this research have also shown that salt damage could be minimised by controlling the surrounding relative humidity conditions. This measure is undoubtedly the most challenging of all, and its application, if deemed feasible, should be carried out in parallel with other conservation measures.
Calculating tissue shear modulus and pressure by 2D Log-Elastographic methods
McLaughlin, Joyce R; Zhang, Ning; Manduca, Armando
2010-01-01
Shear modulus imaging, often called elastography, enables detection and characterization of tissue abnormalities. In this paper the data are two displacement components obtained from successive MR or ultrasound data sets acquired while the tissue is excited mechanically. A 2D plane strain elastic model is assumed to govern the 2D displacement, u. The shear modulus, μ, is unknown, and whether or not the first Lamé parameter, λ, is known, the pressure p = λ∇ · u which is present in the plane strain model cannot be measured; it is unreliably computed from measured data and can be shown to be an order-one quantity in units of kPa. So here we present a 2D Log-Elastographic inverse algorithm that: (1) simultaneously reconstructs the shear modulus, μ, and p, which together satisfy a first-order partial differential equation system, with the goal of imaging μ; (2) controls potential exponential growth in the numerical error; and (3) reliably reconstructs the quantity p in the inverse algorithm as compared to the same quantity computed with a forward algorithm. This work generalizes the Log-Elastographic algorithm in [20], which uses one displacement component, is derived assuming the component satisfies the wave equation, and is tested on synthetic data computed with the wave equation model. The 2D Log-Elastographic algorithm is tested on 2D synthetic data and 2D in-vivo data from the Mayo Clinic. We also exhibit examples to show that the 2D Log-Elastographic algorithm improves the quality of the recovered images as compared to the Log-Elastographic and Direct Inversion algorithms. PMID:21822349
ERIC Educational Resources Information Center
Kidd, David E.
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This study guide on water pollution includes the following units: (1) Overview of World Pollution; (2) History, Definition, Criteria; (3) Ecosystem Theory; (4) Biological…
Public Policy Analysis. Test Edition. AAAS Study Guides on Contemporary Problems No. 3.
ERIC Educational Resources Information Center
Ostrom, Elinor
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This publication on Public Policy Analysis includes 11 sections: (1) The Purposes of this Study Guide; (2) Urban Reform Proposals; (3) Evaluating Reform Proposals; (4)…
Atmospheric Sciences. Test Edition. AAAS Study Guides on Contemporary Problems, No. 6.
ERIC Educational Resources Information Center
Schaefer, Vincent J.; Mohnen, Volker A.
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This study guide includes the following sections: (1) Solar Radiation and Its Interaction with the Earth's Atmosphere System; (2) The Water Cycle; (3) Fundamentals of Air…
ERIC Educational Resources Information Center
Lorber, Michael F.; Egeland, Byron
2011-01-01
The prediction of conduct problems (CPs) from infant difficulty and parenting measured in the first 6 months of life was studied in a sample of 267 high-risk mother-child dyads. Stable, cross-situational CPs at school entry (5-6 years) were predicted by negative infancy parenting, mediated by mutually angry and hostile mother-toddler interactions…
Invitational Conference on Testing Problems. Proceedings (New York City, November 2, 1968).
ERIC Educational Resources Information Center
Educational Testing Service, Princeton, NJ.
Papers presented at this conference discussed the educational evaluation and the problems of the socially disadvantaged. Topics were: "The Comparative Field Experiment: An Illustration from High School Biology," by Richard C. Anderson; "Evaluation of Teacher Training in a Title III Center," by Ethna R. Reid; "Evaluating a…
The English Translation and Testing of the Problems after Discharge Questionnaire
ERIC Educational Resources Information Center
Holland, Diane E.; Mistiaen, Patriek; Knafl, George J.; Bowles, Kathryn H.
2011-01-01
The quality of hospital discharge planning assessments determines whether patients receive the health and social services they need or are sent home with unmet needs and without services. There is a valid and reliable Dutch instrument that measures problems and unmet needs patients encounter after discharge. This article describes the translation…
A Test of Problem Behavior and Self-Medication Theories in Incarcerated Adolescent Males
ERIC Educational Resources Information Center
Esposito-Smythers, Christianne; Penn, Joseph V.; Stein, L. A. R.; Lacher-Katz, Molly; Spirito, Anthony
2008-01-01
The purpose of this study is to examine the problem behavior and self-medication models of alcohol abuse in incarcerated male adolescents. Male adolescents (N = 56) incarcerated in a juvenile correction facility were administered a battery of psychological measures. Approximately 84% of adolescents with clinically significant alcohol-related…
Ethical Issues and the Life Sciences. Test Edition. AAAS Study Guides on Contemporary Problems.
ERIC Educational Resources Information Center
Kieffer, George H.
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This study guide on Ethical Issues and the Life Sciences includes the following sections: (1) Introduction; (2) The Search for an Ethic; (3) Biomedical Issues including…
ERIC Educational Resources Information Center
Simic, Andrei
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This guide focuses on the ethnology of traditional and complex societies. Part I, Simple and Complex Societies, includes three sections: (1) Introduction: Anthropologists…
ERIC Educational Resources Information Center
Bolkan, San; Goodboy, Alan K.
2016-01-01
Protection motivation theory (PMT) explains people's adaptive behavior in response to personal threats. In this study, PMT was used to predict rhetorical dissent episodes related to 210 student reports of perceived classroom problems. In line with theoretical predictions, a moderated moderation analysis revealed that students were likely to voice…
Simulation Technique in the Teaching and Testing of Problem-Solving Skills.
ERIC Educational Resources Information Center
McGuire, Christine
This presentation, by an invited speaker at the 46th annual meeting of the National Association for Research in Science Teaching, describes simulation techniques used in the medical education program at the University of Illinois. Medical students interact with simulated patients and acquire problem-solving competencies for use in working with…
ERIC Educational Resources Information Center
Mesa, Vilma; Wladis, Claire; Watkins, Laura
2014-01-01
This commentary articulates the need to investigate problems of mathematics instruction at community colleges. The authors briefly describe some features of this often-ignored institution and the current status of research. They also make an argument for how investigations of instruction in this setting can both advance understanding of this…
Boyan, B D; Hurst-Kennedy, J; Denison, T A; Schwartz, Z
2010-07-01
Previously we showed that costochondral growth plate resting zone (RC) chondrocytes respond primarily to 24R,25(OH)2D3 whereas prehypertrophic and hypertrophic (GC) cells respond to 1α,25(OH)2D3. 24R,25(OH)2D3 increases RC cell proliferation and inhibits the activity of matrix-processing enzymes, suggesting it stabilizes cells in the reserve zone, possibly by inhibiting the matrix degradation characteristic of apoptotic hypertrophic GC cells. To test this, apoptosis was induced in rat RC cells by treatment with exogenous inorganic phosphate (Pi). 24R,25(OH)2D3 blocked the apoptotic effects in a dose-dependent manner. Similarly, apoptosis was induced in ATDC5 cell cultures and 24R,25(OH)2D3 blocked this effect. Further studies indicated that 24R,25(OH)2D3 acts via at least two independent pathways. 24R,25(OH)2D3 increases LPA receptor-1 (LPA R1) expression and production of lysophosphatidic acid (LPA), and subsequent LPA R1/3-dependent signaling, thereby decreasing p53 abundance. LPA also increases the Bcl-2/Bax ratio. In addition, 24R,25(OH)2D3 acts by increasing PKC activity. 24R,25(OH)2D3 stimulates 1-hydroxylase activity, resulting in increased levels of 1α,25(OH)2D3, and it increases levels of phospholipase A2 activating protein, which is required for rapid 1α,25(OH)2D3-dependent activation of PKC in GC cells. These results suggest that 24R,25(OH)2D3 modulates growth plate development by controlling the rate and extent of the RC chondrocyte transition to a GC chondrocyte phenotype.
2D/3D Program work summary report, January 1988--December 1992
Damerell, P. S.; Simons, J. W.
1993-06-01
The 2D/3D Program was carried out by Germany, Japan and the United States to investigate the thermal-hydraulics of a PWR large-break LOCA. A contributory approach was utilized in which each country contributed significant effort to the program and all three countries shared the research results. Germany constructed and operated the Upper Plenum Test Facility (UPTF), and Japan constructed and operated the Cylindrical Core Test Facility (CCTF) and the Slab Core Test Facility (SCTF). The US contribution consisted of provision of advanced instrumentation to each of the three test facilities, and assessment of the TRAC computer code against the test results. Evaluations of the test results were carried out in all three countries. This report summarizes the 2D/3D Program in terms of the contributing efforts of the participants.
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
Multi-scale 2-D Gaussian filters have been widely used in feature extraction (e.g. SIFT, edge detection, etc.), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. To address this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in-first-out memory, named CAFIFO (Column Addressing FIFO), was designed to avoid error propagation induced by glitches on the clock. Finally, a shared-memory framework was designed to reduce memory costs. As a demonstration, we realized a three-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period and is thus suitable for real-time image processing. Moreover, the main principle can be extended to other convolution-based operators, such as the Gabor filter, the Sobel operator, and so on.
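The multiplier-saving separation mentioned in the abstract (one 2-D Gaussian pass as two 1-D convolutions) is easy to verify in software; the following NumPy sketch is my own illustration, not the paper's FPGA design, and checks that a row pass followed by a column pass reproduces the full 2-D convolution with the outer-product kernel.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian taps of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x * x / (2.0 * sigma * sigma))
    return g / g.sum()

def filter_separable(img, g):
    """2-D Gaussian filtering as two 1-D 'same'-size convolutions (zero-padded)."""
    rows = np.apply_along_axis(np.convolve, 1, img, g, mode='same')
    return np.apply_along_axis(np.convolve, 0, rows, g, mode='same')

# Example: a multi-scale bank is just one separable pass per sigma.
img = np.eye(8)
bank = [filter_separable(img, gaussian_kernel(s, radius=3)) for s in (0.5, 1.0, 2.0)]
```

For a kernel of size k x k this costs 2k multiplies per pixel instead of k²; this arithmetic saving, not any approximation, is what the hardware design exploits, since the separable result is exactly equal to the 2-D convolution.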
ERIC Educational Resources Information Center
Norris, John M.
2015-01-01
Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…
Ziegenhain, U; Müller, B; Rauh, H
1996-01-01
In this article, the influence of the quality of attachment (Ainsworth Strange Situation at 21 months) and of the intensity of attachment insecurity on test performance and emotional state in the test situation (Bayley test at 20 months) is analyzed. The quality of attachment of 75 infants was classified according to Crittenden's PAA (Preschool Assessment of Attachment) as secure (B), insecure-defended (A), or insecure-coercive (C). Alternately, the infants were classified according to the intensity of their insecurity of attachment across subtypes (secure, insecure, highly insecure). Securely attached (B) infants had the best Bayley Mental scores, were socially open, and were bodily relaxed. The insecure-defended (A) infants had moderate test results and were moderately open and tense, whereas the insecure-coercive (C) infants not only showed the worst test results but were often withdrawn, fearful, tense, and poorly coordinated. Additional clinical signs of disorganization were spread unspecifically over all attachment groups, particularly the insecure children. In the classification of children according to intensity of insecurity, these signs of disorganization accumulated particularly in the group of highly insecure infants. Children with highly insecure attachment who also exhibited unusual test-situation behavior had the lowest Bayley Mental Scale scores. These results are interpreted in terms of a balance between test engagement and emotional cost.
The Major Field Test in Business: A Solution to the Problem of Assurance of Learning Assessment?
ERIC Educational Resources Information Center
Green, Jeffrey J.; Stone, Courtenay Clifford; Zegeye, Abera
2014-01-01
Colleges and universities are being asked by numerous sources to provide assurance of learning assessments of their students and programs. Colleges of business have responded by using a plethora of assessment tools, including the Major Field Test in Business. In this article, the authors show that the use of the Major Field Test in Business for…
Problem-Solving Test: Analysis of DNA Damage Recognizing Proteins in Yeast and Human Cells
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2013-01-01
The experiment described in this test was aimed at identifying DNA repair proteins in human and yeast cells. Terms to be familiar with before you start to solve the test: DNA repair, germline mutation, somatic mutation, inherited disease, cancer, restriction endonuclease, radioactive labeling, [alpha-[superscript 32]P]ATP, [gamma-[superscript…
ERIC Educational Resources Information Center
Educational Testing Service, Princeton, NJ.
Three themes were addressed at the conference: (1) implications of factor analysis for achievement testing; (2) use of achievement tests in awarding course credits; and (3) extended conceptions of evaluation in higher education. The speeches were entitled: Factors of Verbal Achievement, by John B. Carroll; Schools of Thought in Judging Excellence…
ERIC Educational Resources Information Center
Tobin, Michael; Hill, Eileen
2010-01-01
An examination is made of the value of using published personality tests with young blind and partially sighted children. Based on data gathered during a longitudinal investigation into the educational and psychological development of a group of 120 visually impaired learners, the authors conclude that their own selection of a test instrument…
Direct and Inverse Problems of Item Pool Design for Computerized Adaptive Testing
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.
2009-01-01
The recent literature on computerized adaptive testing (CAT) has developed methods for creating CAT item pools from a large master pool. Each CAT pool is designed as a set of nonoverlapping forms reflecting the skill levels of an assumed population of test takers. This article presents a Monte Carlo method to obtain these CAT pools and discusses…
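The draw-and-accept idea behind Monte Carlo pool assembly can be illustrated with a toy sketch; the function name, the single mean-difficulty acceptance rule, and all parameter values below are hypothetical stand-ins for the article's actual optimality criteria, not its method.

```python
import random

def assemble_cat_pools(master_pool, n_pools, pool_size,
                       target_mean, tol, n_trials=2000, seed=0):
    """Monte Carlo assembly of disjoint item pools from a master pool.

    master_pool maps item id -> difficulty. Each trial shuffles the
    master pool, slices it into n_pools nonoverlapping pools, and
    accepts the trial if every pool's mean difficulty lies within
    tol of target_mean. Returns the accepted pools, or None if no
    trial succeeds.
    """
    rng = random.Random(seed)
    items = list(master_pool)
    for _ in range(n_trials):
        rng.shuffle(items)
        pools = [items[k * pool_size:(k + 1) * pool_size]
                 for k in range(n_pools)]
        if all(abs(sum(master_pool[i] for i in p) / pool_size
                   - target_mean) <= tol
               for p in pools):
            return pools
    return None
```

A realistic implementation would replace the mean-difficulty check with constraints reflecting the assumed distribution of test-taker skill levels, but the repeated-draw-and-accept structure is the same.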
Xie, Donghao; Ji, Ding-Kun; Zhang, Yue; Cao, Jun; Zheng, Hu; Liu, Lin; Zang, Yi; Li, Jia; Chen, Guo-Rong; James, Tony D; He, Xiao-Peng
2016-08-04
Here we demonstrate that 2D MoS2 can enhance the receptor-targeting and imaging ability of a fluorophore-labelled ligand. The 2D MoS2 has an enhanced working concentration range when compared with graphene oxide, resulting in the improved imaging of both cell and tissue samples.
The FDA and genetic testing: improper tools for a difficult problem
Willmarth, Kirk
2015-01-01
The US Food and Drug Administration (FDA) has recently issued draft guidance on how it intends to regulate laboratory-developed tests, including genetic tests. This article argues that genetic tests differ from traditional targets of FDA regulation in both product as well as industry landscape, and that the FDA's traditional tools are ill-suited for regulating this space. While existing regulatory gaps do create risks in genetic testing, the regulatory burden of the FDA's proposal introduces new risks for both test providers and patients that may offset the benefits. Incremental expansion of current oversight outside of the FDA can mitigate many of the risks necessitating increased oversight while avoiding the creation of new ones that could undermine this industry. PMID:27774193
Efficient 2D MRI relaxometry using compressed sensing
NASA Astrophysics Data System (ADS)
Bai, Ruiliang; Cloninger, Alexander; Czaja, Wojciech; Basser, Peter J.
2015-06-01
Potential applications of 2D relaxation spectrum NMR and MRI to characterize complex water dynamics (e.g., compartmental exchange) in biology and other disciplines have increased in recent years. However, the large amount of data and long MR acquisition times required for conventional 2D MR relaxometry limit its applicability for in vivo preclinical and clinical MRI. We present a new MR pipeline for 2D relaxometry that incorporates compressed sensing (CS) as a means to vastly reduce the amount of 2D relaxation data needed for material and tissue characterization without compromising data quality. Unlike the conventional CS reconstruction in the Fourier space (k-space), the proposed CS algorithm is applied directly in the Laplace space (the joint 2D relaxation data), without compressing k-space, to reduce the amount of data required for 2D relaxation spectra. This framework is validated using synthetic data, with NMR data acquired in a well-characterized urea/water phantom, and on fixed porcine spinal cord tissue. The quality of the CS-reconstructed spectra was comparable to that of the conventional 2D relaxation spectra, as assessed using global correlation, local contrast between peaks, peak amplitude and relaxation parameters, etc. This result brings this important type of contrast closer to being realized in preclinical, clinical, and other applications.
2D vs. 3D mammography observer study
NASA Astrophysics Data System (ADS)
Fernandez, James Reza F.; Hovanessian-Larsen, Linda; Liu, Brent
2011-03-01
Breast cancer is the most common type of non-skin cancer in women. 2D mammography is a screening tool to aid in the early detection of breast cancer, but has diagnostic limitations of overlapping tissues, especially in dense breasts. 3D mammography has the potential to improve detection outcomes by increasing specificity, and a new 3D screening tool with a 3D display for mammography aims to improve performance and efficiency as compared to 2D mammography. An observer study using a mammography phantom was performed to compare traditional 2D mammography with this new 3D mammography technique. In comparing 3D and 2D mammography there was no difference in calcification detection, and mass detection was better in 2D as compared to 3D. However, reading time for masses, calcifications, and normals decreased significantly in 3D compared to 2D, and confidence levels when reading normal cases were more favorable. Given the limitations of the mammography phantom used, a clearer comparison of 3D and 2D mammography may require the incorporation of human studies in the future.
Wei, Hongjiang; Zhang, Yuyao; Gibbs, Eric; Chen, Nan-Kuei; Wang, Nian; Liu, Chunlei
2017-04-01
Quantitative susceptibility mapping (QSM) measures tissue magnetic susceptibility and typically relies on time-consuming three-dimensional (3D) gradient-echo (GRE) MRI. Recent studies have shown that two-dimensional (2D) multi-slice gradient-echo echo-planar imaging (GRE-EPI), which is commonly used in functional MRI (fMRI) and other dynamic imaging techniques, can also be used to produce data suitable for QSM with much shorter scan times. However, the production of high-quality QSM maps is difficult because data obtained by 2D multi-slice scans often have phase inconsistencies across adjacent slices and strong susceptibility field gradients near air-tissue interfaces. To address these challenges in 2D EPI-based QSM studies, we present a new data processing procedure that integrates 2D and 3D phase processing. First, 2D Laplacian-based phase unwrapping and 2D background phase removal are performed to reduce phase inconsistencies between slices and remove in-plane harmonic components of the background phase. This is followed by 3D background phase removal for the through-plane harmonic components. The proposed phase processing was evaluated with 2D EPI data obtained from healthy volunteers, and compared against conventional 3D phase processing using the same 2D EPI datasets. Our QSM results were also compared with QSM values from time-consuming 3D GRE data, which were taken as ground truth. The experimental results show that this new 2D EPI-based QSM technique can produce quantitative susceptibility measures that are comparable with those of 3D GRE-based QSM across different brain regions (e.g. subcortical iron-rich gray matter, cortical gray and white matter). This new 2D EPI QSM reconstruction method is implemented within STI Suite, which is a comprehensive shareware for susceptibility imaging and quantification. Copyright © 2016 John Wiley & Sons, Ltd.
NKG2D receptor and its ligands in host defense
Lanier, Lewis L.
2015-01-01
NKG2D is an activating receptor expressed on the surface of natural killer (NK) cells, CD8+ T cells, and subsets of CD4+ T cells, iNKT cells, and γδ T cells. In humans, NKG2D transmits signals by its association with the DAP10 adapter subunit, and in mice alternatively spliced isoforms transmit signals either using DAP10 or DAP12 adapter subunits. Although NKG2D is encoded by a highly conserved gene (KLRK1) with limited polymorphism, the receptor recognizes an extensive repertoire of ligands, encoded by at least 8 genes in humans (MICA, MICB, RAET1E, RAET1G, RAET1H, RAET1I, RAET1L, and RAET1N), some with extensive allelic polymorphism. Expression of the NKG2D ligands is tightly regulated at the level of transcription, translation, and post-translation. In general, healthy adult tissues do not express NKG2D glycoproteins on the cell surface, but these ligands can be induced by hyper-proliferation and transformation, as well as when cells are infected by pathogens. Thus, the NKG2D pathway serves as a mechanism for the immune system to detect and eliminate cells that have undergone “stress”. Viruses and tumor cells have devised numerous strategies to evade detection by the NKG2D surveillance system and diversification of the NKG2D ligand genes likely has been driven by selective pressures imposed by pathogens. NKG2D provides an attractive target for therapeutics in the treatment of infectious diseases, cancer, and autoimmune diseases. PMID:26041808
2-D Versus 3-D Magnetotelluric Data Interpretation
NASA Astrophysics Data System (ADS)
Ledo, Juanjo
2005-09-01
In recent years, the number of publications dealing with the mathematical and physical 3-D aspects of the magnetotelluric method has increased drastically. However, field experiments on a grid are often impractical and surveys are frequently restricted to single or widely separated profiles. So, in many cases we find ourselves with the following question: is the applicability of the 2-D hypothesis valid to extract geoelectric and geological information from real 3-D environments? The aim of this paper is to explore a few instructive but general situations to understand the basics of a 2-D interpretation of 3-D magnetotelluric data and to determine which data subset (TE-mode or TM-mode) is best for obtaining the electrical conductivity distribution of the subsurface using 2-D techniques. A review of the mathematical and physical fundamentals of the electromagnetic fields generated by a simple 3-D structure allows us to prioritise the choice of modes in a 2-D interpretation of responses influenced by 3-D structures. This analysis is corroborated by numerical results from synthetic models and by real data acquired by other authors. One important result of this analysis is that the mode most unaffected by 3-D effects depends on the position of the 3-D structure with respect to the regional 2-D strike direction. When the 3-D body is normal to the regional strike, the TE-mode is affected mainly by galvanic effects, while the TM-mode is affected by galvanic and inductive effects. In this case, a 2-D interpretation of the TM-mode is prone to error. When the 3-D body is parallel to the regional 2-D strike the TE-mode is affected by galvanic and inductive effects and the TM-mode is affected mainly by galvanic effects, making it more suitable for 2-D interpretation. In general, a wise 2-D interpretation of 3-D magnetotelluric data can be a guide to a reasonable geological interpretation.
Long-Read Single Molecule Real-Time Full Gene Sequencing of Cytochrome P450-2D6.
Qiao, Wanqiong; Yang, Yao; Sebra, Robert; Mendiratta, Geetu; Gaedigk, Andrea; Desnick, Robert J; Scott, Stuart A
2016-03-01
The cytochrome P450-2D6 (CYP2D6) enzyme metabolizes ∼25% of common medications, yet homologous pseudogenes and copy number variants (CNVs) make interrogating the polymorphic CYP2D6 gene with short-read sequencing challenging. Therefore, we developed a novel long-read, full gene CYP2D6 single molecule real-time (SMRT) sequencing method using the Pacific Biosciences platform. Long-range PCR and CYP2D6 SMRT sequencing of 10 previously genotyped controls identified expected star (*) alleles, but also enabled suballele resolution, diplotype refinement, and discovery of novel alleles. Coupled with an optimized variant-calling pipeline, CYP2D6 SMRT sequencing was highly reproducible as triplicate intra- and inter-run nonreference genotype results were completely concordant. Importantly, targeted SMRT sequencing of upstream and downstream CYP2D6 gene copies characterized the duplicated allele in 15 control samples with CYP2D6 CNVs. The utility of CYP2D6 SMRT sequencing was further underscored by identifying the diplotypes of 14 samples with discordant or unclear CYP2D6 configurations from previous targeted genotyping, which again included suballele resolution, duplicated allele characterization, and discovery of a novel allele and tandem arrangement. Taken together, long-read CYP2D6 SMRT sequencing is an innovative, reproducible, and validated method for full-gene characterization, duplication allele-specific analysis, and novel allele discovery, which will likely improve CYP2D6 metabolizer phenotype prediction for both research and clinical testing applications.
Acquisition and Resilience Under Test Stress of Structurally Different Problem Solving Procedures
1973-05-01
markedly different structures during the learning of the same new concept." For the "how much" theories of the acquisition process, there is...which is the main determinant of learning outcome. However, a crucial question for the "what kind" theories is: How can the internal...memory and the process of solving problems. June 1972. 38. Greeno, J. G. & Bjork, R. A. Mathematical learning theory and the new "mental forestry
Standardization methods for testing photo-catalytic air remediation materials: Problems and solution
NASA Astrophysics Data System (ADS)
Ifang, S.; Gallus, M.; Liedtke, S.; Kurtenbach, R.; Wiesen, P.; Kleffmann, J.
2014-07-01
In the present study, problems of different methods used for quantifying the air remediation activity of photo-catalytically active surfaces are described. It is demonstrated that in bed photo-reactors (e.g. ISO), transport limitations can lead to underestimation of the activity if fast heterogeneous reactions are investigated. In contrast, in stirred tank photo-reactors (e.g. UNI), complex secondary chemistry may lead to an overestimation of the photo-catalytic remediation of NOx if NO2 is also present. In addition, the quantities used for ranking the photo-catalytic air remediation activity in the different methods are not independent of the applied experimental conditions, which makes any intercomparison between the different methods, or the extrapolation to atmospheric conditions, very difficult. Furthermore, unrealistically high NOx levels are used, for which the chemical kinetics may already be affected by surface saturation problems. Finally, it is shown that the use of only nitrogen monoxide (NO) will not enable users to judge the quality and effectiveness of a photo-catalytic surface for improving air quality, since surfaces that are active toward NO may be completely non-reactive toward other important atmospheric pollutants. A modified method for quantifying the air remediation activity of photo-catalytic surfaces is proposed here to overcome these problems.
Recent advances in 2D materials for photocatalysis.
Luo, Bin; Liu, Gang; Wang, Lianzhou
2016-04-07
Two-dimensional (2D) materials have attracted increasing attention for photocatalytic applications because of their unique thickness dependent physical and chemical properties. This review gives a brief overview of the recent developments concerning the chemical synthesis and structural design of 2D materials at the nanoscale and their applications in photocatalytic areas. In particular, recent progress on the emerging strategies for tailoring 2D material-based photocatalysts to improve their photo-activity including elemental doping, heterostructure design and functional architecture assembly is discussed.
Comparison of 2D and 3D gamma analyses
Pulliam, Kiley B.; Huang, Jessie Y.; Howell, Rebecca M.; Followill, David; Kry, Stephen F.; Bosca, Ryan; O’Daniel, Jennifer
2014-02-15
Purpose: As clinics begin to use 3D metrics for intensity-modulated radiation therapy (IMRT) quality assurance, it must be noted that these metrics will often produce results different from those produced by their 2D counterparts. 3D and 2D gamma analyses would be expected to produce different values, in part because of the different search space available. In the present investigation, the authors compared the results of 2D and 3D gamma analysis (where both datasets were generated in the same manner) for clinical treatment plans. Methods: Fifty IMRT plans were selected from the authors’ clinical database, and recalculated using Monte Carlo. Treatment planning system-calculated (“evaluated dose distributions”) and Monte Carlo-recalculated (“reference dose distributions”) dose distributions were compared using 2D and 3D gamma analysis. This analysis was performed using a variety of dose-difference (5%, 3%, 2%, and 1%) and distance-to-agreement (5, 3, 2, and 1 mm) acceptance criteria, low-dose thresholds (5%, 10%, and 15% of the prescription dose), and data grid sizes (1.0, 1.5, and 3.0 mm). Each comparison was evaluated to determine the average 2D and 3D gamma, lower 95th percentile gamma value, and percentage of pixels passing gamma. Results: The average gamma, lower 95th percentile gamma value, and percentage of passing pixels for each acceptance criterion demonstrated better agreement for 3D than for 2D analysis for every plan comparison. The average difference in the percentage of passing pixels between the 2D and 3D analyses with no low-dose threshold ranged from 0.9% to 2.1%. Similarly, using a low-dose threshold resulted in a difference between the mean 2D and 3D results, ranging from 0.8% to 1.5%. The authors observed no appreciable differences in gamma with changes in the data density (constant difference: 0.8% for 2D vs 3D). Conclusions: The authors found that 3D gamma analysis resulted in up to 2.9% more pixels passing than 2D analysis. It must
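The 2D gamma comparison described in this abstract can be sketched as a brute-force global gamma computation; the normalization to max(ref), the default criteria, and the low-dose threshold handling below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def gamma_2d(ref, ev, spacing=1.0, dose_tol=0.03, dta=3.0, threshold=0.10):
    """Brute-force global 2D gamma index.

    ref, ev   : reference and evaluated dose on the same 2D grid
    spacing   : pixel size in mm
    dose_tol  : dose-difference criterion, as a fraction of max(ref)
    dta       : distance-to-agreement criterion in mm
    threshold : low-dose cutoff, as a fraction of max(ref); reference
                pixels below it are excluded (returned as NaN)
    """
    dd = dose_tol * ref.max()        # absolute dose-difference criterion
    cutoff = threshold * ref.max()
    ny, nx = ref.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    gamma = np.full(ref.shape, np.nan)
    for iy in range(ny):
        for ix in range(nx):
            if ref[iy, ix] < cutoff:
                continue
            # minimize over every evaluated point: distance term + dose term
            dist2 = ((ys - iy) ** 2 + (xs - ix) ** 2) * spacing ** 2
            dose2 = (ev - ref[iy, ix]) ** 2
            gamma[iy, ix] = np.sqrt(np.min(dist2 / dta ** 2 + dose2 / dd ** 2))
    return gamma
```

The percentage of passing pixels is then the fraction of non-NaN pixels with γ ≤ 1. A 3D version extends the minimum-search to a third axis, which can only lower each γ value and therefore tends to raise passing rates, consistent with the findings above.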