Test Problem: Tilted Rayleigh-Taylor for 2-D Mixing Studies
Andrews, Malcolm J.; Livescu, Daniel; Youngs, David L.
2012-08-14
reasonable quality photographic data. The photographs in Figure 2 also reveal the appearance of a boundary layer at the left and right walls; this boundary layer has not been included in the test problem as preliminary calculations suggested it had a negligible effect on plume penetration and RT mixing. The significance of this test problem is that, unlike planar RT experiments such as the Rocket-Rig (Youngs, 1984), Linear Electric Motor - LEM (Dimonte, 1990), or the Water Tunnel (Andrews, 1992), the Tilted-Rig is a unique two-dimensional RT mixing experiment that has experimental data and now (in this TP) Direct Numerical Simulation data from Livescu and Wei. The availability of DNS data for the tilted-rig has made this TP viable as it provides detailed results for comparison purposes. The purpose of the test problem is to provide 3D simulation results, validated by comparison with experiment, which can be used for the development and validation of 2D RANS models. When such models are applied to 2D flows, various physics issues are raised such as double counting, combined buoyancy and shear, and 2-D strain, which have not yet been adequately addressed. The current objective of the test problem is to compare key results, which are needed for RANS model validation, obtained from high-Reynolds number DNS, high-resolution ILES or LES with explicit sub-grid-scale models. The experiment is incompressible and so is directly suitable for algorithms that are designed for incompressible flows (e.g. pressure correction algorithms with multi-grid); however, we have extended the TP so that compressible algorithms, run at low Mach number, may also be used if careful consideration is given to initial pressure fields. Thus, this TP serves as a useful tool for incompressible and compressible simulation codes, and mathematical models. 
In the remainder of this TP we provide a detailed specification; the next section provides the underlying assumptions for the TP, fluids, geometry details
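For orientation, the planar-RT quantities this TP builds on (the Atwood number and the self-similar bubble penetration) can be sketched numerically. The densities and the growth constant alpha_b below are hypothetical placeholder values, not the tilted-rig specification:

```python
# Classical planar Rayleigh-Taylor scalings (a sketch only; the tilted rig
# itself is a more complex 2-D configuration). Values are hypothetical.
rho_heavy, rho_light = 1.89, 1.0    # hypothetical fluid densities
g = 9.81                            # m/s^2
A = (rho_heavy - rho_light) / (rho_heavy + rho_light)  # Atwood number

alpha_b = 0.06  # commonly quoted bubble growth constant (assumed value)

def bubble_penetration(t):
    # self-similar bubble-front penetration h_b = alpha_b * A * g * t^2
    return alpha_b * A * g * t * t

print(A, bubble_penetration(1.0))
```

This is only the planar self-similar scaling that RANS models are usually calibrated against; the tilted case adds the 2-D effects the TP is designed to probe.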
Evaluation of the entropy consistent euler flux on 1D and 2D test problems
NASA Astrophysics Data System (ADS)
Roslan, Nur Khairunnisa Hanisah; Ismail, Farzad
2012-06-01
Most CFD simulations may yield good predictions of pressure and velocity when compared to experimental data. Unfortunately, these results will most likely not adhere to the second law of thermodynamics, hence compromising the authenticity of the predicted data. Currently, the test of a good CFD code is to check how much entropy is generated in a smooth flow and whether the numerical entropy produced is of the correct sign when a shock is encountered. Herein, a shock-capturing code written in C++ based on a recent entropy consistent Euler flux is developed to simulate 1D and 2D flows. Unlike other finite volume schemes in commercial CFD codes, this entropy consistent (EC) flux function precisely satisfies the discrete second law of thermodynamics. The EC flux has an entropy-conserved part, preserving entropy for smooth flows, and a numerical diffusion part that produces the proper amount of entropy, consistent with the second law. Several numerical simulations of the entropy consistent flux have been tested on two-dimensional test cases. The first case is a Mach 3 flow over a forward-facing step. The second case is a flow over a NACA 0012 airfoil, while the third case is a hypersonic flow passing over a 2D cylinder. Local flow quantities such as velocity and pressure are analyzed and then compared mainly with the Roe flux. The results herein show that the EC flux does not capture the unphysical rarefaction shock, unlike the Roe flux, and does not easily succumb to the carbuncle phenomenon. In addition, the EC flux maintains good performance in cases where the Roe flux is known to be superior.
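The sign test described above can be illustrated on a scalar analogue: for Burgers' equation with the Godunov flux (an entropy-stable scheme standing in here for the EC Euler flux of the abstract), the total discrete entropy sum of u^2/2 should never increase. A minimal sketch, assuming periodic boundaries:

```python
import numpy as np

def godunov_flux(ul, ur):
    # exact Riemann (Godunov) flux for Burgers' equation, f(u) = u^2/2
    if ul <= ur:                        # rarefaction: minimise f over [ul, ur]
        if ul <= 0.0 <= ur:
            return 0.0
        return min(ul * ul, ur * ur) / 2.0
    return max(ul * ul, ur * ur) / 2.0  # shock: maximise f

def step(u, dt, dx):
    right = np.roll(u, -1)              # periodic right neighbours
    F = np.array([godunov_flux(u[i], right[i]) for i in range(u.size)])
    return u - dt / dx * (F - np.roll(F, 1))

N = 200
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.5
entropies = [float(np.sum(0.5 * u * u) * dx)]
for _ in range(400):
    dt = 0.4 * dx / max(np.abs(u).max(), 1e-12)   # CFL-limited step
    u = step(u, dt, dx)
    entropies.append(float(np.sum(0.5 * u * u) * dx))
diffs = np.diff(entropies)
print(diffs.max() <= 1e-10)             # discrete entropy never increases
```

Entropy is nearly conserved while the profile is smooth and is dissipated once the shock forms, which is exactly the behaviour the EC flux enforces for the Euler equations.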
Boundary identification for 2-D parabolic problems arising in thermal testing of materials
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kojima, Fumio
1988-01-01
Problems on the identification of two-dimensional spatial domains arising in the detection and characterization of structural flaws in materials are considered. For a thermal diffusion system with external boundary input, observations of the temperature on the surface are used in an output least square approach. Parameter estimation techniques based on the method of mappings are discussed, and approximation schemes are developed based on a finite-element Galerkin approach. Theoretical convergence results for computational techniques are given, and the results are applied to the identification of two kinds of boundary shapes.
2D and 3D Traveling Salesman Problem
ERIC Educational Resources Information Center
Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt
2011-01-01
When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…
On 2D bisection method for double eigenvalue problems
Ji, X.
1996-06-01
The two-dimensional bisection method presented in (SIAM J. Matrix Anal. Appl. 13(4), 1085 (1992)) is efficient for solving a class of double eigenvalue problems. This paper further extends the 2D bisection method to full matrix cases and analyses its stability. As in the single-parameter case, the 2D bisection method is very stable for tridiagonal matrix triples satisfying the symmetric-definite condition. Since double eigenvalue problems arise from two-parameter boundary value problems, an estimate of the discretization error in the eigenpairs is also given. Some numerical examples are included.
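The single-parameter building block here is the classical Sturm-count bisection for a symmetric tridiagonal matrix; the 2D method of the paper runs such counts in two parameters. A minimal 1D sketch (the matrix below is a hypothetical example, not from the paper):

```python
def sturm_count(d, e, x):
    # number of eigenvalues < x for the symmetric tridiagonal matrix with
    # diagonal d and off-diagonal e, via the LDL^T (Sturm sequence) recurrence
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300          # nudge to avoid division by zero
        if q < 0.0:
            count += 1
    return count

def bisect_eig(d, e, k, lo, hi, tol=1e-12):
    # k-th smallest eigenvalue (0-indexed), bracketed in [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(d, e, mid) <= k:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# discrete 1D Laplacian: eigenvalues are 2 - 2 cos(k*pi/5), k = 1..4
d, e = [2.0] * 4, [-1.0] * 3
print(bisect_eig(d, e, 0, 0.0, 4.0))
```

The bracket [0, 4] comes from Gershgorin's theorem; the 2D method replaces the scalar count by counts over a rectangle in the two spectral parameters.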
2-D or not 2-D, that is the question: A Northern California test
Mayeda, K; Malagnini, L; Phillips, W S; Walter, W R; Dreger, D
2005-06-06
Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. The complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Using the same station and event distribution, we compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) Applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz, however for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only 1-D coda correction is available it is still preferable over 2
A smart repair embedded memetic algorithm for 2D shape matching problems
NASA Astrophysics Data System (ADS)
Sharif Khan, Mohammad; Mohamad Ayob, Ahmad F.; Isaacs, Amitay; Ray, Tapabrata
2012-10-01
Shape representation plays a major role in any shape optimization exercise. The ability to identify a shape with good performance is dependent on both the flexibility of the shape representation scheme and the efficiency of the optimization algorithm. In this article, a memetic algorithm is presented for 2D shape matching problems. The shape is represented using B-splines, in which the control points representing the shape are repaired and subsequently evolved within the optimization framework. The underlying memetic algorithm is a multi-feature hybrid that combines the strength of a real coded genetic algorithm, differential evolution and a local search. The efficiency of the proposed algorithm is illustrated using three test problems, wherein the shapes were identified using a mere 5000 function evaluations. Extension of the approach to deal with problems of unknown shape complexity is also presented in the article.
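The control-point representation used above rests on standard B-spline evaluation; a minimal Cox-de Boor sketch follows (the control points and knot vector are hypothetical, and the repair/optimization machinery of the paper is not shown):

```python
def basis(i, p, u, U):
    # Cox-de Boor recursion for the B-spline basis function N_{i,p}(u)
    if p == 0:
        if U[i] <= u < U[i + 1]:
            return 1.0
        # include the right endpoint of the last non-empty span
        if u == U[-1] and U[i] <= u <= U[i + 1] and U[i] < U[i + 1]:
            return 1.0
        return 0.0
    left = right = 0.0
    if U[i + p] > U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * basis(i, p - 1, u, U)
    if U[i + p + 1] > U[i + 1]:
        right = ((U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1])
                 * basis(i + 1, p - 1, u, U))
    return left + right

def curve_point(ctrl, p, u, U):
    # evaluate the curve C(u) = sum_i N_{i,p}(u) * P_i
    bs = [basis(i, p, u, U) for i in range(len(ctrl))]
    return (sum(b * cx for b, (cx, _) in zip(bs, ctrl)),
            sum(b * cy for b, (_, cy) in zip(bs, ctrl)))

# clamped cubic with 5 hypothetical control points
ctrl = [(0.0, 0.0), (1.0, 2.0), (2.0, -1.0), (3.0, 2.0), (4.0, 0.0)]
U = [0.0, 0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0, 1.0]
print(curve_point(ctrl, 3, 0.5, U))
```

With a clamped knot vector the curve interpolates the first and last control points, which is why evolving the control points is a convenient parameterization for shape matching.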
Parallel algorithms for 2-D cylindrical transport equations of Eigenvalue problem
Wei, J.; Yang, S.
2013-07-01
In this paper, aimed at the neutron transport equations of eigenvalue problem under 2-D cylindrical geometry on unstructured grid, the discrete scheme of Sn discrete ordinate and discontinuous finite is built, and the parallel computation for the scheme is realized on MPI systems. Numerical experiments indicate that the designed parallel algorithm can reach perfect speedup, it has good practicality and scalability. (authors)
Generalized 2D problem of icosahedral quasicrystals containing an elliptic hole
NASA Astrophysics Data System (ADS)
Li, Lian-He
2013-11-01
The generalized 2D problem of icosahedral quasicrystals containing an elliptic hole is considered by using the extended Stroh formalism. The closed-form solutions for the displacements and stresses are obtained under general loading conditions. The solution of the Griffith crack problem is also obtained as a special case of the results. The stress intensity factor and strain energy release rate are given. The effect of the phonon-phason coupling elastic constant on the mechanical behavior is also discussed.
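For reference, the classical isotropic Griffith-crack relations that the quasicrystal solution generalizes (there with additional phonon-phason coupling terms) are K_I = sigma*sqrt(pi*a) and G = K_I^2/E'. A sketch with hypothetical material numbers, not the quasicrystal constants of the paper:

```python
import math

# Classical isotropic Griffith crack under remote tension (plane strain).
# All numbers are hypothetical illustration values.
sigma = 100.0e6   # remote tensile stress, Pa
a = 0.01          # crack half-length, m
E, nu = 70.0e9, 0.33

K_I = sigma * math.sqrt(math.pi * a)   # mode-I stress intensity factor
E_prime = E / (1.0 - nu ** 2)          # effective modulus, plane strain
G = K_I ** 2 / E_prime                 # strain energy release rate (Irwin)
print(K_I, G)
```

In the quasicrystal case the Irwin-type relation picks up extra terms from the phason field, which is precisely the coupling effect the paper quantifies.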
Hertz-Mindlin problem for arbitrary oblique 2D loading: General solution by memory diagrams
NASA Astrophysics Data System (ADS)
Aleshin, V.; Van Den Abeele, K.
2012-01-01
In this paper we present a new general solution to the fundamental problem of frictional contact of two elastic spheres, also known as the Hertz-Mindlin (HM) problem. The description of spheres in contact is a central topic in contact mechanics. It became a foundation of many applications, such as the friction of rough surfaces and the mechanics of granular materials and rocks, etc. Moreover, it serves as a theoretical background in modern nonlinear acoustics and elasticity, e.g. seismology and nondestructive testing. However, despite many efforts, a rigorous analytical solution for the general case when arbitrary normal and tangential forces are present is still missing, mainly because the traction distribution within the contact zone is convoluted and hardly tractable, even under relatively simple external action. Here, accepting a number of traditional limitations such as 2D loading and the existence of a functional dependence between normal and tangential forces, we propose an original way of replacing the complex traction distributions by simple graphical counterparts called memory diagrams, and we formulate a procedure that enables initiating and maintaining these memory diagrams following an arbitrary loading history. For each memory diagram, the solution can be expressed by closed-form analytical formulas that we have derived using known techniques suggested by Mindlin, Deresiewicz, and others. So far, to the best of our knowledge, arbitrary loading histories have been treated only numerically. Implementation of the proposed memory diagram method provides an easy-to-use computer-assisted analytical solution with a high level of generality. Examples and results illustrate the variety and richness of effects that can be encountered in a geometrically simple system of two contacting spheres.
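The normal (Hertz) part of the problem has the familiar closed form F = (4/3) E* sqrt(R) delta^(3/2); the paper's contribution concerns the tangential (Mindlin) part with loading-history dependence, which has no such simple formula. A sketch of the normal part only, with hypothetical steel-like values:

```python
import math

# Hertz normal contact of two identical elastic spheres (a sketch; the
# frictional Mindlin part with memory effects is what the paper addresses).
# Material and geometry values are hypothetical.
E, nu, R1, R2 = 200e9, 0.3, 0.01, 0.01          # moduli in Pa, radii in m

E_star = 1.0 / ((1 - nu ** 2) / E + (1 - nu ** 2) / E)  # effective modulus
R = 1.0 / (1.0 / R1 + 1.0 / R2)                         # effective radius

def hertz_force(delta):
    # normal force vs. mutual approach delta: F = (4/3) E* sqrt(R) delta^(3/2)
    return 4.0 / 3.0 * E_star * math.sqrt(R) * delta ** 1.5

def contact_radius(delta):
    # radius of the circular contact patch
    return math.sqrt(R * delta)

print(hertz_force(1e-6), contact_radius(1e-6))
```

The nonlinearity F proportional to delta^(3/2) is what makes the coupled normal-tangential history problem, and hence the memory-diagram bookkeeping, nontrivial.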
Coupling finite and boundary element methods for 2-D elasticity problems
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Raju, I. S.; Sistla, R.
1993-01-01
A finite element-boundary element (FE-BE) coupling method for two-dimensional elasticity problems is developed based on a weighted residual variational method in which a portion of the domain of interest is modeled by FEs and the remainder of the region by BEs. The performance of the FE-BE coupling method is demonstrated via applications to a simple 'patch test' problem and three-crack problems. The method passed the patch tests for various modeling configurations and yielded accurate strain energy release rates for the crack problems studied.
Fluctuating Pressure Data from 2-D Nozzle Cold Flow Tests (Dual Bell)
NASA Technical Reports Server (NTRS)
Nesman, Tomas E.
2001-01-01
Rocket engine nozzle performance changes as a vehicle climbs through the atmosphere. An altitude compensating nozzle, ACN, is intended to improve on a fixed-geometry bell nozzle that performs at optimum at only one trajectory point. In addition to nozzle performance, nozzle transient loads are an important consideration. Any nozzle experiences large transient loads when shocks pass through the nozzle at start and shutdown. Additional transient loads will occur at transitional flow conditions. The objectives of cold flow nozzle testing at MSFC are CFD benchmark / calibration and unsteady flow / sideloads. Initial testing was performed with 2-D inserts in the 14-inch transonic wind tunnel. The 2-D data were recently reviewed in preparation for 3-D testing in the nozzle test facility. This presentation shows fluctuating pressure data and some observations from 2-D dual-bell nozzle cold flow tests.
NASA Astrophysics Data System (ADS)
Hochman, Amit; Leviatan, Yehuda; White, Jacob K.
2013-04-01
A computational scheme for solving 2D Laplace boundary-value problems using rational functions as the basis functions is described. The scheme belongs to the class of desingularized methods, for which the location of singularities and testing points is a major issue; this issue is addressed by the proposed scheme in the context of the 2D Laplace equation. Well-established rational-function fitting techniques are used to set the poles, while residues are determined by enforcing the boundary conditions in the least-squares sense at the nodes of rational Gauss-Chebyshev quadrature rules. Numerical results show that errors approaching the machine epsilon can be obtained for sharp and almost sharp corners, nearly-touching boundaries, and almost-singular boundary data. We show various examples of these cases in which the method yields compact solutions, requiring fewer basis functions than the Nyström method for the same accuracy. A scheme for solving fairly large-scale problems is also presented.
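A close cousin of this desingularized approach is easy to sketch: the method of fundamental solutions, which also keeps all singularities off the boundary (here simple log sources on an exterior circle, fit by least squares, rather than the paper's rational functions with fitted poles). The geometry and boundary data below are a hypothetical smooth test case:

```python
import numpy as np

# Method-of-fundamental-solutions sketch for the 2D Laplace equation on the
# unit disk: point sources sit outside the domain, and their strengths are
# fit to the boundary data by least squares.
n_src, n_col = 40, 80
th_s = 2 * np.pi * np.arange(n_src) / n_src
th_c = 2 * np.pi * np.arange(n_col) / n_col
src = 2.0 * np.exp(1j * th_s)          # sources on a circle of radius 2
col = np.exp(1j * th_c)                # collocation points on the unit circle

g = np.cos(2 * th_c)                   # boundary trace of u = Re(z^2)
A = np.log(np.abs(col[:, None] - src[None, :]))   # log-source basis
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

def u(z):
    # harmonic approximation evaluated at a complex point z in the disk
    return np.log(np.abs(z - src)) @ coef

print(abs(u(0.3 + 0.2j) - (0.3 ** 2 - 0.2 ** 2)))   # interior error
```

As with the paper's scheme, the interior error is controlled by the boundary fit (maximum principle), so smooth data yield errors near machine precision.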
A 2D inverse problem of predicting boiling heat transfer in a long fin
NASA Astrophysics Data System (ADS)
Orzechowski, Tadeusz
2016-10-01
A method for the determination of local values of the heat transfer coefficient on non-isothermal surfaces was analyzed on the example of a long smooth-surfaced fin made of aluminium. On the basis of the experimental data, two cases were taken into consideration: a one-dimensional model for Bi < 0.1 and a two-dimensional model for thicker elements. In the case when the drop in temperature over the thickness could be neglected, the local values of the rejected heat flux were calculated from the integral of the equation describing the temperature distribution on the fin. The corresponding boiling curve was plotted on the basis of the temperature gradient distribution as a function of superheat. For thicker specimens, where Bi > 0.1, the problem was modelled using a 2-D heat conduction equation, for which the boundary conditions were posed on the surface observed with a thermovision camera. The ill-conditioned inverse problem was solved using a method of heat polynomials, which required validation.
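The 1-D (Bi < 0.1) forward model underlying the first case is the standard fin equation, whose constant-coefficient, insulated-tip solution is theta(x) = theta_b cosh(m(L - x))/cosh(mL) with m = sqrt(hP/kA); the inverse step of the paper recovers the local h from measured temperatures instead. A sketch with hypothetical fin parameters, not the experimental aluminium fin data:

```python
import math

# Forward 1-D fin model with an insulated tip; all values are hypothetical.
h, k = 250.0, 205.0          # heat transfer coeff. W/m^2K, conductivity W/mK
t, w, L = 0.002, 0.05, 0.1   # thickness, width, length in m
P = 2 * (w + t)              # perimeter of the cross-section
A = w * t                    # cross-sectional area
m = math.sqrt(h * P / (k * A))

def theta(x, theta_b=60.0):
    # excess temperature over ambient at position x along the fin
    return theta_b * math.cosh(m * (L - x)) / math.cosh(m * L)

print(theta(0.0), theta(L))
```

In the inverse problem the measured theta(x) profile is given and h (hidden inside m, and in general varying along the fin) is the unknown, which is what makes the problem ill-conditioned.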
2-D Path Corrections for Local and Regional Coda Waves: A Test of Transportability
Mayeda, K M; Malagnini, L; Phillips, W S; Walter, W R; Dreger, D S; Morasca, P
2005-07-13
Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. [2003] has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. We will compare performance of 1-D versus 2-D path corrections in a variety of regions. First, the complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Next, we will compare results for the Italian Alps using high frequency data from the University of Genoa. For Northern California, we used the same station and event distribution and compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) Applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz, however for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter
Moran, B
2005-06-02
We present test problems that can be used to check the hydrodynamic implementation in computer codes designed to model the implosion of a National Ignition Facility (NIF) capsule. The problems are simplified, yet one of them is three-dimensional. It consists of a nearly-spherical incompressible imploding shell subjected to an exponentially decaying pressure on its outer surface. We present a semi-analytic solution for the time-evolution of that shell with arbitrary small three-dimensional perturbations on its inner and outer surfaces. The perturbations on the shell surfaces are intended to model the imperfections that are created during capsule manufacturing.
Fung, Jimmy; Masser, Thomas; Morgan, Nathaniel R.
2012-06-25
The Sedov test is classically defined as a point blast problem. The Sedov problem has led us to advances in algorithms and in their understanding. Vorticity generation can be physical or numerical. Both play a role in Sedov calculations. The RAGE code (Eulerian) resolves the shock well, but produces vorticity. The source definition matters. For the FLAG code (Lagrange), CCH is superior to SGH by avoiding spurious vorticity generation. FLAG SGH currently has a number of options that improve results over traditional settings. Vorticity production, not shock capture, has driven the Sedov work. We are pursuing treatments with respect to the hydro discretization as well as to artificial viscosity.
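The Sedov problem referenced above has the self-similar point-blast scaling R(t) = xi * (E t^2 / rho0)^(1/5), which is what verification runs compare against; xi depends on gamma (xi ~ 1.15 for gamma = 1.4 is assumed here), and E, rho0 are hypothetical:

```python
# Sedov-Taylor point-blast scaling; xi ~ 1.15 for gamma = 1.4 (assumed),
# blast energy and ambient density are hypothetical values.
xi, E, rho0 = 1.15, 1.0e14, 1.2   # -, J, kg/m^3

def blast_radius(t):
    # self-similar shock radius R(t) = xi * (E t^2 / rho0)^(1/5)
    return xi * (E * t * t / rho0) ** 0.2

# self-similarity check: doubling the time scales the radius by 2^(2/5)
r1, r2 = blast_radius(1.0), blast_radius(2.0)
print(r2 / r1)
```

Shock position against this scaling is the easy part of the test; as the abstract notes, it is the spurious vorticity behind the shock that discriminates between the hydro discretizations.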
Evaluation of a [13C]-Dextromethorphan Breath Test to Assess CYP2D6 Phenotype
Leeder, J. Steven; Pearce, Robin E.; Gaedigk, Andrea; Modak, Anil; Rosen, David I.
2016-01-01
A [13C]-dextromethorphan ([13C]-DM) breath test was evaluated to assess its feasibility as a rapid phenotyping assay for CYP2D6 activity. [13C]-DM (0.5 mg/kg) was administered orally with water or potassium bicarbonate-sodium bicarbonate to 30 adult Caucasian volunteers (n = 1 each): CYP2D6 poor metabolizers (2 null alleles; PM-0) and extensive metabolizers with 1 (EM-1) or 2 functional alleles (EM-2). CYP2D6 phenotype was determined by 13CO2 enrichment measured by infrared spectrometry (delta-over-baseline [DOB] value) in expired breath samples collected before and up to 240 minutes after [13C]-DM ingestion and by 4-hour urinary metabolite ratio. The PM-0 group was readily distinguishable from either EM group by both the breath test and urinary metabolite ratio. Using a single point determination of phenotype at 40 minutes and defining PMs as subjects with a DOB ≤ 0.5, the sensitivity of the method was 100%; specificity was 95% with 95% accuracy and resulted in the misclassification of 1 EM-1 individual as a PM. Modification of the initial protocol (timing of potassium bicarbonate-sodium bicarbonate administration relative to dose) yielded comparable results, but there was a tendency toward increased DOB values. Although further development is required, these studies suggest that the [13C]-DM breath test offers promise as a rapid, minimally invasive phenotyping assay for CYP2D6 activity. PMID:18728242
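The reported sensitivity/specificity/accuracy figures follow from a standard 2x2 confusion-matrix calculation. A sketch with hypothetical counts, chosen only to mirror "one EM called PM" and not taken from the study's actual group sizes:

```python
# Diagnostic-accuracy arithmetic; the counts below are hypothetical.
def diagnostics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)            # true PMs correctly called PM
    specificity = tn / (tn + fp)            # true EMs correctly called EM
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# e.g. every true PM flagged, one EM misclassified as PM:
sens, spec, acc = diagnostics(tp=10, fn=0, tn=19, fp=1)
print(sens, spec, acc)
```

With a single-threshold rule (DOB <= 0.5 at 40 minutes) these three numbers summarize the whole classifier, which is why the cutoff choice dominates the reported performance.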
Moran, B
2007-08-08
We present analytic solutions to two test problems that can be used to check the hydrodynamic implementation in computer codes designed to calculate the propagation of shocks in spherically convergent geometry. Our analysis is restricted to fluid materials with constant bulk modulus. In the first problem we present the exact initial acceleration and pressure gradient at the outer surface of a sphere subjected to an exponentially decaying pressure of the form P(t) = P_0 e^(-at). We show that finely-zoned hydro-code simulations are in good agreement with our analytic solution. In the second problem we discuss the implosions of incompressible spherical fluid shells and we present the radial pressure profile across the shell thickness. We also discuss a semi-analytic solution to the time-evolution of a nearly spherical shell with arbitrary but small initial 3-dimensional (3-D) perturbations on its inner and outer surfaces.
2D MHD test-particle simulations in modeling geomagnetic storms
NASA Astrophysics Data System (ADS)
Li, Z.; Elkington, S. R.; Hudson, M. K.; Murphy, J. J.; Schmitt, P.; Wiltberger, M. J.
2012-12-01
The effects of magnetic storms on the evolution of the electron radiation belts are studied using MHD test-particle simulations. The 2D guiding center code developed by Elkington et al. (2002) has been used to simulate particle motion in the Solar Magnetic equatorial plane in the MHD fields calculated from the Lyon-Fedder-Mobarry global MHD code. However, our study shows that the B-minimum plane is well off the SM equatorial plane during solstice events. Since 3D test-particle simulation is computationally expensive, we improve the 2D model by pushing particles in the B-minimum surface instead of the SM equatorial plane. Paraview software is used to visualize the LFM data file and to find the B-minimum surface. Magnetic and electric fields on B-minimum surface are projected to the equatorial plane for particle pushing.
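The lowest-order drift such a guiding-center pusher advances is the E x B drift, v_d = (E x B)/|B|^2, independent of particle charge and mass. A minimal sketch with hypothetical field values, not the LFM fields used in the study:

```python
import numpy as np

# E x B guiding-center drift; field values below are hypothetical.
def exb_drift(E, B):
    # v_d = (E x B) / |B|^2, in m/s for E in V/m and B in T
    E = np.asarray(E, dtype=float)
    B = np.asarray(B, dtype=float)
    return np.cross(E, B) / np.dot(B, B)

v = exb_drift([0.0, 1.0e-3, 0.0], [0.0, 0.0, 1.0e-5])
print(v)   # drift along +x
```

In the full 2D code this drift is combined with gradient and curvature drifts on the (here B-minimum) surface, with the fields interpolated from the MHD solution at each step.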
NASA Astrophysics Data System (ADS)
Szerszeń, Krzysztof; Zieniuk, Eugeniusz
2016-06-01
The paper presents a strategy for the numerical solving of a parametric integral equation system (PIES) for 2D potential problems without explicit calculation of singular integrals. The values of these integrals are expressed indirectly in terms of easy-to-compute non-singular integrals. The effectiveness of the proposed strategy is investigated with the example of a potential problem modeled by the Laplace equation. The strategy simplifies the structure of the program while maintaining good accuracy of the obtained solutions.
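The idea of trading a singular integral for easy non-singular ones can be illustrated by classic singularity subtraction (a simpler stand-in for the PIES-kernel treatment the paper develops): since the integral of ln x over [0, 1] is -1, the weakly singular integral of f(x) ln x equals the integral of (f(x) - f(0)) ln x, whose integrand is bounded, minus f(0):

```python
import math

# Singularity subtraction: evaluate I = int_0^1 f(x) ln(x) dx indirectly as
#   I = int_0^1 (f(x) - f(0)) ln(x) dx - f(0),
# using int_0^1 ln(x) dx = -1; the first integrand is bounded at x = 0.
def singular_integral(f, n=20000):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h          # midpoint rule never touches x = 0
        total += (f(x) - f(0.0)) * math.log(x) * h
    return total - f(0.0)

I = singular_integral(math.cos)    # int_0^1 cos(x) ln(x) dx = -Si(1)
print(I)
```

A plain midpoint rule applied to cos(x) ln(x) directly would converge far more slowly; the subtraction moves the singular information into a known closed form, which is the same mechanism the PIES strategy exploits.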
NASA Astrophysics Data System (ADS)
Tang, Shanzhi; Yu, Shengrui; Han, Qingfu; Li, Ming; Wang, Zhao
2016-09-01
The circular test is an important technique for assessing motion accuracy in many fields, especially machine tools and coordinate measuring machines. Setup errors arise from direct centring of the measuring instrument, for both the contact double ball bar and existing non-contact methods. To solve this problem, an algorithm for the circular test using function construction based on matrix operations is proposed, which is used not only for the solution of the radial deviation (F) but also to obtain two other evaluation parameters, especially circular hysteresis (H). Furthermore, an improved optical configuration with a single laser is presented based on a 2D laser heterodyne interferometer. Compared with existing non-contact methods, it has a purer homogeneity of the laser sources of 2D displacement sensing for advanced metrology. The algorithm and modeling are both illustrated, and an error budget is also given. Finally, to validate them, test experiments for motion paths were implemented on a gantry machining center. Contrast test results support the proposal.
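The reference circle behind a radial-deviation evaluation is typically a least-squares fit; a minimal Kasa-fit sketch on synthetic data follows (the center, radius, and form-error values are hypothetical, and this is not the paper's matrix-construction algorithm):

```python
import numpy as np

# Least-squares (Kasa) circle fit: the reference circle from which radial
# deviation F = max - min of the radial residuals is evaluated.
def fit_circle(x, y):
    # solve x^2 + y^2 = 2a x + 2b y + c in the least-squares sense
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
    return a, b, np.sqrt(c + a ** 2 + b ** 2)

# synthetic measured path: center (1, 2), radius 50, small form error
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
x = 1.0 + 50.0 * np.cos(t) + 0.01 * np.cos(3 * t)
y = 2.0 + 50.0 * np.sin(t)
cx, cy, r = fit_circle(x, y)
rad = np.hypot(x - cx, y - cy)
F = rad.max() - rad.min()              # radial deviation
print(cx, cy, r, F)
```

Because the fitted center is computed from the data, the evaluation is insensitive to a small setup offset, which is the practical point of removing direct centring.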
An investigation of DTNS2D for use as an incompressible turbulence modelling test-bed
NASA Technical Reports Server (NTRS)
Steffen, Christopher J., Jr.
1992-01-01
This paper documents an investigation of a two-dimensional, incompressible Navier-Stokes solver for use as a test-bed for turbulence modelling. DTNS2D is the code under consideration for use at the Center for Modelling of Turbulence and Transition (CMOTT). This code was created by Gorski at the David Taylor Research Center and incorporates the pseudo-compressibility method. Two laminar benchmark flows are used to measure the performance and implementation of the method. The classical solution of the Blasius boundary layer is used for validating the flat plate flow, while experimental data are incorporated in the validation of the backward-facing step flow. Velocity profiles, convergence histories, and reattachment lengths are used to quantify these calculations. The organization and adaptability of the code are also examined in light of its role as a numerical test-bed.
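The Blasius validation mentioned above has a compact numerical form: solve f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f'(inf) = 1 by shooting on the wall shear f''(0), whose classical value in this convention is about 0.332057. A sketch of that reference computation, not the DTNS2D procedure itself:

```python
def rhs(y):
    # y = (f, f', f''); Blasius: f''' = -0.5 * f * f''
    return (y[1], y[2], -0.5 * y[0] * y[2])

def fprime_at_inf(s, eta_max=10.0, n=2000):
    # march to a large eta with classic RK4, starting from f''(0) = s
    h = eta_max / n
    y = (0.0, 0.0, s)
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(tuple(y[i] + 0.5 * h * k1[i] for i in range(3)))
        k3 = rhs(tuple(y[i] + 0.5 * h * k2[i] for i in range(3)))
        k4 = rhs(tuple(y[i] + h * k3[i] for i in range(3)))
        y = tuple(y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return y[1]

lo, hi = 0.1, 1.0                      # bracket for the wall shear f''(0)
for _ in range(40):                    # bisect so that f'(inf) -> 1
    mid = 0.5 * (lo + hi)
    if fprime_at_inf(mid) < 1.0:
        lo = mid
    else:
        hi = mid
s_wall = 0.5 * (lo + hi)
print(s_wall)                          # classical value is ~0.332057
```

Comparing a solver's computed wall shear and velocity profile against this self-similar solution is exactly the flat-plate check the paper applies to DTNS2D.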
NASA Astrophysics Data System (ADS)
Tanaka, Satoyuki; Suzuki, Hirotaka; Sadamoto, Shota; Sannomaru, Shogo; Yu, Tiantang; Bui, Tinh Quoc
2016-08-01
Two-dimensional (2D) in-plane mixed-mode fracture mechanics problems are analyzed employing an efficient meshfree Galerkin method based on stabilized conforming nodal integration (SCNI). In this setting, the reproducing kernel function is taken as the meshfree interpolant, while the SCNI is employed for numerical integration of the stiffness matrix in the Galerkin formulation. The strain components are smoothed and stabilized employing the Gauss divergence theorem. The path-independent integral (J-integral) is computed based on the nodal integration by summing the smoothed physical quantities and the segments of the contour integrals. In addition, mixed-mode stress intensity factors (SIFs) are extracted from the J-integral by decomposing the displacement and stress fields into symmetric and antisymmetric parts. The advantages and features of the present formulation and discretization in the evaluation of the J-integral for in-plane 2D fracture problems are demonstrated through several representative numerical examples. The mixed-mode SIFs are evaluated and compared with reference solutions. The obtained results reveal the high accuracy and good performance of the proposed meshfree method in the analysis of 2D fracture problems.
2D Control Problem and TVD-Particle Method for Water Treatment System
NASA Astrophysics Data System (ADS)
Louaked, M.; Saïdi, A.
2011-11-01
This work studies an optimal control problem relating to water pollution. We analyze various questions: existence, uniqueness, control, and the regularized formulation of the initial pointwise control problem. We also propose an implementation of a hybrid numerical scheme associated with a descent algorithm.
An ant colony optimisation algorithm for the 2D and 3D hydrophobic polar protein folding problem
Shmygelska, Alena; Hoos, Holger H
2005-01-01
Background The protein folding problem is one of the fundamental problems in computational molecular biology and biochemical physics. Various optimisation methods have been applied to formulations of the ab-initio folding problem that are based on reduced models of protein structure, including Monte Carlo methods, Evolutionary Algorithms, Tabu Search and hybrid approaches. In our work, we have introduced an ant colony optimisation (ACO) algorithm to address the non-deterministic polynomial-time hard (NP-hard) combinatorial problem of predicting a protein's conformation from its amino acid sequence under a widely studied, conceptually simple model – the 2-dimensional (2D) and 3-dimensional (3D) hydrophobic-polar (HP) model. Results We present an improvement of our previous ACO algorithm for the 2D HP model and its extension to the 3D HP model. We show that this new algorithm, dubbed ACO-HPPFP-3, performs better than previous state-of-the-art algorithms on sequences whose native conformations do not contain structural nuclei (parts of the native fold that predominantly consist of local interactions) at the ends, but rather in the middle of the sequence, and that it generally finds a more diverse set of native conformations. Conclusions The application of ACO to this bioinformatics problem compares favourably with specialised, state-of-the-art methods for the 2D and 3D HP protein folding problem; our empirical results indicate that our rather simple ACO algorithm scales worse with sequence length but usually finds a more diverse ensemble of native states. Therefore the development of ACO algorithms for more complex and realistic models of protein structure holds significant promise. PMID:15710037
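The objective function ACO optimizes in the HP model is simple to state: place the chain as a self-avoiding walk on the lattice and score -1 for every H-H contact between residues that are lattice neighbours but not chain neighbours. A 2D sketch of that energy evaluation (the ACO search itself is not shown):

```python
# Energy of a 2D HP-model conformation, given as a string of lattice moves.
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def hp_energy(seq, directions):
    # seq: string over {H, P}; directions: len(seq) - 1 lattice moves
    pos = [(0, 0)]
    for d in directions:
        dx, dy = MOVES[d]
        x, y = pos[-1]
        pos.append((x + dx, y + dy))
    if len(set(pos)) != len(pos):
        return None                       # not self-avoiding: invalid fold
    contacts = 0
    for i in range(len(seq)):
        for j in range(i + 2, len(seq)):  # skip chain neighbours
            if seq[i] == seq[j] == "H":
                (xi, yi), (xj, yj) = pos[i], pos[j]
                if abs(xi - xj) + abs(yi - yj) == 1:
                    contacts += 1
    return -contacts

print(hp_energy("HHHH", "URD"))   # square-cornered fold: one H-H contact
```

Minimizing this energy over all self-avoiding walks is the NP-hard search space that the ACO pheromone trails explore; the 3D version simply adds two moves and a third coordinate.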
Dong, Jianping
2014-03-15
The 2D space-fractional Schrödinger equation in the time-independent and time-dependent cases is studied for scattering problems in fractional quantum mechanics. We define the Green's functions for the two cases and give their mathematical expressions in infinite series form and in terms of some special functions. The asymptotic formulas of the Green's functions are also given and applied to obtain approximate wave functions for the fractional quantum scattering problems. These results contain those of standard (integer) quantum mechanics as special cases, and can be applied to study complex quantum systems.
National Proficiency Testing Result of CYP2D6*10 Genotyping for Adjuvant Tamoxifen Therapy in China.
Lin, Guigao; Zhang, Kuo; Yi, Lang; Han, Yanxi; Xie, Jiehong; Li, Jinming
2016-01-01
Tamoxifen has been successfully used for treating breast cancer and preventing cancer recurrence. Cytochrome P450 2D6 (CYP2D6) plays a key role in the process of metabolizing tamoxifen to its active moiety, endoxifen. Patients with variants of the CYP2D6 gene may not receive the full benefit of tamoxifen treatment. The CYP2D6*10 variant (the most common variant in Asians) was analyzed to optimize the prescription of tamoxifen in China. To ensure referring clinicians have accurate information for genotype-guided tamoxifen treatment, the Chinese National Center for Clinical Laboratories (NCCL) organized a national proficiency testing (PT) to evaluate the performance of laboratories providing CYP2D6*10 genotyping. Ten genomic DNA samples with CYP2D6 wild-type or CYP2D6*10 variants were validated by PCR-sequencing and sent to 28 participant laboratories. The genotyping results and pharmacogenomic test reports were submitted and evaluated by NCCL experts. Additional information regarding the number of samples tested, the accreditation/certification status, and detecting technology was also requested. Thirty-one data sets were received, with a corresponding analytical sensitivity of 98.2% (548/558 challenges; 95% confidence interval: 96.7-99.1%) and an analytical specificity of 99.0% (675/682; 95% confidence interval: 97.9-99.5%). Overall, 25/28 participants correctly identified CYP2D6*10 status in 10 samples; however, two laboratories made serious genotyping errors. Most of the essential information was included in the 20 submitted CYP2D6*10 test reports. The majority of Chinese laboratories are reliable for detecting the CYP2D6*10 variant; however, several issues revealed in this study underline the importance of PT schemes in continued external assessment and provision of guidelines. PMID:27603206
Solution of the stationary 2D inverse heat conduction problem by the Trefftz method
NASA Astrophysics Data System (ADS)
Cialkowski, Michael J.; Frąckowiak, Andrzej
2002-05-01
The paper presents an analysis of a solution of the Laplace equation with the use of FEM harmonic basis functions. The analysis is aimed at presenting an approximate solution based on possibly large finite elements. The introduction of harmonic functions allows the order of numerical integration to be reduced as compared to a classical Finite Element Method. Numerical calculations confirm the good efficiency of using harmonic basis functions for resolving direct and inverse problems of stationary heat conduction. The further part of the paper shows the use of harmonic basis functions for solving Poisson's equation and for drawing up a complete system of biharmonic and polyharmonic basis functions.
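The Trefftz idea behind this abstract (basis functions that satisfy the governing equation exactly, so only the boundary conditions remain to be fitted) can be illustrated with harmonic polynomials and boundary collocation. This is a minimal sketch, not the paper's FEM formulation:

```python
import numpy as np

def harmonic_basis(x, y, degree):
    """Harmonic polynomials Re((x+iy)^k), Im((x+iy)^k) for k = 0..degree.
    Each one satisfies Laplace's equation exactly, so only the boundary
    conditions remain to be fitted (the Trefftz idea)."""
    z = x + 1j * y
    cols = [np.ones_like(x)]
    for k in range(1, degree + 1):
        zk = z**k
        cols.append(zk.real)
        cols.append(zk.imag)
    return np.column_stack(cols)

# Collocation points on the boundary of the unit square
t = np.linspace(0.0, 1.0, 40)
bx = np.concatenate([t, np.ones_like(t), t, np.zeros_like(t)])
by = np.concatenate([np.zeros_like(t), t, np.ones_like(t), t])

# Dirichlet data taken from the exact harmonic field u = x^2 - y^2
g = bx**2 - by**2

A = harmonic_basis(bx, by, degree=6)
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

# Evaluate at an interior point; the exact value is 0.09 - 0.16 = -0.07
u = harmonic_basis(np.array([0.3]), np.array([0.4]), 6) @ coef
print(u[0])  # -0.07 up to rounding
```

Because every basis function is exactly harmonic, no interior residual or volume integration is needed, which is the source of the reduced integration order the abstract mentions.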
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
NASA Astrophysics Data System (ADS)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-01
We propose a numerical solution of reconstruction problem of a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.
NASA Astrophysics Data System (ADS)
Mo, Yike; Greenhalgh, Stewart A.; Robertsson, Johan O. A.; Karaman, Hakki
2015-05-01
Lateral velocity variations and low velocity near-surface layers can produce strong scattered and guided waves which interfere with reflections and lead to severe imaging problems in seismic exploration. In order to investigate these specific problems by laboratory seismic modelling, a simple 2D ultrasonic model facility has been recently assembled within the Wave Propagation Lab at ETH Zurich. The simulated geological structures are constructed from 2 mm thick metal and plastic sheets, cut and bonded together. The experiments entail the use of a piezoelectric source driven by a pulse amplifier at ultrasonic frequencies to generate Lamb waves in the plate, which are detected by piezoelectric receivers and recorded digitally on a National Instruments recording system, under LabVIEW software control. The 2D models employed were constructed in-house in full recognition of the similitude relations. The first heterogeneous model features a flat uniform low velocity near-surface layer and deeper dipping and flat interfaces separating different materials. The second model is comparable but also incorporates two rectangular shaped inserts, one of low velocity, the other of high velocity. The third model is identical to the second other than it has an irregular low velocity surface layer of variable thickness. Reflection as well as transmission experiments (crosshole & vertical seismic profiling) were performed on each model. The two dominant Lamb waves recorded are the fundamental symmetric mode (non-dispersive) and the fundamental antisymmetric (flexural) dispersive mode, the latter normally being absent when the source transducer is located on a model edge but dominant when it is on the flat planar surface of the plate. Experimental group and phase velocity dispersion curves were determined and plotted for both modes in a uniform aluminium plate. For the reflection seismic data, various processing techniques were applied, as far as pre-stack Kirchhoff migration. The
OECD/MCCI 2-D Core Concrete Interaction (CCI) tests : final report February 28, 2006.
Farmer, M. T.; Lomperski, S.; Kilsdonk, D. J.; Aeschlimann, R. W.; Basu, S.
2011-05-23
reactor material database for dry cavity conditions is solely one-dimensional. Although the MACE Scoping Test was carried out with a two-dimensional concrete cavity, the interaction was flooded soon after ablation was initiated to investigate debris coolability. Moreover, due to the scoping nature of this test, the apparatus was minimally instrumented and therefore the results are of limited value from the code validation viewpoint. Aside from the MACE program, the COTELS test series also investigated 2-D CCI under flooded cavity conditions. However, the input power density for these tests was quite high relative to the prototypic case. Finally, the BETA test series provided valuable data on 2-D core concrete interaction under dry cavity conditions, but these tests focused on investigating the interaction of the metallic (steel) phase with concrete. Due to these limitations, there is significant uncertainty in the partition of energy dissipated for the ablation of concrete in the lateral and axial directions under dry cavity conditions for the case of a core oxide melt. Accurate knowledge of this 'power split' is important in the evaluation of the consequences of an ex-vessel severe accident; e.g., lateral erosion can undermine containment structures, while axial erosion can penetrate the basemat, leading to ground contamination and/or possible containment bypass. As a result of this uncertainty, there are still substantial differences among computer codes in the prediction of 2-D cavity erosion behavior under both wet and dry cavity conditions. In light of the above issues, the OECD-sponsored Melt Coolability and Concrete Interaction (MCCI) program was initiated at Argonne National Laboratory. 
The project conducted reactor materials experiments and associated analysis to achieve the following technical objectives: (1) resolve the ex-vessel debris coolability issue through a program that focused on providing both confirmatory evidence and test data for the coolability
A multiple-scale Pascal polynomial for 2D Stokes and inverse Cauchy-Stokes problems
NASA Astrophysics Data System (ADS)
Liu, Chein-Shan; Young, D. L.
2016-05-01
The polynomial expansion method is a useful tool for solving both direct and inverse Stokes problems; combined with the pointwise collocation technique, it makes it easy to derive the algebraic equations that satisfy the Stokes differential equations and the specified boundary conditions. In this paper we propose two novel numerical algorithms, based on a third-first order system and a third-third order system, to solve the direct and the inverse Cauchy problems in Stokes flows by developing a multiple-scale Pascal polynomial method, of which the scales are determined a priori by the collocation points. Numerical experiments show that the multiple-scale Pascal polynomial expansion method (MSPEM) is accurate and stable against large noise.
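The "multiple-scale" ingredient, one scale per polynomial column determined from the collocation points, is essentially a conditioning device. A minimal sketch (not the paper's third-first or third-third order systems) shows the effect on a plain monomial collocation matrix:

```python
import numpy as np

# Condition number of a plain monomial (Pascal-type) collocation matrix
# versus one whose columns are rescaled using the collocation points.
# This is a sketch of the conditioning idea, not the paper's algorithm.
x = np.linspace(0.0, 10.0, 30)
degree = 8
V = np.column_stack([x**k for k in range(degree + 1)])

# One scale per column, taken here as the column maximum over the points
scales = np.abs(V).max(axis=0)
Vs = V / scales

print(np.linalg.cond(V), np.linalg.cond(Vs))  # scaled version is far better
```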
SP2DINV: A 2D forward and inverse code for streaming potential problems
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.
2013-09-01
The self-potential method corresponds to the passive measurement of the electrical field in response to the occurrence of natural sources of current in the ground. One of these sources corresponds to the streaming current associated with the flow of the ground water. We can therefore apply the self-potential method to recover non-intrusively some information regarding the ground water flow. We first solve the forward problem starting with the solution of the ground water flow problem, then computing the source current density, and finally solving a Poisson equation for the electrical potential. We use the finite-element method to solve the relevant partial differential equations. In order to reduce the number of (petrophysical) model parameters required to solve the forward problem, we introduced an effective charge density tensor of the pore water, which can be determined directly from the permeability tensor for neutral pore waters. The second aspect of our work concerns the inversion of the self-potential data using Tikhonov regularization with smoothness and weighting depth constraints. This approach accounts for the distribution of the electrical resistivity, which can be independently and approximately determined from electrical resistivity tomography. A numerical code, SP2DINV, has been implemented in Matlab to perform both the forward and inverse modeling. Three synthetic case studies are discussed.
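The Tikhonov inversion with a smoothness constraint described in this abstract can be sketched on a toy linear problem. The operator G below is purely illustrative (a smoothing kernel standing in for the physics), not the SP2DINV streaming-potential kernels, and the regularization weight is hand-picked:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear forward operator G; NOT the SP2DINV kernels.
n_data, n_model = 30, 50
x = np.linspace(0, 1, n_model)
G = np.exp(-((np.linspace(0, 1, n_data)[:, None] - x[None, :]) ** 2) / 0.01)

m_true = np.exp(-((x - 0.5) ** 2) / 0.005)            # smooth source model
d = G @ m_true + 0.01 * rng.standard_normal(n_data)   # noisy synthetic data

# First-difference operator L encodes the smoothness constraint
L = np.diff(np.eye(n_model), axis=0)

alpha = 0.1   # regularization weight (hand-picked, not optimized)
m_est = np.linalg.solve(G.T @ G + alpha**2 * (L.T @ L), G.T @ d)

rel_err = np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)
print(rel_err)
```

The normal-equations matrix is symmetric positive definite because the smoothness penalty removes the null space of the underdetermined data term, which is exactly the role regularization plays in the paper's inversion.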
Sweetser, John David
2013-10-01
This report details Sculpt's implementation from a user's perspective. Sculpt is an automatic hexahedral mesh generation tool developed at Sandia National Labs by Steve Owen. 54 predetermined test cases are studied while varying the input parameters (Laplace iterations, optimization iterations, optimization threshold, number of processors) and measuring the quality of the resultant mesh. This information is used to determine the optimal input parameters to use for an unknown input geometry. The overall characteristics are covered in Chapter 1. The specific details of every case are then given in Appendix A. Finally, example Sculpt inputs are given in B.1 and B.2.
Veijola, Timo; Råback, Peter
2007-01-01
We present a straightforward method to solve gas damping problems for perforated structures in two dimensions (2D) utilising a Perforation Profile Reynolds (PPR) solver. The PPR equation is an extended Reynolds equation that includes additional terms modelling the leakage flow through the perforations, and variable diffusivity and compressibility profiles. The solution method consists of two phases: 1) determination of the specific admittance profile and relative diffusivity (and relative compressibility) profiles due to the perforation, and 2) solution of the PPR equation with a FEM solver in 2D. Rarefied gas corrections in the slip-flow region are also included. Analytic profiles for circular and square holes with slip conditions are presented in the paper. To verify the method, square perforated dampers with 16–64 holes were simulated with a three-dimensional (3D) Navier-Stokes solver, a homogenised extended Reynolds solver, and a 2D PPR solver. Cases of both translational motion (normal to the surfaces) and torsional motion were simulated. The presented method extends the region of accurate simulation of perforated structures to cases where the homogenisation method is inaccurate and the full 3D Navier-Stokes simulation is too time-consuming.
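The structure of such an extended Reynolds equation (a pressure diffusion term plus a leakage sink through the perforations) can be illustrated with a small finite-difference sketch of a screened, Helmholtz-type pressure equation. The coefficients are arbitrary and not taken from the paper:

```python
import numpy as np

# Finite-difference sketch of a Reynolds-type equation with a leakage term,
#   D * laplacian(p) - Y * p = -s,   p = 0 on the (vented) boundary,
# where Y (specific admittance) models flow escaping through perforations.
# All coefficients are illustrative only.
n = 41
h = 1.0 / (n - 1)
D, Y, s = 1.0, 50.0, 1.0

p = np.zeros((n, n))
for _ in range(5000):  # Jacobi iteration (RHS uses the previous iterate)
    p[1:-1, 1:-1] = (
        D * (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2])
        + s * h**2
    ) / (4 * D + Y * h**2)

print(p[n // 2, n // 2])  # bounded above by s/Y = 0.02 because of leakage
```

Without the leakage term (Y = 0) the pressure would build to the ordinary Poisson solution; the perforation admittance caps it near s/Y in the interior, which is the qualitative effect the PPR model captures.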
Numerical solution of 2D and 3D turbulent internal flow problems
NASA Astrophysics Data System (ADS)
Chen, Naixing; Xu, Yanji
1991-08-01
The paper describes a method for solving numerically two-dimensional or axisymmetric, and three-dimensional turbulent internal flow problems. The method is based on an implicit upwinding relaxation scheme with an arbitrarily shaped conservative control volume. The compressible Reynolds-averaged Navier-Stokes equations are solved with a two-equation turbulence model. All these equations are expressed by using a nonorthogonal curvilinear coordinate system. The method is applied to study the compressible internal flow in modern power installations. It has been observed that predictions for two-dimensional and three-dimensional channels show very good agreement with experimental results.
NASA Astrophysics Data System (ADS)
Velioǧlu, Deniz; Cevdet Yalçıner, Ahmet; Zaytsev, Andrey
2016-04-01
Tsunamis are huge waves with long wave periods and wave lengths that can cause great devastation and loss of life when they strike a coast. The interest in experimental and numerical modeling of tsunami propagation and inundation increased considerably after the 2011 Great East Japan earthquake. In this study, two numerical codes, FLOW 3D and NAMI DANCE, that analyze tsunami propagation and inundation patterns are considered. FLOW 3D simulates linear and nonlinear propagating surface waves as well as long waves by solving three-dimensional Navier-Stokes (3D-NS) equations. NAMI DANCE uses a finite difference computational method to solve 2D depth-averaged linear and nonlinear forms of shallow water equations (NSWE) in long wave problems, specifically tsunamis. In order to validate these two codes and analyze the differences between the 3D-NS and 2D depth-averaged NSWE equations, two benchmark problems are applied. One benchmark problem investigates the runup of long waves over a complex 3D beach. The experimental setup is a 1:400 scale model of Monai Valley located on the west coast of Okushiri Island, Japan. The other benchmark problem was discussed at the 2015 National Tsunami Hazard Mitigation Program (NTHMP) Annual Meeting in Portland, USA. It is a field dataset, recording the Japan 2011 tsunami in Hilo Harbor, Hawaii. The computed water surface elevation and velocity data are compared with the measured data. The comparisons showed that both codes are in fairly good agreement with each other and with the benchmark data. The differences between the 3D-NS and 2D depth-averaged NSWE equations are highlighted. All results are presented with discussions and comparisons. Acknowledgements: Partial support by Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in Marmara Region (JICA SATREPS - MarDiM Project), 603839 ASTARTE Project of EU, UDAP-C-12-14 project of AFAD Turkey, 108Y227, 113M556 and 213M534 projects of TUBITAK Turkey, RAPSODI (CONCERT_Dis-021) of CONCERT
Safgren, Stephanie L.; Suman, Vera J.; Kosel, Matthew L.; Gilbert, Judith A; Buhrow, Sarah A.; Black, John L.; Northfelt, Donald W.; Modak, Anil S.; Rosen, David; Ingle, James N.; Ames, Matthew M.; Reid, Joel M.; Goetz, Matthew P.
2015-01-01
Background In tamoxifen-treated patients, breast cancer recurrence differs according to CYP2D6 genotype and endoxifen steady state concentrations (Endx Css). The 13C-dextromethorphan breath test (DM-BT), labeled with 13C at the O-CH3 moiety, measures CYP2D6 enzyme activity. We sought to examine the ability of the DM-BT to identify known CYP2D6 genotypic poor metabolizers and examine the correlation between DM-BT and Endx Css. Methods DM-BT and tamoxifen pharmacokinetics were obtained at baseline (b), 3 months (3m) and 6 months (6m) following tamoxifen initiation. Potent CYP2D6 inhibitors were prohibited. The correlation of bDM-BT with CYP2D6 genotype and Endx Css was determined. The association between bDM-BT (where values ≤0.9 indicate poor in vivo CYP2D6 metabolism) and Endx Css (using values ≤11.2 nM, known to be associated with poorer recurrence-free survival) was explored. Results 91 patients were enrolled and 77 were eligible. CYP2D6 genotype was positively correlated with b, 3m and 6m DM-BT (r ranging from 0.457 to 0.60, p < 0.001). Both CYP2D6 genotype (r = 0.47; 0.56, p < .0001) and bDM-BT (r = 0.60; 0.54, p < .001) were associated with 3m and 6m Endx Css, respectively. Seven of 9 patients (78%) with low (≤11.2 nM) 3m Endx Css also had low DM-BT (≤0.9), including 2/2 CYP2D6 PM/PM and 5/5 IM/PM. In contrast, 1 of 48 patients (2%) with a low DM-BT had Endx Css > 11.2 nM. Conclusions In patients not taking potent CYP2D6 inhibitors, DM-BT was associated with CYP2D6 genotype and with 3m and 6m Endx Css but did not provide better discrimination of Endx Css compared to CYP2D6 genotype alone. Further studies are needed to identify additional factors which alter Endx Css. PMID:25714002
NASA Astrophysics Data System (ADS)
Cockmartin, Lesley; Marshall, Nicholas W.; Van Ongeval, Chantal; Aerts, Gwen; Stalmans, Davina; Zanca, Federica; Shaheen, Eman; De Keyzer, Frederik; Dance, David R.; Young, Kenneth C.; Bosmans, Hilde
2015-05-01
This paper introduces a hybrid method for performing detection studies in projection image based modalities, based on image acquisitions of target objects and patients. The method was used to compare 2D mammography and digital breast tomosynthesis (DBT) in terms of the detection performance of spherical densities and microcalcifications. The method starts with the acquisition of spheres of different glandular equivalent densities and microcalcifications of different sizes immersed in a homogeneous breast tissue simulating medium. These target objects are then segmented and the subsequent templates are fused in projection images of patients and processed or reconstructed. This results in hybrid images with true mammographic anatomy and clinically relevant target objects, ready for use in observer studies. The detection study of spherical densities used 108 normal and 178 hybrid 2D and DBT images; 156 normal and 321 hybrid images were used for the microcalcifications. Seven observers scored the presence/absence of the spheres/microcalcifications in a square region via a 5-point confidence rating scale. Detection performance in 2D and DBT was compared via ROC analysis with sub-analyses for the density of the spheres, microcalcification size, breast thickness and z-position. The study was performed on a Siemens Inspiration tomosynthesis system using acquisitions from patients with an average age of 58 years and an average breast thickness of 53 mm, providing mean glandular doses of 1.06 mGy (2D) and 2.39 mGy (DBT). Study results showed that breast tomosynthesis (AUC = 0.973) outperformed 2D (AUC = 0.831) for the detection of spheres (p < 0.0001) and this applied for all spherical densities and breast thicknesses. By way of contrast, DBT was worse than 2D for microcalcification detection (AUC2D = 0.974, AUCDBT = 0.838, p < 0.0001), with significant differences found for all sizes (150-354 µm), for breast thicknesses above 40 mm and for heights
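The ROC analysis of 5-point confidence ratings reduces, in the empirical case, to a Mann-Whitney statistic. A sketch with made-up ratings (not the study's data):

```python
import numpy as np

def auc_mann_whitney(signal_scores, noise_scores):
    """Empirical ROC area via the Mann-Whitney statistic: the probability
    that a random signal-present rating exceeds a random signal-absent
    rating, counting ties as 1/2."""
    s = np.asarray(signal_scores, dtype=float)[:, None]
    n = np.asarray(noise_scores, dtype=float)[None, :]
    return (s > n).mean() + 0.5 * (s == n).mean()

# 5-point confidence ratings, as in the observer study (made-up numbers)
present = [5, 4, 4, 3, 5, 2, 4]   # target present
absent = [1, 2, 2, 3, 1, 2]       # target absent
print(auc_mann_whitney(present, absent))  # 39/42 ≈ 0.93
```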
2D-Raman-THz spectroscopy: a sensitive test of polarizable water models.
Hamm, Peter
2014-11-14
In a recent paper, the experimental 2D-Raman-THz response of liquid water at ambient conditions has been presented [J. Savolainen, S. Ahmed, and P. Hamm, Proc. Natl. Acad. Sci. U. S. A. 110, 20402 (2013)]. Here, all-atom molecular dynamics simulations are performed with the goal to reproduce the experimental results. To that end, the molecular response functions are calculated in a first step, and are then convoluted with the laser pulses in order to enable a direct comparison with the experimental results. The molecular dynamics simulations are performed with several different water models: TIP4P/2005, SWM4-NDP, and TL4P. As polarizability is essential to describe the 2D-Raman-THz response, the TIP4P/2005 water molecules are amended with either an isotropic or an anisotropic polarizability a posteriori after the molecular dynamics simulation. In contrast, SWM4-NDP and TL4P are intrinsically polarizable, and hence the 2D-Raman-THz response can be calculated in a self-consistent way, using the same force field as during the molecular dynamics simulation. It is found that the 2D-Raman-THz response depends extremely sensitively on details of the water model, and in particular on details of the description of polarizability. Despite the limited time resolution of the experiment, it could easily distinguish between various water models. Albeit not perfect, the overall best agreement with the experimental data is obtained for the TL4P water model.
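The convolution of a molecular response function with the laser pulses, the step that enables direct comparison with experiment, can be sketched with a toy separable 2D response. The damped oscillation below merely stands in for a real water-model calculation:

```python
import numpy as np

# Toy 2D response function R(t1, t2): a damped oscillation standing in for
# the molecular response (illustrative only, not a water-model calculation).
t = np.arange(0.0, 2.0, 0.01)                      # time grid in ps
T1, T2 = np.meshgrid(t, t, indexing="ij")
R = np.exp(-(T1 + T2) / 0.5) * np.sin(20 * T1) * np.sin(20 * T2)

# Normalized Gaussian envelope standing in for the finite laser pulses
pulse = np.exp(-((t - t.mean()) ** 2) / (2 * 0.05**2))
pulse /= pulse.sum()

# Convolve along each time axis in turn (each pulse acts on one coordinate)
smear = np.apply_along_axis(lambda r: np.convolve(r, pulse, mode="same"), 0, R)
smear = np.apply_along_axis(lambda r: np.convolve(r, pulse, mode="same"), 1, smear)

# Finite pulses damp the fast oscillations of the bare response
print(np.abs(smear).max() / np.abs(R).max())
```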
OECD 2-D Core Concrete Interaction (CCI) tests : CCI-2 test plan, Rev. 0 January 31, 2004.
Farmer, M. T.; Kilsdonk, D. J.; Lomperski, S.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, The Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of two long-term 2-D Core-Concrete Interaction (CCI) experiments designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. The first of these two tests, CCI-1, was conducted on December 19, 2003. This test investigated the interaction of a fully oxidized 400 kg PWR core melt, initially containing 8 wt % calcined siliceous concrete, with a specially designed two
Comparison between 2D and 3D Numerical Modelling of a hot forging simulative test
Croin, M.; Ghiotti, A.; Bruschi, S.
2007-04-07
The paper presents the comparative analysis between 2D and 3D modelling of a simulative experiment, performed in a laboratory environment, in which operating conditions approximate hot forging of a turbine aerofoil section. The plane strain deformation was chosen as an ideal case to analyze the process because of the thickness variations in the final section and the consequent distributions of contact pressure and sliding velocity at the interface that are close to the conditions of the real industrial process. In order to compare the performances of the 2D and 3D approaches, two different analyses were performed and compared with the experiments in terms of loads and temperature peaks at the interface between the dies and the workpiece.
HT2DINV: A 2D forward and inverse code for steady-state and transient hydraulic tomography problems
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.
2015-12-01
Hydraulic tomography is a technique used to characterize the spatial heterogeneities of storativity and transmissivity fields. The responses of an aquifer to a source of hydraulic stimulations are used to recover the features of the estimated fields using inverse techniques. We developed a 2D free source Matlab package for performing hydraulic tomography analysis in steady state and transient regimes. The package uses the finite elements method to solve the ground water flow equation for simple or complex geometries accounting for the anisotropy of the material properties. The inverse problem is based on implementing the geostatistical quasi-linear approach of Kitanidis combined with the adjoint-state method to compute the required sensitivity matrices. For underdetermined inverse problems, the adjoint-state method provides a faster and more accurate approach for the evaluation of sensitivity matrices compared with the finite differences method. Our methodology is organized in a way that permits the end-user to activate parallel computing in order to reduce the computational burden. Three case studies are investigated demonstrating the robustness and efficiency of our approach for inverting hydraulic parameters.
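The adjoint-state advantage mentioned above (one extra linear solve per objective instead of one perturbed forward solve per parameter) can be verified on a tiny steady-flow system. The tridiagonal operator below is illustrative, not HT2DINV's finite-element discretization:

```python
import numpy as np

def build_A(T):
    """Tridiagonal steady-flow operator on 1D interior nodes; T holds the
    interface transmissivities. A is linear in T, so dA/dT_k = build_A(e_k)."""
    n = len(T) - 1
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = T[i] + T[i + 1]
        if i > 0:
            A[i, i - 1] = -T[i]
        if i < n - 1:
            A[i, i + 1] = -T[i + 1]
    return A

T = np.array([1.0, 2.0, 1.5, 0.8, 1.2, 1.0])  # illustrative transmissivities
q = np.ones(5)                                 # source term
A = build_A(T)
h = np.linalg.solve(A, q)                      # heads

# Objective: the head at observation node 2, J(T) = h[2].
# Adjoint state: ONE extra solve gives the gradient w.r.t. all 6 parameters.
lam = np.linalg.solve(A.T, np.eye(5)[2])
grad_adj = np.array([-lam @ build_A(np.eye(6)[k]) @ h for k in range(6)])

# Brute-force finite differences need one forward solve per parameter
eps = 1e-7
grad_fd = np.empty(6)
for k in range(6):
    Tp = T.copy()
    Tp[k] += eps
    grad_fd[k] = (np.linalg.solve(build_A(Tp), q)[2] - h[2]) / eps

print(np.max(np.abs(grad_adj - grad_fd)))  # tiny: the two gradients agree
```

With many parameters and few observations, the adjoint route scales with the number of observations rather than the number of parameters, which is why it suits underdetermined tomography problems.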
Simulation and Analysis of Converging Shock Wave Test Problems
Ramsey, Scott D.; Shashkov, Mikhail J.
2012-06-21
Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axi-symmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem, and minimally straining the general credibility of associated analysis and conclusions.
Fabrication and Testing of Low Cost 2D Carbon-Carbon Nozzle Extensions at NASA/MSFC
NASA Technical Reports Server (NTRS)
Greene, Sandra Elam; Shigley, John K.; George, Russ; Roberts, Robert
2015-01-01
Subscale liquid engine tests were conducted at NASA/MSFC using a 1.2 Klbf engine with liquid oxygen (LOX) and gaseous hydrogen. Testing was performed for main-stage durations ranging from 10 to 160 seconds at a chamber pressure of 550 psia and a mixture ratio of 5.7. Operating the engine in this manner demonstrated a new and affordable test capability for evaluating subscale nozzles by exposing them to long duration tests. A series of 2D C-C nozzle extensions were manufactured, oxidation protection applied and then tested on a liquid engine test facility at NASA/MSFC. The C-C nozzle extensions had oxidation protection applied using three very distinct methods with a wide range of costs and process times: SiC via Polymer Impregnation & Pyrolysis (PIP), Air Plasma Spray (APS) and Melt Infiltration. The tested extensions were about 6" long with an exit plane ID of about 6.6". The test results, material properties and performance of the 2D C-C extensions and attachment features will be discussed.
Liu, T.; Deptuch, G.; Hoff, J.; Jindariani, S.; Joshi, S.; Olsen, J.; Tran, N.; Trimpl, M.
2015-02-01
An associative memory-based track finding approach has been proposed for a Level 1 tracking trigger to cope with increasing luminosities at the LHC. The associative memory uses a massively parallel architecture to tackle the intrinsically complex combinatorics of track finding algorithms, thus avoiding the typical power-law dependence of execution time on occupancy and solving the pattern recognition in times roughly proportional to the number of hits. This is of crucial importance given the large occupancies typical of hadronic collisions. The design of an associative memory system capable of dealing with the complexity of HL-LHC collisions and with the short latency required by Level 1 triggering poses significant, as yet unsolved, technical challenges. For this reason, an aggressive R&D program has been launched at Fermilab to advance state-of-the-art associative memory technology: the so-called VIPRAM (Vertically Integrated Pattern Recognition Associative Memory) project. The VIPRAM leverages emerging 3D vertical integration technology to build faster and denser associative memory devices. The first step is to implement in conventional VLSI the associative memory building blocks that can be used in 3D stacking; in other words, the building blocks are laid out as if they were part of a 3D design. In this paper, we report on the first successful implementation of a 2D VIPRAM demonstrator chip (protoVIPRAM00). The results show that these building blocks are ready for 3D stacking.
Simulation of growth normal fault sandbox tests using the 2D discrete element method
NASA Astrophysics Data System (ADS)
Chu, Sheng-Shin; Lin, Ming-Lang; Huang, Wen-Chao; Nien, Wei-Tung; Liu, Huan-Chi; Chan, Pei-Chen
2015-01-01
A fault slip can cause the deformation of shallow soil layers and destroy infrastructure. The Shanchiao Fault on the west side of the Taipei Basin is one such fault. The activities of the Shanchiao Fault have deformed the Quaternary sediment beneath the Taipei Basin, damaging structures, traffic construction, and utility lines in the area. Geological drilling and dating data have been used to determine that a growth fault exists in the Shanchiao Fault. In an experiment, a sandbox model was built using noncohesive sandy soil to simulate the existence of a growth fault in the Shanchiao Fault and to forecast the effect of the growth fault on shear-band development and differential ground deformation. The experimental results indicated that when a normal fault contains a growth fault at the offset of the base rock, the shear band develops upward beside the weak side of the shear band of the original topped soil layer and surfaces considerably faster than in the single-topped layer. The offset ratio required is approximately one-third that of the single-cover soil layer. In this study, a numerical simulation of the sandbox experiment was conducted using a discrete element method program, PFC2D, to simulate the pace of shear-band development in the upper covering sand layer and the scope of a growth normal fault slip. The simulation results were similar to those of the sandbox experiment and can be applied to the design of construction projects near fault zones.
Analysis of high Reynolds numbers effects on a wind turbine airfoil using 2D wind tunnel test data
NASA Astrophysics Data System (ADS)
Pires, O.; Munduate, X.; Ceyhan, O.; Jacobs, M.; Snel, H.
2016-09-01
The aerodynamic behaviour of a wind turbine airfoil has been measured in a dedicated 2D wind tunnel test at the DNW High Pressure Wind Tunnel in Gottingen (HDG), Germany. The tests have been performed on the DU00W212 airfoil at different Reynolds numbers: 3, 6, 9, 12 and 15 million, and at low Mach numbers (below 0.1). Both clean and tripped conditions of the airfoil have been measured. An analysis of the impact of a wide Reynolds number variation over the aerodynamic characteristics of this airfoil has been performed.
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2014-08-01
The χ² principle generalizes the Morozov discrepancy principle to the augmented residual of the Tikhonov regularized least squares problem. For weighting of the data fidelity by a known Gaussian noise distribution on the measured data, when the stabilizing (regularization) term is weighted by unknown inverse covariance information on the model parameters, the minimum of the Tikhonov functional becomes a random variable that follows a χ²-distribution with m + p − n degrees of freedom for a model matrix G of size m × n, m ≥ n, and regularizer L of size p × n. Then a Newton root-finding algorithm, employing the generalized singular value decomposition (or the singular value decomposition when L = I), can be used to find the regularization parameter α. Here the result and algorithm are extended to the underdetermined case m < n, with m + p ≥ n. Numerical results first contrast and verify the generalized cross validation, unbiased predictive risk estimation, and χ² algorithms for m < n, with regularizers L approximating zeroth- to second-order derivatives. The inversion of underdetermined 2D focusing gravity data produces models with non-smooth properties, for which typical solvers in the field use an iterative minimum support stabilizer, with both the regularizer and the regularization parameter updated at each iteration. The χ² principle and the unbiased predictive risk estimator of the regularization parameter are used for the first time in this context. For a simulated underdetermined data set with noise, these regularization parameter estimation methods, as well as the generalized cross validation method, are contrasted with the use of the L-curve and the Morozov discrepancy principle. Experiments demonstrate the efficiency and robustness of the χ² principle and unbiased predictive risk estimator, moreover showing that the L-curve and Morozov discrepancy principle are outperformed in general
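In the simplest case described above (L = I, data whitened to unit noise variance), the χ² principle reduces to a one-dimensional root-finding problem: choose the regularization parameter so that the minimized Tikhonov functional equals its expected χ² value, the number of degrees of freedom. A minimal Python sketch follows, using bisection via the SVD rather than the paper's Newton/GSVD algorithm; the function name and interface are illustrative assumptions:

```python
import numpy as np

def chi2_regularization_param(G, d, dof=None, lo=1e-12, hi=1e12):
    """Find the Tikhonov parameter lam such that the minimum of
    ||G m - d||^2 + lam ||m||^2 equals its expected chi^2 value (the
    degrees of freedom), assuming d is whitened to unit noise variance
    and L = I.  Sketch of the chi^2 principle described above, not the
    paper's algorithm; assumes a root exists in [lo, hi]."""
    m, n = G.shape
    if dof is None:
        dof = m  # m + p - n with p = n for L = I
    U, s, _ = np.linalg.svd(G, full_matrices=True)
    beta = U.T @ d
    b2 = beta[:len(s)] ** 2
    resid_perp = np.sum(beta[len(s):] ** 2)  # part of d outside range(G)

    def J(lam):
        # Value of the Tikhonov functional at its minimizer; this is
        # monotone increasing in lam, so bisection on log(lam) works.
        return np.sum(lam * b2 / (s ** 2 + lam)) + resid_perp

    a, b = np.log(lo), np.log(hi)
    for _ in range(200):
        c = 0.5 * (a + b)
        if J(np.exp(c)) < dof:
            a = c
        else:
            b = c
    return np.exp(0.5 * (a + b))
```

For G = I and d = (3, 4), the functional value is 25λ/(1 + λ), so setting it equal to the two degrees of freedom gives λ = 2/23, which the bisection recovers.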
Fluctuating Pressure Analysis of a 2-D SSME Nozzle Air Flow Test
NASA Technical Reports Server (NTRS)
Reed, Darren; Hidalgo, Homero
1996-01-01
To better understand the Space Shuttle Main Engine (SSME) startup/shutdown transients, an airflow test of a two-dimensional nozzle was conducted at Marshall Space Flight Center's trisonic wind tunnel. Photographic and other instrumentation showed that during an SSME start, large nozzle shell distortions occur as the Mach disk passes through the nozzle. During earlier development of the SSME, this startup transient resulted in low-cycle fatigue failure of one of the LH2 feedlines. The two-dimensional SSME nozzle test was designed to measure the static and fluctuating pressure environment and to record color Schlieren video during the startup and shutdown phases of the run profile.
NASA Astrophysics Data System (ADS)
Pires, O.; Munduate, X.; Ceyhan, O.; Jacobs, M.; Madsen, J.; Schepers, J. G.
2016-09-01
2D wind tunnel tests at high Reynolds numbers have been done within the EU FP7 AVATAR project (Advanced Aerodynamic Tools of lArge Rotors) on the DU00-W-212 airfoil and at two different test facilities: the DNW High Pressure Wind Tunnel in Gottingen (HDG) and the LM Wind Power in-house wind tunnel. Two conditions of Reynolds numbers have been performed in both tests: 3 and 6 million. The Mach number and turbulence intensity values are similar in both wind tunnels at the 3 million Reynolds number test, while they are significantly different at 6 million Reynolds number. The paper presents a comparison of the data obtained from the two wind tunnels, showing good repeatability at 3 million Reynolds number and differences at 6 million Reynolds number that are consistent with the different Mach number and turbulence intensity values.
Critical Heat Flux Experiments on the Reactor Vessel Wall Using 2-D Slice Test Section
Jeong, Yong Hoon; Chang, Soon Heung; Baek, Won-Pil
2005-11-15
The critical heat flux (CHF) on the reactor vessel outer wall was measured using a two-dimensional slice test section. The radius and the channel area of the test section were 2.5 m and 10 cm x 15 cm, respectively. The flow channel area and the heater width were smaller than those of the ULPU experiments, but the radius was greater than that of the ULPU. CHF data under inlet subcooling of 2 to 25 deg. C and mass fluxes of 0 to 300 kg/m²·s were acquired. The measured CHF value was generally slightly lower than that of the ULPU. The difference possibly comes from differences in the test section material and thickness. However, the general trend of CHF with mass flux was similar to that of the ULPU. The experimental CHF data were compared with values predicted by the SULTAN correlation. The SULTAN correlation predicted this study's data well only for mass fluxes higher than 200 kg/m²·s and for exit qualities lower than 0.05. A local-condition-based correlation was developed, and it showed good prediction capability over broad quality (-0.01 to 0.5) and mass flux (< 300 kg/m²·s) conditions, with a root-mean-square error of 2.4%. There were increases in the CHF with trisodium-phosphate-added water.
Ignition problems in scramjet testing
Mitani, Tohru
1995-05-01
Ignition of H₂ in heated air containing H₂O, radicals, and dust was investigated for scramjet testing. Using a reduced kinetic model for H₂-O₂ systems, the effects of H₂O and radicals in nozzles are discussed in relation to engine testing with vitiation heaters. Analysis using linearized rate equations suggested that the addition of O atoms is 1.5 times more effective than the addition of H atoms for ignition. This result can be applied to the problem of premature ignition caused by residual radicals and to plasma-jet igniters. The thermal and chemical effects of dust, inevitable in storage air heaters, were studied next. The effects of the heat capacity and size of dust were expressed in terms of an exponential integral function. It was found that radical termination on the surface of dust produces an effect equivalent to heat loss. Inhibition of ignition by dust may result if the mass fraction of dust reaches 10⁻³.
Najjar, F M; Solberg, J; White, D
2008-04-17
A verification test suite has been assessed, with primary focus on low-Reynolds-number flow of liquid metals. This is representative of the interface between the armature and rail in gun applications. The computational multiphysics framework ALE3D is used. The main objective of the current study is to provide guidance and gain confidence in the results obtained with ALE3D. A verification test suite based on 2-D cases is proposed, including the lid-driven cavity and Couette flow. The hydrodynamic and thermal fields are assumed to be steady and laminar in nature. Results are compared with analytical solutions and previously published data. Mesh resolution studies are performed, along with various models for the equation of state.
NASA Astrophysics Data System (ADS)
Pérez-Corona, M.; García, J. A.; Taller, G.; Polgár, D.; Bustos, E.; Plank, Z.
2016-02-01
The purpose of geophysical electrical surveys is to determine the subsurface resistivity distribution by making measurements on the ground surface. From these measurements, the true resistivity of the subsurface can be estimated. The ground resistivity is related to various geological parameters, such as the mineral and fluid content, porosity and degree of water saturation in the rock. Electrical resistivity surveys have been used for many decades in hydrogeological, mining and geotechnical investigations. More recently, they have been used for environmental surveys. To obtain a more accurate subsurface model than is possible with a simple 1-D model, a more complex model must be used. In a 2-D model, the resistivity values are allowed to vary in one horizontal direction (usually referred to as the x direction) but are assumed to be constant in the other horizontal (the y) direction. A more realistic model would be a fully 3-D model where the resistivity values are allowed to change in all three directions. In this research, a simulation of the cone penetration test and 2D imaging resistivity are used as tools to simulate the distribution of hydrocarbons in soil.
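The basic quantity such electrical surveys report is the apparent resistivity derived from surface measurements. For a standard four-electrode Wenner array with electrode spacing a, it follows from the textbook relation ρ_a = 2πa ΔV/I; a minimal sketch (function and parameter names are illustrative, not tied to any particular survey software):

```python
import math

def wenner_apparent_resistivity(a, delta_v, current):
    """Apparent resistivity (ohm-m) for a Wenner electrode array:
    rho_a = 2 * pi * a * delta_V / I, with electrode spacing a (m),
    measured potential difference delta_v (V), and injected current (A).
    Standard textbook relation; names here are illustrative."""
    return 2.0 * math.pi * a * delta_v / current
```

Over a homogeneous half-space the apparent resistivity equals the true resistivity; lateral or vertical heterogeneity makes it vary with electrode spacing and position, which is what 2-D inversion exploits.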
Surrogate Guderley Test Problem Definition
Ramsey, Scott D.; Shashkov, Mikhail J.
2012-07-06
The surrogate Guderley problem (SGP) is a 'spherical shock tube' (or 'spherical driven implosion') designed to ease the notoriously subtle initialization of the true Guderley problem, while still maintaining a high degree of fidelity. In this problem (similar to the Guderley problem), an infinitely strong shock wave forms and converges in one-dimensional (1D) cylindrical or spherical symmetry through a polytropic gas with arbitrary adiabatic index γ, uniform density ρ₀, zero velocity, and negligible pre-shock pressure and specific internal energy (SIE). This shock proceeds to focus on the point or axis of symmetry at r = 0 (resulting in ostensibly infinite pressure, velocity, etc.) and reflect back out into the incoming perturbed gas.
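The post-shock state behind the infinitely strong shock described above follows from the strong-shock limit of the Rankine-Hugoniot jump relations. A small helper for checking such initializations (this is the standard textbook limit, not the SGP similarity solution itself; the function name is illustrative):

```python
def strong_shock_state(gamma, rho0, D):
    """Post-shock state behind an infinitely strong shock moving at
    speed D into a cold (negligible-pressure) polytropic gas of
    density rho0, from the strong-shock limit of the Rankine-Hugoniot
    relations.  Returns (density, velocity, pressure)."""
    rho = (gamma + 1.0) / (gamma - 1.0) * rho0   # compressed density
    u = 2.0 / (gamma + 1.0) * D                  # post-shock velocity
    p = 2.0 / (gamma + 1.0) * rho0 * D * D       # post-shock pressure
    return rho, u, p
```

For γ = 5/3 the density compression ratio is 4, the familiar monatomic-gas strong-shock limit.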
Medical Tests for Prostate Problems
... appears to be related to urine blockage, the health care provider may recommend tests that measure bladder pressure and urine flow rate. ... including pain, chills, or fever—should call their health care provider ... soon will prostate test results be available? Results for simple medical tests ...
The dynamic sphere test problem
Chabaud, Brandon M.; Brock, Jerry S.; Smith, Brandon M.
2012-05-16
In this manuscript we define the dynamic sphere problem as a spherical shell composed of a homogeneous, linearly elastic material. The material exhibits either isotropic or transverse isotropic symmetry. When the problem is formulated in material coordinates, the balance of mass equation is satisfied automatically. Also, the material is assumed to be kept at constant temperature, so the only relevant equation is the equation of motion. The shell has inner radius r_i and outer radius r_o. Initially, the shell is at rest. We assume that the interior of the shell is a void and we apply a time-varying radial stress on the outer surface.
NASA Astrophysics Data System (ADS)
Lotfy, Kh.
2012-06-01
In the present paper, we introduce the coupled theory (CD), Lord-Shulman (LS) theory, and Green-Lindsay (GL) theory to study the influences of a magnetic field and rotation on a two-dimensional problem of fibre-reinforced thermoelasticity. The material is a homogeneous isotropic elastic half-space. The method applied here is to use normal mode analysis to solve a thermal shock problem. Some particular cases are also discussed in the context of the problem. Deformation of a body depends on the nature of the force applied as well as the type of boundary conditions. Numerical results for the temperature, displacement, and thermal stress components are given and illustrated graphically in the absence and the presence of the magnetic field and rotation.
Problem-Solving Test: Pyrosequencing
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2013-01-01
Terms to be familiar with before you start to solve the test: Maxam-Gilbert sequencing, Sanger sequencing, gel electrophoresis, DNA synthesis reaction, polymerase chain reaction, template, primer, DNA polymerase, deoxyribonucleoside triphosphates, orthophosphate, pyrophosphate, nucleoside monophosphates, luminescence, acid anhydride bond,…
Leighty, Katherine A; Menzel, Charles R; Fragaszy, Dorothy M
2008-09-01
Object recognition research is typically conducted using 2D stimuli in lieu of 3D objects. This study investigated the amount and complexity of knowledge gained from 2D stimuli in adult chimpanzees (Pan troglodytes) and young children (aged 3 and 4 years) using a titrated series of cross-dimensional search tasks. Results indicate that 3-year-old children utilize a response rule guided by local features to solve cross-dimensional tasks. Four-year-old children and adult chimpanzees use information about object form and compositional structure from a 2D image to guide their search in three dimensions. Findings have specific implications for research conducted in object recognition/perception and broad relevance to all areas of research and daily living that incorporate 2D displays.
Techniques utilized in the simulated altitude testing of a 2D-CD vectoring and reversing nozzle
NASA Technical Reports Server (NTRS)
Block, H. Bruce; Bryant, Lively; Dicus, John H.; Moore, Allan S.; Burns, Maureen E.; Solomon, Robert F.; Sheer, Irving
1988-01-01
Simulated altitude testing of a two-dimensional, convergent-divergent, thrust vectoring and reversing exhaust nozzle was accomplished. An important objective of this test was to develop test hardware and techniques to properly operate a vectoring and reversing nozzle within the confines of an altitude test facility. This report presents detailed information on the major test support systems utilized, the operational performance of the systems and the problems encountered, and test equipment improvements recommended for future tests. The most challenging support systems included the multi-axis thrust measurement system, vectored and reverse exhaust gas collection systems, and infrared temperature measurement systems used to evaluate and monitor the nozzle. The feasibility of testing a vectoring and reversing nozzle of this type in an altitude chamber was successfully demonstrated. Supporting systems performed as required. During reverser operation, engine exhaust gases were successfully captured and turned downstream. However, a small amount of exhaust gas spilled out the collector ducts' inlet openings when the reverser was opened more than 60 percent. The spillage did not affect engine or nozzle performance. The three infrared systems which viewed the nozzle through the exhaust collection system worked remarkably well considering the harsh environment.
NASA Technical Reports Server (NTRS)
Costiner, Sorin; Taasan, Shlomo
1994-01-01
This paper presents multigrid (MG) techniques for nonlinear eigenvalue problems (EPs), with emphasis on an MG algorithm for a nonlinear Schrodinger EP. The algorithm overcomes the difficulties of such problems by combining the following techniques: an MG projection coupled with backrotations for the separation of solutions and the treatment of difficulties related to clusters of close and equal eigenvalues; MG subspace continuation techniques for the treatment of the nonlinearity; and an MG simultaneous treatment of the eigenvectors together with the nonlinearity and the global constraints. The simultaneous MG techniques reduce the large number of self-consistent iterations to only a few (or one) MG simultaneous iterations and keep the solutions in a neighborhood where the algorithm converges fast.
NASA Astrophysics Data System (ADS)
Humair, F.; Matasci, B.; Carrea, D.; Pedrazzini, A.; Loye, A.; Pedrozzi, G.; Nicolet, P.; Jaboyedoff, M.
2012-04-01
Simulations taking into account the results of the experimental testing are performed and compared with the a-priori simulations. 3D simulations were performed using software that takes into account the effect of forest cover on block trajectories (RockyFor 3D) and another that neglects this aspect (Rotomap; geo&soft international). 2D simulation (RocFall; Rocscience) profiles were located along the block paths deduced from the 3D simulations. The preliminary results show that: (1) high-speed movies are promising and allow us to track the blocks using video software; (2) the a-priori simulations tend to overestimate the runout distance, which is likely due to an underestimation of the obstacles as well as the breaking of the falling rocks, which is not taken into account in the models; (3) the trajectories deduced from both the a-priori simulations and the real-size experiment highlight the major influence of the channelized slope morphology on rock paths, as blocks tend to follow the flow direction. This indicates that the 2D simulations have to be performed along the flow direction line.
Rua, Francesco; Sadeghi, Sheila J; Castrignanò, Silvia; Valetti, Francesca; Gilardi, Gianfranco
2015-10-01
This work reports for the first time the direct electron transfer of the Canis familiaris cytochrome P450 2D15 on glassy carbon electrodes, to provide an analytical tool as an alternative to P450 animal testing in the drug discovery process. Cytochrome P450 2D15, which corresponds to the human homologue P450 2D6, was recombinantly expressed in Escherichia coli and entrapped on glassy carbon electrodes (GC) either with the cationic polymer polydiallyldimethylammonium chloride (PDDA) or in the presence of gold nanoparticles (AuNPs). Reversible electrochemical signals of P450 2D15 were observed, with calculated midpoint potentials (E1/2) of −191 ± 5 and −233 ± 4 mV vs. Ag/AgCl for GC/PDDA/2D15 and GC/AuNPs/2D15, respectively. These experiments were then followed by the electro-catalytic activity of the immobilized enzyme in the presence of metoprolol. The latter drug is a beta-blocker used for the treatment of hypertension and is a specific marker of human P450 2D6 activity. Electrocatalysis data showed that only in the presence of AuNPs was the expected α-hydroxy-metoprolol product present, as shown by HPLC. The successful immobilization of the electroactive C. familiaris cytochrome P450 2D15 on electrode surfaces addresses the ever-increasing demand for alternative in vitro methods for a more detailed study of animal P450 enzymes' metabolism, reducing the number of animals sacrificed in preclinical tests.
NASA Astrophysics Data System (ADS)
Lotfy, Kh.; Othman, Mohamed I. A.
2014-01-01
In the present paper, the coupled theory, Lord-Shulman theory, and Green-Lindsay theory are introduced to study the influence of a magnetic field on the 2-D problem of a fiber-reinforced thermoelastic. These theories are also applied to study the influence of reinforcement on the total deformation of an infinite space weakened by a finite linear opening Mode-I crack. The material is homogeneous and an isotropic elastic half-space. The crack is subjected to a prescribed temperature and stress distribution. Normal mode analysis is used to solve the problem of a Mode-I crack. Numerical results for the temperature, the displacement, and thermal stress components are given and illustrated graphically in the absence and the presence of the magnetic field. A comparison between the three theories is also made for different depths.
NASA Technical Reports Server (NTRS)
Miller, Franklin; Bagdanove, Paul; Blake, Peter; Canavan, Ed; Cofie, Emmanuel; Crane, J. Allen; Dominquez, Kareny; Hagopian, John; Johnston, John; Madison, Tim; Miller, Dave; Oaks, Darrell; Williams, Pat; Young, Dan; Zukowski, Barbara; Zukowski, Tim
2007-01-01
The James Webb Space Telescope Integrated Science Instrument Module (ISIM) is being designed and developed at the Goddard Space Flight Center. The ISIM Thermal Distortion Testing (ITDT) program was started with the primary objective of validating the ISIM mechanical design process. The ITDT effort seeks to establish confidence in, and demonstrate, the ability to predict thermal distortion in composite structures at cryogenic temperatures using solid element models. This program's goal is to better ensure that ISIM meets all mechanical and structural requirements by using test results to verify or improve structural modeling techniques. The first step toward the ITDT objectives was to design, and then construct, solid element models of a series of 2-D test assemblies that represent critical building blocks of the ISIM structure. Second, the actual test assemblies, consisting of composite tubes and Invar end fittings, were fabricated and tested for thermal distortion. This paper presents the development of the GSFC Cryo Distortion Measurement Facility (CDMF) to meet the requirements of the ISIM 2-D test assemblies and other future ISIM testing needs. The CDMF provides efficient cooling with both single- and two-stage cryocoolers. Temperature uniformity of the test assemblies during thermal transients and at steady state is accomplished by using sapphire windows for all of the optical ports on the radiation shields and by using thermal straps to cool the test assemblies. Numerical thermal models of the test assemblies were used to predict the temperature uniformity of the parts during cooldown and at steady state. Results of these models are compared to actual temperature data from the tests. Temperature sensors with 0.25 K precision were used to ensure that test assembly gradients did not exceed 2 K laterally and 4 K axially. The thermal distortions of two assemblies were measured during six thermal cycles from 320 K to 35 K using laser interferometers. The standard
New Approaches to Old Problems through Testing.
ERIC Educational Resources Information Center
Pimsleur, Paul
1970-01-01
Positive values inherent in well-planned tests used in language programs are discussed in this article. The author argues that the problems of dropout rate, underachievement, discipline, and negative student attitudes can be alleviated through testing programs which include: (1) an aptitude test for selecting and sectioning students and for…
The MINPACK-2 test problem collection
Averick, B.M.; Carter, R.G.; Xue, Guo-Liang; More, J.J.
1992-06-01
Optimization software has often been developed without any specific application in mind. This generic approach has worked well in many cases, but as we seek the solution of larger and more complex optimization problems on high-performance computers, the development of optimization software should take into account specific optimization problems that arise in a wide range of applications. This observation was the motivation for the development of the MINPACK-2 test problem collection. Each of the problems in this collection comes from a real application and is representative of other commonly encountered problems. There are problems from such diverse fields as fluid dynamics, medicine, elasticity, combustion, molecular conformation, nondestructive testing, chemical kinetics, lubrication, and superconductivity.
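The MINPACK-2 collection itself is not reproduced here, but the way such a test problem is typically exercised, an objective with an analytic gradient handed to an optimizer, can be sketched with a generic example. The extended Rosenbrock function below is a classic optimization test problem used purely for illustration; it is not claimed to be one of the collection's application-derived entries, and the plain gradient descent stands in for the optimization software under test:

```python
import numpy as np

def rosenbrock(x):
    """Objective value and gradient for the extended Rosenbrock
    function (x must have even length).  A generic test problem of
    the kind such collections gather, used here for illustration."""
    f = np.sum(100.0 * (x[1::2] - x[::2] ** 2) ** 2 + (1.0 - x[::2]) ** 2)
    g = np.zeros_like(x)
    g[::2] = -400.0 * x[::2] * (x[1::2] - x[::2] ** 2) - 2.0 * (1.0 - x[::2])
    g[1::2] = 200.0 * (x[1::2] - x[::2] ** 2)
    return f, g

def gradient_descent(fun, x0, lr=1e-3, iters=20000):
    """Plain fixed-step gradient descent -- a minimal stand-in for
    the optimization software a test collection is meant to exercise."""
    x = x0.copy()
    for _ in range(iters):
        _, g = fun(x)
        x -= lr * g
    return x
```

The global minimum is at the all-ones vector, where the objective is zero and the gradient vanishes; any correct optimizer should drive the objective down from its starting value.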
Franke-Gromberg, Christine; Schüler, Grit; Hermanussen, Michael; Scheffler, Christiane
2010-01-01
The aim of this methodological anthropometric study was to compare direct anthropometry and digital two-dimensional photogrammetry in 18 male and 27 female subjects, aged 24 to 65 years, from Potsdam, Germany. In view of the rising interest in reliable biometric kephalofacial data, we focussed on head and face measurements. Out of 34 classic facial anatomical landmarks, 27 landmarks were investigated both by direct anthropometry and 2D-photogrammetry; 7 landmarks could not be localized by 2D-photogrammetry. Twenty-six kephalofacial distances were analysed both by direct anthropometry and digital 2D-photogrammetry. Kephalofacial distances are on average 7.6% shorter when obtained by direct anthropometry. The difference between the two techniques is particularly evident in total head height (vertex-gnathion) due to the fact that vertex is usually covered by hair and escapes from photogrammetry. Also the distances photographic sellion-gnathion (1.3 cm, i. e. 11.6%) and nasal-gnathion (1.2 cm, i. e. 9.4%) differ by more than one centimetre. Differences below 0.5 cm between the two techniques were found when measuring mucosa-lip-height (2.2%), gonia (3.0%), glabella-stomion (3.9%), and nose height (glabella-subnasal) (4.0%). Only the estimates of forehead width were significantly narrower when obtained by 2D-photogrammetry (-1.4 cm, -13.1%). The methodological differences increased with increasing magnitude of the kephalometric distance. Apart from these limitations, both techniques are similarly valid and may replace each other.
Rogojerov, Marin; Keresztury, Gábor; Kamenova-Nacheva, Mariana; Sundius, Tom
2012-12-01
A new analytical approach for improving the precision in determination of vibrational transition moment directions of low symmetry molecules (lacking orthogonal axes) is discussed in this paper. The target molecules are partially uniaxially oriented in nematic liquid crystalline solvent and are studied by IR absorption spectroscopy using polarized light. The fundamental problem addressed is that IR linear dichroism measurements of low symmetry molecules alone cannot provide sufficient information on molecular orientation and transition moment directions. It is shown that computational prediction of these quantities can supply relevant complementary data, helping to reveal the hidden information content and achieve a more meaningful and more precise interpretation of the measured dichroic ratios. The combined experimental and theoretical/computational method proposed by us recently for determination of the average orientation of molecules with C(s) symmetry has now been replaced by a more precise analytical approach. The new method introduced and discussed in full detail here uses a mathematically evaluated angle between two vibrational transition moment vectors as a reference. The discussion also deals with error analysis and estimation of uncertainties of the orientational parameters. The proposed procedure has been tested in an analysis of the infrared linear dichroism (IR-LD) spectra of 1-D- and 2-D-naphthalene complemented with DFT calculations using the scaled quantum mechanical force field (SQM FF) method. PMID:22981590
Transport Test Problems for Hybrid Methods Development
Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.; McDonald, Benjamin S.
2011-12-28
This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.
Problem-Solving Test: Tryptophan Operon Mutants
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2010-01-01
This paper presents a problem-solving test that deals with the regulation of the "trp" operon of "Escherichia coli." Two mutants of this operon are described: in mutant A, the operator region of the operon carries a point mutation so that it is unable to carry out its function; mutant B expresses a "trp" repressor protein unable to bind…
Farmer, M. T.; Lomperski, S.; Kilsdonk, D. J.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, The Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of a third long-term 2-D Core-Concrete Interaction (CCI) experiment designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. This data report provides thermal hydraulic test results from the CCI-3 experiment, which was conducted on September 22, 2005. Test specifications for CCI-3 are provided in Table 1-1. This experiment investigated the interaction of a fully oxidized 375
Farmer, M. T.; Lomperski, S.; Kilsdonk, D. J.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, The Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of two long-term 2-D Core-Concrete Interaction (CCI) experiments designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. This data report provides thermal hydraulic test results from the CCI-2 experiment, which was conducted on August 24, 2004. Test specifications for CCI-2 are provided in Table 1-1. This experiment investigated the interaction of a fully oxidized 400 kg
Farmer, M. T.; Lomperski, S.; Aeschlimann, R. W.; Basu, S.
2011-05-23
The Melt Attack and Coolability Experiments (MACE) program addressed the issue of the ability of water to cool and thermally stabilize a molten core-concrete interaction when the reactants are flooded from above. These tests provided data regarding the nature of corium interactions with concrete, the heat transfer rates from the melt to the overlying water pool, and the role of noncondensable gases in the mixing processes that contribute to melt quenching. As a follow-on program to MACE, The Melt Coolability and Concrete Interaction Experiments (MCCI) project is conducting reactor material experiments and associated analysis to achieve the following objectives: (1) resolve the ex-vessel debris coolability issue through a program that focuses on providing both confirmatory evidence and test data for the coolability mechanisms identified in MACE integral effects tests, and (2) address remaining uncertainties related to long-term two-dimensional molten core-concrete interactions under both wet and dry cavity conditions. Achievement of these two program objectives will demonstrate the efficacy of severe accident management guidelines for existing plants, and provide the technical basis for better containment designs for future plants. In terms of satisfying these objectives, the Management Board (MB) approved the conduct of two long-term 2-D Core-Concrete Interaction (CCI) experiments designed to provide information in several areas, including: (i) lateral vs. axial power split during dry core-concrete interaction, (ii) integral debris coolability data following late phase flooding, and (iii) data regarding the nature and extent of the cooling transient following breach of the crust formed at the melt-water interface. This data report provides thermal hydraulic test results from the CCI-1 experiment, which was conducted on December 19, 2003. Test specifications for CCI-1 are provided in Table 1-1. This experiment investigated the interaction of a fully oxidized 400 kg
Knowledge dimensions in hypothesis test problems
NASA Astrophysics Data System (ADS)
Krishnan, Saras; Idris, Noraini
2012-05-01
The reform of statistics education over the past two decades has predominantly shifted the focus of statistical teaching and learning from procedural understanding to conceptual understanding. The emphasis of procedural understanding is on formulas and calculation procedures, whereas conceptual understanding emphasizes students knowing why they are using a particular formula or executing a specific procedure. In addition, the Revised Bloom's Taxonomy offers a two-dimensional framework for describing learning objectives, comprising the six revised cognition levels of the original Bloom's taxonomy and four knowledge dimensions. Depending on the level of complexity, the four knowledge dimensions essentially distinguish basic understanding from more connected understanding. This study identifies the factual, procedural, and conceptual knowledge dimensions in hypothesis test problems. Hypothesis testing, an important tool for making inferences about a population from sample information, is taught in many introductory statistics courses. However, researchers find that students in these courses still have difficulty understanding the underlying concepts of a hypothesis test. Past studies also show that even though students can perform the hypothesis testing procedure, they may not understand the rationale for executing these steps or know how to apply them in novel contexts. Besides knowing the procedural steps in conducting a hypothesis test, students must have fundamental statistical knowledge and a deep understanding of the underlying inferential concepts such as the sampling distribution and the central limit theorem. By identifying the knowledge dimensions of hypothesis test problems, this study supports the future development of instructional and assessment strategies that enhance students' learning of hypothesis testing as a valuable inferential tool.
Motor-operated valves: problems, tests and simulations
Pinier, D.; Haas, J.L.
1996-12-01
An analysis of two refusals to operate of the EAS recirculation shutoff valves identified two distinct problems with the motorized valves. First, the calculation methods for the operating torques of valves in use in the power plants are not conservative enough, which results in misadjustment of the torque limiters installed on their motorizations. The second problem concerns the pressure-locking phenomenon: a number of valves may trap a pressure exceeding the in-line pressure between the disks, which may cause the valve to jam. To settle the first problem, EDF determined the friction coefficients and the efficiency of the valve and its actuator through general and specific tests and models, and defined a new calculation method. To solve the second problem, EDF identified the valves whose technology enables pressure to be trapped (tests and numerical simulations carried out in the Research and Development Division confirm the possibility of a "boiler" effect), determined the necessary modifications, and developed and tested anti-boiler-effect systems.
Llop, Jordi; Gil, Emilio; Llorens, Jordi; Miranda-Fuentes, Antonio; Gallart, Montserrat
2016-09-06
Canopy characterization is essential for pesticide dosage adjustment according to vegetation volume and density. It is especially important for fresh exportable vegetables like greenhouse tomatoes. These plants are thin and tall and are planted in pairs, which makes their characterization with electronic methods difficult. Therefore, the accuracy of a terrestrial 2D LiDAR sensor was evaluated for determining canopy parameters related to volume and density, and useful correlations between manual and electronic parameters were established for leaf area estimation. Experiments were performed in three commercial tomato greenhouses with a paired plantation system. In the electronic characterization, a LiDAR sensor scanned the plant pairs from both sides. The canopy height, canopy width, canopy volume, and leaf area were obtained. From these, other important parameters were calculated, such as the tree row volume, leaf wall area, leaf area index, and leaf area density. Manual measurements were found to overestimate the parameters compared with the LiDAR sensor. The canopy volume estimated with the scanner was found to be reliable for estimating the canopy height, volume, and density. Moreover, the LiDAR scanner could assess the high variability in canopy density along rows and hence is an important tool for generating canopy maps.
Llop, Jordi; Gil, Emilio; Llorens, Jordi; Miranda-Fuentes, Antonio; Gallart, Montserrat
2016-01-01
Canopy characterization is essential for pesticide dosage adjustment according to vegetation volume and density. It is especially important for fresh exportable vegetables like greenhouse tomatoes. These plants are thin and tall and are planted in pairs, which makes their characterization with electronic methods difficult. Therefore, the accuracy of a terrestrial 2D LiDAR sensor was evaluated for determining canopy parameters related to volume and density, and useful correlations between manual and electronic parameters were established for leaf area estimation. Experiments were performed in three commercial tomato greenhouses with a paired plantation system. In the electronic characterization, a LiDAR sensor scanned the plant pairs from both sides. The canopy height, canopy width, canopy volume, and leaf area were obtained. From these, other important parameters were calculated, such as the tree row volume, leaf wall area, leaf area index, and leaf area density. Manual measurements were found to overestimate the parameters compared with the LiDAR sensor. The canopy volume estimated with the scanner was found to be reliable for estimating the canopy height, volume, and density. Moreover, the LiDAR scanner could assess the high variability in canopy density along rows and hence is an important tool for generating canopy maps. PMID:27608025
Radix, P.; Leonard, M.; Papantoniou, C.; Roman, G.; Saouter, E.; Gallotti-Schmitt, S.; Thiebaud, H.; Vasseur, P.
1999-10-01
The Daphnia magna 21-d test may be required by European authorities as a criterion for the assessment of aquatic chronic toxicity for the notification of new substances. However, this test has several drawbacks: it is labor-intensive, relatively expensive, and requires the breeding of test organisms. The Brachionus calyciflorus 2-d test and the Microtox chronic 22-h test do not suffer from these disadvantages and could be used as substitutes for the Daphnia 21-d test in screening assays. During this study, the toxicity of 25 chemicals was measured using both the Microtox chronic toxicity and B. calyciflorus 2-d tests, and the no-observed-effect concentrations (NOECs) were compared to the D. magna 21-d test. The Brachionus test was slightly less sensitive than the Daphnia test, but the correlation between the two tests was relatively good (r² = 0.54). The B. calyciflorus 2-d test, and to a lesser extent the Microtox chronic 22-h test, were able to predict the chronic toxicity values of the Daphnia 21-d test. They constitute promising cost-effective tools for chronic toxicity screening.
Busch, Jan; Meißner, Tobias; Potthoff, Annegret; Oswald, Sascha E
2014-09-01
Nanoscale zero-valent iron (nZVI) has recently gained great interest in the scientific community as an in situ reagent for installing permeable reactive barriers in aquifer systems, since nZVI is highly reactive with chlorinated compounds and may convert them to harmless substances. However, nZVI has a high tendency to agglomerate and sediment, and therefore shows very limited transport ranges. One new approach to overcoming the limited transport of nZVI in porous media is to use a suitable carrier colloid. In this study we tested the mobility of a carbon-colloid-supported nZVI particle, "Carbo-Iron Colloids" (CIC), with a mean size of 0.63 μm in a column experiment of 40 cm length and in a two-dimensional (2D) aquifer test system with dimensions of 110 × 40 × 5 cm. Results show a breakthrough maximum of 82% of the input concentration in the column experiment and 58% in the 2D aquifer test system. Residuals detected in the porous media suggest strong particle deposition in the first centimeters and little deposition along the remainder of the travel path. Overall, this suggests a high mobility in porous media, which may be a significant enhancement compared to bare or polyanionically stabilized nZVI.
NASA Astrophysics Data System (ADS)
Perez, Alex; Zhu, Cun; Xia, Younan; Khalil, Gamal; Dabiri, Dana
2011-11-01
Airborne temperature and pressure sensitive microbeads provide a vehicle with which to conduct two-dimensional flow characterization. An array of temperature and pressure sensitive dyes have been synthesized with microbeads (of silica, polystyrene, and polydimethylsiloxane) for this purpose. These microbeads were evaluated based on emission spectra, pressure response (0-760 torr), temperature response (5-45°C), and response time. Work will be presented showing the various combinations of dyes and microbead materials, as well as the testing process and examples of future application. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. #DEG-0718124, as well as National Science Foundation Grant No. NSF/CBET-IDR- 0929864.
Li, Yan; Zhu, Zhuo R; Ou, Bao C; Wang, Ya Q; Tan, Zhou B; Deng, Chang M; Gao, Yi Y; Tang, Ming; So, Ji H; Mu, Yang L; Zhang, Lan Q
2015-02-15
Major depressive disorder is one of the most prevalent and life-threatening forms of mental illness. Traditional antidepressants often take several weeks, even months, to produce clinical effects. However, recent clinical studies have shown that ketamine, an N-methyl-D-aspartate (NMDA) receptor antagonist, exerts rapid antidepressant effects within 2 h that are long-lasting. The aim of the present study was to investigate whether the dopaminergic system is involved in the rapid antidepressant effects of ketamine. Acute administration of ketamine (20 mg/kg) significantly reduced the immobility time in the forced swim test. MK-801 (0.1 mg/kg), a more selective NMDA antagonist, also exerted rapid antidepressant-like effects. In contrast, fluoxetine (10 mg/kg) did not significantly reduce the immobility time in the forced swim test 30 min after administration. Notably, pretreatment with haloperidol (0.15 mg/kg, a nonselective dopamine D2/D3 antagonist), but not SCH23390 (0.04 and 0.1 mg/kg, a selective dopamine D1 receptor antagonist), significantly prevented the effects of ketamine or MK-801. Moreover, administration of a sub-effective dose of ketamine (10 mg/kg) in combination with pramipexole (0.3 mg/kg, a dopamine D2/D3 receptor agonist) exerted antidepressant-like effects compared with each drug alone. In conclusion, our results indicate that dopamine D2/D3 receptors, but not D1 receptors, are involved in the rapid antidepressant-like effects of ketamine.
Word Problems: Where Test Bias Creeps In.
ERIC Educational Resources Information Center
Chipman, Susan F.
The problem of sex bias in mathematics word problems is discussed, with references to the appropriate literature. Word problems are assessed via cognitive science analysis of word problem solving. It has been suggested that five basic semantic relations are adequate to classify nearly all story problems, namely, change, combine, compare, vary, and…
Zelt, Colin A.; Haines, Seth; Powers, Michael H.; Sheehan, Jacob; Rohdewald, Siegfried; Link, Curtis; Hayashi, Koichi; Zhao, Don; Zhou, Hua-wei; Burton, Bethany L.; Petersen, Uni K.; Bonal, Nedra D.; Doll, William E.
2013-01-01
Seismic refraction methods are used in environmental and engineering studies to image the shallow subsurface. We present a blind test of inversion and tomographic refraction analysis methods using a synthetic first-arrival-time dataset that was made available to the community in 2010. The data are realistic in terms of the near-surface velocity model, shot-receiver geometry and the data's frequency and added noise. Fourteen estimated models were determined by ten participants using eight different inversion algorithms, with the true model unknown to the participants until it was revealed at a session at the 2011 SAGEEP meeting. The estimated models are generally consistent in terms of their large-scale features, demonstrating the robustness of refraction data inversion in general, and the eight inversion algorithms in particular. When compared to the true model, all of the estimated models contain a smooth expression of its two main features: a large offset in the bedrock and the top of a steeply dipping low-velocity fault zone. The estimated models do not contain a subtle low-velocity zone and other fine-scale features, in accord with conventional wisdom. Together, the results support confidence in the reliability and robustness of modern refraction inversion and tomographic methods.
Testing the continuum mu(I) rheology for 2D granular flows on avalanches and collapse of columns
NASA Astrophysics Data System (ADS)
Lagrée, Pierre-Yves; Staron, Lydie; Popinet, Stéphane
2010-11-01
There is a large body of experimental work dealing with dry granular flows (such as sand, glass beads, and small rocks) supporting the so-called μ(I) rheology. This rheology states that the ratio of the tangential to the normal stress behaves as a Coulomb-like friction coefficient depending on the inertial number (the product of the grain size and the shear rate, divided by the square root of the pressure divided by the grain density). Hence, we propose an implementation of this non-Newtonian rheology in a Navier-Stokes solver (the Gerris Flow Solver, which uses a finite-volume approach with the Volume-of-Fluid (VOF) method to describe variable-density two-phase flows). First we apply it to a steady, infinite, two-dimensional avalanching granular flow over a constant slope covered by a passive light fluid (this allows a zero-pressure boundary condition at the surface, bypassing what was until now a difficulty: imposing this condition on an unknown moving boundary). The classical analytical solution, known as the Bagnold solution, is recovered numerically. The rheology is then tested on the collapse of granular columns, and quantitative comparisons with numerical simulations from Contact Dynamics are made.
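The inertial number described in words in the abstract above can be written compactly. The saturating fit in the last line is one common form from the granular-flow literature, added here for context; the abstract itself does not specify which fit is used:

```latex
% Inertial number: shear rate \dot\gamma, grain diameter d, pressure P, grain density \rho
I = \frac{\dot{\gamma}\, d}{\sqrt{P/\rho}}
% Coulomb-like friction law: ratio of tangential to normal stress
\frac{\tau}{P} = \mu(I), \qquad
\mu(I) = \mu_s + \frac{\mu_2 - \mu_s}{1 + I_0/I} \quad \text{(one common fit)}
```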
Item Characteristic Curve Solutions to Three Intractable Testing Problems
ERIC Educational Resources Information Center
Marco, Gary L.
1977-01-01
This paper summarizes three studies that illustrate how application of the three-parameter logistic test model helped solve three relatively intractable testing problems. The three problems are: designing a multi-purpose test, evaluating a multi-level test, and equating a test on the basis of pretest statistics. (Author/JKS)
Implicit Monte Carlo Radiation Transport Simulations of Four Test Problems
Gentile, N
2007-08-01
Radiation transport codes, like almost all codes, are difficult to develop and debug. It is helpful to have small, easy to run test problems with known answers to use in development and debugging. It is also prudent to re-run test problems periodically during development to ensure that previous code capabilities have not been lost. We describe four radiation transport test problems with analytic or approximate analytic answers. These test problems are suitable for use in debugging and testing radiation transport codes. We also give results of simulations of these test problems performed with an Implicit Monte Carlo photonics code.
A class of ejecta transport test problems
Hammerberg, James E; Buttler, William T; Oro, David M; Rousculp, Christopher L; Morris, Christopher; Mariam, Fesseha G
2011-01-31
Hydro code implementations of ejecta dynamics at shocked interfaces presume a source distribution function of particulate masses and velocities, f_0(m, v; t). Some of the properties of this source distribution function have been determined from extensive Taylor and supported-wave experiments on shock-loaded Sn interfaces of varying surface and subsurface morphology. Such experiments measure the mass moment of f_0 under vacuum conditions, assuming weak particle-particle interaction and, usually, fully inelastic capture by piezoelectric diagnostic probes. Recently, planar Sn experiments in He, Ar, and Kr gas atmospheres have been carried out to provide transport data both for machined surfaces and for coated surfaces. A hydro code model of ejecta transport usually specifies a criterion for the instantaneous temporal appearance of ejecta with source distribution f_0(m, v; t_0). Under the further assumption of separability, f_0(m, v; t_0) = f_1(m) f_2(v), the motion of particles under the influence of gas-dynamic forces is calculated. For the situation of non-interacting particulates interacting with a gas via drag forces, with the assumption of separability and simplified approximations to the Reynolds-number dependence of the drag coefficient, the dynamical equation for the time evolution of the distribution function, f(r, v, m; t), can be resolved as a one-dimensional integral which can be compared to a direct hydro simulation as a test problem. Such solutions can also be used for preliminary analysis of experimental data. We report solutions for several shape-dependent drag coefficients and analyze the results of recent planar dsh experiments in Ar and Xe.
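The separability assumption and a drag law of the kind the abstract invokes can be sketched as follows. The quadratic drag form below is a generic illustration; the abstract states only that the Reynolds-number dependence of the drag coefficient C_D is approximated, not which form is used:

```latex
% Separable source distribution at the ejecta appearance time t_0
f_0(m, v; t_0) = f_1(m)\, f_2(v)
% Generic quadratic drag on a particle of mass m and diameter d in a gas of
% density \rho_g moving at velocity u (illustrative form only)
m\,\frac{dv}{dt} = \frac{\pi d^2}{8}\,\rho_g\, C_D(\mathrm{Re})\,\lvert u - v\rvert\,(u - v)
```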
Testing Developmental Pathways to Antisocial Personality Problems
ERIC Educational Resources Information Center
Diamantopoulou, Sofia; Verhulst, Frank C.; van der Ende, Jan
2010-01-01
This study examined the development of antisocial personality problems (APP) in young adulthood from disruptive behaviors and internalizing problems in childhood and adolescence. Parent ratings of 507 children's (aged 6-8 years) symptoms of attention deficit hyperactivity disorder, oppositional defiant disorder, and anxiety, were linked to…
ERIC Educational Resources Information Center
Hill, Kennedy T.; Horton, Margaret W.
Educational solutions to the problem of test anxiety were explored. Test anxiety has a debilitating effect on performance which increases over the school years. The solution is, first, to measure test anxiety so that the extent of it, as well as the effectiveness of programs designed to alleviate it, can be measured. The seven-item Comfort Index,…
Problems and Issues in Translating International Educational Achievement Tests
ERIC Educational Resources Information Center
Arffman, Inga
2013-01-01
The article reviews research and findings on problems and issues faced when translating international academic achievement tests. The purpose is to draw attention to the problems, to help to develop the procedures followed when translating the tests, and to provide suggestions for further research. The problems concentrate on the following: the…
Errors in Standardized Tests: A Systemic Problem.
ERIC Educational Resources Information Center
Rhoades, Kathleen; Madaus, George
The nature and extent of human error in educational testing over the past 25 years were studied. In contrast to the random measurement error expected in all tests, the presence of human error is unexpected and brings unknown, often harmful, consequences for students and their schools. Using data from a variety of sources, researchers found 103…
Problem-solving test: Tryptophan operon mutants.
Szeberényi, József
2010-09-01
Terms to be familiar with before you start to solve the test: tryptophan, operon, operator, repressor, inducer, corepressor, promoter, RNA polymerase, chromosome-polysome complex, regulatory gene, cis-acting element, trans-acting element, plasmid, transformation. PMID:21567855
Test problem construction for single-objective bilevel optimization.
Sinha, Ankur; Malo, Pekka; Deb, Kalyanmoy
2014-01-01
In this paper, we propose a procedure for designing controlled test problems for single-objective bilevel optimization. The construction procedure is flexible and allows its user to control the different complexities that are to be included in the test problems independently of each other. In addition to properties that control the difficulty in convergence, the procedure also allows the user to introduce difficulties caused by interaction of the two levels. As a companion to the test problem construction framework, the paper presents a standard test suite of 12 problems, which includes eight unconstrained and four constrained problems. Most of the problems are scalable in terms of variables and constraints. To provide baseline results, we have solved the proposed test problems using a nested bilevel evolutionary algorithm. The results can be used for comparison, while evaluating the performance of any other bilevel optimization algorithm. The code related to the paper may be accessed from the website http://bilevel.org .
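As a toy illustration of the nested structure that such bilevel test problems exercise (this is not one of the paper's 12 problems; the objectives, grid resolution, and names below are invented for illustration), consider an upper-level problem whose objective depends on the optimum of a lower-level problem:

```python
# Minimal nested bilevel sketch (illustrative assumptions, not the paper's construction).
# Upper level:  min_x  x^2 + (y*(x) - 1)^2
# Lower level:  y*(x) = argmin_y (y - x)^2   (analytically, y*(x) = x)

def lower_level(x):
    """Solve the inner problem by brute-force grid search over y in [-2, 2]."""
    grid = [i / 100 for i in range(-200, 201)]
    return min(grid, key=lambda y: (y - x) ** 2)

def upper_objective(x):
    # Nested evaluation: every upper-level trial point solves the lower problem first.
    y = lower_level(x)
    return x ** 2 + (y - 1) ** 2

best_x = min((i / 100 for i in range(-200, 201)), key=upper_objective)
print(best_x)  # the analytic optimum is x = 0.5
```

A nested evolutionary algorithm like the one used for the paper's baseline results replaces both brute-force searches with population-based optimizers, but the control flow is the same: the lower-level solve sits inside the upper-level objective.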
2005-07-01
Aniso2d is a two-dimensional seismic forward modeling code. The earth is parameterized by an X-Z plane in which the seismic properties can have monoclinic symmetry with respect to the x-z plane. The program uses a user-defined time-domain wavelet to produce synthetic seismograms anywhere within the two-dimensional medium.
NASA Astrophysics Data System (ADS)
Jang, Hyun-Sook; Yu, Changqian; Hayes, Robert; Granick, Steve
2015-03-01
Polymer vesicles ("polymersomes") are an intriguing class of soft materials, commonly used to encapsulate small molecules or particles. Here we reveal that they can also effectively incorporate nanoparticles inside their polymer membrane, leading to novel "2D nanocomposites." The embedded nanoparticles alter the capacity of the polymersomes to bend and to stretch upon external stimuli.
DYNA3D Non-reflecting Boundary Conditions - Test Problems
Zywicz, E
2006-09-28
Two verification problems were developed to test non-reflecting boundary segments in DYNA3D (Whirley and Engelmann, 1993). The problems simulate 1-D wave propagation in a semi-infinite rod using a finite-length rod and non-reflecting boundary conditions. One problem examines pure pressure-wave propagation, and the other explores pure shear-wave propagation. In both problems the non-reflecting boundary segments yield results that differ only slightly (by less than 6%, and only for a short duration) from the corresponding theoretical solutions. The errors appear to stem from the inability to generate a true step-function compressive wave in the pressure-wave problem and from segment-integration inaccuracies in the shear-wave problem. These problems serve as verification problems and as regression test problems for DYNA3D.
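A standard way to build such non-reflecting segments is the Lysmer-Kuhlemeyer viscous boundary, sketched below; whether DYNA3D uses exactly this formulation is an assumption on our part, not stated in the abstract:

```latex
% Viscous (Lysmer-Kuhlemeyer) boundary tractions: c_p and c_s are the dilatational
% and shear wave speeds, v_n and v_t the normal and tangential particle velocities
\sigma_n = \rho\, c_p\, v_n, \qquad \tau = \rho\, c_s\, v_t
% Normally incident plane P- and S-waves are absorbed exactly;
% oblique incidence is absorbed only approximately
```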
American History's Problem with Standardized Testing
ERIC Educational Resources Information Center
McCoog, Ian J.
2005-01-01
This article looks at current research concerning how students best learn the discipline of history, commentaries both in favor of and against standardized testing, and basic philosophical beliefs about the discipline. It explains methods of how to incorporate differentiated lessons and performance based assessments to NCLB standards and…
Problem-Solving Test: Restriction Endonuclease Mapping
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2011-01-01
The term "restriction endonuclease mapping" covers a number of related techniques used to identify specific restriction enzyme recognition sites on small DNA molecules. A method for restriction endonuclease mapping of a 1,000-basepair (bp)-long DNA molecule is described in the fictitious experiment of this test. The most important fact needed to…
Problem-Solving Test: Southwestern Blotting
ERIC Educational Resources Information Center
Szeberényi, József
2014-01-01
Terms to be familiar with before you start to solve the test: Southern blotting, Western blotting, restriction endonucleases, agarose gel electrophoresis, nitrocellulose filter, molecular hybridization, polyacrylamide gel electrophoresis, proto-oncogene, c-abl, Src-homology domains, tyrosine protein kinase, nuclear localization signal, cDNA,…
Computerized Diagnostic Testing: Problems and Possibilities.
ERIC Educational Resources Information Center
McArthur, David L.
The use of computers to build diagnostic inferences is explored in two contexts. In computerized monitoring of liquid oxygen systems for the space shuttle, diagnoses are exact because they can be derived within a world which is closed. In computerized classroom testing of reading comprehension, programs deliver a constrained form of adaptive…
Testing general relativity: Progress, problems, and prospects
NASA Technical Reports Server (NTRS)
Shapiro, I. I.
1971-01-01
The results from ground-based experimental testing are presented. Prospects for improving these experiments are discussed. Radar echo time delays, perihelion advance and solar oblateness, time variation of the gravitational constant, and radio wave deflection are considered. Ground-based and spacecraft techniques are compared on an accuracy vs. cost basis.
ERIC Educational Resources Information Center
Veldkamp, Bernard P.; Verschoor, Angela J.; Eggen, Theo J. H. M.
2010-01-01
Overexposure and underexposure of items in the bank are serious problems in operational computerized adaptive testing (CAT) systems. These exposure problems might result in item compromise, or point at a waste of investments. The exposure control problem can be viewed as a test assembly problem with multiple objectives. Information in the test has…
Crash test for the Copenhagen problem.
Nagler, Jan
2004-06-01
The Copenhagen problem is a simple model in celestial mechanics. It serves to investigate the behavior of a small body under the gravitational influence of two equally heavy primary bodies. We present a partition of orbits into classes of various kinds of regular motion, chaotic motion, escape and crash. Collisions of the small body onto one of the primaries turn out to be unexpectedly frequent, and their probability displays a scale-free dependence on the size of the primaries. The analysis reveals a high degree of complexity so that long term prediction may become a formidable task. Moreover, we link the results to chaotic scattering theory and the theory of leaking Hamiltonian systems. PMID:15244719
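The dynamical system behind this abstract is straightforward to reproduce in outline: in the rotating frame of the planar circular restricted three-body problem with equal primary masses (mu = 1/2), a test particle follows the standard equations of motion, the Jacobi constant is the conserved quantity, and a "crash" is a close approach within a chosen primary radius. A minimal sketch follows; the initial condition, step size, and crash radius are arbitrary illustrative choices, not values from the paper.

```python
import math

MU = 0.5  # equal primaries (Copenhagen problem), at (-0.5, 0) and (0.5, 0)

def accel(x, y, vx, vy):
    """Rotating-frame acceleration: gravity + centrifugal + Coriolis terms."""
    r1 = math.hypot(x + MU, y)        # distance to primary 1
    r2 = math.hypot(x - 1.0 + MU, y)  # distance to primary 2
    ax = 2.0 * vy + x - (1 - MU) * (x + MU) / r1**3 - MU * (x - 1 + MU) / r2**3
    ay = -2.0 * vx + y - (1 - MU) * y / r1**3 - MU * y / r2**3
    return ax, ay

def jacobi(x, y, vx, vy):
    """Jacobi constant C = x^2 + y^2 + 2(1-mu)/r1 + 2*mu/r2 - v^2."""
    r1 = math.hypot(x + MU, y)
    r2 = math.hypot(x - 1.0 + MU, y)
    return x*x + y*y + 2*(1 - MU)/r1 + 2*MU/r2 - (vx*vx + vy*vy)

def integrate(state, dt=1e-3, t_end=1.0, crash_radius=0.05):
    """Classical RK4; returns the final state and whether the particle crashed."""
    def deriv(s):
        x, y, vx, vy = s
        ax, ay = accel(x, y, vx, vy)
        return (vx, vy, ax, ay)
    crashed = False
    for _ in range(int(t_end / dt)):
        k1 = deriv(state)
        k2 = deriv(tuple(s + 0.5*dt*k for s, k in zip(state, k1)))
        k3 = deriv(tuple(s + 0.5*dt*k for s, k in zip(state, k2)))
        k4 = deriv(tuple(s + dt*k for s, k in zip(state, k3)))
        state = tuple(s + dt/6.0*(a + 2*b + 2*c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        x, y = state[0], state[1]
        if min(math.hypot(x + MU, y), math.hypot(x - 1 + MU, y)) < crash_radius:
            crashed = True   # collision with a finite-size primary
            break
    return state, crashed

start = (0.0, 1.0, 0.5, 0.0)     # illustrative initial condition (x, y, vx, vy)
end, crashed = integrate(start)
drift = abs(jacobi(*end) - jacobi(*start))
```

Classifying many such trajectories by outcome (regular, chaotic, escape, crash) while monitoring the Jacobi constant as an accuracy check is the basic numerical experiment the paper's orbit partition is built on.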
Group Testing: Four Student Solutions to a Classic Optimization Problem
ERIC Educational Resources Information Center
Teague, Daniel
2006-01-01
This article describes several creative solutions developed by calculus and modeling students to the classic optimization problem of testing in groups to find a small number of individuals who test positive in a large population.
Guthrie test samples: is the problem solved?
Thomas, Cordelia
2004-06-01
Most babies born in New Zealand have a blood sample taken shortly after birth for the purposes of certain screening tests. The samples are retained indefinitely. This paper considers whether such samples are the property of the child and whether the present changes in the Health (National Cervical Screening Programme) Amendment Bill and the Code of Health and Disability Services Consumers' Rights of 1996 are sufficient to resolve the issues. The paper expresses concern about the delegation of decision-making in this area to ethics committees.
2011-12-31
Mesh2d is a Fortran90 program designed to generate two-dimensional structured grids of the form [x(i),y(i,j)] where [x,y] are grid coordinates identified by indices (i,j). The x(i) coordinates alone can be used to specify a one-dimensional grid. Because the x-coordinates vary only with the i index, a two-dimensional grid is composed in part of straight vertical lines. However, the nominally horizontal y(i,j0) coordinates along index i are permitted to undulate or otherwise vary. Mesh2d also assigns an integer material type to each grid cell, mtyp(i,j), in a user-specified manner. The complete grid is specified through three separate input files defining the x(i), y(i,j), and mtyp(i,j) variations.
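The grid structure described above is easy to emulate: x varies only with i (so vertical grid lines stay straight), y may undulate with both indices, and each cell carries an integer material type. A hypothetical pure-Python sketch follows; the undulation and the material-type rule are invented for illustration and are not Mesh2d's input format.

```python
import math

def build_grid(ni=11, nj=6, width=10.0, height=5.0):
    """Structured 2-D grid in the Mesh2d style: x(i), y(i,j), mtyp(i,j)."""
    # x depends on i only, so i = const lines are straight and vertical
    x = [width * i / (ni - 1) for i in range(ni)]
    # nominally horizontal j = const lines are allowed to undulate with i
    y = [[height * j / (nj - 1) + 0.2 * math.sin(2 * math.pi * i / (ni - 1))
          for j in range(nj)] for i in range(ni)]
    # one integer material type per cell; this rule (split at mid-height,
    # judged at the cell's lower-left node) is purely illustrative
    mtyp = [[1 if y[i][j] < height / 2 else 2
             for j in range(nj - 1)] for i in range(ni - 1)]
    return x, y, mtyp

x, y, mtyp = build_grid()
```

An (ni, nj) node grid yields (ni-1, nj-1) cells; in Mesh2d itself the three arrays would come from the three separate input files rather than formulas.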
Clue Insensitivity in Remote Associates Test Problem Solving
ERIC Educational Resources Information Center
Smith, Steven M.; Sifonis, Cynthia M.; Angello, Genna
2012-01-01
Does spreading activation from incidentally encountered hints cause incubation effects? We used Remote Associates Test (RAT) problems to examine effects of incidental clues on impasse resolution. When solution words were seen incidentally 3-sec before initially unsolved problems were retested, more problems were resolved (Experiment 1). When…
Morgan, G.H.
1992-01-01
This paper reports on the iterative design of the 2-dimensional cross section of a beam transport magnet having infinitely permeable iron boundaries which requires a fast means of computing the field of the conductors. Solutions in the form of series expansions are used for rectangular iron boundaries, and programs based on the method of images are used to simulate circular iron boundaries. A single procedure or program for dealing with an arbitrary iron boundary would be useful. The present program has been tested with rectangular and circular iron boundaries and provision has been made for the use of other curves. It uses complex contour integral equations for the field of the constant-current density conductors and complex line integrals for the field of the piecewise-linear boundary elements.
Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems
NASA Technical Reports Server (NTRS)
Balling, R. J.; Wilkinson, C. A.
1997-01-01
A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
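The construction principle, a coupled problem whose optimal variables and objective all equal one, can be illustrated with a deliberately small example: two "disciplines" instead of three, with invented linear coupling functions. This mimics the flavor of the class, not the paper's exact formulas.

```python
def analyses(x1, x2, tol=1e-12):
    """Fixed-point solve of two coupled 'disciplines' (invented couplings):
    y1 = 0.5*x1 + 0.5*y2,  y2 = 0.5*x2 + 0.5*y1.
    Both states equal 1 when both design variables equal 1."""
    y1 = y2 = 0.0
    for _ in range(200):
        y1_new = 0.5 * x1 + 0.5 * y2
        y2_new = 0.5 * x2 + 0.5 * y1_new
        if abs(y1_new - y1) + abs(y2_new - y2) < tol:
            break
        y1, y2 = y1_new, y2_new
    return y1, y2

def objective(x1, x2):
    """Sum of squared deviations from 1, shifted by 1, so the optimal
    variables, states, and objective value are all exactly unity."""
    y1, y2 = analyses(x1, x2)
    return (x1 - 1)**2 + (x2 - 1)**2 + (y1 - 1)**2 + (y2 - 1)**2 + 1.0

# plain finite-difference gradient descent on the design variables
x = [3.0, -2.0]
h, step = 1e-6, 0.2
for _ in range(500):
    g = [(objective(x[0] + h, x[1]) - objective(x[0] - h, x[1])) / (2 * h),
         (objective(x[0], x[1] + h) - objective(x[0], x[1] - h)) / (2 * h)]
    x = [x[0] - step * g[0], x[1] - step * g[1]]

best = objective(x[0], x[1])
```

Any optimizer applied to such a problem can be checked at a glance: every variable and the objective should land on 1, which is exactly what makes the class convenient for comparing MDO approaches.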
Group Work Tests for Context-Rich Problems
ERIC Educational Resources Information Center
Meyer, Chris
2016-01-01
The group work test is an assessment strategy that promotes higher-order thinking skills for solving context-rich problems. With this format, teachers are able to pose challenging, nuanced questions on a test, while providing the support weaker students need to get started and show their understanding. The test begins with a group discussion…
Problems and Alternatives in Testing Mexican American Students.
ERIC Educational Resources Information Center
Cervantes, Robert A.
The problems of standardized tests with regard to Mexican American students, particularly "ethnic validity", are reviewed. Inadequate norm group representation, cultural bias, and language bias are purported by the author to be the most common faults of standardized tests. Suggested is the elimination of standardized testing as a principal means…
Brittle damage models in DYNA2D
Faux, D.R.
1997-09-01
DYNA2D is an explicit Lagrangian finite element code used to model dynamic events where stress wave interactions influence the overall response of the system. DYNA2D is often used to model penetration problems involving ductile-to-ductile impacts; however, with the advent of the use of ceramics in the armor-anti-armor community and the need to model damage to laser optics components, good brittle damage models are now needed in DYNA2D. This report will detail the implementation of four brittle damage models in DYNA2D, three scalar damage models and one tensor damage model. These new brittle damage models are then used to predict experimental results from three distinctly different glass damage problems.
Cattaneo, Cristina; Cantatore, Angela; Ciaffi, Romina; Gibelli, Daniele; Cigada, Alfredo; De Angelis, Danilo; Sala, Remo
2012-01-01
Identification from video surveillance systems is frequently requested in forensic practice. The "3D-2D" comparison has proven to be reliable in assessing identification but still requires standardization; this study concerns the validation of the 3D-2D profile comparison. The 3D models of the faces of five individuals were compared with photographs from the same subjects as well as from another 45 individuals. The difference in area and distance between maxima (glabella, tip of nose, fore point of upper and lower lips, pogonion) and minima points (selion, subnasale, stomion, suprapogonion) were measured. The highest difference in area between the 3D model and the 2D image was between 43 and 133 mm² in the five matches, and always greater than 157 mm² in mismatches; the mean distance between the points was greater than 1.96 mm in mismatches and less than 1.9 mm in the five matches (p < 0.05). These results indicate that this difference in areas may point toward a manner of distinguishing "correct" from "incorrect" matches.
Problems in Testing the Intonation of Advanced Foreign Learners.
ERIC Educational Resources Information Center
Mendelsohn, David
1978-01-01
It is argued that knowledge about the testing of intonation in English as a foreign language is inadequate; the major problems are outlined and tentative suggestions are given. The basic problem is that the traditional foreign language teacher's conception of intonation is limited. A three-part definition of intonation is favored, with suggestions…
Some Current Problems in Simulator Design, Testing and Use.
ERIC Educational Resources Information Center
Caro, Paul W.
Concerned with the general problem of the effectiveness of simulator training, this report reflects information developed during the conduct of aircraft simulator training research projects sponsored by the Air Force, Army, Navy, and Coast Guard. Problems are identified related to simulator design, testing, and use, all of which impact upon…
A description of the test problems in the TRAC-P standard test matrix
Steinke, R.G.
1996-03-01
This report describes 15 different test problems in the TRAC-P (Transient Reactor Analysis Code) standard test matrix of 42 test-problem calculations. Their TRACIN input-data files are listed in Appendix A. The description of each test problem includes the nature of what the test problem models and evaluates, the principal models of TRAC-P that the test problem serves to verify or validate, and the TRAC-P features and options that are being involved in its calculation. The test-problem calculations will determine the effect that changes made to a TRAC-P version have on the results. This will help the developers assess the acceptance of those changes to TRAC-P.
2-d Finite Element Code Postprocessor
1996-07-15
ORION is an interactive program that serves as a postprocessor for the analysis programs NIKE2D, DYNA2D, TOPAZ2D, and CHEMICAL TOPAZ2D. ORION reads binary plot files generated by the two-dimensional finite element codes currently used by the Methods Development Group at LLNL. Contour and color fringe plots of a large number of quantities may be displayed on meshes consisting of triangular and quadrilateral elements. ORION can compute strain measures, interface pressures along slide lines, reaction forces along constrained boundaries, and momentum. ORION has been applied to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.
Internal Photoemission Spectroscopy of 2-D Materials
NASA Astrophysics Data System (ADS)
Nguyen, Nhan; Li, Mingda; Vishwanath, Suresh; Yan, Rusen; Xiao, Shudong; Xing, Huili; Cheng, Guangjun; Hight Walker, Angela; Zhang, Qin
Recent research has shown the great benefits of using 2-D materials in the tunnel field-effect transistor (TFET), which is considered a promising candidate for the beyond-CMOS technology. The on-state current of TFET can be enhanced by engineering the band alignment of different 2D-2D or 2D-3D heterostructures. Here we present the internal photoemission spectroscopy (IPE) approach to determine the band alignments of various 2-D materials, in particular SnSe2 and WSe2, which have been proposed for new TFET designs. The metal-oxide-2-D semiconductor test structures are fabricated and characterized by IPE, where the band offsets from the 2-D semiconductor to the oxide conduction band minimum are determined by the threshold of the cube root of IPE yields as a function of photon energy. In particular, we find that SnSe2 has a larger electron affinity than most semiconductors and can be combined with other semiconductors to form near broken-gap heterojunctions with low barrier heights which can produce a higher on-state current. The details of data analysis of IPE and the results from Raman spectroscopy and spectroscopic ellipsometry measurements will also be presented and discussed.
Generates 2D Input for DYNA NIKE & TOPAZ
Hallquist, J. O.; Sanford, Larry
1996-07-15
MAZE is an interactive program that serves as an input and two-dimensional mesh generator for DYNA2D, NIKE2D, TOPAZ2D, and CHEMICAL TOPAZ2D. MAZE also generates a basic template for ISLAND input. MAZE has been applied to the generation of input data to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.
MAZE96. Generates 2D Input for DYNA NIKE & TOPAZ
Sanford, L.; Hallquist, J.O.
1992-02-24
MAZE is an interactive program that serves as an input and two-dimensional mesh generator for DYNA2D, NIKE2D, TOPAZ2D, and CHEMICAL TOPAZ2D. MAZE also generates a basic template for ISLAND input. MAZE has been applied to the generation of input data to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.
Group Work Tests for Context-Rich Problems
NASA Astrophysics Data System (ADS)
Meyer, Chris
2016-05-01
The group work test is an assessment strategy that promotes higher-order thinking skills for solving context-rich problems. With this format, teachers are able to pose challenging, nuanced questions on a test, while providing the support weaker students need to get started and show their understanding. The test begins with a group discussion phase, when students are given a "number-free" version of the problem. This phase allows students to digest the story-like problem, explore solution ideas, and alleviate some test anxiety. After 10-15 minutes of discussion, students inform the instructor of their readiness for the individual part of the test. What follows next is a pedagogical phase change from lively group discussion to quiet individual work. The group work test is a natural continuation of the group work in our daily physics classes and helps reinforce the importance of collaboration. This method has met with success at York Mills Collegiate Institute, in Toronto, Ontario, where it has been used consistently for unit tests and the final exam of the grade 12 university preparation physics course.
Behavioral tests of hippocampal function: simple paradigms, complex problems.
Gerlai, R
2001-11-01
Behavioral tests have become important tools for the analysis of functional effects of induced mutations in transgenic mice. However, depending on the type of mutation and several experimental parameters, false positive or negative findings may be obtained. Given the fact that molecular neurobiologists now make increasing use of behavioral paradigms in their research, it is imperative to revisit such problems. In this review three tests, T-maze spontaneous alternation task (T-CAT), Context dependent fear conditioning (CDFC), and Morris water maze (MWM) sensitive to hippocampal function, serve as illustrative examples for the potential problems. Spontaneous alternation tests are sometimes flawed because the handling procedure makes the test dependent on fear rather than exploratory behavior leading to altered alternation rates independent of hippocampal function. CDFC can provide misleading results because the context test, assumed to be a configural task dependent on the hippocampus, may have a significant elemental, i.e. cued, component. MWM may pose problems if its visible platform task is disproportionately easier for the subjects to solve than the hidden platform task, if the order of administration of visible and hidden platform tasks is not counterbalanced, or if inappropriate parameters are measured. Without attempting to be exhaustive, this review discusses such experimental problems and gives examples on how to avoid them.
The measurand problem in infrared breath alcohol testing
NASA Astrophysics Data System (ADS)
Vosk, Ted
2012-02-01
Measurements are made to determine the value of a quantity known as a measurand. The measurand is not always the quantity subject to measurement, however. Often, a distinct quantity will be measured and related to the measurand through a measurement function. When the identities of the measurand and the quantity actually measured are not well defined or distinguished, it can lead to the misinterpretation of results. This is referred to as the measurand problem. The measurand problem can present significant difficulties when the law and not science determines the measurand. This arises when the law requires that a particular quantity be measured. Legal definitions are seldom as rigorous or complete as those utilized in science. Thus, legally defined measurands often fall prey to the measurand problem. An example is the measurement of breath alcohol concentration by infrared spectroscopy. All 50 states authorize such tests but the measurand differs by jurisdiction. This leads to misinterpretation of results in both the forensic and legal communities due to the measurand problem with the consequence that the innocent are convicted and guilty set free. Correct interpretation of breath test results requires that the measurand be properly understood and accounted for. I set forth the varying measurands defined by law, the impact these differing measurands have on the interpretation of breath test results and how the measurand problem can be avoided in the measurement of breath alcohol concentration.
Nyström, Monica E; Terris, Darcey D; Sparring, Vibeke; Tolf, Sara; Brown, Claire R
2012-01-01
Our objective was to test whether the Structured Problem and Success Inventory (SPI) instrument could capture mental representations of organizational and work-related problems as described by individuals working in health care organizations and to test whether these representations varied according to organizational position. A convenience sample (n = 56) of middle managers (n = 20), lower-level managers (n = 20), and staff (n = 16) from health care organizations in Stockholm (Sweden) attending organizational development courses during 2003-2004 was recruited. Participants used the SPI to describe the 3 most pressing organizational and work-related problems. Data were systematically reviewed to identify problem categories and themes. One hundred sixty-four problems were described, clustered into 13 problem categories. Generally, middle managers focused on organizational factors and managerial responsibilities, whereas lower-level managers and staff focused on operational issues and what others did or ought to do. Furthermore, we observed similarities and variation in perceptions and their association with respondents' position within an organization. Our results support the need for further evaluation of the SPI as a promising tool for health care organizations. Collecting structured inventories of organizational and work-related problems from multiple perspectives may assist in the development of shared understandings of organizational challenges and lead to more effective and efficient processes of solution planning and implementation.
The data aggregation problem in quantum hypothesis testing
NASA Astrophysics Data System (ADS)
Cialdi, Simone; Paris, Matteo G. A.
2015-01-01
We discuss the implications of quantum-classical Yule-Simpson effect for quantum hypothesis testing in the presence of noise, and provide an experimental demonstration of its occurrence in the problem of discriminating which polarization quantum measurement has been actually performed by a detector box designed to measure linear polarization of single-photon states along a fixed but unknown direction.
NASA Astrophysics Data System (ADS)
Wang, Jin; Ma, Jianyong; Zhou, Changhe
2014-11-01
A 3×3 high-divergence 2D grating with a period of 3.842 μm at a wavelength of 850 nm under normal incidence is designed and fabricated in this paper. The high-divergence 2D grating is designed using vector theory. Rigorous coupled-wave analysis (RCWA) in association with simulated annealing (SA) is adopted to calculate and optimize the 2D grating, and the properties of the grating are also investigated by RCWA. The diffraction angles are more than 10 degrees over the whole wavelength band, larger than those of traditional 2D gratings. In addition, the small period of the grating increases the difficulty of fabrication, so we fabricate the 2D gratings by direct laser writing (DLW) instead of traditional manufacturing methods; ICP etching is then used to obtain the high-divergence 2D grating.
[Ethical problems of hygienic tests in occupational medicine].
At'kov, O Yu; Gorokhova, S G
2016-01-01
The authors discuss bioethical problems arising from the use of genetic tests as a technology of personalized medicine for the prevention and early diagnosis of occupational diseases, connected with the question "Who has a right to know the results of a genetic test?". The analysis covers the principles and legal norms regulating a person's right to the security of health information, and the causes of anxiety about discrimination against workers on the basis of genetic test results. The authors argue for distinguishing between discrimination and reasonable restrictions that favor workers in cases where working conditions could be a health hazard for a person with a genetic predisposition. PMID:27396148
NASA Astrophysics Data System (ADS)
Slanger, T. G.; Cosby, P. C.; Huestis, D. L.
2003-04-01
N(^2D) is an important species in the nighttime ionosphere, as its reaction with O_2 is a principal source of NO. Its modeled concentration peaks near 200 km, at approximately 4 × 10^5 cm^-3. Nightglow emission in the optically forbidden lines at 519.8 and 520.0 nm is quite weak, a consequence of the combination of an extremely long radiative lifetime, about 10^5 sec, and quenching by O-atoms, O_2, and N_2. The radiative lifetime is known only from theory, and various calculations lead to a range of possible values for the intensity ratio R = I(519.8)/I(520.0) of 1.5-2.5. On the observational side, Hernandez and Turtle [1969] determined a range of R = 1.3-1.9 in the nightglow, and Sivjee et al. [1981] reported a variable ratio in aurorae, between 1.2 and 1.6. From sky spectra obtained at the Keck II telescope on Mauna Kea, we have accumulated eighty-five 30-60 minute data sets, from March and October, 2000, and April, 2001, over 13 nights of astronomical observations. We find R to have a quite precise value of 1.760 ± 0.012 (2-σ). There is no difference between the three data sets in terms of the extracted ratio, which therefore seems to be independent of external conditions. At the same time, determination of the O(^1D - ^3P) doublet intensity ratio, I(630.0)/I(636.4), gives a value of 3.03 ± 0.01, the statistical expectation. G. Hernandez and J. P. Turtle, Planet. Space Sci. 17, 675, 1969. G. G. Sivjee, C. S. Deehr, and K. Henricksen, J. Geophys. Res. 86, 1581, 1981.
Discuss the testing problems of ultraviolet irradiance meters
NASA Astrophysics Data System (ADS)
Ye, Jun'an; Lin, Fangsheng
2014-09-01
Ultraviolet irradiance meters are widely used to measure ultraviolet irradiance intensity in many areas such as medical treatment, epidemic prevention, energy conservation and environmental protection, computing, manufacturing, electronics, material ageing, and the photoelectric effect. The accuracy of their readings therefore directly affects sterility control and treatment in hospitals, the prevention work of CDCs, and the control accuracy of curing and aging in manufacturing industries. Because the reading of an ultraviolet irradiance meter drifts easily, it must be recalibrated after being used for a period of time to ensure accuracy. By comparison with standard ultraviolet irradiance meters, which are traceable to national benchmarks, we can obtain the correction factor that keeps an instrument working accurately and giving reliable measured data. This leads to an important question: what kind of testing device is more accurate and reliable? This article introduces the testing method and problems of the current testing device for ultraviolet irradiance meters. To solve these problems, we have developed a new three-dimensional automatic testing device. We introduce the structure and working principle of this system, compare the advantages and disadvantages of the two devices, and analyze the errors in the testing of ultraviolet irradiance meters.
ERIC Educational Resources Information Center
Marchis, Iuliana
2009-01-01
The results of Romanian pupils in Mathematics on the international tests PISA and TIMSS are below average. These poor results have many explanations. In this article we compare the Mathematics problems given on these international tests with those given on national tests in Romania.
Qualification tests and electrical measurements: Practice and problems
NASA Technical Reports Server (NTRS)
Smokler, M. I.
1983-01-01
As part of the Flat-Plate Solar Array Project, 138 different module designs were subjected to qualification tests, and electrical measurements were performed on well over a thousand modules representing more than 150 designs. From this experience, conclusions are drawn regarding results and problems, with discussion of the need for change or improvement. The qualification test sequence included application of environmental and electrical stresses to the module. With few exceptions, the tests have revealed defects necessitating module design or process changes. However, the continued need for these tests may be questioned on the basis of technical and logistical factors. Technically, the current test sequence does not cover all design characteristics, does not include all field conditions, and is not known to represent the desired 30-year module life. Logistically, the tests are time-consuming and costly, and there is a lack of fully qualified independent test organizations. Alternatives to the current test program include simplification based on design specification and site environment, and/or the use of warranties or other commercial practices.
Testing allele homogeneity: the problem of nested hypotheses
2012-01-01
Background The evaluation of associations between genotypes and diseases in a case-control framework plays an important role in genetic epidemiology. This paper focuses on the evaluation of the homogeneity of both genotypic and allelic frequencies. The traditional test that is used to check allelic homogeneity is known to be valid only under Hardy-Weinberg equilibrium, a property that may not hold in practice. Results We first describe the flaws of the traditional (chi-squared) tests for both allelic and genotypic homogeneity. Besides the known problem of the allelic procedure, we show that whenever these tests are used, an incoherence may arise: sometimes the genotypic homogeneity hypothesis is not rejected, but the allelic hypothesis is. As we argue, this is logically impossible. Some methods that were recently proposed implicitly rely on the idea that this does not happen. In an attempt to correct this incoherence, we describe an alternative frequentist approach that is appropriate even when Hardy-Weinberg equilibrium does not hold. It is then shown that the problem remains and is intrinsic of frequentist procedures. Finally, we introduce the Full Bayesian Significance Test to test both hypotheses and prove that the incoherence cannot happen with these new tests. To illustrate this, all five tests are applied to real and simulated datasets. Using the celebrated power analysis, we show that the Bayesian method is comparable to the frequentist one and has the advantage of being coherent. Conclusions Contrary to more traditional approaches, the Full Bayesian Significance Test for association studies provides a simple, coherent and powerful tool for detecting associations. PMID:23176636
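The two classical procedures discussed above are easy to write down: the genotypic test compares a 2×3 table of genotype counts between cases and controls, while the allelic test collapses it to a 2×2 table of allele counts (each AA contributes two A alleles, each Aa one). A pure-Python sketch of both Pearson statistics follows; the example counts are invented, and note the abstract's caveat that the allelic version is only valid under Hardy-Weinberg equilibrium.

```python
def chi2_stat(table):
    """Pearson chi-squared statistic for an r x c contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = rows[i] * cols[j] / total   # expected count under homogeneity
            stat += (obs - exp) ** 2 / exp
    return stat

def allele_table(genotype_table):
    """Collapse genotype counts [AA, Aa, aa] per group to allele counts [A, a]."""
    return [[2 * aa + ab, 2 * bb + ab] for aa, ab, bb in genotype_table]

genotypes = [[30, 40, 30],   # cases:    AA, Aa, aa (invented counts)
             [20, 50, 30]]   # controls: AA, Aa, aa
geno_stat = chi2_stat(genotypes)                 # compare to chi2 with 2 df
alle_stat = chi2_stat(allele_table(genotypes))   # compare to chi2 with 1 df
```

Comparing the two p-values on tables like this is exactly the setting in which the incoherence described above can surface: the genotypic hypothesis may survive while the (logically weaker) allelic one is rejected.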
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. House Committee on Education and the Workforce.
H.R. 2846, a bill to prohibit spending Federal education funds on national testing without explicit and specific legislation was referred to the Committee on Education and the Workforce of the U.S. House of Representatives. The Committee, having reviewed the bill, reports favorably on it in this document, proposes some amendments, and recommends…
Baiz, Carlos R.; Schach, Denise; Tokmakoff, Andrei
2014-01-01
We describe a microscope for measuring two-dimensional infrared (2D IR) spectra of heterogeneous samples with μm-scale spatial resolution, sub-picosecond time resolution, and the molecular structure information of 2D IR, enabling the measurement of vibrational dynamics through correlations in frequency, time, and space. The setup is based on a fully collinear “one beam” geometry in which all pulses propagate along the same optics. Polarization, chopping, and phase cycling are used to isolate the 2D IR signals of interest. In addition, we demonstrate the use of vibrational lifetime as a contrast agent for imaging microscopic variations in molecular environments. PMID:25089490
Test-state approach to the quantum search problem
Sehrawat, Arun; Nguyen, Le Huy; Englert, Berthold-Georg
2011-05-15
The search for 'a quantum needle in a quantum haystack' is a metaphor for the problem of finding out which one of a permissible set of unitary mappings - the oracles - is implemented by a given black box. Grover's algorithm solves this problem with quadratic speedup as compared with the analogous search for 'a classical needle in a classical haystack'. Since the outcome of Grover's algorithm is probabilistic - it gives the correct answer with high probability, not with certainty - the answer requires verification. For this purpose we introduce specific test states, one for each oracle. These test states can also be used to realize 'a classical search for the quantum needle' which is deterministic - it always gives a definite answer after a finite number of steps - and 3.41 times as fast as the purely classical search. Since the test-state search and Grover's algorithm look for the same quantum needle, the average number of oracle queries of the test-state search is the classical benchmark for Grover's algorithm.
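The (π/4)√N query count behind the quadratic speedup discussed in the abstract above can be reproduced with a minimal state-vector simulation. This is an illustrative sketch only; the function name and parameters are ours, not the authors':

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover's algorithm on a state vector of size N = 2**n_qubits.

    The oracle flips the sign of the marked basis state; the diffusion
    operator reflects the amplitudes about their mean.
    """
    N = 2 ** n_qubits
    psi = np.full(N, 1.0 / np.sqrt(N))              # uniform superposition
    n_iter = int(np.floor(np.pi / 4 * np.sqrt(N)))  # ~ (pi/4) sqrt(N) oracle queries
    for _ in range(n_iter):
        psi[marked] *= -1.0                         # oracle: phase flip on the "needle"
        psi = 2.0 * psi.mean() - psi                # diffusion: inversion about the mean
    return np.abs(psi) ** 2                         # measurement probabilities

probs = grover_search(4, marked=7)                  # N = 16, 3 iterations
```

For N = 16 the marked state is found with probability above 0.96 after only three queries, illustrating why the probabilistic outcome still requires the kind of verification step the test states provide.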
A test of the testing effect: acquiring problem-solving skills from worked examples.
van Gog, Tamara; Kester, Liesbeth
2012-01-01
The "testing effect" refers to the finding that after an initial study opportunity, testing is more effective for long-term retention than restudying. The testing effect seems robust and is a finding from the field of cognitive science that has important implications for education. However, it is unclear whether this effect also applies to the acquisition of problem-solving skills, which is important to establish given the key role problem solving plays in, for instance, math and science education. Worked examples are an effective and efficient way of acquiring problem-solving skills. Forty students either only studied worked examples (SSSS) or engaged in testing after studying an example by solving an isomorphic problem (STST). Surprisingly, results showed equal performance in both conditions on an immediate retention test after 5 min, but the SSSS condition outperformed the STST condition on a delayed retention test after 1 week. These findings suggest the testing effect might not apply to acquiring problem-solving skills from worked examples.
Phillips, Lawrence M; Hachamovitch, Rory; Berman, Daniel S; Iskandrian, Ami E; Min, James K; Picard, Michael H; Kwong, Raymond Y; Friedrich, Matthias G; Scherrer-Crosbie, Marielle; Hayes, Sean W; Sharir, Tali; Gosselin, Gilbert; Mazzanti, Marco; Senior, Roxy; Beanlands, Rob; Smanio, Paola; Goyal, Abhi; Al-Mallah, Mouaz; Reynolds, Harmony; Stone, Gregg W; Maron, David J; Shaw, Leslee J
2013-12-01
There is a preponderance of evidence that, in the setting of an acute coronary syndrome, an invasive approach using coronary revascularization has a morbidity and mortality benefit. However, recent stable ischemic heart disease (SIHD) randomized clinical trials testing whether the addition of coronary revascularization to guideline-directed medical therapy (GDMT) reduces death or major cardiovascular events have been negative. Based on the evidence from these trials, the primary role of GDMT as a front line medical management approach has been clearly defined in the recent SIHD clinical practice guideline; the role of prompt revascularization is less precisely defined. Based on data from observational studies, it has been hypothesized that there is a level of ischemia above which a revascularization strategy might result in benefit regarding cardiovascular events. However, eligibility for recent negative trials in SIHD has mandated at most minimal standards for ischemia. An ongoing randomized trial evaluating the effectiveness of randomization of patients to coronary angiography and revascularization as compared to no coronary angiography and GDMT in patients with moderate-severe ischemia will formally test this hypothesis. The current review will highlight the available evidence including a review of the published and ongoing SIHD trials.
2004-08-01
AnisWave2D is a 2D finite-difference code for simulating seismic wave propagation in fully anisotropic materials. The code is implemented to run in parallel over multiple processors and is fully portable. A mesh refinement algorithm has been utilized to allow the grid-spacing to be tailored to the velocity model, avoiding the over-sampling of high-velocity materials that usually occurs in fixed-grid schemes.
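As a sketch of the kind of scheme such codes implement, the isotropic, constant-density special case of an explicit 2-D finite-difference wave update fits in a few lines. Grid size, Courant number, and function names here are illustrative assumptions, not AnisWave2D's actual implementation:

```python
import numpy as np

def wave_step_2d(u, u_prev, courant_sq):
    """One explicit leapfrog step of the 2-D acoustic wave equation on a
    uniform grid (5-point Laplacian, zero Dirichlet edges).

    courant_sq = (c * dt / dx)**2; stability requires c*dt/dx < 1/sqrt(2).
    """
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    u_next = 2.0 * u - u_prev + courant_sq * lap
    # zero Dirichlet boundaries (also neutralizes np.roll's periodic wrap,
    # since the wrapped-in rows/columns are always zero)
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0
    return u_next

# point source in the middle of a 101x101 grid; Courant number 0.5
n = 101
u = np.zeros((n, n)); u[n // 2, n // 2] = 1.0
u_prev = u.copy()
for _ in range(200):
    u, u_prev = wave_step_2d(u, u_prev, 0.25), u
```

The fixed-grid over-sampling the abstract mentions is visible here: dx is uniform, so a high-velocity region forces a small dt everywhere, which is exactly what mesh refinement avoids.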
Gaedigk, Andrea; Bradford, L Dianne; Alander, Sarah W; Leeder, J Steven
2006-04-01
Unexplained cases of CYP2D6 genotype/phenotype discordance continue to be discovered. In previous studies, several African Americans with a poor metabolizer phenotype carried the reduced function CYP2D6*10 allele in combination with a nonfunctional allele. We pursued the possibility that these alleles harbor either a known sequence variation (i.e., CYP2D6*36 carrying a gene conversion in exon 9 along with the CYP2D6*10-defining 100C>T single-nucleotide polymorphism) or novel sequence variation(s). Discordant cases were evaluated by long-range polymerase chain reaction (PCR) to test for gene rearrangement events, and a 6.6-kilobase pair PCR product encompassing the CYP2D6 gene was cloned and entirely sequenced. Thereafter, allele frequencies were determined in different study populations comprising whites, African Americans, and Asians. Analyses covering the CYP2D7 to 2D6 gene region established that CYP2D6*36 existed not only as a gene duplication (CYP2D6*36x2) or in tandem with *10 (CYP2D6*36+*10), as previously reported, but also by itself. This "single" CYP2D6*36 allele was found in nine African Americans and one Asian, but was absent in the whites tested. Ultimately, the presence of CYP2D6*36 resolved genotype/phenotype discordance in three cases. We also discovered an exon 9 conversion-positive CYP2D6*4 gene in a duplication arrangement (CYP2D6*4Nx2) and a CYP2D6*4 allele lacking 100C>T (CYP2D6*4M) in two white subjects. The discovery of an allele that carries only one CYP2D6*36 gene copy provides unequivocal evidence that both CYP2D6*36 and *36x2 are associated with a poor metabolizer phenotype. Given a combined frequency of between 0.5 and 3% in African Americans and Asians, genotyping for CYP2D6*36 should improve the accuracy of genotype-based phenotype prediction in these populations.
Review of measurement and testing problems. [of aircraft emissions
NASA Technical Reports Server (NTRS)
1976-01-01
Good instrumentation was required to obtain reliable and repeatable baseline data. Problems that were encountered in developing such a total system were: (1) accurate airflow measurement, (2) precise fuel flow measurement, and (3) frequent malfunctions of the instrumentation used for pollutant measurement. Span gas quality had a significant effect on emissions test results. The Spindt method, used in the piston aircraft emissions program, provided a comparative computational procedure for fuel/air ratio based on measured emissions concentrations.
A class of self-similar hydrodynamics test problems
Ramsey, Scott D; Brown, Lowell S; Nelson, Eric M; Alme, Marv L
2010-12-08
We consider self-similar solutions to the gas dynamics equations. One such solution - a spherical geometry Gaussian density profile - has been analyzed in the existing literature, and a connection between it, a linear velocity profile, and a uniform specific internal energy profile has been identified. In this work, we assume the linear velocity profile to construct an entire class of self-similar solutions in both cylindrical and spherical geometry, of which the Gaussian form is one possible member. After completing the derivation, we present some results in the context of a test problem for compressible flow codes.
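The connection the abstract describes, a Gaussian density with a linear velocity profile, can be verified for the continuity equation symbolically. This is a sketch under our own assumed normalization (unit amplitude, arbitrary positive scale radius R(t)), not the authors' notation:

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)
R = sp.Function('R', positive=True)(t)   # time-dependent scale radius (assumed form)
rho = R**-3 * sp.exp(-(r / R)**2)        # spherical Gaussian density profile
u = r * sp.diff(R, t) / R                # linear velocity profile u = r * Rdot / R

# Spherical-geometry continuity equation:
#   d(rho)/dt + (1/r^2) d(r^2 rho u)/dr = 0
residual = sp.diff(rho, t) + sp.diff(r**2 * rho * u, r) / r**2
assert sp.simplify(residual) == 0        # the pair satisfies mass conservation exactly
```

The same substitution into the momentum and energy equations is what constrains the remaining profiles (pressure, specific internal energy) in the class of solutions the paper constructs.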
49 CFR 40.205 - How are drug test problems corrected?
Code of Federal Regulations, 2010 CFR
2010-10-01
49 CFR § 40.205, Procedures for Transportation Workplace Drug and Alcohol Testing Programs, Problems in Drug Tests: How are drug test problems corrected? …you must try to correct the problem promptly, if doing so is practicable. You may conduct…
CYP2D7 Sequence Variation Interferes with TaqMan CYP2D6*15 and *35 Genotyping
Riffel, Amanda K.; Dehghani, Mehdi; Hartshorne, Toinette; Floyd, Kristen C.; Leeder, J. Steven; Rosenblatt, Kevin P.; Gaedigk, Andrea
2016-01-01
TaqMan™ genotyping assays are widely used to genotype CYP2D6, which encodes a major drug metabolizing enzyme. Assay design for CYP2D6 can be challenging owing to the presence of two pseudogenes, CYP2D7 and CYP2D8, structural and copy number variation, and numerous single nucleotide polymorphisms (SNPs), some of which reflect the wild-type sequence of the CYP2D7 pseudogene. The aim of this study was to identify the mechanism causing false-positive CYP2D6*15 calls and remediate those by redesigning and validating alternative TaqMan genotype assays. Among 13,866 DNA samples genotyped by the CompanionDx® lab on the OpenArray platform, 70 samples were identified as heterozygotes for 137Tins, the key SNP of CYP2D6*15. However, only 15 samples were confirmed when tested with the Luminex xTAG CYP2D6 Kit and sequencing of CYP2D6-specific long range (XL)-PCR products. Genotype and gene resequencing of CYP2D6 and CYP2D7-specific XL-PCR products revealed a CC>GT dinucleotide SNP in exon 1 of CYP2D7 that reverts the sequence to CYP2D6 and allows a TaqMan assay PCR primer to bind. Because CYP2D7 also carries a Tins, a false-positive mutation signal is generated. This CYP2D7 SNP was also responsible for generating false-positive signals for rs769258 (CYP2D6*35) which is also located in exon 1. Although alternative CYP2D6*15 and *35 assays resolved the issue, we discovered a novel CYP2D6*15 subvariant in one sample that carries additional SNPs preventing detection with the alternate assay. The frequency of CYP2D6*15 was 0.1% in this ethnically diverse U.S. population sample. In addition, we also discovered linkage between the CYP2D7 CC>GT dinucleotide SNP and the 77G>A (rs28371696) SNP of CYP2D6*43. The frequency of this tentatively functional allele was 0.2%. Taken together, these findings emphasize that regardless of how carefully genotyping assays are designed and evaluated before being commercially marketed, rare or unknown SNPs underneath primer and/or probe regions can impact
Chen, Xiang-yi; Chen, Yi-ping; Xia, Ze-min; Hu, Heng-bin; Sun, Yan-qiong; Huang, Wei-yuan
2012-09-01
Three α-Keggin heteropolymolybdates with the formula [(C(5)H(4)NH)COOH](3)[PMo(12)O(40)] 1, {[Sm(H(2)O)(4)(pdc)](3)}{[Sm(H(2)O)(3)(pdc)]}[SiMo(12)O(40)]·3H(2)O 2 and {[La(H(2)O)(4)(pdc)](4)}[PMo(12)O(40)]F 3 (H(2)pdc = pyridine-2,6-dicarboxylate), have been synthesized under hydrothermal condition and characterized by single crystal X-ray diffraction analyses, elemental analyses, inductively coupled plasma atomic emission spectroscopy (ICP-AES), IR, thermal gravimetric analyses, thermal infrared spectrum analyses and powder X-ray diffraction (PXRD) analyses. Single crystal X-ray diffraction indicates all three compounds comprise ball-shaped Keggin type [XMo(12)O(40)](n-) polyoxometalates (POMs) (n = 3, X = P; n = 4, X = Si, respectively) with different types of carboxylic ligands derived from H(2)pdc, and these cluster anions are isostructural. In order to explore structural characteristics, Rhodamine B photocatalytic (RhB) degradation and two-dimensional infrared correlation spectroscopy (2D-IR COS) tests, are investigated for 1, 2 and 3. In RhB degradation, all compounds show good photocatalytic activity. For 1, the activity mainly comes from POMs. While in 2 and 3, POMs' photocatalytic activity is enhanced by the Ln(iii)-pdc metal-organic frameworks. Structural properties like POM's stability and magnetic sensitivity are discussed by 2D-IR COS under thermal/magnetic perturbations.
Leak testing of cryogenic components — problems and solutions
NASA Astrophysics Data System (ADS)
Srivastava, S. P.; Pandarkar, S. P.; Unni, T. G.; Sinha, A. K.; Mahajan, K.; Suthar, R. L.
2008-05-01
moderator pot was driving the MSLD out of range. Since it was very difficult to locate the leak by the Tracer Probe Method, some other technique was ventured to solve the problem of leak location. Finally, it was possible to locate the leak by observing the change in the Helium background reading of the MSLD during masking/unmasking of the welded joints. This paper, in general, describes the design and leak testing aspects of cryogenic components of the Cold Neutron Source and, in particular, the problems and solutions for leak testing of the transfer lines and moderator pot.
Mechanical modeling of porous oxide fuel pellet A Test Problem
Nukala, Phani K; Barai, Pallab; Simunovic, Srdjan; Ott, Larry J
2009-10-01
A poro-elasto-plastic material model has been developed to capture the response of oxide fuels inside the nuclear reactors under operating conditions. Behavior of the oxide fuel and variation in void volume fraction under mechanical loading as predicted by the developed model has been reported in this article. The significant effect of void volume fraction on the overall stress distribution of the fuel pellet has also been described. An important oxide fuel issue that can have significant impact on the fuel performance is the mechanical response of oxide fuel pellet and clad system. Specifically, modeling the thermo-mechanical response of the fuel pellet in terms of its thermal expansion, mechanical deformation, swelling due to void formation and evolution, and the eventual contact of the fuel with the clad is of significant interest in understanding the fuel-clad mechanical interaction (FCMI). These phenomena are nonlinear and coupled since reduction in the fuel-clad gap affects thermal conductivity of the gap, which in turn affects temperature distribution within the fuel and the material properties of the fuel. Consequently, in order to accurately capture fuel-clad gap closure, we need to account for fuel swelling due to generation, retention, and evolution of fission gas in addition to the usual thermal expansion and mechanical deformation. Both fuel chemistry and microstructure also have a significant effect on the nucleation and growth of fission gas bubbles. Fuel-clad gap closure leading to eventual contact of the fuel with the clad introduces significant stresses in the clad, which makes thermo-mechanical response of the clad even more relevant. The overall aim of this test problem is to incorporate the above features in order to accurately capture fuel-clad mechanical interaction. Because of the complex nature of the problem, a series of test problems with increasing multi-physics coupling features, modeling accuracy, and complexity are defined with the
DYNA2D96. Explicit 2-D Hydrodynamic FEM Program
Whirley, R.G.
1992-04-01
DYNA2D is a vectorized, explicit, two-dimensional, axisymmetric and plane strain finite element program for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. DYNA2D contains 13 material models and 9 equations of state (EOS) to cover a wide range of material behavior. The material models implemented in all machine versions are: elastic, orthotropic elastic, kinematic/isotropic elastic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, rubber, high explosive burn, isotropic elastic-plastic, temperature-dependent elastic-plastic. The isotropic and temperature-dependent elastic-plastic models determine only the deviatoric stresses. Pressure is determined by one of 9 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, and tabulated.
Report of the 1988 2-D Intercomparison Workshop, chapter 3
NASA Technical Reports Server (NTRS)
Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar
1989-01-01
Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially-high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.
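The diffusivity factor approximation mentioned above replaces the angular integral for flux transmittance, 2∫₀¹ e^(−τ/μ) μ dμ = 2E₃(τ), by the single-ray form e^(−Dτ) with D ≈ 1.66. A quick numerical check (our illustration, not from the report) shows the absolute error stays within a few percent for optical depths of order one:

```python
import numpy as np
from scipy.integrate import quad

def flux_transmittance(tau):
    """Exact angle-integrated transmittance 2 * int_0^1 exp(-tau/mu) * mu dmu,
    i.e. 2*E3(tau), for an isothermal layer of optical depth tau."""
    val, _ = quad(lambda mu: np.exp(-tau / mu) * mu if mu > 0 else 0.0, 0.0, 1.0)
    return 2.0 * val

D = 1.66  # standard diffusivity factor
for tau in (0.1, 0.5, 1.0):
    exact = flux_transmittance(tau)
    approx = np.exp(-D * tau)  # diffusivity-factor approximation
```

The approximation error is small but systematic, consistent with the abstract's point that it matters little in the troposphere yet can bias cooling rates where they are differenced against nearly equal heating rates.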
QUENCH2D. Two-Dimensional IHCP Code
Osman, A.; Beck, J.V.
1995-01-01
QUENCH2D* is developed for the solution of general, non-linear, two-dimensional inverse heat transfer problems. This program provides estimates for the surface heat flux distribution and/or heat transfer coefficient as a function of time and space by using transient temperature measurements at appropriate interior points inside the quenched body. Two-dimensional planar and axisymmetric geometries such as turbine disks and blades, clutch packs, and many other problems can be analyzed using QUENCH2D*.
Testing problem solving in turkey vultures (Cathartes aura) using the string-pulling test.
Ellison, Anne Margaret; Watson, Jane; Demers, Eric
2015-01-01
To examine problem solving in turkey vultures (Cathartes aura), six captive vultures were presented with a string-pulling task, which involved drawing a string up to access food. This test has been used to assess cognition in many bird species. A small piece of meat suspended by a string was attached to a perch. Two birds solved the problem without apparent trial-and-error learning; a third bird solved the problem after observing a successful bird, suggesting that this individual learned from the other vulture. The remaining birds failed to complete the task. The successful birds significantly reduced the time needed to solve the task from early trials compared to late trials, suggesting that they had learned to solve the problem and improved their technique. The successful vultures solved the problem in a novel way: they pulled the string through their beak with their tongue, and may have gathered the string in their crop until the food was in reach. In contrast, ravens, parrots and finches use a stepwise process; they pull the string up, tuck it under foot, and reach down to pull up another length. As scavengers, turkey vultures use their beak for tearing and ripping at carcasses, but possess large, flat, webbed feet that are ill-suited to pulling or grasping. The ability to solve this problem and the novel approach used by the turkey vultures in this study may be a result of the unique evolutionary pressures imposed on this scavenging species. PMID:25015133
49 CFR 40.205 - How are drug test problems corrected?
Code of Federal Regulations, 2011 CFR
2011-10-01
49 CFR § 40.205, Office of the Secretary of Transportation, Procedures for Transportation Workplace Drug and Alcohol Testing Programs, Problems in Drug Tests: How are drug test problems corrected? (a) As a collector, you have…
49 CFR 40.199 - What problems always cause a drug test to be cancelled?
Code of Federal Regulations, 2013 CFR
2013-10-01
49 CFR § 40.199, Procedures for Transportation Workplace Drug and Alcohol Testing Programs, Problems in Drug Tests: What problems always cause a drug test to be cancelled? (a) As the MRO, when the laboratory discovers a "fatal flaw"…
49 CFR 40.271 - How are alcohol testing problems corrected?
Code of Federal Regulations, 2010 CFR
2010-10-01
49 CFR § 40.271, Procedures for Transportation Workplace Drug and Alcohol Testing Programs, Problems in Alcohol Testing: How are alcohol testing problems corrected? (a) As a BAT or STT, you have the responsibility of trying to complete successfully…
49 CFR 40.271 - How are alcohol testing problems corrected?
Code of Federal Regulations, 2011 CFR
2011-10-01
49 CFR § 40.271, Procedures for Transportation Workplace Drug and Alcohol Testing Programs, Problems in Alcohol Testing: How are alcohol testing problems corrected? …alcohol test for each employee. (1) If, during or shortly after the testing process, you become aware…
49 CFR 40.271 - How are alcohol testing problems corrected?
Code of Federal Regulations, 2013 CFR
2013-10-01
49 CFR § 40.271, Procedures for Transportation Workplace Drug and Alcohol Testing Programs, Problems in Alcohol Testing: How are alcohol testing problems corrected? …alcohol test for each employee. (1) If, during or shortly after the testing process, you become aware…
49 CFR 40.271 - How are alcohol testing problems corrected?
Code of Federal Regulations, 2012 CFR
2012-10-01
49 CFR § 40.271, Procedures for Transportation Workplace Drug and Alcohol Testing Programs, Problems in Alcohol Testing: How are alcohol testing problems corrected? …alcohol test for each employee. (1) If, during or shortly after the testing process, you become aware…
2001-01-31
This software reduces the data from the two-dimensional kSA MOS program, k-Space Associates, Ann Arbor, MI. Initial MOS data is recorded without headers in 38 columns, with one row of data per acquisition per laser beam tracked. The final MOS 2d data file is reduced, graphed, and saved in a tab-delimited column format with headers that can be plotted in any graphing software.
Chimpanzee Problem-Solving: A Test for Comprehension.
ERIC Educational Resources Information Center
Premack, David; Woodruff, Guy
1978-01-01
Investigates a chimpanzee's capacity to recognize representations of problems and solutions, as well as its ability to perceive the relationship between each type of problem and its appropriate solutions using televised programs and photographic solutions. (HM)
MAGNUM-2D computer code: user's guide
England, R.L.; Kline, N.W.; Ekblad, K.J.; Baca, R.G.
1985-01-01
Information relevant to the general use of the MAGNUM-2D computer code is presented. This computer code was developed for the purpose of modeling (i.e., simulating) the thermal and hydraulic conditions in the vicinity of a waste package emplaced in a deep geologic repository. The MAGNUM-2D code computes (1) the temperature field surrounding the waste package as a function of the heat generation rate of the nuclear waste and thermal properties of the basalt and (2) the hydraulic head distribution and associated groundwater flow fields as a function of the temperature gradients and hydraulic properties of the basalt. MAGNUM-2D is a two-dimensional numerical model for transient or steady-state analysis of coupled heat transfer and groundwater flow in a fractured porous medium. The governing equations consist of a set of coupled, quasi-linear partial differential equations that are solved using a Galerkin finite-element technique. A Newton-Raphson algorithm is embedded in the Galerkin functional to formulate the problem in terms of the incremental changes in the dependent variables. Both triangular and quadrilateral finite elements are used to represent the continuum portions of the spatial domain. Line elements may be used to represent discrete conduits. 18 refs., 4 figs., 1 tab.
Material behavior and materials problems in TFTR (Tokamak Fusion Test Reactor)
Dylla, H.F.; Ulrickson, M.A.; Owens, D.K.; Heifetz, D.B.; Mills, B.E.; Pontau, A.E.; Wampler, W.R.; Doyle, B.L.; Lee, S.R.; Watson, R.D.; Croessmann, C.D.
1988-05-01
This paper reviews the experience with first-wall materials over a 20-month period of operation spanning 1985--1987. Experience with the axisymmetric inner wall limiter, constructed of graphite tiles, will be described including the necessary conditioning procedures needed for impurity and particle control of high power (≤20 MW) neutral injection experiments. The thermal effects in disruptions have been quantified and no significant damage to the bumper limiter has occurred as a result of disruptions. Carbon and metal impurity redeposition effects have been quantified through surface analysis of wall samples. Estimates of the tritium retention in the graphite limiter tiles and redeposited carbon films have been made based on analysis of deuterium retention in removed graphite tiles and wall samples. New limiter structures have been designed using a 2D carbon/carbon (C/C) composite material for RF antenna protection. Laboratory tests of the important thermal, mechanical and vacuum properties of C/C materials will be described. Finally, the last series of experiments in TFTR with in-situ Zr/Al surface pumps will be described. Problems with Zr/Al embrittlement have led to the removal of the getter material from the in-torus environment. 53 refs., 8 figs., 3 tabs.
49 CFR 40.199 - What problems always cause a drug test to be cancelled?
Code of Federal Regulations, 2010 CFR
2010-10-01
49 CFR § 40.199, Section 40.199, Office of the Secretary of Transportation, Procedures for Transportation Workplace Drug and Alcohol Testing Programs, Problems in Drug Tests: What problems always cause a drug test to be cancelled? …
Neuropsychological Testing of Developmentally Delayed Young Children: Problems and Progress.
ERIC Educational Resources Information Center
Stone, Nancy W.; Levin, Harvey S.
1979-01-01
The study involving 13 developmentally delayed children (36-66 months old) was conducted to determine the applicability of the Peabody Picture Vocabulary Test, the Motor Impersistence Test, Graphesthesia Test, and Stereognosis-Tactile Test with developmentally delayed infants and preschoolers. (SBH)
ERIC Educational Resources Information Center
Douglass, James B.
A general process for testing the feasibility of applying alternative mathematical or statistical models to the solution of a practical problem is presented and flowcharted. The system is used to compare five models for test equating: (1) anchor test equating using classical test theory; (2) anchor test equating using the one-parameter logistic…
Georgi, Howard; Kats, Yevgeny
2008-09-26
We discuss what can be learned about unparticle physics by studying simple quantum field theories in one space and one time dimension. We argue that the exactly soluble 2D theory of a massless fermion coupled to a massive vector boson, the Sommerfield model, is an interesting analog of a Banks-Zaks model, approaching a free theory at high energies and a scale-invariant theory with nontrivial anomalous dimensions at low energies. We construct a toy standard model coupling to the fermions in the Sommerfield model and study how the transition from unparticle behavior at low energies to free particle behavior at high energies manifests itself in interactions with the toy standard model particles.
A new inversion method for (T2, D) 2D NMR logging and fluid typing
NASA Astrophysics Data System (ADS)
Tan, Maojin; Zou, Youlong; Zhou, Cancan
2013-02-01
One-dimensional nuclear magnetic resonance (1D NMR) logging technology has some significant limitations in fluid typing. However, not only can two-dimensional nuclear magnetic resonance (2D NMR) provide some accurate porosity parameters, but it can also identify fluids more accurately than 1D NMR. In this paper, based on the relaxation mechanism of (T2, D) 2D NMR in a gradient magnetic field, a hybrid inversion method that combines least-squares-based QR decomposition (LSQR) and truncated singular value decomposition (TSVD) is examined in the 2D NMR inversion of various fluid models. The forward modeling and inversion tests are performed in detail with different acquisition parameters, such as magnetic field gradients (G) and echo spacing (TE) groups. The simulated results are discussed and described in detail, the influence of the above-mentioned observation parameters on the inversion accuracy is investigated and analyzed, and the observation parameters in multi-TE activation are optimized. Furthermore, the hybrid inversion can be applied to quantitatively determine the fluid saturation. To study the effects of noise level on the hybrid method and inversion results, the numerical simulation experiments are performed using different signal-to-noise-ratios (SNRs), and the effect of different SNRs on fluid typing using three fluid models are discussed and analyzed in detail.
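The TSVD half of the hybrid inversion described in this abstract can be sketched in a few lines; the kernel, grid sizes, noise level, and truncation rank below are illustrative stand-ins, not the paper's (T2, D) kernel or acquisition parameters:

```python
import numpy as np

def tsvd_solve(K, y, k):
    """Solve K x ~ y keeping only the k largest singular values,
    which regularizes an ill-conditioned inversion."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ y))

# Illustrative ill-conditioned forward model (not the NMR (T2, D) kernel):
rng = np.random.default_rng(0)
t = np.linspace(0.01, 1, 40)            # "echo time" grid
T2 = np.linspace(0.05, 1, 30)           # "relaxation time" grid
K = np.exp(-np.outer(t, 1.0 / T2))      # exponential-decay kernel
x_true = np.exp(-0.5 * ((T2 - 0.4) / 0.05) ** 2)   # smooth T2 distribution
y = K @ x_true + 1e-3 * rng.standard_normal(t.size)

x_hat = tsvd_solve(K, y, k=8)           # truncation rank controls smoothing
```

Choosing the truncation rank trades bias against noise amplification, which is why the paper studies inversion accuracy across SNRs and acquisition parameters.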
Multi-Dimensional, Non-Pyrolyzing Ablation Test Problems
NASA Technical Reports Server (NTRS)
Risch, Tim; Kostyk, Chris
2016-01-01
Non-pyrolyzing carbonaceous materials represent a class of candidate materials for hypersonic vehicle components providing both structural and thermal protection system capabilities. Two problems relevant to this technology are presented. The first considers the one-dimensional ablation of a carbon material subject to convective heating. The second considers two-dimensional conduction in a rectangular block subject to radiative heating. Surface thermochemistry for both problems includes finite-rate surface kinetics at low temperatures, diffusion limited ablation at intermediate temperatures, and vaporization at high temperatures. The first problem requires the solution of both the steady-state thermal profile with respect to the ablating surface and the transient thermal history for a one-dimensional ablating planar slab with temperature-dependent material properties. The slab front face is convectively heated and also reradiates to a room temperature environment. The back face is adiabatic. The steady-state temperature profile and steady-state mass loss rate should be predicted. Time-dependent front and back face temperature, surface recession and recession rate along with the final temperature profile should be predicted for the time-dependent solution. The second problem requires the solution for the transient temperature history for an ablating, two-dimensional rectangular solid with anisotropic, temperature-dependent thermal properties. The front face is radiatively heated, convectively cooled, and also reradiates to a room temperature environment. The back face and sidewalls are adiabatic. The solution should include the following 9 items: final surface recession profile, time-dependent temperature history of both the front face and back face at both the centerline and sidewall, as well as the time-dependent surface recession and recession rate on the front face at both the centerline and sidewall. The results of the problems from all submitters will be
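The transient core of the first problem, 1-D conduction with a heated front face and an adiabatic back face, can be sketched with one explicit finite-difference step; ablation, reradiation, surface thermochemistry, and temperature-dependent properties are omitted here, and all values are illustrative:

```python
import numpy as np

def step_conduction(T, alpha, dx, dt, qk):
    """One explicit step of 1-D transient heat conduction with a
    prescribed front-face flux (qk = q / conductivity) and an adiabatic
    back face, both enforced with mirror ghost nodes."""
    lam = alpha * dt / dx**2            # explicit stability needs lam <= 0.5
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + lam * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = T[0] + lam * (2 * T[1] - 2 * T[0] + 2 * dx * qk)   # flux BC
    Tn[-1] = T[-1] + lam * (2 * T[-2] - 2 * T[-1])             # adiabatic BC
    return Tn

# Uniform slab: after one step only the heated face departs from 300 K.
T = np.full(50, 300.0)
T1 = step_conduction(T, alpha=1e-5, dx=1e-3, dt=0.04, qk=1e4)
```

A full solution of the posed problem would add the moving ablating surface and the temperature-dependent surface energy balance on top of this conduction core.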
An empirical coverage test for the g-sample problem
Orlowski, L.A.; Grundy, W.D.; Mielke, P.W., Jr.
1991-01-01
A nonparametric g-sample empirical coverage test has recently been developed for univariate continuous data. It is based upon the empirical coverages which are spacings of multiple random samples. The test is capable of detecting any distributional differences which may exist among the parent populations, without additional assumptions beyond randomness and continuity. The test can be effective with the limited and/or unequal sample sizes most often encountered in geologic studies. A computer program for implementing this procedure, G-SECT 1, is available. © 1991 International Association for Mathematical Geology.
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be stated mathematically as a problem of 2D sparse decomposition. Based on CS, we propose a novel measuring strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. The ordinary way to handle the 2D reconstruction problem is to convert it into a 1D problem by a Kronecker product, which sharply increases the size of the dictionary and the computational cost. In this paper, we introduce the 2D-SL0 algorithm into the image reconstruction. It is shown that 2D-SL0 achieves results equivalent to other 1D reconstruction methods while reducing the computational complexity and memory usage significantly. Moreover, we present the results of simulation experiments that demonstrate the effectiveness and feasibility of our method.
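The Kronecker blow-up the abstract refers to is easy to see numerically: for a separable 2D measurement model Y = A X Bᵀ, the vectorized form uses (B ⊗ A), whose storage grows as the product of all four dimensions. All sizes and matrices below are illustrative:

```python
import numpy as np

# 2D measurement model Y = A X B^T with separable dictionaries A and B.
m, n = 32, 32           # scene size (illustrative)
p, q = 16, 16           # sub-sampled measurements in range/azimuth
rng = np.random.default_rng(1)
A = rng.standard_normal((p, m))
B = rng.standard_normal((q, n))
X = np.zeros((m, n))
X[4, 7], X[20, 11] = 1.0, -2.0          # sparse scene (two scatterers)

Y = A @ X @ B.T                          # 2D operator: stores p*m + q*n entries

# Equivalent 1D model: vec(Y) = (B kron A) vec(X), column-stacking vec.
Phi = np.kron(B, A)                      # stores p*q*m*n entries -- far larger
y_vec = Phi @ X.reshape(-1, order='F')

assert np.allclose(y_vec, Y.reshape(-1, order='F'))
```

Here the two small dictionaries hold 1,024 entries while the Kronecker matrix holds 262,144, which is why a natively 2D solver such as 2D-SL0 saves both memory and computation.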
ERIC Educational Resources Information Center
van Gog, Tamara; Kester, Liesbeth; Dirkx, Kim; Hoogerheide, Vincent; Boerboom, Joris; Verkoeijen, Peter P. J. L.
2015-01-01
Four experiments investigated whether the testing effect also applies to the acquisition of problem-solving skills from worked examples. Experiment 1 (n = 120) showed no beneficial effects of testing consisting of "isomorphic" problem solving or "example recall" on final test performance, which consisted of isomorphic problem…
ORION96. 2-d Finite Element Code Postprocessor
Sanford, L.A.; Hallquist, J.O.
1992-02-02
ORION is an interactive program that serves as a postprocessor for the analysis programs NIKE2D, DYNA2D, TOPAZ2D, and CHEMICAL TOPAZ2D. ORION reads binary plot files generated by the two-dimensional finite element codes currently used by the Methods Development Group at LLNL. Contour and color fringe plots of a large number of quantities may be displayed on meshes consisting of triangular and quadrilateral elements. ORION can compute strain measures, interface pressures along slide lines, reaction forces along constrained boundaries, and momentum. ORION has been applied to study the response of two-dimensional solids and structures undergoing finite deformations under a wide variety of large deformation transient dynamic and static problems and heat transfer analyses.
Extension and application of the Preissmann slot model to 2D transient mixed flows
NASA Astrophysics Data System (ADS)
Maranzoni, Andrea; Dazzi, Susanna; Aureli, Francesca; Mignosa, Paolo
2015-08-01
This paper presents an extension of the Preissmann slot concept for the modeling of highly transient two-dimensional (2D) mixed flows. The classic conservative formulation of the 2D shallow water equations for free surface flows is adapted by assuming that two fictitious vertical slots, aligned along the two Cartesian plane directions and normally intersecting, are added on the ceiling of each integration element. Accordingly, transitions between free surface and pressurized flow can be handled in a natural and straightforward way by using the same set of governing equations. The opportunity of coupling free surface and pressurized flows is actually useful not only in one-dimensional (1D) problems concerning sewer systems but also for modeling 2D flooding phenomena in which the pressurization of bridges, culverts, or other crossing hydraulic structures can be expected. Numerical simulations are performed by using a shock-capturing MUSCL-Hancock finite volume scheme combined with the FORCE (First-Order Centred) solver for the evaluation of the numerical fluxes. The validation of the mathematical model is accomplished on the basis of both exact solutions of 1D discontinuous initial value problems and reference radial solutions of idealized test cases with cylindrical symmetry. Furthermore, the capability of the model to deal with practical field-scale applications is assessed by simulating the transit of a bore under an arch bridge. Numerical results show that the proposed model is suitable for the prediction of highly transient 2D mixed flows.
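The FORCE flux used in the abstract above is the arithmetic average of the Lax-Friedrichs and two-step Lax-Wendroff fluxes; a minimal sketch on scalar linear advection follows, where the grid, CFL number, and step profile are illustrative rather than taken from the paper:

```python
import numpy as np

def force_flux(uL, uR, f, dx, dt):
    """First-Order Centred (FORCE) flux: the average of the
    Lax-Friedrichs and two-step Lax-Wendroff fluxes."""
    f_lf = 0.5 * (f(uL) + f(uR)) - 0.5 * (dx / dt) * (uR - uL)
    u_lw = 0.5 * (uL + uR) - 0.5 * (dt / dx) * (f(uR) - f(uL))
    return 0.5 * (f_lf + f(u_lw))

# One explicit finite-volume update of u_t + a u_x = 0 (illustrative):
a, dx, dt = 1.0, 0.02, 0.01             # CFL = a*dt/dx = 0.5
f = lambda u: a * u
x = np.arange(0.0, 1.0, dx)
u = np.where(x < 0.5, 1.0, 0.0)         # step profile
fl = force_flux(u[:-1], u[1:], f, dx, dt)
u[1:-1] -= (dt / dx) * (fl[1:] - fl[:-1])
```

The paper pairs this flux with a MUSCL-Hancock reconstruction for second-order accuracy; the first-order version above shows only the flux itself.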
Common Problems of Mobile Applications for Foreign Language Testing
ERIC Educational Resources Information Center
Garcia Laborda, Jesus; Magal-Royo, Teresa; Lopez, Jose Luis Gimenez
2011-01-01
As the use of mobile learning educational applications has become more common around the world, new concerns have appeared in the classroom, in human interaction in software engineering, and in ergonomics. New tests of foreign languages for a number of purposes have also become more and more common recently. However, studies interrelating language tests…
Differential Validity: A Problem with Tests or Criteria?
ERIC Educational Resources Information Center
Hollmann, Thomas D.
The evidence used in condemning a test as racially biased is usually a validity coefficient for one racial group that is significantly different from that of another racial group. However, both variables in the calculation of a validity coefficient should be examined to determine where the bias lies. A study was conducted to investigate the…
Language Testing in the Military: Problems, Politics and Progress
ERIC Educational Resources Information Center
Green, Rita; Wall, Dianne
2005-01-01
There appears to be little literature available -- either descriptive or research-related -- on language testing in the military. This form of specific purposes assessment affects both military personnel and civilians working within the military structure in terms of posting, promotion and remuneration, and it could be argued that it has serious…
Problem-Solving Test: Expression Cloning of the Erythropoietin Receptor
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2008-01-01
Terms to be familiar with before you start to solve the test: cytokines, cytokine receptors, cDNA library, cDNA synthesis, poly(A)+ RNA, primer, template, reverse transcriptase, restriction endonucleases, cohesive ends, expression vector, promoter, Shine-Dalgarno sequence, poly(A) signal, DNA helicase, DNA ligase, topoisomerases,…
Problem-Solving Test: Submitochondrial Localization of Proteins
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2011-01-01
Mitochondria are surrounded by two membranes (outer and inner mitochondrial membrane) that separate two mitochondrial compartments (intermembrane space and matrix). Hundreds of proteins are distributed among these submitochondrial components. A simple biochemical/immunological procedure is described in this test to determine the localization of…
Problem-Solving Test: Real-Time Polymerase Chain Reaction
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2009-01-01
Terms to be familiar with before you start to solve the test: polymerase chain reaction, DNA amplification, electrophoresis, breast cancer, "HER2" gene, genomic DNA, "in vitro" DNA synthesis, template, primer, Taq polymerase, 5′→3′ elongation activity, 5′→3′ exonuclease activity, deoxyribonucleoside…
Problem-Solving Test: The Mechanism of Protein Synthesis
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2009-01-01
Terms to be familiar with before you start to solve the test: protein synthesis, ribosomes, amino acids, peptides, peptide bond, polypeptide chain, N- and C-terminus, hemoglobin, [alpha]- and [beta]-globin chains, radioactive labeling, [3H]- and [14C]leucine, cytosol, differential centrifugation, density…
Crash test for the restricted three-body problem.
Nagler, Jan
2005-02-01
The restricted three-body problem serves to investigate the chaotic behavior of a small body under the gravitational influence of two heavy primary bodies. We analyze numerically the phase space mixing of bounded motion, escape, and crash in this simple model of (chaotic) celestial mechanics. The presented extensive numerical analysis reveals a high degree of complexity. We extend the recently presented findings for the Copenhagen case of equal main masses to the general case of different primary body masses. Collisions of the small body onto the primaries are comparatively frequent, and their probability displays a scale-free dependence on the size of the primaries as shown for the Copenhagen case. Interpreting the crash as leaking in phase space the results are related to both chaotic scattering and the theory of leaking Hamiltonian systems. PMID:15783407
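The model analyzed in this abstract, the planar circular restricted three-body problem in the rotating frame, can be sketched with its standard equations of motion; the mass ratio (roughly Earth-Moon), initial state, and step size below are illustrative, and conservation of the Jacobi constant gives a built-in accuracy check:

```python
import numpy as np

MU = 0.01215   # illustrative mass ratio (approximately Earth-Moon)

def rtbp_rhs(s, mu=MU):
    """Planar circular restricted three-body problem, rotating frame.
    State s = (x, y, vx, vy); primaries fixed at (-mu, 0) and (1-mu, 0)."""
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y)
    r2 = np.hypot(x - 1 + mu, y)
    ax = x + 2 * vy - (1 - mu) * (x + mu) / r1**3 - mu * (x - 1 + mu) / r2**3
    ay = y - 2 * vx - (1 - mu) * y / r1**3 - mu * y / r2**3
    return np.array([vx, vy, ax, ay])

def rk4_step(f, s, h):
    """Classical fourth-order Runge-Kutta step."""
    k1 = f(s); k2 = f(s + h / 2 * k1)
    k3 = f(s + h / 2 * k2); k4 = f(s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def jacobi_constant(s, mu=MU):
    """Jacobi integral, the only conserved quantity of the rotating-frame flow."""
    x, y, vx, vy = s
    r1 = np.hypot(x + mu, y); r2 = np.hypot(x - 1 + mu, y)
    return x**2 + y**2 + 2 * (1 - mu) / r1 + 2 * mu / r2 - vx**2 - vy**2

s = np.array([0.5, 0.0, 0.0, 0.8])      # illustrative initial condition
C0 = jacobi_constant(s)
for _ in range(1000):
    s = rk4_step(rtbp_rhs, s, 1e-3)
```

A crash in the paper's sense corresponds to a trajectory entering a finite-radius disk around either primary; scanning initial conditions with an integrator like this one is how the escape/crash basins are mapped.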
Development and Implementation of Radiation-Hydrodynamics Verification Test Problems
Marcath, Matthew J.; Wang, Matthew Y.; Ramsey, Scott D.
2012-08-22
Analytic solutions to the radiation-hydrodynamic equations are useful for verifying any large-scale numerical simulation software that solves the same set of equations. The one-dimensional, spherically symmetric Coggeshall No. 9 and No. 11 analytic solutions, cell-averaged over a uniform grid, have been developed to analyze the corresponding solutions from the Los Alamos National Laboratory Eulerian Applications Project radiation-hydrodynamics code xRAGE. These Coggeshall solutions have been shown to be independent of heat conduction, providing a unique opportunity for comparison with xRAGE solutions with and without the heat conduction module. Solution convergence was analyzed based on radial step size. Since no shocks are involved in either problem and the solutions are smooth, second-order convergence was expected for both cases. The global L1 errors were used to estimate the convergence rates with and without the heat conduction module implemented.
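The convergence-rate estimate mentioned above follows from comparing global errors at two resolutions: if e(h) ≈ C hᵖ, then p falls out of a ratio of logarithms. A minimal sketch, with error values that are illustrative rather than xRAGE results:

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed convergence order p from global errors on two grids whose
    spacing differs by the given refinement ratio:
    e(h) ~ C h^p  =>  p = log(e_coarse / e_fine) / log(refinement)."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# Illustrative L1 errors from halving the radial step (not xRAGE data):
p = observed_order(4.0e-3, 1.0e-3)      # second-order behavior gives p = 2
```

Deviation of p from 2 on a smooth problem is exactly the kind of signal such verification studies look for.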
Practical Algorithm For Computing The 2-D Arithmetic Fourier Transform
NASA Astrophysics Data System (ADS)
Reed, Irving S.; Choi, Y. Y.; Yu, Xiaoli
1989-05-01
Recently, Tufts and Sadasiv [10] exposed a method for computing the coefficients of a Fourier series of a periodic function using the Mobius inversion of series. They called this method of analysis the Arithmetic Fourier Transform (AFT). The advantage of the AFT over the FFT is that this method of Fourier analysis needs only addition operations, except for multiplications by scale factors at one stage of the computation. The disadvantage of the AFT as originally expressed is that it could be used effectively only to compute finite Fourier coefficients of a real even function. To remedy this, the AFT developed in [10] is extended in [11] to compute the Fourier coefficients of both the even and odd components of a periodic function. In this paper, the improved AFT [11] is extended to a two-dimensional (2-D) Arithmetic Fourier Transform for calculating the Fourier transform of two-dimensional discrete signals. This new algorithm is based on both the number-theoretic method of Mobius inversion of double series and the complex-conjugate property of Fourier coefficients. The advantage of this algorithm over the conventional 2-D FFT is that the corner-turning problem of a conventional 2-D Discrete Fourier Transform (DFT) can be avoided. Therefore, this new 2-D algorithm is readily suitable for VLSI implementation as a parallel architecture. Comparing the operations of the 2-D AFT on an M×M 2-D data array with the conventional 2-D FFT, the number of multiplications is significantly reduced, from (2 log2 M)M^2 to (9/4)M^2. Hence, this new algorithm is faster than the FFT algorithm. Finally, two simulation results of this new 2-D AFT algorithm, for 2-D artificial and real images, are given in this paper.
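The number-theoretic core of the AFT is Mobius inversion of a divisor sum. A minimal sketch of that building block (not the full 2-D AFT) is shown below; the totient check is a standard identity used only to exercise the code:

```python
def mobius(n):
    """Mobius function mu(n): 0 if n has a squared prime factor,
    otherwise (-1)^(number of distinct prime factors)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0            # squared prime factor
            result = -result
        d += 1
    if n > 1:                        # leftover prime factor
        result = -result
    return result

def invert_divisor_sum(S, n):
    """Given S(n) = sum over divisors d of n of a(d), recover a(n) by
    Mobius inversion: a(n) = sum over d | n of mu(n // d) * S(d)."""
    return sum(mobius(n // d) * S(d) for d in range(1, n + 1) if n % d == 0)

# Known pair: sum over d | n of phi(d) equals n, so inverting S(n) = n
# must return Euler's totient phi(n).
a6 = invert_divisor_sum(lambda n: n, 6)     # phi(6) = 2
```

In the AFT, the role of S is played by the alternating averages of function samples, and the inversion recovers the Fourier coefficients using only these additions plus a few scale factors.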
[Problem-solving in immunohematology: direct compatibility laboratory test].
Mannessier, L; Roubinet, F; Chiaroni, J
2001-12-01
Cross-matching between the serum of a patient and the red blood cells to be transfused is most important for the prevention of hemolytic transfusion reactions in allo-immunized patients or in newborns with a positive direct antiglobulin test. Cross-matching is a time-consuming and complex laboratory test. In order to obtain valid results, it is necessary to abide by the technical rules detailed in this article. The choice of the blood units to be cross-matched depends on the patient's clinical history and on the specificity of the anti-erythrocyte antibodies present in the serum. The identification and management of the most frequent difficulties met when using the cross-match technique are discussed here. PMID:11802611
Significance testing of rules in rule-based models of human problem solving
NASA Technical Reports Server (NTRS)
Lewis, C. M.; Hammer, J. M.
1986-01-01
Rule-based models of human problem solving have typically not been tested for statistical significance. Three methods of testing rules - analysis of variance, randomization, and contingency tables - are presented. Advantages and disadvantages of the methods are also described.
Sex Differences and Self-Reported Attention Problems During Baseline Concussion Testing.
Brooks, Brian L; Iverson, Grant L; Atkins, Joseph E; Zafonte, Ross; Berkner, Paul D
2016-01-01
Amateur athletic programs often use computerized cognitive testing as part of their concussion management programs. There is evidence that athletes with preexisting attention problems will have worse cognitive performance and more symptoms at baseline testing. The purpose of this study was to examine whether attention problems affect assessments differently for male and female athletes. Participants were drawn from a database that included 6,840 adolescents from Maine who completed Immediate Postconcussion Assessment and Cognitive Testing (ImPACT) at baseline (primary outcome measure). The final sample included 249 boys and 100 girls with self-reported attention problems. Each participant was individually matched for sex, age, number of past concussions, and sport to a control participant (249 boys, 100 girls). Boys with attention problems had worse reaction time than boys without attention problems. Girls with attention problems had worse visual-motor speed than girls without attention problems. Boys with attention problems reported more total symptoms, including more cognitive-sensory and sleep-arousal symptoms, compared with boys without attention problems. Girls with attention problems reported more cognitive-sensory, sleep-arousal, and affective symptoms than girls without attention problems. When considering the assessment, management, and outcome from concussions in adolescent athletes, it is important to consider both sex and preinjury attention problems regarding cognitive test results and symptom reporting. PMID:25923339
Canard configured aircraft with 2-D nozzle
NASA Technical Reports Server (NTRS)
Child, R. D.; Henderson, W. P.
1978-01-01
A closely-coupled canard fighter with vectorable two-dimensional nozzle was designed for enhanced transonic maneuvering. The HiMAT maneuver goal of a sustained 8g turn at a free-stream Mach number of 0.9 and 30,000 feet was the primary design consideration. The aerodynamic design process was initiated with a linear theory optimization minimizing the zero percent suction drag including jet effects and refined with three-dimensional nonlinear potential flow techniques. Allowances were made for mutual interference and viscous effects. The design process to arrive at the resultant configuration is described, and the design of a powered 2-D nozzle model to be tested in the LRC 16-foot Propulsion Wind Tunnel is shown.
2D Electrostatic Actuation of Microshutter Arrays
NASA Technical Reports Server (NTRS)
Burns, Devin E.; Oh, Lance H.; Li, Mary J.; Kelly, Daniel P.; Kutyrev, Alexander S.; Moseley, Samuel H.
2015-01-01
Electrostatically actuated microshutter arrays consisting of rotational microshutters (shutters that rotate about a torsion bar) were designed and fabricated through the use of models and experiments. Design iterations focused on minimizing the torsional stiffness of the microshutters, while maintaining their structural integrity. Mechanical and electromechanical test systems were constructed to measure the static and dynamic behavior of the microshutters. The torsional stiffness was reduced by a factor of four over initial designs without sacrificing durability. Analysis of the resonant behavior of the microshutters demonstrates that the first resonant mode is a torsional mode occurring around 3000 Hz. At low vacuum pressures, this resonant mode can be used to significantly reduce the drive voltage necessary for actuation requiring as little as 25V. 2D electrostatic latching and addressing was demonstrated using both a resonant and pulsed addressing scheme.
2D Electrostatic Actuation of Microshutter Arrays
NASA Technical Reports Server (NTRS)
Burns, Devin E.; Oh, Lance H.; Li, Mary J.; Jones, Justin S.; Kelly, Daniel P.; Zheng, Yun; Kutyrev, Alexander S.; Moseley, Samuel H.
2015-01-01
An electrostatically actuated microshutter array consisting of rotational microshutters (shutters that rotate about a torsion bar) was designed and fabricated through the use of models and experiments. Design iterations focused on minimizing the torsional stiffness of the microshutters, while maintaining their structural integrity. Mechanical and electromechanical test systems were constructed to measure the static and dynamic behavior of the microshutters. The torsional stiffness was reduced by a factor of four over initial designs without sacrificing durability. Analysis of the resonant behavior of the microshutter arrays demonstrates that the first resonant mode is a torsional mode occurring around 3000 Hz. At low vacuum pressures, this resonant mode can be used to significantly reduce the drive voltage necessary for actuation requiring as little as 25V. 2D electrostatic latching and addressing was demonstrated using both a resonant and pulsed addressing scheme.
Assessing corrosion problems in photovoltaic cells via electrochemical stress testing
NASA Technical Reports Server (NTRS)
Shalaby, H.
1985-01-01
A series of accelerated electrochemical experiments to study the degradation properties of polyvinylbutyral-encapsulated silicon solar cells has been carried out. The cells' electrical performance with silk screen-silver and nickel-solder contacts was evaluated. The degradation mechanism was shown to be electrochemical corrosion of the cell contacts; metallization elements migrate into the encapsulating material, which acts as an ionic conducting medium. The corrosion products form a conductive path which results in a gradual loss of the insulation characteristics of the encapsulant. The precipitation of corrosion products in the encapsulant also contributes to its discoloration which in turn leads to a reduction in its transparency and the consequent optical loss. Delamination of the encapsulating layers could be attributed to electrochemical gas evolution reactions. The usefulness of the testing technique in qualitatively establishing a reliability difference between metallizations and antireflection coating types is demonstrated.
ERIC Educational Resources Information Center
Hill, Kennedy T.
1983-01-01
Reviews a 20-year program of research on motivation and test performance, concluding that test anxiety and test-taking skill deficits are distorting factors in efforts to test student aptitude, achievement, and competency. (FL)
An Approach for Addressing the Multiple Testing Problem in Social Policy Impact Evaluations
ERIC Educational Resources Information Center
Schochet, Peter Z.
2009-01-01
In social policy evaluations, the multiple testing problem occurs due to the many hypothesis tests that are typically conducted across multiple outcomes and subgroups, which can lead to spurious impact findings. This article discusses a framework for addressing this problem that balances Types I and II errors. The framework involves specifying…
Medical Physics: Forming and testing solutions to clinical problems.
Tsapaki, Virginia; Bayford, Richard
2015-11-01
According to the European Federation of Organizations for Medical Physics (EFOMP) policy statement No. 13, "The rapid advance in the use of highly sophisticated equipment and procedures in the medical field increasingly depends on information and communication technology. In spite of the fact that the safety and quality of such technology is vigorously tested before it is placed on the market, it often turns out that the safety and quality is not sufficient when used under hospital working conditions. To improve safety and quality for patient and users, additional safeguards and related monitoring, as well as measures to enhance quality, are required. Furthermore a large number of accidents and incidents happen every year in hospitals and as a consequence a number of patients die or are injured. Medical Physicists are well positioned to contribute towards preventing these kinds of events". The newest developments related to this increasingly important medical speciality were presented during the 8th European Conference of Medical Physics 2014 which was held in Athens, 11-13 September 2014 and hosted by the Hellenic Association of Medical Physicists (HAMP) in collaboration with the EFOMP and are summarized in this issue.
Inverse problems in the design, modeling and testing of engineering systems
NASA Technical Reports Server (NTRS)
Alifanov, Oleg M.
1991-01-01
Formulations, classification, areas of application, and approaches to solving different inverse problems are considered for the design of structures, modeling, and experimental data processing. Problems in the practical implementation of theoretical-experimental methods based on solving inverse problems are analyzed in order to identify mathematical models of physical processes, aid in input data preparation for design parameter optimization, help in design parameter optimization itself, and to model experiments, large-scale tests, and real tests of engineering systems.
2D signature for detection and identification of drugs
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Shen, Jingling; Zhang, Cunlin; Zhou, Qingli; Shi, Yulei
2011-06-01
The method of spectral dynamics analysis (SDA method) is used for obtaining the 2D THz signature of drugs. This signature is used for the detection and identification of drugs with similar Fourier spectra from the transmitted THz signal. We discuss the efficiency of the SDA method for the identification of pure methamphetamine (MA), methylenedioxyamphetamine (MDA), 3,4-methylenedioxymethamphetamine (MDMA) and ketamine.
Perspectives for spintronics in 2D materials
NASA Astrophysics Data System (ADS)
Han, Wei
2016-03-01
The past decade has been especially creative for spintronics since the (re)discovery of various two-dimensional (2D) materials. Due to their unusual physical characteristics, 2D materials have provided new platforms to probe the spin interaction with other degrees of freedom for electrons, as well as novel spintronics applications. This review briefly presents the most important recent and ongoing research on spintronics in 2D materials.
2d PDE Linear Symmetric Matrix Solver
1983-10-01
ICCG2 (Incomplete Cholesky factorized Conjugate Gradient algorithm for 2d symmetric problems) was developed to solve a linear symmetric matrix system arising from a 9-point discretization of two-dimensional elliptic and parabolic partial differential equations found in plasma physics applications, such as resistive MHD, spatial diffusive transport, and phase space transport (Fokker-Planck equation) problems. These problems share the common feature of being stiff and requiring implicit solution techniques. When these parabolic or elliptic PDEs are discretized with finite-difference or finite-element methods, the resulting matrix system is frequently of block-tridiagonal form. To use ICCG2, the discretization of the two-dimensional partial differential equation and its boundary conditions must result in a block-tridiagonal supermatrix composed of elementary tridiagonal matrices. The incomplete Cholesky conjugate gradient algorithm is used to solve the linear symmetric matrix equation. Loops are arranged to vectorize on the Cray1 with the CFT compiler, wherever possible. Recursive loops, which cannot be vectorized, are written for optimum scalar speed. For matrices lacking symmetry, ILUCG2 should be used. Similar methods in three dimensions are available in ICCG3 and ILUCG3. A general source containing extensions and macros, which must be processed by a pre-compiler to obtain the standard FORTRAN source, is provided along with the standard FORTRAN source because it is believed to be more readable. The pre-compiler is not included, but pre-compilation may be performed by a text editor as described in the UCRL-88746 Preprint.
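The preconditioned conjugate gradient idea behind ICCG2 can be sketched in a few lines; for brevity this stand-in uses a Jacobi (diagonal) preconditioner and a 5-point Laplacian rather than the incomplete Cholesky factorization and 9-point stencil the code itself uses:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for a symmetric positive-definite A.
    M_inv applies the preconditioner's inverse (a stand-in here for the
    incomplete-Cholesky preconditioner used by ICCG2)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Block-tridiagonal SPD test matrix: 5-point Laplacian on an n x n grid.
n = 8
I = np.eye(n)
T = np.diag([4.0] * n) - np.diag([1.0] * (n - 1), 1) - np.diag([1.0] * (n - 1), -1)
off = np.diag([1.0] * (n - 1), 1) + np.diag([1.0] * (n - 1), -1)
A = np.kron(I, T) + np.kron(off, -I)
b = np.ones(n * n)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))   # Jacobi preconditioner
```

Swapping the Jacobi step for an incomplete Cholesky solve is what gives ICCG its much faster convergence on stiff 2-D problems.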
2d PDE Linear Asymmetric Matrix Solver
1983-10-01
ILUCG2 (Incomplete LU factorized Conjugate Gradient algorithm for 2d problems) was developed to solve a linear asymmetric matrix system arising from a 9-point discretization of two-dimensional elliptic and parabolic partial differential equations found in plasma physics applications, such as plasma diffusion, equilibria, and phase space transport (Fokker-Planck equation) problems. These equations share the common feature of being stiff and requiring implicit solution techniques. When these parabolic or elliptic PDEs are discretized with finite-difference or finite-element methods, the resulting matrix system is frequently of block-tridiagonal form. To use ILUCG2, the discretization of the two-dimensional partial differential equation and its boundary conditions must result in a block-tridiagonal supermatrix composed of elementary tridiagonal matrices. A generalization of the incomplete Cholesky conjugate gradient algorithm is used to solve the matrix equation. Loops are arranged to vectorize on the Cray1 with the CFT compiler, wherever possible. Recursive loops, which cannot be vectorized, are written for optimum scalar speed. For problems having a symmetric matrix, ICCG2 should be used since it runs up to four times faster and uses approximately 30% less storage. Similar methods in three dimensions are available in ICCG3 and ILUCG3. A general source, containing extensions and macros, which must be processed by a pre-compiler to obtain the standard FORTRAN source, is provided along with the standard FORTRAN source because it is believed to be more readable. The pre-compiler is not included, but pre-compilation may be performed by a text editor as described in the UCRL-88746 Preprint.
Quantitative 2D liquid-state NMR.
Giraudeau, Patrick
2014-06-01
Two-dimensional (2D) liquid-state NMR has a very high potential to simultaneously determine the absolute concentration of small molecules in complex mixtures, thanks to its capacity to separate overlapping resonances. However, it suffers from two main drawbacks that probably explain its relatively late development. First, the 2D NMR signal is strongly molecule-dependent and site-dependent; second, the long duration of 2D NMR experiments prevents its general use for high-throughput quantitative applications and affects its quantitative performance. Fortunately, the last 10 years have witnessed an increasing number of contributions in which quantitative approaches based on 2D NMR were developed and applied to solve real analytical issues. This review aims at presenting these recent efforts to reach a high trueness and precision in quantitative measurements by 2D NMR. After highlighting the interest of 2D NMR for quantitative analysis, the different strategies to determine absolute concentrations from 2D NMR spectra are described and illustrated by recent applications. The last part of the manuscript concerns the recent development of fast quantitative 2D NMR approaches, aiming at reducing the experiment duration while preserving - or even increasing - the analytical performance. We hope that this comprehensive review will help readers grasp the current landscape of quantitative 2D NMR, as well as the perspectives that may arise from it.
An inverse design method for 2D airfoil
NASA Astrophysics Data System (ADS)
Liang, Zhi-Yong; Cui, Peng; Zhang, Gen-Bao
2010-03-01
Computational methods for the aerodynamic design of aircraft are now applied more widely than before, and airfoil design remains an active problem. Most related papers discuss the forward problem, but the inverse method is more useful in practical design. In this paper, the inverse design of a 2D airfoil was investigated using a finite element method based on the variational principle. Simulations showed that the method is well suited to this design task.
NASA Astrophysics Data System (ADS)
Raskin, Cody; Owen, J. Michael
2016-11-01
We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels, called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than in the purely gravitational problem, whereas CRKSPH greatly reduces this error.
Some Problems of Computer-Aided Testing and "Interview-Like Tests"
ERIC Educational Resources Information Center
Smoline, D.V.
2008-01-01
Computer-based testing is an effective teacher's tool, intended to optimize course goals and assessment techniques in a comparatively short time. However, this is accomplished only if we deal with high-quality tests. It is strange, but despite the 100-year history of Testing Theory (see Anastasi, A., Urbina, S. (1997). Psychological testing.…
Beta/gamma test problems for ITS [Integrated Tiger Series (ITS)]
Mei, G.T.
1993-01-01
The Integrated Tiger Series of Coupled Electron/Photon Monte Carlo Transport Codes (ITS 3.0, PC Version) was used at Oak Ridge National Laboratory (ORNL) to compare with and extend the experimental findings of the beta/gamma response of selected health physics instruments. In order to assure that ITS gives correct results, several beta/gamma problems have been tested. ITS was used to simulate these problems numerically, and results for each were compared to the problem's experimental or analytical results. ITS successfully predicted the experimental or analytical results of all tested problems within the statistical uncertainty inherent in the Monte Carlo method.
Quantum process tomography by 2D fluorescence spectroscopy
Pachón, Leonardo A.; Marcus, Andrew H.; Aspuru-Guzik, Alán
2015-06-07
Reconstruction of the dynamics (quantum process tomography) of the single-exciton manifold in energy transfer systems is proposed here on the basis of two-dimensional fluorescence spectroscopy (2D-FS) with phase-modulation. The quantum-process-tomography protocol introduced here benefits from, e.g., the sensitivity enhancement ascribed to 2D-FS. Although the isotropically averaged spectroscopic signals depend on the quantum yield parameter Γ of the doubly excited-exciton manifold, it is shown that the reconstruction of the dynamics is insensitive to this parameter. Applications to foundational and applied problems, as well as further extensions, are discussed.
49 CFR 40.267 - What problems always cause an alcohol test to be cancelled?
Code of Federal Regulations, 2011 CFR
2011-10-01
... always cause an alcohol test to be cancelled? As an employer, a BAT, or an STT, you must cancel an alcohol test if any of the following problems occur. These are “fatal flaws.” You must inform the DER that... the case of a screening test conducted on a saliva ASD or a breath tube ASD: (1) The STT or BAT...
49 CFR 40.267 - What problems always cause an alcohol test to be cancelled?
Code of Federal Regulations, 2014 CFR
2014-10-01
... always cause an alcohol test to be cancelled? As an employer, a BAT, or an STT, you must cancel an alcohol test if any of the following problems occur. These are “fatal flaws.” You must inform the DER that... the case of a screening test conducted on a saliva ASD or a breath tube ASD: (1) The STT or BAT...
49 CFR 40.267 - What problems always cause an alcohol test to be cancelled?
Code of Federal Regulations, 2013 CFR
2013-10-01
... always cause an alcohol test to be cancelled? As an employer, a BAT, or an STT, you must cancel an alcohol test if any of the following problems occur. These are “fatal flaws.” You must inform the DER that... the case of a screening test conducted on a saliva ASD or a breath tube ASD: (1) The STT or BAT...
49 CFR 40.267 - What problems always cause an alcohol test to be cancelled?
Code of Federal Regulations, 2012 CFR
2012-10-01
... always cause an alcohol test to be cancelled? As an employer, a BAT, or an STT, you must cancel an alcohol test if any of the following problems occur. These are “fatal flaws.” You must inform the DER that... the case of a screening test conducted on a saliva ASD or a breath tube ASD: (1) The STT or BAT...
Pareto joint inversion of 2D magnetotelluric and gravity data
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2015-04-01
In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were provided to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description using a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on a modified Particle Swarm Optimization (PSO), adapted to handle two or more target functions at once. An additional algorithm was used to eliminate non-realistic solution proposals. Because PSO is a stochastic global optimization method, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. The proposed approach to joint inversion has several advantages. First of all, the Pareto scheme eliminates cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality but also makes constraining the solution easier. At this stage of the work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. Presented results were obtained for synthetic models imitating real geological conditions, where
Staring 2-D hadamard transform spectral imager
Gentry, Stephen M.; Wehlburg, Christine M.; Wehlburg, Joseph C.; Smith, Mark W.; Smith, Jody L.
2006-02-07
A staring imaging system inputs a 2D spatial image containing multi-frequency spectral information. This image is encoded in one dimension with a cyclic Hadamard S-matrix. The resulting image is detected with a spatial 2D detector, and a computer applies a Hadamard transform to recover the encoded image.
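The S-matrix encode/decode cycle described above has a compact closed form: an S-matrix of order m satisfies S Sᵀ = ((m+1)/4)(I + J), which gives the exact inverse (2/(m+1))(2Sᵀ − J). The sketch below is illustrative only and uses a Sylvester-derived S-matrix rather than the cyclic quadratic-residue construction a real imager would use:

```python
import numpy as np

def s_matrix(k):
    # Sylvester Hadamard matrix of order n = 2**k, then the S-matrix of
    # order n - 1: drop the first row/column and map +1 -> 0, -1 -> 1
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return ((1 - H[1:, 1:]) // 2).astype(int)

def encode(S, x):
    # each measurement sums the subset of x selected by one mask row
    return S @ x

def decode(S, y):
    # closed-form S-matrix inverse: (2/(m+1)) * (2 S^T - J)
    m = S.shape[0]
    J = np.ones((m, m), dtype=int)
    return (2.0 / (m + 1)) * ((2 * S.T - J) @ y)
```

The multiplex (Fellgett) advantage comes from each detector reading combining (m+1)/2 scene elements at once.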
A Test of the Testing Effect: Acquiring Problem-Solving Skills from Worked Examples
ERIC Educational Resources Information Center
van Gog, Tamara; Kester, Liesbeth
2012-01-01
The "testing effect" refers to the finding that after an initial study opportunity, testing is more effective for long-term retention than restudying. The testing effect seems robust and is a finding from the field of cognitive science that has important implications for education. However, it is unclear whether this effect also applies to the…
Sample training based wildfire segmentation by 2D histogram θ-division with minimum error.
Zhao, Jianhui; Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong
2013-01-01
A novel wildfire segmentation algorithm is proposed with the help of sample-training-based 2D histogram θ-division and minimum error. Based on the minimum error principle and the 2D color histogram, θ-division methods were presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. We then define the probability function of error division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the most suitable channel is selected. To further improve accuracy, a combination approach is presented that uses both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments prove its efficiency for wildfire segmentation.
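The θ-division can be pictured as a straight cut through the 2D color histogram at angle θ. A minimal, hypothetical sketch of that partition step follows (the paper's trained θ and its error-probability functional are not reproduced here):

```python
import numpy as np

def theta_division(hist2d, theta, t):
    """Split a 2D color histogram by the line u*cos(theta) + v*sin(theta) = t.

    Bins on the far side of the line are labelled as the 'fire' class; the
    paper selects theta by minimising misclassification error on labelled
    sample images.
    """
    u, v = np.indices(hist2d.shape)          # bin coordinates in the two channels
    fire_side = u * np.cos(theta) + v * np.sin(theta) > t
    return fire_side                          # boolean mask over histogram bins
```

A pixel is then classified by looking up which side of the cut its histogram bin falls on.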
Testing foreign language impact on engineering students' scientific problem-solving performance
NASA Astrophysics Data System (ADS)
Tatzl, Dietmar; Messnarz, Bernd
2013-12-01
This article investigates the influence of English as the examination language on the solution of physics and science problems by non-native speakers in tertiary engineering education. For that purpose, a statistically significant total number of 96 students in four year groups from freshman to senior level participated in a testing experiment in the Degree Programme of Aviation at the FH JOANNEUM University of Applied Sciences, Graz, Austria. Half of each test group were given a set of 12 physics problems described in German, the other half received the same set of problems described in English. It was the goal to test linguistic reading comprehension necessary for scientific problem solving instead of physics knowledge as such. The results imply that written undergraduate English-medium engineering tests and examinations may not require additional examination time or language-specific aids for students who have reached university-entrance proficiency in English as a foreign language.
Likelihood Methods for Testing Group Problem Solving Models with Censored Data.
ERIC Educational Resources Information Center
Regal, Ronald R.; Larntz, Kinley
1978-01-01
Models relating individual and group problem solving solution times under the condition of limited time (time limit censoring) are presented. Maximum likelihood estimation of parameters and a goodness of fit test are presented. (Author/JKS)
2D materials for nanophotonic devices
NASA Astrophysics Data System (ADS)
Xu, Renjing; Yang, Jiong; Zhang, Shuang; Pei, Jiajie; Lu, Yuerui
2015-12-01
Two-dimensional (2D) materials have become very important building blocks for electronic, photonic, and phononic devices. The 2D material family has four key members: metallic graphene, transition metal dichalcogenide (TMD) layered semiconductors, semiconducting black phosphorus, and insulating h-BN. Owing to strong quantum confinement and defect-free surfaces, these atomically thin layers offer perfect platforms to investigate the interactions among photons, electrons and phonons. The unique interactions in these 2D materials are very important for both scientific research and application engineering. In this talk, I briefly summarize and highlight the key findings, opportunities and challenges in this field. Next, I introduce our recent achievements. We demonstrated atomically thin micro-lenses and gratings using 2D MoS2, the thinnest optical components reported to date. These devices are based on our discovery that elastic light-matter interactions in high-index 2D materials are very strong. I also introduce a new two-dimensional material, phosphorene. Phosphorene has a strongly anisotropic optical response, which creates 1D excitons in a 2D system. The strong confinement in phosphorene also enables ultra-high trion (charged exciton) binding energies, which have been successfully measured in our experiments. Finally, I briefly discuss the potential applications of 2D materials in energy harvesting.
Almost but not quite 2D, Non-linear Bayesian Inversion of CSEM Data
NASA Astrophysics Data System (ADS)
Ray, A.; Key, K.; Bodin, T.
2013-12-01
efficiently evaluate the forward response using 1D profiles extracted from the model at the common-midpoints of the EM source-receiver pairs. Since the 1D approximation is locally valid at different midpoint locations, the computation time is far lower than is required by a full 2D or 3D simulation. We have applied this method to both synthetic and real CSEM survey data from the Scarborough gas field on the Northwest shelf of Australia, resulting in a spatially variable quantification of resistivity and its uncertainty in 2D. This Bayesian approach results in a large database of 2D models that comprise a posterior probability distribution, which we can subset to test various hypotheses about the range of model structures compatible with the data. For example, we can subset the model distributions to examine the hypothesis that a resistive reservoir extends over a certain spatial extent. Depending on how this conditions other parts of the model space, light can be shed on the geological viability of the hypothesis. Since tackling spatially variable uncertainty and trade-offs in 2D and 3D is a challenging research problem, the insights gained from this work may prove valuable for subsequent full 2D and 3D Bayesian inversions.
2D materials: to graphene and beyond.
Mas-Ballesté, Rubén; Gómez-Navarro, Cristina; Gómez-Herrero, Julio; Zamora, Félix
2011-01-01
This review is an attempt to illustrate the different alternatives in the field of 2D materials. Graphene seems to be just the tip of the iceberg and we show how the discovery of alternative 2D materials is starting to show the rest of this iceberg. The review comprises the current state-of-the-art of the vast literature in concepts and methods already known for isolation and characterization of graphene, and rationalizes the quite disperse literature in other 2D materials such as metal oxides, hydroxides and chalcogenides, and metal-organic frameworks.
ELLIPT2D: A Flexible Finite Element Code Written in Python
Pletzer, A.; Mollis, J.C.
2001-03-22
The use of the Python scripting language for scientific applications and in particular to solve partial differential equations is explored. It is shown that Python's rich data structures and object-oriented features can be exploited to write programs that are not only significantly more concise than their counterparts written in Fortran, C or C++, but are also numerically efficient. To illustrate this, a two-dimensional finite element code (ELLIPT2D) has been written. ELLIPT2D provides a flexible and easy-to-use framework for solving a large class of second-order elliptic problems. The program allows for structured or unstructured meshes. All functions defining the elliptic operator are user-supplied, as are the boundary conditions, which can be of Dirichlet, Neumann or Robbins type. ELLIPT2D makes extensive use of dictionaries (hash tables) as a way to represent sparse matrices. Other key features of the Python language that have been widely used include: operator overloading, error handling, array slicing, and the Tkinter module for building graphical user interfaces. As an example of the utility of ELLIPT2D, a nonlinear solution of the Grad-Shafranov equation is computed using a Newton iterative scheme. A second application focuses on a solution of the toroidal Laplace equation coupled to a magnetohydrodynamic stability code, a problem arising in the context of magnetic fusion research.
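The dictionary-as-sparse-matrix idea the abstract highlights is easy to sketch. The class below is a minimal, hypothetical illustration (ELLIPT2D's actual sparse class is richer): entries are keyed by (row, column), so element-by-element finite element assembly is a one-line accumulation.

```python
class DictSparse:
    """Sparse matrix stored as a dictionary keyed by (row, col) tuples."""

    def __init__(self):
        self.data = {}

    def add(self, i, j, v):
        # accumulate overlapping finite element contributions into entry (i, j)
        self.data[(i, j)] = self.data.get((i, j), 0.0) + v

    def matvec(self, x):
        # y = A @ x, touching only the stored nonzeros
        y = [0.0] * len(x)
        for (i, j), v in self.data.items():
            y[i] += v * x[j]
        return y
```

The hash-table lookup makes assembly order-independent, at the cost of slower matrix-vector products than a compressed-row format.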
49 CFR 40.203 - What problems cause a drug test to be cancelled unless they are corrected?
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false What problems cause a drug test to be cancelled... PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests § 40.203 What problems cause a drug test to be cancelled unless they are corrected? (a) As the MRO, when...
49 CFR 40.269 - What problems cause an alcohol test to be cancelled unless they are corrected?
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false What problems cause an alcohol test to be... Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Alcohol Testing § 40.269 What problems cause an alcohol test to be cancelled unless they are corrected? As a...
Design of the LRP airfoil series using 2D CFD
NASA Astrophysics Data System (ADS)
Zahle, Frederik; Bak, Christian; Sørensen, Niels N.; Vronsky, Tomas; Gaudern, Nicholas
2014-06-01
This paper describes the design and wind tunnel testing of a high-Reynolds number, high lift airfoil series designed for wind turbines. The airfoils were designed using direct gradient-based numerical multi-point optimization based on a Bezier parameterization of the shape, coupled to the 2D Navier-Stokes flow solver EllipSys2D. The resulting airfoils, the LRP2-30 and LRP2-36, achieve both higher operational lift coefficients and higher lift to drag ratios compared to the equivalent FFA-W3 airfoils.
Evaluation of 2D ceramic matrix composites in aeroconvective environments
NASA Technical Reports Server (NTRS)
Riccitiello, Salvatore R.; Love, Wendell L.; Balter-Peterson, Aliza
1992-01-01
An evaluation is conducted of a novel ceramic-matrix composite (CMC) material system for use in the aeroconvective-heating environments encountered by the nose caps and wing leading edges of such aerospace vehicles as the Space Shuttle, during orbit-insertion and reentry from LEO. These CMCs are composed of an SiC matrix that is reinforced with Nicalon, Nextel, or carbon refractory fibers in a 2D architecture. The test program conducted for the 2D CMCs gave attention to their subsurface oxidation.
Ginsparg, P.
1991-01-01
These are introductory lectures for a general audience that give an overview of the subject of matrix models and their application to random surfaces, 2d gravity, and string theory. They are intentionally 1.5 years out of date.
Problem-Solving Test: RNA and Protein Synthesis in Bacteriophage-Infected "E. coli" Cells
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2008-01-01
The classic experiment presented in this problem-solving test was designed to identify the template molecules of translation by analyzing the synthesis of phage proteins in "Escherichia coli" cells infected with bacteriophage T4. The work described in this test led to one of the most seminal discoveries of early molecular biology: it dealt a…
Chemical Approaches to 2D Materials.
Samorì, Paolo; Palermo, Vincenzo; Feng, Xinliang
2016-08-01
Chemistry plays an ever-increasing role in the production, functionalization, processing and applications of graphene and other 2D materials. This special issue highlights a selection of enlightening chemical approaches to 2D materials, which nicely reflect the breadth of the field and convey the excitement of the individuals involved in it, who are trying to translate graphene and related materials from the laboratory into a real, high-impact technology. PMID:27478083
Testing Three Problem List Terminologies in a simulated data entry environment.
Fung, Kin Wah; Xu, Junchuan; Rosenbloom, S Trent; Mohr, David; Maram, Naveen; Suther, Thomas
2011-01-01
Three Problem List Terminologies (PLT) were tested using a web-based application simulating a clinical data entry environment to evaluate coverage and coding efficiency. The three PLTs were: the CORE Problem List Subset of SNOMED CT, a clinical subset extracted from the full SNOMED CT and the PLT currently used at the Mayo Clinic. Candidate problem statements were randomly extracted from free text problem list entries contained in two electronic medical record systems. Physician reviewers searched for concepts in one of the three PLTs that most closely matched a problem statement. Altogether 45 reviewers reviewed 15 problems each. The coverage of the much smaller CORE Subset was comparable to Clinical SNOMED for combined exact or partial matches. The CORE Subset required the shortest time to find a concept. This may be related to the smaller size of the pick lists for the CORE Subset.
Yang, Li-Ming; Dornfeld, Matthew; Frauenheim, Thomas; Ganz, Eric
2015-10-21
We predict a highly stable and robust atomically thin gold monolayer with a hexagonal close packed lattice stabilized by metallic bonding with contributions from strong relativistic effects and aurophilic interactions. We have shown that the framework of the Au monolayer can survive 10 ps MD annealing simulations up to 1400 K. The framework is also able to survive large motions out of the plane. Due to the smaller number of bonds per atom in the 2D layer compared to the 3D bulk, we observe significantly enhanced energy per bond (0.94 vs. 0.52 eV per bond). This is similar to the increase in bond strength going from 3D diamond to 2D graphene. It is a non-magnetic metal and was found to be the global minimum in the 2D space. Phonon dispersion calculations demonstrate high kinetic stability with no negative modes. This 2D gold monolayer corresponds to the top monolayer of the bulk Au(111) face-centered cubic lattice. The close-packed lattice maximizes the aurophilic interactions. We find that the electrons are completely delocalized in the plane and behave as a 2D nearly free electron gas. We hope that the present work can inspire the experimental fabrication of novel free-standing 2D metal systems.
2d index and surface operators
NASA Astrophysics Data System (ADS)
Gadde, Abhijit; Gukov, Sergei
2014-03-01
In this paper we compute the superconformal index of 2d (2, 2) supersymmetric gauge theories. The 2d superconformal index, a.k.a. flavored elliptic genus, is computed by a unitary matrix integral much like the matrix integral that computes the 4d superconformal index. We compute the 2d index explicitly for a number of examples. In the case of abelian gauge theories we see that the index is invariant under flop transition and under CY-LG correspondence. The index also provides a powerful check of the Seiberg-type duality for non-abelian gauge theories discovered by Hori and Tong. In the latter half of the paper, we study half-BPS surface operators in N = 2 superconformal gauge theories. They are engineered by coupling the 2d (2, 2) supersymmetric gauge theory living on the support of the surface operator to the 4d N = 2 theory, so that different realizations of the same surface operator with a given Levi type are related by a 2d analogue of the Seiberg duality. The index of this coupled system is computed by using the tools developed in the first half of the paper. The superconformal index in the presence of a surface defect is expected to be invariant under generalized S-duality. We demonstrate that it is indeed the case. In doing so the Seiberg-type duality of the 2d theory plays an important role.
A New 2D-Transport, 1D-Diffusion Approximation of the Boltzmann Transport equation
Larsen, Edward
2013-06-17
The work performed in this project consisted of the derivation, implementation, and testing of a new, computationally advantageous approximation to the 3D Boltzmann transport equation. The solution of the Boltzmann equation is the neutron flux in nuclear reactor cores and shields, but solving this equation is difficult and costly. The new “2D/1D” approximation takes advantage of a special geometric feature of typical 3D reactors to approximate the neutron transport physics in a specific (axial) direction, but not in the other two (radial) directions. The resulting equation is much less expensive to solve computationally, and its solutions are expected to be sufficiently accurate for many practical problems. In this project we formulated the new equation, discretized it using standard methods, developed a stable iteration scheme for solving the equation, implemented the new numerical scheme in the MPACT code, and tested the method on several realistic problems. All the hoped-for features of this new approximation were seen. For large, difficult problems, the resulting 2D/1D solution is highly accurate, and is calculated about 100 times faster than a 3D discrete ordinates simulation.
Visualization of 2-D and 3-D Tensor Fields
NASA Technical Reports Server (NTRS)
Hesselink, Lambertus
1997-01-01
In previous work we have developed a novel approach to visualizing second order symmetric 2-D tensor fields based on degenerate point analysis. At degenerate points the eigenvalues are either zero or equal to each other, and the hyper-streamlines about these points give rise to tri-sector or wedge points. These singularities and their connecting hyper-streamlines determine the topology of the tensor field. In this study we are developing new methods for analyzing and displaying 3-D tensor fields. This problem is considerably more difficult than the 2-D one, as the richness of the data set is much larger. Here we report on our progress and a novel method to find, analyze and display 3-D degenerate points. First we discuss the theory, then an application involving a 3-D tensor field, the Boussinesq problem with two forces.
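In 2-D the degenerate-point condition is concrete: for a symmetric tensor [[a, b], [b, c]], the two eigenvalues coincide exactly where the deviator terms (a − c)/2 and b both vanish. The following grid-based detector is a minimal sketch of that textbook condition (not the authors' hyper-streamline code); it flags cells where both quantities change sign:

```python
import numpy as np

def degenerate_points(a, b, c, xs, ys):
    """Locate candidate degenerate points of the symmetric tensor field
    [[a, b], [b, c]] sampled on a grid: the eigenvalue gap is
    2*sqrt(d**2 + b**2) with d = (a - c)/2, so both d and b vanish there."""
    d = (a - c) / 2.0
    pts = []
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            cd = d[i:i + 2, j:j + 2]
            cb = b[i:i + 2, j:j + 2]
            # a sign change of both d and b across one cell flags a zero inside
            if cd.min() <= 0 <= cd.max() and cb.min() <= 0 <= cb.max():
                pts.append((0.5 * (xs[i] + xs[i + 1]),
                            0.5 * (ys[j] + ys[j + 1])))
    return pts
```

Classifying each flagged point as trisector or wedge would then use the sign of the Jacobian of (d, b), which this sketch omits.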
Visualization of 2-D and 3-D Tensor Fields
NASA Technical Reports Server (NTRS)
Hesselink, Lambertus
1995-01-01
In previous work we have developed a novel approach to visualizing second order symmetric 2-D tensor fields based on degenerate point analysis. At degenerate points the eigenvalues are either zero or equal to each other, and the hyperstreamlines about these points give rise to trisector or wedge points. These singularities and their connecting hyperstreamlines determine the topology of the tensor field. In this study we are developing new methods for analyzing and displaying 3-D tensor fields. This problem is considerably more difficult than the 2-D one, as the richness of the data set is much larger. Here we report on our progress and a novel method to find, analyze and display 3-D degenerate points. First we discuss the theory, then an application involving a 3-D tensor field, the Boussinesq problem with two forces.
2D FEM Heat Transfer & E&M Field Code
1992-04-02
TOPAZ and TOPAZ2D are two-dimensional implicit finite element computer codes for heat transfer analysis. TOPAZ2D can also be used to solve electrostatic and magnetostatic problems. The programs solve for the steady-state or transient temperature or electrostatic and magnetostatic potential field on two-dimensional planar or axisymmetric geometries. Material properties may be temperature or potential-dependent and either isotropic or orthotropic. A variety of time and temperature-dependent boundary conditions can be specified including temperature, flux, convection, and radiation. By implementing the user subroutine feature, users can model chemical reaction kinetics and allow for any type of functional representation of boundary conditions and internal heat generation. The programs can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in the material surrounding the enclosure. Additional features include thermal contact resistance across an interface, bulk fluids, phase change, and energy balances.
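In its simplest setting, the steady-state branch of such a solver reduces to relaxing the discrete Laplacian on the 2D domain. The sketch below is a minimal finite-difference illustration only, not TOPAZ2D's finite elements, and with fixed-temperature boundaries in place of the code's full menu of flux, convection and radiation conditions:

```python
import numpy as np

def steady_heat_2d(n, t_left=100.0, t_other=0.0, iters=5000):
    """Steady-state temperature on an n x n plate: left edge held at t_left,
    the other edges at t_other, solved by Jacobi iteration on the 5-point
    Laplacian (each interior node relaxes to the mean of its neighbors)."""
    T = np.full((n, n), t_other)
    T[:, 0] = t_left                       # Dirichlet boundary on the left edge
    for _ in range(iters):
        T[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                T[1:-1, :-2] + T[1:-1, 2:])
    return T
```

By the discrete maximum principle every interior value stays between the boundary extremes, which is a handy sanity check on any such relaxation solver.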
NASA Technical Reports Server (NTRS)
Bromley, L. K.; Travis, A. D.
1980-01-01
The compatibility and performance of the Shuttle communications system must be certified prior to operational missions. For this purpose, NASA has established the Electronics Systems Test Laboratory (ESTL) at the Johnson Space Center. This paper discusses the Shuttle communications system compatibility and performance testing being performed in the ESTL. The ESTL system verification test philosophy, including capabilities, procedures, and unique testing equipment, is summarized. Summaries of the significant results of compatibility and performance tests of the Orbiter/Space-flight Tracking and Data Network, Orbiter/Air Force Remote Tracking Station, Orbiter/Tracking and Data Relay Satellite System and Orbiter/Shuttle Launch Support System interfaces are presented. The ESTL's unique ability to locate potential communication problems and to participate in their resolution is discussed in detail.
Rowley-Neale, Samuel J; Fearn, Jamie M; Brownson, Dale A C; Smith, Graham C; Ji, Xiaobo; Banks, Craig E
2016-08-21
Two-dimensional molybdenum disulphide nanosheets (2D-MoS2) have proven to be an effective electrocatalyst, with particular attention being focused on their use towards increasing the efficiency of the reactions associated with hydrogen fuel cells. Whilst the majority of research has focused on the Hydrogen Evolution Reaction (HER), herein we explore the use of 2D-MoS2 as a potential electrocatalyst for the much less researched Oxygen Reduction Reaction (ORR). We stray from literature conventions and perform experiments in 0.1 M H2SO4 acidic electrolyte for the first time, evaluating the electrochemical performance of the ORR with 2D-MoS2 electrically wired/immobilised upon several carbon based electrodes (namely; Boron Doped Diamond (BDD), Edge Plane Pyrolytic Graphite (EPPG), Glassy Carbon (GC) and Screen-Printed Electrodes (SPE)) whilst exploring a range of 2D-MoS2 coverages/masses. Consequently, the findings of this study are highly applicable to real world fuel cell applications. We show that significant improvements in ORR activity can be achieved through the careful selection of the underlying/supporting carbon materials that electrically wire the 2D-MoS2 and utilisation of an optimal mass of 2D-MoS2. The ORR onset is observed to be reduced to ca. +0.10 V for EPPG, GC and SPEs at 2D-MoS2 (1524 ng cm(-2) modification), which is far closer to Pt at +0.46 V compared to bare/unmodified EPPG, GC and SPE counterparts. This report is the first to demonstrate such beneficial electrochemical responses in acidic conditions using a 2D-MoS2 based electrocatalyst material on a carbon-based substrate (SPEs in this case). Investigation of the beneficial reaction mechanism reveals the ORR to occur via a 4 electron process in specific conditions; elsewhere a 2 electron process is observed. This work offers valuable insights for those wishing to design, fabricate and/or electrochemically test 2D-nanosheet materials towards the ORR. PMID:27448174
Finite Element Analysis of 2-D Elastic Contacts Involving FGMs
NASA Astrophysics Data System (ADS)
Abhilash, M. N.; Murthy, H.
2014-05-01
The response of elastic indenters in contact with Functionally Graded Material (FGM) coated homogeneous elastic half space has been presented in the current paper. Finite element analysis has been used due to its ability to handle complex geometry, material, and boundary conditions. Indenters of different typical surface profiles have been considered and the problem has been idealized as a two-dimensional (2D) plane strain problem considering only normal loads. Initially, indenters were considered to be rigid and the results were validated with the solutions presented in the literature. The analysis has then been extended to the case of elastic indenters on FGM-coated half spaces and the results are discussed.
A parallel splitting wavelet method for 2D conservation laws
NASA Astrophysics Data System (ADS)
Schmidt, Alex A.; Kozakevicius, Alice J.; Jakobsson, Stefan
2016-06-01
The current work presents a parallel formulation, using the MPI protocol, of an adaptive high-order finite difference scheme for solving 2D conservation laws. Adaptivity is achieved at each time iteration by applying an interpolating wavelet transform in each space dimension. High-order approximations of the numerical fluxes are computed by ENO and WENO schemes. Since time evolution is performed by a TVD Runge-Kutta scheme with spatial splitting, the problem is naturally suited to parallelization. Numerical simulations and speedup results are presented for the Euler equations in gas dynamics problems.
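The ingredients named in the abstract can be sketched in miniature. The block below is a generic illustration, not the paper's method: it advances the 1D linear advection equation in conservative form with a simple Lax-Friedrichs flux (standing in for the ENO/WENO fluxes) and a second-order TVD (strong-stability-preserving) Runge-Kutta step; the wavelet adaptivity and MPI parallelism are omitted, and all parameter values are invented.

```python
import numpy as np

def lax_friedrichs_flux(u, c, dx, dt):
    # Numerical flux at interface i-1/2 for linear advection, f(u) = c*u
    f = c * u
    return 0.5 * (np.roll(f, 1) + f) - 0.5 * (dx / dt) * (u - np.roll(u, 1))

def rhs(u, c, dx, dt):
    # Conservative update: -(F_{i+1/2} - F_{i-1/2}) / dx on a periodic grid
    flux = lax_friedrichs_flux(u, c, dx, dt)
    return -(np.roll(flux, -1) - flux) / dx

def tvd_rk2_step(u, c, dx, dt):
    # Second-order TVD (SSP) Runge-Kutta step: convex combination of Euler steps
    u1 = u + dt * rhs(u, c, dx, dt)
    return 0.5 * (u + u1 + dt * rhs(u1, c, dx, dt))

# Advect a square wave on a periodic unit domain
n, c = 200, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / abs(c)                      # CFL-limited time step
u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)
mass0 = u.sum() * dx
for _ in range(100):
    u = tvd_rk2_step(u, c, dx, dt)
# The conservative form preserves total mass; the monotone flux keeps 0 <= u <= 1
```

Because each cell's update is a flux difference, the sum over a periodic grid telescopes, which is the discrete analogue of conservation.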
Orthotropic Piezoelectricity in 2D Nanocellulose
NASA Astrophysics Data System (ADS)
García, Y.; Ruiz-Blanco, Yasser B.; Marrero-Ponce, Yovani; Sotomayor-Torres, C. M.
2016-10-01
The control of electromechanical responses within bonding regions is essential to face frontier challenges in nanotechnologies, such as molecular electronics and biotechnology. Here, we present Iβ-nanocellulose as a potentially new orthotropic 2D piezoelectric crystal. The predicted in-layer piezoelectricity originates in a sui generis hydrogen-bond pattern. Building on this fact and using a combination of ab initio and ad hoc models, we introduce a description of electrical profiles along chemical bonds. These developments yield a rationale for modelling the extended piezoelectric effect originating at bond scales. The order of magnitude estimated for the 2D Iβ-nanocellulose piezoelectric response, ~pm V−1, ranks this material at the level of currently used piezoelectric energy generators and new artificial 2D designs. Such a finding would be crucial for developing alternative materials to drive emerging nanotechnologies.
2D microwave imaging reflectometer electronics
Spear, A. G.; Domier, C. W.; Hu, X.; Muscatello, C. M.; Ren, X.; Luhmann, N. C.; Tobias, B. J.
2014-11-15
A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program.
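As a rough illustration of how in-phase (I) and quadrature (Q) signals carry the phase, and hence the fluctuation information, the sketch below performs quadrature down-conversion of a synthetic intermediate-frequency tone in software. The sample rate, IF, and phase value are made-up numbers, and this is in no way the DIII-D receiver electronics.

```python
import numpy as np

fs, f_if = 1.0e6, 50.0e3       # sample rate and intermediate frequency (assumed)
n = 4000                       # an integer number of IF cycles (200) fits exactly
t = np.arange(n) / fs
phase = 0.7                    # stand-in for the phase shift from the plasma cutoff
sig = np.cos(2 * np.pi * f_if * t + phase)

# Quadrature down-conversion: mix with cos/sin and average (a crude lowpass filter)
i = np.mean(2 * sig * np.cos(2 * np.pi * f_if * t))
q = np.mean(-2 * sig * np.sin(2 * np.pi * f_if * t))

recovered = np.arctan2(q, i)   # recovered phase tracks the density fluctuation
```

Averaging over an exact whole number of cycles makes the double-frequency mixing products cancel, leaving i ≈ cos(phase) and q ≈ sin(phase).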
Optical modulators with 2D layered materials
NASA Astrophysics Data System (ADS)
Sun, Zhipei; Martinez, Amos; Wang, Feng
2016-04-01
Light modulation is an essential operation in photonics and optoelectronics. With existing and emerging technologies increasingly demanding compact, efficient, fast and broadband optical modulators, high-performance light modulation solutions are becoming indispensable. The recent realization that 2D layered materials could modulate light with superior performance has prompted intense research and significant advances, paving the way for realistic applications. In this Review, we cover the state of the art of optical modulators based on 2D materials, including graphene, transition metal dichalcogenides and black phosphorus. We discuss recent advances employing hybrid structures, such as 2D heterostructures, plasmonic structures, and silicon and fibre integrated structures. We also take a look at the future perspectives and discuss the potential of yet relatively unexplored mechanisms, such as magneto-optic and acousto-optic modulation.
NASA Technical Reports Server (NTRS)
Gelinas, R. J.; Doss, S. K.; Vajk, J. P.; Djomehri, J.; Miller, K.
1983-01-01
The mathematical background regarding the moving finite element (MFE) method of Miller and Miller (1981) is discussed, taking into account a general system of partial differential equations (PDE) and the amenability of the MFE method in two dimensions to code modularization and to semiautomatic user-construction of numerous PDE systems for both Dirichlet and zero-Neumann boundary conditions. A description of test problem results is presented, giving attention to aspects of single square wave propagation, and a solution of the heat equation.
Inkjet printing of 2D layered materials.
Li, Jiantong; Lemme, Max C; Östling, Mikael
2014-11-10
Inkjet printing of 2D layered materials, such as graphene and MoS2, has attracted great interest for emerging electronics. However, incompatible rheology, low concentration, severe aggregation and toxicity of solvents constitute critical challenges which hamper manufacturing efficiency and product quality. Here, we introduce a simple and general technology concept (distillation-assisted solvent exchange) to efficiently overcome these challenges. By implementing the concept, we have demonstrated excellent jetting performance, ideal printing patterns and a variety of promising applications for inkjet printing of 2D layered materials. PMID:25169938
Potential role of CYP2D6 in the central nervous system
Cheng, Jie; Zhen, Yueying; Miksys, Sharon; Beyoğlu, Diren; Krausz, Kristopher W.; Tyndale, Rachel F.; Yu, Aiming; Idle, Jeffrey R.; Gonzalez, Frank J.
2013-01-01
Cytochrome P450 2D6 (CYP2D6) is a pivotal enzyme responsible for a major drug oxidation polymorphism in human populations. The distribution of CYP2D6 in the brain and its role in serotonin metabolism suggest that CYP2D6 may have a function in the central nervous system. To establish an efficient and accurate platform for the study of CYP2D6 in vivo, a transgenic human CYP2D6 (Tg-2D6) model was generated by transgenesis in wild-type C57BL/6 (WT) mice using a P1 phage artificial chromosome clone containing the complete human CYP2D locus, including the CYP2D6 gene and its 5’- and 3’-flanking sequences. Human CYP2D6 was expressed not only in the liver but also in the brain. The abundance of serotonin and 5-hydroxyindoleacetic acid in the brains of Tg-2D6 mice is higher than in WT mice, both at basal levels and after harmaline induction. Metabolomics of brain homogenate and cerebrospinal fluid revealed a significant up-regulation of l-carnitine, acetyl-l-carnitine, pantothenic acid, dCDP, anandamide, N-acetylglucosaminylamine, and a down-regulation of stearoyl-l-carnitine in Tg-2D6 mice compared with WT mice. Anxiety tests indicate that Tg-2D6 mice have a higher capability to adapt to anxiety. Overall, these findings indicate that the Tg-2D6 mouse model may serve as a valuable in vivo tool to determine CYP2D6-involved neurophysiological metabolism and function. PMID:23614566
Parallel stitching of 2D materials
Ling, Xi; Wu, Lijun; Lin, Yuxuan; Ma, Qiong; Wang, Ziqiang; Song, Yi; Yu, Lili; Huang, Shengxi; Fang, Wenjing; Zhang, Xu; et al
2016-01-27
Diverse parallel stitched 2D heterostructures, including metal–semiconductor, semiconductor–semiconductor, and insulator–semiconductor, are synthesized directly through selective “sowing” of aromatic molecules as the seeds in the chemical vapor deposition (CVD) method. The methodology enables the large-scale fabrication of lateral heterostructures, which offers tremendous potential for its application in integrated circuits.
Testing Foreign Language Impact on Engineering Students' Scientific Problem-Solving Performance
ERIC Educational Resources Information Center
Tatzl, Dietmar; Messnarz, Bernd
2013-01-01
This article investigates the influence of English as the examination language on the solution of physics and science problems by non-native speakers in tertiary engineering education. For that purpose, a statistically significant total number of 96 students in four year groups from freshman to senior level participated in a testing experiment in…
ERIC Educational Resources Information Center
Smith, Mike U.
Both teachers and students alike acknowledge that genetics and genetics problem-solving are extremely difficult to learn and to teach. Therefore, a number of recommendations for teaching college genetics are offered. Although few of these ideas have as yet been tested in controlled experiments, they are supported by research and experience and may…
Problem-solving deficits in alcoholics: evidence from the California Card Sorting Test.
Beatty, W W; Katzung, V M; Nixon, S J; Moreland, V J
1993-11-01
In an attempt to clarify the nature of the problem-solving deficits exhibited by chronic alcoholics, the California Card Sorting Test (CCST) and other measures of abstraction and problem solving were administered to 23 alcoholics and 16 nonalcoholic controls, equated for age, education and vocabulary. On the CCST, the alcoholics exhibited three types of deficits which appeared to be relatively independent. First, the alcoholics generated and identified fewer correct concepts than controls, although they executed concepts normally when cued by the examiner. Second, the alcoholics made more perseverative sorting responses and perseverative verbal explanations for their sorting behavior than did controls. Third, alcoholics provided less complete verbal explanations of the concepts that they correctly generated or identified. The differential importance of these factors on various measures of problem solving may help to explain the varied patterns of inefficient problem solving exhibited by alcoholics.
Optimal design of 2D digital filters based on neural networks
NASA Astrophysics Data System (ADS)
Wang, Xiao-hua; He, Yi-gang; Zheng, Zhe-zhao; Zhang, Xu-hong
2005-02-01
Two-dimensional (2-D) digital filters are widely used in image processing and other 2-D digital signal processing fields, but designing 2-D filters is much more difficult than designing one-dimensional (1-D) ones. In this paper, a new design approach for linear-phase 2-D digital filters is described, based on a new neural network algorithm (NNA). By using the symmetry of the given 2-D magnitude specification, a compact expression for the magnitude response of a linear-phase 2-D finite impulse response (FIR) filter is derived. Consequently, the problem of designing optimal linear-phase 2-D FIR digital filters reduces to approximating the desired 2-D magnitude response with this compact expression. To solve the problem, a new NNA based on minimizing the mean-squared error is presented, and a convergence theorem is stated and proved to ensure that the designed 2-D filter is stable. Three design examples illustrate the effectiveness of the NNA-based design approach.
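The least-squares core of such a design problem can be sketched without the neural-network machinery: expand the zero-phase magnitude response in a 2-D cosine basis (exploiting quadrantal symmetry) and fit the coefficients to a desired circular lowpass response by minimizing the mean-squared error over a frequency grid. This is a generic sketch, not the paper's NNA; the grid sizes and cutoff are illustrative.

```python
import numpy as np

M = N = 6                       # cosine terms per dimension (illustrative)
K = 24                          # frequency samples per axis
w = np.linspace(0.0, np.pi, K)
W1, W2 = np.meshgrid(w, w, indexing="ij")

# Desired magnitude: ideal circularly symmetric lowpass, cutoff 0.4*pi
D = (np.sqrt(W1**2 + W2**2) <= 0.4 * np.pi).astype(float).ravel()

# Zero-phase basis: H(w1, w2) = sum_{m,n} a[m, n] * cos(m*w1) * cos(n*w2)
cols = [(np.cos(m * W1) * np.cos(n * W2)).ravel()
        for m in range(M) for n in range(N)]
A = np.stack(cols, axis=1)

# Coefficients that minimize the mean-squared error over the grid
a, *_ = np.linalg.lstsq(A, D, rcond=None)
mse = np.mean((A @ a - D) ** 2)
```

Because the basis is linear in the coefficients, the mean-squared-error minimum is a linear least-squares solve; iterative schemes such as the paper's NNA converge to the same kind of optimum.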
Emotional Intelligence and Problem Solving Strategy: Comparative Study Basedon "Tower of Hanoi" Test
Arefnasab, Zahra; Zare, Hosein; Babamahmoodi, Abdolreza
2012-01-01
Objective: The aim of this study was to compare problem-solving strategies between people with high and low emotional intelligence (EI). Methods: This cross-sectional descriptive study sampled senior BS and BA students between 20 and 30 years old, divided into high- and low-EI groups of 30 subjects each. Data were analyzed with the non-parametric chi-square test for the main dependent variable (problem-solving strategies) and for the accessory dependent variables (manner of starting and completing the test). The independent two-group t-test was used for the other accessory dependent variables (number of errors and total time used to complete the test). Results: There was a significant difference between the two groups in number of errors (t = -3.67, p = 0) and in total time used to complete the test (t = -6.17, p = 0), and there was a significant relation between EI and problem-solving strategies (χ2 = 25.71, p < 0.01; Cramer's V = 0.65, p < 0.01). There was also a significant relation between EI and completion of the test (χ2 = 20.31, p < 0.01; φ = 0.58, p < 0.01), but the relation between EI and manner of starting the test was not significant (χ2 = 1.11, p = 0.29). Subjects with high EI used the “insightful” strategy more, and subjects with low EI used the “trial-and-error” strategy more. The first group completed the test more rapidly and with fewer errors than the second group, and was more successful in performing the test. Conclusion: People with high EI solve problems significantly better than people with low EI. PMID:24644484
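The chi-square association statistic and Cramér's V used in such analyses can be reproduced in form on a hypothetical 2x2 table of EI group versus strategy; the counts below are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical 2x2 contingency table (EI group x strategy); invented counts
obs = np.array([[24, 6],    # high EI: insightful vs trial-and-error
                [7, 23]])   # low EI

row = obs.sum(axis=1, keepdims=True)
col = obs.sum(axis=0, keepdims=True)
n = obs.sum()
expected = row * col / n            # expected counts under independence

chi2 = ((obs - expected) ** 2 / expected).sum()
# Cramer's V; for a 2x2 table this reduces to the phi coefficient
cramers_v = np.sqrt(chi2 / (n * (min(obs.shape) - 1)))
```

With one degree of freedom, any chi-square value above 3.84 is significant at the 0.05 level, so a table this lopsided comfortably rejects independence.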
NASA Astrophysics Data System (ADS)
Di Fiore, V.; Cavuoto, G.; Tarallo, D.; Punzo, M.; Evangelista, L.
2016-05-01
A joint analysis of down-hole (DH) and multichannel analysis of surface waves (MASW) measurements offers a complete evaluation of shear wave velocity profiles, especially for sites where a strong lateral variability is expected, such as archeological sites. In this complex stratigraphic setting, the high "subsoil anisotropy" (i.e., sharp lithological changes due to the presence of anthropogenic backfill deposits and/or buried man-made structures) implies a different role for DH and MASW tests. This paper discusses some results of a broad experimental program conducted on the Palatine Hill, one of the most ancient areas of the city of Rome (Italy). The experiments were part of a project on seismic microzoning and consisted of 20 MASW and 11 DH tests. The main objective of this study was to examine the difficulties related to the interpretation of the DH and MASW tests and the reliability limits inherent in the application of the noninvasive method in complex stratigraphic settings. As is well known, DH tests provide good determinations of shear wave velocities (Vs) for different lithologies and man-made materials, whereas MASW tests provide average values for the subsoil volume investigated. The data obtained from each method with blind tests were compared and were correlated to site-specific subsurface conditions, including lateral variability. Differences between punctual (DH) and global (MASW) Vs measurements are discussed, quantifying the errors by synthetic comparison and by site response analyses. This study demonstrates that, for archeological sites, VS profiles obtained from the DH and MASW methods differ by more than 15 %. However, the local site effect showed comparable results in terms of natural frequencies, whereas the resolution of the inverted shear wave velocity was influenced by the fundamental mode of propagation.
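A percent-difference comparison of the two kinds of Vs profiles, of the sort summarized above, can be sketched as follows; the depths and velocities are invented for illustration and are not the Palatine Hill data.

```python
import numpy as np

# Hypothetical Vs profiles (m/s) at shared depths (m); invented values
depth = np.array([2.0, 5.0, 10.0, 15.0, 20.0])
vs_dh = np.array([180.0, 240.0, 330.0, 410.0, 520.0])    # down-hole (punctual)
vs_masw = np.array([150.0, 200.0, 280.0, 340.0, 430.0])  # MASW (volume average)

# Mean absolute percent difference, down-hole values taken as the reference
pct_diff = 100.0 * np.mean(np.abs(vs_dh - vs_masw) / vs_dh)
```

In real data the two methods sample different subsoil volumes, so profiles would first be interpolated to common depths before differencing.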
NASA High-Speed 2D Photogrammetric Measurement System
NASA Technical Reports Server (NTRS)
Dismond, Harriett R.
2012-01-01
The object of this report is to provide users of the NASA high-speed 2D photogrammetric measurement system with procedures required to obtain drop-model trajectory and impact data for full-scale and sub-scale models. This guide focuses on use of the system for vertical drop testing at the NASA Langley Landing and Impact Research (LandIR) Facility.
Simulator test to study hot-flow problems related to a gas cooled reactor
NASA Technical Reports Server (NTRS)
Poole, J. W.; Freeman, M. P.; Doak, K. W.; Thorpe, M. L.
1973-01-01
An advance study of materials, fuel injection, and hot flow problems related to the gas core nuclear rocket is reported. The first task was to test a previously constructed induction heated plasma GCNR simulator above 300 kW. A number of tests are reported operating in the range of 300 kW at 10,000 cps. A second simulator was designed but not constructed for cold-hot visualization studies using louvered walls. A third task was a paper investigation of practical uranium feed systems, including a detailed discussion of related problems. The last assignment resulted in two designs for plasma nozzle test devices that could be operated at 200 atm on hydrogen.
Application of 2D Non-Graphene Materials and 2D Oxide Nanostructures for Biosensing Technology
Shavanova, Kateryna; Bakakina, Yulia; Burkova, Inna; Shtepliuk, Ivan; Viter, Roman; Ubelis, Arnolds; Beni, Valerio; Starodub, Nickolaj; Yakimova, Rositsa; Khranovskyy, Volodymyr
2016-01-01
The discovery of graphene and its unique properties has inspired researchers to try to invent other two-dimensional (2D) materials. After considerable research effort, a distinct “beyond graphene” domain has been established, comprising the library of non-graphene 2D materials. It is significant that some 2D non-graphene materials possess solid advantages over their predecessor, such as having a direct band gap, and therefore are highly promising for a number of applications. These applications are not limited to nano- and opto-electronics, but have a strong potential in biosensing technologies, as one example. However, since most of the 2D non-graphene materials have been newly discovered, most of the research efforts are concentrated on material synthesis and the investigation of the properties of the material. Applications of 2D non-graphene materials are still at the embryonic stage, and the integration of 2D non-graphene materials into devices is scarcely reported. However, in recent years, numerous reports have blossomed about 2D material-based biosensors, evidencing the growing potential of 2D non-graphene materials for biosensing applications. This review highlights the recent progress in research on the potential of using 2D non-graphene materials and similar oxide nanostructures for different types of biosensors (optical and electrochemical). A wide range of biological targets, such as glucose, dopamine, cortisol, DNA, IgG, bisphenol, ascorbic acid, cytochrome and estradiol, has been reported to be successfully detected by biosensors with transducers made of 2D non-graphene materials. PMID:26861346
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false What problem requires corrective action but does... of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests § 40.208 What problem requires corrective action but does not result in...
Johansen, J D; Andersen, T F; Veien, N; Avnstorp, C; Andersen, K E; Menné, T
1997-03-01
The aim of the present study was to investigate the relationship between patients' own recognition of skin problems using consumer products and the results of patch testing with markers of fragrance sensitization. Eight hundred and eighty-four consecutive eczema patients, 18-69 years of age, filled in a questionnaire prior to patch testing with the European standard series. The questionnaire contained questions about skin symptoms from the use of scented and unscented products as well as skin reactions from contact with spices, flowers and citrus fruits that could indicate fragrance sensitivity. A highly significant association was found between reporting a history of visible skin symptoms from using scented products and a positive patch test to the fragrance mix, whereas no such relationship could be established to the Peru balsam in univariate or multivariate analysis. Our results suggest that the role of Peru balsam in detecting relevant fragrance contact allergy is limited, while most fragrance mix-positive patients are aware that the use of scented products may cause skin problems.
Stochastic Inversion of 2D Magnetotelluric Data
Chen, Jinsong
2010-07-01
The algorithm is developed to invert 2D magnetotelluric (MT) data based on sharp boundary parametrization using a Bayesian framework. Within the algorithm, we treat the locations of the interfaces and the resistivity of the regions they form as unknowns. We use a parallel, adaptive finite-element algorithm to forward simulate frequency-domain MT responses of 2D conductivity structure. The unknown parameters are spatially correlated and are described by a geostatistical model. The joint posterior probability distribution function is explored by Markov Chain Monte Carlo (MCMC) sampling methods. The developed stochastic model is effective for estimating the interface locations and resistivity. Most importantly, it provides detailed uncertainty information on each unknown parameter. Hardware requirements: PC, Supercomputer, Multi-platform, Workstation. Software requirements: C and Fortran. Operating systems: Linux/Unix or Windows.
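The Bayesian MCMC framework summarized above can be illustrated with a minimal Metropolis sampler. This is a toy sketch only: a made-up scalar forward model stands in for the paper's parallel adaptive finite-element MT simulator, and `metropolis` is a hypothetical helper, not the authors' code.

```python
import random, math

def metropolis(data, sigma, n_steps=5000, step=0.2, seed=1):
    """Metropolis sampling of a single unknown (a log-resistivity, say)
    under a flat prior and a Gaussian likelihood. The identity forward
    model d = m is a stand-in for a real MT response simulator."""
    rng = random.Random(seed)
    m = 1.0                                  # arbitrary starting model

    def loglike(m):
        return -sum((d - m) ** 2 for d in data) / (2.0 * sigma ** 2)

    ll = loglike(m)
    samples = []
    for _ in range(n_steps):
        prop = m + rng.gauss(0.0, step)      # random-walk proposal
        ll_prop = loglike(prop)
        if math.log(rng.random()) < ll_prop - ll:   # accept/reject
            m, ll = prop, ll_prop
        samples.append(m)
    return samples

# three synthetic observations centered on 2.0
samples = metropolis(data=[2.1, 1.9, 2.0], sigma=0.1)
post_mean = sum(samples[1000:]) / len(samples[1000:])
```

After burn-in, the chain's histogram approximates the posterior, which is how the uncertainty information mentioned in the abstract is obtained.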
Static & Dynamic Response of 2D Solids
Lin, Jerry
1996-07-15
NIKE2D is an implicit finite-element code for analyzing the finite deformation, static and dynamic response of two-dimensional, axisymmetric, plane strain, and plane stress solids. The code is fully vectorized and available on several computing platforms. A number of material models are incorporated to simulate a wide range of material behavior, including elasto-plasticity, anisotropy, creep, thermal effects, and rate dependence. Slideline algorithms model gaps and sliding along material interfaces, including interface friction, penetration, and single-surface contact. Interactive graphics and rezoning are included for analyses with large mesh distortions. In addition to quasi-Newton and arc-length procedures, adaptive algorithms can be defined to solve the implicit equations using the solution language ISLAND. Each of these capabilities and more make NIKE2D a robust analysis tool.
Explicit 2-D Hydrodynamic FEM Program
Lin, Jerry
1996-08-07
DYNA2D* is a vectorized, explicit, two-dimensional, axisymmetric and plane strain finite element program for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. DYNA2D* contains 13 material models and 9 equations of state (EOS) to cover a wide range of material behavior. The material models implemented in all machine versions are: elastic, orthotropic elastic, kinematic/isotropic elastic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, rubber, high explosive burn, isotropic elastic-plastic, temperature-dependent elastic-plastic. The isotropic and temperature-dependent elastic-plastic models determine only the deviatoric stresses. Pressure is determined by one of 9 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, and tabulated.
2D photonic-crystal optomechanical nanoresonator.
Makles, K; Antoni, T; Kuhn, A G; Deléglise, S; Briant, T; Cohadon, P-F; Braive, R; Beaudoin, G; Pinard, L; Michel, C; Dolique, V; Flaminio, R; Cagnoli, G; Robert-Philip, I; Heidmann, A
2015-01-15
We present the optical optimization of an optomechanical device based on a suspended InP membrane patterned with a 2D near-wavelength grating (NWG) based on a 2D photonic-crystal geometry. We first identify by numerical simulation a set of geometrical parameters providing a reflectivity higher than 99.8% over a 50-nm span. We then study the limitations induced by the finite value of the optical waist and lateral size of the NWG pattern using different numerical approaches. The NWG grating, pierced in a suspended InP 265-nm thick membrane, is used to form a compact microcavity involving the suspended nanomembrane as an end mirror. The resulting cavity has a waist size smaller than 10 μm and a finesse in the 200 range. It is used to probe the Brownian motion of the mechanical modes of the nanomembrane. PMID:25679837
Compact 2-D graphical representation of DNA
NASA Astrophysics Data System (ADS)
Randić, Milan; Vračko, Marjan; Zupan, Jure; Novič, Marjana
2003-05-01
We present a novel 2-D graphical representation for DNA sequences which has an important advantage over the existing graphical representations of DNA in being very compact. It is based on: (1) use of binary labels for the four nucleic acid bases, and (2) use of the 'worm' curve as template on which binary codes are placed. The approach is illustrated on DNA sequences of the first exon of human β-globin and gorilla β-globin.
2D materials: Graphene and others
NASA Astrophysics Data System (ADS)
Bansal, Suneev Anil; Singh, Amrinder Pal; Kumar, Suresh
2016-05-01
This report reviews recent advancements in new atomically thick 2D materials. The materials covered in this review are Graphene, Silicene, Germanene, Boron Nitride (BN) and Transition metal chalcogenides (TMC). These materials show extraordinary mechanical, electronic and optical properties which make them suitable candidates for future applications. Apart from their unique properties, the tunability of the highly desirable properties of these materials is also an important area of emphasis.
Layer Engineering of 2D Semiconductor Junctions.
He, Yongmin; Sobhani, Ali; Lei, Sidong; Zhang, Zhuhua; Gong, Yongji; Jin, Zehua; Zhou, Wu; Yang, Yingchao; Zhang, Yuan; Wang, Xifan; Yakobson, Boris; Vajtai, Robert; Halas, Naomi J; Li, Bo; Xie, Erqing; Ajayan, Pulickel
2016-07-01
A new concept for junction fabrication by connecting multiple regions with varying layer thicknesses, based on the thickness dependence, is demonstrated. This type of junction is only possible in super-thin-layered 2D materials, and exhibits similar characteristics as p-n junctions. Rectification and photovoltaic effects are observed in chemically homogeneous MoSe2 junctions between domains of different thicknesses. PMID:27136275
Realistic and efficient 2D crack simulation
NASA Astrophysics Data System (ADS)
Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek
2010-04-01
Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and efficient runtime 2D crack/fracture simulation system by applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level-of-detail. The generated binary decomposition tree also provides an efficient neighbor retrieval mechanism used for mesh element splitting and merging, with minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are taken into account to produce the criteria of crack initialization, propagation, and termination, leading to realistic fractal-like rubble/fragment formation. The aforementioned parameters are used as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values for the variables/parameters from prior benchmark data generated by off-line physics-based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted on various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shard generation), with diverse potential in military and civil M&S applications such as training and mission planning.
[Relationship between ethanol patch test and problem drinkers among dental students].
Watanabe, Hisako; Nasu, Ikuo
2002-06-01
In 2000 and 2001, we carried out a drinking habit survey and the Ethanol Patch Test on 232 fourth-year dental students (128 males, 104 females). The results were statistically analyzed. For the survey, the students were asked to fill out, anonymously, the forms of the Tokyo-University ALDH2-Phenotype Screening Test (TAST), the Kurihama Alcoholism Screening Test (KAST), and the Adolescent Alcohol Involvement Scale (AAIS). The results of the subsequent Ethanol Patch Test were evaluated by the students themselves. The Patch Test demonstrated that 44.5% of males and 49.0% of females were positive to the test, or ALDH2 deficient, the rest having the marker substance. According to the TAST results, ALDH2-deficient or TAST-positive (alcohol-intolerant) subjects accounted for 48.4% of males and 51.9% of females, the rest being ALDH2-present or TAST-negative students. Among the Patch Test-positive group, the ratio of problem drinkers according to the KAST was 8.8% in males and 2.0% in females. The corresponding figures for the test-negative group were 22.5% in males and 7.5% in females, higher than those for the test-positive group. Among the test-positive group, the ratio of problem drinkers scoring at least 42 points on the AAIS stood at 19.3% in males and 7.8% in females; among the test-negative group, the corresponding figures were 21.1% in males and 13.2% in females, the difference between the groups being relatively small. The results of the Ethanol Patch Test were related to those of the TAST and KAST, but not to the AAIS. The correlation between the Patch Test and KAST indicates that test-negatives are prone to become alcohol-dependent. Though the results of the Patch Test and those of the AAIS were not related, the findings show that some alcohol-intolerant university students are drinking excessively. PMID:12138721
2D Spinodal Decomposition in Forced Turbulence
NASA Astrophysics Data System (ADS)
Fan, Xiang; Diamond, Patrick; Chacon, Luis; Li, Hui
2015-11-01
Spinodal decomposition is a second-order phase transition in which a binary fluid mixture separates from one thermodynamic phase into two coexisting phases. The governing equation for this coarsening process below the critical temperature, the Cahn-Hilliard equation, is very similar to the 2D MHD equations; in particular, the conserved quantities have a close correspondence, so theories for MHD turbulence can be used to study spinodal decomposition in forced turbulence. The domain size increases with time along with the inverse cascade, and the length scale can be arrested by forced turbulence with a direct cascade. The two competing mechanisms lead to a stabilized domain-size length scale, which can be characterized by the Hinze scale. The 2D spinodal decomposition in forced turbulence is studied by both theory and simulation with ``pixie2d.'' This work focuses on the relation between the Hinze scale and spectra and cascades. Similarities and differences between spinodal decomposition and MHD are investigated. Some transport properties are also studied following MHD theories. This work is supported by the Department of Energy under Award Number DE-FG02-04ER54738.
Engineering light outcoupling in 2D materials.
Lien, Der-Hsien; Kang, Jeong Seuk; Amani, Matin; Chen, Kevin; Tosun, Mahmut; Wang, Hsin-Ping; Roy, Tania; Eggleston, Michael S; Wu, Ming C; Dubey, Madan; Lee, Si-Chen; He, Jr-Hau; Javey, Ali
2015-02-11
When light is incident on 2D transition metal dichalcogenides (TMDCs), it engages in multiple reflections within underlying substrates, producing interferences that lead to enhancement or attenuation of the incoming and outgoing strength of light. Here, we report a simple method to engineer the light outcoupling in semiconducting TMDCs by modulating their dielectric surroundings. We show that by modulating the thicknesses of underlying substrates and capping layers, the interference caused by substrate can significantly enhance the light absorption and emission of WSe2, resulting in a ∼11 times increase in Raman signal and a ∼30 times increase in the photoluminescence (PL) intensity of WSe2. On the basis of the interference model, we also propose a strategy to control the photonic and optoelectronic properties of thin-layer WSe2. This work demonstrates the utilization of outcoupling engineering in 2D materials and offers a new route toward the realization of novel optoelectronic devices, such as 2D LEDs and solar cells.
TOPAZ2D validation status report, August 1990
Davis, B.
1990-08-01
Analytic solutions to two heat transfer problems were used to partially evaluate the performance of TOPAZ2D, an LLNL finite element heat transfer code. The two benchmark analytic solutions were for: a 2D steady-state slab, with constant properties, constant uniform temperature boundary conditions on three sides, and a temperature distribution following a sine function on the fourth side; and a 1D transient non-linear problem, with temperature-dependent conductivity and specific heat (varying such that the thermal diffusivity remained constant), constant heat flux on the front face and adiabatic conditions on the other face. The TOPAZ2D solution converged to the analytic solution in both the transient and the steady-state problem. The consistent-mass-matrix type of analysis yielded the best performance for the transient problem in the late-time response, but notable unnatural anomalies were observed in the early-time temperature response at nodal locations near the front face. 5 refs., 22 figs.
49 CFR 40.203 - What problems cause a drug test to be cancelled unless they are corrected?
Code of Federal Regulations, 2011 CFR
2011-10-01
... 49 Transportation 1 2011-10-01 2011-10-01 false What problems cause a drug test to be cancelled unless they are corrected? 40.203 Section 40.203 Transportation Office of the Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests §...
NASA Technical Reports Server (NTRS)
Willsky, A. S.; Deyst, J. J.; Crawford, B. S.
1975-01-01
The paper describes two self-test procedures applied to the problem of estimating the biases in accelerometers and gyroscopes on an inertial platform. The first technique is the weighted sum-squared residual (WSSR) test, with which accelerometer bias jumps are easily isolated, but gyro bias jumps are difficult to isolate. The WSSR method does not take full advantage of the knowledge of system dynamics. The other technique is a multiple hypothesis method developed by Buxbaum and Haddad (1969). It has the advantage of directly providing jump isolation information, but suffers from computational problems. It might be possible to use the WSSR to detect state jumps and then switch to the BH system for jump isolation and estimate compensation.
Interpretation of Magnetic Phase Anomalies over 2D Tabular Bodies
NASA Astrophysics Data System (ADS)
Subrahmanyam, M.
2016-05-01
In this study, the phase angle (inverse tangent of the ratio of the horizontal to vertical gradients of magnetic anomalies) profile over two-dimensional tabular bodies has been subjected to detailed analysis for determining the source parameters. Distances between certain characteristic positions on this phase curve are related to the parameters of two-dimensional tabular magnetic sources. In this paper, I have derived the mathematical expressions for these relations. It has been demonstrated here that for locating the origin of the 2D tabular source, knowledge of the type of the model (contact, sheet, dike, and fault) is not necessary. A procedure is evolved to determine the location, depth, width and magnetization angle of the 2D sources from the mathematical expressions. The method is tested on real field data. The effect of overlapping bodies is also discussed with two synthetic examples. The interpretation technique is developed for contact, sheet, dike and inclined-fault bodies.
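The phase-angle quantity defined in the abstract (inverse tangent of the ratio of horizontal to vertical anomaly gradients) can be reproduced numerically for a toy source. This sketch uses an assumed symmetric line-source anomaly so both gradients can be formed by finite differences; it illustrates only the quantity itself, not the paper's characteristic-point analysis.

```python
import numpy as np

def anomaly(x, z, x0=0.0, depth=5.0):
    """Toy symmetric anomaly of a 2D line source at (x0, depth);
    z increases downward toward the source (assumed model, for
    illustration only)."""
    return (depth - z) / ((x - x0) ** 2 + (depth - z) ** 2)

x = np.linspace(-50.0, 50.0, 2001)
dx = x[1] - x[0]
dz = 1e-3

# horizontal gradient along the profile at observation level z = 0
h = np.gradient(anomaly(x, 0.0), dx)
# vertical gradient via a central difference just above/below z = 0
v = (anomaly(x, dz) - anomaly(x, -dz)) / (2.0 * dz)

# phase angle profile in degrees
phase = np.degrees(np.arctan2(h, v))
```

For this symmetric source the phase profile is antisymmetric about the source location, which is the kind of characteristic behavior the interpretation method exploits.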
FPCAS2D user's guide, version 1.0
NASA Astrophysics Data System (ADS)
Bakhle, Milind A.
1994-12-01
The FPCAS2D computer code has been developed for aeroelastic stability analysis of bladed disks such as those in fans, compressors, turbines, propellers, or propfans. The aerodynamic analysis used in this code is based on the unsteady two-dimensional full potential equation which is solved for a cascade of blades. The structural analysis is based on a two degree-of-freedom rigid typical section model for each blade. Detailed explanations of the aerodynamic analysis, the numerical algorithms, and the aeroelastic analysis are not given in this report. This guide can be used to assist in the preparation of the input data required by the FPCAS2D code. A complete description of the input data is provided in this report. In addition, four test cases, including inputs and outputs, are provided.
Recent update of the RPLUS2D/3D codes
NASA Technical Reports Server (NTRS)
Tsai, Y.-L. Peter
1991-01-01
The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.
Cryogenic cavitating flow in 2D laval nozzle
NASA Astrophysics Data System (ADS)
Tani, Naoki; Nagashima, Toshio
2003-05-01
Cavitation is one of the troublesome problems in rocket turbopumps, and since most high-efficiency rocket propellants are cryogenic fluids, the so-called “thermodynamic effect” is more evident than in water. In the present study, numerical and experimental studies of liquid nitrogen cavitation in a 2D Laval nozzle were carried out so that the influence of the thermodynamic effect could be examined. It was revealed that temperature and cavitation have a strong inter-relationship in thermo-sensitive cryogenic fluids.
The transonic Reynolds number problem. [limitations of transonic aerodynamic test facilities
NASA Technical Reports Server (NTRS)
Jones, J. L.
1977-01-01
Problems in modeling the complex interacting flow fields in the transonic speed regime are reviewed. The limitations of wind tunnel test capabilities are identified, and options for resolving the deficiency are examined. The evolution of the National Transonic Facility, and the various needs for research investigations to be done there are discussed. The relative priorities that should be given within and across subdisciplines for guidance in planning for the most effective use of the facility are considered.
TOPAZ2D heat transfer code users manual and thermal property data base
NASA Astrophysics Data System (ADS)
Shapiro, A. B.; Edwards, A. L.
1990-05-01
TOPAZ2D is a two-dimensional implicit finite element computer code for heat transfer analysis. This user's manual provides information on the structure of a TOPAZ2D input file. Also included is a material thermal property data base. This manual is supplemented by the TOPAZ2D Theoretical Manual and the TOPAZ2D Verification Manual. TOPAZ2D has been implemented on the CRAY, SUN, and VAX computers. TOPAZ2D can be used to solve for the steady-state or transient temperature field on two-dimensional planar or axisymmetric geometries. Material properties may be temperature dependent and either isotropic or orthotropic. A variety of time- and temperature-dependent boundary conditions can be specified, including temperature, flux, convection, and radiation. Time- or temperature-dependent internal heat generation can be defined locally by element or globally by material. TOPAZ2D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in the material surrounding the enclosure. Additional features include thermally controlled reactive chemical mixtures, thermal contact resistance across an interface, bulk fluid flow, phase change, and energy balances. Thermal stresses can be calculated using the solid mechanics code NIKE2D, which reads the temperature state data calculated by TOPAZ2D. A three-dimensional version of the code, TOPAZ3D, is available.
GBL-2D Version 1.0: a 2D geometry boolean library.
McBride, Cory L. (Elemental Technologies, American Fort, UT); Schmidt, Rodney Cannon; Yarberry, Victor R.; Meyers, Ray J.
2006-11-01
This report describes version 1.0 of GBL-2D, a geometric Boolean library for 2D objects. The library is written in C++ and consists of a set of classes and routines. The classes primarily represent geometric data and relationships. Classes are provided for 2D points, lines, arcs, edge uses, loops, surfaces and mask sets. The routines contain algorithms for geometric Boolean operations and utility functions. Routines are provided that incorporate the Boolean operations: Union(OR), XOR, Intersection and Difference. A variety of additional analytical geometry routines and routines for importing and exporting the data in various file formats are also provided. The GBL-2D library was originally developed as a geometric modeling engine for use with a separate software tool, called SummitView [1], that manipulates the 2D mask sets created by designers of Micro-Electro-Mechanical Systems (MEMS). However, many other practical applications for this type of software can be envisioned because the need to perform 2D Boolean operations can arise in many contexts.
McDermott, K B; Roediger, H L
1996-03-01
Three experiments examined whether a conceptual implicit memory test (specifically, category instance generation) would exhibit repetition effects similar to those found in free recall. The transfer appropriate processing account of dissociations among memory tests led us to predict that the tests would show parallel effects; this prediction was based upon the theory's assumption that conceptual tests will behave similarly as a function of various independent variables. In Experiment 1, conceptual repetition (i.e., following a target word [e.g., puzzles] with an associate [e.g., jigsaw]) did not enhance priming on the instance generation test relative to the condition of simply presenting the target word once, although this manipulation did affect free recall. In Experiment 2, conceptual repetition was achieved by following a picture with its corresponding word (or vice versa). In this case, there was an effect of conceptual repetition on free recall but no reliable effect on category instance generation or category cued recall. In addition, we obtained a picture superiority effect in free recall but not in category instance generation. In the third experiment, when the same study sequence was used as in Experiment 1, but with instructions that encouraged relational processing, priming on the category instance generation task was enhanced by conceptual repetition. Results demonstrate that conceptual memory tests can be dissociated and present problems for Roediger's (1990) transfer appropriate processing account of dissociations between explicit and implicit tests. PMID:8653098
Residual tests in the analysis of planned contrasts: Problems and solutions.
Richter, Michael
2016-03-01
It is current practice that researchers testing specific, theory-driven predictions do not only use a planned contrast to model and test their hypotheses, but also test the residual variance (the C+R approach). This analysis strategy relies on work by Abelson and Prentice (1997), who suggested that the result of a planned contrast needs to be interpreted in light of the variance that is left after the variance explained by the contrast has been subtracted from the variance explained by the factors of the statistical model. Unfortunately, the C + R approach leads to 6 fundamental problems. In particular, the C + R approach (a) relies on the interpretation of a nonsignificant result as evidence for no effect, (b) neglects the impact of sample size, (c) creates problems for a priori power analyses, (d) may lead to significant effects that lack a meaningful interpretation, (e) may give rise to misinterpretations, and (f) is inconsistent with the interpretation of other statistical analyses. Given these flaws, researchers should refrain from testing the residual variance when conducting planned contrasts. Single contrasts, Bayes factors, and likelihood ratios provide reasonable alternatives that are less problematic.
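A single planned contrast of the kind the paper recommends over the C+R approach can be computed directly. This is a generic sketch with made-up data; `planned_contrast` is a hypothetical helper, not code from the paper.

```python
import math

def planned_contrast(groups, weights):
    """t statistic for the contrast psi = sum(w_i * mean_i), with
    weights summing to zero, using the pooled within-group error
    variance (MSE) and its error degrees of freedom."""
    assert abs(sum(weights)) < 1e-12, "contrast weights must sum to 0"
    ns = [len(g) for g in groups]
    means = [sum(g) / len(g) for g in groups]
    # pooled within-group sum of squares and error df
    sse = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df = sum(ns) - len(groups)
    mse = sse / df
    psi = sum(w * m for w, m in zip(weights, means))
    se = math.sqrt(mse * sum(w * w / n for w, n in zip(weights, ns)))
    return psi / se, df

# linear-trend contrast over three ordered conditions (fabricated data)
t, df = planned_contrast(
    [[1.0, 1.2, 0.9], [2.0, 2.1, 1.8], [3.1, 2.9, 3.0]],
    [-1, 0, 1],
)
```

The contrast alone carries the theory-driven prediction; no residual test is involved, which is exactly the single-contrast alternative the paper endorses.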
'SGoFicance Trace': assessing significance in high dimensional testing problems.
de Uña-Alvarez, Jacobo; Carvajal-Rodriguez, Antonio
2010-01-01
Recently, an exact binomial test called SGoF (Sequential Goodness-of-Fit) has been introduced as a new method for handling high dimensional testing problems. SGoF looks for statistical significance when comparing the amount of null hypotheses individually rejected at level γ = 0.05 with the expected amount under the intersection null, and then proceeds to declare a number of effects accordingly. SGoF detects an increasing proportion of true effects with the number of tests, unlike other methods for which the opposite is true. It is worth mentioning that the choice γ = 0.05 is not essential to the SGoF procedure, and more power may be reached at other values of γ depending on the situation. In this paper we enhance the possibilities of SGoF by letting the γ vary on the whole interval (0,1). In this way, we introduce the 'SGoFicance Trace' (from SGoF's significance trace), a graphical complement to SGoF which can help to make decisions in multiple-testing problems. A script has been written for the computation in R of the SGoFicance Trace. This script is available from the web site http://webs.uvigo.es/acraaj/SGoFicance.htm.
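The exact binomial test at the core of SGoF can be sketched in a few lines (illustrative only; `sgof_test` is a hypothetical helper, not the authors' distributed R script).

```python
from math import comb

def sgof_test(p_values, gamma=0.05):
    """Count p-values <= gamma and compare with the n*gamma rejections
    expected under the intersection null, via the exact one-sided
    binomial upper tail P(X >= r) for X ~ Binomial(n, gamma)."""
    n = len(p_values)
    r = sum(p <= gamma for p in p_values)
    tail = sum(comb(n, k) * gamma**k * (1 - gamma) ** (n - k)
               for k in range(r, n + 1))
    return r, tail

# 5 of 20 tests rejected at gamma = 0.05, versus the 1 expected by chance
rejected, p = sgof_test([0.01] * 5 + [0.5] * 15)
```

Sweeping `gamma` over (0, 1) and plotting the resulting tail p-values is the idea behind the SGoFicance Trace described in the abstract.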
A new algorithm for generating highly accurate benchmark solutions to transport test problems
Azmy, Y.Y.
1997-06-01
We present a new algorithm for solving the neutron transport equation in its discrete-variable form. The new algorithm is based on computing the full matrix relating the scalar flux spatial moments in all cells to the fixed neutron source spatial moments, forgoing the need to compute the angular flux spatial moments and thereby eliminating the need for sweeping the spatial mesh in each discrete angular direction. The matrix equation is solved exactly in test cases, producing a solution vector that is free from iteration convergence error and subject only to truncation and roundoff errors. Our algorithm is designed to provide method developers with a quick and simple solution scheme to test their new methods on difficult test problems without the need to develop sophisticated solution techniques, e.g. acceleration, before establishing the worthiness of their innovation. We demonstrate the utility of the new algorithm by applying it to the Arbitrarily High Order Transport Nodal (AHOT-N) method, and using it to solve two of Burre's Suite of Test Problems (BSTP). Our results provide highly accurate benchmark solutions that can be distributed electronically and used to verify the pointwise accuracy of other solution methods and algorithms.
Periodically sheared 2D Yukawa systems
Kovács, Anikó Zsuzsa; Hartmann, Peter; Donkó, Zoltán
2015-10-15
We present non-equilibrium molecular dynamics simulation studies on the dynamic (complex) shear viscosity of a 2D Yukawa system. We have identified a non-monotonic frequency dependence of the viscosity at high frequencies and shear rates, an energy absorption maximum (local resonance) at the Einstein frequency of the system at medium shear rates, an enhanced collective wave activity, when the excitation is near the plateau frequency of the longitudinal wave dispersion, and the emergence of significant configurational anisotropy at small frequencies and high shear rates.
ENERGY LANDSCAPE OF 2D FLUID FOAMS
Y. JIANG; ET AL
2000-04-01
The equilibrium states of 2D non-coarsening fluid foams, which consist of bubbles with fixed areas, correspond to local minima of the total perimeter. (1) The authors find an approximate value of the global minimum, and determine directly from an image how far a foam is from its ground state. (2) For (small) area disorder, small bubbles tend to sort inwards and large bubbles outwards. (3) Topological charges of the same sign repel while charges of opposite sign attract. (4) They discuss boundary conditions and the uniqueness of the pattern for fixed topology.
WFR-2D: an analytical model for PWAS-generated 2D ultrasonic guided wave propagation
NASA Astrophysics Data System (ADS)
Shen, Yanfeng; Giurgiutiu, Victor
2014-03-01
This paper presents WaveFormRevealer 2-D (WFR-2D), an analytical predictive tool for the simulation of 2-D ultrasonic guided wave propagation and interaction with damage. The design of structural health monitoring (SHM) systems and self-aware smart structures requires the exploration of a wide range of parameters to achieve best detection and quantification of certain types of damage. Such need for parameter exploration on sensor dimension, location, guided wave characteristics (mode type, frequency, wavelength, etc.) can be best satisfied with analytical models which are fast and efficient. The analytical model was constructed based on the exact 2-D Lamb wave solution using Bessel and Hankel functions. Damage effects were inserted in the model by considering the damage as a secondary wave source with complex-valued directivity scattering coefficients containing both amplitude and phase information from wave-damage interaction. The analytical procedure was coded with MATLAB, and a predictive simulation tool called WaveFormRevealer 2-D was developed. The wave-damage interaction coefficients (WDICs) were extracted from harmonic analysis of local finite element model (FEM) with artificial non-reflective boundaries (NRB). The WFR-2D analytical simulation results were compared and verified with full scale multiphysics finite element models and experiments with scanning laser vibrometer. First, Lamb wave propagation in a pristine aluminum plate was simulated with WFR-2D, compared with finite element results, and verified by experiments. Then, an inhomogeneity was machined into the plate to represent damage. Analytical modeling was carried out, and verified by finite element simulation and experiments. This paper finishes with conclusions and suggestions for future work.
Microwave Assisted 2D Materials Exfoliation
NASA Astrophysics Data System (ADS)
Wang, Yanbin
Two-dimensional materials have emerged as extremely important materials with applications ranging from energy and environmental science to electronics and biology. Here we report our discovery of a universal, ultrafast, green, solvo-thermal technology for producing excellent-quality, few-layered nanosheets in liquid phase from well-known 2D materials such as hexagonal boron nitride (h-BN), graphite, and MoS2. We start by mixing the uniform bulk-layered material with a common organic solvent that matches its surface energy to reduce the van der Waals attractive interactions between the layers; next, the solutions are heated in a commercial microwave oven to overcome the energy barrier between bulk and few-layer states. We discovered that the minutes-long rapid exfoliation process is highly temperature dependent, which requires precise thermal management to obtain high-quality inks. We hypothesize a possible mechanism of this proposed solvo-thermal process; our theory confirms the basis of this novel technique for exfoliation of high-quality, layered 2D materials by using an as-yet-unknown role of the solvent.
Multienzyme Inkjet Printed 2D Arrays.
Gdor, Efrat; Shemesh, Shay; Magdassi, Shlomo; Mandler, Daniel
2015-08-19
The use of printing to produce 2D arrays is well established, and should be relatively facile to adapt for the purpose of printing biomaterials; however, very few studies have been published using enzyme solutions as inks. Among the printing technologies, inkjet printing is highly suitable for printing biomaterials and specifically enzymes, as it offers many advantages. Formulation of the inkjet inks is relatively simple and can be adjusted to a variety of biomaterials, while providing a nonharmful environment to the enzymes. Here we demonstrate the applicability of inkjet printing for patterning multiple enzymes in a predefined array in a very straightforward, noncontact method. Specifically, various arrays of the enzymes glucose oxidase (GOx), invertase (INV), and horseradish peroxidase (HP) were printed on aminated glass surfaces, followed by immobilization using glutardialdehyde after printing. Scanning electrochemical microscopy (SECM) was used for imaging the printed patterns and to ascertain the enzyme activity. The successful formation of 2D arrays consisting of enzymes was explored as a means of developing the first surface-confined enzyme-based logic gates. Principally, XOR and AND gates, each consisting of two enzymes as the Boolean operators, were assembled, and their operation was studied by SECM. PMID:26214072
Interface adhesion between 2D materials and elastomers measured by buckle delamination
NASA Astrophysics Data System (ADS)
Brennan, Christopher; Lu, Nanshu
2015-03-01
A major application for 2D materials is creating electronic devices, including flexible and wearable devices. These applications require complicated fabrication processes where 2D materials are either mechanically exfoliated or grown via chemical vapor deposition and then transferred to a host substrate. Both processes require intimate knowledge of the interactions between the 2D material and the substrate to allow for a controllable transfer. Although adhesion between 2D materials and stiff substrates such as silicon and copper has been measured by bulge or peeling tests, adhesion between 2D materials and soft polymer substrates is hard to measure by conventional methods. Here we propose a simple way of measuring the adhesion between 2D materials and soft, stretchable elastomers using mature continuum mechanics equations. By creating buckle delamination in 2D atomic layers and measuring the buckle profile using an atomic force microscope, we can readily extract the 2D-elastomer adhesion energy. Here we look at the adhesion of MoS2 and graphene to PDMS. The measured adhesion values are found to be insensitive to the applied strains in the substrate and are one order of magnitude smaller than 2D-silicon oxide adhesion, which is mainly attributed to differences in substrate surface roughness.
Targeting multiple types of tumors using NKG2D-coated iron oxide nanoparticles.
Wu, Ming-Ru; Cook, W James; Zhang, Tong; Sentman, Charles L
2014-11-28
Iron oxide nanoparticles (IONPs) hold great potential for cancer therapy. Actively targeting IONPs to tumor cells can further increase therapeutic efficacy and decrease off-target side effects. To target tumor cells, a natural killer (NK) cell activating receptor, NKG2D, was utilized to develop pan-tumor targeting IONPs. NKG2D ligands are expressed on many tumor types, and its ligands are not found on most normal tissues under steady state conditions. The data showed that mouse and human fragment crystallizable (Fc)-fusion NKG2D (Fc-NKG2D) coated IONPs (NKG2D/NPs) can target multiple NKG2D ligand positive tumor types in vitro in a dose dependent manner by magnetic cell sorting. The tumor targeting effect was robust even under a very low tumor cell to normal cell ratio, and targeting efficiency correlated with NKG2D ligand expression level on tumor cells. Furthermore, the magnetic separation platform utilized to test NKG2D/NP specificity has the potential to be developed into high throughput screening strategies to identify ideal fusion proteins or antibodies for targeting IONPs. In conclusion, NKG2D/NPs can be used to target multiple tumor types, and the magnetic separation platform can facilitate the proof-of-concept phase of tumor targeting IONP development.
NASA Technical Reports Server (NTRS)
Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.
1991-01-01
The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight path control problems and have attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions, two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and this controller is tested on a six degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validate this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data that closely match simulation results. Flight-test data are also presented.
ERIC Educational Resources Information Center
Bakker, Martin P.; Ormel, Johan; Verhulst, Frank C.; Oldehinkel, Albertine J.
2012-01-01
This study tested whether childhood family instability is associated with mental health problems during adolescence through continued family instability and/or through a preadolescent onset of mental health problems. This test used data from a prospective population cohort of 2,230 Dutch adolescents ("M" age = 11.09, "SD" = 0.56 at the initial…
NASA Astrophysics Data System (ADS)
Lacava, C.; Carrol, L.; Bozzola, A.; Marchetti, R.; Minzioni, P.; Cristiani, I.; Fournier, M.; Bernabe, S.; Gerace, D.; Andreani, L. C.
2016-03-01
We present the characterization of Silicon-on-insulator (SOI) photonic-crystal based 2D grating-couplers (2D-GCs) fabricated by CEA-Leti in the frame of the FP7 Fabulous project, which is dedicated to the realization of devices and systems for low-cost and high-performance passive optical networks. On the analyzed samples different test structures are present, including a 2D-GC connected to another 2D-GC by different waveguides (in a Mach-Zehnder-like configuration), and a 2D-GC connected to two separate 2D-GCs, so as to allow a complete assessment of different parameters. Measurements were carried out using a tunable laser source operating in the extended telecom bandwidth and a fiber-based polarization controlling system at the input of the device-under-test. The measured data yielded an overall fiber-to-fiber loss of 7.5 dB for the structure composed of an input 2D-GC connected to two identical 2D-GCs. This value was obtained at the peak wavelength of the grating, and the 3-dB bandwidth of the 2D-GC was assessed to be 43 nm. Assuming that the waveguide losses are negligible, so as to make a worst-case analysis, the coupling efficiency of a single 2D-GC is found to be -3.75 dB, constituting, to the best of our knowledge, the lowest value ever reported for a fully CMOS-compatible 2D-GC. It is worth noting that both values are in good agreement with those expected from the numerical simulations performed using full 3D analysis in Lumerical FDTD Solutions.
2D quantum gravity from quantum entanglement.
Gliozzi, F
2011-01-21
In quantum systems with many degrees of freedom the replica method is a useful tool to study the entanglement of arbitrary spatial regions. We apply it in a way that allows these regions to backreact. As a consequence, they become dynamical subsystems whose position, form, and extension are determined by their interaction with the whole system. We analyze, in particular, quantum spin chains described at criticality by a conformal field theory. Its coupling to the Gibbs ensemble of all possible subsystems is relevant and drives the system into a new fixed point, which is argued to be that of 2D quantum gravity coupled to this system. Numerical experiments on the critical Ising model show that the new critical exponents agree with those predicted by the formula of Knizhnik, Polyakov, and Zamolodchikov.
Graphene suspensions for 2D printing
NASA Astrophysics Data System (ADS)
Soots, R. A.; Yakimchuk, E. A.; Nebogatikova, N. A.; Kotin, I. A.; Antonova, I. V.
2016-04-01
It is shown that, by processing a graphite suspension in ethanol or water by ultrasound and centrifuging, it is possible to obtain particles with thicknesses within 1-6 nm and, in the most interesting cases, 1-1.5 nm. Analogous treatment of a graphite suspension in an organic solvent eventually yields thicker particles (up to 6-10 nm thick) even upon long-term treatment. Using the proposed ink based on graphene and aqueous ethanol with ethylcellulose and terpineol additives for 2D printing, thin (~5 nm thick) films with a sheet resistance after annealing of ~30 MΩ/□ were obtained. With the ink based on an aqueous graphene suspension, the sheet resistance was ~5-12 kΩ/□ for 6- to 15-nm-thick layers with a carrier mobility of ~30-50 cm2/(V s).
Metrology for graphene and 2D materials
NASA Astrophysics Data System (ADS)
Pollard, Andrew J.
2016-09-01
The application of graphene, a one atom-thick honeycomb lattice of carbon atoms with superlative properties, such as electrical conductivity, thermal conductivity and strength, has already shown that it can be used to benefit metrology itself as a new quantum standard for resistance. However, there are many application areas where graphene and other 2D materials, such as molybdenum disulphide (MoS2) and hexagonal boron nitride (h-BN), may be disruptive, areas such as flexible electronics, nanocomposites, sensing and energy storage. Applying metrology to the area of graphene is now critical to enable the new, emerging global graphene commercial world and bridge the gap between academia and industry. Measurement capabilities and expertise in a wide range of scientific areas are required to address this challenge. The combined and complementary approach of varied characterisation methods for structural, chemical, electrical and other properties, will allow the real-world issues of commercialising graphene and other 2D materials to be addressed. Here, examples of metrology challenges that have been overcome through a multi-technique or new approach are discussed. Firstly, the structural characterisation of defects in both graphene and MoS2 via Raman spectroscopy is described, and how nanoscale mapping of vacancy defects in graphene is also possible using tip-enhanced Raman spectroscopy (TERS). Furthermore, the chemical characterisation and removal of polymer residue on chemical vapour deposition (CVD) grown graphene via secondary ion mass spectrometry (SIMS) is detailed, as well as the chemical characterisation of iron films used to grow large domain single-layer h-BN through CVD growth, revealing how contamination of the substrate itself plays a role in the resulting h-BN layer. In addition, the role of international standardisation in this area is described, outlining the current work ongoing in both the International Organization for Standardization (ISO) and the
ERIC Educational Resources Information Center
Almquist, Alan J.; Cronin, John E.
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This guide focuses on the origin of man. Part I, The Biochemical Evidence for Human Evolution, contains four sections: (1) Introduction; (2) Macromolecular Data; (3)…
Quantum damped oscillator II: Bateman's Hamiltonian vs. 2D parabolic potential barrier
Chruscinski, Dariusz (E-mail: darch@phys.uni.torun.pl)
2006-04-15
We show that the quantum Bateman system which arises in the quantization of a damped harmonic oscillator is equivalent to a quantum problem with a 2D parabolic potential barrier, also known as the 2D inverted isotropic oscillator. It turns out that this system displays a family of complex eigenvalues corresponding to the poles of the analytical continuation of the resolvent operator to the complex energy plane. It is shown that this representation is more suitable than the hyperbolic one used recently by Blasone and Jizba.
NASA Astrophysics Data System (ADS)
Torgoev, Almaz; Havenith, Hans-Balder
2016-07-01
A 2D elasto-dynamic modelling of the pure topographic seismic response is performed for six models with a total length of around 23.0 km. These models are reconstructed from the real topographic settings of the landslide-prone slopes situated in the Mailuu-Suu River Valley, Southern Kyrgyzstan. The main studied parameter is the Arias Intensity (Ia, m/sec), which is applied in the GIS-based Newmark method to regionally map the seismically-induced landslide susceptibility. This method maps the Ia values via empirical attenuation laws, and our studies investigate the potential to include topographic input into them. Numerical studies analyse several signals with varying shape and changing central frequency values. All tests demonstrate that the spectral amplification patterns directly affect the amplification of the Ia values. These results allow us to link the 2D distribution of the topographically amplified Ia values with a parameter called smoothed curvature. The amplification values for the low-frequency signals are better correlated with the curvature smoothed over a larger spatial extent, while those for the high-frequency signals are more closely linked to the curvature with a smaller smoothing extent. The best predictions are provided by the curvature smoothed over the extent calculated according to Geli's law. Sample equations predicting the Ia amplification from the smoothed curvature are presented for sinusoid-shape input signals. These laws cannot be directly implemented in the regional Newmark method, as 3D amplification of the Ia values involves additional complexities which are not studied here. Nevertheless, our 2D results provide a theoretical framework which can potentially be extended to the 3D domain and, therefore, represent a robust basis for these future research targets.
Pre-test CFD Calculations for a Bypass Flow Standard Problem
Rich Johnson
2011-11-01
The bypass flow in a prismatic high temperature gas-cooled reactor (HTGR) is the flow that occurs between adjacent graphite blocks. Gaps exist between blocks due to variances in their manufacture and installation and because of the expansion and shrinkage of the blocks from heating and irradiation. Although the temperature of fuel compacts and graphite is sensitive to the presence of bypass flow, there is great uncertainty in the level and effects of the bypass flow. The Next Generation Nuclear Plant (NGNP) program at the Idaho National Laboratory has undertaken to produce experimental data of isothermal bypass flow between three adjacent graphite blocks. These data are intended to provide validation for computational fluid dynamic (CFD) analyses of the bypass flow. Such validation data sets are called Standard Problems in the nuclear safety analysis field. Details of the experimental apparatus as well as several pre-test calculations of the bypass flow are provided. Pre-test calculations are useful in examining the nature of the flow and to see if there are any problems associated with the flow and its measurement. The apparatus is designed to be able to provide three different gap widths in the vertical direction (the direction of the normal coolant flow) and two gap widths in the horizontal direction. It is expected that the vertical bypass flow will range from laminar to transitional to turbulent flow for the different gap widths that will be available.
Gurev, Viatcheslav; Arens, Sander; Augustin, Christoph M.; Baron, Lukas; Blake, Robert; Bradley, Chris; Castro, Sebastian; Crozier, Andrew; Favino, Marco; Fastl, Thomas E.; Fritz, Thomas; Gao, Hao; Gizzi, Alessio; Griffith, Boyce E.; Hurtado, Daniel E.; Krause, Rolf; Luo, Xiaoyu; Nash, Martyn P.; Pezzuto, Simone; Plank, Gernot; Rossi, Simone; Ruprecht, Daniel; Seemann, Gunnar; Smith, Nicolas P.; Sundnes, Joakim; Rice, J. Jeremy; Trayanova, Natalia; Wang, Dafang; Jenny Wang, Zhinuo; Niederer, Steven A.
2015-01-01
Models of cardiac mechanics are increasingly used to investigate cardiac physiology. These models are characterized by a high level of complexity, including the particular anisotropic material properties of biological tissue and the actively contracting material. A large number of independent simulation codes have been developed, but a consistent way of verifying the accuracy and replicability of simulations is lacking. To aid in the verification of current and future cardiac mechanics solvers, this study provides three benchmark problems for cardiac mechanics. These benchmark problems test the ability to accurately simulate pressure-type forces that depend on the deformed object's geometry, anisotropic and spatially varying material properties similar to those seen in the left ventricle, and active contractile forces. The benchmark was solved by 11 different groups to generate consensus solutions, with typical differences in higher-resolution solutions at approximately 0.5%, and consistent results between linear, quadratic and cubic finite elements as well as different approaches to simulating incompressible materials. Online tools and solutions are made available to allow these tests to be effectively used in verification of future cardiac mechanics software. PMID:26807042
Segmentation of 2D gel electrophoresis spots using a Markov random field
NASA Astrophysics Data System (ADS)
Hoeflich, Christopher S.; Corso, Jason J.
2009-02-01
We propose a statistical model-based approach for the segmentation of fragments of DNA as a first step in the automation of the primarily manual process of comparing two or more images resulting from the Restriction Landmark Genomic Scanning (RLGS) method. These 2D gel electrophoresis images are the product of the separation of DNA into fragments that appear as spots on X-ray films. The goal is to find instances where a spot appears in one image and not in another since a missing spot can be correlated with a region of DNA that has been affected by a disease such as cancer. The entire comparison process is typically done manually, which is tedious and very error prone. We pose the problem as the labeling of each image pixel as either a spot or non-spot and use a Markov Random Field (MRF) model and simulated annealing for inference. Neighboring spot labels are then connected to form spot regions. The MRF based model was tested on actual 2D gel electrophoresis images.
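The spot/non-spot labeling described above can be sketched in a few lines. The paper uses simulated annealing for inference; this toy uses deterministic ICM (iterated conditional modes) sweeps instead for brevity, and all parameter values (class means, smoothness weight beta) are illustrative assumptions:

```python
import numpy as np

# Toy MRF labeling in the spirit of the abstract: each pixel gets label 0
# (non-spot) or 1 (spot), minimizing a data term plus a 4-neighbor
# smoothness term. ICM replaces the paper's simulated annealing here.
def icm_segment(img, mu=(0.0, 1.0), beta=0.5, sweeps=5):
    labels = (img > 0.5).astype(int)          # initial threshold labeling
    h, w = img.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                best, best_e = labels[i, j], np.inf
                for lab in (0, 1):
                    e = (img[i, j] - mu[lab]) ** 2          # data term
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e += beta * (lab != labels[ni, nj])  # smoothness
                    if e < best_e:
                        best, best_e = lab, e
                labels[i, j] = best
    return labels

# Synthetic "gel image": one bright spot on a noisy background.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.1, (20, 20))
img[8:12, 8:12] += 1.0
seg = icm_segment(img)
print(seg[10, 10], seg[0, 0])   # 1 0
```

Connected spot labels would then be grouped into regions, as the abstract describes.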
2D to 3D conversion implemented in different hardware
NASA Astrophysics Data System (ADS)
Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli
2015-02-01
Conversion of available 2D data for release as 3D content is a hot topic for providers and for the success of 3D applications in general. It relies entirely on virtual view synthesis of a second view given the original 2D video. Disparity map (DM) estimation is a central task in 3D generation, but rendering novel images precisely remains a very difficult problem. There exist different approaches to DM reconstruction, among them manual and semiautomatic methods that can produce high-quality DMs but are time consuming and computationally expensive. In this paper, several hardware implementations of designed frameworks for automatic 3D color video generation based on a 2D real video sequence are proposed. The novel framework includes simultaneous processing of stereo pairs using the following blocks: CIE L*a*b* color space conversion, stereo matching via a pyramidal scheme, color segmentation by k-means on the a*b* color plane, DM estimation using stereo matching between left and right images (or neighboring frames in a video), adaptive post-filtering, and finally, anaglyph 3D scene generation. The novel technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink module on a PC running Windows 7, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode. The processing times and the mean Structural Similarity Index Measure (SSIM) and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
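The final anaglyph-composition step of such a pipeline is easy to illustrate. This is a generic red-cyan composition, not the authors' implementation; the RGB array layout is an assumption:

```python
import numpy as np

# Minimal red-cyan anaglyph from a stereo pair: red channel from the left
# view, green/blue (cyan) from the right view. Arrays are HxWx3 RGB in [0,1].
def anaglyph(left, right):
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]     # red from left view
    out[..., 1:] = right[..., 1:]  # green and blue from right view
    return out

left = np.zeros((4, 4, 3)); left[..., 0] = 1.0    # pure red left image
right = np.zeros((4, 4, 3)); right[..., 2] = 1.0  # pure blue right image
a = anaglyph(left, right)
print(a[0, 0])   # [1. 0. 1.]
```

Viewed through red-cyan glasses, each eye then receives (approximately) its own view.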
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false What problems always cause a drug test to be... TESTING PROGRAMS Problems in Drug Tests § 40.201 What problems always cause a drug test to be cancelled... laboratory reports that any of the following problems have occurred. You must inform the DER that the...
Modeling of Gap Closure in Uranium-Zirconium Alloy Metal Fuel - A Test Problem
Simunovic, Srdjan; Ott, Larry J; Gorti, Sarma B; Nukala, Phani K; Radhakrishnan, Balasubramaniam; Turner, John A
2009-10-01
Uranium based binary and ternary alloy fuel is a possible candidate for advanced fast spectrum reactors with long refueling intervals and reduced linear heat rating [1]. An important metal fuel issue that can impact the fuel performance is the fuel-cladding gap closure, and fuel axial growth. The dimensional change in the fuel during irradiation is due to a superposition of the thermal expansion of the fuel due to heating, volumetric changes due to possible phase transformations that occur during heating, and swelling due to fission gas retention. The volumetric changes due to phase transformation depend both on the thermodynamics of the alloy system and on the kinetics of the phase change reactions that occur at the operating temperature. The nucleation and growth of fission gas bubbles that contributes to fuel swelling is also influenced by the local fuel chemistry and the microstructure. Once the fuel expands and contacts the clad, expansion in the radial direction is constrained by the clad, and the overall deformation of the fuel-clad assembly depends upon the dynamics of the contact problem. The neutronics portion of the problem is also inherently coupled with microstructural evolution in terms of constituent redistribution and phase transformation. Because of the complex nature of the problem, a series of test problems have been defined with increasing complexity, with the objective of capturing the fuel-clad interaction in complex fuels subjected to a wide range of irradiation and temperature conditions.
ERIC Educational Resources Information Center
Douglass, James B.
A general process for testing the feasibility of applying alternative mathematical or statistical models to the solution of a practical problem is presented and flowcharted. The system is used to develop a plan to compare models for test equating. The five alternative models to be considered for equating are: (1) anchor test equating using…
Computing 2D constrained delaunay triangulation using the GPU.
Qi, Meng; Cao, Thanh-Tung; Tan, Tiow-Seng
2013-05-01
We propose the first graphics processing unit (GPU) solution to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight line graph (PSLG) consisting of points and edges. There are many existing CPU algorithms to solve the CDT problem in computational geometry, yet there has been no prior approach to solve this problem efficiently using the parallel computing power of the GPU. For the special case of the CDT problem where the PSLG consists of just points, which is simply the normal Delaunay triangulation (DT) problem, a hybrid approach using the GPU together with the CPU to partially speed up the computation has already been presented in the literature. Our work, on the other hand, accelerates the entire computation on the GPU. Our implementation using the CUDA programming model on NVIDIA GPUs is numerically robust, and runs up to an order of magnitude faster than the best sequential implementations on the CPU. This result is reflected in our experiment with both randomly generated PSLGs and real-world GIS data having millions of points and edges.
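For the unconstrained special case mentioned above (the plain DT of a point set), a CPU reference result is a single call in SciPy. Note this handles points only; it does not enforce the constraint edges of a PSLG, which is the harder part the paper's GPU algorithm addresses:

```python
import numpy as np
from scipy.spatial import Delaunay

# CPU Delaunay triangulation of four points forming a convex quadrilateral;
# the slight perturbation of the last point avoids a degenerate cocircular case.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.2, 1.1]])
tri = Delaunay(pts)
print(len(tri.simplices))   # 2
```

A convex quadrilateral always triangulates into exactly two triangles, which makes this a quick sanity check for any DT implementation.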
2D/3D image (facial) comparison using camera matching.
Goos, Mirelle I M; Alberink, Ivo B; Ruifrok, Arnout C C
2006-11-10
A problem in forensic facial comparison of images of perpetrators and suspects is that distances between fixed anatomical points in the face, which form a good starting point for objective, anthropometric comparison, vary strongly according to the position and orientation of the camera. In case of a cooperating suspect, a 3D image may be taken using, e.g., a laser scanning device. By projecting the 3D image onto a 2D image with the suspect's head in the same pose as that of the perpetrator, using the same focal length and pixel aspect ratio, numerical comparison of (ratios of) distances between fixed points becomes feasible. An experiment was performed in which, starting from two 3D scans and one 2D image of two colleagues, male and female, and using seven fixed anatomical locations in the face, comparisons were made for the matching and non-matching case. Using this method, the non-matching pair cannot be distinguished from the matching pair of faces. Facial expression and resolution of images were all more or less optimal, and the results of the study are not encouraging for the use of anthropometric arguments in the identification process. More research needs to be done, though, on larger sets of facial comparisons. PMID:16337353
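The geometric core of the camera-matching step — projecting 3D landmarks to 2D with a given focal length and pixel aspect ratio — is a pinhole model. The focal length and landmark coordinates below are illustrative, not the study's data:

```python
import numpy as np

# Pinhole projection of 3D facial landmarks (camera coordinates, z > 0)
# to 2D pixel coordinates, given focal length f and pixel aspect ratio.
def project(points3d, f=1000.0, aspect=1.0):
    x, y, z = points3d.T
    return np.column_stack((f * x / z, f * aspect * y / z))

# Two hypothetical landmarks 2 m from the camera.
landmarks = np.array([[0.0, 0.0, 2.0], [0.1, 0.05, 2.0]])
p = project(landmarks)
print(p)
```

Because projected distances scale with f/z, ratios of inter-landmark distances only become comparable across images once pose, focal length, and aspect ratio are matched, which is the point the abstract makes.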
2D Gridded Surface Data Value-Added Product
Tang, Q; Xie, S
2015-08-30
This report describes the Atmospheric Radiation Measurement (ARM) Best Estimate (ARMBE) 2-dimensional (2D) gridded surface data (ARMBE2DGRID) value-added product. Spatial variability is critically important to many scientific studies, especially those that involve processes of great spatial variations at high temporal frequency (e.g., precipitation, clouds, radiation, etc.). High-density ARM sites deployed at the Southern Great Plains (SGP) allow us to observe the spatial patterns of variables of scientific interests. The upcoming megasite at SGP with its enhanced spatial density will facilitate the studies at even finer scales. Currently, however, data are reported only at individual site locations at different time resolutions for different datastreams. It is difficult for users to locate all the data they need and requires extra effort to synchronize the data. To address these problems, the ARMBE2DGRID value-added product merges key surface measurements at the ARM SGP sites and interpolates the data to a regular 2D grid to facilitate the data application.
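The central interpolation step — scattered per-site surface measurements onto a regular 2D grid — looks roughly like this in SciPy. The site coordinates and values are invented, and the actual ARMBE2DGRID procedure may differ:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical site locations (normalized coordinates) and temperature readings.
sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
temps = np.array([10.0, 12.0, 14.0, 16.0])

# Interpolate onto a regular 11x11 grid covering the sites.
gx, gy = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
grid = griddata(sites, temps, (gx, gy), method="linear")
print(grid[0, 0], grid[-1, -1])   # 10.0 16.0
```

Since these four values happen to lie on a plane, linear interpolation reproduces them exactly; real site data would of course be non-planar and time-synchronized first.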
Modelling RF sources using 2-D PIC codes
Eppley, K.R.
1993-03-01
In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.
TOPAZ2D heat transfer code users manual and thermal property data base
Shapiro, A.B.; Edwards, A.L.
1990-05-01
TOPAZ2D is a two dimensional implicit finite element computer code for heat transfer analysis. This user's manual provides information on the structure of a TOPAZ2D input file. Also included is a material thermal property data base. This manual is supplemented with The TOPAZ2D Theoretical Manual and the TOPAZ2D Verification Manual. TOPAZ2D has been implemented on the CRAY, SUN, and VAX computers. TOPAZ2D can be used to solve for the steady state or transient temperature field on two dimensional planar or axisymmetric geometries. Material properties may be temperature dependent and either isotropic or orthotropic. A variety of time and temperature dependent boundary conditions can be specified including temperature, flux, convection, and radiation. Time or temperature dependent internal heat generation can be defined locally by element or globally by material. TOPAZ2D can solve problems of diffuse and specular band radiation in an enclosure coupled with conduction in material surrounding the enclosure. Additional features include thermally controlled reactive chemical mixtures, thermal contact resistance across an interface, bulk fluid flow, phase change, and energy balances. Thermal stresses can be calculated using the solid mechanics code NIKE2D which reads the temperature state data calculated by TOPAZ2D. A three dimensional version of the code, TOPAZ3D, is available. The material thermal property data base, Chapter 4, included in this manual was originally published in 1969 by Art Edwards for use with his TRUMP finite difference heat transfer code. The format of the data has been altered to be compatible with TOPAZ2D. Bob Bailey is responsible for adding the high explosive thermal property data.
Impact of CYP2D6 polymorphisms on clinical efficacy and tolerability of metoprolol tartrate.
Hamadeh, I S; Langaee, T Y; Dwivedi, R; Garcia, S; Burkley, B M; Skaar, T C; Chapman, A B; Gums, J G; Turner, S T; Gong, Y; Cooper-DeHoff, R M; Johnson, J A
2014-08-01
Metoprolol is a selective β-1 adrenergic receptor blocker that undergoes extensive metabolism by the polymorphic enzyme cytochrome P450 2D6 (CYP2D6). Our objective was to investigate the influence of CYP2D6 polymorphisms on the efficacy and tolerability of metoprolol tartrate. Two hundred and eighty-one participants with uncomplicated hypertension received 50 mg of metoprolol twice daily followed by response-guided titration to 100 mg twice daily. Phenotypes were assigned based on results of CYP2D6 genotyping and copy number variation assays. Clinical response to metoprolol and adverse effect rates were analyzed in relation to CYP2D6 phenotypes using appropriate statistical tests. Heart rate response differed significantly by CYP2D6 phenotype (P < 0.0001), with poor and intermediate metabolizers showing greater reduction. However, blood pressure response and adverse effect rates were not significantly different by CYP2D6 phenotype. Other than a significant difference in heart rate response, CYP2D6 polymorphisms were not determinants of variability in metoprolol response or tolerability.
NASA Astrophysics Data System (ADS)
Cheng, Chingyun; Kangara, Jayampathi; Arakelyan, Ilya; Thomas, John
2016-05-01
We tune the dimensionality of a strongly interacting degenerate ^6Li Fermi gas from 2D to quasi-2D, by adjusting the radial confinement of pancake-shaped clouds to control the radial chemical potential. In the 2D regime with weak radial confinement, the measured pair binding energies are in agreement with 2D-BCS mean field theory, which predicts dimer pairing energies in the many-body regime. In the quasi-2D regime obtained with increased radial confinement, the measured pairing energy deviates significantly from 2D-BCS theory. In contrast to the pairing energy, the measured radii of the cloud profiles are not fit by 2D-BCS theory in either the 2D or quasi-2D regimes, but are fit in both regimes by a beyond-mean-field polaron model of the free energy. Supported by DOE, ARO, NSF, and AFOSR.
A HUPO test sample study reveals common problems in mass spectrometry-based proteomics
Bell, Alexander W.; Deutsch, Eric W.; Au, Catherine E.; Kearney, Robert E.; Beavis, Ron; Sechi, Salvatore; Nilsson, Tommy; Bergeron, John J.M.
2009-01-01
We carried out a test sample study to try to identify errors leading to irreproducibility, including incompleteness of peptide sampling, in LC-MS-based proteomics. We distributed a test sample consisting of an equimolar mix of 20 highly purified recombinant human proteins, to 27 laboratories for identification. Each protein contained one or more unique tryptic peptides of 1250 Da to also test for ion selection and sampling in the mass spectrometer. Of the 27 labs, initially only 7 labs reported all 20 proteins correctly, and only 1 lab reported all the tryptic peptides of 1250 Da. Nevertheless, a subsequent centralized analysis of the raw data revealed that all 20 proteins and most of the 1250 Da peptides had in fact been detected by all 27 labs. The centralized analysis allowed us to determine sources of problems encountered in the study, which include missed identifications (false negatives), environmental contamination, database matching, and curation of protein identifications. Improved search engines and databases are likely to increase the fidelity of mass spectrometry-based proteomics. PMID:19448641
Competing coexisting phases in 2D water
Zanotti, Jean-Marc; Judeinstein, Patrick; Dalla-Bernardina, Simona; Creff, Gaëlle; Brubach, Jean-Blaise; Roy, Pascale; Bonetti, Marco; Ollivier, Jacques; Sakellariou, Dimitrios; Bellissent-Funel, Marie-Claire
2016-01-01
The properties of bulk water come from a delicate balance of interactions on length scales encompassing several orders of magnitude: i) the Hydrogen Bond (HBond) at the molecular scale and ii) the extension of this HBond network up to the macroscopic level. Here, we address the physics of water when the three dimensional extension of the HBond network is frustrated, so that the water molecules are forced to organize in only two dimensions. We account for the large scale fluctuating HBond network by an analytical mean-field percolation model. This approach provides a coherent interpretation of the different events experimentally (calorimetry, neutron, NMR, near and far infra-red spectroscopies) detected in interfacial water at 160, 220 and 250 K. Starting from an amorphous state of water at low temperature, these transitions are respectively interpreted as the onset of creation of transient low density patches of 4-HBonded molecules at 160 K, the percolation of these domains at 220 K and finally the total invasion of the surface by them at 250 K. The source of this surprising behaviour in 2D is the frustration of the natural bulk tetrahedral local geometry and the underlying very significant increase in entropy of the interfacial water molecules. PMID:27185018
Phase Engineering of 2D Tin Sulfides.
Mutlu, Zafer; Wu, Ryan J; Wickramaratne, Darshana; Shahrezaei, Sina; Liu, Chueh; Temiz, Selcuk; Patalano, Andrew; Ozkan, Mihrimah; Lake, Roger K; Mkhoyan, K A; Ozkan, Cengiz S
2016-06-01
Tin sulfides can exist in a variety of phases and polytypes due to the different oxidation states of Sn. A subset of these phases and polytypes take the form of layered 2D structures that give rise to a wide host of electronic and optical properties. Hence, achieving control over the phase, polytype, and thickness of tin sulfides is necessary to utilize this wide range of properties exhibited by the compound. This study reports on phase-selective growth of both hexagonal tin (IV) sulfide SnS2 and orthorhombic tin (II) sulfide SnS crystals with diameters of over tens of microns on SiO2 substrates through atmospheric pressure vapor-phase method in a conventional horizontal quartz tube furnace with SnO2 and S powders as the source materials. Detailed characterization of each phase of tin sulfide crystals is performed using various microscopy and spectroscopy methods, and the results are corroborated by ab initio density functional theory calculations. PMID:27099950
Repression of multiple CYP2D genes in mouse primary hepatocytes with a single siRNA construct.
Elraghy, Omaima; Baldwin, William S
2015-01-01
The Cyp2d subfamily is the second most abundant subfamily of hepatic drug-metabolizing CYPs. In mice, there are nine Cyp2d members that are believed to have redundant catalytic activity. We are testing and optimizing the ability of one short interfering RNA (siRNA) construct to knockdown the expression of multiple mouse Cyp2ds in primary hepatocytes. Expression of Cyp2d10, Cyp2d11, Cyp2d22, and Cyp2d26 was observed in the primary male mouse hepatocytes. Cyp2d9, which is male-specific and growth hormone-dependent, was not expressed in male primary hepatocytes, potentially because of its dependence on pulsatile growth hormone release from the anterior pituitary. Several different siRNAs at different concentrations and with different reagents were used to knockdown Cyp2d expression. siRNA constructs designed to repress only one construct often mildly repressed several Cyp2d isoforms. A construct designed to knockdown every Cyp2d isoform provided the best results, especially when incubated with transfection reagents designed specifically for primary cell culture. Interestingly, a construct designed to knockdown all Cyp2d isoforms, except Cyp2d10, caused a 2.5× increase in Cyp2d10 expression, presumably because of a compensatory response. However, while RNA expression is repressed 24 h after siRNA treatment, associated changes in Cyp2d-mediated metabolism are tenuous. Overall, this study provides data on the expression of murine Cyp2ds in primary cell lines, valuable information on designing siRNAs for silencing multiple murine CYPs, and potential pros and cons of using siRNA as a tool for repressing Cyp2d and estimating Cyp2d's role in murine xenobiotic metabolism. PMID:25124873
Code of Federal Regulations, 2013 CFR
2013-10-01
... 49 Transportation 1 2013-10-01 2013-10-01 false What problems always cause a drug test to be... TESTING PROGRAMS Problems in Drug Tests § 40.201 What problems always cause a drug test to be cancelled and may result in a requirement for another collection? As the MRO, you must cancel a drug test when...
Advecting Procedural Textures for 2D Flow Animation
NASA Technical Reports Server (NTRS)
Kao, David; Pang, Alex; Moran, Pat (Technical Monitor)
2001-01-01
This paper proposes the use of specially generated 3D procedural textures for visualizing steady state 2D flow fields. We use the flow field to advect and animate the texture over time. However, using standard texture advection techniques and arbitrary textures will introduce some undesirable effects such as: (a) expanding texture from a critical source point, (b) streaking patterns from the boundary of the flow field, (c) crowding of advected textures near an attracting spiral or sink, and (d) absence or lack of textures in some regions of the flow. This paper proposes a number of strategies to solve these problems. We demonstrate how the technique works using both synthetic data and computational fluid dynamics data.
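The basic texture-advection step this abstract builds on can be sketched as follows. This is a minimal numpy illustration of backward advection over a steady 2D flow (each pixel samples the texture at the position a particle would have come from); the rotating flow field and noise texture are made up for the example, and it is not the authors' implementation.

```python
import numpy as np

def advect(tex, u, v, dt):
    """One backward-advection step: sample tex at (x - u*dt, y - v*dt),
    with nearest-neighbor lookup clamped at the domain boundary."""
    n = tex.shape[0]
    ys, xs = np.mgrid[0:n, 0:n].astype(float)
    xs_back = np.clip(np.round(xs - u * dt), 0, n - 1).astype(int)
    ys_back = np.clip(np.round(ys - v * dt), 0, n - 1).astype(int)
    return tex[ys_back, xs_back]

n = 64
tex = np.random.default_rng(0).random((n, n))   # input noise texture in [0, 1)
ys, xs = np.mgrid[0:n, 0:n]
u = -(ys - n / 2)                               # solid-body rotation about center
v = xs - n / 2
frame = advect(tex, u, v, dt=0.05)              # one animation frame
```

Repeating the step with the same steady flow yields successive animation frames; the boundary clamping in this sketch is exactly the kind of shortcut that produces the streaking artifact (b) the paper addresses.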
Pilot study risk assessment for selected problems at the Nevada Test Site (NTS)
Daniels, J.I.; Anspaugh, L.R.; Bogen, K.T.; Layton, D.W.; Straume, T.; Andricevic, R.; Jacobson, R.L.; Meinhold, A.F.; Holtzman, S.; Morris, S.C.; Hamilton, L.D.
1993-06-01
The Nevada Test Site (NTS) is located in southwestern Nevada, about 105 km (65 mi) northwest of the city of Las Vegas. A series of tests was conducted in the late 1950s and early 1960s at or near the NTS to study issues involving plutonium-bearing devices. These tests resulted in the dispersal of about 5 TBq of ^{239,240}Pu on the surficial soils at the test locations. Additionally, underground tests of nuclear weapons devices have been conducted at the NTS since late 1962; ground water beneath the NTS has been contaminated with radionuclides produced by these tests. These two important problems have been selected for assessment. Regarding the plutonium contamination, because the residual ^{239}Pu decays slowly (half-life of 24,110 y), these sites could represent a long-term hazard if they are not remediated and if institutional controls are lost. To investigate the magnitude of the potential health risks for this no-remediation case, three basic exposure scenarios were defined that could bring individuals in contact with ^{239,240}Pu at the sites: (1) a resident living in a subdivision, (2) a resident farmer, and (3) a worker at a commercial facility -- all located at a test site. The predicted cancer risks for the resident farmer were more than a factor of three times higher than the suburban resident at the median risk level, and about a factor of ten greater than the reference worker at a commercial facility. At 100 y from the present, the 5, 50, and 95th percentile risks for the resident farmer at the most contaminated site were 4 x 10^{-6}, 6 x 10^{-5}, and 5 x 10^{-4}, respectively. For the assessment of Pu in surface soil, the principal sources of uncertainty in the estimated risks were population mobility, the relationship between indoor and outdoor contaminant levels, and the dose and risk factors for bone, liver, and lung.
Allison, Scott A; Sweet, Clifford F; Beall, Douglas P; Lewis, Thomas E; Monroe, Thomas
2005-09-01
The PACS implementation process is complicated requiring a tremendous amount of time, resources, and planning. The Department of Defense (DOD) has significant experience in developing and refining PACS acceptance testing (AT) protocols that assure contract compliance, clinical safety, and functionality. The DOD's AT experience under the initial Medical Diagnostic Imaging Support System contract led to the current Digital Imaging Network-Picture Archiving and Communications Systems (DIN-PACS) contract AT protocol. To identify the most common system and component deficiencies under the current DIN-PACS AT protocol, 14 tri-service sites were evaluated during 1998-2000. Sixteen system deficiency citations with 154 separate types of limitations were noted with problems involving the workstation, interfaces, and the Radiology Information System comprising more than 50% of the citations. Larger PACS deployments were associated with a higher number of deficiencies. The most commonly cited systems deficiencies were among the most expensive components of the PACS. PMID:15924273
Catching fly balls in virtual reality: a critical test of the outfielder problem.
Fink, Philip W; Foo, Patrick S; Warren, William H
2009-01-01
How does a baseball outfielder know where to run to catch a fly ball? The "outfielder problem" remains unresolved, and its solution would provide a window into the visual control of action. It may seem obvious that human action is based on an internal model of the physical world, such that the fielder predicts the landing point based on a mental model of the ball's trajectory (TP). However, two alternative theories, Optical Acceleration Cancellation (OAC) and Linear Optical Trajectory (LOT), propose that fielders are led to the right place at the right time by coupling their movements to visual information in a continuous "online" manner. All three theories predict successful catches and similar running paths. We provide a critical test by using virtual reality to perturb the vertical motion of the ball in mid-flight. The results confirm the predictions of OAC but are at odds with LOT and TP. PMID:20055547
2-D Animation's Not Just for Mickey Mouse.
ERIC Educational Resources Information Center
Weinman, Lynda
1995-01-01
Discusses characteristics of two-dimensional (2-D) animation; highlights include character animation, painting issues, and motion graphics. Sidebars present Silicon Graphics animations tools and 2-D animation programs for the desktop computer. (DGM)
CAST2D: A finite element computer code for casting process modeling
Shapiro, A.B.; Hallquist, J.O.
1991-10-01
CAST2D is a coupled thermal-stress finite element computer code for casting process modeling. This code can be used to predict the final shape and stress state of cast parts. CAST2D couples the heat transfer code TOPAZ2D and solid mechanics code NIKE2D. CAST2D has the following features in addition to all the features contained in the TOPAZ2D and NIKE2D codes: (1) a general purpose thermal-mechanical interface algorithm (i.e., slide line) that calculates the thermal contact resistance across the part-mold interface as a function of interface pressure and gap opening; (2) a new phase change algorithm, the delta function method, that is a robust method for materials undergoing isothermal phase change; (3) a constitutive model that transitions between fluid behavior and solid behavior, and accounts for material volume change on phase change; and (4) a modified plot file data base that allows plotting of thermal variables (e.g., temperature, heat flux) on the deformed geometry. Although the code is specialized for casting modeling, it can be used for other thermal stress problems (e.g., metal forming).
Position control using 2D-to-2D feature correspondences in vision guided cell micromanipulation.
Zhang, Yanliang; Han, Mingli; Shee, Cheng Yap; Ang, Wei Tech
2007-01-01
Conventional camera calibration, which utilizes the extrinsic and intrinsic parameters of the camera and the objects, has certain limitations for micro-level cell operations: hardware deviations and external disturbances during the experimental process invalidate the extrinsic parameters. This invalidation is often neglected in macro-world visual servoing, and it degrades visual image processing quality, causing deviation from the desired position in micro-level cell operations. To increase the success rate of vision-guided biological micromanipulations, a novel algorithm that monitors the changing image pattern of the manipulators, including the injection micropipette and cell holder, is designed and implemented based on 2-dimensional (2D)-to-2D feature correspondences; it can adjust the manipulator and perform position control simultaneously. When any deviation is found, the manipulator is retracted to the initial focusing plane before the operation continues.
Generation and Radiation of Acoustic Waves from a 2D Shear Layer
NASA Technical Reports Server (NTRS)
Dahl, Milo D.
2000-01-01
A thin free shear layer containing an inflection point in the mean velocity profile is inherently unstable. Disturbances in the flow field can excite the unstable behavior of a shear layer, if the appropriate combination of frequencies and shear layer thicknesses exists, causing instability waves to grow. For other combinations of frequencies and thicknesses, these instability waves remain neutral in amplitude or decay in the downstream direction. A growing instability wave radiates noise when its phase velocity becomes supersonic relative to the ambient speed of sound. This occurs primarily when the mean jet flow velocity is supersonic. Thus, the small disturbances in the flow, which themselves may generate noise, have generated an additional noise source. It is the purpose of this problem to test the ability of CAA to compute this additional source of noise. The problem is idealized such that the exciting disturbance is a fixed known acoustic source pulsating at a single frequency. The source is placed inside of a 2D jet with parallel flow; hence, the shear layer thickness is constant. With the source amplitude small enough, the problem is governed by the following set of linear equations given in dimensional form.
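The set of linear equations referenced at the end of this abstract did not survive extraction. As a hedged reconstruction, the standard linearized Euler equations about a 2D parallel mean flow $U(y)$, $\bar\rho(y)$, $\bar p$, with a time-harmonic source $S$ in the pressure equation, take the form below; the original problem statement may differ in nondimensionalization and source placement.

```latex
\begin{align}
\frac{\partial \rho'}{\partial t} + U\frac{\partial \rho'}{\partial x}
  + \frac{\partial(\bar\rho\, u')}{\partial x}
  + \frac{\partial(\bar\rho\, v')}{\partial y} &= 0,\\
\bar\rho\left(\frac{\partial u'}{\partial t} + U\frac{\partial u'}{\partial x}
  + v'\frac{\mathrm{d}U}{\mathrm{d}y}\right) &= -\frac{\partial p'}{\partial x},\\
\bar\rho\left(\frac{\partial v'}{\partial t}
  + U\frac{\partial v'}{\partial x}\right) &= -\frac{\partial p'}{\partial y},\\
\frac{\partial p'}{\partial t} + U\frac{\partial p'}{\partial x}
  + \gamma\bar p\left(\frac{\partial u'}{\partial x}
  + \frac{\partial v'}{\partial y}\right) &= S(x,y)\cos(\omega t),
\end{align}
```

where primes denote small disturbance quantities and $S$ is the localized single-frequency acoustic source placed inside the jet.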
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1994-01-01
A two-dimensional computational code, RPLUS2D, which was developed for the reactive propulsive flows of ramjets and scramjets, was validated for two-dimensional shock-wave/turbulent-boundary-layer interactions. The problem of compression corners at supersonic speeds was solved using the RPLUS2D code. To validate the RPLUS2D code for hypersonic speeds, it was applied to a realistic hypersonic inlet geometry. Both the Baldwin-Lomax and the Chien two-equation turbulence models were used. Computational results showed that the RPLUS2D code compared very well with experimentally obtained data for supersonic compression corner flows, except in the case of large separated flows resulting from the interactions between the shock wave and the turbulent boundary layer. The computational results also compared well with the experimental results for a hypersonic NASA P8 inlet case, with the Chien two-equation turbulence model performing better than the Baldwin-Lomax model.
A Planar Quantum Transistor Based on 2D-2D Tunneling in Double Quantum Well Heterostructures
Baca, W.E.; Blount, M.A.; Hafich, M.J.; Lyo, S.K.; Moon, J.S.; Reno, J.L.; Simmons, J.A.; Wendt, J.R.
1998-12-14
We report on our work on the double electron layer tunneling transistor (DELTT), based on the gate-control of two-dimensional -- two-dimensional (2D-2D) tunneling in a double quantum well heterostructure. While previous quantum transistors have typically required tiny laterally-defined features, by contrast the DELTT is entirely planar and can be reliably fabricated in large numbers. We use a novel epoxy-bond-and-stop-etch (EBASE) flip-chip process, whereby submicron gating on opposite sides of semiconductor epitaxial layers as thin as 0.24 microns can be achieved. Because both electron layers in the DELTT are 2D, the resonant tunneling features are unusually sharp, and can be easily modulated with one or more surface gates. We demonstrate DELTTs with peak-to-valley ratios in the source-drain I-V curve of order 20:1 below 1 K. Both the height and position of the resonant current peak can be controlled by gate voltage over a wide range. DELTTs with larger subband energy offsets (approximately 21 meV) exhibit characteristics that are nearly as good at 77 K, in good agreement with our theoretical calculations. Using these devices, we also demonstrate bistable memories operating at 77 K. Finally, we briefly discuss the prospects for room temperature operation, increases in gain, and high-speed operation.
'Brukin2D': a 2D visualization and comparison tool for LC-MS data
Tsagkrasoulis, Dimosthenis; Zerefos, Panagiotis; Loudos, George; Vlahou, Antonia; Baumann, Marc; Kossida, Sophia
2009-01-01
Background Liquid Chromatography-Mass Spectrometry (LC-MS) is a commonly used technique to resolve complex protein mixtures. Visualization of large data sets produced from LC-MS, namely the chromatogram and the mass spectra that correspond to its compounds is the focus of this work. Results The in-house developed 'Brukin2D' software, built in Matlab 7.4, which is presented here, uses the compound data that are exported from the Bruker 'DataAnalysis' program, and depicts the mean mass spectra of all the chromatogram compounds from one LC-MS run, in one 2D contour/density plot. Two contour plots from different chromatograph runs can then be viewed in the same window and automatically compared, in order to find their similarities and differences. The results of the comparison can be examined through detailed mass quantification tables, while chromatogram compound statistics are also calculated during the procedure. Conclusion 'Brukin2D' provides a user-friendly platform for quick, easy and integrated view of complex LC-MS data. The software is available at . PMID:19534737
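The comparison of two runs as 2D contour/density plots described above can be sketched as follows. This is an illustrative numpy example (not Brukin2D's actual code): each run is represented as a 2D intensity grid over retention time and m/z bins, and a difference map locates compounds present in one run but not the other.

```python
import numpy as np

rng = np.random.default_rng(1)
run_a = rng.random((50, 100))       # rows: retention-time bins, cols: m/z bins
run_b = run_a.copy()
run_b[20:25, 40:50] += 1.0          # simulated compound present only in run B

diff = run_b - run_a                # positive where run B has extra signal
rows, cols = np.nonzero(diff > 0.5) # bins where the runs genuinely differ
```

In practice the two grids would come from exported compound data and would need alignment in retention time before subtraction; the thresholded difference map is the simplest analogue of the automatic comparison the tool performs.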
Inhibition of human cytochrome P450 2D6 (CYP2D6) by methadone.
Wu, D; Otton, S V; Sproule, B A; Busto, U; Inaba, T; Kalow, W; Sellers, E M
1993-01-01
1. In microsomes prepared from three human livers, methadone competitively inhibited the O-demethylation of dextromethorphan, a marker substrate for CYP2D6. The apparent Ki value of methadone ranged from 2.5 to 5 microM. 2. Two hundred and fifty-two (252) white Caucasians, including 210 unrelated healthy volunteers and 42 opiate abusers undergoing treatment with methadone were phenotyped using dextromethorphan as the marker drug. Although the frequency of poor metabolizers was similar in both groups, the extensive metabolizers among the opiate abusers tended to have higher O-demethylation metabolic ratios and to excrete less of the dose as dextromethorphan metabolites than control extensive metabolizer subjects. These data suggest inhibition of CYP2D6 by methadone in vivo as well. 3. Because methadone is widely used in the treatment of opiate abuse, inhibition of CYP2D6 activity in these patients might contribute to exaggerated response or unexpected toxicity from drugs that are substrates of this enzyme. PMID:8448065
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 1 2010-10-01 2010-10-01 false What procedural problems do not result in the... Problems in Drug Tests § 40.209 What procedural problems do not result in the cancellation of a test and do... aware, even if they are not considered problems that will cause a test to be cancelled as listed in...
Efficiency of Pareto joint inversion of 2D geophysical data using global optimization methods
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2016-04-01
Pareto joint inversion of two or more sets of data is a promising new tool of modern geophysical exploration. In the first stage of our investigation we created software that executes forward solvers for two geophysical methods (2D magnetotelluric and gravity) as well as an inversion with the possibility of constraining the solution with seismic data. The MT forward solver applies the finite element method and Dirichlet boundary conditions to Helmholtz's equations. The gravity forward solver was based on Talwani's algorithm. To limit the dimensionality of the solution space we decided to describe the model as sets of polygons, using the Sharp Boundary Interface (SBI) approach. The main inversion engine was created using a Particle Swarm Optimization (PSO) algorithm adapted to handle two or more target functions and to prevent the acceptance of solutions which are non-realistic or incompatible with the Pareto scheme. Each inversion run generates a single Pareto solution, which can be added to the Pareto front. The PSO inversion engine was parallelized using the OpenMP standard, which enabled executing the code with a practically unlimited number of threads at once, significantly decreasing the computing time of the inversion process. Furthermore, computing efficiency increases with the number of PSO iterations. In this contribution we analyze the efficiency of the created software, taking into consideration details of the chosen global optimization engine used as the main joint minimization engine. Additionally we study the scale of the possible decrease in computational time achieved by different methods of parallelization applied to both the forward solvers and the inversion algorithm. All tests were done for 2D magnetotelluric and gravity data based on real geological media. The obtained results show that even for relatively simple mid-end computational infrastructure, the proposed solution of the inversion problem can be applied in practice and used for real-life problems of geophysical inversion and interpretation.
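The acceptance rule the abstract describes (each PSO run yields a candidate model that is kept only if it is compatible with the Pareto scheme) can be sketched as below. This is an illustrative reconstruction under standard Pareto-dominance assumptions for minimization; `dominates` and `update_pareto_front` are hypothetical names, not functions from the authors' software:

```python
def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization):
    f_a is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(f_a, f_b)) and \
           any(a < b for a, b in zip(f_a, f_b))

def update_pareto_front(front, candidate, objectives):
    """Add a candidate model to the front unless an archived solution
    dominates it; drop archived solutions the candidate dominates."""
    f_c = objectives(candidate)
    if any(dominates(objectives(m), f_c) for m in front):
        return front  # rejected: a better compromise already archived
    kept = [m for m in front if not dominates(f_c, objectives(m))]
    return kept + [candidate]
```

With two misfit functions (e.g. MT and gravity misfits), repeated calls accumulate only mutually non-dominated models, matching the "single Pareto solution per run, added to the Pareto front" workflow.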
Is There a Space-Based Technology Solution to Problems with Preclinical Drug Toxicity Testing?
Hammond, Timothy; Allen, Patricia; Birdsall, Holly
2016-07-01
Even the finest state-of-the-art preclinical drug testing, usually in primary hepatocytes, remains an imperfect science. Drugs continue to be withdrawn from the market due to unforeseen toxicity, side effects, and drug interactions. The space program may be able to provide a lifeline. Best known for rockets, space shuttles, astronauts and engineering, the space program has also delivered some serious medical science. Optimized suspension culture in NASA's specialized suspension culture devices, known as rotating wall vessels, uniquely maintains Phase I and Phase II drug metabolizing pathways in hepatocytes for weeks in cell culture. Previously prohibitively expensive, new materials and 3D printing techniques have the potential to make the NASA rotating wall vessel available inexpensively on an industrial scale. Here we address the tradeoffs inherent in the rotating wall vessel, limitations of alternative approaches for drug metabolism studies, and the market to be addressed. Better pre-clinical drug testing has the potential to significantly reduce the morbidity and mortality of one of the most common problems in modern medicine: adverse events related to pharmaceuticals. PMID:27183841
Facial biometrics based on 2D vector geometry
NASA Astrophysics Data System (ADS)
Malek, Obaidul; Venetsanopoulos, Anastasios; Androutsos, Dimitrios
2014-05-01
The main challenge of facial biometrics is its robustness and ability to adapt to changes in position orientation, facial expression, and illumination effects. This research addresses the predominant deficiencies in this regard and systematically investigates a facial authentication system in the Euclidean domain. In the proposed method, Euclidean geometry in 2D vector space is being constructed for features extraction and the authentication method. In particular, each assigned point of the candidates' biometric features is considered to be a 2D geometrical coordinate in the Euclidean vector space. Algebraic shapes of the extracted candidate features are also computed and compared. The proposed authentication method is being tested on images from the public "Put Face Database". The performance of the proposed method is evaluated based on Correct Recognition (CRR), False Acceptance (FAR), and False Rejection (FRR) rates. The theoretical foundation of the proposed method along with the experimental results are also presented in this paper. The experimental results demonstrate the effectiveness of the proposed method.
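The core comparison the abstract describes, treating each extracted feature point as a 2D coordinate in Euclidean vector space and comparing candidates geometrically, can be sketched as follows. This is a minimal illustration of distance-based matching of landmark sets, not the authors' implementation; the function name and scoring rule are assumptions:

```python
import numpy as np

def euclidean_match_score(features_a, features_b):
    """Compare two sets of 2D facial landmark coordinates (N x 2 arrays)
    by the mean Euclidean distance between corresponding points.
    Lower scores indicate more similar geometry."""
    a = np.asarray(features_a, dtype=float)
    b = np.asarray(features_b, dtype=float)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))
```

A full system would first normalize for position and orientation (the robustness issues the paper addresses) before scoring.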
New Approach for 2D Readout of GEM Detectors
Hasell, Douglas K
2011-10-29
Detectors based on Gas Electron Multiplication (GEM) technology are becoming more and more widely used in nuclear and high energy physics and are being applied in astronomy, medical physics, industry, and homeland security. GEM detectors are thin, low mass, insensitive to magnetic fields, and can currently provide position resolutions down to ~50 microns. However, the designs for reconstructing the position, in two dimensions (2D), of the charged particles striking a GEM detector are often complicated to fabricate and expensive. The objective of this proposal is to investigate a simpler procedure for producing the two dimensional readout layer of GEM detectors using readily available printed circuit board technology which can be tailored to the detector requirements. We will use the established GEM laboratory and facilities at M.I.T. currently employed in developing GEM detectors for the STAR forward tracking upgrade to simplify the testing and evaluation of the new 2D readout designs. If this new design proves successful it will benefit future nuclear and high energy physics experiments already being planned and will similarly extend and simplify the application of GEM technology to other branches of science, medicine, and industry. These benefits would be not only in lower costs for fabrication but also in increased flexibility for design and application.
ELRIS2D: A MATLAB Package for the 2D Inversion of DC Resistivity/IP Data
NASA Astrophysics Data System (ADS)
Akca, Irfan
2016-04-01
ELRIS2D is an open source code written in MATLAB for the two-dimensional inversion of direct current resistivity (DCR) and time domain induced polarization (IP) data. The user interface of the program is designed for functionality and ease of use. All available settings of the program can be reached from the main window. The subsurface is discretized using a hybrid mesh generated by the combination of structured and unstructured meshes, which reduces the computational cost of the whole inversion procedure. The inversion routine is based on the smoothness constrained least squares method. In order to verify the program, responses of two test models and field data sets were inverted. The models inverted from the synthetic data sets are consistent with the original test models in both DC resistivity and IP cases. A field data set acquired in an archaeological site is also used for the verification of outcomes of the program in comparison with the excavation results.
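The smoothness-constrained least squares scheme that the abstract says underlies the inversion routine can be sketched as one Gauss-Newton update with a roughness penalty. This is a generic Occam-style sketch, not ELRIS2D's actual MATLAB code; `J` (Jacobian), `L` (first-difference roughness operator) and `lam` (regularization weight) are standard placeholders:

```python
import numpy as np

def smoothness_constrained_step(J, d_obs, d_pred, m, L, lam):
    """One Gauss-Newton update minimizing
    ||d_obs - d_pred - J dm||^2 + lam * ||L (m + dm)||^2,
    i.e. data misfit plus model roughness."""
    r = d_obs - d_pred                      # current data residual
    A = J.T @ J + lam * (L.T @ L)           # regularized normal matrix
    b = J.T @ r - lam * (L.T @ L) @ m       # gradient-balancing RHS
    dm = np.linalg.solve(A, b)
    return m + dm
```

In a real DCR/IP inversion this step is iterated with the forward solver recomputing `d_pred` and `J` on the hybrid mesh at each iteration.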
Correlated Electron Phenomena in 2D Materials
NASA Astrophysics Data System (ADS)
Lambert, Joseph G.
In this thesis, I present experimental results on coherent electron phenomena in layered two-dimensional materials: single layer graphene and van der Waals coupled 2D TiSe2. Graphene is a two-dimensional single-atom thick sheet of carbon atoms first derived from bulk graphite by the mechanical exfoliation technique in 2004. Low-energy charge carriers in graphene behave like massless Dirac fermions, and their density can be easily tuned between electron-rich and hole-rich quasiparticles with electrostatic gating techniques. The sharp interfaces between regions of different carrier densities form barriers with selective transmission, making them behave as partially reflecting mirrors. When two of these interfaces are set at a separation distance within the phase coherence length of the carriers, they form an electronic version of a Fabry-Perot cavity. I present measurements and analysis of multiple Fabry-Perot modes in graphene with parallel electrodes spaced a few hundred nanometers apart. Transition metal dichalcogenide (TMD) TiSe2 is part of the family of materials that coined the term "materials beyond graphene". It contains van der Waals coupled trilayer stacks of Se-Ti-Se. Many TMD materials exhibit a host of interesting correlated electronic phases. In particular, TiSe2 exhibits chiral charge density waves (CDW) below T_CDW ≈ 200 K. Upon doping with copper, the CDW state gets suppressed with Cu concentration, and CuxTiSe2 becomes superconducting with a critical temperature of T_c = 4.15 K. There is still much debate over the mechanisms governing the coexistence of the two correlated electronic phases, CDW and superconductivity. I will present some of the first conductance spectroscopy measurements of proximity coupled superconductor-CDW systems. Measurements reveal a proximity-induced critical current at the Nb-TiSe2 interfaces, suggesting pair correlations in the pure TiSe2. The results indicate that superconducting order is present concurrently with CDW in
Wang, Yuxian; Xie, Yongbing; Sun, Hongqi; Xiao, Jiadong; Cao, Hongbin; Wang, Shaobin
2016-01-15
Two-dimensional reduced graphene oxide (2D rGO) was employed as both a shape-directing medium and support to fabricate 2D γ-MnO2/2D rGO nano-hybrids (MnO2/rGO) via a facile hydrothermal route. For the first time, the 2D/2D hybrid materials were used for catalytic ozonation of 4-nitrophenol. The catalytic efficiency of MnO2/rGO was much higher than either MnO2 or rGO only, and rGO was suggested to play the role for promoting electron transfers. Quenching tests using tert-butanol, p-benzoquinone, and sodium azide suggested that the major radicals responsible for 4-nitrophenol degradation and mineralization are O2(-) and (1)O2, but not ·OH. Reusability tests demonstrated a high stability of the materials in catalytic ozonation with minor Mn leaching below 0.5 ppm. Degradation mechanism, reaction kinetics, reusability and a synergistic effect between catalytic ozonation and coupling peroxymonosulfate (PMS) activation were also discussed.
Generation and Radiation of Acoustic Waves from a 2-D Shear Layer
NASA Technical Reports Server (NTRS)
Agarwal, Anurag; Morris, Philip J.
2000-01-01
A parallel numerical simulation of the radiation of sound from an acoustic source inside a 2-D jet is presented in this paper. This basic benchmark problem is used as a test case for scattering problems that are presently being solved by using the Impedance Mismatch Method (IMM). In this technique, a solid body in the domain is represented by setting the acoustic impedance of each medium, encountered by a wave, to a different value. This impedance discrepancy results in reflected and scattered waves with appropriate amplitudes. The great advantage of the use of this method is that no modifications to a simple Cartesian grid need to be made for complicated geometry bodies. Thus, high order finite difference schemes may be applied simply to all parts of the domain. In the IMM, the total perturbation field is split into incident and scattered fields. The incident pressure is assumed to be known and the equivalent sources for the scattered field are associated with the presence of the scattering body (through the impedance mismatch) and the propagation of the incident field through a non-uniform flow. An earlier version of the technique could only handle uniform flow in the vicinity of the source and at the outflow boundary. Scattering problems in non-uniform mean flow are of great practical importance (for example, scattering from a high lift device in a non-uniform mean flow or the effects of a fuselage boundary layer). The solution to this benchmark problem, which has an acoustic wave propagating through a non-uniform mean flow, serves as a test case for the extensions of the IMM technique.
NASA Astrophysics Data System (ADS)
Bernauer, F.; Hürkamp, K.; Rühm, W.; Tschiersch, J.
2015-08-01
Detailed characterization and classification of precipitation is an important task in atmospheric research. Line scanning 2-D video disdrometer devices are well established for rain observations. The two orthogonal views taken of each hydrometeor passing the sensitive area of the instrument qualify these devices especially for detailed characterization of nonsymmetric solid hydrometeors. However, in case of solid precipitation, problems related to the matching algorithm have to be considered and the user must be aware of the limited spatial resolution when size and shape descriptors are analyzed. Clarifying the potential of 2-D video disdrometers in deriving size, velocity and shape parameters from single recorded pictures is the aim of this work. The need of implementing a matching algorithm suitable for mixed- and solid-phase precipitation is highlighted as an essential step in data evaluation. For this purpose simple reproducible experiments with solid steel spheres and irregularly shaped Styrofoam particles are conducted. Self-consistency of shape parameter measurements is tested in 38 cases of real snowfall. As a result, it was found that reliable size and shape characterization with a relative standard deviation of less than 5 % is only possible for particles larger than 1 mm. For particles between 0.5 and 1.0 mm the relative standard deviation can grow up to 22 % for the volume, 17 % for size parameters and 14 % for shape descriptors. Testing the adapted matching algorithm with a reproducible experiment with Styrofoam particles, a mismatch probability of less than 3 % was found. For shape parameter measurements in case of real solid-phase precipitation, the 2-DVD shows self-consistent behavior.
NASA Technical Reports Server (NTRS)
Thakur, Siddarth; Wright, Jeffrey
2006-01-01
The traditional design and analysis practice for advanced propulsion systems, particularly chemical rocket engines, relies heavily on expensive full-scale prototype development and testing. Over the past decade, use of high-fidelity analysis and design tools such as CFD early in the product development cycle has been identified as one way to alleviate testing costs and to develop these devices better, faster and cheaper. Increased emphasis is being placed on developing and applying CFD models to simulate the flow field environments and performance of advanced propulsion systems. This necessitates the development of next generation computational tools which can be used effectively and reliably in a design environment by non-CFD specialists. A computational tool, called Loci-STREAM is being developed for this purpose. It is a pressure-based, Reynolds-averaged Navier-Stokes (RANS) solver for generalized unstructured grids, which is designed to handle all-speed flows (incompressible to hypersonic) and is particularly suitable for solving multi-species flow in fixed-frame combustion devices. Loci-STREAM integrates proven numerical methods for generalized grids and state-of-the-art physical models in a novel rule-based programming framework called Loci which allows: (a) seamless integration of multidisciplinary physics in a unified manner, and (b) automatic handling of massively parallel computing. The objective of the ongoing work is to develop a robust simulation capability for combustion problems in rocket engines. As an initial step towards validating this capability, a model problem is investigated in the present study which involves a gaseous oxygen/gaseous hydrogen (GO2/GH2) shear coaxial single element injector, for which experimental data are available. The sensitivity of the computed solutions to grid density, grid distribution, different turbulence models, and different near-wall treatments is investigated. A refined grid, which is clustered in the vicinity of
Measurements of Schottky barrier heights formed from metals and 2D transition metal dichalcogenides
NASA Astrophysics Data System (ADS)
Kim, Changsik; Moon, Inyong; Nam, Seunggeol; Cho, Yeonchoo; Shin, Hyeon-Jin; Park, Seongjun; Yoo, Won Jong
Schottky barrier height (SBH) is an important parameter that needs to be considered for designing electronic devices. However, for two-dimensional (2D) material based devices, SBH control is limited by 2D-structure-induced quantum confinement and 2D-surface-induced Fermi level pinning. In this work, we explore differences in measuring SBH between 2D and 3D materials. Recently, low-temperature I-V measurement has been reported to extract the SBH based on the thermionic emission equation for a Schottky diode. However, 2D devices are not true Schottky diodes in that both the source and drain metal electrodes make Schottky contacts. According to our experimental results, the SBH extracted from the linear slope of ln(I/T^(3/2)) against 1/T shows widely diverse values, dependent on the applied voltage bias and test temperature, which affect carrier transport including tunneling or thermionic emission across the metal-2D material interface. In this work, we wish to demonstrate a method to determine the SBH and Fermi level pinning attributed to 2D transition metal dichalcogenides, as distinct from conventional 3D materials.
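The extraction the abstract refers to, fitting ln(I/T^(3/2)) against 1/T and reading the barrier height from the slope, can be sketched as below. The 3/2 temperature exponent follows the 2D-adapted thermionic-emission form quoted in the abstract; the function name and the assumption that the slope equals -phi_B/k_B are illustrative:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def extract_sbh(T, I):
    """Extract an effective Schottky barrier height (eV) from
    current-temperature data by fitting ln(I/T^(3/2)) vs 1/T;
    the barrier is -slope * k_B under the 2D thermionic model."""
    y = np.log(I / T**1.5)
    slope, _intercept = np.polyfit(1.0 / T, y, 1)
    return -slope * K_B
```

As the abstract stresses, a single fitted value can be misleading when bias- and temperature-dependent tunneling contributes to the current, so the fit window matters.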
Haug, Tobias; Mann, Wolfgang
2008-01-01
Given the current lack of appropriate assessment tools for measuring deaf children's sign language skills, many test developers have used existing tests of other sign languages as templates to measure the sign language used by deaf people in their country. This article discusses factors that may influence the adaptation of assessment tests from one natural sign language to another. Two tests which have been adapted for several other sign languages are focused upon: the Test for American Sign Language and the British Sign Language Receptive Skills Test. A brief description is given of each test as well as insights from ongoing adaptations of these tests for other sign languages. The problems reported in these adaptations were found to be grounded in linguistic and cultural differences, which need to be considered for future test adaptations. Other reported shortcomings of test adaptation are related to the question of how well psychometric measures transfer from one instrument to another. PMID:17569751
2-D Finite Element Heat Conduction
1989-10-30
AYER is a finite element program which implicitly solves the general two-dimensional equation of thermal conduction for plane or axisymmetric bodies. AYER takes into account the effects of time (transient problems), in-plane anisotropic thermal conductivity, a three-dimensional velocity distribution, and interface thermal contact resistance. Geometry and material distributions are arbitrary, and input is via subroutines provided by the user. As a result, boundary conditions, material properties, velocity distributions, and internal power generation may be made functions of, e.g., time, temperature, location, and heat flux.
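The implicit transient conduction AYER solves can be illustrated with a minimal sketch. This is a finite-difference analogue of one backward-Euler step on a uniform grid with fixed boundary temperatures, not AYER's finite element formulation, and it omits anisotropy, advection, and contact resistance:

```python
import numpy as np

def implicit_heat_step(T, dt, alpha, h):
    """One backward-Euler step of dT/dt = alpha * (Txx + Tyy) on a
    uniform grid with spacing h; boundary nodes are held fixed
    (Dirichlet). Builds and solves the dense implicit system."""
    ny, nx = T.shape
    n = ny * nx
    A = np.eye(n)                 # identity rows fix boundary nodes
    b = T.flatten().copy()
    r = alpha * dt / h**2
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            k = j * nx + i
            A[k, k] = 1 + 4 * r   # 5-point Laplacian, implicit in time
            for nb in (k - 1, k + 1, k - nx, k + nx):
                A[k, nb] = -r
    return np.linalg.solve(A, b).reshape(ny, nx)
```

Being implicit, the step is unconditionally stable, which is why codes like AYER can take large time steps in transient problems.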
ERIC Educational Resources Information Center
Erford, Bradley T.; Butler, Caitlin; Peacock, Elizabeth
2015-01-01
The Screening Test for Emotional Problems-Teacher Version (STEP-T) was designed to identify students aged 7-17 years with wide-ranging emotional disturbances. Coefficients alpha and test-retest reliability were adequate for all subscales except Anxiety. The hypothesized five-factor model fit the data very well and external aspects of validity were…
Testing a Comprehensive Community Problem-Solving Framework for Community Coalitions
ERIC Educational Resources Information Center
Yang, Evelyn; Foster-Fishman, Pennie; Collins, Charles; Ahn, Soyeon
2012-01-01
Community problem solving is believed to help coalitions achieve community changes and subsequent population-level reductions in targeted community health problems. This study empirically examined a community problem solving model used by CADCA, a national coalition training organization, to determine if the model explains how coalitions become…
Quantum Simulation with 2D Arrays of Trapped Ions
NASA Astrophysics Data System (ADS)
Richerme, Philip
2016-05-01
The computational difficulty of solving fully quantum many-body spin problems is a significant obstacle to understanding the behavior of strongly correlated quantum matter. This work proposes the design and construction of a 2D quantum spin simulator to investigate the physics of frustrated materials, highly entangled states, mechanisms potentially underpinning high-temperature superconductivity, and other topics inaccessible to current 1D systems. The effective quantum spins will be encoded within the well-isolated electronic levels of trapped ions, confined in a two-dimensional planar geometry, and made to interact using phonon-mediated optical dipole forces. The system will be scalable to 100+ quantum particles, far beyond the realm of classical intractability, while maintaining individual-ion control, long quantum coherence times, and site-resolved projective spin measurements. Once constructed, the two-dimensional quantum simulator will implement a broad range of spin models on a variety of reconfigurable lattices and characterize their behavior through measurements of spin-spin correlations and entanglement. This versatile tool will serve as an important experimental resource for exploring difficult quantum many-body problems in a regime where classical methods fail.
Silicene: silicon conquers the 2D world
NASA Astrophysics Data System (ADS)
Le Lay, Guy; Salomon, Eric; Angot, Thierry
2016-01-01
We live in the digital age based on the silicon chip and driven by Moore's law. Last July, IBM created a surprise by announcing the fabrication of a 7 nm test chip with functional transistors using, instead of just silicon, a silicon-germanium alloy. Will silicon be dethroned?
Testing Wind as an Explanation for the Spin Problem in the Continuum-fitting Method
NASA Astrophysics Data System (ADS)
You, Bei; Straub, Odele; Czerny, Bożena; Sobolewska, Małgosia; Różańska, Agata; Bursa, Michal; Dovčiak, Michal
2016-04-01
The continuum-fitting method is one of the two most advanced methods of determining the black hole spin in accreting X-ray binary systems. There are, however, still some unresolved issues with the underlying disk models. One of these issues manifests as an apparent decrease in spin for increasing source luminosity. Here, we perform a few simple tests to establish whether outflows from the disk close to the inner radius can address this problem. We employ four different parametric models to describe the wind and compare these to the apparent decrease in spin with luminosity measured in the sources LMC X-3 and GRS 1915+105. Wind models in which parameters do not explicitly depend on the accretion rate cannot reproduce the spin measurements. Models with mass accretion rate dependent outflows, however, have spectra that emulate the observed ones. The assumption of a wind thus effectively removes the artifact of spin decrease. This solution is not unique; the same conclusion can be obtained using a truncated inner disk model. To distinguish among the valid models, we will need high-resolution X-ray data and a realistic description of the Comptonization in the wind.
Mechanical properties of 2D and 3D braided textile composites
NASA Technical Reports Server (NTRS)
Norman, Timothy L.
1991-01-01
The purpose of this research was to determine the mechanical properties of 2D and 3D braided textile composite materials. Specifically, those designed for tension or shear loading were tested under static loading to failure to investigate the effects of braiding. The overall goal of the work was to provide a structural designer with an idea of how textile composites perform under typical loading conditions. From test results for unnotched tension, it was determined that the 2D is stronger, stiffer, and has higher elongation to failure than the 3D. It was also found that the polyetherether ketone (PEEK) resin system was stronger, stiffer, and had higher elongation at failure than the resin transfer molding (RTM) epoxy. Open hole tension tests showed that PEEK resin is more notch sensitive than RTM epoxy. Of greater significance, it was found that the 3D is less notch sensitive than the 2D. Unnotched compression tests indicated, as did the tension tests, that the 2D is stronger, stiffer, and has higher elongation at failure than the RTM epoxy. The most encouraging results were from compression after impact. The 3D braided composite showed a compression after impact failure stress equal to 92 percent of the unimpacted specimen. The 2D braided composite failed at about 67 percent of the unimpacted specimen. Higher damage tolerance is observed in textiles over conventional composite materials. This is observed in the results, especially in the 3D braided materials.
Distribution of CYP2D6 alleles and phenotypes in the Brazilian population.
Friedrich, Deise C; Genro, Júlia P; Sortica, Vinicius A; Suarez-Kurtz, Guilherme; de Moraes, Maria Elizabete; Pena, Sergio D J; dos Santos, Andrea K Ribeiro; Romano-Silva, Marco A; Hutz, Mara H
2014-01-01
The CYP2D6 enzyme is one of the most important members of the cytochrome P450 superfamily. This enzyme metabolizes approximately 25% of currently prescribed medications. The CYP2D6 gene presents a high allele heterogeneity that determines great inter-individual variation. The aim of this study was to evaluate the variability of CYP2D6 alleles, genotypes and predicted phenotypes in Brazilians. Eleven single nucleotide polymorphisms and CYP2D6 duplications/multiplications were genotyped by TaqMan assays in 1020 individuals from North, Northeast, South, and Southeast Brazil. Eighteen CYP2D6 alleles were identified in the Brazilian population. The CYP2D6*1 and CYP2D6*2 alleles were the most frequent and widely distributed in different geographical regions of Brazil. The highest number of CYP2D6 alleles observed was six and the frequency of individuals with more than two copies ranged from 6.3% (in Southern Brazil) to 10.2% (Northern Brazil). The analysis of molecular variance showed that CYP2D6 is homogeneously distributed across different Brazilian regions and most of the differences can be attributed to inter-individual differences. The most frequent predicted metabolic status was EM (83.5%). Overall 2.5% and 3.7% of Brazilians were PMs and UMs, respectively. Genomic ancestry proportions differ only in the prevalence of intermediate metabolizers. The IM predicted phenotype is associated with a higher proportion of African ancestry and a lower proportion of European ancestry in Brazilians. PM and UM classes did not vary among regions and/or ancestry proportions; therefore, unique CYP2D6 testing guidelines for Brazilians are possible and could potentially avoid ineffective or adverse event outcomes due to drug prescriptions. PMID:25329392
Presynaptic GluN2D receptors detect glutamate spillover and regulate cerebellar GABA release.
Dubois, Christophe J; Lachamp, Philippe M; Sun, Lu; Mishina, Masayoshi; Liu, Siqiong June
2016-01-01
Glutamate directly activates N-methyl-d-aspartate (NMDA) receptors on presynaptic inhibitory interneurons and enhances GABA release, altering the excitatory-inhibitory balance within a neuronal circuit. However, which class of NMDA receptors is involved in the detection of glutamate spillover is not known. GluN2D subunit-containing NMDA receptors are ideal candidates as they exhibit a high affinity for glutamate. We now show that cerebellar stellate cells express both GluN2B and GluN2D NMDA receptor subunits. Genetic deletion of GluN2D subunits prevented a physiologically relevant, stimulation-induced, lasting increase in GABA release from stellate cells [long-term potentiation of inhibitory transmission (I-LTP)]. NMDA receptors are tetramers composed of two GluN1 subunits associated with either two identical subunits (di-heteromeric receptors) or two different subunits (tri-heteromeric receptors). To determine whether tri-heteromeric GluN2B/2D NMDA receptors mediate I-LTP, we tested the prediction that deletion of GluN2D converts tri-heteromeric GluN2B/2D to di-heteromeric GluN2B NMDA receptors. We find that prolonged stimulation rescued I-LTP in GluN2D knockout mice, and this was abolished by GluN2B receptor blockers that failed to prevent I-LTP in wild-type mice. Therefore, NMDA receptors that contain both GluN2D and GluN2B mediate the induction of I-LTP. Because these receptors are not present in the soma and dendrites, presynaptic tri-heteromeric GluN2B/2D NMDA receptors in inhibitory interneurons are likely to mediate the cross talk between excitatory and inhibitory transmission.
Gil, Bomi; Hwang, Eo-Jin; Lee, Song; Jang, Jinhee; Jung, So-Lyung; Ahn, Kook-Jin; Kim, Bum-soo
2016-01-01
Introduction To compare the diagnostic accuracy of contrast-enhanced 3D (three-dimensional) T1-weighted sampling perfection with application-optimized contrasts by using different flip angle evolutions (T1-SPACE), 2D fluid-attenuated inversion recovery (FLAIR) images, and 2D contrast-enhanced T1-weighted images in the detection of leptomeningeal metastasis, without resorting to invasive procedures such as CSF tapping. Materials and Methods Three groups of patients were included retrospectively over a 9-month period (from 2013-04-01 to 2013-12-31): group 1, patients with positive malignant cells in CSF cytology (n = 22); group 2, stroke patients with steno-occlusion in the ICA or MCA (n = 16); and group 3, patients with negative results on MRI whose symptoms were dizziness or headache (n = 25). A total of 63 sets of MR images were separately collected and randomly arranged: (1) CE 3D T1-SPACE; (2) 2D FLAIR; and (3) CE T1-GRE, using a 3-Tesla MR system. A faculty neuroradiologist with 8 years of experience and a second-year radiology trainee reviewed each MR image, blinded to the results of CSF cytology, and coded their observations as positive or negative for leptomeningeal metastasis. The CSF cytology result was considered the gold standard. Sensitivity and specificity of each MR sequence were calculated. Diagnostic accuracy was compared using McNemar's test. A Cohen's kappa analysis was performed to assess inter-observer agreement. Results Diagnostic accuracy was not different between 3D T1-SPACE and CSF cytology for both raters. However, the accuracy of 2D FLAIR and 2D contrast-enhanced T1-weighted GRE was inconsistent between the two raters. The kappa statistics were 0.657 (3D T1-SPACE), 0.420 (2D FLAIR), and 0.160 (2D contrast-enhanced T1-weighted GRE). The 3D T1-SPACE images showed the highest inter-observer agreement between the raters. Conclusions Compared to 2D FLAIR and 2D contrast-enhanced T1-weighted GRE, contrast-enhanced 3D T1-SPACE showed a better detection rate of
Sparse and incomplete factorial matrices to screen membrane protein 2D crystallization.
Lasala, R; Coudray, N; Abdine, A; Zhang, Z; Lopez-Redondo, M; Kirshenbaum, R; Alexopoulos, J; Zolnai, Z; Stokes, D L; Ubarretxena-Belandia, I
2015-02-01
Electron crystallography is well suited for studying the structure of membrane proteins in their native lipid bilayer environment. This technique relies on electron cryomicroscopy of two-dimensional (2D) crystals, grown generally by reconstitution of purified membrane proteins into proteoliposomes under conditions favoring the formation of well-ordered lattices. Growing these crystals presents one of the major hurdles in the application of this technique. To identify conditions favoring crystallization a wide range of factors that can lead to a vast matrix of possible reagent combinations must be screened. However, in 2D crystallization these factors have traditionally been surveyed in a relatively limited fashion. To address this problem we carried out a detailed analysis of published 2D crystallization conditions for 12 β-barrel and 138 α-helical membrane proteins. From this analysis we identified the most successful conditions and applied them in the design of new sparse and incomplete factorial matrices to screen membrane protein 2D crystallization. Using these matrices we have run 19 crystallization screens for 16 different membrane proteins totaling over 1300 individual crystallization conditions. Six membrane proteins have yielded diffracting 2D crystals suitable for structure determination, indicating that these new matrices show promise to accelerate the success rate of membrane protein 2D crystallization.
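The sparse/incomplete factorial idea described above can be sketched as sampling a small subset of the full factor-level matrix. The factor names and levels below are hypothetical placeholders, not the conditions identified in the paper:

```python
import itertools
import random

# Hypothetical factor levels for a 2D crystallization screen; names and
# values are illustrative stand-ins, not the paper's identified conditions.
factors = {
    "lipid_protein_ratio": [0.25, 0.5, 1.0],
    "pH":                  [5.5, 7.0, 8.5],
    "NaCl_mM":             [50, 150, 300],
    "temperature_C":       [4, 20, 37],
}

# Full factorial matrix: every combination of levels (3^4 = 81 conditions).
full_matrix = list(itertools.product(*factors.values()))

# Sparse, incomplete subset: screen only a sample of the matrix.
random.seed(1)
screen = random.sample(full_matrix, 12)
```

Each sampled tuple is one crystallization condition to set up; the full matrix would require 81 trials, the sparse screen only 12.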
Mechanical characterization of 2D, 2D stitched, and 3D braided/RTM materials
NASA Technical Reports Server (NTRS)
Deaton, Jerry W.; Kullerd, Susan M.; Portanova, Marc A.
1993-01-01
Braided composite materials have potential for application in aircraft structures. Fuselage frames, floor beams, wing spars, and stiffeners are examples where braided composites could find application if cost effective processing and damage tolerance requirements are met. Another important consideration for braided composites relates to their mechanical properties and how they compare to the properties of composites produced by other textile composite processes being proposed for these applications. Unfortunately, mechanical property data for braided composites do not appear extensively in the literature. Data are presented in this paper on the mechanical characterization of 2D triaxial braid, 2D triaxial braid plus stitching, and 3D (through-the-thickness) braid composite materials. The braided preforms all had the same graphite tow size and the same nominal braid architectures (+/- 30 deg/0 deg), and were resin transfer molded (RTM) using the same mold for each of two different resin systems. Static data are presented for notched and unnotched tension, notched and unnotched compression, and compression after impact strengths at room temperature. In addition, some static results, after environmental conditioning, are included. Baseline tension and compression fatigue results are also presented, but only for the 3D braided composite material with one of the resin systems.
NASA Astrophysics Data System (ADS)
Benjamini, Dan; Basser, Peter J.
2016-10-01
Measuring multidimensional (e.g., 2D) relaxation spectra in NMR and MRI clinical applications is a holy grail of the porous media and biomedical MR communities. The main bottleneck is the inversion of Fredholm integrals of the first kind, an ill-conditioned problem requiring large amounts of data to stabilize a solution. We suggest a novel experimental design and processing framework to accelerate and improve the reconstruction of such 2D spectra that uses a priori information from the 1D projections of spectra, or marginal distributions. These 1D marginal distributions provide powerful constraints when 2D spectra are reconstructed, and their estimation requires an order of magnitude less data than a conventional 2D approach. This marginal distributions constrained optimization (MADCO) methodology is demonstrated here with a polyvinylpyrrolidone-water phantom that has 3 distinct peaks in the 2D D-T1 space. The stability, sensitivity to experimental parameters, and accuracy of this new approach are compared with conventional methods by serially subsampling the full data set. While the conventional, unconstrained approach performed poorly, the new method proved to be highly accurate and robust, only requiring a fraction of the data. Additionally, synthetic T1-T2 data are presented to explore the effects of noise on the estimations, and the performance of the proposed method with a smooth and realistic 2D spectrum. The proposed framework is quite general and can also be used with a variety of 2D MRI experiments (D-T2, T1-T2, D-D, etc.), making these potentially feasible for preclinical and even clinical applications for the first time.
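The marginal-constraint idea can be sketched on a toy, discretised 2D Fredholm problem: solve a non-negative least-squares inversion by projected gradient descent, with the 1D marginals added as quadratic penalties. The kernels, grids, and optimiser below are illustrative stand-ins, not the authors' MADCO implementation:

```python
import numpy as np

n = 8                                   # spectrum grid points per dimension
tau = np.linspace(0.1, 1.0, n)          # hypothetical relaxation-time grid
t1 = np.linspace(0.05, 2.0, 12)         # acquisition times, dimension 1
t2 = np.linspace(0.05, 2.0, 12)         # acquisition times, dimension 2
K1 = np.exp(-np.outer(t1, 1.0 / tau))   # exponential Fredholm kernels
K2 = np.exp(-np.outer(t2, 1.0 / tau))

F_true = np.zeros((n, n))               # sparse two-peak "spectrum"
F_true[2, 5], F_true[6, 1] = 1.0, 0.5
M = K1 @ F_true @ K2.T                  # noiseless 2D data

m1 = F_true.sum(axis=1)                 # 1D marginals (in MADCO these are
m2 = F_true.sum(axis=0)                 # estimated cheaply from 1D data)

lam = 1.0                               # marginal-penalty weight
s1 = np.linalg.norm(K1, 2)              # spectral norms for the step size
s2 = np.linalg.norm(K2, 2)
eta = 1.0 / (s1**2 * s2**2 + 2 * lam * n)  # step below the Lipschitz bound

F = np.zeros((n, n))
for _ in range(5000):                   # projected gradient descent
    G = K1.T @ (K1 @ F @ K2.T - M) @ K2
    G += lam * (F.sum(axis=1) - m1)[:, None]   # row-marginal penalty
    G += lam * (F.sum(axis=0) - m2)[None, :]   # column-marginal penalty
    F = np.maximum(F - eta * G, 0.0)    # project onto the non-negative cone
```

The marginal penalties act as the extra constraints that stabilise the otherwise ill-conditioned inversion; with noise, the penalty weight would be tuned against the data misfit.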
Computational Screening of 2D Materials for Photocatalysis.
Singh, Arunima K; Mathew, Kiran; Zhuang, Houlong L; Hennig, Richard G
2015-03-19
Two-dimensional (2D) materials exhibit a range of extraordinary electronic, optical, and mechanical properties different from their bulk counterparts with potential applications for 2D materials emerging in energy storage and conversion technologies. In this Perspective, we summarize the recent developments in the field of solar water splitting using 2D materials and review a computational screening approach to rapidly and efficiently discover more 2D materials that possess properties suitable for solar water splitting. Computational tools based on density-functional theory can predict the intrinsic properties of potential photocatalyst such as their electronic properties, optical absorbance, and solubility in aqueous solutions. Computational tools enable the exploration of possible routes to enhance the photocatalytic activity of 2D materials by use of mechanical strain, bias potential, doping, and pH. We discuss future research directions and needed method developments for the computational design and optimization of 2D materials for photocatalysis.
Synthetic Covalent and Non-Covalent 2D Materials.
Boott, Charlotte E; Nazemi, Ali; Manners, Ian
2015-11-16
The creation of synthetic 2D materials represents an attractive challenge that is ultimately driven by their prospective uses in, for example, electronics, biomedicine, catalysis, sensing, and as membranes for separation and filtration. This Review illustrates some recent advances in this diverse field with a focus on covalent and non-covalent 2D polymers and frameworks, and self-assembled 2D materials derived from nanoparticles, homopolymers, and block copolymers.
From 2-D electrophoresis to proteomics.
Klose, Joachim
2009-06-01
At first, a short history of the beginning of 2-DE is provided. Based on the state of the art at the time, in 1975 I developed a 2-DE technique that was able to resolve complex protein extracts from mouse tissues into hundreds of protein spots. My intention was to study proteins from a global point of view. Questions of interest were: how do proteins change during embryonic development, and what is the effect of induced mutations at the protein level? At that time, protein chemistry was a matter of analyzing single proteins in detail. Therefore, my approach was frequently criticized as inappropriate because it would be impossible to identify and characterize the hundreds of proteins resolved. But it was soon realized that studying total proteins provides opportunities to answer many interesting questions. This led to a research field nowadays called "proteomics". Already at the beginning of the 1980s the idea of analyzing the total human proteins had come up. On entering the post-genome era it became obvious that a human proteome project is needed in order to explain the human genome in terms of its functions. The problems in realizing such a project are considered. PMID:19517494
Eng, Charis; Sharp, Richard R
2010-02-01
A number of for-profit companies now provide personal genomic testing services to clients directly, without input from a physician or other health care provider, and the results of these tests include predictions about a broad spectrum of disease risks and traits. Validated clinical genetic testing and direct-to-consumer (DTC) genomic tests differ substantially in their reliability and usefulness, raising many clinical, ethical, and societal challenges, which are discussed in this Commentary. Of special concern is the problem of misattributed equivalence, which occurs when a patient or physician mistakenly views alternative methods of genetic evaluation as equivalent in their results and analytic rigor. Despite the many challenges raised by DTC genomic testing, we are reminded that commercial interests have sometimes acted as a disruptive force or technology that drives nonconventional approaches to difficult problems.
Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy
NASA Astrophysics Data System (ADS)
Naaz, Farah
Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups:
Corbin, William R.; Iwamoto, Derek K.; Fromme, Kim
2011-01-01
Objective: According to the acquired preparedness model (APM), personality traits related to disinhibition (i.e., impulsivity and sensation seeking) may influence the learning process, contributing to individual differences in cognitions (e.g., expectations about outcomes) that may contribute to engagement in and consequences of risk behaviors, including alcohol use. Although there is strong support for the APM, longitudinal studies have involved short-term follow-ups, and the relevance of the APM for alcohol-related consequences has not been clearly established. Method: Participants were 2,245 (59.9% female) incoming freshmen who completed the first of eight web-based surveys during the summer before college matriculation. Structural equation modeling was used to test a comprehensive longitudinal APM for both alcohol use and related consequences. Multigroup models were used to examine measurement and structural invariance by gender. Results: Positive (but not negative) alcohol expectancies during freshman year of college partially mediated the relation between senior year of high school disinhibition and both alcohol use and related problems during the fourth year of college, and multigroup models suggested that the relationships proposed in the APM operated similarly for women and men. Conclusions: This study demonstrates the temporal relations proposed in the APM across a longer period (4 years) than in previous studies among a large sample of ethnically diverse students. Further, the results are the first to validate the APM with respect to drinking consequences while controlling for levels of alcohol use. The results lend support for brief interventions targeting positive alcohol expectancies, particularly for individuals high in trait disinhibition. PMID:21683042
A Geometric Boolean Library for 2D Objects
2006-01-05
The 2D Boolean Library is a collection of C++ classes, which primarily represent 2D geometric data and relationships, and routines, which contain algorithms for 2D geometric Boolean operations and utility functions. Classes are provided for 2D points, lines, arcs, edgeuses, loops, surfaces and mask sets. Routines are provided that incorporate the Boolean operations Union (OR), XOR, Intersection and Difference. Various analytical geometry routines, and routines for importing and exporting the data in various file formats, are also provided in the library.
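To illustrate the kind of operations the library exposes (without reproducing its C++ API), here is a toy Python sketch of Boolean Intersection, Union (OR) and XOR on axis-aligned rectangles via inclusion-exclusion; the class and function names are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    """Axis-aligned rectangle, a toy stand-in for the library's 2D surfaces."""
    x0: float; y0: float; x1: float; y1: float

    def area(self) -> float:
        # degenerate (empty) rectangles get area 0
        return max(0.0, self.x1 - self.x0) * max(0.0, self.y1 - self.y0)

def intersection(a: Rect, b: Rect) -> Rect:
    """Boolean Intersection: the overlap region (possibly empty)."""
    return Rect(max(a.x0, b.x0), max(a.y0, b.y0),
                min(a.x1, b.x1), min(a.y1, b.y1))

def union_area(a: Rect, b: Rect) -> float:
    """Area of the Boolean Union (OR), by inclusion-exclusion."""
    return a.area() + b.area() - intersection(a, b).area()

def xor_area(a: Rect, b: Rect) -> float:
    """Area of the Boolean XOR: covered by exactly one rectangle."""
    return union_area(a, b) - intersection(a, b).area()

a, b = Rect(0, 0, 2, 2), Rect(1, 1, 3, 3)   # overlap is the unit square
```

General polygonal Boolean operations (as in the library) require edge clipping and loop reassembly; the rectangle case above only shows the set-algebra relationships.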
VizieR Online Data Catalog: The 2dF Galaxy Redshift Survey (2dFGRS) (2dFGRS Team, 1998-2003)
NASA Astrophysics Data System (ADS)
Colless, M.; Dalton, G.; Maddox, S.; Sutherland, W.; Norberg, P.; Cole, S.; Bland-Hawthorn, J.; Bridges, T.; Cannon, R.; Collins, C.; Couch, W.; Cross, N.; Deeley, K.; de Propris, R.; Driver, S. P.; Efstathiou, G.; Ellis, R. S.; Frenk, C. S.; Glazebrook, K.; Jackson, C.; Lahav, O.; Lewis, I.; Lumsden, S.; Madgwick, D.; Peacock, J. A.; Peterson, B. A.; Price, I.; Seaborne, M.; Taylor, K.
2007-11-01
The 2dF Galaxy Redshift Survey (2dFGRS) is a major spectroscopic survey taking full advantage of the unique capabilities of the 2dF facility built by the Anglo-Australian Observatory. The 2dFGRS is integrated with the 2dF QSO survey (2QZ, Cat. VII/241). The 2dFGRS obtained spectra for 245591 objects, mainly galaxies, brighter than a nominal extinction-corrected magnitude limit of bJ=19.45. Reliable (quality>=3) redshifts were obtained for 221414 galaxies. The galaxies cover an area of approximately 1500 square degrees selected from the extended APM Galaxy Survey in three regions: a North Galactic Pole (NGP) strip, a South Galactic Pole (SGP) strip, and random fields scattered around the SGP strip. Redshifts are measured from spectra covering 3600-8000 Angstroms at a two-pixel resolution of 9.0 Angstrom and a median S/N of 13 per pixel. All redshift identifications are visually checked and assigned a quality parameter Q in the range 1-5; Q>=3 redshifts are 98.4% reliable and have an rms uncertainty of 85 km/s. The overall redshift completeness for Q>=3 redshifts is 91.8% but this varies with magnitude from 99% for the brightest galaxies to 90% for objects at the survey limit. The 2dFGRS data base is available on the World Wide Web at http://www.mso.anu.edu.au/2dFGRS/. (6 data files).
Estimating 2-D vector velocities using multidimensional spectrum analysis.
Oddershede, Niels; Løvstakken, Lasse; Torp, Hans; Jensen, Jørgen Arendt
2008-08-01
Wilson (1991) presented an ultrasonic wideband estimator for axial blood flow velocity estimation through the use of the 2-D Fourier transform. It was shown how a single velocity component was concentrated along a line in the 2-D Fourier space, where the slope was given by the axial velocity. Later, it was shown that this approach could also be used for finding the lateral velocity component by also including a lateral sampling. A single velocity component would then be concentrated along a plane in the 3-D Fourier space, tilted according to the 2 velocity components. This paper presents 2 new velocity estimators for finding both the axial and lateral velocity components. The estimators essentially search for the plane in the 3-D Fourier space, where the integrated power spectrum is largest. The first uses the 3-D Fourier transform to find the power spectrum, while the second uses a minimum variance approach. Based on this plane, the axial and lateral velocity components are estimated. Several phantom measurements, for flow-to-depth angles of 60, 75, and 90 degrees, were performed. Multiple parallel lines were beamformed simultaneously, and 2 different receive apodization schemes were tried. The 2 estimators were then applied to the data. The axial velocity component was estimated with an average standard deviation below 2.8% of the peak velocity, while the average standard deviation of the lateral velocity estimates was between 2.0% and 16.4%. The 2 estimators were also tested on in vivo data from a transverse scan of the common carotid artery, showing the potential of the vector velocity estimation method under in vivo conditions. PMID:18986918
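The core idea, that a translating pattern concentrates its spectral energy on a line (or plane) whose slope is the velocity, and that the estimator searches for the slope of maximal integrated power, can be illustrated in one spatial dimension. This is a toy sketch with integer shifts of a periodic signal, not the authors' beamformed ultrasound processing:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 16          # spatial samples per frame, number of frames
v_true = 3             # true displacement in samples per frame

# moving pattern: s[t, x] = f[x - v*t] (periodic)
f = rng.standard_normal(N)
s = np.stack([np.roll(f, v_true * t) for t in range(M)])

Sx = np.fft.fft(s, axis=1)          # spatial spectrum of every frame
k = np.arange(N)

def integrated_power(v):
    """Coherently sum frame spectra after removing the phase ramp a shift
    of v samples/frame would impose; peaks when v matches the motion."""
    phase = np.exp(2j * np.pi * np.outer(np.arange(M), k) * v / N)
    return np.sum(np.abs(np.sum(Sx * phase, axis=0)) ** 2)

candidates = np.arange(-8, 9)       # velocity search grid
v_est = candidates[np.argmax([integrated_power(v) for v in candidates])]
```

At the true velocity the per-frame spectra add coherently, so the integrated power peaks there; the same search over plane orientations yields both velocity components in the 3-D Fourier case.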
ORMDIN. 2-D Nonlinear Inverse Heat Conduction
Bass, B.R.
1990-05-01
ORMDIN is a finite-element program developed for two-dimensional nonlinear inverse heat conduction analysis as part of the Oak Ridge National Laboratory Pressurized Water Reactor Blowdown Heat Transfer (BDHT) program. One of the primary objectives of the program was to determine the transient surface temperature and surface heat flux of fuel pin simulators from internal thermocouple signals obtained during a loss-of-coolant accident experiment in the Thermal-Hydraulic Test Facility (THTF). ORMDIN was designed primarily to perform a transient two-dimensional nonlinear inverse heat conduction analysis of the THTF bundle 3 heater rod; however, it can be applied to other cylindrical geometries for which the thermophysical properties are prescribed functions of temperature. The program assumes that discretized temperature histories are provided at three thermocouple locations in the interior of the cylinder. Concurrent with the two-dimensional analysis, ORMDIN also generates one-dimensional solutions for each of the three thermocouple radial planes.
Classification of Standard Planes in 2D Echocardiography by Means of 2D-3D Image Registration
NASA Astrophysics Data System (ADS)
Bergmeir, Christoph; Subramanian, Navneeth
With the aim of developing a system that guides an inexperienced ultrasound (US) user toward acquiring the relevant anatomical structures, we investigate the feasibility of 2D-US to 3D-CT registration. We use US images of standard planes of the heart, which are registered to a 3D CT model. Our algorithm subjects both the US images and the CT dataset to preprocessing steps that reduce the data, by segmentation, to the essential information in the form of labels for muscle and blood. These labels are then used for registration by means of the match-cardinality metric. By registering repeatedly with different initializations, we determine the standard plane visible in the US image. We evaluated the method on seven US images of standard planes; five of these were assigned correctly.
Epitaxial 2D SnSe2/2D WSe2 van der Waals Heterostructures.
Aretouli, Kleopatra Emmanouil; Tsoutsou, Dimitra; Tsipas, Polychronis; Marquez-Velasco, Jose; Aminalragia Giamini, Sigiava; Kelaidis, Nicolaos; Psycharis, Vassilis; Dimoulas, Athanasios
2016-09-01
van der Waals heterostructures of 2D semiconductor materials can be used to realize a number of (opto)electronic devices including tunneling field effect devices (TFETs). It is shown in this work that a high-quality SnSe2/WSe2 vdW heterostructure can be grown by molecular beam epitaxy on AlN(0001)/Si(111) substrates using a Bi2Se3 buffer layer. A valence band offset of 0.8 eV matches the energy gap of SnSe2 in such a way that the VB edge of WSe2 and the CB edge of SnSe2 are lined up, making this materials combination suitable for (nearly) broken-gap TFETs. PMID:27537619
CVMAC 2D Program: A method of converting 3D to 2D
Lown, J.
1990-06-20
This paper presents the user with a method of converting a three- dimensional wire frame model into a technical illustration, detail, or assembly drawing. By using the 2D Program, entities can be mapped from three-dimensional model space into two-dimensional model space, as if they are being traced. Selected entities to be mapped can include circles, arcs, lines, and points. This program prompts the user to digitize the view to be mapped, specify the layers in which the new two-dimensional entities will reside, and select the entities, either by digitizing or windowing. The new two-dimensional entities are displayed in a small view which the program creates in the lower left corner of the drawing. 9 figs.
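The mapping step can be sketched as an orthographic projection of selected 3D entities into a 2D view, here for the points of a wire-frame cube; the view names and the `project` helper are hypothetical, and the program's digitize/window interface is not reproduced:

```python
import numpy as np

# unit-cube wire frame: 8 vertices and the 12 axis-aligned unit edges
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                 dtype=float)
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if np.sum(np.abs(verts[i] - verts[j])) == 1]  # differ in one axis

def project(points, view):
    """Orthographically 'trace' 3D points into a 2D drawing plane by
    dropping one coordinate (a hypothetical stand-in for the mapping)."""
    keep = {"front": (0, 1), "top": (0, 2), "side": (1, 2)}[view]
    return points[:, keep]

flat = project(verts, "front")   # 2D entities ready for a detail drawing
```

Selected circles, arcs and lines would be mapped the same way, entity by entity, into the chosen two-dimensional layer.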
49 CFR 40.205 - How are drug test problems corrected?
Code of Federal Regulations, 2014 CFR
2014-10-01
... the same business day on which you are notified of the problem, transmitting it by fax or courier. (2... information on the same business day on which you are notified of the problem, transmitting it by fax or courier. (3) You must maintain the written documentation of a correction with the CCF. (4) You must...
NASA Technical Reports Server (NTRS)
Banks, H. T.; Kojima, Fumio
1988-01-01
The identification of the geometrical structure of the system boundary for a two-dimensional diffusion system is reported. The domain identification problem treated here is converted into an optimization problem based on a fit-to-data criterion and theoretical convergence results for approximate identification techniques are discussed. Results of numerical experiments to demonstrate the efficacy of the theoretical ideas are reported.
2D Four-Channel Perfect Reconstruction Filter Bank Realized with the 2D Lattice Filter Structure
NASA Astrophysics Data System (ADS)
Sezen, S.; Ertüzün, A.
2006-12-01
A novel orthogonal 2D lattice structure is incorporated into the design of a nonseparable 2D four-channel perfect reconstruction filter bank. The proposed filter bank is obtained by using the polyphase decomposition technique, which requires the design of an orthogonal 2D lattice filter. Due to the perfect reconstruction constraint, each stage of this lattice filter bank is simply parameterized by two coefficients. The perfect reconstruction property is satisfied regardless of the actual values of these parameters and of the number of the lattice stages. It is also shown that a separable 2D four-channel perfect reconstruction lattice filter bank can be constructed from the 1D lattice filter and that this is a special case of the proposed 2D lattice filter bank under certain conditions. The perfect reconstruction property of the proposed 2D lattice filter approach is verified by computer simulations.
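The polyphase decomposition underlying this design can be illustrated with the trivial "lazy" four-channel split into even/odd rows and columns, which is perfect-reconstruction by construction; in the paper's filter bank, the lattice stages add the actual filtering while preserving PR regardless of the coefficient values:

```python
import numpy as np

x = np.arange(64, dtype=float).reshape(8, 8)   # toy input image

# analysis: four polyphase components (even/odd rows x even/odd columns)
sub = [x[i::2, j::2] for i in (0, 1) for j in (0, 1)]

# synthesis: interleave the four channels back into the full-rate signal
y = np.empty_like(x)
y[0::2, 0::2], y[0::2, 1::2], y[1::2, 0::2], y[1::2, 1::2] = sub
# y reproduces x sample for sample: the lazy bank is PR by construction
```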
ERIC Educational Resources Information Center
Leahy, Wayne; Hanham, José; Sweller, John
2015-01-01
The testing effect occurs when learners who are tested rather than relearning material perform better on a final test than those who relearn. Based on cognitive load theory, it was predicted that the testing effect may not be obtained when the material being learned is high in element interactivity. Three experiments investigated conditions of the…
Solutions for Some Technical Problems in Domain-Referenced Mastery Testing. Final Report.
ERIC Educational Resources Information Center
Huynh, Huynh; Saunders, Joseph C.
A basic technical framework is provided for the design and use of mastery tests. The Mastery Testing Project (MTP) prepared this framework using advanced mathematics supplemented with computer simulation based on real test data collected by the South Carolina Statewide Testing Program. The MTP focused on basic technical issues encountered in using…
Functional characterization of CYP2D6 enhancer polymorphisms
Wang, Danxin; Papp, Audrey C.; Sun, Xiaochun
2015-01-01
CYP2D6 metabolizes nearly 25% of clinically used drugs. Genetic polymorphisms cause large inter-individual variability in CYP2D6 enzyme activity and are currently used as biomarker to predict CYP2D6 metabolizer phenotype. Previously, we had identified a region 115 kb downstream of CYP2D6 as enhancer for CYP2D6, containing two completely linked single nucleotide polymorphisms (SNPs), rs133333 and rs5758550, associated with enhanced transcription. However, the enhancer effect on CYP2D6 expression, and the causative variant, remained to be ascertained. To characterize the CYP2D6 enhancer element, we applied chromatin conformation capture combined with the next-generation sequencing (4C assays) and chromatin immunoprecipitation with P300 antibody, in HepG2 and human primary culture hepatocytes. The results confirmed the role of the previously identified enhancer region in CYP2D6 expression, expanding the number of candidate variants to three highly linked SNPs (rs133333, rs5758550 and rs4822082). Among these, only rs5758550 demonstrated regulating enhancer activity in a reporter gene assay. Use of clustered regularly interspaced short palindromic repeats mediated genome editing in HepG2 cells targeting suspected enhancer regions decreased CYP2D6 mRNA expression by 70%, only upon deletion of the rs5758550 region. These results demonstrate robust effects of both the enhancer element and SNP rs5758550 on CYP2D6 expression, supporting consideration of rs5758550 for CYP2D6 genotyping panels to yield more accurate phenotype prediction. PMID:25381333
Topological evolutionary computing in the optimal design of 2D and 3D structures
NASA Astrophysics Data System (ADS)
Burczynski, T.; Poteralski, A.; Szczepanik, M.
2007-10-01
An application of evolutionary algorithms and the finite-element method to the topology optimization of 2D structures (plane stress, bending plates, and shells) and 3D structures is described. The basis of the topological evolutionary optimization is the direct control of the density material distribution (or thickness for 2D structures) by the evolutionary algorithm. The structures are optimized for stress, mass, and compliance criteria. The numerical examples demonstrate that this method is an effective technique for solving problems in computer-aided optimal design.
No relation between 2D : 4D fetal testosterone marker and dyslexia.
Boets, Bart; De Smedt, Bert; Wouters, Jan; Lemay, Katrien; Ghesquière, Pol
2007-09-17
It has been suggested that high levels of prenatal testosterone exposure are implied in the aetiology of dyslexia and its frequently co-occurring sensory problems. This study examined 2D : 4D digit ratio (a marker of fetal testosterone exposure) in dyslexic and normal reading children. No group differences in 2D : 4D were observed. Digit ratio did not show the postulated relation with reading, spelling, phonological ability, speech perception, auditory processing and visual processing. These findings challenge the validity of theories that allocate a prominent role to fetal testosterone exposure in the aetiology of dyslexia and its sensory impairments. PMID:17712280
Huang, Peng; Tilley, Barbara C.; Woolson, Robert F.; Lipsitz, Stuart
2010-01-01
O'Brien (1984, Biometrics 40, 1079–1087) introduced a simple nonparametric test procedure for testing whether multiple outcomes in one treatment group have consistently larger values than outcomes in the other treatment group. We first explore the theoretical properties of O'Brien's test. We then extend it to the general nonparametric Behrens–Fisher hypothesis problem when no assumption is made regarding the shape of the distributions. We provide conditions under which O'Brien's test controls its error probability asymptotically and when it fails. We also provide adjusted tests for when the conditions do not hold. Throughout this article, we do not assume that all outcomes are continuous. Simulations are performed to compare the adjusted tests to O'Brien's test. The difference is also illustrated using data from a Parkinson's disease clinical trial. PMID:16011701
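The composite procedure described in this abstract is easy to reproduce. Below is a minimal sketch of O'Brien's rank-sum test only (not the authors' adjusted variants): each outcome is ranked over the pooled sample, the ranks are summed per subject into one composite score, and the scores are compared with a two-sample t-test. The data used here are synthetic.

```python
import numpy as np
from scipy import stats

def obrien_rank_test(group_a, group_b):
    """O'Brien's (1984) rank-sum procedure for multiple outcomes.

    group_a, group_b: arrays of shape (n_subjects, n_outcomes).
    Each outcome is ranked across the pooled sample, each subject's
    ranks are summed into a composite score, and a two-sample t-test
    compares the composite scores between the groups.
    """
    pooled = np.vstack([group_a, group_b])
    ranks = stats.rankdata(pooled, axis=0)   # rank each outcome column
    scores = ranks.sum(axis=1)               # one composite score per subject
    n_a = group_a.shape[0]
    res = stats.ttest_ind(scores[:n_a], scores[n_a:])
    return res.statistic, res.pvalue
```

With a clear upward shift on every outcome in one group, the composite test rejects decisively even though each outcome is only ranked, not assumed normal.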
Melton-Celsa, Angela R; O'Brien, Alison D; Feng, Peter C H
2015-11-01
Shiga toxin (Stx)-producing Escherichia coli (STEC) strains are food- and waterborne pathogens that are often transmitted via beef products or fresh produce. STEC strains cause both sporadic infections and outbreaks, which may result in hemorrhagic colitis and hemolytic uremic syndrome. STEC strains may elaborate Stx1, Stx2, and/or subtypes of those toxins. Epidemiological evidence indicates that STEC that produce subtypes Stx2a, Stx2c, and/or Stx2d are more often associated with serious illness. The Stx2d subtype becomes more toxic to Vero cells after incubation with intestinal mucus or elastase, a process named "activation." Stx2d is not generally found in the E. coli serotypes most commonly connected to STEC outbreaks. However, STEC strains that are stx2d positive can be isolated from foods, an occurrence that gives rise to the question of whether those food isolates are potential human pathogens. In this study, we examined 14 STEC strains from fresh produce that were stx2d positive and found that they all produced the mucus-activatable Stx2d and that a subset of the strains tested were virulent in streptomycin-treated mice.
EFFECTS OF SMOKING ON D2/D3 STRIATAL RECEPTOR AVAILABILITY IN ALCOHOLICS AND SOCIAL DRINKERS
Albrecht, Daniel S.; Kareken, David A.; Yoder, Karmen K.
2013-01-01
Objective Studies have reported lower striatal D2/D3 receptor availability in both alcoholics and cigarette smokers relative to healthy controls. These substances are commonly co-abused, yet the relationship between comorbid alcohol/tobacco abuse and striatal D2/D3 receptor availability has not been examined. We sought to determine the degree to which dual abuse of alcohol and tobacco is associated with lower D2/D3 receptor availability. Method Eighty-one subjects (34 nontreatment-seeking alcoholic smokers [NTS-S], 21 social-drinking smokers [SD-S], and 26 social-drinking non-smokers [SD-NS]) received baseline [11C]raclopride scans. D2/D3 binding potential (BPND ≡ Bavail/KD) was estimated for ten anatomically defined striatal regions of interest (ROIs). Results Significant group effects were detected in bilateral pre-commissural dorsal putamen, bilateral pre-commissural dorsal caudate; and bilateral post-commissural dorsal putamen. Post-hoc testing revealed that, regardless of drinking status, smokers had lower D2/D3 receptor availability than non-smoking controls. Conclusions Chronic tobacco smokers have lower striatal D2/D3 receptor availability than non-smokers, independent of alcohol use. Additional studies are needed to identify the mechanisms by which chronic tobacco smoking is associated with striatal dopamine receptor availability. PMID:23649848
The use of 2D and 3D information in a perceptual-cognitive judgement task.
Put, Koen; Wagemans, Johan; Spitz, Jochim; Gallardo, Manuel Armenteros; Williams, A Mark; Helsen, Werner F
2014-01-01
We examined whether the use of three-dimensional (3D) simulations in an off-field offside decision-making task is beneficial compared to the more widely available two-dimensional (2D) simulations. Thirty-three assistant referees, who were all involved in professional football, participated in the experiment. They assessed 40 offside situations in both 2D and 3D formats using a counterbalanced design. A distinction was made between offside situations near (i.e., 15 m) and far (i.e., 30 m) from the touchline. Subsequently, a frame recognition task was performed in which assistant referees were asked to indicate which of the five pictures represented the previous video scene. A higher response accuracy score was observed under 3D (80.0%) compared to 2D (75.0%) conditions, in particular for the situations near the touchline (3D: 81.8%; 2D: 72.7%). No differences were reported between 2D and 3D in the frame recognition task. Findings suggest that in highly dynamic and complex situations, the visual system can benefit from the availability of 3D information, especially for relatively fine, metric position judgements. In the memory task, in which a mental abstraction had to be made from a dynamic situation to a static snapshot, 3D stereo disparities do not add anything over and beyond 2D simulations. The specific task demands should be taken into account when considering the most appropriate format for testing and training. PMID:24857384
NASA Astrophysics Data System (ADS)
Jeromin, A.; Schaffarczyk, A. P.; Puczylowski, J.; Peinke, J.; Hölling, M.
2014-12-01
For the investigation of atmospheric turbulent flows on small scales a new anemometer was developed, the so-called 2d-Atmospheric Laser Cantilever Anemometer (2d-ALCA). It performs highly resolved measurements with a spatial resolution in millimeter range and temporal resolution in kHz range, thus detecting very small turbulent structures. The anemometer is a redesign of the successfully operating 2d-LCA for laboratory application. The new device was designed to withstand hostile operating environments (rain and saline, humid air). In February 2012, the 2d-ALCA was used for the first time in a test field. The device was mounted in about 53 m above ground level on a lattice tower near the German North Sea coast. Wind speed was measured by the 2d-ALCA at 10 kHz sampling rate and by cup anemometers at 1 Hz. The instantaneous wind speed ranged from 8 m/s to 19 m/s at an average turbulence level of about 7 %. Wind field characteristics were analyzed based on cup anemometer as well as 2d-ALCA. The combination of both devices allowed the study of atmospheric turbulence over several magnitudes in turbulent scales.
Effect of CYP2D6 genetic polymorphism on the metabolism of citalopram in vitro.
Hu, Xiao-Xia; Yuan, Ling-Jing; Fang, Ping; Mao, Yong-Hui; Zhan, Yun-Yun; Li, Xiang-Yu; Dai, Da-Peng; Cai, Jian-Ping; Hu, Guo-Xin
2016-04-01
Genetic polymorphisms of CYP2D6 significantly influence the efficacy and safety of some drugs, which might cause adverse effects and therapeutic failure. We aimed at investigating the role of CYP2D6 in the metabolism of citalopram and identifying the effect of 24 CYP2D6 allelic variants we found in Chinese Han population on the metabolism of citalopram in vitro. These CYP2D6 variants expressed by insect cells system were incubated with 10-1000 μM citalopram for 30 min at 37 °C and the reaction was terminated by cooling to -80 °C immediately. Citalopram and its metabolites were analyzed by high-performance liquid chromatography (HPLC). The intrinsic clearance (Vmax/Km) values of the variants toward citalopram metabolites were significantly altered, 38-129% for demethylcitalopram and 13-138% for citalopram N-oxide when compared with CYP2D6*1. Most of the tested rare alleles exhibited significantly decreased values due to increased Km and/or decreased Vmax values. We conclude that recombinant system could be used to investigate the enzymes involved in drug metabolism and these findings suggest that more attention should be paid to subjects carrying these CYP2D6 alleles when administering citalopram in the clinic. PMID:27016952
Can exposure to prenatal sex hormones (2D:4D) predict cognitive reflection?
Bosch-Domènech, Antoni; Brañas-Garza, Pablo; Espín, Antonio M
2014-05-01
The Cognitive Reflection Test (CRT) is a test introduced by Frederick (2005). The task is designed to measure the tendency to override an intuitive response that is incorrect and to engage in further reflection that leads to the correct response. The consistent sex differences in CRT performance may suggest a role for prenatal sex hormones. A now widely studied putative marker for relative prenatal testosterone is the second-to-fourth digit ratio (2D:4D). This paper tests to what extent 2D:4D, as a proxy for the prenatal ratio of testosterone/estrogens, can predict CRT scores in a sample of 623 students. After controlling for sex, we observe that a lower 2D:4D (reflecting a relative higher exposure to testosterone) is significantly associated with a higher number of correct answers. The result holds for both hands' 2D:4Ds. In addition, the effect appears to be stronger for females than for males. We also control for patience and math proficiency, which are significantly related to performance in the CRT. But the effect of 2D:4D on performance in CRT is not reduced with these controls, implying that these variables are not mediating the relationship between digit ratio and CRT.
NASA Astrophysics Data System (ADS)
Chae, Dongho; Constantin, Peter; Wu, Jiahong
2014-09-01
We give an example of a well posed, finite energy, 2D incompressible active scalar equation with the same scaling as the surface quasi-geostrophic equation and prove that it can produce finite time singularities. In spite of its simplicity, this seems to be the first such example. Further, we construct explicit solutions of the 2D Boussinesq equations whose gradients grow exponentially in time for all time. In addition, we introduce a variant of the 2D Boussinesq equations which is perhaps a more faithful companion of the 3D axisymmetric Euler equations than the usual 2D Boussinesq equations.
PLAN2D - A PROGRAM FOR ELASTO-PLASTIC ANALYSIS OF PLANAR FRAMES
NASA Technical Reports Server (NTRS)
Lawrence, C.
1994-01-01
PLAN2D is a FORTRAN computer program for the plastic analysis of planar rigid frame structures. Given a structure and loading pattern as input, PLAN2D calculates the ultimate load that the structure can sustain before collapse. Element moments and plastic hinge rotations are calculated for the ultimate load. The locations of the hinges required for a collapse mechanism to form are also determined. The program proceeds in an iterative series of linear elastic analyses. After each iteration the resulting elastic moments in each member are compared to the reserve plastic moment capacity of that member. The member or members with moments closest to their reserve capacity determine the minimum load factor and the site where the next hinge is to be inserted. Next, hinges are inserted and the structural stiffness matrix is reformulated. This cycle is repeated until the structure becomes unstable. At this point the ultimate collapse load is calculated by accumulating the minimum load factors from all previous iterations and multiplying the result by the original input loads. PLAN2D is based on the program STAN, originally written by Dr. E.L. Wilson at U.C. Berkeley. PLAN2D has several limitations: 1) although PLAN2D will detect unloading of hinges, it does not contain the capability to remove hinges; 2) PLAN2D does not allow the user to input different positive and negative moment capacities; and 3) PLAN2D does not consider the interaction between axial load and plastic moment capacity. Axial yielding and buckling are ignored, as is the reduction in moment capacity due to axial load. PLAN2D is written in FORTRAN and is machine independent. It has been tested on an IBM PC and a DEC MicroVAX. The program was developed in 1988.
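The load-factor accumulation loop that PLAN2D performs can be illustrated at toy scale. The sketch below (in Python rather than FORTRAN, and not taken from the program itself) replaces each linear elastic frame analysis with precomputed moment coefficients for a hypothetical propped cantilever under a central point load; each stage's coefficient set stands in for one elastic solve after a hinge has been inserted.

```python
def collapse_load_factor(stages, mp):
    """PLAN2D-style accumulation of minimum load factors.

    `stages` is a list of dicts mapping section name -> elastic moment
    per unit load for the current hinge configuration. Each stage ends
    when the critical section reaches its plastic capacity `mp`; a hinge
    is then inserted, which is modeled by moving to the next stage.
    """
    m_acc = {}   # accumulated bending moment at each section
    lam = 0.0    # accumulated load factor
    for coeffs in stages:
        # incremental factor = reserve capacity / elastic moment per unit load
        incs = {s: (mp - abs(m_acc.get(s, 0.0))) / abs(c)
                for s, c in coeffs.items() if c != 0.0}
        d_lam = min(incs.values())
        lam += d_lam
        for s, c in coeffs.items():
            m_acc[s] = m_acc.get(s, 0.0) + d_lam * c
    return lam

# Propped cantilever, span L, unit point load at midspan, capacity Mp.
# Stage 1: intact structure; stage 2: hinge at the fixed end, so extra
# load is carried as on a simply supported beam.
L, Mp = 1.0, 1.0
stages = [
    {"support": 3 * L / 16, "midspan": 5 * L / 32},   # elastic coefficients
    {"midspan": L / 4},                               # after the support hinge
]
lam = collapse_load_factor(stages, Mp)
```

For this classical example the loop recovers the textbook collapse load of 6·Mp/L: the first hinge forms at the fixed end at load factor 16·Mp/(3L), and the mechanism completes at midspan after a further 2·Mp/(3L).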
3D-2D registration of cerebral angiograms based on vessel directions and intensity gradients
NASA Astrophysics Data System (ADS)
Mitrovic, Uroš; Špiclin, Žiga; Štern, Darko; Markelj, Primož; Likar, Boštjan; Miloševic, Zoran; Pernuš, Franjo
2012-02-01
Endovascular treatment of cerebral aneurysms and arteriovenous malformations (AVM) involves navigation of a catheter through the femoral artery and vascular system to the site of pathology. Intra-interventional navigation is done under the guidance of one or at most two two-dimensional (2D) X-ray fluoroscopic images or 2D digital subtracted angiograms (DSA). Due to the projective nature of 2D images, the interventionist needs to mentally reconstruct the position of the catheter in respect to the three-dimensional (3D) patient vasculature, which is not a trivial task. By 3D-2D registration of pre-interventional 3D images like CTA, MRA or 3D-DSA and intra-interventional 2D images, intra-interventional tools such as catheters can be visualized on the 3D model of patient vasculature, allowing easier and faster navigation. Such a navigation may consequently lead to the reduction of total ionizing dose and delivered contrast medium. In the past, development and evaluation of 3D-2D registration methods for endovascular treatments received considerable attention. The main drawback of these methods is that they have to be initialized rather close to the correct position as they mostly have a rather small capture range. In this paper, a novel registration method that has a higher capture range and success rate is proposed. The proposed method and a state-of-the-art method were tested and evaluated on synthetic and clinical 3D-2D image-pairs. The results on both databases indicate that although the proposed method was slightly less accurate, it significantly outperformed the state-of-the-art 3D-2D registration method in terms of robustness measured by capture range and success rate.
2-D Inhomogeneous Modeling of the Solar CO Bands
NASA Astrophysics Data System (ADS)
Ayres, T. R.
1996-05-01
The recent discovery of off-limb emissions in the mid-IR (~5 μm) vibration-rotation bands of solar carbon monoxide (CO) has sparked new interest in the formation of the molecular lines, and their ability to diagnose thermal conditions at high altitudes. The off-limb extensions of the strong CO lines indicate the penetration of cool material (T ~ 3500 K) several hundred kilometers into the otherwise hot (T ~ 6000 K) chromosphere. The origin of the cool gas, and its role in the thermal energy balance, remain controversial. The interpretation of the CO observations must rely heavily upon numerical modeling, in particular highly-inhomogeneous thermal structures arrayed in a 2-D scheme that can properly treat the geometry of the grazing rays at the solar limb. The radiation transport, itself, is especially simple for the CO off-limb emissions, because the fundamental bands form quite close to LTE (high collision rates; low spontaneous decay rates) and the background continuum is purely thermal as well (f-f transitions in H⁻ and H). Thus, the geometrical aspects of the problem can be treated in considerably more detail than would be practical for typical NLTE scattering lines. I describe the recent modeling efforts, and the diagnostic potential of the CO bands for future observational studies of inhomogeneous surface structure on the Sun, and on other stars of late spectral type. This work was supported by NSF grant AST-9218063 to the University of Colorado.
Adaptation algorithms for 2-D feedforward neural networks.
Kaczorek, T
1995-01-01
The generalized weight adaptation algorithms presented by J.G. Kuschewski et al. (1993) and by S.H. Zak and H.J. Sira-Ramirez (1990) are extended for 2-D madaline and 2-D two-layer feedforward neural nets (FNNs).
Integrating Mobile Multimedia into Textbooks: 2D Barcodes
ERIC Educational Resources Information Center
Uluyol, Celebi; Agca, R. Kagan
2012-01-01
The major goal of this study was to empirically compare text-plus-mobile phone learning using an integrated 2D barcode tag in a printed text with three other conditions described in multimedia learning theory. The method examined in the study involved modifications of the instructional material such that: a 2D barcode was used near the text, the…
Efficient Visible Quasi-2D Perovskite Light-Emitting Diodes.
Byun, Jinwoo; Cho, Himchan; Wolf, Christoph; Jang, Mi; Sadhanala, Aditya; Friend, Richard H; Yang, Hoichang; Lee, Tae-Woo
2016-09-01
Efficient quasi-2D-structure perovskite light-emitting diodes (4.90 cd A⁻¹) are demonstrated by mixing a 3D-structured perovskite material (methyl ammonium lead bromide) and a 2D-structured perovskite material (phenylethyl ammonium lead bromide), which can be ascribed to better film uniformity, enhanced exciton confinement, and reduced trap density. PMID:27334788
CYP2D6: novel genomic structures and alleles
Kramer, Whitney E.; Walker, Denise L.; O’Kane, Dennis J.; Mrazek, David A.; Fisher, Pamela K.; Dukek, Brian A.; Bruflat, Jamie K.; Black, John L.
2010-01-01
Objective CYP2D6 is a polymorphic gene. It has been observed to be deleted, to be duplicated and to undergo recombination events involving the CYP2D7 pseudogene and surrounding sequences. The objective of this study was to discover the genomic structure of CYP2D6 recombinants that interfere with clinical genotyping platforms that are available today. Methods Clinical samples containing rare homozygous CYP2D6 alleles, ambiguous readouts, and those with duplication signals and two different alleles were analyzed by long-range PCR amplification of individual genes, PCR fragment analysis, allele-specific primer extension assay, and DNA sequencing to characterize alleles and genomic structure. Results Novel alleles, genomic structures, and the DNA sequence of these structures are described. Interestingly, in 49 of 50 DNA samples that had CYP2D6 gene duplications or multiplications where two alleles were detected, the chromosome containing the duplication or multiplication had identical tandem alleles. Conclusion Several new CYP2D6 alleles and genomic structures are described which will be useful for CYP2D6 genotyping. The findings suggest that the recombination events responsible for CYP2D6 duplications and multiplications are because of mechanisms other than interchromosomal crossover during meiosis. PMID:19741566
Kolkoori, S R; Rahman, M-U; Chinta, P K; Kreutzbruck, M; Rethmeier, M; Prager, J
2013-02-01
Ultrasound propagation in inhomogeneous anisotropic materials is difficult to examine because of the directional dependency of elastic properties. Simulation tools play an important role in developing advanced, reliable ultrasonic non-destructive testing techniques for the inspection of anisotropic materials, particularly austenitic cladded materials, austenitic welds and dissimilar welds. In this contribution we present an adapted 2D ray tracing model for evaluating ultrasonic wave fields quantitatively in inhomogeneous anisotropic materials. Inhomogeneity in the anisotropic material is represented by discretizing it into several homogeneous layers. According to the ray tracing model, ultrasonic ray paths are traced during energy propagation through the various discretized layers of the material, and at each interface the problem of reflection and transmission is solved. The presented algorithm evaluates the transducer-excited ultrasonic fields accurately by taking into account the directivity of the transducer, the divergence of the ray bundle, the density of rays and phase relations, as well as transmission coefficients. The ray tracing model is able to calculate the ultrasonic wave fields generated by a point source as well as by a finite-dimension transducer. The ray tracing model results are validated quantitatively against results obtained with the 2D Elastodynamic Finite Integration Technique (EFIT) on several configurations that commonly occur in the ultrasonic non-destructive testing of anisotropic materials. Finally, the quantitative comparison of ray tracing model results with experiments on 32 mm thick austenitic weld material and 62 mm thick austenitic cladded material is discussed.
Wong, Wang I; Hines, Melissa
2016-02-01
The popularity of using the ratio of the second to the fourth digit (2D:4D) to study influences of early androgen exposure on human behavior relies, in part, on a report that the ratio is sex-dimorphic and stable from age 2 years (Manning et al., 1998). However, subsequent research has rarely replicated this finding. Moreover, although 2D:4D has been correlated with many behaviors, these correlations are often inconsistent. Young children's 2D:4D-behavior correlations may be more consistent than those of older individuals, because young children have experienced fewer postnatal influences. To evaluate the usefulness of 2D:4D as a biomarker of prenatal androgen exposure in studies of 2D:4D-behavior correlations, we assessed its sex difference, temporal stability, and behavioral correlates over a 6- to 8-month period in 126 2- to 3-year-old children, providing a rare same-sample replicability test. We found a moderate sex difference on both hands and high temporal stability. However, between-sex overlap and within-sex variability were also large. Only 3 of 24 correlations with sex-typed behaviors (scores on the Preschool Activities Inventory (PSAI), preference for a boy-typical toy, and preference for a girl-typical toy) were significant and in the predicted direction, all of which involved the PSAI, partially confirming findings from another study. Correlation coefficients were larger for behaviors that showed larger sex differences. But, as in older samples, the overall pattern showed inconsistency across time, sex, and hand. Therefore, although sex-dimorphic and stable, 2D:4D-behavior correlations are no more consistent for young children than for older samples. Theoretical and methodological implications are discussed.
Synthesis and characterization of 2D molybdenum carbide (MXene)
Halim, Joseph; Kota, Sankalp; Lukatskaya, Maria R.; Naguib, Michael; Zhao, Meng-Qiang; Moon, Eun Ju; Pitock, Jeremy; Nanda, Jagjit; May, Steven J.; Gogotsi, Yury; et al
2016-02-17
Large scale synthesis and delamination of 2D Mo2CTx (where T is a surface termination group) has been achieved by selectively etching gallium from the recently discovered nanolaminated, ternary transition metal carbide Mo2Ga2C. Different synthesis and delamination routes result in different flake morphologies. The resistivity of free-standing Mo2CTx films increases by an order of magnitude as the temperature is reduced from 300 to 10 K, suggesting semiconductor-like behavior of this MXene, in contrast to Ti3C2Tx which exhibits metallic behavior. At 10 K, the magnetoresistance is positive. Additionally, changes in electronic transport are observed upon annealing of the films. When 2 μm thick films are tested as electrodes in supercapacitors, capacitances as high as 700 F cm⁻³ in a 1 m sulfuric acid electrolyte and high capacity retention for at least 10,000 cycles at 10 A g⁻¹ are obtained. Free-standing Mo2CTx films, with ≈8 wt% carbon nanotubes, perform well when tested as an electrode material for Li-ions, especially at high rates. In conclusion, at 20 and 131 C cycling rates, stable reversible capacities of 250 and 76 mAh g⁻¹, respectively, are achieved for over 1000 cycles.
GetDDM: An open framework for testing optimized Schwarz methods for time-harmonic wave problems
NASA Astrophysics Data System (ADS)
Thierry, B.; Vion, A.; Tournier, S.; El Bouajaji, M.; Colignon, D.; Marsic, N.; Antoine, X.; Geuzaine, C.
2016-06-01
We present an open finite element framework, called GetDDM, for testing optimized Schwarz domain decomposition techniques for time-harmonic wave problems. After a review of Schwarz domain decomposition methods and associated transmission conditions, we discuss the implementation, based on the open source software GetDP and Gmsh. The solver, along with ready-to-use examples for Helmholtz and Maxwell's equations, is freely available online for further testing.
Hyun, Eugin; Jin, Young-Seok; Lee, Jong-Hun
2016-01-01
For an automotive pedestrian detection radar system, fast-ramp based 2D range-Doppler Frequency Modulated Continuous Wave (FMCW) radar is effective for distinguishing between moving targets and unwanted clutter. However, when a weak moving target such as a pedestrian exists together with strong clutter, the pedestrian may be masked by the side-lobe of the clutter even though they are notably separated in the Doppler dimension. To prevent this problem, one popular solution is the use of a windowing scheme with a weighting function. However, this method leads to a spread spectrum, so the pedestrian with weak signal power and slow Doppler may also be masked by the main-lobe of clutter. With a fast-ramp based FMCW radar, if the target is moving, the complex spectrum of the range- Fast Fourier Transform (FFT) is changed with a constant phase difference over ramps. In contrast, the clutter exhibits constant phase irrespective of the ramps. Based on this fact, in this paper we propose a pedestrian detection for highly cluttered environments using a coherent phase difference method. By detecting the coherent phase difference from the complex spectrum of the range-FFT, we first extract the range profile of the moving pedestrians. Then, through the Doppler FFT, we obtain the 2D range-Doppler map for only the pedestrian. To test the proposed detection scheme, we have developed a real-time data logging system with a 24 GHz FMCW transceiver. In laboratory tests, we verified that the signal processing results from the proposed method were much better than those expected from the conventional 2D FFT-based detection method. PMID:26805835
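The coherent-phase-difference idea in the abstract above can be sketched on synthetic range-FFT data: a static clutter return keeps a constant phase from ramp to ramp, while a moving target's phase advances linearly, so range bins whose ramp-to-ramp phase increment is non-negligible are retained before the Doppler FFT. The bin positions, amplitudes, Doppler value, and threshold below are invented for illustration and are not from the paper.

```python
import numpy as np

# Hypothetical slow-time data: 64 ramps x 128 range bins of range-FFT output.
n_ramps, n_bins = 64, 128
fd = 0.10                      # normalized Doppler of the "pedestrian" (cycles/ramp)
clutter_bin, target_bin = 20, 45
m = np.arange(n_ramps)

X = np.zeros((n_ramps, n_bins), dtype=complex)
X[:, clutter_bin] = 5.0                                # strong static clutter
X[:, target_bin] = 0.5 * np.exp(2j * np.pi * fd * m)   # weak moving target

# Coherent phase difference over ramps: static returns have ~zero
# ramp-to-ramp phase increment, moving targets do not.
pd = np.angle(np.sum(X[1:] * np.conj(X[:-1]), axis=0))
moving = np.abs(pd) > 0.01     # crude detection threshold (assumption)

# Doppler FFT only on the retained range bins: a range-Doppler map of the
# pedestrian with the static clutter already suppressed.
rd_map = np.fft.fft(X * moving, axis=0)
```

Masking before the Doppler FFT is what removes the clutter side-lobe problem in this toy setting: the clutter column is zeroed outright instead of being smeared by a window function.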
FLAG Simulations of the Elasticity Test Problem of Gavrilyuk et al.
Kamm, James R.; Runnels, Scott R.; Canfield, Thomas R.; Carney, Theodore C.
2014-04-23
This report contains a description of the impact problem used to compare hypoelastic and hyperelastic material models, as described by Gavrilyuk, Favrie & Saurel. That description is used to set up hypoelastic simulations in the FLAG hydrocode.
2D materials and van der Waals heterostructures.
Novoselov, K S; Mishchenko, A; Carvalho, A; Castro Neto, A H
2016-07-29
The physics of two-dimensional (2D) materials and heterostructures based on such crystals has been developing extremely fast. With these new materials, truly 2D physics has begun to appear (for instance, the absence of long-range order, 2D excitons, commensurate-incommensurate transition, etc.). Novel heterostructure devices--such as tunneling transistors, resonant tunneling diodes, and light-emitting diodes--are also starting to emerge. Composed from individual 2D crystals, such devices use the properties of those materials to create functionalities that are not accessible in other heterostructures. Here we review the properties of novel 2D crystals and examine how their properties are used in new heterostructure devices.
Van der Waals stacked 2D layered materials for optoelectronics
NASA Astrophysics Data System (ADS)
Zhang, Wenjing; Wang, Qixing; Chen, Yu; Wang, Zhuo; Wee, Andrew T. S.
2016-06-01
The band gaps of many atomically thin 2D layered materials such as graphene, black phosphorus, monolayer semiconducting transition metal dichalcogenides and hBN range from 0 to 6 eV. These isolated atomic planes can be reassembled into hybrid heterostructures made layer by layer in a precisely chosen sequence. Thus, the electronic properties of 2D materials can be engineered by van der Waals stacking, and the interlayer coupling can be tuned, which opens up avenues for creating new material systems with rich functionalities and novel physical properties. Early studies suggest that van der Waals stacked 2D materials work exceptionally well, dramatically enriching the optoelectronics applications of 2D materials. Here we review recent progress in van der Waals stacked 2D materials, and discuss their potential applications in optoelectronics.
Installed Transonic 2D Nozzle Nacelle Boattail Drag Study
NASA Technical Reports Server (NTRS)
Malone, Michael B.; Peavey, Charles C.
1999-01-01
The Transonic Nozzle Boattail Drag Study was initiated in 1995 to develop an understanding of how external nozzle transonic aerodynamics affect airplane performance and how strongly those effects depend on nozzle configuration (2D vs. axisymmetric). MDC analyzed the axisymmetric nozzle. Boeing subcontracted Northrop-Grumman to analyze the 2D nozzle. All participants analyzed the AGARD nozzle as a check-out and validation case. Once the codes were checked out and the gridding resolution necessary for modeling the separated flow in this region determined, the analysis moved to the installed wing/body/nacelle/diverter cases. The boattail drag validation case was the AGARD B.4 rectangular nozzle. This test case offered both test data and previous CFD analyses for comparison. Results were obtained for test cases B.4.1 (M=0.6) and B.4.2 (M=0.938) and compared very well with the experimental data. Once the validation was complete, a CFD grid was constructed for the full Ref. H configuration (wing/body/nacelle/diverter) using a combination of patched and overlapped (Chimera) grids. This was done to ensure that the grid topologies and density would be adequate for the full model. The use of overlapped grids allowed the same grids from the full configuration model to be used for the wing/body-alone cases, thus eliminating the risk of grid differences affecting the determination of the installation effects. Once the full configuration model was run and deemed suitable, the nacelle/diverter grids were removed and the wing/body analysis performed. Reference H wing/body results were completed for M=0.9 (α=0.0, 2.0, 4.0, 6.0 and 8.0), M=1.1 (α=4.0 and 6.0) and M=2.4 (α=0.0, 2.0, 4.4, 6.0 and 8.0). Comparisons of the M=0.9 and M=2.4 cases were made with available wind tunnel data and overall comparisons were good. The axi-inlet/2D nozzle nacelle was analyzed in isolation. The isolated nacelle data coupled with the wing/body result enabled the interference effects of the
Estrogen-Induced Cholestasis Leads to Repressed CYP2D6 Expression in CYP2D6-Humanized Mice
Pan, Xian
2015-01-01
Cholestasis activates bile acid receptor farnesoid X receptor (FXR) and subsequently enhances hepatic expression of small heterodimer partner (SHP). We previously demonstrated that SHP represses the transactivation of cytochrome P450 2D6 (CYP2D6) promoter by hepatocyte nuclear factor (HNF) 4α. In this study, we investigated the effects of estrogen-induced cholestasis on CYP2D6 expression. Estrogen-induced cholestasis occurs in subjects receiving estrogen for contraception or hormone replacement, or in susceptible women during pregnancy. In CYP2D6-humanized transgenic (Tg-CYP2D6) mice, cholestasis triggered by administration of 17α-ethinylestradiol (EE2) at a high dose led to 2- to 3-fold decreases in CYP2D6 expression. This was accompanied by increased hepatic SHP expression and subsequent decreases in the recruitment of HNF4α to CYP2D6 promoter. Interestingly, estrogen-induced cholestasis also led to increased recruitment of estrogen receptor (ER) α, but not that of FXR, to Shp promoter, suggesting a predominant role of ERα in transcriptional regulation of SHP in estrogen-induced cholestasis. EE2 at a low dose (that does not cause cholestasis) also increased SHP (by ∼50%) and decreased CYP2D6 expression (by 1.5-fold) in Tg-CYP2D6 mice, the magnitude of differences being much smaller than that shown in EE2-induced cholestasis. Taken together, our data indicate that EE2-induced cholestasis increases SHP and represses CYP2D6 expression in Tg-CYP2D6 mice in part through ERα transactivation of Shp promoter. PMID:25943116
Review of Problems of Testing for Homogeneity Prior to Running an ANOVA.
ERIC Educational Resources Information Center
Proper, Elizabeth C.
Texts often suggest running preliminary tests for homogeneity of variance prior to running an ANOVA. While it has been known for some time that most of the suggested tests are probably not appropriate, they are still being used. This paper is a review of the literature in terms of the implications involved in running preliminary tests in general…
Progress in Understanding the Infrared Spectra of He- and Ne-C_2D_2
NASA Astrophysics Data System (ADS)
Moazzen-Ahmadi, Nasser; McKellar, Bob
2014-06-01
Infrared spectra of He-C_2H_2 were recorded around 1990 in Roger Miller's lab, but detailed rotational assignment was apparently not possible even with the help of theoretical predictions. So there were no published experimental spectra of helium-acetylene van der Waals complexes until our recent work on He-C_2D_2 in the ν_3 region (∼2440 cm^-1). The problem is that this complex lies close to the free rotor limit, so that most of the intensity in the spectrum piles up in tangles of closely spaced lines located close to the monomer rotational transitions, R(0), P(1), etc. Our previous He-C_2D_2 assignments were limited to the R(0) region, that is, the j = 1 ← 0 subband, where j represents C_2D_2 rotation. Here, we extend the analysis to j = 0 ← 1 and 2 ← 1 transitions with the help of new spectra obtained using a tunable OPO laser probe and a cooled supersonic jet nozzle. These subbands are weaker, not only because of the Boltzmann factor, but also because of the 2:1 nuclear spin statistics of j" = even:odd C_2D_2 levels. Moreover, the j = 0 ← 1 subband is overlapped by strong (C_2D_2)_2 transitions. We use a term value approach, obtaining a self-consistent set of "experimental" energy levels which can be directly compared with theory or fitted in terms of a Coriolis model. Challenges also arise with Ne-C_2D_2, which is not quite so close to the free rotor limit, but still has many overlapping lines. Insights gained here help in assigning the tricky R(1) region for Ne-C_2D_2. M. Rezaei, N. Moazzen-Ahmadi, A.R.W. McKellar, B. Fernández, and D. Farrelly, Mol. Phys. 110, 2743 (2012).
Test Problems for Reactive Flow HE Model in the ALE3D Code and Limited Sensitivity Study
Gerassimenko, M.
2000-03-01
We document quick-running test problems for a reactive flow model of HE initiation incorporated into ALE3D. A quarter-percent change in projectile velocity changes the outcome from detonation to an HE burn that dies down. We study the sensitivity of calculated HE behavior to several parameters of practical interest when modeling HE initiation with ALE3D.
ERIC Educational Resources Information Center
Liu, Lisa L.; Lau, Anna S.; Chen, Angela Chia-Chen; Dinh, Khanh T.; Kim, Su Yeong
2009-01-01
Associations among neighborhood disadvantage, maternal acculturation, parenting and conduct problems were investigated in a sample of 444 Chinese American adolescents. Adolescents (54% female, 46% male) ranged from 12 to 15 years of age (mean age = 13.0 years). Multilevel modeling was employed to test the hypothesis that the association between…
ERIC Educational Resources Information Center
White, Lydia; And Others
1997-01-01
Studies on second-language acquisition of reflexives have experienced difficulties assessing learners' knowledge of the binding principles because of problems associated with ambiguous sentences where there is more than one antecedent for a reflexive. In this study, English-as-a-Second-Language students were tested using a variety of sentence…
ERIC Educational Resources Information Center
Hambrick, David Z.; Libarkin, Julie C.; Petcovic, Heather L.; Baker, Kathleen M.; Elkins, Joe; Callahan, Caitlin N.; Turner, Sheldon P.; Rench, Tara A.; LaDue, Nicole D.
2012-01-01
Sources of individual differences in scientific problem solving were investigated. Participants representing a wide range of experience in geology completed tests of visuospatial ability and geological knowledge, and performed a geological bedrock mapping task, in which they attempted to infer the geological structure of an area in the Tobacco…
ERIC Educational Resources Information Center
Masson, J. D.; Dagnan, D.; Evans, J.
2010-01-01
Background: There is a need for validated, standardised tools for the assessment of executive functions in adults with intellectual disabilities (ID). This study examines the validity of a test of planning and problem solving (Tower of London) with adults with ID. Method: Participants completed an adapted version of the Tower of London (ToL) while…
A novel KMT2D mutation resulting in Kabuki syndrome: A case report
Lu, Jun; Mo, Guiling; Ling, Yaojun; Ji, Lijuan
2016-01-01
Kabuki syndrome (KS) is a rare genetic syndrome characterized by multiple congenital anomalies and varying degrees of mental retardation. Patients with KS often present with facial, skeletal, visceral and dermatoglyphic abnormalities, cardiac anomalies and immunological defects. Mutation of the lysine methyltransferase 2D (KMT2D) gene (formerly known as MLL2) is the primary cause of KS. The present study reported the case of a 4-year-old Chinese girl who presented with atypical KS, including atypical facial features, unclear speech and suspected mental retardation. A diagnosis of KS was confirmed by genetic testing, which revealed a nonsense mutation in exon 16 of KMT2D (c.4485C>A, Tyr1495Ter). To the best of our knowledge, this is a novel mutation that has not been reported previously. The present case underscores the importance of genetic testing in KS diagnosis. PMID:27573763
Laser probe for measuring 2-D wave slope spectra of ocean capillary waves.
Palm, C S; Anderson, R C; Reece, A M
1977-04-01
A laser-optical instrument for use in determining the 2-D wave slope spectrum of ocean capillary waves is described. The instrument measures up to a 35 degrees tip angle of the surface normal by measuring the position of a refracted laser beam directed vertically upward through a water surface. A telescope, a continuous 2-D Schottky barrier photodiode, and a pair of analog dividers render the signals independent of water height and insensitive to laser beam intensity fluctuations. Calibration is performed entirely in the laboratory before field use. Sample records and wave slope spectra are shown for 1-D wave tank tests and for 2-D ocean tests. These are presented along with comparison spectra for calm and choppy water conditions. A mechanical wave follower was used to adjust the instrument position in the presence of large ocean swell and tides. PMID:20168638
Quasi 2D Materials: Raman Nanometrology and Thermal Management Applications
NASA Astrophysics Data System (ADS)
Shahil, Khan Mohammad Farhan
Quasi two-dimensional (2D) materials obtained by "graphene-like" exfoliation have attracted tremendous attention. Such materials revealed unique electronic, thermal and optical properties, which can be potentially used in electronics, thermal management and energy conversion. This dissertation research addresses two separate but synergetic problems: (i) preparation and optical characterization of quasi-2D films of the bismuth-telluride (Bi2Te3) family of materials, which demonstrate both thermoelectric and topological insulator properties; and (ii) investigation of thermal properties of composite materials prepared with graphene and few-layer graphene (FLG). The first part of the dissertation reports properties of the exfoliated few-quintuple layers of Bi2Te3, Bi2Se3 and Sb2Te3. Both non-resonant and resonant Raman scattering spectra have been investigated. It was found that the crystal symmetry breaking in few-quintuple films results in the appearance of A1u-symmetry Raman peaks, which are not active in the bulk crystals. The scattering spectra measured under the 633-nm wavelength excitation reveal a number of resonant features, which could be used for analysis of the electronic and phonon processes in these materials. The obtained results help to understand the physical mechanisms of Raman scattering in the few-quintuple-thick films and can be used for nanometrology of topological insulator films on various substrates. The second part of the dissertation is dedicated to investigation of properties of composite materials prepared with graphene and FLG. It was found that the optimized mixture of graphene and multilayer graphene---produced by the high-yield inexpensive liquid-phase-exfoliation technique---can lead to an extremely strong enhancement of the cross-plane thermal conductivity K of the composite. The "laser flash" measurements revealed a record-high enhancement of K by 2300 % in the graphene-based polymer at the filler loading fraction f =10 vol. %. It was
Calculating tissue shear modulus and pressure by 2D Log-Elastographic methods
McLaughlin, Joyce R; Zhang, Ning; Manduca, Armando
2010-01-01
Shear modulus imaging, often called elastography, enables detection and characterization of tissue abnormalities. In this paper the data are two displacement components obtained from successive MR or ultrasound data sets acquired while the tissue is excited mechanically. A 2D plane strain elastic model is assumed to govern the 2D displacement, u. The shear modulus, μ, is unknown; whether or not the first Lamé parameter, λ, is known, the pressure p = λ∇ · u appearing in the plane strain model cannot be measured, is unreliably computed from measured data, and can be shown to be an order-one quantity in units of kPa. So here we present a 2D Log-Elastographic inverse algorithm that: (1) simultaneously reconstructs the shear modulus, μ, and p, which together satisfy a first order partial differential equation system, with the goal of imaging μ; (2) controls potential exponential growth in the numerical error; and (3) reliably reconstructs the quantity p in the inverse algorithm as compared to the same quantity computed with a forward algorithm. This work generalizes the Log-Elastographic algorithm in [20], which uses one displacement component, is derived assuming the component satisfies the wave equation, and is tested on synthetic data computed with the wave equation model. The 2D Log-Elastographic algorithm is tested on 2D synthetic data and 2D in-vivo data from Mayo Clinic. We also exhibit examples to show that the 2D Log-Elastographic algorithm improves the quality of the recovered images as compared to the Log-Elastographic and Direct Inversion algorithms. PMID:21822349
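As a small illustration of the pressure term discussed in the abstract above, p = λ∇·u can be evaluated by finite differences on a displacement field; the grid size, λ, and the synthetic field below are assumptions for the sketch, not the paper's data.

```python
import numpy as np

# Evaluate p = lambda * div(u) for a synthetic 2-D displacement field
# u = (ux, uy) on a unit square, using central differences (np.gradient).
n, lam = 64, 1.0
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
ux = 1e-3 * np.sin(np.pi * X)
uy = 1e-3 * np.sin(np.pi * Y)

div_u = np.gradient(ux, x, axis=0) + np.gradient(uy, x, axis=1)
p = lam * div_u                       # pressure field, p = lambda * div(u)

# Check against the analytic divergence of the chosen field
exact = 1e-3 * np.pi * (np.cos(np.pi * X) + np.cos(np.pi * Y))
print(np.max(np.abs(p - exact)))      # small discretization error
```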
Testing the effectiveness of problem-based learning with learning-disabled students in biology
NASA Astrophysics Data System (ADS)
Guerrera, Claudia Patrizia
The purpose of the present study was to investigate the effects of problem-based learning (PBL) with learning-disabled (LD) students. Twenty-four students (12 dyads) classified as LD and attending a school for the learning-disabled participated in the study. Students engaged in either a computer-based environment involving BioWorld, a hospital simulation designed to teach biology students problem-solving skills, or a paper-and-pencil version based on the computer program. A hybrid model of learning was adopted whereby students were provided with direct instruction on the digestive system prior to participating in a problem-solving activity. Students worked in dyads and solved three problems involving the digestive system in either a computerized or a paper-and-pencil condition. The experimenter acted as a coach to assist students throughout the problem-solving process. A follow-up study was conducted, one month later, to measure the long-term learning gains. Quantitative and qualitative methods were used to analyze three types of data: process data, outcome data, and follow-up data. Results from the process data showed that all students engaged in effective collaboration and became more systematic in their problem solving over time. Findings from the outcome and follow-up data showed that students in both treatment conditions made both learning and motivational gains and that these benefits were still evident one month later. Overall, results demonstrated that the computer facilitated students' problem solving and scientific reasoning skills. Some differences were noted in students' collaboration and the amount of assistance required from the coach in both conditions. Thus, PBL is an effective learning approach with LD students in science, regardless of the type of learning environment. These results have implications for teaching science to LD students, as well as for future designs of educational software for this population.
De Zorzi, Rita; Nicholson, William V.; Guigner, Jean-Michel; Erne-Brand, Françoise; Vénien-Bryan, Catherine
2013-01-01
2D crystallography has proven to be an excellent technique to determine the 3D structure of membrane proteins. Compared to 3D crystallography, it has the advantage of visualizing the protein in an environment closer to the native one. However, producing good 2D crystals is still a challenge and little statistical knowledge can be gained from the literature. Here, we present a thorough screening of 2D crystallization conditions for a prokaryotic inwardly rectifying potassium channel (>130 different conditions). Key parameters leading to very large and well-organized 2D crystals are discussed. In addition, the problem of formation of multilayers during the growth of 2D crystals is also addressed. An intermediate-resolution projection map of KirBac3.1 at 6 Å is presented, which sheds (to our knowledge) new light on the structure of this channel in a lipid environment. PMID:23870261
2D/3D Program work summary report, January 1988--December 1992
Damerell, P. S.; Simons, J. W.
1993-06-01
The 2D/3D Program was carried out by Germany, Japan and the United States to investigate the thermal-hydraulics of a PWR large-break LOCA. A contributory approach was utilized in which each country contributed significant effort to the program and all three countries shared the research results. Germany constructed and operated the Upper Plenum Test Facility (UPTF), and Japan constructed and operated the Cylindrical Core Test Facility (CCTF) and the Slab Core Test Facility (SCTF). The US contribution consisted of provision of advanced instrumentation to each of the three test facilities, and assessment of the TRAC computer code against the test results. Evaluations of the test results were carried out in all three countries. This report summarizes the 2D/3D Program in terms of the contributing efforts of the participants.
A real-time multi-scale 2D Gaussian filter based on FPGA
NASA Astrophysics Data System (ADS)
Luo, Haibo; Gai, Xingqin; Chang, Zheng; Hui, Bin
2014-11-01
Multi-scale 2-D Gaussian filters have been widely used in feature extraction (e.g. SIFT, edge detection), image segmentation, image enhancement, image noise removal, multi-scale shape description, etc. However, their computational complexity remains an issue for real-time image processing systems. Aimed at this problem, we propose a framework for a multi-scale 2-D Gaussian filter based on FPGA in this paper. Firstly, a full-hardware architecture based on a parallel pipeline was designed to achieve a high throughput rate. Secondly, in order to save multipliers, the 2-D convolution is separated into two 1-D convolutions. Thirdly, a dedicated first-in-first-out memory named CAFIFO (Column Addressing FIFO) was designed to avoid error propagation induced by glitches on the clock. Finally, a shared memory framework was designed to reduce memory costs. As a demonstration, we realized a 3-scale 2-D Gaussian filter on a single ALTERA Cyclone III FPGA chip. Experimental results show that the proposed framework can compute multi-scale 2-D Gaussian filtering within one pixel clock period and is thus suitable for real-time image processing. Moreover, the main principle can be extended to other convolution-based operators, such as the Gabor filter, the Sobel operator and so on.
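The multiplier-saving separation mentioned in the abstract above rests on the Gaussian kernel being separable: a 2-D convolution equals two 1-D passes, rows then columns. A minimal software sketch of that identity (kernel size and sigma are illustrative assumptions):

```python
import numpy as np

def gaussian_1d(sigma, radius):
    # normalized 1-D Gaussian kernel of length 2*radius + 1
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def conv2d_separable(img, g):
    # convolve every row, then every column, with the 1-D kernel
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, tmp)

g = gaussian_1d(sigma=1.0, radius=2)
img = np.zeros((9, 9))
img[4, 4] = 1.0                               # unit impulse
out = conv2d_separable(img, g)
# The impulse response equals the full 2-D kernel, outer(g, g)
assert np.allclose(out[2:7, 2:7], np.outer(g, g))
```

For a kernel of width w, the separable form needs 2w multiplies per pixel instead of w², which is what makes the hardware pipeline cheap.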
Xie, Donghao; Ji, Ding-Kun; Zhang, Yue; Cao, Jun; Zheng, Hu; Liu, Lin; Zang, Yi; Li, Jia; Chen, Guo-Rong; James, Tony D; He, Xiao-Peng
2016-08-01
Here we demonstrate that 2D MoS2 can enhance the receptor-targeting and imaging ability of a fluorophore-labelled ligand. The 2D MoS2 has an enhanced working concentration range when compared with graphene oxide, resulting in the improved imaging of both cell and tissue samples.
Damage Assessment and Digital 2D-3D Documentation of Petra Treasury
NASA Astrophysics Data System (ADS)
Bala'awi, Fadi; Alshawabkeh, Yahya; Alawneh, Firas; Masri, Eyed al
The Treasury is the iconic monument of the world heritage site of the ancient city of Petra. Unfortunately, this important part of the world's cultural heritage is gradually being diminished by weathering and erosion. This gives rise to the need for a comprehensive study and full documentation of the monument in order to evaluate its status. In this research a comprehensive approach utilizing 2D-3D documentation of the structure with laser scanning and photogrammetry is carried out in parallel with laboratory analysis and a correlation study of the salt content and the surface weathering forms. In addition, the research extends to evaluating a set of chemical and physical properties of the monument. Studies of stone texture and the spatial distribution of soluble salts were carried out at the monument in order to explain the mechanism of the weathering problem. A series of field investigations and laboratory work was then undertaken to study the effects of relative humidity, temperature, and wind, the main factors in the salt damage process. The 3D modelling provides accurate geometric and radiometric properties of the damage shape. In order to improve the visual quality of 3D surface details and cracks, a hybrid approach combining data from the laser scanner and digital imagery was developed. Based on the findings, salt damage appears to be one of the main problems at this monument. Although the total soluble salt content is quite low, salt contamination is present in all tested samples in all seasons, with higher concentrations at deeper intervals. The thermodynamic calculations carried out in this research have also shown that salt damage could be minimised by controlling the surrounding relative humidity. This measure is undoubtedly the most challenging of all, and its application, if deemed feasible, should be carried out in parallel with other conservation measures.
CYP2D6 polymorphism and mental and personality disorders in suicide attempters.
Blasco-Fontecilla, Hilario; Peñas-Lledó, Eva; Vaquero-Lorenzo, Concepción; Dorado, Pedro; Saiz-Ruiz, Jerónimo; Llerena, Adrián; Baca-García, Enrique
2014-12-01
Prior studies on the association between the CYP2D6 polymorphism and suicide did not explore whether mental and personality disorders mediate this association. The main objective of the present study was to test for an association between CYP2D6 polymorphism and mental and personality disorders among suicide attempters. The MINI and the DSM-IV version of the International Personality Disorder Examination Screening Questionnaire were used to diagnose mental and personality disorders, respectively, in 342 suicide attempters. Suicide attempters were divided into three groups according to their number of active CYP2D6 genes (zero, one, and two or more). Differences in mental and personality disorders across the three groups were measured using the linear-by-linear association test, the chi-square test, and 95% confidence intervals. Suicide attempters carrying two or more active CYP2D6 genes were more likely to be diagnosed with at least one personality disorder than those with one or zero active CYP2D6 genes.
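The chi-square comparison described in the abstract above can be sketched on an illustrative contingency table (the counts below are made up for demonstration, not the study's data; the statistic is computed directly rather than with a stats library):

```python
import numpy as np

# Hypothetical 2x3 table: personality-disorder diagnosis (rows) by number
# of active CYP2D6 genes (columns: 0, 1, 2+). Counts are illustrative only.
table = np.array([[20, 45, 80],     # >= 1 personality disorder
                  [40, 70, 87]])    # none
row_tot = table.sum(axis=1, keepdims=True)
col_tot = table.sum(axis=0, keepdims=True)
expected = row_tot * col_tot / table.sum()       # expected under independence
chi2 = ((table - expected) ** 2 / expected).sum()
df = (table.shape[0] - 1) * (table.shape[1] - 1)
print(chi2, df)   # compare chi2 to the critical value 5.99 (df=2, alpha=.05)
```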
Digit ratio (2D:4D), aggression, and testosterone in men exposed to an aggressive video stimulus.
Kilduff, Liam P; Hopp, Renato N; Cook, Christian J; Crewther, Blair T; Manning, John T
2013-01-01
The ratio of the lengths of the 2nd and 4th digits (2D:4D) is a negative biomarker for prenatal testosterone, and low 2D:4D may be associated with aggression. However, the evidence for a 2D:4D-aggression association is mixed. Here we test the hypothesis that 2D:4D is robustly linked to aggression in "challenge" situations in which testosterone is increased. Participants were exposed to an aggressive video and a control video. Aggression was measured after each video and salivary free testosterone levels before and after each video. Compared to the control video, the aggressive video was associated with raised aggression responses and a marginally significant increase in testosterone. Left 2D:4D was negatively correlated with aggression after the aggressive video and the strength of the correlation was higher in those participants who showed the greatest increases in testosterone. Left 2D:4D was also negatively correlated to the difference between aggression scores in the aggressive and control conditions. The control video did not influence testosterone concentrations and there were no associations between 2D:4D and aggression. We conclude that 2D:4D moderates the impact of an aggressive stimulus on aggression, such that an increase in testosterone resulting from a "challenge" is associated with a negative correlation between 2D:4D and aggression.
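The negative correlation reported in the abstract above is a plain Pearson correlation; a synthetic illustration (the numbers are generated, not the study's measurements):

```python
import numpy as np

# Simulated left-hand 2D:4D values and post-stimulus aggression scores with
# a built-in negative relationship; compute the Pearson correlation.
rng = np.random.default_rng(1)
digit_ratio = rng.normal(0.98, 0.03, 40)                   # left 2D:4D
aggression = 50 - 300 * (digit_ratio - 0.98) + rng.normal(0, 3, 40)
r = np.corrcoef(digit_ratio, aggression)[0, 1]
print(round(r, 2))   # strongly negative under these assumptions
```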
Long-Read Single Molecule Real-Time Full Gene Sequencing of Cytochrome P450-2D6.
Qiao, Wanqiong; Yang, Yao; Sebra, Robert; Mendiratta, Geetu; Gaedigk, Andrea; Desnick, Robert J; Scott, Stuart A
2016-03-01
The cytochrome P450-2D6 (CYP2D6) enzyme metabolizes ∼25% of common medications, yet homologous pseudogenes and copy number variants (CNVs) make interrogating the polymorphic CYP2D6 gene with short-read sequencing challenging. Therefore, we developed a novel long-read, full gene CYP2D6 single molecule real-time (SMRT) sequencing method using the Pacific Biosciences platform. Long-range PCR and CYP2D6 SMRT sequencing of 10 previously genotyped controls identified expected star (*) alleles, but also enabled suballele resolution, diplotype refinement, and discovery of novel alleles. Coupled with an optimized variant-calling pipeline, CYP2D6 SMRT sequencing was highly reproducible as triplicate intra- and inter-run nonreference genotype results were completely concordant. Importantly, targeted SMRT sequencing of upstream and downstream CYP2D6 gene copies characterized the duplicated allele in 15 control samples with CYP2D6 CNVs. The utility of CYP2D6 SMRT sequencing was further underscored by identifying the diplotypes of 14 samples with discordant or unclear CYP2D6 configurations from previous targeted genotyping, which again included suballele resolution, duplicated allele characterization, and discovery of a novel allele and tandem arrangement. Taken together, long-read CYP2D6 SMRT sequencing is an innovative, reproducible, and validated method for full-gene characterization, duplication allele-specific analysis, and novel allele discovery, which will likely improve CYP2D6 metabolizer phenotype prediction for both research and clinical testing applications.
Efficient 2D MRI relaxometry using compressed sensing
NASA Astrophysics Data System (ADS)
Bai, Ruiliang; Cloninger, Alexander; Czaja, Wojciech; Basser, Peter J.
2015-06-01
Potential applications of 2D relaxation spectrum NMR and MRI to characterize complex water dynamics (e.g., compartmental exchange) in biology and other disciplines have increased in recent years. However, the large amount of data and long MR acquisition times required for conventional 2D MR relaxometry limit its applicability for in vivo preclinical and clinical MRI. We present a new MR pipeline for 2D relaxometry that incorporates compressed sensing (CS) as a means to vastly reduce the amount of 2D relaxation data needed for material and tissue characterization without compromising data quality. Unlike conventional CS reconstruction in the Fourier space (k-space), the proposed CS algorithm is applied directly in the Laplace space (the joint 2D relaxation data) without compressing k-space to reduce the amount of data required for 2D relaxation spectra. This framework is validated using synthetic data, with NMR data acquired in a well-characterized urea/water phantom, and on fixed porcine spinal cord tissue. The quality of the CS-reconstructed spectra was comparable to that of the conventional 2D relaxation spectra, as assessed using global correlation, local contrast between peaks, peak amplitudes, and relaxation parameters. This result brings this important type of contrast closer to being realized in preclinical, clinical, and other applications.
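The heart of any CS reconstruction, whether in k-space or Laplace space, is sparse recovery from undersampled linear measurements. The sketch below is a generic ISTA (iterative soft-thresholding) solver on a synthetic problem; it is not the authors' Laplace-space pipeline, and all sizes and parameters are arbitrary choices for illustration.

```python
import numpy as np

def ista(A, b, lam, n_iter):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L        # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)        # random sensing matrix
x_true = np.zeros(80)
x_true[[3, 17, 42, 60, 71]] = [1.0, -2.0, 1.5, -1.0, 2.0]  # 5-sparse "spectrum"
b = A @ x_true                                          # undersampled measurements
x_hat = ista(A, b, lam=0.02, n_iter=1000)
```

With 40 measurements of an 80-dimensional 5-sparse vector, the l1 penalty recovers the signal to good accuracy, which is the mechanism that lets CS cut acquisition time.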
Subehan; Usia, Tepy; Kadota, Shigetoshi; Tezuka, Yasuhiro
2006-05-01
Nineteen alkamides isolated from Piper nigrum L. were tested for mechanism-based inhibition of human liver microsomal dextromethorphan O-demethylation activity, a prototype marker for cytochrome P450 2D6 (CYP2D6). All compounds showed increased inhibitory activity with increasing preincubation time. Among them, compounds 15 and 17 produced more than a 50% decrease in residual CYP2D6 activity after 20 min of preincubation. Further investigation of 15 and 17 showed that their characteristic time- and concentration-dependent inhibition, which required a catalytic step with NADPH, was not prevented by nucleophiles and was decreased by the presence of a competitive inhibitor. The kinetic parameters for inactivation (k_inact and K_I) were 0.028 min^-1 and 0.23 microM for 15 and 0.064 min^-1 and 0.71 microM for 17, respectively, indicating stronger inactivation than the known mechanism-based inhibitor paroxetine (a positive control). Thus, 15 and 17 are potent mechanism-based inhibitors of CYP2D6.
A Study Guide on Holography (Draft). Test Edition. AAAS Study Guides on Contemporary Problems.
ERIC Educational Resources Information Center
Jeong, Tung H.
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. The primary purpose of this guide is to provide a student with sufficient practical and technical information to begin independently practicing holography, with occasional…
Atmospheric Sciences. Test Edition. AAAS Study Guides on Contemporary Problems, No. 6.
ERIC Educational Resources Information Center
Schaefer, Vincent J.; Mohnen, Volker A.
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This study guide includes the following sections: (1) Solar Radiation and Its Interaction with the Earth's Atmosphere System; (2) The Water Cycle; (3) Fundamentals of Air…
ERIC Educational Resources Information Center
Lorber, Michael F.; Egeland, Byron
2011-01-01
The prediction of conduct problems (CPs) from infant difficulty and parenting measured in the first 6 months of life was studied in a sample of 267 high-risk mother-child dyads. Stable, cross-situational CPs at school entry (5-6 years) were predicted by negative infancy parenting, mediated by mutually angry and hostile mother-toddler interactions…
ERIC Educational Resources Information Center
Kidd, David E.
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This study guide on water pollution includes the following units: (1) Overview of World Pollution; (2) History, Definition, Criteria; (3) Ecosystem Theory; (4) Biological…
ERIC Educational Resources Information Center
Mesa, Vilma; Wladis, Claire; Watkins, Laura
2014-01-01
This commentary articulates the need to investigate problems of mathematics instruction at community colleges. The authors briefly describe some features of this often-ignored institution and the current status of research. They also make an argument for how investigations of instruction in this setting can both advance understanding of this…
ERIC Educational Resources Information Center
Bolkan, San; Goodboy, Alan K.
2016-01-01
Protection motivation theory (PMT) explains people's adaptive behavior in response to personal threats. In this study, PMT was used to predict rhetorical dissent episodes related to 210 student reports of perceived classroom problems. In line with theoretical predictions, a moderated moderation analysis revealed that students were likely to voice…
ERIC Educational Resources Information Center
Wedell, Douglas H.; Moro, Rodrigo
2008-01-01
Two experiments used within-subject designs to examine how conjunction errors depend on the use of (1) choice versus estimation tasks, (2) probability versus frequency language, and (3) conjunctions of two likely events versus conjunctions of likely and unlikely events. All problems included a three-option format verified to minimize…
ERIC Educational Resources Information Center
Simic, Andrei
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This guide focuses on the ethnology of traditional and complex societies. Part I, Simple and Complex Societies, includes three sections: (1) Introduction: Anthropologists…
Deviant Peer Affiliation and Problem Behavior: A Test of Genetic and Environmental Influences
ERIC Educational Resources Information Center
Bullock, Bernadette Marie; Deater-Deckard, Kirby; Leve, Leslie D.
2006-01-01
This study uses a multitrait, multimethod (MTMM) approach to investigate the genetic and environmental etiologies of childhood deviant peer affiliation (DPA) and problem behavior (PROB). The variability of genetic and environmental estimates by agent and method is also examined. A total of 77 monozygotic and 72 dizygotic twin pairs and each twin's…
Ethical Issues and the Life Sciences. Test Edition. AAAS Study Guides on Contemporary Problems.
ERIC Educational Resources Information Center
Kieffer, George H.
This is one of several study guides on contemporary problems produced by the American Association for the Advancement of Science with support of the National Science Foundation. This study guide on Ethical Issues and the Life Sciences includes the following sections: (1) Introduction; (2) The Search for an Ethic; (3) Biomedical Issues including…
Hastings, Paul D; Helm, Jonathan; Mills, Rosemary S L; Serbin, Lisa A; Stack, Dale M; Schwartzman, Alex E
2015-07-01
This investigation evaluated a multilevel model of dispositional and environmental factors contributing to the development of internalizing problems from preschool age to school age. In a sample of 375 families (185 daughters, 190 sons) drawn from three independent samples, preschoolers' behavioral inhibition, cortisol, and gender were examined as moderators of the links between mothers' negative parenting behavior, negative emotional characteristics, and socioeconomic status when children were 3.95 years old, and their internalizing problems when they were 8.34 years old. Children's dispositional characteristics moderated all associations between these environmental factors and mother-reported internalizing problems, in patterns consistent with either diathesis-stress or differential-susceptibility models of individual-environment interaction, and with gender models of developmental psychopathology. Greater inhibition and lower socioeconomic status were directly predictive of more teacher-reported internalizing problems. These findings highlight the importance of using multilevel models within a bioecological framework to understand the complex pathways through which internalizing difficulties develop.
A Test of Problem Behavior and Self-Medication Theories in Incarcerated Adolescent Males
ERIC Educational Resources Information Center
Esposito-Smythers, Christianne; Penn, Joseph V.; Stein, L. A. R.; Lacher-Katz, Molly; Spirito, Anthony
2008-01-01
The purpose of this study is to examine the problem behavior and self-medication models of alcohol abuse in incarcerated male adolescents. Male adolescents (N = 56) incarcerated in a juvenile correction facility were administered a battery of psychological measures. Approximately 84% of adolescents with clinically significant alcohol-related…
2D electron cyclotron emission imaging at ASDEX Upgrade (invited)
Classen, I. G. J.; Boom, J. E.; Vries, P. C. de; Suttrop, W.; Schmid, E.; Garcia-Munoz, M.; Schneider, P. A.; Tobias, B.; Domier, C. W.; Luhmann, N. C. Jr.; Donne, A. J. H.; Jaspers, R. J. E.; Park, H. K.; Munsat, T.
2010-10-15
The newly installed electron cyclotron emission imaging diagnostic on ASDEX Upgrade provides measurements of the 2D electron temperature dynamics with high spatial and temporal resolution. An overview of the technical and experimental properties of the system is presented. These properties are illustrated by measurements of the edge localized mode and the reversed shear Alfven eigenmode, showing both the advantages of a two-dimensional (2D) measurement and some of the limitations of electron cyclotron emission measurements. Furthermore, the application of singular value decomposition as a powerful tool for analyzing and filtering 2D data is presented.
Comparison of 2D and 3D gamma analyses
Pulliam, Kiley B.; Huang, Jessie Y.; Howell, Rebecca M.; Followill, David; Kry, Stephen F.; Bosca, Ryan; O’Daniel, Jennifer
2014-02-15
Purpose: As clinics begin to use 3D metrics for intensity-modulated radiation therapy (IMRT) quality assurance, it must be noted that these metrics will often produce results different from those produced by their 2D counterparts. 3D and 2D gamma analyses would be expected to produce different values, in part because of the different search space available. In the present investigation, the authors compared the results of 2D and 3D gamma analysis (where both datasets were generated in the same manner) for clinical treatment plans. Methods: Fifty IMRT plans were selected from the authors’ clinical database, and recalculated using Monte Carlo. Treatment planning system-calculated (“evaluated dose distributions”) and Monte Carlo-recalculated (“reference dose distributions”) dose distributions were compared using 2D and 3D gamma analysis. This analysis was performed using a variety of dose-difference (5%, 3%, 2%, and 1%) and distance-to-agreement (5, 3, 2, and 1 mm) acceptance criteria, low-dose thresholds (5%, 10%, and 15% of the prescription dose), and data grid sizes (1.0, 1.5, and 3.0 mm). Each comparison was evaluated to determine the average 2D and 3D gamma, lower 95th percentile gamma value, and percentage of pixels passing gamma. Results: The average gamma, lower 95th percentile gamma value, and percentage of passing pixels for each acceptance criterion demonstrated better agreement for 3D than for 2D analysis for every plan comparison. The average difference in the percentage of passing pixels between the 2D and 3D analyses with no low-dose threshold ranged from 0.9% to 2.1%. Similarly, using a low-dose threshold resulted in a difference between the mean 2D and 3D results, ranging from 0.8% to 1.5%. The authors observed no appreciable differences in gamma with changes in the data density (constant difference: 0.8% for 2D vs 3D). Conclusions: The authors found that 3D gamma analysis resulted in up to 2.9% more pixels passing than 2D analysis. It must
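The gamma comparison the abstract describes can be sketched compactly. The following is a minimal brute-force 2-D gamma computation on a toy dose grid, with invented numbers and a global dose-difference normalization assumed; it is not the authors' clinical pipeline, and a 3-D version would simply extend the search to a third index.

```python
import math

def gamma_2d(ref, ev, spacing_mm, dd_frac, dta_mm):
    """Brute-force per-pixel 2D gamma of an evaluated dose grid 'ev'
    against a reference grid 'ref'. dd_frac is the dose-difference
    criterion as a fraction of the global maximum reference dose;
    dta_mm is the distance-to-agreement criterion in mm."""
    ny, nx = len(ref), len(ref[0])
    dd = dd_frac * max(max(row) for row in ref)   # global normalization
    out = []
    for i in range(ny):
        row = []
        for j in range(nx):
            best = float("inf")
            for k in range(ny):                   # search the whole grid
                for l in range(nx):
                    dist = math.hypot((k - i) * spacing_mm,
                                      (l - j) * spacing_mm)
                    ddiff = ev[k][l] - ref[i][j]
                    best = min(best,
                               math.sqrt((dist / dta_mm) ** 2
                                         + (ddiff / dd) ** 2))
            row.append(best)
        out.append(row)
    return out

ref = [[1.0, 1.0], [1.0, 1.0]]
ev = [[1.02, 1.00], [1.00, 0.97]]    # 2% high / 3% low at two corners
g = gamma_2d(ref, ev, spacing_mm=1.0, dd_frac=0.03, dta_mm=3.0)
passing = sum(v <= 1.0 for row in g for v in row) / 4.0
```

A point passes when its minimum combined dose/distance metric is at most 1; enlarging the search space (2D to 3D) can only lower each minimum, which is consistent with the higher 3D pass rates reported above.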
Recent advances in 2D materials for photocatalysis.
Luo, Bin; Liu, Gang; Wang, Lianzhou
2016-04-01
Two-dimensional (2D) materials have attracted increasing attention for photocatalytic applications because of their unique thickness dependent physical and chemical properties. This review gives a brief overview of the recent developments concerning the chemical synthesis and structural design of 2D materials at the nanoscale and their applications in photocatalytic areas. In particular, recent progress on the emerging strategies for tailoring 2D material-based photocatalysts to improve their photo-activity including elemental doping, heterostructure design and functional architecture assembly is discussed.
Language Testing and Technology: Problems of Transition to a New Era
ERIC Educational Resources Information Center
Dooey, Patricia
2008-01-01
Technological advances have revolutionised methods of both teaching and testing in languages, and practitioners have eagerly embraced the opportunity to provide more innovative ways of doing this. The unique features offered by technology make it increasingly possible to test for a wide range of language skills required for a specific purpose.…
Direct and Inverse Problems of Item Pool Design for Computerized Adaptive Testing
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.
2009-01-01
The recent literature on computerized adaptive testing (CAT) has developed methods for creating CAT item pools from a large master pool. Each CAT pool is designed as a set of nonoverlapping forms reflecting the skill levels of an assumed population of test takers. This article presents a Monte Carlo method to obtain these CAT pools and discusses…
The Major Field Test in Business: A Solution to the Problem of Assurance of Learning Assessment?
ERIC Educational Resources Information Center
Green, Jeffrey J.; Stone, Courtenay Clifford; Zegeye, Abera
2014-01-01
Colleges and universities are being asked by numerous sources to provide assurance of learning assessments of their students and programs. Colleges of business have responded by using a plethora of assessment tools, including the Major Field Test in Business. In this article, the authors show that the use of the Major Field Test in Business for…
Testing in Groups: A "Real" Exercise in Small Group Problem-Solving.
ERIC Educational Resources Information Center
Millar, Dan Pyle
Students in a small group discussion class offered as part of a speech communication curriculum found that testing their knowledge of the theory of the course in small groups was a positive learning experience. Each self-selected test group was made up of three students and was formed at least two class periods prior to the exam. Time was given in…
ERIC Educational Resources Information Center
Tobin, Michael; Hill, Eileen
2010-01-01
An examination is made of the value of using published personality tests with young blind and partially sighted children. Based on data gathered during a longitudinal investigation into the educational and psychological development of a group of 120 visually impaired learners, the authors conclude that their own selection of a test instrument…
Problem-Solving Test: Analysis of DNA Damage Recognizing Proteins in Yeast and Human Cells
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2013-01-01
The experiment described in this test was aimed at identifying DNA repair proteins in human and yeast cells. Terms to be familiar with before you start to solve the test: DNA repair, germline mutation, somatic mutation, inherited disease, cancer, restriction endonuclease, radioactive labeling, [α-³²P]ATP, [γ-…
ERIC Educational Resources Information Center
Norris, John M.
2015-01-01
Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…
ERIC Educational Resources Information Center
Educational Testing Service, Princeton, NJ.
Three themes were addressed at the conference: (1) implications of factor analysis for achievement testing; (2) use of achievement tests in awarding course credits; and (3) extended conceptions of evaluation in higher education. The speeches were entitled: Factors of Verbal Achievement, by John B. Carroll; Schools of Thought in Judging Excellence…
The Problem of the Match and Mis-Match in Testing Black Children.
ERIC Educational Resources Information Center
Williams, Robert L.
Ability tests in use today and the educational programs of the schools are examined from a Black perspective. It is stated that it is incumbent upon educators to develop appropriate learning experiences in the classroom which relate to the Black child's background experiences. The following issues are raised: (1) I.Q. tests (predictor variables)…
Fracture morphology of 2-D carbon-carbon composites
NASA Technical Reports Server (NTRS)
Avery, W. B.; Herakovich, C. T.
1985-01-01
Out-of-plane tensile tests of a woven-fabric carbon-carbon composite were performed in a scanning electron microscope equipped with a tensile stage and a videotape recording system. The composite was prepared from T-300 8-harness satin graphite fabric and a phenolic resin. The (0/90/0/90/0_{1/2})_2 laminate, with θ describing the orientation of the warp fibers of the fabric, was cured at 160 C and pyrolyzed at 871 C. This was followed by four cycles of resin impregnation, curing, and pyrolysis. A micrograph of the cross section of the composite is presented. Inspection of the specimen fracture surface revealed that the filaments had no residual matrix bonded to them. Further inspection revealed that the fracture was interlaminar in nature. Failure occurred where filaments of adjacent plies had the same orientation. Thus it is postulated that improvement in the transverse tensile strength of 2-D carbon-carbon depends on improvement of the filament-matrix bond strength.
Global 2-D intercomparison of sectional and modal aerosol modules
Weisenstein, D K; Penner, J E; Herzog, M; Liu, Xiaohong
2007-05-08
We present an intercomparison of two aerosol modules, one sectional, one modal, in a global 2-D model in order to differentiate their behavior for tropospheric and stratospheric applications. We model only binary sulfuric acid-water aerosols in this study. Two versions of the sectional model and three versions of the modal model are used to test the sensitivity of background aerosol mass and size distribution to the number of bins or modes and to the prescribed width of the largest mode. We find modest sensitivity to the number of bins (40 vs 150) used in the sectional model. Aerosol mass is found to be reduced in a modal model if care is not taken in selecting the width of the largest lognormal mode, reflecting differences in sedimentation in the middle stratosphere. The size distributions calculated by the sectional model can be better matched by a modal model with four modes rather than three modes in most but not all situations. A simulation of aerosol decay following the 1991 eruption of Mt. Pinatubo shows that the representation of the size distribution can have a significant impact on model-calculated aerosol decay rates in the stratosphere. Between 1991 and 1995, aerosol mass and surface area density calculated by two versions of the modal model adequately match results from the sectional model. Calculated effective radius for the same time period shows more intermodel variability.
2-D Model for Normal and Sickle Cell Blood Microcirculation
NASA Astrophysics Data System (ADS)
Tekleab, Yonatan; Harris, Wesley
2011-11-01
Sickle cell disease (SCD) is a genetic disorder that alters the red blood cell (RBC) structure and function such that hemoglobin (Hb) cannot effectively bind and release oxygen. Previous computational models have been designed to study the microcirculation for insight into blood disorders such as SCD. Our novel 2-D computational model represents a fast, time-efficient method developed to analyze flow dynamics, O2 diffusion, and cell deformation in the microcirculation. The model uses a finite-difference, Crank-Nicolson scheme to compute the flow and O2 concentration, and the level set computational method to advect the RBC membrane on a staggered grid. Several sets of initial and boundary conditions were tested. Simulation data indicate a few parameters to be significant in the perturbation of the blood flow and O2 concentration profiles. Specifically, the Hill coefficient, arterial O2 partial pressure, O2 partial pressure at 50% Hb saturation, and cell membrane stiffness are significant factors. Results were found to be consistent with those of Le Floch [2010] and Secomb [2006].
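The Crank-Nicolson scheme named in the abstract can be illustrated on the simplest case, 1-D diffusion with fixed (Dirichlet) end values; the implicit half of the update yields a tridiagonal system solved here by the Thomas algorithm. This is a generic numerical sketch, not the authors' coupled flow/O2/membrane solver, and the grid and step sizes are arbitrary.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a, b, c are sub-, main, super-diagonals."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, r):
    """One Crank-Nicolson step for u_t = D*u_xx, with r = D*dt/dx^2.
    End values are held fixed (Dirichlet), purely for illustration."""
    n = len(u)
    a = [-r / 2.0] * n
    b = [1.0 + r] * n
    c = [-r / 2.0] * n
    a[0] = c[0] = a[-1] = c[-1] = 0.0           # boundary rows: u unchanged
    b[0] = b[-1] = 1.0
    d = ([u[0]]
         + [u[i] + r / 2.0 * (u[i - 1] - 2.0 * u[i] + u[i + 1])
            for i in range(1, n - 1)]
         + [u[-1]])
    return thomas(a, b, c, d)

u0 = [0.0] * 5 + [1.0] + [0.0] * 5   # initial concentration spike
u1 = crank_nicolson_step(u0, r=0.5)
```

Averaging the explicit and implicit spatial operators gives the scheme its second-order accuracy in time, which is why it is a common choice for diffusion-dominated problems like O2 transport.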
Effects of Agent's Repulsion in 2d Flocking Models
NASA Astrophysics Data System (ADS)
Moussa, Najem; Tarras, Iliass; Mazroui, M'hammed; Boughaleb, Yahya
In nature many animal groups, such as fish schools or bird flocks, clearly display structural order and appear to move as a single coherent entity. In order to understand the complex behavior of these systems, many models have been proposed and tested so far. This paper extends the Vicsek model by including a second zone of repulsion, in which each agent attempts to maintain a minimum distance from the others. This zone appears to play an important role in the travel of agents in two-dimensional (2D) flocking models. Our numerical investigations show that, depending on the basic ingredients such as the repulsion radius (R1), the density of agents (ρ), and the noise (η), our nonequilibrium system can undergo a kinetic phase transition from no transport to finite net transport. For different values of ρ, kinetic phase diagrams in the (η, R1) plane are found. Implications of these findings are discussed.
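A minimal serial sketch of a Vicsek-style update with an added inner repulsion zone is shown below. All radii, speeds, and noise values are arbitrary, and the particular turning rule (steer directly away from too-close neighbors, align with the rest) is an illustrative guess at the general idea, not the paper's exact model.

```python
import math, random

def vicsek_step(pos, theta, box, r_align=1.0, r_rep=0.3, v=0.03, eta=0.2,
                rng=random):
    """One synchronous update of headings and positions in a periodic box."""
    n = len(pos)
    new_theta = []
    for i in range(n):
        sx, sy = math.cos(theta[i]), math.sin(theta[i])   # start with own heading
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            dx -= box * round(dx / box)                   # minimum-image convention
            dy -= box * round(dy / box)
            d = math.hypot(dx, dy)
            if 0.0 < d < r_rep:
                sx -= dx / d                              # steer away from close j
                sy -= dy / d
            elif d < r_align:
                sx += math.cos(theta[j])                  # align with neighbor j
                sy += math.sin(theta[j])
        noise = (rng.random() - 0.5) * 2.0 * math.pi * eta  # uniform angular noise
        new_theta.append(math.atan2(sy, sx) + noise)
    new_pos = [((x + v * math.cos(t)) % box, (y + v * math.sin(t)) % box)
               for (x, y), t in zip(pos, new_theta)]
    return new_pos, new_theta

random.seed(1)
box, n_agents = 5.0, 50
pos = [(random.random() * box, random.random() * box) for _ in range(n_agents)]
theta = [random.uniform(-math.pi, math.pi) for _ in range(n_agents)]
for _ in range(200):
    pos, theta = vicsek_step(pos, theta, box)
# polar order parameter: 1 = perfectly aligned flock, 0 = disordered
phi = math.hypot(sum(math.cos(t) for t in theta),
                 sum(math.sin(t) for t in theta)) / n_agents
```

Sweeping η and r_rep and recording phi is how the kind of kinetic phase diagram described above would be mapped out numerically.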
2-D Chemical-Dynamical Modeling of Venus's Sulfur Variability
NASA Astrophysics Data System (ADS)
Bierson, Carver J.; Zhang, Xi
2016-10-01
Over the last decade, a combination of ground-based and Venus Express observations has measured the concentrations of sulfur species in Venus's atmosphere, both above [1, 2] and below the clouds [3, 4]. These observations put constraints on both the vertical and meridional variations of the major sulfur species in Venus's atmosphere. It has also been observed that the SO2 concentration varies on timescales of both hours and years [1, 4]. The spatial and temporal distribution of tracer species can arise from two processes: mutual chemical interaction and dynamical tracer transport. Previous chemical modeling of Venus's middle atmosphere has only been explored in 1-D. We will present the first 2-D (altitude and latitude) chemical-dynamical model of Venus's middle atmosphere. The sulfur chemistry is based on the 1-D model of Zhang et al. 2012 [5]. We perform model runs over multiple Venus decades testing two scenarios: one with varying sulfur fluxes from below, and one with secular dynamical perturbations in the atmosphere [6]. By comparing to Venus Express and ground-based observations, we put constraints on the dynamics of Venus's middle atmosphere. References: [1] Belyaev et al. Icarus 2012 [2] Marcq et al. Nature Geoscience, 2013 [3] Marcq et al. JGR: Planets, 2008 [4] Arney et al. JGR: Planets, 2014 [5] Zhang et al. Icarus 2012 [6] Parish et al. Icarus 2012
The FDA and genetic testing: improper tools for a difficult problem
Willmarth, Kirk
2015-01-01
The US Food and Drug Administration (FDA) has recently issued draft guidance on how it intends to regulate laboratory-developed tests, including genetic tests. This article argues that genetic tests differ from traditional targets of FDA regulation in both product as well as industry landscape, and that the FDA's traditional tools are ill-suited for regulating this space. While existing regulatory gaps do create risks in genetic testing, the regulatory burden of the FDA's proposal introduces new risks for both test providers and patients that may offset the benefits. Incremental expansion of current oversight outside of the FDA can mitigate many of the risks necessitating increased oversight while avoiding the creation of new ones that could undermine this industry.
2D/3D Visual Tracker for Rover Mast
NASA Technical Reports Server (NTRS)
Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria
2006-01-01
A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window and the best correlation results indicate the appropriate match. The program could be a core for building application programs for systems
Parallelized CCHE2D flow model with CUDA Fortran on Graphics Process Units
Technology Transfer Automated Retrieval System (TEKTRAN)
This paper presents the CCHE2D implicit flow model parallelized using the CUDA Fortran programming technique on Graphics Processing Units (GPUs). A parallelized Alternating Direction Implicit (ADI) solver using the Parallel Cyclic Reduction (PCR) algorithm on the GPU is developed and tested. This solve...
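The Parallel Cyclic Reduction step referenced in the abstract can be sketched serially: within each pass every row of the tridiagonal system is updated independently of the others, which is what maps well onto GPU threads. The code below is a generic textbook PCR in Python, not the CCHE2D/CUDA Fortran implementation.

```python
def pcr_solve(a, b, c, d):
    """Parallel Cyclic Reduction for a tridiagonal system.
    a/b/c are the sub-, main, and super-diagonals (a[0] = c[-1] = 0).
    Each pass doubles the coupling distance until rows decouple; on a
    GPU all rows of a pass update in parallel, here they run serially."""
    n = len(b)
    a, b, c, d = list(a), list(b), list(c), list(d)
    s = 1
    while s < n:
        na, nb, nc, nd = a[:], b[:], c[:], d[:]
        for i in range(n):
            alpha = -a[i] / b[i - s] if i - s >= 0 else 0.0
            beta = -c[i] / b[i + s] if i + s < n else 0.0
            na[i] = alpha * a[i - s] if i - s >= 0 else 0.0
            nc[i] = beta * c[i + s] if i + s < n else 0.0
            nb[i] = (b[i]
                     + (alpha * c[i - s] if i - s >= 0 else 0.0)
                     + (beta * a[i + s] if i + s < n else 0.0))
            nd[i] = (d[i]
                     + (alpha * d[i - s] if i - s >= 0 else 0.0)
                     + (beta * d[i + s] if i + s < n else 0.0))
        a, b, c, d = na, nb, nc, nd
        s *= 2
    return [d[i] / b[i] for i in range(n)]

# -u_{i-1} + 2*u_i - u_{i+1} system with known solution [1, 2, 3, 4]
x = pcr_solve([0.0, -1.0, -1.0, -1.0],
              [2.0, 2.0, 2.0, 2.0],
              [-1.0, -1.0, -1.0, 0.0],
              [0.0, 0.0, 0.0, 5.0])
```

An ADI sweep reduces a 2-D implicit solve to many independent tridiagonal systems of exactly this form, one per grid line, so a fast parallel tridiagonal solver is the performance-critical kernel.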
Nutter, C.
1980-11-01
GRAV2D is an interactive computer program used for modeling 2-1/2 dimensional gravity data. A forward algorithm gives the theoretical gravity attraction at a station due to the perturbing body described by the initial model. The model can then be adjusted for a better fit by a combination of manual adjustment, one-dimensional automatic search, and Marquardt inversion. GRAV2D has an interactive data-management system for data manipulation and display, built around subroutines that perform the forward problem, a one-dimensional direct search, and an inversion. This is a user's guide and documentation for GRAV2D.
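The forward-model-plus-direct-search loop described for GRAV2D can be illustrated with a much simpler analytic body, an infinite horizontal cylinder, in place of GRAV2D's 2-1/2-D polygonal bodies. Everything below is a toy with invented station spacing and body parameters, not the program's actual algorithm.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_cylinder(x, depth, radius, drho):
    """Vertical gravity anomaly at surface offset x (m) from an infinite
    horizontal cylinder at the given depth (m), with radius (m) and
    density contrast drho (kg/m^3):
    g_z = 2*pi*G*R^2*drho*z / (x^2 + z^2)."""
    return (2.0 * math.pi * G * radius ** 2 * drho * depth
            / (x ** 2 + depth ** 2))

# synthetic "observed" profile from a body buried at 150 m depth
stations = [25.0 * k for k in range(-20, 21)]
observed = [gz_cylinder(x, 150.0, 50.0, 500.0) for x in stations]

def misfit(depth):
    """Sum of squared residuals, radius and density contrast held fixed."""
    return sum((gz_cylinder(x, depth, 50.0, 500.0) - g) ** 2
               for x, g in zip(stations, observed))

# one-dimensional direct search over depth, in the spirit of GRAV2D's
# one-dimensional automatic search (Marquardt inversion not shown)
best_depth = min((misfit(z), z) for z in [50.0 + 5.0 * k for k in range(61)])[1]
```

The search recovers the burial depth because the anomaly's width-to-amplitude shape is controlled by depth; a Marquardt step would refine all parameters jointly.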
1,25(OH)2D3 dependent overt hyperactivity phenotype in klotho-hypomorphic mice
Leibrock, Christina B.; Voelkl, Jakob; Kuro-o, Makoto; Lang, Florian; Lang, Undine E
2016-01-01
Klotho, a protein mainly expressed in the kidney and cerebral choroid plexus, is a powerful regulator of 1,25(OH)2D3 formation. Klotho-deficient mice (kl/kl) suffer from excessive plasma 1,25(OH)2D3, Ca2+, and phosphate concentrations, leading to severe soft tissue calcification and accelerated aging. NH4Cl treatment prevents tissue calcification and premature aging without affecting 1,25(OH)2D3 formation. The present study explored the impact of excessive 1,25(OH)2D3 formation in NH4Cl-treated kl/kl mice on behavior. To this end, kl/kl mice and wild-type mice were treated with NH4Cl and either a control diet or a vitamin D deficient diet (LVD). As a result, plasma 1,25(OH)2D3, Ca2+, and phosphate concentrations were significantly higher in untreated and in NH4Cl-treated kl/kl mice than in wild-type mice, a difference abrogated by LVD. In each of the open field, dark-light box, and O-maze tests, NH4Cl-treated kl/kl mice showed significantly higher exploratory behavior than untreated wild-type mice, a difference abrogated by LVD. The time of floating in the forced swimming test was significantly shorter in NH4Cl-treated kl/kl mice compared to untreated wild-type mice and to kl/kl mice on LVD. In wild-type animals, NH4Cl treatment did not significantly alter 1,25(OH)2D3, calcium, and phosphate concentrations or exploratory behavior. In conclusion, the excessive 1,25(OH)2D3 formation in klotho-hypomorphic mice has a profound effect on murine behavior. PMID:27109615
2-D and 3-D numerical simulation of a supersonic inlet flowfield
NASA Astrophysics Data System (ADS)
Enomoto, Shunji; Arakawa, Chuichi
The 2-D and 3-D steady, Reynolds-averaged Navier-Stokes equations were solved numerically for the flowfields in an experimentally tested inlet model with bleed through a cavity. In the 2-D analysis, a normal shock was located at the diffuser inlet instead of at the position below the cavity. The normal shock in the middle of the diffuser caused a massive separation of the boundary layer and a large total pressure loss. In the 3-D analysis, the shock wave was distorted by the side-wall boundary layer separation, and a complex flow structure was established. The results of the 3-D analysis agreed well with the experiment.