Sample records for testing large scale

  1. Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming

    ERIC Educational Resources Information Center

    Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M.

    2013-01-01

    Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…

  2. The Rights and Responsibility of Test Takers When Large-Scale Testing Is Used for Classroom Assessment

    ERIC Educational Resources Information Center

    van Barneveld, Christina; Brinson, Karieann

    2017-01-01

    The purpose of this research was to identify conflicts in the rights and responsibility of Grade 9 test takers when some parts of a large-scale test are marked by teachers and used in the calculation of students' class marks. Data from teachers' questionnaires and students' questionnaires from a 2009-10 administration of a large-scale test of…

  3. Critical Issues in Large-Scale Assessment: A Resource Guide.

    ERIC Educational Resources Information Center

    Redfield, Doris

    The purpose of this document is to provide practical guidance and support for the design, development, and implementation of large-scale assessment systems that are grounded in research and best practice. Information is included about existing large-scale testing efforts, including national testing programs, state testing programs, and…

  4. Large-Scale Wind Turbine Testing in the NASA 24.4-m (80-ft) by 36.6-m (120-ft) Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Zell, Peter T.; Imprexia, Cliff (Technical Monitor)

    2000-01-01

    The 80- by 120-Foot Wind Tunnel at NASA Ames Research Center in California provides a unique capability to test large-scale wind turbines under controlled conditions. This special capability is now available for domestic and foreign entities wishing to test large-scale wind turbines. The presentation will focus on facility capabilities to perform wind turbine tests and typical research objectives for this type of testing.

  5. Aquatic Plant Control Research Program. Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Report 5. Synthesis Report.

    DTIC Science & Technology

    1984-06-01

    Technical Report A-78-2, U.S. Army Engineer Waterways Experiment Station, Vicksburg, Miss., prepared for the Office, Chief of Engineers, Washington, DC 20314. Report 5 (Synthesis Report) of the Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants.

  6. Large-Scale Operations Management Test of Use of The White Amur for Control of Problem Aquatic Plants. Report 1. Baseline Studies. Volume V. The Herpetofauna of Lake Conway, Florida.

    DTIC Science & Technology

    1981-06-01

    Prepared by the University of South Florida, Tampa, Department of Biology, for the U.S. Army Engineer Waterways Experiment Station, P.O. Box 631, Vicksburg, Miss. Report 1 of a series on the Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants.

  7. Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Report 2. First Year Poststocking Results. Volume II. The Fish, Mammals, and Waterfowl of Lake Conway, Florida.

    DTIC Science & Technology

    1982-02-01

    Prepared by the Florida Game and Fresh Water Fish Commission, Orlando. Report 2 of a series documenting a large-scale operations management test of use of the white amur for control of problem aquatic plants in Lake Conway, Florida. Cited as: "Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants; Report 2, First Year Poststocking Results," 1982.

  8. Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Report 2. First Year Poststocking Results. Volume VI. The Water and Sediment Quality of Lake Conway, Florida.

    DTIC Science & Technology

    1982-02-01

    Prepared by the Orange County Pollution Control Department, Orlando, Fla. Report 2 of a series documenting the large-scale operations management test of use of the white amur for control of problem aquatic plants in Lake Conway, Fla. Cited as: Miller, D. 1982. "Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants; Report 2, First Year Poststocking Results."

  9. Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Report 4. Third Year Poststocking Results. Volume VI. The Water and Sediment Quality of Lake Conway, Florida.

    DTIC Science & Technology

    1983-01-01

    Prepared by H. D. Miller et al., Miller and Miller, Inc., Orlando, Fla. Cited as: Boyd, J. 1983. "Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants; Report 4, Third Year Poststocking Results."

  10. Gas-Centered Swirl Coaxial Liquid Injector Evaluations

    NASA Technical Reports Server (NTRS)

    Cohn, A. K.; Strakey, P. A.; Talley, D. G.

    2005-01-01

    Development of liquid rocket engines is expensive: extensive testing at large scale is usually required, verifying engine lifetime demands a large number of tests, and development resources are limited. Sub-scale cold-flow and hot-fire testing is extremely cost-effective and could be a necessary (but not sufficient) condition for long engine lifetime, reducing the overall cost and risk of large-scale testing. The goal is to determine what knowledge can be gained from sub-scale cold-flow and hot-fire evaluations of LRE injectors, and to determine the relationships between cold-flow and hot-fire data.

  11. Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Report 2. First Year Poststocking Results. Volume IV. Nitrogen and Phosphorus Dynamics of the Lake Conway Ecosystem: Loading Budgets and a Dynamic Hydrologic Phosphorus Model.

    DTIC Science & Technology

    1982-08-01

    Prepared by the University of Florida, Gainesville, Department of Environmental Engineering. The study examines nitrogen and phosphorus dynamics of the Lake Conway ecosystem and is part of the Large-Scale Operations Management Test (LSOMT) of the Aquatic Plant Control Research Program (APCRP) at the WES. Cited as: Blancher, E. C., II, and Fellows, C. R. 1982. "Large-Scale Operations Management Test of Use of the White Amur for Control…"

  12. Aquatic Plant Control Research Program. Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Reports 2 and 3. First and Second Year Poststocking Results. Volume 5. The Herpetofauna of Lake Conway, Florida: Community Analysis.

    DTIC Science & Technology

    1983-07-01

    Aquatic Plant Control Research Program, Technical Report A-78-2, U.S. Army Engineer Waterways Experiment Station, P.O. Box 631, Vicksburg, Miss. 39180. Part of the Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants.

  13. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. Many suitcase-sized portable test stands have been assembled to demonstrate hybrids, showing audiences the safety of hybrid rockets. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions always asked when hybrids are proposed for large-scale applications are: how do they scale, and has that been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from small motors to application size has been documented in several places. Comparison of small-scale hybrid data to larger-scale data indicates that the fuel burn rate goes down with increasing port size, even at the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study, or thesis, it is not thoroughly understood at this time. One potential cause is that, since hybrid combustion is boundary-layer driven, the larger port sizes reduce the interaction (radiation, mixing, and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf-thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf-thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability, and scaling concepts that went into the development of those large motors.
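The burn-rate scaling question discussed above is usually framed against the classical hybrid regression-rate correlation, rdot = a * Gox^n. A minimal sketch of fitting the coefficients in log-log space; the flux/rate pairs below are hypothetical illustration values, not data from the chapter:

```python
import numpy as np

# Hybrid fuel regression is commonly correlated as rdot = a * Gox**n.
# Hypothetical (flux, regression-rate) pairs for illustration only.
gox = np.array([50.0, 100.0, 200.0, 400.0])   # oxidizer mass flux, kg/(m^2*s)
rdot = np.array([0.55, 0.83, 1.25, 1.88])     # fuel regression rate, mm/s

# Linear fit in log-log space: log(rdot) = n*log(Gox) + log(a)
n_exp, log_a = np.polyfit(np.log(gox), np.log(rdot), 1)
a_coef = np.exp(log_a)
print(f"n = {n_exp:.2f}")  # flux exponent, ~0.59 for these numbers
```

Large-scale testing then asks whether `a` and `n` fitted on small motors still hold at application port sizes, which is exactly where the abstract notes the correlation breaks down.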

  14. Voices from Test-Takers: Further Evidence for Language Assessment Validation and Use

    ERIC Educational Resources Information Center

    Cheng, Liying; DeLuca, Christopher

    2011-01-01

    Test-takers' interpretations of validity as related to test constructs and test use have been widely debated in large-scale language assessment. This study contributes further evidence to this debate by examining 59 test-takers' perspectives in writing large-scale English language tests. Participants wrote about their test-taking experiences in…

  15. Role of optometry school in single day large scale school vision testing

    PubMed Central

    Anuradha, N; Ramani, Krishnakumar

    2015-01-01

    Background: School vision testing aims at identification and management of refractive errors. Large-scale school vision testing using conventional methods is time-consuming and demands a lot of chair time from the eye care professionals. A new strategy involving a school of optometry in single-day large-scale school vision testing is discussed. Aim: The aim was to describe a new approach for performing vision testing of school children on a large scale in a single day. Materials and Methods: A single-day vision testing strategy was implemented wherein 123 members (20 teams comprising optometry students and headed by optometrists) conducted vision testing for children in 51 schools. School vision testing included basic vision screening, refraction, frame measurements, frame choice, and referrals for other ocular problems. Results: A total of 12,448 children were screened, among whom 420 (3.37%) were identified to have refractive errors: 28 (1.26%) at the primary, 163 (9.80%) at the middle, 129 (4.67%) at the secondary, and 100 (1.73%) at the higher secondary levels of education. 265 (2.12%) children were referred for further evaluation. Conclusion: Single-day large-scale school vision testing can be adopted by schools of optometry to reach a higher number of children within a short span. PMID:25709271

  16. Study on Thermal Decomposition Characteristics of Ammonium Nitrate Emulsion Explosive in Different Scales

    NASA Astrophysics Data System (ADS)

    Wu, Qiujie; Tan, Liu; Xu, Sen; Liu, Dabin; Min, Li

    2018-04-01

    Numerous accidents involving emulsion explosive (EE) are attributed to uncontrolled thermal decomposition of ammonium nitrate emulsion (ANE, the intermediate of EE) and EE at large scale. To study the thermal decomposition characteristics of ANE and EE at different scales, a large-scale modified vented pipe test (MVPT) and two laboratory-scale tests, differential scanning calorimetry (DSC) and accelerating rate calorimetry (ARC), were applied in the present study. The scale effect and water effect both play an important role in the thermal stability of ANE and EE. The measured decomposition temperatures of ANE and EE in the MVPT are 146°C and 144°C, respectively, much lower than those in DSC and ARC. As the size of the same sample in DSC, ARC, and MVPT successively increases, the onset temperature decreases. In the same test, the measured onset temperature of ANE is higher than that of EE; the water content of the sample stabilizes it. The large-scale MVPT can provide information relevant to real-life operations, which carry more risk; continuous overheating should be avoided.

  17. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    DOE PAGES

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.

  18. Haz-Map Glossary

    MedlinePlus

    ... lung. Radiation Accident Large-scale accidents from atomic bomb testing fallout released iodine-131 and strontium-90. ...

  19. Gravitational lenses and large scale structure

    NASA Technical Reports Server (NTRS)

    Turner, Edwin L.

    1987-01-01

    Four possible statistical tests of the large scale distribution of cosmic material are described. Each is based on gravitational lensing effects. The current observational status of these tests is also summarized.

  20. Proceedings of the Annual Meeting (14th) Aquatic Plant Control Research Planning and Operations Review, Held at Lake Eufaula, Oklahoma on 26-29 November 1979.

    DTIC Science & Technology

    1980-10-01

    Development; Problem Identification and Assessment for Aquatic Plant Management; Natural Succession of Aquatic Plants; Large-Scale Operations Management Test...of Insects and Pathogens for Control of Waterhyacinth in Louisiana; Large-Scale Operations Management Test to Evaluate Prevention Methodology for...Control of Eurasian Watermilfoil in Washington; Large-Scale Operations Management Test Using the White Amur at Lake Conway, Florida; and Aquatic Plant Control Activities in the Panama Canal Zone.

  1. The Expanded Large Scale Gap Test

    DTIC Science & Technology

    1987-03-01

    NSWC TR 86-32, by T. P. Liddiard and D. Price, Research and Technology Department, March 1987; approved for public release. The expanded test was developed in part to reduce the spread in the LSGT 50% gap value for the worst charges, such as those with the highest or lowest densities.

  2. Fracture Testing of Large-Scale Thin-Sheet Aluminum Alloy (MS Word file)

    DOT National Transportation Integrated Search

    1996-02-01

    Word Document; A series of fracture tests on large-scale, precracked, aluminum alloy panels were carried out to examine and characterize the process by which cracks propagate and link up in this material. Extended grips and test fixtures were special...

  3. Experimental feasibility study of the application of magnetic suspension techniques to large-scale aerodynamic test facilities

    NASA Technical Reports Server (NTRS)

    Zapata, R. N.; Humphris, R. R.; Henderson, K. C.

    1974-01-01

    Based on the premises that (1) magnetic suspension techniques can play a useful role in large-scale aerodynamic testing and (2) superconductor technology offers the only practical hope for building large-scale magnetic suspensions, an all-superconductor three-component magnetic suspension and balance facility was built as a prototype and was tested successfully. Quantitative extrapolations of design and performance characteristics of this prototype system to larger systems compatible with existing and planned high Reynolds number facilities have been made and show that this experimental technique should be particularly attractive when used in conjunction with large cryogenic wind tunnels.

  4. Experimental feasibility study of the application of magnetic suspension techniques to large-scale aerodynamic test facilities. [cryogenic transonic wind tunnel]

    NASA Technical Reports Server (NTRS)

    Zapata, R. N.; Humphris, R. R.; Henderson, K. C.

    1975-01-01

    Based on the premises that magnetic suspension techniques can play a useful role in large-scale aerodynamic testing, and that superconductor technology offers the only practical hope for building large-scale magnetic suspensions, an all-superconductor 3-component magnetic suspension and balance facility was built as a prototype and tested successfully. Quantitative extrapolations of design and performance characteristics of this prototype system to larger systems compatible with existing and planned high Reynolds number facilities at Langley Research Center were made and show that this experimental technique should be particularly attractive when used in conjunction with large cryogenic wind tunnels.

  5. Food waste impact on municipal solid waste angle of internal friction.

    PubMed

    Cho, Young Min; Ko, Jae Hac; Chi, Liqun; Townsend, Timothy G

    2011-01-01

    The impact of food waste content on the municipal solid waste (MSW) friction angle was studied. Using reconstituted fresh MSW specimens with different food waste content (0%, 40%, 58%, and 80%), 48 small-scale (100-mm-diameter) direct shear tests and 12 large-scale (430 mm × 430 mm) direct shear tests were performed. A stress-controlled large-scale direct shear test device allowing approximately 170-mm sample horizontal displacement was designed and used. At both testing scales, the mobilized internal friction angle of MSW decreased considerably as food waste content increased. As food waste content increased from 0% to 40% and from 40% to 80%, the mobilized internal friction angles (estimated using the mobilized peak (ultimate) shear strengths of the small-scale direct shear tests) decreased from 39° to 31° and from 31° to 7°, respectively, while those of large-scale tests decreased from 36° to 26° and from 26° to 15°, respectively. Most friction angle measurements produced in this study fell within the range of those previously reported for MSW.
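The friction angles reported in direct shear testing come from a Mohr-Coulomb fit of peak shear stress against normal stress. A minimal sketch of that reduction, using hypothetical stress pairs (not the paper's data):

```python
import math
import numpy as np

# Hypothetical direct-shear results: peak shear stress measured at four
# normal stress levels (kPa). Illustration only, not the study's data.
normal_kpa = np.array([25.0, 50.0, 100.0, 200.0])
shear_kpa = np.array([22.0, 42.0, 83.0, 164.0])

# Mohr-Coulomb fit: tau = c + sigma * tan(phi)
slope, cohesion_kpa = np.polyfit(normal_kpa, shear_kpa, 1)
phi_deg = math.degrees(math.atan(slope))
print(round(phi_deg))  # mobilized internal friction angle, degrees
```

The mobilized angle the abstract quotes is this phi evaluated at the peak (ultimate) shear strengths of each test series.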

  6. Icing Simulation Research Supporting the Ice-Accretion Testing of Large-Scale Swept-Wing Models

    NASA Technical Reports Server (NTRS)

    Yadlin, Yoram; Monnig, Jaime T.; Malone, Adam M.; Paul, Bernard P.

    2018-01-01

    The work summarized in this report is a continuation of NASA's Large-Scale, Swept-Wing Test Articles Fabrication; Research and Test Support for NASA IRT contract (NNC10BA05-NNC14TA36T) performed by Boeing under the NASA Research and Technology for Aerospace Propulsion Systems (RTAPS) contract. In the study conducted under RTAPS, a series of icing tests in the Icing Research Tunnel (IRT) was conducted to characterize ice formations on large-scale swept wings representative of modern commercial transport airplanes. The outcome of that campaign was a large database of ice-accretion geometries that can be used for subsequent aerodynamic evaluation in other experimental facilities and for validation of ice-accretion prediction codes.

  7. Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design

    ERIC Educational Resources Information Center

    Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff

    2016-01-01

    Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strength and weakness in recent large-scale implementations, there is no simple answer to the question of which design is better because different…

  8. Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Report 2. First Year Poststocking Results. Volume III. The Plankton and Benthos of Lake Conway, Florida,

    DTIC Science & Technology

    1981-11-01

    Prepared by the University of Florida, Gainesville, Department of Environmental Engineering. Report 2 of a series (in 7 volumes) on the Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants.

  9. Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Report 3. Second Year Poststocking Results. Volume VI. The Water and Sediment Quality of Lake Conway, Florida.

    DTIC Science & Technology

    1982-08-01

    Prepared by the Orange County Pollution Control Department, Orlando, Fla. Report 3 of a series on the Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants: Second Year Poststocking Results, Volume VI, The Water and Sediment Quality.

  10. A fast boosting-based screening method for large-scale association study in complex traits with genetic heterogeneity.

    PubMed

    Wang, Lu-Yong; Fasulo, D

    2006-01-01

    Genome-wide association studies for complex diseases generate massive amounts of single nucleotide polymorphism (SNP) data. Univariate statistical tests (e.g., Fisher's exact test) are used to single out non-associated SNPs, but disease-susceptible SNPs may have small marginal effects in the population and are unlikely to survive the univariate tests. Model-based methods, meanwhile, are impractical for large-scale datasets, and genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of disease. The more recent random forest method provides a robust way to screen SNPs at the scale of thousands, but for larger-scale data, such as Affymetrix Human Mapping 100K GeneChip data, a faster method is required to screen SNPs in whole-genome large-scale association analysis with genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for further, more complex computational modeling.
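The paper's algorithm is not reproduced here, but the core idea — let boosting's weak learners (one-SNP decision stumps) accumulate importance across rounds, so a SNP whose effect appears only in a subgroup still gets credit — can be sketched as follows. All data, names, and parameters below are illustrative assumptions:

```python
import numpy as np

def adaboost_snp_screen(X, y, n_rounds=30):
    """Screen SNPs with AdaBoost over one-SNP decision stumps.
    X: (n_samples, n_snps) genotypes coded 0/1/2; y: labels in {-1, +1}.
    Returns per-SNP importance: the summed stump weight (alpha) of every
    boosting round in which that SNP hosted the best weak learner."""
    n, m = X.shape
    w = np.full(n, 1.0 / n)          # boosting sample weights
    importance = np.zeros(m)
    for _ in range(n_rounds):
        best_err, best_j, best_pred = None, None, None
        for j in range(m):           # exhaustive stump search over SNPs
            for thresh in (1, 2):
                for sign in (1, -1):
                    pred = np.where(X[:, j] >= thresh, sign, -sign)
                    err = w[pred != y].sum()
                    if best_err is None or err < best_err:
                        best_err, best_j, best_pred = err, j, pred
        if best_err >= 0.5:          # no stump beats chance; stop
            break
        alpha = 0.5 * np.log((1 - best_err) / max(best_err, 1e-12))
        importance[best_j] += alpha          # credit the selected SNP
        w *= np.exp(-alpha * y * best_pred)  # up-weight misclassified samples
        w /= w.sum()
    return importance

# Simulated genetic heterogeneity: SNP 0 drives disease in the first half
# of the sample, SNP 5 in the second half; the other SNPs are noise.
rng = np.random.default_rng(0)
y = np.tile([1, -1], 100)
X = rng.integers(0, 3, size=(200, 10))
X[:100, 0] = np.where(y[:100] == 1, 2, 0)
X[100:, 5] = np.where(y[100:] == 1, 2, 0)

imp = adaboost_snp_screen(X, y)
top_two = set(np.argsort(imp)[-2:])
print(top_two)  # the two subgroup-specific SNPs should rank highest
```

A univariate test sees each causal SNP diluted by the subgroup it does not affect, which is exactly the marginal-effect problem the abstract describes; the boosting reweighting recovers both.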

  11. Small-scale test program to develop a more efficient swivel nozzle thrust deflector for V/STOL lift/cruise engines

    NASA Technical Reports Server (NTRS)

    Schlundt, D. W.

    1976-01-01

    The installed performance degradation of a swivel nozzle thrust deflector system obtained during increased vectoring angles of a large-scale test program was investigated and improved. Small-scale models were used to generate performance data for analyzing selected swivel nozzle configurations. A single-swivel nozzle design model with five different nozzle configurations and a twin-swivel nozzle design model, scaled to 0.15 size of the large-scale test hardware, were statically tested at low exhaust pressure ratios of 1.4, 1.3, 1.2, and 1.1 and vectored at four nozzle positions from 0 deg cruise through 90 deg vertical used for the VTOL mode.

  12. Reliability and Validity of Information about Student Achievement: Comparing Large-Scale and Classroom Testing Contexts

    ERIC Educational Resources Information Center

    Cizek, Gregory J.

    2009-01-01

    Reliability and validity are two characteristics that must be considered whenever information about student achievement is collected. However, those characteristics--and the methods for evaluating them--differ in large-scale testing and classroom testing contexts. This article presents the distinctions between reliability and validity in the two…

  13. Dynamics of the McDonnell Douglas Large Scale Dynamic Rig and Dynamic Calibration of the Rotor Balance

    DOT National Transportation Integrated Search

    1994-10-01

    A shake test was performed on the Large Scale Dynamic Rig in the 40- by 80-Foot Wind Tunnel in support of the McDonnell Douglas Advanced Rotor Technology (MDART) Test Program. The shake test identifies the hub modes and the dynamic calibration matrix...

  14. Developing a Strategy for Using Technology-Enhanced Items in Large-Scale Standardized Tests

    ERIC Educational Resources Information Center

    Bryant, William

    2017-01-01

    As large-scale standardized tests move from paper-based to computer-based delivery, opportunities arise for test developers to make use of items beyond traditional selected and constructed response types. Technology-enhanced items (TEIs) have the potential to provide advantages over conventional items, including broadening construct measurement,…

  15. Parallel and serial computing tools for testing single-locus and epistatic SNP effects of quantitative traits in genome-wide association studies

    PubMed Central

    Ma, Li; Runesha, H Birali; Dvorkin, Daniel; Garbe, John R; Da, Yang

    2008-01-01

    Background Genome-wide association studies (GWAS) using single nucleotide polymorphism (SNP) markers provide opportunities to detect epistatic SNPs associated with quantitative traits and to detect the exact mode of an epistasis effect. Computational difficulty is the main bottleneck for epistasis testing in large scale GWAS. Results The EPISNPmpi and EPISNP computer programs were developed for testing single-locus and epistatic SNP effects on quantitative traits in GWAS, including tests of three single-locus effects for each SNP (SNP genotypic effect, additive and dominance effects) and five epistasis effects for each pair of SNPs (two-locus interaction, additive × additive, additive × dominance, dominance × additive, and dominance × dominance) based on the extended Kempthorne model. EPISNPmpi is the parallel computing program for epistasis testing in large scale GWAS and achieved excellent scalability for large scale analysis and portability for various parallel computing platforms. EPISNP is the serial computing program based on the EPISNPmpi code for epistasis testing in small scale GWAS using commonly available operating systems and computer hardware. Three serial computing utility programs were developed for graphical viewing of test results and epistasis networks, and for estimating CPU time and disk space requirements. Conclusion The EPISNPmpi parallel computing program provides an effective computing tool for epistasis testing in large scale GWAS, and the epiSNP serial computing programs are convenient tools for epistasis analysis in small scale GWAS using commonly available computer hardware. PMID:18644146
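As a toy illustration of the simplest test in the list above — the 2-df SNP genotypic effect on a quantitative trait — a one-way ANOVA across the three genotype classes can be computed directly (simulated data and effect size are assumptions; the actual programs use the extended Kempthorne model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
geno = rng.integers(0, 3, n)                   # SNP genotypes coded 0/1/2
trait = 0.5 * geno + rng.normal(0.0, 1.0, n)   # additive effect + noise

# One-way ANOVA F statistic for the genotypic (2-df) effect
groups = [trait[geno == g] for g in (0, 1, 2)]
grand_mean = trait.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
f_stat = (ss_between / 2) / (ss_within / (n - 3))
print(f_stat > 10.0)  # strong genotypic signal at this effect size
```

The epistasis tests extend the same sums-of-squares decomposition to pairs of SNPs, which is what makes exhaustive pairwise scans the computational bottleneck the abstract describes.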

  16. Gap Test Calibrations and Their Scaling

    NASA Astrophysics Data System (ADS)

    Sandusky, Harold

    2011-06-01

    Common tests for measuring the threshold for shock initiation are the NOL large scale gap test (LSGT) with a 50.8-mm diameter donor/gap and the expanded large scale gap test (ELSGT) with a 95.3-mm diameter donor/gap. Despite the same specifications for the explosive donor and polymethyl methacrylate (PMMA) gap in both tests, calibration of shock pressure in the gap versus distance from the donor scales by a factor of 1.75, not the 1.875 difference in their sizes. Recently reported model calculations suggest that the scaling discrepancy results from the viscoelastic properties of PMMA in combination with different methods for obtaining shock pressure. This is supported by the consistent scaling of these donors when calibrated in water-filled aquariums. Calibrations with water gaps will be provided and compared with PMMA gaps. Scaling for other donor systems will also be provided. Shock initiation data with water gaps will be reviewed.
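The size ratio quoted above is easy to verify; a quick check of the geometric donor/gap ratio against the empirically observed calibration scaling, using the values from the abstract:

```python
# ELSGT vs. LSGT donor/gap diameters from the abstract (mm)
elsgt_diam = 95.3
lsgt_diam = 50.8

geometric_ratio = elsgt_diam / lsgt_diam
print(round(geometric_ratio, 3))   # 1.876 -- the "1.875" size ratio
observed_scaling = 1.75            # calibration scaling reported in the abstract
print(round(geometric_ratio / observed_scaling, 3))  # ~1.07, a ~7% discrepancy
```

That ~7% departure from ideal geometric scaling is the discrepancy the abstract attributes to the viscoelastic response of PMMA and to differing shock-pressure measurement methods.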

  17. Gap Test Calibrations and Their Scaling

    NASA Astrophysics Data System (ADS)

    Sandusky, Harold

    2012-03-01

    Common tests for measuring the threshold for shock initiation are the NOL large scale gap test (LSGT) with a 50.8-mm diameter donor/gap and the expanded large scale gap test (ELSGT) with a 95.3-mm diameter donor/gap. Despite the same specifications for the explosive donor and polymethyl methacrylate (PMMA) gap in both tests, calibration of shock pressure in the gap versus distance from the donor scales by a factor of 1.75, not the 1.875 difference in their sizes. Recently reported model calculations suggest that the scaling discrepancy results from the viscoelastic properties of PMMA in combination with different methods for obtaining shock pressure. This is supported by the consistent scaling of these donors when calibrated in water-filled aquariums. Calibrations and their scaling are compared for other donors with PMMA gaps and for various donors in water.

  18. Detecting and Correcting Scale Drift in Test Equating: An Illustration from a Large Scale Testing Program

    ERIC Educational Resources Information Center

    Puhan, Gautam

    2009-01-01

    The purpose of this study is to determine the extent of scale drift on a test that employs cut scores. It was essential to examine scale drift for this testing program because new forms in this testing program are often put on scale through a series of intermediate equatings (known as equating chains). This process may cause equating error to…

  19. Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants. Report 2. First Year Poststocking Results. Volume VII. A Model for Evaluation of the Response of the Lake Conway, Florida, Ecosystem to Introduction of the White Amur.

    DTIC Science & Technology

    1981-11-01

    Large-Scale Operations Management Test of the Use of the White Amur for Control of Problem Aquatic Plants; Report 2 of a series, First Year Poststocking Results; A Model for Evaluation of the Response of the Lake Conway, Florida, Ecosystem to Introduction of the White Amur. (Only fragments of the report documentation page are recoverable from this record.)

  20. A Combined Ethical and Scientific Analysis of Large-scale Tests of Solar Climate Engineering

    NASA Astrophysics Data System (ADS)

    Ackerman, T. P.

    2017-12-01

    Our research group recently published an analysis of the combined ethical and scientific issues surrounding large-scale testing of stratospheric aerosol injection (SAI; Lenferna et al., 2017, Earth's Future). We are expanding this study in two directions. The first is extending this same analysis to other geoengineering techniques, particularly marine cloud brightening (MCB). MCB has substantial differences to SAI in this context because MCB can be tested over significantly smaller areas of the planet and, following injection, has a much shorter lifetime of weeks as opposed to years for SAI. We examine issues such as the role of intent, the lesser of two evils, and the nature of consent. In addition, several groups are currently considering climate engineering governance tools such as a code of ethics and a registry. We examine how these tools might influence climate engineering research programs and, specifically, large-scale testing. The second direction of expansion is asking whether ethical and scientific issues associated with large-scale testing are so significant that they effectively preclude moving ahead with climate engineering research and testing. Some previous authors have suggested that no research should take place until these issues are resolved. We think this position is too draconian and consider a more nuanced version of this argument. We note, however, that there are serious questions regarding the ability of the scientific research community to move to the point of carrying out large-scale tests.

  1. An Approach to Scoring and Equating Tests with Binary Items: Piloting With Large-Scale Assessments

    ERIC Educational Resources Information Center

    Dimitrov, Dimiter M.

    2016-01-01

    This article describes an approach to test scoring, referred to as "delta scoring" (D-scoring), for tests with dichotomously scored items. The D-scoring uses information from item response theory (IRT) calibration to facilitate computations and interpretations in the context of large-scale assessments. The D-score is computed from the…

  2. Development and Initial Testing of the Tiltrotor Test Rig

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.; Sheikman, A. L.

    2018-01-01

    The NASA Tiltrotor Test Rig (TTR) is a new, large-scale proprotor test system, developed jointly with the U.S. Army and Air Force for the National Full-Scale Aerodynamics Complex (NFAC). The TTR is designed to test advanced proprotors up to 26 feet in diameter at speeds up to 300 knots, and even larger rotors at lower airspeeds. This combination of size and speed is unprecedented and is necessary for research into 21st-century tiltrotors and other advanced rotorcraft concepts. The TTR will provide critical data for validation of state-of-the-art design and analysis tools.

  3. Stereotype Threat, Inquiring about Test Takers' Race and Gender, and Performance on Low-Stakes Tests in a Large-Scale Assessment. Research Report. ETS RR-15-02

    ERIC Educational Resources Information Center

    Stricker, Lawrence J.; Rock, Donald A.; Bridgeman, Brent

    2015-01-01

    This study explores stereotype threat on low-stakes tests used in a large-scale assessment, math and reading tests in the Education Longitudinal Study of 2002 (ELS). Issues identified in laboratory research (though not observed in studies of high-stakes tests) were assessed: whether inquiring about their race and gender is related to the…

  4. Applying Multidimensional Item Response Theory Models in Validating Test Dimensionality: An Example of K-12 Large-Scale Science Assessment

    ERIC Educational Resources Information Center

    Li, Ying; Jiao, Hong; Lissitz, Robert W.

    2012-01-01

    This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…

  5. The Washback Effect of Konkoor on Teachers' Attitudes toward Their Teaching

    ERIC Educational Resources Information Center

    Birjandi, Parviz; Shirkhani, Servat

    2012-01-01

    Large scale tests have been considered by many scholars in the field of language testing and teaching to influence teaching and learning considerably. The present study looks at the effect of a large scale test (Konkoor) on the attitudes of teachers in high schools. Konkoor is the university entrance examination in Iran which is taken by at least…

  6. Large-Scale Academic Achievement Testing of Deaf and Hard-of-Hearing Students: Past, Present, and Future

    ERIC Educational Resources Information Center

    Qi, Sen; Mitchell, Ross E.

    2012-01-01

    The first large-scale, nationwide academic achievement testing program using Stanford Achievement Test (Stanford) for deaf and hard-of-hearing children in the United States started in 1969. Over the past three decades, the Stanford has served as a benchmark in the field of deaf education for assessing student academic achievement. However, the…

  7. Fire extinguishing tests -80 with methyl alcohol gasoline

    NASA Astrophysics Data System (ADS)

    Holmstedt, G.; Ryderman, A.; Carlsson, B.; Lennmalm, B.

    1980-10-01

    Large scale tests and laboratory experiments were carried out for estimating the extinguishing effectiveness of three alcohol resistant aqueous film forming foams (AFFF), two alcohol resistant fluoroprotein foams and two detergent foams in various pool fires: gasoline, isopropyl alcohol, acetone, methyl-ethyl ketone, methyl alcohol and M15 (a gasoline, methyl alcohol, isobutene mixture). The scaling down of large scale tests for developing a reliable laboratory method was especially examined. The tests were performed with semidirect foam application, in pools of 50, 11, 4, 0.6, and 0.25 sq m. Burning time, temperature distribution in the liquid, and thermal radiation were determined. An M15 fire can be extinguished with a detergent foam, but it is impossible to extinguish fires in polar solvents, such as methyl alcohol, acetone, and isopropyl alcohol, with detergent foams; AFFF give the best results. Performances with small pools can hardly be correlated with results from large scale fires.

  8. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  9. Recent Developments in Language Assessment and the Case of Four Large-Scale Tests of ESOL Ability

    ERIC Educational Resources Information Center

    Stoynoff, Stephen

    2009-01-01

    This review article surveys recent developments and validation activities related to four large-scale tests of L2 English ability: the iBT TOEFL, the IELTS, the FCE, and the TOEIC. In addition to describing recent changes to these tests, the paper reports on validation activities that were conducted on the measures. The results of this research…

  10. An NCME Instructional Module on Booklet Designs in Large-Scale Assessments of Student Achievement: Theory and Practice

    ERIC Educational Resources Information Center

    Frey, Andreas; Hartig, Johannes; Rupp, Andre A.

    2009-01-01

    In most large-scale assessments of student achievement, several broad content domains are tested. Because more items are needed to cover the content domains than can be presented in the limited testing time to each individual student, multiple test forms or booklets are utilized to distribute the items to the students. The construction of an…

  11. Scaling effects in the static and dynamic response of graphite-epoxy beam-columns. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Jackson, Karen E.

    1990-01-01

    Scale model technology represents one method of investigating the behavior of advanced, weight-efficient composite structures under a variety of loading conditions. It is necessary, however, to understand the limitations involved in testing scale model structures before the technique can be fully utilized. These limitations, or scaling effects, are characterized in the large deflection response and failure of composite beams. Scale model beams were loaded with an eccentric axial compressive load designed to produce large bending deflections and global failure. A dimensional analysis was performed on the composite beam-column loading configuration to determine a model law governing the system response. An experimental program was developed to validate the model law under both static and dynamic loading conditions. Laminate stacking sequences including unidirectional, angle ply, cross ply, and quasi-isotropic were tested to examine a diversity of composite response and failure modes. The model beams were loaded under scaled test conditions until catastrophic failure. A large deflection beam solution was developed to compare with the static experimental results and to analyze beam failure. Also, the finite element code DYCAST (DYnamic Crash Analysis of STructures) was used to model both the static and impulsive beam response. Static test results indicate that the unidirectional and cross ply beam responses scale as predicted by the model law, even under severe deformations. In general, failure modes were consistent between scale models within a laminate family; however, a significant scale effect was observed in strength. The scale effect in strength which was evident in the static tests was also observed in the dynamic tests. Scaling of load and strain time histories between the scale model beams and the prototypes was excellent for the unidirectional beams, but inconsistent results were obtained for the angle ply, cross ply, and quasi-isotropic beams.
Results show that valuable information can be obtained from testing on scale model composite structures, especially in the linear elastic response region. However, due to scaling effects in the strength behavior of composite laminates, caution must be used in extrapolating data taken from a scale model test when that test involves failure of the structure.
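The abstract's model law itself is not reproduced in this record. As a hedged illustration only, the sketch below applies the standard replica-scaling relations that such a dimensional analysis typically yields when model and prototype share the same material: lengths scale by a factor lam, forces by lam squared, dynamic times by lam, and strains are unchanged. The function name and the numbers are hypothetical:

```python
# Hypothetical replica-scaling sketch (NOT the report's actual model law):
# with identical materials, length ~ lam, force ~ lam**2, time ~ lam,
# stress and strain ~ 1.
def prototype_from_model(lam, model_load_N, model_time_s, model_strain):
    """Scale measurements from a 1/lam-scale model up to the prototype."""
    return {
        "load_N": model_load_N * lam**2,  # force scales with cross-section area
        "time_s": model_time_s * lam,     # dynamic response time scales with length
        "strain": model_strain,           # strain is scale-invariant
    }

# e.g. a 1/4-scale beam (lam = 4 going from model to prototype):
print(prototype_from_model(4, model_load_N=500.0, model_time_s=0.01, model_strain=0.004))
```

Note this kind of law assumes strength scales ideally; the scale effect in strength reported above is exactly a deviation from that assumption.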

  12. Large-scale wind tunnel tests of a sting-supported V/STOL fighter model at high angles of attack

    NASA Technical Reports Server (NTRS)

    Stoll, F.; Minter, E. A.

    1981-01-01

    A new sting model support has been developed for the NASA/Ames 40- by 80-Foot Wind Tunnel. This addition to the facility permits testing of relatively large models to large angles of attack or angles of yaw depending on model orientation. An initial test on the sting is described. This test used a 0.4-scale powered V/STOL model designed for testing at angles of attack to 90 deg and greater. A method for correcting wake blockage was developed and applied to the force and moment data. Samples of this data and results of surface-pressure measurements are presented.

  13. Modeling Booklet Effects for Nonequivalent Group Designs in Large-Scale Assessment

    ERIC Educational Resources Information Center

    Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas

    2015-01-01

    Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden of individual students, using various booklets allows aligning the difficulty of the presented items to the assumed…

  14. Potential for geophysical experiments in large scale tests.

    USGS Publications Warehouse

    Dieterich, J.H.

    1981-01-01

    Potential research applications for large-specimen geophysical experiments include measurements of scale dependence of physical parameters and examination of interactions with heterogeneities, especially flaws such as cracks. In addition, increased specimen size provides opportunities for improved recording resolution and greater control of experimental variables. Large-scale experiments using a special purpose low stress (100 MPa). -Author

  15. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    PubMed

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value of occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, 2/3 of the test cohort was used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) for PASS <2. The PASS scale performed on par with other scales predicting ELVO while being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
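The scoring rule described in this abstract can be sketched directly: three NIHSS-derived items are each dichotomized, and two or more abnormal items flag suspected ELVO. The function names and the dichotomization flags below are illustrative assumptions, not part of the published instrument:

```python
# Sketch of the PASS scoring rule as described in the abstract: three
# NIHSS-derived items (level of consciousness month/age, gaze palsy/deviation,
# arm weakness), each scored 0 = normal / 1 = abnormal; the abstract's
# optimal cut point is >= 2 abnormal items.
def pass_score(loc_abnormal: bool, gaze_abnormal: bool, arm_weak: bool) -> int:
    """Prehospital Acute Stroke Severity score, range 0-3."""
    return int(loc_abnormal) + int(gaze_abnormal) + int(arm_weak)

def suspect_elvo(score: int, cut_point: int = 2) -> bool:
    """Flag suspected emergent large vessel occlusion at the reported cut point."""
    return score >= cut_point

score = pass_score(loc_abnormal=True, gaze_abnormal=True, arm_weak=False)
print(score, suspect_elvo(score))  # 2 True
```

The simplicity of this rule, three binary checks and a threshold, is the scale's main design point compared with administering the full NIHSS in the field.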

  16. Experience with specifications applicable to certification. [of photovoltaic modules for large-scale application

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1982-01-01

    The Jet Propulsion Laboratory has developed a number of photovoltaic test and measurement specifications to guide the development of modules toward the requirements of future large-scale applications. Experience with these specifications and the extensive module measurement and testing that has accompanied their use is examined. Conclusions are drawn relative to three aspects of product certification: performance measurement, endurance testing and safety evaluation.

  17. Development of fire test methods for airplane interior materials

    NASA Technical Reports Server (NTRS)

    Tustin, E. A.

    1978-01-01

    Fire tests were conducted in a 737 airplane fuselage at NASA-JSC to characterize jet fuel fires in open steel pans (simulating post-crash fire sources and a ruptured airplane fuselage) and to characterize fires in some common combustibles (simulating in-flight fire sources). Design post-crash and in-flight fire source selections were based on these data. Large panels of airplane interior materials were exposed to closely-controlled large scale heating simulations of the two design fire sources in a Boeing fire test facility utilizing a surplused 707 fuselage section. Small samples of the same airplane materials were tested by several laboratory fire test methods. Large scale and laboratory scale data were examined for correlative factors. Published data for dangerous hazard levels in a fire environment were used as the basis for developing a method to select the most desirable material where trade-offs in heat, smoke and gaseous toxicant evolution must be considered.

  18. High-Stakes Accountability: Student Anxiety and Large-Scale Testing

    ERIC Educational Resources Information Center

    von der Embse, Nathaniel P.; Witmer, Sara E.

    2014-01-01

    This study examined the relationship between student anxiety about high-stakes testing and their subsequent test performance. The FRIEDBEN Test Anxiety Scale was administered to 1,134 11th-grade students, and data were subsequently collected on their statewide assessment performance. Test anxiety was a significant predictor of test performance…

  19. Ice Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Broeren, Andy; Potapczuk, Mark; Lee, Sam; Malone, Adam; Paul, Ben; Woodard, Brian

    2016-01-01

    The design and certification of modern transport airplanes for flight in icing conditions increasingly relies on three-dimensional numerical simulation tools for ice accretion prediction. There is currently no publicly available, high-quality ice accretion database upon which to evaluate the performance of icing simulation tools for large-scale swept wings that are representative of modern commercial transport airplanes. The purpose of this presentation is to present the results of a series of icing wind tunnel test campaigns whose aim was to provide an ice accretion database for large-scale, swept wings.

  20. Performance of lap splices in large-scale column specimens affected by ASR and/or DEF.

    DOT National Transportation Integrated Search

    2012-06-01

    This research program conducted a large experimental program, which consisted of the design, construction, : curing, deterioration, and structural load testing of 16 large-scale column specimens with a critical lap splice : region, and then compared ...

  1. Interrater Reliability in Large-Scale Assessments--Can Teachers Score National Tests Reliably without External Controls?

    ERIC Educational Resources Information Center

    Pantzare, Anna Lind

    2015-01-01

    In most large-scale assessment systems a set of rather expensive external quality controls are implemented in order to guarantee the quality of interrater reliability. This study empirically examines if teachers' ratings of national tests in mathematics can be reliable without using monitoring, training, or other methods of external quality…

  2. Status of JUPITER Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inoue, T.; Shirakata, K.; Kinjo, K.

    To obtain the data necessary for evaluating the nuclear design method of a large-scale fast breeder reactor, criticality tests with a large-scale homogeneous reactor were conducted as part of a joint research program by Japan and the U.S. Analyses of the tests are underway in both countries. The purpose of this paper is to describe the status of this project.

  3. Changing the English Classroom: When Large-Scale "Common" Testing Meets Secondary Curriculum and Instruction in the United States

    ERIC Educational Resources Information Center

    Cimbricz, Sandra K.; McConn, Matthew L.

    2015-01-01

    This article explores the intersection of new, large-scale standards-based testing, teacher accountability policy, and secondary curriculum and instruction in the United States. Two federally funded consortia--the Smarter Balanced Assessment Consortium and the Partnership for Readiness of College and Careers--prove focal to this paper, as these…

  4. Policy Incentives in Canadian Large-Scale Assessment: How Policy Levers Influence Teacher Decisions about Instructional Change

    ERIC Educational Resources Information Center

    Copp, Derek T.

    2017-01-01

    Large-scale assessment (LSA) is a tool used by education authorities for several purposes, including the promotion of teacher-based instructional change. In Canada, all 10 provinces engage in large-scale testing across several grade levels and subjects, and also have the common expectation that the results data will be used to improve instruction…

  5. Exploring Unidimensional Proficiency Classification Accuracy from Multidimensional Data in a Vertical Scaling Context

    ERIC Educational Resources Information Center

    Kroopnick, Marc Howard

    2010-01-01

    When Item Response Theory (IRT) is operationally applied for large scale assessments, unidimensionality is typically assumed. This assumption requires that the test measures a single latent trait. Furthermore, when tests are vertically scaled using IRT, the assumption of unidimensionality would require that the battery of tests across grades…

  6. Fire extinguishing tests -80 with methyl alcohol gasoline (in MIXED)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holmstedt, G.; Ryderman, A.; Carlsson, B.

    1980-01-01

    Large scale tests and laboratory experiments were carried out for estimating the extinguishing effectiveness of three alcohol resistant aqueous film forming foams (AFFF), two alcohol resistant fluoroprotein foams and two detergent foams in various pool fires: gasoline, isopropyl alcohol, acetone, methyl-ethyl ketone, methyl alcohol and M15 (a gasoline, methyl alcohol, isobutene mixture). The scaling down of large scale tests for developing a reliable laboratory method was especially examined. The tests were performed with semidirect foam application, in pools of 50, 11, 4, 0.6, and 0.25 sq m. Burning time, temperature distribution in the liquid, and thermal radiation were determined. An M15 fire can be extinguished with a detergent foam, but it is impossible to extinguish fires in polar solvents, such as methyl alcohol, acetone, and isopropyl alcohol, with detergent foams; AFFF give the best results. Performances with small pools can hardly be correlated with results from large scale fires.

  7. Stability of Rasch Scales over Time

    ERIC Educational Resources Information Center

    Taylor, Catherine S.; Lee, Yoonsun

    2010-01-01

    Item response theory (IRT) methods are generally used to create score scales for large-scale tests. Research has shown that IRT scales are stable across groups and over time. Most studies have focused on items that are dichotomously scored. Now Rasch and other IRT models are used to create scales for tests that include polytomously scored items.…

  8. A gravitational puzzle.

    PubMed

    Caldwell, Robert R

    2011-12-28

    The challenge to understand the physical origin of the cosmic acceleration is framed as a problem of gravitation: specifically, does the relationship between stress-energy and space-time curvature differ on large scales from the predictions of general relativity? In this article, we describe efforts to model and test a generalized relationship between the matter and the metric using cosmological observations. Late-time tracers of large-scale structure, including the cosmic microwave background, weak gravitational lensing, and clustering, are shown to provide good tests of the proposed solution. Current data come very close to providing a critical test, leaving only a small window in parameter space in the case that the generalized relationship is scale free above galactic scales.

  9. Potential utilization of the NASA/George C. Marshall Space Flight Center in earthquake engineering research

    NASA Technical Reports Server (NTRS)

    Scholl, R. E. (Editor)

    1979-01-01

    Earthquake engineering research capabilities of the National Aeronautics and Space Administration (NASA) facilities at George C. Marshall Space Flight Center (MSFC), Alabama, were evaluated. The results indicate that the NASA/MSFC facilities and supporting capabilities offer unique opportunities for conducting earthquake engineering research. Specific features that are particularly attractive for large scale static and dynamic testing of natural and man-made structures include the following: large physical dimensions of buildings and test bays; high loading capacity; wide range and large number of test equipment and instrumentation devices; multichannel data acquisition and processing systems; technical expertise for conducting large-scale static and dynamic testing; sophisticated techniques for systems dynamics analysis, simulation, and control; and capability for managing large-size and technologically complex programs. Potential uses of the facilities for near and long term test programs to supplement current earthquake research activities are suggested.

  10. Large-Scale Low-Boom Inlet Test Overview

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie

    2011-01-01

    This presentation provides a high level overview of the Large-Scale Low-Boom Inlet Test and was presented at the Fundamental Aeronautics 2011 Technical Conference. In October 2010 a low-boom supersonic inlet concept with flow control was tested in the 8'x6' supersonic wind tunnel at NASA Glenn Research Center (GRC). The primary objectives of the test were to evaluate the inlet stability and operability of a large-scale low-boom supersonic inlet concept by acquiring performance and flowfield validation data, as well as evaluate simple, passive, bleedless inlet boundary layer control options. During this effort two models were tested: a dual stream inlet intended to model potential flight hardware and a single stream design to study a zero-degree external cowl angle and to permit surface flow visualization of the vortex generator flow control on the internal centerbody surface. The tests were conducted by a team of researchers from NASA GRC, Gulfstream Aerospace Corporation, the University of Illinois at Urbana-Champaign, and the University of Virginia.

  11. Large-Scale 3D Printing: The Way Forward

    NASA Astrophysics Data System (ADS)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  12. Performance of lap splices in large-scale column specimens affected by ASR and/or DEF-extension phase.

    DOT National Transportation Integrated Search

    2015-03-01

    A large experimental program, consisting of the design, construction, curing, exposure, and structural load : testing of 16 large-scale column specimens with a critical lap splice region that were influenced by varying : stages of alkali-silica react...

  13. The Reliability of Teacher Decision-Making in Recommending Accommodations for Large-Scale Tests. Technical Report # 08-01

    ERIC Educational Resources Information Center

    Tindal, Gerald; Lee, Daesik; Geller, Leanne Ketterlin

    2008-01-01

    In this paper we review different methods for teachers to recommend accommodations in large scale tests. Then we present data on the stability of their judgments on variables relevant to this decision-making process. The outcomes from the judgments support the need for a more explicit model. Four general categories are presented: student…

  14. Grindability measurements on low-rank fuels. [Prediction of large pulverizer performance from small scale test equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peipho, R.R.; Dougan, D.R.

    1981-01-01

    Experience has shown that the grinding characteristics of low rank coals are best determined by testing them in a pulverizer. Test results from a small MPS-32 Babcock and Wilcox pulverizer used to predict large, full-scale pulverizer performance are presented. The MPS-32 apparatus, test procedure, and evaluation of test results are described. The test data show that the Hardgrove apparatus and the ASTM test method must be used with great caution when considering low-rank fuels. The MPS-32 meets the needs for real-machine simulation but with some disadvantages. A smaller pulverizer is desirable. 1 ref.

  15. Equating in Small-Scale Language Testing Programs

    ERIC Educational Resources Information Center

    LaFlair, Geoffrey T.; Isbell, Daniel; May, L. D. Nicolas; Gutierrez Arvizu, Maria Nelly; Jamieson, Joan

    2017-01-01

    Language programs need multiple test forms for secure administrations and effective placement decisions, but can they have confidence that scores on alternate test forms have the same meaning? In large-scale testing programs, various equating methods are available to ensure the comparability of forms. The choice of equating method is informed by…

  16. Pyrotechnic hazards classification and evaluation program. Run-up reaction testing in pyrotechnic dust suspensions

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A preliminary investigation of the parameters included in run-up dust reactions is presented. Two types of tests were conducted: (1) ignition criteria of large bulk pyrotechnic dusts, and (2) optimal run-up conditions of large bulk pyrotechnic dusts. These tests were used to evaluate the order of magnitude and gross scale requirements needed to induce run-up reactions in pyrotechnic dusts and to simulate at reduced scale an accident that occurred in a manufacturing installation. Test results showed that propagation of pyrotechnic dust clouds resulted in a fireball of relatively long duration and large size. In addition, a plane wave front was observed to travel down the length of the gallery.

  17. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    PubMed

    Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie

    2016-01-01

    In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods achieve better efficiency on large-scale nonsmooth problems; several problems are tested, with dimensions up to 100,000 variables.
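For readers unfamiliar with the method this record builds on, the following minimal sketch shows the classical Hager-Zhang direction update on a smooth 2-D quadratic with exact line search. It does NOT reproduce the paper's nonsmooth modifications; it only illustrates the HZ beta formula that both methods extend:

```python
# Minimal Hager-Zhang (HZ) conjugate gradient sketch on a smooth quadratic
# f(x) = 0.5 x^T A x - b^T x, with exact line search. Illustrative only;
# the paper's modifications for nonsmooth problems are not shown.
A = [[3.0, 1.0], [1.0, 2.0]]   # symmetric positive definite
b = [1.0, 1.0]

def matvec(M, v): return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]
def dot(u, v): return sum(ui * vi for ui, vi in zip(u, v))
def grad(x): return [gi - bi for gi, bi in zip(matvec(A, x), b)]

x = [0.0, 0.0]
g = grad(x)
d = [-gi for gi in g]                      # initial steepest-descent direction
for _ in range(10):
    if dot(g, g) < 1e-20:                  # converged
        break
    alpha = -dot(g, d) / dot(d, matvec(A, d))   # exact line search on a quadratic
    x = [xi + alpha * di for xi, di in zip(x, d)]
    g_new = grad(x)
    y = [gn - gi for gn, gi in zip(g_new, g)]
    dy = dot(d, y)
    # HZ beta: ((y - 2 d ||y||^2 / (d^T y))^T g_new) / (d^T y)
    beta = dot([yi - 2.0 * di * dot(y, y) / dy for yi, di in zip(y, d)], g_new) / dy
    d = [-gn + beta * di for gn, di in zip(g_new, d)]
    g = g_new

print([round(xi, 6) for xi in x])  # converges to A^{-1} b = [0.2, 0.4]
```

On a quadratic this recovers linear-CG behavior (exact convergence in two steps for a 2-D problem); the nonsmooth setting replaces gradients with suitable smoothing or subgradient surrogates.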

  18. Evaluation of the reliability and validity for X16 balance testing scale for the elderly.

    PubMed

    Ju, Jingjuan; Jiang, Yu; Zhou, Peng; Li, Lin; Ye, Xiaolei; Wu, Hongmei; Shen, Bin; Zhang, Jialei; He, Xiaoding; Niu, Chunjin; Xia, Qinghua

    2018-05-10

    Balance performance is considered an indicator of functional status in the elderly; large-scale population screening and evaluation in the community context, followed by appropriate interventions, would be of great significance at the public health level. However, no suitable balance testing scale has been available for large-scale studies in the unique community context of urban China. A balance scale named the X16 balance testing scale was developed, composed of 3 domains and 16 items. The balance abilities of 1985 functionally independent, active community-dwelling elderly adults were tested using the X16 scale. The internal consistency, split-half reliability, content validity, construct validity, and discriminant validity of the X16 balance testing scale were evaluated. Factor analysis was performed to identify an alternative factor structure. The eigenvalues of factors 1, 2, and 3 were 8.53, 1.79, and 1.21, respectively, and their cumulative contribution to the total variance reached 72.0%. These 3 factors mainly represented the domains of static balance, postural stability, and dynamic balance. The Cronbach alpha coefficient for the scale was 0.933. The Spearman correlation coefficients between items and their corresponding domains ranged from 0.538 to 0.964. The correlation coefficient between each item and its corresponding domain was higher than the coefficients between that item and the other domains. With increasing age, scores for overall balance performance and for the static balance, postural stability, and dynamic balance domains declined gradually (P < 0.001), as did the proportion of the elderly with intact balance performance (P < 0.001). The reliability and validity of the X16 balance testing scale are both adequate and acceptable. Because it is simple and quick to administer, it is practical for repeated, routine use, especially in community settings and in large-scale screening.
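The Cronbach alpha reported above (0.933) is a standard internal-consistency statistic computed from item variances and the variance of the total score. A minimal sketch follows; the `perfect` example data are invented for illustration and are not from the X16 study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) matrix of item scores:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly correlated items give the maximum alpha of 1
base = np.arange(10.0)
perfect = np.column_stack([base, base, base])
print(round(cronbach_alpha(perfect), 3))  # 1.0
```

In practice alpha is computed per scale (here it would be run on the 16 X16 items), and values above roughly 0.9 are conventionally read as excellent internal consistency.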

  19. Large-Scale Wind-Tunnel Tests of an Airplane Model with an Unswept, Tilt Wing of Aspect Ratio 5.5, and with Four Propellers and Blowing Flaps

    NASA Technical Reports Server (NTRS)

    Weiberg, James A.; Holzhauser, Curt A.

    1961-01-01

    Tests were made of a large-scale tilt-wing deflected-slipstream VTOL airplane with blowing-type BLC trailing-edge flaps. The model was tested with flap deflections of 0 deg. without BLC, 50 deg. with and without BLC, and 80 deg. with BLC for wing-tilt angles of 0, 30, and 50 deg. Included are results of tests of the model equipped with a leading-edge flap and the results of tests of the model in the presence of a ground plane.

  20. Development of a superconductor magnetic suspension and balance prototype facility for studying the feasibility of applying this technique to large scale aerodynamic testing

    NASA Technical Reports Server (NTRS)

    Zapata, R. N.; Humphris, R. R.; Henderson, K. C.

    1975-01-01

    The basic research and development work towards proving the feasibility of operating an all-superconductor magnetic suspension and balance device for aerodynamic testing is presented. The feasibility of applying a quasi-six-degree-of-freedom free support technique to dynamic stability research was studied, along with the design concepts and parameters for applying magnetic suspension techniques to large-scale aerodynamic facilities. A prototype aerodynamic test facility was implemented. Relevant aspects of the development of the prototype facility are described in three sections: (1) design characteristics; (2) operational characteristics; and (3) scaling to larger facilities.

  1. Research-Based Recommendations for the Use of Accommodations in Large-Scale Assessments: 2012 Update. Practical Guidelines for the Education of English Language Learners. Book 4

    ERIC Educational Resources Information Center

    Kieffer, Michael J.; Rivera, Mabel; Francis, David J.

    2012-01-01

    This report presents results from a new quantitative synthesis of research on the effectiveness and validity of test accommodations for English language learners (ELLs) taking large-scale assessments. In 2006, the Center on Instruction published a review of the literature on test accommodations for ELLs titled "Practical Guidelines for the…

  2. Preliminary design, analysis, and costing of a dynamic scale model of the NASA space station

    NASA Technical Reports Server (NTRS)

    Gronet, M. J.; Pinson, E. D.; Voqui, H. L.; Crawley, E. F.; Everman, M. R.

    1987-01-01

    The difficulty of testing the next generation of large flexible space structures on the ground places an emphasis on other means for validating predicted on-orbit dynamic behavior. Scale model technology represents one way of verifying analytical predictions with ground test data. This study investigates the preliminary design, scaling and cost trades for a Space Station dynamic scale model. The scaling of nonlinear joint behavior is studied from theoretical and practical points of view. Suspension system interaction trades are conducted for the ISS Dual Keel Configuration and Build-Up Stages suspended in the proposed NASA/LaRC Large Spacecraft Laboratory. Key issues addressed are scaling laws, replication vs. simulation of components, manufacturing, suspension interactions, joint behavior, damping, articulation capability, and cost. These issues are the subject of parametric trades versus the scale model factor. The results of these detailed analyses are used to recommend scale factors for four different scale model options, each with varying degrees of replication. Potential problems in constructing and testing the scale model are identified, and recommendations for further study are outlined.

  3. Safety Testing of Ammonium Nitrate Based Mixtures

    NASA Astrophysics Data System (ADS)

    Phillips, Jason; Lappo, Karmen; Phelan, James; Peterson, Nathan; Gilbert, Don

    2013-06-01

    Ammonium nitrate (AN)/ammonium nitrate based explosives have a lengthy documented history of use by adversaries in acts of terror. While historical research has been conducted on AN-based explosive mixtures, it has primarily focused on detonation performance while varying the oxygen balance between the oxidizer and fuel components. Similarly, historical safety data on these materials is often lacking in pertinent details such as specific fuel type, particle size parameters, oxidizer form, etc. A variety of AN-based fuel-oxidizer mixtures were tested for small-scale sensitivity in preparation for large-scale testing. Current efforts focus on maintaining a zero oxygen-balance (a stoichiometric ratio for active chemical participants) while varying factors such as charge geometry, oxidizer form, particle size, and inert diluent ratios. Small-scale safety testing was conducted on various mixtures and fuels. It was found that ESD sensitivity is significantly affected by particle size, while this is less so for impact and friction. Thermal testing is in progress to evaluate hazards that may be experienced during large-scale testing.

  4. Questionnaire-based assessment of executive functioning: Psychometrics.

    PubMed

    Castellanos, Irina; Kronenberger, William G; Pisoni, David B

    2018-01-01

    The psychometric properties of the Learning, Executive, and Attention Functioning (LEAF) scale were investigated in an outpatient clinical pediatric sample. As a part of clinical testing, the LEAF scale, which broadly measures neuropsychological abilities related to executive functioning and learning, was administered to parents of 118 children and adolescents referred for psychological testing at a pediatric psychology clinic; 85 teachers also completed LEAF scales to assess reliability across different raters and settings. Scores on neuropsychological tests of executive functioning and academic achievement were abstracted from charts. Psychometric analyses of the LEAF scale demonstrated satisfactory internal consistency, parent-teacher inter-rater reliability in the small to large effect size range, and test-retest reliability in the large effect size range, similar to values for other executive functioning checklists. Correlations between corresponding subscales on the LEAF and other behavior checklists were large, while most correlations with neuropsychological tests of executive functioning and achievement were significant but in the small to medium range. Results support the utility of the LEAF as a reliable and valid questionnaire-based assessment of delays and disturbances in executive functioning and learning. Applications and advantages of the LEAF and other questionnaire measures of executive functioning in clinical neuropsychology settings are discussed.

  5. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

    Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with a sub-orbital flight of a 3.0 m diameter vehicle. To advance this technology, large-scale HIADs (6.0 to 8.5 m) must be developed and tested. To characterize the performance of large-scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  6. Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.

    PubMed

    Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M

    2009-04-03

    We present a model-independent method to test for scale-dependent non-Gaussianities in combination with scaling indices as test statistics. Therefore, surrogate data sets are generated, in which the power spectrum of the original data is preserved, while the higher order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures for non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
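The surrogate construction described above preserves the power spectrum while destroying higher-order (phase) information. A minimal sketch of full phase randomization follows; note the paper applies a scale-dependent shuffling of phases rather than randomizing all of them, and the cubed-Gaussian test signal is purely illustrative.

```python
import numpy as np

def phase_randomized_surrogate(data, rng):
    """Surrogate with the same power spectrum but random Fourier phases."""
    f = np.fft.rfft(data)
    phases = rng.uniform(0, 2 * np.pi, len(f))
    phases[0] = 0.0                    # keep the zero-frequency (mean) term real
    if len(data) % 2 == 0:
        phases[-1] = 0.0               # Nyquist bin must also stay real
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=len(data))

rng = np.random.default_rng(0)
x = rng.standard_normal(1024) ** 3     # strongly non-Gaussian test signal
s = phase_randomized_surrogate(x, rng)

# Amplitude spectra agree, although higher-order correlations are destroyed
print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s)), atol=1e-8))  # True
```

A test statistic (scaling indices in the paper) is then compared between the data and an ensemble of such surrogates; any significant difference must come from the randomized higher-order correlations, not from the (shared) power spectrum.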

  7. The use of impact force as a scale parameter for the impact response of composite laminates

    NASA Technical Reports Server (NTRS)

    Jackson, Wade C.; Poe, C. C., Jr.

    1992-01-01

    The building block approach is currently used to design composite structures. With this approach, the data from coupon tests are scaled up to determine the design of a structure. Current standard impact tests and methods of relating test data to other structures are not generally understood and are often used improperly. A methodology is outlined for using impact force as a scale parameter for delamination damage for impacts of simple plates. Dynamic analyses were used to define ranges of plate parameters and impact parameters where quasi-static analyses are valid. These ranges include most low-velocity impacts, where the mass of the impacter is large and the size of the specimen is small. For large-mass impacts of moderately thick (0.35 to 0.70 cm) laminates, the maximum extent of delamination damage increased with increasing impact force and decreasing specimen thickness. For large-mass impact tests at a given kinetic energy, impact force, and hence delamination size, depends on specimen size, specimen thickness, boundary conditions, and indenter size and shape. If damage is reported in terms of impact force instead of kinetic energy, large-mass test results can be applied directly to other plates of the same size.

  9. Ice Shape Scaling for Aircraft in SLD Conditions

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Tsao, Jen-Ching

    2008-01-01

    This paper summarizes recent NASA research into scaling of SLD conditions, with data from both SLD and Appendix C tests. Scaling results obtained by applying existing scaling methods for size and test-condition scaling are reviewed, and large-feather-growth issues, including scaling approaches, are discussed briefly. The material included applies only to unprotected, unswept geometries. Within the limits of the conditions tested to date, the results show that the similarity parameters needed for Appendix C scaling can also be used for SLD scaling, and no additional parameters are required. These results were based on visual comparisons of reference and scale ice shapes. Nearly all of the experimental results presented were obtained in sea-level tunnels. The currently recommended methods to scale model size, icing limit, and test conditions are described.

  10. Quarter Scale RLV Multi-Lobe LH2 Tank Test Program

    NASA Technical Reports Server (NTRS)

    Blum, Celia; Puissegur, Dennis; Tidwell, Zeb; Webber, Carol

    1998-01-01

    Thirty cryogenic pressure cycles have been completed on the Lockheed Martin Michoud Space Systems quarter scale RLV composite multi-lobe liquid hydrogen propellant tank assembly, completing the initial phases of testing and demonstrating technologies key to the success of large scale composite cryogenic tankage for X33, RLV, and other future launch vehicles.

  11. Providing Test Performance Feedback That Bridges Assessment and Instruction: The Case of Two Standardized English Language Tests in Japan

    ERIC Educational Resources Information Center

    Sawaki, Yasuyo; Koizumi, Rie

    2017-01-01

    This small-scale qualitative study considers feedback and results reported for two major large-scale English language tests administered in Japan: the Global Test of English Communication for Students (GTECfS) and the Eiken Test in Practical English Proficiency (Eiken). Specifically, it examines current score-reporting practices in student and…

  12. Low speed tests of a fixed geometry inlet for a tilt nacelle V/STOL airplane

    NASA Technical Reports Server (NTRS)

    Syberg, J.; Koncsek, J. L.

    1977-01-01

    Test data were obtained with a 1/4 scale cold flow model of the inlet at freestream velocities from 0 to 77 m/s (150 knots) and angles of attack from 45 deg to 120 deg. A large scale model was tested with a high bypass ratio turbofan in the NASA/ARC wind tunnel. A fixed geometry inlet is a viable concept for a tilt nacelle V/STOL application. Comparison of data obtained with the two models indicates that flow separation at high angles of attack and low airflow rates is strongly sensitive to Reynolds number and that the large scale model has a significantly improved range of separation-free operation.

  13. Wing force and surface pressure data from a hover test of a 0.658-scale V-22 rotor and wing

    NASA Technical Reports Server (NTRS)

    Felker, Fort F.; Shinoda, Patrick R.; Heffernan, Ruth M.; Sheehy, Hugh F.

    1990-01-01

    A hover test of a 0.658-scale V-22 rotor and wing was conducted in the 40 x 80 foot wind tunnel at Ames Research Center. The principal objective of the test was to measure the surface pressures and total download on a large scale V-22 wing in hover. The test configuration consisted of a single rotor and semispan wing on independent balance systems. A large image plane was used to represent the aircraft plane of symmetry. Wing flap angles ranging from 45 to 90 degrees were examined. Data were acquired for both directions of the rotor rotation relative to the wing. Steady and unsteady wing surface pressures, total wing forces, and rotor performance data are presented for all of the configurations that were tested.

  14. Mach Number effects on turbulent superstructures in wall bounded flows

    NASA Astrophysics Data System (ADS)

    Kaehler, Christian J.; Bross, Matthew; Scharnowski, Sven

    2017-11-01

    Planar and three-dimensional flow field measurements along a flat plate boundary layer in the Trisonic Wind Tunnel Munich (TWM) are examined with the aim of characterizing the scaling, spatial organization, and topology of large-scale turbulent superstructures in compressible flow. This facility is well suited to the investigation, as the ratio of boundary layer thickness to test section spanwise extent is around 1/25, ensuring minimal sidewall and corner effects on turbulent structures in the center of the test section. A major difficulty in the experimental investigation of large-scale features is the sheer size of the superstructures, which can extend over many boundary layer thicknesses. Using multiple PIV systems, it was possible to capture the full spatial extent of large-scale structures over a range of Mach numbers from Ma = 0.3 to 3. To calculate the average large-scale structure length and spacing, the acquired vector fields were analyzed by statistical multi-point methods, which show large-scale structures with a correlation length of around 10 boundary layer thicknesses over the range of Mach numbers investigated. Furthermore, the average spacing between high- and low-momentum structures is on the order of a boundary layer thickness. This work is supported by the Priority Programme SPP 1881 Turbulent Superstructures of the Deutsche Forschungsgemeinschaft.
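The multi-point statistics mentioned above rest on the two-point autocorrelation, from which a correlation length can be read off. A minimal sketch follows, estimating an integral length scale from a synthetic AR(1) signal standing in for a streamwise velocity record; the signal, the coefficient `phi`, and the grid spacing are illustrative assumptions, not TWM data.

```python
import numpy as np

def autocorrelation(u):
    """Normalized autocorrelation R(r) via FFT (zero-padded, biased estimator)."""
    u = u - u.mean()
    n = len(u)
    f = np.fft.rfft(u, 2 * n)                 # zero-pad to avoid circular wrap
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]

def integral_scale(u, dx=1.0):
    """Integrate R(r) from r = 0 up to its first zero crossing."""
    R = autocorrelation(u)
    zero = int(np.argmax(R <= 0)) if np.any(R <= 0) else len(R)
    return dx * R[:zero].sum()

# AR(1) surrogate: R(r) ~ phi**r, so the integral scale is ~ 1/(1-phi) = 10
rng = np.random.default_rng(1)
phi, n = 0.9, 100_000
u = np.empty(n)
u[0] = 0.0
for i in range(1, n):
    u[i] = phi * u[i - 1] + rng.standard_normal()

L = integral_scale(u)
print(f"integral scale ~ {L:.1f} grid points")
```

Applied to PIV velocity fields, the same correlation would be taken in the streamwise direction and the resulting length normalized by the boundary layer thickness, which is how a superstructure correlation length of ~10 thicknesses is expressed.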

  15. An Investigation on Computer-Adaptive Multistage Testing Panels for Multidimensional Assessment

    ERIC Educational Resources Information Center

    Wang, Xinrui

    2013-01-01

    The computer-adaptive multistage testing (ca-MST) has been developed as an alternative to computerized adaptive testing (CAT), and been increasingly adopted in large-scale assessments. Current research and practice only focus on ca-MST panels for credentialing purposes. The ca-MST test mode, therefore, is designed to gauge a single scale. The…

  16. Impact of Accumulated Error on Item Response Theory Pre-Equating with Mixed Format Tests

    ERIC Educational Resources Information Center

    Keller, Lisa A.; Keller, Robert; Cook, Robert J.; Colvin, Kimberly F.

    2016-01-01

    The equating of tests is an essential process in high-stakes, large-scale testing conducted over multiple forms or administrations. By adjusting for differences in difficulty and placing scores from different administrations of a test on a common scale, equating allows scores from these different forms and administrations to be directly compared…

  17. Evaluation of Large-scale Data to Detect Irregularity in Payment for Medical Services. An Extended Use of Benford's Law.

    PubMed

    Park, Junghyun A; Kim, Minki; Yoon, Seokjoon

    2016-05-17

    Sophisticated anti-fraud systems for the healthcare sector have been built based on several statistical methods. Although existing methods have been developed to detect fraud in the healthcare sector, these algorithms consume considerable time and cost, and lack a theoretical basis for handling large-scale data. Based on mathematical theory, this study proposes a new approach to using Benford's Law in which individual-level data are closely examined to identify specific fees for in-depth analysis. We extended the mathematical theory to demonstrate the manner in which large-scale data conform to Benford's Law. Then, we empirically tested its applicability using actual large-scale healthcare data from Korea's Health Insurance Review and Assessment (HIRA) National Patient Sample (NPS). For Benford's Law, we used the mean absolute deviation (MAD) formula to test the large-scale data. We conducted our study on 32 diseases, comprising 25 representative diseases and 7 DRG-regulated diseases. We performed an empirical test on the 25 diseases, showing the applicability of Benford's Law to large-scale data in the healthcare industry. For the seven DRG-regulated diseases, we examined the individual-level data to identify specific fees for in-depth analysis. Among the eight categories of medical costs, we assessed the strength of certain irregularities based on the details of each DRG-regulated disease. Using the degree of abnormality, we propose priority actions to be taken by government health departments and private insurance institutions to bring unnecessary medical expenses under control. However, when detecting deviations from Benford's Law, relatively high contamination ratios are required at conventional significance levels.
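The MAD conformity test referenced above compares observed first-digit proportions against Benford's expected proportions log10(1 + 1/d). A minimal sketch follows; the geometric-series test data are illustrative and are not HIRA claims data.

```python
import math
from collections import Counter

# Benford's expected first-digit proportions: P(d) = log10(1 + 1/d)
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    """Leading significant digit of a nonzero number."""
    return int(f"{abs(x):.15e}"[0])   # scientific notation puts it first

def benford_mad(values):
    """Mean absolute deviation between observed and Benford proportions
    over the nine possible leading digits."""
    digits = [first_digit(v) for v in values if v != 0]
    n = len(digits)
    counts = Counter(digits)
    return sum(abs(counts.get(d, 0) / n - BENFORD[d]) for d in range(1, 10)) / 9

# A geometric growth series is a classic Benford-conforming sequence
data = [1.05 ** k for k in range(1, 2000)]
print(f"MAD = {benford_mad(data):.4f}")
```

Small MAD values indicate conformity (a commonly cited close-conformity cutoff is around 0.006); payment categories with large MAD would be flagged for the kind of in-depth, individual-level review the study describes.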

  18. Controlling Guessing Bias in the Dichotomous Rasch Model Applied to a Large-Scale, Vertically Scaled Testing Program

    ERIC Educational Resources Information Center

    Andrich, David; Marais, Ida; Humphry, Stephen Mark

    2016-01-01

    Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The…

  19. Behavioral self-organization underlies the resilience of a coastal ecosystem.

    PubMed

    de Paoli, Hélène; van der Heide, Tjisse; van den Berg, Aniek; Silliman, Brian R; Herman, Peter M J; van de Koppel, Johan

    2017-07-25

    Self-organized spatial patterns occur in many terrestrial, aquatic, and marine ecosystems. Theoretical models and observational studies suggest self-organization, the formation of patterns due to ecological interactions, is critical for enhanced ecosystem resilience. However, experimental tests of this cross-ecosystem theory are lacking. In this study, we experimentally test the hypothesis that self-organized pattern formation improves the persistence of mussel beds ( Mytilus edulis ) on intertidal flats. In natural beds, mussels generate self-organized patterns at two different spatial scales: regularly spaced clusters of mussels at centimeter scale driven by behavioral aggregation and large-scale, regularly spaced bands at meter scale driven by ecological feedback mechanisms. To test for the relative importance of these two spatial scales of self-organization on mussel bed persistence, we conducted field manipulations in which we factorially constructed small-scale and/or large-scale patterns. Our results revealed that both forms of self-organization enhanced the persistence of the constructed mussel beds in comparison to nonorganized beds. Small-scale, behaviorally driven cluster patterns were found to be crucial for persistence, and thus resistance to wave disturbance, whereas large-scale, self-organized patterns facilitated reformation of small-scale patterns if mussels were dislodged. This study provides experimental evidence that self-organization can be paramount to enhancing ecosystem persistence. We conclude that ecosystems with self-organized spatial patterns are likely to benefit greatly from conservation and restoration actions that use the emergent effects of self-organization to increase ecosystem resistance to disturbance.

  1. "Fan-Tip-Drive" High-Power-Density, Permanent Magnet Electric Motor and Test Rig Designed for a Nonpolluting Aircraft Propulsion Program

    NASA Technical Reports Server (NTRS)

    Brown, Gerald V.; Kascak, Albert F.

    2004-01-01

    A scaled blade-tip-drive test rig was designed at the NASA Glenn Research Center. The rig is a scaled version of a direct-current brushless motor that would be located in the shroud of a thrust fan. This geometry is very attractive since the allowable speed of the armature is approximately the speed of the blade tips (Mach 1 or 1100 ft/s). The magnetic pressure generated in the motor acts over a large area and, thus, produces a large force or torque. This large force multiplied by the large velocity results in a high-power-density motor.

  2. Large Field Photogrammetry Techniques in Aircraft and Spacecraft Impact Testing

    NASA Technical Reports Server (NTRS)

    Littell, Justin D.

    2010-01-01

    The Landing and Impact Research Facility (LandIR) at NASA Langley Research Center is a 240 ft. high A-frame structure which is used for full-scale crash testing of aircraft and rotorcraft vehicles. Because the LandIR provides a unique capability to introduce impact velocities in the forward and vertical directions, it is also serving as the facility for landing tests on full-scale and sub-scale Orion spacecraft mass simulators. Recently, a three-dimensional photogrammetry system was acquired to assist with the gathering of vehicle flight data before, throughout and after the impact. This data provides the basis for the post-test analysis and data reduction. Experimental setups for pendulum swing tests on vehicles having both forward and vertical velocities can extend to 50 x 50 x 50 foot cubes, while weather, vehicle geometry, and other constraints make each experimental setup unique to each test. This paper will discuss the specific calibration techniques for large fields of views, camera and lens selection, data processing, as well as best practice techniques learned from using the large field of view photogrammetry on a multitude of crash and landing test scenarios unique to the LandIR.

  3. Large scale static tests of a tilt-nacelle V/STOL propulsion/attitude control system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The concept of a combined V/STOL propulsion and aircraft attitude control system was subjected to large scale engine tests. The tilt nacelle/attitude control vane package consisted of the T55 powered Hamilton Standard Q-Fan demonstrator. Vane forces, moments, thermal and acoustic characteristics as well as the effects on propulsion system performance were measured under conditions simulating hover in and out of ground effect.

  4. Modal Testing of the NPSAT1 Engineering Development Unit

    DTIC Science & Technology

    2012-07-01

    I declare that I have prepared this Master's thesis independently, using only the cited sources and aids... logarithmic scale. As Figure 2 shows, natural frequencies are indicated by large values of the first CMIF (peaks), and multiple modes can be detected by... structure's behavior. Ewins even states "that no large-scale modal test should be permitted to proceed until some preliminary SDOF analyses have

  5. Group Centric Networking: Large Scale Over the Air Testing of Group Centric Networking

    DTIC Science & Technology

    2016-11-01

    protocol designed to support groups of devices in a local region [4]. It attempts to use the wireless medium to broadcast minimal control information... 1) Group Discovery: The goal of the group discovery algorithm is to find group nodes without globally flooding control messages. To facilitate this... Large Scale Over-the-Air Testing of Group Centric Networking. Logan Mercer, Greg Kuperman, Andrew Hunter, Brian Proulx, MIT Lincoln Laboratory

  6. Scaling up HIV viral load - lessons from the large-scale implementation of HIV early infant diagnosis and CD4 testing.

    PubMed

    Peter, Trevor; Zeh, Clement; Katz, Zachary; Elbireer, Ali; Alemayehu, Bereket; Vojnov, Lara; Costa, Alex; Doi, Naoko; Jani, Ilesh

    2017-11-01

    The scale-up of effective HIV viral load (VL) testing is an urgent public health priority. Implementation of testing is supported by the availability of accurate, nucleic acid based laboratory and point-of-care (POC) VL technologies and strong WHO guidance recommending routine testing to identify treatment failure. However, test implementation faces challenges related to the developing health systems in many low-resource countries. The purpose of this commentary is to review the challenges and solutions from the large-scale implementation of other diagnostic tests, namely nucleic-acid based early infant HIV diagnosis (EID) and CD4 testing, and identify key lessons to inform the scale-up of VL. Experience with EID and CD4 testing provides many key lessons to inform VL implementation and may enable more effective and rapid scale-up. The primary lessons from earlier implementation efforts are to strengthen linkage to clinical care after testing, and to improve the efficiency of testing. Opportunities to improve linkage include data systems to support the follow-up of patients through the cascade of care and test delivery, rapid sample referral networks, and POC tests. Opportunities to increase testing efficiency include improvements to procurement and supply chain practices, well connected tiered laboratory networks with rational deployment of test capacity across different levels of health services, routine resource mapping and mobilization to ensure adequate resources for testing programs, and improved operational and quality management of testing services. If applied to VL testing programs, these approaches could help improve the impact of VL on ART failure management and patient outcomes, reduce overall costs and help ensure the sustainable access to reduced pricing for test commodities, as well as improve supportive health systems such as efficient, and more rigorous quality assurance. 
These lessons draw from traditional laboratory practices as well as fields such as logistics, operations management and business. The lessons and innovations from large-scale EID and CD4 programs described here can be adapted to inform more effective scale-up approaches for VL. They demonstrate the value of an integrated approach to health system strengthening that focuses on key levers for test access, such as data systems, supply efficiencies and network management. They also highlight the challenges of implementation and the need for more innovative approaches and effective partnerships to achieve equitable and cost-effective test access. © 2017 The Authors. Journal of the International AIDS Society published by John Wiley & Sons Ltd on behalf of the International AIDS Society.

  7. Experimental Study of Homogeneous Isotropic Slowly-Decaying Turbulence in Giant Grid-Wind Tunnel Set Up

    NASA Astrophysics Data System (ADS)

    Aliseda, Alberto; Bourgoin, Mickael; Eswirp Collaboration

    2014-11-01

    We present preliminary results from a recent grid turbulence experiment conducted at the ONERA wind tunnel in Modane, France. The ESWIRP Collaboration was conceived to probe the smallest scales of a canonical turbulent flow at very high Reynolds numbers. To achieve this, the largest scales of the turbulence need to be extremely large so that, even with the large separation of scales, the smallest scales remain well above the spatial and temporal resolution of the instruments. The ONERA wind tunnel in Modane (8 m diameter test section) was chosen as the upper limit of the large scales achievable in a laboratory setting. A giant inflatable grid (M = 0.8 m) was conceived to induce slowly-decaying homogeneous isotropic turbulence in a large region of the test section, with minimal structural risk. An international team of researchers collected hot wire anemometry, ultrasound anemometry, resonant cantilever anemometry, fast pitot tube anemometry, cold wire thermometry and high-speed particle tracking data of this canonical turbulent flow. While analysis of this large database, which will become publicly available over the next 2 years, has only started, the Taylor-scale Reynolds number is estimated to be between 400 and 800, with Kolmogorov scales as large as a few mm. The ESWIRP Collaboration is an international team of scientists formed to investigate experimentally the smallest scales of turbulence. It was funded by the European Union to take advantage of the largest wind tunnel in Europe for fundamental research.
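    For context, the quoted millimetre-range Kolmogorov scale follows from the kinematic viscosity and the mean dissipation rate via eta = (nu^3/epsilon)^(1/4). A minimal sketch, where the viscosity of air and the dissipation rate are illustrative assumptions, not measured campaign values:

```python
def kolmogorov_scale(nu, epsilon):
    """Kolmogorov length scale: eta = (nu^3 / epsilon)**0.25."""
    return (nu ** 3 / epsilon) ** 0.25

# Illustrative values (assumptions, not measurements from the campaign):
nu = 1.5e-5       # kinematic viscosity of air near 20 C, m^2/s
epsilon = 1.0e-4  # assumed mean dissipation rate, m^2/s^3

eta = kolmogorov_scale(nu, epsilon)
print(f"eta = {eta * 1000:.2f} mm")  # prints "eta = 2.41 mm"
```

    With these assumed values the dissipative scale indeed lands in the millimetre range quoted in the abstract.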

  8. Using Microsoft Excel[R] to Calculate Descriptive Statistics and Create Graphs

    ERIC Educational Resources Information Center

    Carr, Nathan T.

    2008-01-01

    Descriptive statistics and appropriate visual representations of scores are important for all test developers, whether they are experienced testers working on large-scale projects, or novices working on small-scale local tests. Many teachers put in charge of testing projects do not know "why" they are important, however, and are utterly convinced…

  9. Measuring large-scale vertical motion in the atmosphere with dropsondes

    NASA Astrophysics Data System (ADS)

    Bony, Sandrine; Stevens, Bjorn

    2017-04-01

    Large-scale vertical velocity modulates important processes in the atmosphere, including the formation of clouds, and constitutes a key component of the large-scale forcing of Single-Column Model simulations and Large-Eddy Simulations. Its measurement has also been a long-standing challenge for observationalists. We will show that it is possible to measure the vertical profile of large-scale wind divergence and vertical velocity from aircraft by using dropsondes. This methodology was tested in August 2016 during the NARVAL2 campaign in the lower Atlantic trades. Results will be shown for several research flights, the robustness and the uncertainty of the measurements will be assessed, and observational estimates will be compared with data from high-resolution numerical forecasts.
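    One common way to obtain area-averaged divergence from a circle of dropsondes is to least-squares fit a linear wind field to the sonde positions and winds and read off D = du/dx + dv/dy. The following is a hedged sketch of that regression idea, not necessarily the campaign's exact processing; the sonde positions, winds, and 100 km radius are synthetic:

```python
import math

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    d = det3(A)
    out = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]
        out.append(det3(Aj) / d)
    return out

def fit_plane(xs, ys, zs):
    """Least-squares fit z = c0 + cx*x + cy*y; returns (c0, cx, cy)."""
    n = len(xs)
    Sx, Sy = sum(xs), sum(ys)
    Sxx = sum(x * x for x in xs)
    Syy = sum(y * y for y in ys)
    Sxy = sum(x * y for x, y in zip(xs, ys))
    A = [[n, Sx, Sy], [Sx, Sxx, Sxy], [Sy, Sxy, Syy]]
    b = [sum(zs),
         sum(x * z for x, z in zip(xs, zs)),
         sum(y * z for y, z in zip(ys, zs))]
    return solve3(A, b)

def divergence(xs, ys, us, vs):
    _, ux, _ = fit_plane(xs, ys, us)   # du/dx
    _, _, vy = fit_plane(xs, ys, vs)   # dv/dy
    return ux + vy

# Hypothetical sounding circle: 12 dropsondes on a ~100 km radius, sampling
# a synthetic linear wind field whose true divergence is 3e-5 s^-1.
r = 100e3
xs = [r * math.cos(2 * math.pi * k / 12) for k in range(12)]
ys = [r * math.sin(2 * math.pi * k / 12) for k in range(12)]
us = [1e-5 * x for x in xs]  # du/dx = 1e-5 s^-1
vs = [2e-5 * y for y in ys]  # dv/dy = 2e-5 s^-1
D = divergence(xs, ys, us, vs)
```

    For an exactly linear wind field the fit recovers the prescribed divergence; with real sondes, residuals from the fit give a handle on the measurement uncertainty the abstract mentions.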

  10. A Limited-Memory BFGS Algorithm Based on a Trust-Region Quadratic Model for Large-Scale Nonlinear Equations.

    PubMed

    Li, Yong; Yuan, Gonglin; Wei, Zengxin

    2015-01-01

    In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method.
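    The paper's L-M-BFGS trust-region algorithm is not reproduced here; as a sketch of the same problem class (solving F(x) = 0 with a regularized step), a plain Levenberg-Marquardt-style damped Gauss-Newton iteration on a small hypothetical test system, where the damping parameter lam plays a role loosely analogous to the trust-region radius:

```python
def F(x, y):
    """Hypothetical test system: x^2 + y^2 = 4 and x = y (root at x = y = sqrt(2))."""
    return [x * x + y * y - 4.0, x - y]

def J(x, y):
    """Analytic Jacobian of F."""
    return [[2.0 * x, 2.0 * y], [1.0, -1.0]]

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def damped_newton(x, y, lam=1e-3, tol=1e-12, max_iter=200):
    for _ in range(max_iter):
        f = F(x, y)
        if abs(f[0]) + abs(f[1]) < tol:
            break
        j = J(x, y)
        # Solve (J^T J + lam*I) step = -J^T f
        jtj = [[j[0][0]**2 + j[1][0]**2 + lam,
                j[0][0]*j[0][1] + j[1][0]*j[1][1]],
               [j[0][0]*j[0][1] + j[1][0]*j[1][1],
                j[0][1]**2 + j[1][1]**2 + lam]]
        jtf = [j[0][0]*f[0] + j[1][0]*f[1],
               j[0][1]*f[0] + j[1][1]*f[1]]
        dx, dy = solve2(jtj, [-jtf[0], -jtf[1]])
        fn = F(x + dx, y + dy)
        if fn[0]**2 + fn[1]**2 < f[0]**2 + f[1]**2:
            x, y, lam = x + dx, y + dy, lam * 0.5  # good step: trust more
        else:
            lam *= 10.0                            # bad step: damp harder
    return x, y

x, y = damped_newton(2.0, 1.0)
```

    The point of the paper's L-M-BFGS variant is that for large-scale problems the dense Jacobian factorization above becomes infeasible, so the trust-region subproblem is built from a limited-memory quasi-Newton matrix instead.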

  11. Experimental investigation of an ejector-powered free-jet facility

    NASA Technical Reports Server (NTRS)

    Long, Mary Jo

    1992-01-01

    NASA Lewis Research Center's (LeRC) newly developed Nozzle Acoustic Test Rig (NATR) is a large free-jet test facility powered by an ejector system. In order to assess the pumping performance of this ejector concept and determine its sensitivity to various design parameters, a 1/5-scale model of the NATR was built and tested prior to the operation of the actual facility. This paper discusses the results of the 1/5-scale model tests and compares them with the findings from the full-scale tests.

  12. Quality Control for Scoring Tests Administered in Continuous Mode: An NCME Instructional Module

    ERIC Educational Resources Information Center

    Allalouf, Avi; Gutentag, Tony; Baumer, Michal

    2017-01-01

    Quality control (QC) in testing is paramount. QC procedures for tests can be divided into two types. The first type, one that has been well researched, is QC for tests administered to large population groups on few administration dates using a small set of test forms (e.g., large-scale assessment). The second type is QC for tests, usually…

  13. Structural Similitude and Scaling Laws

    NASA Technical Reports Server (NTRS)

    Simitses, George J.

    1998-01-01

    Aircraft and spacecraft comprise the class of aerospace structures that require efficiency and wisdom in design, sophistication and accuracy in analysis, and numerous and careful experimental evaluations of components and prototypes, in order to achieve the necessary system reliability, performance and safety. Preliminary and/or concept design entails the assemblage of system mission requirements, system expected performance and identification of components and their connections as well as of manufacturing and system assembly techniques. This is accomplished through experience based on previous similar designs, and through the possible use of models to simulate the entire system characteristics. Detail design is heavily dependent on information and concepts derived from the previous steps. This information identifies critical design areas which need sophisticated analyses, and design and redesign procedures to achieve the expected component performance. This step may require several independent analysis models, which, in many instances, require component testing. The last step in the design process, before going to production, is the verification of the design. This step necessitates the production of large components and prototypes in order to test component and system analytical predictions and verify strength and performance requirements under the worst loading conditions that the system is expected to encounter in service. Clearly then, full-scale testing is in many cases necessary and always very expensive. In the aircraft industry, in addition to full-scale tests, certification and safety necessitate large component static and dynamic testing. Such tests are extremely difficult and time consuming, but absolutely necessary. Clearly, one should not expect that prototype testing will be totally eliminated in the aircraft industry. It is hoped, though, that we can reduce full-scale testing to a minimum.
Full-scale large component testing is necessary in other industries as well. Shipbuilding, automobile and railway car construction all rely heavily on testing. Regardless of the application, a scaled-down (by a large factor) model (scale model) which closely represents the structural behavior of the full-scale system (prototype) can prove to be an extremely beneficial tool. This possible development must be based on the existence of certain structural parameters that control the behavior of a structural system when acted upon by static and/or dynamic loads. If such structural parameters exist, a scaled-down replica can be built, which will duplicate the response of the full-scale system. The two systems are then said to be structurally similar. The term, then, that best describes this similarity is structural similitude. Similarity of systems requires that the relevant system parameters be identical and that these systems be governed by a unique set of characteristic equations. Thus, if a relation or equation of variables is written for a system, it is valid for all systems which are similar to it. Each variable in a model is proportional to the corresponding variable of the prototype. This ratio, which plays an essential role in predicting the relationship between the model and its prototype, is called the scale factor.

  14. Development of a Shipboard Remote Control and Telemetry Experimental System for Large-Scale Model’s Motions and Loads Measurement in Realistic Sea Waves

    PubMed Central

    Jiao, Jialong; Ren, Huilong; Adenya, Christiaan Adika; Chen, Chaohe

    2017-01-01

    Wave-induced motion and load responses are important criteria for ship performance evaluation. Physical experiments have long been an indispensable tool in the prediction of a ship's navigation state, speed, motions, accelerations, sectional loads and wave impact pressure. Currently, the majority of experiments are conducted in a laboratory tank environment, where the wave conditions differ from realistic sea waves. In this paper, a laboratory tank testing system for ship motions and loads measurement is reviewed and reported first. Then, a novel large-scale model measurement technique is developed based on the laboratory testing foundations to obtain accurate motion and load responses of ships in realistic sea conditions. For this purpose, an advanced remote control and telemetry experimental system was developed in-house to allow for the implementation of large-scale model seakeeping measurement at sea. The experimental system includes a series of sensors, e.g., a Global Positioning System/Inertial Navigation System (GPS/INS) module, course top, optical fiber sensors, strain gauges, pressure sensors and accelerometers. The developed measurement system was tested by field experiments in coastal seas, which indicates that the proposed large-scale model testing scheme is feasible and capable. Meaningful data including ocean environment parameters, ship navigation state, motions and loads were obtained through the sea trial campaign. PMID:29109379

  15. Full-scale flammability test data for validation of aircraft fire mathematical models

    NASA Technical Reports Server (NTRS)

    Kuminecz, J. F.; Bricker, R. W.

    1982-01-01

    Twenty-five large-scale aircraft flammability tests were conducted in a Boeing 737 fuselage at the NASA Johnson Space Center (JSC). The objective of this test program was to provide a data base on the propagation of large-scale aircraft fires to support the validation of aircraft fire mathematical models. Variables in the test program included cabin volume, amount of fuel, fuel pan area, fire location, airflow rate, and cabin materials. A number of tests were conducted with Jet A-1 fuel only, while others were conducted with various Boeing 747-type cabin materials. These included urethane foam seats, passenger service units, stowage bins, and wall and ceiling panels. Two tests were also included using special urethane foam and polyimide foam seats. Tests were conducted with each cabin material individually, with various combinations of these materials, and finally, with all materials in the cabin. The data include information obtained from approximately 160 locations inside the fuselage.

  16. Technology and testing.

    PubMed

    Quellmalz, Edys S; Pellegrino, James W

    2009-01-02

    Large-scale testing of educational outcomes already benefits from technological applications that address logistics such as the development, administration, and scoring of tests, as well as the reporting of results. Innovative applications of technology also provide rich, authentic tasks that challenge the sorts of integrated knowledge, critical thinking, and problem solving seldom well addressed in paper-based tests. Such tasks can be used in both large-scale and classroom-based assessments. Balanced assessment systems can be developed that integrate curriculum-embedded, benchmark, and summative assessments across classroom, district, state, national, and international levels. We discuss here the potential of technology to launch a new era of integrated, learning-centered assessment systems.

  17. Large-scale academic achievement testing of deaf and hard-of-hearing students: past, present, and future.

    PubMed

    Qi, Sen; Mitchell, Ross E

    2012-01-01

    The first large-scale, nationwide academic achievement testing program using the Stanford Achievement Test (Stanford) for deaf and hard-of-hearing children in the United States started in 1969. Over the past three decades, the Stanford has served as a benchmark in the field of deaf education for assessing student academic achievement. However, the validity and reliability of using the Stanford for this special student population still require extensive scrutiny. Recent shifts in the educational policy environment, which require that schools enable all children to achieve proficiency through accountability testing, warrant a close examination of the adequacy and relevance of the current large-scale testing of deaf and hard-of-hearing students. This study has three objectives: (a) it will summarize the historical data over the last three decades to indicate trends in academic achievement for this special population, (b) it will analyze the current federal laws and regulations related to educational testing and special education, thereby identifying gaps between policy and practice in the field, especially identifying the limitations of current testing programs in assessing what deaf and hard-of-hearing students know, and (c) it will offer some insights and suggestions for future testing programs for deaf and hard-of-hearing students.

  18. Asymptotic stability and instability of large-scale systems. [using vector Liapunov functions

    NASA Technical Reports Server (NTRS)

    Grujic, L. T.; Siljak, D. D.

    1973-01-01

    The purpose of this paper is to develop new methods for constructing vector Lyapunov functions and broaden the application of Lyapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. By redefining interconnection functions among the subsystems according to interconnection matrices, the same mathematical machinery can be used to determine connective asymptotic stability of large-scale systems under arbitrary structural perturbations.
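    The abstract does not spell out its "simple algebraic criterion"; in this literature the test is commonly a quasi-dominance (M-matrix) condition on an aggregate comparison matrix built from subsystem stability margins and interconnection bounds. A hedged sketch of that standard check, verifying positive leading principal minors for a hypothetical 3-subsystem matrix (the matrix entries are invented for illustration):

```python
def leading_minors(M):
    """Leading principal minors of a square matrix (Laplace expansion)."""
    def det(A):
        if len(A) == 1:
            return A[0][0]
        total = 0.0
        for j, a in enumerate(A[0]):
            sub = [row[:j] + row[j + 1:] for row in A[1:]]
            total += (-1) ** j * a * det(sub)
        return total
    return [det([row[:k] for row in M[:k]]) for k in range(1, len(M) + 1)]

def is_m_matrix(W):
    """For W with nonpositive off-diagonal entries, W is an M-matrix iff
    all leading principal minors are positive (Kotelyanskii conditions)."""
    return all(m > 0 for m in leading_minors(W))

# Hypothetical 3-subsystem aggregate matrix: diagonal entries represent
# subsystem stability margins, off-diagonal entries bound interconnections.
W = [[ 2.0, -0.5, -0.3],
     [-0.4,  1.5, -0.2],
     [-0.1, -0.6,  1.0]]
print(is_m_matrix(W))  # -> True
```

    When the check passes, composite-system stability results of this type conclude asymptotic stability from the subsystem and interconnection bounds alone, which is what makes the criterion attractive for large-scale systems.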

  19. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  20. Large-scale translocation strategies for reintroducing red-cockaded woodpeckers

    Treesearch

    Daniel Saenz; Kristen A. Baum; Richard N. Conner; D. Craig Rudolph; Ralph Costa

    2002-01-01

    Translocation of wild birds is a potential conservation strategy for the endangered red-cockaded woodpecker (Picoides borealis). We developed and tested 8 large-scale translocation strategy models for a regional red-cockaded woodpecker reintroduction program. The purpose of the reintroduction program is to increase the number of red-cockaded...

  1. Review of the Need for a Large-scale Test Facility for Research on the Effects of Extreme Winds on Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. G. Little

    1999-03-01

    The Idaho National Engineering and Environmental Laboratory (INEEL), through the US Department of Energy (DOE), has proposed that a large-scale wind test facility (LSWTF) be constructed to study, in full-scale, the behavior of low-rise structures under simulated extreme wind conditions. To determine the need for, and potential benefits of, such a facility, the Idaho Operations Office of the DOE requested that the National Research Council (NRC) perform an independent assessment of the role and potential value of an LSWTF in the overall context of wind engineering research. The NRC established the Committee to Review the Need for a Large-scale Test Facility for Research on the Effects of Extreme Winds on Structures, under the auspices of the Board on Infrastructure and the Constructed Environment, to perform this assessment. This report conveys the results of the committee's deliberations as well as its findings and recommendations. Data developed at large scale would enhance the understanding of how structures, particularly light-frame structures, are affected by extreme winds (e.g., hurricanes, tornadoes, severe thunderstorms, and other events). With a large-scale wind test facility, full-sized structures, such as site-built or manufactured housing and small commercial or industrial buildings, could be tested under a range of wind conditions in a controlled, repeatable environment. At this time, the US has no facility specifically constructed for this purpose. During the course of this study, the committee was confronted by three difficult questions: (1) does the lack of a facility equate to a need for the facility? (2) is need alone sufficient justification for the construction of a facility? and (3) would the benefits derived from information produced in an LSWTF justify the costs of producing that information? The committee's evaluation of the need and justification for an LSWTF was shaped by these realities.

  2. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms, using an imaging genetics study with 392 subjects as an example. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally, we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
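    The core idea of robust regression, downweighting observations with large residuals so that artifacts do not drive the fit, can be sketched with Huber-weighted iteratively reweighted least squares on a toy dataset with one gross outlier. This is not the authors' pipeline; the data, and the standard Huber constant 1.35, are illustrative:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope (with intercept, via centering)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def huber_fit(xs, ys, delta=1.35, iters=100):
    """Iteratively reweighted least squares with Huber weights
    w = min(1, delta/|residual|); returns (intercept, slope)."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        w = []
        for x, y in zip(xs, ys):
            r = abs(y - (b0 + b1 * x))
            w.append(1.0 if r <= delta else delta / r)
        Sw = sum(w)
        Swx = sum(wi * x for wi, x in zip(w, xs))
        Swy = sum(wi * y for wi, y in zip(w, ys))
        Swxx = sum(wi * x * x for wi, x in zip(w, xs))
        Swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = Sw * Swxx - Swx * Swx        # weighted normal equations
        b0 = (Swxx * Swy - Swx * Swxy) / det
        b1 = (Sw * Swxy - Swx * Swy) / det
    return b0, b1

# Synthetic data: y = 2x, plus one gross outlier (an artifact-ridden point).
xs = list(range(10))
ys = [2.0 * x for x in xs]
ys[9] = 60.0                       # true value would be 18

b_ols = ols_slope(xs, ys)          # pulled far from the true slope of 2
b0, b1 = huber_fit(xs, ys)         # stays close to 2
```

    The same mechanism is what lets robust estimators tolerate the artifact-contaminated subjects that standard OLS-based group analyses cannot.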

  3. The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation

    NASA Astrophysics Data System (ADS)

    Noh, Yookyung

    The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large deep surveys in multi-wavelength bands are becoming possible. The observational analysis of large-scale structure, guided by large-volume numerical simulations, is beginning to offer us complementary information and crosschecks of cosmological parameters estimated from the anisotropies in Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution and even galaxy formation history is also being aided by observations of different redshift snapshots of the Universe, using various tracers of large-scale structure. This dissertation work covers aspects of large-scale structure from the baryon acoustic oscillation scale, to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological-volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.

  4. Turbulent and Laminar Flow in Karst Conduits Under Unsteady Flow Conditions: Interpretation of Pumping Tests by Discrete Conduit-Continuum Modeling

    NASA Astrophysics Data System (ADS)

    Giese, M.; Reimann, T.; Bailly-Comte, V.; Maréchal, J.-C.; Sauter, M.; Geyer, T.

    2018-03-01

    Due to the duality in terms of (1) the groundwater flow field and (2) the discharge conditions, flow patterns of karst aquifer systems are complex. Estimated aquifer parameters may differ by several orders of magnitude from local (borehole) to regional (catchment) scale because of the large contrast in hydraulic parameters between matrix and conduit, their heterogeneity and anisotropy. One approach to deal with the scale effect problem in the estimation of hydraulic parameters of karst aquifers is the application of large-scale experiments such as long-term high-abstraction conduit pumping tests, which induce measurable groundwater drawdown in both the karst conduit system and the fractured matrix. The numerical discrete conduit-continuum modeling approach MODFLOW-2005 Conduit Flow Process Mode 1 (CFPM1) is employed to simulate laminar and nonlaminar conduit flow, induced by large-scale experiments, in combination with Darcian matrix flow. Effects of large-scale experiments were simulated for idealized settings. Subsequently, diagnostic plots and analyses of different fluxes are applied to interpret differences in the simulated conduit drawdown and general flow patterns. The main focus is on the extent to which different conduit flow regimes affect the drawdown in conduit and matrix, depending on the hydraulic properties of the conduit system, i.e., conduit diameter and relative roughness. In this context, CFPM1 is applied to investigate the importance of considering turbulent conditions for the simulation of karst conduit flow. This work quantifies the relative error that results from assuming laminar conduit flow in the interpretation of a synthetic large-scale pumping test in karst.
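    Why the laminar assumption matters can be illustrated with standard pipe-flow relations (not the CFPM1 formulation itself): the Darcy friction factor is 64/Re in laminar flow, versus a Colebrook-type value in turbulent flow, here via the explicit Swamee-Jain approximation. The conduit diameter, velocity and roughness below are hypothetical:

```python
import math

def reynolds(v, d, nu=1.0e-6):
    """Pipe Reynolds number; nu ~ kinematic viscosity of water, m^2/s."""
    return v * d / nu

def friction_factor(re, rel_roughness):
    """Darcy friction factor: Hagen-Poiseuille if laminar, else the
    explicit Swamee-Jain approximation to the Colebrook equation."""
    if re < 2300.0:
        return 64.0 / re
    return 0.25 / math.log10(rel_roughness / 3.7 + 5.74 / re ** 0.9) ** 2

# Hypothetical conduit: 0.5 m diameter, 0.2 m/s mean velocity,
# relative roughness k/D = 0.01.
re = reynolds(0.2, 0.5)            # ~1e5: well into the turbulent regime
f_turb = friction_factor(re, 0.01)
f_lam = 64.0 / re                  # what a laminar assumption would use
underestimate = f_turb / f_lam     # head-loss factor missed by laminar model
```

    At conduit-scale Reynolds numbers the laminar friction factor underestimates head loss by more than an order of magnitude in this example, which is the kind of relative error the paper quantifies for pumping-test interpretation.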

  5. Anisotropies of the cosmic microwave background in nonstandard cold dark matter models

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Silk, Joseph

    1992-01-01

    Small angular scale cosmic microwave anisotropies in flat, vacuum-dominated, cold dark matter cosmological models which fit large-scale structure observations and are consistent with a high value for the Hubble constant are reexamined. New predictions for CDM models in which the large-scale power is boosted via a high baryon content and low H(0) are presented. Both classes of models are consistent with current limits: an improvement in sensitivity by a factor of about 3 for experiments which probe angular scales between 7 arcmin and 1 deg is required, in the absence of very early reionization, to test boosted CDM models for large-scale structure formation.

  6. Measured acoustic characteristics of ducted supersonic jets at different model scales

    NASA Technical Reports Server (NTRS)

    Jones, R. R., III; Ahuja, K. K.; Tam, Christopher K. W.; Abdelwahab, M.

    1993-01-01

    A large-scale (about a 25x enlargement) model of the Georgia Tech Research Institute (GTRI) hardware was installed and tested in the Propulsion Systems Laboratory of the NASA Lewis Research Center. Acoustic measurements made in these two facilities are compared and the similarity in acoustic behavior over the scale range under consideration is highlighted. The study provides acoustic data over a relatively large scale range, which may be used to demonstrate the validity of the scaling methods employed in the investigation of these phenomena.

  7. Testing the Big Bang: Light elements, neutrinos, dark matter and large-scale structure

    NASA Technical Reports Server (NTRS)

    Schramm, David N.

    1991-01-01

    Several experimental and observational tests of the standard cosmological model are examined. In particular, a detailed discussion is presented regarding: (1) nucleosynthesis, the light element abundances, and neutrino counting; (2) the dark matter problems; and (3) the formation of galaxies and large-scale structure. Comments are made on the possible implications of the recent solar neutrino experimental results for cosmology. An appendix briefly discusses the 17 keV neutrino and the cosmological and astrophysical constraints on it.

  8. Gamma-ray Background Spectrum and Annihilation Rate in the Baryon-symmetric Big-bang Cosmology

    NASA Technical Reports Server (NTRS)

    Puget, J. L.

    1973-01-01

    An attempt was made to acquire experimental information on the problem of baryon symmetry on a large cosmological scale by observing the annihilation products. Data cover absorption cross sections and background radiation due to other sources for the two main products of annihilation, gamma rays and neutrinos. Test results show that the best direct experimental test for the presence of large scale antimatter lies in the gamma ray background spectrum between 1 and 70 MeV.

  9. Active Self-Testing Noise Measurement Sensors for Large-Scale Environmental Sensor Networks

    PubMed Central

    Domínguez, Federico; Cuong, Nguyen The; Reinoso, Felipe; Touhafi, Abdellah; Steenhaut, Kris

    2013-01-01

    Large-scale noise pollution sensor networks consist of hundreds of spatially distributed microphones that measure environmental noise. These networks provide historical and real-time environmental data to citizens and decision makers and are therefore a key technology to steer environmental policy. However, the high cost of certified environmental microphone sensors renders large-scale environmental networks prohibitively expensive. Several environmental network projects have started using off-the-shelf low-cost microphone sensors to reduce their costs, but these sensors have higher failure rates and produce lower quality data. To offset this disadvantage, we developed a low-cost noise sensor that actively checks its condition and, indirectly, the integrity of the data it produces. The main design concept is to embed a 13 mm speaker in the noise sensor casing and, by regularly scheduling a frequency sweep, estimate the evolution of the microphone's frequency response over time. This paper presents our noise sensor's hardware and software design together with the results of a test deployment in a large-scale environmental network in Belgium. Our middle-range-value sensor (around €50) effectively detected all experienced malfunctions, in laboratory tests and outdoor deployments, with a few false positives. Future improvements could further lower the cost of our sensor below €10. PMID:24351634

  10. Measures of Agreement Between Many Raters for Ordinal Classifications

    PubMed Central

    Nelson, Kerrie P.; Edwards, Don

    2015-01-01

    Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to the wide variability observed between physicians' ratings, many large-scale studies have been conducted to quantify agreement between multiple experts' ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1-5]. These are prone to prevalence and marginal distribution issues, become increasingly complex for more than three experts, or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients' test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach, with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449
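    For contrast with the proposed model-based measure, the baseline Cohen's kappa for two raters is straightforward to compute, and a skewed-prevalence table shows one of the flaws the abstract mentions (both tables are hypothetical examples):

```python
def cohens_kappa(table):
    """Cohen's kappa from a raters' cross-tabulation
    (rows: rater 1's categories, columns: rater 2's)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n          # observed agreement
    row = [sum(table[i][j] for j in range(k)) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    pe = sum(r * c for r, c in zip(row, col))            # chance agreement
    return (po - pe) / (1 - pe)

# Balanced marginals: 80% observed agreement gives kappa = 0.6.
kappa_balanced = cohens_kappa([[40, 10], [10, 40]])
# Skewed prevalence: 90% observed agreement, yet kappa is near zero,
# the "prevalence issue" that motivates model-based alternatives.
kappa_skewed = cohens_kappa([[90, 5], [5, 0]])
```

    The mixed-model summary measure proposed in the paper is designed to avoid exactly this prevalence sensitivity while extending naturally to many raters.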

  11. Sex differences in virtual navigation influenced by scale and navigation experience.

    PubMed

    Padilla, Lace M; Creem-Regehr, Sarah H; Stefanucci, Jeanine K; Cashdan, Elizabeth A

    2017-04-01

    The Morris water maze is a spatial abilities test adapted from the animal spatial cognition literature and has been studied in the context of sex differences in humans. This is because its standard design, which manipulates proximal (close) and distal (far) cues, applies to human navigation. However, virtual Morris water mazes test navigation skills on a scale that is vastly smaller than natural human navigation. Many researchers have argued that navigating in large and small scales is fundamentally different, and small-scale navigation might not simulate natural human navigation. Other work has suggested that navigation experience could influence spatial skills. To address the question of how individual differences influence navigational abilities in differently scaled environments, we employed both a large- (146.4 m in diameter) and a traditional- (36.6 m in diameter) scaled virtual Morris water maze along with a novel measure of navigation experience (lifetime mobility). We found sex differences on the small maze in the distal cue condition only, but in both cue-conditions on the large maze. Also, individual differences in navigation experience modulated navigation performance on the virtual water maze, showing that higher mobility was related to better performance with proximal cues for only females on the small maze, but for both males and females on the large maze.

  12. Stormbow: A Cloud-Based Tool for Reads Mapping and Expression Quantification in Large-Scale RNA-Seq Studies

    PubMed Central

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance

    2013-01-01

    RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by applying it in practice to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and used out of the box to process Illumina RNA-Seq datasets. PMID:25937948
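
    A back-of-the-envelope check of the figures quoted in this abstract (all numbers are taken from the abstract itself):

```python
# Stormbow test figures from the abstract: 178 samples, $3.50/sample,
# 6-8 wall-clock hours per 100M-read sample.
n_samples = 178
cost_per_sample = 3.50           # USD, reported average
hours_per_sample = (6, 8)        # reported range per sample

total_cost = n_samples * cost_per_sample
serial_hours = tuple(h * n_samples for h in hours_per_sample)

print(f"total cost: ${total_cost:.2f}")        # $623.00 for the whole batch
print(f"serial time: {serial_hours[0]}-{serial_hours[1]} h")
# Running samples in parallel in the cloud keeps batch wall-clock time
# near the per-sample 6-8 h rather than the 1000+ h serial total.
```

    The parallel, on-demand model is what makes the batch tractable: serially, 178 samples would take roughly 1068-1424 hours.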

  13. Stormbow: A Cloud-Based Tool for Reads Mapping and Expression Quantification in Large-Scale RNA-Seq Studies.

    PubMed

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance

    2013-01-01

    RNA-Seq is becoming a promising replacement for microarrays in transcriptome profiling and differential gene expression studies. Technical improvements have decreased sequencing costs and, as a result, the size and number of RNA-Seq datasets have increased rapidly. However, the increasing volume of data from large-scale RNA-Seq studies poses a practical challenge for data analysis in a local environment. To meet this challenge, we developed Stormbow, a cloud-based software package, to process large volumes of RNA-Seq data in parallel. The performance of Stormbow has been tested by applying it in practice to analyse 178 RNA-Seq samples in the cloud. In our test, it took 6 to 8 hours to process an RNA-Seq sample with 100 million reads, and the average cost was $3.50 per sample. Utilizing Amazon Web Services as the infrastructure for Stormbow allows us to easily scale up to handle large datasets with on-demand computational resources. Stormbow is a scalable, cost-effective, open-source tool for large-scale RNA-Seq data analysis. Stormbow can be freely downloaded and used out of the box to process Illumina RNA-Seq datasets.

  14. Impact of Design Effects in Large-Scale District and State Assessments

    ERIC Educational Resources Information Center

    Phillips, Gary W.

    2015-01-01

    This article proposes that sampling design effects have potentially huge unrecognized impacts on the results reported by large-scale district and state assessments in the United States. When design effects are unrecognized and unaccounted for they lead to underestimating the sampling error in item and test statistics. Underestimating the sampling…

  15. Design under Constraints: The Case of Large-Scale Assessment Systems

    ERIC Educational Resources Information Center

    Mislevy, Robert J.

    2010-01-01

    In "Updating the Duplex Design for Test-Based Accountability in the Twenty-First Century," Bejar and Graf (2010) propose extensions to the duplex design for large-scale assessment presented in Bock and Mislevy (1988). Examining the range of people who use assessment results--from students, teachers, administrators, curriculum designers,…

  16. Linkages between large-scale climate patterns and the dynamics of Alaskan caribou populations

    Treesearch

    Kyle Joly; David R. Klein; David L. Verbyla; T. Scott Rupp; F. Stuart Chapin

    2011-01-01

    Recent research has linked climate warming to global declines in caribou and reindeer (both Rangifer tarandus) populations. We hypothesize that large-scale climate patterns are a contributing factor explaining why these declines are not universal. To test our hypothesis for such relationships among Alaska caribou herds, we calculated the population growth...

  17. Using Practitioner Inquiry within and against Large-Scale Educational Reform

    ERIC Educational Resources Information Center

    Hines, Mary Beth; Conner-Zachocki, Jennifer

    2015-01-01

    This research study examines the impact of teacher research on participants in a large-scale educational reform initiative in the United States, No Child Left Behind, and its strand for reading teachers, Reading First. Reading First supported professional development for teachers in order to increase student scores on standardized tests. The…

  18. The large scale microelectronics Computer-Aided Design and Test (CADAT) system

    NASA Technical Reports Server (NTRS)

    Gould, J. M.

    1978-01-01

    The CADAT system consists of a number of computer programs written in FORTRAN that provide the capability to simulate, lay out, analyze, and create the artwork for large scale microelectronics. The function of each software component of the system is described with references to specific documentation for each software component.

  19. Manufacturing Process Developments for Regeneratively-Cooled Channel Wall Rocket Nozzles

    NASA Technical Reports Server (NTRS)

    Gradl, Paul; Brandsmeier, Will

    2016-01-01

    Regeneratively cooled channel wall nozzles incorporate a series of integral coolant channels to contain the coolant, maintain adequate wall temperatures, and expand hot gas to provide engine thrust and specific impulse. NASA has been evaluating manufacturing techniques targeting large scale channel wall nozzles to support affordability of current and future liquid rocket engine nozzles and thrust chamber assemblies. The development of these large scale manufacturing techniques focuses on liner formation, channel slotting with advanced abrasive water-jet milling techniques, and closeout of the coolant channels to replace or augment other cost reduction techniques being evaluated for nozzles. NASA is developing a series of channel closeout techniques, including large scale additive manufacturing laser deposition and explosively bonded closeouts. A series of subscale nozzles was completed to evaluate these processes. Fabrication of mechanical test and metallography samples, in addition to subscale hardware, has focused on Inconel 625, 300 series stainless, and aluminum alloys, as well as other candidate materials. Evaluations of these techniques are demonstrating potential for significant cost reductions for large scale nozzles and chambers. Hot fire testing using these techniques is planned for the future.

  20. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  1. The requirements for a new full scale subsonic wind tunnel

    NASA Technical Reports Server (NTRS)

    Kelly, M. W.; Mckinney, M. O.; Luidens, R. W.

    1972-01-01

    Justification and requirements are presented for a large subsonic wind tunnel capable of testing full scale aircraft, rotor systems, and advanced V/STOL propulsion systems. The design considerations and constraints for such a facility are reviewed, and the trades between facility test capability and costs are discussed.

  2. IRT Item Parameter Scaling for Developing New Item Pools

    ERIC Educational Resources Information Center

    Kang, Hyeon-Ah; Lu, Ying; Chang, Hua-Hua

    2017-01-01

    Increasing use of item pools in large-scale educational assessments calls for an appropriate scaling procedure to achieve a common metric among field-tested items. The present study examines scaling procedures for developing a new item pool under a spiraled block linking design. Three scaling procedures are considered: (a) concurrent…

  3. Externally blown flap noise research

    NASA Technical Reports Server (NTRS)

    Dorsch, R. G.

    1974-01-01

    The Lewis Research Center cold-flow model externally blown flap (EBF) noise research test program is summarized. Both engine under-the-wing and over-the-wing EBF wing section configurations were studied. Ten large scale and nineteen small scale EBF models were tested. A limited number of forward airspeed effect and flap noise suppression tests were also run. The key results and conclusions drawn from the flap noise tests are summarized and discussed.

  4. How large is large enough for insects? Forest fragmentation effects at three spatial scales

    NASA Astrophysics Data System (ADS)

    Ribas, C. R.; Sobrinho, T. G.; Schoereder, J. H.; Sperber, C. F.; Lopes-Andrade, C.; Soares, S. M.

    2005-02-01

    Several mechanisms may lead to species loss in fragmented habitats, such as edge and shape effects, loss of habitat and heterogeneity. Ants and crickets were sampled in 18 forest remnants in south-eastern Brazil, to test whether a group of small remnants maintains the same insect species richness as similar-sized large remnants, at three spatial scales. We tested hypotheses about alpha and gamma diversity to explain the results. Groups of remnants conserve as many species of ants as a single one. Crickets, however, showed a scale-dependent pattern: at small scales there was no significant or important difference between groups of remnants and a single one, while at the larger scale the group of remnants maintained more species. Alpha diversity (local species richness) was similar in a group of remnants and in a single one, at the three spatial scales, both for ants and crickets. Gamma diversity, however, varied both with taxa (ants and crickets) and spatial scale, which may be linked to insect mobility, remnant isolation, and habitat heterogeneity. Biological characteristics of the organisms involved have to be considered when studying fragmentation effects, as well as the spatial scale at which they operate. The mobility of the organisms influences fragmentation effects, and consequently conservation strategies.

  5. Liquid Oxygen Propellant Densification Production and Performance Test Results With a Large-Scale Flight-Weight Propellant Tank for the X33 RLV

    NASA Technical Reports Server (NTRS)

    Tomsik, Thomas M.; Meyer, Michael L.

    2010-01-01

    This paper describes in detail a test program initiated at the Glenn Research Center (GRC) involving the cryogenic densification of liquid oxygen (LO2). A large-scale LO2 propellant densification system, rated for 200 gpm and sized for the X-33 LO2 propellant tank, was designed, fabricated, and tested at the GRC. Objectives of the test program included validation of the LO2 production unit hardware and characterization of densifier performance at design and transient conditions. First, performance data are presented for an initial series of LO2 densifier screening and check-out tests using densified liquid nitrogen. The second series of tests presents performance data collected during LO2 densifier operations with liquid oxygen as the densified product fluid. An overview of LO2 X-33 tanking operations and load tests with the 20,000 gallon Structural Test Article (STA) is given. Tank loading and the thermal stratification that occurs inside a flight-weight launch vehicle propellant tank were investigated. These operations involved a closed-loop recirculation of LO2 through the densifier and back into the STA. Finally, in excess of 200,000 gallons of densified LO2 at 120 °R was produced with the propellant densification unit during the demonstration program, an achievement never before accomplished in the realm of large-scale cryogenic tests.
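
    Dividing the production total quoted above by the densifier's 200 gpm rating gives a sense of the cumulative run time involved (a simple check, not a figure reported in the record):

```python
# Cumulative run time implied by the production totals above, assuming
# the densifier operated at its full 200 gpm rating throughout.
rated_gpm = 200            # densifier rating, gallons per minute
produced_gal = 200_000     # densified LO2 produced over the program

run_minutes = produced_gal / rated_gpm    # 1000 minutes of flow
run_hours = run_minutes / 60              # ~16.7 h of cumulative operation
print(run_minutes, round(run_hours, 1))   # 1000.0 16.7
```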

  6. The Positivity Scale

    ERIC Educational Resources Information Center

    Caprara, Gian Vittorio; Alessandri, Guido; Eisenberg, Nancy; Kupfer, A.; Steca, Patrizia; Caprara, Maria Giovanna; Yamaguchi, Susumu; Fukuzawa, Ai; Abela, John

    2012-01-01

    Five studies document the validity of a new 8-item scale designed to measure "positivity," defined as the tendency to view life and experiences with a positive outlook. In the first study (N = 372), the psychometric properties of the Positivity Scale (P Scale) were examined in accordance with classical test theory using a large number of…

  7. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations.

    PubMed

    Tučník, Petr; Bureš, Vladimír

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting, and with all configurations tested separately with the server parameter deactivated and activated, altogether 12,800 data points were collected and subsequently analyzed. An illustrative decision-making scenario that allows mutual comparison of all the selected decision-making methods was used. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models.
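
    As an illustration of the simplest of the four methods compared above, the weighted product model (WPM) scores each alternative as the product of its criterion values raised to the criterion weights (the alternatives and weights below are made up, not taken from the study):

```python
import math

def wpm_scores(matrix, weights):
    """Weighted product model: score = prod(x_ij ** w_j) over benefit
    criteria; the highest-scoring alternative is preferred."""
    return {name: math.prod(x ** w for x, w in zip(row, weights))
            for name, row in matrix.items()}

# Hypothetical alternatives rated on three benefit criteria (1-10 scale):
alternatives = {"A": [7, 9, 6], "B": [8, 7, 8], "C": [9, 6, 7]}
weights = [0.5, 0.3, 0.2]       # criterion weights, summing to 1

scores = wpm_scores(alternatives, weights)
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))   # B 7.69
```

    The per-agent cost of an MCDM method is what matters at the 10 000-agent scale studied above; WPM's single pass over the decision matrix is the cheapest of the four, while VIKOR and PROMETHEE require additional normalization and pairwise steps.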

  8. Requirements for a mobile communications satellite system. Volume 3: Large space structures measurements study

    NASA Technical Reports Server (NTRS)

    Akle, W.

    1983-01-01

    This study report defines a set of tests and measurements required to characterize the performance of a Large Space System (LSS), and to scale the resulting data to other LSS satellites. Requirements from the Mobile Communication Satellite (MSAT) configurations derived in the parent study were used. MSAT utilizes a large, mesh deployable antenna, and encompasses a significant range of LSS technology issues in the areas of structures/dynamics, control, and performance predictability. In this study, performance requirements were developed for the antenna. Special emphasis was placed on antenna surface accuracy and pointing stability. Instrumentation and measurement systems applicable to LSS were selected from existing or on-going technology developments. Laser ranging and angulation systems, presently in breadboard status, form the backbone of the measurements. Following this, a set of ground, STS, and GEO-operational tests was investigated. A third-scale (15 meter) antenna system was selected for ground characterization followed by STS flight technology development. This selection ensures analytical scaling from ground-to-orbit as well as size scaling. Other benefits are cost and the ability to perform reasonable ground tests. Detailed costs of the various tests and measurement systems were derived and are included in the report.

  9. Performance Assessment of a Large Scale Pulsejet- Driven Ejector System

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Litke, Paul J.; Schauer, Frederick R.; Bradley, Royce P.; Hoke, John L.

    2006-01-01

    Unsteady thrust augmentation was measured on a large scale driver/ejector system. A 72 in. long, 6.5 in. diameter, 100 lb(sub f) pulsejet was tested with a series of straight, cylindrical ejectors of varying length and diameter. A tapered ejector configuration of varying length was also tested. The objectives of the testing were to determine the ejector dimensions which maximize thrust augmentation, and to compare those dimensions and augmentation levels with those of other, similarly maximized, but smaller scale systems on which much of the recent unsteady ejector thrust augmentation research has been performed. An augmentation level of 1.71 was achieved with the cylindrical ejector configuration and 1.81 with the tapered ejector configuration. These levels are consistent with, but slightly lower than, the highest levels achieved with the smaller systems. The ejector diameter yielding maximum augmentation was 2.46 times the diameter of the pulsejet. This ratio closely matches those of the small scale experiments. For the straight ejector, the length yielding maximum augmentation was 10 times the diameter of the pulsejet. This was also nearly the same as in the small scale experiments. Testing procedures are described, as are the parametric variations in ejector geometry. Results are discussed in terms of their implications for the general scaling of pulsed thrust ejector systems.
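
    The optimum ratios reported above map directly onto ejector dimensions for this driver; a quick sketch of the arithmetic (driver diameter taken from the abstract):

```python
# Optimum straight-ejector dimensions implied by the ratios above,
# using the 6.5 in. pulsejet driver diameter from the test.
pulsejet_diameter = 6.5        # in., driver diameter
diameter_ratio = 2.46          # optimum ejector diameter / driver diameter
length_ratio = 10.0            # optimum ejector length / driver diameter

ejector_diameter = diameter_ratio * pulsejet_diameter   # ~16.0 in.
ejector_length = length_ratio * pulsejet_diameter       # 65.0 in.
print(round(ejector_diameter, 1), ejector_length)       # 16.0 65.0
```

    That these nondimensional ratios carry over from the small-scale rigs is the scaling result the test was designed to check.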

  10. Verbalizing, Visualizing, and Navigating: The Effect of Strategies on Encoding a Large-Scale Virtual Environment

    ERIC Educational Resources Information Center

    Kraemer, David J. M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.

    2017-01-01

    Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In 2 experiments, participants watched videos of routes through 4 virtual cities and were subsequently tested on their memory for observed landmarks and their ability to…

  11. Survival analysis for a large scale forest health issue: Missouri oak decline

    Treesearch

    C.W. Woodall; P.L. Grambsch; W. Thomas; W.K. Moser

    2005-01-01

    Survival analysis methodologies provide novel approaches for forest mortality analysis that may aid in detecting, monitoring, and mitigating large-scale forest health issues. This study examined survival analysis for evaluating a regional forest health issue, Missouri oak decline. With a statewide Missouri forest inventory, log-rank tests of the effects of...

  12. Modeling Alaska boreal forests with a controlled trend surface approach

    Treesearch

    Mo Zhou; Jingjing Liang

    2012-01-01

    A Controlled Trend Surface approach was proposed to simultaneously account for large-scale spatial trends and nonspatial effects. A geospatial model of the Alaska boreal forest was developed from 446 permanent sample plots, addressing large-scale spatial trends in recruitment, diameter growth, and mortality. The model was tested on two sets of...

  13. Explore the Usefulness of Person-Fit Analysis on Large-Scale Assessment

    ERIC Educational Resources Information Center

    Cui, Ying; Mousavi, Amin

    2015-01-01

    The current study applied the person-fit statistic, l[subscript z], to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l[subscript z], were removed. The…

  14. LARGE-SCALE NATURAL GRADIENT TRACER TEST IN SAND AND GRAVEL, CAPE COD, MASSACHUSETTS - 1. EXPERIMENTAL DESIGN AND OBSERVED TRACER MOVEMENT

    EPA Science Inventory

    A large-scale natural gradient tracer experiment was conducted on Cape Cod, Massachusetts, to examine the transport and dispersion of solutes in a sand and gravel aquifer. The nonreactive tracer, bromide, and the reactive tracers, lithium and molybdate, were injected as a pulse i...

  15. Large-Scale Investigation of the Role of Trait Activation Theory for Understanding Assessment Center Convergent and Discriminant Validity

    ERIC Educational Resources Information Center

    Lievens, Filip; Chasteen, Christopher S.; Day, Eric Anthony; Christiansen, Neil D.

    2006-01-01

    This study used trait activation theory as a theoretical framework to conduct a large-scale test of the interactionist explanation of the convergent and discriminant validity findings obtained in assessment centers. Trait activation theory specifies the conditions in which cross-situationally consistent and inconsistent candidate performances are…

  16. Accommodations for English Language Learners Taking Large-Scale Assessments: A Meta-Analysis on Effectiveness and Validity

    ERIC Educational Resources Information Center

    Kieffer, Michael J.; Lesaux, Nonie K.; Rivera, Mabel; Francis, David J.

    2009-01-01

    Including English language learners (ELLs) in large-scale assessments raises questions about the validity of inferences based on their scores. Test accommodations for ELLs are intended to reduce the impact of limited English proficiency on the assessment of the target construct, most often mathematics or science proficiency. This meta-analysis…

  17. Successful scaling-up of self-sustained pyrolysis of oil palm biomass under pool-type reactor.

    PubMed

    Idris, Juferi; Shirai, Yoshihito; Andou, Yoshito; Mohd Ali, Ahmad Amiruddin; Othman, Mohd Ridzuan; Ibrahim, Izzudin; Yamamoto, Akio; Yasuda, Nobuhiko; Hassan, Mohd Ali

    2016-02-01

    An appropriate technology for waste utilisation, especially for the large amounts of pressed-shredded oil palm empty fruit bunch (OPEFB), is important for the oil palm industry. Self-sustained pyrolysis, whereby the oil palm biomass is combusted by itself to provide the heat for pyrolysis without an electrical heater, is preferable owing to its simplicity, ease of operation and low energy requirement. In this study, biochar production under self-sustained pyrolysis of oil palm biomass in the form of oil palm empty fruit bunch was tested in a 3-t large-scale pool-type reactor. During the pyrolysis process, the biomass was loaded layer by layer as smoke appeared on the top, to minimise the entrance of oxygen. This method significantly increased the yield of biochar. In our previous report, we tested a 30-kg pilot-scale capacity under self-sustained pyrolysis and found that the higher heating value (HHV) obtained was 22.6-24.7 MJ/kg with a 23.5%-25.0% yield. In this scaled-up study, the 3-t large-scale procedure produced an HHV of 22.0-24.3 MJ/kg with a 30%-34% yield on a wet-weight basis. The maximum self-sustained pyrolysis temperature for the large-scale procedure can reach between 600 °C and 700 °C. We conclude that large-scale biochar production under self-sustained pyrolysis was successful, as the biochar produced was comparable to that from medium-scale and other studies using an electrical heating element, making it an appropriate technology for waste utilisation, particularly for the oil palm industry.
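
    Taking mid-range values from the figures above, the energy content of a single 3-t batch's biochar can be estimated (a back-of-the-envelope check, not the authors' own calculation):

```python
# Rough per-batch energy estimate from the mid-range figures above.
feed_kg = 3_000          # 3-t large-scale batch of biomass
yield_frac = 0.32        # mid-range of the 30%-34% biochar yield
hhv_mj_per_kg = 23.0     # mid-range of the 22.0-24.3 MJ/kg HHV

biochar_kg = feed_kg * yield_frac             # ~960 kg of biochar per batch
energy_gj = biochar_kg * hhv_mj_per_kg / 1000 # ~22 GJ of fuel energy
print(round(biochar_kg), round(energy_gj, 1))
```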

  18. Fabrication of the HIAD Large-Scale Demonstration Assembly and Upcoming Mission Applications

    NASA Technical Reports Server (NTRS)

    Swanson, G. T.; Johnson, R. K.; Hughes, S. J.; DiNonno, J. M.; Cheatwood, F. M.

    2017-01-01

    Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second-generation (Gen-2) deployable aeroshell system and associated analytical tools. NASA's HIAD project team has developed, fabricated, and tested inflatable structures (IS) integrated with a flexible thermal protection system (F-TPS), ranging in diameter from 3-6 m, with cone angles of 60 and 70 deg. In 2015, United Launch Alliance (ULA) announced that they will use a HIAD (10-12 m) as part of their Sensible, Modular, Autonomous Return Technology (SMART) for their upcoming Vulcan rocket. ULA expects SMART reusability, coupled with other advancements for Vulcan, will substantially reduce the cost of access to space. The first booster engine recovery via HIAD is scheduled for 2024. To meet this near-term need, as well as future NASA applications, the HIAD team is investigating taking the technology to the 10-15 m diameter scale. In the last year, many significant development and fabrication efforts have been accomplished, culminating in the construction of a large-scale inflatable structure demonstration assembly. This assembly incorporated the first three tori for a 12 m Mars Human-Scale Pathfinder HIAD conceptual design, constructed with the current state-of-the-art material set. Numerous design trades and torus fabrication demonstrations preceded this effort. In 2016, three large-scale tori (0.61 m cross-section) and six subscale tori (0.25 m cross-section) were manufactured to demonstrate fabrication techniques using the newest candidate material sets. These tori were tested to evaluate durability and load capacity. This work led to the selection of the inflatable structure's third-generation (Gen-3) structural liner.
In late 2016, the three tori required for the large-scale demonstration assembly were fabricated and then integrated in early 2017. The design includes provisions to add the remaining four tori necessary to complete the assembly of the 12 m Human-Scale Pathfinder HIAD in the event future project funding becomes available. This presentation will discuss the HIAD large-scale demonstration assembly design and fabrication performed in the last year, including the precursor tori development and the partial-stack fabrication. Potential near-term and future 10-15 m HIAD applications will also be discussed.

  19. Fabrication of the HIAD Large-Scale Demonstration Assembly

    NASA Technical Reports Server (NTRS)

    Swanson, G. T.; Johnson, R. K.; Hughes, S. J.; DiNonno, J. M.; Cheatwood, F. M.

    2017-01-01

    Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second-generation (Gen-2) deployable aeroshell system and associated analytical tools. NASA's HIAD project team has developed, fabricated, and tested inflatable structures (IS) integrated with a flexible thermal protection system (F-TPS), ranging in diameter from 3-6 m, with cone angles of 60 and 70 deg. In 2015, United Launch Alliance (ULA) announced that they will use a HIAD (10-12 m) as part of their Sensible, Modular, Autonomous Return Technology (SMART) for their upcoming Vulcan rocket. ULA expects SMART reusability, coupled with other advancements for Vulcan, will substantially reduce the cost of access to space. The first booster engine recovery via HIAD is scheduled for 2024. To meet this near-term need, as well as future NASA applications, the HIAD team is investigating taking the technology to the 10-15 m diameter scale. In the last year, many significant development and fabrication efforts have been accomplished, culminating in the construction of a large-scale inflatable structure demonstration assembly. This assembly incorporated the first three tori for a 12 m Mars Human-Scale Pathfinder HIAD conceptual design, constructed with the current state-of-the-art material set. Numerous design trades and torus fabrication demonstrations preceded this effort. In 2016, three large-scale tori (0.61 m cross-section) and six subscale tori (0.25 m cross-section) were manufactured to demonstrate fabrication techniques using the newest candidate material sets. These tori were tested to evaluate durability and load capacity. This work led to the selection of the inflatable structure's third-generation (Gen-3) structural liner.
In late 2016, the three tori required for the large-scale demonstration assembly were fabricated and then integrated in early 2017. The design includes provisions to add the remaining four tori necessary to complete the assembly of the 12 m Human-Scale Pathfinder HIAD in the event future project funding becomes available. This presentation will discuss the HIAD large-scale demonstration assembly design and fabrication performed in the last year, including the precursor tori development and the partial-stack fabrication. Potential near-term and future 10-15 m HIAD applications will also be discussed.

  20. Large scale wind tunnel investigation of a folding tilt rotor

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A twenty-five foot diameter folding tilt rotor was tested in a large scale wind tunnel to determine its aerodynamic characteristics in unfolded, partially folded, and fully folded configurations. During the tests, the rotor completed over forty start/stop sequences. After completing the sequences in a stepwise manner, smooth start/stop transitions were made in approximately two seconds. Wind tunnel speeds up through seventy-five knots were used, at which point the rotor mast angle was increased to four degrees, corresponding to a maneuver condition of one and one-half g.

  1. Implementation of the Large-Scale Operations Management Test in the State of Washington.

    DTIC Science & Technology

    1982-12-01

    During FY 79, the U.S. Army Engineer Waterways Experiment Station (WES), Vicksburg, Miss., completed the first phase of its 3-year Large-Scale Operations Management Test (LSOMT). The LSOMT was designed to develop an operational plan to identify methodologies that can be implemented by the U.S. Army Engineer District, Seattle (NPS), to prevent the exotic aquatic macrophyte Eurasian watermilfoil (Myriophyllum spicatum L.) from reaching problem-level proportions in water bodies in the state of Washington. The WES developed specific plans as integral elements

  2. Integration and validation testing for PhEDEx, DBS and DAS with the PhEDEx LifeCycle agent

    NASA Astrophysics Data System (ADS)

    Boeser, C.; Chwalek, T.; Giffels, M.; Kuznetsov, V.; Wildish, T.

    2014-06-01

    The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within the CMS experiment at the LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle agent provides a framework for customising the test workflow in arbitrary ways, and can scale to levels of activity well beyond those seen in normal running. This means we can run realistic performance tests at scales not likely to be seen by the experiment for some years, or with custom topologies to examine particular situations that may cause concern at some time in the future. The LifeCycle agent has recently been enhanced to become a general purpose integration and validation testing tool for major CMS services. It allows cross-system integration tests of all three components to be performed in controlled environments, without interfering with production services. In this paper we discuss the design and implementation of the LifeCycle agent. We describe how it is used for small-scale debugging and validation tests, and how we extend that to large-scale tests of whole groups of sub-systems. We show how the LifeCycle agent can emulate the action of operators, physicists, or software agents external to the system under test, and how it can be scaled to large and complex systems.

  3. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.
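
    The cost/performance trade-off Maestro's benchmarking targets can be illustrated with a minimal sketch: rank candidate VM instance types by cost per completed simulation run. All instance names, runtimes, and prices below are hypothetical, not Maestro's actual benchmark output.

```python
# Hypothetical sketch: rank cloud VM instance types by cost per simulation.

def cost_per_simulation(runtime_hours: float, price_per_hour: float) -> float:
    """Cost of one complete simulation run on a given instance type."""
    return runtime_hours * price_per_hour

def rank_instances(benchmarks: dict) -> list:
    """Sort instance type names by cost per simulation, cheapest first."""
    return sorted(benchmarks,
                  key=lambda name: cost_per_simulation(*benchmarks[name]))

# Hypothetical benchmark results: name -> (runtime in hours, $/hour).
benchmarks = {
    "small":  (10.0, 0.10),  # slow but cheap: $1.00 per run
    "medium": (4.0, 0.20),   # $0.80 per run
    "large":  (2.5, 0.40),   # fast but pricey: $1.00 per run
}

ranking = rank_instances(benchmarks)
print(ranking[0])  # the most cost-effective instance type
```

    In practice the runtime column would come from timing a reference simulation on each instance type, which is what Maestro's built-in benchmarking automates.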

  4. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  5. Progress in the Development of a Global Quasi-3-D Multiscale Modeling Framework

    NASA Astrophysics Data System (ADS)

    Jung, J.; Konor, C. S.; Randall, D. A.

    2017-12-01

    The Quasi-3-D Multiscale Modeling Framework (Q3D MMF) is a second-generation MMF, which has the following advances over the first-generation MMF: 1) The cloud-resolving models (CRMs) that replace conventional parameterizations are not confined to the large-scale dynamical-core grid cells, and are seamlessly connected to each other, 2) The CRMs sense the three-dimensional large- and cloud-scale environment, 3) Two perpendicular sets of CRM channels are used, and 4) The CRMs can resolve the steep surface topography along the channel direction. The basic design of the Q3D MMF has been developed and successfully tested in a limited-area modeling framework. Currently, global versions of the Q3D MMF are being developed for both weather and climate applications. The dynamical cores governing the large-scale circulation in the global Q3D MMF are selected from two cube-based global atmospheric models. The CRM used in the model is the 3-D nonhydrostatic anelastic Vector-Vorticity Model (VVM), which has been tested with the limited-area version for its suitability for this framework. As a first step of the development, the VVM has been reconstructed on the cubed-sphere grid so that it can be applied to global channel domains and also easily fitted to the large-scale dynamical cores. We have successfully tested the new VVM by advecting a bell-shaped passive tracer and simulating the evolution of waves resulting from idealized barotropic and baroclinic instabilities. To improve the model, we also modified the tracer advection scheme to yield positive-definite results and plan to implement a new physics package that includes double-moment microphysics and aerosol physics. The interface for coupling the large-scale dynamical core and the VVM is under development. In this presentation, we shall describe the recent progress in the development and show some test results.
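
    The positive-definite tracer advection mentioned above can be illustrated, under simplifying assumptions, by first-order upwind differencing of a bell-shaped tracer on a periodic 1-D grid. This is not the Q3D MMF's actual scheme, only a minimal example of the property: each updated value is a convex combination of old values, so a non-negative tracer stays non-negative.

```python
import numpy as np

def advect_upwind(q, u, dx, dt, steps):
    """Advect tracer q with constant speed u > 0, periodic boundaries.

    First-order upwind differencing is positive-definite for Courant
    number c = u*dt/dx <= 1, since each update is a convex combination:
    q_new[i] = (1 - c)*q[i] + c*q[i-1].
    """
    c = u * dt / dx
    for _ in range(steps):
        q = q - c * (q - np.roll(q, 1))  # upwind (left-side) difference
    return q

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
q0 = np.exp(-200.0 * (x - 0.5) ** 2)  # bell-shaped initial tracer
q = advect_upwind(q0, u=1.0, dx=1.0 / n, dt=0.005, steps=200)

print(q.min() >= 0.0)                 # tracer never goes negative
print(np.isclose(q.sum(), q0.sum()))  # mass conserved on the periodic grid
```

    Higher-order schemes need flux limiters to retain this positivity property, which is the usual motivation for the kind of scheme modification the abstract describes.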

  6. Additional Results of Glaze Icing Scaling in SLD Conditions

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching

    2016-01-01

    New guidance on acceptable means of compliance with super-cooled large drop (SLD) conditions was issued by the U.S. Department of Transportation's Federal Aviation Administration (FAA) in its Advisory Circular AC 25-28 in November 2014. Part 25, Appendix O was developed to define a representative icing environment for super-cooled large drops. Super-cooled large drops, which include freezing drizzle and freezing rain conditions, are not included in Appendix C. This paper reports results from recent glaze icing scaling tests conducted in the NASA Glenn Icing Research Tunnel (IRT) to evaluate how well the scaling methods recommended for Appendix C conditions might apply to SLD conditions. The models were straight NACA 0012 wing sections. The reference model had a chord of 72 inches and the scale model had a chord of 21 inches. Reference tests were run with airspeeds of 100 and 130.3 knots and with MVDs of 85 and 170 microns. Two scaling methods were considered. One was based on the modified Ruff method with scale velocity found by matching the Weber number W (sub eL). The other was proposed and developed by Feo specifically for strong glaze icing conditions, in which the scale liquid water content and velocity were found by matching reference and scale values of the non-dimensional water-film thickness expression and the film Weber number W (sub ef). All tests were conducted at 0 degrees angle of attack. Results will be presented for stagnation freezing fractions of 0.2 and 0.3. For non-dimensional reference and scale ice shape comparison, a new post-scanning ice shape digitization procedure was developed for extracting 2-dimensional ice shape profiles at any selected span-wise location from the high-fidelity 3-dimensional scanned ice shapes obtained in the IRT.
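
    As an illustration of the Weber-number matching mentioned above: if We = rho*V^2*L/sigma is held constant between reference and scale models with the same fluid properties, the chord ratio alone fixes the scale velocity. The full modified Ruff method matches additional similarity parameters, so this sketch shows only the velocity step under those simplifying assumptions.

```python
import math

def scale_velocity(v_ref_knots: float, chord_ref: float, chord_scale: float) -> float:
    """Scale-model velocity that keeps We = rho*V^2*L/sigma constant
    when fluid properties (rho, sigma) are the same for both models."""
    return v_ref_knots * math.sqrt(chord_ref / chord_scale)

# Chords from the test described above: 72-inch reference, 21-inch scale model.
for v_ref in (100.0, 130.3):
    v_scale = scale_velocity(v_ref, 72.0, 21.0)
    print(f"{v_ref:6.1f} kn reference -> {v_scale:6.1f} kn scale")
```

    Because velocity enters the Weber number squared, a chord ratio of 72/21 raises the scale velocity by a factor of sqrt(72/21), roughly 1.85.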

  7. Additional Results of Glaze Icing Scaling in SLD Conditions

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching

    2016-01-01

    New guidance on acceptable means of compliance with super-cooled large drop (SLD) conditions was issued by the U.S. Department of Transportation's Federal Aviation Administration (FAA) in its Advisory Circular AC 25-28 in November 2014. Part 25, Appendix O was developed to define a representative icing environment for super-cooled large drops. Super-cooled large drops, which include freezing drizzle and freezing rain conditions, are not included in Appendix C. This paper reports results from recent glaze icing scaling tests conducted in the NASA Glenn Icing Research Tunnel (IRT) to evaluate how well the scaling methods recommended for Appendix C conditions might apply to SLD conditions. The models were straight NACA 0012 wing sections. The reference model had a chord of 72 in. and the scale model had a chord of 21 in. Reference tests were run with airspeeds of 100 and 130.3 kn and with MVDs of 85 and 170 microns. Two scaling methods were considered. One was based on the modified Ruff method with scale velocity found by matching the Weber number WeL. The other was proposed and developed by Feo specifically for strong glaze icing conditions, in which the scale liquid water content and velocity were found by matching reference and scale values of the nondimensional water-film thickness expression and the film Weber number Wef. All tests were conducted at 0 deg AOA. Results will be presented for stagnation freezing fractions of 0.2 and 0.3. For nondimensional reference and scale ice shape comparison, a new post-scanning ice shape digitization procedure was developed for extracting 2-D ice shape profiles at any selected span-wise location from the high-fidelity 3-D scanned ice shapes obtained in the IRT.

  8. An outdoor test facility for the large-scale production of microalgae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, D.A.; Weissman, J.; Goebel, R.

    The goal of the U.S. Department of Energy/Solar Energy Research Institute's Aquatic Species Program is to develop the technology base to produce liquid fuels from microalgae. This technology is being initially developed for the desert Southwest. As part of this program an outdoor test facility has been designed and constructed in Roswell, New Mexico. The site has a large existing infrastructure, a suitable climate, and abundant saline groundwater. This facility will be used to evaluate productivity of microalgae strains and conduct large-scale experiments to increase biomass productivity while decreasing production costs. Six 3-m² fiberglass raceways were constructed. Several microalgae strains were screened for growth, one of which had a short-term productivity rate of greater than 50 g dry wt m⁻² d⁻¹. Two large-scale, 0.1-ha raceways have also been built. These are being used to evaluate the performance trade-offs between low-cost earthen liners and higher-cost plastic liners. A series of hydraulic measurements is also being carried out to evaluate future improved pond designs. Future plans include a 0.5-ha pond, which will be built in approximately 2 years to test a scaled-up system. This unique facility will be available to other researchers and industry for studies on microalgae productivity. 6 refs., 9 figs., 1 tab.

  9. Optimization and large scale computation of an entropy-based moment closure

    NASA Astrophysics Data System (ADS)

    Garrett, C. Kristopher; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  10. Optimization and large scale computation of an entropy-based moment closure

    DOE PAGES

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, M N, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as P N, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the M N algorithm that do not appear for the P N algorithm. We also observe that in weak scaling tests, the ratio in time to solution of M N to P N decreases.

  11. Photogrammetry of a Hypersonic Inflatable Aerodynamic Decelerator

    NASA Technical Reports Server (NTRS)

    Kushner, Laura Kathryn; Littell, Justin D.; Cassell, Alan M.

    2013-01-01

    In 2012, two large-scale models of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD) were tested in the National Full-Scale Aerodynamics Complex (NFAC) at NASA Ames Research Center. One of the objectives of this test was to measure model deflections under aerodynamic loading that approximated expected flight conditions. The measurements were acquired using stereo photogrammetry. Four pairs of stereo cameras were mounted inside the NFAC test section, each imaging a particular section of the HIAD. The views were then stitched together post-test to create a surface deformation profile. The data from the photogrammetry system will largely be used for comparisons to and refinement of fluid-structure interaction models. This paper describes how a commercial photogrammetry system was adapted to make the measurements and presents some preliminary results.

  12. The three-point function as a probe of models for large-scale structure

    NASA Astrophysics Data System (ADS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1994-04-01

    We analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp approximately 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes QJ at large scales, r greater than or approximately Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
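
    The amplitude S3 discussed above is commonly defined as S3 = <delta^3>/<delta^2>^2 for the density contrast delta. A minimal estimator of this quantity, applied here to a synthetic Gaussian field, for which S3 should vanish (real analyses work with smoothed survey fields, not white noise):

```python
import numpy as np

def s3_amplitude(delta: np.ndarray) -> float:
    """Hierarchical skewness amplitude S3 = <delta^3>/<delta^2>^2
    of a density contrast field (mean removed first)."""
    delta = delta - delta.mean()
    return float(np.mean(delta ** 3) / np.mean(delta ** 2) ** 2)

rng = np.random.default_rng(0)
gaussian_field = rng.standard_normal(1_000_000)
print(abs(s3_amplitude(gaussian_field)) < 0.05)  # Gaussian: S3 consistent with 0
```

    A measurably nonzero S3 at large scales is the signature of gravitationally induced (or bias-induced) non-Gaussianity that the three-point function test exploits.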

  13. The EX-SHADWELL-Full Scale Fire Research and Test Ship

    DTIC Science & Technology

    1988-01-20

    If shipboard testing is necessary after the large-scale land tests at China Lake, the EX-SHADWELL has a helo pad and well deck available which makes... b. Data acquisition system started. c. Fire started. d. Data is recorded until all fire activity has ceased. 3.0 THE TEST AREA 3.1 Test... timing clocks will be started at the instant the fuel is lighted. That instant will be time zero. The time the cables become involved will be recorded

  14. Simulations of hypervelocity impacts for asteroid deflection studies

    NASA Astrophysics Data System (ADS)

    Heberling, T.; Ferguson, J. M.; Gisler, G. R.; Plesko, C. S.; Weaver, R.

    2016-12-01

    The possibility of kinetic-impact deflection of threatening near-Earth asteroids will be tested for the first time in the proposed AIDA (Asteroid Impact Deflection Assessment) mission, involving two independent spacecraft, NASA's DART (Double Asteroid Redirection Test) and ESA's AIM (Asteroid Impact Mission). The impact of the DART spacecraft onto the secondary of the binary asteroid 65803 Didymos, at a speed of 5 to 7 km/s, is expected to alter the mutual orbit by an observable amount. The velocity imparted to the secondary depends on the geometry and dynamics of the impact, and especially on the momentum enhancement factor, conventionally called beta. We use the Los Alamos hydrocodes Rage and Pagosa to estimate beta in laboratory-scale benchmark experiments and in the large-scale asteroid deflection test. Simulations are performed in two and three dimensions, using a variety of equations of state and strength models for both the lab-scale and large-scale cases. This work is being performed as part of a systematic benchmarking study for the AIDA mission that includes other hydrocodes.

  15. Formability analysis of sheet metals by cruciform testing

    NASA Astrophysics Data System (ADS)

    Güler, B.; Alkan, K.; Efe, M.

    2017-09-01

    Cruciform biaxial tests are increasingly becoming popular for testing the formability of sheet metals as they achieve frictionless, in-plane, multi-axial stress states with a single sample geometry. However, premature fracture of the samples during testing prevents the large-strain deformation necessary for formability analysis. In this work, we introduce a miniature cruciform sample design (with a test region of a few mm) and a test setup to achieve centre fracture and large uniform strains. With its excellent surface finish and optimized geometry, the sample deforms with diagonal strain bands intersecting at the test region. These bands prevent local necking and concentrate the strains at the sample centre. Imaging and strain analysis during testing confirm that uniform strain distributions and centre fracture are possible for various strain paths ranging from plane-strain to equibiaxial tension. Moreover, the sample deforms without deviating from the predetermined strain ratio at all test conditions, allowing formability analysis under large strains. We demonstrate these features of the cruciform test for three sample materials: Aluminium 6061-T6 alloy, DC-04 steel and Magnesium AZ31 alloy, and investigate their formability at both the millimetre scale and the microstructure scale.

  16. Propulsion simulator for magnetically-suspended wind tunnel models

    NASA Technical Reports Server (NTRS)

    Joshi, Prakash B.; Goldey, C. L.; Sacco, G. P.; Lawing, Pierce L.

    1991-01-01

    The objective of phase two of a current investigation sponsored by NASA Langley Research Center is to demonstrate the measurement of aerodynamic forces/moments, including the effects of exhaust gases, in magnetic suspension and balance system (MSBS) wind tunnels. Two propulsion simulator models are being developed: a small-scale and a large-scale unit, both employing compressed, liquefied carbon dioxide as propellant. The small-scale unit was designed, fabricated, and statically tested at Physical Sciences Inc. (PSI). The large-scale simulator is currently in the preliminary design stage. The small-scale simulator design/development is presented, and the data from its static firing on a thrust stand are discussed. The analysis of these data provides important information for the design of the large-scale unit. A description of the preliminary design of the device is also presented.

  17. Small scale model static acoustic investigation of hybrid high lift systems combining upper surface blowing with the internally blown flap

    NASA Technical Reports Server (NTRS)

    Cole, T. W.; Rathburn, E. A.

    1974-01-01

    A static acoustic and propulsion test of a small-radius Jacobs-Hurkamp flap and a large-radius Flex Flap combined with four upper surface blowing (USB) nozzles was performed. Nozzle force and flow data, flap trailing edge total pressure survey data, and acoustic data were obtained. Jacobs-Hurkamp flap surface pressure data, flow visualization photographs, and spoiler acoustic data from the limited mid-year tests are reported. A pressure ratio range of 1.2 to 1.5 was investigated for the USB nozzles and for the auxiliary blowing slots. The acoustic data were scaled to a four-engine STOL airplane of roughly 50,000 kilograms (110,000 pounds) gross weight, corresponding to a model scale of approximately 0.2 for the nozzles without deflector. The model nozzle scale is actually reduced to about 0.17 with deflector, although all results in this report assume a 0.2 scale factor. Trailing edge pressure surveys indicated that poor flow attachment was obtained even at large flow impingement angles unless a nozzle deflector plate was used. Good attachment was obtained with the aspect-ratio-four nozzle with deflector, confirming the small-scale wind tunnel tests.

  18. Small-scale dynamic confinement gap test

    NASA Astrophysics Data System (ADS)

    Cook, Malcolm

    2011-06-01

    Gap tests are routinely used to ascertain the shock sensitiveness of new explosive formulations. The tests are popular since they are easy and relatively cheap to perform. However, modern insensitive formulations with large critical diameters require large test samples. This can make testing and screening of new formulations expensive since large quantities of test material are required. Thus a new test that uses significantly smaller sample quantities would be very beneficial. In this paper we describe a new small-scale test that has been designed using our CHARM ignition and growth routine in the DYNA2D hydrocode. The new test is a modified gap test and uses detonating nitromethane to provide dynamic confinement (instead of a thick metal case) whilst exposing the sample to a long-duration shock wave. The long-duration shock wave allows less reactive materials that are below their critical diameter more time to react. We present details on the modelling of the test together with some preliminary experiments to demonstrate the potential of the new test method.

  19. Proceedings of the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at Pollard Auditorium, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugh, C.E.; Bass, B.R.; Keeney, J.A.

    This report contains 40 papers that were presented at the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at the Pollard Auditorium, Oak Ridge, Tennessee, during the week of October 26--29, 1992. The papers are printed in the order of their presentation in each session and describe recent large-scale fracture (brittle and/or ductile) experiments, analyses of these experiments, and comparisons between predictions and experimental results. The goal of the meeting was to allow international experts to examine the fracture behavior of various materials and structures under conditions relevant to nuclear reactor components and operating environments. The emphasis was on the ability of various fracture models and analysis methods to predict the wide range of experimental data now available. The individual papers have been cataloged separately.

  20. Ice-Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Potapczuk, Mark G.; Lee, Sam; Malone, Adam M.; Paul, Bernard P., Jr.; Woodard, Brian S.

    2016-01-01

    Icing simulation tools and computational fluid dynamics codes are reaching levels of maturity such that they are being proposed by manufacturers for use in certification of aircraft for flight in icing conditions with increasingly less reliance on natural-icing flight testing and icing-wind-tunnel testing. Sufficient high-quality data to evaluate the performance of these tools is not currently available. The objective of this work was to generate a database of ice-accretion geometry that can be used for development and validation of icing simulation tools as well as for aerodynamic testing. Three large-scale swept wing models were built and tested at the NASA Glenn Icing Research Tunnel (IRT). The models represented the Inboard (20% semispan), Midspan (64% semispan) and Outboard stations (83% semispan) of a wing based upon a 65% scale version of the Common Research Model (CRM). The IRT models utilized a hybrid design that maintained the full-scale leading-edge geometry with a truncated afterbody and flap. The models were instrumented with surface pressure taps in order to acquire sufficient aerodynamic data to verify the hybrid model design capability to simulate the full-scale wing section. A series of ice-accretion tests were conducted over a range of total temperatures from -23.8 deg C to -1.4 deg C with all other conditions held constant. The results showed the changing ice-accretion morphology from rime ice at the colder temperatures to highly 3-D scallop ice in the range of -11.2 deg C to -6.3 deg C. Warmer temperatures generated highly 3-D ice accretion with glaze ice characteristics. The results indicated that the general scallop ice morphology was similar for all three models. Icing results were documented for limited parametric variations in angle of attack, drop size and cloud liquid-water content (LWC). The effect of velocity on ice accretion was documented for the Midspan and Outboard models for a limited number of test cases. 
The data suggest that there are morphological characteristics of glaze and scallop ice accretion on these swept-wing models that are dependent upon the velocity. This work has resulted in a large database of ice-accretion geometry on large-scale, swept-wing models.

  1. Ice-Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Potapczuk, Mark G.; Lee, Sam; Malone, Adam M.; Paul, Bernard P., Jr.; Woodard, Brian S.

    2016-01-01

    Icing simulation tools and computational fluid dynamics codes are reaching levels of maturity such that they are being proposed by manufacturers for use in certification of aircraft for flight in icing conditions with increasingly less reliance on natural-icing flight testing and icing-wind-tunnel testing. Sufficient high-quality data to evaluate the performance of these tools is not currently available. The objective of this work was to generate a database of ice-accretion geometry that can be used for development and validation of icing simulation tools as well as for aerodynamic testing. Three large-scale swept wing models were built and tested at the NASA Glenn Icing Research Tunnel (IRT). The models represented the Inboard (20 percent semispan), Midspan (64 percent semispan) and Outboard stations (83 percent semispan) of a wing based upon a 65 percent scale version of the Common Research Model (CRM). The IRT models utilized a hybrid design that maintained the full-scale leading-edge geometry with a truncated afterbody and flap. The models were instrumented with surface pressure taps in order to acquire sufficient aerodynamic data to verify the hybrid model design capability to simulate the full-scale wing section. A series of ice-accretion tests were conducted over a range of total temperatures from -23.8 to -1.4 C with all other conditions held constant. The results showed the changing ice-accretion morphology from rime ice at the colder temperatures to highly 3-D scallop ice in the range of -11.2 to -6.3 C. Warmer temperatures generated highly 3-D ice accretion with glaze ice characteristics. The results indicated that the general scallop ice morphology was similar for all three models. Icing results were documented for limited parametric variations in angle of attack, drop size and cloud liquid-water content (LWC). The effect of velocity on ice accretion was documented for the Midspan and Outboard models for a limited number of test cases. 
The data suggest that there are morphological characteristics of glaze and scallop ice accretion on these swept-wing models that are dependent upon the velocity. This work has resulted in a large database of ice-accretion geometry on large-scale, swept-wing models.

  2. Shake Test Results and Dynamic Calibration Efforts for the Large Rotor Test Apparatus

    NASA Technical Reports Server (NTRS)

    Russell, Carl R.

    2014-01-01

    A shake test of the Large Rotor Test Apparatus (LRTA) was performed in an effort to enhance NASA's capability to measure dynamic hub loads for full-scale rotor tests. This paper documents the results of the shake test as well as efforts to calibrate the LRTA balance system to measure dynamic loads. Dynamic rotor loads are the primary source of vibration in helicopters and other rotorcraft, leading to passenger discomfort and damage due to fatigue of aircraft components. There are novel methods being developed to reduce rotor vibrations, but measuring the actual vibration reductions on full-scale rotors remains a challenge. In order to measure rotor forces on the LRTA, a balance system in the non-rotating frame is used. The forces at the balance can then be translated to the hub reference frame to measure the rotor loads. Because the LRTA has its own dynamic response, the balance system must be calibrated to include the natural frequencies of the test rig.

  3. Using Satellite Imagery to Assess Large-Scale Habitat Characteristics of Adirondack Park, New York, USA

    NASA Astrophysics Data System (ADS)

    McClain, Bobbi J.; Porter, William F.

    2000-11-01

    Satellite imagery is a useful tool for large-scale habitat analysis; however, its limitations need to be tested. We tested these limitations by varying the methods of a habitat evaluation for white-tailed deer (Odocoileus virginianus) in the Adirondack Park, New York, USA, utilizing harvest data to create and validate the assessment models. We used two classified images, one with a large minimum mapping unit but high accuracy and one with no minimum mapping unit but slightly lower accuracy, to test the sensitivity of the evaluation to these differences. We tested the utility of two methods of assessment: habitat suitability index modeling and pattern recognition modeling. We varied the scale at which the models were applied by using five separate sizes of analysis windows. Results showed that the presence of a large minimum mapping unit eliminates important details of the habitat. Window size is relatively unimportant if the data are averaged to a large resolution (i.e., township), but if the data are used at the smaller resolution, then the window size is an important consideration. In the Adirondacks, the proportion of hardwood and softwood in an area is most important to the spatial dynamics of deer populations. The low occurrence of open area in all parts of the park either limits the effect of this cover type on the population or limits our ability to detect the effect. The arrangement and interspersion of cover types were not significant to deer populations.
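
    The moving-window step in an assessment like the one above can be sketched as computing, for each cell of a classified image, the proportion of a given cover type within a square analysis window. The class codes and the tiny image below are hypothetical, not the study's data.

```python
import numpy as np

HARDWOOD = 1  # hypothetical class code in the classified image

def window_proportion(classified: np.ndarray, cover_class: int, half: int) -> np.ndarray:
    """Proportion of cover_class within a (2*half+1)-cell square window
    centered on each cell; windows are clipped at the image edges."""
    mask = (classified == cover_class).astype(float)
    rows, cols = mask.shape
    out = np.zeros_like(mask)
    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - half), min(rows, i + half + 1)
            c0, c1 = max(0, j - half), min(cols, j + half + 1)
            out[i, j] = mask[r0:r1, c0:c1].mean()
    return out

# Tiny hypothetical classified image, 3x3 analysis window (half=1).
image = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 2, 2],
                  [0, 0, 2, 2]])
prop = window_proportion(image, HARDWOOD, half=1)
print(prop[0, 0])  # 3 hardwood cells of the 4 in the clipped corner window
```

    Varying `half` corresponds to the five window sizes tested in the study; the resulting proportion surface is what feeds the suitability or pattern recognition models.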

  4. Large-scale, on-site confirmatory, and varietal testing of a methyl bromide quarantine treatment to control codling moth (Lepidoptera: Tortricidae) in nectarines exported to Japan.

    PubMed

    Yokoyama, V Y; Miller, G T; Hartsell, P L; Leesch, J G

    2000-06-01

    In total, 30,491 codling moth, Cydia pomonella (L.), 1-d-old eggs on May Grand nectarines in two large-scale tests, and 17,410 eggs on Royal Giant nectarines in four on-site confirmatory tests were controlled with 100% mortality after fumigation with a methyl bromide quarantine treatment (48 g/m3 for 2 h at > or = 21 degrees C and 50% volume chamber load) on fruit in shipping containers for export to Japan. Ranges (mean +/- SEM) were 34.7 +/- 6.2 to 46.5 +/- 2.5 for percentage sorption, and 54.3 +/- 0.9 to 74.5 +/- 0.6 g.h/m3 for concentration multiplied by time products in all tests. In large-scale tests with May Grand nectarines, inorganic bromide residues 48 h after fumigation ranged from 6.8 +/- 0.7 to 6.9 +/- 0.5 ppm, below the U.S. Environmental Protection Agency tolerance of 20 ppm; organic bromide residues were < 0.01 ppm after 1 d and < 0.001 ppm after 3 d in storage at 0-1 degree C. After completion of large-scale and on-site confirmatory test requirements, fumigation of 10 nectarine cultivars in shipping containers for export to Japan was approved in 1995. Comparison of LD50s developed for methyl bromide on 1-d-old codling moth eggs on May Grand and Summer Grand nectarines in 1997 versus those developed for nine cultivars in the previous 11 yr showed no significant differences in codling moth response among the cultivars.

  5. Large-scale thermal storage systems. Possibilities of operation and state of the art

    NASA Astrophysics Data System (ADS)

    Jank, R.

    1983-05-01

    The state of the art of large scale thermal energy storage concepts is reviewed. With earth pit storage, the materials question has to be concentrated on. The use of container storage in conventional long distance thermal nets has to be stimulated. Aquifer storage should be tested in a pilot plant to obtain experience in natural aquifer use.

  6. Latest COBE results, large-scale data, and predictions of inflation

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1992-01-01

    One of the predictions of the inflationary scenario of cosmology is that the initial spectrum of primordial density fluctuations (PDFs) must have the Harrison-Zeldovich (HZ) form. Here, in order to test the inflationary scenario, predictions of the microwave background radiation (MBR) anisotropies measured by COBE are computed based on large-scale data for the universe, assuming Omega = 1 and the HZ spectrum on large scales. The minimal scale where the spectrum can first enter the HZ regime is found, constraining the power spectrum of the mass distribution to within the bias factor b. This factor is determined and used to predict parameters of the MBR anisotropy field. For the spectrum of PDFs that reaches the HZ regime immediately after the scale accessible to the APM catalog, the predicted MBR anisotropies are consistent with the COBE detections, and thus standard inflation can indeed be considered a viable theory for the origin of the large-scale structure in the universe.
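    For reference, the Harrison-Zeldovich form invoked above is the scale-invariant primordial power spectrum (standard definition, not quoted from the paper itself):

```latex
% Harrison-Zeldovich (scale-invariant) primordial power spectrum:
P(k) \;\propto\; k^{n}, \qquad n = 1 .
% n = 1 gives curvature (potential) fluctuations of equal amplitude
% on all scales at horizon crossing, as predicted by standard inflation.
```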

  7. Wind tunnel investigation of aerodynamic characteristics of scale models of three rectangular shaped cargo containers

    NASA Technical Reports Server (NTRS)

    Laub, G. H.; Kodani, H. M.

    1972-01-01

    Wind tunnel tests were conducted on scale models of three rectangular shaped cargo containers to determine the aerodynamic characteristics of these typical externally-suspended helicopter cargo configurations. Tests were made over a large range of pitch and yaw attitudes at a nominal Reynolds number per unit length of 1.8 x 10(exp 6). The aerodynamic data obtained from the tests are presented.

  8. Lifetime evaluation of large format CMOS mixed signal infrared devices

    NASA Astrophysics Data System (ADS)

    Linder, A.; Glines, Eddie

    2015-09-01

    New large scale foundry processes continue to produce reliable products. These new large scale devices continue to use industry best practice to screen for failure mechanisms and validate their long lifetime. The Failure-in-Time analysis in conjunction with foundry qualification information can be used to evaluate large format device lifetimes. This analysis is a helpful tool when zero failure life tests are typical. The reliability of the device is estimated by applying the failure rate to the use conditions. JEDEC publications continue to be the industry accepted methods.

  9. Camphor-Enabled Transfer and Mechanical Testing of Centimeter-Scale Ultrathin Films.

    PubMed

    Wang, Bin; Luo, Da; Li, Zhancheng; Kwon, Youngwoo; Wang, Meihui; Goo, Min; Jin, Sunghwan; Huang, Ming; Shen, Yongtao; Shi, Haofei; Ding, Feng; Ruoff, Rodney S

    2018-05-21

    Camphor is used to transfer centimeter-scale ultrathin films onto custom-designed substrates for mechanical (tensile) testing. Compared to traditional transfer methods using dissolving/peeling to remove the support-layers, camphor is sublimed away in air at low temperature, thereby avoiding additional stress on the as-transferred films. Large-area ultrathin films can be transferred onto hollow substrates without damage by this method. Tensile measurements are made on centimeter-scale 300 nm-thick graphene oxide film specimens, much thinner than the ≈2 μm minimum thickness of macroscale graphene-oxide films previously reported. Tensile tests were also done on two different types of large-area samples of adlayer free CVD-grown single-layer graphene supported by a ≈100 nm thick polycarbonate film; graphene stiffens this sample significantly, thus the intrinsic mechanical response of the graphene can be extracted. This is the first tensile measurement of centimeter-scale monolayer graphene films. The Young's modulus of polycrystalline graphene ranges from 637 to 793 GPa, while for near single-crystal graphene, it ranges from 728 to 908 GPa (folds parallel to the tensile loading direction) and from 683 to 775 GPa (folds orthogonal to the tensile loading direction), demonstrating the mechanical performance of large-area graphene in a size scale relevant to many applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Testing the Predictors of Boredom at School: Development and Validation of the Precursors to Boredom Scales

    ERIC Educational Resources Information Center

    Daschmann, Elena C.; Goetz, Thomas; Stupnisky, Robert H.

    2011-01-01

    Background: Boredom has been found to be an important emotion for students' learning processes and achievement outcomes; however, the precursors of this emotion remain largely unexplored. Aim: In the current study, scales assessing the precursors to boredom in academic achievement settings were developed and tested. Sample: Participants were 1,380…

  11. The value of cows in reference populations for genomic selection of new functional traits.

    PubMed

    Buch, L H; Kargo, M; Berg, P; Lassen, J; Sørensen, A C

    2012-06-01

    Today, almost all reference populations consist of progeny tested bulls. However, older progeny tested bulls do not have reliable estimated breeding values (EBV) for new traits. Thus, to be able to select for these new traits, it is necessary to build a reference population. We used a deterministic prediction model to test the hypothesis that the value of cows in reference populations depends on the availability of phenotypic records. To test the hypothesis, we investigated different strategies of building a reference population for a new functional trait over a 10-year period. The trait was either recorded on a large scale (30 000 cows per year) or on a small scale (2000 cows per year). For large-scale recording, we compared four scenarios where the reference population consisted of 30 sires; 30 sires and 170 test bulls; 30 sires and 2000 cows; or 30 sires, 2000 cows and 170 test bulls in the first year with measurements of the new functional trait. In addition to varying the make-up of the reference population, we also varied the heritability of the trait (h2 = 0.05 v. 0.15). The results showed that a reference population of test bulls, cows and sires results in the highest accuracy of the direct genomic values (DGV) for a new functional trait, regardless of its heritability. For small-scale recording, we compared two scenarios where the reference population consisted of the 2000 cows with phenotypic records or the 30 sires of these cows in the first year with measurements of the new functional trait. The results showed that a reference population of cows results in the highest accuracy of the DGV whether the heritability is 0.05 or 0.15, because variation is lost when phenotypic data on cows are summarized in EBV of their sires. 
The main conclusions from this study are: (i) the fewer phenotypic records, the larger effect of including cows in the reference population; (ii) for small-scale recording, the accuracy of the DGV will continue to increase for several years, whereas the increases in the accuracy of the DGV quickly decrease with large-scale recording; (iii) it is possible to achieve accuracies of the DGV that enable selection for new functional traits recorded on a large scale within 3 years from commencement of recording; and (iv) a higher heritability benefits a reference population of cows more than a reference population of bulls.
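    The qualitative pattern above (more phenotyped animals and higher heritability give more accurate direct genomic values) can be sketched with a standard deterministic accuracy formula of the Daetwyler type. This is an illustrative stand-in, not the paper's own prediction model; the effective number of chromosome segments `M_E` is an assumed value.

```python
# Hedged sketch: accuracy of direct genomic values (DGV) from a reference
# population of n phenotyped animals, Daetwyler-style formula.
# r = sqrt( n h^2 / (n h^2 + Me) ), with Me the (assumed) effective
# number of independent chromosome segments.
import math

def dgv_accuracy(n_records, h2, m_e):
    """Predicted DGV accuracy from n_records phenotyped reference animals."""
    q = n_records * h2 / (n_records * h2 + m_e)
    return math.sqrt(q)

M_E = 1000  # assumed effective number of chromosome segments

# first-year comparison: large-scale (30 000 cows/yr) vs small-scale (2000 cows/yr)
acc_large = dgv_accuracy(30_000, 0.05, M_E)  # ~0.77
acc_small = dgv_accuracy(2_000, 0.05, M_E)   # ~0.30
```

    With these assumed inputs, the formula reproduces the paper's qualitative conclusion that fewer records make each additional phenotyped cow more valuable, since accuracy rises steeply at small n and saturates at large n.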

  12. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.

  13. Wind-tunnel investigation of the thrust augmentor performance of a large-scale swept wing model. [in the Ames 40 by 80 foot wind tunnel

    NASA Technical Reports Server (NTRS)

    Koenig, D. G.; Falarski, M. D.

    1979-01-01

    Tests were made in the Ames 40- by 80-foot wind tunnel to determine the forward speed effects on wing-mounted thrust augmentors. The large-scale model was powered by the compressor output of J-85 driven Viper compressors. The flap settings used were 15 deg and 30 deg with 0 deg, 15 deg, and 30 deg aileron settings. The maximum duct pressure and wind tunnel dynamic pressure were 66 cmHg (26 in Hg) and 1190 N/sq m (25 lb/sq ft), respectively. All tests were made at zero sideslip. Test results are presented without analysis.

  14. Aft-End Flow of a Large-Scale Lifting Body During Free-Flight Tests

    NASA Technical Reports Server (NTRS)

    Banks, Daniel W.; Fisher, David F.

    2006-01-01

    Free-flight tests of a large-scale lifting-body configuration, the X-38 aircraft, were conducted using tufts to characterize the flow on the aft end, specifically in the inboard region of the vertical fins. Pressure data was collected on the fins and base. Flow direction and movement were correlated with surface pressure and flight condition. The X-38 was conceived to be a rescue vehicle for the International Space Station. The vehicle shape was derived from the U.S. Air Force X-24 lifting body. Free-flight tests of the X-38 configuration were conducted at the NASA Dryden Flight Research Center at Edwards Air Force Base, California from 1997 to 2001.

  15. Geospatial Augmented Reality for the interactive exploitation of large-scale walkable orthoimage maps in museums

    NASA Astrophysics Data System (ADS)

    Wüest, Robert; Nebiker, Stephan

    2018-05-01

    In this paper we present an app framework for augmenting large-scale walkable maps and orthoimages in museums or public spaces using standard smartphones and tablets. We first introduce a novel approach for using huge orthoimage mosaic floor prints covering several hundred square meters as natural Augmented Reality (AR) markers. We then present a new app architecture and subsequent tests in the Swissarena of the Swiss National Transport Museum in Lucerne demonstrating the capabilities of accurately tracking and augmenting different map topics, including dynamic 3d data such as live air traffic. The resulting prototype was tested with everyday visitors of the museum to get feedback on the usability of the AR app and to identify pitfalls when using AR in the context of a potentially crowded museum. The prototype is to be rolled out to the public after successful testing and optimization of the app. We were able to show that AR apps on standard smartphone devices can dramatically enhance the interactive use of large-scale maps for different purposes such as education or serious gaming in a museum context.

  16. Thermal/structural modeling of a large scale in situ overtest experiment for defense high level waste at the Waste Isolation Pilot Plant Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, H.S.; Stone, C.M.; Krieg, R.D.

    Several large scale in situ experiments in bedded salt formations are currently underway at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, USA. In these experiments, the thermal and creep responses of salt around several different underground room configurations are being measured. Data from the tests are to be compared to thermal and structural responses predicted in pretest reference calculations. The purpose of these comparisons is to evaluate computational models developed from laboratory data prior to fielding of the in situ experiments. In this paper, the computational models used in the pretest reference calculation for one of the large scale tests, the Overtest for Defense High Level Waste, are described; and the pretest computed thermal and structural responses are compared to early data from the experiment. The comparisons indicate that computed and measured temperatures for the test agree to within ten percent but that measured deformation rates are between two and three times greater than corresponding computed rates. 10 figs., 3 tabs.

  17. The Theory about CD-CAT Based on FCA and Its Application

    ERIC Educational Resources Information Center

    Shuqun, Yang; Shuliang, Ding; Zhiqiang, Yao

    2009-01-01

    Cognitive diagnosis (CD) plays an important role in intelligent tutoring system. Computerized adaptive testing (CAT) is adaptive, fair, and efficient, which is suitable to large-scale examination. Traditional cognitive diagnostic test needs quite large number of items, the efficient and tailored CAT could be a remedy for it, so the CAT with…

  18. Testing the gravitational instability hypothesis?

    NASA Technical Reports Server (NTRS)

    Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.

    1994-01-01

    We challenge a widely accepted assumption of observational cosmology: that successful reconstruction of observed galaxy density fields from measured galaxy velocity fields (or vice versa), using the methods of gravitational instability theory, implies that the observed large-scale structures and large-scale flows were produced by the action of gravity. This assumption is false, in that there exist nongravitational theories that pass the reconstruction tests and gravitational theories with certain forms of biased galaxy formation that fail them. Gravitational instability theory predicts specific correlations between large-scale velocity and mass density fields, but the same correlations arise in any model where (a) structures in the galaxy distribution grow from homogeneous initial conditions in a way that satisfies the continuity equation, and (b) the present-day velocity field is irrotational and proportional to the time-averaged velocity field. We demonstrate these assertions using analytical arguments and N-body simulations. If large-scale structure is formed by gravitational instability, then the ratio of the galaxy density contrast to the divergence of the velocity field yields an estimate of the density parameter Omega (or, more generally, an estimate of beta identically equal to Omega(exp 0.6)/b, where b is an assumed constant of proportionality between galaxy and mass density fluctuations). In nongravitational scenarios, the values of Omega or beta estimated in this way may fail to represent the true cosmological values. However, even if nongravitational forces initiate and shape the growth of structure, gravitationally induced accelerations can dominate the velocity field at late times, long after the action of any nongravitational impulses. The estimated beta approaches the true value in such cases, and in our numerical simulations the estimated beta values are reasonably accurate for both gravitational and nongravitational models.
Reconstruction tests that show correlations between galaxy density and velocity fields can rule out some physically interesting models of large-scale structure. In particular, successful reconstructions constrain the nature of any bias between the galaxy and mass distributions, since processes that modulate the efficiency of galaxy formation on large scales in a way that violates the continuity equation also produce a mismatch between the observed galaxy density and the density inferred from the peculiar velocity field. We obtain successful reconstructions for a gravitational model with peaks biasing, but we also show examples of gravitational and nongravitational models that fail reconstruction tests because of more complicated modulations of galaxy formation.
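    The linear-theory relation underlying these reconstruction tests can be written compactly (standard notation; a sketch, not the paper's own equations):

```latex
% Linear gravitational instability relates the divergence of the peculiar
% velocity field to the mass density contrast:
\nabla \cdot \vec{v} \;=\; -H_0\, f(\Omega)\, \delta_m ,
\qquad f(\Omega) \approx \Omega^{0.6} .
% With linear biasing \delta_g = b\,\delta_m, the observable ratio of
% velocity divergence to galaxy density contrast estimates
\beta \;\equiv\; \frac{\Omega^{0.6}}{b}
      \;=\; -\,\frac{\nabla \cdot \vec{v}}{H_0\, \delta_g} .
```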

  19. Comparison of Fault Detection Algorithms for Real-time Diagnosis in Large-Scale System. Appendix E

    NASA Technical Reports Server (NTRS)

    Kirubarajan, Thiagalingam; Malepati, Venkat; Deb, Somnath; Ying, Jie

    2001-01-01

    In this paper, we present a review of different real-time capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (as a row vector of 1's and 0's) is available to the algorithms. In this case, the problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn will isolate the faults. In order to recover the uncorrupted test result vector, one needs the accuracy of each test; that is, its detection and false alarm probabilities are required. In this problem, their true values are not known and, therefore, have to be estimated online. Other major aspects of this problem are its large-scale nature and the real-time capability requirement. Test dictionaries of sizes up to 1000 x 1000 are to be handled; that is, results from 1000 tests measuring the state of 1000 components are available. However, at any time, only 10-20% of the test results are available. The objective thus becomes real-time fault diagnosis using incomplete and inaccurate test results with online estimation of test accuracies. It should also be noted that the test accuracies can vary with time --- one needs a mechanism to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performances of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming distance based diagnosis, 3) Maximum Likelihood based diagnosis, and 4) Hidden Markov Model based diagnosis.
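    The Hamming-distance and maximum-likelihood matching strategies compared in this entry can be sketched on a toy test dictionary. The dictionary, probabilities, and function names below are invented for illustration; this is not the TEAMS-RT implementation.

```python
# Toy sketch: match a corrupted test result vector to a fault signature.
# Rows of `dictionary` are fault signatures; `observed` is a noisy result vector.
import numpy as np

def diagnose_hamming(observed, dictionary):
    """Index of the dictionary row closest to `observed` in Hamming distance."""
    dists = np.sum(dictionary != observed, axis=1)
    return int(np.argmin(dists))

def diagnose_ml(observed, dictionary, p_detect, p_false_alarm):
    """Maximum-likelihood match using per-test detection (Pd) and
    false-alarm (Pfa) probabilities: a test fires with prob Pd when the
    signature says 1, and with prob Pfa when it says 0."""
    logL = np.zeros(len(dictionary))
    for i, row in enumerate(dictionary):
        p_fire = np.where(row == 1, p_detect, p_false_alarm)  # P(test fires)
        p = np.where(observed == 1, p_fire, 1.0 - p_fire)     # P(observed bit)
        logL[i] = np.sum(np.log(p))
    return int(np.argmax(logL))

dictionary = np.array([[1, 1, 0, 0],   # fault 0 signature
                       [0, 1, 1, 0],   # fault 1 signature
                       [0, 0, 1, 1]])  # fault 2 signature
p_d  = np.full(4, 0.9)    # per-test detection probabilities (assumed)
p_fa = np.full(4, 0.05)   # per-test false-alarm probabilities (assumed)
observed = np.array([1, 0, 0, 0])  # fault 0 with one missed detection
```

    Both decision rules recover fault 0 here; the ML rule additionally weights each disagreement by how unreliable that particular test is, which matters when test accuracies differ.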

  20. Experimental Investigation of Natural-Circulation Flow Behavior Under Low-Power/Low-Pressure Conditions in the Large-Scale PANDA Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auban, Olivier; Paladino, Domenico; Zboray, Robert

    2004-12-15

    Twenty-five tests have been carried out in the large-scale thermal-hydraulic facility PANDA to investigate natural-circulation and stability behavior under low-pressure/low-power conditions, when void flashing might play an important role. This work, which extends the current experimental database to a large geometric scale, is of interest notably with regard to the start-up procedures in natural-circulation-cooled boiling water reactors. It should help the understanding of the physical phenomena that may cause flow instability in such conditions and can be used for validation of thermal-hydraulics system codes. The tests were performed at a constant power, balanced by a specific condenser heat removal capacity. The test matrix allowed the reactor pressure vessel power and pressure to be varied, as well as other parameters influencing the natural-circulation flow. The power spectra of flow oscillations showed in a few tests a major and unique resonance peak, and decay ratios between 0.5 and 0.9 have been found. The remainder of the tests showed an even more pronounced stable behavior. A classification of the tests is presented according to the circulation modes (from single-phase to two-phase flow) that could be assumed and particularly to the importance and the localization of the flashing phenomenon.

  1. Interfacial film formation: influence on oil spreading rates in lab basin tests and dispersant effectiveness testing in a wave tank.

    PubMed

    King, Thomas L; Clyburne, Jason A C; Lee, Kenneth; Robinson, Brian J

    2013-06-15

    Test facilities such as lab basins and wave tanks are essential when evaluating the use of chemical dispersants to treat oil spills at sea. However, these test facilities have boundaries (walls) that provide an ideal environment for surface (interfacial) film formation on seawater. Surface films may form from surfactants naturally present in crude oil as well as dispersant drift/overspray when applied to an oil spill. The objective of this study was to examine the impact of surface film formation on oil spreading rates in a small scale lab basin and on dispersant effectiveness conducted in a large scale wave tank. The process of crude oil spreading on the surface of the basin seawater was influenced in the presence of a surface film as shown using a 1st order kinetic model. In addition, interfacial film formation can greatly influence chemically dispersed crude oil in a large scale dynamic wave tank. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
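    The first-order kinetic description of oil spreading mentioned above can be sketched as follows. The functional form (exponential approach to an equilibrium slick area) and all parameter values are assumptions for illustration, not the paper's fitted model.

```python
# Hedged sketch of first-order spreading kinetics:
#   A(t) = A_max * (1 - exp(-k t))
# The slick area approaches its equilibrium value A_max with rate constant k;
# an interfacial film would be expected to reduce the effective k.
import math

def slick_area(t_minutes, a_max, k):
    """Slick area at time t under first-order spreading kinetics."""
    return a_max * (1.0 - math.exp(-k * t_minutes))

A_MAX = 2.0   # m^2, assumed equilibrium slick area
K = 0.15      # 1/min, assumed rate constant

areas = [slick_area(t, A_MAX, K) for t in (0, 5, 10, 30)]
```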

  2. SCALE-UP OF RAPID SMALL-SCALE ADSORPTION TESTS TO FIELD-SCALE ADSORBERS: THEORETICAL BASIS AND EXPERIMENTAL RESULTS FOR A CONSTANT DIFFUSIVITY

    EPA Science Inventory

    Granular activated carbon (GAC) is an effective treatment technique for the removal of some toxic organics from drinking water or wastewater, however, it can be a relatively expensive process, especially if it is designed improperly. A rapid method for the design of large-scale f...

  3. Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration

    DOE PAGES

    Anefalos Pereira, S.; Baltzell, N.; Barion, L.; ...

    2016-02-11

    A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on aerogel radiator, composite mirrors and high-packed and high-segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). We report here the results of the tests of a large scale prototype of the RICH detector performed with the hadron beam of the CERN T9 experimental hall for the direct detection configuration. As a result, the tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 in the whole momentum range.

  4. Constraints on the power spectrum of the primordial density field from large-scale data - Microwave background and predictions of inflation

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1992-01-01

    It is shown here that, by using galaxy catalog correlation data as input, measurements of microwave background radiation (MBR) anisotropies should soon be able to test two of the inflationary scenario's most basic predictions: (1) that the primordial density fluctuations produced were scale-invariant and (2) that the universe is flat. They should also be able to detect anisotropies of large-scale structure formed by gravitational evolution of density fluctuations present at the last scattering epoch. Computations of MBR anisotropies corresponding to the minimum of the large-scale variance of the MBR anisotropy are presented which favor an open universe with P(k) significantly different from the Harrison-Zeldovich spectrum predicted by most inflationary models.

  5. Ballast degradation characterized through triaxial test : research results.

    DOT National Transportation Integrated Search

    2016-06-01

    Transportation Technology Center, Inc. (TTCI) has supported the development of a large-scale triaxial test device (Figure 1) for testing ballast-size aggregate materials at the University of Illinois at Urbana-Champaign (UIUC). This new tes...

  6. Status of DSMT research program

    NASA Technical Reports Server (NTRS)

    Mcgowan, Paul E.; Javeed, Mehzad; Edighoffer, Harold H.

    1991-01-01

    The status of the Dynamic Scale Model Technology (DSMT) research program is presented. DSMT is developing scale model technology for large space structures as part of the Control Structure Interaction (CSI) program at NASA Langley Research Center (LaRC). Under DSMT, a hybrid-scale structural dynamics model of Space Station Freedom was developed. Space Station Freedom was selected as the focus structure for DSMT since the station represents the first opportunity to obtain flight data on a complex, three-dimensional space structure. An overview of DSMT is included, covering the development of the space station scale model and the resulting hardware. Scaling technology was developed for this model to achieve a ground test article that existing test facilities can accommodate while employing realistically scaled hardware. The model was designed and fabricated by the Lockheed Missiles and Space Co. and is assembled at LaRC for dynamic testing. Also, results from ground tests and analyses of the various model components are presented along with plans for future subassembly and mated model tests. Finally, utilization of the scale model for enhancing analysis verification of the full-scale space station is also considered.

  7. Experimental Evaluation of Suitability of Selected Multi-Criteria Decision-Making Methods for Large-Scale Agent-Based Simulations

    PubMed Central

    2016-01-01

    Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the -server parameter de/activated, altogether 12,800 data points were collected and consequently analyzed. An illustrative decision-making scenario was used which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method completed the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061
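    Of the four methods compared, the Weighted Product Model (WPM) is the simplest to sketch. The decision matrix and weights below are made-up numbers for illustration, not data from the study.

```python
# Weighted Product Model (WPM): score each alternative as the product of its
# criterion values raised to the criterion weights; higher score is better.
# Assumes all criteria are benefit criteria on comparable scales.
import numpy as np

def wpm_scores(matrix, weights):
    """Rows = alternatives, columns = criteria; returns one score per row."""
    return np.prod(matrix ** weights, axis=1)

decision_matrix = np.array([[25.0, 20.0, 15.0],
                            [10.0, 30.0, 20.0],
                            [30.0, 10.0, 30.0]])
weights = np.array([0.2, 0.5, 0.3])   # must sum to 1

scores = wpm_scores(decision_matrix, weights)
best = int(np.argmax(scores))  # alternative 1 wins here (heavy weight on criterion 2)
```

    The other three methods (TOPSIS, VIKOR, PROMETHEE) add distance-to-ideal or outranking machinery on top of this same matrix-plus-weights input, which is why their per-agent computational cost differs and matters at the 10 000-agent scale the study targets.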

  8. Bundle block adjustment of large-scale remote sensing data with Block-based Sparse Matrix Compression combined with Preconditioned Conjugate Gradient

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong

    2016-07-01

    In recent years, new platforms and sensors in photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aircraft Vehicles (UAV), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors could be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data which consist of a great number of images. Bundle block adjustment of large-scale data with conventional algorithm is very time and space (memory) consuming due to the super large normal matrix arising from large-scale data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system in order to deal with the large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm which is more efficient in time and memory than the traditional algorithm without compromising the accuracy. Totally 8 datasets of real data are used to test our proposed method. Preliminary results have shown that the BSMC method can efficiently decrease the time and memory requirement of large-scale data.
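    The PCG solver at the heart of the approach can be sketched in a few lines. This is a generic Jacobi-preconditioned conjugate gradient on a tiny symmetric positive-definite toy system, standing in for the huge normal matrix of a real bundle block adjustment; it is not the paper's BSMC implementation.

```python
# Jacobi-preconditioned conjugate gradient (PCG) for N x = b,
# where N is symmetric positive definite (e.g. a normal matrix).
# The preconditioner is just the inverse diagonal of N.
import numpy as np

def pcg(N, b, tol=1e-10, max_iter=100):
    M_inv = 1.0 / np.diag(N)      # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - N @ x                 # residual
    z = M_inv * r                 # preconditioned residual
    p = z.copy()                  # search direction
    for _ in range(max_iter):
        Np = N @ p
        alpha = (r @ z) / (p @ Np)
        x += alpha * p
        r_new = r - alpha * Np
        if np.linalg.norm(r_new) < tol:
            break
        z_new = M_inv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

# toy stand-in for a (block-sparse) normal matrix and right-hand side
A = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
rhs = np.array([1.0, 2.0, 3.0, 4.0])
x = pcg(A, rhs)
```

    The design point the paper exploits is that PCG only ever needs matrix-vector products `N @ p`, so the normal matrix never has to be stored densely or factorized; a compressed block-sparse representation (their BSMC) suffices.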

  9. Longitudinal Multistage Testing

    ERIC Educational Resources Information Center

    Pohl, Steffi

    2013-01-01

    This article introduces longitudinal multistage testing (lMST), a special form of multistage testing (MST), as a method for adaptive testing in longitudinal large-scale studies. In lMST designs, test forms of different difficulty levels are used, whereas the values on a pretest determine the routing to these test forms. Since lMST allows for…

  10. UAS in the NAS Project: Large-Scale Communication Architecture Simulations with NASA GRC Gen5 Radio Model

    NASA Technical Reports Server (NTRS)

    Kubat, Gregory

    2016-01-01

    This report provides a description and performance characterization of the large-scale, Relay architecture, UAS communications simulation capability developed for the NASA GRC, UAS in the NAS Project. The system uses a validated model of the GRC Gen5 CNPC, Flight-Test Radio model. Contained in the report is a description of the simulation system and its model components, recent changes made to the system to improve performance, descriptions and objectives of sample simulations used for test and verification, and a sampling and observations of results and performance data.

  11. Normal variability of children's scaled scores on subtests of the Dutch Wechsler Preschool and Primary scale of Intelligence - third edition.

    PubMed

    Hurks, P P M; Hendriksen, J G M; Dek, J E; Kooij, A P

    2013-01-01

    Intelligence tests are included in millions of assessments of children and adults each year (Watkins, Glutting, & Lei, 2007a, Applied Neuropsychology, 14, 13). Clinicians often interpret large amounts of subtest scatter, or large differences between the highest and lowest scaled subtest scores, on an intelligence test battery as an index for abnormality or cognitive impairment. The purpose of the present study is to characterize "normal" patterns of variability among subtests of the Dutch Wechsler Preschool and Primary Scale of Intelligence - Third Edition (WPPSI-III-NL; Wechsler, 2010). Therefore, the frequencies of WPPSI-III-NL scaled subtest scatter were reported for 1039 healthy children aged 4:0-7:11 years. Results indicated that large differences between highest and lowest scaled subtest scores (or subtest scatter) were common in this sample. Furthermore, degree of subtest scatter was related to: (a) the magnitude of the highest scaled subtest score, i.e., more scatter was seen in children with the highest WPPSI-III-NL scaled subtest scores, (b) Full Scale IQ (FSIQ) scores, i.e., higher FSIQ scores were associated with an increase in subtest scatter, and (c) sex differences, with boys showing a tendency to display more scatter than girls. In conclusion, viewing subtest scatter as an index for abnormality in WPPSI-III-NL scores is an oversimplification as this fails to recognize disparate subtest heterogeneity that occurs within a population of healthy children aged 4:0-7:11 years.
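
Subtest scatter as used here is simply the range of one child's scaled subtest scores. A minimal illustration (the subtest names and scores are hypothetical; WPPSI subtests are scaled to mean 10, SD 3):

```python
# Hypothetical scaled subtest scores for one child; names are
# illustrative only, not the WPPSI-III-NL battery itself.
scores = {"Block Design": 12, "Information": 9, "Matrix Reasoning": 14,
          "Vocabulary": 8, "Picture Concepts": 11}

# Subtest scatter: difference between highest and lowest scaled scores.
scatter = max(scores.values()) - min(scores.values())
```

The study's point is that values like this one are common in healthy children, so a large scatter by itself is weak evidence of impairment.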

  12. Wind-Tunnel Experiments for Gas Dispersion in an Atmospheric Boundary Layer with Large-Scale Turbulent Motion

    NASA Astrophysics Data System (ADS)

    Michioka, Takenobu; Sato, Ayumu; Sada, Koichi

    2011-10-01

    Large-scale turbulent motions enhancing horizontal gas spread in an atmospheric boundary layer are simulated in a wind-tunnel experiment. The large-scale turbulent motions can be generated using an active grid installed at the front of the test section in the wind tunnel, when appropriate parameters for the angular deflection and the rotation speed are chosen. The power spectra of vertical velocity fluctuations are unchanged with and without the active grid because they are strongly affected by the surface. The power spectra of both streamwise and lateral velocity fluctuations with the active grid increase in the low frequency region, and are closer to the empirical relations inferred from field observations. The large-scale turbulent motions do not affect the Reynolds shear stress, but change the balance of the processes involved. The relative contributions of ejections to sweeps are suppressed by large-scale turbulent motions, indicating that the motions behave as sweep events. The lateral gas spread is enhanced by the lateral large-scale turbulent motions generated by the active grid. The large-scale motions, however, do not affect the vertical velocity fluctuations near the surface, resulting in their having a minimal effect on the vertical gas spread. The peak concentration normalized using the root-mean-squared value of concentration fluctuation is remarkably constant over most regions of the plume irrespective of the operation of the active grid.

  13. Unique Testing Capabilities of the NASA Langley Transonic Dynamics Tunnel, an Exercise in Aeroelastic Scaling

    NASA Technical Reports Server (NTRS)

    Ivanco, Thomas G.

    2013-01-01

    NASA Langley Research Center's Transonic Dynamics Tunnel (TDT) is the world's most capable aeroelastic test facility. Its large size, transonic speed range, variable pressure capability, and use of either air or R-134a heavy gas as a test medium enable unparalleled manipulation of flow-dependent scaling quantities. Matching these scaling quantities enables dynamic similitude of a full-scale vehicle with a sub-scale model, a requirement for proper characterization of any dynamic phenomenon, and many static elastic phenomena. Select scaling parameters are presented in order to quantify the scaling advantages of TDT and the consequence of testing in other facilities. In addition to dynamic testing, the TDT is uniquely well-suited for high risk testing or for those tests that require unusual model mount or support systems. Examples of recently conducted dynamic tests requiring unusual model support are presented. In addition to its unique dynamic test capabilities, the TDT is also evaluated for its capability to conduct aerodynamic performance tests as a result of its flow quality. Results of flow quality studies and a comparison to many other transonic facilities are presented. Finally, the ability of the TDT to support future NASA research thrusts and likely vehicle designs is discussed.

  14. Observational tests of convective core overshooting in stars of intermediate to high mass in the Galaxy

    NASA Technical Reports Server (NTRS)

    Stothers, Richard B.

    1991-01-01

    This study presents the results of 14 tests for the presence of convective overshooting in large convecting stellar cores for stars with masses of 4-17 solar masses which are members of detached close binary systems and of open clusters in the Galaxy. A large body of theoretical and observational data is scrutinized and subjected to averaging in order to minimize accidental and systematic errors. A conservative upper limit of d/HP less than 0.4 is found from at least four tests, as well as a tighter upper limit of d/HP less than 0.2 from one good test that is subject to only mild restrictions and is based on the maximum observed effective temperature of evolved blue supergiants. It is concluded that any current uncertainty about the distance scale for these stars is unimportant in conducting the present tests for convective core overshooting. The correct effective temperature scale for the B0.5-B2 stars is almost certainly close to one of the proposed hot scales.

  15. Development of a large-scale, outdoor, ground-based test capability for evaluating the effect of rain on airfoil lift

    NASA Technical Reports Server (NTRS)

    Bezos, Gaudy M.; Campbell, Bryan A.

    1993-01-01

    A large-scale, outdoor, ground-based test capability for acquiring aerodynamic data in a simulated rain environment was developed at the Langley Aircraft Landing Dynamics Facility (ALDF) to assess the effect of heavy rain on airfoil performance. The ALDF test carriage was modified to transport a 10-ft-chord NACA 64210 wing section along a 3000-ft track at full-scale aircraft approach speeds. An overhead rain simulation system was constructed along a 525-ft section of the track with the capability of producing simulated rain fields of 2, 10, 30, and 40 in/hr. The facility modifications, the aerodynamic testing and rain simulation capability, the design and calibration of the rain simulation system, and the operational procedures developed to minimize the effect of wind on the simulated rain field and aerodynamic data are described in detail. The data acquisition and reduction processes are also presented along with sample force data illustrating the environmental effects on data accuracy and repeatability for the 'rain-off' test condition.

  16. Newly invented biobased materials from low-carbon, diverted waste fibers: research methods, testing, and full-scale application in a case study structure

    Treesearch

    Julee A Herdt; John Hunt; Kellen Schauermann

    2016-01-01

    This project demonstrates newly invented, biobased construction materials developed by applying low-carbon biomass waste sources through the authors' engineered fiber processes and technology. If manufactured and applied at large scale, the project's inventions can divert large volumes of cellulose waste into high-performance, low-embodied-energy, environmental construction...

  17. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

    The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R_p ≈ 20 h^(-1) Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
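
For reference, the hierarchical three-point amplitude discussed here is conventionally defined by normalizing the connected three-point function ζ by products of two-point functions ξ over the triangle sides:

```latex
Q_3(r_{12}, r_{23}, r_{31}) \equiv
  \frac{\zeta(r_{12}, r_{23}, r_{31})}
       {\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})}
```

In the hierarchical regime Q_3 is approximately constant, which is why a rapid fall-off of Q_J at large scales is a distinctive signature of scale-dependent bias.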

  18. Cost estimate for a proposed GDF Suez LNG testing program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchat, Thomas K.; Brady, Patrick Dennis; Jernigan, Dann A.

    2014-02-01

    At the request of GDF Suez, a Rough Order of Magnitude (ROM) cost estimate was prepared for the design, construction, testing, and data analysis for an experimental series of large-scale Liquefied Natural Gas (LNG) spills on land and water that would result in the largest pool fires and vapor dispersion events ever conducted. Due to the expected cost of this large, multi-year program, the authors utilized Sandia's structured cost estimating methodology. This methodology ensures that the efforts identified can be performed for the cost proposed at a plus or minus 30 percent confidence. The scale of the LNG spill, fire, and vapor dispersion tests proposed by GDF could produce hazard distances and testing safety issues that need to be fully explored. Based on our evaluations, Sandia can utilize much of our existing fire testing infrastructure for the large fire tests and some small dispersion tests (with some modifications) in Albuquerque, but we propose to develop a new dispersion testing site at our remote test area in Nevada because of the large hazard distances. While this might impact some testing logistics, the safety aspects warrant this approach. In addition, we have included a proposal to study cryogenic liquid spills on water and subsequent vaporization in the presence of waves. Sandia is working with DOE on applications that provide infrastructure pertinent to wave production. We present an approach to conduct repeatable wave/spill interaction testing that could utilize such infrastructure.

  19. Advanced Model for Extreme Lift and Improved Aeroacoustics (AMELIA)

    NASA Technical Reports Server (NTRS)

    Lichtwardt, Jonathan; Paciano, Eric; Jameson, Tina; Fong, Robert; Marshall, David

    2012-01-01

    With the very recent advent of NASA's Environmentally Responsible Aviation Project (ERA), which is dedicated to designing aircraft that will reduce the impact of aviation on the environment, there is a need for research and development of methodologies to minimize fuel burn and emissions and to reduce community noise produced by regional airliners. ERA tackles airframe technology, propulsion technology, and vehicle systems integration to meet performance objectives in the time frame for the aircraft to be at a Technology Readiness Level (TRL) of 4-6 by the year 2020 (deemed N+2). The preceding project that investigated similar goals to ERA was NASA's Subsonic Fixed Wing (SFW) project. SFW focused on conducting research to improve prediction methods and technologies that will produce lower-noise, lower-emissions, and higher-performing subsonic aircraft for the Next Generation Air Transportation System. The work provided in this investigation was a NASA Research Announcement (NRA) contract #NNL07AA55C funded by Subsonic Fixed Wing. The project started in 2007 with a specific goal of conducting a large-scale wind tunnel test along with the development of new and improved predictive codes for advanced powered-lift concepts. Many of the predictive codes were used to refine the wind tunnel model outer mold line design. The goal of the large-scale wind tunnel test was to investigate powered-lift technologies and provide an experimental database to validate current and future modeling techniques. The powered-lift concepts investigated were a Circulation Control (CC) wing in conjunction with over-the-wing mounted engines that entrain the exhaust to further increase the lift generated by CC technologies alone. The NRA was a five-year effort; during the first year the objective was to select and refine CESTOL concepts and then to complete a preliminary design of a large-scale wind tunnel model for the large-scale test.
During the second, third, and fourth years the large-scale wind tunnel model was designed, manufactured, and calibrated, and during the fifth year the large-scale wind tunnel test was conducted. This technical memo describes all phases of the Advanced Model for Extreme Lift and Improved Aeroacoustics (AMELIA) project and provides a brief summary of the background and modeling efforts involved in the NRA. The conceptual designs considered for this project and the decision process for the configuration adapted for a wind tunnel model are briefly discussed, along with the internal configuration of AMELIA and the internal measurements chosen to satisfy the requirement of obtaining a database of experimental data for future computational model validation. The external experimental techniques employed during the test and the large-scale wind tunnel test facility are covered in detail. Experimental measurements in the database include forces and moments, surface pressure distributions, local skin friction measurements, boundary and shear layer velocity profiles, far-field acoustic data, and noise signatures from turbofan propulsion simulators. Results for circulation control performance, the over-the-wing mounted engines, and the combined configuration are also discussed in detail.

  20. STE thrust chamber technology: Main injector technology program and nozzle Advanced Development Program (ADP)

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The purpose of the STME Main Injector Program was to enhance the technology base for the large-scale main injector-combustor system of oxygen-hydrogen booster engines in the areas of combustion efficiency, chamber heating rates, and combustion stability. The initial task of the Main Injector Program, focused on analysis and theoretical predictions using existing models, was complemented by the design, fabrication, and test at MSFC of a subscale calorimetric, 40,000-pound thrust class, axisymmetric thrust chamber operating at approximately 2,250 psi and a 7:1 expansion ratio. Test results were used to further define combustion stability bounds, combustion efficiency, and heating rates using a large injector scale similar to the Pratt & Whitney (P&W) STME main injector design configuration, including the tangential entry swirl coaxial injection elements. The subscale combustion data were used to verify and refine analytical modeling simulations and extend the database range to guide the design of the large-scale system main injector. The subscale injector design incorporated fuel and oxidizer flow area control features which could be varied; this allowed testing of several design points so that the STME conditions could be bracketed. The subscale injector design also incorporated high-reliability and low-cost fabrication techniques such as a one-piece electrical discharge machined (EDMed) interpropellant plate. Both subscale and large-scale injectors incorporated outer-row injector elements with scarfed tip features to allow evaluation of reduced heating rates to the combustion chamber.

  1. Detection of Test Collusion via Kullback-Leibler Divergence

    ERIC Educational Resources Information Center

    Belov, Dmitry I.

    2013-01-01

    The development of statistical methods for detecting test collusion is a new research direction in the area of test security. Test collusion may be described as large-scale sharing of test materials, including answers to test items. Current methods of detecting test collusion are based on statistics also used in answer-copying detection.…
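
The abstract does not give the exact form of the detection statistic, but the underlying quantity is the standard Kullback-Leibler divergence between discrete distributions. A generic sketch, with hypothetical response-choice distributions for a flagged group versus the population:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between two discrete
    distributions, in nats. eps guards against log(0)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Illustration: answer-choice distributions on one item for a suspected
# group vs. the whole population (hypothetical numbers).
group = [0.70, 0.10, 0.10, 0.10]
population = [0.40, 0.20, 0.20, 0.20]
d = kl_divergence(group, population)
```

D(P || Q) is zero only when the two distributions coincide, so an unusually large divergence between a group's response pattern and the population's flags possible collusion.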

  2. Assessment of Disturbance at Three Spatial Scales in Two Large Tropical Reservoirs

    EPA Science Inventory

    Large reservoirs vary from lentic to lotic systems in time and space. Therefore our objective was to assess disturbance gradients for two large tropical reservoirs and their influences on benthic macroinvertebrates. We tested three hypotheses: 1) a disturbance gradient of environ...

  3. Fire extinguishing agents for oxygen-enriched atmospheres

    NASA Astrophysics Data System (ADS)

    Plugge, M. A.; Wilson, C. W.; Zallen, D. M.; Walker, J. L.

    1985-12-01

    Fire-suppression agent requirements for extinguishing fires in oxygen-enriched atmospheres were determined employing small-, medium-, large-, and full-scale test apparatuses. The small- and medium-scale tests showed that a doubling of the oxygen concentration required five times more HALON for extinguishment. For fires of similar size and intensity, the effect of oxygen enrichment of the diluent volume in the HC-131A was not as great as in the smaller compartments of the B-52, which presented a higher damage scenario. The full-scale tests showed that damage to the airframe was as important a factor in extinguishment as oxygen enrichment.

  4. Testing a New Generation: Implementing Clickers as an Extension Data Collection Tool

    ERIC Educational Resources Information Center

    Parmer, Sondra M.; Parmer, Greg; Struempler, Barb

    2012-01-01

    Using clickers to gauge student understanding in large classrooms is well documented. Less well known is the effectiveness of using clickers with youth for test taking in large-scale Extension programs. This article describes the benefits and challenges of collecting evaluation data using clickers with a third-grade population participating in a…

  5. An Overview of NASA Efforts on Zero Boiloff Storage of Cryogenic Propellants

    NASA Technical Reports Server (NTRS)

    Hastings, Leon J.; Plachta, D. W.; Salerno, L.; Kittel, P.; Haynes, Davy (Technical Monitor)

    2001-01-01

    Future mission planning within NASA has increasingly motivated consideration of cryogenic propellant storage durations on the order of years as opposed to a few weeks or months. Furthermore, the advancement of cryocooler and passive insulation technologies in recent years has substantially improved the prospects for zero-boiloff storage of cryogenics. Accordingly, a cooperative effort by NASA's Ames Research Center (ARC), Glenn Research Center (GRC), and Marshall Space Flight Center (MSFC) has been implemented to develop and demonstrate "zero boiloff" concepts for in-space storage of cryogenic propellants, particularly liquid hydrogen and oxygen. ARC is leading the development of flight-type cryocoolers, GRC the subsystem development and small-scale testing, and MSFC the large-scale and integrated system-level testing. Thermal and fluid modeling involves a combined effort by the three Centers. Recent accomplishments include: 1) development of "zero boiloff" analytical modeling techniques for sizing the storage tankage, passive insulation, cryocooler, power source mass, and radiators; 2) an early subscale demonstration with liquid hydrogen; 3) procurement of a flight-type 10-watt, 95 K pulse tube cryocooler for liquid oxygen storage; and 4) assembly of a large-scale test article for an early demonstration of the integrated operation of passive insulation, destratification/pressure control, and cryocooler (commercial unit) subsystems to achieve zero-boiloff storage of liquid hydrogen. Near-term plans include the large-scale integrated system demonstration testing this summer, subsystem testing of the flight-type pulse-tube cryocooler with liquid nitrogen (oxygen simulant), and continued development of a flight-type liquid hydrogen pulse tube cryocooler.

  6. Process, pattern and scale: hydrogeomorphology and plant diversity in forested wetlands across multiple spatial scales

    NASA Astrophysics Data System (ADS)

    Alexander, L.; Hupp, C. R.; Forman, R. T.

    2002-12-01

    Many geodisturbances occur across large spatial scales, spanning entire landscapes and creating ecological phenomena in their wake. Ecological study at large scales poses special problems: (1) large-scale studies require large-scale resources, and (2) sampling is not always feasible at the appropriate scale, so researchers rely on data collected at smaller scales to interpret patterns across broad regions. A criticism of landscape ecology is that findings at small spatial scales are "scaled up" and applied indiscriminately across larger spatial scales. In this research, landscape scaling is addressed through process-pattern relationships between hydrogeomorphic processes and patterns of plant diversity in forested wetlands. The research addresses: (1) whether patterns and relationships between hydrogeomorphic, vegetation, and spatial variables can transcend scale; and (2) whether data collected at small spatial scales can be used to describe patterns and relationships across larger spatial scales. Field measurements of hydrologic, geomorphic, spatial, and vegetation data were collected or calculated for 15 1-ha sites on forested floodplains of six (6) Chesapeake Bay Coastal Plain streams over a total area of about 20,000 km2. Hydroperiod (days/yr), floodplain surface elevation range (m), discharge (m3/s), stream power (kg-m/s2), sediment deposition (mm/yr), relative position downstream, and other variables were used in multivariate analyses to explain differences in species richness, tree diversity (Shannon-Wiener Diversity Index H'), and plant community composition at four spatial scales. Data collected at the plot (400-m2) and site (c. 1-ha) scales are applied to and tested at the river watershed and regional spatial scales. Results indicate that plant species richness and tree diversity (Shannon-Wiener diversity index H') can be described by hydrogeomorphic conditions at all scales, but are best described at the site scale.
Data collected at plot and site scales are tested for spatial heterogeneity across the Chesapeake Bay Coastal Plain using a geostatistical variogram, and multiple regression analysis is used to relate plant diversity, spatial, and hydrogeomorphic variables across Coastal Plain regions and hydrologic regimes. Results indicate that relationships between hydrogeomorphic processes and patterns of plant diversity at finer scales can proxy relationships at coarser scales in some, not all, cases. Findings also suggest that data collected at small scales can be used to describe trends across broader scales under limited conditions.
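
The Shannon-Wiener index H' used above is straightforward to compute from species abundances. A minimal sketch; the stem counts are hypothetical, standing in for one 400-m2 plot:

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln p_i),
    where p_i is the proportion of individuals in species i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical stem counts for four tree species on one plot.
h = shannon_wiener([30, 25, 25, 20])
```

H' is maximized (at ln of the species count) when abundances are perfectly even, so the value here falls slightly below ln 4.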

  7. False-Positive Tuberculin Skin Test Results Among Low-Risk Healthcare Workers Following Implementation of Fifty-Dose Vials of Purified Protein Derivative.

    PubMed

    Collins, Jeffrey M; Hunter, Mary; Gordon, Wanda; Kempker, Russell R; Blumberg, Henry M; Ray, Susan M

    2018-06-01

    Following large declines in tuberculosis transmission in the United States, large-scale screening programs targeting low-risk healthcare workers are increasingly a source of false-positive results. We report a large cluster of presumed false-positive tuberculin skin test results in healthcare workers following a change to 50-dose vials of Tubersol tuberculin. Infect Control Hosp Epidemiol 2018;39:750-752.

  8. Gravitational waves and large field inflation

    NASA Astrophysics Data System (ADS)

    Linde, Andrei

    2017-02-01

    According to the famous Lyth bound, one can confirm large field inflation by finding tensor modes with sufficiently large tensor-to-scalar ratio r. Here we will try to answer two related questions: is it possible to rule out all large field inflationary models by not finding tensor modes with r above some critical value, and what can we say about the scale of inflation by measuring r? However, in order to answer these questions one should distinguish between two different definitions of the large field inflation and three different definitions of the scale of inflation. We will examine these issues using the theory of cosmological α-attractors as a convenient testing ground.
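
For context, the Lyth bound invoked here ties the inflaton field excursion Δφ to the tensor-to-scalar ratio r; in its standard form, assuming r roughly constant over the N observable e-folds,

```latex
\frac{\Delta\phi}{M_{\rm Pl}}
  = \int_0^{N} \sqrt{\frac{r(N')}{8}}\, dN'
  \;\approx\; \left(\frac{N}{60}\right)\left(\frac{r}{0.002}\right)^{1/2}
```

so a detection of r ≳ 0.002 would imply a super-Planckian field range under at least one of the definitions of "large field" that the paper distinguishes.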

  9. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium-scale mapping over large foreign areas or with large numbers of images. In this paper, addressing the geometric features of optical satellite imagery and building on a widely used optimization method for constrained problems, the Alternating Direction Method of Multipliers (ADMM), together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for super large-scale domestic high resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment) - which is easy to parallelize and highly efficient. In this method, virtual "average" control points are constructed to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without control. The test results prove that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem in adjacent areas of large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments with GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy, and performance of the developed procedure are presented and studied.
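
The abstract does not reproduce the adjustment equations; for reference, generic ADMM for a problem of the form min f(x) + g(z) subject to Ax + Bz = c iterates the standard scaled-form updates:

```latex
x^{k+1} = \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k} \rVert_2^2 \\
z^{k+1} = \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k} \rVert_2^2 \\
u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c
```

where u is the scaled dual variable and ρ > 0 the penalty parameter. Each x- and z-update decomposes over blocks, which is what makes the method easy to parallelize in a block adjustment setting.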

  10. Ensemble Kalman filters for dynamical systems with unresolved turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.

    Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called 'representation' or 'representativeness' error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgrid-scale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small-scale turbulence: a shallow energy spectrum proportional to k^(-5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics.
Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
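
The filtering setup can be illustrated with a minimal stochastic (perturbed-observation) EnKF analysis step. This is a generic textbook sketch, not the paper's superparameterized filter; the state size, observation operator, and error covariances below are arbitrary choices, and in the paper's setting R would also carry the representation error.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(ensemble, y, H, R):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble : (n_members, n_state) forecast ensemble
    y        : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    R        : (n_obs, n_obs) observation-error covariance
    """
    n, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)      # state anomalies
    Y = X @ H.T                               # anomalies in observation space
    P_yy = Y.T @ Y / (n - 1) + R              # innovation covariance
    P_xy = X.T @ Y / (n - 1)                  # state/obs cross-covariance
    K = P_xy @ np.linalg.inv(P_yy)            # Kalman gain
    obs_pert = rng.multivariate_normal(np.zeros(len(y)), R, size=n)
    innov = (y + obs_pert) - ensemble @ H.T   # perturbed innovations
    return ensemble + innov @ K.T

# Toy example: 40 members, 3-variable state, observe the first component.
ens = rng.normal(0.0, 1.0, size=(40, 3))
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
analysis = enkf_analysis(ens, np.array([0.5]), H, R)
```

Ignoring representation error corresponds to choosing R too small, which overweights the observations, as the abstract notes.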

  11. A new energy transfer model for turbulent free shear flow

    NASA Technical Reports Server (NTRS)

    Liou, William W.-W.

    1992-01-01

    A new model for the energy transfer mechanism in the large-scale turbulent kinetic energy equation is proposed. An estimate of the characteristic length scale of the energy-containing large structures is obtained from the wavelength associated with the structures predicted by a weakly nonlinear analysis for turbulent free shear flows. With the inclusion of the proposed energy transfer model, the weakly nonlinear wave models for the turbulent large-scale structures are self-contained and are likely to be independent of flow geometry. The model is tested against a plane mixing layer, and reasonably good agreement is achieved. Finally, it is shown, using the Liapunov function method, that the balance between the production and the drainage of the kinetic energy of the turbulent large-scale structures is asymptotically stable as their amplitude saturates. The saturation of the wave amplitude provides an alternative indicator for flow self-similarity.

  12. Planck 2015 results. XVI. Isotropy and statistics of the CMB

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Akrami, Y.; Aluri, P. K.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Basak, S.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Casaponsa, B.; Catalano, A.; Challinor, A.; Chamballu, A.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Contreras, D.; Couchot, F.; Coulais, A.; Crill, B. P.; Cruz, M.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fantaye, Y.; Fergusson, J.; Fernandez-Cobos, R.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Frolov, A.; Galeotta, S.; Galli, S.; Ganga, K.; Gauthier, C.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huang, Z.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kim, J.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; Liu, H.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. 
F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Marinucci, D.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mikkelsen, K.; Mitra, S.; Miville-Deschênes, M.-A.; Molinari, D.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Pant, N.; Paoletti, D.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Rotti, A.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Souradeep, T.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Trombetti, T.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zibin, J. P.; Zonca, A.

    2016-09-01

    We test the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite. Our results are based mainly on the full Planck mission for temperature, but also include some polarization measurements. In particular, we consider the CMB anisotropy maps derived from the multi-frequency Planck data by several component-separation methods. For the temperature anisotropies, we find excellent agreement between results based on these sky maps over both a very large fraction of the sky and a broad range of angular scales, establishing that potential foreground residuals do not affect our studies. Tests of skewness, kurtosis, multi-normality, N-point functions, and Minkowski functionals indicate consistency with Gaussianity, while a power deficit at large angular scales is manifested in several ways, for example low map variance. The results of a peak statistics analysis are consistent with the expectations of a Gaussian random field. The "Cold Spot" is detected with several methods, including map kurtosis, peak statistics, and mean temperature profile. We thoroughly probe the large-scale dipolar power asymmetry, detecting it with several independent tests, and address the subject of a posteriori correction. Tests of directionality suggest the presence of angular clustering from large to small scales, but at a significance that is dependent on the details of the approach. We perform the first examination of polarization data, finding the morphology of stacked peaks to be consistent with the expectations of statistically isotropic simulations. Where they overlap, these results are consistent with the Planck 2013 analysis based on the nominal mission data and provide our most thorough view of the statistics of the CMB fluctuations to date.
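The one-point statistics the abstract mentions (skewness and kurtosis as Gaussianity checks) can be illustrated with a toy calculation on simulated pixel temperatures. This is a minimal sketch, not the Planck analysis pipeline; the pixel count and amplitude are invented.

```python
import random
import statistics

random.seed(0)
# Toy "map": 10,000 Gaussian pixel temperatures (uK), zero mean.
pixels = [random.gauss(0.0, 100.0) for _ in range(10_000)]

mean = statistics.fmean(pixels)
var = statistics.fmean((p - mean) ** 2 for p in pixels)
skew = statistics.fmean((p - mean) ** 3 for p in pixels) / var ** 1.5
kurt = statistics.fmean((p - mean) ** 4 for p in pixels) / var ** 2 - 3.0

# For a Gaussian field both statistics should be near zero.
print(f"skewness = {skew:+.3f}, excess kurtosis = {kurt:+.3f}")
```

For 10,000 samples the sampling scatter of these estimators is of order 0.02 to 0.05, so values well outside that would flag non-Gaussianity.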

  13. Planck 2015 results: XVI. Isotropy and statistics of the CMB

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Akrami, Y.; ...

    2016-09-20

In this paper, we test the statistical isotropy and Gaussianity of the cosmic microwave background (CMB) anisotropies using observations made by the Planck satellite. Our results are based mainly on the full Planck mission for temperature, but also include some polarization measurements. In particular, we consider the CMB anisotropy maps derived from the multi-frequency Planck data by several component-separation methods. For the temperature anisotropies, we find excellent agreement between results based on these sky maps over both a very large fraction of the sky and a broad range of angular scales, establishing that potential foreground residuals do not affect our studies. Tests of skewness, kurtosis, multi-normality, N-point functions, and Minkowski functionals indicate consistency with Gaussianity, while a power deficit at large angular scales is manifested in several ways, for example low map variance. The results of a peak statistics analysis are consistent with the expectations of a Gaussian random field. The “Cold Spot” is detected with several methods, including map kurtosis, peak statistics, and mean temperature profile. We thoroughly probe the large-scale dipolar power asymmetry, detecting it with several independent tests, and address the subject of a posteriori correction. Tests of directionality suggest the presence of angular clustering from large to small scales, but at a significance that is dependent on the details of the approach. We perform the first examination of polarization data, finding the morphology of stacked peaks to be consistent with the expectations of statistically isotropic simulations. Finally, where they overlap, these results are consistent with the Planck 2013 analysis based on the nominal mission data and provide our most thorough view of the statistics of the CMB fluctuations to date.

  15. A large-scale, long-term study of scale drift: The micro view and the macro view

    NASA Astrophysics Data System (ADS)

    He, W.; Li, S.; Kingsbury, G. G.

    2016-11-01

The development of measurement scales for use across years and grades in educational settings provides unique challenges, as instructional approaches, instructional materials, and content standards all change periodically. This study examined the measurement stability of a set of Rasch measurement scales that have been in place for almost 40 years. In order to investigate the stability of these scales, item responses were collected from a large set of students who took operational adaptive tests using items calibrated to the measurement scales. For the four scales that were examined, item samples ranged from 2183 to 7923 items. Each item was administered to at least 500 students in each grade level, resulting in approximately 3000 responses per item. Stability was examined at the micro level, by analysing changes in item parameter estimates since the items were first calibrated, and at the macro level, involving groups of items and overall test scores for students. Results indicated that individual items showed changes in their parameter estimates, requiring further analysis and possibly recalibration. At the same time, the results at the total-score level indicate substantial stability in the measurement scales over the span of their use.
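The micro-level drift check described above can be sketched as a comparison of each item's original and re-estimated Rasch difficulty against a flagging cutoff, plus a macro-level mean shift. The item values and the 0.30-logit cutoff below are hypothetical, for illustration only.

```python
# Hypothetical item records: (original logit difficulty, new estimate).
items = [(-1.20, -1.05), (0.40, 0.42), (2.10, 1.55), (-0.30, -0.28), (1.00, 1.38)]

DRIFT_CUTOFF = 0.30  # logits; an assumed flagging threshold

# Micro view: flag individual items whose difficulty has drifted.
flagged = [i for i, (old, new) in enumerate(items) if abs(new - old) > DRIFT_CUTOFF]
# Macro view: mean shift across the item pool (scale-level stability).
macro_shift = sum(new - old for old, new in items) / len(items)

print(f"items flagged for recalibration: {flagged}")
print(f"macro-level mean shift: {macro_shift:+.3f} logits")
```

The sketch mirrors the paper's finding: individual items can drift noticeably while the scale-level average stays close to zero.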

  16. The Modified Abbreviated Math Anxiety Scale: A Valid and Reliable Instrument for Use with Children.

    PubMed

    Carey, Emma; Hill, Francesca; Devine, Amy; Szűcs, Dénes

    2017-01-01

    Mathematics anxiety (MA) can be observed in children from primary school age into the teenage years and adulthood, but many MA rating scales are only suitable for use with adults or older adolescents. We have adapted one such rating scale, the Abbreviated Math Anxiety Scale (AMAS), to be used with British children aged 8-13. In this study, we assess the scale's reliability, factor structure, and divergent validity. The modified AMAS (mAMAS) was administered to a very large ( n = 1746) cohort of British children and adolescents. This large sample size meant that as well as conducting confirmatory factor analysis on the scale itself, we were also able to split the sample to conduct exploratory and confirmatory factor analysis of items from the mAMAS alongside items from child test anxiety and general anxiety rating scales. Factor analysis of the mAMAS confirmed that it has the same underlying factor structure as the original AMAS, with subscales measuring anxiety about Learning and Evaluation in math. Furthermore, both exploratory and confirmatory factor analysis of the mAMAS alongside scales measuring test anxiety and general anxiety showed that mAMAS items cluster onto one factor (perceived to represent MA). The mAMAS provides a valid and reliable scale for measuring MA in children and adolescents, from a younger age than is possible with the original AMAS. Results from this study also suggest that MA is truly a unique construct, separate from both test anxiety and general anxiety, even in childhood.
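Scale-validation work of this kind usually also reports internal consistency. As a simple sketch (not the mAMAS analysis, which used factor analysis), Cronbach's alpha can be computed on made-up 5-point Likert responses to a short four-item scale:

```python
# Invented responses: 8 respondents x 4 items, 5-point Likert scale.
responses = [
    [4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 4, 5], [1, 2, 1, 1],
    [3, 3, 4, 3], [2, 1, 2, 2], [4, 4, 5, 4], [3, 2, 3, 3],
]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(responses[0])
item_vars = [variance([r[i] for r in responses]) for i in range(k)]
total_var = variance([sum(r) for r in responses])
# Cronbach's alpha: high when items co-vary strongly relative to their
# individual variances.
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

With these strongly correlated toy items, alpha comes out high, as one would hope for a reliable scale.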

  17. Noise reduction tests of large-scale-model externally blown flap using trailing-edge blowing and partial flap slot covering. [jet aircraft noise reduction

    NASA Technical Reports Server (NTRS)

    Mckinzie, D. J., Jr.; Burns, R. J.; Wagner, J. M.

    1976-01-01

Noise data were obtained with a large-scale cold-flow model of a two-flap, under-the-wing, externally blown flap proposed for use on future STOL aircraft. The noise suppression effectiveness of locating a slot conical nozzle at the trailing edge of the second flap and of applying partial covers to the slots between the wing and flaps was evaluated. Overall-sound-pressure-level reductions of 5 dB occurred below the wing in the flyover plane. Existing models of several noise sources were applied to the test results. The resulting analytical relation compares favorably with the test data. The noise source mechanisms were analyzed and are discussed.

  18. Development of a metal-clad advanced composite shear web design concept

    NASA Technical Reports Server (NTRS)

    Laakso, J. H.

    1974-01-01

An advanced composite web concept was developed for potential application to the Space Shuttle Orbiter main engine thrust structure. The program consisted of design synthesis, analysis, detail design, element testing, and large scale component testing. A concept was sought that offered significant weight saving by the use of Boron/Epoxy (B/E) reinforced titanium plate structure. The desired concept was one that was practical and that utilized metal to efficiently improve structural reliability. The resulting development of a unique titanium-clad B/E shear web design concept is described. Three large-scale components, each a titanium-clad plus or minus 45 deg B/E web laminate stiffened with vertical B/E-reinforced aluminum stiffeners, were fabricated and tested to demonstrate the performance of the concept.

  19. Integral criteria for large-scale multiple fingerprint solutions

    NASA Astrophysics Data System (ADS)

    Ushmaev, Oleg S.; Novikov, Sergey O.

    2004-08-01

We propose and analyse an optimal integral similarity score criterion for large-scale multimodal civil ID systems. First, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated; the empirical statistics were taken from real biometric tests. We then analyse simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. Explicit and approximate relations for the optimal integral score, which provides the lowest FRR for a predefined FAR, have been obtained. The results of a real multiple-fingerprint test show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.
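The trade-off the abstract optimises (lowest FRR at a predefined FAR) can be sketched with synthetic score distributions: pick the threshold that yields the target FAR on impostor scores, then measure the resulting FRR on genuine scores. The Gaussian parameters below are invented stand-ins for real biometric score data.

```python
import random

random.seed(1)
# Synthetic similarity scores (assumed Gaussian; not real biometric data).
genuine = [random.gauss(70, 10) for _ in range(5000)]
impostor = [random.gauss(30, 10) for _ in range(5000)]

def frr_at_far(genuine, impostor, far_target):
    """Pick the score threshold giving the predefined FAR, return the FRR."""
    imp = sorted(impostor)
    # Threshold above which a fraction far_target of impostor scores fall.
    t = imp[int(len(imp) * (1.0 - far_target))]
    return sum(g < t for g in genuine) / len(genuine)

frr = frr_at_far(genuine, impostor, far_target=0.001)
print(f"FRR at FAR=0.1%: {frr:.4f}")
```

Tightening the FAR target pushes the threshold up and the FRR with it, which is the curve the paper's integral criterion optimises over combined tests.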

  20. Assessing the influence of rater and subject characteristics on measures of agreement for ordinal ratings.

    PubMed

    Nelson, Kerrie P; Mitani, Aya A; Edwards, Don

    2017-09-10

Widespread inconsistencies are commonly observed between physicians' ordinal classifications in screening test results such as mammography. These discrepancies have motivated large-scale agreement studies where many raters contribute ratings. The primary goal of these studies is to identify factors related to physicians and patients' test results, which may lead to stronger consistency between raters' classifications. While ordered categorical scales are frequently used to classify screening test results, very few statistical approaches exist to model agreement between multiple raters. Here we develop a flexible and comprehensive approach to assess the influence of rater and subject characteristics on agreement between multiple raters' ordinal classifications in large-scale agreement studies. Our approach is based upon the class of generalized linear mixed models. Novel summary model-based measures are proposed to assess agreement between all, or a subgroup of, raters, such as experienced physicians. Hypothesis tests are described to formally identify factors, such as physicians' level of experience, that play an important role in improving consistency of ratings between raters. We demonstrate how unique characteristics of individual raters can be assessed via conditional modes generated during the modeling process. Simulation studies are presented to demonstrate the performance of the proposed methods and summary measure of agreement. The methods are applied to a large-scale mammography agreement study to investigate the effects of rater and patient characteristics on the strength of agreement between radiologists. Copyright © 2017 John Wiley & Sons, Ltd.
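The latent-trait idea behind model-based agreement measures can be sketched with a toy Monte-Carlo version: each rater's ordinal rating is a noisy, thresholded reading of the subject's latent status, and agreement rises as rater noise falls. The cutpoints and variances below are invented; this is a stand-in illustration, not the paper's generalized linear mixed model.

```python
import random

random.seed(2)
# Toy latent-trait model: rater r classifies subject s by thresholding
# latent = subject_effect + rater_noise against fixed cutpoints.
CUTPOINTS = [-1.0, 0.0, 1.0]   # 4 ordinal categories
SUBJECT_SD = 1.0               # spread of true subject status

def category(x):
    return sum(x > c for c in CUTPOINTS)

def pairwise_agreement(rater_sd, n=20_000):
    """Monte-Carlo probability that two raters assign the same category."""
    hits = 0
    for _ in range(n):
        subj = random.gauss(0, SUBJECT_SD)
        a = category(subj + random.gauss(0, rater_sd))
        b = category(subj + random.gauss(0, rater_sd))
        hits += a == b
    return hits / n

agree_expert = pairwise_agreement(0.3)   # experienced raters: low noise
agree_novice = pairwise_agreement(1.0)   # inexperienced: high noise
print(f"expert agreement {agree_expert:.3f} vs novice {agree_novice:.3f}")
```

The gap between the two agreement probabilities is the kind of rater-characteristic effect the paper's hypothesis tests are designed to detect formally.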

  1. Centrifuge impact cratering experiments: Scaling laws for non-porous targets

    NASA Technical Reports Server (NTRS)

    Schmidt, Robert M.

    1987-01-01

A geotechnical centrifuge was used to investigate large body impacts onto planetary surfaces. At elevated gravity, it is possible to match various dimensionless similarity parameters which were shown to govern large scale impacts. Observations of crater growth and target flow fields have provided detailed and critical tests of a complete and unified scaling theory for impact cratering. Scaling estimates were determined for nonporous targets. Scaling estimates for large scale cratering in rock proposed previously by others have assumed that the crater radius is proportional to powers of the impactor energy and gravity, with no additional dependence on impact velocity. The size scaling laws determined from ongoing centrifuge experiments differ from earlier ones in three respects. First, a distinct dependence on impact velocity is recognized, even for constant impactor energy. Second, the present energy exponent for low porosity targets, like competent rock, is lower than earlier estimates. Third, the gravity exponent is recognized here as being related to both the energy and the velocity exponents.
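The velocity dependence at fixed energy can be illustrated with a point-source scaling law of the dimensionless form used in this literature. The constants K1 and BETA below are made-up illustrative values, not the fitted exponents from the centrifuge experiments.

```python
import math

# Illustrative point-source crater scaling in the gravity regime:
#   piR = K1 * pi2**(-BETA),  pi2 = g * a / U**2,  piR = R * (rho_t / m)**(1/3)
# K1 and BETA are invented for illustration, not fitted values.
K1, BETA = 0.8, 0.17

def crater_radius(a, U, g, rho_t, delta):
    """Crater radius R (m) for impactor radius a (m), speed U (m/s),
    gravity g (m/s^2), target density rho_t and impactor density delta
    (kg/m^3)."""
    pi2 = g * a / U**2
    piR = K1 * pi2 ** (-BETA)
    m = (4.0 / 3.0) * math.pi * a**3 * delta   # impactor mass
    return piR * (m / rho_t) ** (1.0 / 3.0)

# Two impactors with identical kinetic energy but different velocities:
# doubling the mass while halving U**2 keeps 0.5*m*U**2 fixed, yet the
# predicted radii differ, illustrating the velocity dependence.
R_fast = crater_radius(a=50.0, U=20_000.0, g=9.81, rho_t=2700.0, delta=3000.0)
R_slow = crater_radius(a=50.0 * 2.0 ** (1.0 / 3.0), U=20_000.0 / 2.0 ** 0.5,
                       g=9.81, rho_t=2700.0, delta=3000.0)
print(f"R_fast = {R_fast:.0f} m, R_slow = {R_slow:.0f} m")
```

An energy-only scaling law would return identical radii for the two cases; any difference comes entirely from the explicit velocity dependence.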

  2. Gravity versus radiation models: on the importance of scale and heterogeneity in commuting flows.

    PubMed

    Masucci, A Paolo; Serras, Joan; Johansson, Anders; Batty, Michael

    2013-08-01

We test the recently introduced radiation model against the gravity model for the system composed of England and Wales, both for commuting patterns and for public transportation flows. The analysis is performed both at macroscopic scales, i.e., at the national scale, and at microscopic scales, i.e., at the city level. It is shown that the thermodynamic limit assumption for the original radiation model significantly underestimates the commuting flows for large cities. We then generalize the radiation model, introducing the correct normalization factor for finite systems. We show that even if the gravity model has a better overall performance, the parameter-free radiation model gives competitive results, especially for large scales.
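A minimal sketch of the radiation model helps make the finite-system point concrete: in the standard form the flux from i to j depends only on the populations m_i, n_j and the intervening population s_ij, and the finite-size correction divides by (1 - m_i/M), which matters precisely when a city holds a large share of the total population M. All population numbers below are hypothetical.

```python
def radiation_flux(T_i, m_i, n_j, s_ij, M=None):
    """Commuters from i to j under the radiation model.
    T_i: trips leaving i; m_i, n_j: origin/destination populations;
    s_ij: population within radius d_ij of i, excluding i and j.
    If total population M is given, apply the finite-size normalisation
    1 / (1 - m_i / M) for finite systems."""
    flux = T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))
    if M is not None:
        flux /= (1.0 - m_i / M)
    return flux

# Hypothetical numbers: a large city (1M people) in a small country (5M).
raw = radiation_flux(T_i=300_000, m_i=1_000_000, n_j=200_000, s_ij=500_000)
corrected = radiation_flux(T_i=300_000, m_i=1_000_000, n_j=200_000,
                           s_ij=500_000, M=5_000_000)
print(f"thermodynamic-limit flux {raw:.0f}, finite-size flux {corrected:.0f}")
```

With m_i/M = 0.2 the corrected flux is 25% larger, the direction of the underestimation the abstract reports for large cities.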

  3. Low-speed wind-tunnel investigation of a large scale advanced arrow-wing supersonic transport configuration with engines mounted above wing for upper-surface blowing

    NASA Technical Reports Server (NTRS)

    Shivers, J. P.; Mclemore, H. C.; Coe, P. L., Jr.

    1976-01-01

    Tests have been conducted in a full scale tunnel to determine the low speed aerodynamic characteristics of a large scale advanced arrow wing supersonic transport configuration with engines mounted above the wing for upper surface blowing. Tests were made over an angle of attack range of -10 deg to 32 deg, sideslip angles of + or - 5 deg, and a Reynolds number range of 3,530,000 to 7,330,000. Configuration variables included trailing edge flap deflection, engine jet nozzle angle, engine thrust coefficient, engine out operation, and asymmetrical trailing edge boundary layer control for providing roll trim. Downwash measurements at the tail were obtained for different thrust coefficients, tail heights, and at two fuselage stations.

  4. Learned perceptual associations influence visuomotor programming under limited conditions: kinematic consistency.

    PubMed

    Haffenden, Angela M; Goodale, Melvyn A

    2002-12-01

    Previous findings have suggested that visuomotor programming can make use of learned size information in experimental paradigms where movement kinematics are quite consistent from trial to trial. The present experiment was designed to test whether or not this conclusion could be generalized to a different manipulation of kinematic variability. As in previous work, an association was established between the size and colour of square blocks (e.g. red = large; yellow = small, or vice versa). Associating size and colour in this fashion has been shown to reliably alter the perceived size of two test blocks halfway in size between the large and small blocks: estimations of the test block matched in colour to the group of large blocks are smaller than estimations of the test block matched to the group of small blocks. Subjects grasped the blocks, and on other trials estimated the size of the blocks. These changes in perceived block size were incorporated into grip scaling only when movement kinematics were highly consistent from trial to trial; that is, when the blocks were presented in the same location on each trial. When the blocks were presented in different locations grip scaling remained true to the metrics of the test blocks despite the changes in perceptual estimates of block size. These results support previous findings suggesting that kinematic consistency facilitates the incorporation of learned perceptual information into grip scaling.

  5. Comparison of WinSLAMM Modeled Results with Monitored Biofiltration Data

    EPA Science Inventory

    The US EPA’s Green Infrastructure Demonstration project in Kansas City incorporates both small scale individual biofiltration device monitoring, along with large scale watershed monitoring. The test watershed (100 acres) is saturated with green infrastructure components (includin...

  6. ELUCIDATING THE MECHANISMS BEHIND SUCCESSFUL INDICATORS OF BIODIVERSITY

    EPA Science Inventory

    Groups of species have been proposed as indicators of biodiversity for use in conservation planning. Different tests of indicator groups have produced divergent results varying with taxonomy, methodology, scale, and location. At large scales, successful indicator groups should b...

7. The NASA Glenn Research Center's Hypersonic Tunnel Facility. Chapter 16

    NASA Technical Reports Server (NTRS)

    Woike, Mark R.; Willis, Brian P.

    2001-01-01

    The NASA Glenn Research Center's Hypersonic Tunnel Facility (HTF) is a blow-down, freejet wind tunnel that provides true enthalpy flight conditions for Mach numbers of 5, 6, and 7. The Hypersonic Tunnel Facility is unique due to its large scale and use of non-vitiated (clean air) flow. A 3MW graphite core storage heater is used to heat the test medium of gaseous nitrogen to the high stagnation temperatures required to produce true enthalpy conditions. Gaseous oxygen is mixed into the heated test flow to generate the true air simulation. The freejet test section is 1.07m (42 in.) in diameter and 4.3m (14 ft) in length. The facility is well suited for the testing of large scale airbreathing propulsion systems. In this chapter, a brief history and detailed description of the facility are presented along with a discussion of the facility's application towards hypersonic airbreathing propulsion testing.

  8. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  9. Results of Long Term Life Tests of Large Scale Lithium-Ion Cells

    NASA Astrophysics Data System (ADS)

    Inoue, Takefumi; Imamura, Nobutaka; Miyanaga, Naozumi; Yoshida, Hiroaki; Komada, Kanemi

    2008-09-01

High-energy-density Li-ion cells have been introduced on recent satellites and in other space applications. We started development of large-scale Li-ion cells for space applications in 1997, and the chemical design was fixed in 1999. Confirming life performance is essential for satellite applications, which require long mission lives of about 15 years for GEO and 5 to 7 years for LEO. We therefore started life tests under various conditions, which have now reached 8 to 9 years of actual calendar time. Semi-accelerated GEO tests, which impose both calendar and cycle loss, have reached 42 seasons, corresponding to 21 years in orbit. The specific energy is in the range 120-130 Wh/kg at EOL. Based on these test results, we have confirmed that our Li-ion cell meets the general requirements for space applications such as GEO and LEO while retaining quite high specific energy.

  10. Aerodynamic force measurement on a large-scale model in a short duration test facility

    NASA Astrophysics Data System (ADS)

    Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.

    2005-03-01

A force measurement technique has been developed for large-scale aerodynamic models tested over short durations. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation; the difference was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
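The two-accelerometer idea can be illustrated with synthetic signals: both sensors see the same rigid-body deceleration, plus an opposite-phase natural-vibration component that cancels when the two signals are combined, leaving F = m·a. The mass, sampling rate, and amplitudes below are invented, not the HIEST values.

```python
import math

# Synthetic 1 ms test-time signal sampled at 1 MHz (assumed numbers only).
M_MODEL = 500.0   # model mass, kg
A_RIGID = 2.0     # true rigid-body deceleration from drag, m/s^2
dt, n = 1e-6, 1000
# Two accelerometers: same rigid-body motion, opposite-phase 3 kHz
# natural-vibration component of 5 m/s^2 amplitude.
a1 = [A_RIGID + 5.0 * math.sin(2 * math.pi * 3000 * k * dt) for k in range(n)]
a2 = [A_RIGID - 5.0 * math.sin(2 * math.pi * 3000 * k * dt) for k in range(n)]

a_cm = [(x + y) / 2.0 for x, y in zip(a1, a2)]   # vibration cancels
drag = M_MODEL * sum(a_cm) / n                   # F = m * a
print(f"recovered drag force: {drag:.1f} N (true {M_MODEL * A_RIGID:.1f} N)")
```

A single accelerometer averaged over a partial vibration cycle would carry a bias; the two-location combination removes the oscillatory mode before the force is computed.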

  11. Cosmological consistency tests of gravity theory and cosmic acceleration

    NASA Astrophysics Data System (ADS)

    Ishak-Boushaki, Mustapha B.

    2017-01-01

    Testing general relativity at cosmological scales and probing the cause of cosmic acceleration are among the important objectives targeted by incoming and future astronomical surveys and experiments. I present our recent results on consistency tests that can provide insights about the underlying gravity theory and cosmic acceleration using cosmological data sets. We use statistical measures, the rate of cosmic expansion, the growth rate of large scale structure, and the physical consistency of these probes with one another.

  12. On the influences of key modelling constants of large eddy simulations for large-scale compartment fires predictions

    NASA Astrophysics Data System (ADS)

    Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy

    2017-09-01

An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four fully coupled major components: subgrid-scale turbulence, combustion, soot, and radiation models. It is designed to simulate the temporal and fluid-dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m long test hall facility. Turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23, were investigated. The temperature and flow field predictions were most accurate with turbulent Prandtl and Schmidt numbers both set to 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising this set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
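The Smagorinsky constant varied in the study enters the subgrid model through the eddy-viscosity relation nu_t = (Cs·Δ)² |S|, so its effect can be evaluated directly. The filter width and strain-rate magnitude below are assumed values for illustration.

```python
# Smagorinsky subgrid-scale eddy viscosity: nu_t = (Cs * delta)^2 * |S|.
def eddy_viscosity(cs, delta, strain_rate_mag):
    return (cs * delta) ** 2 * strain_rate_mag

delta = 0.05    # filter width ~ local grid size, m (assumed)
strain = 40.0   # strain-rate magnitude |S|, 1/s (assumed)
for cs in (0.18, 0.20, 0.23):
    print(f"Cs={cs}: nu_t = {eddy_viscosity(cs, delta, strain):.2e} m^2/s")
```

Because nu_t scales with Cs², moving Cs from 0.18 to 0.23 raises the subgrid dissipation by about 63%, which is why the predicted temperature and flow fields are sensitive to this constant.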

  13. III. FROM SMALL TO BIG: METHODS FOR INCORPORATING LARGE SCALE DATA INTO DEVELOPMENTAL SCIENCE.

    PubMed

    Davis-Kean, Pamela E; Jager, Justin

    2017-06-01

For decades, developmental science has been based primarily on relatively small-scale data collections with children and families. Part of the reason for the dominance of this type of data collection is the complexity of collecting cognitive and social data on infants and small children. These small data sets are limited in both the power to detect differences and the demographic diversity needed to generalize clearly and broadly. Thus, in this chapter we discuss the value of using existing large-scale data sets to test the complex questions of child development, and how to develop future large-scale data sets that are both representative and able to answer the important questions of developmental scientists. © 2017 The Society for Research in Child Development, Inc.

  14. Report on phase 1 of the Microprocessor Seminar. [and associated large scale integration

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Proceedings of a seminar on microprocessors and associated large scale integrated (LSI) circuits are presented. The potential for commonality of device requirements, candidate processes and mechanisms for qualifying candidate LSI technologies for high reliability applications, and specifications for testing and testability were among the topics discussed. Various programs and tentative plans of the participating organizations in the development of high reliability LSI circuits are given.

  15. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,

    DTIC Science & Technology

    1985-10-07

    [Abstract not recoverable; the record consists of OCR residue from the scanned report cover page. Recoverable details: report by G. Agha et al., MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, on concurrent programming using actors to exploit large-scale parallelism.]

  16. Accommodations for Students Who Are Deaf or Hard of Hearing in Large-Scale, Standardized Assessments: Surveying the Landscape and Charting a New Direction

    ERIC Educational Resources Information Center

    Cawthon, Stephanie W.

    2009-01-01

    Students who are deaf or hard of hearing (SDHH) often use test accommodations when they participate in large-scale, standardized assessments. The purpose of this article is to present findings from the "Third Annual Survey of Assessment and Accommodations for Students who are Deaf or Hard of Hearing". The "big five" accommodations were reported by…

  17. A large-scale initiative to disseminate an evidence-based drug abuse prevention program in Italy: Lessons learned for practitioners and researchers.

    PubMed

    Velasco, Veronica; Griffin, Kenneth W; Antichi, Mariella; Celata, Corrado

    2015-10-01

    Across developed countries, experimentation with alcohol, tobacco, and other drugs often begins in the early adolescent years. Several evidence-based programs have been developed to prevent adolescent substance use. Many of the most rigorously tested and empirically supported prevention programs were initially developed and tested in the United States. Increasingly, these interventions are being adopted for use in Europe and throughout the world. This paper reports on a large-scale comprehensive initiative designed to select, adapt, implement, and sustain an evidence-based drug abuse prevention program in Italy. As part of a large-scale regionally funded collaboration in the Lombardy region of Italy, we report on processes through which a team of stakeholders selected, translated and culturally adapted, planned, implemented and evaluated the Life Skills Training (LST) school-based drug abuse prevention program, an evidence-based intervention developed in the United States. We discuss several challenges and lessons learned and implications for prevention practitioners and researchers attempting to undertake similar international dissemination projects. We review several published conceptual models designed to promote the replication and widespread dissemination of effective programs, and discuss their strengths and limitations in the context of planning and implementing a complex, large-scale real-world dissemination effort. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Politics, Economics, and Testing: Some Reflections

    ERIC Educational Resources Information Center

    Feuer, Michael J.

    2011-01-01

    In this keynote address, the author shares his reflections on politics, economics, and testing. He focuses on assessment and accountability and begins with some data from large scale written educational testing, "circa 1840". The author argues that people's penchant for accountability and their appetite for standardized testing are, in…

  19. New probes of Cosmic Microwave Background large-scale anomalies

    NASA Astrophysics Data System (ADS)

    Aiola, Simone

    Fifty years of Cosmic Microwave Background (CMB) data played a crucial role in constraining the parameters of the LambdaCDM model, where Dark Energy, Dark Matter, and Inflation are the three most important pillars not yet understood. Inflation prescribes an isotropic universe on large scales, and it generates spatially-correlated density fluctuations over the whole Hubble volume. CMB temperature fluctuations on scales bigger than a degree in the sky, affected by modes on super-horizon scale at the time of recombination, are a clean snapshot of the universe after inflation. In addition, the accelerated expansion of the universe, driven by Dark Energy, leaves a hardly detectable imprint in the large-scale temperature sky at late times. Such fundamental predictions have been tested with current CMB data and found to be in tension with what we expect from our simple LambdaCDM model. Is this tension just a random fluke or a fundamental issue with the present model? In this thesis, we present a new framework to probe the lack of large-scale correlations in the temperature sky using CMB polarization data. Our analysis shows that if a suppression in the CMB polarization correlations is detected, it will provide compelling evidence for new physics on super-horizon scale. To further analyze the statistical properties of the CMB temperature sky, we constrain the degree of statistical anisotropy of the CMB in the context of the observed large-scale dipole power asymmetry. We find evidence for a scale-dependent dipolar modulation at 2.5sigma. To isolate late-time signals from the primordial ones, we test the anomalously high Integrated Sachs-Wolfe effect signal generated by superstructures in the universe. We find that the detected signal is in tension with the expectations from LambdaCDM at the 2.5sigma level, which is somewhat smaller than what has been previously argued. 
To conclude, we describe the current status of CMB observations on small scales, highlighting the tensions between Planck, WMAP, and SPT temperature data and how the upcoming data release of the ACTpol experiment will contribute to this matter. We provide a description of the current status of the data-analysis pipeline and discuss its ability to recover large-scale modes.

  20. Psychometrics behind Computerized Adaptive Testing.

    PubMed

    Chang, Hua-Hua

    2015-03-01

    The paper provides a survey of 18 years of progress that my colleagues, students (both former and current) and I have made in a prominent research area in psychometrics: Computerized Adaptive Testing (CAT). We start with a historical review of the establishment of a large-sample foundation for CAT. It is worth noting that the asymptotic results were derived under the framework of martingale theory, a very theoretical branch of probability theory, which may seem unrelated to educational and psychological testing. In addition, we address a number of issues that emerged from large-scale implementation and show how theoretical work can help solve these problems. Finally, we propose that CAT technology can be very useful in supporting individualized instruction on a mass scale. We show that even paper-and-pencil tests can be made adaptive to support classroom teaching.
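The item-selection step at the heart of CAT can be illustrated with a minimal sketch (not the authors' method): under a 2PL item response model, a common strategy is to administer the unasked item with maximum Fisher information at the current ability estimate. All item parameters below are hypothetical.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta, item_bank, administered):
    """Pick the unadministered item with maximum information at theta."""
    best, best_info = None, -1.0
    for idx, (a, b) in enumerate(item_bank):
        if idx in administered:
            continue
        info = item_information(theta, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best

# Hypothetical bank of (discrimination a, difficulty b) pairs:
bank = [(1.0, -1.0), (1.2, 0.0), (0.8, 1.0)]
next_item = select_next_item(0.1, bank, administered={1})
```

In a full CAT loop, the ability estimate would be updated after each response and the selection repeated until a stopping rule is met.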

  1. Large-Scale Advanced Prop-Fan (LAP)

    NASA Technical Reports Server (NTRS)

    Degeorge, C. L.

    1988-01-01

    In recent years, considerable attention has been directed toward improving aircraft fuel efficiency. Analytical studies and research with wind tunnel models have demonstrated that the high inherent efficiency of low speed turboprop propulsion systems may now be extended to the Mach .8 flight regime of today's commercial airliners. This can be accomplished with a propeller employing a large number of thin, highly swept blades. The term Prop-Fan has been coined to describe such a propulsion system. In 1983 the NASA-Lewis Research Center contracted with Hamilton Standard to design, build and test a near full-scale Prop-Fan, designated the Large Scale Advanced Prop-Fan (LAP). This report provides a detailed description of the LAP program. The assumptions and analytical procedures used in the design of Prop-Fan system components are discussed in detail. The manufacturing techniques used in the fabrication of the Prop-Fan are presented. Each of the tests run during the course of the program is also discussed, and the major conclusions derived from them are stated.

  2. A low-cost iron-cadmium redox flow battery for large-scale energy storage

    NASA Astrophysics Data System (ADS)

    Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Jiang, H. R.

    2016-10-01

    The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies that offer a potential solution to the intermittency of renewable sources such as wind and solar. The prerequisite for widespread utilization of RFBs is low capital cost. In this work, an iron-cadmium redox flow battery (Fe/Cd RFB) with a premixed iron and cadmium solution is developed and tested. It is demonstrated that the coulombic efficiency and energy efficiency of the Fe/Cd RFB reach 98.7% and 80.2% at 120 mA cm⁻², respectively. The Fe/Cd RFB exhibits stable efficiencies, with capacity retention of 99.87% per cycle during the cycle test. Moreover, the Fe/Cd RFB is estimated to have a low capital cost of 108 kWh⁻¹ for 8-h energy storage. Intrinsically low-cost active materials, high cell performance and excellent capacity retention make the Fe/Cd RFB a promising solution for large-scale energy storage systems.
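The two reported figures also imply a voltage efficiency. A minimal sketch of how the standard cycle efficiencies relate (the 98.7% and 80.2% values are taken from the abstract; the identity EE = CE × VE is textbook battery practice, not specific to this paper):

```python
def cycle_efficiencies(q_charge, q_discharge, e_charge, e_discharge):
    """Return (coulombic, energy, voltage) efficiencies as fractions.
    q_*: charge passed on charge/discharge; e_*: energy in/out."""
    ce = q_discharge / q_charge   # coulombic efficiency
    ee = e_discharge / e_charge   # energy efficiency
    ve = ee / ce                  # voltage efficiency, since EE = CE * VE
    return ce, ee, ve

# Normalized inputs chosen to reproduce the reported CE and EE:
ce, ee, ve = cycle_efficiencies(q_charge=1.00, q_discharge=0.987,
                                e_charge=1.00, e_discharge=0.802)
```

With the reported values, the implied voltage efficiency is about 81%.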

  3. Production of primary mirror segments for the Giant Magellan Telescope

    NASA Astrophysics Data System (ADS)

    Martin, H. M.; Allen, R. G.; Burge, J. H.; Davis, J. M.; Davison, W. B.; Johns, M.; Kim, D. W.; Kingsley, J. S.; Law, K.; Lutz, R. D.; Strittmatter, P. A.; Su, P.; Tuell, M. T.; West, S. C.; Zhou, P.

    2014-07-01

    Segment production for the Giant Magellan Telescope is well underway, with the off-axis Segment 1 completed, off-axis Segments 2 and 3 already cast, and mold construction in progress for the casting of Segment 4, the center segment. All equipment and techniques required for segment fabrication and testing have been demonstrated in the manufacture of Segment 1. The equipment includes a 28 m test tower that incorporates four independent measurements of the segment's figure and geometry. The interferometric test uses a large asymmetric null corrector with three elements including a 3.75 m spherical mirror and a computer-generated hologram. For independent verification of the large-scale segment shape, we use a scanning pentaprism test that exploits the natural geometry of the telescope to focus collimated light to a point. The Software Configurable Optical Test System, loosely based on the Hartmann test, measures slope errors to submicroradian accuracy at high resolution over the full aperture. An enhanced laser tracker system guides the figuring through grinding and initial polishing. All measurements agree within the expected uncertainties, including three independent measurements of radius of curvature that agree within 0.3 mm. Segment 1 was polished using a 1.2 m stressed lap for smoothing and large-scale figuring, and a set of smaller passive rigid-conformal laps on an orbital polisher for deterministic small-scale figuring. For the remaining segments, the Mirror Lab is building a smaller, orbital stressed lap to combine the smoothing capability with deterministic figuring.

  4. Performing a Large-Scale Modal Test on the B2 Stand Crane at NASA's Stennis Space Center

    NASA Technical Reports Server (NTRS)

    Stasiunas, Eric C.; Parks, Russel A.

    2018-01-01

    A modal test of NASA’s Space Launch System (SLS) Core Stage is scheduled to occur prior to propulsion system verification testing at the Stennis Space Center B2 test stand. A derrick crane with a 180-ft long boom, located at the top of the stand, will be used to suspend the Core Stage in order to achieve defined boundary conditions. During this suspended modal test, it is expected that dynamic coupling will occur between the crane and the Core Stage. Therefore, a separate modal test was performed on the B2 crane itself, in order to evaluate the varying dynamic characteristics and correlate math models of the crane. Performing a modal test on such a massive structure was challenging and required creative test setup and procedures, including implementing both AC and DC accelerometers, and performing both classical hammer and operational modal analysis. This paper describes the logistics required to perform this large-scale test, as well as details of the test setup, the modal test methods used, and an overview of the results.

  5. Feasibility study of a large-scale tuned mass damper with eddy current damping mechanism

    NASA Astrophysics Data System (ADS)

    Wang, Zhihao; Chen, Zhengqing; Wang, Jianhui

    2012-09-01

    Tuned mass dampers (TMDs) have been widely used in recent years to mitigate structural vibration. However, the damping mechanisms employed in TMDs are mostly based on viscous dampers, which have several well-known disadvantages, such as oil leakage and difficult adjustment of the damping ratio of an operating TMD. Alternatively, eddy current damping (ECD), which does not require any contact with the main structure, is a potential solution. This paper discusses the design, analysis, manufacture and testing of a large-scale horizontal TMD based on ECD. First, the theoretical model of ECD is formulated; then one large-scale horizontal TMD using ECD is constructed; and finally performance tests of the TMD are conducted. The test results show that the proposed TMD has a very low intrinsic damping ratio, while the damping ratio due to ECD is the dominant damping source, which can be as large as 15% in a proper configuration. In addition, the damping ratios estimated with the theoretical model are roughly consistent with those identified from the test results, and the source of the discrepancy is investigated. Moreover, it is demonstrated that the damping ratio of the proposed TMD can be easily adjusted by varying the air gap between the permanent magnets and the conductive plates. In view of practical applications, possible improvements and feasibility considerations for the proposed TMD are then discussed. It is confirmed that the proposed TMD with ECD is reliable and feasible for use in structural vibration control.

  6. Utilizing the ultrasensitive Schistosoma up-converting phosphor lateral flow circulating anodic antigen (UCP-LF CAA) assay for sample pooling strategies.

    PubMed

    Corstjens, Paul L A M; Hoekstra, Pytsje T; de Dood, Claudia J; van Dam, Govert J

    2017-11-01

    Methodological applications of the high-sensitivity genus-specific Schistosoma CAA strip test, allowing detection of single-worm active infections (ultimate sensitivity), are discussed for efficient utilization in sample pooling strategies. Besides relevant cost reduction, pooling of samples rather than individual testing can provide valuable data for large-scale mapping, surveillance, and monitoring. The laboratory-based CAA strip test utilizes luminescent quantitative up-converting phosphor (UCP) reporter particles and a rapid, user-friendly lateral flow (LF) assay format. The test includes a sample preparation step that permits virtually unlimited sample concentration with urine, reaching ultimate sensitivity (single-worm detection) at 100% specificity. This facilitates testing large urine pools from many individuals with minimal loss of sensitivity and specificity. The test determines the average CAA level of the individuals in the pool, thus indicating overall worm burden and prevalence. When test results are required at the individual level, smaller pools need to be analysed, with the pool size based on the expected prevalence or, when unknown, on the average CAA level of a larger group; CAA-negative pools do not require individual test results and thus reduce the number of tests. Straightforward pooling strategies indicate that at the sub-population level the CAA strip test is an efficient assay for general mapping, identification of hotspots, determination of stratified infection levels, and accurate monitoring of mass drug administrations (MDA). At the individual level, the number of tests can be reduced, e.g. in low-endemicity settings, as the pool size can be increased as prevalence decreases. At the sub-population level, average CAA concentrations determined in urine pools can be an appropriate measure of worm burden. Pooling strategies allowing this type of large-scale testing are feasible with the various CAA strip test formats and do not affect sensitivity and specificity. They allow cost-efficient stratified testing and monitoring of worm burden at the sub-population level, ideally for large-scale surveillance generating hard data on the performance of MDA programs and for strategic planning when moving towards transmission-stop and elimination.
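The key economy described here (negative pools need no individual follow-up) is the logic of classic two-stage Dorfman pooling. A sketch of the expected test count under the simplifying assumptions of independent infections and a perfectly sensitive pooled test:

```python
def expected_tests_per_person(prevalence, pool_size):
    """Expected number of tests per individual under two-stage pooling:
    one pooled test shared by the pool, plus individual retests only
    when the pool comes back positive."""
    p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
    return 1.0 / pool_size + p_pool_positive

# At 1% prevalence, pools of 10 need roughly a fifth of the tests
# that individual testing would require:
cost = expected_tests_per_person(0.01, 10)
```

This also shows why the abstract recommends larger pools as prevalence decreases: the chance of a positive pool, and hence of follow-up testing, falls with prevalence.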

  7. Cosmic homogeneity: a spectroscopic and model-independent measurement

    NASA Astrophysics Data System (ADS)

    Gonçalves, R. S.; Carvalho, G. C.; Bengaly, C. A. P., Jr.; Carvalho, J. C.; Bernui, A.; Alcaniz, J. S.; Maartens, R.

    2018-03-01

    Cosmology relies on the Cosmological Principle, i.e. the hypothesis that the Universe is homogeneous and isotropic on large scales. This implies in particular that the counts of galaxies should approach a homogeneous scaling with volume at sufficiently large scales. Testing homogeneity is crucial to obtain a correct interpretation of the physical assumptions underlying the current cosmic acceleration and structure formation of the Universe. In this letter, we use the Baryon Oscillation Spectroscopic Survey to make the first spectroscopic and model-independent measurements of the angular homogeneity scale θh. Applying four statistical estimators, we show that the angular distribution of galaxies in the range 0.46 < z < 0.62 is consistent with homogeneity at large scales, and that θh varies with redshift, indicating a smoother Universe in the past. These results are in agreement with the foundations of the standard cosmological paradigm.

  8. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
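The rollback scheme described, reverse computation combined with a small amount of incremental state saving, can be illustrated with a toy example (not the paper's actual model): an invertible update is simply run backwards, while a destructive update (here a running maximum) saves only the value it overwrites.

```python
class Cell:
    """Toy reversible event processing for a hypothetical infection count."""
    def __init__(self):
        self.infected = 0
        self.peak = 0
        self._saved_peaks = []   # incremental state saving for the max()

    def forward(self, new_infections):
        self.infected += new_infections          # invertible: undo by subtracting
        self._saved_peaks.append(self.peak)      # save only what max() destroys
        self.peak = max(self.peak, self.infected)

    def reverse(self, new_infections):
        self.peak = self._saved_peaks.pop()      # restore the saved value
        self.infected -= new_infections          # reverse computation, no saved copy

c = Cell()
c.forward(5); c.forward(3)    # optimistic execution
c.reverse(3); c.reverse(5)    # rollback restores the initial state
```

The payoff in an optimistic simulator is that most state is recovered arithmetically, so checkpoint memory is needed only for the few non-invertible operations.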

  9. Aerodynamic Design of a Dual-Flow Mach 7 Hypersonic Inlet System for a Turbine-Based Combined-Cycle Hypersonic Propulsion System

    NASA Technical Reports Server (NTRS)

    Sanders, Bobby W.; Weir, Lois J.

    2008-01-01

    A new hypersonic inlet for a turbine-based combined-cycle (TBCC) engine has been designed. This split-flow inlet is designed to provide flow to an over-under propulsion system with turbofan and dual-mode scramjet engines for flight from takeoff to Mach 7. It utilizes a variable-geometry ramp, high-speed cowl lip rotation, and a rotating low-speed cowl that serves as a splitter to divide the flow between the low-speed turbofan and the high-speed scramjet and to isolate the turbofan at high Mach numbers. The low-speed inlet was designed for Mach 4, the maximum mode transition Mach number. Integration of the Mach 4 inlet into the Mach 7 inlet imposed significant constraints on the low-speed inlet design, including a large amount of internal compression. The inlet design was used to develop mechanical designs for two inlet mode transition test models: small-scale (IMX) and large-scale (LIMX) research models. The large-scale model is designed to facilitate multi-phase testing including inlet mode transition and inlet performance assessment, controls development, and integrated systems testing with turbofan and scramjet engines.

  10. Large-Scale Advanced Prop-Fan (LAP) pitch change actuator and control design report

    NASA Technical Reports Server (NTRS)

    Schwartz, R. A.; Carvalho, P.; Cutler, M. J.

    1986-01-01

    In recent years, considerable attention has been directed toward reducing aircraft fuel consumption. Studies have shown that the high inherent efficiency previously demonstrated by low speed turboprop propulsion systems may now be extended to today's higher speed aircraft if advanced high-speed propeller blades having thin airfoils and aerodynamic sweep are utilized. Hamilton Standard has designed a 9-foot diameter single-rotation Large-Scale Advanced Prop-Fan (LAP) which will be tested on a static test stand, in a high speed wind tunnel and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction, in addition to evaluating the aerodynamic performance and aeroacoustic design. This report describes the operation, design features and actual hardware of the LAP pitch control system. The pitch control system, which controls blade angle and propeller speed, consists of two separate assemblies. The first is the control unit, which provides the hydraulic supply, speed governing and feather function for the system. The second is the hydro-mechanical pitch change actuator, which directly changes blade angle (pitch) as scheduled by the control.

  11. Cross-borehole flowmeter tests for transient heads in heterogeneous aquifers.

    PubMed

    Le Borgne, Tanguy; Paillet, Frederick; Bour, Olivier; Caudal, Jean-Pierre

    2006-01-01

    Cross-borehole flowmeter tests have been proposed as an efficient method to investigate preferential flowpaths in heterogeneous aquifers, which is a major task in the characterization of fractured aquifers. Cross-borehole flowmeter tests are based on the idea that changing the pumping conditions in a given aquifer will modify the hydraulic head distribution in large-scale flowpaths, producing measurable changes in the vertical flow profiles in observation boreholes. However, inversion of flow measurements to derive flowpath geometry and connectivity and to characterize their hydraulic properties is still a subject of research. In this study, we propose a framework for cross-borehole flowmeter test interpretation that is based on a two-scale conceptual model: discrete fractures at the borehole scale and zones of interconnected fractures at the aquifer scale. We propose that the two problems may be solved independently. The first inverse problem consists of estimating the hydraulic head variations that drive the transient borehole flow observed in the cross-borehole flowmeter experiments. The second inverse problem is related to estimating the geometry and hydraulic properties of large-scale flowpaths in the region between pumping and observation wells that are compatible with the head variations deduced from the first problem. To solve the borehole-scale problem, we treat the transient flow data as a series of quasi-steady flow conditions and solve for the hydraulic head changes in individual fractures required to produce these data. The consistency of the method is verified using field experiments performed in a fractured-rock aquifer.

  12. Transfer of movement sequences: bigger is better.

    PubMed

    Dean, Noah J; Kovacs, Attila J; Shea, Charles H

    2008-02-01

    Experiment 1 was conducted to determine if proportional transfer from "small to large" scale movements is as effective as transfer from "large to small." We hypothesize that learning a larger scale movement requires the participant to learn to manage the generation, storage, and dissipation of forces better than practicing smaller scale movements does. Thus, we predict an advantage for transfer of larger scale movements to smaller scale movements relative to transfer from smaller to larger scale movements. Experiment 2 was conducted to determine if adding a load to a smaller scale movement would enhance later transfer to a larger scale movement sequence. It was hypothesized that the added load would require the participants to consider the dynamics of the movement to a greater extent than without the load. The results replicated earlier findings of effective transfer from large to small movements, but consistent with our hypothesis, transfer was less effective from small to large (Experiment 1). However, when a load was added during acquisition, transfer from small to large was enhanced even though the load was removed during the transfer test. These results are consistent with the notion that the transfer asymmetry noted in Experiment 1 was due to factors related to movement dynamics that were enhanced during practice of the larger scale movement sequence, but not during practice of the smaller scale movement sequence. The finding that movement structure is unaffected by transfer direction while movement dynamics are influenced by it is consistent with hierarchical models of sequence production.

  13. An Introduction to the Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Tian, Jian-quan; Miao, Dan-min; Zhu, Xia; Gong, Jing-jing

    2007-01-01

    Computerized adaptive testing (CAT) has unsurpassable advantages over traditional testing. It has become mainstream in large-scale examinations in modern society. This paper gives a brief introduction to CAT, including differences between traditional testing and CAT, the principles of CAT, psychometric theory and computer algorithms of CAT, the…

  14. Students' Test Motivation in PISA: The Case of Norway

    ERIC Educational Resources Information Center

    Hopfenbeck, Therese N.; Kjaernsli, Marit

    2016-01-01

    Do students make their best effort in large-scale assessment studies such as the "Programme for International Student Assessment" (PISA)? Despite six cycles of PISA surveys from 2000 to 2015, empirical studies regarding students' test motivation and experience of the tests are sparse. The present study examines students' test motivation…

  15. Test Information Targeting Strategies for Adaptive Multistage Testing Designs.

    ERIC Educational Resources Information Center

    Luecht, Richard M.; Burgin, William

    Adaptive multistage testlet (MST) designs appear to be gaining popularity for many large-scale computer-based testing programs. These adaptive MST designs use a modularized configuration of preconstructed testlets and embedded score-routing schemes to prepackage different forms of an adaptive test. The conditional information targeting (CIT)…

  16. Multi-Objective Parallel Test-Sheet Composition Using Enhanced Particle Swarm Optimization

    ERIC Educational Resources Information Center

    Ho, Tsu-Feng; Yin, Peng-Yeng; Hwang, Gwo-Jen; Shyu, Shyong Jian; Yean, Ya-Nan

    2009-01-01

    For large-scale tests, such as certification tests or entrance examinations, the composed test sheets must meet multiple assessment criteria. Furthermore, to fairly compare the knowledge levels of the persons who receive tests at different times owing to the insufficiency of available examination halls or the occurrence of certain unexpected…

  17. Performing a Large-Scale Modal Test on the B2 Stand Crane at NASA's Stennis Space Center

    NASA Technical Reports Server (NTRS)

    Stasiunas, Eric C.; Parks, Russel A.; Sontag, Brendan D.

    2018-01-01

    A modal test of NASA's Space Launch System (SLS) Core Stage is scheduled to occur at the Stennis Space Center B2 test stand. A derrick crane with a 150-ft long boom, located at the top of the stand, will be used to suspend the Core Stage in order to achieve defined boundary conditions. During this suspended modal test, it is expected that dynamic coupling will occur between the crane and the Core Stage. Therefore, a separate modal test was performed on the B2 crane itself, in order to evaluate the varying dynamic characteristics and correlate math models of the crane. Performing a modal test on such a massive structure was challenging and required creative test setup and procedures, including implementing both AC and DC accelerometers, and performing both classical hammer and operational modal analysis. This paper describes the logistics required to perform this large-scale test, as well as details of the test setup, the modal test methods used, and an overview and application of the results.

  18. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP [Investigating the scale dependence of SCM simulated precipitation and cloud by using gridded forcing data at SGP

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-05

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors in these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. A gridded large-scale forcing data set from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  19. Predicting the propagation of concentration and saturation fronts in fixed-bed filters.

    PubMed

    Callery, O; Healy, M G

    2017-10-15

    The phenomenon of adsorption is widely exploited across a range of industries to remove contaminants from gases and liquids. Much recent research has focused on identifying low-cost adsorbents with the potential to be used as alternatives to expensive industry standards like activated carbons. Evaluating these emerging adsorbents entails a considerable amount of labor-intensive and costly testing and analysis. This study proposes a simple, low-cost method to rapidly assess the suitability of novel media for use in large-scale adsorption filters. The filter media investigated in this study were low-cost adsorbents capable of removing dissolved phosphorus from solution, namely: i) aluminum drinking water treatment residual, and ii) crushed concrete. Data collected from multiple small-scale column tests were used to construct a model capable of describing and predicting the progression of adsorbent saturation and the associated effluent concentration breakthrough curves. This model was used to predict the performance of long-term, large-scale filter columns packed with the same media. The approach proved highly successful, and just 24-36 h of experimental data from the small-scale column experiments provided sufficient information to predict the performance of the large-scale filters for up to three months. Copyright © 2017 Elsevier Ltd. All rights reserved.
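Breakthrough curves of this kind are commonly described with logistic-form expressions such as the Thomas model; a sketch under that assumption (the paper fits its own model to small-scale column data, and all parameter values here are hypothetical):

```python
import math

def thomas_breakthrough(t, k_th, q0, m, c0, flow):
    """Effluent-to-influent concentration ratio C/C0 at time t under the
    Thomas model: a logistic curve rising from ~0 toward 1 as the bed
    saturates. k_th: rate constant, q0: adsorption capacity per unit mass,
    m: media mass, c0: influent concentration, flow: volumetric flow rate."""
    return 1.0 / (1.0 + math.exp(k_th * (q0 * m / flow - c0 * t)))

# Hypothetical parameters, for illustration only:
early = thomas_breakthrough(t=0.0,  k_th=0.01, q0=5.0, m=100.0, c0=10.0, flow=2.0)
late  = thomas_breakthrough(t=50.0, k_th=0.01, q0=5.0, m=100.0, c0=10.0, flow=2.0)
```

Fitting k_th and q0 to short small-scale runs and then re-evaluating the same expression with a large column's mass and flow rate is the general shape of the scale-up prediction described above.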

  20. Very-high-Reynolds-number vortex dynamics via Coherent-vorticity-Preserving (CvP) Large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Chapelier, Jean-Baptiste; Wasistho, Bono; Scalo, Carlo

    2017-11-01

    A new approach to Large-Eddy Simulation (LES) is introduced, where subgrid-scale (SGS) dissipation is applied proportionally to the degree of local spectral broadening, hence mitigated in regions dominated by large-scale vortical motion. The proposed CvP-LES methodology is based on the evaluation of the ratio of the test-filtered to resolved (or grid-filtered) enstrophy, σ = ξ̂/ξ. Values of σ = 1 indicate low sub-test-filter turbulent activity, justifying local deactivation of any subgrid-scale model. Values of σ < 1 span conditions ranging from incipient spectral broadening, σ ≲ 1, to equilibrium turbulence, σ = σ_eq < 1, where σ_eq is solely a function of the test-to-grid filter-width ratio Δ̂/Δ, derived assuming a Kolmogorov spectrum. Eddy viscosity is fully restored for σ ≤ σ_eq. The proposed approach removes unnecessary SGS dissipation, can be applied to any eddy-viscosity model, and is algorithmically simple and computationally inexpensive. A CvP-LES of a pair of unstable helical vortices, representative of rotor-blade wake dynamics, shows the ability of the method to sort the coherent motion from the small-scale dynamics. This work is funded by subcontract KSC-17-001 between Purdue University and Kord Technologies, Inc (Huntsville), under the US Navy Contract N68335-17-C-0159 STTR-Phase II, Purdue Proposal No. 00065007, Topic N15A-T002.
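The sensor logic can be sketched as follows; the linear blend between the two limits is an assumption made for illustration, not the paper's actual scaling law:

```python
def cvp_scaling(enstrophy_test, enstrophy_grid, sigma_eq):
    """Scaling factor applied to the eddy viscosity at a grid point.
    sigma = test-filtered / grid-filtered enstrophy: the SGS model is
    deactivated as sigma -> 1 (no sub-test-filter activity) and fully
    restored for sigma <= sigma_eq (equilibrium turbulence)."""
    sigma = enstrophy_test / enstrophy_grid
    if sigma >= 1.0:
        return 0.0                            # coherent, large-scale motion
    if sigma <= sigma_eq:
        return 1.0                            # full eddy-viscosity model
    return (1.0 - sigma) / (1.0 - sigma_eq)   # linear blend (an assumption)
```

Because the factor multiplies whatever eddy viscosity the base model produces, the same sensor can wrap any eddy-viscosity SGS model, as the abstract notes.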

  1. Drought forecasting in Luanhe River basin involving climatic indices

    NASA Astrophysics Data System (ADS)

    Ren, Weinan; Wang, Yixuan; Li, Jianzhu; Feng, Ping; Smith, Ronald J.

    2017-11-01

    Drought is regarded as one of the most severe natural disasters globally. This is especially the case in Tianjin City, Northern China, where drought can affect economic development and people's livelihoods. Drought forecasting, the basis of drought management, is an important mitigation strategy. In this paper, we develop a probabilistic forecasting model that forecasts the transition probability from a current Standardized Precipitation Index (SPI) value to a future SPI class. The model is based on the conditional distribution of the multivariate normal distribution, which allows two large-scale climatic indices to be incorporated simultaneously, and is applied to 26 rain gauges in the Luanhe River basin in North China. Both the model and the derivation of the SPI rest on the hypothesis that aggregated monthly precipitation is normally distributed. Pearson correlation and Shapiro-Wilk normality tests are used to select an appropriate SPI time scale and the large-scale climatic indices. We find that longer-term aggregated monthly precipitation is, in general, more likely to be normally distributed, and that forecasting models should be fitted to each gauge individually rather than to the basin as a whole. Taking Liying Gauge as an example, we illustrate the effect of the SPI time scale and lead time on the transition probabilities. The controlling climatic indices for each gauge are then selected with the Pearson correlation test, and the multivariate normality of the current month's SPI and climatic indices together with the SPI 1, 2, and 3 months ahead is verified with the Shapiro-Wilk test. We then illustrate the influence of large-scale oceanic-atmospheric circulation patterns on the transition probabilities. Finally, a score method is used to evaluate the three proposed forecasting models and to compare them with two traditional models that forecast transition probabilities from a current to a future SPI class. The results show that the three proposed models outperform the traditional ones and that incorporating large-scale climatic indices improves forecasting accuracy.
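The transition-probability mechanism described above can be sketched with a bivariate normal conditional distribution. This is an illustrative sketch, not the authors' code: the correlation value, the single-predictor form (the paper also conditions on climatic indices), and the SPI class boundaries are assumptions.

```python
import math

def norm_cdf(x, mu=0.0, sd=1.0):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def transition_probs(x_now, rho, class_edges):
    """P(future SPI class | current SPI value x_now), assuming
    (SPI_now, SPI_future) are bivariate standard normal with
    correlation rho.  Conditionally, SPI_future is normal with
    mean rho * x_now and std sqrt(1 - rho^2)."""
    mu = rho * x_now
    sd = math.sqrt(1.0 - rho ** 2)
    edges = [-math.inf] + list(class_edges) + [math.inf]
    cdf = [norm_cdf(e, mu, sd) for e in edges]
    return [hi - lo for lo, hi in zip(cdf, cdf[1:])]

# Illustrative SPI class boundaries (extreme/severe/moderate drought,
# near normal, moderately/severely/extremely wet).
edges = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
probs = transition_probs(x_now=-1.2, rho=0.6, class_edges=edges)
```

With a dry current state (negative x_now) and positive persistence (rho > 0), more probability mass falls in the drought classes; conditioning on additional predictors, as in the paper, replaces this bivariate conditional with its multivariate analogue.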

  2. Recent Regional Climate State and Change - Derived through Downscaling Homogeneous Large-scale Components of Re-analyses

    NASA Astrophysics Data System (ADS)

    Von Storch, H.; Klehmet, K.; Geyer, B.; Li, D.; Schubert-Frisius, M.; Tim, N.; Zorita, E.

    2015-12-01

    Global re-analyses suffer from inhomogeneities because they assimilate data from observing networks that changed over time. The large-scale component of such re-analyses, however, is mostly homogeneous; additional observational data mostly improve the description of regional detail and have less effect on the large-scale state. The concept of downscaling can therefore be applied to complement the large-scale state of the re-analyses homogeneously with regional detail, wherever the condition of large-scale homogeneity is fulfilled. Technically, this can be done with a regional climate model, or with a global climate model constrained at large scales by spectral nudging. The approach has been developed and tested for Europe, where it yields a skillful representation of regional risks, in particular marine risks. Although data density in Europe is considerably better than in most other regions of the world, even there insufficient spatial and temporal coverage limits risk assessments; downscaled data sets are therefore frequently used by offshore industries. We have also run this system in regions with reduced or absent data coverage, such as the Lena catchment in Siberia, the Yellow Sea/Bo Hai region in East Asia, and Namibia with the adjacent Atlantic Ocean, and a global (large-scale constrained) simulation has been completed as well. It turns out that a spatially detailed reconstruction of the state and change of climate over the past three to six decades is feasible for any region of the world. The different data sets are archived and may be used freely for scientific purposes. Of course, a careful analysis of their quality for the intended application is needed beforehand, as unexpected changes in the quality of the description of the large-scale driving states sometimes prevail.
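Spectral nudging, mentioned above, constrains only the large-scale part of the simulated field. A minimal sketch of the idea, assuming a doubly periodic 2-D field (real implementations nudge selected variables, levels, and wavenumbers during time stepping):

```python
import numpy as np

def spectral_nudge(model, driving, k_max, alpha):
    """Relax the large-scale (low-wavenumber) Fourier components of a
    2-D model field toward a driving (re-analysis) field, leaving the
    small scales free.  alpha in [0, 1] is the nudging strength
    (1 = full replacement of the large scales)."""
    Fm = np.fft.fft2(model)
    Fd = np.fft.fft2(driving)
    ky = np.fft.fftfreq(model.shape[0]) * model.shape[0]
    kx = np.fft.fftfreq(model.shape[1]) * model.shape[1]
    K = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
    mask = K <= k_max                     # large scales only
    Fm[mask] = (1 - alpha) * Fm[mask] + alpha * Fd[mask]
    return np.real(np.fft.ifft2(Fm))
```

The regional detail (wavenumbers above k_max) is generated freely by the model, while the driving state imposes the homogeneous large scales.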

  3. The MMPI-2 Symptom Validity Scale (FBS) Not Influenced by Medical Impairment: A Large Sleep Center Investigation

    ERIC Educational Resources Information Center

    Greiffenstein, Manfred F.

    2010-01-01

    The Symptom Validity Scale (Minnesota Multiphasic Personality Inventory-2 FBS [MMPI-2-FBS]) is a standard MMPI-2 validity scale measuring overstatement of somatic distress and subjective disability. Some critics assert that the MMPI-2-FBS misclassifies too many medically impaired persons as malingerers. This study tests the assertion of…

  4. High-Performance Computing Unlocks Innovation at NREL - Video Text Version

    Science.gov Websites

    Data visualizations and large-scale modeling provide insights and test new ideas. NREL and Hewlett-Packard won an R&D 100 award for the most energy-efficient data center in the world.

  5. Results of Large-Scale Spacecraft Flammability Tests

    NASA Technical Reports Server (NTRS)

    Ferkul, Paul; Olson, Sandra; Urban, David L.; Ruff, Gary A.; Easton, John; T'ien, James S.; Liao, Ta-Ting T.; Fernandez-Pello, A. Carlos; Torero, Jose L.; Eigenbrand, Christian; hide

    2017-01-01

    For the first time, a large-scale fire was intentionally set inside a spacecraft while in orbit. Testing in low gravity aboard spacecraft had been limited to samples of modest size: for thin fuels the longest samples burned were around 15 cm in length, and thick fuel samples have been even smaller. This is despite the fact that fire is a catastrophic hazard for spaceflight, and the spread and growth of a fire, combined with its interactions with the vehicle, cannot be expected to scale linearly. While every type of occupied structure on Earth has been the subject of full-scale fire testing, this had never been attempted in space owing to the complexity, cost, risk, and absence of a safe location. Thus, there is a gap in knowledge of fire behavior in spacecraft. The recent utilization of large, unmanned resupply craft has provided the needed capability: a habitable but unoccupied spacecraft in low Earth orbit. One such vehicle was used to study flame spread over a 94 x 40.6 cm thin charring solid (fiberglass/cotton fabric). The sample was an order of magnitude larger than anything studied to date in microgravity and was of sufficient scale that it consumed 1.5% of the available oxygen. The experiment, called Saffire, consisted of two tests: forward or concurrent flame spread (with the direction of flow) and opposed flame spread (against the direction of flow). The average forced air speed was 20 cm/s. For the concurrent flame spread test, the flame size remained constrained after the ignition transient, which is not the case in 1 g. These results were qualitatively different from those on Earth, where an upward-spreading flame on a sample of this size accelerates and grows. In addition, a curious effect of chamber size is noted: compared to previous microgravity work in smaller tunnels, the flame in the larger tunnel spread more slowly, even for a wider sample. This is attributed to flow acceleration in the smaller tunnels caused by hot-gas expansion. These results clearly demonstrate the unique features of purely forced flow in microgravity on flame spread, the dependence of flame behavior on the scale of the experiment, and the importance of full-scale testing for spacecraft fire safety.

  6. Wafer level reliability for high-performance VLSI design

    NASA Technical Reports Server (NTRS)

    Root, Bryan J.; Seefeldt, James D.

    1987-01-01

    As very large-scale integration architectures require higher package density, the reliability of these devices has approached a critical level. Previous processing techniques allowed a large window of varying reliability; however, as scaling and higher current densities push reliability to its limit, tighter control and immediate feedback become critical. Several test structures developed to monitor reliability at the wafer level are described. For example, one test structure monitors metal integrity in seconds, as opposed to the weeks or months required for conventional testing; another monitors mobile-ion contamination at critical steps in the process. Reliability jeopardy can thus be assessed during fabrication, preventing defective devices from ever being placed in the field. Most importantly, reliability can be assessed on every wafer rather than on an occasional sample.

  7. A novel representation of groundwater dynamics in large-scale land surface modelling

    NASA Astrophysics Data System (ADS)

    Rahman, Mostaquimur; Rosolem, Rafael; Kollet, Stefan

    2017-04-01

    Land surface processes are connected to groundwater dynamics via shallow soil moisture. For example, groundwater affects evapotranspiration (by influencing the variability of soil moisture) and runoff-generation mechanisms. However, contemporary Land Surface Models (LSMs) generally simulate hydrology with isolated soil columns and a free-drainage lower boundary condition, mainly because incorporating detailed groundwater dynamics in LSMs usually requires considerable computing resources, especially for large-scale (continental to global) applications. Yet these simplifications neglect the potential effect of groundwater dynamics on land surface mass and energy fluxes. In this study, we present a novel approach for representing high-resolution groundwater dynamics in LSMs that is computationally efficient for large-scale applications. The new parameterization is incorporated in the Joint UK Land Environment Simulator (JULES) and tested at the continental scale.

  8. Large-scale fiber release and equipment exposure experiments. [aircraft fires

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    Outdoor tests were conducted to determine the amount of fiber released in a full-scale fire and to trace its dissemination away from the fire. Equipment vulnerability to fire-released fibers was assessed through shock tests. The greatest fiber release was observed in the shock tube, where the composite was burned with continuous agitation to total consumption. The largest average fiber length obtained outdoors was 5 mm.

  9. CO2 Storage and Enhanced Oil Recovery: Bald Unit Test Site, Mumford Hills Oil Field, Posey County, Indiana

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frailey, Scott M.; Krapac, Ivan G.; Damico, James R.

    2012-03-30

    The Midwest Geological Sequestration Consortium (MGSC) carried out a small-scale carbon dioxide (CO2) injection test in a sandstone within the Clore Formation (Mississippian System, Chesterian Series) in order to gauge the large-scale CO2 storage that might be realized from enhanced oil recovery (EOR) of mature Illinois Basin oil fields via miscible liquid CO2 flooding.

  10. Fabrication and testing of the first 8.4-m off-axis segment for the Giant Magellan Telescope

    NASA Astrophysics Data System (ADS)

    Martin, H. M.; Allen, R. G.; Burge, J. H.; Kim, D. W.; Kingsley, J. S.; Tuell, M. T.; West, S. C.; Zhao, C.; Zobrist, T.

    2010-07-01

    The primary mirror of the Giant Magellan Telescope consists of seven 8.4 m segments which are borosilicate honeycomb sandwich mirrors. Fabrication and testing of the off-axis segments is challenging and has led to a number of innovations in manufacturing technology. The polishing system includes an actively stressed lap that follows the shape of the aspheric surface, used for large-scale figuring and smoothing, and a passive "rigid conformal lap" for small-scale figuring and smoothing. Four independent measurement systems support all stages of fabrication and provide redundant measurements of all critical parameters including mirror figure, radius of curvature, off-axis distance and clocking. The first measurement uses a laser tracker to scan the surface, with external references to compensate for rigid body displacements and refractive index variations. The main optical test is a full-aperture interferometric measurement, but it requires an asymmetric null corrector with three elements, including a 3.75 m mirror and a computer-generated hologram, to compensate for the surface's 14 mm departure from the best-fit sphere. Two additional optical tests measure large-scale and small-scale structure, with some overlap. Together these measurements provide high confidence that the segments meet all requirements.

  11. Recent "Ground Testing" Experiences in the National Full-Scale Aerodynamics Complex

    NASA Technical Reports Server (NTRS)

    Zell, Peter; Stich, Phil; Sverdrup, Jacobs; George, M. W. (Technical Monitor)

    2002-01-01

    The large test sections of the National Full-Scale Aerodynamics Complex (NFAC) wind tunnels provide ideal controlled wind environments for testing ground-based objects and vehicles. Though the facility was designed and provisioned primarily for aeronautical testing, several experiments have been designed to utilize existing model-mount structures to support "non-flying" systems. This presentation discusses some of the ground-based testing capabilities of the facility and provides examples of ground-based tests conducted there to date. It also addresses some envisioned future work and solicits input from the SATA membership on ways to improve the service that NASA makes available to customers.

  12. Heavy hydrocarbon main injector technology

    NASA Technical Reports Server (NTRS)

    Fisher, S. C.; Arbit, H. A.

    1988-01-01

    One of the key components of the Advanced Launch System (ALS) is a large liquid-rocket booster engine. To keep overall vehicle size and cost down, this engine will probably use liquid oxygen (LOX) and a heavy hydrocarbon, such as RP-1, as propellants and operate at relatively high chamber pressures to increase overall performance. A technology program, Heavy Hydrocarbon Main Injector Technology, is being conducted. The main objective of this effort is to develop a logic plan and supporting experimental database to reduce the risk of developing a large-scale (approximately 750,000-lb thrust), high-performance main injector system. The overall approach and program plan, from initial analyses to the design and test of a large-scale, two-dimensional combustor, and the current status of the program are discussed. Progress includes performance and stability analyses; cold-flow tests of injector models; design and fabrication of subscale injectors and calorimeter combustors for performance, heat transfer, and dynamic-stability tests; and preparation of hot-fire test plans. Related current high-pressure LOX/RP-1 injector technology efforts are also briefly discussed.

  13. MULTISPECIES REACTIVE TRACER TEST IN A SAND AND GRAVEL AQUIFER, CAPE COD, MASSACHUSETTS: PART 1: EXPERIMENTAL DESIGN AND TRANSPORT OF BROMIDE AND NICKEL-EDTA TRACERS

    EPA Science Inventory

    In this report, we summarize a portion of the results of a large-scale tracer test conducted at the U. S. Geological Survey research site on Cape Cod, Massachusetts. The site is located on a large sand and gravel glacial outwash plain in an unconfined aquifer. In April 1993, ab...

  14. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique that produces a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurements produces a large Jacobian matrix, which can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease reconstruction time and memory usage while retaining image quality. Firstly, a sparse-matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero; converting the Jacobian to a sparse format then eliminates the zero elements from storage, reducing the memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse-matrix reduction on the reconstruction results.
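The two ingredients, thresholding the Jacobian and a CG solver, can be sketched as follows. This is a minimal serial illustration, not the paper's parallel block-wise implementation; the threshold, the tiny Tikhonov term, and the dense storage are simplifications.

```python
import numpy as np

def sparsify(J, tol):
    """Zero out entries with |J_ij| < tol.  Returned dense here for
    brevity; a CSR format would store only the non-zeros."""
    Js = J.copy()
    Js[np.abs(Js) < tol] = 0.0
    return Js

def cg_normal_equations(J, b, iters=200, eps=1e-10):
    """Solve min ||J x - b|| via CG on the normal equations
    J^T J x = J^T b (with a tiny Tikhonov term for stability)."""
    A = J.T @ J + 1e-8 * np.eye(J.shape[1])
    x = np.zeros(J.shape[1])
    r = J.T @ b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if rs_new < eps:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

In the paper's setting the matrix-vector products are distributed block-wise across processes; the CG recurrence itself is unchanged.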

  15. Techniques for control of long-term reliability of complex integrated circuits. I - Reliability assurance by test vehicle qualification.

    NASA Technical Reports Server (NTRS)

    Van Vonno, N. W.

    1972-01-01

    Development of an alternate approach to the conventional methods of reliability assurance for large-scale integrated circuits. The product treated is a large-scale T²L array designed for space applications. The concept used is qualification of product by evaluation of the basic processing used in fabricating it, providing insight into its potential reliability. Test vehicles are described that enable evaluation of device characteristics, surface condition, and various parameters of the two-level metallization system used. Evaluation of these test vehicles is performed on a lot-qualification basis, with a lot consisting of one wafer. Assembled test vehicles are evaluated by high-temperature stress at 300 C for short durations. Stressing at these temperatures provides a rapid method of evaluation and permits a go/no-go decision to be made on the wafer lot in a timely fashion.

  16. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications: one an iterative routine, the other a least-squares method. Numerical simulations were performed on multiple-degree-of-freedom models to test the effectiveness of the algorithms and the usefulness of parallel computation for these problems. High Performance Fortran is used to parallelize the algorithms, and tests were performed on the IBM SP2 at NASA Ames Research Center. The least-squares method incurs high communication costs, which reduces the benefit of high-performance computing, and its memory requirement grows so rapidly that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows much more slowly, and it can handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
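Neither of the paper's algorithms is reproduced here, but the flavor of damping identification can be shown on a single degree of freedom with the classical logarithmic-decrement method, a much simpler, illustrative stand-in:

```python
import math

def damping_from_peaks(peaks):
    """Estimate the damping ratio zeta from successive free-decay peak
    amplitudes via the logarithmic decrement
    delta = ln(peak_k / peak_{k+1}),  zeta = delta / sqrt(4*pi^2 + delta^2)."""
    deltas = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    d = sum(deltas) / len(deltas)
    return d / math.sqrt(4.0 * math.pi ** 2 + d ** 2)

# Synthetic peaks of exp(-zeta*wn*t) * cos(wd*t) with a known zeta.
zeta, wn = 0.05, 2.0 * math.pi
wd = wn * math.sqrt(1.0 - zeta ** 2)
Td = 2.0 * math.pi / wd
peaks = [math.exp(-zeta * wn * k * Td) for k in range(5)]
est = damping_from_peaks(peaks)
```

The large-scale methods in the paper generalize this idea to full damping matrices, which is what drives their memory and communication costs.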

  17. Experimental Investigation of a Large-Scale Low-Boom Inlet Concept

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie M.; Chima, Rodrick V.; Vyas, Manan A.; Wayman, Thomas R.; Conners, Timothy R.; Reger, Robert W.

    2011-01-01

    A large-scale low-boom inlet concept was tested in the NASA Glenn Research Center 8- by 6-Foot Supersonic Wind Tunnel. The purpose of the test was to assess inlet performance, stability, and operability at various Mach numbers and angles of attack. Two models were tested: a dual-stream inlet designed to mimic potential aircraft flight hardware integrating a high-flow bypass stream, and a single-stream inlet designed to study a configuration with a zero-degree external cowl angle and to permit surface visualization of the vortex-generator flow on the internal centerbody surface. During the course of the test, the low-boom inlet concept was demonstrated to have high recovery, excellent buzz margin, and high operability. This paper provides an overview of the setup, briefly compares the dual-stream and single-stream inlet results, and examines the dual-stream inlet characteristics.

  18. Large-area photogrammetry based testing of wind turbine blades

    NASA Astrophysics Data System (ADS)

    Poozesh, Peyman; Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter; Harvey, Eric; Yarala, Rahul

    2017-03-01

    An optical sensing system that can measure displacement and strain over essentially the entire area of a utility-scale blade can significantly reduce the time and cost associated with traditional instrumentation. This paper evaluates the performance of conventional three-dimensional digital image correlation (3D DIC) and three-dimensional point tracking (3DPT) approaches over the surface of wind turbine blades and proposes a multi-camera measurement system using dynamic spatial data stitching. The potential advantages of the proposed approach include: (1) full-field measurement distributed over a very large area, (2) elimination of time-consuming wiring and expensive sensors, and (3) elimination of the need for large-channel data acquisition systems. Several challenges are associated with extending a standard 3D DIC system to measure the entire surface of a utility-scale blade and extract distributed strain, deflection, and modal parameters. This paper addresses some of these difficulties, including: (1) assessing the accuracy of the 3D DIC system in measuring full-field distributed strain and displacement over a large area, (2) understanding the geometrical constraints associated with a wind turbine testing facility (e.g., lighting, working distance, and speckle pattern size), (3) evaluating the performance of the dynamic stitching method in combining two different fields of view by extracting modal parameters from aligned point clouds, and (4) determining the feasibility of employing output-only system identification to estimate modal parameters of a utility-scale wind turbine blade from optically measured data. Within the current work, the results of an optical measurement (one stereo-vision system) performed over a large area of a 50-m utility-scale blade subjected to quasi-static and cyclic loading are presented. 
Blade certification and testing is typically performed to the International Electrotechnical Commission standard IEC 61400-23: for static tests, the blade is pulled in the flap-wise or edge-wise direction while deflection or distributed strain is measured at a few limited locations on the large blade. Additionally, the paper explores the error associated with using a multi-camera system (two stereo-vision systems) to measure 3D displacement and extract structural dynamic parameters on a mock setup emulating a utility-scale wind turbine blade. The results reveal that the multi-camera measurement system has the potential to identify the dynamic characteristics of a very large structure.
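The stitching step, registering point clouds seen by two stereo-vision systems, is at its core a rigid alignment. A minimal sketch using the Kabsch (SVD) algorithm on corresponding points; this is an assumed stand-in, as the paper's dynamic stitching pipeline is more involved:

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) minimizing sum ||R @ p_i + t - q_i||^2
    over corresponding 3-D points (rows of P and Q)."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = Qc - R @ Pc
    return R, t
```

Once (R, t) is known for the overlap region, points from one camera pair can be mapped into the other's frame and the two fields of view merged.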

  19. Large-scale machine learning and evaluation platform for real-time traffic surveillance

    NASA Astrophysics Data System (ADS)

    Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel

    2016-09-01

    In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale, high-quality datasets is challenging; typically, these datasets have limited diversity and do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for automatic traffic measurement and classification. The proposed positive and negative mining process addresses the quality of crowd-sourced ground-truth data through machine-learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud-computing framework to handle the data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% half of the time and about 78% 19/20 of the time when tested on approximately 7,500,000 video frames. At the end of 2016, the dataset is expected to have over 1 billion annotated video frames.
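Haar-like features of the kind the detector is trained on are typically evaluated in constant time from an integral image. A minimal, illustrative sketch; the paper's actual feature set and boosted cascade are not reproduced here:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row and left column,
    so rectangle sums need no boundary special cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) from the integral image."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_horizontal(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)
```

Because each feature costs only a handful of lookups, evaluating a million of them over 70,000 frames is feasible, which is exactly the scale the distributed training framework is built for.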

  20. A Critical Review of the IELTS Writing Test

    ERIC Educational Resources Information Center

    Uysal, Hacer Hande

    2010-01-01

    Administered at local centres in 120 countries throughout the world, IELTS (International English Language Testing System) is one of the most widely used large-scale ESL tests that also offers a direct writing test component. Because of its popularity and its use for making critical decisions about test takers, it is crucial to draw attention to…

  1. Ongoing research experiments at the former Soviet nuclear test site in eastern Kazakhstan

    USGS Publications Warehouse

    Leith, William S.; Kluchko, Luke J.; Konovalov, Vladimir; Vouille, Gerard

    2002-01-01

    Degelen Mountain, located in Eastern Kazakhstan near the city of Semipalatinsk, was once the Soviet Union's most active underground nuclear test site. Two hundred fifteen nuclear tests were conducted in 181 tunnels driven horizontally into its many ridges--almost twice the number of tests as at any other Soviet underground nuclear test site. It was also the site of the first Soviet underground nuclear test--a 1-kiloton device detonated on October 11, 1961. Until recently, the details of testing at Degelen were kept secret and had been the subject of considerable speculation. However, in 1991, the Semipalatinsk test site became part of the newly independent Republic of Kazakhstan, and in 1995 the Kazakhstani government concluded an agreement with the U.S. Department of Defense to eliminate the nuclear testing infrastructure in Kazakhstan. This agreement, which calls for the "demilitarization of the infrastructure directly associated with the nuclear weapons test tunnels," has been implemented as the Degelen Mountain Tunnel Closure Program. The U.S. Defense Threat Reduction Agency, in partnership with the Department of Energy, has permitted the use of the tunnel closure project at the former nuclear test site as a foundation on which to support cost-effective, research-and-development-funded experiments. These experiments are principally designed to improve U.S. capabilities to monitor and verify the Comprehensive Test Ban Treaty (CTBT), but they have also provided a new source of information on the effects of nuclear and chemical explosions on hard, fractured rock environments. These new data extend and confirm the results of recent Russian publications on the rock environment at the site and the mechanical effects of large-scale chemical and nuclear testing. In 1998, a large-scale tunnel closure experiment, Omega-1, was conducted in Tunnel 214 at Degelen Mountain. 
In this experiment, a 100-ton chemical explosive blast was used to test technologies for monitoring the CTBT and to calibrate a portion of the CTBT's International Monitoring System. The experiment has also provided important benchmark data on the mechanical behavior of hard, dense, fractured rock, and has demonstrated the feasibility of fielding large-scale calibration explosions, which are specified as a "confidence-building measure" in the CTBT Protocol. Two other large-scale explosion experiments, Omega-2 and Omega-3, are planned for the summers of 1999 and 2000. Like the Tunnel 214 test, the 1999 experiment will include close-in monitoring of near-source effects, as well as contributing to the calibration of key seismic stations for the CTBT. The Omega-3 test will examine the effect of multiple blasts on the fractured rock environment.

  2. Test Design Considerations for Students with Significant Cognitive Disabilities

    ERIC Educational Resources Information Center

    Anderson, Daniel; Farley, Dan; Tindal, Gerald

    2015-01-01

    Students with significant cognitive disabilities present an assessment dilemma that centers on access and validity in large-scale testing programs. Typically, access is improved by eliminating construct-irrelevant barriers, while validity is improved, in part, through test standardization. In this article, one state's alternate assessment data…

  3. Controlling Guessing Bias in the Dichotomous Rasch Model Applied to a Large-Scale, Vertically Scaled Testing Program

    PubMed Central

    Andrich, David; Marais, Ida; Humphry, Stephen Mark

    2015-01-01

    Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The consequence is that the proficiencies of the more proficient students are increased relative to those of the less proficient. Not controlling the guessing bias underestimates the progress of students across 7 years of schooling with important educational implications. PMID:29795871
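For reference, the dichotomous Rasch item response function, together with a 3PL-style variant whose lower asymptote models the guessing the paper's method corrects for (the asymptote value below is illustrative):

```python
import math

def rasch(theta, b):
    """Dichotomous Rasch model: P(correct | ability theta, difficulty b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def with_guessing(theta, b, c):
    """3PL-style lower asymptote c: even very low-ability examinees
    succeed with probability at least c by guessing on a
    multiple-choice item."""
    return c + (1.0 - c) * rasch(theta, b)
```

Because guessing inflates success probabilities most for low-ability examinees, fitting a pure Rasch model to guessed responses biases the difficulty estimates, which is the distortion whose removal changes the unit of scale across the continuum.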

  4. Criterion-Referenced Job Proficiency Testing: A Large Scale Application. Research Report 1193.

    ERIC Educational Resources Information Center

    Maier, Milton H.; Hirshfeld, Stephen F.

    The Army Skill Qualification Tests (SQT's) were designed to determine levels of competence in performance of the tasks crucial to an enlisted soldier's occupational specialty. SQT's are performance-based, criterion-referenced measures which offer two advantages over traditional proficiency and achievement testing programs: test content can be made…

  5. Mapping the integrated Sachs-Wolfe effect

    NASA Astrophysics Data System (ADS)

    Manzotti, A.; Dodelson, S.

    2014-12-01

    On large scales, the anisotropies in the cosmic microwave background (CMB) reflect not only the primordial density field but also the energy gained when photons traverse the decaying gravitational potentials of large-scale structure, the so-called integrated Sachs-Wolfe (ISW) effect. Decomposing the anisotropy signal into a primordial piece and an ISW component, the main secondary effect on large scales, is more urgent than ever as cosmologists strive to understand the Universe on those scales. We present a likelihood technique for extracting the ISW signal that combines measurements of the CMB, the distribution of galaxies, and maps of gravitational lensing. We test this technique with simulated data, showing that we can successfully reconstruct the ISW map using all the data sets together. We then present the ISW map obtained from a combination of real data: the NRAO VLA Sky Survey (NVSS) galaxy survey and the temperature-anisotropy and lensing maps made by the Planck satellite. This map shows that, with the data sets used and assuming linear physics, there is no evidence from the reconstructed ISW signal in the Cold Spot region for an entirely ISW origin of this large-scale anomaly in the CMB; a large-scale structure origin from low-redshift voids outside the NVSS redshift range is, however, still possible. Finally, we show that future surveys, thanks to better large-scale lensing reconstruction, will be able to improve the signal-to-noise ratio of the reconstruction, which currently comes mainly from galaxy surveys.

  6. Pupil Perceptions of National Tests in Science: Perceived Importance, Invested Effort, and Test Anxiety

    ERIC Educational Resources Information Center

    Eklof, Hanna; Nyroos, Mikaela

    2013-01-01

    Although large-scale national tests have been used for many years in Swedish compulsory schools, very little is known about how pupils actually react to these tests. The question is relevant, however, as pupil reactions in the test situation may affect test performance as well as future attitudes towards assessment. The question is relevant also…

  7. The impact of large-scale, long-term optical surveys on pulsating star research

    NASA Astrophysics Data System (ADS)

    Soszyński, Igor

    2017-09-01

    The era of large-scale photometric variability surveys began a quarter of a century ago, when three microlensing projects - EROS, MACHO, and OGLE - started their operation. These surveys initiated a revolution in the field of variable stars and in the next years they inspired many new observational projects. Large-scale optical surveys multiplied the number of variable stars known in the Universe. The huge, homogeneous and complete catalogs of pulsating stars, such as Cepheids, RR Lyrae stars, or long-period variables, offer an unprecedented opportunity to calibrate and test the accuracy of various distance indicators, to trace the three-dimensional structure of the Milky Way and other galaxies, to discover exotic types of intrinsically variable stars, or to study previously unknown features and behaviors of pulsators. We present historical and recent findings on various types of pulsating stars obtained from the optical large-scale surveys, with particular emphasis on the OGLE project which currently offers the largest photometric database among surveys for stellar variability.

  8. Chronic, Wireless Recordings of Large Scale Brain Activity in Freely Moving Rhesus Monkeys

    PubMed Central

    Schwarz, David A.; Lebedev, Mikhail A.; Hanson, Timothy L.; Dimitrov, Dragan F.; Lehew, Gary; Meloy, Jim; Rajangam, Sankaranarayani; Subramanian, Vivek; Ifft, Peter J.; Li, Zheng; Ramakrishnan, Arjun; Tate, Andrew; Zhuang, Katie; Nicolelis, Miguel A.L.

    2014-01-01

    Advances in techniques for recording large-scale brain activity contribute to both the elucidation of neurophysiological principles and the development of brain-machine interfaces (BMIs). Here we describe a neurophysiological paradigm for performing tethered and wireless large-scale recordings based on movable volumetric three-dimensional (3D) multielectrode implants. This approach allowed us to isolate up to 1,800 units per animal and simultaneously record the extracellular activity of close to 500 cortical neurons, distributed across multiple cortical areas, in freely behaving rhesus monkeys. The method is expandable, in principle, to thousands of simultaneously recorded channels. It also allows increased recording longevity (5 consecutive years), and recording of a broad range of behaviors, e.g. social interactions, and BMI paradigms in freely moving primates. We propose that wireless large-scale recordings could have a profound impact on basic primate neurophysiology research, while providing a framework for the development and testing of clinically relevant neuroprostheses. PMID:24776634

  9. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new coarsening kinetics is found in the regime of ultrahigh volume fraction. The parallel implementation is capable of harnessing the greater computing power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512³ grid cube. With the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability that improves with increasing problem size. In addition, a model for predicting runtime is developed, which shows good agreement with actual run times from numerical tests.
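    As a point of reference for what such a solver iterates, below is a minimal serial sketch of one explicit phase-field time step (Allen-Cahn type, with a double-well potential) on a periodic grid. The function name, parameters, and potential are illustrative assumptions, not the paper's parallel implementation:

```python
import numpy as np

def allen_cahn_step(phi, dt=0.01, dx=1.0, eps=1.0):
    """One explicit Euler step of d(phi)/dt = eps*Laplacian(phi) - f'(phi),
    with double-well derivative f'(phi) = phi**3 - phi, on a 2D periodic grid."""
    # Five-point periodic Laplacian via array rolls
    lap = (np.roll(phi, 1, axis=0) + np.roll(phi, -1, axis=0) +
           np.roll(phi, 1, axis=1) + np.roll(phi, -1, axis=1) - 4.0 * phi) / dx**2
    return phi + dt * (eps * lap - (phi**3 - phi))

# A uniform field at a potential minimum (phi = 1) is a fixed point of the update.
phi0 = np.ones((8, 8))
print(np.allclose(allen_cahn_step(phi0), phi0))  # → True
```

    A production code of the kind described in the abstract would domain-decompose this grid across compute nodes and exchange halo layers before each Laplacian evaluation, which is where the speed-up and scalability analysis applies.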

  10. Stability of large-scale systems with stable and unstable subsystems.

    NASA Technical Reports Server (NTRS)

    Grujic, Lj. T.; Siljak, D. D.

    1972-01-01

    The purpose of this paper is to develop new methods for constructing vector Liapunov functions and broaden the application of Liapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. With minor technical adjustments, the same criterion can be used to determine connective asymptotic stability of large-scale systems subject to structural perturbations. By redefining the constraints imposed on the interconnections among the subsystems, the considered class of systems is broadened in an essential way to include composite systems with unstable subsystems. In this way, the theory is brought substantially closer to reality since stability of all subsystems is no longer a necessary assumption in establishing stability of the overall composite system.
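    A generic sketch of what such an algebraic criterion typically looks like in the vector Liapunov literature (Siljak-style aggregation; a schematic form, not necessarily the paper's exact statement):

```latex
\dot{v}(x) \;\le\; W\, v(x), \qquad
W = \bigl[\, -\sigma_i \delta_{ij} + \xi_{ij} \,\bigr], \quad \xi_{ij} \ge 0,
```

    where $v$ collects the subsystem Liapunov functions, $\sigma_i$ measures the decay rate of the $i$-th subsystem, and $\xi_{ij}$ bounds the strength of the interconnection from subsystem $j$ to subsystem $i$. Asymptotic stability of the composite system then follows if $-W$ is an M-matrix, i.e., if all leading principal minors of $-W$ are positive.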

  11. Implementation of Fiber Optic Sensing System on Sandwich Composite Cylinder Buckling Test

    NASA Technical Reports Server (NTRS)

    Pena, Francisco; Richards, W. Lance; Parker, Allen R.; Piazza, Anthony; Schultz, Marc R.; Rudd, Michelle T.; Gardner, Nathaniel W.; Hilburger, Mark W.

    2018-01-01

    The National Aeronautics and Space Administration (NASA) Engineering and Safety Center Shell Buckling Knockdown Factor Project is a multicenter project tasked with developing new analysis-based shell buckling design guidelines and design factors (i.e., knockdown factors) through high-fidelity buckling simulations and advanced test technologies. To validate these new buckling knockdown factors for future launch vehicles, the Shell Buckling Knockdown Factor Project is carrying out structural testing on a series of large-scale metallic and composite cylindrical shells at the NASA Marshall Space Flight Center (Marshall Space Flight Center, Alabama). A fiber optic sensor system was used to measure strain on a large-scale sandwich composite cylinder tested under multiple axial compressive loads of more than 850,000 lb and equivalent bending loads of over 22 million in-lb. During the structural testing of the composite cylinder, strain data were collected from optical cables containing distributed fiber Bragg gratings using a custom fiber optic sensor system interrogator developed at the NASA Armstrong Flight Research Center. A total of 16 fiber-optic strands, each containing nearly 1,000 strain-measuring fiber Bragg gratings, were installed on the inner and outer cylinder surfaces to monitor the test article's global structural response through high-density real-time and post-test strain measurements. The distributed sensing system provided evidence of local epoxy failure at the attachment-ring-to-barrel interface that would not have been detected with conventional instrumentation. Results from the fiber optic sensor system were used to further refine and validate structural models for buckling of large-scale composite structures. This paper discusses the techniques employed for real-time structural monitoring of the composite cylinder during structural load introduction and for distributed bending-strain measurements over a large section of the cylinder, utilizing the unique sensing capabilities of fiber optic sensors.

  12. Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame

    NASA Astrophysics Data System (ADS)

    Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank

    2017-10-01

    This study presents a numerical multi-scale simulation framework extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data-exchange format and connected to a generalized controller interface program that facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six-degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, additional test data are required to draw general conclusions.

  13. The Mothball, Sustainment, and Proposed Reactivation of the Hypersonic Tunnel Facility (HTF) at NASA Glenn Research Center Plum Brook Station

    NASA Technical Reports Server (NTRS)

    Thomas, Scott R.; Lee, Jinho; Stephens, John W.; Hostler, Robert W., Jr.; VonKamp, William D.

    2010-01-01

    The Hypersonic Tunnel Facility (HTF), located at the NASA Glenn Research Center's Plum Brook Station in Sandusky, Ohio, is the nation's only large-scale, non-vitiated hypersonic propulsion test facility. The HTF, with its 4-story graphite induction heater, is capable of duplicating Mach 5, 6, and 7 flight conditions. This unique propulsion system test facility has experienced several standby and reactivation cycles. The intent of this paper is to give the propulsion community an overview of the HTF's capabilities, present the current status of the HTF, and share the lessons learned from putting a large-scale facility into mothball status for a later restart.

  14. The cosmic ray muon tomography facility based on large scale MRPC detectors

    NASA Astrophysics Data System (ADS)

    Wang, Xuewu; Zeng, Ming; Zeng, Zhi; Wang, Yi; Zhao, Ziran; Yue, Xiaoguang; Luo, Zhifei; Yi, Hengguan; Yu, Baihui; Cheng, Jianping

    2015-06-01

    Cosmic ray muon tomography is a novel technology for detecting high-Z materials. A prototype of TUMUTY, with 73.6 cm × 73.6 cm large-scale position-sensitive MRPC detectors, has been developed and is introduced in this paper. Three test kits were imaged, with reconstruction performed using a maximum a posteriori (MAP) algorithm. The reconstruction results show that the prototype works well: objects with complex structure and small size (20 mm) can be imaged, and high-Z material is distinguishable from low-Z material. This prototype provides a good platform for our further studies of the physical characteristics and performance of cosmic ray muon tomography.

  15. The development of a capability for aerodynamic testing of large-scale wing sections in a simulated natural rain environment

    NASA Technical Reports Server (NTRS)

    Bezos, Gaudy M.; Cambell, Bryan A.; Melson, W. Edward

    1989-01-01

    A research technique to obtain large-scale aerodynamic data in a simulated natural rain environment has been developed. A 10-ft-chord NACA 64-210 wing section equipped with leading-edge and trailing-edge high-lift devices was tested as part of a program to determine the effect of highly concentrated, short-duration rainfall on airplane performance. Preliminary dry aerodynamic data are presented for the high-lift configuration at a velocity of 100 knots and an angle of attack of 18 deg. Data are also presented on rainfield uniformity and rainfall concentration intensity levels obtained during the calibration of the rain simulation system.

  16. Large-Scale Wind-Tunnel Tests of an Airplane Model with an Unswept, Aspect-Ratio-10 Wing, Two Propellers, and Blowing Flaps

    NASA Technical Reports Server (NTRS)

    Griffin, Roy N., Jr.; Holzhauser, Curt A.; Weiberg, James A.

    1958-01-01

    An investigation was made to determine the lifting effectiveness and flow requirements of blowing over the trailing-edge flaps and ailerons on a large-scale model of a twin-engine, propeller-driven airplane having a high-aspect-ratio, thick, straight wing. With sufficient blowing jet momentum to prevent flow separation on the flap, the lift increment increased for flap deflections up to 80 deg (the maximum tested). This lift increment also increased with increasing propeller thrust coefficient. The blowing jet momentum coefficient required for attached flow on the flaps was not significantly affected by thrust coefficient, angle of attack, or blowing nozzle height.

  17. Fundamental tests of galaxy formation theory

    NASA Technical Reports Server (NTRS)

    Silk, J.

    1982-01-01

    The structure of the universe is studied as an environment preserving traces of the seed fluctuations from which galaxies formed. The evolution of the density fluctuation modes that led to the eventual formation of matter inhomogeneities is reviewed, and how the resulting clumps developed into galaxies and galaxy clusters, acquiring characteristic masses, velocity dispersions, and metallicities, is discussed. Tests are described that utilize the large-scale structure of the universe, including the dynamics of the local supercluster, the large-scale matter distribution, and the anisotropy of the cosmic background radiation, to probe the earliest accessible stages of evolution. Finally, the role of particle physics is described with regard to its observable implications for galaxy formation.

  18. Quantitative Missense Variant Effect Prediction Using Large-Scale Mutagenesis Data.

    PubMed

    Gray, Vanessa E; Hause, Ronald J; Luebeck, Jens; Shendure, Jay; Fowler, Douglas M

    2018-01-24

    Large datasets describing the quantitative effects of mutations on protein function are becoming increasingly available. Here, we leverage these datasets to develop Envision, which predicts the magnitude of a missense variant's molecular effect. Envision combines 21,026 variant effect measurements from nine large-scale experimental mutagenesis datasets, a hitherto untapped training resource, with a supervised, stochastic gradient boosting learning algorithm. Envision outperforms other missense variant effect predictors both on large-scale mutagenesis data and on an independent test dataset comprising 2,312 TP53 variants whose effects were measured using a low-throughput approach. This dataset was never used for hyperparameter tuning or model training and thus serves as an independent validation set. Envision prediction accuracy is also more consistent across amino acids than other predictors. Finally, we demonstrate that Envision's performance improves as more large-scale mutagenesis data are incorporated. We precompute Envision predictions for every possible single amino acid variant in human, mouse, frog, zebrafish, fruit fly, worm, and yeast proteomes (https://envision.gs.washington.edu/). Copyright © 2017 Elsevier Inc. All rights reserved.

  19. The MMPI-2 Symptom Validity Scale (FBS) not influenced by medical impairment: a large sleep center investigation.

    PubMed

    Greiffenstein, Manfred F

    2010-06-01

    The Symptom Validity Scale (Minnesota Multiphasic Personality Inventory-2-FBS [MMPI-2-FBS]) is a standard MMPI-2 validity scale measuring overstatement of somatic distress and subjective disability. Some critics assert that the MMPI-2-FBS misclassifies too many medically impaired persons as malingering symptoms. This study tests the assertion of malingering misclassification with a large sample of 345 medical inpatients undergoing sleep studies that routinely included MMPI-2 testing. The variables included standard MMPI-2 validity scales (Lie Scale [L], Infrequency Scale [F], K-Correction [K], FBS), objective medical data (e.g., body mass index, pulse oximetry), and polysomnographic scores (e.g., apnea/hypopnea index). The results showed that the FBS had no substantial or unique association with medical/sleep variables and produced false positive rates <20% (median = 9, range = 4-11), with male inpatients showing marginally higher failure rates than females. The MMPI-2-FBS appears to have acceptable specificity, because it did not misclassify as biased responders those medical patients with sleep problems, male or female, with primary gain only (reducing sickness). Medical impairment does not appear to be a major influence on deviant MMPI-2-FBS scores.

  20. High Fidelity Modeling of Turbulent Mixing and Chemical Kinetics Interactions in a Post-Detonation Flow Field

    NASA Astrophysics Data System (ADS)

    Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael

    2015-06-01

    Driven by continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first-principles models and solving large systems of equations on highly resolved grids, the combined effects of finite-rate, multi-phase chemical processes (including thermal ignition), turbulent mixing, and shock interactions are captured across the spectrum of relevant time scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in the dispersion, mixing, ignition, and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establishing a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.

  1. TESTING METHODS FOR DETECTION OF CRYPTOSPORIDIUM SPP. IN WATER SAMPLES

    EPA Science Inventory

    A large waterborne outbreak of cryptosporidiosis in Milwaukee, Wisconsin, U.S.A. in 1993 prompted a search for ways to prevent large-scale waterborne outbreaks of protozoan parasitoses. Two principal strategies have emerged: risk assessment leading to appropriate treatment and ...

  2. Retooling Education: Testing and the Liberal Arts

    ERIC Educational Resources Information Center

    Jackson, Robert L.

    2007-01-01

    The motivation and methodology for measuring intelligence have changed repeatedly in the modern history of large-scale student testing. Test makers have always sought to identify raw aptitude for cultivation, but they have never figured out how to promote excellence while preserving equality. They've settled for egalitarianism, which gives rise to…

  3. State Test Results Are Predictable

    ERIC Educational Resources Information Center

    Tienken, Christopher H.

    2014-01-01

    Out-of-school, community demographic and family-level variables have an important influence on student achievement as measured by large-scale standardized tests. Studies described here demonstrated that about half of the test score is accounted for by variables outside the control of teachers and school administrators. The results from these…

  4. Designing Cognitive Complexity in Mathematical Problem-Solving Items

    ERIC Educational Resources Information Center

    Daniel, Robert C.; Embretson, Susan E.

    2010-01-01

    Cognitive complexity level is important for measuring both aptitude and achievement in large-scale testing. Tests for standards-based assessment of mathematics, for example, often include cognitive complexity level in the test blueprint. However, little research exists on how mathematics items can be designed to vary in cognitive complexity level.…

  5. Script Concordance Testing in Continuing Professional Development: Local or International Reference Panels?

    ERIC Educational Resources Information Center

    Pleguezuelos, E. M.; Hornos, E.; Dory, V.; Gagnon, R.; Malagrino, P.; Brailovsky, C. A.; Charlin, B.

    2013-01-01

    Context: The PRACTICUM Institute has developed large-scale international programs of on-line continuing professional development (CPD) based on self-testing and feedback using the Practicum Script Concordance Test© (PSCT). Aims: To examine the psychometric consequences of pooling the responses of panelists from different countries (composite…

  6. Statewide Physical Fitness Testing: Perspectives from the Gym

    ERIC Educational Resources Information Center

    Martin, Scott B.; Ede, Alison; Morrow, James R., Jr.; Jackson, Allen W.

    2010-01-01

    This paper provides observations of physical fitness testing in Texas schools and physical education teachers' insights about large-scale testing using the FITNESSGRAM[R] assessment (Cooper Institute, 2007) as mandated by Texas Senate Bill 530. In the first study, undergraduate and graduate students who were trained to observe and assess student…

  7. Detecting Item Drift in Large-Scale Testing

    ERIC Educational Resources Information Center

    Guo, Hongwen; Robin, Frederic; Dorans, Neil

    2017-01-01

    The early detection of item drift is an important issue for frequently administered testing programs because items are reused over time. Unfortunately, operational data tend to be very sparse and do not lend themselves to frequent monitoring analyses, particularly for on-demand testing. Building on existing residual analyses, the authors propose…

  8. Scaling up Psycholinguistics

    ERIC Educational Resources Information Center

    Smith, Nathaniel J.

    2011-01-01

    This dissertation contains several projects, each addressing different questions with different techniques. In chapter 1, I argue that they are unified thematically by their goal of "scaling up psycholinguistics"; they are all aimed at analyzing large data-sets using tools that reveal patterns to propose and test mechanism-neutral hypotheses about…

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Gang

    Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed both to the lack of spatial resolution in the models and to the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections across a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), aqua-planet configurations of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km to 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large-scale circulation will be quantified both with the well-tested preferred-circulation-regime approach and with recently developed measures, the finite-amplitude wave activity (FAWA) and its spectrum. The fine-scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large-scale measures as indicators of the probability of occurrence of the finer-scale structures, and hence of extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.

  10. Test-retest reliability of the Capute scales for neurodevelopmental screening of a high risk sample: Impact of test-retest interval and degree of neonatal risk.

    PubMed

    McCurdy, M; Bellows, A; Deng, D; Leppert, M; Mahone, E; Pritchard, A

    2015-01-01

    Reliable and valid screening and assessment tools are necessary to identify children at risk for neurodevelopmental disabilities who may require additional services. This study evaluated the test-retest reliability of the Capute Scales in a high-risk sample, hypothesizing adequate reliability across 6- and 12-month intervals. Capute Scales scores (N = 66), spanning three age ranges (12-18, 19-24, and 25-36 months), were collected via retrospective chart review from a NICU follow-up clinic within a large urban medical center. On average, participants were classified as very low birth weight and premature. Reliability of the Capute Scales was evaluated with intraclass correlation coefficients across length of test-retest interval, age at testing, and degree of neonatal complications. The Capute Scales demonstrated high reliability, regardless of length of test-retest interval (ranging from 6 to 14 months) or age of participant, for all index scores, including the overall Developmental Quotient (DQ), the language-based skill index (CLAMS), and the nonverbal reasoning index (CAT). Linear regressions revealed that greater neonatal risk was related to poorer test-retest reliability; however, reliability coefficients remained strong. The Capute Scales afford clinicians a reliable and valid means of screening and assessing for neurodevelopmental delay within high-risk infant populations.

  11. A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Vollant, A.; Balarac, G.; Corre, C.

    2016-02-01

    Large-eddy simulation (LES) solves only the large-scale part of turbulent flows, using a scale separation based on a filtering operation. Solving the filtered Navier-Stokes equations then requires modeling the subgrid-scale (SGS) stress tensor to account for the effect of scales smaller than the filter size. In this work, a new model is proposed for the SGS stress tensor. The model formulation is based on a regularization procedure applied to the gradient model to correct its unstable behavior. The model is developed from a priori tests to improve the accuracy of the modeling for both structural and functional performance, i.e., the model's ability to locally approximate the unknown SGS term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work extends to the SGS stress tensor the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] for the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and scalar equations. The second objective of this work is to compare this new set of DRG models with direct numerical simulations (DNS) and filtered DNS for classic flows simulated with a pseudo-spectral solver, and with the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, a turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvements in the prediction of velocity and scalar statistics.
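    For context, the (unregularized) gradient model referred to above approximates the SGS stress tensor from the resolved velocity gradients; in standard notation, with $\bar{\Delta}$ the filter width, it reads

```latex
\tau_{ij} \;=\; \overline{u_i u_j} - \bar{u}_i \bar{u}_j
\;\approx\; \frac{\bar{\Delta}^{2}}{12}\,
\frac{\partial \bar{u}_i}{\partial x_k}\,
\frac{\partial \bar{u}_j}{\partial x_k}.
```

    This structural form correlates well with the exact SGS stress in a priori tests but can provide insufficient (or even negative) dissipation, which is the unstable behavior that motivates the regularization procedure described in the abstract.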

  12. Conducting Automated Test Assembly Using the Premium Solver Platform Version 7.0 with Microsoft Excel and the Large-Scale LP/QP Solver Engine Add-In

    ERIC Educational Resources Information Center

    Cor, Ken; Alves, Cecilia; Gierl, Mark J.

    2008-01-01

    This review describes and evaluates a software add-in created by Frontline Systems, Inc., that can be used with Microsoft Excel 2007 to solve large, complex test assembly problems. The combination of Microsoft Excel 2007 with the Frontline Systems Premium Solver Platform is significant because Microsoft Excel is the most commonly used spreadsheet…

  13. Performance of the first Japanese large-scale facility for radon inhalation experiments with small animals.

    PubMed

    Ishimori, Yuu; Mitsunobu, Fumihiro; Yamaoka, Kiyonori; Tanaka, Hiroshi; Kataoka, Takahiro; Sakoda, Akihiro

    2011-07-01

    A radon test facility for small animals was developed to increase the statistical validity of differences in biological response across various radon environments. This paper illustrates the performance of that facility, the first large-scale facility of its kind in Japan. The facility can run approximately 150 mouse-scale tests at the same time. The apparatus for exposing small animals to radon has six animal chamber groups, each with five independent cages, and different radon concentrations are available in each chamber group. Because the first target of this study is to examine the in vivo behaviour of radon and its effects, the major functions for controlling radon and eliminating thoron were examined experimentally. Additionally, radon progeny concentrations and their particle size distributions in the cages were examined experimentally for consideration in future projects.

  14. The Relationship between Spatial and Temporal Magnitude Estimation of Scientific Concepts at Extreme Scales

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Lee, H.

    2010-01-01

    Many astronomical objects, processes, and events exist and occur at extreme scales of spatial and temporal magnitude. Our research draws upon the psychological literature, replete with evidence of linguistic and metaphorical links between the spatial and temporal domains, to compare how students estimate the spatial and temporal magnitudes associated with objects and processes typically taught in science classes. We administered spatial and temporal scale estimation tests, with many astronomical items, to 417 students enrolled in 12 undergraduate science courses. Results show that while the temporal test was more difficult, students' overall performance patterns on the two tests were mostly similar. However, asymmetrical correlations between the two tests indicate that students think of the extreme ranges of spatial and temporal scales in different ways, which is likely influenced by their classroom experience. When making incorrect estimations, students tended to underestimate the difference between the everyday scale and the extreme scales on both tests. This suggests the use of a common logarithmic mental number line for both spatial and temporal magnitude estimation. However, there are differences between the two tests in the errors students make in the everyday range. Among the implications discussed are the use of spatio-temporal reference frames, instead of smooth bootstrapping, to help students maneuver between scales of magnitude, and the use of logarithmic transformations between reference frames. Implications for astronomy range from learning about spectra to large-scale galaxy structure.

  15. What scaling means in wind engineering: Complementary role of the reduced scale approach in a BLWT and the full scale testing in a large climatic wind tunnel

    NASA Astrophysics Data System (ADS)

    Flamand, Olivier

    2017-12-01

    Wind engineering problems are commonly studied by wind tunnel experiments at a reduced scale. This introduces several limitations and calls for careful planning of the tests and careful interpretation of the experimental results. The paper first revisits the similitude laws, discusses how they are actually applied in wind engineering, and reminds readers why different scaling laws govern different wind engineering problems. Secondly, the paper focuses on ways to simplify a detailed structure (bridge, building, platform) when fabricating the downscaled models for the tests, illustrated by several examples from recent engineering projects. Finally, under the most severe weather conditions, manmade structures and equipment should remain operational. What “recreating the climate” means and aims to achieve is illustrated through common practice in climatic wind tunnel modelling.

  16. A Pilot Study: Testing of the Psychological Conditions Scale Among Hospital Nurses.

    PubMed

    Fountain, Donna M; Thomas-Hawkins, Charlotte

    2016-11-01

    The aim of this study was to test the reliability and validity of the Psychological Conditions Scale (PCS), a measure of drivers of engagement, in hospital-based nurses. Research suggests that drivers of engagement are positively linked to patient, employee, and hospital outcomes. Although this scale has been used in other occupations, it has not been tested in nursing. A cross-sectional, methodological study was conducted using a convenience sample of 200 nurses in a large Magnet® hospital in New Jersey. Cronbach's α values ranged from .64 to .95. Principal components exploratory factor analysis with oblique rotation revealed that 13 items loaded unambiguously on 3 domains and explained 76% of the variance. Mean PCS scores ranged from 3.62 to 4.68 on a 5-point Likert scale. The scale is an adequate measure of drivers of engagement in hospital-based nurses. Leadership efforts to promote the facilitators of engagement are recommended.
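    For readers unfamiliar with the reliability statistic quoted above, Cronbach's α for a k-item scale can be computed from item variances and total-score variance as sketched below; the data are synthetic, purely to illustrate the formula, not the study's nurse sample:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Perfectly consistent items (identical columns) yield alpha = 1.
scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4]], dtype=float)
print(round(cronbach_alpha(scores), 2))  # → 1.0
```

    Values like the .64 to .95 range reported above would come from real item-response matrices for each PCS domain.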

  17. Point contact tunneling spectroscopy apparatus for large scale mapping of surface superconducting properties

    DOE PAGES

    Groll, Nickolas; Pellin, Michael J.; Zasadzinksi, John F.; ...

    2015-09-18

    In this paper, we describe the design and testing of a point contact tunneling spectroscopy device that can measure material surface superconducting properties (i.e., the superconducting gap Δ and the critical temperature Tc) and density of states over large surface areas, with sizes up to mm². The tip lateral (X,Y) motion, mounted on an (X,Y,Z) piezo-stage, was calibrated on a patterned substrate consisting of Nb lines sputtered on a gold film using both normal (Al) and superconducting (PbSn) tips at 1.5 K. The tip vertical (Z) motion control enables adjustment of the tip-sample junction resistance, which can be measured over 7 orders of magnitude, from a quasi-ohmic regime (a few hundred Ω) to the tunnel regime (tens of kΩ up to a few GΩ). The low-noise electronics and LabVIEW program interface are also presented. Finally, the point contact regime and the large-scale motion capabilities are of particular interest for mapping and testing the superconducting properties of macroscopic-scale superconductor-based devices.

  18. Compression Buckling Behavior of Large-Scale Friction Stir Welded and Riveted 2090-T83 Al-Li Alloy Skin-Stiffener Panels

    NASA Technical Reports Server (NTRS)

    Hoffman, Eric K.; Hafley, Robert A.; Wagner, John A.; Jegley, Dawn C.; Pecquet, Robert W.; Blum, Celia M.; Arbegast, William J.

    2002-01-01

    To evaluate the potential of friction stir welding (FSW) as a replacement for traditional rivet fastening for launch vehicle dry bay construction, a large-scale friction stir welded 2090-T83 aluminum-lithium (Al-Li) alloy skin-stiffener panel was designed and fabricated by Lockheed-Martin Space Systems Company - Michoud Operations (LMSS) as part of NASA Space Act Agreement (SAA) 446. The friction stir welded panel and a conventional riveted panel were tested to failure in compression at the NASA Langley Research Center (LaRC). The present paper describes the compression test results, stress analysis, and associated failure behavior of these panels. The test results provide useful data to support future optimization of FSW processes and structural design configurations for launch vehicle dry bay structures.

  19. Posttest destructive examination of the steel liner in a 1:6-scale reactor containment model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, L.D.

    A 1:6-scale model of a nuclear reactor containment was built and tested at Sandia National Laboratories as part of a research program sponsored by the Nuclear Regulatory Commission to investigate containment behavior under overpressurization. The overpressure test was terminated due to leakage from a large tear in the steel liner. A limited destructive examination of the liner and anchorage system was conducted to gain information about the failure mechanism and is described here. Sections of liner were removed in areas where liner distress was evident or where large strains were indicated by instrumentation during the test. The condition of the liner, anchorage system, and concrete for each of the regions that were investigated is described. The probable cause of the observed posttest condition of the liner is discussed.

  20. A laser-sheet flow visualization technique for the large wind tunnels of the National Full-Scale Aerodynamics Complex

    NASA Technical Reports Server (NTRS)

    Reinath, M. S.; Ross, J. C.

    1990-01-01

    A flow visualization technique for the large wind tunnels of the National Full Scale Aerodynamics Complex (NFAC) is described. The technique uses a laser sheet generated by the NFAC Long Range Laser Velocimeter (LRLV) to illuminate a smoke-like tracer in the flow. The LRLV optical system is modified slightly, and a scanned mirror is added to generate the sheet. These modifications are described, in addition to the results of an initial performance test conducted in the 80- by 120-Foot Wind Tunnel. During this test, flow visualization was performed in the wake region behind a truck as part of a vehicle drag reduction study. The problems encountered during the test are discussed, in addition to the recommended improvements needed to enhance the performance of the technique for future applications.

  1. Experience in managing a large-scale rescreening of Papanicolaou smears and the pros and cons of measuring proficiency with visual and written examinations.

    PubMed

    Rube, I F

    1989-01-01

    Experiences in a large-scale interlaboratory rescreening of Papanicolaou smears are detailed, and the pros and cons of measuring proficiency in cytology are discussed. Despite the additional work of the rescreening project and some psychological and technical problems, it proved to be a useful measure of the laboratory's performance as a whole. One problem to be avoided in future similar studies is the creation of too many diagnostic categories. Individual testing and certification have been shown to be accurate predictors of proficiency. For cytology, such tests require a strong visual component to test interpretation and judgment skills, such as by the use of glass slides or photomicrographs. The potential of interactive videodisc technology for facilitating cytopathologic teaching and assessment is discussed.

  2. Inferring field-scale properties of a fractured aquifer from ground surface deformation during a well test

    NASA Astrophysics Data System (ADS)

    Schuite, Jonathan; Longuevergne, Laurent; Bour, Olivier; Boudin, Frédérick; Durand, Stéphane; Lavenant, Nicolas

    2015-12-01

    Fractured aquifers which bear valuable water resources are often difficult to characterize with classical hydrogeological tools due to their intrinsic heterogeneities. Here we implement ground surface deformation tools (tiltmetry and optical leveling) to monitor groundwater pressure changes induced by a classical hydraulic test at the Ploemeur observatory. By jointly analyzing complementary time constraining data (tilt) and spatially constraining data (vertical displacement), our results strongly suggest that the use of these surface deformation observations allows for estimating storativity and structural properties (dip, root depth, and lateral extension) of a large hydraulically active fracture, in good agreement with previous studies. Hence, we demonstrate that ground surface deformation is a useful addition to traditional hydrogeological techniques and opens possibilities for characterizing important large-scale properties of fractured aquifers with short-term well tests as a controlled forcing.

  3. NASA/FAA general aviation crash dynamics program - An update

    NASA Technical Reports Server (NTRS)

    Hayduk, R. J.; Thomson, R. G.; Carden, H. D.

    1979-01-01

    Work in progress in the NASA/FAA General Aviation Crash Dynamics Program for the development of technology for increased crash-worthiness and occupant survivability of general aviation aircraft is presented. Full-scale crash testing facilities and procedures are outlined, and a chronological summary of full-scale tests conducted and planned is presented. The Plastic and Large Deflection Analysis of Nonlinear Structures and Modified Seat Occupant Model for Light Aircraft computer programs which form part of the effort to predict nonlinear geometric and material behavior of sheet-stringer aircraft structures subjected to large deformations are described, and excellent agreement between simulations and experiments is noted. The development of structural concepts to attenuate the load transmitted to the passenger through the seats and subfloor structure is discussed, and an apparatus built to test emergency locator transmitters in a realistic environment is presented.

  4. Evaluation of Alternative Altitude Scaling Methods for Thermal Ice Protection System in NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Lee, Sam; Addy, Harold; Broeren, Andy P.; Orchard, David M.

    2017-01-01

    A test was conducted at the NASA Icing Research Tunnel to evaluate altitude scaling methods for a thermal ice protection system. Two scaling methods based on the Weber number were compared against a method based on the Reynolds number. The results generally agreed with the previous set of tests conducted in the NRCC Altitude Icing Wind Tunnel. The Weber-number-based scaling methods resulted in smaller runback ice mass than the Reynolds-number-based scaling method. The ice accretions from the Weber-number-based scaling methods also formed farther upstream. However, there were large differences in the accreted ice mass between the two Weber-number-based scaling methods, and the difference became greater as the speed was increased. This indicated that there may be some Reynolds number effects that aren't fully accounted for, which warrants further study.
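
    The trade-off behind the comparison above follows from the definitions of the two similarity parameters: matching the Weber number at reduced scale necessarily changes the Reynolds number. A sketch using generic, illustrative fluid properties (not the tunnel's actual test conditions):

```python
# Weber and Reynolds numbers, the two similarity parameters behind the scaling
# methods compared above. Matching We at half scale forces a Re mismatch.
# All property values below are illustrative, not the icing-test conditions.

def weber(rho, v, L, sigma):
    """We = rho v^2 L / sigma (inertial vs. surface-tension forces)."""
    return rho * v ** 2 * L / sigma

def reynolds(rho, v, L, mu):
    """Re = rho v L / mu (inertial vs. viscous forces)."""
    return rho * v * L / mu

rho, sigma, mu = 1.2, 0.072, 1.8e-5         # illustrative fluid properties
v_full, L_full = 100.0, 0.50                # reference speed (m/s), length (m)
L_half = L_full / 2
v_half = v_full * (L_full / L_half) ** 0.5  # speed that keeps We constant

we_ratio = weber(rho, v_half, L_half, sigma) / weber(rho, v_full, L_full, sigma)
re_ratio = reynolds(rho, v_half, L_half, mu) / reynolds(rho, v_full, L_full, mu)
print(f"We ratio = {we_ratio:.2f}, Re ratio = {re_ratio:.2f}")  # → 1.00, 0.71
```

    Because We scales with v²L and Re with vL, no single model speed preserves both, which is consistent with the residual Reynolds number effects noted above.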

  5. ITC Guidelines on Quality Control in Scoring, Test Analysis, and Reporting of Test Scores

    ERIC Educational Resources Information Center

    Allalouf, Avi

    2014-01-01

    The Quality Control (QC) Guidelines are intended to increase the efficiency, precision, and accuracy of the scoring, analysis, and reporting process of testing. The QC Guidelines focus on large-scale testing operations where multiple forms of tests are created for use on set dates. However, they may also be used for a wide variety of other testing…

  6. Multistage Adaptive Testing for a Large-Scale Classification Test: Design, Heuristic Assembly, and Comparison with Other Testing Modes. ACT Research Report Series, 2012 (6)

    ERIC Educational Resources Information Center

    Zheng, Yi; Nozawa, Yuki; Gao, Xiaohong; Chang, Hua-Hua

    2012-01-01

    Multistage adaptive tests (MSTs) have gained increasing popularity in recent years. MST is a balanced compromise between linear test forms (i.e., paper-and-pencil testing and computer-based testing) and traditional item-level computer-adaptive testing (CAT). It combines the advantages of both. On one hand, MST is adaptive (and therefore more…

  7. Field test of a motorcycle safety education course for novice riders

    DOT National Transportation Integrated Search

    1982-07-01

    The purpose of this study was to subject the Motorcycle Safety Foundation's Motorcycle Rider Course (MRC) to a large-scale field test designed to evaluate the following aspects of the course: (1) Instructional Effectiveness, (2) User Acceptance, and ...

  8. SUMMARY OF SOLIDIFICATION/STABILIZATION SITE DEMONSTRATIONS AT UNCONTROLLED HAZARDOUS WASTE SITES

    EPA Science Inventory

    Four large-scale solidification/stabilization demonstrations have occurred under EPA's SITE program. In general, physical testing results have been acceptable. Reduction in metal leachability, as determined by the TCLP test, has been observed. Reduction in organic leachability ha...

  9. Applications of Magnetic Suspension Technology to Large Scale Facilities: Progress, Problems and Promises

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.

    1997-01-01

    This paper will briefly review previous work in wind tunnel Magnetic Suspension and Balance Systems (MSBS) and will examine the handful of systems around the world currently known to be in operational condition or undergoing recommissioning. Technical developments emerging from research programs at NASA and elsewhere will be reviewed briefly, where there is potential impact on large-scale MSBSs. The likely aerodynamic applications for large MSBSs will be addressed, since these applications should properly drive system designs. A recently proposed application to ultra-high Reynolds number testing will then be addressed in some detail. Finally, some opinions on the technical feasibility and usefulness of a large MSBS will be given.

  10. Unintended Consequences or Testing the Integrity of Teachers and Students.

    ERIC Educational Resources Information Center

    Kimmel, Ernest W.

    Large-scale testing programs are generally based on the assumptions that the test-takers experience standard conditions for taking the test and that everyone will do his or her own work without having prior knowledge of specific questions. These assumptions are not necessarily true. The ways students and educators use to get around standardizing…

  11. Uncertainties in the Item Parameter Estimates and Robust Automated Test Assembly

    ERIC Educational Resources Information Center

    Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G.

    2013-01-01

    Item response theory parameters have to be estimated, and because of the estimation process, they do have uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…

  12. TESTING METHODS FOR DETECTION OF CRYPTOSPORIDIUM SPP. IN WATER SAMPLES

    EPA Science Inventory

    A large waterborne outbreak of cryptosporidiosis in Milwaukee, Wisconsin, U.S.A. in 1993 prompted a search for ways to prevent large-scale waterborne outbreaks of protozoan parasitoses. Methods for detecting Cryptosporidium parvum play an integral role in strategies that lead to...

  13. ORNL Pre-test Analyses of A Large-scale Experiment in STYLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Paul T; Yin, Shengjun; Klasky, Hilda B

    Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large-scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management of non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and life-time management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and applied to evaluate several experiment designs, taking into account the capabilities of the test facility while satisfying the test objectives. These advanced fracture mechanics models will then be utilized to simulate the crack growth in the large-scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite-element solutions and to assess the level of confidence that can be placed in the best-estimate finite-element solutions.

  14. Aquatic Plant Control Research Program. Large-Scale Operations Management Test (LSOMT) of Insects and Pathogens for Control of Waterhyacinth in Louisiana. Volume 1. Results for 1979-1981.

    DTIC Science & Technology

    1985-01-01

    Report by the US Army Engineer Waterways Experiment Station (PO Box 631, Vicksburg, Mississippi 39180-0631) and the University of Tennessee-Chattanooga for the Aquatic Plant Control Research Program. Keywords: aquatic plant control, biological control, Louisiana.

  15. A mesostructured Y zeolite as a superior FCC catalyst--lab to refinery.

    PubMed

    García-Martínez, Javier; Li, Kunhao; Krishnaiah, Gautham

    2012-12-18

    A mesostructured Y zeolite was prepared by a surfactant-templated process at the commercial scale and tested in a refinery, showing superior hydrothermal stability and catalytic cracking selectivity, which demonstrates, for the first time, the promising future of mesoporous zeolites in large scale industrial applications.

  16. Composite Action in Prestressed NU I-Girder Bridge Deck Systems Constructed with Bond Breakers to Facilitate Deck Removal

    DOT National Transportation Integrated Search

    2017-11-01

    Results are reported from tests of small-scale push-off and large-scale composite NU I-girder specimens conducted to establish an interface connection detail that (1) Facilitates in-situ removal of the bridge deck without damaging prestressed girders...

  17. Composite Action in Prestressed NU I-Girder Bridge Deck Systems Constructed with Bond Breakers to Facilitate Deck Removal : Technical Summary

    DOT National Transportation Integrated Search

    2017-11-01

    Results are reported from tests of small-scale push-off and large-scale composite NU I-girder specimens conducted to establish an interface connection detail that (1) Facilitates in-situ removal of the bridge deck without damaging prestressed girders...

  18. Semi-Automated Air-Coupled Impact-Echo Method for Large-Scale Parkade Structure.

    PubMed

    Epp, Tyler; Svecova, Dagmar; Cha, Young-Jin

    2018-03-29

    Structural Health Monitoring (SHM) has moved to data-dense systems, utilizing numerous sensor types to monitor infrastructure, such as bridges and dams, more regularly. One of the issues faced in this endeavour is the scale of the inspected structures and the time it takes to carry out testing. Installing automated systems that can provide measurements in a timely manner is one way of overcoming these obstacles. This study proposes an Artificial Neural Network (ANN) application that determines intact and damaged locations from a small training sample of impact-echo data, using air-coupled microphones from a reinforced concrete beam in lab conditions and data collected from a field experiment in a parking garage. The impact-echo testing in the field is carried out in a semi-autonomous manner to expedite the front end of the in situ damage detection testing. The use of an ANN removes the need for a user-defined cutoff value for the classification of intact and damaged locations when a least-square distance approach is used. It is postulated that this may contribute significantly to testing time reduction when monitoring large-scale civil Reinforced Concrete (RC) structures.
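
    The role the ANN plays above, learning a decision boundary from labelled examples instead of relying on a user-defined cutoff on a least-squares distance, can be illustrated with a minimal single-neuron classifier. The features and labels below are synthetic stand-ins, not the study's impact-echo data:

```python
# Minimal single-neuron (logistic) classifier in the spirit of the ANN
# described above: it learns its own intact/damaged boundary from labelled
# feature vectors rather than using a hand-tuned distance cutoff.
# Features and labels are synthetic stand-ins, not the study's data.
import math

def train(samples, labels, lr=0.5, epochs=2000):
    """Per-sample gradient descent on the logistic (cross-entropy) loss."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = p - y                 # gradient of the log-loss w.r.t. logit
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Sign of the learned linear score (equivalent to p > 0.5)."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Synthetic (frequency-shift, echo-amplitude) features: 1 = damaged location.
X = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.8, 0.9), (0.9, 0.7), (0.75, 0.85)]
y = [0, 0, 0, 1, 1, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])
```

    The boundary here is learned from the training sample itself, which is the property the study highlights as removing the user-defined cutoff.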

  19. Semi-Automated Air-Coupled Impact-Echo Method for Large-Scale Parkade Structure

    PubMed Central

    Epp, Tyler; Svecova, Dagmar; Cha, Young-Jin

    2018-01-01

    Structural Health Monitoring (SHM) has moved to data-dense systems, utilizing numerous sensor types to monitor infrastructure, such as bridges and dams, more regularly. One of the issues faced in this endeavour is the scale of the inspected structures and the time it takes to carry out testing. Installing automated systems that can provide measurements in a timely manner is one way of overcoming these obstacles. This study proposes an Artificial Neural Network (ANN) application that determines intact and damaged locations from a small training sample of impact-echo data, using air-coupled microphones from a reinforced concrete beam in lab conditions and data collected from a field experiment in a parking garage. The impact-echo testing in the field is carried out in a semi-autonomous manner to expedite the front end of the in situ damage detection testing. The use of an ANN removes the need for a user-defined cutoff value for the classification of intact and damaged locations when a least-square distance approach is used. It is postulated that this may contribute significantly to testing time reduction when monitoring large-scale civil Reinforced Concrete (RC) structures. PMID:29596332

  20. Upscaling of U (VI) desorption and transport from decimeter‐scale heterogeneity to plume‐scale modeling

    USGS Publications Warehouse

    Curtis, Gary P.; Kohler, Matthias; Kannappan, Ramakrishnan; Briggs, Martin A.; Day-Lewis, Frederick D.

    2015-01-01

    Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results from batch, column, and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.

  1. Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects.

    PubMed

    Ho, Andrew D; Yu, Carol C

    2015-06-01

    Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
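
    The routine distributional descriptive statistics the authors recommend can be sketched in a few lines of pure Python. The scores below are hypothetical and chosen to show how a ceiling effect produces negative skew:

```python
# Moment-based descriptive statistics of the kind the authors recommend
# computing routinely before model selection. Scores are hypothetical; a
# ceiling effect (many scores piled at the maximum) yields negative skew.

def moments(xs):
    """Return mean, population SD, skewness, and excess kurtosis."""
    n = len(xs)
    m = sum(xs) / n
    sd = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    skew = sum((x - m) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - m) ** 4 for x in xs) / (n * sd ** 4) - 3  # excess kurtosis
    return m, sd, skew, kurt

# Hypothetical test scores piling up at a ceiling of 100:
scores = [62, 70, 75, 81, 85, 88, 92, 95, 98, 100, 100, 100]
mean, sd, skew, kurt = moments(scores)
print(f"mean={mean:.1f} sd={sd:.1f} skew={skew:.2f} excess_kurtosis={kurt:.2f}")
```

    A skewness noticeably below zero, as here, is exactly the kind of departure from normality that should inform the choice of model for large-scale score data.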

  2. Multi-Column Experimental Test Bed Using CaSDB MOF for Xe/Kr Separation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welty, Amy Keil; Greenhalgh, Mitchell Randy; Garn, Troy Gerry

    Processing of spent nuclear fuel produces off-gas from which several volatile radioactive components must be separated for further treatment or storage. As part of the Off-gas Sigma Team, parallel research at INL and PNNL has produced several promising sorbents for the selective capture of xenon and krypton from these off-gas streams. In order to design full-scale treatment systems, sorbents that are promising on a laboratory scale must be proven under process conditions to be considered for pilot and then full-scale use. To that end, a bench-scale multi-column system with capability to test multiple sorbents was designed and constructed at INL. This report details bench-scale testing of CaSDB MOF, produced at PNNL, and compares the results to those reported last year using INL engineered sorbents. Two multi-column tests were performed with the CaSDB MOF installed in the first column, followed by HZ-PAN installed in the second column. The CaSDB MOF column was placed in a Stirling cryocooler while the cryostat was employed for the HZ-PAN column. Test temperatures of 253 K and 191 K were selected for the first column while the second column was held at 191 K for both tests. Calibrated volume sample bombs were utilized for gas stream analyses. At the conclusion of each test, samples were collected from each column and analyzed for gas composition. While CaSDB MOF does appear to have good capacity for Xe, the short time to initial breakthrough would make design of a continuous adsorption/desorption cycle difficult, requiring either very large columns or a large number of smaller columns. Because of the tenacity with which Xe and Kr adhere to the material once adsorbed, this CaSDB MOF may be more suitable for use as a long-term storage solution. Further testing is recommended to determine if CaSDB MOF is suitable for this purpose.

  3. Investigating a link between large and small-scale chaos features on Europa

    NASA Astrophysics Data System (ADS)

    Tognetti, L.; Rhoden, A.; Nelson, D. M.

    2017-12-01

    Chaos is one of the most recognizable, and studied, features on Europa's surface. Most models of chaos formation invoke liquid water at shallow depths within the ice shell; the liquid destabilizes the overlying ice layer, breaking it into mobile rafts and destroying pre-existing terrain. This class of model has been applied to both large-scale chaos like Conamara and small-scale features (i.e. microchaos), which are typically <10 km in diameter. Currently unknown, however, is whether both large-scale and small-scale features are produced together, e.g. through a network of smaller sills linked to a larger liquid water pocket. If microchaos features do form as satellites of large-scale chaos features, we would expect a drop-off in the number density of microchaos with increasing distance from the large chaos feature; the trend should not be observed in regions without large-scale chaos features. Here, we test the hypothesis that large chaos features create "satellite" systems of smaller chaos features. Either outcome will help us better understand the relationship between large-scale chaos and microchaos. We focus first on regions surrounding the large chaos features Conamara and Murias (e.g. the Mitten). We map all chaos features within 90,000 sq km of the main chaos feature and assign each one a ranking (High Confidence, Probable, or Low Confidence) based on the observed characteristics of each feature. In particular, we look for a distinct boundary, loss of preexisting terrain, the existence of rafts or blocks, and the overall smoothness of the feature. We also note features that are chaos-like but lack sufficient characteristics to be classified as chaos. We then apply the same criteria to map microchaos features in regions of similar area (~90,000 sq km) that lack large chaos features.
By plotting the distribution of microchaos with distance from the center point of the large chaos feature or the mapping region (for the cases without a large feature), we determine whether there is a distinct signature linking large-scale chaos features with nearby microchaos. We discuss the implications of these results on the process of chaos formation and the extent of liquid water within Europa's ice shell.
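
    The drop-off test described above amounts to binning mapped microchaos features into concentric annuli around the large chaos feature and comparing number densities. A minimal sketch with hypothetical feature coordinates:

```python
# Number density of microchaos features vs. distance from a large chaos
# feature, as in the drop-off test described above. Feature coordinates are
# hypothetical; density is count per annulus area so bins are comparable.
import math

def radial_density(features, center, ring_width_km, n_rings):
    """Return (outer_radius_km, features_per_sq_km) for concentric annuli."""
    cx, cy = center
    out = []
    for i in range(n_rings):
        r_in, r_out = i * ring_width_km, (i + 1) * ring_width_km
        count = sum(1 for (x, y) in features
                    if r_in <= math.hypot(x - cx, y - cy) < r_out)
        area = math.pi * (r_out ** 2 - r_in ** 2)
        out.append((r_out, count / area))
    return out

# Hypothetical microchaos positions (km) clustered near a feature at (0, 0):
feats = [(5, 3), (-8, 2), (10, -12), (20, 5), (-25, 18), (40, 30), (-60, 45)]
for r, rho in radial_density(feats, (0, 0), 25, 3):
    print(f"r < {r:3d} km: {rho:.5f} features / sq km")
```

    A monotonic decline in density with radius, absent in control regions, would be the signature of the "satellite" hypothesis.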

  4. Some aspects of wind tunnel magnetic suspension systems with special application at large physical scales

    NASA Technical Reports Server (NTRS)

    Britcher, C. P.

    1983-01-01

    Wind tunnel magnetic suspension and balance systems (MSBSs) have so far failed to find application at the large physical scales necessary for the majority of aerodynamic testing. Three areas of technology relevant to such application are investigated. Two variants of the Spanwise Magnet roll torque generation scheme are studied. Spanwise Permanent Magnets are shown to be practical and are experimentally demonstrated. Extensive computations of the performance of the Spanwise Iron Magnet scheme indicate powerful capability, limited principally by electromagnet technology. Aerodynamic testing at extreme attitudes is shown to be practical in relatively conventional MSBSs. Preliminary operation of the MSBS over a wide range of angles of attack is demonstrated. The impact of a requirement for highly reliable operation on the overall architecture of large MSBSs is studied, and it is concluded that system cost and complexity need not be seriously increased.

  5. A theory of forest dynamics: Spatially explicit models and issues of scale

    NASA Technical Reports Server (NTRS)

    Pacala, S.

    1990-01-01

    Good progress has been made in the first year of DOE grant #FG02-90ER60933. The purpose of the project is to develop and investigate models of forest dynamics that apply across a range of spatial scales. The grant is one third of a three-part project. The second third was funded by the NSF this year and is intended to provide the empirical data necessary to calibrate and test small-scale (less than or equal to 1000 ha) models. The final third was also funded this year (NASA), and will provide data to calibrate and test the large-scale features of the models.

  6. Extended general relativity: Large-scale antigravity and short-scale gravity with ω=-1 from five-dimensional vacuum

    NASA Astrophysics Data System (ADS)

    Madriz Aguilar, José Edgar; Bellini, Mauricio

    2009-08-01

    Considering a five-dimensional (5D) Riemannian spacetime with a particular stationary Ricci-flat metric, we obtain in the framework of the induced matter theory an effective 4D static and spherically symmetric metric which gives us ordinary gravitational solutions on small (planetary and astrophysical) scales, but repulsive (antigravitational) forces on very large (cosmological) scales with ω=-1. Our approach is a unified way of describing dark energy, dark matter, and ordinary matter. We illustrate the theory with two examples, the solar system and the great attractor. From the geometrical point of view, these results follow from the assumption that there exists a confining force that makes it possible for test particles to move on a given 4D hypersurface.

  7. Large-scale Advanced Prop-fan (LAP) technology assessment report

    NASA Technical Reports Server (NTRS)

    Degeorge, C. L.

    1988-01-01

    The technologically significant findings and accomplishments of the Large Scale Advanced Prop-Fan (LAP) program in the areas of aerodynamics, aeroelasticity, acoustics, and materials and fabrication are described. The extent to which the program goals related to these disciplines were achieved is discussed, and recommendations for additional research are presented. The LAP program consisted of the design, manufacture, and testing of a near full-scale Prop-Fan, or advanced turboprop, capable of operating efficiently at speeds up to Mach 0.8. An aeroelastically scaled model of the LAP was also designed and fabricated. The goal of the program was to acquire data on Prop-Fan performance that would indicate the technology readiness of Prop-Fans for practical applications in commercial and military aviation.

  8. [Avian influenza virus infection in people occupied in poultry fields in Guangzhou city].

    PubMed

    Liu, Yang; Lu, En-jie; Wang, Yu-lin; Di, Biao; Li, Tie-gang; Zhou, Yong; Yang, Li-li; Xu, Xiao-yin; Fu, Chuan-xi; Wang, Ming

    2009-11-01

    To conduct a serological investigation of H5N1/H9N2/H7N7 infection among people occupied in poultry fields, serum samples were collected from people working in live-poultry and non-poultry retailing food markets, poultry wholesaling, large-scale poultry breeding factories, small-scale farms, wild bird breeding, and swine slaughtering houses, and from the normal population. Antibodies against H5, H9, and H7 were tested and analyzed with hemagglutination inhibition and neutralization tests. Logistic regression and the χ2 test were used. Among 2881 samples, 4 were positive for H5-Ab (0.14%) and 146 were positive for H9-Ab (5.07%), and the prevalence of H9 among people in live poultry retailing (14.96%) was the highest. Prevalence rates of H9 were as follows: 8.90% in people working in large-scale poultry breeding factories, 6.69% in the live poultry wholesaling business, 3.75% in wild bird breeding, 2.40% in swine slaughtering, 2.21% in non-poultry retailing, 1.77% among rural poultry farmers, and 2.30% in the normal population. None of the 1926 poultry workers tested was positive for H7-Ab. The H5 prevalence among people was much lower than expected, but the H9 prevalence was higher. There was a higher risk of AIV infection in live poultry retailing, wholesaling, and large-scale breeding businesses, with the risk in live poultry retailing the highest. The longer the service length, the higher the risk.
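
    The group comparisons above rely on the χ2 test; for a 2x2 seroprevalence table the Pearson statistic can be computed directly. The counts below are hypothetical illustrations, not the study's data:

```python
# Pearson chi-squared statistic for a 2x2 table, the kind of comparison used
# in the study above (e.g., H9 seroprevalence in live-poultry retailers vs.
# the normal population). Counts below are hypothetical, not the study's data.

def chi2_2x2(a, b, c, d):
    """Table [[a, b], [c, d]]: rows = groups, cols = positive/negative."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, r, k in ((a, row1, col1), (b, row1, col2),
                      (c, row2, col1), (d, row2, col2)):
        exp = r * k / n          # expected count under independence
        stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: 19/127 retailers H9-positive vs. 7/304 controls.
stat = chi2_2x2(19, 108, 7, 297)
print(f"chi2 = {stat:.2f}")      # compare against 3.84 (df = 1, alpha = 0.05)
```

    A statistic above the df = 1 critical value of 3.84 indicates a significant difference in prevalence between the two groups at the 5% level.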

  9. Validation of large-scale, monochromatic UV disinfection systems for drinking water using dyed microspheres.

    PubMed

    Blatchley, E R; Shen, C; Scheible, O K; Robinson, J P; Ragheb, K; Bergstrom, D E; Rokjer, D

    2008-02-01

    Dyed microspheres have been developed as a new method for validation of ultraviolet (UV) reactor systems. When properly applied, dyed microspheres allow measurement of the UV dose distribution delivered by a photochemical reactor for a given operating condition. Prior to this research, dyed microspheres had only been applied to a bench-scale UV reactor. The goal of this research was to extend the application of dyed microspheres to large-scale reactors. Dyed microsphere tests were conducted on two prototype large-scale UV reactors at the UV Validation and Research Center of New York (UV Center) in Johnstown, NY. All microsphere tests were conducted under conditions that had been used previously in biodosimetry experiments involving two challenge bacteriophage: MS2 and Qbeta. Numerical simulations based on computational fluid dynamics and irradiance field modeling were also performed for the same set of operating conditions used in the microspheres assays. Microsphere tests on the first reactor illustrated difficulties in sample collection and discrimination of microspheres against ambient particles. Changes in sample collection and work-up were implemented in tests conducted on the second reactor that allowed for improvements in microsphere capture and discrimination against the background. Under these conditions, estimates of the UV dose distribution from the microspheres assay were consistent with numerical simulations and the results of biodosimetry, using both challenge organisms. The combined application of dyed microspheres, biodosimetry, and numerical simulation offers the potential to provide a more in-depth description of reactor performance than any of these methods individually, or in combination. This approach also has the potential to substantially reduce uncertainties in reactor validation, thereby leading to better understanding of reactor performance, improvements in reactor design, and decreases in reactor capital and operating costs.

  10. Full-scale testing and progressive damage modeling of sandwich composite aircraft fuselage structure

    NASA Astrophysics Data System (ADS)

    Leone, Frank A., Jr.

    A comprehensive experimental and computational investigation was conducted to characterize the fracture behavior and structural response of large sandwich composite aircraft fuselage panels containing artificial damage in the form of holes and notches. Full-scale tests were conducted in which panels were subjected to quasi-static combined pressure, hoop, and axial loading up to failure. The panels were constructed using plain-weave carbon/epoxy prepreg face sheets and a Nomex honeycomb core. Panel deformation and notch tip damage development were monitored during the tests using several techniques, including optical observations, strain gages, digital image correlation (DIC), acoustic emission (AE), and frequency response (FR). Additional pretest and posttest inspections were performed via thermography, computer-aided tap tests, ultrasound, x-radiography, and scanning electron microscopy. A framework to simulate damage progression and to predict residual strength using the finite element (FE) method was developed. The DIC provided local and full-field strain fields corresponding to changes in the state of damage and identified the strain components driving damage progression. AE was monitored during loading of all panels, and data analysis methodologies were developed to enable real-time determination of damage initiation, progression, and severity in large composite structures. The FR technique was developed and evaluated for its potential as a real-time nondestructive inspection technique applicable to large composite structures. Due to the large disparity in scale between the fuselage panels and the artificial damage, a global/local analysis was performed. The global FE models fully represented the specific geometries, composite lay-ups, and loading mechanisms of the full-scale tests. A progressive damage model was implemented in the local FE models, allowing the gradual failure of elements in the vicinity of the artificial damage. A set of modifications to the definitions of the local FE model boundary conditions is proposed and developed to address several issues related to the scalability of progressive damage modeling concepts, especially in regard to full-scale fuselage structures. Notable improvements were observed in the ability of the FE models to predict the strength of damaged composite fuselage structures. Excellent agreement was established between the FE model predictions and the experimental results recorded by DIC, AE, FR, and visual observations.

  11. Fire tests for airplane interior materials

    NASA Technical Reports Server (NTRS)

    Tustin, E. A.

    1980-01-01

    Large scale, simulated fire tests of aircraft interior materials were carried out in a salvaged airliner fuselage. Two "design" fire sources were selected: Jet A fuel ignited in the fuselage midsection, and a trash bag fire. Comparison with six established laboratory fire tests shows that some laboratory tests can rank materials according to heat and smoke production, but existing tests do not characterize toxic gas emissions accurately. The report includes test parameters and test details.

  12. Prediction of Large Vessel Occlusions in Acute Stroke: National Institute of Health Stroke Scale Is Hard to Beat.

    PubMed

    Vanacker, Peter; Heldner, Mirjam R; Amiguet, Michael; Faouzi, Mohamed; Cras, Patrick; Ntaios, George; Arnold, Marcel; Mattle, Heinrich P; Gralla, Jan; Fischer, Urs; Michel, Patrik

    2016-06-01

    Endovascular treatment for acute ischemic stroke with a large vessel occlusion was recently shown to be effective. We aimed to develop a score capable of predicting large vessel occlusion eligible for endovascular treatment in early hospital management. Retrospective, cohort study. Two tertiary Swiss stroke centers. Consecutive acute ischemic stroke patients (1,645 patients; Acute STroke Registry and Analysis of Lausanne registry), who had CT angiography within 6 and 12 hours of symptom onset, were categorized according to the occlusion site. Demographic and clinical information was used in logistic regression analysis to derive predictors of large vessel occlusion (defined as intracranial carotid, basilar, and M1 segment of middle cerebral artery occlusions). Based on logistic regression coefficients, an integer score was created and validated internally and externally (848 patients; Bernese Stroke Registry). None. Large vessel occlusions were present in 316 patients (21%) in the derivation cohort and 566 (28%) in the external validation cohort. Five predictors added significantly to the score: National Institute of Health Stroke Scale at admission, hemineglect, female sex, atrial fibrillation, and no history of stroke and prestroke handicap (modified Rankin Scale score < 2). Diagnostic accuracy in the internal and external validation cohorts was excellent (area under the receiver operating characteristic curve, 0.84 in both). The score performed slightly better than the National Institute of Health Stroke Scale alone regarding prediction error (Wilcoxon signed rank test, p < 0.001) and discriminatory power in the derivation and pooled cohorts (area under the receiver operating characteristic curve, 0.81 vs 0.80; DeLong test, p = 0.02). Our score accurately predicts the presence of emergent large vessel occlusions, which are eligible for endovascular treatment. 
However, incorporation of additional demographic and historical information available on hospital arrival provides minimal incremental predictive value compared with the National Institute of Health Stroke Scale alone.
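
The score-building step the abstract describes, rounding logistic-regression coefficients to integer points, can be sketched as follows. The coefficients, point values, and example patient below are hypothetical illustrations, not the published score.

```python
# Hypothetical logistic-regression coefficients for the five predictors
# named in the abstract (per NIHSS point, and per binary factor).
coefs = {
    "nihss_per_point": 0.20,
    "hemineglect": 0.80,
    "female_sex": 0.40,
    "atrial_fibrillation": 0.60,
    "no_prior_stroke_mrs_lt_2": 0.40,
}

# Smallest coefficient defines 1 point; the rest are rounded multiples.
base = min(coefs.values())
points = {k: round(v / base) for k, v in coefs.items()}

def score(nihss, hemineglect, female, afib, no_prior):
    """Integer risk score for large vessel occlusion (illustrative)."""
    s = nihss * points["nihss_per_point"]
    s += hemineglect * points["hemineglect"]
    s += female * points["female_sex"]
    s += afib * points["atrial_fibrillation"]
    s += no_prior * points["no_prior_stroke_mrs_lt_2"]
    return s

# Example patient: NIHSS 14, hemineglect, atrial fibrillation, no prior stroke.
print(score(nihss=14, hemineglect=1, female=0, afib=1, no_prior=1))
```

A threshold on the resulting integer score would then flag patients for urgent CT angiography, mirroring the intended clinical use.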

  13. Design, Construction and Testing of an In-Pile Loop for PWR (Pressurized Water Reactor) Simulation.

    DTIC Science & Technology

    1987-06-01

    computer modeling remains at best semiempirical (C-i), this large variation in scaling factor makes extrapolation of data impossible. The DIDO Water...in a full scale PWR are not practical. The reactor plant is not controlled to tolerances necessary for research, and utilities are reluctant to vary...MIT Reactor Safeguards Committee, in revision 1 to the PCCL Safety Evaluation Report (SER), for final approval to begin in-pile testing and

  14. Confirmation of general relativity on large scales from weak lensing and galaxy velocities.

    PubMed

    Reyes, Reinabelle; Mandelbaum, Rachel; Seljak, Uros; Baldauf, Tobias; Gunn, James E; Lombriser, Lucas; Smith, Robert E

    2010-03-11

    Although general relativity underlies modern cosmology, its applicability on cosmological length scales has yet to be stringently tested. Such a test has recently been proposed, using a quantity, E(G), that combines measures of large-scale gravitational lensing, galaxy clustering and structure growth rate. The combination is insensitive to 'galaxy bias' (the difference between the clustering of visible galaxies and invisible dark matter) and is thus robust to the uncertainty in this parameter. Modified theories of gravity generally predict values of E(G) different from the general relativistic prediction because, in these theories, the 'gravitational slip' (the difference between the two potentials that describe perturbations in the gravitational metric) is non-zero, which leads to changes in the growth of structure and the strength of the gravitational lensing effect. Here we report that E(G) = 0.39 +/- 0.06 on length scales of tens of megaparsecs, in agreement with the general relativistic prediction of E(G) approximately 0.4. The measured value excludes a model within the tensor-vector-scalar gravity theory, which modifies both Newtonian and Einstein gravity. However, the relatively large uncertainty still permits models within f(R) theory, which is an extension of general relativity. A fivefold decrease in uncertainty is needed to rule out these models.
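
The headline numbers in the abstract can be checked with one line of arithmetic: the measured E(G) differs from the general relativistic prediction by a small fraction of the quoted uncertainty.

```python
# Consistency of the measured E_G with the GR prediction, using only
# the numbers quoted in the abstract (0.39 +/- 0.06 vs. ~0.4).
e_g, sigma = 0.39, 0.06
e_g_gr = 0.40

z = (e_g - e_g_gr) / sigma   # tension in units of the quoted error
print(abs(z))                # well within one sigma of GR

# The fivefold decrease in uncertainty the abstract calls for:
sigma_future = sigma / 5     # 0.012
```

At roughly 0.17 sigma, the measurement is fully consistent with GR, while the current error bar is too wide to exclude f(R) models.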

  15. Confirmation of general relativity on large scales from weak lensing and galaxy velocities

    NASA Astrophysics Data System (ADS)

    Reyes, Reinabelle; Mandelbaum, Rachel; Seljak, Uros; Baldauf, Tobias; Gunn, James E.; Lombriser, Lucas; Smith, Robert E.

    2010-03-01

    Although general relativity underlies modern cosmology, its applicability on cosmological length scales has yet to be stringently tested. Such a test has recently been proposed, using a quantity, EG, that combines measures of large-scale gravitational lensing, galaxy clustering and structure growth rate. The combination is insensitive to `galaxy bias' (the difference between the clustering of visible galaxies and invisible dark matter) and is thus robust to the uncertainty in this parameter. Modified theories of gravity generally predict values of EG different from the general relativistic prediction because, in these theories, the `gravitational slip' (the difference between the two potentials that describe perturbations in the gravitational metric) is non-zero, which leads to changes in the growth of structure and the strength of the gravitational lensing effect. Here we report that EG = 0.39+/-0.06 on length scales of tens of megaparsecs, in agreement with the general relativistic prediction of EG~0.4. The measured value excludes a model within the tensor-vector-scalar gravity theory, which modifies both Newtonian and Einstein gravity. However, the relatively large uncertainty still permits models within f(R) theory, which is an extension of general relativity. A fivefold decrease in uncertainty is needed to rule out these models.

  16. Modeling High Temperature Deformation Behavior of Large-Scaled Mg-Al-Zn Magnesium Alloy Fabricated by Semi-continuous Casting

    NASA Astrophysics Data System (ADS)

    Li, Jianping; Xia, Xiangsheng

    2015-09-01

    In order to improve the understanding of the hot deformation and dynamic recrystallization (DRX) behaviors of large-scaled AZ80 magnesium alloy fabricated by semi-continuous casting, compression tests were carried out in the temperature range from 250 to 400 °C and the strain rate range from 0.001 to 0.1 s-1 on a Gleeble 1500 thermo-mechanical machine. The effects of temperature and strain rate on the hot deformation behavior have been expressed by means of the conventional hyperbolic sine equation, and the influence of strain has been incorporated in the equation by considering its effect on the different material constants for large-scaled AZ80 magnesium alloy. In addition, the DRX behavior has been discussed. The results show that the deformation temperature and strain rate exerted a remarkable influence on the flow stress. The constitutive equation of large-scaled AZ80 magnesium alloy for hot deformation at the steady-state stage (ɛ = 0.5) was established. The true stress-true strain curves predicted by the extracted model were in good agreement with the experimental results, thereby confirming the validity of the developed constitutive relation. The DRX kinetic model of large-scaled AZ80 magnesium alloy was established as X_d = 1 - exp[-0.95((ɛ - ɛc)/ɛ*)^2.4904]. The rate of DRX increases with increasing deformation temperature, and high temperature is beneficial for achieving complete DRX in the large-scaled AZ80 magnesium alloy.
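
The DRX kinetic model quoted in the abstract, X_d = 1 - exp[-0.95((ɛ - ɛc)/ɛ*)^2.4904], can be evaluated directly. The critical strain ɛc and characteristic strain ɛ* depend on temperature and strain rate; the values used below are illustrative only.

```python
import math

def drx_fraction(strain, eps_c, eps_star):
    """DRX volume fraction from the abstract's kinetic model:
    X_d = 1 - exp[-0.95 * ((eps - eps_c) / eps_star) ** 2.4904]."""
    if strain <= eps_c:
        return 0.0  # no recrystallization before the critical strain
    x = (strain - eps_c) / eps_star
    return 1.0 - math.exp(-0.95 * x ** 2.4904)

# Illustrative parameters (not fitted values from the paper):
eps_c, eps_star = 0.05, 0.30
for eps in (0.05, 0.2, 0.5, 1.0):
    print(eps, round(drx_fraction(eps, eps_c, eps_star), 3))
```

The Avrami-type form gives the expected sigmoidal growth: zero below ɛc, then saturating toward complete recrystallization at large strain.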

  17. Low frequency steady-state brain responses modulate large scale functional networks in a frequency-specific manner.

    PubMed

    Wang, Yi-Feng; Long, Zhiliang; Cui, Qian; Liu, Feng; Jing, Xiu-Juan; Chen, Heng; Guo, Xiao-Nan; Yan, Jin H; Chen, Hua-Fu

    2016-01-01

    Neural oscillations are essential for brain functions. Research has suggested that the frequency of neural oscillations is lower for more integrative and remote communications. In this vein, some resting-state studies have suggested that large scale networks function in the very low frequency range (<1 Hz). However, it is difficult to determine the frequency characteristics of brain networks because both resting-state studies and conventional frequency tagging approaches cannot simultaneously capture multiple large scale networks in controllable cognitive activities. In this preliminary study, we aimed to examine whether large scale networks can be modulated by task-induced low frequency steady-state brain responses (lfSSBRs) in a frequency-specific pattern. In a revised attention network test, the lfSSBRs were evoked in the triple network system and sensory-motor system, indicating that large scale networks can be modulated in a frequency-tagging manner. Furthermore, the inter- and intranetwork synchronizations as well as coherence were increased at the fundamental frequency and the first harmonic rather than at other frequency bands, indicating a frequency-specific modulation of information communication. However, there was no difference among attention conditions, indicating that lfSSBRs modulate the general attention state far more strongly than they distinguish between attention conditions. This study provides insights into the advantage and mechanism of lfSSBRs. More importantly, it opens a new way to investigate frequency-specific large scale brain activities. © 2015 Wiley Periodicals, Inc.

  18. bigSCale: an analytical framework for big-scale single-cell data.

    PubMed

    Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger

    2018-06-01

    Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlined its speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin (Reln)-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets. 
© 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.

  19. Applying Adaptive Variables in Computerised Adaptive Testing

    ERIC Educational Resources Information Center

    Triantafillou, Evangelos; Georgiadou, Elissavet; Economides, Anastasios A.

    2007-01-01

    Current research in computerised adaptive testing (CAT) focuses on applications, in small and large scale, that address self assessment, training, employment, teacher professional development for schools, industry, military, assessment of non-cognitive skills, etc. Dynamic item generation tools and automated scoring of complex, constructed…

  20. Climbing the Corporate Ladder.

    ERIC Educational Resources Information Center

    Smith, Christopher

    The employment records of a large northeastern manufacturing plant were analyzed to test the opportunity for career advancement within a large-scale industrial establishment. The employment records analyzed covered the years 1921 through 1937 and more than 28,000 different employees (male and female). The company was selected as being…

  1. ARC-1996-AC95-0154-333

    NASA Image and Video Library

    1996-02-23

    CALF/JAST X-32 test program: the LSPM (Large Scale Powered Model), Lockheed's concept for a tri-service aircraft (Air Force, Navy, Marines) CALF (Common Affordable Lightweight Fighter) as part of the Department of Defense's Joint Advanced Strike Technology (JAST) is being tested in the 80x120ft w.t. test-930 with rear horizontal stabilizer

  2. Measuring What Really Matters

    ERIC Educational Resources Information Center

    Wei, Ruth Chung; Pecheone, Raymond L.; Wilczak, Katherine L.

    2015-01-01

    Since the passage of No Child Left Behind, large-scale assessments have come to play a central role in federal and state education accountability systems. Teachers and parents have expressed a number of concerns about their state testing programs, such as too much time devoted to testing and the high-stakes use of testing for teacher evaluation.…

  3. Bi-Factor MIRT Observed-Score Equating for Mixed-Format Tests

    ERIC Educational Resources Information Center

    Lee, Guemin; Lee, Won-Chan

    2016-01-01

    The main purposes of this study were to develop bi-factor multidimensional item response theory (BF-MIRT) observed-score equating procedures for mixed-format tests and to investigate relative appropriateness of the proposed procedures. Using data from a large-scale testing program, three types of pseudo data sets were formulated: matched samples,…

  4. Heritability in Cognitive Performance: Evidence Using Computer-Based Testing

    ERIC Educational Resources Information Center

    Hervey, Aaron S.; Greenfield, Kathryn; Gualtieri, C. Thomas

    2012-01-01

    There is overwhelming evidence of genetic influence on cognition. The effect is seen in general cognitive ability, as well as in specific cognitive domains. A conventional assessment approach using face-to-face paper and pencil testing is difficult for large-scale studies. Computerized neurocognitive testing is a suitable alternative. A total of…

  5. Small scale noise and wind tunnel tests of upper surface blowing nozzle flap concepts. Volume 1. Aerodynamic test results

    NASA Technical Reports Server (NTRS)

    Renselaer, D. J.; Nishida, R. S.; Wilkin, C. A.

    1975-01-01

    The results and analyses of aerodynamic and acoustic studies conducted in the small scale noise and wind tunnel tests of upper surface blowing nozzle flap concepts are presented. Various types of nozzle flap concepts were tested: an upper surface blowing concept with a multiple slot arrangement with seven slots (seven-slotted nozzle), and an upper surface blowing type with a large nozzle exit at approximately the mid-chord location in conjunction with a powered trailing edge flap with multiple slots (split flow or partially slotted nozzle). In addition, aerodynamic tests were continued on a similar multi-slotted nozzle flap, but with 14 slots. All three types of nozzle flap concepts tested appear to be about equal in overall aerodynamic performance, with the split flow nozzle somewhat better than the other two nozzle flaps in the landing approach mode. All nozzle flaps can be deflected to a large angle to increase drag without significant loss in lift. The nozzle flap concepts appear to be viable aerodynamic drag modulation devices for landing.

  6. A normal stress subgrid-scale eddy viscosity model in large eddy simulation

    NASA Technical Reports Server (NTRS)

    Horiuti, K.; Mansour, N. N.; Kim, John J.

    1993-01-01

    The Smagorinsky subgrid-scale eddy viscosity model (SGS-EVM) is commonly used in large eddy simulations (LES) to represent the effects of the unresolved scales on the resolved scales. This model is known to be limited because its constant must be optimized in different flows, and it must be modified with a damping function to account for near-wall effects. The recent dynamic model is designed to overcome these limitations but is computationally intensive compared to the traditional SGS-EVM. In a recent study using direct numerical simulation data, Horiuti has shown that these drawbacks are due mainly to the use of an improper velocity scale in the SGS-EVM. He also proposed the use of the subgrid-scale normal stress as a new velocity scale, inspired by a high-order anisotropic representation model. Horiuti's testing, however, was conducted using DNS data from a low Reynolds number channel flow simulation. Further testing at higher Reynolds numbers, and with flows other than wall-bounded shear flows, was needed to establish the validity of the new model. This is the primary motivation of the present study. The objective is to test the new model using DNS databases of high Reynolds number channel and fully developed turbulent mixing layer flows. The use of both channel (wall-bounded) and mixing layer flows is important for the development of accurate LES models because these two flows encompass many characteristic features of complex turbulent flows.
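
The standard Smagorinsky closure the abstract discusses, nu_t = (C_s Δ)^2 |S| with |S| = sqrt(2 S_ij S_ij), together with a van Driest-style near-wall damping function, can be sketched as below. The constant C_s = 0.17 and the strain-rate values are illustrative assumptions, not parameters from this study.

```python
import math

def smagorinsky_nu_t(s_ij, delta, c_s=0.17):
    """Smagorinsky SGS eddy viscosity: nu_t = (C_s * Delta)^2 * |S|,
    with |S| = sqrt(2 * S_ij S_ij) from the resolved strain-rate tensor."""
    s_mag = math.sqrt(2.0 * sum(s * s for row in s_ij for s in row))
    return (c_s * delta) ** 2 * s_mag

def van_driest_damping(y_plus, a_plus=25.0):
    """Near-wall damping factor of the kind the abstract alludes to."""
    return 1.0 - math.exp(-y_plus / a_plus)

# Illustrative resolved strain-rate tensor (1/s) and filter width (m):
s_ij = [[0.0, 0.5, 0.0],
        [0.5, 0.0, 0.0],
        [0.0, 0.0, 0.0]]
nu_t = smagorinsky_nu_t(s_ij, delta=0.01)
# Damping is applied to the length scale, hence squared on nu_t:
nu_t_wall = nu_t * van_driest_damping(y_plus=5.0) ** 2
print(nu_t, nu_t_wall)
```

The need to retune c_s per flow and to bolt on the damping function is exactly the limitation that motivates the normal-stress velocity scale tested in the study.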

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data set during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
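
The reason subcolumn averaging beats domain-mean forcing is that the column response is nonlinear, so f(mean forcing) differs from mean(f(forcing)). The threshold response below is a toy stand-in for a convection scheme, not SCAM5 itself.

```python
def precip_response(forcing):
    """Hypothetical column: precipitation only for upward (positive)
    large-scale forcing, as a crude stand-in for triggered convection."""
    return max(0.0, forcing)

# E.g. a front occupying only part of the domain: one strongly forced
# subcolumn among weakly or negatively forced ones (illustrative values).
grid_forcing = [-2.0, 0.0, 4.0]

# Single run driven by the domain-mean forcing:
mean_forcing_run = precip_response(sum(grid_forcing) / len(grid_forcing))

# Run each subcolumn separately, then average the results:
gridded_runs = sum(precip_response(f) for f in grid_forcing) / len(grid_forcing)

print(mean_forcing_run, gridded_runs)  # the gridded average is larger
```

The domain-mean run dilutes the frontal subcolumn's forcing and underestimates precipitation, which is the behavior the study reports for SCAM5 during frontal passages.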

  8. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    NASA Astrophysics Data System (ADS)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It makes it possible to define the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined so that they are simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
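
The virtual-component idea can be sketched as a test-only wrapper that presents a chain of data-flow components as one unit with a built-in test. The component names and the test case below are hypothetical illustrations, not the paper's implementations.

```python
class Component:
    """A data-flow building block: consumes input, produces output."""
    def process(self, data):
        raise NotImplementedError

class Parser(Component):
    def process(self, data):
        return [int(x) for x in data.split(",")]

class Scaler(Component):
    def process(self, data):
        return [2 * x for x in data]

class VirtualComponent(Component):
    """Presents several assembled components as one testable unit (BIT)."""
    def __init__(self, *components):
        self.components = components

    def process(self, data):
        for c in self.components:
            data = c.process(data)  # pass data along the flow
        return data

    def built_in_test(self, case_input, expected):
        """Built-in integration test over the whole assembled flow."""
        return self.process(case_input) == expected

flow = VirtualComponent(Parser(), Scaler())
assert flow.built_in_test("1,2,3", [2, 4, 6])
print("integration BIT passed")
```

Because the test targets the assembled flow rather than any single component, it exercises exactly the inter-component data path that unit-level BIT misses.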

  9. Characterizing the Response of Composite Panels to a Pyroshock Induced Environment Using Design of Experiments Methodology

    NASA Technical Reports Server (NTRS)

    Parsons, David S.; Ordway, David; Johnson, Kenneth

    2013-01-01

    This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.
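
A full-factorial layout over the four parameters the study names (panel thickness, ply type, ply orientation, and pyroshock level) is the simplest DOE starting point. The specific factor levels below are hypothetical; the study does not list them in this abstract.

```python
from itertools import product

# Hypothetical two-level settings for each of the four factors:
factors = {
    "thickness_plies": [8, 16],
    "ply_type": ["uni_tape", "fabric"],
    "orientation": ["0/90", "+45/-45"],
    "shock_level": ["low", "high"],
}

# Every combination of levels = one test article / run.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))  # 2 * 2 * 2 * 2 = 16 runs in the full factorial
```

In practice a fractional factorial would likely be screened first, since the point of the DOE analysis is to drop insignificant parameters before large scale testing.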

  10. Characterizing the Response of Composite Panels to a Pyroshock Induced Environment using Design of Experiments Methodology

    NASA Technical Reports Server (NTRS)

    Parsons, David S.; Ordway, David O.; Johnson, Kenneth L.

    2013-01-01

    This experimental study seeks to quantify the impact various composite parameters have on the structural response of a composite structure in a pyroshock environment. The prediction of an aerospace structure's response to pyroshock induced loading is largely dependent on empirical databases created from collections of development and flight test data. While there is significant structural response data due to pyroshock induced loading for metallic structures, there is much less data available for composite structures. One challenge of developing a composite pyroshock response database as well as empirical prediction methods for composite structures is the large number of parameters associated with composite materials. This experimental study uses data from a test series planned using design of experiments (DOE) methods. Statistical analysis methods are then used to identify which composite material parameters most greatly influence a flat composite panel's structural response to pyroshock induced loading. The parameters considered are panel thickness, type of ply, ply orientation, and pyroshock level induced into the panel. The results of this test will aid in future large scale testing by eliminating insignificant parameters as well as aid in the development of empirical scaling methods for composite structures' response to pyroshock induced loading.

  11. Evaluating the Comparability of Paper-and-Pencil and Computerized Versions of a Large-Scale Certification Test. Research Report. ETS RR-05-21

    ERIC Educational Resources Information Center

    Puhan, Gautam; Boughton, Keith A.; Kim, Sooyeon

    2005-01-01

    The study evaluated the comparability of two versions of a teacher certification test: a paper-and-pencil test (PPT) and computer-based test (CBT). Standardized mean difference (SMD) and differential item functioning (DIF) analyses were used as measures of comparability at the test and item levels, respectively. Results indicated that effect sizes…

  12. Wind Tunnel Testing of a 120th Scale Large Civil Tilt-Rotor Model in Airplane and Helicopter Modes

    NASA Technical Reports Server (NTRS)

    Theodore, Colin R.; Willink, Gina C.; Russell, Carl R.; Amy, Alexander R.; Pete, Ashley E.

    2014-01-01

    In April 2012 and October 2013, NASA and the U.S. Army jointly conducted a wind tunnel test program examining two notional large tilt rotor designs: NASA's Large Civil Tilt Rotor and the Army's High Efficiency Tilt Rotor. The approximately 6%-scale airframe models (unpowered) were tested without rotors in the U.S. Army 7- by 10-foot wind tunnel at NASA Ames Research Center. Measurements of all six forces and moments acting on the airframe were taken using the wind tunnel scale system. In addition to force and moment measurements, flow visualization using tufts, infrared thermography and oil flow were used to identify flow trajectories, boundary layer transition and areas of flow separation. The purpose of this test was to collect data for the validation of computational fluid dynamics tools, for the development of flight dynamics simulation models, and to validate performance predictions made during conceptual design. This paper focuses on the results for the Large Civil Tilt Rotor model in an airplane mode configuration up to 200 knots of wind tunnel speed. Results are presented with the full airframe model with various wing tip and nacelle configurations, and for a wing-only case also with various wing tip and nacelle configurations. Key results show that the addition of a wing extension outboard of the nacelles produces a significant increase in the lift-to-drag ratio, and interestingly decreases the drag compared to the case where the wing extension is not present. The drag decrease is likely due to complex aerodynamic interactions between the nacelle and wing extension that results in a significant drag benefit.

  13. Second-Generation Large Civil Tiltrotor 7- by 10-Foot Wind Tunnel Test Data Report

    NASA Technical Reports Server (NTRS)

    Theodore, Colin R.; Russell, Carl R.; Willink, Gina C.; Pete, Ashley E.; Adibi, Sierra A.; Ewert, Adam; Theuns, Lieselotte; Beierle, Connor

    2016-01-01

    An approximately 6-percent scale model of the NASA Second-Generation Large Civil Tiltrotor (LCTR2) Aircraft was tested in the U.S. Army 7- by 10-Foot Wind Tunnel at NASA Ames Research Center January 4 to April 19, 2012, and September 18 to November 1, 2013. The full model was tested, along with modified versions in order to determine the effects of the wing tip extensions and nacelles; the wing was also tested separately in the various configurations. In both cases, the wing and nacelles used were adopted from the U.S. Army High Efficiency Tilt Rotor (HETR) aircraft, in order to limit the cost of the experiment. The full airframe was tested in high-speed cruise and low-speed hover flight conditions, while the wing was tested only in cruise conditions, with Reynolds numbers ranging from 0 to 1.4 million. In all cases, the external scale system of the wind tunnel was used to collect data. Both models were mounted to the scale using two support struts attached underneath the wing; the full airframe model also used a third strut attached at the tail. The collected data provides insight into the performance of the preliminary design of the LCTR2 and will be used for computational fluid dynamics (CFD) validation and the development of flight dynamics simulation models.

  14. Pharmaceutical Raw Material Identification Using Miniature Near-Infrared (MicroNIR) Spectroscopy and Supervised Pattern Recognition Using Support Vector Machine

    PubMed Central

    Hsiung, Chang; Pederson, Christopher G.; Zou, Peng; Smith, Valton; von Gunten, Marc; O’Brien, Nada A.

    2016-01-01

    Near-infrared spectroscopy, as a rapid and non-destructive analytical technique, offers great advantages for pharmaceutical raw material identification (RMID) to fulfill the quality and safety requirements in the pharmaceutical industry. In this study, we demonstrated the use of portable miniature near-infrared (MicroNIR) spectrometers for NIR-based pharmaceutical RMID and solved two challenges in this area, model transferability and large-scale classification, with the aid of support vector machine (SVM) modeling. We used a set of 19 pharmaceutical compounds including various active pharmaceutical ingredients (APIs) and excipients and six MicroNIR spectrometers to test model transferability. For the test of large-scale classification, we used another set of 253 pharmaceutical compounds comprised of both chemically and physically different APIs and excipients. We compared SVM with conventional chemometric modeling techniques, including soft independent modeling of class analogy, partial least squares discriminant analysis, linear discriminant analysis, and quadratic discriminant analysis. Support vector machine modeling using a linear kernel, especially when combined with a hierarchical scheme, exhibited excellent performance in both model transferability and large-scale classification. Hence, ultra-compact, portable and robust MicroNIR spectrometers coupled with SVM modeling can make on-site and in situ pharmaceutical RMID for large-volume applications highly achievable. PMID:27029624
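
    The linear-kernel SVM the record above credits for both model transferability and large-scale classification can be illustrated with a minimal, self-contained sketch. This is not the authors' pipeline: the Pegasos-style subgradient trainer, the synthetic two-class "spectra" (Gaussian absorption bumps at made-up band positions), and all parameter values are assumptions for illustration only.

```python
import numpy as np

# Pegasos-style subgradient training of a linear SVM (hinge loss + L2 penalty).
def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """y must be coded as -1/+1; a constant column in X plays the bias role."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)              # decaying step size
            if y[i] * (X[i] @ w) < 1.0:        # margin violated: hinge gradient
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # only the L2 shrinkage applies
                w = (1.0 - eta * lam) * w
    return w

# Synthetic two-class "NIR spectra": noise plus a class-specific absorption bump.
rng = np.random.default_rng(1)
n_bands, n_per_class = 64, 40
grid = np.arange(n_bands)

def make_class(center, n):
    bump = np.exp(-0.5 * ((grid - center) / 4.0) ** 2)
    return rng.normal(0.0, 0.05, (n, n_bands)) + bump

X = np.vstack([make_class(20, n_per_class), make_class(44, n_per_class)])
X = np.hstack([X, np.ones((len(X), 1))])       # append a bias feature
y = np.array([-1] * n_per_class + [+1] * n_per_class)

w = train_linear_svm(X, y)
accuracy = float((np.sign(X @ w) == y).mean())
```

    On well-separated synthetic classes like these the linear decision rule should classify essentially all training spectra correctly; a hierarchical scheme, as in the study, would stack such classifiers over groups of compounds.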

  15. Flap noise measurements for STOL configurations using external upper surface blowing

    NASA Technical Reports Server (NTRS)

    Dorsch, R. G.; Reshotko, M.; Olsen, W. A.

    1972-01-01

    Screening tests of upper surface blowing on externally blown flap configurations were conducted. Noise and turning effectiveness data were obtained with small-scale, engine-over-the-wing models. One large model was tested to determine scale effects. Nozzle types included circular, slot, D-shaped, and multilobed. Tests were made with and without flow attachment devices. For STOL applications, the particular multilobed mixer and the D-shaped nozzles tested were found to offer little or no noise advantage over the round convergent nozzle. High aspect ratio slot nozzles provided the quietest configurations. In general, upper surface blowing was quieter than lower surface blowing for equivalent EBF models.

  16. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    PubMed

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that image retrieval systems should be scaled up significantly, to the point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.

  17. Formation of outflow channels on Mars: Testing the origin of Reull Vallis in Hesperia Planum by large-scale lava-ice interactions and top-down melting

    NASA Astrophysics Data System (ADS)

    Cassanelli, James P.; Head, James W.

    2018-05-01

    The Reull Vallis outflow channel is a segmented system of fluvial valleys which originates from the volcanic plains of the Hesperia Planum region of Mars. Explanation of the formation of the Reull Vallis outflow channel by canonical catastrophic groundwater release models faces difficulties with generating sufficient hydraulic head, requiring unreasonably high aquifer permeability, and from limited recharge sources. Recent work has proposed that large-scale lava-ice interactions could serve as an alternative mechanism for outflow channel formation on the basis of predictions of regional ice sheet formation in areas that also underwent extensive contemporaneous volcanic resurfacing. Here we assess in detail the potential formation of outflow channels by large-scale lava-ice interactions through an applied case study of the Reull Vallis outflow channel system, selected for its close association with the effusive volcanic plains of the Hesperia Planum region. We first review the geomorphology of the Reull Vallis system to outline criteria that must be met by the proposed formation mechanism. We then assess local and regional lava heating and loading conditions and generate model predictions for the formation of Reull Vallis to test against the outlined geomorphic criteria. We find that successive events of large-scale lava-ice interactions that melt ice deposits, which then undergo re-deposition due to climatic mechanisms, best explains the observed geomorphic criteria, offering improvements over previously proposed formation models, particularly in the ability to supply adequate volumes of water.

  18. On the Scaling Laws for Jet Noise in Subsonic and Supersonic Flow

    NASA Technical Reports Server (NTRS)

    Vu, Bruce; Kandula, Max

    2003-01-01

    The scaling laws for the simulation of noise from subsonic and ideally expanded supersonic jets are examined with regard to their applicability to deduce full scale conditions from small-scale model testing. Important parameters of scale model testing for the simulation of jet noise are identified, and the methods of estimating full-scale noise levels from simulated scale model data are addressed. The limitations of cold-jet data in estimating high-temperature supersonic jet noise levels are discussed. It is shown that the jet Mach number (jet exit velocity/sound speed at jet exit) is a more general and convenient parameter for noise scaling purposes than the ratio of jet exit velocity to ambient speed of sound. A similarity spectrum is also proposed, which accounts for jet Mach number, angle to the jet axis, and jet density ratio. The proposed spectrum reduces nearly to the well-known similarity spectra proposed by Tam for the large-scale and the fine-scale turbulence noise in the appropriate limit.
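
    Two of the standard scaling relations this kind of scale-model work relies on, spectral collapse on the Strouhal number and spherical-spreading distance correction, can be sketched numerically. These are textbook relations, not the paper's proposed similarity spectrum, and every number below is illustrative.

```python
import math

# Spectra from geometrically similar jets collapse on the Strouhal number
# St = f * D / U_j, so at matched jet velocity a model-scale frequency maps
# to full scale through the diameter ratio.
def full_scale_frequency_hz(f_model_hz, d_model_m, d_full_m):
    return f_model_hz * d_model_m / d_full_m

# Free-field levels follow spherical spreading: -20 log10(R / R_ref) dB.
def spl_at_distance_db(spl_ref_db, r_ref_m, r_m):
    return spl_ref_db - 20.0 * math.log10(r_m / r_ref_m)

# A 10 kHz spectral peak on a 1/10-scale nozzle corresponds to 1 kHz full scale.
f_full = full_scale_frequency_hz(10_000.0, 0.1, 1.0)
# Doubling the measurement distance lowers the level by about 6 dB.
spl_2x = spl_at_distance_db(100.0, 1.0, 2.0)
```

    The paper's contribution is the additional dependence on jet Mach number, angle, and density ratio, which these elementary relations do not capture.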

  19. Cold dark matter confronts the cosmic microwave background - Large-angular-scale anisotropies in Omega sub 0 + lambda = 1 models

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola

    1992-01-01

    A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Omega sub 0 of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low-order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.

  20. FDTD method for laser absorption in metals for large scale problems.

    PubMed

    Deng, Chun; Ki, Hyungson

    2013-10-21

    The FDTD method has been used successfully for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grid points. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging the laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.
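
    For readers unfamiliar with the method, a minimal 1D Yee-scheme FDTD loop (vacuum, normalized units) shows the leapfrog update structure the abstract refers to. The wavelength-enlargement technique itself is not reproduced here, and the grid size, source, and Courant number are illustrative choices, not values from the paper.

```python
import numpy as np

# Minimal 1D FDTD (Yee scheme) in vacuum, normalized units (c = 1).
# Courant number S = c*dt/dx = 0.5 keeps the explicit update stable (S <= 1).
nx, nt, S = 200, 300, 0.5
ez = np.zeros(nx)        # E-field at integer grid points
hy = np.zeros(nx - 1)    # H-field at half-integer points (Yee staggering)

for n in range(nt):
    hy += S * np.diff(ez)                    # update H from the curl of E
    ez[1:-1] += S * np.diff(hy)              # update E from the curl of H
    # ez[0] and ez[-1] stay zero: perfectly conducting walls.
    ez[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source

peak = float(np.abs(ez).max())
```

    The cost problem the paper addresses is visible here: resolving a 1.06 μm wavelength with, say, 20 cells per wavelength over millimeters of material demands tens of thousands of cells per axis, which is why enlarging the effective wavelength while preserving reflectivity pays off.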

  1. Fundamental Concerns in High-Stakes Language Testing: The Case of the College English Test

    ERIC Educational Resources Information Center

    Jin, Yan

    2011-01-01

    The College English Test (CET) is an English language test designed for educational purposes, administered on a very large scale, and used for making high-stakes decisions. This paper discusses the key issues facing the CET during the course of its development in the past two decades. It argues that the most fundamental and critical concerns of…

  2. Involving Diverse Communities of Practice to Minimize Unintended Consequences of Test-Based Accountability Systems

    ERIC Educational Resources Information Center

    Behizadeh, Nadia; Engelhard, George, Jr.

    2015-01-01

    In his focus article, Koretz (this issue) argues that accountability has become the primary function of large-scale testing in the United States. He then points out that tests being used for accountability purposes are flawed and that the high-stakes nature of these tests creates a context that encourages score inflation. Koretz is concerned about…

  3. The Nature of the Average Difference Between Whites and Blacks on Psychometric Tests: Spearman's Hypothesis.

    ERIC Educational Resources Information Center

    Jensen, Arthur R.

    Charles Spearman originally suggested in 1927 that the varying magnitudes of the mean differences between whites and blacks in standardized scores on a variety of mental tests are directly related to the size of the tests' loadings on g, the general factor common to all complex tests of mental ability. Several independent large-scale studies…

  4. The Search for the Holy Grail: Content-Referenced Score Interpretations from Large-Scale Tests

    ERIC Educational Resources Information Center

    Marion, Scott F.

    2015-01-01

    The measurement industry is in crisis. The public outcry against "over testing" and the opt-out movement are symptoms of a larger sociopolitical battle being fought over Common Core, teacher evaluation, federal intrusion, and a host of other issues, but much of the vitriol is directed at the tests and the testing industry. If we, as…

  5. Test and Measurement Expert Opinions: A Dialogue about Testing Students with Disabilities Out of Level in Large-Scale Assessments. Out-of-Level Testing Report.

    ERIC Educational Resources Information Center

    Minnema, Jane; Thurlow, Martha; Bielinski, John

    Two focus groups of test and measurement experts were held to explore the use of out-of-level testing for students with disabilities. The participants (n=17) included state and federal level assessment personnel, test company employees, and university professors. A content analysis of the narrative results indicated that there was no clear…

  6. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    NASA Astrophysics Data System (ADS)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted many times. Among the controlling factors, gravitational acceleration (g) on the scale models was treated as a constant (Earth's gravity) in most analogue model studies, and only a few considered larger gravitational accelerations by using a centrifuge (an apparatus that generates a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow large scale-down factors and accelerate deformation driven by density differences, such as salt diapirism, the possible model size is usually limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows scale models with surface areas up to 70 by 70 cm under a maximum capacity of 240 g-tons. Using this centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of back-arc basins. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
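
    The scaling logic behind centrifuge modelling can be made concrete: spinning a model at N times Earth gravity reproduces prototype self-weight stresses when lengths shrink by a factor of N. A small sketch under that standard assumption follows; apart from the 240 g-ton capacity quoted above, the masses and g-levels are hypothetical.

```python
# Geotechnical-centrifuge scaling: at N times Earth gravity a model with
# lengths shrunk by 1/N reproduces prototype self-weight stresses, since
# rho * (N * g) * (L / N) == rho * g * L.

def prototype_length_m(model_length_m, n_gravities):
    """Prototype length represented by a model feature spun at N g."""
    return model_length_m * n_gravities

def payload_g_tons(model_mass_tons, n_gravities):
    """Capacity bookkeeping: payload mass times g-level ('g-tons')."""
    return model_mass_tons * n_gravities

# A 0.70 m model basin at 100 g stands in for a ~70 m prototype basin;
# a hypothetical 2.4 t payload at 100 g would use the full 240 g-tons.
p_len = prototype_length_m(0.70, 100)
capacity_used = payload_g_tons(2.4, 100)
```

    The trade-off described in the abstract follows directly: higher g permits smaller, faster-deforming models, but the payload-times-g budget caps how large a model the machine can spin.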

  7. Energy Efficient Engine acoustic supporting technology report

    NASA Technical Reports Server (NTRS)

    Lavin, S. P.; Ho, P. Y.

    1985-01-01

    The acoustic development of the Energy Efficient Engine combined testing and analysis using scale model rigs and an Integrated Core/Low Spool (ICLS) demonstration engine. The scale model tests show that a cut-on blade/vane ratio fan with large spacing (S/C = 2.3) is as quiet as a cut-off blade/vane ratio fan with tighter spacing (S/C = 1.27). Scale model mixer tests show that separate-flow nozzles are the noisiest and conic nozzles the quietest, with forced mixers in between. Based on projections of ICLS data, the Energy Efficient Engine (E3) has FAR 36 margins of 3.7 EPNdB at approach, 4.5 EPNdB at full-power takeoff, and 7.2 EPNdB at sideline conditions.

  8. Testing Einstein's Gravity on Large Scales

    NASA Technical Reports Server (NTRS)

    Prescod-Weinstein, Chandra

    2011-01-01

    A little over a decade has passed since two teams studying high redshift Type Ia supernovae announced the discovery that the expansion of the universe was accelerating. After all this time, we're still not sure how cosmic acceleration fits into the theory that tells us about the large-scale universe: General Relativity (GR). As part of our search for answers, we have been forced to question GR itself. But how will we test our ideas? We are fortunate enough to be entering the era of precision cosmology, where the standard model of gravity can be subjected to more rigorous testing. Various techniques will be employed over the next decade or two in the effort to better understand cosmic acceleration and the theory behind it. In this talk, I will describe cosmic acceleration, current proposals to explain it, and weak gravitational lensing, an observational effect that allows us to do the necessary precision cosmology.

  9. Large-Scale Wind-Tunnel Tests and Evaluation of the Low-Speed Performance of a 35 deg Sweptback Wing Jet Transport Model Equipped with a Blowing Boundary-Layer-Control Flap and Leading-Edge Slat

    NASA Technical Reports Server (NTRS)

    Hickey, David H.; Aoyagi, Kiyoshi

    1960-01-01

    A wind-tunnel investigation was conducted to determine the effect of trailing-edge flaps with blowing-type boundary-layer control and leading-edge slats on the low-speed performance of a large-scale jet transport model with four engines and a 35 deg. sweptback wing of aspect ratio 7. Two spanwise extents and several deflections of the trailing-edge flap were tested. Results were obtained with a normal leading-edge and with full-span leading-edge slats. Three-component longitudinal force and moment data and boundary-layer-control flow requirements are presented. The test results are analyzed in terms of possible improvements in low-speed performance. The effect on performance of the source of boundary-layer-control air flow is considered in the analysis.

  10. Analysis of detection performance of multi band laser beam analyzer

    NASA Astrophysics Data System (ADS)

    Du, Baolin; Chen, Xiaomei; Hu, Leili

    2017-10-01

    Compared with microwave radar, laser radar offers high resolution, strong anti-interference ability, and good concealment, so it has become a focus of laser technology engineering applications. A large-scale laser radar cross section (LRCS) measurement system is designed and experimentally tested. First, the boundary conditions are measured and the long-range laser echo power is estimated according to the actual requirements. The estimation results show that the echo power is greater than the detector's response power. Second, a large-scale LRCS measurement system is designed according to the demonstration and estimation. The system mainly consists of laser shaping, a beam emitting device, a laser echo receiving device, and an integrated control device. Finally, using the designed lidar cross section measurement system, the scattering cross section of the target is simulated and tested. The simulation results are basically the same as the test results, which proves the correctness of the system.
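
    The echo-power estimation step described above can be sketched with a textbook monostatic link budget. The equation form used here (extended Lambertian target filling the beam) and every numeric value are assumptions for illustration, not parameters of the system in the record.

```python
import math

# Rough monostatic link budget for an extended Lambertian target that fills
# the beam: P_r = P_t * rho * A_r * T_atm^2 * eta / (pi * R^2).
def echo_power_w(p_tx_w, reflectivity, rx_aperture_m2, range_m,
                 atm_transmission=1.0, optics_efficiency=1.0):
    return (p_tx_w * reflectivity * rx_aperture_m2
            * atm_transmission ** 2 * optics_efficiency
            / (math.pi * range_m ** 2))

# Does a 10 W pulse on a 30%-reflective target at 1 km clear an assumed
# detector floor of 1 pW? (All values hypothetical.)
p_echo = echo_power_w(p_tx_w=10.0, reflectivity=0.3,
                      rx_aperture_m2=0.02, range_m=1000.0)
detectable = p_echo > 1e-12
```

    The inverse-square range dependence is the dominant term in such estimates, which is why the abstract's feasibility check hinges on comparing the predicted echo against the detector's response power.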

  11. Low-speed wind tunnel investigation of the lateral-directional characteristics of a large-scale variable wing-sweep fighter model in the high-lift configuration

    NASA Technical Reports Server (NTRS)

    Eckert, W. T.; Maki, R. L.

    1973-01-01

    The low-speed characteristics of a large-scale model of the F-14A aircraft were studied in tests conducted in the Ames Research Center 40- by 80-Foot Wind Tunnel. The primary purpose of the present tests was the determination of lateral-directional stability levels and control effectiveness of the aircraft in its high-lift configuration. Tests were conducted at wing angles of attack between minus 2 deg and 30 deg and with sideslip angles between minus 12 deg and 12 deg. Data were taken at a Reynolds number of 8.0 million based on a wing mean aerodynamic chord of 2.24 m (7.36 ft). The model configuration was changed as required to show the effects of direct lift control (spoilers) at yaw, yaw angle with speed brake deflected, and various amounts and combinations of roll control.

  12. Low Pressure Seeder Development for PIV in Large Scale Open Loop Wind Tunnels

    NASA Astrophysics Data System (ADS)

    Schmit, Ryan

    2010-11-01

    A low-pressure seeding technique was developed for Particle Image Velocimetry (PIV) in large-scale wind tunnel facilities and tested at the Subsonic Aerodynamic Research Laboratory (SARL) facility at Wright-Patterson Air Force Base. The SARL facility is an open-loop tunnel with a 7- by 10-foot octagonal test section that has 56% optical access; the Mach number varies from 0.2 to 0.5. A low-pressure seeder sprayer was designed and tested in the inlet of the wind tunnel. The seeder sprayer was designed to produce an even and uniform distribution of seed while reducing the seeder's influence on the test section. A ViCount Compact 5000 using Smoke Oil 180 was used as the seeding material. The results show that this low-pressure seeder produces streaky seeding, but excellent PIV images are obtained.

  13. System design and integration of the large-scale advanced prop-fan

    NASA Technical Reports Server (NTRS)

    Huth, B. P.

    1986-01-01

    In recent years, considerable attention has been directed toward improving aircraft fuel consumption. Studies have shown that blades with thin airfoils and aerodynamic sweep extend the inherent efficiency advantage that turboprop propulsion systems have demonstrated to the higher speeds of today's aircraft. Hamilton Standard has designed a 9-foot diameter single-rotation Prop-Fan. It will test the hardware on a static test stand, in low-speed and high-speed wind tunnels, and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction, in addition to the evaluation of aerodynamic performance and the aeroacoustic design. The coordination efforts performed to ensure smooth operation and assembly of the Prop-Fan are summarized. A summary of the loads used to size the system components, the methodology used to establish material allowables, and a review of the key analytical results are given.

  14. Infrastructure for Large-Scale Tests in Marine Autonomy

    DTIC Science & Technology

    2012-02-01


  15. Crash test and evaluation of temporary wood sign support system for large guide signs.

    DOT National Transportation Integrated Search

    2016-07-01

    The objective of this research task was to evaluate the impact performance of a temporary wood sign support : system for large guide signs. It was desired to use existing TxDOT sign hardware in the design to the extent possible. : The full-scale cras...

  16. Microprocessor Seminar, phase 2

    NASA Technical Reports Server (NTRS)

    Scott, W. R.

    1977-01-01

    Workshop sessions and papers were devoted to various aspects of microprocessor and large scale integrated circuit technology. Presentations were made on advanced LSI developments for high reliability military and NASA applications. Microprocessor testing techniques were discussed, and test data were presented. High reliability procurement specifications were also discussed.

  17. A large scale test of the gaming-enhancement hypothesis.

    PubMed

    Przybylski, Andrew K; Wang, John C

    2016-01-01

    A growing research literature suggests that regular electronic game play and game-based training programs may confer practically significant benefits to cognitive functioning. Most evidence supporting this idea, the gaming-enhancement hypothesis, has been collected in small-scale studies of university students and older adults. This research investigated the hypothesis in a general way with a large sample of 1,847 school-aged children. Our aim was to examine the relations between young people's gaming experiences and an objective test of reasoning performance. Using a Bayesian hypothesis testing approach, evidence for the gaming-enhancement and null hypotheses was compared. Results provided no substantive evidence supporting the idea that having a preference for or regularly playing commercially available games was positively associated with reasoning ability. Evidence ranged from equivocal to very strong in support of the null hypothesis. The discussion focuses on the value of Bayesian hypothesis testing for investigating electronic gaming effects, the importance of open science practices, and pre-registered designs to improve the quality of future work.
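
    The Bayesian model comparison described above can be approximated with a simple BIC-based Bayes factor. This is a common approximation, not necessarily the study's exact method, and the data below are synthetic stand-ins generated as pure noise, not the study's measurements.

```python
import numpy as np

# BIC-approximate Bayes factor comparing a null (intercept-only) model
# against a linear-effect model: BF01 ~ exp((BIC_1 - BIC_0) / 2).
# Values above 1 favour the null hypothesis.
def bic_bayes_factor_01(x, y):
    n = len(y)
    rss0 = float(np.sum((y - y.mean()) ** 2))        # intercept only
    slope, intercept = np.polyfit(x, y, 1)           # intercept + slope
    rss1 = float(np.sum((y - (slope * x + intercept)) ** 2))
    bic0 = n * np.log(rss0 / n) + 1 * np.log(n)      # BIC up to a shared constant
    bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
    return float(np.exp((bic1 - bic0) / 2.0))

# Synthetic stand-in data: two unrelated scores for n = 1847 children,
# matching the study's sample size; since the scores are independent noise,
# the null should usually be favoured (BF01 > 1).
rng = np.random.default_rng(0)
n = 1847
gaming = rng.normal(size=n)      # hypothetical gaming-exposure score
reasoning = rng.normal(size=n)   # hypothetical reasoning-test score

bf01 = bic_bayes_factor_01(gaming, reasoning)
```

    A pre-registered analysis, as the abstract advocates, would fix the model pair and priors before seeing the data rather than relying on this rough BIC shortcut.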

  18. In flight measurement of steady and unsteady blade surface pressure of a single rotation large scale advanced prop-fan installed on the PTA aircraft

    NASA Technical Reports Server (NTRS)

    Parzych, D.; Boyd, L.; Meissner, W.; Wyrostek, A.

    1991-01-01

    An experiment was performed by Hamilton Standard, Division of United Technologies Corporation, under contract to LeRC, to measure the blade surface pressure of a large-scale, 8-blade model prop-fan in flight. The test bed was the Gulfstream 2 Prop-Fan Test Assessment (PTA) aircraft. The objective of the test was to measure the steady and periodic blade surface pressure resulting from three different Prop-Fan air inflow angles at various takeoff and cruise conditions. The inflow angles were obtained by varying the nacelle tilt angles, which ranged from -3 to +2 degrees. A range of power loadings, tip speeds, and altitudes were tested at each nacelle tilt angle over the flight Mach number range of 0.30 to 0.80. Unsteady blade pressure data tabulated as Fourier coefficients for the first 35 harmonics of shaft rotational frequency and the steady (non-varying) pressure component are presented.

  19. Relative Costs of Various Types of Assessments.

    ERIC Educational Resources Information Center

    Wheeler, Patricia H.

    Issues of the relative costs of multiple choice tests and alternative types of assessment are explored. Before alternative assessments in large-scale or small-scale programs are used, attention must be given to cost considerations and the resources required to develop and implement the assessment. Major categories of cost to be considered are…

  20. Small-scale behavior in distorted turbulent boundary layers at low Reynolds number

    NASA Technical Reports Server (NTRS)

    Saddoughi, Seyed G.

    1994-01-01

    During the last three years we have conducted high- and low-Reynolds-number experiments, including hot-wire measurements of the velocity fluctuations, in the test-section-ceiling boundary layer of the 80- by 120-foot Full-Scale Aerodynamics Facility at NASA Ames Research Center, to test the local-isotropy predictions of Kolmogorov's universal equilibrium theory. This hypothesis, which states that at sufficiently high Reynolds numbers the small-scale structures of turbulent motions are independent of large-scale structures and mean deformations, has been used in theoretical studies of turbulence and computational methods such as large-eddy simulation; however, its range of validity in shear flows has been a subject of controversy. The present experiments were planned to enhance our understanding of the local-isotropy hypothesis. Our experiments were divided into two sets. First, measurements were taken at different Reynolds numbers in a plane boundary layer, which is a 'simple' shear flow. Second, experiments were designed to address this question: will our criteria for the existence of local isotropy hold for 'complex' nonequilibrium flows in which extra rates of mean strain are added to the basic mean shear?

  1. Force Characteristics in the Submerged and Planing Condition of a 1/5.78-Scale Model of a Hydro-Ski-Wheel Combination for the Grumman JRF-5 Airplane. TED No. NACA DE 357

    NASA Technical Reports Server (NTRS)

    Land, Norman S.; Pelz, Charles A.

    1952-01-01

    Force characteristics determined from tank tests of a 1/5.78 scale model of a hydro-ski-wheel combination for the Grumman JRF-5 airplane are presented. The model was tested in both the submerged and planing conditions over a range of trim, speed, and load sufficiently large to represent the most probable full-size conditions.

  2. Large-Scale Spacecraft Fire Safety Experiments in ISS Resupply Vehicles

    NASA Technical Reports Server (NTRS)

    Ruff, Gary A.; Urban, David

    2013-01-01

    Our understanding of the fire safety risk in manned spacecraft has been limited by the small scale of the testing we have been able to conduct in low-gravity. Fire growth and spread cannot be expected to scale linearly with sample size so we cannot make accurate predictions of the behavior of realistic scale fires in spacecraft based on the limited low-g testing to date. As a result, spacecraft fire safety protocols are necessarily very conservative and costly. Future crewed missions are expected to be longer in duration than previous exploration missions outside of low-earth orbit and accordingly, more complex in terms of operations, logistics, and safety. This will increase the challenge of ensuring a fire-safe environment for the crew throughout the mission. Based on our fundamental uncertainty of the behavior of fires in low-gravity, the need for realistic scale testing at reduced gravity has been demonstrated. To address this concern, a spacecraft fire safety research project is underway to reduce the uncertainty and risk in the design of spacecraft fire safety systems by testing at nearly full scale in low-gravity. This project is supported by the NASA Advanced Exploration Systems Program Office in the Human Exploration and Operations Mission Directorate. The activity of this project is supported by an international topical team of fire experts from other space agencies to maximize the utility of the data and to ensure the widest possible scrutiny of the concept. The large-scale space flight experiment will be conducted on three missions; each in an Orbital Sciences Corporation Cygnus vehicle after it has deberthed from the ISS. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew allows the fire products to be released into the cabin. The tests will be fully automated with the data downlinked at the conclusion of the test before the Cygnus vehicle reenters the atmosphere. The international topical team is collaborating with the NASA team in the definition of the experiment requirements and performing supporting analysis, experimentation and technology development.

  3. Research at NASA's NFAC wind tunnels

    NASA Technical Reports Server (NTRS)

    Edenborough, H. Kipling

    1990-01-01

    The National Full-Scale Aerodynamics Complex (NFAC) is a unique combination of wind tunnels that allow the testing of aerodynamic and dynamic models at full or large scale. It can even accommodate actual aircraft with their engines running. Maintaining full-scale Reynolds numbers and testing with surface irregularities, protuberances, and control surface gaps that either closely match the full-scale or indeed are those of the full-scale aircraft help produce test data that accurately predict what can be expected from future flight investigations. This complex has grown from the venerable 40- by 80-ft wind tunnel that has served for over 40 years helping researchers obtain data to better understand the aerodynamics of a wide range of aircraft from helicopters to the space shuttle. A recent modification to the tunnel expanded its maximum speed capabilities, added a new 80- by 120-ft test section and provided extensive acoustic treatment. The modification is certain to make the NFAC an even more useful facility for NASA's ongoing research activities. A brief background is presented on the original facility and the kind of testing that has been accomplished using it through the years. A summary of the modification project and the measured capabilities of the two test sections is followed by a review of recent testing activities and of research projected for the future.

  4. NASA advanced turboprop research and concept validation program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitlow, J.B. Jr.; Sievers, G.K.

    1988-01-01

    NASA has determined by experimental and analytical effort that use of advanced turboprop propulsion instead of the conventional turbofans in the older narrow-body airline fleet could reduce fuel consumption for this type of aircraft by up to 50 percent. In cooperation with industry, NASA has defined and implemented an Advanced Turboprop (ATP) program to develop and validate the technology required for these new high-speed, multibladed, thin, swept propeller concepts. This paper presents an overview of the analysis, model-scale test, and large-scale flight test elements of the program together with preliminary test results, as available.

  5. High Temperature Electrolysis 4 kW Experiment Design, Operation, and Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.E. O'Brien; X. Zhang; K. DeWall

    2012-09-01

    This report provides results of long-term stack testing completed in the new high-temperature steam electrolysis multi-kW test facility recently developed at INL. The report includes detailed descriptions of the piping layout, steam generation and delivery system, test fixture, heat recuperation system, hot zone, instrumentation, and operating conditions. This facility has provided a demonstration of high-temperature steam electrolysis operation at the 4 kW scale with advanced cell and stack technology. This successful large-scale demonstration of high-temperature steam electrolysis will help to advance the technology toward near-term commercialization.

  6. NASA/FAA general aviation crash dynamics program

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.; Hayduk, R. J.; Carden, H. D.

    1981-01-01

    The program involves controlled full scale crash testing, nonlinear structural analyses to predict large deflection elastoplastic response, and load attenuating concepts for use in improved seat and subfloor structure. Both analytical and experimental methods are used to develop expertise in these areas. Analyses include simplified procedures for estimating energy dissipating capabilities and comprehensive computerized procedures for predicting airframe response. These analyses are developed to provide designers with methods for predicting accelerations, loads, and displacements on collapsing structure. Tests on typical full scale aircraft and on full and subscale structural components are performed to verify the analyses and to demonstrate load attenuating concepts. A special apparatus was built to test emergency locator transmitters when attached to representative aircraft structure. The apparatus is shown to provide a good simulation of the longitudinal crash pulse observed in full scale aircraft crash tests.

  7. Ground-water flow in low permeability environments

    USGS Publications Warehouse

    Neuzil, Christopher E.

    1986-01-01

    Certain geologic media are known to have small permeability; subsurface environments composed of these media and lacking well-developed secondary permeability have groundwater flow systems with many distinctive characteristics. Moreover, groundwater flow in these environments appears to influence the evolution of certain hydrologic, geologic, and geochemical systems, may affect the accumulation of petroleum and ores, and probably has a role in the structural evolution of parts of the crust. Such environments are also important in the context of waste disposal. This review attempts to synthesize the diverse contributions of various disciplines to the problem of flow in low-permeability environments. Problems hindering analysis are enumerated together with suggested approaches to overcoming them. A common thread running through the discussion is the significance of size- and time-scale limitations of the ability to directly observe flow behavior and make measurements of parameters. These limitations have resulted in rather distinct small- and large-scale approaches to the problem. The first part of the review considers experimental investigations of low-permeability flow, including in situ testing; these are generally conducted on temporal and spatial scales which are relatively small compared with those of interest. Results from this work have provided increasingly detailed information about many aspects of the flow but leave certain questions unanswered. Recent advances in laboratory and in situ testing techniques have permitted measurements of permeability and storage properties in progressively “tighter” media and investigation of transient flow under these conditions. However, very large hydraulic gradients are still required for the tests; an observational gap exists for typical in situ gradients. The applicability of Darcy's law in this range is therefore untested, although claims of observed non-Darcian behavior appear flawed.
Two important nonhydraulic flow phenomena, osmosis and ultrafiltration, are experimentally well established in prepared clays but have been incompletely investigated, particularly in undisturbed geologic media. Small-scale experimental results form much of the basis for analyses of flow in low-permeability environments which occurs on scales of time and size too large to permit direct observation. Such large-scale flow behavior is the focus of the second part of the review. Extrapolation of small-scale experimental experience becomes an important and sometimes controversial problem in this context. In large flow systems under steady state conditions the regional permeability can sometimes be determined, but systems with transient flow are more difficult to analyze. The complexity of the problem is enhanced by the sensitivity of large-scale flow to the effects of slow geologic processes. One-dimensional studies have begun to elucidate how simple burial or exhumation can generate transient flow conditions by changing the state of stress and temperature and by burial metamorphism. Investigation of the more complex problem of the interaction of geologic processes and flow in two and three dimensions is just beginning. Because these transient flow analyses have largely been based on flow in experimental scale systems or in relatively permeable systems, deformation in response to effective stress changes is generally treated as linearly elastic; however, this treatment creates difficulties for the long periods of interest because viscoelastic deformation is probably significant. Also, large-scale flow simulations in argillaceous environments generally have neglected osmosis and ultrafiltration, in part because extrapolation of laboratory experience with coupled flow to large scales under in situ conditions is controversial. Nevertheless, the effects are potentially quite important because the coupled flow might cause ultra-long-lived transient conditions.
The difficulties associated with analysis are matched by those of characterizing hydrologic conditions in tight environments; measurements of hydraulic head and sampling of pore fluids have been done only rarely because of the practical difficulties involved. These problems are also discussed in the second part of this paper.

  8. Scaling and self-organized criticality in proteins I

    PubMed Central

    Phillips, J. C.

    2009-01-01

    The complexity of proteins is substantially simplified by regarding them as archetypical examples of self-organized criticality (SOC). To test this idea and elaborate on it, this article applies the Moret–Zebende SOC hydrophobicity scale to the large-scale scaffold repeat protein of the HEAT superfamily, PR65/A. Hydrophobic plasticity is defined and used to identify docking platforms and hinges from repeat sequences alone. The difference between the MZ scale and conventional hydrophobicity scales reflects long-range conformational forces that are central to protein functionality. PMID:19218446

  9. Low Cost Manufacturing of Composite Cryotanks

    NASA Technical Reports Server (NTRS)

    Meredith, Brent; Palm, Tod; Deo, Ravi; Munafo, Paul M. (Technical Monitor)

    2002-01-01

    This viewgraph presentation reviews research and development of cryotank manufacturing conducted by Northrop Grumman. The objectives of the research and development included the development and validation of manufacturing processes and technology for fabrication of large-scale cryogenic tanks, the establishment of a scale-up and facilitization plan for full-scale cryotanks, the development of non-autoclave composite manufacturing processes, the fabrication of subscale tank joints for element tests, the performance of manufacturing risk reduction trials for the subscale tank, and the development of full-scale tank manufacturing concepts.

  10. Assignment of boundary conditions in embedded ground water flow models

    USGS Publications Warehouse

    Leake, S.A.

    1998-01-01

    Many small-scale ground water models are too small to incorporate distant aquifer boundaries. If a larger-scale model exists for the area of interest, flow and head values can be specified for boundaries in the smaller-scale model using values from the larger-scale model. Flow components along rows and columns of a large-scale block-centered finite-difference model can be interpolated to compute horizontal flow across any segment of a perimeter of a small-scale model. Head at cell centers of the larger-scale model can be interpolated to compute head at points on a model perimeter. Simple linear interpolation is proposed for horizontal interpolation of horizontal-flow components. Bilinear interpolation is proposed for horizontal interpolation of head values. The methods of interpolation provided satisfactory boundary conditions in tests using models of hypothetical aquifers.
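    The bilinear interpolation of head values described in this record can be sketched as follows. This is a minimal illustration with hypothetical grid values, not the USGS implementation; the cell geometry and head numbers are assumptions.

    ```python
    # Sketch of bilinear interpolation of head values from four cell centers
    # of a coarse (large-scale) model to a point on the embedded (small-scale)
    # model's perimeter. Flow components would use simple linear interpolation.

    def bilinear_head(h00, h10, h01, h11, fx, fy):
        """Bilinearly interpolate between four cell-center heads.
        fx, fy are the point's fractional offsets (0..1) within the coarse cell."""
        bottom = h00 * (1 - fx) + h10 * fx   # interpolate along the lower row
        top    = h01 * (1 - fx) + h11 * fx   # interpolate along the upper row
        return bottom * (1 - fy) + top * fy  # then interpolate between rows

    # Example: hypothetical heads (m) at the four surrounding cell centers
    head = bilinear_head(100.0, 102.0, 101.0, 103.0, fx=0.25, fy=0.5)
    print(head)  # 101.0
    ```

    At the cell corners (fx, fy equal to 0 or 1) the formula reduces to the corresponding cell-center head, which is the property that makes the interpolated boundary consistent with the coarse model.
    
    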

  11. A numerical study of the string function using a primitive equation ocean model

    NASA Astrophysics Data System (ADS)

    Tyler, R. H.; Käse, R.

    We use results from a primitive-equation ocean numerical model (SCRUM) to test a theoretical 'string function' formulation put forward by Tyler and Käse in another article in this issue. The string function acts as a stream function for the large-scale potential energy flow under the combined beta and topographic effects. The model results verify that large-scale anomalies propagate along the string function contours with a speed correctly given by the cross-string gradient. For anomalies having a scale similar to the Rossby radius, material rates of change in the layer mass following the string velocity are balanced by material rates of change in relative vorticity following the flow velocity. It is shown that large-amplitude anomalies can be generated when wind stress is resonant with the string function configuration.

  12. A Coherent vorticity preserving eddy-viscosity correction for Large-Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Chapelier, J.-B.; Wasistho, B.; Scalo, C.

    2018-04-01

    This paper introduces a new approach to Large-Eddy Simulation (LES) where subgrid-scale (SGS) dissipation is applied proportionally to the degree of local spectral broadening, hence mitigated or deactivated in regions dominated by large-scale and/or laminar vortical motion. The proposed coherent-vorticity preserving (CvP) LES methodology is based on the evaluation of the ratio of the test-filtered to resolved (or grid-filtered) enstrophy, σ. Values of σ close to 1 indicate low sub-test-filter turbulent activity, justifying local deactivation of the SGS dissipation. The intensity of the SGS dissipation is progressively increased for σ < 1, which corresponds to a small-scale spectral broadening. The SGS dissipation is then fully activated in developed turbulence characterized by σ ≤ σeq, where the value σeq is derived assuming a Kolmogorov spectrum. The proposed approach can be applied to any eddy-viscosity model, is algorithmically simple and computationally inexpensive. LES of Taylor-Green vortex breakdown demonstrates that the CvP methodology improves the performance of traditional, non-dynamic dissipative SGS models, capturing the peak of total turbulent kinetic energy dissipation during transition. Similar accuracy is obtained by adopting Germano's dynamic procedure, albeit at more than twice the computational overhead. A CvP-LES of a pair of unstable periodic helical vortices is shown to accurately predict the experimentally observed growth rate using coarse resolutions. The ability of the CvP methodology to dynamically sort the coherent, large-scale motion from the smaller, broadband scales during transition is demonstrated via flow visualizations. LES of a compressible channel flow is carried out and shows a good match with a reference DNS.
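    The σ sensor described in this abstract can be illustrated with a toy one-dimensional sketch. The top-hat test filter, the linear ramp between σ = 1 and σeq, and the value σeq = 0.5 are all assumptions for illustration; they are not the paper's exact filter or blending function.

    ```python
    import numpy as np

    # Toy sketch of the CvP sensor: sigma is the ratio of test-filtered to
    # resolved enstrophy. Near 1 (smooth, large-scale motion) the eddy
    # viscosity of a base SGS model is switched off; below an assumed
    # sigma_eq it is fully active. Filter and ramp are illustrative choices.

    def test_filter(w):
        """Simple 3-point top-hat test filter with periodic wrap-around."""
        return (np.roll(w, 1) + w + np.roll(w, -1)) / 3.0

    def cvp_factor(omega, sigma_eq=0.5):
        """Multiplicative factor (0..1) applied to the base eddy viscosity."""
        sigma = np.mean(test_filter(omega) ** 2) / np.mean(omega ** 2)
        if sigma >= 1.0:
            return 0.0                               # no sub-test-filter activity
        # ramp dissipation up linearly, fully active once sigma <= sigma_eq
        return min(1.0, (1.0 - sigma) / (1.0 - sigma_eq))

    # Smooth large-scale field: sigma is near 1, model largely deactivated
    x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    print(cvp_factor(np.sin(x)))                     # near 0

    # Broadband (noisy) field: sigma drops well below sigma_eq, fully active
    rng = np.random.default_rng(0)
    print(cvp_factor(rng.standard_normal(64)))
    ```

    The design point is that the sensor needs only one extra filtering operation per field, which is why the abstract can claim a much lower overhead than Germano's dynamic procedure.
    
    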

  13. The Relationship between English Language Learners' Language Proficiency and Standardized Test Scores

    ERIC Educational Resources Information Center

    Thakkar, Darshan

    2013-01-01

    It is generally theorized that English Language Learner (ELL) students do not succeed on state standardized tests because ELL students lack the cognitive academic language skills necessary to function on the large scale content assessments. The purpose of this dissertation was to test that theory. Through the use of quantitative methodology, ELL…

  14. The Fallibility of High Stakes "11-Plus" Testing in Northern Ireland

    ERIC Educational Resources Information Center

    Gardner, John; Cowan, Pamela

    2005-01-01

    This paper sets out the findings from a large-scale analysis of the Northern Ireland Transfer Procedure Tests, used to select pupils for grammar schools. As it was not possible to get completed test scripts from government agencies, over 3000 practice scripts were completed in simulated conditions and were analysed to establish whether the tests…

  15. Classroom Activity Connections: Demonstrating Various Flame Tests Using Common Household Materials

    ERIC Educational Resources Information Center

    Baldwin, Bruce W.; Hasbrouck, Scott; Smith, Jordan; Kuntzleman, Thomas S.

    2010-01-01

    In "JCE" Activity #67, "Flame Tests: Which Ion Causes the Color?", Michael Sanger describes how to conduct flame tests with household items. We have used this activity in outreach settings, and have extended it in a variety of ways. For example, we have demonstrated large-scale strontium (red), copper (green), and carbon (blue) flames using only…

  16. Adapting Educational Measurement to the Demands of Test-Based Accountability

    ERIC Educational Resources Information Center

    Koretz, Daniel

    2015-01-01

    Accountability has become a primary function of large-scale testing in the United States. The pressure on educators to raise scores is vastly greater than it was several decades ago. Research has shown that high-stakes testing can generate behavioral responses that inflate scores, often severely. I argue that because of these responses, using…

  17. The Effect of Using Item Parameters Calibrated from Paper Administrations in Computer Adaptive Test Administrations

    ERIC Educational Resources Information Center

    Pommerich, Mary

    2007-01-01

    Computer administered tests are becoming increasingly prevalent as computer technology becomes more readily available on a large scale. For testing programs that utilize both computer and paper administrations, mode effects are problematic in that they can result in examinee scores that are artificially inflated or deflated. As such, researchers…

  18. Differential Item Functioning Analysis for Accommodated versus Nonaccommodated Students

    ERIC Educational Resources Information Center

    Finch, Holmes; Barton, Karen; Meyer, Patrick

    2009-01-01

    The No Child Left Behind act resulted in an increased reliance on large-scale standardized tests to assess the progress of individual students as well as schools. In addition, emphasis was placed on including all students in the testing programs as well as those with disabilities. As a result, the role of testing accommodations has become more…

  19. Incentives and Test-Based Accountability in Education

    ERIC Educational Resources Information Center

    Hout, Michael, Ed.; Elliott, Stuart W., Ed.

    2011-01-01

    In recent years there have been increasing efforts to use accountability systems based on large-scale tests of students as a mechanism for improving student achievement. The federal No Child Left Behind Act (NCLB) is a prominent example of such an effort, but it is only the continuation of a steady trend toward greater test-based accountability in…

  20. Framing Appropriate Accommodations in Terms of Individual Need: Examining the Fit of Four Approaches to Selecting Test Accommodations of English Language Learners

    ERIC Educational Resources Information Center

    Koran, Jennifer; Kopriva, Rebecca J.

    2017-01-01

    Providing appropriate test accommodations to most English language learners (ELLs) is important to facilitate meaningful inferences about learning. This study compared teacher large-scale test accommodation recommendations to those from a literature- and practitioner-grounded accommodation selection taxonomy. The taxonomy links student-specific…

  1. Functional Connectivity in Multiple Cortical Networks Is Associated with Performance Across Cognitive Domains in Older Adults.

    PubMed

    Shaw, Emily E; Schultz, Aaron P; Sperling, Reisa A; Hedden, Trey

    2015-10-01

    Intrinsic functional connectivity MRI has become a widely used tool for measuring integrity in large-scale cortical networks. This study examined multiple cortical networks using Template-Based Rotation (TBR), a method that applies a priori network and nuisance component templates defined from an independent dataset to test datasets of interest. A priori templates were applied to a test dataset of 276 older adults (ages 65-90) from the Harvard Aging Brain Study to examine the relationship between multiple large-scale cortical networks and cognition. Factor scores derived from neuropsychological tests represented processing speed, executive function, and episodic memory. Resting-state BOLD data were acquired in two 6-min acquisitions on a 3-Tesla scanner and processed with TBR to extract individual-level metrics of network connectivity in multiple cortical networks. All results controlled for data quality metrics, including motion. Connectivity in multiple large-scale cortical networks was positively related to all cognitive domains, with a composite measure of general connectivity positively associated with general cognitive performance. Controlling for the correlations between networks, the frontoparietal control network (FPCN) and executive function demonstrated the only significant association, suggesting specificity in this relationship. Further analyses found that the FPCN mediated the relationships of the other networks with cognition, suggesting that this network may play a central role in understanding individual variation in cognition during aging.

  2. Test of Gravity on Large Scales with Weak Gravitational Lensing and Clustering Measurements of SDSS Luminous Red Galaxies

    NASA Astrophysics Data System (ADS)

    Reyes, Reinabelle; Mandelbaum, R.; Seljak, U.; Gunn, J.; Lombriser, L.

    2009-01-01

    We perform a test of gravity on large scales (5-50 Mpc/h) using 70,000 luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS) DR7 with redshifts 0.16

  3. A minimum distance estimation approach to the two-sample location-scale problem.

    PubMed

    Zhang, Zhiyi; Yu, Qiqing

    2002-09-01

    As reported by Kalbfleisch and Prentice (1980), the generalized Wilcoxon test fails to detect a difference between the lifetime distributions of the male and female mice that died from thymic leukemia. This failure is a result of the test's inability to detect a distributional difference when a location shift and a scale change exist simultaneously. In this article, we propose an estimator based on the minimization of an average distance between two independent quantile processes under a location-scale model. Large sample inference on the proposed estimator, with possible right-censorship, is discussed. The mouse leukemia data are used as an example for illustration purposes.
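    The minimum-distance idea in this abstract can be sketched in an uncensored, L2 form: under a location-scale model the quantile functions satisfy Q_Y(p) = a + b·Q_X(p), so (a, b) can be estimated by minimizing the average squared distance between the two empirical quantile functions over a probability grid. This is an illustrative simplification, not the authors' exact estimator, which also handles right-censoring.

    ```python
    import numpy as np

    # Illustrative L2 minimum-distance fit of a location-scale model
    # Q_Y(p) = a + b * Q_X(p) between two empirical quantile functions.

    def location_scale_fit(x, y, n_grid=99):
        """Return (a, b) minimizing mean((Q_Y(p) - a - b*Q_X(p))**2) over a grid."""
        p = (np.arange(n_grid) + 0.5) / n_grid   # interior probability grid
        qx = np.quantile(x, p)
        qy = np.quantile(y, p)
        # closed-form least-squares minimizer over the quantile grid
        b = np.cov(qx, qy, bias=True)[0, 1] / np.var(qx)
        a = qy.mean() - b * qx.mean()
        return a, b

    # Synthetic check: Y has location shift 3 and scale change 2 relative to X
    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 1.0, 2000)
    y = 3.0 + 2.0 * rng.normal(0.0, 1.0, 2000)
    a, b = location_scale_fit(x, y)
    print(a, b)   # roughly 3 and 2
    ```

    Because the fit compares entire quantile processes rather than a single summary statistic, it detects the simultaneous location-plus-scale difference that a rank test like the generalized Wilcoxon can miss.
    
    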

  4. ATLAS Large Scale Thin Gap Chambers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soha, Aria

    This is a technical scope of work (TSW) between the Fermi National Accelerator Laboratory (Fermilab) and the experimenters of the ATLAS sTGC New Small Wheel collaboration who have committed to participate in beam tests to be carried out during the FY2014 Fermilab Test Beam Facility program.

  5. Statewide Physical Fitness Testing: A BIG Waist or a BIG Waste?

    ERIC Educational Resources Information Center

    Morrow, James R., Jr.; Ede, Alison

    2009-01-01

    Statewide physical fitness testing is gaining popularity in the United States because of increased childhood obesity levels, the relations between physical fitness and academic performance, and the hypothesized relations between adult characteristics and childhood physical activity, physical fitness, and health behaviors. Large-scale physical…

  6. COMPARISON OF THE SINK CHARACTERISTICS OF THREE FULL-SCALE ENVIRONMENTAL CHAMBERS

    EPA Science Inventory

    The paper gives results of an investigation of the interaction of vapor-phase organic compounds with the interior surfaces of three large dynamic test chambers. A pattern of adsorption and reemission of the test compounds was observed in all three chambers. Quantitative compari...

  7. COPS: Large-scale nonlinearly constrained optimization problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bondarenko, A.S.; Bortz, D.M.; More, J.J.

    2000-02-10

    The authors have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization Problems. The primary purpose of this collection is to provide difficult test cases for optimization software. Problems in the current version of the collection come from fluid dynamics, population dynamics, optimal design, and optimal control. For each problem they provide a short description of the problem, notes on the formulation of the problem, and results of computational experiments with general optimization solvers. They currently have results for DONLP2, LANCELOT, MINOS, SNOPT, and LOQO.

  8. Analysis of Discrete-Source Damage Progression in a Tensile Stiffened Composite Panel

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Lotts, Christine G.; Sleight, David W.

    1999-01-01

    This paper demonstrates the progressive failure analysis capability in NASA Langley's COMET-AR finite element analysis code on a large-scale built-up composite structure. A large-scale five-stringer composite panel with a 7-in.-long discrete-source damage site was analyzed from initial loading to final failure, including geometric and material nonlinearities. Predictions using different mesh sizes, different saw-cut modeling approaches, and different failure criteria were performed and assessed. All failure predictions correlated reasonably well with the test result.

  9. Application-level regression testing framework using Jenkins

    DOE PAGES

    Budiardja, Reuben; Bouvet, Timothy; Arnold, Galen

    2017-09-26

    Monitoring and testing for regression of large-scale systems such as NCSA's Blue Waters supercomputer are challenging tasks. In this paper, we describe the solution we developed to perform those tasks. The goal was to find an automated solution for running user-level regression tests to evaluate system usability and performance. Jenkins, an automation server, was chosen for its versatility, large user base, and multitude of plugins, including plugins for collecting data and plotting test results over time. We also describe our Jenkins deployment, which launches and monitors jobs on a remote HPC system, performs authentication with one-time passwords, and integrates with our LDAP server for authorization. We show some use cases and describe our best practices for successfully using Jenkins as a user-level, system-wide regression testing and monitoring framework for large supercomputer systems.

  10. Application-level regression testing framework using Jenkins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budiardja, Reuben; Bouvet, Timothy; Arnold, Galen

    Monitoring and testing for regression of large-scale systems such as NCSA's Blue Waters supercomputer are challenging tasks. In this paper, we describe the solution we developed to perform those tasks. The goal was to find an automated solution for running user-level regression tests to evaluate system usability and performance. Jenkins, an automation server, was chosen for its versatility, large user base, and multitude of plugins, including plugins for collecting data and plotting test results over time. We also describe our Jenkins deployment, which launches and monitors jobs on a remote HPC system, performs authentication with one-time passwords, and integrates with our LDAP server for authorization. We show some use cases and describe our best practices for successfully using Jenkins as a user-level, system-wide regression testing and monitoring framework for large supercomputer systems.

  11. Ground-Handling Forces on a 1/40-scale Model of the U. S. Airship "Akron."

    NASA Technical Reports Server (NTRS)

    Silverstein, Abe; Gulick, B G

    1937-01-01

    This report presents the results of full-scale wind tunnel tests conducted to determine the ground-handling forces on a 1/40-scale model of the U. S. Airship "Akron." Ground-handling conditions were simulated by establishing a velocity gradient above a special ground board in the tunnel comparable with that encountered over a landing field. The tests were conducted at Reynolds numbers ranging from 5,000,000 to 19,000,000 at each of six angles of yaw between 0 degree and 180 degrees and at four heights of the model above the ground board. The ground-handling forces vary greatly with the angle of yaw and reach large values at appreciable angles of yaw. Small changes in height, pitch, or roll did not critically affect the forces on the model. In the range of Reynolds numbers tested, no significant variation of the forces with the scale was disclosed.

  12. Vertical Descent and Landing Tests of a 0.13-Scale Model of the Convair XFY-1 Vertically Rising Airplane in Still Air, TED No. NACA DE 368

    NASA Technical Reports Server (NTRS)

    Smith, Charlee C., Jr.; Lovell, Powell M., Jr.

    1954-01-01

    An investigation is being conducted to determine the dynamic stability and control characteristics of a 0.13-scale flying model of the Convair XFY-1 vertically rising airplane. This paper presents the results of flight and force tests to determine the stability and control characteristics of the model in vertical descent and landings in still air. The tests indicated that landings, including vertical descent from altitudes representing up to 400 feet for the full-scale airplane and at rates of descent up to 15 or 20 feet per second (full scale), can be performed satisfactorily. Sustained vertical descent in still air will probably be more difficult to perform because of large random trim changes that become greater as the descent velocity is increased. A slight steady head wind or cross wind might be sufficient to eliminate the random trim changes.

  13. Dark matter, long-range forces, and large-scale structure

    NASA Technical Reports Server (NTRS)

    Gradwohl, Ben-Ami; Frieman, Joshua A.

    1992-01-01

    If the dark matter in galaxies and clusters is nonbaryonic, it can interact with additional long-range fields that are invisible to experimental tests of the equivalence principle. We discuss the astrophysical and cosmological implications of a long-range force coupled only to the dark matter and find rather tight constraints on its strength. If the force is repulsive (attractive), the masses of galaxy groups and clusters (and the mean density of the universe inferred from them) have been systematically underestimated (overestimated). We explore the consequent effects on the two-point correlation function, large-scale velocity flows, and microwave background anisotropies, for models with initial scale-invariant adiabatic perturbations and cold dark matter.

  14. Does the Position of Response Options in Multiple-Choice Tests Matter?

    ERIC Educational Resources Information Center

    Hohensinn, Christine; Baghaei, Purya

    2017-01-01

    In large scale multiple-choice (MC) tests alternate forms of a test may be developed to prevent cheating by changing the order of items or by changing the position of the response options. The assumption is that since the content of the test forms are the same the order of items or the positions of the response options do not have any effect on…

  15. Wafer level reliability testing: An idea whose time has come

    NASA Technical Reports Server (NTRS)

    Trapp, O. D.

    1987-01-01

    Wafer-level reliability testing has been nurtured in the DARPA-supported workshops held each autumn since 1982. The seeds planted in 1982 have produced an active crop of very-large-scale integration manufacturers applying wafer-level reliability test methods. Computer-Aided Reliability (CAR) is a new seed being nurtured. Users are now awakening to the huge economic value of wafer-level reliability testing technology.

  16. Generalized Master Equation with Non-Markovian Multichromophoric Förster Resonance Energy Transfer for Modular Exciton Densities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Seogjoo; Hoyer, Stephan; Fleming, Graham

    2014-10-31

    A generalized master equation (GME) governing quantum evolution of modular exciton density (MED) is derived for large scale light harvesting systems composed of weakly interacting modules of multiple chromophores. The GME-MED offers a practical framework to incorporate real time coherent quantum dynamics calculations of small length scales into dynamics over large length scales, and also provides a non-Markovian generalization and rigorous derivation of the Pauli master equation employing multichromophoric Förster resonance energy transfer rates. A test of the GME-MED for four sites of the Fenna-Matthews-Olson complex demonstrates how coherent dynamics of excitonic populations over coupled chromophores can be accurately described by transitions between subgroups (modules) of delocalized excitons. Application of the GME-MED to the exciton dynamics between a pair of light harvesting complexes in purple bacteria demonstrates its promise as a computationally efficient tool to investigate large scale exciton dynamics in complex environments.

  17. War, space, and the evolution of Old World complex societies.

    PubMed

    Turchin, Peter; Currie, Thomas E; Turner, Edward A L; Gavrilets, Sergey

    2013-10-08

    How did human societies evolve from small groups, integrated by face-to-face cooperation, to huge anonymous societies of today, typically organized as states? Why is there so much variation in the ability of different human populations to construct viable states? Existing theories are usually formulated as verbal models and, as a result, do not yield sharply defined, quantitative predictions that could be unambiguously tested with data. Here we develop a cultural evolutionary model that predicts where and when the largest-scale complex societies arose in human history. The central premise of the model, which we test, is that costly institutions that enabled large human groups to function without splitting up evolved as a result of intense competition between societies-primarily warfare. Warfare intensity, in turn, depended on the spread of historically attested military technologies (e.g., chariots and cavalry) and on geographic factors (e.g., rugged landscape). The model was simulated within a realistic landscape of the Afroeurasian landmass and its predictions were tested against a large dataset documenting the spatiotemporal distribution of historical large-scale societies in Afroeurasia between 1,500 BCE and 1,500 CE. The model-predicted pattern of spread of large-scale societies was very similar to the observed one. Overall, the model explained 65% of variance in the data. An alternative model, omitting the effect of diffusing military technologies, explained only 16% of variance. Our results support theories that emphasize the role of institutions in state-building and suggest a possible explanation why a long history of statehood is positively correlated with political stability, institutional quality, and income per capita.
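The "65% vs. 16% of variance" comparison above is the usual coefficient of determination, R² = 1 − SS_res/SS_tot. A minimal sketch of the computation, with invented data rather than the paper's spatiotemporal dataset:

```python
# Fraction of variance in the observations explained by model predictions.
# (Illustrative helper; the arrays used in the test are made up.)

def r_squared(observed, predicted):
    mean = sum(observed) / len(observed)
    ss_tot = sum((y - mean) ** 2 for y in observed)          # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))  # residual
    return 1.0 - ss_res / ss_tot
```

A model that reproduces the data exactly scores 1.0; one that only predicts the mean scores 0.0, which is the scale on which the two competing models above are compared.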

  19. Monitoring scale scores over time via quality control charts, model-based approaches, and time series techniques.

    PubMed

    Lee, Yi-Hsuan; von Davier, Alina A

    2013-07-01

    Maintaining a stable score scale over time is critical for all standardized educational assessments. Traditional quality control tools and approaches for assessing scale drift either require special equating designs, or may be too time-consuming to be considered on a regular basis with an operational test that has a short time window between an administration and its score reporting. Thus, the traditional methods are not sufficient to catch unusual testing outcomes in a timely manner. This paper presents a new approach for score monitoring and assessment of scale drift. It involves quality control charts, model-based approaches, and time series techniques to accommodate the following needs of monitoring scale scores: continuous monitoring, adjustment of customary variations, identification of abrupt shifts, and assessment of autocorrelation. Performance of the methodologies is evaluated using manipulated data based on real responses from 71 administrations of a large-scale high-stakes language assessment.
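One of the monitoring tools named above, a control chart, can be sketched as an EWMA chart on mean scale scores across administrations. The target, standard deviation, smoothing constant, and control limit below are illustrative choices, not values from the language assessment discussed.

```python
# EWMA control chart for score-scale monitoring (minimal sketch).

def ewma(scores, target, lam=0.2):
    """Exponentially weighted moving average, started at the target."""
    z, out = target, []
    for x in scores:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

def out_of_control(scores, target, sigma, lam=0.2, L=3.0):
    """Flag administrations whose EWMA drifts beyond +/- L * sigma_z."""
    sigma_z = sigma * (lam / (2.0 - lam)) ** 0.5   # steady-state EWMA sd
    return [abs(z - target) > L * sigma_z for z in ewma(scores, target, lam)]

flags = out_of_control([500.0] * 10 + [510.0] * 10, target=500.0, sigma=3.0)
```

Because the EWMA accumulates evidence across administrations, a sustained shift of the scale mean trips the limit within a few administrations even when each individual score would look unremarkable, matching the "identification of abrupt shifts" goal above.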

  20. A Follow-Up Web-Based Survey: Test and Measurement Expert Opinions on the Psychometric Properties of Out-of-Level Tests. Out-of-Level Testing Report.

    ERIC Educational Resources Information Center

    Bielinski, John; Minnema, Jane; Thurlow, Martha

    A Web-based survey of 25 experts in testing theory and large-scale assessment examined the utility of out-of-level testing for making decisions about students and schools. Survey respondents were given a series of scenarios and asked to judge the degree to which out-of-level testing would affect the reliability and validity of test scores within…

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai-Yuan; Zavala, Victor M.

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
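A hypothetical sketch of the curvature-test idea described above: instead of computing the inertia of the KKT matrix, test the directional curvature dᵀHd along the computed step d, and add a multiple of the identity when negative curvature is detected. The threshold and the delta-doubling rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def curvature_test(H, d, kappa=1e-8):
    """True if the step d sees sufficient positive curvature in H."""
    return d @ H @ d >= kappa * (d @ d)

def convexify(H, d, delta0=1e-4, kappa=1e-8):
    """Add delta * I (doubling delta) until the curvature test passes."""
    delta, Hc = 0.0, H.copy()
    while not curvature_test(Hc, d, kappa):
        delta = delta0 if delta == 0.0 else 2.0 * delta
        Hc = H + delta * np.eye(H.shape[0])
    return Hc, delta

H = np.array([[1.0, 0.0],
              [0.0, -2.0]])     # indefinite Hessian: negative curvature along e2
d = np.array([0.0, 1.0])
Hc, delta = convexify(H, d)
```

The appeal of this test is exactly what the abstract emphasizes: it only needs matrix-vector products along the step, so any linear solver can be used, whereas inertia detection ties the method to symmetric indefinite factorizations.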

  2. Large increase in fracture resistance of stishovite with crack extension less than one micrometer

    PubMed Central

    Yoshida, Kimiko; Wakai, Fumihiro; Nishiyama, Norimasa; Sekine, Risako; Shinoda, Yutaka; Akatsu, Takashi; Nagoshi, Takashi; Sone, Masato

    2015-01-01

    The development of strong, tough, and damage-tolerant ceramics requires nano/microstructure design to utilize toughening mechanisms operating at different length scales. The toughening mechanisms known so far are effective at the micro-scale; hence, they require crack extension of more than a few micrometers to increase the fracture resistance. Here, we developed a micro-mechanical test method using micro-cantilever beam specimens to determine the very early part of the resistance curve of nanocrystalline SiO2 stishovite, which exhibited fracture-induced amorphization. We revealed that this novel toughening mechanism was effective even at the nanometer length scale, owing to a narrow transformation-zone width of a few tens of nanometers and the large dilatational strain (from 60 to 95%) associated with the transition from the crystalline to the amorphous state. This testing method will be a powerful tool in the search for toughening mechanisms that may operate at the nanoscale for attaining both reliability and strength in structural materials. PMID:26051871

  3. Development of Dynamic Flow Field Pressure Probes Suitable for Use in Large Scale Supersonic Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Porro, A. Robert

    2000-01-01

    A series of dynamic flow field pressure probes were developed for use in large-scale supersonic wind tunnels at NASA Glenn Research Center. These flow field probes include pitot, static, and five-hole conical pressure probes that are capable of capturing fast acting flow field pressure transients that occur on a millisecond time scale. The pitot and static probes can be used to determine local Mach number time histories during a transient event. The five-hole conical pressure probes are used primarily to determine local flow angularity, but can also determine local Mach number. These probes were designed, developed, and tested at the NASA Glenn Research Center. They were also used in a NASA Glenn 10- by 10-Foot Supersonic Wind Tunnel (SWT) test program where they successfully acquired flow field pressure data in the vicinity of a propulsion system during an engine compressor stall and inlet unstart transient event. Details of the design, development, and subsequent use of these probes are discussed in this report.

  4. Tests on Models of Three British Airplanes in the Variable Density Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Higgins, George J; Defoe, George L; Diehl, W S

    1928-01-01

    This report contains the results of tests made in the National Advisory Committee for Aeronautics variable density wind tunnel on three airplane models supplied by the British Aeronautical Research Committee. These models, the BE-2E with R.A.F. 19 wings, the British Fighter with R.A.F. 15 wings, and the Bristol Fighter with R.A.F. 30 wings, were tested over a wide range in Reynolds numbers in order to supply data desired by the Aeronautical Research Committee for scale effect studies. The maximum lifts obtained in these tests are in excellent agreement with the published results of British tests, both model and full scale. No attempt is made to compare drag data, owing to the omission of tail surfaces, radiator, etc., from the model, but it is shown that the scale effect observed on the drag coefficients in these tests is due to a large extent to the parts of the models other than the wings. (author)

  5. Applicability of ambient toxicity testing to national or regional water-quality assessment

    USGS Publications Warehouse

    Elder, John F.

    1990-01-01

    Comprehensive assessment of the quality of natural waters requires a multifaceted approach. Descriptions of existing conditions may be achieved by various kinds of chemical and hydrologic analyses, whereas information about the effects of such conditions on living organisms depends on biological monitoring. Toxicity testing is one type of biological monitoring that can be used to identify possible effects of toxic contaminants. Based on experimentation designed to monitor responses of organisms to environmental stresses, toxicity testing may have diverse purposes in water-quality assessments. These purposes may include identification of areas that warrant further study because of poor water quality or unusual ecological features, verification of other types of monitoring, or assessment of contaminant effects on aquatic communities. Toxicity-test results are most effective when used as a complement to chemical analyses, hydrologic measurements, and other biological monitoring. However, all toxicity-testing procedures have certain limitations that must be considered in developing the methodology and applications of toxicity testing in any large-scale water-quality-assessment program. A wide variety of toxicity-test methods have been developed to fulfill the needs of diverse applications. The methods differ primarily in the selections made relative to four characteristics: (1) test species, (2) endpoint (acute or chronic), (3) test-enclosure type, and (4) test substance (toxicant) that functions as the environmental stress. Toxicity-test approaches vary in their capacity to meet the needs of large-scale assessments of existing water quality. Ambient testing, whereby the test organism is exposed to naturally occurring substances that contain toxicant mixtures in an organic or inorganic matrix, is more likely to meet these needs than are procedures that call for exposure of the test organisms to known concentrations of a single toxicant. 
However, meaningful interpretation of ambient test results depends on the existence of accompanying chemical analysis of the ambient media. The ambient test substance may be water or sediments. Sediment tests have had limited application, but they are useful because most toxicants tend to accumulate in sediments and many test species either inhabit the sediments or are in frequent contact with them. Biochemical testing methods, which have been developing rapidly in recent years, are likely to be among the most useful procedures for large-scale water-quality assessments. They are relatively rapid and simple and, more importantly, they focus on biochemical changes that are the initial responses of virtually all organisms to environmental stimuli. Most species are sensitive to relatively few toxicants, and their sensitivities vary as conditions change. Therefore, each test method has particular uses and limitations, and no single test has universal applicability. One of the most informative approaches to toxicity testing is to combine biochemical tests with other test methods in a 'battery of tests' that is diversified enough to characterize different types of toxicants and different trophic levels. However, such an approach can be costly, and if not carefully designed, it may not yield enough additional information to warrant the additional cost. The application of toxicity tests to large-scale water-quality assessments is hampered by a number of difficulties. Toxicity tests often are not sensitive enough to enable detection of most contaminant problems in the natural environment. Furthermore, because sensitivities among different species and test conditions can be highly variable, conclusions about the toxicant problems of an ecosystem are strongly dependent on the test procedure used.
In addition, the experimental systems used in toxicity tests cannot replicate the complexity or variability of natural conditions, and positive test results cannot identify the source or nature of

  6. Study of stress-strain and volume change behavior of emplaced municipal solid waste using large-scale triaxial testing.

    PubMed

    Ramaiah, B J; Ramana, G V

    2017-05-01

    The article presents the stress-strain and volume change behavior, shear strength and stiffness parameters of landfilled municipal solid waste (MSW) collected from two dump sites located in Delhi, India. Over 30 drained triaxial compression (TXC) tests were conducted on reconstituted large-scale specimens of 150 mm diameter to study the influence of fiber content, age, density and confining pressure on the shear strength of MSW. In addition, a few TXC tests were also conducted on 70 mm diameter specimens to examine the effect of specimen size on the mobilized shear strength. It is observed that fibrous materials such as textiles and plastics, and their percentage by weight, have a significant effect on the stress-strain-volume change behavior, shear strength and stiffness of solid waste. The stress-strain-volume change behavior of MSW at Delhi is qualitatively in agreement with the behavior reported for MSW from different countries. Results of large-scale direct shear tests conducted on MSW of identical composition to that used for the TXC tests revealed cross-anisotropic behavior, as reported by previous researchers. Effective shear strength parameters of the solid waste evaluated in this study are best characterized by ϕ' = 39° and c' = 0 kPa for the limiting strain-based failure criterion of K0 = 0.3 + 5% axial strain, and are in the range of the data reported for MSW from different countries. Data presented in this article are useful for the stress-deformation and stability analysis of the dump sites during their operation as well as for closure plans.
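The reported parameters plug directly into the Mohr-Coulomb failure criterion, τ = c' + σ'ₙ tan ϕ'. A minimal sketch using the values quoted above (ϕ' = 39°, c' = 0 kPa); the 100 kPa normal stress in the usage line is an arbitrary example, not a value from the article:

```python
import math

# Drained shear strength from Mohr-Coulomb parameters reported above.
def shear_strength(sigma_n_kpa, phi_deg=39.0, c_kpa=0.0):
    """tau = c' + sigma_n' * tan(phi')  (Mohr-Coulomb criterion)."""
    return c_kpa + sigma_n_kpa * math.tan(math.radians(phi_deg))

tau = shear_strength(100.0)   # ~81 kPa at 100 kPa effective normal stress
```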

  7. The Emergence of a Learning Progression in Middle School Chemistry

    ERIC Educational Resources Information Center

    Johnson, Philip; Tymms, Peter

    2011-01-01

    Previously, a small scale, interview-based, 3-year longitudinal study (ages 11-14) in one school had suggested a learning progression related to the concept of a substance. This article presents the results of a large-scale, cross-sectional study which used Rasch modeling to test the hypothesis of the learning progression. Data were collected from…

  8. Ignition and flame-growth modeling on realistic building and landscape objects in changing environments

    Treesearch

    Mark A. Dietenberger

    2010-01-01

    Effective mitigation of external fires on structures can be achieved flexibly, economically, and aesthetically by (1) preventing large-area ignition on structures by avoiding close proximity of burning vegetation; and (2) stopping flame travel from firebrands landing on combustible building objects. Using bench-scale and mid-scale fire tests to obtain flammability...

  9. Ignition and flame travel on realistic building and landscape objects in changing environments

    Treesearch

    Mark A. Dietenberger

    2007-01-01

    Effective mitigation of external fires on structures can be achieved flexibly, economically, and aesthetically by (1) preventing large-area ignition on structures from close proximity of burning vegetations and (2) stopping flame travel from firebrands landing on combustible building objects. In using bench-scale and mid-scale fire tests to obtain fire growth...

  10. Upscaling of U(VI) Desorption and Transport from Decimeter-Scale Heterogeneity to Plume-Scale Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, Gary P.; Kohler, Matthias; Kannappan, Ramakrishnan

    2015-02-24

    Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results in batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.

  11. Forced Alignment for Understudied Language Varieties: Testing Prosodylab-Aligner with Tongan Data

    ERIC Educational Resources Information Center

    Johnson, Lisa M.; Di Paolo, Marianna; Bell, Adrian

    2018-01-01

    Automated alignment of transcriptions to audio files expedites the process of preparing data for acoustic analysis. Unfortunately, the benefits of auto-alignment have generally been available only to researchers studying majority languages, for which large corpora exist and for which acoustic models have been created by large-scale research…

  12. Use of Existing Data Bases in a Large Scale Correlational/Regression Study. for Period January 1977-January 1978.

    ERIC Educational Resources Information Center

    Greene, Jennifer C.; Kellogg, Theodore

    Statewide assessment data available from two school years, two grade levels, and five sources (achievement tests; student, principal, and teacher questionnaires; and principal interviews), were aggregated to more closely investigate the relationship between student/school characteristics and student achievement. To organize this large number of…

  13. Sparse Measurement Systems: Applications, Analysis, Algorithms and Design

    ERIC Educational Resources Information Center

    Narayanaswamy, Balakrishnan

    2011-01-01

    This thesis deals with "large-scale" detection problems that arise in many real world applications such as sensor networks, mapping with mobile robots and group testing for biological screening and drug discovery. These are problems where the values of a large number of inputs need to be inferred from noisy observations and where the…

  14. Early Gender Test Score Gaps across OECD Countries

    ERIC Educational Resources Information Center

    Bedard, Kelly; Cho, Insook

    2010-01-01

    The results reported in this paper contribute to the debate about gender skill gaps in at least three ways. First, we document the large differences in early gender gaps across developed countries using a large-scale, modern, representative data source. Second, we show that countries with pro-female sorting, countries that place girls in classes…

  15. Correlation between Academic and Skills-Based Tests in Computer Networks

    ERIC Educational Resources Information Center

    Buchanan, William

    2006-01-01

    Computing-related programmes and modules have many problems, especially related to large class sizes, large-scale plagiarism, module franchising, and an increased requirement from students for increased amounts of hands-on, practical work. This paper presents a practical computer networks module which uses a mixture of online examinations and a…

  16. A Navy Shore Activity Manpower Planning System for Civilians. Technical Report No. 24.

    ERIC Educational Resources Information Center

    Niehaus, R. J.; Sholtz, D.

    This report describes the U.S. Navy Shore Activity Manpower Planning System (SAMPS) advanced development research project. This effort is aimed at large-scale feasibility tests of manpower models for large Naval installations. These local planning systems are integrated with Navy-wide information systems on a data-communications network accessible…

  17. Development of a Large Scale, High Speed Wheel Test Facility

    NASA Technical Reports Server (NTRS)

    Kondoleon, Anthony; Seltzer, Donald; Thornton, Richard; Thompson, Marc

    1996-01-01

    Draper Laboratory, with its internal research and development budget, has for the past two years been funding a joint effort with the Massachusetts Institute of Technology (MIT) for the development of a large scale, high speed wheel test facility. This facility was developed to perform experiments and carry out evaluations on levitation and propulsion designs for MagLev systems currently under consideration. The facility was developed to rotate a large (2 meter) wheel which could operate with peripheral speeds of greater than 100 meters/second. The rim of the wheel was constructed of a non-magnetic, non-conductive composite material to avoid the generation of errors from spurious forces. A sensor package containing a multi-axis force and torque sensor, mounted to the base of the station, provides a signal of the lift and drag forces on the package being tested. Position tables mounted on the station allow for the introduction of errors in real time. A computer controlled data acquisition system was developed around a Macintosh IIfx to record the test data and control the speed of the wheel. This paper describes the development of this test facility. A detailed description of the major components is presented. Recently completed tests, carried out on a novel electrodynamic suspension (EDS) system developed by MIT as part of this joint effort, are described and presented. Adaptation of this facility for linear motor and other propulsion and levitation testing is described.

  18. Robust Detection of Examinees with Aberrant Answer Changes

    ERIC Educational Resources Information Center

    Belov, Dmitry I.

    2015-01-01

    The statistical analysis of answer changes (ACs) has uncovered multiple testing irregularities on large-scale assessments and is now routinely performed at testing organizations. However, AC data has an uncertainty caused by technological or human factors. Therefore, existing statistics (e.g., number of wrong-to-right ACs) used to detect examinees…

  19. The Thomas Self-Concept Values Test.

    ERIC Educational Resources Information Center

    Thomas, Walter L.

    A test was developed to assess personal self-concept values of preprimary and primary aged children. If large scale preschool programs are to be justified, effects in the areas of intellectual growth, achievement performance, and personal-social growth must be observable in children several years after preschool experience and must be measurable…

  20. Disaggregated Effects of Device on Score Comparability

    ERIC Educational Resources Information Center

    Davis, Laurie; Morrison, Kristin; Kong, Xiaojing; McBride, Yuanyuan

    2017-01-01

    The use of tablets for large-scale testing programs has transitioned from concept to reality for many state testing programs. This study extended previous research on score comparability between tablets and computers with high school students to compare score distributions across devices for reading, math, and science and to evaluate device…

  1. Limited Aspects of Reality: Frames of Reference in Language Assessment

    ERIC Educational Resources Information Center

    Fulcher, Glenn; Svalberg, Agneta

    2013-01-01

    Language testers operate within two frames of reference: norm-referenced (NRT) and criterion-referenced testing (CRT). The former underpins the world of large-scale standardized testing that prioritizes variability and comparison. The latter supports substantive score meaning in formative and domain specific assessment. Some claim that the…

  2. Bayesian Estimation of Multi-Unidimensional Graded Response IRT Models

    ERIC Educational Resources Information Center

    Kuo, Tzu-Chun

    2015-01-01

    Item response theory (IRT) has gained an increasing popularity in large-scale educational and psychological testing situations because of its theoretical advantages over classical test theory. Unidimensional graded response models (GRMs) are useful when polytomous response items are designed to measure a unified latent trait. They are limited in…

  3. 77 FR 71574 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-03

    ... Test. OMB Control Number: None Form Number(s): The automated survey instrument has no form number. Type... have been developed and are now slated for a large-scale field test to evaluate the questions and the... reference period and timing of data collection. Qualitative research has

  4. Test Review: ACCESS for ELLs[R]

    ERIC Educational Resources Information Center

    Fox, Janna; Fairbairn, Shelley

    2011-01-01

    This article reviews Assessing Comprehension and Communication in English State-to-State for English Language Learners ("ACCESS for ELLs"[R]), which is a large-scale, high-stakes, standards-based, and criterion-referenced English language proficiency test administered in the USA annually to more than 840,000 English Language Learners (ELLs), in…

  5. SCIMITAR: Scalable Stream-Processing for Sensor Information Brokering

    DTIC Science & Technology

    2013-11-01

    IaaS) cloud frameworks including Amazon Web Services and Eucalyptus. For load testing, we used The Grinder [9], a Java load testing framework that...internal Eucalyptus cluster which we could not scale as large as the Amazon environment due to a lack of computation resources. We recreated our

  6. An investigation of small scales of turbulence in a boundary layer at high Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Wallace, James M.; Ong, L.; Balint, J.-L.

    1993-01-01

    The assumption that turbulence at large wave-numbers is isotropic and has universal spectral characteristics which are independent of the flow geometry, at least for high Reynolds numbers, has been a cornerstone of closure theories as well as of the most promising recent development in the effort to predict turbulent flows, viz. large eddy simulations. This hypothesis was first advanced by Kolmogorov based on the supposition that turbulent kinetic energy cascades down the scales (up the wave-numbers) of turbulence and that, if the number of these cascade steps is sufficiently large (i.e. the wave-number range is large), then the effects of anisotropies at the large scales are lost in the energy transfer process. Experimental attempts were repeatedly made to verify this fundamental assumption. However, Van Atta has recently suggested that an examination of the scalar and velocity gradient fields is necessary to definitively verify this hypothesis or prove it to be unfounded. Of course, this must be carried out in a flow whose Reynolds number is high enough to provide the separation of scales needed to unambiguously allow for the possibility of local isotropy at large wave-numbers. An opportunity to use our 12-sensor hot-wire probe to address this issue directly was made available at the 80'x120' wind tunnel at the NASA Ames Research Center, which is normally used for full-scale aircraft tests. An initial report on this high Reynolds number experiment and progress toward its evaluation are presented.

  7. Improving Large-Scale Testing Capability by Modifying the 40- by 80-ft Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Mort, Kenneth W.; Soderman, Paul T.; Eckert, William T.

    1979-01-01

    Interagency studies conducted during the last several years have indicated the need to improve full-scale testing capabilities. The studies showed that the most effective trade between test capability and facility cost was provided by re-powering the existing Ames Research Center 40- by 80-ft Wind Tunnel to increase the maximum speed from about 100 m/s (200 knots) to about 150 m/s (300 knots) and by adding a new 24- by 37-m (80- by 120-ft) test section powered for about a 50-m/s (100-knot) maximum speed. This paper reviews the design of the facility, a few of its capabilities, and some of its unique features.

  8. Force test of a 0.88 percent scale 142-inch diameter solid rocket booster (MSFC model number 461) in the NASA/MSFC high Reynolds number wind tunnel (SA13F)

    NASA Technical Reports Server (NTRS)

    Johnson, J. D.; Winkler, G. W.

    1976-01-01

    The results are presented of a force test of a .88 percent scale model of the 142 inch solid rocket booster without protuberances, conducted in the MSFC high Reynolds number wind tunnel. The objective of this test was to obtain aerodynamic force data over a large range of Reynolds numbers. The test was conducted over a Mach number range from 0.4 to 3.5. Reynolds numbers based on model diameter (1.25 inches) ranged from .75 million to 13.5 million. The angle of attack range was from 35 to 145 degrees.

  9. Cost of Community Integrated Prevention Campaign for Malaria, HIV, and Diarrhea in Rural Kenya

    PubMed Central

    2011-01-01

    Background: Delivery of community-based prevention services for HIV, malaria, and diarrhea is a major priority and challenge in rural Africa. Integrated delivery campaigns may offer a mechanism to achieve high coverage and efficiency. Methods: We quantified the resources and costs to implement a large-scale integrated prevention campaign in Lurambi Division, Western Province, Kenya that reached 47,133 individuals (and 83% of eligible adults) in 7 days. The campaign provided HIV testing, condoms, and prevention education materials; a long-lasting insecticide-treated bed net; and a water filter. Data were obtained primarily from logistical and expenditure data maintained by implementing partners. We estimated the projected cost of a Scaled-Up Replication (SUR), assuming reliance on local managers, potential efficiencies of scale, and other adjustments. Results: The cost per person served was $41.66 for the initial campaign and was projected at $31.98 for the SUR. The SUR cost included 67% for commodities (mainly water filters and bed nets) and 20% for personnel. The SUR projected unit cost per person served, by disease, was $6.27 for malaria (nets and training), $15.80 for diarrhea (filters and training), and $9.91 for HIV (test kits, counseling, condoms, and CD4 testing at each site). Conclusions: A large-scale, rapidly implemented, integrated health campaign provided services to 80% of a rural Kenyan population with relatively low cost. Scaling up this design may provide similar services to larger populations at lower cost per person. PMID:22189090
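The per-disease unit costs quoted above are a simple additive decomposition of the projected SUR cost per person served; a quick consistency check of the figures, with no data beyond what the abstract states:

```python
# Per-disease SUR unit costs (USD per person served), from the abstract.
SUR_UNIT_COSTS = {"malaria": 6.27, "diarrhea": 15.80, "hiv": 9.91}

def projected_cost_per_person(costs=SUR_UNIT_COSTS):
    """Sum the per-disease unit costs to the overall projected unit cost."""
    return round(sum(costs.values()), 2)
```

The components indeed sum to the stated $31.98 per person for the scaled-up replication.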

  10. Galaxy clustering and the origin of large-scale flows

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, R.; Yahil, A.

    1989-01-01

    Peebles's 'cosmic virial theorem' is extended from its original range of validity at small separations, where hydrostatic equilibrium holds, to large separations, in which linear gravitational stability theory applies. The rms pairwise velocity difference at separation r is shown to depend on the spatial galaxy correlation function xi(x) only for x less than r. Gravitational instability theory can therefore be tested by comparing the two up to the maximum separation for which both can reliably be determined, and there is no dependence on the poorly known large-scale density and velocity fields. With the expected improvement in the data over the next few years, however, this method should yield a reliable determination of omega.

  11. Charged-Particle Transport in the Data-Driven, Non-Isotropic Turbulent Magnetic Field in the Solar Wind

    NASA Astrophysics Data System (ADS)

    Sun, P.; Jokipii, J. R.; Giacalone, J.

    2016-12-01

    Anisotropies in astrophysical turbulence have long been proposed and observed, and recent observations adopting multi-scale analysis techniques have provided a detailed description of the scale-dependent power spectrum of the magnetic field parallel and perpendicular to the scale-dependent magnetic field line in the solar wind. In previous work, we proposed a multi-scale method to synthesize a non-isotropic turbulent magnetic field with pre-determined power spectra of the fluctuating magnetic field as a function of scale. We present test-particle transport in the resulting field with a two-scale algorithm. We find that scale-dependent turbulence anisotropy affects charged-particle transport significantly differently than either isotropic or globally anisotropic turbulence does. It is important to apply this field-synthesis method to the solar wind magnetic field based on spacecraft data; however, this relies on how we extract the power spectra of the turbulent magnetic field across different scales. In this study, we propose a power-spectrum synthesis method based on Fourier analysis to extract the large- and small-scale power spectra from a single-spacecraft observation with a sufficiently long period and a high sampling frequency. We apply the method to solar wind measurements by the magnetometer on board the ACE spacecraft and regenerate the large-scale isotropic 2D spectrum and the small-scale anisotropic 2D spectrum. We run test-particle simulations in the magnetic field generated in this way to estimate the transport coefficients and to compare with the isotropic turbulence model.
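The basic ingredient of the Fourier-based spectrum extraction this abstract describes is a one-dimensional power spectrum estimated from a single time series. A minimal sketch with a synthetic signal standing in for ACE magnetometer data (sampling rate and signal content are assumptions, not mission values):

```python
# Sketch: power spectrum of one magnetic-field component from a single time
# series via the FFT. The synthetic series (a 0.5 Hz tone plus noise) is a
# stand-in for spacecraft data; fs and the tone frequency are assumptions.
import numpy as np

fs = 10.0                                  # sampling frequency, Hz (assumed)
t = np.arange(0, 200, 1 / fs)
rng = np.random.default_rng(0)
b = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(b)) ** 2 / t.size

peak = freqs[np.argmax(power[1:]) + 1]     # skip the DC bin
print(f"spectral peak at {peak:.2f} Hz")
```

A real pipeline would add windowing, segment averaging, and a Taylor-hypothesis conversion from frequency to wavenumber, none of which is shown here.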

  12. Satellite power system (SPS) concept definition study. Volume 3: Experimental verification definition

    NASA Technical Reports Server (NTRS)

    Hanley, G. M.

    1980-01-01

    An evolutionary Satellite Power Systems development plan was prepared. Planning analysis was directed toward the evolution of a scenario that met the stated objectives, was technically possible and economically attractive, and took into account constraining considerations, such as requirements for very large scale end-to-end demonstration in a compressed time frame, the relative cost/technical merits of ground testing versus space testing, and the need for large mass flow capability to low Earth orbit and geosynchronous orbit at reasonable cost per pound.

  13. Detecting cancer clusters in a regional population with local cluster tests and Bayesian smoothing methods: a simulation study

    PubMed Central

    2013-01-01

    Background There is a rising public and political demand for prospective cancer cluster monitoring. But there is little empirical evidence on the performance of established cluster detection tests under conditions of small and heterogeneous sample sizes and varying spatial scales, such as are the case for most existing population-based cancer registries. Therefore this simulation study aims to evaluate different cluster detection methods, implemented in the open source environment R, in their ability to identify clusters of lung cancer using real-life data from an epidemiological cancer registry in Germany. Methods Risk surfaces were constructed with two different spatial cluster types, representing a relative risk of RR = 2.0 or of RR = 4.0, in relation to the overall background incidence of lung cancer, separately for men and women. Lung cancer cases were sampled from this risk surface as geocodes using an inhomogeneous Poisson process. The realisations of the cancer cases were analysed within small spatial (census tracts, N = 1983) and within aggregated large spatial scales (communities, N = 78). Subsequently, they were submitted to the cluster detection methods. The test accuracy for cluster location was determined in terms of detection rates (DR), false-positive (FP) rates and positive predictive values. The Bayesian smoothing models were evaluated using ROC curves. Results With moderate risk increase (RR = 2.0), local cluster tests showed better DR (for both spatial aggregation scales > 0.90) and lower FP rates (both < 0.05) than the Bayesian smoothing methods. When the cluster RR was raised four-fold, the local cluster tests showed better DR with lower FPs only for the small spatial scale. At a large spatial scale, the Bayesian smoothing methods, especially those implementing a spatial neighbourhood, showed a substantially lower FP rate than the cluster tests. However, the risk increases at this scale were mostly diluted by data aggregation.
Conclusion High resolution spatial scales seem more appropriate as a basis for cancer cluster testing and monitoring than the commonly used aggregated scales. We suggest the development of a two-stage approach that combines methods with high detection rates as a first-line screening with methods of higher predictive ability at the second stage. PMID:24314148
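The sampling step this abstract describes, drawing case geocodes from a risk surface via an inhomogeneous Poisson process, is commonly done by thinning. A minimal sketch with a toy risk surface (the surface, rates, and cluster geometry are our assumptions, not the registry data):

```python
# Sketch: sampling case locations from a risk surface with an inhomogeneous
# Poisson process via Lewis-Shedler thinning. lam() is a toy stand-in for the
# registry-based risk surfaces described above; all rates are made up.
import numpy as np

def sample_inhomogeneous_poisson(lam, lam_max, area=(1.0, 1.0), rng=None):
    """Draw candidates from a homogeneous process at rate lam_max over the
    rectangle [0, area[0]] x [0, area[1]], then keep each candidate with
    probability lam(x, y) / lam_max."""
    rng = rng or np.random.default_rng()
    n = rng.poisson(lam_max * area[0] * area[1])
    xy = rng.uniform(0, area, size=(n, 2))
    keep = rng.uniform(size=n) < lam(xy[:, 0], xy[:, 1]) / lam_max
    return xy[keep]

# toy surface: background rate 50, doubled (RR = 2) inside a circular cluster
def lam(x, y):
    inside = (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.1 ** 2
    return np.where(inside, 100.0, 50.0)

cases = sample_inhomogeneous_poisson(lam, lam_max=100.0,
                                     rng=np.random.default_rng(1))
print(len(cases), "simulated cases")
```

Thinning is exact as long as lam_max bounds the surface everywhere; the accepted points are then one realisation of the process, ready to be aggregated to census tracts or communities.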

  14. On identifying relationships between the flood scaling exponent and basin attributes.

    PubMed

    Medhi, Hemanta; Tripathi, Shivam

    2015-07-01

    Floods are known to exhibit self-similarity and follow scaling laws that form the basis of regional flood frequency analysis. However, the relationship between basin attributes and the scaling behavior of floods is still not fully understood. Identifying these relationships is essential for drawing connections between hydrological processes in a basin and the flood response of the basin. Existing studies mostly rely on simulation models to draw these connections. This paper proposes a new methodology that draws connections between basin attributes and flood scaling exponents by using observed data. In the proposed methodology, the region-of-influence approach is used to delineate homogeneous regions for each gaging station. Ordinary least squares regression is then applied to estimate the flood scaling exponent for each homogeneous region, and finally stepwise regression is used to identify basin attributes that affect flood scaling exponents. The effectiveness of the proposed methodology is tested by applying it to data from river basins in the United States. The results suggest that the flood scaling exponent is small for regions having (i) large abstractions from precipitation in the form of large soil moisture storages and high evapotranspiration losses, and (ii) large fractions of overland flow compared to base flow, i.e., regions having fast-responding basins. Analysis of simple scaling and multiscaling of floods showed evidence of simple scaling for regions in which snowfall dominates the total precipitation.
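The OLS step this abstract describes fits a power-law scaling relation Q = c * A**theta by regressing log Q on log A; the slope is the flood scaling exponent. A minimal sketch on noise-free synthetic data (drainage areas, the exponent, and the coefficient are made-up values, not results from the paper):

```python
# Sketch: estimating a flood scaling exponent theta in Q = c * A**theta by
# ordinary least squares on log-transformed data. The areas and the "true"
# exponent are illustrative assumptions, not values from the study.
import numpy as np

area = np.array([12.0, 55.0, 130.0, 480.0, 1500.0, 5200.0])  # km^2, assumed
theta_true, c = 0.8, 3.5
flood_quantile = c * area ** theta_true    # noise-free synthetic peak flows

# OLS fit of log Q = log c + theta * log A; the slope is the exponent
theta_hat, log_c_hat = np.polyfit(np.log(area), np.log(flood_quantile), 1)
print(f"estimated scaling exponent: {theta_hat:.3f}")
```

With observed (noisy) peak-flow quantiles the same fit yields an estimate rather than an exact recovery, and the regression would be repeated once per homogeneous region.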

  15. V/STOL wind-tunnel testing

    NASA Technical Reports Server (NTRS)

    Koenig, D. G.

    1984-01-01

    Factors influencing effective program planning for V/STOL wind-tunnel testing are discussed. The planning sequence itself, which includes a short checklist of considerations that could enhance the value of the tests, is also described. Each of the considerations (choice of wind tunnel, type of model installation, model development, and test operations) is discussed, and examples of appropriate past and current V/STOL test programs are provided. A short survey of the moderate to large subsonic wind tunnels is followed by a review of several model installations, from two-dimensional to large-scale models of complete aircraft configurations. Model sizing, power simulation, and planning are treated, including three areas in test operations: data-acquisition systems, acoustic measurements in wind tunnels, and flow surveying.

  16. Steps Towards Understanding Large-scale Deformation of Gas Hydrate-bearing Sediments

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Deusner, C.; Haeckel, M.; Kossel, E.

    2016-12-01

    Marine sediments bearing gas hydrates are typically characterized by heterogeneity in the gas hydrate distribution and anisotropy in the sediment-gas hydrate fabric properties. Gas hydrates also contribute to the strength and stiffness of the marine sediment, and any disturbance in the thermodynamic stability of the gas hydrates is likely to affect the geomechanical stability of the sediment. Understanding the mechanisms and triggers of large-strain deformation and failure of marine gas hydrate-bearing sediments is an area of extensive research, particularly in the context of marine slope stability and industrial gas production. The ultimate objective is to predict severe deformation events such as regional-scale slope failure or excessive sand production by using numerical simulation tools. The development of such tools requires a careful analysis of the thermo-hydro-chemo-mechanical behavior of gas hydrate-bearing sediments at lab scale, and its stepwise integration into reservoir-scale simulators through the definition of effective variables, the use of suitable constitutive relations, and the application of scaling laws. One focus area of our research is to understand the bulk coupled behavior of marine gas hydrate systems, with contributions from micro-scale characteristics, transport-reaction dynamics, and structural heterogeneity, through experimental flow-through studies using high-pressure triaxial test systems and advanced tomographic tools (CT, ERT, MRI). We combine these studies to develop mathematical models and numerical simulation tools that can be used to predict the coupled hydro-geomechanical behavior of marine gas hydrate reservoirs in a large-strain framework. Here we will present some of our recent results from closely coordinated experimental and numerical simulation studies, with the objective of capturing the large-deformation behavior relevant to different gas production scenarios.
We will also report on a variety of mechanically relevant test scenarios focusing on effects of dynamic changes in gas hydrate saturation, highly uneven gas hydrate distributions, focused fluid migration and gas hydrate production through depressurization and CO2 injection.

  17. ''Football'' test coil: a simulated service test of internally-cooled, cabled superconductor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marston, P.G.; Iwasa, Y.; Thome, R.J.

    Internally-cooled, cabled superconductor (ICCS) appears from small-scale tests to be a viable alternative to pool-boiling-cooled superconductors for large superconducting magnets. Potential advantages include savings in helium inventory, smaller structure, and ease of fabrication. Questions remain, however, about the structural performance of these systems. The ''football'' test coil has been designed to simulate the actual field-current-stress-thermal operating conditions of a 25 kA ICCS in a commercial-scale MHD magnet. The test procedure will permit demonstration of the 20-year cyclic life of such a magnet in less than 20 days. This paper describes the design, construction, and test of the coil, which is wound of copper-stabilized niobium-titanium cable in steel conduit.

  18. Hybrid LES/RANS technique based on a one-equation near-wall model

    NASA Astrophysics Data System (ADS)

    Breuer, M.; Jaffrézic, B.; Arora, K.

    2008-05-01

    In order to reduce the high computational effort of wall-resolved large-eddy simulations (LES), the present paper suggests a hybrid LES/RANS approach which splits the simulation into a near-wall RANS part and an outer LES part. Generally, RANS is adequate for attached boundary layers, requiring reasonable CPU time and memory, where LES can also be applied but demands extremely large resources. Conversely, RANS often fails in flows with massive separation or large-scale vortical structures; here LES is without doubt the best choice. The basic concept of hybrid methods is to combine the advantages of both approaches, yielding a prediction method which, on the one hand, assures reliable results for complex turbulent flows, including large-scale flow phenomena and massive separation, but, on the other hand, consumes far fewer resources than LES, especially for the high Reynolds number flows encountered in technical applications. In the present study, a non-zonal hybrid technique is considered (according to the meaning the authors attach to the terms zonal and non-zonal), leading to an approach in which the suitable simulation technique is chosen more or less automatically. For this purpose the proposed hybrid approach relies on a unique modeling concept. In the LES mode, a subgrid-scale model based on a one-equation model for the subgrid-scale turbulent kinetic energy is applied, where the length scale is defined by the filter width. For the viscosity-affected near-wall RANS mode, the one-equation model proposed by Rodi et al. (J Fluids Eng 115:196-205, 1993) is used, which is based on the wall-normal velocity fluctuations as the velocity scale and algebraic relations for the length scales. Although the idea of combined LES/RANS methods is not new, a variety of open questions still have to be answered.
This includes, in particular, the demand for appropriate coupling techniques between LES and RANS, adaptive control mechanisms, and proper subgrid-scale and RANS models. Here, in addition to the study of the behavior of the suggested hybrid LES/RANS approach, special emphasis is put on the investigation of suitable interface criteria and the adjustment of the RANS model. To investigate these issues, two different test cases are considered. Besides the standard plane channel flow test case, the flow over a periodic arrangement of hills, which includes pressure-induced flow separation and subsequent reattachment, is studied in detail. In comparison with a wall-resolved LES prediction, encouraging results are achieved.

  19. Using HLM to Explore the Effects of Perceptions of Learning Environments and Assessments on Students' Test Performance

    ERIC Educational Resources Information Center

    Chu, Man-Wai; Babenko, Oksana; Cui, Ying; Leighton, Jacqueline P.

    2014-01-01

    The study examines the role that perceptions or impressions of learning environments and assessments play in students' performance on a large-scale standardized test. Hierarchical linear modeling (HLM) was used to test aspects of the Learning Errors and Formative Feedback model to determine how much variation in students' performance was explained…

  20. Using Raters from India to Score a Large-Scale Speaking Test

    ERIC Educational Resources Information Center

    Xi, Xiaoming; Mollaun, Pam

    2011-01-01

    We investigated the scoring of the Speaking section of the Test of English as a Foreign Language[TM] Internet-based (TOEFL iBT[R]) test by speakers of English and one or more Indian languages. We explored the extent to which raters from India, after being trained and certified, were able to score the TOEFL examinees with mixed first languages…
